Bidirectional Replication of MySQL Across Data Centers

The following steps demonstrate, using Docker containers, how to quickly set up bidirectional replication of MySQL across data centers.

Create two networks

docker network create dtle-net-dc1
docker network create dtle-net-dc2
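
To confirm that both networks were created:

docker network ls --filter name=dtle-net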

Create a MySQL instance in each network

docker run --name mysql-dc1 -e MYSQL_ROOT_PASSWORD=pass -p 33061:3306 --network=dtle-net-dc1 -d mysql:5.7 --gtid-mode=ON --enforce-gtid-consistency=1 --log-bin=bin --server-id=1

docker run --name mysql-dc2 -e MYSQL_ROOT_PASSWORD=pass -p 33062:3306 --network=dtle-net-dc2 -d mysql:5.7 --gtid-mode=ON --enforce-gtid-consistency=1 --log-bin=bin --server-id=2

Check that both MySQL instances started successfully:

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "select @@version\G"
< *************************** 1. row ***************************
@@version: 5.7.23-log

> mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "select @@version\G"
< *************************** 1. row ***************************
@@version: 5.7.23-log
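
Since the replication below relies on GTIDs, it is also worth confirming that the GTID-related startup options took effect (both instances should report gtid_mode = ON and their respective server_id):

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "select @@gtid_mode, @@server_id"
> mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "select @@gtid_mode, @@server_id"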

Create dtle in each network

docker run --name dtle-consul -p 8500:8500 --network=dtle-net-dc1 -d consul:latest
docker run --name dtle-dc1 -p 4646:4646 --network=dtle-net-dc1 -d actiontech/dtle

# dtle-dc2 will work as a client only. No need to start consul-dc2.
docker run --name dtle-dc2 -p 5646:4646 --network=dtle-net-dc2 -d actiontech/dtle
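
All three containers (dtle-consul, dtle-dc1, dtle-dc2) should now be running:

docker ps --filter name=dtle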

Connect the two dtle nodes over the public network

docker network create dtle-net-public
docker network connect dtle-net-public dtle-dc1
docker network connect dtle-net-public dtle-consul
docker network connect dtle-net-public dtle-dc2
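
The nomad.hcl changes below need each container's IP address on dtle-net-public. The addresses used in this demo (172.22.0.2 for dtle-dc1 and 172.22.0.3 for dtle-dc2) may differ in your environment; one way to look them up:

docker inspect -f '{{ (index .NetworkSettings.Networks "dtle-net-public").IPAddress }}' dtle-dc1
docker inspect -f '{{ (index .NetworkSettings.Networks "dtle-net-public").IPAddress }}' dtle-dc2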

Modify the dtle configuration

Modify the configuration of container dtle-dc1 and restart

Edit the configuration inside the dtle-dc1 container, then restart it:

docker exec -u root -it dtle-dc1 vi /dtle/etc/dtle/nomad.hcl
...
docker exec -u root -it dtle-dc1 rm -rf /dtle/var/lib/nomad
docker restart dtle-dc1

The changes to /dtle/etc/dtle/nomad.hcl are as follows:

name = "nomad1" # rename for each node

# ... (unchanged items omitted)

bind_addr = "172.22.0.2"
advertise {
  http = "172.22.0.2"
  rpc  = "172.22.0.2"
  serf = "172.22.0.2"
}

plugin "dtle" {
  config {
    nats_bind = "172.22.0.2:8193"
    nats_advertise = "172.22.0.2:8193"
    nomad_addr = "172.22.0.2:4646"
    # ...
  }
}

Notes:

  • Since the dtle-dc1 container is attached to two networks (the internal network dtle-net-dc1, used to talk to MySQL, and the public network dtle-net-public, used to talk to dtle-dc2), bind_addr and the advertise.* addresses must be set to this node's address on dtle-net-public, which is 172.22.0.2 here (use the docker inspect commands above if yours differs).

Modify the configuration of container dtle-dc2 and restart

Edit the configuration inside the dtle-dc2 container, then restart it:

docker exec -u root -it dtle-dc2 vi /dtle/etc/dtle/nomad.hcl
...
docker exec -u root -it dtle-dc2 rm -rf /dtle/var/lib/nomad
docker restart dtle-dc2

The changes to /dtle/etc/dtle/nomad.hcl are as follows:

name = "nomad2" # rename for each node

# ... (unchanged items omitted)

bind_addr = "172.22.0.3"
advertise {
  http = "172.22.0.3"
  rpc  = "172.22.0.3"
  serf = "172.22.0.3"
}

server {
  # Important!
  # Only dtle-dc1 acts as the server; dtle-dc2 is a client only.
  enabled          = false
}

plugin "dtle" {
  config {
    nats_bind = "172.22.0.3:8193"
    nats_advertise = "172.22.0.3:8193"
    nomad_addr = "172.22.0.3:4646"
    # ...
  }
}

Notes:

  • Since the dtle-dc2 container is attached to two networks (the internal network dtle-net-dc2, used to talk to MySQL, and the public network dtle-net-public, used to talk to dtle-dc1), bind_addr and the advertise.* addresses must be set to this node's address on dtle-net-public, which is 172.22.0.3 here.

Check that everything is working

> curl -XGET "127.0.0.1:4646/v1/nodes" -s | jq

Alternatively, open the web UI to confirm that we now have a Nomad deployment with 1 server and 2 clients.
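
For example, to list just each node's name and status (fields as returned by the Nomad HTTP API; both nodes should be "ready"):

> curl -XGET "127.0.0.1:4646/v1/nodes" -s | jq '.[] | {Name, Status}'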

Configure replication from dc1 to dc2

Get the current GTID set of mysql-dc1:

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "show master status\G" | grep "Executed_Gtid_Set"
< Executed_Gtid_Set: 41f102d4-d29f-11e8-8de7-0242ac130002:1-5
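
Since this GTID set must be pasted into the job file below, one convenient way is to capture it in a shell variable (-NB suppresses the column header; tr removes the line breaks MySQL inserts into long GTID sets):

GTID=$(mysql -h 127.0.0.1 -P 33061 -uroot -ppass -NB -e "select @@global.gtid_executed" | tr -d '\n')
echo "$GTID"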

Prepare a file job-dc1-dc2.json with the following content:

{
  "Job": {
    "ID":"dtle-demo-dc1-2-dc2",
    "Datacenters": ["dc1"],
    "TaskGroups": [{
        "Name": "src",
        "Tasks": [{
          "Name": "src",
          "Driver": "dtle",
          "Constraints": [{
            "LTarget": "${node.unique.name}",
            "RTarget": "nomad1",
            "Operand": "="
          }],
          "Config": {
            "Gtid":"41f102d4-d29f-11e8-8de7-0242ac130002:1-5",
            "ReplicateDoDb": [{
              "TableSchema": "demo",
              "Tables": [{
                "TableName": "demo_tbl"
              }]
            }],
            "SrcConnectionConfig": {
              "Host":"mysql-dc1",
              "Port": 3306,
              "User": "root",
              "Password": "pass"
            },
            "DestConnectionConfig": {
              "Host":"mysql-dc2",
              "Port": 3306,
              "User": "root",
              "Password": "pass"
            }
          }
        }]
      }, {
        "Name": "dest",
        "Tasks": [{
          "Name": "dest",
          "Driver": "dtle",
          "Constraints": [{
            "LTarget": "${node.unique.name}",
            "RTarget": "nomad2",
            "Operand": "="
          }],
          "Config": {
            "DestType": "mysql"
          }
        }]
    }]
  }
}

This job defines:

  • the connection settings for the source and destination
  • the table to replicate: demo.demo_tbl
  • the GTID position, which makes this an incremental replication (bidirectional replication only supports incremental replication)
  • the source task (src) is constrained to run on the dtle node in dc1 (via Constraints)
  • the destination task (dest) is constrained to run on the dtle node in dc2 (via Constraints)

Create the dc1-to-dc2 replication job

> curl -XPOST "http://127.0.0.1:4646/v1/jobs" -d @job-dc1-dc2.json -s | jq

Check the job status

> curl -XGET "127.0.0.1:4646/v1/job/dtle-demo-dc1-2-dc2" -s | jq '.Status'
< "running"

Configure replication from dc2 to dc1

Get the current GTID set of mysql-dc2:

> mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "show master status\G" 
< *************************** 1. row ***************************
             File: bin.000003
         Position: 537
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 41f102d4-d29f-11e8-8de7-0242ac130002:6-7,
42158e2f-d29f-11e8-b322-0242ac150002:1-5
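
Note that with two server UUIDs involved, MySQL prints Executed_Gtid_Set across multiple lines; strip the line breaks before pasting the value into the job file (same technique as before):

> mysql -h 127.0.0.1 -P 33062 -uroot -ppass -NB -e "select @@global.gtid_executed" | tr -d '\n'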

Prepare a file job-dc2-dc1.json with the following content:

{
  "Job": {
    "ID":"dtle-demo-dc2-2-dc1",
    "Datacenters": ["dc1"],
    "TaskGroups": [{
        "Name": "src",
        "Tasks": [{
          "Name": "src",
          "Driver": "dtle",
          "Constraints": [{
            "LTarget": "${node.unique.name}",
            "RTarget": "nomad2",
            "Operand": "="
          }],
          "Config": {
            "Gtid":"41f102d4-d29f-11e8-8de7-0242ac130002:6-7,42158e2f-d29f-11e8-b322-0242ac150002:1-5",
            "ReplicateDoDb": [{
              "TableSchema": "demo",
              "Tables": [{
                "TableName": "demo_tbl"
              }]
            }],
            "SrcConnectionConfig": {
              "Host":"mysql-dc2",
              "Port": 3306,
              "User": "root",
              "Password": "pass"
            },
            "DestConnectionConfig": {
              "Host":"mysql-dc1",
              "Port": 3306,
              "User": "root",
              "Password": "pass"
            }
          }
        }]
      }, {
        "Name": "dest",
        "Tasks": [{
          "Name": "dest",
          "Driver": "dtle",
          "Constraints": [{
            "LTarget": "${node.unique.name}",
            "RTarget": "nomad1",
            "Operand": "="
          }],
          "Config": {
            "DestType": "mysql"
          }
        }]
    }]
  }
}

Compared with the dc1-to-dc2 replication job, the differences are:

  • the connection settings for the source and destination
  • the GTID position
  • the source task (src) runs on the dtle node in dc2
  • the destination task (dest) runs on the dtle node in dc1

Create the dc2-to-dc1 replication job

> curl -XPOST "http://127.0.0.1:4646/v1/jobs" -d @job-dc2-dc1.json -s | jq

Check the job status

> curl -XGET "127.0.0.1:4646/v1/job/dtle-demo-dc2-2-dc1" -s | jq '.Status'
< "running"

Testing

At this point you can run any DDL/DML operations on the table demo.demo_tbl on either side and check that the data on the other side stays consistent; see the sketch below.
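
A minimal round-trip check (this assumes demo.demo_tbl already exists on both sides, e.g. with an integer primary key id and a string column; the replication is asynchronous, so allow a moment before reading):

# write on dc1, read on dc2
mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "INSERT INTO demo.demo_tbl VALUES (1, 'from-dc1')"
mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "SELECT * FROM demo.demo_tbl"

# write on dc2, read on dc1
mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "INSERT INTO demo.demo_tbl VALUES (2, 'from-dc2')"
mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "SELECT * FROM demo.demo_tbl"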

Data conflicts

dtle does not detect data conflicts. If the replay fails (for example, a data conflict causes an UPDATE to target a row that does not exist), the job reports an error.

For DML, INSERT statements are replayed as REPLACE, so when an INSERT conflicts the effect is last-write-wins, as sketched below.
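
A conceptual sketch of that effect, using the demo table from above (this shows the equivalent statement, not dtle's actual internal code):

# the source side executed:
#   INSERT INTO demo.demo_tbl VALUES (3, 'new')
# the destination effectively applies it as a REPLACE, silently
# overwriting any existing row with the same primary key:
mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "REPLACE INTO demo.demo_tbl VALUES (3, 'new')"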

It is recommended that the application layer ensure that data does not conflict.
