A ZooKeeper cluster usually consists of three or more ZooKeeper servers. Each machine in the cluster maintains the current server state in memory, and all machines keep network connections to one another. As long as more than half of the machines in the cluster are working normally, the whole cluster can continue to serve clients.
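The "more than half" rule can be made concrete with a small sketch (illustrative arithmetic, not ZooKeeper's actual implementation): for n voting servers the quorum size is n/2 + 1, so a 3-node ensemble tolerates 1 failure while a 5-node ensemble tolerates 2.

```java
public class QuorumMath {
    // Minimum number of voting servers that must be alive for the
    // ensemble to stay available: strictly more than half of them.
    static int quorum(int votingServers) {
        return votingServers / 2 + 1;
    }

    // How many server failures the ensemble can survive.
    static int tolerated(int votingServers) {
        return votingServers - quorum(votingServers);
    }

    public static void main(String[] args) {
        for (int n : new int[]{3, 4, 5}) {
            System.out.println(n + " servers -> quorum " + quorum(n)
                    + ", tolerates " + tolerated(n) + " failure(s)");
            // 3 servers -> quorum 2, tolerates 1 failure(s)
            // 4 servers -> quorum 3, tolerates 1 failure(s)
            // 5 servers -> quorum 3, tolerates 2 failure(s)
        }
    }
}
```

This is also why a 4-node ensemble is no more fault-tolerant than a 3-node one, and why the fourth node below is added as an Observer rather than a voter.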

ZooKeeper cluster roles:

  • Leader: handles all transaction (write) requests and also serves read requests; a cluster has exactly one Leader.
  • Follower: serves read requests and forwards transaction requests to the Leader; participates in transaction commit voting and exchanges data with the Leader; votes in Leader elections.
  • Observer: serves read requests; does not participate in elections; increases the cluster's read capacity without affecting write performance.
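The division of work between the three roles can be sketched as a simple routing rule (a hypothetical illustration, not a real ZooKeeper API): every role answers reads from its local state, while writes are processed only by the Leader, with Followers and Observers forwarding them.

```java
public class RoleRoutingDemo {
    enum Role { LEADER, FOLLOWER, OBSERVER }
    enum Request { READ, WRITE }

    // Illustrative routing rule: reads are served locally by any role;
    // writes are forwarded to the Leader unless we already are the Leader.
    static String route(Role role, Request req) {
        if (req == Request.READ) {
            return "serve locally";
        }
        return role == Role.LEADER ? "process transaction" : "forward to leader";
    }

    public static void main(String[] args) {
        System.out.println(route(Role.OBSERVER, Request.READ));   // serve locally
        System.out.println(route(Role.FOLLOWER, Request.WRITE));  // forward to leader
        System.out.println(route(Role.LEADER, Request.WRITE));    // process transaction
    }
}
```

Because Observers never vote, adding them scales the "serve locally" read path without adding participants to the commit-voting path.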

Building a ZooKeeper cluster:

Use docker-compose to configure four services, zoo1 through zoo4, with zoo4 set up as an observer; create the corresponding data and datalog directories for each.

version: '3.7'

networks:
  docker_net:
    external: true

services:
  zoo1:
    image: zookeeper:3.6.3
    restart: unless-stopped
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181 server.4=zoo4:2888:3888:observer;2181
    volumes:
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog
    networks:
      - docker_net

  zoo2:
    image: zookeeper:3.6.3
    restart: unless-stopped
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181 server.4=zoo4:2888:3888:observer;2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog
    networks:
      - docker_net

  zoo3:
    image: zookeeper:3.6.3
    restart: unless-stopped
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2184:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181 server.4=zoo4:2888:3888:observer;2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
    networks:
      - docker_net

  zoo4:
    image: zookeeper:3.6.3
    restart: unless-stopped
    hostname: zoo4
    container_name: zoo4
    ports:
      - 2185:2181
    environment:
      ZOO_MY_ID: 4
      PEER_TYPE: observer
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181 server.4=0.0.0.0:2888:3888:observer;2181
    volumes:
      - ./zoo4/data:/data
      - ./zoo4/datalog:/datalog
    networks:
      - docker_net
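Each ZOO_SERVERS entry follows the pattern server.<id>=<host>:<peerPort>:<electionPort>[:<role>];<clientPort>: port 2888 is used for follower-to-leader synchronization, 3888 for leader election, the optional role marks an observer, and the port after the semicolon is the client port. A small parser sketch (an illustration, not code from the ZooKeeper distribution) makes the fields explicit:

```java
public class ServerEntryParser {
    // Parses one ZOO_SERVERS entry of the form
    //   server.<id>=<host>:<peerPort>:<electionPort>[:<role>];<clientPort>
    // e.g. "server.4=zoo4:2888:3888:observer;2181".
    static String describe(String entry) {
        String[] kv = entry.split("=", 2);
        String id = kv[0].substring("server.".length());
        String[] addrAndClient = kv[1].split(";", 2);
        String[] parts = addrAndClient[0].split(":");
        // When no role is given, the server is a voting participant.
        String role = parts.length > 3 ? parts[3] : "participant";
        return "id=" + id + " host=" + parts[0] + " peerPort=" + parts[1]
                + " electionPort=" + parts[2] + " role=" + role
                + " clientPort=" + addrAndClient[1];
    }

    public static void main(String[] args) {
        System.out.println(describe("server.4=zoo4:2888:3888:observer;2181"));
        // id=4 host=zoo4 peerPort=2888 electionPort=3888 role=observer clientPort=2181
        System.out.println(describe("server.1=zoo1:2888:3888;2181"));
        // id=1 host=zoo1 peerPort=2888 electionPort=3888 role=participant clientPort=2181
    }
}
```

Note that each service lists its own address as 0.0.0.0 so it binds on all interfaces, while the other members are referenced by their container hostnames.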

Create the network:

docker network create docker_net

Start the cluster:

docker-compose -f docker-compose-zookeeper-cluster.yml up -d

Use docker exec -it zoo1 /bin/sh to open a shell inside each container, then run zkServer.sh status to check each instance's role:

$ docker exec -it zoo1 /bin/sh
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

$ docker exec -it zoo2 /bin/sh
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

$ docker exec -it zoo3 /bin/sh
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

$ docker exec -it zoo4 /bin/sh
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: observer

zoo1 and zoo2 are Followers, zoo3 is the Leader, and zoo4 is an Observer. (Which node becomes Leader depends on the election, so your result may differ.)

Use Curator to connect to the cluster and create a znode:

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorDemo1 {
    public static void main(String[] args) throws Exception {
        // The mapped client ports of zoo1~zoo3; adjust the host to your environment.
        String zookeeperConnectionString = "192.168.3.6:2182,192.168.3.6:2183,192.168.3.6:2184";
        // Exponential backoff: 1000 ms base sleep, at most 3 retries.
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        CuratorFramework curatorFramework = CuratorFrameworkFactory
                .newClient(zookeeperConnectionString, retryPolicy);
        curatorFramework.start();

        // Create a znode with a byte[] payload.
        curatorFramework.create().forPath("/curator-node-01", "hello-world".getBytes());
        // Block so the process stays alive while you inspect the node.
        System.in.read();
    }
}
Last modified: November 6, 2022