
Getting Started with ZooKeeper (Part 1): Installation and Configuration

1. Brief introduction

  ZooKeeper is a coordination service for distributed applications and a key component of Hadoop and HBase. It exposes a tree-structured directory service (znodes) and supports change notification via watches. It is also commonly used as the registry center for Dubbo services.

2. Installation

  2.1 Download and install

wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -zxvf zookeeper-3.4.6.tar.gz
cd zookeeper-3.4.6
cp conf/zoo_sample.cfg conf/zoo.cfg

  2.2 Configuration

    2.2.1 Standalone mode

    (1) Edit zoo.cfg. If you have no special requirements the defaults are mostly fine; the main settings to change are dataDir and clientPort:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/soft/zookeeper-3.4.6/data
clientPort=2181
#tickTime: the basic time unit, in milliseconds, used for heartbeats between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
#initLimit: the maximum number of tickTime intervals a follower (F) may take to make its initial connection to the leader (L).
#syncLimit: the maximum number of tickTime intervals allowed between a request and its response when a follower syncs with the leader.
#dataDir: the directory where ZooKeeper stores its data (replace the path above with a real directory on your machine); by default the transaction logs are written here as well.
#clientPort: the port ZooKeeper listens on for client connection requests.
#dataLogDir: the directory for transaction log files, if you want them separated from dataDir.
#Server entries: cluster membership (server id, server address, LF communication port, election port), written as server.N=YYY:A:B
#where N is the server id, YYY is the server's IP address, A is the port the server uses to exchange information with the leader, and B is the election port, used for server-to-server communication when a new leader must be elected (when the leader dies, the remaining servers talk to each other to elect a new one). Normally every server in the cluster uses the same A port and the same B port; in a pseudo-cluster, where all the IP addresses are identical, each server must instead use distinct A and B ports.

    (2) Start it: bin/zkServer.sh start

    (3) Check whether it started successfully: bin/zkServer.sh status

    (4) View the log: vi zookeeper.out
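    Once the server reports STARTED, you can also sanity-check it with the bundled CLI, bin/zkCli.sh. A short session sketch, where /demo is just an example znode (stat output and connection banner omitted):

```
$ bin/zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /demo "hello"
Created /demo
[zk: 127.0.0.1:2181(CONNECTED) 1] ls /
[demo, zookeeper]
```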

 

    2.2.2 Cluster mode (multiple nodes on a single IP)

    (1) Unpack three copies of zookeeper-3.4.6.tar.gz into sibling directories, e.g. zookeeper_node1, zookeeper_node2 and zookeeper_node3.

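    Making the copies in step (1) can be scripted. A sketch, assuming the tarball and the /app/soft layout used in this article (make_nodes is a hypothetical helper name):

```shell
# Hypothetical helper: unpack one tarball into N sibling node directories.
# Adjust the base path and tarball name for your own setup.
make_nodes() {
  base=$1; tarball=$2; n=$3
  for i in $(seq 1 "$n"); do
    # extract a fresh copy, then rename it to zookeeper_node$i
    tar -zxf "$base/$tarball" -C "$base"
    mv "$base/zookeeper-3.4.6" "$base/zookeeper_node$i"
  done
}

# Example: make_nodes /app/soft zookeeper-3.4.6.tar.gz 3
```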
    (2) Go into zookeeper_node1/conf and edit zoo.cfg as follows (use a real directory for dataDir):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/soft/zookeeper_node1/data
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

    (3) Then create the myid file under the dataDir configured above:

mkdir data
vi myid

    myid holds the server's own id, matching the number after "server." in zoo.cfg above: the first node's file contains 1, the second 2, the third 3. For node 1 the content is:

1
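    Creating the data directories and myid files for all three nodes can be done in one loop. A sketch, assuming the /app/soft/zookeeper_nodeN layout from this article (write_myid is a hypothetical helper name):

```shell
# Hypothetical helper: create dataDir and write each node's id into
# dataDir/myid for nodes 1..N under a common base directory.
write_myid() {
  base=$1; n=$2
  for i in $(seq 1 "$n"); do
    mkdir -p "$base/zookeeper_node$i/data"
    echo "$i" > "$base/zookeeper_node$i/data/myid"
  done
}

# Example: write_myid /app/soft 3
```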

    (4) Adjust nodes 2 and 3 in the same way, changing the dataDir and the clientPort (again, use real directories for dataDir):

Node 2:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/soft/zookeeper_node2/data
clientPort=2182
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

Node 3:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/soft/zookeeper_node3/data
clientPort=2183
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

    (5) Edit the myid files of nodes 2 and 3 to contain 2 and 3 respectively.

    (6) Start the services:

[root@localhost soft]# zookeeper_node1/bin/zkServer.sh start
JMX enabled by default
Using config: /app/soft/zookeeper_node1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost soft]# vi zookeeper.out 

2015-06-25 05:43:13,252 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /app/soft/zookeeper_node1/bin/../conf/zoo.cfg
2015-06-25 05:43:13,257 [myid:] - INFO  [main:QuorumPeerConfig@340] - Defaulting to majority quorums
2015-06-25 05:43:13,260 [myid:1] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2015-06-25 05:43:13,260 [myid:1] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2015-06-25 05:43:13,262 [myid:1] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2015-06-25 05:43:13,273 [myid:1] - INFO  [main:QuorumPeerMain@127] - Starting quorum peer
2015-06-25 05:43:13,285 [myid:1] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2015-06-25 05:43:13,312 [myid:1] - INFO  [main:QuorumPeer@959] - tickTime set to 2000
2015-06-25 05:43:13,312 [myid:1] - INFO  [main:QuorumPeer@979] - minSessionTimeout set to -1
2015-06-25 05:43:13,312 [myid:1] - INFO  [main:QuorumPeer@990] - maxSessionTimeout set to -1
2015-06-25 05:43:13,315 [myid:1] - INFO  [main:QuorumPeer@1005] - initLimit set to 10
2015-06-25 05:43:13,359 [myid:1] - INFO  [Thread-1:QuorumCnxManager$Listener@504] - My election bind port: /127.0.0.1:3888
2015-06-25 05:43:13,371 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2015-06-25 05:43:13,374 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  1, proposed zxid=0x2
2015-06-25 05:43:13,376 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x2 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2015-06-25 05:43:13,379 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /127.0.0.1:3889
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
        at java.lang.Thread.run(Thread.java:745)
2015-06-25 05:43:13,385 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 3 at election address /127.0.0.1:3890
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)

    The errors above appear because nodes 2 and 3 have not been started yet. Start nodes 2 and 3 the same way as node 1, then check the log again; the cluster is now healthy.

    To view the log: vi zookeeper.out

    (7) Check each node's status:

[root@localhost soft]# zookeeper_node1/bin/zkServer.sh status
JMX enabled by default
Using config: /app/soft/zookeeper_node1/bin/../conf/zoo.cfg
Mode: follower
[root@localhost soft]# zookeeper_node2/bin/zkServer.sh status
JMX enabled by default
Using config: /app/soft/zookeeper_node2/bin/../conf/zoo.cfg
Mode: leader
[root@localhost soft]# zookeeper_node3/bin/zkServer.sh status
JMX enabled by default
Using config: /app/soft/zookeeper_node3/bin/../conf/zoo.cfg
Mode: follower

      As you can see, node 1 is a follower, node 2 is the leader and node 3 is a follower.
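      If you want to check all three roles from a script rather than by eye, the Mode line can be extracted from the zkServer.sh status output. A small sketch (get_mode is a hypothetical helper name; the node paths in the example are the ones used above):

```shell
# Hypothetical helper: read zkServer.sh status output on stdin and
# print just the role (leader/follower/standalone).
get_mode() {
  grep '^Mode:' | sed 's/^Mode: *//'
}

# Example:
# for i in 1 2 3; do
#   /app/soft/zookeeper_node$i/bin/zkServer.sh status 2>/dev/null | get_mode
# done
```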

 

Pitfalls in ZooKeeper deployment

Pitfall 1

Error contacting service. It is probably not running

在配置完zookeeper集群后,三個節點,分別啟動三個節點如下:

[root@master bin]# zkServer.sh start

JMX enabled by default

Using config: /usr/local/zk/bin/../conf/zoo.cfg

Starting zookeeper … STARTED

When checking the zookeeper status, one node reports:

[root@master bin]# zkServer.sh status

JMX enabled by default

Using config: /usr/local/zk/bin/../conf/zoo.cfg

Error contacting service. It is probably not running.

while the other two nodes report a normal status.

Analysis and fix:

How to diagnose:

First stop the running zk:

zkServer.sh stop

Then start it in the foreground, which prints the startup log to the console:

zkServer.sh start-foreground

Original article: http://www.paymoon.com:8001/index.php/2015/06/04/zookeeper-building/

Some common causes:

Cause 1

The log output directory specified in zoo.cfg had not been created, so the server could not start. Create the directory named in the config, for example:

mkdir -p /tmp/zookeeper/log
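More generally, you can create every directory the config references before starting the server. A sketch (ensure_cfg_dirs is a hypothetical helper name; the config path in the example call is just an example, and dataLogDir may be absent):

```shell
# Hypothetical helper: make sure the directories named by dataDir and
# dataLogDir in a zoo.cfg file exist, creating them if needed.
ensure_cfg_dirs() {
  cfg=$1
  for key in dataDir dataLogDir; do
    # pull "key=value" out of the config; missing keys yield an empty dir
    dir=$(grep "^${key}=" "$cfg" 2>/dev/null | cut -d= -f2)
    if [ -n "$dir" ]; then
      mkdir -p "$dir"
    fi
  done
}

# Example: ensure_cfg_dirs /usr/local/zk/conf/zoo.cfg
```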

Cause 2

最后檢查配置zoo.cfg配置發現是該節點的主機名寫錯了;先停止三個節點zookeeper服務,逐一的修改節點上zoo.cfg配置文件,在逐一的啟動 ,結果顯示正常;

PS: when a zk-style installation fails, always post the error reported by status; the other output rarely turns up answers when searching.
