
Notes on 塗抹MySQL: Building a MySQL High-Availability Architecture


MySQL's high-availability architecture
<>Toward a more robust service stack
Scalability: scale out (add nodes) and scale up (upgrade the hardware of existing nodes).
High availability
<>Slave + LVS + Keepalived for high availability: deploy a load balancer in front of the slave nodes.
<>Installing and configuring LVS, which acts as the load balancer. We install LVS on the server 192.168.1.9, hostname linux04.
1. Run modprobe -l | grep ipvs to check whether the running kernel ships the ipvs module.
2. Run lsmod | grep ip_vs to check whether the ip_vs module is loaded; if not, modprobe ip_vs loads it into the kernel:
[root@linux04 ipvsadm-1.26]# lsmod |grep ip_vs
ip_vs 115643 0
libcrc32c 1246 1 ip_vs
ipv6 321422 36 ip_vs,ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
3. Create a symlink to the kernel source:
ln -s /usr/src/kernels/2.6.32-573.3.1.el6.x86_64/ /usr/src/linux
4. Download the ipvsadm management tool, used for routine administration: http://www.linux-vs.org/software/index.html
wget http://www.linux-vs.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar zxvf ipvsadm-1.26.tar.gz
[root@linux04 /]# chmod -R 775 ipvsadm-1.26/
[root@linux04 /]# cd ipvsadm-1.26/
5. Compile and install:
[root@linux04 ipvsadm-1.26]# make
make -C libipvs
make[1]: Entering directory `/soft/ipvsadm-1.26/libipvs'
gcc -Wall -Wunused -Wstrict-prototypes -g -fPIC -DLIBIPVS_USE_NL -DHAVE_NET_IP_VS_H -c -o libipvs.o libipvs.c
gcc -Wall -Wunused -Wstrict-prototypes -g -fPIC -DLIBIPVS_USE_NL -DHAVE_NET_IP_VS_H -c -o ip_vs_nl_policy.o ip_vs_nl_policy.c
ar rv libipvs.a libipvs.o ip_vs_nl_policy.o
ar: creating libipvs.a
a - libipvs.o
a - ip_vs_nl_policy.o
gcc -shared -Wl,-soname,libipvs.so -o libipvs.so libipvs.o ip_vs_nl_policy.o
make[1]: Leaving directory `/soft/ipvsadm-1.26/libipvs'
gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION=\"1.26\" -DSCHEDULERS=\""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"\" -DPE_LIST=\""sip"\" -DHAVE_NET_IP_VS_H -c -o ipvsadm.o ipvsadm.c
ipvsadm.c: In function 'print_largenum':
ipvsadm.c:1383: warning: field width should have type 'int', but argument 2 has type 'size_t'
gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION=\"1.26\" -DSCHEDULERS=\""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"\" -DPE_LIST=\""sip"\" -DHAVE_NET_IP_VS_H -c -o config_stream.o config_stream.c
gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION=\"1.26\" -DSCHEDULERS=\""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"\" -DPE_LIST=\""sip"\" -DHAVE_NET_IP_VS_H -c -o dynamic_array.o dynamic_array.c
gcc -Wall -Wunused -Wstrict-prototypes -g -o ipvsadm ipvsadm.o config_stream.o dynamic_array.o libipvs/libipvs.a -lnl
ipvsadm.o: In function `parse_options':
/soft/ipvsadm-1.26/ipvsadm.c:432: undefined reference to `poptGetContext'
/soft/ipvsadm-1.26/ipvsadm.c:435: undefined reference to `poptGetNextOpt'
/soft/ipvsadm-1.26/ipvsadm.c:660: undefined reference to `poptBadOption'
/soft/ipvsadm-1.26/ipvsadm.c:502: undefined reference to `poptGetNextOpt'
/soft/ipvsadm-1.26/ipvsadm.c:667: undefined reference to `poptStrerror'
/soft/ipvsadm-1.26/ipvsadm.c:667: undefined reference to `poptBadOption'
/soft/ipvsadm-1.26/ipvsadm.c:670: undefined reference to `poptFreeContext'
/soft/ipvsadm-1.26/ipvsadm.c:677: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:678: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:679: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:690: undefined reference to `poptGetArg'
/soft/ipvsadm-1.26/ipvsadm.c:693: undefined reference to `poptFreeContext'
collect2: ld returned 1 exit status
make: *** [ipvsadm] Error 1
The build fails because popt-static is missing; download popt-static-1.13-7.el6.x86_64.rpm and install it with rpm.
Then re-extract ipvsadm-1.26.tar.gz and rebuild; this time the compilation succeeds.
[root@linux04 ipvsadm-1.26]# make install
make -C libipvs
make[1]: Entering directory `/ipvsadm-1.26/libipvs'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/ipvsadm-1.26/libipvs'
if [ ! -d /sbin ]; then mkdir -p /sbin; fi
install -m 0755 ipvsadm /sbin
install -m 0755 ipvsadm-save /sbin
install -m 0755 ipvsadm-restore /sbin
[ -d /usr/man/man8 ] || mkdir -p /usr/man/man8
install -m 0644 ipvsadm.8 /usr/man/man8
install -m 0644 ipvsadm-save.8 /usr/man/man8
install -m 0644 ipvsadm-restore.8 /usr/man/man8
[ -d /etc/rc.d/init.d ] || mkdir -p /etc/rc.d/init.d
install -m 0755 ipvsadm.sh /etc/rc.d/init.d/ipvsadm
6. Configure LVS
Configure the VIP and add the real servers:
ipvsadm -A -t 192.168.1.10:3306 -s rr ----> 192.168.1.10 is the VIP
ipvsadm -a -t 192.168.1.10:3306 -r 192.168.1.7:3306 -g
ipvsadm -a -t 192.168.1.10:3306 -r 192.168.1.8:3306 -g
View the configured LVS virtual service:
[root@linux04 ipvsadm-1.26]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.10:3306 rr
-> 192.168.1.7:3306 Route 1 0 0
-> 192.168.1.8:3306 Route 1 0 0
Bind the VIP we just created to the network interface of the LVS server:
ifconfig eth0:0 192.168.1.10
Switch to the RealServer nodes and run the following:
[root@linux02 ~]# /sbin/ifconfig lo:10 192.168.1.10 broadcast 192.168.1.10 netmask 255.255.255.255
Verify with ifconfig that the address is bound:
[root@linux02 ~]# ifconfig lo:10
lo:10 Link encap:Local Loopback
inet addr:192.168.1.10 Mask:255.255.255.255
UP LOOPBACK RUNNING MTU:16436 Metric:1
Suppress ARP responses and announcements for the loopback-bound VIP as follows:
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
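The loopback VIP binding and the four ARP sysctl writes above can be bundled into one script. The sketch below is a dry run that only prints the commands each real server needs (the VIP 192.168.1.10 is the one used in this walkthrough); pipe its output to sh as root to actually apply it.

```shell
#!/bin/sh
# Dry-run sketch: print the commands an LVS/DR real server needs so it can
# accept traffic addressed to the VIP without answering ARP for it.
# Nothing is changed on the system; the commands are only printed.
dr_realserver_setup() {
    vip=$1
    echo "/sbin/ifconfig lo:10 $vip broadcast $vip netmask 255.255.255.255"
    echo "echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore"
    echo "echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce"
    echo "echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore"
    echo "echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce"
}

dr_realserver_setup 192.168.1.10
```

Run it on each real server as `./dr_setup.sh | sh` once the output looks right.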
Perform the same RealServer steps on the server 192.168.1.8, hostname linux03.
With that, LVS is configured and the application layer can reach the slave nodes through 192.168.1.10.
7. Test LVS
Run the following:
# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 613   |
+---------------+-------+
# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 612   |
+---------------+-------+
This shows MySQL is now load balanced.
<>Installing and configuring Keepalived
Take down 192.168.1.7 (hostname linux02) and test the MySQL connection again:
# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 613   |
+---------------+-------+
# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.1.10' (111)
# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 613   |
+---------------+-------+
# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
Warning: Using a password on the command line interface can be insecure.
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.1.10' (111)
There is no health checking or failover yet — time for Keepalived.
Keepalived provides three functions: floating (failover) of the IP address, generation of IPVS rules, and health checking.
1. Download Keepalived from www.keepalived.org and install it on the LVS director, i.e. 192.168.1.9, hostname linux04.
As root:
tar -zxvf keepalived-1.2.7.tar.gz
chmod -R 775 keepalived-1.2.7/
cd keepalived-1.2.7
./configure --prefix=/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/
make
make install
2. Still as root, copy the files into standard locations for convenience:
cp /keepalived/sbin/keepalived /usr/sbin/
cp /keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /keepalived/etc/sysconfig/keepalived /etc/sysconfig/
3. Configure keepalived:
mkdir /etc/keepalived
vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1_1
}

vrrp_instance V1_MYSQL_READ {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3306
    }
    virtual_ipaddress {
        192.168.1.10
    }
}

virtual_server 192.168.1.10 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    net_mask 255.255.255.0
    #persistence_timeout 20
    protocol TCP

    real_server 192.168.1.7 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 192.168.1.8 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

4. Start the keepalived service; before starting, clear the existing ipvsadm rules:
[root@linux04 ~]# ipvsadm -C
[root@linux04 ~]# service keepalived start
Starting keepalived: [ OK ]
5. View the IPVS rules generated by keepalived:
[root@linux04 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.10:3306 rr
-> 192.168.1.7:3306 Route 1 0 0
-> 192.168.1.8:3306 Route 1 0 0

<>A dual-master high-availability environment
The LVS + Keepalived + MySQL slaves combination improves read reliability, but the master remains a single point for writes. From a pure data-safety angle the master is not a single point (its data is replicated), yet once reads and writes are split, the write path is. How do we solve this?
Next we configure bidirectional (master-master) replication.
With nobody modifying objects, query the current binary log file and position on the original slave node (linux02):
system@(none)>show master status \G
*************************** 1. row ***************************
File: mysql-bin.000032
Position: 120
Switch to the original master node (linux01) and point it at the position in the original slave's binary log recorded above:
system@5ienet>change master to master_host='192.168.1.7',master_port=3306,master_user='repl',master_password='oralinux',master_log_file='mysql-bin.000032',master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

system@5ienet>start slave;
Query OK, 0 rows affected (0.01 sec)

Run the following on linux02:
create table 5ienet.t4(id int not null auto_increment,v1 varchar(20),primary key(id));
Check on linux01 whether the table has replicated over:
system@5ienet> desc t4;
+-------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| v1 | varchar(20) | YES | | NULL | |
+-------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
Bidirectional replication has a pitfall: both ends accept writes, possibly to the same object. For example, MySQL table primary keys are usually auto-increment, and inserts typically omit the key value; in that case, if both nodes insert into the same table at the same time, the generated primary keys can easily collide even when the explicitly supplied column values differ. Let's simulate this:
Stop the slave threads on linux01:
system@5ienet>stop slave;
Query OK, 0 rows affected (0.01 sec)
Insert a row on linux02:
system@(none)>insert into 5ienet.t4 (v1) values('192.168.1.7');
Query OK, 1 row affected (0.00 sec)
Query t4 on linux02:
system@(none)>select * from 5ienet.t4;
+----+-------------+
| id | v1 |
+----+-------------+
| 1 | 192.168.1.7 |
+----+-------------+
1 row in set (0.00 sec)
Query on linux01:
system@5ienet>select * from 5ienet.t4;
Empty set (0.00 sec)
It is empty because the local slave threads are stopped, so the operation performed on linux02 has not been applied here. Now insert a row on linux01:
system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.6');
Query OK, 1 row affected (0.00 sec)
Start the slave threads on linux01:
system@5ienet>start slave;
Query OK, 0 rows affected (0.00 sec)
system@5ienet>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.7
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000032
Read_Master_Log_Pos: 537
Relay_Log_File: mysql-relay-bin.000003
Relay_Log_Pos: 283
Relay_Master_Log_File: mysql-bin.000032
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.7')'
Skip_Counter: 0
Exec_Master_Log_Pos: 277
Relay_Log_Space: 1036
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.7')'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 612
Master_UUID: 2d88ad71-23e0-11e7-8222-080027f93f02
Master_Info_File: /mysql/conf/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 170424 15:07:30
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
As a result the slave SQL thread has stopped working: the replication status reports an error, a duplicate primary key. The other node reports the same kind of error.
system@(none)> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.6
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000024
Read_Master_Log_Pos: 473194323
Relay_Log_File: mysql-relay-bin.000019
Relay_Log_Pos: 283
Relay_Master_Log_File: mysql-bin.000024
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: '5ienet'. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.6')'
Skip_Counter: 0
Exec_Master_Log_Pos: 473194051
Relay_Log_Space: 728
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: '5ienet'. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.6')'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 611
Master_UUID: 2584299a-2100-11e7-af61-080027196296
Master_Info_File: /mysql/conf/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 170424 15:05:25
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
Handling SQL-thread apply errors in bidirectional replication:
1. Delete the conflicting record on the source side, then re-execute.
2. Skip the error: sql_slave_skip_counter specifies how many of the next events to skip when applying; the default is 0.
set global sql_slave_skip_counter=1; skips the single most recent event. Perform the same operation on each node, then:
start slave;

linux01:
system@5ienet>set global sql_slave_skip_counter=1;
Query OK, 0 rows affected (0.00 sec)
system@5ienet>start slave;
Query OK, 0 rows affected (0.01 sec)

linux02:
system@(none)>set global sql_slave_skip_counter=1;
Query OK, 0 rows affected (0.00 sec)
system@(none)>start slave;
Query OK, 0 rows affected (0.06 sec)
Run the following on either node to repair the data:
system@5ienet>delete from 5ienet.t4 where v1 in('192.168.1.6','192.168.1.7');
Query OK, 1 row affected (0.00 sec)

system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.6');
Query OK, 1 row affected (0.00 sec)

system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.7');
Query OK, 1 row affected (0.01 sec)
Query on the other node:
system@(none)> select * from 5ienet.t4;
+----+-------------+
| id | v1 |
+----+-------------+
| 2 | 192.168.1.6 |
| 3 | 192.168.1.7 |
+----+-------------+
2 rows in set (0.00 sec)
Avoiding auto-increment collisions: either let applications connect to only one node of the dual-master pair, or allow writes on only one node.
The growth of auto_increment column values in MySQL is governed by two system variables:
auto_increment_increment: the step by which the auto-increment value grows; range 1-65535, default 1. A value of 0 is also accepted and behaves the same as 1.
auto_increment_offset: the offset of the auto-increment sequence. "Offset" may not be the most intuitive word — think of it as the sequence's starting value. Its range and rules are identical to auto_increment_increment's.
The two variables work together. For example, to start at 6 and step by 10:
set auto_increment_increment=10;
set auto_increment_offset=6;
Create a table, insert some rows, and observe the generated values:
system@(none)>set auto_increment_increment=10;
Query OK, 0 rows affected (0.00 sec)

system@(none)>set auto_increment_offset=6;
Query OK, 0 rows affected (0.00 sec)

system@(none)>create table 5ienet.autoinc(col int not null auto_increment primary key);
Query OK, 0 rows affected (0.02 sec)

system@(none)>insert into 5ienet.autoinc values(null),(null),(null);
Query OK, 3 rows affected (0.01 sec)
Records: 3 Duplicates: 0 Warnings: 0

system@(none)>select * from 5ienet.autoinc;
+-----+
| col |
+-----+
| 6 |
| 16 |
| 26 |
+-----+
3 rows in set (0.00 sec)
Change the offset to 8:
system@(none)>set auto_increment_offset=8;
Query OK, 0 rows affected (0.00 sec)

system@(none)>insert into 5ienet.autoinc values(null),(null),(null);
Query OK, 3 rows affected (0.01 sec)
Records: 3 Duplicates: 0 Warnings: 0

system@(none)>select * from 5ienet.autoinc;
+-----+
| col |
+-----+
| 6 |
| 16 |
| 26 |
| 38 |
| 48 |
| 58 |
+-----+
6 rows in set (0.00 sec)
With an increment and an offset we can give each MySQL instance its own auto-increment rule. For our current dual-master environment, set the increment on both nodes to 2, the offset on one node to 1 and on the other to 2: one node then only ever generates odd values and the other only even values. Since the two nodes' generation rules never overlap, the generated keys cannot collide.
Make the change in each node's my.cnf.
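A minimal sketch of the my.cnf lines involved, assuming the two nodes are linux01 and linux02 as elsewhere in these notes (only the two auto-increment settings are new; the rest of each [mysqld] section stays as it is):

```ini
# linux01 my.cnf -- generates 1, 3, 5, ...
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1

# linux02 my.cnf -- generates 2, 4, 6, ...
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2
```

Restart each mysqld (or set the two variables globally) for the rule to take effect on new inserts.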

<>Automatic IP failover for the dual-master environment (active/standby)
Given the problems concurrent writes cause in a dual-master setup, we give up load balancing but implement IP address failover, improving the database's availability.
Install and configure keepalived on both nodes (steps omitted).
1. On the primary node:
Edit keepalived.conf:

vrrp_script check_run {
    script "/keepalived/bin/ka_check_mysql.sh"
    interval 10
}

vrrp_instance VPS {
    state BACKUP        # start both servers as BACKUP to avoid flapping (fights over the MASTER role) when the service restarts
    interface eth0
    virtual_router_id 34
    priority 100        # priority; set this slightly lower on the other node
    advert_int 1
    nopreempt           # non-preemptive; set only on the higher-priority machine, not on the lower-priority one
    authentication {
        auth_type PASS
        auth_pass 3141
    }
    virtual_ipaddress {
        192.168.1.11
    }
    track_script {
        check_run
    }
}

ka_check_mysql.sh

#!/bin/bash
source /mysql/scripts/mysql_env.ini
MYSQL_CMD=/mysql/bin/mysql
CHECK_TIME=3   # check three times
MYSQL_OK=1     # MYSQL_OK is 1 while the MySQL service is healthy, otherwise 0

function check_mysql_health() {
    $MYSQL_CMD -u${MYSQL_USER} -p${MYSQL_PASS} -S /mysql/conf/mysql.sock -e "show status;" > /dev/null 2>&1
    if [ $? = 0 ]; then
        MYSQL_OK=1
    else
        MYSQL_OK=0
    fi
    return $MYSQL_OK
}

while [ $CHECK_TIME -ne 0 ]
do
    let "CHECK_TIME -= 1"
    check_mysql_health
    if [ $MYSQL_OK = 1 ]; then
        CHECK_TIME=0
        exit 0
    fi

    if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 0 ]
    then
        /etc/init.d/keepalived stop
        exit 1
    fi
    sleep 1
done

This script checks whether the MySQL instance accepts connections; if three consecutive connection attempts fail, it stops the local keepalived service, deliberately triggering VIP failover.
Make the script executable:
chmod +x ka_check_mysql.sh
Start keepalived:
service keepalived start
The virtual IP held by keepalived does not show up in ifconfig, but ip addr reveals it:
[root@linux01 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:19:62:96 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.6/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.11/32 scope global eth0
inet6 fe80::a00:27ff:fe19:6296/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:b3:a6:de brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:feb3:a6de/64 scope link
valid_lft forever preferred_lft forever
The application layer can now reach the master instance of the replication pair through the VIP 192.168.1.11.

2. On the standby node:
Configure the other master node.
Install keepalived (omitted).

Edit keepalived.conf:

vrrp_script check_run {
    script "/keepalived/bin/ka_check_mysql.sh"
    interval 10
}

vrrp_instance VPS {
    state BACKUP
    interface eth0
    virtual_router_id 34
    priority 90         # lower the priority on this node
    advert_int 1
    nopreempt           # non-preemptive; set only on the higher-priority machine, not on the lower-priority one
    authentication {
        auth_type PASS
        auth_pass 3141
    }
    virtual_ipaddress {
        192.168.1.11
    }
    track_script {
        check_run
    }
}

Copy ka_check_mysql.sh over from the other node.
Start keepalived with service keepalived start.

3. Test high availability:
From a client, run:
# mysql -usystem -p'oralinux' -h 192.168.1.11 -N -e "select @@hostname"
Warning: Using a password on the command line interface can be insecure.
+---------+
| linux01 |
+---------+
It returns linux01, as expected, since linux01 is the primary node. Now stop the service on linux01 and test:
Running ip addr on node 2 shows the VIP has floated to this node:
[root@linux02 bin]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 192.168.1.10/32 brd 192.168.1.10 scope global lo:10
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:f9:3f:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.7/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.11/32 scope global eth0
inet6 fe80::a00:27ff:fef9:3f02/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:fd:29:66 brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:fefd:2966/64 scope link
valid_lft forever preferred_lft forever
Connect again from the client:
# mysql -usystem -p'oralinux' -h 192.168.1.11 -N -e "select @@hostname"
Warning: Using a password on the command line interface can be insecure.
+---------+
| linux02 |
+---------+

<>DRBD for stronger protection of the master's data: Distributed Replicated Block Device, block-device-level replication.
A DRBD + Pacemaker + Corosync architecture.

<>The official MySQL Cluster
Management node: the management service mentioned earlier, which manages the other nodes in the MySQL Cluster — it distributes configuration, starts and stops nodes, runs backups, and so on. Because it manages the others, this node must be started first; it is started with the command-line tool ndb_mgmd.
Data node: stores the cluster's data. The number of data nodes should normally equal the number of replicas times the number of data fragments. Replicas provide redundancy; in any environment with high-availability requirements, every piece of data should have at least 2 replicas to be safe. Data nodes are started with the command-line tool ndbd.
SQL node: serves clients reading the cluster's data. Think of a SQL node as a MySQL server using the NDBCLUSTER engine (started with the --ndbcluster and --ndb-connectstring options); it is a special kind of API node. Although a SQL node in a MySQL Cluster also runs a program named mysqld, note that it differs from the mysqld in a standard MySQL distribution — it is a dedicated mysqld build, not interchangeable with the standard one. Moreover, even the Cluster-specific mysqld cannot read or write cluster data through the NDB engine unless it is connected to the MySQL Cluster management server.

In MySQL Cluster, a "node" means a certain kind of process; a machine running several nodes is called a "host" in cluster terminology.
MySQL Cluster Community Edition download: http://dev.mysql.com/downloads/cluster

<>Installing and configuring Cluster
Management node: 192.168.1.20
Data node 1: 192.168.1.21
Data node 2: 192.168.1.22
SQL node 1: 192.168.1.21
SQL node 2: 192.168.1.22

Build Cluster from source, as root:
mkdir /mysql/conf
tar -zxvf
cd
cmake . -DCMAKE_INSTALL_PREFIX=/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_NDB_JAVA=OFF \
-DWITH_FEDERATED_STORAGE_ENGINE=1 \
-DWITH_NDBCLUSTER_STORAGE_ENGINE=1 \
-DCOMPILATION_COMMENT='JASON for MySQLCluster' \
-DWITH_READLINE=ON \
-DSYSCONFDIR=/mysql/conf \
-DMYSQL_UNIX_ADDR=/mysql/conf/mysql.sock

make && make install
The steps are essentially the same as installing MySQL Server, with two extra options:
WITH_NDB_JAVA: enables Java support. Introduced in Cluster 7.2.9 and on by default. If you need Java support, in addition to enabling this option you must pass WITH_CLASSPATH to point at the JDK; since none of the servers in this test environment has a JDK installed, we disable it.
WITH_NDBCLUSTER_STORAGE_ENGINE: enables the NDBCLUSTER engine.
chown -R mysql:mysql /mysql
Add export PATH=/mysql/bin:$PATH to /home/mysql/.bash_profile so the mysql user can invoke the Cluster command-line tools from any directory.
Perform the steps above on all three servers. If their hardware and software environments are identical, you can compile and install on one server, then tar up the installed directory, copy it to the others, and unpack it there.

Configuration, as the mysql user:
1. Configure the management node:
mkdir /mysql/mysql-cluster
vi /mysql/mysql-cluster/config.ini
Add the following:
[ndbd default]
NoOfReplicas=2     # number of replicas; keep this at 2 or more, otherwise the data has no redundancy
DataMemory=200M    # memory allocated for data (test environment, deliberately small)
IndexMemory=30M    # memory allocated for indexes (test environment, deliberately small)

[ndb_mgmd]
# management node options
hostname=192.168.1.20
datadir=/mysql/mysql-cluster

[ndbd]
# data node options
hostname=192.168.1.21
datadir=/mysql/mysql-cluster/data

[ndbd]
# data node options
hostname=192.168.1.22
datadir=/mysql/mysql-cluster/data

[mysqld]
# SQL node options
hostname=192.168.1.21

[mysqld]
# SQL node options
hostname=192.168.1.22

2. Configure the data and SQL nodes; perform this on 192.168.1.21 and 192.168.1.22.
Add to /mysql/conf/my.cnf:
[mysqld]
ndbcluster

[mysql_cluster]
ndb-connectstring=192.168.1.20
Initialize the databases on 192.168.1.21/22:
/mysql/scripts/mysql_install_db --datadir=/mysql/data --basedir=/mysql
Only one parameter, ndb-connectstring, is defined here; it points at the management node's address. Note that once the ndbcluster and ndb-connectstring options are set and mysql server is started, create table and alter table statements cannot be executed while the cluster itself is not running.

mkdir /mysql/mysql-cluster/data
With that, configuration is complete; start the MySQL Cluster:
First start the management daemon on 192.168.1.20: ndb_mgmd -f /mysql/mysql-cluster/config.ini
$ ndb_mgmd -f /mysql/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.6.14 ndb-7.3.3

ndb_mgm opens the dedicated management client:
$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>
Run show to see the current cluster state:
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.1.21)
id=3 (not connected, accepting connect from 192.168.1.22)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.20 (mysql-5.6.14 ndb-7.3.3)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 192.168.1.21)
id=5 (not connected, accepting connect from 192.168.1.22)

Switch to 192.168.1.21/22 and start the data nodes with ndbd --initial. Note that the --initial flag is only for a data node's very first start; drop it on subsequent starts, because it wipes the node's local data.
$ ndbd --initial
2017-04-27 13:44:11 [ndbd] INFO -- Angel connected to '192.168.1.20:1186'
2017-04-27 13:44:11 [ndbd] INFO -- Angel allocated nodeid: 2

$ ndbd --initial
2017-04-27 13:44:43 [ndbd] INFO -- Angel connected to '192.168.1.20:1186'
2017-04-27 13:44:43 [ndbd] INFO -- Angel allocated nodeid: 3

Switch to 192.168.1.21/22 and start the SQL nodes: mysqld_safe --defaults-file=/mysql/conf/my.cnf &
On the management node, run show again:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.21 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *)
id=3 @192.168.1.22 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.20 (mysql-5.6.14 ndb-7.3.3)

[mysqld(API)] 2 node(s)
id=4 @192.168.1.21 (mysql-5.6.14 ndb-7.3.3)
id=5 @192.168.1.22 (mysql-5.6.14 ndb-7.3.3)
Shutting down the cluster: SQL nodes via the traditional mysqladmin shutdown; data nodes via the shutdown subcommand inside ndb_mgm.

Trying the cluster out:
nodeid4>use test;
Database changed
nodeid4>create table n1(id int not null auto_increment primary key,v1 varchar(20)) engine=ndb;
Query OK, 0 rows affected (0.26 sec)
When creating tables, specify the NDB engine via the engine option.
nodeid4>insert into n1 values(null,'a');
Query OK, 1 row affected (0.02 sec)
nodeid5>select * from test.n1;
+----+------+
| id | v1 |
+----+------+
| 1 | a |
+----+------+
1 row in set (0.01 sec)
nodeid5>insert into test.n1 values(null,'b');
Query OK, 1 row affected (0.00 sec)
nodeid4>select * from test.n1;
+----+------+
| id | v1 |
+----+------+
| 1 | a |
| 2 | b |
+----+------+
2 rows in set (0.00 sec)

Shut down the SQL node corresponding to nodeid5: mysqladmin shutdown
Then keep inserting on nodeid4:

nodeid4>insert into n1 values(null,'c');
Query OK, 1 row affected (0.00 sec)
Restart nodeid5: mysqld_safe --defaults-file=/mysql/conf/my.cnf &
nodeid5>select * from test.n1;
+----+------+
| id | v1 |
+----+------+
| 1 | a |
| 2 | b |
| 3 | c |
+----+------+
3 rows in set (0.01 sec)


The SQL nodes in a cluster can likewise sit behind LVS, with a VIP routed to them, to provide connection-level high availability and load balancing for the application layer.
Why Cluster is not widely used: in MySQL Cluster, every table you operate on must fit entirely in memory. (Data is persisted to disk, but any data being read or written must be loaded into memory — not just the hottest data, as in a conventional database, but all of it.) In other words, the combined memory of the NDB nodes essentially caps the size of the database NDBCLUSTER can host. In recent NDBCLUSTER versions, non-indexed column data can live on disk, but index data must still be loaded into memory. This is why it is called an in-memory database.
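As a rough capacity sketch under the config.ini values used above (2 data nodes, NoOfReplicas=2, DataMemory=200M per node — and deliberately ignoring IndexMemory and per-row/page overhead), the unique data the cluster can hold is approximately:

```shell
#!/bin/sh
# Rough NDB capacity estimate: total DataMemory across data nodes divided by
# the replica count. A back-of-the-envelope figure only; real capacity is
# lower due to index memory and storage overhead.
NODES=2          # number of ndbd data nodes
REPLICAS=2       # NoOfReplicas from config.ini
DATAMEM_MB=200   # DataMemory per data node, in MB

echo $(( NODES * DATAMEM_MB / REPLICAS ))   # prints 200 (MB of unique data, at most)
```

Scaling the cluster's data volume therefore means adding data nodes or RAM, not disks.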

<>Scaling the database service further
Shard when sharding is due;
think the handling strategy through first.
