2020-10-18

1. Set up a new slave node for a running master (write out the steps)

(1) Take a full backup on the master with the following command

mysqldump -uUSER -pPASSWORD -A -F --single-transaction --master-data=1 > fullback.sql

(2) Copy the resulting full backup file to the slave server with the following command

scp fullback.sql root@SLAVEip:

(3) Configure the slave. First extract the binlog coordinates recorded in the full backup with the following command

grep '^CHANGE MASTER' fullback.sql
CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;

(4) In fullback.sql, complete the 'CHANGE MASTER TO' statement with the replication settings shown below, then restore the file to load the data and set up replication

#add the following to fullback.sql
vim fullback.sql
CHANGE MASTER TO
MASTER_HOST='masterhostip',  #master IP
MASTER_USER='USER',          #replication user created on the master
MASTER_PASSWORD='password',
MASTER_PORT=3306,
MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;  #the binlog coordinates extracted above
#restore the backup to load the data and the replication settings
mysql -uUSER -pPASSWORD < fullback.sql

(5) Start replication on the slave and check that it is syncing

mysql> start slave;
mysql> show slave status\G

2. When the master goes down, promote a slave to become the new master (write out the steps)

(1) Check the replication status of each slave, and prefer the one whose data is the most recent and closest to the old master's as the new master; see the comparison sketch below.
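
A minimal comparison sketch, assuming SSH access to each slave (the IPs are placeholders): pull the replication coordinates from every slave and prefer the one with the largest Read_Master_Log_Pos and no lag.

#compare replication progress across the candidate slaves
for host in 10.0.0.18 10.0.0.28; do
    echo "== $host =="
    ssh root@$host "mysql -e 'show slave status\G'" | \
        grep -E 'Master_Log_File|Read_Master_Log_Pos|Seconds_Behind_Master'
done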

(2) Once the server to promote is chosen, edit its database config file: turn off read-only and enable binary logging

#edit /etc/my.cnf
server-id=#
read-only=OFF
log-bin
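#read_only is a dynamic variable, so it can also be turned off at runtime
#without a restart (enabling log-bin, by contrast, does require the restart)
mysql> set global read_only=off;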

(3) Take a full backup on the new master and copy it to the other slave servers

mysqldump -uUSER -pPASSWORD -A -F --single-transaction --master-data=1 > fullback.sql
scp fullback.sql root@SLAVEip:

(4) Analyze the old master's binary log, export any events that were never replicated to the new master, and apply them on the new master to recover as much data as possible

grep '^CHANGE MASTER' fullback.sql
CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;
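#a rough sketch of the binlog recovery itself, assuming the old master's
#binlog files are still readable; the file name, POS and paths are
#placeholders for the last coordinates the new master actually received
mysqlbinlog --start-position=POS /var/lib/mysql/mariadb-bin.00000# > missing.sql
mysql -uUSER -pPASSWORD < missing.sql   #apply the missing events on the new master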

(5) On all the other slaves, restore the database and point them at the new master

vim fullback.sql
CHANGE MASTER TO
MASTER_HOST='masterhostip',  #IP of the new master
MASTER_USER='USER',          #replication user created on the master
MASTER_PASSWORD='password',
MASTER_PORT=3306,
MASTER_LOG_FILE='mariadb-bin.00000#', MASTER_LOG_POS=#;  #the binlog coordinates extracted above
#clear the old replication settings, then load the new ones
stop slave;
reset slave all;
set sql_log_bin=off;
source fullback.sql;
set sql_log_bin=on;
start slave;
show slave status\G

3讹俊、通過 MHA 0.58 搭建一個(gè)數(shù)據(jù)庫集群結(jié)構(gòu)

Environment:

MHA manager: 10.0.0.7
master:10.0.0.8
slave1:10.0.0.18
slave2:10.0.0.28

Install the MHA packages

[root@mha ~]#wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
[root@mha ~]#wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
[root@mha ~]#ls
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm  mha4mysql-node-0.58-0.el7.centos.noarch.rpm
#install the mha4mysql-node-0.58-0.el7.centos.noarch.rpm package on every machine
yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm
#install both packages on the MHA manager
yum -y install mha4mysql-node-0.58-0.el7.centos.noarch.rpm mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

Install MySQL on every machine except the MHA manager

yum -y install mysql-server

Configure the MySQL master

[root@master ~]#vim /etc/my.cnf.d/mysql-server.cnf
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
log-bin
skip_name_resolve=1
general_log 
#start the service, then create and grant the accounts used for replication and MHA
[root@master ~]#systemctl start mysqld
[root@master ~]#mysql
mysql> show master logs;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| centos8-bin.000001 |       179 | No        |
| centos8-bin.000002 |       156 | No        |
+--------------------+-----------+-----------+
2 rows in set (0.00 sec)
mysql> create user repluser@'10.0.0.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> grant replication slave on *.* to repluser@'10.0.0.%';
Query OK, 0 rows affected (0.00 sec)

mysql> create user mhauser@'10.0.0.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> grant all on *.* to mhauser@'10.0.0.%';
Query OK, 0 rows affected (0.01 sec)
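#optionally confirm the grants took effect
mysql> show grants for repluser@'10.0.0.%';
mysql> show grants for mhauser@'10.0.0.%';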

Configure slave1

[root@slave1 ~]#vim /etc/my.cnf.d/mysql-server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
server_id=2
log-bin
read_only
relay_log_purge=0
skip_name_resolve=1
#start the service and set up replication
[root@slave1 ~]#systemctl start mysqld
[root@slave1 ~]#mysql
mysql> change master to
    -> master_host='10.0.0.8',
    -> master_port=3306,
    -> master_user='repluser',
    -> master_password='123456',
    -> master_log_file='centos8-bin.000002',master_log_pos=156;
Query OK, 0 rows affected, 2 warnings (0.05 sec)
mysql> start slave;
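#confirm that both replication threads are running
mysql> show slave status\G
#expect Slave_IO_Running: Yes and Slave_SQL_Running: Yes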

Configure slave2

[root@slave2 ~]#vim /etc/my.cnf.d/mysql-server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
server_id=3
read_only
relay_log_purge=0
skip_name_resolve=1
#start the service and set up replication
[root@slave2 ~]#systemctl start mysqld
[root@slave2 ~]#mysql
mysql> change master to
    -> master_host='10.0.0.8',
    -> master_port=3306,
    -> master_user='repluser',
    -> master_password='123456',
    -> master_log_file='centos8-bin.000002',master_log_pos=156;
Query OK, 0 rows affected, 2 warnings (0.05 sec)
mysql> start slave;

Set up SSH key-based authentication

[root@mha-manager ~]#ssh-keygen
[root@mha-manager ~]#ssh-copy-id 10.0.0.7
[root@mha-manager ~]#rsync -av .ssh 10.0.0.8:/root/
[root@mha-manager ~]#rsync -av .ssh 10.0.0.18:/root/
[root@mha-manager ~]#rsync -av .ssh 10.0.0.28:/root/
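#quick check that key-based login now works from the manager to every node
[root@mha-manager ~]#for ip in 10.0.0.7 10.0.0.8 10.0.0.18 10.0.0.28; do ssh $ip hostname; done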

Create the configuration file on the manager node

[root@mha-manager ~]#mkdir /etc/mastermha/
[root@mha-manager ~]#vim /etc/mastermha/app1.cnf 
[server default]
user=mhauser
password=123456
manager_workdir=/data/mastermha/app1/
manager_log=/data/mastermha/app1/manager.log
remote_workdir=/data/mastermha/app1/
ssh_user=root
repl_user=repluser
repl_password=123456
ping_interval=1
#master_ip_failover_script=/usr/local/bin/master_ip_failover   #uncomment once the VIP failover script (below) is installed
check_repl_delay=0
master_binlog_dir=/var/lib/mysql/
[server1]
hostname=10.0.0.8
candidate_master=1
[server2]
hostname=10.0.0.18
candidate_master=1
[server3]
hostname=10.0.0.28                

Verify the MHA environment

#check the environment
[root@mha-manager ~]#masterha_check_ssh --conf=/etc/mastermha/app1.cnf
[root@mha-manager ~]#masterha_check_repl --conf=/etc/mastermha/app1.cnf
#check the manager status
[root@mha-manager ~]#masterha_check_status --conf=/etc/mastermha/app1.cnf 
#start MHA
[root@mha ~]#masterha_manager --conf=/etc/mastermha/app1.cnf
Sun Oct 18 16:41:45 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Oct 18 16:41:45 2020 - [info] Reading application default configuration from /etc/mastermha/app1.cnf..
Sun Oct 18 16:41:45 2020 - [info] Reading server configuration from /etc/mastermha/app1.cnf..
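#masterha_manager runs in the foreground; to keep it running unattended it is
#commonly pushed to the background (the log path here is just an example)
[root@mha ~]#nohup masterha_manager --conf=/etc/mastermha/app1.cnf &> /data/mastermha/app1/manager_run.log &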

Test whether MHA works

#stop the master
[root@master ~]#systemctl stop mysqld
#watching the MHA log, slave1 is automatically promoted to the new master
[root@mha ~]#tail -f /data/mastermha/app1/manager.log
Sun Oct 18 16:44:38 2020 - [info] New master is 10.0.0.18(10.0.0.18:3306)
Sun Oct 18 16:44:38 2020 - [info] Starting master failover..
Sun Oct 18 16:44:38 2020 - [info] 
From:
10.0.0.8(10.0.0.8:3306) (current master)
 +--10.0.0.18(10.0.0.18:3306)
 +--10.0.0.28(10.0.0.28:3306)

To:
10.0.0.18(10.0.0.18:3306) (new master)
 +--10.0.0.28(10.0.0.28:3306)
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 3.4: New Master Diff Log Generation Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info]  This server has all relay logs. No need to generate diff files from the latest slave.
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 3.5: Master Log Apply Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] *NOTICE: If any error happens from this phase, manual recovery is needed.
Sun Oct 18 16:44:38 2020 - [info] Starting recovery on 10.0.0.18(10.0.0.18:3306)..
Sun Oct 18 16:44:38 2020 - [info]  This server has all relay logs. Waiting all logs to be applied.. 
Sun Oct 18 16:44:38 2020 - [info]   done.
Sun Oct 18 16:44:38 2020 - [info]  All relay logs were successfully applied.
Sun Oct 18 16:44:38 2020 - [info] Getting new master's binlog name and position..
Sun Oct 18 16:44:38 2020 - [info]  slave1-bin.000001:156
Sun Oct 18 16:44:38 2020 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='10.0.0.18', MASTER_PORT=3306, MASTER_LOG_FILE='slave1-bin.000001', MASTER_LOG_POS=156, MASTER_USER='mhauser', MASTER_PASSWORD='xxx';
Sun Oct 18 16:44:38 2020 - [warning] master_ip_failover_script is not set. Skipping taking over new master IP address.
Sun Oct 18 16:44:38 2020 - [info] ** Finished master recovery successfully.
Sun Oct 18 16:44:38 2020 - [info] * Phase 3: Master Recovery Phase completed.
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 4: Slaves Recovery Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] * Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
Sun Oct 18 16:44:38 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info] -- Slave diff file generation on host 10.0.0.28(10.0.0.28:3306) started, pid: 5058. Check tmp log /data/mastermha/app1//10.0.0.28_3306_20201018164435.log if it takes time..
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] Log messages from 10.0.0.28 ...
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:38 2020 - [info]  This server has all relay logs. No need to generate diff files from the latest slave.
Sun Oct 18 16:44:39 2020 - [info] End of log messages from 10.0.0.28.
Sun Oct 18 16:44:39 2020 - [info] -- 10.0.0.28(10.0.0.28:3306) has the latest relay log events.
Sun Oct 18 16:44:39 2020 - [info] Generating relay diff files from the latest slave succeeded.
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] * Phase 4.2: Starting Parallel Slave Log Apply Phase..
Sun Oct 18 16:44:39 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] -- Slave recovery on host 10.0.0.28(10.0.0.28:3306) started, pid: 5060. Check tmp log /data/mastermha/app1//10.0.0.28_3306_20201018164435.log if it takes time..
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:40 2020 - [info] Log messages from 10.0.0.28 ...
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:39 2020 - [info] Starting recovery on 10.0.0.28(10.0.0.28:3306)..
Sun Oct 18 16:44:39 2020 - [info]  This server has all relay logs. Waiting all logs to be applied.. 
Sun Oct 18 16:44:39 2020 - [info]   done.
Sun Oct 18 16:44:39 2020 - [info]  All relay logs were successfully applied.
Sun Oct 18 16:44:39 2020 - [info]  Resetting slave 10.0.0.28(10.0.0.28:3306) and starting replication from the new master 10.0.0.18(10.0.0.18:3306)..
Sun Oct 18 16:44:39 2020 - [info]  Executed CHANGE MASTER.
Sun Oct 18 16:44:39 2020 - [info]  Slave started.
Sun Oct 18 16:44:40 2020 - [info] End of log messages from 10.0.0.28.
Sun Oct 18 16:44:40 2020 - [info] -- Slave recovery on host 10.0.0.28(10.0.0.28:3306) succeeded.
Sun Oct 18 16:44:40 2020 - [info] All new slave servers recovered successfully.
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:40 2020 - [info] * Phase 5: New master cleanup phase..
Sun Oct 18 16:44:40 2020 - [info] 
Sun Oct 18 16:44:40 2020 - [info] Resetting slave info on the new master..
Sun Oct 18 16:44:40 2020 - [info]  10.0.0.18: Resetting slave info succeeded.
Sun Oct 18 16:44:40 2020 - [info] Master failover to 10.0.0.18(10.0.0.18:3306) completed successfully.
Sun Oct 18 16:44:40 2020 - [info] 
----- Failover Report -----

app1: MySQL Master failover 10.0.0.8(10.0.0.8:3306) to 10.0.0.18(10.0.0.18:3306) succeeded

Master 10.0.0.8(10.0.0.8:3306) is down!

Check MHA Manager logs at mha:/data/mastermha/app1/manager.log for details.

Started automated(non-interactive) failover.
The latest slave 10.0.0.18(10.0.0.18:3306) has all relay logs for recovery.
Selected 10.0.0.18(10.0.0.18:3306) as a new master.
10.0.0.18(10.0.0.18:3306): OK: Applying all logs succeeded.
10.0.0.28(10.0.0.28:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
10.0.0.28(10.0.0.28:3306): OK: Applying all logs succeeded. Slave started, replicating from 10.0.0.18(10.0.0.18:3306)
10.0.0.18(10.0.0.18:3306): Resetting slave info succeeded.
Master failover to 10.0.0.18(10.0.0.18:3306) completed successfully.
#after the new master is promoted, MHA exits
[root@mha ~]#masterha_check_status --conf=/etc/mastermha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
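#to re-arm MHA after a failover, remove the saved failover state file under
#manager_workdir (or start with --ignore_last_failover), then start it again
[root@mha ~]#rm -f /data/mastermha/app1/app1.failover.complete
[root@mha ~]#masterha_manager --conf=/etc/mastermha/app1.cnf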

Implement VIP failover

#the failover script (referenced above as master_ip_failover_script)
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);
my $vip       = '10.0.0.100/24';
my $gateway   = '10.0.0.254';
my $interface = 'eth0';
my $key       = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip  = "/sbin/ifconfig $interface:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master ip address at a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage the master ip address at a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
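
The config file above points master_ip_failover_script at /usr/local/bin/master_ip_failover, so save this script there and make it executable:

[root@mha ~]#chmod +x /usr/local/bin/master_ip_failover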

Test the VIP failover

#add the VIP on the current master
[root@master ~]#ip addr add 10.0.0.100/24 dev eth0
[root@master ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:c1:38:f6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.8/24 brd 10.0.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec1:38f6/64 scope link 
       valid_lft forever preferred_lft forever
#re-check the environment and start MHA
[root@mha ~]#masterha_check_ssh --conf=/etc/mastermha/app1.cnf
[root@mha ~]#masterha_check_repl --conf=/etc/mastermha/app1.cnf
[root@mha ~]#masterha_manager --conf=/etc/mastermha/app1.cnf
#stop the master's MySQL service
[root@master ~]#systemctl stop mysqld
# check slave1's addresses: the VIP has moved over successfully
[root@81 data]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:02:89:3c brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.81/8 brd 10.255.255.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33:1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe02:893c/64 scope link 
       valid_lft forever preferred_lft forever

4煌抒、實(shí)戰(zhàn)案例:Percona XtraDB Cluster(PXC 5.7)

(1) Prepare the environment (four CentOS 7 hosts)

pxc1:10.0.0.7
pxc2:10.0.0.17
pxc3:10.0.0.27
pxc4:10.0.0.37

Disable the firewall and SELinux, and make sure time is synchronized.

(2) Install Percona XtraDB Cluster 5.7

#configure the yum repo (using the Tsinghua University mirror)
cat /etc/yum.repos.d/pxc.repo
[percona]
name=percona_repo
baseurl=https://mirrors.tuna.tsinghua.edu.cn/percona/release/$releasever/RPMS/$basearch
enabled = 1
gpgcheck = 0
#every machine needs this yum repo configured
[root@pxc1 ~]#scp /etc/yum.repos.d/pxc.repo 10.0.0.17:/etc/yum.repos.d
[root@pxc1 ~]#scp /etc/yum.repos.d/pxc.repo 10.0.0.27:/etc/yum.repos.d
#install PXC 5.7 on all three nodes
[root@pxc1 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc2 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc3 ~]#yum install Percona-XtraDB-Cluster-57 -y

(3) Configure MySQL and the cluster settings on each node

#change server-id on each node; nothing else in this file needs to change
[root@pxc1 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf 
server-id=1
[root@pxc2 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf 
server-id=2
[root@pxc3 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf 
server-id=3
#edit the PXC config file on each node as follows
#pxc1
[root@pxc1 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf 
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27  
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.7        
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-1      
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cretPass"
#pxc2
[root@pxc2 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27  
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.17        
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-2      
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cretPass"
#pxc3
[root@pxc3 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27  
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.27        
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-3      
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:123456"

(4) Bootstrap the first node of the PXC cluster

[root@pxc1 ~]#systemctl start mysql@bootstrap.service
[root@pxc1 ~]#ss -nutl
Netid State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
tcp   LISTEN     0      128            *:22                         *:*                  
tcp   LISTEN     0      128            *:4567                       *:*                  
tcp   LISTEN     0      128         [::]:22                      [::]:*                  
tcp   LISTEN     0      80          [::]:3306                    [::]:*
#look up the temporary root password
[root@pxc1 ~]#grep "temporary password" /var/log/mysqld.log 
2020-10-18T04:14:33.485069Z 1 [Note] A temporary password is generated for root@localhost: GmQis8f.VAzo
#log in, change the root password, then create and grant the SST user
[root@pxc1 ~]#mysql -uroot -p'GmQis8f.VAzo'
mysql> alter user 'root'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> create user 'sstuser'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> grant reload,lock tables,process,replication client on *.* to 'sstuser'@'localhost';
Query OK, 0 rows affected (0.01 sec)
#inspect the wsrep variables and status
mysql> show variables like 'wsrep%'\G
mysql> show status like 'wsrep%'\G
#the key items to watch are the following
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 72fc337b-10f9-11eb-8052-3ba4e644cbd7 |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+

(5) Start all the other nodes in the PXC cluster

[root@pxc2 ~]#systemctl start mysql
[root@pxc3 ~]#systemctl start mysql

(6) Check the cluster status and verify the cluster works

#on any node, check the cluster status
[root@pxc1 ~]#mysql -uroot -p123456
mysql> show variables like 'wsrep_node_name';
+-----------------+--------------------+
| Variable_name   | Value              |
+-----------------+--------------------+
| wsrep_node_name | pxc-cluster-node-1 |
+-----------------+--------------------+
1 row in set (0.01 sec)

mysql> show variables like 'wsrep_node_address';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| wsrep_node_address | 10.0.0.7 |
+--------------------+----------+
1 row in set (0.00 sec)

mysql> show variables like 'wsrep_on';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_on      | ON    |
+---------------+-------+
1 row in set (0.01 sec)

mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database testdb1;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb1            |
+--------------------+
5 rows in set (0.00 sec)
#using Xshell's send-to-all, run the same statement on all three nodes at once; it succeeds on one node
mysql> create database testdb2;
Query OK, 1 row affected (0.01 sec)
#the other nodes all report failure
mysql> create database testdb2;
ERROR 1007 (HY000): Can't create database 'testdb2'; database exists

(7) Add a new host, pxc4 (10.0.0.37), to the PXC cluster

[root@pxc4 ~]#yum install Percona-XtraDB-Cluster-57 -y
[root@pxc4 ~]#vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.7,10.0.0.17,10.0.0.27,10.0.0.37
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=10.0.0.37
wsrep_cluster_name=pxc-cluster
wsrep_node_name=pxc-cluster-node-4
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:123456"
#also add 10.0.0.37 to wsrep_cluster_address on the existing nodes
#give pxc4 a unique server-id
[root@pxc4 ~]#vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
server-id=4
#start the service
[root@pxc4 ~]#systemctl start mysql
#check the cluster size
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 4     |
+--------------------+-------+
1 row in set (0.00 sec)

(8) Recover a failed node in the PXC cluster

#stop any one node
[root@pxc4 ~]#systemctl stop mysql
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
#while it is down, add new data on any surviving node
mysql> create database testdb3;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb1            |
| testdb2            |
| testdb3            |
+--------------------+
7 rows in set (0.00 sec)
#start the stopped node again; the data syncs automatically
[root@pxc4 ~]#systemctl start mysql
[root@pxc4 ~]#mysql -uroot -p123456
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb1            |
| testdb2            |
| testdb3            |
+--------------------+
7 rows in set (0.00 sec)
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 4     |
+--------------------+-------+
1 row in set (0.01 sec)

5闸婴、通過 ansible 部署二進(jìn)制 mysql 8

(1) Install Ansible (this method requires the EPEL repo)

yum -y install ansible

(2) Prepare the required files

#create a directory to hold the files below
[root@ansible ~]#mkdir -p /data/ansible/files
#download the MySQL 8.0 tarball into files/
[root@ansible ~]#wget -P /data/ansible/files https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.21-linux-glibc2.12-x86_64.tar.xz
#prepare the database config file /etc/my.cnf
[root@ansible ~]#cat /data/ansible/files/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
log-error=/data/mysql/mysqld.log
log-bin
[client]
port=3306
socket=/tmp/mysql.sock
#prepare the script that resets the installed database's initial password
[root@ansible ~]#cat /data/ansible/files/chpass.sh
#!/bin/bash
PASSWORD=`awk '/temporary password/{print $NF}' /data/mysql/mysqld.log`
mysqladmin -uroot -p$PASSWORD password 123456
#the required files are laid out as follows
[root@centos8 ~]#tree /data/ansible/
/data/ansible/
└── files
    ├── chpass.sh
    ├── my.cnf
    └── mysql-8.0.21-linux-glibc2.12-x86_64.tar.xz

(3) Prepare the playbook

[root@ansible ~]#cat /data/ansible/install_mysql.yml
---
#install mysql8.0
- hosts: dbservers
  remote_user: root
  gather_facts: no

  tasks:
    - name: install packages
      yum: name=libaio,ncurses-compat-libs
    - name: create mysql group
      group: name=mysql gid=306
    - name: create mysql user
      user: name=mysql uid=306 group=mysql shell=/sbin/nologin system=yes create_home=no home=/data/mysql
    - name: config my.cnf
      copy: src=/data/ansible/files/my.cnf dest=/etc/my.cnf
    - name: unpack the mysql tarball onto the remote host
      unarchive: src=/data/ansible/files/mysql-8.0.21-linux-glibc2.12-x86_64.tar.xz dest=/usr/local/ owner=root group=root
    - name: create linkfile /usr/local/mysql
      file: src=/usr/local/mysql-8.0.21-linux-glibc2.12-x86_64 dest=/usr/local/mysql state=link
    - name: data dir
      shell: chdir=/usr/local/mysql/ ./bin/mysqld --initialize --datadir=/data/mysql --user=mysql
      tags: data
    - name: service script
      shell: /bin/cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
    - name: enable service
      shell: /etc/init.d/mysqld start;chkconfig --add mysqld;chkconfig mysqld on
      tags: service
    - name: PATH variable
      copy: content='PATH=/usr/local/mysql/bin:$PATH' dest=/etc/profile.d/mysql.sh
    - name: usefullpath
      shell: source /etc/profile.d/mysql.sh
    - name: change password
      script: /data/ansible/files/chpass.sh
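
The playbook targets a host group named dbservers, which must exist in the inventory; a minimal sketch (the IP below is a placeholder for the database hosts):

[root@ansible ~]#vim /etc/ansible/hosts
[dbservers]
10.0.0.100
#verify connectivity before running the playbook
[root@ansible ~]#ansible dbservers -m ping -k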

(4) Install MySQL with the playbook

#-k makes ansible prompt for the remote root SSH password (123)
[root@ansible ~]#ansible-playbook -k /data/ansible/install_mysql.yml

When the automated deployment finishes, the initial MySQL root password is set to 123456.
