MariaDB Galera Cluster installation:
Operating system: CentOS 7.4
Cluster size: 3 nodes
Host information: 192.168.153.142 node1 (selinux=disabled, firewalld stopped)
192.168.153.143 node2 (selinux=disabled, firewalld stopped)
192.168.153.144 node3 (selinux=disabled, firewalld stopped)
Build steps
1. Host name resolution: add the following entries on all three nodes
vim /etc/hosts
192.168.153.142 node1
192.168.153.143 node2
192.168.153.144 node3
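To confirm name resolution works on each node, a minimal check (host names as defined above):
getent hosts node1 node2 node3    # each name should resolve to the IP from /etc/hosts
ping -c 1 node2                   # basic reachability test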
2. Install the software packages
Method 1: yum installation (yum install -y MariaDB-server MariaDB-client galera)
Configure the yum repositories for MariaDB and Galera.
The base yum repository can be provided by mounting the CentOS ISO.
Set up the MariaDB yum repository and install (required on all nodes).
Create the repository file:
vi /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.3.5/centos74-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
enabled=0
When installing the galera package, its dependency boost-program-options.x86_64 must be resolved (it can be installed directly from the base yum repository).
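Note that the repository above is defined with enabled=0, so it has to be enabled explicitly at install time. A minimal sketch of the yum installation (assuming the CentOS base repository is also available to pull in boost-program-options):
yum --enablerepo=mariadb install -y MariaDB-server MariaDB-client galera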
Method 2: rpm package installation (required on all three nodes)
Download the following rpm packages from the web:
galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
MariaDB-10.3.5-centos74-x86_64-client.rpm
MariaDB-10.3.5-centos74-x86_64-compat.rpm
MariaDB-10.3.5-centos74-x86_64-common.rpm
MariaDB-10.3.5-centos74-x86_64-server.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-compat.rpm --nodeps
rpm -ivh MariaDB-10.3.5-centos74-x86_64-common.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-client.rpm
yum install -y boost-program-options.x86_64 (resolves the galera dependency)
rpm -ivh galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-server.rpm
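To confirm the packages installed correctly, a quick check (a simple verification, not part of the original steps):
rpm -qa | grep -iE 'mariadb|galera'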
3. MariaDB initialization (must be executed on all three nodes)
After the installation is complete, MariaDB needs to be initialized (set the password):
systemctl start mariadb
mysql_secure_installation (set the mysql password as prompted)
systemctl stop mariadb
4. Configure Galera
Edit the master node configuration file server.cnf:
vim /etc/my.cnf.d/server.cnf
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.153.142,192.168.153.143,192.168.153.144"
wsrep_node_name = node1
wsrep_node_address = 192.168.153.142
binlog_format = Row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_slave_threads = 1
innodb_flush_log_at_trx_commit = 0
innodb_buffer_pool_size = 120M
wsrep_sst_method = rsync
wsrep_causal_reads = ON
Copy this file to node2 and node3, and be sure to change wsrep_node_name and wsrep_node_address to the corresponding node's hostname and IP, as shown below.
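For example, on node2 the node-specific lines would read (node3 is analogous, with its own name and address):
wsrep_node_name = node2
wsrep_node_address = 192.168.153.143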
5. Start the cluster service
Bootstrap the MariaDB Galera Cluster on the first node:
[root@node1 ~]# /bin/galera_new_cluster
The remaining two nodes are started with:
[root@node2 ~]# systemctl start mariadb
[root@node3 ~]# systemctl start mariadb
Check the cluster status (the cluster service uses ports 4567 and 3306):
[root@node1 ~]# netstat -tulpn | grep -e 4567 -e 3306
tcp        0      0 0.0.0.0:4567        0.0.0.0:*           LISTEN      3557/mysqld
tcp6       0      0 :::3306             :::*                LISTEN      3557/mysqld
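If net-tools is not installed, ss gives the same information (an equivalent check, not part of the original steps):
ss -tlnp | grep -E '4567|3306'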
6. Verify the cluster status:
Execute on node1:
[root@node1 ~]# mysql -uroot -p    ## connect to the database
Check whether the Galera plugin is enabled:
MariaDB [(none)]> show status like "wsrep_ready";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.004 sec)
Check the number of machines currently in the cluster:
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)
Check the full cluster status:
MariaDB [(none)]> show status like "wsrep%";
+------------------------------+----------------------------------------------------------------+
| Variable_name                | Value                                                          |
+------------------------------+----------------------------------------------------------------+
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_causal_reads | 14 |
| wsrep_cert_deps_distance | 1.200000 |
| wsrep_cert_index_size | 3 |
| wsrep_cert_interval | 0.000000 |
| wsrep_cluster_conf_id | 22 |
| wsrep_cluster_size | 3 | ## number of cluster members
| wsrep_cluster_state_uuid | b8ecf355-233a-11e8-825e-bb38179b0eb4 | ## cluster UUID (unique cluster identifier)
| wsrep_cluster_status | Primary | ## node belongs to the primary component
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_connected | ON | ## node is connected to the cluster
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_sent | 0 |
| wsrep_gcomm_uuid | 0eba3aff-2341-11e8-b45a-f277db2349d5 |
| wsrep_incoming_addresses | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 | ## addresses of the connected nodes
| wsrep_last_committed | 9 | ## last committed transaction sequence number
| wsrep_local_bf_aborts | 0 | ## local transactions aborted by replicated transactions
| wsrep_local_cached_downto | 5 |
| wsrep_local_cert_failures | 0 | ## local certification failures
| wsrep_local_commits | 4 | ## locally committed transactions
| wsrep_local_index | 0 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.057143 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 | ## length of the local send queue
| wsrep_local_send_queue_avg | 0.000000 | ## average send queue length over the interval
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_local_state_uuid | b8ecf355-233a-11e8-825e-bb38179b0eb4 | ## cluster state UUID
| wsrep_protocol_version | 8 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 25.3.23(r3789) |
| wsrep_ready | ON | ## Galera plugin is ready
| wsrep_received | 35 | ## number of write-sets received
| wsrep_received_bytes | 5050 |
| wsrep_repl_data_bytes | 1022 |
| wsrep_repl_keys | 14 |
| wsrep_repl_keys_bytes | 232 |
| wsrep_repl_other_bytes | 0 |
| wsrep_replicated | 5 | ## number of write-sets replicated (sent)
| wsrep_replicated_bytes | 1600 | ## bytes of replicated data sent
| wsrep_thread_count | 2 |
+------------------------------+----------------------------------------------------------------+
58 rows in set (0.003 sec)
View the connected hosts:
MariaDB [(none)]> show status like "wsrep_incoming_addresses";
+--------------------------+----------------------------------------------------------------+
| Variable_name            | Value                                                          |
+--------------------------+----------------------------------------------------------------+
| wsrep_incoming_addresses | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 |
+--------------------------+----------------------------------------------------------------+
1 row in set (0.002 sec)
7. Test whether the cluster data is synchronized
Create a database on node1:
MariaDB [(none)]> create database lizk;
Query OK, 1 row affected (0.010 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| china              |
| hello              |
| hi                 |
| information_schema |
| lizk               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
8 rows in set (0.001 sec)
You can see that the lizk database is synchronized to the other two nodes.
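A quick way to confirm this from the shell on node2 and node3 (a minimal check, not part of the original steps):
mysql -uroot -p -e "show databases;" | grep lizk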
8. Simulating a split-brain scenario and recovering from it
The following simulates network jitter with packet loss, in which two nodes become disconnected from the cluster and a split-brain occurs. Execute the following on the two nodes 192.168.153.143 and 192.168.153.144:
iptables -A INPUT -p tcp --sport 4567 -j DROP
iptables -A INPUT -p tcp --dport 4567 -j DROP
These two rules block all wsrep replication communication on port 4567.
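To confirm the rules took effect on both nodes (a simple check, not part of the original steps):
iptables -L INPUT -n --line-numbers | grep 4567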
Then check the status on node 192.168.153.142:
MariaDB [(none)]> show status like "ws%";
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_causal_reads | 16 |
| wsrep_cert_deps_distance | 1.125000 |
| wsrep_cert_index_size | 3 |
| wsrep_cert_interval | 0.000000 |
| wsrep_cluster_conf_id | 18446744073709551615 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | b8ecf355-233a-11e8-825e-bb38179b0eb4 |
| wsrep_cluster_status | non-Primary |
A split-brain situation has now occurred, and the cluster cannot execute any commands.
To resolve this, execute:
set global wsrep_provider_options="pc.bootstrap=true";
This command forcibly recovers a node from the split-brain state by bootstrapping a new primary component.
Verify:
MariaDB [(none)]> set global wsrep_provider_options="pc.bootstrap=true";
Query OK, 0 rows affected (0.015 sec)
MariaDB [(none)]> select @@wsrep_node_name;
+-------------------+
| @@wsrep_node_name |
+-------------------+
| node1             |
+-------------------+
1 row in set (0.478 sec)
Finally, recover nodes 192.168.153.143 and 192.168.153.144 by clearing the iptables rules (flushing the whole table is acceptable in my test environment; in production, only the rules added above should be deleted):
[root@node3 mysql]# iptables -F
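For the production case, the specific rules added earlier can be deleted instead of flushing everything (a sketch mirroring the -A rules above):
iptables -D INPUT -p tcp --sport 4567 -j DROP
iptables -D INPUT -p tcp --dport 4567 -j DROP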
After the restoration, verify:
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)
9. Simulate a fault: stop two nodes of the cluster and check whether the data synchronizes after the service is restarted.
Stop mariadb on 192.168.153.143 and 192.168.153.144:
[root@node2 mysql]# systemctl stop mariadb
[root@node3 mysql]# systemctl stop mariadb
Insert data on node 192.168.153.142:
MariaDB [test]> select * from test1;
+------+
| id   |
+------+
|    2 |
|    2 |
|    1 |
|    3 |
+------+
4 rows in set (0.007 sec)
Now restart the other two nodes in the cluster and confirm that their data is consistent with the master node, as shown below.
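A minimal way to perform that check on node2 and node3 (assuming the test1 table lives in the test database, as shown above):
systemctl start mariadb
mysql -uroot -p -e "select * from test.test1;"
Both nodes should return the same four rows as node1.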
10. Exception handling: The server room suddenly loses power, all Galera hosts shut down abnormally, and the Galera cluster service cannot start properly after the machines are powered back on. How should this be handled?
Step 1: Start the mariadb service on the master host of the Galera cluster.
Step 2: Start the mariadb service on the member hosts of the Galera cluster.
Exception handling: What if the mysql service cannot be started on either the master host or the member hosts of the Galera cluster?
Solution 1:
Step 1: On the Galera master host, delete the /var/lib/mysql/grastate.dat state file, then start the service with /bin/galera_new_cluster. It should start normally; log in and check the wsrep status.
Step 2: On each Galera member host, delete the /var/lib/mysql/grastate.dat state file, then restart the service with systemctl restart mariadb. It should start normally; log in and check the wsrep status.
Solution 2:
Step 1: On the Galera master host, change the 0 in the /var/lib/mysql/grastate.dat state file to 1, then start the service with /bin/galera_new_cluster. It should start normally; log in and check the wsrep status.
Step 2: On each Galera member host, change the 0 in /var/lib/mysql/grastate.dat to 1, then restart the service with systemctl restart mariadb. It should start normally; log in and check the wsrep status.