Introduction
In this how-to article, we will walk you through installing and maintaining a Single-Site Multi-Master MySQL Cluster solution with Load-Balancing services. We will be using Percona XtraDB Cluster as our base MySQL database platform, as it includes all the necessary components and has built-in support for an active/active, highly available, and highly scalable open-source MySQL® clustering solution. It integrates Percona Server and Percona XtraBackup with the Galera library of MySQL high-availability solutions in a single package, which enables you to create a cost-effective MySQL high-availability cluster.
See more at: https://www.percona.com/software/percona-xtradb-cluster
Percona provides Yum and APT repositories for Red Hat, CentOS, Ubuntu, and Debian. They include Percona Server, Percona XtraDB, Percona XtraBackup, and Percona Toolkit. Subscribing to the repositories makes it easy to install and update your software through your operating system’s package manager.
Repositories URL: Percona Public Repositories
Outline of Packages and Configuration Steps
- Installation of a two (2) Node Percona XtraDB Cluster on CentOS 6.x
- Basic Server Preparation (Node #1 and #2)
- Node #1 : Configuration
- Node #2 : Configuration
- Installation of a single Keepalived Load-Balancing Server on CentOS 6.x
- Keepalived – Server Preparation
- Keepalived – Compiling from Source
Prerequisites
Before you begin this how-to article, you will need to have the following prerequisites set up first.
- (3) CentOS 6.x Servers, all on the same local area network (LAN).
- Secured CentOS 6.x Firewall Configuration (Best Practice Recommendation – Not Required)
Setup Diagram
Below is a diagram of the expected result once you have completed this article. Please keep in mind that all IP addresses used within this article and the diagram are from RFC 5737 – IPv4 Address Blocks Reserved for Documentation Purposes Only.
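In case the diagram image does not render, here is a plain-text sketch of the target topology. The VIP and the two node addresses are the ones used throughout this article; the load balancer's own interface address is never specified in the article, so it is left unlabeled.

                         MySQL Clients
                               |
                               v
                 VIP 192.0.2.10 : 3306 (MySQL)
                               |
          Keepalived / LVS-DR Load Balancer (Server #3)
                 /                           \
                v                             v
  mysql-node-01 (192.0.2.11)     mysql-node-02 (192.0.2.12)
       \__ Percona XtraDB Cluster (ports 4444, 4567-4568) __/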
1.1 – Server Preparation (MySQL Node #1 and Node #2)
The base installation of CentOS 6.x comes with a limited set of packages, so the first thing we want to do is install the tools we will be using to complete all the required tasks.
First, make sure that you are logged into your server as the root user account.
sudo su -
Now, let's install the basic tools so that we can download files and edit configuration files easily.
yum install wget nano;
- wget: package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols.
- nano: command line text editor with easy to use controls.
Next, we are going to install the required YUM repositories so that we can easily install and keep the Percona software packages updated.
yum install -y http://mirror.atlantic.net/epel/epel-release-latest-6.noarch.rpm;
yum install -y https://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm;
yum clean all;
yum update;
- EPEL: Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux.
- Percona: Percona XtraDB Software Repository.
Now, we are going to disable SELinux (Security-Enhanced Linux), as it conflicts with the Percona XtraDB Cluster services. To disable SELinux, you will need to edit the /etc/selinux/config configuration file.
nano /etc/selinux/config
Next, change the configuration variable SELINUX=enforcing to SELINUX=disabled. After making this change, your configuration file should appear as below and you must reboot your server for this change to take effect.
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
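After the reboot, you can confirm that the change actually took effect; getenforce should print "Disabled".

getenforce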
If you are running IPTABLES firewall services (enabled by default) on your servers, you will need to open the required service ports on each node to allow connectivity and synchronization between the clustered nodes, as well as MySQL client connections.
iptables -I INPUT 5 -m state --state NEW -m tcp -p tcp -m multiport --dports 4567:4568 -j ACCEPT;
iptables -I INPUT 5 -m state --state NEW -m tcp -p tcp --dport 4444 -j ACCEPT;
iptables -I INPUT 5 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT;
service iptables save;
service iptables restart;
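For reference, ports 4567-4568 carry Galera group communication and incremental state transfer, port 4444 is used for SST, and port 3306 is the standard MySQL client port. If you want to confirm the rules landed where you expect, listing the INPUT chain with line numbers is a quick sanity check:

iptables -L INPUT -n --line-numbers | grep -E '3306|4444|4567'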
We are now ready to install the Percona XtraDB Clustering packages on each MySQL server node.
yum install -y Percona-XtraDB-Cluster-56;
1.2 – Node #1 : Configuration
You will need a few things before proceeding: a secure password for the MySQL root user account, one for the SST synchronization account, and one for the Keepalived user account. To generate a secure password, you can execute the following command, which produces a 12-character random password. Make sure to generate three different passwords and document them for future use during this walkthrough.
cat /dev/urandom | tr -dc '12345!#$%qwertQWERTasdfgASDFGzxcvbZXCVB' | fold -w 12 | head -n 1;
Now that we have all the software packages installed and passwords generated, we will need to create the /etc/my.cnf configuration file with the clustering configuration settings.
nano /etc/my.cnf
Next, copy and paste these base configuration settings into your configuration file. (You will need to update specific values based on your server and network settings).
[mysqld]

# GENERAL #
datadir = /var/lib/mysql
user = mysql

# LOGGING #
# log-error = /var/log/mysql/error.log
# log-queries-not-using-indexes = 1
# slow-query-log = 1
# slow-query-log-file = /var/log/mysql/mysql-slow.log

# DATA STORAGE #
default_storage_engine = InnoDB
binlog_format = ROW

# CLUSTER CONFIGURATION #
wsrep_cluster_name = mysql_clstr01
wsrep_cluster_address = gcomm://192.0.2.11,192.0.2.12
wsrep_node_address = 192.0.2.11
wsrep_node_name = mysql-node-01
wsrep_provider = /usr/lib64/libgalera_smm.so

# STATE SNAPSHOT TRANSFER #
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = "sst-sync-user:<%sst generated password%>"

# MyISAM Database Replication #
# MyISAM storage engine has only experimental support at this time. #
# wsrep_replicate_myisam = 1

# Settings / Tuning Options #
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2
# innodb_flush_log_at_trx_commit = 2
# innodb_flush_method = O_DIRECT
# innodb_file_per_table = 1
# innodb_buffer_pool_size = 1386971136
# innodb_buffer_pool_size = 6G
# innodb_log_file_size = 256M
# key_buffer = 208045670
# max_allowed_packet = 67108864
# thread_stack = 192K
# thread_cache_size = 10
# query_cache_limit = 1M
# query_cache_size = 16M
You are now ready to “bootstrap” the primary node in the cluster.
Bootstrapping refers to initializing and starting the first cluster node. By bootstrapping, you define which node holds the initial data that all the other nodes should synchronize to (via SST). In the event of a cluster-wide crash (or shutdown), bootstrapping the primary node works the same way: by picking the initial node, you are essentially deciding which cluster node contains the copy of the database you want to go forward with.
Cluster membership is not defined by this setting; it is defined by the nodes that join the cluster with the proper wsrep_cluster_name variable setting.
/etc/init.d/mysql bootstrap-pxc
You will also want to make sure that you set the MySQL services to auto start upon a server reboot.
chkconfig mysql on
Your first node's database service is now operational.
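Before moving on, it is worth confirming that the node actually bootstrapped into a one-node primary cluster. On a fresh installation the MySQL root account has no password yet, so the following works as-is; expect wsrep_cluster_size to be 1 and wsrep_cluster_status to be Primary:

mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster%';"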
You are now ready to secure your MySQL database installation. By default, an initial MySQL database installation is not secured with a password. To secure your MySQL installation, run the following command and follow the inline prompts. When asked to enter a new root password, use the generated password from the step above.
/usr/bin/mysql_secure_installation
Now that you have secured your database server, we need to set up the two user accounts this walkthrough relies on: the first will be used by the SST synchronization services, and the second by the Keepalived service check script.
First, you will need to log into the MySQL CLI console.
mysql -u root -p
Next, you will need to create the user accounts and grant security permissions to the accounts.
CREATE USER 'sst-sync-user'@'localhost' IDENTIFIED BY '<%sst generated password%>';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst-sync-user'@'localhost';
CREATE USER 'keepalived'@'%' IDENTIFIED BY '<%keepalived generated password%>';
FLUSH PRIVILEGES;
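If you want to double-check the accounts before leaving the console, the grants can be listed directly (a sanity check, not a required step):

SHOW GRANTS FOR 'sst-sync-user'@'localhost';
SHOW GRANTS FOR 'keepalived'@'%';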
We are now ready to set up this node to respond directly to MySQL connections received from the Keepalived load-balancing server. In a direct-routing LVS setup, the LVS router receives the incoming MySQL connection requests and forwards them to the proper real server for processing. The real server then communicates the response data directly back to the requesting client. In order for direct routing to work, each real server must have the VIP address configured on the host, but must not respond to ARP requests for that address on the local network.
First, we will set up the server to not respond to any ARP requests for the VIP address.
echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf; echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf; sysctl -p;
Next, we are going to create a VIP address on the loopback adapter. This is done by creating the lo:1 interface configuration file and setting it to auto-start on boot.
nano /etc/sysconfig/network-scripts/ifcfg-lo:1;
Now, paste the following configuration into the interface configuration and save your changes. Make sure to update the VIP address to your network settings.
DEVICE=lo:1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.255
Once you have saved the configuration file, you can bring up the VIP address.
ifup lo:1
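You can verify that the loopback alias is up and carrying the VIP; expect an additional inet entry for 192.0.2.10/32 labeled lo:1 in the output.

ip addr show lo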
Your first MySQL Cluster node is now complete.
1.3 – Node #2 : Configuration
You should already have all the software packages installed and passwords generated, so we are now ready to create the /etc/my.cnf configuration file with the clustering configuration settings.
nano /etc/my.cnf
Next, copy and paste these base configuration settings into your configuration file. (You will need to update specific values based on your server and network settings).
[mysqld]

# GENERAL #
datadir = /var/lib/mysql
user = mysql

# LOGGING #
# log-error = /var/log/mysql/error.log
# log-queries-not-using-indexes = 1
# slow-query-log = 1
# slow-query-log-file = /var/log/mysql/mysql-slow.log

# DATA STORAGE #
default_storage_engine = InnoDB
binlog_format = ROW

# CLUSTER CONFIGURATION #
wsrep_cluster_name = mysql_clstr01
wsrep_cluster_address = gcomm://192.0.2.11,192.0.2.12
wsrep_node_address = 192.0.2.12
wsrep_node_name = mysql-node-02
wsrep_provider = /usr/lib64/libgalera_smm.so

# STATE SNAPSHOT TRANSFER #
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = "sst-sync-user:<%sst generated password%>"

# MyISAM Database Replication #
# MyISAM storage engine has only experimental support at this time. #
# wsrep_replicate_myisam = 1

# Settings / Tuning Options #
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2
You are now ready to start the second node in the cluster. You will also want to make sure that you set the MySQL services to auto-start upon a server reboot.
service mysql start;
chkconfig mysql on;
After a few minutes, your second node services will be operational. During the initial startup, the second server starts the synchronization process with the primary node.
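If you would like to watch the join progress, the wsrep_local_state_comment status variable moves through states such as Joining and Joined before settling on Synced:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"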
We are now going to set up this node to respond directly to MySQL connections received from the Keepalived load-balancing server. First, we will configure the server to not respond to any ARP requests for the VIP address.
echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf; echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf; sysctl -p;
Next, we are going to create a VIP address on the loopback adapter. This is done by creating the lo:1 interface configuration file and setting it to auto-start on boot.
nano /etc/sysconfig/network-scripts/ifcfg-lo:1;
Now, paste the following configuration into the interface configuration and save your changes. Make sure to update the VIP address to your network settings.
DEVICE=lo:1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.255
Once you have saved the configuration file, you can bring up the VIP address.
ifup lo:1
Your second MySQL Cluster node is now complete.
We are now going to validate that both servers are operational and synchronized. From each node, execute the following command:
mysql -h localhost -u root -p -e "SHOW STATUS;" | grep "wsrep_ready"
You should receive the response “ON”. If you receive any other response, then the node is not a member of the cluster.
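It is also worth confirming that both nodes see each other; the cluster size should report 2 from either node:

mysql -h localhost -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"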
2 – Keepalived Load-Balancing Server Setup (Server #3)
Keepalived is routing software written in C. The main goal of this open-source project is to provide simple and robust facilities for load balancing and high availability to Linux systems and Linux-based infrastructures. The load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, which provides Layer-4 load balancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage load-balanced server pools according to their health. High availability, on the other hand, is achieved by the VRRP protocol, a fundamental building block for router failover. In addition, Keepalived implements a set of hooks into the VRRP finite state machine, providing low-level and high-speed protocol interactions. The Keepalived frameworks can be used independently or together to provide resilient infrastructures.
Keepalived is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
The main functionality provided:
- Failover – The native VRRP protocol purpose, based on a roaming set of VRRP VIPs (Virtual IPs)
- VRRP Instance Synchronization – Can specify state monitoring between two VRRP instances. It guarantees that two VRRP instances remain in the same state. The synchronized instances monitor each other.
- Nice Fallback
- Advert Packet Integrity – Using IPSEC-AH ICV
- System Call – During a VRRP state transition, an external script/program can be called to perform an action (a sketch follows this list).
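As an illustration of the last item, Keepalived's vrrp_instance blocks accept notify directives that run an external script when the instance changes state. The script paths below are hypothetical placeholders, not files created elsewhere in this article:

vrrp_instance VI_1 {
    ...
    # Run the named script when this instance transitions to MASTER, BACKUP, or FAULT.
    notify_master /etc/keepalived/check_scripts/on_master.sh
    notify_backup /etc/keepalived/check_scripts/on_backup.sh
    notify_fault  /etc/keepalived/check_scripts/on_fault.sh
}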
2.1 – Server Preparation
The base installation of CentOS 6.x comes with a limited set of packages, so the first thing we want to do is install the tools we will be using to complete all the required tasks.
First, make sure that you are logged into your server as the root user account.
sudo su -
Now, let's install the basic tools so that we can download files and edit configuration files easily.
yum install wget nano;
- wget: package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols.
- nano: command line text editor with easy to use controls.
If you are running IPTABLES firewall services (enabled by default) on your server, you will need to enable the required service ports to allow connectivity to the MySQL services.
iptables -I INPUT 5 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT;
iptables -I INPUT 5 -m state --state NEW -m udp -p udp --dport 161 -j ACCEPT;
iptables -I INPUT 5 -d 224.0.0.0/8 -j ACCEPT;
iptables -I INPUT 5 -p vrrp -j ACCEPT;
service iptables save;
service iptables restart;
2.2 – Keepalived : Compiling from Source
From a command prompt, you will need to install the necessary tools to build the Keepalived package from source.
yum install -y kernel-headers kernel-devel gcc make popt-devel openssl-devel ipvsadm net-snmp-devel git mysql;
- popt-devel : Used for command line parsing
- OpenSSL : This library is needed for MD5 and SSL Support
- ipvsadm : Used to maintain or inspect the virtual server table in the Linux kernel
- net-snmp : Provides SNMP monitoring
Once you have the dependencies out of the way, you will need to download the most recent version of the source code. Release tarballs are available on the Keepalived website, or you can use the Git repository to always download the latest build. We will be using the Git repository for this installation.
mkdir -p /tmp/keepalived;
git clone https://github.com/acassen/keepalived.git /tmp/keepalived;
cd /tmp/keepalived;
Now that you have the source code downloaded, you're ready to compile the Keepalived package for your Linux kernel.
./configure --enable-snmp;
make LDFLAGS="$(net-snmp-config --agent-libs) -lpopt -lssl -lcrypto";
make install;
After each command, you should double-check the output to make sure there were no errors. Once everything compiles correctly, you will be ready to create the necessary symlinks so that CentOS can find the installed files in their usual locations.
ln -s /usr/local/etc/sysconfig/keepalived /etc/sysconfig/;
ln -s /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/;
ln -s /usr/local/etc/keepalived/ /etc/keepalived;
Next, you need to update the /etc/init.d/keepalived startup script to call the correct path for the daemon.
nano /etc/init.d/keepalived
Now, replace the application call line daemon keepalived with the full path to the application. Save your changes and exit the file. Your startup script should now call the correct location, as shown below.
[root@keepalived ~]# cat /etc/init.d/keepalived | less
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
...
start() {
    echo -n $"Starting $prog: "
    daemon /usr/local/sbin/keepalived ${KEEPALIVED_OPTIONS}
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}
...
We now need to update the Keepalived options to enable SNMP AgentX support.
nano /etc/sysconfig/keepalived;
Update the KEEPALIVED_OPTIONS value to enable the following option flags. Once updated, save and close the configuration file.
KEEPALIVED_OPTIONS="-D -x"
Next, we are going to create a new /etc/keepalived/keepalived.conf configuration file for this specific walkthrough. You can delete the default configuration that was installed, or you can move it to a backup file for later reference. In this example, we will be moving it to a backup file.
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak;
nano /etc/keepalived/keepalived.conf;
Now copy and paste the configuration below into the keepalived.conf file. Be sure to update the passwords to the ones you generated in the steps above.
! Configuration File for keepalived

global_defs {
    router_id LVS_MYSQL_CLSTR1
}

### VRRP Virtual IP Configuration
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepalived
    }
    virtual_ipaddress {
        192.0.2.10
    }
}

### LVS Configuration
virtual_server 192.0.2.10 3306 {
    delay_loop 2
    ## LB Mode : Round-Robin
    lb_algo rr
    ## Direct Routing Response
    lb_kind DR
    protocol TCP
    # Real Server to add when all real_servers are down
    # sorry_server <IPADDR> <PORT>
    real_server 192.0.2.11 3306 {
        weight 10
        MISC_CHECK {
            misc_path "/etc/keepalived/check_scripts/check_db 192.0.2.11 keepalived <%keepalived password%>"
            misc_timeout 5
            misc_dynamic
        }
    }
    real_server 192.0.2.12 3306 {
        weight 10
        MISC_CHECK {
            misc_path "/etc/keepalived/check_scripts/check_db 192.0.2.12 keepalived <%keepalived password%>"
            misc_timeout 5
            misc_dynamic
        }
    }
}
We are now going to create the MISC_CHECK script that is needed as it does not exist by default. First, we will create the directory, then create the script file and set the file mode to executable.
mkdir -p /etc/keepalived/check_scripts;
touch /etc/keepalived/check_scripts/check_db;
chmod +x /etc/keepalived/check_scripts/check_db;
nano /etc/keepalived/check_scripts/check_db;
Now, copy and paste the bash code below into the check_db file and save it.
#!/bin/bash

mysql_host="${1}";
mysql_user="${2}";
mysql_pass="${3}";

node_response=$(mysql -h ${mysql_host} -u ${mysql_user} -p${mysql_pass} -e "SHOW GLOBAL VARIABLES;" | grep "wsrep_node_address" | awk '{ print $2 }');

if [ "${node_response}" == "${mysql_host}" ]
then
    # echo "Hostname matched";
    exit 0;
else
    # echo "Hostname not matched";
    exit 1;
fi
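Before relying on Keepalived to run it, you can exercise the check script by hand from the load balancer; an exit status of 0 means the node answered with its own address and would be kept in the pool.

/etc/keepalived/check_scripts/check_db 192.0.2.11 keepalived '<%keepalived password%>'; echo $?;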
You're now ready to start the Keepalived services, as well as set them to auto-start upon a system reboot.
chkconfig keepalived on;
/etc/init.d/keepalived start;
Your Keepalived server is now operational. For in-depth instructions on how to customize your Keepalived configuration, please review the user guide.
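Since ipvsadm was installed earlier, you can also inspect the LVS virtual server table to confirm that both real servers passed their health checks and were added to the pool; expect a TCP 192.0.2.10:3306 entry listing 192.0.2.11 and 192.0.2.12 in Route mode.

ipvsadm -L -n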
You should now have a fully functional Multi-Master MySQL Cluster, with a single VIP for connectivity. You can point client connections to your VIP address (192.0.2.10) and the connection will be passed to each real server in your cluster in round-robin mode.
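As a quick end-to-end test from any client machine on the LAN, connecting through the VIP a few times should alternate between the two node names (SHOW VARIABLES requires no special privileges, so the keepalived account works here):

mysql -h 192.0.2.10 -u keepalived -p'<%keepalived password%>' -e "SHOW VARIABLES LIKE 'wsrep_node_name';"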
Because we used Keepalived with a VRRP VIP configuration, you can easily add a second Keepalived server to the network to provide full redundancy for your load-balancer setup and MySQL Cluster.
—
Recovering a Percona XtraDB Cluster
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/