{"id":2236,"date":"2013-07-12T15:09:34","date_gmt":"2013-07-12T07:09:34","guid":{"rendered":"http:\/\/rmohan.com\/?p=2236"},"modified":"2013-07-12T15:10:38","modified_gmt":"2013-07-12T07:10:38","slug":"drbd-mysql","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=2236","title":{"rendered":"DRBD MYSQL"},"content":{"rendered":"<p>DRBD is a block device designed for building high-availability clusters.<\/p>\n<p>This is done by mirroring a whole block device over a (preferably dedicated) network. DRBD accepts the data, writes it to the local disk and sends it to the other host, which writes it to its own disk. The other components needed are a cluster membership service, typically Heartbeat, and some kind of application that works on top of the block device. Each device (DRBD provides more than one of these devices) has a state, which can be &#8216;primary&#8217; or &#8216;secondary&#8217;. If the primary node fails, Heartbeat switches the secondary device into the primary state and starts the application there. If the failed node comes up again, it becomes a new secondary node and has to synchronize its content from the primary. This, of course, happens in the background without interruption of service.<\/p>\n<p>The Distributed Replicated Block Device (DRBD) is a Linux kernel module that constitutes a distributed storage system. You can use DRBD to share block devices between Linux servers and, in turn, share file systems and data.<\/p>\n<div>DRBD implements a block device which can be used for storage and which is replicated from a primary server to one or more secondary servers. The distributed block device is handled by the DRBD service. 
Each DRBD service writes the information from the DRBD block device to a local physical block device (hard disk).<\/div>\n<p>On the primary, writes go both to the underlying physical block device and to the secondary DRBD services. On the secondary, the writes are received through DRBD and written to the local physical block device. The information is shared between the primary and secondary DRBD servers synchronously and at a block level, which means that DRBD can be used in high-availability solutions where you need failover support.<\/p>\n<div>When used with MySQL, DRBD can be used to ensure availability in the event of a failure. MySQL is configured to store information on the DRBD block device, with one server acting as the primary and a second machine available to operate as an immediate replacement in the event of a failure.<\/div>\n<p>For automatic failover support, you can combine DRBD with the Linux Heartbeat project, which manages the interfaces on the two servers and automatically configures the secondary (passive) server to replace the primary (active) server in the event of a failure. You can also combine DRBD with MySQL Replication to provide both failover and scalability within your MySQL environment.<\/p>\n<div>NOTE: Because DRBD is a Linux kernel module, it is currently not supported on platforms other than Linux.<\/div>\n<p>Configuring the DRBD Environment<\/p>\n<div>To set up DRBD, MySQL and Heartbeat, you follow a number of steps that affect the operating system, DRBD and your MySQL installation.<\/div>\n<p>@ DRBD works through two (or more) servers, each called a node.<\/p>\n<p>@ Ensure that your DRBD nodes are as identically configured as possible, so that the secondary machine can act as a direct replacement for the primary machine in the event of system failure.<\/p>\n<p>@ The node that contains the primary data, has read\/write access to the data, and in 
an HA environment is the currently active node, is called the primary.<\/p>\n<p>@ The server to which the data is replicated is called the secondary.<\/p>\n<p>@ A collection of nodes that are sharing information is referred to as a DRBD cluster.<\/p>\n<p>@ For DRBD to operate, you must have a <b>block device<\/b> on which the information can be stored on each DRBD node. The lower-level block device can be a physical disk partition, a partition from a volume group or RAID device, or any other block device.<\/p>\n<p>@ For the distribution of data to work, DRBD is used to create a logical block device that uses the lower-level block device for the actual storage of information. To store information on the distributed device, a file system is created on the DRBD logical block device.<\/p>\n<div>@ When used with MySQL, once the file system has been created, you move the MySQL data directory (including InnoDB data files and binary logs) to the new file system.<\/div>\n<p>@ When you set up the secondary DRBD server, you set up the physical block device and the DRBD logical block device that stores the data. The block device data is then copied from the primary to the secondary server.<\/p>\n<p>Installation and configuration sequence<\/p>\n<div>@ First, set up your operating system and environment. 
This includes setting the correct host name, updating the system, preparing the packages and software required by DRBD, and configuring a physical block device to be used with the DRBD block device.<br \/>\n@ Installing DRBD requires installing or compiling the DRBD source code and then configuring the DRBD service to set up the block devices to be shared.<\/p>\n<p>@ After configuring DRBD, alter the configuration and storage location of the MySQL data.<\/p>\n<p>@ Optionally, configure high availability using the Linux Heartbeat service.<\/p>\n<\/div>\n<div><\/div>\n<div>Setting Up Your Operating System for DRBD<\/div>\n<div><\/div>\n<div>To set up your Linux environment for using DRBD, follow these system configuration steps:<\/div>\n<div><\/div>\n<div>@ Make sure that the primary and secondary DRBD servers have the correct host name, and that the host names are unique. You can verify this with the uname -n command:<\/div>\n<div><\/div>\n<div>#\u00a0hostname drbd1\u00a0\u00a0 &#8212;&#8211;&gt; set the hostname for the first node<br \/>\n#\u00a0hostname drbd2\u00a0\u00a0 &#8212;&#8211;&gt; set the hostname for the second node<\/div>\n<div><\/div>\n<div>@ Each DRBD node must have a unique IP address. Make sure that the IP address information is set correctly within the network configuration and that the host name and IP address have been set correctly within the \/etc\/hosts file on each node.<\/div>\n<div><\/div>\n<div>#\u00a0vim \/etc\/hosts<br \/>\n192.168.1.231 drbd1 drbd1<br \/>\n192.168.1.237\u00a0 drbd2 drbd2<\/p>\n<\/div>\n<div><\/div>\n<div>@ Because the block device data is exchanged over the network, everything that is written to the local disk on the DRBD primary is also written to the network for distribution to the DRBD secondary.<br \/>\n@ You devote a spare disk, or a partition on an existing disk, as the physical storage location for the DRBD data that is replicated. 
If the disk is unpartitioned, partition the disk using fdisk, cfdisk or another partitioning tool. Do not create a file system on the new partition (that is, do not format the newly attached device or the new partition).<\/p>\n<\/div>\n<div><\/div>\n<div>#\u00a0fdisk \/dev\/sdb\u00a0 &#8212;&#8211;&gt; on the primary node, create a partition first<br \/>\nn \/ p(1)<br \/>\nw<br \/>\n#\u00a0partprobe<br \/>\n#\u00a0fdisk -l<br \/>\n\/dev\/sdb1<\/div>\n<div><\/div>\n<div>#\u00a0fdisk \/dev\/hdb\u00a0 &#8212;&#8212;&#8212;-&gt; create a partition on the secondary node as well<br \/>\nn \/ p(1)<br \/>\nw<br \/>\n#\u00a0partprobe<br \/>\n#\u00a0fdisk -l<br \/>\n\/dev\/hdb1<\/div>\n<div><\/div>\n<div>Create a new partition; or, if you are using VMware or VirtualBox and do not have spare space for a new partition, add an extra virtual disk to provide more space. Use identical sizes for the partitions on each node, primary and secondary.<br \/>\n@ If possible, upgrade your system to the latest available Linux kernel for your distribution. Once the kernel has been installed, you must reboot to make the kernel active. 
To use DRBD, you must also install the relevant kernel development and header files that are required for building kernel modules.<\/p>\n<\/div>\n<div><\/div>\n<div>Before you compile or install DRBD, make sure the following tools and files are installed.<\/div>\n<div><\/div>\n<div>Update and install the latest kernel and kernel header files:<br \/>\n@ root-shell&gt; up2date kernel-smp-devel kernel-smp<\/p>\n<\/div>\n<div>@ root-shell&gt; up2date glib-devel openssl-devel libgcrypt-devel glib2-devel pkgconfig ncurses-devel rpm-build rpm-devel redhat-rpm-config gcc gcc-c++ bison flex gnutls-devel lm_sensors-devel net-snmp-devel python-devel bzip2-devel libselinux-devel perl-DBI<br \/>\n#\u00a0yum install drbd kmod-drbd<br \/>\nOR, if a dependency error occurs:<br \/>\n#\u00a0yum install drbd82 kmod-drbd82<\/p>\n<p>[\/etc\/drbd.conf] is the configuration file.<\/p>\n<p>To set up a DRBD primary node, configure the DRBD service, create the first DRBD block device, and then create a file system on the device so that you can store files and data.<\/p>\n<p>@ Set the synchronization rate between the two nodes. This is the rate at which devices are synchronized in the background after a disk failure, device replacement or during the initial setup. 
Keep this rate within the capacity of your network connection.<\/p>\n<p>@ To set the synchronization rate, edit the rate setting within the syncer block:<\/p>\n<\/div>\n<div><\/div>\n<div>Creating your primary node<\/div>\n<div><\/div>\n<div>#\u00a0vim \/etc\/drbd.conf<br \/>\nglobal { usage-count yes; }<\/p>\n<p>common {<br \/>\nsyncer {<br \/>\nrate 50M;<br \/>\nverify-alg sha1;<br \/>\n}<br \/>\nhandlers { outdate-peer &#8220;\/usr\/lib\/heartbeat\/drbd-peer-outdater&#8221;;}<br \/>\n}<\/p>\n<\/div>\n<div><\/div>\n<div>resource mysqlha {<br \/>\nprotocol C;\u00a0\u00a0\u00a0# Specifies the level of consistency to be used when information is written to the block device. With protocol C, data is considered written only when it has reached both the local disk and the remote node&#8217;s physical disk.<br \/>\ndisk {<br \/>\non-io-error detach;<br \/>\nfencing resource-only;<br \/>\n#disk-barrier no;<br \/>\n#disk-flushes no;<br \/>\n}<\/div>\n<div><\/div>\n<div>@ Set up some basic authentication. DRBD supports a simple password hash exchange mechanism. 
This helps to ensure that only those hosts with the same shared secret are able to join the DRBD node group.<\/div>\n<div><\/div>\n<div>net {<br \/>\ncram-hmac-alg sha1;<br \/>\nshared-secret &#8220;cEnToS&#8221;;<br \/>\nsndbuf-size 512k;<br \/>\nmax-buffers 8000;<br \/>\nmax-epoch-size 8000;<br \/>\nafter-sb-0pri discard-zero-changes;<br \/>\nafter-sb-1pri discard-secondary;<br \/>\nafter-sb-2pri disconnect;<br \/>\ndata-integrity-alg sha1;<br \/>\n}<\/div>\n<div><\/div>\n<div>@ Now you must configure the host information. You must have the node information for the primary and secondary nodes in the drbd.conf file on each host. Configure the following information for each node:<\/div>\n<div>@ device: The path of the logical block device that is created by DRBD.<\/div>\n<div>@ disk: The block device that stores the data.<\/div>\n<div>@ address: The IP address and port number of the host that holds this DRBD device.<\/div>\n<div>@ meta-disk: The location where the metadata about the DRBD device is stored. If you set this to internal, DRBD uses the physical block device to store the information, by recording the metadata within the last sections of the disk.<\/div>\n<div>The exact size depends on the size of the logical block device you have created, but it may require up to 128MB.<\/div>\n<div>@ The IP address of each &#8216;on&#8217; block must match the IP address of the corresponding host. 
Do not set this value to the IP address of the peer; each &#8216;on&#8217; block uses the address of the host it names.<\/div>\n<div><\/div>\n<div>on drbd1 {<br \/>\ndevice \/dev\/drbd0;<br \/>\n#disk \/dev\/sda3;<br \/>\ndisk \/dev\/sdb1;<br \/>\naddress 192.168.1.231:7789;<br \/>\nmeta-disk internal;<br \/>\n}<br \/>\non drbd2 {<br \/>\ndevice \/dev\/drbd0;<br \/>\n#disk \/dev\/sda3;<br \/>\ndisk \/dev\/hdb1;<br \/>\naddress 192.168.1.237:7789;<br \/>\nmeta-disk internal;<br \/>\n}<\/p>\n<\/div>\n<div><\/div>\n<div>Apply the same configuration on the second machine.<\/div>\n<div><\/div>\n<div>Setting Up a DRBD Secondary Node<\/div>\n<div><\/div>\n<div>The configuration process for setting up a secondary node is the same as for the primary node, except that you do not have to create the file system on the secondary node device, as this information is automatically transferred from the primary node.<\/div>\n<div><\/div>\n<div>@ To set up a secondary node:<\/div>\n<div>Copy the \/etc\/drbd.conf file from your primary node to your secondary node. 
It should already contain all the information and configuration that you need, since you had to specify the secondary node IP address and other information for the primary node configuration.<\/div>\n<div><\/div>\n<div>global { usage-count yes; }<br \/>\ncommon {<br \/>\nsyncer {<br \/>\nrate 50M;<br \/>\nverify-alg sha1;<br \/>\n}<\/p>\n<p>handlers { outdate-peer &#8220;\/usr\/lib\/heartbeat\/drbd-peer-outdater&#8221;;}<br \/>\n}<\/p>\n<p>resource mysqlha {<br \/>\nprotocol C;<br \/>\ndisk {<br \/>\non-io-error detach;<br \/>\nfencing resource-only;<br \/>\n#disk-barrier no;<br \/>\n#disk-flushes no;<br \/>\n}<\/p>\n<p>net {<br \/>\ncram-hmac-alg sha1;<br \/>\nshared-secret &#8220;cEnToS&#8221;;<br \/>\nsndbuf-size 512k;<br \/>\nmax-buffers 8000;<br \/>\nmax-epoch-size 8000;<br \/>\nafter-sb-0pri discard-zero-changes;<br \/>\nafter-sb-1pri discard-secondary;<br \/>\nafter-sb-2pri disconnect;<br \/>\ndata-integrity-alg sha1;<br \/>\n}<\/p>\n<p>on drbd1 {<br \/>\ndevice \/dev\/drbd0;<br \/>\n#disk \/dev\/sda3;<br \/>\ndisk \/dev\/sdb1;<br \/>\naddress 192.168.1.231:7789;<br \/>\nmeta-disk internal;<br \/>\n}<\/p>\n<p>on drbd2 {<br \/>\ndevice \/dev\/drbd0;<br \/>\n#disk \/dev\/sda3;<br \/>\ndisk \/dev\/hdb1;<br \/>\naddress 192.168.1.237:7789;<br \/>\nmeta-disk internal;<br \/>\n}<\/p>\n<\/div>\n<div><\/div>\n<div>@@ On both machines, before starting the primary and secondary nodes, create the metadata for the devices:<br \/>\n#\u00a0drbdadm create-md mysqlha<\/p>\n<p>@@ On the primary\/active node,<\/p>\n<\/div>\n<div><\/div>\n<div>#\u00a0\/etc\/init.d\/drbd start\u00a0 ## DRBD should now start and initialize, creating the DRBD devices that you have configured.<\/div>\n<div><\/div>\n<div>DRBD creates a standard block device &#8211; to make it usable, you must create a file system on the block device just as you would with any standard disk partition. 
Before you can create the file system, you must mark the new device as the primary device (that is, where the data is written and stored), and initialize the device. Because this is a destructive operation, you must specify the command-line option to overwrite the raw data.<\/div>\n<div><\/div>\n<div>#\u00a0drbdadm -- --overwrite-data-of-peer primary mysqlha<br \/>\n@\u00a0 On the secondary\/passive node,<\/p>\n<p>#\u00a0\/etc\/init.d\/drbd start<\/p>\n<\/div>\n<div>@\u00a0 On both machines,<br \/>\n#\u00a0\/etc\/init.d\/drbd status<\/p>\n<\/div>\n<div>#\u00a0cat \/proc\/drbd\u00a0\u00a0\u00a0\u00a0\u00a0 ##\u00a0 Monitoring a DRBD Device<br \/>\n\u2022\u00a0cs: connection state<br \/>\n\u2022\u00a0st: node state (local\/remote)<br \/>\n\u2022\u00a0ld: local data consistency<br \/>\n\u2022\u00a0ds: data consistency<br \/>\n\u2022\u00a0ns: network send<br \/>\n\u2022\u00a0nr: network receive<br \/>\n\u2022\u00a0dw: disk write<br \/>\n\u2022\u00a0dr: disk read<br \/>\n\u2022\u00a0pe: pending (waiting for ack)<br \/>\n\u2022\u00a0ua: unack&#8217;d (still need to send ack)<br \/>\n\u2022\u00a0al: access log write count<\/p>\n<p>#\u00a0watch -n 10 &#8216;cat \/proc\/drbd&#8217;<\/p>\n<p>@ On the primary\/active node,<\/p>\n<\/div>\n<div><\/div>\n<div>#\u00a0mkfs.ext3 \/dev\/drbd0<\/div>\n<div>#\u00a0mkdir \/drbd\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ##\u00a0 not needed here, because we will mount the device at \/usr\/local\/mysql\/ instead<\/div>\n<div>#\u00a0mount \/dev\/drbd0 \/drbd\u00a0\u00a0 ## not needed either.<br \/>\nYour primary node is now ready to use.<\/p>\n<p>@ On the secondary\/passive node,<\/p>\n<\/div>\n<div>#\u00a0mkdir \/drbd\u00a0\u00a0 ## not needed, it will replicate from 
primary<\/div>\n<div><\/div>\n<div>[[[[[[[[[[ for TESTING the replication of DRBD alone, follow the steps above. After the primary node is mounted at a mount point, create some files in it. Create the same mount point on both systems.<br \/>\n#\u00a0cd \/mountpoint<\/div>\n<div>#\u00a0dd if=\/dev\/zero of=check bs=1024 count=1000000<br \/>\nAfter that, on the primary:<br \/>\n#\u00a0umount \/drbd<\/p>\n<\/div>\n<div>#\u00a0drbdadm secondary mysqlha\u00a0 ## demote the primary node to secondary<br \/>\nAnd on the secondary:<br \/>\n#\u00a0drbdadm primary mysqlha\u00a0 ## promote the secondary node to primary<\/p>\n<\/div>\n<div>#\u00a0mount \/dev\/drbd0 \/drbd\/<\/div>\n<div>#\u00a0ls \/drbd\/\u00a0\u00a0 ## the data will have been replicated into it. ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]<\/div>\n<div><\/div>\n<div>MySQL for DRBD<\/div>\n<div><\/div>\n<div>#\u00a0[MySQL]\u00a0 ##\u00a0 install MySQL if it is not already present; here it is built from source<br \/>\n@ On the primary\/active node,<\/div>\n<div><\/div>\n<div>#\u00a0cd\u00a0 mysql-5.5.12\/<\/div>\n<div>#\u00a0cmake . -LH<\/div>\n<div>#\u00a0cmake .<\/div>\n<div>#\u00a0make<\/div>\n<div>#\u00a0make install<\/div>\n<div>#\u00a0cd \/usr\/local\/mysql\/<\/div>\n<div>#\u00a0chown mysql:mysql . 
-R<\/div>\n<div>#\u00a0scripts\/mysql_install_db --datadir=\/usr\/local\/mysql\/data\/ --user=mysql<\/div>\n<div><\/div>\n<div>#\u00a0scp \/etc\/my.cnf root@192.168.1.231:\/usr\/local\/mysql\/<\/div>\n<div>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 ## config file copied from another machine<\/div>\n<div>#\u00a0cd \/usr\/local\/mysql\/<\/div>\n<div><\/div>\n<div># vim my.cnf<br \/>\ndatadir=\/usr\/local\/mysql\/data\/<br \/>\nsocket=\/usr\/local\/mysql\/data\/mysql.sock<br \/>\nlog-error=\/var\/log\/mysqld.log<br \/>\npid-file=\/usr\/local\/mysql\/mysqld.pid<\/p>\n<\/div>\n<div>.\/bin\/mysqld_safe --defaults-file=\/usr\/local\/mysql\/my.cnf &amp;<\/div>\n<div>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0\u00a0##\u00a0 start the mysql server<\/div>\n<div><\/div>\n<div><b>OR<\/b><\/div>\n<div>#\u00a0nohup sh \/usr\/local\/mysql\/bin\/mysqld_safe --defaults-file=\/usr\/local\/mysql\/my.cnf &amp;<\/div>\n<div><\/div>\n<div>#\u00a0.\/bin\/mysqladmin -h localhost -uroot password &#8216;mysql&#8217;<\/div>\n<div><\/div>\n<div># vim \/etc\/profile<br \/>\nexport PATH=$PATH:\/usr\/local\/mysql\/bin<\/div>\n<div># . 
\/etc\/profile<\/div>\n<div># mysql -uroot -pmysql<\/div>\n<div># cd \/usr\/local\/mysql\/support-files\/<\/div>\n<div># cp mysql.server \/etc\/init.d\/mysqld<\/div>\n<div># \/etc\/init.d\/mysqld restart<br \/>\n# \/etc\/init.d\/mysqld stop<br \/>\n### \/etc\/init.d\/drbd stop &#8212;- do not stop drbd<\/p>\n<\/div>\n<div># mkdir \/tmp\/new<\/div>\n<div><\/div>\n<div># mv \/usr\/local\/mysql\/* \/tmp\/new\/\u00a0## Move the mysql data to a safe location before mounting the drbd partition at \/usr\/local\/mysql<\/div>\n<div># umount \/drbd\u00a0\u00a0## Unmount the already-mounted partition so the drbd partition can be mounted at \/usr\/local\/mysql, where the mysql data is stored.<\/div>\n<div># mount \/dev\/drbd0 \/usr\/local\/mysql\/\u00a0\u00a0## Mount drbd at the location where the mysql directories and installation files reside.<\/div>\n<div># cp -r \/tmp\/new\/* .\u00a0\u00a0## After mounting the drbd partition at \/usr\/local\/mysql\/, copy the mysql data back from the backup location. 
Now MySQL resides on the DRBD partition.<\/div>\n<div><\/div>\n<div>\u00a0[[[[[[[[[[[[[ for TESTING the MySQL replication in DRBD<\/div>\n<div><\/div>\n<div>On the primary node<\/div>\n<div>#\u00a0mysql -uroot -pmysql<\/div>\n<div>mysql&gt; create database DRBD;\u00a0 ## create a test database; the entire MySQL instance is replicated to the secondary.<\/div>\n<div>#\u00a0\/etc\/init.d\/mysqld stop\u00a0\u00a0\u00a0 ## stop MySQL on the primary so that the MySQL service and databases can be taken over by the secondary server.<\/div>\n<div><\/div>\n<div>#\u00a0umount \/usr\/local\/mysql\/\u00a0\u00a0 ## unmount the mount point on the primary<\/div>\n<div>#\u00a0ls \/usr\/local\/mysql\/<\/div>\n<div>#\u00a0drbdadm secondary mysqlha\u00a0 ## demote the primary to a secondary node<\/div>\n<div>#\u00a0\/etc\/init.d\/drbd status<\/div>\n<div><\/div>\n<div>On the secondary node<br \/>\n# drbdadm primary mysqlha<\/p>\n<\/div>\n<div># \/etc\/init.d\/drbd status<\/div>\n<div># mount \/dev\/drbd0 \/usr\/local\/mysql\/<\/div>\n<div># ls \/usr\/local\/mysql\/<\/div>\n<div># \/usr\/local\/mysql\/bin\/mysql -uroot -pmysql\u00a0\u00a0## we can see the database created on the primary replicated to the secondary.<\/div>\n<div># \/etc\/init.d\/mysqld start]]]]]]]]]]]]]]]]]]]]]]]<\/p>\n<\/div>\n<div><\/div>\n<div>\u00a0Configuring Heartbeat for DRBD (and the services attached to it) failover<\/div>\n<div><\/div>\n<div>1.\u00a0Assign hostname node01 to the primary node with IP address 172.16.4.80 on eth0<br \/>\n2. 
Assign hostname node02 to the secondary node with IP address 172.16.4.81<\/div>\n<div><\/div>\n<div>Note: on node01, uname -n must return node01<br \/>\non node02, uname -n must return node02<\/p>\n<\/div>\n<div><\/div>\n<div>We already set the host names when configuring DRBD. Here we use 192.168.1.245 as the virtual IP; clients will connect to that IP.<\/div>\n<div><\/div>\n<div>#\u00a0yum install heartbeat heartbeat-devel\u00a0 ##\u00a0 on both servers<br \/>\n@ if the config files are not under \/usr\/share\/doc\/heartbeat, create them:<\/p>\n<p>#\u00a0cd \/etc\/ha.d\/<\/p>\n<\/div>\n<div>#\u00a0touch authkeys<\/div>\n<div>#\u00a0touch ha.cf<\/div>\n<div>#\u00a0touch haresources<\/div>\n<div><\/div>\n<div>#\u00a0vim authkeys<\/div>\n<div>\u00a0\u00a0 auth 2<br \/>\n2 sha1 test-ha<\/div>\n<div><\/div>\n<div>\u00a0 ## auth 3<br \/>\n3 md5 &#8220;secret&#8221;<\/div>\n<div><\/div>\n<div>#\u00a0chmod 600 \/etc\/ha.d\/authkeys<\/div>\n<div><\/div>\n<div># vim ha.cf<\/div>\n<div><\/div>\n<div>logfacility local0<br \/>\ndebugfile \/var\/log\/ha-debug<br \/>\nlogfile \/var\/log\/ha-log<br \/>\nkeepalive 500ms<br \/>\ndeadtime 10<br \/>\nwarntime 5<br \/>\ninitdead 30<br \/>\nmcast eth0 225.0.0.1 694 2 0<br \/>\nping 192.168.1.22<br \/>\nrespawn hacluster \/usr\/lib\/heartbeat\/ipfail<br \/>\napiauth ipfail gid=haclient uid=hacluster<br \/>\nrespawn hacluster \/usr\/lib\/heartbeat\/dopd<br \/>\napiauth dopd gid=haclient uid=hacluster<br \/>\nauto_failback off\u00a0 # (or on)<br \/>\nnode drbd1<br \/>\nnode drbd2<\/div>\n<div><\/div>\n<div># vim haresources\u00a0## This file lists the resources we want to make highly available.<\/div>\n<div><\/div>\n<div>drbd1 drbddisk Filesystem::\/dev\/drbd0::\/usr\/local\/mysql::ext3 mysqld 192.168.1.245 (virtual IP)<\/div>\n<div><\/div>\n<div># cd \/etc\/ha.d\/resource.d<\/div>\n<div># vim drbddisk<br \/>\nDEFAULTFILE=&#8221;\/etc\/drbd.conf&#8221;<\/div>\n<div><\/div>\n<div>@ On the PRIMARY node:<br \/>\n# cp \/etc\/rc.d\/init.d\/mysqld 
\/etc\/ha.d\/resource.d\/<\/p>\n<p>@ Copy the files from the primary node to the secondary node<\/p>\n<p># scp -r ha.d root@192.168.1.237:\/etc\/\u00a0## copy all files to node two, because the primary and secondary nodes contain the same configuration.<\/p>\n<\/div>\n<div><\/div>\n<div>\u00a0@@ Stop all services on both nodes:<br \/>\nnode1$ service mysqld stop<br \/>\nnode1$ umount \/usr\/local\/mysql\/<br \/>\nnode1$ service drbd stop<br \/>\nnode1$ service heartbeat stop<\/p>\n<p>node2$ service mysqld stop<br \/>\nnode2$ umount \/usr\/local\/mysql\/<br \/>\nnode2$ service drbd stop<br \/>\nnode2$ service heartbeat stop<\/p>\n<p>@@ # Automatic startup:<br \/>\nnode1$ chkconfig drbd on<br \/>\nnode2$ chkconfig drbd on<\/p>\n<\/div>\n<div><\/div>\n<div>node1$ chkconfig mysqld off\u00a0## mysql will be handled by heartbeat; its init script was placed in \/etc\/ha.d\/resource.d\/<br \/>\nnode2$ chkconfig mysqld off<br \/>\nnode1$ chkconfig heartbeat on<br \/>\nnode2$ chkconfig heartbeat on<\/div>\n<div><\/div>\n<div># Start drbd and heartbeat on both machines:<br \/>\nnode1$ service drbd start<br \/>\nnode1$ service heartbeat start<br \/>\nnode2$ service drbd start<br \/>\nnode2$ service heartbeat start<\/p>\n<p>There is no need to start mysql; Heartbeat will start it automatically.<\/p>\n<\/div>\n<div><\/div>\n<div>For testing the replication<\/div>\n<div><\/div>\n<div>#\u00a0\/usr\/lib\/heartbeat\/hb_standby\u00a0## Run this command on either host; that host will stand down and the DB will fail over to the other system<\/div>\n<div><\/div>\n<div>@ access the DB from a remote host using the virtual IP<br \/>\nmysql&gt; grant all privileges on *.* to &#8216;root&#8217;@&#8217;192.168.1.67&#8217; identified by &#8216;mysql1&#8217;;<\/p>\n<\/div>\n<div>mysql&gt; flush privileges;<br \/>\n# delete from user where Host=&#8217;192.168.1.67&#8217;<br \/>\n# mysql -uroot -p -h 192.168.1.245<\/p>\n<\/div>\n<div><\/div>\n<div>#[Test Failover Services]<br \/>\nnode1$ hb_standby<br \/>\nnode2$ hb_takeover<br \/>\n#[Sanity 
Checks]<br \/>\nnode1$ service heartbeat stop<br \/>\nnode2$ service heartbeat stop<br \/>\n$\/usr\/lib64\/heartbeat\/BasicSanityCheck<\/p>\n<p>#[commands]<br \/>\n$\/usr\/lib64\/heartbeat\/heartbeat -s<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>DRBD is a block device which is designed to build high availability clusters.<\/p>\n<p>This is done by mirroring a whole block device via (a dedicated) network. DRBD takes over the data, writes it to the local disk and sends it to the other host. On the other host, it takes it to the disk there. [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/2236"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2236"}],"version-history":[{"count":2,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/2236\/revisions"}],"predecessor-version":[{"id":2238,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/2236\/revisions\/2238"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2236"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2236"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2236"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}