|
After setting up the HTTP/HTTPS configuration for JBoss server cluster, the next assignment on my table was to set up the HA-JMS on the server cluster.
My JBoss installation folder structure is as below:

JBOSS_HOME\server\all
JBOSS_HOME\server\default
JBOSS_HOME\server\minimal
The "all" folder has everything configured for HA; however, to understand the process, I have set up HA-JMS within the default folder. Go to the folder JBOSS_HOME\server\default\deploy\jms and remove all the files within it except jms-ra.rar. Please take a backup of these files; some of them will be required later. Within the default folder create a new folder named deploy-hasingleton, and within it create a sub-folder named jms. The contents of this folder will be similar to the contents of the jms folder we cleaned up earlier. The contents are as follows:
Folder - jbossmq-httpil.sar

File - hsqldb-jdbc-state-service.xml
No change from the default file except the following tag:
<depends optional-attribute-name="ConnectionManager">jboss.jca:service=DataSourceBinding,name=MySqlDS</depends>
My data source name is MySqlDS; please point this to your appropriate data source name.

File - jbossmq-destinations-service.xml
I have retained the original file; please feel free to alter this file to reflect your queues/topics.

File - jbossmq-service.xml

File - jvm-il-service.xml

File - mysql-jdbc2-service.xml
Change the following tag to point to the correct data source:
<depends optional-attribute-name="ConnectionManager">jboss.jca:service=DataSourceBinding,name=MySqlDS</depends>

File - uil2-service.xml
No change.
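For reference, the resulting layout under the default configuration looks like this (a sketch showing only the items discussed above):

```
JBOSS_HOME\server\default\
    deploy\jms\
        jms-ra.rar                         <- only file kept here
    deploy-hasingleton\jms\
        jbossmq-httpil.sar
        hsqldb-jdbc-state-service.xml
        jbossmq-destinations-service.xml
        jbossmq-service.xml
        jvm-il-service.xml
        mysql-jdbc2-service.xml
        uil2-service.xml
```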
Check if the JBOSS_HOME\server\default\lib folder has the following two jars: jbossha.jar and jgroups.jar. If not, add them from the JBOSS_HOME\server\all\lib folder.
Open the login-config.xml file from the folder JBOSS_HOME\server\default\conf.
Search for the following tag:
<!-- Security domain for JBossMQ -->
<application-policy name="jbossmq">
  <authentication>
    <login-module code="org.jboss.security.auth.spi.DatabaseServerLoginModule"
        flag="required">
      <module-option name="unauthenticatedIdentity">guest</module-option>
      <module-option name="dsJndiName">java:/MySqlDS</module-option>
      <module-option name="principalsQuery">SELECT PASSWD FROM JMS_USERS WHERE USERID=?</module-option>
      <module-option name="rolesQuery">SELECT ROLEID, 'Roles' FROM JMS_ROLES WHERE USERID=?</module-option>
    </login-module>
  </authentication>
</application-policy>
Change the dsJndiName to the correct data source. Ensure that the name retains the java:/ prefix.
Copy cluster-service.xml and deploy-hasingleton-service.xml into the JBOSS_HOME\server\default\deploy folder. The sample files are available in the JBOSS_HOME\server\all\deploy folder. In my case both instances are deployed on the same machine, so the port numbers in cluster-service.xml may need to be changed.
Configure your data source. I created mysql-ds.xml and removed the hsqldb-ds.xml file. Sample files for different types of databases are available in the JBOSS_HOME\docs\examples\jca directory.
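For reference, a minimal mysql-ds.xml sketch; the connection URL, database name, and credentials below are placeholders to adjust for your environment:

```xml
<datasources>
  <local-tx-datasource>
    <!-- JNDI name referenced by the HA-JMS service files above -->
    <jndi-name>MySqlDS</jndi-name>
    <!-- placeholder host/port/database -->
    <connection-url>jdbc:mysql://localhost:3306/jbossdb</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>dbuser</user-name>
    <password>dbpass</password>
  </local-tx-datasource>
</datasources>
```

Remember that the MySQL JDBC driver jar must be placed in JBOSS_HOME\server\default\lib for this to deploy.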
Now start the two instances. If the deployment is correct, during startup of the first instance you will see the following statements in the log files:

-------------------------------------------------------
GMS: address is 122.22.22.22:3576
-------------------------------------------------------
2009-04-21 17:52:58,419 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] ViewAccepted: initial members set
2009-04-21 17:52:58,435 DEBUG [org.jboss.ha.framework.server.ClusterPartition] Starting channel
2009-04-21 17:52:58,435 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] get nodeName
2009-04-21 17:52:58,435 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Get current members
2009-04-21 17:52:58,435 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Number of cluster members: 1
2009-04-21 17:52:58,435 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Other members: 0
2009-04-21 17:52:58,435 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Fetching state (will wait for 30000 milliseconds):
And during the second instance's startup, the following logs are created:

-------------------------------------------------------
GMS: address is 122.22.22.22:3589
-------------------------------------------------------
2009-04-21 17:55:51,780 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] ViewAccepted: initial members set
2009-04-21 17:55:51,780 DEBUG [org.jboss.ha.framework.server.ClusterPartition] Starting channel
2009-04-21 17:55:51,780 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] get nodeName
2009-04-21 17:55:51,780 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Get current members
2009-04-21 17:55:51,780 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Number of cluster members: 2
2009-04-21 17:55:51,780 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Other members: 1
2009-04-21 17:55:51,780 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Fetching state (will wait for 30000 milliseconds):
2009-04-21 17:55:51,811 DEBUG [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] setState called
You can use a sample JMS client to insert messages into the queue. You can verify that only one queue is created on the cluster, and that if the master server fails, the backup server creates the queue. The JMS client application will, however, need to know both JNDI URLs, i.e. for the master and the slave; that intelligence needs to be built in by the developer. Please find below my JNDI client and jndi.properties file for your ready reference.
JNDI properties file contents:

java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
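If you do not want to hard-code a single node, HA-JNDI can be given a comma-separated provider list so the client tries each node in turn; a sketch, assuming the default HA-JNDI port 1100 and placeholder host names:

```
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
# both cluster nodes, tried in order (host names are placeholders)
java.naming.provider.url=node1:1100,node2:1100
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
```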
JNDI client contents:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.NamingException;

public class JNDIClient {

    public static void main(String[] args) {
        // create a new initial context, which loads from the jndi.properties file
        javax.naming.Context ctx;
        try {
            ctx = new javax.naming.InitialContext();
            // look up the connection factory
            javax.jms.QueueConnectionFactory factory =
                (javax.jms.QueueConnectionFactory) ctx.lookup("ConnectionFactory");
            // create a new QueueConnection for point-to-point messaging
            QueueConnection conn = factory.createQueueConnection();
            Queue queue = (Queue) ctx.lookup("queue/A");
            Session session = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage();
            msg.setStringProperty("Name", "Cooler Dude");
            producer.send(msg);
            conn.close();
        } catch (NamingException e) {
            e.printStackTrace();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
Some colleagues of mine were facing problems getting the HTTP/HTTPS clustering setup done for the JBoss server. Although I had no experience working with JBoss, I decided to give it a try.
Note that my development environment is Windows. The first thing I did was get hold of the JBoss 4.2.2GA installable. Why this one? Because it is the one I had! I copied the installable twice onto my machine's D: drive, creating two JBoss homes, namely D:\JBoss 4.2.2GA-1 and D:\JBoss 4.2.2GA-2. To get the clustering set up I referred to the JBoss documentation. The documentation is decent and assists in getting your setup right. Instructions regarding setting up HTTP-related services can be found under section 1.5, titled "HTTP Services".
First things first: let us set up the load balancer. The load balancer is not part of the JBoss installable. JBoss uses the popular Apache web server to assist it in achieving load balancing; the Apache web server's jk module is used to forward requests to the JBoss servlet container. The Apache web server download is available here. I have used Apache 2.0.52 and 2.0.55 for this demonstration; per JBoss, any version in the range 2.0.x is acceptable. Next, get hold of the JK module binaries from the site. I have used mod_jk-1.2.28-httpd-2.0.52.so. Please select the jk module version compatible with your Apache server; detailed version-compatibility instructions are available on the download page, e.g. for jk 1.2.28. Copy the JK module .so file into the APACHE_HOME/modules folder.
Modify the APACHE_HOME/conf/httpd.conf and add the following lines at the end of the file.
# Include mod_jk's specific configuration file
Include conf/mod-jk.conf
Create a new mod-jk.conf file and copy it into the APACHE_HOME/conf folder. The contents of the file are as below:
LoadModule jk_module modules/mod_jk-1.2.28-httpd-2.0.52.so

# Where to find workers.properties
JkWorkersFile conf/workers.properties

# Where to put jk logs
JkLogFile logs/mod_jk.log

# Set the jk log level [debug/error/info]

# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

# JkOptions indicate to send SSL KEY SIZE
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

JkRequestLogFormat "%w %V %T"

# Mount your applications
JkMount /* loadbalancer
Ensure that the file name in the LoadModule line matches the .so file you copied into the modules folder. As per the instructions in the clustering guide, the mod-jk.conf file should have a Location tag in it; I have removed it as it is not supported by my older Apache server. You can add it back if required. The JkMount directive configures which URLs are redirected; currently it reroutes all URLs, so feel free to customize it if required.
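If you only want certain applications behind the balancer, JkMount accepts per-context patterns; a sketch with placeholder context names:

```
# forward only selected contexts to the balancer
JkMount /myapp loadbalancer
JkMount /myapp/* loadbalancer
```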
Create a new workers.properties file. Contents are as below:
# Define list of workers
worker.list=loadbalancer,status

worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.cachesize=10

worker.node2.port=8109
worker.node2.host=localhost
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.cachesize=10

# Load balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
#worker.list=loadbalancer

# Status worker for managing load balancer
worker.status.type=status
Note that ports 8009 and 8109 are the JBoss AJP connector ports, not the HTTP ports. Copy the workers.properties file into the APACHE_HOME/conf folder. I have defined two nodes and am assuming both are located on the same machine.
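Since both nodes run on one machine, every listener of the second instance must be moved to a free port. A small sketch of the offset scheme used here; the 100-port offset matches 8009 to 8109 but is a convention, not a JBoss requirement:

```shell
# compute per-node ports from a base port plus a fixed offset
base_http=8080; base_ajp=8009; offset=100
for node in 1 2; do
  http=$(( base_http + (node - 1) * offset ))
  ajp=$(( base_ajp + (node - 1) * offset ))
  echo "node$node: HTTP=$http AJP=$ajp"
done
```

Whatever scheme you pick, the AJP ports chosen here must match the worker.nodeN.port entries above.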
The server.xml within JBOSS_HOME/server/default/deploy/jboss-web.deployer should have the following tag:
<Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
    emptySessionPath="true" enableLookups="false" redirectPort="8443" />
This defines the AJP port.
This should be enough to get the JBOSS server running in clustered mode for HTTP.
I also needed to get the HTTPS setup rolling. Apache does not provide built-in SSL support in this package; to achieve SSL support you will need to download the OpenSSL project and install it. An alternative is to get an integrated Apache-OpenSSL download from this site (the site was not available, hence I used downloads from the following site and got the OpenSSL download from here).
An update: The Apache site provides a Windows binary with OpenSSL built-in. So you can use that one as well.
I am assuming that you have created the certificate for the Tomcat (JBoss) server. For my testing purposes I created a self-signed certificate using the Java keytool utility. The syntax for certificate creation is as below:

keytool -genkey -alias <aliasName> -keystore <keystore name>
More clarity is available at this site.
Go to the server.xml file located within the <JBoss_home>\server\default\deploy\jboss-web.deployer folder. Search for a connector tag with the following description:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
    maxThreads="150" scheme="https" secure="true"
    clientAuth="false" sslProtocol="TLS"
    keystoreFile="D:\jboss-4.2.2GA-1\server\default\deploy\jboss-web.deployer\testing.keystore"
    keystorePass="testing" />
Add the new attributes keystoreFile and keystorePass to the connector tag. Follow the same procedure on the other server, changing its port. Then add a new AJP connector to both server.xml files.
<Connector port="8019" address="${jboss.bind.address}" protocol="AJP/1.3"
    emptySessionPath="true" enableLookups="false" scheme="https" secure="true" redirectPort="8443" />

Note that two new attributes, scheme and secure, have been added to the AJP connector declaration. Ensure that the port numbers in use do not conflict with other port numbers.
The JBoss servers are now ready to receive SSL requests. Now to set up the Apache server to take care of load balancing. If you have installed Apache using the URL provided above, the conf folder will have two files, httpd.conf and ssl.conf. Open the conf files and check the ServerRoot and DocumentRoot paths.

Make sure that the following line in the httpd.conf file is uncommented:

LoadModule ssl_module modules/mod_ssl.so
Add the following lines at the end of the httpd.conf file
JkMount /* loadbalancer

# Include mod_jk's specific configuration file
Include conf/mod-jk.conf
mod-jk.conf remains unchanged.
Unzip the OpenSSL.zip on the machine. Copy libeay32.dll and ssleay32.dll into the Windows\system32 folder of the machine.
The Tomcat keystore and the Apache SSL certificates and keys are incompatible; they need to be converted into a compatible certificate and key. For details on the conversion process, refer to the URL.
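As a rough sketch of that conversion: the keytool and openssl subcommands named below are real, but the alias, keystore, and output file names are placeholders taken from this post, and the commands themselves must be run on a machine with a JDK and OpenSSL installed:

```shell
# placeholder names; adjust to your keystore
ALIAS=aliasName
KEYSTORE=testing.keystore
PEM_CERT=exported-pem.crt
PEM_KEY=exported.key
# 1) export the certificate in PEM format:
#      keytool -exportcert -rfc -alias "$ALIAS" -keystore "$KEYSTORE" -file "$PEM_CERT"
# 2) keytool cannot export the private key directly; one common route is to
#    convert the keystore to PKCS#12 and extract the key with openssl:
#      keytool -importkeystore -srckeystore "$KEYSTORE" -destkeystore ks.p12 -deststoretype PKCS12
#      openssl pkcs12 -in ks.p12 -nocerts -nodes -out "$PEM_KEY"
echo "will produce: $PEM_CERT and $PEM_KEY"
```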
Now you should have the PEM certificate and private key. Copy the files into a suitable folder and make the relevant entries for them in ssl.conf:
SSLCertificateFile /root/SSL_export/exported-pem.crt
SSLCertificateKeyFile /root/SSL_export/exported.key
The intermediate certificate is not required in the case of self-signed certificates.
Add the following statement within the VirtualHost tag of the ssl.conf file
JkMount /* loadbalancerSSL
In the ssl.conf file, remove the <IfDefine SSL> and </IfDefine> tags, and ensure that ServerName and DocumentRoot point to the correct values. The workers.properties file is now configured to handle four nodes: two for HTTP requests and two for HTTPS requests.
Here is the updated workers.properties file.
# Define list of workers
worker.list=loadbalancer,loadbalancerSSL,status
#worker.list=loadbalancer,status

worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.cachesize=10

worker.node2.port=8109
worker.node2.host=localhost
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.cachesize=10

worker.node3.port=8019
worker.node3.host=localhost
worker.node3.type=ajp13
worker.node3.lbfactor=1
worker.node3.cachesize=10

worker.node4.port=8119
worker.node4.host=localhost
worker.node4.type=ajp13
worker.node4.lbfactor=1
worker.node4.cachesize=10

# Load balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
#worker.list=loadbalancer

# Load balancing behaviour
worker.loadbalancerSSL.type=lb
worker.loadbalancerSSL.balance_workers=node3,node4
worker.loadbalancerSSL.sticky_session=1
#worker.list=loadbalancer

# Status worker for managing load balancer
worker.status.type=status
The above configuration should be enough to get the JBoss running in a clustered environment for HTTP as well as HTTPS requests.
This post does not cover the portion for sticky session configuration.
Here is an explanation of how to set up DRBD on CentOS 6.
# cat /etc/redhat-release
CentOS release 6.2 (Final)
# uname -ri
2.6.32-220.7.1.el6.i686 i386

# rpm -qa | grep drbd
kmod-drbd84-8.4.1-1.el6.elrepo.i686
drbd84-utils-8.4.1-1.el6.elrepo.i686
Hostnames (uname -n) of the DRBD nodes are:
centos6-drbd1.localdomain
centos6-drbd2.localdomain
These nodes run as VMs on a KVM host machine (Linux Mint 12).
[ install DRBD on CentOS 6 ]
CentOS 6 doesn't seem to ship a DRBD repo, so add the ELRepo repository:

[root@centos6-drbd1 ~]# rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm

Then edit the DRBD repo file:

[root@centos6-drbd1 ~]# vi /etc/yum.repos.d/elrepo.repo
[elrepo]
name=ELRepo.org Community Enterprise Linux Repository - el6
baseurl=http://elrepo.org/linux/elrepo/el6/$basearch/
mirrorlist=http://elrepo.org/mirrors-elrepo.el6
#enabled=1
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0
confirm you can search DRBD packages via yum:

[root@centos6-drbd1 ~]# yum --enablerepo=elrepo search drbd
Loaded plugins: fastestmirror, refresh-packagekit
Loading mirror speeds from cached hostfile
* base: www.ftp.ne.jp
* elrepo: elrepo.org
* extras: www.ftp.ne.jp
* updates: www.ftp.ne.jp
elrepo | 1.9 kB 00:00
elrepo/primary_db | 409 kB 00:01
============================== N/S Matched: drbd ===============================
drbd83-utils.i686 : Management utilities for DRBD %{version}
drbd84-utils.i686 : Management utilities for DRBD
kmod-drbd83.i686 : drbd83 kernel module(s)
kmod-drbd84.i686 : drbd84 kernel module(s)
install drbd:

[root@centos6-drbd1 ~]# yum --enablerepo=elrepo install -y drbd84-utils.i686 kmod-drbd84.i686
<snip>
Installed:
drbd84-utils.i686 0:8.4.1-1.el6.elrepo kmod-drbd84.i686 0:8.4.1-1.el6.elrepo
Complete!
Do the same on the other machine.
[ when not using LVM ]
Before setting up DRBD, I'm going to add storage for DRBD to both machines, which are running on KVM.
- create a storage with kvm-image for DRBD nodes

for centos6-drbd1:
mint-1 images # qemu-img create -f qcow2 /var/disk1/libvirt/images/centos6-drbd1-1.img 500M
Formatting '/var/disk1/libvirt/images/centos6-drbd1-1.img', fmt=qcow2 size=524288000 encryption=off cluster_size=0

for centos6-drbd2:
mint-1 images # qemu-img create -f qcow2 /var/disk1/libvirt/images/centos6-drbd2-1.img 500M
Formatting '/var/disk1/libvirt/images/centos6-drbd2-1.img', fmt=qcow2 size=524288000 encryption=off cluster_size=0

I have turned off AppArmor on the KVM host.
Add centos6-drbd1-1.img to the DRBD machine centos6-drbd1 on the fly. On the KVM host:

virsh # qemu-monitor-command centos6-32-drbd1 'pci_add auto storage file=/var/disk1/libvirt/images/centos6-drbd1-1.img,if=scsi'
OK domain 0, bus 0, slot 7, function 0

virsh # qemu-monitor-command centos6-32-drbd1 'info block'
drive-virtio-disk0: type=hd removable=0 file=/var/disk1/libvirt/images/centos6-32-drbd1.img ro=0 drv=raw encrypted=0
scsi0-hd0: type=hd removable=0 file=/var/disk1/libvirt/images/centos6-drbd1-1.img ro=0 drv=qcow2 encrypted=0

On the VM, dmesg shows that /dev/sda has been added:

sd 2:0:0:0: [sda] Attached SCSI disk
sd 2:0:0:0: Attached scsi generic sg0 type 0
Do the same on the VM centos6-drbd2.
- add a storage device on both DRBD nodes

On the KVM host, add the storage to the centos6-32-drbd2 VM on the fly:

virsh # qemu-monitor-command centos6-32-drbd2 'pci_add auto storage file=/var/disk1/libvirt/images/centos6-drbd2-1.img,if=scsi'
OK domain 0, bus 0, slot 7, function 0

virsh # qemu-monitor-command centos6-32-drbd2 'info block'
drive-virtio-disk0: type=hd removable=0 file=/var/disk1/libvirt/images/centos6-32-drbd2.img ro=0 drv=raw encrypted=0
scsi0-hd0: type=hd removable=0 file=/var/disk1/libvirt/images/centos6-drbd2-1.img ro=0 drv=qcow2 encrypted=0
make a partition for DRBD on both VMs with fdisk.
[root@centos6-drbd1 ~]# LANG=C fdisk -l | grep sda
Disk /dev/sda: 524 MB, 524288000 bytes
/dev/sda1 1 1020 511500+ 83 Linux
A sample config has been installed under:

[root@centos6-drbd1 ~]# head -10 /usr/share/doc/drbd84-utils-8.4.1/drbd.conf.example
resource example {
options {
on-no-data-accessible suspend-io;
}
net {
cram-hmac-alg "sha1";
shared-secret "secret_string";
}
Make a config file for DRBD, called disk0.res, under the /etc/drbd.d/ directory. The drbd.conf file reads the following files:

[root@centos6-drbd1 ~]# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
[root@centos6-drbd1 ~]# uname -n
centos6-drbd1.localdomain
[root@centos6-drbd1 ~]# ifconfig eth0 | grep "inet addr" | awk '{print $2}'
addr:192.168.10.50

[root@centos6-drbd2 ~]# uname -n
centos6-drbd2.localdomain
[root@centos6-drbd2 ~]# ifconfig eth0 | grep "inet addr" | awk '{print $2}'
addr:192.168.10.60
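Note that awk '{print $2}' keeps the addr: prefix in the output above. A small sketch that strips it; the sample line mimics the CentOS 6 net-tools ifconfig format:

```shell
# sample CentOS 6 ifconfig line (assumption: net-tools output format)
line="          inet addr:192.168.10.50  Bcast:192.168.10.255  Mask:255.255.255.0"
# field 2 is "addr:192.168.10.50"; cut away the "addr:" prefix
ip=$(echo "$line" | awk '{print $2}' | cut -d: -f2)
echo "$ip"
```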
The /etc/drbd.d/disk0.res file:

[root@centos6-drbd1 ~]# cat /etc/drbd.d/disk0.res
resource disk0 {
protocol C;
net {
cram-hmac-alg sha1;
shared-secret "test";
}
on centos6-drbd1.localdomain {
device /dev/drbd0;
disk /dev/sda1;
address 192.168.10.50:7788; # centos6-drbd1's IP
meta-disk internal;
}
on centos6-drbd2.localdomain {
device /dev/drbd0;
disk /dev/sda1;
address 192.168.10.60:7788; # centos6-drbd2's IP
meta-disk internal;
}
}
Copy the same config to the centos6-drbd2 node:

[root@centos6-drbd1 ~]# scp /etc/drbd.d/disk0.res root@192.168.10.60:/etc/drbd.d/disk0.res
- create meta data on both nodes

on centos6-drbd1:
[root@centos6-drbd1 ~]# drbdadm create-md disk0
--== Thank you for participating in the global usage survey ==--
The server's response is:
you are the 2135th user to install this version
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
success
on centos6-drbd2
[root@centos6-drbd2 ~]# drbdadm create-md disk0
- start drbd daemon on both nodes

[root@centos6-drbd1 ~]# /etc/init.d/drbd start
Starting DRBD resources: [
create res: disk0
prepare disk: disk0
adjust disk: disk0
adjust net: disk0
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'disk0'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 10]:
.
[root@centos6-drbd1 ~]# echo $?
0
[root@centos6-drbd2 ~]# /etc/init.d/drbd start
Starting DRBD resources: [
create res: disk0
prepare disk: disk0
adjust disk: disk0
adjust net: disk0
]
.
[root@centos6-drbd2 ~]# echo $?
0
check the status of DRBD:

[root@centos6-drbd1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 Connected Secondary/Secondary Inconsistent/Inconsistent C
[root@centos6-drbd2 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 Connected Secondary/Secondary Inconsistent/Inconsistent C
The status is Secondary/Secondary and Inconsistent/Inconsistent. To solve this, make the centos6-drbd1 node primary.

on centos6-drbd1:
[root@centos6-drbd1 ~]# drbdadm -- --overwrite-data-of-peer primary all
check the status
[root@centos6-drbd1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 SyncSource Primary/Secondary UpToDate/Inconsistent C
... sync'ed: 9.6% (465368/511448)K
After the sync is done:

[root@centos6-drbd1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 Connected Primary/Secondary UpToDate/UpToDate C
centos6-drbd1 is primary and centos6-drbd2 is secondary.

[root@centos6-drbd1 ~]# drbdadm role all
Primary/Secondary

[root@centos6-drbd2 ~]# drbdadm role all
Secondary/Primary
- create a filesystem

Make a filesystem on the primary node (not the secondary!):

[root@centos6-drbd1 ~]# drbdadm role all
Primary/Secondary
[root@centos6-drbd1 ~]# mkfs.ext4 /dev/drbd0
[root@centos6-drbd1 ~]# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:610747 nr:0 dw:99299 dr:517056 al:68 bm:32 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

[root@centos6-drbd2 ~]# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:74335 dw:74335 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
- test

Mount /dev/drbd0 to a directory on the primary node:

[root@centos6-drbd1 ~]# mount -t ext4 /dev/drbd0 data/

on primary (centos6-drbd1):
[root@centos6-drbd1 ~]# echo hello > data/hello.txt
[root@centos6-drbd1 ~]# cat data/hello.txt
hello

Unmount the directory on the primary:
[root@centos6-drbd1 ~]# umount /root/data/
make centos6-drbd1 secondary:
[root@centos6-drbd1 ~]# drbdadm secondary all
[root@centos6-drbd1 ~]# drbdadm role all
Secondary/Secondary
make centos6-drbd2 primary:
[root@centos6-drbd2 ~]# drbdadm role all
Secondary/Secondary
[root@centos6-drbd2 ~]# drbdadm primary all
[root@centos6-drbd2 ~]# drbdadm role all
Primary/Secondary

mount:
[root@centos6-drbd2 ~]# mount -t ext4 /dev/drbd0 /root/data/
[root@centos6-drbd2 ~]# cat /root/data/hello.txt
hello
[ when using LVM ]
Add two storage devices for LVM on both nodes.

mint-1 images # qemu-img create -f qcow2 centos6-drbd1-lvm1.img 100M
Formatting 'centos6-drbd1-lvm1.img', fmt=qcow2 size=104857600 encryption=off cluster_size=0
mint-1 images # qemu-img create -f qcow2 centos6-drbd1-lvm2.img 100M
Formatting 'centos6-drbd1-lvm2.img', fmt=qcow2 size=104857600 encryption=off cluster_size=0
mint-1 images # qemu-img create -f qcow2 centos6-drbd2-lvm1.img 100M
Formatting 'centos6-drbd2-lvm1.img', fmt=qcow2 size=104857600 encryption=off cluster_size=0
mint-1 images # qemu-img create -f qcow2 centos6-drbd2-lvm2.img 100M
Formatting 'centos6-drbd2-lvm2.img', fmt=qcow2 size=104857600 encryption=off cluster_size=0

mint-1 images # ls *lvm*
centos6-drbd1-lvm1.img centos6-drbd2-lvm1.img centos6-drbd1-lvm2.img centos6-drbd2-lvm2.img
Add these devices to the DRBD nodes. On the KVM host, add two storages to the centos6-drbd1 node:

virsh # qemu-monitor-command centos6-32-drbd1 'pci_add auto storage file=/var/disk1/libvirt/images/centos6-drbd1-lvm1.img,if=scsi'
OK domain 0, bus 0, slot 8, function 0
virsh # qemu-monitor-command centos6-32-drbd1 'pci_add auto storage file=/var/disk1/libvirt/images/centos6-drbd1-lvm2.img,if=scsi'
OK domain 0, bus 0, slot 9, function 0

virsh # qemu-monitor-command centos6-32-drbd2 'pci_add auto storage file=/var/disk1/libvirt/images/centos6-drbd2-lvm1.img,if=scsi'
OK domain 0, bus 0, slot 8, function 0
virsh # qemu-monitor-command centos6-32-drbd2 'pci_add auto storage file=/var/disk1/libvirt/images/centos6-drbd2-lvm2.img,if=scsi'
OK domain 0, bus 0, slot 9, function 0
/dev/sdb and /dev/sdc are for LVM:

[root@centos6-drbd1 ~]# LANG=C fdisk -l | grep "/dev/sd*" | grep -v sda
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdb: 104 MB, 104857600 bytes
Disk /dev/sdc: 104 MB, 104857600 bytes
– create LVM partitions
on centos6-drbd1
[root@centos6-drbd1 ~]# fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1024, default 1024):
Using default value 1024
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sdb: 104 MB, 104857600 bytes
4 heads, 50 sectors/track, 1024 cylinders
Units = cylinders of 200 * 512 = 102400 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x518bd64a
Device Boot Start End Blocks Id System
/dev/sdb1 1 1024 102375 8e Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Do the same thing to /dev/sdc:

[root@centos6-drbd1 ~]# LANG=C fdisk -l | grep "/dev/sd*" | grep -v sda
Disk /dev/sdb: 104 MB, 104857600 bytes
/dev/sdb1 1 1024 102375 8e Linux LVM
Disk /dev/sdc: 104 MB, 104857600 bytes
/dev/sdc1 1 1024 102375 8e Linux LVM
Make LVM partitions (/dev/sdb, /dev/sdc) on centos6-drbd2 as well:

[root@centos6-drbd2 ~]# LANG=C fdisk -l | grep "/dev/sd*" | grep -v sda
Disk /dev/sdb: 104 MB, 104857600 bytes
/dev/sdb1 1 1024 102375 8e Linux LVM
Disk /dev/sdc: 104 MB, 104857600 bytes
/dev/sdc1 1 1024 102375 8e Linux LVM
- create PV ( physical volume )

There's no pvcreate command by default, so install lvm2 via yum:

[root@centos6-drbd1 ~]# yum install -y lvm2
[root@centos6-drbd2 ~]# yum install -y lvm2
on centos6-drbd1:
[root@centos6-drbd1 ~]# pvcreate /dev/sdb1 /dev/sdc1
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
Writing physical volume data to disk "/dev/sdc1"
Physical volume "/dev/sdc1" successfully created
[root@centos6-drbd1 ~]# pvscan
PV /dev/sdb1 lvm2 [99.98 MiB]
PV /dev/sdc1 lvm2 [99.98 MiB]
Total: 2 [199.95 MiB] / in use: 0 [0 ] / in no VG: 2 [199.95 MiB]
do the same thing on centos6-drbd2:
[root@centos6-drbd2 ~]# pvcreate /dev/sdb1 /dev/sdc1
- create VG ( volume group )

on centos6-drbd1:
[root@centos6-drbd1 ~]# vgcreate -s 32m VolGroup01 /dev/sdb1 /dev/sdc1
Volume group "VolGroup01" successfully created
[root@centos6-drbd1 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup01" using metadata type lvm2
[root@centos6-drbd1 ~]# vgdisplay
--- Volume group ---
VG Name VolGroup01
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 192.00 MiB
PE Size 32.00 MiB
Total PE 6
Alloc PE / Size 0 / 0
Free PE / Size 6 / 192.00 MiB
VG UUID ZbTytp-s0st-HG2A-hB49-JuEX-f13I-829dgh
on centos6-drbd2:
[root@centos6-drbd2 ~]# vgcreate -s 32m VolGroup01 /dev/sdb1 /dev/sdc1
Volume group "VolGroup01" successfully created
– create LV ( logical volume )
on centos6-drbd1
[root@centos6-drbd1 ~]# lvcreate -n LogVol01 -L 100M VolGroup01
Rounding up size to full physical extent 128.00 MiB
Logical volume “LogVol01” created
[root@centos6-drbd1 ~]# lvscan
ACTIVE ‘/dev/VolGroup01/LogVol01’ [128.00 MiB] inherit
[root@centos6-drbd1 ~]# lvdisplay
— Logical volume —
LV Name /dev/VolGroup01/LogVol01
VG Name VolGroup01
LV UUID 0bCwiL-kMDE-sHyI-au1a-8Wwl-z67d-23KOkX
LV Write Access read/write
LV Status available
# open 0
LV Size 128.00 MiB
Current LE 4
Segments 2
Allocation inherit
Read ahead sectors auto
– currently set to 256
Block device 253:0
do the same thing on centos6-drbd2
[root@centos6-drbd2 ~]# lvcreate -n LogVol01 -L 100M VolGroup01
Rounding up size to full physical extent 128.00 MiB
Logical volume "LogVol01" created
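lvcreate rounds the requested size up to a whole number of physical extents, which is why 100M becomes 128 MiB here: the VG was created with 32 MiB extents (-s 32m). A sketch of the same arithmetic in plain shell:

```shell
# Reproduce lvcreate's rounding: ceil(requested / PE size) whole extents
pe_size_mib=32     # from vgcreate -s 32m
requested_mib=100  # from lvcreate -L 100M
extents=$(( (requested_mib + pe_size_mib - 1) / pe_size_mib ))
rounded_mib=$(( extents * pe_size_mib ))
echo "${extents} extents -> ${rounded_mib} MiB"
```

This matches the "Current LE 4" line in the lvdisplay output above.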
– make ext4 filesystem on the LV
on centos6-drbd1
[root@centos6-drbd1 ~]# mkfs.ext4 /dev/VolGroup01/LogVol01
on centos6-drbd2
[root@centos6-drbd2 ~]# mkfs.ext4 /dev/VolGroup01/LogVol01
mount
on centos6-drbd1
[root@centos6-drbd1 ~]# mount -t ext4 /dev/VolGroup01/LogVol01 /root/lvm_mnt/
[root@centos6-drbd1 ~]# echo hello > /root/lvm_mnt/hello.txt
[root@centos6-drbd1 ~]# cat /root/lvm_mnt/hello.txt
hello
[root@centos6-drbd1 ~]# rm /root/lvm_mnt/hello.txt
If you want to manage LVM via GUI , you can use “system-config-lvm” which you can install via yum.
[root@centos6-drbd1 ~]# yum install system-config-lvm
Start the GUI tool by running system-config-lvm.
– set up DRBD configuration
First, unmount the LV and stop DRBD.
on centos6-drbd1
[root@centos6-drbd1 ~]# umount /root/lvm_mnt/
[root@centos6-drbd1 ~]# /etc/init.d/drbd stop
on centos6-drbd2
[root@centos6-drbd2 ~]# /etc/init.d/drbd stop
Make a DRBD config file for this LVM volume.
on centos6-drbd1
[root@centos6-drbd1 ~]# vi /etc/drbd.d/lvm_disk1.res
[root@centos6-drbd1 ~]# cat /etc/drbd.d/lvm_disk1.res
resource lvm_disk1 {
  protocol C;
  net {
    cram-hmac-alg sha1;
    shared-secret "test";
  }
  on centos6-drbd1.localdomain {
    device /dev/drbd1;
    disk /dev/VolGroup01/LogVol01;
    address 192.168.10.50:7788; # centos6-drbd1's IP
    meta-disk internal;
  }
  on centos6-drbd2.localdomain {
    device /dev/drbd1;
    disk /dev/VolGroup01/LogVol01;
    address 192.168.10.60:7788; # centos6-drbd2's IP
    meta-disk internal;
  }
}
copy it to centos6-drbd2
[root@centos6-drbd1 ~]# scp /etc/drbd.d/lvm_disk1.res root@192.168.10.60:/etc/drbd.d/lvm_disk1.res
Hmm, an error:
[root@centos6-drbd1 ~]# drbdadm create-md lvm_disk1
drbd.d/lvm_disk1.res:11: conflicting use of IP '192.168.10.50:7788' ...
drbd.d/disk0.res:11: IP '192.168.10.50:7788' first used here.
I need to use a different port for each DRBD resource.
Okay, I'll use 7789 for this LVM resource.
Confirm that 7789 is not in use:
[root@centos6-drbd1 ~]# lsof -ni:7789
[root@centos6-drbd1 ~]#
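To catch this kind of clash before drbdadm complains, you can scan the resource files for address:port values used by more than one resource. A sketch on inline sample data; on a real node you would feed it `grep -h address /etc/drbd.d/*.res` instead:

```shell
# Find address:port endpoints that appear more than once across DRBD resources.
# The sample lines stand in for: grep -h 'address' /etc/drbd.d/*.res
sample='address 192.168.10.50:7788;
address 192.168.10.60:7788;
address 192.168.10.50:7788;'
dups=$(printf '%s\n' "$sample" | awk '{print $2}' | sort | uniq -d)
echo "conflicting endpoints: ${dups}"
```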
[root@centos6-drbd1 ~]# cat /etc/drbd.d/lvm_disk1.res
resource lvm_disk1 {
  protocol C;
  net {
    cram-hmac-alg sha1;
    shared-secret "test";
  }
  on centos6-drbd1.localdomain {
    device /dev/drbd1;
    disk /dev/VolGroup01/LogVol01;
    # address 192.168.10.50:7788; # centos6-drbd1's IP
    address 192.168.10.50:7789; # centos6-drbd1's IP
    meta-disk internal;
  }
  on centos6-drbd2.localdomain {
    device /dev/drbd1;
    disk /dev/VolGroup01/LogVol01;
    # address 192.168.10.60:7788; # centos6-drbd2's IP
    address 192.168.10.60:7789; # centos6-drbd2's IP
    meta-disk internal;
  }
}
copy it to centos6-drbd2
[root@centos6-drbd1 ~]# !877
scp /etc/drbd.d/lvm_disk1.res root@192.168.10.60:/etc/drbd.d/lvm_disk1.res
root@192.168.10.60’s password:
Try again. Hmm, another error; this time the exit code is 40.
[root@centos6-drbd1 ~]# drbdadm create-md lvm_disk1
md_offset 134213632
al_offset 134180864
bm_offset 134176768
Found ext3 filesystem
131072 kB data area apparently used
131032 kB left usable by current configuration
Device size would be truncated, which
would corrupt data and result in
‘access beyond end of device’ errors.
You need to either
* use external meta data (recommended)
* shrink that filesystem first
* zero out the device (destroy the filesystem)
Operation refused.
Command 'drbdmeta 1 v08 /dev/VolGroup01/LogVol01 internal create-md' terminated with exit code 40
Zero out the start of the LV on both nodes (this destroys the old filesystem):
[root@centos6-drbd1 ~]# dd if=/dev/zero of=/dev/VolGroup01/LogVol01 bs=1M count=1
[root@centos6-drbd2 ~]# dd if=/dev/zero of=/dev/VolGroup01/LogVol01 bs=1M count=1
try again
on centos6-drbd1
[root@centos6-drbd1 ~]# drbdadm create-md lvm_disk1
Writing meta data…
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
success
do the same thing on centos6-drbd2
[root@centos6-drbd2 ~]# drbdadm create-md lvm_disk1
– start DRBD on both nodes
[root@centos6-drbd1 ~]# /etc/init.d/drbd start
Starting DRBD resources: [
create res: disk0 lvm_disk1
prepare disk: disk0 lvm_disk1
adjust disk: disk0 lvm_disk1
adjust net: disk0 lvm_disk1
]
………
[root@centos6-drbd1 ~]#
[root@centos6-drbd1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 Connected Secondary/Secondary UpToDate/UpToDate C
1:lvm_disk1 Connected Secondary/Secondary Inconsistent/Inconsistent C
[root@centos6-drbd1 ~]# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:131032
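The fields worth watching in /proc/drbd are cs (connection state), ro (roles), and ds (disk states). A small awk sketch that pulls them out of one status line; the sample input stands in for reading /proc/drbd on a real node:

```shell
# Extract cs/ro/ds from a /proc/drbd status line (sample input shown).
line='1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----'
state=$(printf '%s\n' "$line" | awk '{ sub(/^cs:/,"",$2); sub(/^ro:/,"",$3); sub(/^ds:/,"",$4); print $2, $3, $4 }')
echo "$state"
```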
on centos6-drbd1 ( initialize DRBD disk )
[root@centos6-drbd1 ~]# drbdadm -- --overwrite-data-of-peer primary all
[root@centos6-drbd1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 Connected Primary/Secondary UpToDate/UpToDate C
1:lvm_disk1 Connected Primary/Secondary UpToDate/UpToDate C
[root@centos6-drbd1 ~]# drbdadm role all
Primary/Secondary
Primary/Secondary
[root@centos6-drbd1 ~]# drbd-overview
0:disk0/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
1:lvm_disk1/0 Connected Primary/Secondary UpToDate/UpToDate C r----- /root/lvm_mnt ext4 124M 5.6M 113M 5%
[root@centos6-drbd2 ~]# drbdadm role all
Secondary/Primary
Secondary/Primary
[root@centos6-drbd2 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by dag@Build32R6, 2011-12-21 06:07:17
m:res cs ro ds p mounted fstype
0:disk0 Connected Secondary/Primary UpToDate/UpToDate C
1:lvm_disk1 Connected Secondary/Primary UpToDate/UpToDate C
– make filesystem on the primary node
[root@centos6-drbd1 ~]# mkfs.ext4 /dev/drbd1
mount
[root@centos6-drbd1 ~]# mount -t ext4 /dev/drbd1 /root/lvm_mnt/
make a file
[root@centos6-drbd1 ~]# echo “hi” > /root/lvm_mnt/hi.txt
[root@centos6-drbd1 ~]# cat /root/lvm_mnt/hi.txt
hi
unmount
[root@centos6-drbd1 ~]# umount /root/lvm_mnt/
make centos6-drbd1 secondary
[root@centos6-drbd1 ~]# drbdadm secondary all
[root@centos6-drbd1 ~]# drbdadm role all
Secondary/Secondary
Secondary/Secondary
make centos6-drbd2 primary
on centos6-drbd2
[root@centos6-drbd2 ~]# drbdadm role all
Secondary/Secondary
Secondary/Secondary
[root@centos6-drbd2 ~]# drbdadm primary all
[root@centos6-drbd2 ~]#
[root@centos6-drbd2 ~]# drbd-overview
0:disk0/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
1:lvm_disk1/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@centos6-drbd2 ~]# mkdir /root/lvm_mnt
[root@centos6-drbd2 ~]# mount -t ext4 /dev/drbd1 /root/lvm_mnt/
[root@centos6-drbd2 ~]# ls /root/lvm_mnt/
hi.txt lost+found
[root@centos6-drbd2 ~]# cat /root/lvm_mnt/hi.txt
hi
disable resources
[root@centos6-drbd1 ~]# drbd-overview
0:disk0/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
1:lvm_disk1/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
[root@centos6-drbd1 ~]# drbdadm down disk0
[root@centos6-drbd1 ~]# drbdadm down lvm_disk1
[root@centos6-drbd1 ~]# drbd-overview
0:disk0/0 Unconfigured . . . .
1:lvm_disk1/0 Unconfigured . . . .
scp: small tip ( error: bash: scp: command not found )
When I tried to copy a file with the scp command, I hit the following error.
CentOS6-1# /usr/bin/scp aa.tgz hattori@192.168.0.100:
hattori@192.168.0.100's password:
bash: scp: command not found
lost connection
The reason is that openssh-clients was not installed on the remote machine.
To copy files with scp, openssh-clients needs to be installed on both machines.
I didn't know that...
Sure enough, the CentOS6-2 machine doesn't have openssh-clients installed.
CentOS6-1# rpm -qa | grep openssh-client
openssh-clients-5.3p1-81.el6.x86_64
CentOS6-2# rpm -qa | grep openssh-clients
CentOS6-2#
To solve this , install openssh-clients on CentOS6-2 machine.
CentOS6-2 # yum install -y openssh-clients
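A quick way to script the same check before attempting a copy: grep the package list for openssh-clients. The sample input below stands in for a real `rpm -qa` run:

```shell
# Check a package list for openssh-clients (sample stands in for: rpm -qa)
rpms='openssh-server-5.3p1-81.el6.x86_64
bash-4.1.2-8.el6.x86_64'
if printf '%s\n' "$rpms" | grep -q '^openssh-clients'; then
  result="present"
else
  result="missing"
fi
echo "openssh-clients: ${result}"
```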
There are many ways to find zombie processes; here is one of them.
# cat /etc/centos-release
CentOS release 6.2 (Final)
find zombie processes
# top -b -n 1 | grep Z
6072 root 20 0 0 0 0 Z 0.0 0.0 0:00.09 dumpcap <defunct>
6075 root 20 0 0 0 0 Z 0.0 0.0 0:00.11 dumpcap <defunct>
or
# ps aux | awk '{ print $8 " " $2 }' | grep -w Z
Z 6072
Z 6075
kill the zombie processes (note: if they don't go away, kill -9 cannot reap a true zombie; a zombie is already dead, so kill or restart its parent process instead)
# kill -9 6072
# kill -9 6075
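Since a zombie only disappears when its parent reaps it, the useful thing to extract is each zombie's parent PID. A sketch on sample ps output; on a real system you would pipe in `ps -eo stat,pid,ppid`:

```shell
# List each zombie's PID and parent PID (sample `ps -eo stat,pid,ppid` output).
ps_sample='STAT   PID  PPID
Z     6072  6001
S     1234     1
Z     6075  6001'
zombies=$(printf '%s\n' "$ps_sample" | awk '$1 == "Z" { print $2, $3 }')
printf '%s\n' "$zombies"
```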
If I had to pick one fault of Linux, it would be that for almost everything, the Linux user is inundated with hundreds of possible solutions. This is both a blessing and a curse – for the veterans, it means that we can pick the tool that most matches how we prefer to operate; for the uninitiated, it means that we’re so overwhelmed with options it’s hard to know where to begin.
One exception is software RAID – there's really only one option: mdadm. I can already hear the LVM advocates screaming at me; no, I don't have any problem with LVM, and in fact I use it as well – I just see it as filling a different role than mdadm. I won't go into the nuances here – just trust me when I say that I use and love both.
There are quite a few how-tos, walkthroughs, and tutorials out there on using mdadm. None that I found, however, came quite near enough to what I was trying to do on my newest computer system. And even when I did get it figured out, the how-tos I read failed to mention what turned out to be a very critical piece of information, the lack of which almost led to me destroying my newly-created array.
So without further ado, I will walk you through how I created a storage partition on a RAID 10 array using 4 hard drives (my system boots off of a single, smaller hard drive).
The first thing you want to do is make sure you have a plan of attack: What drives/partitions are you going to use? What RAID level? Where is the finished product going to be mounted?
One method that I’ve seen used frequently is to create a single array that’s used for everything, including the system. There’s nothing wrong with that approach, but here’s why I decided on having a separate physical drive for my system to boot from: simplicity. If you want to use a software RAID array for your boot partition as well, there are plenty of resources telling you how you’ll need to install your system and configure your boot loader.
For my setup, I chose a lone 80 GB drive to house my system. For my array, I selected four 750 GB drives. All 5 are SATA. After I installed Ubuntu 9.04 on my 80 GB drive and booted into it, it was time to plan my RAID array.
kromey@vmsys:~$ ls -1 /dev/sd*
/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
/dev/sde1
/dev/sde2
/dev/sde5
As you can probably tell, my system is installed on sde . While I would have been happier with it being labeled sda , it doesn’t really matter. sda through sdd then are the drives that I want to combine into a RAID.
I prefer to give mdadm partitions rather than raw disks, so the next step is to create partitions on my drives. Since I want to use each entire drive, I'll just create a single partition on each one. Using fdisk, I choose the fd (Linux raid autodetect) partition type and create a partition spanning the entire disk on each one. When I'm done, each drive looks like this:
kromey@vmsys:~$ sudo fdisk -l /dev/sda
Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 91201 732572001 fd Linux raid autodetect
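Before creating the array, it's worth knowing roughly what capacity to expect: a 4-drive RAID 10 with 2 near-copies stores every block twice, so usable space is half the raw total. A back-of-the-envelope check using the block count from the fdisk output above (md metadata and chunk rounding will shave a little off the real figure):

```shell
# Usable 1K blocks on a 4-drive RAID 10 (near=2): raw total / number of copies
per_drive_blocks=732572001  # from fdisk -l /dev/sda above
drives=4
copies=2
usable=$(( per_drive_blocks * drives / copies ))
echo "~${usable} blocks usable"
```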
Now that my partitions are in place, it’s time to pull out mdadm . I won’t re-hash everything that’s in the man pages here, and instead just demonstrate what I did. I’ve already established that I want a RAID 10 array, and setting that up with mdadm is quite simple:
kromey@vmsys:~$ sudo mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
A word of caution: mdadm --create will return immediately, and for all intents and purposes will look like it’s done and ready. It’s not – it takes time for the array to be synchronized. It’s probably usable before then, but to be on the safe side wait until it’s done. My array took about 3 hours (give or take – I was neither watching it closely nor holding a stopwatch!). Wait until your /proc/mdstat looks something like this:
kromey@vmsys:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdb1[1] sda1[0] sdc1[2] sdd1[3]
1465143808 blocks 64K chunks 2 near-copies [4/4] [UUUU]
Edit: As Jon points out in the comments, you can watch cat /proc/mdstat to get near-real-time status and know when your array is ready.
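If you'd rather script the check than eyeball /proc/mdstat, a simple heuristic is to look for an underscore in the member-status block: a degraded member shows as `_` instead of `U`. A sketch on sample input (read the real /proc/mdstat line in practice):

```shell
# Heuristic health check on an mdstat status block: any '_' means degraded.
status_block='[4/4] [UUUU]'
if printf '%s\n' "$status_block" | grep -q '_'; then
  health="degraded"
else
  health="healthy"
fi
echo "array ${health}"
```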
That’s it! All that’s left to do now is create a partition, throw a filesystem on there, and then mount it.
kromey@vmsys:~$ sudo fdisk /dev/md0
kromey@vmsys:~$ sudo mkfs -t ext4 /dev/md0p1
kromey@vmsys:~$ mkdir /srv/hoard
kromey@vmsys:~$ sudo mount /dev/md0p1 /srv/hoard/
Ah, how sweet it is!
kromey@vmsys:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sde1 71G 3.6G 64G 6% /
tmpfs 3.8G 0 3.8G 0% /lib/init/rw
varrun 3.8G 116K 3.8G 1% /var/run
varlock 3.8G 0 3.8G 0% /var/lock
udev 3.8G 184K 3.8G 1% /dev
tmpfs 3.8G 104K 3.8G 1% /dev/shm
lrm 3.8G 2.5M 3.8G 1% /lib/modules/2.6.28-14-generic/volatile
/dev/md0p1 1.4T 89G 1.2T 7% /srv/hoard
Now comes the gotcha that nearly sank me. Well, it wouldn't have been a total loss; I'd only copied data from an external hard drive to my new array, and could easily have done it again.
Everything I read told me that Debian-based systems (of which Ubuntu is, of course, one) were set up to automatically detect and activate your mdadm-created arrays on boot, and that you don't need to do anything beyond what I've already described. Now, maybe I did something wrong (and if so, please leave a comment correcting me!), but this wasn't the case for me, leaving me without an assembled array (while somehow making sdb busy so I couldn't manually assemble the array except in a degraded state!) after a reboot. So I had to edit my /etc/mdadm/mdadm.conf file like so:
kromey@vmsys:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
#DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 super-minor=0
# This file was auto-generated on Mon, 03 Aug 2009 21:30:49 -0800
# by mkconf $Id$
It certainly looks like my array should have been detected and started when I rebooted. I commented out the default DEVICE line and added an explicit one, then added an explicit declaration for my array; now it's properly assembled when my system reboots, which means the fstab entry doesn't provoke a boot-stopping error anymore, and life is all-around happy!
Update 9 April 2011: In preparation for a server rebuild, I’ve been experimenting with mdadm quite a bit more, and I’ve found a better solution to adding the necessary entries to the mdadm.conf file. Actually, two new solutions:
- Configure your RAID array during the Ubuntu installation. Your
mdadm.conf file will be properly updated with no further action necessary on your part, and you can even have those nice handy fstab entries to boot!
- Run the command
mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf in your terminal. Then, open up mdadm.conf in your favorite editor to put the added line(s) into a more reasonable location.
On my new server, I’ll be following solution (1), but on my existing system described in this post, I have taken solution (2); my entire file now looks like this:
kromey@vmsys:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=46c6f1ed:434fd8b4:0eee10cd:168a240d
# This file was auto-generated on Mon, 03 Aug 2009 21:30:49 -0800
# by mkconf $Id$
Notice that I'm again using the default DEVICE line, and notice the new ARRAY line that's been added. This seems to work a lot better; since making this change, I no longer experience the occasional (and strange) "device is busy" errors during boot (always complaining about /dev/sdb for some reason), making the boot-up process just that much smoother!
OS: CentOS 5.6 i386, CentOS 6.2 i386
Ossec Version: 2.6
Hardware: Virtual Machine (VirtualBox 4.1.14)
About
OSSEC is an open-source Host Intrusion Detection System (HIDS). OSSEC lets you monitor log files, check file integrity, and detect rootkits in a client-server environment.
OSSEC Server Installation
- Install wget and update your system
yum install wget -y
yum update -y
reboot
- If you are using CentOS 6 install EPEL repository
rpm -Uvh http://ftp.heanet.ie/pub/fedora/epel/6/i386/epel-release-6-7.noarch.rpm
- Install atomic repository on your system
wget -q -O - https://www.atomicorp.com/installers/atomic | sh
Press Enter to accept the terms
- Install OSSEC packages and apache for the WUI
yum install ossec-hids ossec-hids-server httpd php -y
- Download and extract ossec-wui
cd /var/www/html
wget http://www.ossec.net/files/ui/ossec-wui-0.3.tar.gz
tar zxvf ossec-wui-*.tar.gz
rm -f ossec-wui-*.tar.gz
mv ossec-wui-* ossec-wui
chown -R apache:apache /var/www/html/ossec-wui
- Download and install ossec-wui patches
mkdir /usr/local/src/ossec
cd /usr/local/src/ossec
wget http://www.dopefish.de/files/ossec/ossec-wui-0.3_ossec_2.6.patch.tgz
cd /var/www/html/ossec-wui
tar zxvf /usr/local/src/ossec/ossec-wui-0.3_ossec_2.6.patch.tgz
mkdir /var/www/html/ossec-wui/tmp
chown apache:apache /var/www/html/ossec-wui/tmp
- Edit the OSSEC configuration file: set the email parameters in the global section, and change the location of the Apache log files at the end of ossec.conf
vi /var/ossec/etc/ossec.conf
...
<global>
<email_notification>yes</email_notification>
<email_to>daniel.cid@xxx.com</email_to>
<smtp_server>smtp.xxx.com.</smtp_server>
<email_from>ossecm@ossec.xxx.com.</email_from>
</global>
...
<localfile>
<log_format>apache</log_format>
<location>/var/log/httpd/access_log</location>
</localfile>
<localfile>
<log_format>apache</log_format>
<location>/var/log/httpd/error_log</location>
</localfile>
- Add apache user to ossec group
usermod -G ossec apache
- Configure OSSEC to run at startup and start it
chkconfig ossec-hids on
service ossec-hids start
- Configure apache to run at startup and start it
chkconfig httpd on
service httpd start
That's it. The OSSEC server installation is complete. You can browse to http://ossec_srv_IP/ossec-wui. The default user and password are ossec/ossec.
After completing the server installation you can install new clients using the OSSEC agent installation guides.
Error Message: testing the Snort configuration generates the following message:
...
ERROR: snort_stream5_tcp.c(906) Could not initialize tcp session memory pool.
Fatal Error, Quitting..
- Fix: add more memory, or reduce max_tcp connections in the Snort configuration file
vi /usr/local/snort/etc/snort.conf
preprocessor stream5_global: track_tcp yes, \
track_udp yes, \
track_icmp no, \
max_tcp 162144, \
max_udp 131072, \
max_active_responses 2, \
min_response_seconds 5
OS: CentOS-6.2 i386, Ubuntu 12.04 x86_64 LTS, Ubuntu 10.04 x86_64 LTS, Ubuntu 11.10 i386
Snort Version: 2.9.2.2 IPv6 GRE (Build 121)
Hardware: VirtualBox 4.1.12
About
PulledPork is an open-source Perl script that automatically updates Snort rules.
Prerequisite
yum install perl-libwww-perl perl-Crypt-SSLeay perl-Archive-Tar -y
apt-get install libcrypt-ssleay-perl liblwp-useragent-determined-perl -y
Install PulledPork
- Download and extract PulledPork
cd /usr/local/src/snort
wget http://pulledpork.googlecode.com/files/pulledpork-0.6.1.tar.gz -O pulledpork.tar.gz
cd /usr/local/snort
tar zxvf /usr/local/src/snort/pulledpork.tar.gz
mv pulledpork-0.6.1 pulledpork
- Generate Oinkcode at Snort web site
- If you are not already registered on the Snort web site, do it now at https://www.snort.org/signup
- Login to Snort web site
- Go to Snort home page and Click on “Get Snort Oinkcode” at the bottom in “Snort Links” section
- Click Generate Code and copy your new Oinkcode
- Change the following in PulledPork configuration file
vi /usr/local/snort/pulledpork/etc/pulledpork.conf
...
rule_url=https://www.snort.org/reg-rules/|snortrules-snapshot.tar.gz|paste here your Oinknumber
# get the rule docs!
#rule_url=https://www.snort.org/reg-rules/|opensource.gz|
#rule_url=https://rules.emergingthreats.net/|emerging.rules.tar.gz|open
# THE FOLLOWING URL is for etpro downloads, note the tarball name change!
# and the et oinkcode requirement!
#rule_url=https://rules.emergingthreats.net/|etpro.rules.tar.gz|
...
rule_path=/usr/local/snort/etc/rules/snort.rules
...
local_rules=/usr/local/snort/etc/rules/local.rules
# Where should I put the sid-msg.map file?
sid_msg=/usr/local/snort/etc/sid-msg.map
...
# Path to the snort binary, we need this to generate the stub files
snort_path=/usr/local/snort/bin/snort
# We need to know where your snort.conf file lives so that we can
# generate the stub files
config_path=/usr/local/snort/etc/snort.conf
# This is the file that contains all of the shared object rules that pulledpork
# has processed, note that this has changed as of 0.4.0 just like the rules_path!
sostub_path=/usr/local/snort/etc/rules/so_rules.rules
...
distro=Ubuntu-10.04 # For CentOS 6.2 you can use RHEL-6-0
...
- Change RULE_PATH variable in snort configuration file
vi /usr/local/snort/etc/snort.conf
...
var RULE_PATH /usr/local/snort/etc/rules
...
- Remove all snort include rules files
sed -i '/^include $RULE_PATH/d' /usr/local/snort/etc/snort.conf
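A single pass of that sed deletes every matching include line, which you can confirm on a scratch file before touching the real snort.conf (running the same command repeatedly is redundant):

```shell
# Demonstrate the include-stripping sed on a scratch file (not the real conf).
f=$(mktemp)
printf '%s\n' 'include $RULE_PATH/a.rules' 'output unified2' 'include $RULE_PATH/b.rules' > "$f"
sed -i '/^include $RULE_PATH/d' "$f"
remaining=$(cat "$f")
rm -f "$f"
echo "$remaining"
```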
- Add the following include files to snort configuration file
echo "include \$RULE_PATH/snort.rules" >> /usr/local/snort/etc/snort.conf
echo "include \$RULE_PATH/local.rules" >> /usr/local/snort/etc/snort.conf
echo "include \$RULE_PATH/so_rules.rules" >> /usr/local/snort/etc/snort.conf
mkdir /usr/local/snort/etc/rules
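The backslash before $RULE_PATH in those echo commands matters: it stops the shell from expanding an (empty) variable, so the literal string lands in snort.conf. A quick check against a scratch file:

```shell
# Verify the escaping writes a literal $RULE_PATH (scratch file, not the real conf).
conf=$(mktemp)
echo "include \$RULE_PATH/snort.rules" >> "$conf"
line=$(cat "$conf")
rm -f "$conf"
echo "$line"
```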
- Create your local rules file
cp /usr/local/snort/rules/local.rules /usr/local/snort/etc/rules/
- If you don’t have local rules file then create an empty one
touch /usr/local/snort/etc/rules/local.rules
- Run PulledPork for the first time
/usr/local/snort/pulledpork/pulledpork.pl -c /usr/local/snort/pulledpork/etc/pulledpork.conf
- Schedule PulledPork to run every day. Add the following line to the end of crontab file
vi /etc/crontab
...
0 0 * * * root /usr/local/snort/pulledpork/pulledpork.pl -c /usr/local/snort/pulledpork/etc/pulledpork.conf
...
PulledPork installation is complete. PulledPork will now run every day and update your rules files from the Snort site.
For more information about PulledPork go to http://code.google.com/p/pulledpork/.
Tested On
OS: CentOS 6.2 i386, CentOS x86_64, CentOS 5.7, Ubuntu 10.04 LTS
Snort Version: 2.9.2.3 IPv6 GRE (Build 205)
Hardware: Virtual Machine (VirtualBox 4.1.8)
About
Snort is a Network Intrusion Detection System (NIDS). Snort can sniff your network and alert you, based on its rule DB, if there is an attack on your computer network. It is an open-source system built on top of tcpdump (the Linux sniffer tool).
This guide can be used for installing Snort only, or as part of a series for installing Snort, Barnyard and BASE, or Snort, Barnyard and Snorby.
Prerequisite
- Update your system using yum update and reboot
yum update -y
reboot
- Install rpm forge repository
rpm -Uhv http://apt.sw.be/redhat/el6/en/i386/rpmforge/RPMS/rpmforge-release-0.5.2-2.el6.rf.i686.rpm
rpm -Uhv http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
- Install PCRE, libdnet and more prerequisite packages
yum install libdnet libdnet-devel pcre pcre-devel gcc make flex byacc bison kernel-devel libxml2-devel wget -y
- Create dir for Snort prerequisite sources
mkdir /usr/local/src/snort
cd /usr/local/src/snort
- Download and install libpcap
wget http://www.tcpdump.org/release/libpcap-1.3.0.tar.gz -O libpcap.tar.gz
tar zxvf libpcap.tar.gz
cd libpcap-*
./configure && make && make install
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig -v
- Download and install DAQ
cd /usr/local/src/snort
wget http://www.snort.org/downloads/1623 -O daq.tar.gz
tar zxvf daq.tar.gz
cd daq-*
./configure && make && make install
ldconfig -v
- Create snort user and group
groupadd snort
useradd -g snort snort
Install Snort
- Download and install Snort
cd /usr/local/src/snort
wget http://www.snort.org/downloads/1631 -O snort.tar.gz
tar zxvf snort.tar.gz
cd snort-2*
./configure --prefix /usr/local/snort --enable-sourcefire && make && make install
- Create links for Snort files
ln -s /usr/local/snort/bin/snort /usr/sbin/snort
ln -s /usr/local/snort/etc /etc/snort
- Configure Snort startup script to run at startup
cp rpm/snortd /etc/init.d/
chmod +x /etc/init.d/snortd
cp rpm/snort.sysconfig /etc/sysconfig/snort
chkconfig --add snortd
- Delete following lines from snort startup file
vi /etc/init.d/snortd
...
# check if more than one interface is given
if [ `echo $INTERFACE|wc -w` -gt 2 ]; then
...
else
# Run with a single interface (default)
daemon /usr/sbin/snort $ALERTMODE $BINARY_LOG $NO_PACKET_LOG $DUMP_APP -D $PRINT_INTERFACE $INTERFACE -u $USER -g $GROUP $CONF -l $LOGDIR $PASS_FIRST $BPFFILE $BPF
fi
- Comment out the following variables in /etc/sysconfig/snort and add a trailing / to the LOGDIR variable
vi /etc/sysconfig/snort
...
LOGDIR=/var/log/snort/
...
#ALERTMODE=fast
...
#BINARY_LOG=1
...
- Download Snort rules files from http://www.snort.org/snort-rules to /usr/local/src/snort
You have to register on the site in order to get the free registered-user rules,
or you can pay and get the most up-to-date rules as a "Subscriber"
- Extract the rules file into the newly created directory
cd /usr/local/snort
tar zxvf /usr/local/src/snort/snortrules-snapshot-2*
- Create directory for snort logging
mkdir -p /usr/local/snort/var/log
chown snort:snort /usr/local/snort/var/log
ln -s /usr/local/snort/var/log /var/log/snort
- Create links for dynamic rules files and directories
ln -s /usr/local/snort/lib/snort_dynamicpreprocessor /usr/local/lib/snort_dynamicpreprocessor
ln -s /usr/local/snort/lib/snort_dynamicengine /usr/local/lib/snort_dynamicengine
ln -s /usr/local/snort/lib/snort_dynamicrules /usr/local/lib/snort_dynamicrules
chown -R snort:snort /usr/local/snort
- Comment out or delete all reputation preprocessor configuration lines from snort.conf and configure the output plugin
vi /usr/local/snort/etc/snort.conf
...
#preprocessor reputation: \
# memcap 500, \
# priority whitelist, \
# nested_ip inner, \
# whitelist $WHITE_LIST_PATH/white_list.rules, \
# blacklist $BLACK_LIST_PATH/black_list.rules
...
output unified2: filename snort.log, limit 128
...
- Create the dynamic rules directory and copy the precompiled SO rules that match your architecture (use the i386 or x86-64 line below)
mkdir /usr/local/snort/lib/snort_dynamicrules
cp /usr/local/snort/so_rules/precompiled/RHEL-6-0/i386/2.9*/*so /usr/local/snort/lib/snort_dynamicrules/
cp /usr/local/snort/so_rules/precompiled/RHEL-6-0/x86-64/2.9*/*so /usr/local/snort/lib/snort_dynamicrules/
snort -c /usr/local/snort/etc/snort.conf --dump-dynamic-rules=/usr/local/snort/so_rules
- Enable snort dynamic rules configuration in the end of snort.conf file
vi /usr/local/snort/etc/snort.conf
...
# dynamic library rules
include $SO_RULE_PATH/bad-traffic.rules
include $SO_RULE_PATH/chat.rules
include $SO_RULE_PATH/dos.rules
include $SO_RULE_PATH/exploit.rules
include $SO_RULE_PATH/icmp.rules
include $SO_RULE_PATH/imap.rules
include $SO_RULE_PATH/misc.rules
include $SO_RULE_PATH/multimedia.rules
include $SO_RULE_PATH/netbios.rules
include $SO_RULE_PATH/nntp.rules
include $SO_RULE_PATH/p2p.rules
include $SO_RULE_PATH/smtp.rules
include $SO_RULE_PATH/snmp.rules
include $SO_RULE_PATH/specific-threats.rules
include $SO_RULE_PATH/web-activex.rules
include $SO_RULE_PATH/web-client.rules
include $SO_RULE_PATH/web-iis.rules
include $SO_RULE_PATH/web-misc.rules
...
snort -c /usr/local/snort/etc/snort.conf -T
- Update Snort rules automatically
PulledPork is an open-source Perl script that can update your rules files automatically. To install PulledPork, please follow the guide Configure Snort automatic rules updating with PulledPork.
Snort installation is complete. Now that we have a Snort server writing its data in binary format, we need to install Barnyard. Barnyard is an application that reads Snort's binary files and can output the data to a MySQL server for use with PHP web applications.
Here is a link for the Barnyard installation.
Please visit http://www.snort.org/ for more information about Snort configuration and usage.