“systemd is a suite of basic building blocks for a Linux system.
It provides a system and service manager that runs as PID 1 and starts the rest of the system.
systemd… offers on-demand starting of daemons, keeps track of processes using Linux control groups, …and implements an elaborate transactional dependency-based service control logic.”
These instructions came about from a need to run a queue manager at boot time, and seem to work well for me. If you have any feedback, please add a comment to this blog entry.
Creating a simple systemd service
In order to run as a systemd service, you need to create a “unit” file.
The following is a simple unit file for running MQ, which should be saved in /etc/systemd/system/testqm.service
[Unit]
Description=IBM MQ V8 queue manager testqm
After=network.target
[Service]
ExecStart=/opt/mqm/bin/strmqm testqm
ExecStop=/opt/mqm/bin/endmqm -w testqm
Type=forking
User=mqm
Group=mqm
KillMode=none
LimitNOFILE=10240
Let’s break down the key parts of this file:
ExecStart and ExecStop give the main commands to start and stop the queue manager service.
Type=forking tells systemd that the strmqm command is going to fork to another process, so systemd shouldn’t worry about the strmqm process going away.
KillMode=none tells systemd not to try sending SIGTERM or SIGKILL signals to the MQ processes, as MQ will ignore these if they are sent.
LimitNOFILE is needed because systemd services are not subject to the usual PAM-based limits (for example, in /etc/security/limits.conf), so we need to make sure MQ can have enough open files.
After=network.target (which belongs in the [Unit] section) makes sure that MQ is only started after the network stack is available. Note that this doesn't necessarily mean that IP addresses are available, just that the network stack is up. This option is particularly important because it affects the shutdown sequence, ensuring that the MQ service is stopped before the network is stopped.
In order to try out the service, you first need to tell systemd to reload its configuration, which you can do with the following command:
systemctl daemon-reload
Assuming you’ve already created a queue manager called “testqm”, you can now start it as follows:
systemctl start testqm
You can then see the status of the systemd service as follows:
systemctl status testqm
This should show something like this:
● testqm.service – IBM MQ V8 queue manager testqm
Loaded: loaded (/etc/systemd/system/testqm.service; static; vendor preset: disabled)
Active: active (running) since Wed 2016-04-13 10:06:51 EDT; 3s ago
Process: 2351 ExecStart=/opt/mqm/bin/strmqm testqm (code=exited, status=0/SUCCESS)
Main PID: 2354 (amqzxma0)
CGroup: /system.slice/testqm.service
├─2354 /opt/mqm/bin/amqzxma0 -m testqm -u mqm
├─2359 /opt/mqm/bin/amqzfuma -m testqm
├─2364 /opt/mqm/bin/amqzmuc0 -m testqm
├─2379 /opt/mqm/bin/amqzmur0 -m testqm
├─2384 /opt/mqm/bin/amqzmuf0 -m testqm
├─2387 /opt/mqm/bin/amqrrmfa -m testqm -t2332800 -s2592000 -p2592000 -g5184000 -c3600
├─2398 /opt/mqm/bin/amqzmgr0 -m testqm
├─2410 /opt/mqm/bin/amqfqpub -mtestqm
├─2413 /opt/mqm/bin/runmqchi -m testqm -q SYSTEM.CHANNEL.INITQ -r
├─2414 /opt/mqm/bin/amqpcsea testqm
├─2415 /opt/mqm/bin/amqzlaa0 -mtestqm -fip0
└─2418 /opt/mqm/bin/amqfcxba -m testqm
Apr 13 10:06:50 rmohan.com systemd[1]: Starting IBM MQ V8 queue manager testqm…
Apr 13 10:06:50 rmohan.com strmqm[2351]: WebSphere MQ queue manager ‘testqm’ starting.
Apr 13 10:06:50 rmohan.com strmqm[2351]: The queue manager is associated with installation ‘Installation1’.
Apr 13 10:06:50 rmohan.com strmqm[2351]: 5 log records accessed on queue manager ‘testqm’ during the log replay phase.
Apr 13 10:06:50 rmohan.com strmqm[2351]: Log replay for queue manager ‘testqm’ complete.
Apr 13 10:06:50 rmohan.com strmqm[2351]: Transaction manager state recovered for queue manager ‘testqm’.
Apr 13 10:06:51 rmohan.com strmqm[2351]: WebSphere MQ queue manager ‘testqm’ started using V8.0.0.4.
Apr 13 10:06:51 rmohan.com systemd[1]: Started IBM MQ V8 queue manager testqm.
You can see that systemd has identified amqzxma0 as the main queue manager process. You will also spot that there is a Linux control group (cgroup) for the queue manager. The use of cgroups allows you to specify limits on memory and CPU for your queue manager. You could of course do this without systemd, but it's helpfully done for you now. This doesn't constrain your processes by default, but gives you the option to easily apply limits to CPU and memory in the future. Note that you can still run MQ commands like runmqsc as normal. If you run strmqm testqm, you will start the queue manager as normal, as your current user, in your user cgroup. It is perhaps better to get in the habit of running systemctl start testqm instead, to make sure you're using your configured settings, and running in the correct cgroup.
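As an illustration of that option, you could add resource-control directives to the [Service] section of the unit file, for example to cap memory usage or weight CPU scheduling. The values below are purely illustrative; size them for your own workload:
MemoryLimit=2G
CPUShares=512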
Templated service
If you have multiple queue managers, it would be nice to not duplicate the service unit file many times. You can create templated services in systemd to do this. Firstly, stop your testqm service using the following command:
systemctl stop testqm
Next, rename your unit file to `mq@.service`, and edit the file to replace all instances of the queue manager name with “%I”. After doing a daemon-reload again, you can now start your “testqm” queue manager by running the following command:
systemctl start mq@testqm
The full name of the service created will be “mq@testqm.service”, and you can use it just as before.
As it stands, you are supplying the name of the queue manager on the command line, so what about system startup? To have your queue managers started automatically at boot, the trick is to add an "[Install]" section to your unit file, giving the following:
[Unit]
Description=IBM MQ V8 queue manager %I
After=network.target
[Service]
ExecStart=/opt/mqm/bin/strmqm %I
ExecStop=/opt/mqm/bin/endmqm -w %I
Type=forking
User=mqm
Group=mqm
KillMode=none
LimitNOFILE=10240
[Install]
WantedBy=multi-user.target
After doing a daemon-reload, you can now “enable” a new service instance with the following command:
systemctl enable mq@testqm
You can, of course, run this many times, once for each of your queue managers. Using the “enable” command causes systemd to create symlinks on the filesystem for your particular service instances. In this case, we’ve said that the “multi-user” target (kind of like the old “runlevel 3”), should “want” our queue managers to be running. This basically means that when the system boots into a multi-user mode, the start up of our queue managers should be initiated. They will still be subject to the “After” rule we defined earlier.
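If you want to see exactly what "enable" did, you can list the wants directory for the multi-user target; you should see a symlink named after each enabled instance pointing back at the mq@.service template:
ls -l /etc/systemd/system/multi-user.target.wants/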
Summary
systemd is a powerful set of tools, and we've really only scratched the surface here. In this blog entry, we've made the first useful step of ensuring that queue managers are hooked correctly into the lifecycle of the server they're running on. Doing this is very important for failure recovery. Using systemd instead of the old-style init.d scripts should help improve your server's boot time, as well as providing additional benefits such as the use of cgroups for finer-grained resource control. It's possible to set up more sophisticated dependencies for your units, if (say) you wanted to ensure your client applications were always started after the queue manager, or you wanted to wait for a mount point to become available. Be careful with adding too many dependencies though, as this could slow down your boot time.
I’m sure there are many of you, dear blog readers, who can recommend further changes or tweaks that helped in your environment. Please share your thoughts in the comments.
Delete local queue – DELETE command
Delete the local queue QL.A on the runmqsc command interface (error case).
delete ql(QL.A)
1 : delete ql(QL.A)
AMQ8143: WebSphere MQ queue not empty.
The message indicates that QL.A still contains messages and therefore cannot be deleted. Empty QL.A with the CLEAR command and then delete it again.
clear ql(QL.A)
2 : clear ql(QL.A)
AMQ8022: WebSphere MQ queue cleared.
delete ql(QL.A)
3 : delete ql(QL.A)
AMQ8007: WebSphere MQ queue deleted.
By adding the PURGE option to the DELETE command, the queue can be deleted even if it still contains messages.
dis ql(QL.B) curdepth
1 : dis ql(QL.B) curdepth
AMQ8409: Display Queue details.
QUEUE(QL.B)   TYPE(QLOCAL)
CURDEPTH(1)
delete ql(QL.B) purge
2 : delete ql(QL.B) purge
AMQ8007: WebSphere MQ queue deleted.
You may have a database server which started out small, with all its databases stored on the same disks, that is now experiencing severe storage I/O bottlenecks. With so many heavily accessed databases on the same storage device your queries are timing out while waiting for response from disk. And despite all your efforts in optimizing the databases and queries, there has come a time where the disks just can’t keep up. For this type of scenario, you need to spread your load across more storage devices.
Sadly, MySQL doesn't have an option to configure separate storage paths for each database like more enterprise-oriented database servers do. The solution is to symbolically link your databases from the new storage device into the MySQL data home directory.
As long as the new location has the proper ownership and SELINUX context, this fools MySQL into believing your migrated databases still exist in the data home directory.
Objectives
- Prepare new storage for databases.
- Move I/O-heavy databases to separate storage.
Scenario
We have a MySQL 5.1 server hosting five databases on a single disk. One of the five databases is flooding the disk with I/O due to its work profile and needs to be moved to separate storage. The database information is shown below.
Database | Old Data Location | New Data Location
webapp02 | /var/lib/mysql/webapp02 | /Databases/webapp02
The storage for the database:
Device Name | Type | Configuration
sdb | SCSI | 4 physical disks in RAID 10
Preparing Your New Storage
- Attach the new storage devices to the server.
- Create a single partition and format it with a filesystem.
- Create a root directory which will be used to contain mount points for your new storage.
mkdir /Databases
- For each database being migrated, create a folder for its storage device to mount to.
mkdir /Databases/webapp02
- Set the SELinux context type of the new directories to mysqld_db_t to allow MySQL access to them.
chcon -R -t mysqld_db_t /Databases
- Modify /etc/fstab so that the new storage device is mounted to /Databases/webapp02 at boot, as shown in the example below.
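For example, assuming the new RAID device shows up as /dev/sdb1 and was formatted with ext4 (adjust the device name and filesystem type for your environment), the fstab entry might look like this:
/dev/sdb1    /Databases/webapp02    ext4    defaults    0 0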
Copying Databases to New Storage
- Copy your database's files to the new location, using cp with --preserve=all to maintain ownership and SELinux contexts.
cp -r --preserve=all /var/lib/mysql/mydb1 /new-mydb1-location
- Verify the SELINUX context is applied correctly to the directories and files.
ls -lZ /Databases && ls -lZ /Databases/*
Point MySQL to New Database Locations
- Stop the MySQL daemon.
service mysqld stop
- Navigate to the MySQL data home directory, which is /var/lib/mysql by default.
- Delete the database directories for the databases being migrated, making note of the directory names. They'll be needed in the next step.
- Create soft links to the storage of each database being migrated. The link file names must match the original database directory names.
ln -s /Databases/myappdb1 myappdb1
- Repeat the process for every database being moved.
- After all databases have been migrated, restart the MySQL daemon.
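Assuming the same init script used to stop the daemon earlier, the command would be:
service mysqld start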
- If all goes well, MySQL should start correctly. If it does not, check the system logs for SELinux context errors.
SQL databases are very good at storing and retrieving data, and they can do so quickly. However, no matter how well you tune your database servers, there will come a time during periods of high traffic when your database server becomes a large bottleneck. By utilizing technologies like Memcache, we can keep the results of frequently used database queries in a cache stored in RAM. Using the cached results significantly decreases the amount of time and effort needed to retrieve data and present it in our application.
Memcache is what’s known as an in-memory key-value store. The key is a unique identifier that is used to quickly search for cached strings or objects. The value is the data that has been cached. For the purpose of storing database query results, the key will typically be the query used on your database.
Key-value Pairs
A key-value pair is essentially an array of data. If you have any experience in programming, you will have a good understanding of how data is stored in Memcache. If we were to present a key-value pair for a database query in an easily read form, it would look similar to the example below.
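As a purely illustrative example (the query and result below are made up), a cached query result might be stored like this:
Key:   "SELECT name, price FROM products WHERE id = 42"
Value: {"name": "Widget", "price": 9.99}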
Searching for long text strings isn't very efficient, so storing your keys as such is a bad idea. The example above is used just to illustrate how data is stored in cache. In a typical environment you would convert your key (the SQL query) into an MD5 hash, for example, before storing or retrieving data from Memcache.
Typical Infrastructure
The diagram below illustrates how your infrastructure will typically look when you deploy Memcache in your environment. You will notice that the Memcache server doesn’t communicate directly with your database servers. Instead, they sit in their own pool and your application does all of the work.
Your application will first query the Memcached server(s) for cached database results. If nothing is found, the application will then query your database server(s). Any cached results from the database server will then be written to the Memcache server(s) by your application.
Of course, you can't just simply drop a Memcached server in and expect your application to be able to use it. Your application will have to be modified to utilize the Memcache server. This is outside the scope of this tutorial, but it is important that you know this.
Hardware Requirements
The hardware requirements for Memcache servers are low. There is very little CPU processing involved and virtually no disk storage needed beyond the operating system. The only resource you really need is RAM. How much will depend on what is being cached and how long cached items are kept.
Installing Memcached
Memcached can be installed anywhere in your infrastructure. For small environments, you may install it on the web application server itself. However, it's recommended that you create a separate server instance for Memcached. This allows your web application server to focus on just being an application server.
Ubuntu
Memcached is available in the default repositories. To install it, you can run the following command.
sudo apt-get install memcached
CentOS
Memcached is available in the default repositories. To install it, you can run the following command.
yum install memcached
Configuring Memcached
The default configuration should work fine for testing. However, you may want to fine tune it to better fit your server’s hardware in production.
Ubuntu
- Open the configuration file into a text editor.
sudo nano /etc/memcached.conf
- To increase the memory cap, look for the following line. The default value is 64MB.
-m 64
- Change the IP address Memcached will listen on. The address should be accessible to your application server.
-l 192.168.1.40
- Limit how many concurrent connections the server will accept. The default is 1024. Limiting connections is important to ensure the server isn’t overwhelmed with requests.
-c 1024
- Save your changes and exit the text editor.
- Restart Memcached to apply your changes.
sudo service memcached restart
CentOS
- Open the configuration file into a text editor.
vi /etc/sysconfig/memcached
- Modify the MAXCONN value to increase or decrease the maximum amount of connections the server can handle. This will be based on your hardware. To determine the appropriate value, you will need to stress test the server.
MAXCONN="1024"
- Modify the CACHESIZE value to increase or decrease the memory cap. This value will depend on how much RAM is available in your server.
CACHESIZE="64"
- Exit the text editor.
- Restart Memcached to apply your changes.
service memcached restart
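- To confirm the daemon is running and listening on its configured address (port 11211 by default), you can use the memcached-tool script that ships with the package, or simply check for the listening socket. The IP below is the example address used earlier; substitute your own.
memcached-tool 192.168.1.40:11211 stats
netstat -tlnp | grep memcached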
The distributed file system called Network File System (NFS) allows client computers to access files hosted on other computers over the network. Unlike other network file sharing protocols, such as Microsoft's SMB, NFS shares must be mounted on the client before they can be accessed.
Install & Configure NFS
A base server installation of CentOS 6 includes the packages required for NFS. A minimal installation, on the other hand, does not and they will have to be installed. You can ignore this step if you did a base server installation.
- Install NFS and its common utilities using yum.
yum install nfs-utils
- Configure the NFS service to run at boot.
chkconfig nfs on
- Configure the rpcbind service to run at boot. This service is required by NFS and must be running before NFS can be started.
chkconfig rpcbind on
- Start the rpcbind service.
service rpcbind start
- Start the NFS service.
service nfs start
Prepare NFS Exports
Create Directory for Export
The first step to sharing files is to create a directory that will be ‘exported’ to our client computers.
- Create a directory for our first export.
mkdir -p /exports/documents
Export Directory
By exporting a directory, we are allowing clients to mount it over the network.
- Open the NFS exports file into a text editor.
nano /etc/exports
- To export the document directory created earlier to a specific client with readwrite access, add the following line.
/exports/documents desktop01.serverlab.intra(rw)
- To export the directory to a clients with hostnames ranging from desktop01 to desktop09 with readwrite access, add the following line.
/exports/documents desktop0[1-9].serverlab.intra(rw)
- To export the directory to all clients on a specific network with readwrite access, add the following line.
/exports/documents 172.30.1.0/24(rw)
- To export the directory to everyone with read-only access, add the following line.
/exports/documents *(ro)
- Save your changes and exit the text editor.
- Export the directory defined above.
exportfs -a
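- You can confirm which directories are being exported, and to which clients, with either of the following commands.
exportfs -v
showmount -e localhost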
Export Options
In the examples above, we are using only a single option, either read/write or read-only, for our export. Here is a list of additional options that can be used in combination.
rw | Allow both read and write requests on the NFS volume.
async | Do not wait for acknowledgement that data has been committed to disk. This improves performance at the cost of data integrity.
sync | Wait for acknowledgement that data is committed to disk.
root_squash | Map requests from uid/gid 0, the root account and group IDs, to the anonymous uid/gid. This prevents root access to exports.
no_root_squash | Do not squash root's privileges.
Assign Static Ports to NFS
The default configuration for NFS is to use random ports for client connections. This isn’t desirable in environments where port counts need to be limited, for security reasons. Asking your network administrator to poke one thousand holes into the firewall isn’t going to make you very many friends. Luckily, we can configure NFS to use only specific ports that are easier to secure.
- Open the NFS network configuration file.
nano /etc/sysconfig/nfs
- Uncomment the port assignment lines (RQUOTAD_PORT, LOCKD_TCPPORT, LOCKD_UDPPORT, MOUNTD_PORT, STATD_PORT, and STATD_OUTGOING_PORT), as shown uncommented in the listing below.
#
# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
#MOUNTD_NFS_V2="no"
#MOUNTD_NFS_V3="no"
#
#
# Path to remote quota server. See rquotad(8)
#RQUOTAD="/usr/sbin/rpc.rquotad"
# Port rquotad should listen on.
RQUOTAD_PORT=875
# Optinal options passed to rquotad
#RPCRQUOTADOPTS=""
#
#
# Optional arguments passed to in-kernel lockd
#LOCKDARG=
# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769
#
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Number of nfs server processes to be started.
# The default is 8.
#RPCNFSDCOUNT=8
# Stop the nfsd module from being pre-loaded
#NFSD_MODULE="noload"
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
#
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
#RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
#
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
#STATDARG=""
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
#RPCIDMAPDARGS=""
#
# Set to turn on Secure NFS mounts.
#SECURE_NFS="yes"
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
#RPCGSSDARGS=""
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
#RPCSVCGSSDARGS=""
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
#RDMA_PORT=20049
- Save your changes and exit the text editor.
- Restart the rpcbind service.
service rpcbind restart
- Restart the NFS service.
service nfs restart
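- To verify that the services are now registered on the static ports you assigned, query the portmapper and compare the output against the ports configured above.
rpcinfo -p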
Open the Firewall
Your clients won't be able to access the export we just created if the firewall is blocking them, which it does by default. There are a few ways we can open access, depending on how the server was installed. If you have the System Config Firewall utility installed (system-config-firewall-tui), you can open up access using a simple text-based interface. Otherwise, you can use IPTables directly and create your own policy.
Use System Config Firewall to Allow Access
- Run System Config Firewall
system-config-firewall-tui
- Ensure the Firewall is enabled.
- Navigate to Customize by pressing Tab, and then press Enter.
- Scroll down the list of trusted services, and select NFS4.
- Navigate to the Forward button by pressing tab, and then press Enter.
- Add the following ports, for both TCP and UDP.
- 111
- 32803
- 32769
- 892
- 875
- 662
- 2020
- Navigate to the Close button by pressing tab, and then press Enter.
- Navigate to the OK button by pressing tab, and then press Enter.
- Your settings will be applied and access will be open to incoming NFS requests.
Configure Firewall Using IPTables Directly
Although System Config Firewall is a convenient way to open access, you can also add the required rules to the IPTables policy yourself.
- Open the IPTables policies configuration file into a text editor.
nano /etc/sysconfig/iptables
- Add rules for the NFS ports (2049, 111, 32803, 32769, 892, 875, 662, and 2020) so the policy looks like the following.
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2020 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 2020 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
- Restart IPtables to apply the new rules.
service iptables restart
Mount NFS Exports on Client
Temporary Mount
If the export is only needed temporarily to quickly access a file, you can use the mount command to mount the NFS export. This method will not survive a reboot.
- Log onto the client computer.
- Verify that the NFS export from the server is visible.
showmount --exports <ip or hostname of nfs server>
- The output of the showmount command will list visible exports. In this example, the NFS server’s IP address is 172.30.1.213.
Export list for 172.30.1.213:
/exports/dept *
- Make a directory to mount the export into.
mkdir /dept
- Mount the export.
mount -t nfs 172.30.1.213:/exports/dept /dept
Persistent Mount using fstab
If the export needs to always be available, you can configure fstab to mount the export at boot.
- Log onto the client computer.
- Verify that the NFS export from the server is visible.
showmount --exports <ip or hostname of nfs server>
- The output of the showmount command will list visible exports. In this example, the NFS server’s IP address is 172.30.1.213.
Export list for 172.30.1.213:
/exports/dept *
- Make a directory to mount the export into.
mkdir /dept
- Open fstab configuration file into a text editor.
nano /etc/fstab
- Add the following line:
172.30.1.213:/exports/dept /dept nfs defaults 0 0
- Save your changes and exit the text editor.
- Mount the NFS export.
mount -a
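- You can confirm the export mounted successfully with either of the following commands.
df -h /dept
mount | grep /dept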
Distributed File Systems (DFS) are used to easily access multiple CIFS file shares, hosted on separate servers, using a single namespace on your network. The actual shares can either be hosted locally on the DFS server itself or on separate servers.
The benefit of using a single namespace for your users to access your shares is they don’t have to remember the names of each of your file servers; all shares can be accessed through one root name, regardless of where the actual file server resides. The access request will be redirected to the appropriate servers, all without the users’ knowledge.
Another benefit to using DFS, and it’s a major one, is the ability to migrate shared data from one storage platform or server to another, transparently and quickly without your users ever knowing. You simply redirect a DFS hosted share to a new CIFS share, without ever having to change the share’s name.
With standard CIFS file servers, you either need to send your users to a new share path after migration, or take downtime while flipping between the new and old file server hardware, a possibly lengthy outage depending on the complexity of your shares and their permissions.
Before you Begin
This lab will be based on the following configuration.
- One server running CentOS 6.X.
- An Internet connection to access the CentOS package repository.
Installing and Configuring Samba
- Install Samba.
yum install samba
- Open Samba’s configuration file into a text editor, like VIM or Gedit.
vi /etc/samba/smb.conf
- Under the [global] settings, define the following options, modifying the values to match your environment.
[global]
workgroup = WORKGROUP
netbios name = MY-DFS-SERVER
host msdfs = yes
Understanding the options we are defining:
workgroup | The name of your Windows peer-to-peer network. Microsoft's default is WORKGROUP.
netbios name | The single-label computer name used by Windows machines on a network. Set this to the name you want your Samba server to have on the network.
host msdfs | The required parameter which tells Samba to act as a DFS host.
Set Samba Permissions for SELinux
SELinux on CentOS will block connections to Samba shares by default. We'll need to lift this restriction to allow users to access our DFS and to grant them read/write permissions.
- Allow read and write permission to our Samba shares (the -P flag makes the change persistent across reboots):
setsebool -P samba_export_all_rw on
Create the DFS Root Directory
The DFS root directory is where you create your DFS targets – links to other CIFS shares. Targets are created as symbolic links to your other CIFS shares, using the msdfs syntax. The shares can exist on any server. To keep things simple for this lab, we'll create a CIFS share on the DFS root server which we will create a target for.
- Create the directory that will host the CIFS targets:
mkdir -p /export/dfsroot
- Make sure the DFS root directory is owned by Root.
chown root:root /export/dfsroot
- Secure the DFS root to protect the DFS targets from changes by unauthorized users:
chmod 755 /export/dfsroot
Create a Share to the DFS Root Directory
Next we’ll need to share the DFS root directory to allow our users to access our DFS targets from a single name-space. To do this, we’ll need to open up Samba’s configuration file and add a new share.
- Open Samba’s configuration file in text editor, like VIM:
vi /etc/samba/smb.conf
- Near the bottom of the configuration file, create a Samba share for the DFS root by adding these lines, replacing the values to match your environment:
[dfs]
comment = DFS Root Share
path = /export/dfsroot
browsable = yes
msdfs root = yes
read only = no
Create a Share to be Used as a DFS Target
The DFS targets are the CIFS shares that a DFS root server will provide access to from a single name-space. These shares can be hosted on the local DFS root server or, ideally, on a separate server. To keep things simple, we’ll create our first share on the DFS root server.
- Prepare a directory to be shared out
mkdir -p /export/samba/finance
- Define the share by opening Samba’s configuration file into a text editor, like VIM:
vi /etc/samba/smb.conf
- Near the bottom of the file, configure your share so it looks similar to the following example:
[finance]
path = /export/samba/finance
public = yes
writable = yes
browseable = yes
Add DFS Targets
Finally, it’s time to add some targets to our DFS server. The CIFS shares that we target can be on any host, Linux or Windows. To keep things simple in this tutorial, we’ll create a share on the local DFS host that we’ll use as a DFS target.
- Create a symbolic link to your DFS target. The backslash needs to be protected from the shell, so quote the link target.
ln -s 'msdfs:my-dfs-server\finance' /export/dfsroot/finance
Warning: The symbolic link file for the msdfs share must be lowercase. If it is not, you will get errors when trying to connect to the share.
- Restart the Samba daemon to apply our changes.
service smb restart
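- If the smbclient utility is installed, you can do a quick local check that the dfs and finance shares are being offered. Depending on your security settings, you may need to supply a valid Samba user with -U instead of the anonymous -N option.
smbclient -L localhost -N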
Overview
In small environments, administering Linux servers using only local accounts is manageable. However, in large environments hosting many hundreds or thousands of servers, manually maintaining user accounts and passwords on each server would be a very daunting task. A central Identity and Access solution is required to effectively manage such environments. In large Microsoft Windows datacenters, you typically see Active Directory being used as the Identity and Access solution.
Samba is able to connect to your Active Directory domain to authenticate user credentials from your Windows environment. However, since Samba does not maintain a central identity store, UIDs and GIDs for each user will be different between each Samba server.
Where Does This Fit In
- A small Linux environment in a Windows-based infrastructure
Before You Begin
Before you move ahead with this tutorial, there are a few prerequisites that must be met in your environment.
- Active Directory Domain
- Identity Management for Unix installed on domain controllers.
- One CentOS 6 server
- This lab will use the following variables. You’ll need to modify these to match your own environment.
Domain | CONTOSO.COM
Domain Controller | DC01.CONTOSO.COM
Samba Server Name | LINUX-SRV1
Install Required Linux Packages
Install the following packages onto your Linux machine. You will not be able to join the Active Directory domain or authenticate using domain credentials without them.
- Samba
- Samba-winbind
- oddjob-mkhomedir
To install all three packages at the same time, run the following command as Root or with Root privileges.
yum install samba samba-winbind oddjob-mkhomedir
Configuring Samba
Samba is a critical component that allows Linux to interact with Windows. It must be configured to make the Linux server appear as a Windows computer on the network, using NetBIOS broadcasts and domain prefixes.
- Make a backup copy of /etc/samba/smb.conf
cp /etc/samba/smb.conf /etc/samba/smb.conf.old
- Open /etc/samba/smb.conf into a text editor. For this example, I’ll use VI.
vi /etc/samba/smb.conf
- Edit smb.conf to resemble the example below, modifying the values to match your environment.
[global]
log file = /var/log/samba/log.%m
max log size = 50
security = ads
netbios name = LINUX-SRV1
realm = CONTOSO.COM
password server = MYDC01.CONTOSO.COM MYDC02.CONTOSO.COM
workgroup = CONTOSO
idmap uid = 10000-500000
idmap gid = 10000-500000
winbind separator =
winbind enum users = no
winbind enum groups = no
winbind use default domain = yes
template homedir = /home/%U
template shell = /bin/bash
client use spnego = yes
domain master = no
Understanding the options we're defining:
netbios name | The NetBIOS (single-label) name the Samba server will use for Windows clients.
realm | Fully qualified name of the Active Directory domain the Samba server is joining.
password server | List of domain controllers, separated by spaces, that will process Samba logon requests.
workgroup | Similar to the netbios name for the Samba server, except for the domain. Active Directory domains, like Windows computers, have NetBIOS names.
For more information on Samba options, go here:
http://www.samba.org/samba/docs/using_samba/ch06.html
Modify the Name Service Switch Configuration File
The Name Service Switch is used by Linux to locate account databases. By default, only local files will be accessed. We need to point Linux to a domain controller by adding winbind as a database location.
- Open /etc/nsswitch.conf into a text editor.
vi /etc/nsswitch.conf
- Find the following lines:
passwd: files
group: files
And append winbind to them, as shown below:
passwd: files winbind
group: files winbind
Edit the Kerberos Configuration File
Active Directory uses Kerberos, an open source network authentication protocol, to authenticate users. Before your Linux server can join the domain, it needs to be told which Kerberos realm to use and which domain controllers to contact for authentication.
- Open /etc/krb5.conf into a text editor
vi /etc/krb5.conf
- Modify it so it looks like the example below, replacing the values to match your environment.
[libdefaults]
default_realm = CONTOSO.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
CONTOSO.COM = {
kdc = mydc01.contoso.com
admin_server = mydc01.contoso.com
default_domain = contoso.com
}
[domain_realm]
.contoso.com = CONTOSO.COM
contoso.com = CONTOSO.COM
Start the Daemons
User authentication settings have been set. Now we need to start our daemons and configure them to automatically start after each reboot.
- Samba Server
service smb start; chkconfig smb on
- Winbind
service winbind start; chkconfig winbind on
- Message Bus Daemon
service messagebus start; chkconfig messagebus on
Join the Samba Server to the Domain
We’ve finally reached the part where we can join our Samba server to the Active Directory domain. Run the following command to join the domain, replacing Administrator with the username of a user in your domain who has permissions to join machines:
net ads join -U Administrator
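Once the join completes, you can verify it and confirm that winbind can see your domain accounts with the following commands (winbind must be running):
net ads testjoin
wbinfo -u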
Overview
Bonding is the ability to take two or more network interfaces and present them as one to a client. Depending on the method you use, the bond can create different types of connections. You can, for example, create an aggregate channel to double or even triple your total bandwidth; create a fault-tolerant connection to improve server reliability; or load balance connections to handle more requests and improve response times.
Some of the more advanced bonding methods, like channel aggregation, require switches that support IEEE 802.3ad dynamic link aggregation policy. However, simple load balancing and fault tolerance can be done without.
Server Configuration
To make it easier for you to follow along, the following configuration will be used in this tutorial.
TABLE1 – Lab server configuration
Hostname | Network Interface | MAC Address | IP Address | Netmask | Gateway
SERVER01 | ETH0 | 00:1F:29:E6:EB:2A | 172.30.0.34 | 255.255.255.0 | 172.30.0.1
SERVER01 | ETH1 | 00:26:55:35:24:FE | | |
It’s important that you record the MAC address (hardware address) of every interface. Although not required, it is best practice to assign a MAC address to each bond slave. Without doing so, you cannot confidently unplug an interface and know which slave will be offline, potentially causing a service outage.
Prepare the Interfaces
Before we configure the type of bond we'll be using, we need to prepare the interfaces first. This involves creating the bond0 configuration file and then modifying the existing configuration files for each of your physical network interfaces. In this lab, there are only two.
The steps for configuring the different bond types can be found after this section. Choose one and then follow the instructions.
Create the Bonded Interface
This is the interface your clients will be connecting to.
- Create the configuration file for the first bond, bond0.
touch /etc/sysconfig/network-scripts/ifcfg-bond0
- Open the file in a text editor.
nano /etc/sysconfig/network-scripts/ifcfg-bond0
- Add the following lines to the file.
DEVICE=bond0
IPADDR=172.30.0.34
NETMASK=255.255.255.0
GATEWAY=172.30.0.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
- Save your changes and exit the text editor.
Configure Slave 1
- Open the configuration file for the first interface, eth0, into a text editor.
nano /etc/sysconfig/network-scripts/ifcfg-eth0
- Modify the configuration file to look like the following, replacing the highlighted parts to match your environment.
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1F:29:E6:EB:2A
USERCTL=no
MASTER=bond0
SLAVE=yes
Configure Slave 2
- Open the configuration file for the second interface, eth1, into a text editor.
nano /etc/sysconfig/network-scripts/ifcfg-eth1
- Modify the configuration file to look like the following, replacing the highlighted parts to match your environment.
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:26:55:35:24:FE
USERCTL=no
MASTER=bond0
SLAVE=yes
Configure a Fault Tolerant Bond
Also called an Active-Passive connection, this bond type will protect you from a failed physical network interface and, depending on your Ethernet configuration, a failure on your physical network.
- If it doesn’t already exist, create a new file called bonding.conf in /etc/modprobe.d
touch /etc/modprobe.d/bonding.conf
- Open the file in a text editor.
nano /etc/modprobe.d/bonding.conf
- Add the following lines to the file.
alias bond0 bonding
options bond0 miimon=80 mode=1
TABLE2 – Bonding.conf options
bond0 | Name of the bonded interface being created.
miimon | Defines how often, in milliseconds, the interface is checked to see if it is still active.
mode | Type of bond being created.
- Save your changes and exit the text editor.
Create Round-Robin Bond
Round-robin load balancing splits the connections between the two slaves evenly. Traffic alternates between the active slaves, taking the load off each other, to improve response time.
- If it doesn’t already exist, create a new file called bonding.conf in /etc/modprobe.d
touch /etc/modprobe.d/bonding.conf
- Open the file in a text editor.
nano /etc/modprobe.d/bonding.conf
- Add the following lines to the file.
alias bond0 bonding
options bond0 miimon=80 mode=0
TABLE3 – bonding.conf options
bond0 | Name of the bonded interface being created.
miimon | Defines how often, in milliseconds, the interface is checked to see if it is still active.
mode | Type of bond being created.
- Save your changes and exit the text editor.
Create Aggregate Bond
Network aggregation combines the target network interfaces to create one large network interface. This will multiply the available bandwidth of the server's network connection by the number of NICs installed.
For this bond to work, your interfaces need to be connected to a switch that supports the IEEE 802.3ad dynamic link aggregation policy, and the switch ports must have the feature enabled.
- If it doesn’t already exist, create a new file called bonding.conf in /etc/modprobe.d
touch /etc/modprobe.d/bonding.conf
- Open the file in a text editor.
nano /etc/modprobe.d/bonding.conf
- Add the following lines to the file.
alias bond0 bonding
options bond0 miimon=80 mode=4
TABLE4 – bonding.conf options
bond0 | Name of the bonded interface being created.
miimon | Defines how often, in milliseconds, the interface is checked to see if it is still active.
mode | Type of bond being created.
- Save your changes and exit the text editor.
Restart the Network Services
Our bond is created. It’s time to restart the network services to apply our configuration.
- Restart the service.
service network restart
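- Verify that the bond came up and that both slaves are active by reading the bonding driver's status file.
cat /proc/net/bonding/bond0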
This tutorial will guide you through the deployment process of MariaDB on a Red Hat-based Linux server, such as CentOS. We’ll start by configuring the hardware and then move into the installation and configuration of MariaDB.
MariaDB is a fork of the very popular and open source MySQL database, which is now owned by Oracle. In fact, the two were created by the same individual. They are essentially a mirror of each other, so a lot of the knowledge used to run MySQL can be used for MariaDB. This should make migration easier to swallow.
Install MariaDB
Add the MariaDB Repository
We can add the repository to our server to make installing the database service a lot easier – plus we’ll ensure patches are more easily accessible.
- Navigate to /etc/yum.repos.d/ on your CentOS box.
- Create a new file called MariaDB.repo.
- For 64-bit installs of CentOS, add the following lines.
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
- For 32-bit installs of CentOS, add the following lines.
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
- Install the required packages onto your server.
yum install MariaDB-server MariaDB-client
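- You can confirm what was installed from the repository with the following commands.
rpm -q MariaDB-server MariaDB-client
mysql --version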
Configuring MariaDB
The configuration files and binaries for MariaDB are mostly the same as MySQL's. For example, both use a configuration file called my.cnf, and the service is still called mysql. This is done to ensure people can more easily migrate from MySQL.
The default configuration file is too liberal. It should not be used in either testing or production. If you are just playing around and learning the software, however, it should be fine.
The MariaDB installation includes several configuration file templates that can be used to quickly configure the server. The one you choose depends on how large you expect your databases to get.
my-small.cnf | Ideal for servers with a very limited amount of RAM (64MB or less) available for your databases. An example would be a small LAMP server hosting all web-related roles.
my-medium.cnf | Ideal for dedicated database servers with 128MB of available memory or less. Another good fit for multi-role servers.
my-large.cnf | Ideal for servers with at least 512MB of RAM available for the database server.
my-huge.cnf | Ideal for servers with 1GB of RAM or more available to the database server.
- Backup the original configuration file by renaming it.
mv /etc/my.cnf /etc/my.bak
- Create a new my.cnf file by copying an existing template. We’ll use the medium server template for this example.
cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
- Start the MariaDB service (daemon).
service mysql start
That is not a typo. MariaDB’s service name is mysql.
- Configure MariaDB to start automatically when the system is booted.
chkconfig mysql on
Securing the Installation
The default installation includes settings and accounts that are good for testing, but they will make your server a fairly large security target.
One example is the root database account, which has no password set. Anyone can access your databases just by knowing this account name. Thankfully, much like MySQL, we can run a script that walks us through closing these security concerns.
- Run the secure installation script. MariaDB must be running before this script can be executed.
/usr/bin/mysql_secure_installation
- You are prompted to enter the password for root. Since none exists, you can press Enter to continue.
- A prompt to change the password for root will appear. Press ‘Y’ and then Enter to set one.
- Next you will be prompted to remove anonymous users. Press 'Y' and then Enter to do so.
- When asked to disallow root remote login, press ‘Y’ and then Enter. Your root account should never have remote access.
- When prompted to remove the test database, press ‘Y’ and then Enter.
- Finally, you will be asked to reload the privileges table. Press ‘Y’ and then Enter. This will flush out the old permissions to apply the new ones.
Logging into MariaDB
To administer the server and create databases, we need to log in. To do this, we use the following command.
mysql -u <username> -p
The -u switch tells MariaDB which user account to log in with, and the -p switch tells it to prompt us for a password. To log in as root, we would do the following.
mysql -u root -p
A very popular tool for any operations guy's DevOps utility belt is Puppet – a system configuration management service. It allows you to automate the entire process of system configuration and maintain consistency across groups of servers. Imagine having to deploy 50 servers for a new web farm, with each server requiring the exact same configuration. An especially daunting task when done manually. With Puppet, we simply define a server configuration for the web nodes, including which packages and services are installed and how they themselves are configured. When done, we then assign that configuration to those systems.
Another benefit to using a tool like Puppet is the ability to update configurations across your entire infrastructure on the fly. This could mean installation of the latest version of MySQL onto your database servers, or simply modifying DNS configurations for every server in the environment.
Puppet uses a client-server model. By that I mean our configurations are defined and stored on what is called a Puppet master server, and each system that will have its configuration maintained by Puppet has a client installed. Every 30 minutes, by default, each client communicates with the master server to have its configuration audited. When a discrepancy is discovered between the client's current configuration and what is defined for it, the appropriate actions are completed to bring the system back into compliance.
This tutorial will guide you through setting up and running a Puppet master server using the open-source version of the software on a CentOS 6 server. Unlike the enterprise version of Puppet, the open-source version requires quite a bit of manual configuration. Nothing overwhelming but definitely not as simple as running a single executable.
Goals
- Deploy a Puppet Master server
Installing Puppet
Disabling SELinux
I am a very strong advocate of always running SELinux on Red Hat-based servers. I do not take disabling it lightly and avoid doing so where possible. However, at the time of this writing I was unable to find a satisfactory way of enabling SELinux on a Puppet master server. There are SELinux policies for Puppet that can be found on the Internet. Unfortunately, I cannot recommend using any of them since they are not refined enough to ensure the system is secure.
Outright disabling SELinux is a very bad idea. You never know when you'll be able to re-enable it. And if you do disable it, when it comes time to re-enable SELinux the system will have to relabel every file, directory, and port with the appropriate contexts. This is a very, very time-consuming process. Instead, I recommend placing SELinux into permissive mode. This way SELinux doesn't block Puppet processes and communications, and our files, directories, and ports all keep their contexts.
- Immediately place SELinux into permissive mode.
setenforce 0
- The command above is not persistent. It will be undone during the next reboot. To make the change persistent, open the SELinux configuration file into a text editor.
nano /etc/sysconfig/selinux
- Change the SELINUX value from enforcing to permissive, as seen in the example below.
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing – SELinux security policy is enforced.
# permissive – SELinux prints warnings instead of enforcing.
# disabled – No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
# targeted – Targeted processes are protected,
# mls – Multi Level Security protection.
SELINUXTYPE=targeted
- Save your changes and exit the text editor.
Install the Puppet Repo
The easiest way to install Puppet is by adding the Puppet Labs repository file to your server. We can install it by using the freely available RPM provided by Puppet Labs.
- Download and install the PuppetLabs’ repository RPM. At the time of this writing, version 6.7 was available.
rpm -ivh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm
- If all was successful, you should now have a file called puppetlabs.repo located in /etc/yum.repos.d/.
-rw-r--r--. 1 root root 1926 Nov 27 2013 CentOS-Base.repo
-rw-r--r--. 1 root root 638 Nov 27 2013 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 630 Nov 27 2013 CentOS-Media.repo
-rw-r--r--. 1 root root 3664 Nov 27 2013 CentOS-Vault.repo
-rw-r--r--. 1 root root 1250 Apr 12 2013 puppetlabs.repo
Install the Puppet Master
The Puppet Master is where your nodes get their configuration profiles from.
- Install the Puppet Master package from the Puppetlabs repository.
yum install -y puppet-server
- Start the Puppet Master service.
service puppetmaster start
- Ensure the Puppet master starts at boot.
puppet resource service puppetmaster ensure=running enable=true
Install a Web Server for Puppet Agent Access
Each server being managed by Puppet will have an agent installed. By default, the agent will attempt to connect to a Puppet master server using an HTTPS connection. We need to ensure a web server is available on the master server to allow us to service our clients. You can use any web server, but we'll be using Apache in this tutorial.
- Install the web server and some other required packages, like Ruby.
yum install -y httpd httpd-devel mod_ssl ruby-devel rubygems openssl-devel gcc-c++ curl-devel zlib-devel make automake
- The web service requires Passenger to process the Ruby files used by Puppet. We install it using Ruby’s gems.
gem install rack passenger
- With Passenger installed, we need to build and install its Apache module.
passenger-install-apache2-module
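- The installer finishes by printing an Apache configuration snippet (LoadModule, PassengerRoot, and PassengerRuby lines) that must be added to your Apache configuration, for example in /etc/httpd/conf.d/passenger.conf. The paths below are only placeholders; copy the exact lines the installer prints on your system.
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-x.y.z/buildout/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-x.y.z
PassengerRuby /usr/bin/ruby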
Prepare Puppet’s Apache directory
- Create a directory.
mkdir -p /usr/share/puppet/rack/puppetmasterd
- Create the document root and tmp directories.
mkdir /usr/share/puppet/rack/puppetmasterd/public /usr/share/puppet/rack/puppetmasterd/tmp
- Copy the Rack config template to our Apache virtual host’s directory root.
cp /usr/share/puppet/ext/rack/files/config.ru /usr/share/puppet/rack/puppetmasterd/
- Apply the appropriate permissions to the configuration file.
chown puppet /usr/share/puppet/rack/puppetmasterd/config.ru
Create the Apache Virtual Host for Puppet
- Create a configuration file for the Apache virtual host.
touch /etc/httpd/conf.d/puppetlabs.conf
- Edit the file and add the following contents.
# And the passenger performance tuning settings:
PassengerHighPerformance On
#PassengerUseGlobalQueue On
# Set this to about 1.5 times the number of CPU cores in your master:
PassengerMaxPoolSize 6
# Recycle master processes after they service 1000 requests
PassengerMaxRequests 1000
# Stop processes if they sit idle for 10 minutes
PassengerPoolIdleTime 600
Listen 8140
<VirtualHost *:8140>
SSLEngine On
# Only allow high security cryptography. Alter if needed for compatibility.
SSLProtocol All -SSLv2
SSLCipherSuite HIGH:!ADH:RC4+RSA:-MEDIUM:-LOW:-EXP
SSLCertificateFile /var/lib/puppet/ssl/certs/puppet.serverlab.intra.pem
SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.serverlab.intra.pem
SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
SSLVerifyClient optional
SSLVerifyDepth 1
#SSLOptions +StdEnvVars +ExportCertData
SSLOptions +StdEnvVars
# These request headers are used to pass the client certificate
# authentication information on to the puppet master process
RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
# RackAutoDetect On
DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
<Directory /usr/share/puppet/rack/puppetmasterd/>
Options None
AllowOverride None
Order Allow,Deny
Allow from All
</Directory>
</VirtualHost>
- Stop the puppetmaster service.
service puppetmaster stop
- Start the Apache service.
service httpd start
- Disable the puppetmaster service to prevent it from starting during system boot.
chkconfig puppetmaster off
- Enable the Apache service to automatically start it during system boot.
chkconfig httpd on
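- To confirm that Apache is now serving the Puppet master, check that it is listening on port 8140.
netstat -tlnp | grep 8140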