
Red Hat Enterprise Linux 6.8

“Red Hat Enterprise Linux 6.8 reflects the company’s ongoing commitment to providing a solid foundation for developing and running modern enterprise applications,” said Jim Totton, vice president and general manager at Red Hat. “For mission-critical deployments, Red Hat Enterprise Linux 6.8 continues to offer a strong, stable platform that meets the needs of today’s IT operations as they work to optimize security and management.”

New features in Red Hat Enterprise Linux 6.8 include the use of the libreswan library to provide additional security layers for VPNs, replacing the previous Openswan-based stack. In addition, new features have been added to the System Security Services Daemon (SSSD), the identity management client component, to give users better client performance and simpler management.

In addition, Red Hat Enterprise Linux 6.8 comes with the Relax-and-Recover system archiving tool, which reduces administrative overhead, and the dmstats tool for better monitoring of storage usage and performance. The Scalable File System add-on now supports XFS file systems of up to 300 TB.

According to its release notes, RHEL 6.8 changes include:

libreswan replaces openswan as the new VPN endpoint solution
System Security Services Daemon (SSSD) enhancements improve identity management capabilities
The new Relax-and-Recover (ReaR) system archiving tool can create local ISO backups
An enhanced yum tool simplifies the process of searching for packages
I/O statistics can be displayed and managed via dmstats
The XFS file system now supports up to 300 TB
Images are provided for containerized deployments
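
As a quick, hypothetical session based on the feature list above (package names per the release notes), moving an updated 6.8 host from the old VPN stack to the new one looks roughly like this:

# Replace the Openswan-based stack with libreswan (RHEL 6.8+)
yum remove openswan
yum install libreswan
# Sanity-check the resulting IPsec setup
ipsec verify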

Linux performance tools

On CentOS/RHEL:
yum install sysstat

vmstat -w 2        # wide virtual-memory/CPU summary every 2 seconds
mpstat -P ALL 2    # per-CPU utilization every 2 seconds
iostat -dxm 10     # extended per-device I/O stats in MB, every 10 seconds

nmon:
wget http://pkgs.repoforge.org/nmon/nmon-14g-1.el6.rf.x86_64.rpm
rpm -ivh nmon-14g-1.el6.rf.x86_64.rpm

nicstat

wget http://nchc.dl.sourceforge.net/project/nicstat/nicstat-1.92.tar.gz
tar -xzf nicstat-1.92.tar.gz && cd nicstat-1.92
yum install glibc.i686    # the bundled 32-bit binary needs 32-bit glibc
# reuse the prebuilt RHEL 5 binary on RHEL 6
ln -s .nicstat.RedHat_5_i386 .nicstat.RedHat_6_i386
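
With the binary in place, a typical nicstat run samples network throughput at a fixed interval; a quick sketch (the interface name is illustrative):

./nicstat -i eth0 2      # KB/s in and out for eth0, sampled every 2 seconds
./nicstat -t 2 5         # TCP statistics, 2-second interval, 5 samples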

Configuring File Replication in RHEL / CentOS / SL 6/7 with lsyncd and csync2

If you want a directory actively backed up after every change, there are plenty of sync options to choose from.
Software like csync, rsync, csync2, lsyncd and many others will all do the job, but in this article we will review only one approach. We will introduce lsyncd, a synchronization daemon that lets you mirror a designated directory to any other directory, over the network or locally.
To save network and disk bandwidth, it mirrors only the changes to your directory. So let’s start.

Installing the necessary packages

To install lsyncd and csync2 on CentOS / RHEL 6 you only need the EPEL repository; everything installs directly via yum as detailed below.

RHEL / CentOS / SL 6

Configure the EPEL repository:

rpm -Uhv https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

Install the necessary packages:

yum install csync2 lsyncd xinetd
RHEL / CentOS / SL 7
For RHEL 7 you need to download the csync2 build packaged for Fedora 20; it can be downloaded from the following URL: http://rpm.pbone.net/index.php3/stat/3/limit/1/srodzaj/1/dl/40/search/csync2/field%5B%5D/1/field%5B%5D/2

Once downloaded, install it with:

yum install /path/to/the/downloaded.rpm

These are the only steps that differ between version 7 and version 6; once csync2 is installed, the remaining steps are the same on both versions.

Configure csync2

Enable the csync2 service under xinetd (the stock /etc/xinetd.d/csync2 ships with "disable = yes"; the sed below flips it to "disable = no"):

sed -i.bak 's#yes#no#g' /etc/xinetd.d/csync2

Then start the xinetd service:

service xinetd start
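
After starting xinetd, you can confirm that csync2 is listening on its default port (30865/tcp):

netstat -tlnp | grep 30865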
Generate .key for authentication between servers

Authentication between the servers requires a shared key file, which we can create with the csync2 command and its -k option:

csync2 -k /etc/csync2/csync2.key
Edit the csync2 configuration file

To configure csync2 we must modify the file /etc/csync2/csync2.cfg, loading the following:

vim /etc/csync2/csync2.cfg

nossl * *;

group web
{
    host node1;
    host node2;
    host node3;

    key /etc/csync2/csync2.key;

    include /home/website/public_html;
    exclude *.log;

    auto younger;
}
Note that the host names (node1, node2 and node3 here) must be the hostname or FQDN of each server.
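
If a node's hostname does not already match, set it before the first sync. A sketch for both OS generations:

# RHEL/CentOS 6
hostname node1
sed -i 's/^HOSTNAME=.*/HOSTNAME=node1/' /etc/sysconfig/network
# RHEL/CentOS 7
hostnamectl set-hostname node1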

Copy the csync files to the other nodes:

Before copying the settings to the other nodes, /etc/csync2/csync2.cfg must be copied to /etc/csync2/csync2_node1.cfg, /etc/csync2/csync2_node2.cfg and /etc/csync2/csync2_node3.cfg (lsyncd will later look these per-node files up by their syncid):

cp /etc/csync2/csync2.cfg /etc/csync2/csync2_node1.cfg
cp /etc/csync2/csync2.cfg /etc/csync2/csync2_node2.cfg
cp /etc/csync2/csync2.cfg /etc/csync2/csync2_node3.cfg

Finally we copy everything to the other nodes:

scp /etc/csync2/* node2:/etc/csync2
scp /etc/csync2/* node3:/etc/csync2

Start csync2 replication

For the initial synchronization, run the following command on all the nodes:

csync2 -xv
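
Once the initial run has completed on every host, csync2's test mode is a quick way to confirm the nodes agree; it should report no pending differences:

csync2 -T    # compare local state against the other group members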
Configure lsyncd

Create configuration files

The following lines must be added to the file /etc/lsyncd.conf:

vim /etc/lsyncd.conf

settings {
    logident        = "lsyncd",
    logfacility     = "user",
    logfile         = "/var/log/lsyncd.log",
    statusFile      = "/var/log/lsyncd_status.log",
    statusInterval  = 1
}

initSync = {
    delay = 1,
    maxProcesses = 1,

    action = function(inlet)
        local config = inlet.getConfig()
        local elist = inlet.getEvents(function(event)
            return event.etype ~= "Init"
        end)
        local directory = string.sub(config.source, 1, -2)
        local paths = elist.getPaths(function(etype, path)
            return "\t" .. config.syncid .. ":" .. directory .. path
        end)
        log("Normal", "Processing syncing list:\n", table.concat(paths, "\n"))
        spawn(elist, "/usr/sbin/csync2.sh", "-C", config.syncid, "-x")
    end,

    collect = function(agent, exitcode)
        local config = agent.config
        if not agent.isList and agent.etype == "Init" then
            if exitcode == 0 then
                log("Normal", "Startup of '", config.syncid, "' instance finished.")
            elseif config.exitcodes and config.exitcodes[exitcode] == "again" then
                log("Normal", "Retrying startup of '", config.syncid, "' instance.")
                return "again"
            else
                log("Error", "Failure on startup of '", config.syncid, "' instance.")
                terminate(-1)
            end
            return
        end
        local rc = config.exitcodes and config.exitcodes[exitcode]
        if rc == "die" then
            return rc
        end
        if agent.isList then
            if rc == "again" then
                log("Normal", "Retrying events list on exitcode = ", exitcode)
            else
                log("Normal", "Finished events list = ", exitcode)
            end
        else
            if rc == "again" then
                log("Normal", "Retrying ", agent.etype, " on ", agent.sourcePath, " = ", exitcode)
            else
                log("Normal", "Finished ", agent.etype, " on ", agent.sourcePath, " = ", exitcode)
            end
        end
        return rc
    end,

    init = function(event)
        local inlet = event.inlet;
        local config = inlet.getConfig();
        log("Normal", "Recursive startup sync: ", config.syncid, ":", config.source)
        spawn(event, "/usr/sbin/csync2.sh", "-C", config.syncid, "-x")
    end,

    prepare = function(config)
        if not config.syncid then
            error("Missing 'syncid' parameter.", 4)
        end
        local c = "csync2_" .. config.syncid .. ".cfg"
        local f, err = io.open("/etc/csync2/" .. c, "r")
        if not f then
            error("Invalid 'syncid' parameter: " .. err, 4)
        end
        f:close()
    end
}

local sources = {
    -- change the node1 value to the respective host
    ["/home/website/public_html"] = "node1",
    ["/otrodirectorio"] = "node1"
}

for key, value in pairs(sources) do
    sync {initSync, source=key, syncid=value}
end
Do not forget to change "node1" on each node. For example, on node2 the local sources table in /etc/lsyncd.conf should use "node2".

Add the path of the lsyncd configuration file

We must add the path of the file created in the previous step to /etc/sysconfig/lsyncd, which we can do with sed:

sed -i.bak 's#^LSYNCD_OPTIONS=.*#LSYNCD_OPTIONS="/etc/lsyncd.conf"#g' /etc/sysconfig/lsyncd
Create the script that lsyncd will run

To synchronize the folders, lsyncd will try to execute the script /usr/sbin/csync2.sh, which we must create:

cd /usr/sbin/
cat <<EOF > csync2.sh
#!/bin/bash
# lsyncd passes "-C <syncid> -x"; this simple wrapper just syncs everything
/usr/sbin/csync2 -xXrv
# exit 0 so that lsyncd does not stop when csync2 returns an error
exit 0
EOF

Give the script execution permissions:

chmod 755 /usr/sbin/csync2.sh
Enable and start the lsyncd service:

We enable the service at boot:

chkconfig lsyncd on

Then start the service:

service lsyncd start
To test the replication, you can generate files on one node and watch them appear on the others. A simple one-shot test:

#!/bin/bash
# create 100 test files in the replicated directory
for ((i=0; i<100; i++)); do
    touch /home/website/${i}.html
done

And a longer-running variant that creates batches of files with a short random pause between them:

#!/bin/bash
for ((i=0; i<10; i++)); do
    for ((j=0; j<1000; j++)); do
        touch /home/website/${i}-${j}.html
    done
    # sleep a random 1-10 seconds between batches
    number=$(( ($(od -An -N2 -i /dev/urandom) % 10) + 1 ))
    sleep ${number}
done
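
On another node, a quick way to watch the replicated files arrive (paths as configured above):

watch -n 2 'ls /home/website | wc -l'    # file count should climb to match the source
tail -f /var/log/lsyncd.log              # on the source node, shows the sync activity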

Apache Error: “semget: No space left on device”

semget: No space left on device

To see how many semaphores are being used, SSH to your server as root and run the following:

ipcs -s

In order to get Apache started again, we must clear the semaphores. Run this for-loop to flush them:

for whatever in `ipcs -s | awk '{print $2}'`; do ipcrm -s $whatever; done

On older servers that command may not work. In these cases, you may need to do the following:

/sbin/service httpd stop
ipcs -s | grep nobody | gawk ‘{ print $2 }’ | xargs -n 1 ipcrm sem
/sbin/service httpd start

If this is a common problem for you, you may want to increase the semaphore limits on your server. You can do that by adding the following to the /etc/sysctl.conf file:

# Increase the semaphore limits to extend Apache's uptime.
kernel.msgmni = 512
kernel.sem = 250 128000 32 512

Then load the new settings into the kernel:

sysctl -p
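
To see the current values before or after the change (the four kernel.sem fields are SEMMSL, SEMMNS, SEMOPM and SEMMNI):

cat /proc/sys/kernel/sem    # current kernel.sem values
ipcs -ls                    # semaphore limits
ipcs -su                    # semaphore usage summary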

Note: This post assumes you are running Apache on a Linux server, are familiar with the command line, and have root access to the server.

OpenSSH: Client Information leak from use of roaming connection feature (CVE-2016-0777)

Overview

A flaw in OpenSSH, discovered and reported by Qualys on Jan. 14, 2016, could potentially allow an information leak (CVE-2016-0777) or buffer overflow (CVE-2016-0778) via the OpenSSH client. Specifically, an undocumented feature called roaming, introduced in OpenSSH version 5.4, can be exploited to expose a client’s private SSH key.

Impact

The roaming feature, which allows clients to reconnect to the server automatically should the connection drop (on servers supporting the feature), can be exploited in the default configuration of OpenSSH clients from versions 5.4 through 7.1p1, but is not supported in the default configuration of the OpenSSH server.

All versions of OpenSSH clients from 5.4 through 7.1p1 are affected for anyone who connects via SSH on the following operating systems:

Linux

FreeBSD

Mac OS X

Windows, when using OpenSSH for Windows

The following are not affected:

OpenSSH servers in the default configuration

Windows users utilizing PuTTY to connect

Connections not authenticated via an SSH key

Summary

A connection made from an affected client, authenticated with an SSH key, to a compromised or malicious server could potentially expose all or part of the user’s private SSH key.

If the key utilized to authenticate the connection is encrypted, only the encrypted private key could be exposed. However, a malicious party could attempt to brute-force the password offline after obtaining the encrypted key.

Is Your SSH Client Vulnerable?

You can check the version of your SSH client by running the following command:

ssh -V

That will produce output similar to:

workstation$ ssh -V
OpenSSH_7.1p2, OpenSSL 1.0.2e 3 Dec 2015

If the version is older than 7.1p2 (and at least 5.4), the SSH client is affected.

Resolution

  1. Update your OpenSSH client: Check for any updates to your SSH client and apply them immediately.
  2. Patch older clients: If an update is not yet available for your operating system, you can disable the roaming feature on affected clients by adding the line “UseRoaming no” to your ssh configuration file. You can do so directly or via one of the methods below.

On Linux, you can run the following command to add the necessary line system-wide, then restart ssh:

echo 'UseRoaming no' | sudo tee --append /etc/ssh/ssh_config
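
Alternatively, a per-user sketch that achieves the same without touching the system-wide file:

printf 'Host *\n    UseRoaming no\n' >> ~/.ssh/config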

 

Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again

[root@core /]# yum list
Loaded plugins: fastestmirror
Determining fastest mirrors
Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again

 
There are two quick workarounds. Either disable SSL verification for the repository by adding the following under the [epel] section of /etc/yum.repos.d/epel.repo:

[epel]
sslverify=false

Or switch the mirrorlist URL from https to http:

sed -i "s/mirrorlist=https/mirrorlist=http/" /etc/yum.repos.d/epel.repo
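
After either change, rebuilding the cache confirms the repository is reachable again:

yum clean all
yum --disablerepo='*' --enablerepo=epel makecache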

Monit process

# mysql
check process mysqld with pidfile /var/lib/mysql/ns388683.pid
  group database
  start program = "/etc/init.d/mysql start"
  stop program = "/etc/init.d/mysql stop"
  if failed host 127.0.0.1 port 3306 then restart
  if 5 restarts within 5 cycles then timeout

# nginx
check process nginx with pidfile /opt/nginx/logs/nginx.pid
  start program = "/etc/init.d/nginx start"
  stop program = "/etc/init.d/nginx stop"
  if failed host 127.0.0.1 port 80 then restart
  if cpu is greater than 40% for 2 cycles then alert
  if cpu > 60% for 5 cycles then restart
  if 10 restarts within 10 cycles then timeout

# redis
check process redis with pidfile /var/run/redis.pid
  start program = "/etc/init.d/redis-server start"
  stop program = "/etc/init.d/redis-server stop"
  group redis
check file dump.rdb with path /var/lib/redis/dump.rdb
  if size > 100 MB then alert

# tomcat
check process tomcat with pidfile /var/run/tomcat/tomcat.pid
  start program = "/etc/init.d/tomcat start"
    as uid solr gid solr
  stop program = "/etc/init.d/tomcat stop"
    as uid solr gid solr
  if failed port 8080 then alert
  if failed port 8080 for 5 cycles then restart
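
Whenever the monit configuration changes, a syntax check followed by a reload keeps surprises to a minimum:

monit -t        # validate the configuration syntax
monit reload    # re-read the configuration
monit status    # show the state of all monitored services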

Now for the Tomcat part. This is based on Tomcat being in /usr/local/tomcat, where our typical setup script puts everything.
Tomcat, method a (recommended):

Simply run the snippet below to enable monit monitoring of Tomcat. This method requires the least work and the fewest configuration changes. Typically monit prefers a pid file to monitor a service, as described in method b, but this way works just as well so long as the HTTP port connector is enabled.

cat <<'EOF' > /etc/monit.d/tomcat
check host tomcat with address localhost
  stop program = "/etc/init.d/tomcat stop"
  start program = "/etc/init.d/tomcat restart"
  if failed port 8080 and protocol http
    then start
EOF
/etc/init.d/monit restart

Tomcat, method b:

Use this method if you don’t have a suitable HTTP connector enabled for your Tomcat instance, but be aware that pid files can be left in an inconsistent state in some cases, which may then require manual intervention anyway. Add a Tomcat entry to your monit config that looks like this (change the uid/gid to the user Tomcat runs as):

check process tomcat with pidfile "/var/run/tomcat/tomcat.pid"
  start program = "/usr/local/tomcat/bin/startup.sh"
    as uid tomcat gid tomcat
  stop program = "/usr/local/tomcat/bin/shutdown.sh"
    as uid tomcat gid tomcat
  if failed port 8080 then alert
  if failed port 8080 for 5 cycles then restart

Then edit your catalina.sh and add these near the top:

CATALINA_PID=/var/run/tomcat/tomcat.pid
JAVA_HOME=/usr/java/jdk

Then of course create the pid directory:

mkdir /var/run/tomcat/
chown tomcat.tomcat /var/run/tomcat/

MySQL

Add this to your my.cnf under [mysqld]

pid-file=/var/run/mysqld/mysqld.pid

and this to your monit config

check process mysql with pidfile /var/run/mysqld/mysqld.pid
  group database
  start program = "/etc/init.d/mysql start"
  stop program = "/etc/init.d/mysql stop"
  if failed host 127.0.0.1 port 3306 protocol mysql then restart
  if 5 restarts within 5 cycles then timeout
  depends on mysql_rc

check file mysql_rc with path /etc/init.d/mysql
  group database
  if failed checksum then unmonitor
  if failed permission 755 then unmonitor
  if failed uid root then unmonitor
  if failed gid root then unmonitor

Disk Space

Add the following to your monit config

check filesystem rootfs with path /dev/xvda1
  if space usage > 95% then alert
  if inode usage > 95% then alert

(monit requires a name for the filesystem entry; "rootfs" here is arbitrary)
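
If you are not sure which device backs a given mount point, df will tell you:

df -h /    # the Filesystem column shows the device to monitor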

SSH

Add this to your monit config (change the port if yours is different)

check process sshd with pidfile /var/run/sshd.pid
  start program "/etc/init.d/sshd start"
  stop program "/etc/init.d/sshd stop"
  if failed port 22 protocol ssh then restart
  if 5 restarts within 5 cycles then timeout

Shell script

Monit can also run arbitrary scripts when a check changes state. For example, the following rule executes a notification script on failure and a PagerDuty resolve hook on recovery:

if failed port 443 protocol https request / with timeout 5 seconds for 2 cycles
    then exec "/var/lib/monit/scripts/notifyAndExecute.sh"
else if succeeded then exec "/etc/monit/pagerduty-resolve authentication"

MonitSMS

Create a new file named MonitSMS.sh in your /root directory (or wherever you prefer; I use /root because I manage many different types of OS) and paste the following code:

#!/bin/sh
/usr/bin/curl \
  -X POST http://textbelt.com/text \
  -d number=1111111111 \
  -d "message=[$MONIT_HOST] $MONIT_SERVICE - $MONIT_DESCRIPTION"

Change 1111111111 to your cell/mobile phone number. Give it 777 permissions:

chmod 777 /root/MonitSMS.sh

Run it once if you like; at that point you should receive an SMS message on your phone.
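
Note that the MONIT_* variables are set by Monit when it executes the script, so for a manual test you can supply them yourself (values here are made up):

MONIT_HOST=web1 MONIT_SERVICE=nginx MONIT_DESCRIPTION="manual test" /root/MonitSMS.sh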

Integration with Monit

I believe it’s best to use it when your Monit configuration does a timeout and Monit no longer monitors the process. All you need to do next is attach the following code to any/all processes.

if 5 restarts within 5 cycles then exec "/root/MonitSMS.sh"

A full Monit example of monitoring a process:

check process NGINX with pidfile /var/run/nginx.pid
  group nginx
  start program = "/sbin/service nginx restart"
  stop program = "/sbin/service nginx stop"
  if failed host blog.ss88.uk port 443 type TCPSSL for 3 cycles then restart
  if 5 restarts within 5 cycles then exec "/root/MonitSMS.sh"

useradd: Not copying any file from skel directory into it

useradd: warning: the home directory already exists.
Not copying any file from skel directory into it.

If the directory didn’t exist, the Linux useradd process would create it and copy the skel files (.kshrc, .bashrc, .bash_profile and .bash_logout) into the user’s home directory.

As a DBA it is not unusual to receive a server with directories already in place. In those situations a scripted approach is required to copy the skel files. A quick and dirty solution is to add the following line after the useradd sequence:

cp -r /etc/skel/. /<user_home_directory>

An example sequence could be:

groupadd skprod
useradd -m -s /bin/bash -g skprod -d /data/app/sit01 sit01
chown -R sit01:skprod /data/app/sit01
cp -r /etc/skel/. /data/app/sit01

for loops in Chef

for config in ["contacts.cfg", "contactgroups.cfg"] do
  # cookbook_file ships the file from the cookbook's files/ directory
  # (older Chef used remote_file with a bare source name for this)
  cookbook_file "/etc/nagios3/#{config}" do
    source config
    owner "root"
    group "root"
    mode "0644"
    notifies :restart, "service[nagios]", :delayed
  end
end

Write it in the order you want it to run

Chef executes resources in the order they appear in a recipe. Coming to Chef from Puppet, I found this to be a welcome surprise. Need to install the NTP daemon and then make sure the service is started? No problem. Just put your service resource after your package resource.

package 'ntp' do
  action :install
end

service 'ntp' do
  action :start
end

This is the most common way to sequence events. You can tell what the control flow is just by reading a cookbook from top to bottom.

Notify resources about changes

Chef provides a notification mechanism for signaling when things change. This lets you do things like restart a service when a configuration file changes. Want to lay down your own NTP configuration file and bounce the service when it’s updated? Easy. Have your template resource notify your service resource of the change.

template '/etc/ntp.conf' do
  notifies :restart, 'service[ntp]'
end

service 'ntp' do
  action :start
end

Something that surprised me when I first started working with Chef was that notifications on resources queue up, then trigger at the end of the run. So the recipe above takes the following actions:

  1. Updates the template.
  2. Queues a restart of the service.
  3. Starts the service.
  4. Restarts the service.

Sometimes you want a notification to trigger right after something changes, instead of waiting until the end of the run. In that case, you can use the :immediately flag.

template '/etc/ntp.conf' do
  notifies :restart, 'service[ntp]', :immediately
end

service 'ntp' do
  action :start
end

Check if resources have changed

Chef resources are first-class objects, which means you can ask them about their state during the run. Every resource has an updated_by_last_action? method which returns true if the resource changed. Combining this with sequential execution lets you build robust cookbooks.

Suppose you want to download and unpack a tarball. If the folder you’re unpacking into exists, you’ll want to remove it first. But if the tarball hasn’t changed, you’ll want to skip that step. Download the package with a remote_file resource, cache the result, and use the only_if modifier on a directory resource to skip the removal if the tarball’s unchanged.

tarball = remote_file '~/node-v0.8.20.tar.gz' do
  source 'http://nodejs.org/dist/v0.8.20/node-v0.8.20.tar.gz'
end

folder = directory '~/node-v0.8.20' do
  action :remove
  only_if { tarball.updated_by_last_action? }
end