
MySQL Galera

[Image gallery: mysqlgallera 001-015]

Switching Windows Server 2012 R2 between Core and Full Installations

In earlier versions of Windows Server, the choice between a Server Core and a full installation was permanent. Windows Server 2012 changes all of that: we can now switch between core and full installation, and we can do so on the fly. There’s really no reason to use a full installation for a server anymore, unless an installed application or role requires it. Unfortunately, there are still a few roles that require the GUI to be installed.

Convert Full to Core

  1. Log onto the server with an account that has administrative permissions.
  2. If it isn’t already open, launch the Server Manager console.
  3. From the Server Manager console, click the Manage menu.
  4. Click Remove Roles and Features.
  5. On the Before You Begin screen, click Next.
  6. On the Select Destination Server screen, ensure the server is selected in the server pool, and then click Next.
  7. On the Remove Server Roles screen, click Next.
  8. On the Remove Features screen, scroll down the list until you see User Interfaces and Infrastructure and expand it.
  9. Uncheck Graphical Management Tools and Infrastructure and Server Graphical Shell, and then click Next.
  10. On the Confirmation screen, review the changes and then click Remove.
  11. When the removal process completes, click Close. The server will now need to be rebooted. During the reboot phase, the remaining components will be removed. Do not power off the server.
  12. After the server reboots, log back in.
  13. Once you’re logged in, you’ll notice that only a command-prompt window is open. You have successfully removed the graphical interface from Windows Server 2012 R2.
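The whole removal can also be done from PowerShell in a single step; a minimal sketch (the -Restart switch reboots the server automatically once removal completes):

  Uninstall-WindowsFeature -Name "Server-Gui-Mgmt-Infra","Server-Gui-Shell" -Restart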

    Convert Core to Full

    1. Log onto the server with an account that has administrative privileges.
    2. Check whether any graphical user interface components are installed:
      Get-WindowsFeature -Name "Server-Gui-Mgmt-Infra","Server-Gui-Shell"
    3. On a Server Core installation, neither feature will show as installed. (On a minimal install, only Graphical Management Tools and Infrastructure is installed, which gives the admin access to Server Manager in Server Core.)
    4. Install both of the features listed above:
      Install-WindowsFeature -Name "Server-Gui-Mgmt-Infra","Server-Gui-Shell"
    5. When the features are installed, a warning will be displayed telling you that the server must be restarted. Go ahead and restart the server now.
    6. When the server completes the reboot process, log in. You should now have a fully functional graphical user interface.

Ansible

Ansible is powerful open source automation software for configuring, managing, and deploying applications on nodes, without any downtime, using only SSH. Unlike other alternatives, Ansible is installed on a single host, which can even be your local machine, and uses SSH to communicate with each remote host. This allows it to be incredibly fast at configuring new servers, as there are no prerequisite packages to install on each new server.

Ansible is installed on a controlling machine, which manages nodes over SSH. The locations of the nodes are specified in the controlling machine's inventory. Ansible is agentless: no agent needs to be installed on the remote nodes, so no background daemons or programs are running for Ansible when it is not managing any nodes.

Ansible is a free and open source configuration and automation tool for UNIX-like operating systems. It is written in Python and is similar to Chef or Puppet, but with one difference and advantage: there is no need to install any agent on the nodes. It uses SSH to communicate with them.

Controller

The controlling machine deploys modules to the nodes using the SSH protocol. These modules are stored temporarily on the remote nodes and communicate with the Ansible machine through a JSON protocol over standard output.
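
You can watch this module round trip yourself with an ad-hoc call to the setup module, which returns the gathered facts as JSON (the 'appserver' group used here is defined in the inventory example later in this post):

ansible -m setup -a "filter=ansible_hostname" 'appserver'
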
Installation

Installation is pretty easy; verify the hostname and IP address before you start. The dependency packages for Ansible can be found below.

Set up the EPEL repository
The Ansible yum repository is not enabled by default, so we need to use the following command to enable the EPEL repository.

CENTOS 7

rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

CENTOS 6

rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

CONTROL SERVER – CENTOS 7
APP1 CENTOS 7
APP2 CENTOS 6

[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1 controlserver
192.168.1.21 clusterserver2.rmohan.com clusterserver2
192.168.1.63 cluster3.rmohan.com cluster3

Install Ansible

yum install ansible

After the installation is complete, check the Ansible version:

[root@clusterserver1 ~]# ansible --version
ansible 1.9.4
configured module search path = None
[root@clusterserver1 ~]#

yum install ansible
yum install ntp

echo "*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root

Use the ssh-copy-id command to copy the public key to the Ansible nodes.

ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

awk '{if ($0!~/'"$(hostname)"'|localhost/)print $NF}' /etc/hosts | xargs -i ssh-copy-id -i ~/.ssh/id_rsa.pub root@{}
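
If the awk one-liner is hard to read, a plainer equivalent for the two app servers in this setup would be:

for h in clusterserver2.rmohan.com cluster3.rmohan.com; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h
done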

To define the node list for Ansible, edit the hosts (inventory) file.
Save and exit the file.
An example hosts file follows:

[root@clusterserver1 ~]# cat /etc/ansible/hosts
[appserver]
192.168.1.21
192.168.1.63

Now try running an Ansible command against the servers.
Use the ping module to check connectivity to the 'appserver' group of Ansible nodes:

ansible -m ping 'appserver'

[root@clusterserver1 ~]# ansible -m ping 'appserver'
192.168.1.21 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.63 | success >> {
    "changed": false,
    "ping": "pong"
}

Execute shell commands

Example 1: Check node uptime.

ansible -m command -a "uptime" 'appserver'

[root@clusterserver1 ~]# ansible -m command -a "uptime" 'appserver'
192.168.1.63 | success | rc=0 >>
23:01:50 up 12:09,  3 users,  load average: 0.00, 0.00, 0.00

192.168.1.21 | success | rc=0 >>
23:01:50 up 13:34,  2 users,  load average: 0.00, 0.01, 0.05

[root@clusterserver1 ~]#

Example 2: Check the node kernel version.

[root@clusterserver1 ~]# ansible -m command -a "uname -r" 'appserver'
192.168.1.63 | success | rc=0 >>
2.6.32-573.7.1.el6.x86_64

192.168.1.21 | success | rc=0 >>
3.10.0-123.20.1.el7.x86_64


[root@clusterserver1 ~]# ansible -m command -a "cat /etc/redhat-release" 'appserver'
192.168.1.63 | success | rc=0 >>
CentOS release 6.7 (Final)

192.168.1.21 | success | rc=0 >>
CentOS Linux release 7.1.1503 (Core)

[root@clusterserver1 ~]# ansible -m command -a "python -c 'import socket; print(socket.gethostbyname(socket.gethostname()))'" 'appserver'
192.168.1.63 | success | rc=0 >>
192.168.1.63

192.168.1.21 | success | rc=0 >>
192.168.1.21

[root@clusterserver1 ~]# ansible -m command -a 'hostname' 'appserver'
192.168.1.63 | success | rc=0 >>
cluster3.rmohan.com

192.168.1.21 | success | rc=0 >>
clusterserver2.rmohan.com

[root@clusterserver1 ~]# ansible -m command -a "useradd mohan" 'appserver'
192.168.1.21 | FAILED | rc=9 >>
useradd: user 'mohan' already exists

192.168.1.63 | success | rc=0 >>

[root@clusterserver1 ~]# ansible -m command -a "grep mohan /etc/passwd" 'appserver'
192.168.1.63 | success | rc=0 >>
mohan:x:500:500::/home/mohan:/bin/bash

192.168.1.21 | success | rc=0 >>
mohan:x:1000:1000:mohan:/home/mohan:/bin/bash

[root@clusterserver1 ~]# ansible -m command -a "df -Th" 'appserver'
192.168.1.21 | success | rc=0 >>
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        18G  1.4G   17G   8% /
devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                   tmpfs     1.9G  8.6M  1.9G   1% /run
tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  167M  330M  34% /boot

192.168.1.63 | success | rc=0 >>
Filesystem           Type   Size  Used Avail Use% Mounted on
/dev/mapper/vg_cluster3-lv_root
ext4    50G  1.4G   46G   3% /
tmpfs                tmpfs  491M     0  491M   0% /dev/shm
/dev/sda1            ext4   477M   55M  398M  12% /boot
/dev/mapper/vg_cluster3-lv_home
ext4    47G   52M   45G   1% /home

[root@clusterserver1 ~]#

Let's install Apache on the 2 nodes.

[root@clusterserver1 ~]# cat test.yaml
- hosts: appserver
  remote_user: root
  tasks:
    - yum: name=httpd state=latest

[root@clusterserver1 ~]# ansible-playbook test.yaml -f 10

PLAY [appserver] **************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.1.21]
ok: [192.168.1.63]

TASK: [yum name=httpd state=latest] *******************************************
changed: [192.168.1.63]
changed: [192.168.1.21]

PLAY RECAP ********************************************************************
192.168.1.21               : ok=2    changed=1    unreachable=0    failed=0
192.168.1.63               : ok=2    changed=1    unreachable=0    failed=0

[root@clusterserver1 ~]# cat test.yaml
- hosts: appserver
  remote_user: root
  tasks:
    - yum: name=httpd state=latest
    - name: httpd is running and enabled
      service: name=httpd state=started enabled=yes

# target hostname or group name
- hosts: appserver
  # define tasks
  tasks:
    # task name (any name you like)
    - name: Test Task
      # use file module to set the file state
      file: path=/home/mohan/test.conf state=touch owner=mohan group=mohan mode=0600

[root@clusterserver1 ~]# ansible-playbook test.yaml -f 10

PLAY [appserver] **************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.1.63]
ok: [192.168.1.21]

TASK: [Test Task] *************************************************************
changed: [192.168.1.63]
changed: [192.168.1.21]

PLAY RECAP ********************************************************************
192.168.1.21               : ok=2    changed=1    unreachable=0    failed=0
192.168.1.63               : ok=2    changed=1    unreachable=0    failed=0
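
A natural refinement is to restart httpd only when a change actually happens. A sketch using a handler (the httpd.conf source path here is hypothetical):

- hosts: appserver
  remote_user: root
  tasks:
    - name: deploy httpd config
      copy: src=httpd.conf dest=/etc/httpd/conf/httpd.conf
      notify: restart httpd
  handlers:
    - name: restart httpd
      service: name=httpd state=restarted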

[root@clusterserver1 ~]# ansible appserver -m shell -a "rpm -qa | egrep 'vim-enhanced|wget|unzip'"
192.168.1.63 | success | rc=0 >>
wget-1.12-5.el6_6.1.x86_64

192.168.1.21 | FAILED | rc=1 >>

[root@clusterserver1 ~]#

Apache Spark

Apache Spark 1.5.2 is a maintenance release that includes stability fixes in several areas, mainly the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.

Apache Spark is an open source cluster computing environment similar to Hadoop, but there are some differences between the two, and these differences make Spark superior for certain workloads: Spark keeps distributed datasets in memory, and in addition to providing interactive queries, it can also optimize iterative workloads.

Spark is implemented in the Scala language and uses Scala as its application framework. Unlike Hadoop, Spark is tightly integrated with Scala, which lets you operate on distributed datasets as easily as on local Scala collections.

Although Spark was created to support iterative jobs on distributed datasets, it is in fact complementary to Hadoop and can run in parallel on the Hadoop file system; this behavior is supported through a third-party clustering framework called Mesos. Spark was developed by the UC Berkeley AMP Lab (Algorithms, Machines, and People Lab) and can be used to build large-scale, low-latency data analysis applications.

Spark (http://spark-project.org), developed in the UC Berkeley AMPLab, was built to make data analytics fast. It is open source. Spark is an in-memory cluster computing framework, whereas Hadoop MapReduce is disk-based, so a job can load data into memory and query it repeatedly much more quickly than with Hadoop MapReduce. For programmers, Spark provides APIs in both Scala and Java. Spark focuses on two kinds of applications where keeping data in memory helps:
  • Iterative algorithms, which are common in machine learning.
  • Interactive data mining.
Abstractions Provided by Spark
The main abstraction Spark provides is the Resilient Distributed Dataset (RDD).
RDDs are fault-tolerant, parallel data structures that let users explicitly persist intermediate results in memory, control their partitioning to optimize data placement, and manipulate them using a rich set of operators.
A second abstraction in Spark is shared variables that can be used in parallel operations. Spark supports two types of shared variables:
  • broadcast variables
  • accumulators
Driver Program
At a high level, every Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster.
Operations on RDDs
Spark exposes RDDs through a language-integrated API. RDDs support two types of operations:
  • Transformations, which create a new dataset from an existing one.
  • Actions, which return a value to the driver program after running a computation on the dataset.
For example, map is a transformation that passes each dataset element through a function and returns a new distributed dataset representing the results. On the other hand, reduce is an action that aggregates all the elements of the dataset using some function and returns the final result to the driver program.
More examples of transformation operations are filter(func), flatMap(func), distinct([numTasks]), and reduceByKey(func, [numTasks]).
More examples of action operations are collect(), count(), first(), saveAsTextFile(path), and saveAsSequenceFile(path).
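
You can see the laziness of transformations directly in spark-shell (sc is the SparkContext the shell creates for you); a minimal sketch:

scala> val nums = sc.parallelize(1 to 100)   // distributed dataset
scala> val evens = nums.filter(_ % 2 == 0)   // transformation: nothing runs yet
scala> evens.count()                         // action: triggers the actual job
res0: Long = 50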

Centos7 install and configure Spark 1.5.2 Standalone mode
[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1
192.168.1.21 clusterserver2.rmohan.com clusterserver2

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.tar.gz"
tar -zxvf jdk-8u65-linux-x64.tar.gz
mkdir /usr/java
mv jdk1.8.0_65 /usr/java/

cd /usr/java/jdk1.8.0_65/
[root@cluster1 java]# alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_65/bin/java 2

alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_65/bin/java 2
alternatives --config java

[root@cluster1 java]# alternatives --config java

There is 1 program that provides 'java'.

Selection Command
-----------------------------------------------
*+ 1 /usr/java/jdk1.8.0_65/bin/java

Enter to keep the current selection[+], or type selection number: 1
[root@cluster1 java]#

alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_65/bin/jar 2
alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_65/bin/javac 2
alternatives --set jar /usr/java/jdk1.8.0_65/bin/jar
alternatives --set javac /usr/java/jdk1.8.0_65/bin/javac
vi /etc/profile.d/java.sh

export JAVA_HOME=/usr/java/jdk1.8.0_65
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

wget http://www.apache.org/dyn/closer.lua/spark/spark-1.5.2/spark-1.5.2.tgz

gunzip -c spark-1.5.2.tgz | tar xvf -

wget http://mirror.nus.edu.sg/apache/spark/spark-1.5.2/spark-1.5.2-bin-hadoop1-scala2.11.tgz
gunzip -c spark-1.5.2-bin-hadoop1-scala2.11.tgz | tar xvf -

Download Scala
http://downloads.typesafe.com/scala/2.11.7/scala-2.11.7.tgz?_ga=1.97307478.816346610.1449891008

mkdir /usr/hadoop
mv spark-1.5.2 /usr/hadoop/
mv scala-2.11.7 /usr/hadoop/
mv spark-1.5.2-bin-hadoop1-scala2.11 /usr/hadoop/

vi /etc/profile.d/scala.sh
#SCALA VARIABLES START
export SCALA_HOME=/usr/hadoop/scala-2.11.7
export PATH=$PATH:$SCALA_HOME/bin
#SCALA VARIABLES END

#SPARK VARIABLES START
export SPARK_HOME=/usr/hadoop/spark-1.5.2-bin-hadoop1-scala2.11
export PATH=$PATH:$SPARK_HOME/bin
#SPARK VARIABLES END

export SPARK_MASTER_IP=localhost
export SPARK_WORKER_MEMORY=1024m
export master=spark://localhost:7077

[root@clusterserver1 spark-1.5.2-bin-hadoop1-scala2.11]# scala -version
Scala code runner version 2.11.7 -- Copyright 2002-2013, LAMP/EPFL
You have new mail in /var/spool/mail/root
[root@clusterserver1 spark-1.5.2-bin-hadoop1-scala2.11]#

[root@clusterserver1 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/hadoop/spark-1.5.2-bin-hadoop1-scala2.11/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-clusterserver1.rmohan.com.out
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
root@localhost's password:
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/hadoop/spark-1.5.2-bin-hadoop1-scala2.11/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-clusterserver1.rmohan.com.out
[root@clusterserver1 sbin]#

[root@clusterserver1 bin]# spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
15/12/13 08:20:19 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Spark context available as sc.
15/12/13 08:20:22 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/13 08:20:23 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/13 08:20:29 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/12/13 08:20:30 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/12/13 08:20:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/13 08:20:31 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/13 08:20:32 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/13 08:20:38 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/12/13 08:20:38 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/12/13 08:20:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SQL context available as sqlContext.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/

Using Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_65)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 1+2
res0: Int = 3

scala>

/root/word.txt
hello world
hello hadoop
pls say hello

val readFile = sc.textFile("file:///root/word.txt")

scala> val readFile = sc.textFile("file:///root/word.txt")
readFile: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[5] at textFile at <console>:24

scala> readFile.count()
15/12/13 08:36:25 WARN LoadSnappy: Snappy native library not loaded
res2: Long = 3
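
Building on the same RDD, a minimal word-count sketch: flatMap and map are lazy transformations, reduceByKey combines the per-word counts, and collect() is the action that returns the result to the driver.

scala> val counts = readFile.flatMap(_.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)
scala> counts.collect()

Given the three-line word.txt above, the result should include ("hello",3).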


DRBD setup on Centos 6.7

CentOS 6.7 DRBD installation and configuration notes

DRBD (Distributed Replicated Block Device) is useful for HA setups built on replicated block devices. It can be used for NAS, SAN, and a number of other use cases.

** NOTE: THIS SETUP IS FOR ACTIVE/PASSIVE, NOT ACTIVE/ACTIVE

Primary: 192.168.1.60 (cluster1.rmohan.com)

Secondary: 192.168.1.61 (cluster2.rmohan.com)
Requirements

- Two disks (preferably the same size)
- Networking between the machines (node1 & node2)
- Working DNS resolution (/etc/hosts file)
- NTP-synchronized time on both nodes
- SELinux permissive
- Iptables port 7788 allowed
Steps marked (Primary) are run on the primary node only, steps marked (Secondary) on the secondary node only, and steps marked (Primary, Secondary) on both nodes.
I. Prepare the environment: (Primary, Secondary)
Partition the disk for DRBD on both machines

[root@cluster1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xe8577511.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@cluster1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@cluster1 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@cluster1 ~]# /etc/init.d/iptables stop
[root@cluster1 ~]# iptables -F
[root@cluster1 ~]# iptables-save > /etc/sysconfig/iptables

iptables-save
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [14:1480]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 7788:7799 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -p tcp -m tcp --dport 7788:7799 -j ACCEPT
COMMIT

[root@cluster2 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xe1b1fb03.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@cluster2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@cluster2 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@cluster2 ~]# iptables -F
[root@cluster2 ~]# iptables-save > /etc/sysconfig/iptables
[root@cluster2 ~]#
Create mysql dir
[root@cluster1 ~]# mkdir /mysql
[root@cluster2 ~]# mkdir /mysql
Ntp update

[root@cluster1 ~]# chkconfig --level 345 ntpd on
[root@cluster1 ~]# /etc/init.d/ntpd restart
Shutting down ntpd: [FAILED]
Starting ntpd: [ OK ]
[root@cluster1 ~]# ntpdate -u 0.centos.pool.ntp.org

[root@cluster2 ~]# chkconfig --level 345 ntpd on
[root@cluster2 ~]# /etc/init.d/ntpd restart
Shutting down ntpd: [FAILED]
Starting ntpd: [ OK ]
[root@cluster2 ~]# ntpdate -u 0.centos.pool.ntp.org
1. Install dependencies: (Primary, Secondary)

# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers

Install the ELRepo repository:

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

Install DRBD on both nodes cluster1 and cluster2

yum -y install drbd83-utils kmod-drbd83

Insert drbd module manually on both machines or reboot:
/sbin/modprobe drbd
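
To confirm the module loaded:

lsmod | grep drbd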

Add the configuration: create the Distributed Replicated Block Device resource file

/etc/drbd.d/clusterdb.res

resource clusterdb
{
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }

  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }

  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }

  on cluster1.rmohan.com {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.60:7788;
    flexible-meta-disk internal;
  }

  on cluster2.rmohan.com {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.61:7788;
    meta-disk internal;
  }
}

Initialize the DRBD meta data storage on both machines:

drbdadm create-md clusterdb

Start DRBD on both nodes:
service drbd start

On the PRIMARY node, promote the device to start the initial sync:
drbdsetup /dev/drbd0 primary --overwrite-data-of-peer

watch service drbd status

Every 2.0s: service drbd status                          Thu Nov 26 22:13:34 2015

drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb SyncSource Primary/Secondary UpToDate/Inconsistent C
... sync'ed: 1.2% (50624/51196)M

[root@cluster1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb Connected Primary/Secondary UpToDate/UpToDate C
[root@cluster1 ~]#
/sbin/mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13106615 blocks
655330 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@cluster1 ~]# mount -t ext4 /dev/drbd0 /mysql/
[root@cluster1 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_cluster1-lv_root
ext4 53G 1.8G 49G 4% /
tmpfs tmpfs 3.0G 0 3.0G 0% /dev/shm
/dev/sda1 ext4 500M 57M 417M 13% /boot
/dev/mapper/vg_cluster1-lv_home
ext4 47G 55M 44G 1% /home
/dev/drbd0 ext4 53G 55M 50G 1% /mysql
[root@cluster1 ~]#
[root@cluster1 mysql]# date ; drbdadm role clusterdb
Fri Nov 27 13:28:57 SGT 2015
Primary/Secondary

[root@cluster2 ~]# date ; drbdadm role clusterdb
Fri Nov 27 13:29:02 SGT 2015
Secondary/Primary
verify replication is up to date

[root@cluster1 ~]# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:56 nr:0 dw:0 dr:1052 al:0 bm:3 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Admin Commands:

[root@cluster1 ~]# drbd-overview
0:clusterdb Connected Primary/Secondary UpToDate/UpToDate C r-----

[root@cluster1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb Connected Primary/Secondary UpToDate/UpToDate C
BASIC MANUAL FAIL-OVER
# umount /dev/drbd0
# drbdadm secondary <resource>
Now, on the node we wish to make primary, promote the resource and mount the device.
# drbdadm primary <resource>
# mount /dev/drbd/by-res/<resource> <mountpoint>
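
For this particular setup (resource clusterdb mounted on /mysql), the concrete sequence would look like this; a sketch, assuming cluster1 is currently primary:

# on cluster1 (current primary):
umount /mysql
drbdadm secondary clusterdb

# on cluster2 (being promoted):
drbdadm primary clusterdb
mount -t ext4 /dev/drbd0 /mysql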

Disconnecting the disk:

drbdadm disconnect clusterdb
df -TH
umount /dev/drbd0
drbdadm secondary clusterdb

drbdadm connect clusterdb
drbdadm primary clusterdb
mount -t ext4 /dev/drbd0 /mysql/
df -TH
cd /mysql/
Fix DRBD recovery from split brain

[root@cluster1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb StandAlone Primary/Unknown UpToDate/DUnknown r-----
[root@cluster2 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb StandAlone Secondary/Unknown UpToDate/DUnknown r-----
Solution:

Step 1: Start drbd manually on both nodes

Step 2: Define one node as secondary and discard data on this

drbdadm secondary all
drbdadm disconnect all
drbdadm -- --discard-my-data connect all

Step 3: Define the other node as primary and connect
drbdadm primary all
drbdadm disconnect all
drbdadm connect all

[root@cluster1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb Connected Primary/Secondary UpToDate/UpToDate C
[root@cluster2 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res cs ro ds p mounted fstype
0:clusterdb Connected Secondary/Primary UpToDate/UpToDate C

Solaris Security Tips

Auditing

  1. Enable the Basic Security Module (BSM):
    /etc/security/bsmconv
  2. Configure the classes of events to log in /etc/security/audit_control:
    dir:/var/audit
    flags:lo,ad,pc,fc,fd,fm
    naflags:lo,ad
    #
    #   lo - login/logout events
    #   ad - administrative actions: mount, exportfs, etc.
    #   pc - process operations: fork, exec, exit, etc.
    #   fc - file creation
    #   fd - file deletion
    #   fm - change of object attributes: chown, flock, etc.
    #
    
  3. Create /etc/security/newauditlog.sh:
    #!/sbin/sh
    #
    # newauditlog.sh - Start a new audit file and expire the old logs
    #
    AUDIT_EXPIRE=30
    AUDIT_DIR="/var/audit"
    
    /usr/sbin/audit -n
    
    cd $AUDIT_DIR # in case it is a link
    /usr/bin/find . -type f -mtime +$AUDIT_EXPIRE \
        -exec rm {} > /dev/null 2>&1 \;
    
  4. Run the script nightly from cron:
    /usr/bin/crontab -e root
    0 0 * * * /etc/security/newauditlog.sh
    
  5. The audit files generated are not human readable. The praudit(1M) command can be used to convert audit data into several ASCII formats.
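
For example, to dump all records in the audit directory one line per record (assuming the default /var/audit location configured above):

    /usr/sbin/praudit -l /var/audit/*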

 

Boot files

  1. Disable all startup files for services that are not needed from /etc/rc2.d and /etc/rc3.d. Services may be disabled by changing the capital ‘S’ in the name of the script to a lowercase ‘s’. The following startup files should not be disabled:
    S01MOUNTFSYS   S69inet        S72inetsvc     S74xntpd       S80PRESERVE
    S05RMTMPFILES  S71rpc         S74autofs      S75cron        S88utmpd
    S20sysetup     S71sysid.sys   S74syslog      S75savecore    S99audit
    S30sysid.net
    
  2. In order to ensure that all of the startup scripts run with the proper umask, execute the following script:
    umask 022  # make sure umask.sh gets created with the proper mode
    echo "umask 022" > /etc/init.d/umask.sh
    for d in /etc/rc?.d
    do
       ln /etc/init.d/umask.sh $d/S00umask.sh
    done
    
  3. In order to log as much information as possible, add the following lines to your /etc/syslog.conf:
    mail.debug              /var/log/syslog
    *.info;mail.none        /var/adm/messages
    

    Note: Tabs must be used to separate the fields.

    This will log mail entries to /var/log/syslog and everything else to /var/adm/messages.

  4. Log failed login attempts by creating the /var/adm/loginlog file:
    touch /var/adm/loginlog
    chown root /var/adm/loginlog
    chgrp sys /var/adm/loginlog
    
  5. Set the permissions on the log files as follows:
    chmod 600 /var/adm/messages /var/log/syslog /var/adm/loginlog
    
  6. Enable hardware protection for buffer overflow exploits in /etc/system (sun4u, sun4d, and sun4m systems only).
    * Foil certain classes of bug exploits
    set noexec_user_stack = 1
    
    * Log attempted exploits
    set noexec_user_stack_log = 1
    

    Network Services

    1. Create /etc/init.d/nddconfig and create a link to /etc/rc2.d/S70nddconfig.
      touch /etc/init.d/nddconfig
      ln /etc/init.d/nddconfig /etc/rc2.d/S70nddconfig
      

      Add the following lines to the /etc/init.d/nddconfig file:

      #!/bin/sh
      #
      # /etc/init.d/nddconfig
      #
      
      # Fix for broadcast ping bug
      /usr/sbin/ndd -set /dev/ip ip_respond_to_echo_broadcast 0
      
      # Block directed broadcast packets
      /usr/sbin/ndd -set /dev/ip ip_forward_directed_broadcasts 0
      
      # Prevent spoofing
      /usr/sbin/ndd -set /dev/ip ip_strict_dst_multihoming 1
      /usr/sbin/ndd -set /dev/ip ip_ignore_redirect 1
      
      # No IP forwarding
      /usr/sbin/ndd -set /dev/ip ip_forwarding 0
      
      # Drop source routed packets
      /usr/sbin/ndd -set /dev/ip ip_forward_src_routed 0
       
      # Shorten ARP expiration to one minute to minimize ARP spoofing/hijacking
      # [Source: Titan adjust-arp-timers module]
      /usr/sbin/ndd -set /dev/ip ip_ire_flush_interval 60000    
      /usr/sbin/ndd -set /dev/arp arp_cleanup_interval 60               
      
    2. Deny services executed by inetd(3) the ability to create core files and enable logging for all TCP services by editing the /etc/rc2.d/S72inetsvc:
      # Run inetd in "standalone" mode (-s flag) so that it doesn't have
      # to submit to the will of SAF.  Why did we ever let them change inetd?
      
      ulimit -c 0
      /usr/sbin/inetd -s -t&     
      
    3. Configure RFC 1948 TCP sequence number generation in /etc/default/inetinit:
      TCP_STRONG_ISS=2
      
    4. Comment out or remove all unnecessary services in the /etc/inet/inetd.conf file including the following:
      shell		login		exec
      comsat		talk		uucp
      tftp		finger		sysstat
      netstat		time		echo
      discard		daytime		chargen
      rquotad		sprayd		walld
      rexd		rpc.ttdbserverd
      ufsd		printer		dtspc
      rpc.cmsd
      
    5. Create /etc/rc3.d/S79tmpfix so that upon boot the /tmp directory will always have the sticky bit set mode 1777.
      #!/bin/sh
      #ident  "@(#)tmpfix 1.0    95/09/14"
      
      if [ -d /tmp ]
      then
      /usr/bin/chmod 1777 /tmp
      /usr/bin/chgrp sys /tmp
      /usr/bin/chown sys /tmp
      fi
      

      [Source: Titan psfix module]

    Access Controls

    1. Disable network root logins by enabling the “CONSOLE” line in /etc/default/login.
    2. Remove, lock, or comment out unnecessary accounts, including “sys”, “uucp”, “nuucp”, and “listen”. The cleanest way to shut them down is to put “NP” in the password field of the /etc/shadow file.
    3. Require authentication for remote commands by commenting out the following line in /etc/pam.conf:
      #rlogin  auth sufficient /usr/lib/security/pam_rhosts_auth.so.1
      

      and changing the rsh line to read:

      rsh auth required   /usr/lib/security/pam_unix.so.1
      

      [Source: Titan pam-rhosts module]

    4. Only add accounts for users who require access to the system. If using NIS, use the compat mode by editing the /etc/nsswitch.conf file:
                 passwd: compat 
      

      Add each user to the /etc/passwd file

      +nis_user:x::::/home_dir:/bin/sh
      

      and the /etc/shadow file

      +nis_user::10626::::::
      
    5. Create an /etc/issue file to display the following warning banner:
      WARNING: To protect the system from unauthorized use and to ensure that the
      system is functioning properly, activities on this system are monitored and
      recorded and subject to audit. Use of this system is expressed consent to such
      monitoring and recording. Any unauthorized access or use of this Automated
      Information System is prohibited and could be subject to criminal and civil
      penalties.
      
      Source: CIAC-2317 Windows NT Network Security: A Manager’s Guide

      Add the banner to the /etc/motd file:

      cp /etc/motd /etc/motd.orig
      cat /etc/issue /etc/motd.orig > /etc/motd
      
  6. The Automated Security Enhancement Tool (ASET) checks the settings and contents of system files. Many of the setuid and setgid programs on Solaris are used only by root, or by the user or group id to which they are set. Run aset using the highest security level and review the report files that are generated in /usr/aset/reports.
      /usr/aset/aset -l high
      
    7. Create a master list of the remaining setuid/setgid programs on your system and check that the list remains static over time.
      /bin/find / -type f \( -perm -4000 -o -perm -2000 \) \
                  -exec ls -ldb {} \;
      
    8. Execution of the su(1M) command can be controlled by adding and configuring a wheel group such as that found on most BSD derived systems.
      /usr/sbin/groupadd -g 13 wheel
      /usr/bin/chgrp wheel /usr/bin/su /sbin/static.su
      /usr/bin/chmod 4550 /usr/bin/su /sbin/static.su
      

      The GID for the wheel group does not need to be 13, any valid GID can be used. You will need to edit /etc/group to add users to the wheel group.

    9. Create an /etc/ftpusers file:
      cat /etc/passwd | cut -f1 -d: > /etc/ftpusers
      chown root /etc/ftpusers
      chmod 600 /etc/ftpusers
      

      Remove any users that require ftp access from the /etc/ftpusers file.

    10. Set the default umask so that it does not include world access. Add “umask 027” to the following files:
      /etc/.login              /etc/profile
      /etc/skel/local.cshrc    /etc/skel/local.login
      /etc/skel/local.profile 
      

      Enable the “UMASK” line in the /etc/default/login file and set the value to 027

    11. The files in /etc/cron.d control which users can use the cron(1M) and at(1) facilities.
      Create an /etc/cron.d/cron.allow file:

      echo "root" > /etc/cron.d/cron.allow
      chown root /etc/cron.d/cron.allow
      chmod 600 /etc/cron.d/cron.allow
      

      Create an /etc/cron.d/at.allow file:

      cp -p /etc/cron.d/cron.allow /etc/cron.d/at.allow
      

      Create an /etc/cron.d/cron.deny file:

      cat /etc/passwd | cut -f1 -d: | grep -v root > /etc/cron.d/cron.deny
      chown root /etc/cron.d/cron.deny
      chmod 600 /etc/cron.d/cron.deny
      

      Create an /etc/cron.d/at.deny file:

      cp -p /etc/cron.d/cron.deny /etc/cron.d/at.deny
      
    12. If CDE is installed, replace the default CDE “Welcome” greeting. If the /etc/dt/config/C directory does not exist, create the directory structure and copy the default configuration file:
      mkdir -p /etc/dt/config/C
      chmod -R a+rX /etc/dt/config
      cp -p /usr/dt/config/C/Xresources /etc/dt/config/C
      

      Add the following lines to /etc/dt/config/C/Xresources:

      Dtlogin*greeting.labelString:       %LocalHost%
      Dtlogin*greeting.persLabelString:   login: %s
      
    13. If CDE is installed, disable XDMCP connection access by creating or replacing the /etc/dt/config/Xaccess file:
      #
      # Xaccess - disable all XDMCP connections
      #
      !*
      

      Set the permissions on /etc/dt/config/Xaccess to 444:

      chmod 444 /etc/dt/config/Xaccess
      

    Time Synchronization

    Edit the /etc/inet/ntp.conf file:

    # @(#)ntp.client        1.2     96/11/06 SMI
    #
    # /etc/inet/ntp.client
    #
    # An example file that could be copied over to /etc/inet/ntp.conf; it
    # provides a configuration for a host that passively waits for a server
    # to provide NTP packets on the ntp multicast net.
    #
    # Public NTP Server list: http://www.eecis.udel.edu/~mills/ntp/clock1.htm
    #
    server clock.llnl.gov
    

 

Solaris system information


1. Solaris memory usage

Free/unused memory on Solaris can be obtained with the vmstat command. Without options, vmstat displays a one-line summary of virtual memory activity since the system was booted. Tested on Solaris 5.8, 5.9, and 5.10. The result can be wrong inside Zones.

Script ./mem_usage.sh

#!/bin/sh
mem_free=`vmstat 1 2 | tail -1 | awk '{print $5}' | tail -1`;
mem_free_in_mb=`echo $mem_free/1024 | bc`;
mem_total_in_mb=`/usr/sbin/prtconf 2>/dev/null | grep Memory | awk '{ print $3 }'`;
mem_usage_in_mb=`echo "$mem_total_in_mb-$mem_free_in_mb" | bc`;
mem_usage_in_per=`echo "$mem_usage_in_mb/($mem_total_in_mb/100)" | bc`;
echo "\tPhysical memory size:\t\t $mem_total_in_mb";
echo "\tMemory usage in MB:\t\t $mem_usage_in_mb";
echo "\tMemory usage in %:\t\t $mem_usage_in_per%";

Output:

# ./mem_usage.sh
        Physical memory size:            768
        Memory usage in MB:              633
        Memory usage in %:               90%

2. Solaris swap usage

Script ./swap_usage.sh

#!/bin/sh
swap_total=`/usr/sbin/swap -l | grep "/" | awk '{ sum+=$4} END {print sum}'`;
swap_free=`/usr/sbin/swap -l | grep "/" | awk '{ sum+=$5} END {print sum}'`;
swap_in_use_per=`echo "($swap_total-$swap_free) / ($swap_total/100)" | bc`;
echo "\tSwap in use:\t\t $swap_in_use_per%";

Output:

# ./swap_usage.sh
        Swap in use:             27%

3. Determine If system is Solaris Zone

#!/bin/sh
if [ -f "/usr/sbin/zoneadm" ]; then
  /usr/sbin/zoneadm list | grep global 1>/dev/null;
  if [ "$?" -ne "0" ]; then
    zone_num=`/usr/sbin/zoneadm list | wc -l | sed 's/ //g'`;
    if [ "$zone_num" = "1" ]; then
      zone_status="Zone";
    fi;
  fi;
fi;
echo "\tSystem is:\t\t $zone_status";

4. Check RAID status metastatus and ZFS

#!/bin/sh
if [ -f "/usr/sbin/metastat" ]; then
  metastatus_ok=`/usr/sbin/metastat 2>/dev/null | grep "State:" | wc -l|sed 's/ //g'`;
  metastatus_err=`/usr/sbin/metastat 2>/dev/null | grep "State:" | grep -v "State: Okay" | wc -l|sed 's/ //g'`;
  if [ "$metastatus_ok" = "0" ] && [ "$metastatus_err" = "0" ]; then
    raid_status="no_meta_raids";
  fi
  if [ "$metastatus_ok" != "0" ] && [ "$metastatus_err" = "0" ]; then
    raid_status="OK";
  fi;
  if [ "$metastatus_err" != "0" ]; then
    raid_status="metastat_Failed";
  fi;
  raid_controller="Software RAID (metastatus)";
else
  raid_status="no_metastatus";
fi;
if [ -f "/usr/sbin/zpool" ]; then
  zfs_exists="ZFS"
  zfs_pools=`/usr/sbin/zpool status -v`;
  zfs_failed=`/usr/sbin/zpool status -v | grep "state:" | grep -v "ONLINE" | wc -l |sed 's/ //g'`;
  if [ "$zfs_pools" = "no pools available" ]; then
    zfs_status="no_zfs_pools";
  else
    if [ "$zfs_failed" != "0" ]; then
      zfs_status="ZFS_Failed";
    else
      zfs_status="OK";
    fi;
  fi;
else
  zfs_status="no_zfs";
fi;
echo "\tRAID Controller:\t $raid_controller, $zfs_exists";
echo "\tRAID Status:\t\t $raid_status, $zfs_status";

5. Get serial number on Solaris

#!/bin/sh
if [ -f "/usr/sbin/sneep" ]; then
  serial=`/usr/sbin/sneep`;
else
  serial=`/usr/sbin/eeprom | grep serial | awk -F= '{ print $2}'`;
fi;
echo "\tSerial:\t\t\t $serial";

6. Get Solaris OS name, version, update level

#!/bin/sh
os_name="`uname -s`";
os_ver="`uname -r`";
if head -1 /etc/release | grep u1 1>/dev/null ; then update_ver="U1"; fi;
if head -1 /etc/release | grep u2 1>/dev/null ; then update_ver="U2"; fi;
if head -1 /etc/release | grep u3 1>/dev/null ; then update_ver="U3"; fi;
if head -1 /etc/release | grep u4 1>/dev/null ; then update_ver="U4"; fi;
if head -1 /etc/release | grep u5 1>/dev/null ; then update_ver="U5"; fi;
if head -1 /etc/release | grep u6 1>/dev/null ; then update_ver="U6"; fi;
if head -1 /etc/release | grep u7 1>/dev/null ; then update_ver="U7"; fi;
if head -1 /etc/release | grep u8 1>/dev/null ; then update_ver="U8"; fi;
if head -1 /etc/release | grep u9 1>/dev/null ; then update_ver="U9"; fi;
if head -1 /etc/release | grep u10 1>/dev/null ; then update_ver="U10"; fi;
if head -1 /etc/release | grep s9_58shwpl3 1>/dev/null ; then update_ver="FCS"; fi;
echo "5CtOS version:\t\t $os_name $os_ver $update_ver";

7. Other

Vendor

#!/bin/sh
vendor=`/usr/sbin/prtconf 2>/dev/null | head -1 | awk '{print $3 " " $4}'`;
echo "\tVendor:\t\t\t $vendor";

Hardware

#!/bin/sh
hw="`uname -i`";
echo "\tHardware:\t\t\t $hw";

Platform 32/64bit

#!/bin/sh
os_bit="`isainfo -kv | awk '{print $1}'`";
echo "\tPlatform:\t\t $os_bit";

NTP stratums

#!/bin/sh
ntp_stratum=`/usr/sbin/ntpq -p 2>/dev/null | awk '{ print $3 }'| tail +3`
ntp_status=`echo $ntp_stratum`;
echo "\tNTP:\t\t\t $ntp_status";

All IP addresses

#!/bin/sh
ip_all="`/sbin/ifconfig -a | grep inet | grep -v 127.0.0.1 | awk '{print $2}'`";
ip_all_one_string="`echo $ip_all`";
echo "\tIP all:\t\t\t $ip_all_one_string";

Root files system usage

#!/bin/sh
fs_root="`df -k | grep "/$" | awk '{print $5}'`";
echo "\tFS ROOT:\t\t $fs_root";

All files systems usage

#!/bin/sh
fs_all=`df -k | grep "/" | grep -v "/$" | grep -v "/proc" | grep -v "/etc/mnttab" | grep -v "/etc/dfs/sharetab" | grep -v "/lib/libc.so.1" | awk '{print $6 " " $5 ","}'`
fs_all_one_string="`echo $fs_all`";
echo "\tFS ALL:\t\t\t $fs_all_one_string";

SSL0208E IKEYMAN VeriSign error


Upon installing the certificates received back from VeriSign, the following error may be shown in the error_log when trying to access the site via https:

[Tue Jun 29 10:34:37 2010] [error] [client 10.64.136.75] [e6968ff8] [10436] SSL0208E: SSL Handshake Failed, Certificate validation error. [10.64.136.75:1596 -> 10.34.77.5:443] [10:34:37.000732098]

The error SSL0208E signifies that a particular certificate may be missing from the chain. However, there is no easy way to find out which certificate is missing, so more advanced logging must be enabled.

In the httpd.conf file, add a line at the end of the file:

SSLTrace

So your httpd.conf file may look something like this:

LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
Listen 443
<VirtualHost *:443>
SSLEnable
</VirtualHost>
KeyFile /IBM/HTTPServer/keydatabase.kdb
SSLDisable
SSLTrace

Stop and restart apache server using apachectl and try to access the site again via https. A new log file under the logs directory will now be written called gsktrace_log.

Most of gsktrace_log will be unreadable; however, searching for a few keywords will reveal more detailed information on which certificate may be missing in the chain.

In particular, look for the “Cert1” term and then the log detail below it. An example portion of a gsktrace_log is detailed here:

GSKNativeValidator - Current built chain:
Cert1
DN: OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign,OU=VeriSign International Server CA - Class 3,OU=VeriSign\, Inc.,O=VeriSign Trust Network
S#: 0x1234567890d02f0f926098233f9fffff
Cert2
DN: CN=yourdomain.com,O=YOUR ORGANISATION NAME LTD,L=Sydney,ST=New South Wales,C=AU
S#: 0x3cc123f1a15b60a733cdc01234567890
.........
GSKMemoryDataSource - Looking for :
OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
.........
GSKMemoryDataSource - Trying:
CN=yourdomain.com,O=YOUR ORGANISATION NAME LTD,L=Sydney,ST=New South Wales,C=AU
.........
< and finally...>

... Dead End! Couldn't find any (more) issuer certificates. ...

The section “Looking for :” gives a clue on the certificate that may be missing in your chain that is causing the SSL0208E error. In this particular case, the “Class 3 Public Primary Certification Authority” certificate is missing within IKEYMAN. The solution to this problem was to download and install the correct Root certificate from VeriSign and install it into IKEYMAN (Just do a search for the certificate on Google). Once the httpserver was stopped and started back up, https was up and working.

How to install an SSL certificate on IHS (IBM HTTP Server)


I’m going to explain how to install an SSL certificate on IHS (IBM HTTP Server).

I received this request yesterday, and today I struggled with this configuration. Now, if you are in a hurry, I think you can configure SSL in 5 minutes. So let’s go through the steps:

* TIPS
TIP 1 – Create a .sh script for creating the db, for importing certificates, and for receiving the signed key.
TIP 2 – The gsk7cmd command supports the -Xms1024m -Xmx2048m options for adding extra heap memory to Java. This is very useful because sometimes you end up with OutOfMemory errors.
TIP 3 – After creating the request, you can see it by listing the certificate requests in the keystore; after receiving the signed certificate, the certificate request is removed. Don’t worry, this is normal.
TIP 4 – SSL0208E: SSL Handshake Failed, Certificate validation error. This error is related to the Root Class 3 certificate. Don’t forget to import it into the keystore.
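
Related to TIP 3, the pending certificate requests can be listed at any time; a sketch, assuming the keystore.kdb created in Step 2 below:

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -certreq -list -db keystore.kdb -pw 1234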

Step 1 – Configure your environment variables

Using the command line (as on almost every server):

export JAVA_HOME=/java/jre
export PATH=/java/jre/bin:$PATH

Step 2 – Create a new key store database:

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -keydb -create -db keystore.kdb -pw 1234 -type cms -stash

Step 3 – Create a new key request:

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -certreq -create -db keystore.kdb -pw 1234 -label keystorelabel -dn "CN=subdomain.yourcompany.com,O=Company Name,OU=OrganizationUnit,L=Sao Paulo,ST=Sao Paulo,C=BR" -size 2048 -file keyrequest.csr

Step 4 – Download the primary and secondary intermediate CA certificates and the root certificate

Access this link and copy the primary and secondary intermediate certificates:

http://www.verisign.com/support/verisign-intermediate-ca/secure-site-intermediate/index.html

copy the Primary Intermediate CA Certificate and save in a file called
primary.crt

copy the Secondary Intermediate CA Certificate and save in a file called
secondary.crt

Access the VeriSign link and choose your product. The most common is “Standard SSL”:

https://knowledge.verisign.com/support/mpki-for-ssl-support/index?page=content&actp=CROSSLINK&id=SO4785

After accessing your product link, the Class 3 Public Primary Certification Authority certificate will be displayed. Copy the certificate and store it in a file called

rootclasscert.crt

so now you have the 3 certificates:

primary.crt
secondary.crt
rootclasscert.crt

Step 5 – Import primary, secondary, and rootclasscert into your keystore.kdb database

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -Xms1024m -Xmx2048m -cert -add -db keystore.kdb -pw 1234 -label primary -format ascii -trust enable -file primary.crt

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -Xms1024m -Xmx2048m -cert -add -db keystore.kdb -pw 1234 -label secondary -format ascii -trust enable -file secondary.crt

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -Xms1024m -Xmx2048m -cert -add -db keystore.kdb -pw 1234 -label rootclasscert -format ascii -trust enable -file rootclasscert.crt

Step 6 – Send your request file keyrequest.csr to VeriSign to receive a signed certificate.

This step is immediate: you access your VeriSign account, copy and paste the request key, and VeriSign will send the signed certificate by email at the same time.

Step 7 – Receive the file and store it in your database

Copy the content of the cert.cer or copy the attached file to your server and issue the following command:

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -Xms1024m -Xmx2048m -cert -receive -file cert.cer -db keystore.kdb -pw 1234 -format ascii -default_cert yes

Step 8 – Configure your IHS to point to the new keystore

Example:

LoadModule ibm_ssl_module modules/mod_ibm_ssl.so

Listen 443

<VirtualHost your.ip.address.number:443>
ServerName your.ip.address.number
SSLEnable
SSLProtocolDisable SSLv2
KeyFile YOUR_PATH/SSL/keystore.kdb
</VirtualHost>
SSLDisable

Step 9 – Stop and start IHS.

IHS_ROOT_DIR/bin/adminctl stop
IHS_ROOT_DIR/bin/apachectl stop

IHS_ROOT_DIR/bin/adminctl start
IHS_ROOT_DIR/bin/apachectl start

check your server now using https://yourserver/
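
To double-check that the intermediate, root, and signed personal certificates all made it into the key database, you can list them; a sketch:

IHS_ROOT_DIR/gsk7/bin/gsk7cmd -cert -list -db keystore.kdb -pw 1234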

Calculating percentages in bash

Integer division in bash truncates the result toward zero, which is a problem when you’re trying to work out percentages. For example, if you simply want to divide 1 by 2 the result should be 0.5; however, bash returns 0:

user@computer:~> echo $(( 1 / 2 ))
0

To fix this problem use the program bc, “an arbitrary precision calculator language”. Using bc you can do the calculation to a specified number of decimal places:

user@computer:~> echo "scale=2; 1 / 2" | bc
.50

You can use this to work out the percentage as follows:

user@computer:~> echo "scale=2; 1 / 2 * 100" | bc
50.00

To chop off the decimal places use sed:

user@computer:~> echo "scale=2; 1 / 2 * 100" | bc | sed s/\\.[0-9]\\+//
50
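
If a whole-number percentage is enough, you can also stay entirely in bash by multiplying before dividing, so the integer truncation happens after the scaling:

user@computer:~> echo $(( 1 * 100 / 2 ))
50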