Adding SMS Notifications to Your Bash Scripts

Cost Note: The first 100 text messages / month are free. After that, you may start incurring charges.

Step 1: Create an Appropriate AWS User

Start by creating a new user or group in AWS with the appropriate privileges. For security reasons, you do NOT want to use your root account in production. I would suggest creating a group here and adding your user to it, so you can easily expand privileges later if you want to do more things via the AWS CLI.

  • Visit the IAM home, in the AWS console: https://console.aws.amazon.com/iam/
  • Click on “Users”, then “Add User”.
  • Enter a user name, then check “Programmatic Access”.
  • On the “Permissions” screen, you can either assign the user to a group with these permissions or attach them directly. You need at least the AmazonSNSFullAccess policy here.
  • Review, then create your user.
  • Important: Write down or download the CSV with the access key and secret key. You CANNOT retrieve these again without regenerating and changing them.


Create a new folder in your HOME directory (~/ on Linux/Mac or C:\Users\USERNAME\ on Windows) called .aws and then create a text file in that folder called credentials without any extension. Now add the following text to that file:

[sns-reminders]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY


For a bit of extra security you can lock down the credentials file’s permissions to 600:

$ chmod 600 credentials
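
Once the CLI is installed (Step 2 below), any command can select this named profile with the global --profile flag; for example, the publish call from Step 3 becomes:

aws sns publish --profile sns-reminders --phone-number +15555555555 --message "profile test"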


Step 2: Install and configure AWS CLI

There are many ways to install the AWS CLI. If your platform supports it, I strongly suggest installing with PIP: sudo pip install awscli

If you cannot or wish not to use PIP, see the official guide here for installing AWS CLI.

To configure, first enter aws configure. This will prompt you for the following values, which you noted in the step above:

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: ENTER

The access key and secret key should have been generated in step 1. The region can be any of the AWS regions, but if you choose one that does not support SMS, you will have to force that region in the command in step 3.

Step 3: Test It Out!

Try the following command in a terminal to test that everything works. Be sure to substitute your phone number for the placeholder (SNS expects the number in E.164 format, e.g. +15555555555).

aws sns publish --phone-number +15555555555 --message "Your Message Here"


To publish to an SNS topic instead of a single phone number (substitute your own region, account ID, and topic name):

aws sns publish --topic-arn arn:aws:sns:REGION:ACCOUNT_ID:sms-ack --message "website is up"

If your default region does not support SMS, you can add this flag to force one that does: --region us-west-2.

Instead of a string, --message can also take a file name as an argument, with --message file://file.txt, and send the contents of that file.

Wrapping Up

That’s it! Feel free to add the line above to any of your bash scripts or cron jobs, to alert you when something occurs. For example, I have a weekly backup in addition to my nightly one, so I have it text me when the weekly run completes.
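
A minimal sketch of that pattern (the backup path, phone number, and schedule are placeholders to adapt):

#!/bin/bash
# weekly-backup.sh - back up /home, then text me (sketch; paths and number are placeholders)
set -e

tar czf /backups/home-$(date +%F).tar.gz /home

aws sns publish --phone-number "+15555555555" \
  --message "Weekly backup finished on $(hostname) at $(date)"

Wire it into cron with something like: 0 3 * * 0 /usr/local/bin/weekly-backup.sh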

oracle databases server

create database TEST
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'C:\oraclexe\app\oracle\oradata\TEST\REDO01.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 2 'C:\oraclexe\app\oracle\oradata\TEST\REDO02.LOG' SIZE 50M BLOCKSIZE 512
DATAFILE 'C:\oraclexe\app\oracle\oradata\TEST\SYSTEM.DBF' size 100m autoextend on
sysaux datafile 'C:\oraclexe\app\oracle\oradata\TEST\SYSAUX.DBF' size 100m autoextend on
undo tablespace undotbs1 datafile 'C:\oraclexe\app\oracle\oradata\TEST\UNDOTBS1.DBF' size 100m autoextend on
CHARACTER SET AL32UTF8
;

CREATE DATABASE axisdevdb
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/axisdevdb/Disk1/redo01_a.log') SIZE 100M,
        GROUP 2 ('/u01/app/oracle/oradata/axisdevdb/Disk1/redo02_a.log') SIZE 100M,
        GROUP 3 ('/u01/app/oracle/oradata/axisdevdb/Disk1/redo03_a.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/u01/app/oracle/oradata/axisdevdb/Disk1/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/u01/app/oracle/oradata/axisdevdb/Disk1/sysaux01.dbf' SIZE 325M REUSE
DEFAULT TEMPORARY TABLESPACE tempts1
   TEMPFILE '/u01/app/oracle/oradata/axisdevdb/Disk1/temp01.dbf' SIZE 20M REUSE
UNDO TABLESPACE undotbs
   DATAFILE '/u01/app/oracle/oradata/axisdevdb/Disk1/undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Log into the database server as a user belonging to the 'dba' (Unix) or 'ora_dba' (Windows) group, typically 'oracle', or as an administrator on your Windows machine. You can then log into Oracle as the SYS user and change the SYSTEM password as follows:

$ sqlplus "/ as sysdba"
SQL*Plus: Release 9.2.0.1.0 - Production on Mon Apr 5 15:32:09 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> show user

USER is "SYS"

SQL> passw system
Changing password for system
New password:
Retype new password:
Password changed
SQL> quit

Next, we need to change the password of SYS:

$ sqlplus "/ as system"
SQL*Plus: Release 9.2.0.1.0 - Production on Mon Apr 5 15:36:45 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER}]
where <logon> ::= <username>[/<password>][@<connect_string>] | /
Enter user-name: system
Enter password:

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> passw sys
Changing password for sys
New password:
Retype new password:
Password changed
SQL> quit
You should now be able to log on as the SYS and SYSTEM users with the passwords you just typed in.

Method 2: Creating pwd file (Tested on Windows Oracle 8.1.7)

Stop the Oracle service of the instance you want to change the passwords of.
Find the PWD###.ora file for this instance; it is usually located at C:\oracle\ora81\database\, where ### is the SID of your database.
Rename the PWD###.ora file to PWD###.ora.bak, for obvious safety reasons.
Create a new pwd file by issuing the command:
orapwd file=C:\oracle\ora81\database\PWD###.ora password=XXXXX
where ### is the SID and XXXXX is the password you would like to use for the SYS and INTERNAL accounts.
Start the Oracle service for the instance you just fixed. You should be able to get in with the SYS user and change other passwords from there.


Oracle default user and password

sqlplus /nolog
conn /as sysdba
alter user system identified by manager;

OK, you can now use the Oracle database service normally. Specific instructions and points of attention are given in the next section.

Extra operations
1, query the instance name

select instance from v$thread;

echo $ORACLE_SID

First, create a user:
create user test identified by password;
alter user test identified by password;
Second, grant/revoke roles:
grant connect, resource to test;
revoke connect, resource from test;
Third, delete the user:
drop user test;
drop user test cascade;
Fourth, create / grant / delete a role:
create role testRole;
grant select on class to testRole;
drop role testRole;

Example (role-related):
grant resource to nova;

Fifth, create the wm_concat function on 12c
11gR2 and 12c have dropped the wm_concat function, so on those versions you have to create it yourself.

Oracle default user name and password cheat sheet

create user ums_dev identified by ums_dev;
grant session, connect, resource to ums_dev;
ALTER USER ums_dev ACCOUNT UNLOCK;
alter user ums_dev identified by ums_dev;
grant unlimited tablespace to ums_dev;
alter table table_user add (overdate TIMESTAMP(6));
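
A quick sanity check that the new account works (a minimal sketch, assuming a local instance and the password set above):

sqlplus ums_dev/ums_dev <<'EOF'
show user
select sysdate from dual;
exit
EOF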

View Oracle Role Users and Permissions

select user_id, username, DEFAULT_TABLESPACE, ACCOUNT_STATUS,PROFILE from dba_users;

select username,default_tablespace from user_users;

select * from user_role_privs;

select * from user_sys_privs;

select * from user_tab_privs;

Check the table related information

select sum(bytes)/(1024*1024) tablesize from user_segments where segment_name='ZW_YINGYEZ';

select index_name,index_type,table_name from user_indexes order by table_name;

select * from user_ind_columns where table_name='CB_CHAOBIAOSJ201004';

select sum(bytes)/(1024*1024) as indexsize from user_segments
where segment_name=upper('AS_MENUINFO');

select * from v$version;

Import and Export

exp dbserver/dbserver1234@ORCL file=/opt/dbserver.dmp owner=dbserver

imp dbserver/dbserver1234@XE file=c:\orabackup\full.dmp log=c:\orabackup\imp.log full=y

Export

exp dbserver/dbserver1234@ORCL file=dbserver.dmp log=dbserver.log owner=ums rows=n

exp dbserver/dbserver1234 jdbc:oracle:thin:@10.10.73.206:1521:umpay file=dbserver.dmp log=dbserver.log owner=ums rows=n

disable default webpage

Once that is done, you should disable Apache’s default welcome page.

[root@rmohan.com ~]# sed -i 's/^/#&/g' /etc/httpd/conf.d/welcome.conf

Also, prevent the Apache web server from displaying files within the web directory.

[root@rmohan.com ~]# sed -i "s/Options Indexes FollowSymLinks/Options FollowSymLinks/" /etc/httpd/conf/httpd.conf

After that, start and enable the Apache web server.

[root@rmohan.com ~]# systemctl start httpd.service
[root@rmohan.com ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
Setup WebDAV

For Apache, there are three WebDAV-related modules which are loaded by default when Apache starts.

[root@rmohan.com ~]# httpd -M | grep dav
dav_module (shared)
dav_fs_module (shared)
dav_lock_module (shared)

Next, create a dedicated directory for WebDAV:

[root@rmohan.com ~]# mkdir /var/www/html/webdav
[root@rmohan.com ~]# chown -R apache:apache /var/www/html
[root@rmohan.com ~]# chmod -R 755 /var/www/html

For security purposes, you need to create a user account.

[root@rmohan.com ~]# htpasswd -c /etc/httpd/.htpasswd user1
New password:
Re-type new password:
Adding password for user user1

And also, you need to modify the owner and permissions in order to enhance security

[root@rmohan.com ~]# chown root:apache /etc/httpd/.htpasswd
[root@rmohan.com ~]# chmod 640 /etc/httpd/.htpasswd

Once it is done, you need to create a VirtualHost for WebDAV.

[root@rmohan.com ~]# vi /etc/httpd/conf.d/webdav.conf
DavLockDB /var/www/html/DavLock
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/webdav/
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    Alias /webdav /var/www/html/webdav
    <Directory /var/www/html/webdav>
        DAV On
        AuthType Basic
        AuthName "webdav"
        AuthUserFile /etc/httpd/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>

Once the VirtualHost is configured, you need to restart Apache to put your changes into effect.

[root@rmohan.com ~]# systemctl restart httpd.service

Test the functionality of the WebDAV server from a local machine. To take advantage of WebDAV, you need a qualified client. For example, you can install a program called cadaver on a CentOS 7 desktop:

[root@rmohan.com ~]# yum install cadaver
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirror.dhakacom.com
* epel: mirror2.totbb.net
* extras: mirrors.viethosting.com
* updates: centos-hn.viettelidc.com.vn
Resolving Dependencies
--> Running transaction check
---> Package cadaver.x86_64 0:0.23.3-9.el7 will be installed
--> Finished Dependency Resolution

.
.
Running transaction
Installing : cadaver-0.23.3-9.el7.x86_64 1/1
Verifying : cadaver-0.23.3-9.el7.x86_64 1/1

Installed:
cadaver.x86_64 0:0.23.3-9.el7

Complete!

Having cadaver installed, use the following command to access the WebDAV server.

[root@rmohan.com ~]# cadaver http://192.168.7.234/webdav/
Authentication required for webdav on server `192.168.7.234':
Username: user1
Password:
dav:/webdav/>

In the cadaver shell, you can upload and organize files as you wish. Here are some examples. To upload a local file

dav:/webdav/> put /root/Desktop/rmohan.com.txt
Uploading /root/Desktop/rmohan.com.txt to `/webdav/rmohan.com.txt': succeeded.

To create a directory “dir1” on the WebDAV server

dav:/webdav/> mkdir dir1

To quit the cadaver shell

dav:/webdav/> exit
Connection to `192.168.7.234' closed.
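
If cadaver isn’t available, plain curl can exercise the same WebDAV endpoints (same test user and URL as above):

# upload a file via HTTP PUT
curl -u user1 -T rmohan.com.txt http://192.168.7.234/webdav/

# create a collection (directory) via the WebDAV MKCOL method
curl -u user1 -X MKCOL http://192.168.7.234/webdav/dir2/
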
If you want to learn more about cadaver, you can look up the cadaver manual in the Bash shell (man cadaver). With this, the tutorial on setting up a WebDAV server using Apache on CentOS 7 comes to an end.

ansible trail

##### Steps for deployment of Ansible on CentOS 7

##### Dependency Tasks

### Install EPEL
sudo yum install epel-release

### Install pending updates
sudo yum -y update

##### Install Ansible

### Install Ansible
sudo yum -y install ansible

### Verify the Version
ansible --version

[web]
node1.rmohan.com
[app]
node2.rmohan.com
[db]
node3.rmohan.com

ansible all --list-hosts
ansible db --list-hosts
ansible db -m ping

To connect with password authentication, use the "-k" option:
ansible db -k -m command -a "uptime"

ansible db -k -m command -a "cat /etc/shadow" -b --ask-become-pass

To run a task with another user's privileges instead of root, specify the option "--become-user=xxx".
If you'd like to escalate privileges with a method other than sudo (su | pbrun | pfexec | runas), specify the option "--become-method=xxx".

ansible db -m ping
ansible db -m command -a uptime
ansible db -a "tail /var/log/dmesg"

ansible -m ping db
ansible -m ping all
ansible -m command -a "df -h" db
ansible -m command -a "free -mt" db
ansible -m command -a "uptime" all
ansible -m command -a "arch" all
ansible -m shell -a "hostname" all
ansible -m command -a "df -h" db > /tmp/df_output.txt

ansible all -a "echo hello world"
ansible all -m ping
ansible db -m ping
ansible db -m setup -l node-1
ansible db -m command -a "hostname"
ansible db -m command -a "hostname" -o
ansible db -m command -a "uptime"
ansible db -m shell -a 'echo $TERM'
ansible db -b -m yum -a "name=httpd state=present"

ansible web -b -m service -a "name=httpd state=started"
ansible web -b -m service -a "name=httpd state=stopped"

ansible web -a "/sbin/reboot" -f 10

Adhoc Commands

ansible web -a "yum update -y"
ansible app -a "yum -y install tomcat"
ansible app -a "service tomcat status"
ansible app -a "service tomcat start"
ansible app -a "yum -y install curl wget"
ansible app -a "curl web"
ansible app -a "bash -c 'curl -k https://github.com/opstree-ansible/ansible-training/blob/master/attendees/exercise/application/sample.war > /var/lib/tomcat/webapps/sample.war'"
ansible app -a "service tomcat restart"
ansible app -a "curl node2.rmohan.com:8080/sample/"
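
The shell-style ad-hoc calls above work, but the module-based equivalents are idempotent and report changed/ok properly; a rough sketch using the same package, service, and WAR URL:

ansible app -b -m yum -a "name=tomcat state=present"
ansible app -b -m service -a "name=tomcat state=started"
ansible app -b -m get_url -a "url=https://github.com/opstree-ansible/ansible-training/blob/master/attendees/exercise/application/sample.war dest=/var/lib/tomcat/webapps/sample.war validate_certs=no"
ansible app -b -m service -a "name=tomcat state=restarted"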

ansible centos -m copy -a "src=test.txt dest=/tmp/test.txt"
ansible centos -m yum -a "name=libselinux-python state=present"
ansible centos -m copy -a "src=test.txt dest=/tmp/test.txt"

vi playbook_sample.yml
# target hostname or group name
- hosts: web
  # define tasks
  tasks:
    # task name (any name you like)
    - name: Test file
      # use file module to set the file state
      file: path=/tmp/test.conf state=touch owner=root group=root mode=0600

run Playbook
ansible-playbook playbook_sample.yml

ansible web1 -m command -a "ls -l /tmp/"

[root@controller test]# ansible web -m command -a "ls -l /tmp/test.conf"
node1.rmohan.com | SUCCESS | rc=0 >>
-rw——- 1 root root 0 Mar 31 20:29 /tmp/test.conf

Create a playbook which ensures Apache httpd is installed and running.
vi playbook_sample2.yml
- hosts: web
  # escalate privileges (default: root)
  become: yes
  # privilege escalation method
  become_method: sudo
  # define tasks
  tasks:
    - name: httpd is installed
      yum: name=httpd state=installed
    - name: httpd is running and enabled
      service: name=httpd state=started enabled=yes

ansible-playbook -v playbook_sample2.yml --ask-become-pass

ansible web -m shell -a "/bin/systemctl status httpd | head -3" -b --ask-become-pass

[root@controller test]# ansible web -m shell -a "/bin/systemctl status httpd | head -3" -b --ask-become-pass
SUDO password:
node1.rmohan.com | SUCCESS | rc=0 >>
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2018-03-31 20:36:52 +08; 46s ago

/root/ansible/playbook/test

playbook_sample.yml

- hosts: db
  become: yes
  become_method: sudo
  tasks:
    - name: General packages are installed
      yum: name={{ item }} state=installed
      with_items:
        - vim-enhanced
        - wget
        - unzip
      tags: General_Packages

[root@controller test]# ansible-playbook playbook_sample.yml --ask-become-pass
SUDO password:

PLAY [db] *****************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************
ok: [node3.rmohan.com]

TASK [General packages are installed] *************************************************************************************************************************************************************************************************************************************
ok: [node3.rmohan.com] => (item=[u'vim-enhanced', u'wget', u'unzip'])

PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************
node3.rmohan.com : ok=2 changed=0 unreachable=0 failed=0

[root@controller test]#

ansible db -m shell -a "rpm -qa | grep -E 'vim-enhanced|wget|unzip'" --ask-become-pass

Variables from "Gathering Facts":
vi playbook_sample3.yml

# refer to "ansible_distribution", "ansible_distribution_version"
- hosts: target_servers
  tasks:
    - name: Refer to Gathering Facts
      command: echo "{{ ansible_distribution }} {{ ansible_distribution_version }}"
      register: dist
    - debug: msg="{{ dist.stdout }}"

[root@controller test]# ansible-playbook playbook_sample3.yml

PLAY [web] ****************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************
ok: [node1.rmohan.com]

TASK [Refer to Gathering Facts] *******************************************************************************************************************************************************************************************************************************************
changed: [node1.rmohan.com]

TASK [debug] **************************************************************************************************************************************************************************************************************************************************************
ok: [node1.rmohan.com] => {
    "msg": "CentOS 7.4.1708"
}

PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************
node1.rmohan.com : ok=3 changed=1 unreachable=0 failed=0

vi playbook_sample4.yml
- hosts: target_servers
  become: yes
  become_method: sudo
  handlers:
    - name: restart sshd
      service: name=sshd state=restarted
  tasks:
    - name: edit sshd_config
      lineinfile: >
        dest=/etc/ssh/sshd_config
        regexp="{{ item.regexp }}"
        line="{{ item.line }}"
      with_items:
        - { regexp: '^#PermitRootLogin', line: 'PermitRootLogin no' }
      notify: restart sshd
      tags: Edit_sshd_config

ansible-playbook playbook_sample4.yml --ask-become-pass

export JAVA_HOME=/opt/java/java/
export JRE_HOME=/opt/java/java/jre
export PATH=$PATH:/opt/java/java/bin:/opt/java/java/jre/bin

kubernetes

Add these two lines to my /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl net.bridge.bridge-nf-call-iptables=1
swapoff -a
firewall-cmd --reload
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

kubeadm reset

echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

kubeadm init

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Step 1: Disable SELinux & setup firewall rules

Login to your kubernetes master node and set the hostname and disable selinux using following commands

~]# hostnamectl set-hostname 'k8s-master'
~]# exec bash
~]# setenforce 0
~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Set the following firewall rules.

[root@k8s-master ~]# firewall-cmd --permanent --add-port=6443/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=2379-2380/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10250/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10251/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10252/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10255/tcp
[root@k8s-master ~]# firewall-cmd --reload
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Note: In case you don’t have your own DNS server, update the /etc/hosts file on the master and worker nodes:

192.168.1.30 k8s-master
192.168.1.40 worker-node1
192.168.1.50 worker-node2

Step 2: Configure Kubernetes Repository

Kubernetes packages are not available in the default CentOS 7 and RHEL 7 repositories. Use the command below to configure the package repository.

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
> https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master ~]#

Step 3: Install Kubeadm and Docker

Once the package repositories are configured, run the command below to install the kubeadm and docker packages.

[root@k8s-master ~]# yum install kubeadm docker -y

Start and enable the kubelet and docker services

[root@k8s-master ~]# systemctl restart docker && systemctl enable docker
[root@k8s-master ~]# systemctl restart kubelet && systemctl enable kubelet

Step 4: Initialize Kubernetes Master with ‘kubeadm init’

Run the command below to initialize and set up the Kubernetes master.

[root@k8s-master ~]# kubeadm init

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Step 5: Deploy pod network to the cluster

Try running the commands below to get the status of the cluster and pods.

kubectl get nodes

To make the cluster status Ready and the kube-dns status Running, deploy the pod network so that containers on different hosts can communicate with each other. The pod network is the overlay network between the worker nodes.

Run the beneath command to deploy network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Now run the following commands to verify the status

[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE VERSION
k8s-master Ready 1h v1.7.5
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-master 1/1 Running 0 57m
kube-system kube-apiserver-k8s-master 1/1 Running 0 57m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 57m
kube-system kube-dns-2425271678-044ww 3/3 Running 0 1h
kube-system kube-proxy-9h259 1/1 Running 0 1h
kube-system kube-scheduler-k8s-master 1/1 Running 0 57m
kube-system weave-net-hdjzd 2/2 Running 0 7m
[root@k8s-master ~]#

Perform the following steps on each worker node
Step 1: Disable SELinux & configure firewall rules on both the nodes

Before disabling SELinux set the hostname on the both nodes as ‘worker-node1’ and ‘worker-node2’ respectively

~]# setenforce 0
~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
~]# firewall-cmd --permanent --add-port=10250/tcp
~]# firewall-cmd --permanent --add-port=10255/tcp
~]# firewall-cmd --permanent --add-port=30000-32767/tcp
~]# firewall-cmd --permanent --add-port=6783/tcp
~]# firewall-cmd --reload
~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Step 2: Configure Kubernetes Repositories on both worker nodes

~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Step 3: Install kubeadm and docker package on both nodes

[root@worker-node1 ~]# yum install kubeadm docker -y
[root@worker-node2 ~]# yum install kubeadm docker -y

Start and enable docker service

[root@worker-node1 ~]# systemctl restart docker && systemctl enable docker
[root@worker-node2 ~]# systemctl restart docker && systemctl enable docker

Step 4: Now Join worker nodes to master node

To join worker nodes to the master node, a token is required. When the Kubernetes master is initialized, the output includes the join command and token. Copy that command and run it on both worker nodes.

[root@worker-node1 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443

[root@worker-node2 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443
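
If you no longer have the token printed by kubeadm init, newer kubeadm releases can mint a fresh one together with the full join command on the master:

[root@k8s-master ~]# kubeadm token list
[root@k8s-master ~]# kubeadm token create --print-join-command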

yum update -y
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

swapoff -a

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum makecache fast

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

glibc Update and Downgrade Using YUM

1). Existing RPM version checking and backup
#rpm -qa | grep glibc
compat-glibc-headers-2.3.4-2.26
glibc-common-2.5-81
glibc-devel-2.5-81
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-2.5-81
glibc-headers-2.5-81
glibc-devel-2.5-81
glibc-2.5-81

2). createrepo REPODATA
/usr/local/src/new_glibc

# pwd
/usr/local/src/new_glibc

#createrepo ./
12/12 - glibc-devel-2.5-123.el5_11.1.i386.rpm
Saving Primary metadata
Saving file lists metadata
Saving other metadata

3). new_glibc.repo
#vim /etc/yum.repos.d/new_glibc.repo
[new-glibc]
baseurl=file:///usr/local/src/new_glibc/
enabled=1
gpgcheck=0

# yum repolist
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Repository 'new-glibc' is missing name in configuration, using id
Unable to read consumer identity
new-glibc| 951 B 00:00
new-glibc/primary| 10 kB 00:00 new-glibc12/12
repo id repo name status
new-glibc new-glibc 12
rhel-DVD Red Hat Enterprise Linux 5Server – x86_64 – DVD 3,285
repolist: 3,297

# yum update glibc
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Repository 'new-glibc' is missing name in configuration, using id
Unable to read consumer identity
Skipping security plugin, no data
Setting up Update Process
Resolving Dependencies
Skipping security plugin, no data
--> Running transaction check
--> Processing Dependency: glibc = 2.5-81 for package: glibc-devel
--> Processing Dependency: glibc = 2.5-81 for package: glibc-headers
--> Processing Dependency: glibc = 2.5-81 for package: nscd
--> Processing Dependency: glibc = 2.5-81 for package: glibc-devel
---> Package glibc.i686 0:2.5-123.el5_11.1 set to be updated
--> Processing Dependency: glibc-common = 2.5-123.el5_11.1 for package: glibc
---> Package glibc.x86_64 0:2.5-123.el5_11.1 set to be updated
--> Running transaction check
---> Package glibc-common.x86_64 0:2.5-123.el5_11.1 set to be updated
---> Package glibc-devel.i386 0:2.5-123.el5_11.1 set to be updated
---> Package glibc-devel.x86_64 0:2.5-123.el5_11.1 set to be updated
---> Package glibc-headers.x86_64 0:2.5-123.el5_11.1 set to be updated
---> Package nscd.x86_64 0:2.5-123.el5_11.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================
Package Arch Version Repository Size
=====================================================================
Updating:
glibc i686 2.5-123.el5_11.1 new-glibc 5.4 M
glibc x86_64 2.5-123.el5_11.1 new-glibc 4.8 M
Updating for dependencies:
glibc-common x86_64 2.5-123.el5_11.1 new-glibc 16 M
glibc-devel i386 2.5-123.el5_11.1 new-glibc 2.1 M
glibc-devel x86_64 2.5-123.el5_11.1 new-glibc 2.4 M
glibc-headers x86_64 2.5-123.el5_11.1 new-glibc 602 k
nscd x86_64 2.5-123.el5_11.1 new-glibc 178 k

Transaction Summary
====================================================================
Install 0 Package(s)
Upgrade 7 Package(s)

Total download size: 32 M
Is this ok [y/N]: y
Downloading Packages:
——————————————————————-
Total 14 GB/s | 32 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : glibc-common 1/14
Updating : glibc 2/14
Updating : nscd 3/14
Updating : glibc-headers 4/14
Updating : glibc-devel 5/14
Updating : glibc 6/14
Updating : glibc-devel 7/14
Cleanup : glibc-headers 8/14
Cleanup : glibc-common 9/14
Cleanup : glibc 10/14
Cleanup : glibc 11/14
Cleanup : nscd 12/14
Cleanup : glibc-devel 13/14
Cleanup : glibc-devel 14/14
Installed products updated.

Updated:
glibc.i686 0:2.5-123.el5_11.1
glibc.x86_64 0:2.5-123.el5_11.1

Dependency Updated:
glibc-common.x86_64 0:2.5-123.el5_11.1
glibc-devel.i386 0:2.5-123.el5_11.1
glibc-devel.x86_64 0:2.5-123.el5_11.1
glibc-headers.x86_64 0:2.5-123.el5_11.1
nscd.x86_64 0:2.5-123.el5_11.1

Complete!

#rpm -qa | grep glibc
glibc-devel-2.5-123.el5_11.1
compat-glibc-headers-2.3.4-2.26
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-2.5-123.el5_11.1
glibc-2.5-123.el5_11.1
glibc-devel-2.5-123.el5_11.1
glibc-headers-2.5-123.el5_11.1
glibc-common-2.5-123.el5_11.1

1).

#rpm -qa | grep glibc
glibc-devel-2.5-123.el5_11.1
compat-glibc-headers-2.3.4-2.26
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-2.5-123.el5_11.1
glibc-2.5-123.el5_11.1
glibc-devel-2.5-123.el5_11.1
glibc-headers-2.5-123.el5_11.1
glibc-common-2.5-123.el5_11.1

2). yum downgrade

# yum downgrade glibc glibc-devel glibc-headers glibc-common nscd
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Repository 'new-glibc' is missing name in configuration, using id
Unable to read consumer identity
Setting up Downgrade Process
No Match for available package: nscd-2.5-81.x86_64
Resolving Dependencies
--> Running transaction check
---> Package glibc.i686 0:2.5-81 set to be updated
---> Package glibc.x86_64 0:2.5-81 set to be updated
---> Package glibc.i686 0:2.5-123.el5_11.1 set to be erased
---> Package glibc.x86_64 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-common.x86_64 0:2.5-81 set to be updated
---> Package glibc-common.x86_64 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-devel.i386 0:2.5-81 set to be updated
---> Package glibc-devel.x86_64 0:2.5-81 set to be updated
---> Package glibc-devel.i386 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-devel.x86_64 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-headers.x86_64 0:2.5-81 set to be updated
---> Package glibc-headers.x86_64 0:2.5-123.el5_11.1 set to be erased
--> Finished Dependency Resolution

Dependencies Resolved

===========================================
Package Arch Version Repository Size
===========================================
Downgrading:
glibc i686 2.5-81 rhel-DVD 5.3 M
glibc x86_64 2.5-81 rhel-DVD 4.8 M
glibc-common x86_64 2.5-81 rhel-DVD 16 M
glibc-devel i386 2.5-81 rhel-DVD 2.0 M
glibc-devel x86_64 2.5-81 rhel-DVD 2.4 M
glibc-headers x86_64 2.5-81 rhel-DVD 596 k

Transaction Summary
===========================================
Remove 0 Package(s)
Reinstall 0 Package(s)
Downgrade 6 Package(s)

Total download size: 32 M
Is this ok [y/N]: y
Downloading Packages:
——————————————-
Total 10 GB/s | 32 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : glibc-common 1/12
Installing : 2/12
Installing : glibc-headers 3/12
Installing : glibc-devel 4/12
Installing : glibc 5/12
Installing : glibc-devel 6/12
Cleanup : glibc-headers 7/12
Cleanup : glibc-common 8/12
Cleanup : glibc 9/12
Cleanup : glibc 10/12
Cleanup : glibc-devel 11/12
Cleanup : glibc-devel 12/12
Installed products updated.

Removed:
glibc.i686 0:2.5-123.el5_11.1
glibc.x86_64 0:2.5-123.el5_11.1
glibc-common.x86_64 0:2.5-123.el5_11.1
glibc-devel.i386 0:2.5-123.el5_11.1
glibc-devel.x86_64 0:2.5-123.el5_11.1
glibc-headers.x86_64 0:2.5-123.el5_11.1

Installed:
glibc.i686 0:2.5-81
glibc.x86_64 0:2.5-81
glibc-common.x86_64 0:2.5-81
glibc-devel.i386 0:2.5-81
glibc-devel.x86_64 0:2.5-81
glibc-headers.x86_64 0:2.5-81

Complete!

3).
# rpm -qa | grep glibc
glibc-2.5-81
glibc-2.5-81
compat-glibc-headers-2.3.4-2.26
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-headers-2.5-81
glibc-devel-2.5-81
glibc-common-2.5-81
glibc-devel-2.5-81

http://www.99res.com/?s=Linux+Academy
http://www.99res.com/archives/312346.html

tomcat guide to prepare
http://documents.tips/documents/tomcatppt.html

http://ebookee.org/Linux-Academy-Red-Hat-Certified-System-Administrator-rhcsa-V7-professional-Level_2778727.html

http://www.tnctr.com/topic/320006-linux-academy-docker-deep-dive/

http://www.heroturko.info/tutorials/9023-linux-academy-aws-certified-solutions-architect-professional-level.html

http://nitroflare.com/view/7815FEA616C50FA/6._LinuxAcademy_-_Jenkins_and_Build_Automation.part1.rar

http://youbookpdf.com/e-learning/260653-linux-academy-centos-7-enterprise-linux-server-update-professional-level.html

http://youbookpdf.com/e-learning/260656-linux-academy-postgresql-94-administration-professional-level.html

http://dlebook.me/index.php?do=search

http://www.downturk.biz/index.php?do=search

MARIADB

http://itfish.net/article/40768.html

How to setup MariaDB Galera Cluster 10.0 on CentOS

RHEL 7 APACHE CLUSTER

Configure High-Avaliablity Cluster on CentOS 7 / RHEL 7

tomcat 8

https://www.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_HowTo.html

http://documents.tips/documents/tomcatppt.html

Nginx

The company intends to replace HTTP with HTTPS in our Nginx environment, which requires forcing HTTP requests to jump to HTTPS. Searching online, the approaches basically boil down to the following.

Configure a rewrite:
rewrite ^(.*)$ https://$host$1 permanent;

Or, in the server block, return a 301:
return 301 https://$server_name$request_uri;

Or use an if inside the server block, for cases where multiple domain names are configured:

if ($host ~* "^rmohan.com$") {
    rewrite ^/(.*)$ https://dev.rmohan.com/ permanent;
}

Or, in the server block, configure error_page 497 https://$host$uri?$args;

With any of the methods above, visiting the website is no problem: the jump works fine.

After the configuration succeeded, we prepared to switch the APP interface addresses to HTTPS, and that is where a problem appeared.

Investigation found that the first GET request received its data, but POST requests carried nothing across. I configured $request_body in the nginx log, and the logged requests came in without parameters; looking further back in the logs, the POSTs had been turned into GETs. That was the key to the problem.

Searching online showed this was caused by the 301. Replacing it with a 307 solved the problem.

301 Moved Permanently
The requested resource has been permanently moved to a new location, and any future references to this resource should use one of the URIs returned by this response.

307 Temporary Redirect
The requested resource temporarily responds to requests from a different URI. Because such redirection is temporary, the client should continue to send future requests to the original address.

From the above we can see that a 301 is a permanent redirect while a 307 is a temporary redirect. That is the difference between 301 and 307 jumps.

The above may still not look very clear; put simply and directly, the difference is:

return 307 https://$server_name$request_uri;

307: for a POST request, it indicates that the request has not yet been processed, and the client should re-issue the POST to the URI in Location.

Changing to the 307 status code forces the redirected request to keep its original method.
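
A quick way to verify the behavior from the command line (the hostname is a placeholder; -I prints only the response headers):

# a 301 makes most clients re-issue a POST as a GET; a 307 keeps it a POST
curl -sI http://testapp.example.com/api | head -2
curl -sI -X POST http://testapp.example.com/api | head -2

With return 307 configured, both requests should come back with HTTP/1.1 307 and a Location: https://... header, and the client retries the POST as a POST.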

The following configuration lets 80 and 443 coexist:

Both listeners are configured in one server block, with ssl added on the 443 port. Comment out "ssl on;", as follows:

server {
    listen 80;
    listen 443 ssl;
    server_name testapp.***.com;
    root /data/vhost/test-app;
    index index.html index.htm index.shtml index.php;
    #ssl on;
    ssl_certificate /usr/local/nginx/https/***.crt;
    ssl_certificate_key /usr/local/nginx/https/***.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    error_page 404 /404.html;
    location ~ [^/]\.php(/|$) {
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        #include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    access_log /data/logs/nginx/access.log access;
    error_log /data/logs/nginx/error.log crit;
}

The two-server-block version:

server {
    listen 80;
    server_name testapp.***.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}

server {
    listen 443;
    server_name testapp.***.com;
    root /data/vhost/test-app;
    index index.html index.htm index.shtml index.php;
    ssl on;
    ssl_certificate /usr/local/nginx/https/***.crt;
    ssl_certificate_key /usr/local/nginx/https/***.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    error_page 404 /404.html;
    location ~ [^/]\.php(/|$) {
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        #include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    access_log /data/logs/nginx/access.log access;
    error_log /data/logs/nginx/error.log crit;
}

Finally, some SSL optimizations. Use what fits your business needs; you don't have to configure all of it:

ssl on;
ssl_certificate /usr/local/https/www.localhost.com.crt;
ssl_certificate_key /usr/local/https/www.localhost.com.key;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;    # allow only TLS protocols
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;    # cipher suite; this is CloudFlare's Internet-facing SSL cipher configuration
ssl_prefer_server_ciphers on;    # negotiate the best encryption algorithm from the server side
ssl_session_cache builtin:1000 shared:SSL:10m;    # cache sessions on the server; may use more server resources
ssl_session_tickets on;    # enable the browser's session-ticket cache
ssl_session_timeout 10m;    # SSL session expiration time
ssl_stapling on;    # OCSP stapling: cache the certificate's revocation status on the server to speed up the TLS handshake
ssl_stapling_verify on;    # verify OCSP stapling responses
resolver 8.8.8.8 8.8.4.4 valid=300s;    # DNS servers used to query the OCSP server
resolver_timeout 5s;    # DNS query timeout

How to Install and Configure AWS CLI - Linux, OS X, or Unix

AWS CLI (Command Line Interface)

The AWS Command Line Interface is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

Steps to Install AWS CLI (Linux, OS X, or Unix)

Prerequisites

1)Linux Machine

2) Python 2.6.5 or above

Here my machine Details

1)Fedora release 20 (Heisenbug) Linux rmohan 3.16.6-200.fc20.x86_64

2) [root@rmohan ~]# python --version
Python 2.7.5

Download the AWS CLI Bundled Installer

[root@rmohan tmp]# wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip

--2016-02-10 15:58:20-- https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.81.252
Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.81.252|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6678296 (6.4M) [application/zip]
Saving to: 'awscli-bundle.zip'

awscli-bundle.zip 100%[========================================================>] 6.37M 122KB/s in 57s

2016-02-10 15:59:18 (114 KB/s) - 'awscli-bundle.zip' saved [6678296/6678296]

Unzip the package.

[root@rmohan tmp]# unzip awscli-bundle.zip
Archive: awscli-bundle.zip
inflating: awscli-bundle/install
inflating: awscli-bundle/packages/argparse-1.2.1.tar.gz
inflating: awscli-bundle/packages/awscli-1.10.3.tar.gz
inflating: awscli-bundle/packages/botocore-1.3.25.tar.gz
inflating: awscli-bundle/packages/colorama-0.3.3.tar.gz
inflating: awscli-bundle/packages/docutils-0.12.tar.gz
inflating: awscli-bundle/packages/futures-3.0.4.tar.gz
inflating: awscli-bundle/packages/jmespath-0.9.0.tar.gz
inflating: awscli-bundle/packages/ordereddict-1.1.tar.gz
inflating: awscli-bundle/packages/pyasn1-0.1.9.tar.gz
inflating: awscli-bundle/packages/python-dateutil-2.4.2.tar.gz
inflating: awscli-bundle/packages/rsa-3.3.tar.gz
inflating: awscli-bundle/packages/s3transfer-0.0.1.tar.gz
inflating: awscli-bundle/packages/simplejson-3.3.0.tar.gz
inflating: awscli-bundle/packages/six-1.10.0.tar.gz
inflating: awscli-bundle/packages/virtualenv-13.0.3.tar.gz

Switch the Dir & Install the same.

[root@rmohan tmp]# cd awscli-bundle/

[root@rmohan awscli-bundle]# ll
total 8
-rwxr-xr-x 1 root root 4528 Feb 9 14:57 install
drwxr-xr-x 2 root root 340 Feb 10 16:01 packages

[root@rmohan awscli-bundle]# ./install -i /usr/local/aws -b /usr/local/bin/aws
Running cmd: /bin/python virtualenv.py --python /bin/python /usr/local/aws
Running cmd: /usr/local/aws/bin/pip install --no-index --find-links file:///tmp/awscli-bundle/packages awscli-1.10.3.tar.gz
You can now run: /usr/local/bin/aws --version

Verify the same.

[root@rmohan awscli-bundle]# aws --version
aws-cli/1.10.3 Python/2.7.5 Linux/3.16.6-200.fc20.x86_64 botocore/1.3.25

[rmohan@rmohan ~]$ aws help

Before going further, make sure AWS IAM is in place.

Now it's time to configure the AWS CLI.

[rmohan@rmohan ~]$ aws configure
AWS Access Key ID [None]: AKIA***********4OA
AWS Secret Access Key [None]: zdi*******************iZG2oej
Default region name [None]: ap-southeast-1a
Default output format [None]: json

After this, when I tested it, it threw the error below.

[rmohan@rmohan ~]$ aws ec2 describe-instances

Could not connect to the endpoint URL: "https://ec2.ap-southeast-1a.amazonaws.com/"

Fix: the region I configured is wrong; it should end with 1, not 1a.

Open the following file to fix it:

[rmohan@rmohan ~]$ vi .aws/config

[rmohan@rmohan ~]$ aws ec2 describe-instances
{
“Reservations”: []
}
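
Instead of editing the file by hand, the same fix can be applied with the CLI itself:

[rmohan@rmohan ~]$ aws configure set region ap-southeast-1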

Two important config files, just for reference:

[rmohan@rmohan ~]$ cat .aws/config
[default]
output = json
region = ap-southeast-1

[rmohan@rmohan ~]$ cat .aws/credentials
[default]
aws_access_key_id = AKIA***********4OA
aws_secret_access_key = zdi*******************iZG2oejT

[rmohan@rmohan ~]$ aws ec2 create-security-group --group-name rk-sg --description "test security group"
{
    "GroupId": "sg-33777***56"
}


[rmohan@rmohan ~]$ aws ec2 describe-instances --output table --region ap-southeast-1
-------------------
|DescribeInstances|
+-----------------+
More AWS CLI Command

Redis master-slave + KeepAlived achieve high availability

Redis is a non-relational database that we currently use frequently. It supports diverse data types, handles high concurrency well, and runs in memory, so reads and writes are fast. Given how well redis performs, how do we make sure redis can cope with going down while it is running?

So today I'm summarizing how to build a redis master-slave high-availability setup. I referred to several blog posts from online experts and found that many of them have pitfalls, so I'm sharing this in one place, hoping it helps everyone.

Redis Features
Redis is completely open source and free, complies with the BSD license, and is a high-performance key-value database.

Redis and other key-value cache products have the following three characteristics:

Redis supports the persistence of data, can keep the data in memory on the disk, and can be loaded again for use when restarted.

Redis not only supports simple key-value data, but also provides storage for data structures such as Strings, Maps (hashes), Lists, Sets, and Sorted Sets.

Redis supports data backup, that is, data backup in master-slave mode.

Redis advantages
Extremely fast - Redis can read 100K+ times/s and write at 80K+ times/s.

Rich data types - Redis supports Strings, Lists, Hashes, Sets, and Sorted Sets, with binary-safe operations.

Atomic - all Redis operations are atomic, and Redis also supports wrapping several operations into one atomic transaction.

Rich features - Redis also supports publish/subscribe, notifications, key expiration, and more.

Prepare the environment

CentOS 7 --> 172.16.81.140 --> Master Redis --> Master Keepalived

CentOS 7 --> 172.16.81.141 --> Slave Redis --> Backup Keepalived

VIP --> 172.16.81.139

Redis (normally 3.0 or later)

KeepAlived (installed directly from the repositories)

Redis compile and install

cd /opt
tar -zxvf redis-4.0.6.tar.gz
mv redis-4.0.6 redis
cd redis
make MALLOC=libc
make PREFIX=/usr/local/redis install

2, configure the redis startup script

vim /etc/init.d/redis

#!/bin/sh

#chkconfig: 2345 80 90
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

# redis port number
REDISPORT=6379
# path to the redis-server binary
EXE=/usr/local/redis/bin/redis-server
# path to the redis-cli binary
CLIEXE=/usr/local/redis/bin/redis-cli
# redis PID file path
PIDFILE=/var/run/redis_6379.pid
# path to the redis configuration file
CONF="/etc/redis/redis.conf"
# redis connection/authentication password
REDISPASSWORD=123456

function start () {
    if [ -f $PIDFILE ]
    then
        echo "$PIDFILE exists, process is already running or crashed"
    else
        echo "Starting Redis server..."
        $EXE $CONF &
    fi
}

function stop () {
    if [ ! -f $PIDFILE ]
    then
        echo "$PIDFILE does not exist, process is not running"
    else
        PID=$(cat $PIDFILE)
        echo "Stopping ..."
        $CLIEXE -p $REDISPORT -a $REDISPASSWORD shutdown
        while [ -x /proc/${PID} ]
        do
            echo "Waiting for Redis to shutdown ..."
            sleep 1
        done
        echo "Redis stopped"
    fi
}

function restart () {
    stop
    sleep 3
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo -e "\e[31m Please use $0 [start|stop|restart] as first argument \e[0m"
        ;;
esac

Grant execution permissions:

chmod +x /etc/init.d/redis

Add boot start:

chkconfig --add redis

chkconfig redis on

See: chkconfig --list | grep redis

For this test, the firewall and SELinux were turned off beforehand. In production, keeping the firewall enabled is recommended.

3, add redis command environment variables

# vi /etc/profile
# add the following line:
export PATH="$PATH:/usr/local/redis/bin"
# make the environment variable take effect:
source /etc/profile

4. Start the redis service

service redis start

# check that it started
ps -ef | grep redis
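
A quick liveness check at this point (using the password from the init script above):

redis-cli -a 123456 ping
# expected reply: PONG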

Note: perform the same operations on both servers to complete the redis installation. Once installation is done, we go straight into configuring the master-slave setup.

Redis master-slave configuration

Going back to the design above: the idea is to use 140 as the master, 141 as the slave, and 139 as the floating VIP address. Applications access the redis database through port 6379 on 139.

In normal operation, if master node 140 goes down, the VIP floats over to 141; 141 takes over as the master node, 140 becomes the slave node, and read/write operations continue.

When 140 comes back to normal, it synchronizes data with 141: 140's original data is not lost, and whatever was written to 141 in the meantime is synchronized across. After the data synchronization completes, the VIP returns to node 140 because of its higher priority, and 140 becomes the master again; 141 loses the VIP, becomes the slave node once more, and the initial state is restored, continuing to provide uninterrupted read/write service.

1, configure the redis configuration file

Master-140 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
requirepass 123456
slave-serve-stale-data yes
slave-read-only no

Slave-141 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
slaveof 172.16.81.140 6379
masterauth 123456
slave-serve-stale-data yes
slave-read-only no

2. Restart the redis service after the configuration is complete! Verify that the master and slave are normal.

Master node 140 terminal login test:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> INFO
.
.
.
# Replication
role:master
connected_slaves:1
slave0:ip=172.16.81.141,port=6379,state=online,offset=105768,lag=1
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105768
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:447
repl_backlog_histlen:105322

Login test from node 141 terminal:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> info
.
.
.
# Replication
role:slave
master_host:172.16.81.140
master_port:6379
master_link_status:up
master_last_io_seconds_ago:5
master_sync_in_progress:0
slave_repl_offset:105992
slave_priority:100
slave_read_only:0
connected_slaves:0
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105992
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:239
repl_backlog_histlen:105754
3, replication test

Master node 140

With that, the redis master-slave setup is complete!

KeepAlived configuration to achieve dual hot standby

Use Keepalived to implement the VIP, and handle failover and disaster recovery through the notify_master, notify_backup, notify_fault, and notify_stop scripts.

1, configure Keepalived configuration file

Master Keepalived Profile

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id redis01
}

vrrp_script chk_redis {
    script "/etc/keepalived/script/redis_check.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eno16777984
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        chk_redis
    }
    virtual_ipaddress {
        172.16.81.139
    }

    notify_master /etc/keepalived/script/redis_master.sh
    notify_backup /etc/keepalived/script/redis_backup.sh
    notify_fault /etc/keepalived/script/redis_fault.sh
    notify_stop /etc/keepalived/script/redis_stop.sh
}

Backup Keepalived profile

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id redis02
}

vrrp_script chk_redis {
    script "/etc/keepalived/script/redis_check.sh"
    interval 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777984
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        chk_redis
    }
    virtual_ipaddress {
        172.16.81.139
    }

    notify_master /etc/keepalived/script/redis_master.sh
    notify_backup /etc/keepalived/script/redis_backup.sh
    notify_fault /etc/keepalived/script/redis_fault.sh
    notify_stop /etc/keepalived/script/redis_stop.sh
}

2, configure the script

Master KeepAlived — 140

Create a script directory: mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then
    echo $ALIVE
    exit 0
else
    echo $ALIVE
    exit 1
fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash
REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"
LOGFILE="/var/log/keepalived-redis-state.log"
sleep 15
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Being master...." >> $LOGFILE 2>&1
echo "Run SLAVEOF cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
    echo "data rsync fail." >> $LOGFILE 2>&1
else
    echo "data rsync OK." >> $LOGFILE 2>&1
fi

sleep 10  # wait 10 seconds for the data sync to finish, then cancel replication

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
    echo "Run SLAVEOF NO ONE cmd fail." >> $LOGFILE 2>&1
else
    echo "Run SLAVEOF NO ONE cmd OK." >> $LOGFILE 2>&1
fi

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

sleep 15  # wait 15 seconds until the data has synced to the other side before switching master/slave roles

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

Slave KeepAlived — 141

mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then
    echo $ALIVE
    exit 0
else
    echo $ALIVE
    exit 1
fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[master]" >> $LOGFILE

date >> $LOGFILE

echo "Being master...." >> $LOGFILE 2>&1

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

sleep 10  # wait for the data sync to finish

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

sleep 15  # wait until the data has synced across before switching roles

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

systemctl start keepalived

systemctl enable keepalived

ps -ef | grep keepalived
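
To check the failover end to end, a simple drill (interface name and addresses as configured above): stop redis on the master and watch the VIP and the state log move.

# on master 140
service redis stop

# on backup 141: the VIP should appear within a few seconds
ip addr show eno16777984 | grep 172.16.81.139
tail -f /var/log/keepalived-redis-state.log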

JBoss Too Many Files Open Error


First of all you want to determine what file(s) remain open. I’m assuming your server runs Linux, so once you know JBoss’s PID

ps ax | grep something-that-makes-your-jboss-process-unique

you can do

ls -l /proc/jbosspid/fd

to get a nice list of files that are open at that instant.

What you’re going to do next depends a bit on what you see here (the sketch after this list helps put a number on it):

  1. you may just need to up the number of files the server can open a bit with ulimit (also look at system-wide limits on your server)
  2. maybe you spot a number of files your application forgot to close
  3. ….
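
Before deciding, it helps to quantify the problem and watch the trend; a small sketch (the pgrep pattern is a placeholder for whatever makes your JBoss process unique):

JPID=$(pgrep -f jboss | head -1)
ls /proc/$JPID/fd | wc -l              # current open-descriptor count
watch -n5 "ls /proc/$JPID/fd | wc -l"  # a steady climb under load suggests a leak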


Also the max limit is high

linux-server:~# cat /proc/sys/fs/file-max
202989

and the max ever occupied is well below the limit:
cat /proc/sys/fs/file-nr
6304 0 202989

all users return the same limit (including the jboss user who initiates the app server):
jboss@linux-server:/home$ ulimit -n

A way to look at this is to run the lsof command (as root); it will show you all the open file descriptors.

In order to fix that, edit /etc/security/limits.conf, add the following lines, and restart your JBoss.

jboss          soft    nofile          16384
jboss          hard    nofile          16384

(assuming your jboss is run by the “jboss” user)
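
After the edit, log in again as that user (limits.conf is applied at login) and confirm both limits took effect:

su - jboss -c "ulimit -Hn; ulimit -Sn"   # both should print 16384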


  • Settings in /etc/security/limits.conf take the following form:
    # vi /etc/security/limits.conf
    #<domain>        <type>  <item>  <value>
    
    *               -       core             <value>
    *               -       data             <value>
    *               -       priority         <value>
    *               -       fsize            <value>
    *               soft    sigpending       <value> eg:57344
    *               hard    sigpending       <value> eg:57444
    *               -       memlock          <value>
    *               -       nofile           <value> eg:1024
    *               -       msgqueue         <value> eg:819200
    *               -       locks            <value>
    *               soft    core             <value>
    *               hard    nofile           <value>
    @<group>        hard    nproc            <value>
    <user>          soft    nproc            <value>
    %<group>        hard    nproc            <value>
    <user>          hard    nproc            <value>
    @<group>        -       maxlogins        <value>
    <user>          hard    cpu              <value>
    <user>          soft    cpu              <value>
    <user>          hard    locks            <value>
    
    • <domain> can be:
      • a user name
      • a group name, with @group syntax
      • the wildcard *, for default entry
      • the wildcard %, can be also used with %group syntax, for maxlogin limit
    • <type> can have the two values:
      • “soft” for enforcing the soft limits
      • “hard” for enforcing hard limits
    • <item> can be one of the following:
      • core – limits the core file size (KB)
      • data – max data size (KB)
      • fsize – maximum filesize (KB)
      • memlock – max locked-in-memory address space (KB)
      • nofile – max number of open files
      • rss – max resident set size (KB)
      • stack – max stack size (KB)
      • cpu – max CPU time (MIN)
      • nproc – max number of processes
      • as – address space limit (KB)
      • maxlogins – max number of logins for this user
      • maxsyslogins – max number of logins on the system
      • priority – the priority to run user process with
      • locks – max number of file locks the user can hold
      • sigpending – max number of pending signals
      • msgqueue – max memory used by POSIX message queues (bytes)
      • nice – max nice priority allowed to raise to values: [-20, 19]
      • rtprio – max realtime priority
  • Exit and re-login from the terminal for the change to take effect.
  • More details can be found from below command:
# man limits.conf

Diagnostic Steps

  • To improve performance, we can safely set the process limit for the super-user root to unlimited. Edit the .bashrc file (vi /root/.bashrc) and add the following line:
# vi /root/.bashrc
ulimit -u unlimited