Building a Tomcat/WebLogic cluster with Docker

Installing the Tomcat image

Place the required JDK, Tomcat, and other software in the home directory, then start a container:
docker run -t -i -v /home:/opt/data --name mk_tomcat ubuntu /bin/bash

This command mounts the local /home directory to /opt/data inside the container; if the directory does not exist in the container, it is created automatically. Next comes the basic Tomcat configuration: after setting the JDK environment variables, place the Tomcat distribution in /opt/apache-tomcat, then edit /etc/supervisor/conf.d/supervisor.conf and add a tomcat entry.

[supervisord]
nodaemon=true

[program:tomcat]
command=/opt/apache-tomcat/bin/startup.sh

[program:sshd]
command=/usr/sbin/sshd -D
Then commit the container as a new image:
docker commit ac6474aeb31d tomcat

Create a new tomcat folder and create a new Dockerfile in it:

FROM mk_tomcat
EXPOSE 22 8080
CMD ["/usr/bin/supervisord"]

Create an image based on the Dockerfile:
docker build -t tomcat tomcat/

Installing the WebLogic image

The steps are basically the same as for Tomcat; the configuration files are shown here.
supervisor.conf
[supervisord]
nodaemon=true

[program:weblogic]
command=/opt/Middleware/user_projects/domains/base_domain/bin/startWebLogic.sh

[program:sshd]
command=/usr/sbin/sshd -D
Dockerfile
FROM weblogic
EXPOSE 22 7001
CMD ["/usr/bin/supervisord"]

Using the tomcat/weblogic images

Using storage

At startup, use the -v parameter:
-v, --volume=[] Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)

Mapping a local directory into the container keeps the host and the container in sync in real time, so to update the program or upload code we only need to update the directory on the physical host.

Implementation of tomcat and weblogic clusters

For Tomcat, you just need to start multiple containers:
docker run -d -p 204:22 -p 7003:8080 -v /home/data:/opt/data --name tm1 tomcat /usr/bin/supervisord
docker run -d -p 205:22 -p 7004:8080 -v /home/data:/opt/data --name tm2 tomcat /usr/bin/supervisord
docker run -d -p 206:22 -p 7005:8080 -v /home/data:/opt/data --name tm3 tomcat /usr/bin/supervisord

A note on the WebLogic configuration: WebLogic has the concept of a domain. If you want to deploy using the usual AdminServer + Managed Server approach, you need to write separate supervisord startup entries for the AdminServer and for each server. The advantages of doing this are:

  • You can use WebLogic clustering, synchronization, and other such features
  • To deploy a clustered application, you only need to install the application once on the cluster

The weaknesses are:

  • Docker configuration is complicated
  • There is no way to automatically scale the cluster's capacity. To add a node, you must first create it on the AdminServer, then configure a supervisord startup script in a new container, and then start the container

The other method is to install all the applications on the AdminServer and simply start more nodes when you need to scale out. Its advantages and disadvantages are the opposite of the previous method. (This method is recommended for development and test environments.)
docker run -d -p 204:22 -p 7001:7001 -v /home/data:/opt/data --name node1 weblogic /usr/bin/supervisord
docker run -d -p 205:22 -p 7002:7001 -v /home/data:/opt/data --name node2 weblogic /usr/bin/supervisord
docker run -d -p 206:22 -p 7003:7001 -v /home/data:/opt/data --name node3 weblogic /usr/bin/supervisord

With nginx as a load balancer in front, the configuration is complete.

Kubernetes basic concepts study notes

Kubernetes (often called K8s) is an open source system for automatically deploying, scaling, and managing containerized applications; it is an "open source version" of Borg, Google's internal cluster tool.

Kubernetes is currently recognized as the most advanced container cluster management tool. Since the release of version 1.0 it has developed rapidly and is fully supported by container-ecosystem vendors, including CoreOS, Rancher, and many others; many public cloud providers, such as Huawei, also offer container services built on secondary development of Kubernetes. It can be said that Kubernetes is the strongest competitor to Docker's own foray into container cluster management and service orchestration, Docker Swarm.

Kubernetes defines a set of building blocks that together provide a mechanism for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible so that they can support a wide variety of workloads. Extensibility is largely provided by the Kubernetes API, which is used by internal components as well as by extensions and containers that run on Kubernetes.

Because Kubernetes is a system made up of many components, it is still difficult to install and deploy. And since Kubernetes is developed by Google, many of its internal dependencies must be fetched from sites that are blocked in mainland China.

Of course, there are quick installation tools, such as kubeadm, the official Kubernetes tool for quickly installing and initializing a cluster. It is still in an incubating state, and a new version is released in sync with each Kubernetes release; for now, kubeadm cannot be used in a production environment.

1. Kubernetes architecture

(architecture diagram omitted)

2. Kubernetes features

Kubernetes features:

  • Simple : lightweight, simple, easy to use
  • Portable : public, private, hybrid, multi-cloud
  • Extensible : modular, pluggable, mountable, composable
  • Self-healing : automatic placement, automatic restart, automatic replication

In layman's terms:

  • Automated container deployment and replication
  • Expand or shrink the container scale at any time
  • Organize containers into groups and provide load balancing among them
  • Easily upgrade to new versions of application containers
  • Provide container resilience: if a container fails, replace it

3. Kubernetes terminology

Kubernetes terminology:

  • Master Node : The machine used to control the Kubernetes nodes; all task assignments originate here.
  • Minion Node : The machines that perform the requested, assigned tasks; the Kubernetes master is responsible for controlling these nodes.
  • Namespace : A namespace is an abstract collection of resources and objects. It can be used, for example, to divide the objects inside the system into different project groups or user groups. Common objects such as pods, services, replication controllers, and deployments belong to a namespace (default by default), while nodes, persistentVolumes, etc. do not belong to any namespace.
  • Pod : A container group deployed on a single node, containing one or more containers. A Pod is the smallest deployment unit that Kubernetes creates, schedules, and manages; all containers in the same Pod share its IP address, IPC, hostname, and other resources. Pods abstract the network and storage away from the underlying containers, so containers can be moved around the cluster more easily.
  • Deployment : Deployment is a new generation of objects for Pod management. Compared with Replication Controller, it provides more complete functionality and is easier and more convenient to use.
  • Replication Controller : The replication controller manages the lifecycle of pods. They ensure that a specified number of pods are running at any given time. They do this by creating or deleting pods.
  • Service : A service provides a single stable name and address for a group of Pods and separates the work definition from the Pods themselves. The Kubernetes service proxy automatically routes service requests to the correct Pod, no matter where it moves in the cluster, even if it has been replaced.
  • Label : Labels are key-value pairs used to organize and select groups of objects; they are used by every Kubernetes component.

In Kubernetes, all containers run inside a Pod. A Pod can hold a single container or multiple cooperating containers; in the latter case, the containers in the Pod are guaranteed to be placed on the same machine and can share resources. A Pod can also contain zero or more volumes; a volume can be private to one container or shared between the containers in the Pod. For each Pod the user creates, the system finds a machine that is healthy and has sufficient capacity and starts the corresponding containers there. If a container fails, it is automatically restarted by Kubernetes' node agent, the Kubelet. However, if the Pod or its machine fails, it is not automatically moved or restarted unless the user has also defined a Replication Controller.

Replicated sets of Pods can collectively make up an entire application, a microservice, or one layer of a multi-tier application. Once Pods are created, the system continuously monitors their health and the health of the machines they run on; if a Pod fails due to a software problem or a machine failure, the Replication Controller automatically creates a new Pod on a healthy machine.

Kubernetes supports a distinctive network model: it encourages a flat address space and does not dynamically allocate ports, instead allowing users to choose whatever ports suit them. To achieve this, it assigns each Pod its own IP address.

Kubernetes provides the Service abstraction, which supplies a stable IP address and DNS name for a dynamic set of Pods, such as the Pods that make up a microservice. The set is defined by a label selector, so any group of Pods can be targeted. When a container running in a Kubernetes Pod connects to this address, the connection is forwarded by a local proxy (the kube-proxy). The proxy runs on the source machine and forwards the connection to an appropriate back-end container, with the exact back end selected by a round-robin policy to balance load. The kube-proxy also tracks dynamic changes in the backing Pod set, such as a Pod being replaced by a new Pod on a new machine, so the service's IP and DNS name never need to change.

Each Kubernetes resource, such as a Pod, is identified by a URI and has a UID. The general components of a URI are the object's type (e.g., Pod), its name, and its namespace. For a given object type, every name is unique within its namespace; when a name is given without a namespace, the default namespace is assumed. The UID is unique across time and space.


More about Service:

  • A Service is an abstraction over application services. It provides load balancing and service discovery for applications via labels: the list of Pod IPs and ports that match the labels forms the endpoints, and kube-proxy load-balances the service IP across these endpoints.
  • Each Service is automatically assigned a cluster IP (a virtual address reachable only within the cluster) and a DNS name; other containers can access the service through them without needing to know anything about the back-end containers.

4. Kubernetes components

Kubernetes component:

  • Kubectl : The client command-line tool; it formats commands and sends them to kube-apiserver, serving as the operations entry point for the whole system.
  • Kube-apiserver : Serves as the control entry point for the whole system, providing its interfaces as REST APIs.
  • Kube-controller-manager : Performs background tasks for the whole system, including tracking node status, the number of Pods, the association between Pods and Services, and so on.
  • Kube-scheduler (distributes Pods to Nodes): Responsible for node resource management; accepts Pod-creation tasks from kube-apiserver and assigns each Pod to a node.
  • Etcd : Responsible for service discovery and configuration sharing between nodes.
  • Kube-proxy : Runs on each compute node and acts as the Pod network proxy; it periodically fetches Service information from etcd and applies the corresponding forwarding policy.
  • Kubelet : Runs on each compute node. As an agent, it accepts the Pods assigned to its node and manages their containers, periodically collects container status, and reports back to kube-apiserver.
  • DNS : An optional DNS service that creates a DNS record for each Service object so that all Pods can reach services by name.
  • flannel : Flannel is an overlay-network tool designed by the CoreOS team for Kubernetes; it has to be downloaded and deployed separately. When Docker starts, each host gets an IP address range for interacting with its containers. Left unmanaged, that range may be identical on every machine, and containers can then only communicate within their own host: you cannot reach Docker containers on other machines. Flannel re-plans IP address usage across all the nodes in the cluster so that containers on different nodes receive non-conflicting addresses within the same private network and can communicate with each other directly over IP.

The master node contains the components:

Docker
etcd
kube-apiserver
kube-controller-manager
kubelet
kube-scheduler

The minion node contains components:

Docker
kubelet
kube-proxy

Ansible playbooks

root@controller:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/root/.ssh/id_rsa.
Your public key has been saved in /home/root/.ssh/id_rsa.pub.
The key fingerprint is:
33:b8:4d:8f:95:bc:ed:1a:12:f3:6c:09:9f:52:23:d0 root@controller
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . E |
| o . . |
| . S * |
| + / * |
| . = @ . |
| + o |
| … |
+-----------------+
root@controller:~$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpEZIWC8UGJXA5uGDRj6vUd5KlwIvE0cat3opG4ZUajmrZU382PbdO3JG6DJa3beZXylYSOYeKtRVT9DxbeKgeTKQ4m8uamM81NAMRf0ZaiqSsZ9r56h1urlJCfD4y5nXwnUTvoBzZpTvTYwcevBzpNcI/VnBIgpcKQWJq11iHHrcmybbFreujgotHg1XUwCv9BdpXbPnA50XbUyX97uqCE9EzIk7WnSNpTtsmASxMPSWoHB9seOas1mq7UBKo7Xfu7qaJJLIEnMisBLKHPb0hM23BNV2SiacJEpHSB5eJKULtMGDej38HbmBsQI3u+lzcWSRppDIt6BvO05brW5C5 root@controller

Copy the key to the Ansible deployment server host for passwordless auth:

root@controller:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 controller
104.198.143.3 ansible
root@controller:~$ ssh ansible
Last login: Wed Jan 18 02:51:09 2017 from 115.113.77.105
[root@ansible ~]$

——————————-
[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
[web2]
192.168.1.21
——————————-
[root@ansible ~]$ vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.2 ansible.c.rich-operand-154505.internal ansible # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
192.168.1.23 ansible1
192.168.1.22 ansible2
104.198.143.3 ansible

——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}

——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n”,
“unreachable”: true
}
192.168.1.22| UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n”,
“unreachable”: true
}
——————————-
[root@ansible ~]$ ansible -m ping all -b
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -s -m ping all
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ vim playbook1.yml

- hosts: all
  tasks:
    - name: installing telnet package
      yum: name=telnet state=present

[root@ansible ~]$ ansible-playbook playbook1.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing telnet package] ***********************************************
fatal: [192.168.1.23]: FAILED! => {“changed”: true, “failed”: true, “msg”: “You need to be root to perform this command.\n”, “rc”: 1, “results”: [“Loaded plugins: fastestmirror\n”]}
fatal: [192.168.1.21]: FAILED! => {“changed”: true, “failed”: true, “msg”: “You need to be root to perform this command.\n”, “rc”: 1, “results”: [“Loaded plugins: fastestmirror\n”]}
to retry, use: --limit @/home/root/playbook1.retry

PLAY RECAP *********************************************************************
192.168.1.22: ok=1 changed=0 unreachable=0 failed=1
192.168.1.23 : ok=1 changed=0 unreachable=0 failed=1

[root@ansible ~]$ ansible-playbook playbook1.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing telnet package] ***********************************************
changed: [192.168.1.23]
changed: [192.168.1.21]

PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0
——————————-
[root@ansible ~]$ vim playbook2.yml

- hosts: all
  tasks:
    - name: installing nfs package
      yum: name=nfs-utils state=present

    - name: starting nfs service
      service: name=nfs state=started enabled=yes

[root@ansible ~]$ ansible-playbook playbook2.yml --syntax-check

playbook: playbook2.yml
[root@ansible ~]$ ansible-playbook playbook2.yml --check

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing nfs package] **************************************************
changed: [192.168.1.23]
changed: [192.168.1.21]

TASK [starting nfs service] ****************************************************
fatal: [192.168.1.21]: FAILED! => {“changed”: false, “failed”: true, “msg”: “Could not find the requested service \”‘nfs’\”: “}
fatal: [192.168.1.23]: FAILED! => {“changed”: false, “failed”: true, “msg”: “Could not find the requested service \”‘nfs’\”: “}
to retry, use: --limit @/home/root/playbook2.retry

PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=1
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=1

————————
[root@ansible ~]$ ansible-playbook playbook2.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing nfs package] **************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]

TASK [starting nfs service] ****************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=3 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=2 unreachable=0 failed=0
—————-

Run the same playbook again and the configuration stays the same, because playbooks are idempotent:
[root@ansible ~]$ ansible-playbook playbook2.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing nfs package] **************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [starting nfs service] ****************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=3 changed=0 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=0 unreachable=0 failed=0

=================================================
[root@ansible ~]$ ansible all -a "service nfs status" -b
192.168.1.23 | SUCCESS | rc=0 >>
? nfs-server.service – NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 12036 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 12035 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 12036 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service

192.168.1.22| SUCCESS | rc=0 >>
? nfs-server.service – NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 6738 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 6737 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 6738 (code=exited, status=0/SUCCESS)
Memory: 0B
CGroup: /system.slice/nfs-server.service
—————————————————
[root@ansible ~]$ vim playbook3.yml

- hosts: all
  become: yes
  tasks:
    - name: Install Apache.
      yum: name={{ item }} state=present
      with_items:
        - httpd
        - httpd-devel

    - name: Copy configuration files.
      copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: 0644
      with_items:
        - src: "httpd.conf"
          dest: "/etc/httpd/conf/httpd.conf"
        - src: "httpd-vhosts.conf"
          dest: "/etc/httpd/conf/httpd-vhosts.conf"

    - name: Make sure Apache is started now and at boot.
      service: name=httpd state=started enabled=yes
[root@ansible ~]$ ls -l
total 40
-rw-r--r--. 1 root root 11753 Jan 18 06:27 httpd.conf
-rw-r--r--. 1 root root   824 Jan 18 06:27 httpd-vhosts.conf

[root@ansible ~]$ ansible-playbook playbook3.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [Install Apache.] *********************************************************
changed: [192.168.1.23] => (item=[u’httpd’, u’httpd-devel’])
changed: [192.168.1.21] => (item=[u’httpd’, u’httpd-devel’])

TASK [Copy configuration files.] ***********************************************
ok: [192.168.1.21] => (item={u’dest’: u’/etc/httpd/conf/httpd.conf’, u’src’: u’httpd.conf’})
ok: [192.168.1.23] => (item={u’dest’: u’/etc/httpd/conf/httpd.conf’, u’src’: u’httpd.conf’})
ok: [192.168.1.21] => (item={u’dest’: u’/etc/httpd/conf/httpd-vhosts.conf’, u’src’: u’httpd-vhosts.conf’})
ok: [192.168.1.23] => (item={u’dest’: u’/etc/httpd/conf/httpd-vhosts.conf’, u’src’: u’httpd-vhosts.conf’})

TASK [Make sure Apache is started now and at boot.] ****************************
changed: [192.168.1.21]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=4 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=4 changed=2 unreachable=0 failed=0

[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
[web2]
192.168.1.21
[multi:children]
web
web2

[root@ansible ~]$ ansible multi -m ping
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
—————————————————

[root@ansible ~]$ ansible multi -a hostname
192.168.1.22| SUCCESS | rc=0 >>
ansible2.c.rich-operand-154505.internal

192.168.1.23 | SUCCESS | rc=0 >>
ansible1.c.rich-operand-154505.internal
—————————————-
[root@ansible ~]$ ansible multi -a 'free -m'
192.168.1.22| SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 229 1673 178 1796 2978
Swap: 0 0 0

192.168.1.23 | SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 329 265 16 3105 3031
Swap: 0 0 0
—————————————-
[root@ansible ~]$ ansible multi -a "du -h"
192.168.1.23 | SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-63512995370206
56K ./.ansible/tmp
56K ./.ansible
0 ./.puppetlabs/var
0 ./.puppetlabs/etc
0 ./.puppetlabs/opt/puppet
0 ./.puppetlabs/opt
0 ./.puppetlabs
80K .

192.168.1.22| SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-38086154108105
56K ./.ansible/tmp
56K ./.ansible
80K .
————————————-
[root@ansible ~]$ ansible multi -a 'service httpd status' -b
192.168.1.22| FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.

192.168.1.23 | FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.
————————————-
[root@ansible ~]$ ansible multi -a 'netstat -tlpn' -s
192.168.1.22| SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 15329/etcd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 6736/rpc.mountd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 994/sshd
tcp 0 0 0.0.0.0:34457 0.0.0.0:* LISTEN –
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1041/master
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:43009 0.0.0.0:* LISTEN 6724/rpc.statd
tcp6 0 0 :::10251 :::* LISTEN 15502/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 15329/etcd
tcp6 0 0 :::10252 :::* LISTEN 15463/kube-controll
tcp6 0 0 :::111 :::* LISTEN 6524/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 6736/rpc.mountd
tcp6 0 0 :::8080 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::22 :::* LISTEN 994/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1041/master
tcp6 0 0 :::36474 :::* LISTEN –
tcp6 0 0 :::2049 :::* LISTEN –
tcp6 0 0 :::54309 :::* LISTEN 6724/rpc.statd

192.168.1.23 | SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 12034/rpc.mountd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 990/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1032/master
tcp 0 0 0.0.0.0:4447 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:45185 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:9990 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:53095 0.0.0.0:* LISTEN 12018/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 11822/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 12034/rpc.mountd
tcp6 0 0 :::22 :::* LISTEN 990/sshd
tcp6 0 0 :::43255 :::* LISTEN –
tcp6 0 0 :::55927 :::* LISTEN 12018/rpc.statd
tcp6 0 0 ::1:25 :::* LISTEN 1032/master
tcp6 0 0 :::2049 :::* LISTEN –

[root@ansible ~]$ ansible multi -s -m yum -a "name=ntp state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed”
]
}
192.168.1.22| SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed”
]
}

————————————–
[root@ansible ~]$ ansible multi -s -m service -a "name=ntpd state=started enabled=yes"
————————————–
[root@ansible ~]$ ntpdate
18 Jan 04:57:35 ntpdate[4532]: no servers can be used, exiting
————————————–
[root@ansible ~]$ ansible multi -s -a "service ntpd stop"
192.168.1.23 | SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service

192.168.1.22| SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service
————————————–
[root@ansible ~]$ ansible multi -s -a "ntpdate -q 0.rhel.pool.ntp.org"
192.168.1.22| SUCCESS | rc=0 >>
server 138.236.128.112, stratum 2, offset -0.003149, delay 0.05275
server 71.210.146.228, stratum 2, offset 0.003796, delay 0.04633
server 128.138.141.172, stratum 1, offset -0.000194, delay 0.03752
server 69.89.207.199, stratum 2, offset -0.000211, delay 0.05193
18 Jan 04:58:22 ntpdate[10370]: adjust time server 128.138.141.172 offset -0.000194 sec

192.168.1.23 | SUCCESS | rc=0 >>
server 173.230.144.109, stratum 2, offset 0.000549, delay 0.06175
server 45.127.113.2, stratum 3, offset 0.000591, delay 0.06134
server 4.53.160.75, stratum 2, offset -0.000900, delay 0.04163
server 50.116.52.97, stratum 2, offset -0.001006, delay 0.05426
18 Jan 04:58:22 ntpdate[15477]: adjust time server 4.53.160.75 offset -0.000900 sec
————————————–
[root@ansible ~]$ ansible web -s -m yum -a "name=MySQL-python state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package MySQL-python.x86_64 0:1.2.5-1.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n MySQL-python x86_64 1.2.5-1.el7 base 90 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 90 k\nInstalled size: 284 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n Verifying : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n\nInstalled:\n MySQL-python.x86_64 0:1.2.5-1.el7 \n\nComplete!\n”
]
}

[root@ansible ~]$ ansible web -s -m yum -a "name=python-setuptools state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“python-setuptools-0.9.8-4.el7.noarch providing python-setuptools is already installed”
]
}

[root@ansible ~]$ ansible web -s -m easy_install -a "name=django state=present"
192.168.1.23 | SUCCESS => {
“binary”: “/bin/easy_install”,
“changed”: true,
“name”: “django”,
“virtualenv”: null
}
————————————–
[root@ansible ~]$ ansible web -s -m user -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“comment”: “”,
“createhome”: true,
“group”: 1004,
“home”: “/home/admin”,
“name”: “admin”,
“shell”: “/bin/bash”,
“state”: “present”,
“system”: false,
“uid”: 1003
}
[root@ansible ~]$ ansible web -s -m group -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“gid”: 1004,
“name”: “admin”,
“state”: “present”,
“system”: false
}
[root@ansible ~]$ ansible web -s -m user -a "name=first group=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“comment”: “”,
“createhome”: true,
“group”: 1004,
“home”: “/home/first”,
“name”: “first”,
“shell”: “/bin/bash”,
“state”: “present”,
“system”: false,
“uid”: 1004
}
[root@ansible ~]$ ansible web -a "tail /etc/passwd"
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
root:x:1000:1001::/home/root:/bin/bash
test:x:1001:1002::/home/test:/bin/bash
jboss:x:1002:1003::/home/jboss:/bin/bash
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
admin:x:1003:1004::/home/admin:/bin/bash
first:x:1004:1004::/home/first:/bin/bash

[root@ansible ~]$ ansible web -a "tail /etc/shadow"
192.168.1.23 | FAILED | rc=1 >>
tail: cannot open ‘/etc/shadow’ for reading: Permission denied

[root@ansible ~]$ ansible web -a "tail /etc/shadow" -b
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:!!:17176::::::
tss:!!:17176::::::
root:*:17178:0:99999:7:::
test:*:17178:0:99999:7:::
jboss:!!:17182:0:99999:7:::
rpc:!!:17184:0:99999:7:::
rpcuser:!!:17184::::::
nfsnobody:!!:17184::::::
admin:!!:17184:0:99999:7:::
first:!!:17184:0:99999:7:::
——————————–

[root@ansible ~]$ ansible web -m stat -a "path=/etc/hosts"
192.168.1.23 | SUCCESS => {
“changed”: false,
“stat”: {
“atime”: 1484635843.2532218,
“checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“ctime”: 1484203757.175483,
“dev”: 2049,
“executable”: false,
“exists”: true,
“gid”: 0,
“gr_name”: “root”,
“inode”: 240,
“isblk”: false,
“ischr”: false,
“isdir”: false,
“isfifo”: false,
“isgid”: false,
“islnk”: false,
“isreg”: true,
“issock”: false,
“isuid”: false,
“md5”: “10f391742a450d220ff00269216eff8a”,
“mode”: “0644”,
“mtime”: 1484203757.175483,
“nlink”: 1,
“path”: “/etc/hosts”,
“pw_name”: “root”,
“readable”: true,
“rgrp”: true,
“roth”: true,
“rusr”: true,
“size”: 297,
“uid”: 0,
“wgrp”: false,
“woth”: false,
“writeable”: false,
“wusr”: true,
“xgrp”: false,
“xoth”: false,
“xusr”: false
}
}

========================================
[root@ansible ~]$ ansible multi -m copy -a "src=/etc/hosts dest=/tmp/hosts"
192.168.1.23 | SUCCESS => {
“changed”: true,
“checksum”: “08aa54eecc8a866b53d38351ea72e5bb97718005”,
“dest”: “/tmp/hosts”,
“gid”: 1001,
“group”: “root”,
“md5sum”: “72ff7a2085a5186d0cab74f14bae1483”,
“mode”: “0664”,
“owner”: “root”,
“secontext”: “unconfined_u:object_r:user_home_t:s0”,
“size”: 369,
“src”: “/home/root/.ansible/tmp/ansible-tmp-1484717608.47-178441141048946/source”,
“state”: “file”,
“uid”: 1000
}
192.168.1.22| SUCCESS => {
“changed”: true,
“checksum”: “08aa54eecc8a866b53d38351ea72e5bb97718005”,
“dest”: “/tmp/hosts”,
“gid”: 1001,
“group”: “root”,
“md5sum”: “72ff7a2085a5186d0cab74f14bae1483”,
“mode”: “0664”,
“owner”: “root”,
“secontext”: “unconfined_u:object_r:user_home_t:s0”,
“size”: 369,
“src”: “/home/root/.ansible/tmp/ansible-tmp-1484717608.85-272034831244848/source”,
“state”: “file”,
“uid”: 1000
}
==========================================
[root@ansible ~]$ ansible multi -s -m fetch -a "src=/etc/hosts dest=/tmp"
192.168.1.23 | SUCCESS => {
“changed”: true,
“checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“dest”: “/tmp/192.168.1.23/etc/hosts”,
“md5sum”: “10f391742a450d220ff00269216eff8a”,
“remote_checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“remote_md5sum”: null
}
192.168.1.22| SUCCESS => {
“changed”: true,
“checksum”: “0b24c9ee4a888defdf6769d5e72f65761e882f1f”,
“dest”: “/tmp/192.168.1.21/etc/hosts”,
“md5sum”: “e2dd8ef8a5f58f35d7a3f3dce7f2f2bf”,
“remote_checksum”: “0b24c9ee4a888defdf6769d5e72f65761e882f1f”,
“remote_md5sum”: null
}
============================================
[root@ansible ~]$ ls -l /tmp/
total 16
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.21
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.23

[root@ansible ~]$ cat /tmp/192.168.1.23/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.3 ansible1.c.rich-operand-154505.internal ansible1 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
==========================================
[root@ansible ~]$ ansible multi -m file -a "dest=/tmp/test mode=644 state=directory"
192.168.1.23 | SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 1000
}
192.168.1.22| SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 1000
}
===========================================
[root@ansible ~]$ ansible multi -s -m file -a "dest=/tmp/test mode=644 owner=root state=directory"
192.168.1.22| SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 0
}
192.168.1.23 | SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 0
}
================================================
[root@ansible ~]$ ansible multi -s -B 3600 -a "yum -y update"
=================================================
[root@ansible ~]$ ansible 192.168.1.23 -s -a "tail /var/log/messages"
192.168.1.23 | SUCCESS | rc=0 >>
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Return async_wrapper task started.
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Starting module and watcher
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start watching 18305 (3600)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start module (18305)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Module complete (18305)
Jan 18 05:41:08 ansible1 ansible-async_wrapper.py: Done in kid B.
Jan 18 05:42:04 ansible1 systemd-logind: Removed session 180.
Jan 18 05:42:04 ansible1 systemd: Started Session 181 of user root.
Jan 18 05:42:04 ansible1 systemd-logind: New session 181 of user root.
Jan 18 05:42:04 ansible1 systemd: Starting Session 181 of user root.
=================================================
[root@ansible ~]$ ansible multi -s -m shell -a "tail /var/log/messages | grep ansible-command | wc -l"
192.168.1.23 | SUCCESS | rc=0 >>
0

192.168.1.22| SUCCESS | rc=0 >>
0
=================================
[root@ansible ~]$ ansible web -s -m git -a "repo=git://web.com/path/to/repo.git dest=/opt/myapp update=yes version=1.2.4"
192.168.1.23 | FAILED! => {
“changed”: false,
“failed”: true,
“msg”: “Failed to find required executable git”
}
[root@ansible ~]$ ansible web -s -m yum -a "name=git state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package git.x86_64 0:1.8.3.1-6.el7_2.1 will be installed\n–> Processing Dependency: perl-Git = 1.8.3.1-6.el7_2.1 for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Git) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Error) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: libgnome-keyring.so.0()(64bit) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Running transaction check\n—> Package libgnome-keyring.x86_64 0:3.8.0-3.el7 will be installed\n—> Package perl-Error.noarch 1:0.17020-2.el7 will be installed\n—> Package perl-Git.noarch 0:1.8.3.1-6.el7_2.1 will be installed\n—> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n git x86_64 1.8.3.1-6.el7_2.1 base 4.4 M\nInstalling for dependencies:\n libgnome-keyring x86_64 3.8.0-3.el7 base 109 k\n perl-Error noarch 1:0.17020-2.el7 base 32 k\n perl-Git noarch 1.8.3.1-6.el7_2.1 base 53 k\n perl-TermReadKey x86_64 2.30-20.el7 base 31 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package (+4 Dependent packages)\n\nTotal download size: 4.6 M\nInstalled size: 23 M\nDownloading packages:\n——————————————————————————–\nTotal 12 MB/s | 4.6 MB 00:00 \nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:perl-Error-0.17020-2.el7.noarch 1/5 \n Installing : libgnome-keyring-3.8.0-3.el7.x86_64 2/5 \n Installing : perl-TermReadKey-2.30-20.el7.x86_64 3/5 \n Installing : git-1.8.3.1-6.el7_2.1.x86_64 4/5 \n Installing : perl-Git-1.8.3.1-6.el7_2.1.noarch 5/5 \n Verifying : perl-Git-1.8.3.1-6.el7_2.1.noarch 1/5 \n Verifying : perl-TermReadKey-2.30-20.el7.x86_64 2/5 \n Verifying : libgnome-keyring-3.8.0-3.el7.x86_64 3/5 \n Verifying : 1:perl-Error-0.17020-2.el7.noarch 4/5 \n Verifying : git-1.8.3.1-6.el7_2.1.x86_64 5/5 \n\nInstalled:\n git.x86_64 0:1.8.3.1-6.el7_2.1 \n\nDependency Installed:\n libgnome-keyring.x86_64 0:3.8.0-3.el7 perl-Error.noarch 1:0.17020-2.el7 \n perl-Git.noarch 0:1.8.3.1-6.el7_2.1 perl-TermReadKey.x86_64 0:2.30-20.el7 \n\nComplete!\n”
]
}

Managing files in ansible

[root@controller ~]$ ansible localhost --list-hosts

hosts (1):
localhost
——————————————-
[root@controller ~]$ vim file.yaml

- name: creating a file
  hosts: localhost
  tasks:
    - file:
        path: /home/root/sample
        state: touch
        owner: root
        group: root
        mode: 0755

——————————————-
[root@controller ~]$ ansible-playbook --syntax-check file.yaml

playbook: file.yaml

—————————————-
[root@controller ~]$ ansible-playbook -C file.yaml

PLAY [creating a file] *********************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [file] ********************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

——————————————-
The file was not actually created, because -C runs the playbook in check (dry-run) mode:
[root@controller ~]$ stat /home/root/sample
stat: cannot stat '/home/root/sample': No such file or directory
——————————————-
[root@controller ~]$ ansible-playbook file.yaml

PLAY [creating a file] *********************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [file] ********************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
——————————————-
[root@controller ~]$ stat /home/root/sample
File: '/home/root/sample'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: fd00h/64768d Inode: 51300626 Links: 1
Access: (0333/--wx-wx-wx) Uid: ( 1000/ root) Gid: ( 1000/ root)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2017-01-26 06:39:51.607557462 +0530
Modify: 2017-01-26 06:39:51.607557462 +0530
Change: 2017-01-26 06:39:51.608557462 +0530
Birth: –
——————————————-
[root@controller ~]$ ls -l /home/root/sample
--wx-wx-wx. 1 root root 0 Jan 26 06:39 /home/root/sample
——————————————-
[root@controller ~]$ ansible-playbook file.yaml

PLAY [creating a file] *********************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [file] ********************************************************************
changed: [localhost]

TASK [stat] ********************************************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
——————————————–
[root@controller ~]$ vim file.yaml

- name: creating a file
  hosts: localhost
  tasks:
    - file:
        path: /home/root/sample
        state: touch
        owner: root
        group: root
        mode: 0755
    - stat: path=/home/root/sample
      register: file_status
    - debug: msg="File exists"
      when: file_status.stat.exists == true
——————————————–
[root@controller ~]$ ansible-playbook file.yaml

PLAY [creating a file] *********************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [file] ********************************************************************
changed: [localhost]

TASK [stat] ********************************************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
“msg”: “File exists”
}

PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0

——————————————–
[root@controller ~]$ vim file.yaml

- name: creating a file
  hosts: localhost
  tasks:
    - file:
        path: /home/root/sample
        state: touch
        owner: root
        group: root
        mode: 0755
    - stat: path=/home/root/sample
      register: file_status
    - debug: msg="File exists"
      when: file_status.stat.exists == true
    - copy: content="this is for test purpose\n" dest="/home/root/sample"
      when: file_status.stat.exists == true

[root@controller ~]$ ansible-playbook -C file.yaml

PLAY [creating a file] *********************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [file] ********************************************************************
changed: [localhost]

TASK [stat] ********************************************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
“msg”: “File exists”
}

TASK [copy] ********************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0

Ansible vault

[root@controller ~]# ansible-vault create mohan.yml
Vault password:

[root@controller ~]# cat mohan.yml
$ANSIBLE_VAULT;1.1;AES256
38623235633039636166356162393064363936303461306536386237663032383932656164633131
6132633132376266313863366164396535386539666562310a306562383834343431633536353332
63303935623030393261373030343366323361653238306531356434333538613236303738653730
3935313536396361640a343836366434613638316538333165366161306166396564353635383831
30636536366462646362373432396234383432376437633764616239393938366137

[root@controller ~]# ansible-vault view mohan.yml
Vault password:
hai how are you

[root@controller ~]# ansible-vault edit mohan.yml
Vault password:

[root@controller ~]# ansible-vault rekey mohan.yml
Vault password:
New Vault password:
Confirm New Vault password:
Rekey successful

[root@controller ~]# ansible-playbook mohan.yml
ERROR! Decryption failed on /root/mohan.yml

[root@controller ~]# ansible-playbook --ask-vault-pass mohan.yml
Vault password:

[root@controller ~]# ansible-vault encrypt 4.yml
Vault password:
Encryption successful
[root@controller ~]# ansible-playbook 4.yml
ERROR! Decryption failed on /root/4.yml
[root@controller ~]# ansible-playbook --ask-vault-pass 4.yml
Vault password:

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
=======================================
[root@controller ~]# ansible-vault decrypt 4.yml
Vault password:
Decryption successful

[root@controller ~]# ansible-playbook 4.yml

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
========================================
[root@controller ~]# ansible-vault decrypt 4.yml --output=4-decrypted.yml
Vault password:
Decryption successful
[root@controller ~]# cat 4.yml
$ANSIBLE_VAULT;1.1;AES256
65386464336638663338363031383263633764393937633839366565336166303935363733616663
6636633734663766353365613063396565383662366539390a613765626239363361386165653763
35353730633164346634666339616232343830643434393563363662386633393830313538306130
3366386539313535380a643639613765653235363235383463663735663639333232353230343664
37346532353963663636303833653230333661333735393339336264303136636165366365326538
39613537353638373464333633353235356538653333643864623063333534303766373039373031
383436656161333330373162633966386639
[root@controller ~]# cat 4-decrypted.yml
- hosts: localhost
  vars:
    user: joe
    home: /home/joe
=======================================
[root@controller ~]# vim vault-pass
redhat_123

[root@controller ~]# ansible-vault decrypt --vault-password-file=vault-pass sample.yaml

[root@controller ~]# ansible-vault create --vault-password-file=vault-pass example.yaml

- name: installing packages
  hosts: localhost
  tasks:
    - yum: name=elinks state=latest

[root@controller ~]# cat example.yaml
$ANSIBLE_VAULT;1.1;AES256
37653137363538613630333039386164353232636333306430336333316164363566373464316634
3636336637336535633039323631313038643366393534650a393762643936343566313638646662
64663338376162643463343232396361383739303635383438323831386539303337623764316537
3961653566353362330a393530333638356663303264326331386166613330323539343436396632
38636630393133393064623437663133376233663934346666313162363838386532626337646134
39316561633530336663663238333766353861666339353134663930663839393532396334643062
64393233653834646463366432633965663432313431656236386664643461386365613363616432
35306537656335316561393966656362393634373237313737623164633836663561363636646332
32663839343461323832626263363762313730346333353034383539333332366463

[root@controller ~]# ansible-playbook example.yaml
ERROR! Decryption failed on /root/example.yaml

[root@controller ~]# ansible-playbook --vault-password-file=vault-pass --syntax-check example.yaml

playbook: example.yaml

[root@controller ~]# ansible-playbook --vault-password-file=vault-pass example.yaml

PLAY [installing packages] *****************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [yum] *********************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
=========================================

[root@controller ~]# vim newpassword
mohan0494

[root@controller ~]# ansible-vault rekey --new-vault-password-file=newpassword example.yaml
Vault password:
Rekey successful

[root@controller ~]# ansible-playbook --vault-password-file=newpassword example.yaml

PLAY [installing packages] *****************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [yum] *********************************************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

[root@controller ~]# ansible-vault decrypt --vault-password-file=newpassword example.yaml
Decryption successful
[root@controller ~]# cat example.yaml

- name: installing packages
  hosts: localhost
  tasks:
    - yum: name=elinks state=latest

[root@controller ~]# ansible-vault encrypt --vault-password-file=newpassword example.yaml
Encryption successful
[root@controller ~]# cat example.yaml
$ANSIBLE_VAULT;1.1;AES256
64643166623463393937376165333034363635653931663839633836316239333035396161663165
6461613861373731383431303839383839316264366538350a373839396533633333313364626330
31336538356365666537373438306165333534363533636436636666656162346530643539316261
3431343233373135620a336163633164633961353339303433396639373735663038306262613639
65666130303539613131663666313361646538643038643834383966633364353162626233356132
64333930643531343066383164393238383639343764376661303734336532393431633534366238
62313537623834376535643830353361633336613563363535363931343934303739643039386532
62653335373632633465633063653564616430393234343862383437353732383231656138386165
38363135656434363239383065306136653863363334376230393739643539616463

JINJA2 templates in ansible

[root@workstation ~]# ansible -m ping all
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}

=========================================
[root@workstation ~]# vim motd.j2
this is {{ ansible_hostname }}.
today's date is {{ ansible_date_time.date }}
you can ask {{ system_owner }} for access
==========================================
[root@workstation ~]# vim motd.yaml

- hosts: all
  vars:
    system_owner: root
  tasks:
    - template:
        src: motd.j2
        dest: /etc/motd
        owner: root
        group: root
        mode: 777

(Note: an unquoted mode: 777 is read by YAML as the decimal integer 777, which is octal 01411; that is why the -vv output below reports "mode": "01411". Quote the mode, e.g. '0644', to get the permissions you intend.)
========================================
[root@workstation ~]# cat /etc/motd
========================================
[root@workstation ~]# ansible-playbook --syntax-check motd.yaml

playbook: motd.yaml

[root@workstation ~]# ansible-playbook -C motd.yaml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.22]
ok: [192.168.1.23]

TASK [template] ****************************************************************
changed: [192.168.1.22]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22 : ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0
=======================================
[root@workstation ~]# ansible-playbook motd.yaml -vv
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: motd.yaml ************************************************************
1 plays in motd.yaml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.22]
ok: [192.168.1.23]

TASK [template] ****************************************************************
task path: /root/motd.yaml:6
changed: [192.168.1.22] => {“changed”: true, “checksum”: “86167eb96fcd968f9ce2403d02fd99de5454cffa”, “dest”: “/etc/motd”, “gid”: 0, “group”: “root”, “md5sum”: “35128946a8173580f8c13e86523be1a1”, “mode”: “01411”, “owner”: “root”, “secontext”: “system_u:object_r:etc_t:s0”, “size”: 84, “src”: “/root/.ansible/tmp/ansible-tmp-1485712899.32-75249962942078/source”, “state”: “file”, “uid”: 0}
changed: [192.168.1.23] => {“changed”: true, “checksum”: “5619e003a6b2617b359b8a369c3f404035ab0b18”, “dest”: “/etc/motd”, “gid”: 0, “group”: “root”, “md5sum”: “4429a262ca0c9cca26b2bc66482abfbc”, “mode”: “01411”, “owner”: “root”, “secontext”: “system_u:object_r:etc_t:s0”, “size”: 80, “src”: “/root/.ansible/tmp/ansible-tmp-1485712899.32-157949107190502/source”, “state”: “file”, “uid”: 0}

PLAY RECAP *********************************************************************
192.168.1.22 : ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0
==============================================
[root@servera ~]# logout
Connection to 192.168.1.23 closed.
root@rmohan:~$ ssh root@192.168.1.23
root@192.168.1.23’s password:
Last login: Sun Jan 29 23:29:06 2017 from workstation
this is servera.
today’s date is 2017-01-29
you can ask root for access
==============================================
root@rmohan:~$ ssh root@192.168.1.22
root@192.168.1.22’s password:
Last login: Sun Jan 29 23:31:40 2017 from workstation.example.com
this is workstation.
today’s date is 2017-01-29
you can ask root for access
================================================

[root@workstation ~]# vim inventory
[web]
node1.rmohan.com
node2.rmohan.com

[root@workstation ~]# rm -rf /etc/motd

[root@servera ~]# rm -rf /etc/motd

[root@workstation ~]# ansible-playbook -i inventory --limit node1.rmohan.com motd.yaml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [node1.rmohan.com]

TASK [template] ****************************************************************
changed: [node1.rmohan.com]

PLAY RECAP *********************************************************************
node1.rmohan.com : ok=2 changed=1 unreachable=0 failed=0
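Here -i points the run at the alternate inventory file and --limit restricts execution to a subset of the hosts the play matches; a group name or pattern from the inventory works as well, e.g.:

[root@workstation ~]# ansible-playbook -i inventory --limit 'web' motd.yaml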

Creating a role in ansible

Creating a basic role for displaying the message of the day

[root@workstation ~]# vim /etc/ansible/roles/motd/main.yml

- name: use motd role playbook
  hosts: all

  roles:
    - motd

[root@workstation ~]# vim /etc/ansible/roles/motd/defaults/main.yml

system_owner: rmohan
[root@workstation ~]# vim /etc/ansible/roles/motd/tasks/main.yml

- name: deliver motd file
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: '0644'   # quoted octal; unquoted 777 would be read as decimal (01411)
[root@workstation ~]# vim /etc/ansible/roles/motd/templates/motd.j2
this is {{ ansible_hostname }}.
today's date is {{ ansible_date_time.date }}
you can ask {{ system_owner }} for access
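For orientation, the files created above follow the directory layout Ansible expects for a role (the playbook main.yml is kept inside the role directory in this walkthrough):

/etc/ansible/roles/motd/
├── main.yml              # the playbook that applies the role
├── defaults/main.yml     # lowest-precedence variables (system_owner)
├── tasks/main.yml        # the tasks the role runs
└── templates/motd.j2     # the template delivered by the template task

A complete role skeleton (including handlers/, vars/, meta/ and so on) can also be scaffolded with ansible-galaxy init <role-name>.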
[root@workstation ~]# ansible-playbook --syntax-check /etc/ansible/roles/motd/main.yml

playbook: /etc/ansible/roles/motd/main.yml
[root@workstation ~]# ansible-playbook -C /etc/ansible/roles/motd/main.yml

PLAY [use motd role playbook] **************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.22]
ok: [192.168.1.23]

TASK [motd : deliver motd file] ************************************************
changed: [192.168.1.23]
changed: [192.168.1.22]

PLAY RECAP *********************************************************************
192.168.1.22 : ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0

[root@workstation ~]# ansible-playbook /etc/ansible/roles/motd/main.yml

PLAY [use motd role playbook] **************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.22]
ok: [192.168.1.23]

TASK [motd : deliver motd file] ************************************************
changed: [192.168.1.23]
changed: [192.168.1.22]

PLAY RECAP *********************************************************************
192.168.1.22 : ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0

[root@workstation ~]# cat /etc/motd
this is workstation.
today's date is 2017-01-30
you can ask rmohan for access

HANDLERS & register in ansible

[root@workstation ~]# vim register.yaml

- name: checking the register module functionality
  hosts: localhost
  tasks:
    - command: ps
      register: output
    - debug: msg="{{ output.stdout }}"
====================================================
[root@workstation ~]# ansible-playbook --syntax-check register.yaml

playbook: register.yaml

[root@workstation ~]# ansible-playbook register.yaml

PLAY [checking the register module functionality] ******************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [command] *****************************************************************
changed: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": " PID TTY TIME CMD\n 9690 pts/0 00:00:00 bash\n 11851 pts/0 00:00:00 ansible-playboo\n 11894 pts/0 00:00:00 ansible-playboo\n 11904 pts/0 00:00:00 sh\n 11905 pts/0 00:00:00 python2\n 11906 pts/0 00:00:00 python2\n 11907 pts/0 00:00:00 ps"
}

PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
====================================================
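The registered variable holds everything the module returned, not just stdout: for command and shell tasks that includes output.rc, output.stderr and output.stdout_lines (stdout already split into a list of lines). A line-oriented dump is therefore as simple as:

- debug: var=output.stdout_lines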
[root@workstation ~]# vim register.yaml

- name: checking the register module functionality
  hosts: localhost
  tasks:
    - shell: "ps -aux | grep ansible"
      register: output
    - debug: msg="{{ output.stdout }}"
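shell is used here rather than command because the pipeline needs a shell to interpret it: the command module executes its argument directly, so the | and grep would be passed to ps as literal arguments instead of forming a pipe.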
[root@workstation ~]# ansible-playbook register.yaml

PLAY [checking the register module functionality] ******************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [command] *****************************************************************
changed: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "root 12337 42.0 5.4 338628 26212 pts/0 Rl+ 03:02 0:00 /usr/bin/python2 /usr/bin/ansible-playbook register.yaml\nroot 12380 0.0 5.9 345696 28664 pts/0 S+ 03:02 0:00 /usr/bin/python2 /usr/bin/ansible-playbook register.yaml\nroot 12390 0.0 0.2 113116 1204 pts/0 S+ 03:02 0:00 /bin/sh -c /usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1485725579.35-37718783810169/command.py; rm -rf \"/root/.ansible/tmp/ansible-tmp-1485725579.35-37718783810169/\" > /dev/null 2>&1 && sleep 0\nroot 12391 0.0 1.6 189176 7976 pts/0 S+ 03:02 0:00 /usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1485725579.35-37718783810169/command.py\nroot 12392 0.0 2.6 206164 12668 pts/0 S+ 03:02 0:00 /usr/bin/python2 /tmp/ansible_70E9iy/ansible_module_command.py\nroot 12393 0.0 0.2 113116 1200 pts/0 S+ 03:02 0:00 /bin/sh -c ps -aux | grep ansible\nroot 12395 0.0 0.1 112644 932 pts/0 S+ 03:02 0:00 grep ansible"
}

PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
====================================================

[root@workstation ~]# vim handler.yaml

- name: checking handler functioning
  hosts: localhost
  tasks:
    - name: starting apache service by adding html content
      copy:
        src: /root/index.html
        dest: /var/www/html/index.html
      notify:
        - start_apache
  handlers:
    - name: start_apache
      service:
        name: httpd
        state: started
[root@workstation ~]# vim /root/index.html
this is for checking the handler

[root@workstation ~]# ansible-playbook --syntax-check handler.yaml

playbook: handler.yaml

[root@workstation ~]# ansible-playbook handler.yaml

PLAY [checking handler functioning] ********************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [starting apache service by adding html content] **************************
changed: [localhost]

RUNNING HANDLER [start_apache] *************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0

[root@workstation ~]# ansible-playbook handler.yaml

PLAY [checking handler functioning] ********************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [starting apache service by adding html content] **************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
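The second run shows the key property of handlers: they fire only when a task that notifies them reports changed. Since the copy task found the file already in place (ok, not changed), start_apache was skipped. For configuration files the common pattern is a handler that restarts the service so edits are picked up; a minimal variant of the handler above:

handlers:
  - name: restart_apache
    service:
      name: httpd
      state: restarted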

Tags in ansible

[root@workstation ~]# vim tags.yaml

- name: installing postfix and stopping from starting service
  hosts: localhost
  tasks:
    - name: installing postfix package
      yum: name=postfix state=latest
      tags: packageonly
    - name: starting service
      service: name=postfix state=started
[root@workstation ~]# ansible-playbook --syntax-check tags.yaml

playbook: tags.yaml
[root@workstation ~]# ansible-playbook -C tags.yaml

PLAY [installing postfix and stopping from starting service] *******************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [installing postfix package] **********************************************
changed: [localhost]

TASK [starting service] ********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service postfix: cannot check nor set state"}
to retry, use: --limit @/root/tags.retry

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
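This failure is an artifact of check mode: -C only pretends to install postfix, so when the service task then probes the not-actually-installed postfix service it cannot determine its state. Running only the tagged task avoids the problem: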

[root@workstation ~]#
[root@workstation ~]# ansible-playbook -C tags.yaml --tags 'packageonly'

PLAY [installing postfix and stopping from starting service] *******************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [installing postfix package] **********************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

[root@workstation ~]# ansible-playbook tags.yaml --tags 'packageonly'

PLAY [installing postfix and stopping from starting service] *******************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [installing postfix package] **********************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

[root@workstation ~]# ansible-playbook tags.yaml --skip-tags 'packageonly'

PLAY [installing postfix and stopping from starting service] *******************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [starting service] ********************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
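To see which tags a playbook defines without running anything, ansible-playbook also accepts --list-tags:

[root@workstation ~]# ansible-playbook tags.yaml --list-tags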
