HTTPS TLS performance optimization details
HTTPS (HTTP over SSL/TLS) is a security-oriented HTTP channel; it can be understood as HTTP + SSL/TLS, that is, an SSL/TLS layer inserted beneath HTTP as a security foundation. TLS is the successor of SSL, and TLS 1.2 is currently the most widely deployed version.
TLS performance tuning
TLS is widely believed to slow services down, mainly because early CPUs were slow and only a few sites could afford cryptographic services. But today computing power is no longer the bottleneck for TLS. In 2010, Google enabled encryption by default on its e-mail service, and afterwards stated that SSL/TLS no longer requires costly computation:
In our front-end services, SSL/TLS calculations account for less than 1% of CPU load, less than 10KB of memory per connection, and less than 2% of network overhead.
1. Latency and connection management
The speed of network communication is determined by two major factors: bandwidth and latency.
Bandwidth: measures how much data can be sent per unit of time.
Latency: describes the time required for a message to travel from one end to the other.
Of the two, bandwidth is the secondary factor, because you can usually buy more of it at any time. Latency is unavoidable: it is a limit imposed on every network connection by the distance the data must travel.
Latency has a particularly large impact on TLS because its handshake adds up to two extra round trips during connection initialization.
1.1 TCP Optimization
Each TCP connection has a speed limit called the congestion window, which starts small and grows over time as reliable delivery is confirmed. This mechanism is called slow start.
Therefore every TCP connection starts slowly, and TLS connections fare worse, because the TLS handshake consumes precious initial connection bytes while the congestion window is still small. If the congestion window is large enough, slow start adds no extra delay. But if the handshake messages exceed the congestion window, the sender must split them: it sends one block, waits for an acknowledgment (one round trip), grows the congestion window, and then sends the rest.
1.1.1 Congestion Window Tuning
The initial speed limit is called the initial congestion window. RFC 6928 recommends setting it to 10 network segments (approximately 15 KB); earlier advice was to start with 2-4 segments.
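As a quick sanity check on the 15 KB figure, assuming a typical Ethernet TCP maximum segment size of 1460 bytes:

```shell
# 10 initial-congestion-window segments at a typical 1460-byte MSS
bytes=$((10 * 1460))
echo "$bytes bytes"   # → 14600 bytes, roughly the 15 KB cited above
```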
On Linux, you can change the initial congestion window of each route:
# ip route | while read p; do ip route change $p initcwnd 10; done
1.1.2 Preventing Slow-Start Restart
Slow start can also return on a connection that has carried no traffic for a while, cutting its speed, and the drop is very fast. On Linux, you can disable slow start on idle connections:
# sysctl -w net.ipv4.tcp_slow_start_after_idle=0
The setting can be made permanent by adding it to the /etc/sysctl.conf configuration file.
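For example, the persistent form in /etc/sysctl.conf is a single line (the exact file location can vary by distribution):

```
# /etc/sysctl.conf — keep slow-start restart disabled across reboots
net.ipv4.tcp_slow_start_after_idle = 0
```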
1.2 Long Connections
In most cases, the TLS performance impact is concentrated in the handshake at the start of each connection. An important optimization technique is to keep each connection alive and reuse it for as many requests as the server allows.
The current trend is event-driven web servers that handle all communication with a fixed thread pool (or even a single thread), which reduces the cost of each connection and the server's exposure to attack.
The disadvantage of long connections is that after the last HTTP request completes, the server keeps the connection open for some time before closing it. Although an idle connection does not consume many resources, it reduces the server's overall scalability. Long connections suit scenarios where clients send bursts of many requests.
When configuring large keep-alive timeouts, it is important to limit the number of concurrent connections to avoid server overload, and to tune the server by testing so that it runs within its capacity. If TLS is handled by OpenSSL, also make sure the server correctly sets the SSL_MODE_RELEASE_BUFFERS flag.
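As a sketch of what this looks like in practice, here is a hypothetical nginx fragment raising keep-alive limits while capping concurrency (the values are illustrative, not recommendations):

```nginx
# Illustrative nginx keep-alive tuning
events {
    worker_connections 10240;   # cap on concurrent connections per worker
}
http {
    keepalive_timeout  300s;    # keep idle connections open longer
    keepalive_requests 1000;    # allow many requests per connection
}
```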
1.3 HTTP/2.0
HTTP/2.0 is a binary protocol that provides features such as multiplexing and header compression to improve performance.
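For example, enabling HTTP/2 in nginx (1.9.5+) is a one-word change on the listen directive; the certificate paths below are placeholders:

```nginx
server {
    listen 443 ssl http2;                     # "http2" enables the binary protocol
    ssl_certificate     /path/to/chain.pem;   # placeholder path
    ssl_certificate_key /path/to/key.pem;     # placeholder path
}
```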
1.4 CDNs
Use a CDN to achieve world-class performance: geographically dispersed servers provide edge caching and traffic optimization.
The farther a user is from your server, the slower the network access, and in that case connection establishment is a big limiting factor. To be as close to end users as possible, a CDN operates a large number of geographically distributed servers, which reduce latency in two ways: edge caching and connection management.
1.4.1 Edge Cache
Because CDN servers are close to users, they can serve your files almost as if your own server were there.
1.4.2 Connection Management
If your content is dynamic and user-specific, it cannot be cached by the CDN for long. However, a good CDN helps with connection management even without any caching: it can eliminate most of the cost of establishing connections by maintaining long-lived connections of its own.
Most of the time spent establishing a connection is spent waiting. To minimize waiting, the CDN routes traffic through its own infrastructure to the point closest to the destination. Because those are the CDN's own fully controlled servers, it can keep internal long connections alive for a long time.
When using a CDN, the user connects to the nearest CDN node, so the distance is short and the network latency of the TLS handshake is low. Between the CDN and the origin server, existing long-distance connections are simply reused. In effect, the user obtains a working connection to the server after only a fast initial TLS handshake with the nearby CDN node.
2. TLS protocol optimization
With connection management out of the way, we can focus on the performance characteristics of TLS itself and tune the protocol for both security and speed.
2.1 Key Exchange
Apart from latency, the biggest cost of TLS is the CPU-intensive cryptographic operations used to negotiate security parameters. This part of the communication is called the key exchange. Its CPU cost largely depends on the server's chosen private key algorithm, private key length, and key exchange algorithm.
Key length
The difficulty of cracking a key depends on its length: the longer the key, the more secure it is, but also the more time encryption and decryption take.
Key Algorithms
Two key algorithms are currently available: RSA and ECDSA. For RSA, the recommended minimum length today is 2048 bits (112-bit security strength), with 3072 bits (128-bit security strength) to be deployed in the future. ECDSA is superior to RSA in both performance and security: a 256-bit ECDSA key (128-bit security strength) provides security comparable to 3072-bit RSA, with much better performance.
Key Exchange
Two key exchange algorithms are currently available: DHE and ECDHE; DHE is too slow and is not recommended. Key exchange performance depends on the length of the configured negotiation parameters. For DHE, the commonly used 1024- and 2048-bit parameters provide 80- and 112-bit security levels, respectively. For ECDHE, security and performance depend on the chosen **curve**; secp256r1 provides a 128-bit security level.
In practice, you cannot combine key algorithms and key exchange algorithms arbitrarily; you can only use the combinations the protocol specifies.
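In server configuration this usually means selecting cipher suites, each of which names a full combination. A hypothetical nginx fragment preferring ECDHE with secp256r1 (called prime256v1 in OpenSSL) might look like:

```nginx
ssl_protocols TLSv1.2;
# Each suite names key exchange + authentication + cipher together
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
ssl_ecdh_curve prime256v1;   # secp256r1
```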
2.2 Certificates
During a full TLS handshake, the server sends its certificate chain to the client for authentication. The length and correctness of that chain strongly influence handshake performance.
Using as few certificates as possible
Each certificate in the chain enlarges the handshake packets; too many certificates in the chain may overflow the TCP initial congestion window.
Including only required certificates
It is a common mistake to include unneeded certificates in the chain; each such certificate adds an extra 1-2 KB to the handshake.
Providing a complete certificate chain
The server must provide a complete certificate chain that leads up to a trusted root certificate.
Using elliptic curve certificate chains
Because ECDSA private keys use fewer bits, ECDSA certificates can be smaller.
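A small sketch with the openssl command line (assuming openssl is installed) makes the size difference concrete: generate a 2048-bit RSA key and a P-256 ECDSA key and compare their PEM sizes.

```shell
# Compare on-disk size of RSA-2048 vs ECDSA P-256 private keys
cd "$(mktemp -d)"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out rsa.key 2>/dev/null
openssl ecparam -genkey -name prime256v1 -noout -out ecdsa.key
wc -c rsa.key ecdsa.key   # the ECDSA key is several times smaller
```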
Avoiding binding too many domain names to the same certificate
Each additional domain name increases the certificate's size, which becomes significant for large numbers of names.
2.3 Revocation Checks
Although the revocation status of certificates changes constantly, and user agents differ widely in how they check revocation, the one thing a server can do is deliver revocation information as quickly as possible.
Using certificates with OCSP information
OCSP is designed for real-time queries: it lets the user agent request revocation information only for the certificate of the site it is visiting, and the query is brief and fast (one HTTP request). In contrast, a CRL is a list containing a large number of revoked certificates.
Using fast and reliable OCSP responders
OCSP responder performance differs between CAs; check a CA's historical OCSP responder performance before committing to it. Another criterion for choosing a CA is how quickly it updates its OCSP responders.
Deploying OCSP stapling
OCSP stapling is a protocol feature that allows revocation information (the entire OCSP response) to be included in the TLS handshake. With it enabled, the server hands the user agent everything it needs for the revocation check, so the user agent no longer has to open a separate connection to the CA's OCSP responder, which improves performance.
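As an illustration, OCSP stapling in nginx takes a few directives; the chain path below is a placeholder, and the resolver is whatever DNS server the host should use:

```nginx
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/chain.pem;  # CA chain used to verify the stapled response
resolver 8.8.8.8;                            # DNS used to reach the OCSP responder
```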
2.4 Protocol Compatibility
If your server does not support newer protocol features (e.g., TLS 1.2), the browser may need several attempts before it negotiates an encrypted connection with it. The best way to ensure good TLS performance is to keep the TLS protocol stack up to date so that it supports the newer protocol versions and extensions.
2.5 Hardware Acceleration
As CPU speeds have increased, software-based TLS implementations have become fast enough on ordinary CPUs to handle large numbers of HTTPS requests without specialized cryptographic hardware; still, installing an accelerator card may add some speed.
# Oracle XE environment for the scheduled backup job
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_HOME
ORACLE_SID=XE
export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH
export PATH
BKUP_DEST=/home/mohan/backups
# Delete dump files older than 10 days, then take a fresh export
find $BKUP_DEST -name 'backup*.dmp' -mtime +10 -exec rm {} \;
cd /home/mohan/backups && /u01/app/oracle/product/11.2.0/xe/bin/exp schema/password FILE=backup_`date +'%Y%m%d-%H%M'`.dmp STATISTICS=NONE
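The cleanup line above relies on find -mtime +10 matching files older than ten days. A self-contained sketch of that behavior, using a temporary directory and GNU touch to fake file ages:

```shell
# Demonstrate the 'delete dumps older than 10 days' pattern safely
BKUP_DEST="$(mktemp -d)"
touch -d '20 days ago' "$BKUP_DEST/backup_old.dmp"   # pretend this dump is 20 days old
touch "$BKUP_DEST/backup_new.dmp"                    # fresh dump
find "$BKUP_DEST" -name 'backup*.dmp' -mtime +10 -exec rm {} \;
ls "$BKUP_DEST"   # only backup_new.dmp remains
```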
startup nomount pfile=/u01/app/oracle/product/11.2.0/xe/dbs/initXE.ora
/
create database
maxinstances 1
maxloghistory 2
maxlogfiles 16
maxlogmembers 2
maxdatafiles 30
datafile '/u01/app/oracle/oradata/XE/system.dbf'
size 200M reuse autoextend on next 10M maxsize 600M
extent management local
sysaux datafile '/u01/app/oracle/oradata/XE/sysaux.dbf'
size 10M reuse autoextend on next 10M
default temporary tablespace temp tempfile
'/u01/app/oracle/oradata/XE/temp.dbf'
size 20M reuse autoextend on next 10M maxsize 500M
undo tablespace undotbs1 datafile '/u01/app/oracle/oradata/XE/undotbs1.dbf'
size 50M reuse autoextend on next 5M maxsize 500M
character set cl8mswin1251
national character set al16utf16
set time_zone='00:00'
controlfile reuse
logfile '/u01/app/oracle/oradata/XE/log1.dbf' size 50m reuse
, '/u01/app/oracle/oradata/XE/log2.dbf' size 50m reuse
, '/u01/app/oracle/oradata/XE/log3.dbf' size 50m reuse
user system identified by oracle
user sys identified by oracle
/
create tablespace users
datafile '/u01/app/oracle/oradata/XE/users.dbf'
size 100M reuse autoextend on next 10M maxsize 11G
extent management local
/
exit;
Create additional database in Oracle Express Edition
On my Windows machine I have Oracle 11g Express Edition installed and want to create a new database for testing, but Express Edition does not ship the DBCA utility. So let us discuss how to create an additional database manually in Oracle 11g XE on Windows.
S-1:
create directory
C:\Windows\system32>mkdir C:\oraclexe\app\oracle\admin\TEST
C:\Windows\system32>cd C:\oraclexe\app\oracle\admin\TEST
C:\oraclexe\app\oracle\admin\TEST>mkdir adump
C:\oraclexe\app\oracle\admin\TEST>mkdir bdump
C:\oraclexe\app\oracle\admin\TEST>mkdir dpdump
C:\oraclexe\app\oracle\admin\TEST>mkdir pfile
S-2:
Create new instance
C:\Windows\System32>oradim -new -sid test
Instance created.
S-3:
create pfile and Password file like below
C:\Windows\System32>orapwd file=C:\oraclexe\app\oracle\product\11.2.0\server\database\PWDTEST.ora password=oracle
Note: I copied the pfile (initXE.ora) from the XE database into the new database's pfile location, renamed it from "initXE.ora" to "initTEST.ora", and opened the file.
S-4:
Open the pfile “InitTEST.ora”
xe.__db_cache_size=411041792
xe.__java_pool_size=4194304
xe.__large_pool_size=4194304
xe.__oracle_base='C:\oraclexe\app\oracle'
xe.__pga_aggregate_target=432013312
xe.__sga_target=641728512
xe.__shared_io_pool_size=0
xe.__shared_pool_size=205520896
xe.__streams_pool_size=8388608
*.audit_file_dest='C:\oraclexe\app\oracle\admin\XE\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\oraclexe\app\oracle\oradata\XE\control.dbf'
*.db_name='XE'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\oraclexe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\oraclexe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=XEXDB)'
*.job_queue_processes=4
*.memory_target=1024M
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
S-5:
Change The parameter like below
*.audit_file_dest='C:\oraclexe\app\oracle\admin\TEST\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\oraclexe\app\oracle\oradata\TEST\control.dbf'
*.db_name='TEST'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\oraclexe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\oraclexe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TESTXDB)'
*.job_queue_processes=4
*.memory_target=1024M
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
S-6:
After modifying the pfile, start the new instance as follows:
C:\Windows\System32>set ORACLE_SID=TEST
C:\Windows\System32>sqlplus
SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 20 12:41:38 2017
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Enter user-name: / as sysdba
Connected to an idle instance.
S-7:
Start the database in nomount stage using pfile
SQL> startup nomount pfile='C:\oraclexe\app\oracle\admin\TEST\pfile\initTEST.ora'
ORACLE instance started.
Total System Global Area 644468736 bytes
Fixed Size 1385488 bytes
Variable Size 192941040 bytes
Database Buffers 444596224 bytes
Redo Buffers 5545984 bytes
S-8:
Create the database script
create database TEST
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'C:\oraclexe\app\oracle\oradata\TEST\REDO01.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 2 'C:\oraclexe\app\oracle\oradata\TEST\REDO02.LOG' SIZE 50M BLOCKSIZE 512
DATAFILE'C:\oraclexe\app\oracle\oradata\TEST\SYSTEM.DBF' size 100m autoextend on
sysaux datafile 'C:\oraclexe\app\oracle\oradata\TEST\SYSAUX.DBF' size 100m autoextend on
undo tablespace undotbs1 datafile 'C:\oraclexe\app\oracle\oradata\TEST\UNDOTBS1.DBF' size 100m autoextend on
CHARACTER SET AL32UTF8
;
S-9:
Execute the @createdatabase.sql file
SQL> @C:\oraclexe\app\oracle\CREATEDATABASE.SQL
Database created.
S-10:
Test our database name and instance status
SQL> select status from v$instance;
STATUS
OPEN
SQL> select * from V$version;
BANNER
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> select name from V$database;
NAME
TEST
S-11:
Execute this below two scripts
SQL> @C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin\catalog.sql
SQL> @C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin\catproc.sql
1) To view database
select * from v$database;
2) To view instance
select * from v$instance;
3) To view all users
select * from all_users;
4) To view table and columns for a particular user
select tc.table_name Table_name
,tc.column_id Column_id
,lower(tc.column_name) Column_name
,lower(tc.data_type) Data_type
,nvl(tc.data_precision,tc.data_length) Length
,lower(tc.data_scale) Data_scale
,tc.nullable nullable
FROM all_tab_columns tc
,all_tables t
WHERE tc.table_name = t.table_name
AND tc.owner = t.owner;
select owner from dba_tables
union
select owner from dba_views;
select username from dba_users;
SQL> SELECT TABLESPACE_NAME FROM USER_TABLESPACES;
Resulting in:
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE
DEV_DB
It is also possible to query the users in all tablespaces:
SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS;
Or within a specific tablespace (using my DEV_DB tablespace as an example):
SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS where DEFAULT_TABLESPACE = 'DEV_DB';
ROLES DEV_DB
DATAWARE DEV_DB
DATAMART DEV_DB
STAGING DEV_DB
EXP-00002: error in writing to export file
While exporting a table or schema using the exp/imp utility you may come across the error below.
Most of the time this error occurs because insufficient disk space is available, so confirm that space is available where you are taking the export dump and re-run the export.
[oracle@DEV admin]$ exp test/test@DEV tables=t1,t2,t3,t4 file=exp_tables.dmp log=exp_tables.log
Export: Release 9.2.0.8.0 – Production on Thu Sep 10 12:25:52 2015
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 – Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 – Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path …
. . exporting table t1 1270880 rows exported
. . exporting table t2 2248883 rows exported
. . exporting table t3 2864492 rows exported
. . exporting table t4
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
Docker is an open source application container engine based on the Go language and open sourced under the Apache 2.0 protocol.
Docker allows developers to package their applications and dependencies into a lightweight, portable container that can then be published to any popular Linux machine or virtualized.
Containers are completely sandboxed and do not have any interfaces with each other (similar to the iPhone’s app). More importantly, the container’s performance overhead is extremely low.
Docker application scenarios
- Web application automation packaged and released.
- Automated testing and continuous integration, release.
- Deploy and tune databases or other back-end applications in a service-oriented environment.
- Build or extend your existing OpenShift or Cloud Foundry platform from scratch to build your own PaaS environment.
Docker advantages
Docker allows developers to package their applications and dependencies into a portable container and publish it to any popular Linux machine. Docker has changed the way virtualization is done, so that developers can manage their work directly in Docker. Convenience is Docker's biggest advantage: tasks that used to take days or even weeks can be finished in seconds in a Docker container.
A Docker image contains the runtime environment and configuration, so Docker greatly simplifies deploying application instances of many kinds. Web applications, background services, database applications, big-data applications such as Hadoop clusters, message queues, and so on can all be packaged into a single image for deployment.
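For instance, a hypothetical minimal Dockerfile packaging a Python web application together with its dependencies (the file names here are assumptions):

```dockerfile
# Minimal illustrative image: application code + dependencies in one artifact
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```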
At the same time, the arrival of the cloud computing era means developers no longer have to buy high-end hardware in pursuit of performance; Docker has changed the mindset that high performance inevitably means high prices. Combining Docker with the cloud lets cloud resources be used more fully, solving the hardware management problem as well as changing the way virtualization is done.
1.Docker’s installation:
Docker supports the following CentOS versions:
- CentOS 7 (64-bit)
- CentOS 6.5 (64-bit) or later
1.2 Prerequisites
Currently, only the kernels shipped with the CentOS distribution are supported.
Docker runs on CentOS 7 and requires a 64-bit system with a 3.10 or higher kernel version.
Docker runs on CentOS-6.5 or higher versions of CentOS. It requires a 64-bit system with a kernel version of 2.6.32-431 or higher.
My operating system is CentOS 7. The official instructions for installing Docker CE on CentOS 7: https://docs.docker.com/install/linux/docker-ce/centos/
Docker packages and dependencies are included in the default CentOS-Extras source. You can update the yum source to install yum install, or you can choose to download the rpm package.
The official download address: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
1.3 Use wget command to download rpm package
Create a downloads folder and use the wget command to download
[root@sungeek downloads]# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm
[root@sungeek downloads]# yum install docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm
1.4 Use yum install to install directly
Alternatively, update the yum source with yum update and install via yum install:
[root@rmohan downloads]# yum update
[root@rmohan downloads]# yum -y install docker-io
[root@localhost downloads]# docker --version # view the version; wget is preferable, as you can pick the latest installation package
Docker version 1.13.1, build 94f4240/1.13.1
1.5 Start the Docker Process
[root@localhost downloads]# systemctl start docker
[root@localhost downloads]# systemctl status docker
[root@localhost downloads]# systemctl enable docker # start the docker service at boot
1.6 Verifying Successful Installation
Enter docker directly to list the usage of this command
Test-run the first container: hello-world
Since there is no hello-world image locally, Docker downloads the hello-world image and runs it inside a container.
2. The most commonly used commands
2.1 Use docker images to view images
[root@localhost downloads]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/hello-world latest e38bc07ac18e 2 months ago 1.85 kB
2.2 Use docker ps to view containers
docker ps is also one of the most commonly used commands; -a displays all containers, including those that are not running.
2.3 Use docker logs to view the container console output
Get the container’s log
docker logs [container]
Problem introduction
When an enterprise provides NFS, samba, or vsftpd services, the system must provide network transfer service 24/7. The maximum network transfer speed a single card provides is 100 MB/s, but when a large number of users access the server at once, the access pressure is very high and the network transfer rate becomes particularly slow.
Solution
Therefore, we can use bonding to achieve load balancing across multiple network cards, providing automatic failover and load balancing of the network. This guarantees network reliability and high-speed file transfer in day-to-day operations and maintenance work.
There are seven NIC bonding modes, numbered 0 through 6 (mode0, mode1, ..., mode6).
The commonly used bonding driver modes are the following three:
- mode0, balanced load: both network cards work and back each other up automatically, but the switch ports the server's cards connect to must be configured for port aggregation to support the bonding.
- mode1, automatic backup: only one network card works at a time; when it fails, the other takes over automatically.
- mode6, balanced load: both network cards work and back each other up automatically, and no switch-side support is required.
This article mainly describes the mode6 bonding driver mode, because it lets both network cards work at the same time, fails over automatically when one card dies, and needs no switch support, thereby ensuring reliable network transmission.
The following is the bond binding operation of the network card in RHEL 7 in the VMware virtual machine
- Add a second network card device to the virtual machine system and set both cards to the same network connection mode; only network cards in the same mode can be bonded, otherwise the two cards cannot exchange data.
- Configure the bonding parameters of the network card devices. Note that each physical card must be configured as a "slave" serving the "master" (bond) device, and should not have an IP address of its own. After the initialization below, the devices can participate in the bond.
cd /etc/sysconfig/network-scripts/
vim ifcfg-eno16777728 # Edit NIC 1 configuration file
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eno16777728
ONBOOT=yes
HWADDR=00:0C:29:E2:25:2D
USERCTL=no
MASTER=bond0
SLAVE=yes
vim ifcfg-eno33554968 # Edit NIC 2 configuration file
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eno33554968
ONBOOT=yes
HWADDR=00:0C:29:E2:25:2D
MASTER=bond0
SLAVE=yes
- Create a new network card device file ifcfg-bond0, and configure the IP address and other settings on it. This way, when users access the corresponding service, the two network card devices serve it together.
vim ifcfg-bond0 # Create a new ifcfg-bond0 configuration file in the current directory.
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
DEVICE=bond0
IPADDR=192.168.100.5
PREFIX=24
DNS=192.168.100.1
NM_CONTROLLED=no
- Modify the network card binding drive mode, here we use mode6 (balanced load mode)
vim /etc/modprobe.d/bond.conf # Configure the bonding driver mode
alias bond0 bonding
options bond0 miimon=100 mode=6
- Restart the network service so that the configuration takes effect
systemctl restart network
- Test: check the bond status and verify failover, for example:
cat /proc/net/bonding/bond0
1. Bonding technology
Bonding is a NIC-binding technique in Linux systems. It abstracts (binds) n physical NICs on a server into one logical network card, which can improve network throughput and provide redundancy, load balancing, and other benefits.
Bonding is implemented at the Linux kernel level as a kernel module (driver). To use it, the system must have this module; you can check with the modinfo command, and it is generally present.
# modinfo bonding
filename: /lib/modules/2.6.32-642.1.1.el6.x86_64/kernel/drivers/net/bonding/bonding.ko
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1
license: GPL
alias: rtnl-link-bond
srcversion: F6C1815876DCB3094C27C71
depends:
vermagic: 2.6.32-642.1.1.el6.x86_64 SMP mod_unload modversions
parm: max_bonds:Max number of bonded devices (int)
parm: tx_queues:Max number of transmit queues (default = 16) (int)
parm: num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm: num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm: miimon:Link check interval in milliseconds (int)
parm: updelay:Delay before considering link up, in milliseconds (int)
parm: downdelay:Delay before considering link down, in milliseconds (int)
parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm: mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm: primary:Primary network device to use (charp)
parm: primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm: ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm: min_links:Minimum number of available links before turning on carrier (int)
parm: xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm: arp_interval:arp interval in milliseconds (int)
parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm: arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm: arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm: all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm: resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm: packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm: lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
The seven working modes of bonding:
Bonding technology provides seven operating modes that need to be specified when used. Each has its own advantages and disadvantages.
- Balance-rr (mode=0) The default; provides high availability (fault tolerance) and load balancing, and requires switch configuration; each packet is delivered in round-robin order (balanced traffic distribution).
- Active-backup (mode=1) Only the high availability (fault-tolerance) function does not require switch configuration. In this mode, only one network card is working and only one mac address is available to the outside world. The disadvantage is that the port utilization rate is relatively low.
- Balance-xor (mode=2) is not commonly used
- Broadcast (mode=3) is not commonly used
- 802.3ad (mode=4) IEEE 802.3ad dynamic link aggregation, requires switch configuration, never used
- Balance-tlb (mode=5) is not commonly used
- Balance-alb (mode=6) has high availability (fault tolerance) and load balancing features and does not require switch configuration (traffic distribution to each interface is not particularly balanced)
There is plenty of information online; understand the characteristics of each mode and choose according to your needs. Modes 0, 1, 4, and 6 are the ones generally used.
2. Configuring bonding on CentOS 7
Environment:
System: Centos7
Network card: em1, em2
Bond0: 172.16.0.183
Load mode: mode6 (adaptive load balancing)
The two physical network cards em1 and em2 on the server are bound to a logical network card bond0, and the bonding mode selects mode6.
Note: The ip address is configured on bond0. The physical network card does not need to configure an ip address.
1, stop and disable the NetworkManager service
systemctl stop NetworkManager.service     # stop the NetworkManager service
systemctl disable NetworkManager.service  # keep NetworkManager from starting at boot
Note: NetworkManager must be stopped, otherwise it will interfere with the bonding setup.
2, loading the bonding module
modprobe --first-time bonding
If there is no output, the module loaded successfully. The message modprobe: ERROR: could not insert 'bonding': Module already in kernel means the module was already loaded.
You can also use lsmod | grep bonding to see if the module is loaded
lsmod | grep bonding
bonding 136705 0
3, create a configuration file for the bond0 interface
vim /etc/sysconfig/network-scripts/ifcfg-bond0
Modify it as follows, depending on your situation:
DEVICE=bond0
TYPE=Bond
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"
The line BONDING_OPTS="mode=6 miimon=100" above sets the working mode to mode6 (adaptive load balancing); miimon is the link-monitoring interval in milliseconds, here 100 ms. Adjust the mode and interval to your needs.
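The configuration above can also be generated by a short script. A minimal sketch, writing to a temporary directory rather than /etc/sysconfig/network-scripts so it can be run safely; the IP and mode values are the examples from this section:

```shell
#!/bin/sh
# Sketch: generate an ifcfg-bond0 file from variables. On a real system
# the target would be /etc/sysconfig/network-scripts/ifcfg-bond0.
OUT_DIR=$(mktemp -d)
BOND_IP=172.16.0.183
BOND_MODE=6

cat > "$OUT_DIR/ifcfg-bond0" <<EOF
DEVICE=bond0
TYPE=Bond
IPADDR=$BOND_IP
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=$BOND_MODE miimon=100"
EOF

grep '^BONDING_OPTS' "$OUT_DIR/ifcfg-bond0"
```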
4, modify the em1 interface configuration file
vim /etc/sysconfig/network-scripts/ifcfg-em1
Modify it as follows:
DEVICE=em1
USERCTL=no
ONBOOT=yes
MASTER=bond0   # must match the DEVICE value in the ifcfg-bond0 file above
SLAVE=yes
BOOTPROTO=none
5, modify the em2 interface configuration file
vim /etc/sysconfig/network-scripts/ifcfg-em2
Modify it as follows:
DEVICE=em2
USERCTL=no
ONBOOT=yes
MASTER=bond0   # must match the DEVICE value in the ifcfg-bond0 file above
SLAVE=yes
BOOTPROTO=none
6, test
Restart network service
systemctl restart network
View the interface status information of bond0 (if an error is reported here, the bond0 interface is most likely not up):
# cat /proc/net/bonding/bond0
Bonding Mode: adaptive load balancing // binding mode: currently alb (mode 6), i.e. high availability and load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up // interface status: up (MII stands for Media Independent Interface)
MII Polling Interval (ms): 100 // interface polling interval (here 100 ms)
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: em1 // slave interface: em1
MII Status: up // interface status: up (MII stands for Media Independent Interface)
Speed: 1000 Mbps // the port speed is 1000 Mbps
Duplex: full // full duplex
Link Failure Count: 0 // Number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d4 // Permanent MAC address
Slave queue ID: 0
Slave Interface: em2 // slave interface: em2
MII Status: up // interface status: up (MII stands for Media Independent Interface)
Speed: 1000 Mbps
Duplex: full // full duplex
Link Failure Count: 0 // Number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d5 // Permanent MAC address
Slave queue ID: 0
Check the interface information of the network through the ifconfig command
# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 172.16.0.183 netmask 255.255.255.0 broadcast 172.16.0.255
inet6 fe80::862b:2bff:fe6a:76d4 prefixlen 64 scopeid 0x20<link>
ether 84:2b:2b:6a:76:d4 txqueuelen 0 (Ethernet)
RX packets 11183 bytes 1050708 (1.0 MiB)
RX errors 0 dropped 5152 overruns 0 frame 0
TX packets 5329 bytes 452979 (442.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 84:2b:2b:6a:76:d4 txqueuelen 1000 (Ethernet)
RX packets 3505 bytes 335210 (327.3 KiB)
RX errors 0 dropped 1 overruns 0 frame 0
TX packets 2852 bytes 259910 (253.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 84:2b:2b:6a:76:d5 txqueuelen 1000 (Ethernet)
RX packets 5356 bytes 495583 (483.9 KiB)
RX errors 0 dropped 4390 overruns 0 frame 0
TX packets 1546 bytes 110385 (107.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 17 bytes 2196 (2.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 17 bytes 2196 (2.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Test high availability: we unplugged one of the network cables. The conclusions are:
- In mode=6, one packet was lost on failover. When the link was restored (cable plugged back in), about 5-6 packets were lost: the high-availability function works, but recovery drops noticeably more packets.
- In mode=1, one packet was lost on failover and essentially none when the link was restored: both failover and recovery work cleanly.
- mode6 is a very good load-balancing mode apart from the packet loss on recovery; if that is acceptable, use it. mode1 fails over and recovers quickly with essentially no loss or delay, but port utilization is low, because in this active-backup mode only one NIC works at a time.
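For scripted health checks, the fields shown in /proc/net/bonding/bond0 can be parsed directly. A sketch, using an inlined sample of the status file so it runs without a bonded interface:

```shell
#!/bin/sh
# Sketch: extract the active slave and link state from the bond status
# file. A sample of /proc/net/bonding/bond0 is inlined here; on a real
# system you would point STATUS at /proc/net/bonding/bond0.
STATUS=$(mktemp)
cat > "$STATUS" <<'EOF'
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up
EOF

active=$(awk -F': ' '/^Currently Active Slave/ {print $2}' "$STATUS")
link=$(awk -F': ' '/^MII Status/ {print $2; exit}' "$STATUS")
echo "active=$active link=$link"   # prints active=em1 link=up
```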
Third, CentOS 6 configuration bonding
Configuring bonding on CentOS 6 is basically the same as on CentOS 7 above, with a few differences in the configuration.
System: CentOS 6
NICs: em1, em2
bond0: 172.16.0.183
Load mode: mode1 (active-backup)   # here the load mode is 1, i.e. active/standby
1, close and stop the NetworkManager service
service NetworkManager stop
chkconfig NetworkManager off
Note: if NetworkManager is installed, stop it; if the command reports that it is not installed, simply skip this step.
2, loading the bonding module
modprobe --first-time bonding
3, create the bond0 interface configuration file
vim /etc/sysconfig/network-scripts/ifcfg-bond0
Modify the following (according to your needs):
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
4, register the bond0 interface with the kernel module loader
vi /etc/modprobe.d/bonding.conf
Add the following line:
alias bond0 bonding
5, edit the em1, em2 interface file
vim /etc/sysconfig/network-scripts/ifcfg-em1
Modify it as follows:
DEVICE=em1
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
vim /etc/sysconfig/network-scripts/ifcfg-em2
Modify it as follows:
DEVICE=em2
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
6, load the module, restart the network and test
modprobe bonding
service network restart
Check the status of the bond0 interface
cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)   # the current load mode of the bond0 interface is active/backup
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 84:2b:2b:6a:76:d4
Slave queue ID: 0
Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5
Slave queue ID: 0
Use the ifconfig command to view the interface status. You will find that in mode=1 all slaves share the same MAC address, i.e. a single MAC address is presented externally.
ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet6 fe80::862b:2bff:fe6a:76d4 prefixlen 64 scopeid 0x20<link>
ether 84:2b:2b:6a:76:d4 txqueuelen 0 (Ethernet)
RX packets 147436 bytes 14519215 (13.8 MiB)
RX errors 0 dropped 70285 overruns 0 frame 0
TX packets 10344 bytes 970333 (947.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 84:2b:2b:6a:76:d4 txqueuelen 1000 (Ethernet)
RX packets 63702 bytes 6302768 (6.0 MiB)
RX errors 0 dropped 64285 overruns 0 frame 0
TX packets 344 bytes 35116 (34.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 84:2b:2b:6a:76:d4 txqueuelen 1000 (Ethernet)
RX packets 65658 bytes 6508173 (6.2 MiB)
RX errors 0 dropped 6001 overruns 0 frame 0
TX packets 1708 bytes 187627 (183.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 31 bytes 3126 (3.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 31 bytes 3126 (3.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Perform a high availability test, unplug one of the cables to see the packet loss and delay, and then plug in the network cable (analog recovery), and then watch the packet loss and delay.
OpenLDAP server setup on CentOS 7
systemctl stop firewalld.service
setenforce 0
sed -i s/^SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.70 master.apple.com master
192.168.1.71 slave.apple.com slave
192.168.1.73 client1.apple.com client1
192.168.1.74 client2.apple.com client2
yum -y install openldap compat-openldap openldap-clients openldap-servers openldap-servers-sql openldap-devel migrationtools vim
cd /etc/openldap/slapd.d
rm -rf /etc/openldap/slapd.d/*
cp /usr/share/openldap-servers/slapd.ldif /root/ldap/
Set OpenLDAP admin password.
# generate an encrypted password
slappasswd -s redhat -n > /etc/openldap/passwd
slappasswd -h {SSHA} -s ldappassword
[root@ldap ~]# slappasswd
New password:
Re-enter new password:
[root@master ldap]# cat slapd.ldif
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
#
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
#
# TLS settings
#
olcTLSCACertificatePath: /etc/openldap/certs
olcTLSCertificateFile: "OpenLDAP Server"
olcTLSCertificateKeyFile: /etc/openldap/certs/password
#
# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#
#olcReferral: ldap://root.openldap.org
#
# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 64-bit encryption for simple bind
#
#olcSecurity: ssf=1 update_ssf=112 simple_bind=64
#
# Load dynamic backend modules:
# – modulepath is architecture dependent value (32/64-bit system)
# – back_sql.la backend requires openldap-servers-sql package
# – dyngroup.la and dynlist.la cannot be used at the same time
#
#dn: cn=module,cn=config
#objectClass: olcModuleList
#cn: module
#olcModulepath: /usr/lib/openldap
#olcModulepath: /usr/lib64/openldap
#olcModuleload: accesslog.la
#olcModuleload: auditlog.la
#olcModuleload: back_dnssrv.la
#olcModuleload: back_ldap.la
#olcModuleload: back_mdb.la
#olcModuleload: back_meta.la
#olcModuleload: back_null.la
#olcModuleload: back_passwd.la
#olcModuleload: back_relay.la
#olcModuleload: back_shell.la
#olcModuleload: back_sock.la
#olcModuleload: collect.la
#olcModuleload: constraint.la
#olcModuleload: dds.la
#olcModuleload: deref.la
#olcModuleload: dyngroup.la
#olcModuleload: dynlist.la
#olcModuleload: memberof.la
#olcModuleload: pcache.la
#olcModuleload: ppolicy.la
#olcModuleload: refint.la
#olcModuleload: retcode.la
#olcModuleload: rwm.la
#olcModuleload: seqmod.la
#olcModuleload: smbk5pwd.la
#olcModuleload: sssvlv.la
#olcModuleload: syncprov.la
#olcModuleload: translucent.la
#olcModuleload: unique.la
#olcModuleload: valsort.la
#
# Schema settings
#
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///etc/openldap/schema/corba.ldif
include: file:///etc/openldap/schema/core.ldif
include: file:///etc/openldap/schema/cosine.ldif
include: file:///etc/openldap/schema/duaconf.ldif
include: file:///etc/openldap/schema/dyngroup.ldif
include: file:///etc/openldap/schema/inetorgperson.ldif
include: file:///etc/openldap/schema/java.ldif
include: file:///etc/openldap/schema/misc.ldif
include: file:///etc/openldap/schema/nis.ldif
include: file:///etc/openldap/schema/openldap.ldif
include: file:///etc/openldap/schema/ppolicy.ldif
include: file:///etc/openldap/schema/collective.ldif
#
# Frontend settings
#
dn: olcDatabase=frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: frontend
#
# Sample global access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
# by self write
# by users read
# by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#
#
# Configuration database
#
dn: olcDatabase=config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: config
olcAccess: to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage by * none
#
# Server status monitoring
#
dn: olcDatabase=monitor,cn=config
objectClass: olcDatabaseConfig
olcDatabase: monitor
olcAccess: to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=apple,dc=com" read by * none
#
# Backend database definitions
#
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: hdb
olcSuffix: dc=apple,dc=com
olcRootDN: cn=Manager,dc=apple,dc=com
olcRootPW: {SSHA}cc+n64r5WNtLivZppJmYvWWMo3DIhcAy
olcDbDirectory: /var/lib/ldap
olcDbIndex: objectClass eq,pres
olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub
rm -rf /etc/openldap/slapd.d/*
[root@master ldap]# slapadd -F /etc/openldap/slapd.d/ -n 0 -l /root/ldap/slapd.ldif
#################### 100.00% eta none elapsed none fast!
Closing DB…
[root@master ldap]#
[root@server home]# slaptest -u -F /etc/openldap/slapd.d/
config file testing succeeded
[root@server slapd.d]# chown -Rv ldap.ldap /etc/openldap/slapd.d/
[root@master ldap]# cd /etc/openldap/slapd.d/
[root@master slapd.d]# ls -la
total 4
drwxr-x— 3 ldap ldap 45 Jun 15 19:18 .
drwxr-xr-x. 5 root root 92 Jun 15 19:07 ..
drwxr-x— 3 ldap ldap 182 Jun 15 19:18 cn=config
-rw——- 1 ldap ldap 589 Jun 15 19:18 cn=config.ldif
[root@master slapd.d]#
[root@master ldap]# systemctl start slapd.service
[root@master ldap]# systemctl status slapd.service
[root@master ldap]# systemctl enable slapd.service
[root@master ldap]# cat create_user.sh
#!/bin/bash
USER_LIST=ldapuser.txt
HOME_ldap=/home/ldapuser
mkdir -pv $HOME_ldap
for USERID in `awk '{print $1}' $USER_LIST`; do
    USERNAME="`grep "$USERID" $USER_LIST | awk '{print $2}'`"
    HOMEDIR=${HOME_ldap}/${USERNAME}
    useradd $USERNAME -u $USERID -d $HOMEDIR
    grep "$USERID" $USER_LIST | awk '{print $3}' | passwd --stdin $USERNAME
done
[root@master ldap]# cat ldapuser.txt
5000 lduser1 123456
5001 lduser2 123456
5002 lduser3 123456
5003 lduser4 123456
5004 lduser5 123456
5005 lduser6 123456
[root@master ldap]#
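The create_user.sh loop above can be verified without root by printing the useradd commands it would run. A dry-run sketch assuming the same three-column (uid, name, password) ldapuser.txt format:

```shell
#!/bin/sh
# Dry-run sketch: print the useradd commands the create_user.sh loop
# would execute, using the same "uid name password" file format.
LIST=$(mktemp)
printf '5000 lduser1 123456\n5001 lduser2 123456\n' > "$LIST"
HOME_BASE=/home/ldapuser

# pass is read but unused here; the real script pipes it to passwd --stdin
cmds=$(while read -r uid name pass; do
    echo "useradd $name -u $uid -d $HOME_BASE/$name"
done < "$LIST")
echo "$cmds"
```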
vim /usr/share/migrationtools/migrate_common.ph
# Default DNS domain
$DEFAULT_MAIL_DOMAIN = "apple.com";
# Default base
$DEFAULT_BASE = "dc=apple,dc=com";
/usr/share/migrationtools/migrate_base.pl > /root/ldap/base.ldif
/usr/share/migrationtools/migrate_passwd.pl /etc/passwd > /root/ldap/user.ldif
cat /root/ldap/user.ldif
/usr/share/migrationtools/migrate_group.pl /etc/group > /root/ldap/group.ldif
ldapadd -D "cn=Manager,dc=apple,dc=com" -W -x -f base.ldif
ldapadd -D "cn=Manager,dc=apple,dc=com" -W -x -f user.ldif
ldapadd -D "cn=Manager,dc=apple,dc=com" -W -x -f group.ldif
yum -y install nfs-utils
yum -y install nfs-utils
[root@server ~]# cat /etc/exports
/home/ldapuser 192.168.1.0/24(rw,sync)
[root@server ~]# systemctl start nfs-server.service
[root@client home]# exportfs -rv
exporting *:/home/ldapuser
systemctl enable nfs-server.service
vi /etc/rsyslog.conf
local4.* /var/log/ldap.log
touch /var/log/ldap.log
systemctl restart rsyslog.service
By default slapd logs to /var/log/messages:
systemctl status slapd.service -l
tail -f /var/log/messages
SSL SETUP IN LDAP
openssl req -nodes -sha256 -newkey rsa:2048 -keyout PrivateKey.key -out CertificateRequest.csr
2. Optional: check that the CSR really uses a SHA-256 signature
openssl req -in CertificateRequest.csr -text -noout
You should see "Signature Algorithm: sha256WithRSAEncryption".
3. Create the certificate
We take the CSR, sign it with the private key, and create a public certificate:
openssl req -nodes -sha256 -newkey rsa:2048 -keyout PrivateKey.key -out CertificateRequest.csr
openssl req -in CertificateRequest.csr -text -noout
openssl x509 -req -days 365 -sha256 -in CertificateRequest.csr -signkey PrivateKey.key -out my256.crt
cp my256.crt /etc/openldap/certs/server.crt
cp PrivateKey.key /etc/openldap/certs/server.key
cp /etc/pki/tls/certs/ca-bundle.crt /etc/openldap/certs/
cat my256.crt PrivateKey.key >> master.pem
cat my256.crt PrivateKey.key >> slave.pem
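The certificate steps above can be made fully non-interactive with -subj, which avoids the CSR prompts. A sketch working in a temporary directory; the CN value is an example matching this setup:

```shell
#!/bin/sh
# Sketch: generate key + CSR, self-sign, and build the combined PEM,
# mirroring the commands above but without interactive prompts.
DIR=$(mktemp -d)
openssl req -nodes -sha256 -newkey rsa:2048 \
    -keyout "$DIR/PrivateKey.key" -out "$DIR/CertificateRequest.csr" \
    -subj "/CN=master.apple.com" 2>/dev/null
openssl x509 -req -days 365 -sha256 -in "$DIR/CertificateRequest.csr" \
    -signkey "$DIR/PrivateKey.key" -out "$DIR/my256.crt" 2>/dev/null
# Combined certificate + key PEM, as above:
cat "$DIR/my256.crt" "$DIR/PrivateKey.key" > "$DIR/master.pem"
openssl x509 -in "$DIR/my256.crt" -noout -subject
```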
vi mod_ssl.ldif
# create new
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca-bundle.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/server.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/server.key
[root@master ldap]# ldapmodify -Y EXTERNAL -H ldapi:/// -f mod_ssl.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry “cn=config”
[root@master ldap]# vi /etc/sysconfig/slapd
Add:
SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"
systemctl restart slapd
client end
[root@master ldap]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.70 master.apple.com master
192.168.1.71 slave.apple.com slave
192.168.1.73 client1.apple.com client1
192.168.1.74 client2.apple.com client2
yum -y install sssd-ldap nss-pam-ldapd nfs-utils
authconfig-tui
authconfig --enableldap --enableldapauth --ldapserver=ldaps://master.apple.com --ldapbasedn="dc=apple,dc=com" --enablemkhomedir --disableldaptls --update
authconfig --enableldap --enableldapauth --ldapserver=ldaps://slave.apple.com --ldapbasedn="dc=apple,dc=com" --enablemkhomedir --disableldaptls --update
Create the c_hash of the CA certificate.
/etc/pki/tls/misc/c_hash /etc/openldap/cacerts/server.pem
Output:
997ee4fb.0 => /etc/openldap/cacerts/server.pem
Now, symlink server.pem to the 8-digit hex hash that was printed.
ln -s /etc/openldap/cacerts/server.pem 997ee4fb.0
[root@client1 ~]# echo "TLS_REQCERT allow" >> /etc/openldap/ldap.conf
[root@client1 ~]# echo "tls_reqcert allow" >> /etc/nslcd.conf
Restart the LDAP client service.
systemctl restart nslcd
[root@client1 ~]# getent passwd lduser1
lduser1:x:5000:5000:lduser1:/home/ldapuser/lduser1:/bin/bash
[root@client1 /]# mkdir -p /home/ldapuser
[root@client1 /]# mount -t nfs 192.168.1.70:/home/ldapuser/ /home/ldapuser/
[root@client1/]# cd /home/ldapuser/
[root@client ldapuser]# ls
lduser1 lduser2 lduser3 lduser4 lduser5 lduser6
[root@client ldapuser]# su – lduser1
Last login: Sat May 20 23:11:00 EDT 2017 on pts/0
Configure LDAP Client for TLS connection.
[root@client1 ~]# echo "TLS_REQCERT allow" >> /etc/openldap/ldap.conf
[root@client1 ~]# echo "tls_reqcert allow" >> /etc/nslcd.conf
[root@client1 ~]# authconfig --enableldaptls --update
getsebool: SELinux is disabled
scp server.pem root@client1:/tmp/
Enable debug logging on CentOS 7 LDAP Server
vi /root/ldap/logging.ldif
——
cat logging.ldif
dn: cn=config
replace: olcLogLevel
olcLogLevel: -1
——
# apply
ldapmodify -Y EXTERNAL -H ldapi:/// -f /root/ldap/logging.ldif
# verify
ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base|grep -i LOG
systemctl restart slapd
vi /etc/rsyslog.conf
——
local4.* -/var/log/slapd.log
——
systemctl restart rsyslog
vi /etc/logrotate.d/syslog
—–
# add this line
/var/log/slapd.log
Master slave replication
[root@master ldap]# cat rpuser.ldif
dn: uid=rpuser,dc=apple,dc=com
objectClass: simpleSecurityObject
objectclass: account
uid: rpuser
description: Replication User
userPassword: root1234
[root@master ldap]# ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f rpuser.ldif
Enter LDAP Password:
adding new entry “uid=rpuser,dc=apple,dc=com”
Configure LDAP Provider. Add syncprov module.
[root@master ~]# vi mod_syncprov.ldif
# create new
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la
[root@master ~]# ldapadd -Y EXTERNAL -H ldapi:/// -f mod_syncprov.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry “cn=module,cn=config”
[root@master ~]# vi syncprov.ldif
# create new
dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionLog: 100
[root@master ~]# ldapadd -Y EXTERNAL -H ldapi:/// -f syncprov.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry “olcOverlay=syncprov,olcDatabase={2}hdb,cn=config”
vi syncrepl.ldif
[root@slave ldap]# cat syncrepl.ldif
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001
provider=ldap://192.168.1.70:389/
bindmethod=simple
binddn=”uid=rpuser,dc=apple,dc=com”
credentials=root1234
searchbase=”dc=apple,dc=com”
scope=sub
schemachecking=on
type=refreshAndPersist
retry="30 5 300 3"
interval=00:00:05:00
[root@slave ldap]#
ldapadd -Y EXTERNAL -H ldapi:/// -f syncrepl.ldif
Test the LDAP replication:
Let's create a user in LDAP called "ldaprptest"; to do that, create a .ldif file on the master LDAP server.
[root@master ~]# vi ldaprptest.ldif
Update the above file with below content.
dn: uid=ldaprptest,ou=People,dc=apple,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldaprptest
uid: ldaprptest
uidNumber: 9988
gidNumber: 100
homeDirectory: /home/ldaprptest
loginShell: /bin/bash
gecos: LDAP Replication Test User
userPassword: redhat123
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
Add the entry on the master, then check that it replicated to the slave:
[root@master ~]# ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f ldaprptest.ldif
ldapsearch -x cn=ldaprptest -b dc=apple,dc=com
[root@master ldap]# slappasswd
New password:
Re-enter new password:
{SSHA}hbfwS2+203V3p+P6CB5n7nHVZpRB6ns+
[root@master ldap]# vi adduser.ldif
[root@master ldap]# cat adduser.ldif
dn: uid=mohan,ou=People,dc=apple,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: mohan
uid: mohan
uidNumber: 9999
gidNumber: 100
homeDirectory: /home/mohan
loginShell: /bin/bash
gecos: Mohan [Admin (at) Apple]
userPassword: {SSHA}uWC6jFxw/4nY3GEQfwf4Eh/cq13lvyKy
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
[root@master ldap]# ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f adduser.ldif
Enter LDAP Password:
adding new entry “uid=mohan,ou=People,dc=apple,dc=com”
Backup LDAP with slapcat on CentOS 7
#!/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
set -e
KEEP=7
BASE_DN=’dc=apple,dc=com’
LDAPBK=”ldap-$( date +%y%m%d-%H%M ).ldif”
BACKUPDIR=’/root/ldap-backup’
test -d “$BACKUPDIR” || mkdir -p “$BACKUPDIR”
slapcat -b “$BASE_DN” -l “$BACKUPDIR/$LDAPBK”
gzip -9 “$BACKUPDIR/$LDAPBK”
ls -1tr $BACKUPDIR/*.ldif.gz | head -n -$KEEP | xargs -r rm -f
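The retention step in the backup script (keep the newest $KEEP archives, delete the rest) is easy to test in isolation. A sketch on empty temp files; head -n -N and xargs -r are GNU extensions, which is fine on CentOS:

```shell
#!/bin/sh
# Sketch of the retention logic: with 10 archives and KEEP=7,
# the 3 oldest (all but the last 7 of ls -1tr) are removed.
KEEP=7
DIR=$(mktemp -d)
for i in 01 02 03 04 05 06 07 08 09 10; do
    touch "$DIR/ldap-$i.ldif.gz"
done
# ls -1tr lists oldest first; head -n -$KEEP drops the newest $KEEP
ls -1tr "$DIR"/*.ldif.gz | head -n -"$KEEP" | xargs -r rm -f
ls "$DIR" | wc -l
```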
ldapmodify -Y EXTERNAL -H ldapi:/// -f monitor.ldif
cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap:ldap /var/lib/ldap/*
Add the cosine and nis LDAP schemas.
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
Generate base.ldif file for your domain.
vi base.ldif
Use the below information. You can modify it according to your requirement.
dn: dc=apple,dc=com
dc: apple
objectClass: top
objectClass: domain

dn: cn=ldapadm,dc=apple,dc=com
objectClass: organizationalRole
cn: ldapadm
description: LDAP Manager

dn: cn=Manager,dc=apple,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=People,dc=apple,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=apple,dc=com
objectClass: organizationalUnit
ou: Group
Build the directory structure.
ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f base.ldif
ldapsearch -x -W -D 'cn=Manager,dc=apple,dc=com' -b "" -s base
ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
This procedure is to backup XFS-based file systems in Red Hat Enterprise Linux 7.x
Backing up a file system (/etc)
1. To back up the /etc filesystem to a file called "/tmp/mybackup", issue the following command:
# cd /tmp
# xfsdump -0 -f mybackup -s etc /dev/mapper/rhel-root
where:
-f: do the dump to the file “mybackup”
-0: do full backup (1 means incremental)
-s: backup the etc subtree from the filesystem /dev/mapper/rhel-root
Restoring the file system (/etc/)
1. To list all the created dumps with detailed information:
# xfsrestore -I
2. To restore a particular dump:
# xfsrestore -f mybackup /tmp
where:
-f: restore from the file “mybackup”
Interactive Restore
1. This exercise lets you navigate through the backup file "mybackup" and restore selected files instead of performing a full restore:
# cd /tmp
# xfsrestore -if mybackup /tmp
2. Once you get the restore prompt, you may use the ls and cd commands to navigate and list the files inside the backup file.
3. Select the files you want to restore by using the add command:
# add yum.conf
4. Once the selection is complete, type extract to restore the selected files.
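Since xfsdump needs root and an XFS filesystem, the backup command above can be sketched as a dry-run builder that only prints the command line it would run. The paths are the examples from the text:

```shell
#!/bin/sh
# Sketch: compose the xfsdump command (dump level, target file,
# subtree, filesystem) without executing it.
build_dump_cmd() {
    level=$1 file=$2 subtree=$3 fs=$4
    echo "xfsdump -$level -f $file -s $subtree $fs"
}

build_dump_cmd 0 /tmp/mybackup etc /dev/mapper/rhel-root
```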
Ansible practice
Ansible playbooks
root@controller:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/root/.ssh/id_rsa.
Your public key has been saved in /home/root/.ssh/id_rsa.pub.
The key fingerprint is:
33:b8:4d:8f:95:bc:ed:1a:12:f3:6c:09:9f:52:23:d0 root@controller
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . E |
| o . . |
| . S * |
| + / * |
| . = @ . |
| + o |
| … |
+-----------------+
root@controller:~$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpEZIWC8UGJXA5uGDRj6vUd5KlwIvE0cat3opG4ZUajmrZU382PbdO3JG6DJa3beZXylYSOYeKtRVT9DxbeKgeTKQ4m8uamM81NAMRf0ZaiqSsZ9r56h1urlJCfD4y5nXwnUTvoBzZpTvTYwcevBzpNcI/VnBIgpcKQWJq11iHHrcmybbFreujgotHg1XUwCv9BdpXbPnA50XbUyX97uqCE9EzIk7WnSNpTtsmASxMPSWoHB9seOas1mq7UBKo7Xfu7qaJJLIEnMisBLKHPb0hM23BNV2SiacJEpHSB5eJKULtMGDej38HbmBsQI3u+lzcWSRppDIt6BvO05brW5C5 root@controller
Copy the key to the Ansible deployment server/host for passwordless auth:
root@controller:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 controller
104.198.143.3 ansible
root@controller:~$ ssh ansible
Last login: Wed Jan 18 02:51:09 2017 from 115.113.77.105
[root@ansible ~]$
——————————-
[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
192.168.1.21
——————————-
[root@ansible ~]$ vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.2 ansible.c.rich-operand-154505.internal ansible # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
192.168.1.23 ansible1
192.168.1.22 ansible2
104.198.143.3 ansible
——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.23 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.22 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.1.22 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n",
    "unreachable": true
}
192.168.1.22 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n",
    "unreachable": true
}
——————————-
[root@ansible ~]$ ansible -m ping all -b
192.168.1.23 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.1.22 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
——————————-
[root@ansible ~]$ ansible -s -m ping all
192.168.1.23 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.1.22 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
——————————-
[root@ansible ~]$ vim playbook1.yml
---
- hosts: all
  tasks:
    - name: installing telnet package
      yum: name=telnet state=present
[root@ansible ~]$ ansible-playbook playbook1.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [installing telnet package] ***********************************************
fatal: [192.168.1.23]: FAILED! => {"changed": true, "failed": true, "msg": "You need to be root to perform this command.\n", "rc": 1, "results": ["Loaded plugins: fastestmirror\n"]}
fatal: [192.168.1.21]: FAILED! => {"changed": true, "failed": true, "msg": "You need to be root to perform this command.\n", "rc": 1, "results": ["Loaded plugins: fastestmirror\n"]}
	to retry, use: --limit @/home/root/playbook1.retry
PLAY RECAP *********************************************************************
192.168.1.22 : ok=1 changed=0 unreachable=0 failed=1
192.168.1.23 : ok=1 changed=0 unreachable=0 failed=1
[root@ansible ~]$ ansible-playbook playbook1.yml -b
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [installing telnet package] ***********************************************
changed: [192.168.1.23]
changed: [192.168.1.21]
PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0
——————————-
[root@ansible ~]$ vim playbook2.yml
---
- hosts: all
  tasks:
    - name: installing nfs package
      yum: name=nfs-utils state=present
    - name: starting nfs service
      service: name=nfs state=started enabled=yes
[root@ansible ~]$ ansible-playbook playbook2.yml --syntax-check
playbook: playbook2.yml
[root@ansible ~]$ ansible-playbook playbook2.yml --check
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [installing nfs package] **************************************************
changed: [192.168.1.23]
changed: [192.168.1.21]
TASK [starting nfs service] ****************************************************
fatal: [192.168.1.21]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'nfs'\": "}
fatal: [192.168.1.23]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'nfs'\": "}
to retry, use: --limit @/home/root/playbook2.retry
PLAY RECAP *********************************************************************
192.168.1.21 : ok=2 changed=1 unreachable=0 failed=1
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=1
————————
[root@ansible ~]$ ansible-playbook playbook2.yml -b
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [installing nfs package] **************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]
TASK [starting nfs service] ****************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]
PLAY RECAP *********************************************************************
192.168.1.21 : ok=3 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=2 unreachable=0 failed=0
—————-
Running the same playbook again changes nothing, because the hosts are already in the desired state (the modules are idempotent):
[root@ansible ~]$ ansible-playbook playbook2.yml -b
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [installing nfs package] **************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [starting nfs service] ****************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
PLAY RECAP *********************************************************************
192.168.1.21 : ok=3 changed=0 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=0 unreachable=0 failed=0
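Instead of passing -b on every invocation, privilege escalation can be declared once in the play itself. A sketch of playbook2.yml with become enabled:

```yaml
---
- hosts: all
  become: yes          # same effect as running ansible-playbook with -b
  tasks:
    - name: installing nfs package
      yum: name=nfs-utils state=present
    - name: starting nfs service
      service: name=nfs state=started enabled=yes
```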
=================================================
[root@ansible ~]$ ansible all -a "service nfs status" -b
192.168.1.23 | SUCCESS | rc=0 >>
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 12036 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 12035 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 12036 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
192.168.1.21 | SUCCESS | rc=0 >>
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 6738 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 6737 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 6738 (code=exited, status=0/SUCCESS)
Memory: 0B
CGroup: /system.slice/nfs-server.service
—————————————————
[root@ansible ~]$ vim playbook3.yml
---
- hosts: all
  become: yes
  tasks:
    - name: Install Apache.
      yum: name={{ item }} state=present
      with_items:
        - httpd
        - httpd-devel
    - name: Copy configuration files.
      copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: 0644
      with_items:
        - src: "httpd.conf"
          dest: "/etc/httpd/conf/httpd.conf"
        - src: "httpd-vhosts.conf"
          dest: "/etc/httpd/conf/httpd-vhosts.conf"
    - name: Make sure Apache is started now and at boot.
      service: name=httpd state=started enabled=yes
[root@ansible ~]$ ls -l
total 40
-rw-r--r--. 1 root root 11753 Jan 18 06:27 httpd.conf
-rw-r--r--. 1 root root 824 Jan 18 06:27 httpd-vhosts.conf
[root@ansible ~]$ ansible-playbook playbook3.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]
TASK [Install Apache.] *********************************************************
changed: [192.168.1.23] => (item=[u'httpd', u'httpd-devel'])
changed: [192.168.1.21] => (item=[u'httpd', u'httpd-devel'])
TASK [Copy configuration files.] ***********************************************
ok: [192.168.1.21] => (item={u'dest': u'/etc/httpd/conf/httpd.conf', u'src': u'httpd.conf'})
ok: [192.168.1.23] => (item={u'dest': u'/etc/httpd/conf/httpd.conf', u'src': u'httpd.conf'})
ok: [192.168.1.21] => (item={u'dest': u'/etc/httpd/conf/httpd-vhosts.conf', u'src': u'httpd-vhosts.conf'})
ok: [192.168.1.23] => (item={u'dest': u'/etc/httpd/conf/httpd-vhosts.conf', u'src': u'httpd-vhosts.conf'})
TASK [Make sure Apache is started now and at boot.] ****************************
changed: [192.168.1.21]
changed: [192.168.1.23]
PLAY RECAP *********************************************************************
192.168.1.21 : ok=4 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=4 changed=2 unreachable=0 failed=0
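As written, playbook3.yml copies the configuration files but never reloads Apache when they change on a later run. A notify/handler pair is the usual fix; a sketch (the handler name is illustrative):

```yaml
    - name: Copy configuration files.
      copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: 0644
      with_items:
        - src: "httpd.conf"
          dest: "/etc/httpd/conf/httpd.conf"
        - src: "httpd-vhosts.conf"
          dest: "/etc/httpd/conf/httpd-vhosts.conf"
      notify: restart apache

  handlers:
    - name: restart apache
      service: name=httpd state=restarted
```

The handler runs at most once per play, and only if some task that notifies it reported changed.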
[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
[web]
192.168.1.21
[multi:children]
web
web
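The inventory listing above appears garbled: the same [web] group header is repeated, and multi:children lists it twice. A working inventory of the same shape needs two distinct groups; the second group name (db) is an illustrative assumption:

```ini
# /etc/ansible/hosts (INI format)
[web]
192.168.1.23

[db]
192.168.1.21

[multi:children]
web
db
```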
[root@ansible ~]$ ansible multi -m ping
192.168.1.23 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.1.21 | SUCCESS => {
"changed": false,
"ping": "pong"
}
—————————————————
[root@ansible ~]$ ansible multi -a hostname
192.168.1.21 | SUCCESS | rc=0 >>
ansible2.c.rich-operand-154505.internal
192.168.1.23 | SUCCESS | rc=0 >>
ansible1.c.rich-operand-154505.internal
—————————————-
[root@ansible ~]$ ansible multi -a 'free -m'
192.168.1.21 | SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 229 1673 178 1796 2978
Swap: 0 0 0
192.168.1.23 | SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 329 265 16 3105 3031
Swap: 0 0 0
—————————————-
[root@ansible ~]$ ansible multi -a "du -h"
192.168.1.23 | SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-63512995370206
56K ./.ansible/tmp
56K ./.ansible
0 ./.puppetlabs/var
0 ./.puppetlabs/etc
0 ./.puppetlabs/opt/puppet
0 ./.puppetlabs/opt
0 ./.puppetlabs
80K .
192.168.1.21 | SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-38086154108105
56K ./.ansible/tmp
56K ./.ansible
80K .
————————————-
[root@ansible ~]$ ansible multi -a 'service httpd status' -b
192.168.1.21 | FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.
192.168.1.23 | FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.
————————————-
[root@ansible ~]$ ansible multi -a 'netstat -tlpn' -s
192.168.1.21 | SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 15329/etcd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 6736/rpc.mountd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 994/sshd
tcp 0 0 0.0.0.0:34457 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1041/master
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:43009 0.0.0.0:* LISTEN 6724/rpc.statd
tcp6 0 0 :::10251 :::* LISTEN 15502/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 15329/etcd
tcp6 0 0 :::10252 :::* LISTEN 15463/kube-controll
tcp6 0 0 :::111 :::* LISTEN 6524/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 6736/rpc.mountd
tcp6 0 0 :::8080 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::22 :::* LISTEN 994/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1041/master
tcp6 0 0 :::36474 :::* LISTEN -
tcp6 0 0 :::2049 :::* LISTEN -
tcp6 0 0 :::54309 :::* LISTEN 6724/rpc.statd
192.168.1.23 | SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 12034/rpc.mountd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 990/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1032/master
tcp 0 0 0.0.0.0:4447 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:45185 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9990 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:53095 0.0.0.0:* LISTEN 12018/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 11822/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 12034/rpc.mountd
tcp6 0 0 :::22 :::* LISTEN 990/sshd
tcp6 0 0 :::43255 :::* LISTEN -
tcp6 0 0 :::55927 :::* LISTEN 12018/rpc.statd
tcp6 0 0 ::1:25 :::* LISTEN 1032/master
tcp6 0 0 :::2049 :::* LISTEN -
[root@ansible ~]$ ansible multi -s -m yum -a "name=ntp state=present"
192.168.1.23 | SUCCESS => {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed"
]
}
192.168.1.21 | SUCCESS => {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed"
]
}
————————————–
[root@ansible ~]$ ansible multi -s -m service -a "name=ntpd state=started enabled=yes"
————————————–
[root@ansible ~]$ ntpdate
18 Jan 04:57:35 ntpdate[4532]: no servers can be used, exiting
————————————–
[root@ansible ~]$ ansible multi -s -a "service ntpd stop"
192.168.1.23 | SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service
192.168.1.21 | SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service
————————————–
[root@ansible ~]$ ansible multi -s -a "ntpdate -q 0.rhel.pool.ntp.org"
192.168.1.21 | SUCCESS | rc=0 >>
server 138.236.128.112, stratum 2, offset -0.003149, delay 0.05275
server 71.210.146.228, stratum 2, offset 0.003796, delay 0.04633
server 128.138.141.172, stratum 1, offset -0.000194, delay 0.03752
server 69.89.207.199, stratum 2, offset -0.000211, delay 0.05193
18 Jan 04:58:22 ntpdate[10370]: adjust time server 128.138.141.172 offset -0.000194 sec
192.168.1.23 | SUCCESS | rc=0 >>
server 173.230.144.109, stratum 2, offset 0.000549, delay 0.06175
server 45.127.113.2, stratum 3, offset 0.000591, delay 0.06134
server 4.53.160.75, stratum 2, offset -0.000900, delay 0.04163
server 50.116.52.97, stratum 2, offset -0.001006, delay 0.05426
18 Jan 04:58:22 ntpdate[15477]: adjust time server 4.53.160.75 offset -0.000900 sec
————————————–
[root@ansible ~]$ ansible web -s -m yum -a "name=MySQL-python state=present"
192.168.1.23 | SUCCESS => {
"changed": true,
"msg": "",
"rc": 0,
"results": [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package MySQL-python.x86_64 0:1.2.5-1.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n MySQL-python x86_64 1.2.5-1.el7 base 90 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 90 k\nInstalled size: 284 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n Verifying : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n\nInstalled:\n MySQL-python.x86_64 0:1.2.5-1.el7 \n\nComplete!\n”
]
}
[root@ansible ~]$ ansible web -s -m yum -a "name=python-setuptools state=present"
192.168.1.23 | SUCCESS => {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"python-setuptools-0.9.8-4.el7.noarch providing python-setuptools is already installed"
]
}
[root@ansible ~]$ ansible web -s -m easy_install -a "name=django state=present"
192.168.1.23 | SUCCESS => {
"binary": "/bin/easy_install",
"changed": true,
"name": "django",
"virtualenv": null
}
————————————–
[root@ansible ~]$ ansible web -s -m user -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
"changed": true,
"comment": "",
"createhome": true,
"group": 1004,
"home": "/home/admin",
"name": "admin",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1003
}
[root@ansible ~]$ ansible web -s -m group -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
"changed": false,
"gid": 1004,
"name": "admin",
"state": "present",
"system": false
}
[root@ansible ~]$ ansible web -s -m user -a "name=first group=admin state=present"
192.168.1.23 | SUCCESS => {
"changed": true,
"comment": "",
"createhome": true,
"group": 1004,
"home": "/home/first",
"name": "first",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1004
}
[root@ansible ~]$ ansible web -a "tail /etc/passwd"
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
root:x:1000:1001::/home/root:/bin/bash
test:x:1001:1002::/home/test:/bin/bash
jboss:x:1002:1003::/home/jboss:/bin/bash
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
admin:x:1003:1004::/home/admin:/bin/bash
first:x:1004:1004::/home/first:/bin/bash
[root@ansible ~]$ ansible web -a "tail /etc/shadow"
192.168.1.23 | FAILED | rc=1 >>
tail: cannot open '/etc/shadow' for reading: Permission denied
[root@ansible ~]$ ansible web -a "tail /etc/shadow" -b
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:!!:17176::::::
tss:!!:17176::::::
root:*:17178:0:99999:7:::
test:*:17178:0:99999:7:::
jboss:!!:17182:0:99999:7:::
rpc:!!:17184:0:99999:7:::
rpcuser:!!:17184::::::
nfsnobody:!!:17184::::::
admin:!!:17184:0:99999:7:::
first:!!:17184:0:99999:7:::
——————————–
[root@ansible ~]$ ansible web -m stat -a "path=/etc/hosts"
192.168.1.23 | SUCCESS => {
"changed": false,
"stat": {
"atime": 1484635843.2532218,
"checksum": "5fed7929fb7be9b1046252b6b7e0e2263fbc0738",
"ctime": 1484203757.175483,
"dev": 2049,
"executable": false,
"exists": true,
"gid": 0,
"gr_name": "root",
"inode": 240,
"isblk": false,
"ischr": false,
"isdir": false,
"isfifo": false,
"isgid": false,
"islnk": false,
"isreg": true,
"issock": false,
"isuid": false,
"md5": "10f391742a450d220ff00269216eff8a",
"mode": "0644",
"mtime": 1484203757.175483,
"nlink": 1,
"path": "/etc/hosts",
"pw_name": "root",
"readable": true,
"rgrp": true,
"roth": true,
"rusr": true,
"size": 297,
"uid": 0,
"wgrp": false,
"woth": false,
"writeable": false,
"wusr": true,
"xgrp": false,
"xoth": false,
"xusr": false
}
}
========================================
[root@ansible ~]$ ansible multi -m copy -a "src=/etc/hosts dest=/tmp/hosts"
192.168.1.23 | SUCCESS => {
"changed": true,
"checksum": "08aa54eecc8a866b53d38351ea72e5bb97718005",
"dest": "/tmp/hosts",
"gid": 1001,
"group": "root",
"md5sum": "72ff7a2085a5186d0cab74f14bae1483",
"mode": "0664",
"owner": "root",
"secontext": "unconfined_u:object_r:user_home_t:s0",
"size": 369,
"src": "/home/root/.ansible/tmp/ansible-tmp-1484717608.47-178441141048946/source",
"state": "file",
"uid": 1000
}
192.168.1.21 | SUCCESS => {
"changed": true,
"checksum": "08aa54eecc8a866b53d38351ea72e5bb97718005",
"dest": "/tmp/hosts",
"gid": 1001,
"group": "root",
"md5sum": "72ff7a2085a5186d0cab74f14bae1483",
"mode": "0664",
"owner": "root",
"secontext": "unconfined_u:object_r:user_home_t:s0",
"size": 369,
"src": "/home/root/.ansible/tmp/ansible-tmp-1484717608.85-272034831244848/source",
"state": "file",
"uid": 1000
}
==========================================
[root@ansible ~]$ ansible multi -s -m fetch -a "src=/etc/hosts dest=/tmp"
192.168.1.23 | SUCCESS => {
"changed": true,
"checksum": "5fed7929fb7be9b1046252b6b7e0e2263fbc0738",
"dest": "/tmp/192.168.1.23/etc/hosts",
"md5sum": "10f391742a450d220ff00269216eff8a",
"remote_checksum": "5fed7929fb7be9b1046252b6b7e0e2263fbc0738",
"remote_md5sum": null
}
192.168.1.21 | SUCCESS => {
"changed": true,
"checksum": "0b24c9ee4a888defdf6769d5e72f65761e882f1f",
"dest": "/tmp/192.168.1.21/etc/hosts",
"md5sum": "e2dd8ef8a5f58f35d7a3f3dce7f2f2bf",
"remote_checksum": "0b24c9ee4a888defdf6769d5e72f65761e882f1f",
"remote_md5sum": null
}
============================================
[root@ansible ~]$ ls -l /tmp/
total 16
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.21
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.23
[root@ansible ~]$ cat /tmp/192.168.1.23/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.3 ansible1.c.rich-operand-154505.internal ansible1 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
==========================================
[root@ansible ~]$ ansible multi -m file -a "dest=/tmp/test mode=644 state=directory"
192.168.1.23 | SUCCESS => {
"changed": true,
"gid": 1001,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/tmp/test",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 6,
"state": "directory",
"uid": 1000
}
192.168.1.21 | SUCCESS => {
"changed": true,
"gid": 1001,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/tmp/test",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 6,
"state": "directory",
"uid": 1000
}
===========================================
[root@ansible ~]$ ansible multi -s -m file -a "dest=/tmp/test mode=644 owner=root state=directory"
192.168.1.21 | SUCCESS => {
"changed": true,
"gid": 1001,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/tmp/test",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 6,
"state": "directory",
"uid": 0
}
192.168.1.23 | SUCCESS => {
"changed": true,
"gid": 1001,
"group": "root",
"mode": "0644",
"owner": "root",
"path": "/tmp/test",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 6,
"state": "directory",
"uid": 0
}
================================================
[root@ansible ~]$ ansible multi -s -B 3600 -a "yum -y update"
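-B 3600 runs the command as a background (async) job with a one-hour timeout, which is why no package output appears inline; the async_wrapper entries in /var/log/messages below are that job. The playbook equivalent sets async and poll on a task; a sketch:

```yaml
    - name: Update all packages in the background.
      command: yum -y update
      async: 3600   # allow the job up to an hour before it is reaped
      poll: 0       # do not wait; check on it later with the async_status module
```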
=================================================
[root@ansible ~]$ ansible 192.168.1.23 -s -a "tail /var/log/messages"
192.168.1.23 | SUCCESS | rc=0 >>
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Return async_wrapper task started.
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Starting module and watcher
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start watching 18305 (3600)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start module (18305)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Module complete (18305)
Jan 18 05:41:08 ansible1 ansible-async_wrapper.py: Done in kid B.
Jan 18 05:42:04 ansible1 systemd-logind: Removed session 180.
Jan 18 05:42:04 ansible1 systemd: Started Session 181 of user root.
Jan 18 05:42:04 ansible1 systemd-logind: New session 181 of user root.
Jan 18 05:42:04 ansible1 systemd: Starting Session 181 of user root.
=================================================
[root@ansible ~]$ ansible multi -s -m shell -a "tail /var/log/messages | grep ansible-command | wc -l"
192.168.1.23 | SUCCESS | rc=0 >>
0
192.168.1.21 | SUCCESS | rc=0 >>
0
=================================
[root@ansible ~]$ ansible web -s -m git -a "repo=git://web.com/path/to/repo.git dest=/opt/myapp update=yes version=1.2.4"
192.168.1.23 | FAILED! => {
"changed": false,
"failed": true,
"msg": "Failed to find required executable git"
}
[root@ansible ~]$ ansible web -s -m yum -a "name=git state=present"
192.168.1.23 | SUCCESS => {
"changed": true,
"msg": "",
"rc": 0,
"results": [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package git.x86_64 0:1.8.3.1-6.el7_2.1 will be installed\n–> Processing Dependency: perl-Git = 1.8.3.1-6.el7_2.1 for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Git) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Error) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: libgnome-keyring.so.0()(64bit) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Running transaction check\n—> Package libgnome-keyring.x86_64 0:3.8.0-3.el7 will be installed\n—> Package perl-Error.noarch 1:0.17020-2.el7 will be installed\n—> Package perl-Git.noarch 0:1.8.3.1-6.el7_2.1 will be installed\n—> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n git x86_64 1.8.3.1-6.el7_2.1 base 4.4 M\nInstalling for dependencies:\n libgnome-keyring x86_64 3.8.0-3.el7 base 109 k\n perl-Error noarch 1:0.17020-2.el7 base 32 k\n perl-Git noarch 1.8.3.1-6.el7_2.1 base 53 k\n perl-TermReadKey x86_64 2.30-20.el7 base 31 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package (+4 Dependent packages)\n\nTotal download size: 4.6 M\nInstalled size: 23 M\nDownloading packages:\n——————————————————————————–\nTotal 12 MB/s | 4.6 MB 00:00 \nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 
1:perl-Error-0.17020-2.el7.noarch 1/5 \n Installing : libgnome-keyring-3.8.0-3.el7.x86_64 2/5 \n Installing : perl-TermReadKey-2.30-20.el7.x86_64 3/5 \n Installing : git-1.8.3.1-6.el7_2.1.x86_64 4/5 \n Installing : perl-Git-1.8.3.1-6.el7_2.1.noarch 5/5 \n Verifying : perl-Git-1.8.3.1-6.el7_2.1.noarch 1/5 \n Verifying : perl-TermReadKey-2.30-20.el7.x86_64 2/5 \n Verifying : libgnome-keyring-3.8.0-3.el7.x86_64 3/5 \n Verifying : 1:perl-Error-0.17020-2.el7.noarch 4/5 \n Verifying : git-1.8.3.1-6.el7_2.1.x86_64 5/5 \n\nInstalled:\n git.x86_64 0:1.8.3.1-6.el7_2.1 \n\nDependency Installed:\n libgnome-keyring.x86_64 0:3.8.0-3.el7 perl-Error.noarch 1:0.17020-2.el7 \n perl-Git.noarch 0:1.8.3.1-6.el7_2.1 perl-TermReadKey.x86_64 0:2.30-20.el7 \n\nComplete!\n”
]
}
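In practice the git checkout above would live in a playbook so that the git package dependency is handled before the checkout runs; a sketch reusing the repository URL from the ad-hoc command:

```yaml
---
- hosts: web
  become: yes
  tasks:
    - name: Ensure git is installed.
      yum: name=git state=present
    - name: Check out the application at a fixed version.
      git:
        repo: git://web.com/path/to/repo.git
        dest: /opt/myapp
        version: 1.2.4
        update: yes
```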