
LVM volume space scaling with XFS on CentOS 7

Originally, on my CentOS 7 virtual machine, I created 2 partitions:

sda1 for /boot
sda2 with 1 volume group “centos” containing 5 logical volumes:
/
/home
/var
/tmp
swap

I later noticed that I needed to reclaim space from the /home LV. It was 15GB but only 1.5GB was in use, so I decided to reduce it to 5GB:

Code:
# lvreduce -L 5GB /dev/mapper/centos-home

It reported success, so I rebooted.

Upon reboot, I was dropped into emergency mode and noticed /home was not listed in df, so I tried mounting everything in fstab but received an error:

Code:
# mount -a
mount: /dev/mapper/centos-home: can't read superblock

So I ran:

Code:
# xfs_repair /dev/mapper/centos-home

It reported the same problem about not being able to read the superblock.

Oddly enough, lvdisplay /dev/mapper/centos-home works and now shows LV Size as 5.00GB, down from 15.00GB, with all the other information intact…

 

This article describes the actual process of resizing XFS-formatted LVM volumes under CentOS 7.

Actual purpose:

1. Reduce the logical volume /dev/mapper/home from 178G to 10G

2. Add the freed 168G to the logical volume /dev/mapper/root

Actual process:

1. Back up important data in advance; shrinking XFS leads to data loss.

The backup can use xfsdump, or the data can be backed up off the machine (omitted here).

2. Unmount the volume /dev/mapper/home

[root@localhost ~]# umount /home

3. Reduce the size of the volume /dev/mapper/home (this step leads to data loss, see point 1)

[root@localhost ~]# lvreduce -L 10G /dev/mapper/home

WARNING: Reducing active logical volume to 10.00 GiB.

THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce cl/home? [y/n]:y

Size of logical volume cl/home changed from 178.25 GiB (45633 extents) to 10.00 GiB (2560 extents).

Logical volume cl/home successfully resized.

4. Increase the size of the volume /dev/mapper/root

[root@localhost ~]# lvextend -l +100%FREE /dev/mapper/root

Size of logical volume cl/root changed from 50.00 GiB (12800 extents) to 218.26 GiB (55874 extents).

Logical volume cl/root successfully resized.

5. Grow the xfs file system to fill the enlarged volume

[root@localhost ~]# xfs_growfs /dev/mapper/root

6. Re-mount and restore the data

If you try to mount directly, you get an error:

[root@localhost ~]# mount /dev/mapper/home /home/

mount: /dev/mapper/home: can’t read superblock

The volume must be reformatted first (the old XFS filesystem no longer fits the shrunken volume):

[root@localhost ~]# mkfs.xfs -f /dev/mapper/home

Mount after formatting:

[root@localhost ~]# mount /dev/mapper/home /home/

Recover the data after mounting.

This step can use xfsrestore, or a manual copy back (see point 1).
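Taken together, the steps amount to a fixed command sequence. The sketch below only prints that sequence instead of executing it (XFS cannot be shrunk in place, hence the backup/mkfs/restore dance); volume names and the size are the article's examples, so review and adapt before running anything for real:

```shell
# Print (not run) the shrink-and-restore plan for an XFS-backed LV.
# Names and sizes are examples from the article; adjust before use.
plan_shrink() {
  lv=$1
  mnt=$2
  new_size=$3
  dump=$4
  printf '%s\n' \
    "xfsdump -l 0 -f $dump $mnt" \
    "umount $mnt" \
    "lvreduce -y -L $new_size $lv" \
    "mkfs.xfs -f $lv" \
    "mount $lv $mnt" \
    "xfsrestore -f $dump $mnt"
}
plan_shrink /dev/mapper/home /home 10G /root/home.xfsdump
```

Printing the plan first makes it easy to eyeball the order: the level-0 xfsdump must happen before lvreduce, because lvreduce destroys the filesystem.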

 

 

$ lvremove -v /dev/centos/home

which returned the remaining free space to the volume group.

I then used lvextend to extend the root LV:

$ lvextend -L +900G /dev/centos/root

and

$ xfs_growfs /dev/centos/root

CentOS 7.4 MariaDB Galera Cluster

MariaDB Galera Cluster installation:
Operating system: CentOS 7.4
Cluster size: 3 nodes
Host information: 192.168.153.142 node1 selinux=disabled firewalld stopped
192.168.153.143 node2 selinux=disabled firewalld stopped
192.168.153.144 node3 selinux=disabled firewalld stopped

Build steps

1. Host name resolution: execute on all three nodes
vim /etc/hosts
192.168.153.142 node1
192.168.153.143 node2
192.168.153.144 node3

2. Install the software packages

Method one: yum install -y MariaDB-server MariaDB-client galera
Configure the MariaDB Galera yum repository (the base yum repo can be a mounted ISO).
Set up the mariadb yum source and install (required on all nodes).
Modify the yum source file:

vi /etc/yum.repos.d/mariadb.repo

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.3.5/centos74-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
enabled=1
When installing the galera package, its dependency boost-program-options.x86_64 must be resolved (it installs directly from the base yum repositories).

Method two: rpm package installation (needed on all three nodes).
Download the rpm packages from the web:
galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
MariaDB-10.3.5-centos74-x86_64-client.rpm
MariaDB-10.3.5-centos74-x86_64-compat.rpm
MariaDB-10.3.5-centos74-x86_64-common.rpm
MariaDB-10.3.5-centos74-x86_64-server.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-compat.rpm --nodeps
rpm -ivh MariaDB-10.3.5-centos74-x86_64-common.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-client.rpm
yum install -y boost-program-options.x86_64 (resolves the galera dependency)
rpm -ivh galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-server.rpm

3. MariaDB initialization (execute on all three nodes). After the installation is complete, initialize mariadb (set the password):
systemctl start mariadb
mysql_secure_installation (set the mysql password as prompted)
systemctl stop mariadb

4. Configure galera
Master node configuration file server.cnf:
vim /etc/my.cnf.d/server.cnf
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.153.142,192.168.153.143,192.168.153.144"
wsrep_node_name=node1
wsrep_node_address=192.168.153.142
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_slave_threads=1
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=120M
wsrep_sst_method=rsync
wsrep_causal_reads=ON
Copy this file to node2 and node3, taking care to change wsrep_node_name and wsrep_node_address to the corresponding node's hostname and IP.

5. Start the cluster service:
Start the MariaDB Galera Cluster service on the first node:
[root@node1 ~]# /bin/galera_new_cluster
The remaining two nodes are started with:
[root@node2 ~]# systemctl start mariadb
Check the cluster status (the cluster service uses ports 4567 and 3306):
[root@node1 ~]# netstat -tulpn | grep -e 4567 -e 3306
tcp 0 0 0.0.0.0:4567 0.0.0.0:* LISTEN 3557/mysqld
tcp6 0 0 :::3306 :::* LISTEN 3557/mysqld

6. Verify the cluster status:
Execute on node1:
[root@node1 ~]# mysql -uroot -p ## enter the database
Check whether the galera plugin is enabled:
MariaDB [(none)]> show status like "wsrep_ready";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.004 sec)
Current number of machines in the cluster:
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)
Check the full cluster status:
MariaDB [(none)]> show status like "wsrep%";
+------------------------------+----------------------------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------------------------+
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_causal_reads | 14 |
| wsrep_cert_deps_distance | 1.200000 |
| wsrep_cert_index_size | 3 |
| wsrep_cert_interval | 0.000000 |
| wsrep_cluster_conf_id | 22 |
| wsrep_cluster_size | 3 | ## cluster members
| wsrep_cluster_state_uuid | b8ecf355-233a-11e8-825e-bb38179b0eb4 | ## unique cluster UUID
| wsrep_cluster_status | Primary | ## primary component
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_connected | ON | ## currently connected
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_sent | 0 |
| wsrep_gcomm_uuid | 0eba3aff-2341-11e8-b45a-f277db2349d5 |
| wsrep_incoming_addresses | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 | ## connected databases
| wsrep_last_committed | 9 | ## sql commit record
| wsrep_local_bf_aborts | 0 | ## local transactions aborted by replication
| wsrep_local_cached_downto | 5 |
| wsrep_local_cert_failures | 0 | ## local failed transactions
| wsrep_local_commits | 4 | ## locally executed sql
| wsrep_local_index | 0 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.057143 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 | ## local send queue
| wsrep_local_send_queue_avg | 0.000000 | ## send queue average over the interval
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_local_state_uuid | b8ecf355-233a-11e8-825e-bb38179b0eb4 | ## cluster ID
| wsrep_protocol_version | 8 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 25.3.23(r3789) |
| wsrep_ready | ON | ## plugin ready
| wsrep_received | 35 | ## replication writesets received
| wsrep_received_bytes | 5050 |
| wsrep_repl_data_bytes | 1022 |
| wsrep_repl_keys | 14 |
| wsrep_repl_keys_bytes | 232 |
| wsrep_repl_other_bytes | 0 |
| wsrep_replicated | 5 | ## writesets sent
| wsrep_replicated_bytes | 1600 | ## bytes of replication data sent
| wsrep_thread_count | 2 |
+------------------------------+----------------------------------------------------------------+
58 rows in set (0.003 sec)
View connected hosts:
MariaDB [(none)]> show status like "wsrep_incoming_addresses";
+--------------------------+----------------------------------------------------------------+
| Variable_name | Value |
+--------------------------+----------------------------------------------------------------+
| wsrep_incoming_addresses | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 |
+--------------------------+----------------------------------------------------------------+
1 row in set (0.002 sec)

7. Test whether cluster data is synchronized
MariaDB [(none)]> create database lizk;
Query OK, 1 row affected (0.010 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| china              |
| hello              |
| hi                 |
| information_schema |
| lizk               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
8 rows in set (0.001 sec)
You can see that the lizk database is synchronized to the other two nodes.

8. Simulating split-brain and recovering from it
The following simulates the split-brain caused when nodes are disconnected by packet loss during network jitter. Execute on the two nodes 192.168.153.143 and 192.168.153.144:
iptables -A INPUT -p tcp --sport 4567 -j DROP
iptables -A INPUT -p tcp --dport 4567 -j DROP
These two commands block all wsrep replication traffic on port 4567.
Then check the node on 192.168.153.142:
MariaDB [(none)]> show status like "ws%";
+------------------------------+----------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------+
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_causal_reads | 16 |
| wsrep_cert_deps_distance | 1.125000 |
| wsrep_cert_index_size | 3 |
| wsrep_cert_interval | 0.000000 |
| wsrep_cluster_conf_id | 18446744073709551615 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | b8ecf355-233a-11e8-825e-bb38179b0eb4 |
| wsrep_cluster_status | non-Primary |
The split-brain situation has now occurred, and the cluster cannot execute any commands.
To solve this problem, execute:
set global wsrep_provider_options="pc.bootstrap=true";
This command forcibly recovers the node from split-brain.
Verify:
MariaDB [(none)]> set global wsrep_provider_options="pc.bootstrap=true";
Query OK, 0 rows affected (0.015 sec)

MariaDB [(none)]> select @@wsrep_node_name;
+-------------------+
| @@wsrep_node_name |
+-------------------+
| node1             |
+-------------------+
1 row in set (0.478 sec)
Finally, recover nodes 192.168.153.143 and 192.168.153.144 by cleaning up the iptables rules (flushing everything is fine in my test environment; in production, deleting just the rules added above is enough):
[root@node3 mysql]# iptables -F
Verify after the restoration:
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)

9. After the fault, check whether the two cluster nodes can synchronize data once the service is restarted.
Stop mariadb on 192.168.153.143 and 192.168.153.144:
[root@node2 mysql]# systemctl stop mariadb
Insert data on node 192.168.153.142:
MariaDB [test]> select * from test1;
+------+
| id   |
+------+
|    2 |
|    2 |
|    1 |
|    3 |
+------+
4 rows in set (0.007 sec)
Now restart the other two nodes and verify that their data is consistent with the master node.

10. Exception handling: The machine room suddenly loses power, all galera hosts shut down abnormally, and the galera cluster service cannot start properly when power is restored. How to deal with it?
Step 1: Start the mariadb service on the master host of the galera cluster.
Step 2: Start the mariadb service on the member hosts of the galera cluster.
Exception: The mysql service on both the master host and the member hosts of the galera cluster cannot be started. What should I do?
Solution one:
Step 1: Delete the /var/lib/mysql/grastate.dat state file on the galera master host, then start the service with /bin/galera_new_cluster. It starts normally; log in and check the wsrep status.
Step 2: Delete the /var/lib/mysql/grastate.dat state file on each galera member host, then restart the service with systemctl restart mariadb. It starts normally; log in and check the wsrep status.
Solution two:
Step 1: In /var/lib/mysql/grastate.dat on the galera master host, change the 0 (safe_to_bootstrap) to 1, then start the service with /bin/galera_new_cluster. It starts normally; log in and check the wsrep status.
Step 2: Change the 0 in /var/lib/mysql/grastate.dat on the galera member hosts to 1, then restart the service with systemctl restart mariadb. It starts normally; log in and check the wsrep status.

Install and Configure PostgreSQL 10 on Fedora 27

1.1. Add the software source
rmohan.com@fedora1 ~ $ sudo dnf install https://download.postgresql.org/pub/repos/yum/10/fedora/fedora-27-x86_64/pgdg-fedora10-10-3.noarch.rpm
Last metadata expiration check: 7:30:40 ago on Tue 02 Jan 2018 10:32:40 AM CST.
pgdg-fedora10-10-3.noarch.rpm 6.9 kB/s | 8.8 kB 00:01
Dependencies resolved.
==============================================================================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================================================================
Installing:
pgdg-fedora10 noarch 10-3 @commandline 8.8 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install 1 Package

Total size: 8.8 k
Installed size: 3.2 k
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : pgdg-fedora10-10-3.noarch 1/1
Verifying : pgdg-fedora10-10-3.noarch 1/1

Installed:
pgdg-fedora10.noarch 10-3

Complete!
1.2. Install the server and client
rmohan.com@fedora1 ~ $ sudo dnf install postgresql10-server postgresql10
PostgreSQL 10 27 – x86_64 76 kB/s | 164 kB 00:02
Last metadata expiration check: 0:00:00 ago on Tue 02 Jan 2018 06:03:33 PM CST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================================================================
Installing:
postgresql10 x86_64 10.1-1PGDG.f27 pgdg10 1.5 M
postgresql10-server x86_64 10.1-1PGDG.f27 pgdg10 4.4 M
Installing dependencies:
libicu x86_64 57.1-9.fc27 updates 8.4 M
postgresql10-libs x86_64 10.1-1PGDG.f27 pgdg10 354 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install 4 Packages

Total download size: 15 M
Installed size: 54 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): postgresql10-10.1-1PGDG.f27.x86_64.rpm 203 kB/s | 1.5 MB 00:07
(2/4): libicu-57.1-9.fc27.x86_64.rpm 3.8 MB/s | 8.4 MB 00:02
(3/4): postgresql10-libs-10.1-1PGDG.f27.x86_64.rpm 36 kB/s | 354 kB 00:09
(4/4): postgresql10-server-10.1-1PGDG.f27.x86_64.rpm 138 kB/s | 4.4 MB 00:32
———————————————————————————————————————————————————————————————————————————————-
Total 460 kB/s | 15 MB 00:32
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : libicu-57.1-9.fc27.x86_64 1/4
Running scriptlet: libicu-57.1-9.fc27.x86_64 1/4
Installing : postgresql10-libs-10.1-1PGDG.f27.x86_64 2/4
Running scriptlet: postgresql10-libs-10.1-1PGDG.f27.x86_64 2/4
Installing : postgresql10-10.1-1PGDG.f27.x86_64 3/4
Running scriptlet: postgresql10-10.1-1PGDG.f27.x86_64 3/4
Running scriptlet: postgresql10-server-10.1-1PGDG.f27.x86_64 4/4
Installing : postgresql10-server-10.1-1PGDG.f27.x86_64 4/4
Running scriptlet: postgresql10-server-10.1-1PGDG.f27.x86_64 4/4
Verifying : postgresql10-server-10.1-1PGDG.f27.x86_64 1/4
Verifying : postgresql10-10.1-1PGDG.f27.x86_64 2/4
Verifying : postgresql10-libs-10.1-1PGDG.f27.x86_64 3/4
Verifying : libicu-57.1-9.fc27.x86_64 4/4

Installed:
postgresql10.x86_64 10.1-1PGDG.f27 postgresql10-server.x86_64 10.1-1PGDG.f27 libicu.x86_64 57.1-9.fc27 postgresql10-libs.x86_64 10.1-1PGDG.f27

Complete!
What we should note here is that dnf is similar to yum but performs more efficiently; it is now the mainstream package management tool for Red Hat's distribution family.

1.3. Initialization
rmohan.com@fedora1 ~ $ sudo /usr/pgsql-10/bin/postgresql-10-setup initdb
Initializing database … OK

rmohan.com@fedora1 ~ $ sudo systemctl enable postgresql-10.service
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql-10.service → /usr/lib/systemd/system/postgresql-10.service.
rmohan.com@fedora1 ~ $ sudo systemctl start postgresql-10.service
rmohan.com@fedora1 ~ $ sudo systemctl status postgresql-10.service
● postgresql-10.service – PostgreSQL 10 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-10.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-01-02 18:07:03 CST; 12s ago
Docs: https://www.postgresql.org/docs/10/static/
Process: 4654 ExecStartPre=/usr/pgsql-10/bin/postgresql-10-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 4659 (postmaster)
Tasks: 8 (limit: 4915)
CGroup: /system.slice/postgresql-10.service
├─4659 /usr/pgsql-10/bin/postmaster -D /var/lib/pgsql/10/data/
├─4660 postgres: logger process
├─4662 postgres: checkpointer process
├─4663 postgres: writer process
├─4664 postgres: wal writer process
├─4665 postgres: autovacuum launcher process
├─4666 postgres: stats collector process
└─4667 postgres: bgworker: logical replication launcher

Jan 02 18:07:03 fedora1 systemd[1]: Starting PostgreSQL 10 database server…
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.166 CST [4659] LOG: listening on IPv6 address “::1”, port 5432
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.166 CST [4659] LOG: listening on IPv4 address “127.0.0.1”, port 5432
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.168 CST [4659] LOG: listening on Unix socket “/var/run/postgresql/.s.PGSQL.5432”
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.170 CST [4659] LOG: listening on Unix socket “/tmp/.s.PGSQL.5432”
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.176 CST [4659] LOG: redirecting log output to logging collector process
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.176 CST [4659] HINT: Future log output will appear in directory “log”.
Jan 02 18:07:03 fedora1 systemd[1]: Started PostgreSQL 10 database server.
1.4. Local access
postgres@fedora1 ~ $ psql
psql (10.1)
Type “help” for help.

postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
———–+———-+———-+————-+————-+———————–
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)

postgres=#
2. Configuration
We know that under normal circumstances you need to access the postgresql service from hosts other than the local machine. By default, however, postgresql only allows local access. To allow other hosts to connect, configure the following.

2.1. Enable non-local access.
In Fedora's distribution, the PostgreSQL configuration files live mainly in the data directory, i.e. /var/lib/pgsql/10/data/

postgres@fedora1 ~ $ psql
psql (10.1)
Type “help” for help.


postgres=# \q
postgres@fedora1 ~ $ ll /var/lib/pgsql/10/data/
total 136
drwx——. 20 postgres postgres 4096 Jan 2 18:07 .
drwx——. 4 postgres postgres 4096 Jan 2 18:06 ..
drwx——. 5 postgres postgres 4096 Jan 2 18:06 base
-rw——-. 1 postgres postgres 30 Jan 2 18:07 current_logfiles
drwx——. 2 postgres postgres 4096 Jan 2 18:08 global
drwx——. 2 postgres postgres 4096 Jan 2 18:07 log
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_commit_ts
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_dynshmem
-rw——-. 1 postgres postgres 4269 Jan 2 18:06 pg_hba.conf
-rw——-. 1 postgres postgres 1636 Jan 2 18:06 pg_ident.conf
drwx——. 4 postgres postgres 4096 Jan 2 18:12 pg_logical
drwx——. 4 postgres postgres 4096 Jan 2 18:06 pg_multixact
drwx——. 2 postgres postgres 4096 Jan 2 18:07 pg_notify
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_replslot
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_serial
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_snapshots
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_stat
drwx——. 2 postgres postgres 4096 Jan 2 18:21 pg_stat_tmp
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_subtrans
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_tblspc
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_twophase
-rw——-. 1 postgres postgres 3 Jan 2 18:06 PG_VERSION
drwx——. 3 postgres postgres 4096 Jan 2 18:06 pg_wal
drwx——. 2 postgres postgres 4096 Jan 2 18:06 pg_xact
-rw——-. 1 postgres postgres 88 Jan 2 18:06 postgresql.auto.conf
-rw——-. 1 postgres postgres 22761 Jan 2 18:06 postgresql.conf
-rw——-. 1 postgres postgres 58 Jan 2 18:07 postmaster.opts
-rw——-. 1 postgres postgres 103 Jan 2 18:07 postmaster.pid
postgres@fedora1 ~ $
We first modify postgresql.conf to lift the restriction to local-only access. Open the file with vim and change line 59 from:

59 #listen_addresses = ‘localhost’ # what IP address(es) to listen on;

to

59 listen_addresses = ‘*’ # what IP address(es) to listen on;

Then use vim to open pg_hba.conf, find line 82:

82 host all all 127.0.0.1/32 ident

and add after it:

83 host all all 192.168.1.0/24 trust

At this point, restart the postgresql database.
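The two edits can also be made non-interactively. This sketch applies them with sed to sample copies in /tmp so the result can be inspected; on the real system the targets are /var/lib/pgsql/10/data/postgresql.conf and pg_hba.conf:

```shell
# Sample copies of the two lines being changed/added.
printf "#listen_addresses = 'localhost'\n" > /tmp/postgresql.conf
printf 'host    all    all    127.0.0.1/32    ident\n' > /tmp/pg_hba.conf

# Uncomment listen_addresses and listen on all interfaces.
sed -i "s|^#listen_addresses = 'localhost'|listen_addresses = '*'|" /tmp/postgresql.conf

# Allow the 192.168.1.0/24 subnet. Note: trust skips password auth
# entirely; md5 is the safer choice outside a lab network.
printf 'host    all    all    192.168.1.0/24  trust\n' >> /tmp/pg_hba.conf

cat /tmp/postgresql.conf /tmp/pg_hba.conf
```

The leading # must actually be removed from listen_addresses; merely editing the value inside a commented line has no effect.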

postgres@fedora1 ~ $ vim /var/lib/pgsql/10/data/postgresql.conf
postgres@fedora1 ~ $ vim /var/lib/pgsql/10/data/pg_hba.conf
postgres@fedora1 ~ $ exit
logout
rmohan.com@fedora1 ~ $ sudo systemctl start postgresql-10.service
[sudo] password for rmohan.com:
rmohan.com@fedora1 ~ $ sudo systemctl status postgresql-10.service
● postgresql-10.service – PostgreSQL 10 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-10.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-01-02 18:07:03 CST; 29min ago
Docs: https://www.postgresql.org/docs/10/static/
Process: 4654 ExecStartPre=/usr/pgsql-10/bin/postgresql-10-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 4659 (postmaster)
Tasks: 8 (limit: 4915)
CGroup: /system.slice/postgresql-10.service
├─4659 /usr/pgsql-10/bin/postmaster -D /var/lib/pgsql/10/data/
├─4660 postgres: logger process
├─4662 postgres: checkpointer process
├─4663 postgres: writer process
├─4664 postgres: wal writer process
├─4665 postgres: autovacuum launcher process
├─4666 postgres: stats collector process
└─4667 postgres: bgworker: logical replication launcher

Jan 02 18:07:03 fedora1 systemd[1]: Starting PostgreSQL 10 database server…
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.166 CST [4659] LOG: listening on IPv6 address “::1”, port 5432
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.166 CST [4659] LOG: listening on IPv4 address “127.0.0.1”, port 5432
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.168 CST [4659] LOG: listening on Unix socket “/var/run/postgresql/.s.PGSQL.5432”
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.170 CST [4659] LOG: listening on Unix socket “/tmp/.s.PGSQL.5432”
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.176 CST [4659] LOG: redirecting log output to logging collector process
Jan 02 18:07:03 fedora1 postmaster[4659]: 2018-01-02 18:07:03.176 CST [4659] HINT: Future log output will appear in directory “log”.
Jan 02 18:07:03 fedora1 systemd[1]: Started PostgreSQL 10 database server.
rmohan.com@fedora1 ~ $
2.2. Modify the firewall configuration.
Modifying the firewall configuration means adding port 5432 to the firewall whitelist. There are many ways to do this; here we use ufw.

rmohan.com@fedora1 ~ $ dnf list ufw
Last metadata expiration check: 0:00:36 ago on Tue 02 Jan 2018 06:41:47 PM CST.
Available Packages
ufw.noarch 0.35-9.fc27 fedora
rmohan.com@fedora1 ~ $ sudo dnf install ufw
Last metadata expiration check: 0:39:13 ago on Tue 02 Jan 2018 06:03:33 PM CST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================================================================
Installing:
ufw noarch 0.35-9.fc27 fedora 222 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install 1 Package

Total download size: 222 k
Installed size: 978 k
Is this ok [y/N]: y
Downloading Packages:
ufw-0.35-9.fc27.noarch.rpm 99 kB/s | 222 kB 00:02
———————————————————————————————————————————————————————————————————————————————-
Total 98 kB/s | 222 kB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : ufw-0.35-9.fc27.noarch 1/1
Running scriptlet: ufw-0.35-9.fc27.noarch 1/1
Running as unit: run-rcf2b3a65bf7d43b78a6d1e515b174178.service
Verifying : ufw-0.35-9.fc27.noarch 1/1

Installed:
ufw.noarch 0.35-9.fc27

Complete!
rmohan.com@fedora1 ~ $
rmohan.com@fedora1 ~ $ sudo ufw status
Status: inactive
rmohan.com@fedora1 ~ $ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
rmohan.com@fedora1 ~ $ sudo ufw status
Status: active

To Action From
— —— —-
SSH ALLOW Anywhere
224.0.0.251 mDNS ALLOW Anywhere
SSH (v6) ALLOW Anywhere (v6)
ff02::fb mDNS ALLOW Anywhere (v6)

rmohan.com@fedora1 ~ $ sudo ufw allow 5432
Rule added
Rule added (v6)
rmohan.com@fedora1 ~ $ sudo ufw default deny
Default incoming policy changed to ‘deny’
(be sure to update your rules accordingly)
rmohan.com@fedora1 ~ $ sudo systemctl enable ufw.service
Created symlink /etc/systemd/system/basic.target.wants/ufw.service → /usr/lib/systemd/system/ufw.service.
rmohan.com@fedora1 ~ $ sudo systemctl restart ufw.service
rmohan.com@fedora1 ~ $ sudo systemctl status ufw.service
● ufw.service – Uncomplicated firewall
Loaded: loaded (/usr/lib/systemd/system/ufw.service; enabled; vendor preset: disabled)
Active: active (exited) since Tue 2018-01-02 18:47:35 CST; 13s ago
Docs: man:ufw(8)
man:ufw-framework(8)
file://usr/share/doc/ufw/README
Process: 6171 ExecStart=/usr/libexec/ufw/ufw-init start (code=exited, status=0/SUCCESS)
Main PID: 6171 (code=exited, status=0/SUCCESS)

Jan 02 18:47:34 fedora1 systemd[1]: Starting Uncomplicated firewall…
Jan 02 18:47:35 fedora1 systemd[1]: Started Uncomplicated firewall.
rmohan.com@fedora1 ~ $ sudo ufw status
Status: active

To Action From
— —— —-
SSH ALLOW Anywhere
224.0.0.251 mDNS ALLOW Anywhere
5432 ALLOW Anywhere
SSH (v6) ALLOW Anywhere (v6)
ff02::fb mDNS ALLOW Anywhere (v6)
5432 (v6) ALLOW Anywhere (v6)

rmohan.com@fedora1 ~ $

CentOS 7 MongoDB 3.4

Install MongoDB 3.4 via yum under the CentOS 7 system.

Step one: check whether a yum source for MongoDB is already configured.

Switch to the yum directory: cd /etc/yum.repos.d/

View the files: ls

Step two: if it does not exist, add the yum source.

Create the file: touch mongodb-3.4.repo

Edit the file: vi mongodb-3.4.repo

Content:

cat /etc/yum.repos.d/mongodb-3.4.repo

[mongodb-org-3.4]

name=MongoDB Repository

baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/

gpgcheck=1

enabled=1

gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

You can set gpgcheck=0 here to skip gpg verification.

Update all packages before installation: yum update (optional)

Then install: yum install -y mongodb-org

Check where mongo was installed: whereis mongod

Check and modify the configuration file: vi /etc/mongod.conf

Start mongod: systemctl start mongod.service

Stop mongod: systemctl stop mongod.service

External network access requires the firewall to be stopped:

CentOS 7.0 uses firewalld as the firewall by default (some setups change it to iptables).

Close the firewall:

systemctl stop firewalld.service # stop the firewall

systemctl disable firewalld.service # disable firewall startup

Use mongodb: mongo 192.168.60.102:27017

>use admin

>show dbs

>show collections

After restarting MongoDB, log in to the admin database and create a user with super privileges:

use admin

db.createUser({user:'root',pwd:'root',roles:[{ "role" : "root", "db" : "admin" }]});

Configuration (/etc/mongod.conf, old-style options):

fork=true # allow the program to run in the background

#auth=true # enable authentication

logpath=/data/db/mongodb/logs/mongodb.log
logappend=true # log write mode: true appends, the default overwrites
dbpath=/data/db/mongodb/data/ # data storage directory

pidfilepath=/data/db/mongodb/logs/mongodb.pid # process ID file; if not specified, no PID file is written at startup

port=27017

#bind_ip=192.168.2.73 # bind address; the default 127.0.0.1 allows only local connections

directoryperdb=true # store each database in its own folder under the dbpath directory; with this option MongoDB can be configured to store data on different disk devices to increase write throughput or disk capacity (default false; best to set it from the beginning)

nojournal=true # disable the journal; enabling the journal ensures write and data consistency and creates a journal directory under dbpath

maxConns=1024 # maximum number of connections; the default depends on system limits (ulimit and file descriptors). MongoDB does not limit its own connections, and a value greater than the system limit is ignored. Set it above the connection-pool size plus expected peak connections; note this value cannot be set greater than 20000.

Application Load Balancer

 

 

Create an Application Load Balancer
The Application Load Balancer is a flavor of the Elastic Load Balancing (ELB) service. It works more or less the same as a Classic Load Balancer; however, it has several additional features and some new concepts you need to understand, so this Lab covers those first.
AWS has great documentation to help you get started, so let’s start by referencing it:

The load balancer serves as the single point of contact for clients. You add one or more listeners to your load balancer.

A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more target groups, based on the rules that you define. Each rule specifies a target group, condition, and priority.
When the condition is met, the traffic is forwarded to the target group. You must define a default rule for each listener, and you can add rules that specify different target groups based on the content of the request (also known as content-based routing).
Each target group routes requests to one or more registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups. You can configure health checks on a per target group basis.
Health checks are performed on all targets registered to a target group that is specified in a listener rule for your load balancer.
The following diagram illustrates the basic components. Notice that each listener contains a default rule, and one listener contains another rule that routes requests to a different target group. One target is registered with two target groups.
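The components above (load balancer, listener with a default forward rule, target group) map directly onto the AWS CLI's elbv2 commands. This is a minimal sketch: the names, subnet/security-group/VPC IDs, and the ARN placeholders are all hypothetical, and the commands are only printed for review, not executed.

```shell
# Build the three elbv2 calls that correspond to the components described
# above. IDs below are placeholders, not real resources.
vpc_id="vpc-0123456789abcdef0"
create_alb="aws elbv2 create-load-balancer --name my-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-cccc3333"
create_tg="aws elbv2 create-target-group --name my-targets \
  --protocol HTTP --port 80 --vpc-id $vpc_id"
# The listener ties the ALB to the target group through its default rule:
create_listener="aws elbv2 create-listener --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>"
printf '%s\n' "$create_alb" "$create_tg" "$create_listener"
```

Additional content-based routing rules would be added afterwards with `aws elbv2 create-rule`, each with its own condition and priority.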

 

Recommended Network ACL Rules for Your VPC

Recommended Rules for Scenario 1

Scenario 1 is a single subnet with instances that can receive and send Internet traffic. For more information, see Scenario 1: VPC with a Single Public Subnet.

The following table shows the rules we recommended. They block all traffic except that which is explicitly required.

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv4 address.
110 0.0.0.0/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv4 address.
120 Public IPv4 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from your home network (over the Internet gateway).
130 Public IPv4 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from your home network (over the Internet gateway).
140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound IPv4 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 0.0.0.0/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound IPv4 traffic not already handled by a preceding rule (not modifiable).
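A row from the tables above translates one-for-one into an `aws ec2 create-network-acl-entry` call. As a sketch, here is inbound rule 100 (allow HTTP from any IPv4 address); the ACL ID is a hypothetical placeholder and the command is printed rather than executed.

```shell
# Inbound rule 100 from the table: allow TCP 80 from 0.0.0.0/0.
# acl-0123456789abcdef0 is a placeholder for your network ACL's ID.
nacl_rule="aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 100 --protocol tcp --port-range From=80,To=80 \
  --cidr-block 0.0.0.0/0 --rule-action allow"
printf '%s\n' "$nacl_rule"
```

Outbound rules use `--egress` instead of `--ingress`; the final `*` DENY rows are the built-in default rules and cannot be created or modified.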

Recommended Rules for IPv6

If you implemented scenario 1 with IPv6 support and created a VPC and subnet with associated IPv6 CIDR blocks, you must add separate rules to your network ACL to control inbound and outbound IPv6 traffic.

The following are the IPv6-specific rules for your network ACL (which are in addition to the rules listed above).

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv6 address.
160 ::/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv6 address.
170 IPv6 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from your home network (over the Internet gateway).
180 IPv6 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from your home network (over the Internet gateway).
190 ::/0 TCP 32768-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
130 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
140 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
150 ::/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for Scenario 2

Scenario 2 is a public subnet with instances that can receive and send Internet traffic, and a private subnet that can’t receive traffic directly from the Internet. However, it can initiate traffic to the Internet (and receive responses) through a NAT gateway or NAT instance in the public subnet. For more information, see Scenario 2: VPC with Public and Private Subnets (NAT).

For this scenario you have a network ACL for the public subnet, and a separate one for the private subnet. The following table shows the rules we recommend for each ACL. They block all traffic except that which is explicitly required. They mostly mimic the security group rules for the scenario.

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv4 address.
110 0.0.0.0/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv4 address.
120 Public IP address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from your home network (over the Internet gateway).
130 Public IP address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from your home network (over the Internet gateway).
140 0.0.0.0/0 TCP 1024-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound IPv4 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 10.0.1.0/24 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

150 10.0.1.0/24 TCP 22 ALLOW Allows outbound SSH access to instances in your private subnet (from an SSH bastion, if you have one).
* 0.0.0.0/0 all all DENY Denies all outbound IPv4 traffic not already handled by a preceding rule (not modifiable).

ACL Rules for the Private Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 10.0.0.0/24 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

120 10.0.0.0/24 TCP 22 ALLOW Allows inbound SSH traffic from an SSH bastion in the public subnet (if you have one).
130 10.0.0.0/24 TCP 3389 ALLOW Allows inbound RDP traffic from the Microsoft Terminal Services gateway in the public subnet.
140 0.0.0.0/0 TCP 1024-65535 ALLOW Allows inbound return traffic from the NAT device in the public subnet for requests originating in the private subnet.

For information about specifying the correct ephemeral ports, see the important note at the beginning of this topic.

* 0.0.0.0/0 all all DENY Denies all IPv4 inbound traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 10.0.0.0/24 TCP 32768-65535 ALLOW Allows outbound responses to the public subnet (for example, responses to web servers in the public subnet that are communicating with DB servers in the private subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound IPv4 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for IPv6

If you implemented scenario 2 with IPv6 support and created a VPC and subnets with associated IPv6 CIDR blocks, you must add separate rules to your network ACLs to control inbound and outbound IPv6 traffic.

The following are the IPv6-specific rules for your network ACLs (which are in addition to the rules listed above).

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv6 address.
160 ::/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv6 address.
170 IPv6 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic over IPv6 from your home network (over the Internet gateway).
180 IPv6 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic over IPv6 from your home network (over the Internet gateway).
190 ::/0 TCP 1024-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
160 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
170 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
180 2001:db8:1234:1a01::/64 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

200 ::/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

210 2001:db8:1234:1a01::/64 TCP 22 ALLOW Allows outbound SSH access to instances in your private subnet (from an SSH bastion, if you have one).
* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

ACL Rules for the Private Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 2001:db8:1234:1a00::/64 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

170 2001:db8:1234:1a00::/64 TCP 22 ALLOW Allows inbound SSH traffic from an SSH bastion in the public subnet (if applicable).
180 2001:db8:1234:1a00::/64 TCP 3389 ALLOW Allows inbound RDP traffic from a Microsoft Terminal Services gateway in the public subnet, if applicable.
190 ::/0 TCP 1024-65535 ALLOW Allows inbound return traffic from the egress-only Internet gateway for requests originating in the private subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
130 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
140 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
150 2001:db8:1234:1a00::/64 TCP 32768-65535 ALLOW Allows outbound responses to the public subnet (for example, responses to web servers in the public subnet that are communicating with DB servers in the private subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for Scenario 3

Scenario 3 is a public subnet with instances that can receive and send Internet traffic, and a VPN-only subnet with instances that can communicate only with your home network over the VPN connection. For more information, see Scenario 3: VPC with Public and Private Subnets and AWS Managed VPN Access.

For this scenario you have a network ACL for the public subnet, and a separate one for the VPN-only subnet. The following table shows the rules we recommend for each ACL. They block all traffic except that which is explicitly required.

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows inbound HTTP traffic to the web servers from any IPv4 address.
110 0.0.0.0/0 TCP 443 ALLOW Allows inbound HTTPS traffic to the web servers from any IPv4 address.
120 Public IPv4 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic to the web servers from your home network (over the Internet gateway).
130 Public IPv4 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic to the web servers from your home network (over the Internet gateway).
140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound IPv4 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 10.0.1.0/24 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the VPN-only subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows outbound IPv4 responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound traffic not already handled by a preceding rule (not modifiable).

ACL Settings for the VPN-Only Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 10.0.0.0/24 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the VPN-only subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

120 Private IPv4 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from the home network (over the virtual private gateway).
130 Private IPv4 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from the home network (over the virtual private gateway).
140 Private IP address range of your home network TCP 32768-65535 ALLOW Allows inbound return traffic from clients in the home network (over the virtual private gateway).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 Private IP address range of your home network All All ALLOW Allows all outbound traffic from the subnet to your home network (over the virtual private gateway). This rule also covers rule 120; however, you can make this rule more restrictive by using a specific protocol type and port number. If you make this rule more restrictive, then you must include rule 120 in your network ACL to ensure that outbound responses are not blocked.
110 10.0.0.0/24 TCP 32768-65535 ALLOW Allows outbound responses to the web servers in the public subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

120 Private IP address range of your home network TCP 32768-65535 ALLOW Allows outbound responses to clients in the home network (over the virtual private gateway).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for IPv6

If you implemented scenario 3 with IPv6 support and created a VPC and subnets with associated IPv6 CIDR blocks, you must add separate rules to your network ACLs to control inbound and outbound IPv6 traffic.

The following are the IPv6-specific rules for your network ACLs (which are in addition to the rules listed above).

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv6 address.
160 ::/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv6 address.
170 IPv6 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic over IPv6 from your home network (over the Internet gateway).
180 IPv6 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic over IPv6 from your home network (over the Internet gateway).
190 ::/0 TCP 1024-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
160 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
170 2001:db8:1234:1a01::/64 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

190 ::/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

ACL Rules for the VPN-only Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 2001:db8:1234:1a00::/64 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
130 2001:db8:1234:1a00::/64 TCP 32768-65535 ALLOW Allows outbound responses to the public subnet (for example, responses to web servers in the public subnet that are communicating with DB servers in the private subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for Scenario 4

Scenario 4 is a single subnet with instances that can communicate only with your home network over a VPN connection. For more information, see Scenario 4: VPC with a Private Subnet Only and AWS Managed VPN Access.

The following table shows the rules we recommended. They block all traffic except that which is explicitly required.

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 Private IP address range of your home network TCP 22 ALLOW Allows inbound SSH traffic to the subnet from your home network.
110 Private IP address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic to the subnet from your home network.
120 Private IP address range of your home network TCP 32768-65535 ALLOW Allows inbound return traffic from requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 Private IP address range of your home network All All ALLOW Allows all outbound traffic from the subnet to your home network. This rule also covers rule 120; however, you can make this rule more restrictive by using a specific protocol type and port number. If you make this rule more restrictive, then you must include rule 120 in your network ACL to ensure that outbound responses are not blocked.
120 Private IP address range of your home network TCP 32768-65535 ALLOW Allows outbound responses to clients in the home network.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for IPv6

If you implemented scenario 4 with IPv6 support and created a VPC and subnet with associated IPv6 CIDR blocks, you must add separate rules to your network ACL to control inbound and outbound IPv6 traffic.

In this scenario, the database servers cannot be reached over the VPN communication via IPv6, therefore no additional network ACL rules are required. The following are the default rules that deny IPv6 traffic to and from the subnet.

ACL Rules for the VPN-only Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

MySQL 5.7 on CentOS 7

  1. systemd is now used to look after MySQL instead of mysqld_safe (which is why you get the -bash: mysqld_safe: command not found error – it's not installed)
  2. The user table structure has changed.

So to reset the root password, you still start MySQL with the --skip-grant-tables option and update the user table, but how you do it has changed.

1. Stop mysql:
systemctl stop mysqld

2. Set the MySQL environment option
systemctl set-environment MYSQLD_OPTS="--skip-grant-tables"

3. Start mysql using the options you just set
systemctl start mysqld

4. Login as root
mysql -u root

5. Update the root user password with these mysql commands
mysql> UPDATE mysql.user SET authentication_string = PASSWORD('MyNewPassword')
    -> WHERE User = 'root' AND Host = 'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit

*** Edit ***
As mentioned by shokulei in the comments, for 5.7.6 and later, you should use
   mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';
Or you'll get a warning

6. Stop mysql
systemctl stop mysqld

7. Unset the MySQL environment option so it starts normally next time
systemctl unset-environment MYSQLD_OPTS

8. Start mysql normally:
systemctl start mysqld

9. Try to log in using your new password:
mysql -u root -p
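The steps above can be condensed into one script. This is a sketch only: it assumes a systemd-managed mysqld and MySQL 5.7.6+, the password is the article's placeholder, and the systemctl/mysql commands are left as comments so the snippet is safe to run anywhere (never run this against a production server as-is).

```shell
# Build the reset statement used in step 5 (5.7.6+ form).
new_pass='MyNewPassword'   # placeholder password from the article
reset_sql="ALTER USER 'root'@'localhost' IDENTIFIED BY '${new_pass}';"
echo "$reset_sql"
# On a real host the full sequence would be (as root):
#   systemctl stop mysqld
#   systemctl set-environment MYSQLD_OPTS="--skip-grant-tables"
#   systemctl start mysqld
#   mysql -u root -e "FLUSH PRIVILEGES; $reset_sql"
#   systemctl stop mysqld
#   systemctl unset-environment MYSQLD_OPTS
#   systemctl start mysqld
```

Note the FLUSH PRIVILEGES before ALTER USER: while the server runs with --skip-grant-tables, the grant tables must be reloaded before account-management statements are accepted.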

docker swarm

Starting from version 1.12.0, Docker Engine integrates Docker Swarm natively. Cluster operations can be controlled directly with the docker service command, which is very convenient and greatly simplifies the workflow. For the average developer, Docker Swarm's biggest advantage is its native load-balancing support, which makes it easy to scale services out. Thanks to the Raft consensus algorithm, the system is very robust and can tolerate up to (n-1)/2 failed nodes.
Build Swarm Cluster

Install the latest docker

curl -sSL https://get.docker.com/ | sh
On CentOS 7, open the ports Swarm needs:
firewall-cmd --permanent --zone=trusted --add-port=2377/tcp && \
firewall-cmd --permanent --zone=trusted --add-port=7946/tcp && \
firewall-cmd --permanent --zone=trusted --add-port=7946/udp && \
firewall-cmd --permanent --zone=trusted --add-port=4789/udp && \
firewall-cmd --reload 
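The four firewall-cmd calls above can also be expressed as a loop over the Swarm ports (2377/tcp cluster management, 7946/tcp+udp node communication, 4789/udp overlay networking). This sketch only builds and prints the command list; run the printed commands as root on each node.

```shell
# Generate the firewall-cmd invocations for the Swarm ports listed above.
swarm_ports="2377/tcp 7946/tcp 7946/udp 4789/udp"
cmds=""
for p in $swarm_ports; do
  cmds="${cmds}firewall-cmd --permanent --zone=trusted --add-port=$p
"
done
cmds="${cmds}firewall-cmd --reload"
printf '%s\n' "$cmds"
```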

Create a management node

$ docker swarm init --advertise-addr 192.168.99.100
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.99.100:2377

To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-61ztec5kyafptydic6jfc1i33t37flcl4nuipzcusor96k7kby-5vy9t8u35tuqm7vh67lrz9xp6 \
    192.168.99.100:2377

When the management node is created, we can view the node creation status through the docker info and docker node ls commands.

$ docker info

Containers: 2
Running: 0
Paused: 0
Stopped: 2
  ...snip...
Swarm: active
  NodeID: dxn1zf6l61qsb1josjja83ngz
  Is Manager: true
  Managers: 1
  Nodes: 1
  ...snip...
$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
dxn1zf6l61qsb1josjja83ngz *  manager1  Ready   Active        Leader
Add worker nodes

Following the prompt in the command output above, we now add two workers to the cluster. Remember to replace the token and IP address with your actual values when executing.

$ docker swarm join \
  --token  SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
  192.168.99.100:2377

This node joined a swarm as a worker.
$ docker swarm join \
  --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
  192.168.99.100:2377

This node joined a swarm as a worker.
# If needed, set each worker's hostname first: `hostnamectl set-hostname worker2`

Now we can see all the nodes in the cluster on the manager1 node

$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
3g1y59jwfg7cf99w4lt0f662    worker2   Ready   Active
j68exjopxe7wfl6yuxml7a7j    worker1   Ready   Active
dxn1zf6l61qsb1josjja83ngz *  manager1  Ready   Active        Leader

So far, the cluster environment has been set up.

Deployment Test Service

We deployed nginx as an example to test the Swarm cluster we built.

$ docker service create --replicas 3 --publish 8080:80 --name helloworld nginx

The --replicas parameter indicates how many nginx instances to deploy. Since there are three physical machines and replicas is set to 3, Swarm will deploy one instance on each machine. To rescale the number of instances, use the following command.

docker service scale helloworld=5
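The create/scale pair above can be parameterized so the same commands work for any service. This sketch uses the article's names (helloworld, nginx, port 8080) and only prints the commands so they can be reviewed before running on a manager node.

```shell
# Build the docker service commands from parameters instead of literals.
service=helloworld
replicas=3
port=8080
create_cmd="docker service create --replicas $replicas --publish $port:80 --name $service nginx"
scale_cmd="docker service scale $service=5"
printf '%s\n' "$create_cmd" "$scale_cmd"
```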

We can check the deployment of nginx through a series of commands, such as

$ docker service inspect --pretty helloworld
$ docker service ps helloworld

Deleting a service is also very simple and you can simply execute rm.

$ docker service rm helloworld

Let’s look at a docker-compose.yml file first. It doesn’t matter what this is doing. It’s just a format that is easy to explain:

version: '2'
services:
  web:
    image: dockercloud/hello-world
    ports:
      - 8080
    networks:
      - front-tier
      - back-tier

  redis:
    image: redis
    links:
      - web
    networks:
      - back-tier

  lb:
    image: dockercloud/haproxy
    ports:
      - 80:80
    links:
      - web
    networks:
      - front-tier
      - back-tier
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock 

networks:
  front-tier:
    driver: bridge
  back-tier:
    driver: bridge

It can be seen that a standard configuration file should contain three major parts: version, services, and networks. The most critical part is services and networks. Let’s first look at the rules for writing services.

  1. image

    services:
      web:
        image: hello-world

    The second-level tag under services (here, web) is a user-defined name: it is the service name.
    image specifies the image name or image ID for the service. If the image does not exist locally, Compose will try to pull it.
    For example, the following formats are all possible:

    image: redis
    image: ubuntu:14.04
    image: tutum/influxdb
    image: example-registry.com:4000/postgresql
    image: a4bc65fd
  2. build

The service can be based on a specified image or built from a Dockerfile. When you run up and a build task is needed, the build tag specifies the path to the folder containing the Dockerfile; Compose uses it to build the image automatically and then starts the service container from that image.

build: /path/to/build/dir

It can also be a relative path, and the Dockerfile can be read as long as the context is determined.

build: ./dir

You can also set the build context root and explicitly point to the Dockerfile:

build:
  context: ../
  dockerfile: path/of/Dockerfile

Note that build points to a directory; to name the Dockerfile explicitly, use the dockerfile child tag under build, as in the example above.
If you specify both the image and build tags, Compose will build the image and name it with the value of image.

build: ./dir
image: webapp:tag
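
As a quick illustration of combining the two tags (the service name app and tag webapp:tag are arbitrary here, not from the text), a fragment might look like:

```yaml
version: '2'
services:
  app:
    build: ./dir          # folder containing the Dockerfile
    image: webapp:tag     # Compose tags the built image with this name
```

After `docker-compose build`, the resulting image also appears under `docker images` as webapp:tag.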

Since build tasks can be defined in docker-compose.yml, there is also an args tag. Like the ARG directive in a Dockerfile, it specifies environment variables that are available during the build and discarded once the build succeeds. docker-compose.yml supports this notation:

build:
  context: .
  args:
    buildno: 1
    password: secret

The following form is also supported and is generally easier to read:

build:
  context: .
  args:
    - buildno=1
    - password=secret

Unlike ENV, ARG allows null values. E.g:

args:
  - buildno
  - password

This way the build process can assign values to them.

Note: YAML boolean values (true, false, yes, no, on, off) must be quoted (single or double quotes), otherwise YAML parses them as booleans rather than strings.
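
A small sketch of the quoting rule (the variable names here are illustrative):

```yaml
environment:
  DEBUG: 'true'     # quoted: stays the string "true"
  CACHE_ON: 'off'   # quoted: stays the string "off"
  # CACHE_ON: off   # unquoted: YAML would parse this as the boolean false
```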

  3. command

Use command to override the default command executed after the container starts.

command: bundle exec thin -p 3000

It can also be written in a format similar to Dockerfile:

command: [bundle, exec, thin, -p, 3000]

  4. container_name

As mentioned earlier, Compose's container name format is: <project name>_<service name>_<serial number>
Although you can customize the project name and service name, if you want full control over the container's name, you can specify it with this tag:

container_name: app

The name of this container is specified as app.

  5. depends_on

Compose's biggest convenience is starting a whole project with a single command, but a project's containers usually need to start in a particular order; starting them blindly from top to bottom can fail because of inter-container dependencies.
For example, if the application container starts while the database container is not yet up, the application will exit because it cannot find the database. The depends_on tag avoids this by declaring a container's dependencies, and hence its startup order.
For example, the following file starts the redis and db services first and the web service last:

version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

Note that when launching a web service using the docker-compose up web method by default, both the redis and db services are started because the dependencies are defined in the configuration file.

  6. dns

Same as the --dns parameter; the format is as follows:

dns: 8.8.8.8

It can also be a list:

dns:
  - 8.8.8.8
  - 9.9.9.9

In addition, the configuration of dns_search is similar:

dns_search: example.com
dns_search:
  - dc1.example.com
  - dc2.example.com
  7. tmpfs

Mounts a temporary directory inside the container; same effect as the corresponding docker run parameter:

tmpfs: /run
tmpfs:
  - /run
  - /tmp
  8. entrypoint

The Dockerfile has an ENTRYPOINT directive that specifies the entry point; Chapter 4 covers how it differs from CMD.
The entry point can also be defined in docker-compose.yml, overriding the definition in the Dockerfile:

entrypoint: /code/entrypoint.sh

The format is similar to Docker's, but it can also be written like this:

entrypoint:
    - php
    - -d
    - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
    - -d
    - memory_limit=-1
    - vendor/bin/phpunit

  9. env_file

Remember the .env file mentioned earlier: that file sets Compose variables, and in docker-compose.yml you can likewise point to a file that stores variables.
If the configuration file is specified with docker-compose -f FILE, the env_file path is resolved relative to that configuration file's path.

If a variable name conflicts with one set by the environment tag, the latter prevails. The format is as follows:

env_file: .env

Multiple files can also be set, with paths relative to docker-compose.yml:

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

Note that these variables are for the Compose run on the host. If the configuration file contains a build operation, these variables do not enter the build process; to use variables during the build, prefer the args tag.

  10. environment

Unlike the env_file tag above, and somewhat like args, this tag sets variables that are saved into the image, meaning the started container will also contain them: this is the biggest difference from args.
Variables declared with args are used only during the build, while environment, like the ENV instruction in a Dockerfile, keeps variables in the image and container, similar to the effect of docker run -e.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
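
As a sketch of that precedence (the file name ./web.env and the variable contents are assumptions for illustration):

```yaml
services:
  web:
    image: nginx
    env_file:
      - ./web.env            # assume it contains RACK_ENV=production
    environment:
      - RACK_ENV=development # this value wins over the env_file one
```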
  11. expose

This tag is the same as the EXPOSE directive in the Dockerfile: it declares exposed ports, but only as documentation. Actual port mapping in docker-compose.yml still uses the ports tag.

expose:
 - "3000"
 - "8000"
  12. external_links

In everyday Docker use we start many containers with docker run alone. To let Compose connect to containers not defined in docker-compose.yml, there is a special tag, external_links, which connects containers inside the Compose project to containers outside the project configuration (provided at least one of those external containers is attached to the same network as a service in the project).
The format is as follows:

external_links:
 - redis_1
 - project_db_1:mysql
 - project_db_1:postgresql
  13. extra_hosts

Adds hostname records to the container's /etc/hosts file, similar to the Docker client's --add-host:

extra_hosts:
 - "somehost:162.242.195.82"
 - "otherhost:50.31.209.229"

View the internal hosts of the container after startup:

162.242.195.82  somehost
50.31.209.229   otherhost
  14. labels

Adds metadata to the container, with the same meaning as the Dockerfile's LABEL directive:

labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""
labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"
  15. links

Remember depends_on above: that tag solves the startup-order problem, while this tag solves the container connection problem. It works like the Docker client's --link and connects to containers in other services.
The format is as follows:

links:
 - db
 - db:database
 - redis

The alias used will be automatically created in /etc/hosts in the service container. E.g:

172.12.2.186  db
172.12.2.186  database
172.12.2.187  redis

The corresponding environment variable will also be created.

  16. logging

This tag is used to configure the log service. The format is as follows:

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

The default driver is json-file. Only json-file and journald make logs visible through docker-compose logs; other drivers have their own ways of viewing logs, which Compose does not support. Driver-specific settings go under options.
For more information on this you can read the official documentation:
https://docs.docker.com/engine/admin/logging/overview/

  17. pid

pid: "host"

Sets the PID mode to host mode, sharing the process namespace with the host system. A container using this tag can see and manipulate processes in the namespaces of other containers and of the host.

  18. ports

The port-mapping tag.
Use the HOST:CONTAINER format, or specify only the container port and the host will map a random port.

ports:
 - "3000"
 - "8000:8000"
 - "49100:22"
 - "127.0.0.1:8001:8001"

Note: When mapping ports in HOST:CONTAINER format, port pairs where both numbers are below 60 can give wrong results, because YAML parses numbers in the xx:yy format as base-60 (sexagesimal) values. Therefore, the string format is recommended.
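
A short sketch of the safe quoting (port 56 is chosen only to illustrate the pitfall):

```yaml
ports:
  - "56:56"   # quoted string: mapped as intended
  # - 56:56   # unquoted, YAML 1.1 would read this as the base-60 number 3416
```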

  19. security_opt

Overrides the default labeling scheme for each container. Simply put, it manages the security labels of all service containers. For example, set the user label for all services to USER:

security_opt:
  - label:user:USER
  - label:role:ROLE
  20. stop_signal

Sets an alternative signal for stopping the container. SIGTERM is used by default; use the stop_signal tag to choose a different signal:

stop_signal: SIGUSR1

  21. volumes

Mounts a directory or an existing data-volume container, either with the format [HOST:CONTAINER] or with [HOST:CONTAINER:ro], in which case the volume is read-only inside the container; this effectively protects the host's file system.
Compose volume paths can be relative, using . or .. to specify the relative directory.
A data volume can be declared in any of the following forms:

volumes:
  # Just specify a path; Docker automatically creates a data volume
  # (the path is inside the container).
  - /var/lib/mysql

  # Mount a data volume using an absolute path.
  - /opt/data:/var/lib/mysql

  # A path relative to the Compose configuration file, mounted into the container.
  - ./cache:/tmp/cache

  # A path relative to the user's home directory (~/ is /home/<user>/ or /root/).
  - ~/configs:/etc/configs/:ro

  # An existing named data volume.
  - datavolume:/var/lib/mysql

If you do not use the host’s path, you can specify a volume_driver.

volume_driver: mydriver
  22. volumes_from

Mounts data volumes from another container or service. Optional suffixes are :ro and :rw; the former makes the volume read-only for the container, the latter readable and writable. Read-write is the default.

volumes_from:
  - service_name
  - service_name:ro
  - container:container_name
  - container:container_name:rw
  23. cap_add, cap_drop

Adds or drops Linux kernel capabilities for the container. Details were explained in the earlier section on containers and are not repeated here.

cap_add:
  - ALL

cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
  24. cgroup_parent

Specifies the parent cgroup of a container.

cgroup_parent: m-executor-abcd

  25. devices

List of device mappings. Similar to the --device parameter of the Docker client.

devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
  26. extends

This tag extends another service. The extended content can come from the current file or from another file; for the same service, later values override the original configuration.

extends:
  file: common.yml
  service: webapp

You can use this tag anywhere, as long as it contains both file and service values. The value of file can be a relative or an absolute path; if you do not specify file, Compose reads the current YML file.
More details of the operation are described later in subsection 12.3.4.

  27. network_mode

The network mode is similar to the Docker client's --net parameter, except that it additionally supports the service:[service name] format.
E.g:

network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"

You can specify the network that uses the service or container.

  28. networks

Join the specified network in the following format:

services:
  some-service:
    networks:
     - some-network
     - other-network

This tag also has a special child tag, aliases, which sets alternative hostnames for the service, for example:

services:
  some-service:
    networks:
      some-network:
        aliases:
         - alias1
         - alias3
      other-network:
        aliases:
         - alias2

The same service can have different aliases on different networks.

  29. Other tags

The remaining tags are cpu_shares, cpu_quota, cpuset, domainname, hostname, ipc, mac_address, mem_limit, memswap_limit, privileged, read_only, restart, shm_size, stdin_open, tty, user, and working_dir.
These are all single-valued tags whose effects mirror the corresponding docker run options.

cpu_shares: 73
cpu_quota: 50000
cpuset: 0,1

user: postgresql
working_dir: /code

domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43

mem_limit: 1000000000
memswap_limit: 2000000000
privileged: true

restart: always

read_only: true
shm_size: 64M
stdin_open: true
tty: true

Docker Import and Export Images

Export Image
Docker allows you to export an image to a local file with the docker save command. First, look at the command's usage:

$ sudo docker save --help

As you can see, docker save is very simple to use: the -o parameter specifies the file the image is written to.
Earlier we downloaded some images; here we write the ubuntu:14.04 image to the file ubuntu1404.tar:

$ sudo docker save -o ubuntu1404.tar ubuntu:14.04

After a successful export, the file can be found locally.

Import Image
Use the docker load command to import an exported file back into the local image library.

For example, re-import the exported image file ubuntu1404.tar:

$ sudo docker load -i ubuntu1404.tar

Remove Image
The removal command is docker rmi.

docker rmi can remove one or more images at a time; specify either the image ID or the image name. Here we use the centos image that was just imported:

$ sudo docker rmi centos:centos6

You can see that the image of centos under the local repository has been deleted.
Before removing an image, make sure no containers are based on it (including stopped containers); otherwise the image cannot be deleted, and you must first remove all of its containers with docker rm.
For example, the image ubuntu:14.04 cannot be removed directly because containers depend on it.

That concludes the section on images for now.

Docker on CentOS 6.9

Docker requires Linux kernel version 3.8 or higher, so check the host's kernel version before installing. With a kernel below 3.8, Docker may install successfully, but containers will exit automatically after starting.
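
The 3.8 requirement can be checked up front with a small shell sketch (the helper name kernel_ge is ours, not part of Docker; `sort -V` needs GNU coreutils):

```shell
#!/bin/sh
# Return success (0) if dotted version $1 >= $2, using version sort.
kernel_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

current=$(uname -r | cut -d- -f1)   # e.g. "2.6.32" or "3.10.28"

if kernel_ge "$current" "3.8"; then
    echo "kernel $current is new enough for Docker"
else
    echo "kernel $current is below 3.8; upgrade first"
fi
```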

1. Download and install CentOS 6.9

The latest release in the CentOS 6 series is 6.9. Because Docker runs only on 64-bit systems, download the 64-bit CentOS 6.9 image from the official CentOS website.

2. Upgrade the CentOS Linux kernel

The default kernel of CentOS 6.9 is 2.6, while CentOS 7 ships with 3.10, so on CentOS 6.9 the kernel must be upgraded.

1) Visit the ELRepo kernel page: http://elrepo.org/tiki/tiki-index.php

2) Follow the instructions there to update the kernel, executing the following commands as root.

(1) import public key

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

(2) Install ELRepo

For CentOS 6:

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

For CentOS 7:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

(3) Install the kernel

Long-term supported version, stable (recommended)

yum --enablerepo=elrepo-kernel install -y kernel-lt

Mainline version (mainline)

yum --enablerepo=elrepo-kernel install -y kernel-ml

(4) Modify the GRUB boot order so the newly installed kernel starts by default

Edit grub.conf file

vi /etc/grub.conf

Set default to the position of the newly installed kernel entry (counting title entries from 0):

# grub.conf generated by anaconda
#
default=0    
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (3.10.28-1.el6.elrepo.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-3.10.28-1.el6.elrepo.x86_64 ro root=UUID=0a05411f-16f2-4d69-beb0-2db4cefd3613 rd_NO_LUKS  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD crashkernel=auto.UTF-8 rd_NO_LVM rd_NO_DM rhgb quiet
        initrd /boot/initramfs-3.10.28-1.el6.elrepo.x86_64.img
title CentOS (2.6.32-431.3.1.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-431.3.1.el6.x86_64 ro root=UUID=0a05411f-16f2-4d69-beb0-2db4cefd3613 rd_NO_LUKS  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD crashkernel=auto.UTF-8 rd_NO_LVM rd_NO_DM rhgb quiet
        initrd /boot/initramfs-2.6.32-431.3.1.el6.x86_64.img

(5) Reboot to complete the kernel upgrade

reboot

3. Install Docker

(1) Disable SELinux

Because SELinux conflicts with LXC, disable it:

vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

(2) Configure the Fedora EPEL source

Installing Docker differs slightly between CentOS 6.x and 7.x. On CentOS 6.x the Docker package is named docker-io and comes from the Fedora EPEL repository, which maintains a large number of packages not included in the distribution, so EPEL must be installed first. On CentOS 7.x, Docker is included directly in the Extras repository of the official image source (enabled via enable=1 in the [extras] section of CentOS-Base.repo):

yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

(3) Install Docker

Install the docker-io package:

yum install -y docker-io

(4) start docker

service docker start

(5) Check Docker Version

docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d/1.7.1
OS/Arch (server): linux/amd64

(6) Run docker hello-world

Pull hello-world image

docker pull hello-world

Run hello-world

docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

For more examples and ideas, visit:
 http://docs.docker.com/userguide/

The above output indicates that Docker is fully installed.

4. Uninstall Docker

Uninstalling Docker is simple. First, check the installed Docker package:

yum list installed | grep docker

Then delete the installation package

yum -y remove docker-io.x86_64

Finally, delete leftover images and containers:

rm -rf /var/lib/docker