
Oracle backup script

#!/bin/bash
# oracle_backup.sh, version 1.0
# Install: chmod 700 /usr/local/scripts/oracle_backup.sh
# Cron:    00 00 * * * /usr/local/scripts/oracle_backup.sh
# The Oracle environment variables below must also be set in the oracle user's .bash_profile.
# A DBA must first create the dpdata1 directory object for the backup path and grant
# read/write on it to the exported users (see the expdp/impdp post later on this page).

export ORACLE_BASE=/usr/local/u01/oracle
export ORACLE_HOME=/usr/local/u01/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=oracle11
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
datetime=$(date +"%Y%m%d")
dpdata1_dir="/data/backup/oracle_backup"
oracle_u01="u01"
oracle_u02="u02"
oracle_password1="u01_password"
oracle_password2="u02_password"

expdp ${oracle_u01}/${oracle_password1} directory=dpdata1 dumpfile=${oracle_u01}_${datetime} logfile=${oracle_u01}_${datetime}.log

if [ $? -ne 0 ]; then
    echo "$(date +"%Y-%m-%d_%H:%M:%S") oracle ${oracle_u01} backup failed!" > ${dpdata1_dir}/${datetime}_err.log
fi

expdp ${oracle_u02}/${oracle_password2} directory=dpdata1 dumpfile=${oracle_u02}_${datetime} logfile=${oracle_u02}_${datetime}.log

if [ $? -ne 0 ]; then
    echo "$(date +"%Y-%m-%d_%H:%M:%S") oracle ${oracle_u02} backup failed!" >> ${dpdata1_dir}/${datetime}_err.log
fi

/usr/bin/bzip2 -z ${dpdata1_dir}/*${datetime}*
find ${dpdata1_dir} -type f -ctime +30 -name "*.bz2" -exec rm -vf {} \;
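To restore from one of these backups, decompress the dump and import it with impdp. A minimal sketch, assuming the dpdata1 directory object and the u01 account from the script above (the date is illustrative; Data Pump appends .dmp when the export gave no extension):

datetime="20240101"                        # date of the backup to restore (assumption)
dpdata1_dir="/data/backup/oracle_backup"
bunzip2 ${dpdata1_dir}/u01_${datetime}.dmp.bz2
impdp u01/u01_password directory=dpdata1 dumpfile=u01_${datetime}.dmp logfile=u01_restore.log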

CentOS 7.2 OpenStack Installation and Deployment Tutorial

Environment preparation

This post shares a CentOS 7.2 OpenStack installation and deployment walkthrough; hopefully it helps.

1. System environment

# uname -r

3.10.0-327.el7.x86_64

# cat /etc/redhat-release

CentOS Linux release 7.2.1511 (Core)

2. Server deployment

IP              Hostname    Role          Configuration
192.168.56.108  controller  Control node  Memory: 4G; CPU: 2 cores; Disk: 50G
192.168.56.109  compute     Compute node  Memory: 2G; CPU: 2 cores; Disk: 50G

3. Prepare the basic environment

3.1 Configure hosts

# cat /etc/hosts

192.168.56.108 controller

192.168.56.109 compute

3.2 Configure time synchronization

[root@controller ~]# yum install -y ntp

[root@controller ~]# vim /etc/ntp.conf

15 restrict -6 ::1

16 restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

27 restrict 0.centos.pool.ntp.org nomodify notrap noquery

28 restrict 1.centos.pool.ntp.org nomodify notrap noquery

29 restrict 2.centos.pool.ntp.org nomodify notrap noquery

30 restrict 3.centos.pool.ntp.org nomodify notrap noquery

31 server 127.127.1.0

32 fudge 127.127.1.0 stratum 10

[root@controller ~]# systemctl enable ntpd

[root@controller ~]# systemctl start ntpd

Configure a scheduled task on the compute nodes to sync against the controller:

[root@compute ~]# crontab -l
*/5 * * * * /usr/sbin/ntpdate 192.168.56.108 > /dev/null 2>&1
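To verify synchronization works before relying on the cron job, a quick hedged check:

[root@controller ~]# ntpq -p                       # list NTP peers on the controller
[root@compute ~]# ntpdate -q 192.168.56.108        # query-only test from a compute node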

3.3 Turn off the firewall

# systemctl stop firewalld

3.4 Disable selinux

Set SELINUX=disabled in /etc/selinux/config (a reboot is required for the change to take effect).

3.5 Install the basic packages

[root@controller ~]# yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-7.noarch.rpm

[root@controller ~]# yum install -y centos-release-openstack-liberty

[root@controller ~]# yum install -y python-openstackclient

3.6 Install MySQL (MariaDB)

[root@controller ~]# yum install -y mariadb mariadb-server mysql-python

[root@controller ~]# vim /etc/my.cnf

Add the following lines in the [mysqld] section:

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

[root@controller ~]# systemctl enable mariadb.service

[root@controller ~]# systemctl start mariadb.service

[root@controller ~]# mysql_secure_installation

3.7 Install RabbitMQ

[root@controller ~]# yum install -y rabbitmq-server

[root@controller ~]# systemctl enable rabbitmq-server.service

[root@controller ~]# systemctl start rabbitmq-server.service

[root@controller ~]# rabbitmqctl add_user openstack openstack

Creating user "openstack" ...

... done.

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

... done.

[root@controller ~]# rabbitmqctl set_user_tags openstack administrator

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management

[root@controller ~]# systemctl restart rabbitmq-server.service

In a browser, open http://192.168.56.108:15672; the default account and password are both guest. After logging in, the openstack user created above appears with the administrator tag.

1. Create the Keystone database

[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'PWS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'PWS';

Generate a random value to use as the initial administration token:

[root@controller ~]# openssl rand -hex 10

2. Install the Keystone packages

yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached

3. Start memcached

# systemctl enable memcached.service
# systemctl start memcached.service

4. Configure Keystone

Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section, define the value of the initial administration token:
[DEFAULT]

admin_token = ADMIN_TOKEN
Replace ADMIN_TOKEN with the random value that you generated in a previous step.
In the [database] section, configure database access:
[database]

connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
Replace KEYSTONE_DBPASS with the password you chose for the database.
In the [memcache] section, configure the Memcache service:
[memcache]

servers = localhost:11211
In the [token] section, configure the UUID token provider and Memcached driver:
[token]

provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token
In the [revoke] section, configure the SQL revocation driver:
[revoke]

driver = keystone.contrib.revoke.backends.sql.Revoke
(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
[DEFAULT]

verbose = True
Populate the Identity service database:
# su -s /bin/sh -c “keystone-manage db_sync” keystone
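To confirm the sync populated the database, a quick hedged check (PWS is the password chosen above):

# mysql -u keystone -pPWS -e 'USE keystone; SHOW TABLES;' | head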
5. Configure the Apache HTTP server

Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:
ServerName controller
Create the /etc/httpd/conf.d/wsgi-keystone.conf file with the following content:
Listen 5000
Listen 35357

<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /var/www/cgi-bin/keystone/main
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LogLevel info
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>

<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LogLevel info
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>
Create the directory structure for the WSGI components:
# mkdir -p /var/www/cgi-bin/keystone
Copy the WSGI components from the upstream repository into this directory:
# curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin
Adjust ownership and permissions on this directory and the files in it:
# chown -R keystone:keystone /var/www/cgi-bin/keystone
# chmod 755 /var/www/cgi-bin/keystone/*
Restart the Apache HTTP server:
# systemctl enable httpd.service
# systemctl start httpd.service
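With Apache running, the service can be exercised using the admin token before any users exist. A hedged sketch (Kilo-era CLI syntax, matching the stable/kilo WSGI scripts above; ADMIN_TOKEN is the value written to keystone.conf):

export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v2.0
openstack service create --name keystone --description "OpenStack Identity" identity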

NIC Teaming binding under RedHat 7.3

For servers, binding multiple network cards together (link aggregation) is a common requirement. Linux has supported bonding since early kernel versions: it joins multiple Ethernet interfaces into a single logical uplink, providing greater network bandwidth on the one hand and better reliability through port redundancy on the other.

Bonding has seven modes (bond0 through bond6), each providing a different transmit/receive policy and port-redundancy mechanism; they also place different requirements on the uplink switch configuration and the NICs, so the mode must be chosen to fit the application scenario.

There are three commonly used modes:
mode=0: balanced load mode with automatic backup, but it requires switch support and configuration.
mode=1: automatic backup mode; if one link breaks, another link takes over automatically.
mode=6: balanced load mode with automatic backup, requiring no switch support or configuration.

A bonding interface has two failover-detection mechanisms. One is the MII monitor, which only watches the port's link state; the other is the ARP monitor, which tests network connectivity by sending ARP requests. The MII monitor is simpler but can detect link state inaccurately: for an optical NIC, if only one of the transceiver's two fibers is cut, the side whose receive light is still normal cannot detect the link-state change. The ARP monitor sends ARP queries and checks for replies to test IP-level connectivity, and multiple targets can be configured for testing, so its results are more accurate than the MII monitor's.

RHEL 6 / CentOS 6 used bonding for dual-NIC binding, whereas RHEL 7.3 uses teaming. In fact, on RHEL 7.3 dual-NIC binding can be done with either teaming or bonding; teaming is recommended because it is easier to inspect and monitor.

Teaming is implemented in three parts: the kernel team driver, the libteam library that serves as the communication interface, and the userspace daemon teamd. Teaming also supports several modes of operation; compared with bonding it lacks balance-xor and balance-alb, though balance-xor can be replaced by lacp, and balance-alb support is reportedly coming. The available runners are:

broadcast (data is transmitted over all ports)

roundrobin (data is transmitted over all ports in turn)

activebackup (one port or link is used while others are kept as a backup)

loadbalance (with active Tx load balancing and BPF-based Tx port selectors)

lacp (implements the 802.3ad Link Aggregation Control Protocol)

In essence, the purpose of teaming is to move the NIC-binding logic out of the kernel and into userspace, keeping the kernel simple so it only does what a kernel should do.

This article describes the two most common dual-NIC binding modes on RHEL 7.3:

(1) activebackup (active/standby mode): one NIC is active and the other stands by; all traffic runs over the main link, and when the active NIC goes down the backup NIC is brought into service.
(2) roundrobin (polling mode): all links share the load; this increases bandwidth while still providing fault tolerance.

The following walks through an example configuration of activebackup (active/standby) mode:

1. Log in to the system and inspect the server's NIC configuration.

There are four ports in total across the ens3 and ens8 cards. We configure dual-NIC bindings for two network segments: the service network binds the ens3f0 and ens8f0 ports, and the private network binds the ens3f1 and ens8f1 ports.
2. Use the nmcli command to bind the NICs; the specific commands are as follows.

# Create the service-network team0 with the activebackup runner
nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
# Set the IP address, prefix, and gateway
nmcli con mod team0 ipv4.addresses 11.11.205.145/28 ipv4.gateway 11.11.205.158 ipv4.method manual connection.autoconnect yes
# Add port ens3f0 to team0
nmcli con add type team-slave con-name team0-port1 ifname ens3f0 master team0
# Add port ens8f0 to team0
nmcli con add type team-slave con-name team0-port2 ifname ens8f0 master team0
# Reload the connection configuration
nmcli con reload
# Bring team0 up
nmcli con up team0

 

3. Check the status; use the teamdctl command to verify.
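A hedged example of what the verification looks like (output abridged and illustrative; port names follow the configuration above):

teamdctl team0 state
setup:
  runner: activebackup
ports:
  ens3f0
    link watches:
      link summary: up
  ens8f0
    link watches:
      link summary: up
runner:
  active port: ens3f0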

4. The check shows the state is normal.

5. If you want to switch to roundrobin mode, edit the team0 (and team1) connection profiles and change the runner name to roundrobin, leaving the rest of the configuration unchanged.

Tip: when binding NICs, if a physical NIC cannot be added to team0, check whether that physical NIC is in the up state.

Kubernetes 1.4 Cluster Setup

Kubernetes 1.4 has been released for some time now. Version 1.4 adds many new features; one of the more useful is quick cluster creation: in the basic case only two commands are needed to build a cluster successfully. However, for reasons known to all (the GFW), the kubeadm command does not work out of the box here; the workaround is recorded below.
First, prepare the environment
The basic environment is three virtual machines, with the following details:

192.168.1.107 master

192.168.1.126 node1

192.168.1.217 node2

1.1 Install Docker
Docker 1.12.1 is used here, installed directly per the official tutorial. If download speed is slow, you can switch to a domestic mirror such as the Tsinghua Docker source; Google the specifics.

tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

yum install docker-engine -y

systemctl enable docker
systemctl start docker
systemctl status docker

1.2 Modify the hostname
The three virtual machines were cloned from one base VM, so to avoid confusing the kubectl get nodes output their hostnames must be changed. The example below is for the master node; amend correspondingly on the other nodes.
echo "master" > /etc/hostname

# also map the hostname in /etc/hosts (alongside the localhost entries)

vim /etc/hosts

127.0.0.1   master
::1         master

 

192.168.1.107 master

192.168.1.126  node1

192.168.1.217 node2

Second, build the Kubernetes cluster
2.1 Install the basic components
According to the official documentation, four rpm packages need to be installed: kubelet, kubeadm, kubectl, and kubernetes-cni. Because of the GFW they cannot be downloaded from the Google repository, so below are the ones I downloaded locally through a ladder (proxy); the yumdownloader tool can help fetch rpms, Google the specifics. kubelet also depends on socat, so install that first:

yum install -y socat
rpms=(5ce829590fb4d5c860b80e73d4483b8545496a13f68ff3033ba76fa72632a3b6-kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm \
     bbad6f8b76467d0a5c40fe0f5a1d92500baef49dedff2944e317936b110524eb-kubeadm-1.5.0-0.alpha.0.1534.gcf7301f.x86_64.rpm \
     c37966352c9d394bf2cc1f755938dfb679aa45ac866d3eb1775d9c9b87d5e177-kubelet-1.4.0-0.x86_64.rpm \
     fac5b4cd036d76764306bd1df7258394b200be4c11f4e3fdd100bfb25a403ed4-kubectl-1.4.0-0.x86_64.rpm)
for rpmName in ${rpms[@]}; do
  wget http://upyun.mritd.me/kubernetes/$rpmName
done
rpm -ivh *.rpm

 

systemctl enable docker
systemctl enable kubelet
systemctl start docker
systemctl start kubelet

 

 

At this point kubelet actually fails to start because its configuration is missing; it will restart automatically once deployment succeeds.

Before officially creating the cluster with kubeadm, selinux also needs to be turned off (this issue is resolved in the next version):

# disable selinux for now
setenforce 0

2.3 Import the relevant images
kubeadm pulls a set of images; because of the GFW the downloads will eventually fail, so the best approach is to pull the images through a ladder (proxy) and then load them onto each node. The images that need to be loaded are listed below:

 

gcr.io/google_containers/kube-proxy-amd64 v1.4.0
gcr.io/google_containers/kube-discovery-amd64 1.0
gcr.io/google_containers/kubedns-amd64 1.7
gcr.io/google_containers/kube-scheduler-amd64 v1.4.0
gcr.io/google_containers/kube-controller-manager-amd64 v1.4.0
gcr.io/google_containers/kube-apiserver-amd64 v1.4.0
gcr.io/google_containers/etcd-amd64 2.2.5
gcr.io/google_containers/kube-dnsmasq-amd64 1.3
gcr.io/google_containers/exechealthz-amd64 1.1
gcr.io/google_containers/pause-amd64 3.0

 

images=(kube-proxy-amd64:v1.4.0 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.0 kube-controller-manager-amd64:v1.4.0 kube-apiserver-amd64:v1.4.0 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.4.0)
for imageName in ${images[@]} ; do
  docker pull mritd/$imageName
  docker tag mritd/$imageName gcr.io/google_containers/$imageName
  docker rmi mritd/$imageName
done
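An alternative hedged sketch for "pull through a ladder, then load": save each image to a tarball on a machine that can reach gcr.io, copy it over, and load it on the nodes (one image shown; repeat for the rest):

# on a machine with proxy access to gcr.io:
docker save gcr.io/google_containers/kube-apiserver-amd64:v1.4.0 -o kube-apiserver.tar
scp kube-apiserver.tar root@192.168.1.107:/tmp/
# on each cluster node:
docker load -i /tmp/kube-apiserver.tar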

 

Initialize the master:

kubeadm init --api-advertise-addresses=192.168.1.107
<master/tokens> generated token: "42354d.e1fb733ed0c9a932"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 18.921781 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 2.014976 seconds
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 3.505092 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:

kubeadm join --token 42354d.e1fb733ed0c9a932 192.168.1.107

 

Run the join command on each node:

kubeadm join --token 42354d.e1fb733ed0c9a932 192.168.1.107

Back on the master, check the nodes with kubectl get nodes. By default the master does not schedule pods; if you want the master to run pods as well, remove its taint with kubectl taint nodes --all dedicated-.

kubectl get nodes
NAME      STATUS    AGE
master    Ready     1m
node1     Ready     1m
node2     Ready     1m


Finally, deploy the pod network add-on (Weave):

kubectl apply -f https://git.io/weave-kube
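Once the add-on is applied, the kube-system pods should settle into Running; a quick hedged check:

kubectl get pods --all-namespaces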

Oracle data pump expdp / impdp

Oracle 11g introduced deferred segment creation: by default an empty table is not allocated a segment in its tablespace, which reduces tablespace consumption, but it also means that exporting a user's data with Oracle's exp skips empty tables, leaving the exported data incomplete. There are ways around this with exp, as mentioned before.

That workaround makes exp stop ignoring empty tables, but a more efficient option is expdp/impdp, the Oracle Data Pump, for importing and exporting Oracle data. Compared with exp/imp, expdp/impdp is the more efficient import/export tool; the differences between the two are touched on briefly here.

Unlike exp, before exporting with expdp you must first log in to Oracle as a DBA user, create a directory object for the backup path, and grant read/write on it; the specific steps are as follows:

SQL> create or replace directory dpdata1 as '/data/backup/oracle_backup';
 
Directory created.
 
SQL> grant read,write on directory dpdata1 to u01;
 
Grant succeeded.
 
SQL> grant read,write on directory dpdata1 to u02;
 
Grant succeeded.
 
SQL> select * from dba_directories;

Here dpdata1 is the directory object pointing to the export path, which is then granted to the users being exported. Note that the path must be writable by the oracle OS user. Once the backup directory is in place, expdp can export the data; each expdp run generates a log file, and since several users are exported here, I give each log file its own name:

[oracle@localhost ~]$ expdp u01/password_u01 directory=dpdata1 dumpfile=u01.dmp logfile=u01.log
[oracle@localhost ~]$ expdp u02/password_u02 directory=dpdata1 dumpfile=u02.dmp logfile=u02.log

After the export completes, the backup files and logs are generated in the specified backup directory.

Importing uses impdp, whose usage is almost identical to expdp, except the backup directory must still be specified; the operation is as follows:

[oracle@localhost oracle_backup]$ impdp u01/password_u01 directory=dpdata1 dumpfile=u01.dmp FULL=y
[oracle@localhost oracle_backup]$ impdp u02/password_u02 directory=dpdata1 dumpfile=u02.dmp FULL=y

That completes the import. One thing to note: when importing with impdp, if the corresponding user does not exist in Oracle it is created automatically during the import, so check carefully that the imported data is what you expect. That is the gist of Oracle Data Pump expdp/impdp; other operations, such as exporting and importing by table name, by query condition, or for the whole database, work much the same way with only a few parameter changes, so they are not explained further here.
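For example, a table-level export and a condition-filtered export might look like this (a hedged sketch; the EMP table and deptno condition are illustrative, and shell quoting of the query clause may need adjusting):

expdp u01/password_u01 directory=dpdata1 tables=EMP dumpfile=u01_emp.dmp logfile=u01_emp.log
expdp u01/password_u01 directory=dpdata1 tables=EMP query='EMP:"WHERE deptno=10"' dumpfile=u01_emp10.dmp logfile=u01_emp10.log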

---------------------------------------------------------------

YUM deployment of a higher-version LNMP environment

 

Status:
The php, mysql, and nginx versions that come with yum and EPEL are too old to meet the performance and security requirements of test and production environments.

LNMP -> rapid deployment of a web environment

Requirement:
The yum-source PHP is 5.4. When we need PHP 5.6 or 7.0 we would normally have to compile from source, but sometimes we do not want to deal with the dependencies and hope to deploy quickly via yum; that is when a third-party yum source is needed.
WEBTATIC is a foreign third-party EPEL-style repository.

https://webtatic.com/packages/

PHP third-party EPEL source

CentOS 6.x source:

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
rpm -Uvh https://mirror.webtatic.com/yum/el6/latest.rpm

CentOS 7.x source:

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

You can list the packages the source provides with:

yum list --enablerepo=webtatic | grep php

LNMP deployment steps: install MySQL -> install PHP -> install Nginx.

Example installing PHP 7.1:
yum install php71w php71w-fpm php71w-common php71w-gd php71w-mbstring php71w-mcrypt php71w-mysqlnd php71w-pdo php71w-bcmath -y

Example installing PHP 5.6:
yum install php56w php56w-fpm php56w-mysql php56w-mcrypt php56w-bcmath php56w-gd php56w-mbstring php56w-pdo -y

A brief description of what the various PHP components do (a minimal nginx wiring sketch follows this list):
# base package
php71w
# FPM service that nginx connects to
php71w-fpm
# multibyte string support
php71w-mbstring
# MySQL connectivity
php71w-mysqlnd
# redis extension
php71w-pecl-redis
# encryption functions
php71w-mcrypt
# opcode cache for performance (PHP 5.5+)
php71w-opcache
Installing these basics covers most needs; further extensions can be installed the same way as these libraries.
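To wire nginx to php-fpm, a minimal hedged sketch of the server block, written with a heredoc in the style used elsewhere on this page (the paths and the 127.0.0.1:9000 listen address are php71w-fpm defaults I am assuming; adjust to taste):

tee /etc/nginx/conf.d/php.conf <<'EOF'
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.php index.html;
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
EOF
systemctl restart nginx php-fpm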

 

MySQL official yum source
The CentOS 6.x platform ships MySQL 5.1; the MySQL website naturally provides a corresponding yum installation method that most people may not notice. The CentOS 7.x series ships MariaDB instead.

Official description link: https://dev.mysql.com/doc/mysql-repo-excerpt/5.6/en/linux-installation-yum-repo.html

# update the yum cache
yum update
# add the mysql5.6 yum source
# CentOS 6
rpm -Uvh http://dev.mysql.com/get/mysql-community-release-el6-5.noarch.rpm
# CentOS 7
rpm -Uvh http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
yum install mysql-server

The installation is complete; start the MySQL service:

# start
service mysqld start
# restart
service mysqld restart
# stop
service mysqld stop

If startup fails, first check whether the port is occupied, then check permissions, and also check for leftover mysql processes.

Set the mysql root password:

mysqladmin -u root password 123456

Allow remote access:
mysql -u root
mysql> use mysql;
# allow external connections to the database
mysql> update user set host='%' where host='127.0.0.1';
# view the result of the change
mysql> select host, user, password from user;
# flush the privilege table (this must be done, or MySQL has to be restarted)
mysql> flush privileges;

Remember: after connecting successfully from a client, be sure to set a password; and if a firewall is enabled, open the database's external port, usually 3306.

Reset the root password (two cases):
1. You remember the root password:
# 123456 is the old password and must not be separated from -p; abcdefg is the new password
mysqladmin -u root -p123456 password abcdefg

2. You forgot the root password:
# if MySQL is running, shut it down first
killall -TERM mysqld
# start MySQL skipping the grant tables
mysqld_safe --skip-grant-tables &
# now enter MySQL without a password
> use mysql
> update user set password=password("new_pass") where user="root";
> flush privileges;
# exit, then start mysql normally

Nginx deployment
The nginx website provides downloads for different OS platforms and versions:

http://nginx.org/en/linux_packages.html#stable
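As a hedged convenience, the repo file that page describes for CentOS 7 can be dropped in directly (stanza reproduced from memory; verify against the page above):

tee /etc/yum.repos.d/nginx.repo <<'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
EOF
yum install nginx -y
systemctl enable nginx
systemctl start nginx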

Too many open files

First, finding the problem

A project just went live; the front end uses LVS + Haproxy for load balancing to support high-concurrency traffic. After running for a while, problems kept appearing; checking the logs revealed the following "Too many open files" error.

May 12, 2017 12:49:20 AM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
        at java.net.ServerSocket.implAccept(ServerSocket.java:530)
        at java.net.ServerSocket.accept(ServerSocket.java:498)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:60)
        at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:222)
        at java.lang.Thread.run(Thread.java:745)

Second, a description of the maximum number of open files

1. First, some background knowledge

  • In Linux everything is a file, including input and output devices, network connections, sockets, pipes, and so on;
  • The thing most relevant to the number of open files is the file descriptor (which some people call the file handle); the number of open files is the number of file descriptors;
  • The number of open files depends on the system type, memory size, int length (a non-negative integer, as in the C99 int keyword), and the system administrator's settings;
  • The maximum number of open files applies per process: the number of file handles one process can open is limited and cannot exceed the maximum;
  • The ulimit command is only valid for the current shell, so in a shell script that needs to control the maximum number of open files, run "ulimit -n <count>" first, then the rest of the script;
  • The file descriptors a process has open in Linux live under /proc/PID/fd/, where PID is the process identifier (see the sketch below).
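So a process's open-descriptor count can be read straight from /proc; a quick hedged example (16075 is the PID reused in the lsof examples later in this post):

ls /proc/16075/fd | wc -l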

Also worth attention is ulimit -v unlimited, the maximum available virtual memory ("the maximum amount of virtual memory available to the shell and, on some systems, to its children").

2. Global settings for the maximum number of open files

In CentOS and Ubuntu, ulimit is a bash built-in command, like if or shift, not a separate executable; that is why on Ubuntu people running sudo ulimit -n 65535 get a command-not-found style prompt (perhaps a sudo quirk).

ulimit controls the resources available to the shell or process, including but not limited to the maximum number of open files, maximum available virtual memory, maximum number of processes, socket buffers, etc. It has two levels, hard and soft, selected by the -H and -S switches respectively. The hard limit cannot be raised by a non-root user; the soft limit can be raised by a non-root user up to the hard limit. Usually the hard and soft values are set to the same number, with the command ulimit -HSn 65535.

ulimit is only valid for the current shell. For the change to take effect everywhere, besides running the ulimit command you must change the configuration file, i.e. the global setting for the maximum number of open files: edit /etc/security/limits.conf, add the following two lines, and log in again for it to take effect.

*  hard  nofile  65535
*  soft  nofile  65535

Here "*" means the setting applies to all users; after logging back in, running ulimit -n anywhere will show 65535.

3. Some commands related to open files

  1. View the system-wide total of open files: cat /proc/sys/fs/file-max
  2. View the number of files opened by a process: lsof -p 16075 | wc -l
  3. View the number of files opened on a port: lsof -i:80 | wc -l
  4. Note before using lsof: it is not well suited to processes or ports with very high connection counts or connections that change too quickly
  5. View the files used by a process: lsof -p 16075
  6. View the files used by a port: lsof -i:80
  7. View the users and programs using a file: fuser -v /bin/bash

Third, points to note

1. ulimit -n 65535 takes effect immediately, but after logging in again the limit reverts to the system's initial setting of 1024.

2. You can also add ulimit -n 65535 at the end of /etc/profile to make it take effect on login.

OpenLDAP dual-master setup

LDAP is a fairly critical service, so a single point of failure is a real problem. Besides an ordinary master/slave setup, the better choice is dual master: two ldap servers synchronizing in real time, with a load balancer in front. If one goes down, the load balancer kicks it out automatically and the service as a whole is unaffected. That is the point of configuring dual master. Of course, for a read-heavy, write-light service, master/slave is also perfectly fine.

The new configuration directory layout differs from the old one, and I stepped into quite a few pits before understanding its logic. The effective configuration lives in the /etc/openldap/slapd.d/ directory, and any new configuration has to be added as files in that directory, which makes it awkward to manage directly. So openldap offers a clever path: we still configure the familiar slapd.conf file and then convert it into the slapd.d directory structure with a command. It took two days of study to figure this out.

Without further ado, let's start configuring the dual master. With a yum install, there is no slapd.conf in /etc/openldap/ by default, but one can be copied from elsewhere:

cp /usr/share/openldap-servers/slapd.conf.obsolete /etc/openldap/slapd.conf

Then modify this configuration file; only the changed places are shown below.

vim /etc/openldap/slapd.conf

modulepath /usr/lib/openldap    # just remove the leading # to uncomment
modulepath /usr/lib64/openldap  # ditto
moduleload syncprov.la          # the module that implements master/slave and dual master

index entryCSN,entryUUID eq
The configuration above is the same on both servers; the configuration below differs slightly.

Server a:

serverID 2                # the two live servers must have different IDs
overlay syncprov
syncrepl rid=001          # this rid must match on both servers
  provider=ldap://server_b_ip    # the other server's ip address
  type=refreshAndPersist
  searchbase="dc=xxx,dc=com"     # search from the root
  schemachecking=off
  bindmethod=simple
  binddn="cn=admin,dc=xxx,dc=com"   # this user must exist; the admin user is used here
  credentials=1234        # not sure if this is the admin password or a sync password, so the admin password is used
  retry="60 +"
mirrormode on

 

Server b:

serverID 1
syncrepl rid=001          # this rid must match on both servers
  provider=ldap://server_a_ip    # the other server's ip address
  type=refreshAndPersist
  searchbase="dc=xxx,dc=com"     # search from the root
  schemachecking=off
  bindmethod=simple
  binddn="cn=admin,dc=xxx,dc=com"   # this user must exist; the admin user is used here
  credentials=1234        # as above, the admin password is used
  retry="60 +"
mirrormode on

 

The basic configuration is done; next, generate the slapd.d directory from slapd.conf.
1. Delete the contents of the slapd.d directory:
rm -rf /etc/openldap/slapd.d/*
2. Generate the directory:
slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
The message "config file testing succeeded" indicates success.
3. Fix ownership of the newly generated files:
chown -R ldap:ldap /etc/openldap
chown -R ldap:ldap /var/lib/ldap
4. Restart slapd:
service slapd restart
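A quick hedged way to confirm the two masters are replicating: write an entry on server a (e.g. with ldapadd), then search for it on server b (server_b_ip and the base DN are the placeholders used above):

ldapsearch -x -H ldap://server_b_ip -b "dc=xxx,dc=com" "(cn=admin)"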

WINDOWS NANO SERVER 2016

A few facts about Windows Nano Server 2016:

  • Windows Server 2016 Nano Server was released on 13/10/2016 worldwide.
  • Nano Server is a super-lightweight Windows Server.
  • The entire installation is 400MB.
  • There is no local management interface, GUI, or cmd utility.
  • All management is done remotely using Server Manager, PowerShell, DISM, etc.
  • Currently, Nano Server can run the following roles: Hyper-V, DNS, IIS, File Server, and Server Clustering.
  • Hyper-V is the only hypervisor technology on which Nano Server is supported.
  • Nano Server is part of the Windows Server 2016 ISO.
  • Group Policy is not supported on Nano Server.

https://pcriver.com/category/utilities