This article covers only the Green League vulnerability scan results. RHEL / CentOS / OEL 5.x x64 systems were flagged with the following high-risk vulnerabilities; the remediation is summarized below, and a follow-up vulnerability scan confirmed that the vulnerabilities had been patched.
High-risk vulnerabilities:
OpenSSH ‘schnorr.c’ remote memory corruption vulnerability (CVE-2014-1692)
OpenSSH J-PAKE authorization issue vulnerability (CVE-2010-4478)
OpenSSH GSSAPI remote code execution vulnerability (CVE-2006-5051)
GNU Bash environment variable remote Command Execution Vulnerability (CVE-2014-6271)
GNU Wget symlink vulnerability (CVE-2014-4877)
Medium-risk vulnerabilities:
OpenSSH default server configuration Denial of Service Vulnerability (CVE-2010-5107)
OpenSSH glob expression Denial of Service Vulnerability (CVE-2010-4755)
OpenSSH permissions and access control vulnerability (CVE-2014-2532)
OpenSSH verify_host_key function SSHFP DNS RR Check Bypass Vulnerability (CVE-2014-2653)
OpenSSH S/Key Remote Information Disclosure Vulnerability (CVE-2007-2243)
1. For RHEL/CentOS/OEL 5.x/6.x x64 operating systems, upgrading OpenSSH to 6.6p1 eliminates the high-risk and medium-risk vulnerabilities above; the low-risk vulnerabilities can be ignored. There are two upgrade options:
(1) Upgrade by building and installing from the original source package.
(2) Upgrade using the RPM installation packages; this article uses the RPM package upgrade to apply the patch (a short sketch follows below).
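A minimal sketch of the RPM upgrade path (the package file names are assumptions; use the OpenSSH 6.6p1 packages built for your distribution and architecture):
rpm -qa | grep openssh                 # record the currently installed OpenSSH version
rpm -Uvh openssh-6.6p1-*.rpm openssh-clients-6.6p1-*.rpm openssh-server-6.6p1-*.rpm
service sshd restart                   # keep an existing SSH session open in case the restart fails
ssh -V                                 # confirm the new version before re-running the vulnerability scan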
firewall-cmd --zone=public --add-port=22/tcp --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --reload
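To confirm the ports were added to the running configuration (assuming the default public zone used above), list them with:
firewall-cmd --zone=public --list-ports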
Hadoop exists to provide reliable storage and processing for big data (data that cannot be stored on one computer, or cannot be processed by one computer within the required time).
HDFS provides highly reliable file storage on clusters of ordinary PCs; by keeping multiple copies of each block, it copes with broken servers or disks.
MapReduce is a programming model that provides the simple abstractions of Mapper and Reducer. It can process large data sets concurrently and in a distributed fashion on an unreliable cluster of dozens to hundreds of PCs, while hiding the computational details of concurrency, distribution (such as inter-machine communication) and failure recovery.
With the Mapper and Reducer abstractions, a wide variety of complex data processing tasks can be decomposed into these basic elements. A complex computation is broken down into a directed acyclic graph (DAG) of multiple Jobs (each containing one Mapper and one Reducer); each Mapper and Reducer is then executed on the Hadoop cluster to obtain the result.
For an example of using MapReduce to count word frequencies in a text file, see WordCount on the Hadoop Wiki. If you are not yet familiar with MapReduce, getting some understanding of MapReduce through that example will help with the discussion below.
In MapReduce, the Shuffle is a very important step. It is precisely because the Shuffle is invisible that developers writing data processing code on MapReduce can be completely unaware of the underlying distribution and concurrency.
Shuffle broadly refers to the stages between Map and Reduce in the figure.
Limitations and shortcomings of Hadoop: MapReduce has the following limitations, which make it difficult to use.
Low level of abstraction; code must be written by hand for everything, so it is hard to get started.
Only two operations, Map and Reduce, which limits expressiveness.
A Job has only the two phases (Phase) Map and Reduce; complex computations require many Jobs, and the dependencies between Jobs must be managed by the developers themselves.
Processing logic is hidden in the details of the code, with no overall view of the logic.
Intermediate results are also placed in the HDFS file system.
ReduceTasks can start only after all MapTasks have completed.
High latency; suitable only for batch data processing, not adequate for interactive or real-time data processing.
Performance is relatively poor for iterative data processing.
For example, implementing a Join of two tables with MapReduce is a very tricky process,

as shown in the figure below (source: Real World Hadoop). Therefore, after the introduction of Hadoop, many technologies appeared that improve on these limitations, such as Pig, Cascading, JAQL, Oozie, Tez, Spark and the like.
Apache Spark is an emerging big-data processing engine whose main feature is a cluster-wide distributed memory abstraction that supports applications needing a working set.
This abstraction is the RDD (Resilient Distributed Dataset). An RDD is an immutable collection of records split into partitions, and it is Spark's programming model. Spark provides two kinds of operations on RDDs: transformations and actions. Transformations define a new RDD and include map, flatMap, filter, union, sample, join, groupByKey, cogroup, reduceByKey, cross, sortByKey, mapValues and the like; actions return a result and include collect, reduce, count, save and lookupKey.
Spark's API is very simple to use; the Spark WordCount example is shown below:
val spark = new SparkContext(master, appName, [sparkHome], [jars])
val file = spark.textFile("hdfs://...")
val counts = file.flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.saveAsTextFile("hdfs://...")
In Spark, all RDD transformations are lazily evaluated. A transformation generates a new RDD whose data depends on the original RDD, and each RDD also contains multiple partitions. A program therefore actually constructs a directed acyclic graph (DAG) of multiple interdependent RDDs, and performing an action on an RDD submits this DAG to Spark as a Job for execution.
For example, the WordCount program above generates the following DAG:
scala> counts.toDebugString
res0: String =
MapPartitionsRDD[7] at reduceByKey at :14 (1 partitions)
  ShuffledRDD[6] at reduceByKey at :14 (1 partitions)
    MapPartitionsRDD[5] at reduceByKey at :14 (1 partitions)
      MappedRDD[4] at map at :14 (1 partitions)
        FlatMappedRDD[3] at flatMap at :14 (1 partitions)
          MappedRDD[1] at textFile at :12 (1 partitions)
            HadoopRDD[0] at textFile at :12 (1 partitions)
Spark schedules the Job over this directed acyclic graph, determining stages (Stage), partitions (Partition), pipelining (Pipeline), tasks (Task) and caching (Cache), performs optimizations, and runs the Job on the Spark cluster. Dependencies between RDDs are divided into wide dependencies (depending on multiple partitions) and narrow dependencies (depending on only one partition); when determining stages, the graph is split into stages at wide dependencies, and tasks are generated per partition.

Spark supports fault recovery in two different ways: Lineage, which uses the recorded lineage of the data to re-execute the preceding computation, and Checkpoint, which stores the data set to persistent storage.
Spark provides better support for iterative data processing: the data for each iteration can be kept in memory instead of being written to files.
Spark's performance has improved greatly compared with Hadoop. In October 2014, Spark completed a Daytona GraySort benchmark test, sorting entirely on disk; a comparison with Hadoop's earlier results is shown in the table:

As the table shows, to sort 100 TB of data (one trillion records), Spark used only 1/10 of the computing resources of Hadoop and took only 1/3 of the time.
Spark's advantages are not only reflected in the performance improvement. The Spark framework provides a unified data processing platform for batch processing (Spark Core), interactive queries (Spark SQL), stream processing (Spark Streaming), machine learning (MLlib) and graph computation (GraphX), which is a great advantage compared with using Hadoop.

In the words of Databricks, this is One Stack To Rule Them All.
In particular, if you need to do some ETL work, then train a machine learning model, and finally make some queries, then with Spark you can write these three parts of the logic in a single program forming one large directed acyclic graph (DAG), and Spark can optimize the large DAG as a whole.
For example, the following program:
val points = sqlContext.sql("SELECT latitude, longitude FROM historic_tweets")
val model = KMeans.train(points, 10)
sc.twitterStream(...).map(t => (model.closestCenter(t.location), 1)).reduceByWindow("5s", _ + _)
(Example source: http://www.slideshare.net/Hadoop_Summit/building-a-unified-data-pipeline-in-apache-spark)
The first line of this program uses Spark SQL to query some points; the second line uses MLlib's K-means algorithm to train a model on those points; the third line uses Spark Streaming to process messages from a stream, applying the trained model.
Lambda Architecture
The Lambda Architecture is a reference model for big-data processing platforms, as shown below:

It contains three layers: the Batch Layer, the Speed Layer and the Serving Layer. Because the Batch Layer and the Speed Layer implement the same data processing logic, using Hadoop as the Batch Layer and Storm as the Speed Layer means maintaining code for two different technologies.
Spark can serve as an integrated solution for the Lambda Architecture, roughly as follows:
Batch Layer: HDFS + Spark Core. Incremental real-time data is appended to HDFS, and Spark Core batch-processes the full data set to generate views of the full data.
Speed Layer: Spark Streaming handles the incremental real-time data and generates low-latency views of the real-time data.
Serving Layer: HDFS + Spark SQL (perhaps BlinkDB) stores the views output by the Batch Layer and the Speed Layer, merges the batch views with the real-time views, and provides low-latency ad-hoc query capabilities.
Summary
If MapReduce is the recognized low-level abstraction for distributed data processing, analogous to the AND, OR and NAND gates of logic circuits, then Spark's RDD is a high-level abstraction for distributed big-data processing, analogous to the encoders, decoders and other components built from those gates.
An RDD is a distributed data collection (Collection); any operation on this collection can be as straightforward as operating on an in-memory collection in functional programming. It is simple, yet the implementation decomposes the collection operations into a series of Tasks that are sent to a cluster of dozens or hundreds of servers running in the background. The recently launched big-data processing framework Apache Flink also uses data sets (Data Set) and operations on them as its programming model.
The directed acyclic graph (DAG) formed by the RDDs is submitted to the scheduler for execution; the scheduler generates a physical plan, optimizes it, and then executes it on the Spark cluster. Spark also offers an execution engine similar to MapReduce, but it uses memory rather than disk wherever possible, to obtain better execution performance.
So which of Hadoop's problems does Spark solve?
Low level of abstraction; code must be written by hand, making it hard to get started.
=> Based on the RDD abstraction, the code for the actual data processing logic is very brief.
Only two operations, Map and Reduce, which limits expressiveness.
=> Spark provides many transformations and actions; many basic operations, such as Join and GroupBy, are already implemented as RDD transformations and actions.
A Job has only the two phases Map and Reduce; complex computations require many Jobs, and the dependencies between Jobs must be managed by the developers themselves.
=> A single Job can contain multiple RDD transformations, from which the scheduler can generate multiple stages (Stage); and if the partitioning does not change across multiple map operations on an RDD, they can be placed in the same Task.
Processing logic is hidden in the details of the code, with no overall view of the logic.
=> In Scala, through anonymous functions and higher-order functions, RDD transformations support a fluent API that provides a holistic view of the processing logic. The code does not contain the implementation details of each operation, so the logic is clearer.
Intermediate results are also placed in the HDFS file system.
=> Intermediate results are kept in memory and spilled to the local disk when they do not fit, instead of being written to HDFS.
A ReduceTask can start only after all MapTasks have completed.
=> Transformations on a partition within the same pipeline form one Task; transformations across different partitions require a Shuffle and are split into different Stages, which must wait for the preceding Stage to complete before they can start.
High latency; suitable only for batch data processing, not adequate for interactive or real-time data processing.
=> Discretized Streams are provided for processing stream data, splitting the stream into small batches.
Performance is relatively poor for iterative data processing.
=> Caching data in memory improves the performance of iterative computations.
Therefore, it is the trend of technological development that Hadoop MapReduce will be replaced by a new generation of big-data processing platforms, and among next-generation big-data processing platforms, Spark is currently the most widely recognized and supported.
Docker is an open platform for Sys Admins and developers to build, ship and run distributed applications. Applications are easily and quickly assembled from reusable and portable components, eliminating the silo-ed approach between development, QA, and production environments.
Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.

At a high-level, Docker is built of:
– Docker Engine: a portable and lightweight, runtime and packaging tool
– Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm) but that’s beyond the basic overview I’m giving here.
Containers are lightweight, portable, isolated, self-sufficient "slices of a server" that contain any application (often they contain microservices).
They deliver on the full DevOps goal:
– Build once… run anywhere (Dev, QA, Prod, DR).
– Configure once… run anything (any container).
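As a hedged illustration of "build once... run anywhere" (the image name, tag and registry are hypothetical):
docker build -t registry.example.com/myapp:1.0 .    # build the image once from a Dockerfile
docker push registry.example.com/myapp:1.0          # publish it to a registry
docker run -d registry.example.com/myapp:1.0        # run the same image unchanged in Dev, QA, Prod or DR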
Docker Features
- Multi-arch, multi-OS
- Stable control API
- Stable plugin API
- Resiliency
- Signature
- Clustering
Docker:
- Is easy to install
- Will run anything, anywhere
- Gives you repeatable builds
Deploy efficiently
- Containers are lightweight
- A typical laptop runs 10-100 containers easily
- A typical server can run 100-1000 containers
- Containers can run at native speeds

High level approach
It's a lightweight VM:
- own process space
- own network interface
- can run stuff as root
- can have its own /sbin/init (different from the host)
How does it work?
Isolation with namespaces:
- pid
- mnt
- net
- uts
- ipc
- user
docker run -i -t \
  --net=none \
  --lxc-conf='lxc.network.type=veth' \
  --lxc-conf='lxc.network.ipv4=172.16.21.112/16' \
  --lxc-conf='lxc.network.ipv4.gateway=172.16.255.254' \
  --lxc-conf='lxc.network.link=br0' \
  --lxc-conf='lxc.network.name=eth0' \
  --lxc-conf='lxc.network.flags=up' \
  mohan/centos6 /bin/bash
# docker attach [CONTAINER ID]
# ps axufww
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  14728  1900 ?        S    02:17   0:00 /bin/bash
root        83  0.0  0.0 177340  3860 ?        Ss   02:20   0:00 /usr/sbin/httpd
apache      85  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      86  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      87  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      88  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      89  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      90  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      91  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
apache      92  0.0  0.0 177340  2472 ?        S    02:20   0:00  \_ /usr/sbin/httpd
root        93  0.0  0.0  16624  1068 ?        R+   02:20   0:00 ps axufww
# ifconfig
eth0      Link encap:Ethernet  HWaddr ...
          inet addr:172.16.21.112  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::a46d:79ff:fe20:ea7e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1668 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:222716 (217.4 KiB)  TX bytes:468 (468.0 b)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
# docker ps
CONTAINER ID        IMAGE                  COMMAND       CREATED             STATUS              PORTS   NAMES
7baceac4e139        mohan/centos6:latest   "/bin/bash"   25 seconds ago      Up 25 seconds
8a6311dbdbb0        mohan/centos6:latest   "/bin/bash"   About an hour ago   Up About an hour
Compute efficiency
Almost no overhead:
- processes are isolated, but run straight on the host
- CPU performance = native performance
- memory performance = a few % shaved off for (optional) accounting
- network performance = small overhead; can be reduced to zero
Docker can help developers

Inside my container:
- my code
- my libraries
- my package manager
- my app
- my data
Locking Down and Patching Containers
A regular system often contains software components that aren't required by its applications. In contrast, a proper Docker container includes only those dependencies that the application requires, as explicitly prescribed in the corresponding Dockerfile. This decreases the vulnerability surface of the application's environment and makes it easier to lock it down. The smaller footprint also decreases the number of components that need to be patched with security updates.
When patching is needed, the workflow is different from a typical vulnerability management approach:
Traditionally, security patches are installed on the system independently of the application, in the hopes that the update doesn’t break the app.
Containers integrate the app with dependencies more tightly and allow for the container’s image to be patched as part of the application deployment process.
Rebuilding the container’s image (e.g., “docker build”) allows the application’s dependencies to be automatically updated.
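A minimal sketch of that patch-by-rebuild workflow (the image and container names are hypothetical; the package-update step is assumed to live in the Dockerfile):
docker build --pull -t myapp:2015-12 .    # rebuild the image so base layers and dependencies pick up current fixes
docker stop myapp && docker rm myapp      # replace the running container as part of the normal deployment
docker run -d --name myapp myapp:2015-12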
The container ecosystem changes the work that ops might traditionally perform, but that isn’t necessarily a bad thing.
Running a vulnerability scanner when distributing patches the traditional way doesn’t quite work in this ecosystem. What a container-friendly approach should entail is still unclear. However, it promises the advantage of requiring fewer updates, bringing dev and ops closer together and defining a clear set of software components that need to be patched or otherwise locked down.
Security Benefits and Weaknesses of Containers
Application containers offer operational benefits that will continue to drive the development and adoption of the platform. While the use of such technologies introduces risks, it can also provide security benefits:
Containers make it easier to segregate applications that would traditionally run directly on the same host. For instance, an application running in one container only has access to the ports and files explicitly exposed by the other container.
Containers encourage treating application environments as transient, rather than as static systems that exist for years and accumulate risk-inducing artifacts.
Containers make it easier to control what data and software components are installed through the use of repeatable, scripted instructions in setup files.
Containers offer the potential of more frequent security patching by making it easier to update the environment as part of an application update. They also minimize the effort of validating compatibility between the app and patches.
Not all is peachy in the world of application containers, of course. The security risks that come to mind when assessing how and whether to use containers include the following:
The flexibility of containers makes it easy to run multiple instances of applications (container sprawl) and indirectly leads to Docker images that exist at varying security patch levels.
The isolation provided by Docker is not as robust as the segregation established by hypervisors for virtual machines.
The use and management of application containers is not well-understood by the broader ops, infosec, dev and auditors community yet.
wget http://www.rabbitmq.com/releases/rabbitmq-server/current/rabbitmq-server-3.5.7-1.noarch.rpm
rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
yum install rabbitmq-server-3.5.7-1.noarch.rpm
chkconfig rabbitmq-server on
/sbin/service rabbitmq-server start
/sbin/service rabbitmq-server status
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user test test
rabbitmqctl add_user guest test123
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
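A quick way to verify the broker and the accounts just created (the management UI from the rabbitmq_management plugin listens on port 15672 by default):
rabbitmqctl status
rabbitmqctl list_users
rabbitmqctl list_permissions -p /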
RHCE7 certification study notes
— system file directory structure
1. RHEL7 system file directory
The RHEL7 directory structure basically remains the same as the RHEL6 directory structure;
the difference is that RHEL7 adds the /run directory, under which CD-ROMs and other removable media are automatically mounted by default (/run/media), whereas in RHEL6 the default CD-ROM mount directory is /media.
If you choose manual partitioning, you must create at least the partitions /, /boot and swap.
2. When reinstalling the system, you only need to format the "/" directory; other directories on their own partitions do not need to be formatted, so the original user files are kept and not deleted.
[root@RHEL7HARDEN /]# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
[root@RHEL7HARDEN /]#
Study notes 2
— Command-line file operations
1. Create and delete files
touch xxxx: create a file
touch -t 201512251200 xxxx: create a file with a specified timestamp
rm xxx: delete a file
rm -rf xxx: force-delete files
2. Create and delete directories
mkdir -p xxx/yyy: create directories recursively;
rmdir xxx: delete an empty directory;
rm -rf XXX: forcibly remove a non-empty directory;
3. Copy files and directories
cp /path1/xxx /path2/xxx/
cp -p /path1/xxx /path2/yyy: copy a file while preserving the original file attributes;
cp -Rf (or -rf) /path1/ /path2/: copy directory path1 into directory path2
cp -a is equivalent to cp -dR --preserve=all
4. Move (cut) files
mv /path1/xx /path2/yy
5. View files
cat xx
more
less
Study notes 3 – redirection and piping
Redirecting standard output:
cat xx.file > yy.file is equivalent to
cat xx.file 1> yy.file: both redirect the output to the file yy.file, overwriting its original contents;
cat /etc/passwd &>> /tmp/xx
cat /etc/passwd > /tmp/xx 2>&1
tail -f /var/log/messages >/tmp/xx 2>/tmp/yy
ps aux | grep tail | grep -v 'grep'
ls -al /dev/
Redirecting file contents as input to a command:
tr
cat > /tmp/xxx <<EOF
>test1
>test2
>test3
>EOF
cat <<EOF> /tmp/xxx
>test2
>test3
>test4
>EOF
grep options: -n shows line numbers; -i ignores case; -A 3 shows the 3 lines after a match; -B 3 shows the 3 lines before a match; -v inverts the match (excludes the keyword); -q suppresses output;
grep -n -B1 -A1 root /etc/passwd
ifconfig | grep 'inet'|grep -v 'inet6'| awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}END{}'
ifconfig | grep 'inet'|grep -v 'inet6'| tee -a /tmp/yy|awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}END{}'
Study notes 4 – using the Vim editor
1. gedit edits files graphically.
2. Vim opens a file if it exists; if the file does not exist, it is created:
3. When Vim opens a file for editing, it starts in command mode by default.
4. To edit the file, switch from command mode to insert mode by pressing one of the following keys:
i: insert at the current cursor position;
a: insert one character after the current cursor position;
o: open a new line below the current line and insert;
I: jump to the beginning of the current line and insert;
A: jump to the end of the current line and insert;
O: open a new line above the current line and insert;
r: replace the current character;
R: replace the current character and move on to the next character;
number + G: jump to a specific line, e.g. 10G jumps to line 10; G jumps to the last line; gg jumps to the first line;
number + yy: copy (yank) that many lines starting at the current line; paste anywhere with p;
number + dd: cut that many lines starting at the current line; paste anywhere with p;
u: undo the last operation;
Ctrl+r: redo the last undone operation;
Ctrl+v: enter visual block mode; move the cursor to select content, press y to copy the selection, and paste anywhere with p;
To quickly add a # comment at the beginning of multiple lines: enter visual block mode, move the cursor to select the lines, press I to go to the start position,
type #, then press Esc.
#abrt:x:173:173::/etc/abrt:/sbin/nologin
#pulse:x:171:171:PulseAudio System Daemon:/var/run/pulse:/sbin/nologin
#gdm:x:42:42::/var/lib/gdm:/sbin/nologin
#gnome-initial-setup:x:993:991::/run/gnome-initial-setup/:/sbin/nologin
:split splits the window; Ctrl+w switches between split windows.
For detailed Vim help and practice, you can run vimtutor.
5. Last-line mode: saving files, searching, setting options, replacement and other operations
To enter last-line mode, press Esc to leave insert mode and return to command mode, then type : (/ is generally used to search; n finds the next match downward, N searches upward)
Save: :wq saves and exits, as does :x;
Force quit: :q! quits without saving the file contents;
Display line numbers: :set nu. To display line numbers by default, modify the .vimrc file in your home directory or /etc/vimrc (create the file if it does not exist) and insert a line containing set nu (see the example after this list);
Jump to a specified line: type the line number directly;
Replace: :1,$s/old/new/g replaces all occurrences in the whole file;
:m,ns/old/new/g replaces all matches from line m to line n; . represents the current line, $ represents the last line,
$-1 represents the second-to-last line, and % can be used instead of 1,$;
both mean the whole file. If the text to match contains special characters such as / or *, prefix them with the escape character \;
you can also use :s#old#new#, which uses # as the separator, so those special characters do not need to be escaped;
To search, use /; to ignore case, append \c to the pattern, for example: /servername\c
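A one-line way to make the line-number setting permanent for the current user (assuming the per-user file is ~/.vimrc, as noted above):
echo 'set nu' >> ~/.vimrc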
Study notes 5 – manage users and user groups
[root@RHEL7HARDEN /]# passwd --help
Usage: passwd [OPTION...] <accountName>
-k, --keep-tokens keep non-expired authentication tokens
-d, --delete delete the password for the named account (root only)
-l, --lock lock the password for the named account (root only)
-u, --unlock unlock the password for the named account (root only)
-e, --expire expire the password for the named account (root only)
-f, --force force operation
-x, --maximum=DAYS maximum password lifetime (root only)
-n, --minimum=DAYS minimum password lifetime (root only)
-w, --warning=DAYS number of days warning users receives before password expiration (root only)
-i, --inactive=DAYS number of days after password expiration when an account becomes disabled (root only)
-S, --status report password status on the named account (root only)
--stdin read new tokens from stdin (root only)
Help options:
-?, --help Show this help message
--usage Display brief usage message
[root@RHEL7HARDEN /]#
[root@RHEL7HARDEN /]# chage --help
Usage: chage [options] LOGIN
Options:
-d, --lastday LAST_DAY set date of last password change to LAST_DAY
-E, --expiredate EXPIRE_DATE set account expiration date to EXPIRE_DATE
-h, --help display this help message and exit
-I, --inactive INACTIVE set password inactive after expiration
to INACTIVE
-l, --list show account aging information
-m, --mindays MIN_DAYS set minimum number of days before password
change to MIN_DAYS
-M, --maxdays MAX_DAYS set maximim number of days before password
change to MAX_DAYS
-R, --root CHROOT_DIR directory to chroot into
-W, --warndays WARN_DAYS set expiration warning days to WARN_DAYS
[root@RHEL7HARDEN /]# useradd --help
Usage: useradd [options] LOGIN
useradd -D
useradd -D [options]
Options:
-b, --base-dir BASE_DIR base directory for the home directory of the
new account
-c, --comment COMMENT GECOS field of the new account
-d, --home-dir HOME_DIR home directory of the new account
-D, --defaults print or change default useradd configuration
-e, --expiredate EXPIRE_DATE expiration date of the new account
-f, --inactive INACTIVE password inactivity period of the new account
-g, --gid GROUP name or ID of the primary group of the new
account
-G, --groups GROUPS list of supplementary groups of the new
account
-h, --help display this help message and exit
-k, --skel SKEL_DIR use this alternative skeleton directory
-K, --key KEY=VALUE override /etc/login.defs defaults
-l, --no-log-init do not add the user to the lastlog and
faillog databases
-m, --create-home create the user's home directory
-M, --no-create-home do not create the user's home directory
-N, --no-user-group do not create a group with the same name as
the user
-o, --non-unique allow to create users with duplicate
(non-unique) UID
-p, --password PASSWORD encrypted password of the new account
-r, --system create a system account
-R, --root CHROOT_DIR directory to chroot into
-s, --shell SHELL login shell of the new account
-u, --uid UID user ID of the new account
-U, --user-group create a group with the same name as the user
-Z, --selinux-user SEUSER use a specific SEUSER for the SELinux user mapping
[root@RHEL7HARDEN /]# usermod --help
Usage: usermod [options] LOGIN
Options:
-c, --comment COMMENT new value of the GECOS field
-d, --home HOME_DIR new home directory for the user account
-e, --expiredate EXPIRE_DATE set account expiration date to EXPIRE_DATE
-f, --inactive INACTIVE set password inactive after expiration
to INACTIVE
-g, --gid GROUP force use GROUP as new primary group
-G, --groups GROUPS new list of supplementary GROUPS
-a, --append append the user to the supplemental GROUPS
mentioned by the -G option without removing
him/her from other groups
-h, --help display this help message and exit
-l, --login NEW_LOGIN new value of the login name
-L, --lock lock the user account
-m, --move-home move contents of the home directory to the
new location (use only with -d)
-o, --non-unique allow using duplicate (non-unique) UID
-p, --password PASSWORD use encrypted password for the new password
-R, --root CHROOT_DIR directory to chroot into
-s, --shell SHELL new login shell for the user account
-u, --uid UID new UID for the user account
-U, --unlock unlock the user account
-Z, --selinux-user SEUSER new SELinux user mapping for the user account
[root@RHEL7HARDEN /]# useradd test1
[root@RHEL7HARDEN /]# mkdir /home/test
[root@RHEL7HARDEN /]# usermod -d /home/test1 test
usermod: user 'test' does not exist
[root@RHEL7HARDEN /]# cp -a /etc/skel/.[^.]* /home/test/
[root@RHEL7HARDEN /]# groups test1
test1 : test1
usermod -a -G mohan test1 (append user test1 to the supplementary group mohan)
usermod -g sets a user's primary group
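A short sketch tying these commands together (the group name mohan and user test1 are the examples used above):
groupadd mohan               # create the group if it does not already exist
usermod -a -G mohan test1    # append test1 to the supplementary group mohan
id test1                     # verify the uid, primary gid and supplementary groups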
[root@RHEL7HARDEN /]# gpasswd --help
Usage: gpasswd [option] GROUP
Options:
-a, --add USER add USER to GROUP
-d, --delete USER remove USER from GROUP
-h, --help display this help message and exit
-Q, --root CHROOT_DIR directory to chroot into
-r, --delete-password remove the GROUP's password
-R, --restrict restrict access to GROUP to its members
-M, --members USER,... set the list of members of GROUP
-A, --administrators ADMIN,...
set the list of administrators for GROUP
Except for the -A and -M options, the options cannot be combined.
Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux containers to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses Control Groups (cgroups), which have been in the Linux kernel even longer, to implement resource (CPU, memory, I/O) accounting and limiting, and union file systems that support layering of the container's file system.
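As a hedged illustration of how those cgroup limits surface in the Docker CLI (the image name reuses the mohan/centos6 image from the earlier example; the exact cgroup path varies by distribution):
docker run -d --name limited -m 512m --cpu-shares 512 --cpuset-cpus 0,1 mohan/centos6 /bin/bash
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes   # the memory cgroup value set by -m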

Kernel namespaces isolate containers, avoiding visibility between containers and containing faults. Namespaces isolate:
- pid (processes)
- net (network interfaces, routing)
- ipc (System V interprocess communication [IPC])
- mnt (mount points, file systems)
- uts (host name)
- user (user IDs [UIDs])
Containers or Virtual Machines
Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a guest OS for each VM saves space on disk and memory at runtime, also improving performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.

Container networking
When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it.
For every new container, Docker creates a pair of "peer" interfaces: one "local" eth0 interface inside the container and one with a unique name (e.g. vethAQI2QT) out in the namespace of the host machine.
Traffic going outside is NATted
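A hedged illustration of that NAT behaviour (the image name is hypothetical; it is assumed to serve HTTP on port 80):
docker run -d -p 8080:80 --name web some-web-image   # publish container port 80 on host port 8080
iptables -t nat -L -n | grep 8080                     # shows the DNAT rule Docker inserted for the published port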

You can create different types of networks in Docker:
veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.
vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.
phys: an already existing interface specified by the lxc.network.link is assigned to the container.
empty: will create only the loopback interface (at kernel space).
macvlan: a macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container. It also specifies the mode the macvlan will use to communicate between different macvlan on the same upper device. The accepted modes are: private, Virtual Ethernet Port Aggregator (VEPA) and bridge
Docker Evolution – release 1.7, June 2015
Important innovations have been introduced in the latest release of Docker; some of them are still experimental.
Plugins
A big new feature is a plugin system for Engine, the first two available are for networking and volumes. This gives you the flexibility to back them with any third-party system.
For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico. For volumes, this means that volumes can be stored on networked storage systems such as Flocker.
Networking
The release includes a huge update to how networking is done.

Sometimes you may want to list manually installed packages.
Just type:
yum list installed | grep -v 'anaconda\|updates'
and you will get a list of manually installed packages.
When you are not able to connect an ESXi server to vCenter, or when you cannot connect to an ESXi server from the VI client, it may be necessary to restart the management agents on the ESX or ESXi host. In today's post, How to restart management agents on an ESX or ESXi host, we will learn how to do this.
You might want to follow this little how-to article showing you the way to do it. To restart the management agents (mgmt-vmware and vmware-vpxa) directly on ESX/ESXi 4.x, 5.x or ESXi 6.x, do the following:
To restart the management agents on ESXi 6.x
Log in to SSH or Local console as root.
Run these commands:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
Or also (alternative way)
To reset the management network on a specific VMkernel interface, by default vmk0, run the command:
esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0
Note: Using a semicolon (;) between the two commands ensures the VMkernel interface is disabled and then re-enabled in succession. If the management interface is not running on vmk0, change the above command according to the VMkernel interface used.
To restart all management agents on the host, run the command:
services.sh restart
How to restart the Management agents on ESXi Server – via the console:
1.) Connect to the console of your ESX Server and press F2
2.) Log in as root and use the Up/Down arrows to navigate to Restart Management Agents.
3.) Press Enter and press F11 to restart the services.
4.) When the service has been restarted, press Enter. Then you can press Esc to log out of the system.
Then you should see a screen like this one :
To restart the management agents on ESXi 4.x and 5.x:
From Local Console or SSH:
- Log in to SSH or Local console as root.
- Run this command:
/sbin/services.sh restart
You can check also: Service mgmt-vmware restart may not restart hostd (1005566).
To restart the management agents on ESX Server 3.x, ESX 4.x:
1.) Login to your ESX Server as root from SSH session (via putty for example) or directly from the console.
2.) Type service mgmt-vmware restart and press Enter
Make sure that automatic startup/shutdown of virtual machines is disabled before running this command, otherwise you might reboot the virtual machines. See more at 103312.
4.) Type service vmware-vpxa restart and press Enter.
6.) Type logout and press Enter to disconnect from the ESX Server.
Then you should see this:
Stopping vmware-vpxa: [ OK ]
Starting vmware-vpxa: [ OK ]
Troubleshoot ESXi Host Management And Connectivity issues
If you are having problems connecting to an ESXi host, your first port of call should be to check the host's configuration. Things to check include:
- Physical connectivity
- IP/subnet mask
- VLAN on the vSwitch
- VLAN configuration on the physical switch
You can use the ‘Test Management Network’ option in the DCUI to test basic connectivity:
If the network connectivity tests succeed, but the host still cannot be managed by vCenter or connected to with the vSphere client, then it may be that the host's management agents need to be restarted. Bear in mind that restarting the management agents on a host may impact tasks that are running on the host. You can get a list of running tasks by running the following command at the CLI:
# vim-cmd vimsvc/task_list
Investigating the tasks that are running is covered in this VMware KB article. After reviewing the running tasks, you can decide whether to proceed with restarting the host’s management agents.
Restarting the Management Agents on an ESXi Host
There are a couple of ways in which you can restart an ESXi host's management agents. You can either use the DCUI or restart the agents via the CLI. Using the DCUI, it is just a case of using the 'Restart Management Agents' menu option, which can be found under 'Troubleshooting Options'.
To restart the management agents using the CLI, establish a connection via SSH or use the local console. Run the following commands to restart the vpxa agent and hostd:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
To restart all management agents on the host, run the command:
services.sh restart
This will restart all ESXi services including vpxa and hostd:
/sbin # services.sh restart
Running vmtoolsd stop
watchdog-vmtoolsd: Terminating watchdog process with PID 72671
vmtoolsd stopped
Running wsman stop
Stopping openwsmand
Running sfcbd stop
................
The services that will be restarted can be seen if you run 'chkconfig -io':
/sbin # chkconfig -io
/etc/init.d/lwiod
/etc/init.d/SSH
/etc/init.d/DCUI
/etc/init.d/ESXShell
/etc/init.d/usbarbitrator
/etc/init.d/lbtd
/etc/init.d/vprobed
/etc/init.d/storageRM
/etc/init.d/hostd
/etc/init.d/sensord
/etc/init.d/slpd
/etc/init.d/memscrubd
/etc/init.d/dcbd
/etc/init.d/cdp
/etc/init.d/vobd
/etc/init.d/vpxa
/etc/init.d/sfcbd-watchdog
/etc/init.d/sfcbd
/etc/init.d/wsman
/etc/init.d/vmtoolsd
You can see which services are set to start by running 'chkconfig --list':
/sbin # chkconfig --list
lsassd off
netlogond off
lwiod on
ntpd off
SSH on
iked off
DCUI on
ESXShell on
usbarbitrator on
lbtd on
vprobed on
storageRM on
hostd on
sensord on
slpd on
memscrubd on
dcbd on
cdp on
vobd on
vpxa on
sfcbd-watchdog on
sfcbd on
wsman on
vmtoolsd on
vmware-fdm off
Resetting the Management Network/Interface
Rather than restarting the management agents, it may be worth trying a reset of the management interface. To do so, run the following command:
esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0
This command is actually in two parts: the bit before the ';' will disable the interface, while the bit after the ';' will immediately enable it again, thereby performing a 'reset'.
RHEL 7 / CentOS 7
This post shows how to secure single user mode / rescue mode / emergency mode on RHEL 7 / CentOS 7 in Grub2. By following this article you will be able to secure your Grub2 edits with a username and password.
It is always a good idea to protect your Grub2.
In this how-to, we will protect Grub2 with an encrypted password and with a plain password.
To follow this how-to, make sure you have the root password needed to make changes in Grub2, and please make sure you follow the instructions exactly and read the notes.
Do this at your own risk; you alone are responsible if anything goes wrong 🙂
Protect Grub2 with Plain Password Method
1. Log in as the root user or a user with rights to edit the Grub2 configuration file (sudo).
[tejas-barot@RHEL7HARDEN ~]$ su -
2. Make a backup of the existing grub.cfg and the default /etc/grub.d/10_linux, so if anything goes wrong we can always restore them.
[root@RHEL7HARDEN ~]# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.orig
[root@RHEL7HARDEN ~]# cp /etc/grub.d/10_linux /etc/grub.d/10_linux.orig
3. Now add the entries that protect Grub2 with a username and password:
Note 1: Replace the username and password in the lines below, and add the lines at the end of the file /etc/grub.d/10_linux.
Note 2: Make sure you don't insert the following entries multiple times.
[root@RHEL7HARDEN ~]# vi /etc/grub.d/10_linux
cat << EOF
set superusers="mohan"
password mohan test123
EOF
4. Now let us generate a new grub.cfg; execute the following command.
[root@RHEL7HARDEN ~]# grub2-mkconfig --output=/tmp/grub2.cfg
5. Now replace the existing grub.cfg with this newly generated grub2.cfg:
[root@RHEL7HARDEN ~]# mv /boot/grub2/grub.cfg /boot/grub2/grub.cfg.move
[root@RHEL7HARDEN ~]# mv /tmp/grub2.cfg /boot/grub2/grub.cfg
6. That's it. Now you can reboot and press "e" in the Grub menu; it will ask you for the username and password.
Protect Grub2 with Encrypted Password Method
1. Log in as the root user or a user with rights to edit the Grub2 configuration file (sudo).
[tejas-barot@RHEL7HARDEN ~]$ su -
2. Make a backup of the existing grub.cfg and the default /etc/grub.d/10_linux, so if anything goes wrong we can always restore them.
[root@RHEL7HARDEN ~]# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.orig
[root@RHEL7HARDEN ~]# cp /etc/grub.d/10_linux /etc/grub.d/10_linux.orig
3. Let's generate an encrypted password with grub2-mkpasswd-pbkdf2. Once you execute the command below, it will ask you for the password; please enter the password twice. It generates the password string that you need to add to the 10_linux file. (A shortened version of the string is shown; you will have to paste the complete string.)
[root@RHEL7HARDEN ~]# grub2-mkpasswd-pbkdf2
Enter Password:
Reenter Password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.F1C4CFAA5A51EED123BE8238C23B25B2A6909AFC9812F0D45
4. Now add the entries that protect Grub2 with a username and password:
Note 1: Replace the username and password in the lines below, and add the lines at the end of the file /etc/grub.d/10_linux.
Note 2: Make sure you don't insert the following entries multiple times.
Note 3: A shortened string is shown here as an example; you will have to add the full string to make it work.
[root@RHEL7HARDEN ~]# vi /etc/grub.d/10_linux
cat << EOF
set superusers="mohan"
password_pbkdf2 mohan grub.pbkdf2.sha512.10000.62A93492C2F85EB4DC91FCD9E91933DE4A345519F9F9CAA2EF098A1BBE1272DCABE6A493F853708BE5BE46403835F0DEBD50E4A7F6E8843C402D23DB867872A9.30463770C028A430FF6760CDD55172F23861F6D9CF7458171E14F8DBCA77967C25A77313E41D7F1E57737DF36F3FF5B6CDA7B2600473289897D0EE8B35AF48EA
EOF
5. Now let us generate a new grub.cfg; execute the following command.
[root@RHEL7HARDEN ~]# grub2-mkconfig --output=/tmp/grub2.cfg
6. Now replace the existing grub.cfg with this newly generated grub2.cfg:
[root@RHEL7HARDEN ~]# mv /boot/grub2/grub.cfg /boot/grub2/grub.cfg.move
[root@RHEL7HARDEN ~]# mv /tmp/grub2.cfg /boot/grub2/grub.cfg
7. That's it. Now you can reboot and press "e" in the Grub menu; it will ask you for the password.
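Before rebooting, as a quick sanity check you can confirm that the generated configuration actually contains the protection entries:
grep -B1 -A1 superusers /boot/grub2/grub.cfg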