
CentOS / RHEL 7 : How to disable IPv6

This post describes the procedure to disable IPv6 on CentOS/RHEL 7. There are 2 ways to do this :
1. Disable IPv6 in kernel module (requires reboot)
2. Disable IPv6 using sysctl settings (no reboot required)

To verify whether IPv6 is enabled, execute :

# ifconfig -a | grep inet6
inet6 fe80::211:aff:fe6a:9de4 prefixlen 64 scopeid 0x20<link>
inet6 ::1 prefixlen 128 scopeid 0x10<host>
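
Alternatively (an equivalent check, not part of the original post), the ip command can be used; when IPv6 is fully disabled this prints nothing :

# ip -6 addr show
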
1. Disable IPv6 in kernel module (requires reboot)

1. Edit /etc/default/grub and add ipv6.disable=1 to the GRUB_CMDLINE_LINUX line, e.g.:

# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="ipv6.disable=1 crashkernel=auto rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
2. Regenerate the GRUB configuration file, overwriting the existing one:

# grub2-mkconfig -o /boot/grub2/grub.cfg
3. Restart the system and verify that there is no "inet6" line in the "ip addr show" output.

# shutdown -r now
# ip addr show | grep inet6
2. Disable IPv6 using sysctl settings (no reboot required)

1. Append the following lines to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
NOTE : To disable IPv6 on a single interface only, add the following lines to /etc/sysctl.conf instead :
net.ipv6.conf.[interface].disable_ipv6 = 1 ### put the interface name here in place of [interface]
net.ipv6.conf.default.disable_ipv6 = 1
2. To make the settings effective, execute :

# sysctl -p
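
If you only want to test the effect without editing /etc/sysctl.conf, the same settings can be applied at runtime with sysctl -w (not persistent across reboots; shown here as an additional option, not part of the original procedure) :

# sysctl -w net.ipv6.conf.all.disable_ipv6=1
# sysctl -w net.ipv6.conf.default.disable_ipv6=1
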
NOTE : if you are using the sysctl method, make sure the file /etc/ssh/sshd_config contains the line AddressFamily inet, otherwise SSH X forwarding may break.
3. Add the AddressFamily line to sshd_config :

# vi /etc/ssh/sshd_config
...
AddressFamily inet
...
Restart sshd for the changes to take effect :

# systemctl restart sshd

RHEL 6 vs RHEL 7 Difference

Red Hat Enterprise Linux 7 is a major change for the enterprise. RHEL 7 is a lightweight, container-friendly operating system well suited to today's business-critical application workloads. In this article we look at the differences between the previous and newer version: RHEL 6 vs RHEL 7.

What's Changed in RHEL 7 on the Administration Side
System boot time is optimized to boot faster
The Anaconda installer has been completely redesigned
The GRUB boot loader version changed from 0.97 to GRUB 2
SysV init has been replaced by systemd
Network interface names changed from ethX to ensXXX
Introduced the concept of multiple network profiles that activate based on the network you are connected to (e.g. Home, Office and Other)
The default database changed from MySQL to MariaDB
No more editing of network configuration files to assign IP addresses or create teaming interfaces; use the nmcli utility (see the example after this list)
The ifconfig and route commands are deprecated in RHEL 7 and replaced with the ip command
GNOME 2 has been replaced with GNOME 3 as the default desktop
The system user UID range changed from 0-499 to 0-999
The locate command is provided by mlocate
The cluster resource manager changed from RGManager to Pacemaker, and all CMAN features were merged into Corosync
The netstat command is replaced with the ss command
The NTP daemon is replaced with chronyd, a faster way to sync time
The directories /bin, /sbin, /lib and /lib64 moved under the /usr directory
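
A minimal nmcli sketch of assigning a static IP and activating the connection (illustrative only; the connection name eth0 and all addresses are assumptions, adjust them to your environment):

# nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8
# nmcli connection up eth0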

RHEL 6 vs RHEL 7 Difference
Feature Name | RHEL 6 | RHEL 7
Default File System | Ext4 | XFS
Kernel Version | 2.6.xx | 3.10.xx
Release Name | Santiago | Maipo
GNOME Version | GNOME 2 | GNOME 3.8
KDE Version | KDE 4.1 | KDE 4.6
Release Date | Wednesday, November 10, 2010 | Tuesday, June 10, 2014
NFS Version | NFS 4 | NFS 4.1 (NFS v2 is deprecated in RHEL 7)
Samba Version | SMB 3.6 | SMB 4.4
Default Database | MySQL | MariaDB
Cluster Resource Manager | RGManager | Pacemaker
Network Interface Grouping | Bonding with Active-Backup, XOR, IEEE 802.3ad and Load Balancing modes | Team driver supporting multiple teaming methods: Active-Backup, Load-Balancing and Broadcast
Kdump | Kdump does not support very large RAM sizes | Kdump is supported with up to 3 TB of RAM
Boot Loader | GRUB 0.97 (/boot/grub/grub.conf) | GRUB 2 (/boot/grub2/grub.cfg)
File System Check | e2fsck: inode, block and size checks; directory structure check; directory link check; reference count check; group summary check | xfs_repair: inode and inode blockmap checks; inode allocation map checks; inode size check; directory check; pathname check; link count check; freemap check; superblock check
Process ID 1 (init system) | SysV init | systemd
Port Security | iptables; by default the service port is opened when the service is switched on | firewalld instead of iptables (iptables can still be used with RHEL 7, but not both at the same time); no port is allowed until you explicitly enable it
Boot Time | ~40 sec | ~20 sec
File System Size | Ext4 up to 16 TB (XFS up to 100 TB) | XFS up to 500 TB (Ext4 up to 16 TB)
Processor Architecture | 32-bit and 64-bit | 64-bit only
Network Configuration Tool | setup | nmtui
Hostname Config File | /etc/sysconfig/network | /etc/hostname (no need to edit the file to set a permanent hostname; simply use the hostnamectl command)
Interface Name | eth0 | ens33xxx
Managing Services | service sshd start; service sshd restart; chkconfig sshd on | systemctl start sshd.service; systemctl restart sshd.service; systemctl enable sshd.service
System Logs | /var/log/ | /var/log plus journalctl
Run Levels | runlevel 0 (power off), 1 (single user mode), 2 (multi-user without networking), 3 (multi-user CLI), 4 (not used), 5 (GUI mode), 6 (restart) | There are no run levels in RHEL 7; they are replaced by targets: poweroff.target, rescue.target, multi-user.target, graphical.target, reboot.target
UID Information | Normal users: UID 500-65534; system users: UID 1-499 | Normal users: UID 1000-65534; system users: UID 1-999 (the range grew because more services ship with RHEL 7 than with RHEL 6)
Bypass Root Password Prompt | Append 1, s or init=/bin/bash to the kernel command line | Append rd.break or init=/bin/bash to the kernel command line
Rebooting and Poweroff | poweroff: init 0; reboot: init 6 | systemctl poweroff; systemctl reboot
YUM Commands | yum groupinstall; yum groupinfo | yum group install; yum group info
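
To make a few of the RHEL 7 entries above concrete, here are some illustrative commands (generic examples, not tied to a particular host):

# hostnamectl set-hostname server1.example.com   ### set the permanent hostname
# systemctl enable sshd.service                  ### enable a service at boot
# systemctl set-default multi-user.target        ### make multi-user.target the default "run level"
# systemctl isolate graphical.target             ### switch to the graphical target right now
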
Newly Introduced Features in RHEL 7
No more 32-bit installation packages
Docker is an open source project that helps deploy applications inside Linux containers.
Thanks for the read. Please provide your valuable feedback.

Conclusion: There are many more changes; only a few are listed above. For complete and detailed information, please read the official Red Hat documentation.

TCP dump and NMAP

1. Detect hosts on a given network segment that run an FTP service, without doing reverse DNS resolution:

nmap -sS -n -p 21 192.168.0.0/24

2. Detect whether a given server has specific ports/services open:

nmap -n -p T:21-25,80,110,3389 -sS 192.168.0.1

3. Use a TCP connect scan against a given server and keep probing even if it cannot be pinged (-Pn; older nmap versions use -P0):

nmap -sT -Pn 192.168.0.1

4. Detect the operating system type of a given server:

nmap -O -n 192.168.0.1

5. Detect which hosts in the local network have services open:

nmap -sS 192.168.0.0/24

6. Detect which hosts are up on the 192.168.0.0/24 and 172.16.0.0/16 networks:

nmap -sP -n 192.168.0.0/24 172.16.0.0/16

7. Quickly scan a host's open ports:

nmap -F 192.168.0.1

1. Capture 10 packets sent and received on the eth0 interface, save the capture to the file test, then read the capture file back:

tcpdump -i eth0 -c 10 -w test

tcpdump -r test

2. Capture all packets to or from port 80 (a port range can be specified as portrange 1-1024):

tcpdump port 80

3. Capture all packets to or from host 192.168.1.100:

tcpdump host 192.168.1.100

4. Capture IP packets whose source address is 192.168.1.100 (use dst for the destination address):

tcpdump src 192.168.1.100

5. Capture the traffic between host 192.168.1.100 and host 192.168.1.102:

tcpdump host 192.168.1.100 and 192.168.1.102

6. Capture TCP traffic from source address 192.168.1.100 to port 80:

tcpdump tcp and src 192.168.1.100 and port 80

7. Capture all IP packets between host 192.168.1.100 and any host except 192.168.1.102:

tcpdump ip host 192.168.1.100 and ! 192.168.1.102

8. Capture packets larger than 1000 bytes (useful when investigating DDoS attacks):

tcpdump -i eth0 greater 1000
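
A combined example, not from the original list: capture web traffic for one host on port 80 into a pcap file, then read it back without name or port resolution (the interface and file names are just placeholders):

tcpdump -i eth0 -nn host 192.168.1.100 and port 80 -w web.pcap
tcpdump -nn -r web.pcap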

SCALEIO 2.0 NOW AVAILABLE

EMC released ScaleIO 2.0 a couple of days ago. More information: https://community.emc.com/docs/DOC-52581

Some new features (source: ScaleIO 2.0 release notes):

  • Extended MDM cluster – introduces the option of a 5-node MDM cluster, which is able to withstand two points of failure.
  • Read Flash Cache (RFcache) – use PCI flash cards and/or SSDs for caching of the HDDs in the SDS.
  • User authentication using Active Directory (AD) over LDAP.
  • The multiple SDS feature – allows the installation of multiple SDSs on a single Linux or VMware-based server.
  • Oscillating failure handling – provides the ability to handle error situations, and to reduce their impact on normal system operation. This feature detects and reports various oscillating failures, in cases when components fail repeatedly and cause unnecessary failovers.
  • Instant maintenance mode – allows you to restart a server that hosts an SDS, without initiating data migration or exposing the system to the danger of having only a single copy of data.
  • Communication between the ScaleIO system and ESRS (EMC Secure Remote Support) servers is now supported – this feature replaces the call-home mechanism. It allows authorized access to the ScaleIO system for support sessions.
  • Authenticate communication between the ScaleIO MDM and SDS components, and between the MDM and external components, using a Public and Private Key (Key-Pair) associated with a certificate – this will allow strong authentication of components associated with a given ScaleIO system. A Certificate Authority certificate or self-signed certificate can be used.
  • In-flight checksum protection provided for data reads and writes – this feature addresses errors that change the payload during the transit through the ScaleIO system.
  • Performance profiles – predefined settings that affect system performance.

ScaleIO can be downloaded from the EMC website: http://www.emc.com/products-solutions/trial-software-download/scaleio.htm. ScaleIO 2.0 supports VMware (5.5 and 6.0), Linux and Windows.

Useful commands for Hadoop HDFS troubleshooting

[hadoop@master ~]$ hdfs dfsadmin -report
17/04/13 00:06:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
Configured Capacity: 37576769536 (35.00 GB)
Present Capacity: 33829806080 (31.51 GB)
DFS Remaining: 33829789696 (31.51 GB)
DFS Used: 16384 (16 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

————————————————-
Live datanodes (2):

Name: 192.168.1.82:50010 (lab2.rmohan.com)
Hostname: lab2.rmohan.com
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 1873375232 (1.74 GB)
DFS Remaining: 16915001344 (15.75 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.03%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Apr 13 00:06:49 SGT 2017
Name: 192.168.1.81:50010 (lab1.rmohan.com)
Hostname: lab1.rmohan.com
Decommission Status : Normal
Configured Capacity: 18788384768 (17.50 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 1873588224 (1.74 GB)
DFS Remaining: 16914788352 (15.75 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.03%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Apr 13 00:06:49 SGT 2017
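
A few other commands that are often useful when troubleshooting HDFS (general examples, not part of the report above):

hdfs fsck / -files -blocks -locations    ### check file system health and block placement
hdfs dfsadmin -safemode get              ### check whether the NameNode is in safe mode
hdfs dfs -df -h                          ### show HDFS capacity and usage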

CentOS 7.3 under the Hadoop 2.7.2 cluster

This post shows how to set up a Hadoop cluster on the CentOS Linux system. Before you read this article, I assume you already have all the basic concepts of Hadoop and the Linux operating system.

(Optional) Rename the default network interface to eth0:

mv ifcfg-eno16777736 ifcfg-eth0
vi /etc/udev/rules.d/90-eno-fix.rules

# This file was automatically generated on systemd update
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:9e:8f:95", NAME="eno16777736"

change it to:

# This file was automatically generated on systemd update
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:9e:8f:95", NAME="eth0"

vi /etc/hosts

192.168.1.80 master.rmohan.com master
192.168.1.81 lab1.rmohan.com lab1
192.168.1.82 lab2.rmohan.com lab2

Architecture

IP Address Hostname Role
192.168.1.80 master NameNode, ResourceManager
192.168.1.81 slave1 SecondaryNameNode, DataNode, NodeManager
192.168.1.82 slave2 DataNode, NodeManager

Before we start, let us understand the meaning of the following terms:

DataNode

A DataNode stores data in the Hadoop File System. A functional file system has more than one DataNode, with the data replicated across them.

NameNode

The NameNode is the centrepiece of an HDFS file system. It keeps the directory tree of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself.

NodeManager

The NodeManager (NM) is YARN's per-node agent and takes care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM), overseeing containers' life-cycle management, monitoring resource usage (memory, CPU) of individual containers, tracking node health, managing logs, and running auxiliary services which may be exploited by different YARN applications.

ResourceManager

ResourceManager (RM) is the master that arbitrates all the available cluster resources and thus helps manage the distributed applications running on the YARN system. It works together with the per-node NodeManagers (NMs) and the per-application ApplicationMasters (AMs).

Secondary Namenode

The Secondary NameNode's whole purpose is to create checkpoints of the HDFS metadata. It is just a helper node for the NameNode.

HDFS (Hadoop distributed file system)

HDFS is a clone of GFS, the Google file system described in a paper published in October 2003.

HDFS is the basis of data storage management in the Hadoop system. It is a highly fault-tolerant system that can detect and respond to hardware failures and is designed to run on low-cost commodity hardware. HDFS simplifies the file consistency model and, through streaming data access, provides high-throughput data access for applications with large data sets.

The main components of HDFS are:

(1) Client: splits files; accesses HDFS; interacts with the NameNode to get file location information; interacts with DataNodes to read and write data.

(2) NameNode: the master node (only one in Hadoop 1.x). It manages the HDFS namespace and the block mapping information, configures the replica policy and handles client requests. For large clusters, Hadoop 1.x has two major flaws: 1) the NameNode's memory becomes a bottleneck, i.e. a NameNode scalability problem; 2) the NameNode is a single point of failure.

Hadoop 2.x addresses both issues. For flaw 1) it introduces HDFS Federation, which scales the NameNode horizontally by letting multiple NameNodes serve multiple namespaces, relieving the memory pressure on a single NameNode.

For flaw 2), Hadoop 2.x provides an HA solution with two NameNodes in a hot-standby configuration: one is in the standby state and one is in the active state.

(3) DataNode: a slave node that stores the actual data and reports the stored block information to the NameNode.

(4) Secondary NameNode: assists the NameNode and shares part of its workload; it periodically merges the fsimage and edits files and pushes the result to the NameNode; in an emergency it can help restore the NameNode, but the Secondary NameNode is not a hot standby for the NameNode.

If the disk is still intact, the Secondary NameNode's checkpoint can be used to recover the NameNode.

3. MapReduce (distributed computing framework)

Hadoop MapReduce originates from Google's MapReduce paper, published in December 2004; it is a clone of Google MapReduce. MapReduce is a computational model used to process large amounts of data. Map applies an operation to independent elements of the dataset and generates intermediate results as key-value pairs. Reduce aggregates all the values of the same key in the intermediate results to obtain the final result. This functional division is very well suited to data processing in a distributed, parallel environment made up of a large number of machines.

The MapReduce framework has evolved into two API versions. MR1 consists of the following components:

(1) JobTracker: the master node (only one). Its main tasks are resource allocation, job scheduling and monitoring, and error handling; it breaks a job down into a series of tasks and assigns them to TaskTrackers.

(2) TaskTracker: a slave node that runs Map tasks and Reduce tasks and interacts with the JobTracker to report task status.

(3) Map Task: parses each data record, passes it to the user-written map() function, and writes the output to the local disk.

(4) Reduce Task: reads the Map task outputs remotely, sorts the data, and passes it to the user-written reduce() function.

In this process there is a shuffle phase, which is key to understanding the MapReduce framework. The shuffle covers everything that happens between the output of the map function and the input of the reduce function, and it can be divided into a map side and a reduce side.

Map side:

1) The input data is split; the split size is related to the size of the original file and the file block size. Each split corresponds to one map task.

2) While a map task executes, its results are kept in memory. When memory usage reaches a certain threshold (the threshold is configurable), the intermediate results are written to the local disk as a temporary file. This process is called spilling.

3) During spilling, records are partitioned according to the configured number of reduce tasks; each partition corresponds to one reduce task, and the records are sorted as they are written. A combiner can also be configured for this stage; its result must be consistent with the reduce function, so its use is somewhat restricted and it should be applied with caution.

4) At the end of each map task, the multiple spill files that were written to disk are merged so that only one temporary file, internally partitioned, remains as input to reduce.

Reduce side:

1) First the data is localized: the map outputs on remote nodes are copied to the local node.

2) Merge: the map outputs copied from different nodes are merged.

3) Copying and merging continue until a single input file is formed. Reduce stores the final results on HDFS.

MR2 is the new generation of the MapReduce API. It mainly runs on YARN's resource management framework.
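
As a quick illustration of running a MapReduce job on YARN, the example jar that ships with Hadoop 2.7.2 can be used for a word count (the /input and /output paths are assumptions for this sketch):

hdfs dfs -mkdir -p /input
hdfs dfs -put /etc/hosts /input/
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000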

4. YARN (resource management framework)

YARN was introduced in Hadoop 2.x as an optimization of the JobTracker/TaskTracker model used in Hadoop 1.x; it separates the JobTracker's resource allocation from job scheduling and monitoring. The framework consists mainly of the ResourceManager, the ApplicationMaster and the NodeManager. The ResourceManager is responsible for resource allocation for all applications; the ApplicationMaster is responsible for task scheduling within a single job, i.e. each job has its own ApplicationMaster; the NodeManager executes the commands it receives from the ResourceManager and the ApplicationMaster to carry out the resource allocation.

After receiving a client's job submission request, the ResourceManager allocates a Container. Note that the ResourceManager allocates resources in units of Containers. The first allocated Container starts the ApplicationMaster, which is primarily responsible for job scheduling. After the ApplicationMaster starts, it communicates directly with the NodeManagers.

In YARN, resource management is done by the ResourceManager and the NodeManagers: the scheduler in the ResourceManager is responsible for resource allocation, while the NodeManagers are responsible for resource provisioning and isolation. When the ResourceManager assigns resources on a NodeManager to a task (this is the so-called resource scheduling), the NodeManager provides the required resources for the task and even guarantees that these resources are exclusive, which is the so-called resource isolation.

Multiple computing frameworks can run on the YARN platform, for example MR, Tez, Storm and Spark.
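
Two YARN CLI commands that are handy for checking the ResourceManager and NodeManager state (generic examples):

yarn node -list            ### list NodeManagers registered with the ResourceManager
yarn application -list     ### list running YARN applications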

5. Sqoop (data synchronization tool)

Sqoop is an abbreviation of SQL-to-Hadoop and is used primarily for transferring data between traditional relational databases and Hadoop. An import or export is essentially a MapReduce program that takes full advantage of MapReduce's parallelism and fault tolerance; in practice it uses map-only jobs to import and export in parallel. Sqoop has two lines of development: the sqoop1.x series and the sqoop1.99.x series. The sqoop1 series is operated mainly from the command line.

Sqoop1 import principle: obtain the metadata (schema, table, fields, field types) from the traditional database, convert the import into a map-only MapReduce job with many map tasks, where each map reads a slice of the data and copies it in parallel.

Sqoop1 export principle: obtain the schema and metadata of the table to be exported, match it against the fields in Hadoop, and run map-only jobs in parallel that write the data in HDFS out to the relational database.

Sqoop1.99.x is the Sqoop2 line; it is not a fully functional product and is still in a testing phase, so it is generally not used in production.

One caveat with Sqoop: if a map task fails during an import or export, the ApplicationMaster will re-schedule another attempt of the failed task, so data that was already written by the failed attempt may be duplicated by the re-scheduled attempt.
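
A hypothetical sqoop1 import, just to show the command shape (the JDBC URL, credentials, table name and target directory are made-up placeholders):

sqoop import --connect jdbc:mysql://dbserver/salesdb --username dbuser -P --table orders --target-dir /user/hadoop/orders -m 4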

6. Mahout (data mining algorithm library)

Mahout started in 2008, initially as a subproject of Apache Lucene; it developed considerably in a very short time and is now a top-level Apache project. Compared with implementing machine learning algorithms directly with MapReduce programming, which often requires a lot of development time and long cycles, Mahout's main goal is to provide scalable implementations of classic machine learning algorithms so that developers can create intelligent applications more conveniently and quickly.

Mahout now includes a wide range of data mining methods such as clustering, classification, recommendation engines (collaborative filtering) and frequent itemset mining. In addition to the algorithms, Mahout also includes data input/output tools and integrations with other storage systems such as databases, MongoDB or Cassandra.

Each Mahout component is packaged into a corresponding jar. At this point we need to answer a question: how do you actually use Mahout?

In practice, Mahout is just a machine learning algorithm library containing implementations of the corresponding algorithms, such as recommender systems (both user-based and item-based), clustering and classification. Some of these algorithms are implemented on MapReduce or Spark and can run on the Hadoop platform; in actual development you only need to pull in the corresponding jar.

7. HBase (distributed column-oriented database)

HBase comes from Google's Bigtable paper, published in November 2006; while a traditional relational database is row-oriented, HBase is a Google Bigtable clone: a scalable, highly reliable, high-performance, distributed, column-oriented dynamic-schema database for structured data. Unlike traditional relational databases, HBase uses the BigTable data model: an enhanced sparse sorted map (Key/Value), where the key consists of a row key, a column key and a timestamp. HBase provides random, real-time read/write access to large-scale data, and data stored in HBase can be processed with MapReduce, combining data storage with parallel computing.

HBase table characteristics

1) Large: a table can have billions of rows and millions of columns;

2) Schema-free: each row has a sortable primary key and any number of columns; columns can be added dynamically as needed, and different rows in the same table can have different columns;

3) Column-oriented: storage and permission control are per column (family), and column families are retrieved independently;

4) Sparse: empty (null) columns do not take up storage space, so tables can be designed to be very sparse;

5) Multi-versioned data: each cell can have multiple versions; by default the version number is assigned automatically and is the timestamp at which the cell was inserted;

6) Single data type: all data in HBase is stored as strings; there are no types.

HBase physical model

Each column family is stored in a separate file on HDFS, and null values are not saved.

Each column family has its own Key and version number.

HBase maintains a multi-level index for each value, i.e. its physical storage:

1. All rows in a table are sorted by the row key;

2. A table is divided in the row direction into multiple Regions;

3. Regions are split by size: each table starts with only one region; as data grows, the region grows, and when it reaches a threshold the region is split into two new regions, so over time there are more and more regions;

4. The Region is the smallest unit of distributed storage and load balancing in HBase; different regions are distributed to different RegionServers;

5. The Region is the smallest unit of distribution, but not the smallest unit of storage. A Region consists of one or more Stores, and each Store holds one column family; each Store is made up of one MemStore and zero or more StoreFiles; a StoreFile contains an HFile; the MemStore is kept in memory while StoreFiles are stored on HDFS.
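
A tiny HBase shell session to make the data model concrete (the table and column family names are made up for illustration):

hbase shell
create 'users', 'info'                       ### table 'users' with one column family 'info'
put 'users', 'row1', 'info:name', 'mohan'    ### write a single cell
get 'users', 'row1'                          ### read the row back
scan 'users'                                 ### scan the whole table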

8. Zookeeper (distributed coordination service)

Zookeeper comes from Google's Chubby paper, published in November 2006; Zookeeper is a Chubby clone and mainly solves data management problems in distributed environments: unified naming, state synchronization, cluster management and configuration synchronization.

Zookeeper's operation consists mainly of two steps: 1) leader election, 2) data synchronization. This component is needed when implementing HA (high availability) for the NameNode.

9. Pig (Hadoop-based data flow system)

Open-sourced by Yahoo!, Pig was designed to provide a MapReduce-based ad-hoc data analysis tool (queries are computed at query time).

It defines a data flow language, Pig Latin, and converts Pig Latin scripts into MapReduce jobs on Hadoop. It is usually used for offline analysis.

10. Hive (Hadoop-based data warehouse)

Open-sourced by Facebook, Hive was initially used for statistics over massive amounts of structured log data.

Hive defines a SQL-like query language (HQL) that transforms SQL into MapReduce jobs on Hadoop. It is usually used for offline analysis.

11. Flume (log collection tool)

Flume is a log collection system open-sourced by Cloudera, with distributed, highly reliable, highly fault-tolerant, easy-to-customize and extensible characteristics.

It abstracts data as a stream that flows from generation through transmission and processing until it is finally written to the target path. Flume supports custom data senders in the data source so that data can be collected over various protocols, and the data stream provides simple processing of log data such as filtering and format conversion. Flume can also write logs to various (customizable) data targets. In general, Flume is a scalable log collection system suited to complex environments and massive amounts of logs.

3.1. Install necessary packages for the OS

We use the CentOS minimal ISO as our installation base; once the system is installed, we need several more basic packages:

yum install -y net-tools
yum install -y openssh-server
yum install -y wget
yum install -y ntp ntpdate
systemctl enable ntpd ; systemctl start ntpd
ntpdate -u 0.centos.pool.ntp.org
The first package provides ifconfig (net-tools), while the second provides the SSH server so the node can accept remote SSH logins.

3.2. Setup hostname for all nodes

This step is optional, but it helps you identify which node you are on when you use the same username across different nodes.

hostnamectl set-hostname master

e.g. on the master node

Re-login to check the effect.

3.3. Setup jdk for all nodes

Install the JDK from the Oracle official website:

wget --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.rpm
yum localinstall -y jdk-8u121-linux-x64.rpm
rm jdk-8u121-linux-x64.rpm
add java.sh under /etc/profile.d/

[root@master ~]# yum localinstall -y jdk-8u121-linux-x64.rpm
Loaded plugins: fastestmirror
Examining jdk-8u121-linux-x64.rpm: 2000:jdk1.8.0_121-1.8.0_121-fcs.x86_64
Marking jdk-8u121-linux-x64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package jdk1.8.0_121.x86_64 2000:1.8.0_121-fcs will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================================================
Package Arch Version Repository Size
=========================================================================================================================================================================
Installing:
jdk1.8.0_121 x86_64 2000:1.8.0_121-fcs /jdk-8u121-linux-x64 263 M

Transaction Summary
=========================================================================================================================================================================
Install 1 Package

Total size: 263 M
Installed size: 263 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2000:jdk1.8.0_121-1.8.0_121-fcs.x86_64 1/1
Unpacking JAR files…
tools.jar…
plugin.jar…
javaws.jar…
deploy.jar…
rt.jar…
jsse.jar…
charsets.jar…
localedata.jar…
Verifying : 2000:jdk1.8.0_121-1.8.0_121-fcs.x86_64 1/1

Installed:
jdk1.8.0_121.x86_64 2000:1.8.0_121-fcs

Complete!
[root@master ~]#
java.sh content:

export JAVA_HOME=/usr/java/jdk1.8.0_121
export JRE_HOME=/usr/java/jdk1.8.0_121/jre
export CLASSPATH=$JAVA_HOME/lib:.
export PATH=$PATH:$JAVA_HOME/bin
Re-login, and you'll find all the environment variables set and Java properly installed.

Approach to verification:

java -version
ls $JAVA_HOME
echo $PATH
If the reported Java version is wrong, you can change it with:

[root@master ~]# update-alternatives --config java

There is 1 program that provides 'java'.

Selection Command
-----------------------------------------------
*+ 1 /usr/java/jdk1.8.0_121/jre/bin/java

3.4. setup user and user group on all nodes

groupadd hadoop
useradd -d /home/hadoop -g hadoop hadoop
passwd hadoop
3.5. Modify the hosts file for inter-node name resolution on all nodes

echo '192.168.1.80 master.rmohan.com master' >> /etc/hosts
echo '192.168.1.81 lab1.rmohan.com lab1 slave1' >> /etc/hosts
echo '192.168.1.82 lab2.rmohan.com lab2 slave2' >> /etc/hosts
Check that the name resolution works:

ping master
ping slave1
ping slave2

3.6. Setup passwordless SSH login on all nodes

su - hadoop
ssh-keygen -t rsa
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
Now you can SSH into all 3 nodes without a password; please give it a try to check it out.

3.7. stop & disable firewall

systemctl stop firewalld.service
systemctl disable firewalld.service
4. Hadoop Setup

P.S. All of the Step 4 operations happen on a single node, say master. In addition, we'll log in as user hadoop to perform all operations.

su - hadoop
4.1. Download and untar on the file system.
[hadoop@master ~]$ wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
--2017-04-12 22:49:01-- http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
Resolving mirrors.sonic.net (mirrors.sonic.net)... 69.12.162.27
Connecting to mirrors.sonic.net (mirrors.sonic.net)|69.12.162.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 212046774 (202M) [application/x-gzip]
Saving to: 'hadoop-2.7.2.tar.gz'

1% [> ] 2,932,536 811KB/s eta 4m 24s

tar -zxvf hadoop-2.7.2.tar.gz
rm hadoop-2.7.2.tar.gz
chmod 775 hadoop-2.7.2
4.2. Add environment variables for hadoop

Append the following content to ~/.bashrc:

export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Then make these variables take effect:

source ~/.bashrc
4.3. Modify configuration files for hadoop

Add slave node hostnames into $HADOOP_HOME/etc/hadoop/slaves file
echo slave1 > $HADOOP_HOME/etc/hadoop/slaves
echo slave2 >> $HADOOP_HOME/etc/hadoop/slaves
Add secondary node hostname into $HADOOP_HOME/etc/hadoop/masters file
echo slave1 > $HADOOP_HOME/etc/hadoop/masters
Modify $HADOOP_HOME/etc/hadoop/core-site.xml as follows:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000/</value>
<description>namenode settings</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-2.7.2/tmp/hadoop-${user.name}</value>
<description> temp folder </description>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>
Modify $HADOOP_HOME/etc/hadoop/hdfs-site.xml as follows:
<configuration>
<property>
<name>dfs.namenode.http-address</name>
<value>master:50070</value>
<description> fetch NameNode images and edits </description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>slave1:50090</value>
<description> fetch SecondNameNode fsimage </description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<description> replica count </description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoop/hadoop-2.7.2/hdfs/name</value>
<description> namenode </description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoop/hadoop-2.7.2/hdfs/data</value>
<description> DataNode </description>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:///home/hadoop/hadoop-2.7.2/hdfs/namesecondary</value>
<description> check point </description>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.stream-buffer-size</name>
<value>131072</value>
<description> buffer </description>
</property>
<property>
<name>dfs.namenode.checkpoint.period</name>
<value>3600</value>
<description> duration </description>
</property>
</configuration>
Modify $HADOOP_HOME/etc/hadoop/mapred-site.xml as follows:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>hdfs://master:9001</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
<description>MapReduce JobHistory Server host:port, default port is 10020.</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
<description>MapReduce JobHistory Server Web UI host:port, default port is 19888.</description>
</property>
</configuration>

Modify $HADOOP_HOME/etc/hadoop/yarn-site.xml as follows:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
4.4. Create necessary folders

mkdir -p $HADOOP_HOME/tmp
mkdir -p $HADOOP_HOME/hdfs/name
mkdir -p $HADOOP_HOME/hdfs/data
4.5. Copy hadoop folders and environment settings to slaves

scp ~/.bashrc slave1:~/
scp ~/.bashrc slave2:~/

scp -r ~/hadoop-2.7.2 slave1:~/
scp -r ~/hadoop-2.7.2 slave2:~/
5. Launch hadoop cluster service

Format the namenode before the first launch:

hdfs namenode -format
Launch the HDFS distributed file system:
start-dfs.sh

[hadoop@master ~]$ start-dfs.sh
17/04/12 23:27:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
Starting namenodes on [master]
The authenticity of host 'master (192.168.1.80)' can't be established.
ECDSA key fingerprint is d8:55:ea:50:b3:bb:8a:fc:90:a2:0e:54:3e:79:60:bc.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master,192.168.1.80' (ECDSA) to the list of known hosts.
hadoop@master's password:
master: starting namenode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-namenode-master.rmohan.com.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-datanode-lab2.rmohan.com.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-datanode-lab1.rmohan.com.out
Starting secondary namenodes [slave1]
slave1: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-secondarynamenode-lab1.rmohan.com.out
17/04/12 23:27:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

Launch the YARN distributed computing system:
start-yarn.sh

[hadoop@master ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.2/logs/yarn-hadoop-resourcemanager-master.rmohan.com.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.2/logs/yarn-hadoop-nodemanager-lab2.rmohan.com.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.7.2/logs/yarn-hadoop-nodemanager-lab1.rmohan.com.out
[hadoop@master ~]$

Shutdown Hadoop Cluster

stop-yarn.sh
stop-dfs.sh
6. Verify the hadoop cluster is up and healthy

6.1. Verify with jps processes

Check jps on each node, and view results.

[hadoop@master ~]$ jps
3184 ResourceManager
3441 Jps
2893 NameNode
[hadoop@master ~]$

On slave1 node:
[hadoop@lab1 ~]$ jps
3026 NodeManager
3127 Jps
2811 DataNode
2907 SecondaryNameNode
[hadoop@lab1 ~]$

On slave2 node:
[hadoop@lab2 ~]$ jps
2722 DataNode
2835 NodeManager
2934 Jps
[hadoop@lab2 ~]$

6.2. Verify on the Web interface

Visit 192.168.1.80:50070 to view the HDFS storage status.
Visit 192.168.1.80:8088 to view YARN computing resources and application status.

7. End

This is all about the basic version of a 3-node Hadoop cluster. High-availability setups and related Hadoop ecosystem components will be covered in other posts. Thanks for contacting me if there is any mistype, if you have any suggestions, or if there is anything you don't understand.

How do I add a new node to a Hadoop cluster?

1. Install Hadoop on the new node, or copy the installation from another node

2. Copy the NameNode configuration files to the new node

3. Modify the masters and slaves files to add the new node; this has to be done on all nodes

4. Set up passwordless SSH access to the new node

5. Start the DataNode and TaskTracker on the new node separately (hadoop-daemon.sh start datanode / tasktracker); see the note after this list for the YARN equivalents

6. Run start-balancer.sh for data load balancing
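
On a YARN-based (Hadoop 2.x) cluster like the one above there is no TaskTracker; a sketch of the equivalent per-node commands would be:

hadoop-daemon.sh start datanode        ### start the HDFS DataNode on the new node
yarn-daemon.sh start nodemanager       ### start the YARN NodeManager on the new node
start-balancer.sh -threshold 10        ### rebalance blocks across DataNodes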

YumDownloader on CentOS 7 / RHEL 7

YumDownloader

yum install yum-utils

yumdownloader bind-utils

[root@lab4 ~]# yum install yum-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.shinjiru.com
 * extras: centos.netonboard.com
 * updates: centos.netonboard.com
Resolving Dependencies
--> Running transaction check
---> Package yum-utils.noarch 0:1.1.31-40.el7 will be installed
--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-40.el7.noarch
--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-40.el7.noarch
--> Running transaction check
---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Processing Dependency: python-chardet for package: python-kitchen-1.1.1-5.el7.noarch
--> Running transaction check
---> Package python-chardet.noarch 0:2.2.1-1.el7_1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================================================
 Package Arch Version Repository Size
=========================================================================================================================================================================
Installing:
 yum-utils noarch 1.1.31-40.el7 base 116 k
Installing for dependencies:
 libxml2-python x86_64 2.9.1-6.el7_2.3 base 247 k
 python-chardet noarch 2.2.1-1.el7_1 base 227 k
 python-kitchen noarch 1.1.1-5.el7 base 267 k

Transaction Summary
=========================================================================================================================================================================
Install 1 Package (+3 Dependent packages)

Total download size: 856 k
Installed size: 4.3 M
Is this ok [y/d/N]: Y
Downloading packages:
(1/4): yum-utils-1.1.31-40.el7.noarch.rpm | 116 kB 00:00:00
(2/4): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm | 247 kB 00:00:05
(3/4): python-kitchen-1.1.1-5.el7.noarch.rpm | 267 kB 00:00:06
(4/4): python-chardet-2.2.1-1.el7_1.noarch.rpm | 227 kB 00:00:06
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 137 kB/s | 856 kB 00:00:06
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : python-chardet-2.2.1-1.el7_1.noarch 1/4
 Installing : python-kitchen-1.1.1-5.el7.noarch 2/4
 Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64 3/4
 Installing : yum-utils-1.1.31-40.el7.noarch 4/4
 Verifying : yum-utils-1.1.31-40.el7.noarch 1/4
 Verifying : python-kitchen-1.1.1-5.el7.noarch 2/4
 Verifying : libxml2-python-2.9.1-6.el7_2.3.x86_64 3/4
 Verifying : python-chardet-2.2.1-1.el7_1.noarch 4/4

Installed:
 yum-utils.noarch 0:1.1.31-40.el7

Dependency Installed:
 libxml2-python.x86_64 0:2.9.1-6.el7_2.3 python-chardet.noarch 0:2.2.1-1.el7_1 python-kitchen.noarch 0:1.1.1-5.el7

Complete!
[root@lab4 ~]# mkdir software
[root@lab4 ~]# cd software/
[root@lab4 software]# ls
[root@lab4 software]# yumdownloader samba httpd --destdir /root/software/ --resolve
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.shinjiru.com
 * extras: centos.netonboard.com
 * updates: centos.netonboard.com
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-45.el7.centos will be installed
--> Processing Dependency: httpd-tools = 2.4.6-45.el7.centos for package: httpd-2.4.6-45.el7.centos.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-45.el7.centos.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-45.el7.centos.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-45.el7.centos.x86_64
---> Package samba.x86_64 0:4.4.4-12.el7_3 will be installed
--> Processing Dependency: samba-libs = 4.4.4-12.el7_3 for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: samba-common-tools = 4.4.4-12.el7_3 for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: samba-common-libs = 4.4.4-12.el7_3 for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: samba-common = 4.4.4-12.el7_3 for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: samba-client-libs = 4.4.4-12.el7_3 for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libwbclient = 4.4.4-12.el7_3 for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libxattr-tdb-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libwbclient.so.0(WBCLIENT_0.9)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libutil-tdb-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libutil-reg-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent.so.0(TEVENT_0.9.9)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent.so.0(TEVENT_0.9.16)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent-util.so.0(TEVENT_UTIL_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent-unix-util.so.0(TEVENT_UNIX_UTIL_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtdb.so.1(TDB_1.2.5)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtdb.so.1(TDB_1.2.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtalloc.so.2(TALLOC_2.0.2)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsys-rw-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsocket-blocking-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbregistry-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbd-shim-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbd-base-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbconf.so.0(SMBCONF_0)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmb-transport-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libserver-id-db-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsecrets3-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba3-util-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-util.so.0(SAMBA_UTIL_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-sockets-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-security-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-passdb.so.0(SAMBA_PASSDB_0.2.0)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-hostconfig.so.0(SAMBA_HOSTCONFIG_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-errors.so.1(SAMBA_ERRORS_1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-debug-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-cluster-support-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libreplace-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libpopt-samba3-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr.so.0(NDR_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr-standard.so.0(NDR_STANDARD_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr-samba-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr-nbt.so.0(NDR_NBT_0.0.1)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libmsghdr-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libmessages-dgm-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: liblibsmb-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libgse-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libgenrand-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libdbwrap-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcliauth-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcli-smb-common-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcli-nbt-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcli-cldap-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libauth-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libCHARSET3-samba4.so(SAMBA_4.4.4)(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libxattr-tdb-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libwbclient.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libutil-tdb-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libutil-reg-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent-util.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtevent-unix-util.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtdb.so.1()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libtalloc.so.2()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsys-rw-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsocket-blocking-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbregistry-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbd-shim-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbd-base-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmbconf.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsmb-transport-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libserver-id-db-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsecrets3-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba3-util-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-util.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-sockets-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-security-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-passdb.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-hostconfig.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-errors.so.1()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-debug-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libsamba-cluster-support-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libreplace-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libpopt-samba3-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr-standard.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr-samba-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libndr-nbt.so.0()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libmsghdr-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libmessages-dgm-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: liblibsmb-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libgse-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libgenrand-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libdbwrap-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcliauth-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcli-smb-common-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcli-nbt-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcli-cldap-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libauth-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libCHARSET3-samba4.so()(64bit) for package: samba-4.4.4-12.el7_3.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-45.el7.centos will be installed
---> Package libtalloc.x86_64 0:2.1.6-1.el7 will be installed
---> Package libtdb.x86_64 0:1.3.8-1.el7_2 will be installed
---> Package libtevent.x86_64 0:0.9.28-1.el7 will be installed
---> Package libwbclient.x86_64 0:4.4.4-12.el7_3 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
---> Package samba-client-libs.x86_64 0:4.4.4-12.el7_3 will be installed
--> Processing Dependency: libldb.so.1(LDB_1.1.19)(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libldb.so.1(LDB_1.1.1)(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libldb.so.1(LDB_0.9.23)(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libldb.so.1(LDB_0.9.15)(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libldb.so.1(LDB_0.9.10)(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libldb.so.1()(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libcups.so.2()(64bit) for package: samba-client-libs-4.4.4-12.el7_3.x86_64
---> Package samba-common.noarch 0:4.4.4-12.el7_3 will be installed
---> Package samba-common-libs.x86_64 0:4.4.4-12.el7_3 will be installed
---> Package samba-common-tools.x86_64 0:4.4.4-12.el7_3 will be installed
---> Package samba-libs.x86_64 0:4.4.4-12.el7_3 will be installed
--> Processing Dependency: libpytalloc-util.so.2(PYTALLOC_UTIL_2.1.6)(64bit) for package: samba-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libpytalloc-util.so.2(PYTALLOC_UTIL_2.0.6)(64bit) for package: samba-libs-4.4.4-12.el7_3.x86_64
--> Processing Dependency: libpytalloc-util.so.2()(64bit) for package: samba-libs-4.4.4-12.el7_3.x86_64
--> Running transaction check
---> Package cups-libs.x86_64 1:1.6.3-26.el7 will be installed
---> Package libldb.x86_64 0:1.1.26-1.el7 will be installed
---> Package pytalloc.x86_64 0:2.1.6-1.el7 will be installed
--> Finished Dependency Resolution
(1/18): apr-1.4.8-3.el7.x86_64.rpm | 103 kB 00:00:00
(2/18): apr-util-1.5.2-6.el7.x86_64.rpm | 92 kB 00:00:00
(3/18): libldb-1.1.26-1.el7.x86_64.rpm | 125 kB 00:00:00
(4/18): libtalloc-2.1.6-1.el7.x86_64.rpm | 34 kB 00:00:00
(5/18): libtdb-1.3.8-1.el7_2.x86_64.rpm | 45 kB 00:00:00
(6/18): cups-libs-1.6.3-26.el7.x86_64.rpm | 356 kB 00:00:01
(7/18): libtevent-0.9.28-1.el7.x86_64.rpm | 34 kB 00:00:00
(8/18): libwbclient-4.4.4-12.el7_3.x86_64.rpm | 100 kB 00:00:00
(9/18): httpd-tools-2.4.6-45.el7.centos.x86_64.rpm | 84 kB 00:00:01
(10/18): mailcap-2.1.41-2.el7.noarch.rpm | 31 kB 00:00:00
(11/18): pytalloc-2.1.6-1.el7.x86_64.rpm | 15 kB 00:00:00
(12/18): samba-4.4.4-12.el7_3.x86_64.rpm | 610 kB 00:00:01
(13/18): samba-common-tools-4.4.4-12.el7_3.x86_64.rpm | 451 kB 00:00:00
(14/18): httpd-2.4.6-45.el7.centos.x86_64.rpm | 2.7 MB 00:00:04
(15/18): samba-libs-4.4.4-12.el7_3.x86_64.rpm | 260 kB 00:00:00
(16/18): samba-common-4.4.4-12.el7_3.noarch.rpm | 191 kB 00:00:04
(17/18): samba-common-libs-4.4.4-12.el7_3.x86_64.rpm | 161 kB 00:00:04
(18/18): samba-client-libs-4.4.4-12.el7_3.x86_64.rpm | 4.6 MB 00:00:07
[root@lab4 software]# ls
apr-1.4.8-3.el7.x86_64.rpm libtalloc-2.1.6-1.el7.x86_64.rpm samba-4.4.4-12.el7_3.x86_64.rpm
apr-util-1.5.2-6.el7.x86_64.rpm libtdb-1.3.8-1.el7_2.x86_64.rpm samba-client-libs-4.4.4-12.el7_3.x86_64.rpm
cups-libs-1.6.3-26.el7.x86_64.rpm libtevent-0.9.28-1.el7.x86_64.rpm samba-common-4.4.4-12.el7_3.noarch.rpm
httpd-2.4.6-45.el7.centos.x86_64.rpm libwbclient-4.4.4-12.el7_3.x86_64.rpm samba-common-libs-4.4.4-12.el7_3.x86_64.rpm
httpd-tools-2.4.6-45.el7.centos.x86_64.rpm mailcap-2.1.41-2.el7.noarch.rpm samba-common-tools-4.4.4-12.el7_3.x86_64.rpm
libldb-1.1.26-1.el7.x86_64.rpm pytalloc-2.1.6-1.el7.x86_64.rpm samba-libs-4.4.4-12.el7_3.x86_64.rpm
[root@lab4 software]#
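
With all of the RPMs and their dependencies downloaded into one directory, they can later be installed on a machine without internet access. A minimal sketch, assuming the directory was copied to /root/software on the target host:

# cd /root/software
# yum localinstall *.rpm

yum localinstall resolves the dependencies between the local files themselves, so the order of the packages does not matter.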

[root@lab4 software]# yumdownloader "@Development Tools" --destdir /root/software/ --resolve
Loaded plugins: fastestmirror
....
[root@lab4 software]# ls
apr-1.4.8-3.el7.i686.rpm libgcc-4.8.5-11.el7.i686.rpm perl-macros-5.16.3-291.el7.x86_64.rpm
apr-1.4.8-3.el7.x86_64.rpm libgfortran-4.8.5-11.el7.x86_64.rpm perl-parent-0.225-244.el7.noarch.rpm
apr-util-1.5.2-6.el7.i686.rpm libgnome-keyring-3.8.0-3.el7.x86_64.rpm perl-PathTools-3.40-5.el7.x86_64.rpm
apr-util-1.5.2-6.el7.x86_64.rpm libldb-1.1.26-1.el7.x86_64.rpm perl-Pod-Escapes-1.04-291.el7.noarch.rpm
autoconf-2.69-11.el7.noarch.rpm libmodman-2.0.1-8.el7.i686.rpm perl-podlators-2.5.1-3.el7.noarch.rpm
automake-1.13.4-3.el7.noarch.rpm libmpc-1.0.1-3.el7.x86_64.rpm perl-Pod-Perldoc-3.20-4.el7.noarch.rpm
binutils-2.25.1-22.base.el7.x86_64.rpm libproxy-0.4.11-10.el7.i686.rpm perl-Pod-Simple-3.28-4.el7.noarch.rpm
bison-2.7-4.el7.x86_64.rpm libquadmath-4.8.5-11.el7.x86_64.rpm perl-Pod-Usage-1.63-3.el7.noarch.rpm
boost-system-1.53.0-26.el7.x86_64.rpm libquadmath-devel-4.8.5-11.el7.x86_64.rpm perl-Scalar-List-Utils-1.27-248.el7.x86_64.rpm
boost-thread-1.53.0-26.el7.x86_64.rpm libselinux-2.5-6.el7.i686.rpm perl-Socket-2.010-4.el7.x86_64.rpm
byacc-1.9.20130304-3.el7.x86_64.rpm libsepol-2.5-6.el7.i686.rpm perl-srpm-macros-1-8.el7.noarch.rpm
bzip2-1.0.6-13.el7.x86_64.rpm libstdc++-4.8.5-11.el7.i686.rpm perl-Storable-2.45-3.el7.x86_64.rpm
cpp-4.8.5-11.el7.x86_64.rpm libstdc++-devel-4.8.5-11.el7.x86_64.rpm perl-TermReadKey-2.30-20.el7.x86_64.rpm
cscope-15.8-9.el7.x86_64.rpm libtalloc-2.1.6-1.el7.x86_64.rpm perl-Test-Harness-3.28-3.el7.noarch.rpm
ctags-5.8-13.el7.x86_64.rpm libtasn1-3.8-3.el7.i686.rpm perl-Text-ParseWords-3.29-4.el7.noarch.rpm
cups-libs-1.6.3-26.el7.x86_64.rpm libtdb-1.3.8-1.el7_2.x86_64.rpm perl-Thread-Queue-3.02-2.el7.noarch.rpm
cyrus-sasl-lib-2.1.26-20.el7_2.i686.rpm libtevent-0.9.28-1.el7.x86_64.rpm perl-threads-1.87-4.el7.x86_64.rpm
diffstat-1.57-4.el7.x86_64.rpm libtool-2.4.2-21.el7_2.x86_64.rpm perl-threads-shared-1.43-6.el7.x86_64.rpm
doxygen-1.8.5-3.el7.x86_64.rpm libuuid-2.23.2-33.el7.i686.rpm perl-Time-HiRes-1.9725-3.el7.x86_64.rpm
dwz-0.11-3.el7.x86_64.rpm libverto-0.2.5-4.el7.i686.rpm perl-Time-Local-1.2300-2.el7.noarch.rpm
dyninst-8.2.0-2.el7.x86_64.rpm libwbclient-4.4.4-12.el7_3.x86_64.rpm perl-XML-Parser-2.41-10.el7.x86_64.rpm
elfutils-0.166-2.el7.x86_64.rpm m4-1.4.16-10.el7.x86_64.rpm pkgconfig-0.27.1-4.el7.i686.rpm
emacs-filesystem-24.3-19.el7_3.noarch.rpm mailcap-2.1.41-2.el7.noarch.rpm pkgconfig-0.27.1-4.el7.x86_64.rpm
expat-2.1.0-10.el7_3.i686.rpm make-3.82-23.el7.x86_64.rpm pytalloc-2.1.6-1.el7.x86_64.rpm
file-libs-5.11-33.el7.i686.rpm mokutil-0.9-2.el7.x86_64.rpm rcs-5.9.0-5.el7.x86_64.rpm
flex-2.5.37-3.el7.x86_64.rpm mpfr-3.1.1-4.el7.x86_64.rpm readline-6.2-9.el7.i686.rpm
gcc-4.8.5-11.el7.x86_64.rpm ncurses-libs-5.9-13.20130511.el7.i686.rpm redhat-rpm-config-9.1.0-72.el7.centos.noarch.rpm
gcc-c++-4.8.5-11.el7.x86_64.rpm neon-0.30.0-3.el7.i686.rpm rpm-build-4.11.3-21.el7.x86_64.rpm
gcc-gfortran-4.8.5-11.el7.x86_64.rpm neon-0.30.0-3.el7.x86_64.rpm rpm-sign-4.11.3-21.el7.x86_64.rpm
gdb-7.6.1-94.el7.x86_64.rpm nettle-2.7.1-8.el7.i686.rpm rsync-3.0.9-17.el7.x86_64.rpm
gettext-0.18.2.1-4.el7.x86_64.rpm nss-softokn-freebl-3.16.2.3-14.4.el7.i686.rpm samba-4.4.4-12.el7_3.x86_64.rpm
gettext-common-devel-0.18.2.1-4.el7.noarch.rpm openssl-libs-1.0.1e-60.el7_3.1.i686.rpm samba-client-libs-4.4.4-12.el7_3.x86_64.rpm
gettext-devel-0.18.2.1-4.el7.x86_64.rpm p11-kit-0.20.7-3.el7.i686.rpm samba-common-4.4.4-12.el7_3.noarch.rpm
git-1.8.3.1-6.el7_2.1.x86_64.rpm pakchois-0.4-10.el7.i686.rpm samba-common-libs-4.4.4-12.el7_3.x86_64.rpm
glib2-2.46.2-4.el7.i686.rpm pakchois-0.4-10.el7.x86_64.rpm samba-common-tools-4.4.4-12.el7_3.x86_64.rpm
glibc-2.17-157.el7_3.1.i686.rpm patch-2.7.1-8.el7.x86_64.rpm samba-libs-4.4.4-12.el7_3.x86_64.rpm
glibc-devel-2.17-157.el7_3.1.x86_64.rpm patchutils-0.3.3-4.el7.x86_64.rpm sqlite-3.7.17-8.el7.i686.rpm
glibc-headers-2.17-157.el7_3.1.x86_64.rpm pcre-8.32-15.el7_2.1.i686.rpm subversion-1.7.14-10.el7.i686.rpm
gmp-6.0.0-12.el7_1.i686.rpm perl-5.16.3-291.el7.x86_64.rpm subversion-1.7.14-10.el7.x86_64.rpm
gnutls-3.3.24-1.el7.i686.rpm perl-Carp-1.26-244.el7.noarch.rpm subversion-libs-1.7.14-10.el7.i686.rpm
httpd-2.4.6-45.el7.centos.x86_64.rpm perl-constant-1.27-2.el7.noarch.rpm subversion-libs-1.7.14-10.el7.x86_64.rpm
httpd-tools-2.4.6-45.el7.centos.x86_64.rpm perl-Data-Dumper-2.145-3.el7.x86_64.rpm swig-2.0.10-5.el7.x86_64.rpm
indent-2.2.11-13.el7.x86_64.rpm perl-Encode-2.51-7.el7.x86_64.rpm systemtap-3.0-7.el7.x86_64.rpm
intltool-0.50.2-6.el7.noarch.rpm perl-Error-0.17020-2.el7.noarch.rpm systemtap-client-3.0-7.el7.x86_64.rpm
kernel-devel-3.10.0-514.10.2.el7.x86_64.rpm perl-Exporter-5.68-3.el7.noarch.rpm systemtap-devel-3.0-7.el7.x86_64.rpm
kernel-headers-3.10.0-514.10.2.el7.x86_64.rpm perl-File-Path-2.09-2.el7.noarch.rpm systemtap-runtime-3.0-7.el7.x86_64.rpm
keyutils-libs-1.5.8-3.el7.i686.rpm perl-File-Temp-0.23.01-3.el7.noarch.rpm trousers-0.3.13-1.el7.i686.rpm
krb5-libs-1.14.1-27.el7_3.i686.rpm perl-Filter-1.49-3.el7.x86_64.rpm unzip-6.0-16.el7.x86_64.rpm
libcom_err-1.42.9-9.el7.i686.rpm perl-Getopt-Long-2.40-2.el7.noarch.rpm zip-3.0-11.el7.x86_64.rpm
libdb-5.3.21-19.el7.i686.rpm perl-Git-1.8.3.1-6.el7_2.1.noarch.rpm zlib-1.2.7-17.el7.i686.rpm
libdwarf-20130207-4.el7.x86_64.rpm perl-HTTP-Tiny-0.033-3.el7.noarch.rpm
libffi-3.0.13-18.el7.i686.rpm perl-libs-5.16.3-291.el7.x86_64.rpm
[root@lab4 software]#
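
For a larger set such as the Development Tools group, it can be easier to turn the download directory into a local yum repository than to install the RPMs one by one. A sketch, assuming the createrepo package is available and the files live in /root/software (the repo id local-software and the file name local.repo are arbitrary choices):

# createrepo /root/software
# vi /etc/yum.repos.d/local.repo
[local-software]
name=Local software
baseurl=file:///root/software
enabled=1
gpgcheck=0
# yum --disablerepo='*' --enablerepo=local-software groupinstall "Development Tools"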

RHEL 7 / CENTOS 7 USE CLASSIC ETH0 STYLE DEVICE NAMES

WHY WAS IT CHANGED ?

Red Hat Enterprise Linux 7 introduced a new scheme for naming network devices called “Consistent Device Naming”. It is called Consistent Device Naming because previously the names of the devices (eth0, eth1, eth2) depended entirely on the order in which the kernel detected them at boot. In certain circumstances, such as adding new devices to an existing system, that naming scheme could become unreliable.

FURTHER READING

The official Red Hat Enterprise Linux 7 documentation covers consistent network device naming in detail.

WHAT DOES THE NEW SCHEME LOOK LIKE ?

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 00:0c:29:89:1b:2e brd ff:ff:ff:ff:ff:ff
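
To see where a name like eno16777736 comes from, udev's net_id builtin can be queried for the interface. This is only a way to inspect the properties udev derives from the hardware, not a configuration step, and the output depends on your machine:

# udevadm test-builtin net_id /sys/class/net/eno16777736 2>/dev/null | grep ID_NET_NAME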

HOW DO I CHANGE IT BACK TO ETH[0-9] STYLE NAMING ?

In summary, we need to:

  • Add extra parameters to the kernel configuration
  • Add this to the boot configuration
  • Restart the machine
  • Move the existing interfaces to the new scheme
  • Restart the network service

ADD EXTRA PARAMETERS TO THE KERNEL CONFIGURATION

Modify the grub bootloader to pass some extra parameters to the kernel at boot time. The kernel will then use these options to decide which naming scheme to use.

First, make a backup of the grub configuration file.

# cp /etc/default/grub /etc/default/grub.bak

Then we can safely edit the grub configuration file

# vim /etc/default/grub

The config file will look similar to the following

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet "
GRUB_DISABLE_RECOVERY="true"

The line that starts with "GRUB_CMDLINE_LINUX" needs some extra parameters added.

The extra parameters are:

biosdevname=0 net.ifnames=0

The final file then looks like this:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet biosdevname=0 net.ifnames=0 "
GRUB_DISABLE_RECOVERY="true"
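
If you prefer to script the edit instead of opening the file in vim, a sed one-liner along these lines should work (a sketch; re-check /etc/default/grub afterwards, since unusual quoting in the existing line can break the pattern):

# sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 biosdevname=0 net.ifnames=0"/' /etc/default/grub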

ADD THIS TO THE BOOT CONFIGURATION

If you are using a UEFI system, rebuild grub with this command:

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Otherwise, use the following:

# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3c913eca0eab4ebcb6da402e03553776
Found initrd image: /boot/initramfs-0-rescue-3c913eca0eab4ebcb6da402e03553776.img
done
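
If you are not sure which of the two variants above applies to your machine, checking for the EFI firmware directory is a quick test:

# [ -d /sys/firmware/efi ] && echo "UEFI" || echo "BIOS"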

RESTART THE MACHINE

Now we will restart the host, and the new naming scheme will take effect on reboot.

# shutdown -r now

MOVE THE EXISTING INTERFACES TO THE NEW SCHEME

You may now need to reconfigure your network interface.

Here you can see that the network interface is up; however, there is no IP information associated with the new device name.

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 00:0c:29:89:1b:2e brd ff:ff:ff:ff:ff:ff

For this example we will assume NetworkManager is not in use, so we will edit the network configuration files in /etc/sysconfig/network-scripts directly.

Change into the network scripts directory.

# cd /etc/sysconfig/network-scripts/

Rename the old interface configuration file to the new scheme:

# mv ifcfg-eno16777736 ifcfg-eth0

Update the contents of the configuration file to use the new scheme:

# sed -i 's/eno16777736/eth0/' ifcfg-eth0
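
After the rename and the sed substitution, the device-related lines in ifcfg-eth0 should look roughly like this (a sketch; whichever of these keys your file already contains should now read eth0, while the remaining settings such as BOOTPROTO and IPADDR stay untouched):

DEVICE=eth0
NAME=eth0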

If a udev rule pins the interface to its old name (for example /etc/udev/rules.d/90-eno-fix.rules, generated automatically on a systemd update), edit that file and change the NAME value from the old device name to eth0:

# vi /etc/udev/rules.d/90-eno-fix.rules

Before:
# This file was automatically generated on systemd update
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:9e:8f:95", NAME="eno16777736"

After:
# This file was automatically generated on systemd update
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:9e:8f:95", NAME="eth0"
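
To make sure nothing else still references the old device name, a quick recursive grep over the usual configuration locations helps (adjust the paths to your setup):

# grep -rl 'eno16777736' /etc/sysconfig/network-scripts/ /etc/udev/rules.d/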


RESTART THE NETWORK SERVICE

Finally restart the network service so the changes take effect.

# systemctl restart network
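
Once the service is back up, confirm that the interface now appears under its eth0 name with the expected IP configuration:

# ip addr show eth0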

Nginx for serving files bigger than 1GB

I realized that nginx was not serving files larger than 1 GB. After investigation I found that it was due to the proxy_max_temp_file_size directive, which by default limits buffering to 1024 MB.

This directive sets the maximum size of the temporary file nginx writes to disk when a proxied response does not fit into the proxy buffers. Once that limit is reached, the remainder of the response is passed to the client synchronously as it arrives from the upstream server.
Setting proxy_max_temp_file_size to 0 disables temporary files altogether.

In this case it was enough to set the directive inside the location block, although it can also be used in the server and http contexts. With this configuration nginx will serve files larger than 1 GB.

location / {
...
proxy_max_temp_file_size 1924m;
...
}
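
Before restarting, it is worth validating the configuration syntax:

nginx -t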

Restart nginx for the changes to take effect:

service nginx restart

find large files on Linux

Find large files on Fedora / CentOS / RHEL

Search for big files (50 MB or more) in your current directory:

find ./ -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Output:

[root@my.server.com:~]pwd
/home

[root@my.server.com:~]find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
./user1/tmp/analog/cache: 79M
./syscall8/public_html/wp.zip: 146M
./bob54/public_html/adserver/var/debug.log: 86M
./marqu35/logs/adserver.site.com-May-2014.gz: 70M
./astrolab72/tmp/analog/cache: 75M

Search the /var directory for files of 80 MB or more:

find /var -type f -size +80000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Find large files on Debian / Ubuntu Linux

Search the current directory:

find ./ -type f -size +10000k -exec ls -lh {} \; | awk '{ print $8 ": " $5 }'

Search the /home directory:

find /home -type f -size +10000k -exec ls -lh {} \; | awk '{ print $8 ": " $5 }'
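
Another option, not tied to a size threshold, is to let du rank everything by size and show only the largest entries (shown here for /home; any path works):

du -ah /home 2>/dev/null | sort -rh | head -n 20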

If you know other ways to quickly find large files on Linux, please share them with us.