CentOS and RHEL 6.5: Hadoop 2.4 Three-Node Cluster Setup
The word Hadoop has been popular for years; any mention of big data brings it to mind. So what exactly does Hadoop do?
The official definition: Hadoop is a software platform for developing and running large-scale data processing. The key word is platform. We have a lot of data and several computers, and we know the work must be decomposed so that each computer processes its share, but we do not know how to assign the tasks or how to gather the results back. That, roughly, is what Hadoop does for us.
The Hadoop Distributed File System (HDFS) is a very important part of Hadoop. This article briefly describes several characteristics of HDFS and then analyzes the principles behind them, that is, how these characteristics are achieved. First, a quick tour of the main members of the Hadoop family.
1, HDFS
The first thing to consider is how to store and manage huge amounts of data. For this there is the distributed file system, HDFS.
2, Map-Reduce
Once the data is saved, how do we process it? And what if the processing is complex, not just a sort or a search? There needs to be a place where we can write code for our own operations, which the framework then decomposes, distributes, and collects back internally. That is Map-Reduce.
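As a first taste of what running a job looks like, Hadoop ships a streaming jar that lets ordinary shell commands act as mapper and reducer. A minimal sketch, assuming the Hadoop 2.4.0 layout installed later in this article and placeholder input/output paths:
# trivial streaming job: cat as the mapper, wc as the reducer
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar \
  -input /input -output /output-streaming \
  -mapper /bin/cat -reducer /usr/bin/wc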
3, Hive
Writing code works, but it is a lot of trouble, and database people are already familiar with SQL. If SQL statements could do the processing, Map-Reduce code would not be needed; hence Hive. Big data is in any case inseparable from databases and tables, and Hive maps data into tables that can then be operated on conveniently with SQL. Its drawback is that it is slow.
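For illustration only (the table name and columns are hypothetical, not from the original), a Hive session over a tab-separated file could look like:
# create a table over tab-separated text and query it with SQL
hive -e "CREATE TABLE words (k STRING, v STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';"
hive -e "SELECT k, COUNT(*) FROM words GROUP BY k;"
Behind the scenes Hive compiles the SELECT into Map-Reduce jobs, which is also why it is slower than a purpose-built store.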
4, HBase
Since Hive is slow, is there a faster database? HBase is one: it was born for queries, and it queries fast.
5, Sqoop
We already have a lot of data sitting in well-known databases such as MySQL and Oracle; how do we import it into HDFS? Sqoop provides the conversion between relational databases and HDFS.
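A hedged sketch of an import (the connection string, credentials, and table are placeholders; the flags are standard Sqoop options):
# copy a MySQL table into an HDFS directory
sqoop import --connect jdbc:mysql://dbhost/mydb \
  --username dbuser --password secret \
  --table orders --target-dir /user/root/orders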
6, Flume
With so many computers at work, if one of them, or one of the services, develops a small problem, how do you find out what went wrong? Flume provides a highly reliable log collection system.
7, Mahout
Much big data processing is data mining, which relies on a number of common machine learning algorithms. Since the algorithms are fixed and well known, a project called Mahout implements them so that developers can put them to use quickly.
8, Zookeeper
ZooKeeper's goal is to wrap up key services that are complex and error-prone, and to provide users with a simple, easy-to-use interface and an efficient, stable system. Plainly put, it is the zoo keeper: it manages the elephant (Hadoop) and the bee (Hive).
These are the key members of the Hadoop family; a few less common ones are not introduced here. Knowing the role of each member gives a preliminary picture of what Hadoop as a whole can do; the rest is to learn the principles and usage of each part step by step.
HDFS features
1, High fault tolerance. This is the core characteristic of HDFS: it is deployed over large amounts of data on inexpensive hardware, and even if some disks fail, HDFS can quickly recover the lost data.
2, Simple consistency. HDFS suits write-once, read-many programs: once a file is written, it is not modified. MapReduce programs and web crawlers fit this model perfectly.
3, Moving computation instead of moving data. This is easy to explain: the data is too big to move cheaply, so HDFS provides interfaces that let programs move themselves close to where the data is stored.
4, Platform compatibility. The platform should smooth over differences between systems, which allows HDFS to be used more widely.
HDFS architecture
1, HDFS is a typical master-slave system: the master node is the NameNode and the slave nodes are DataNodes.
The NameNode is the manager; its key job is managing the file system namespace. When a program needs to read data, it first asks the NameNode for the storage locations of the data blocks.
There are many DataNodes, usually organized into racks connected through switches. A DataNode's main job is to store data blocks, and it must also report block information to the NameNode, sending a "heartbeat" every 3 seconds. If no heartbeat is received for 10 minutes, the DataNode is considered dead and its data must be recovered elsewhere.
2, Next, the DataNode replication principle, which is one of the reasons for HDFS's high fault tolerance.
Data on HDFS is not stored just once: each file is replicated several times (3 by default), and the copies are placed in different locations to avoid data loss.
How are the copies placed? Of the three replicas of each block, the first is the data itself, the second is saved on a different DataNode in the same rack, and the last is saved on a DataNode in a different rack.
3, Besides the NameNode and DataNodes there is the SecondaryNameNode. Its main role is to periodically merge the NameNode's edit log into the namespace image it keeps. If the NameNode is damaged, part of the metadata, though not all of it, can be recovered manually from the SecondaryNameNode.
4, The SecondaryNameNode does not solve the NameNode single point of failure. To improve fault tolerance, HDFS also offers an HA (high availability) mechanism with two NameNodes, and a Federation mechanism with multiple NameNodes.
5, The data block. Just as a Linux disk has a smallest unit for every read and write, 512 bytes, HDFS has the same concept, but the size becomes 64MB. The reason is that HDFS reads involve repeated seeks, and we want seek time to be a small fraction of transfer time. If seeking is to cost one hundredth of the transfer time, then with a 10ms seek time and a 100MB/s transfer speed the block size works out to 100MB/s × 10ms × 100 = 100MB. As hard disk transfer speeds improve, the block size may be increased further. But a block that is too large is also bad: one task processes one block, so tasks become slow. When a file is smaller than 64MB, the system still assigns a whole block to it, but no actual disk space is wasted.
6, For large numbers of small files, HDFS provides two container formats that manage files in a unified way: SequenceFile and MapFile.
7, Compression. Compression reduces space. There are three common codecs: gzip, LZO, and Snappy. gzip has the highest compression ratio but is CPU-hungry and slow; Snappy has the lowest compression ratio but is fast; LZO sits in the middle.
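As an illustration (an assumption, not part of the original setup), Snappy can be enabled for intermediate map output in mapred-site.xml:
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>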
HDFS operations
Finally, some common HDFS commands.
1, hadoop fs - basic file operations:
hadoop fs -mkdir <path>   create a directory
hadoop fs -ls <path>      list files and directories
hadoop fs -put <file> <path>   upload a file
hadoop fs -get <file> <localpath>   download a file
hadoop fs -text <file>    view a file
hadoop fs -rm <file>      delete a file
2, hadoop namenode -format   format the NameNode
3, hadoop job -submit   submit a job
hadoop job -kill   kill a job
4, hadoop fsck <path> -blocks   print a report of blocks
hadoop fsck <path> -racks   print the DataNode network topology
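A short session tying these together (the paths are illustrative):
hadoop fs -mkdir /demo
hadoop fs -put /tmp/test/test1.txt /demo
hadoop fs -ls /demo
hadoop fs -text /demo/test1.txt
hadoop fs -get /demo/test1.txt /tmp/test1.copy
hadoop fs -rm /demo/test1.txt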
Summary
This article described several characteristics of HDFS, explained some of the key principles behind them, and finally listed common HDFS commands.
After reading it you should have a basic understanding of HDFS; for the finer details of the principles, read a dedicated book.
A follow-up will cover operating HDFS from Java.
Hadoop MapReduce Next Generation - Cluster Setup
192.168.1.40 cluster1.rmohan.com = Master
192.168.1.41 cluster2.rmohan.com = Slave1
192.168.1.42 cluster3.rmohan.com = Slave2
[root@cluster1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.40 cluster1.rmohan.com cluster1
192.168.1.41 cluster2.rmohan.com cluster2
192.168.1.42 cluster3.rmohan.com cluster3
Generate SSH keys with ssh-keygen -t rsa. Do the same on cluster2 and cluster3 and exchange the public keys as shown below.
[root@cluster1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
ef:a6:45:ea:bd:c5:f7:6e:1e:3a:16:71:50:51:f8:47 root@cluster1.rmohan.com
The key's randomart image is:
+--[ RSA 2048]----+
| .++|
| .. E|
| .o |
| . .o|
| S . o .|
| + . . |
| . o o o. |
| . +.. +..o|
| oo+….++|
+-----------------+
[root@cluster1 ~]#
[root@cluster1 .ssh]# ls
id_rsa id_rsa.pub known_hosts
Enable the following on all 3 servers:
vi /etc/ssh/sshd_config
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
scp .ssh/id_rsa.pub root@cluster2.rmohan.com:/root/.ssh/id_rsa.pub
scp .ssh/id_rsa.pub root@cluster3.rmohan.com:/root/.ssh/id_rsa.pub
[root@cluster3 .ssh]# cat id_rsa.pub >> authorized_keys
cd /root/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1mH8BU9kD89womyrIJ10+xNNmXeNszbKX+C1M03ygLynX4ppFo/UX26UnS1GdZFUMTbTKeHAAcIp7n0puHZU4vgF+SAOZMMaeOJT5Qdt6d8CK2neyTi3kYvP6b5+D0ug9ZENG1hc2V+WfYzvimoNsA1lCLz3v8JF3/Ubs9IkU3u+/ipNzKBW0jk/RmDXwGN4SIV7FzoyKVPsdc9kpMKBBb/pX+IlZcd5KvpE/0RaWSKDFh5rE6exNUyw5V3zCNHDLHINAKq+fT+z8dvGCd0ejV7694KrCaxiUKCDWtY2WzVZ43aqvAHqZsqCIkVMOdC7CW5anvvJKcHduuumyTDUwQ== root@cluster2.rmohan.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu+MYbCwHl8NEKmiY4cExIZfVjTv4F2xGlJDXX+/pJKt1Xk2QFZg4i6JCN2h/GreSlpX/GenDU/C21QG/LN3o5hg2Y/88rFStY2O0K5Z44twVwkc+BJxTpBsfNqQfKnqOEVKOS6xGYK7LT3ytr8NaLp/bGV49bLrMoZCNIYuizH5BTW3IqxKsp0JJ5XSqTuuPZh4LPPn8Ecl9HvVDxJ1xBP00FYSTPg48YErvyVUMpercEIoWM3k+rJUh3DqFOyN+O92nqy7/S290rM6dk9p0R6/43MZjVYM61c6AtlxG7m4bt3Gk0dxC8WHbQRTfEdbUuq/jqXN1LPKZV8KCmPGvuQ== root@cluster3.rmohan.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxHQ1ZgJc1Fj65OlHqg89Kli6e+lO9D7gde9HNWKMu6ELWV+jMMbzv9b0o7Q4zfCd5T3+0BuV19iTt3rvEAk2NyVSIQFh5s1XNmIb6mSIt+ORg/agIwES9Hx747gX3iV4uidbhlgGMtvzpBO2NHUeJt4vh7ALTQ26Frkcb3npPEt2KiWmpOcx5C9EJqeMWTt81HFXwZsj/wEX0Vi977KFspvgSnoxcWPd2WTa4WFdq6Lyo8vQYigo85AWv+7CxiLnliR0Zf+W9QWOTALCRRlwnCOYOU0Q+8qEPK3aD0NW4fJ8Ez9gvMZEo5SrStyqIp0xlRrJzQjIlZR0YrmBKPRK8Q== root@cluster1.rmohan.com
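With the keys and permissions in place, passwordless login can be verified from cluster1 (a quick sanity check, not in the original steps):
ssh cluster2.rmohan.com hostname
ssh cluster3.rmohan.com hostname
Both commands should print the remote hostname without prompting for a password.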
Install Java
All 3 Nodes
[root@cluster1 software]# rpm -ivh jdk-7u17-linux-x64.rpm
[root@cluster1 software]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17
export JRE_HOME=/usr/java/jdk1.7.0_17/jre
export PATH=$PATH:/usr/java/jdk1.7.0_17/bin
export CLASSPATH=./:/usr/java/jdk1.7.0_17/lib:/usr/java/jdk1.7.0_17/jre/lib
[root@cluster1 software]#
vi /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/hadoop/hadoop-2.4.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
mkdir -p /usr/hadoop/dfs/name /usr/hadoop/dfs/data
wget http://mirror.nus.edu.sg/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0.tar.gz
tar -xvf hadoop-2.4.0.tar.gz -C /usr/hadoop/
[root@cluster1 etc]# cd hadoop/
[root@cluster1 hadoop]# ls
capacity-scheduler.xml hadoop-env.sh httpfs-env.sh mapred-env.cmd ssl-client.xml.example
configuration.xsl hadoop-metrics2.properties httpfs-log4j.properties mapred-env.sh ssl-server.xml.example
container-executor.cfg hadoop-metrics.properties httpfs-signature.secret mapred-queues.xml.template yarn-env.cmd
core-site.xml hadoop-policy.xml httpfs-site.xml mapred-site.xml.template yarn-env.sh
hadoop-env.cmd hdfs-site.xml log4j.properties slaves yarn-site.xml
[root@cluster1 hadoop]# pwd
/usr/hadoop/hadoop-2.4.0/etc/hadoop
~/hadoop-2.4.0/etc/hadoop/hadoop-env.sh
~/hadoop-2.4.0/etc/hadoop/yarn-env.sh
~/hadoop-2.4.0/etc/hadoop/slaves
~/hadoop-2.4.0/etc/hadoop/core-site.xml
~/hadoop-2.4.0/etc/hadoop/hdfs-site.xml
~/hadoop-2.4.0/etc/hadoop/mapred-site.xml
~/hadoop-2.4.0/etc/hadoop/yarn-site.xml
Add the Java details:
1) hadoop-env.sh
vi hadoop-env.sh
# The java implementation to use.
# replace the stock line "export JAVA_HOME=${JAVA_HOME}" with:
export JAVA_HOME=/usr/java/jdk1.7.0_17
2) yarn-env.sh
vi yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17
3) slaves
cat slaves
cluster2.rmohan.com
cluster3.rmohan.com
4) core-site.xml
vi core-site.xml
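The original post does not show the file contents. A minimal core-site.xml consistent with the rest of this walkthrough is sketched below; the hadoopmaster alias and port 9001 are inferred from the startup logs and netstat output later, so treat the exact values as assumptions:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoopmaster:9001</value>
  </property>
</configuration>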
5) mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
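After copying the template, the one setting usually needed to run MapReduce on YARN (a minimal sketch, using the standard Hadoop 2.x property) is:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>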
6) hdfs-site.xml
vi hdfs-site.xml
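A minimal hdfs-site.xml pointing at the directories created earlier; a replication factor of 3 matches the defaultReplication shown in the format log, but the exact file contents are an assumption:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>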
7) yarn-site.xml
vi yarn-site.xml
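A minimal yarn-site.xml sketch. The aux-services entry is the standard requirement for the MapReduce shuffle; the three address properties are an assumption based on the non-default ports 8080-8082 that show up in the netstat output later (the NodeManagers connect to 8082, which would be the resource tracker):
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoopmaster:8080</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoopmaster:8081</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoopmaster:8082</value>
  </property>
</configuration>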
Copy the files to Node2 and Node3
scp -r hadoop-2.4.0 root@cluster2.rmohan.com:/usr/hadoop/
scp -r hadoop-2.4.0 root@cluster3.rmohan.com:/usr/hadoop/
/usr/hadoop/hadoop-2.4.0/sbin
[root@cluster1 sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/06/06 17:37:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoopmaster]
The authenticity of host 'hadoopmaster (192.168.1.40)' can't be established.
RSA key fingerprint is 42:90:4a:3c:a3:fc:8e:85:ca:f9:d6:10:bb:93:ed:b2.
Are you sure you want to continue connecting (yes/no)? yes
hadoopmaster: Warning: Permanently added 'hadoopmaster' (RSA) to the list of known hosts.
hadoopmaster: starting namenode, logging to /usr/hadoop/hadoop-2.4.0/logs/hadoop-root-namenode-cluster1.rmohan.com.out
cluster2.rmohan.com: datanode running as process 2815. Stop it first.
cluster3.rmohan.com: datanode running as process 2803. Stop it first.
Starting secondary namenodes [hadoopmaster]
hadoopmaster: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.4.0/logs/hadoop-root-secondarynamenode-cluster1.rmohan.com.out
14/06/06 17:37:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop/hadoop-2.4.0/logs/yarn-root-resourcemanager-cluster1.rmohan.com.out
cluster3.rmohan.com: starting nodemanager, logging to /usr/hadoop/hadoop-2.4.0/logs/yarn-root-nodemanager-cluster3.rmohan.com.out
cluster2.rmohan.com: starting nodemanager, logging to /usr/hadoop/hadoop-2.4.0/logs/yarn-root-nodemanager-cluster2.rmohan.com.out
[root@cluster1 sbin]# ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
14/06/06 18:24:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [hadoopmaster]
hadoopmaster: stopping namenode
cluster2.rmohan.com: stopping datanode
cluster3.rmohan.com: stopping datanode
Stopping secondary namenodes [hadoopmaster]
hadoopmaster: stopping secondarynamenode
14/06/06 18:25:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
cluster2.rmohan.com: stopping nodemanager
cluster3.rmohan.com: stopping nodemanager
no proxyserver to stop
[root@cluster1 sbin]#
[root@cluster1 hadoop-2.4.0]# /usr/hadoop/hadoop-2.4.0/sbin/slaves.sh /usr/java/jdk1.7.0_17/bin/jps
cluster3.rmohan.com: 3886 NodeManager
cluster2.rmohan.com: 4057 Jps
cluster3.rmohan.com: 4052 Jps
cluster2.rmohan.com: 3789 DataNode
cluster3.rmohan.com: 3784 DataNode
cluster2.rmohan.com: 3891 NodeManager
[root@cluster1 hadoop-2.4.0]# /usr/java/jdk1.7.0_17/bin/jps
5407 NameNode
5726 ResourceManager
5584 SecondaryNameNode
6442 Jps
[root@cluster1 hadoop-2.4.0]#
[root@cluster1 bin]# ./hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/06/06 17:55:05 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = cluster1.rmohan.com/192.168.1.40
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.0
STARTUP_MSG: classpath = /usr/hadoop/hadoop-2.4.0/etc/hadoop:... (long classpath listing omitted)
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG: java = 1.7.0_17
************************************************************/
14/06/06 17:55:05 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/06/06 17:55:05 INFO namenode.NameNode: createNameNode [-format]
14/06/06 17:55:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-ab82af59-a88b-4e6a-b2b6-9e7ba859e781
14/06/06 17:55:06 INFO namenode.FSNamesystem: fsLock is fair:true
14/06/06 17:55:06 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/06/06 17:55:06 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/06/06 17:55:06 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/06/06 17:55:06 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/06/06 17:55:06 INFO util.GSet: Computing capacity for map BlocksMap
14/06/06 17:55:06 INFO util.GSet: VM type = 64-bit
14/06/06 17:55:06 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/06/06 17:55:06 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/06/06 17:55:06 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/06/06 17:55:06 INFO blockmanagement.BlockManager: defaultReplication = 3
14/06/06 17:55:06 INFO blockmanagement.BlockManager: maxReplication = 512
14/06/06 17:55:06 INFO blockmanagement.BlockManager: minReplication = 1
14/06/06 17:55:06 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/06/06 17:55:06 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/06/06 17:55:06 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/06/06 17:55:06 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/06/06 17:55:06 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/06/06 17:55:06 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
14/06/06 17:55:06 INFO namenode.FSNamesystem: supergroup = supergroup
14/06/06 17:55:06 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/06/06 17:55:06 INFO namenode.FSNamesystem: HA Enabled: false
14/06/06 17:55:06 INFO namenode.FSNamesystem: Append Enabled: true
14/06/06 17:55:07 INFO util.GSet: Computing capacity for map INodeMap
14/06/06 17:55:07 INFO util.GSet: VM type = 64-bit
14/06/06 17:55:07 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/06/06 17:55:07 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/06/06 17:55:07 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/06/06 17:55:07 INFO util.GSet: Computing capacity for map cachedBlocks
14/06/06 17:55:07 INFO util.GSet: VM type = 64-bit
14/06/06 17:55:07 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/06/06 17:55:07 INFO util.GSet: capacity = 2^18 = 262144 entries
14/06/06 17:55:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/06/06 17:55:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/06/06 17:55:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/06/06 17:55:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/06/06 17:55:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/06/06 17:55:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/06/06 17:55:07 INFO util.GSet: VM type = 64-bit
14/06/06 17:55:07 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/06/06 17:55:07 INFO util.GSet: capacity = 2^15 = 32768 entries
14/06/06 17:55:07 INFO namenode.AclConfigFlag: ACLs enabled? false
Re-format filesystem in Storage Directory /usr/hadoop/dfs/name ? (Y or N) y
14/06/06 17:55:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1380935534-192.168.1.40-1402048509097
14/06/06 17:55:09 INFO common.Storage: Storage directory /usr/hadoop/dfs/name has been successfully formatted.
14/06/06 17:55:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/06/06 17:55:09 INFO util.ExitUtil: Exiting with status 0
14/06/06 17:55:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cluster1.rmohan.com/192.168.1.40
************************************************************/
[root@cluster1 bin]# netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2588/sshd
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 2461/sshd
tcp 0 0 192.168.1.40:9001 0.0.0.0:* LISTEN 3610/java
tcp 0 0 192.168.1.40:22 192.168.1.4:51736 ESTABLISHED 2755/sshd
tcp 0 0 192.168.1.40:22 192.168.1.1:58989 ESTABLISHED 2465/sshd
tcp 0 176 192.168.1.40:22 192.168.1.1:58985 ESTABLISHED 2461/sshd
tcp 0 0 ::ffff:192.168.1.40:8080 :::* LISTEN 3744/java
tcp 0 0 ::ffff:192.168.1.40:8081 :::* LISTEN 3744/java
tcp 0 0 ::ffff:192.168.1.40:8082 :::* LISTEN 3744/java
tcp 0 0 :::22 :::* LISTEN 2588/sshd
tcp 0 0 :::8088 :::* LISTEN 3744/java
tcp 0 0 ::1:6010 :::* LISTEN 2461/sshd
tcp 0 0 :::8033 :::* LISTEN 3744/java
tcp 0 0 ::ffff:192.168.1.40:8082 ::ffff:192.168.1.42:49955 ESTABLISHED 3744/java
tcp 0 0 ::ffff:192.168.1.40:8082 ::ffff:192.168.1.41:55070 ESTABLISHED 3744/java
[root@cluster1 bin]#
URLS To Check the Cluster Details
http://cluster1.rmohan.com:8088/cluster
http://cluster1.rmohan.com:9001/status.jsp
http://cluster1.rmohan.com:50070/dfshealth.jsp
[root@cluster1 hadoop-2.4.0]# ./bin/hdfs dfsadmin -report
14/06/06 18:46:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 105689374720 (98.43 GB)
Present Capacity: 92940345344 (86.56 GB)
DFS Remaining: 92940296192 (86.56 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Live datanodes:
Name: 192.168.1.41:50010 (cluster2.rmohan.com)
Hostname: cluster2.rmohan.com
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6304174080 (5.87 GB)
DFS Remaining: 46540488704 (43.34 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Jun 06 18:46:15 SGT 2014
Name: 192.168.1.42:50010 (cluster3.rmohan.com)
Hostname: cluster3.rmohan.com
Decommission Status : Normal
Configured Capacity: 52844687360 (49.22 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6444855296 (6.00 GB)
DFS Remaining: 46399807488 (43.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Fri Jun 06 18:46:15 SGT 2014
[root@cluster1 hadoop-2.4.0]# ./bin/hdfs fsck / -files -blocks
14/06/06 18:48:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoopmaster:50070
FSCK started by root (auth:SIMPLE) from /192.168.1.40 for path / at Fri Jun 06 18:48:04 SGT 2014
/
Status: HEALTHY
Total size: 0 B
Total dirs: 1
Total files: 0
Total symlinks: 0
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 3
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Fri Jun 06 18:48:04 SGT 2014 in 10 milliseconds
The filesystem under path '/' is HEALTHY
How to create an HDFS directory
[root@cluster1 hadoop-2.4.0]# ./bin/hdfs dfs -mkdir /mohan
14/06/06 18:49:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@cluster1 hadoop-2.4.0]# cat /tmp/test/test1.txt
hello www.google.com
hello1 www.yahoo.com
hello2 www.msn.com
hello3 www.rediff.com
hello4 www.amazon.com
hello5 www.ebay.com
[root@cluster1 hadoop-2.4.0]# cat /tmp/test/test2.txt
the feeling that you understand and share another person's experiences and emotions : the ability to share someone else's feelings
He felt great empathy with the poor.
His months spent researching prison life gave him greater empathy towards convicts.
Poetic empathy understandably seeks a strategy of identification with victims Helen Vendler, New Republic, 5 May 2003
Origin of EMPATHY
Greek empatheia, literally, passion, from empathēs emotional, from em- + pathos feelings, emotion; more at pathos
[root@cluster1 test]# /usr/hadoop/hadoop-2.4.0/bin/hadoop fs -put -f test1.txt /mohan
14/06/06 19:45:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@cluster1 test]# /usr/hadoop/hadoop-2.4.0/bin/hadoop fs -put -f test2.txt /mohan
14/06/06 19:45:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Run the example WordCount job on the uploaded files:
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount /mohan /output
./bin/hadoop fs -cat /output/part-r-00000
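Each line of part-r-00000 is a word and its count, tab-separated. Since every hello token appears exactly once in the test files, the output should contain lines like these (illustrative, not captured from the original run):
hello   1
hello1  1
hello2  1
www.google.com  1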