{"id":3116,"date":"2014-06-06T21:02:44","date_gmt":"2014-06-06T13:02:44","guid":{"rendered":"http:\/\/rmohan.com\/?p=3116"},"modified":"2014-06-07T15:11:23","modified_gmt":"2014-06-07T07:11:23","slug":"centos-and-rhel-6-5-hadoop-2-4-3-node-server-cluster","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=3116","title":{"rendered":"Centos and Rhel 6.5 Hadoop 2.4  3 Node Server Cluster"},"content":{"rendered":"<p><strong>Centos and Rhel 6.5 Hadoop 2.4  3 Node Server <\/strong><\/p>\n<p>hadoop word has been popular for many years, the mention of big data will think hadoop, hadoop then what role is it?<br \/>\nOfficial definition: hadoop is a developing and running large-scale data processing software platform. Core Words is a platform, which means<br \/>\nwe have a lot of data, there are several computers, we know the task decomposition to process the data on each computer,<br \/>\nbut do not know how to assign tasks, how to recycle a result, hadoop probably help us did it.<\/p>\n<p>Hadoop Distributed File System (HDFS) is a very important part of Hadoop, this paper briefly describes several features of HDFS,<br \/>\nand then analyze the principles behind it, namely how to achieve these characteristics.<\/p>\n<p>1, HDFS<\/p>\n<p>We should be the first consideration is how to save huge amounts of data, how management.<br \/>\nThis has been distributed file system, HDFS.<\/p>\n<p>2, Map-Reduce<\/p>\n<p>After the data is saved, how do we deal with these data do, if the way I deal with the complexity, rather than just sort of find how to do this operation?<br \/>\nThere is a need to be able to provide a place to write code, let us write their own operations,<br \/>\nwhich then decompose inside, distribution, recycling and so forth.<\/p>\n<p>3, Hive<\/p>\n<p>Can compile the code is good, but too much trouble to compile the code and database personnel are familiar with SQL statements,<br \/>\nSQL statements can be processed, do not Map-Reduce it, so there are a Hive. And big data is inseparable from the database anyway, is inseparable from the table,<br \/>\n Hive will be able to talk about the data mapped into the data table, and then operate on the convenience, its drawback is slower.<\/p>\n<p>4, HBase<\/p>\n<p>Since Hive slower, then there is no faster database? HBase is that he was born of a query, the query quickly.<\/p>\n<p>5, Sqoop<\/p>\n<p>Did we not have a lot of well-known databases like MySQL, Oracle, data is the existence of this inside me,<br \/>\nand how imported into HDFS as well? Sqoop provides conversion between relational databases and between HDFS.<\/p>\n<p>6, Flume<br \/>\nSo much work on the computer, if one little problem, or at which service a problem, how do you know what the worse?<br \/>\nFlume provides a highly reliable log collection system.<\/p>\n<p>7, Mahout<\/p>\n<p>Many large data processing is used for data mining, there are several common that machine learning algorithms,<br \/>\nsince the algorithms are fixed and on what kinds of, it developed a thing called Mahout achieve a variety of algorithms, developers can be more quick to use.<\/p>\n<p>8, Zookeeper<\/p>\n<p>ZooKeeper goal is a good package of key services complex and error-prone, easy to use interface and the performance of high efficiency,<br \/>\nstable system functions available to the user. 
(Diagram: http://rmohan.com/wp-content/uploads/2014/06/Hadoop.png)

HDFS Features

1. High fault tolerance. This is the core characteristic of HDFS: large amounts of data are deployed on inexpensive hardware, and even if some disks fail, HDFS can quickly recover the lost data.

2. Simple consistency. HDFS suits write-once, read-many programs: once a file is written, it is not modified. MapReduce jobs and web crawlers fit this model perfectly.

3. Moving computation instead of moving data. The data is too big to move cheaply, so HDFS provides interfaces that let programs move themselves close to where the data is stored.

4. Platform portability. HDFS smooths over platform differences, which allows it to be used more widely.

HDFS Architecture

1. HDFS has a typical master/slave structure: the master node is the NameNode, the slave nodes are DataNodes.

The NameNode is the manager; its key job is managing the filesystem namespace. When a program needs to read data, it first asks the NameNode where the data blocks are stored.

There are many DataNodes, usually organized in racks that are connected through switches. A DataNode's main job is to store data blocks; it also reports its block information to the NameNode, sending a "heartbeat" every 3 seconds. If no heartbeat has been received for about 10 minutes, the DataNode is considered dead and its data must be re-replicated elsewhere.

2. Next, the replication principle, which is one reason for HDFS's high fault tolerance.

HDFS does not store just one copy of your data: each file is replicated several times (3 by default), and the copies are placed in different locations to avoid data loss.

How are the copies placed? Of the three replicas of each block, the first is the data itself on the writing node, the second is stored on a different DataNode in the same rack, and the last on a DataNode in a different rack.

3. Besides the NameNode and the DataNodes there is a SecondaryNameNode. Its main role is to periodically merge the edit log into the filesystem image stored by the NameNode. If the NameNode is damaged, part of the metadata, but not all of it, can be recovered manually from the SecondaryNameNode.

4. The SecondaryNameNode cannot solve the NameNode single point of failure. To improve fault tolerance, HDFS also has an HA (high availability) mechanism with two NameNodes, and a Federation mechanism with multiple NameNodes.
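The 10-minute figure above is not hard-coded; the NameNode derives it from two configuration keys. A minimal sketch of the arithmetic, assuming the stock Hadoop 2.x defaults (dfs.namenode.heartbeat.recheck-interval = 300000 ms, dfs.heartbeat.interval = 3 s):

# A DataNode is declared dead after:
#   2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval
#   = 2 * 300 s + 10 * 3 s = 630 s, i.e. about 10 minutes
# Read the effective values from the cluster configuration:
hdfs getconf -confKey dfs.heartbeat.interval
hdfs getconf -confKey dfs.namenode.heartbeat.recheck-interval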
5. Data blocks. Just as a Linux filesystem has a smallest unit for disk reads and writes (512 bytes), HDFS has the same concept, except the size becomes 64 MB. The reason: HDFS reads a file as many blocks, and each block read begins with a seek, so the seek time should be a small fraction of the transfer time. If the seek should cost one hundredth of the transfer, then with a seek time of 10 ms and a transfer rate of 100 MB/s the block size works out to about 100 MB. As disk transfer speeds improve, the block size can be increased. But an overly large block is not good either: one task processes one block, so tasks become slower. When a file is smaller than 64 MB, the system still assigns it a whole block, but no actual disk space is wasted.

6. For large numbers of small files, HDFS provides two container formats that manage files in a unified way: SequenceFile and MapFile.

7. Compression. Compression saves space. There are three common codecs: gzip, LZO, and Snappy. gzip has the highest compression ratio but is CPU-hungry and slow; Snappy has the lowest compression ratio but is fast; LZO sits in between.

HDFS Operations

Finally, some common HDFS commands.

1. hadoop fs: the basic file operations.

hadoop fs -mkdir <path>          create a directory
hadoop fs -ls <path>             list files and directories
hadoop fs -put <file> <path>     upload a file
hadoop fs -get <file> <path>     download a file
hadoop fs -text <file>           view a file
hadoop fs -rm <file>             delete a file

2. hadoop namenode -format       format the NameNode

3. hadoop job -submit            submit a job
hadoop job -kill                 kill a job

4. hadoop fsck <path> -blocks    print a block report
hadoop fsck <path> -racks        print the DataNode network topology
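A minimal session tying these commands together (the /demo path and file names are just examples):

# create a directory, upload a file, inspect it, then remove it
hadoop fs -mkdir /demo
hadoop fs -put /etc/hosts /demo/hosts.txt
hadoop fs -ls /demo
hadoop fs -text /demo/hosts.txt
hadoop fs -rm /demo/hosts.txt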
Summary

This article described several characteristics of HDFS, explained some of the key principles behind them, and listed common commands. It should give you a basic understanding of HDFS; for the finer details, read a dedicated book. A follow-up will cover operating HDFS from Java.

(Diagram: http://rmohan.com/wp-content/uploads/2014/06/hadoopcluster.jpg)

Hadoop MapReduce Next Generation – Cluster Setup

192.168.1.40 cluster1.rmohan.com   = Master
192.168.1.41 cluster2.rmohan.com   = Slave1
192.168.1.42 cluster3.rmohan.com   = Slave2

[root@cluster1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.40 cluster1.rmohan.com cluster1
192.168.1.41 cluster2.rmohan.com cluster2
192.168.1.42 cluster3.rmohan.com cluster3

Set up passwordless SSH: run ssh-keygen -t rsa on cluster1, do the same on cluster2 and cluster3, and distribute each public key to the other nodes.

[root@cluster1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
ef:a6:45:ea:bd:c5:f7:6e:1e:3a:16:71:50:51:f8:47 root@cluster1.rmohan.com
[root@cluster1 ~]#

[root@cluster1 .ssh]# ls
id_rsa  id_rsa.pub  known_hosts

Enable public key authentication on all 3 servers:

vi /etc/ssh/sshd_config

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile  .ssh/authorized_keys

Copy the public key to the other nodes:

scp .ssh/id_rsa.pub root@cluster2.rmohan.com:/root/.ssh/id_rsa.pub
scp .ssh/id_rsa.pub root@cluster3.rmohan.com:/root/.ssh/id_rsa.pub

On each node, append the received keys to authorized_keys and fix the permissions:

[root@cluster3 .ssh]# cat id_rsa.pub >> authorized_keys
cd /root/

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

The finished authorized_keys contains one line per node (keys truncated here for readability):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1mH8BU9kD89womyrIJ10... root@cluster2.rmohan.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu+MYbCwHl8NEKmiY4cEx... root@cluster3.rmohan.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxHQ1ZgJc1Fj65OlHqg89... root@cluster1.rmohan.com
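If ssh-copy-id is available (it ships with openssh-clients on CentOS/RHEL 6), the scp/cat/chmod steps above collapse into one command per target; a sketch, assuming the same three hosts:

# run on each node, once per peer; it appends the key and fixes permissions
ssh-copy-id root@cluster2.rmohan.com
ssh-copy-id root@cluster3.rmohan.com
# verify that passwordless login works:
ssh root@cluster2.rmohan.com hostname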
Install Java (all 3 nodes):

[root@cluster1 software]# rpm -ivh jdk-7u17-linux-x64.rpm

[root@cluster1 software]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17
export JRE_HOME=/usr/java/jdk1.7.0_17/jre
export PATH=$PATH:/usr/java/jdk1.7.0_17/bin
export CLASSPATH=./:/usr/java/jdk1.7.0_17/lib:/usr/java/jdk1.7.0_17/jre/lib

vi /etc/profile.d/hadoop.sh

export HADOOP_HOME=/usr/hadoop/hadoop-2.4.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

Create the directories HDFS will use and unpack Hadoop:

mkdir /usr/hadoop
mkdir /usr/hadoop/dfs
mkdir /usr/hadoop/dfs/name
mkdir /usr/hadoop/dfs/data

wget http://mirror.nus.edu.sg/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0.tar.gz

tar -xvf hadoop-2.4.0.tar.gz -C /usr/hadoop/

[root@cluster1 etc]# cd hadoop/
[root@cluster1 hadoop]# ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            mapred-env.cmd              ssl-client.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  mapred-env.sh               ssl-server.xml.example
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  mapred-queues.xml.template  yarn-env.cmd
core-site.xml           hadoop-policy.xml           httpfs-site.xml          mapred-site.xml.template    yarn-env.sh
hadoop-env.cmd          hdfs-site.xml               log4j.properties         slaves                      yarn-site.xml
[root@cluster1 hadoop]# pwd
/usr/hadoop/hadoop-2.4.0/etc/hadoop

Seven files need editing:

~/hadoop-2.4.0/etc/hadoop/hadoop-env.sh
~/hadoop-2.4.0/etc/hadoop/yarn-env.sh
~/hadoop-2.4.0/etc/hadoop/slaves
~/hadoop-2.4.0/etc/hadoop/core-site.xml
~/hadoop-2.4.0/etc/hadoop/hdfs-site.xml
~/hadoop-2.4.0/etc/hadoop/mapred-site.xml
~/hadoop-2.4.0/etc/hadoop/yarn-site.xml

Add the Java details:

1) hadoop-env.sh

vi hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0_17

2) yarn-env.sh

vi yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_17

3) slaves

cat slaves
cluster2.rmohan.com
cluster3.rmohan.com

4) core-site.xml

vi core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/hadoop-2.4.0/tmp/hadoop-${user.name}</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoopmaster:9000</value>
    </property>
</configuration>

5) mapred-site.xml

cp mapred-site.xml.template mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoopmaster:9001</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoopmaster:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoopmaster:19888</value>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>/mapred/system</value>
        <final>true</final>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>/mapred/local</value>
        <final>true</final>
    </property>
</configuration>
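Note that these configs refer to the master by the name hadoopmaster, which is not in the /etc/hosts shown earlier (the startup log further below resolves it to 192.168.1.40). Assuming you keep that name rather than switching the configs to cluster1.rmohan.com, alias it on every node:

# /etc/hosts on all three nodes: alias hadoopmaster to the master's entry
192.168.1.40 cluster1.rmohan.com cluster1 hadoopmaster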
6) hdfs-site.xml

vi hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoopmaster:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

7) yarn-site.xml

vi yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoopmaster:8080</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoopmaster:8081</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoopmaster:8082</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>10240</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Copy the files to Node2 and Node3:

scp -r hadoop-2.4.0 root@cluster2.rmohan.com:/usr/hadoop/
scp -r hadoop-2.4.0 root@cluster3.rmohan.com:/usr/hadoop/
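A quick sanity check that every node ended up with identical configuration; a sketch assuming the paths used above:

# compare config checksums between the master and the slaves
md5sum /usr/hadoop/hadoop-2.4.0/etc/hadoop/*-site.xml
for h in cluster2.rmohan.com cluster3.rmohan.com; do
  echo "== $h =="
  ssh root@$h "md5sum /usr/hadoop/hadoop-2.4.0/etc/hadoop/*-site.xml"
done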
Start the cluster from /usr/hadoop/hadoop-2.4.0/sbin:

[root@cluster1 sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/06/06 17:37:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoopmaster]
The authenticity of host 'hadoopmaster (192.168.1.40)' can't be established.
RSA key fingerprint is 42:90:4a:3c:a3:fc:8e:85:ca:f9:d6:10:bb:93:ed:b2.
Are you sure you want to continue connecting (yes/no)? yes
hadoopmaster: Warning: Permanently added 'hadoopmaster' (RSA) to the list of known hosts.
hadoopmaster: starting namenode, logging to /usr/hadoop/hadoop-2.4.0/logs/hadoop-root-namenode-cluster1.rmohan.com.out
cluster2.rmohan.com: datanode running as process 2815. Stop it first.
cluster3.rmohan.com: datanode running as process 2803. Stop it first.
Starting secondary namenodes [hadoopmaster]
hadoopmaster: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.4.0/logs/hadoop-root-secondarynamenode-cluster1.rmohan.com.out
14/06/06 17:37:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop/hadoop-2.4.0/logs/yarn-root-resourcemanager-cluster1.rmohan.com.out
cluster3.rmohan.com: starting nodemanager, logging to /usr/hadoop/hadoop-2.4.0/logs/yarn-root-nodemanager-cluster3.rmohan.com.out
cluster2.rmohan.com: starting nodemanager, logging to /usr/hadoop/hadoop-2.4.0/logs/yarn-root-nodemanager-cluster2.rmohan.com.out

[root@cluster1 sbin]# ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
14/06/06 18:24:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [hadoopmaster]
hadoopmaster: stopping namenode
cluster2.rmohan.com: stopping datanode
cluster3.rmohan.com: stopping datanode
Stopping secondary namenodes [hadoopmaster]
hadoopmaster: stopping secondarynamenode
14/06/06 18:25:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
cluster2.rmohan.com: stopping nodemanager
cluster3.rmohan.com: stopping nodemanager
no proxyserver to stop
[root@cluster1 sbin]#

Check the daemons: each slave should run a DataNode and a NodeManager; the master runs the NameNode, SecondaryNameNode, and ResourceManager.

[root@cluster1 hadoop-2.4.0]# /usr/hadoop/hadoop-2.4.0/sbin/slaves.sh /usr/java/jdk1.7.0_17/bin/jps
cluster3.rmohan.com: 3886 NodeManager
cluster2.rmohan.com: 4057 Jps
cluster3.rmohan.com: 4052 Jps
cluster2.rmohan.com: 3789 DataNode
cluster3.rmohan.com: 3784 DataNode
cluster2.rmohan.com: 3891 NodeManager

[root@cluster1 hadoop-2.4.0]# /usr/java/jdk1.7.0_17/bin/jps
5407 NameNode
5726 ResourceManager
5584 SecondaryNameNode
6442 Jps
[root@cluster1 hadoop-2.4.0]#
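Note the order of operations in this transcript: start-all.sh was run before the NameNode was formatted, hence the "datanode running ... Stop it first" messages above. The usual first-time sequence, following the deprecation notices the scripts themselves print, is to format first and then use the split start scripts:

# one-time initialization of dfs.namenode.name.dir, then start HDFS and YARN
/usr/hadoop/hadoop-2.4.0/bin/hdfs namenode -format
/usr/hadoop/hadoop-2.4.0/sbin/start-dfs.sh
/usr/hadoop/hadoop-2.4.0/sbin/start-yarn.sh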
Here the NameNode was formatted after the first start attempt:

[root@cluster1 bin]# ./hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/06/06 17:55:05 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cluster1.rmohan.com/192.168.1.40
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.4.0
STARTUP_MSG:   classpath = /usr/hadoop/hadoop-2.4.0/etc/hadoop:... (several hundred jar entries omitted)
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG:   java = 1.7.0_17
************************************************************/
14/06/06 17:55:05 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/06/06 17:55:05 INFO namenode.NameNode: createNameNode [-format]
14/06/06 17:55:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-ab82af59-a88b-4e6a-b2b6-9e7ba859e781
14/06/06 17:55:06 INFO namenode.FSNamesystem: fsLock is fair:true
14/06/06 17:55:06 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/06/06 17:55:06 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/06/06 17:55:06 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/06/06 17:55:06 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/06/06 17:55:06 INFO util.GSet: Computing capacity for map BlocksMap
14/06/06 17:55:06 INFO util.GSet: VM type       = 64-bit
14/06/06 17:55:06 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/06/06 17:55:06 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/06/06 17:55:06 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/06/06 17:55:06 INFO blockmanagement.BlockManager: defaultReplication         = 3
14/06/06 17:55:06 INFO blockmanagement.BlockManager: maxReplication             = 512
14/06/06 17:55:06 INFO blockmanagement.BlockManager: minReplication             = 1
14/06/06 17:55:06 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/06/06 17:55:06 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/06/06 17:55:06 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/06/06 17:55:06 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/06/06 17:55:06 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
14/06/06 17:55:06 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
14/06/06 17:55:06 INFO namenode.FSNamesystem: supergroup          = supergroup
14/06/06 17:55:06 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/06/06 17:55:06 INFO namenode.FSNamesystem: HA Enabled: false
14/06/06 17:55:06 INFO namenode.FSNamesystem: Append Enabled: true
14/06/06 17:55:07 INFO util.GSet: Computing capacity for map INodeMap
14/06/06 17:55:07 INFO util.GSet: VM type       = 64-bit
14/06/06 17:55:07 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/06/06 17:55:07 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/06/06 17:55:07 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/06/06 17:55:07 INFO util.GSet: Computing capacity for map cachedBlocks
14/06/06 17:55:07 INFO util.GSet: VM type       = 64-bit
14/06/06 17:55:07 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/06/06 17:55:07 INFO util.GSet: capacity      = 2^18 = 262144 entries
14/06/06 17:55:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/06/06 17:55:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/06/06 17:55:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/06/06 17:55:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/06/06 17:55:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/06/06 17:55:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/06/06 17:55:07 INFO util.GSet: VM type       = 64-bit
14/06/06 17:55:07 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/06/06 17:55:07 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/06/06 17:55:07 INFO namenode.AclConfigFlag: ACLs enabled? false
Re-format filesystem in Storage Directory /usr/hadoop/dfs/name ? (Y or N) y
14/06/06 17:55:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1380935534-192.168.1.40-1402048509097
14/06/06 17:55:09 INFO common.Storage: Storage directory /usr/hadoop/dfs/name has been successfully formatted.
14/06/06 17:55:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/06/06 17:55:09 INFO util.ExitUtil: Exiting with status 0
14/06/06 17:55:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cluster1.rmohan.com/192.168.1.40
************************************************************/

[root@cluster1 bin]# netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      2588/sshd
tcp        0      0 127.0.0.1:6010              0.0.0.0:*                   LISTEN      2461/sshd
tcp        0      0 192.168.1.40:9001           0.0.0.0:*                   LISTEN      3610/java
tcp        0      0 192.168.1.40:22             192.168.1.4:51736           ESTABLISHED 2755/sshd
tcp        0      0 192.168.1.40:22             192.168.1.1:58989           ESTABLISHED 2465/sshd
tcp        0    176 192.168.1.40:22             192.168.1.1:58985           ESTABLISHED 2461/sshd
tcp        0      0 ::ffff:192.168.1.40:8080    :::*                        LISTEN      3744/java
tcp        0      0 ::ffff:192.168.1.40:8081    :::*                        LISTEN      3744/java
tcp        0      0 ::ffff:192.168.1.40:8082    :::*                        LISTEN      3744/java
tcp        0      0 :::22                       :::*                        LISTEN      2588/sshd
tcp        0      0 :::8088                     :::*                        LISTEN      3744/java
tcp        0      0 ::1:6010                    :::*                        LISTEN      2461/sshd
tcp        0      0 :::8033                     :::*                        LISTEN      3744/java
tcp        0      0 ::ffff:192.168.1.40:8082    ::ffff:192.168.1.42:49955   ESTABLISHED 3744/java
tcp        0      0 ::ffff:192.168.1.40:8082    ::ffff:192.168.1.41:55070   ESTABLISHED 3744/java

[root@cluster1 bin]#

URLs to check the cluster details:

http://cluster1.rmohan.com:8088/cluster
http://cluster1.rmohan.com:9001/status.jsp
http://cluster1.rmohan.com:50070/dfshealth.jsp

(Screenshots: http://rmohan.com/wp-content/uploads/2014/06/hadoop2.jpg, http://rmohan.com/wp-content/uploads/2014/06/hadoop3.jpg)
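The same checks can be scripted instead of opened in a browser; a small sketch probing the web UIs listed above (an HTTP 200 means the daemon answers):

for url in http://cluster1.rmohan.com:8088/cluster \
           http://cluster1.rmohan.com:50070/dfshealth.jsp; do
  echo -n "$url -> "
  curl -s -o /dev/null -w "%{http_code}\n" "$url"
done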
1366px\" \/><\/a><\/p>\n<p><a href=\"http:\/\/rmohan.com\/wp-content\/uploads\/2014\/06\/hadoop3.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/rmohan.com\/wp-content\/uploads\/2014\/06\/hadoop3.jpg\" alt=\"hadoop3\" width=\"1366\" height=\"768\" class=\"aligncenter size-full wp-image-3120\" srcset=\"https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop3.jpg 1366w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop3-300x168.jpg 300w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop3-1024x575.jpg 1024w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop3-150x84.jpg 150w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop3-400x224.jpg 400w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop3-900x506.jpg 900w\" sizes=\"(max-width: 1366px) 100vw, 1366px\" \/><\/a><\/p>\n<p>http:\/\/cluster1.rmohan.com:9001\/status.jsp<\/p>\n<p>http:\/\/cluster1.rmohan.com:50070\/dfshealth.jsp<\/p>\n<p><a href=\"http:\/\/rmohan.com\/wp-content\/uploads\/2014\/06\/hadoop4.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/rmohan.com\/wp-content\/uploads\/2014\/06\/hadoop4.jpg\" alt=\"hadoop4\" width=\"1366\" height=\"768\" class=\"aligncenter size-full wp-image-3121\" srcset=\"https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop4.jpg 1366w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop4-300x168.jpg 300w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop4-1024x575.jpg 1024w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop4-150x84.jpg 150w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop4-400x224.jpg 400w, https:\/\/mohan.sg\/wp-content\/uploads\/2014\/06\/hadoop4-900x506.jpg 900w\" sizes=\"(max-width: 1366px) 100vw, 1366px\" \/><\/a><\/p>\n<p>[root@cluster1 hadoop-2.4.0]# .\/bin\/hdfs dfsadmin -report<br \/>\n14\/06\/06 18:46:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform&#8230; using builtin-java classes where applicable<br \/>\nConfigured Capacity: 105689374720 (98.43 GB)<br \/>\nPresent Capacity: 92940345344 (86.56 GB)<br \/>\nDFS Remaining: 92940296192 (86.56 GB)<br \/>\nDFS Used: 49152 (48 KB)<br \/>\nDFS Used%: 0.00%<br \/>\nUnder replicated blocks: 0<br \/>\nBlocks with corrupt replicas: 0<br \/>\nMissing blocks: 0<\/p>\n<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br \/>\nDatanodes available: 2 (2 total, 0 dead)<\/p>\n<p>Live datanodes:<br \/>\nName: 192.168.1.41:50010 (cluster2.rmohan.com)<br \/>\nHostname: cluster2.rmohan.com<br \/>\nDecommission Status : Normal<br \/>\nConfigured Capacity: 52844687360 (49.22 GB)<br \/>\nDFS Used: 24576 (24 KB)<br \/>\nNon DFS Used: 6304174080 (5.87 GB)<br \/>\nDFS Remaining: 46540488704 (43.34 GB)<br \/>\nDFS Used%: 0.00%<br \/>\nDFS Remaining%: 88.07%<br \/>\nConfigured Cache Capacity: 0 (0 B)<br \/>\nCache Used: 0 (0 B)<br \/>\nCache Remaining: 0 (0 B)<br \/>\nCache Used%: 100.00%<br \/>\nCache Remaining%: 0.00%<br \/>\nLast contact: Fri Jun 06 18:46:15 SGT 2014<\/p>\n<p>Name: 192.168.1.42:50010 (cluster3.rmohan.com)<br \/>\nHostname: cluster3.rmohan.com<br \/>\nDecommission Status : Normal<br \/>\nConfigured Capacity: 52844687360 (49.22 GB)<br \/>\nDFS Used: 24576 (24 KB)<br \/>\nNon DFS Used: 6444855296 (6.00 GB)<br \/>\nDFS Remaining: 46399807488 (43.21 GB)<br \/>\nDFS Used%: 0.00%<br \/>\nDFS Remaining%: 87.80%<br \/>\nConfigured Cache Capacity: 0 (0 B)<br \/>\nCache Used: 0 (0 B)<br \/>\nCache Remaining: 0 (0 B)<br \/>\nCache Used%: 100.00%<br 
[root@cluster1 hadoop-2.4.0]# ./bin/hdfs fsck / -files -blocks
14/06/06 18:48:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoopmaster:50070
FSCK started by root (auth:SIMPLE) from /192.168.1.40 for path / at Fri Jun 06 18:48:04 SGT 2014
/ <dir>
Status: HEALTHY
 Total size:    0 B
 Total dirs:    1
 Total files:   0
 Total symlinks:                0
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Corrupt blocks:                0
 Missing replicas:              0
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Fri Jun 06 18:48:04 SGT 2014 in 10 milliseconds

The filesystem under path '/' is HEALTHY

Create an HDFS directory:

[root@cluster1 hadoop-2.4.0]# ./bin/hdfs dfs -mkdir /mohan
14/06/06 18:49:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Prepare two test files:

[root@cluster1 hadoop-2.4.0]# cat /tmp/test/test1
hello   www.google.com
hello1  www.yahoo.com
hello2  www.msn.com
hello3  www.rediff.com
hello4  www.amazon.com
hello5  www.ebay.com

[root@cluster1 hadoop-2.4.0]# cat /tmp/test/test2
the feeling that you understand and share another person's experiences and emotions : the ability to share someone else's feelings
He felt great empathy with the poor.
His months spent researching prison life gave him greater empathy towards convicts.
Poetic empathy understandably seeks a strategy of identification with victims  Helen Vendler, New Republic, 5 May 2003
Origin of EMPATHY
Greek empatheia, literally, passion, from empathes emotional, from em- + pathos feelings, emotion  more at pathos

Upload them:

[root@cluster1 test]# /usr/hadoop/hadoop-2.4.0/bin/hadoop fs -put -f test1.txt /mohan
14/06/06 19:45:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@cluster1 test]# /usr/hadoop/hadoop-2.4.0/bin/hadoop fs -put -f test2.txt /mohan
14/06/06 19:45:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Run the stock WordCount example over /mohan. Use the examples jar that ships with the release, not a -sources jar: the sources jar contains only .java files and cannot be executed.

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount /mohan /output

./bin/hadoop fs -cat /output/part-r-00000
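One MapReduce quirk worth remembering: the output directory must not exist before the job runs, or the job fails with FileAlreadyExistsException. To re-run the example, remove the old results first:

# clear the previous output, re-run, then read the result
./bin/hadoop fs -rm -r /output
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar wordcount /mohan /output
./bin/hadoop fs -cat /output/part-r-00000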