Environment:

- OS: CentOS 5.8 x86_64
- Java: JDK 1.7.0_25
- Hadoop: hadoop-2.2.0

Cluster layout:

- 192.168.149.128  namenode (also acts as secondary namenode and ResourceManager)
- 192.168.149.129  datanode1 (acts as datanode and NodeManager)
- 192.168.149.130  datanode2 (acts as datanode and NodeManager)
Configure /etc/hosts on all three nodes:

```
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
192.168.149.128 node1
192.168.149.129 node2
192.168.149.130 node3
```
On the namenode (192.168.149.128), run ssh-keygen and press Enter through every prompt. Then copy the public key /root/.ssh/id_rsa.pub to each datanode:

```
ssh-copy-id -i .ssh/id_rsa.pub root@192.168.149.129
ssh-copy-id -i .ssh/id_rsa.pub root@192.168.149.130
```
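The two ssh-copy-id commands above can be folded into one loop. A minimal sketch, assuming the key pair was already generated at /root/.ssh/id_rsa.pub; it prints the commands so the target list can be checked before running them (pipe the output to `sh` to execute):

```shell
# Datanode IPs from the cluster layout above.
DATANODES="192.168.149.129 192.168.149.130"

# Generate one ssh-copy-id command per datanode.
for ip in $DATANODES; do
    echo "ssh-copy-id -i /root/.ssh/id_rsa.pub root@$ip"
done
```

After the keys are in place, `ssh root@192.168.149.129 hostname` should print the remote hostname without asking for a password.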
Unpack the JDK and move it into place:

```
tar -xvzf jdk-7u25-linux-x64.tar.gz
mkdir -p /usr/java/
mv jdk1.7.0_25 /usr/java/
```

Then configure the Java environment variables by appending the following to /etc/profile:

```
export JAVA_HOME=/usr/java/jdk1.7.0_25/
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:./
```
Verify the installation:

```
[root@node1 ~]# java -version
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
```
Unpack Hadoop and move it into place:

```
tar -xzvf hadoop-2.2.0.tar.gz
mv hadoop-2.2.0 /data/hadoop/
```

(Note: for now, install Hadoop on the namenode only; don't install it on the datanodes yet. After the configuration changes below are done, the finished tree will be copied to the datanodes in one step.)
Append the following to /etc/profile and run `source /etc/profile` to apply it:

```
export HADOOP_HOME=/data/hadoop/
export PATH=$PATH:$HADOOP_HOME/bin/
export JAVA_LIBRARY_PATH=/data/hadoop/lib/native/
```

(Note: these Hadoop variables must be configured on all three servers, namenode and datanodes alike.)
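Combined with the Java variables from earlier, the full environment block looks like this; a sketch with a cheap sanity check that both bin directories actually landed on $PATH:

```shell
# Combined /etc/profile additions for all three nodes.
export JAVA_HOME=/usr/java/jdk1.7.0_25/
export HADOOP_HOME=/data/hadoop/
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin/
export JAVA_LIBRARY_PATH=/data/hadoop/lib/native/

# Sanity check: both bin directories are now on $PATH.
echo "$PATH" | grep -q "jdk1.7.0_25" && echo "java on PATH"
echo "$PATH" | grep -q "/data/hadoop" && echo "hadoop on PATH"
```

After `source /etc/profile`, `java -version` and `hadoop version` should both resolve without a full path.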
core-site.xml:

```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.149.128:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
```
mapred-site.xml:

```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.149.128:9001</value>
  </property>
</configuration>
```
hdfs-site.xml:

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/data_name1,/data/hadoop/data_name2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data_1,/data/hadoop/data_2</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```
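Hadoop will create the dfs.name.dir and dfs.data.dir paths itself, but creating them up front surfaces permission problems before the format step. A sketch; DATA_ROOT defaults to the /data/hadoop prefix used throughout:

```shell
# Create the metadata and block-storage directories from hdfs-site.xml
# so any permission problems show up before the namenode is formatted.
DATA_ROOT=${DATA_ROOT:-/data/hadoop}

mkdir -p "$DATA_ROOT/data_name1" "$DATA_ROOT/data_name2"   # namenode metadata (dfs.name.dir)
mkdir -p "$DATA_ROOT/data_1" "$DATA_ROOT/data_2"           # datanode blocks (dfs.data.dir)
ls -d "$DATA_ROOT"/data_*
```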
The masters file:

```
192.168.149.128
```
The slaves file:

```
192.168.149.129
192.168.149.130
```
Copy the configured Hadoop tree from the namenode to both datanodes:

```
for i in `seq 129 130`; do scp -r /data/hadoop/ root@192.168.149.$i:/data/; done
```
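The backtick loop above expands to one scp per datanode; printing the commands first is a cheap way to double-check the target list before copying several hundred megabytes:

```shell
# Preview the copies the loop will perform, one line per datanode.
for i in $(seq 129 130); do
    echo "scp -r /data/hadoop/ root@192.168.149.$i:/data/"
done
# → scp -r /data/hadoop/ root@192.168.149.129:/data/
# → scp -r /data/hadoop/ root@192.168.149.130:/data/
```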
Format the namenode:

```
cd /data/hadoop/ ; ./bin/hadoop namenode -format
```
Start all daemons:

```
[root@node1 hadoop]# ./sbin/start-all.sh
```
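A quick way to confirm the startup succeeded is running `jps` on each node; the daemon-to-node mapping follows the role layout from the beginning. A minimal checklist sketch:

```shell
# Daemons expected per node after start-all.sh, per the cluster layout above.
EXPECTED_NODE1="NameNode SecondaryNameNode ResourceManager"
EXPECTED_DATANODE="DataNode NodeManager"

# On each host, compare this list against the live JVMs reported by jps,
# e.g.: jps | grep NameNode
for d in $EXPECTED_NODE1; do echo "node1 should run: $d"; done
for d in $EXPECTED_DATANODE; do echo "node2/node3 should run: $d"; done
```

`./bin/hdfs dfsadmin -report` on the namenode should additionally show both datanodes registered with HDFS.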