I. Environment and preparation before configuration
Hardware: four virtual machines: master1: 192.168.1.220, master2: 192.168.1.221, slave1: 192.168.1.222, slave2: 192.168.1.223
OS: Red Hat Linux 5
Hadoop version: hadoop-2.0.0-alpha (the latest release at the time of writing), packaged as hadoop-2.0.0-alpha.tar.gz
Official download mirror: http://apache.etoak.com/hadoop/common/hadoop-2.0.0-alpha/
JDK version: jdk-6u6-linux-i586.bin (JDK 1.6 is the minimum requirement)
Installing the virtual machines and Linux itself is not covered here; guides are easy to find via Google.
Create the relevant directories: mkdir /usr/hadoop (Hadoop install directory) and mkdir /usr/java (JDK install directory); see the note below.
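Run the directory creation on every node. A minimal equivalent that also succeeds when a directory already exists:
mkdir -p /usr/hadoop /usr/java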
II. Installing the JDK (identical on all nodes)
1. Upload the downloaded jdk-6u6-linux-i586.bin to /usr/java over SSH.
2. Enter the JDK install directory with cd /usr/java and run chmod +x jdk-6u6-linux-i586.bin
3. Run ./jdk-6u6-linux-i586.bin (press Enter through the prompts and answer yes to every yes/no question; "Done." at the end means the installation succeeded)
4. Configure the environment variables: cd /etc, then vi profile, and append at the end of the file:
- export JAVA_HOME=/usr/java/jdk1.6.0_27 (this must match the directory the installer actually created; note that jdk-6u6-linux-i586.bin unpacks to jdk1.6.0_06)
- export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
- export PATH=$JAVA_HOME/bin:$PATH
5. (Optional) chmod +x profile; this is not strictly needed, since profile is sourced rather than executed.
6. Run source profile to make the configuration take effect immediately.
7. Run java -version to verify the installation.
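On success the first line of output reports the installed version, roughly like this (the exact version string depends on the JDK actually installed):
java version "1.6.0_06"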
III. Changing the hostnames (same procedure on every node)
1. Connect to the main node 192.168.1.220 and edit the network file: cd /etc/sysconfig, then vi network, and set HOSTNAME=master1 (a sketch of the resulting file follows this section).
2. Edit the hosts file: cd /etc, then vi hosts, and append at the end:
192.168.1.220 master1
192.168.1.221 master2
192.168.1.222 slave1
192.168.1.223 slave2
3. Run hostname master1 so the change takes effect in the current session.
4. Run exit and reconnect; the prompt should now show the updated hostname.
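On Red Hat 5, the resulting /etc/sysconfig/network on master1 should look roughly like this (NETWORKING=yes is the stock default and other keys such as GATEWAY may also be present; only the HOSTNAME line changes):
NETWORKING=yes
HOSTNAME=master1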
IV. Configuring passwordless SSH login
1. How passwordless SSH works, in brief: a key pair (one public key, one private key) is generated on the master, and the public key is copied to every slave.
When the master then connects to a slave over SSH, the slave generates a random number, encrypts it with the master's public key, and sends it to the master.
The master decrypts it with its private key and sends the decrypted value back; once the slave confirms it is correct, it allows the master to connect without a password.
2. Concrete steps:
1) Run ssh-keygen -t rsa and press Enter through all prompts, then inspect the newly generated key pair: cd .ssh followed by ll
2) Append id_rsa.pub to the authorized keys: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3) Fix the permissions: chmod 600 ~/.ssh/authorized_keys
4) Make sure cat /etc/ssh/sshd_config shows the following settings:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
If you had to change anything, restart the SSH service so the change takes effect: service sshd restart
5) Copy the public key to each slave machine: scp ~/.ssh/id_rsa.pub 192.168.1.222:~/ then type yes at the host-key prompt and enter the slave machine's password
6) On the slave, create the .ssh folder: mkdir ~/.ssh followed by chmod 700 ~/.ssh (skip the mkdir if the folder already exists)
7) Append the key to the authorized_keys file: cat ~/id_rsa.pub >> ~/.ssh/authorized_keys then run chmod 600 ~/.ssh/authorized_keys
8) Repeat step 4 on the slave.
9) Verify: on the master, run ssh 192.168.1.222; if the prompt's hostname changes from master1 to slave1, the setup works. Finally, delete the copied key file: rm ~/id_rsa.pub
3. Repeat the steps above for master1, master2, slave1, and slave2, so that every master can log in to every slave without a password (the shortcut sketched below can speed this up).
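On systems that ship the ssh-copy-id helper (most Linux distributions do), steps 5 through 7 collapse into a single command per target machine; this is a convenience sketch rather than part of the original procedure:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.222
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.223
It appends the key to the remote ~/.ssh/authorized_keys and sets the permissions automatically.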
V. Installing Hadoop (identical on all nodes)
1. Upload hadoop-2.0.0-alpha.tar.gz to the Hadoop install directory /usr/hadoop
2. Unpack the archive: tar -zxvf hadoop-2.0.0-alpha.tar.gz
3. Create the tmp folder: mkdir /usr/hadoop/tmp
4. Configure the environment variables: vi /etc/profile and append the following (a quick sanity check follows the block):
- export HADOOP_DEV_HOME=/usr/hadoop/hadoop-2.0.0-alpha
- export PATH=$PATH:$HADOOP_DEV_HOME/bin
- export PATH=$PATH:$HADOOP_DEV_HOME/sbin
- export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
- export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
- export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
- export YARN_HOME=${HADOOP_DEV_HOME}
- export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
- export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
- export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
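After reloading the profile, a quick check confirms that the variables point at a working installation:
source /etc/profile
hadoop version (the first line should report Hadoop 2.0.0-alpha)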
5. Configuring Hadoop
The configuration files live under /usr/hadoop/hadoop-2.0.0-alpha/etc/hadoop
1) Create and configure hadoop-env.sh:
vi /usr/hadoop/hadoop-2.0.0-alpha/etc/hadoop/hadoop-env.sh and append export JAVA_HOME=/usr/java/jdk1.6.0_27 (again, this must match the real JDK directory)
2) Configure core-site.xml by adding the following properties inside its <configuration> element:
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/usr/hadoop/tmp</value>
- </property>
- <property>
- <name>fs.default.name</name>
- <value>hdfs://localhost:9000</value>
- </property>
Note: because fs.default.name is left at localhost here, every HDFS path in section VI is written as a full hdfs://192.168.1.220:9000/... URI; setting this value to hdfs://192.168.1.220:9000 instead would allow bare paths.
3) Create and configure the slaves file: vi slaves and add the following lines (see the note after the list):
192.168.1.222
192.168.1.223
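Since /etc/hosts (section III) already maps these addresses, the slaves file could equivalently list the hostnames instead of the IPs, i.e.:
slave1
slave2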
4) Configure hdfs-site.xml (a verification command follows the block):
- <configuration>
- <property>
- <name>dfs.namenode.name.dir</name>
- <value>file:/usr/hadoop/hdfs/name</value>
- <final>true</final>
- </property>
- <property>
- <name>dfs.federation.nameservice.id</name>
- <value>ns1</value>
- </property>
- <property>
- <name>dfs.namenode.backup.address.ns1</name>
- <value>192.168.1.223:50100</value>
- </property>
- <property>
- <name>dfs.namenode.backup.http-address.ns1</name>
- <value>192.168.1.223:50105</value>
- </property>
- <property>
- <name>dfs.federation.nameservices</name>
- <value>ns1,ns2</value>
- </property>
- <property>
- <name>dfs.namenode.rpc-address.ns1</name>
- <value>192.168.1.220:9000</value>
- </property>
- <property>
- <name>dfs.namenode.rpc-address.ns2</name>
- <value>192.168.1.221:9000</value>
- </property>
- <property>
- <name>dfs.namenode.http-address.ns1</name>
- <value>192.168.1.220:23001</value>
- </property>
- <property>
- <name>dfs.namenode.http-address.ns2</name>
- <value>192.168.1.221:13001</value>
- </property>
- <property>
- <name>dfs.datanode.data.dir</name>
- <value>file:/usr/hadoop/hdfs/data</value>
- <final>true</final>
- </property>
- <property>
- <name>dfs.namenode.secondary.http-address.ns1</name>
- <value>192.168.1.220:23002</value>
- </property>
- <property>
- <name>dfs.namenode.secondary.http-address.ns2</name>
- <value>192.168.1.221:23002</value>
- </property>
- </configuration>
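With hdfs-site.xml in place, the resolved values can be double-checked from the shell; hdfs getconf is part of the 2.x CLI (assuming this build supports the -confKey option):
hdfs getconf -confKey dfs.federation.nameservices (should print ns1,ns2)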
5) Configure yarn-site.xml (see the caveat after the block):
- <configuration>
- <!-- Site specific YARN configuration properties -->
- <property>
- <name>yarn.resourcemanager.address</name>
- <value>192.168.1.220:18040</value>
- </property>
- <property>
- <name>yarn.resourcemanager.scheduler.address</name>
- <value>192.168.1.220:18030</value>
- </property>
- <property>
- <name>yarn.resourcemanager.webapp.address</name>
- <value>192.168.1.220:18088</value>
- </property>
- <property>
- <name>yarn.resourcemanager.resource-tracker.address</name>
- <value>192.168.1.220:18025</value>
- </property>
- <property>
- <name>yarn.resourcemanager.admin.address</name>
- <value>192.168.1.220:18141</value>
- </property>
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce.shuffle</value>
- </property>
- </configuration>
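One caveat: on some 2.0.x builds the NodeManager also needs the shuffle handler class spelled out; if NodeManagers fail to bring up the aux service, try adding this extra property (an assumption based on common 2.0.x setups, not part of the original walkthrough):
- <property>
- <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
- <value>org.apache.hadoop.mapred.ShuffleHandler</value>
- </property>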
VI. Starting the Hadoop cluster and testing with WordCount
1. Format the namenodes: run on both masters: hadoop namenode -format -clusterid eric (in 2.x, hdfs namenode -format is the preferred form)
2. Start Hadoop: on master1 run start-all.sh, or run start-dfs.sh followed by start-yarn.sh
3. Run jps on every node; output like the following indicates a successful start (a troubleshooting note follows the listings):
[root@master1 hadoop]# jps
1956 Bootstrap
4183 Jps
3938 ResourceManager
3845 SecondaryNameNode
3652 NameNode
[root@master2 ~]# jps
3778 Jps
1981 Bootstrap
3736 SecondaryNameNode
3633 NameNode
[root@slave1 ~]# jps
3766 Jps
3675 NodeManager
3551 DataNode
[root@slave2 ~]# jps
3675 NodeManager
3775 Jps
3551 DataNode
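If a daemon is missing from the jps output, check its log under /usr/hadoop/hadoop-2.0.0-alpha/logs/; the files follow the pattern hadoop-<user>-<daemon>-<hostname>.log (yarn-<user>-... for YARN daemons), for example:
tail -100 /usr/hadoop/hadoop-2.0.0-alpha/logs/hadoop-root-namenode-master1.log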
4. On master1, create the input directory: hadoop fs -mkdir hdfs://192.168.1.220:9000/input
5. Copy all the txt files under /usr/hadoop/hadoop-2.0.0-alpha/ into the HDFS input directory:
hadoop fs -put /usr/hadoop/hadoop-2.0.0-alpha/*.txt hdfs://192.168.1.220:9000/input
6. On master1, run the WordCount example that ships with Hadoop (the running job can also be watched in the web UI noted below):
cd /usr/hadoop/hadoop-2.0.0-alpha/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.0.0-alpha.jar wordcount hdfs://192.168.1.220:9000/input hdfs://192.168.1.220:9000/output
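While the job runs, its progress is visible in the ResourceManager web UI at http://192.168.1.220:18088 (the yarn.resourcemanager.webapp.address configured above).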
7. On master1, inspect the results:
[root@master1 hadoop]# hadoop fs -ls hdfs://192.168.1.220:9000/output
Found 2 items
-rw-r--r-- 2 root supergroup 0 2012-06-29 22:59 hdfs://192.168.1.220:9000/output/_SUCCESS
-rw-r--r-- 2 root supergroup 8739 2012-06-29 22:59 hdfs://192.168.1.220:9000/output/part-r-00000
[root@master1 hadoop]# hadoop fs -ls hdfs://192.168.1.220:9000/input
Found 3 items
-rw-r--r-- 2 root supergroup 15164 2012-06-29 22:55 hdfs://192.168.1.220:9000/input/LICENSE.txt
-rw-r--r-- 2 root supergroup 101 2012-06-29 22:55 hdfs://192.168.1.220:9000/input/NOTICE.txt
-rw-r--r-- 2 root supergroup 1366 2012-06-29 22:55 hdfs://192.168.1.220:9000/input/README.txt
[root@master1 hadoop]# hadoop fs -cat hdfs://192.168.1.220:9000/output/part-r-00000 prints the result: one line per word, with the word and its count separated by a tab.
8. The NameNode web UI can also be opened in a browser: http://192.168.1.220:23001/dfshealth.jsp
That completes the whole process.