Installing a hadoop 2.2.0 cluster with 3 nodes (one running the namenode, resourcemanager and secondary namenode, while the other two nodes run the datanode/nodemanager daemons)
1. ip assignments
192.168.122.1 namenode
192.168.122.2 datanode
192.168.122.3 datanode
2. download the latest stable hadoop tarball (2.2.0) and untar it to /home/xxx/hadoop/hadoop-2.2.0
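The download-and-unpack step can be sketched as follows. The scratch paths are illustrative, and a dummy archive stands in for the real tarball so the commands can be rehearsed offline; with the real hadoop-2.2.0.tar.gz from an Apache mirror, skip the dummy-archive lines and extract into /home/xxx/hadoop instead.

```shell
# Rehearse the unpack step in a scratch dir.
work=$(mktemp -d)
cd "$work"
mkdir -p hadoop-2.2.0/bin           # dummy stand-in for the real tarball contents
tar -czf hadoop-2.2.0.tar.gz hadoop-2.2.0
mkdir -p "$work/install"            # stands in for /home/xxx/hadoop
tar -xzf hadoop-2.2.0.tar.gz -C "$work/install"
ls "$work/install"
```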
3. prepare the runtime environments
a. java
install Oracle Java 1.7.0 and set JAVA_HOME
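Setting JAVA_HOME typically means appending lines like these to ~/.bashrc; the JVM path below is an assumption (it varies by distro and by how the Oracle JDK was installed):

```shell
# Path is illustrative: adjust to wherever the Oracle JDK 7 landed on your system.
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export PATH="$JAVA_HOME/bin:$PATH"
```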
b. ssh without passphrase
b1. make sure the namenode has an ssh client and server installed, using the following commands
#which ssh / which sshd / which ssh-keygen
b2. generate ssh key pair
#ssh-keygen -t rsa
the above command will produce a key pair (id_rsa and id_rsa.pub) in the ~/.ssh dir
b3. distribute the public key and validate logins
#scp ~/.ssh/id_rsa.pub zhj@192.168.122.2:~/authorized_keys
#scp ~/.ssh/id_rsa.pub zhj@192.168.122.3:~/authorized_keys
---
login 192.168.122.2 and 192.168.122.3 and run the following commands
#mkdir ~/.ssh
#chmod 700 ~/.ssh
#mv ~/authorized_keys ~/.ssh/
#chmod 600 ~/.ssh/authorized_keys
If ssh still prompts for a password on login, execute the following commands
$ chmod go-w $HOME $HOME/.ssh
$ chmod 600 $HOME/.ssh/authorized_keys
$ chown `whoami` $HOME/.ssh/authorized_keys
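Steps b2-b3 can be rehearsed locally in a scratch directory before touching the real nodes; paths here are illustrative, and on the cluster the public key travels via scp as shown above:

```shell
# Rehearse the key setup in a throwaway directory standing in for $HOME.
tmp=$(mktemp -d)
mkdir -p "$tmp/.ssh"
chmod 700 "$tmp/.ssh"
# -N "" = empty passphrase, -f = output file, -q = quiet
ssh-keygen -t rsa -N "" -f "$tmp/.ssh/id_rsa" -q
# On a real datanode the public key arrives via scp; here we append it directly.
cat "$tmp/.ssh/id_rsa.pub" >> "$tmp/.ssh/authorized_keys"
chmod 600 "$tmp/.ssh/authorized_keys"
```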
4. edit the core config files for the hadoop cluster (non-secure mode)
core-site.xml
hdfs-site.xml (dfs.namenode.hosts is important)
yarn-site.xml
mapred-site.xml
-----
dfs.namenode.hosts -> hosts.txt
the content of hosts.txt is as follows (the ip of every datanode in the cluster):
192.168.122.2
192.168.122.3
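A minimal sketch of the two HDFS-related files might look like the following; the port, paths and replication factor are assumptions, and note that hdfs-default.xml spells the datanode include-file key `dfs.hosts` (the 2.2.0 cluster-setup page refers to it as dfs.namenode.hosts):

```xml
<!-- core-site.xml: point every node at the namenode -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.122.1:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: replication and the datanode include file -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value> <!-- two datanodes in this cluster -->
  </property>
  <property>
    <name>dfs.hosts</name> <!-- the include file, i.e. hosts.txt above -->
    <value>/home/xxx/hadoop/hadoop-2.2.0/etc/hadoop/hosts.txt</value>
  </property>
</configuration>
```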
5. edit /etc/hosts on 192.168.122.1 (no DNS available)
192.168.122.1 host.dataminer
192.168.122.2 f1.zhj
192.168.122.3 f2.zhj
meanwhile edit /etc/hosts on 192.168.122.2 and 192.168.122.3, e.g. on 192.168.122.2:
127.0.0.1 f1.zhj
6. edit ~/.bashrc: set HADOOP_HOME and HADOOP_CONF_DIR, and append the bin and sbin dirs to PATH
run the following command to make it take effect. #source ~/.bashrc
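Step 6 amounts to appending lines like these to ~/.bashrc (the install path matches step 2; adjust to your own layout):

```shell
# Hadoop environment for this guide's install layout.
export HADOOP_HOME="$HOME/hadoop/hadoop-2.2.0"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```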
NOTE: this sample hadoop cluster runs on my notebook: Ubuntu 13.10 with KVM hosting the two datanodes, which run Fedora 20.
References:
http://allthingshadoop.com/2010/04/20/hadoop-cluster-setup-ssh-key-authentication/