Hadoop 1.0.0 Cluster Installation
Environment
VMware Workstation 7 was installed on Windows, then Red Hat Enterprise Linux 6.2 was installed in a VMware virtual machine and cloned, creating 3 virtual machines in total.
The addresses and hostnames are:
192.168.0.100 master.hadoop (namenode)
192.168.0.101 slave1.hadoop (datanode)
192.168.0.102 slave2.hadoop (datanode)
Download the required software:
jdk-6u30-linux-i586-rpm.bin
hadoop-1.0.0.tar.gz
1. Configure the hosts file
Configure /etc/hosts on all machines:
# vi /etc/hosts
Add the following lines:
192.168.0.100 master.hadoop
192.168.0.101 slave1.hadoop
192.168.0.102 slave2.hadoop
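To verify that name resolution works, a quick check like the following should get a reply from the matching 192.168.0.x address on every machine:
# ping -c 1 master.hadoop
# ping -c 1 slave1.hadoop
# ping -c 1 slave2.hadoop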
2. Install Java
Install Java on all machines:
# mkdir /usr/local/java
Copy jdk-6u30-linux-i586-rpm.bin into the java directory, then:
# cd /usr/local/java
# chmod u+x jdk-6u30-linux-i586-rpm.bin
# ./jdk-6u30-linux-i586-rpm.bin
The self-extracting RPM installer places the JDK under /usr/java/jdk1.6.0_30, which matches the JAVA_HOME set in the next step.
3. Set environment variables
Add them on all machines:
# vi /etc/profile
Append the environment variables:
export JAVA_HOME=/usr/java/jdk1.6.0_30/
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
Run # source /etc/profile so the environment variables take effect immediately.
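To confirm the variables took effect, a quick check (in a new shell, or after sourcing /etc/profile):
# java -version
# echo $JAVA_HOME $HADOOP_HOME
The first should report version 1.6.0_30, the second the two paths set above.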
4. Create the hduser user and hadoop group
On all machines:
# groupadd hadoop
# useradd -g hadoop hduser
Set the password:
# passwd hduser
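To confirm the account and its group (the uid/gid numbers will differ on your system):
# id hduser
uid=500(hduser) gid=500(hadoop) groups=500(hadoop)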
5. Disable IPv6
On all machines:
# vi /etc/modprobe.d/anaconda.conf
Add the following line:
install ipv6 /bin/true
Reboot the system for the change to take effect.
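After the reboot, one way to confirm IPv6 is off (a suggested check, not part of the original steps) is to verify that no inet6 addresses remain:
# ip addr | grep inet6
No output means IPv6 is disabled.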
6. Configure SSH on master.hadoop
Log in to master.hadoop with the hduser account:
hduser@master:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
......
The key's randomart image is:
......
hduser@master:~$
hduser@master:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
hduser@master:~$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is .......
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
hduser@master:~$ ssh master.hadoop
The authenticity of host 'master.hadoop (192.168.0.100)' can't be established.
RSA key fingerprint is .......
Are you sure you want to continue connecting (yes/no)? yes
......
hduser@master:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave1.hadoop
Enter the hduser password for slave1.hadoop when prompted; the key is then installed.
hduser@master:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave2.hadoop
Enter the hduser password for slave2.hadoop when prompted; the key is then installed.
Passwordless login now works; a quick check is shown below.
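To verify, logging in to each slave should now succeed without a password prompt (type exit after each login to return to master.hadoop):
hduser@master:~$ ssh slave1.hadoop
hduser@master:~$ ssh slave2.hadoop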
7. Install Hadoop on master.hadoop
Copy hadoop-1.0.0.tar.gz to /usr/local, then:
# cd /usr/local
# tar xzf hadoop-1.0.0.tar.gz
# mv hadoop-1.0.0 hadoop
# chown -R hduser:hadoop hadoop
Add the following convenience settings to hduser's .bashrc:
# vi /home/hduser/.bashrc
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
  hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
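After logging in again as hduser (or running source ~/.bashrc), the aliases shorten everyday HDFS commands; for example (the paths are illustrative):
hduser@master:~$ fs -ls /
hduser@master:~$ hls /user/hduser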
Edit $HADOOP_HOME/conf/hadoop-env.sh:
$ vi $HADOOP_HOME/conf/hadoop-env.sh
Modify it as follows:
# The java implementation to use. Required.
export JAVA_HOME=/usr/java/jdk1.6.0_30
Modify $HADOOP_HOME/conf/core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master.hadoop:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
Modify $HADOOP_HOME/conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master.hadoop:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.</description>
  </property>
</configuration>
Modify $HADOOP_HOME/conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>
$ vi $HADOOP_HOME/conf/masters
Change localhost to master.hadoop.
$ vi $HADOOP_HOME/conf/slaves
Change localhost to master.hadoop and add slave1.hadoop and slave2.hadoop, so the file contains:
master.hadoop
slave1.hadoop
slave2.hadoop
8. Create the directory required by hadoop.tmp.dir on all machines
# mkdir -p /app/hadoop/tmp
# chown hduser:hadoop /app/hadoop/tmp
...and if you want to tighten up security, chmod from 755 to 750:
# chmod 750 /app/hadoop/tmp
9. Deploy the slave nodes
Copy the installed and configured hadoop directory from master.hadoop to slave1.hadoop and slave2.hadoop (make sure /usr/local on the slaves is writable by hduser, or copy as root and repeat the chown there):
hduser@master: hadoop$ scp -r /usr/local/hadoop hduser@slave1.hadoop:/usr/local/
hduser@master: hadoop$ scp -r /usr/local/hadoop hduser@slave2.hadoop:/usr/local/
10. Format the HDFS filesystem
On master.hadoop, as hduser:
hduser@master: hadoop$ bin/hadoop namenode -format
11. Start Hadoop
Start the cluster with bin/start-all.sh on master.hadoop. If the jps command shows DataNode, Jps, NameNode, JobTracker, TaskTracker, and SecondaryNameNode, everything is running normally; a sample check follows.
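On master.hadoop all six processes should be listed; on the slaves, jps should show only DataNode, TaskTracker, and Jps. A sample run (the process IDs are illustrative):
hduser@master: hadoop$ jps
2287 NameNode
2408 DataNode
2530 SecondaryNameNode
2619 JobTracker
2742 TaskTracker
2801 Jps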
Then visit the web interfaces:
http://192.168.0.100:50030 (JobTracker)
http://192.168.0.100:50060 (TaskTracker)
http://192.168.0.100:50070 (NameNode)
12. Test the classic wordcount example
a) First prepare two local text files:
hduser@master: hadoop$ vi /tmp/test1.txt
hduser@master: hadoop$ vi /tmp/test2.txt
Write some arbitrary words into them, separated by spaces; a scripted alternative is shown below.
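For example (the contents are arbitrary; these two lines are reused for the output check in step e):
hduser@master: hadoop$ echo "hello world hello hadoop" > /tmp/test1.txt
hduser@master: hadoop$ echo "hadoop cluster world" > /tmp/test2.txt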
b) Create a directory in HDFS:
hduser@master: hadoop$ bin/hadoop dfs -mkdir myfile
c) Upload the local files to the directory in HDFS:
hduser@master: hadoop$ bin/hadoop dfs -copyFromLocal /tmp/test*.txt myfile
d) Run wordcount:
hduser@master: hadoop$ bin/hadoop jar hadoop-examples-1.0.0.jar wordcount myfile myfileout
e) View the results:
hduser@master: hadoop$ bin/hadoop dfs -cat myfileout/part-r-00000
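With the example file contents above, the output lists each word with its count, tab-separated and sorted by word:
cluster 1
hadoop  2
hello   2
world   2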
This completes the installation of the Hadoop cluster.
Problems encountered during installation, and their solutions:
1. After cloning the system with VMware, the new system reports an error on boot:
Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization. [FAILED]
Solution:
Log in as root.
The system automatically created an "Auto eth1" network connection; delete it.
# vi /etc/udev/rules.d/70-persistent-net.rules
Change the ATTR{address} of the NAME="eth0" entry to the ATTR{address} of the NAME="eth1" entry, and comment out the NAME="eth1" line, as sketched below.
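A sketch of the edit (the MAC addresses are placeholders; use the values from your own file):
# after the edit: eth0 takes the MAC from the old eth1 entry, and the eth1 line is commented out
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:bb:bb:bb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:bb:bb:bb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"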
Then change the MAC address of "System eth0" (in the network connection editor) to the address used above and click Apply.

The /etc/init.d/network restart command now reports success. Fix complete.
2. Abnormal Java processes on the slave machines
After running start-all.sh on master.hadoop, all of the Java processes start on master.hadoop, but slave1.hadoop and slave2.hadoop show only the Jps Java process.
Checking $HADOOP_HOME/logs/hadoop-hduser-datanode-slave2.hadoop.log on the slave machines shows the error:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to /192.168.0.100:54310 failed on local exception: java.net.NoRouteToHostException: No route to host
Solution:
Turn off the firewall:
# service iptables save
# service iptables stop
# chkconfig iptables off
3. Every hadoop command prints "Warning: $HADOOP_HOME is deprecated."
This warning appears every time a hadoop command is run. It turns out to be a bug in Hadoop 1.0.0, for which a patch (h-7869.patch) has already been provided:
https://issues.apache.org/jira/browse/HADOOP-7869?page=com.atlassian.jira.plugin.system.issuetabpanels
Freshly released versions are prone to this kind of low-level mistake. Either wait for the next release to fix the problem, or set up a Hadoop build environment, fetch the source from svn.apache.org with svn, apply the patch in Eclipse, and rebuild Hadoop.
Since the warning does not affect normal operation, we will wait for the next release.
4. Exception when running the wordcount job
hdfs.DFSClient: Exception in createBlockOutputStream 192.168.0.100:50010 java.io.IOException: Bad connect ack with firstBadLink as 192.168.0.101:50010
hdfs.DFSClient: Exception in createBlockOutputStream 192.168.0.100:50010 java.io.IOException: Bad connect ack with firstBadLink as 192.168.0.102:50010
master.hadoop cannot reach port 50010 on slave1.hadoop and slave2.hadoop; turning off the firewall on both slaves (as in problem 2 above) resolves it.
Installation is only the beginning; have fun, everyone.
张雷
2012-2-6
weibo.com/cloudn
