5. Modifying the Hadoop Configuration Files

1. Create the directories Hadoop will use

First, create the following directories on the local filesystem:
/home/hadoop/hadoop
/home/hadoop/hadoop/tmp
/home/hadoop/hadoop/namenode
/home/hadoop/hadoop/datanode

master@master:/home/hadoop$ su hadoop  # switch to the hadoop user
Password:

hadoop@master:~$ mkdir hadoop
hadoop@master:~$ chmod -R 777 hadoop
hadoop@master:~$ 
hadoop@master:~$ cd hadoop
hadoop@master:~/hadoop$ mkdir tmp
hadoop@master:~/hadoop$ chmod -R 777 tmp
hadoop@master:~/hadoop$ 
hadoop@master:~/hadoop$ mkdir namenode
hadoop@master:~/hadoop$ chmod -R 777 namenode
hadoop@master:~/hadoop$ 
hadoop@master:~/hadoop$ mkdir datanode
hadoop@master:~/hadoop$ chmod -R 777 datanode
hadoop@master:~/hadoop$ 
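The four mkdir/chmod pairs above can be collapsed into two commands with `mkdir -p`. A minimal sketch (it uses a scratch directory so it is runnable anywhere; on the real machine, substitute /home/hadoop/hadoop for BASE):

```shell
# Create the base directory plus tmp, namenode and datanode in one go.
# BASE stands in for /home/hadoop/hadoop.
BASE="$(mktemp -d)/hadoop"
mkdir -p "$BASE/tmp" "$BASE/namenode" "$BASE/datanode"
chmod -R 777 "$BASE"
ls "$BASE"
```

Mode 777 matches the transcript above; in practice 755 with the directories owned by the hadoop user is usually sufficient and safer on a shared machine.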

============

2. Edit the configuration files

All of the files below live in Hadoop's configuration directory (typically $HADOOP_HOME/etc/hadoop); cd there before running the gedit commands.

(1) sudo gedit hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/data/jdk1.8.0_111

(2) sudo gedit yarn-env.sh

# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/data/jdk1.8.0_111
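Both env scripts hard-code the same JDK path, so it is worth verifying it once before editing. A throwaway sketch (`check_java_home` is a helper written for this check, not part of Hadoop):

```shell
# Verify that a candidate JAVA_HOME actually contains a java binary
# before writing it into hadoop-env.sh / yarn-env.sh.
check_java_home() {
  if [ -x "$1/bin/java" ]; then
    echo "ok: $1"
  else
    echo "missing: $1/bin/java"
  fi
}

check_java_home /data/jdk1.8.0_111   # the path used in this article
```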

(3) sudo gedit slaves

slave1
slave2
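The slaves file is simply one worker hostname per line; Hadoop's start scripts iterate over it and ssh to each entry. A sketch of that iteration (it writes a local copy of the file so it runs anywhere, and only echoes instead of ssh-ing):

```shell
# Recreate the slaves file from step (3) and walk its entries the way
# start-dfs.sh does (minus the actual ssh).
printf 'slave1\nslave2\n' > slaves

while read -r host; do
  echo "would start a DataNode on: $host"
done < slaves
```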

(4) sudo gedit core-site.xml
Note: the /home/hadoop/hadoop/tmp directory referenced below must already exist (created in step 1).

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:8020</value>
 </property>

 <property>
  <name>hadoop.tmp.dir</name>
  <value>file:///home/hadoop/hadoop/tmp</value>
  <description>Abase for other temporary directories.</description>
 </property>

 <property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
  <description>Allows the hadoop user to impersonate users from any host.</description>
 </property>

 <property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
  <description>Allows the hadoop user to impersonate users in any group.</description>
 </property>

 <property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
 </property>

</configuration>
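A quick way to double-check a value without starting the cluster is to pull it straight out of the XML. A minimal grep/sed sketch — it writes an abbreviated local copy of core-site.xml (fs.defaultFS only) so it is self-contained; on the real machine, run the extraction against the full file in the conf directory:

```shell
# Abbreviated copy of the core-site.xml above.
cat > core-site.xml <<'EOF'
<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:8020</value>
 </property>
</configuration>
EOF

# Print the <value> that follows the fs.defaultFS <name> line.
grep -A1 '<name>fs.defaultFS</name>' core-site.xml |
  sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
# → hdfs://master:8020
```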

(5) sudo gedit hdfs-site.xml
Note: the /home/hadoop/hadoop/namenode and /home/hadoop/hadoop/datanode directories must already exist (created in step 1).

<configuration>
 <property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master:9001</value>
 </property>

 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hadoop/hadoop/namenode</value>
 </property>

 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hadoop/hadoop/datanode</value>
 </property>

 <property>
  <name>dfs.replication</name>
  <value>3</value>
 </property>

 <property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
 </property>
</configuration>
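One thing to watch: dfs.replication is set to 3, but the slaves file from step (3) lists only two DataNodes, so HDFS cannot place a third replica until another worker joins the cluster. A small self-contained cross-check (local files stand in for the real ones):

```shell
# Compare the replication factor from hdfs-site.xml against the number
# of DataNodes listed in the slaves file.
REPLICATION=3                              # value from hdfs-site.xml above
printf 'slave1\nslave2\n' > slaves         # slaves file from step (3)
DATANODES=$(wc -l < slaves)

if [ "$REPLICATION" -gt "$DATANODES" ]; then
  echo "warning: dfs.replication ($REPLICATION) exceeds DataNode count ($DATANODES)"
fi
```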

(6) sudo gedit mapred-site.xml
Note: this file does not exist by default; create it from the shipped template first:
cp mapred-site.xml.template mapred-site.xml

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>

 <property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
 </property>

 <property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
 </property>
</configuration>

(7) sudo gedit yarn-site.xml

<configuration>
 <property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
 </property>

 <property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>

 <property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
 </property>

 <property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
 </property>

 <property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
 </property>

 <property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
 </property>

 <property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
 </property>
</configuration>
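The yarn-site.xml above pins five ResourceManager endpoints to master. To list every host:port a config file assigns (handy when writing firewall rules), a sed one-liner works; the sketch below runs against an abbreviated local copy (three of the five addresses) so it is self-contained:

```shell
# Abbreviated copy of the yarn-site.xml above.
cat > yarn-site.xml <<'EOF'
<configuration>
 <property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
 </property>
 <property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
 </property>
 <property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
 </property>
</configuration>
EOF

# Extract every master:<port> value.
sed -n 's|.*<value>\(master:[0-9]*\)</value>.*|\1|p' yarn-site.xml
```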

============
