Hadoop 2.7.5 cluster configuration

core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/wangxiaotong/hadoop-2.7.5/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://10.11.6.79:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
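Once core-site.xml is in place, one way to sanity-check that Hadoop actually picked it up is to query the effective configuration with `hdfs getconf` (requires `$HADOOP_HOME/bin` on the PATH, as set later in this article):

```shell
# Print the effective value of fs.defaultFS as loaded from core-site.xml
hdfs getconf -confKey fs.defaultFS        # expected: hdfs://10.11.6.79:9000
hdfs getconf -confKey io.file.buffer.size # expected: 131072
```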

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/wangxiaotong/hadoop-2.7.5/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/wangxiaotong/hadoop-2.7.5/hdfs/data</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>10.11.6.79:50090</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>10.11.6.79:50070</value>
    </property>
</configuration>
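Because `dfs.webhdfs.enabled` is set to `true`, the NameNode's HTTP port (50070 here) also serves the WebHDFS REST API, which is handy for checking the filesystem without the Hadoop client. A quick smoke test after the cluster is up:

```shell
# List the HDFS root directory over the WebHDFS REST API
curl -s "http://10.11.6.79:50070/webhdfs/v1/?op=LISTSTATUS"
```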

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>10.11.6.79:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>10.11.6.79:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/home/wangxiaotong/hadoop-2.7.5/history/done</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/home/wangxiaotong/hadoop-2.7.5/history/done_intermediate</value>
    </property>
</configuration>
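Note that the JobHistory server addressed by the 10020/19888 settings above is not started by the cluster start scripts; in Hadoop 2.x it has to be launched separately on the master:

```shell
# Start the MapReduce JobHistory server (web UI on 10.11.6.79:19888)
mr-jobhistory-daemon.sh start historyserver
```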

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>10.11.6.79</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>10.11.6.79:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>10.11.6.79:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>10.11.6.79:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>10.11.6.79:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>10.11.6.79:8088</value>
    </property>
</configuration>
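After YARN is running, you can confirm that the NodeManagers on the slaves registered with the ResourceManager configured above:

```shell
# List NodeManagers known to the ResourceManager at 10.11.6.79:8032
yarn node -list
# The same information is visible in the web UI: http://10.11.6.79:8088
```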

slaves

10.11.6.52
10.11.6.53
10.11.6.54
10.11.6.55
10.11.6.57
10.11.6.58
10.11.6.70
10.11.6.71
10.11.6.72
10.11.6.73
10.11.6.74
10.11.6.75
10.11.6.77
10.11.6.78
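The start scripts SSH into every host in the slaves file, so passwordless SSH from the master to each slave is a prerequisite. A small sketch for checking all of them at once (the slaves-file path assumes the standard `$HADOOP_HOME/etc/hadoop` layout used in this article):

```shell
# Test passwordless SSH to every host listed in the slaves file
while read -r host; do
    [ -z "$host" ] && continue   # skip blank lines
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
        echo "OK   $host"
    else
        echo "FAIL $host"
    fi
done < "$HADOOP_HOME/etc/hadoop/slaves"
```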

hadoop-env.sh, yarn-env.sh, mapred-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64

Configure the Hadoop environment variables

export HADOOP_HOME=/home/wangxiaotong/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
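Assuming the two export lines above were appended to the login shell's startup file (e.g. `~/.bashrc`; the exact file is an assumption, not stated in the original), reload and verify:

```shell
# Reload the shell configuration and confirm the hadoop binary is on the PATH
source ~/.bashrc
hadoop version   # should report Hadoop 2.7.5
```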

Starting the cluster

hadoop-daemons.sh start datanode   # start the DataNode on all slave nodes
hadoop-daemon.sh start datanode    # start the DataNode on the current node only
start-all.sh                       # start all services
stop-all.sh                        # stop all services
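Two notes on the commands above: the NameNode must be formatted once before the very first start, and `start-all.sh` is deprecated in Hadoop 2.x in favor of the explicit per-subsystem scripts:

```shell
# First startup ONLY -- formatting destroys any existing HDFS metadata
hdfs namenode -format

# Equivalent of start-all.sh, as recommended for Hadoop 2.x
start-dfs.sh    # NameNode, SecondaryNameNode, DataNodes
start-yarn.sh   # ResourceManager, NodeManagers
```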

  • Processes on the master node (jps output):
2437 NameNode
2569 SecondaryNameNode
4778 Jps
31467 Kafka
31004 QuorumPeerMain
  • Processes on a slave node (jps output):
26576 DataNode
22563 QuorumPeerMain
23044 Kafka
29402 Jps
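(The Kafka and QuorumPeerMain entries belong to Kafka/ZooKeeper services running on the same machines, not to Hadoop.) Beyond `jps`, a cluster-wide view of which DataNodes actually joined is available from the HDFS admin tool:

```shell
# Summarize cluster capacity and list live/dead DataNodes
hdfs dfsadmin -report
```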

Reference: http://blog.51cto.com/balich/2062052

Reading a file

hdfs dfs -cat hdfs://centos7-dase-79:9000/flink-checkpoints/test.txt
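Since `fs.defaultFS` is set in core-site.xml, the full `hdfs://host:port` URI is optional and a bare path works too. A quick write/read round trip (the file name is illustrative):

```shell
# Write a local file into HDFS and read it back
echo "hello hdfs" > /tmp/test.txt
hdfs dfs -mkdir -p /flink-checkpoints
hdfs dfs -put -f /tmp/test.txt /flink-checkpoints/test.txt
hdfs dfs -cat /flink-checkpoints/test.txt
```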
