Setting up a Hadoop Development Environment
Why distributed Hadoop helps:
Built on a distributed file system (DFS) and a key/value computation model, the framework resolves computation bottlenecks efficiently. Developers no longer need to write complex distributed programs themselves: with the underlying framework in place, a distributed program can be completed with relatively little code, so developers only need to focus on the business logic.
Setting up the Hadoop development environment on Windows:
Installing Cygwin:
1. Download Cygwin from the official site and install it, selecting these packages: Net: openssl, openssh; Editors: vim; Devel: subversion.
Click through the installer to finish.
2. Install the JDK.
3. Configure the environment variables: set CYGWIN=ntsec, and add c:/cygwin/bin, c:/cygwin/usr/bin, and the JDK bin directory to PATH.
4. Run ssh-host-config to set up the sshd service.
The dialog proceeds as follows:
*** Query: Should privilege separation be used? (yes/no) yes  # enter yes
*** Query: Do you want to install sshd as a service?
*** Query: (Say "no" if it is already installed as a service) (yes/no) yes  # enter yes
*** Query: Enter the value of CYGWIN for the daemon: [ntsec] ntsec  # enter ntsec
*** Query: Do you want to use a different name? (yes/no) no  # enter no, keep the default user
*** Query: Create new privileged user account cyg_server? (yes/no) no  # enter no, do not create a separate service user
*** Query: Do you want to proceed anyway? (yes/no) yes  # enter yes
Start the ssh service:
cygrunsrv --start sshd
Check the Windows services list (services.msc); if sshd is not running, reinstall it.
Run net user to list the accounts;
if an sshd user appears, the setup is correct.
Installing Hadoop:
1. Download the hadoop-0.20.2 archive to /home/Admin/.
2. Extract it.
3. Edit the configuration files: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, masters, and slaves, six files in total. Configure them as follows:
hadoop-env.sh
export JAVA_HOME=/cygdrive/d/jdk1.6.0_10
export HADOOP_CLASSPATH=/home/hhp/hadoop-0.20.2
core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://202.112.1.50:9000</value>
</property>
hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
mapred-site.xml :
<property>
<name>mapred.job.tracker</name>
<value>202.112.1.50:9001</value>
</property>
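Note that the <property> snippets above must each sit inside a <configuration> root element in their file. For example, a complete core-site.xml (using the same address as above) would look roughly like this:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://202.112.1.50:9000</value>
  </property>
</configuration>
```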
4. Set up passwordless ssh. The commands are:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Test it with:
ssh localhost
5. Format the namenode:
bin/hadoop namenode -format (format only once; it is not needed on later starts)
6. Start the Hadoop daemons:
bin/start-all.sh
Verify via the web interfaces:
NameNode - http://localhost:50070/
JobTracker - http://localhost:50030/
Configuring Eclipse:
1. Copy the hadoop-eclipse-plugin-0.20.3-SNAPSHOT jar into Eclipse's plugins directory.
Start Eclipse.
Configure the Map/Reduce location with port 9001 (Map/Reduce master, matching mapred.job.tracker above) and port 9000 (DFS master, matching fs.default.name).
2. Switch to the Map/Reduce perspective; once the elephant icon appears, the plugin is loaded.
3. Create and run a Map/Reduce project.
After creating a new Map/Reduce project, you can add Mapper, Reducer, and MapReduce Driver classes to it, as shown in the figure:
4. Run the following example.
The Mapper code:
package test.map;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class MapA extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
The Reducer code:
package test.reduce;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class ReduceA extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
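Before running on the cluster, the tokenize-then-sum logic of MapA and ReduceA can be checked in plain Java with no Hadoop dependencies. This is only an illustrative sketch; countWords is a hypothetical helper written for this note, not part of the Hadoop API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountLocal {

    // Mimics MapA (tokenize the line, emit (word, 1)) followed by
    // ReduceA (sum the 1s for each word), all in memory.
    static Map<String, Integer> countWords(String line) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            String word = itr.nextToken();
            Integer prev = counts.get(word);
            counts.put(word, prev == null ? 1 : prev + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords("hello hadoop hello"));
    }
}
```

Running it on a sample line lets you confirm the counting logic before involving HDFS or the job tracker at all.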
The MapReduce Driver code:
package test;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import test.map.MapA;
import test.reduce.ReduceA;
public class Driver {
    public static void main(String[] args)
            throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(Driver.class);
        job.setMapperClass(MapA.class);
        job.setCombinerClass(ReduceA.class);
        job.setReducerClass(ReduceA.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("ab"));
        FileOutputFormat.setOutputPath(job, new Path("output2"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
The code above is the WordCount example that ships with Hadoop, with its inner classes converted to ordinary top-level classes.
After writing the code, click Run As -> Run on Hadoop to run it.