Many Hadoop beginners are probably in the same situation as me: without enough machine resources, the only option is a pseudo-distributed Hadoop installed on a Linux VM, while the code is written and tested in Eclipse or IntelliJ IDEA on the Windows 7 host. So the question is: how do you submit map/reduce jobs from Eclipse or IntelliJ IDEA on Win7 to the remote Hadoop, and debug them with breakpoints?
1. Preparation
1.1 On Win7, pick a directory and unpack hadoop-2.6.0. In this article it is D:\yangjm\Code\study\hadoop\hadoop-2.6.0 (referred to as $HADOOP_HOME below).
1.2 Add a few environment variables on Win7:
HADOOP_HOME=D:\yangjm\Code\study\hadoop\hadoop-2.6.0
HADOOP_BIN_PATH=%HADOOP_HOME%\bin
HADOOP_PREFIX=D:\yangjm\Code\study\hadoop\hadoop-2.6.0
Also append ;%HADOOP_HOME%\bin to the end of the PATH variable.
2. Remote debugging with Eclipse
2.1 Download the hadoop-eclipse-plugin
hadoop-eclipse-plugin is a Hadoop plugin built specifically for Eclipse; it lets you browse HDFS directories and file contents directly inside the IDE. Its source code is hosted on GitHub at https://github.com/winghc/hadoop2x-eclipse-plugin
If you are interested, you can download the source and build it yourself; there are plenty of articles on how to do that. But if you just want to use it, https://github.com/winghc/hadoop2x-eclipse-plugin/tree/master/release already provides pre-built versions. Copy the downloaded hadoop-eclipse-plugin-2.6.0.jar into the eclipse/plugins directory, restart Eclipse, and you are done.
2.2 Download the 64-bit Windows native package for Hadoop 2.6 (hadoop.dll, winutils.exe)
Under hadoop-common-project\hadoop-common\src\main\winutils in the hadoop-2.6.0 source there is a Visual Studio project; building it produces a set of files, of which
hadoop.dll and winutils.exe are the two that matter. Copy winutils.exe into $HADOOP_HOME\bin and copy hadoop.dll into %windir%\system32 (this mainly prevents the plugin from throwing all sorts of odd errors, such as null reference exceptions).
Note: if you do not want to build it yourself, you can download the pre-built archive hadoop2.6(x64)V0.2.zip directly.
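As a side note (my own suggestion, not part of the original steps): if setting HADOOP_HOME system-wide is inconvenient, the Hadoop client on Windows also honors the hadoop.home.dir system property, so a sketch like the following, run before any Hadoop class is touched, can point it at the unpacked directory. The class name and path are illustrative only.

// Hypothetical workaround: tell the Hadoop client where winutils.exe lives.
// The path is an example and must match your local hadoop-2.6.0 directory.
public class HadoopHomeWorkaround {
    public static void main(String[] args) {
        System.setProperty("hadoop.home.dir", "D:\\yangjm\\Code\\study\\hadoop\\hadoop-2.6.0");
        // ... continue with the normal job setup from here
    }
}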
2.3 Configure the hadoop-eclipse-plugin
Start Eclipse, then Window -> Show View -> Other
Window -> Preferences -> Hadoop Map/Reduce, and point it at the Hadoop root directory on Win7 (i.e. $HADOOP_HOME).
Then, in the Map/Reduce Locations panel, click the little elephant icon
to add a Location.
This dialog is very important, so here is what the parameters mean:
Location name: just a label; pick anything you like.
Map/Reduce (V2) Master Host: the IP address of the Hadoop master inside the VM; the port below it corresponds to the port specified by the dfs.datanode.ipc.address property in hdfs-site.xml.
DFS Master Port: this port corresponds to the one specified by fs.defaultFS in core-site.xml.
The user name at the bottom must match the user that runs Hadoop inside the VM. I installed and run Hadoop 2.6.0 as the user hadoop, so I enter hadoop here; if you installed it as root, change it to root accordingly.
Once these parameters are set, click Finish and Eclipse knows how to connect to Hadoop. If all goes well, you will see the HDFS directories and files in the Project Explorer panel.
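If the plugin cannot connect, a quick sanity check outside Eclipse is a few lines of plain HDFS client code. This is only a sketch; the class name is illustrative and the IP/port placeholder is the same one used later in this article, so replace it with your VM's values.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectTest {
    public static void main(String[] args) throws Exception {
        // Same address/port as fs.defaultFS in core-site.xml on the VM.
        FileSystem fs = FileSystem.get(URI.create("hdfs://172.28.20.xxx:9000"), new Configuration());
        // List the HDFS root directory to confirm the connection works.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}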
You can right-click a file and try Delete. The first attempt usually fails with a long message that boils down to insufficient permissions, because the current Win7 login user is not the user that runs Hadoop inside the VM. There are several ways to fix this; for example, you could create a hadoop administrator account on Win7, log in to Win7 as hadoop, and then develop in Eclipse, but that is too much hassle. The simplest fix:
Add the following to hdfs-site.xml:
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
Then, inside the VM, run: hadoop dfsadmin -safemode leave
To be safe, also run: hadoop fs -chmod 777 /
In short, this turns Hadoop's permission checks off completely (fine while learning, but never do this in production). Finally, restart Hadoop, go back to Eclipse, and repeat the file deletion; it should work now.
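If you would rather not switch permission checking off, an alternative (my own assumption, not something the original steps rely on) is to make the Windows-side client identify itself as the VM's hadoop user before the first FileSystem or Job is created. A minimal sketch:

public class RunAsHadoopUser {
    public static void main(String[] args) {
        // Alternative to dfs.permissions=false: have the client act as the "hadoop" user.
        // Must run before the first FileSystem/Job is created; "hadoop" must match
        // the user that runs HDFS inside the VM.
        System.setProperty("HADOOP_USER_NAME", "hadoop");
        // ... continue with the normal job setup from here
    }
}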
2.4 Create the WordCount sample project
Create a new project and choose Map/Reduce Project.
Just click Next through the remaining steps, then add a WordCount.java with the following code:
package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Then add a log4j.properties as well, with the content below (so the various outputs are easy to see once the job runs):
log4j.rootLogger=INFO, stdout

#log4j.logger.org.springframework=INFO
#log4j.logger.org.apache.activemq=INFO
#log4j.logger.org.apache.activemq.spring=WARN
#log4j.logger.org.apache.activemq.store.journal=INFO
#log4j.logger.org.activeio.journal=INFO

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
The final directory structure looks like this:
Now you can Run it. Of course it will not succeed yet, because WordCount has not been given any input arguments; see the figure below:
2.5 Set the run arguments
WordCount takes a file as input, counts the words, and writes the result to another folder, so it needs two arguments. As in the figure above, enter the following in Program arguments:
hdfs://172.28.20.xxx:9000/jimmy/input/README.txt
hdfs://172.28.20.xxx:9000/jimmy/output/
Adjust these to your setup (mainly, replace the IP with your VM's IP). Note that if input/README.txt does not exist, upload it manually first, and /output/ must not exist yet, otherwise the program will fail at the end when it finds the target directory already there. With that done, set a breakpoint somewhere suitable and you can finally debug:
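If README.txt is not on HDFS yet, you can of course upload it with hadoop fs -put on the VM. As a hedged alternative, the sketch below does the same thing from the Windows side with the HDFS Java API; the class name and local source path are assumptions, so adjust them to your own layout.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadInput {
    public static void main(String[] args) throws Exception {
        // Connect to the same HDFS instance the job will read from.
        FileSystem fs = FileSystem.get(URI.create("hdfs://172.28.20.xxx:9000"), new Configuration());
        // Local source path is only an example; point it at wherever README.txt sits on the host.
        fs.copyFromLocalFile(new Path("D:/tmp/README.txt"), new Path("/jimmy/input/README.txt"));
        fs.close();
    }
}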
3. Remote debugging Hadoop with IntelliJ IDEA
3.1 Create a Maven WordCount project
The pom file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>yjmyzz</groupId>
    <artifactId>mapreduce-helloworld</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>commons-cli</groupId>
            <artifactId>commons-cli</artifactId>
            <version>1.2</version>
        </dependency>
    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
    </build>

</project>
The project structure looks like this:
Right-click the project -> Open Module Settings, or press F12, to open the module properties.
Add the dependent Library references,
then import all the relevant jars under $HADOOP_HOME.
The imported library can be given a name, for example hadoop2.6.
3.2 Set the run arguments
Two things to note:
1. Program arguments: as in Eclipse, specify the input file and the output folder here.
2. Working Directory: set the working directory to the $HADOOP_HOME directory.
Then you can debug.
The one annoyance under IntelliJ is that, since there is no equivalent of the Eclipse Hadoop plugin, every time WordCount finishes you have to delete the output directory manually from the command line before the next run. To solve this, WordCount can be improved to delete the output directory before the job runs; see the code below:
package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    /**
     * Delete the given directory (recursively) if it exists.
     *
     * @param conf    job configuration, used to obtain the FileSystem
     * @param dirPath path of the directory to delete
     * @throws IOException
     */
    private static void deleteDir(Configuration conf, String dirPath) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path targetPath = new Path(dirPath);
        if (fs.exists(targetPath)) {
            boolean delResult = fs.delete(targetPath, true);
            if (delResult) {
                System.out.println(targetPath + " has been deleted successfully.");
            } else {
                System.out.println(targetPath + " deletion failed.");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }

        // Delete the output directory before the job runs
        deleteDir(conf, otherArgs[otherArgs.length - 1]);

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
But that alone is not enough. When running inside the IDE, the IDE needs to know which HDFS instance to connect to (just as a DataSource must be specified in the configuration XML in database development). Copy core-site.xml from $HADOOP_HOME\etc\hadoop into the resources directory, so it looks something like this:
Its content is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.28.20.***:9000</value>
    </property>
</configuration>
Just replace the IP above with the IP of your VM.
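Alternatively (my own variant, not the approach used above), you can skip copying core-site.xml and set fs.defaultFS directly on the Configuration before building the Job. A minimal sketch, with an illustrative class name and the usual IP placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ConfiguredWordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent to the fs.defaultFS entry in the copied core-site.xml; use your VM's IP.
        conf.set("fs.defaultFS", "hdfs://172.28.20.xxx:9000");
        Job job = Job.getInstance(conf, "word count");
        // ... set mapper/reducer/input/output as in the WordCount example above
    }
}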