Spring Hadoop Setup Example: Configuring Spring

When you first start experimenting with Spring Hadoop, you will run into all sorts of strange problems, and people have already begun reporting them one after another.

If you just want to give it a quick try without sorting out those odd issues yourself, you can follow the steps below to get a fast feel for what Spring Hadoop can do.

Spring Hadoop Quick Start

Step 1. Download Spring Hadoop. Here we use git to fetch it; if you are not familiar with git, you can also download the archive from the official site and unpack it.

The examples below use my home directory; remember to change the paths to your own directory name.

/home/evanshsu mkdir springhadoop

/home/evanshsu cd springhadoop

/home/evanshsu/springhadoop git init

/home/evanshsu/springhadoop git pull "git://github.com/SpringSource/spring-hadoop.git"
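After the pull finishes, the checkout should contain the gradlew wrapper and the samples/wordcount project used in the rest of this walkthrough; a quick listing (not part of the original steps) is an easy way to confirm:

/home/evanshsu/springhadoop ls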

Step 2. Build spring-hadoop.jar.

Once the build finishes, we put all of the jar files into /home/evanshsu/springhadoop/lib so that they can later be packaged into a single jar.

/home/evanshsu/springhadoop ./gradlew jar

/home/evanshsu/springhadoop mkdir lib

/home/evanshsu/springhadoop cp build/libs/spring-data-hadoop-1.0.0.BUILD-SNAPSHOT.jar lib/

Step 3. Get spring-framework.

Because Spring Hadoop depends on spring-framework, we also have to put the spring-framework jars into lib.

/home/evanshsu/spring wget "http://s3.amazonaws.com/dist.springframework.org/release/SPR/spring-framework-3.1.1.RELEASE.zip"

/home/evanshsu/spring unzip spring-framework-3.1.1.RELEASE.zip

/home/evanshsu/spring cp spring-framework-3.1.1.RELEASE/dist/*.jar /home/evanshsu/springhadoop/lib/
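As a quick sanity check, lib/ should now hold both the spring-data-hadoop snapshot jar and the spring-framework 3.1.1 jars before we build the sample:

/home/evanshsu/springhadoop ls lib/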

Step 4. Modify the build file so that all of the jars can be packaged into one jar.

/home/evanshsu/spring/samples/wordcount vim build.gradle

[code]

description = 'Spring Hadoop Samples - WordCount'

apply plugin: 'base'
apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'eclipse'

repositories {
flatDir(dirs: '/home/evanshsu/springhadoop/lib/')
// Public Spring artefacts
maven { url "http://repo.springsource.org/libs-release" }
maven { url "http://repo.springsource.org/libs-milestone" }
maven { url "http://repo.springsource.org/libs-snapshot" }
}

dependencies {
compile fileTree('/home/evanshsu/springhadoop/lib/')
compile "org.apache.hadoop:hadoop-examples:$hadoopVersion"
// see HADOOP-7461
runtime "org.codehaus.jackson:jackson-mapper-asl:$jacksonVersion"

testCompile "junit:junit:$junitVersion"
testCompile "org.springframework:spring-test:$springVersion"
}

jar {
from configurations.compile.collect { it.isDirectory() ? it : zipTree(it).matching{
        exclude 'META-INF/spring.schemas'
        exclude 'META-INF/spring.handlers'
        } }
}

[/code]
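A note on the jar block above: it folds every compile-scope dependency into the sample jar, so the artifact that hadoop jar launches in Step 9 is self-contained, while the two exclude lines keep the per-jar META-INF/spring.schemas and META-INF/spring.handlers files from clobbering each other during the merge (Step 8 adds combined versions by hand).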

Step 5. A dedicated hadoop.properties file holds the Hadoop-related settings.

Basically, change wordcount.input.path and wordcount.output.path to the paths the wordcount job should use, and remember to put some text content at wordcount.input.path (see the example commands after the properties file below).

Also change hd.fs to match your HDFS setup.

If you are using the NCHC (National Center for High-performance Computing) Hadoop cluster, set hd.fs to hd.fs=hdfs://gm2.nchc.org.tw:8020

/home/evanshsu/spring/samples/wordcount vim src/main/resources/hadoop.properties

[code]

wordcount.input.path=/user/evanshsu/input.txt
wordcount.output.path=/user/evanshsu/output

hive.host=localhost
hive.port=12345
hive.url=jdbc:hive://${hive.host}:${hive.port}
hd.fs=hdfs://localhost:9000
mapred.job.tracker=localhost:9001

path.cat=bin${file.separator}stream-bin${file.separator}cat
path.wc=bin${file.separator}stream-bin${file.separator}wc

input.directory=logs
log.input=/logs/input/
log.output=/logs/output/

distcp.src=${hd.fs}/distcp/source.txt
distcp.dst=${hd.fs}/distcp/dst

[/code]
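Before running the job, the input path has to exist in HDFS. As an example, assuming you have some local text file (sample.txt below is just a placeholder name), you can seed it like this:

/home/evanshsu/spring/samples/wordcount hadoop fs -mkdir /user/evanshsu

/home/evanshsu/spring/samples/wordcount hadoop fs -put sample.txt /user/evanshsu/input.txt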

Step 6. This is the most important configuration file; anyone who has used Spring knows that this file is the heart of a Spring application. It declares the Hadoop configuration, the wordcount job, and the runner bean that submits the job when the context starts.

/home/evanshsu/spring/samples/wordcount vim src/main/resources/META-INF/spring/context.xml

xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

xmlns:context="http://www.springframework.org/schema/context"

xmlns:hdp="http://www.springframework.org/schema/hadoop"

xmlns:p="http://www.springframework.org/schema/p"

xsi:schemaLocation="http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans.xsd

http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd

http://www.springframework.org/schema/hadoop http://www.springframework.org/schema/hadoop/spring-hadoop.xsd">

fs.default.name=${hd.fs}

input-path="${wordcount.input.path}" output-path="${wordcount.output.path}"

mapper="org.springframework.data.hadoop.samples.wordcount.WordCountMapper"

reducer="org.springframework.data.hadoop.samples.wordcount.WordCountReducer"

jar-by-class="org.springframework.data.hadoop.samples.wordcount.WordCountMapper" />

Step 7. Add your own mapper and reducer.

/home/evanshsu/spring/samples/wordcount vim src/main/java/org/springframework/data/hadoop/samples/wordcount/WordCountMapper.java

package org.springframework.data.hadoop.samples.wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // tokenize the line and emit (word, 1) for every token
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

/home/evanshsu/spring/samples/wordcount vim src/main/java/org/springframework/data/hadoop/samples/wordcount/WordCountReducer.java

package org.springframework.data.hadoop.samples.wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // sum the counts emitted for this word by all mappers
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

Step 8. Add spring.schemas and spring.handlers, so that Spring can still map the XML namespaces used in context.xml to their schemas and handlers after the jars have been merged.

/home/evanshsu/spring/samples/wordcount vim src/main/resources/META-INF/spring.schemas

http\://www.springframework.org/schema/context/spring-context.xsd=org/springframework/context/config/spring-context-3.1.xsd

http\://www.springframework.org/schema/hadoop/spring-hadoop.xsd=/org/springframework/data/hadoop/config/spring-hadoop-1.0.xsd

/home/evanshsu/spring/samples/wordcount vim src/main/resources/META-INF/spring.handlers

http\://www.springframework.org/schema/p=org.springframework.beans.factory.xml.SimplePropertyNamespaceHandler

http\://www.springframework.org/schema/context=org.springframework.context.config.ContextNamespaceHandler

http\://www.springframework.org/schema/hadoop=org.springframework.data.hadoop.config.HadoopNamespaceHandler

Step 9. At last, the final step: package all of the jars together and submit the result to Hadoop.

/home/evanshsu/spring/samples/wordcount ../../gradlew jar

/home/evanshsu/spring/samples/wordcount hadoop jar build/libs/wordcount-1.0.0.M1.jar org.springframework.data.hadoop.samples.wordcount.Main
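The Main class named in the command above is the driver that ships with the sample; conceptually it does little more than load the Spring context, which triggers the runner declared in Step 6 and submits the job. A rough sketch of such a driver (an assumption for illustration, not the sample's exact source):

package org.springframework.data.hadoop.samples.wordcount;

import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {

    public static void main(String[] args) {
        // loading the context starts the runner bean, which submits the wordcount job
        AbstractApplicationContext ctx =
                new ClassPathXmlApplicationContext("/META-INF/spring/context.xml");
        // close the context cleanly when the JVM exits
        ctx.registerShutdownHook();
    }
}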

Step 10. Finally, check whether the results came out.

/home/evanshsu/spring/samples/wordcount hadoop fs -cat /user/evanshsu/output/*
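If everything worked, each part file under /user/evanshsu/output contains one word per line followed by a tab and its count, which is the standard output format of the Hadoop wordcount example.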

