Some say a birthmark shows how you died in your previous life. I have one on my belly, so I wondered: was I stabbed to death in my last life? Who could be that cruel... Then I met someone with a similar mark in the same spot, and suddenly I felt much better.
Best reply: maybe the two of you were kebabs in your past life.
Preface
The previous post covered setting up a Spark environment and submitting jobs. This one deploys Spark directly inside a web service built on Spring Boot, handing some of the data-processing logic over to Spark. As for how it works under the hood, I'll go through that once I understand Spark more deeply!

Coding

Use Spring Boot to quickly scaffold a web project. I never used to pay much attention to the dependency configuration in the pom, but after falling into the Spark-versus-Scala version pit I discovered how hard it is to put together a clean pom. Here it is, offered with love:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.3.2.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
    <scala.version>2.10.4</scala.version>
    <spark.version>1.6.2</spark.version>
</properties>

<dependencies>
    <!-- swap the default Logback logging for log4j -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-logging</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <!-- the Scala runtime must match the _2.10 suffix of the Spark artifacts below -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.4.4</version>
    </dependency>
    <!-- exclude Spark's log4j bindings so they don't clash with the ones
         pulled in by spring-boot-starter-log4j -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>${spark.version}</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-launcher_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.4</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.specs</groupId>
        <artifactId>specs</artifactId>
        <version>1.2.5</version>
        <scope>test</scope>
    </dependency>
    <!-- ansj_seg: Chinese word segmentation -->
    <dependency>
        <groupId>org.ansj</groupId>
        <artifactId>ansj_seg</artifactId>
        <version>5.1.1</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>
This covers the dependencies that Spring Boot and Spark need. Note that the _2.10 suffix on the Spark artifacts is the Scala binary version, so it has to match the scala-library version (2.10.4 here); mixing them up is exactly the version pit mentioned above.
Next, write a method that counts word occurrences. The program is the same as before; only the Spark configuration changes: the JavaSparkContext is now injected as a Spring bean.
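The post doesn't show the configuration class itself, so here is a minimal sketch of what it might look like; the class name, app name, and local[*] master are my assumptions, not taken from the original:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration: exposes the Spark context as a Spring bean
@Configuration
public class SparkConfig {

    @Bean
    public SparkConf sparkConf() {
        return new SparkConf()
                .setAppName("spring-boot-spark") // illustrative app name
                .setMaster("local[*]");          // run Spark inside the web process
    }

    @Bean
    public JavaSparkContext javaSparkContext(SparkConf conf) {
        return new JavaSparkContext(conf);
    }
}

With the context available as a bean, the word-count service looks like this: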
import java.io.Serializable;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import scala.Tuple2;

@Component
public class WordCountService implements Serializable {

    private static final Pattern SPACE = Pattern.compile(" ");

    // transient: keep the context out of task serialization (see the note below)
    @Autowired
    private transient JavaSparkContext sc;

    public Map<String, Integer> run() {
        Map<String, Integer> result = new HashMap<>();
        JavaRDD<String> lines = sc.textFile("C:\\Users\\bd2\\Downloads\\blsmy.txt").cache();

        // transformations are lazy: no action is ever called on this mapped RDD,
        // so the println inside it never actually runs
        lines.map(new Function<String, String>() {
            @Override
            public String call(String s) throws Exception {
                System.out.println(s);
                return s;
            }
        });

        System.out.println(lines.count());

        // split each line on spaces
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) throws Exception {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // pair every word with an initial count of 1
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            private static final long serialVersionUID = 1L;

            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // sum the counts per word
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            private static final long serialVersionUID = 1L;

            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<String, Integer> tuple : output) {
            result.put(tuple._1(), tuple._2());
        }
        return result;
    }
}
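Since the point is to call Spark from the web layer, a controller like the following would expose the service; the endpoint path and class name are illustrative, not from the original code:

import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical REST endpoint returning the word counts as JSON
@RestController
public class WordCountController {

    @Autowired
    private WordCountService wordCountService;

    @RequestMapping("/wordcount")
    public Map<String, Integer> wordCount() {
        return wordCountService.run();
    }
}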
Note, note, note

Two details in the way WordCountService is written deserve attention:

implements Serializable
and private transient JavaSparkContext sc

The transient modifier keeps sc out of serialization; without it, you will run into this error:
Task not serializable] with root cause
java.io.NotSerializableException: com.quick.spark.xxx
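Why does this happen? Each anonymous inner class passed to Spark keeps a hidden reference to its enclosing instance, so when Spark ships the function to the executors it has to serialize the whole bean, including every field. A minimal sketch of the broken variant (a hypothetical BrokenService, for illustration only):

import java.io.Serializable;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class BrokenService implements Serializable {

    @Autowired
    private JavaSparkContext sc; // NOT transient: gets dragged into serialization

    public long countNonEmpty(String path) {
        return sc.textFile(path)
                .filter(new Function<String, Boolean>() {
                    @Override
                    public Boolean call(String s) { // captures BrokenService.this
                        return !s.isEmpty();
                    }
                })
                // the action serializes the filter function, and with it the
                // enclosing bean and its non-serializable JavaSparkContext
                .count();
    }
}

Marking sc as transient tells Java serialization to skip that field, which is safe because the executors never need the driver-side context.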
Don't ask how I know: this problem cost me an entire afternoon of blood and tears, and I read through answers in Chinese, English, and even Japanese... The text file I used was the English edition of The Hunchback of Notre-Dame; the results are below.
[Screenshot: word-count results]
I've put the code on GitHub; take a look if you're interested.

Postscript

The code is all at the office, and the connection where I live is painfully slow; this short post took me over half an hour to write...
I've been working with Spark for less than four days, and this demo has given me a much better feel for it. The book I ordered a few days ago, the Chinese edition of Learning Spark (《Spark快速大数据分析》), arrived today; it's worth a read.

Follow-up
This morning I rewrote the code above using the lambda expressions introduced in Java 8, as shown below.
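The original screenshot isn't reproduced here, but a lambda version of run() would look roughly like this (same Spark 1.6 API, same behavior):

public Map<String, Integer> run() {
    JavaRDD<String> lines = sc.textFile("C:\\Users\\bd2\\Downloads\\blsmy.txt").cache();

    // flatMap -> mapToPair -> reduceByKey, each function written as a lambda
    JavaPairRDD<String, Integer> counts = lines
            .flatMap(s -> Arrays.asList(SPACE.split(s)))
            .mapToPair(w -> new Tuple2<>(w, 1))
            .reduceByKey((a, b) -> a + b);

    Map<String, Integer> result = new HashMap<>();
    for (Tuple2<String, Integer> tuple : counts.collect()) {
        result.put(tuple._1(), tuple._2());
    }
    return result;
}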
It cut the amount of code roughly in half, and supposedly it runs faster too...