This article explains how to integrate Phoenix with Spark, using Phoenix as the storage layer and Spark as the compute layer, combining Phoenix's fast queries with Spark's fast computation.
Phoenix tables are exposed to Spark as RDDs or DataFrames, and the results of Spark computations are written back to Phoenix.
This broadens the use cases of both systems.
Let's walk through the integration.
Versions used:
Phoenix 4.4.0
HBase 0.98
Spark spark-1.5.2-bin-hadoop2.6
First, configure SPARK_CLASSPATH.
For Spark to operate on Phoenix, it must be able to find the Phoenix classes, so add the client jars to SPARK_CLASSPATH:
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/phoenix/phoenix-spark-4.4.0-HBase-0.98-tests.jar
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/phoenix/phoenix-4.4.0-HBase-0.98-client.jar
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/phoenix/phoenix-server-client-4.4.0-HBase-0.98.jar
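As a side note, SPARK_CLASSPATH was deprecated starting with Spark 1.0, so on spark-1.5.2 the same jars can alternatively be wired in through conf/spark-defaults.conf. A sketch, assuming the same jar paths as the exports above:

```shell
# conf/spark-defaults.conf -- alternative to SPARK_CLASSPATH
# (jar path assumed to match the exports above)
spark.driver.extraClassPath    /home/hadoop/phoenix/phoenix-4.4.0-HBase-0.98-client.jar
spark.executor.extraClassPath  /home/hadoop/phoenix/phoenix-4.4.0-HBase-0.98-client.jar
```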
With that in place, you can operate on Phoenix from spark-shell (simple, right?).
Now let's put the two together with an experiment:
1> Create a few tables in Phoenix
[hadoop@10.10.113.45 ~/phoenix/bin]$>./sqlline.py 10.10.113.45:2181
0: jdbc:phoenix:10.10.113.45:2181> CREATE TABLE EMAIL_ENRON(
. . . . . . . . . . . . . . . . .> MAIL_FROM BIGINT NOT NULL,
. . . . . . . . . . . . . . . . .> MAIL_TO BIGINT NOT NULL
. . . . . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY(MAIL_FROM, MAIL_TO));
0: jdbc:phoenix:10.10.113.45:2181> CREATE TABLE EMAIL_ENRON_PAGERANK(
. . . . . . . . . . . . . . . . .> ID BIGINT NOT NULL,
. . . . . . . . . . . . . . . . .> RANK DOUBLE
. . . . . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY(ID));
No rows affected (0.52 seconds)
Check that they were created successfully:
0: jdbc:phoenix:10.10.113.45:2181> !tables
+------------------------------------------+------------------------------------------+------------------------------------------+--------------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE |
+------------------------------------------+------------------------------------------+------------------------------------------+--------------+
| | SYSTEM | CATALOG | SYSTEM TABLE |
| | SYSTEM | FUNCTION | SYSTEM TABLE |
| | SYSTEM | SEQUENCE | SYSTEM TABLE |
| | SYSTEM | STATS | SYSTEM TABLE |
| | | EMAIL_ENRON | TABLE |
| | | EMAIL_ENRON_PAGERANK | TABLE |
+------------------------------------------+------------------------------------------+------------------------------------------+--------------+
0: jdbc:phoenix:10.10.113.45:2181>
2> Next, load the data into Phoenix. The dataset has roughly 400,000 rows.
[hadoop@10.10.113.45 ~/phoenix/bin]$>./psql.py -t EMAIL_ENRON 10.10.113.45:2181 /home/hadoop/sfs/enron.csv
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/12/03 10:06:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
csv columns from database.
CSV Upsert complete. 367662 rows upserted
Time: 21.783 sec(s)
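A note on the input file: psql.py expects a CSV, while the SNAP Enron dataset (linked below) is distributed as a tab-separated edge list with `#` comment lines. A conversion along these lines produces the enron.csv used above (file names are assumed, not from the original run):

```shell
# Strip the '#' comment header and turn tabs into commas
# (input/output names assumed to match the psql.py command above)
grep -v '^#' Email-Enron.txt | tr '\t' ',' > enron.csv
```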
Data source: https://snap.stanford.edu/data/email-Enron.html
Then run a query:
0: jdbc:phoenix:10.10.113.45:2181> select count(*) from EMAIL_ENRON;
+------------------------------------------+
| COUNT(1) |
+------------------------------------------+
| 367662 |
+------------------------------------------+
1 row selected (0.289 seconds)
A count over 370,000 rows comes back in under a second!
Next, start the spark-shell interactive shell and run a PageRank example.
[hadoop@10.10.113.45 ~/spark/bin]$>./spark-shell
scala> import org.apache.spark.graphx._
import org.apache.spark.graphx._
scala> import org.apache.phoenix.spark._
import org.apache.phoenix.spark._
scala> val rdd = sc.phoenixTableAsRDD("EMAIL_ENRON", Seq("MAIL_FROM", "MAIL_TO"), zkUrl=Some("10.10.113.45"))
rdd: org.apache.spark.rdd.RDD[Map[String,AnyRef]] = MapPartitionsRDD[2] at map at SparkContextFunctions.scala:39
scala> val rawEdges = rdd.map{ e => (e("MAIL_FROM").asInstanceOf[VertexId], e("MAIL_TO").asInstanceOf[VertexId]) }
rawEdges: org.apache.spark.rdd.RDD[(org.apache.spark.graphx.VertexId, org.apache.spark.graphx.VertexId)] = MapPartitionsRDD[3] at map at <console>:29
scala> val graph = Graph.fromEdgeTuples(rawEdges, 1.0)
graph: org.apache.spark.graphx.Graph[Double,Int] = org.apache.spark.graphx.impl.GraphImpl@621bb3c3
scala> val pr = graph.pageRank(0.001)
pr: org.apache.spark.graphx.Graph[Double,Double] = org.apache.spark.graphx.impl.GraphImpl@55e444b1
scala> pr.vertices.saveToPhoenix("EMAIL_ENRON_PAGERANK", Seq("ID", "RANK"), zkUrl = Some("10.10.113.45"))
(This step is memory-intensive; some readers may hit an OOM error here when testing. If so, increase Spark's executor memory and driver memory.)
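If you do hit an OOM at this step, one way to raise the limits is at spark-shell launch time. The 4g values below are illustrative, not from the original run; size them to your cluster:

```shell
# Relaunch spark-shell with more driver/executor memory (sizes are examples)
./spark-shell --driver-memory 4g --executor-memory 4g
```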
Now let's check the results back in Phoenix.
0: jdbc:phoenix:10.10.113.45:2181> select count(*) from EMAIL_ENRON_PAGERANK;
+------------------------------------------+
| COUNT(1) |
+------------------------------------------+
| 29000 |
+------------------------------------------+
1 row selected (0.113 seconds)
0: jdbc:phoenix:10.10.113.45:2181> SELECT * FROM EMAIL_ENRON_PAGERANK ORDER BY RANK DESC LIMIT 5;
+------------------------------------------+------------------------------------------+
| ID | RANK |
+------------------------------------------+------------------------------------------+
| 273 | 117.18141799210386 |
| 140 | 108.63091596789913 |
| 458 | 107.2728800448782 |
| 588 | 106.11840798585399 |
| 566 | 105.13932886531066 |
+------------------------------------------+------------------------------------------+
5 rows selected (0.568 seconds)
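The intro mentioned that Phoenix tables can also be read as DataFrames, though the run above only used the RDD API. For completeness, here is a sketch of the DataFrame side of the phoenix-spark 4.4.0 API against the table just created; it assumes the same running HBase/ZooKeeper cluster and was not part of the original session:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._  // adds phoenixTableAsDataFrame to SQLContext

val sc = new SparkContext(new SparkConf().setAppName("phoenix-df-demo"))
val sqlContext = new SQLContext(sc)

// Load EMAIL_ENRON_PAGERANK as a DataFrame (zkUrl as in the RDD example)
val df = sqlContext.phoenixTableAsDataFrame(
  "EMAIL_ENRON_PAGERANK", Seq("ID", "RANK"), zkUrl = Some("10.10.113.45"))

// Standard DataFrame operations now apply, e.g. the same top-5 query as above
df.orderBy(df("RANK").desc).show(5)
```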
Author: Toutiao / 數(shù)據(jù)庫(kù)那些事
Link: http://toutiao.com/i6223959691949507074/
Source: Toutiao (Jinri Toutiao's creator platform)