Vertex IDs in GraphX can only be of type Long, but in practice you often run into string-typed IDs. In that case you need to build a mapping from the string IDs to Long vertex IDs.
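For reference, GraphX pins this constraint down with a type alias in the org.apache.spark.graphx package object, so any vertex ID must ultimately be a Long:

import org.apache.spark.graphx.VertexId

// VertexId is defined by GraphX as: type VertexId = Long
val id: VertexId = 42L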
Use Python to randomly generate 100 edges between string-typed vertices:
from random import randint

# 1000 string-typed vertex IDs: v_0 .. v_999.
vertices = ['v_' + str(i) for i in range(1000)]

# Draw random vertex pairs until we have 100 edges (self-loops skipped).
edges = []
while len(edges) != 100:
    # randint is inclusive on both ends, so 999 keeps the index in range.
    i = randint(0, 999)
    j = randint(0, 999)
    if i == j:
        continue
    edges.append((vertices[i], vertices[j]))

for i, j in edges:
    print('%s %s' % (i, j))
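The script's output is assumed to be redirected to a file and uploaded to HDFS as /data/graph_sample, the path read by the Spark job below.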
Compute the connected components of this graph:
import spark.implicits._
import org.apache.spark.sql._
import org.apache.spark.graphx.{Edge, Graph, VertexId}

// Load the edge list; each line has the format: vi vj
val data = spark.sparkContext.textFile("/data/graph_sample").map { line =>
  val items = line.split(" ")
  (items(0), items(1))
}.toDF("vi", "vj")
// Build the ID mapping: collect all distinct vertex IDs and pair each
// one with a unique Long index via zipWithIndex.
val dict = data.select("vi").union(data.select("vj")).distinct.rdd
  .zipWithIndex().map {
    case (Row(id: String), index) => (id, index)
  }.toDF("id", "vid")
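Note that zipWithIndex assigns indices according to partition order, so if the upstream RDD is ever recomputed, the same string could in principle end up with a different index. As a first line of defense (a sketch based on that assumption; the pitfall section at the end describes the stronger workaround that was ultimately needed), the mapping can be persisted and materialized right away:

// Persist the mapping so it is computed once and reused, rather than
// recomputed (possibly in a different order) by each downstream action.
dict.cache()
dict.count()  // force eager materialization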
// Rename the dictionary's columns so it can be joined onto each endpoint.
val dictVi = dict.withColumnRenamed("id", "vi").withColumnRenamed("vid", "vid_i")
val dictVj = dict.withColumnRenamed("id", "vj").withColumnRenamed("vid", "vid_j")
// Attach the Long IDs to both endpoints of every edge.
val data2 = data.join(dictVi, Seq("vi")).join(dictVj, Seq("vj"))
// Construct the graph: vertices carry an empty-string attribute, and each
// edge is added in both directions.
val vertices = data2.select("vid_i")
  .union(data2.select("vid_j"))
  .distinct
  .map { case Row(id: VertexId) => (id, "") }
val edges = data2.select("vid_i", "vid_j")
  .flatMap {
    case Row(vidi: Long, vidj: Long) =>
      Array(Edge(vidi, vidj, ""), Edge(vidj, vidi, ""))
  }
val g = Graph(vertices.rdd, edges.rdd, "")
// Compute the connected components.
val cc = g.connectedComponents()
// Map the Long vertex IDs back to the original string IDs.
val ret = cc.vertices.toDF("vid", "cid").join(dict, Seq("vid"))
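As a quick sanity check, the size of each component can be inspected from ret (a usage sketch; desc comes from org.apache.spark.sql.functions):

import org.apache.spark.sql.functions.desc

// Number of vertices in each connected component, largest first.
ret.groupBy("cid").count().orderBy(desc("count")).show()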
Pitfalls encountered
When using this in a production job, I ran into the following problem: a vertex started out with one ID, but its ID changed partway through execution. My guess is that this was caused by a faulty Catalyst optimization. In the end I worked around it by forcibly breaking the SQL optimization: write the mapped IDs into Hive, then read them back. For related discussion, see an issue filed against GraphFrames.
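A minimal sketch of that workaround, assuming a Hive-enabled SparkSession (the table name tmp.vertex_dict is hypothetical):

// Writing the mapping to Hive and reading it back materializes it on disk,
// which cuts the query lineage so the IDs can no longer be re-derived.
dict.write.mode("overwrite").saveAsTable("tmp.vertex_dict")
val stableDict = spark.table("tmp.vertex_dict")

All downstream joins should then use stableDict in place of dict.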