Hitting a Wall with Spark + MongoDB

While experimenting with Spark + MongoDB, I kept running into `Cursor xxxxx not found` errors. Adding keep_alive_ms and a pipeline did not solve the problem.

The total data volume is currently around 10,000 documents, which are loaded from MongoDB and handed off to Spark's NaiveBayes training.

    # The connector's "pipeline" option takes a JSON string of aggregation stages.
    pipeline = '[{ "$limit": 5000 }, { "$skip": 2000 }]'
    # Note: per the v2.0 configuration docs, keep_alive_ms appears to be settable
    # only as a JVM system property, so passing it as a read option may be a no-op.
    has_train = spark_session.read.format("com.mongodb.spark.sql.DefaultSource") \
        .option("spark.mongodb.input.uri", "mongodb://mongo_and_spark_server:27017/resume_db.has_train") \
        .option("spark.mongodb.keep_alive_ms", "3600000") \
        .option("pipeline", pipeline) \
        .load()
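
For reference, the training step that consumes this DataFrame looks roughly like the sketch below. This is a minimal outline, not the actual job: the input column names and the `label` column are assumptions about the has_train schema.

    from pyspark.ml.classification import NaiveBayes
    from pyspark.ml.feature import VectorAssembler

    # Hypothetical feature columns -- the real has_train schema may differ.
    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    train_df = assembler.transform(has_train).select("label", "features")

    # Multinomial model; NaiveBayes requires non-negative feature values.
    nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
    model = nb.fit(train_df)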

Before 1.6, we had to deploy third-party jars by hand and point Spark at their paths; experimenting with 2.1, these packages are downloaded automatically when declared with --packages:

# --files takes a comma-separated list; repeating the flag keeps only the last value
./spark-2.1.1-bin-hadoop2.7/bin/spark-submit \
    --packages org.mongodb.spark:mongo-spark-connector_2.11:2.0.0 \
    --py-files ./utility.py \
    --files ./dicts/statistic_college.txt,./dicts/degrees.txt,./dicts/diming.txt,./dicts/subjects.txt,./dicts/training_org.txt \
    naive_bayes.py

Output:

# ./submit.sh 
Ivy Default Cache set to: /root/.ivy2/cache
The jars for the packages stored in: /root/.ivy2/jars
:: loading settings :: url = jar:file:/home/pluto/spark/spark-2.1.1-bin-hadoop2.7/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.mongodb.spark#mongo-spark-connector_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
    confs: [default]
    found org.mongodb.spark#mongo-spark-connector_2.11;2.0.0 in central
    found org.mongodb#mongo-java-driver;3.2.2 in central
:: resolution report :: resolve 221ms :: artifacts dl 4ms
    :: modules in use:
    org.mongodb#mongo-java-driver;3.2.2 from central in [default]
    org.mongodb.spark#mongo-spark-connector_2.11;2.0.0 from central in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |   2   |   0   |   0   |   0   ||   2   |   0   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
    confs: [default]
    0 artifacts copied, 2 already retrieved (0kB/7ms)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties

As the data volume grows, I keep hitting the following error; I still haven't tracked down the root cause :(

17/05/29 01:23:16 INFO MongoClientCache: Closing MongoClient: [mongo_and_spark_server:27017]
17/05/29 01:23:16 INFO connection: Closed connection [connectionId{localValue:2, serverValue:42}] to mongo_and_spark_server:27017 because the pool has been closed.
17/05/29 01:23:16 INFO MongoClientCache: Closing MongoClient: [mongo_and_spark_server:27017]
17/05/29 01:23:16 INFO connection: Closed connection [connectionId{localValue:4, serverValue:46}] to mongo_and_spark_server:27017 because the pool has been closed.
17/05/29 01:27:56 WARN TaskSetManager: Lost task 3.0 in stage 2.0 (TID 5, mongo_and_spark_server, executor 2): com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 34611963569 not found on server mongo_and_spark_server:27017' on server mongo_and_spark_server:27017
    at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27)
    at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:215)
    at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:103)
    at com.mongodb.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:46)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:120)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

17/05/29 01:27:56 INFO TaskSetManager: Starting task 3.1 in stage 2.0 (TID 6, mongo_and_spark_server, executor 2, partition 3, ANY, 6787 bytes)
17/05/29 01:29:01 WARN TaskSetManager: Lost task 2.0 in stage 2.0 (TID 4, mongo_and_spark_server, executor 0): com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 34615739977 not found on server mongo_and_spark_server:27017' on server mongo_and_spark_server:27017
    at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27)
    at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:215)
    at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:103)
    at com.mongodb.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:46)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:120)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

17/05/29 01:29:01 INFO TaskSetManager: Starting task 2.1 in stage 2.0 (TID 7, mongo_and_spark_server, executor 2, partition 2, ANY, 6799 bytes)
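
Until the root cause is found, two mitigations seem worth trying: materialize the DataFrame immediately after load, so Spark finishes reading before the server reaps the idle cursor, and raise MongoDB's server-side cursor timeout. A hedged sketch (the 30-minute timeout is an arbitrary value, and the pymongo admin call assumes sufficient privileges):

    from pymongo import MongoClient

    # Force evaluation right after load, while the cursors are still alive.
    has_train = has_train.cache()
    has_train.count()

    # Raise the idle-cursor timeout (MongoDB's default is 10 minutes).
    client = MongoClient("mongodb://mongo_and_spark_server:27017")
    client.admin.command("setParameter", 1, cursorTimeoutMillis=1800000)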

References:

https://docs.mongodb.com/spark-connector/v2.0/configuration/#spark-input-conf
https://docs.mongodb.com/manual/core/aggregation-pipeline-optimization/
http://www.mongoing.com/tj/mongodb_shanghai_spark
