Configuring JanusGraph Bulk Loading with Spark in yarn-client Mode

JanusGraph is a distributed graph database descended from Titan. Its bulk load runs on Spark's local mode by default and does not support yarn-cluster mode. yarn-client mode is supported, but the official documentation does not explain how to configure it, and the setup has many pitfalls. This article shows how to configure bulk loading in yarn-client mode.
We first cover the basic setup, then the bulk-load configuration, and finally bulk-load performance tuning.

Software versions used in this article:
janusgraph: 0.1.1
hbase: 1.1.2
hadoop: 2.7.1

Basic configuration

  1. Download JanusGraph from the official site and extract it locally to /data/janusgraph/.
  2. Configure the storage and index backends. Since we use ES + HBase, edit /data/janusgraph/conf/janusgraph-hbase-es.properties directly:
# important
gremlin.graph=org.janusgraph.core.JanusGraphFactory
# HBase configuration
storage.batch-loading=true
storage.backend=hbase
storage.hostname=c1-nn1.bdp.idc,c1-nn2.bdp.idc,c1-nn3.bdp.idc
storage.hbase.ext.hbase.zookeeper.property.clientPort=2181
storage.hbase.table = yisou:test_graph
# Elasticsearch configuration
index.search.backend=elasticsearch
# ES is installed on this machine only; this is the local IP
index.search.hostname=10.120.64.69
index.search.elasticsearch.client-only=true
index.search.index-name=yisou_test_graph
# default cache configuration
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
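
With the backends configured, the properties file can be sanity-checked from the Gremlin console before attempting any bulk load. A minimal check (the path assumes the install directory above; a fresh table should report zero vertices):

```groovy
// Open the graph with the properties above; this connects to HBase and ES.
graph = JanusGraphFactory.open('/data/janusgraph/conf/janusgraph-hbase-es.properties')
g = graph.traversal()
g.V().count()     // expect 0 on a freshly created table
graph.close()
```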

3. Adjust the jars under /data/janusgraph/lib. Running the bulk load in yarn-client mode hits conflicts with guava and other jars, so I adjusted the jars under lib according to the conflicts. Three jars were changed:

  1. hbase-client-1.2.4.jar ==> yisou-hbase-1.0-SNAPSHOT.jar
    The guava version used by the bundled hbase-client-1.2.4.jar conflicts with the guava on our YARN cluster, so we swapped in our company's internal hbase-client build with guava removed, yisou-hbase-1.0-SNAPSHOT.jar.
    Without this replacement you get: "Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator"
  2. spark-assembly-1.6.1-hadoop2.6.0.jar ==> spark-assembly-1.6.2-hadoop2.6.0.jar
    The bundled spark-assembly-1.6.1-hadoop2.6.0.jar also causes a jar conflict; I replaced it with spark-assembly-1.6.2-hadoop2.6.0.jar.
    Without this replacement you get: "java.lang.NoSuchMethodError: groovy.lang.MetaClassImpl.hasCustomStaticInvokeMethod()Z"
  3. Delete hbase-protocol-1.2.4.jar.
    Without deleting it you get: "com.google.protobuf.ServiceException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.protobuf.generated.RPCProtos$ConnectionHeader$Builder.setVersionInfo(Lorg/apache/hadoop/hbase/protobuf/generated/RPCProtos$VersionInfo;)Lorg/apache/hadoop/hbase/protobuf/generated/RPCProtos$ConnectionHeader$Builder;"

4. Define the vertex and edge properties of the graph (the schema). See the official documentation for details; this article does not cover it.
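
As a rough illustration of step 4 (the property keys, labels, and index names below are hypothetical, not the ones used in this setup), a schema script run once against the graph might look like:

```groovy
graph = JanusGraphFactory.open('/data/janusgraph/conf/janusgraph-hbase-es.properties')
mgmt = graph.openManagement()
// hypothetical property key, vertex label, and edge label for illustration
name = mgmt.makePropertyKey('name').dataType(String.class).make()
mgmt.makeVertexLabel('person').make()
mgmt.makeEdgeLabel('knows').make()
// mixed index backed by the 'search' (Elasticsearch) backend configured above
mgmt.buildIndex('byName', Vertex.class).addKey(name).buildMixedIndex('search')
mgmt.commit()
```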

Bulk load configuration

Because the load program runs on YARN, the Hadoop environment must be made available to it. Two files need changes: the JanusGraph launch script /data/janusgraph/bin/gremlin.sh, and the Hadoop/Spark-related configuration /data/janusgraph/conf/hadoop-graph/hadoop-script.properties.

1. Write a wrapper around gremlin.sh, here named yarn-gremlin.sh, that adds the Hadoop settings to JAVA_OPTIONS and CLASSPATH. This ensures the Hadoop configuration is visible to the process, so the Spark job can be launched on YARN properly.

#!/bin/bash
# HADOOP_HOME must be set before it is referenced below
export HADOOP_HOME=/usr/local/hadoop-2.7.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_OPTIONS="$JAVA_OPTIONS -Djava.library.path=$HADOOP_HOME/lib/native"
export CLASSPATH=$HADOOP_CONF_DIR
# JANUSGRAPH_HOME is the JanusGraph install directory, /data/janusgraph/
cd $JANUSGRAPH_HOME
./bin/gremlin.sh

2. Edit /data/janusgraph/conf/hadoop-graph/hadoop-script.properties.
The main changes: set the inputFormat to match the format of the file being imported, point at the HDFS path of the input data and of the parse script, and set the Spark master to yarn-client.

#
# Hadoop Graph Configuration
#
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.script.ScriptInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONOutputFormat
gremlin.hadoop.jarsInDistributedCache=true

# HDFS path of the input data; can also be set after loading this file
gremlin.hadoop.inputLocation=/user/yisou/taotian1/janus/data/fewData.test.dup
# HDFS path of the parse script for the input file; can also be set after loading this file
gremlin.hadoop.scriptInputFormat.script=/user/yisou/taotian1/janus/data/conf/vertex_parse.groovy
#gremlin.hadoop.outputLocation=output

#
# SparkGraphComputer with Yarn Configuration
#
spark.master=yarn-client
spark.executor.memory=6g
spark.executor.instances=10
spark.executor.cores=2
spark.serializer=org.apache.spark.serializer.KryoSerializer
# spark.kryo.registrationRequired=true
# spark.storage.memoryFraction=0.2
# spark.eventLog.enabled=true
# spark.eventLog.dir=/tmp/spark-event-logs
# spark.ui.killEnabled=true

#cache config
gremlin.spark.persistContext=true
gremlin.spark.graphStorageLevel=MEMORY_AND_DISK
#gremlin.spark.persistStorageLevel=DISK_ONLY


#####################################
# GiraphGraphComputer Configuration #
#####################################
giraph.minWorkers=2
giraph.maxWorkers=3
giraph.useOutOfCoreGraph=true
giraph.useOutOfCoreMessages=true
mapred.map.child.java.opts=-Xmx1024m
mapred.reduce.child.java.opts=-Xmx1024m
giraph.numInputThreads=4
giraph.numComputeThreads=4
# giraph.maxPartitionsInMemory=1
# giraph.userPartitionCount=2
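
The gremlin.hadoop.scriptInputFormat.script property points at a Groovy parse script on HDFS. TinkerPop's ScriptInputFormat calls a parse(line, factory) function for each input line; a minimal sketch for a hypothetical tab-separated "id<TAB>name" vertex file (the field layout and label are illustrative, not the author's actual vertex_parse.groovy) might look like:

```groovy
// vertex_parse.groovy: invoked by ScriptInputFormat once per input line
def parse(line, factory) {
    def parts = line.split('\t')
    def v = factory.vertex(parts[0].toLong(), 'person')  // id, vertex label
    v.property('name', parts[1])
    return v
}
```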

Running the bulk load

Launch command:

sh /data/janusgraph/lib/yarn-gremlin.sh

Bulk load commands (entered in the Gremlin console):

local_root="/data/janusgraph"
hdfs_root="/user/yisou/taotian1/janus"
social_graph="${local_root}/conf/janusgraph-hbase-es.properties"
graph = GraphFactory.open("${local_root}/conf/hadoop-script.properties")
graph.configuration().setProperty("gremlin.hadoop.inputLocation","/user/yisou/taotian1/janus/data/fewData.test.dup")
graph.configuration().setProperty("gremlin.hadoop.scriptInputFormat.script", "${hdfs_root}/conf/vertex_parse.groovy")
blvp = BulkLoaderVertexProgram.build().writeGraph(social_graph).create(graph)
graph.compute(SparkGraphComputer).program(blvp).submit().get()
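
Once the Spark job completes, the load can be spot-checked from the same console by opening the target graph and counting elements (a minimal sketch reusing the social_graph path defined above):

```groovy
// Open the JanusGraph instance the loader wrote into and count elements
g = JanusGraphFactory.open(social_graph).traversal()
g.V().count()   // number of vertices loaded
g.E().count()   // number of edges loaded
```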

Sample output:

sh /data/janusgraph/lib/yarn-gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data2/janusgraph-0.1.1-hadoop2/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data2/janusgraph-0.1.1-hadoop2/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data2/janusgraph-0.1.1-hadoop2/lib/spark-assembly-1.6.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data2/janusgraph-0.1.1-hadoop2/lib/yisou-hbase-1.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21:22:00,392  INFO HadoopGraph:87 - HADOOP_GREMLIN_LIBS is set to: /data2/janusgraph-0.1.1-hadoop2/lib
plugin activated: tinkerpop.hadoop
plugin activated: tinkerpop.spark
plugin activated: tinkerpop.tinkergraph
gremlin>
gremlin> local_root="/data2/janusgraph-0.1.1-hadoop2/social"
==>/data2/janusgraph-0.1.1-hadoop2/social
gremlin> hdfs_root="/user/yisou/taotian1/janus"
==>/user/yisou/taotian1/janus
gremlin> social_graph="${local_root}/conf/janusgraph-hbase-es-social.properties"
==>/data2/janusgraph-0.1.1-hadoop2/social/conf/janusgraph-hbase-es-social.properties
gremlin> graph = GraphFactory.open("${local_root}/conf/hadoop-yarn.properties")
==>hadoopgraph[scriptinputformat->graphsonoutputformat]
gremlin> graph.configuration().setProperty("gremlin.hadoop.inputLocation","/user/yisou/taotian1/janus/tmp1person/")
==>null
gremlin> graph.configuration().setProperty("gremlin.hadoop.scriptInputFormat.script", "${hdfs_root}/person_parse.groovy")
==>null
gremlin> blvp = BulkLoaderVertexProgram.build().writeGraph(social_graph).create(graph)
==>BulkLoaderVertexProgram[bulkLoader=IncrementalBulkLoader, vertexIdProperty=bulkLoader.vertex.id, userSuppliedIds=false, keepOriginalIds=true, batchSize=0]
gremlin> graph.compute(SparkGraphComputer).program(blvp).submit().get()
21:25:04,666  INFO deprecation:1173 - mapred.reduce.child.java.opts is deprecated. Instead, use mapreduce.reduce.java.opts
21:25:04,667  INFO deprecation:1173 - mapred.map.child.java.opts is deprecated. Instead, use mapreduce.map.java.opts
21:25:04,680  INFO KryoShimServiceLoader:117 - Set KryoShimService provider to org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService@4cb2918c (class org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService) because its priority value (0) is the highest available
21:25:04,680  INFO KryoShimServiceLoader:123 - Configuring KryoShimService provider org.apache.tinkerpop.gremlin.hadoop.structure.io.HadoopPoolShimService@4cb2918c with user-provided configuration
  21:25:10,479  WARN SparkConf:70 - The configuration key 'spark.yarn.user.classpath.first' has been deprecated as of Spark 1.3 and may be removed in the future. Please use spark.{driver,executor}.userClassPathFirst instead.
21:25:10,505  INFO SparkContext:58 - Running Spark version 1.6.2
21:25:10,524  WARN SparkConf:70 - The configuration key 'spark.yarn.user.classpath.first' has been deprecated as of Spark 1.3 and may be removed in the future. Please use spark.{driver,executor}.userClassPathFirst instead.
21:25:10,564  INFO SecurityManager:58 - Changing view acls to: yisou
21:25:10,565  INFO SecurityManager:58 - Changing modify acls to: yisou
21:25:10,566  INFO SecurityManager:58 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yisou); users with modify permissions: Set(yisou)
21:25:10,833  WARN SparkConf:70 - The configuration key 'spark.yarn.user.classpath.first' has been deprecated as of Spark 1.3 and may be removed in the future. Please use spark.{driver,executor}.userClassPathFirst instead.
21:25:10,835  WARN SparkConf:70 - The configuration key 'spark.yarn.user.classpath.first' has been deprecated as of Spark 1.3 and may be removed in the future. Please use spark.{driver,executor}.userClassPathFirst instead.
21:25:11,035  INFO Utils:58 - Successfully started service 'sparkDriver' on port 36502.
21:25:11,576  INFO Slf4jLogger:80 - Slf4jLogger started
  21:25:11,646  INFO Remoting:74 - Starting remoting
............
21:25:20,736  INFO Client:58 - Submitting application 2727164 to ResourceManager
21:25:20,771  INFO YarnClientImpl:273 - Submitted application application_1466564207556_2727164
21:25:21,780  INFO Client:58 - Application report for application_1466564207556_2727164 (state: ACCEPTED)
21:25:21,785  INFO Client:58 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.yisou
start time: 1500297920750
final status: UNDEFINED
tracking URL: http://c1-nn3.bdp.idc:8981/proxy/application_1466564207556_2727164/
21:25:22,787  INFO Client:58 - Application report for application_1466564207556_2727164 (state: ACCEPTED)
21:25:23,789  INFO Client:58 - Application report for application_1466564207556_2727164 (state: ACCEPTED)
21:25:24,791  INFO Client:58 - Application report for application_1466564207556_2727164 (state: ACCEPTED)
21:25:25,793  INFO Client:58 - Application report for application_1466564207556_2727164 (state: ACCEPTED)
21:25:39,585  INFO JettyUtils:58 - Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
21:25:39,823  INFO Client:58 - Application report for application_1466564207556_2727164 (state: RUNNING)
21:25:39,824  INFO Client:58 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.130.1.50
ApplicationMaster RPC port: 0
queue: root.yisou
start time: 1500297920750
final status: UNDEFINED
tracking URL: http://c1-nn3.bdp.idc:8981/proxy/application_1466564207556_2727164/
..........
21:25:42,864  INFO SparkContext:58 - Added JAR /data2/janusgraph-0.1.1-hadoop2/lib/commons-codec-1.7.jar at http://10.130.64.69:38209/jars/commons-codec-1.7.jar with timestamp 1500297942864
21:25:42,866  INFO SparkContext:58 - Added JAR /data2/janusgraph-0.1.1-hadoop2/lib/commons-lang-2.5.jar at http://10.130.64.69:38209/jars/commons-lang-2.5.jar with timestamp 1500297942866
21:25:42,869  INFO SparkContext:58 - Added JAR /data2/janusgraph-0.1.1-hadoop2/lib/commons-collections-3.2.2.jar at http://10.130.64.69:38209/jars/commons-collections-3.2.2.jar with timestamp 1500297942869
21:25:42,872  INFO SparkContext:58 - Added JAR /data2/janusgraph-0.1.1-hadoop2/lib/commons-io-2.3.jar at http://10.130.64.69:38209/jars/commons-io-2.3.jar with timestamp 1500297942872
21:25:42,874  INFO SparkContext:58 - Added JAR /data2/janusgraph-0.1.1-hadoop2/lib/jetty-util-6.1.26.jar at http://10.130.64.69:38209/jars/jetty-util-6.1.26.jar with timestamp 1500297942874
21:25:42,879  INFO SparkContext:58 - Added JAR /data2/janusgraph-0.1.1-hadoop2/lib/htrace-core-3.1.0-incubating.jar at http://10.130.64.69:38209/jars/htrace-core-3.1.0-incubating.jar with timestamp 1
............
21:26:14,751  INFO MapOutputTrackerMaster:58 - Size of output statuses for shuffle 2 is 146 bytes
21:26:14,767  INFO TaskSetManager:58 - Finished task 0.0 in stage 6.0 (TID 4) in 40 ms on c1-dn31.bdp.idc (1/1)
21:26:14,767  INFO YarnScheduler:58 - Removed TaskSet 6.0, whose tasks have all completed, from pool
21:26:14,767  INFO DAGScheduler:58 - ResultStage 6 (foreachPartition at SparkExecutor.java:173) finished in 0.042 s
21:26:14,768  INFO DAGScheduler:58 - Job 1 finished: foreachPartition at SparkExecutor.java:173, took 1.776125 s
21:26:14,775  INFO ShuffledRDD:58 - Removing RDD 2 from persistence list
21:26:14,785  INFO BlockManager:58 - Removing RDD 2
==>result[hadoopgraph[scriptinputformat->graphsonoutputformat],memory[size:0]]
gremlin> 21:26:22,515  INFO YarnClientSchedulerBackend:58 - Registered executor NettyRpcEndpointRef(null) (c1-dn9.bdp.idc:60762) with ID 8

Bulk load performance tuning

Without tuning, JanusGraph bulk loading is very slow: importing 40 million records takes about 3.5 hours. With the tuning below it drops to about 1 hour.
1. Increase ids.block-size and storage.buffer-size (set in janusgraph-hbase-es.properties).
ids.block-size=100000000
storage.buffer-size=102400

2. Specify the initial number of HBase regions (set in janusgraph-hbase-es.properties).
storage.hbase.region-count = 50

3. Import vertices and edges together from a single file, rather than splitting them into separate vertex and edge files loaded in separate passes. See /data/janusgraph/data/grateful-dead.txt for an example of the format.

Summary

This article showed how to configure JanusGraph to bulk load vertices and edges with Spark in yarn-client mode.

It covered two parts: the basic configuration and the bulk-load configuration. In the basic configuration, the main pitfall is conflicts between the jars bundled with JanusGraph and those in your YARN environment; replace or delete the offending jars.

In the bulk-load configuration, the key step is adding the Hadoop settings to the gremlin.sh wrapper, exposing the Hadoop environment through JAVA_OPTIONS and CLASSPATH.

(End)

