Quick Kubernetes Cluster Deployment with kubeadm

Environment:

k8s-master -- 192.168.1.199 -- 2 CPUs (2 cores), 2+ GB RAM, CentOS 7.x
k8s-node1  -- 192.168.1.220 -- 2 CPUs (2 cores), 4 GB RAM, CentOS 7.x
k8s-node2  -- 192.168.1.221 -- 2 CPUs (2 cores), 4 GB RAM, CentOS 7.x

I. System Initialization

Use ansible to run the steps on all hosts at once:

cat /etc/ansible/hosts
[k8s]
192.168.1.199 name=k8s-master
192.168.1.220 name=k8s-node1
192.168.1.221 name=k8s-node2

playbook.yml

---
- hosts: k8s
  gather_facts: no
  tasks:
    - name: Disable firewalld
      systemd: name=firewalld state=stopped enabled=no
    - name: Disable SELinux
      shell: sed -i 's/enforcing/disabled/' /etc/selinux/config
    - name: Disable swap
      shell: sed -ri 's/.*swap.*/#&/' /etc/fstab
    - name: Install ntpdate for time sync
      yum: name=ntpdate state=present
    - name: Run time sync
      shell: ntpdate time.windows.com
    - name: Copy the sysctl tuning script
      copy: src=edit_sysctl.sh dest=/root/
    - name: Run the script
      shell: sh /root/edit_sysctl.sh
    - name: Set hostname
      tags: hostname
      shell: hostnamectl set-hostname {{ name }}
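
With the inventory and playbook above in place, a run could look like this (the playbook file name playbook.yml is assumed from the listing above):

ansible k8s -m ping                              # sanity-check connectivity first
ansible-playbook playbook.yml                    # run all tasks on every host
ansible-playbook playbook.yml --tags hostname    # re-run only the hostname task if needed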

edit_sysctl.sh

#!/bin/bash
# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
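
A quick check that the settings took effect (on a freshly booted host the br_netfilter module may need to be loaded first):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1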

Add the following entries to /etc/hosts on every host (an ansible shortcut is sketched after the entries):

vim /etc/hosts
192.168.1.199 k8s-master
192.168.1.220 k8s-node1
192.168.1.221 k8s-node2
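
One possible way to push the same entries to every node with ansible instead of editing each host by hand (a sketch using the lineinfile module):

ansible k8s -m lineinfile -a "path=/etc/hosts line='192.168.1.199 k8s-master'"
ansible k8s -m lineinfile -a "path=/etc/hosts line='192.168.1.220 k8s-node1'"
ansible k8s -m lineinfile -a "path=/etc/hosts line='192.168.1.221 k8s-node2'"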

Once the initialization work is done, it is best to reboot the hosts so that the swap change takes effect.
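
Alternatively, swap can also be turned off on the spot without rebooting (the fstab edit above keeps it off after any later reboot):

swapoff -a
free -m    # the Swap line should now read 0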

II. Install Docker/kubeadm/kubelet (all nodes)

Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce
$ systemctl enable docker && systemctl start docker

Configure a registry mirror
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

Restart Docker
$ systemctl restart docker
$ docker info
Add the Aliyun Kubernetes YUM repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl

$ yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
$ systemctl enable kubelet
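
A quick sanity check that the expected versions are in place:

$ kubeadm version -o short       # should print v1.19.0
$ kubelet --version
$ kubectl version --client --short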

III. Deploy the Kubernetes Master

1. Run the initialization command on the master (192.168.1.199):

kubeadm init \
  --apiserver-advertise-address=192.168.1.199 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

Alternatively, put the same settings into a config file (vi kubeadm.conf) and then run kubeadm init --config kubeadm.conf --ignore-preflight-errors=all; a possible sketch of such a file follows.
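
A minimal kubeadm.conf sketch equivalent to the flags above (kubeadm 1.19 uses the kubeadm.k8s.io/v1beta2 config API; treat this as a starting point, not a definitive config):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.199
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16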

Parameter notes:

  • --apiserver-advertise-address: the address the cluster is advertised on
  • --image-repository: the default image registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror registry is used instead
  • --kubernetes-version: the K8s version, matching the packages installed above
  • --service-cidr: the cluster-internal virtual (Service) network, the unified access entry for Pods
  • --pod-network-cidr: the Pod network; it must match the CNI component YAML deployed below



2. Copy the kubeconfig file that kubectl uses to authenticate against the cluster to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Check the node status

$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   2m    v1.19.0

IV. Join the Worker Nodes to the Cluster

To add new nodes to the cluster, run the kubeadm join command printed at the end of the kubeadm init output:

kubeadm join 192.168.1.199:6443 --token brjfr2.xtp3ytjalcrw7div \
    --discovery-token-ca-cert-hash sha256:fbaf11091afe3792277492d7253fa7dfb81293bdc6ecaac7df0f6213ec4d43c5
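
If the token has expired (tokens are valid for 24 hours by default) or the join command was lost, a fresh one can be printed on the master:

kubeadm token create --print-join-command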

V. Deploy the CNI Network

The nodes are still NotReady at this point. In such cases, look at the kubelet logs first to track down the problem.
Since no network plugin has been deployed yet, it is bound to be a network issue:
k8s-master kubelet[6737]: W0603 15:52:00.865681 6737 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d

The Calico component

Calico is a pure Layer 3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter propagates the routes of the workloads running on it to the rest of the Calico network via the BGP protocol.
Calico also implements Kubernetes network policy, providing ACL functionality.

Deploy Calico

Download the Calico YAML manifest

wget https://docs.projectcalico.org/manifests/calico.yaml

Edit the Pod IP range (CIDR) in calico.yaml

This changes the default 192.168.0.0/16 range to 10.244.0.0/16 (the value given to kubeadm init at the very beginning).
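
Concretely, the edit is to the CALICO_IPV4POOL_CIDR environment variable of the calico-node DaemonSet; after uncommenting it, the block should look roughly like this:

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"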

Apply the manifest

kubectl apply -f calico.yaml

Wait a little while, then check the node and pod status; everything should be normal.

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   62m     v1.19.0
k8s-node1    Ready    <none>   24m     v1.19.0
k8s-node2    Ready    <none>   6m52s   v1.19.0
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-7kwqm   1/1     Running   0          8m38s
calico-node-bqv2g                         1/1     Running   0          8m38s
calico-node-jgxln                         1/1     Running   0          8m38s
calico-node-t2qh4                         1/1     Running   0          6m55s
coredns-6d56c8448f-9mgzw                  1/1     Running   0          62m
coredns-6d56c8448f-s6z7b                  1/1     Running   0          62m
etcd-k8s-master                           1/1     Running   0          62m
kube-apiserver-k8s-master                 1/1     Running   0          62m
kube-controller-manager-k8s-master        1/1     Running   0          62m
kube-proxy-hqvkk                          1/1     Running   0          6m55s
kube-proxy-nc6l8                          1/1     Running   0          24m
kube-proxy-psfcn                          1/1     Running   0          62m
kube-scheduler-k8s-master                 1/1     Running   0          62m

Note:
If kubeadm init fails, it prints error messages. Even after the cause is fixed, running kubeadm init again will not succeed, because the first run left the environment in a dirty state. The current environment has to be cleaned up so that initialization starts from a clean slate:

1. Reset the current initialization state

kubeadm reset
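
kubeadm reset does not clean up everything; the follow-up steps it hints at in its own output are usually also needed before re-initializing (run with care):

rm -rf $HOME/.kube /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X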

2. If the Calico pods do not become ready, pull the images manually on each node to check whether they can be pulled (and how quickly):

grep image calico.yaml    # list the images, then try pulling each one on every node

docker pull calico/xxx
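
A possible one-liner to pre-pull every image referenced in the manifest on a node:

grep 'image:' calico.yaml | awk '{print $2}' | sort -u | xargs -n1 docker pull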

VI. Test the Kubernetes Cluster

  • Verify that Pods run
  • Verify Pod network connectivity
  • Verify DNS resolution

Create a pod in the Kubernetes cluster and verify that it runs properly (network and DNS checks are sketched after the commands):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
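
Sketches for the remaining two checks (the NodePort value comes from the kubectl get svc output above; busybox:1.28 is used because its nslookup behaves reliably):

# Pod network / NodePort reachability -- replace 3xxxx with the actual NodePort
curl -I http://192.168.1.220:3xxxx
# DNS resolution from inside the cluster
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default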

VII. Deploy the Dashboard UI

[root@k8s-master ~]# cat kubernertes-dashboard.yaml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Apply the manifest

kubectl apply -f  kubernertes-dashboard.yaml 

Check the pod status

[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-v79bx   1/1     Running   0          2m52s
kubernetes-dashboard-5dbf55bd9d-hblj7        1/1     Running   0          2m52s

Access: https://<any-node-IP>:30001

Create a service account and bind it to the built-in cluster-admin role:

# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant cluster-admin to the user
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Get the user's token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.



Problems encountered during this deployment:



After troubleshooting for a long time it looked like a permissions problem, so the environment was wiped and redeployed from scratch, but the problem remained.
Finally, intending to watch the logs in real time, I opened a new terminal and ran kubectl apply once more to reload the component... and that, somehow, fixed it.
