Virtualization Ops -- Container Networking -- K8s + Flannel (Part 15)

I. Server Architecture

Environment: CentOS Linux 7, kernel 3.10.0-957.el7.x86_64

Name    IP               Services
master  192.168.247.130  kubelet, kubeadm, kubectl, kubernetes-cni, docker, flannel
node1   192.168.247.131  kubelet, kubeadm, kubectl, kubernetes-cni, docker
node2   192.168.247.132  kubelet, kubeadm, kubectl, kubernetes-cni, docker

II. Installing and Configuring K8s (All Nodes)

1. Prerequisite: install and start Docker on every node (a minimal sketch follows).
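
The original does not show the Docker installation itself. A minimal sketch for CentOS 7, assuming the Aliyun docker-ce mirror (the mirror URL is an assumption; any reachable docker-ce repo works, and kubeadm only warns that Docker 19.03 is newer than its latest validated version):

# Install and start Docker CE (substitute your own repo URL if needed)
[root@master ~]# yum install -y yum-utils
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum install -y docker-ce
[root@master ~]# systemctl enable docker && systemctl start docker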

# Disable SELinux
[root@master ~]# setenforce 0
[root@master ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
[root@master ~]# swapoff -a
[root@master ~]# yes | cp /etc/fstab /etc/fstab_bak
[root@master ~]# grep -v swap /etc/fstab_bak > /etc/fstab
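
A quick way to confirm swap is really off (the kubelet refuses to start while swap is enabled):

# The Swap row should be all zeros
[root@master ~]# free -m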

# Configure kernel parameters
[root@master ~]# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1

[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
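
Note: the net.bridge.bridge-nf-call-* keys only exist while the br_netfilter module is loaded; if sysctl -p complains that they are unknown, load the module first (an extra step not shown in the original session):

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf    # persist across reboots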

# Configure a domestic (China) mirror repo for the Kubernetes packages
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Master node

# Install kubelet, kubeadm, kubectl, and kubernetes-cni
[root@master ~]# yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 kubernetes-cni-0.7.5
# Enable and start kubelet
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

# List the image versions kubeadm needs
[root@master ~]#  kubeadm config images list
W1105 09:44:45.595840   11838 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1105 09:44:45.596854   11838 version.go:102] falling back to the local client version: v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

# Pull the required versions from a domestic mirror and retag them as k8s.gcr.io (all nodes)
[root@master ~]# vi kubeadm.sh
Script contents:
#!/bin/bash

## Pull the images from the Aliyun mirror, then retag them with the k8s.gcr.io names
set -e

KUBE_VERSION=v1.16.2
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
pause-amd64:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in "${images[@]}" ; do
  docker pull "$ALIYUN_URL/$imageName"
  docker tag  "$ALIYUN_URL/$imageName" "$GCR_URL/$imageName"
  docker rmi "$ALIYUN_URL/$imageName"
done

# Run the script to pull the images
[root@master ~]# sh ./kubeadm.sh
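
To confirm the script worked, check that the retagged images match the list kubeadm printed above:

# All seven k8s.gcr.io images should now be present locally
[root@master ~]# docker images | grep k8s.gcr.io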

# Run on the master node only, and make sure the advertise address is this machine's IP.
# --pod-network-cidr sets the cluster's Pod subnet. The Flannel manifest installed below
# defaults to 10.244.0.0/16, so use that here (192.168.0.0/16 would also overlap the
# host network 192.168.247.0/24).
[root@master ~]# sudo kubeadm init --apiserver-advertise-address 192.168.247.130 --kubernetes-version=v1.16.2 --pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.247.130:6443 --token nla9a9.wz320s15z4zopwgv \
    --discovery-token-ca-cert-hash sha256:3168a2e3963d9f35e590d5459f59c85393b6b8a42abeb2377849886ab82d8ef0 

# Initialize the kubectl config for the current (root) user
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
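
A quick sanity check that kubectl can now reach the new control plane:

# Should print the API server URL https://192.168.247.130:6443
[root@master ~]# kubectl cluster-info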

III. Installing Flannel

# Install Flannel (master node only)
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

# Watch the pods until every STATUS is Running, then press Ctrl+C to exit
[root@master ~]# watch kubectl get pods --all-namespaces
Every 2.0s: kubectl get pods --all-namespaces                                                                                                            Thu Nov  7 12:26:46 2019

NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-flr7l         1/1     Running   0          11m
kube-system   coredns-5644d7b6d9-l79hw         1/1     Running   0          11m
kube-system   etcd-master                      1/1     Running   0          10m
kube-system   kube-apiserver-master            1/1     Running   0          10m
kube-system   kube-controller-manager-master   1/1     Running   0          10m
kube-system   kube-flannel-ds-amd64-tppb8      1/1     Running   0          73s
kube-system   kube-proxy-jgbv8                 1/1     Running   0          11m
kube-system   kube-scheduler-master            1/1     Running   0          10m
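
Once the flannel pod is Running, flannel records the subnet it leased for this node in /run/flannel/subnet.env; it is worth a look when debugging:

# Shows FLANNEL_NETWORK (the cluster CIDR) and FLANNEL_SUBNET (this node's /24)
[root@master ~]# cat /run/flannel/subnet.env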

# Inspect the network interfaces
[root@master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:53:5a:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.247.130/24 brd 192.168.247.255 scope global noprefixroute dynamic ens33
       valid_lft 5268044sec preferred_lft 5268044sec
    inet6 fe80::7888:4525:c7b7:73e6/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:aa:20:6c:3b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether da:d2:a6:6a:d8:c3 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::d8d2:a6ff:fe6a:d8c3/64 scope link 
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:ad:78:04:25:0f brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::8ad:78ff:fe04:250f/64 scope link 
       valid_lft forever preferred_lft forever
6: veth05805c5c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 9a:c4:bf:89:55:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::98c4:bfff:fe89:559a/64 scope link 
       valid_lft forever preferred_lft forever
7: vetha2ba003a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether 2a:7a:7f:04:c3:d1 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::287a:7fff:fe04:c3d1/64 scope link 
       valid_lft forever preferred_lft forever
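
flannel.1 above is a VXLAN device, which is why its MTU is 1450 (1500 minus the VXLAN encapsulation overhead). Its tunnel parameters can be inspected with:

# -d prints driver details: with flannel's defaults, vxlan id 1 on UDP port 8472
[root@master ~]# ip -d link show flannel.1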
       
# List the nodes
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   13m   v1.16.2

The following components run on the k8s master node:
Kubernetes API Server: exposes the HTTP REST API and is the entry point for controlling the cluster
Kubernetes Controller Manager: the control center for resource objects
Kubernetes Scheduler: schedules pods onto nodes
Before initialization completes, the kubelet restarts every few seconds, because it sits in a crash loop waiting for kubeadm to tell it what to do. This crash loop is normal; continue with the next steps and the kubelet will start running properly.
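
If you want to watch what the kubelet is doing during that crash loop, follow its logs:

[root@master ~]# systemctl status kubelet
[root@master ~]# journalctl -u kubelet -f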

IV. Building the Cluster

# Join node1 to the cluster
[root@node1 ~]# kubeadm join 192.168.247.130:6443 --token hw3ejo.rsrdyi73hl7yixvs \
>     --discovery-token-ca-cert-hash sha256:3b0c89163746d0a3f6b2c6dc190381def07963bc3637fa5e4e5ea9171b04aaa0 
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   23m   v1.16.2
node1    NotReady   <none>   99s   v1.16.2
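
Note that the token used here (hw3ejo...) differs from the one printed by kubeadm init (nla9a9...); bootstrap tokens expire after 24 hours by default. If the original token is no longer valid, generate a fresh join command on the master:

# Prints a complete "kubeadm join ..." line with a new token and the CA cert hash
[root@master ~]# kubeadm token create --print-join-command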

# Copy the admin kubeconfig to the worker nodes (lets kubectl run there; see FAQ item 4)
[root@master ~]#  scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/admin.conf
[root@master ~]#  scp /etc/kubernetes/admin.conf node2:/etc/kubernetes/admin.conf

# Join node2 to the cluster
[root@node2 ~]# kubeadm join 192.168.247.130:6443 --token hw3ejo.rsrdyi73hl7yixvs \
>     --discovery-token-ca-cert-hash sha256:3b0c89163746d0a3f6b2c6dc190381def07963bc3637fa5e4e5ea9171b04aaa0 
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# List the nodes
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   62m    v1.16.2
node1    Ready    <none>   40m    v1.16.2
node2    Ready    <none>   3m6s   v1.16.2

# Check docker info for the cgroup driver
[root@master ~]# docker info
 Cgroup Driver: systemd
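
The join output above warned that the workers use the cgroupfs driver. To switch Docker to the recommended systemd driver (the setting the kubeadm docs suggest), write it into /etc/docker/daemon.json and restart Docker:

[root@node1 ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@node1 ~]# systemctl daemon-reload && systemctl restart docker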

# Check component status
[root@master ~]#  kubectl get cs 
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-0               <unknown>
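
The <unknown> columns are a known display regression in kubectl v1.16 (the componentstatus table printer was broken), not a sign of cluster trouble. The underlying health data is still retrievable:

# Bypass the broken table printer and read the raw status conditions
[root@master ~]# kubectl get cs -o yaml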

# List the currently available API versions
[root@master ~]#  kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

# Node network interfaces
[root@node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:00:21:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.247.131/24 brd 192.168.247.255 scope global dynamic ens33
       valid_lft 5264385sec preferred_lft 5264385sec
    inet6 fe80::20c:29ff:fe00:21f4/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:cd:49:46:98 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether ba:33:87:c5:26:91 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
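
With flannel up, each node also holds a route to every other node's Pod subnet via flannel.1; checking the route table is a quick way to verify cross-node Pod connectivity:

# From node1, expect one route per remote node, e.g. 10.244.0.0/24 via the flannel.1 device
[root@node1 ~]# ip route | grep flannel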


V. Common Problems

  1. Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
    The error indicates the kubeconfig's certificates do not match the cluster CA. Re-copy the admin kubeconfig:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

  2. Master node: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    The network plugin has not been installed yet:
[root@master ~]# kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

  3. Worker node: Unable to update cni config: no networks found in /etc/cni/net.d
    The required images have not been pulled on this node, so the flannel pod cannot start and never writes its CNI config.
# Run the image-pull script
[root@node2 ~]# sh ./kubeadm.sh
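
Once the flannel pod on the node is Running, it writes its CNI config into /etc/cni/net.d and the error goes away; you can confirm with:

# A 10-flannel.conflist file should appear once flannel has started on this node
[root@node2 ~]# ls /etc/cni/net.d/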

  4. (Worker node) The connection to the server localhost:8080 was refused - did you specify the right host or port?
    This happens because kubectl needs the kubernetes-admin credentials to reach the cluster. Copy /etc/kubernetes/admin.conf from the master node to the same path on the worker node, then point KUBECONFIG at it:
[root@master ~]# scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/admin.conf
root@node1's password: 
admin.conf                                                                                                                                     100% 5455     1.9MB/s   00:00   
# Set the environment variable
[root@node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# Apply it immediately
[root@node1 ~]# source ~/.bash_profile
