Problems encountered when deploying Kubernetes with kubeadm, and how to solve them

1) A misconfigured kernel parameter causes the kubeadm k8s install to fail

Error: ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

[root@node01 data]# kubeadm join <master-IP>:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:e7a08a24b68c738cccfcc3ae56b7a433704f3753991c782208b46f20c57cf0c9
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
echo "1" >/proc/sys/net/bridge/bridge-nf-call-ip6tables

2) Error when adding a node to a k8s 1.20.1 cluster installed with kubeadm

Reference: https://bbs.csdn.net/topics/397412529?page=1

accepts at most 1 arg(s), received 2
To see the stack trace of this error execute with --v=5 or higher
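
This error usually means the pasted join command was mangled (a missing line-continuation backslash or a stray character makes kubeadm parse the discovery hash as a second positional argument). Rather than repairing it by hand, the join command can be regenerated on the master with the standard kubeadm subcommand:

# On the master: prints a fresh, correctly formed join command (with a new token)
kubeadm token create --print-join-command

Run the printed command on the node as a single line.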

With a clean command, the join succeeds after a short wait:

W0801 19:27:06.500319   12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3) Problems and solutions when deploying Kubernetes (initialization, etc.)

Reference: https://blog.csdn.net/clareeee/article/details/121100431

4) Running kubectl get node on a worker node reports an error, as follows

[root@k8s-node02-17:~]# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The fix is simply to follow the steps that the install log prints after a successful installation:

# The same error appears on any host where kubectl has no kubeconfig:
[root@master data]# kubectl get nodes
W1116 20:29:22.881159 20594 loader.go:223] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Run on the master node:
scp /etc/kubernetes/admin.conf root@node01:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf root@node02:/etc/kubernetes/admin.conf

# Run on each worker node:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
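
Alternatively, as kubeadm's own post-init output suggests, the kubeconfig can be copied into the user's home directory instead of exporting KUBECONFIG:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config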

# Finally, run the test again; the error is gone
[root@k8s-master01-15 ~]# kubectl get node
NAME               STATUS     ROLES    AGE   VERSION
k8s-master01-15    NotReady   master   20m   v1.18.6
k8s-node01-16      NotReady   <none>   19m   v1.18.6
k8s-node02-17      NotReady   <none>   19m   v1.18.6

5) kubectl get cs problems

Error: Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused. This happens because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set the default port to 0. The fix is to comment out the corresponding --port line in each file, as follows:

[root@k8s-master01-15 manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                            
etcd-0               Healthy     {"health":"true","reason":""} 


[root@k8s-master01-15 /etc/kubernetes/manifests]# vim kube-scheduler.yaml +19
.....
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    #- --port=0      # comment out this line


[root@k8s-master01-15 /etc/kubernetes/manifests]# vim kube-controller-manager.yaml +26
.....
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    #- --port=0      # comment out this line

# Restart the kubelet service
systemctl restart kubelet.service

# Check the cs status again
[root@k8s-master01-15 manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok  
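
As a further check, the health endpoints can be probed directly from the master. Ports 10251 (scheduler) and 10252 (controller-manager) are the legacy insecure ports that removing --port=0 re-enables in these pre-1.23 manifests; both should now answer "ok":

curl -s http://127.0.0.1:10251/healthz && echo
curl -s http://127.0.0.1:10252/healthz && echo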

6) Problems with the CoreDNS component (flannel network plugin deployment)

Error: network: open /run/flannel/subnet.env: no such file or directory

[root@k8s-master01-15 core]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS              RESTARTS         AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-78fcd69978-6t76m         0/1     ContainerCreating   0                126m    <none>           node01   <none>           <none>
coredns-78fcd69978-nthg8         0/1     ContainerCreating   0                126m    <none>           node01   <none>           <none>
.....

[root@k8s-master01-15 core]# kubectl describe pods coredns-78fcd69978-6t76m -n kube-system
.......
Events:
  Type     Reason                  Age                      From     Message
  ----     ------                  ----                     ----     -------
  Normal   SandboxChanged          19m (x4652 over 104m)    kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  4m26s (x5461 over 104m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8e2992e19a969235ff30271e317565b48ffe57d8261dc86f92249005a5eaaec5" network for pod "coredns-78fcd69978-6t76m": networkPlugin cni failed to set up pod "coredns-78fcd69978-6t76m_kube-system" network: open /run/flannel/subnet.env: no such file or directory

Solution:

Check whether the file /run/flannel/subnet.env exists. On the master it is present and has content; on any node where it is missing, create it with the same values:

[root@k8s-master01-15:~]# mkdir -p /run/flannel/

cat >> /run/flannel/subnet.env <<EOF
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
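
Keep in mind that /run/flannel/subnet.env is normally written by the flanneld pod on each node, so creating it by hand is a stopgap. It is worth confirming that a kube-flannel pod is actually running on the affected node (a quick check against this cluster's kube-system deployment):

kubectl get pods -n kube-system -o wide | grep flannel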

7) kube-proxy is failing, status: CrashLoopBackOff

kube-proxy implements the Service abstraction: it is responsible for routing traffic from pods to Services inside the cluster, and from NodePorts to Services from outside.

[root@master core]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS              RESTARTS         AGE    IP               NODE     NOMINATED NODE   READINESS GATES
coredns-78fcd69978-dts5t         0/1     ContainerCreating   0                146m   <none>           node02   <none>           <none>
coredns-78fcd69978-v8g7z         0/1     ContainerCreating   0                146m   <none>           node02   <none>           <none>
etcd-master                      1/1     Running             0                147m   172.23.199.15   master   <none>           <none>
kube-apiserver-master            1/1     Running             0                147m   172.23.199.15   master   <none>           <none>
kube-controller-manager-master   1/1     Running             0                147m   172.23.199.15   master   <none>           <none>
kube-proxy-9nxhp                 0/1     CrashLoopBackOff    33 (37s ago)     144m   172.23.199.16   node01   <none>           <none>
kube-proxy-gqrvl                 0/1     CrashLoopBackOff    33 (86s ago)     145m   172.23.199.17   node02   <none>           <none>
kube-proxy-p825v                 0/1     CrashLoopBackOff    33 (2m54s ago)   146m   172.23.199.15   master   <none>           <none>
kube-scheduler-master            1/1     Running             0                147m   172.23.199.15   master   <none>           <none>

# Use kubectl describe XXX to inspect the pod's status details
[root@master core]# kubectl describe pod kube-proxy-9nxhp -n kube-system
...
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Warning  BackOff  8s (x695 over 150m)  kubelet  Back-off restarting failed container

Solution:

Before v1.19, enabling IPVS mode with a kubeadm deployment required adding the following to the init configuration file:

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

If the cluster is 1.20 or later and was initialized with kubeadm using this configuration, the deployment itself completes, but kube-proxy fails to run; part of the error output:

[root@k8s-master01-15:~]# kubectl get pod -A|grep kube-proxy
kube-system kube-proxy-7vrbv 0/1 CrashLoopBackOff 9 43m
kube-system kube-proxy-ghs7h 0/1 CrashLoopBackOff 9 43m
kube-system kube-proxy-l9twb 0/1 CrashLoopBackOff 1 7s

Check the log output:

[root@k8s-master01-15:~]# kubectl logs kube-proxy-9qbwp  -n kube-system
E0216 03:00:11.595910       1 run.go:74] "command failed" err="failed complete: unrecognized feature gate: SupportIPVSProxyMode"

The error shows that kube-proxy does not recognize the SupportIPVSProxyMode field. Checking the upstream documentation for the current way to enable IPVS, at https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md#check-ipvs-proxy-rules, the official guidance reads:

Cluster Created by Kubeadm
If you are using kubeadm with a configuration file, you have to add mode: ipvs in a KubeProxyConfiguration (separated by --- that is also passed to kubeadm init).

kubeProxy:
  config:
    mode: ipvs
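
For reference, on current kubeadm versions the equivalent is a standalone KubeProxyConfiguration document appended to the init configuration after a --- separator (a minimal sketch of that document):

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs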

Since the cluster has already been initialized, editing the kubeadm init configuration file is pointless now; the kube-proxy runtime configuration has to be changed directly.

kube-proxy's pod manifest shows that its configuration file is mounted into the container from a ConfigMap, so deleting the invalid field from that ConfigMap is all that is needed:

[root@k8s-master01-15:~]# kubectl -o yaml get pod -n kube-system kube-proxy-24tkb
......  # other content omitted
    volumes:
    - configMap:
        defaultMode: 420
        name: kube-proxy
      name: kube-proxy
......  # other content omitted

[root@k8s-master01-15:~]# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      19h
extension-apiserver-authentication   6      19h
kube-flannel-cfg                     2      109m
kube-proxy                           2      19h
kube-root-ca.crt                     1      19h
kubeadm-config                       1      19h
kubelet-config-1.23                  1      19h

Open the ConfigMap for editing, find the following fields, delete them, then save and exit:

[root@k8s-master01-15:~]# kubectl edit cm kube-proxy -n kube-system    
featureGates: 
  SupportIPVSProxyMode: true

Then delete all the kube-proxy pods so they get recreated with the new configuration, and check that they come back up.
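
One way to restart them all at once (assuming the DaemonSet's default k8s-app=kube-proxy label):

# Delete every kube-proxy pod; the DaemonSet re-creates them immediately
kubectl delete pod -n kube-system -l k8s-app=kube-proxy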

[root@k8s-master01-15:~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS       AGE
coredns-64897985d-c9t7s            1/1     Running   0              19h
coredns-64897985d-knvxg            1/1     Running   0              19h
etcd-master01                      1/1     Running   0              19h
kube-apiserver-master01            1/1     Running   0              19h
kube-controller-manager-master01   1/1     Running   0              19h
kube-flannel-ds-6lbmw              1/1     Running   15 (56m ago)   110m
kube-flannel-ds-97mkh              1/1     Running   15 (56m ago)   110m
kube-flannel-ds-fthvm              1/1     Running   15 (56m ago)   110m
kube-proxy-4jj7b                   1/1     Running   0              55m
kube-proxy-ksltf                   1/1     Running   0              55m
kube-proxy-w8dcr                   1/1     Running   0              55m
kube-scheduler-master01            1/1     Running   0              19h

Install ipvsadm on the server and check whether IPVS mode took effect:

[root@k8s-master01-15:~]# yum install ipvsadm -y
[root@k8s-master01-15:~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.23.142.233:6443          Masq    1      3          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0         
  -> 10.244.0.3:9153              Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0 
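
The kube-proxy logs give a second confirmation: in IPVS mode they contain a line like "Using ipvs Proxier" (exact wording varies by version):

kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=100 | grep -i proxier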

8) Pod IPs cannot be pinged

After the cluster install completes, start a pod:

# Start a pod named nginx-offi whose container runs the official Nginx image
[root@k8s-master01-15:~]# kubectl run nginx-offi --image=nginx
pod/nginx-offi created

# Check the pod: status is "Running", its IP is "10.244.1.7", and it is running on node "k8s-node01-16"
[root@k8s-master01-15:~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
nginx-offi      1/1     Running   0          55s   10.244.1.7   k8s-node01-16   <none>           <none>

However, accessing the pod from the master k8s-master01-15 or the other worker k8s-node02-17 fails; pinging 10.244.1.7 fails as well, even though flannel was installed earlier.

This turns out to be an iptables rule problem. The iptables rules were cleared during initial server setup, but something afterwards (whether the flannel installation or some other step is unclear) added rules back that block this traffic.

# Check iptables
(root@k8s-master01-15:~)# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
DOCKER-USER  all  --  0.0.0.0/0            0.0.0.0/0           
DOCKER-ISOLATION-STAGE-1  all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
# Warning: iptables-legacy tables present, use iptables-legacy to see them

Clear the iptables rules again:

iptables -F &&  iptables -X &&  iptables -F -t nat &&  iptables -X -t nat

Check iptables again:

(root@k8s-master01-15:~)# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
# Warning: iptables-legacy tables present, use iptables-legacy to see them

Ping or curl the pod again; it now succeeds:

(root@k8s-master01-15:~)# curl 10.244.1.7
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>