Installing Kubernetes v1.18.x on CentOS with kubeadm

I. Introduction

Kubernetes was created by Google in 2014 as the open-source version of Borg, the large-scale container management technology Google had used internally for more than a decade. It is a container cluster management system and an open-source platform that provides automated deployment, automatic scaling, and maintenance of container clusters.

With Kubernetes you can:

  • Deploy applications quickly
  • Scale applications quickly
  • Roll out new application features seamlessly
  • Save resources by optimizing hardware usage

Key characteristics of Kubernetes:

  • Portable: supports public, private, hybrid, and multi-cloud environments
  • Extensible: modular, pluggable, hookable, composable
  • Automated: automatic deployment, restarts, replication, and scaling

Architecture of the Kubernetes project
A cluster is composed of two kinds of nodes, Master and Node, corresponding to the control node and the compute node respectively.

Control node:
The Master node is built from three closely cooperating, independent components:
kube-apiserver: serves the cluster API
kube-scheduler: handles scheduling
kube-controller-manager: handles container orchestration
The cluster's persistent data is processed by kube-apiserver and stored in etcd.

Compute node:
Its core component is the kubelet.
The kubelet is responsible for talking to the container runtime (for example, Docker). This interaction goes through the CRI (Container Runtime Interface), a remote-call interface that defines the core operations of a container runtime, such as all the parameters needed to start a container.
Kubernetes does not care which container runtime you deploy or what technology implements it; as long as the runtime can run standard container images, it can plug into Kubernetes by implementing CRI.

二闭专、環(huán)境配置要求
1.安裝要求

Before starting, the machines used for the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x (x86_64)
  • Hardware: 2 GB RAM or more, 2 or more CPUs, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap disabled

Hostname     IP address       Role    OS          CPU/MEM  Platform
k8s-master   192.168.174.129  master  CentOS 7.4  2C/2G    VMware
k8s-node1    192.168.174.130  node1   CentOS 7.4  2C/2G    VMware
k8s-node2    192.168.174.131  node2   CentOS 7.4  2C/2G    VMware

2. Basic node configuration

Perform the following on all nodes.

1) Set the hostname and name resolution

# Set the hostname
hostnamectl set-hostname yourhostname
bash   # start a new shell so the change takes effect

# Verify the result
hostnamectl status

# Add a hosts entry for this node (replace "your ip" with the node's actual IP)
echo  "your ip  $(hostname)"  >> /etc/hosts

2) Disable the firewall

# Stop firewalld
systemctl stop firewalld
# Keep it disabled across reboots
systemctl disable firewalld

3) Disable SELinux

# Temporarily
setenforce 0
# Permanently (the anchored match avoids mangling the comment lines in the file)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

4) Disable swap

# Temporarily
swapoff -a
# Permanently (comment out the swap entry in /etc/fstab)
sed -i 's/.*swap/#&/' /etc/fstab

5) Configure a static IP
Give each node a static IP so the addresses stay fixed when the Calico network add-on is deployed later.
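
A minimal sketch of such a configuration for the master node, assuming the interface is named ens33 and the gateway/DNS values shown below (all assumptions; adjust to your environment):

# /etc/sysconfig/network-scripts/ifcfg-ens33 (ens33 is an assumed interface name)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.174.129
NETMASK=255.255.255.0
GATEWAY=192.168.174.2    # assumed VMware NAT gateway
DNS1=114.114.114.114     # assumed DNS server

# restart networking to apply
systemctl restart network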

6) Load the IPVS modules
By default, kube-proxy runs in iptables mode in a kubeadm-deployed cluster.

Note that kernels above 4.19 removed the nf_conntrack_ipv4 module; upstream Kubernetes recommends using nf_conntrack instead, otherwise module loading fails with an error that nf_conntrack_ipv4 cannot be found.

yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
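
To confirm the modules actually loaded, a quick check (not part of the original steps):

lsmod | grep -e ip_vs -e nf_conntrack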

7) Configure kernel parameters

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

# load br_netfilter first so the bridge sysctls exist
modprobe br_netfilter
# apply the settings
sysctl -p /etc/sysctl.d/kubernetes.conf
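
Optionally spot-check one of the values afterwards:

sysctl net.bridge.bridge-nf-call-iptables
# expected output: net.bridge.bridge-nf-call-iptables = 1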

8) Raise the open-file limit

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
三浪册、安裝docker及kubelet(在所有節(jié)點(diǎn)執(zhí)行)
1.安裝并配置docker

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
1) Configure the yum repository

# cd /etc/yum.repos.d/
# curl -O http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2) Install Docker

yum -y install docker-ce

3) Configure Docker

# mkdir /etc/docker
# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}

4) Start Docker

systemctl enable docker
systemctl start docker
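
With daemon.json in place, you can confirm that Docker picked up the systemd cgroup driver (a quick check, not in the original article):

docker info | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd
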
2. Install kubelet, kubeadm, and kubectl

1) Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2) Install kubelet, kubeadm, and kubectl
The latest version is installed by default; here we pin 1.18.4. Change the version to suit your needs.

yum install -y kubelet-1.18.4 kubeadm-1.18.4 kubectl-1.18.4

3. Configure docker.service, restart Docker, and start kubelet
1) Change the Docker cgroup driver to systemd

# In /usr/lib/systemd/system/docker.service, change the line
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change you may hit the following error when adding worker nodes:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

2) Restart Docker and start kubelet

systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

At this point Docker may refuse to restart. When that happened to me, the suggestions I found online included a malformed JSON file and updating the system; neither helped, until one article suggested renaming daemon.json to daemon.conf, which worked. The most likely root cause: after the edit above, native.cgroupdriver is set both in /etc/docker/daemon.json (via "exec-opts") and on the dockerd command line (via --exec-opt), and Docker refuses to start when the same option is configured in both places. Renaming daemon.json works because Docker then ignores the file; a cleaner fix is to set the cgroup driver in only one of the two places.
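
If Docker fails to restart for you, these generic commands (a debugging sketch, not from the original article) usually reveal the real cause:

# show why the unit failed
systemctl status docker.service
# read the last few daemon log lines
journalctl -u docker --no-pager | tail -n 20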

四广鳍、初始化Master節(jié)點(diǎn)(僅master節(jié)點(diǎn))
1.開始初始化
kubeadm init --kubernetes-version=1.18.4  \
--apiserver-advertise-address=192.168.174.129   \
--image-repository mirrorgcrio  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

The Pod network CIDR is 10.122.0.0/16, and the API server address is the master's own IP.
Running kubeadm config print init-defaults shows the default configuration kubeadm would use for initialization.
We initialize with command-line flags here. By default kubeadm pulls its images from k8s.gcr.io, which is unreachable from mainland China, so --image-repository points it at the mirrorgcrio mirror. Depending on your network speed, expect to wait 3 to 10 minutes.
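
Optionally, you can pre-pull the control-plane images before running kubeadm init so the init step itself is faster; this uses the same mirror as above:

kubeadm config images pull --image-repository mirrorgcrio --kubernetes-version v1.18.4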

Successful initialization returns output like the following:

W0623 15:38:10.822265   93979 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.174.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.174.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.174.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0623 15:44:13.593752   93979 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0623 15:44:13.599325   93979 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.514485 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ydy3g3.1lahkvfvm1qyoy86
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.174.129:6443 --token ydy3g3.1lahkvfvm1qyoy86 \
    --discovery-token-ca-cert-hash sha256:7c43918ee287d21fe9b70e4868e2e0fdd8c5f6b829a825822aecdb8d207494fc

Record the kubeadm join command at the end of this output; you will run it on each node when joining it to the cluster.

2. Configure kubectl

Option 1: via the kubeconfig file

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Option 2: via an environment variable

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc

3. Check the nodes

[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   5m22s   v1.18.4
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-54f99b968c-s85xv             0/1     Pending   0          5m15s
kube-system   coredns-54f99b968c-wmffs             0/1     Pending   0          5m15s
kube-system   etcd-k8s-master                      1/1     Running   0          5m31s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          5m31s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          5m31s
kube-system   kube-proxy-n8h22                     1/1     Running   0          5m15s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          5m31s

At this point the master node shows NotReady because the CoreDNS pods cannot start: the cluster still lacks a pod network add-on.

五膜蛔、安裝calico網(wǎng)絡(luò)(僅在master節(jié)點(diǎn)操作)

Kubernetes supports several network add-ons; here I install Calico.

1. Install with kubectl
[root@k8s-master ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

The first deployment failed. Running kubectl describe pod [podname] -n [namespace] showed the images could not be pulled, so I looked up the Calico version referenced in the YAML file, pulled the images manually, and redeployed successfully.
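
A sketch of that manual pull, run on every node; the v3.14.1 tag is an assumption, so use whatever tags grep actually reports from your copy of the manifest:

# download the manifest locally and list the image tags it references
curl -O https://docs.projectcalico.org/manifests/calico.yaml
grep 'image:' calico.yaml
# pull the reported images manually (tags below are assumed examples)
docker pull calico/cni:v3.14.1
docker pull calico/node:v3.14.1
docker pull calico/kube-controllers:v3.14.1
docker pull calico/pod2daemon-flexvol:v3.14.1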

2. Check node and pod status again

List the pods and nodes:

[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-58b656d69f-zf6kz   1/1     Running   0          24m
kube-system   calico-node-9hbcx                          1/1     Running   0          24m
kube-system   coredns-54f99b968c-fp6qd                   1/1     Running   0          33m
kube-system   coredns-54f99b968c-s85xv                   1/1     Running   0          72m
kube-system   etcd-k8s-master                            1/1     Running   0          72m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          72m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          72m
kube-system   kube-proxy-n8h22                           1/1     Running   0          72m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          72m
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   72m   v1.18.4

The cluster is now in a healthy state.

六卵史、部署node節(jié)點(diǎn)
1.加入node節(jié)點(diǎn)

Use the command printed at the end of the master initialization above. If you have lost it, regenerate it with kubeadm token create --print-join-command.

[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.174.129:6443 --token a95vmc.yy4p8btqoa7e5dwd     --discovery-token-ca-cert-hash sha256:7c43918ee287d21fe9b70e4868e2e0fdd8c5f6b829a825822aecdb8d207494fc

Then run the join command on each of the two worker nodes:

[root@k8s-node1 ~]# kubeadm join 192.168.174.129:6443 --token 46q0ei.ivbs1u1n2a3tayma     --discovery-token-ca-cert-hash sha256:7c43918ee287d21fe9b70e4868e2e0fdd8c5f6b829a825822aecdb8d207494fc
W0623 17:26:34.473791   81323 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Wait a moment after joining, then check the node status.

2. Check node status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   116m   v1.18.4
k8s-node1    Ready    <none>   14m    v1.18.4
k8s-node2    Ready    <none>   13m    v1.18.4

Once all nodes are Ready, the cluster deployment is complete.

VII. Enable IPVS Mode for kube-proxy (run on the master only)

Edit the kube-proxy ConfigMap in the kube-system namespace and set mode: "ipvs" in its config.conf:

kubectl edit cm kube-proxy -n kube-system

Then restart the kube-proxy pods on every node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
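
To confirm kube-proxy really switched to IPVS mode, you can check its logs or the IPVS rule table (a verification sketch; ipvsadm was installed in the node setup step):

# kube-proxy logs "Using ipvs Proxier" on startup
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs
# or inspect the IPVS virtual server table directly
ipvsadm -Ln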