Setting Up a Kubernetes 1.18 Cluster with kubeadm

What's New in Kubernetes 1.18

1. Topology Manager promoted to Beta

The Topology Manager is a beta feature in Kubernetes 1.18. It enables NUMA alignment of CPUs and devices (such as SR-IOV VFs), letting your workloads run in an environment optimized for low latency. Before the Topology Manager, the CPU manager and the device manager made resource-allocation decisions independently of each other, which could result in poor allocations on multi-socket systems and degraded performance for latency-critical applications.
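If you want to experiment with it, the policy is selected in the kubelet configuration. The fragment below is a sketch; the field name assumes the kubelet.config.k8s.io/v1beta1 KubeletConfiguration API of this release:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Valid policies: none (default), best-effort, restricted, single-numa-node.
# single-numa-node rejects pods whose CPU/device allocation cannot fit on one NUMA node.
topologyManagerPolicy: single-numa-node
```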

2. Server-side Apply moves to Beta 2

Server-side Apply was promoted to Beta in 1.16 and enters a second beta phase in 1.18. This new version tracks and manages field changes for all new Kubernetes objects, so you can know which resources were changed and when.

3. Extending Ingress with IngressClass, replacing the deprecated annotation

Kubernetes 1.18 brings two important additions to Ingress: a new pathType field and a new IngressClass resource. The pathType field lets you specify how paths should be matched; in addition to the default ImplementationSpecific type, there are new Exact and Prefix path types.

The IngressClass resource describes a type of Ingress within a Kubernetes cluster. Ingresses can specify which class they are associated with via the new ingressClassName field. This new resource and field replace the deprecated kubernetes.io/ingress.class annotation.
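Concretely, an IngressClass plus an Ingress referencing it looks like this in the 1.18 networking.k8s.io/v1beta1 API (names such as external-lb and demo-svc are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb                              # hypothetical class name
spec:
  controller: example.com/ingress-controller     # hypothetical controller ID
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: external-lb   # replaces the kubernetes.io/ingress.class annotation
  rules:
  - http:
      paths:
      - path: /app                # Prefix matches /app, /app/, /app/anything
        pathType: Prefix
        backend:
          serviceName: demo-svc   # hypothetical backend Service
          servicePort: 80
```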

4. SIG-CLI introduces the kubectl debug command

SIG-CLI had long debated whether a debugging utility was needed. With the development of ephemeral containers, developers increasingly need commands that go beyond kubectl exec. The new kubectl debug command (an alpha feature; feedback is welcome) lets developers easily debug their Pods inside the cluster. It creates a temporary container that runs next to the Pod being inspected and attaches to the console for interactive troubleshooting.

5. Alpha release of CSI support on Windows

Alongside Kubernetes 1.18, an alpha version of the CSI proxy for Windows has been released. The CSI proxy enables unprivileged (pre-approved) containers to perform privileged storage operations on Windows, so CSI drivers can now be supported on Windows.

For the full list of new features, see the official 1.18 release notes.

Environment used in this walkthrough
(kubeadm requires a minimum of 2 CPU cores):

IP          Hostname  Role    OS & Size
10.0.0.70   master1   master  CentOS 7.8, 2C4G
10.0.0.71   node1     node    CentOS 7.8, 2C4G
10.0.0.72   node2     node    CentOS 7.8, 2C4G

Part 1: Initialize the Environment

Set the hostnames on all machines and configure passwordless SSH.

# Add host entries
cat >> /etc/hosts <<EOF
10.0.0.70  master1
10.0.0.71  node1
10.0.0.72  node2
EOF

# Distribute the SSH public key from the master node (the expect script assumes the root password is 123)
yum install -y expect
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in master1 node1 node2;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
        expect {
                \"*yes/no*\" {send \"yes\r\"; exp_continue}
                \"*password*\" {send \"123\r\"; exp_continue}
                \"*Password*\" {send \"123\r\";}
        } "
done 

Disable SELinux, the firewall, and swap on all nodes.

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
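The /etc/fstab edit above is easy to get wrong, so you can dry-run the sed expression against a copy first. This is a sketch using a hypothetical two-line fstab under /tmp:

```shell
# Build a sample fstab (hypothetical devices) so the real /etc/fstab stays untouched
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same expression as above: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

grep swap /tmp/fstab.demo
# → #/dev/mapper/centos-swap swap swap defaults 0 0
```

Only the swap mount is commented out; the root filesystem line is untouched.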

Configure the yum repositories on all nodes.

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

Enabling IPv4 forwarding across bridges requires the br_netfilter module, so load it:

# Run on every node

modprobe br_netfilter
modprobe ip_conntrack

Tune the kernel parameters.

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# Avoid swap; it is only used when the system is under OOM pressure
vm.swappiness=0
# Allow memory overcommit without checking available physical memory
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
# Distribute to all nodes
for i in master1 node1 node2
do
    scp /etc/sysctl.d/kubernetes.conf root@$i:/etc/sysctl.d/
    ssh root@$i sysctl -p /etc/sysctl.d/kubernetes.conf
done

bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets traversing a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. Commonly used options:

net.bridge.bridge-nf-call-arptables: whether to filter the bridge's ARP packets in the arptables FORWARD chain
net.bridge.bridge-nf-call-ip6tables: whether to filter IPv6 packets in the ip6tables chains
net.bridge.bridge-nf-call-iptables: whether to filter IPv4 packets in the iptables chains
net.bridge.bridge-nf-filter-vlan-tagged: whether to filter VLAN-tagged packets in iptables/arptables

Enable IPVS on all nodes.

Why IPVS? Starting with Kubernetes 1.8, kube-proxy introduced an IPVS mode. Like the iptables mode, IPVS is based on Netfilter, but it uses hash tables, so once the number of Services reaches a certain scale the speed advantage of hash lookups shows, improving Service performance.
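Note that making IPVS available is not the same as enabling it: kube-proxy still has to be told to use it. One way is a KubeProxyConfiguration block appended to the kubeadm init config, or the same field edited afterwards with kubectl edit cm kube-proxy -n kube-system. A sketch, using the kubeproxy.config.k8s.io/v1alpha1 API:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# If mode is left empty, kube-proxy falls back to iptables
mode: "ipvs"
```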

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

# Verify that the required kernel modules loaded correctly

Install ipset on all nodes.

yum install ipset -y

About ipset

iptables is the core network-isolation mechanism on Linux servers. The kernel evaluates iptables rules one by one when handling network traffic, so efficiency degrades as the number of rules grows. With ipset, the 5-tuples in rules (protocol, source address, source port, destination address, destination port) can be merged into bounded sets, greatly reducing the number of iptables rules and improving efficiency. Published tests claim ipset can be up to 100x faster than plain iptables.

To make managing IPVS easier, also install ipvsadm.

yum install ipvsadm -y

Set the system time zone on all nodes.

timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog 
systemctl restart crond

As a final step, it is a good idea to run an update (optional):

yum update -y

Part 2: Install Docker

# On all machines
export VERSION=19.03
curl -fsSL "https://get.docker.com/" | bash -s -- --mirror Aliyun


# Configure registry mirrors

mkdir -p /etc/docker/
cat>/etc/docker/daemon.json<<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
      "https://fz5yth0r.mirror.aliyuncs.com",
      "https://dockerhub.mirrors.nwafu.edu.cn/",
      "https://mirror.ccs.tencentyun.com",
      "https://docker.mirrors.ustc.edu.cn/",
      "https://reg-mirror.qiniu.com",
      "https://registry.docker-cn.com"
  ],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF

Start and enable Docker.

[root@master1 ~]# systemctl start docker && systemctl enable docker 

Part 3: Install kubeadm

The default yum repository is hosted overseas, so switch to the Alibaba Cloud mirror:

cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

Install the kubeadm, kubectl, and kubelet packages.

yum install -y \
    kubeadm-1.18.3 \
    kubectl-1.18.3 \
    kubelet-1.18.3 \
    --disableexcludes=kubernetes && \
    systemctl enable kubelet

Check the image versions Kubernetes needs and pre-pull the seven required images.

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.3

Initialize the master node.

# --pod-network-cidr: the network range for Pods
# --service-cidr: the network range for Services
# --image-repository: the image registry; it should match the one used for pre-pulling


kubeadm init --kubernetes-version=v1.18.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.1.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers

W0408 10:10:19.704855    9534 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0408 10:10:29.102901    9534 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0408 10:10:29.104505    9534 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.007875 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4oxcgj.1dqz97nbu4pcf84l
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.70:6443 --token 4oxcgj.1dqz97nbu4pcf84l \
    --discovery-token-ca-cert-hash sha256:2445a08ab9e210e9d3f82949ae16472d47abbc188a2b28e4d6470b02d5ddce3a
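The command-line flags used above can equivalently be kept in a config file and passed with `kubeadm init --config kubeadm-config.yaml`. A sketch, using the v1beta2 kubeadm API shipped with the 1.18 series:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16     # matches --pod-network-cidr
  serviceSubnet: 10.1.0.0/16   # matches --service-cidr
```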

Seeing "Your Kubernetes control-plane has initialized successfully!" means the init succeeded.

After initialization completes, run the following commands as prompted. Note that the kubeadm join command at the end will be needed later, so record it.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the node status.

# Still NotReady (no pod network deployed yet)
kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   4m45s   v1.18.3

······

Deploy the flannel network.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


In kube-flannel.yml, delete the four DaemonSets kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le, and kube-flannel-ds-s390x, keeping only kube-flannel-ds-amd64, since a typical lab environment is amd64.

# Change the image address to a registry reachable from China:
[root@master1 ~]# vim kube-flannel.yml
...
 image: registry.cn-hangzhou.aliyuncs.com/ljcc/flannel:v0.12.0-amd64
...
kubectl apply -f kube-flannel.yml
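The 10.244.0.0/16 passed to --pod-network-cidr earlier is not arbitrary: it must match the Network field of the net-conf.json ConfigMap inside kube-flannel.yml, which the upstream manifest ships as:

```yaml
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```

If you chose a different Pod CIDR at init time, edit this value before applying the manifest.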

kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
master1   Ready    master   14m   v1.18.3

# All pods should now show the Running state

kubectl get all -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
pod/coredns-7ff77c879f-8flq5             1/1     Running   0          23m
pod/coredns-7ff77c879f-xth7n             1/1     Running   0          23m
pod/etcd-k8s-master                      1/1     Running   0          24m
pod/kube-apiserver-k8s-master            1/1     Running   0          18m
pod/kube-controller-manager-k8s-master   1/1     Running   1          24m
pod/kube-flannel-ds-amd64-8nft4          1/1     Running   0          13m
pod/kube-proxy-fk7tn                     1/1     Running   0          23m
pod/kube-scheduler-k8s-master            1/1     Running   1          24m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.1.0.10    <none>        53/UDP,53/TCP,9153/TCP   24m

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-flannel-ds-amd64   1         1         1       1            1           <none>                   13m
daemonset.apps/kube-proxy              1         1         1       1            1           kubernetes.io/os=linux   24m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           24m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-7ff77c879f   2         2         2       23m

Join the worker nodes to the Kubernetes cluster.

kubeadm join 10.0.0.70:6443 --token 4oxcgj.1dqz97nbu4pcf84l \
    --discovery-token-ca-cert-hash sha256:2445a08ab9e210e9d3f82949ae16472d47abbc188a2b28e4d6470b02d5ddce3a  

If you have lost the token and hash:

# Compute the CA certificate hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
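This pipeline can be sanity-checked without a cluster by running it against a throwaway self-signed certificate (illustrative /tmp paths; on the master you would point it at /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA-style certificate (illustrative subject and paths)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Same pipeline: extract the public key, DER-encode it, hash it, keep the hex digest
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is a 64-character hex digest, i.e. the value that goes after `sha256:` in the kubeadm join command.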

# List tokens
kubeadm token list

Check the Kubernetes cluster status.

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   9h    v1.18.3
node1     Ready    <none>   8h    v1.18.3
node2     Ready    <none>   8h    v1.18.3

Done!
Remaining rough edges will be addressed in a future revision.

Parts of this article draw on external references.
