When Will China Draw Its Sword — Building a k8s Cluster

Copyright notice: this is original work. Reproduction is prohibited; violations will be pursued legally.

Preface

Lately the situation between China and India has been heating up. As a patriotic young man I feel some anger, but at times I am also extremely proud. I cannot tell whether Chinese diplomacy is being strong or weak, but shouldn't there at least be a clear stance? What do we actually do? Issue protests, or at most hold a few military exercises. What good is that?

My own guess is that the country has its own plans. It is like a lion facing a mad dog: why bother? Amid all the back-and-forth between China and India, it is hard to tell whether this is quiet confidence or squandered promise.

Real swagger would be the carrier's electromagnetic catapult, or the nuclear submarines, or the drones...

The Project

I imagine everyone knows Docker, and most of you have played with k8s too.

I ran into some problems while building a Kubernetes cluster. There are plenty of setup guides online to reference, but the cluster only counts as ready once the network connectivity described below works.

The requirements are as follows:

The k8s architecture diagram is as follows:

Versions and machine information:

Node Initialization

Switch CentOS-Base.repo to the Aliyun yum mirror:

mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bk

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Set the bridge sysctls:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF

sudo sysctl --system
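As a quick sanity check before moving on, the values written to k8s.conf can be verified. A minimal sketch — `check_bridge_conf` is a hypothetical helper name of my own, not anything from the original setup:

```shell
#!/bin/sh
# Sketch: succeed only if every net.bridge key in a sysctl conf file is 1.
# check_bridge_conf is a hypothetical helper, not part of any tool.
check_bridge_conf() {
  awk -F'[ =]+' '
    /^net\.bridge/ { seen++; if ($2 != 1) bad = 1 }
    END { exit (bad || seen == 0) }
  ' "$1"
}
```

Usage would be `check_bridge_conf /etc/sysctl.d/k8s.conf && echo "bridge sysctls OK"`.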

Disable SELinux (please do not just use setenforce 0; edit the config file instead):

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Stop and disable the firewall:

sudo systemctl disable firewalld.service

sudo systemctl stop firewalld.service

Disable iptables (this step can be skipped):

sudo yum install -y iptables-services; iptables -F

sudo systemctl disable iptables.service

sudo systemctl stop iptables.service

Install the required packages:

sudo yum install -y vim wget curl screen git etcd ebtables flannel

sudo yum install -y socat net-tools.x86_64 iperf bridge-utils.x86_64

Install Docker (the default version is currently 1.12):

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

sudo yum install -y libdevmapper* docker

Install Kubernetes

For easy copy-paste:

## Point kubernetes.repo at the Aliyun mirror (for use inside China)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

## Point kubernetes.repo at the upstream repo (for networks that can reach Google)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

## Install k8s 1.7.2 (kubernetes-cni is pulled in as a dependency; its version is not pinned here)
export K8SVERSION=1.7.2
sudo yum install -y "kubectl-${K8SVERSION}-0.x86_64" "kubelet-${K8SVERSION}-0.x86_64" "kubeadm-${K8SVERSION}-0.x86_64"

Reboot the machine (this step is required):

reboot

重啟機(jī)器后執(zhí)行如下步驟

配置docker daemon并啟動docker

cat?</etc/sysconfig/docker

OPTIONS="-H?unix:///var/run/docker.sock?-H?tcp://127.0.0.1:2375?--storage-driver=overlay?--exec-opt?native.cgroupdriver=cgroupfs?--graph=/localdisk/docker/graph?--insecure-registry=gcr.io?--insecure-registry=quay.io??--insecure-registry=registry.cn-hangzhou.aliyuncs.com?--registry-mirror=http://138f94c6.m.daocloud.io"EOF

systemctl?start?docker

systemctl?status?docker?-l

Pull the images required by k8s 1.7.2:

quay.io/calico/node:v1.3.0

quay.io/calico/cni:v1.9.1

quay.io/calico/kube-policy-controller:v0.6.0

gcr.io/google_containers/pause-amd64:3.0

gcr.io/google_containers/kube-proxy-amd64:v1.7.2

gcr.io/google_containers/kube-apiserver-amd64:v1.7.2

gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2

gcr.io/google_containers/kube-scheduler-amd64:v1.7.2

gcr.io/google_containers/etcd-amd64:3.0.17

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4

gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4

gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
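The gcr.io and quay.io registries above are often unreachable from inside China. One common workaround is to pull each image from a mirror and retag it under its original name so kubelet can find it. The sketch below only prints the commands rather than running them; the mirror path `registry.cn-hangzhou.aliyuncs.com/google_containers` is an assumption — substitute a mirror that actually hosts these tags:

```shell
#!/bin/sh
# Sketch: generate docker pull + retag commands for blocked-registry images.
# MIRROR is an assumed example path, not guaranteed to carry every tag.
MIRROR="registry.cn-hangzhou.aliyuncs.com/google_containers"

gen_pull_cmds() {
  for img in "$@"; do
    short=${img##*/}   # strip registry/namespace, e.g. pause-amd64:3.0
    echo "docker pull $MIRROR/$short"
    echo "docker tag $MIRROR/$short $img"
  done
}

# usage (prints the commands; pipe to sh to actually execute them):
gen_pull_cmds \
  gcr.io/google_containers/pause-amd64:3.0 \
  gcr.io/google_containers/kube-proxy-amd64:v1.7.2
```

Retagging matters because the kubelet manifests reference the original gcr.io names, not the mirror's.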

Start etcd on the non-master node 10.12.0.22 (an etcd cluster would also work):

screen etcd -name="EtcdServer" -initial-advertise-peer-urls=http://10.12.0.22:2380 -listen-peer-urls=http://0.0.0.0:2380 -listen-client-urls=http://10.12.0.22:2379 -advertise-client-urls http://10.12.0.22:2379 -data-dir /var/lib/etcd/default.etcd

On every node, check that etcd is reachable; this must succeed. If it does not, check whether the firewall is really off.

etcdctl --endpoint=http://10.12.0.22:2379 member list

etcdctl --endpoint=http://10.12.0.22:2379 cluster-health
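etcd can take a moment to come up after starting, so instead of checking once it can help to poll. A small generic retry helper (a sketch of my own; `retry` is not part of etcdctl):

```shell
#!/bin/sh
# Sketch: run a command up to N times, sleeping 1s between attempts.
# Example: retry 30 etcdctl --endpoint=http://10.12.0.22:2379 cluster-health
retry() {
  n=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$n" ]; then
      return 1
    fi
    sleep 1
  done
}
```

The same helper works for any of the readiness checks later in this guide.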

Bootstrap the master with kubeadm.

The pod-ip subnet is set to 10.68.0.0/16; the cluster-ip subnet keeps the default 10.96.0.0/16.

Run the following on the master node:

cat <<EOF > kubeadm_config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.12.0.18
  bindPort: 6443
etcd:
  endpoints:
  - http://10.12.0.22:2379
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.68.0.0/16
kubernetesVersion: v1.7.2
#token:
#tokenTTL: 0
EOF

kubeadm init --config kubeadm_config.yaml

執(zhí)行kubeadm init命令后稍等幾十秒攀甚,master上api-server, scheduler, controller-manager容器都啟動起來,以下命令來check下master

如下命令在master節(jié)點(diǎn)上執(zhí)行

rm?-rf?$HOME/.kube

mkdir?-p?$HOME/.kube

sudo?cp?-i?/etc/kubernetes/admin.conf?$HOME/.kube/config

sudo?chown?$(id?-u):$(id?-g)?$HOME/.kube/config

kubectl?get?cs?-o?wide?--show-labels

kubectl?get?nodes?-o?wide?--show-labels

Join the nodes. This requires the token printed by kubeadm init. Run the following on each node:

systemctl start docker
systemctl start kubelet

kubeadm join --token *{6}.*{16} 10.12.0.18:6443 --skip-preflight-checks

Watch the nodes join from the master. Because no network has been created yet, all master and node nodes stay in NotReady state, and kube-dns stays Pending.

kubectl get nodes -o wide

watch kubectl get all --all-namespaces -o wide
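Once the pod network is applied, the nodes should flip from NotReady to Ready. For scripting that check, here is a small sketch that reads `kubectl get nodes --no-headers` style output on stdin, so the parsing can be exercised without a cluster (`all_nodes_ready` is a hypothetical helper name of my own):

```shell
#!/bin/sh
# Sketch: exit 0 only when every node line reports STATUS == Ready.
# Feed it `kubectl get nodes --no-headers` on stdin, e.g.
#   kubectl get nodes --no-headers | all_nodes_ready && echo "all Ready"
all_nodes_ready() {
  awk 'NF == 0 { next }
       $2 != "Ready" { notready = 1 }
       END { exit notready }'
}
```

Note this simple version treats compound statuses such as "Ready,SchedulingDisabled" as not ready, which may or may not be what you want.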

Two modifications were made to calico.yaml:

Removed the etcd creation section, so the external etcd is used

Changed CALICO_IPV4POOL_CIDR to 10.68.0.0/16

The resulting calico.yaml:

# Calico Version v2.3.0
# http://docs.projectcalico.org/v2.3/releases#v2.3.0
# This manifest includes the following component versions:
#   calico/node:v1.3.0
#   calico/cni:v1.9.1
#   calico/kube-policy-controller:v0.6.0

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster.
  etcd_endpoints: "http://10.12.0.22:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "cniVersion": "0.1.0",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
        }
    }

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
        # reserves resources for critical add-on pods so that they can be rescheduled after
        # a failure.  This annotation works in tandem with the toleration below.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
      # This, along with the annotation above marks this pod as a critical add-on.
      - key: CriticalAddonsOnly
        operator: Exists
      serviceAccountName: calico-cni-plugin
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.3.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Enable BGP.  Disable to enforce policy only.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.68.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.9.1
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy-controller
      annotations:
        # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
        # reserves resources for critical add-on pods so that they can be rescheduled after
        # a failure.  This annotation works in tandem with the toleration below.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
      # This, along with the annotation above marks this pod as a critical add-on.
      - key: CriticalAddonsOnly
        operator: Exists
      serviceAccountName: calico-policy-controller
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.6.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The location of the Kubernetes API.  Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-cni-plugin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-cni-plugin
subjects:
- kind: ServiceAccount
  name: calico-cni-plugin
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-cni-plugin
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-cni-plugin
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-policy-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-policy-controller
subjects:
- kind: ServiceAccount
  name: calico-policy-controller
  namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
  namespace: kube-system
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
    verbs:
      - watch
      - list

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-policy-controller
  namespace: kube-system

Create the calico cross-host network by running the following on the master node:

kubectl apply -f calico.yaml

Watch for a pod named calico-node-**** to come up on every node; calico-policy-controller and kube-dns also start. All of these pods live in the kube-system namespace.

>kubectl get all --all-namespaces
NAMESPACE     NAME                                                 READY     STATUS    RESTARTS   AGE
kube-system   po/calico-node-2gqf2                                 2/2       Running   0          19h
kube-system   po/calico-node-fg8gh                                 2/2       Running   0          19h
kube-system   po/calico-node-ksmrn                                 2/2       Running   0          19h
kube-system   po/calico-policy-controller-1727037546-zp4lp         1/1       Running   0          19h
kube-system   po/etcd-izuf6fb3vrfqnwbct6ivgwz                      1/1       Running   0          19h
kube-system   po/kube-apiserver-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h
kube-system   po/kube-controller-manager-izuf6fb3vrfqnwbct6ivgwz   1/1       Running   0          19h
kube-system   po/kube-dns-2425271678-3t4g6                         3/3       Running   0          19h
kube-system   po/kube-proxy-6fg1l                                  1/1       Running   0          19h
kube-system   po/kube-proxy-fdbt2                                  1/1       Running   0          19h
kube-system   po/kube-proxy-lgf3z                                  1/1       Running   0          19h
kube-system   po/kube-scheduler-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h

NAMESPACE     NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       svc/kubernetes             10.96.0.1       <none>        443/TCP         19h
kube-system   svc/kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   19h

NAMESPACE     NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/calico-policy-controller   1         1         1            1           19h
kube-system   deploy/kube-dns                   1         1         1            1           19h

NAMESPACE     NAME                                     DESIRED   CURRENT   READY     AGE
kube-system   rs/calico-policy-controller-1727037546   1         1         1         19h
kube-system   rs/kube-dns-2425271678                   1         1         1         19h

Deploy the dashboard:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

kubectl create -f kubernetes-dashboard.yaml

Deploy heapster:

wget https://github.com/kubernetes/heapster/archive/v1.4.0.tar.gz

tar -zxvf v1.4.0.tar.gz
cd heapster-1.4.0/deploy/kube-config/influxdb

kubectl create -f ./

Other Commands

Force delete a pod (pod name and namespace are placeholders):

kubectl delete pod <pod-name> --namespace=<namespace> --grace-period=0 --force

Reset a node:

kubeadm reset

systemctl stop kubelet

docker ps -aq | xargs docker rm -fv

find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v

rm -rf /var/lib/kubelet /etc/kubernetes/ /var/lib/etcd

systemctl start kubelet

Access the dashboard (run on the master node):

kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^.*'

or

kubectl proxy --port=8011 --address=192.168.61.100 --accept-hosts='^192\.168\.61\.*'

then browse to http://0.0.0.0:8001/ui

Access the API with an authentication token:

APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")

TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d ':' | tr -d '\t')

curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure

Allow the master to schedule workloads (by default the master does not take part in scheduling):

kubectl taint nodes --all node-role.kubernetes.io/master-

or

kubectl taint nodes --all dedicated-

Master node description before removing the isolation:

Name:               izuf6fb3vrfqnwbct6ivgwz
Role:
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             node-role.kubernetes.io/master:NoSchedule

kubernetes master 消除隔離之后 Annotations

Name:???????????izuf6fb3vrfqnwbct6ivgwzRole:Labels:?????????beta.kubernetes.io/arch=amd64

beta.kubernetes.io/os=linux

kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz

node-role.kubernetes.io/master=Annotations:????????node.alpha.kubernetes.io/ttl=0

volumes.kubernetes.io/controller-managed-attach-detach=trueTaints:?????????

Summary: everything above has been tested and works, but there is still one mistake in here — can those of you who have read through the doc guess what it is?

This article comes from the "李世龍" blog. Reproduction prohibited.

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
  • 序言:七十年代末绎橘,一起剝皮案震驚了整個(gè)濱河市,隨后出現(xiàn)的幾起案子撞叨,更是在濱河造成了極大的恐慌,老刑警劉巖浊洞,帶你破解...
    沈念sama閱讀 206,482評論 6 481
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件牵敷,死亡現(xiàn)場離奇詭異,居然都是意外死亡法希,警方通過查閱死者的電腦和手機(jī)枷餐,發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 88,377評論 2 382
  • 文/潘曉璐 我一進(jìn)店門,熙熙樓的掌柜王于貴愁眉苦臉地迎上來苫亦,“玉大人毛肋,你說我怎么就攤上這事∥萁#” “怎么了润匙?”我有些...
    開封第一講書人閱讀 152,762評論 0 342
  • 文/不壞的土叔 我叫張陵,是天一觀的道長唉匾。 經(jīng)常有香客問我孕讳,道長,這世上最難降的妖魔是什么巍膘? 我笑而不...
    開封第一講書人閱讀 55,273評論 1 279
  • 正文 為了忘掉前任厂财,我火速辦了婚禮,結(jié)果婚禮上峡懈,老公的妹妹穿的比我還像新娘璃饱。我一直安慰自己,他們只是感情好肪康,可當(dāng)我...
    茶點(diǎn)故事閱讀 64,289評論 5 373
  • 文/花漫 我一把揭開白布荚恶。 她就那樣靜靜地躺著,像睡著了一般磷支。 火紅的嫁衣襯著肌膚如雪裆甩。 梳的紋絲不亂的頭發(fā)上,一...
    開封第一講書人閱讀 49,046評論 1 285
  • 那天齐唆,我揣著相機(jī)與錄音嗤栓,去河邊找鬼。 笑死,一個(gè)胖子當(dāng)著我的面吹牛茉帅,可吹牛的內(nèi)容都是我干的叨叙。 我是一名探鬼主播,決...
    沈念sama閱讀 38,351評論 3 400
  • 文/蒼蘭香墨 我猛地睜開眼堪澎,長吁一口氣:“原來是場噩夢啊……” “哼擂错!你這毒婦竟也來了?” 一聲冷哼從身側(cè)響起樱蛤,我...
    開封第一講書人閱讀 36,988評論 0 259
  • 序言:老撾萬榮一對情侶失蹤钮呀,失蹤者是張志新(化名)和其女友劉穎,沒想到半個(gè)月后昨凡,有當(dāng)?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體爽醋,經(jīng)...
    沈念sama閱讀 43,476評論 1 300
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 35,948評論 2 324
  • 正文 我和宋清朗相戀三年便脊,在試婚紗的時(shí)候發(fā)現(xiàn)自己被綠了蚂四。 大學(xué)時(shí)的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片。...
    茶點(diǎn)故事閱讀 38,064評論 1 333
  • 序言:一個(gè)原本活蹦亂跳的男人離奇死亡哪痰,死狀恐怖遂赠,靈堂內(nèi)的尸體忽然破棺而出,到底是詐尸還是另有隱情晌杰,我是刑警寧澤跷睦,帶...
    沈念sama閱讀 33,712評論 4 323
  • 正文 年R本政府宣布,位于F島的核電站肋演,受9級特大地震影響送讲,放射性物質(zhì)發(fā)生泄漏。R本人自食惡果不足惜惋啃,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 39,261評論 3 307
  • 文/蒙蒙 一哼鬓、第九天 我趴在偏房一處隱蔽的房頂上張望。 院中可真熱鬧边灭,春花似錦异希、人聲如沸。這莊子的主人今日做“春日...
    開封第一講書人閱讀 30,264評論 0 19
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽。三九已至惰帽,卻和暖如春憨降,著一層夾襖步出監(jiān)牢的瞬間,已是汗流浹背该酗。 一陣腳步聲響...
    開封第一講書人閱讀 31,486評論 1 262
  • 我被黑心中介騙來泰國打工授药, 沒想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留士嚎,地道東北人。 一個(gè)月前我還...
    沈念sama閱讀 45,511評論 2 354
  • 正文 我出身青樓悔叽,卻偏偏與公主長得像莱衩,于是被迫代替她去往敵國和親。 傳聞我的和親對象是個(gè)殘疾皇子娇澎,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 42,802評論 2 345

推薦閱讀更多精彩內(nèi)容