Service Mesh - Setting Up Kubernetes & Istio with kubeadm and MetalLB

Assumptions

Role    IP             OS            RAM  CPU
Master  172.16.50.146  Ubuntu 20.04  4G   2
Node1   172.16.50.147  Ubuntu 20.04  4G   2
Node2   172.16.50.148  Ubuntu 20.04  4G   2

Before Installation

Change the hostname

  • Master

    $ sudo vim /etc/hostname
    $ cat /etc/hostname
    master
    
  • Node1

    $ sudo vim /etc/hostname
    $ cat /etc/hostname
    node1
    
  • Node2

    $ sudo vim /etc/hostname
    $ cat /etc/hostname
    node2
    

Note: running sudo hostname xxx only changes the hostname temporarily; after a reboot the machine reverts to the old hostname. Editing /etc/hostname directly is therefore recommended, since it changes the hostname permanently.

If a hostname change after a reboot leaves kubelet reporting kubelet.go:2268] node "xxx" not found, fix the hostname and then restart kubelet with systemctl restart kubelet to resolve the problem.

Verify that the MAC address and product_uuid are unique on every node

$ ip link
$ sudo cat /sys/class/dmi/id/product_uuid

Disable the firewall on every node

$ sudo ufw disable
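
If you would rather keep ufw enabled, an alternative is to open only the ports kubeadm and the kubelet need. This is a sketch based on the required-ports list in the Kubernetes documentation; a CNI such as Calico needs additional ports (for example BGP on 179/tcp):

# Control-plane node
$ sudo ufw allow 6443/tcp        # Kubernetes API server
$ sudo ufw allow 2379:2380/tcp   # etcd server client API
$ sudo ufw allow 10250:10252/tcp # kubelet, kube-scheduler, kube-controller-manager
# Worker nodes
$ sudo ufw allow 10250/tcp       # kubelet API
$ sudo ufw allow 30000:32767/tcp # NodePort services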

Disable swap on every node

$ sudo swapoff -a; sudo sed -i '/swap/d' /etc/fstab
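
You can confirm that swap is fully off; the Swap line from free should show 0B everywhere:

$ free -h | grep -i swap
Swap:          0B        0B        0B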

Tip: if you need to run the same commands on multiple machines at once, you can use iTerm2's ?+Shift+I to broadcast input across all tabs.

Let iptables see bridged traffic on every node

Load br_netfilter

$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> br_netfilter
> EOF
br_netfilter

Set net.bridge.bridge-nf-call-iptables

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
......
* Applying /etc/sysctl.conf ...
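
To double-check that both keys took effect, query them directly; each should print 1:

$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1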

Install Docker on every node

Install Docker following the guides at https://docs.docker.com/engine/install/ubuntu/ and https://docs.docker.com/engine/install/linux-postinstall/.

Configure

Configure the Docker daemon, in particular to use systemd for managing the container cgroups.

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Restart Docker and enable it on boot:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
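
After the restart, it is worth confirming that Docker really picked up the systemd cgroup driver:

$ docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd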

Install kubeadm, kubelet, and kubectl on every node

Update the apt package index and install the packages needed to use the Kubernetes apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
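
You can verify the installed versions (v1.20.5 at the time of this walkthrough) and confirm that the packages are held back from upgrades:

$ kubeadm version -o short
v1.20.5
$ kubelet --version
Kubernetes v1.20.5
$ apt-mark showhold
kubeadm
kubectl
kubelet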

On the master node

Initialize the Kubernetes cluster

Replace 172.16.50.146 in the command below with your master's IP address.

$ sudo kubeadm init --apiserver-advertise-address=172.16.50.146 --pod-network-cidr=192.168.0.0/16  --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 172.16.50.146]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.16.50.146 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.16.50.146 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 82.502493 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: en2kq9.2basuxxemkuv1yvu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.50.146:6443 --token en2kq9.2basuxxemkuv1yvu \
    --discovery-token-ca-cert-hash sha256:97e84ca61b5d888476f5cdfd36fa141eaf2631e78e7d32c8c3d209e54be72870

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the Calico network

Install the Tigera Calico operator and custom resource definitions.

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

Install Calico by creating the necessary custom resources. For more information about the configuration options available in this manifest, see the installation reference.

kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml

Note: before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP CIDR to match your pod network CIDR.

Confirm that all of the pods are running with the following command.

kubectl get pods -n calico-system -w

Wait until each pod is up and running.

Note: the Tigera operator installs resources in the calico-system namespace. Other install methods may use the kube-system namespace instead.

Remove the taints on the master node so that pods can be scheduled on it.

kubectl taint nodes --all node-role.kubernetes.io/master-

It should return the following.

node/master untainted

Confirm that you now have one node in your cluster. It should return something like the following.

$ kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master   Ready    control-plane,master   8m57s   v1.20.5   172.16.50.146   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   docker://20.10.5

On the worker nodes

Join the cluster

Run the join command (it can be found in the output of the cluster initialization above) on each worker node.

$ sudo kubeadm join 172.16.50.146:6443 --token en2kq9.2basuxxemkuv1yvu \
>     --discovery-token-ca-cert-hash sha256:97e84ca61b5d888476f5cdfd36fa141eaf2631e78e7d32c8c3d209e54be72870
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If the token has expired, you can create a new one from the master node.

kubeadm token create --print-join-command
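
Bootstrap tokens expire after 24 hours by default; you can inspect the existing tokens and their TTLs from the master node:

$ kubeadm token list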

Verify the cluster

$ kubectl get node
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   19m     v1.20.5
node1    Ready    <none>                 5m48s   v1.20.5
node2    Ready    <none>                 4m57s   v1.20.5

Control the cluster from another machine

On the master node, copy admin.conf to your $HOME directory.

$ sudo cp /etc/kubernetes/admin.conf $HOME
$ sudo chown {user} /home/{user}/admin.conf

Then scp $HOME/admin.conf to the other machine.

$ scp {user}@172.16.50.146:/home/{user}/admin.conf .

$ kubectl --kubeconfig ./admin.conf get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45m   v1.20.5
node1    Ready    <none>                 31m   v1.20.5
node2    Ready    <none>                 31m   v1.20.5
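
As the kubeadm init output mentioned, you can also export KUBECONFIG once instead of passing --kubeconfig on every invocation:

$ export KUBECONFIG=$HOME/admin.conf
$ kubectl get nodes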

Metrics Server

Metrics Server is deployed so that the top command can be used to view basic metrics.

Without Metrics Server, the top command returns an error.

$ kubectl top node
error: Metrics API not available

Install

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Configure

$ kubectl edit deploy -n kube-system metrics-server

This opens the deployment in a text editor, where you need to make the following change.

Add the following argument under spec.template.spec.containers:

- --kubelet-insecure-tls

After the change, the deployment should look roughly like this:

      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        # Add this line
        - --kubelet-insecure-tls 
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
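
If you prefer a non-interactive change, the same argument can be appended with a JSON patch. This sketch assumes metrics-server is the first (index 0) container in the pod spec, which is the case in the stock manifest:

$ kubectl patch deployment metrics-server -n kube-system --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'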

Wait for the metrics-server pod status to change to Running.

$ kubectl get pod -n kube-system -w
NAME                              READY   STATUS              RESTARTS   AGE
...
metrics-server-76f8d9fc69-jb94v   0/1     ContainerCreating   0          43s
metrics-server-76f8d9fc69-jb94v   1/1     Running             0          81s

Run the top command again.

$ kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   242m         12%    2153Mi          56%       
node1    143m         7%     2158Mi          56%       
node2    99m          4%     1665Mi          43%

Dashboard

Install

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

Confirm that all of the pods are running with the following command.

$ kubectl get pod -n kubernetes-dashboard -w
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-79c5968bdc-w2gmc   1/1     Running   0          8m
kubernetes-dashboard-9f9799597-w9fbz         1/1     Running   0          8m

Access

To access the Dashboard from your local workstation, you must create a secure channel to the Kubernetes cluster. Run the following command:

kubectl proxy

Then access the Dashboard at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.

Create a sample user

Create a Service Account

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create a ClusterRoleBinding

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create a Bearer Token

$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Now copy the token and paste it into the Enter token field on the login screen.

Click the Sign in button, and that is it. You are now logged in as an admin.

Istio

Install istioctl

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.9.2 sh -
cd istio-1.9.2
export PATH=$PWD/bin:$PATH

Deploy the Istio operator

$ istioctl operator init
Installing operator controller in namespace: istio-operator using image: docker.io/istio/operator:1.9.2
Operator controller will watch namespaces: istio-system
? Istio operator installed
? Installation complete

Install Istio

$ kubectl create ns istio-system
namespace/istio-system created

$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
EOF
istiooperator.install.istio.io/example-istiocontrolplane created

Confirm that all of the pods are running with the following command.

$ kubectl get pod -n istio-system -w
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7cc49dcd99-c4mtf   1/1     Running   0          94s
istiod-687f965684-n8rkv                 1/1     Running   0          3m26s
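
You can also check the IstioOperator resource itself; the operator sets its STATUS to HEALTHY once reconciliation succeeds:

$ kubectl get istiooperator -n istio-system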

MetalLB

Kubernetes does not offer an implementation of network load balancers (services of type LoadBalancer) for bare-metal clusters. The network LB implementations that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure, and so on). If you are not running on a supported IaaS platform, LoadBalancers will remain in the "pending" state indefinitely when created.

MetalLB solves the problem of the Istio ingress gateway's EXTERNAL-IP staying in "pending".

Install

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Configure

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.50.147-172.16.50.148 # Update this with your nodes' IP range
EOF
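
Before relying on MetalLB, confirm that its controller and the per-node speaker pods are running:

$ kubectl get pods -n metallb-system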

Example

Enable Istio automatic sidecar injection.

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled

Deploy the Bookinfo example.

$ cd istio-1.9.2
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

Deploy the Bookinfo gateway.

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

Confirm that all of the pods are running with the following command.

$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-62x6b       2/2     Running   0          19m
productpage-v1-6b746f74dc-4g4hk   2/2     Running   0          19m
ratings-v1-b6994bb9-rz6pq         2/2     Running   0          19m
reviews-v1-545db77b95-bcnd8       2/2     Running   0          19m
reviews-v2-7bf8c9648f-zcgfx       2/2     Running   0          19m
reviews-v3-84779c7bbc-78bk7       2/2     Running   0          19m

Get the EXTERNAL-IP of the Istio ingress gateway.

$ kubectl get service -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.99.204.213   172.16.50.147   15021:32373/TCP,80:30588/TCP,443:31095/TCP,15012:31281/TCP,15443:32738/TCP   73m
istiod                 ClusterIP      10.103.238.79   <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP                                        75m

http://EXTERNAL-IP/productpage 訪問 productpage墨礁。
