Getting Started with Kubernetes, Week 2

I. Binary deployment of a highly available Kubernetes cluster

Deploy the k8s cluster with kubeasz. The servers used for this project are listed below.

Type          Server IP       Hostname      VIP
K8S Master1   172.20.20.101   k8s-master1   172.20.20.188
K8S Master2   172.20.20.102   k8s-master2   172.20.20.188
K8S Master3   172.20.20.103   k8s-master3   172.20.20.188
Harbor1       172.20.20.91    k8s-harbor1
etcd node 1   172.20.20.121   k8s-etcd1
etcd node 2   172.20.20.122   k8s-etcd2
etcd node 3   172.20.20.123   k8s-etcd3
Haproxy1      172.20.20.81    k8s-ha1
Node 1        172.20.20.111   k8s-node1
Node 2        172.20.20.112   k8s-node2
Node 3        172.20.20.113   k8s-node3
deploy        172.20.20.88    k8s-deploy

1. Download the project source, binaries, and offline images

Download the ezdown tool script; the version used here is 3.4.6.

root@k8s-deploy:~# export release=3.4.6
root@k8s-deploy:~# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@k8s-deploy:~# chmod a+x ezdown

Download the kubeasz code, binaries, and default container images.

root@k8s-deploy:~# ./ezdown -D
root@k8s-deploy:~# cd /etc/kubeasz/
root@k8s-deploy:/etc/kubeasz# ls
ansible.cfg  bin  docs  down  example  ezctl  ezdown  manifests  pics  playbooks  README.md  roles  tools
root@k8s-deploy:/etc/kubeasz# docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS         PORTS     NAMES
1b19038e7639   registry:2   "/entrypoint.sh /etc…"   5 minutes ago   Up 5 minutes             local_registry
root@k8s-deploy:/etc/kubeasz# docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED         SIZE
registry                                             2         4bb5ea59f8e0   6 weeks ago     24MB
easzlab/kubeasz                                      3.4.6     a475dd279050   3 months ago    157MB
easzlab/kubeasz-k8s-bin                              v1.25.9   9c25793e6966   3 months ago    1.1GB
easzlab/kubeasz-ext-bin                              1.7.1     5c1895de99b2   4 months ago    606MB
calico/kube-controllers                              v3.24.5   38b76de417d5   8 months ago    71.4MB
easzlab.io.local:5000/calico/kube-controllers        v3.24.5   38b76de417d5   8 months ago    71.4MB
calico/cni                                           v3.24.5   628dd7088041   8 months ago    198MB
easzlab.io.local:5000/calico/cni                     v3.24.5   628dd7088041   8 months ago    198MB
calico/node                                          v3.24.5   54637cb36d4a   8 months ago    226MB
easzlab.io.local:5000/calico/node                    v3.24.5   54637cb36d4a   8 months ago    226MB
easzlab/pause                                        3.9       78d53e70b442   9 months ago    744kB
easzlab.io.local:5000/easzlab/pause                  3.9       78d53e70b442   9 months ago    744kB
easzlab/k8s-dns-node-cache                           1.22.13   7b3b529c5a5a   10 months ago   64.3MB
easzlab.io.local:5000/easzlab/k8s-dns-node-cache     1.22.13   7b3b529c5a5a   10 months ago   64.3MB
kubernetesui/dashboard                               v2.7.0    07655ddf2eeb   10 months ago   246MB
easzlab.io.local:5000/kubernetesui/dashboard         v2.7.0    07655ddf2eeb   10 months ago   246MB
kubernetesui/metrics-scraper                         v1.0.8    115053965e86   14 months ago   43.8MB
easzlab.io.local:5000/kubernetesui/metrics-scraper   v1.0.8    115053965e86   14 months ago   43.8MB
coredns/coredns                                      1.9.3     5185b96f0bec   14 months ago   48.8MB
easzlab.io.local:5000/coredns/coredns                1.9.3     5185b96f0bec   14 months ago   48.8MB
easzlab.io.local:5000/easzlab/metrics-server         v0.5.2    f965999d664b   20 months ago   64.3MB
easzlab/metrics-server                               v0.5.2    f965999d664b   20 months ago   64.3MB

2.創(chuàng)建集群

準(zhǔn)備ssh免密登錄

root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# apt install sshpass
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# ssh-keygen 
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# cat key.sh 
#!/bin/bash
IP="
172.20.20.101
172.20.20.102
172.20.20.103
172.20.20.111
172.20.20.112
172.20.20.113
172.20.20.121
172.20.20.122
172.20.20.123
"
for node in ${IP};do
        sshpass -p root1234 ssh-copy-id -o StrictHostKeyChecking=no ${node}
        echo "${node} ssh-gen ok"
        ssh ${node} ln -sv /usr/bin/python3 /usr/bin/python
        echo "${node} python ok"
done
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# bash key.sh 
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# ln -sf /usr/bin/python3 /usr/bin/python
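
Before moving on, it is worth confirming the passwordless setup actually works; a quick hypothetical check (not part of the original run) that should print every hostname without prompting for a password:

root@k8s-deploy:~# for node in 172.20.20.101 172.20.20.102 172.20.20.103 172.20.20.111 172.20.20.112 172.20.20.113 172.20.20.121 172.20.20.122 172.20.20.123; do ssh -o BatchMode=yes ${node} hostname; done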

創(chuàng)建新集群

root@k8s-deploy:/etc/kubeasz# ./ezctl new k8s-cluster1
2023-08-02 07:12:59 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-cluster1
2023-08-02 07:12:59 DEBUG set versions
2023-08-02 07:12:59 DEBUG cluster k8s-cluster1: files successfully created.
2023-08-02 07:12:59 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-cluster1/hosts'
2023-08-02 07:12:59 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-cluster1/config.yml'

Edit the hosts file on all node and master hosts, adding a name-resolution entry for the Harbor registry.

root@k8s-master1:~# echo "172.20.20.91 harbor.zhao.net" >> /etc/hosts

調(diào)整下面相關(guān)配置

root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# docker login harbor.zhao.net    # use the Harbor registry built earlier
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# docker tag easzlab/pause:3.9 harbor.zhao.net/baseimages/pause:3.9
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# docker push harbor.zhao.net/baseimages/pause:3.9
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# vim config.yml
# [containerd] base (sandbox) container image
SANDBOX_IMAGE: "harbor.zhao.net/baseimages/pause:3.9"   # pushed to the local registry above, so later pulls stay local

# maximum number of pods per node
MAX_PODS: 500

dns_install: "no"
ENABLE_LOCAL_DNS_CACHE: false
metricsserver_install: "no"
dashboard_install: "no"
prom_install: "no"

root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1# vim hosts   
[etcd]
172.20.20.121
172.20.20.122
172.20.20.123

[kube_master]
172.20.20.101 k8s_nodename='master-01'
172.20.20.102 k8s_nodename='master-02'

[kube_node]
172.20.20.111 k8s_nodename='worker-01'
172.20.20.112 k8s_nodename='worker-02'

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-62767"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="zhao.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"

創(chuàng)建證書和環(huán)境準(zhǔn)備

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 01

Install the etcd cluster.

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 02

When this finishes, verify the cluster state; the etcd cluster is working normally when all three members report healthy.

root@k8s-etcd2:~# export NODEIPS="172.20.20.121 172.20.20.122 172.20.20.123"
root@k8s-etcd2:~# for ip in ${NODEIPS};do /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health;done
https://172.20.20.121:2379 is healthy: successfully committed proposal: took = 17.122232ms
https://172.20.20.122:2379 is healthy: successfully committed proposal: took = 9.518915ms
https://172.20.20.123:2379 is healthy: successfully committed proposal: took = 11.88871ms
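
The same certificate flags also work with endpoint status, which additionally shows the DB size, the current leader, and the raft term; a sketch reusing the NODEIPS variable from above:

root@k8s-etcd2:~# for ip in ${NODEIPS};do /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint status -w table;done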

Install the container runtime.

root@k8s-deploy:/etc/kubeasz# vim roles/containerd/templates/config.toml.j2   # add the fields below after the {% endif %} at line 165, so images are pulled from the local Harbor registry, which simplifies later maintenance
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.zhao.net"]
          endpoint = ["https://harbor.zhao.net"]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.zhao.net".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.zhao.net".auth]
          username = "admin"
          password = "123456"
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 03
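
After step 03 completes, one way to confirm the Harbor mirror and auth settings in config.toml took effect is to pull the sandbox image through the CRI on a node; a hedged check, assuming crictl is on the node's PATH:

root@k8s-node1:~# crictl pull harbor.zhao.net/baseimages/pause:3.9
root@k8s-node1:~# crictl images | grep pause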

Install the kube_master nodes.

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 04
root@k8s-deploy:/etc/kubeasz# kubectl get node
NAME        STATUS                     ROLES    AGE   VERSION
master-01   Ready,SchedulingDisabled   master   22s   v1.25.9
master-02   Ready,SchedulingDisabled   master   22s   v1.25.9

Install the kube_node nodes.

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 05
root@k8s-deploy:/etc/kubeasz# kubectl get node
NAME        STATUS                     ROLES    AGE   VERSION
master-01   Ready,SchedulingDisabled   master   34m   v1.25.9
master-02   Ready,SchedulingDisabled   master   34m   v1.25.9
worker-01   Ready                      node     58s   v1.25.9
worker-02   Ready                      node     58s   v1.25.9

安裝網(wǎng)絡(luò)組件

root@k8s-deploy:/etc/kubeasz# docker tag calico/node:v3.24.5 harbor.zhao.net/baseimages/calico-node:v3.24.5
root@k8s-deploy:/etc/kubeasz# docker push harbor.zhao.net/baseimages/calico-node:v3.24.5
root@k8s-deploy:/etc/kubeasz# docker tag calico/cni:v3.24.5 harbor.zhao.net/baseimages/calico-cni:v3.24.5
root@k8s-deploy:/etc/kubeasz# docker push harbor.zhao.net/baseimages/calico-cni:v3.24.5
root@k8s-deploy:/etc/kubeasz# docker tag calico/kube-controllers:v3.24.5 harbor.zhao.net/baseimages/calico-kube-controllers:v3.24.5
root@k8s-deploy:/etc/kubeasz# docker push harbor.zhao.net/baseimages/calico-kube-controllers:v3.24.5
root@k8s-deploy:/etc/kubeasz# vim roles/calico/templates/calico-v3.24.yaml.j2   # replace the three images pushed above one by one, so later maintenance pulls straight from the local registry without needing external network access
 
        - name: install-cni
          image: harbor.zhao.net/baseimages/calico-cni:v3.24.5

        - name: "mount-bpffs"
          image: harbor.zhao.net/baseimages/calico-node:v3.24.5

        - name: "mount-bpffs"
          image: harbor.zhao.net/baseimages/calico-node:v3.24.5

        - name: calico-kube-controllers
          image: harbor.zhao.net/baseimages/calico-kube-controllers:v3.24.5

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 06
root@k8s-deploy:/etc/kubeasz# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-8554c8c6f9-lmxbc   1/1     Running   0          38s
kube-system   calico-node-4szms                          1/1     Running   0          38s
kube-system   calico-node-c9v5d                          1/1     Running   0          38s
kube-system   calico-node-nr5rt                          1/1     Running   0          38s
kube-system   calico-node-xnhtc                          1/1     Running   0          38s
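
Optionally, calicoctl (shipped with the kubeasz ext-bin package; whether it landed in the binaries directory on your node is an assumption to verify) can confirm the BGP mesh between nodes is established:

root@k8s-master1:~# calicoctl node status   # each IPv4 BGP peer should show state "up" / "Established"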

After the installation completes, verify the cluster.

root@k8s-deploy:/etc/kubeasz# kubectl create ns myserver
namespace/myserver created
root@k8s-deploy:/etc/kubeasz# kubectl run net-test1 --image=centos:7.9.2009 sleep 10000000 -n myserver
pod/net-test1 created
root@k8s-deploy:/etc/kubeasz# kubectl run net-test2 --image=centos:7.9.2009 sleep 10000000 -n myserver
pod/net-test2 created
root@k8s-deploy:/etc/kubeasz# kubectl run net-test3 --image=centos:7.9.2009 sleep 10000000 -n myserver
pod/net-test3 created

root@k8s-deploy:/etc/kubeasz# kubectl get pod -n myserver -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
net-test1   1/1     Running             0          54s   10.200.36.66   worker-01   <none>           <none>
net-test2   0/1     ContainerCreating   0          38s   <none>         worker-02   <none>           <none>
net-test3   0/1     ContainerCreating   0          33s   <none>         worker-02   <none>           <none>
root@k8s-deploy:/etc/kubeasz# kubectl exec -it net-test1 bash -n myserver
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# 
[root@net-test1 /]# 
[root@net-test1 /]# ping 223.6.6.6
PING 223.6.6.6 (223.6.6.6) 56(84) bytes of data.
64 bytes from 223.6.6.6: icmp_seq=1 ttl=127 time=6.92 ms
64 bytes from 223.6.6.6: icmp_seq=2 ttl=127 time=6.51 ms
64 bytes from 223.6.6.6: icmp_seq=3 ttl=127 time=5.67 ms
^C
--- 223.6.6.6 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.678/6.370/6.921/0.521 ms
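
The ping above proves external connectivity; cross-node pod-to-pod traffic is what actually exercises Calico, so a similar hypothetical check pings another pod's IP (substitute the address shown by kubectl get pod -o wide; the placeholder below is not a real value):

root@k8s-deploy:/etc/kubeasz# kubectl get pod -n myserver -o wide   # note the IP of net-test2 or net-test3
root@k8s-deploy:/etc/kubeasz# kubectl exec -it net-test1 -n myserver -- ping -c 3 <other-pod-IP>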

3. Scaling out

Add a master.

root@k8s-deploy:/etc/kubeasz# ./ezctl add-master k8s-cluster1 172.20.20.103
root@k8s-deploy:/etc/kubeasz# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
172.20.20.103   Ready,SchedulingDisabled   master   50s   v1.25.9
master-01       Ready,SchedulingDisabled   master   62m   v1.25.9
master-02       Ready,SchedulingDisabled   master   62m   v1.25.9
worker-01       Ready                      node     28m   v1.25.9
worker-02       Ready                      node     28m   v1.25.9

Add a node.

root@k8s-deploy:/etc/kubeasz# ./ezctl add-node k8s-cluster1 172.20.20.113
root@k8s-deploy:/etc/kubeasz# kubectl get node
NAME            STATUS                     ROLES    AGE     VERSION
172.20.20.103   Ready,SchedulingDisabled   master   4m54s   v1.25.9
172.20.20.113   Ready                      node     43s     v1.25.9
master-01       Ready,SchedulingDisabled   master   66m     v1.25.9
master-02       Ready,SchedulingDisabled   master   66m     v1.25.9
worker-01       Ready                      node     32m     v1.25.9
worker-02       Ready                      node     32m     v1.25.9

4. Upgrading Kubernetes

Download the binary packages from: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#downloads-for-v12512

root@k8s-deploy:/usr/local/src# ls
kubernetes-client-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
kubernetes-node-linux-amd64.tar.gz    kubernetes.tar.gz
root@k8s-deploy:/usr/local/src# tar xf kubernetes-client-linux-amd64.tar.gz 
root@k8s-deploy:/usr/local/src# tar xf kubernetes-node-linux-amd64.tar.gz 
root@k8s-deploy:/usr/local/src# tar xf kubernetes-server-linux-amd64.tar.gz 
root@k8s-deploy:/usr/local/src# tar xf kubernetes.tar.gz 

Edit kube-lb.conf on all of the node hosts.

root@k8s-node1:/etc/containerd# vim /etc/kube-lb/conf/kube-lb.conf
upstream backend {
        server 172.20.20.103:6443    max_fails=2 fail_timeout=3s;
       # server 172.20.20.101:6443    max_fails=2 fail_timeout=3s;   # comment out master1 on all three nodes
        server 172.20.20.102:6443    max_fails=2 fail_timeout=3s;
    }
root@k8s-node1:/etc/containerd# systemctl reload kube-lb.service
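
Since this edit has to be repeated on every node, a hypothetical loop from the deploy host can comment out the master being upgraded and reload kube-lb in one pass (a sketch assuming the conf layout shown above):

root@k8s-deploy:~# for node in 172.20.20.111 172.20.20.112 172.20.20.113; do ssh ${node} "sed -i '/172.20.20.101:6443/s/^/#/' /etc/kube-lb/conf/kube-lb.conf && systemctl reload kube-lb.service"; done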

Replace the Kubernetes binaries on the master1 node.

關(guān)閉master1的相關(guān)進(jìn)程
root@k8s-master1:~# systemctl stop kube-apiserver.service kube-controller-manager.service kube-scheduler.service kube-proxy.service kubelet.service 

# push the new binaries from the deploy node
root@k8s-deploy:/usr/local/src/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl 172.20.20.101:/usr/local/bin
kube-apiserver                                                                         100%  120MB  45.2MB/s   00:02    
kube-controller-manager                                                                100%  109MB  41.5MB/s   00:02    
kube-scheduler                                                                         100%   45MB  45.2MB/s   00:01    
kube-proxy                                                                             100%   40MB  48.7MB/s   00:00    
kubelet                                                                                100%  110MB  40.6MB/s   00:02    
kubectl                                                                                100%   44MB  43.3MB/s   00:01  

# check the binary version on master1 and restart the services
root@k8s-master1:~# /usr/local/bin/kube-apiserver --version
Kubernetes v1.25.12
root@k8s-master1:~# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service kube-proxy.service kubelet.service

# verify from the deploy node: master1 has been upgraded (to run this command on a master or node host, first copy the files under /root/.kube/ on the deploy server to the same location on that host)
root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl get node
NAME            STATUS                     ROLES    AGE    VERSION
172.20.20.103   Ready,SchedulingDisabled   master   47m    v1.25.9
172.20.20.113   Ready                      node     43m    v1.25.9
master-01       Ready,SchedulingDisabled   master   109m   v1.25.12
master-02       Ready,SchedulingDisabled   master   109m   v1.25.9
worker-01       Ready                      node     75m    v1.25.9
worker-02       Ready                      node     75m    v1.25.9

Upgrade the other two master nodes the same way: in /etc/kube-lb/conf/kube-lb.conf on the nodes, comment out the master about to be upgraded, reload the kube-lb service, then stop the services on that master, replace the binaries, and restart.

root@k8s-node1:~# vim /etc/kube-lb/conf/kube-lb.conf
stream {
    upstream backend {
       # server 172.20.20.103:6443    max_fails=2 fail_timeout=3s;     # comment out the masters about to be upgraded
        server 172.20.20.101:6443    max_fails=2 fail_timeout=3s;
       # server 172.20.20.102:6443    max_fails=2 fail_timeout=3s;
    }
root@k8s-node1:~# systemctl reload kube-lb.service

root@k8s-master2:~# systemctl stop kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl 172.20.20.102:/usr/local/bin/

root@k8s-master2:~# systemctl start kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
172.20.20.103   Ready,SchedulingDisabled   master   16h   v1.25.12
172.20.20.113   Ready                      node     16h   v1.25.9
master-01       Ready,SchedulingDisabled   master   17h   v1.25.12
master-02       Ready,SchedulingDisabled   master   17h   v1.25.12
worker-01       Ready                      node     17h   v1.25.9
worker-02       Ready                      node     17h   v1.25.9

Upgrade the node hosts.
Drain the pods.

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl drain worker-01 --ignore-daemonsets --force  # use the node NAME from the kubectl get node output below, not the node's IP address, otherwise the command errors out; worker-01 here is the node1 host
root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
172.20.20.103   Ready,SchedulingDisabled   master   17h   v1.25.12
172.20.20.113   Ready                      node     17h   v1.25.9
master-01       Ready,SchedulingDisabled   master   18h   v1.25.12
master-02       Ready,SchedulingDisabled   master   18h   v1.25.12
worker-01       Ready,SchedulingDisabled   node     18h   v1.25.9
worker-02       Ready                      node     18h   v1.25.9  

Stop the services on the node.

root@k8s-node1:~# systemctl stop kubelet.service kube-proxy.service

Copy the node binaries over.

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# scp kubelet kube-proxy kubectl 172.20.20.111:/usr/local/bin

啟動(dòng)node節(jié)點(diǎn)服務(wù)

root@k8s-node1:~# systemctl start kubelet.service kube-proxy.service

Uncordon the node (re-enable scheduling).

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl uncordon worker-01  # again, use the NAME from the kubectl get node output; using the IP address directly errors out
root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
172.20.20.103   Ready,SchedulingDisabled   master   17h   v1.25.12
172.20.20.113   Ready                      node     17h   v1.25.9
master-01       Ready,SchedulingDisabled   master   19h   v1.25.12
master-02       Ready,SchedulingDisabled   master   19h   v1.25.12
worker-01       Ready                      node     18h   v1.25.12
worker-02       Ready                      node     18h   v1.25.9

然后用同樣的步驟升級另外兩個(gè)node

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
172.20.20.103   Ready,SchedulingDisabled   master   18h   v1.25.12
172.20.20.113   Ready                      node     17h   v1.25.12
master-01       Ready,SchedulingDisabled   master   19h   v1.25.12
master-02       Ready,SchedulingDisabled   master   19h   v1.25.12
worker-01       Ready                      node     18h   v1.25.12
worker-02       Ready                      node     18h   v1.25.12

同時(shí)更新kubeasz中的文件版本沸呐,后面增加節(jié)點(diǎn)可以直接使用新版本醇王,不需要在單獨(dú)升級

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# \cp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl /etc/kubeasz/bin
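
A quick check that the kubeasz copies were really replaced:

root@k8s-deploy:/etc/kubeasz# ./bin/kube-apiserver --version   # should now report Kubernetes v1.25.12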

5. Upgrading containerd

As above, drain the pods first, then stop the services, then upgrade.
Download containerd and runc, then unpack them.

root@k8s-deploy:/usr/local/src# tar xf containerd-1.6.22-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src/bin# mv runc.amd64 runc
root@k8s-deploy:/usr/local/src/bin# tar xf crictl-v1.27.1-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src/bin# tar xf nerdctl-1.4.0-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src/bin# ls
containerd                        containerd-rootless.sh  containerd-shim-runc-v1  containerd-stress  ctr      runc
containerd-rootless-setuptool.sh  containerd-shim         containerd-shim-runc-v2  crictl             nerdctl
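
Before distributing the files, it can help to verify the unpacked binaries are the expected versions (runc downloaded from GitHub is not executable until chmod); a sketch:

root@k8s-deploy:/usr/local/src/bin# chmod +x runc
root@k8s-deploy:/usr/local/src/bin# ./containerd --version
root@k8s-deploy:/usr/local/src/bin# ./runc --version
root@k8s-deploy:/usr/local/src/bin# ./crictl --version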

Copy the upgrade files to the target paths.
Replace the old versions in kubeasz.

root@k8s-deploy:/usr/local/src/bin# \cp ./* /etc/kubeasz/bin/containerd-bin/  # note: with the 1.25 layout, crictl and nerdctl must be copied to /etc/kubeasz/bin/ instead

Stop the services on the node.

root@k8s-node1:/usr/local/src# systemctl disable kubelet kube-proxy containerd
root@k8s-node1:/usr/local/src# reboot

Replace the old versions on the node.

root@k8s-deploy:/usr/local/src/bin# scp ./* 172.20.20.111:/usr/local/src  # file locations in 1.25 differ slightly from 1.24
root@k8s-node1:/usr/local/src# mv containerd* ctr runc /usr/local/bin/containerd-bin/
root@k8s-node1:/usr/local/src# mv crictl nerdctl /usr/local/bin/

啟動(dòng)服務(wù)

root@k8s-node1:/usr/local/src# systemctl start kubelet kube-proxy containerd
root@k8s-node1:/usr/local/src# systemctl enable kubelet kube-proxy containerd
root@k8s-deploy:~# kubectl get node -o wide
NAME            STATUS                     ROLES    AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
172.20.20.103   Ready,SchedulingDisabled   master   22h   v1.25.12   172.20.20.103   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.20
172.20.20.113   Ready                      node     22h   v1.25.12   172.20.20.113   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.20
master-01       Ready,SchedulingDisabled   master   23h   v1.25.12   172.20.20.101   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.20
master-02       Ready,SchedulingDisabled   master   23h   v1.25.12   172.20.20.102   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.20
worker-01       Ready                      node     22h   v1.25.12   172.20.20.111   <none>        Ubuntu 20.04.3 LTS   5.4.0-155-generic   containerd://1.6.22
worker-02       Ready                      node     22h   v1.25.12   172.20.20.112   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.20

然后用同樣的方法升級其他節(jié)點(diǎn)

root@k8s-deploy:~# kubectl get node -o wide
NAME            STATUS                     ROLES    AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
172.20.20.103   Ready,SchedulingDisabled   master   28h   v1.25.12   172.20.20.103   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.22
172.20.20.113   Ready                      node     28h   v1.25.12   172.20.20.113   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.22
master-01       Ready,SchedulingDisabled   master   29h   v1.25.12   172.20.20.101   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.22
master-02       Ready,SchedulingDisabled   master   29h   v1.25.12   172.20.20.102   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.22
worker-01       Ready                      node     29h   v1.25.12   172.20.20.111   <none>        Ubuntu 20.04.3 LTS   5.4.0-155-generic   containerd://1.6.22
worker-02       Ready                      node     29h   v1.25.12   172.20.20.112   <none>        Ubuntu 20.04.3 LTS   5.4.0-153-generic   containerd://1.6.22

II. etcd backup and restore, based on snapshots

1. Back up the data

Back up on an etcd node.

root@k8s-etcd2:~# etcdctl snapshot save /tmp/test.db
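
On a TLS-enabled etcd (as deployed here), the save command generally needs the same certificate flags used in the health check earlier; a fuller sketch, assuming the kubeasz certificate paths:

root@k8s-etcd2:~# ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://172.20.20.122:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem snapshot save /tmp/test.db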

恢復(fù)數(shù)據(jù)

root@k8s-etcd2:/tmp# etcdctl snapshot restore test.db --data-dir=/opt/etcd/  # note: /opt/etcd/ must not already contain data

Change the etcd data directory.

root@k8s-etcd2:/tmp# vim /etc/systemd/system/etcd.service
 --data-dir=/var/lib/etcd \   # change this to /opt/etcd/, or clear /var/lib/etcd and restore the data into that path
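
Note that on a three-member cluster the restore has to be run on every member with its own name and the full member list before restarting etcd. A sketch for the first member; the etcd-<IP> member names are an assumption, so check them against --name in /etc/systemd/system/etcd.service:

root@k8s-etcd1:~# systemctl stop etcd && mv /var/lib/etcd /var/lib/etcd.bak   # the target data dir must be empty
root@k8s-etcd1:~# etcdctl snapshot restore /tmp/test.db --name=etcd-172.20.20.121 --initial-cluster=etcd-172.20.20.121=https://172.20.20.121:2380,etcd-172.20.20.122=https://172.20.20.122:2380,etcd-172.20.20.123=https://172.20.20.123:2380 --initial-advertise-peer-urls=https://172.20.20.121:2380 --data-dir=/var/lib/etcd
root@k8s-etcd1:~# systemctl start etcd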

Back up the data from the deploy (management) node.

# create 3 test pods before the backup
root@k8s-deploy:/etc/kubeasz# kubectl run net-test1 --image=centos:7.9.2009 sleep 1000000000 -n myserver
root@k8s-deploy:/etc/kubeasz# kubectl run net-test2 --image=centos:7.9.2009 sleep 1000000000 -n myserver
root@k8s-deploy:/etc/kubeasz# kubectl run net-test3 --image=centos:7.9.2009 sleep 1000000000 -n myserver
# back up
root@k8s-deploy:/etc/kubeasz# ./ezctl backup k8s-cluster1
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1/backup# ls
snapshot_202308030943.db  snapshot.db
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1/backup# kubectl delete pod net-test3 -n myserver # delete one pod
root@k8s-deploy:/etc/kubeasz# ./ezctl backup k8s-cluster1  # back up again
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1/backup# ll
total 10640
drwxr-xr-x 2 root root    4096 Aug  3 09:47 ./
drwxr-xr-x 5 root root    4096 Aug  2 12:50 ../
-rw------- 1 root root 3624992 Aug  3 09:43 snapshot_202308030943.db
-rw------- 1 root root 3624992 Aug  3 09:47 snapshot_202308030947.db
-rw------- 1 root root 3624992 Aug  3 09:47 snapshot.db

Restore the data from the deploy node.

root@k8s-deploy:/etc/kubeasz# kubectl run net-test3 --image=centos:7.9.2009 sleep 1000000000 -n myserver
root@k8s-deploy:/etc/kubeasz# ./ezctl restore k8s-cluster1
root@k8s-deploy:~# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS       AGE
kube-system            calico-kube-controllers-8554c8c6f9-lmxbc     1/1     Running   3 (5h4m ago)   29h
kube-system            calico-node-29zws                            1/1     Running   0              29h
kube-system            calico-node-9nzxl                            1/1     Running   1 (7h9m ago)   29h
kube-system            calico-node-kwmcd                            1/1     Running   0              29h
kube-system            calico-node-mkprg                            1/1     Running   0              29h
kube-system            calico-node-nwn8g                            1/1     Running   0              29h
kube-system            calico-node-shpht                            1/1     Running   0              29h
myserver               net-test1                                    1/1     Running   0              5h20m
myserver               net-test2                                    1/1     Running   0              29h

然后恢復(fù)到三個(gè)pod

# copy the three-pod backup over snapshot.db (ezctl restore reads snapshot.db)
root@k8s-deploy:/etc/kubeasz/clusters/k8s-cluster1/backup# cp -rf snapshot_202308030943.db snapshot.db
root@k8s-deploy:/etc/kubeasz# ./ezctl restore k8s-cluster1
root@k8s-deploy:~# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS        AGE
kube-system            calico-kube-controllers-8554c8c6f9-lmxbc     1/1     Running   3 (5h7m ago)    29h
kube-system            calico-node-29zws                            1/1     Running   0               29h
kube-system            calico-node-9nzxl                            1/1     Running   1 (7h12m ago)   29h
kube-system            calico-node-kwmcd                            1/1     Running   0               29h
kube-system            calico-node-mkprg                            1/1     Running   0               29h
kube-system            calico-node-nwn8g                            1/1     Running   0               29h
kube-system            calico-node-shpht                            1/1     Running   0               29h
myserver               net-test1                                    1/1     Running   0               5h23m
myserver               net-test2                                    1/1     Running   0               29h
myserver               net-test3                                    1/1     Running   0               29h

III. CoreDNS name resolution flow and Corefile configuration

1. Name resolution flow

[Figure: CoreDNS name resolution flow]

2. Install CoreDNS

# reuse the yaml file from the Kubernetes tarball downloaded during the upgrade
root@k8s-deploy:/usr/local/src/kubernetes/cluster/addons/dns/coredns# cp coredns.yaml.base /root/yaml
root@k8s-deploy:~/yaml# mv coredns.yaml.base coredns.yaml

Check the cluster DNS domain in the hosts file.

root@k8s-deploy:~# vim /etc/kubeasz/clusters/k8s-cluster1/hosts 
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="zhao.local"

進(jìn)入pod查看nameserver地址

root@k8s-deploy:/etc/kubeasz# kubectl exec -it net-test3 bash -n myserver
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test3 /]# cd /etc
[root@net-test3 etc]# cat resolv.conf 
search myserver.svc.zhao.local svc.zhao.local zhao.local
nameserver 10.100.0.2
options ndots:5

Pull the CoreDNS image and push it to the local Harbor.

root@k8s-deploy:~/yaml# docker pull coredns/coredns:1.9.3
root@k8s-deploy:~/yaml# docker tag coredns/coredns:1.9.3 harbor.zhao.net/baseimages/coredns:1.9.3
root@k8s-deploy:~/yaml# docker push harbor.zhao.net/baseimages/coredns:1.9.3

Edit the yaml file.

root@k8s-deploy:~/yaml# vim coredns.yaml
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
#change to the cluster domain found above
        kubernetes zhao.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
#enable multiple replicas
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 2  # run 2 replicas
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
#change the image to the local registry copy
      containers:
      - name: coredns
        image: harbor.zhao.net/baseimages/coredns:1.9.3
        imagePullPolicy: IfNotPresent

     #設(shè)置資源限制(生產(chǎn)環(huán)境配置盡量配置高點(diǎn))
       resources:
          limits:
            memory: 256Mi
            cpu: 200m
#set clusterIP to the nameserver address found inside the pod
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.2

執(zhí)行yaml文件

root@k8s-deploy:~/yaml# kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the kube-dns service.

root@k8s-deploy:~/yaml# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default                kubernetes                  ClusterIP   10.100.0.1       <none>        443/TCP                      30h
kube-system            kube-dns                    ClusterIP   10.100.0.2       <none>        53/UDP,53/TCP,9153/TCP       3h12m

Check the CoreDNS replica count.

root@k8s-deploy:~/yaml# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS        AGE
kube-system            calico-kube-controllers-8554c8c6f9-lmxbc     1/1     Running   3 (5h42m ago)   30h
kube-system            calico-node-29zws                            1/1     Running   0               29h
kube-system            calico-node-9nzxl                            1/1     Running   1 (7h47m ago)   29h
kube-system            calico-node-kwmcd                            1/1     Running   0               29h
kube-system            calico-node-mkprg                            1/1     Running   0               29h
kube-system            calico-node-nwn8g                            1/1     Running   0               29h
kube-system            calico-node-shpht                            1/1     Running   0               29h
kube-system            coredns-7979d89cf5-hszw7                     1/1     Running   0               3h4m
kube-system            coredns-7979d89cf5-tm662                     1/1     Running   0               3h6m
myserver               net-test1                                    1/1     Running   0               5h58m
myserver               net-test2                                    1/1     Running   0               30h
myserver               net-test3                                    1/1     Running   0               30h

進(jìn)入pod測試

root@k8s-deploy:~# kubectl exec -it net-test1 bash -n myserver
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# ping www.baidu.com
PING www.a.shifen.com (14.119.104.254) 56(84) bytes of data.
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=1 ttl=127 time=37.1 ms
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=2 ttl=127 time=37.2 ms
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=3 ttl=127 time=37.3 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 37.158/37.245/37.353/0.237 ms
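
External names resolve fine; resolution of cluster-internal service names is the part CoreDNS actually adds, so a hypothetical check from the same pod (the centos image may first need bind-utils installed for nslookup):

[root@net-test1 /]# yum install -y bind-utils
[root@net-test1 /]# nslookup kubernetes.default.svc.zhao.local   # should return 10.100.0.1 via nameserver 10.100.0.2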

3. Corefile configuration

errors: log errors to standard output.
health: report the health of CoreDNS at http://localhost:8080/health.
ready: listen on port 8181; once the coredns plugins are ready, requests to this endpoint return 200 OK.
kubernetes: resolve DNS queries based on Kubernetes service names and return the records to the client.
prometheus: expose CoreDNS metrics in Prometheus key-value format at http://localhost:9153/metrics.
forward: forward any query not in the cluster domain to a predefined upstream server, e.g. /etc/resolv.conf or an IP such as 223.6.6.6.
cache: enable caching of service lookups; the value is in seconds.
loop: detect resolution loops, e.g. CoreDNS forwarding to an internal DNS server that in turn forwards back to CoreDNS; if a loop is detected, the CoreDNS process is forcibly terminated (Kubernetes then restarts it).
reload: watch the Corefile for changes; after the configmap is edited, the config is gracefully reloaded, by default about 2 minutes later.
loadbalance: round-robin DNS resolution; if a name has multiple records, rotate through them.
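
Putting these together, here is a representative Corefile for this deployment (cluster domain zhao.local); a sketch of what the configmap ends up containing, not a verbatim copy:

.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes zhao.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}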

IV. Using the dashboard

1. Install the dashboard

Pull the images and push them to the local registry.
Download page: https://github.com/kubernetes/dashboard/releases

root@k8s-deploy:~/yaml/dashboard-2.7# docker pull kubernetesui/dashboard:v2.7.0
root@k8s-deploy:~/yaml/dashboard-2.7# docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
root@k8s-deploy:~/yaml/dashboard-2.7# docker tag kubernetesui/dashboard:v2.7.0 harbor.zhao.net/baseimages/dashboard:v2.7.0
root@k8s-deploy:~/yaml/dashboard-2.7# docker push harbor.zhao.net/baseimages/dashboard:v2.7.0
root@k8s-deploy:~/yaml/dashboard-2.7# docker tag docker.io/kubernetesui/metrics-scraper:v1.0.8 harbor.zhao.net/baseimages/metrics-scraper:v1.0.8
root@k8s-deploy:~/yaml/dashboard-2.7# docker push harbor.zhao.net/baseimages/metrics-scraper:v1.0.8

Edit the yaml file.

root@k8s-deploy:~/yaml/dashboard-2.7# vim dashhboard-2.76.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
#默認(rèn)僅限k8s內(nèi)部訪問需要修改成NodePort
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
#the nodePort must fall within the NODE_PORT_RANGE defined in /etc/kubeasz/clusters/k8s-cluster1/hosts
      nodePort: 30000

      containers:
        - name: kubernetes-dashboard
          image: harbor.zhao.net/baseimages/dashboard:v2.7.0
          imagePullPolicy: Always 

      containers:
        - name: dashboard-metrics-scraper
          image: harbor.zhao.net/baseimages/metrics-scraper:v1.0.8

執(zhí)行yaml文件

root@k8s-deploy:~/yaml/dashboard-2.7# kubectl apply -f dashhboard-2.76.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Access the web page at the NodePort, e.g. https://172.20.20.111:30000.

[Screenshot: dashboard login page]

創(chuàng)建user和secret

root@k8s-deploy:~/yaml/dashboard-2.7# cat  admin-user.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

root@k8s-deploy:~/yaml/dashboard-2.7# cat admin-secret.yaml 
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin-user
  namespace: kubernetes-dashboard 
  annotations:
    kubernetes.io/service-account.name: "admin-user"

執(zhí)行user和secret的yaml文件

root@k8s-deploy:~/yaml/dashboard-2.7# kubectl apply -f admin-user.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@k8s-deploy:~/yaml/dashboard-2.7# kubectl get secrets -A
NAMESPACE              NAME                              TYPE     DATA   AGE
kube-system            calico-etcd-secrets               Opaque   3      27h
kubernetes-dashboard   kubernetes-dashboard-certs        Opaque   0      2m5s
kubernetes-dashboard   kubernetes-dashboard-csrf         Opaque   1      2m5s
kubernetes-dashboard   kubernetes-dashboard-key-holder   Opaque   2      2m5s
root@k8s-deploy:~/yaml/dashboard-2.7# kubectl apply -f admin-secret.yaml 
secret/dashboard-admin-user created
root@k8s-deploy:~/yaml/dashboard-2.7# kubectl get secrets -A |grep admin
kubernetes-dashboard   dashboard-admin-user              kubernetes.io/service-account-token

View the token of dashboard-admin-user.

root@k8s-deploy:~/yaml/dashboard-2.7# kubectl describe secrets dashboard-admin-user -n kubernetes-dashboard
Name:         dashboard-admin-user
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: eecfa718-4f46-4412-a152-5dd8cb266d

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5STGZ2eFRiZ1hPdUtWOTczcm1NVkF2cUlSSXZxRWtdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3ByZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaWVydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pbWFjY291bnQudWlkIjoiZWVjZmE3MTgtNGY0Ni00NDEyLWExNTItNWRkOGNiMjY2ZDAwIiwic3ViIjoic3lzbmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.GFBHXfMCeYUm5scvpXvGJIl631YzZU5L6_pLo_S4EYGS2nbys4vPDucrVH4QSMNGx0baFdMRRWcsvWAauGtwPQSu38LeQdSpE7C82n3cHzQf5Kr-lpBJqceUX-ctfqeWq_azfgqBb_140NGCac1NcrcmnNFJoRgU_6u1O7Sqzw0dDTpwRXQgEy-DDjs7Jjr4l556YgQYPEePc1zGbGtqBOFw49uMSx3hTvfH8TeOytw48B0367C3Kw
ca.crt:     1310 bytes
namespace:  20 bytes
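
Equivalently, the token can be extracted directly from the secret, which is handier in scripts:

root@k8s-deploy:~/yaml/dashboard-2.7# kubectl -n kubernetes-dashboard get secret dashboard-admin-user -o jsonpath='{.data.token}' | base64 -d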

Log in to the dashboard.


[Screenshot: dashboard after login]