Setting Up and Deploying a K8s Environment

Setup Notes

Prepare at least three machines. If your computer is powerful enough, you can run three virtual machines; if you are short on CPU or memory, you can rent instances from Alibaba Cloud or Tencent Cloud instead.

k8s can be set up with a single master or multiple masters. For learning, three machines are enough: one master and two nodes.

Requirements for each machine

  • 2 or more CPU cores
  • at least 2 GB of memory
  • the three machines must be able to reach each other over the network
  • firewall disabled, otherwise you will hit many problems later and have to open ports one by one
  • SELinux disabled
  • swap partition turned off; the machines are underpowered as it is, so disable the virtual-memory partition
  • clocks synchronized
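The resource checks above can be sketched as a small preflight script (a sketch only; the thresholds come from the list, and `require_ge` is a helper name invented here):

```shell
#!/bin/sh
# Preflight sketch for the requirements above: 2+ CPU cores, 2+ GB of RAM.
# require_ge is a hypothetical helper: prints OK or WARN for "actual >= minimum".
require_ge() {
  if [ "$1" -ge "$2" ]; then
    echo "OK: $3=$1"
  else
    echo "WARN: $3=$1 (need >= $2)"
  fi
}

require_ge "$(nproc)" 2 cpus
require_ge "$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)" 2048 mem_mb
```

Run it on each machine before starting; swap status can be checked separately with free -m as shown later.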

You can also follow the official guide: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Machine preparation

10.0.4.11 node01
10.0.4.13 node02
10.0.4.9 master

Set the hostnames

# run on 10.0.4.9
hostnamectl set-hostname master
# run on 10.0.4.11
hostnamectl set-hostname node01
# run on 10.0.4.13
hostnamectl set-hostname node02

Configure /etc/hosts

Run on every machine

cat >> /etc/hosts <<EOF
10.0.4.9 master
10.0.4.11 node01
10.0.4.13 node02
EOF
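As a quick sanity check (a sketch; `count_host` is a helper name made up here), you can confirm that each node name appears exactly once in the hosts file:

```shell
#!/bin/sh
# count_host: hypothetical helper that counts lines in a hosts file ending
# with the given host name (preceded by whitespace).
count_host() {
  grep -c "[[:space:]]$2\$" "$1" || true
}

for h in master node01 node02; do
  echo "$h: $(count_host /etc/hosts "$h") entry(ies)"
done
```

Duplicate or missing entries here are a common cause of confusing join failures later.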

Network time synchronization

It is best to synchronize the clocks of all the machines to avoid problems later.

Run on every machine.

Check whether ntpdate is available

which ntpdate

# install it if it is missing
yum install ntpdate -y

Set a unified time zone (Asia/Shanghai)

ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
bash -c "echo 'Asia/Shanghai' > /etc/timezone"

Sync the time from an Alibaba NTP server

# sync the time from an Alibaba NTP server
ntpdate ntp1.aliyun.com

Check the current time

[root@node01 ~]# date
Tue Nov  1 00:08:10 CST 2022

Disable SELinux

Run on all nodes; this lets containers read the host file system

# disable temporarily
setenforce 0
# disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# or set it to permissive, which is effectively the same as disabling it
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

If you do not disable the firewall, you need to open the ports the components use to talk to each other:

api-server          8080, 6443
controller-manager  10252
scheduler           10251
kubelet             10250, 10255
etcd                2379, 2380
dns                 53 (tcp/udp)
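If you would rather keep firewalld running, the ports listed above can be opened explicitly instead; a sketch (run on every node; the exact set varies with your version and CNI plugin):

```shell
# open the control-plane and kubelet ports instead of disabling firewalld
for p in 6443 8080 2379-2380 10250 10255 10251 10252; do
  firewall-cmd --permanent --add-port=${p}/tcp
done
# cluster DNS uses both tcp and udp
firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp
# flannel's vxlan backend additionally needs 8472/udp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload
```

For a learning cluster, disabling the firewall as above is simpler and avoids chasing missed ports.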

Turn off the swap partition

Turning off swap improves performance, and by default kubelet will not start while swap is enabled.

# turn off swap temporarily
swapoff -a

# turn it off permanently (comments out the swap line in /etc/fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@node01 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3694         596         435           0        2662        2806
Swap:          1024           4        1020

[root@node01 ~]# swapoff -a
[root@node01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

[root@node01 ~]# free -m   
              total        used        free      shared  buff/cache   available
Mem:           3694         596         434           0        2663        2806
Swap:             0           0           0

Configure the k8s package repository

Configure on all nodes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[k8s]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

If you have a local yum repository, you can point baseurl at a file://dir path for an offline install.

Install kubeadm, kubelet and kubectl

The latest release, 1.24, has removed dockershim (the built-in Docker support), so if you need Docker as the container runtime, pin an older version.

We will use version 1.23.9.

yum install -y kubelet-1.23.9 kubectl-1.23.9 kubeadm-1.23.9

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster.

A reminder from the official docs

kubeadm will not install or manage kubelet or kubectl for you, so you need to make sure their versions match the control plane installed via kubeadm. If you don't, you risk version skew, which can lead to unexpected bugs and problems. However, one minor version of skew between the control plane and the kubelet is supported, as long as the kubelet version never exceeds the API server's. For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 API server, but not the other way around.

Check that the correct version was installed

[root@master k8s]# yum info kubeadm
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Loading mirror speeds from cached hostfile
Installed Packages
Name        : kubeadm
Arch        : x86_64
Version     : 1.23.9
Release     : 0
Size        : 43 M
Repo        : installed
From repo   : k8s
Summary     : Command-line utility for administering a Kubernetes cluster.
URL         : https://kubernetes.io
License     : ASL 2.0
Description : Command-line utility for administering a Kubernetes cluster.

Available Packages
Name        : kubeadm
Arch        : x86_64
Version     : 1.25.3
Release     : 0
Size        : 9.8 M
Repo        : k8s
Summary     : Command-line utility for administering a Kubernetes cluster.
URL         : https://kubernetes.io
License     : ASL 2.0
Description : Command-line utility for administering a Kubernetes cluster.

You can start kubelet now if you like; I started it later on.

systemctl start kubelet
systemctl enable kubelet

Set the cgroup driver

Docker's default cgroup driver is cgroupfs; change it to match what k8s uses.

# edit /etc/docker/daemon.json and add the line below
# this sets docker's cgroup driver to systemd, which is what the docs recommend;
# docker and k8s must use the same cgroup driver, otherwise kubelet will not start
"exec-opts": ["native.cgroupdriver=systemd"],

# restart docker
systemctl daemon-reload
systemctl restart docker
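Putting it together, a minimal /etc/docker/daemon.json could be written like this (a sketch: it is generated as a local file here; if your real daemon.json already has other settings, merge this key in by hand instead of overwriting):

```shell
# write a minimal daemon.json locally with the systemd cgroup driver
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# then move it into place and restart docker as above:
# mv daemon.json /etc/docker/daemon.json
```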

Deploy the master node

apiserver-advertise-address: the master node's address

image-repository: use Alibaba Cloud's mirror registry, otherwise image pulls are very slow

kubernetes-version: the version number

Everything else is left at the default values.

kubeadm init \
--apiserver-advertise-address=10.0.4.9 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.9 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/16
# write down the kubeadm join command that gets printed, so you don't have to dig for it later
kubeadm join 10.0.4.9:6443 --token x22atb.reldvil72yia0ac4 \
        --discovery-token-ca-cert-hash sha256:c32a489c444bf5242543811c1aad5b5925693341699756ea4523a4228da6e5ff
        
# the output also prints these commands, which we will need later
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# if you lose the token, regenerate the join command
kubeadm token create --print-join-command

Second deployment method (optional)

# export the default configuration
kubeadm config print init-defaults > init-kubeadm.conf
# edit the defaults as needed
# init
kubeadm init --config init-kubeadm.conf

Check the version

kubectl version
# error: The connection to the server localhost:8080 was refused - did you specify the right host or port

# we skipped these commands earlier, so run them now
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# run kubectl version again; no error this time

# if other machines need to use kubectl,
# copy $HOME/.kube/config to them

View the nodes

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   36m   v1.23.9

# the status is NotReady; check the logs
tail -f /var/log/messages
# a network plugin needs to be installed
Nov  1 01:06:05 VM-4-9-centos kubelet: E1101 01:06:05.769861    7352 kubelet.go:2391] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

Install a network plugin so that Pods can communicate with each other

I chose kube-flannel here; the Calico CNI plugin is another option.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This usually cannot be downloaded directly from mainland China, so you may need a proxy; here is the copy I downloaded:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
# as the yml shows, Network here matches the --pod-network-cidr=10.244.0.0/16 we passed when initializing the master
# if you use kube-flannel, the default value is fine as-is
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Install the network plugin with kubectl

[root@master k8s]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Wait a while and check the nodes again

Ready

[root@master k8s]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   50m   v1.23.9

# the logs look normal
[root@master k8s]# tail -f /var/log/messages
Nov  1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 9 cni0 10.244.0.1 UDP 123
Nov  1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 10 veth91dca84c fe80::e86e:34ff:fe92:50a7 UDP 123
Nov  1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 11 cni0 fe80::d097:f0ff:fed3:e444 UDP 123
Nov  1 01:42:27 VM-4-9-centos ntpd[655]: Listen normally on 12 veth77048412 fe80::9cf4:6cff:fee8:5547 UDP 123
Nov  1 01:43:01 VM-4-9-centos systemd: Started Session 4243 of user root.
Nov  1 01:44:01 VM-4-9-centos systemd: Started Session 4244 of user root.

View the pods

[root@master k8s]# kubectl get pods -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-lpfvv            1/1     Running   0          4m29s
kube-system    coredns-6d8c4cb4d-96hvd          1/1     Running   0          53m
kube-system    coredns-6d8c4cb4d-wrm4s          1/1     Running   0          53m
kube-system    etcd-master                      1/1     Running   0          53m
kube-system    kube-apiserver-master            1/1     Running   0          53m
kube-system    kube-controller-manager-master   1/1     Running   0          53m
kube-system    kube-proxy-7kqc5                 1/1     Running   0          53m
kube-system    kube-scheduler-master            1/1     Running   0          53m

If the installation fails, you can run kubeadm reset to restore the node to its original state and install again.

Join the nodes to the cluster

# run the kubeadm join command recorded earlier
kubeadm join 10.0.4.9:6443 --token x22atb.reldvil72yia0ac4 \
        --discovery-token-ca-cert-hash sha256:c32a489c444bf5242543811c1aad5b5925693341699756ea4523a4228da6e5ff

# error: iptables must be allowed to see bridged traffic
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Allow iptables to see bridged traffic

Set net.bridge.bridge-nf-call-iptables = 1 so that iptables on the Linux nodes can correctly see bridged traffic.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# no reboot needed
sudo sysctl --system

If you see a message suggesting something like systemctl enable kubelet.service, copy and run it.
Then run the join again.

# if the token was lost or has expired, generate a new one (--ttl=0 makes it never expire)
kubeadm token create --print-join-command --ttl=0

[root@node01 ~]# kubeadm join 10.0.4.9:6443 --token q1g5bp.qtc45zl0umpu1viy --discovery-token-ca-cert-hash sha256:c32a489c444bf5242543811c1aad5b5925693341699756ea4523a4228da6e5ff
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

############### 
[root@node01 ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE    VERSION
master   Ready      control-plane,master   125m   v1.23.9
node01   NotReady   <none>                 53s    v1.23.9


# join the other node the same way; after waiting a while, all nodes show Ready
[root@node02 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   129m    v1.23.9
node01   Ready    <none>                 5m47s   v1.23.9
node02   Ready    <none>                 3m44s   v1.23.9
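The worker nodes show ROLES as <none>; that is only cosmetic, but if you would like kubectl get nodes to display a role, you can label them (optional; the label value here is arbitrary):

```shell
# optional: give the workers a visible role label
kubectl label node node01 node-role.kubernetes.io/worker=worker
kubectl label node node02 node-role.kubernetes.io/worker=worker
```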

Start kubelet

If it fails to start, check the logs with journalctl -f -u kubelet

# start it now
systemctl enable --now kubelet

Install the dashboard

Download (a proxy may be needed): https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

I modified it slightly: added type: NodePort and exposed node port 30000; without this you would have to deploy an ingress to reach it.

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
[root@master k8s]# kubectl apply -f kube-dashboard.yml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created


[root@master k8s]# kubectl get pods -A | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-6f669b9c9b-prj4m   1/1     Running   0          63s
kubernetes-dashboard   kubernetes-dashboard-67b9478795-zzrds        1/1     Running   0          63s

Create an admin user for the dashboard

kube-dashboard-adminuser.yml

[root@master k8s]# kubectl apply -f kube-dashboard-adminuser.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# kube-dashboard-adminuser
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# confirm the IP
kubectl get pods,svc -n kubernetes-dashboard -o wide

Visit https://<node-ip>:30000 (the NodePort configured above)


Get the login token

[root@master k8s]# kubectl describe secrets -n kubernetes-dashboard admin-user-token  | grep token | awk 'NR==3{print $2}'

eyJhbGciOiJSUzI1NiIsImtpZCI6Imh3NGJpbjlZQjZubDg0OWY2Ri1xMDdLSkV6dC1fM2MyMzVmVW5XZnhlelkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXd4bnNxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkOGJkZmNhNi1jZDAxLTQzOTgtODE1Mi0wZGYyNGYxOTQzMzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.qBmPRz_8fUjYbFAn9jMTjHtZMeBvEQPRBtkwC5ZuYE4CNU3-6z81G3c8uuWxrZvgEei_BYUXrYxlChMksQkMMhn6xjR3o1PhLEHAz7o6Vv0jeYfXY0-aFe2PRzSc3aZjoEHhz7-G5OMSiGU9W1_Ltg7PqetwfXSPo39rIweo4P0AKY689IChq3nZXDX2MjExvuqVsCVgRSilPf1azUsZLC_R-cwHfOloPDgBWmbKDatbL_LqRtmMQ705YQH_G89I257Mf2Ki-KsCB8sm7uqrt1EwU4ovU5UEDk05hwxcEXIay2m5vXyVOESysJMR8g9j2F4B8ulv0ixpE41-eC0tlQ

Login successful

