k8s Resource Limits

[toc]

1. Network Policy

1.1 Deploy tomcat and nginx

1.1.1 Create namespaces

  • Create the linux and app namespaces and add labels
kubectl create ns linux
kubectl create ns app
kubectl label ns linux nsname=linux
kubectl label ns app nsname=app

1.1.2 YAML for tomcat and nginx

  • tomcat deployment and service
kind: Deployment
# Note your k8s version here; run kubectl explain deployment to see which apiVersion your version supports
# This example uses Kubernetes v1.22.2
apiVersion: apps/v1
metadata:
  labels:
    app: app-tomcat-deployment-label
  name: app-tomcat-deployment
  namespace: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-tomcat-selector
  template:
    metadata:
      labels:
        app: app-tomcat-selector
    spec:
      containers:
      - name: app-tomcat-container
        image: tomcat:7.0.109-jdk8-openjdk
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
          
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: app-tomcat-service-label
  name: app-tomcat-service
  namespace: app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30015
  selector:
    app: app-tomcat-selector
  • nginx deployment and service
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: app-nginx-deployment-label
  name: app-nginx-deployment
  namespace: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-nginx-selector
  template:
    metadata:
      labels:
        app: app-nginx-selector
        project: app
    spec:
      containers:
      - name: app-nginx-container
        image: nginx:1.20.2-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: app-nginx-service-label
  name: app-nginx-service
  namespace: app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30014
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30453
  selector:
    app: app-nginx-selector

Apply the same configuration in the linux namespace; just change the namespace and labels in the yaml files.

1.1.3 Deploy tomcat and nginx

# Apply all yaml files in the current directory
kubectl apply -f .

1.1.4 Configure dynamic/static separation

  • Create a page in tomcat
# Replace TOMCAT_POD with the actual tomcat pod name
kubectl -n app exec -it TOMCAT_POD -- bash
cd webapps
mkdir app
echo "app app" > app/index.jsp
  • Configure nginx forwarding
kubectl -n app exec -it NGINX_POD -- sh
# The nginx image is Alpine-based; install vim first
apk add vim
# Add a location rule that forwards matching paths to tomcat
vim /etc/nginx/conf.d/default.conf
# Add the following rule
location /app {
    # Replace TOMCAT_SERVICE with the actual tomcat service name
    proxy_pass http://TOMCAT_SERVICE;
}
# Check the nginx config syntax
nginx -t
# Once the check passes, reload the config to make it take effect
nginx -s reload

If copy/paste does not work in vim:

1645584428139.png

Set the option in vim command mode:

1645584683595.png

Apply the same configuration in the linux namespace, but change the page content to "linux app".

1.1.5 Verify the configuration

  • nginx page
1645585087768.png
  • tomcat page
1645585231992.png
  • Access tomcat through the nginx port
1645585279621.png

By default, pods can also be accessed across namespaces.
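As a quick hedged check of this default (LINUX_POD is a placeholder for a real pod name), exec into a pod in the linux namespace and curl the tomcat service in the app namespace by its cluster DNS name:

# From any pod in the linux namespace, reach the tomcat service in the app namespace
kubectl -n linux exec -it LINUX_POD -- sh
curl http://app-tomcat-service.app.svc.cluster.local/app/index.jsp
# Returns "app app" while no NetworkPolicy is in place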

1.2 Configure network policy rules

1.2.1 Ingress rule: label restriction

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access--networkpolicy
  namespace: app
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: app-tomcat-selector # apply the rules below to the matched destination pods
  ingress: # ingress rules; if no port is specified, all ports and protocols (TCP, UDP, or SCTP) are matched
  - from:
    - podSelector:
        matchLabels:
          app: app-nginx-selector # with multiple matchLabels conditions, conditions A, B, ... X must all be satisfied at the same time

After the network policy is added, cross-namespace access to the target pod is denied by default.

Pods not allowed by the policy cannot access the target even from the same namespace.

Access from the host machine to the pod is not allowed.

This policy only allows source pods in the same namespace that carry the specified label to access the target pod.

The policy does not affect traffic between pods (in any namespace) that it does not explicitly restrict.

  • Inspect the networkpolicy with describe
kubectl -n app describe networkpolicies.networking.k8s.io tomcat-access--networkpolicy
Name:         tomcat-access--networkpolicy
Namespace:    app
Created on:   2022-02-23 03:18:02 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=app-tomcat-selector
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      PodSelector: app=app-nginx-selector
  Not affecting egress traffic
  Policy Types: Ingress
  • After adding the rule, access the tomcat address from the linux namespace
1645597541156.png

Access to the tomcat pod IP fails because of the network policy; access to the nginx pod IP still succeeds.
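This can be reproduced with a couple of curls from a pod in the linux namespace (the pod name and IPs below are placeholders taken from kubectl get pod -o wide):

# Look up pod IPs in the app namespace
kubectl -n app get pod -o wide
# From a pod in the linux namespace
kubectl -n linux exec -it LINUX_POD -- sh
curl --connect-timeout 3 http://TOMCAT_POD_IP:8080/app/index.jsp   # blocked by the policy, times out
curl --connect-timeout 3 http://NGINX_POD_IP:80/                   # succeeds; nginx is not a policy target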

1.2.2 Ingress rule: label and port restriction

  • Append the following to the 1.2.1 yaml, under from and at the same level as podSelector
ports: # if no port is specified, all ports and protocols (TCP, UDP, or SCTP) are matched
    - protocol: TCP
      #port: 8080 # allow TCP access to port 8080 of the target pod; any port not explicitly allowed is denied
      port: 80
  • After adding the rule, access the tomcat address from nginx in the app namespace: with port 8080 not opened, even same-namespace pods matching the podSelector cannot connect; once port 8080 is opened, access succeeds
1645598181215.png

1.2.3 Ingress rule: multiple ports

  • In the 1.2.1 yaml, replace the from section with the following
  - from:
    - podSelector: # match source pods; matchLabels: {} places no restriction, i.e. all pods are allowed
        matchLabels: {}
    ports: # if no port is specified, all ports and protocols (TCP, UDP, or SCTP) are matched
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8081
    - protocol: TCP
      port: 8082
  • All pods in the same namespace can access multiple ports on the target pod
1645599041496.png
1645599069009.png

1.2.4 Ingress rule: mutual access

  • In the 1.2.1 yaml, replace the Ingress section with the following
  - Ingress
  podSelector:
    matchLabels: {} 
  ingress:
  - from:
    - podSelector: 
        matchLabels: {}
  • All pods within the same namespace can access each other
1645599069009.png
1645599983023.png

1.2.5 Ingress rule: IP restriction

  • In the 1.2.1 yaml, replace the from section with the following
- from:
    - ipBlock:
        cidr: 10.200.0.0/16 # whitelist: the address range allowed in; anything outside it is denied access to the target pod
        except:
        - 10.200.218.0/24 # source range inside the CIDR above that is denied
        - 10.200.230.239/32 # source address inside the CIDR above that is denied
    ports: 
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8081
  • Whitelisted addresses can connect; explicitly excluded addresses cannot
1645600536617.png
1645600464526.png

This rule restricts only IP addresses, not namespaces; pods in other namespaces can still reach the target pod as long as their IP falls within the whitelisted range.

1.2.6 Ingress rule: namespace restriction

  • In the 1.2.1 yaml, replace the Ingress section with the following
- Ingress
  podSelector: # target pods
    matchLabels: {} # apply to all pods in the app namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          nsname: linux # only allow access from the specified namespace
    - namespaceSelector:
        matchLabels:
          nsname: app # only allow access from the specified namespace
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8081
    - protocol: TCP
      port: 8082
  • Multiple namespaces may access all pods and multiple ports in the target namespace; pods from namespaces that are not allowed cannot connect
1645601040168.png
1645601079875.png

1.2.7 Egress rule: IP and port restriction

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-access-networkpolicy
  namespace: app
spec:
  policyTypes:
  - Egress
  podSelector: # selects the pods this policy applies to
    matchLabels: # match the target pods by label
      app: app-tomcat-selector # match pods in the app namespace whose app label is app-tomcat-selector, then restrict their outbound traffic with the egress rules below
  egress:
  - to:
    - ipBlock:
        cidr: 10.200.0.0/16 # destination CIDR range the matched pods may access
    - ipBlock:
        cidr: 172.31.7.106/32 # destination host the matched pods may access
    ports:
    - protocol: TCP
      port: 80 # allow the matched pods to reach destination port 80
    - protocol: TCP
      port: 53 # allow destination port 53, i.e. DNS resolution
    - protocol: UDP
      port: 53 # allow destination port 53, i.e. DNS resolution
  • The matched pods can access multiple ports within the specified address ranges; ports not opened are unreachable
1645602269891.png

1.2.8 Egress rule: label and port restriction

  • In the 1.2.7 yaml, replace the spec section with the following
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: app-nginx-selector
  egress:
  - to:
    - podSelector: 
        matchLabels:
          app: app-tomcat-selector
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  • The matched pods may only access the listed ports on pods carrying the matching label
1645603454703.png

1.2.9 Egress rule: namespace restriction

  • In the 1.2.7 yaml, replace the egress section with the following
egress:
  - to:
    - namespaceSelector:
        matchLabels:
          nsname: python # destination namespace that may be accessed
    - namespaceSelector:
        matchLabels:
          nsname: linux # destination namespace that may be accessed
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  • The target pods can access multiple ports in multiple namespaces; ports not allowed are unreachable
1645604351218.png

calicoctl get networkpolicy -n app

This command uses calico's command-line tool to view network policies; it works just like kubectl — describe and -o yaml are both supported.

Note: network policies require support from the network plugin; calico supports them, flannel does not.

2. Ingress-NGINX

2.1 ExternalName

  • ExternalName yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-test-name
  namespace: default
spec:
  type: ExternalName  # service type
  externalName: www.baidu.com   # external domain name
  • With the ExternalName service bound to the baidu domain, pods can access the service name directly
1645666300469.png
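A hedged way to see the CNAME mapping from inside a cluster pod (POD is a placeholder; the image must provide nslookup):

kubectl exec -it POD -- nslookup my-external-test-name.default.svc.cluster.local
# Expect a CNAME answer pointing at www.baidu.com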
  • ClusterIP and Endpoints yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-production-server-name
  namespace: default
spec:
  ports:
    - port: 6379
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-production-server-name
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.204.182
    ports:
      - port: 6379
  • Modify the redis config file
1645667181373.png
  • Restart the redis service
systemctl restart redis
  • Enter a pod, install epel-release and the redis client, then connect to redis via the service domain name
1645668178651.png
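A minimal sketch of that check, assuming a CentOS-based pod and the service defined above:

# Inside the pod
yum install -y epel-release && yum install -y redis
# The ClusterIP service forwards to the external redis via the Endpoints object
redis-cli -h mysql-production-server-name.default.svc.cluster.local -p 6379 ping
# Expect: PONG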

Installing redis on CentOS 7 reports an error

1645666852755.png

Install the EPEL repository

1645666927600.png

Then install again

1645666991690.png

2.2 Create ingress-nginx

  • Start the ingress-nginx pod
1645668585935.png
  • Web access test
1645668637961.png

Accessing port 80 on any host returns this error page, which proves the deployment succeeded.

2.3 Matching rules

2.3.1 Match a single host

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: linux
  annotations:
    kubernetes.io/ingress.class: "nginx" ## specify the type of Ingress Controller
    nginx.ingress.kubernetes.io/use-regex: "true" ## allow regular expressions in the paths defined under rules
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ## connect timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ## timeout for sending data to the backend server, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ## timeout for reading the backend server response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "50m" ## maximum client upload (request body) size
    nginx.ingress.kubernetes.io/app-root: /index.html

spec:
  rules:
  - host: www.test.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: linux-tomcat-app1-service
            port:
              number: 80
  • Access result
1645670786676.png

You first need to create two tomcat deployments and services in the linux namespace, create the directory and test page inside tomcat, and add a hosts entry on the client machine resolving the domain (to the load balancer address or any node address in the cluster), as sketched below.
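A hedged sketch of that client-side setup (the IP is a placeholder for a node or load balancer address):

# Resolve the test domain to any cluster node or the load balancer
echo "192.168.204.101 www.test.com" >> /etc/hosts
# The ingress should route this to linux-tomcat-app1-service
curl http://www.test.com/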

2.3.2 Match multiple hosts

  • Add the following to the 2.3.1 yaml
  - host: mobile.test.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: linux-tomcat-app2-service
            port:
              number: 80
  • Access result
1645671237081.png

2.3.3 Match paths

  • In the 2.3.1 yaml, replace the host section with the following
  - host: www.test.com
    http:
      paths:
      - pathType: Prefix
        path: "/app1"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80

      - pathType: Prefix
        path: "/app2"
        backend:
          service:
            name: magedu-tomcat-app2-service
            port:
              number: 80
  • Access result
1645671419515.png
1645671438059.png

2.3.4 Match a single domain (TLS)

  • Issue a certificate for the domain
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.test.com'
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.test.com'
openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
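Optionally sanity-check the signed certificate before uploading it (an extra step, not part of the original flow):

# Confirm the subject, issuer, and validity period
openssl x509 -in server.crt -noout -subject -issuer -dates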
  • Upload the certificate to k8s
kubectl -n linux create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key
  • Matching rule yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: linux
  annotations:
    kubernetes.io/ingress.class: "nginx" ## specify the type of Ingress Controller
    nginx.ingress.kubernetes.io/ssl-redirect: 'true' # SSL redirect: force http requests to https, the equivalent of site-wide https in nginx
spec:
  tls:
  - hosts:
    - www.test.com
    secretName: tls-secret

  rules:
  - host: www.test.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80
  • Access result
1645673459174.png
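A hedged command-line check (self-signed CA, so -k is needed; the IP is a placeholder for a node or load balancer address):

# HTTP should redirect to HTTPS because of the ssl-redirect annotation
curl -I --resolve www.test.com:80:192.168.204.101 http://www.test.com/
# HTTPS serves the page with the tls-secret certificate
curl -k --resolve www.test.com:443:192.168.204.101 https://www.test.com/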

2.3.5 Match multiple domains (TLS)

  • Issue certificates for additional domains
openssl req -new -newkey rsa:4096 -keyout mobile.key -out mobile.csr -nodes -subj '/CN=mobile.test.com'
openssl x509 -req -sha256 -days 3650 -in mobile.csr -CA ca.crt -CAkey ca.key -set_serial 01  -out mobile.crt

Using the CA created in 2.3.4, sign certificates for the additional domains.

  • Upload the certificate to k8s
kubectl  create secret generic mobile-tls-secret --from-file=tls.crt=mobile.crt --from-file=tls.key=mobile.key -n linux
  • In the 2.3.4 yaml, replace the spec section with the following
spec:
  tls:
  - hosts:
    - www.test.com
    secretName: tls-secret
  - hosts:
    - mobile.test.com
    secretName: mobile-tls-secret
  rules:
  - host: www.test.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app1-service
            port:
              number: 80


  - host: mobile.test.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: magedu-tomcat-app2-service
            port:
              number: 80
  • Access result
1645673459174.png
1645673765032.png

3. Resource Limits

3.1 Limiting resources

  • Limit container CPU and memory
        resources:
          limits:
            cpu: "1.2"
            memory: "512Mi"
          requests:
            memory: "100Mi"
            cpu: "500m"

The requests values must be less than or equal to the limits values, otherwise creation fails; with limits set, the maximum resources the container may use are capped at the limits values.
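To see how requests and limits add up on a node, one hedged check (NODE is a placeholder):

# Per-pod requests/limits plus the node's allocated totals
kubectl describe node NODE | grep -A 10 "Allocated resources"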

  • LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-magedu
  namespace: linux
spec:
  limits:
  - type: Container       # resource type being limited
    max:
      cpu: "2"            # maximum CPU for a single container
      memory: "2Gi"       # maximum memory for a single container
    min:
      cpu: "500m"         # minimum CPU for a single container
      memory: "512Mi"     # minimum memory for a single container
    default:
      cpu: "500m"         # default CPU limit for a single container
      memory: "512Mi"     # default memory limit for a single container
    defaultRequest:
      cpu: "500m"         # default CPU request for a single container
      memory: "512Mi"     # default memory request for a single container
    maxLimitRequestRatio:
      cpu: 2              # maximum CPU limit/request ratio is 2
      memory: 2           # maximum memory limit/request ratio is 2
  - type: Pod
    max:
      cpu: "4"            # maximum CPU for a single pod
      memory: "4Gi"       # maximum memory for a single pod
  - type: PersistentVolumeClaim
    max:
      storage: 50Gi       # maximum requests.storage for a PVC
    min:
      storage: 30Gi       # minimum requests.storage for a PVC

After creation, inspect it with describe in the given namespace. If a workload's resources exceed what the LimitRange allows, kubectl does not report an error at creation time, but the pods exceed the limits and never start, so they are not visible and no error is printed; describe the deployment or output it in yaml/json format and check the message field for the reason, for example:
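A hedged sketch of that troubleshooting flow (DEPLOYMENT is a placeholder):

kubectl -n linux describe limitrange limitrange-magedu
# Pods never appear, so read the deployment/replicaset status instead
kubectl -n linux describe deployment DEPLOYMENT
kubectl -n linux get deployment DEPLOYMENT -o yaml   # look for the message in status.conditions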

  • ResourceQuota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-magedu
  namespace: linux
spec:
  hard:
    requests.cpu: "8" # cap on the sum of CPU requests in the namespace
    limits.cpu: "8" # cap on the sum of CPU limits
    requests.memory: 4Gi # cap on the sum of memory requests
    limits.memory: 4Gi # cap on the sum of memory limits
    requests.nvidia.com/gpu: 4 # cap on requested NVIDIA GPUs
    pods: "2" # maximum number of pods
    services: "6" # maximum number of services

A ResourceQuota applies resource limits to an entire namespace.
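Usage against the quota can be watched with describe, e.g.:

# Shows Used vs Hard for every resource the quota tracks
kubectl -n linux describe resourcequota quota-magedu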

4. RBAC Authorization

4.1 Create a user and grant permissions

  • Create the serviceaccount linux in the linux namespace
kubectl -n linux create serviceaccount linux

When the account is created, k8s assigns it a token that stays with the account; unless the account is deleted and recreated, the token does not change. Find the secret name with get secret and view the token string with describe, as sketched below.
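A minimal sketch of retrieving that token (secrets are auto-created for serviceaccounts on k8s <= 1.23, which covers the v1.22.2 used here; SECRET_NAME is a placeholder):

kubectl -n linux get secret | grep linux
kubectl -n linux describe secret SECRET_NAME   # prints the token string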

  • Create a role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: linux
  name: linux-role
rules:
- apiGroups: ["*"]
  resources: ["pods","pods/exec"]
  verbs: ["*"]
- apiGroups: ["extensions", "apps/v1"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

Inspect it with describe.

  • Create a rolebinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-bind-linux
  namespace: linux
subjects:
- kind: ServiceAccount
  name: linux
  namespace: linux
roleRef:
  kind: Role
  name: linux-role
  apiGroup: rbac.authorization.k8s.io

Inspect it with describe.

4.2 Restrict user permissions

  • To create a role with read-only permissions, replace the rules in the 4.1 role with the following
rules:
- apiGroups: ["*"]
  resources: ["pods","pods/exec"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "apps/v1"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]

When logging in to the dashboard with this token, there is no permission to delete anything.
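The same restriction can be probed from the CLI with kubectl auth can-i, a hedged sketch:

# Impersonate the serviceaccount and test individual verbs
kubectl auth can-i list pods -n linux --as=system:serviceaccount:linux:linux     # yes
kubectl auth can-i delete pods -n linux --as=system:serviceaccount:linux:linux  # no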

4.3 Generate a kubeconfig file

  • Create the CSR file
cat >> linux-csr.json <<EOF
{
  "CN": "China",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • Sign the certificate
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubeasz/clusters/cluster1/ssl/ca-config.json -profile=kubernetes linux-csr.json | cfssljson -bare linux
  • Generate the kubeconfig file for the regular user
kubectl config set-cluster cluster1 --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.204.101:6443 --kubeconfig=linux.kubeconfig
  • Set the client authentication parameters
kubectl config set-credentials linux \
--client-certificate=linux.pem \
--client-key=linux-key.pem \
--embed-certs=true \
--kubeconfig=linux.kubeconfig
  • Set the context parameters (contexts distinguish multiple clusters)
kubectl config set-context cluster1 \
--cluster=cluster1 \
--user=linux \
--namespace=linux \
--kubeconfig=linux.kubeconfig

https://kubernetes.io/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig/

  • Set the default context
kubectl config use-context cluster1 --kubeconfig=linux.kubeconfig
  • View the token
kubectl -n linux describe secret `kubectl -n linux get secret | grep linux | awk '{print $1}'`
  • Edit linux.kubeconfig and append the token at the end
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkxpeWdUSzlaQ1AtMldqNjhwXzdaeVFJWEFHdnhLNEI5bGJ2UTZtaEcyMDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtYWdlZHUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWFnZWR1LXRva2VuLWx6Y3g4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1hZ2VkdSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjkwZjZkZTI5LWJlZjMtNGVlOC04MGMxLWI2OWZjZGE2N2IxZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptYWdlZHU6bWFnZWR1In0.G2JTuh3B9ncIt8sumk22oD13rUyzvHPtukudvtYZ7W6L_Tn1jRY8YldZKXB9PnejChm2_O3GWh84CCWr2kpad2ELH3nTFMdzFNig-sfoaubt19ZabdLRch1Pd9wu-4YWxPPjUxi3ZQvnxo-TIJ26k_Y5MVQuc81HW2NgvzFGTg4jh6Uusd12uz9HT7Z_JQn7CgSZLg2OrbAuq7OgUVqBqOpoVkN1CXsD6qu_xC7c_dvVsYmU9u-W8VFu4ScKNK1G1P77wsuIiBNN543wJ53dXTePDOrWJbkZvyDBfNFd4PaCCBCl9GVA8GlWVOWiV3A_xFh3D-ZTlFDRLLgJZAI5cQ
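As a final hedged smoke test, the kubeconfig should list pods in the linux namespace and be denied elsewhere:

kubectl --kubeconfig=linux.kubeconfig get pods
kubectl --kubeconfig=linux.kubeconfig get pods -n default   # expect a Forbidden error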

5. Log Collection

5.1 Image preparation

  • Build a custom centos base image

Add filebeat to the base image; it is used later for log collection.

  • Build a custom JDK image based on centos

Add environment variables so the java command can be run anywhere.

  • Build a tomcat image based on the JDK image

Create the log mount directory.

  • Build the application image based on tomcat

Add the filebeat.yml file so filebeat reads its configuration and it takes effect.

5.2 Set up the middleware

5.2.1 elasticsearch

  • Install elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.0-amd64.deb
dpkg -i elasticsearch-7.16.0-amd64.deb

Modify the config file /etc/elasticsearch/elasticsearch.yml (version: elasticsearch-7.16.0-amd64.deb):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: IP
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["IP"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["IP"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
  • Restart elasticsearch
systemctl restart elasticsearch.service

5.2.2 logstash

  • Install logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.16.0-amd64.deb
dpkg -i logstash-7.16.0-amd64.deb

Modify the config file /etc/logstash/logstash.yml (version: logstash-7.16.0-amd64.deb).

It contains only the data and log paths, so it can be left unchanged.

Write a pipeline config file under /etc/logstash/conf.d/; the name is arbitrary (kafka-to-es.conf here). It takes the kafka cluster and topic as input, branches on the log type, and outputs to the es cluster; a config test follows the file.

input {
  kafka {
    bootstrap_servers => "KAFKA_IP:9092"
    topics => ["TOPIC"]
    codec => "json"
  }
}

output{
  if [fields][type] == "tomcat-accesslog" {
    elasticsearch {
      hosts => ["ES_IP:9200"]
      index => "accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [fields][type] == "tomcat-catalina" {
    elasticsearch {
      hosts => ["ES_IP:9200"]
      index => "catalinalog-%{+YYYY.MM.dd}"
    }
  }
}
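Before restarting, the pipeline can be validated in place; a hedged sketch using logstash's bundled binary:

# Prints "Config Validation Result: OK" when the pipeline parses cleanly
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/kafka-to-es.conf --config.test_and_exit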
  • Restart logstash
systemctl restart logstash.service

5.2.3 kibana

  • Install kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.16.0-amd64.deb
dpkg -i kibana-7.16.0-amd64.deb

Modify the config file /etc/kibana/kibana.yml (version: kibana-7.16.0-amd64.deb):

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.204.113"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "elasticsearch"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.204.113:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# If may use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
i18n.locale: "zh-CN"
  • Restart kibana
systemctl restart kibana.service

5.2.4 zookeeper

  • Install java
apt install -y openjdk-8-jdk
  • Install zookeeper
wget https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9-bin.tar.gz
tar xf apache-zookeeper-3.5.9-bin.tar.gz

Copy the sample config file, rename it to zoo.cfg, and modify it:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/zookeeper/zkdata
dataLogDir=/data/zookeeper/zklogs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=IP:2888:3888
  • Start zookeeper
./bin/zkServer.sh start
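A quick status check from the zookeeper directory (hedged; a single node should report standalone mode):

./bin/zkServer.sh status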

5.2.5 kafka

  • Install kafka
wget https://archive.apache.org/dist/kafka/3.0.0/kafka_2.13-3.0.0.tgz
tar xf kafka_2.13-3.0.0.tgz

Modify the config file:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.204.111:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=1073741824


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=IP:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
  • Start kafka
./bin/kafka-server-start.sh -daemon ./config/server.properties
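A hedged smoke test that the broker is up (the topic name is an arbitrary example):

./bin/kafka-topics.sh --create --topic smoke-test --bootstrap-server localhost:9092
./bin/kafka-topics.sh --list --bootstrap-server localhost:9092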

5.2.6 filebeat

  • Install filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.1-x86_64.rpm
rpm -ivh filebeat-7.12.1-x86_64.rpm

Because the image is centos-based, the rpm package is downloaded.

Modify the config file /etc/filebeat/filebeat.yml (version: filebeat-7.12.1-x86_64.rpm):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

# When the kafka connection fails, enable debug logging to locate the cause
#logging.level: debug

output.kafka:
  hosts: ["192.168.204.111:9092"]
  required_acks: 1
  topic: "magedu-n60-app1"
  compression: gzip
  max_message_bytes: 1000000
  • Start filebeat
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

The -e flag writes logs to standard error instead of log files; the -c flag specifies the config file to run.
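filebeat also ships test subcommands that help before starting it for real; a hedged sketch:

# Validate the config file, then probe the kafka output connectivity
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml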

5.3 View the logs

  • Log collection flow diagram
1646293470844.png

Application containers produce business logs; filebeat collects them and ships them to the kafka cluster (which relies on the zookeeper cluster); logstash consumes them, classifies them by business type, and stores them in the elasticsearch cluster, where kibana displays them.

  • List kafka topics
/opt/kafka_2.12-3.0.0/bin/kafka-topics.sh --list --bootstrap-server localhost:9092 
1646363948681.png
  • View logs by consuming a specific topic with the kafka console consumer
/opt/kafka_2.12-3.0.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TOPIC
1646362018263.png

When filebeat ships logs to kafka, pay attention to the kafka version; if the kafka version is too old, errors occur.

  • View the logs in es
1646361828162.png

Use the elasticsearch-head Chrome extension; if Chrome cannot find it, or the installed extension does not work, try a Chromium-based dual-core browser whose extension store carries it.

  • Display the logs in kibana
1646362242337.png
1646362344096.png