Lesson 10: Troubleshooting a Kubernetes Service That Cannot Be Accessed, in Practice

Summary

While learning Kubernetes you will often run into a Service that cannot be accessed. This article summarizes the situations that can cause it, in the hope of helping you pinpoint the problem.

Contents

To set the stage for this exercise, first deploy an application:

# kubectl create deployment web --image=nginx --replicas=3
deployment.apps/web created
# kubectl expose deployment web --port=8082 --type=NodePort
service/web exposed

Make sure the Pods are running:

#  kubectl get pods,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/dnsutils              1/1     Running   25         25h
pod/mysql-5ws56           1/1     Running   0          20h
pod/mysql-fwpgc           1/1     Running   0          25h
pod/mysql-smggm           1/1     Running   0          20h
pod/myweb-8dc2n           1/1     Running   0          25h
pod/myweb-mfbpd           1/1     Running   0          25h
pod/myweb-zn8z2           1/1     Running   0          25h
pod/web-96d5df5c8-8fwsb   1/1     Running   0          69s
pod/web-96d5df5c8-g6hgp   1/1     Running   0          69s
pod/web-96d5df5c8-t7xzv   1/1     Running   0          69s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          25h
service/mysql        ClusterIP   10.99.230.190    <none>        3306/TCP         25h
service/myweb        NodePort    10.105.77.88     <none>        8080:31330/TCP   25h
service/web          NodePort    10.103.246.193   <none>        8082:31303/TCP   17s

Problem 1: The Service cannot be accessed by name

If you are accessing the Service by its name, first make sure the CoreDNS service is deployed:

# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-8q44c              1/1     Running   0          26h
coredns-74ff55c5b-f7j5g              1/1     Running   0          26h
etcd-k8s-master                      1/1     Running   2          26h
kube-apiserver-k8s-master            1/1     Running   2          26h
kube-controller-manager-k8s-master   1/1     Running   0          26h
kube-flannel-ds-f5tn6                1/1     Running   0          21h
kube-flannel-ds-ftfgf                1/1     Running   0          26h
kube-proxy-hnp7c                     1/1     Running   0          26h
kube-proxy-njw8l                     1/1     Running   0          21h
kube-scheduler-k8s-master            1/1     Running   0          26h

CoreDNS is deployed here; if its status is not Running, check the container logs to investigate further.
Use dnsutils to test name resolution.
dnsutils.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: mydlqclub/dnsutils:1.3
    imagePullPolicy: IfNotPresent
    command: ["sleep","3600"]

Create the Pod and open a shell in the container:

# kubectl create -f dnsutils.yaml

# kubectl exec -it dnsutils sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

/ # nslookup web
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   web.default.svc.cluster.local
Address: 10.103.246.193

If resolution fails, try qualifying the name with the namespace:

/ # nslookup web.default
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   web.default.svc.cluster.local
Address: 10.103.246.193

If that resolves, you need to adjust the application to use the cross-namespace name when accessing the Service.

If resolution still fails, try the fully qualified name:

/ # nslookup web.default.svc.cluster.local
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   web.default.svc.cluster.local
Address: 10.103.246.193

Note: here "default" is the namespace being used, "svc" marks it as a Service, and "cluster.local" is the cluster domain.
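
These short names resolve because of the search suffixes Kubernetes injects into every Pod's /etc/resolv.conf. You can confirm them from inside the dnsutils Pod:

/ # cat /etc/resolv.conf

Expect a nameserver line pointing at the CoreDNS ClusterIP (10.96.0.10 here) and a search line containing default.svc.cluster.local svc.cluster.local cluster.local.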

Now try the same lookup from a Node in the cluster (your cluster's DNS IP may differ; check it with kubectl get svc -n kube-system):

#  nslookup web.default.svc.cluster.local
Server:     103.224.222.222
Address:    103.224.222.222#53

** server can't find web.default.svc.cluster.local: REFUSED

The lookup fails there. Check whether /etc/resolv.conf is correct, adding the CoreDNS IP and the search paths.
Add:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

After the change (vim /etc/resolv.conf), the file looks like this:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 103.224.222.222
nameserver 103.224.222.223
nameserver 8.8.8.8
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Notes:

The nameserver line must point at the CoreDNS Service; it is configured automatically through the kubelet --cluster-dns flag.

The search line must include appropriate suffixes so that Service names can be looked up. In this example it looks for Services in the local namespace (default.svc.cluster.local), in all namespaces (svc.cluster.local), and in the cluster domain (cluster.local).

The options line must set ndots high enough that the DNS client library prefers the search path. Kubernetes sets it to 5 by default.
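
Another way to confirm that CoreDNS itself can answer is to query it directly, bypassing /etc/resolv.conf entirely. This is only a sketch using the ClusterIP from this cluster (10.96.0.10); check yours with kubectl get svc -n kube-system:

# nslookup web.default.svc.cluster.local 10.96.0.10
# dig @10.96.0.10 web.default.svc.cluster.local +short

Both should return the Service's ClusterIP (10.103.246.193 in this walkthrough). If they do, the problem lies in the node's resolver configuration rather than in CoreDNS.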

Problem 2: The Service cannot be accessed by its IP

Assuming the Service can be reached by name (i.e. CoreDNS is working), the next thing to test is whether the Service itself works. From a node in the cluster, access the Service IP:

# curl -I 10.103.246.193
HTTP/1.1 200 OK
Server: Tengine
Date: Sun, 22 Aug 2021 13:04:15 GMT
Content-Type: text/html
Content-Length: 1326
Last-Modified: Wed, 26 Apr 2017 08:03:47 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: "59005463-52e"
Accept-Ranges: bytes

In this cluster, however, the access is abnormal and the connection times out:

# curl -I 10.103.246.193:8082
curl: (7) Failed to connect to 10.103.246.193 port 8082: Connection timed out

Approach 1: Is the Service port configuration correct?

Check the Service configuration and the ports it uses:

# kubectl get svc web -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-08-22T04:04:11Z"
  labels:
    app: web
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":8082,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-expose
    operation: Update
    time: "2021-08-22T04:04:11Z"
  name: web
  namespace: default
  resourceVersion: "118039"
  uid: fa5bbc6b-7a79-45a4-b6ba-e015340d2bab
spec:
  clusterIP: 10.103.246.193
  clusterIPs:
  - 10.103.246.193
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31303
    port: 8082
    protocol: TCP
    targetPort: 8082
  selector:
    app: web
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Notes (a hedged sketch of correcting a mismatched targetPort follows this list):

  • spec.ports[].port: the port used when accessing the ClusterIP, 8082
  • targetPort: the target port, i.e. the port the service inside the container listens on, 8082
  • nodePort: the port for access from outside the cluster, http://NodeIP:31303
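
If the container actually listens on a different port than targetPort, the Service can never reach its Pods. In this walkthrough the Service was exposed with --port=8082 and no --target-port, so targetPort defaulted to 8082, while the stock nginx image serves on port 80. A hedged sketch of pointing targetPort at the real container port with kubectl patch (adjust the ports to your own case):

# kubectl patch service web -p '{"spec":{"ports":[{"port":8082,"protocol":"TCP","targetPort":80}]}}'

The default strategic merge patch matches the existing ports entry by its port value, so the allocated nodePort is preserved.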

Approach 2: Is the Service correctly associated with the Pods?

Check that the Pods associated with the Service are the right ones:

# kubectl get pods  -o wide -l app=web
NAME                  READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
web-96d5df5c8-8fwsb   1/1     Running   0          4h9m   10.244.1.5   k8s-node2   <none>           <none>
web-96d5df5c8-g6hgp   1/1     Running   0          4h9m   10.244.1.6   k8s-node2   <none>           <none>
web-96d5df5c8-t7xzv   1/1     Running   0          4h9m   10.244.1.4   k8s-node2   <none>           <none>

The -l app=web argument is a label selector.

From k8s-node2, a Pod IP is directly reachable:

root@k8s-node2:/data/k8s# curl -I 10.244.1.4
HTTP/1.1 200 OK
Server: nginx/1.21.1
Date: Sun, 22 Aug 2021 08:16:16 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
Connection: keep-alive
ETag: "60e46fc5-264"
Accept-Ranges: bytes

All three of these Pods are on k8s-node2, not on the k8s-master node the queries were run from.
The two nodes of this cluster behave differently, so flannel has most likely gone wrong.

In Kubernetes there is a control loop that evaluates every Service's selector and saves the result into an Endpoints object:

root@k8s-master:/data/k8s# kubectl get endpoints web
NAME   ENDPOINTS                                         AGE
web    10.244.1.4:8082,10.244.1.5:8082,10.244.1.6:8082   4h14m

As the output shows, the Endpoints controller has found Pods for the Service. That alone does not prove they are the right Pods, though; you still need to confirm that the Service's spec.selector matches the Pod labels defined in the Deployment:

root@k8s-master:/data/k8s# kubectl get svc web -o yaml
...
  selector:
    app: web
...

Get the Deployment's information:

root@k8s-master:/data/k8s# kubectl get deployment web -o yaml

...
  selector:
    matchLabels:
      app: web
...
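
A quick way to put the selector and the Pod labels side by side (a minimal sketch using jsonpath; the field paths match the objects shown above):

# kubectl get svc web -o jsonpath='{.spec.selector}{"\n"}'
# kubectl get deployment web -o jsonpath='{.spec.selector.matchLabels}{"\n"}'
# kubectl get pods -l app=web --show-labels

If the Service selector matches no Pod labels, the Endpoints object stays empty and the Service has nothing to forward traffic to.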

Approach 3: Are the Pods themselves working?

Check whether the Pods are working by bypassing the Service and accessing a Pod IP directly:

root@k8s-master:/data/k8s# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
dnsutils              1/1     Running   29         29h     10.244.0.4   k8s-master   <none>           <none>
mysql-5ws56           1/1     Running   0          24h     10.244.1.3   k8s-node2    <none>           <none>
mysql-fwpgc           1/1     Running   0          29h     10.244.0.5   k8s-master   <none>           <none>
mysql-smggm           1/1     Running   0          24h     10.244.1.2   k8s-node2    <none>           <none>
myweb-8dc2n           1/1     Running   0          29h     10.244.0.7   k8s-master   <none>           <none>
myweb-mfbpd           1/1     Running   0          29h     10.244.0.6   k8s-master   <none>           <none>
myweb-zn8z2           1/1     Running   0          29h     10.244.0.8   k8s-master   <none>           <none>
web-96d5df5c8-8fwsb   1/1     Running   0          4h21m   10.244.1.5   k8s-node2    <none>           <none>
web-96d5df5c8-g6hgp   1/1     Running   0          4h21m   10.244.1.6   k8s-node2    <none>           <none>
web-96d5df5c8-t7xzv   1/1     Running   0          4h21m   10.244.1.4   k8s-node2    <none>           <none>

Pods deployed on the other node cannot be reached:

root@k8s-master:/data/k8s# curl -I 10.244.1.3:3306
curl: (7) Failed to connect to 10.244.1.3 port 3306: Connection timed out

Pods deployed on the local node can be reached:

root@k8s-master:/data/k8s# curl -I 10.244.0.5:3306
5.7.35=H9A_)c????b.>,q#99~/~mysql_native_password!?#08S01Got packets out of order

So the problem once again points to Pods on the two nodes being unable to communicate with each other.

Note: this uses the Pod's container port (3306), not the Service port (which also happens to be 3306 here).

If there is no normal response, the service inside the container has a problem. In that case use kubectl logs to inspect the logs, or kubectl exec to get into the Pod and check the service directly.
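
For example, using one of the web Pods from the listing above (Pod names will differ in your cluster):

# kubectl logs web-96d5df5c8-8fwsb
# kubectl exec -it web-96d5df5c8-8fwsb -- sh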

Besides a problem with the service itself, the cause can also be a faulty CNI network component. The typical symptom: out of 10 curl attempts only two or three succeed, and those happen to be the ones where the chosen Pod is on the current node, so the request never crosses the host network.
If you see that pattern, check the network components' status and container logs:

root@k8s-master:/data/k8s# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-8q44c              1/1     Running   0          29h
coredns-74ff55c5b-f7j5g              1/1     Running   0          29h
etcd-k8s-master                      1/1     Running   2          29h
kube-apiserver-k8s-master            1/1     Running   2          29h
kube-controller-manager-k8s-master   1/1     Running   0          29h
kube-flannel-ds-f5tn6                1/1     Running   0          24h
kube-flannel-ds-ftfgf                1/1     Running   0          29h
kube-proxy-hnp7c                     1/1     Running   0          29h
kube-proxy-njw8l                     1/1     Running   0          24h
kube-scheduler-k8s-master            1/1     Running   0          29h
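
To dig into a suspected flannel problem, also check the logs of the flannel DaemonSet Pods. A sketch using the Pod name from the listing above; app=flannel is the label used by the usual kube-flannel manifest, but verify it in your cluster:

# kubectl logs kube-flannel-ds-f5tn6 -n kube-system
# kubectl logs -n kube-system -l app=flannel --tail=50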

Approach 4: Is the kube-proxy component working properly?

If you have gotten this far, your Service is running, it has Endpoints, and the Pods are actually serving.
Next, check whether kube-proxy, the component responsible for Services, is working properly.
Confirm that kube-proxy is running:

root@k8s-master:/data/k8s# ps -ef |grep kube-proxy
root      8494  8469  0 Aug21 ?        00:00:15 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-master
root     24323 25972  0 16:34 pts/1    00:00:00 grep kube-proxy

If the process exists, the next step is to confirm that it is not hitting errors while running, such as failing to connect to the master node.
To do that you have to look at its logs. How you view them depends on how Kubernetes was deployed; for a kubeadm deployment:
Check the log on k8s-master:

root@k8s-master:/data/k8s# kubectl logs kube-proxy-hnp7c  -n kube-system
I0821 02:41:24.705408       1 node.go:172] Successfully retrieved node IP: 192.168.0.3
I0821 02:41:24.705709       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.0.3), assume IPv4 operation
W0821 02:41:24.740886       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0821 02:41:24.740975       1 server_others.go:185] Using iptables Proxier.
I0821 02:41:24.742224       1 server.go:650] Version: v1.20.5
I0821 02:41:24.742656       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0821 02:41:24.742680       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0821 02:41:24.742931       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0821 02:41:24.742990       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0821 02:41:24.747556       1 config.go:315] Starting service config controller
I0821 02:41:24.748858       1 shared_informer.go:240] Waiting for caches to sync for service config
I0821 02:41:24.748901       1 config.go:224] Starting endpoint slice config controller
I0821 02:41:24.748927       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0821 02:41:24.849006       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0821 02:41:24.849071       1 shared_informer.go:247] Caches are synced for service config 

Check the log on k8s-node2:

root@k8s-master:/data/k8s# kubectl logs kube-proxy-njw8l  -n kube-system
I0821 07:43:39.092419       1 node.go:172] Successfully retrieved node IP: 192.168.0.5
I0821 07:43:39.092475       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.0.5), assume IPv4 operation
W0821 07:43:39.108196       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0821 07:43:39.108294       1 server_others.go:185] Using iptables Proxier.
I0821 07:43:39.108521       1 server.go:650] Version: v1.20.5
I0821 07:43:39.108814       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0821 07:43:39.109295       1 config.go:315] Starting service config controller
I0821 07:43:39.109304       1 shared_informer.go:240] Waiting for caches to sync for service config
I0821 07:43:39.109323       1 config.go:224] Starting endpoint slice config controller
I0821 07:43:39.109327       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0821 07:43:39.209418       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0821 07:43:39.209418       1 shared_informer.go:247] Caches are synced for service config 

One message stands out: Unknown proxy mode "", assuming iptables proxy — kube-proxy is using iptables mode.

If Kubernetes was deployed from binaries, check the logs with journalctl instead:

journalctl -u kube-proxy

Approach 5: Is kube-proxy writing iptables rules?

kube-proxy's main job is generating the load-balancing rules for Services, implemented with iptables by default. Check whether those rules have actually been written.
Check the iptables rules on k8s-master:

root@k8s-master:/data/k8s# iptables-save |grep web
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-SEP-KYOPKKRUSGN4EPOL -s 10.244.0.8/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
-A KUBE-SEP-KYOPKKRUSGN4EPOL -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.8:8080
-A KUBE-SEP-MOKUSSRWIVOFT5Y7 -s 10.244.0.7/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
-A KUBE-SEP-MOKUSSRWIVOFT5Y7 -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.7:8080
-A KUBE-SEP-V6Q53FEPJ64J3EJW -s 10.244.1.6/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
-A KUBE-SEP-V6Q53FEPJ64J3EJW -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.6:8082
-A KUBE-SEP-YCBVNDXW4SG5UDC3 -s 10.244.1.5/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
-A KUBE-SEP-YCBVNDXW4SG5UDC3 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.5:8082
-A KUBE-SEP-YQ4MLBG6JI5O2LTN -s 10.244.0.6/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
-A KUBE-SEP-YQ4MLBG6JI5O2LTN -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.6:8080
-A KUBE-SEP-ZNATZ23XMS7WU546 -s 10.244.1.4/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZNATZ23XMS7WU546 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.4:8082
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-YQ4MLBG6JI5O2LTN
-A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MOKUSSRWIVOFT5Y7
-A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -j KUBE-SEP-KYOPKKRUSGN4EPOL
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-ZNATZ23XMS7WU546
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YCBVNDXW4SG5UDC3
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-V6Q53FEPJ64J3EJW

Check the iptables rules on k8s-node2:

root@k8s-node2:/data/k8s# iptables-save |grep web
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/myweb" -m tcp --dport 31330 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 31303 -j KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-SEP-KYOPKKRUSGN4EPOL -s 10.244.0.8/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
-A KUBE-SEP-KYOPKKRUSGN4EPOL -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.8:8080
-A KUBE-SEP-MOKUSSRWIVOFT5Y7 -s 10.244.0.7/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
-A KUBE-SEP-MOKUSSRWIVOFT5Y7 -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.7:8080
-A KUBE-SEP-V6Q53FEPJ64J3EJW -s 10.244.1.6/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
-A KUBE-SEP-V6Q53FEPJ64J3EJW -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.6:8082
-A KUBE-SEP-YCBVNDXW4SG5UDC3 -s 10.244.1.5/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
-A KUBE-SEP-YCBVNDXW4SG5UDC3 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.5:8082
-A KUBE-SEP-YQ4MLBG6JI5O2LTN -s 10.244.0.6/32 -m comment --comment "default/myweb" -j KUBE-MARK-MASQ
-A KUBE-SEP-YQ4MLBG6JI5O2LTN -p tcp -m comment --comment "default/myweb" -m tcp -j DNAT --to-destination 10.244.0.6:8080
-A KUBE-SEP-ZNATZ23XMS7WU546 -s 10.244.1.4/32 -m comment --comment "default/web" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZNATZ23XMS7WU546 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.4:8082
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.105.77.88/32 -p tcp -m comment --comment "default/myweb cluster IP" -m tcp --dport 8080 -j KUBE-SVC-FCM76ICS4D7Y4C5Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.246.193/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 8082 -j KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-YQ4MLBG6JI5O2LTN
-A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MOKUSSRWIVOFT5Y7
-A KUBE-SVC-FCM76ICS4D7Y4C5Y -m comment --comment "default/myweb" -j KUBE-SEP-KYOPKKRUSGN4EPOL
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-ZNATZ23XMS7WU546
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YCBVNDXW4SG5UDC3
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-V6Q53FEPJ64J3EJW

If you have already switched the proxy mode to IPVS, inspect the rules this way instead. A healthy result looks like:

[root@k8s-node1 ~]# ipvsadm -ln
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
...
TCP 10.104.0.64:80 rr
  -> 10.244.169.135:80 Masq 1 0 0
  -> 10.244.36.73:80 Masq 1 0 0
  -> 10.244.169.136:80 Masq 1 0 0...

Use ipvsadm to view the IPVS rules; if the command is not present, install it with your package manager:

apt-get  install -y ipvsadm

The current state on k8s-master is as follows:

root@k8s-master:/data/k8s# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

In IPVS mode you should normally see rules like the example above; if there are none, kube-proxy is not working, or it is incompatible with the current operating system and rule generation failed. (Here the table is empty simply because this cluster is still running in iptables mode.)
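
You can also ask kube-proxy directly which mode it is running in. A sketch, assuming the default metrics bind port 10249 on the node:

# curl localhost:10249/proxyMode

At this point it should print iptables, and ipvs after the switch described below.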

Appendix: Service traffic flow diagram (the figure is only an illustration; the IPs are not the actual addresses).


Solving Problem 2: the Service cannot be accessed by its IP

The iptables-save output showed nothing obviously wrong, and I am not that familiar with the iptables mode anyway, so let's try switching kube-proxy from iptables to IPVS.

Perform the following steps on both nodes, k8s-master and k8s-node2.

Load the kernel modules

Check whether the IPVS kernel modules are loaded:

# lsmod|grep ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          106496  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              16384  2 raid456,ip_vs

If they are not loaded, load the IPVS-related modules with the following commands:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
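
To have these modules loaded automatically after a reboot as well, a sketch assuming a systemd-based distribution (systemd-modules-load reads /etc/modules-load.d); note that on kernels 4.19 and later the conntrack module is called nf_conntrack rather than nf_conntrack_ipv4:

# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF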

Change the kube-proxy configuration

# kubectl edit configmap kube-proxy -n kube-system

Find the following section:

    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null

mode was originally empty, which defaults to iptables; change it to ipvs.
scheduler defaults to empty, which means the round-robin (rr) load-balancing algorithm.
When you are done editing, save and exit.

Delete all of the kube-proxy Pods so they restart with the new configuration:

# kubectl get pods -n kube-system |grep kube-proxy
kube-proxy-hnp7c                     1/1     Running   0          30h
kube-proxy-njw8l                     1/1     Running   0          25h

root@k8s-node2:/data/k8s# kubectl delete pod   kube-proxy-hnp7c  -n kube-system
pod "kube-proxy-hnp7c" deleted
root@k8s-node2:/data/k8s# kubectl delete pod   kube-proxy-njw8l  -n kube-system 
pod "kube-proxy-njw8l" deleted

root@k8s-node2:/data/k8s#  kubectl get pods -n kube-system |grep kube-proxy
kube-proxy-4sv2c                     1/1     Running   0          36s
kube-proxy-w7kpm                     1/1     Running   0          16s

# kubectl logs kube-proxy-4sv2c  -n kube-system

root@k8s-node2:/data/k8s# kubectl logs kube-proxy-4sv2c  -n kube-system
I0822 09:36:38.757662       1 node.go:172] Successfully retrieved node IP: 192.168.0.3
I0822 09:36:38.757707       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.0.3), assume IPv4 operation
I0822 09:36:38.772798       1 server_others.go:258] Using ipvs Proxier.
W0822 09:36:38.774131       1 proxier.go:445] IPVS scheduler not specified, use rr by default
I0822 09:36:38.774388       1 server.go:650] Version: v1.20.5
I0822 09:36:38.774742       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0822 09:36:38.775051       1 config.go:224] Starting endpoint slice config controller
I0822 09:36:38.775127       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0822 09:36:38.775245       1 config.go:315] Starting service config controller
I0822 09:36:38.775290       1 shared_informer.go:240] Waiting for caches to sync for service config
I0822 09:36:38.875365       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0822 09:36:38.875616       1 shared_informer.go:247] Caches are synced for service config 

Seeing Using ipvs Proxier in the log confirms that IPVS mode is now in effect.

Run ipvsadm

Use ipvsadm to view the IPVS rules; if the command is not present, install it with apt-get.

root@k8s-master:/data/k8s# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:31330 rr
  -> 10.244.0.6:8080              Masq    1      0          0         
  -> 10.244.0.7:8080              Masq    1      0          0         
  -> 10.244.0.8:8080              Masq    1      0          0         
TCP  192.168.0.3:31303 rr
  -> 10.244.1.4:8082              Masq    1      0          0         
  -> 10.244.1.5:8082              Masq    1      0          0         
  -> 10.244.1.6:8082              Masq    1      0          0         
TCP  192.168.0.3:31330 rr
  -> 10.244.0.6:8080              Masq    1      0          0         
  -> 10.244.0.7:8080              Masq    1      0          0         
  -> 10.244.0.8:8080              Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.0.3:6443             Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0         
  -> 10.244.0.3:9153              Masq    1      0          0         
TCP  10.99.230.190:3306 rr
  -> 10.244.0.5:3306              Masq    1      0          0         
  -> 10.244.1.2:3306              Masq    1      0          0         
  -> 10.244.1.3:3306              Masq    1      0          0         
TCP  10.103.246.193:8082 rr
  -> 10.244.1.4:8082              Masq    1      0          0         
  -> 10.244.1.5:8082              Masq    1      0          0         
  -> 10.244.1.6:8082              Masq    1      0          0         
TCP  10.105.77.88:8080 rr
  -> 10.244.0.6:8080              Masq    1      0          0         
  -> 10.244.0.7:8080              Masq    1      0          0         
  -> 10.244.0.8:8080              Masq    1      0          0         
TCP  10.244.0.0:31303 rr
  -> 10.244.1.4:8082              Masq    1      0          0         
  -> 10.244.1.5:8082              Masq    1      0          0         
  -> 10.244.1.6:8082              Masq    1      0          0         
TCP  10.244.0.0:31330 rr
  -> 10.244.0.6:8080              Masq    1      0          0         
  -> 10.244.0.7:8080              Masq    1      0          0         
  -> 10.244.0.8:8080              Masq    1      0          0         
TCP  10.244.0.1:31303 rr
  -> 10.244.1.4:8082              Masq    1      0          0         
  -> 10.244.1.5:8082              Masq    1      0          0         
  -> 10.244.1.6:8082              Masq    1      0          0         
TCP  10.244.0.1:31330 rr
  -> 10.244.0.6:8080              Masq    1      0          0         
  -> 10.244.0.7:8080              Masq    1      0          0         
  -> 10.244.0.8:8080              Masq    1      0          0         
TCP  127.0.0.1:31303 rr
  -> 10.244.1.4:8082              Masq    1      0          0         
  -> 10.244.1.5:8082              Masq    1      0          0         
  -> 10.244.1.6:8082              Masq    1      0          0         
TCP  127.0.0.1:31330 rr
  -> 10.244.0.6:8080              Masq    1      0          0         
  -> 10.244.0.7:8080              Masq    1      0          0         
  -> 10.244.0.8:8080              Masq    1      0          0         
TCP  172.17.0.1:31303 rr
  -> 10.244.1.4:8082              Masq    1      0          0         
  -> 10.244.1.5:8082              Masq    1      0          0         
  -> 10.244.1.6:8082              Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          564       
  -> 10.244.0.3:53                Masq    1      0          563
root@k8s-master:/data/k8s# curl -I 10.103.246.193:8082
^C
root@k8s-master:/data/k8s# curl -I 114.67.107.240:8082
^C

Still not solved.

Underlying iptables settings

A Baidu search turned up an article about fixing the problem of Kubernetes Pods and containers under flannel being unable to communicate across hosts; following it, the configuration below was applied on k8s-master and k8s-node2.

# iptables -P INPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -F

# iptables -L -n

root@k8s-master:/data/k8s#  iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
JDCLOUDHIDS_IN_LIVE  all  --  0.0.0.0/0            0.0.0.0/0           
JDCLOUDHIDS_IN  all  --  0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16       

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
JDCLOUDHIDS_OUT_LIVE  all  --  0.0.0.0/0            0.0.0.0/0           
JDCLOUDHIDS_OUT  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (0 references)
target     prot opt source               destination         

Chain JDCLOUDHIDS_IN (1 references)
target     prot opt source               destination         

Chain JDCLOUDHIDS_IN_LIVE (1 references)
target     prot opt source               destination         

Chain JDCLOUDHIDS_OUT (1 references)
target     prot opt source               destination         

Chain JDCLOUDHIDS_OUT_LIVE (1 references)
target     prot opt source               destination         

Chain KUBE-EXTERNAL-SERVICES (0 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (0 references)
target     prot opt source               destination         

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-SERVICES (0 references)
target     prot opt source               destination

After repeating the tests, the Service IP can now be reached directly from the node, but port 8082 is still inaccessible and cross-node pings still fail:

root@k8s-master:/data/k8s# curl -I 10.103.246.193:8082
^C
root@k8s-master:/data/k8s# curl -I 114.67.107.240:8082
^C

root@k8s-master:/data/k8s# ping 10.244.1.3
PING 10.244.1.3 (10.244.1.3) 56(84) bytes of data.
^C
--- 10.244.1.3 ping statistics ---
12 packets transmitted, 0 received, 100% packet loss, time 10999ms

root@k8s-master:/data/k8s# ping 10.244.0.5
PING 10.244.0.5 (10.244.0.5) 56(84) bytes of data.
64 bytes from 10.244.0.5: icmp_seq=1 ttl=64 time=0.089 ms
64 bytes from 10.244.0.5: icmp_seq=2 ttl=64 time=0.082 ms
^C
--- 10.244.0.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.082/0.085/0.089/0.009 ms


# curl -I 10.103.246.193
HTTP/1.1 200 OK
Server: Tengine
Date: Sun, 22 Aug 2021 13:10:02 GMT
Content-Type: text/html
Content-Length: 1326
Last-Modified: Wed, 26 Apr 2017 08:03:47 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: "59005463-52e"
Accept-Ranges: bytes

References

(1) Common Kubernetes problems: troubleshooting a Service that cannot be accessed. https://mp.weixin.qq.com/s/oCRWkBquUnRLC36CPwoZ1Q

(2) Enabling IPVS in kube-proxy to replace iptables. https://www.shangmayuan.com/a/8fae7d6c18764194a8adce91.html
