MetalLB Debugging and Analysis


1 Environment

Cluster nodes

[root@master ~]# kubectl  get node -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
master   Ready    master   19h   v1.17.5   192.168.26.10   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.9
node1    Ready    <none>   18h   v1.17.5   192.168.26.11   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.9
node2    Ready    <none>   18h   v1.17.5   192.168.26.12   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.9
[root@master ~]#

Pod deployment

[root@master ~]# kubectl  get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
metallb-controller-75bf779d4f-585mp   1/1     Running   0          14s   10.244.104.8     node2    <none>           <none>
metallb-speaker-4cnnj                 1/1     Running   0          14s   192.168.26.12    node2    <none>           <none>
metallb-speaker-kkd5n                 1/1     Running   0          14s   192.168.26.11    node1    <none>           <none>
metallb-speaker-w8bs4                 1/1     Running   0          14s   192.168.26.10    master   <none>           <none>
my-nginx-f97c96f6d-dfnj9              1/1     Running   0          27s   10.244.166.131   node1    <none>           <none>

Testing the LB Service

my-nginx is a LoadBalancer-type Service; its assigned external IP, 192.168.26.190, is on the host network segment.

[root@master ~]# kubectl  get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>           443/TCP        19h
my-nginx     LoadBalancer   10.101.85.30   192.168.26.190   80:32366/TCP   17s
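The LB IP comes from an address pool in MetalLB's configuration (the logs later in this post load the ConfigMap default/metallb, and the speaker reports pool "default"). A minimal v0.8-style layer2 pool consistent with that might look like the sketch below; only 192.168.26.190 is confirmed by the output above, the rest of the range is an assumption.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: default    # matches the "default/metallb" configmap seen in the logs
  name: metallb
data:
  config: |
    address-pools:
    - name: default     # pool name "default" appears in the speaker log
      protocol: layer2
      addresses:
      - 192.168.26.190-192.168.26.250   # assumed range; .190 is the first IP handed out
```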

2 Node Details

Master node

  • The eth0, eth1, and docker0 interfaces answer ARP broadcasts.

  • kube-ipvs0 is flagged NOARP, so it never answers ARP, even though the LB IP is bound to it.

[root@master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:0e:4e:dd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85378sec preferred_lft 85378sec
    inet6 fe80::8fb:7623:d2f6:25e4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 10:00:00:00:00:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.10/24 brd 192.168.26.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1200:ff:fe00:a0/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:df:3f:fc:54 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 42:d5:d5:cc:8d:d7 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 72:f1:c0:46:00:c2 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.139.35/32 brd 10.101.139.35 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.26.190/32 brd 192.168.26.190 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: calicebcde35cc6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
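The two bullets above boil down to one check: an interface whose flags include NOARP will not answer for the addresses bound to it. A small sketch that parses `ip addr` output and reports which interfaces can answer ARP (hypothetical helper, for illustration only):

```python
import re

def arp_capable(ip_addr_output):
    """Map interface name -> True if the interface can answer ARP
    (i.e. its flag list does not contain NOARP)."""
    result = {}
    for line in ip_addr_output.splitlines():
        # interface header lines look like: "6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 ..."
        m = re.match(r"\d+: ([\w@.-]+): <([^>]*)>", line)
        if m:
            name = m.group(1).split("@")[0]   # strip "@if3" style suffixes
            flags = m.group(2).split(",")
            result[name] = "NOARP" not in flags
    return result
```

Run against the master's output above, this reports eth0/eth1/docker0 as ARP-capable and kube-ipvs0 (and dummy0) as silent, which is why the LB IP on kube-ipvs0 alone is not reachable without a speaker.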

Node1

[root@node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:0e:4e:dd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85306sec preferred_lft 85306sec
    inet6 fe80::7b7f:9e4b:166d:56cf/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 10:00:00:00:00:b1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.11/24 brd 192.168.26.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1200:ff:fe00:b1/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a9:ab:b7:d8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:d4:68:89:70:4f brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 0a:ed:a7:28:3c:5a brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.139.35/32 brd 10.101.139.35 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.26.190/32 brd 192.168.26.190 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.166.128/32 brd 10.244.166.128 scope global tunl0
       valid_lft forever preferred_lft forever

Node2

[root@node2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:0e:4e:dd brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85054sec preferred_lft 85054sec
    inet6 fe80::3b32:152f:273d:43c8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 10:00:00:00:00:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.12/24 brd 192.168.26.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1200:ff:fe00:b2/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:09:e0:d2:f0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:6f:d9:a9:99:93 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 12:42:77:11:42:72 brd ff:ff:ff:ff:ff:ff
    inet 10.101.139.35/32 brd 10.101.139.35 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.26.190/32 brd 192.168.26.190 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.104.0/32 brd 10.244.104.0 scope global tunl0
       valid_lft forever preferred_lft forever
8: cali5f2d86330cb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
9: cali26eb7e820f9@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever

Test node (outside the cluster)

  • 192.168.26.190 resolves to MAC 10:00:00:00:00:b1, which is node1's eth1 interface.
[root@out ~]# arp -an
? (192.168.26.11) at 10:00:00:00:00:b1 [ether] on eth1
? (10.0.2.3) at 52:54:00:12:35:03 [ether] on eth0
? (192.168.26.12) at 10:00:00:00:00:b2 [ether] on eth1
? (192.168.26.10) at 10:00:00:00:00:a0 [ether] on eth1
? (192.168.26.190) at 10:00:00:00:00:b1 [ether] on eth1
? (10.0.2.2) at 52:54:00:12:35:02 [ether] on eth0
[root@out ~]# curl 192.168.26.190
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
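The lookup done by hand above — match the LB IP's entry in `arp -an` against each node's eth1 MAC — can be sketched as a small parser (hypothetical helper, for illustration):

```python
import re

# lines look like: "? (192.168.26.190) at 10:00:00:00:00:b1 [ether] on eth1"
ARP_RE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-fA-F:]+)")

def lb_owner(arp_output, lb_ip, node_by_mac):
    """Return the node whose MAC currently answers ARP for lb_ip,
    or None if the IP is not in the ARP table."""
    for m in ARP_RE.finditer(arp_output):
        if m.group("ip") == lb_ip:
            return node_by_mac.get(m.group("mac").lower())
    return None
```

Feeding it the ARP table and the eth1 MACs from section 2 identifies node1 as the current announcer for 192.168.26.190.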

3 How It Works

MetalLB has two components: the controller and the speaker.

  • The controller assigns LoadBalancer IPs to Services.
  • The speaker handles ARP for those IPs (gratuitous ARP announcements and ARP replies).
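What "announcing" means on the wire: the elected speaker answers who-has queries for the LB IP, and sends an unsolicited (gratuitous) ARP to the broadcast MAC so switches and neighbors update their tables. A minimal sketch of such a frame, built by hand (this is not MetalLB's code, just the standard ARP layout):

```python
import struct

def _ip_bytes(ip):
    return bytes(int(p) for p in ip.split("."))

def build_arp_reply(sender_mac, sender_ip, target_mac, target_ip):
    """Raw Ethernet frame carrying an ARP reply: 'sender_ip is at sender_mac'."""
    eth = target_mac + sender_mac + b"\x08\x06"          # dst MAC, src MAC, EtherType=ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)      # ethernet, IPv4, hlen=6, plen=4, op=reply
    arp += sender_mac + _ip_bytes(sender_ip)             # sender hardware/protocol address
    arp += target_mac + _ip_bytes(target_ip)             # target hardware/protocol address
    return eth + arp

def build_gratuitous_arp(sender_mac, ip):
    """Unsolicited announcement to the broadcast MAC; sender and target IP are
    both the announced IP (one common gratuitous-ARP form)."""
    return build_arp_reply(sender_mac, ip, b"\xff" * 6, ip)
```

With node1's eth1 MAC, `build_gratuitous_arp(bytes.fromhex("1000000000b1"), "192.168.26.190")` yields the 42-byte frame that makes the test node's ARP table point .190 at node1.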

Controller

The key log line: the controller assigns 192.168.26.190 to default/my-nginx.

{"caller":"service.go:98","event":"ipAllocated","ip":"192.168.26.190","msg":"IP address assigned by controller","service":"default/my-nginx","ts":"2020-05-22T02:17:11.742233189Z"}

[root@master ~]# kubectl  logs metallb-controller-75bf779d4f-585mp
{"branch":"HEAD","caller":"main.go:142","commit":"v0.8.1","msg":"MetalLB controller starting version 0.8.1 (commit v0.8.1, branch HEAD)","ts":"2020-05-22T02:17:11.577936238Z","version":"0.8.1"}
{"caller":"main.go:108","configmap":"default/metallb","event":"startUpdate","msg":"start of config update","ts":"2020-05-22T02:17:11.686448912Z"}
{"caller":"main.go:121","configmap":"default/metallb","event":"endUpdate","msg":"end of config update","ts":"2020-05-22T02:17:11.686475979Z"}
{"caller":"k8s.go:376","configmap":"default/metallb","event":"configLoaded","msg":"config (re)loaded","ts":"2020-05-22T02:17:11.68648444Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686507792Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686521668Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686549849Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.686559142Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.686570384Z"}
{"caller":"service.go:85","error":"controller not synced","msg":"controller not synced yet, cannot allocate IP; will retry after sync","op":"allocateIP","service":"default/my-nginx","ts":"2020-05-22T02:17:11.686579009Z"}
{"caller":"main.go:72","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.686587521Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.686598889Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.686606378Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.68662786Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.686634351Z"}
{"caller":"main.go:126","event":"stateSynced","msg":"controller synced, can allocate IPs now","ts":"2020-05-22T02:17:11.686645509Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698513135Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698558483Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698596972Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:11.698605272Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.698617575Z"}
{"caller":"service.go:33","event":"clearAssignment","msg":"not a LoadBalancer","reason":"notLoadBalancer","service":"default/kubernetes","ts":"2020-05-22T02:17:11.703655381Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"default/kubernetes","ts":"2020-05-22T02:17:11.703710198Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:11.703726179Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.703745316Z"}
{"caller":"service.go:98","event":"ipAllocated","ip":"192.168.26.190","msg":"IP address assigned by controller","service":"default/my-nginx","ts":"2020-05-22T02:17:11.742233189Z"}

Speaker logs

From the ARP reply, looking up the MAC for 192.168.26.190 tells us which node's speaker is doing the announcing.

The pod runs on node1 (192.168.26.11). ARP resolves 192.168.26.190 to node1's eth1 MAC, so requests for the LB IP are delivered to node1.

{"caller":"main.go:340","event":"serviceAnnounced","ip":"192.168.26.190","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"default/my-nginx","ts":"2020-05-22T02:17:11.74920593Z"}

[root@master ~]# kubectl  logs metallb-speaker-kkd5n
{"branch":"main","caller":"main.go:84","commit":"734ee674","msg":"MetalLB speaker starting (commit 734ee674, branch main)","ts":"2020-05-22T02:17:10.094496521Z","version":""}
{"caller":"main.go:105","msg":"Not starting fast dead node detection (MemberList), need ml-bindaddr / ml-labels / ml-namespace config","op":"startup","ts":"2020-05-22T02:17:10.094565059Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"eth0","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.096927728Z"}
{"caller":"announcer.go:112","event":"createNDPResponder","interface":"eth0","msg":"created NDP responder for interface","ts":"2020-05-22T02:17:10.097172952Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"eth1","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.097312492Z"}
{"caller":"announcer.go:112","event":"createNDPResponder","interface":"eth1","msg":"created NDP responder for interface","ts":"2020-05-22T02:17:10.097523494Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"docker0","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.097732881Z"}
{"caller":"announcer.go:103","event":"createARPResponder","interface":"cali37e7c6d2053","msg":"created ARP responder for interface","ts":"2020-05-22T02:17:10.098019843Z"}
{"caller":"announcer.go:112","event":"createNDPResponder","interface":"cali37e7c6d2053","msg":"created NDP responder for interface","ts":"2020-05-22T02:17:10.098082182Z"}
{"caller":"main.go:383","configmap":"default/metallb","event":"startUpdate","msg":"start of config update","ts":"2020-05-22T02:17:10.234129838Z"}
{"caller":"main.go:407","configmap":"default/metallb","event":"endUpdate","msg":"end of config update","ts":"2020-05-22T02:17:10.234162307Z"}
{"caller":"k8s.go:402","configmap":"default/metallb","event":"configLoaded","msg":"config (re)loaded","ts":"2020-05-22T02:17:10.234171521Z"}
{"caller":"bgp_controller.go:285","event":"nodeLabelsChanged","msg":"Node labels changed, resyncing BGP peers","ts":"2020-05-22T02:17:10.234193311Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.234204251Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.234212547Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.234221455Z"}
{"caller":"main.go:277","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.234227764Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.234235443Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.234243163Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.23948262Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2020-05-22T02:17:10.239523709Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.239534319Z"}
{"caller":"main.go:277","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:10.239540994Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.239550003Z"}
{"caller":"main.go:268","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2020-05-22T02:17:10.239556402Z"}
{"caller":"main.go:264","event":"startUpdate","msg":"start of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.749145884Z"}
{"caller":"main.go:340","event":"serviceAnnounced","ip":"192.168.26.190","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"default/my-nginx","ts":"2020-05-22T02:17:11.74920593Z"}
{"caller":"main.go:343","event":"endUpdate","msg":"end of service update","service":"default/my-nginx","ts":"2020-05-22T02:17:11.749256307Z"}

4 Failure Handling

Speaker pod failure

A speaker pod failure can be simulated by adding a nodeSelector that forces the pod off its node.

  • If the elected speaker pod dies, kube-proxy keeps forwarding traffic, so the service is not interrupted.

  • Since the Service itself has not changed, the MAC address answering for 192.168.26.190 does not change.

Node failure

The ARP response changes: a speaker on a surviving node takes over announcing the IP.

speaker/main.go:196 watchMemberListEvents

Node eviction

The speakers watch node-membership changes (watchMemberListEvents). When the current node is removed with kubectl delete node (taking its speaker pod with it), the other speaker pods detect the change and re-watch the apiserver, so the LB IP is re-announced from a surviving node.
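The failover idea can be sketched as: pick one announcer per service from the live node set, and when that node disappears, recompute and re-announce. The hash-based election below is a hypothetical simplification for illustration — MetalLB's real election lives in the speaker code referenced above and differs in detail:

```python
import hashlib

def choose_announcer(service, nodes):
    """Deterministically pick one node per service from the live node set."""
    return min(nodes, key=lambda n: hashlib.sha256((service + "#" + n).encode()).hexdigest())

def on_node_removed(service, nodes, removed):
    """Membership change: recompute the owner from the surviving nodes.
    The new owner would then send a gratuitous ARP for the LB IP."""
    remaining = [n for n in nodes if n != removed]
    return choose_announcer(service, remaining)
```

The point of the determinism is that every surviving speaker computes the same answer independently, so exactly one of them starts answering ARP for the IP.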

5 Code Analysis

https://github.com/huiwq1990/metallb/commits/hg

6 References

https://www.objectif-libre.com/en/blog/2019/06/11/metallb/

https://metallb.universe.tf/
