[docker networking] Docker cross-host networking with OVS GRE, test 2

1. Introduction

This article continues "[docker networking] Docker cross-host networking with OVS GRE, test 1" and keeps testing cross-host container access built on OVS GRE tunnels. In test 1 the containers on the two hosts shared a single subnet; this article tests how to make the same setup work when the containers live in different subnets.

A basic understanding of Docker's network types is assumed; see [mydocker]---docker的四種網(wǎng)絡(luò)模型與原理實(shí)現(xiàn)(1) and [mydocker]---docker的四種網(wǎng)絡(luò)模型與原理實(shí)現(xiàn)(2).

1.1 Current environment

vm1

[root@vm1 ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@vm1 ~]# iptables -t nat -F
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
[root@vm1 ~]# ovs-vsctl show
91e815a1-1021-4c97-a21c-893ab8c28e37
    ovs_version: "2.5.1"
[root@vm1 ~]# 

vm2

[root@vm2 ~]# echo 0 > /proc/sys/net/ipv4/ip_forward
[root@vm2 ~]# 
[root@vm2 ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@vm2 ~]# iptables -t nat -F
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
[root@vm2 ~]# ovs-vsctl show
533800d4-246f-4099-a776-8254610db91f
    ovs_version: "2.5.1"
[root@vm2 ~]# 

2. OVS with containers in different subnets

2.1 Setup on vm1

[root@vm1 ~]# ip link add docker0 type bridge
[root@vm1 ~]# ip addr add 172.17.1.254/24 dev docker0
[root@vm1 ~]# ip link set docker0 up
[root@vm1 ~]# ip netns add ns1
[root@vm1 ~]# ip link add veth0 type veth peer name veth1
[root@vm1 ~]# brctl addif docker0 veth0
[root@vm1 ~]# ip link set veth1 netns ns1
[root@vm1 ~]# ip link set veth0 up
[root@vm1 ~]# ip netns exec ns1 sh
sh-4.2# ip addr add 172.17.1.1/24 dev veth1
sh-4.2# ip link set veth1 up
sh-4.2# ip link set lo up
sh-4.2# route add default gw 172.17.1.254
sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.1.254    0.0.0.0         UG    0      0        0 veth1
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 veth1
sh-4.2# ping -c 1 172.17.1.254
PING 172.17.1.254 (172.17.1.254) 56(84) bytes of data.
64 bytes from 172.17.1.254: icmp_seq=1 ttl=64 time=0.078 ms

--- 172.17.1.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
sh-4.2# ping -c 1 172.19.0.12
PING 172.19.0.12 (172.19.0.12) 56(84) bytes of data.
64 bytes from 172.19.0.12: icmp_seq=1 ttl=64 time=0.048 ms

--- 172.19.0.12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
sh-4.2# exit
exit
[root@vm1 ~]# 
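The same vm1 setup can be sketched with iproute2 alone: `brctl` comes from the legacy bridge-utils package, and `ip link set ... master` is its modern equivalent. This is a sketch, not part of the original transcript; it must run as root, and the names match the steps above.

```shell
# Bridge + namespace setup on vm1, iproute2-only variant.
ip link add docker0 type bridge
ip addr add 172.17.1.254/24 dev docker0
ip link set docker0 up

ip netns add ns1
ip link add veth0 type veth peer name veth1
ip link set veth0 master docker0        # replaces: brctl addif docker0 veth0
ip link set veth1 netns ns1
ip link set veth0 up

# Configure the namespace end without spawning an interactive shell.
ip netns exec ns1 ip addr add 172.17.1.1/24 dev veth1
ip netns exec ns1 ip link set veth1 up
ip netns exec ns1 ip link set lo up
ip netns exec ns1 ip route add default via 172.17.1.254
```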

2.2 Setup on vm2

[root@vm2 ~]# ip link add docker0 type bridge
[root@vm2 ~]# ip addr add 192.168.2.254/24 dev docker0
[root@vm2 ~]# ip link set docker0 up
[root@vm2 ~]# ip netns add ns1
[root@vm2 ~]# ip link add veth0 type veth peer name veth1
[root@vm2 ~]# brctl addif docker0 veth0
[root@vm2 ~]# ip link set veth1 netns ns1
[root@vm2 ~]# ip link set veth0 up
[root@vm2 ~]# ip netns exec ns1 sh
sh-4.2# ip addr add 192.168.2.1/24 dev veth1
sh-4.2# ip link set veth1 up
sh-4.2# ip link set lo up
sh-4.2# route add default gw 192.168.2.254
sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.2.254   0.0.0.0         UG    0      0        0 veth1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 veth1
sh-4.2# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.
64 bytes from 192.168.2.254: icmp_seq=1 ttl=64 time=0.052 ms

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
sh-4.2# ping -c 1 172.19.0.8
PING 172.19.0.8 (172.19.0.8) 56(84) bytes of data.
64 bytes from 172.19.0.8: icmp_seq=1 ttl=64 time=0.031 ms

--- 172.19.0.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
sh-4.2# exit
exit
[root@vm2 ~]# 

2.3 Adding the GRE configuration

Obviously, at this point the two sides cannot ping each other:

[root@vm1 ~]# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# 
2.3.1 Add an OVS bridge

vm1

[root@vm1 ~]# ovs-vsctl add-br ovs1
[root@vm1 ~]# 
[root@vm1 ~]# ovs-vsctl add-port ovs1 rou1 -- set interface rou1 type=internal
[root@vm1 ~]# 
[root@vm1 ~]# ifconfig rou1 192.168.1.1/24
[root@vm1 ~]# 
[root@vm1 ~]# ovs-vsctl show
91e815a1-1021-4c97-a21c-893ab8c28e37
    Bridge "ovs1"
        Port "rou1"
            Interface "rou1"
                type: internal
        Port "ovs1"
            Interface "ovs1"
                type: internal
    ovs_version: "2.5.1"
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1
[root@vm1 ~]# 

vm2

[root@vm2 ~]# ovs-vsctl add-br ovs2
[root@vm2 ~]# ovs-vsctl add-port ovs2 rou2 -- set interface rou2 type=internal
[root@vm2 ~]# ifconfig rou2 192.168.1.2/24
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou2
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
[root@vm2 ~]# ovs-vsctl show
533800d4-246f-4099-a776-8254610db91f
    Bridge "ovs2"
        Port "rou2"
            Interface "rou2"
                type: internal
        Port "ovs2"
            Interface "ovs2"
                type: internal
    ovs_version: "2.5.1"

From vm1, try to reach rou2 on vm2 (the local rou1 responds, but rou2 is unreachable until the tunnel exists):

[root@vm1 ~]# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.036 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
[root@vm1 ~]# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
From 192.168.1.1 icmp_seq=1 Destination Host Unreachable

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[root@vm1 ~]# 
2.3.2 Add the GRE tunnel

vm1

[root@vm1 ~]# ovs-vsctl add-port ovs1 gre1 -- set interface gre1 type=gre options:remote_ip=172.19.0.8
[root@vm1 ~]# 
[root@vm1 ~]# ovs-vsctl show
91e815a1-1021-4c97-a21c-893ab8c28e37
    Bridge "ovs1"
        Port "gre1"
            Interface "gre1"
                type: gre
                options: {remote_ip="172.19.0.8"}
        Port "rou1"
            Interface "rou1"
                type: internal
        Port "ovs1"
            Interface "ovs1"
                type: internal
    ovs_version: "2.5.1"
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1

vm2

[root@vm2 ~]# ovs-vsctl add-port ovs2 gre2 -- set interface gre2 type=gre options:remote_ip=172.19.0.12
[root@vm2 ~]# ovs-vsctl show
533800d4-246f-4099-a776-8254610db91f
    Bridge "ovs2"
        Port "gre2"
            Interface "gre2"
                type: gre
                options: {remote_ip="172.19.0.12"}
        Port "rou2"
            Interface "rou2"
                type: internal
        Port "ovs2"
            Interface "ovs2"
                type: internal
    ovs_version: "2.5.1"
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou2
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
[root@vm2 ~]# 
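One caveat not covered in the transcript: GRE encapsulation prepends an outer IPv4 header plus a GRE header to every packet, so full-size frames from the containers can exceed eth0's MTU and end up fragmented or dropped. A quick sketch of the overhead, assuming the base GRE header with no key or checksum options (each of those would add 4 more bytes):

```shell
# GRE-over-IPv4 overhead calculation (assumed values: plain GRE header).
PHYS_MTU=1500   # MTU of eth0 on vm1/vm2
OUTER_IP=20     # outer IPv4 header added by the tunnel
GRE_HDR=4       # base GRE header

TUNNEL_MTU=$((PHYS_MTU - OUTER_IP - GRE_HDR))
echo "$TUNNEL_MTU"   # prints 1476
```

If large transfers stall through the tunnel while small pings succeed, lowering the MTU on docker0/veth1 to this value is a common workaround.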

Test

// ping rou2 on vm2
[root@vm1 ~]# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=1.30 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.309/1.309/1.309/0.000 ms
// ping the local rou1
[root@vm1 ~]# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.026 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
[root@vm1 ~]# 

vm2

// ping rou1 on vm1
[root@vm2 ~]# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.691 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms
// ping the local rou2
[root@vm2 ~]# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.028 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
[root@vm2 ~]# 
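To confirm the ping really traverses the tunnel rather than some other path, the encapsulated traffic can be observed on the underlay interface. This is a diagnostic sketch (GRE is IP protocol 47, and the capture must run as root):

```shell
# On vm2, while vm1 pings 192.168.1.2: each echo request/reply should
# appear as a GRE-encapsulated packet between 172.19.0.12 and 172.19.0.8.
tcpdump -n -i eth0 ip proto 47
```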
2.3.3 Attach the OVS bridge to docker0

vm1

[root@vm1 ~]# brctl addif docker0 ovs1
[root@vm1 ~]# ip link set ovs1 up
[root@vm1 ~]# bridge link
16: veth0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
22: ovs1 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 100 
[root@vm1 ~]# 

vm2

[root@vm2 ~]# brctl addif docker0 ovs2
[root@vm2 ~]# ip link set ovs2 up
[root@vm2 ~]# bridge link
16: veth0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
22: ovs2 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 100 
[root@vm2 ~]# 

Test: the pings still fail, because the two sides are not on the same network; this behavior was already demonstrated in test 1 of this series.

[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@vm1 ~]# 
2.3.4 Add routes

vm1

[root@vm1 ~]# route add -net 192.168.2.0/24 dev rou1
[root@vm1 ~]# 
[root@vm1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 rou1

vm2

[root@vm2 ~]# route add -net 172.17.1.0/24 dev rou2
[root@vm2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.0.1      0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 rou2
172.19.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 rou2
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
[root@vm2 ~]# 
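The effect of `route add -net 192.168.2.0/24 dev rou1` is destination-based routing: the kernel ANDs a packet's destination address with each route's netmask and compares the result to the route's network, preferring the longest matching prefix. A self-contained bash sketch of that comparison (the `ip_to_int` helper is purely illustrative, not a real tool):

```shell
# Convert dotted-quad IPv4 to a 32-bit integer (illustrative helper).
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

DST=$(ip_to_int 192.168.2.1)      # destination the container pings
NET=$(ip_to_int 192.168.2.0)      # network of the added route
MASK=$(ip_to_int 255.255.255.0)   # /24 netmask

# The route matches when (destination & mask) == network.
if [ $(( DST & MASK )) -eq "$NET" ]; then
  echo "match: send via rou1"     # prints match: send via rou1
fi
```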

Test: access from vm1

// pinging docker0 (192.168.2.254) on vm2 now works
[root@vm1 ~]# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.
64 bytes from 192.168.2.254: icmp_seq=1 ttl=64 time=1.11 ms

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.111/1.111/1.111/0.000 ms
// pinging ns1 on vm2 still fails, because ip_forward is not enabled on vm2
[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

Enable ip_forward on vm2:

[root@vm2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

Test again:

[root@vm1 ~]# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=63 time=0.709 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms

Likewise, ip_forward also needs to be enabled on vm1:

[root@vm1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
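Note that writing to /proc does not survive a reboot. A sketch of making the setting persistent, assuming a distro that reads /etc/sysctl.d (the file name here is arbitrary):

```shell
# Persist IP forwarding across reboots, then apply it immediately.
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl -p /etc/sysctl.d/99-ip-forward.conf
```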
2.3.5 Access between containers

[root@vm1 ~]# ip netns exec ns1 sh
sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.1.254    0.0.0.0         UG    0      0        0 veth1
172.17.1.0      0.0.0.0         255.255.255.0   U     0      0        0 veth1
// ping docker0 on this host
sh-4.2# ping -c 1 172.17.1.254
PING 172.17.1.254 (172.17.1.254) 56(84) bytes of data.
64 bytes from 172.17.1.254: icmp_seq=1 ttl=64 time=0.050 ms

--- 172.17.1.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
// ping ns1's own address
sh-4.2# ping -c 1 172.17.1.1
PING 172.17.1.1 (172.17.1.1) 56(84) bytes of data.
64 bytes from 172.17.1.1: icmp_seq=1 ttl=64 time=0.026 ms

--- 172.17.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
// ping this host's eth0 IP
sh-4.2# ping -c 1 172.19.0.12
PING 172.19.0.12 (172.19.0.12) 56(84) bytes of data.
64 bytes from 172.19.0.12: icmp_seq=1 ttl=64 time=0.038 ms

--- 172.19.0.12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
// ping rou1 on this host
sh-4.2# ping -c 1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.044 ms

--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
// ping rou2 on vm2
sh-4.2# ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.760 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.760/0.760/0.760/0.000 ms
// ping docker0 on vm2
sh-4.2# ping -c 1 192.168.2.254
PING 192.168.2.254 (192.168.2.254) 56(84) bytes of data.
64 bytes from 192.168.2.254: icmp_seq=1 ttl=64 time=0.353 ms

--- 192.168.2.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms
// ping ns1 on vm2
sh-4.2# ping -c 1 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=63 time=0.624 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms
// ping vm2's host IP: fails, an iptables rule is needed
sh-4.2# ping -c 1 172.19.0.8
PING 172.19.0.8 (172.19.0.8) 56(84) bytes of data.

--- 172.19.0.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

2.3.6 Add an iptables rule

[root@vm1 ~]# iptables -t nat -A POSTROUTING -s 172.17.1.0/24 -o eth0 -j MASQUERADE
[root@vm1 ~]# 
[root@vm1 ~]# ip netns exec ns1 ping -c 1 172.19.0.8
PING 172.19.0.8 (172.19.0.8) 56(84) bytes of data.
64 bytes from 172.19.0.8: icmp_seq=1 ttl=63 time=0.380 ms

--- 172.19.0.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms
[root@vm1 ~]# 

Add the corresponding rule on vm2:

[root@vm2 ~]# iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE
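The MASQUERADE rule is needed because packets heading for the 172.19.0.0/20 underlay would otherwise carry the container's private source address, which is not valid on that network; rewriting the source to eth0's address fixes the reply path. To check that the rule is actually matching, its packet counters can be inspected (a diagnostic sketch):

```shell
# -v shows per-rule packet/byte counters; they should increase after
# another ping from the namespace.
iptables -t nat -L POSTROUTING -n -v
```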
