In the previous chapter we installed KVM and the KVM management tools. Next we configure the KVM host to provide network and storage resources for its guests (VMs).
After installation, KVM uses /var/lib/libvirt/images as the default location for guest images. If you plan to migrate guests between hosts, you need to configure shared storage such as NFS, NAS, or Ceph.
You also need to configure a network bridge so that the guests can communicate with the outside world. The rest of this chapter describes how to configure the bridge and how to create a storage pool for guests and their images.
Configuring the network bridge
If you see a "virbr0" interface on your machine, it is the NAT interface that KVM creates automatically at install time; you can remove it with virsh net-destroy default or simply leave it alone.
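If you would rather remove the NAT network completely instead of just ignoring it, a minimal sketch of the usual sequence looks like this ("default" is the name libvirt gives the NAT network that owns virbr0):

[root@localhost ~]# virsh net-list --all                    # list all libvirt networks, including inactive ones
[root@localhost ~]# virsh net-destroy default               # stop the running NAT network
[root@localhost ~]# virsh net-autostart default --disable   # keep it from coming back at boot
[root@localhost ~]# virsh net-undefine default              # remove its definition entirely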
Below we manually configure a bridge attached to the interface "eth1". The steps are as follows:
- Log in to the host and check the network configuration;
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9f:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.88.144/24 brd 192.168.88.255 scope global dynamic eth0
       valid_lft 1248sec preferred_lft 1248sec
    inet6 fe80::ec2f:eb00:c9f6:4250/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9f:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.254/24 brd 192.168.57.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::e096:d49b:5afe:d1ca/64 scope link
       valid_lft forever preferred_lft forever
- Attach the interface "eth1" to the bridge;
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
NAME=eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=br0
- 創(chuàng)建網(wǎng)橋"br0"的配置文件隘蝎;
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-br0
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0
TYPE="Bridge"
DEVICE=br0
BOOTPROTO=static
IPADDR=192.168.57.254
NETMASK=255.255.255.0
ONBOOT="yes"
DELAY=0
STP=0
- Enable IPv4 forwarding;
[root@localhost ~]# grep "net.ipv4.ip_forward" /etc/sysctl.conf || echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
- Restart the network service so that the "br0" configuration takes effect;
[root@localhost ~]# systemctl network restart
Unknown operation 'network'.
[root@localhost ~]# systemctl restart network
[root@localhost ~]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: active (exited) since Thu 2017-06-08 14:37:50 CST; 6s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 6568 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
  Process: 6793 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Jun 08 14:37:49 localhost.localdomain systemd[1]: Starting LSB: Bring up/down networking...
Jun 08 14:37:49 localhost.localdomain network[6793]: Bringing up loopback interface:  [  OK  ]
Jun 08 14:37:50 localhost.localdomain network[6793]: Bringing up interface eth0:  Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
Jun 08 14:37:50 localhost.localdomain network[6793]: [  OK  ]
Jun 08 14:37:50 localhost.localdomain network[6793]: Bringing up interface eth1:  Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
Jun 08 14:37:50 localhost.localdomain network[6793]: [  OK  ]
Jun 08 14:37:50 localhost.localdomain network[6793]: Bringing up interface br0:  [  OK  ]
Jun 08 14:37:50 localhost.localdomain systemd[1]: Started LSB: Bring up/down networking.
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9f:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.88.144/24 brd 192.168.88.255 scope global dynamic eth0
       valid_lft 1786sec preferred_lft 1786sec
    inet6 fe80::ec2f:eb00:c9f6:4250/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 00:0c:29:f5:9f:bc brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:f5:9f:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.254/24 brd 192.168.57.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef5:9fbc/64 scope link
       valid_lft forever preferred_lft forever
- Check the bridge information; "br0" is now attached to "eth1":
[root@localhost ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29f59fbc       no              eth1
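Optionally, the bridge can also be registered with libvirt as a named network, so that guests can reference it by name instead of pointing at br0 directly. A minimal sketch, assuming a file /tmp/host-bridge.xml and the network name host-bridge (both placeholders):

# /tmp/host-bridge.xml (file path and network name are illustrative)
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

[root@localhost ~]# virsh net-define /tmp/host-bridge.xml
[root@localhost ~]# virsh net-start host-bridge
[root@localhost ~]# virsh net-autostart host-bridge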
Configuring the storage pool
The default path of the KVM storage pool is /var/lib/libvirt/images.
Shared storage for images is not strictly required, but it makes migrating guests between hosts straightforward. KVM already supports live migration of guests, which is comparable to VMware's vMotion feature.
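As a rough illustration of what shared storage enables: once two hosts mount the same image directory, a running guest can be moved between them with a single virsh command. The guest name vm1 and the destination host kvm-node2 below are placeholders:

[root@kvm-node1 ~]# virsh migrate --live --persistent vm1 qemu+ssh://kvm-node2/system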
KVM supports many kinds of shared storage as the backing for a storage pool.
In this chapter we use NFS, so first let's set up an NFS environment.
Setting up the NFS environment
# Install the nfs rpm packages
[root@nfs-server ~]# yum install -y nfs-utils
# Configure the shared directory and allow access from the local subnet
[root@nfs-server ~]# mkdir /images
[root@nfs-server ~]# echo -e "/images\t192.168.57.0/24(rw,no_root_squash)" > /etc/exports
# Start the NFS services
[root@nfs-server ~]# systemctl enable rpcbind nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@nfs-server ~]# systemctl start rpcbind nfs-server
[root@nfs-server ~]# systemctl status rpcbind nfs-server
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
Active: active (running) since Thu 2017-06-08 15:38:46 CST; 15s ago
Process: 3461 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
Main PID: 3467 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─3467 /sbin/rpcbind -w
Jun 08 15:38:46 nfs-server systemd[1]: Starting RPC bind service...
Jun 08 15:38:46 nfs-server systemd[1]: Started RPC bind service.
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2017-06-08 15:38:57 CST; 5s ago
Process: 3473 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 3472 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3473 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Jun 08 15:38:57 nfs-server systemd[1]: Starting NFS server and services...
Jun 08 15:38:57 nfs-server systemd[1]: Started NFS server and services.
# Open the firewall for NFS
[root@nfs-server ~]# firewall-cmd --add-service=nfs --permanent
success
[root@nfs-server ~]# firewall-cmd --reload
success
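As a quick sanity check on the server, exportfs can confirm that the directory is actually exported with the intended options:

[root@nfs-server ~]# exportfs -v    # should list /images exported to 192.168.57.0/24 with rw and no_root_squash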
Configuring the KVM storage pool
- Connect the KVM host to the NFS server.
[root@kvm-node1 ~]# yum install nfs-utils -y
[root@kvm-node1 ~]# systemctl enable rpcbind && systemctl start rpcbind
[root@kvm-node1 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since Thu 2017-06-08 15:56:56 CST; 16s ago
  Process: 8744 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 8745 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─8745 /sbin/rpcbind -w

Jun 08 15:56:56 kvm-node1 systemd[1]: Starting RPC bind service...
Jun 08 15:56:56 kvm-node1 systemd[1]: Started RPC bind service.
[root@kvm-node1 ~]# mount -t nfs 192.168.57.200:/images /var/lib/libvirt/images
[root@kvm-node1 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root     xfs        17G  4.2G   13G  25% /
devtmpfs                devtmpfs  478M     0  478M   0% /dev
tmpfs                   tmpfs     489M   88K  489M   1% /dev/shm
tmpfs                   tmpfs     489M  7.1M  482M   2% /run
tmpfs                   tmpfs     489M     0  489M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  140M  875M  14% /boot
tmpfs                   tmpfs      98M  8.0K   98M   1% /run/user/0
192.168.57.200:/images  nfs4       17G  4.2G   13G  25% /var/lib/libvirt/images
[root@kvm-node1 ~]# echo -e "192.168.57.200:/images\t/var/lib/libvirt/images\tnfs\tdefaults\t0\t0" >> /etc/fstab
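Mounting the export by hand (and via /etc/fstab) is only one approach; libvirt can also manage the NFS mount itself through a pool of type "netfs". A sketch of that alternative, with the pool name nfs-pool chosen arbitrarily; in the next step we stick with the simpler fstab mount and the default directory pool:

[root@kvm-node1 ~]# virsh pool-define-as nfs-pool netfs --source-host 192.168.57.200 --source-path /images --target /var/lib/libvirt/images
[root@kvm-node1 ~]# virsh pool-build nfs-pool
[root@kvm-node1 ~]# virsh pool-start nfs-pool
[root@kvm-node1 ~]# virsh pool-autostart nfs-pool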
- Configure the KVM storage pool.
[root@kvm-node1 ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------

[root@kvm-node1 ~]# virsh pool-build default
Pool default built

[root@kvm-node1 ~]# virsh pool-start default
Pool default started

[root@kvm-node1 ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes

[root@kvm-node1 ~]# virsh pool-info default
Name:           default
UUID:           d84dc74b-b0f4-4197-92b0-d9025620f0a4
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       16.99 GiB
Allocation:     4.10 GiB
Available:      12.88 GiB

[root@kvm-node1 ~]# df -Th /var/lib/libvirt/images/
Filesystem              Type  Size  Used Avail Use% Mounted on
192.168.57.200:/images  nfs4   17G  4.2G   13G  25% /var/lib/libvirt/images
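With the pool active, guest disks can be allocated from it directly through virsh; the volume name and size below are arbitrary examples:

[root@kvm-node1 ~]# virsh vol-create-as default vm1.qcow2 10G --format qcow2   # create a 10 GiB qcow2 volume in the default pool
[root@kvm-node1 ~]# virsh vol-list default                                     # list volumes in the pool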