libvirt has three management interfaces:
- Command line: virsh
- GUI: virt-manager
- Web: webvirtmgr
Command-line tool: virsh
1. Check whether the host supports KVM virtualization:
egrep '(vmx|svm)' --color /proc/cpuinfo
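The grep above only eyeballs the CPU flags; a small sketch that makes the result scriptable (count_vt_flags is a hypothetical helper name, not a standard tool):

```shell
#!/bin/sh
# Sketch: count vmx (Intel) / svm (AMD) flags in a cpuinfo-style file.
# count_vt_flags is our own helper, not a standard utility.
count_vt_flags() {
    grep -E -c '(vmx|svm)' "$1"
}

# On a real host: count_vt_flags /proc/cpuinfo
# A result of 0 means no hardware virtualization support; also
# confirm that /dev/kvm exists once the kvm module is loaded.
```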
2. Install libvirt and related packages:
apt-get install -y qemu-kvm libvirt-bin virtinst
3. Configure a bridged network interface:
apt-get install bridge-utils
vim /etc/network/interfaces:
#allow-hotplug eth0
#auto eth0
#iface eth0 inet dhcp
auto br0
iface br0 inet static
address 172.16.17.195
netmask 255.255.254.0
gateway 172.16.16.1
dns-nameservers 172.16.0.9
bridge_ports eth0
bridge_stp off
bridge_fd 0
systemctl disable NetworkManager
systemctl stop NetworkManager
/etc/init.d/networking restart
(A reboot may be needed for the bridge to come up cleanly.)
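Once networking is back up, `brctl show` should list br0 with eth0 attached. A sketch of filtering its output, with a canned sample inlined (on the host, pipe the real `brctl show` into the same filter):

```shell
#!/bin/sh
# Sketch: pull bridge names out of `brctl show`-style output.
# Continuation lines (extra interfaces) begin with whitespace,
# so keep only lines that begin with a bridge name.
bridges() {
    awk 'NR > 1 && /^[^ \t]/ { print $1 }'
}

sample='bridge name	bridge id		STP enabled	interfaces
br0		8000.001122334455	no		eth0'

echo "$sample" | bridges   # → br0
```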
4. Create a qcow2 disk image and install a VM:
qemu-img create -f qcow2 /home/vhost/test01.img 7G   # KVM disks default to raw; create qcow2 explicitly
virt-install --name=guest01 --ram 512 --vcpus=1 \
  --disk path=/home/vhost/test01.img,size=10,bus=virtio \
  --accelerate --cdrom /root/debian.iso \
  --vnc --vncport=5920 --vnclisten=0.0.0.0 \
  --network bridge=br0,model=virtio --noautoconsole
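A note on the VNC options above: VNC display numbers map to TCP ports as 5900 + display, so --vncport=5920 corresponds to display :20. A throwaway helper (our own name) to compute it:

```shell
#!/bin/sh
# Sketch: VNC TCP port 5900+N corresponds to display :N.
vnc_display_from_port() {
    echo $(( $1 - 5900 ))
}

# e.g. vncviewer 172.16.17.195:$(vnc_display_from_port 5920)
```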
5. The virsh management tool. Common usages:
To create a new guest domain and start a VM:
$ virsh create alice.xml
To stop a VM and destroy a guest domain:
$ virsh destroy alice
To shutdown a VM (without destroying a domain):
$ virsh shutdown alice
To suspend a VM:
$ virsh suspend alice
To resume a suspended VM:
$ virsh resume alice
To access login console of a running VM:
$ virsh console alice
To autostart a VM upon host booting:
$ virsh autostart alice
To get domain information of a VM:
$ virsh dominfo alice
To edit domain XML of a VM:
$ virsh edit alice
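For scripting around these commands, virsh's table output can be filtered; a sketch with a canned sample (on a real host, pipe `virsh list --all` instead):

```shell
#!/bin/sh
# Sketch: extract domain names from `virsh list --all`-style output.
# Skips the two header lines and prints the second column.
list_domains() {
    awk 'NR > 2 && NF >= 2 { print $2 }'
}

sample=' Id    Name                           State
----------------------------------------------------
 1     alice                          running
 -     bob                            shut off'

echo "$sample" | list_domains   # → alice, then bob
```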
Reference: http://xmodulo.com/use-kvm-command-line-debian-ubuntu.html
GUI management tool: virt-manager
apt-get install virt-manager
(usage omitted)
Web management tool: webvirtmgr
1. Installation:
apt-get install git python-pip python-libvirt python-libxml2 novnc supervisor nginx
2. Clone the code and configure the Django environment (a local PyPI mirror, e.g. Douban's, speeds this up in China):
git clone git://github.com/retspen/webvirtmgr.git
cd webvirtmgr
pip install -r requirements.txt -i http://pypi.douban.com/simple/
./manage.py syncdb          # create the admin user and password when prompted
./manage.py collectstatic
3. Set up the nginx reverse proxy:
cd ..
mv webvirtmgr /var/www/
Create /etc/nginx/conf.d/webvirtmgr.conf:
server {
    listen 80 default_server;
    server_name $hostname;
    #access_log /var/log/nginx/webvirtmgr_access_log;

    location /static/ {
        root /var/www/webvirtmgr/webvirtmgr;  # or /srv instead of /var
        expires max;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        client_max_body_size 1024M;  # set higher depending on your needs
    }
}
cd /etc/nginx/sites-available
mv default default.bak
chown -R www-data:www-data /var/www/webvirtmgr
/etc/init.d/nginx restart
4. Configure supervisor:
vim /etc/insserv/overrides/novnc:
#!/bin/sh
### BEGIN INIT INFO
# Provides: nova-novncproxy
# Required-Start: $network $local_fs $remote_fs $syslog
# Required-Stop: $remote_fs
# Default-Start:
# Default-Stop:
# Short-Description: Nova NoVNC proxy
# Description: Nova NoVNC proxy
### END INIT INFO
Create /etc/supervisor/conf.d/webvirtmgr.conf:
[program:webvirtmgr]
command=/usr/bin/python /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
logfile=/var/log/supervisor/webvirtmgr.log
log_stderr=true
user=www-data
[program:webvirtmgr-console]
command=/usr/bin/python /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=www-data
/etc/init.d/supervisor restart
5. Updating:
cd /var/www/webvirtmgr
git pull
./manage.py collectstatic
/etc/init.d/supervisor restart
6. Set up SSH authentication so that nginx's www-data user can SSH to the libvirt server as user webvirtmgr without a password:
- Switch to the nginx user (on the system where WebVirtMgr is installed):
su - www-data -s /bin/bash
- Create the .ssh configuration for www-data:
sudo mkdir /var/www/.ssh
sudo chmod 700 /var/www/.ssh
sudo vim /var/www/.ssh/config
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
- Create SSH public key:
sudo ssh-keygen
Enter file in which to save the key (/root/.ssh/id_rsa): /var/www/.ssh/id_rsa
- change owner and permission for folder /var/www/.ssh:
sudo chmod 0600 /var/www/.ssh/config
sudo chown -R www-data:www-data /var/www/.ssh
- Add a webvirtmgr user (on the qemu-kvm/libvirt host server) and add it to the proper group:
useradd webvirtmgr
passwd webvirtmgr
usermod -G libvirt-qemu -a webvirtmgr
- Create the .ssh directory for user webvirtmgr (the www-data public key will be copied here):
mkdir /home/webvirtmgr/.ssh
chmod 700 /home/webvirtmgr/.ssh
- Back to webvirtmgr host and copy public key to qemu-kvm/libvirt host server:
su - www-data -s /bin/bash
ssh-copy-id webvirtmgr@qemu-kvm-libvirt-host
- On qemu-kvm-libvirt-host:
chmod 0600 /home/webvirtmgr/.ssh/authorized_keys
chown -R webvirtmgr:webvirtmgr /home/webvirtmgr/.ssh
- You should connect without entering a password:
ssh webvirtmgr@qemu-kvm-libvirt-host
- Set up permissions to manage libvirt:
Create file /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla (permissions for user webvirtmgr):
[Remote libvirt SSH access]
Identity=unix-user:webvirtmgr
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
/etc/init.d/libvirt-bin restart   # init script name from the libvirt-bin package on Debian
- When accessing a VM console over SSH, the web VNC page drops the connection after about 20 seconds; a temporary workaround found in the project's GitHub issues:
vim /usr/lib/python2.7/dist-packages/websockify/websocket.py
# Comment out the following four lines of code:
        if not multiprocessing:
            # os.fork() (python 2.4) child reaper
            signal.signal(signal.SIGCHLD, self.fallback_SIGCHLD)
        else:
            # make sure that _cleanup is called when children die
            # by calling active_children on SIGCHLD
            signal.signal(signal.SIGCHLD, self.multiprocessing_SIGCHLD)
7. Web UI login and configuration:
GitHub reference: https://github.com/retspen/webvirtmgr/wiki/Install-WebVirtMgr
Quick deployment of Ceph block devices:
1. Allow passwordless root SSH from the admin node to all other nodes:
- Generate a key pair with ssh-keygen (the public key lands in ~/.ssh/id_rsa.pub) and append id_rsa.pub to /root/.ssh/authorized_keys on every node.
- Add every node to /etc/hosts on all nodes so they can resolve one another:
172.16.1.10 osd1
172.16.1.20 osd2
172.16.1.30 osd3
2. Switch to a local (in-country) mirror and synchronize time:
(omitted)
3. Add the Ceph repository and install ceph-deploy:
- Add the release key:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
- Add the official Ceph repository:
echo deb http://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
(e.g. deb https://download.ceph.com/debian-jewel/ jessie main)
- Update the package index and install:
apt-get update && apt-get install ceph-deploy
4. Create a directory to hold all of Ceph's configuration files:
mkdir /cluster
cd /cluster
5. Create the cluster:
ceph-deploy new node1
node1 is the monitor node; this command generates the Ceph configuration file, the monitor keyring, and a log file.
6. Adjust the default replication level:
echo "osd pool default size = 2" >> /cluster/ceph.conf
With only two OSD nodes, the default replication count of 3 cannot be met, so set it to 2; skip this step if you have more than two OSD nodes.
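The replica count also feeds the usual rule of thumb for placement-group sizing, (OSDs × 100) / replicas rounded up to the next power of two, which is where pool sizes like the 128 used further below come from. A sketch (pg_count is our own helper, not a Ceph tool):

```shell
#!/bin/sh
# Sketch: rule-of-thumb PG count = (OSDs * 100) / replicas,
# rounded up to the next power of two.
pg_count() {
    target=$(( ($1 * 100) / $2 ))
    pgs=1
    while [ "$pgs" -lt "$target" ]; do
        pgs=$(( pgs * 2 ))
    done
    echo "$pgs"
}

pg_count 2 2   # two OSDs, two replicas → 128
```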
7. Configure NICs and the network:
(omitted)
If you have multiple NICs, add a public network entry under the [global] section of the Ceph configuration file.
8. Install Ceph:
ceph-deploy install node1 node2 node3
9. Deploy the initial monitor(s) and gather all keys:
ceph-deploy mon create-initial
10. Configure the OSD nodes:
- Zap (wipe and format) the OSD disks:
ceph-deploy disk zap node2:vdb
ceph-deploy disk zap node3:vdb
- The step above wipes all data on the disks. Next, create the OSDs. Since this is only a test setup, we did not use a dedicated journal device; in production, put the journal on an SSD partition to maximize IO throughput.
ceph-deploy osd create node2:vdb
ceph-deploy osd create node3:vdb
11. Configure the admin node:
ceph-deploy admin node1 node2 node3
chmod +r /etc/ceph/ceph.client.admin.keyring   # make sure the keyring is readable
12. Check cluster health:
ceph health
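For monitoring scripts, the one-line output of `ceph health` can be turned into an exit status; a sketch (health_ok is our own helper):

```shell
#!/bin/sh
# Sketch: map `ceph health` output to a shell exit status.
health_ok() {
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}

# e.g. health_ok "$(ceph health)" || echo "cluster needs attention"
```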
RBD quick start (using an NFS share as the example):
1. From the admin node, install Ceph on the ceph-client node with ceph-deploy:
ceph-deploy install ceph-client
2. From the admin node, copy the Ceph configuration file and ceph.client.admin.keyring to ceph-client with ceph-deploy:
ceph-deploy admin ceph-client
ceph-deploy copies the keyring to /etc/ceph on the client; make sure the keyring file is readable (e.g. chmod +r /etc/ceph/ceph.client.admin.keyring).
3. Create an RBD block device on ceph-client:
ceph osd pool create nfs-pool 128 128
rbd create nfs-pool/share1 --size 2048
rbd map nfs-pool/share1 --id admin --keyring /etc/ceph/ceph.client.admin.keyring
rbd showmapped
mkfs.ext4 -m0 /dev/rbd/nfs-pool/share1
mkdir /mnt/nfs-share
mount -t ext4 /dev/rbd/nfs-pool/share1 /mnt/nfs-share/
4. Configure the NFS server:
apt-get install -y nfs-server
vim /etc/exports
/mnt/nfs-share 172.16.*.*(rw,no_root_squash,no_all_squash,sync)
/etc/init.d/nfs-kernel-server restart
/etc/init.d/nfs-common restart
/etc/init.d/rpcbind restart
showmount -e localhost
5. Mount from an NFS client:
mkdir /nfs-test
showmount -e NFS-SERVER-IP
mount -t nfs NFS-SERVER-IP:/mnt/nfs-share /nfs-test/
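To make the client mount persistent across reboots, an /etc/fstab line along these lines can be added (NFS-SERVER-IP is a placeholder, as above; `_netdev` delays the mount until the network is up):

```
NFS-SERVER-IP:/mnt/nfs-share  /nfs-test  nfs  defaults,_netdev  0  0
```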
Using Ceph RBD through libvirt
1. Create a storage pool libvirt-pool with 128 placement groups:
ceph osd pool create libvirt-pool 128 128
ceph osd lspools
2. Create a Ceph user client.libvirt with permissions restricted to libvirt-pool:
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
ceph auth list
3. Use QEMU to create an image named image01 in the libvirt-pool pool:
qemu-img create -f rbd rbd:libvirt-pool/image01 10G
Or create an image with the rbd tool:
rbd create libvirt-pool/image02 --size 10240 [--object-size 8M]
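Note the unit difference: `rbd create --size` takes megabytes, while qemu-img accepts suffixed sizes, which is why 10G becomes 10240 here. A trivial helper (our own name) for the conversion:

```shell
#!/bin/sh
# Sketch: rbd's --size argument is in MB; convert from GB.
gb_to_mb() {
    echo $(( $1 * 1024 ))
}

# e.g. rbd create libvirt-pool/image02 --size "$(gb_to_mb 10)"
```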
4. Configure the VM:
virsh edit guest01
Under <devices> there should be a <disk> entry like:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/vhost/test.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
Replace it with a <disk> entry that points at the RBD image you created:
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='libvirt-pool/image01'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
5. If your Ceph storage cluster has cephx authentication enabled (the default), you must generate a secret and register it with libvirt:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
(virsh prints the UUID of the new secret; it is used below)
ceph auth get-key client.libvirt | sudo tee client.libvirt.key
virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml
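The manual copy/paste of the UUID can be scripted: `virsh secret-define` prints a line like `Secret <uuid> created`, so the UUID is its second field. A sketch with a canned sample (uuid_from_define is our own helper; on the host, pipe the real virsh output instead):

```shell
#!/bin/sh
# Sketch: grab the UUID from `virsh secret-define` output,
# which looks like: Secret <uuid> created
uuid_from_define() {
    awk '$1 == "Secret" { print $2 }'
}

sample='Secret 9ec59067-fdbc-a6c0-03ff-df165c0587b8 created'
echo "$sample" | uuid_from_define   # → 9ec59067-fdbc-a6c0-03ff-df165c0587b8
```

Usage along the lines of: uuid=$(virsh secret-define --file secret.xml | uuid_from_define)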
Then reference the secret in the domain XML:
    ...
    </source>
    <auth username='libvirt'>
      <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
    </auth>
    <target ...
6. Attach the Ceph block device from the webvirtmgr UI.
Official Chinese reference: http://docs.ceph.org.cn/rbd/libvirt/
That's all.