Scenario
If a VM's Ceph volume is corrupted and you want to run xfs_repair on it or export the data it contains, you can attach the damaged system disk to another VM as an ordinary data disk and work on it there.
The procedure is as follows:
[root@controller-1 ~]# nova show 9960baeb-dbfb-4105-937c-6e195655a2a4
+--------------------------------------+---------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | 5F_DL360 |
| OS-EXT-SRV-ATTR:host | compute-d02-8.domain.tld |
| OS-EXT-SRV-ATTR:hostname | test0723 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-d02-8.domain.tld |
| OS-EXT-SRV-ATTR:instance_name | instance-0000d779 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-oicjl9is |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-07-23T08:22:09.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | True |
| created | 2019-07-23T08:21:21Z |
| description | - |
| flavor | w-4C-4G-100G (dba0feff-56dd-4f05-b5d7-89411daa4f12) |
| hostId | 1357e7881ac3b18eebf2a4811bd14c31867751f17cd8a799ada62cf9 |
| host_status | UP |
| id | 9960baeb-dbfb-4105-937c-6e195655a2a4 |
| image | Attempt to boot from volume - no image supplied |
| key_name | - |
| locked | False |
| metadata | {} |
| name | test0723 |
| os-extended-volumes:volumes_attached | [{"id": "0f9eca69-95d4-4a46-ab8c-868660934641", "delete_on_termination": true}] |
| progress | 0 |
| status | ACTIVE |
| tenant_id | a7ad64c8e28e4d218f4f1f7773112070 |
| updated | 2019-07-23T08:22:10Z |
| user_id | b4d91c89ad23445785af2a68a2f94804 |
+--------------------------------------+---------------------------------------------------------------------------------+
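The volume to operate on is the one listed under `os-extended-volumes:volumes_attached`. A minimal sketch of pulling that ID out of the `nova show` output; the sample line is copied from the table above, and in practice you would pipe the command output straight into the pipeline:

```shell
# Extract the attached volume ID from a `nova show` output line.
# Normally: nova show <instance-uuid> | sed -n 's/.*"id": "\([0-9a-f-]*\)".*/\1/p'
line='| os-extended-volumes:volumes_attached | [{"id": "0f9eca69-95d4-4a46-ab8c-868660934641", "delete_on_termination": true}] |'
volume_id=$(printf '%s\n' "$line" | sed -n 's/.*"id": "\([0-9a-f-]*\)".*/\1/p')
echo "$volume_id"
```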
Stop the VM:
nova stop 9960baeb-dbfb-4105-937c-6e195655a2a4
Record the volume's entries in the nova and cinder databases, so they can be restored later:
mysql> select * from cinder.volume_attachment where volume_id ="0f9eca69-95d4-4a46-ab8c-868660934641"\G
*************************** 1. row ***************************
created_at: 2019-07-23 08:22:03
updated_at: 2019-07-23 08:22:03
deleted_at: NULL
deleted: 0
id: 97e92940-0952-4d22-86d4-9296c50831e0
volume_id: 0f9eca69-95d4-4a46-ab8c-868660934641
attached_host: NULL
instance_uuid: 9960baeb-dbfb-4105-937c-6e195655a2a4
mountpoint: /dev/vda
attach_time: 2019-07-23 08:22:03
detach_time: NULL
attach_mode: rw
attach_status: attached
1 row in set (0.00 sec)
mysql> select * from nova.block_device_mapping where volume_id = '0f9eca69-95d4-4a46-ab8c-868660934641'\G
*************************** 1. row ***************************
created_at: 2019-07-23 08:21:21
updated_at: 2019-07-23 08:22:04
deleted_at: NULL
id: 68664
device_name: /dev/vda
delete_on_termination: 1
snapshot_id: NULL
volume_id: 0f9eca69-95d4-4a46-ab8c-868660934641
volume_size: 50
no_device: 0
connection_info: {"driver_volume_type": "rbd", "connector": {"initiator": "iqn.1993-08.org.debian:01:7f7f98e49ba", "ip": "10.125.1.223", "platform": "x86_64", "host": "compute-d02-8.domain.tld", "os_type": "linux2", "multipath": false}, "serial": "0f9eca69-95d4-4a46-ab8c-868660934641", "data": {"secret_type": "ceph", "name": "volumes/volume-0f9eca69-95d4-4a46-ab8c-868660934641", "encrypted": false, "secret_uuid": "a5d0dd94-57c4-ae55-ffe0-7e3732a24455", "qos_specs": null, "hosts": ["10.125.136.2", "10.125.136.7", "10.125.136.12"], "volume_id": "0f9eca69-95d4-4a46-ab8c-868660934641", "conffile": "/etc/ceph/ceph.conf", "auth_enabled": true, "access_mode": "rw", "auth_username": "volumes", "ports": ["6789", "6789", "6789"]}}
instance_uuid: 9960baeb-dbfb-4105-937c-6e195655a2a4
deleted: 0
source_type: image
destination_type: volume
guest_format: NULL
device_type: disk
disk_bus: virtio
boot_index: 0
image_id: f4a18a5e-5352-40e2-8ff1-ca2ac7ca5ed4
1 row in set (0.00 sec)
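Before changing anything, it is worth saving both rows to files so the restore step can later be verified with a diff. A sketch that only builds and prints the two SELECT statements (the mysql invocations and output paths in the comments are examples, assuming passwordless local access as in the session above):

```shell
# Build the two SELECT statements from the volume ID recorded above.
VOL='0f9eca69-95d4-4a46-ab8c-868660934641'
q1="select * from cinder.volume_attachment where volume_id='$VOL'\\G"
q2="select * from nova.block_device_mapping where volume_id='$VOL'\\G"
printf '%s\n' "$q1" "$q2"
# For real use, e.g.:
#   mysql -e "$q1" > /root/attachment-before.txt
#   mysql -e "$q2" > /root/bdm-before.txt
```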
Update the mount point in the cinder database:
mysql> update cinder.volume_attachment set mountpoint='/dev/vdb' where volume_id = "0f9eca69-95d4-4a46-ab8c-868660934641";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
Update the nova database (device name, plus boot_index so the volume is no longer treated as a boot disk):
mysql> update nova.block_device_mapping set device_name = '/dev/vdb', boot_index=1 where volume_id = '0f9eca69-95d4-4a46-ab8c-868660934641';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
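The two UPDATE statements must stay consistent with each other, so it can help to generate them from one set of variables. A sketch using the values from this walkthrough:

```shell
# Build both UPDATE statements from shared variables so the cinder and nova
# records cannot drift apart. boot_index=1 demotes the volume from boot disk.
VOL='0f9eca69-95d4-4a46-ab8c-868660934641'
DEV='/dev/vdb'
sql="update cinder.volume_attachment set mountpoint='$DEV' where volume_id = '$VOL';
update nova.block_device_mapping set device_name = '$DEV', boot_index=1 where volume_id = '$VOL';"
printf '%s\n' "$sql"   # for real use: mysql -e "$sql"
```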
Detach the volume from the original VM, then attach it to a test VM:
[root@controller-1 ~]# nova volume-detach 9960baeb-dbfb-4105-937c-6e195655a2a4 0f9eca69-95d4-4a46-ab8c-868660934641
[root@controller-1 ~]# nova volume-attach 22b53b03-fd02-437b-9238-5c951c048cc5 0f9eca69-95d4-4a46-ab8c-868660934641
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 0f9eca69-95d4-4a46-ab8c-868660934641 |
| serverId | 22b53b03-fd02-437b-9238-5c951c048cc5 |
| volumeId | 0f9eca69-95d4-4a46-ab8c-868660934641 |
+----------+--------------------------------------+
Inside the test VM, an extra disk (vdc) is now visible:
[root@test-allocation-ratio-3 ~]# blkid
/dev/vda1: UUID="3e109aa3-f171-4614-ad07-c856f20f9d25" TYPE="xfs"
/dev/vdb: SEC_TYPE="msdos" LABEL="config-2" UUID="4B4D-6734" TYPE="vfat"
/dev/vdc1: UUID="3e109aa3-f171-4614-ad07-c856f20f9d25" TYPE="xfs"
However, vda1 and vdc1 report the same filesystem UUID in blkid, so mounting vdc1 directly will fail: XFS refuses to mount a filesystem whose UUID is already in use.
Regenerate the UUID (write down the original UUID first, so it can be restored later):
[root@test-allocation-ratio-3 ~]# xfs_admin -U 8c922c24-7110-4ba8-9af7-d275ded029b9 /dev/vdc1
Clearing log and setting UUID
writing all SBs
new UUID = 8c922c24-7110-4ba8-9af7-d275ded029b9
Mount vdc1 inside the VM:
[root@test-allocation-ratio-3 ~]# mount /dev/vdc1 /tmp/test/
Inspect the mounted data and modify it:
[root@test-allocation-ratio-3 ~]# cat /tmp/test/etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 17 22:18:46 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3e109aa3-f171-4614-ad07-c856f20f9d25 / xfs defaults 0 0
[root@test-allocation-ratio-3 ~]# echo "## add by test" >> /tmp/test/etc/fstab
[root@test-allocation-ratio-3 ~]# cat /tmp/test/etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 17 22:18:46 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3e109aa3-f171-4614-ad07-c856f20f9d25 / xfs defaults 0 0
## add by test
Unmount the filesystem and restore the original UUID:
[root@test-allocation-ratio-3 ~]# umount /dev/vdc1
[root@test-allocation-ratio-3 ~]# xfs_admin -U 3e109aa3-f171-4614-ad07-c856f20f9d25 /dev/vdc1
Clearing log and setting UUID
writing all SBs
new UUID = 3e109aa3-f171-4614-ad07-c856f20f9d25 # restore the original filesystem UUID
Detach the disk and restore the databases:
[root@controller-1 ~]# nova volume-detach 22b53b03-fd02-437b-9238-5c951c048cc5 0f9eca69-95d4-4a46-ab8c-868660934641
mysql> update cinder.volume_attachment set mountpoint='/dev/vda' ,deleted=0 where id = "97e92940-0952-4d22-86d4-9296c50831e0";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update nova.block_device_mapping set device_name = '/dev/vda', boot_index=0 ,deleted=0 where id= '68664';
Note: by now the nova and cinder databases may contain additional rows for this volume; compare them against the records saved earlier and correct them accordingly.
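The restore statements can likewise be rebuilt from the row ids captured at the start, rather than retyped by hand. A sketch using the attachment and BDM ids recorded earlier in this walkthrough:

```shell
# Rebuild the restore statements from the row ids captured before the change.
ATTACH_ID='97e92940-0952-4d22-86d4-9296c50831e0'   # cinder.volume_attachment.id
BDM_ID='68664'                                     # nova.block_device_mapping.id
restore_sql="update cinder.volume_attachment set mountpoint='/dev/vda', deleted=0 where id = '$ATTACH_ID';
update nova.block_device_mapping set device_name = '/dev/vda', boot_index=0, deleted=0 where id = '$BDM_ID';"
printf '%s\n' "$restore_sql"   # for real use: mysql -e "$restore_sql"
```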
Hard-reboot the VM and verify the modified file:
[root@controller-1 ~]# nova reboot 9960baeb-dbfb-4105-937c-6e195655a2a4 --hard
[root@test0713 ~]# cat /tmp/test/etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 17 22:18:46 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3e109aa3-f171-4614-ad07-c856f20f9d25 / xfs defaults 0 0
## add by test
You can see that the fstab file inside the test0723 VM has been modified.
Summary
1. Modify the databases to turn the system disk into an ordinary data disk.
2. Attach the disk to a test VM.
3. An extra disk appears inside the test VM; change its filesystem UUID, then mount it.
4. Modify or export the data on the mounted disk.
5. When finished, umount the disk and restore its original UUID.
6. Restore the modified nova and cinder database records.
7. Hard-reboot the original VM.
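The steps above can be condensed into a dry-run script using the IDs from this walkthrough. `run` only prints each command here; swap the `echo` for actual execution when ready. The database edits from steps 1 and 6 still have to happen at the marked points:

```shell
# Dry-run sketch of the whole procedure.
run() { echo "+ $*"; }          # swap body for: "$@"  to actually execute

INSTANCE='9960baeb-dbfb-4105-937c-6e195655a2a4'   # VM with the damaged root disk
RESCUE='22b53b03-fd02-437b-9238-5c951c048cc5'     # healthy helper VM
VOL='0f9eca69-95d4-4a46-ab8c-868660934641'        # the root volume itself

run nova stop "$INSTANCE"
# ... edit the cinder/nova rows here (step 1) ...
run nova volume-detach "$INSTANCE" "$VOL"
run nova volume-attach "$RESCUE" "$VOL"
# ... change UUID, mount, repair/export, umount, restore UUID (steps 3-5) ...
run nova volume-detach "$RESCUE" "$VOL"
# ... restore the cinder/nova rows here (step 6) ...
run nova reboot "$INSTANCE" --hard
```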