CentOS 7 Desktop + OpenStack Queens environment setup (single NIC, two nodes)

  • The `\` line continuations and line breaks in the official tutorial's commands have been replaced, so multi-line commands appear here on a single line.

  • Network configuration

  1. Basic NIC configuration format (e.g. /etc/sysconfig/network-scripts/ifcfg-eth0; the controller uses 192.168.0.51 and compute1 uses 192.168.0.52, matching the hosts file below). The interface name must match the actual device on each node; note that the Neutron sections later map the provider network to ens33. A quick check is sketched after the config block.
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
NAME=eth0
DEVICE=eth0
IPADDR=192.168.0.51
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
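
After editing the file, the change can be applied and verified with something like the following (a minimal sketch; it assumes the interface really is eth0 as in the example and that the classic network service, not NetworkManager, manages it):

    systemctl restart network
    ip addr show eth0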

  • hosts configuration (all nodes): add the following entries to /etc/hosts
192.168.0.51       controller
192.168.0.52       compute1

reboot
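
After the reboot, name resolution between the nodes can be checked with a quick ping from each node, for example:

    ping -c 2 controller
    ping -c 2 compute1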

  • Time synchronization with chrony (all nodes)

For convenience, all nodes are configured as chrony clients. Comment out the default pool servers and add an external NTP server; restart and verify as sketched after the config block.
vi /etc/chrony.conf

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server ntp1.aliyun.com iburst
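
The change takes effect after restarting the service; synchronization can then be checked with chronyc (mirroring the verification step in the official guide):

    systemctl enable chronyd && systemctl restart chronyd
    chronyc sources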
  • Install the OpenStack packages (all nodes)

yum install centos-release-openstack-queens -y
yum install python-openstackclient -y

  • If SELinux has not been disabled (all nodes)

Reference: CentOS 7 minimal, disabling firewalld, NetworkManager and SELinux (a sketch follows the install command below).
Install the openstack-selinux package so that the security policies for the OpenStack services are managed automatically:
yum install openstack-selinux -y
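
For a test environment like this one, the firewall, NetworkManager and SELinux are usually switched off on every node instead. A minimal sketch (assumes a lab setup where disabling them is acceptable):

    systemctl stop firewalld NetworkManager
    systemctl disable firewalld NetworkManager
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config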

  • SQL database (controller node only)
  1. Install
    yum install mariadb mariadb-server python2-PyMySQL -y
  2. Back up the existing configuration
    cd /etc/my.cnf.d/
    tar czvf my.cnf.d.tar.gz *
  3. Create the configuration file openstack.cnf
    vi openstack.cnf
[mysqld]
bind-address = 192.168.0.51

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
  4. Start
    systemctl enable mariadb.service && systemctl start mariadb.service
  5. Secure the installation and set a root password (optional)
    mysql_secure_installation
  6. Change the root password manually
    Log in: mysql -u root mysql
    Change the password: UPDATE user SET PASSWORD=PASSWORD('123456') where USER='root';
    FLUSH PRIVILEGES;
    Exit: quit
    Restart the service: systemctl restart mariadb.service
    Note: the new password does not take effect until the service is restarted.
  • Message queue RabbitMQ (controller node)
  1. Install
    yum install rabbitmq-server -y
  2. Start
    systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
  3. Add the openstack user and grant it permissions
    rabbitmqctl add_user openstack 123456
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
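
The new user and its permissions can be checked afterwards, for example:

    rabbitmqctl list_users
    rabbitmqctl list_permissions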
  • Memcached (controller node only)

Note: used to cache tokens for the Identity service.

  1. Install
    yum install memcached python-memcached -y
  2. Configure
    vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
  3. Start
    systemctl enable memcached.service && systemctl start memcached.service
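
A quick check that memcached is listening on port 11211:

    ss -tlnp | grep 11211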
  • etcd (controller node)
  1. Install
    yum install etcd -y
  2. Configure
    vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.0.51:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.51:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.51:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.51:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.0.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
  3. Start
    systemctl enable etcd && systemctl start etcd
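
Cluster health can be checked with etcdctl, for example (the endpoint matches ETCD_ADVERTISE_CLIENT_URLS above; this assumes the CentOS 7 etcd package's etcdctl defaults to the v2 API, where these subcommands exist):

    etcdctl --endpoints=http://192.168.0.51:2379 cluster-health
    etcdctl --endpoints=http://192.168.0.51:2379 member list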

  • Keystone (Identity service, controller node)
  1. Create the database and grant access
    mysql -uroot -p123456
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
  2. Install and configure
    yum install openstack-keystone httpd mod_wsgi -y
    vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
  3. Populate the Identity service database
    su -s /bin/sh -c "keystone-manage db_sync" keystone
  4. Initialize the Fernet key repositories
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  5. Bootstrap the Identity service
    keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
    Note: the keystone-manage bootstrap command is what actually creates the default domain (along with the initial admin user and the Identity endpoints).
  6. Configure the Apache HTTP server
    vi /etc/httpd/conf/httpd.conf
ServerName controller

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

  7. Start httpd
    systemctl enable httpd.service && systemctl start httpd.service
  8. Set the admin credentials in the environment (they can also be saved into an admin-openrc file, which later steps source with ". admin-openrc"; see the sketch after the variable list)
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
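
Later steps assume an admin-openrc file that can simply be sourced. A minimal sketch of creating it from the variables above (saving it in the current working directory so that ". admin-openrc" works as written is an assumption):

cat > admin-openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
. admin-openrc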
  9. Create an example domain (optional; only an example of how to create a domain)
    openstack domain create --description "An Example Domain" example
  10. Create the service project
    openstack project create --domain default --description "Service Project" service
    Note: the service project is used by the services; each service gets its own user, which is added to the service project.
  11. Create an unprivileged project and user (the general procedure for creating projects, users and roles; optional, also just an example)
    openstack project create --domain default --description "Demo Project" demo
    openstack user create --domain default --password-prompt demo
    openstack role create user
    openstack role add --project demo --user demo user
    Note: the user role must exist, otherwise creating a project from the dashboard later fails.
  12. Set the demo credentials in the environment (these can be saved as a demo-openrc file in the same way as admin-openrc above)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
  13. Request an authentication token as the admin user
    openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
    Note: the parameters are the same as in step 8 (admin login). If the corresponding environment variables are already set, the parameters can be omitted; for example, after sourcing the admin environment a token can be requested with just:
    openstack token issue
  14. Request an authentication token as the demo user
    openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
  • Glance (Image service, controller node)
  1. Create the database and grant access
    mysql -uroot -p123456
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
  2. Create the glance user (its domain, project and role)
    . admin-openrc
    openstack user create --domain default --password-prompt glance
    openstack role add --project service --user glance admin
    Note: as mentioned above, the service project is used by the services; here the glance user is added to the service project with the admin role.
  3. Create the glance service and endpoints
    openstack service create --name glance --description "OpenStack Image" image
    openstack endpoint create --region RegionOne image public http://controller:9292
    openstack endpoint create --region RegionOne image internal http://controller:9292
    openstack endpoint create --region RegionOne image admin http://controller:9292
    Note: in openstack service create, glance is the service name and image is the service type. The name is essentially free-form, but clients locate the endpoints by the type, so image should be used as shown.
  4. Install the glance package
    yum install openstack-glance -y
  5. Configure
    vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
  6. Configure /etc/glance/glance-registry.conf
    Note: the Glance registry service and its API are deprecated in the Queens release, so openstack-glance-registry.service no longer needs to be configured or started.
  7. Populate the database
    su -s /bin/sh -c "glance-manage db_sync" glance
  8. Start
    systemctl enable openstack-glance-api.service && systemctl start openstack-glance-api.service
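
The Image service can be verified by uploading a small test image, following the verification step in the official guide (the CirrOS download URL below is the usual upstream location and is an assumption of this sketch):

    . admin-openrc
    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
    openstack image list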
  • Nova (Compute service, controller node)
  1. Create the databases and grant access
    mysql -uroot -p123456
    CREATE DATABASE nova_api;
    CREATE DATABASE nova;
    CREATE DATABASE nova_cell0;
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

  2. Create the nova user
    . admin-openrc
    openstack user create --domain default --password-prompt nova
    openstack role add --project service --user nova admin
  3. Create the nova service and endpoints
    openstack service create --name nova --description "OpenStack Compute" compute
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
    openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
    openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
  4. Create the placement user
    . admin-openrc
    openstack user create --domain default --password-prompt placement
    openstack role add --project service --user placement admin
  5. Create the placement service and endpoints
    openstack service create --name placement --description "Placement API" placement
    openstack endpoint create --region RegionOne placement public http://controller:8778
    openstack endpoint create --region RegionOne placement internal http://controller:8778
    openstack endpoint create --region RegionOne placement admin http://controller:8778
  6. Install
    yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
  7. Configure
    vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.0.51
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:123456@controller

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

Note on nova.virt.firewall.NoopFirewallDriver: by default, Compute uses an internal firewall driver. Since the Networking service (Neutron) provides its own firewall driver, the Compute firewall driver must be disabled by setting firewall_driver = nova.virt.firewall.NoopFirewallDriver.

  8. Configure httpd to enable access to the Placement API (add the following to the file below)
    vi /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
  <IfVersion >= 2.4>
     Require all granted
  </IfVersion>
  <IfVersion < 2.4>
     Order allow,deny
     Allow from all
  </IfVersion>
</Directory>
  9. Restart httpd
    systemctl restart httpd
  10. Populate the databases and register the cells
    su -s /bin/sh -c "nova-manage api_db sync" nova
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
    su -s /bin/sh -c "nova-manage db sync" nova
  11. Verify that cell0 and cell1 are registered correctly
    nova-manage cell_v2 list_cells
  12. Start the services
    systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service && systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
  • Nova (Compute service, compute node)
  1. Install
    yum install openstack-nova-compute -y
  2. Configure
    vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.52
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
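
If the compute node is itself a virtual machine without hardware virtualization (common in a desktop lab like this one), the official guide suggests checking for CPU virtualization extensions and, when none are found, switching nova-compute to plain QEMU. A sketch:

    egrep -c '(vmx|svm)' /proc/cpuinfo
    # if the count is 0, add the following to /etc/nova/nova.conf:
    # [libvirt]
    # virt_type = qemu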
  3. Start
    systemctl enable libvirtd.service openstack-nova-compute.service && systemctl restart libvirtd.service openstack-nova-compute.service
  4. Add this compute node to the cell database (run on the controller node)
    . admin-openrc
    Note: reboot the compute node first; otherwise the command below reports the Host as localhost.localdomain, which in turn breaks the discover_hosts command that follows.
    openstack compute service list --service nova-compute
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
  5. Verify (on the controller node)
    openstack compute service list
  • Neutron (Networking service, controller node)
  1. Create the database and grant access
    mysql -uroot -p123456
    CREATE DATABASE neutron;
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
  2. Create the neutron user
    . admin-openrc
    openstack user create --domain default --password-prompt neutron
    openstack role add --project service --user neutron admin
  3. Create the neutron service and endpoints
    openstack service create --name neutron --description "OpenStack Networking" network
    openstack endpoint create --region RegionOne network public http://controller:9696
    openstack endpoint create --region RegionOne network internal http://controller:9696
    openstack endpoint create --region RegionOne network admin http://controller:9696
  4. Install (self-service networks)
    yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
  5. Configure the server component (self-service networks)
    vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
  6. Configure the ML2 plug-in (self-service networks)
    vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
  7. Configure the Linux bridge agent (self-service networks)
    vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 192.168.0.51
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  8. Configure bridge filtering in the kernel (self-service networks)
    Ensure that the operating system kernel supports network bridge filters by verifying that both of the following sysctl values are set to 1:
    vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

To apply on reboot: reboot
To apply immediately:
modprobe br_netfilter
sysctl -p
References: loading the br_netfilter module automatically at boot; loading kernel modules at boot on CentOS 7.
Note: the OpenStack packages already configure br_netfilter to load at boot, so no extra boot-time configuration is needed.

  9. Configure the layer-3 agent (self-service networks)
    vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
  10. Configure the DHCP agent (self-service networks)
    vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
  11. Configure the metadata agent
    vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
  12. Configure Compute (nova) to use Networking
    vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
  13. Create the plug-in symlink and populate the database
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  14. Start
    systemctl restart openstack-nova-api.service
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service && systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  15. Start the layer-3 agent (self-service networks)
    systemctl enable neutron-l3-agent.service && systemctl restart neutron-l3-agent.service
  • Neutron (Networking service, compute node)
  1. Install
    yum install openstack-neutron-linuxbridge ebtables ipset -y
  2. Configure
    vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
  3. Configure the Linux bridge agent (self-service networks)
    vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 192.168.0.52
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  4. Configure bridge filtering in the kernel (self-service networks)
    Ensure that the operating system kernel supports network bridge filters by verifying that both of the following sysctl values are set to 1:
    vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

To apply on reboot: reboot
To apply immediately:
modprobe br_netfilter
sysctl -p

  5. Configure Compute (nova) to use Networking
    vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
  6. Restart the Compute service
    systemctl restart openstack-nova-compute.service
  7. Start the Linux bridge agent
    systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service
  8. Verify (on the controller node)
    openstack network agent list

* Cinder (Block Storage service, controller node)

  1. Create the database and grant access
    mysql -uroot -p123456
    CREATE DATABASE cinder;
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
  2. Create the cinder user
    . admin-openrc
    openstack user create --domain default --password-prompt cinder
    openstack role add --project service --user cinder admin
  3. Create the cinderv2 and cinderv3 services and endpoints
    openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    Note: cinder requires two service entities (v2 and v3).
  4. Install
    yum install openstack-cinder -y
  5. Configure
    vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.0.51

[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
  6. Populate the database
    su -s /bin/sh -c "cinder-manage db sync" cinder
  7. Configure Compute to use Block Storage
    vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
  8. Restart the Compute API service
    systemctl restart openstack-nova-api.service
  9. Start
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service && systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

* Cinder LVM back end (on the block storage node)

  1. Install
    yum install lvm2 device-mapper-persistent-data -y
  2. Start
    systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service
  3. Create the LVM physical volume
    pvcreate /dev/vda
    Note: /dev/vda here is a newly attached disk.
  4. Create the LVM volume group cinder-volumes
    vgcreate cinder-volumes /dev/vda
    Note: cinder-volumes is a volume group, i.e. one or more partitions or disks merged into a single pool of storage. This pool is what gets exposed to Block Storage, which then carves volumes out of it itself.
  5. Configure LVM to scan only /dev/vda
    vi /etc/lvm/lvm.conf
devices {
        filter = [ "a/vda/", "r/.*/"]
}

Reference: why /etc/lvm/lvm.conf needs to be configured this way.
Note: in the filter above, a means accept and r means reject. If the operating system disk also uses LVM, that device must be accepted in the filter as well (as noted in the official guide). The result can be checked with the sketch below.
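
A quick check that the physical volume and the volume group are still visible after the filter change (pvs and vgs are standard LVM commands):

    pvs
    vgs cinder-volumes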

* Cinder volume service (on the block storage node)

  1. Install
    yum install openstack-cinder targetcli python-keystone -y
  2. Configure
    vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.0.53
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Note: enabled_backends = lvm and the [lvm] section are linked; the back-end name in enabled_backends is arbitrary, e.g. enabled_backends = lvm1 would pair with a [lvm1] section.

  3. Start
    systemctl enable openstack-cinder-volume.service target.service && systemctl start openstack-cinder-volume.service target.service
  4. Verify (on the controller node)
    openstack volume service list
  • Dashboard (Horizon, controller node)
  1. Install
    yum install openstack-dashboard -y
  2. Configure
    vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
   'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
   }
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
   "identity": 3,
   "image": 2,
   "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_NEUTRON_NETWORK = {
   ...
   'enable_router': False,
   'enable_quotas': False,
   'enable_distributed_router': False,
   'enable_ha_router': False,
   'enable_lb': False,
   'enable_firewall': False,
   'enable_vpn': False,
   'enable_fip_topology_check': False,
}
TIME_ZONE = "UTC"

Note: setting 'enable_router': True enables routers in the dashboard, but that requires self-service networks.

  3. Configure httpd for the dashboard
    vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
  4. Restart the services
    systemctl restart httpd.service memcached.service
  5. Access the dashboard
    http://controller/dashboard