Deploying the Nova Compute Service


Deploying the OpenStack Placement Component

Create the placement database and database user

[root@ct ~]# mysql -uroot -p123123
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

Create the Placement service user and API endpoints

Create the placement user

[root@ct ~]# openstack user create --domain default --password PLACEMENT_PASS placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e2fd74e3578f4d47a1f1ab30fff76d80 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Grant the placement user the admin role on the service project

[root@ct ~]# openstack role add --project service --user placement admin

Create the placement service entity

[root@ct ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 6332d10b09144e509971822c6749a267 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

Register the API endpoints with the placement service; the registration info is written to MySQL

[root@ct ~]# openstack endpoint create --region RegionOne placement public http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 296458b3da894b318f7ee10018480da3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6332d10b09144e509971822c6749a267 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
[root@ct ~]# openstack endpoint create --region RegionOne placement internal http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 569d80ebf9cc4587b03e203c1037cf73 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6332d10b09144e509971822c6749a267 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
[root@ct ~]# openstack endpoint create --region RegionOne placement admin http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 565c7960765f4431859639949cb7b5d2 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 6332d10b09144e509971822c6749a267 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
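The three `openstack endpoint create` calls above differ only in the interface name, so they can be generated in a loop. A minimal sketch, shown as a dry run that only prints the commands; drop the leading `echo` to execute them for real (which assumes the admin credentials are already sourced):

```shell
# Print the three endpoint-create commands (public/internal/admin).
# Remove "echo" to actually run them against Keystone.
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne placement "$iface" http://ct:8778
done
```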

Install the placement service

[root@ct ~]#  yum -y install openstack-placement-api
[root@ct ~]#  cp /etc/placement/placement.conf{,.bak}
Modify the placement configuration file (placement.conf):
[root@ct ~]#  grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://ct:5000/v3
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers ct:11211
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
[root@ct ~]#  openstack-config --set /etc/placement/placement.conf keystone_authtoken password PLACEMENT_PASS
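The `grep -Ev '^$|#'` step above keeps only the effective settings by dropping blank lines and any line containing `#`. A quick illustration of the filter on sample input:

```shell
# Feed three lines through the same filter used on placement.conf.bak:
# the comment line and the blank line are dropped, the setting survives.
printf '%s\n' '# a comment' '' 'auth_strategy = keystone' | grep -Ev '^$|#'
```

Only `auth_strategy = keystone` is printed.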
[root@ct placement]# cat placement.conf		##inspect the placement configuration file
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://ct:5000/v3		##Keystone address
memcached_servers = ct:11211		##session data is cached in memcached
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[profiler]

Populate the placement database

su -s /bin/sh -c "placement-manage db sync" placement

Modify the Apache config file: 00-placement-api.conf

This file (the virtual host configuration) is created automatically when the placement service is installed

[root@ct ~]# cd /etc/httpd/conf.d/
[root@ct conf.d]# vim 00-placement-api.conf 		##created automatically when placement is installed
Insert the following at the end of the configuration file:
<Directory /usr/bin>			#works around a packaging bug: without this, Apache returns 403 when the placement API is accessed
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>				#for Apache versions older than 2.4: allow Apache to access /usr/bin, otherwise /usr/bin/placement-api cannot be served
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart Apache
[root@ct placement]# systemctl restart httpd		##restart Apache so it picks up the virtual host changes
Test
① curl test
[root@ct placement]# curl ct:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
② Check port usage (netstat, lsof)
[root@ct placement]# netstat -natp | grep 8778
tcp6       0      0 :::8778                 :::*                    LISTEN      72994/httpd
③ Check the placement status
[root@ct placement]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
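The curl test above returns a JSON version document; the maximum supported microversion can be extracted from it. A sketch with the response inlined for illustration; in practice, pipe `curl -s ct:8778` into the same python3 one-liner:

```shell
# Parse max_version out of a Placement root document (sample inlined).
resp='{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0"}]}'
echo "$resp" | python3 -c 'import sys, json; print(json.load(sys.stdin)["versions"][0]["max_version"])'
```

This prints `1.36`, matching the curl output shown above.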

Nova service configuration (controller node)

Create the nova databases and grant privileges

[root@ct ~]# mysql -uroot -p123123
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by 'NOVA_DBPASS';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA_DBPASS';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit
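The six GRANT statements above follow one pattern (three databases times two hosts), so they can be generated rather than typed by hand. A sketch that just prints the SQL; the output can be piped into `mysql -uroot -p123123`:

```shell
# Emit the six GRANT statements for the nova_api, nova and nova_cell0 databases.
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "grant all privileges on ${db}.* to 'nova'@'${host}' identified by 'NOVA_DBPASS';"
  done
done
```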

Manage the Nova user and service

[root@ct ~]# openstack user create --domain default --password NOVA_PASS nova	##create the nova user
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | dcf3b5df290c42638de1f41a834d4284 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@ct ~]# openstack role add --project service --user nova admin	##add the nova user to the service project with the admin role
Create the nova service
[root@ct ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | d1f76108c8484212a8f296467508aa49 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
Associate endpoints with the Nova service
[root@ct ~]# openstack endpoint create --region RegionOne compute public http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4bdb48b8c8c549609c157fcbab532bd7 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d1f76108c8484212a8f296467508aa49 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+
[root@ct ~]# openstack endpoint create --region RegionOne compute internal http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4509b03c2767491580ac064df598d9e7 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d1f76108c8484212a8f296467508aa49 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+
[root@ct ~]# openstack endpoint create --region RegionOne compute admin http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e6c30360730f448ca21b20666f18a7a4 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d1f76108c8484212a8f296467508aa49 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

Install the Nova packages and modify the Nova configuration file (nova.conf)

[root@ct ~]# yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
[root@ct ~]# cd /etc/nova/
[root@ct nova]# cp -a /etc/nova/nova.conf{,.bak}
[root@ct nova]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
[root@ct nova]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@ct nova]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.100.120		##set to ct's internal IP
[root@ct nova]# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
[root@ct nova]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@ct nova]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
[root@ct nova]# openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api
[root@ct nova]# openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[root@ct nova]# openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
[root@ct nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
[root@ct nova]# openstack-config --set /etc/nova/nova.conf vnc enabled true
[root@ct nova]# openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
[root@ct nova]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
[root@ct nova]# openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
[root@ct nova]# openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement project_name service
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement auth_type password
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement username placement
[root@ct nova]# openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
[root@ct nova]# cat nova.conf		##inspect the configuration file
[DEFAULT]
enabled_apis = osapi_compute,metadata						##supported API types
my_ip = 192.168.100.120										##local IP
use_neutron = true											##obtain IP addresses through neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver		##firewall driver
transport_url = rabbit://openstack:RABBIT_PASS@ct			##RabbitMQ connection
[api]
auth_strategy = keystone									##use keystone authentication
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api	##API database connection
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova		
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://ct:9292		
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]				##keystone authentication settings
auth_url = http://ct:5000/v3		##authenticate against this URL
memcached_servers = ct:11211		##memcached address:port
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]						##lock path
lock_path = /var/lib/nova/tmp			##locks serialize VM-creation steps: an operation must finish before the next one starts, so the steps run strictly one after another rather than in parallel
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = PLACEMENT_PASS
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]										##if this is misconfigured, the VM console cannot be reached
enabled = true
server_listen = $my_ip						##VNC listen address
server_proxyclient_address = $my_ip		##proxy client address: this host's management-network address
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
[placement_database]
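The `lock_path` comment above describes file-based locking: one step must release the lock before the next begins. The same idea can be illustrated with `flock(1)`; this is a standalone sketch, not Nova's actual locking code, and the temporary lock file is just for the demo:

```shell
# Two commands serialized on one lock file: the second waits until the first
# releases the lock, so the two sections never overlap.
lockfile=$(mktemp)
(
  flock 9                      # block until the lock on fd 9 is acquired
  echo "step 1: holds the lock"
) 9>"$lockfile"
(
  flock 9
  echo "step 2: runs only after step 1 released the lock"
) 9>"$lockfile"
rm -f "$lockfile"
```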

Initialize the databases

①: Initialize the nova_api database
[root@ct ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
②: Register the cell0 database. Nova internally divides resources and compute nodes into cells; OpenStack uses cells to group compute nodes logically
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
③: Create the cell1 cell
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
269a515f-5d1a-4acc-be6b-26f3146b0ec6		##cell1's id
④: Initialize the nova database; check /var/log/nova/nova-manage.log to confirm the sync succeeded
[root@ct ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release')result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release')result = self._query(query)
⑤: Verify that cell0 and cell1 are registered:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova		##verify that cell0 and cell1 are registered
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
|  Name |                 UUID                 |       Transport URL        |           Database Connection           | Disabled |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |           none:/           | mysql+pymysql://nova:****@ct/nova_cell0 |  False   |
| cell1 | 269a515f-5d1a-4acc-be6b-26f3146b0ec6 | rabbit://openstack:****@ct |    mysql+pymysql://nova:****@ct/nova    |  False   |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+

Start the Nova services

[root@ct ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@ct ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
①: Check the Nova service ports
[root@ct ~]# netstat -tnlup|egrep '8774|8775'
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      48566/python2       
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      48566/python2
[root@ct ~]# curl http://ct:8774

Configure the Nova service on the compute nodes

Install the nova-compute component on c1 and c2
yum -y install openstack-nova-compute
Modify the configuration file nova.conf
cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
On the c1 node:
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.100.121				#set to the node's own internal IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.100.121:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
[root@c1 nova]# cat nova.conf		##inspect the configuration file
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@ct
my_ip = 192.168.100.121		#the node's own internal IP
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://ct:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://ct:5000/v3
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
virt_type = qemu
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = PLACEMENT_PASS
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.100.121:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

Start the services

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
The steps above are identical on compute node c2 (only the IP address differs).
Operations on the controller node
Check whether the compute nodes registered with the controller (via the message queue); run this on the controller node
[root@ct ~]# openstack compute service list --service nova-compute
+----+--------------+------+------+---------+-------+----------------------------+
| ID | Binary       | Host | Zone | Status  | State | Updated At                 |
+----+--------------+------+------+---------+-------+----------------------------+
|  8 | nova-compute | c1   | nova | enabled | up    | 2021-08-26T13:29:30.000000 |
|  9 | nova-compute | c2   | nova | enabled | up    | 2021-08-26T13:29:33.000000 |
+----+--------------+------+------+---------+-------+----------------------------+
Scan OpenStack for available compute nodes; discovered nodes are added to a cell, and VMs can then be created in that cell. In effect, OpenStack groups compute nodes internally by assigning them to different cells
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 269a515f-5d1a-4acc-be6b-26f3146b0ec6
Checking host mapping for compute host 'c1': 77c14ea3-41e4-4008-81b2-4b18565796ec
Creating host mapping for compute host 'c1': 77c14ea3-41e4-4008-81b2-4b18565796ec
Checking host mapping for compute host 'c2': 5398f1e7-ee65-471a-bb7c-357764c357a1
Creating host mapping for compute host 'c2': 5398f1e7-ee65-471a-bb7c-357764c357a1
Found 2 unmapped computes in cell: 269a515f-5d1a-4acc-be6b-26f3146b0ec6
By default, every time a compute node is added, a scan must be run on the controller, which is tedious; instead, modify the controller's main nova configuration file
[root@ct ~]# vim /etc/nova/nova.conf
[scheduler]		##insert the line below under this section
discover_hosts_in_cells_interval = 300 		##scan every 300 seconds so new compute nodes are added automatically
[root@ct ~]# systemctl restart openstack-nova-api.service
Verify the compute node services
#Check that every nova service is healthy and that the compute service registered successfully
[root@ct ~]# openstack compute service list 
+----+----------------+------+----------+---------+-------+----------------------------+
| ID | Binary         | Host | Zone     | Status  | State | Updated At                 |
+----+----------------+------+----------+---------+-------+----------------------------+
|  4 | nova-scheduler | ct   | internal | enabled | up    | 2021-08-26T13:34:17.000000 |
|  7 | nova-conductor | ct   | internal | enabled | up    | 2021-08-26T13:34:19.000000 |
|  8 | nova-compute   | c1   | nova     | enabled | up    | 2021-08-26T13:34:20.000000 |
|  9 | nova-compute   | c2   | nova     | enabled | up    | 2021-08-26T13:34:13.000000 |
+----+----------------+------+----------+---------+-------+----------------------------+
[root@ct ~]# openstack catalog list		##check that each component's API is reachable
+-----------+-----------+---------------------------------+
| Name      | Type      | Endpoints                       |
+-----------+-----------+---------------------------------+
| placement | placement | RegionOne                       |
|           |           |   public: http://ct:8778        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8778         |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:8778      |
|           |           |                                 |
| glance    | image     | RegionOne                       |
|           |           |   public: http://ct:9292        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:9292         |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:9292      |
|           |           |                                 |
| keystone  | identity  | RegionOne                       |
|           |           |   admin: http://ct:5000/v3/     |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:5000/v3/  |
|           |           | RegionOne                       |
|           |           |   public: http://ct:5000/v3/    |
|           |           |                                 |
| nova      | compute   | RegionOne                       |
|           |           |   internal: http://ct:8774/v2.1 |
|           |           | RegionOne                       |
|           |           |   public: http://ct:8774/v2.1   |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8774/v2.1    |
|           |           |                                 |
+-----------+-----------+---------------------------------+
[root@ct ~]# openstack image list		##check that images can be retrieved
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ca02269b-9b20-4a03-9b35-5c889b624db8 | cirros | active |
+--------------------------------------+--------+--------+
#Check that the cell API and the placement API are both healthy; if either is broken, VMs cannot be created later
[root@ct ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+
