OpenStack Yoga Installation Notes (14): Launch an Instance

1. Official documentation

OpenStack Installation Guide: https://docs.openstack.org/install-guide/

This installation is performed on Ubuntu 22.04 and largely follows the order of the OpenStack Installation Guide. The main steps are:

  • Environment setup (done)
  • OpenStack service installation
    • keystone installation (done)
    • glance installation (done)
    • placement installation (done)
    • nova installation (done)
    • neutron installation (done)
  • Launch an instance ◄──

Note: the OpenStack website has been reorganized; the Yoga component installation guides can be found at:

OpenStack Docs: Yoga Installation Guides

2. Create virtual networks (provider network)

Notes:

In OpenStack, an instance is a VM (virtual machine); "launch an instance" means creating and starting a VM.

The earlier neutron installation introduced two network topology options: provider networks and self-service networks.

Following the first option (provider network), create a network named provider, then a subnet named provider; the instances we create will sit on this subnet. The network is attached to a physical NIC through a bridge (the physical network is specified with the --provider-physical-network parameter at network-creation time). The network also includes a DHCP server that dynamically assigns IP addresses to the instances.

First, record the IP address information of the controller and compute1 nodes, so that later changes can be compared.

controller: 

root@controller:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:3c brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.20.11/24 brd 10.0.20.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea8:e03c/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:46 brd ff:ff:ff:ff:ff:ff
    altname enp2s2
    inet6 fe80::20c:29ff:fea8:e046/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:7b:e8:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

virbr0 is the default bridge created by KVM/libvirt; it provides NAT access to the outside world for virtual NICs attached to it.

virbr0 is assigned 192.168.122.1 by default and runs a DHCP service for the virtual NICs attached to it.

virbr0 has nothing to do with this installation.

compute1:

root@compute1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:51:16:68 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet 10.0.20.12/24 brd 10.0.20.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe51:1668/64 scope link
       valid_lft forever preferred_lft forever
3: ens35: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:51:16:72 brd ff:ff:ff:ff:ff:ff
    altname enp2s3
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:70:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
root@compute1:~# 

2.1 Create the network

root@osclient ~(admin/amdin)# openstack network create  --share --external \
>   --provider-physical-network provider \
>   --provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2024-09-21T09:06:01Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 48f2b88e-7740-4d94-a631-69e2abadf25b |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| project_id                | ee65b6c3961747b988ab8bd1cc19fb93     |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2024-09-21T09:06:01Z                 |
+---------------------------+--------------------------------------+
root@osclient ~(admin/amdin)# 
  1. openstack network create: the basic OpenStack CLI command for creating a new network.

  2. --share: makes the network shared, i.e. multiple projects (tenants) can attach to it.

  3. --external: marks the network as an external network. External networks connect virtual networks to the physical network so that instances can reach outside resources. (Flagging a network as external is what triggers the related behavior.)

  4. --provider-physical-network provider: the name of the physical network; here it is provider. Note that this physical-network name provider is defined in /etc/neutron/plugins/ml2/ml2_conf.ini (flat_networks = provider). How this physical network maps to an actual NIC on each host is defined in /etc/neutron/plugins/ml2/linuxbridge_agent.ini (physical_interface_mappings = provider:ens34). In other words, the bridge later created on demand on each host for this network will be attached to that host's corresponding physical NIC.

  5. --provider-network-type flat: sets the network type to flat on the physical network provider specified above. Flat means the mapped physical port on each host carries the traffic untagged (no 802.1Q encapsulation); frames from instances on this network are sent straight out of that physical port.

  6. provider: the name of the network being created; here the new external network is named provider. (The name is arbitrary; be careful not to confuse it with the physical-network name provider above.)

(Figure: the --provider-physical-network provider concept)

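The chain from the CLI name to a host NIC can be made visible by grepping the two config files mentioned above. A minimal sketch (the ini fragments are reproduced in a temp directory here, with the values configured earlier in this series, so the greps can run anywhere; on a real node you would grep the files under /etc/neutron directly):

```shell
# Recreate the two relevant config lines from the neutron installation
# (values as configured earlier in this series) in a temp dir.
tmp=$(mktemp -d)
printf '[ml2_type_flat]\nflat_networks = provider\n' > "$tmp/ml2_conf.ini"
printf '[linux_bridge]\nphysical_interface_mappings = provider:ens34\n' > "$tmp/linuxbridge_agent.ini"

# The physical-network name that --provider-physical-network must match:
grep '^flat_networks' "$tmp/ml2_conf.ini"
# The host NIC that this name maps to on each node:
grep '^physical_interface_mappings' "$tmp/linuxbridge_agent.ini"
```

Both greps must agree on the name "provider"; if they do not, the flat network cannot be bound to a physical port.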
2.2 Create a subnet on the network

root@osclient ~(admin/amdin)# openstack subnet create --network provider \
>   --allocation-pool start=203.0.113.101,end=203.0.113.250 \
>   --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
>   --subnet-range 203.0.113.0/24 provider
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 203.0.113.101-203.0.113.250          |
| cidr                 | 203.0.113.0/24                       |
| created_at           | 2024-09-26T00:19:21Z                 |
| description          |                                      |
| dns_nameservers      | 8.8.4.4                              |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | True                                 |
| gateway_ip           | 203.0.113.1                          |
| host_routes          |                                      |
| id                   | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | provider                             |
| network_id           | 48f2b88e-7740-4d94-a631-69e2abadf25b |
| project_id           | ee65b6c3961747b988ab8bd1cc19fb93     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2024-09-26T00:19:21Z                 |
+----------------------+--------------------------------------+
root@osclient ~(admin/amdin)# 
root@osclient ~(admin/amdin)# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 48f2b88e-7740-4d94-a631-69e2abadf25b | provider | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
+--------------------------------------+----------+--------------------------------------+
root@osclient ~(admin/amdin)# openstack subnet list
+--------------------------------------+----------+--------------------------------------+----------------+
| ID                                   | Name     | Network                              | Subnet         |
+--------------------------------------+----------+--------------------------------------+----------------+
| 8279842e-d7c5-4ba6-a037-831e0a72a938 | provider | 48f2b88e-7740-4d94-a631-69e2abadf25b | 203.0.113.0/24 |
+--------------------------------------+----------+--------------------------------------+----------------+
root@osclient ~(admin/amdin)# 
  • openstack subnet create: the command that creates a subnet.

  • --network provider: the network the subnet belongs to; provider here is the network created earlier.

  • --allocation-pool start=203.0.113.101,end=203.0.113.250: the subnet's IP allocation pool; addresses from 203.0.113.101 to 203.0.113.250 will be handed out to devices on the subnet.

  • --dns-nameserver 8.8.4.4: the subnet's DNS server; here Google's public DNS server 8.8.4.4 is used.

  • --gateway 203.0.113.1: the subnet's gateway, i.e. the device that routes traffic to other networks; here it is set to 203.0.113.1.

  • --subnet-range 203.0.113.0/24: the subnet's address range. 203.0.113.0/24 means the network address is 203.0.113.0 with netmask 255.255.255.0 (/24), giving 256 addresses in total (usable hosts run from 203.0.113.1 to 203.0.113.254; .0 is the network address and .255 the broadcast address, neither normally assigned to devices).

In short, this command creates a subnet on the network named provider, with the given address range, DNS server, gateway, and allocation pool.

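As a quick sanity check on the numbers above, the allocation-pool size and the usable host count of the /24 can be computed with plain shell arithmetic (a sketch; START and END are the last octets from the command above):

```shell
# Allocation pool: 203.0.113.101 .. 203.0.113.250 (last octets 101..250)
START=101
END=250
POOL=$((END - START + 1))     # addresses Neutron can hand out to instances
# A /24 has 2^(32-24) = 256 addresses; network and broadcast are not usable
USABLE=$((256 - 2))
echo "pool=$POOL usable=$USABLE"   # prints: pool=150 usable=254
```

Note that the gateway 203.0.113.1 and the low addresses are deliberately kept outside the pool, so DHCP will never hand them out.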
At this point, the network/subnet created by user admin in project admin looks like this:

(Figure: network/subnet layout)

On the hosts, the actual changes are: on the controller node, a network namespace is created for the "provider" network, running a DHCP service dedicated to this network (because the DHCP agent is installed on the controller node);

also on the controller node, a bridge is created for the "provider" network, and the DHCP namespace is attached to this bridge.

root@controller:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:3c brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.20.11/24 brd 10.0.20.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea8:e03c/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master brq48f2b88e-77 state UP group default qlen 1000
    link/ether 00:0c:29:a8:e0:46 brd ff:ff:ff:ff:ff:ff
    altname enp2s2
    inet6 fe80::20c:29ff:fea8:e046/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:7b:e8:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: tapa51b2fe4-04@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brq48f2b88e-77 state UP group default qlen 1000
    link/ether ce:a2:22:a5:77:6a brd ff:ff:ff:ff:ff:ff link-netns qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b
6: brq48f2b88e-77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:f4:3e:a8:e0:c3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::34f4:3eff:fea8:e0c3/64 scope link
       valid_lft forever preferred_lft forever
root@controller:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brq48f2b88e-77          8000.36f43ea8e0c3       no              ens34
                                                                tapa51b2fe4-04
virbr0          8000.5254007be820       yes
root@controller:~# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-a51b2fe4-04@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:5b:d6:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 203.0.113.101/24 brd 203.0.113.255 scope global ns-a51b2fe4-04
       valid_lft forever preferred_lft forever
    inet 169.254.169.254/32 brd 169.254.169.254 scope global ns-a51b2fe4-04
       valid_lft forever preferred_lft forever
    inet6 fe80::a9fe:a9fe/128 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe5b:d6a5/64 scope link
       valid_lft forever preferred_lft forever
root@controller:~# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ps -aux | grep dns
libvirt+    1237  0.0  0.0  10084   392 ?        S    Sep25   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
root        1238  0.0  0.0  10084   392 ?        S    Sep25   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
nobody      6280  0.0  0.0  10504   408 ?        S    00:19   0:00 dnsmasq --no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/host --addn-hosts=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/opts --dhcp-leasefile=/var/lib/neutron/dhcp/48f2b88e-7740-4d94-a631-69e2abadf25b/leases --dhcp-match=set:ipxe,175 --dhcp-userclass=set:ipxe6,iPXE --local-service --bind-dynamic --dhcp-range=set:subnet-8279842e-d7c5-4ba6-a037-831e0a72a938,203.0.113.0,static,255.255.255.0,86400s --dhcp-option-force=option:mtu,1500 --dhcp-lease-max=256 --conf-file=/dev/null --domain=openstacklocal
root       12274  0.0  0.0   4024  2096 pts/0    S+   02:26   0:00 grep --color=auto dns
root@controller:~# 

3. Create the m1.nano flavor

root@osclient ~(admin/amdin)# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| description                | None    |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
root@osclient ~(admin/amdin)# 

This command creates a flavor named m1.nano with 1 vCPU, 64 MB of RAM, and 1 GB of disk, suitable for booting instances with very low resource requirements.

4. Generate a key pair

1. Before launching an instance, the user's public key must be added to the Compute service. This sets up SSH public-key authentication so that the instance (VM) can be reached securely once it boots.

Here we log in as user "myuser" in project "myproject" to create the VM.

root@osclient:~# cat demo-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='\u@\h \W(myproject/myuser)\$ '
root@osclient:~# source demo-openrc 
root@osclient ~(myproject/myuser)# pwd
/root
root@osclient ~(myproject/myuser)# 

2. Generate the SSH key pair

root@osclient ~(myproject/myuser)# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): 
root@osclient ~(myproject/myuser)# ls .ssh
authorized_keys  id_rsa  id_rsa.pub  known_hosts
root@osclient ~(myproject/myuser)# cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLdHcjoGyJ1sZPf8uNgncglzcVpgTMphK+jZecOvuAWZcr224pevVaa7OCCjHY10WjxG94It/ZhO7s1PYK6bmV/3p116h1CypK4URXg8u3FV6nEWk4lD/bykVY6vyo1GyNlzXiYM4g5b+Q1B0q2/6BScWaciSv7ujCJj7FlV7lh+jMkXaOU/BBCDfpMP9tvnzujMo2giy29SZycN4JETR69fPNtI0Lvw+lERWZy9bUn9TenbhivKMeEpnO2aFrUyztq4DJlA4C+nvApDm+yDRVW2+Lb02doEc8159FR48usW5mGALUnHLQ2dtmLOjXJeDA6acn9Yx96cWuWHse477CbVu38lsR1sHKnI+Lz4IwK0Fj5iduGwMqeTnKM1Z5z6hF1Nert4YsETPd6A8pQ5U4jjMzYly1xiA3wAcoaM8hFpLW0UVl//SiYjcwwb23rhAH9WgliY+vxO3M+Fu0eodavzZuyAEqyd/IeDD7vEBYRqAzZTYHK6lBbHBD3I/aHg0= root@osclient

The command ssh-keygen -q -N "" generates an SSH key pair. Its parameters:

  • ssh-keygen: the command that generates SSH key pairs.

  • -q: quiet mode; suppresses informational output while the keys are generated.

  • -N "": sets the key's passphrase. Here the empty string means no passphrase, i.e. the key pair is not password-protected.

After the command runs, a new key pair is created in the default SSH key location (usually the ~/.ssh directory):

  • the public key, usually named id_rsa.pub
  • the private key, usually named id_rsa

The public key can be added to a remote server's ~/.ssh/authorized_keys file to enable passwordless login.

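Note that in the transcript above, ssh-keygen -q -N "" still prompted for the file path. Adding -f makes the run fully non-interactive, which is handy in scripts. A sketch (writing to a temp directory so an existing ~/.ssh/id_rsa is not overwritten):

```shell
# Generate an RSA key pair with no passphrase and an explicit output path,
# so ssh-keygen asks no questions at all.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa  id_rsa.pub
```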
3. Upload the public key to OpenStack

root@osclient ~(myproject/myuser)# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | None                                            |
| fingerprint | 9e:63:db:4f:eb:5c:6f:e1:ef:45:e2:77:59:bd:ef:40 |
| id          | mykey                                           |
| is_deleted  | None                                            |
| name        | mykey                                           |
| type        | ssh                                             |
| user_id     | 9382b59561c04dd1abf0a4cb7a8252ec                |
+-------------+-------------------------------------------------+
root@osclient ~(myproject/myuser)# 
root@osclient ~(myproject/myuser)# openstack keypair list
+-------+-------------------------------------------------+------+
| Name  | Fingerprint                                     | Type |
+-------+-------------------------------------------------+------+
| mykey | 9e:63:db:4f:eb:5c:6f:e1:ef:45:e2:77:59:bd:ef:40 | ssh  |
+-------+-------------------------------------------------+------+

5. Add security group rules

In OpenStack, the default security group is applied to all instances and contains firewall rules that deny remote access by default. For Linux images such as CirrOS, it is recommended to allow at least ICMP (ping) and secure shell (SSH).

Add new rules to the default security group:

1. Permit ICMP (ping):

root@osclient ~(myproject/myuser)# openstack security group rule create --proto icmp default
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| created_at              | 2024-09-27T13:04:13Z                 |
| description             |                                      |
| direction               | ingress                              |
| ether_type              | IPv4                                 |
| id                      | 2cc95680-49b4-4b6b-8cea-52f8cb7302aa |
| name                    | None                                 |
| port_range_max          | None                                 |
| port_range_min          | None                                 |
| project_id              | f5e75a3f7cc347ad89d20dcfe70dae01     |
| protocol                | icmp                                 |
| remote_address_group_id | None                                 |
| remote_group_id         | None                                 |
| remote_ip_prefix        | 0.0.0.0/0                            |
| revision_number         | 0                                    |
| security_group_id       | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| tags                    | []                                   |
| tenant_id               | f5e75a3f7cc347ad89d20dcfe70dae01     |
| updated_at              | 2024-09-27T13:04:13Z                 |
+-------------------------+--------------------------------------+
root@osclient ~(myproject/myuser)# 

2. Permit secure shell (SSH) access:

root@osclient ~(myproject/myuser)# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| created_at              | 2024-09-27T13:07:47Z                 |
| description             |                                      |
| direction               | ingress                              |
| ether_type              | IPv4                                 |
| id                      | 6452f09e-cbce-4ff9-845e-dcfb7144f62d |
| name                    | None                                 |
| port_range_max          | 22                                   |
| port_range_min          | 22                                   |
| project_id              | f5e75a3f7cc347ad89d20dcfe70dae01     |
| protocol                | tcp                                  |
| remote_address_group_id | None                                 |
| remote_group_id         | None                                 |
| remote_ip_prefix        | 0.0.0.0/0                            |
| revision_number         | 0                                    |
| security_group_id       | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| tags                    | []                                   |
| tenant_id               | f5e75a3f7cc347ad89d20dcfe70dae01     |
| updated_at              | 2024-09-27T13:07:47Z                 |
+-------------------------+--------------------------------------+
root@osclient ~(myproject/myuser)

3. List the rules in the security group:

root@osclient ~(myproject/myuser)# openstack security group rule list
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+--------------------------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group                | Remote Address Group | Security Group                       |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+--------------------------------------+
| 1adec9af-14a9-4288-8364-e79a8fa3b75a | None        | IPv4      | 0.0.0.0/0 |            | ingress   | 15dfe688-d6fc-4231-a670-7b832e08fb9d | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| 2cc95680-49b4-4b6b-8cea-52f8cb7302aa | icmp        | IPv4      | 0.0.0.0/0 |            | ingress   | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| 6452f09e-cbce-4ff9-845e-dcfb7144f62d | tcp         | IPv4      | 0.0.0.0/0 | 22:22      | ingress   | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| a5046171-f9e9-451f-acfe-662ef32ea651 | None        | IPv6      | ::/0      |            | ingress   | 15dfe688-d6fc-4231-a670-7b832e08fb9d | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| a7dffce7-946e-421e-bfa9-22fa65f4bf7a | None        | IPv4      | 0.0.0.0/0 |            | egress    | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
| d9bae044-c411-4d73-a5f4-ab422e3152a9 | None        | IPv6      | ::/0      |            | egress    | None                                 | None                 | 15dfe688-d6fc-4231-a670-7b832e08fb9d |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+--------------------------------------+
root@osclient ~(myproject/myuser)# 

6. Launch an instance

6.1 Determine instance options

root@osclient:~# source demo-openrc 
root@osclient ~(myproject/myuser)# 
root@osclient ~(myproject/myuser)# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano |  64 |    1 |         0 |     1 | True      |
+----+---------+-----+------+-----------+-------+-----------+
root@osclient ~(myproject/myuser)# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 429decdd-9230-49c0-b735-70364c226eb5 | cirros | active |
+--------------------------------------+--------+--------+
root@osclient ~(myproject/myuser)# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 48f2b88e-7740-4d94-a631-69e2abadf25b | provider | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
+--------------------------------------+----------+--------------------------------------+
root@osclient ~(myproject/myuser)# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 15dfe688-d6fc-4231-a670-7b832e08fb9d | default | Default security group | f5e75a3f7cc347ad89d20dcfe70dae01 | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+
root@osclient ~(myproject/myuser)# 

6.2 Launch the instance (an error occurred here; the troubleshooting process is recorded below, as the error stems from the original guide's sample config)

root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros \
>   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default \
>   --key-name mykey provider-instance
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'keystoneauth1.exceptions.discovery.DiscoveryFailure'> (HTTP 500) (Request-ID: req-e0b4228c-3b17-4c5b-9033-b36f7793d553)
root@osclient ~(myproject/myuser)# 

Launching the instance fails. The create request goes to nova-api first, so check the nova-api log:

root@controller:~# tail -n 1000 /var/log/nova/nova-api.log
...
2024-09-27 13:36:36.360 1642 WARNING keystoneauth.identity.generic.base [req-a4e30ec1-d74e-413b-8a2f-07508bfe6e5f 9382b59561c04dd1abf0a4cb7a8252ec f5e75a3f7cc347ad89d20dcfe70dae01 - default default] Failed to discover available identity versions when contacting https://controller/identity. Attempting to parse version from URL.: keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://controller/identity: HTTPSConnectionPool(host='controller', port=443): Max retries exceeded with url: /identity (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fea2abebdf0>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))
2024-09-27 13:36:36.366 1642 ERROR nova.api.openstack.wsgi [req-a4e30ec1-d74e-413b-8a2f-07508bfe6e5f 9382b59561c04dd1abf0a4cb7a8252ec f5e75a3f7cc347ad89d20dcfe70dae01 - default default] Unexpected exception in API method: keystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to https://controller/identity: HTTPSConnectionPool(host='controller', port=443): Max retries exceeded with url: /identity (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fea2abebdf0>: Failed to establish a new connection: [Errno 111] ECONNREFUSED'))

The error shows that nova-api failed to reach the Identity service (Keystone): the connection to https://controller/identity could not be established and the maximum number of retries was exceeded. This usually points to a network problem or a configuration error.

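The root cause is visible in the URL itself: https://controller/identity implies port 443, while Keystone in this installation listens on http://controller:5000. A small shell sketch that extracts the scheme and the implied port (pure string handling; no controller access needed):

```shell
bad_url="https://controller/identity"     # value found in nova.conf [service_user]
scheme=${bad_url%%://*}                   # -> https
rest=${bad_url#*://}
hostport=${rest%%/*}                      # -> controller (no explicit port given)
case $hostport in
  *:*) port=${hostport##*:} ;;                          # explicit port, if any
  *)   [ "$scheme" = "https" ] && port=443 || port=80 ;;  # scheme default
esac
echo "nova-api connects to port $port, but keystone listens on 5000 -> ECONNREFUSED"
```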
Check /etc/nova/nova.conf on the controller node, change it as shown below, and restart the services:

[service_user]
send_service_user_token = true
auth_url = https://controller/identity  <-- change to: auth_url = http://controller:5000/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = openstack 

Now the instance can be created, but it ends up in ERROR state:

root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default   --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | ZLvm7CtGav5B                                  |
| config_drive                |                                               |
| created                     | 2024-09-27T13:51:22Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 23bab8ab-5ce5-461b-9f9b-b5bfcff45529          |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01              |
| properties                  |                                               |
| security_groups             | name='15dfe688-d6fc-4231-a670-7b832e08fb9d'   |
| status                      | BUILD                                         |
| updated                     | 2024-09-27T13:51:22Z                          |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+
root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+----------+--------+---------+
| ID                                   | Name              | Status | Networks | Image  | Flavor  |
+--------------------------------------+-------------------+--------+----------+--------+---------+
| 23bab8ab-5ce5-461b-9f9b-b5bfcff45529 | provider-instance | ERROR  |          | cirros | m1.nano |
+--------------------------------------+-------------------+--------+----------+--------+---------+
root@osclient ~(myproject/myuser)# 

Check the log on compute1 (root@compute1:~# tail -n 2000 /var/log/nova/nova-compute.log); it shows an error similar to the one seen earlier. Fix the corresponding section of /etc/nova/nova.conf on compute1 and restart the service:

root@compute1:~# vi /etc/nova/nova.conf 
...
[service_user]
send_service_user_token = true
auth_url = http://controller:5000/identity  <--- value after the fix
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = openstack
...
root@compute1:~# service nova-compute restart
root@compute1:~#
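To confirm the edit landed in the right section, the value can be read back section-scoped (several sections of nova.conf define an auth_url). A minimal sketch, using a throwaway sample file standing in for /etc/nova/nova.conf so the parsing is self-contained:

```shell
# Read auth_url from the [service_user] section only. The temp sample file
# stands in for /etc/nova/nova.conf; point CONF there on a real node.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[keystone_authtoken]
auth_url = http://controller:5000
[service_user]
auth_url = http://controller:5000/identity
EOF
# Track the current [section]; print auth_url only inside [service_user].
VAL=$(awk -F' *= *' '/^\[/ {sec=$0} sec=="[service_user]" && $1=="auth_url" {print $2}' "$CONF")
echo "$VAL"
rm -f "$CONF"
```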

Delete the instance created earlier:

root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+----------+--------+---------+
| ID                                   | Name              | Status | Networks | Image  | Flavor  |
+--------------------------------------+-------------------+--------+----------+--------+---------+
| 23bab8ab-5ce5-461b-9f9b-b5bfcff45529 | provider-instance | ERROR  |          | cirros | m1.nano |
+--------------------------------------+-------------------+--------+----------+--------+---------+
root@osclient ~(myproject/myuser)# openstack server delete  23bab8ab-5ce5-461b-9f9b-b5bfcff45529

Re-run the server create command; this time the instance builds normally:

root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default   --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | Fkcpj47EGcxG                                  |
| config_drive                |                                               |
| created                     | 2024-09-27T14:09:37Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 4e2e96de-b9be-4da8-925c-e3048d8a3b44          |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01              |
| properties                  |                                               |
| security_groups             | name='15dfe688-d6fc-4231-a670-7b832e08fb9d'   |
| status                      | BUILD                                         |
| updated                     | 2024-09-27T14:09:37Z                          |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+
root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| ID                                   | Name              | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | ACTIVE | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# 
root@osclient ~(myproject/myuser)# openstack server show 4e2e96de-b9be-4da8-925c-e3048d8a3b44
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                   |
| OS-EXT-AZ:availability_zone | nova                                                     |
| OS-EXT-STS:power_state      | Running                                                  |
| OS-EXT-STS:task_state       | None                                                     |
| OS-EXT-STS:vm_state         | active                                                   |
| OS-SRV-USG:launched_at      | 2024-09-27T14:09:17.000000                               |
| OS-SRV-USG:terminated_at    | None                                                     |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| addresses                   | provider=203.0.113.155                                   |
| config_drive                |                                                          |
| created                     | 2024-09-27T14:09:37Z                                     |
| flavor                      | m1.nano (0)                                              |
| hostId                      | 892d1a79d804f6b0fbfb68938ec0df8a0abc8e3d52660529538123e4 |
| id                          | 4e2e96de-b9be-4da8-925c-e3048d8a3b44                     |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5)            |
| key_name                    | mykey                                                    |
| name                        | provider-instance                                        |
| progress                    | 0                                                        |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01                         |
| properties                  |                                                          |
| security_groups             | name='default'                                           |
| status                      | ACTIVE                                                   |
| updated                     | 2024-09-27T22:23:25Z                                     |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec                         |
| volumes_attached            |                                                          |
+-----------------------------+----------------------------------------------------------+
root@osclient ~(myproject/myuser)# 

6.3 A problem found while checking the hypervisors (human configuration error)

root@controller ~(admin/amdin)# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            | up    | enabled |
| 027eb56f-a860-41b8-afa3-91b65f1c8777 | controller          | up    | enabled |
+--------------------------------------+---------------------+-------+---------+
root@controller ~(admin/amdin)# nova hypervisor-show 205c89e0-fb82-4def-a0f6-bfe4b120ab79
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.11                           |
| hypervisor_hostname     | compute1                             |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 205c89e0-fb82-4def-a0f6-bfe4b120ab79 |
| service_disabled_reason | None                                 |
| service_host            | compute1                             |
| service_id              | c04e53a4-fdb8-4915-9b1a-f5d195e753c4 |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  23:22:06 up  1:04,  1 user,  load   |
|                         | average: 0.19, 0.24, 0.25            |
+-------------------------+--------------------------------------+
root@controller ~(admin/amdin)# nova hypervisor-show 027eb56f-a860-41b8-afa3-91b65f1c8777
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.11                           |
| hypervisor_hostname     | controller                           |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 027eb56f-a860-41b8-afa3-91b65f1c8777 |
| service_disabled_reason | None                                 |
| service_host            | controller                           |
| service_id              | b3d4e71d-088a-4249-8d8f-e6d8528c698d |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  23:22:46 up  1:05,  1 user,  load   |
|                         | average: 0.18, 0.18, 0.17            |
+-------------------------+--------------------------------------+

Both hypervisors report host_ip 10.0.20.11, which is wrong: compute1 should be 10.0.20.12. Checking /etc/nova/nova.conf on compute1 reveals the mistake:

[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.0.20.11 <---- should be 10.0.20.12!
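A mismatch like this can be caught mechanically by comparing my_ip against the address the node should advertise. A small sketch, run here against a throwaway sample file that reproduces the bad value (on a real node, point CONF at /etc/nova/nova.conf and set EXPECTED to that node's management IP):

```shell
# Compare my_ip in a nova.conf-style file with the node's expected address.
# The sample file reproduces the misconfiguration found on compute1.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
my_ip = 10.0.20.11
EOF
EXPECTED=10.0.20.12
ACTUAL=$(awk -F' *= *' '$1 == "my_ip" {print $2}' "$CONF")
if [ "$ACTUAL" != "$EXPECTED" ]; then
  echo "my_ip is $ACTUAL, expected $EXPECTED"
fi
rm -f "$CONF"
```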

After fixing it, reboot compute1:

root@compute1 ~(admin/amdin)# vi /etc/nova/nova.conf
root@compute1 ~(admin/amdin)# reboot

To be safe, reboot the controller as well:

root@controller ~(admin/amdin)# reboot

Checking again, the hypervisors now report correctly:

root@controller ~(admin/amdin)# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            | up    | enabled |
| 027eb56f-a860-41b8-afa3-91b65f1c8777 | controller          | up    | enabled |
+--------------------------------------+---------------------+-------+---------+
root@controller ~(admin/amdin)# nova hypervisor-show 205c89e0-fb82-4def-a0f6-bfe4b120ab79
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.12                           |
| hypervisor_hostname     | compute1                             |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 205c89e0-fb82-4def-a0f6-bfe4b120ab79 |
| service_disabled_reason | None                                 |
| service_host            | compute1                             |
| service_id              | c04e53a4-fdb8-4915-9b1a-f5d195e753c4 |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  00:39:54 up 4 min,  1 user,  load   |
|                         | average: 0.08, 0.16, 0.08            |
+-------------------------+--------------------------------------+
root@controller ~(admin/amdin)# nova hypervisor-show 027eb56f-a860-41b8-afa3-91b65f1c8777
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| host_ip                 | 10.0.20.11                           |
| hypervisor_hostname     | controller                           |
| hypervisor_type         | QEMU                                 |
| hypervisor_version      | 6002000                              |
| id                      | 027eb56f-a860-41b8-afa3-91b65f1c8777 |
| service_disabled_reason | None                                 |
| service_host            | controller                           |
| service_id              | b3d4e71d-088a-4249-8d8f-e6d8528c698d |
| state                   | up                                   |
| status                  | enabled                              |
| uptime                  |  00:35:22 up 2 min,  1 user,  load   |
|                         | average: 0.76, 0.52, 0.21            |
+-------------------------+--------------------------------------+
root@controller ~(admin/amdin)#

6.4 Finding which hypervisor the instance runs on

Run openstack server start to restart the instance:

root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+---------+------------------------+--------+---------+
| ID                                   | Name              | Status  | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+---------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | SHUTOFF | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+---------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# openstack server start provider-instance
root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| ID                                   | Name              | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | ACTIVE | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# 

On the controller, list the hypervisors first, then check each hypervisor's running instances:

root@controller ~(admin/amdin)# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            | up    | enabled |
| 027eb56f-a860-41b8-afa3-91b65f1c8777 | controller          | up    | enabled |
+--------------------------------------+---------------------+-------+---------+

Check whether the controller hosts any instances; it shows none:

root@controller ~(admin/amdin)# nova hypervisor-servers controller
+----+------+---------------+---------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+----+------+---------------+---------------------+
+----+------+---------------+---------------------+

Check whether compute1 hosts any instances; it shows one:

root@controller ~(admin/amdin)# nova hypervisor-servers compute1
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID                                   | Name              | Hypervisor ID                        | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | instance-00000003 | 205c89e0-fb82-4def-a0f6-bfe4b120ab79 | compute1            |
+--------------------------------------+-------------------+--------------------------------------+---------------------+

The same can also be checked with virsh:

root@compute1:~# virsh list
 Id   Name                State
-----------------------------------
 1    instance-00000003   running

root@compute1:~# virsh dominfo instance-00000003
Id:             1
Name:           instance-00000003
UUID:           4e2e96de-b9be-4da8-925c-e3048d8a3b44
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       40.7s
Max memory:     65536 KiB
Used memory:    65536 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-4e2e96de-b9be-4da8-925c-e3048d8a3b44 (enforcing)
root@compute1:~# 

 7、Access the instance using the virtual console

1、Get the console URL of the instance

root@osclient ~(myproject/myuser)# openstack console url show provider-instance
+----------+-------------------------------------------------------------------------------------------+
| Field    | Value                                                                                     |
+----------+-------------------------------------------------------------------------------------------+
| protocol | vnc                                                                                       |
| type     | novnc                                                                                     |
| url      | http://controller:6080/vnc_auto.html?path=%3Ftoken%3D1674eeed-8a9d-4c2e-aef3-1f26ddf2f2b6 |
+----------+-------------------------------------------------------------------------------------------+

2、Open the URL in a browser on the PC at 10.0.20.1:

The console did not display any terminal output; the cause is still to be determined.

Check the log file:

root@controller ~(admin/amdin)# tail -n 100 /var/log/nova/nova-novncproxy.log
...
2024-09-28 01:36:39.857 3537 INFO nova.console.websocketproxy [-] 10.0.20.1 - - [28/Sep/2024 01:36:39] 10.0.20.1: Plain non-SSL (ws://) WebSocket connection
2024-09-28 01:36:39.858 3537 INFO nova.console.websocketproxy [-] 10.0.20.1 - - [28/Sep/2024 01:36:39] 10.0.20.1: Path: '/?token=1674eeed-8a9d-4c2e-aef3-1f26ddf2f2b6'
2024-09-28 01:36:40.031 3537 INFO nova.console.websocketproxy [req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -]   6: connect info: ConsoleAuthToken(access_url_base='http://controller:6080/vnc_auto.html',console_type='novnc',created_at=2024-09-28T01:34:12Z,host='10.0.20.12',id=6,instance_uuid=4e2e96de-b9be-4da8-925c-e3048d8a3b44,internal_access_path=None,port=5900,token='***',updated_at=None)
2024-09-28 01:36:40.032 3537 INFO nova.console.websocketproxy [req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -]   6: connecting to: 10.0.20.12:5900
2024-09-28 01:36:40.038 3537 INFO nova.console.securityproxy.rfb [req-0434a76f-5004-4af4-b9ac-697267eb3ede - - - - -] Finished security handshake, resuming normal proxy mode using secured socket
root@controller ~(admin/amdin)# 

The log shows the following key points:

  1. WebSocket connection

    • A client at 10.0.20.1 attempts a plain non-SSL WebSocket (ws://) connection to the Nova console proxy.
  2. Connection path

    • The request carries a token used to validate the connection.
  3. Console auth info

    • The proxy accepts the connection and resolves the console auth record: access URL, console type (noVNC), creation time, host address, instance UUID, port, and token.
  4. Connection to the VNC server

    • The proxy forwards the connection to the instance's VNC server at 10.0.20.12, port 5900.
  5. Security handshake

    • The handshake completes, and the proxy switches to normal proxy mode over the secured socket.
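The token the proxy validates is the URL-encoded token%3D query parameter in the console URL. As a small illustration, it can be pulled out with sed, using the sample URL returned by openstack console url show above:

```shell
# Extract the noVNC token from a console URL of the form
# .../vnc_auto.html?path=%3Ftoken%3D<uuid> (percent-encoded ?token=).
URL='http://controller:6080/vnc_auto.html?path=%3Ftoken%3D1674eeed-8a9d-4c2e-aef3-1f26ddf2f2b6'
TOKEN=$(printf '%s\n' "$URL" | sed -n 's/.*token%3D\([0-9a-f-]*\).*/\1/p')
echo "$TOKEN"
```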

8、Recreate the instance

While pinging the "provider-instance" VM from the qdhcp namespace, the ping failed. It turned out that ens35 on compute1 was not configured correctly.

Run the following commands:

root@compute1:~# vi /etc/netplan/00-installer-config.yaml 
root@compute1:~# netplan apply
root@compute1:~# cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens32:
      addresses:
      - 10.0.20.12/24
      nameservers:
        addresses:
        - 10.0.20.2
        search: []
      routes:
      - to: default
        via: 10.0.20.2
    ens35:
      dhcp4: false
  version: 2
root@compute1:~# netplan apply

To rule out other problems, the instance was recreated:

root@osclient ~(myproject/myuser)# openstack server list
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| ID                                   | Name              | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
| 4e2e96de-b9be-4da8-925c-e3048d8a3b44 | provider-instance | ACTIVE | provider=203.0.113.155 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+------------------------+--------+---------+
root@osclient ~(myproject/myuser)# openstack server delete 4e2e96de-b9be-4da8-925c-e3048d8a3b44
root@osclient ~(myproject/myuser)# openstack server list
root@osclient ~(myproject/myuser)# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 48f2b88e-7740-4d94-a631-69e2abadf25b | provider | 8279842e-d7c5-4ba6-a037-831e0a72a938 |
+--------------------------------------+----------+--------------------------------------+
root@osclient ~(myproject/myuser)# openstack server create --flavor m1.nano --image cirros \
>   --nic net-id=48f2b88e-7740-4d94-a631-69e2abadf25b --security-group default \
>   --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | ee9VbWSvbbG8                                  |
| config_drive                |                                               |
| created                     | 2024-09-28T02:49:20Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | d2e4bc39-63c8-4c80-b33f-52f4e1891f50          |
| image                       | cirros (429decdd-9230-49c0-b735-70364c226eb5) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | f5e75a3f7cc347ad89d20dcfe70dae01              |
| properties                  |                                               |
| security_groups             | name='15dfe688-d6fc-4231-a670-7b832e08fb9d'   |
| status                      | BUILD                                         |
| updated                     | 2024-09-28T02:49:20Z                          |
| user_id                     | 9382b59561c04dd1abf0a4cb7a8252ec              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+
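After openstack server create returns, the server is still in BUILD; a simple loop can poll until it settles at ACTIVE or ERROR. A sketch with a stub status function so the loop itself is runnable as-is; on the client node you would instead set STATUS from "openstack server show provider-instance -f value -c status":

```shell
# Wait-for-ACTIVE loop. next_status is a stand-in that reports BUILD twice
# and then ACTIVE, mimicking a server finishing its build.
i=0
next_status() {
  i=$((i + 1))
  if [ "$i" -lt 3 ]; then STATUS=BUILD; else STATUS=ACTIVE; fi
}
while :; do
  next_status
  echo "status: $STATUS"
  case "$STATUS" in
    ACTIVE|ERROR) break ;;
  esac
  sleep 1
done
```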

9、Network topology after creating the instance

After the instance is created, the abstract network topology as seen by OpenStack:

(Figure: the network topology from the OpenStack view)

The actual topology, in which OpenStack has created qdhcpxxxx, brqxxxx, and provider-instance:

(Figure: the actual network topology)

10、SSH into the instance

1、The qdhcp namespace can ping the instance

root@controller ~(admin/amdin)# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ping 203.0.113.125
PING 203.0.113.125 (203.0.113.125) 56(84) bytes of data.
64 bytes from 203.0.113.125: icmp_seq=1 ttl=64 time=2.23 ms
64 bytes from 203.0.113.125: icmp_seq=2 ttl=64 time=0.786 ms
^C
--- 203.0.113.125 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.786/1.506/2.227/0.720 ms
root@controller ~(admin/amdin)# ip netns exec qdhcp-48f2b88e-7740-4d94-a631-69e2abadf25b ping 203.0.113.90
PING 203.0.113.90 (203.0.113.90) 56(84) bytes of data.
64 bytes from 203.0.113.90: icmp_seq=1 ttl=64 time=0.233 ms
64 bytes from 203.0.113.90: icmp_seq=2 ttl=64 time=0.323 ms
64 bytes from 203.0.113.90: icmp_seq=3 ttl=64 time=0.300 ms
^C
--- 203.0.113.90 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.233/0.285/0.323/0.038 ms
root@controller ~(admin/amdin)# 

2、Windows 11 can ping the instance

C:\>ipconfig

Windows IP Configuration
...
Ethernet adapter VMware Network Adapter VMnet6:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::f73a:9:c195:8516%30
   IPv4 Address. . . . . . . . . . . : 203.0.113.90
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

C:\>ping 203.0.113.125

Pinging 203.0.113.125 with 32 bytes of data:
Reply from 203.0.113.1: Destination host unreachable.
Reply from 203.0.113.125: bytes=32 time=3ms TTL=64
Reply from 203.0.113.125: bytes=32 time<1ms TTL=64
Reply from 203.0.113.125: bytes=32 time<1ms TTL=64

Ping statistics for 203.0.113.125:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 1ms

C:\>

3、SSH from Windows 11 into the instance provider-instance

Using SecureCRT:

username/password: cirros/gocubsgo:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:60:78:cd brd ff:ff:ff:ff:ff:ff
    inet 203.0.113.125/24 brd 203.0.113.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe60:78cd/64 scope link 
       valid_lft forever preferred_lft forever
$ 

