How To Install OpenStack on Debian 12 (Bookworm)

OpenStack is an open source, free-to-use cloud solution that enables you to build a private Infrastructure as a Service (IaaS) platform from a set of complementary services. Each service in OpenStack provides an application programming interface (API) to facilitate integration. The key components available in OpenStack provide compute, networking and storage resources. The cloud can be operated from the command line using the openstack CLI tool, or through its intuitive dashboard, from which you can manage your OpenStack cloud and monitor its resources.

In this article we cover in detail the steps required to set up a private OpenStack cloud locally on a Debian 12 Linux machine. This is a single-node installation intended only for home-lab learning and testing purposes. We use the manual installation method; other solutions such as Kolla and OpenStack-Ansible also exist.

Before starting this setup, make sure your machine meets the following minimum requirements.

  1. A fresh installation of Debian 12 Linux
  2. CPU virtualization extensions enabled in the BIOS
  3. root user or a user with sudo privileges
  4. 2 vCPUs
  5. 8 GB of RAM
  6. 20 GB of disk space
  7. A good internet connection

Let's get started!

1. Prepare the Environment

Our environment uses the following variables:

  • Debian 12 server IP address: 192.168.1.2
  • Debian 12 server hostname: osp01.home.cloudlabske.io
  • Network interface: eno1
  • Default OpenStack region: RegionOne
  • Default domain: default

Set the server hostname.

sudo hostnamectl set-hostname osp01.home.cloudlabske.io

Edit the /etc/hosts file to map the server IP address to the configured hostname.

$ sudo vim /etc/hosts
192.168.1.2 osp01.home.cloudlabske.io osp01

Update the system before doing any other configuration. This assumes you are working on a clean Debian machine.

sudo apt update && sudo apt upgrade -y

A reboot may be required. Check and reboot if so:

[ -e /var/run/reboot-required ] && sudo reboot

Configure NTP time synchronization.

  • Using systemd-timesyncd

Open the timesyncd.conf file for editing and update the address of your NTP server.

$ sudo vim /etc/systemd/timesyncd.conf
[Time]
NTP=192.168.1.1

Restart the systemd-timesyncd service.

sudo systemctl restart systemd-timesyncd

Confirm the status:

sudo timedatectl timesync-status

  • Using Chrony

Install Chrony and configure the NTP server used for time adjustment. NTP uses 123/UDP.

sudo apt -y install chrony vim

You can change the NTP server or keep the default one.

$ sudo vim /etc/chrony/chrony.conf
pool 2.debian.pool.ntp.org iburst

Set the timezone to your current location.

sudo timedatectl set-timezone Africa/Nairobi
sudo timedatectl set-ntp true

Confirm the settings:

$ timedatectl
               Local time: Wed 2024-01-31 21:47:22 EAT
           Universal time: Wed 2024-01-31 18:47:22 UTC
                 RTC time: Wed 2024-01-31 18:47:22
                Time zone: Africa/Nairobi (EAT, +0300)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

Restart the chrony service.

sudo systemctl restart chrony

Check the time sources used by chrony.

$ sudo chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- ntp1.icolo.io                 2   6    37    57   +770us[ +770us] +/-   13ms
^* ntp0.icolo.io                 2   6    37    57    -15us[ -895us] +/-   13ms
^- time.cloudflare.com           3   6    37    58  +3221us[+3221us] +/-   71ms
^- time.cloudflare.com           3   6    37    59  +3028us[+2156us] +/-   71ms

2. Install MariaDB, RabbitMQ and Memcached

From here on we can switch to the root user account.

$ sudo -i
# or
$ sudo su -

Install the MariaDB database server.

apt install mariadb-server -y

Adjust the maximum number of database connections to avoid connection timeouts.

# vim /etc/mysql/mariadb.conf.d/50-server.cnf
max_connections        = 700

Restart the MariaDB service after the change.

systemctl restart mariadb
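
You can optionally confirm that the new connection limit took effect (a quick check; this assumes root access to the local MariaDB socket):

mysql -e "SHOW VARIABLES LIKE 'max_connections';"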

Also install the Python MySQL extension package.

apt install python3-pymysql

Once done, install RabbitMQ, Memcached and the Nginx web server.

apt install memcached rabbitmq-server  nginx libnginx-mod-stream

Add a RabbitMQ user for OpenStack, set its password and grant permissions.

rabbitmqctl add_user openstack StrongPassw0rd01
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
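
You can optionally verify the user and its permissions with the standard rabbitmqctl listing commands:

rabbitmqctl list_users
rabbitmqctl list_permissions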

Disable the default nginx web page.

unlink /etc/nginx/sites-enabled/default

Restart the services.

systemctl restart mariadb rabbitmq-server memcached nginx

3. Install and Configure Keystone

The OpenStack Identity service (Keystone) is the single point of integration for authentication, authorization and the service catalog.

Create the database and grant the user the appropriate privileges.

# mysql
create database keystone; 
grant all privileges on keystone.* to keystone@'localhost' identified by 'StrongPassw0rd01'; 
flush privileges; 
exit;

Install Keystone and its dependencies, including the OpenStack client.

apt install keystone python3-openstackclient apache2 python3-oauth2client libapache2-mod-wsgi-py3  -y

Answer the package configuration prompts as appropriate; in this guide the database is configured manually, so automatic database configuration is not needed.

Edit the keystone configuration file to set the memcache address, the database connection settings and the token provider.

# vim /etc/keystone/keystone.conf
# Specify Memcache Server on line 363
memcache_servers = localhost:11211

# Add MariaDB connection information around line 543:
[database]
connection = mysql+pymysql://keystone:StrongPassw0rd01@localhost/keystone

# Set  token provider in line 2169
provider = fernet

Populate the Identity service database by running the following command.

su -s /bin/bash keystone -c "keystone-manage db_sync"

You can safely ignore any "Ignoring exception: ..." errors.

Next we initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
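
You can optionally confirm that both key repositories were created; the paths below are the package defaults used by the commands above:

ls /etc/keystone/fernet-keys /etc/keystone/credential-keys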

Bootstrap the Identity service. With recent OpenStack releases, the keystone identity service can run on the same port for all interfaces.

export controller=$(hostname -f)

keystone-manage bootstrap --bootstrap-password StrongPassw0rd01 \
--bootstrap-admin-url https://$controller:5000/v3/ \
--bootstrap-internal-url https://$controller:5000/v3/ \
--bootstrap-public-url https://$controller:5000/v3/ \
--bootstrap-region-id RegionOne

Set the server FQDN in the Apache configuration file, as configured earlier.

# vim /etc/apache2/apache2.conf
ServerName osp01.home.cloudlabske.io

Create an Apache VirtualHost configuration for keystone. This lets us access the API using the FQDN instead of IP addressing.

vim /etc/apache2/sites-available/keystone.conf

Modify and paste the contents below, remembering to replace the SSL paths with your own.

1. Using Let's Encrypt

See the following guides on using Let's Encrypt.

  • How to generate Let's Encrypt SSL certificates on Linux
  • Generate Let's Encrypt SSL certificates with Cloudflare on a private network
  • How to generate Let's Encrypt wildcard SSL certificates

The SSL certificates used in this example are the following.

  • /etc/letsencrypt/live/osp01.home.cloudlabske.io/cert.pem
  • /etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem
  • /etc/letsencrypt/live/osp01.home.cloudlabske.io/chain.pem

2. Using OpenSSL

For an OpenSSL self-signed certificate, generate it as follows.

# vim /etc/ssl/openssl.cnf
[ home.cloudlabske.io ]
subjectAltName = DNS:osp01.home.cloudlabske.io

# Generate certificates
cd /etc/ssl/private
openssl genrsa -aes128 2048 > openstack_server.key
openssl rsa -in openstack_server.key -out openstack_server.key
openssl req -utf8 -new -key openstack_server.key -out openstack_server.csr
openssl x509 -in openstack_server.csr -out openstack_server.crt -req -signkey openstack_server.key -extfile /etc/ssl/openssl.cnf -extensions home.cloudlabske.io  -days 3650
chmod 600 openstack_server.key

The paths to the key and certificate will be:

  • /etc/ssl/private/openstack_server.crt
  • /etc/ssl/private/openstack_server.key

Modify the following to suit your environment.

Listen 5000

<VirtualHost *:5000>
    SSLEngine on
    SSLHonorCipherOrder on
    SSLCertificateFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/chain.pem
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688

    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>

    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>

Enable the required Apache modules and the keystone web configuration.

a2enmod ssl
a2ensite keystone 
systemctl disable --now keystone
systemctl restart apache2
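
Before creating the client environment file, you can optionally check that Keystone answers over HTTPS. It should return a JSON version document; add -k to curl if you used a self-signed certificate:

curl -s https://$(hostname -f):5000/v3/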

Generate a keystone access file for the OpenStack client.

export controller=$(hostname -f)

tee ~/keystonerc<<EOF
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=StrongPassw0rd01
export OS_AUTH_URL=https://$controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

Set the permissions and source the file to use it.

chmod 600 ~/keystonerc
source ~/keystonerc
echo "source ~/keystonerc " >> ~/.bashrc
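
To confirm that authentication works, you can optionally request a token with the admin credentials defined above:

openstack token issue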

Create Projects

Create the service project, which will contain a unique user for each service you add to the environment.

root@osp01 ~(keystone)$ openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 1067895d9b99452b8d1758eda755c7bc |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

root@osp01 ~(keystone)$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 1067895d9b99452b8d1758eda755c7bc | service |
| 9a102dfdf9a54e8382fefdca727b2553 | admin   |
+----------------------------------+---------+
root@osp01 ~(keystone)$

4. Install and Configure Glance (Image Service)

The OpenStack Image service (Glance) allows cluster users to discover, register and retrieve virtual machine images using a REST API. With the API you can query virtual machine image metadata and retrieve the actual image. Virtual machine images made available through the Image service can be stored in a variety of locations.

Create the database, user and password for Glance. This database will store virtual machine image metadata.

# mysql
create database glance; 
grant all privileges on glance.* to glance@'localhost' identified by 'StrongPassw0rd01';
flush privileges; 
exit;

Add the glance user to the Keystone service project.

# openstack user create --domain default --project service --password StrongPassw0rd01 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1067895d9b99452b8d1758eda755c7bc |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | a4af040dceff40d1a01beb14d268a7d9 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the glance user and service project:

openstack role add --project service --user glance admin

Create the glance service entity:

# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | db9cb71d9f2b41128784458b057d468d |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Save the controller FQDN as a variable for convenience.

export controller=$(hostname -f)

Create the Image service API endpoints in the default region, RegionOne. We will create the public, internal and admin endpoints.

# openstack endpoint create --region RegionOne image public https://$controller:9292
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 5f5a8246813e436ab31ebeb37b1bb843       |
| interface    | public                                 |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | db9cb71d9f2b41128784458b057d468d       |
| service_name | glance                                 |
| service_type | image                                  |
| url          | https://osp01.home.cloudlabske.io:9292 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne image internal https://$controller:9292
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 953c077f90944774a205f5244aa28ce8       |
| interface    | internal                               |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | db9cb71d9f2b41128784458b057d468d       |
| service_name | glance                                 |
| service_type | image                                  |
| url          | https://osp01.home.cloudlabske.io:9292 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne image admin https://$controller:9292
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 3788fbdc728f4e8fab7d370ba2559103       |
| interface    | admin                                  |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | db9cb71d9f2b41128784458b057d468d       |
| service_name | glance                                 |
| service_type | image                                  |
| url          | https://osp01.home.cloudlabske.io:9292 |
+--------------+----------------------------------------+

Install the OpenStack Glance package.

apt install glance -y

Answer the automatic configuration prompts in the same way; the Glance database is configured manually in this guide.

Configure the Glance API

This API accepts Image API calls for image discovery, retrieval and storage.

Back up the current Glance API configuration file.

 mv /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

Create a new Glance API configuration file.

vim /etc/glance/glance-api.conf

Paste in the values provided below, modifying them to suit your environment.

  • In the [DEFAULT] section, configure the RabbitMQ connection
  • In the [glance_store] section, configure local file system storage and the location of image files
  • In the [database] section, configure database access
  • In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access

[DEFAULT]
bind_host = 127.0.0.1
# RabbitMQ connection info
transport_url = rabbit://openstack:StrongPassw0rd01@localhost
enforce_secure_rbac = true

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[database]
# MariaDB connection info
connection = mysql+pymysql://glance:StrongPassw0rd01@localhost/glance

# keystone auth info
[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = 127.0.0.1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = StrongPassw0rd01
# if using self-signed certs on the Apache2 Keystone endpoint, set this to true
insecure = false

[paste_deploy]
flavor = keystone

[oslo_policy]
enforce_new_defaults = true

Set permissions on the new file.

chown root:glance /etc/glance/glance-api.conf
chmod 640 /etc/glance/glance-api.conf

Populate the Image service database:

su -s /bin/bash -c "glance-manage db_sync" glance

Start and enable the Glance service.

systemctl restart glance-api && systemctl enable glance-api

Configure Nginx

vim /etc/nginx/nginx.conf

Modify it by adding the Glance connection details so that requests are proxied. Remember to set the correct values for the listen address, SSL certificate and key.

# Add the following to the end of file
stream {
    upstream glance-api {
        server 127.0.0.1:9292;
    }
    server {
        listen 192.168.1.2:9292 ssl;
        proxy_pass glance-api;
    }
    ssl_certificate "/etc/letsencrypt/live/osp01.home.cloudlabske.io/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem";
}

Restart the nginx web service when done.

systemctl restart nginx
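
At this point you can optionally confirm that the Glance API is reachable through the nginx proxy; with no images uploaded yet it should return an empty list:

openstack image list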

5. Install and Configure Nova

OpenStack Compute is a major part of an Infrastructure as a Service (IaaS) system. It provides hosting and management of cloud computing systems.

Components of the OpenStack Compute service:

  • nova-api service: accepts and responds to end-user compute API calls.
  • nova-api-metadata service: accepts metadata requests from instances.
  • nova-compute service: a worker daemon that creates and terminates virtual machine instances through hypervisor APIs.
  • nova-scheduler service: takes virtual machine instance requests from the queue and determines on which compute server host they run.
  • nova-conductor module: mediates interactions between the nova-compute service and the database.
  • nova-novncproxy daemon: provides a proxy for accessing running instances through a VNC connection.
  • nova-spicehtml5proxy daemon: provides a proxy for accessing running instances through a SPICE connection.
  • The queue: a central hub for passing messages between daemons.
  • SQL database: stores most build-time and run-time state for a cloud infrastructure, including available instance types, instances in use, available networks and projects.

1) Prepare the setup prerequisites

In this guide, our virtualization choice is KVM with libvirt. Install KVM and the other required utilities.

apt install qemu-kvm libvirt-daemon libvirt-daemon-system bridge-utils libosinfo-bin virtinst

Confirm that CPU virtualization extensions are enabled in the BIOS.

# lsmod | grep kvm
kvm_intel             380928  0
kvm                  1142784  1 kvm_intel
irqbypass              16384  1 kvm
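
If the kvm modules are not listed, you can optionally check whether the CPU exposes virtualization extensions at all (vmx for Intel, svm for AMD; a non-zero count is expected):

egrep -c '(vmx|svm)' /proc/cpuinfo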

Add the users and databases in MariaDB for Nova, Nova API, Placement and the Nova cell.

# mysql
create database nova;
grant all privileges on nova.* to nova@'localhost' identified by 'StrongPassw0rd01'; 

create database nova_api; 
grant all privileges on nova_api.* to nova@'localhost' identified by 'StrongPassw0rd01'; 

create database placement; 
grant all privileges on placement.* to placement@'localhost' identified by 'StrongPassw0rd01'; 

create database nova_cell0; 
grant all privileges on nova_cell0.* to nova@'localhost' identified by 'StrongPassw0rd01'; 

flush privileges;
exit

Create the nova user:

# openstack user create --domain default --project service --password StrongPassw0rd01 nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1067895d9b99452b8d1758eda755c7bc |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 424afd4671ad49268bdbd14fe32b6fe2 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the nova user:

openstack role add --project service --user nova admin

Add the placement user in the service project.

openstack user create --domain default --project service --password StrongPassw0rd01 placement

Add the admin role to the placement user:

openstack role add --project service --user placement admin

Create the nova service entity.

# openstack service create --name nova --description "OpenStack Compute service" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute service        |
| enabled     | True                             |
| id          | ba737aa8b0a240fab38bdf49b31a60f0 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Create the placement service entity.

# openstack service create --name placement --description "OpenStack Compute Placement service" placement
+-------------+-------------------------------------+
| Field       | Value                               |
+-------------+-------------------------------------+
| description | OpenStack Compute Placement service |
| enabled     | True                                |
| id          | ae365b6e32ec4db985ec9c6e7f685ae1    |
| name        | placement                           |
| type        | placement                           |
+-------------+-------------------------------------+

Define the Nova API host.

export controller=$(hostname -f)

Create the public endpoint for nova.

# openstack endpoint create --region RegionOne compute public https://$controller:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------------------+
| Field        | Value                                                     |
+--------------+-----------------------------------------------------------+
| enabled      | True                                                      |
| id           | 50890db0f27443ddb547d24786340330                          |
| interface    | public                                                    |
| region       | RegionOne                                                 |
| region_id    | RegionOne                                                 |
| service_id   | ba737aa8b0a240fab38bdf49b31a60f0                          |
| service_name | nova                                                      |
| service_type | compute                                                   |
| url          | https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------------------+

Create the internal endpoint for nova.

# openstack endpoint create --region RegionOne compute internal https://$controller:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------------------+
| Field        | Value                                                     |
+--------------+-----------------------------------------------------------+
| enabled      | True                                                      |
| id           | 96b3abd5ca314429b0602a2bc153af77                          |
| interface    | internal                                                  |
| region       | RegionOne                                                 |
| region_id    | RegionOne                                                 |
| service_id   | ba737aa8b0a240fab38bdf49b31a60f0                          |
| service_name | nova                                                      |
| service_type | compute                                                   |
| url          | https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------------------+

Create the admin endpoint for nova.

# openstack endpoint create --region RegionOne compute admin https://$controller:8774/v2.1/%\(tenant_id\)s
+--------------+-----------------------------------------------------------+
| Field        | Value                                                     |
+--------------+-----------------------------------------------------------+
| enabled      | True                                                      |
| id           | 8fcd6f0a2d4c4816b09ca214e311597a                          |
| interface    | admin                                                     |
| region       | RegionOne                                                 |
| region_id    | RegionOne                                                 |
| service_id   | ba737aa8b0a240fab38bdf49b31a60f0                          |
| service_name | nova                                                      |
| service_type | compute                                                   |
| url          | https://osp01.home.cloudlabske.io:8774/v2.1/%(tenant_id)s |
+--------------+-----------------------------------------------------------+

Create the public, internal and admin endpoints for placement.

# openstack endpoint create --region RegionOne placement public https://$controller:8778
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 2fc42dd9223d41aea94779daa6a80e19       |
| interface    | public                                 |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | ae365b6e32ec4db985ec9c6e7f685ae1       |
| service_name | placement                              |
| service_type | placement                              |
| url          | https://osp01.home.cloudlabske.io:8778 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne placement internal https://$controller:8778
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | fd284797981540c2b219139edbdbdf69       |
| interface    | internal                               |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | ae365b6e32ec4db985ec9c6e7f685ae1       |
| service_name | placement                              |
| service_type | placement                              |
| url          | https://osp01.home.cloudlabske.io:8778 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne placement admin https://$controller:8778
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 4c40f9d36e384c6685b9f56e7d951329       |
| interface    | admin                                  |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | ae365b6e32ec4db985ec9c6e7f685ae1       |
| service_name | placement                              |
| service_type | placement                              |
| url          | https://osp01.home.cloudlabske.io:8778 |
+--------------+----------------------------------------+

2) Install and configure the Nova services

Install the Nova packages.

apt install nova-api nova-scheduler nova-conductor nova-novncproxy python3-novaclient placement-api

Back up the current nova configuration file.

mv /etc/nova/nova.conf /etc/nova/nova.conf.orig

Create a new configuration.

vim /etc/nova/nova.conf

Paste in the following, modifying the settings to match your environment. It configures the RabbitMQ connection, VNC, the Glance API, database access and Keystone authentication.

[DEFAULT]
allow_resize_to_same_host = True
osapi_compute_listen = 127.0.0.1
osapi_compute_listen_port = 8774
metadata_listen = 127.0.0.1
metadata_listen_port = 8775
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
log_dir = /var/log/nova
# RabbitMQ connection details
transport_url = rabbit://openstack:StrongPassw0rd01@localhost

[api]
auth_strategy = keystone

[vnc]
enabled = True
novncproxy_host = 127.0.0.1
novncproxy_port = 6080
novncproxy_base_url = https://osp01.home.cloudlabske.io:6080/vnc_auto.html

# Glance connection info
[glance]
api_servers = https://osp01.home.cloudlabske.io:9292

[oslo_concurrency]
lock_path = $state_path/tmp

# MariaDB connection info
[api_database]
connection = mysql+pymysql://nova:StrongPassw0rd01@localhost/nova_api

[database]
connection = mysql+pymysql://nova:StrongPassw0rd01@localhost/nova

# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = StrongPassw0rd01
# if using self-signed certs on the Apache2 Keystone endpoint, set this to true
insecure = false

[placement]
auth_url = https://osp01.home.cloudlabske.io:5000
os_region_name = RegionOne
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = StrongPassw0rd01
# if using self-signed certs on the Apache2 Keystone endpoint, set this to true
insecure = false

[wsgi]
api_paste_config = /etc/nova/api-paste.ini

[oslo_policy]
enforce_new_defaults = true

Set the ownership and permissions.

chgrp nova /etc/nova/nova.conf
chmod 640 /etc/nova/nova.conf

Set the console proxy type for Nova.

sudo sed -i 's/^NOVA_CONSOLE_PROXY_TYPE=.*/NOVA_CONSOLE_PROXY_TYPE=novnc/g' /etc/default/nova-consoleproxy

Back up the current placement configuration.

mv /etc/placement/placement.conf /etc/placement/placement.conf.orig

Create a new configuration file for Nova placement.

vim /etc/placement/placement.conf

Adjust the configuration and paste it into the file.

[DEFAULT]
debug = false

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = StrongPassw0rd01
# if using self-signed certs on the Apache2 Keystone endpoint, set this to true
insecure = false

[placement_database]
connection = mysql+pymysql://placement:StrongPassw0rd01@localhost/placement

Create the Placement API Apache configuration.

vim /etc/apache2/sites-available/placement-api.conf

Below are the contents to put in the file. You do not need to make any changes here.

Listen 127.0.0.1:8778

<VirtualHost *:8778>
    WSGIScriptAlias / /usr/bin/placement-api
    WSGIDaemonProcess placement-api processes=5 threads=1 user=placement group=placement display-name=%{GROUP}
    WSGIProcessGroup placement-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688

    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>

    ErrorLog /var/log/apache2/placement_api_error.log
    CustomLog /var/log/apache2/placement_api_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Alias /placement /usr/bin/placement-api
<Location /placement>
  SetHandler wsgi-script
  Options +ExecCGI

  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

Set the correct file permissions.

chgrp placement /etc/placement/placement.conf
chmod 640 /etc/placement/placement.conf

Update the UWSGI bind address to localhost.

sed -i -e "s/UWSGI_BIND_IP=.*/UWSGI_BIND_IP=\"127.0.0.1\"/"  /etc/init.d/nova-api
sed -i -e "s/UWSGI_BIND_IP=.*/UWSGI_BIND_IP=\"127.0.0.1\"/" /etc/init.d/nova-api-metadata

Enable the placement-api apache site.

a2ensite placement-api

Restart the services when done.

systemctl disable --now placement-api && systemctl restart apache2

Open the Nginx configuration file.

vim /etc/nginx/nginx.conf

Update it by adding the nova, metadata, placement and novncproxy upstream and server blocks shown below. Replace 192.168.1.2 with your server IP address.

stream {
    upstream glance-api {
        server 127.0.0.1:9292;
    }
    server {
        listen 192.168.1.2:9292 ssl;
        proxy_pass glance-api;
    }
    upstream nova-api {
        server 127.0.0.1:8774;
    }
    server {
        listen 192.168.1.2:8774 ssl;
        proxy_pass nova-api;
    }
    upstream nova-metadata-api {
        server 127.0.0.1:8775;
    }
    server {
        listen 192.168.1.2:8775 ssl;
        proxy_pass nova-metadata-api;
    }
    upstream placement-api {
        server 127.0.0.1:8778;
    }
    server {
        listen 192.168.1.2:8778 ssl;
        proxy_pass placement-api;
    }
    upstream novncproxy {
        server 127.0.0.1:6080;
    }
    server {
        listen 192.168.1.2:6080 ssl;
        proxy_pass novncproxy;
    }
    ssl_certificate "/etc/letsencrypt/live/osp01.home.cloudlabske.io/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem";
}

Import all the required data.

# Populate the placement database
su -s /bin/bash placement -c "placement-manage db sync"

# Populate the nova-api database
su -s /bin/bash nova -c "nova-manage api_db sync"

# Register the cell0 database
su -s /bin/bash nova -c "nova-manage cell_v2 map_cell0"


# Populate the nova database
su -s /bin/bash nova -c "nova-manage db sync"

# Create the cell1 cell
su -s /bin/sh nova -c "nova-manage cell_v2 create_cell --name cell1"

Stop the services related to Nova operations.

systemctl stop nova-api nova-api-metadata nova-conductor nova-scheduler nova-novncproxy

Restart the nginx web server.

systemctl restart nginx

Then start the other services.

 systemctl enable --now nova-api nova-api-metadata nova-conductor nova-scheduler nova-novncproxy

Verify that nova cell0 and cell1 are registered correctly:

# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+-----------------------------------+------------------------------------------------+----------+
|  Name |                 UUID                 |           Transport URL           |              Database Connection               | Disabled |
+-------+--------------------------------------+-----------------------------------+------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/              | mysql+pymysql://nova:****@localhost/nova_cell0 |  False   |
| cell1 | d3a70005-5861-427e-9bdf-984b15400d7e | rabbit://openstack:****@localhost |    mysql+pymysql://nova:****@localhost/nova    |  False   |
+-------+--------------------------------------+-----------------------------------+------------------------------------------------+----------+

List the registered compute services.

# openstack compute service list
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host  | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| 6f75eb27-9c66-41c0-b0fa-15f1a48cb25c | nova-conductor | osp01 | internal | enabled | up    | 2024-02-02T07:21:04.000000 |
| 802d523d-1f92-427b-9f90-691bf54268af | nova-scheduler | osp01 | internal | enabled | up    | 2024-02-02T07:21:05.000000 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+

3) Install Nova KVM compute

Install the Nova KVM compute packages.

apt install nova-compute nova-compute-kvm  -y 

Open the nova configuration file.

vim /etc/nova/nova.conf

Update the VNC settings as follows.

[vnc]
enabled = True
server_listen = 192.168.1.2
server_proxyclient_address = 192.168.1.2
novncproxy_host = 127.0.0.1
novncproxy_port = 6080
novncproxy_base_url = https://osp01.home.cloudlabske.io:6080/vnc_auto.html

Restart nova-compute when done.

systemctl restart nova-compute.service

Discover the compute hosts and map them to the cell.

su -s /bin/bash nova -c "nova-manage cell_v2 discover_hosts"

Check the new nova host services list.

# openstack compute service list
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host  | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+
| 6f75eb27-9c66-41c0-b0fa-15f1a48cb25c | nova-conductor | osp01 | internal | enabled | up    | 2024-02-02T07:32:44.000000 |
| 802d523d-1f92-427b-9f90-691bf54268af | nova-scheduler | osp01 | internal | enabled | up    | 2024-02-02T07:32:45.000000 |
| 83fd3604-7345-4258-a3a2-324900b04b8e | nova-compute   | osp01 | nova     | enabled | up    | 2024-02-02T07:32:43.000000 |
+--------------------------------------+----------------+-------+----------+---------+-------+----------------------------+

6. Configure the Networking Service (Neutron)

OpenStack Networking (neutron) provides integration for creating and attaching interface devices, managed by other OpenStack services, to networks.

Components:

  • neutron-server: accepts API requests and routes them to the appropriate OpenStack Networking plug-in for action.
  • OpenStack Networking plug-ins and agents: plug and unplug ports, create networks or subnets, and provide IP addressing.
  • Messaging queue: used by most OpenStack Networking installations to route information between neutron-server and the various agents.

1) Prepare the environment

Create a database and user for the Neutron networking service.

# mysql
create database neutron_ml2; 
grant all privileges on neutron_ml2.* to neutron@'localhost' identified by 'StrongPassw0rd01'; 
flush privileges; 
exit

Next, we add a user and service for Neutron in Keystone.

# openstack user create --domain default --project service --password StrongPassw0rd01 neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1067895d9b99452b8d1758eda755c7bc |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 71d4813059f5472f852a946bdaf272f4 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# openstack role add --project service --user neutron admin

# openstack service create --name neutron --description "OpenStack Networking service" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking service     |
| enabled     | True                             |
| id          | 7da12e4154ad4f97b8f449f01d6a56ec |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the required endpoints.

# Save your server
export controller=$(hostname -f)

# openstack endpoint create --region RegionOne network public https://$controller:9696
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 3bc3eb0a234a46b68fa2190095f4cd53       |
| interface    | public                                 |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | 7da12e4154ad4f97b8f449f01d6a56ec       |
| service_name | neutron                                |
| service_type | network                                |
| url          | https://osp01.home.cloudlabske.io:9696 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne network internal https://$controller:9696
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | 2bc933e3f8fc4238874adc2cf0b764f9       |
| interface    | internal                               |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | 7da12e4154ad4f97b8f449f01d6a56ec       |
| service_name | neutron                                |
| service_type | network                                |
| url          | https://osp01.home.cloudlabske.io:9696 |
+--------------+----------------------------------------+

# openstack endpoint create --region RegionOne network admin https://$controller:9696
+--------------+----------------------------------------+
| Field        | Value                                  |
+--------------+----------------------------------------+
| enabled      | True                                   |
| id           | fa110991eab34d4e9e1c639865ce2b14       |
| interface    | admin                                  |
| region       | RegionOne                              |
| region_id    | RegionOne                              |
| service_id   | 7da12e4154ad4f97b8f449f01d6a56ec       |
| service_name | neutron                                |
| service_type | network                                |
| url          | https://osp01.home.cloudlabske.io:9696 |
+--------------+----------------------------------------+

2) Install and configure Neutron

Install the Neutron packages for OpenStack.

apt install neutron-server neutron-metadata-agent neutron-openvswitch-agent neutron-plugin-ml2 neutron-l3-agent openvswitch-switch python3-neutronclient neutron-dhcp-agent

Back up the current configuration file.

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.orig

Create a new configuration file.

vim /etc/neutron/neutron.conf

Modify the contents provided here to suit your environment as you paste them in.

[DEFAULT]
bind_host = 127.0.0.1
bind_port = 9696
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
state_path = /var/lib/neutron
dhcp_agent_notification = True
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

# RabbitMQ connection info
transport_url = rabbit://openstack:StrongPassw0rd01@localhost

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = https://osp01.home.cloudlabske.io:5000
auth_url = https://osp01.home.cloudlabske.io:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = StrongPassw0rd01
# if using self-signed certs on the Apache2 Keystone endpoint, set this to true
insecure = false

# MariaDB connection info
[database]
connection = mysql+pymysql://neutron:StrongPassw0rd01@localhost/neutron_ml2

# Nova auth info
[nova]
auth_url = https://osp01.home.cloudlabske.io:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = StrongPassw0rd01
# if using self-signed certs on the Apache2 Keystone endpoint, set this to true
insecure = false

[oslo_concurrency]
lock_path = $state_path/tmp

[oslo_policy]
enforce_new_defaults = true

Edit the metadata agent configuration and set the nova metadata host, the metadata proxy shared secret and the memcache server address.

# vim /etc/neutron/metadata_agent.ini
nova_metadata_host = osp01.home.cloudlabske.io
nova_metadata_protocol = https
# specify any secret key you like
metadata_proxy_shared_secret = metadata_secret
# specify Memcache server
memcache_servers = localhost:11211

Back up the ml2 configuration and create a new one.

mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.orig
vim /etc/neutron/plugins/ml2/ml2_conf.ini

Update the new file with the settings below.

[DEFAULT]

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_flat]

[ml2_type_vxlan]

[securitygroup]
enable_security_group = True
enable_ipset = True

Configure the Layer 3 agent by setting the interface driver to openvswitch.

# vim /etc/neutron/l3_agent.ini
interface_driver = openvswitch

Also set the DHCP interface driver to openvswitch and enable the dnsmasq DHCP driver.

# vim /etc/neutron/dhcp_agent.ini
# Confirm in line 18
interface_driver = openvswitch

# uncomment line 37
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

Create a new Open vSwitch agent configuration file.

mv /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.orig
 vim /etc/neutron/plugins/ml2/openvswitch_agent.ini

Configure it as follows:

[DEFAULT]

[agent]

[ovs]

[securitygroup]
firewall_driver = openvswitch
enable_security_group = True
enable_ipset = True

Create the required file and set the correct permissions.

touch /etc/neutron/fwaas_driver.ini
chmod 640 /etc/neutron/{neutron.conf,fwaas_driver.ini}
chmod 640 /etc/neutron/plugins/ml2/{ml2_conf.ini,openvswitch_agent.ini}
chgrp neutron /etc/neutron/{neutron.conf,fwaas_driver.ini}
chgrp neutron /etc/neutron/plugins/ml2/{ml2_conf.ini,openvswitch_agent.ini}

Open the Nova configuration and add the Neutron network settings.

# vim /etc/nova/nova.conf
# add the following to the [DEFAULT] section
vif_plugging_is_fatal = True
vif_plugging_timeout = 300

# Add the following to the end : Neutron auth info
# the value of [metadata_proxy_shared_secret] must match the one in [metadata_agent.ini]
[neutron]
auth_url = https://osp01.home.cloudlabske.io:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = StrongPassw0rd01
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_secret
insecure = false

Update the UWSGI_BIND_IP bind address.

sed -i -e "s/UWSGI_BIND_IP=.*/UWSGI_BIND_IP=\"127.0.0.1\"/"  /etc/init.d/neutron-api

Update the nginx stream block.

 vim /etc/nginx/nginx.conf

Add the neutron upstream and server parameters used for proxying.

stream {
    upstream glance-api {
        server 127.0.0.1:9292;
    }
    server {
        listen 192.168.1.2:9292 ssl;
        proxy_pass glance-api;
    }
    upstream nova-api {
        server 127.0.0.1:8774;
    }
    server {
        listen 192.168.1.2:8774 ssl;
        proxy_pass nova-api;
    }
    upstream nova-metadata-api {
        server 127.0.0.1:8775;
    }
    server {
        listen 192.168.1.2:8775 ssl;
        proxy_pass nova-metadata-api;
    }
    upstream placement-api {
        server 127.0.0.1:8778;
    }
    server {
        listen 192.168.1.2:8778 ssl;
        proxy_pass placement-api;
    }
    upstream novncproxy {
        server 127.0.0.1:6080;
    }
    server {
        listen 192.168.1.2:6080 ssl;
        proxy_pass novncproxy;
    }
    upstream neutron-api {
        server 127.0.0.1:9696;
    }
    server {
        listen 192.168.1.2:9696 ssl;
        proxy_pass neutron-api;
    }
    ssl_certificate "/etc/letsencrypt/live/osp01.home.cloudlabske.io/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem";
}

Create a symbolic link for ml2_conf.ini.

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Then populate the neutron database.

su -s /bin/bash neutron -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head"

Expected execution output:

....
INFO  [alembic.runtime.migration] Running upgrade 1e0744e4ffea -> 6135a7bd4425
INFO  [alembic.runtime.migration] Running upgrade 6135a7bd4425 -> 8df53b0d2c0e
INFO  [alembic.runtime.migration] Running upgrade 8df53b0d2c0e -> 1bb3393de75d, add qos policy rule Packet Rate Limit
INFO  [alembic.runtime.migration] Running upgrade 1bb3393de75d -> c181bb1d89e4
INFO  [alembic.runtime.migration] Running upgrade c181bb1d89e4 -> ba859d649675
INFO  [alembic.runtime.migration] Running upgrade ba859d649675 -> e981acd076d3
INFO  [alembic.runtime.migration] Running upgrade e981acd076d3 -> 76df7844a8c6, add Local IP tables
INFO  [alembic.runtime.migration] Running upgrade 76df7844a8c6 -> 1ffef8d6f371, migrate RBAC registers from "target_tenant" to "target_project"
INFO  [alembic.runtime.migration] Running upgrade 1ffef8d6f371 -> 8160f7a9cebb, drop portbindingports table
INFO  [alembic.runtime.migration] Running upgrade 8160f7a9cebb -> cd9ef14ccf87
INFO  [alembic.runtime.migration] Running upgrade cd9ef14ccf87 -> 34cf8b009713
INFO  [alembic.runtime.migration] Running upgrade 34cf8b009713 -> I43e0b669096
INFO  [alembic.runtime.migration] Running upgrade I43e0b669096 -> 4e6e655746f6
INFO  [alembic.runtime.migration] Running upgrade 4e6e655746f6 -> 659cbedf30a1
INFO  [alembic.runtime.migration] Running upgrade 659cbedf30a1 -> 21ff98fabab1
INFO  [alembic.runtime.migration] Running upgrade 21ff98fabab1 -> 5881373af7f5
INFO  [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab
INFO  [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0
INFO  [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62
INFO  [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353
INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
  OK

Activate the OVS interface.

ip link set up ovs-system

Stop the services related to Neutron.

systemctl stop neutron-api neutron-rpc-server neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent nova-api nova-compute nginx

Start the Neutron services.

systemctl start neutron-api neutron-rpc-server neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent nova-api nova-compute nginx

Enable the services to start at system boot.

systemctl enable neutron-api neutron-rpc-server neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent

Confirm the list of network agents.

# openstack network agent list
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
| 2c802774-b93a-45fb-b23b-aa9994237e23 | Metadata agent     | osp01 | None              | :-)   | UP    | neutron-metadata-agent    |
| 52e59a27-59c3-45a3-bca0-1c55dae3281e | L3 agent           | osp01 | nova              | :-)   | UP    | neutron-l3-agent          |
| 96e812a7-fb0f-4099-a989-1b203843d8c8 | Open vSwitch agent | osp01 | None              | :-)   | UP    | neutron-openvswitch-agent |
| e02cf121-ed3e-4a5e-9cf8-87dc28aa28be | DHCP agent         | osp01 | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

3) Configure a Neutron flat network

Confirm that your network interfaces are active on the server.

$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 1c:69:7a:ab:be:de brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: ovs-system: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 7e:b9:25:db:58:bd brd ff:ff:ff:ff:ff:ff
4: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9e:91:4a:20:26:4f brd ff:ff:ff:ff:ff:ff

Open the network settings file and adjust it to look similar to the following.

# vim /etc/network/interfaces
auto eno1
iface eno1 inet manual
  
auto br-eno1
iface br-eno1 inet static
  address 192.168.1.2
  netmask 255.255.255.0
  gateway 192.168.1.1
  dns-nameservers 192.168.1.1

auto ovs-system
iface ovs-system inet manual

Replace:

  • eno1 with your physical network interface name
  • br-eno1 with the bridge to be added
  • 192.168.1.2 with your machine's IP address and 255.255.255.0 with your netmask
  • 192.168.1.1 with your gateway and DNS server address

Set the interface name and bridge name as variables.

INT_NAME=eno1
BR_NAME=br-eno1

Add the OVS bridge to the system.

ovs-vsctl add-br $BR_NAME
ip link set $INT_NAME up
ip link set $BR_NAME up

Add the port to the bridge.

ovs-vsctl add-port $BR_NAME $INT_NAME
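
You can optionally confirm the bridge and its port before continuing:

ovs-vsctl show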

Modify openvswitch_agent.ini and add the physical bridge mapping.

# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
# add a line under the [ovs] section
[ovs]
bridge_mappings = physnet1:br-eno1

Since we are using a flat network, specify the flat network that maps to the bridge above.

# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = physnet1

Restart the neutron services.

systemctl restart neutron-api neutron-rpc-server neutron-openvswitch-agent

4) Create a virtual network

Define the project ID.

projectID=$(openstack project list | grep service | awk '{print $2}')

Create a shared network.

# openstack network create --project $projectID \
--share --provider-network-type flat --provider-physical-network physnet1 private
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2024-02-02T08:45:27Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 36577b59-f6e1-4844-a0d8-a277c9ddc780 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | private                              |
| port_security_enabled     | True                                 |
| project_id                | 1067895d9b99452b8d1758eda755c7bc     |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2024-02-02T08:45:27Z                 |
+---------------------------+--------------------------------------+

Create a subnet on the network we just created. Here we are using:

  • Network: 192.168.1.0/24
  • DHCP start: 192.168.1.101
  • DHCP end: 192.168.1.149
  • Gateway and DNS server: 192.168.1.1

 openstack subnet create subnet1 --network private \
--project $projectID --subnet-range 192.168.1.0/24 \
--allocation-pool start=192.168.1.101,end=192.168.1.149 \
--gateway 192.168.1.1 --dns-nameserver 192.168.1.1

List the networks and subnets created on OpenStack.

# openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID                                   | Name    | Subnets                              |
+--------------------------------------+---------+--------------------------------------+
| 36577b59-f6e1-4844-a0d8-a277c9ddc780 | private | 6f216cd7-acd3-4c31-bc5e-67875c5dcc09 |
+--------------------------------------+---------+--------------------------------------+

# openstack subnet list
+--------------------------------------+---------+--------------------------------------+----------------+
| ID                                   | Name    | Network                              | Subnet         |
+--------------------------------------+---------+--------------------------------------+----------------+
| 6f216cd7-acd3-4c31-bc5e-67875c5dcc09 | subnet1 | 36577b59-f6e1-4844-a0d8-a277c9ddc780 | 192.168.1.0/24 |
+--------------------------------------+---------+--------------------------------------+----------------+

7. Add Compute Flavors and SSH Keys

In OpenStack, a flavor defines the compute, memory and storage capacity of a nova compute instance. Think of it as the hardware configuration of a server.

Examples:

  • m1.tiny: 1 CPU, 2048M RAM, 20G root disk
  • m1.small: 2 CPUs, 4096M RAM, 30G root disk
  • m1.medium: 2 CPUs, 8192M RAM, 40G root disk

See the examples below on creating flavors.

openstack flavor create --id 1 --vcpus 1 --ram 2048 --disk 20 m1.tiny
openstack flavor create --id 2 --vcpus 1 --ram 4096 --disk 30 m1.small
openstack flavor create --id 3 --vcpus 1 --ram 8192 --disk 40 m1.medium

List the flavors available in your OpenStack cloud.

root@osp01 ~(keystone)$ openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 1  | m1.tiny   | 2048 |   20 |         0 |     1 | True      |
| 2  | m1.small  | 4096 |   30 |         0 |     1 | True      |
| 3  | m1.medium | 8192 |   40 |         0 |     1 | True      |
+----+-----------+------+------+-----------+-------+-----------+

Add SSH Keys

You can generate an SSH key pair if one does not already exist.

ssh-keygen -q -N ""

Add the key you created, giving it a name.

# openstack keypair create --public-key ~/.ssh/id_rsa.pub default-pubkey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | None                                            |
| fingerprint | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f |
| id          | default-pubkey                                  |
| is_deleted  | None                                            |
| name        | default-pubkey                                  |
| type        | ssh                                             |
| user_id     | 61800deb7d664bbcb4f3eef188cc8dbc                |
+-------------+-------------------------------------------------+

# openstack keypair list
+----------------+-------------------------------------------------+------+
| Name           | Fingerprint                                     | Type |
+----------------+-------------------------------------------------+------+
| default-pubkey | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
| jmutai-pubkey  | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
+----------------+-------------------------------------------------+------+

8. Create Security Groups

A security group is a named collection of network access rules that limit the types of traffic that can reach instances. When you launch an instance, you can assign one or more security groups to it. If you do not create security groups, new instances are automatically assigned to the default security group unless you explicitly specify a different one.

The default security group is named default.

# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | default | Default security group | 9a102dfdf9a54e8382fefdca727b2553 | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+

# openstack security group rule list  default
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group                | Remote Address Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+
| 2a4aa470-935a-474a-a8bd-06623218a287 | None        | IPv4      | 0.0.0.0/0 |            | egress    | None                                 | None                 |
| 6cf36173-e187-4ed2-82f4-f5ead4ad3134 | None        | IPv6      | ::/0      |            | egress    | None                                 | None                 |
| 7d4af0e4-fb46-40b5-b447-8e7d22cbdb4d | None        | IPv4      | 0.0.0.0/0 |            | ingress   | c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | None                 |
| a98b779a-f63a-44ff-834e-c3a557f2864d | None        | IPv6      | ::/0      |            | ingress   | c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | None                 |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+----------------------+

Let's create a security group that allows all inbound and outbound traffic.

openstack security group create allow_all --description "Allow all ports"
openstack security group rule create --protocol TCP --dst-port 1:65535 --remote-ip 0.0.0.0/0 allow_all
openstack security group rule create --protocol ICMP --remote-ip 0.0.0.0/0 allow_all

List the security groups to confirm it was created.

# openstack security group list
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| ID                                   | Name      | Description            | Project                          | Tags |
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| 287e76b4-337a-4c08-9e3d-84efd9274edb | allow_all | Allow all ports        | 9a102dfdf9a54e8382fefdca727b2553 | []   |
| c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | default   | Default security group | 9a102dfdf9a54e8382fefdca727b2553 | []   |
+--------------------------------------+-----------+------------------------+----------------------------------+------+

Below is a more restrictive security group that only allows access to well-known ports such as 22, 80, 443, and ICMP.

openstack security group create base --description "Allow common ports"
openstack security group rule create --protocol TCP --dst-port 22 --remote-ip 0.0.0.0/0 base
openstack security group rule create --protocol TCP --dst-port 80 --remote-ip 0.0.0.0/0 base
openstack security group rule create --protocol TCP --dst-port 443 --remote-ip 0.0.0.0/0 base
openstack security group rule create --protocol ICMP --remote-ip 0.0.0.0/0 base
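
A security group can also be attached to or removed from an instance after it has been launched. The sketch below uses a placeholder name; replace <instance-name> with your own instance:

# Attach the restrictive group and drop the permissive one from a running instance
openstack server add security group <instance-name> base
openstack server remove security group <instance-name> allow_all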

9. Add OS Images and Create a Test VM

We have a dedicated article that covers uploading operating system cloud images to the OpenStack Glance image service; a minimal upload command is also sketched after the link below.

  • How to Upload VM Cloud Images to OpenStack Glance
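
For reference, a minimal Glance upload might look like the following. The local image file name is an assumption; see the linked article for the full procedure:

openstack image create --disk-format qcow2 --container-format bare \
  --file debian-12-genericcloud-amd64.qcow2 --public Debian-12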

After uploading, confirm by listing the available images.

# openstack image list
+--------------------------------------+-----------------+--------+
| ID                                   | Name            | Status |
+--------------------------------------+-----------------+--------+
| 37c638d5-caa0-4570-a126-2c9d64b262b4 | AlmaLinux-9     | active |
| 3ae8095e-a774-468b-8376-c3d1b8a70bdf | CentOS-Stream-9 | active |
| 83bf7ac6-9248-415b-ac89-269f2b70fdb4 | Debian-12       | active |
| 02799133-06ed-483d-9121-e3791c12bb1c | Fedora-39       | active |
+--------------------------------------+-----------------+--------+

Confirm the available flavors.

# openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 1  | m1.tiny   | 2048 |   20 |         0 |     1 | True      |
| 2  | m1.small  | 4096 |   30 |         0 |     1 | True      |
| 3  | m1.medium | 8192 |   40 |         0 |     1 | True      |
+----+-----------+------+------+-----------+-------+-----------+

List the networks.

# openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID                                   | Name    | Subnets                              |
+--------------------------------------+---------+--------------------------------------+
| 36577b59-f6e1-4844-a0d8-a277c9ddc780 | private | 6f216cd7-acd3-4c31-bc5e-67875c5dcc09 |
+--------------------------------------+---------+--------------------------------------+

Confirm the configured security groups.

# openstack security group  list
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| ID                                   | Name      | Description            | Project                          | Tags |
+--------------------------------------+-----------+------------------------+----------------------------------+------+
| 287e76b4-337a-4c08-9e3d-84efd9274edb | allow_all | Allow all ports        | 9a102dfdf9a54e8382fefdca727b2553 | []   |
| c1ab8c8f-bd2e-43ab-8f6f-54a045885411 | default   | Default security group | 9a102dfdf9a54e8382fefdca727b2553 | []   |
+--------------------------------------+-----------+------------------------+----------------------------------+------+

List the key pairs configured in OpenStack.

# openstack keypair list
+----------------+-------------------------------------------------+------+
| Name           | Fingerprint                                     | Type |
+----------------+-------------------------------------------------+------+
| default-pubkey | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
| jmutai-pubkey  | 19:7b:5c:14:a2:21:7a:a3:dd:56:c6:e4:3a:22:e8:3f | ssh  |
+----------------+-------------------------------------------------+------+

Create a VM instance on Nova compute.

openstack server create --flavor m1.small \
--image AlmaLinux-9  \
--security-group allow_all \
--network private \
--key-name  default-pubkey \
AlmaLinux-9
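
After a short while the instance should reach the ACTIVE state. You can confirm this and inspect the boot output with the standard CLI commands below:

# Check instance status (look for ACTIVE in the Status column)
openstack server list

# Show details such as the assigned IP address
openstack server show AlmaLinux-9

# Review the console output if the instance does not come up
openstack console log show AlmaLinux-9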

10. Configure Horizon – the OpenStack Dashboard

Horizon is a Django-based project that provides a complete OpenStack dashboard, along with an extensible framework for building new dashboards from reusable components. The only core service the dashboard requires is the Identity service.

Install the OpenStack dashboard package.

apt install openstack-dashboard -y

Open the local_settings.py file for editing.

vim /etc/openstack-dashboard/local_settings.py

Apply the following changes.

# In line 99 : change Memcache server
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    },
}

# In line 107 : add
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
# line 120 : set Openstack Host
# line 121 : comment out and add a line to specify URL of Keystone Host
OPENSTACK_HOST = "osp01.home.cloudlabske.io"
#OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_URL = "https://osp01.home.cloudlabske.io:5000/v3"
# line 125 : set your timezone
TIME_ZONE = "Africa/Nairobi"

# Add to the end of file
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# set to True below if you are using a self-signed certificate
OPENSTACK_SSL_NO_VERIFY = False

Also edit the Apache default SSL site file.

# vim /etc/apache2/sites-available/default-ssl.conf
# In line 31,32, configure path to your SSL certificate and key
SSLCertificateFile      /etc/letsencrypt/live/osp01.home.cloudlabske.io/cert.pem
SSLCertificateKeyFile   /etc/letsencrypt/live/osp01.home.cloudlabske.io/privkey.pem

# In line 41 : uncomment and specify your chain file
SSLCertificateChainFile /etc/letsencrypt/live/osp01.home.cloudlabske.io/chain.pem


Specify the Memcache server address.

# vim /etc/openstack-dashboard/local_settings.d/_0006_debian_cache.py
CACHES = {
  'default' : {
    #'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    'LOCATION': '127.0.0.1:11211',
  }
}

Create a new Apache configuration file for the OpenStack dashboard.

vim /etc/apache2/conf-available/openstack-dashboard.conf

Add the following content.

WSGIScriptAlias / /usr/share/openstack-dashboard/wsgi.py process-group=horizon
WSGIDaemonProcess horizon user=horizon group=horizon processes=3 threads=10 display-name=%{GROUP}
WSGIProcessGroup horizon
WSGIApplicationGroup %{GLOBAL}

Alias /static /var/lib/openstack-dashboard/static/
Alias /horizon/static /var/lib/openstack-dashboard/static/

<Directory /usr/share/openstack-dashboard>
  Require all granted
</Directory>

<Directory /var/lib/openstack-dashboard/static>
  Require all granted
</Directory>

Enable the dashboard configuration, the SSL module, and the default SSL site.

a2enconf openstack-dashboard
a2enmod ssl
a2ensite default-ssl

Move the bundled policy files out of the way so that the default policies are used.

mv /etc/openstack-dashboard/policy /etc/openstack-dashboard/policy.org

Fix ownership of the Horizon secret key, then restart the Apache web server.

chown -R horizon /var/lib/openstack-dashboard/secret-key
systemctl restart apache2
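
Before logging in, you can optionally verify that the Apache configuration is valid and that the dashboard responds over HTTPS (the -k flag is only needed if you are using a self-signed certificate):

apachectl configtest
curl -kI https://osp01.home.cloudlabske.io/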

You can now access the Horizon dashboard at https://(your server's hostname)/

Log in with a user defined in Keystone and the matching password.

Conclusion

Installing OpenStack on Debian 12 (Bookworm) gives you a powerful, reliable, and highly scalable cloud infrastructure solution. If you follow the steps outlined in this article one by one, you should be able to confidently set up an OpenStack cloud that takes advantage of Debian's stability. We hope this article has been helpful, and thank you for visiting our site.

