Deploy OpenStack

Controller Node Configuration

Site Configuration

Clone and initialize chi-in-a-box. This assumes that we're using the /opt directory that was set up in the last section.

git clone https://github.com/KTH-EXPECA/chi-in-a-box /opt/chi-in-a-box
cd /opt/chi-in-a-box

Create the default site config folder

./cc-ansible init --site /opt/site-config

Copy ExPECA configuration folder to site-config

git clone git@github.com:KTH-EXPECA/TestbedConfig.git /opt/TestbedConfig
cp -r /opt/TestbedConfig/chi-site-config/* /opt/site-config

Change the hosts in the inventory, adding storage-01 if you have a separate storage node

# /opt/site-config/inventory/hosts

# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
controller-01

[network]
controller-01

[compute]
# No compute node; this is a baremetal-only cluster.

[monitoring]
controller-01

[storage]
storage-01

[deployment]
localhost ansible_connection=local

Change the host_vars in the inventory for the controller

# /opt/site-config/inventory/host_vars/controller-01

---
# Initial assumption is that this is also the deployment node,
# therefore any provisioning can be done locally.
ansible_connection: local

network_interface: brinternal
kolla_external_vip_interface: brpublic

Create a host_vars file for the storage node

# /opt/site-config/inventory/host_vars/storage-01

---
# The storage node is a separate host, so Ansible provisions it
# over SSH rather than locally.
ansible_connection: ssh

network_interface: brinternal

In your site config directory, /opt/site-config, you now have a file called defaults.yml. You need to modify the networking parameters, such as the VIP addresses and the Neutron CIDRs, according to your public and internal network CIDR values.

# /opt/site-config/defaults.yml

kolla_internal_vip_address: 10.20.111.254
kolla_external_vip_address: 10.0.87.254

neutron_networks:
- name: public
  bridge_name: brpublic
  external_interface: veth-publicb
  cidr: 10.0.87.0/24
  gateway_ip: 10.0.87.1
  allocation_pools:
    - start: 10.0.87.30
      end: 10.0.87.250
- name: physnet1
  bridge_name: brinternal
  external_interface: veth-internalb
  on_demand_vlan_ranges:
    - 200:250
  reservable_vlan_ranges:
    - 251:300

Add the storage-01 host's name and internal IP to /etc/hosts

sudo vim /etc/hosts
...
10.20.111.1 storage-01
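
Since storage-01 is reached over SSH (ansible_connection: ssh above), it's worth confirming that the controller can resolve the name and log in non-interactively before bootstrapping. This assumes key-based SSH access to the storage node was set up in the preparations section:

# Both should succeed without prompting for a password
ping -c 1 storage-01
ssh storage-01 hostname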

Bootstrap Servers

Run ./cc-ansible --site /opt/site-config/ bootstrap-servers

This will install Docker, configure /etc/hosts, and generally prepare the system for the rest of the installation.

Afterwards, you should see that /etc/hosts contains an entry for your hostname, mapping it to the interface address. Note that this is NOT the HAProxy VIP!

$ cat /etc/hosts
...
# BEGIN ANSIBLE GENERATED HOSTS
10.20.111.100 controller-01
# END ANSIBLE GENERATED HOSTS
10.20.111.1 storage-01
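
You can also confirm that Docker itself was installed and is running:

# Verify the Docker daemon is up; the container list will still be empty at this stage
sudo docker ps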

Enable IP forwarding by adding the following line to /etc/sysctl.conf

net.ipv4.ip_forward=1

Then reload the settings by running sudo sysctl -p.
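
To confirm the setting took effect:

# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward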

Create Swift Rings

Before deploying Swift we need to generate rings: compressed binary files that, at a high level, let the various Swift services know where data lives in the cluster.

A helper script, create_swift_rings.sh, is provided in the TestbedConfig/tools folder and should be run from the controller node. Its commands expect the d0 disks, with the partition tables created on the storage node in the preparations section. It generates the rings and stores them in /opt/site-config/node_custom_config/swift/. A sketch of the commands it wraps follows the snippet below.

Modify the script create_swift_rings.sh:

  • set the NODE variable to the storage node's internal IP,

  • and set the storage device name, e.g. d0.

# /opt/TestbedConfig/tools/create_swift_rings.sh

NODE="10.20.111.1"
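
For reference, the script is essentially a thin wrapper around swift-ring-builder. The sketch below is a reconstruction under stated assumptions, not the script's exact contents: the part power of 10 (1024 partitions) and single replica are inferred from the rebalance output further down, the object/account/container port mapping (6000/6001/6002) follows Kolla's defaults, the DEVICE variable is a hypothetical name for the script's device parameter, and the real script may invoke swift-ring-builder via a container image rather than a host binary.

# Hypothetical reconstruction of create_swift_rings.sh's core commands
NODE="10.20.111.1"   # storage node's internal IP
DEVICE="d0"          # storage device prepared earlier

cd /opt/site-config/node_custom_config/swift

# One builder per ring type; arguments: <part_power> <replicas> <min_part_hours>
for ring in object:6000 account:6001 container:6002; do
  name="${ring%%:*}"; port="${ring##*:}"
  swift-ring-builder "${name}.builder" create 10 1 1
  swift-ring-builder "${name}.builder" add \
    --region 1 --zone 1 \
    --ip "$NODE" --port "$port" \
    --replication-ip "$NODE" --replication-port "$port" \
    --device "$DEVICE" --weight 1.0
  swift-ring-builder "${name}.builder" rebalance
done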

Create the swift folder in node_custom_config and run the script

mkdir -p /opt/site-config/node_custom_config/swift
cd /opt/TestbedConfig/tools/
chmod +x create_swift_rings.sh
./create_swift_rings.sh
Device d0r1z1-10.20.111.1:6000R10.20.111.1:6000/d0_"" with 1.0 weight got id 0
Device d0r1z1-10.20.111.1:6001R10.20.111.1:6001/d0_"" with 1.0 weight got id 0
Device d0r1z1-10.20.111.1:6002R10.20.111.1:6002/d0_"" with 1.0 weight got id 0
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00

Verify the generated files

expeca@controller-01:/opt/site-config/node_custom_config/swift$ ls
account.builder  account.ring.gz  backups  container.builder  container.ring.gz  object.builder  object.ring.gz

Run Pre-checks

Run ./cc-ansible --site /opt/site-config/ prechecks

This will warn you about missing configuration and other common errors.

Downloading Containers

ExPECA uses Docker Hub as the secondary Docker registry (note that the variable names below spell it secondry). Ensure you have login credentials for it, and insert your Docker Hub username into the site config's defaults.yml.

# /opt/site-config/defaults.yml
...
secondry_docker_registry_username: <your-docker-login-username>

Add additional passwords

Run ./cc-ansible --site /opt/site-config edit_passwords

Your default editor will open, and you'll see the following. Ensure that docker_registry_password, secondry_docker_registry_password, and tenant_switch_password are configured as follows. The secondry_docker_registry_password and tenant_switch_password lines do not exist yet, so they must be added. Passwords are also used to create the Swift hash path prefix and suffix, so make sure swift_hash_path_prefix, swift_hash_path_suffix, and swift_keystone_password are set as well.

...
docker_registry_password: kaQBdG9PRwlpTzknnhfGmvPf
secondry_docker_registry_password: <your-docker-login-token>
tenant_switch_password: <tenant-switch-password>
swift_hash_path_prefix:
swift_hash_path_suffix:
swift_keystone_password:
...

The passwords file will be re-encrypted when you exit the editor.

After configuring these passwords, pull the containers!

Pull container images

Run ./cc-ansible --site /opt/site-config pull
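
Once the pull finishes, the downloaded service images should be visible locally:

# List the service images that were just pulled
sudo docker images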

Generate Let's Encrypt certificates

Currently, the initial certificate generation is not yet automated. You must perform the following steps:

# Make a directory for your certs
docker exec -it letsencrypt_certbot mkdir -p /www/data/<kolla_external_fqdn>/
# Request the initial certificate
docker exec -it letsencrypt_certbot certbot certonly --noninteractive --agree-tos
Saving debug log to /var/log/kolla/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for testbed.expeca.proj.kth.se
Waiting for verification...
Cleaning up challenges

Copy fullchain.pem to site-config/certificates/haproxy.pem.

cd /opt/chi-in-a-box
mkdir -p ../site-config/certificates/
sudo cp /var/lib/docker/volumes/letsencrypt_certs/_data/live/testbed.expeca.proj.kth.se/fullchain.pem ../site-config/certificates/haproxy.pem

Append the private key to ../site-config/certificates/haproxy.pem. One way is to open the key file and the combined certificate in an editor and copy the key over manually:

sudo vim /var/lib/docker/volumes/letsencrypt_certs/_data/live/testbed.expeca.proj.kth.se/privkey.pem
vim ../site-config/certificates/haproxy.pem
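
Alternatively, the append can be done in one command; run it as root, since the Let's Encrypt volume is only readable by root:

sudo sh -c 'cat /var/lib/docker/volumes/letsencrypt_certs/_data/live/testbed.expeca.proj.kth.se/privkey.pem >> /opt/site-config/certificates/haproxy.pem'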

Deploy

You're now ready to run the deploy! This will bring up basic control plane services, listening on the IP addresses you configured above.

NOTE: If you took a break between the pre-checks and this deploy step, run the pre-checks again first, and make sure your edits to the defaults.yml and config.yml files are still accurate.

Run ./cc-ansible --site /opt/site-config deploy

Access your site

After the deploy completes, you'll be able to access the Horizon web UI at http://<kolla_external_vip_address>

The username is admin, and the password can be found by running ./cc-ansible --site /opt/site-config view_passwords | grep keystone_admin_password

Partial Deploy

If you encounter errors and need to re-run the deploy step, which is expensive, you can skip parts you know have already succeeded. Watch the Ansible output to see which "role" (service) it is updating. If you know a certain role has completed successfully, you can skip it on the next run with the --skip-tags option, e.g. --skip-tags keystone,nova to skip the Keystone and Nova provisioning; see the example below. You can persist these skips by uncommenting their lines in kolla-skip-tags.
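
For example, assuming the Keystone and Nova roles already converged on a previous run (the tag list here is illustrative; pick the tags that match your own run):

./cc-ansible --site /opt/site-config deploy --skip-tags keystone,nova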

Post-Deploy

Once the deployment is complete, there should be a more or less functional OpenStack deployment that you can log in to. However, much of the bare metal functionality will not yet work, as a few special entities are still needed, namely:

  • A provisioning network on its own VLAN, which Ironic must know about

  • Access on that VLAN to the Ironic TFTP server

  • A "baremetal" Nova flavor that is used to allow users to schedule bare metal deployments

  • Ironic deploy images available in Glance that hold the deployment ramdisk/kernel

  • A special "freepool" Nova aggregate used by Blazar to manage node reservations

All of these will be provisioned by running the post-deploy script:

./cc-ansible --site /opt/site-config post-deploy

Set up admin auth

post-deploy will create a file named admin-openrc.sh in your site-config directory. This file shouldn't be checked into source control, as it contains secrets.

To use it, run source /opt/site-config/admin-openrc.sh and source /opt/chi-in-a-box/venv/bin/activate

This will set a variety of environment variables, all prefixed with OS_

Verify that it works by running openstack token issue
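
Putting those steps together with the verification:

source /opt/site-config/admin-openrc.sh
source /opt/chi-in-a-box/venv/bin/activate

# The credentials are exported as OS_* environment variables
env | grep '^OS_'

# Prints a token table if authentication against Keystone works
openstack token issue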

Install the OpenStack Client

Run the following commands:

# run this in your home directory
cd ~
# create a new virtualenv, and use it
python3 -m venv .venv
source .venv/bin/activate

# update to a recent version of pip
pip install --upgrade pip

# install the openstack client libraries
pip install python-openstackclient python-doniclient python-ironicclient
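
As a quick check that the client and its plugins installed correctly, source the admin credentials from the previous step and run:

# Confirms python-openstackclient is on the PATH
openstack --version

# Exercises the Ironic plugin; the list may be empty until nodes are enrolled
openstack baremetal node list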

Last updated