As you know, OpenStack skills are in high demand in the current IT job market.

To get kick-started with this technology, take a look at one of the most popular OpenStack certifications: the Red Hat Certified System Administrator in Red Hat OpenStack.

The certification will by no means give you the skills to operate a production OpenStack infrastructure, but it will give you the foundations to start with this technology.

In this guide, I will share with you how to build an OpenStack home lab so you can start gaining hands-on experience. I will follow the Red Hat documentation to keep a formal structure and flow.

Notes and prerequisites:

  • For this lab you can use Red Hat Enterprise Linux 7 or CentOS 7.
  • NTP and DNS services must be available on the lab network.
  • You need two hosts: node01 and node02. node01 will be the controller node, where all the OpenStack management components will be hosted. node02 will be the Nova compute (hypervisor) node.
  • You can build this lab in a virtual environment (your environment needs to support nested virtualization) or on physical hardware (enable CPU virtualization support in the BIOS).
  • If you build the lab on Red Hat, make sure you have a valid Red Hat account with a registration that grants access to the Red Hat OpenStack repositories (the 90-day trial offered by Red Hat works), or go with CentOS, as access to the CentOS repositories is free.
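Before building anything, it is worth confirming the prerequisites above are actually in place. The script below is a minimal sketch of such a check, assuming the hb8802.com host names used throughout this guide; `check` is a hypothetical helper, and anything it reports as MISSING should be fixed before you continue.

```shell
#!/bin/sh
# Minimal prerequisite check for the lab (a sketch, not exhaustive).
# The hb8802.com names are the lab values used later in this guide.

check() {   # check DESCRIPTION COMMAND...  -> prints OK/MISSING
    desc="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "OK      $desc"
    else
        echo "MISSING $desc"
    fi
}

check "chrony client (NTP)"   command -v chronyc
check "dig (DNS lookups)"     command -v dig
check "DNS: node01 resolves"  getent hosts node01.hb8802.com
check "DNS: node02 resolves"  getent hosts node02.hb8802.com
```

Run it on both nodes; a MISSING line for the DNS checks usually means the lab DNS records are not in place yet.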

Let’s start building the OpenStack lab!

Overview of the installation and configuration of OpenStack:

Configure Node01 and Node02 with the following settings:

  • Configure NTP using 192.168.1.254 as the NTP server for both nodes
  • Create a private network named int and a subnet named subint using the 172.25.51.0/24 network. Set the gateway for this network to 172.25.51.25.
  • Create a public network named ext and a subnet named subext using the 172.25.1.0/24 network. The gateway for this network is 172.25.1.254. Floating IPs allocated to this network must range from 172.25.1.25 to 172.25.1.99.
  • Use a VLAN range of 1000-2000, and configure br-ex as the external bridge for public access to the instances. Configure br-eth1 as the network bridge between node01 and node02; br-eth1 will be attached to eth1 on both node01 and node02.
  • Use the ML2 framework.
  • Configure Horizon to accept SSL connections.
  • Enable the Nova compute service only on node02.

Configure OpenStack as follows:

  • Create a project named project1. Set the quota for this project to 4 VCPUs, 4096 MB RAM, and two floating IP addresses.
  • Create a new flavor named m2.tiny, which includes 1 VCPU, 1024 MB RAM, a 20 GB root disk, a 2 GB ephemeral disk, and a 512 MB swap disk.
  • Create a new user named john with an email address of john@node01.hb8802.com. Set the password to p@ssw0rd and include this user as a member of the project1 project.
  • Create a new user named admin01 with an email address of admin01@node01.hb8802.com. Set the password to p@ssw0rd and include this user as an admin of the project1 project.
  • In the project1 project, create a new image named small. Set no minimum disk, a minimum of 512 MB RAM, and make the image public. You can download QCOW2 images from http://docs.openstack.org/image-guide/content/ch_obtaining_images.html.
  • In the project1 project, create a new image named web. Set no minimum disk, a minimum of 1024 MB RAM, and make the image public.

  • In the project1 project, allocate two floating IP addresses.
  • In the project1 project, create a new security group named sec1 with a description of Web and SSH. Allow TCP/22, TCP/443, and ICMP -1:-1 from CIDR 0.0.0.0/0, and TCP/80 from the sec1 source group.
  • In the project1 project, create an SSH key pair named key1. Save the private key locally.
  • In the project1 project, launch a new instance named small using the small image and the m2.tiny flavor. Include the key1 key pair and the sec1 security group. Associate the 172.25.1.26 floating IP address.
  • In the project1 project, launch a new instance named web using the web image and the m1.small flavor. Include the key1 key pair and the sec1 security group. Associate the 172.25.1.27 floating IP address.
  • In the project1 project, create a new 2 GB volume named vol1 and attach this volume to the web instance.

Time for the hands-on!!

Jump to PuTTY and log into the node01.hb8802.com machine as root (with a password of p@ssw0rd).

Install updates on node01.hb8802.com.

[root@node01 ~]# yum update -y

Reboot if needed (e.g., if you have a kernel update).

[root@node01 ~]# reboot

  • Log into the node02.hb8802.com machine as root (with a password of p@ssw0rd).
  • Install updates on node02.

[root@node02 ~]# yum update -y

Reboot if needed (e.g., if you have a kernel update).

[root@node02 ~]# reboot

  • On node01.hb8802.com, install the openstack-packstack package.

[root@node01 ~]# yum install -y openstack-packstack

  • Generate an answer file with packstack.

[root@node01 ~]# packstack --gen-answer-file /root/answers.txt

Edit /root/answers.txt and ensure the following items are configured:

  • CONFIG_NTP_SERVERS=192.168.1.254
  • CONFIG_KEYSTONE_ADMIN_PW=p@ssw0rd
  • CONFIG_HORIZON_SSL=y
  • CONFIG_PROVISION_DEMO=n
  • CONFIG_COMPUTE_HOSTS=172.25.1.11
  • CONFIG_SWIFT_INSTALL=n
  • CONFIG_CEILOMETER_INSTALL=n
  • CONFIG_NAGIOS_INSTALL=n
  • CONFIG_NEUTRON_L2_PLUGIN=ml2
  • CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
  • CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
  • CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
  • CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
  • CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
  • CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:1000:2000
  • CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
  • CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
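If you prefer not to hunt through the (very long) generated file by hand, the same edits can be scripted. Below is a minimal sketch using sed on a throwaway copy so it runs anywhere; on node01 you would point ANSWERS at /root/answers.txt instead. `set_key` is a hypothetical helper, and the approach relies on packstack generating a default line for every key, so a plain replace is enough.

```shell
#!/bin/sh
# Sketch: apply the answer-file settings above without opening an editor.
# Demonstrated on a throwaway copy; point ANSWERS at /root/answers.txt
# on node01 to apply it for real.
set -e
ANSWERS="$(mktemp)"
printf '%s\n' \
    'CONFIG_NTP_SERVERS=' \
    'CONFIG_HORIZON_SSL=n' \
    'CONFIG_COMPUTE_HOSTS=192.0.2.1' > "$ANSWERS"

set_key() {   # set_key KEY VALUE -- replace the existing KEY=... line
    sed -i "s|^${1}=.*|${1}=${2}|" "$ANSWERS"
}

set_key CONFIG_NTP_SERVERS   192.168.1.254
set_key CONFIG_HORIZON_SSL   y
set_key CONFIG_COMPUTE_HOSTS 172.25.1.11
# ...repeat for the remaining CONFIG_* keys in the list above

cat "$ANSWERS"
rm -f "$ANSWERS"
```

The cat at the end should show the three demo keys with their new values, confirming the replacement pattern before you run it against the real answer file.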

Configure OpenStack using the answer file.

[root@node01 ~]# packstack --answer-file /root/answers.txt

Note

Once prompted, use p@ssw0rd as the root password for Packstack to install the services on node01 and node02.

  • We need to tie the br-eth1 bridge together with the eth1 NIC on node02:

[root@node02 ~]# ovs-vsctl add-port br-eth1 eth1

Note

The Packstack installation created the internal network bridges br-int and br-eth1 and added the eth1 interface to br-eth1 on both node01 and node02. Packstack also created the external bridge br-ex on node01, the only node which will communicate to an external network. The remaining step is to manually configure the br-ex and eth0 network interface files to create the external network master/slave interface connection.

  • On node01 update the interface configuration files. First, make a backup copy of the original ifcfg-eth0 file.

[root@node01 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/

  • Use ifcfg-eth0 as a template for the new bridge by copying it to ifcfg-br-ex.

[root@node01 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex

Remove all entries but DEVICE and ONBOOT from the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Add extra entries for the OVS bridges:

[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
The bridge device configuration file /etc/sysconfig/network-scripts/ifcfg-br-ex must be configured as well.

[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=static
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
USERCTL=yes
PEERDNS=yes
IPV6INIT=no
IPADDR=172.25.1.10
NETMASK=255.255.255.0
GATEWAY=172.25.1.254
DNS1=192.168.1.254

On node01, add the eth0 device to the br-ex OVS bridge and restart the network. Enter both commands on one line, as shown, to ensure that the network service is restarted without losing command line access to the system.

[root@node01 ~]# ovs-vsctl add-port br-ex eth0; systemctl restart network.service
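If the restart succeeds, eth0 should now be a port on br-ex and the bridge should carry node01's external address. The snippet below is a quick verification sketch; `verify_br_ex` is a hypothetical helper that degrades to a notice on a machine without Open vSwitch, and the expected values in the comments assume the addressing used in this lab.

```shell
#!/bin/sh
# Verify the external bridge wiring on node01 after the restart.
verify_br_ex() {
    if ! command -v ovs-vsctl >/dev/null 2>&1; then
        echo "SKIP: ovs-vsctl not found (run this on node01)"
        return 0
    fi
    echo "ports on br-ex (expect eth0):"
    ovs-vsctl list-ports br-ex
    echo "addresses on br-ex (expect 172.25.1.10/24):"
    ip -4 addr show br-ex
}
verify_br_ex
```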

  • Statically set the host name of node01:

[root@node01 ~]# hostnamectl set-hostname node01.hb8802.com

  • On node02, update /etc/nova/nova.conf to work with Neutron network interfaces:

[root@node02 ~]# crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal false
[root@node02 ~]# crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0

  • Restart Nova services:

[root@node02 ~]# openstack-service restart nova
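A quick way to confirm the services actually came back is to ask systemd directly. This sketch uses systemctl rather than the openstack-service wrapper, so it also works where openstack-utils is not installed; `check_unit` is a hypothetical helper, and openstack-nova-compute is the unit name you would expect on node02.

```shell
#!/bin/sh
# Report whether a systemd unit is active; prints "unknown" where
# systemctl is unavailable or gives no answer.
check_unit() {
    state="$(systemctl is-active "$1" 2>/dev/null || true)"
    echo "$1: ${state:-unknown}"
}

check_unit openstack-nova-compute   # expect "active" on node02
```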

  • On workstation1.hb8802.com, open the Horizon dashboard by browsing to https://node01.hb8802.com/dashboard. Log in as admin with the password p@ssw0rd. If necessary, check the value of the OS_PASSWORD variable configured in /root/keystonerc_admin on node01.

[root@node01 ~]# source /root/keystonerc_admin

[root@node01 ~(keystone_admin)]$ echo $OS_PASSWORD
p@ssw0rd

Open Firefox and go to https://node01.hb8802.com/dashboard.

  • Use the Horizon dashboard to create a new project (tenant) named project1. Use a description of Project for project1. Set the quota to four VCPUs, four instances, 4096 MB RAM, and two floating IPs.

Alternatively, you can use the CLI to create the tenant:

[root@node01 ~(keystone_admin)]# keystone tenant-create --name project1 --description 'Project for project1'

In the Admin tab, open the Identity Panel and click Projects. Click the Create Project button. Add the name and description as described previously. On the Quota tab, set the quotas as shown previously. Click the Finish button.

Alternatively, you can use the CLI to set up the quotas:

[root@node01 ~(keystone_admin)]# nova quota-update --floating-ips 2 --instances 4 --ram 4096 --cores 4 project1

  • Create a new flavor named m2.tiny, which includes 1 VCPU, 1024 MB RAM, a 20 GB root disk, a 2 GB ephemeral disk, and a 512 MB swap disk.

In the Admin tab, open the System Panel and click Flavors. Click the Create Flavor button. Enter the information as previously shown. Click the Create Flavor button to confirm changes.

Alternatively, you can use the CLI to create the flavor:

[root@node01 ~(keystone_admin)]# nova flavor-create --ephemeral 2 --swap 512 m2.tiny auto 1024 20 1

  • Create a new user named john. Use an email address of john@node01.hb8802.com and a password of p@ssw0rd. Include this user in the project1 project with a _member_ role.

In the Admin tab, open the Identity Panel and click Users. Click the Create User button. Enter the information as previously shown. Click the Create User button to confirm changes.

Alternatively, you can use the CLI to create the user and add it as a member to the project:

[root@node01 ~(keystone_admin)]# keystone user-create --name=john --pass=p@ssw0rd --email=john@node01.hb8802.com

[root@node01 ~(keystone_admin)]# keystone user-role-add --user john --role _member_ --tenant project1

  • Create a new user named admin01. Use an email address of admin01@node01.hb8802.com and a password of p@ssw0rd, and include this user in the project1 project with an admin role.

In the Admin tab, open the Identity Panel and click Users. Click the Create User button. Enter the information as previously shown. Click the Create User button to confirm changes.

Alternatively, you can use the CLI to create the user and add it as a member to the project:

[root@node01 ~(keystone_admin)]# keystone user-create --name=admin01 --pass=p@ssw0rd --email=admin01@node01.hb8802.com
[root@node01 ~(keystone_admin)]# keystone user-role-add --user admin01 --role admin --tenant project1

  • Log out as admin and log in as john.

Alternatively, create a new credentials file for john in /root/keystonerc_john and source it:

unset SERVICE_TOKEN SERVICE_ENDPOINT
export OS_USERNAME=john
export OS_TENANT_NAME=project1
export OS_PASSWORD=p@ssw0rd
export OS_AUTH_URL=http://node01.hb8802.com:35357/v2.0/
export PS1='[\u@\h \W(keystone_john)]\$ '

[root@node01 ~(keystone_admin)]# source /root/keystonerc_john

  • Add a new image named small using the QCOW2 image located at http://classroom.hb8802.com/pub/materials/small.img. Set no minimum disk, minimum 512 MB RAM, and make the image public.

In the Project tab, open the Compute tab and click Images. Click the Create Image button. Enter the information as previously shown. Click the Create Image button to confirm changes.

Alternatively, you can use the CLI to create a new Glance image:

[root@node01 ~(keystone_john)]# glance image-create --name small --min-ram 512 --is-public True --disk-format qcow2 --container-format bare --copy-from http://classroom.hb8802.com/pub/materials/small.img

  • Add a new image named web using the QCOW2 image located at http://classroom.hb8802.com/pub/materials/web.img. Set no minimum disk, minimum 1024 MB RAM, and make the image public.

In the Project tab, open the Compute tab and click Images. Click the Create Image button. Enter the information as previously shown. Click the Create Image button to confirm changes.

Alternatively, you can use the CLI to create a new Glance image:

[root@node01 ~(keystone_john)]# glance image-create --name web --min-ram 1024 --is-public True --disk-format qcow2 --container-format bare --copy-from http://classroom.hb8802.com/pub/materials/web.img

  • Next, configure networking. In the Project tab, open the Network tab and select Networks. Click the Create Network button. Enter the internal network information as previously shown. Click the Subnet tab. Enter the internal subnet information as previously shown. Click the Create button to confirm changes.

Alternatively, you can use the CLI to create the network and the subnet:

[root@node01 ~(keystone_john)]# neutron net-create int
[root@node01 ~(keystone_john)]# neutron subnet-create --name subint int 172.25.51.0/24 --gateway 172.25.51.25

Click the Create Network button again. Enter the external network information as previously shown. Click the Subnet tab. Enter the external subnet information as previously shown. Browse to the Subnet Detail tab.

Deselect the Enable DHCP checkbox and leave the rest of the fields blank. Click the Create button to confirm changes. Alternatively, you can use the CLI to create the network and the subnet:

[root@node01 ~(keystone_john)]# neutron net-create ext
[root@node01 ~(keystone_john)]# neutron subnet-create --allocation-pool start=172.25.1.25,end=172.25.1.99 --gateway 172.25.1.254 --disable-dhcp --name subext ext 172.25.1.0/24

In the Project tab, open the Network and select Routers. Click the Create Router button. Enter router1 for the router name. Click the Create button to confirm changes.

Alternatively, you can use the CLI to create the router:

[root@node01 ~(keystone_john)]# neutron router-create router1

  • Sign out as the john user and sign in as admin. In the Admin tab, open the System Panel and select Networks. Click the Edit Network button for the public (ext) network row. Select the External Network checkbox. Click the Save Changes button to confirm changes.

Alternatively, you can use the CLI to update the network:

[root@node01 ~(keystone_john)]# source /root/keystonerc_admin
[root@node01 ~(keystone_admin)]# neutron net-update ext --router:external=True

  • Sign out as the admin user and sign in as john. In the Project tab, open the Network tab and select Routers. Click the Set Gateway button in the router1 row. In the External Network menu, choose ext. Click the Set Gateway button to confirm changes.

Alternatively, you can use the CLI to attach the router to the ext network:

[root@node01 ~(keystone_admin)]# source /root/keystonerc_john
[root@node01 ~(keystone_john)]# neutron router-gateway-set router1 ext

  • Click the router1 link. Press the Add Interface button. In the Subnet menu, select int: 172.25.51.0/24 (subint). Press the Add Interface button.

Alternatively, you can use the CLI to add an interface to the network (as the john user):

[root@node01 ~(keystone_john)]# neutron router-interface-add router1 subint

  • Allocate two floating IP addresses.

In the Project tab, open the Compute menu and select Access & Security. Choose the Floating IPs tab. Click the Allocate IP To Project button. Use the ext pool and click the Allocate IP button. Repeat the process so you have two floating IP addresses (172.25.1.26 and 172.25.1.27).

Alternatively, you can use the CLI to allocate two floating IPs:

[root@node01 ~(keystone_john)]# nova floating-ip-create ext
[root@node01 ~(keystone_john)]# nova floating-ip-create ext

  • Create a new security group named sec1 with a description of Web and SSH. Allow SSH, HTTPS, and ALL ICMP from CIDR 0.0.0.0/0, and TCP/80 (HTTP) from the sec1 source group.

In the Project tab, open the Compute menu and select the Access & Security link. Go to the Security Groups tab. Press the Create Security Group button. Enter sec1 as the name and Web and SSH as the description. Press the Create Security Group button to accept these additions. Press the Manage Rules button for the sec1 security group. Press the Add Rule button. In the Rule drop-down menu, choose SSH. Leave the Remote and CIDR options as the default. Press the Add button. Press the Add Rule button again. Choose HTTPS as the Rule and press the Add button. Press the Add Rule button again. Choose ALL ICMP as the Rule and press the Add button. Press the Add Rule button one more time. Choose HTTP as the Rule. Choose Security Group in the Remote dropdown menu. Choose sec1 (current) from the Security Group dropdown menu. Press the Add button.

Alternatively, you can use the CLI to create the security group, and set rules for it:

[root@node01 ~(keystone_john)]# nova secgroup-create sec1 "Web and SSH"
[root@node01 ~(keystone_john)]# nova secgroup-add-rule sec1 tcp 22 22 0.0.0.0/0
[root@node01 ~(keystone_john)]# nova secgroup-add-rule sec1 tcp 443 443 0.0.0.0/0
[root@node01 ~(keystone_john)]# nova secgroup-add-rule sec1 icmp -1 -1 0.0.0.0/0
[root@node01 ~(keystone_john)]# nova secgroup-add-group-rule sec1 sec1 tcp 80 80

  • Create an SSH key pair for the virtual machines named key1.

In the Project tab, open the Compute menu and select Access & Security. Choose the Keypairs tab. Press the Create Keypair button. Enter the name as shown previously. Press the Create Keypair button. Save the key1.pem file to the default location, which should be in /home/student/Downloads/key1.pem on workstation1.hb8802.com.

Alternatively, you can use the CLI to generate the key pair:

[student@workstationX ~]$ nova keypair-add key1 > /home/student/Downloads/key1.pem
[student@workstationX ~]$ chmod 0600 /home/student/Downloads/key1.pem

  • Launch a new instance named small using the small image and the m2.tiny flavor. Use the key1 key pair and include the sec1 security group. Associate the 172.25.1.26 floating IP address.

In the Project tab, open the Compute menu and select Instances. Press the Launch Instance button. In the Details tab, enter the image and name as previously shown. In the Access & Security tab, enable the key pair and security group listed previously. In the Networking tab, press the + button next to the int network. Press the Launch button.

Alternatively, you can use the CLI to launch the instance:

[root@node01 ~(keystone_john)]# neutron net-list
[root@node01 ~(keystone_john)]# nova boot --flavor m2.tiny --image small --key-name key1 --nic net-id=<uuid of int network> --security-groups sec1 small

Once the instance has been created, open the Actions dropdown menu and select Associate Floating IP. Choose the 172.25.1.26 floating IP address, and choose the small instance, then press the Associate button.

Alternatively, you can use the CLI to allocate the floating IP:

[root@node01 ~(keystone_john)]# nova add-floating-ip small 172.25.1.26
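Booting is asynchronous, so the instance may still be spawning when the command returns. Rather than racing ahead, you can poll until Nova reports ACTIVE. `wait_active` below is a hypothetical helper around `nova show`; the awk field positions assume the client's usual table output, so adjust the parsing if yours differs. The same helper works for the web instance launched next.

```shell
#!/bin/sh
# Poll an instance until it reaches ACTIVE (or hits ERROR / gives up).
wait_active() {   # wait_active NAME [TRIES]  -> 0 once ACTIVE
    name="$1"; tries="${2:-30}"
    while [ "$tries" -gt 0 ]; do
        # pull the "status" row out of the nova show table
        status="$(nova show "$name" 2>/dev/null |
                  awk '$2 == "status" {print $4}')"
        if [ "$status" = "ACTIVE" ]; then
            return 0
        fi
        if [ "$status" = "ERROR" ]; then
            echo "$name is in ERROR state"
            return 1
        fi
        tries=$((tries - 1))
        sleep 2
    done
    echo "timed out waiting for $name"
    return 1
}

# usage on node01:  wait_active small && echo "small is ACTIVE"
```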

  • Launch a new instance named web using the web image and the m1.small flavor. Use the key1 key pair and include the sec1 security group. Associate the 172.25.1.27 floating IP address.

In the Project tab, open the Compute menu and select Instances. Press the Launch Instance button. In the Details tab, enter the image, name, and flavor as shown previously. In the Access & Security tab, enable the key pair and security group listed previously. In the Networking tab, press the + button next to the int network. Press the Launch button.

Alternatively, you can use the CLI to launch the instance:

[root@node01 ~(keystone_john)]# neutron net-list
[root@node01 ~(keystone_john)]# nova boot --flavor m1.small --image web --key-name key1 --nic net-id=<uuid of int network> --security-groups sec1 web

Once the instance has been created, open the Actions dropdown menu and select Associate Floating IP. Choose the 172.25.1.27 floating IP address and choose the web instance, then press the Associate button.

Alternatively, you can use the CLI to allocate the floating IP:

[root@node01 ~(keystone_john)]# nova add-floating-ip web 172.25.1.27

  • Once both instances are available, verify the network services.

[student@workstationX ~]$ firefox https://172.25.1.27
[student@workstationX ~]$ chmod 600 /root/key1.pem
[student@workstationX ~]$ ssh -i /root/key1.pem root@172.25.1.26
[student@workstationX ~]$ ssh -i /root/key1.pem root@172.25.1.27

  • Create a 2 GB volume with a name of vol1. Attach this volume to the web instance.

In the Project tab, open the Compute menu and select Volumes. Press the Create Volume button. Enter the name and size as shown previously. Press the Create Volume button to confirm changes.

Alternatively, you can use the CLI to create the volume:

[root@node01 ~(keystone_john)]# cinder create --display-name vol1 2

Press the Edit Attachments button for the vol1 volume. Choose the web instance and press the Attach Volume button.

Alternatively, you can use the CLI to attach the volume:

[root@node01 ~(keystone_john)]# nova volume-attach web <uuid of vol1> auto

And this is it for now. Stay tuned!

Click here to see the initial post.