Puppet, OpenStack, Hiera, with Vagrant

Seems my most successful posts so far are the Couch to OpenStack bits, followed closely by the series of posts on Chef, Razor, and OpenStack. However, Chef isn’t the only way to get OpenStack installed. In fact, Puppet Labs maintains a set of manifests / modules that work quite well. This post will describe how to get started using them, along with Hiera, to deploy OpenStack.

TL;DR – clone this repo, then vagrant up to get Puppet, Hiera, and OpenStack.

Getting Started

Before we get started, we will assume you have at least three Ubuntu 12.04 boxes (VMs, cloud instances, or physical machines). These will serve the following roles:

  • Puppet Server
  • OpenStack Controller
  • OpenStack Compute node
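One assumption worth calling out: each node must be able to resolve the Puppet server by name (by default the agent looks for a host called puppet). If you don’t have DNS handling this, a minimal sketch is an /etc/hosts entry on each node; the 192.168.100.100 address below is hypothetical, so substitute the real IP of your Puppet server:

echo '192.168.100.100 puppet' | sudo tee -a /etc/hosts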

Installing the Puppet Server

In this configuration, the Puppet server provides configuration management for the rest of the environment. Additionally, we use Puppet Labs’ Hiera to store the many and varied variables required to set up the environment.

How to do it

To install the Puppet server, you’ll need to ssh into your first Ubuntu box as a user that has sudo access and run the following commands:

wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
sudo apt-get install -y puppet puppetmaster
sudo puppet resource service puppetmaster ensure=running enable=true
sudo puppet resource service puppet ensure=running enable=true
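If you want a quick sanity check that the master came up (not strictly required), you can ask Puppet for the service state and confirm something is listening on the default master port, 8140:

sudo puppet resource service puppetmaster
sudo netstat -ntlp | grep 8140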

Install OpenStack Modules, Set Up the Hiera Data Source, Set Up the Manifests

Note: this section is quite long and involves a lot of copy/paste. It is often easier to browse the git repo (here) and pull out the relevant bits. Next up, we install the Puppet Labs modules for OpenStack and place our configuration values into a YAML file for Hiera.

How to do it

While SSH’d into your puppet server as a user with sudo access, execute the following to install the OpenStack modules:

sudo puppet module install puppetlabs/apt
sudo puppet module install puppetlabs/openstack
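To confirm the modules (and their dependencies) landed where the master expects them, you can list what Puppet sees:

sudo puppet module list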

Next, we set up our Hiera data source. Make sure the Hiera data directory exists, then write the configuration values into common.yaml:

sudo mkdir -p /var/lib/hiera
sudo tee /var/lib/hiera/common.yaml > /dev/null <<'EOF'

# Data needed for Class['openstack::compute']

# The IP and interface that external sources will use to communicate with the instance and hypervisors.
openstack::compute::public_interface:   'eth0'
openstack::compute::internal_address:   "%{ipaddress_eth0}"
openstack::compute::iscsi_ip_address:   "%{ipaddress_eth0}"

# The interface that will handle instance-to-instance communication and instance outbound traffic.
openstack::compute::private_interface:  'eth1'

# In most cases the libvirt_type will be kvm for production clusters.
openstack::compute::libvirt_type:       'qemu'

# This runs the networking daemon on each node so that we remove single points of failure.
openstack::compute::multi_host:         true

# IP or hostname of the controller node
openstack::compute::db_host:            '192.168.100.101'
openstack::compute::rabbit_host:        '192.168.100.101'
openstack::compute::keystone_host:      '192.168.100.101'
openstack::compute::vncproxy_host:      '192.168.100.101'
openstack::compute::glance_api_servers: '192.168.100.101:9292'

# An IP address range that OpenStack can use for distributing internal DHCP addresses.
openstack::compute::fixed_range:        '192.168.101.0/24'

# Passwords and users for the plumbing components of OpenStack.
openstack::compute::nova_user_password: 'qr9A2mzc)@C&4wQ'
openstack::compute::nova_db_password:   '4g#Xzfv8%*GA4Wv'
openstack::compute::cinder_db_password: '4g#Xzfv8%*GA4Wv'
openstack::compute::rabbit_password:    'RYiTg4{f8e2*{hL'

# VNC is helpful for troubleshooting, but not all cloud images allow you to log in via a console.
openstack::compute::vnc_enabled:        true

# Verbose just makes life easier.
openstack::compute::verbose:            true

# The quantum module wasn't ready at time of release of the openstack module.
openstack::compute::quantum:            false

# Data needed for Class['openstack::controller']
# The IP and interface that external sources will use to communicate with the instance and hypervisors.
openstack::controller::public_address:       "%{ipaddress_eth0}"
openstack::controller::public_interface:     'eth0'

# The interface that will handle instance-to-instance communication and instance outbound traffic.
openstack::controller::private_interface:    'eth1'

# The initial admin account created by Puppet.
openstack::controller::admin_email:          admin@example.com
openstack::controller::admin_password:       '.F}k86U4PG,TcyY'

# The initial region this controller will manage.
openstack::controller::region:               'region-one'

# Passwords and users for the plumbing components of OpenStack.
openstack::controller::mysql_root_password:  'B&6p,JoC4B%2CJo'
openstack::controller::keystone_db_password: '4g#Xzfv8%*GA4Wv'
openstack::controller::keystone_admin_token: '$9*uKaa3mdn7eQMVoGVBKwZ+C'
openstack::controller::glance_db_password:   '4g#Xzfv8%*GA4Wv'
openstack::controller::glance_user_password: 'qr9A2mzc)@C&4wQ'
openstack::controller::nova_db_password:     '4g#Xzfv8%*GA4Wv'
openstack::controller::nova_user_password:   'qr9A2mzc)@C&4wQ'
openstack::controller::cinder_db_password:   '4g#Xzfv8%*GA4Wv'
openstack::controller::cinder_user_password: 'qr9A2mzc)@C&4wQ'
openstack::controller::secret_key:           'LijkVnU9bwGmUhnLBZvuB49hAETfQ(M,hg*AYoxcxcj'
openstack::controller::rabbit_password:      'RYiTg4{f8e2*{hL'

# The memcache, DB, and Glance hosts live on the controller node, so just talk to them over localhost.
openstack::controller::db_host:              '127.0.0.1'
openstack::controller::db_type:              'mysql'
openstack::controller::glance_api_servers:
  - '127.0.0.1:9292'
openstack::controller::cache_server_ip:      '127.0.0.1'
openstack::controller::cache_server_port:    '11211'

# An IP address range that OpenStack can use for distributing internal DHCP addresses.
openstack::controller::fixed_range:          '192.168.101.0/24'

# An IP address range that OpenStack can use for assigning "publicly" accessible IP addresses. In a simple case this can be a subset of the IP subnet that your public interface is on, e.g. 10.0.0.1/23 and 10.0.1.1/24.
openstack::controller::floating_range:       '10.0.0.1/24'

# This runs the networking daemon on each node so that we remove single points of failure.
openstack::controller::multi_host:           true

# Verbose just makes life easier.
openstack::controller::verbose:              true

# The quantum module wasn't ready at time of release of the openstack module.
openstack::controller::quantum:              false

# Data needed for Class['openstack::auth_file']
openstack::auth_file::admin_password:       '.F}k86U4PG,TcyY'
openstack::auth_file::keystone_admin_token: '$9*uKaa3mdn7eQMVoGVBKwZ+C'
openstack::auth_file::controller_node:      '127.0.0.1'
EOF
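Puppet only consults this file if its Hiera configuration points at /var/lib/hiera. On a stock Puppet 3 install that is the default datadir for the YAML backend, but if your /etc/puppet/hiera.yaml says otherwise, a minimal configuration matching the layout above would look something like this (restart the master after changing it):

sudo tee /etc/puppet/hiera.yaml > /dev/null <<'EOF'
---
:backends:
  - yaml
:yaml:
  :datadir: /var/lib/hiera
:hierarchy:
  - common
EOF
sudo service puppetmaster restart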

And to set up the manifests:

sudo tee /etc/puppet/manifests/site.pp > /dev/null <<'EOF'
node /puppet-controller.puppet.lab/ {
    class { 'openstack::repo::uca':
        release => 'grizzly',
    }

    class { 'openstack::auth_file':
        admin_password       => hiera('openstack::auth_file::admin_password'),
        keystone_admin_token => hiera('openstack::auth_file::keystone_admin_token'),
        controller_node      => hiera('openstack::auth_file::controller_node'),
    }

    class { 'openstack::controller':
        admin_email          => hiera('openstack::controller::admin_email'),
        admin_password       => hiera('openstack::controller::admin_password'),
        allowed_hosts        => ['127.0.0.%', '192.168.100.%'],
        cinder_db_password   => hiera('openstack::controller::cinder_db_password'),
        cinder_user_password => hiera('openstack::controller::cinder_user_password'),
        glance_db_password   => hiera('openstack::controller::glance_db_password'),
        glance_user_password => hiera('openstack::controller::glance_user_password'),
        keystone_admin_token => hiera('openstack::controller::keystone_admin_token'),
        keystone_db_password => hiera('openstack::controller::keystone_db_password'),
        mysql_root_password  => hiera('openstack::controller::mysql_root_password'),
        nova_db_password     => hiera('openstack::controller::nova_db_password'),
        nova_user_password   => hiera('openstack::controller::nova_user_password'),
        private_interface    => hiera('openstack::controller::private_interface'),
        public_address       => hiera('openstack::controller::public_address'),
        public_interface     => hiera('openstack::controller::public_interface'),
        quantum              => hiera('openstack::controller::quantum'),
        rabbit_password      => hiera('openstack::controller::rabbit_password'),
        secret_key           => hiera('openstack::controller::secret_key'),
    }
}

node /puppet-compute/ {
    class { 'openstack::repo::uca':
        release => 'grizzly',
    }

    class { 'openstack::compute':
        public_interface    => hiera('openstack::compute::public_interface'),
        internal_address    => hiera('openstack::compute::internal_address'),
        iscsi_ip_address    => hiera('openstack::compute::iscsi_ip_address'),
        private_interface   => hiera('openstack::compute::private_interface'),
        libvirt_type        => hiera('openstack::compute::libvirt_type'),
        multi_host          => hiera('openstack::compute::multi_host'),
        db_host             => hiera('openstack::compute::db_host'),
        rabbit_host         => hiera('openstack::compute::rabbit_host'),
        keystone_host       => hiera('openstack::compute::keystone_host'),
        vncproxy_host       => hiera('openstack::compute::vncproxy_host'),
        glance_api_servers  => hiera('openstack::compute::glance_api_servers'),
        fixed_range         => hiera('openstack::compute::fixed_range'),
        nova_user_password  => hiera('openstack::compute::nova_user_password'),
        nova_db_password    => hiera('openstack::compute::nova_db_password'),
        cinder_db_password  => hiera('openstack::compute::cinder_db_password'),
        rabbit_password     => hiera('openstack::compute::rabbit_password'),
    }
}
EOF
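Before pointing any agents at the master, it doesn’t hurt to check the manifest for syntax errors:

sudo puppet parser validate /etc/puppet/manifests/site.pp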

Installing the Nodes

There are a few steps to installing the nodes. First, we install the Puppet agent on the node and run it for the first time. Then we sign the node’s certificate on the Puppet server. Finally, we run the Puppet agent on the node again. Our example covers the installation of the controller, but the procedure is the same for the compute node.

How to do it

SSH into both the puppet server and the “puppet-controller” node. On the puppet controller, execute the following:

wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
sudo apt-get install -y puppet
sudo puppet agent -td
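The first agent run will stall or error out because its certificate has not been signed yet; that is expected. If you want to see the outstanding request before signing it, you can list certificates on the puppet server:

sudo puppet cert list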

Next, on the puppet server:

sudo puppet cert sign --all

Finally, back on the node:

sudo puppet agent -td
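Once the run completes on the controller, a rough smoke test is to source the credentials file written by the openstack::auth_file class (typically /root/openrc, though the path may differ in your setup) and ask Keystone and Nova for a listing:

sudo -i
source /root/openrc    # assumed path; adjust if your auth_file class writes elsewhere
keystone user-list
nova list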

Summary

In this post we showed you how to set up a Puppet server and use the Puppet Labs modules to install OpenStack, with Hiera storing the configuration data. We also provided a set of scripts and a Vagrantfile in case you want to test this configuration.
