I had an interesting and relatively tough time with OpenStack. First, the bad news: due to a change in direction and focus, I won’t be spending much time on OpenStack any more. There are still lots of other interesting topics, however, that have yet to be covered.

Good news: I learned a lot of lessons that may help individuals playing around with OpenStack.

Summary of lessons learned:

  • play with automated OpenStack deployment options such as Mirantis and Red Hat RDO for both learning and PoC purposes; they can deploy OpenStack in as little as half a day

After having been through a few different options for learning about OpenStack, I felt that LinuxAcademy.com’s OpenStack courses were the best. There are several factors working in their favour:

  • They have three OpenStack courses (Essentials, Associate and Deployment), each progressively more technical, which makes it easier to get started.
  • The Essentials and Associate courses come with pre-built live OpenStack labs that you can practice newly acquired skills on.
  • The Deployment course focuses on building your own OpenStack environment on your own workstation, or even a small server with virtualized nodes, for learning purposes.
  • At $30/month, the value and quality of the content, especially the live practice servers, are simply amazing. Nowhere else in the technology sector have I seen such great learning value at such a price point. You can find tons of other resources that offer courses for cheaper (Udemy) or free (YouTube, Vimeo, etc.), but the quality really suffers in those options. “You get what you pay for”… and in my opinion, with LinuxAcademy.com, you get a whole lot more than what you pay for.

One big lesson I learned is that deploying OpenStack ‘manually’ is an insanely tedious process. The Deployment course, which covers deploying the OpenStack Icehouse release on Ubuntu-based servers, consists of typing a ton of cryptic commands. Here is literally one of 100 or so such examples:
$ keystone service-create --name=glance --type=image \
  --description="OpenStack Image Service"
$ keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://controller:9292 \
  --internalurl=http://controller:9292 \
  --adminurl=http://controller:9292
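
The command substitution in that endpoint-create step is the part most worth understanding: keystone service-list prints an ASCII table, and awk plucks the id column out of the row for the image service. Here is a standalone sketch with simulated output (the IDs below are made up, not real keystone output):

```shell
#!/bin/sh
# Simulated 'keystone service-list' table output (the IDs are made-up placeholders).
service_list() {
cat <<'EOF'
+----------------------------------+----------+----------+---------------------------+
|                id                |   name   |   type   |        description        |
+----------------------------------+----------+----------+---------------------------+
| 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d | glance   | image    | OpenStack Image Service   |
| 9f8e7d6c5b4a3f2e1d0c9b8a7f6e5d4c | keystone | identity | OpenStack Identity        |
+----------------------------------+----------+----------+---------------------------+
EOF
}

# Same extraction used in endpoint-create: match the row whose type cell is
# 'image' (the surrounding spaces anchor it to a whole cell), then print
# field 2, which is the id sitting between the first two '|' separators.
IMAGE_ID=$(service_list | awk '/ image / {print $2}')
echo "$IMAGE_ID"
```

Once you see the pattern, the other 99 commands in the guide become much less mysterious: they are all variations on "look up an ID from a table, feed it to the next command".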

It is still important to create an OpenStack environment ‘manually’ at least once, if not twice, as it’s a great learning process. In between the ‘copy-and-paste’ sessions of LinuxAcademy.com’s Deployment guide, the instructor does a great job of reviewing OpenStack concepts.

When going through the Deployment guide, however, the instructor neglects to mention that he’s essentially following this official OpenStack document. Instead of hopelessly trying to copy exactly what the instructor is doing, and risking a mistake, you can copy and paste most of the commands from the official OpenStack document while viewing the LinuxAcademy.com videos to follow along with the process; this will make your life much easier! (Be sure to always change the appropriate variables: server names, passwords, usernames, etc.)
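
One way to make that variable substitution less error-prone is to set the site-specific values as shell variables once, up front. A minimal sketch (the hostname and port here are just the guide’s defaults; substitute your own):

```shell
#!/bin/sh
# Site-specific values -- change these once for your environment.
CONTROLLER=controller   # hostname of your controller node
GLANCE_PORT=9292        # Glance API port used throughout the guide

# Endpoint URLs become derived strings instead of retyped literals,
# so a single edit updates every pasted command that uses them.
PUBLICURL="http://${CONTROLLER}:${GLANCE_PORT}"
echo "$PUBLICURL"
```

With this in place, the pasted commands reference ${CONTROLLER} and friends rather than hard-coded names, and a typo in one place can’t silently diverge from the rest of the deployment.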

After you’ve been through the manual process once or twice, and had the opportunity to become intimately familiar with OpenStack’s inner workings, the next step would be to play around with a couple of the commercial, supported and significantly more automated options. Of these, I will only discuss the two most popular: Mirantis and Red Hat OpenStack (and its community version, known as RDO).

Instead of literally spending a week or two on the manual process of deploying OpenStack, one can get up and running with Mirantis or RDO within a day, or even as little as half a day. Of course, these OpenStack distributions hide all the complexity, and even partially remove the configurability of some components, but they still provide more than enough knobs for 99% of the production deployments out there.

RDO can be installed on Red Hat Enterprise Linux or, for learning purposes, on CentOS (and other compatible distros). Mirantis supports multiple Linux distros, and also comes with its own separate ‘Fuel-Master’ virtual appliance, which is used to deploy and control the OpenStack infrastructure.
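
To give a sense of how much shorter the RDO route is, the all-in-one quickstart of that era boiled down to a handful of commands on a fresh CentOS box. A sketch, not a definitive recipe — check the current RDO quickstart for the exact repo RPM and release names before running anything:

```shell
# Enable the RDO package repository (URL per the RDO quickstart; verify
# against the current docs, as repo locations change between releases).
sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm

# Install Packstack, RDO's Puppet-based installer.
sudo yum install -y openstack-packstack

# Deploy Keystone, Glance, Nova, Neutron, Horizon, etc. on this single host.
packstack --allinone
```

Compare that with the week of hand-typed keystone/glance/nova commands from the manual route, and the appeal of the automated distributions is obvious.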

I haven’t had enough time to play with either of these too much, but Mirantis definitely seemed to provide the smoother experience, with a great Web 2.0-esque GUI and the ability to get up and running with a proof-of-concept (PoC) or production environment with much more ease than the RDO option (for which one would need to worry about evaluation, licences, subscriptions, etc.). With a built-in PXE boot server option, and the ability to point-and-click to deploy OpenStack components (Controller, Compute, Storage and other nodes) in both virtual and physical environments, Mirantis definitely wins the ease-of-use award.

You can get started with Mirantis here and with Red Hat RDO here. For Mirantis, you would use the same process for a PoC or production environment, except you’d deploy on physical servers instead of a virtualized environment on your workstation. For Red Hat OpenStack PoC or production environments, you’d need to follow this guide.

Have fun OpenStack-ing!

