RFC workflow for Launchpad

Now that we are actually getting some Rails code to production, I have worked with the Change Control team and Change Advisory Board to incorporate Launchpad into the OIT change control process.  This process is similar to the old one, with some fantastic new features:

  • The developer (submitter) will get the BUILD TEST task
  • Upon receiving this task, the developer can deploy with Launchpad as many times as necessary (incrementing tags — see below).
  • Each TEST/PROD deploy generates a notification to change control.  They will be checking for an associated RFC!
    • A forthcoming change to Launchpad will include a field to give the RFC number, further reinforcing this
  • Change control will update the BUILD PROD task to use the latest deployed tag.  You may want to state this explicitly in the closure text, in addition to pasting the deploy history.

 

  • Rules:
    • Deploy tags only (more on git tagging)
      • Always include a tag message summarizing the changes (see the example after this list)
      • Tag convention:  v1.0.1, v3.2.21, v1.2.4a
        • First digit:  major releases.  Very rare, for large milestones in the project.  (Note: not large BUNDLES of updates… we should be more iterative than ever now!)
        • Second digit: significant feature additions or enhancements
        • Third digit: minor additions, tweaks, or bug fixes.  This number can get high if necessary!
        • Letter:  optional, rare, only for hotfixes
    • DO NOT ALTER TAGS.  This new process allows you to iterate tag numbers in TEST.  It’s easy to make new ones.  DO IT!
    • Document deployments in Assyst.  Upon closing the task (when you’re ready for PROD), paste into Assyst a list of all your deployments
      • See <app_web_root>/version.txt, a file generated by Launchpad, to help with this.
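
For example, creating and pushing an annotated tag for the release in the sample RFC below might look like this (the tag message is illustrative):

git tag -a v1.0.2 -m "Add service to return a student's favorite color"
git push origin v1.0.2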

Here’s a sample RFC:

Banner Web Services v1.0.2 
------------------------------------------------ 
v1.0.2 contains a new service to return a student's favorite color

Test 
Step 1 - [YOU, THE SUBMITTER] - api-internal-test.dc.nd.edu 
a. Use Launchpad (launchpad.dc.nd.edu) to deploy app "ndapi" to the TEST environment. 
    App: NDAPI
    Environment: test
    Task: Deploy:cold
    Tag: v1.0.2
    Do_Migration: True

Step 2 - [FUNCTIONAL_USER] - Test using attached testing spreadsheet. 

Step 3 - Webinspect 
[ATTACH WEBINSPECT PLAN]

Prod 
Step 1 - [Bruce Stump|Pete Bouris] - api-internal-prod.dc.nd.edu 
a. Use Launchpad (launchpad.dc.nd.edu) to deploy app "ndapi" to the PROD environment. 
    App: NDAPI
    Environment: production
    Task: Deploy:cold
    Tag: v1.0.2
    Do_Migration: True

Step 2 - [FUNCTIONAL_USER] - Test using attached testing spreadsheet.

Note that you must be specific in your Launchpad steps for the person running the prod deploy.  Soon, I will release command line tools / API endpoints for Launchpad that will make this less error-prone.

This is a great step forward, enabling developers to react quickly to issues that pop up during functional testing.  Thanks to the Change Control team and the CAB for their time, attention, and approval of this new process!

Launchpad: A Rails app deployment platform

Capistrano is a great tool for building scripts that execute on remote hosts.  While its functionality lends itself to many different applications, it’s a de facto standard for deploying Ruby on Rails apps.  A few months ago, I used it to automate app deployments and other tasks such as restarting server processes, and behold, it was very good.

I had provisioned each of the remote hosts using Puppet, so I knew that my machine configurations were good.  This meant that I could use the same capistrano scripts for multiple apps, as long as they used the same server stack and ran on one of these hosts.  In short, consistency enables automation.
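
To give a feel for it, the per-app pieces of a Capistrano 3-style setup might look roughly like this (the repository URL and paths are illustrative, not our actual settings):

# config/deploy.rb -- application-level settings
set :application, 'ndapi'
set :repo_url,    'git@git.example.edu:oit/ndapi.git'   # hypothetical repository URL
set :deploy_to,   '/opt/apps/ndapi'                     # hypothetical install path

# config/deploy/test.rb -- hosts for the TEST environment
server 'api-internal-test.dc.nd.edu', user: 'deploy', roles: %w{app web db}

# From a checkout, run:  cap test deploy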

However, there are a few issues with this approach:

  • Distribution of Credentials.  Capistrano needs a login to the remote host.  I can’t just give passwords or pem files to developers; our separation of responsibilities policy doesn’t allow it.
  • Proliferation of Cap Scripts.  I can’t hand over scripts to developers and expect them to stay the same.  I need to centralize these things and maintain one copy in one place.
  • Visibility.  I need these automated tools to work in tandem with our change control processes.  That means auditing and logging.
  • Access Control.  If I’m going to centralize, I need some way to say who can do what.

Enter Launchpad.

This is my solution: a web app that wraps all this functionality.  Launchpad has the following features:

  • A centralized repository of application data
    • git urls
    • deploy targets (dev, test, prod)
    • remote hosts
  • A UI for running capistrano tasks
  • Fine-grained access control per app/environment/task
  • Notification groups for deployment events (partially implemented)
  • Full audit trails of all actions taken in the system and the resulting output
  • Support for multiple stacks / capistrano scripts
  • JSON API (deploying soon)

 

Launchpad owns the remote host credentials, so users never have to see them.  As a result, I can give developers the ability to deploy outside of dev in a way that is safe, consistent, and thoroughly auditable.  My next blog post will outline the ways in which our Change Control team has worked to accommodate this new ability.

Right now, the only stack implemented in Launchpad is an NGINX/Unicorn stack for Rails apps, but there really is no limit to what we can deploy with this tool on top of capistrano.
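
As a rough idea of what that stack involves (this is not Launchpad's actual stack definition; paths and worker counts are illustrative), a minimal Unicorn config for one of these apps looks something like:

# config/unicorn.rb
worker_processes 4
working_directory '/opt/apps/ndapi/current'
listen '/opt/apps/ndapi/shared/tmp/sockets/unicorn.sock', backlog: 64
pid '/opt/apps/ndapi/shared/tmp/pids/unicorn.pid'
timeout 30

NGINX sits in front of Unicorn and proxies requests to that socket.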

Launchpad is available to internal OIT developers; see me for details.

Better, Faster, More Consistent

It wasn’t long ago that OIT wasted time and energy having DBAs manually execute SQL scripts created by developers.  Then, Sharif Nijim developed the “autodeploy” tool that allows us to run SQL scripts automatically from SVN tags.  Developers have a faster way to run SQL without imposing on DBAs, and DBAs have their valuable time freed up for more important work.  We have never looked back.  I’m hoping Launchpad will do the same with application deployments.  Onward!

AWS Public Sector Symposium

It has been a while since our last update, so now is the perfect time to give a little perspective as to where we stand with regard to our Amazon initiatives.

Amazon graciously invited Bob and me to present the Notre Dame AWS story at its fifth annual Public Sector Symposium.

We described how we got started with AWS in the context of migrating www.nd.edu 18 months ago.  Over the past 18 months, we have experienced zero service issues in this, our most reliable and resilient service offering.  Zero.

Since then, as mentioned in our spring 2014 update, we reviewed progress to date when Notre Dame hosted the Common Solutions Group in May, covering governance, people and processes, and our initiatives to grow our institutional understanding of how to operate in the cloud.

Bob and I had fun presenting, and were able to give a synopsis of:

  • Conductor Migration:  ND-engineered content management system that migrated to AWS in March 2014.  You can read all about Conductor’s operational statistics here.  The short story is that the migration was a massive success: improved performance, reduced cost, increased use of the AWS tools (RDS, ElastiCache), and a sense of joy on the part of the engineers involved in the move.
  • ND Mobile App:  Instantiation of the Kurogo framework in providing a revamped mobile experience.  Zero performance issues associated with rollout.
  • AAA in the Cloud:  Replicating our authentication store in Amazon.  Using Route 53 to heartbeat against local resources so that in a failure condition, login.nd.edu redirects to the authentication store in Amazon for traffic originating from off campus.  This was tested in June and performed as designed.  Since the preponderance of our main academic applications, including Sakai, Box, and Gmail, are hosted off campus, the result of this effort is that we have separated service availability from local border network incidents.  If campus is offline for some reason, off-campus students, faculty, and staff are still able to use the University’s SaaS services.
  • Backups:  We are in the midst of testing Panzura, a cloud storage gateway product. Preliminary testing looks promising – look for an update on this ongoing testing soon.

We are absolutely thrilled with the progress being made, and look forward to continuing to push the envelope.  More to come!

Onward!

Posted in AWS

Calling Oracle Stored Procedures from Ruby with ruby-plsql

Isn’t it nice when something just works?  We are building Ruby on Rails apps on top of Oracle, so we’re using the Oracle Enhanced ActiveRecord adapter on top of the ruby-oci8 driver library.

The ActiveRecord adapter gives us a nice AR wrapper around our existing Oracle schema, which is great, but what about when I want to work with stored procedures or functions?  Turns out the author of this adapter, Raimonds Simanovskis, has a gem just for this called ruby-plsql.

Include the gem in your Gemfile:

gem 'ruby-plsql'

Then, write an initializer that hooks it to your existing ActiveRecord connection (config/initializers/plsql.rb):

plsql.activerecord_class = ActiveRecord::Base

After that, calling a procedure is easy.  Oracle return types are automatically cast to Ruby types.  Oracle exceptions are raised as OCIError, which has “code” and “sql” attributes; call its “message” method to get the full error output.

Here, inside a controller action, I call the Oracle procedure idcard.nd_is_valid_pin using the plsql object provided by the gem:

def update_pin
  new_pin = params[:new_pin]  # candidate PIN from the request (parameter name illustrative)
  ok_pin  = plsql.idcard.nd_is_valid_pin(new_pin)
  if ok_pin
    plsql.idcard.update_pin_pr(@info.ndid, params[:old_pin], new_pin)
  else
    raise Errors::InvalidInput
  end
rescue OCIError => e
  render json: { error: e.message }, status: :unprocessable_entity
end
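
ruby-plsql also accepts parameters by name as a hash, which reads better when a procedure takes several arguments (the parameter names here are hypothetical):

plsql.idcard.update_pin_pr(:p_ndid => @info.ndid, :p_old_pin => params[:old_pin], :p_new_pin => new_pin)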

That’s it!  Nice and easy, and “rsims” is two for two.