
Updating Framework Versions – A Leisurely Stroll

I’m presently the primary application developer for conductor.nd.edu, map.nd.edu and a few other limited scope applications.  When I started working at the University of Notre Dame back in May 2009, I inherited these applications.

Fortunately, all of the applications I inherited had an automated test suite — a collection of programs that can be run to verify system behavior.  In the case of these applications, the test suites are developed and maintained by the application developer.

The test suites are a vital component of each of the applications I maintain.  Without them, I’d be lost.  Some test suites are better than others, but as I work on each application I also work to continually improve the test suite.

I recently completed the process of migrating map.nd.edu’s authentication system from LDAP to CAS.  The advantages of CAS are pretty straightforward — I do not have to worry about maintaining the authentication component of each application.  This means I can remove code from the application — always a good thing. And the application doesn’t process user credentials, which eliminates one potential security hole from the application.

While performing this update, I also decided that I would update the underlying Ruby on Rails framework.  We were on version 2.3.5 and needed to increment to a more recent version — more recent versions of applications typically squash some bugs and close any discovered security holes.

The steps to increment the version were fairly simple. Here is the script I followed:

10 increment version number
20 run tests
30 commit changes if tests were successful
40 update broken code and goto 20 if tests were unsuccessful
50 goto 10 if application's current version number is not latest version number


The key concept is that I walked from version 2.3.5 to version 2.3.14 by making sure 2.3.6, 2.3.7, etc. all passed their tests.  Never once did I open the browser during this process.
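In practice, each pass of that loop looked something like the following shell session. (Rails 2.3 applications pin their framework version via RAILS_GEM_VERSION in config/environment.rb; the exact commands here are a reconstruction, not a transcript.)

```shell
$ gem install rails -v 2.3.6
$ sed -i "s/RAILS_GEM_VERSION = '2.3.5'/RAILS_GEM_VERSION = '2.3.6'/" config/environment.rb
$ rake test                                 # line 20: run tests
$ git commit -am "Rails 2.3.5 -> 2.3.6"     # line 30: commit if green
# ...repeat for 2.3.7, 2.3.8, and so on up to 2.3.14
```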

Once that was done, I began working on the CAS implementation.  Adding this feature went very smoothly.  When all of it was done, I began kicking around the application in the browser, making sure that things were working as expected.  I could automate these tests with Selenium, but have yet to invest the time in this tool.

I didn’t find any problems, but in an alternate universe, I’m sure there were issues.  After all, I had just incremented 9 minor versions of the application and implemented a whole new feature.

Enter Bizarro World

Let’s pretend for a moment that I did find problems.  It is possible the problem was untested from the beginning, introduced in the version update, introduced with the CAS feature, or something else entirely.

My first step is to write a test that verifies the problem exists (i.e. the test fails).  With that automated test, I can begin to unravel what has happened. The first thing to do is go back to before any recent changes were made; after all, I want to know whether my changes introduced the problem.

Given that I always use source control, it is easy to step back to a previous code state.

With the repository in its original state (i.e. before I incremented the Ruby on Rails version), I then run the test.  Does it still fail? If so, my changes likely had no effect.  If the test passes, then the changes I made broke the application.

Since we are still pretending, let’s pretend the test passed.  I now know that somewhere in my changes, I broke something.  At this point, I begin walking the repository one commit at a time toward its most recent state (i.e. CAS has been implemented).  At each step, I run the test.  If it still passes, I move to the following commit.  If it fails, I have found the commit that introduced the problem and can work to fix it.

Since we use git as our source control, I can automate the above process with the `git bisect` command.  I run `git bisect` by indicating where to start, finish, and what test to run at each step.  Then, I sit back and let my computer do the work. Note the test program that you are running will likely need to reside temporarily outside of the repository, as `git bisect` will checkout each version of the code.
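A bisect session for this scenario might look like the following. The tag name and test file are hypothetical, and note that the test script is copied outside the repository first, as described above.

```shell
$ cp test/integration/login_test.rb /tmp/login_test.rb
$ git bisect start
$ git bisect bad HEAD                  # CAS is in place; the test fails here
$ git bisect good pre-rails-upgrade    # hypothetical tag where the test passed
$ git bisect run ruby -Itest /tmp/login_test.rb
$ git bisect reset
```

`git bisect run` relies on the exit code of the command: zero marks a commit good, non-zero marks it bad, which is exactly how a Test::Unit script behaves.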

Fortunately, in this universe, I didn’t encounter any of these problems, and instead was able to CAS-ify my first Rails application without breaking anything that I know of.




Presentation at HighEdWeb 2011

Erik Runyon and I presented at HighEdWeb 2011 on Feeding the Beast: Fostering a Culture of Sharing (source code can be found at https://github.com/erunyon/HighEdWeb-2011).  This was my first conference presentation, and it started with some technical difficulties. Note to self: we probably shouldn’t have used an HTML5 slide presentation.

We outlined the various cases in our department and on campus where we provide access to data and make use of easily accessible data.  It was not a definitive list, but we did talk about the various challenges we face in our distributed work environment.

We kept the presentation short, wrapping up in 25 minutes, then opened the floor for questions.  And there were plenty of questions.  A common theme amongst them was:

“How can I get others to share with me?”

This is particularly challenging when faced with the fact that several departments at a University might be on a cost recovery model.  My answer was perhaps a little discouraging:

“You can’t. However, if you work to build relations and share with others, then they will hopefully be more likely to share with you.”

In fact, think back to your days on the playground.  If someone had something that you wanted to play with, you may well have said to your mom, dad, or teacher “They aren’t sharing!”  What you really meant was “I want to play with that.”

I know for me, my parents would say “Why don’t you play with that over there.” What I’ve learned since my early childhood, and from being a parent, is that the simple sentence carries two more unstated sentences.

“Why don’t you play with that other toy. Show the other kid that you are having fun.  Then they will want to play with your toys and you can play with their toys.”

Pretty simple concept: if you want others to share with you, make sure you have something worth sharing.

I’m hoping that a video will surface, as there were other questions and discussions that I believe were very valuable.  I hope the answers that we gave to those questions were also valuable.

Trying to Juggle Only One Thing at a Time

I am the primary maintainer of Conductor.  As such, I am responsible for implementing new features, squashing bugs, tuning its overall performance, and improving the code’s maintainability.  It is very easy for these tasks to blend.

As I was working on adding a new feature to Conductor’s custom database, I found areas in the code that could be reworked for better clarity.  In rather short order I went from writing new code to adjusting existing code.   And I quickly grew to regret it.

The problem was that I had crossed from one task into another. The code base, in its current state, was juggling two incomplete concepts.

I am not a juggler.  However, I can quite easily juggle one ball.  I can even, to a lesser extent, juggle two balls.  The third ball is beyond me.  In fact, I find that juggling two balls is more than twice as difficult as juggling one ball.

So my changes were now unnecessarily complicated.  Not insurmountably so, but inconveniently so.  Instead of the tasks taking 10 minutes apiece, together they would take 30 minutes to resolve.

Programming is the art of transforming thought into instruction.  It is therefore easy to drift along a stream of consciousness and find the changes in your code to be unduly complicated.

Red, Green, Commit, Refactor

If I can follow the above 4 words of advice, I can reduce the potential for over-complications.

Red: When working on resolving the problem, I need to come up with the test (ideally automated) that will fail until the problem is resolved.

Green: Write the code to resolve the problem and verify the problem is resolved.

Commit: Update the code repository with the changes.

Refactor: Now go in and clean things up so the code is more legible and understandable.

More on Red-Green-Refactor

Content is King and the King needs to Move

It appears that Marketing Communication’s message concerning “Content is King” is taking hold.  Our copywriter, Mike Roe, is completely booked for the July 2011 to July 2012 fiscal year. (In fact, I suspect he’s overbooked.)

Let’s Talk about Chess for a Moment

I’ve played quite a bit of chess, though strangely never in a tournament.  My 9th grade son has been involved in chess since kindergarten so I’ve been to my fair share of tournaments.

One of the unique moves in chess is castling your king; it serves two purposes: one, as a protective measure, to get your king out of the volatile middle columns of the board, and two, to bring your powerful rook into play.

Now Back to Our Regular Programming

Creating content takes time and that is one precious resource you are not getting more of. Your website is a window into your world and your message.  Email, Twitter, Facebook, LinkedIn, and at some point Google+ are other windows into your world.

The message you are conveying is completely and entirely dependent on the content pieces you produce.

Think of these pieces of content as pieces on a chess board.  As those pieces are developed and pushed into the field of play, you should also be thinking about castling your king…content.

But be wary as your defensive positioning can easily become a liability.  If you fail to advance one of your pawns that is guarding the king, an enemy rook or enemy queen on your first rank can quickly spell checkmate.

So as you are working with your content, think about its portability, in part because you don’t want to write it again, but also because the technology platform of today may be gone tomorrow (I know my VCR doesn’t work anymore; how about yours?).

Blogging platforms, such as WordPress and Blogger, provide easy tools for piping data out of your blog and into another blog. Google+ and Facebook are also behind the idea of data liberation, providing a means of downloading everything you’ve done (Scary, right?).

Conductor, Marketing & Communications CMS, also provides a means of getting the information out of your website (the documentation is a little sparse, but it’s one of my goals for the year to flesh it out). And up until now, getting content into Conductor was not nearly as easy.

The Big Reveal

For the past few weeks, along with launching Notre Dame Philosophical Reviews’ new site, I’ve been working on a data migration tool to move webpages from outside of Conductor into Conductor.  The tool is intended for developers (those with an understanding of HTML markup) and greatly assists in moving content into Conductor.

This data migration tool is in its alpha stage, having only successfully moved content onto my local machine.  But it will evolve over time.

Please Hold While I Transfer You

The Setup

This month, my team has been working on moving Notre Dame Philosophical Reviews (NDPR) from a custom ColdFusion micro-application to Conductor.  Notre Dame Philosophical Reviews is an extremely popular site in the nd.edu domain. Below is the introductory text from the original site:

Notre Dame Philosophical Reviews is entirely devoted to publishing substantive, high-quality book reviews (normal length: 1500-2500 words). Our goal is to review a good majority of the scholarly philosophy books issued each year and to have the review appear within six to twelve months of the book’s publication. The journal will be published only on-line (available free, both through e-mail subscription and on this website). Reviews are commissioned and vetted by a distinguished national and international Editorial Board.

The site is successful and popular because its reviews “appear within six to twelve months of the book’s publication.”  This relatively quick turnaround on scholarly works fills a unique niche.

The Challenge

We needed the data to be precisely copied, preserving links and images wherever possible.  Below are the 3 challenges we had to address:


Since its first review, “Kant’s Theory of Taste, A Reading of the Critique of Aesthetic Judgment” in 2002, Notre Dame Philosophical Reviews has posted 2200+ reviews.  Each of these reviews was carefully vetted by NDPR’s Editorial Board.  There are plenty of footnotes, mathematical symbols, rough breathing symbols, smooth breathing symbols, and just about any other characters you might use.


Given that other people likely have a review bookmarked or referenced, each original article’s URL is very important.  So we had to make sure that the converted data could be referenced by the old URL.  This was handled by having the original URL redirect to the new URL via a 301 Moved Permanently status. The 301 status ensures that search engines will update their index to say that the original file is now found at the new URL.


The original reviews were stored in an Oracle database.  The “guts” of each review were stored in a CLOB (Character Large Object); the problem we encountered was getting that data out while maintaining high fidelity.  Having worked with MySQL and PostgreSQL, both open source relational database management systems (RDBMS), I am very much used to data portability and tools that make it easy.  We struggled with getting the CLOB data out of the database while maintaining data fidelity.

The Solution

I was going to crawl the site, convert the content, verify the data, and upload an updated copy of the review to Conductor.

Step 1: Crawl the Site

Pretty straightforward.  I had a list of all the Review IDs, and wrote a script to capture each of those pages.

Step 2: Parse the Source

Using the most excellent Hpricot library (“a swift, liberal HTML parser”), I wrote a script to extract the relevant fields from each file and store them locally for processing.  This script was also responsible for identifying any images and links for later processing.
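As a rough sketch of what that extraction script might have looked like (the selectors are guesses at the original NDPR markup, not its real structure):

```ruby
require 'rubygems'
require 'hpricot'

html = File.read('crawled/review-1234.html')
doc  = Hpricot(html)

# Pull out the fields we care about (element selectors are hypothetical).
title = doc.at('h1.review-title').inner_text
body  = doc.at('div#review-body').inner_html

# Note images and links for the later processing steps.
images = (doc / 'div#review-body img').map { |img| img['src'] }
links  = (doc / 'div#review-body a').map  { |a| a['href'] }
```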

Step 3: Verify / Render

Given that I was dealing with character encoding issues, I needed to make sure that the content I was storing locally would render identically to the corresponding web page.  I wrote a script to render each review in a custom template and check, via the Unix diff command, whether there were any material differences (i.e. I ignored white-space differences).
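The whitespace-insensitive comparison can be sketched in plain Ruby (a rough stand-in for diff with whitespace flags; the function name is my own):

```ruby
# Treat two documents as equivalent when they differ only in whitespace,
# mirroring what `diff` ignoring white-space would report.
def materially_different?(original, rendered)
  squish = lambda { |text| text.gsub(/\s+/, ' ').strip }
  squish.call(original) != squish.call(rendered)
end
```

Anything flagged by a check like this got a closer, manual look.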

Step 4: Transform Links

With the content verified, I wrote a script to process each of the images and links, downloading a copy of each of the images, and making sure that the URLs were reasonably well formed.

Step 5: Upload Images

With the downloaded images, the next script uploaded those images and mapped the URL of the original to the new URL. Using the RestClient library, I simply POSTed the images that were hosted on the original NDPR to Conductor, and captured the response.
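With RestClient, the upload loop reduces to something like this (the endpoint, field name, and response handling are hypothetical; Conductor’s actual API differs in its details):

```ruby
require 'rubygems'
require 'rest_client'

url_map = {}
Dir['downloaded_images/*'].each do |path|
  # Passing a File makes this a multipart POST; RestClient handles the encoding.
  response = RestClient.post('https://conductor.example/api/images',
                             :image => File.new(path, 'rb'))
  # Capture the mapping from the original file to its new home.
  url_map[File.basename(path)] = response.to_s
end
```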

Step 6: Update Content

The next script was responsible for updating content.  It replaced the image and link URLs with the new paths.  It also converted any “. . .” to the … character (&hellip;).
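The ellipsis conversion is a one-line substitution; a sketch:

```ruby
# Convert a spaced ellipsis (". . .") into the HTML entity for the
# single … character.
def fix_ellipses(text)
  text.gsub(/\.\s+\.\s+\./, '&hellip;')
end
```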

Step 7: Create Reviews

With all of the content updated, the next script created the review in Conductor.  Again using the RestClient library, I POSTed the appropriate form data and captured the response.

Step 8: Generate Redirects

With the newly created Reviews, the next script generated the redirect table in the format of Apache’s RewriteMap.
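The resulting Apache configuration looked roughly like this (the paths and the old URL pattern are hypothetical; note that RewriteRule does not see the query string, hence the RewriteCond):

```apache
# redirect map: old review id -> new Conductor URL, one pair per line
RewriteMap ndpr_reviews txt:/etc/httpd/ndpr_redirects.txt

# e.g. the old /review.cfm?id=1234 becomes the mapped URL, query string dropped
RewriteCond %{QUERY_STRING} ^id=([0-9]+)$
RewriteRule ^/review\.cfm$ ${ndpr_reviews:%1}? [R=301,L]
```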

My solution is up at GitHub

Wrap Up

I ended up running the script against a local copy of the new site, to see if there were any gotchas.  Once I felt comfortable with the process, I ran the script using one review on the new site.  When that completed, I ran a second, more complicated review.  After each of these live reviews, my team (Kate Russell, Nick Johnson, and Erik Runyon) reviewed the results.  We leaned very heavily on Kate’s experience as an editor as fidelity to the original was of the utmost importance.

As an Aside

We found one article that appeared to be in Chinese.  Upon further inspection, the entire review’s content had been mangled by some character encoding issue.  The problem was present in both the original review and the converted/new review.  So I wrote one more script to crawl the content of each review, looking for anything that didn’t have at least one capital letter.  Fortunately, only the one review appeared to be mangled.  Our conjecture is that it is related to an update to the underlying ColdFusion application and the Oracle database.  A previous solution to un-mangle the content was to edit the review in the micro-application’s admin, adding a space to the beginning of the review, then saving the review.  Sadly, this did not work.
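The mangled-review check itself is tiny; a sketch of the heuristic:

```ruby
# A review body with no capital letters anywhere is a crude but effective
# signal that its text was garbled by an encoding problem.
def looks_mangled?(text)
  text !~ /[A-Z]/
end
```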

Updating the Engine

For the past month I’ve been updating Conductor from Ruby on Rails 2 to Ruby on Rails 3.  It has proven to be both a great learning experience and an exercise in patience.  I’ve worked with Rails 2 for about 3 ½ years; I’ve grown accustomed to its idioms and its quirks.  On the surface, Rails 3 is very similar, but under the hood is where the starkest changes can be seen. Mercifully, over the various minor revisions (e.g. Rails 2.3.1), the system has clearly identified what has been deprecated and would be removed in future versions.

Conductor is comprised of about 10,000 lines of production code and 15,000 lines of test code.  The production code is what everyone else interacts with: viewing pages, uploading images, managing templates, etc.  The test code is what the Conductor development team interacts with: verifying that pages can be viewed, verifying that images are uploaded, verifying any additional behavior, etc.  When running Conductor’s test suite, the deprecation notices look like such:

DEPRECATION WARNING: reorder is deprecated.
  Please use except(:order).order(...) instead.
  (called from RAILS_ROOT/conductor/config/application.rb:7)

Step 1: Make sure the original “works”

The first step was to make sure that all deprecation notices for the Rails 2 version were taken care of.  This was fairly straightforward, as the file and line numbers were given. It was also something I’d done as part of the ongoing routine maintenance of Conductor. With the deprecation notices gone and a fully functional test suite, I proceeded to the next step.

Step 2: It’s Worse than that, He’s Dead Jim

Using git source control, I made a branch, try-rails-3, and flipped to Rails 3…and the local application wouldn’t start.  It was dead; fortunately, that was to be expected. With the help of the Rails 3 release notes and the Rails Upgrade plugin, I slowly brought the system back to life. This involved moving some configuration files, updating some plugins, and patience.

Step 3: The Devil’s in the Details

Once the system booted up, I then turned to the test suite.  I pushed play and waited for the results (sadly the test suite takes quite a bit of time to run).  There were plenty of errors, some related to the sequence of loading files, others related to broken plugins, and some simply needing attention.  I picked through the tests, and eventually got all of them running.

With the test suite running, I focused on manually verifying the system.  That is where I found the next batch of trouble.  With Rails 3, all text sent to the browser was now automatically HTML escaped as a security measure.  This resulted in some pages rendering broken HTML. I had broken pages, even though my tests worked. So the answer was to write some tests to verify the page wasn’t broken.
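In Rails 3, content that is already trusted, pre-sanitized HTML has to be marked safe explicitly in the view; the instance variable names here are hypothetical:

```erb
<%# Rails 3 escapes all output by default. For stored HTML that has
    already been sanitized, opt out explicitly: %>
<%= raw @page.body %>
<%# ...or equivalently: %>
<%= @page.body.html_safe %>
```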

Step 3 ¾: Camera Two, Camera Three

Applying the Red, Green, Refactor principle, I decided that I would write test code against the Rails 2 base, as I wanted to ensure the current Rails 2 code was behaving as expected.  Then, via git’s awesome cherry-pick command, I grabbed the commit that had the functioning test and added it to the Rails 3 branch.  I ran the test, and as expected, it failed…Good.  I went in and fixed the failure. I reran the test and it passed…Good. That particular bug was gone, and moving forward, I wouldn’t need to worry about it. Each time I encountered this problem, I repeated the solution, adding several more tests to the system.

Step 3 ⅞: Meanwhile Back at the Hall of Justice

While I was working on the Rails 3 changes, I was also maintaining the Rails 2 version of Conductor.  Some of these changes were more substantial, others were not.  What was happening, however, was that the Rails 2 and Rails 3 branches of work were diverging, and there would be a reckoning.

Step 4: Construction Paper, Scissors and Glue

All of the tests were working. I had manually verified all of the administrative functions in the Rails 3 branch. Any problems I had encountered were fixed with an underlying test to verify the behavior.  I felt confident that I had gotten most of the problems.  So I merged the Rails 2 branch into the Rails 3 branch.

Since the branches had diverged, there were merge conflicts that I needed to resolve; Git is amazing at tracking changes, but when changes happen to the same line of a file in different branches, a merge conflict results.  I resolved these to the best of my ability…and then ran the tests.

There were some broken tests, but eventually, I got things working.  I again manually verified the administrative functions in the Rails 3 branch.

Step 5: It’s Alive…Almost

I pushed the changes to our beta server and made sure everything worked.  Even though my test environment is rather robust, there are always slight differences between systems.  So, with the help of my team, we tested the administrative functions against a clone of the production environment.  Everything looked good, and now the Rails 3 update is ready for a deploy.

My Testing Philosophy

In order to verify something is working correctly you must first verify that it was not working.

The above philosophy is a version of the old proverb “If it isn’t broke don’t fix it.” Note the implied act of testing that it is broken. In the various software that I’ve worked on over the past 6 years, I have worked to include and maintain automated test suites. Each application’s test suite includes different kinds of tests as well as tests that were created for a variety of reasons.

The Kinds of Tests

I started writing tests as part of working on Ruby on Rails applications, so my testing experience follows its testing guidelines.

  • Unit tests – testing a small chunk of code (i.e. Determine a Page’s URL upon creation)
  • Functional tests – testing a single action (i.e. Create a Page)
  • Integration tests – testing the interaction of multiple actions (Login → Create a Page → View the new Page → Logout)

Together, these kinds of tests make up the vast majority of my automated tests.  There are other tests that verify the response of external systems; search.nd.edu and eds.nd.edu are two examples.

The Reasons for a Given Test

In each of these cases, the code is changing. By taking the time to write a handful of tests, I can better understand the problem as well as work at exposing any other underlying issues.

Just this past week, by working on one test, I discovered the solution to a problem that I hadn’t been able to solve.

Why Automated Tests

The big payoff is that if I have a robust test suite, I can run it at any time, over and over, and verify that all the tests pass, which in turn raises my confidence that the tested system is working properly.  It does not, however, guarantee that the system is working, only that what I’m testing is working.

As an added perk, the tests I write convey what I expect the system to do, which means taking time to understand the tests may help me understand the nuances and interactions of a more complicated software system. The tests also help my fellow programmers understand what is going on.

However, a test suite that runs successfully need not indicate that the system works. It only verifies that the tests pass. Which leads to…

Problems with Automated Tests

  • Did I think of all of the possible scenarios?
  • Did I properly configure my test environment?
  • Did I account for differences between my test environment and the production environment?
  • Can I make the test environment as close to the production environment as possible?

And the real big kicker…

Do I Have the Support to Write a Test Suite?

It takes time to write tests, and in some cases people may balk at taking that time, but how are you going to test “the whole system” after you’ve made a “small change” and another…and another…and how about that first change by the new developer…you know the one that came in after your entire programming team got hit by the proverbial bus.


It Doesn’t Work if it isn’t Tested

I was going to test my software anyway, so why not take the time to have the machine do what it’s best at doing: repetitive tasks. I took time to learn how to write tests, and then wrote explicit instructions for my computer to do the things that I should be doing with each code update.

Is it foolproof? No. But so long as I’m learning and “taking notes”, my tests are learning as well. Ultimately these tests reflect my understanding of the software application.

Example of Fixing a Bug with Testing

  • A software bug is reported.
  • I run the test suite and verify all the tests pass.
  • I write a test to duplicate the failure. This test must initially fail; after all, a failed test verifies that something is broken.
  • I update the code until the test passes; I have verified that it is working. I run the test suite and verify all the tests pass.
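As a sketch, the test from the middle two steps might look like this (Test::Unit style; the Page model and the bug are hypothetical):

```ruby
require 'test/unit'

class PageUrlTest < Test::Unit::TestCase
  # Hypothetical bug report: a page titled with a trailing space
  # generates a malformed URL.
  def test_url_ignores_trailing_whitespace_in_title
    page = Page.create(:title => 'About Us ')
    assert_equal '/about-us', page.url  # fails until the bug is fixed
  end
end
```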

Now, imagine if your initial test suite had zero tests, and in fixing the bug, you created one test. Run that test after each update to make sure you don’t regress.


Automated Testing Institute

Kent Beck: Test Driven Development

Jay Fields: How We Test

The Extra Mile is Full of Surprises

Yesterday, a colleague of mine was having problems with an image. It seems that the code she was using, and had always used, to insert an image was not working. After a quick look, there was an extra space in the code that was preventing the IMG tag from properly rendering. (Note: the site uses Textile to manage its content, so an extra space means the Textile-to-HTML parser may encounter problems.)

I poked around in the code for a bit, wanting to ensure that Conductor was not “helping” her by inserting an extra space…It was not; according to my colleague, she had in fact hit the space bar. As a general rule, the system should prevent bad data from being entered.  So I updated the system to remove any trailing spaces.  As is typical when I make a change to the code, I write a test that first fails.  Then I update the code and get the test to pass. After all, I can’t verify something is fixed unless I can first verify it is broken.
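The data-cleaning part of the fix amounts to stripping trailing spaces before content is saved; a minimal sketch (where this hook actually lives in Conductor is not shown, and the function name is my own):

```ruby
# Remove trailing spaces from the end of every line, so stray spaces
# cannot trip up the Textile-to-HTML parser.
def strip_trailing_spaces(content)
  content.gsub(/ +$/, '')
end
```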

This fix was proving to be a bit cantankerous. The test was failing in a slightly unexpected way.  And then it struck me… While I was updating the code to prevent the extra space from causing a problem, I had stumbled upon another problem. So I dug in, and found, to my elation, the root cause of a long-standing, erratic bug that I had been chasing around but never successfully squashing.

I cleaned up the first test that was failing, and wrote a second test to duplicate the behavior that was causing the long-standing error. This second test didn’t initially fail the way I expected, but I knew I was on the right path. So I tweaked my test, and was ecstatic when the test failed properly. After all, I had to verify that it was failing before I could fix it. I then stepped into the code, and fixed the problem. And the test succeeded! Which meant that my test now verified the expected behavior.

What started out as ensuring proper data, even accidentally entered, became a fix for a long-standing problem.  Incidentally, my colleague was also a semi-regular “victim” of that long-standing problem.

Carol, thanks for hanging in there with me as we muddled through this problem.

It’s all in a URL

I’m not just a web application developer, I’m a support technician and an ambassador. If people are having problems with Conductor, they will typically contact us via email or submit a help request in the Conductor admin. If the problem is a more technical issue it will be routed to me. If the request came via the help request in Conductor, then I automatically get the URL of the page that the request was sent from. Without this information, I am left to guess, or as I habitually do, reply to the submitter with:

What is the URL that you are having trouble with?

The challenge is that Conductor supports 170 live sites and 60 sites under development. So without the simple URL, I’m going to spend time looking for the best match based on the submitter’s signature and job title. It is easier to send the request back, asking for the URL…and to start the conversation as an ambassador.

By sending the simple email “What is the URL?”, I begin the dialogue of addressing the specific problem and listening for any additional problems or issues the requestor might have regarding the system. The source of the specific problem can be just about anything: software error, data error, connectivity problems, misbehaving browser, user error, unexpected system behavior, solar flares, or the dreaded “D: all of the above”. To the best of my ability, I work to assure the requestor that I am addressing and solving the problem regardless of “whose fault it is.” Then, with the information in hand, I don my Sherlock Holmes cap and pipe, and work towards a resolution.

Pining for a little less change

This past week, one of the problems I worked to solve was why our custom interface to the Google Search Appliance at Notre Dame was choking on the multi-site search for Italian Studies. The syntax for querying both The Devers Program in Dante Studies and Italian Studies was to include a query parameter of as_oq=site:www.dante.nd.edu+site:italianstudies.nd.edu. The resulting URL looked something like this:




Note that the colon between site and italianstudies.nd.edu has been replaced with %3A; the URL has been encoded. The browser and server correctly handle it; however, a problem arises. If the URL has already been encoded (: replaced with %3A) and it gets re-encoded, the %3A becomes %253A, because, wait for it…

The % is another character that needs to be encoded.

Instead of URI encoding being idempotent, a URI encoded two times is different from the same URI encoded three times. Do you see the potential problem? If you are working with a series of functions, or even separate libraries, one of those functions or libraries may decide that it needs to encode a URI before it does something with that URI. But what if the URI was already encoded by another process somewhere else in the program chain? There are certainly “best practices” that can be applied to URI encoding, but I believe URI encoding should be idempotent. Hopefully there is a sound reason for it not being idempotent…but at this point, I’m unaware of one.
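Ruby’s standard library demonstrates the non-idempotence neatly:

```ruby
require 'cgi'

once  = CGI.escape('site:italianstudies.nd.edu')
twice = CGI.escape(once)

once   # => "site%3Aitalianstudies.nd.edu"
twice  # => "site%253Aitalianstudies.nd.edu" -- the % itself got re-encoded

# Encoding an already-encoded URI changes it again; the operation
# is not idempotent.
```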