
We are poets!

So I just finished day two of RailsConf, and I had a very interesting experience. My final session of the day was held by an actor-turned-programmer by the name of Adam Cuppy. He started things off with a quote by DHH from last year’s conference, where David challenged everyone to see themselves not as software engineers, but as software writers. However, he showed how the term writer carries little meaning on its own; it is simply a description of things a writer has done. Taking a look at synonyms leads to the word poet. Examining poet shows us that a poet has super-powers! Now who doesn’t want super-powers? And one thing everyone knows is that a person with super-powers has the responsibility to use them for the betterment of others.

Through this super-power of expression, poets are able to relay meaning and feeling with their words. They are able to use the syntax of a language to convey feeling, and paint a picture in the reader’s mind. And so too are we challenged to paint a picture with the language we are writing in. Just imagine for a minute how easily a programmer new to a project could on-board if the application code they were looking at read like a play. What if we could engage this new person and make it not only easy, but enjoyable for them to learn the application?

Think about the last time you began working in an existing code base. What did you spend the most time trying to learn: the how of the program, or the why of the program? With forethought and planning, we can develop conventions that make it easy to discern the meaning of our code. If we can make it easy for someone reading a method to know how it fits into the application, they will be able to see the whole picture much more quickly. And think how beautiful that picture could look.
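As a small, invented illustration of what that forethought might look like in Ruby (the Subscription class and its 30-day grace window are made up for this example, not taken from any real application): the public method states the why, and the how hides in small, well-named helpers.

```ruby
require "date"

# Hypothetical example of intention-revealing code: renewable? reads
# like the business rule it implements, and each private helper
# answers exactly one question.
class Subscription
  GRACE_PERIOD_DAYS = 30

  def initialize(expires_on:)
    @expires_on = expires_on
  end

  def renewable?
    active? || recently_lapsed?
  end

  private

  def active?
    Date.today <= @expires_on
  end

  def recently_lapsed?
    days_expired.between?(1, GRACE_PERIOD_DAYS)
  end

  def days_expired
    (Date.today - @expires_on).to_i
  end
end
```

A newcomer reading only `renewable?` learns the policy without touching a date calculation; that is the kind of on-boarding the poets make possible.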

So I echo his enthusiasm: go forth and be poets! Give meaning to your code, and do something great.

Lessons in Rails: Email is slow

Now that my first Rails application is in production, there are many things I wish I had known at the onset. Not the least of these is how slow (to send) and tedious (to implement) email messages are within the framework. Firstly, the creation and maintenance of mailers is just tiresome: there are multiple views for each message, not to mention the work of keeping the text up to date. Second, being new to pretty much all things web based, I initially inserted all of the email sending into the main request/response cycle of the controllers. Needless to say, once we began load testing we started to see terrible performance: a lot of timeouts, and ugly blank screens coming back. Upon investigation, we determined that the emails were the culprits. Being a functional requirement, and therefore unable to be done away with, what were we to do?

We decided to do the same thing you do with anything that is slow and bothersome: we made it someone else’s problem. To address the maintenance issue we created one mailer to rule them all, and pushed all aspects of all the messages into a database table. This, combined with a nice variable substitution object, allows the end users to maintain the emails their system is sending out. This simple move made everyone on both sides extremely happy (and happy customers make happy bosses).
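Our actual substitution object is more involved, but the core idea, storing subject and body in the database with named placeholders and interpolating at send time, can be sketched in a few lines of plain Ruby. (The EmailTemplate name and the `%{name}` placeholder style here are illustrative, not our production code.)

```ruby
# Illustrative sketch: a "template" row fetched from the database holds
# the subject and body with named placeholders; render_email fills them
# in from a hash of variables supplied by the application.
EmailTemplate = Struct.new(:subject, :body)

def render_email(template, variables)
  # Kernel#format substitutes %{name} markers from the hash, and raises
  # KeyError if the template references a variable we did not supply.
  {
    subject: format(template.subject, variables),
    body:    format(template.body, variables)
  }
end

welcome = EmailTemplate.new(
  "Welcome, %{first_name}!",
  "Hi %{first_name}, your %{plan} plan is now active."
)

render_email(welcome, first_name: "Pat", plan: "Campus")
```

Because the templates live in a table, end users can edit the wording themselves; the application only has to guarantee the variables.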

Next, we farmed the sending of the messages off to a background job queue. Since we are using a Postgres database, we decided on the Que gem. The setup for this was super easy, and it can even be configured to work with your testing framework so as not to invalidate that wonderful suite you have going. Implementation was straightforward and we instantly saw improved performance.
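Que gives you this durability via Postgres, but the shape of the fix is visible with nothing beyond Ruby’s standard library: the controller pushes a small job description onto a queue and returns immediately, and a worker does the slow SMTP conversation off the request path. (This in-memory sketch is for illustration only; unlike Que, it loses jobs on restart.)

```ruby
require "thread"

# In-memory stand-in for a job queue such as Que: the request handler
# enqueues a small description of the email and returns immediately,
# while a worker thread performs the slow delivery out of band.
JOBS      = Queue.new
DELIVERED = []   # stands in for "messages actually sent over SMTP"

def deliver_email(job)
  DELIVERED << job   # the real version would talk to the mail server here
end

worker = Thread.new do
  while (job = JOBS.pop)   # pop blocks until a job (or the nil signal) arrives
    deliver_email(job)
  end
end

# "Controller" code: this line is all that happens during the request.
JOBS << { to: "student@nd.edu", template: "welcome" }

JOBS << nil    # tell the worker to stop once the queue drains
worker.join
```

The request/response cycle now pays only the cost of an enqueue, which is why the timeouts disappeared.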

I plan to use this paradigm for every application I have in the future, and I hope this helps another new RoR developer avoid learning this lesson the hard way.

Higher Education Needs the Public Cloud

Today is an exciting time to be a part of higher education IT! We are participating in the biggest transformation in our field since the client/server computing model displaced mainframes – the adoption of public cloud computing. The innovation, flexibility, cost effectiveness and security that the public cloud brings to our institutions will permanently change the way that we work. This new technology model will render the construction of on-campus data centers obsolete and transform academic and administrative computing for decades to come.

Why is this transformation happening? We’ve reached a tipping point where network speeds allow the consolidation of computing resources in a manner where large providers can achieve massive economies of scale. For the vast majority of workloads, there’s really no difference if computing power is located down the hall or across the country. Computing infrastructure is now a commodity for us to leverage rather than an art for us to master. We have the opportunity to add more value higher up in the stack by becoming integrators and innovators instead of hardware maintainers.

We Need the Cloud’s Disruptive Innovation

The reality is that cloud providers can simply innovate faster than we can. Our core mission is education and research – not information technology. The core mission of cloud providers is providing rock solid infrastructure services that make IT easier. How can we possibly compete with this laser-like focus? Better question – why would we even want to try? Instead of building data centers that compete with cloud providers, we can leverage the innovations they bring to the table and ensure that our laser-like focus is in the right place – on our students and faculty.

As an example, consider the automatic scaling capabilities of cloud providers. At Notre Dame, we leveraged Amazon Web Services’ autoscaling capability to transform the way we host the University website. We now provision exactly the number of servers required to support our site at any given time and deprovision servers when they are no longer needed. Could we have built this autoscaling capability in our own data center? Sure. The technology has been around for years, but we hadn’t done it because we were focused on other things. AWS’ engineering staff solved that for us by building the capability into their product.
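At its heart, any autoscaling policy is just arithmetic over a demand metric, bounded by a floor and a ceiling. A toy version of that calculation (the capacity figures, bounds, and method name below are invented for illustration, not Notre Dame’s actual policy) looks like:

```ruby
# Toy scaling calculation: size the web fleet to measured demand,
# clamped between a minimum and maximum fleet size. The numbers and
# the per_server_capacity parameter are illustrative assumptions.
def desired_servers(requests_per_sec, per_server_capacity:, min: 2, max: 20)
  needed = (requests_per_sec.to_f / per_server_capacity).ceil
  needed.clamp(min, max)
end

desired_servers(450, per_server_capacity: 100)  # a busy afternoon: 5 servers
desired_servers(10,  per_server_capacity: 100)  # overnight lull: floor of 2
```

The provider runs this loop continuously against real metrics, which is exactly the undifferentiated heavy lifting we were glad to stop doing ourselves.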

We Need the Cloud’s Unlimited Capacity and Flexibility

The massive scale of public cloud infrastructures makes them appear to have essentially unlimited capacity from our perspective. Other than some extremely limited high performance computing applications, it’s hard to imagine a workload coming out of our institutions that a major cloud provider couldn’t handle on a no-notice basis. We have the ability to quickly provision massive computing resources, use them for as long or short a duration as necessary, and then quickly deprovision them.

The beauty of doing this type of provisioning in the public cloud is that overprovisioning becomes a thing of the past. We no longer need to plan our capacity around uncertain future demand – we can simply add resources as they are needed.

We Need the Cloud’s Cost Effectiveness

Cloud solutions are cost effective for two reasons. First, they allow us to leverage the massive scale of cloud providers. Gartner estimates that the public cloud market in 2013 reached $131 billion in spend. The on-campus data centers of all higher education institutions combined constitute a tiny fraction of that size. When companies like Amazon, Google and Microsoft build at macro-enterprise scale, they are able to generate a profit while still passing on significant cost savings to customers. The history of IaaS price cuts by AWS, Google and others bears this out.

The second major cost benefit of the public cloud stems from the public cloud’s “pay as you go” model. Computing no longer requires major capital investments – it’s now available at per-hour, per-GB and per-action rates. If you provision a massive amount of computing power to perform serious number crunching for a few hours, you pay for those hours and no more. Overprovisioning is now the responsibility of the IaaS provider and the costs are shared across all customers.

We Need the Cloud’s Security and Resiliency

Security matters. While some may cite security as a reason not to move to the cloud, security is actually a reason to make the move. Cloud providers invest significant time and money in building highly secure environments. They are able to bring security resources to bear that we can only dream about having on our campuses. The Central Intelligence Agency recently recognized this and made a $600M investment in cloud computing. If IaaS security is good enough for the CIA, it should be good enough for us. That’s not to say that moving to the cloud is the silver bullet for security – we’ll still need a solid understanding of information security to implement our services properly in a cloud environment.

The cloud also simplifies the creation of resilient, highly available services. Most providers operate multiple data centers in geographically diverse regions and offer toolkits that help build solutions that leverage that geographic diversity. The Obama for America campaign discovered this when they picked up and moved their entire operation from the AWS eastern region to a west coast region in hours as Superstorm Sandy bore down on the eastern seaboard.

Higher education needs the cloud. The innovation, flexibility, cost effectiveness and security provided by public cloud solutions give us a tremendous head start on building tomorrow’s technology infrastructure for our campuses. Let’s enjoy the journey!

Originally published on LinkedIn November 12, 2014

Capistrano upload fails with no error

There are a few reasons the upload! method of capistrano might fail on you, but you usually see some kind of an error, such as a read-only file system error for the logged-in user. I just had a very mysterious failure with no error text whatsoever, even in verbose mode.

Turns out the issue was STDOUT output from my remote .bashrc file. I was setting some things in my remote host’s bashrc, and had it echoing debug messages (i.e. “started ssh agent” and, most cleverly, “hi from bashrc!”). Almost every other capistrano command had no problem with this, but upload! would just sit there and hang, never receiving the server response it expects.

I removed the echo statements, and everything went back to normal.  So… maybe don’t do that.  Hopefully this helps somebody, someday.
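If you would rather keep the debug messages for your own logins, the conventional bash idiom (not anything Capistrano-specific) is to bail out of ~/.bashrc early for non-interactive shells, so scp, sftp, and capistrano sessions never see any echoed output:

```shell
# Put this near the top of ~/.bashrc. $- contains "i" only for
# interactive shells; everything below the guard is skipped for
# non-interactive sessions (scp, sftp, capistrano).
case $- in
  *i*) ;;        # interactive shell: carry on
  *)   return ;; # non-interactive: stop sourcing before any echo
esac

echo "hi from bashrc!"   # now only printed for interactive logins
```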

Why Differing Perspectives are Good

Yesterday afternoon, we were pushing to get a Ruby on Rails app deployed into AWS.  One firewall rule away from success, the developer put in a change request that he thought would get us there and stepped out for a quick break.  Since we’re all sitting in the same room, some might think the formal change request overkill.

Cloud Central

We have an information security professional embedded in our Cloud First team who has final authority over security-related decisions.  Reviewing the request, he didn’t feel comfortable with the nature of the change.

When the developer returned, he and our InfoSec person discussed the nature of the change request.  It turns out what was being requested was valid, but there was a slight error in how it was written up.

Problem solved, firewall rule processed, and boom – our app was in business.

Having a team with broad perspectives is not only desirable, it is absolutely necessary for success.  And that is exactly the type of consideration that went into drawing members for our #NDCloudFirst mission:


Augmented Team



Today is the most exciting day of my modern professional life.  It’s the day we are announcing to the world our goal of migrating 80% of our IT service portfolio to the cloud over the next three years.

Yes, that’s right, 80% in 3 years.  What an opportunity!  What a challenge!  What a goal!  What a mission for a focused group of IT professionals!

The following infographic illustrates our preference in prioritizing how we will achieve this goal.

Opportunistically, we will select SaaS products first, then PaaS products, followed by IaaS, with solutions requiring on-premises infrastructure reserved for situations where there is a compelling need for geographic proximity.

The layer at which we, as an IT organization, can add value without disrupting university business processes is IaaS.  After extensive analysis, we have selected Amazon Web Services as our IaaS partner of choice, and are looking forward to a strong partnership as we embark upon this journey.

Already documented on this blog are success stories Notre Dame has enjoyed migrating www.nd.edu, the infrastructure for the Notre Dame mobile app, Conductor (and its ~400 departmental web sites), a copy of our authentication service, and server backups into AWS.  We have positioned ourselves to capitalize on what we have learned from these experiences and proceed with migrating the rest of the applications which are currently hosted on campus.

So incredibly, incredibly fired up about the challenge that is before us.

If you want to learn more, please head over to Cloud Central: http://oit.nd.edu/cloud-first/


Just because you can, doesn’t mean you should


One of the applications that is a shoo-in candidate for migration has a small-time usage profile.  We are talking 4 hits/day.  No big deal; it’s a business process consideration.

It needs to interface with enterprise data resident in our Banner database.  No worries there: the data this app needs access to is decoupled via web services.  Now let’s swing our attention to the app’s transactional data storage requirements.

First question – does it need any of the Oracle-specific feature set?  No.  So, let’s not use Oracle – no need to further bind ourselves in that area.  Postgres is a reasonable alternative.

OK, so, RDS?  Yes please – no need to start administering a Postgres stack when all we want to do is use it.

Multiple availability zones?  Great question.  Fantastic question!  Glad you asked.

Consider the usage profile of this app.  4 records per day. 4.  Can the recovery point/time objectives be met with snapshotting?  Absolutely.  Is that more cost-effective than running a multi-AZ configuration?  Yes.

Does it make sense for this application?


Thank you Amazon for providing a fantastic set of tools, and thank you to the #NDCloudFirst team for thinking through using those tools appropriately.

The Speed of Light

How fast is it really?  In the course I teach, students have the opportunity to interact with a database, taking their logical models, turning them into physical designs, and finally implementing them.

Up until this semester, I have made use of a database that is local to campus.  The ongoing management and maintenance of that environment is something which is of no particular interest to me – I just want to use the database.  Database-as-a-Service, as it were.  As in, Amazon Relational Database Service.

Lucky for us all, Amazon has a generous grant program for education.  After a very straightforward application process, I was all set to experiment.

To baseline, I executed a straightforward query against a small, local table.  Unsurprisingly, the response time was lightning-fast.



Using RDS, I went ahead and created an Oracle database, just like the one I have typically used on campus.  After setting up a VPC, subnet groups, and creating a database subnet group, I chose to create this instance in Amazon’s N. Virginia Eastern Region.  Firing off the test, we find that, yes, it takes time for light to travel between Notre Dame’s campus and northern Virginia:



Looks like it added about 30 milliseconds.  I can live with that.

Out of curiosity, how fast would it be to the west coast?  Say, Amazon’s Oregon Western Region?  Fortunately, it is a trivial exercise to find out.  I simply snapshotted the database and copied the snapshot from the eastern region to the west.  A simple restore and security group assignment later, and I could re-execute my test:



Looks like the time added was roughly double – 60 milliseconds.

Is that accurate?  According to Google Maps, it looks like yes indeed, Oregon is roughly twice as far away from Notre Dame as Virginia.  The speed of light doesn’t lie.
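A quick sanity check of those numbers: light in fiber travels at roughly two-thirds of c, about 200,000 km/s, so the theoretical minimum round trip is easy to compute. (The distances below are rough map estimates, not measured fiber paths.)

```ruby
# Back-of-the-envelope minimum round-trip time over fiber.
# Light in glass covers roughly 200,000 km/s, i.e. 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km)
  (2.0 * distance_km) / FIBER_KM_PER_MS   # out and back
end

min_round_trip_ms(950)    # ND to N. Virginia, ~950 km straight line: ~9.5 ms
min_round_trip_ms(2900)   # ND to Oregon, ~2900 km straight line: ~29 ms
```

Neither observed figure sits at the floor, of course; routing hops, switching, and the database exchange itself add the rest. But the roughly 2x ratio between the two regions matches the geography.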

So, what did I learn?  First, imagine for a moment what I just did.  I instantiated an Oracle database on the east coast, and then on the west coast.  From nothing!  No servers to order, no routers to buy, no disks to burn in, no gnomes to wire equipment together, no Oracle Universal Installer to walk through.  I still get a thrill every time I use Amazon services and think about what is actually happening.  I can already see myself when I’m 70, regaling people with stories about what it was like to actually see a data center.

OK, deep breath.

Second, is 30 milliseconds acceptable?  For my needs, absolutely.  My students can learn what they need to, and the 30 millisecond hit per interaction is not going to inhibit that process.  It’s certainly a reasonable price to pay, especially considering there is nothing to maintain.

What is the enterprise implication?  Is 30 milliseconds going to be a problem?  An obstacle that inhibits business processes?  We shall see.  For local databases and remote web/application servers, perhaps.  Perhaps not.

This is why we test, remembering that despite the remarkably amazing toolset AWS represents, we are still bound by the speed limit of light.

AWS Midwest Region, anyone?

RDS Subnet Groups

If you are having issues finding your VPC when creating an RDS instance, it is more than likely because you are missing an RDS subnet group. To remedy this problem, simply traverse to RDS > Subnet Groups and click Create DB Subnet Group. Make sure your subnet group spans all AZs.



Enthusiasm – it’s contagious

I just interviewed a guy today for a position as a Ruby on Rails developer. This would be his first software development position, having previously worked as a customer service rep for an insurance company. His background includes a degree in film, so not your typical coder back story. He was a confident guy, well-spoken and quite personable. While speaking with him I could feel how excited he was at the prospect of continuing to work with the platform. But it wasn’t just what he would be doing, it was also where he would be doing it. He complimented the campus (something you hear a lot about Notre Dame), was even happy about the color in the trees, and said he couldn’t wait to work on things that would influence change. To him, working in a higher education setting, with the ability to impact the students and make a positive change for the world as a whole, was quite invigorating.

I found myself coming out of that interview with a new energy for my position. I think too often we start to take for granted where we are in life, and forget to really appreciate what we are doing and the influence we can actually have in the world. I have a really cool career, I get to work with some really great people, and no matter how long the walk in to work is every day, the view is always incredible. So as we look to the future in our careers, always trying to get somewhere better, remember to take a moment and recall what made you so excited when you first started down the path you are on.