10 Things a Modern IT Professional Should Do


You work in IT. You’re in higher education. It’s 2014. Get a move on!

1. Back up your files.

When your machine crashes, you’ll get no sympathy. The whole world knows you should back up your computer. There’s no excuse. Go get CrashPlan or some other way to make sure your stuff isn’t lost. No, don’t go on to number two until you’ve finished this. It’s that important.

2. Use a password manager.

Most people only use a couple of passwords frequently enough to remember them. Go get PasswordSafe (pwSafe on the Mac) and start generating random passwords for your accounts. Don’t bother remembering them, just store ’em in your password manager and copy/paste when you need them. Secure this password manager with one hard password (the only one you have to remember).

3. Use a collaboration tool to share files rather than emailing them.

Email attachments fill up your mailbox and affect performance. Now think about how that might affect your recipients. Just use a tool like Box and send a share link. Much easier and you can control it later if you need to.

4. Learn some basic scripting, even if it’s just an Excel formula.

You don’t need to be a programmer to do some basic string manipulation or some basic conditionals (if-then). Excel or Google Spreadsheets are incredibly powerful tools in their own right.
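For instance, here's a hypothetical cleanup formula that works in both Excel and Google Sheets – it leaves blank cells alone, and otherwise trims whitespace and lowercases the text:

=IF(ISBLANK(A2), "", LOWER(TRIM(A2)))

String a few of those together and you've got the kind of data massaging that usually gets done painfully by hand.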

5. Find a reliable IT news source relevant to your area (news site, mailing list, blog, Twitter).

Make it part of your daily habit to check your RSS (try feedly.com) or Twitter feeds. Seek out and follow a couple of good news sites, tech journalists, or smart people who keep tabs on IT stuff. Let these people be your filter and don’t be in the dark about what’s going on in your industry.

6. Build relationships with people who can help you improve, especially from other departments or organizations.

We learn from each other – by listening, by teaching, or by doing. Join campus communities (e.g., Developer Meetups or Mobile Community of Practice), higher ed user groups, Educause constituent groups, industry consortiums, etc. If it doesn’t exist, create one.

7. Contribute back to the community – an open source project, present at a conference, write a blog post, etc.

You’re standing on the shoulders of giants, and with luck your successors may just stand on yours. We benefit from so many other generous contributions and we’re fortunate to work in higher education, where our counterparts are willing to swap stories and strategies. Be a part of the larger community.

8. Listen to your customers.

Work is happier when your customers are happier. What can you do that will make their lives better? How do you know without talking to them? Get out there and watch them use your software, have them brainstorm ideas, or just listen to them complain. Just knowing that they’re being heard will make your customers happier.

9. Figure out your smartphone.

There’s nothing sillier than an IT person who is computer illiterate, and that goes for your smartphone as well. It’s a magical supercomputer in your pocket. It probably cost you a bunch of money, and it does way more than phone, email, and Angry Birds. You can tie it in with your VoIP phone, access files via Box, edit Google Docs, submit expense reports, and perform other wizardry.

10. Break stuff.

Our world is full of settings screens, drop-down menus, config files, old hardware, and cheap hosting. Sadly, too many people are afraid of what might go wrong and they never discover the latest features, time-saving methods, or opportunities to innovate. Try out the nightly build (in dev, maybe). Ask vendors about loaning you some demo hardware. Challenge yourself on whether there’s another way to get it done. You’re in IT. Don’t be afraid to break something – it can be fixed. And who knows? You might just learn a thing or two.

Our Finest Hour


Last Friday (1/24/2014), our campus experienced two service disruptions. Both were service provider issues – one with Box, the other with Google. In both cases, the issues were resolved in a timely manner, and both the Box and Google status pages were kept up to date.

What was required from OIT to resolve these interruptions?  Communication and vendor management.  We did not have to burn hours and hours of people time digging in to determine the root cause of these issues.  We did not have to bring in pizza and soda, flip charts and markers, bagels and coffee, and more pizza and more soda to fuel ourselves to deal with this issue.  We did not work late and long into the night, identifying and resolving the root cause.

Why?  Because we have vendors we trust, employing more engineers than we have humans in all of OIT, singularly focused on their product.

I was part of the team that worked through our Exchange issues last fall.  I will be the first to tell you that the week we spent getting to operational stability was a phenomenal team effort.  We had an engineer from Microsoft and all of the considerable talent OIT could bring to bear to solve our issues.  And solve them we did.  Yes, we worked late into the night.  Yes, we fueled ourselves with grease and caffeine.  Yes, we used giant post-its and all the typical crisis management tools.

No one on the team got much sleep or spent the typical evenings with their families; everyone displayed remarkable devotion to their profession and to our University. Everyone on the team was elated when we solved a gnarly set of technical issues and reached operational stability. And no one on the team had to relive that experience this past Friday. They were able to focus on their core mission.

We have moved all the way up the stack of abstraction in the space offered by both Box and Google. It is something to celebrate, as it allows us to relentlessly focus on delivering value to the students, faculty, and staff at Our Lady’s University.

 

Google Apps Scripts, LDAP, and Wookiee Steak

I’ve been a Google Apps user ever since I could score an invite code to Gmail. Then came Google Calendar, Google Docs and Spreadsheets, and a whole slew of other stuff. But as I moved from coder to whatever-I-am-now, I stopped following along with the new stuff Google and others kept putting out there. I mean, I’m vaguely aware of Google App Engine and something called Google Apps Script, but I hadn’t spent any real time looking at them.

But like any good IT professional, I’m not afraid to do a little scripting here and there to accomplish a task. It’s fairly common to get a CSV export that needs a little massaging, or to find a perfect API to mash up with some other data.

The other day, I found myself working with some data in a Google Spreadsheet and thinking about one of my most common scripting tasks – appending data to existing data. For instance, I start with something like a list of users and want to know whether they are faculty, staff, or students.

netid      name          Affiliation
cgrundy1   Chas Grundy   ???
katie      Katie Rose    ???

Aside: Could I get this easily by searching EDS? Sure. Unfortunately, my data set might be thousands of records long, and while I could certainly ask our friends in identity management to get me the bulk data I want, they’re busy and I might need it right away. Or at least, I’m impatient and don’t want to wait. Oh, and I need something like this probably once or twice a week, so I’d just get on their nerves. Moving on…

So that’s when I began to explore Google Apps Script. It’s a JavaScript-based environment to interact with various Google Apps, perform custom operations, or access external APIs. This last part is what got my attention.

First, let’s think about how we might use a feature like this. Imagine the following function:

=getLDAPAttribute("cgrundy1","ndPrimaryAffiliation")

If I passed it the NetID and attribute I wanted, perhaps this function would be so kind as to go fetch it for me. Sounds cool to me.

Now the data I want to append lives in LDAP, and JavaScript (including Google Apps Script) can’t talk to LDAP directly. Google Apps Script can, however, interpret JSON (as well as SOAP or XML), so I needed a little LDAP web service. Let’s begin shaving the yak. Or we could shave a wookiee.

Google Apps Script and LDAP API flowchart

First, I cobbled together a quick PHP microapp and threw it into my Notre Dame web space. All it does is take two parameters, q and attribute, query LDAP, and return the requested attribute in JSON format.

Here’s an example:

index.php?q=cgrundy1&attribute=ndPrimaryAffiliation

And this returns:

{
  "title": "LDAP",
  "attributes": {
    "netid": "cgrundy1",
    "ndprimaryaffiliation": "Staff"
  }
}

View the full PHP script here

For a little extra convenience, the script allows q to be a NetID or email address. Inputs are sanitized to avoid LDAP injections, but there’s no throttling at the script level so for now it’s up to the LDAP server to enforce any protections against abuse.
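In case that link ever rots, here’s a minimal sketch of the microapp. The directory host, base DN, and anonymous bind are assumptions (the real script does a bit more, like accepting email addresses for q):

<?php
// Minimal sketch of the LDAP-to-JSON microapp.
// Host and base DN below are hypothetical placeholders.
$q = preg_replace('/[^a-zA-Z0-9@._-]/', '', $_GET['q']);   // crude input sanitizing
$attribute = strtolower(preg_replace('/[^a-zA-Z0-9]/', '', $_GET['attribute']));

$conn = ldap_connect('ldaps://directory.nd.edu');          // hypothetical host
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_bind($conn);                                          // anonymous bind

$result  = ldap_search($conn, 'ou=people,dc=nd,dc=edu',    // hypothetical base DN
                       '(uid=' . $q . ')', array($attribute));
$entries = ldap_get_entries($conn, $result);

// ldap_get_entries() lowercases attribute names, which is why the
// Apps Script side lowercases the attribute before looking up the key.
$value = ($entries['count'] > 0) ? $entries[0][$attribute][0] : null;

header('Content-Type: application/json');
echo json_encode(array(
  'title'      => 'LDAP',
  'attributes' => array(
    'netid'    => $q,
    $attribute => $value,
  ),
));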

Next, the Google Apps Script needs to actually make the request. Let’s create the custom function. In Spreadsheets, this is found under Tools > Script Editor.


function getLDAPAttribute(search, attribute) {
  // Lowercase both inputs; the web service returns lowercased attribute keys
  search = search.toLowerCase();
  attribute = attribute.toLowerCase();

  // Build the request to the PHP microapp (encoded, in case of odd input)
  var url = 'https://www3.nd.edu/~cgrundy1/gapps-ldap-test/?'
      + 'q=' + encodeURIComponent(search)
      + '&attribute=' + encodeURIComponent(attribute);

  // Fetch the response, parse it as JSON, and return the requested value
  var response = UrlFetchApp.fetch(url);
  var data = JSON.parse(response.getContentText());
  return data.attributes[attribute];
}

This accepts our parameters, passes them along to the PHP web service in a GET request, parses the response as JSON, and returns the attribute value.

Nota bene: Google enforces quotas on various operations including URL Fetch. The PHP script and the Google Apps Script function could be optimized, cache results, etc. I didn’t do those things. You are welcome to.
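If you wanted an easy win, caching is a natural one in Apps Script. Here’s a hypothetical wrapper (not part of my original script) using CacheService, so repeat lookups of the same NetID don’t burn URL Fetch quota:

function getLDAPAttributeCached(search, attribute) {
  // Script-wide cache, shared across executions of this project's scripts
  var cache = CacheService.getScriptCache();
  var key = search.toLowerCase() + '|' + attribute.toLowerCase();

  var cached = cache.get(key);
  if (cached !== null) {
    return cached; // cache hit: no UrlFetchApp call at all
  }

  var value = getLDAPAttribute(search, attribute);
  cache.put(key, String(value), 21600); // cache for 6 hours (the maximum)
  return value;
}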

Anyway, let’s put it all together and see how it works:

Screenshot of custom formula in action

Well, there you have it. A quick little PHP microapp, a simple custom JavaScript function, and now a bunch of cool things I can imagine doing with Google Apps. And just in case you only clicked through to this article because of the title:

<joke>I tried the Wookiee Steak, but it was a little chewy.</joke>

Tunneling Home


So I have an EC2 instance sitting in a subnet in a VPC on Amazon.  Thanks to Puppet, it’s got a Rails server, nginx, and an Oracle client.  But it’s got no one to talk to.

It’s time to build a VPN tunnel to campus.  Many, many thanks go to Bob Richman, Bob Winding, Jaime Preciado-Beas, and Vincent Melody for banding together to work out what I’m about to describe.

It turns out the AWS side of this configuration is not actually very difficult. Once traffic reaches us, there’s a lot more configuration to do!  Here’s a quick sketch:

VPN tunnel diagram (IPs and subnets obscured)

 

You can see the VPC, subnet, and instance on the right.  The rectangle represents the routing table attached to my subnet.  Addresses in the same subnet as the instance get routed back inside the subnet.  Everything else (0.0.0.0/0) goes to the internet gateway and out to the world.
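In route-table form, that starts out something like this (the CIDR and gateway ID are placeholders, since the real ones are obscured in the diagram):

Destination      Target
10.0.0.0/24      local           (the VPC's own range stays inside)
0.0.0.0/0        igw-xxxxxxxx    (everything else: out the internet gateway)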

Configuring the Tunnel

To actually communicate with resources inside the Notre Dame firewall, such as our Banner databases, we need a few new resources.  These objects are pretty simple to create in software on AWS:

  1. the virtual private gateway. This is basically a router that sits on the AWS side of the VPN tunnel we’ll create.  You attach it to the VPC, and then you’re done with that object.
  2. the customer gateway.  When you create this object, you give it the public IP of a router on your network.  We’re using one that resides on the third floor of this building.  You need to configure this router to function as the VPN endpoint.  Fortunately, we have people like Bob Richman, who just know how to do that sort of thing.  If we didn’t, AWS provides a “download configuration” button that gives you a config file to apply to the router.  You can specify the manufacturer, type, and firmware level of the router so that it should be plug-and-play.
  3. the VPN connection. This object bridges the two resources named above. (All three steps are sketched in code after this list.)
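If you’d rather script these steps than click through the console, a rough sketch using the AWS SDK for Node.js might look like this – every ID and the router IP below are placeholders:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

// 1. Virtual private gateway: create it, then attach it to the VPC
ec2.createVpnGateway({ Type: 'ipsec.1' }, function(err, data) {
  if (err) throw err;
  ec2.attachVpnGateway({
    VpnGatewayId: data.VpnGateway.VpnGatewayId,
    VpcId: 'vpc-xxxxxxxx' // placeholder
  }, console.log);
});

// 2. Customer gateway: points at the campus router's public IP
ec2.createCustomerGateway({
  Type: 'ipsec.1',
  PublicIp: '203.0.113.10', // placeholder for the campus router
  BgpAsn: 65000             // required by the API, even with static routing
}, console.log);

// 3. VPN connection: bridges the two objects above
ec2.createVpnConnection({
  Type: 'ipsec.1',
  VpnGatewayId: 'vgw-xxxxxxxx',      // placeholder
  CustomerGatewayId: 'cgw-xxxxxxxx', // placeholder
  Options: { StaticRoutesOnly: true }
}, console.log);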

Setting up Routing

Now we want certain traffic to flow over this connection to ND and back again.  Here’s where I start to pretend to know things about networking.

  1. AWS routing table.  We need to set routes on the subnet to which our instance belongs, forwarding traffic intended for Notre Dame resources to the Virtual Private Gateway described above.  No problem.  We define the IP/subnet ranges we want (example: the range for our Banner database service listeners), and route them to the VPG.
  2. VPN connection static routes.  As I mentioned, this resource bridges the VPG and the customer gateway on our side.  So it needs the same ranges configured as static routes (sketched below).
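In SDK terms, those two routing steps might look something like this (same client as the sketch above; the CIDR is a placeholder for the ranges we actually routed):

// 1. Subnet route table: send ND-bound traffic to the virtual private gateway
ec2.createRoute({
  RouteTableId: 'rtb-xxxxxxxx',         // the subnet's route table
  DestinationCidrBlock: '10.32.0.0/16', // placeholder, e.g. Banner listener range
  GatewayId: 'vgw-xxxxxxxx'
}, console.log);

// 2. VPN connection: mirror the same range as a static route
ec2.createVpnConnectionRoute({
  VpnConnectionId: 'vpn-xxxxxxxx',
  DestinationCidrBlock: '10.32.0.0/16'
}, console.log);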

At this point, we are in business!  Kind of.  I can ping my EC2 instance from campus, but I can’t talk to Oracle from EC2.

Fun Times with DNS

Getting to our development database from anywhere requires a bit of hoop-jumping.  For an end user like me, running a SQL client on my laptop, it typically goes like this:

  1. I use LDAP to connect to an OID (Oracle Internet Directory) server, specifying the service name I want.  My ldap.ora file (sketched after this list) contains four different domain names: two in front of the firewall and two behind.  It fails over until it can reach one.  So it’s not super-intelligent, but no matter where I call from, one of them should work.
  2. The OID server responds with the domain name of a RAC cluster machine that can respond to my request.
  3. My request proceeds to the RAC cluster, which responds with the domain of a particular RAC node that can service the actual SQL query.
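For reference, a hypothetical ldap.ora in that shape – the hostnames are placeholders, but the four-server failover list is the idea:

# ldap.ora: four directory servers, tried in order until one answers
DIRECTORY_SERVERS = (oid-outside1.nd.edu:389, oid-outside2.nd.edu:389, oid-inside1.nd.edu:389, oid-inside2.nd.edu:389)
DEFAULT_ADMIN_CONTEXT = "dc=nd,dc=edu"
DIRECTORY_SERVER_TYPE = OID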

With a little help from Infosec setting up ND firewall rules, we can connect to LDAP, we can connect to the RAC cluster, and we can even connect to the RAC node.  Via telnet, using IP addresses.  Notice the reliance on DNS above?  This got us into a bit of a mess.

Essentially, it was necessary to set up special rules to allow my AWS VPN traffic to use the ND-side DNS servers.  I needed to edit my EC2 instance’s resolv.conf to use them.  We also ran into an issue where the RAC node resolved to a public IP instead of a private one.  This was apparently a bit of a hack during the original RAC setup, and firewall rules have been established to treat it like a private IP.  So again, special rules needed to be established to let me reach that IP over the VPN tunnel.
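The resolv.conf change itself is small – something like this, with placeholder addresses standing in for the ND-side DNS servers now reachable over the tunnel:

# /etc/resolv.conf on the EC2 instance
# (placeholder addresses for the ND-side DNS servers)
search nd.edu
nameserver 10.32.0.53
nameserver 10.32.1.53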

Success!

After these rules were in place and routes added to the VPN to use them, voilà! I am now able to make a Banner query from AWS.  This is a fantastic step forward for app development in the cloud.  It’s only one piece of the puzzle, but an important one, as it is inevitable that we will want to deploy services to AWS that talk to ND resources of one kind or another.

Our networking, infosec, and database guys will probably tell you that some cleanup ought to be done on our side re: network architecture.  There are some “interesting” exceptions in the way we have laid out these particular services and their attendant firewall configuration.  The special rules we created to get this working are not really scalable.  However, these challenges are surmountable, and worth examining as we move forward.

In the meantime, we have made a valuable proof-of-concept for cloud application development, and opened up opportunities for some things I have wanted to do, like measure network latency between AWS and ND.  Perhaps a topic for a future blog post!

Onward!