400 Catholic pamphlets digitized

Posted on November 11, 2011 in Uncategorized by Eric Lease Morgan

Team Catholic Pamphlets has finished digitizing, processing, and making close to 400 pieces of material available in Aleph as well as Primo — http://bit.ly/sw1JH8

More specifically, we had a set of Catholic pamphlets located in Special Collections converted into TIFF and PDF files. We then had OCR (optical character recognition) done against them, and the result was saved on a few local computers — parts of our repository. We then copied and enhanced the existing MARC records describing the pamphlets, and we ingested them into Aleph. From there they flowed to Primo.

When search results are returned for Catholic Pamphlet items, the reader is given the opportunity to download the PDF version and/or apply text mining services against them in order to enhance the process of understanding. For example, here are links to a specific catalog record, the pamphlet’s PDF version, and text mining interface:

Our next step is two-fold. First, we will document our experience and what we learned. Second, we will share this documentation with the wider audience. We hope to complete these last two tasks before we go home for the Christmas Holiday. Wish us luck.

Field trip to the Mansueto Library at the University of Chicago

Posted on November 2, 2011 in Uncategorized by Eric Lease Morgan

On Wednesday, October 19, 2011 the Hesburgh Libraries Professional Development Committee organized a field trip to the Mansueto Library at the University of Chicago. This posting documents some of the things I saw, heard, and learned. If I had one take-away, it was the fact that the initiatives of the libraries at the University of Chicago are driven by clearly articulated needs/desires of their university faculty.


Mansueto Library, the movie!

The adventure began early in the morning as a bunch of us from the Hesburgh Libraries (Collette Mak, David Sullivan, Julie Arnott, Kenneth Kinslow, Mandy Havert, Marsha Stevenson, Rick Johnson, and myself) boarded the South Shore train bound for Chicago. Getting off at 57th Street, we walked a few short blocks to the University, and arrived at 10:45. The process was painless, not to mention easy and inexpensive.

David Larsen (our host) greeted us at the door, gave us the opportunity to put our things down, and immediately introduced us to David Borycz, who gave us a tour of the Mansueto Library. If my memory serves me correctly, a need for an additional university library was articulated about ten years ago. Plans were drafted and money allocated. As time went on, the projected cost grew to almost double the original estimate. That was when Mr. & Mrs. Mansueto stepped up to the plate and offered the balance. With its eyeball shape and dome made of uniquely shaped glass parts, the Library looks like a cross between the Louvre Pyramid (Paris) and the Hemisfèric in Valencia (Spain). The library itself serves three functions: 1) reading room, 2) book storage, and 3) combination digitization & conservation lab. For such a beautiful and interesting space, I was surprised the latter function was included in the mix; it occupies almost half of the above-ground space.

The reading room was certainly an inviting space. Long tables complete with lights. Quiet. Peaceful. Inviting. Contemplative.

The back half of the ground level was occupied by both a digitization and a conservation lab. Lots of scanners: big, small, and huge. Their scanning space is not a public space. There were no students, staff, or faculty digitizing things there. Instead, their scanning lab began as a preservation service, grew from there, and now digitizes things after they have been vetted by a committee that prioritizes projects. The conservation lab was complete with large tables, de-acidification baths, and hydration chambers. Spacious. Well-equipped. Located in a wonderful place.

Borycz then took us down to see the storage area. Five stories deep, this space is similar to the storage space at Valparaiso University. Each book is assigned a unique identifier. Books are sorted by size and put into large metal bins (also assigned unique numbers). The identifiers are then saved in a database denoting each item’s location in the cavernous space below. One of the three elevators/lifts then transports the big metal boxes to their permanent locations. The whole space will hold about 3.5 million volumes (the entire collection of the Hesburgh Libraries), but only about 900,000 volumes are currently stored there. How did they decide what would go to the storage area? Things that need not be browsed (like runs of bound serial volumes), things that are well-indexed, things that have been digitized, and “elephant” folios.

When we returned from lunch our respective libraries did bits of show & tell. I talked about the Hesburgh Libraries’ efforts to digitize Catholic pamphlets and provide text mining interfaces against the result. Rick Johnson demonstrated the state of the Seaside Project. We were then shown the process the University of Chicago librarians were using to evaluate the EBSCOhost “discovery service”. An interface was implemented, but the library is not sure exactly what content is being indexed, and the indexed items’ metadata seems to be applied inconsistently. Moreover, it is difficult (if not impossible) to customize the way search results are ranked and prioritized. All is not lost. The index does include the totality of JSTOR, which is seen as a plus. Librarians have also discovered that the index does meet the needs of many library patrons. The library staff have also enhanced other library interfaces to point patrons to the EBSCO service if they browse past two or three pages of search results. When show & tell was finished we broke into smaller groups for specific discussions, and I visited the folks in the digitization unit. We then congregated in the lobby, made our way back to the train, and returned to South Bend by 7:30 in the evening.

The field trip was an unqualified success. It was fun, easy, educational, team-building, inexpensive, collegial, and enlightening. Throughout the experience we heard over and over again how the libraries took their direction for new initiatives from the University of Chicago faculty. These faculty then advocated for the library, priorities were set, and goals were fulfilled. The Hesburgh Libraries at the University of Notre Dame is geographically isolated. In my opinion we must make more concerted efforts both to visit other libraries and to bring other librarians to Notre Dame. Such experiences enrich us all.

Scholarly publishing presentations

Posted on November 1, 2011 in Uncategorized by Eric Lease Morgan

As a part of Open Access Week, a number of us (Cheri Smith, Collette Mak, Parker Ladwig, and myself) organized a set of presentations on the topic of scholarly publishing with the goal of increasing awareness of the issues across the Hesburgh Libraries. This posting outlines the event which took place on Thursday, October 27, 2011.

The first presentation was given by Kasturi Halder (Julius Nieuwland Professor of Biological Sciences and Founding Director of the Center for Rare and Neglected Diseases) who described her experience working with the Public Library of Science (PLoS). Specifically, Halder is the editor-in-chief of PLoS Pathogens with a total editorial staff of close to 140 persons. The journal receives about 200 submissions per month, and her efforts require approximately one hour of time per day. She describes the journal as if it were a community, and she says one of the biggest problems they have right now is internationalization. Halder was a strong advocate for open access publishing. “It is important to make the content available because the research is useful all over the world… When the content is free it can be used in any number of additional ways including text mining and course packs… Besides, the research is government funded and ought to be given back to the public… Patients should have access to articles.” Halder lauded PLoS One, a journal which accepts anything as long as it has been peer-reviewed, and she cited an article co-written by as many as sixty-four students here at Notre Dame as an example. Finally, Halder advocated article-level impact as opposed to journal-level impact as a measure of success.

Anthony Holter (Assistant Professional Specialist in the Mary Ann Remick Leadership Program, Institute for Educational Initiatives) outlined how Catholic Education has migrated from a more traditional scholarly publication to something that stretches the definition of a journal. Started in 1997 as a print journal, Catholic Education was sponsored and supported by four institutions of higher education, each paying an annual fee. The purpose of the journal was (and still is) to “promote and disseminate scholarship about the purposes, practices, and issues in Catholic education at all levels.” Over time the number of sponsors grew, and eventually two problems surfaced. First, they realized that libraries were paying twice for the content: once for the membership fee and again for a subscription. Second, many practitioners appreciated the journal when they were in school, but once they graduated they no longer had access to it. What to do? The solution was to go open access. The journal is now hosted at Boston College. In this new venue Holter has more access to usage statistics than he has ever had before, making it easier for him to track trends. For example, he saw many searches on topics of leadership, and consequently, he anticipates a special issue on leadership in the near future. Finally, Holter also sees the journal akin to a community, and the editorial board plans to exploit social networks to a greater degree in an effort to make the community more interactive. “We are trying to create a rich tapestry of a journal.”

Finally, Peter Cholak (Professor of Mathematics, College of Science) put words to the characteristics of useful scholarly journals and used the Notre Dame Journal of Formal Logic as an example. Cholak looks to journals to add value to scholarly research. He does not want to pay any sort of page or image charges (which are sometimes levied by open access publications). Cholak looks for author-friendly copyright agreements from publishers. This is the case because his community is expected (more or less) to submit their soon-to-be-published articles to a repository called MathSciNet. He uses MathSciNet as both a dissemination and an access tool. A few years ago the Notre Dame Journal of Formal Logic needed a new home, and Cholak visited many people across the Notre Dame campus looking for ways to make it sustainable. (I remember him coming to the libraries, for example.) He found little, if any, support. Sustainability is a major issue. “Who is going to pay? Creation, peer-review, and dissemination all require time and money.” For the time being, Project Euclid fits the bill.

The presentations were well-received by the audience of about twenty people. Most were from the Libraries, but others were from across the University. It was interesting to compare & contrast the disciplines. One was theoretical. Another was empirical. The third was both academic and practical at the same time. There was lively discussion after the formal presentations. Such was the goal. I sincerely believe each of the presenters has more in common with the others than differences when it comes to scholarly communication. At the same time they represented a wide spectrum of publishing models. This spectrum is the result of the current economic and technological environment, and the challenge is to see the forest for the trees. The challenge for libraries is to understand the wider perspectives and implement solutions satisfying the needs of most people given limited amounts of resources. In few places is this more acute than in the realm of scholarly communication.

Tablet-based “reading”

Posted on October 15, 2011 in Uncategorized by Eric Lease Morgan

A number of us got together today, and we had a nice time doing show & tell as well as discussing “tablet-based ‘reading’”. We included:

  • Carole Pilkinton
  • Charles Vardeman
  • Elliott Visconsi
  • Eric Lease Morgan
  • Jean McManus
  • Laura Fuderer
  • Markus Krusche
  • Sean O’Brien

Elliott demonstrated iPad Shakespeare while Charles and Markus filled in the gaps when it came to the technology. Sean and I did the same thing when it came to the Catholic Youth Literature Project. Some points during the discussion included but were not limited to:

  • the two projects complement each other in their approaches
  • the availability of usable texts is what makes such projects difficult
  • evaluating the effectiveness of these tools is challenging
  • such applications require significant resources to create
  • these types of application demonstrate a large degree of potential

Fun in academia and the digital humanities.

Big Tent Digital Humanities Meeting

Posted on October 5, 2011 in Uncategorized by Eric Lease Morgan

Well, it wasn’t really a “Big Tent Digital Humanities Meeting”, but more like a grilled cheese lunch. No matter what it was, a number of “digital humanists” from across campus got together, met a few new faces, and shared our experiences. We are building community.

Catholic Pamphlets and practice workflow

Posted on September 27, 2011 in Uncategorized by Eric Lease Morgan

The Catholic Pamphlets Project has passed its first milestone: practicing its workflow, which included digitizing thirty-ish pamphlets, making them accessible in the Libraries’ catalog and “discovery system”, and implementing a text mining interface. This blog posting describes this success in greater detail.

For the past four months or so a growing number of us have been working on a thing affectionately called the Catholic Pamphlets Project. To one degree or another, these people have included:

Aaron Bales •  Adam Heet •  Denise Massa •  Eileen Laskowski •  Jean McManus •  Julie Arnott •  Lisa Stienbarger •  Lou Jordan •  Mark Dehmlow •  Mary McKeown •  Natalia Lyandres •  Pat Lawton •  Rejesh Balekai •  Rick Johnson •  Robert Fox •  Sherri Jones

Our long-term goal is to digitize the set of 5,000 locally held Catholic pamphlets, save them in the library’s repository, update the catalog and “discovery system” (Primo) to include links to digital versions of the content, and provide rudimentary text mining services against the lot. The short-term goal is/was to apply these processes to 30 of the 5,000 pamphlets. And I am happy to say that as of Wednesday (September 21) we completed our short-term goal.

catalog display

The Hesburgh Libraries owns approximately 5,000 Catholic pamphlets — a set of physically small (rather than large) publications dating from the early 1800s to the present day. All of these items are located in the Libraries’ Special Collections Department, and all of them have been individually cataloged.

As a part of a university (President’s Circle) grant, we endeavored to scan these documents, convert them into PDF files, save them to our institutional repository, enhance their bibliographic records, make them accessible through our catalog and “discovery system”, and provide text mining services against them. To date we have digitized just less than 400 pamphlets. Each page of each pamphlet has been scanned and saved as a TIFF file. The TIFF files were concatenated, converted into PDF files, and OCR’ed. The sum total of disk space consumed by this content is close to 92GB.
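
The conversion and OCR tools themselves are not named here, but the heart of such a pipeline, sketched in Perl and assuming the libtiff tiffcp utility plus the Tesseract OCR engine (hypothetical choices, not necessarily the ones we used), might look something like this:

  #!/usr/bin/perl
  # a sketch: combine a directory of per-page TIFF files into a single
  # multi-page TIFF, then OCR it and write a searchable PDF
  # assumes the tiffcp and tesseract programs are installed
  use strict;
  use warnings;

  my $directory = shift || 'pamphlet-001';        # hypothetical directory of page scans
  my @pages     = sort glob "$directory/*.tif";
  die "No TIFF files found in $directory\n" unless @pages;

  # concatenate the page images into one multi-page TIFF
  system( 'tiffcp', @pages, "$directory.tif" ) == 0 or die "tiffcp failed: $?";

  # OCR the multi-page TIFF and write $directory.pdf, complete with a text layer
  system( 'tesseract', "$directory.tif", $directory, 'pdf' ) == 0 or die "tesseract failed: $?";

  print "Created $directory.pdf\n";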

detail display

In order to practice the workflow, we selected about 30 of these pamphlets and enhanced their bibliographic records to denote their digital nature. These enhancements included URLs pointing to PDF versions of the pamphlets as well as URLs pointing to the text mining interfaces. When the enhancements were done we added the records to the catalog. Once there they “flowed” to the “discovery system” (Primo). You can see these records from the following URL — http://bit.ly/qcnGNB. At the same time we extracted the plain text from the PDFs and made them accessible via a text mining interface allowing the reader to see what words/phrases are most commonly used in individual pamphlets. The text mining interface also includes a concordance — http://concordance.library.nd.edu/app/. These latter services are implemented as a means of demonstrating how library catalogs can evolve from inventory lists to tools for use & understanding.
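
As an illustration of this sort of record enhancement, the following sketch uses the Perl MARC::Batch and MARC::Field modules to add 856 (electronic location and access) fields to a file of MARC records. The file names, indicators, and URL patterns are hypothetical, not our production values:

  #!/usr/bin/perl
  # a sketch: add 856 fields pointing to a pamphlet's PDF and its text mining
  # interface, then write the enhanced records back out as MARC
  use strict;
  use warnings;
  use MARC::Batch;
  use MARC::Field;

  my $batch = MARC::Batch->new( 'USMARC', 'pamphlets.mrc' );     # hypothetical input file
  open my $out, '>', 'pamphlets-enhanced.mrc' or die $!;

  while ( my $record = $batch->next ) {

      # use the record's control number as a key into hypothetical URLs
      my $f001 = $record->field( '001' );
      my $id   = $f001 ? $f001->data : 'unknown';

      $record->append_fields(
          MARC::Field->new( '856', '4', '0', u => "http://repository.example.org/pdfs/$id.pdf", z => 'Download the PDF version' ),
          MARC::Field->new( '856', '4', '1', u => "http://textmining.example.org/?id=$id", z => 'Apply text mining services' ),
      );

      print {$out} $record->as_usmarc;
  }

  close $out;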

most frequently used words

While the practice run may seem trivial, it required about three months of time. Between vacations, conferences, other priorities, and minor glitches, the process took more time than originally planned. The biggest glitch was with Internet Explorer. We saved our PDF files in Fedora. Easy. Each PDF file had a URL coming from Fedora which we put into the cataloging records. But alas, Internet Explorer was not able to process the Fedora URLs because: 1) Fedora was not pointing to files but to datastreams, and/or 2) Fedora was not including a Content-Disposition HTTP header specifying a file name (and thus a file name extension). No other browsers we tested had these limitations. Consequently we (Rob Fox) wrote a bit of middleware taking a URL as input, getting the content from Fedora, and passing it back to the browser. Problem solved. This was a hack for sure. “Thank you, Rob!”
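
Rob’s actual middleware is not reproduced here, but a minimal sketch of such a proxy, written as a Perl CGI script that takes a hypothetical url parameter, fetches the datastream, and returns it with a Content-Disposition header, might look like this:

  #!/usr/bin/perl
  # a sketch: fetch a PDF datastream from a given URL and return it to the
  # browser with an explicit Content-Disposition header and file name
  use strict;
  use warnings;
  use CGI;
  use LWP::UserAgent;

  my $cgi = CGI->new;
  my $url = $cgi->param( 'url' ) or die 'No url parameter given';

  # derive a file name from the tail end of the URL
  my ( $name ) = $url =~ m{([^/]+)/?$};
  $name ||= 'pamphlet';

  # get the content from Fedora (or wherever the URL points)
  my $response = LWP::UserAgent->new->get( $url );
  die "Unable to retrieve $url" unless $response->is_success;

  # pass it back to the browser, naming the file explicitly
  binmode STDOUT;
  print $cgi->header(
      -type                => 'application/pdf',
      -content_disposition => qq(inline; filename="$name.pdf"),
  );
  print $response->content;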

concordance display

We presently have no plans (resources) to digitize the balance of the pamphlets, but it is my personal hope we process (catalog, store, and make accessible via text mining) the remaining 325 pamphlets before Christmas. Wish us luck.

Catholic Youth Literature Project update

Posted on August 31, 2011 in Uncategorized by Eric Lease Morgan

This is a tiny Catholic Youth Literature Project update.

Using a Perl module called Lingua::Fathom, I calculated the size (in words) and readability scores of all the documents in the Project. I then updated the underlying MyLibrary database with these values and wrote them back out to the “catalog”. Additionally, I spent time implementing the concordance. The interface is not very iPad-like right now, but that will come. The following (tiny) screen shots illustrate the fruits of my labors.
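
For the record, the calculation itself is only a few lines of Perl. Here is a minimal sketch assuming the CPAN module Lingua::EN::Fathom (a closely related module for this sort of analysis) and a hypothetical directory of plain text files:

  #!/usr/bin/perl
  # a sketch: calculate the size (in words) and a few readability scores
  # for each plain text file in a directory using Lingua::EN::Fathom
  use strict;
  use warnings;
  use Lingua::EN::Fathom;

  my $fathom = Lingua::EN::Fathom->new;

  for my $file ( sort glob 'texts/*.txt' ) {     # hypothetical location of the texts

      $fathom->analyse_file( $file );

      printf "%s\twords: %d\tfog: %.1f\tflesch: %.1f\tkincaid: %.1f\n",
          $file,
          $fathom->num_words,
          $fathom->fog,        # Gunning fog index
          $fathom->flesch,     # Flesch reading ease
          $fathom->kincaid;    # Flesch-Kincaid grade level
  }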

Give it a whirl and tell me what you think, but remember, it is designed for iPad-like devices.

Catholic Youth Literature Project: A Beginning

Posted on August 27, 2011 in Uncategorized by Eric Lease Morgan

This posting outlines some of the beginnings behind the Catholic Youth Literature Project.

The Catholic Youth Literature Project is about digitizing, teaching, and learning from public domain literature from the 1800’s intended for Catholic children. The idea is to bring together some of this literature, make it easily available for downloading and reading, and enable learners to “read” texts in new & different ways. I am working with Jean McManus, Pat Lawton, and Sean O’Brien on this Project. My specific tasks are to:

  • assemble a corpus of documents, in this case about 50 PDF files of books written for Catholic children from the 1800’s
  • “catalog” each item in the corpus and describe it using author names, titles, size (measured in words), readability (measured by grade level, etc.), statistically significant key words, names & places programmatically extracted from the texts, and dates
  • enable learners to download the books and read them in the traditional manner
  • provide “services against the texts”, where these services include things such as but not limited to: list the most frequently used words or phrases in the book, list all the words starting with a given letter, chart & graph where those words and phrases exist in the text, employ a concordance against the texts so the reader can see how the words are used in context, list all the names & places from the text and allow the reader to look them up in Wikipedia as well as plot them on a world map, programmatically summarize the book, extract all the date-related values from the book and plot the result on a timeline, tabulate the parts-of-speech (nouns, verbs, adjectives, etc.) in a document and graph the result, and provide the means for centrally discussing the content of the books with fellow learners
  • finally, provide all of these services on an iPad

Written (and spoken) language follows sets of loosely defined rules. If this were not the case, then none of us would be able to understand one another. If I have digital versions of books, I can use a computer to extract and tabulate the words/phrases they contain; once that is done, I can look for patterns or anomalies. For example, I might use these tools to see how Thoreau uses the word “woodchuck” in Walden. When I do, I see that he doesn’t like woodchucks because they eat his beans. In addition, I can see how Thoreau used the word “woodchuck” in a different book and literally see how he used it differently. In the second book he discusses woodchucks in relation to other small animals. A reader could learn these things through the traditional reading process, but doing so is laborious. These tools will enable the reader to do such things across many books at the same time.
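
To make the idea concrete, here is a tiny sketch of both services in Perl: tabulate word frequencies, and then display a rudimentary keyword-in-context (concordance) listing for a single word. The file name is hypothetical, and a real interface would do much more:

  #!/usr/bin/perl
  # a sketch: count word frequencies and print a simple keyword-in-context
  # (concordance) listing for one word from a plain text file
  use strict;
  use warnings;

  my ( $file, $keyword ) = ( 'walden.txt', 'woodchuck' );    # hypothetical input

  # slurp the text and normalize it
  open my $fh, '<', $file or die "Can't open $file: $!";
  my $text = lc do { local $/; <$fh> };
  close $fh;

  # tabulate word frequencies and list the ten most frequently used words
  my %frequency;
  $frequency{ $_ }++ for $text =~ /([a-z']+)/g;
  my @words = sort { $frequency{ $b } <=> $frequency{ $a } } keys %frequency;
  print "$_\t$frequency{$_}\n" for @words[ 0 .. 9 ];

  # display each occurrence of the keyword with forty characters of context
  while ( $text =~ /(.{0,40})\b\Q$keyword\E\b(.{0,40})/gs ) {
      my ( $before, $after ) = ( $1, $2 );
      s/\s+/ /g for ( $before, $after );
      print "...$before [$keyword] $after...\n";
  }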

In the Spring O’Brien is teaching a class on children and Catholicism. He will be using my tool as a part of his class.

I do not advocate this tool as a replacement for traditional “close” reading. This is a supplement. It is an addition. These tools are analogous to tables-of-contents and back-of-the-book indexes. Just because a person reads those things does not mean they understand the book. Similarly, just because they use my tools does not mean they know what the book contains.

I have created the simplest of “catalogs” so far, and here is a screen dump:

Catholic Youth Literature Project catalog

You can also try the “catalog” for yourself, but remember, the interface is designed for iPads (and other Webkit-based browsers). Your mileage may vary.

Wish us luck, and ‘more later.

Pot-Luck Picnic and Mini-Disc Golf Tournament

Posted on August 22, 2011 in Uncategorized by Eric Lease Morgan

The 5th Annual Hesburgh Libraries Pot-Luck Picnic and Mini-Disc Golf Tournament was an unqualified success. Around seventy-five people met to share a meal and themselves. I believe this was the biggest year for the tournament, with about a dozen teams represented. Team Hanstra took away the trophy after a sudden death playoff against Team Procurement. Both teams had scores of 20. “Congrats, Team Hanstra! See you next year.”

Disc Golfers

From the picnic’s opening words:

Libraries are not about collections. Libraries are not about public service. Libraries are not about buildings and spaces. Libraries are not about books, journals, licensed content, nor computers. Instead, libraries are about what happens when all of these things are brought together into a coherent whole. None of these things are more important than the others. None of them come before the others. They are all equally important. They all have more things in common than differences.

That is what this picnic is all about. It is about sharing time together and appreciating our similarities. Only through working together as a whole will we be able to accomplish our goal — providing excellent library services to this, the great University of Notre Dame.

Gotta go. Gotta throw.

Code4Lib Midwest: A Travelogue

Posted on August 13, 2011 in Uncategorized by Eric Lease Morgan

This is a travelogue documenting my experiences at the second Code4Lib Midwest Meeting (July 28 & 29, 2011) at the University of Illinois, Chicago.

Attendees of Code4Lib Midwest

Day #1

The meeting began with a presentation by Peter Schlumpf (Avanti Library Systems). In it he described and demonstrated Avanti Nova, an application used to create and maintain semantic maps. To do so, a person first creates objects denoted by strings of characters. This being Library Land, these strings of characters can be anything from books to patrons, from authors to titles, from URLs to call numbers. Next a person creates links (relationships) between objects. These links are seemingly simple: one object points to another, vice versa, or the two link to each other. The result of these two processes forms what Schlumpf called a relational matrix. Once the relational matrix is formed, queries can be applied against it and reports can be created. Towards the end of the presentation Schlumpf demonstrated how Avanti Nova could be used to implement a library catalog as well as represent the content of a MARC record.

Robert Sandusky (University of Illinois, Chicago) shared with the audience information about a thing called the DataOne Toolkit. DataOne is a federation of data repositories including nodes such as Dryad, MNs, and UC3 Merritt. The Toolkit provides an application programmer interface supporting three levels of federation compliance: read, write, and replicate. I was particularly interested in DataOne’s data life cycle: collect, assure, describe, deposit, preserve, discover, integrate, analyze, collect. I also liked the set of adjectives and processes used to describe the vision of DataOne: adaptive, cognitive, community building, data sharing, discovery & access, education & training, inclusive, informed, integration and synthesis, resilient, scalable, and usable. Sandusky encouraged members of the audience (and libraries in general) to become members of DataOne as well as community-based repositories. He and DataOne see libraries playing a significant role when it comes to replication of research data.

Somewhere in here I, Eric Lease Morgan (University of Notre Dame), spoke to the goals of the Digital Public Library of America (DPLA) as well as outlined my particular DPLA Beta-Sprint Proposal. In short, I advocated the library community move beyond the process of find & get and towards the process of use & understanding.

Ken Irwin (Wittenberg University) gave a short & sweet lightning talk about “hacking” trivial projects. Using an example from his workplace — an application used to suggest restaurants — he described how he polished his JQuery skills and enhanced his graphic design skills. In short he said, “There is value in working on things that are not necessarily library-related… By doing so there is less pressure to do it ‘correctly’.” I thought these were words of wisdom, and they point to the need for play and experimentation.

Rick Johnson (University of Notre Dame) described how he and his group are working in an environment where the demand is greater than the supply. Questions he asked of the group, in an effort to create discussion, included: how do we move from a development shop to a production shop, how do we deal with a backlog of projects, to what degree are we expected to address library problems versus university problems, and to what extent should our funding be grant-supported, and if largely so, then what is our role in the creation of these grants? What I appreciated most about Johnson’s remarks was the following: “A library is still a library no matter what medium they collect.” I wish more of our profession expressed such sentiments.

Margaret Heller (Dominican University) asked the question, “How can we help library students learn a bit of technology and at the same time get some work done?” To answer her question she described how her students created a voting widget, did an environmental scan, and created a list of library labs.

Christine McClure (Illinois Institute of Technology) was given the floor, and she was seeking feedback regarding her recently launched mobile website. Working in a very small shop, she found the design process invigorating since she was not necessarily beholden to a committee for guidance. “I work very well with my boss. We discuss things, and I implement them.” Her lightning talk was the first of many which exploited JQuery and JQuery Mobile, and she advocated the one-page philosophy of Web design.

Jeremy Prevost (Northwestern University) built upon McClure’s topic by describing how he built a mobile website using a Model View Controller (MVC) framework. Such a framework, which is operating system and programming language agnostic, accepts a URL as input, performs the necessary business logic, branches according to the needs/limitations of the HTTP user-agent, and returns the results appropriately. Using MVC he is able to seamlessly provide many different interfaces to his website.

If a poll had been taken on the best talk of the Meeting, then I think Matthew Reidsma’s (Grand Valley State University) presentation would have come out on top. In it he drove home two main points: 1) practice “progressive enhancement” Web design as opposed to “graceful degradation”, and 2) use JQuery to control the appearance and functionality of hosted Web content. In the former, single Web pages are designed in a bare bones manner, and through the use of conditional Javascript logic and cascading stylesheets the designer implements increasingly complicated pages. This approach works well for building everything from mobile websites to full-fledged desktop browser interfaces. The second point — exploiting JQuery to control hosted pages — was very interesting. He was given access to the header and footer of hosted content (Summon). He then used JQuery’s various methods to read the body of the pages, munge it, and present more aesthetically pleasing as well as more usable pages. His technique was quite impressive. Through Reidsma’s talk I also learned that many skills are necessary to do Web work. It is not enough to know HTML or Javascript or graphic design or database management or information architecture, etc. Instead, it is necessary to have a combination of these skills in order to really excel. To a great degree Reidsma embodied such a combination.

Francis Kayiwa (University of Illinois, Chicago) wrapped up the first day by asking the group questions about hosting and migrating applications between different domains. The responses quickly turned to things like EAD files, blog postings, and where the financial responsibility lies when grant money dries up. Ah, Code4Lib. You gotta love it.

Day #2

The second day was given over to three one-hour presentations. The first was by Rich Wolf (University of Illinois, Chicago) who went into excruciating detail on how to design and write RESTful applications using Objective-C.

My presentation on text mining might have been as painful for others. In it I tried to describe and demonstrate how libraries could exploit the current environment to provide services against texts through text mining. Examples included the listing of n-grams and their frequencies, concordances, named-entity extractions, word associations through network diagrams, and geo-locations. The main point of the presentation was “Given the full text of documents and readily accessible computers, a person can ‘read’ and analyze a text in so many new and different ways that would not have been possible previously.”
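
To give a flavor of the simplest of these services, here is a small sketch that tabulates bigrams (two-word phrases) and their frequencies from a plain text file; the file name is hypothetical:

  #!/usr/bin/perl
  # a sketch: tabulate bigrams (two-word phrases) and their frequencies
  # from a plain text file, then print the twenty-five most common
  use strict;
  use warnings;

  my $file = shift || 'document.txt';              # hypothetical input file
  open my $fh, '<', $file or die "Can't open $file: $!";
  my $text = lc do { local $/; <$fh> };
  close $fh;

  # split the text into words and count adjacent pairs
  my @words = $text =~ /([a-z']+)/g;
  my %bigrams;
  $bigrams{ "$words[$_] $words[$_ + 1]" }++ for 0 .. $#words - 1;

  # print the most frequently used bigrams
  my @keys = sort { $bigrams{ $b } <=> $bigrams{ $a } } keys %bigrams;
  print "$_\t$bigrams{$_}\n" for @keys[ 0 .. 24 ];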

The final presentation at the Meeting was by Margaret Kipp (University of Wisconsin Milwaukee), and it was called “Teaching Linked Data”. In it she described and demonstrated how she was teaching library school students about mash-ups. Apparently her students have very little computer experience, and the class covered things like the shapes of URLs, the idea of Linked Data, and descriptions of XML and other data streams like JSON. Using things like Fusion tables, Yahoo Pipes, Simile Timelines, and Google Maps, students were expected to become familiar with new uses for metadata and open data. One of the nicest things I heard from Kipp was, “I was also trying to teach the students about programmatic thinking.” I too think such a thing is important; it is important to know how to think both systematically (programmatically) and analytically. Such thinking processes complement each other.

Summary

From my perspective, the Meeting was an unqualified success. Kudos go to Francis Kayiwa, Abigail Goben, Bob Sandusky, and Margaret Heller for organizing the logistics. Thank you! The presentations were on target. The facilities were more than adequate. The wireless network connections were robust. The conversations were apropos. The company was congenial. The price was right. Moreover, I genuinely believe everybody went away from the Meeting having learned something new.

I also believe these sorts of meetings demonstrate the health and vitality of the growing Code4Lib community. The Code4Lib mailing list boasts about 2,000 subscribers who are from all over the world but mostly in the United States. Code4Lib sponsors an annual meeting and a regularly occurring journal. Regional meetings, like this one in Chicago, are effective and inexpensive professional development opportunities for people who are unable to attend the full-fledged conference or are uncertain about it. If these meetings continue, then I think we ought to start charging a franchise fee. (Just kidding, really.)