Archive for the ‘Uncategorized’ Category

Patron-Driven Acquisitions: A Symposium

Posted on July 18, 2012 in Uncategorized

You might say this is a reverse travelogue because it documents what I learned at a symposium that took place here at the University of Notre Dame (May 21, 2012) on the topic of patron-driven acquisitions (PDA). In a sentence, I learned that an acquisitions process partially driven by direct requests from library readers is not new, and it is a pragmatic way to supplement the building of library collections.

Symposium speakers and the PDC

Suzanne Ward, Robert Freeman, and Judith Nixon (Purdue University) began the symposium with a presentation called “Silent Partners In Collection Development: Patron-Driven Acquisitions at Purdue“. The folks at Purdue have been doing PDA for about a decade, but they argue that libraries have been doing it for much longer when you consider patron suggestion forms, books-on-demand services, etc. Their PDA program began in interlibrary loan. When requested materials fit a particular set of criteria (in English, scholarly, non-fiction, costing between $50-150, able to ship in less than a week, and published in the last five years), it was decided to purchase the material instead of trying to borrow it. The project continued for a number of years, and after gathering sufficient data, they asked themselves a number of questions in order to summarize their experience. Who were the people driving the process? Sixty percent (60%) of the requests fitting the criteria were from graduate students — the “silent partners”. How much did it cost? Through the process they added about 10,000 books to the collection at a cost of about $350,000. Were these books useful? These books circulated about four times each, while books purchased through other means circulated about two and a half times. What were the subjects of the books being purchased? This was one of the more interesting questions because the subjects turned out to be cross-disciplinary; requestors were asking to borrow materials that generally fell outside the call number range of their particular discipline. Consequently, PDA was fulfilling collection development functions in ways traditional approval profiles could not. E-book purchases are the next wave of PDA, and they have begun exploring these options, but not enough data has been gathered to draw any conclusions.

Lynn Wiley (University of Illinois at Urbana-Champaign) was second and presented “Patron Driven Acquisitions: One Piece On A Continuum Of Evolving Services”. Starting in early 2010, UIUC, in conjunction with the state-wide consortium named CARLI, began the first of four pilot projects exploring the feasibility of PDA. In it they loaded 16,000 MARC records into their catalog. These records represented items from their Yankee Book Peddler approval plan, and each record included a note stating that the book could be acquired upon request. The implementation was popular: they ran out of money in five weeks when they expected it to last much longer. In a similar project about 6,000 ebook titles were added to the library catalog, and after 10 “activities” (uses) were done against an item, the ebook was purchased. After about four months about 240 titles had been purchased, and as many as 450 more were examined but not triggered. Each ebook cost about $100. A third pilot expanded on the second. It included core approval items from bibliographers — about 150 items every two weeks. Requested items got a two-day turn-around time at an average cost of $60/book. Finally, a fourth project is currently underway, and it expands the user population to the whole of CARLI. Some of the more interesting conclusions from Wiley’s presentation include: 1) build it and they will come, 2) innovation comes out of risk taking, 3) PDA supplements collection building, 4) change circulation periods for high-access books, and 5) PDA is a way to build partnerships with vendors, other librarians, and consortia.

“The Long Tail of PDA” was given by Dracine Hodges (The Ohio State University), third in the lineup. At Ohio State approximately 16,000 records representing on-demand titles were loaded into their catalog. Excluded were items published before 2007, items in foreign languages, computer manuals, and anything costing more than $300. Patrons were allowed to select materials and have them purchased automatically. The university library allocated approximately $35,000 to the project, and it had to be cut short after a month because of the project’s popularity. Based on Hodges’s experience a number of things were learned. First, PDA benefits cross-disciplinary researchers because titles that fall between disciplines often get missed. Second, comparing and contrasting print-based titles and ebooks is like comparing apples and oranges. The issues with print-based titles surround audience and things like courtesy cards, whereas the issues surrounding ebooks include things like views, printing, downloads, and copying. In the future she can see publishers selling things more directly to the patron as opposed to going through a library. She noted the difficulty of integrating the MARC records into the catalog: “They are free for a reason.” Hodges summarized her presentation this way, “We are making a gradual shift from just-in-case selection to just-in-time selection, and the just-in-time selection process is a combination of activities… Print is not dead yet.”

The final presentation was by Natasha Lyandres and Laura Sill (University of Notre Dame), and it was called “Why PDA… Why Now?” In order to understand the necessary workflows, the Hesburgh Libraries experimented with patron-driven acquisitions. Fifty thousand dollars ($50,000) was allocated, and a total of 6,333 records from one ebook vendor were loaded into the Aleph catalog. The URLs in the catalog pointed to activated titles available on the vendor’s platform. Platform advantages and disadvantages quickly became apparent as patrons began to make use of titles. Their questions prompted the library to draw up an FAQ page to explain features and advise patrons. Other platform issues to be further investigated are restrictions imposed by digital rights management, easier downloads, and printing quality. To monitor the speed of spend and to analyze the mix of content being collected, usage reports were reviewed weekly. While the work with PDA at Notre Dame is still in its infancy, a number of things have been learned. PDA is a useful way of acquiring ebooks. Print materials can be acquired in similar ways. The differences between vendor platforms should be explored further. Ongoing funding for PDA and its place and structure in the materials budget will require further discussion and thought. Integrating PDA into formal collection development practices should be considered.

“Thank you”

The symposium was attended by as many as seventy-five people from three or four states. These folks helped turn the event into a real discussion. The symposium was sponsored by the Professional Development Committee (PDC) of the Hesburgh Libraries (University of Notre Dame). I want to thank Collette Mak and Jenn Matthews — PDC co-members — for their generous support in both time and energy. Thanks also go to our speakers, without whom none of this would have been possible. “Thank you to one and all.”

E-Reading: A Colloquium at the University of Toronto

Posted on April 26, 2012 in Uncategorized

On Saturday, March 31 I presented at and attended a colloquium (E-Reading: A Colloquium at the University of Toronto) on the topic of e-reading, and I am documenting the experience because writing — the other half of reading — literally transcends space and time. In a sentence, my Toronto experience fed my body, my heart, and my mind.

Sponsored by a number of groups (The Collaborative Program in Book History and Print Culture, the Toronto Centre for the Book, the Toronto Review of Books, and Massey College) the event was divided into three sections: 1) E-Reader Response, 2) The Space of E-Texts, and 3) a keynote address.

E-Reader Response

Kim Martin (Western University) was honored with the privilege of giving the first presentation. It was originally entitled “Primary Versus Secondary Sources: The Use of Ebooks by Historians”, but sometime before the colloquium she changed the topic of her presentation to the process of serendipity. She advocated a process of serendipity articulated by Jacquelyn Burkell that includes a prepared mind, prior concern, previous experience or expertise, fortuitous outcome, and an act of noticing. [1] All of these elements are a part of the process of serendipitous discovery. She compared these ideas with the possibilities of ebooks, and she asked a set of historians about serendipity. She discovered that there was some apprehension surrounding ebook reading — elements of traditional reading are seen as lost in ebooks — but despite this there was some degree of ebook adoption by the historians.

I (Eric Lease Morgan, University of Notre Dame) gave a presentation originally entitled “Close and Distant Reading at Once and at the Same time: Using E-Readers in the Classroom”, but my title changed as well. It changed to “Summarizing the State of the Catholic Youth Literature Project“. In short, I summarized the project, described some of its features, and emphasized that “distant” reading is not a replacement — but rather a supplement — to the traditional close reading process.

Alex Willis (University of Toronto and Skeining Writing Solutions) then shared with the audience a presentation called “Fan Fiction and the Changing Landscape of Self-Publication”. Fan fiction is a type of writing that fills in gaps in popular literature. For example, it describes how the “warp core” of Star Trek space ships might be designed and work. Fan fiction may be the creation of a backstory behind an online multi-player game. These things are usually written by very enthusiastic — “spirited” — players of the games. Sites like fanfiction.net and the works of Amanda Hocking are included as good examples of the genre. With the advent of this type of literature questions of copyright are raised, the economics of publishing are examined, and the underlying conventional notions of authority are scrutinized. Fan fiction is a product of a world where just about anybody and everybody can be a publisher. I was curious to know how fan fiction compares to open access publishing.

The Space of E-Texts

After a short break the second round of presentations began. It started with Andrea Stuart (University of Toronto) and “Read-Along Records: The Rise of Multimedia Modeling Reading”. Stuart presented on the history of read-along books, how they have changed over the years, and what they are becoming with the advent of e-readers. Apparently they began sometime after phonograph players were inexpensively produced and sold, because this is when records started to be included in children’s books. They were marketed as time-savers to parents who were unable to both read to their children and do household duties. She did a bit of compare & contrast of these read-along books and noticed how the stories narrated by men included all sorts of sound effects, but the narrations by women did not. She then described how the current crop of ebooks are increasingly becoming like the read-along books of yesterday but with significant enhancements — buttons to push, questions to answer, and pages to turn. She then asked the question, “Are these enhancements liberating or limiting?” In the end I believe she thought they were a little bit of both.

“Commuter Reading and E-Reading” was the title of Emily Thompson‘s (University of Toronto) paper. This was a short history of a particular type of reading — reading that happens when people are commuting to and from their place of work. Apparently it began in France or England with the advent of commuting by train, “railway literature”, and “yellow backs” sold by a man named W.H. Smith. This literature was marketed as easy, comfortable, and enjoyable. It was sold in stalls and offered “limitless” choice. Later on the Penguin Books publisher started using the Penguinator — a vending machine — as a way of selling this same sort of literature. Thompson went on to compare the form & function of railway literature to the form & function of current cell phone and ebook readers. It was interesting to me to see how the form of the literature fit its function. Short, easy-to-read chapters. Something that could be picked up, left off, and picked up again quickly. Something that wasn’t too studious and yet engaging. For example, the very short chapters of cell-phone novels sold in Japan. In the end Thompson described the advent of ebook readers as a moment in time for reading, not the death of the book. It was a refreshing perspective.

Brian Greenspan (Carleton University) then shared “Travel/Literature: Reading Locative Narrative”. While most of the presentations looked back in history, Greenspan’s was the only one that looked forward. In it he described a type of book designed to be read while walking around. “Books suppress optical input,” he said. By including geospatial technology in an ebook reader, different things happened in his narrative (a technology he called “StoryTrek”) depending on where a person was located. Readers of the narrative commented on the new type of reality they experienced through its use; specifically, they used the word “stimulating”. They felt less isolated during the reading process because when they saw things in their immediate location they brought them into the narrative.

Keynote address

The keynote address was given by Assistant Professor of Library & Information Science, Bonnie Mak (University of Illinois), and it was entitled “Reading the ‘E’ in E-Reading”. The presentation was a reflection on e-reading, specifically a reflection on the definition of words on a page, and how different types of pages create different types of experiences. For example, think of the history of writing from marks in clay, to the use of wax tablets, to the codex, to the e-reader. Think of the scroll of millennia past, and think of scrolling on our electronic devices. Think of the annotation of mediaeval manuscripts, and compare that to the annotations we make on PDF documents. “What is old is new again… The material of books engender certain types of reading.” Even the catalogs and services of libraries are affected by this phenomenon. She used the example of Early English Books Online (EEBO), and how it is based on the two Short Title Catalogs (STC) — “seminal works of bibliographic scholarship that set out to define the printed record of the English-speaking world from the very beginnings of British printing in the late fifteenth century through to 1700.” Apparently the STC is incomplete in and of itself, and yet EEBO is touted as a complete collection of early English literature. And because EEBO is incomplete as well as rendered online in a particular format, it too lends itself to only a particular type of reading. To paraphrase, “Reader, beware and be aware.”

Summary and Conclusions

As I mentioned above, my trip to Toronto fed my body, my heart, and my mind.

The day I arrived I visited with Michael Bramah, Noel Mcferran, and Sian Meikle, all of the University of Toronto. I got a 50¢, private tour of the St. Michael’s College special collections, including the entire library of the school when it was founded (the Soulerin Collection) as well as their entire collection of G.K. Chesterton and Cardinal John Henry Newman materials. It was then fun trying to find a popular reading item from their Nineteenth Century French Collection. More importantly, we all talked about the “Catholic Portal” and ways we could help make it go forward. That evening we had a nice meal in a nice restaurant. All these things fed my body.

My heart was fed the next morning — the day of the colloquium — when I first went to one of the university’s libraries and autographed my WAIS And Gopher Servers book for the fourth or fifth time in the past dozen years. I went to the Art Gallery Of Ontario. There I saw a wall of Dufy’s paintings. I also experienced a curation of some paintings in the style of a Paris salon. This was echoed in the museum’s Canadian collection, where similar paintings of similar classic styles were hung as in a salon. My heart soared as I was inspired. The Gallery’s collection and presentation style is to be applauded.

Finally, I fed my mind through the colloquium. Located in an academic atmosphere, we shared and discussed. We were all equals. Everybody had something to offer. There was no goal other than to stimulate our minds. Through the process I learned of many new and different types of reading:

  • close reading
  • continuous reading
  • deviant reading
  • distant reading
  • distracted reading
  • intersectional reading
  • location-aware reading
  • sustained reading

My conception of reading was expanded. After the event many of us retired to a nearby pub where I met the author of a piece of iPad software called iAnnotate. He described the fluctuating and weaving way features of the PDF “standard” were created. Again, my ideas about reading were expanded. I need more of this type of stimulation. This trip was well worth the nine-hour drive to Toronto and the twelve-hour drive back.

Summarizing the state of the Catholic Youth Literature Project

Posted on March 30, 2012 in Uncategorized

This posting summarizes the purpose, process, and technical infrastructure behind the Catholic Youth Literature Project. In a few sentences, the purpose was two-fold: 1) to enable students to learn what it meant to be Catholic during the 19th century, and 2) to teach students the value of reading “closely” as well as from a “distance”. The process of implementing the Project required the time and skills of a diverse set of individuals. The technical infrastructure is built on a large set of open source software, and the interface is far from perfect.

Purpose

The purpose of the project was two-fold: 1) to enable students to learn what it meant to be Catholic during the 19th century, and 2) to teach students the value of reading “closely” as well as from a “distance”. To accomplish this goal a faculty member here at the University of Notre Dame (Sean O’Brien) sought to amass a corpus of materials written for Catholic youth during the 19th century. This corpus was expected to be accessible via tablet-based devices and provide a means for “reading” the texts in the traditional manner as well as through various text mining interfaces.

During the Spring Semester students in a survey class were lent Android-based tablet computers. For a few weeks of the semester these same students were expected to select one or two texts from the amassed corpus for study. Specifically, they were expected to read the texts in the traditional manner (but on the tablet computer), and they were expected to “read” the texts through a set of text mining interfaces. In the end the students were to outline three things: 1) what did you learn by reading the text in the traditional way, 2) what did you learn by reading the text through text mining, and 3) what did you learn by using both interfaces at once and at the same time.

Alas, the Spring semester has yet to be completed, and consequently what the students learned has yet to be determined.

Process

The process of implementing the Project required the time and skills of a diverse set of individuals. These individuals included the instructor (Sean O’Brien), two collection development librarians (Aedin Clements and Jean McManus), and a librarian who could write computer programs (myself, Eric Lease Morgan).

As outlined above, O’Brien outlined the overall scope of the Project.

Clements and McManus provided the means of amassing the Project’s corpus. A couple of bibliographies of Catholic youth literature were identified. Searches were done against the University of Notre Dame’s library catalog. O’Brien suggested a few titles. From these lists items were selected for inclusion: some for purchase, some from the University library’s collection, and some from the Internet Archive. The items for purchase were acquired. The items from the local collection were retrieved. And both sets of these items were sent off for digitization and optical character recognition. The results of the digitization process were then saved on a local Web server. At the same time, the items identified from the Internet Archive were mirrored locally and saved in the same Web space. About one hundred items were selected in all, and they can be seen as a set of PDF files. This process took about two months to complete.

Technical infrastructure

The Project’s technical infrastructure enables “close” and “distant” reading, but the interface is far from perfect.

From the reader’s (I don’t use the word “user” anymore) point of view, the Project is implemented through a set of Web pages. Behind the scenes, the Project is implemented with an almost dizzying array of free and open source software. The most significant processes implementing the Project are listed and briefly described below:

  • mirroring – Many of the text mining services require extensive analysis of the original items. To accomplish this, the texts were mirrored locally. By feeding the venerable wget program a list of URLs based on Internet Archive unique identifiers, mirroring content is trivial.
  • named-entity extraction – There was a desire to list the names, places, and organizations underlying each text. These things can put a text into context for the reader. Are there a lot of Irish names? Is there a preponderance of place names from the United States? To accomplish this task and assist in answering these sorts of questions, a Perl script was written around the Stanford Named Entity Recognizer. This script (txt2ner.pl) extracts the entities, looks them up in DBpedia, and saves metadata (abstracts, URLs to images, as well as latitudes & longitudes) describing the entities to a locally defined XML file for later processing. (See an example.) A CGI script (ner.cgi) was then written to provide a reader interface to these files.
  • parts-of-speech extraction – Just as lists of named entities can be enlightening, so can lists of a text’s parts-of-speech. Are the pronouns generally masculine or feminine? Overall, are the verbs active or passive? To what degree are color words used in the text? To begin to answer these sorts of questions, a Perl script was written around a Perl module called Lingua::TreeTagger. The script (pos-tag.pl) extracts parts-of-speech from a text file and saves the result as a simple tab-delimited file for later use. (See an example.)
  • word/phrase tabulation and concordancing – To support rudimentary word and phrase tabulations, as well as a concordance interface, an Apache module (Concordance.pm) was written around two more Perl modules. The first, Lingua::EN::Ngram, extracts word and phrase occurrences. The second, Lingua::Concordance, provides an object-oriented keyword-in-context interface.
  • metadata enhancement and storage – A rudimentary catalog listing the items in the Project’s corpus was implemented using a Perl module called MyLibrary. The MARC records describing each item in the corpus were first parsed. Desired metadata elements were mapped to MyLibrary fields, facets, and terms. Each item in the corpus was then analyzed in terms of word length as well as readability score through the use of yet another Perl module called Lingua::EN::Fathom. These additional metadata elements were then added to the underlying “catalog”. To accomplish this set of tasks two additional Perl scripts were written (add-author-title.pl and add-size-readability.pl).
  • HTML creation – A final Perl script was written to bring all the parts together. By looping through the “catalog” this script (make-catalog.pl) generates HTML files designed for display on tablet devices. These HTML files make heavy use of JQuery Mobile, and since no graphic designer was a part of the Project, JQuery Mobile was a godsend.
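The mirroring step is simple enough to sketch. The Project fed wget a list of URLs derived from Internet Archive identifiers; the following Python fragment illustrates the idea. The identifiers are hypothetical placeholders, and the `{id}_djvu.txt` naming convention for the Archive's plain-text derivatives is an assumption, not something specified by the Project.

```python
# Sketch: build a list of Internet Archive download URLs from item
# identifiers, suitable for "wget --input-file=urls.txt".
# The identifiers below are hypothetical placeholders.

def ia_text_urls(identifiers):
    """Given IA identifiers, return URLs of their plain-text derivatives."""
    template = 'https://archive.org/download/{id}/{id}_djvu.txt'
    return [template.format(id=i) for i in identifiers]

if __name__ == '__main__':
    ids = ['exampleitem00smith', 'anotheritem01jones']  # hypothetical
    for url in ia_text_urls(ids):
        print(url)
```

The resulting list can be written to a file and handed to wget in a single invocation, which is essentially all the mirroring step requires.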
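The tabulation and concordance features were implemented with the Perl modules named above (Lingua::EN::Ngram and Lingua::Concordance); as a rough illustration of the two underlying ideas, n-gram counting and keyword-in-context display, here is a toy Python stand-in, not the Project's actual code:

```python
import re
from collections import Counter

def ngrams(text, n=1):
    """Count n-word phrases in a text; a bare-bones analog of Lingua::EN::Ngram."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(' '.join(words[i:i + n]) for i in range(len(words) - n + 1))

def kwic(text, keyword, width=20):
    """Return keyword-in-context lines with the keyword centered,
    in the spirit of Lingua::Concordance."""
    lines = []
    for m in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()].rjust(width)
        right = text[m.end():m.end() + width].ljust(width)
        lines.append(f'{left} {m.group(0)} {right}')
    return lines

if __name__ == '__main__':
    sample = 'God is good, and the good are holy.'
    print(ngrams(sample).most_common(3))
    for line in kwic(sample, 'good'):
        print(line)
```

A real concordance adds sorting, stemming, and histogram output, but the kernel of the service is no more than this.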
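Lingua::EN::Fathom reports readability scores such as the Gunning Fog index. To show what such a score involves, here is a minimal Python sketch; the syllable counter is a crude vowel-group heuristic of my own, so its numbers are only rough approximations of what Fathom would report.

```python
import re

def fog_index(text):
    """Approximate Gunning Fog readability:
    0.4 * (words per sentence + percentage of 'complex' words),
    where a complex word has three or more syllables."""
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words or not sentences:
        return 0.0

    def syllables(word):
        # Crude heuristic: count runs of vowels as syllables.
        return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

if __name__ == '__main__':
    print(round(fog_index('The cat sat. The dog ran.'), 1))  # → 1.2
```

A score like this, along with word counts, was added to each item's metadata so readers could gauge a text's length and difficulty before choosing it.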

The result — the Catholic Youth Literature Project — is a system that enables the reader to view the texts online as well as do some analysis against them. The system functions in that it does not output invalid data, and it does provide enhanced access to the texts.


The home page is simply a list of covers and associated titles.


The Internet Archive online reader is one option for “close” reading.


The list of parts-of-speech provides the reader with some context. Notice how the word “good” is the most frequently used adjective.


The histogram feature of the concordance allows the reader to see where selected words appear in the text. For example, in this text the word “god” is used rather consistently.


A network diagram allows the reader to see what words are used “in the same breath” as a given word. Here the word “god” is frequently used in conjunction with “good”, “holy”, “give”, and “love”.

Summary

To summarize, the Catholic Youth Literature Project is far from complete. For example, it has yet to be determined whether or not the implementation has enabled students to accomplish the Project’s stated goals. Does it really enhance the use and understanding of a text? Second, the process of selecting, acquiring, digitizing, and integrating the texts into the library’s collection is not streamlined. Finally, usability of the implementation is still in question. On the other hand, the implementation is more than a prototype and does exemplify how the process of reading is evolving over time.

Summary of the Catholic Pamphlets Project

Posted on March 27, 2012 in Uncategorized

This posting summarizes the Catholic Pamphlets Project — a process to digitize sets of materials from the Hesburgh Libraries collection, add the result to a repository, provide access to the materials through the catalog and “discovery system” as well as provide enhanced access to the materials through a set of text mining interfaces. In a sentence, the Project has accomplished most of its initial goals both on time and under budget.

The Project’s original conception

The Catholic Pamphlets Project began early in 2011 with the writing of a President’s Circle Award proposal. The proposal detailed how sets of Catholic Americana would be digitized in conjunction with the University Archives. The Libraries was to digitize the 5,000 Catholic pamphlets located in Special Collections, and the Archives was to digitize its set of Orestes Brownson papers. In addition, a graduate student was to be hired to evaluate both collections, write introductory essays describing why they are significant research opportunities, and do an environmental scan regarding the use of digital humanities computing techniques applied against digitized content. In the end, both the Libraries and the Archives would have provided digital access to the materials through things like the library catalog, its “discovery” system, and the “Catholic Portal”, as well as laid the groundwork for further digitization efforts.

Getting started

By late Spring a Project leader was identified, and their responsibilities were to coordinate the Libraries’s side of the Project in conjunction with a number of library departments including Special Collections, Cataloging, Electronic Resources, Preservation, and Systems. By this time it was also decided not to digitize the entire collection of 5,000 items, but instead hire someone for the summer to digitize as many items as possible and process them accordingly – a workflow test. In the meantime, a comparison of in-house and vendor-supplied digitization costs would be evaluated.

By this time a list of specific people had also been identified to work on the Project, and these people became affectionately known as Team Catholic Pamphlets:

Aaron Bales • Eric Lease Morgan (leader) • Jean McManus • Julie Arnott • Lisa Stienbarger • Louis Jordan • Mark Dehmlow • Mary McKeown • Natasha Lyandres • Rajesh Balekai • Rick Johnson • Robert Fox • Sherri Jones

Work commences

Throughout the summer a lot of manual labor was applied against the Project. A recent graduate from St. Mary’s (Eileen Laskowski) was hired to scan pamphlets. After one or two weeks of work, she was relocated from the Hesburgh Library to the Art Slide Library where others were doing similar work. She used equipment borrowed from Desktop Computing and Network Services (DCNS) and the Slide Library. Both DCNS and the Slide Library were gracious about offering their resources. By the end of the summer Ms. Laskowski had digitized just less than 400 pamphlets. The covers were digitized in 24-bit color. The inside pages were gray-scale. Everything was digitized at 600 dots per inch. These pamphlets generated close to 92 GB of data in the form of TIFF and PDF files.

Because the Pamphlets Project was going to include links to concordance (text mining) interfaces from within the library’s catalog, Sherri Jones facilitated two hour-long workshops to interested library faculty and staff in order to explain and describe the interfaces. The first of these workshops took place in the early summer. The second took place in late summer.

In the meantime efforts were spent by two summer students of Jean McManus‘s. The students determined the copyright status of each of the 5,000 pamphlets. They used a decision-making flowchart as the basis of their work. This flowchart has since been reviewed by the University’s General Counsel and deemed a valid tool for determining copyright. Of the sum of pamphlets, approximately 4,000 (80%) have been determined to be in the public domain.

Starting around June Team Catholic Pamphlets decided to practice with the technical services aspect of the Project. Mary McKeown, Natasha Lyandres, and Lisa Stienbarger wrote a cataloging policy for the soon-to-be created MARC records representing the digital versions of the pamphlets. Aaron Bales exported MARC records representing the print versions of the pamphlets. PDF versions of approximately thirty-five pamphlets were placed on a Libraries’s Web server by Rajesh Balekai and Rob Fox. Plain text versions of the same pamphlets were placed on a different Web server, and a concordance application was configured against them. Using the content of the copyright database being maintained by Jean McManus’s students, Eric Lease Morgan updated the MARC records representing the print records to include links to the PDF and concordance versions of the pamphlets. The records were passed along to Lisa Stienbarger who updated them according to the newly created policy. The records were then loaded into a pre-production version of the catalog for verification. Upon examination the Team learned that users of Internet Explorer were not able to consistently view the PDF versions. After some troubleshooting, Rob Fox wrote a work-around to the problem, and the MARC records were changed to reflect new URLs of the PDF versions. Once this work was done the thirty-five records were loaded into the production version of the catalog, and from there they seamlessly flowed into the library’s “discovery system” – Primo. Throughout this time Julie Arnott and Dorothy Snyder applied quality control measures against the digitized content and wrote a report documenting their findings. Team Catholic Pamphlets had successfully digitized and processed thirty-five pamphlets.

With these successes under our belts, and with the academic year commencing, Team Catholic Pamphlets celebrated with a pot-luck lunch and rested for a few weeks.

The workflow test concludes

In early October the Team got together again and unanimously decided to process the balance of the digitized pamphlets in order to put them into production. Everybody wanted to continue practicing with their established workflows. The PDF and plain text versions of the pamphlets were saved on their respective Web servers. The TIFF versions of the pamphlets were saved to the same file system as the library’s digital repository. URLs were generated. The MARC records were updated and saved to pre-production. After verification, they were moved to production and flowed to Primo. What had taken at least three months earlier in the year now took only a few weeks. By Halloween, Team Catholic Pamphlets had finished its workflow test, processing the totality of the digitized pamphlets.

Access to the collection

There is no single home page for the collection of digitized pamphlets. Instead, each of the pamphlets has been cataloged, and through the use of a command-line search strategy one can pull up all of the pamphlets in the library’s catalog — http://bit.ly/sw1JH8

From the results list it is best to view the records’ detail in order to see all of the options associated with the pamphlet.

command-line search results page

From the details page one can download and read the pamphlet in the form of a PDF document or the reader can use a concordance to apply “distant reading” techniques against the content.

details of a specific Catholic pamphlets record

50 most frequently used words in a selected pamphlet

Conclusions and next steps

The Team accomplished most of its goals, and we learned many things, but not everything was accomplished. No graduate student was hired, and therefore no overarching description of the pamphlets (nor of content from the Archives) was evaluated. Similarly, no environmental scan regarding the use of digital humanities techniques against the collections was done. While 400 of our pamphlets are accessible from the catalog as well as the “discovery system”, no testing has been done to determine their ultimate usability.

The fledgling workflow can still be refined. For example, the process of identifying content to digitize, removing it from Special Collections, digitizing it, returning it to Special Collections, doing quality control, adding the content to the institutional repository, establishing the text mining interfaces, updating the MARC records (with copyright information, URLs, etc.), and ultimately putting the lot into the catalog is a bit disjointed. Each part works well unto itself, but the process as a whole does not yet run like a well-oiled machine. Like any new workflow, more practice is required.

This Project provided Team members with the opportunity to apply traditional library skills against a new initiative, and it was relished by everybody involved. The Project required the expertise of faculty and staff. It required the expertise of people in Collection Management, Preservation, Technical Services, Public Services, and Systems. Everybody applied their highly developed professional knowledge to a new and challenging problem. The Project was a cross-departmental, holistic process, and it even generated interest in participation from people outside the Team. Many people across the Libraries would like to get involved with wider digitization efforts because they found this Project exciting and saw its potential for future growth. They too see it as an opportunity for professional development.

While there are 5,000 pamphlets in the collection, only 4,000 of them are deemed in the public domain (legally digitizable). Four hundred (400) pamphlets were scanned by a single person at a resolution of 600 dots/inch over a period of three months for a total cost of approximately $3,400. This is a digitization rate of approximately 1,200 pamphlets per year at a cost of $13,600. At this pace it would take the Libraries close to 3 1/3 years to digitize the 4,000 pamphlets for an approximate out-of-pocket labor cost of $44,880. If the dots/inch qualification were reduced by half – which still exceeds the needs for quality printing purposes – then it would take a single person approximately 1.7 years to do the digitization at a total cost of approximately $22,440. The time spent doing digitization could be reduced even further if the dots/inch qualification were reduced some more. One hundred fifty dots/inch is usually good enough for printing purposes. Based on our knowledge, it would cost less than $3,000 to purchase three or four computer/scanning set-ups similar to the ones used during the Project. If the Libraries were to hire as many as four students to do digitization, then we estimate the public domain pamphlets could be digitized in less than two years at a cost of approximately $25,000.

There are approximately 184,996 pages of Catholic pamphlet content, but approximately 80% of these pages (4,000 pamphlets of the total 5,000) are legally digitizable – 147,997 pages. A reputable digitization vendor will charge around $.25/page to do digitization. Consequently, the total out-of-pocket cost of using the vendor is close to $37,000.
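The vendor estimate can be double-checked with a few lines of arithmetic. The figures are the ones quoted above; only the variable names are mine:

```python
# Check the vendor cost estimate using the figures quoted above.
total_pages = 184_996          # all Catholic pamphlet pages
public_domain_share = 0.80     # 4,000 of the 5,000 pamphlets
rate_per_page = 0.25           # quoted vendor price, dollars per page

digitizable_pages = round(total_pages * public_domain_share)
vendor_cost = digitizable_pages * rate_per_page

print(digitizable_pages)   # 147997
print(round(vendor_cost))  # 36999, i.e. close to $37,000
```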

Team Catholic Pamphlets recommends going forward with the Project using an in-house digitization process. Despite the administrative overhead associated with hiring and managing sets of digitizers, the in-house process affords the Libraries a means to learn and practice with digitization. The results will make the Libraries more informed and better educated and thus empower us to make higher quality decisions in the future.

Patron-Driven Acquisitions: A Symposium at the University of Notre Dame

Posted on March 19, 2012 in Uncategorized

The Professional Development Committee at the Hesburgh Libraries of the University of Notre Dame is sponsoring a symposium on the topic of patron-driven acquisitions:

  • Who – Anybody and everybody is invited
  • What – A symposium
  • When – Monday, May 21, 2012 from 9 o’clock to 1 o’clock, then lunch (included), and then informal roundtable discussions
  • Where – Hesburgh Library Auditorium, University of Notre Dame
  • Cost – free

After lunch and given enough interest, we will also be facilitating roundtable discussions on the topic of the day. To register, simply send your name to Eric Lease Morgan, and you will be registered. Easy!

Need a map? Download a campus map highlighting where to park and the location of the library.

Presentations

Here is a list of the presentations to get the discussion going:

  • Silent Partners in Collection Development: Patron-Driven Acquisitions at Purdue (Judith M. Nixon, Robert S. Freeman, and Suzanne M. Ward) – The Purdue University Libraries was an early implementer of patron-driven acquisitions (PDA). In 2000, interlibrary loan began buying rather than borrowing books that patrons requested. Following a brief review of the origin and reasons for this service, we will report on the results of an analysis of the 10,000 books purchased during the program’s first ten years. We examined data on the users’ status and department affiliations; most frequent publishers; and bibliographers’ analysis of the books in the top six subjects assessing whether the purchases were relevant to the collection. In addition, we will summarize the highlights of a comparative circulation study of PDA books vs. normally acquired books: do patron-selected books or librarian-selected books circulate at a higher rate? The conclusions of these PDA print book investigations encouraged the Libraries to begin an e-book PDA pilot program. We will report some early insights and surprises with this pilot. A librarian with selecting responsibilities in several subject areas will discuss his perspective of the value that PDA programs bring to collection building.
  • The Long Tail of PDA (Dracine Hodges) – Patron-driven acquisitions (PDA) titles are known to generate usage at least once at the moment a short-term loan or purchase is triggered. Despite the current PDA buzz, many remain unconvinced of the potential for ongoing circulation. There is a palpable level of skepticism over the sustainability of this buffet model with regard to user interest and the validity of shrinking librarian mediation in the selection process. To discuss these issues, data for content purchased during Ohio State’s 2009/2010 e-book PDA pilot will be examined. Several years of usage activity will be charted and analyzed for budgetary implications, including cost per use. In addition, key issues surrounding academic library patron-driven collection development philosophies will be explored, particularly during this period when traditional methods of collection development must be maintained while libraries concurrently move toward what appears to be the future: patron-driven collection development.
  • Acquisitions and User Services: responsive and responsible ways to build the collection (Lynn Wiley) – Patron-driven acquisitions (PDA) or purchase-on-demand programs are a natural extension of what libraries already do: build programs that allow users to gain access to research materials. PDA programs provide for direct accountability on purchase decisions, especially relevant in the present economic situation. The Association of College and Research Libraries 2010 top ten trends in academic libraries (ACRL, 2010) listed PDA as a new force in collection development, explaining: “Academic library collection growth is driven by patron demand and will include new resource types.” ACRL noted how this change was facilitated by vendor tools that provide controls for custom-made purchase-on-demand programs. In consortia settings, a PDA model can broaden access across the collective collection. This presentation describes the evolution of purchase-on-demand programs at the University of Illinois at Urbana-Champaign (UIUC) and includes a detailed description of several programs recently implemented at UIUC as well as a PDA program within a statewide academic library consortium that tested and analyzed purchase-on-demand mechanisms for print purchases. These programs represent a natural progression of models used to expand PDA from ILL requesting to a discovery-and-selection model in which bibliographic records were preselected and then made available in the online catalog for ordering. Statistics on use and users’ comments will be shared, as well as comments on future applications.
  • Demand Driven Acquisitions: University of Notre Dame Experience (Fall 2011 – Spring 2012) (Laura A. Sill and Natasha Lyandres) – Using one-time special funding, the Hesburgh Libraries of Notre Dame launched a DDA pilot project for ebooks in conjunction with YBP and Ebrary in September 2011. The implementation date followed several months of planning. The goal of the project was to test patron-driven acquisitions as the method for adding ebook titles of high interest to the library collection. Up until that point, ebooks had been acquired primarily through the purchase of large-scale vendor packages. One such package, acquired in July of 2011, was Academic Complete on subscription, which provided access to 70,000 ebooks through the Ebrary platform. Also available to bibliographers and selectors was the ability to place firm orders through YBP for Ebrary titles. Our presentation will provide an overview of the pilot project and our thoughts on the effectiveness of this method vis-à-vis other ebook acquisitions methods currently utilized by the Libraries. We will discuss the particular challenges of running the pilot with Ebrary in conjunction with Academic Complete, as well as future possibilities for expanding our use of DDA to include additional use options such as short-term loans, greater integration with approval plans, and DDA for print.

Speakers

Here is a list of the speakers, their titles, and the briefest of bios:

  • Robert S. Freeman (Associate Professor of Library Science, Reference, Languages and Literatures Librarian) – Robert S. Freeman has worked at Purdue University since 1997, where he is a reference librarian and the liaison to the Department of English as well as the School of Languages and Cultures. He has an M.A. in German from UNC-Chapel Hill and an M.S. in Library and Information Science from University of Illinois at Urbana-Champaign. Interested in the history of libraries, he co-edited and contributed to Libraries to the People: Histories of Outreach (McFarland, 2003). More recently, he co-edited a special issue of Collection Management on PDA.
  • Dracine Hodges (Head, Acquisitions Department) – Dracine Hodges is Head of the Acquisitions Department at The Ohio State University Libraries. Previously, she was the Monographs Librarian and the Mary P. Key Resident Librarian. She received her BA from Wesleyan College and MLIS from Florida State University. She manages the procurement of print and electronic resources for the OSU Libraries. Most of her career has focused on acquisitions, but she has also worked as a reference librarian and in access services. Dracine is active in ALCTS serving on the Membership Committee and as past chair of the Tech Services Workflow Efficiency Interest Group. She is also an editorial assistant for College & Research Libraries and a graduate of the Minnesota Institute.
  • Natasha Lyandres (Head, Acquisitions, Resources and Discovery Services Department (ARDS)) – Natasha Lyandres, MLIS from San Jose State University, began her professional career in 1993 as cataloging and special projects librarian at the Hoover Institution Library and Archives, Stanford University. From 1996 to 2001 she served as Reference and Collections Development Librarian at Joyner Library, East Carolina University. Natasha joined the Hesburgh Libraries of Notre Dame in 2001. She has held positions in the areas of serials, cataloging, acquisitions, and electronic resources. Natasha is currently Head of the Acquisitions, Resources and Discovery Services Department and Russian and East European Studies bibliographer.
  • Judith M. Nixon (Professor of Library Science and Education Librarian) – Judith M. Nixon holds degrees from Valparaiso University and the University of Iowa. She has worked at Purdue University since 1984 as head of the Consumer & Family Sciences Library, the Management & Economics Library, and the Humanities & Social Science Library. Currently, as Education Librarian, she develops the education collections. Her publishing record includes over 35 articles and books. Her interest in patron-driven acquisitions led to co-editing a special issue of Collection Management that focuses on this topic and a presentation at La Biblioteca Del Futuro in Mexico City in October of 2001.
  • Laura A. Sill (Supervisor, Monographic Acquisitions Unit, ARDS) – Laura A. Sill, MA from the University of Wisconsin-Madison, has been a member of the Hesburgh Libraries of Notre Dame library faculty since 1989. She has held positions in the areas of acquisitions, serials, and systems. Laura is currently Visiting Associate Librarian, supervising Monographic Acquisitions in the Acquisitions, Resources and Discovery Services Department.
  • Suzanne M. Ward (Professor of Library Science and Head, Collection Management) – Suzanne (Sue) Ward holds degrees from UCLA, the University of Michigan, and Memphis State University. She has worked at the Purdue University Libraries since 1987 in several different positions. Her current role is Head, Collection Management. Professional interests include patron-driven acquisitions (PDA) and print retention issues. Sue has published one book and over 25 articles on various aspects of librarianship. She recently co-edited a special issue of Collection Management that focuses on PDA, and her book Guide to Patron-Driven Acquisitions is in press at the American Library Association.
  • Lynn Wiley (Head of Acquisitions and Associate Professor of Library Administration) – Lynn Wiley has been a librarian for over thirty years, working for academic libraries on the east coast and, since 1995, at the University of Illinois. Lynn worked in public service roles until 2005, when she switched to acquisitions. She has written and presented widely on meeting user needs and has provided analysis on how library partnerships can best achieve this. She is active in state, regional, and national professional associations and is also on the editorial board of LRTS. Her overall goal is to meet the needs of users easily and seamlessly.

Emotional Intelligence

Posted on February 23, 2012 in Uncategorized

This is sort of like a travelogue — a description of what I learned by attending a workshop here at Notre Dame on the topic of emotional intelligence. In a sentence, emotional intelligence begins with self-awareness, moves through self-management to control impulses, continues with social awareness and the ability to sense the emotions of others, and matures with relationship management used to inspire and manage conflict.

The purpose of the workshop — attended by approximately thirty people and sponsored by the University’s Human Resources Department — was to make attendees more aware of how they can build workplace relationships by being more emotionally intelligent.

The workshop’s facilitator began by outlining The Rule Of 24: when a person is in an emotionally charged situation, wait twenty-four hours before attempting resolution. After twenty-four hours, ask yourself, “How do I feel?” If the answer is anxious, then wait another twenty-four hours and repeat. If not, approach the other person with a structured script. In other words, practice what you hope to communicate. If an immediate solution is necessary, or when actually having the difficult conversation, then remember a few points:

  1. pause — give yourself time
  2. slow your rate of speech
  3. soften the tone of your voice
  4. ask a few questions
  5. allow the other person to “save face”

When having a difficult conversation, try prefacing it with something like this: “I am going to tell you something, and it is not my intent to make you feel poorly. It is difficult for me as well.” Clarify this at the beginning as well as at the end of the conversation.

The facilitator also outlined a process for learning emotional intelligence:

  1. begin by being self-aware
  2. identify a problem that happened
  3. ask yourself, “What did I say or do that hurt the situation?”
  4. ask yourself, “What can I say or do to improve the situation?”
  5. ask yourself, “What did I do to improve the situation?”

There were quite a number of interesting quotes I garnered from the facilitator:

  • “When talking to people, don’t treat everybody the same. Take into consideration the personality of others. This is akin to the ‘Platinum Rule’ presented to the library faculty and staff a few weeks ago.”
  • “Emotions are tools if we use them properly.”
  • “Realize that ‘I don’t have to like you to work well with you. Let’s be productive together.'”
  • “It is not about being right as much as it is about getting the job done.”
  • “If you can see the humor in the situation, then things will go a lot better. Have fun with it.”
  • “Be prepared for the other person’s shock, anger, or disappointment.”
  • “Think about collaboration as if it were a sporting event where everybody knows the rules of the game.”
  • “Ask yourself, ‘What strengths do they bring to the table? What are the things they do to get in the way?’ And don’t think of these things as weaknesses.”
  • “In many cases it is not what you say, but how you say it. You can disagree without being emotional.”
  • “We are here to find solutions not find fault. Define common ground.”
  • “What are you doing that I’m not doing? Ask others for advice and how to deal with specific individuals.”

There were a number of other people from the Libraries who attended the workshop, and most of us gathered around a table afterwards to discuss what we learned. I think it would behoove the balance of the Libraries to be more aware of emotional intelligence issues.

Much of the workshop was about controlling and managing emotions as if they were things to be tamed. In the end I wanted to know when and how emotions could be encouraged or even indulged for the purposes of experiencing beauty, love, or spirituality. But alas, the workshop was about the workplace and relationship building.

400 Catholic pamphlets digitized

Posted on November 11, 2011 in Uncategorized

Team Catholic Pamphlets has finished digitizing, processing, and making available close to 400 pieces of material in Aleph as well as Primo — http://bit.ly/sw1JH8

More specifically, we had a set of Catholic pamphlets located in Special Collections converted into TIFF and PDF files. We then had OCR (optical character recognition) done against them, and the results were saved on a few local computers — parts of our repository. We then copied and enhanced the existing MARC records describing the pamphlets, and we ingested them into Aleph. From there they flowed to Primo.

When search results are returned for Catholic Pamphlet items, the reader is given the opportunity to download the PDF version and/or apply text mining services against them in order to enhance the process of understanding. For example, here are links to a specific catalog record, the pamphlet’s PDF version, and text mining interface:

Our next step is two-fold. First, we will document our experience and what we learned. Second, we will share this documentation with the wider audience. We hope to complete these last two tasks before we go home for the Christmas Holiday. Wish us luck.

Field trip to the Mansueto Library at the University of Chicago

Posted on November 2, 2011 in Uncategorized

On Wednesday, October 19, 2011 the Hesburgh Libraries Professional Development Committee organized a field trip to the Mansueto Library at the University of Chicago. This posting documents some of the things I saw, heard, and learned. If I had one take-away, it was the fact that the initiatives of the libraries at the University of Chicago are driven by clearly articulated needs/desires of their university faculty.


Mansueto Library, the movie!

The adventure began early in the morning as a bunch of us from the Hesburgh Libraries (Collette Mak, David Sullivan, Julie Arnott, Kenneth Kinslow, Mandy Havert, Marsha Stevenson, Rick Johnson, and myself) boarded the South Shore train bound for Chicago. Getting off at 57th Street, we walked a few short blocks to the University, and arrived at 10:45. The process was painless, if not easy and inexpensive.

David Larsen (our host) greeted us at the door, gave us the opportunity to put our things down, and immediately introduced us to David Borycz, who gave us a tour of the Mansueto Library. If my memory serves me correctly, a need for an additional university library was articulated about ten years ago. Plans were drafted and money allocated. As time went on, the need for more money — almost double — was projected. That was when Mr. & Mrs. Mansueto stepped up to the plate and offered the balance. With its dome made of uniquely shaped glass parts and its eyeball shape, the Library looks like a cross between the Louvre Pyramid (Paris) and the Hemisfèric in Valencia (Spain). The library itself serves three functions: 1) reading room, 2) book storage, and 3) combination digitization & conservation lab. For such a beautiful and interesting space, I was surprised the latter function was included in the mix, occupying almost half of the above-ground space.

The reading room was certainly an inviting space. Long tables complete with lights. Quiet. Peaceful. Inviting. Contemplative.

The back half of the ground-level was occupied by both a digitization and conservation lab. Lots of scanners including big, small, and huge. Their scanning space is not a public space. There were no students, staff, nor faculty digitizing things there. Instead, their scanning lab began as a preservation service, grew from there, and now digitizes things after being vetted through a committee prioritizing projects. The conservation lab was complete with large tables, de-acidification baths, and hydration chambers. Spacious. Well-equipped. Located in a wonderful place.

Borycz then took us down to see the storage area. Five stories deep, this space is similar to the storage space at Valparaiso University. Each book is assigned a unique identifier. Books are sorted by size and put into large metal bins (each also assigned a unique number). The identifiers are then saved in a database denoting the location in the cavernous space below. One of the three elevators/lifts then transports the big metal bins to their permanent locations. The whole space will hold about 3.5 million volumes (the size of the entire collection of the Hesburgh Libraries), but at present only about 900,000 volumes are stored there. How did they decide what would go to the storage area? Things that need not be browsed (like runs of bound serial volumes), things that are well-indexed, things that have been digitized, and “elephant” folios.
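The bin-and-identifier scheme described above can be modeled in a few lines of code. This is purely an illustration of the idea, with hypothetical names and data structures, not the University of Chicago’s actual system:

```python
# Hypothetical model of the storage scheme: each book gets a unique
# identifier, books are grouped into numbered bins, and a database
# records which bin holds each book and where the bin lives.
book_to_bin = {}    # book identifier -> bin identifier
bin_location = {}   # bin identifier -> location in the storage vault

def shelve(book_id, bin_id, location):
    """Record that a book was placed in a bin at the given location."""
    book_to_bin[book_id] = bin_id
    bin_location[bin_id] = location

def retrieve(book_id):
    """Look up the bin and its location so a lift can fetch the book."""
    bin_id = book_to_bin[book_id]
    return bin_id, bin_location[bin_id]

shelve("b10234", "bin-77", "aisle 3, level 5")
print(retrieve("b10234"))  # ('bin-77', 'aisle 3, level 5')
```

Because retrieval is a pair of keyed lookups rather than a browse, shelving order no longer matters, which is why books can be sorted by size to pack the bins efficiently.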

When we returned from lunch our respective libraries did bits of show & tell. I shared the Hesburgh Libraries’ efforts to digitize Catholic pamphlets and provide text mining interfaces against the result. Rick Johnson demonstrated the state of the Seaside Project. We were then shown the process the University of Chicago librarians were using to evaluate the EBSCOhost “discovery service”. An interface was implemented, but the library is not sure exactly what content is being indexed, and the indexed items’ metadata seems to be applied inconsistently. Moreover, it is difficult (if not impossible) to customize the way search results are ranked and prioritized. All is not lost. The index does include the totality of JSTOR, which is seen as a plus. Librarians have also discovered that the index does meet the needs of many library patrons. The library staff have also enhanced other library interfaces, pointing patrons to the EBSCO service if they browse past two or three pages of search results. When show & tell was finished we broke into smaller groups for specific discussions, and I visited the folks in the digitization unit. We then congregated in the lobby, made our way back to the train, and returned to South Bend by 7:30 in the evening.

The field trip was an unqualified success. It was fun, easy, educational, team-building, inexpensive, collegial, and enlightening. Throughout the experience we heard over and over again how the libraries took their direction from University of Chicago faculty. These faculty advocated for the library, priorities were set, and goals were fulfilled. The Hesburgh Libraries at the University of Notre Dame is geographically isolated. In my opinion we must make more concerted efforts both to visit other libraries and to bring other librarians to Notre Dame. Such experiences enrich us all.

Scholarly publishing presentations

Posted on November 1, 2011 in Uncategorized

As a part of Open Access Week, a number of us (Cheri Smith, Collette Mak, Parker Ladwig, and myself) organized a set of presentations on the topic of scholarly publishing with the goal of increasing awareness of the issues across the Hesburgh Libraries. This posting outlines the event which took place on Thursday, October 27, 2011.

The first presentation was given by Kasturi Halder (Julius Nieuwland Professor of Biological Sciences and Founding Director of the Center for Rare and Neglected Diseases) who described her experience working with the Public Library of Science (PLoS). Specifically, Halder is the editor-in-chief of PLoS Pathogens with a total editorial staff of close to 140 persons. The journal receives about 200 submissions per month, and her efforts require approximately one hour of time per day. She describes the journal as if it were a community, and she says one of the biggest problems they have right now is internationalization. Halder was a strong advocate for open access publishing. “It is important to make the content available because the research is useful all over the world… When the content is free it can be used in any number of additional ways including text mining and course packs… Besides, the research is government funded and ought to be given back to the public… Patients should have access to articles.” Halder lauded PLoS One, a journal which accepts anything as long as it has been peer-reviewed, and she cited an article co-written by as many as sixty-four students here at Notre Dame as an example. Finally, Halder advocated article-level impact as opposed to journal-level impact as a measure of success.

Anthony Holter (Assistant Professional Specialist in the Mary Ann Remick Leadership Program, Institute for Educational Initiatives) outlined how Catholic Education has migrated from a more traditional scholarly publication to something that stretches the definition of a journal. Started in 1997 as a print journal, Catholic Education was sponsored and supported by four institutions of higher education, each paying an annual fee. The purpose of the journal was (and still is) to “promote and disseminate scholarship about the purposes, practices, and issues in Catholic education at all levels.” Over time the number of sponsors grew and eventually faced two problems. First, they realized that libraries were paying twice for the content. Once for the membership fee and again for a subscription. Second, many practitioners appreciated the journal when they were in school, but as they graduated they no longer had access to it. What to do? The solution was to go open access. The journal is now hosted at Boston College. In this new venue Holter has more access to usage statistics than he has ever had before making it easier for him to track trends. For example, he saw many searches on topics of leadership, and consequently, he anticipates a special issue on leadership in the near future. Finally, Holter also sees the journal akin to a community, and the editorial board plans to exploit social networks to a greater degree in an effort to make the community more interactive. “We are trying to create a rich tapestry of a journal.”

Finally, Peter Cholak (Professor of Mathematics, College of Science) put words to the characteristics of useful scholarly journals and used the Notre Dame Journal of Formal Logic as an example. Cholak looks to journals to add value to scholarly research. He does not want to pay any sort of page or image charges (which are sometimes the case in open access publications). Cholak looks for author-friendly copyright agreements from publishers. This is the case because his community is expected (more or less) to submit their soon-to-be-published articles to a repository called MathSciNet. He uses MathSciNet as both a dissemination and access tool. A few years ago the Notre Dame Journal of Formal Logic needed a new home, and Cholak visited many people across the Notre Dame campus looking for ways to make it sustainable. (I remember him coming to the libraries, for example.) He found little, if any, support. Sustainability is a major issue. “Who is going to pay? Creation, peer-review, and dissemination all require time and money.” For the time being, Project Euclid fits the bill.

The presentations were well-received by the audience of about twenty people. Most were from the Libraries, but others came from across the University. It was interesting to compare & contrast the disciplines. One was theoretical. Another was empirical. The third was both academic and practical at one and the same time. There was lively discussion after the formal presentations. Such was the goal. I sincerely believe the presenters have more in common than differences when it comes to scholarly communication. At the same time they represented a wide spectrum of publishing models. This spectrum is the result of the current economic and technological environment, and the challenge is to see the forest for the trees. The challenge for libraries is to understand the wider perspectives and implement solutions satisfying the needs of most people given limited amounts of resources. In few places is this more acute than in the realm of scholarly communication.

Tablet-based “reading”

Posted on October 15, 2011 in Uncategorized

A number of us got together today, and we had a nice time doing show & tell as well as discussing “tablet-based ‘reading’”. We included:

  • Carole Pilkinton
  • Charles Vardeman
  • Elliott Visconsi
  • Eric Lease Morgan
  • Jean McManus
  • Laura Fuderer
  • Markus Krusche
  • Sean O’Brien

Elliott demonstrated iPad Shakespeare while Charles and Markus filled in the gaps when it came to the technology. Sean and I did the same thing when it came to the Catholic Youth Literature Project. Some points during the discussion included but were not limited to:

  • the two projects complement each other in their approaches
  • the availability of usable texts makes such projects difficult
  • evaluating the effectiveness of these tools is challenging
  • such applications require significant resources to create
  • these types of applications demonstrate a large degree of potential

Fun in academia and the digital humanities.