Serial publications with editors at Notre Dame

Posted on August 22, 2012 in Uncategorized by Eric Lease Morgan

This is a list of serial publications (journals, yearbooks, magazines, newsletters, etc.) whose editorial boards include at least one person from the University of Notre Dame. This is not a complete list; if you know of other titles, then please drop me a line:

  1. Actroterion
  2. American Journal Of Jurisprudence
  3. American Midland Naturalist
  4. American Political Thought
  5. Analyst
  6. Analytical Chemistry
  7. Applied Preventative Psychology
  8. Attention, Perception, & Psychophysics
  9. Behavior Genetics
  10. Beyond Politics
  11. Biomicrofluidics
  12. Bulletin De Philosophie Médiévale
  13. Cognitive Psychology
  14. Conflict Resolution
  15. Current Drug Targets
  16. Faith And Philosophy
  17. International Yearbook Of German Idealism
  18. Journal of Autism and Developmental Disorders
  19. Journal Of Biblical Literature
  20. Journal of Chemical & Engineering Data
  21. Journal Of College and University Law
  22. Journal of Experimental Psychology: Learning, Memory & Cognition
  23. Journal of Hindu-Christian Studies
  24. Journal Of Legislation
  25. Journal Of Modern Russian History and Historiography
  26. Journal of Moral Education
  27. Journal of Multivariate Analysis
  28. Journal of Organic Chemistry
  29. Journal Of Philosophical Research
  30. Journal of Physical Chemistry A
  31. Journal of Physical Chemistry Letters
  32. Journal Of Religion and Literature
  33. Journal Of Undergraduate Research
  34. Kellogg Institute Working Papers
  35. Mobilization
  36. Mobilizing Ideas
  37. Multivariate Behavioral Research
  38. Nineteenth Century Context
  39. Notre Dame Journal of Formal Logic
  40. Notre Dame Journal of International & Comparative Law
  41. Notre Dame Journal Of Law, Ethics, And Public Policy
  42. Notre Dame Law Review
  43. Notre Dame Philosophical Reviews
  44. Notre Dame Review
  45. Psychological Methods
  46. Quarterly Journal Of Experimental Psychology
  47. Re: Visions
  48. Reilly Center Reports
  49. Rethinking Marxism
  50. Review Of Politics
  51. Scholastic
  52. Scientia
  53. Sociological Voices
  54. Studies in History and Philosophy of Science
  55. Sullivan Prize series
  56. The Bend
  57. The Hub
  58. The Juggler
  59. Through Gendered Lenses
  60. William And Katherine Devers Series in Dante Studies

This is a list of titles that may or may not have had an editor from Notre Dame at one time, but to the best of my ability I could not find one.

  1. Catholic Education
  2. Comparative Politics Newsletter
  3. International Journal Of Ecology
  4. International Journal Of Industrial Organization
  5. Medieval Philosophy And Theology
  6. Memory And Cognition

Again, this is not necessarily a complete list, and if you know of other titles, then please drop me a line.

Last updated: October 1, 2012

Exploiting the content of the HathiTrust, epilogue

Posted on August 14, 2012 in Uncategorized by Eric Lease Morgan

This blog posting simply points to a browsable and downloadable set of MARC records describing a set of books in both the HathiTrust and the Hesburgh Libraries at the University of Notre Dame.

In a previous blog posting I described how I downloaded about 25,000 MARC records that:

  1. were denoted as in the public domain
  2. described books published prior to 1924
  3. were denoted as a part of the Hesburgh Libraries at the University of Notre Dame
  4. were denoted as a part of the HathiTrust
  5. had a one-to-one correspondence between OCLC number and digitized item

This list of MARC records is not, nor was it intended to be, a comprehensive list of overlapping materials between the Hesburgh Libraries collection and the HathiTrust. Instead, this list is intended to be a set of unambiguous sample data allowing us to import and assimilate HathiTrust records into our library catalog and/or “discovery system” on an experimental basis.

The browsable interface is rudimentary. Simply point your browser to the interface and a list of ten randomly selected titles from the MARC record set will be displayed. Each title will be associated with the date of publication and three links. The first link points to the HathiTrust catalog record where you will be able to read/view the item’s bibliographic data. The second link points to the digitized version of the item complete with its searching/browsing interface. The third and final link queries OCLC for libraries owning the print version of the item. This last link is here to prove that the item is owned by the Hesburgh Libraries.

Screen shot of browsable interface
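For what it is worth, the random-ten logic is easy to reproduce. Below is a minimal sketch using the MARC::Batch and List::Util modules; the file name (hathitrust.mrc) and the field choices (260$c for the date, 856$u for the link) are my assumptions, not necessarily how the interface is actually implemented:

    #!/usr/bin/env perl
    # ten-random.pl - a sketch of the "ten random titles" logic; the file
    # name and field choices are assumptions, not the interface's actual code
    use strict;
    use warnings;
    use MARC::Batch;
    use List::Util qw(shuffle);

    # slurp the whole batch of records into memory
    my $batch   = MARC::Batch->new( 'USMARC', 'hathitrust.mrc' );
    my @records = ();
    while ( my $record = $batch->next ) { push @records, $record }

    # choose ten records at random and display rudimentary bibliographics
    foreach my $record ( grep { defined } ( shuffle @records )[ 0 .. 9 ] ) {
        my $date = $record->field('260') ? $record->field('260')->subfield('c') : '';
        my $link = $record->field('856') ? $record->field('856')->subfield('u') : '';
        print join( "\t", $record->title, $date // '', $link // '' ), "\n";
    }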

For a good time, you can also download the MARC records as a batch.

Finally, why did I create this interface? Because people will want to get a feel for the items in question before the items’ descriptions and/or URLs become integrated into our local system(s). Creating a browsable interface seemed to be one of the easier ways I could accomplish that goal.

Fun with MARC records, the HathiTrust, and application programming interfaces.

Exploiting the content of the HathiTrust, continued

Posted on August 11, 2012 in Uncategorized by Eric Lease Morgan

This blog posting describes how I created a set of MARC records representing public domain content that is in both the University of Notre Dame’s collection as well as in the HathiTrust.

Background

In a previous posting I described how I learned about the amount of overlap between my library’s collection and the ‘Trust. There is about a 33% overlap. In other words, about one out of every three books owned by the Hesburgh Libraries has also been digitized and is in the ‘Trust. I wondered how our collections and services could be improved if hypertext links between our catalog and the ‘Trust could be created.

In order to create links between our catalog and the ‘Trust, I need to identify overlapping titles and remote ‘Trust URLs. Because they originally wrote the report which started the whole thing, OCLC had to have the necessary information. Consequently I got in touch with the author of the original OCLC report (Constance Malpas) who in turn sent me a list of Notre Dame holdings complete with the most rudimentary of bibliographic data. We then had a conference call between ourselves and two others — Roy Tennant from OCLC and Lisa Stienbarger from Notre Dame. As a group we discussed the challenges of creating an authoritative overlap list. While we all agreed the creation of links would be beneficial to my local readers, we also agreed to limit what gets linked, specifically public domain items associated with single digitized items. Links to copyrighted materials were deemed more useless than useful. One can’t download the content, and searching the content is limited. Similarly, any OCLC number — the key I planned to use to identify overlapping materials — can be associated with more than one digitized item. “To which digitized item should I link?” Trying to programmatically disambiguate between one digitized item and another was seen as too difficult to handle at the present time.

The hacking

I then read the HathiTrust Bib API documentation, and I learned it was simple. Construct a URL denoting the type of control number one wants to search with as well as whether full or brief output is desired. (Full output is just like brief output except that full output includes a stream of MARCXML.) Send the URL off to the ‘Trust and get back a JSON stream of text. The programmer is then expected to read, parse, and analyze the result.
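To make the API’s shape concrete, here is the sort of query everything below builds upon: a minimal sketch assuming the documented URL pattern and the response field names (items, htid, rightsCode, itemURL) as I remember them.

    #!/usr/bin/env perl
    # bib-api.pl - query the HathiTrust Bib API with an OCLC number; the
    # response field names are my recollection and ought to be verified
    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use JSON qw(decode_json);

    my $oclc = shift or die "Usage: $0 <oclc number>\n";

    # brief output; substitute "full" for "brief" to get the MARCXML too
    my $url  = "http://catalog.hathitrust.org/api/volumes/brief/oclc/$oclc.json";
    my $json = get($url) or die "No response from $url\n";
    my $data = decode_json($json);

    # each digitized item associated with the record is enumerated in "items"
    foreach my $item ( @{ $data->{'items'} || [] } ) {
        print join( "\t", $item->{'htid'}, $item->{'rightsCode'}, $item->{'itemURL'} ), "\n";
    }

Run against a single OCLC number, the sketch prints one line per digitized item, and the rightsCode column is where the public domain denotation (“pd”) lives, at least as I understand the API.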

Energized with a self-imposed goal, I ran off to my text editor to hack a program. Given the list of OCLC numbers provided by OCLC, I wrote a Perl program that queries the ‘Trust for a single record. I then made sure the resulting record: 1) was denoted as in the public domain, 2) described a book published prior to 1924, and 3) was associated with a single digitized item. When records matched these criteria I wrote the OCLC number, the title, and the ‘Trust URL pointing to the digitized item to a tab-delimited file. After looping through all the records I identified about 25,000 fitting my criteria. I then wrote another program which looped through the 25,000 items and created a local MARC file describing each item complete with its remote HathiTrust URL. (Both of my scripts — filter-pd.pl and get-marcxml.pl — can be used by just about any library. All you need is a list of OCLC numbers.) It is now possible for us here at Notre Dame to pour these MARC records into our catalog or “discovery system”. Doing so is not always straight-forward, and, if they so desire, I’ll leave that work to others.
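For the curious, here is a condensed sketch in the spirit of filter-pd.pl and get-marcxml.pl combined, not the scripts themselves. It reads OCLC numbers from standard input, applies the three criteria above, and writes binary MARC records complete with the remote ‘Trust URL. The full-output field names (publishDates, marc-xml) are assumptions based on my reading of the Bib API documentation:

    #!/usr/bin/env perl
    # filter-and-marc.pl - a condensed sketch in the spirit of filter-pd.pl
    # and get-marcxml.pl; response field names are assumptions
    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use JSON qw(decode_json);
    use MARC::Record;
    use MARC::Field;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    open my $out, '>', 'hathitrust.mrc' or die $!;
    while ( my $oclc = <STDIN> ) {
        chomp $oclc;

        # full output is just like brief output plus a stream of MARCXML
        my $json = get("http://catalog.hathitrust.org/api/volumes/full/oclc/$oclc.json") or next;
        my $data = decode_json($json);
        my @items    = @{ $data->{'items'}   || [] };
        my ($record) = values %{ $data->{'records'} || {} };
        my ($date)   = @{ $record->{'publishDates'} || [] };

        next unless @items == 1;                                   # a single digitized item
        next unless ( $items[0]->{'rightsCode'} || '' ) eq 'pd';   # denoted public domain
        next unless $date && $date =~ /^\d{4}$/ && $date < 1924;   # published prior to 1924

        # denote the remote digitized item with an 856 field, and save
        my $marc = MARC::Record->new_from_xml( $record->{'marc-xml'}, 'UTF-8' );
        $marc->append_fields( MARC::Field->new( '856', '4', '0', u => $items[0]->{'itemURL'} ) );
        print $out $marc->as_usmarc;
    }
    close $out;

Invoke it along the lines of ./filter-and-marc.pl < oclc-numbers.txt, and the resulting hathitrust.mrc file is ready for pouring into a catalog.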

What I learned

This process has been interesting. I learned that a lot of our library’s content exists in digital form, and copyright is getting in the way of making it as useful as it could be. I learned the feasibility of improving our library collections and services by linking between our catalog and remote repositories. The feasibility is high, but the process of implementation is not straight-forward. I learned how to programmatically query the HathiTrust. It is simple and easy-to-use. And I learned how the process of mass digitization has been a boon as well as a bit of a bust — the result is sometimes ambiguous.

It is now our job as librarians to figure out how to exploit this environment and fulfill our mission at the same time. Hopefully, this posting will help somebody else take the next step.

Exploiting the content of the HathiTrust

Posted on August 10, 2012 in Uncategorized by Eric Lease Morgan

I have been exploring possibilities for exploiting the content of the HathiTrust to a greater degree. This blog posting outlines some of my initial ideas.

The OCLC Research Library Partnership program recently sent us here at the University of Notre Dame a report describing and illustrating the number and types of materials held by both the University of Notre Dame and the HathiTrust — an overlap report.

As illustrated by the pie chart from the report, approximately 1/3 of our collection is in the HathiTrust. It might be interesting to link our local library catalog records to the records in the ‘Trust. I suppose the people who wrote the original report would be able to supply us with a list of our overlapping titles. Links could be added to our local records facilitating enhanced services to our readers. “Service excellence.”

Pie chart: percentage of University of Notre Dame and HathiTrust overlap

According to the second chart, of our approximately 1,000,000 overlapping titles, about 121,000 (5%) are in the public domain. The majority of the public domain documents are government documents. On the other hand about 55,000 of our overlapping titles are both in the public domain and a part of our collection’s strengths (literature, philosophy, and history). It might be interesting to mirror any or all of these public domain documents locally. This would enable us to enhance our local collections and possibly provide services (text mining, printing, etc.) against them. “Lots of copies keep stuff safe.”

Chart: subject coverage of the overlapping materials

According to the HathiTrust website, about 250,000 items in the ‘Trust are freely available because they are in the public domain. For example, somebody has created a collection of public domain titles called English Short Title Catalog, which is apparently the basis of EEBO and in the public domain. [2] Maybe we could query the ‘Trust for public domain items of interest, and mirror them locally too? Maybe we could “simply” add those public domain records to our catalog? The same process could be applied to collections from the Internet Archive.

The primary purpose of the HathiTrust is to archive digitized items for its membership. A secondary purpose is to provide some public access to the materials. After a bit of creative thinking on our parts, I believe it is possible to extend the definition of public access and provide enhanced services against some of the content in the archive as well as fulfill our mission as a research library.

I think I will spend some time trying to get a better idea of exactly what public domain titles are in our collection as well as in the HathiTrust. Wish me luck.

Patron-Driven Acquisitions: A Symposium

Posted on July 18, 2012 in Uncategorized by Eric Lease Morgan

You might say this is a reverse travelogue because it documents what I learned at a symposium that took place here at the University of Notre Dame (May 21, 2012) on the topic of patron-driven acquisitions (PDA). In a sentence, I learned that an acquisitions process partially driven by direct requests from library readers is not new, and it is a pragmatic way to supplement the building of library collections.

Symposium speakers and the PDC

Suzanne Ward, Robert Freeman, and Judith Nixon (Purdue University) began the symposium with a presentation called “Silent Partners In Collection Development: Patron-Driven Acquisitions at Purdue”. The folks at Purdue have been doing PDA for about a decade, but they advocate that libraries have been doing PDA for longer than that when you consider patron suggestion forms, books-on-demand services, etc. Their PDA program began in interlibrary loan. When requested materials fit a particular set of criteria (in English, scholarly, non-fiction, cost between $50 and $150, could ship in less than a week, and were published in the last five years), it was decided to purchase the material instead of trying to borrow it. The project continued for a number of years, and after gathering sufficient data, they asked themselves a number of questions in order to summarize their experience. Who were the people driving the process? Sixty percent (60%) of the requests fitting the criteria were from graduate students — the “silent partners”. How much did it cost? Through the process they added about 10,000 books to the collection at a cost of about $350,000. Were these books useful? These same books seemed to circulate about four times, whereas books purchased through other means circulated about two and a half times. What were the subjects of the books being purchased? This was one of the more interesting questions because the subjects turned out to be cross-disciplinary, and requestors were asking to borrow materials that generally fell outside the call number range of their particular discipline. Consequently, the PDA program was fulfilling collection development functions in ways traditional approval profiles could not. E-book purchases are the next wave of PDA, and they have begun exploring these options, but not enough data has been gathered to draw any conclusions.

Lynn Wiley (University of Illinois at Urbana-Champaign) was second and presented “Patron Driven Acquisitions: One Piece On A Continuum Of Evolving Services”. Starting in early 2010, UIUC, in conjunction with the state-wide consortium named CARLI, began the first of four pilot projects exploring the feasibility of PDA. In it they loaded 16,000 MARC records into their catalog. These records represented items from their Yankee Book Peddler approval plan. Each record included a note stating that the book could be acquired upon request. The implementation was popular; they ran out of money in five weeks when they expected the money to last a lot longer. In a similar project about 6,000 ebook titles were added to the library catalog, and after 10 “activities” (uses) were done against an item, the ebook was purchased. After about four months about 240 titles were purchased, and as many as 450 were examined but not triggered. Each ebook cost about $100. A third pilot expanded on the second. It included core approval items from bibliographers — about 150 items every two weeks. Requested items got a two-day turn-around time at an average cost of $60/book. Finally, a fourth project is currently underway, and it expands the user population to the whole of CARLI. Some of the more interesting conclusions from Wiley’s presentation include: 1) build it and they will come, 2) innovation comes out of risk taking, 3) PDA supplements collection building, 4) change circulation periods for high access books, and 5) PDA is a way to build partnerships with vendors, other librarians, and consortia.

“The Long Tail of PDA” was given by Dracine Hodges (The Ohio State University) and was third in the lineup. At Ohio State approximately 16,000 records representing on-demand titles were loaded into their catalog. No item was published before 2007, in a foreign language, a computer manual, or priced at more than $300. Patrons were allowed to select materials and have them purchased automatically. The university library allocated approximately $35,000 to the project, and it had to be cut short after a month because of the project’s popularity. Based on Hodges’s experience a number of things were learned. First, PDA benefits cross-disciplinary researchers because titles get missed when they sit in one discipline or the other. Second, comparing and contrasting print-based titles and ebooks is like comparing apples and oranges. The issues with print-based titles surround audience and things like courtesy cards, whereas the issues surrounding ebooks include things like views, printing, downloads, and copying. In the future she can see publishers selling things more directly to the patron as opposed to going through a library. She noted the difficulty of integrating the MARC records into the catalog. “They are free for a reason.” Hodges summarized her presentation this way, “We are making a gradual shift from just-in-case selection to just-in-time selection, and the just-in-time selection process is a combination of activities… Print is not dead yet.”

The final presentation was by Natasha Lyandres and Laura Sill (University of Notre Dame), and it was called “Why PDA… Why Now?” In order to understand the necessary workflows, the Hesburgh Libraries experimented with patron-driven acquisitions. Fifty thousand dollars ($50,000) was allocated, and a total of 6,333 records from one ebook vendor were loaded into the Aleph catalog. The URLs in the catalog pointed to activated titles available on the vendor’s platform. Platform advantages and disadvantages became quickly apparent as patrons began to make use of titles. Their questions prompted the library to draw up an FAQ page to explain features and advise patrons. Other platform issues to be further investigated are restrictions because of digital rights management, easier downloads, and printing quality. To monitor the speed of spend and to analyze the mix of content being collected, usage reports were reviewed weekly. While the work with PDA at Notre Dame is still in its infancy, a number of things have been learned. PDA is a useful way of acquiring ebooks. Print materials can be acquired in similar ways. The differences between vendor platforms should be explored some more. Ongoing funding for PDA and its place and structure in the materials budget will require further discussion and thought. Integrating PDA into formal collection development practices should be considered.

“Thank you”

The symposium was attended by as many as seventy-five people from across three or four states. These folks helped turn the event into a real discussion. The symposium was sponsored by the Professional Development Committee (PDC) of the Hesburgh Libraries (University of Notre Dame). I want to thank Collette Mak and Jenn Matthews — PDC co-members — for their generous support in both time and energy. Thanks also go to our speakers, without whom none of this would have been possible. “Thank you to one and all.”

E-Reading: A Colloquium at the University of Toronto

Posted on April 26, 2012 in Uncategorized by Eric Lease Morgan

On Saturday, March 31 I presented at and attended a colloquium (E-Reading: A Colloquium at the University of Toronto) on the topic of e-reading, and I am documenting the experience because writing — the other half of reading — literally transcends space and time. In a sentence, my Toronto experience fed my body, my heart, and my mind.

Sponsored by a number of groups (The Collaborative Program in Book History and Print Culture, the Toronto Centre for the Book, the Toronto Review of Books, and Massey College) the event was divided into three sections: 1) E-Reader Response, 2) The Space of E-Texts, and 3) a keynote address.

E-Reader Response

Kim Martin (Western University) was honored with the privilege of giving the first presentation. It was originally entitled “Primary Versus Secondary Sources: The Use of Ebooks by Historians”, but sometime before the colloquium she changed the topic of her presentation to the process of serendipity. She advocated a process of serendipity articulated by Jacquelyn Burkell that includes a prepared mind, prior concern, previous experience or expertise, fortuitous outcome, and an act of noticing. [1] All of these elements are a part of the process of a serendipitous find. She compared these ideas with the possibilities of ebooks, and she asked a set of historians about serendipity. She discovered that there was some apprehension surrounding ebook reading, and that elements of traditional reading are seen as lost in ebooks, but despite this there was some degree of ebook adoption by the historians.

I (Eric Lease Morgan, University of Notre Dame) gave a presentation originally entitled “Close and Distant Reading at Once and at the Same Time: Using E-Readers in the Classroom”, but my title changed as well. It changed to “Summarizing the State of the Catholic Youth Literature Project”. In short, I summarized the project, described some of its features, and emphasized that “distant” reading is not a replacement for — but rather a supplement to — the traditional close reading process.

Alex Willis (University of Toronto and Skeining Writing Solutions) then shared with the audience a presentation called “Fan Fiction and the Changing Landscape of Self-Publication”. Fan fiction is a type of writing that fills in gaps in popular literature. For example, it describes how the “warp core” of Star Trek space ships might be designed and work. Fan fiction may be the creation of a backstory for an online multi-player game. These things are usually written by very enthusiastic — “spirited” — players of the games. Sites like fanfiction.net and the works of Amanda Hocking are included as good examples of the genre. With the advent of this type of literature, questions of copyright are raised, the economics of publishing are examined, and the underlying conventional notions of authority are scrutinized. Fan fiction is a product of a world where just about anybody and everybody can be a publisher. I was curious to know how fan fiction compares to open access publishing.

The Space of E-Texts

After a short break the second round of presentations began. It started with Andrea Stuart (University of Toronto) and “Read-Along Records: The Rise of Multimedia Modeling Reading”. Stuart presented on the history of read-along books, how they have changed over the years, and what they are becoming with the advent of e-readers. Apparently they began sometime after phonograph players were inexpensively produced and sold, because this is when records started to be included in children’s books. They were marketed as time-savers to parents who were unable to read to their children as well as do household duties. She did a bit of compare & contrast of these read-along books and noticed how the stories narrated by men included all sorts of sound effects, but the narrations by women did not. She then described how the current crop of ebooks are increasingly becoming like the read-along books of yesterday but with significant enhancements — buttons to push, questions to answer, and pages to turn. She then asked the question, “Are these enhancements liberating or limiting?” In the end I believe she thought they were a little bit of both.

“Commuter Reading and E-Reading” was the title of Emily Thompson‘s (University of Toronto) paper. This was a short history of a particular type of reading — reading that happens when people are commuting to and from their place of work. Apparently it began in France or England with the advent of commuting by train, “railway literature”, and “yellow backs” sold by a man named W.H. Smith. This literature was marketed as easy, comfortable, and enjoyable. They were sold in stalls and offered “limitless” choice. Later on the Penguin Books publisher started using the Penguinator — a vending machine — as a way of selling this same sort of literature. Thompson went on to compare the form & function of railway literature to the form & function of current cell phone and ebook readers. It was interesting to me to see how the form of the literature fit its function. Short, easy-to-read chapters. Something that could be picked up, left off, and picked up again quickly. Something that wasn’t too studious and yet engaging. For example, the very short chapters of books designed for cell phones and sold in Japan. In the end Thompson described the advent of ebook readers as a moment in time for reading, not the death of the book. It was a refreshing perspective.

Brian Greenspan (Carleton University) then shared “Travel/Literature: Reading Locative Narrative”. While most of the presentations looked back in history, Greenspan’s was the only one that looked forward. In it he described a type of book designed to be read while walking around. “Books suppress optical input,” he said. By including geo-spatial technology in an ebook reader, different things happened in his narrative (a technology he called “StoryTrek”) depending on where a person was located. Readers of the narrative commented on the new type of reality they experienced through its use; specifically, they used the word “stimulating”. They felt less isolated during the reading process because when they saw things in their immediate location they brought them into the narrative.

Keynote address

The keynote address was given by Assistant Professor of Library & Information Science, Bonnie Mak (University of Illinois), and it was entitled “Reading the ‘E’ in E-Reading”. The presentation was a reflection on e-reading, specifically a reflection on the definition of words on a page, and how different types of pages create different types of experiences. For example, think of the history of writing from marks in clay, to the use of wax tablets, to the codex, to the e-reader. Think of the scroll of millennia past, and think of scrolling on our electronic devices. Think of the annotation of mediaeval manuscripts, and compare that to the annotations we make on PDF documents. “What is old is new again… The material of books engenders certain types of reading.” Even the catalogs and services of libraries are affected by this phenomenon. She used the example of Early English Books Online (EEBO), and how it is based on the two Short Title Catalogs (STC) — “seminal works of bibliographic scholarship that set out to define the printed record of the English-speaking world from the very beginnings of British printing in the late fifteenth century through to 1700.” Apparently the STC is incomplete in and of itself, and yet EEBO is touted as a complete collection of early English literature. And because EEBO is incomplete as well as rendered online in a particular format, it too lends itself to only a particular type of reading. To paraphrase, “Reader, beware and be aware.”

Summary and Conclusions

As I mentioned above, my trip to Toronto fed my body, my heart, and my mind.

The day I arrived I visited with Michael Bramah, Noel Mcferran, and Sian Meikle, all of the University of Toronto. I got a 50¢, private tour of the St. Michael’s College special collections, including the entire library of the school when it was founded (the Soulerin Collection) as well as their entire collection of G.K. Chesterton and Cardinal John Henry Newman materials. It was then fun trying to find a popular reading item from their Nineteenth Century French Collection. More importantly, we all talked about the “Catholic Portal” and ways we could help make it go forward. That evening I had a nice meal in a nice restaurant. All these things fed my body.

My heart was fed the next morning — the day of the colloquium — when I first went to one of the university’s libraries and autographed my WAIS And Gopher Servers book for the fourth or fifth time in the past dozen years. I then went to the Art Gallery Of Ontario. There I saw a wall of Dufy’s paintings. I also experienced a curation of some paintings in the style of a Paris salon. This was echoed in the museum’s Canadian collection where similar paintings of similar classic styles were hung as in a salon. My heart soared as I was inspired. The Gallery’s collection and presentation style is to be applauded.

Finally, I fed my mind through the colloquium. Located in an academic atmosphere, we shared and discussed. We were all equals. Everybody had something to offer. There was no goal other than to stimulate our minds. Through the process I learned of many new and different types of reading:

  • close reading
  • continuous reading
  • deviant reading
  • distant reading
  • distracted reading
  • intersectional reading
  • location-aware reading
  • sustained reading

My conception of reading was expanded. After the event many of us retired to a nearby pub where I met the author of a piece of iPad software called iAnnotate. He described the fluctuating and weaving way features were added to the PDF “standard”. Again, my ideas about reading were expanded. I need and require more of this type of stimulation. This trip was well worth the nine-hour drive to Toronto and the twelve-hour drive back.

Summarizing the state of the Catholic Youth Literature Project

Posted on March 30, 2012 in Uncategorized by Eric Lease Morgan

This posting summarizes the purpose, process, and technical infrastructure behind the Catholic Youth Literature Project. In a few sentences, the purpose was two-fold: 1) to enable students to learn what it meant to be Catholic during the 19th century, and 2) to teach students the value of reading “closely” as well as from a “distance”. The process of implementing the Project required the time and skills of a diverse set of individuals. The technical infrastructure is built on a large set of open source software, and the interface is far from perfect.

Purpose

The purpose of the project was two-fold: 1) to enable students to learn what it meant to be Catholic during the 19th century, and 2) to teach students the value of reading “closely” as well as from a “distance”. To accomplish this goal a faculty member here at the University of Notre Dame (Sean O’Brien) sought to amass a corpus of materials written for Catholic youth during the 19th century. This corpus was expected to be accessible via tablet-based devices and provide a means for “reading” the texts in the traditional manner as well as through various text mining interfaces.

During the Spring Semester students in a survey class were lent Android-based tablet computers. For a few weeks of the semester these same students were expected to select one or two texts from the amassed corpus for study. Specifically, they were expected to read the texts in the traditional manner (but on the tablet computer), and they were expected to “read” the texts through a set of text mining interfaces. In the end the students were to outline three things: 1) what did you learn by reading the text in the traditional way, 2) what did you learn by reading the text through text mining, and 3) what did you learn by using both interfaces at once and at the same time.

Alas, the Spring semester has yet to be completed, and consequently what the students learned has yet to be determined.

Process

The process of implementing the Project required the time and skills of a diverse set of individuals. These individuals included the instructor (Sean O’Brien), two collection development librarians (Aedin Clements and Jean McManus), and a librarian who could write computer programs (myself, Eric Lease Morgan).

As described above, O’Brien outlined the overall scope of the Project.

Clements and McManus provided the means of amassing the Project’s corpus. A couple of bibliographies of Catholic youth literature were identified. Searches were done against the University of Notre Dame’s library catalog. O’Brien suggested a few titles. From these lists items were selected for inclusion: some for purchase, some from the University library’s collection, and some from the Internet Archive. The items for purchase were acquired. The items from the local collection were retrieved. And both sets of these items were sent off for digitization and optical character recognition. The results of the digitization process were then saved on a local Web server. At the same time, the items identified from the Internet Archive were mirrored locally and saved in the same Web space. About one hundred items were selected in all, and they can be seen as a set of PDF files. This process took about two months to complete.

Technical infrastructure

The Project’s technical infrastructure enables “close” and “distant” reading, but the interface is far from perfect.

From the reader’s (I don’t use the word “user” anymore) point of view, the Project is implemented through a set of Web pages. Behind the scenes, the Project is implemented with an almost dizzying array of free and open source software. The most significant processes implementing the Project are listed and briefly described below:

  • mirroring – Many of the text mining services require extensive analysis of the original items. To accomplish this, the texts were mirrored locally. By feeding the venerable wget program a list of URLs based on Internet Archive unique identifiers, mirroring the content is trivial.
  • named-entity extraction – There was a desire to list the underlying names, places, and organizations from each text. These things can put a text into a context for the reader. Are there a lot of Irish names? Is there a preponderance of place names from the United States? To accomplish this task and assist in answering these sorts of questions, a Perl script was written around the Stanford Named Entity Recognizer. This script (txt2ner.pl) extracts the entities, looks them up in DBpedia, and saves metadata (abstracts, URLs to images, as well as latitudes & longitudes) describing the entities to a locally defined XML file for later processing. (See an example.) A CGI script (ner.cgi) was then written to provide a reader-interface to these files.
  • parts-of-speech extraction – Just as lists of named entities can be enlightening, so can lists of a text’s parts-of-speech. Are the pronouns generally speaking masculine or feminine? Over all, are the verbs active or passive? To what degree are color words used in the text? To begin to answer these sorts of questions, a Perl script exploited a Perl module called Lingua::TreeTagger. The script (pos-tag.pl) extracts parts-of-speech from a text file and saves the result as a simple tab-delimited file for later use. (See an example.)
  • word/phrase tabulation and concordancing – To support rudimentary word and phrase tabulations, as well as a concordance interface, an Apache module (Concordance.pm) was written around two more Perl modules. The first, Lingua::EN::Ngram, extracts word and phrase occurrences. The second, Lingua::Concordance, provides an object-oriented keyword-in-context interface. (A small usage sketch appears just after this list.)
  • metadata enhancement and storage – A rudimentary catalog listing the items in the Project’s corpus was implemented using a Perl module called MyLibrary. The MARC records describing each item in the corpus were first parsed. Desired metadata elements were mapped to MyLibrary fields, facets, and terms. Each item in the corpus was then analyzed in terms of word length as well as readability score through the use of yet another Perl module called Lingua::EN::Fathom. These additional metadata elements were then added to the underlying “catalog”. To accomplish this set of tasks two additional Perl scripts were written (add-author-title.pl and add-size-readability.pl).
  • HTML creation – A final Perl script was written to bring all the parts together. By looping through the “catalog” this script (make-catalog.pl) generates HTML files designed for display on tablet devices. These HTML files make heavy use of JQuery Mobile, and since no graphic designer was a part of the Project, JQuery Mobile was a godsend.
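To give a flavor of the concordance layer, below is a minimal sketch written against the synopsis of Lingua::Concordance. The method names follow my reading of the module’s documentation, and the file name (pamphlet.txt) is made up:

    #!/usr/bin/env perl
    # concordance.pl - a minimal sketch modeled on the Lingua::Concordance
    # synopsis; the input file name (pamphlet.txt) is made up
    use strict;
    use warnings;
    use Lingua::Concordance;

    # slurp a plain text version of an item from the corpus
    open my $fh, '<', 'pamphlet.txt' or die $!;
    my $text = do { local $/; <$fh> };

    # display each occurrence of the query word in context
    my $concordance = Lingua::Concordance->new;
    $concordance->text($text);
    $concordance->query('god');
    $concordance->radius(30);    # characters of context on either side
    print "$_\n" foreach $concordance->lines;

If memory serves, the module also sports a map method returning the positional data behind the histogram feature illustrated below.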

The result — the Catholic Youth Literature Project — is a system that enables the reader to view the texts online as well as do some analysis against them. The system functions in that it does not output invalid data, and it does provide enhanced access to the texts.


The home page is simply a list of covers and associated titles.


The Internet Archive online reader is one option for “close” reading.


The list of parts-of-speech provides the reader with some context. Notice how the word “good” is the most frequently used adjective.


The histogram feature of the concordance allows the reader to see where selected words appear in the text. For example, in this text the word “god” is used rather consistently.


A network diagram allows the reader to see what words are used “in the same breath” as a given word. Here the word “god” is frequently used in conjunction with “good”, “holy”, “give”, and “love”.

Summary

To summarize, the Catholic Youth Literature Project is far from complete. For example, it has yet to be determined whether or not the implementation has enabled students to accomplish the Project’s stated goals. Does it really enhance the use and understanding of a text? Second, the process of selecting, acquiring, digitizing, and integrating the texts into the library’s collection is not streamlined. Finally, usability of the implementation is still in question. On the other hand, the implementation is more than a prototype and does exemplify how the process of reading is evolving over time.

Summary of the Catholic Pamphlets Project

Posted on March 27, 2012 in Uncategorized by Eric Lease Morgan

This posting summarizes the Catholic Pamphlets Project — a process to digitize sets of materials from the Hesburgh Libraries collection, add the result to a repository, provide access to the materials through the catalog and “discovery system” as well as provide enhanced access to the materials through a set of text mining interfaces. In a sentence, the Project has accomplished most of its initial goals both on time and under budget.

The Project’s original conception

The Catholic Pamphlets Project began early in 2011 with the writing of a President’s Circle Award proposal. The proposal detailed how sets of Catholic Americana would be digitized in conjunction with the University Archives. The Libraries was to digitize the 5,000 Catholic pamphlets located in Special Collections, and the Archives was to digitize its set of Orestes Brownson papers. In addition, a graduate student was to be hired to evaluate both collections, write introductory essays describing why they are significant research opportunities, and do an environmental scan regarding the use of digital humanities computing techniques applied against digitized content. In the end, both the Libraries and the Archives would have provided digital access to the materials through things like the library catalog, its “discovery” system, and the “Catholic Portal”, as well as laid the groundwork for further digitization efforts.

Getting started

By late Spring a Project leader was identified, and their responsibilities were to coordinate the Libraries’s side of the Project in conjunction with a number of library departments including Special Collections, Cataloging, Electronic Resources, Preservation, and Systems. By this time it was also decided not to digitize the entire collection of 5,000 items, but instead to hire someone for the summer to digitize as many items as possible and process them accordingly – a workflow test. In the meantime, in-house and vendor-supplied digitization costs would be compared.

By this time a list of specific people had also been identified to work on the Project, and these people became affectionately known as Team Catholic Pamphlets:

Aaron Bales • Eric Lease Morgan (leader) • Jean McManus • Julie Arnott • Lisa Stienbarger • Louis Jordan • Mark Dehmlow • Mary McKeown • Natasha Lyandres • Rajesh Balekai • Rick Johnson • Robert Fox • Sherri Jones

Work commences

Throughout the summer a lot of manual labor was applied against the Project. A recent graduate from St. Mary’s (Eileen Laskowski) was hired to scan pamphlets. After one or two weeks of work, she was relocated from the Hesburgh Library to the Art Slide Library where others were doing similar work. She used equipment borrowed from Desktop Computing and Network Services (DCNS) and the Slide Library. Both DCNS and the Slide Library were gracious about offering their resources. By the end of the summer Ms. Laskowski had digitized just less than 400 pamphlets. The covers were digitized in 24-bit color. The inside pages were gray-scale. Everything was digitized at 600 dots per inch. These pamphlets generated close to 92 GB of data in the form of TIFF and PDF files.

Because the Pamphlets Project was going to include links to concordance (text mining) interfaces from within the library’s catalog, Sherri Jones facilitated two hour-long workshops for interested library faculty and staff in order to explain and describe the interfaces. The first of these workshops took place in the early summer. The second took place in late summer.

In the meantime, two summer students working for Jean McManus determined the copyright status of each of the 5,000 pamphlets. They used a decision-making flowchart as the basis of their work. This flowchart has since been reviewed by the University’s General Counsel and deemed a valid tool for determining copyright. Of the total, approximately 4,000 pamphlets (80%) have been determined to be in the public domain.

Starting around June, Team Catholic Pamphlets decided to practice with the technical services aspect of the Project. Mary McKeown, Natasha Lyandres, and Lisa Stienbarger wrote a cataloging policy for the soon-to-be created MARC records representing the digital versions of the pamphlets. Aaron Bales exported MARC records representing the print versions of the pamphlets. PDF versions of approximately thirty-five pamphlets were placed on a Libraries’s Web server by Rajesh Balekai and Rob Fox. Plain text versions of the same pamphlets were placed on a different Web server, and a concordance application was configured against them. Using the content of the copyright database being maintained by Jean McManus’s students, Eric Lease Morgan updated the MARC records representing the print versions to include links to the PDF and concordance versions of the pamphlets. The records were passed along to Lisa Stienbarger, who updated them according to the newly created policy. The records were then loaded into a pre-production version of the catalog for verification. Upon examination the Team learned that users of Internet Explorer were not able to consistently view the PDF versions. After some troubleshooting, Rob Fox wrote a work-around to the problem, and the MARC records were changed to reflect new URLs of the PDF versions. Once this work was done the thirty-five records were loaded into the production version of the catalog, and from there they seamlessly flowed into the library’s “discovery system” – Primo. Throughout this time Julie Arnott and Dorothy Snyder applied quality control measures against the digitized content and wrote a report documenting their findings. Team Catholic Pamphlets had successfully digitized and processed thirty-five pamphlets.

With these successes under our belts, and with the academic year commencing, Team Catholic Pamphlets celebrated with a pot-luck lunch and rested for a few weeks.

The workflow test concludes

In early October the Team got together again and unanimously decided to process the balance of the digitized pamphlets in order to put them into production. Everybody wanted to continue practicing with their established workflows. The PDF and plain text versions of the pamphlets were saved on their respective Web servers. The TIFF versions of the pamphlets were saved to the same file system as the library’s digital repository. URLs were generated. The MARC records were updated and saved to pre-production. After verification, they were moved to production and flowed to Primo. What took at least three months earlier in the year now took only a few weeks. By Halloween Team Catholic Pamphlets finished its workflow test processing the totality of the digitized pamphlets.

Access to the collection

There is no single home page for the collection of digitized pamphlets. Instead, each of the pamphlets has been cataloged, and through the use of a command-line search strategy one can pull up all the pamphlets in the library’s catalog — http://bit.ly/sw1JH8

From the results list it is best to view a record’s details in order to see all of the options associated with the pamphlet.

command-line search results page

From the details page one can download and read the pamphlet in the form of a PDF document or the reader can use a concordance to apply “distant reading” techniques against the content.

details of a specific Catholic pamphlets record

50 most frequently used words in a selected pamphlet

Conclusions and next steps

The Team accomplished most of its goals, and we learned many things, but not everything was accomplished. No graduate student was hired, and therefore no overarching description of the pamphlets (nor of the content from the Archives) was written. Similarly, no environmental scan regarding the use of digital humanities against the collections was done. While 400 of our pamphlets are accessible from the catalog as well as the “discovery system”, no testing has been done to determine their ultimate usability.

The fledgling workflow can still be refined. For example, the process of identifying content to digitize, removing it from Special Collections, digitizing it, returning it to Special Collections, doing quality control, adding the content to the institutional repository, establishing the text mining interfaces, updating the MARC records (with copyright information, URLs, etc.), and ultimately putting the lot into the catalog is a bit disjointed. Each part works well unto itself, but the process as a whole does not run like a well-oiled machine, yet. Like any new workflow, more practice is required.

This Project provided Team members with the opportunity to apply traditional library skills against a new initiative, and it was relished by everybody involved. The Project required the expertise of faculty and staff. It required the expertise of people in Collection Management, Preservation, Technical Services, Public Services, and Systems. Everybody applied their highly developed professional knowledge to a new and challenging problem. The Project was a cross-departmental holistic process, and it even generated interest in participation from people outside the Team. There are many people across the Libraries who would like to get involved with wider digitization efforts because they thought this Project was exciting and had the potential for future growth. They too see it as an opportunity for professional development.

While there are 5,000 pamphlets in the collection, only 4,000 of them are deemed in the public domain (legally digitizable). Four-hundred (400) pamphlets were scanned by a single person at a resolution of 600 dots/inch over a period of three months for a total cost of approximately $3,400. This is a digitization rate of approximately 1,200 pamphlets per year at a cost of $13,600. At this pace it would take the Libraries close to 3 1/3 years to digitize the 4,000 pamphlets for an approximate out-of-pocket labor cost of $44,880. If the dots/inch qualification were reduced by half – which still exceeds the needs for quality printing purposes – then it would take a single person approximately 1.7 years to do the digitization at a total cost of approximately $22,440. The time spent doing digitization could be reduced even further if the dots/inch qualification were reduced some more. One hundred fifty dots/inch is usually good enough for printing purposes. Based on our knowledge, it would cost less than $3,000 to purchase three or four computer/scanning set-ups similar to the ones used during the Project. If the Libraries were to hire as many as four students to do digitization, then we estimate the public domain pamphlets could be digitized in less than two years at a cost of approximately $25,000.

There are approximately 184,996 pages of Catholic pamphlet content, but approximately 80% of these pages (4,000 pamphlets of the total 5,000) are legally digitizable – 147,997 pages. A reputable digitization vendor will charge around $0.25/page to do digitization. Consequently, the total out-of-pocket cost of using the vendor is close to $37,000.
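For those who like to check the arithmetic, here is a tiny sketch reproducing the estimates above, using the same rounding as the prose; no new data is introduced:

    #!/usr/bin/env perl
    # cost-model.pl - reproducing the back-of-the-envelope estimates above
    use strict;
    use warnings;

    # in-house digitization: one student at 600 dots/inch
    my $pd_pamphlets = 4_000;     # pamphlets deemed public domain
    my $per_year     = 1_200;     # digitization rate used in the prose
    my $labor_year   = 13_600;    # dollars per year, from $3,400 per three months
    my $years        = sprintf '%.1f', $pd_pamphlets / $per_year;    # ~3.3 years
    printf "in-house: %s years at about \$%.0f\n", $years, $years * $labor_year;

    # vendor digitization: $0.25/page against the legally digitizable pages
    my $pd_pages = int( 184_996 * 0.80 );    # about 147,997 pages
    printf "vendor: about \$%.0f\n", $pd_pages * 0.25;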

Team Catholic Pamphlets recommends going forward with the Project using an in-house digitization process. Despite the administrative overhead associated with hiring and managing sets of digitizers, the in-house process affords the Libraries a means to learn and practice with digitization. The results will make the Libraries more informed and better educated and thus empower us to make higher quality decisions in the future.

Patron-Driven Acquisitions: A Symposium at the University of Notre Dame

Posted on March 19, 2012 in Uncategorized by Eric Lease Morgan

The Professional Development Committee at the Hesburgh Libraries of the University of Notre Dame is sponsoring a symposium on the topic of patron-driven acquisitions:

  • Who – Anybody and everybody is invited
  • What – A symposium
  • When – Monday, May 21, 2012 from 9 o’clock to 1 o’clock, then lunch (included), and then informal roundtable discussions
  • Where – Hesburgh Library Auditorium, University of Notre Dame
  • Cost – free

After lunch and given enough interest, we will also be facilitating roundtable discussions on the topic of the day. To register, simply send your name to Eric Lease Morgan, and you will be registered. Easy!

Need a map? Download a campus map highlighting where to park and the location of the library.

Presentations

Here is a list of the presentations to get the discussion going:

  • Silent Partners in Collection Development: Patron-Driven Acquisitions at Purdue (Judith M. Nixon, Robert S. Freeman, and Suzanne M. Ward) – The Purdue University Libraries was an early implementer of patron-driven acquisitions (PDA). In 2000, interlibrary loan began buying rather than borrowing books that patrons requested. Following a brief review of the origin and reasons for this service, we will report on the results of an analysis of the 10,000 books purchased during the program’s first ten years. We examined data on the users’ status and department affiliations; most frequent publishers; and bibliographers’ analysis of the books in the top six subjects assessing whether the purchases were relevant to the collection. In addition, we will summarize the highlights of a comparative circulation study of PDA books vs. normally acquired books: do patron-selected books or librarian-selected books circulate at a higher rate? The conclusions of these PDA print book investigations encouraged the Libraries to begin an e-book PDA pilot program. We will report some early insights and surprises with this pilot. A librarian with selecting responsibilities in several subject areas will discuss his perspective of the value that PDA programs bring to collection building.
  • The Long Tail of PDA (Dracine Hodges) – Patron-driven acquisitions (PDA) titles are known to generate usage at least once at the moment a short-term loan or purchase is triggered. Despite the current PDA buzz, many remain unconvinced of the potential for ongoing circulation. There is a palpable level of skepticism over the sustainability of this buffet model with regard to user interest and the validity of shrinking librarian mediation in the selection process. To discuss these issues, data for content purchased during Ohio State’s 2009/2010 e-book PDA pilot will be examined. Several years of usage activity will be charted and analyzed for budgetary implications, including cost per use. In addition, key issues surrounding academic library patron-driven collection development philosophies will be explored, particularly during this period when traditional methods of collection development must be maintained while concurrently moving toward what appears to be the future with patron-driven collection development.
  • Acquisitions and User Services: responsive and responsible ways to build the collection (Lynn Wiley) – Patron-driven acquisitions (PDA) or purchase on demand programs are a natural extension of what libraries do naturally, that is, build programs to allow users to gain access to research materials. PDA programs provide for direct accountability on purchase decisions, especially relevant in the present economic situation. The Association of College and Research Libraries 2010 top ten trends in academic libraries (ACRL, 2010) listed PDA as a new force in collection development explaining: “Academic library collection growth is driven by patron demand and will include new resource types.” ACRL noted how this change was facilitated by vendor tools that provide controls for custom-made purchase on demand programs. In consortia settings, a PDA model can broaden access across the collective collection. This presentation describes the evolution of purchase on demand programs at the University of Illinois at Urbana-Champaign (UIUC) and includes a detailed description of several programs recently implemented at UIUC as well as a PDA program within a statewide academic library consortium that tested and analyzed purchase on demand mechanisms for print purchases. These programs describe a natural progression of models used to expand PDAs from ILL requesting to the discovery and selection model where bibliographic records were preselected and then made available in the online catalog for ordering. Statistics on use and users’ comments will be shared, as well as comments on future applications.
  • Demand Driven Acquisitions: University of Notre Dame Experience (Fall 2011 – Spring 2012) (Laura A. Sill and Natasha Lyandres) – Using one-time special funding, the Hesburgh Libraries of Notre Dame launched a DDA pilot project for ebooks in conjunction with YBP and Ebrary in September 2011. The implementation date followed several months of planning. The goal of the project was to test patron-driven acquisitions as the method for adding ebook titles of high interest to the library collection. Up until that point, ebooks had been acquired primarily through the purchase of large-scale vendor packages. One such package acquired in July of 2011 was Academic Complete on subscription, which provided access to 70,000 ebooks through the Ebrary platform. Also available to bibliographers and selectors was the ability to place firm orders through YBP for Ebrary titles. Our presentation will provide an overview of the pilot project and our thoughts on the effectiveness of this method vis-à-vis other ebook acquisitions methods currently utilized by the Libraries. We will discuss the particular challenges of running the pilot with Ebrary in conjunction with Academic Complete, as well as future possibilities for expanding our use of DDA to include additional use options such as short-term loans, greater integration with approval plans, and DDA for print.

Speakers

Here is a list of the speakers, their titles, and the briefest of bios:

  • Robert S. Freeman (Associate Professor of Library Science, Reference, Languages and Literatures Librarian) – Robert S. Freeman has worked at Purdue University since 1997, where he is a reference librarian and the liaison to the Department of English as well as the School of Languages and Cultures. He has an M.A. in German from UNC-Chapel Hill and an M.S. in Library and Information Science from the University of Illinois at Urbana-Champaign. Interested in the history of libraries, he co-edited and contributed to Libraries to the People: Histories of Outreach (McFarland, 2003). More recently, he co-edited a special issue of Collection Management on PDA.
  • Dracine Hodges (Head, Acquisitions Department) – Dracine Hodges is Head of the Acquisitions Department at The Ohio State University Libraries. Previously, she was the Monographs Librarian and the Mary P. Key Resident Librarian. She received her BA from Wesleyan College and MLIS from Florida State University. She manages the procurement of print and electronic resources for the OSU Libraries. Most of her career has focused on acquisitions, but she has also worked as a reference librarian and in access services. Dracine is active in ALCTS serving on the Membership Committee and as past chair of the Tech Services Workflow Efficiency Interest Group. She is also an editorial assistant for College & Research Libraries and a graduate of the Minnesota Institute.
  • Natasha Lyandres (Head, Acquisitions, Resources and Discovery Services Department (ARDS)) – Natasha Lyandres, MLIS from San Jose State University, began her professional career in 1993 as cataloging and special projects librarian at the Hoover Institution Library and Archives, Stanford University. From 1996 to 2001 she served as Reference and Collections Development Librarian at Joyner Library, East Carolina University. Natasha joined the Hesburgh Libraries of Notre Dame in 2001. She has held positions in the areas of serials, cataloging, acquisitions, and electronic resources. Natasha is currently Head of the Acquisitions, Resources and Discovery Services Department and Russian and East European Studies bibliographer.
  • Judith M. Nixon (Professor of Library Science and Education Librarian) – Judith M. Nixon holds degrees from Valparaiso University and the University of Iowa. She has worked at Purdue University since 1984 as head of the Consumer & Family Sciences Library, the Management & Economics Library, and the Humanities & Social Science Library. Currently, as Education Librarian, she develops the education collections. Her publishing record includes over 35 articles and books. Her interest in patron-driven acquisitions led to co-editing a special issue of Collection Management that focuses on this topic and a presentation at La Biblioteca Del Futuro in Mexico City in October of 2001.
  • Laura A. Sill (Supervisor, Monographic Acquisitions Unit, ARDS) – Laura A. Sill, MA from the University of Wisconsin-Madison, has been a member of the Hesburgh Libraries of Notre Dame library faculty since 1989. She has held positions in the areas of acquisitions, serials, and systems. Laura is currently Visiting Associate Librarian, supervising Monographic Acquisitions in the Acquisitions, Resources and Discovery Services Department.
  • Suzanne M. Ward (Professor of Library Science and Head, Collection Management) – Suzanne (Sue) Ward holds degrees from UCLA, the University of Michigan, and Memphis State University. She has worked at the Purdue University Libraries since 1987 in several different positions. Her current role is Head, Collection Management. Professional interests include patron-driven acquisitions (PDA) and print retention issues. Sue has published one book and over 25 articles on various aspects of librarianship. She recently co-edited a special issue of Collection Management that focuses on PDA, and her book Guide to Patron-Driven Acquisitions is in press at the American Library Association.
  • Lynn Wiley (Head of Acquisitions and Associate Professor of Library Administration) – Lynn Wiley has been a librarian for over thirty years, working for academic libraries on the east coast and, since 1995, at the University of Illinois. Lynn worked in public service roles until 2005, when she switched to acquisitions. She has written and presented widely on meeting user needs and on how library partnerships can best achieve this. She is active in state, regional, and national professional associations and is also on the editorial board of LRTS. Her overall goal is to meet the needs of users easily and seamlessly.

Emotional Intelligence

Posted on February 23, 2012 in Uncategorized by Eric Lease Morgan

This is sort of like a travelogue — a description of what I learned by attending a workshop here at Notre Dame on the topic of emotional intelligence. In a sentence, emotional intelligence begins with self-awareness, moves through self-management to control impulses, continues with social awareness and the ability to sense the emotions of others, and matures with relationship management used to inspire and manage conflict.

The purpose of the workshop — attended by approximately thirty people and sponsored by the University’s Human Resources Department — was to make attendees more aware of how they can build workplace relationships by being more emotionally intelligent.

The workshop’s facilitator began by outlining the Rule of 24: when you find yourself in an emotionally charged situation, wait twenty-four hours before attempting resolution. After twenty-four hours, ask yourself, “How do I feel?” If the answer is still anxious, then wait another twenty-four hours. If not, approach the other person with a structured script; in other words, practice what you hope to communicate. If an immediate solution is necessary, or when actually having the difficult conversation, remember a few points:

  1. pause — give yourself time
  2. slow your rate of speech
  3. soften the tone of your voice
  4. ask a few questions
  5. allow the other person to “save face”

When having a difficult conversation, try prefacing it with something like this: “I am going to tell you something, and it is not my intent to make you feel poorly. It is difficult for me as well.” Clarify this at the beginning as well as at the end of the conversation.

The facilitator also outlined a process for learning emotional intelligence:

  1. begin by being self-aware
  2. identify a problem that happened
  3. ask yourself, “What did I say or do that hurt the situation?”
  4. ask yourself, “What can I say or do to improve the situation?”
  5. ask yourself, “What did I do to improve the situation?”

I garnered quite a number of interesting quotes from the facilitator:

  • “When talking to people, don’t treat everybody the same. Take into consideration the personality of others. This is akin to the ‘Platinum Rule’ presented to the library faculty and staff a few weeks ago.”
  • “Emotions are tools if we use them properly.”
  • “Realize that ‘I don’t have to like you to work well with you. Let’s be productive together.'”
  • “It is not about being right as much as it is about getting the job done.”
  • “If you can see the humor in the situation, then things will go a lot better. Have fun with it.”
  • “Be prepared for the other person’s shock, anger, or disappointment.”
  • “Think about collaboration as if it were a sporting event where everybody knows the rules of the game.”
  • “Ask yourself, ‘What strengths do they bring to the table? What are the things they do that get in the way?’ And don’t think of these things as weaknesses.”
  • “In many cases it is not what you say, but how you say it. You can disagree without being emotional.”
  • “We are here to find solutions not find fault. Define common ground.”
  • “What are you doing that I’m not doing? Ask others for advice and how to deal with specific individuals.”

There were a number of other people from the Libraries who attended the workshop, and most of us gathered around a table afterwards to discuss what we learned. I think it would behoove the balance of the Libraries to be more aware of emotional intelligence issues.

Much of the workshop was about controlling and managing emotions as if they were things to be tamed. In the end I wanted to know when and how emotions could be encouraged or even indulged for the purposes of experiencing beauty, love, or spirituality. But alas, the workshop was about the workplace and relationship building.