My first Python script, concordance.py

Posted on November 10, 2014 in Uncategorized by Eric Lease Morgan

Below is my first Python script, concordance.py:

#!/usr/bin/env python2

# concordance.py - do a KWIC search against a text
#
# usage: ./concordance.py <file> <word>

# Eric Lease Morgan <emorgan@nd.edu>
# November 5, 2014 - my first real python script!


# require
import sys
import nltk
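# note: nltk.word_tokenize needs the Punkt tokenizer models; run nltk.download('punkt') once beforehand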

# get input; needs sanity checking
file = sys.argv[ 1 ]
word = sys.argv[ 2 ]

# do the work
text = nltk.Text( nltk.word_tokenize( open( file ).read( ) ) )
text.concordance( word )

# done
quit()

Given the path to a plain text file as well as a word, the script will output no more than twenty-five lines containing the given word. It is a keyword-in-context (KWIC) search engine, one of the oldest text mining tools in existence.
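
For example, assuming a plain text version of Walden has been saved as walden.txt, a command like the following ought to return up to twenty-five concordance lines for the word “pond”:

$ ./concordance.py walden.txt pond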

The script is my first foray into Python scripting. While Perl is cool (and “kewl”), it behooves me to learn the language of others if I expect good communication to happen. This includes others using my code and me using the code of others. Moreover, Python has a library (module) called the Natural Language Toolkit (NLTK), which makes it relatively easy to get my feet wet with text mining in this environment.

Lexicons and sentiment analysis – Notes to self

Posted on July 9, 2014 in Uncategorized by Eric Lease Morgan

This is mostly a set of notes to myself on lexicons and sentiment analysis.

A couple of weeks ago I asked Jeffrey Bain-Conkin to read at least one article about sentiment analysis (sometimes called “opinion mining”), and specifically I asked him to help me learn about the use of lexicons in such a process. He came back with a few more articles and a list of pointers to additional information. Thank you, Jeffrey! I am echoing the list here for future reference, for the possible benefit of others, and to remove some of the clutter from my to-do list. While I haven’t read and examined each of the items in great detail, just re-creating the list increases my knowledge. The list is divided into three sections: lexicons, software, and “more”.

Lexicons

  • Arguing Lexicon – “The lexicon includes patterns that represent arguing.”
  • BOOTStrep Bio-Lexicon – “Biological terminology is a frequent cause of analysis errors when processing literature written in the biology domain. For example, ‘retro-regulate’ is a terminological verb often used in molecular biology but it is not included in conventional dictionaries. The BioLexicon is a linguistic resource tailored for the biology domain to cope with these problems. It contains the following types of entries: a set of terminological verbs, a set of derived forms of the terminological verbs, general English words frequently used in the biology domain, [and] domain terms.”
  • English Phrases for Information Retrieval – “Goal of the ‘English Phrases for IR’ (EP4IR) project at the Radboud University Nijmegen (The Netherlands) is the development of a grammar and lexicon of English suitable for applications in Information Retrieval and available in the public domain.”
  • General Inquirer – “The General Inquirer is basically a mapping tool. It maps each text file with counts on dictionary-supplied categories. The currently distributed version combines the ‘Harvard IV-4’ dictionary content-analysis categories, the ‘Lasswell’ dictionary content-analysis categories, and five categories based on the social cognition work of Semin and Fiedler, making for 182 categories in all. Each category is a list of words and word senses. A category such as ‘self references’ may contain only a dozen entries, mostly pronouns. Currently, the category ‘negative’ is our largest with 2291 entries. Users can also add additional categories of any size.”
  • NRC word-emotion association lexicon – “The lexicon has human annotations of emotion associations for more than 24,200 word senses (about 14,200 word types). The annotations include whether the target is positive or negative, and whether the target has associations with eight basic emotions (joy, sadness, anger, fear, surprise, anticipation, trust, disgust).” The URL also points to a large number of articles on sentiment analysis in general.
  • Subjectivity Lexicon – “The Subjectivity Lexicon (list of subjectivity clues) that is part of OpinionFinder…”
  • WordNet – “WordNet® is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. The resulting network of meaningfully related words and concepts can be navigated with the browser. WordNet is also freely and publicly available for download. WordNet’s structure makes it a useful tool for computational linguistics and natural language processing.”
  • WordNet Domains – “WordNet Domains is a lexical resource created in a semi-automatic way by augmenting WordNet with domain labels. WordNet Synsets have been annotated with at least one semantic domain label, selected from a set of about two hundred labels structured according the WordNet Domain Hierarchy. Information brought by domains is complementary to what is already in Wordnet. A domain may include synsets of different syntactic categories and from different WordNet sub-hierarchies. Domains may group senses of the same word into homogeneous clusters, with the side effect of reducing word polysemy in WordNet.”
  • WordNet-Affect – “WordNet-Affect is an extension of WordNet Domains, including a subset of synsets suitable to represent affective concepts correlated with affective words. Similarly to our method for domain labels, we assigned to a number of WordNet synsets one or more affective labels (a-labels). In particular, the affective concepts representing emotional state are individuated by synsets marked with the a-label emotion. There are also other a-labels for those concepts representing moods, situations eliciting emotions, or emotional responses. The resource was extended with a set of additional a-labels (called emotional categories), hierarchically organized, in order to specialize synsets with a-label emotion. The hierarchical structure of new a-labels was modeled on the WordNet hyperonym relation. In a second stage, we introduced some modifications, in order to distinguish synsets according to emotional valence. We defined four addictional a-labels: positive, negative, ambiguous, and neutral.”

Software / applications

  • Linguistic Inquiry and Word Count – “Linguistic Inquiry and Word Count (LIWC) is a text analysis software program designed by James W. Pennebaker, Roger J. Booth, and Martha E. Francis. LIWC calculates the degree to which people use different categories of words across a wide array of texts, including emails, speeches, poems, or transcribed daily speech. With a click of a button, you can determine the degree any text uses positive or negative emotions, self-references, causal words, and 70 other language dimensions.”
  • OpinionFinder – “OpinionFinder is a system that processes documents and automatically identifies subjective sentences as well as various aspects of subjectivity within sentences, including agents who are sources of opinion, direct subjective expressions and speech events, and sentiment expressions.”
  • SenticNet – “SenticNet is a publicly available semantic resource for concept-level sentiment analysis. The affective common-sense knowledge base is built by means of sentic computing, a paradigm that exploits both AI and Semantic Web techniques to better recognize, interpret, and process natural language opinions over the Web. In particular, SenticNet exploits an ensemble of graph-mining and dimensionality-reduction techniques to bridge the conceptual and affective gap between word-level natural language data and the concept-level opinions and sentiments conveyed by them. SenticNet is a knowledge base that can be employed for the development of applications in fields such as big social data analysis, human-computer interaction, and e-health.”
  • SPECIALIST NLP Tools – “The SPECIALIST Natural Language Processing (NLP) Tools have been developed by the The Lexical Systems Group of The Lister Hill National Center for Biomedical Communications to investigate the contributions that natural language processing techniques can make to the task of mediating between the language of users and the language of online biomedical information resources. The SPECIALIST NLP Tools facilitate natural language processing by helping application developers with lexical variation and text analysis tasks in the biomedical domain. The NLP Tools are open source resources distributed subject to these [specific] terms and conditions.”
  • Visual Sentiment Ontology – “The analysis of emotion, affect and sentiment from visual content has become an exciting area in the multimedia community allowing to build new applications for brand monitoring, advertising, and opinion mining. There exists no corpora for sentiment analysis on visual content, and therefore limits the progress in this critical area. To stimulate innovative research on this challenging issue, we constructed a new benchmark and database. This database contains a Visual Sentiment Ontology (VSO) consisting of 3244 adjective noun pairs (ANP), SentiBank a set of 1200 trained visual concept detectors providing a mid-level representation of sentiment, associated training images acquired from Flickr, and a benchmark containing 603 photo tweets covering a diverse set of 21 topics. This website provides the above mentioned material for download…”

Lists of additional information

  • Lexical databases and corpora – “This is a list of links to lexical databases and corpora, organized by language or language group. The resources on this page were initially compiled from announcements on the LINGUIST list and web-search results. This is not intended to be an exhaustive list, but rather a place to organize and store potentially useful links as I [Jen Smith] encounter them.”
  • Opinion Mining, Sentiment Analysis, and Opinion Spam Detection – a long list of links pointing to articles, etc. about opinion mining.
  • Sentiment Symposium Tutorial – “This tutorial covers all aspects of building effective sentiment analysis systems for textual data, with and without sentiment-relevant metadata like star ratings. We proceed from pre-processing techniques to advanced use cases, assessing common approaches and identifying best practices.”

Summary

What did I learn? I learned that to do sentiment analysis, lexicons are often employed. I learned that to evaluate a corpus for a particular sentiment, a researcher first needs to create a lexicon embodying that sentiment. Each element in the lexicon then needs to be assigned a quantitative value. The lexicon is then compared against the corpus, and the occurrences are tabulated. Once tabulated, the scores can be summed, measurements taken, observations made and graphed, and conclusions/judgments drawn. Correct? Again, thank you, Jeffrey!
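
To make the process concrete, below is a minimal Python sketch of the tabulate-and-sum step described above. The tiny lexicon and its weights are made up purely for illustration; a real analysis would employ one of the lexicons listed above:

# score.py - a naive, lexicon-based sentiment score for a plain text file
import re
import sys

# a toy lexicon; each element is assigned a quantitative value (values invented for illustration)
lexicon = { 'good': 1, 'love': 2, 'happy': 1, 'bad': -1, 'hate': -2, 'sad': -1 }

# read the text and tokenize it very crudely
words = re.findall(r'[a-z]+', open(sys.argv[1]).read().lower())

# compare the lexicon to the corpus, tabulating the occurrences
counts = { word: words.count(word) for word in lexicon if word in words }

# sum the scores and report
score = sum(lexicon[word] * counts[word] for word in counts)
print('occurrences:', counts)
print('sentiment score:', score)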

“Librarians love lists.”

What’s Eric Reading?

Posted on July 4, 2014 in Uncategorized by Eric Lease Morgan

I have resurrected an application/system of files used to archive and disseminate things (mostly articles) I’ve been reading. I call it What’s Eric Reading? From the original About page:

I have been having fun recently indexing PDF files.

For the past six months or so I have been keeping the articles I’ve read in a pile, and I was rather amazed at the size of the pile. It was about a foot tall. When I read these articles I “actively” read them — meaning, I write, scribble, highlight, and annotate the text with my own special notation denoting names, keywords, definitions, citations, quotations, list items, examples, etc. This active reading process: 1) makes for better comprehension on my part, and 2) makes the articles easier to review and pick out the ideas I thought were salient. Being the librarian I am, I thought it might be cool (“kewl”) to make the articles into a collection. Thus, the beginnings of Highlights & Annotations: A Value-Added Reading List.

The techno-weenie process for creating and maintaining the content is something this community might find interesting:

  1. Print article and read it actively.
  2. Convert the printed article into a PDF file — complete with embedded OCR — with my handy-dandy ScanSnap scanner.
  3. Use MyLibrary to create metadata (author, title, date published, date read, note, keywords, facet/term combinations, local and remote URLs, etc.) describing the article.
  4. Save the PDF to my file system.
  5. Use pdftotext to extract the OCRed text from the PDF and index it along with the MyLibrary metadata using Solr. (See the sketch after this list.)
  6. Provide a searchable/browsable user interface to the collection through a mod_perl module.
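
For the curious, here is a minimal Python sketch of step #5 — extract the OCRed text with pdftotext and post it, along with a bit of metadata, to Solr. The Solr URL, core name, and field names are assumptions; my production code is Perl and MyLibrary-specific:

# index.py - extract the text from a PDF file and add it to a Solr index
import subprocess
import requests

pdf  = 'article.pdf'                                   # a locally saved PDF (assumed)
solr = 'http://localhost:8983/solr/reading/update'     # a Solr core named "reading" (assumed)

# extract the OCRed text; pdftotext writes to standard output when given "-"
text = subprocess.run(['pdftotext', pdf, '-'], capture_output=True, text=True).stdout

# combine the text with a bit of metadata and send the whole thing to Solr
document = { 'id': pdf, 'title': 'An example article', 'full_text': text }
requests.post(solr, json=[document], params={'commit': 'true'}).raise_for_status()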

Software is never done, and if it were then it would be called hardware. Accordingly, I know there are some things I need to do before I can truly deem the system version 1.0. At the same time my excitement is overflowing and I thought I’d share some geekdom with my fellow hackers.

Fun with PDF files and open source software.

Visualising Data: A Travelogue

Posted on June 17, 2014 in Uncategorized by Eric Lease Morgan


Last month a number of us from the Hesburgh Libraries attended a day-long workshop on data visualisation facilitated by Andy Kirk of Visualising Data. This posting documents some of the things I learned.

First and foremost, we were told there are five steps to creating data visualisations. From the handouts, and supplemented with my own understanding, they include:

  1. establishing purpose – This is where you ask yourself, “Why is a visualisation important here? What is the context of the visualisation?”
  2. acquiring, preparing and familiarising yourself with the data – Here different data types were reviewed (open, nominal, ordinal, interval, and ratio), and we were introduced to the hidden costs of massaging and enhancing data, which is something I do with text mining and others do in statistical analysis.
  3. establishing editorial focus – This is about asking and answering questions regarding the visualisation’s audience. What is their education level? How much time will they have to absorb the content? What medium(s) may be best used for the message?
  4. conceiving the design – Using just paper and pencil, draw, brainstorm, and outline the appearance of the visualisation.
  5. constructing the visualisation – Finally, do the work of making the visualisation a reality. Increasingly this work is done by exploiting the functionality of computers, specifically for the Web.

Here are a few meaty quotes:

  • Context is king.
  • Data preparation is a hidden cost in visualization.
  • Data visualisation is a tool for understanding, not fancy ways of showing numbers.
  • Data visualisation is about analysis and communication.

One of my biggest take-aways was the juxtaposition of two spectra: reading to feeling, and explaining to exploring. In other words, to what degree is the visualisation expected to be read or felt, and to what degree does it offer the possibility to explain or explore the data? Kirk illustrated the idea like this:

              read
                ^
                |
                |
  explain <-----+-----> explore
                |
                |
                v
              feel

The reading/feeling spectrum reminded me of the usability book entitled Don’t Make Me Think. The explaining/exploring spectrum made me consider interactivity in visualisations.

I learned two other things along the way: 1) creating visualisations is a team effort requiring a constellation of skilled people (graphic designers, statisticians, content specialists, computer technologists, etc.), and 2) it is entirely plausible to combine more than one graphic — data set illustration — into a single visualisation.

Now I just need to figure out how to put these visualisation techniques into practice.

ORCID Outreach Meeting (May 21 & 22, 2014)

Posted on June 13, 2014 in Uncategorized by Eric Lease Morgan

This posting documents some of my experiences at the ORCID Outreach Meeting in Chicago (May 21 & 22, 2014).

As you may or may not know, ORCID is an acronym for “Open Researcher and Contributor ID”.* It is also the name of a non-profit organization whose purpose is to facilitate the creation and maintenance of identifiers for scholars, researchers, and academics. From ORCID’s mission statement:

ORCID aims to solve the name ambiguity problem in research and scholarly communications by creating a central registry of unique identifiers for individual researchers and an open and transparent linking mechanism between ORCID and other current researcher ID schemes. These identifiers, and the relationships among them, can be linked to the researcher’s output to enhance the scientific discovery process and to improve the efficiency of research funding and collaboration within the research community.

A few weeks ago the ORCID folks facilitated a user’s group meeting. It was attended by approximately 125 people (mostly librarians or people who work in/around libraries), and some of the attendees came from as far away as Japan. The purpose of the meeting was to build community and provide an opportunity to share experiences.

The meeting itself was divided into a number of panel discussions and a “codefest”. The panel discussions described successes (and failures) in creating, maintaining, enhancing, and integrating ORCID identifiers into workflows, institutional repositories, grant application processes, and information systems. Presenters described poster sessions, marketing materials, information sessions, computerized systems, policies, and politics all surrounding the implementation of ORCID identifiers. Quite frankly, nobody seemed to have a hugely successful story to tell because too few researchers seem to think there is a need for identifiers. As a librarian and information professional, I understand the problem (as well as the solution), but outside the profession there does not seem to be much of a problem to be solved.

That said, the primary purpose of my attendance was to participate in the codefest. There were less than a dozen of us coders, and we all wanted to use the various ORCID APIs to create new and useful applications. I was most interested in the possibilities of exploiting the RDF output obtainable through content negotiation against an ORCID identifier, a la the command line application called curl:

curl -L -H "Accept: application/rdf+xml" http://orcid.org/0000-0002-9952-7800

Unfortunately, the RDF output only included the merest of FOAF-based information, and I was interested in bibliographic citations.

Consequently I shifted gears, took advantage of the ORCID-specific API, and decided to do some text mining. Specifically, I wrote a Perl program — orcid.pl — that takes an ORCID identifier as input (e.g., 0000-0002-9952-7800) and then:

  1. queries ORCID for all the works associated with the identifier**
  2. extracts the DOIs from the resulting XML
  3. feeds the DOIs to a program called Tika for the purposes of extracting the full text from documents
  4. concatenates the result into a single stream of text, and sends the whole thing to standard output

For example, the following command will create a “bag of words” containing the content of all the writings that are associated with my ORCID identifier and have DOIs:

$ ./orcid.pl 0000-0002-9952-7800 > morgan.txt
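
For those who do not speak Perl, here is a rough Python sketch of the same pipeline. It assumes an Apache Tika server is listening on localhost port 9998, and it naively pulls the DOIs out of the ORCID XML with a regular expression rather than parsing the XML properly:

# orcid-bag.py - given an ORCID identifier, output a "bag of words" built from its DOIs
import re
import sys
import requests

orcid = sys.argv[1]                       # e.g. 0000-0002-9952-7800

# step #1: query ORCID for all the works associated with the identifier
works = requests.get('http://pub.orcid.org/' + orcid + '/orcid-works').text

# step #2: extract the DOIs from the resulting XML (crudely)
dois = set(re.findall(r'10\.\d{4,9}/[^\s<"]+', works))

# steps #3 and #4: resolve each DOI, feed the result to Tika, and concatenate the text
for doi in dois:
    document = requests.get('http://dx.doi.org/' + doi).content
    text = requests.put('http://localhost:9998/tika', data=document,
                        headers={'Accept': 'text/plain'}).text
    sys.stdout.write(text)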

Using orcid.pl I proceeded to create a corpus of files based on the ORCID identifiers of eleven Outreach Meeting attendees. I then used my “tiny text mining tools” to do analysis against the corpus. The results were somewhat surprising:

  • The most significant key words shared across the corpus of eleven people included: information, system, site, and orcid.
  • The authors Haak and Paglione wrote the most similar articles. (They both wrote about ORCID.) Morgan and Havert were a very close second. (We both wrote about “information” and “sites”.)
  • The DOIs often point to splash pages, and consequently my “bags of words” included lots of content about cookies and publishers as opposed to meaty journal article content. ***

Ideally, the hack I wrote would allow a person to feed one or more identifiers to a system and output a report summarizing and analyzing the journal article content at a glance — a quick & easy “distant reading” tool.

I finished my “hack” in one sitting which gave me time to attend the presentations of the second day.

All of the hacks were added to a pile and judged by a vendor on their utility. I’m proud to say that the hack by Jeremy Friesen — a colleague here at Notre Dame — won a prize. His application followed the links to people’s publications, created a screen dump of each publication’s root page, and made a montage of the result. It was a visual version of orcid.pl. Congratulations, Jeremy!

I’m very glad I attended the Meeting. I reconnected with a number of professional colleagues, and my awareness of researcher identifiers was increased. More specifically, there seems to be a growing number of these identifiers. Examples for myself include my ISNI, Library of Congress name authority, ORCID, VIAF, ResearcherID, and Scopus author identifiers.

And for a really geeky good time, I learned to create the following set of RDF triples with the use of these identifiers:

@prefix dc: <http://purl.org/dc/elements/1.1/> .
  <http://dx.doi.org/10.1108/07378831211213201> dc:creator
  "http://isni.org/isni/0000000035290715" ,
  "http://id.loc.gov/authorities/names/n94036700" ,
  "http://orcid.org/0000-0002-9952-7800" ,
  "http://viaf.org/viaf/26290254" ,
  "http://www.researcherid.com/rid/F-2062-2014" ,
  "http://www.scopus.com/authid/detail.url?authorId=25944695600" .

I learned about the (subtle) difference between an identifier and an authority control record. I learned of the advantages and disadvantages of the various identifiers. And through a number of serendipitous email exchanges, I learned about ISNIs, which are an ISO standard for identifiers, seemingly popular in Europe but relatively unknown here in the United States. For more detail, see the short discussion of these things in the Code4Lib mailing list archives.

Now might be a good time for some of my own grassroots efforts to promote the use of ORCID identifiers.

* Thanks, Pam Masamitsu!

** For a good time, try http://pub.orcid.org/0000-0002-9952-7800/orcid-works, or substitute your identifier to see a list of your publications.

*** The problem with splash screens is exactly what the very recent CrossRef Text And Data Mining API is designed to address.

CrossRef’s Text and Data Mining (TDM) API

Posted on June 11, 2014 in Uncategorized by Eric Lease Morgan

A few weeks ago I learned that CrossRef’s Text And Data Mining (TDM) API had reached version 1.0, and this blog posting describes my tertiary experience with it.

A number of months ago I learned about Prospect, a fledgling API being developed by CrossRef. Its purpose was to facilitate direct access to full text journal content without going through the hassle of screen scraping journal article splash pages. Since then the API has been upgraded to version 1.0 and renamed the Text And Data Mining API. This is how the API is expected to be used:

  1. Given a (CrossRef) DOI, resolve the DOI using HTTP content negotiation. Specifically, request text/turtle output.
  2. From the response, capture the HTTP header called “links”.
  3. Parse the links header to extract URIs denoting full text, licenses, and people.
  4. Make choices based on the values of the URIs.

What sorts of choices is one expected to make? Good question. First and foremost, a person is supposed to evaluate the license URI. If the URI points to a palatable license, then you may want to download the full text, which seems to come in PDF and/or XML flavors. With version 1.0 of the API, I have discovered ORCID identifiers are included in the header. I believe these denote authors/contributors of the articles.

Again, all of this is based on the content of the HTTP links header. Here is an example header, with carriage returns added for readability:

<http://downloads.hindawi.com/journals/isrn.neurology/2013/908317.pdf>;
rel="http://id.crossref.org/schema/fulltext"; type="application/pdf"; version="vor",
<http://downloads.hindawi.com/journals/isrn.neurology/2013/908317.xml>;
rel="http://id.crossref.org/schema/fulltext"; type="application/xml"; version="vor",
<http://creativecommons.org/licenses/by/3.0/>; rel="http://id.crossref.org/schema/license";
version="vor", <http://orcid.org/0000-0002-8443-5196>; rel="http://id.crossref.org/schema/person",
<http://orcid.org/0000-0002-0987-9651>; rel="http://id.crossref.org/schema/person",
<http://orcid.org/0000-0003-4669-8769>; rel="http://id.crossref.org/schema/person"
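
For illustration, here is a minimal Python sketch of steps #1 through #3 — analogous to, but much simpler than, the little Perl library described below. The parsing of the links header is deliberately simple-minded:

# tdm.py - resolve a CrossRef DOI and list the full text, license, and person URIs
import sys
import requests

doi = sys.argv[1]                         # a CrossRef DOI

# step #1: resolve the DOI using HTTP content negotiation, requesting text/turtle
response = requests.get('http://dx.doi.org/' + doi, headers={'Accept': 'text/turtle'})

# step #2: capture the HTTP links header
links = response.headers.get('link', '')

# step #3: parse the header into rel/URI pairs
uris = {}
for chunk in links.split(','):
    parts = chunk.split(';')
    uri = parts[0].strip().strip('<>')
    for parameter in parts[1:]:
        if parameter.strip().startswith('rel='):
            rel = parameter.split('=', 1)[1].strip().strip('"')
            uris.setdefault(rel, []).append(uri)

# step #4 is left to the reader: make choices based on the values of the URIs
for rel in uris:
    print(rel, uris[rel])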

I wrote a tiny Perl library — extractor.pl — used to do steps #1 through #3, above. It returns a reference to a hash containing the values in the links header. I then wrote three Perl scripts which exploit the library:

  1. resolver.cgi – a Web-based application taking a DOI as input and returning the URIs in the links header, if they exist. Your mileage with the script will vary because most DOIs are not associated with full text URIs.
  2. search.cgi – given a simple query, use CrossRef’s Metadata API to find no more than five articles associated with full text content, and then resolve the links to the full text.
  3. search.pl – a command-line version of search.cgi

Here are a few comments. As a person who increasingly wants direct access to full text articles, I think the Text And Data Mining API is a step in the right direction. Now all that needs to happen is for publishers to get on board and feed CrossRef the URIs of full text content along with the associated licensing terms. I found the links header to be a bit convoluted, but this is what programming libraries are for. I could not find a comprehensive description of what name/value combinations can exist in the links header; for example, the documentation alludes to beginning and ending dates. CrossRef seems to have a growing number of interesting applications and APIs which are probably going unnoticed, and there is an opportunity of some sort lurking in there. Specifically, somebody ought to do something with the text/turtle (RDF) output of the DOI resolutions.

More fun with HTTP and bibliographics.

Code4Lib jobs topic

Posted on May 15, 2014 in Uncategorized by Eric Lease Morgan

This posting describes how to turn off and on a thing called the jobs topic in the Code4Lib mailing list.

Code4Lib is a mailing list whose primary focus is computers and libraries. Since its inception in 2004, it has grown to include about 2,800 members from all around the world but mostly from the United States. The Code4Lib community has also spawned an annual conference, a refereed online journal, its own domain, and a growing number of regional “franchises”.

The Code4Lib community has also spawned job postings. Sometimes these job postings flood the mailing list, and while it is entirely possible to use mail filters to exclude such postings, there is also “more than one way to skin a cat”. Since the mailing list uses the LISTSERV software, it has been configured to support the idea of “topics”, and through this feature a person can configure their subscription preferences to exclude job postings. Here’s how. By default every subscriber to the mailing list will get all postings. If you want to turn off getting the jobs postings, then email the following command to listserv@listserv.nd.edu:

SET code4lib TOPICS: -JOBS

If you want to turn on the jobs topic and receive the notices, then email the following command to listserv@listserv.nd.edu:

SET code4lib TOPICS: +JOBS

Sorry, but if you subscribe to the mailing list in digest mode, then the topics command has no effect; you will get the job postings no matter what.

HTH.

Special thanks go to Jodi Schneider and Joe Hourcle who pointed me in the direction of this LISTSERV functionality. Thank you!

The 3D Printing Working Group is maturing, complete with a shiny new mailing list

Posted on April 9, 2014 in Uncategorized by Eric Lease Morgan

A couple of weeks ago Kevin Phaup took the lead in facilitating a 3D printing workshop here in the Libraries’ Center For Digital Scholarship. More than a dozen students from across the University participated. Kevin presented them with an overview of 3D printing, pointed them towards an online 3D image editing application (Shapeshifter), and everybody created various objects, which Matt Sisk has been diligently printing. The event was deemed a success, and there will probably be more specialized workshops scheduled for the Fall.

Since the last blog posting there has also been another Working Group meeting. A short dozen of us got together in Stinson-Remick, where we discussed the future possibilities for the Group. The consensus was to create a more formal mailing list, maybe create a directory of people with 3D printing interests, and see about doing something more substantial — with a purpose — for the University.

To those ends, a mailing list has been created. Its name is 3D Printing Working Group. The list is open to anybody, and its purpose is to facilitate discussion of all things 3D printing around Notre Dame and the region. To subscribe, address an email message to listserv@listserv.nd.edu, and in the body of the message include the following command:

subscribe nd-3d-printing Your Name

where Your Name is… your name.

Finally, the next meeting of the Working Group has been scheduled for Wednesday, May 14. Sponsored by Bob Sutton of Springboard Technologies, it will be held in Innovation Park across from the University and will take place from 11:30 to 1 o’clock. I’m pretty sure lunch will be provided. The purpose of the meeting will be to continue outlining the future directions of the Group as well as to see a demonstration of a printer called the Isis3D.

Digital humanities and libraries

Posted on April 3, 2014 in Uncategorized by Eric Lease Morgan

This posting outlines a current trend in some academic libraries, specifically, the inclusion of digital humanities into their service offerings. It provides the briefest of introductions to the digital humanities, and then describes how one branch of the digital humanities — text mining — is being put into practice here in the Hesburgh Libraries’ Center For Digital Scholarship at the University of Notre Dame.

(This posting and its companion one-page handout was written for the Information Organization Research Group, School of Information Studies at the University of Wisconsin Milwaukee, in preparation for a presentation dated April 10, 2014.)

Digital humanities

For all intents and purposes, the digital humanities is a newer rather than older scholarly endeavor. A priest named Father Busa is considered the “Father of the Digital Humanities”; in 1965 he worked with IBM to evaluate the writings of Thomas Aquinas. With the advent of the Internet, ubiquitous desktop computing, an increased volume of digitized content, and sophisticated markup languages like TEI (the Text Encoding Initiative), the processes of digital humanities work have moved away from a fad towards a trend. While digital humanities work is sometimes called a discipline, this author sees it as more akin to a method. It is a process of doing “distant reading” to evaluate human expression. (The phrase “distant reading” is attributed to Franco Moretti, who coined it in a book entitled Graphs, Maps, Trees: Abstract Models for a Literary History. Distant reading is complementary to “close reading”, and is used to denote the idea of observing many documents simultaneously.) The digital humanities community has grown significantly in the past ten or fifteen years, complete with international academic conferences, graduate school programs, and scholarly publications.

Digital humanities work is a practice where the digitized content of the humanist is quantitatively analyzed as if it were the content studied by a scientist. This sort of analysis can be done against any sort of human expression: written and spoken words, music, images, dance, sculpture, etc. Invariably, the process begins with counting and tabulating. This leads to measurement, which in turn provides opportunities for comparison. From here patterns can be observed and anomalies perceived. Finally, predictions, theses, and judgments can be articulated. Digital humanities work does not replace the more traditional ways of experiencing expressions of the human condition. Instead it supplements the experience.

This author often compares the methods of the digital humanist to the reading of a thermometer. Suppose you observe an outdoor thermometer and it reads 32° (Fahrenheit). This reading, in and of itself, carries little meaning. It is only a measurement. In order to make sense of the reading it is important to put it into context. What is the weather outside? What time of year is it? What time of day is it? How does the reading compare to other readings? If you live in the Northern Hemisphere and the month is July, then the reading is probably an anomaly. On the other hand, if the month is January, then the reading is perfectly normal and not out of the ordinary. The processes of the digital humanist make it possible to take many measurements from a very large body of materials in order to evaluate things like texts, sounds, images, etc. They make it possible to evaluate the totality of Victorian literature, the use of color in paintings over time, or the rhythmic similarities & differences between various forms of music.

Digital humanities centers in libraries

As the more traditional services of academic libraries become more accessible via the Internet, libraries have found the need to evolve. One manifestation of this evolution is the establishment of digital humanities centers. Probably one of the oldest of these centers is located at the University of Virginia, but they now exist in many libraries across the country. These centers provide a myriad of services including combinations of digitization, markup, website creation, textual analysis, speaker series, etc. Sometimes these centers are akin to computing labs. Sometimes they are more like small but campus-wide departments staffed with scholars, researchers, and graduate students.

The Hesburgh Libraries’ Center For Digital Scholarship at the University of Notre Dame was recently established in this vein. The Center supports services around geographic information systems (GIS), data management, statistical analysis of data, and text mining. It is located in a 5,000 square foot space on the Libraries’ first floor and includes a myriad of computers, scanners, printers, a 3D printer, and collaborative work spaces. Below is an annotated list of text mining projects the author has worked on through the Center. It is intended to give the reader a flavor of the types of work done in the Hesburgh Libraries:

  • Great Books – This was almost a tongue-in-cheek investigation to calculate which book was the “greatest” from a set of books called the Great Books of the Western World. The editors of the set defined a great book as one which discussed any one of a number of great ideas both deeply and broadly. These ideas were tabulated and compared across the corpus and then sorted by the resulting calculation. Aristotle’s Politics was determined to be the greatest book and Shakespeare was determined to have written nine of the top ten greatest books when it comes to the idea of love.
  • HathiTrust Research Center – The HathiTrust Research Center is a branch of the HathiTrust. The Center supports a number of algorithms used to do analysis against reader-defined worksets. The Center For Digital Scholarship facilitates workshops on the use of the HathiTrust Research Center as well as a small set of tools for programmatically searching and retrieving items from the HathiTrust.
  • JSTOR Tool – Data For Research (DFR) is a freely available alternative interface to the bibliographic index called JSTOR. DFR enables the reader to search the entirety of JSTOR through faceted querying. Search results are tabulated, enabling the reader to create charts and graphs illustrating the results. Search results can be downloaded for more detailed investigations. JSTOR Tool is a Web-based application allowing the reader to summarize and do distant reading against these downloaded results.
  • PDF To Text – Text mining almost always requires the content of its investigation to be in the form of plain text, but much of the content used by people is in PDF. PDF To Text is a Web-based tool which extracts the plain text from PDF files and provides a number of services against the result (readability scores, ngram extraction, concordancing, and rudimentary parts-of-speech analysis).
  • Perceptions of China – This project is in the earliest stages. Prior to visiting China, students identify photographs and write short paragraphs describing what they think of China. After visiting China the process is repeated. The faculty member leading the students on their trips to China wants to look for patterns of perception in the paragraphs.
  • Poverty Tourism – A university senior believes they have identified a trend — the desire to tour poverty-stricken places. They identified as many as forty websites advertising “Come visit our slum”. Working with the Center they programmatically mirrored the content of the remote websites. They programmatically removed all the HTML tags from the mirrors. They then used Voyant Tools as well as various ngram tabulation tools to do distant reading against the corpus. Their investigations demonstrated the preponderant use of the word “you”, and they posit this is because the authors of the websites are trying to get readers to imagine being in a slum.
  • State Trials – In collaboration with a number of other people, transcripts of the State Trials dating between 1650 and 1700 were analyzed. Digital versions of the Trials were obtained, and a number of descriptive analyses were done. The content was indexed and a timeline was created from search results. Ngram extraction was done as well as parts-of-speech analysis. Various types of similarity measures were computed based on named entities and the over-all frequency of words (vectors). A stop word list was created based on additional frequency tabulations. Much of this analysis was visualized using word clouds, line charts, and histograms. This project is an excellent example of how much of digital humanities work is collaborative and requires the skills of many different types of people.
  • Tiny Text Mining Tools – Text mining is rooted in the counting and tabulation of words. Computers are very good at counting and tabulating. To that end a set of tiny text mining tools has been created enabling the Center to perform quick & dirty analysis against one or more items in a corpus. Written in Perl, the tools implement a well-respected relevancy ranking algorithm (term-frequency inverse document frequency or TFIDF) to support searching and classification, a cosine similarity measure for clustering and “finding more items like this one”, a concordancing (keyword in context) application, and an ngram (phrase) extractor.

Summary

Text mining, and digital humanities work in general, is simply the application of computing techniques to the content of human expression. Their use is similar to Galileo’s use of the magnifying glass. Instead of turning it down to count the number of fibers in a cloth (or to write an email message), it is being turned up to gaze at the stars (or to analyze the human condition). What he finds there is not so much truth as new ways to observe. The same is true of text mining and the digital humanities. They are additional ways to “see”.

Links

Here is a short list of links for further reading:

  • ACRL Digital Humanities Interest Group – This is a mailing list whose content includes mostly announcements of interest to librarians doing digital humanities work.
  • asking for it – Written by Bethany Nowviskie, this is a thorough response to the OCLC report, below.
  • dh+lib – A website amalgamating things of interest to digital humanities librarianship (job postings, conference announcements, blog postings, newly established projects, etc.)
  • Digital Humanities and the Library: A Bibliography – Written by Miriam Posner, this is a nice list of print and digital readings on the topic of digital humanities work in libraries.
  • Does Every Research Library Need a Digital Humanities Center? – A recently published, OCLC-sponsored report intended for library directors who are considering the creation of a digital humanities center.
  • THATCamp – While not necessarily library-related, THATCamp is an organization and a process for facilitating informal digital humanities workshops, usually in academic settings.

Tiny Text Mining Tools

Posted on April 2, 2014 in Uncategorized by Eric Lease Morgan

I have posted to Github the very beginnings of a Perl library used to support simple and introductory text mining analysis — tiny text mining tools.

Presently the library is implemented as a set of subroutines stored in a single file, supporting:

  • simple in-memory indexing and single-term searching
  • relevancy ranking through term-frequency inverse document frequency (TFIDF) for searching and classification
  • cosine similarity for clustering and “finding more items like this one”

I use these subroutines and the associated Perl scripts to do quick & dirty analysis against corpuses of journal articles, books, and websites.
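
As a taste of the underlying math, here is a minimal Python sketch of the same two ideas — TFIDF weighting and cosine similarity — using nothing but the standard library. It is not the Perl library itself, just an illustration of the arithmetic behind it, with a made-up three-document corpus:

# tfidf.py - toy TFIDF vectors and cosine similarity for a tiny corpus
import math
from collections import Counter

corpus = { 'a': 'librarians love lists and books'.split(),
           'b': 'librarians index books with solr'.split(),
           'c': 'printers print plastic objects'.split() }

# document frequency of each word
df = Counter(word for words in corpus.values() for word in set(words))

def tfidf(words):
    # term frequency times inverse document frequency
    tf = Counter(words)
    return { w: (tf[w] / len(words)) * math.log(len(corpus) / df[w]) for w in tf }

def cosine(a, b):
    # cosine of the angle between two sparse vectors
    shared = set(a) & set(b)
    dot  = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vectors = { name: tfidf(words) for name, words in corpus.items() }
print('a vs b:', cosine(vectors['a'], vectors['b']))
print('a vs c:', cosine(vectors['a'], vectors['c']))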

I know, I know. It would be better to implement these things as a set of Perl modules, but I’m practicing what I preach. “Give it away even if it is not ready.” The ultimate idea is to package these things into a single distribution, and enable researchers to have them at their fingertips as opposed to a Web-based application.