Have you ever wondered how to make your video accessible to people who are hard of hearing or who speak a different native language? Have you ever wondered how your video can benefit visual learners, who absorb and recall information best by reading rather than listening? Have you ever wondered how to give your audience the freedom to search across your video and jump to the exact keywords that interest them? You are not alone; we have been wondering about that for a while. Fortunately, we now have a solution: the interactive transcript.
The interactive transcript gives the audience a new way to enjoy your media content. Similar to subtitles in many ways, the interactive transcript is displayed next to the audio/video source. As the audience hears the words being spoken, they also see the matching words highlighted or underlined in the transcript. The entire transcript is clickable: users can click on any word and start playing the audio/video from that exact point. They can also search the transcript and jump to the part that interests them. In short, the interactive transcript gives users a much richer experience with media content.
1. Click on the word “mammals”, and the video jumps to that point and starts to play from there.
2. When “mammals” is spoken in the video, it is also highlighted in the transcript.
3. Search for “fish”, and all the parts that mention “fish” are highlighted in the transcript.
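Under the hood, an interactive transcript is simply a list of words with timestamps kept in sync with the player's playback clock. Here is a minimal sketch of that core logic; the function and cue names are illustrative, not taken from EasyCaption's or 3Play Media's actual APIs:

```python
import bisect

# A transcript is a list of (start_time_seconds, word) cues, sorted by time.
cues = [(0.0, "Have"), (0.4, "you"), (0.9, "ever"), (1.3, "wondered")]
starts = [t for t, _ in cues]

def active_word_index(current_time):
    """Return the index of the word being spoken at current_time.

    This drives the highlighting: as the player's clock advances,
    the UI highlights cues[active_word_index(t)].
    """
    i = bisect.bisect_right(starts, current_time) - 1
    return max(i, 0)

def seek_time_for_word(word_index):
    """Return the playback position to jump to when a word is clicked."""
    return cues[word_index][0]

# While playing at t = 1.0s, the word "ever" is highlighted...
print(cues[active_word_index(1.0)][1])   # -> ever
# ...and clicking "wondered" seeks the player to 1.3s.
print(seek_time_for_word(3))             # -> 1.3
```

A real player would call `active_word_index` on each playback tick and set the media element's current time from `seek_time_for_word` when a word is clicked; search is simply a text match over the same cue list.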
There are numerous interactive transcript providers, of which 3Play Media and SpeakerText CaptionBox are among the most popular. We ran a small pilot with 3Play Media’s service: we provided them audio/video files along with the transcripts, and they synchronized the text with the media using automated speech technology. The results were very satisfying, though we found the price to be a bit high.
Could the price be a showstopper? If you are like me, a strong believer in open source, you must be wondering whether something similar is available for free. Well, after some digging, voila! Pipwerks has already published APIs for adding captions and an interactive transcript to online videos, and they are completely open source. Now it’s time to roll up our sleeves and climb onto the shoulders of giants. Thanks to the EasyCaption and Kaltura APIs, I was able to help an Irish Studies professor build a repository (Irish Stories) to collect and share Irish immigrants’ stories in a more interactive and engaging way. If you are interested, here is an example. Please feel free to contact me at email@example.com for any technical details.
MakerBot Replicator 2
You just came up with a great idea for a new product that you’re incredibly excited about. What do you do?
Print it out!
You heard me right. Using vector-based templates, the MakerBot Replicator 2 is able to print in 3-D. Rather than ink, 3-D printers deposit the chosen material in layers, creating a physical object from a digital file. Technology for printing in 3-D is about a decade old. Modern 3-D printers use an additive process, building up from small layers for high definition. The MakerBot was one of the first self-replicating 3-D printers to enter the marketplace. In addition to the wide variety of products it can print, the MakerBot has the ability to print the majority of its own parts. The price point for this technology has dropped dramatically, making 3-D printers widespread and accessible for the first time. With the Replicator 2, MakerBot hopes to bring this technology to the masses.
The MakerBot uses software to analyze the image before printing.
You don’t need to be an engineer to create something printable in 3-D. Free design libraries, such as Thingiverse, allow anyone to find premade CAD files that are compatible with the MakerBot. These files can be loaded onto a memory card, or the printer can be connected directly to your computer. There is also an app that allows users to scan objects with their phone and have them printed. The MakerBot software examines the vector file and figures out how to make it printable, calculating the best layer thickness to minimize materials and time. Currently, only a few dozen materials, primarily metals and plastics, are compatible with the MakerBot, but the list continues to expand as more printable materials are developed.
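The additive process described above starts with slicing: the software cuts the 3-D model into horizontal layers, and each layer's outline is what the printer traces. Here is a toy sketch of one slicing step, intersecting a single mesh triangle with a horizontal plane; real slicers such as MakerBot's software handle many edge cases this omits:

```python
def slice_triangle(triangle, z):
    """Intersect one mesh triangle with the horizontal plane at height z.

    triangle: three (x, y, z) vertices. Returns the (x, y) endpoints of
    the cut segment, or None if the plane misses the triangle. Vertices
    lying exactly on the plane are ignored for simplicity.
    """
    points = []
    for i in range(3):
        px, py, pz = triangle[i]
        qx, qy, qz = triangle[(i + 1) % 3]
        if (pz < z) != (qz < z):          # this edge crosses the plane
            t = (z - pz) / (qz - pz)      # linear interpolation factor
            points.append((px + t * (qx - px), py + t * (qy - py)))
    return tuple(points) if len(points) == 2 else None

# Slice a triangle leaning through z = 1: the cut runs from (1, 0) to (0, 1).
tri = ((0, 0, 0), (2, 0, 2), (0, 2, 2))
print(slice_triangle(tri, 1.0))   # -> ((1.0, 0.0), (0.0, 1.0))
```

Repeating this for every triangle at every layer height yields the stack of 2-D outlines that the printer fills in, layer by layer.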
3-D printing can be used for rapid modeling.
3-D print technology is used in many museums for educational purposes, or to lay out upcoming exhibits. Digital fabrication reduces the cost of manufacturing in many applications. Engineers, architects, and designers use the technology for rapid prototyping. Many custom dental fittings are now 3-D printed. The technology seems to be finding its place in a variety of disciplines. MakerBot hopes that in the coming years, 3-D printing will find its place in your home. In the future, if you’re looking for children’s toys, jewelry, or parts to fix things, it may be as simple as hitting print.
Improving Maternal Health Through Mobile Technology
mHealth Pilot in Uganda; Vaccination day at the Nnindye health center
Does SMS increase the utilization of available medical resources in the developing world?
To find the answer, a cross-functional team from the University of Notre Dame has launched a mobile health (mHealth) pilot in the Nnindye Villages of rural Uganda. By providing the local health center with open source SMS mobile technology and training, the research team is investigating whether the increased communication correlates with improved health-seeking behavior.
The project is funded by a grant from the Verizon Foundation, and is a collaboration among the Notre Dame Initiative for Global Development (NDIGD), the Ford Family Program in Human Development Studies and Solidarity, and the Office of Information Technologies. Additional program assistance is being provided by the Eck Institute for Global Health. OIT staff member Tom Marentette traveled to Uganda in January to test and implement the technology to be used in the mHealth intervention. Through the innovative use of mobile technologies, the project advances our shared university mission of preeminent research and Catholic character.
Tom Marentette meeting with Nnindye Health Center staff (Jan 2012)
Countries in the developing world, such as Uganda, have seen tremendous growth in mobile ownership and technologies over the last several years, and this program is eager to leverage those advancements. To establish a baseline for the research, an initial census was conducted in December 2012. It included data on local demographics, employment, education, and health care access.
Nnindye Villages Health Center
Academic Technologies spent the last two academic years researching how Notre Dame faculty and students used tablets to augment or replace traditional paper-based course materials. The OIT Academic Technologies team partnered with faculty and staff from the Mendoza College of Business, Notre Dame Law School, Hesburgh Libraries, and the Center for the Study of Languages and Cultures to purchase 50 Apple iPads and 60 Samsung Galaxy Tabs for faculty and students to use in pilot courses. We spent a significant amount of time with each faculty member involved in the tablet course pilots to find electronic versions of their textbooks that provided students the most interactivity. The level of interactivity and the quality of the interaction features available depend on the content and the capabilities of each tablet app. We also found that tablets with large color screens solved the challenge of viewing heavily illustrated eTextbooks, unlike eReading devices such as the Amazon Kindle or Barnes & Noble Nook, which use black-and-white E Ink technology.
Academic Technologies is also assisting faculty in developing their own eBooks, which have the potential to create more dynamic and engaging learning experiences. We helped Dr. Elliott Visconsi develop Shakespeare’s The Tempest for iPad, an iPad app designed for social reading, listening, annotating, authoring, and sharing. We collaborated with a student developer working for the Institute for Latino Studies to create Day of the Dead – Experience the Tradition, an iOS app that gives readers an interactive way to learn about the Day of the Dead (Día de los Muertos) through many beautiful multimedia elements. And we are currently supporting Assistant Professor Andre Murnieks, who teaches graphic design, as he uses iBooks Author to develop a textbook that demonstrates the interactive design principles he teaches. Apple has also requested that we create interactive iBooks and iTunes U courses that will include materials from the University of Notre Dame OpenCourseWare courses created by faculty with assistance from the Kaneb Center for Teaching & Learning.
Our current work to support faculty exploring ePublishing, eTextbook authoring, and iOS application development is a small part of a broader multi-year effort to examine how the University of Notre Dame can create a mobile eLearning ecosystem to support the creation, distribution, and consumption of eBooks and eTextbooks on different eReading devices.
Skyhook enables GPS-like location services using cellular tower and Wi-Fi access point signal strength.
Notre Dame is working with Skyhook to get a complete mapping of campus and fine-tune the system to provide the most accurate location services possible for devices that do not have GPS capabilities, and for all devices when inside buildings and unable to see satellites for GPS location.
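Skyhook's actual positioning algorithm is proprietary, but a common, simple way to turn access-point observations into a position is a weighted centroid: average the surveyed locations of the visible access points, weighting stronger signals more heavily. The sketch below illustrates only that generic technique, with made-up coordinates:

```python
def weighted_centroid(access_points):
    """Estimate a position from [((x, y), rssi_dBm), ...] observations.

    Each access point's surveyed location is weighted by its received
    signal strength converted from dBm to linear power, so nearer
    (stronger) APs pull the estimate toward themselves.
    """
    weights = [10 ** (rssi / 10) for _, rssi in access_points]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(access_points, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(access_points, weights)) / total
    return (x, y)

# Two APs heard equally well: the estimate lands halfway between them.
print(weighted_centroid([((0, 0), -40), ((10, 0), -40)]))   # -> (5.0, 0.0)
# A stronger signal from the first AP pulls the estimate toward it.
x, y = weighted_centroid([((0, 0), -30), ((10, 0), -40)])
print(x < 5.0)   # -> True
```

The campus survey work described above is what supplies the known access-point locations such an estimator depends on; the denser and more accurate that survey, the better the indoor position fix.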
The ePortfolio system is designed to allow students to collect and showcase their best work, aid in student-adviser communications, and help students play a more direct role in their personal development.
For more information, please contact Xiaojing Duan (xduan @ nd.edu).
Gigapan is a combination of hardware and software that creates panoramic photos from a series of very high resolution photos. You can pan around and zoom into incredible levels of detail in the finished panorama. For example, it has been used to create a panorama of an archaeological dig site in Indiana.
Notre Dame faculty in Architecture and Archaeology are exploring both Gigapan and 3-D scanning technology and processes to create high-resolution, photo-realistic models of building exteriors and interiors, as well as archaeology field sites. Working with photography experts in the Center for Creative Computing (CCC) and visualization specialists in the Center for Research Computing (CRC), the Academic Technologies team is coordinating an interdisciplinary project exploring how to combine data and images from both the Gigapan and the 3-D scanner into integrated visualizations that can be displayed in the Digital Visualization Theatre (Jordan Hall of Science) or in other ways. For more information, please contact Paul Turner (pturner1 @ nd.edu).
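The deep zooming a panorama viewer offers is typically served from a multi-resolution tile pyramid: the full image is repeatedly halved, and each level is cut into fixed-size tiles so only the visible region ever needs downloading. A small sketch of the pyramid arithmetic (the tile size and level numbering here are illustrative, not Gigapan's actual format):

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of zoom levels needed so level 0 fits in a single tile."""
    return max(0, math.ceil(math.log2(max(width, height) / tile))) + 1

def tiles_at_deepest_level(width, height, tile=256):
    """Tile count for the full-resolution level of the pyramid."""
    return math.ceil(width / tile) * math.ceil(height / tile)

# A 2-gigapixel panorama (64,000 x 32,000 pixels):
print(pyramid_levels(64000, 32000))          # -> 9
print(tiles_at_deepest_level(64000, 32000))  # -> 31250
```

A viewer then fetches only the tiles that intersect the viewport at the current zoom level, which is what makes panning and zooming around a gigapixel image feel instant.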
The Notre Dame Mobile Web (http://m.nd.edu) provides essential information and services on users’ mobile devices, with an interface optimized for on-the-go access. It was released to the general public in beta in February 2010 for expanded testing and feedback.
For more information, please contact Xiaojing Duan (xduan @ nd.edu).