ALA Midwinter 2012: Metadata Interest Group

ALA Midwinter 2012: ALCTS Metadata Interest Group presentations, Sunday, Jan 22, 2012, 8-10 a.m.

Links to the presentation slides are available on ALA Connect.

The Metadata Interest Group session featured three presentations; a short synopsis of each follows.

Video metadata @ Cornell: implementing Kaltura by Jason Kovari

The session began with Jason Kovari's presentation on Kaltura at Cornell. Kaltura is an open source content management system for video and audio, and the project at Cornell is still a work in progress. The Cornell Library provides metadata expertise to the campus community by defining baseline and extended metadata field sets for the project. The presentation focused mainly on the library's work providing metadata field sets for two collections: Muller-Kluge (videotaped interviews) and the Experimental Television Center (video art). Providing metadata for video collections is challenging in general: the collections are distributed across campus, use various streaming platforms, are in different languages, and differ structurally, with some described at the segment level and others at the chapter level.

The baseline metadata field set includes core fields such as creator Net ID and contributing unit alongside title, description, keywords, and the like. The extended metadata set consists of technical and administrative metadata, which is captured automatically. Jason noted that Kaltura also poses some metadata challenges. Vocabulary can be controlled only in the baseline fields, and Kaltura does not support nested elements; for example, name, first name, and Net ID have no wrapping element and are not repeatable. Kaltura also lacks an underlying common schema to map metadata elements to, although a crosswalk to PBCore, a metadata standard for describing digital and analog media, could be developed. Jason concluded by pointing to the need for multi-level hierarchical description of video content and for integration with DSpace, discovery layers, and the digital archival repository.
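A crosswalk of the kind mentioned above could be sketched as a simple field mapping. In the sketch below, the Kaltura field names and the PBCore namespace handling are illustrative assumptions, not Cornell's actual profile:

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from Kaltura baseline fields to PBCore elements;
# the real field names in Cornell's profile may differ.
KALTURA_TO_PBCORE = {
    "title": "pbcoreTitle",
    "description": "pbcoreDescription",
    "keywords": "pbcoreSubject",
    "creator_netid": "pbcoreCreator",
    "contributing_unit": "pbcorePublisher",
}

# Namespace URI shown for illustration; check the PBCore schema in use.
PBCORE_NS = "http://www.pbcore.org/PBCore/PBCoreNamespace.html"


def kaltura_record_to_pbcore(record: dict) -> ET.Element:
    """Build a minimal pbcoreDescriptionDocument from a flat Kaltura record."""
    root = ET.Element("{%s}pbcoreDescriptionDocument" % PBCORE_NS)
    for field, value in record.items():
        target = KALTURA_TO_PBCORE.get(field)
        if target is None:
            continue  # unmapped fields are simply skipped in this sketch
        el = ET.SubElement(root, "{%s}%s" % (PBCORE_NS, target))
        el.text = value
    return root


record = {
    "title": "Interview segment",
    "description": "Videotaped interview.",
    "keywords": "video art",
}
print(ET.tostring(kaltura_record_to_pbcore(record), encoding="unicode"))
```

A real crosswalk would also need to handle the repeatability and nesting limits Jason described, since PBCore expects wrapped sub-elements that flat Kaltura fields cannot express directly.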

Kaltura: http://corp.kaltura.com/

Examples of Cornell’s video projects include the Muller-Kluge collection (http://muller-kluge.library.cornell.edu/en/) and the Experimental Television Center (http://www.experimentaltvcenter.org/).

The slides for Jason Kovari’s presentation can be found here: http://connect.ala.org/node/167132

 

Preservation and access metadata for born-digital videos, by Amy Rushing

The University of Texas Libraries (UTL) at Austin established the Human Rights Documentation Initiative (HRDI) to preserve and make accessible historical records of genocide and human rights violations. HRDI aims at the long-term preservation of fragile and vulnerable records (audio, video, and print) of human struggle around the world. It collaborates with human rights organizations and engages in post-custodial management of digital copies of the records. Five partners are involved in the project: the Kigali Genocide Memorial, Free Burma Rangers, Museo de la Palabra y la Imagen, the Texas After Violence Project, and WITNESS.

UTL defined metadata guidelines to achieve consistency, interoperability, access, management, and preservation of the digital content. The guidelines are METS based and OAI compliant, and they define standards for capturing descriptive, technical, source, preservation, and rights metadata. The METS profile maps to MODS and to qualified Dublin Core and has been registered with the Library of Congress. The descriptive metadata is used in multiple access systems (DSpace and the digital archive) and is interoperable with other human rights archives.

The presentation outlined the metadata challenges UTL faced in describing the highly sensitive human rights content of the videos and audio recordings. Privacy, the diverse local metadata practices of the partner organizations, the lack of a vocabulary for describing human rights violations as subjects, and access restrictions on the content were noted as major challenges. Video posed still more challenges: no existing guidelines, poorly defined technical metadata, and the need to understand the video content itself in order to determine which fields were important enough to include in the description.
UTL has developed XML schemas to capture technical and source metadata, although adding source metadata has been the responsibility of the partner organizations. The preservation metadata is generated by UTL using PREMIS. UTL is in the process of developing an HRDI thesaurus and has completed HRDI Metadata Guidelines for video, with element definitions, input guidelines, mappings to MODS and DC, and the source and technical metadata schemas. HRDI Metadata Guidelines and a METS profile for audio are in development, and the HRDI thesaurus and Metadata Guidelines for archived websites will be out soon.
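As one illustration of the PREMIS preservation metadata mentioned above, an event record (here, a fixity check) can be sketched in a few lines. The object identifier, checksum, and event wording below are assumptions for illustration, not taken from the HRDI guidelines:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# PREMIS version 2 namespace; a given profile may target a different version.
PREMIS_NS = "info:lc/xmlns/premis-v2"


def make_fixity_event(object_id: str, checksum: str) -> ET.Element:
    """Sketch of a PREMIS event recording a fixity check on one file."""
    event = ET.Element("{%s}event" % PREMIS_NS)

    etype = ET.SubElement(event, "{%s}eventType" % PREMIS_NS)
    etype.text = "fixity check"

    edate = ET.SubElement(event, "{%s}eventDateTime" % PREMIS_NS)
    edate.text = datetime.now(timezone.utc).isoformat()

    detail = ET.SubElement(event, "{%s}eventDetail" % PREMIS_NS)
    detail.text = "MD5 checksum verified: %s" % checksum

    # Link the event back to the digital object it describes.
    linked = ET.SubElement(event, "{%s}linkingObjectIdentifier" % PREMIS_NS)
    idval = ET.SubElement(linked, "{%s}linkingObjectIdentifierValue" % PREMIS_NS)
    idval.text = object_id

    return event


evt = make_fixity_event("hrdi-obj-001", "d41d8cd98f00b204e9800998ecf8427e")
print(ET.tostring(evt, encoding="unicode"))
```

In a full workflow, records like this would accumulate per object as events occur (ingest, migration, fixity checks) and be stored alongside the METS package.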

HRDI: http://www.lib.utexas.edu/hrdi/

The finished University of Texas Libraries’ Human Rights Documentation Initiative Metadata Guidelines for Video can be found here: http://www.lib.utexas.edu/schema/Video_Metadata_Guidelines_v1.pdf

The slides for Amy Rushing’s presentation can be found here: http://connect.ala.org/node/167149

 

Repurposing Metadata for an Institutional Repository at The Ohio State University, by Maureen Walsh

The Ohio State University’s institutional repository (IR) is called the “Knowledge Bank.” So far it holds 67 communities, 46,178 items, and 97,441 content files; about 73 percent of the content is images and 5 percent is audio and video. The metadata application profile is based on Dublin Core, and records are exposed for OAI harvesting. Submissions come from staff and the campus community and are added to the IR either individually or by batch loading. For individual submissions, customized input forms require information such as a description of the item, keywords, controlled terms, resource type, and sponsor.

Batch importing is done in two ways: by loading metadata from a CSV file using the DSpace Simple Archive Format, or through an XSLT workflow that uses MarcEdit to export tab-delimited files into the IR. Image metadata repurposing is done using Adobe Photoshop: the embedded metadata is extracted with ExifTool, and the resulting CSV file is batch loaded into the IR. For text metadata, a printed list of records is OCRed; the delimited text file is cleaned using a text editor and regular expressions, and the results are batch loaded into the IR.
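The CSV batch-loading step described above can be sketched: the fragment below turns one row of a metadata spreadsheet into the dublin_core.xml file that the DSpace Simple Archive Format expects for each item. The column names and sample values are hypothetical, not OSU's actual export:

```python
import csv
import io
import xml.etree.ElementTree as ET


def row_to_dublin_core(row: dict) -> str:
    """Build a dublin_core.xml body for one DSpace Simple Archive Format item.

    The column names (title, description, subject) are illustrative; a real
    batch would map whatever headings the exported CSV actually uses, and
    would also set qualifiers where the profile calls for them.
    """
    root = ET.Element("dublin_core")
    for element, value in row.items():
        if not value:
            continue  # skip empty cells rather than emit empty dcvalues
        dcvalue = ET.SubElement(root, "dcvalue", element=element, qualifier="none")
        dcvalue.text = value
    return ET.tostring(root, encoding="unicode")


# Simulate one row of a CSV export (e.g. from ExifTool or a cleaned OCR list).
sample = io.StringIO(
    "title,description,subject\n"
    "Campus aerial view,Photograph of the main campus,photographs\n"
)
for row in csv.DictReader(sample):
    print(row_to_dublin_core(row))
```

In the Simple Archive Format, each item directory would pair this dublin_core.xml with a `contents` file listing the item's bitstreams before the whole archive is batch imported.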

Knowledge Bank: https://kb.osu.edu/dspace/

The slides for Maureen Walsh’s presentation can be found here: http://connect.ala.org/node/167277

Reported by Kavita Mundle

 

 
