Saturday, December 11, 2010

Reading Notes for 12/13/2010

1)      Galen Gruman. “What cloud computing really means” InfoWorld, April 2008. http://www.infoworld.com/article/08/04/07/15FE-cloud-computing-reality_1.html

I learned from the above article how the idea of cloud computing and shared storage evolved from the need to share data across communities worldwide. This idea would be amazing for libraries in conflict environments, where library data could still be stored and shared via cloud computing.
2)      Explaining Cloud Computing http://www.youtube.com/watch?v=hplXnFUlPmg&NR=1

During my work, I was part of a project in which the database we worked on was stored on servers in another country, yet we could still access and work with it. This type of database leasing agreement costs only about 10 percent of hosting the database on our own servers, before even counting the savings in staffing costs. This is a very interesting development, and it shows how cloud computing can benefit libraries in countries with developing economies.
3)      Thomas Frey. The Future of Libraries: Beginning the Great Transformation
In the above article Mr. Frey makes important observations about how libraries are shifting from being centers of information to cultural centers. He points out the changing roles of libraries and how this will affect the type of work done in libraries in the future.

Saturday, December 4, 2010

Reading notes for 12/6/2010

1)      No place to hide site: http://www.noplacetohide.net/  

This website includes very thoughtful discussions. I first read the interviews on the homepage. They do an excellent job of conveying how data collection tools and security measures increased after 9/11, and they outline some of the major players involved in this change. 
I then clicked the link to read Robert O'Harrow's book "No Place to Hide." This reading was actually quite shocking: the extent to which one can be followed, tracked, and monitored is astounding. The text raised many questions for which I have not yet formulated any conclusive opinion to share. But it seems to me that all of the devices that supposedly make my life so much easier are the same devices that are taking away my right to privacy. And as we come to depend more and more on these technologies, we increasingly submit, in a sense, to the idea of being "monitored." Do we not have the right to opt out, or has that right been taken away from us?
At any rate, this reading brought up a lot of very important questions that, given the present state of things, I think everyone should at least be considering.
2)      TIA and data mining http://www.epic.org/privacy/profiling/tia/

This website provided information about the Defense Advanced Research Projects Agency's (DARPA) creation of a tracking system called Total Information Awareness (TIA). From what I understood, the system sought to collect as much data about people as possible (via multiple sources) and then, through algorithms and human analysis, create a database that could aid in the capture of terrorists. Congress cut the program's funding in 2003.
I was not able to access the above video and was not able to locate it on the internet.

Saturday, November 27, 2010

Reading Notes for 11/29/2010

1)      John Blossom (2009). What makes social media tick: seven secrets of social media. Content Nation, chapter 2. Wiley Publishing Inc. http://www.contentnation.com/wiki/chapter-2-what-makes-social-media-tick-seven-secrets-of-social-media

This article takes an in-depth look at various functions of social media, such as the role of weblogs in both library and personal settings. I like how the author points out a key feature of weblogs: the archival tools that many blogs have. This class has actually given me my first opportunity to work within a blog setting, and I am finding the archiving feature to be of great value. It has even been useful to me outside of this class. For instance, if another class mentions a topic that has been discussed in this one, all I have to do is go into my blog and search my archive. Once I find the entries dealing with that subject, I am brought straight to my notes and the exact articles I read on it, which is much more efficient than haphazardly skimming a stack of papers. This article also mentions the benefits that blogging can have in a group project setting, a possibility I had not considered until now. I don't know how many times, during group projects, I have run into the problem of e-mail organization; blogging seems like a much more efficient alternative. I also like the article's idea of establishing a "reference blog" for those working reference in a library, which may be a very viable alternative to the old reference binder. In addition, I think the article does a good job of arguing that librarians need to be acquainted with the technicalities of blog creation in order to assist patrons who may benefit from starting one. 
Charles Allan. “Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons.” C&RL News, April 2007, Vol. 68, No. 4. http://www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/april07/usingawiki.cfm
This article also points out the benefits for librarians of using social software applications. In this particular case, it discusses the benefits of creating a wiki to make communication in library instruction more efficient. It was interesting to learn that many companies offer wikis (I had thought this type of social software was only available via Wikipedia). Basically, a wiki works by establishing a list of users (via their e-mail addresses) who have the right to contribute to the material within it. Unfortunately, I have had no experience with wikis in the past, but this article definitely provides a great incentive to get familiar with them, especially in a library setting.   
2)      Xan Arch, “Creating the academic library folksonomy: Put social tagging to work at your institution” C&RL News, February 2007 Vol. 68, No. 2 http://www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/february07/libraryfolksonomy.cfm

This article provides an overview of the benefits of social tagging, or “folksonomy.” It then outlines how social bookmarking sites such as Delicious.com can make life much easier for frequent web users. I also learned about Delicious.com from this class, and it has helped me a great deal in organizing the sites I find interesting or worth bookmarking. The article explains another benefit of social tagging: the creation of additional, user-supplied subject access to library materials. This can be especially fruitful for users who are unfamiliar with controlled vocabularies, since user-supplied folksonomies let others retrieve information in their fields easily. Social tagging in a library setting can also be a way for librarians to point out good information resources available on the web, which, the article states, would assist library patrons with searching and bookmarking websites. 
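Under the hood, a folksonomy is essentially an inverted index from free-form tags to resources. A minimal sketch in Python, where the bookmarks, URLs, and tags are all invented for illustration:

```python
from collections import defaultdict

# The "folksonomy" is just a mapping from each tag to the set of
# resources users have applied it to.
tag_index = defaultdict(set)

def tag(url, *tags):
    """Record that a user tagged this URL with these free-form tags."""
    for t in tags:
        tag_index[t.lower()].add(url)

# Two hypothetical users bookmark and tag two hypothetical sites.
tag("http://example.org/papyri", "egyptology", "manuscripts")
tag("http://example.org/nile", "egyptology", "geography")

# Anyone can now retrieve everything the community filed under a tag.
egyptology_sites = sorted(tag_index["egyptology"])
```

The point of the sketch is that no controlled vocabulary is needed: the index grows out of whatever terms users choose, which is exactly what makes it approachable for users unfamiliar with formal subject headings.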
3)      Jimmy Wales: “How a ragtag band created Wikipedia” http://www.ted.com/index.php/talks/jimmy_wales_on_the_birth_of_wikipedia.html

I had previously understood how certain aspects of Wikipedia worked, but this video further explained how decisions are made about which entries should be included. I learned about the votes page, which was new to me, and how votes and discussions among volunteer editors determine whether an entry is significant enough to make its way into Wikipedia. At the end of the video, Wales also discussed the creation of socially generated textbooks, an idea that sounds interesting.





Saturday, November 20, 2010

Reading Notes for 11/22/2010

1)      David Hawking. Web Search Engines: Part 1 and Part 2. IEEE Computer, June 2006.

    This information is vital in understanding how to tackle various threats to the PC. As librarians, is there anything more we can do than run a simple virus scan? What else can we do as information professionals to contribute to the fight against Trojans and other malware? 

2) Shreeves, S. L., Habing, T. O., Hagedorn, K., & Young, J. A. (2005). Current developments and future trends for the OAI protocol for metadata harvesting. Library Trends, 53(4), 576-589: http://www.ideals.illinois.edu/bitstream/handle/2142/609/Shreeves_CurrentFutureTrends.pdf?sequence=2

I learned from this article how OAI-PMH allows users to search many catalogs and repositories simultaneously. The different metadata schemas and controlled vocabularies used in the harvested catalogs and repositories could be the main barrier. Still, the extensions made to OAI-PMH give new hope for overcoming these barriers through a consistent approach. 
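At the protocol level, harvesting is just an HTTP GET (a request ending in something like ?verb=ListRecords&metadataPrefix=oai_dc) that returns XML. A minimal sketch of parsing such a response with Python's standard library, using a trimmed, made-up sample record rather than a live repository:

```python
import xml.etree.ElementTree as ET

# Namespaces defined by the OAI-PMH spec and simple Dublin Core.
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# A heavily trimmed, hypothetical ListRecords response.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc:title xmlns:dc="http://purl.org/dc/elements/1.1/">A Sample Thesis</dc:title>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Pull every dc:title out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

titles = harvest_titles(SAMPLE)
```

Because every repository exposes the same verbs and the same simple Dublin Core baseline, the same few lines work across many repositories, which is what makes simultaneous searching feasible.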

3) Michael K. Bergman. “The Deep Web: Surfacing Hidden Value” http://www.press.umich.edu/jep/07-01/bergman.html
 
      I learned from this article how much we miss when we conduct our searches via Google. That's why librarians should better understand and make use of the Deep Web, for example by searching electronic resources for patrons. Google Scholar does, to some extent, allow librarians and users to search the web and scholarly electronic resources at the same time. But the user still needs access to these electronic resources through a library account, as they require subscription fees.


Saturday, November 13, 2010

Comments for 11/15/2010

http://archivist-amy-in-training.blogspot.com/2010/11/week-9-muddiest-point.html#comments

Reading Notes for 11/15/2010

1)      Mischo, W. (July/August 2005).  Digital Libraries: challenges and influential work. D-Lib Magazine. 11(7/8).  http://www.dlib.org/dlib/july05/mischo/07mischo.html

The above article explores different research projects on the development of federated search. It points out that developing federated search is important for finding distributed information across different digital resources. It also briefly outlines the history of federated searching through different projects and source types, such as full-text repositories maintained by commercial and professional society publishers; preprint servers and Open Archives Initiative (OAI) provider sites; specialized Abstracting and Indexing (A&I) services; publisher and vendor vertical portals; local, regional, and national online catalogs; Web search and metasearch engines; local e-resource registries and digital content databases; campus institutional repository systems; and learning management systems.

2)      Paepcke, A. et al. (July/August 2005).  Dewey meets Turing: librarians, computer scientists and the digital libraries initiative. D-Lib Magazine. 11(7/8). http://www.dlib.org/dlib/july05/paepcke/07paepcke.html

The above article highlights the National Science Foundation's Digital Libraries Initiative (DLI), which began in 1994. It discusses and contrasts the points of view of librarians and computer scientists about the development of digital libraries and how data should be organized and retrieved in digital environments. While librarians prefer traditional collection development and the use of metadata for organization and retrieval, computer scientists prefer developing algorithms for searching and retrieval without collection development methods. Both views were also affected by the advent of the World Wide Web at that time. The article concludes that there need be no conflict between the two views and that librarians should use this new technological environment to better deliver their traditional services.

3)      Lynch, Clifford A. "Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age" ARL, no. 226 (February 2003): 1-7. http://www.arl.org/bm~doc/br226ir.pdf

The above article illustrates the nature and functions of institutional repositories and their role in transforming scholarship. It starts by defining an institutional repository: "a university-based institutional repository is a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members. It is most essentially an organizational commitment to the stewardship of these digital materials, including long-term preservation where appropriate, as well as organization and access or distribution."

It then outlines the operational responsibility for these services, which represents a collaboration among librarians, information technologists, archives and records managers, faculty, and university administrators and policymakers.
A fully realized institutional repository will contain the following:
·         The intellectual works of faculty and students--both research and teaching materials
·         Documentation of the activities of the institution itself in the form of records of events and performance and of the ongoing intellectual life of the institution
·         Experimental and observational data captured by members of the institution that support their scholarly activities
The author distinguishes institutional repositories from scholarly publishing: the institutional repository he proposes does not call for a new scholarly publishing role for universities, but rather positions repositories as a means of disseminating scholarly communication. Institutional repositories can maintain data in addition to authored scholarly works. In this sense, the institutional repository is a complement and a supplement to, rather than a substitute for, traditional scholarly publication venues.

The Strategic Importance of Institutional Repositories

The author summarizes the strategic importance of institutional repositories as follows: “Institutional repositories can facilitate greatly enhanced access to traditional scholarly content by empowering faculty to effectively use the new dissemination capabilities offered by the network. This is also occurring on a disciplinary basis through the development of e-print and preprint servers, at least in some disciplines. In cases where the disciplinary practice is ready, institutional repositories can feed disciplinary repositories directly. In cases where the disciplinary culture is more conservative, where scholarly societies or key journals choose to hold back change, institutional repositories can help individual faculty take the lead in initiating shifts in disciplinary practice.”

Cautions about Institutional Repositories
The author outlines the following three cautions about institutional repositories:

·         The first potential danger is that institutional repositories are cast as tools of institutional (administrative) strategies to exercise control over what has typically been faculty controlled intellectual work. I believe that any institutional repository approach that requires deposit of faculty or student works and/or uses the institutional repository as a means of asserting control or ownership over these works will likely fail, and probably deserves to fail.
·         The use of complex, cumbersome "gate keeping" policies for admitting materials to institutional repositories--particularly those that emulate practices from traditional scholarly publication such as the use of peer reviewers--are highly counterproductive; this will prevent institutional repositories from supporting and empowering faculty innovators and leaders.
·         An institutional repository can fail over time for many reasons: policy (for example, the institution chooses to stop funding it), management failure or incompetence, or technical problems. Any of these failures can result in the disruption of access, or worse, total and permanent loss of material stored in the institutional repository.

Institutional Repositories and Networked Information Standards and Infrastructure

The author outlines the main networked-information standards and infrastructure issues for institutional repositories:
·         Preservable Formats: the file formats that will be preserved in accessible forms (presumably through format migration).
·         Identifiers: persistent references to materials in institutional repositories.
·         Rights Documentation and Management: one part is technical, involving metadata structures; the other part is building consensus around a relatively small number of sets of terms and conditions that can cover the majority of the materials in practice. Working "standards" like the stock licenses under development by Creative Commons (http://creativecommons.org/) can be used in rights documentation and management.

Future Developments in Institutional Repositories
Finally, the author outlines the future developments in institutional repositories as “There is a clearly evolving idea of "federating" institutional repositories but as yet little concrete exploration of what this means--cross-repository search, swaps of storage between institutional repositories to gain geographic and systems diversity in pursuit of backup, preservation, and disaster recovery, or other capabilities. This will be a fruitful area for exploration and innovation. Another part of federation is that faculty often don't stay at a single institution for their entire career, and they frequently disregard institutional boundaries when collaborating with other scholars. Federation of institutional repositories may also subsume the development of arrangements that recognize and facilitate faculty mobility and cross-institutional collaborations.”



Friday, November 12, 2010

Sunday, November 7, 2010

Assignment 5 (Koha List)

http://upitt01-staff.kwc.kohalibrary.com/cgi-bin/koha/virtualshelves/shelves.pl

User Name: IAW5
Password: IAW5

List Names:
Egyptology
History of Egypt
History of Ancient Egypt
Nile River
Underwater Archaeology (Alexandria, Egypt)

Saturday, November 6, 2010

11/8/2010 Comments

http://acovel.blogspot.com/2010/11/unit-9-reading-notes_05.html#comment-form

Reading notes 11/8/2010 XML

1)      Martin Bryan.  Introducing the Extensible Markup Language (XML)


The above tutorial is no longer available, as the BURKS project was discontinued.

“BURKS (the Brighton University Resource Kit for Students) was a non-profit collection of useful resources for students of Computing who did not have (or could not afford) an Internet connection. The resources include compilers, tutorials and reference manuals for dozens of different programming languages, a dictionary of computing with over 13,000 entries, a copy of the Mandrake 8.0 Linux distribution, a vast amount of useful software, information about the Internet itself, and much more. The entire collection was also available online.
The BURKS project ran from 1997 to 2001 and the collection grew from about 450M in the 1997 edition to about 2.5G in the 2001 edition. New editions were prepared every August in readiness for the start of the UK academic year. Eventually sales dropped as broadband Internet access and cheap CD and DVD writers became more common, and the project was closed down as a result.”
2)      Uche Ogbuji. A survey of XML standards: Part 1. January 2004.


The above article provides a guide to XML standards, including a wide range of recommended resources for further information about all aspects of using XML standards.

3)      Extending Your Markup: An XML Tutorial by André Bergholz


I could not locate article 3 at the above link, but I found it through the link below:


Article 3 is a short tutorial that presents the essential concepts of XML and shows why XML is important for the presentation, exchange, and management of information.
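To make those essential concepts concrete, here is a minimal sketch using Python's standard library; the book record, its ISBN, and its field values are invented for illustration. It shows the three ideas such tutorials stress: elements, attributes, and nesting.

```python
import xml.etree.ElementTree as ET

# Build a tiny, made-up catalog record: one element with an
# attribute, and two nested child elements.
book = ET.Element("book", attrib={"isbn": "0-000-00000-0"})
ET.SubElement(book, "title").text = "Introducing XML"
ET.SubElement(book, "author").text = "A. Writer"

# Serialize the tree to XML text...
xml_text = ET.tostring(book, encoding="unicode")

# ...and parse it back, querying the structure we created.
parsed = ET.fromstring(xml_text)
title = parsed.findtext("title")
isbn = parsed.get("isbn")
```

The round trip (build, serialize, re-parse) is exactly what makes XML useful for exchange: any party with an XML parser can recover the same structure from the text.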

4)      XML Schema Tutorial
The above tutorial explains how to create XML Schemas, why XML Schemas are more powerful than DTDs, and how to use XML Schema in applications.

Friday, November 5, 2010

Muddiest Points for 11/1/2010

I was wondering whether anyone can use web page authoring software or HTML to modify existing web pages, or whether permissions are required to do so.

Sunday, October 31, 2010

Comments on 10/25/2010

http://iandtupitt.blogspot.com/2010/10/reading-notes-for-oct-25-class.html

Reading Notes for 11/1/2010

1)      W3schools HTML Tutorial: http://www.w3schools.com/HTML/

The above tutorial explains in easy ways how to create HTML documents with examples of different types of HTML documents and practical examples.


The above website includes product reviews, magazines, "how to" instructions for doing various things, and videos covering culture, events, gaming, etc.

3)      W3 School Cascading Style Sheet Tutorial: http://www.w3schools.com/css/

The above tutorial explains in easy ways how to style HTML documents with CSS, with examples of different style properties and practical exercises.

4)      Goans, D., Leach, G., & Vogel, T. M. (2006). Beyond HTML: Developing and re-imagining library web guides in a content management system. Library Hi Tech, 24(1), 29-53.

“The purpose of the above article is to report on the content management system designed to manage the 30 web-based research guides developed by the subject liaison librarians at the Georgia State University Library. The methodology/Approach: The web development librarian, with assistance from the web programmer, designed a system using MySQL and ASP. A liaison team gave input on the system through rigorous testing and assisted with the design of the templates that control the layout of the content on the guides. A usability study and two surveys were also completed.
Findings: The new system met and exceeded the baseline expectations for content collection and management, offering a greater control over appearance and navigation while still offering customization features for liaisons. Improvements are planned for the templates in addition to better promotion of the guides on the library website. Initial and ongoing training for the liaisons should have been more effectively addressed. Despite their observed and future potential advantages, the CMS model has not been universally adopted by academic libraries.
Practical Implications: Regardless of the technology involved, libraries preparing for a CMS transition must give at least as much attention to user issues as they do to technical issues, from the organizational buy-in and comprehensive training to internal/external usability.
Originality/Value of Paper: This paper contributes to a small but growing collection of CMS case studies. It covers the technical, functional, and managerial developments of a CMS, while also addressing the practical user factors that sometimes get lost in the process.” From the authors’ abstract.

Friday, October 29, 2010

Muddiest points 10/25/2010

Is there a difference between the terms Invisible Web and Deep Web, or between Visible Web and Surface Web, or are they synonyms?

Muddiest points for 10/25/2010

What is the difference between Intranets and LANs?

Saturday, October 23, 2010

Reading Notes 10/25/2010

The article entitled How the Internet Infrastructure Works elaborates on the basic underlying structure of the Internet: domain name servers, network access points, backbones, and how one computer connects to others.
The article chapter: The Internet: Computer Network Hierarchy elaborates how when you connect to the Internet, your computer becomes part of a network.
“Every computer that is connected to the Internet is part of a network, even the one in our home, for example, you may use a modem and dial a local number to connect to an Internet Service Provider (ISP). At work, you may be part of a local area network (LAN), but you most likely still connect to the Internet using an ISP that your company has contracted with. When you connect to your ISP, you become part of their network. The ISP may then connect to a larger network and become part of their network. The Internet is simply a network of networks.
Most large communications companies have their own dedicated backbones connecting various regions. In each region, the company has a Point of Presence (POP). The POP is a place for local users to access the company's network, often through a local phone number or dedicated line. The amazing thing here is that there is no overall controlling network. Instead, there are several high-level networks connecting to each other through Network Access Points or NAPs.”

The article chapter: Internet Network example concludes that “In the real Internet, dozens of large Internet providers interconnect at NAPs in various cities, and trillions of bytes of data flow between the individual networks at these points. The Internet is a collection of huge corporate networks that agree to all intercommunicate with each other at the NAPs. In this way, every computer on the Internet connects to every other.”

The article chapter: Function of an Internet router illustrates that “Routers are specialized computers that send your messages and those of every other Internet user speeding to their destinations along thousands of pathways. A router has two separate, but related, jobs:
·         It ensures that information doesn't go where it's not needed. This is crucial for keeping large volumes of data from clogging the connections of "innocent bystanders."
·         It makes sure that information does make it to the intended destination.”
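The core of that second job is a longest-prefix match against a routing table: the router forwards each packet to the next hop of the most specific network that contains the destination address. A toy sketch in Python (the table entries and hop names are all invented; real routers do this in specialized hardware):

```python
import ipaddress

# A made-up routing table mapping network prefixes to next hops.
# The catch-all 0.0.0.0/0 entry is the default route.
ROUTES = {
    "192.168.0.0/16": "local-lan",
    "216.27.0.0/16": "isp-uplink",
    "0.0.0.0/0": "default-gateway",
}

def next_hop(ip):
    """Longest-prefix match: the core decision a router makes per packet."""
    addr = ipaddress.ip_address(ip)
    # Of all prefixes containing the address, pick the most specific one.
    best = max(
        (ipaddress.ip_network(p) for p in ROUTES
         if addr in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return ROUTES[str(best)]
```

For example, a packet for 216.27.61.137 matches both 216.27.0.0/16 and the default route, and the /16 wins because it is more specific; an address matching nothing else falls through to the default gateway.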

The article chapter: Internet Backbone elaborates that “Backbones are typically fiber optic trunk lines. The trunk line has multiple fiber optic cables combined together to increase the capacity. Fiber optic cables are designated OC for optical carrier, such as OC-3, OC-12 or OC-48. An OC-3 line is capable of transmitting 155 Mbps while an OC-48 can transmit 2,488 Mbps (2.488 Gbps). Compare that to a typical 56K modem transmitting 56,000 bps and you see just how fast a modern backbone is.
Today there are many companies that operate their own high-capacity backbones, and all of them interconnect at various NAPs around the world. In this way, everyone on the Internet, no matter where they are and what company they use, is able to talk to everyone else on the planet. The entire Internet is a gigantic, sprawling agreement between companies to intercommunicate freely.”
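The speed comparison in the quote is easy to check with a little arithmetic, assuming the figures given there (155 Mbps for OC-3, 2.488 Gbps for OC-48, and 56,000 bps for a modem):

```python
# Speeds from the article, all converted to bits per second.
MODEM_56K = 56_000
OC_3 = 155 * 10**6        # 155 Mbps
OC_48 = 2_488 * 10**6     # 2.488 Gbps

# How many 56K modems would it take to match each trunk line?
oc3_vs_modem = OC_3 // MODEM_56K    # 2767
oc48_vs_modem = OC_48 // MODEM_56K  # 44428
```

So a single OC-48 backbone carries roughly the traffic of about 44,000 simultaneous 56K modem connections, which is what "just how fast a modern backbone is" means in concrete terms.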

The article chapter: Internet Protocol: IP address elaborates that “Every machine on the Internet has a unique identifying number, called an IP Address. The IP stands for Internet Protocol, which is the language that computers use to communicate over the Internet. A protocol is the pre-defined way that someone who wants to use a service talks with that service. The "someone" could be a person, but more often it is a computer program like a Web browser.

A typical IP address looks like this:
216.27.61.137
To make it easier for us humans to remember, IP addresses are normally expressed in decimal format as a dotted decimal number like the one above. But computers communicate in binary form. Look at the same IP address in binary:
11011000.00011011.00111101.10001001”
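The dotted-decimal/binary correspondence shown above can be reproduced in a few lines; this sketch simply formats each of the four octets as eight binary digits:

```python
def ip_to_binary(ip):
    """Render a dotted-decimal IPv4 address as dotted binary octets."""
    return ".".join(format(int(octet), "08b") for octet in ip.split("."))

# The article's example address.
binary = ip_to_binary("216.27.61.137")
# 216 -> 11011000, 27 -> 00011011, 61 -> 00111101, 137 -> 10001001
```

Each octet is one byte (0-255), which is why every group in the binary form is exactly eight digits.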


1)      Andrew K. Pace. “Dismantling Integrated Library Systems.” Library Journal, vol. 129, no. 2, pp. 34-36, 2/1/2004.
This article explains that many expect that, in a future ILS, new modules will communicate with old ones, products from different vendors will work together, and a suite of existing standards will make distributed systems seem transparently whole. But in an ironic twist, most of the touted interoperability is between a vendor's own modules (sometimes) or between a library's homegrown solutions and its own ILS (sometimes). Today, interoperability in library automation is more myth than reality, and some wonder whether we may lose more than we gain in this newly dismantled world. The article concludes that librarians and vendors should work on developing interoperable library systems, and it suggests that the open source movement has demonstrated the value of open standards and protocols. Through XML, web services, and OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting), librarians believe they can create interoperability among systems, whether vendors' or their own. The article also notes that vendors are already using standards and protocols to build logical relationships with course management vendors (Blackboard and WebCT), accounting and HR systems (PeopleSoft), and authentication tools (LDAP, EZproxy, and Shibboleth). If vendors can build interoperability with these systems, then they can become interoperable with each other and with local library systems. Finally, the article concludes that library vendors have two choices: they can continue to maintain large systems that use proprietary methods of interoperability and promise tight integration of services for their customers, or they can dismantle their modules in such a way that librarians can reintegrate their systems through web services and standards, combining new modules with old ones as well as new with new.

2)      Sergey Brin and Larry Page: Inside the Google machine. This video features Google co-founders Larry Page and Sergey Brin, who offer a peek inside the Google machine, sharing tidbits about international search patterns, the philanthropic Google Foundation, and the company's dedication to innovation and employee satisfaction. It also touches on some Google projects undertaken in 2004.

Saturday, October 9, 2010

Reading notes for 10/15/2010 & 10/16/2010

1)      Local Area Network: http://en.wikipedia.org/wiki/Local_Area_Network

This article describes the type of connection that many libraries have. It is important to learn how local area network technologies such as WiFi work. However, in some countries where communications infrastructure is less developed, users are not able to benefit from such technology. 

2)      Computer network http://en.wikipedia.org/wiki/Computer_network

This article mentions the use of an Intranet. I was not aware of the many components that make up an Intranet, such as Internet protocols and IP-based tools. This resource is vital not so much for the general public as for librarians, who need to understand how information is communicated and how that communication happens. 
3)      Common types of computer networks http://www.youtube.com/watch?v=1dpgqDdfUjQ

The author outlines a new trend: local area connections are becoming more popular than wide area connections. I thought it would be quite the contrary; perhaps local area connections are faster and more powerful. The author points to Ethernet as the main factor in this trend. 

4)      Coyle, K. (2005). Management of RFID in libraries. Journal of Academic Librarianship, 31(5), 486-489.
The author suggests that the market will force libraries to adopt RFID. However, I think RFID could be financially difficult to sustain with the limited funding available to libraries. On the other hand, I learned at a conference that in Singapore, a country with a transitional economy, all libraries use RFID, which offers very efficient features that let libraries carry out all kinds of collection management activities, such as inventory, self-circulation, and shelving, fairly easily and with minimal human intervention, thus reducing staffing costs. So I think the market will grow over time, as the author suggests. 

Friday, October 8, 2010

Muddiest Points 10/4/2010

In the lecture, I learned that databases have a tabular structure (table, row, and column) and that there are relations between tables. I would like to ask whether this structure corresponds to the (file, record, and field) structure. In other words, is a table equal to a file, a row to a record, and a column to a field, or are these structures different?
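The correspondence can be seen directly in SQL. Here is a minimal sketch using Python's built-in sqlite3 module (the table name and values are invented for illustration): each row a query returns is one record, and each column in it is one field.

```python
import sqlite3

# A table is roughly analogous to a file: one row = one record,
# one column = one field within that record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, author TEXT, year INTEGER)")
conn.execute("INSERT INTO books VALUES ('Introduction to Metadata', 'Gilliland', 2008)")

# Fetch one row (record); each element corresponds to a column (field).
row = conn.execute("SELECT title, author, year FROM books").fetchone()
print(row)  # ('Introduction to Metadata', 'Gilliland', 2008)
```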

Saturday, October 2, 2010

Reading Notes for 10/4/2010


Prior to reading this article, I was under the impression that databases were simply a storage and information-retrieval mechanism. However, after learning about Distributed Databases and Hypermedia Databases, it is clear there is so much more to them. For librarians, this represents a foundation of Information Science. We need to understand these systems in order to develop better knowledge organization systems based on these new database structures.

2)      Anne J. Gilliland. Introduction to Metadata, pathways to Digital Information: 1: Setting the Stage http://www.getty.edu/research/conducting_research/standards/intrometadata/setting.html

In another class, I learned that metadata is data about data. There are also many components to metadata, such as Data Value, Data Content, and Data Structure. These components are vital to consider when structuring information, especially within a database. Understanding them will allow librarians to develop better knowledge organization tools, and users to navigate a particular system much more easily.
3)      Eric J. Miller. An Overview of the Dublin Core Data Model http://dublincore.org/1999/06/06-overview/

The visual aid in this article was very helpful in outlining the foundation of Dublin Core. Dublin Core was developed for use by non-librarians, and simplicity was therefore a fundamental design goal. However, qualified Dublin Core, which uses namespaces, data format standards, and controlled vocabularies, will help experienced librarians make the most of it and allow users to find very specific forms of information accurately.
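As a small illustration of the basic element set, a Dublin Core record can be serialized in XML under the standard dc: namespace. This is only a sketch, built with Python's standard library and using field values borrowed from one of the readings:

```python
import xml.etree.ElementTree as ET

# Dublin Core elements live under this well-known namespace URI.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for name, value in [
    ("title", "Imaging Pittsburgh"),
    ("creator", "Edward A. Galloway"),
    ("date", "2004"),
    ("type", "Text"),
]:
    el = ET.SubElement(record, f"{{{DC}}}{name}")
    el.text = value

xml_out = ET.tostring(record, encoding="unicode")
print(xml_out)
```

Each element name (title, creator, date, type) comes from the 15-element set; a qualified record would add refinements and encoding schemes on top of this.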

Saturday, September 25, 2010

Reading notes for 9/27/2010

1)      Data Compression. http://en.wikipedia.org/wiki/Data_compression

I found data de-duplication to be interesting. Based on this information, a computer can automatically eliminate duplicated data. I was wondering whether there is a way to use this function manually. I know defragmentation to be one maintenance method, but I was wondering whether there is a more fine-grained way to find and eliminate duplicate data by hand.
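De-duplication is typically content-based: each block of data is hashed, and blocks with identical hashes are stored only once. A toy sketch in Python (the data is invented; this is not how a real filesystem tool works internally):

```python
import hashlib

def deduplicate(blobs):
    """Store each distinct content block only once, keyed by its hash."""
    store = {}
    for blob in blobs:
        digest = hashlib.sha256(blob).hexdigest()
        store.setdefault(digest, blob)  # keep only the first copy seen
    return store

blobs = [b"annual report", b"photo of reading room", b"annual report"]
store = deduplicate(blobs)
print(len(blobs), "items,", len(store), "unique blocks stored")
# 3 items, 2 unique blocks stored
```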

2)      Data compression basics (long documents, but covers all basics and beyond): http://dvd-hq.info/data_compression_1.php

It seems like the main idea of data compression is to store more data in less space, thus reducing the overall load on the hard drive. To do this effectively, one would need to understand various coding techniques. Financially speaking, I think this would save libraries a lot of money from an administrative standpoint, if more information could be stored on fewer computers. Technologically speaking, data stored on public library computers might not need to be changed as frequently, which would reduce crashes, slow startups, and delayed searches.
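Run-length encoding is one of the simplest of these coding techniques: a run of repeated symbols is replaced by the symbol and a count. A toy sketch (real compressors such as DEFLATE are far more sophisticated, but the goal is the same: represent the data in fewer bytes):

```python
def rle_encode(text):
    """Replace each run of repeated characters with 'char + count'."""
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append(f"{text[i]}{j - i}")
        i = j
    return "".join(out)

print(rle_encode("aaaabbbcca"))  # a4b3c2a1
```

Note that RLE only helps when the data actually contains long runs; on already-random data it can make the output larger.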

3)      Edward A. Galloway, “Imaging Pittsburgh: Creating a shared gateway to digital image collections of the Pittsburgh region” First Monday 9:5 2004 http://www.firstmonday.org/issues/issue9_5/galloway/index.html

From looking at the pictures and reading the article, it appears that digital imaging is one of the prime methods of historic preservation. Unfortunately, physical copies of materials found in libraries can disintegrate over time. In order to preserve the past, and more importantly our local history, digital image collections provide a gateway for the present to intertwine with the past, and act as a medium for a more immediate, hands-on look at historical artifacts. However, I think it is also important to preserve the original versions as physical artifacts for the next generations through conservation methods.

4)      Paula L. Webb, YouTube and libraries: It could be a beautiful relationship C&RL News, June 2007 Vol. 68, No. 6 http://www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/jun07/youtube.cfm

I agree that YouTube and libraries could be a beautiful relationship. I think the use of YouTube in libraries will support strong marketing and outreach programs, library instructional services, and much more in the future.