
UGC NET LIS - Some important Notes

Posted by sudhakar on December 28, 2011 at 10:57 in Discussion / News / Article

1. Gray literature (or grey literature) is a field in library and information science. The term is
used variably by the intellectual community, librarians, and medical and research
professionals to refer to a body of materials that cannot be found easily through conventional
channels such as publishers, "but which is frequently original and usually recent" in the
words of M.C. Debachere. Examples of grey literature include technical reports from
government agencies or scientific research groups, working papers from research groups or
committees, white papers, or preprints. The term grey literature is often employed exclusively
with scientific research in mind. Nevertheless, grey literature is not a specific genre of
document, but a specific, non-commercial means of disseminating information.
The identification and acquisition of grey literature pose difficulties for librarians and other
information professionals for several reasons. Generally, grey literature lacks strict
bibliographic control, meaning that basic information such as author, publication date or
publishing body may not be easily discerned. Similarly, non-professional layouts and formats
and low print runs of grey literature make the organized collection of such publications
challenging compared to more traditional published media such as journals and books.
Information and research professionals generally draw a distinction between ephemera and
grey literature. However, there are certain overlaps between the two media, and they certainly
share common frustrations, such as bibliographic control issues.
2. An institutional repository is an online locus for collecting, preserving, and disseminating,
in digital form, the intellectual output of an institution, particularly a research institution.
For a university, this would include materials such as research journal articles, before
(preprints) and after (postprints) undergoing peer review, and digital versions of theses and
dissertations, but it might also include other digital assets generated by normal academic life,
such as administrative documents, course notes, or learning objects.
The four main objectives for having an institutional repository are:
to provide open access to institutional research output by self-archiving it;
to create global visibility for an institution's scholarly research;
to collect content in a single location;
to store and preserve other institutional digital assets, including unpublished or otherwise
easily lost ("grey") literature (e.g., theses or technical reports).
Features and Benefits of an Institutional Repository
According to data from the Directory of Open Access Repositories (OpenDOAR) [6] and the
Repository 66 map as of December 2010,[7] the majority of IRs are built using Open Source
software. While the most popular Open Source and hosted applications share the advantages
that IRs bring to institutions, such as increased visibility and impact of research output,
interoperability, and availability of technical support, IR advocates tend to favour Open
Source solutions because they are by nature more compatible with the ideal of an internet
that is free and independent of commercial interests. On the other hand, some institutions
opt for outsourced commercial solutions.
In her briefing paper[8] on open access repositories, advocate Alma Swan lists the following
as the benefits that repositories bring to institutions:
Opening up outputs of the institution to a worldwide audience;
Maximizing the visibility and impact of these outputs as a result;
Showcasing the institution to interested constituencies: prospective staff, prospective
students, and other stakeholders;
Collecting and curating digital output;
Managing and measuring research and teaching activities;
Providing a workspace for work-in-progress, and for collaborative or large-scale projects;
Enabling and encouraging interdisciplinary approaches to research;
Facilitating the development and sharing of digital teaching materials and aids; and
Supporting student endeavours, providing access to theses and dissertations and a location
for the development of e-portfolios.
Repository Software
There are a number of open-source software packages for running a repository including:
DSpace
EPrints
Fedora
There are also hosted (proprietary) software services, including:
Digital Commons
SimpleDL
There is a mashup indicating the worldwide locations of open access digital repositories. This
project is called Repository 66[1] and is based on data provided by ROAR and the
OpenDOAR service developed by SHERPA.
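The interoperability noted above rests largely on the OAI-PMH protocol, which DSpace, EPrints, and Fedora all support, and which services such as ROAR and OpenDOAR rely on to gather repository metadata. As a minimal sketch, the Python snippet below harvests Dublin Core records from a repository's OAI-PMH endpoint; the base URL is a hypothetical placeholder, not a real service.

```python
# Minimal sketch of an OAI-PMH harvest. BASE_URL is a hypothetical
# placeholder; substitute the endpoint of a real repository instance.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai/request"  # placeholder

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# ListRecords with oai_dc, the Dublin Core format every OAI-PMH
# repository is required to support.
resp = requests.get(BASE_URL,
                    params={"verb": "ListRecords",
                            "metadataPrefix": "oai_dc"})
resp.raise_for_status()
root = ET.fromstring(resp.content)

for record in root.iter(OAI + "record"):
    header = record.find(OAI + "header")
    identifier = header.findtext(OAI + "identifier")
    metadata = record.find(OAI + "metadata")
    if metadata is None:  # deleted records carry a header only
        continue
    title = metadata.findtext(".//" + DC + "title")
    print(identifier, "-", title)
```

A full harvester would also follow the resumptionToken elements that OAI-PMH uses to page through large result sets.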
3. Ontologies in information retrieval systems
The use of ontologies to overcome the limitations of keyword-based search has been put
forward as one of the motivations of the Semantic Web since its emergence in the
late 90s. While there have been contributions in this direction in the last few years,
most achievements so far either make partial use of the full expressive power of an
ontology-based knowledge representation, or are based on boolean retrieval models,
and therefore lack an appropriate ranking model needed for scaling up to massive
information sources.
In the former case, ontologies provide a shallow representation of the information
space, equivalent in essence to the taxonomies and thesauri used before the Semantic
Web was envisioned [3,6,7,15]. Rather than an instrument for building knowledge
bases, these light-weight ontologies provide controlled vocabularies for the classification
of content, and rarely surpass several KBs in size. This approach has brought
improvements over classic keyword-based search through, e.g., query expansion based
on class hierarchies and rules on relationships, or multifaceted searching and browsing.
It is not clear though that these techniques alone really take advantage of the full potential
of an ontological language, beyond those that could be reduced to conventional
classification schemes.
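As a minimal illustration of hierarchy-based query expansion, the sketch below uses the rdflib library on a tiny, invented RDFS vocabulary (the ex: classes and labels are illustrative, not from any published ontology): a query for a class is expanded with the labels of all its transitive subclasses, which can then be OR-ed into an ordinary keyword query.

```python
# Minimal sketch of class-hierarchy query expansion using rdflib.
# The ex: vocabulary is invented for illustration.
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/onto#")

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/onto#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    ex:Publication rdfs:label "publication" .
    ex:TechnicalReport rdfs:subClassOf ex:Publication ;
        rdfs:label "technical report" .
    ex:Preprint rdfs:subClassOf ex:Publication ;
        rdfs:label "preprint" .
""", format="turtle")

def expand(cls):
    """Collect labels of cls and all its transitive subclasses,
    to be OR-ed into a keyword query."""
    terms = set()
    for sub in g.transitive_subjects(RDFS.subClassOf, cls):
        for label in g.objects(sub, RDFS.label):
            terms.add(str(label))
    return terms

print(expand(EX.Publication))
# -> {'publication', 'technical report', 'preprint'}
```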
Other semantic search techniques have been developed that do exploit large knowledge
bases in the order of GBs or TBs consisting of thousands of ontology instances,
classes and relations of arbitrary complexity [1,2,4,12]. These techniques typically use
boolean search models, based on an ideal view of the information space as consisting
of non-ambiguous, non-redundant, formal pieces of ontological knowledge. In this
view, the information retrieval problem is reduced to a data retrieval task: search
results are assumed to be always 100% precise, and there is no notion of an approximate
answer to an information need. This model makes sense when the whole
information corpus can be fully represented as an ontology-driven knowledge base, so
that search results consist of ontology entities.
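To make the contrast concrete, here is a minimal sketch, again over an invented toy graph, of this boolean, data-retrieval view: a SPARQL pattern either matches exactly or not at all, so every answer is an ontology entity and there is nothing to rank.

```python
# Minimal sketch of boolean "data retrieval" over a knowledge base:
# answers are exact entity matches with no relevance ranking.
# The toy graph is invented for illustration.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/onto#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    ex:TechnicalReport rdfs:subClassOf ex:Publication .
    ex:Preprint rdfs:subClassOf ex:TechnicalReport .
""", format="turtle")

rows = g.query("""
    PREFIX ex: <http://example.org/onto#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls WHERE { ?cls rdfs:subClassOf+ ex:Publication . }
""")
for row in rows:
    print(row.cls)  # each result is an entity URI, not a ranked document
```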

However, there are limits to the extent to which knowledge can or should be formalized
in this way. First, because of the huge amount of information currently available
