The searches that authors may include in this book vary widely. While the differences between them are self-evident to some readers, they are not to others. For this reason, the types of searches are explained below.
Internet search engine
An Internet search engine is a program designed to help you access files stored on a public server on the Internet. The search engine allows you to ask for media content meeting specific criteria (typically those which contain a given word or phrase) and retrieve a list of files that match those criteria. Data collection is automated and done by software often known as a Web crawler.
Web search engines work by storing information about a large number of web pages, which they retrieve from the Web itself. These pages are retrieved by a web crawler, an automated browser that follows every link it encounters. The contents of each page are then analyzed to determine how the page should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages is stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages.
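The indexing step described above can be sketched with a toy inverted index: each page's text is split into words, and the index maps each word to the set of pages containing it. This is a minimal illustration, not any real engine's code; the sample pages and function name are invented for the example.

```python
def build_index(pages):
    """Build an inverted index: word -> set of URLs containing it.

    pages: dict mapping URL -> page text (hypothetical sample data).
    """
    index = {}
    for url, text in pages.items():
        # A real engine would also weight words from titles and meta tags;
        # here every word is treated the same.
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

pages = {
    "http://example.com/a": "Search engines index the Web",
    "http://example.com/b": "A crawler follows every link on the Web",
}
index = build_index(pages)
```

A later query for a word can then simply look up `index[word]` to find all pages containing it.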
When a user submits a query, typically by entering keywords, the engine looks up its index and returns a listing of the best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text.
The usefulness of a search engine depends on the relevance of the results it gives back. While there may be millions of Web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.
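One very simple ranking method is to score each page by how often the query terms appear in it. This sketch assumes plain term frequency only; real engines combine many signals (link analysis, authority, freshness) that are far beyond this example, and the sample data is invented.

```python
def rank(query, pages):
    """Return URLs ordered by how often the query terms appear in each page."""
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = text.lower().split()
        # Score = total occurrences of all query terms in this page.
        scores[url] = sum(words.count(t) for t in terms)
    # Highest-scoring pages first.
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "a": "search engines rank search results",
    "b": "results are listed in order",
}
ordered = rank("search results", pages)  # page "a" scores 3, page "b" scores 1
```

Changing how `scores` is computed is exactly where engines differ from one another, which is why the same query can return very different orderings on different engines.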
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the controversial practice of allowing advertisers to pay money to have their listings ranked higher in search results.
Indexing/abstracting journal database
Indexing and abstracting databases developed as an electronic alternative to paper periodical indexes. Historically, libraries provided catalogs to help readers find journals, magazines, and newspapers by title, but they rarely catalogued the articles in each issue. Indexing and abstracting volumes and databases fill this gap. Electronic indexes can be searched at minimum by article author, article title, and article subject, and most offer many more ways to access articles.
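Field-based searching of article records can be pictured as filtering structured records rather than scanning free text. The records, field names, and function below are hypothetical, chosen only to show the idea of access points such as author and subject.

```python
# Hypothetical article records with separate access points
# (author, title, subject headings), as in an abstracting database.
articles = [
    {"author": "Smith, J.", "title": "Digital Libraries Today",
     "subjects": ["libraries", "digitization"]},
    {"author": "Lee, K.", "title": "Web Search Behavior",
     "subjects": ["search engines", "user studies"]},
]

def search_by_subject(records, subject):
    """Return titles of all articles tagged with the given subject heading."""
    return [r["title"] for r in records if subject in r["subjects"]]

matches = search_by_subject(articles, "libraries")
```

Because the subject headings are assigned fields rather than words extracted from the text, a single lookup retrieves every article on a topic regardless of the wording of its title or abstract.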
Indexing and abstracting databases always contain indexed article information, sometimes contain abstracts, and rarely contain the full text of articles. In the age of the Internet and Google, using a database that doesn't have full text may seem antiquated but much of scholarly research still relies on such databases.
The addition of article subject headings in indexes is done by humans (as opposed to automated computer programs), and this has both positive and negative effects. Adding subject headings requires trained people, and such people have to be paid, driving the cost of the index higher. On the other hand, being able to find similar articles by subject can save a great amount of time.
A citation index keeps track of which articles in scientific journals cite which other articles. This allows an article's author to find out how many other articles have cited it.
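A citation index can be sketched as a mapping from each article to the list of articles that cite it, built from (citing, cited) pairs. The article identifiers below are invented for illustration.

```python
from collections import defaultdict

# Each pair records that the first article cites the second.
citations = [
    ("paper-B", "paper-A"),  # paper-B cites paper-A
    ("paper-C", "paper-A"),
    ("paper-C", "paper-B"),
]

# Invert the pairs: cited article -> list of articles citing it.
cited_by = defaultdict(list)
for citing, cited in citations:
    cited_by[cited].append(citing)

count_a = len(cited_by["paper-A"])  # number of articles citing paper-A
```

Looking up `cited_by` for an article answers both questions at once: how many times it has been cited, and by whom.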
Library catalog
"A library catalog is a register of all bibliographic items found in a library." A bibliographic item can be a book, video, portrait, or almost anything else that is considered library material.