A meta-search engine sends a query to several other search engines and displays their combined results. Meta-search engines do not maintain their own database of web pages. Instead, the results are gathered from several search engines such as Google, collated to remove duplicates, and re-ranked according to the meta-search engine's own algorithm. Finally, a single organised, sorted list of web pages drawn from the different search engines is presented to the user, giving you the best results.
The extraordinary growth of the internet has made it difficult, if not impossible, for search engines to keep up with its immense size and pace. Thus far, the major search engines have only been able to index a fraction of all the data that is available on the internet. Therefore, chances are you will occasionally fail to find the desired result if you rely on only one search engine. Thus, the key to effective internet searching is not to rely on one, but rather on multiple search engines.
However, it can be time consuming and tedious to visit multiple search engines and perform a search on each one individually. A meta-search engine solves this problem by providing a central place and interface where users can search several search engines at once, saving them the time of learning and visiting each search engine separately.
Another benefit of a meta-search engine is its ability to access a cross section of results from several search engines. Rather than being tied to one database, meta-search engines combine results from multiple databases, thereby enhancing the coverage and relevancy of your search. Additionally, the ability to access multiple databases provides the most up-to-date results.
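The collate-deduplicate-rank process described above can be sketched in a few lines of Python. This is a minimal illustration, not any real meta-search engine's algorithm: the two `fetch_*` functions are hypothetical stand-ins for real search-engine APIs, and the ranking rule (summing reciprocal ranks across engines) is just one simple way to combine result lists.

```python
# Hypothetical stand-ins for real search-engine APIs: each returns a
# ranked list of result URLs for a query. No network calls are made.
def fetch_engine_a(query):
    return ["http://example.com/a", "http://example.com/b", "http://example.com/c"]

def fetch_engine_b(query):
    return ["http://example.com/b", "http://example.com/d", "http://example.com/a"]

def meta_search(query, engines):
    scores = {}
    for fetch in engines:
        for rank, url in enumerate(fetch(query)):
            # A URL near the top of an engine's list gets a higher score.
            # Using one dict keyed by URL also removes duplicates that
            # appear in more than one engine's results.
            scores[url] = scores.get(url, 0.0) + 1.0 / (rank + 1)
    # Best combined score first.
    return sorted(scores, key=scores.get, reverse=True)

results = meta_search("web crawlers", [fetch_engine_a, fetch_engine_b])
```

Here a URL returned by both engines (like `/a` or `/b`) appears only once in the output but ranks higher than a URL seen by just one engine, which is the basic idea behind combining databases for better coverage and relevancy.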
http://www.zapmeta.com/?sttname=metasearchfaq
Web crawlers
A web crawler is an automated program that methodically scans the World Wide Web for websites related to what you are looking for. Web crawlers are most commonly used by search engines. When a search engine's web crawler visits a web page, it scans the visible text, the hyperlinks, the images and the content of the various tags used on the site. Using the information gathered by the crawler, the search engine then determines what the site is about and ranks it by relevance and popularity in the search engine's database. Web crawlers are used by search engines such as:
Google
Yahoo
Bing
Market researchers may use a web crawler to scan the most common searches and assess trends in a given market, but a web crawler may be used by anyone seeking to collect information on the Internet.
Web crawlers may operate only once, say for a particular one-off project. If the purpose is long term, as is the case with search engines, they may be programmed to go through the internet over and over again periodically to determine whether any changes have been made. If a site is experiencing heavy traffic or technical difficulties, the crawler may be programmed to note that and revisit the site later, hopefully after the technical issues have been resolved.
Web crawling is an important method for collecting data on, and keeping up with, the rapidly expanding Internet. A vast number of web pages are continually being added every day, and information is constantly changing. A web crawler is a way for the search engines and other users to regularly ensure that their databases are up to date.
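The visiting-and-following-links behaviour described above boils down to a breadth-first traversal of the web's link graph. The sketch below is a toy illustration: to stay self-contained it crawls an in-memory "web" (a dict mapping each URL to the links found on that page) rather than fetching real pages over HTTP, and the URLs are made up.

```python
from collections import deque

# Toy "web": each URL maps to the list of hyperlinks found on that page.
# A real crawler would fetch each page over HTTP and parse its HTML.
WEB = {
    "http://site/":  ["http://site/a", "http://site/b"],
    "http://site/a": ["http://site/b", "http://site/c"],
    "http://site/b": [],
    "http://site/c": ["http://site/"],  # link back to the start page
}

def crawl(seed, web):
    seen = {seed}          # every URL already discovered
    queue = deque([seed])  # URLs waiting to be visited (FIFO = breadth-first)
    visited = []
    while queue:
        url = queue.popleft()
        visited.append(url)  # here a real crawler would index the page's text
        for link in web.get(url, []):
            if link not in seen:  # never schedule the same page twice
                seen.add(link)
                queue.append(link)
    return visited

pages = crawl("http://site/", WEB)
```

The `seen` set is what stops the crawler from looping forever when pages link back to each other, and re-running the crawl periodically is how a search engine would notice pages being added or changed.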
http://www.wisegeek.com/what-is-a-web-crawler.htm
Kids' search engines
Children are using the internet more and more each year for education, entertainment and general information such as opening times and news. Unfortunately, there are a lot of inappropriate web pages accessible to anyone using the internet, so kids' search engines have been created. These let parents feel safe about their children using the World Wide Web, and let them know what their children are not looking at, by filtering out sites that some parents and teachers might find inappropriate for kids. These usually include sites that deal with explicit sexual matter, porn sites, violence, hate speech, gambling and drug use. Many mainstream search engines have kids' versions, or a SafeSearch setting, such as:
Yahooligans
Ask Jeeves For Kids
AOL
MSN search
Filtering software works across the entire web, not just for search results. Most filtering software gives parents a fair amount of control to determine what is and is not allowable content. Cyber Patrol and Net Nanny are two of the most popular of these programs.
Cyber Patrol relies on an extensive categorized list of web sites to allow parents to determine which sites are allowable or not. Content is sourced by a team of 40+ professional researchers, automated tools and customer submissions to gather the most widely accessed content on the Internet. These lists are updated frequently. Parents can also control whether individual web sites are allowed or not.
The program can filter web pages, newsgroups, chat rooms and other internet resources, and can be used to limit online time, create user logs and so on.
Looksmart acquired Net Nanny in April 2004 and added porn-free web search to the product shortly thereafter. The product provides a wide variety of parental controls, including blocking content based on content, URL, or ratings.
In addition to blocking web pages, the program allows selective blocking of access to chat, instant messaging, internet games and newsgroups. The program can also be configured to prevent illegal downloading of copyrighted or obscene material.
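At its core, the categorised-blocklist approach described above works like the sketch below. This is a simplified illustration, not the actual mechanism used by Cyber Patrol or Net Nanny: the categories, host names and data structures here are all made up for the example.

```python
# Hypothetical categorised blocklist: each category maps to a set of hosts.
# Real products maintain far larger, professionally researched lists.
BLOCKLIST = {
    "gambling": {"casino-example.com"},
    "violence": {"violent-example.com"},
}

# Parents choose which whole categories to block.
BLOCKED_CATEGORIES = {"gambling"}

def is_allowed(host):
    """Return False if the host appears in any category a parent has blocked."""
    for category in BLOCKED_CATEGORIES:
        if host in BLOCKLIST.get(category, set()):
            return False
    return True
```

With this configuration, `casino-example.com` is blocked because the "gambling" category is enabled, while `violent-example.com` is still allowed because "violence" is not; per-site overrides, as Cyber Patrol offers, would just be another allow/deny set checked before the category lists.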
http://searchenginewatch.com/2156191
Directories
We visit directories for a number of reasons. Directories tend to display high quality sites that have been pre-screened by a human editor. Editors check to make sure that a site is active, that it contains unique content, that it is not under construction, and that visitors can actually find their way around it. Sites that crash browsers, contain no content, or are simply duplicates of other sites are rarely listed in directories; therefore, searchers can click on a listing and be assured that the web site they visit will likely be of reasonable quality. Examples of web directories are:
Aboutus.org
Yahoo! Directory
World Wide Web Virtual Library
Business.com
Directories are also useful for the structure they give to the Internet. While local search has come a long way in the last few years, searchers that are looking for a business or organization in their region still sometimes have a difficult time locating a site via a search engine. A directory on the other hand nearly always contains regional categories that can be easily browsed. A searcher looking for a real estate agent in their area of the country can locate multiple, relevant listings without having to sort through page after page of search engine results.
Users who are searching for a variety of sites on the same topic will also find directories helpful. In addition to regional categories, directories are organized topically. Using your favourite search engine to look for Web sites about classic TV shows may turn up thousands, or even tens of thousands, of listings that you'll need to carefully sort through, visit and reject as you look for the site that has exactly what you need. Browsing a directory, on the other hand, will provide you with a list of pre-screened, human-selected Web sites that cover the exact topic you are looking for. Better yet, the descriptions assigned to the listings have been screened for accuracy and were written by editors, not marketers.
http://www.searchengineguide.com/jennifer-laycock/sem-101-what-is.php