Monday, October 20, 2008

Educational schemata

One view of the components of information literacy
Based on the Big6 by Mike Eisenberg and Bob Berkowitz.
http://big6.com/

1. Task definition: The first step in the Information Literacy strategy is to clarify and understand the requirements of the problem or task for which information is sought. Basic questions asked at this stage:
1. What is known about the topic?
2. What information is needed?
3. Where can the information be found?

2. Locating: The second step is to identify sources of information and to find those resources. Depending upon the task, helpful sources will vary. Sources may include books, encyclopedias, maps, almanacs, and so on; they may be in electronic or print formats, found through social bookmarking tools, or in other forms.
3. Selecting/analyzing: Step three involves examining the resources that were found and determining whether each is useful for solving the problem. The useful resources are selected and the inappropriate ones are rejected.
4. Organizing/synthesizing: In the fourth step, the information that has been selected is organized and processed so that knowledge and solutions are developed. Examples of basic steps in this stage are:

1. Discriminating between fact and opinion
2. Basing comparisons on similar characteristics
3. Noticing various interpretations of data
4. Finding more information if needed
5. Organizing ideas and information logically

5. Creating/presenting: In step five the information or solution is presented to the appropriate audience in an appropriate format: a paper is written, a presentation is made, or drawings, illustrations, and graphs are produced.
6. Evaluating: The final step in the Information Literacy strategy involves the critical evaluation of the completion of the task or the new understanding of the concept. Was the problem solved? Was new knowledge found? What could have been done differently? What was done well?

Another conception of information literacy
This conception, used primarily in the library and information studies field, and rooted in the concepts of library instruction and bibliographic instruction, is the ability "to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information" (Presidential Committee on Information Literacy, 1989, p. 1). In this view, information literacy is the basis for life-long learning, and an information literate person is one who:
-Recognizes that accurate and complete information is the basis for intelligent decision making.
-Recognizes the need for information.
-Knows how to locate needed information.
-Formulates questions based on information needs.
-Identifies potential sources of information.
-Develops successful search strategies.
-Accesses sources of information including computer-based and other technologies.
-Evaluates information no matter what the source.
-Organizes information for practical application.
-Integrates new information into an existing body of knowledge.
-Uses information in critical thinking and problem solving. (Doyle, 1992)
-Uses information ethically and legally.

Since information may be presented in a number of formats, the term information applies to more than just the printed word. Other literacies such as visual, media, computer, network, and basic literacies are implicit in information literacy.

How Web search engines work

A search engine operates in the following order:
1. Web crawling
2. Indexing
3. Searching
Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link it sees. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries.

Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. A cached page always holds the actual text that was indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of linkrot, and serving the cached copy increases usability by satisfying the principle of least astonishment: the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that is no longer available elsewhere.
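To make the crawl-and-index stage concrete, here is a minimal sketch in Python. Everything in it is a simplification for illustration: the seed URL, the regex-based link and word extraction, and the helper names crawl and build_index are invented for this sketch rather than taken from any real engine, and a production crawler would handle HTML parsing, politeness, and error cases far more carefully.

```python
import re
from collections import defaultdict
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

def allowed(url, user_agent="toy-crawler"):
    """Honor robots.txt exclusions before fetching a page."""
    rp = RobotFileParser(urljoin(url, "/robots.txt"))
    try:
        rp.read()
    except OSError:
        return True  # no reachable robots.txt: assume allowed in this sketch
    return rp.can_fetch(user_agent, url)

def crawl(seed_urls, max_pages=10):
    """Follow links breadth-first, returning {url: raw page text}."""
    seen, queue, pages = set(), list(seed_urls), {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen or not allowed(url):
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue
        pages[url] = html  # a real engine may keep this copy as its cache
        for link in re.findall(r'href="(http[^"]+)"', html):
            queue.append(link)  # the crawler follows every link it sees
    return pages

def build_index(pages):
    """Build an inverted index: word -> set of URLs containing that word."""
    index = defaultdict(set)
    for url, html in pages.items():
        text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping for the sketch
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

if __name__ == "__main__":
    pages = crawl(["http://example.com/"])
    index = build_index(pages)
    print(sorted(index)[:20])  # a sample of indexed terms
```

The page text is kept alongside the index here for the same reason real engines keep a cache: the index answers queries, while the stored copy is what can be shown when the live page has changed.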
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. Most search engines support the use of the Boolean operators AND, OR and NOT to further specify the search query. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords.
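Those Boolean operators map naturally onto set operations over an inverted index. The sketch below assumes the hypothetical build_index output from the previous example (a mapping from word to set of URLs) and shows one simplified way to evaluate such queries; it is not any real engine's query parser, and proximity search is not covered.

```python
def search(index, query):
    """Evaluate a keyword query with optional AND / OR / NOT operators.

    Plain terms are implicitly ANDed together; 'NOT term' excludes pages.
    Example: search(index, "information AND literacy NOT library")
    """
    include, exclude, mode = None, set(), "AND"
    for token in query.split():
        upper = token.upper()
        if upper in ("AND", "OR", "NOT"):
            mode = upper
            continue
        hits = index.get(token.lower(), set())
        if mode == "NOT":
            exclude |= hits          # pages to drop from the result
            mode = "AND"
        elif include is None:
            include = set(hits)      # first term seeds the result set
        elif mode == "OR":
            include |= hits
        else:  # AND
            include &= hits
    return (include or set()) - exclude
```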
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of webpages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.
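Because the actual ranking methods are proprietary and engine-specific, the sketch below stands in for them with the simplest possible signal: how often the query terms appear in each matching page. Real engines combine many more signals (link structure, freshness, and so on); the rank helper and the commented usage lines are hypothetical and reuse the earlier toy functions.

```python
import re
from collections import Counter

def rank(pages, matches, query_terms):
    """Order matching URLs by how often the query terms occur in each page."""
    terms = [t.lower() for t in query_terms]
    scores = Counter()
    for url in matches:
        words = re.findall(r"[a-z0-9]+", pages[url].lower())
        scores[url] = sum(words.count(t) for t in terms)
    return [url for url, _ in scores.most_common()]

# Hypothetical end-to-end use, reusing crawl/build_index/search from above:
# pages   = crawl(["http://example.com/"])
# index   = build_index(pages)
# matches = search(index, "information AND literacy")
# for url in rank(pages, matches, ["information", "literacy"]):
#     print(url)
```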
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some allow advertisers to pay to have their listings ranked higher in search results. Search engines that do not accept money for their search results earn revenue by running search-related ads alongside the regular results, and they make money every time someone clicks on one of these ads.
Revenue in the web search portals industry is projected to grow in 2008 by 13.4 percent, with broadband connections expected to rise by 15.1 percent. Between 2008 and 2012, industry revenue is projected to rise by 56 percent as Internet penetration still has some way to go to reach full saturation in American households. Furthermore, broadband services are projected to account for an ever-increasing share of domestic Internet users, rising to 118.7 million by 2012, with an increasing share accounted for by fiber-optic and high-speed cable lines.

The World Wide Web

[Figure: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks]
Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but the two terms are not synonymous.
The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies, of these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet.
Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
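The request-response exchange that HTTP provides can be shown in a few lines. The sketch below uses Python's standard http.client module to fetch a single resource; example.com and the User-Agent string are placeholders, not anything specific to the systems described here.

```python
import http.client

# Open a connection and issue a plain HTTP GET, as any client would.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "toy-agent/0.1"})
resp = conn.getresponse()

print(resp.status, resp.reason)        # e.g. 200 OK
print(resp.getheader("Content-Type"))  # the format of the returned resource
body = resp.read()                     # the document itself (HTML, image data, ...)
conn.close()
```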
Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer and Firefox, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.
Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.
Using the Web, it is also easier than ever before for individuals and organisations to publish ideas and information to an extremely large audience. Anyone can publish a web page, start a blog, or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.
Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organisations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.
Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.
Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.
In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management system (CMS) or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organisation, or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
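A very small sketch of that division of labour may help, assuming an in-memory dictionary in place of the CMS's real database: contributors store structured records through an editing function, and visitors receive only the rendered HTML. The function and field names are invented for the illustration.

```python
from html import escape

articles = {}  # stands in for the CMS's underlying content database

def edit_page(slug, title, body):
    """What a contributor's editing page does: store structured content."""
    articles[slug] = {"title": title, "body": body}

def view_page(slug):
    """What a casual visitor receives: the content in its final HTML form."""
    a = articles[slug]
    return f"<h1>{escape(a['title'])}</h1>\n<p>{escape(a['body'])}</p>"

edit_page("welcome", "Hello", "First post entered by a staff member.")
print(view_page("welcome"))
```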

Super Junior


Super Junior (슈퍼주니어; often stylized SUPERJUNIOR), sometimes referred to as SJ or SuJu (슈주), is a thirteen-member boy band from Seoul, South Korea. They are managed by producer Lee Soo Man and are currently the largest group under SM Entertainment. The members are Leeteuk (the leader), Heechul, Han Geng, Yesung, Kang-in, Shindong, Sungmin, Eunhyuk, Siwon, Donghae, Ryeowook, Kibum, and Kyuhyun. The Chinese member, Han Geng, was chosen from among 3,000 applicants through auditions held in China by SM Entertainment in 2001.[1] The group initially debuted with twelve members on November 6, 2005, but since the addition of Kyuhyun on May 23, 2006, they have been a thirteen-member group. Super Junior has released two studio albums and one CD single since 2005. Their most successful album, Don't Don, was the second best-selling album of 2007, according to the Music Industry Association of Korea. In addition, the group has earned four music awards from the M.NET/KM Music Festival, another four from the Golden Disk Awards, and is the second group to win Favorite Artist Korea at the MTV Asia Awards, after JTL.