Watson is an artificially intelligent computer system capable of answering questions posed in natural language,[2] developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's Thomas J. Watson.[3][4] The computer system was specifically developed to answer questions on the quiz show Jeopardy!.[5] In 2011, Watson competed on Jeopardy! against former winners Brad Rutter and Ken Jennings.[3][6][7] Watson received the first prize of $1 million.[8]
Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage[9] including the full text of Wikipedia,[10] but was not connected to the Internet during the game.[11][12] For each clue, Watson's three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game's signaling device, but had trouble responding to a few categories, notably those having short clues containing only a few words.
In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]
Architecture

Figure: The high-level architecture of IBM's DeepQA used in Watson[15]
Watson is a question answering (QA) computing system built by IBM.[2] IBM describes it as "an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open domain question answering" which is "built on IBM's DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring."[2]
Hardware

According to IBM:
Watson is a workload optimized system designed for complex analytics, made possible by integrating massively parallel POWER7 processors and the IBM DeepQA software to answer Jeopardy! questions in under three seconds. Watson is made up of a cluster of ninety IBM Power 750 servers (plus additional I/O, network and cluster controller nodes in 10 racks) with a total of 2880 POWER7 processor cores and 16 Terabytes of RAM. Each Power 750 server uses a 3.5 GHz POWER7 eight core processor, with four threads per core. The POWER7 processor's massively parallel processing capability is an ideal match for Watson's IBM DeepQA software which is embarrassingly parallel (that is a workload that is easily split up into multiple parallel tasks).[16]
According to John Rennie, Watson can process 500 gigabytes, the equivalent of a million books, per second.[17] IBM's master inventor and senior consultant Tony Pearson estimated Watson's hardware cost at about $3 million[18] and noted that, at 80 teraflops, it would have placed 94th on the TOP500 list of supercomputers.[19] According to Rennie, the content was stored in Watson's RAM for the game because data stored on hard drives would be too slow to access.[17]
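As a rough, back-of-envelope check of the figures quoted above, the totals imply about 32 cores and roughly 182 GB of RAM per server; the breakdown into four eight-core POWER7 chips per Power 750 is inferred from those totals rather than stated outright in the IBM quote:

```python
# Back-of-envelope check of the cluster figures quoted above.
# Assumption: each Power 750 holds four 8-core POWER7 chips; this is
# inferred from the stated totals (2,880 cores across 90 servers),
# not from the IBM quote itself.

servers = 90
total_cores = 2880
total_ram_tb = 16
threads_per_core = 4

cores_per_server = total_cores // servers          # 32 cores per server
chips_per_server = cores_per_server // 8           # 4 eight-core POWER7 chips
ram_per_server_gb = total_ram_tb * 1024 / servers  # ~182 GB of RAM per server
hardware_threads = total_cores * threads_per_core  # 11,520 simultaneous threads

print(cores_per_server, chips_per_server, round(ram_per_server_gb), hardware_threads)
# 32 4 182 11520
```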
Software

Watson's software was written in various languages, including Java, C++, and Prolog, and runs on the SUSE Linux Enterprise Server 11 operating system, using the Apache Hadoop framework for distributed computing, the Apache UIMA (Unstructured Information Management Architecture) framework, and IBM's DeepQA software.[9][20][21] According to IBM, "more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses."[22]
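IBM has not published DeepQA's source, but the stages it names publicly (analyzing the question, generating hypotheses, scoring evidence, and merging and ranking the results) suggest a pipeline along the following lines. This is a deliberately simplified sketch: the function names, and the plain averaging used to merge scores, are illustrative assumptions, not IBM's actual implementation.

```python
from collections import defaultdict

def deepqa_pipeline_sketch(clue, analyzers, searchers, scorers):
    """Illustrative sketch of a DeepQA-style flow (not IBM's implementation).

    analyzers: functions that extract keywords/features from the clue
    searchers: functions that propose candidate answers (hypotheses)
    scorers:   functions that score a (clue, candidate) pair in [0, 1]
    """
    # 1. Question analysis: pool the features found by every analyzer.
    features = set()
    for analyze in analyzers:
        features |= set(analyze(clue))

    # 2. Hypothesis generation: gather candidate answers from many sources.
    candidates = set()
    for search in searchers:
        candidates |= set(search(features))

    # 3. Evidence scoring: every scorer rates every candidate independently.
    evidence = defaultdict(list)
    for candidate in candidates:
        for score in scorers:
            evidence[candidate].append(score(clue, candidate))

    # 4. Merge and rank: a plain average stands in here for DeepQA's
    #    learned combination of evidence scores.
    ranked = sorted(evidence.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]),
                    reverse=True)
    return ranked  # best hypothesis first, with its evidence scores
```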
Data

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. Specifically, DBpedia, WordNet, and YAGO were used.[23]
The IBM team provided Watson with millions of documents, including dictionaries, encyclopedias, and other reference material that it could use to build its knowledge.[12] Although Watson was not connected to the Internet during the game,[24] it contained 200 million pages of structured and unstructured content consuming four terabytes of disk storage,[9] including the full text of Wikipedia.[10]
Operation

Figure: When presented with a question, Watson would use thousands of algorithms simultaneously to find answers, then compile those answers to determine its level of confidence in any given answer.
The computer's techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson's case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels "sure" enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.
—Ken Jennings[25]
When playing Jeopardy!, all players must wait until host Alex Trebek reads each clue in its entirety, after which a light is lit as a "ready" signal; the first to activate their buzzer button wins the chance to respond.[12][26] Watson received the clues as electronic text at the same moment they were made visible to the human players.[12] It would then parse the clues into different keywords and sentence fragments in order to find statistically related phrases.[12] Watson's main innovation was not the creation of a new algorithm for this operation but rather its ability to quickly execute thousands of proven language analysis algorithms simultaneously to find the correct answer.[12][27] The more algorithms that find the same answer independently, the more likely Watson is to be correct.[12] Once Watson has a small number of potential solutions, it is able to check them against its database to ascertain whether each solution makes sense.[12]

In a sequence of 20 mock games, human participants were able to use the six to seven seconds that Watson, on average, needed to hear the clue and decide whether to signal for responding.[12] During that time, Watson also has to evaluate the response and determine whether it is sufficiently confident in the result to signal.[12] Part of the system used to win the Jeopardy! contest was the electronic circuitry that receives the "ready" signal and then examines whether Watson's confidence level is great enough to activate the buzzer. Given the speed of this circuitry compared to human reaction times, Watson's reaction time was faster than that of the human contestants except when the human anticipated (instead of reacted to) the ready signal.[28] After signaling, Watson speaks with an electronic voice and gives the responses in Jeopardy!'s question format.[12] Watson's voice was synthesized from recordings that actor Jeff Woodman made for an IBM text-to-speech program in 2004.[29]
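The buzz decision described above can be pictured with a small sketch: once the ready signal arrives, Watson signals only if its best-ranked answer clears a confidence threshold, and it then phrases the response as a question. The threshold value, function name, and example candidates below are illustrative assumptions; the real decision logic has not been published.

```python
def decide_to_buzz(ranked_answers, ready_signal_received,
                   confidence_threshold=0.5):
    """Illustrative buzz decision (the threshold value is an assumption).

    ranked_answers: list of (answer, confidence) pairs, best first,
    e.g. the output of a DeepQA-style ranking stage.
    """
    if not ready_signal_received or not ranked_answers:
        return None
    best_answer, confidence = ranked_answers[0]
    if confidence >= confidence_threshold:
        # Respond in Jeopardy!'s required question format.
        return f"What is {best_answer}?"
    return None  # confidence too low: stay silent rather than risk a penalty

# Example with made-up candidates and confidences:
print(decide_to_buzz([("Toronto", 0.32), ("Chicago", 0.27)], True))  # None
print(decide_to_buzz([("Jericho", 0.86), ("Nineveh", 0.41)], True))  # What is Jericho?
```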
Comparison with human players

Figure: Watson, Ken Jennings, and Brad Rutter in their Jeopardy! exhibition match.
Watson's basic working principle is to parse keywords in a clue while searching for related terms as responses. This gives Watson some advantages and disadvantages compared with human Jeopardy! players.[30] Watson has deficiencies in understanding the contexts of the clues. As a result, human players usually generate responses faster than Watson, especially to short clues.[12] Watson's programming prevents it from using the popular tactic of buzzing before it is sure of its response.[12] Watson has consistently better reaction time on the buzzer once it has generated a response, and is immune to human players' psychological tactics.[12][31]
The Jeopardy! staff used different means to notify Watson and the human players when to buzz,[28] which was critical in many rounds.[31] The humans were notified by a light, which took them tenths of a second to perceive.[32][33] Watson was notified by an electronic signal and could activate the buzzer within about eight milliseconds.[34] The humans tried to compensate for the perception delay by anticipating the light,[35] but the variation in their anticipation time was generally too great to fall within Watson's response window.[31] Watson was not programmed to anticipate the notification signal.[33][35]
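A rough simulation makes the timing argument concrete. Only the roughly eight-millisecond electronic delay and the tenths-of-a-second human perception time come from the sources cited above; the distribution of human anticipation error, and the simplified handling of early buzzes, are invented for illustration.

```python
import random

random.seed(0)

WATSON_DELAY = 0.008        # ~8 ms from ready signal to buzzer press (cited above)
HUMAN_PERCEPTION = 0.20     # reacting to the light: roughly tenths of a second
ANTICIPATION_JITTER = 0.10  # assumed std. dev. of a human's guess of when the
                            # light will come on (illustrative only)

def human_buzz_time(anticipating):
    if not anticipating:
        return HUMAN_PERCEPTION
    # Guessing the moment of the signal: an early guess is an invalid buzz,
    # modeled here simply as falling back to perception speed.
    guess = random.gauss(0.0, ANTICIPATION_JITTER)
    return HUMAN_PERCEPTION if guess < 0 else guess

trials = 10_000
wins = sum(human_buzz_time(anticipating=True) < WATSON_DELAY for _ in range(trials))
print(f"Human beats Watson to the buzzer in ~{100 * wins / trials:.0f}% of anticipated buzzes")
```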
Development history

Since Deep Blue's victory over Garry Kasparov in chess in 1997, IBM had been on the hunt for a new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with coworkers, noticed that the restaurant they were in had fallen silent. He soon discovered the cause of this evening hiatus: Ken Jennings, who was then in the middle of his successful 74-game run on Jeopardy!. Nearly the entire restaurant had piled toward the televisions, mid-meal, to watch the phenomenon. Intrigued by the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research executive Paul Horn backed Lickel up, pushing for someone in his department to take up the challenge of playing Jeopardy! with an IBM system. Though he initially had trouble finding any research staff willing to take on what looked to be a much more complex challenge than the wordless game of chess, eventually David Ferrucci took him up on the offer.[36]

In competitions managed by the United States government, Watson's predecessor, a system named Piquant, was usually able to respond correctly to only about 35% of clues and often required several minutes to respond.[37][38][39] To compete successfully on Jeopardy!, Watson would need to respond in no more than a few seconds, and at that time, the problems posed by the game show were deemed to be impossible to solve.[12]
In initial tests run during 2006 by David Ferrucci, the senior manager of IBM's Semantic Analysis and Integration department, Watson was given 500 clues from past Jeopardy! programs. While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson's first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems.[12] By 2008, the developers had advanced Watson such that it could compete with Jeopardy! champions.[12] By February 2010, Watson could beat human Jeopardy! contestants on a regular basis.[40]
While primarily an IBM effort, Watson's development team includes faculty and students from Rensselaer Polytechnic Institute, Carnegie Mellon University, University of Massachusetts Amherst, University of Southern California's Information Sciences Institute, University of Texas at Austin, Massachusetts Institute of Technology, New York Medical College, University of Trento, and Queens College, City University of New York.[15][41]
Competing on Jeopardy!

Preparation

Figure: Watson demo at an IBM booth at a trade show
In 2008, IBM representatives communicated with Jeopardy! executive producer Harry Friedman about the possibility of having Watson compete against Ken Jennings and Brad Rutter, two of the most successful contestants on the show, and the program's producers agreed.[12][42] Watson's differences with human players had generated conflicts between IBM and Jeopardy! staff during the planning of the competition.[30] IBM repeatedly expressed concerns that the show's writers would exploit Watson's cognitive deficiencies when writing the clues, thereby turning the game into a Turing test. To alleviate that concern, a third party randomly picked the clues from previously written shows that were never broadcast.[30] Jeopardy! staff also raised concerns over Watson's reaction time on the buzzer. Originally Watson signaled electronically, but show staff requested that it press a button physically, as the human contestants would.[43] Even with a robotic "finger" pressing the buzzer, Watson remained faster than its human competitors. Ken Jennings noted, "If you're trying to win on the show, the buzzer is all," and that Watson "can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can't compete with computer circuits in this regard."[31][35][44] Stephen Baker, a journalist who recorded Watson's development in his book Final Jeopardy, reported that the conflict between IBM and Jeopardy! became so serious in May 2010 that the competition was almost canceled.[30]

Watson also learned from its mistakes; during one practice round, for example, it was given the clue "This trusted friend was the first non-dairy powdered creamer" and replied, "What is milk?" As part of the preparation, IBM constructed a mock set in a conference room at one of its technology sites to model the one used on Jeopardy!. Human players, including former Jeopardy! contestants, also participated in mock games against Watson, with Todd Alan Crain of The Onion playing host.[12] About 100 test matches were conducted, with Watson winning 65% of the games.[45]
To provide a physical presence in the televised games, Watson was represented by an "avatar" of a globe, inspired by the IBM "smarter planet" symbol. Forty-two colored threads criss-crossed the globe to represent Watson's state of thought; the number 42 was an in-joke referring to the novel The Hitchhiker's Guide to the Galaxy.[25] Joshua Davis, the artist who designed the avatar for the project, explained to Stephen Baker that there were 36 triggerable states that Watson could use throughout the game to show its confidence in responding to a clue correctly; Davis had hoped to find 42, to add another level to the Hitchhiker's Guide reference, but he was unable to pinpoint enough game states.[46]
A practice match was recorded on January 13, 2011, and the official matches were recorded on January 14, 2011. All participants maintained secrecy about the outcome until the match was broadcast in February.[47]