Artificial intelligence

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip "AI is whatever hasn't been done yet."[3] For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"),[9] followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other.

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unabated. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.

History

While thought-capable artificial beings appeared as storytelling devices in antiquity, the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity.
The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".

The field of AI research was born at a workshop at Dartmouth College in 1956. Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense, and laboratories had been established around the world. AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They failed to recognize the difficulty of some of the remaining tasks.

Progress slowed, and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter",[9] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.

In the late 1990s, AI began to be used for logistics, data mining, medical diagnosis and other areas. The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields, and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception. By the mid-2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research.
In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who at the time had continuously held the world No. 1 ranking for two years. This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects that use AI within Google increased from a "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention. Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.

Reasoning, problem solving

Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
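To make this kind of step-by-step reasoning concrete, here is a minimal sketch, not taken from the article, of one classic technique early researchers used: framing a puzzle as a search through a space of states. The puzzle choice (the two water jugs) and all names are illustrative; the code is a plain breadth-first search in Python, trying one action at a time until a goal state is reached.

    from collections import deque

    def solve_jugs(cap_a=4, cap_b=3, goal=2):
        """Breadth-first search over (litres in jug A, litres in jug B) states."""
        start = (0, 0)
        parents = {start: None}              # state -> predecessor, for path recovery
        queue = deque([start])
        while queue:
            state = queue.popleft()
            if goal in state:                # goal test: either jug holds `goal` litres
                path = []
                while state is not None:     # walk parent links back to the start
                    path.append(state)
                    state = parents[state]
                return path[::-1]
            a, b = state
            pour_ab = min(a, cap_b - b)      # how much A can pour into B
            pour_ba = min(b, cap_a - a)      # how much B can pour into A
            successors = [
                (cap_a, b), (a, cap_b),      # fill one jug to its capacity
                (0, b), (a, 0),              # empty one jug
                (a - pour_ab, b + pour_ab),  # pour A into B
                (a + pour_ba, b - pour_ba),  # pour B into A
            ]
            for nxt in successors:
                if nxt not in parents:       # expand each state only once
                    parents[nxt] = state
                    queue.append(nxt)
        return None                          # goal unreachable

    print(solve_jugs())                      # [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]

Run as-is, this prints the shortest solution: fill the 3-litre jug, pour it into the 4-litre jug, refill the 3-litre jug, then top up the 4-litre jug, leaving exactly 2 litres behind. The blind enumeration of states also hints at why such methods struggled: for harder problems the state space grows combinatorially, which is one reason later research turned to heuristics, probability and learning.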