Artificial Intelligence Seminar Materials
Artificial intelligence (AI) is the intelligence exhibited by machines or software.
Thinking machines and
artificial beings appear in Greek myths, such as Talos of Crete, the golden
robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have
intelligence were built in every major civilization: animated statues were seen
in Egypt and Greece and humanoid automatons were built by Yan Shi, Hero of
Alexandria, Al-Jazari and Wolfgang von Kempelen.
It was also widely believed
that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and
Paracelsus. By the 19th and 20th centuries, artificial beings had become a
common feature in fiction, as in Mary Shelley's Frankenstein or Karel
Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that
all of these are examples of an ancient urge, as she describes it, "to
forge the gods". Stories of these creatures and their fates discuss many
of the same hopes, fears and ethical concerns that are presented by artificial
intelligence.
Mechanical or
"formal" reasoning has been developed by philosophers and
mathematicians since antiquity. The study of logic led directly to the
invention of the programmable digital electronic computer, based on the work of
mathematician Alan Turing and others. Turing's theory of computation suggested
that a machine, by shuffling symbols as simple as "0" and
"1", could simulate any conceivable act of mathematical deduction.
This, along with recent discoveries in neurology, information theory and
cybernetics, inspired a small group of researchers to begin to seriously
consider the possibility of building an electronic brain.
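To make Turing's insight concrete, here is a minimal Turing-machine simulator sketched in Python. The state names and transition table are illustrative assumptions made for this seminar material, not a reconstruction of Turing's own formulation; the toy machine simply scans a block of 1s and appends one more (a unary increment).

def run_turing_machine(transitions, tape, state="scan", head=0, max_steps=1000):
    # transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    # where move is -1 (left) or +1 (right). Unlisted pairs halt the machine.
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = cells.get(head, "0")        # blank cells read as "0"
        if (state, symbol) not in transitions:
            break                            # no applicable rule: halt
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy rule table (an assumption for this sketch): scan right over 1s,
# then write one more 1 at the first blank cell and stop.
increment = {
    ("scan", "1"): ("scan", "1", +1),        # keep moving right over 1s
    ("scan", "0"): ("halt", "1", +1),        # first blank: write 1, stop
}

print(run_turing_machine(increment, "111"))  # prints "1111"

However simple, a machine of this shape is computationally universal given a suitable rule table, which is exactly the point Turing's theory makes.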
In the early 1980s, AI research
was revived by the commercial success of expert systems, a form of AI program
that simulated the knowledge and analytical skills of one or more human
experts. By 1985 the market for AI had reached over a billion dollars. At the
same time, Japan's fifth generation computer project inspired the U.S. and
British governments to restore funding for academic research in the field. However,
beginning with the collapse of the Lisp Machine market in 1987, AI once again
fell into disrepute, and a second, longer-lasting AI winter began.
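For readers unfamiliar with the technology behind that boom: an expert system pairs a rule base of if-then knowledge with an inference engine that applies it. The sketch below shows the forward-chaining loop at the heart of many such systems; the rules and facts here are invented purely for illustration, and real shells of the era (MYCIN-style systems, for instance) added certainty factors, explanation facilities, and far larger rule bases.

def forward_chain(rules, facts):
    # Repeatedly fire any rule whose premises all hold, asserting its
    # conclusion, until a full pass over the rules adds nothing new.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)        # rule fires: assert conclusion
                changed = True
    return facts

# Hypothetical diagnostic rules, written as (premises, conclusion).
rules = [
    (("fever", "cough"), "flu-suspected"),
    (("flu-suspected", "short-of-breath"), "refer-to-doctor"),
]

print(forward_chain(rules, {"fever", "cough", "short-of-breath"}))
# -> includes "flu-suspected" and "refer-to-doctor"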
In the 1990s
and early 21st century, AI achieved its greatest successes, albeit somewhat
behind the scenes. Artificial intelligence is used for logistics, data mining,
medical diagnosis and many other areas throughout the technology industry.
The success was due to several factors: the increasing computational power of
computers (see Moore's law), a greater emphasis on solving specific
subproblems, the creation of new ties between AI and other fields working on
similar problems, and above all a new commitment by researchers to solid
mathematical methods and rigorous scientific standards.