Artificial Intelligence
Group #1
May 7, 2003
Artificial intelligence is the science and engineering of creating machines that can carry
out tasks such as perception, reasoning, and learning. These computers are programmed to
think on their own so that they can aid humans in a particular area of interest. AI is a young
field that has been growing and gaining support for almost fifty years. The government has
long supported the research and creation of artificial intelligence and will continue to
do so, because it can help in so many ways. Of the many areas of artificial intelligence,
the one most important to the people of this country is its use in
the military. Some critics may say that computers put people out of jobs and that they are too
expensive, but these programs may help save people’s lives, and a price cannot be put on lives.
It is recommended that research and creation of artificial intelligence continue because it
is so important for the safety of the people of this country.
Background Information
The first fifty years of AI have produced a wide range of results. There were early
successes that sometimes ended in failure, but progress was made with every failed attempt:
specific areas that needed improvement were highlighted, and researchers were able to build
on them. AI research has explored the human reasoning process and human intelligence in order
to create systems that can function in the same manner. To understand how AI became what it is
today, it is important to look at its past and learn about the early achievements.
John McCarthy coined the term Artificial Intelligence in 1956. Although it is a very
young field, its roots reach back to the first studies of reasoning and knowledge. Hundreds of
years ago, people began thinking about methods to reason automatically, and intelligent
artifacts appear even in Greek mythology.
Military efforts led to the development of computer science. The United States and
Germany were competing to build computers that could translate coded messages or be used in
ballistics calculations. After World War II, computing facilities became available for less
urgent tasks. Surplus computing power and major improvements in machine design allowed
researchers to pursue further ideas for computing.
Many pioneers in the field of artificial intelligence were highly interested in developing
intelligent systems. Alan Turing was one of these pioneers and his work helped in the
advancement of AI. In a significant paper he wrote in 1950, he argued for the possibility of
creating intelligent machines. The paper proposed a test for comparing the intellectual capability
of AI systems to that of humans, now known as the “Turing Test.” In this test, a judge holds
a conversation on any topic with an unseen party. If the judge believes he is talking to
another human when he is actually talking to a computer, the computer has passed the test
and can be considered intelligent.
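The structure of the test can be sketched in a few lines of code. Everything here is an invented stand-in (the judge, the responders, and the question are toy examples, not a real chatbot):

```python
import random

def turing_test(judge, human_respond, machine_respond, questions):
    """Minimal sketch of the imitation game: a judge questions two
    unseen parties and must decide which one is the machine."""
    # Randomly assign the two parties to anonymous channels A and B.
    parties = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        parties = {"A": machine_respond, "B": human_respond}
    transcripts = {"A": [], "B": []}
    for q in questions:
        for channel, respond in parties.items():
            transcripts[channel].append((q, respond(q)))
    guess = judge(transcripts)  # judge names the channel it believes is the machine
    truth = "A" if parties["A"] is machine_respond else "B"
    return guess != truth       # True means the machine fooled the judge (passed)

# Toy usage: a machine that parrots one canned phrase never fools a judge
# who checks for that phrase.
machine = lambda q: "That is an interesting question."
human = lambda q: f"My answer to '{q}' is no."
judge = lambda t: "A" if any(a == "That is an interesting question."
                             for _, a in t["A"]) else "B"
print(turing_test(judge, human, machine, ["Do you dream?"]))  # False: judge spots the machine
```

A machine passes only when no such telltale pattern lets the judge separate it from the human.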
Others had the same ideas in mind and were eager to make machines that could imitate
human reasoning. Allen Newell and Herbert Simon were among the first researchers to try to
create intelligent programs. In 1955, they designed a well-known program called the Logic
Theorist. Using common rules of logic, this problem-solving program attempted to prove
statements. It was able to reproduce many human-developed proofs and even created a shorter,
more direct proof of one theorem commonly found in textbooks.
In the following year, John McCarthy proposed that a two-month study of artificial
intelligence be done during the summer of 1956 at Dartmouth College. This study was called
“The Dartmouth Summer Research Project on Artificial Intelligence” and is considered a
significant milestone in AI’s progress. McCarthy wanted to conduct studies based on his belief
that every aspect of learning can be described so that a machine can be created to replicate it.
There were many early successes and failures in attempts to develop AI
application systems. Hubert Dreyfus, a well-known critic of artificial intelligence, says that
nearly all initial work in AI involved language translation, problem solving, or
pattern recognition. Language translation was an early leader, and Dreyfus estimates that over
$20 million was spent on it in the first ten years of the research program. By the late
fifties, programs existed that could do a decent job of translating technical documents, and
it seemed that only larger databases and more computing power were needed to apply the same
procedures to less formal texts. In reality, the programs never measured up in the
anticipated ways.
Work in problem solving also produced early successes and ultimate failure. Newell,
Shaw, and Simon’s work on GPS (the General Problem Solver) was the basis for most problem-solving
work. This program solved different types of problems using theoretical problem-solving
rules believed to be used by humans. However, GPS had deep theoretical problems and
did not fulfill its promise: its general techniques could handle only the simplest problems.
The basic trouble was that general problem-solving strategies are limited. Humans solve
problems in different situations using domain-specific knowledge and skills, whereas GPS had
only broad strategies. Increasing its capability would have meant adding domain-specific
knowledge for every potential problem area, and the GPS program was abandoned in 1967.
Pattern recognition also produced some early successes. Computers that could translate
Morse code and programs that could read handwriting in different styles were created. However,
these programs incorporated no real discovery about pattern recognition; instead they relied
on inflexible templates that were easily defeated by distortions in the data.
Some important programs that could outperform their makers were later created. Arthur
Samuel wrote a pioneering checkers-playing program beginning in the 1950s. This program used
learning methods to develop tournament-level skills, and it is best known for its win against
Robert Nealey in a 1962 exhibition match. In 1996, world champion Garry Kasparov played Deep
Blue, the chess computer built by IBM engineers, and won the six-game match. Kasparov wanted a
rematch in 1997, and IBM quickly accepted. The rematch also consisted of six games, and Deep
Blue and Kasparov were tied when they reached the last one. Deep Blue won it, beating the man
who had been the world’s human chess champion for the previous twelve years. In the years
since this major accomplishment, there has been much advancement in the applications of
artificial intelligence, continuing to today.
The Importance of Artificial Intelligence
While many would say that artificial intelligence has not succeeded in creating computer
systems with human-like intelligence, many businesses would agree that the payoffs of
artificial intelligence have been enormous. Although the field was devalued in the mid-to-late
1990s, its focus has shifted toward practical applications that can benefit a wide range of
industrial and commercial uses. The AI market is now aimed at enhancing existing
applications with specific intelligent technologies. AI software helps engineers create better
jet engines. In factories, it boosts productivity by monitoring equipment and signaling when
preventive maintenance is needed. The Pentagon uses AI to coordinate its immense logistics
operations. And in the pharmaceutical field, it is used to gain new insights into the
tremendous amount of data on human genetics.
Data-mining systems sift instantly through massive quantities of data to uncover patterns
and relationships that would elude an army of human searchers. Data-mining software typically
includes neural nets, statistical analysis, and expert systems with if-then rules that mimic
the logic of human experts. Companies such as Wal-Mart use this technology to predict sales of
every product with uncanny accuracy. Not only does this technology dramatically reduce the
amount of human work in sales analysis, it also nearly eliminates the possibility of human
error. The results are huge savings in inventories and maximum payoff from promotional
spending. The downside of this efficiency is that many workers will no longer be needed,
raising problems of unemployment.
Similar data-mining methods, such as the Echelon system, which can parse the text of
electronic messages and translate foreign-language phone conversations, are crucial for U.S.
intelligence agencies in the current war on terror. Without AI, even the entire adult
population of the U.S. could not begin to filter all the material the National Security Agency
(NSA) collects, which is why the Defense Advanced Research Projects Agency (DARPA) has long
been a primary sponsor of research on artificial intelligence. The NSA’s data-sleuthing system
was able to detect early warning signs of the September 11th attacks, although these alerts
were not screened by humans until after the fact.
Data-mining software has also been found useful in the medical field. Using
natural-language processing methods, a Swiss company discovered that infant leukemia has three
distinct clusters. Mining cancer patients’ records for clustering patterns could be a
tremendous stride toward curing cancer.
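The clustering idea can be illustrated with a minimal k-means pass over toy one-dimensional numbers (the data and cluster count are invented for illustration; real systems mine high-dimensional patient records):

```python
def k_means(points, k, iterations=20):
    """Minimal k-means sketch: the kind of clustering pass a data-mining
    tool might run, here over plain numbers instead of patient records."""
    pts = sorted(points)
    # Deterministic init: spread the initial centers across the sorted data.
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Toy data with three obvious groups around 1, 10, and 100.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0, 99.5, 100.5, 100.0]
print(k_means(data, 3))  # → [1.0, 10.0, 100.0]
```

The algorithm alternates between assigning points to the nearest cluster center and recomputing each center as its cluster's mean, until the grouping stabilizes.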
As we can see, the industries that utilize AI-enhanced applications will become more diverse
as the need for complex data analysis, customer relationship management, and other
applications expands outside the traditional commercial sphere. Defense, security, and
education are expected to experience the most significant growth in AI technology, while
other industries such as manufacturing will also see strong interest in implementation in
the coming years. It is
important to separate the science-fiction aspect of artificial intelligence from the more realistic
applications. AI is not about chess-playing robots, but about how specific technologies can
enhance already existing applications.
Artificial Intelligence Today
With technology changing as quickly as it does, it is hard to keep track of how many
different applications of artificial intelligence exist in the world today. Computers are
helping people with everything from decision making to simulating systems. The following are
just a few examples of the applications of artificial intelligence people are working with
today.
Adaptive Learning
Every day, humans instinctively make decisions when changes occur around them, like
putting on a coat when it is cold outside or taking an alternate route when traffic is heavy
on the beltway. Computers, too, are being programmed to make such decisions. A logistics
program created by Ascent Technology called SmartAirport can schedule plane flights by
factoring in workers’ availability and qualifications, and it knows whether a plane needs a
maintenance check. Productivity has risen by up to thirty percent in airports where Ascent’s
software has been implemented. Humans had been in charge of scheduling flights for decades,
and now computers are doing it better.
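A toy sketch of the underlying idea is a constraint-based assignment: greedily match each flight's task to an available, qualified worker. All names, skills, and rules below are invented for illustration; SmartAirport's actual algorithms are far more sophisticated:

```python
def assign_tasks(flights, workers):
    """Greedy sketch of constraint-based scheduling: give each flight's
    task to the first free worker who has the skill and the hour open."""
    schedule = {}
    busy = set()
    for flight in flights:
        for w in workers:
            if (w["name"] not in busy
                    and flight["needs"] in w["skills"]
                    and flight["hour"] in w["available_hours"]):
                schedule[flight["id"]] = w["name"]
                busy.add(w["name"])
                break
    return schedule

# Hypothetical flights and ground crew:
flights = [{"id": "UA101", "hour": 9, "needs": "maintenance"},
           {"id": "DL202", "hour": 9, "needs": "fueling"}]
workers = [{"name": "Avery", "skills": {"maintenance"}, "available_hours": {9, 10}},
           {"name": "Brook", "skills": {"fueling", "maintenance"}, "available_hours": {9}}]
print(assign_tasks(flights, workers))  # → {'UA101': 'Avery', 'DL202': 'Brook'}
```

Even this crude greedy pass shows why software scales where humans do not: the same loop handles a thousand flights as easily as two.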
Pattern Recognition
Another everyday task at which computers are surpassing humans is pattern
recognition. Software is being created to catch criminals, specifically credit card scammers.
The company HNC in San Diego created a program called Falcon that tracks when and where
customers use their credit cards. Human analysts can usually track only large purchases of
items such as jewelry or entertainment equipment, but by using neural networks and statistical
analysis, Falcon can also model normal credit card behavior, which makes it easier to spot
fraud. Nine of the ten major U.S. credit card companies use the Falcon program, and it has
improved detection rates from thirty to seventy percent.
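The statistical-analysis half of such a system can be sketched as a z-score check against a customer's purchase history (the neural-network half, and Falcon's actual methods, are beyond a toy example; the numbers here are invented):

```python
import statistics

def flag_unusual(history, new_amount, threshold=3.0):
    """Flag a charge that sits far outside the customer's normal spending,
    measured in standard deviations from the historical mean (z-score)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    return (new_amount - mean) / stdev > threshold

# A customer who normally spends $20-$60 per charge:
history = [25, 40, 31, 55, 22, 38, 45, 30]
print(flag_unusual(history, 48))    # False: within normal behavior
print(flag_unusual(history, 900))   # True: flagged for review
```

Modeling "normal" behavior per customer is what lets the system catch modest but out-of-character charges that a human reviewer scanning only large purchases would miss.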
Expert Systems
Every time humans have an experience, they react to it and learn something from it, and
after enough repetitions those reactions become instinct. Expert systems work like a human’s
instinct because such rules are programmed into them: programmers take the expertise of
experienced lab technicians and encode that knowledge into the system. Today, lab technicians
are assisted in diagnosing diseases and infections by FocalPoint, created by TriPath Imaging.
This software examines about five million Pap smear slides a year in the U.S. for hints of
cervical cancer. A key feature of the system is that it cannot be changed once it leaves the
laboratory, a safeguard that ensures a less qualified technician cannot degrade the system’s
intelligence.
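The if-then core of an expert system can be sketched as a tiny forward-chaining engine. The rules below are illustrative placeholders, not FocalPoint's actual medical criteria:

```python
def forward_chain(facts, rules):
    """Minimal expert-system engine: each rule maps a set of required facts
    to a conclusion, and the engine fires rules until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy rules (invented, not real medicine):
rules = [
    ({"abnormal cells", "irregular nuclei"}, "suspicious slide"),
    ({"suspicious slide"}, "refer to pathologist"),
]
print(sorted(forward_chain({"abnormal cells", "irregular nuclei"}, rules)))
# → ['abnormal cells', 'irregular nuclei', 'refer to pathologist', 'suspicious slide']
```

Chaining is what gives such systems their depth: one conclusion ("suspicious slide") becomes the premise for the next rule, mimicking a technician's trained reasoning.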
Simulation Systems
On-the-job training is one of the most important steps for a new trainee because it offers
a chance to make decisions in real-life situations and see the consequences first-hand.
However, in some situations, such as the operating room or war, it would be a huge mistake
for a person with no training to make decisions where the result could
be disastrous. For these situations, programmers are building simulation systems. Trainees
can work with a simulation system that is almost identical to the real world and learn to
make good decisions without potentially harming anyone or anything. Such systems are being
used in areas including emergency health services, the military, and the space program.
These are just some of the many applications of artificial intelligence that are being built
today. There is also one other application that is very important to the world today, and that is
artificial intelligence in the military.
Applications of Artificial Intelligence in the Military
Artificial intelligence will continue to develop and expand in many different fields, but
given the state of the nation today, the most important applications lie within defense
systems and the military. The military invests tremendous amounts in artificial intelligence,
and many projects are currently in development. According to the University of Maryland,
Baltimore County’s Dr. Tim Oates, a professor and researcher of artificial intelligence,
DARPA (the Defense Advanced Research Projects Agency) is one of the main sponsors of these
developments. The three main applications of AI in the military currently being built are
robots for urban warfare, autonomous aircraft and weapons, and software for tracking
terrorist activity.
Robots in Urban Warfare
Robots in urban warfare are a growing application of AI in the military. The idea is to
have robots accompany or replace humans in checking for enemy soldiers or threats. Similar
to what happened recently in Baghdad, Iraq, snipers could be hiding around a city waiting to
attack. But if robots were sent out instead of humans, then even if they were destroyed, at
least human lives would be saved. For military personnel and fellow Americans, it is far
easier to report that a robot soldier has been lost than a father, son, mother, or daughter.
Generally speaking, research in robotics has been increasing since 2001, according to the
Robotics Industry Association, which is organized exclusively to promote the use of robotics.
The use of robots in urban warfare saw its first significant military action in Afghanistan
during Operation Enduring Freedom. Sending them into caves, buildings, and other dark,
dangerous areas ahead of troops helps prevent human casualties. Emerging from the
Massachusetts Institute of Technology’s artificial intelligence program is iRobot’s
remote-controlled PackBot, whose development the Pentagon has been pushing to accelerate.
This kind of technology involves the robot listening to commands, distinguishing allies from
enemies, and executing commands successfully. So far these PackBots are capable of navigating
terrain and obstacles skillfully, laying down a cover of smoke, testing for chemical weapons,
and extending a “neck” that can peer around corners. These machines are also learning how to
follow their tracks back home if they lose contact with their base.
[Figure omitted. Source: Robotics Online]
Robots are not meant to be complete substitutes for humans, and most people agree they
will never fully replace soldiers. However, the benefits they can bring are enormous. Just
recently, PackBots made their first appearance and brought significant assistance to our
troops at a cave complex outside the village of Nazaraht, near the Pakistani border. During
the war in Iraq, the robots sent video back to the troops, sparing them the hazard of booby
traps and enemy fire. Robots are especially useful in this kind of urban warfare, which
involves peering around corners and clearing buildings. In the future, it would be beneficial
to have “throw bots,” which soldiers could toss over a wall or through a window. An even more
powerful capability that experts would like to see realized is robots that work together.
Autonomous Weapons and Aircraft
Another important application of artificial intelligence lies in autonomous weapons and
aircraft. These machines are defined as ones that can function at some level without the
supervision of a human. Additionally, they should be able to identify possible enemy threats
and determine what to do once a target is identified. If it chooses to strike the opponent,
the aircraft or weapon should be able to aim, fire, and reload on its own. This is beneficial
because drone planes can fly in combat areas too dangerous for humans.
Developments in these autonomous aircraft are proving useful even today. The aircraft act
as spy planes that circle an area in set patterns for long periods of time. For example,
long-endurance unmanned aircraft called RQ-1 Predators were flown over coalition troops
racing toward Baghdad, quickly providing military ground commanders with key information on
what lay ahead on the battlefield. The Predator is equipped with day and night television
cameras and radar that allow the aircraft to “see” through smoke, clouds, and haze while
capturing events as they happen. The sensor operator acts as the plane’s “co-pilot,”
controlling the cameras and radar. A Predator pilot, Capt. Traz Trzaskoma, says these planes
are helping make the ground war a success by minimizing coalition troop losses.
A future improvement of this technology would be to have the plane react to
circumstances strictly on its own. Without a remote control, this task becomes very difficult.
It is easy to give the plane set coordinates or instructions; the difficulty lies in making
the plane react effectively to an unexpected situation, the way a human
does. Even the Predator is still flown by humans, at state-of-the-art ground control
stations miles away from the battlefield.
Tracking Terrorist Activity using Data Mining and Machine Learning
Research programs are currently underway to track terrorist activity around the globe. A
technique called data mining is used to identify terrorism suspects by their suspicious
purchasing patterns, phone use, travel arrangements, and so on, facilitated by the
coordination of multiple government and commercial databases. Once a specific database is
compiled, a systematic information scan is performed to identify the characteristics that
best describe suspects or suspicious activity. Once those activities and people are targeted,
a “watch” list is compiled, and that information is sent to parties such as airline ticketing
agents, financial institutions where large cash transactions are made, or flight schools
where individuals enroll for flight training. Data mining can also be used to strengthen
border controls for people traveling in and out of the country. This and many other AI
research techniques have proven useful to the military and therefore should be an investment
for our future.
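The watch-list step described above can be sketched as a simple indicator-scoring pass. The indicators and thresholds below are invented for illustration, not real screening criteria:

```python
def build_watch_list(records, indicators, threshold=2):
    """Score each record by how many suspicious indicators it matches,
    and list those whose score meets the threshold."""
    watch = []
    for r in records:
        score = sum(1 for check in indicators if check(r))
        if score >= threshold:
            watch.append((r["name"], score))
    return watch

# Toy indicators (hypothetical predicates over a record):
indicators = [
    lambda r: r["cash_purchases"] > 10_000,
    lambda r: r["one_way_tickets"] >= 3,
    lambda r: r["flight_school"] and not r["pilot_license"],
]
records = [
    {"name": "A", "cash_purchases": 500, "one_way_tickets": 0,
     "flight_school": False, "pilot_license": False},
    {"name": "B", "cash_purchases": 15_000, "one_way_tickets": 4,
     "flight_school": True, "pilot_license": False},
]
print(build_watch_list(records, indicators))  # → [('B', 3)]
```

Real systems learn which characteristics matter from the compiled databases rather than hand-coding them, but the output, a ranked list forwarded to front-line agencies, follows the same shape.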
Cost and Funding Information
According to a report from Business Communications Company, Incorporated, the AI
market is growing dramatically at an average annual rate of 12.2% and is expected to reach
$21 billion globally by the year 2007. Funding for artificial intelligence should therefore
remain available to researchers because of its importance to the technological future.
Table 1 shows recent AI spending and the projected spending in the future:
Global AI Market, through 2007 ($ Millions)

    Year      2000      2001       2002       2007
    Market  4,051.3   5,530.5   11,902.3   21,174.5

    AAGR, 2002-2007: 12.2%

Table 1 - Source: BCC, Inc.
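The 12.2% figure is the average annual growth rate implied by the table's 2002 and 2007 numbers, as a quick arithmetic check confirms:

```python
# AAGR: the constant yearly rate r that grows the 2002 market to the
# 2007 figure over five years, i.e. v2007 = v2002 * (1 + r) ** 5.
v2002 = 11_902.3
v2007 = 21_174.5
r = (v2007 / v2002) ** (1 / 5) - 1
print(f"{r:.1%}")  # 12.2%
```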
Expert systems will see strong growth, amassing a $4.8 billion market by the year 2007.
Belief networks, which will rise to $2.2 billion, and neural networks ($4.5 billion) will
join expert systems among the fastest-growing AI technologies in the market, as they too
will complement existing applications for enhancement.
As one can see, artificial intelligence is a wise investment with tremendous payoffs.
Defense research in AI is one of the fastest-growing technologies and should be supported
with proper funding. As shown through the proposed AI military research projects (robots in
urban warfare, autonomous weapons and aircraft, and tracking terrorist activity using data
mining), defense technology is important to the future security of the nation; therefore we
recommend a $100 million budget for AI military research projects. Because data mining is the
most promising and beneficial of the projects with today’s technology, we find that the
data-mining project for tracking terrorist activity would be money best spent, so $40 million
will be allocated to that research. Robotic soldiers and autonomous weapons and aircraft are
still far-off ideas, but research must continually advance for them to become a reality; we
are allocating $20 million for each of these projects. The remaining $20 million shall be
used for various other projects that may prove to be of importance.