AP Capstone Seminar: Summer Assignment

Name ______________________________________
Ms. Manning
AP Capstone Seminar: Summer Assignment 2017
I would like to take the opportunity to welcome all of you to AP Seminar for the 2017-2018 school year. I
recognize that the summer offers all of you an opportunity to unwind and relax, and I encourage you to do just that.
However, in order to prepare yourselves for the upcoming course, I ask that you complete the following summer assignment.
The AP Capstone Program will provide you with instruction on how to research, write, and present college-level work.
This program is founded on a concept known as QUEST:
Question and Explore
Understand and Analyze Arguments
Evaluate Multiple Perspectives
Synthesize Ideas
Team, Transform, and Transmit
Using this framework, we will attempt to look at a variety of topics through a number of different “lenses.” Some of the
possible themes that we may consider throughout the year are listed below:
Scoring: Total Points Possible = 100
Scoring will be based on completeness, effort, and punctuality. Please do not get overwhelmed by the assignment or the
sources within it. Just do your best. I am grading you based on completing the assignment to the best of your ability and
do not expect that everyone will get everything completely correct.
Part 1: 5 pts
Part 2: 95 pts
Source A- 20 pts
Source B- 10 pts
Source C- 20 pts
Source D- 10 pts
Source E and F- 35 pts
Part 1: Due Tuesday, August 1
E-mail me to introduce yourself and acknowledge that you have reviewed your summer assignment. My email address is:
[email protected]. The email should include the following:
o In the subject line:
  - Your first and last name – AP Seminar
o In the body:
  - “I have reviewed the summer assignment and am aware of my responsibilities” or “I have reviewed the summer assignment and have the following questions…”
  - Why did you decide to take AP Seminar? What are your short- and long-term goals, and how will this course help you achieve them?
  - What is your biggest concern about taking AP Seminar? What can I do to help YOU succeed?
  - What themes from the first page interest you most? Choose two and explain why they are of interest to you.
Part 2: Due Friday, September 8
This summer assignment is designed to introduce you to some of the skills that you will be utilizing throughout the year,
focusing mainly on identifying claims and supporting evidence, and analyzing lines of reasoning.
Below you will find various articles and works of art all surrounding a common theme. The instructions will be given with
each source.
Source A
Objective: Identify and analyze the author’s line of reasoning.
Instructions:
1) Number the paragraphs
2) Identify the author’s main idea or thesis.
3) Analyze the author’s line of reasoning: the expression, organization, and sequence of ideas the author uses to present his argument.
Your response for this portion should be typed (double-spaced) and attached.
Will AI Robots Turn Humans Into Pets? Technology industry leaders came to the U.N. to discuss the future of
artificial intelligence, which may revolutionize everything--or not.
Kevin Maney Newsweek 168.13 (Apr. 14, 2017)
In a room at the United Nations overlooking New York's East River, at a table as long as a tennis court, around 70 of the
best minds in artificial intelligence recently ate a sea bass dinner and could not, for the life of them, agree on the coming
impact of AI and robots.
This is perhaps the most vexing challenge of AI. There's a great deal of agreement around the notion that humans are
creating a genie unlike any that's poofed out of a bottle so far--yet no consensus on what that genie will ultimately do for
us. Or to us.
Will AI robots gobble all our jobs and render us their pets? Tesla CEO Elon Musk, perhaps the most admired entrepreneur
of the decade, thinks so. He just announced his new company, Neuralink, which will explore adding AI-programmed
chips to human brains so people don't become little more than pesky annoyances to new generations of thinking machines.
A few days before that U.N. meeting, Treasury Secretary Steven Mnuchin waved away worries that AI-driven robots will
steal our work and pride. "It's not even on our radar screen," he told the press. When asked when we'll feel the intellectual
heat from robots, he answered: "Fifty to 100 more years."
At the U.N. forum, organized by AI investor Mark Minevich to generate discussions that might help world leaders plan
for AI, Chetan Dube, CEO of IPsoft, stood and said AI will have 10 times the impact of any technology in history in one-fifth the time. He threw around figures in the hundreds of trillions of dollars when talking about AI's effect on the global
economy. The gathered AI chiefs from companies such as Facebook, Google, IBM, Airbnb and Samsung nodded their
heads.
Is such lightning change good? Who knows? Even IPsoft's stated mission sounds like a double-edged ax. The company's
website says it wants "to power the world with intelligent systems, eliminate routine work and free human talent to focus
on creating value through innovation." That no doubt sounds awesome to a CEO. To a huge chunk of the population,
though, it could come across as happy-speak for a pink slip. Apparently, if you're getting paid a regular wage to do
"routine work," you're about to get "freed" from that tedious job of yours, and then you had better "innovate" if you want
to, you know, "eat."
The folks from IBM talked about how its Watson AI will help doctors sift through much more information when
diagnosing patients, and it will constantly learn from all the data, so its thinking will improve. But won't the AI start to do
a better job than doctors and make the humans unnecessary? No, of course not, the IBMers said. The AI will improve the
doctors, so they can help us all be healthier.
Hedge fund guys said robot trading systems will make better investing decisions faster, improving returns. They didn't
seem too worried about their careers, even though some hedge funds guided solely by AI are already outperforming
human hedge fund managers. Yann LeCun, Facebook's AI chief and one of the most respected AI practitioners, says AI
will be used to discover and help eliminate biases and bring people together--yet for now, AI gets accused of uncovering
our individual biases and serving up content that confirms and hardens them, thereby making half the country mad at the
other half.
Grete Faremo, executive director of the United Nations Office for Project Services, beseeched technologists to slow down
a bit and make sure the stuff they're inventing solves the world's great problems without making new ones. But another
speaker, Ullas Naik of Streamlined Ventures, hinted at how quantum computing will soon greatly speed up development
of thinking machines. He believes quantum computing is closer than most people think, and in case you don't know, a
quantum computer will be so freakishly powerful, it will make any computer today seem as old-fashioned as an Amish
buggy.
Put all this together, and AI might be the most wonderful technology we've yet created, helping humans get to a higher
plane--if it doesn't turn against humans, Terminator style. Though most likely, it will land somewhere in between.
Here's a question worth considering: Is this AI tsunami really that different from the changes we've already weathered?
Every generation has felt that technology was changing too much too fast. It's not always possible to calibrate what we're
going through while we're going through it.
In January 1965, Newsweek ran a cover story titled "The Challenge of Automation." It talked about automation killing
jobs. In those days, "automation" often meant electro-mechanical contraptions on the order of your home dishwasher, or
in some cases the era's newfangled machines called computers. "In New York City alone," the story said, "because of
automatic elevators, there are 5,000 fewer elevator operators than there were in 1960." Tragic in the day, maybe, but
somehow society has managed without those elevator operators.
That 1965 story asked what effect the elimination of jobs would have on society. "Social thinkers also speak of man's
'need to work' for his own well-being, and some even suggest that uncertainty over jobs can lead to more illness, real or
imagined." Sounds like the same discussion we're having today about paying everyone a universal basic income so we can
get by in a post-job economy, and whether we'd go nuts without the sense of purpose work provides.
Just like now, back then no one knew how automation was going to turn out. "If America can adjust to this change, it will
indeed become a place where the livin' is easy--with abundance for all and such space-age gadgetry as portable
translators--and home phone-computer tie-ins that allow a housewife to shop, pay bills and bank without ever leaving her
home." The experts of the day got the technology right, but whiffed on the "livin' is easy" part.
So for every pronouncement that AI is different--that the changes it will drive are coming at us faster and harder than
anything in history--it's also worth wondering if we're seeing a rerun. For all we know, 50 years ago a group of
technologists might have got together at the U.N. and expressed pretty much the same hopes and concerns as the AI
group.
Except that was 1965. They would've talked over tuna casserole. At least the sea bass served at the U.N. confab represents
progress.
Source B
Objective: Identify the central claim and give supporting evidence.
Instructions: Read the lyrics below and answer the questions that follow.
Mr. Roboto by Styx, 1983
Domo Arigato, Mr. Roboto
Domo Arigato, Mr. Roboto
Mata ahoo Hima de
Domo Arigato, Mr. Roboto
Himitsu wo Shiri tai
You're wondering who I am
(Secret, secret, I've got a secret)
Machine or mannequin
(Secret, secret, I've got a secret)
With parts made in Japan
(Secret, secret, I've got a secret)
I am the modern man
I've got a secret, I've been hiding under my skin
My heart is human, my blood is boiling
My brain I.B.M., so if you see me
Acting strangely, don't be surprised
I'm just a man who needed someone
And somewhere to hide
To keep me alive, just keep me alive
Somewhere to hide to keep me alive
I'm not a robot without emotions
I'm not what you see
I've come to help you
With your problems, so we can be free
I'm not a hero, I'm not a savior
Forget what you know
I'm just a man whose circumstances
Went beyond his control
Beyond my control, we all need control
I need control, we all need control
I am the modern man
(Secret, secret I've got a secret)
Who hides behind a mask
(Secret, secret, I've got a secret)
So no one else can see
(Secret, secret, I've got a secret)
My true identity
Domo Arigato, Mr. Roboto
Domo, Domo
Domo Arigato, Mr. Roboto
Domo, Domo
Domo Arigato, Mr. Roboto
Domo Arigato, Mr. Roboto
Domo Arigato, Mr. Roboto
Domo Arigato, Mr. Roboto
Domo Arigato, Mr. Roboto
Thank you very much, Mr. Roboto
For doing the jobs that nobody wants to
And thank you very much, Mr. Roboto
For helping me escape
Just when I needed to
Thank you, thank you, thank you
I want to thank you
Please, thank you, oh
The problem's plain to see
Too much technology
Machines to save our lives
Machines, de-humanize
The time has come at last
(Secret, secret, I've got a secret)
To throw away this mask
(Secret, secret, I've got a secret)
Now everyone can see
(Secret, secret, I've got a secret)
My true identity, I'm Kilroy, Kilroy, Kilroy, Kilroy
What is the central claim of the song?
What lyrics can be used to support the central claim?
Source C
Objective: Identify and analyze the author’s line of reasoning.
Instructions:
1) Number the paragraphs.
2) Identify the author’s main idea or thesis.
3) Analyze the author’s line of reasoning: the expression, organization, and sequence of ideas the author uses to present his argument.
Your response for this portion should be typed (double-spaced) and attached.
GOOGLE'S COMPUTERS ARE CREATING SONGS. MAKING MUSIC MAY NEVER BE THE SAME
Matt McFarland States News Service (June 6, 2016)
Google has launched a project to use artificial intelligence to create compelling art and music, offering a reminder of how
technology is rapidly changing what it means to be a musician, and what makes us distinctly human.
Google's Project Magenta, announced Wednesday, aims to push the state of the art in machine intelligence that's used to
generate music and art.
"We don't know what artists and musicians will do with these new tools, but we're excited to find out," said Douglas Eck,
the project's leader in a blog post. Just as Louis Daguerre and George Eastman did not predict what Annie Leibovitz or
Richard Avedon would do, "surely Rickenbacker and Gibson didn't have Jimi Hendrix or St. Vincent in mind."
Google has already released a song demonstrating the technology. The song was created with a neural network -- a
computer system loosely modeled on the human brain -- which was fed recordings of a lot of songs. With exposure to tons
of examples, the neural network soon begins to realize which note should come next in a sequence. Eventually the neural
network learns enough to generate entire songs of its own.
The project has just begun so the only available tools now are for musicians with machine-learning expertise. Google
hopes to produce -- along with contributors from outside Google -- more tools that will be useful to a broad group,
including artists with minimal technical expertise.
Efforts to use computers to make music stretch back decades. But experts say what's unique here is the extent of Google's
computing power and its decision to share its tools with everyone, which may accelerate innovation.
"It's a potential game-changer because so many academics and developers in companies can get their hands on this library
and can start to create songs and see what they can do," said Gil Weinberg, the director of Georgia Tech's center for music
technology.
David Cope, a retired professor at the University of California-Santa Cruz and pioneer in computer generated music,
believes it's inevitable that one day the best composers will use artificial intelligence to aid their work.
"It's going to rampage through the film music industry," Cope said. "It's going to happen just as cars happened and we
didn't have the horse and buggy anymore." He's confident in this given the exponential growth of computing power, which
for decades has doubled about every two years.
With digital tools improving so quickly, it's become difficult for musicians to stay on the cutting edge while also
mastering their instrument of choice.
"The violinist uses the same instrument for a whole career potentially, and they develop the kind of virtuosity on that
instrument because they have that intimate relationship with it day after day for years and years," said Peter Swendsen, an
Oberlin professor of computer music and digital arts. "Software comes and goes in weeks sometimes."
Amper Music is a new start-up that like Google is interested in harnessing the latest software to create music. Amper
uses artificial intelligence to create original songs that match the emotions a video producer wants to convey in their work.
Creating the music takes only seconds.
"If you take the sum of everything that has affected music historically and add them together, in 20 or 30 years I think
you'd look back and say, 'Wow, music AI rivals all of that,'" said its co-founder, Drew Silverstein.
For now, the potential of music made with artificial intelligence is still largely unrealized. Silverstein is only beginning to
tap the entertainment market in Los Angeles. The song Google's Magenta project released recently demonstrates what it's
currently capable of, but also how much work lies ahead.
The machine-generated melody was primed with just four notes: C, C, G, G, by the Google Brain Team. (Project Magenta/Google)
"It is indeed very basic," said Swendsen, the Oberlin professor, after listening to the song. "That's not to say that the
system they are using doesn't hold lots of promise or isn't working on a much deeper level than a simple random
generator."
The emerging power of this technology is also a wake-up call for what makes us really human.
"A lot of the uniqueness that we like to ascribe to ourselves becomes threatened, said George Lewis, a professor of
American music at Columbia University. "People have to get the idea out of their head that music comes from great
individuals. It doesn't, it comes from communities, it comes from societies. It develops over many years and computers
become a part of societies."
As machines have become more a part of our lives, we can count on them to share a hand in the artistic process. For the
75-year-old Cope, this is a great thing, and nothing to be afraid of.
"The computer is just a really really high class shovel," Cope said. "I love this new stuff and want it to come fast enough
so I'm not dead when it happens."
Source D
Instructions: Answer the questions about the piece of art.
From Paul Ford, MIT Technology Review, Feb. 11, 2015
Analyze the painting above. What do you think is the artist’s main idea or claim?
What evidence can be seen in the work of art that supports the artist’s main idea?
Source E and F
Instructions:
1) Summarize each source, giving the authors’ main idea or thesis and analyzing the evidence and reasoning in each.
2) Then, write your thoughts about these two sources in a “thought piece” of between 200 and 250 words. A
thought piece is like an extended paragraph or two in which you give your thoughts on the issue but use evidence
from the sources to support your opinions. Please type this (double-spaced) and attach.
Source E
The future of artificial intelligence: benevolent or malevolent?
George Michael, Skeptic 20.1 (Winter 2015): p. 57
In his 2013 State of the Union Address, President Barack Obama announced federal funding for an ambitious scientific
endeavor christened the BRAIN (the Brain Research Through Advancing Innovative Neurotechnologies) Initiative. The
$3 billion project seeks to unlock the secrets of the brain by mapping its electrical pathways. That same year, the
European Union unveiled its Human Brain Project, which will use the world's largest computers to create a copy of the
human brain made of transistors and metal. Generous funding to the tune of 1.19 billion euros (about $1.6 billion) has
been earmarked for this effort.
These two ambitious studies could create a windfall by generating new discoveries for treating incurable diseases and
spawning new industries. Concomitant with these projects are exciting new developments in the field of artificial
intelligence (AI)--that is, computer engineering efforts to develop machine-based intelligence that can mimic the human
mind. Concrete progress toward this goal was realized in June of 2014, when it was announced that a computer had just
passed the "Turing Test"--the ability to exhibit intelligent behavior indistinguishable from that of a human. At a test
competition organized by Kevin Warwick, a so-called "chatterbot" with the personality of a 13-year-old boy convinced 33 percent
of the judges that it was human. (1) Two recent books examine trends in these areas of research and their implications.
In The Future of the Mind, Michio Kaku, a professor of theoretical physics at the City College and City University of New
York, draws upon numerous fields, including biotechnology, psychology, evolutionary theory, robotics, physics, and
futurism, to survey what lies ahead for the human race on the cusp of what could be a quantum leap in intelligence. As
Kaku explains, the introduction of MRI machines could do for brain research what the telescope did for astronomy. Just as
humankind learned more about the cosmos in the 15 years after the invention of the telescope than in all of previous
history, likewise advanced brain scans in the mid-1990s and 2000s have transformed neuroscience. Physicists played an
important role in this endeavor as they were involved in the development of a plethora of new diagnostic instruments used
for brain scans, including magnetic resonance imaging (MRI), electroencephalography (EEG), computerized tomography (CAT), and positron emission tomography (PET).
Getting to our current level of human intelligence involved many evolutionary pathways. Previously in our evolution,
those humans who survived and thrived in the grasslands were those who were adept at tool making, which required
increasingly larger brains. The development of language was believed to have accelerated the rise of intelligence insofar
as it enhanced abstract thought and the ability to plan and organize society. With these new capabilities, humans could
join together to form hunting teams, which increased their likelihood of survival and passing on their genes. The increase
in intelligence and expressive capabilities led to the emergence of politics as humans formed factions to vie for control of
the tribe. What was essential to this progress was the ability to anticipate the future. Whereas animals create a model of
the world in relation to space and one another, Kaku develops a "space-time theory of consciousness" for human
psychology implying that humans, unlike other animals, create a model of the world in relation to time, both forward and
backward. He argues that humans are alone in the animal kingdom in understanding the concept of tomorrow. Thus the
human brain can be characterized as an "anticipation machine."
Kaku employs the metaphor of a CEO for how the human brain functions, in which numerous parties in a corporation
clamor for the attention of the chief executive officer. The notion of a singular "I" making all of our decisions
continuously is an illusion created by our subconscious minds, says Kaku; instead, consciousness amounts to a maelstrom
of events distributed throughout our brains. When one competing process trumps the others, the brain rationalizes the
outcome after the fact and concocts the impression that a single "self" decided the outcome.
Genetic engineering might someday be used to enhance human intelligence. By manipulating only a handful of genes, it
could be possible to increase our I.Q. Brain research suggests that a series of genes acting together in complex ways is
responsible for the human intellect. There is, however, an upper ceiling on how smart we could become, based on the laws of
physics; as Kaku notes, nature has limited the growth and development of our brains. For a variety of reasons, it is not
physically feasible to increase human brain size and add to the length of neurons. Thus, he says, any further enhancement
of intelligence must come from external means.
In the field of medicine, brain research could increase longevity and enhance the quality of life for many patients.
Engineers are currently working to create a "robo-doc," which could screen people and give basic medical advice with 99
percent accuracy almost for free. Such a device could do much to bring down accelerating healthcare costs. Through the
fusion of robotics and brain research, paralyzed patients could one day use telekinesis to move artificial limbs. Complete
exoskeletons would enable paraplegics to walk about and function like whole people. Taking this principle a step further,
people could control androids from pods and live their lives through attractive alter egos in the style of the 2009 movie
Surrogates starring Bruce Willis. Perhaps AI may even allow people to one day escape their bodies completely and
transition to a post-biological existence.
Funding for artificial intelligence has gone through cycles of growth and retrenchment. Initial optimism is often followed
by frustration as scientists realize the daunting task of reverse-engineering the brain. The two most fundamental
challenges confronting AI are replicating pattern recognition and common sense. Our subconscious minds perform
trillions of calculations when carrying out pattern recognition exercises, yet the process seems effortless. Duplicating this
process in a computer is a tall order. In point of fact, the digital computer is not really a good analog of the human brain as
the latter operates as a highly sophisticated neural network. Unlike a computer, the human mind has no fixed architecture;
instead, collections of neurons constantly rewire and reinforce themselves after learning a task. What is more, we now
know that most human thought actually takes place in the subconscious, which still remains something of a black
box in brain research. The conscious part of our mind represents only a tiny part of our computations.
Kaku asks an important question: How should we deal with robot consciousness that could decide the future of the human
race? An artificially intelligent entity programmed for self-preservation would stop at nothing to prevent someone from
pulling the plug. Because of their superior ability to anticipate the future, "robots could plot the outcomes of many
scenarios to find the best way to overthrow humanity." This ability could lead the way for a real-life Terminator scenario.
In fact, Predator drones may soon be equipped with face-recognition technology and permission-to-fire capabilities if they are
reasonably confident of the identity of their targets. Furthermore, inasmuch as robots are likely to reflect the particular ethics
and moral values of their creators, Kaku sees the potential for conflict between them, a scenario perhaps not unlike that
depicted in The Transformers movie series. Finally, Kaku speculates on what form advanced extraterrestrial intelligence
might take. Assuming that once intelligent life emerges it will continue to advance, then our first contact with superior life
outside of Earth could be with intelligent super computer entities that have long abandoned their biological bodies in
exchange for more efficient and durable computational bodies.
Whereas Kaku's tone on AI is mostly optimistic, James Barrat's prognosis is dystopian to the point where our very
existence may be threatened by AI. In Our Final Invention, the documentary filmmaker warns about the looming threat of
smart machines. For his research he interviewed a number of leading scientists in the fields of AI and robotics. Although
all of his subjects were confident that someday all important decisions governing the lives of humans would be made by
machines, or humans whose intelligence is augmented by machines, they were uncertain when this epoch would be
reached and what its implications might be.
Much of Barrat's book is devoted to countering the optimism of the so-called "singularitarians." Vernor Vinge first coined
the term singularity in 1993 in an address to NASA called "The Coming Technological Singularity." The term was then
popularized by Ray Kurzweil, a noted inventor, entrepreneur, and futurist who predicted that by the year 2045 we would
reach the Singularity--"a future period during which the pace of technological change will be so rapid, its impact so deep,
that human life will be irreversibly transformed." As he explained in his book, The Singularity is Near, people will begin
the process of leaving their biological bodies and melding with computers. He predicts that by the end of the 21st century
the non-biological portion of our intelligence will be trillions of trillions of times more powerful than unaided human
intelligence. An unabashed technological optimist, Kurzweil believes that the singularity will herald a new era in human
history in which problems such as hunger, disease, and even mortality will be solved. Based on the notion of accelerating
returns, if humans survive this milestone, the 21st century should witness technological progress equivalent to 200,000
years. Inasmuch as technological evolution tends not to occur in linear trends, but rather, exponential trends, scientific
development will advance so rapidly that the fabric of history will be torn. Singularitarians anticipate a future in which AI
will allow us to realize our utmost potential.
The singularitarian movement has strong religious overtones, and Barrat argues that its outlook is overly optimistic. In contrast to
Kurzweil, Barrat fears that humans will eventually be left out of this historical process and relegated to the dustbin of
evolution. Holding extreme misgivings about artificial intelligence, he warns that the singularitarians are naive about the
peril posed by self-aware machines. The more sanguine scientists believe that this process will be friendly and
collaborative, more akin to a handover than a takeover; however, Barrat argues that such an assumption is misguided.
Instead, he avers that the process will be unpredictable and inscrutable. He fears that we could lose control over AI and
the results could be catastrophic. Hence, the ultra-intelligent machine could be our final invention.
As Barrat explains, trying to fathom the values of an entity a million times more intelligent than humans is beyond our
comprehension. Simply put, the machine will not have human-like motives because it will not have a human psyche.
Though AI may harbor no ill will toward humanity, the latter could get in its way and be deemed expendable. He finds it
irrational to assume that an entity far more intelligent than we are and which did not evolve in an ecosystem in which
empathy is rewarded and passed on to subsequent generations, will necessarily want to protect us. As he argues:
You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn more about sports injuries? We don't hate mice or monkeys, yet we treat them cruelly. Superintelligent AI wouldn't have to hate us to destroy us.
As Barrat notes, the way we treat our closest relatives--the great apes--is not reassuring for those chimpanzees,
orangutans, and gorillas that are not already bush meat, zoo inmates, or show biz clowns--the rest are either endangered or
living on borrowed time.
Even today, computers are responsible for important decisions that affect the economy. In the realm of finance, up to 70
percent of Wall Street's equity trades are now made by computerized high-frequency trading systems--supercomputers
that use algorithms to take advantage of split-second opportunities in price fluctuations of stocks. In recent years, Wall
Street has been using agent-based financial modeling that simulates the entire stock market, and even the entire economy,
to improve forecasting. Barrat fears that the intelligence explosion in the computational finance domain will be opaque for
at least four reasons. First, it will probably take place in various "black box" artificial intelligence techniques closed to
outsiders. Second, the high-bandwidth, millisecond-fast transmissions will take place faster than humans can react to them
as witnessed during the so-called Flash Crash on May 6 of 2010 when the Dow Jones Industrial Average plummeted by
1,000 points within minutes. Third, the system is extremely complex and thus beyond the understanding of most financial
analysts. And finally, any AI system implemented on Wall Street would more than likely be treated as proprietary
information and kept secret as long as it makes money for its creators. In the near future, it is reasonable to assume that
computer technology will have the power to end lives. As Barrat points out, semi-autonomous robotic drones now kill
dozens of people each year on the battlefield.
Nefarious forms of quasi-artificial intelligence already have befallen us. For example, "botnets" that hijack infected
computers (unbeknownst to their users) and launch DDOS (distributed denial of service) attacks are designed to crash
and/or jam targeted networks. For Barrat, it would seem to logically follow that as AI develops, it will be used for
cybercrime. Ominously, cyber-sabotage could be directed at critical infrastructure. If, for instance, the power grid were
taken down it would have catastrophic results. As an example of the great peril posed by semi-autonomous computer
programs Barrat cites the case of a joint U.S.-Israeli cyber campaign against Iran dubbed "Olympic Games," which
unleashed the Stuxnet computer virus. Stuxnet was designed to destroy machinery, specifically the centrifuges in the Natanz
nuclear enrichment facility in Iran. Highly effective, the worm crippled between 1,000 and 2,000 centrifuges and set Iran's
nuclear weapons program back two years. But as Barrat warns, malware of this sort does not just simply go away;
thousands of copies of the virus escaped the Natanz plant and infected other PCs around the world. Barrat warns that such
cyber operations are terribly short-sighted and carry a high risk of blowback. As he explains, now that Stuxnet is out in the
public domain, it has dramatically lowered the cost of a potential terrorist attack on the U.S. electrical grid to about a
million dollars.
Perhaps in the not-so-distant future, computers will be autonomous agents making decisions without guidance from
human programmers. Moreover, the transition from artificial general intelligence to artificial super intelligence could
come swiftly and without forewarning, thus we will not have adequate time to prepare for it. Once it has access to the
Internet, an AI entity could find the fulfillment of all its needs, not unlike the scenario depicted in the 2014 film
Transcendence, in which Johnny Depp starred as the mind behind a supercomputer.
To be safe, Barrat advises that AI should be developed with something akin to consciousness and human understanding
built in. But even this feature could be dangerous. After all, a machine could pretend to think like a human and produce
human-like answers while preparing to implement its own agenda.
Kurzweil has argued that one way to limit the potentially dangerous aspects of artificial intelligence is to pair it with
humans through intelligence augmentation. As AI becomes intimately embedded in our bodies and brains, it will begin to
reflect our values. But Barrat counters that super-intelligence could be a violence multiplier, turning grudges into killings
and disagreements into disasters, not unlike how a gun can turn a fistfight into murder. Today, much of the cutting edge
AI research is being undertaken by the Pentagon. The Defense Advanced Research Projects Agency (DARPA) has been
investigating ways to implement artificial intelligence to gain an advantage on the battlefield. Put simply, intelligence
augmentation is no moral fail-safe.
Invoking the Precautionary Principle, Barrat counsels that if the consequences of an action are unknown but judged by
some scientists to carry a risk of being catastrophic, then it is better not to carry out the action. He concedes, however, that
relinquishing the pursuit of artificial general intelligence is no longer a viable option. To do otherwise would cede the
opportunity to rogue nations and gangsters who might not be as scrupulous in engineering safeguards against malevolent
AI. There is a decisive first-mover advantage in AI development in the sense that whoever first attains it will create the
conditions necessary for an intelligence explosion. And they can pursue this goal not necessarily for malevolent reasons,
but because they will anticipate that their chief competitors, whether corporate or military, will be doing the same.
Perhaps the best course of action would be to incrementally integrate components of artificial intelligence with the human
brain. The next step in intelligence augmentation would be to put all of the enhancements contained in a smart phone
inside of us and connect it to our brains. A human along with Google is already an example of artificial super-intelligence.
Inasmuch as AI is developed by humans, Kurzweil argues that it will reflect our values. He maintains that future machines
will still be human even if they are not biological. To be safe, Barrat recommends applying a cluster of defenses that could
mitigate the harmful consequences of malevolent AI, including programming in human features, such as ethics and
emotions. These qualities will probably have to be implemented in stages because of the complexity involved, but by
doing so, we could derive enormous benefits from machine-based intelligence without being consigned to evolutionary
obsolescence.
Source F
How to make a mind: can non-biological brains have real minds of their own?
Ray Kurzweil, The Futurist 47.2 (March-April 2013): p. 14
The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical
thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement
with a symbol, and then using that symbol as an element in a yet more elaborate configuration.
This capability takes place in a brain structure called the neocortex, which in humans has achieved a threshold of
sophistication and capacity such that we are able to call these patterns ideas. We are capable of building ideas that are ever
more complex. We call this vast array of recursively linked ideas knowledge. Only Homo sapiens have a knowledge base
that itself evolves, grows exponentially, and is passed down from one generation to another.
We are now in a position to speed up the learning process by a factor of thousands or millions once again by migrating
from biological to nonbiological intelligence. Once a digital neocortex learns a skill, it can transfer that know-how in
minutes or even seconds. Ultimately we will create an artificial neocortex that has the full range and flexibility of its
human counterpart.
Consider the benefits. Electronic circuits are millions of times faster than our biological circuits. At first we will have to
devote all of this speed increase to compensating for the relative lack of parallelism in our computers. Parallelism is what
gives our brains the ability to do so many different types of operations--walking, talking, reasoning--all at once, and
perform these tasks so seamlessly that we live our lives blissfully unaware that they are occurring at all. The digital
neocortex will be much faster than the biological variety and will only continue to increase in speed.
When we augment our own neocortex with a synthetic version, we won't have to worry about how much additional
neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we
use today. We have about 300 million pattern recognizers in our biological neocortex. That's as much as could be
squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about
80% of the available space. As soon as we start thinking in the cloud, there will be no natural limits--we will be able to
use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns
can provide at each point in time.
In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological
neocortex does. Once a single digital neocortex somewhere and at some time learns something, however, it can share that
knowledge with every other digital neocortex without delay. We can each have our own private neocortex extenders in the
cloud, just as we have our own private stores of personal data today.
Last but not least, we will be able to back up the digital portion of our intelligence. It is frightening to contemplate that
none of the information contained in our neocortex is backed up today. There is, of course, one way in which we do back
up some of the information in our brains: by writing it down. The ability to transfer at least some of our thinking to a
medium that can outlast our biological bodies was a huge step forward, but a great deal of data in our brains continues to
remain vulnerable.
The Next Chapter in Artificial Intelligence
Artificial intelligence is all around us. The simple act of connecting with someone via a text message, e-mail, or cellphone call uses intelligent algorithms to route the information. Almost every product we touch is originally designed in a
collaboration between human and artificial intelligence and then built in automated factories. If all the AI systems
decided to go on strike tomorrow, our civilization would be crippled: We couldn't get money from our bank, and indeed,
our money would disappear; communication, transportation, and manufacturing would all grind to a halt. Fortunately, our
intelligent machines are not yet intelligent enough to organize such a conspiracy.
What is new in AI today is the viscerally impressive nature of publicly available examples. For example, consider
Google's self-driving cars, which as of this writing have gone over 200,000 miles in cities and towns. This technology will
lead to significantly fewer crashes and increased capacity of roads, alleviate the requirement of humans to perform the
chore of driving, and bring many other benefits.
Driverless cars are actually already legal to operate on public roads in Nevada with some restrictions, although
widespread usage by the public throughout the world is not expected until late in this decade. Technology that
intelligently watches the road and warns the driver of impending dangers is already being installed in cars. One such
technology is based in part on the successful model of visual processing in the brain created by MIT's Tomaso Poggio.
Called MobilEye, it was developed by Amnon Shashua, a former postdoctoral student of Poggio's. It is capable of alerting
the driver to such dangers as an impending collision or a child running in front of the car and has recently been installed in
cars by such manufacturers as Volvo and BMW.
I will focus now on language technologies for several reasons: Not surprisingly, the hierarchical nature of language
closely mirrors the hierarchical nature of our thinking. Spoken language was our first technology, with written language as
the second. My own work in artificial intelligence has been heavily focused on language. Finally, mastering language is a
powerfully leveraged capability. Watson, the IBM computer that beat two former Jeopardy! champions in 2011, has
already read hundreds of millions of pages on the Web and mastered the knowledge contained in these documents.
Ultimately, machines will be able to master all of the knowledge on the Web--which is essentially all of the knowledge of
our human-machine civilization.
One does not need to be an AI expert to be moved by the performance of Watson on Jeopardy! Although I have a
reasonable understanding of the methodology used in a number of its key subsystems, that does not diminish my
emotional reaction to watching it--him?--perform. Even a perfect understanding of how all of its component systems work
would not help you to predict how Watson would actually react to a given situation. It contains hundreds of interacting
subsystems, and each of these is considering millions of competing hypotheses at the same time, so predicting the
outcome is impossible. Doing a thorough analysis--after the fact--of Watson's deliberations for a single three-second
query would take a human centuries.
One limitation of the Jeopardy! game is that the answers are generally brief: It does not, for example, pose questions of
the sort that ask contestants to name the five primary themes of A Tale of Two Cities. To the extent that it can find
documents that do discuss the themes of this novel, a suitably modified version of Watson should be able to respond to
this. Coming up with such themes on its own from just reading the book, and not essentially copying the thoughts (even
without the words) of other thinkers, is another matter. Doing so would constitute a higher-level task than Watson is
capable of today.
It is noteworthy that, although Watson's language skills are actually somewhat below those of an educated human, it was
able to defeat the two best Jeopardy! players in the world. It could accomplish this because it is able to combine its
language ability and knowledge understanding with the perfect recall and highly accurate memories that machines
possess. That is why we have already largely assigned our personal, social, and historical memories to them.
Wolfram|Alpha is one important system that demonstrates the strength of computing applied to organized knowledge.
Wolfram|Alpha is an answer engine (as opposed to a search engine) developed by British mathematician and scientist
Stephen Wolfram and his colleagues at Wolfram Research. For example, if you ask Wolfram|Alpha, "How many primes
are there under a million?" it will respond with "78,498." It did not look up the answer, it computed it, and following the
answer it provides the equations it used. If you attempted to get that answer using a conventional search engine, it would
direct you to links where you could find the algorithms required. You would then have to plug those formulas into a
system such as Mathematica, also developed by Wolfram, but this would obviously require a lot more work (and
understanding) than simply asking Alpha.
Indeed, Alpha consists of 15 million lines of Mathematica code. What Alpha is doing is literally computing the answer
from approximately 10 trillion bytes of data that has been carefully curated by the Wolfram Research staff. You can ask a
wide range of factual questions, such as, "What country has the highest GDP per person?" (Answer: Monaco, with
$212,000 per person in U.S. dollars), or "How old is Stephen Wolfram?" (he was born in 1959; the answer is 52 years, 9
months, 2 days on the day I am writing this). Alpha is used as part of Apple's Siri; if you ask Siri a factual question, it is
handed off to Alpha to handle. Alpha also handles some of the searches posed to Microsoft's Bing search engine.
Wolfram reported in a recent blog post that Alpha is now providing successful responses 90% of the time. He also reports
an exponential decrease in the failure rate, with a half-life of around 18 months. It is an impressive system, and uses
handcrafted methods and hand-checked data. It is a testament to why we created computers in the first place. As we
discover and compile scientific and mathematical methods, computers are far better than unaided human intelligence in
implementing them. Most of the known scientific methods have been encoded in Alpha.
In a private conversation I had with him, Wolfram estimated that self-organizing methods such as those used in Watson
typically achieve about an 80% accuracy when they are working well. Alpha, he pointed out, is achieving about a 90%
accuracy. Of course, there is self-selection in both of these accuracy numbers, in that users (such as myself) have learned
what kinds of questions Alpha is good at, and a similar factor applies to the self-organizing methods. Some 80% appears
to be a reasonable estimate of how accurate Watson is on Jeopardy! queries, but this was sufficient to defeat the best
humans.
It is my view that self-organizing methods such as I articulate as the pattern-recognition theory of mind, or PRTM, are
needed to understand the elaborate and often ambiguous hierarchies we encounter in real-world phenomena, including
human language. Ideally, a robustly intelligent system would combine hierarchical intelligence based on the PRTM
(which I contend is how the human brain works) with precise codification of scientific knowledge and data. That
essentially describes a human with a computer.
We will enhance both poles of intelligence in the years ahead. With regard to our biological intelligence, although our
neocortex has significant plasticity, its basic architecture is limited by its physical constraints. Putting additional neocortex
into our foreheads was an important evolutionary innovation, but we cannot now easily expand the size of our frontal
lobes by a factor of a thousand, or even by 10%. That is, we cannot do so biologically, but that is exactly what we will do
technologically.
Our digital brain will also accommodate substantial redundancy of each pattern, especially ones that occur frequently.
This allows for robust recognition of common patterns and is also one of the key methods to achieving invariant
recognition of different forms of a pattern. We will, however, need rules for how much redundancy to permit, as we don't
want to use up excessive amounts of memory on very common low-level patterns.
Educating Our Nonbiological Brain
A very important consideration is the education of a brain, whether a biological or a software one. A hierarchical pattern-recognition system (digital or biological) will only learn about two--preferably one--hierarchical levels at a time. To
bootstrap the system, I would start with previously trained hierarchical networks that have already learned their lessons in
recognizing human speech, printed characters, and natural-language structures.
Such a system would be capable of reading natural-language documents but would only be able to master approximately
one conceptual level at a time. Previously learned levels would provide a relatively stable basis to learn the next level. The
system can read the same documents over and over, gaining new conceptual levels with each subsequent reading, similar
to the way people reread and achieve a deeper understanding of texts. Billions of pages of material are available on the
Web. Wikipedia itself has about 4 million articles in the English version.
I would also provide a critical-thinking module, which would perform a continual background scan of all of the existing
patterns, reviewing their compatibility with the other patterns (ideas) in this software neocortex. We have no such facility
in our biological brains, which is why people can hold completely inconsistent thoughts with equanimity. Upon
identifying an inconsistent idea, the digital module would begin a search for a resolution, including its own cortical
structures as well as all of the vast literature available to it. A resolution might mean determining that one of the
inconsistent ideas is simply incorrect (if contraindicated by a preponderance of conflicting data). More constructively, it
would find an idea at a higher conceptual level that resolves the apparent contradiction by providing a perspective that
explains each idea. The system would add this resolution as a new pattern and link to the ideas that initially triggered the
search for the resolution. This critical thinking module would run as a continual background task. It would be very
beneficial if human brains did the same thing.
I would also provide a module that identifies open questions in every discipline. As another continual background task, it
would search for solutions to them in other disparate areas of knowledge. The knowledge in the neocortex consists of
deeply nested patterns of patterns and is therefore entirely metaphorical. We can use one pattern to provide a solution or
insight in an apparently disconnected field.
As an example, molecules in a gas move randomly with no apparent sense of direction. Despite this, virtually every
molecule in a gas in a beaker, given sufficient time, will leave the beaker. This provides a perspective on an important
question concerning the evolution of intelligence. Like molecules in a gas, evolutionary changes also move every which
way with no apparent direction. Yet, we nonetheless see a movement toward greater complexity and greater intelligence,
indeed to evolution's supreme achievement of evolving a neocortex capable of hierarchical thinking. So we are able to
gain an insight into how an apparently purposeless and directionless process can achieve an apparently purposeful result
in one field (biological evolution) by looking at another field (thermodynamics).
We should provide a means of stepping through multiple lists simultaneously to provide the equivalent of structured
thought. A list might be the statement of the constraints that a solution to a problem must satisfy. Each step can generate a
recursive search through the existing hierarchy of ideas or a search through available literature. The human brain appears
to be only able to handle four simultaneous lists at a time (without the aid of tools such as computers), but there is no
reason for an artificial neocortex to have such a limitation.
We will also want to enhance our artificial brains with the kind of intelligence that computers have always excelled in,
which is the ability to master vast databases accurately and implement known algorithms quickly and efficiently.
Wolfram|Alpha uniquely combines a great many known scientific methods and applies them to carefully collected data. This type
of system is also going to continue to improve, given Stephen Wolfram's observation of an exponential decline in error
rates.
Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains,
our goals are established by the pleasure and fear centers that we have inherited from the old brain. These primitive drives
were initially set by biological evolution to foster the survival of species, but the neocortex has enabled us to sublimate
them. Watson's goal was to respond to Jeopardy! queries. Another simply stated goal could be to pass the Turing test. To
do so, a digital brain would need a human narrative of its own fictional story so that it can pretend to be a biological
human. It would also have to dumb itself down considerably, for any system that displayed the knowledge of Watson, for
instance, would be quickly unmasked as nonbiological.
More interestingly, we could give our new brain a more ambitious goal, such as contributing to a better world. A goal
along these lines, of course, raises a lot of questions: Better for whom? Better in what way? For biological humans? For
all conscious beings? If that is the case, who or what is conscious?
As nonbiological brains become as capable as biological ones of effecting changes in the world--indeed, ultimately far
more capable than unenhanced biological ones--we will need to consider their moral education. A good place to start
would be with one old idea from our religious traditions: the golden rule.