Chapter questions

HONR 300/CMSC 491: Computation, Complexity, and Emergence
Spring 2016 – Chapter Questions
Last revised July 28, 2017
These questions are “food for thought” that you can use to guide your reading and
understanding as you work your way through the readings. They are not meant as a
study guide or as required questions for the class journal assignments, but you should
be able to answer (at least) these questions – and you may use them to structure your
journal entries, especially if you’re having trouble deciding what to write about. In
your journal entries, you should also feel free to suggest other reading questions that
you think would have been useful – I will add them to the question list the next time I
teach the class!
These are just the questions that I generated as I read through the textbook, to try to
enhance your thinking and understanding (and my own). Not all of them have a single
correct answer. For all I know, some of them don’t have answers at all! The point is
not to know the right answer to every question. The point is to use the questions to
help you think about the topics and readings in the class. You should also ask your
own questions, and you should be willing to speculate on possible answers – or
speculate on ways that you might be able to go about finding an answer, even if you’re
not sure what the answer is.
C1: Tue 1/26: First class – no reading!
C2: Thu 1/28: Complexity game: Mitchell 1, Flake Preface, Flake 1 (no reading
journal)
When is a group of organisms just a group, and when is it a “superorganism”?
What does Mitchell mean by the “coevolutionary relationships” between
search engines and the structure of the Web?
What should you do if you don’t understand an equation as you’re reading
the book?
What is reductionism? Why aren’t we all physicists?
What are some examples of agents and their interactions at different levels of
scale or abstraction?
Why does Flake say, “[R]eductionism fails when we try to use it in a reverse
direction” (p. 2)?
Explain the concepts of self-similarity, parallelism, self-organization, iteration,
recursion, and adaptation, and give examples of each.
What is the difference between adaptation and evolution?
C3: Tue 2/2: Algorithms: Exploring Computational Thinking; Wikiversity articles
(RJ1)
This article is part of a national (and international) effort in the computing
education community to increase awareness of computational thinking as a
paradigm for learning computer science, and more generally, as a paradigm
for problem solving that is relevant to many disciplines and aspects of
everyday life.
o Did you learn about computational thinking in high school or in your
previous college classes?
o How would you explain "computational thinking" to someone who
hasn't heard the term before?
o Can you think of some areas of your daily life (including but not
limited to your other classes) where computational thinking is
relevant or could be a useful way of characterizing a problem?
o Do you see analogies or connections between the four main
computational thinking techniques mentioned in the article
(decomposition, pattern recognition, pattern generalization, and
algorithm design) and the topics on complexity that we've discussed
(agents and interactions, emergence, self-similarity, parallelism,
recursion, adaptation)?
What are the main components of an algorithm? Can you think of types of
problems that couldn't be solved using these components?
Does the "bubble sort" algorithm reflect the procedure that you would follow
if you were sorting a large pile of objects (say, shelving library books or
alphabetizing student papers)?
Some people say that computer science teachers should never teach bubble
sort. Why do you think they would have this opinion? Do you agree with it?
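For concreteness, here is a minimal bubble sort sketch in Python (my own illustration, not from the readings):

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs,
    until a full sweep makes no swaps (the list is then sorted)."""
    items = list(items)          # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items
```

Note the nested sweeps: in the worst case this makes on the order of n^2 comparisons, which is one reason some teachers object to presenting it at all.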
C4: Thu 2/4: Lab: NetLogo / The Game of Life (no reading or reading journal)
C5: Tue 2/9: Number system basics: Flake 2 (RJ2)
Would Zeno’s paradox still seem paradoxical if Achilles were three times as
fast as the tortoise, instead of twice as fast? The book analyzes this problem
and shows that Achilles will catch up to the tortoise in exactly twice the
time it takes him to run the distance of the tortoise’s head start.
Figure 2.1 shows graphically that ∑_{i=1}^∞ (1/2)^i = 1. Can you create a similar
graphical depiction for ∑_{i=1}^∞ (1/3)^i? What is the result of this summation?
CMSC 491: You should be able to evaluate this type of summation in the general
case, and give an inductive proof that your answer is correct.
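If you want to check these summations numerically, here is a quick Python sketch (my own, not part of the readings):

```python
def geometric_partial_sum(ratio, n_terms):
    """Partial sum of sum_{i=1..n} ratio**i."""
    return sum(ratio ** i for i in range(1, n_terms + 1))

# sum_{i=1..inf} (1/2)^i converges to 1;
# sum_{i=1..inf} (1/3)^i converges to 1/2
# (in general, the sum is r/(1-r) for |r| < 1).
```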

Are the following sets finite, countably infinite, or uncountable? (1) The
integers between -∞ and +∞. (2) The real numbers between 0.1 and 0.2. (3)
The atoms in the universe. (4) The number of integers that have a finite
number of digits. (5) The number of strings (character sequences) that can
be created using only the characters “a” and “b.” (6) The number of prime
numbers. (7) The number of real numbers that are rational numbers. (8) The
number of real numbers that are not rational numbers. (9) The number of
different ways in which the atoms in the universe could be combined into
different groups.
C6: Thu 2/11: Fractals: Flake 5 (5.5 optional), Nova documentary (RJ3)
(Note: CMSC 491 students should generally be able to derive precise
mathematical answers to the questions in this chapter. HONR 300 students will
find it useful to try to analyze these questions precisely as well, but on the
midterm, will not be expected to provide mathematical answers to such
questions.)
What is self-similarity?
What would the result be if the Cantor set construction process was applied
to the [0, 1] line segment by removing the middle half of each line segment at
each iteration? The book shows that the Cantor set consists of exactly those
points that can be written in ternary (base 3) notation without using any 1s.
Can you come up with an analogous rule for the “middle half” Cantor set?
Does it make intuitive sense to you that the Cantor set has width zero but still
has an uncountably infinite number of points? (This is an opinion question,
not a factual question!)
Express the Koch curve’s length at iteration i as a mathematical summation.
(Hmm, I wrote this, but don’t find it to have an interesting or satisfying
answer…)
What is the length of the “middle half” Koch curve at iteration i if we use a
construction analogous to the “middle half” Cantor set?
What would happen if we applied a “middle half” construction to try to
generate a Peano curve?
Can you think of other real-world examples where the “coastline measuring”
phenomenon applies? (that is, in which measuring something precisely
depends on the “yardstick” used)
What is the fractional dimension of a “middle half” Cantor set? A “middle
half” Koch curve? A “middle half” Peano curve?
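For the dimension questions, the similarity dimension log(copies)/log(scaling factor) is the tool to reach for; a small Python helper (my own, not from the book):

```python
import math

def similarity_dimension(copies, scale):
    """Dimension of a self-similar fractal made of `copies` pieces,
    each scaled down by a factor of `scale`."""
    return math.log(copies) / math.log(scale)

# Standard Cantor set: 2 copies at 1/3 scale -> ~0.6309
# "Middle half" Cantor set: 2 copies at 1/4 scale -> exactly 0.5
# Koch curve: 4 copies at 1/3 scale -> ~1.2619
```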
Think of some fractal-like objects that you may have seen in nature. (Ferns,
mountains, coastlines…) What are the underlying processes that cause these
fractal objects to be created? Are they similar to the mathematical
constructions introduced in this chapter for the mathematical fractals, or are
they quite different?
C7: Tue 2/16: Lab: Fractals and L-Systems: Flake 6 (RJ4)
What intuition does Flake provide for why we see fractal curves in nature?
What does “L” stand for in “L-system?”
Why does one need to specify a depth value when applying an L-system like
the one in Table 6.1?
C8: Thu 2/18: Mandelbrot and Julia sets: Wikipedia page, Flake 8 (RJ5)
We skipped Chapter 3, so here is a very brief summary of RE and CO-RE sets:
o Recursively enumerable (RE) sets are sets of numbers that can be
recognized (i.e., for which a program exists that will halt and output
“yes” if it is given a member of the set as input). You can think of these
sets as corresponding to partially computable functions.
o CO-RE sets, or “complement recursively enumerable” sets, are sets of
the form “S’ = all numbers that are not in some RE set S” (so there is a
CO-RE set S’ for each RE set S). You can think of these sets as a special
kind of uncomputable function (i.e., one for which there exists a
program that will halt and output “no” if it is given a number that is
not in the set as input).
o Recursive sets are both RE and CO-RE: that is, they are fully
computable, in the sense that a program exists that will always halt for
any input, outputting “yes” if the input is a member of the set and “no”
if the input is not a member of the set.
o Finally, the really tricky type of sets, which the book calls
“algorithmically indescribable objects,” are all sets that are neither RE
nor CO-RE. That is, there is no program that will consistently halt and
recognize whether or not a number is the member of the set: for some
inputs, any program that tries to recognize the input will simply never
halt.
You may want to refresh your memory on complex numbers, but really all
you need to know for this section is that a complex number c = a + bi can
be represented by the coefficient pair (a, b).
Minor correction: In the algorithms for computing the Mandelbrot set (Table
8.1, p. 114) and Julia set (Table 8.2, p. 121), instead of “x_t = x_t + 1,” it should
say “x_t = x_{t-1} + 1.”
Referring to the function x_t^2 + c, Flake says (p. 114), “If a^2 + b^2 is greater than
4, then… the sequence will diverge.” Why is this the case?
o The Mandelbrot set is the set of all complex constants c such that the
Mandelbrot function x_t^2 + c remains bounded (does not diverge) as t
goes to infinity if x0 = 0.
o A Julia set Jc is defined for a given constant c, and is the set of all points
x0 for which the Mandelbrot series remains bounded as t goes to infinity.
In Figure 8.2 (the first picture of the Mandelbrot set), what does the (x,y)
location of each point correspond to? What is the description of the set of
points contained within the black region?
Is Figure 8.2 actually a picture of the Mandelbrot set? If not, why not?
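One way to make the divergence test concrete is an escape-time sketch in Python (the iteration cap is an arbitrary choice of mine):

```python
def in_mandelbrot(c, max_iter=100):
    """Approximate Mandelbrot membership: iterate x = x**2 + c from x = 0
    and report whether the orbit stays bounded (|x| <= 2) for max_iter steps."""
    x = 0j
    for _ in range(max_iter):
        x = x * x + c
        if abs(x) > 2:          # a^2 + b^2 > 4: guaranteed to diverge
            return False
    return True
```

Pictures like Figure 8.2 color each pixel by whether (and how fast) this test fails for the c at that (x, y) location.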
Do you have any intuition for why Jc for values of c in the Mandelbrot set is a
connected set, but Jc for values of c outside the Mandelbrot set (i.e., values
that lead to divergence when x0 = 0) is disconnected? (I have to admit that I
don’t!)
Do you have any intuition for why the number of iterations that it takes for
the “neck of the M-set” to converge is an approximation to π / ε?? Me either!
How can such a complex structure possibly result from such a trivial
function??
C9: Tue 2/23: Complexity and simplicity: Mitchell 3 and 7; Flake 9; Murray &
Gell-Mann, “Effective Complexity” (skim, especially HONR 300 students) (RJ6)

There is a relatively large amount of reading for today, and it is fairly dense
in places, so this is a good opportunity to continue developing the skill of
reading challenging scientific material. I’ve assigned these different readings
because they come at the same general ideas in different ways.
What is the relationship between entropy in a physical system and
information in a computational system?
Why does Mitchell describe Shannon information as “the amount of
surprise”?
We see the fractal dimension again in this chapter – does it make more (or
less) sense to you in this context than when Flake talked about it in the
context of fractals?
What does it mean for a number to be “random”?
Information as complexity is a common theme in computer science –
Chaitin’s notion of algorithmic information content (the number of bits that
are needed to encode a string) is at the center of much of AI and knowledge
management.
In information theory, randomness is “complex,” which seems
counterintuitive. I thought the quote, “it may very well be that [fractals] offer
the greatest amount of functionality for the amount of underlying complexity
consumed” captured what I was thinking of when I asked this. Randomness
is “complex” in the sense that it’s hard to precisely replicate – but it’s not
very useful. Fractals are “complex” and also store a lot of potentially useful
information. Hence the idea of effective complexity.
Of the three authors, which one do you think did the most effective job of
explaining information theory? Do you think the answer to this question
would be different for a layperson than for a mathematician? Do you think
the order I assigned the readings in was the best order to read them in?
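Shannon’s “amount of surprise” can be made concrete with the entropy formula H = −∑ p·log2(p); a small Python sketch (my own):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average surprise of a message
    drawn from the given probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally surprising (1 bit per toss); a heavily
# biased coin is nearly predictable, so its entropy is close to 0.
```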
C10: Thu 2/25: Fractals in nature: Cells, Corals, Antennas, and Broccoli (RJ7)

Returning to Flake, recall that he explains the appearance of fractal curves in
nature as an optimization for “packing efficiency.” Identify some fractals that
are seen in nature, and discuss the tradeoffs that might have led an
optimization (evolutionary) process to generate those fractals.
Can you find other articles about real-world fractals? Feel free to share these
on the discussion board, and to bring those fractals to share!
C11: Tue 3/1: Chaos/nonlinear dynamics: A Sound of Thunder, Mitchell 2, Flake
10 (RJ8)
Could Ray Bradbury's story actually happen? Why or why not?
Chaotic systems are a great example of the distinction we make in AI
between “stochastic” or “nondeterministic” environments and “incomplete”
environments. “Stochastic” means that there are (at least apparently)
probabilistic effects of actions. “Incomplete” means that actions may have
different outcomes depending on conditions of which we have insufficient
knowledge. The interesting idea revealed by this chapter is that a sufficiently
complex (but incomplete) deterministic system can appear stochastic – for
example, when we roll a set of dice, is the outcome probabilistic or
deterministic? Maybe if we could completely specify the conditions (starting
location, angle and speed of the throw, surface properties of the dice and
table, air pressure) with complete precision, down to the molecular level, we
would be able to characterize the system as deterministic.
Notice that modeling dynamical systems, as with the discussion of “effective
complexity,” requires us to decide what properties of a dynamical system
matter. For a chemical reaction, the chapter indicates that “the ratio of
reactants to reagents” matters – but what if the distribution of these
materials over the space in which the reaction occurs also matters?
Four types of motion: fixed point (stable or convergent, unvarying behavior);
limit cycle (periodic motion into which a system stabilizes), quasiperiodic
motion (periodic motion within some “envelope”), chaotic (predictable but
only if the starting point is known with infinite precision).
Will a particular state in a chaotic system go in one direction or another? Is a
particular point in the Mandelbrot set? These are somehow mathematically
similar questions, in the sense that as you increase the precision of the
representation of the state, your answer may flip, then flip again, then flip
again.
The logistic map equation looks kind of like the Mandelbrot equation, doesn’t
it?
Make sure you understand the visualization of the state space in Figure
10.2(b) – this is just showing the logistic map function (parabola), which can
be used to plot how a value at time t (on the x axis) projects to a value at time
t+1 (on the y axis). You can see that the system converges to a fixed point
where the parabola intersects the identity line – because that value will
always map back to itself, and any nearby value will map to a closer value.
This picture may remind you a bit of Newton’s method.
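You can reproduce the fixed-point, limit-cycle, and chaotic regimes numerically; a Python sketch of the logistic map x_{t+1} = r·x_t·(1 − x_t) (my own code; the r values are standard illustrative choices):

```python
def logistic_orbit(r, x0=0.2, transient=1000, keep=8):
    """Iterate the logistic map, discard the transient, return the tail."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))   # round away floating-point noise
    return tail

# r = 2.5: fixed point at 1 - 1/r = 0.6
# r = 3.2: period-2 limit cycle
# r = 4.0: chaotic (the tail never settles into a repeating pattern)
```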
It still seems surprising to me that shifting the parabola in these
visualizations can so dramatically change the behavior (fixed point vs k-point
limit cycle vs. chaotic behavior).
Bifurcation = qualitative change in state space behavior that results in a
doubling of the number of points in the limit cycle.
Chaos = “infinite-period limit cycle” – a regime in which so many bifurcations
have occurred that the trajectory never settles into a repeating cycle.
Check out the self-similarity of the bifurcation diagram in Figure 10.7!
The Feigenbaum analysis, that lets one predict when chaos will occur based
on the first few bifurcations, is pretty cool…
In the middle of Section 10.4, Flake alludes to what we talked about in class:
no matter how much precision you give to your digital computer, there are
some functions that simply can’t be computed.
Key properties of chaos:
o Determinism: The system behavior is completely defined by the
previous state.
o Sensitivity: The system is very sensitive to initial conditions, so any
measurement error can cause arbitrarily large divergence over a
period of time.
o Ergodicity: This just means that over time, all regions of the state
space that are reachable will be revisited with regularity. This
property lets us understand the likelihood that a region of the state
space will be visited, even if we can’t say when that region will be
visited.
C12: Thu 3/3: Lab: Producer-consumer dynamics: Flake 12 (RJ9)
With respect to our discussion about whether it’s possible/reasonable/
accurate to model real-world systems, it’s interesting that the equation-based
and agent-based modeling approaches for predator-prey systems can
lead to such similar global behaviors.
Predator and prey interact with each other, leading to mutual recursion in
the system dynamics.
You should be sure to understand the Lotka-Volterra system dynamics
model, and what the different parameters are intended to represent.
The set of fixed-point behaviors in these systems is quite different from the
logistic map, and depends critically on the initial sizes of the two populations.
Yet there is a similarity in the shape of the limit cycle; it’s just “scaled” in the
state space. Thought question: In Figure 12.1, what do you think happens at
the boundaries or near the “corners” of the state space (e.g., when the fish
population is very close to 3.5 and the shark population is very close to 0)?
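If you want to see the Lotka-Volterra cycles for yourself, here is a crude Euler-integration sketch in Python (the parameter names and values are my own illustrative choices, not Flake’s):

```python
def lotka_volterra(prey0, pred0, a=1.0, b=0.5, c=0.75, d=0.25,
                   dt=0.001, steps=20000):
    """Euler integration of dx/dt = a*x - b*x*y, dy/dt = -c*y + d*x*y.
    Returns the trajectory of (prey, predator) pairs."""
    x, y = prey0, pred0
    traj = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        traj.append((x, y))
    return traj
```

For these values the fixed point is at (c/d, a/b) = (3, 2); orbits starting elsewhere cycle around it, with the cycle’s size set by the initial populations – which echoes the “scaled limit cycle” observation above.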
Some people have the ability to easily view a stereogram (side-by-side dual
pictures) like the images shown in Figure 12.2. I can’t. Can you? If so, check
out some of the stereograms you can find online: for example,
http://www.magiceye.com/3dfun/stwkdisp.shtml .
If you don’t have the mathematical inclination, don’t worry too much about
the matrix-based representation towards the end of 12.4 for capturing the
system dynamics of multi-species populations.
In our lab, we’ll play around with a NetLogo agent-based predator-prey
model (i.e., a cellular automaton model).
C13: Tue 3/8: Network Dynamics: Mitchell 15 and 16 (RJ10)
What is the strangest “small world experience” you’ve had? I once ran into a
colleague from NASA in California when I was walking down the street on
vacation in Madrid.
Why do airlines use a hub structure for scheduling their flights? Why is the
Internet designed around hub servers? Who are the hubs in your own social
network?
How would “network thinking” lead you to think differently than “object
thinking” about a phenomenon or system in your own major/discipline/area
of interest?
Mitchell doesn’t formally define clustering in a network. There are different
measures of clustering, but the most common one is “the percentage of
connected triples of node that are also triangles” (where a “triple” is a set of
three nodes that are all connected through some path (i.e., are connected by
at least two edges) and a “triangle” is a set of three nodes that are fully
connected (i.e., are connected by three edges)). What would a graph with a
low clustering coefficient look like? What about one with a high clustering
coefficient? What is the lowest possible clustering coefficient, the highest
possible clustering coefficient, and what do those graphs look like?
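The “triangles over connected triples” measure can be computed directly; a Python sketch (my own representation, with adjacency sets):

```python
def clustering_coefficient(adj):
    """Global clustering: closed triples / connected triples, where a
    connected triple is a node together with two of its neighbors.
    adj maps node -> set of neighbors."""
    triples = 0
    closed = 0
    for node, nbrs in adj.items():
        nbrs = list(nbrs)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                triples += 1
                if nbrs[j] in adj[nbrs[i]]:
                    closed += 1   # this triple is closed into a triangle
    return closed / triples if triples else 0.0

# A triangle graph: every triple is closed -> coefficient 1.0
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
# A path graph 0-1-2: one open triple, no triangles -> coefficient 0.0
path = {0: {1}, 1: {0, 2}, 2: {1}}
```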
Explain the idea of a degree distribution in a graph. What kind of degree
distribution would a random graph have? What about a fully hub-and-spoke
graph? A scale-free graph?
What is the key insight behind the PageRank search algorithm? Have you
ever heard of “link farms”? (If not, google it!) How can a link farm be used to
subvert the PageRank algorithm?
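The key insight – a page is important if important pages link to it – can be sketched as a power iteration (toy Python code of mine; 0.85 is the commonly cited damping value):

```python
def pagerank(links, damping=0.85, iterations=100):
    """Toy PageRank: links maps page -> list of pages it links to.
    Assumes every page has at least one outgoing link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share   # p passes a share of its rank to q
        rank = new
    return rank
```

A link farm subverts this by fabricating many pages whose only purpose is to pass their (artificial) rank shares to a target page.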
Mitchell 16 lists a number of real-world networks. Think of a network that
isn’t listed here, and how its properties might be described using the
concepts in Chapter 15 (degree distributions, clustering, scale-free
networks…)
C14: Thu 3/10: Lab: Designing NetLogo Models / Boids: Boids paper, Fish and
Bicycles, Honeybee dance (no reading journal)
ENJOY YOUR SPRING BREAK!
C15: Tue 3/22: MIDTERM (no reading or reading journal)
C16: Thu 3/24: Cellular automata: Mitchell 10, Flake 15, Kurzweil (RJ11)
Mitchell 10
o Have you seen the Game of Life before? Does it surprise you that such
simple rules can generate such complex behavior? How easy do you
think it would be to “design” a glider gun if you didn’t already know
about that pattern?
Flake 15
o Most of the cellular automata that we see here are binary (two
possible states per cell), but notice the k parameter that defines the
number of states and the r parameter that defines the number of
neighbors that affect each cell. The rule table has one entry per
neighborhood, so its size grows as k^(2r+1), with r determining the
exponent of this growth.
o The four classes of cellular automata should seem reminiscent of
ideas we’ve seen before: Class I cellular automata reach a fixed point;
Class II CAs enter a limit cycle; Class III CAs are effectively random
(one might think of them as having low “effective complexity”); and
Class IV CAs seem like hybrid chaotic systems – unpredictable, but
with patterns and attractors.
o The idea that CAs can be (somewhat) characterized by the simple λ
parameter (which represents the fraction of rules or table entries that
map a state into “state zero” or the “off” state) is a bit surprising. Of
course, it’s never that simple…
o You will probably want to just skim most of section 15.4, about how
the Game of Life can act as a universal computer. The link I’ve
provided for the Monday 3/28 reading assignment gives a slightly
different spin on this topic. It’s interesting but more like a mind game
than something that seems useful in practice… in a way, it’s like the
idea of NP-completeness – we have this class of systems (“complex
systems”) that are in some sense equivalent to each other, because
they can all be used to model “universal computation.”
o Hopefully you can see how useful CAs are for modeling real-world
phenomena: Flake gives a number of examples in section 15.5, and in
a way, the whole rest of the class can be seen as different types of
cellular automata.
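For the k = 2, r = 1 case there are 2^3 = 8 neighborhoods, hence 2^8 = 256 possible rules; here is a one-step sketch of such an elementary CA in Python (Wolfram’s Rule 110, with wraparound boundaries; the code is my own):

```python
# Rule 110: each (left, self, right) neighborhood maps to a new cell value.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def ca_step(cells, rule):
    """One synchronous update of every cell in the row."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```

Iterating `ca_step` and stacking the rows gives the familiar space-time triangles; Rule 110 is the rule Kurzweil discusses below.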
Kurzweil, “Reflections on Stephen Wolfram’s ‘A New Kind of Science’”
o Flake mentions Wolfram’s extensive research on cellular automata;
after Flake’s book was published, Wolfram wrote the book, A New
Kind of Science, which as you can see from Kurzweil’s review is a sort
of polemic that extols cellular automata as a universal explanation of
complexity.
o If you’ve ever read anything by Ray Kurzweil (who believes that the
Singularity is coming and that he is its prophet), you will undoubtedly
find it as amusing as I do that he accuses Wolfram of “hubris.”
o I do have to go along with Kurzweil in his “hubris” interpretation
when I read the quote by Wolfram that refers to his “discovery that
simple programs can produce great complexity.” As Kurzweil points
out, Rule 110 is yet another instance of a phenomenon we’ve seen
repeatedly in this class, where a very simple, deterministic rule leads
to unexpected and unpredictable complexity.
o I haven’t read Wolfram’s book (I believe it’s around 1000 pages long),
but at least as Kurzweil summarizes his argument (and as I have seen
other reviewers summarize it), Wolfram seems to be claiming that
“Cellular automata can produce complexity; the world is complex;
therefore, the world is a cellular automaton.” This is definitely not a
supportable logical argument!
o In particular, I think that Kurzweil’s point that systems in the world
are adaptive is at the heart of what makes cellular automata
inadequate to capture many phenomena in the real world. At a
minimum, modeling the world requires some storage devices. I
particularly resonated with the quote, “It is the complexity of the
software that runs on a universal computer that is precisely the issue.”
o Shortly after that, Kurzweil shares his own hubris and
shortsightedness (in my opinion): “To build strong AI, we will short
circuit this process, however, by reverse engineering the human brain,
a project well under way,…”
o We talked in class about the question that Wolfram, Kurzweil, and
others have all raised: “whether the ultimate nature of reality is
analog or digital.” Maybe we will never be able to answer this
question. Still, it does seem to me that while it may be philosophically
interesting to speculate on this question, it is perhaps not very useful
to imagine the world as being discrete – because for all practical
purposes (given our own computational limitations), it seems to
behave like a continuous system.
C17: Tue 3/29: Finite state automata: Turing Machine article, Game of Life TM,
GEB Excerpts (RJ12)

Mitchell 4 / Turing Machine article (Stanford Encyclopedia of
Philosophy):
o A Turing Machine is just a slightly more general kind of finite state
machine (aka finite state automaton) – it’s a FSA plus memory (i.e., it
can write stuff down to remember it for later).
o A TM consists of a finite set of possible “states” (think of the “state” as
a single memory location that the TM can write a single symbol into);
an infinite one-dimensional tape with discrete locations or “cells” that
can be written with 1s and 0s; a read-write head that can be
positioned at any location on the tape, and can write a 0, write a 1,
move one step left, or move one step right; and a set of rules
(transition table) that tells the machine what to do, given its current
state (symbol in the “state” memory location) and the contents of the
current cell (where the read-write head is positioned).
o The Church-Turing Thesis states (rather informally) that for any
intuitively computable task, a Turing Machine can compute the
solution. It’s not exactly provable, but nobody has ever disproved it
(by describing an intuitively computable task that can’t be computed
by a TM).
o A Universal Turing Machine (UTM) is a state table that can read a
Turing Machine specification written on its tape and “execute” that
Turing Machine. Basically, it’s a programmable Turing Machine that
can do anything that any Turing Machine can do. Wow.
o Because this very simple TM can emulate a much more complex
Turing machine (more memory, more symbols, more tapes, even a
“nondeterministic” machine that can explore multiple transitions
simultaneously), every digital computer (even a 1,024-cell
supercomputer) can be shown to be Turing-equivalent. Also wow.
o Will a particular TM halt? Who knows – that’s the Halting Problem!
(Sketch of the proof that we can’t, in general, know: suppose a
program H could decide halting. Build a machine D that runs H on its
input and does the opposite – looping forever if H says “halts,” and
halting otherwise. Running D on itself gives a contradiction either
way.)
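The state/tape/rule-table description above can be turned into a tiny simulator; a Python sketch (the three-step machine below is my own toy example):

```python
def run_turing_machine(rules, start, halt, max_steps=1000):
    """rules: (state, symbol) -> (write, move, next_state), move in {-1, +1}.
    The tape is a dict from position to symbol; blank cells read as 0."""
    tape, pos, state = {}, 0, start
    for _ in range(max_steps):
        if state == halt:
            return tape
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
    raise RuntimeError("no halt within step budget (cf. the Halting Problem)")

# A toy machine: write a 1 and move right, three times, then halt.
WRITE_THREE = {
    ("s0", 0): (1, +1, "s1"),
    ("s1", 0): (1, +1, "s2"),
    ("s2", 0): (1, +1, "halt"),
}
```

A Universal Turing Machine is, in effect, a fixed `rules` table whose tape contains another machine’s table – the simulator above just does the same thing in Python.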
Game of Life TM article:
o I have to confess that I didn’t completely understand the details of the
Game of Life Turing Machine, but it’s interesting to think about the
idea that you can “design” the complexity of a Game of Life simulation
to carry out a specific computation.
o The Universal Turing Machine on this website reminds me of the RNA
transcription process, which is basically a little biological “machine”
that walks along a genome, turning the DNA encoding into
“implemented” proteins.
C18: Thu 3/31: Self-organization: Flake 16; Strogatz article; color articles (RJ13)

Chapter 16:
o Self-organization is just another name for “parallelism that leads to
interesting group behaviors.”
o Before I started studying self-organization and multi-agent systems, I
hadn’t seen the termite model. It still surprises me that these simple
rules lead to such interesting patterns. It does make some sense –
termites pick up wood chips anywhere, but only drop wood chips
near other wood chips, so the wood chips are going to tend to gather.
o The ant simulation in Figure 16.3 has the feeling of the Game of Life,
with a “trajectory” of the simulation that extends itself spatially. In
fact, the ant rules are a bit like Game of Life rules, except that the ant
can only be in one location at a time (so only one location at a time can
change).

o Figure 16.5 just blows my mind – the patterns are so rich yet so
unpredictable.
o This chapter doesn’t talk about real ants very much, but I know that
there are computational biologists who are developing agent-based
computational models of ants, to try to simulate their colony
behaviors in silico.
o Changing the relative weights of the different rules (avoidance,
copying, centering, clearing view, and momentum) in the “boid”
flocking model gives remarkably different sorts of behaviors. I had a
Ph.D. student (Don Miner) who completed a whole Ph.D. dissertation
studying how to predict the emergent behavior of boids (and other
agent-based models) from the values of the low-level parameters (i.e.,
weights on the different rules).
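Flake’s virtual ant is in the spirit of Langton’s ant; here is a Python sketch of the classic version (turn right on a white cell, left on a black cell, flipping the cell as you leave – my own code, not Flake’s exact rules):

```python
def langtons_ant(steps):
    """Classic Langton's ant on an unbounded grid.
    Returns the set of black cells after the given number of steps."""
    black = set()
    pos, heading = (0, 0), (0, 1)         # start facing "north"
    for _ in range(steps):
        dx, dy = heading
        if pos in black:                   # black cell: turn left, flip to white
            heading = (-dy, dx)
            black.remove(pos)
        else:                              # white cell: turn right, flip to black
            heading = (dy, -dx)
            black.add(pos)
        pos = (pos[0] + heading[0], pos[1] + heading[1])
    return black
```

Like the Game of Life, a handful of local rules; run it for ~11,000 steps and the famous “highway” pattern emerges.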
Strogatz article:
o Terminology: graphs (or networks) consist of nodes or vertices,
connected by edges. The degree of a node is the number of edges that
it has. The density of a graph is the average degree. The transitivity is
the probability that two neighbors of a randomly selected node will
also be connected to each other. (This can also be thought of as the
number of triangles in the graph.)
o Mathematicians have their own version of “Six Degrees of Kevin
Bacon”: the “Erdös Number.” Paul Erdös was an incredibly prolific
mathematician who published with a huge number of co-authors. His
work, combined with recent research on social networks, has led to
the exploration of “co-authorship chains” that lead to Erdös. My own
Erdös number is 4: I have co-authored with a former student (Matt
Gaston) who has a paper co-authored with Miro Kraetzl, who was a
co-author with Erdös.
o For a long time, people primarily studied regular networks (lattices or
fully connected graphs) or random networks (where any given edge
has an equal probability of being in the graph). These networks aren’t
reflective of most real-world network topologies, so recently there has
been a lot of interest in other types of network structures, like small-world graphs and scale-free graphs.
o Small-world graphs are “almost-lattices” where there are some
“shortcut” links. These graphs have the interesting property that the
diameter (the longest shortest path between any pair of nodes)
becomes very small (compared to a full lattice).
o Scale-free networks are modeled by a “growth pattern” where new
nodes are more likely to be connected to highly-connected nodes in the
graph. This makes a lot of sense as a growth model of many real-world
networks: when you move to a new town, you’re more likely to
meet people with a lot of friends; when a new node is added to the
Internet, it’s more likely to be connected to a hub node. Scale-free
networks have a power-law degree distribution (in contrast to
random graphs, which have a Poisson (bell-curve-like) degree
distribution). That is, there are a few very highly connected nodes
(orders of magnitude higher than average). This pattern is sometimes
referred to as “heavy tailed” (i.e., with individuals that are much
further out on the “tail” of the degree distribution than one would
expect).
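The “rich get richer” growth pattern is easy to sketch (a simplified Barabási-Albert-style model; the code and parameters are my own):

```python
import random

def preferential_attachment(n_nodes, seed=None):
    """Grow a graph one node at a time; each new node attaches to one
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                 # seed graph: a single edge
    endpoints = [0, 1]               # each node appears once per edge end,
                                     # so sampling this list is degree-weighted
    for new in range(2, n_nodes):
        target = rng.choice(endpoints)
        edges.append((new, target))
        endpoints.extend([new, target])
    return edges
```

Tally the degrees of the resulting graph and you will typically see the heavy tail: most nodes keep degree 1 while a few early nodes accumulate many links.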
Color Trends / Pantone Articles
o I included these articles because I had read about color trends in the
newspaper and thought it was an interesting example of a complex
system in which the agents are consciously trying to control the
system’s trajectory.
o “You might think that this has elements of a self-fulfilling prophecy….”
– that’s exactly what makes the color “market” different from, say, the
stock market. (Though of course in the stock market, the agents also
have a conscious desire to control the system’s trajectory, so maybe
they are more similar than not?)
o The Pantone “color of the year” just cracks me up (not being a
fashionista) – the idea that there’s this company that just decides what
the hot color will be in two years makes the whole fashion thing seem,
well, a bit silly to me. But perhaps not quite as silly as the quote from
the executive director of Pantone in the NYT article: “Blue Iris brings
together the dependable aspects of blue, underscored by a strong,
soul-searching purple cast. Emotionally, it is anchoring and meditative
with a touch of magic.” Wow. And here I thought it was blue.
o Also amusing (to me) is the director of “branding and design” firm
who also thinks that Pantone’s prediction is a bit silly – but would
take it entirely seriously if it had come from a designer instead. Also
adorable: “forecasts are for the mass market” – whereas, obviously,
high fashion is much more dynamic and unpredictable.
C19: Tue 4/5: Multi-Agent Game Day (no reading or reading journal)
C20: Thu 4/7: Giving Presentations / Competition and cooperation: Flake 17,
Mitchell 14 (RJ14)

I didn’t know that slime mold cells are usually independent but sometimes
self-organize into a single organism – so weird!

Game theory is the “economics of games” – i.e., the study of what a “rational”
player should do in the context of an interaction with other players, where
each player’s payoff depends on what the other players do. The policy for
taking actions in this setting is called a “strategy.” In most games, the strategy
is just a single action. In some games, a player can have a “mixed strategy”
(where they pick different actions with some probability). In general, the
player has to pick their action without knowing what the other player will
choose (though they may have some information about the other player’s
previous actions).

The payoff matrix defines what the reward for each player will be, given that
player’s actions and the actions of the other player(s). (Most games studied
in game theory are two-player games, but in principle, there can be any
number of players.)

Social welfare is said to be maximized when the sum of the payoffs of all of
the players is maximized.

A Nash equilibrium is a set of player strategies where no player will change
their strategy if they know the other players’ strategies.

The reason that life is difficult is that the set of strategies that maximizes
social welfare is quite often not the Nash equilibrium. In fact, the Nash
equilibrium can be the minimum social welfare strategy set! This situation
can lead to a “race to the bottom” with self-interested agents. The “tragedy of
the commons” is a classic example of this scenario.
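The Prisoner's Dilemma makes this concrete. Here's a brute-force check in Python (using the standard textbook payoff numbers, not values from Flake or Mitchell) that the unique Nash equilibrium is not the social-welfare maximum:

```python
from itertools import product

# Payoffs as (row player, column player); 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}
ACTIONS = ['C', 'D']

def is_nash(a, b):
    """Neither player can raise their own payoff by deviating alone."""
    pa, pb = PAYOFF[(a, b)]
    best_a = max(PAYOFF[(x, b)][0] for x in ACTIONS)
    best_b = max(PAYOFF[(a, y)][1] for y in ACTIONS)
    return pa == best_a and pb == best_b

nash = [(a, b) for a, b in product(ACTIONS, ACTIONS) if is_nash(a, b)]
welfare_max = max(product(ACTIONS, ACTIONS), key=lambda ab: sum(PAYOFF[ab]))

print("Nash equilibria:", nash)                # [('D', 'D')]
print("Social-welfare maximum:", welfare_max)  # ('C', 'C')
```

Mutual defection is the only equilibrium, yet it yields the lowest total payoff in the game – the race to the bottom in four lines of arithmetic.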

In the iterated Prisoner’s Dilemma, Tit-for-Tat can beat almost all comers. As
Flake points out, though, there is no such thing as a “best strategy.” (Things
do get more complicated when there is “noise” – i.e., when players sometimes
try to cooperate but unintentionally defect, or vice versa.)

The IPD is an interesting model for the evolution of cooperation.
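A quick simulation sketch of the iterated game (my own minimal version; payoffs are again the usual textbook values):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play_ipd(strat1, strat2, rounds=100):
    """Iterate the Prisoner's Dilemma, accumulating each player's score."""
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a1 = strat1(h1, h2)
        a2 = strat2(h2, h1)
        p1, p2 = payoff[(a1, a2)]
        s1 += p1
        s2 += p2
        h1.append(a1)
        h2.append(a2)
    return s1, s2

# TfT is exploited only on the first round against Always Defect...
print(play_ipd(tit_for_tat, always_defect))   # (99, 104)
# ...but two TfT players lock into mutual cooperation.
print(play_ipd(tit_for_tat, tit_for_tat))     # (300, 300)
```

Note that Tit-for-Tat never beats any single opponent head-to-head; it wins tournaments because it does well against (nearly) everyone, which is part of why "best strategy" is the wrong concept.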
C21: Tue 4/12: NetLogo project presentations (no reading or reading journal)
C22: Thu 4/14: Guest lecture TBA (no reading or reading journal)
C23: Tue 4/19: NetLogo project presentations (no reading or reading journal)
C24: Thu 4/21: Phase transitions: Chapter 18 / Tipping Point / Egypt / Tulips
(RJ15)

Chapter 18:
o The tendency of systems to “optimize” themselves by reaching a
minimum-energy state is the inspiration for many computational
optimization mechanisms. Graph layout, for example, is most
commonly performed using a “spring-embedding” algorithm that
searches for a minimum-energy state by optimizing the lengths of the
edges in the graph.
o “Optimization” can be thought of as a process of finding a state, or set
of parameter assignments, that minimizes (or sometimes maximizes)
the value of an “objective function” (i.e., some function of the state or
parameter assignments).
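As a tiny illustration of minimizing an objective function (an invented 1-D "spring" example, not Flake's graph-layout algorithm): place three points on a line so that the springs between neighbors relax to their rest length, via gradient descent on the spring energy:

```python
# Toy energy minimization: three points on a line, springs of rest
# length 1.0 between neighbors. Energy = sum of squared stretch;
# gradient descent slides the points toward a minimum-energy layout.
def energy(xs, rest=1.0):
    return sum((abs(xs[i + 1] - xs[i]) - rest) ** 2
               for i in range(len(xs) - 1))

def descend(xs, lr=0.1, steps=500, rest=1.0, eps=1e-6):
    xs = list(xs)
    for _ in range(steps):
        grads = []
        for i in range(len(xs)):
            # numerical (forward-difference) gradient w.r.t. point i
            bumped = xs[:i] + [xs[i] + eps] + xs[i + 1:]
            grads.append((energy(bumped, rest) - energy(xs, rest)) / eps)
        xs = [x - lr * g for x, g in zip(xs, grads)]
    return xs

layout = descend([0.0, 0.2, 0.3])
print(round(energy(layout), 4))    # ~0.0: springs relaxed to rest length
```

Spring-embedding graph layout does the same thing in two dimensions with one spring per edge (plus repulsive forces between non-neighbors).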
o “Combinatorial optimization” just means optimization of an objective
function that is defined on a set of assignments or choices, where the
assignments may have various constraints between them.
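As a concrete (entirely made-up) toy instance: assign 0/1 to four variables to minimize an objective, subject to constraints between the assignments, by brute force:

```python
from itertools import product

# Objective: count "disagreements" between adjacent variables.
def objective(x):
    return sum(abs(x[i] - x[i + 1]) for i in range(len(x) - 1))

# Constraints: exactly two variables set, and x0, x1 not both set.
def feasible(x):
    return sum(x) == 2 and not (x[0] == 1 and x[1] == 1)

best = min((x for x in product([0, 1], repeat=4) if feasible(x)),
           key=objective)
print(best, objective(best))    # (0, 0, 1, 1) 1
```

Brute force works here because there are only 16 assignments; the whole difficulty of combinatorial optimization is that the number of assignments grows exponentially with the number of variables.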
o HONR students will probably want to skip most of Sections 18.1-18.4.
Don’t worry about the math. The main ideas are:
 Section 18.1 – Computational models of neurons can be
“trained” to learn (optimize) various predictive functions.
 Section 18.2, 18.3 – Neural models can be trained to
“remember” various sets of patterns, using mathematical
feedback rules.
 Section 18.4 – Hopfield networks are a more sophisticated
model for neural nets that can learn complex functions and
solve difficult constrained optimization problems.
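A minimal sketch of the Hopfield idea (one stored pattern, Hebbian weights, threshold updates; my own simplification, not Mitchell's or Flake's formulation):

```python
# Store patterns of +1/-1 units with the Hebbian rule w_ij = x_i * x_j,
# then recover a stored pattern from a corrupted input by repeatedly
# updating each unit toward a lower-energy state.
def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    state = list(state)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            s = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]          # one unit flipped
print(recall(w, noisy) == stored)       # True
```

The stored pattern sits at an energy minimum, so the corrupted input "rolls downhill" back to it – memory retrieval as energy minimization.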

Tipping Point:
o It’s fascinating how Gladwell connects such disparate phenomena as
the Hush Puppies trend, the drop in crime in Brooklyn, and the spread
of disease using the “tipping point” idea.
o Can you think of other “tipping points” and how they might have been
generated, using the three characteristics Gladwell identifies –
contagiousness, small causes leading to large effects, and change
happening suddenly? Can you identify any Mavens, Connectors, or
Salespeople in these tipping point phenomena? Is there a Law of the
Few, a Stickiness Factor, or a Powerful Context that applies?
o If you don’t smoke but know people who do, or don’t do drugs but
know people who do, or don’t engage in other risky behaviors but
know people who do, it’s easy from the outside to just dismiss these as
individual choices. But understanding why people engage in these
behaviors in the context of the society in which they live is the key to
developing interventions that might halt or slow these “epidemics.”

Egypt article:
o As I’ve mentioned in class, the popular uprising in Egypt strikes me as a
fascinating example of a social “tipping point.” What was the trigger
that caused the populace to shift from unexpressed frustration with
the system to vocal frustration with the system? Can you connect the
“tipping point” concepts of contagion, small causes, and sudden
change to this social movement?
o This article doesn’t really address the sources of the tipping point, but
it does hint at an intriguing thought – on the other side of any tipping
point, we enter into a new regime that we don’t yet understand and
can’t predict based on pre-tipping point system behavior.

Tulips article:
o “Tulip mania” is a great example of groupthink leading to an
unsustainable positive feedback loop.
o “At the peak of tulip mania, in February 1637, some single tulip bulbs
sold for more than 10 times the annual income of a skilled
craftsman.” – This sentence reminds me a bit of our discussions about
Gucci bags and Jimmy Choo sandals, though it’s a different
phenomenon, really – Gucci and Jimmy Choo are seen as having
“value” of some sort, whereas the tulip prices during tulip mania were
based on their perceived value as an investment. Luxury pricing
seems to be more sustainable than speculation – they are both equally
“false” in that neither of them are based on intrinsic reality-based
value, but the latter relies on having a market in which to resell the
products – and when that market collapses, all of the value dissipates.
o Housing bubbles, subprime mortgage repackaging – we have seen many
speculative bubbles recently, so it’s not as though “we” collectively
have learned much from tulip mania…
C25: Tue 4/26: Genetics and Evolution: Mitchell 5 and 6; The Evolution of
Language (RJ16); The Selfish Gene (chapters 1-3) (RJ16)

C26: Wed 5/2: Genetic Algorithms: Mitchell 8, Mitchell 9; Memetics; FoldIt
(RJ17)

C27/C28/C29/Final Exam Slot: Tue 5/3; Thu 5/5; Tue 5/10; TBA: Student
presentations (no reading or reading journal)