
<AI: Artificial Intelligence>
Two-Year Technical Program, Electronics, Year 1, Class A
黃舒蘋
A9011059
1. What is an agent?
1. http://www.microsoft.com/msagent/
Microsoft® Agent is a software technology that enables an enriched form of user
interaction that can make using and learning to use a computer easier and more
natural. With the Microsoft Agent set of software services, developers can easily
enhance the user interface of their applications and Web pages with interactive
personalities in the form of animated characters. These characters can move freely
within the computer display, speak aloud (and by displaying text onscreen), and
even listen for spoken voice commands. When used effectively with a
conversational interface approach, Microsoft Agent can be a powerful extension
and enhancement of the existing interactive modalities of the Microsoft Windows®
interface.
2. http://members.ud.com/download/gold/
By combining thousands of ordinary PCs to work on extremely large computational
projects, problems can be solved more quickly and less expensively than by conventional
methods. Now regular people can help fuel research and projects that previously may have
required a bank of supercomputers or a hundred years to complete. The Volunteer Your PC
program lets you make a real difference without donating money or your time with the tool
that you are using to view this site right now—your home PC.
In our Volunteer Your PC program, your computer can join tens of thousands of other PCs
across the Internet each working on a small part of a large problem simultaneously. You
can help one or more public good projects. To participate, simply download a very small,
non-invasive software program that works like a screensaver: it runs when your computer
isn't being used, and processes projects—until you need the power. Your computer never
leaves your desk, and the project never interrupts your usual PC use.
3. 玉山律師事務所 (law firm)
http://www.wangleellp.com/ctp-law.htm
Our firm's attorneys and legal assistants have extensive legal expertise and experience.
Our practice covers the following areas:
civil, criminal, and commercial litigation of all kinds;
marriage and family law matters (domestic violence, custody, alimony);
traffic accidents and personal injury claims;
immigration applications and changes of status, political asylum, and deportation appeals;
registration of intellectual property (patents, copyrights, and trademarks) and infringement litigation;
computer and Internet e-Commerce law;
trade secrets and unfair competition law;
corporate and commercial law, and bankruptcy law.
2. Famous robots:
RORZE is a world-famous maker of semiconductor wafer-handling robots. Its wafer sorters
(WAFER SORTER) come in NON-SMIF and SMIF models, including a W680mm x D930mm
in-line WAFER SORTER suited to automated systems working together with AGVs.
STAGES: 2 cassettes
OCR: COGNEX
SIZE: smallest footprint
DISPLAY: 12.1" touch panel
BUFFER STATION
Proven record of integration with equipment from well-known brands worldwide.
Quick to install and very stable in operation.
Cassette loading is convenient and easy for the operator.
High transfer speed increases throughput.
Robot competition:
Grasping the Interdisciplinarity of Mechatronics
Original title: Grasping The Interdisciplinarity of Mechatronics
Author: Siegwart, R.
Source: IEEE Robotics & Automation Magazine, 8(2), Jun. 2001, pp. 27-34 (8 pages, in English)
Keywords: Mechatronics education; Design; Smart products; Robot competition;
Mobile robotics; Hands-on education

Mechatronics integrates precision mechanical engineering, electronics, intelligent control,
and systems thinking, and applies them to the design and manufacture of smart products;
the most typical example is the design and construction of robots. The Swiss Federal
Institute of Technology in Zurich (ETHZ) has offered a robotics course since 1992. Students
from different departments form teams, and each team receives from the school a mobile
robot kit built around a Motorola 68332 microprocessor. Students must spend 50% of their
time on fundamentals and the other 50% on hands-on work. The fundamentals include:
foundations of mechatronic systems; physical effects and their applications in mechatronics;
mechatronic simulation; basic electronic circuits, sensors, and actuators; controller
architecture and design; control and artificial intelligence; and microprocessor systems and
real-time digital control. The courses supporting the hands-on work cover mechatronic
design, tools, and methods; creativity and innovation; project management, teamwork,
communication, and control; and implementation and design competitions. Each year the
course ends with a competition that serves as the final evaluation: in 1992 it was playing
golf on a 4x4 m field; in 1993, stacking blocks into a tower (up to 1 m high); in 1994,
collecting objects of different colors and weights in an 8x8 m maze; in 1995, playing soccer
on an 8x8 m field; in 1996, extinguishing lit candles with a metal cup in an 8x8 m maze; in
1997, vacuuming wood shavings from an 8x8 m area scattered with tables and chairs; in
1998, carrying schoolchildren on a school bus through a simulated city; in 1999, acting as a
mail carrier delivering mail in a simulated office; and last year, picking up balls and
dropping them into the opponent's trailer. Given the success of robotics teaching at ETHZ,
there are plans to extend it to other schools through distance learning starting next year.
(徐堯山)
3. Find solutions to the 8-queens problem:
The 8-queens problem is a well-known constraint satisfaction problem (CSP) of
classical AI. The task is: given a standard chessboard and 8 chess queens, place them
on the board so that no queen is on the line of attack of any other queen. This problem
can be easily generalized to the case of N queens on the NxN board.
Solution 1: (board diagram not reproduced)
Solution 2: (board diagram not reproduced)
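The board diagrams above can be regenerated with a standard backtracking search. The
sketch below is illustrative only; the names solve_n_queens, safe, and place are not from the
assignment.

    def solve_n_queens(n=8):
        # Backtracking over rows: cols[r] is the column of the queen in row r.
        solutions = []

        def safe(cols, r, c):
            # No previously placed queen shares this column or either diagonal.
            return all(c != cc and abs(c - cc) != r - rr
                       for rr, cc in enumerate(cols))

        def place(cols):
            r = len(cols)
            if r == n:
                solutions.append(tuple(cols))
                return
            for c in range(n):
                if safe(cols, r, c):
                    cols.append(c)
                    place(cols)
                    cols.pop()

        place([])
        return solutions

    solutions = solve_n_queens(8)
    print(len(solutions))    # 92 distinct solutions on the 8x8 board
    print(solutions[0])      # e.g. (0, 4, 7, 5, 2, 6, 1, 3)

Each solution is reported as the column index of the queen in each row.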
4. Document on TSP:
TSP™ is a complete language for the estimation and simulation of econometric models. It
is a world-wide standard for econometric estimation, with over 2000 installations. What
follows is a general overview; detailed lists of features and commands can be found
elsewhere. TSP stands for "Time Series Processor", although it is also commonly used with
cross section and panel data. TSP happens to be a common acronym with alternative
definitions.
5. BFS / DFS / Heuristic Search:
Breadth first search and depth first search
Traversal of graphs and digraphs
To traverse means to visit the vertices in some systematic order. You should be
familiar with various traversal methods for trees (a short example follows this list):
preorder: visit each node before its children.
postorder: visit each node after its children.
inorder (for binary trees only): visit left subtree, node, right subtree.
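As a small illustrative sketch (in Python, not part of the original notes), the three orders
on a tiny hand-built binary tree look like this:

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def preorder(v):
        if v is None:
            return []
        return [v.key] + preorder(v.left) + preorder(v.right)

    def postorder(v):
        if v is None:
            return []
        return postorder(v.left) + postorder(v.right) + [v.key]

    def inorder(v):
        if v is None:
            return []
        return inorder(v.left) + [v.key] + inorder(v.right)

    # Tiny binary tree: A with children B and C.
    root = Node("A", Node("B"), Node("C"))
    print(preorder(root))    # ['A', 'B', 'C']  (node before its children)
    print(postorder(root))   # ['B', 'C', 'A']  (node after its children)
    print(inorder(root))     # ['B', 'A', 'C']  (left subtree, node, right subtree)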
We also saw another kind of traversal, topological ordering, when I talked about shortest
paths.
Today, we'll see two other traversals: breadth first search (BFS) and depth first search
(DFS). Both of these construct spanning trees with certain properties useful in other graph
algorithms. We'll start by describing them in undirected graphs, but they are both also very
useful for directed graphs.
Breadth First Search
This can be thought of as being like Dijkstra's algorithm for shortest paths, but with
every edge having the same length. However it is a lot simpler and doesn't need any
complicated data structures. We just keep a tree (the breadth first search tree), a list of nodes to be
added to the tree, and markings (Boolean variables) on the vertices to tell whether
they are in the tree or list.
breadth first search:
    unmark all vertices
    choose some starting vertex x
    mark x
    list L = x
    tree T = x
    while L nonempty
        choose some vertex v from front of list
        visit v
        for each unmarked neighbor w
            mark w
            add it to end of list
            add edge vw to T
It's very important that you remove vertices from the other end of the list than the one
you add them to, so that the list acts as a queue (fifo storage) rather than a stack (lifo).
The "visit v" step would be filled out later depending on what you are using BFS for,
just like the tree traversals usually involve doing something at each vertex that is not
specified as part of the basic algorithm. If a vertex has several unmarked neighbors, it
would be equally correct to visit them in any order. Probably the easiest method to
implement would be simply to visit them in the order in which the adjacency list for v is
stored.
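The pseudocode translates almost line for line into a runnable sketch. Here the graph is
assumed to be an adjacency-list dictionary, and the parent map doubles as both the tree T
and the markers (a vertex is marked exactly when it receives a parent); this is an
illustration, not code from the notes.

    from collections import deque

    def bfs(graph, x):
        # graph: dict mapping each vertex to a list of its neighbors.
        # Returns the BFS tree as a parent map, plus the level of each vertex.
        parent = {x: None}          # a vertex is "marked" once it has a parent entry
        level = {x: 0}
        L = deque([x])              # the list, used as a FIFO queue
        while L:
            v = L.popleft()         # take vertices from the front of the list...
            # "visit v" would go here
            for w in graph[v]:
                if w not in parent:         # unmarked neighbor
                    parent[w] = v           # add edge vw to the tree T
                    level[w] = level[v] + 1
                    L.append(w)             # ...and add new ones to the end
        return parent, level

    # Example on a small undirected graph (each edge listed in both directions):
    G = {1: [2, 3, 4], 2: [1, 5, 6], 3: [1], 4: [1, 7],
         5: [2, 8], 6: [2], 7: [4, 9, 10, 11],
         8: [5], 9: [7], 10: [7], 11: [7]}
    parent, level = bfs(G, 1)
    print(level)    # vertices appear a level at a time: {1: 0, 2: 1, 3: 1, 4: 1, ...}

Since the BFS tree is a shortest path tree (as argued below), the level of each vertex is also
its distance from x.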
Let's prove some basic facts about this algorithm. First, each vertex is clearly marked at
most once, added to the list at most once (since that happens only when it's marked), and
therefore removed from the list at most once. Since the time to process a vertex is
proportional to the length of its adjacency list, the total time for the whole algorithm is
O(m), where m is the number of edges in the graph.
Next, let's look at the tree T constructed by the algorithm. Why is it a tree? If you think of
each edge vw as pointing "upward" from w to v, then each edge points from a vertex
visited later to one visited earlier. Following successive edges upwards can only get
stopped at x (which has no edge going upward from it) so every vertex in T has a path to x.
This means that T is at least a connected subgraph of G. Now let's prove that it's a tree. A
tree is just a connected and acyclic graph, so we need only show that T has no cycles. In
any cycle, no matter how you orient the edges so that one direction is "upward" and the
other "downward", there is always a "bottom" vertex having two upward edges out of it.
But in T, each vertex has at most one upward edge, so T can have no cycles. Therefore T
really is a tree. It is known as a breadth first search tree.
We also want to know that T is a spanning tree, i.e. that if the graph is connected (every
vertex has some path to the root x) then every vertex will occur somewhere in T. We can
prove this by induction on the length of the shortest path to x. If v has a path of length k,
starting v-w-...-x, then w has a path of length k-1, and by induction would be included in T.
But then when we visited w we would have seen edge vw, and if v were not already in the
tree it would have been added.
Breadth first traversal of G corresponds to some kind of tree traversal on T. But it isn't
preorder, postorder, or even inorder traversal. Instead, the traversal goes a level at a time,
left to right within a level (where a level is defined simply in terms of distance from the
root of the tree). For instance, the following tree is drawn with vertices numbered in an
order that might be followed by breadth first search:
        1
      / | \
     2  3  4
    / \     |
   5   6    7
   |      / | \
   8     9 10 11
The proof that vertices are in this order by breadth first search goes by induction on
the level number. By the induction hypothesis, BFS lists all vertices at level k-1
before those at level k. Therefore it will place into L all vertices at level k before all
those of level k+1, and therefore will list those of level k before those of level k+1.
(This really is a proof even though it sounds like circular reasoning.)
Breadth first search trees have a nice property: Every edge of G can be classified into one
of three groups. Some edges are in T themselves. Some connect two vertices at the same
level of T. And the remaining ones connect two vertices on two adjacent levels. It is not
possible for an edge to skip a level.
Therefore, the breadth first search tree really is a shortest path tree starting from its root.
Every vertex has a path to the root, with path length equal to its level (just follow the tree
itself), and no path can skip a level so this really is a shortest path.
Breadth first search has several uses in other graph algorithms, but most are too
complicated to explain in detail here. One is as part of an algorithm for matching, which is
a problem in which you want to pair up the n vertices of a graph by n/2 edges. If you have
a partial matching, pairing up only some of the vertices, you can extend it by finding an
alternating path connecting two unmatched vertices; this is a path in which every other
edge is part of the partial matching. If you remove those edges in the path from the
matching, and add the other path edges back into the matching, you get a matching with
one more edge. Alternating paths can be found using a version of breadth first search.
A second use of breadth first search arises in certain pattern matching problems. For
instance, if you're looking for a small subgraph such as a triangle as part of a larger graph,
you know that every vertex in the triangle has to be connected by an edge to every other
vertex. Since no edge can skip levels in the BFS tree, you can divide the problem into
subproblems, in which you look for the triangle in pairs of adjacent levels of the tree. This
sort of problem, in which you look for a small graph as part of a larger one, is known as
subgraph isomorphism. In a recent paper, I used this idea to solve many similar
pattern-matching problems in linear time.
Depth first search
Depth first search is another way of traversing graphs, which is closely related to
preorder traversal of a tree. Recall that preorder traversal simply visits each node
before its children. It is easiest to program as a recursive routine:
preorder(node v)
{
    visit(v);
    for each child w of v
        preorder(w);
}
To turn this into a graph traversal algorithm, we basically replace "child" by
"neighbor". But to prevent infinite loops, we only want to visit each vertex once. Just
like in BFS we can use marks to keep track of the vertices that have already been
visited, and not visit them again. Also, just like in BFS, we can use this search to build
a spanning tree with certain useful properties.
dfs(vertex v)
{
    visit(v);
    for each neighbor w of v
        if w is unvisited
        {
            dfs(w);
            add edge vw to tree T
        }
}
The overall depth first search algorithm then simply initializes a set of markers so we
can tell which vertices are visited, chooses a starting vertex x, initializes tree T to x,
and calls dfs(x). Just like in breadth first search, if a vertex has several neighbors it
would be equally correct to go through them in any order. I didn't simply say "for each
unvisited neighbor of v" because it is very important to delay the test for whether a
vertex is visited until the recursive calls for previous neighbors are finished.
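As before, a runnable sketch (the adjacency-list dictionary and function names are
assumptions, not code from the notes) makes the marking explicit:

    def dfs_tree(graph, x):
        # graph: dict mapping each vertex to a list of its neighbors.
        # Returns the DFS tree as a parent map.
        parent = {x: None}

        def dfs(v):
            # "visit v" would go here
            for w in graph[v]:
                # The visited test is delayed until we actually reach w: it may
                # have been visited by a recursive call made for an earlier
                # neighbor of v.
                if w not in parent:
                    parent[w] = v       # add edge vw to tree T
                    dfs(w)

        dfs(x)
        return parent

    # A small example graph in the same adjacency-list form:
    G = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
    print(dfs_tree(G, 1))   # {1: None, 2: 1, 3: 2, 4: 2}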
The proof that this produces a spanning tree (the depth first search tree) is essentially the
same as that for BFS, so I won't repeat it. However while the BFS tree is typically "short
and bushy", the DFS tree is typically "long and stringy".
Just like we did for BFS, we can use DFS to classify the edges of G into types. Either an
edge vw is in the DFS tree itself, v is an ancestor of w, or w is an ancestor of v. (These last
two cases should be thought of as a single type, since they only differ by what order we
look at the vertices in.) What this means is that if v and w lie in different subtrees of the DFS tree (neither is an ancestor of the other), we
can't have an edge from v to w. This is because if such an edge existed and (say) v were
visited first, then the only way we would avoid adding vw to the DFS tree would be if w
were visited during one of the recursive calls from v, but then v would be an ancestor of w.
As an example of why this property might be useful, let's prove the following fact: in any
graph G, either G has some path of length at least k, or G has O(kn) edges.
Proof: look at the longest path in the DFS tree. If it has length at least k, we're done.
Otherwise, since each edge connects an ancestor and a descendant, we can bound the
number of edges by counting the total number of ancestors of each descendant, but if the
longest path is shorter than k, each descendant has at most k-1 ancestors. So there can be at
most (k-1)n edges.
This fact can be used as part of an algorithm for finding long paths in G, another subgraph
isomorphism problem closely related to the traveling salesman problem. If k is a small
constant (like say 5) you can find paths of length k in linear time (measured as a function
of n). But measured as a function of k, the time is exponential, which isn't surprising
because this problem is closely related to the traveling salesman problem. For more on this
particular problem, see Michael R. Fellows and Michael A. Langston, "On search, decision
and the efficiency of polynomial-time algorithms", 21st ACM Symp. Theory of Computing,
1989, pp. 501-512.
Relation between BFS and DFS
It may not be clear from the pseudo-code above, but BFS and DFS are very closely
related to each other. (In fact in class I tried to describe a search in which I modified
the "add to end of list" line in the BFS pseudocode to "add to start of list" but the
resulting traversal algorithm was not the same as DFS.)
With a little care it is possible to make BFS and DFS look almost the same as each other
(as similar as, say, Prim's and Dijkstra's algorithms are to each other):
bfs(G)
{
    list L = empty
    tree T = empty
    choose a starting vertex x
    search(x)
    while(L nonempty)
    {
        remove edge (v,w) from start of L
        if w not yet visited
        {
            add (v,w) to T
            search(w)
        }
    }
}
dfs(G)
{
    list L = empty
    tree T = empty
    choose a starting vertex x
    search(x)
    while(L nonempty)
    {
        remove edge (v,w) from end of L
        if w not yet visited
        {
            add (v,w) to T
            search(w)
        }
    }
}
search(vertex v)
{
    visit(v);
    for each edge (v,w)
        add edge (v,w) to end of L
}
Both of these search algorithms now keep a list of edges to explore; the only
difference between the two is that, while both algorithms add items to the end of L,
BFS removes them from the beginning, which results in maintaining the list as a
queue, while DFS removes them from the end, maintaining the list as a stack.
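A sketch of this unified view (an illustration, using an as_queue flag that is not in the
pseudocode) makes the single point of difference visible:

    from collections import deque

    def traverse(graph, x, as_queue):
        # Generic search: as_queue=True behaves as BFS, as_queue=False as DFS.
        L = deque()                  # list of edges still to explore
        T = []                       # tree edges
        visited = {x}

        def search(v):
            # "visit v" would go here
            for w in graph[v]:
                L.append((v, w))     # both versions add edges to the end of L

        search(x)
        while L:
            # BFS removes from the start (queue), DFS from the end (stack):
            v, w = L.popleft() if as_queue else L.pop()
            if w not in visited:
                visited.add(w)
                T.append((v, w))     # add (v,w) to T
                search(w)
        return T

    G = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
    print(traverse(G, 1, as_queue=True))    # BFS tree edges: [(1, 2), (1, 3), (2, 4)]
    print(traverse(G, 1, as_queue=False))   # DFS tree edges: [(1, 3), (3, 4), (4, 2)]

With as_queue=True the edge list behaves as a FIFO queue and the traversal is BFS; with
as_queue=False it behaves as a LIFO stack and the traversal is a depth first search, though it
may visit children in a different order than the recursive version.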
BFS and DFS in directed graphs
Although we have discussed them for undirected graphs, the same search routines
work essentially unmodified for directed graphs. The only difference is that when
exploring a vertex v, we only want to look at edges (v,w) going out of v; we ignore the
other edges coming into v.
For directed graphs, too, we can prove nice properties of the BFS and DFS tree that help to
classify the edges of the graph. For BFS in directed graphs, each edge of the graph either
connects two vertices at the same level, goes down exactly one level, or goes up any
number of levels. For DFS, each edge either connects an ancestor to a descendant, a
descendant to an ancestor, or one node to a node in a previously visited subtree. It is not
possible to get "forward edges" connecting a node to a subtree visited later than that node.
We'll use this property next time to test if a directed graph is strongly connected (every
vertex can reach every other one).
Heuristic Search
Estimating how far a state is from the goal, as one might in a driving or route-finding
domain, is the basic idea behind most "heuristic search" techniques in artificial
intelligence. If you can somehow estimate which states
are "closest" to a goal state, using some numerical measure of closeness, this estimate
can be used to decide which states to consider applying operators to next.
So the general idea of a heuristic search can be illustrated with this modified version of the
general search algorithm:
states = make_empty_bag()
add_to_bag(initial_state, states)
loop:
    if is_empty(states) FAIL
    else:
        state = remove_from_bag(states)
        if achieves_goal(state) SUCCEED
        for each op in operators:
            if op_applies(op, state)
                new_state = apply_op(op, state)
                add_to_bag(new_state, states)
        sort_states_by_heuristic_value(states)
After the new states are added to the bag, the contents of the bag are sorted so that
those states for which the heuristic estimate of the distance to the goal is lowest are
near the front. The state with the lowest estimated distance will be at the front and will
be the next state removed from the bag and have operators applied to it.
In fact, sorting the entire contents of the bag is not always necessary. One could also
implement the remove_from_bag operator to find the state whose heuristic estimate is lowest
and remove that one from the bag.
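In practice the sorted bag is usually a priority queue. The sketch below is a greedy
best-first search under assumed helper names (successors, goal_test, and heuristic are
hypothetical, and a seen set is added to avoid re-inserting states, which the pseudocode
above does not do):

    import heapq

    def best_first_search(initial_state, successors, goal_test, heuristic):
        # Greedy best-first search: always expand the state whose heuristic
        # estimate of the distance to the goal is lowest.
        counter = 0                    # tie-breaker so the heap never compares states
        bag = [(heuristic(initial_state), counter, initial_state)]
        seen = {initial_state}         # avoids re-adding states (not in the pseudocode)
        while bag:
            _, _, state = heapq.heappop(bag)      # lowest estimate first
            if goal_test(state):
                return state                      # SUCCEED
            for new_state in successors(state):   # results of all applicable operators
                if new_state not in seen:
                    seen.add(new_state)
                    counter += 1
                    heapq.heappush(bag, (heuristic(new_state), counter, new_state))
        return None                               # FAIL

    # Toy usage: walk along the integer line from 0 to 20.
    goal = 20
    print(best_first_search(0,
                            successors=lambda s: [s - 1, s + 1],
                            goal_test=lambda s: s == goal,
                            heuristic=lambda s: abs(goal - s)))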