Concurrent Turing Machines as Rewrite Theories

Berndt Farwer, Manfred Kudlek, Heiko Rölke
University of Hamburg
Faculty of Mathematics, Informatics, and Natural Sciences
Dept. of Informatics, Vogt-Kölln-Str. 30, 22527 Hamburg, Germany
{farwer,kudlek,roelke}@informatik.uni-hamburg.de
Abstract. We define Concurrent Turing Machines (CTMs) as Turing machines with Petri nets as finite control. This leads to machines with arbitrarily many tape heads, thus subsuming any class of (constant) k-head Turing machines.
Concurrent Turing machines correspond to a class of multiset rewriting systems. The definition of CTMs as a rewrite theory avoids the need for encoding multisets as words and using an equivalence relation on configurations. Multiset rewriting lends itself to be used in rewriting systems and tools like the rewriting engine Maude. For the rewriting system, a configuration is given by a varying sequence of strings and multisets.
1 Introduction
We introduce a Turing machine model with concurrency and investigate the
implications that this extension of the traditional sequential formalism has on
issues like computability, decidability, and complexity.
In summary, the basic ideas for concurrent Turing machines are:
– A Petri net replaces the finite automaton as finite control
– Each token in the Petri net is associated with an individual tape head
– Tape heads can only be distinguished if they are associated with tokens on
different places or have different positions on the tape
– For the execution of a transition with multiple input tokens, thus involving
more than one head, the heads have to occupy the same position on the tape.
Other Turing machine formalisms that employ multiple heads or multiple
controls are known in the literature (e.g. [2], [5], [6], [7], [8]) but none of them
use a concurrent control part. It is known that multi-head TMs are as powerful as standard TMs.
In the conclusion we give some hints on further extensions of the Concurrent
Turing Machine model as well as similar extensions of other automata, such as
the pushdown automaton.
1.1 Overview
After giving some general remarks on the notation used in this paper, we introduce in Section 2 the proposed model of concurrent Turing machines and
give several acceptance conditions in Section 2.1. Section 3 defines complexity
measures for CTMs and Section 4 gives an example. In Section 5 we introduce
CTMs as rewrite theories and give some examples using the Maude rewriting
engine. We address tool support and conclude the paper with some remarks on
further extensions.
1.2 Notation
For any set A, denote the set of multisets over A by MS(A). Each f ∈ MS(A) is given as a mapping f : A → N, assigning to each element of A its multiplicity.
A tuple N = (P, T, F) is called a Petri net iff the set of places P and the set of transitions T are disjoint, i.e. P ∩ T = ∅, and the flow relation satisfies F ⊆ (P × T) ∪ (T × P). Some commonly used notations for Petri nets are •y := {x | (x, y) ∈ F} for the pre-set and y• := {x | (y, x) ∈ F} for the post-set of a net element y ∈ P ∪ T.
A P/T-net structure is a tuple N = (P, T, pre, post), such that P is a finite
set of places, T is a finite set of transitions, with P ∩ T = ∅, and pre, post :
T → MS(P) are multiset mappings. A marked P/T net is a P/T-net structure
(P, T, pre, post) together with an initial marking m0 ∈ MS(P ). The term P/T
net is used for both the unmarked and the marked case and the meaning will
be obvious from the context. The flow relation F is defined by F := {(p, t) |
pre(t)(p) > 0} ∪ {(t, p) | post(t)(p) > 0}. Alternatively, a P/T-net structure can be denoted by N = (P, T, W), where P and T have the same meaning as above, and W : (P × T) ∪ (T × P) → N is defined as

W(x, y) := pre(y)(x) if y ∈ T, and W(x, y) := post(x)(y) otherwise.
The set of all possible markings for a net structure N = (P, T, F ) is denoted by
MN and is defined by the set of place multisets, MN := MS(P ).
A transition t ∈ T of a P/T net N is enabled in marking m iff ∀p ∈ P : m(p) ≥ pre(t)(p) holds. The successor marking after firing t is defined by m′(p) := m(p) − pre(t)(p) + post(t)(p). Enablement of t in marking m is denoted by m −t→, firing of t is denoted by m −t→ m′. Using multiset operators, m −t→ is equivalent to m ≥ pre(t), and the successor marking is m′ := m − pre(t) + post(t).
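As a small illustration of the firing rule (our own example, not taken from the text above): let t be a transition with pre(t)(p1) = pre(t)(p2) = 1, post(t)(p3) = 2 and all other values 0, and let m be the marking with m(p1) = 2, m(p2) = 1, m(p3) = 0. Then t is enabled, since m ≥ pre(t), and firing t yields the successor marking m′ = m − pre(t) + post(t), i.e. m′(p1) = 1, m′(p2) = 0, m′(p3) = 2.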
For a finite set of symbols Σ, a symbol x ∈ Σ, and a word w ∈ Σ ∗ , let |w|
be the length of w and let |w|x denote the number of occurrences of x in w.
2 Concurrent Turing Machines
We enrich traditional Turing machines, which were introduced to provide a mathematical notion of computability and complexity of algorithms, by a concurrent finite control. Instead of the finite state transition diagram used in sequential Turing machines, our concurrent version of the Turing machine uses a Petri net as its control. The marking of the Petri net represents the heads that are currently available for the computation, and just as Petri nets can consume or produce tokens, our Turing machine control has the ability to consume or to produce heads. Before going into some detail concerning the actual computation steps that such a machine can undertake, we give the formal definition of a concurrent Turing machine.
Definition 1 (Concurrent Turing Machine). A Concurrent Turing Machine
(CTM) is a tuple A = (P, T, W, Σ, Γ, G, m0 , F) where
– P is a finite set of places,
– T is a finite set of transitions with P ∩ T = ∅,
– W : (P × T) ∪ (T × P) → N is the arc weight function,
– Σ is a finite set of input symbols,
– Γ is a finite set of tape symbols with P ∩ Γ = ∅, Σ ⊂ Γ and # ∈ Γ \ Σ,
– G : T → Γ × Γ × {R, L, H} is the transition guard function,
– m0 ∈ MS(P) is the initial marking,
– F ∈ 2^(M_A) = 2^(MS(P)) is the acceptance condition (a set of final markings).
Remark 1. For the sake of simplicity, we require that every transition has at least one input place, i.e. ∀t ∈ T. •t ≠ ∅.
In analogy to sequential Turing machines we define configurations (sometimes called instantaneous description, ID) as words over the combined sets of
tape symbols and places. A configuration describes the tape inscription and the
positions of all heads that are present at some instant. The computation of a
CTM is then defined on the set of its configurations.
Definition 2 (configuration). A configuration of a CTM is given by w ∈ (Γ ∪ P)*. Every sub-word w′ = vxw′′ of w with v ∈ P*, x ∈ Γ, and w′′ ∈ Γ* describes a portion of the tape, such that for all q ∈ P there are |v|q heads positioned at the tape cell containing x.
The set of configurations of CTM A is denoted conf(A).
Remark 2. Configurations can contain an arbitrary number of tape head positions. A configuration gives a unique instantaneous description of a CTM. The
converse, however, is not true, since there is no defined order for the head positions in the configuration, resulting in many configurations referring to the same
instantaneous description of the CTM.
The remainder of this section is devoted to the definition of the actions that a
CTM can perform. Transitions are used to describe actions on the tape as well as
state changes in the finite control. Since the control is a Petri net, a state is in fact
a marking of the net. Each token is associated with a head, as discussed above.
A Petri net transition can be executed (or fired) if all input places are sufficiently marked; its execution removes the input tokens and produces output tokens on the output places with the multiplicities given by the arc weight function. Here, we additionally require that the tape heads associated with all input tokens of a transition to be executed point to the same tape position.
Definition 3 (computation step relation). Firing a transition t ∈ T causes a state change in the control and a modification of the tape. We write c1 −t→ c2 for c1, c2 ∈ conf(A) to denote the transition from configuration c1 to configuration c2. The following conditions have to hold for c1 and c2, where η ∈ {1, . . . , 5}:

∀p ∈ P. ∀x, y, z ∈ Γ. ∀v, v′, v̄, v̄′ ∈ P* :
|v|p ≥ W(p, t) ∧ |v′|p = |v|p − W(p, t) ∧ |v̄′|p = |v̄|p + W(t, p) =⇒ (η)

c1 = w1 v x v̄ w2 −t→ w1 v′ y v̄′ w2 = c2  ⇐⇒  G(t) = (x, y, R)   (1)
c1 = w1 v̄ z v x w2 −t→ w1 v̄′ z v′ y w2 = c2  ⇐⇒  G(t) = (x, y, L)   (2)

In the above cases of left or right movement, w1 ∈ (Γ ∪ P)* Γ and w2 ∈ Γ(Γ ∪ P)* ∪ {ε}.
If no movement is required by the guard, w1 ∈ (Γ ∪ P)* Γ ∪ {ε} and w2 ∈ Γ(Γ ∪ P)* ∪ {ε}:

c1 = w1 v̄ v x w2 −t→ w1 v̄′ v′ y w2 = c2  ⇐⇒  G(t) = (x, y, H)   (3)

For the special cases of movement beyond the current left or right border of the configuration the following conditions are required to hold:

c1 = w1 v x v̄ −t→ w1 v′ y v̄′ # = c2  ⇐⇒  G(t) = (x, y, R)   (4)
c1 = v̄ v x w2 −t→ v̄′ # v′ y w2 = c2  ⇐⇒  G(t) = (x, y, L)   (5)
The following example illustrates the firing rule of CTMs.
Example 1. Figure 1 shows a configuration of a CTM in Fig. 1(a) and its only successor configuration (Fig. 1(b)), reached by firing transition t. The example illustrates that a transition involving more than one input place requires the heads of all tokens used in the firing to point to the same tape position. Similarly, all tokens resulting from such a firing point to the same tape position, as determined by the transition's guard. This step can also be written in terms of configurations: p1 p2 ababab ⊢ A p2 p3 babab, or equivalently as any of the three terms p2 p1 ababab ⊢ A p2 p3 babab, p2 p1 ababab ⊢ A p3 p2 babab, p1 p2 ababab ⊢ A p3 p2 babab.
[Figure 1: the control net (places p1, p2, p3 and transition t with guard a,A,R) together with the tape inscription. (a) Configuration before firing t. (b) Configuration after firing t.]
Fig. 1. State change in a simple CTM
In order not to have to enumerate all possible equivalent representations of
CTM configurations, we introduce an equivalence relation on configurations and
subsequently work modulo this equivalence.
For a CTM A we denote by words(m) the set of words composed of the symbols in the multiset m, respecting the multiplicities, i.e. words(m) := {w | w ∈ P* ∧ ∀p ∈ P : m(p) = |w|p}.
Definition 4. Let A = (P, T, W, Σ, Γ, G, m0, F) be a Concurrent Turing machine. The initial configurations of A for an input word w ∈ Σ* are given by words(m0)w.
Definition 5. Let Σ, Γ be finite sets of symbols and let m ∈ MS(Γ). Define mix(m, Σ) := {z ∈ (Σ ∪ Γ)* | ∀a ∈ Γ : m(a) = |z|a}.
In the remainder of this paper we use m ∈ MS(A) to denote an arbitrary word from the set {w ∈ A* | ∀a ∈ A : m(a) = |w|a}. To clarify when a word is used in the sense of a multiset, we write w̄ for the multiset induced by the word w.
Definition 6. Let u, v ∈ (Γ ∪ P)*, then u ≡ v iff
∃k ∈ N : u = u0 p1 u1 . . . pk uk ∧ v ∈ u0 words(p̄1) u1 . . . words(p̄k) uk,
where u0, . . . , uk ∈ Γ* and p1, . . . , pk ∈ P*. Denote by [w]≡ the equivalence class of w, i.e. [w]≡ := {v | v ≡ w}.
Whenever no confusion is to be expected, we use [w] instead of [w]≡ .
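As a small illustration, using the initial configuration of Example 1: [p1 p2 ababab]≡ = {p1 p2 ababab, p2 p1 ababab}, since words(p̄) = {p1 p2, p2 p1} for the sub-word p = p1 p2 of place symbols.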
2.1 Acceptance Conditions
Since the finite control of traditional Turing machines has been replaced by a Petri net, we have to adjust the acceptance condition for CTMs to accommodate the new model. The theory of Petri nets suggests different possibilities for defining the end of a computation. The most basic – and drastic – end to a Petri net computation is the arrival at a deadlock, i.e. a marking that enables no transition in the net. This leads to deadlock acceptance.
Definition 7. Let A = (P, T, W, Σ, Γ, G, m0, F) be a Concurrent Turing machine. A configuration c of A is called a deadlock iff ¬∃t ∈ T. ∃c′ ∈ conf(A) : c −t→ c′.
Definition 8 (deadlock acceptance). Let A = (P, T, W, Σ, Γ, G, m0 , F) be
a Concurrent Turing machine. An input word w ∈ Σ ∗ is accepted with deadlock
acceptance (D-acceptance) by A iff there exists a finite computation of A from
an initial configuration leading to a deadlock.
An alternative to deadlock acceptance is the definition of a set of markings
regarded as goals and called final markings. This is closer to the traditional acceptance of Turing machines and leads to the following definition of final marking
acceptance.
Definition 9 (final marking acceptance). Let A = (P, T, W, Σ, Γ, G, m0, F) be a Concurrent Turing machine. An input word w ∈ Σ* is accepted by A with final-marking acceptance (FM-acceptance) iff there exists a finite computation of A from an initial configuration leading to a final marking from the decidable set F, i.e. ∃v ∈ T*. ∃m ∈ MA : w1 −v→ w2, such that w1 ∈ words(m0)w and w2 ∈ mix(F, Γ) for some F ∈ F.
Remark 3. If in the preceding definition the set of final markings is allowed to be
undecidable, we can use it as a kind of oracle, leading to a model more powerful
than sequential Turing machines.
Combining Definitions 8 and 9 we get the following acceptance condition.
Definition 10 (final marking deadlock acceptance). Let A = (P, T, W, Σ, Γ, G, m0, F) be a Concurrent Turing machine. An input word w ∈ Σ* is accepted by A with final-marking deadlock acceptance (FMD-acceptance) iff there exists a finite computation of A from an initial configuration leading to a final marking from F, i.e. ∃v ∈ T*. ∃m ∈ MA : w1 −v→ w2, such that w1 ∈ words(m0)w, w2 ∈ mix(F, Γ) for some F ∈ F, and w2 is a deadlock.
3 Complexity Measures for CTMs
We consider space and time as complexity measures for CTMs. For space, we can
use the standard definitions from the theory of sequential Turing machines, i.e.
the maximum number of tape cells used in an accepting computation for input
w is denoted space(w). For time we distinguish parallel time (time(w)) and so-called work complexity (work(w)). The latter is given by the minimum number
of transitions used for an accepting computation. The former is derived from
exploiting the potential of nondeterminism, i.e. from all accepting computations
we take the shortest sequence of parallel computation steps.
Remark 4. Note that this does not coincide with adopting the maximal firing rule, since it is possible that a shorter computation arises from a non-maximal concurrent step by allowing more parallelism later in the computation.
[Figure 2: a CTM whose control net has places p1 (start), p2, p3, and p4 (stop) and transitions t1: (p1; a,a,R; p1, p2), t2: (p2; B,B,R; p2), t3: (p2; a,a,R; p2), t4: (p1; b,b,H; p3), t5: (p2, p3; b,B,R; p3), t6: (p3; #,#,H; p4).]
Fig. 2. CTM accepting a^n b^n or a^n b^m (n > m) according to the acceptance condition
Work complexity is defined as work(w) := min{ |ϱ| | ϱ ∈ T* and m0 w −ϱ→ v, where v is an accepting configuration }.
In addition to the complexity measures already introduced, we have to take
into account the maximal number of tape heads, i.e. the overall cardinality of
the controlling Petri net’s marking. We call this head complexity, denoted by
head(w) for input w. The head complexity is essential for a realistic estimation
of complexity, since the Petri net control part can otherwise be abused as an
additional storage device.
4 An Example
In this section we study a simple example of a CTM and its accepted languages
for the different acceptance conditions and estimate the complexity involved.
Example 2. P , T , W , and G are derived from the graphical representation in
Figure 2. Σ = {a, b} and Γ = {a, b, B, #}. m0 = p1. Various acceptance conditions are possible and lead to different accepted languages.
General Idea The general idea of the machine in Figure 2 is the following: The machine reads the prefix of the input word that consists of characters “a” and duplicates the head for each “a”. These duplicates move forward on the tape until they find a “b”, where they wait. The original head then moves forward on the tape and collects the waiting heads one after another. Only if the number of “a”s is not smaller than (or, depending on the acceptance condition, equal to) the number of “b”s are the waiting heads collected and the stop place marked.
Detailed Description Initially, place p1 is marked with one head pointing to the beginning of the input word. Transition t1 is enabled if and only if the input word starts with an “a”; otherwise the machine is in a deadlock. Execution of t1 performs a step to the right on the tape while the head is duplicated, i.e. it is returned to p1 and a copy is placed on p2. This proceeds until no “a” can be read from the tape; then a “b” is read (transition t4) and the head halts, now residing on p3. The
duplicated heads on p2 move to the right over the tape, reading over any “a” or “B”, until they find a “b”. Reading this “b” is only possible if the original head on p3 is at the same position. Then t5 is enabled; while moving to the right, the head from p2 is deleted and the symbol “b” is replaced by “B”. If the original head on p3 reaches the end of the input word, t6 is enabled. Firing of t6 marks the final place p4.
Accepted Language(s) The machine accepts different languages depending on
the kind of acceptance condition chosen.
– Deadlock Acceptance: The machine accepts every finite input word.
– Final Marking Acceptance: For F = {p4} (where p4 denotes the multiset containing exactly one instance of p4) the machine accepts exactly the words of the form a^n b^n. For F = {p2^x p4^y | x, y ∈ N} all words of the form a^n b^m with n ≥ m are accepted. This also holds for final-marking deadlock acceptance.
For each “a” read from the tape a duplicate of the head from p1 is placed on p2. After a “b” has been read, no further “a” can be read. Place p2 can only be emptied if at least as many symbols “b” are read as symbols “a” have been read before; otherwise at least one token remains on p2. If the input word contains more symbols “b” than “a”, the machine ends up in a deadlock with p3 marked. Only if the end of the input word is reached via occurrences of t5 can transition t6 be executed to reach the required final marking.
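As a rough, informal estimate of the complexity measures of Section 3 for this machine (our own back-of-the-envelope figures, not results from the paper), consider an input a^n b^n accepted with final-marking acceptance: head(a^n b^n) = n + 1, since one duplicate head is created per “a” in addition to the original head; space(a^n b^n) = 2n + O(1); time(a^n b^n) ∈ Θ(n), since the firings of t1 and of t5 are necessarily sequential while the t2/t3 movements of different heads can overlap; and work(a^n b^n) ∈ Θ(n²), since each of the n duplicate heads travels O(n) cells via t2/t3 before being collected by t5.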
5 Concurrent Turing Machines as Rewrite Rules
Concurrent Turing machines correspond to a class of multiset rewriting systems.
The definition is similar to that of CTMs in this paper but avoids the need for
encoding multisets as words and using an equivalence relation on configurations.
For lack of space we do not give the precise definitions here. Multiset rewriting
lends itself to be used in rewriting systems and tools like the rewriting engine
Maude. For the rewriting system, a configuration is given by a varying sequence
of strings and multisets. The rewrite rules handle the transitions in a similar
way as defined in Section 2.
We have implemented CTMs as rewriting systems in Maude, which is sketched towards the end of this section in Section 5.2. First we give some formal definitions.
5.1 A Formal Theory
A configuration is given by c = w0 ν1 w1 · · · νr wr with r > 0, w0 ∈ Γ*, wi ∈ Γ+ (1 ≤ i ≤ r), and νi ∈ P⊕, νi ≠ 0 (1 ≤ i ≤ r).
A P/T net N = (P, T, µ0) is a triple with a finite set P of places, a finite set T ⊆ (P⊕ \ {0}) × P⊕ of transitions, and an initial marking µ0 ∈ P⊕. The elements of T can also be considered multiset rewriting rules. A rule t = (α, β) ∈ T can be applied (a transition t can be fired) on µ ∈ P⊕ if α ⊑ µ, and the result is µ′ = (µ ⊖ α) ⊕ β.
The marking of N is µ = ν1 ⊕ · · · ⊕ νr.
A rule (transition) t̄ of a CTM is given by t̄ = (t, x, x′, d), t = (α, β) ∈ T, x, x′ ∈ Γ, and d ∈ {L, M, R}.
A rule (transition) t̄ = (t, x, x′, d), t = (α, β) ∈ T, can be applied on a configuration c = w0 ν1 w1 · · · νr wr with r > 0 if ∃i ∈ {1, · · · , r} : wi = x w̃i ∧ α ⊑ νi.
The result of one (basic) step is given by (ν = νi for some i, b the blank symbol):
1. d = M : νx ⊢ ((ν ⊖ α) ⊕ β)x′
2. d = L : yνx ⊢ βy(ν ⊖ α)x′,   νx ⊢ βb(ν ⊖ α)x′ (left end)
3. d = R : νxy ⊢ (ν ⊖ α)x′βy,   νx ⊢ (ν ⊖ α)x′βb (right end)
where any juxtaposition ρσ of two multisets has to be replaced by ρ ⊕ σ.
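For illustration, the step of Example 1 reads as follows in this notation (assuming, as the depicted step suggests, that the transition there has α = p1 ⊕ p2 and β = p2 ⊕ p3): with ν = p1 ⊕ p2, x = a, x′ = A, y = b and d = R, the basic step rewrites the prefix (p1 ⊕ p2)ab to A(p2 ⊕ p3)b, reproducing the step p1 p2 ababab ⊢ A p2 p3 babab of Example 1 (note that ν ⊖ α = 0).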
Several modes of rule application are possible:
a. one rule at one step (step semantics)
b. arbitrarily many possible rules concurrently at one step
c. maximally many possible rules concurrently at one step
In b and c it may happen that more than one rule has to be applied at the same place. Then different modes of writing are possible, like arbitrary, common, or priority for rules or symbols.
Another version is that the marking µ of the underlying vector addition system (net) is independent of ν1 ⊕ · · · ⊕ νr. In this case a rule is applied in the same way as above.
Yet another possibility is that a rule works in parallel on several places of the tape. Formally, a rule t = (α, β) ∈ T can be applied on a configuration c = w0 ν1 w1 · · · νr wr if

∃m ∃j(1) · · · ∃j(m) ∃αj(1) · · · αj(m) ∃βj(1) · · · βj(m) :
m ≤ r ∧ α = αj(1) ⊕ · · · ⊕ αj(m) ∧ β = βj(1) ⊕ · · · ⊕ βj(m) ∧ αj(1) ⊑ νj(1) ∧ · · · ∧ αj(m) ⊑ νj(m).
A TM rule t̄ is given by a tuple t̄ = (t, f, d) where f : Σ → Σ and d : Σ → {L, M, R} are functions depending on t̄. If wj(s) = xj(s) w̃j(s), then in a basic step xj(s) is replaced by f(xj(s)), according to one of the different modes, and the heads are moved by d(xj(s)).
If more than one head is on the same place, several possibilities for writing a new symbol arise:
a. arbitrary
b. common
c. priority among heads
d. priority among symbols
5.2 Basics of modelling CTMs with Maude
Listing 1.1 shows a basic Maude theory for the CTM in Figure 2. It does not make use of any of the alternative modes discussed in Section 5.1 but rather models the basic behaviour introduced in Section 2.
mod CTM is
  sorts Token Marking .
  subsort Token < Marking .
  vars T T1 : Token .
  vars M M1 M2 M3 M4 : Marking .
  op m-nil : -> Marking .
  op _,_ : Marking Marking -> Marking [assoc comm id: m-nil] .

  sorts Symbol .
  vars X X1 X2 X3 : Symbol .
  sort PairMaSy .
  sort PairMaSyList .
  subsort PairMaSy < PairMaSyList .
  op p-nil : -> PairMaSyList .
  op __ : Marking Symbol -> PairMaSy .
  op _;_ : PairMaSyList PairMaSyList -> PairMaSyList [assoc id: p-nil] .

  ops p1 p2 p3 p4 : -> Token .
  ops a b B # c : -> Symbol .
  ops $ : -> PairMaSy .

  var x : Symbol .
  var m : Marking .
  var initC : PairMaSyList .

  *** *******************************
  *** The "Finite Control" net    ***
  *** *******************************
  *** t1 : (p1 ; a ; a ; R ; p1, p2)
  rl [t1]  : (M, p1 a) ; (M1 x) => (M a) ; (p1, p2, M1 x) .
  rl [t1$] : (M, p1 a) ; $ => (M a) ; (p1, p2 #) ; $ .
  *** t2 : (p2 ; B ; B ; R ; p2)
  rl [t2]  : (M, p2 B) ; (M1 x) => (M B) ; (p2, M1 x) .
  rl [t2$] : (M, p2 B) ; $ => (M B) ; (p2 #) ; $ .
  *** t3 : (p2 ; a ; a ; R ; p2)
  rl [t3]  : (M, p2 a) ; (M1 x) => (M a) ; (p2, M1 x) .
  rl [t3$] : (M, p2 a) ; $ => (M a) ; (p2 #) ; $ .
  *** t4 : (p1 ; b ; b ; H ; p3)
  rl [t4]  : (M, p1 b) => (M, p3 b) .
  *** t5 : (p2, p3 ; b ; B ; R ; p3)
  rl [t5]  : (M, p2, p3 b) ; (M1 x) => (M B) ; (p3, M1 x) .
  rl [t5$] : (M, p2, p3 b) ; $ => (M B) ; (p3 #) ; $ .
  *** t6 : (p3 ; # ; # ; H ; p4)
  rl [t6]  : (M, p3 #) => (M, p4 #) .
endm
Listing 1.1. CTM as Maude rewriting theory
Acceptance conditions are not modelled in Listing 1.1 but could easily be
added. A transcript of an ‘accepting’ run on the input word ab is shown in
Listing 1.2. The initial configuration is given by $ ; (p1 a) ; (m-nil b) ; $
where $ is used as a delimiter of the portion of the tape that has been used. The
tape cells are separated by semi-colons and each cell contains a pair consisting
of a multiset of places (representing tape heads) and a symbol.
Maude> rew $ ; (p1 a) ; (m-nil b) ; $ .
rewrite in CTM : $ ; (p1 a) ; (m-nil b) ; $ .
rewrites: 4 in 0ms cpu (0ms real) (~ rewrites/second)
result PairMaSyList: $ ; (m-nil a) ; (m-nil B) ; (p4 #) ; $
Listing 1.2. Transcript of Maude's rewriting of ab
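Acceptance conditions could, for instance, be checked with Maude's search command. The following is only a sketch (our own illustration, not part of the original module): it searches for reachable deadlocked configurations in which some cell is marked with p4, which corresponds to final-marking deadlock acceptance with F = {p2^x p4^y | x, y ∈ N}. The variables C1, C2, M and X are declared on the fly.

Maude> search in CTM : $ ; (p1 a) ; (m-nil b) ; $ =>!
         C1:PairMaSyList ; ((M:Marking, p4) X:Symbol) ; C2:PairMaSyList .

For the input ab this search should report exactly one solution, namely the final configuration already shown in Listing 1.2, together with the matching substitution.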
Apart from simulating CTMs, Maude can also be used to model check some
properties of a CTM. As usual, one has to make sure the Petri net used in the
control part of the CTM is bounded, but in this case the potentially infinite
tape inscription is a second factor that can lead to an infinite state space, thus
prohibiting the use of Maude’s built-in model checker.
6 Simulation Using Renew
The Reference Net Workshop Renew [3] is an adaptable modelling and simulation tool [4] for different kinds of (high-level) Petri nets, especially reference nets. Renew is extensible by plug-ins. For the different kinds of CTMs presented in this paper we have implemented a prototype plug-in that makes it possible to model and simulate them. Thus, the presented examples can be validated by testing. The plug-in itself is easily extensible to other kinds of CTMs or other types of automata.
7 Conclusion and Outlook
We have defined a concurrent version of the Turing machine that uses a Petri net as its finite control and supports several acceptance conditions. In another paper [1] we
have shown that CTMs are equivalent to sequential TMs with respect to the
acceptable languages but may be superior in terms of complexity. Furthermore,
CTMs allow the dynamic creation of heads, thus subsuming all classes of k-head
Turing machines.
It should be noted that transition fusion – even when reading and writing of words instead of single symbols is allowed – cannot be permitted in the control, since this would imply the need for non-elementary head movements and would possibly destroy alternative computations. Hence, a theory of abstraction and refinement, constraining that known for Petri nets, has to be introduced.
The idea of CTMs can also be applied to PDAs, which is the topic of a
forthcoming paper.
References
1. B. Farwer, M. Kudlek, and H. Rölke. Petri-net-controlled machine models. Manuscript, April 2006.
2. A. Hemmerling. Systeme von Turing-Automaten und Zellularräume auf rahmbaren Pseudomustermengen. Journal of Information Processing and Cybernetics EIK, 15:47–72, 1979.
3. O. Kummer, F. Wienberg, and M. Duvigneau. Renew – the Reference Net Workshop. Available at: http://www.renew.de/, June 2004. Release 2.0.
4. O. Kummer, F. Wienberg, M. Duvigneau, J. Schumacher, M. Köhler, D. Moldt, H. Rölke, and R. Valk. An extensible editor and simulation engine for Petri nets: Renew. In J. Cortadella and W. Reisig, editors, Applications and Theory of Petri Nets 2004, 25th International Conference, ICATPN 2004, Bologna, Italy, June 2004, Proceedings, volume 3099 of Lecture Notes in Computer Science, pages 484–493, Heidelberg, June 2004. Springer.
5. K. Wagner and G. Wechsung. Computational Complexity. D. Reidel Publishing Company, 1986.
6. J. Wiedermann. Parallel Turing machines. Technical Report RUU-CS-84-11, University of Utrecht, 1984.
7. J. Wiedermann. Parallel machine models: How they are and where they are going. In SOFSEM '95: Theory and Practice of Informatics, volume 1012 of Lecture Notes in Computer Science, pages 1–30. Springer-Verlag, 1995.
8. T. Worsch. On parallel Turing machines with multi-head control units. Technical Report 11/96, Department of Informatics, University of Karlsruhe, 1996.