University of Warsaw
Faculty of Mathematics, Informatics and Mechanics
Institute of Informatics

Marek Grabowski

Complexity of simulation preorder for a product of finite automata

Term paper under the supervision of Ph.D. Sławomir Lasota

April 1, 2010

Complexity of simulation preorder for a product of finite automata

Marek Grabowski, Sławomir Lasota
[email protected], [email protected]
Institute of Informatics, Faculty of Mathematics, Informatics and Mechanics, University of Warsaw

Abstract. We consider the complexity of the question whether a finite-state system simulates a product of finite-state systems. As the main technical result we show EXPTIME-hardness of that problem. The value of our result becomes apparent when related to the complexity of language inclusion, known to be in PSPACE for the case we study. Our hardness result thus provides a counterexample to the widely accepted conjecture that branching-time preorders tend to be more computationally tractable than linear-time ones. To the best of our knowledge, this is the first such counterexample.

1 Introduction

An important verification problem is to check, for two given finite-state systems, whether all behaviours of one of them are also possible in the other. As a formalization of this concept, one usually studies various semantic preorder relations. In [6] all relevant relations were divided into linear-time and branching-time classes. In this paper we investigate one representative of each of the two classes: language (trace) inclusion and simulation preorder, respectively. It is widely believed that branching-time relations are computationally more feasible than linear-time ones. In particular, for finite-state systems, language inclusion is PSPACE-complete [4] while simulation preorder is in PTIME. These complexities are valid under the assumption that the state spaces of the two systems are given explicitly as input.
On the other hand, it is very reasonable to consider systems described compositionally. Typically, only the local finite-state components are given explicitly, and the whole (global) system is understood as a kind of product of the components. The global system is finite-state as well, but its size may be exponential with respect to the size of its description, i.e., the sum of the sizes of the local components. Thus it is not surprising that the complexities of language inclusion and simulation preorder lift as much as to EXPSPACE-complete [5] and EXPTIME-complete (an even easier case was shown EXPTIME-hard in [3]), respectively, if the input systems are products of finite-state systems. This confirms the supremacy of branching-time preorders over linear-time ones. The main discovery of this paper is that the above-mentioned supremacy is not always the case.

Throughout this paper we assume that the reader is familiar with the basics of automata and complexity theory. Tuples of states or letters will be denoted by s̄, t̄ and so on. Unless stated otherwise, t_i will mean the i-th component of the tuple t̄. By the asynchronous product of n automata A_i we understand the nondeterministic automaton A = (Q, Σ, q⁰, δ) such that:

– the set of states is Q = Q_1 × Q_2 × ⋯ × Q_n, where Q_i is the set of states of automaton A_i,
– the alphabet is Σ = ⋃_{i=1,…,n} Σ_i, where Σ_i is the alphabet of automaton A_i,
– the transition relation is δ ⊆ Q × Σ × Q, where a transition from s̄ to s̄′ labelled by a, denoted s̄′ ∈ δ(s̄, a), exists iff ∃i : s′_i ∈ δ_i(s_i, a) and ∀ j ≠ i : s_j = s′_j, where δ_i is the transition relation of automaton A_i,
– the initial state is q⁰ = (q⁰_1, q⁰_2, …, q⁰_n).

Such a product will be called a PFS. By PFS ⪯ PFS we denote the following kind of decision problem: given two product automata belonging to PFS, test whether they are related by the semantic preorder under consideration.
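The asynchronous product defined above can be sketched in code: in each step exactly one component moves on a letter of its own alphabet while all others stay put. The representation below (tuples of states, alphabets, and a transition map) is illustrative, not prescribed by the paper.

```python
from itertools import product

def async_product(components):
    """Build the asynchronous product of component automata.

    Each component is (states, alphabet, init, delta), where delta maps
    (state, letter) -> set of successor states.  A product transition on
    letter a moves exactly one component whose alphabet contains a.
    """
    states = list(product(*(c[0] for c in components)))
    sigma = set().union(*(c[1] for c in components))
    init = tuple(c[2] for c in components)

    def delta(s, a):
        succ = set()
        for i, (_, sigma_i, _, delta_i) in enumerate(components):
            if a in sigma_i:
                # component i moves; all other coordinates are unchanged
                for t in delta_i.get((s[i], a), set()):
                    succ.add(s[:i] + (t,) + s[i + 1:])
        return succ

    return states, sigma, init, delta
```

Note that when the component alphabets are disjoint, as in the constructions of Sections 3 and 4, each letter determines uniquely which component moves.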
The preorder will be instantiated either with language inclusion or with the simulation preorder. For completeness, we also consider the special cases when the system on one of the sides is not composed, i.e., is given explicitly as a finite automaton. These two special cases are denoted by FS ⪯ PFS and PFS ⪯ FS, respectively. The case FS ⪯ FS (i.e., finite automata on both sides) may be added for completeness.

The simulation relation we are going to study is defined in the standard way. Suppose we have two automata A and B over the same alphabet. Simulation is the largest relation ⪯ ⊆ Q_A × Q_B such that whenever q_A ⪯ q_B, then for every a ∈ Σ and every q′_A ∈ δ_A(q_A, a) there is q′_B ∈ δ_B(q_B, a) such that q′_A ⪯ q′_B. We say that automaton B can simulate A (denoted A ⪯ B) if q⁰_A ⪯ q⁰_B. An equivalent definition of this relation can be given in terms of the existence of a winning strategy in the simulation game. The game is played by two players, Spoiler and Duplicator. Each of them gets an automaton; call Spoiler's automaton A and Duplicator's B. The game starts in the configuration (q_A, q_B), where q_A and q_B are the initial states of automata A and B, respectively. The players take alternating turns. In a turn Spoiler goes first: he chooses a label, say a, and makes any correct transition in A labelled by the letter a. Then Duplicator has to fire a transition in automaton B labelled by the letter chosen by Spoiler. The game ends either when Spoiler is blocked and cannot make any move, in which case Duplicator wins, or when Duplicator cannot fire any transition with the correct label, in which case Spoiler wins. Additionally, the game is won by Duplicator if it is infinite. It is known that there exists a winning strategy for Duplicator if and only if A ⪯ B.

Let A be an automaton – FS or PFS – and Σ its alphabet. A run σ of A is a sequence of fired transitions starting in the initial state. By the trace language we understand the language of words t ∈ Σ* obtained from the runs of A by projecting them onto Σ.
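For explicitly given finite automata, the largest simulation can be computed by the standard polynomial-time fixpoint refinement: start from the full relation and remove pairs that violate the defining condition until nothing changes. The sketch below is a naive, unoptimised version of that schema (the names and data layout are illustrative).

```python
def largest_simulation(QA, QB, sigma, deltaA, deltaB):
    """Compute the largest simulation R ⊆ QA × QB by fixpoint refinement.

    A pair (p, q) is removed when some a-move of p cannot be matched by
    any a-move of q staying inside R.  Deltas map (state, letter) -> set.
    """
    R = {(p, q) for p in QA for q in QB}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            for a in sigma:
                for p2 in deltaA.get((p, a), set()):
                    # q needs an a-successor q2 with (p2, q2) still in R
                    if not any((p2, q2) in R
                               for q2 in deltaB.get((q, a), set())):
                        R.discard((p, q))
                        changed = True
                        break
                else:
                    continue
                break
    return R

def simulates(initA, initB, R):
    """B simulates A iff the pair of initial states is in the relation."""
    return (initA, initB) in R
```

Winning strategies in the simulation game can be read off from R: as long as the current pair is in R, Duplicator can always answer Spoiler's move by a successor pair in R.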
Our main technical result is as follows:

Theorem 1. The simulation preorder PFS ⪯ FS is EXPTIME-hard, even in the case of disjoint alphabets.

This, confronted with the PSPACE-completeness of language inclusion, gives an example where the linear-time preorder is more tractable than the branching-time one. Thus the supremacy of branching-time over linear-time appears to fail in the PFS ⪯ FS variant. This 'failure' may be explained as follows. In the FS ⪯ FS variant, language inclusion is PSPACE-complete, while simulation preorder is in PTIME. When allowing the left-hand side to be an arbitrary product from PFS, the complexity of language inclusion stays unchanged, as it essentially depends only on the size of the right-hand side automaton (cf. the discussion in Sect. 5). On the other hand, the complexity of simulation rises by one exponential, essentially due to the additional complexity of possible Spoiler strategies.

2 Definitions

A Turing machine is a theoretical model composed of a finite set Q of control states, with a special state called halt, a finite tape alphabet Γ, an infinite tape, and a transition relation δ ⊆ Q × Γ × Q × Γ × {L, R}, where L denotes a left shift and R a right shift. W.l.o.g. we assume that for each state q other than halt and every tape letter a, δ contains at least one tuple with q and a as the first two elements. The machine starts in the initial configuration s_0 = (q_0, w), where w ∈ Γ*, with the head at the beginning of the input written on the tape. Transitions of the machine will be denoted (q, a) → (q′, b, d), where q is the old state, q′ is the new state, a is the letter read from the tape, b is the letter written by the head on the tape, and d ∈ {L, R} is the direction in which the head moves. When the machine reaches a configuration (halt, w), for any w ∈ Γ*, the computation ends. If δ is a function, the Turing machine is deterministic; otherwise it is nondeterministic.
By a computation of a Turing machine we understand a sequence of fired transitions – finite or not. A deterministic Turing machine has only one possible computation, while a nondeterministic one can have many and has to choose one. The halting problem for Turing machines is the question whether, for a given Turing machine and input, there exists a finite computation, or every computation loops. If we limit ourselves to machines with working space (the number of tape cells) bounded by the size of the input, the halting problem is PSPACE-complete in both the deterministic and the nondeterministic case. Such a machine is called a linearly bounded automaton, abbreviated LBA.

To prove Theorem 1 we will need the notion of alternating Turing machines. In nondeterministic Turing machines a computation was defined as a path in the computation tree – we had to choose only one computation branch to follow from each configuration (q, w). We defined that the machine halts if there exists a choice of this path such that the computation reaches the halt state. In the alternating model we distinguish existential states – in which we choose only one computation branch – and universal states – in which we have to follow all possible computations. Thus a computation of an alternating Turing machine is not a simple path, as in 'normal' Turing machines, but a subtree of the computation tree, branching in universal nodes. We say that such a machine halts if there exists a computation which contains no infinite branch. If we limit ourselves to machines with linear memory, this question becomes EXPTIME-complete, as proved in [1]. This limited version of alternating Turing machines is called an alternating linearly bounded automaton, abbreviated ALBA.

3 PSPACE lower bound

We provide a reduction from the halting problem of deterministic LBA to the simulation problem PFS ⪯ FS.
The main idea of our proof is to make the simulation game between Spoiler and Duplicator imitate the computation of the LBA in such a way that a win by Duplicator means that the given LBA loops. Fix a deterministic LBA M = (Q_M, Γ, δ, q⁰_M) and an input word w. We construct two systems: an asynchronous product of finite-state automata A_1, A_2, …, A_n, where n = |w|, and a finite-state automaton B. A configuration (q_{A_1}, …, q_{A_n}, q_B) corresponds to the word written on the tape and a control state of M. In our construction Spoiler plays in the product of automata A = A_1 ⊗ A_2 ⊗ ⋯ ⊗ A_n, which is used to keep the word stored on the tape of M, while Duplicator plays in automaton B, which is used to keep the current control state. Each automaton A_i is responsible for keeping the letter from the i-th cell of the tape. It is possible to have a unique automaton for each cell because M is linearly bounded.

The automata A_i are almost identical. Each A_i has one state for each letter from Γ plus one special middle state:

Q_{A_i} = Γ ∪ {middle},    Q_A = ⋃_{i=1,…,n} Q_{A_i}.

Denote ∆_i = Γ × {i}. Each automaton A_i works on the alphabet ∆_i; in the following we write a_i instead of (a, i). Transitions in A_i are defined as follows: for every a ∈ Γ we add transitions labelled a_i from state a to middle and back, as shown in Figure 1. By ∆ we denote the union of all the ∆_i over the component automata of A:

∆ = ⋃_{i=1,…,n} ∆_i.

[Fig. 1. An example of one of Spoiler's automata.]

Automaton B is responsible for keeping track of the control state of M, as well as for forcing Spoiler to follow the correct computation. We have to achieve behaviour corresponding to the computation of the LBA – in particular, we must force that only the automaton responsible for the cell currently under the head can act. This can be done by introducing a special state τ.
This state has a transition to itself labelled by every letter from ∆, so if Duplicator enters it, he surely wins, as the game will be infinite. We will punish Spoiler for incorrect moves by allowing Duplicator to enter τ every time Spoiler makes one. Hence, if there exists a winning strategy for Spoiler, it certainly does not allow Duplicator to move to state τ, so it follows the computation of M.

The states of automaton B can be divided into four classes: the τ state – winning for Duplicator; the halt state, corresponding to the halt state of M – winning for Spoiler; states (q, i), representing machine M in control state q with its head over position i of the tape; and states (q, i, a), additionally specifying the letter a just read from the i-th position:

Q_B = {τ} ∪ {halt} ∪ (Q_M \ {halt}) × {1, …, n} ∪ (Q_M \ {halt}) × {1, …, n} × Γ.

Automaton B has the following transitions:

– τ --a--> τ for all a ∈ ∆,
– (q, i) --a_i--> (q, i, a) for all a_i ∈ ∆_i,
– (q, i) --b--> τ for all b ∈ ∆ \ ∆_i,
– (q, i, a) --b_i--> (q′, j) iff machine M can make the transition (q, a) → (q′, b, d), where j = i − 1 if d = L and j = i + 1 if d = R; note that for each (q, i, a) there is precisely one such b_i, as M is deterministic; call it b_{q,i,a},
– (q, i, a) --c--> τ for all c ≠ b_{q,i,a}.

There are no outgoing transitions from the halt state. A simple example of a part of automaton B is presented in Figure 2. Notice that each of the automata A_1, …, A_n, B is polynomial in size with respect to the size of M and the length of w.

[Fig. 2. An example of one part of Duplicator's automaton.]

Every state in automaton B, except halt, has a transition for every letter a ∈ ∆. This means that the only way Spoiler can win is to force Duplicator to enter the halt state. Furthermore, Spoiler has to avoid letting Duplicator move into the τ state. The automata A and B were constructed in such a way that Duplicator can enter the τ state every time Spoiler tries to 'cheat', i.e., not to follow the computation of M.
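To make the shape of the Spoiler components concrete, the automaton A_i of Figure 1 can be sketched as follows (an illustrative encoding, not part of the formal construction: letters a_i are represented as pairs (a, i) and the transition map sends (state, label) to a set of successors).

```python
def spoiler_component(i, gamma):
    """Build the i-th Spoiler component A_i for tape alphabet gamma.

    States are the tape letters plus one special 'middle' state.  For
    every letter a there are transitions a --a_i--> middle and
    middle --a_i--> a, so a full pass through A_i rewrites cell i.
    """
    states = set(gamma) | {'middle'}
    delta = {}
    for a in gamma:
        delta.setdefault((a, (a, i)), set()).add('middle')
        delta.setdefault(('middle', (a, i)), set()).add(a)
    return states, delta
```

In a letter state a, only the single label a_i is enabled, which is what lets Duplicator detect a Spoiler move in the wrong component.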
The initial configuration of A is given by the input word of M: q⁰_{A_i} = w_i, where w_i ∈ Γ is the i-th letter of the input. Set the initial state q⁰_B to (q⁰_M, 1). We will now show that Spoiler has a winning strategy in the simulation game for the automata A = A_1 ⊗ A_2 ⊗ ⋯ ⊗ A_n and B if and only if M halts.

First assume that M halts on w. In this case Spoiler follows the computation of M on w and eventually forces Duplicator into the halt state. According to our construction, Duplicator never has any choice as long as Spoiler faithfully follows the computation. Thus he cannot avoid entering the halt state by any means.

Now assume that M does not halt on w. As in the previous case, Duplicator can only passively answer Spoiler's moves, so only Spoiler can affect the course of the game. If Spoiler follows the computation of M on w, the game is infinite, so Duplicator wins. We need to show that if Spoiler tries to make a move which deviates from the computation of M, Duplicator can immediately punish him by moving into the τ state. Each step of the computation of M is represented by two consecutive phases of the simulation game. At the beginning, B is in a state (q, i) and A_i is in a state a. Spoiler moves first and has a choice: he can either choose the label a_i and move in automaton A_i, thus following the computation, or choose any other label a_j for j ≠ i and move in A_j. Notice that if he moves in some A_j with j ≠ i, then Duplicator can move to the τ state and, as a result, win. On the other hand, if Spoiler moves by a_i, then Duplicator has to fire the transition (q, i) --a_i--> (q, i, a). The second phase begins when A_i is in the middle state and B is in the state (q, i, a). Now Spoiler has the same choice again: either follow the computation correctly by choosing the label b_i, or deviate from it by moving in an automaton other than A_i or by choosing an incorrect label. Again, if he tries to cheat, Duplicator can enter the τ state.
Note that Spoiler actually has no choice of the correct move: because M is deterministic, there is only one b_i which does not let Duplicator move to the τ state. Thus Duplicator wins iff M loops, and Spoiler wins iff M halts on w. As the halting problem for LBA is PSPACE-complete, the simulation problem is PSPACE-hard as well.

4 EXPTIME lower bound

We modify the construction described above in such a way that it imitates the computation of an ALBA. To achieve this we need to introduce a way to handle alternation. First notice that the halting problem for ALBA can also be seen as a kind of game. In the halting game we want to check whether there exists a finite computation of a given M running on an input word w. One player, called machine, wants to prove that there exists a finite run – he chooses the transitions when the machine is in an existential state. The second player, called adversary, wants to disprove it by exhibiting an infinite computation – he chooses the transition to fire when the machine is in a universal state. The game ends if the computation reaches the halt state – then machine wins; otherwise the computation is infinite and adversary wins. Halting of M on w is characterized as follows: M halts on w iff machine has a winning strategy in the halting game.

For the sake of simplicity, w.l.o.g. assume that in every state q ≠ halt, for every letter from Γ, the machine has exactly two possible moves. The general case can easily be reduced to this one. If in some configuration the machine does not have any choice and has exactly one possible move, it is enough to duplicate this transition. Recall that by our definition of a Turing machine, the only state without any outgoing transitions is the halt state. M can now choose one of two identical transitions, which does not change the termination property. On the other hand, if from some state the machine has more than two transitions, we can construct a binary tree which has all those transitions as leaves.
In this case we need to add some additional dummy states, but the blow-up in size is only polynomial.

We have to add one more phase to each turn of the previous construction. Recall that in the deterministic case each step of the machine was represented by two phases: a move from a letter to middle, and from middle to a letter, in Spoiler's automaton, plus Duplicator's responses. We add a choose phase to this schema, between reading and writing a letter. In this phase one of the two possible moves will be chosen. Denote Spoiler's automaton by A and Duplicator's by B, as before. The main idea of this construction is to imitate the halting game described above. In a configuration corresponding to an existential state of M, Spoiler chooses the next move – as there are exactly two possibilities, we call them 1 and 2. In a configuration corresponding to a universal state the choice is made by Duplicator.

The modification to Spoiler's automaton is really simple: we add an additional automaton A_0 which has one state and two self-loops, labelled 1 and 2, depicted in Figure 3. Define the alphabet ∆ on which A works exactly as before, but add the labels 1 and 2 to it. Let ∆_i be as before.

[Fig. 3. The additional Spoiler automaton A_0.]

Compared to the previous section, Duplicator's automaton grows roughly twice. First of all, from each (q, i, a) state there are two possible moves instead of one. Secondly, we need to add a fifth type of state: choice states. We need two of those per state (q, i, a); we call them (q, i, a, 1) and (q, i, a, 2). The set of B's transitions is also changed.
Now they are as follows:

– τ --a--> τ for all a ∈ ∆,
– (q, i) --a_i--> (q, i, a) for all a_i ∈ ∆_i,
– (q, i) --b--> τ for all b ∈ ∆ \ ∆_i,
– (q, i, a) --1--> (q, i, a, 1) and (q, i, a) --2--> (q, i, a, 2) if q is an existential state,
– (q, i, a) --1--> (q, i, a, 1) and (q, i, a) --1--> (q, i, a, 2) if q is a universal state,
– (q, i, a) --a--> τ for all a ∈ ∆ \ {1, 2} if q is an existential state, or a ∈ ∆ \ {1} if q is a universal state,
– (q, i, a, n) --b^n_i--> (q′_n, j_n) for n ∈ {1, 2}, where the two transitions of M from (q, a) are (q, a) → (q′_1, b_1, d_1) and (q, a) → (q′_2, b_2, d_2); the value j_n is determined by i and d_n as before,
– (q, i, a, n) --c--> τ for all c ≠ b^n_i.

A fragment of automaton B is presented in Figure 4.

[Fig. 4. A fragment of Duplicator's automaton; on the left: an existential state, on the right: a universal state.]

As mentioned before, each transition of the machine is now represented by three moves: reading from the tape, represented by Spoiler's move in one of his automata; choosing the branch of the computation, done by either Spoiler or Duplicator depending on the kind of the current state; and writing on the tape together with moving the machine's head, realized as Spoiler's move from the middle state back to a letter state and Duplicator's response. Notice that if Spoiler does not want to allow Duplicator into the τ state, he can use his new automaton A_0 only when Duplicator's automaton is in a (q, i, a) state. Furthermore, if q is an existential state, Spoiler can choose either of the two possible transitions, thus forcing Duplicator into a specific choice state; but if q is a universal state, then Spoiler can only choose label 1 (label 2 leads to the τ state) and has to let Duplicator choose the actual computation branch. Thus the simulation game can imitate the halting game of the ALBA. The proof of correctness of our construction goes exactly as before.
One difference is the choice states. The simulation game between Spoiler and Duplicator mimics the halting game between machine and adversary: Spoiler plays the machine part, while Duplicator is the adversary. The only possible cause of a deviation from the computation is a Spoiler move, because a Duplicator move cannot be the reason for it. But, exactly as in the previous construction, if Spoiler tries not to follow the halting game, Duplicator immediately punishes him by going into the τ state. As a result, Spoiler has a winning strategy in the simulation game iff machine has a winning strategy in the halting game; Spoiler's strategy is simply to follow machine's strategy. Otherwise Duplicator has a winning strategy, namely imitating adversary's strategy as long as Spoiler correctly follows a computation, and going into the τ state when Spoiler ceases to do so.

We have shown that, given a machine M, we can construct automata A and B of polynomial size such that A ⪯ B iff M loops. We know that the termination problem for ALBA is EXPTIME-hard, so PFS ⪯ FS is also EXPTIME-hard. In our construction the automata in the left-hand system have disjoint alphabets; thus Theorem 1 is proved. Note that this problem can easily be solved in exponential time: there are exponentially many different configurations and we never need to check any of them more than once.

5 PSPACE-completeness of the trace preorder problem

We now sketch an algorithm which decides PFS ⊑ FS using polynomial space. It is known that the class PSPACE is closed under complementation and, by Savitch's theorem, under nondeterminism. Thus it suffices to show a nondeterministic algorithm which finds a word w that can be generated by the left-hand side system and cannot be generated by the right-hand one. We guess such a word w letter by letter. To keep track of the current configuration of the left-hand system, we need memory of linear size.
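The letter-by-letter guessing can be illustrated by a deterministic search that pairs the current left-hand configuration with the set of right-hand states reachable on the word read so far; reaching an empty set is exactly the witness the nondeterministic algorithm guesses. This explicit search takes exponential time in the worst case, but each stored pair is only linear-sized, mirroring the space bound. (The data layout – transition maps (state, letter) -> set of states – is illustrative.)

```python
from collections import deque

def trace_included(left_init, left_delta, right_init, right_delta, sigma):
    """Check trace inclusion L(left) ⊆ L(right).

    Explores pairs (left configuration, set S of right-hand states
    reachable on the same word).  A reachable pair with S empty means a
    word of the left system that the right system cannot generate.
    """
    start = (left_init, frozenset({right_init}))
    seen = {start}
    queue = deque([start])
    while queue:
        conf, S = queue.popleft()
        for a in sigma:
            for conf2 in left_delta.get((conf, a), set()):
                # all right-hand states reachable after additionally reading a
                S2 = frozenset(t for q in S
                               for t in right_delta.get((q, a), set()))
                if not S2:
                    return False  # witness word found: inclusion fails
                pair = (conf2, S2)
                if pair not in seen:
                    seen.add(pair)
                    queue.append(pair)
    return True
```

For a left-hand PFS, conf would be a tuple of component states, which is exactly the linear-size configuration mentioned above.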
To check whether a given word cannot be generated by the right-hand automaton, we store the set S of all states in which that automaton can be. If w cannot be generated, then at some point S becomes empty. Note that the size of S will never be greater than the number of states of the right-hand automaton, so our algorithm needs only linear memory. On the other hand, even FS ⊑ FS is PSPACE-hard, so trace inclusion PFS ⊑ FS is PSPACE-complete.

6 Summary

We have exhibited a natural class of systems where the linear-time questions tend to be more computationally tractable than the branching-time ones: namely, PFS ⪯ FS is EXPTIME-complete while PFS ⊑ FS is PSPACE-complete. When considering communication-free Petri nets (so-called BPP) instead of PFS, we only know that both trace inclusion and simulation preorder are EXPSPACE-hard [2], and the actual complexities remain unknown.

References

1. A. K. Chandra, D. Kozen, and L. J. Stockmeyer. Alternation. J. ACM, 28(1):114–133, 1981.
2. S. Lasota. EXPSPACE lower bounds for the simulation preorder between a communication-free Petri net and a finite-state system. Inf. Process. Lett., 109(15):850–855, 2009.
3. A. Muscholl and I. Walukiewicz. A lower bound on web services composition. CoRR, abs/0804.3105, 2008.
4. C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
5. A. Valmari and A. Kervinen. Alphabet-based synchronisation is exponentially cheaper. In CONCUR, pages 161–176, 2002.
6. R. J. van Glabbeek. The linear time – branching time spectrum (extended abstract). In CONCUR, pages 278–297, 1990.