
## Idea

When discussing languages, say in the theory of finite automata, the usual starting point is a set, $\Sigma$, of symbols, which is taken to be the alphabet of the language. The next step is to form the free monoid, $\Sigma^*$, on the alphabet; a language is then simply a subset of $\Sigma^*$. Such languages can be used in a number of situations to describe the workings of a ‘system’: the symbols represent certain operations or actions of the system, so the words in $\Sigma^*$ correspond to sequences of operations. Usually the system will not allow arbitrary sequences of operations to be performed, and so the system determines a language, namely its allowable execution sequences. Describing the language helps describe and determine the overall ‘system’.
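For concreteness, this setup can be sketched in a few lines of Python. The alphabet and the language-defining constraint below are purely illustrative, not taken from the original text:

```python
from itertools import product

# Hypothetical alphabet of system actions (names are illustrative).
sigma = {"a", "b"}

# Words of length <= 3 over sigma: a finite slice of the free monoid sigma*.
words = [""]
for n in (1, 2, 3):
    words += ["".join(w) for w in product(sorted(sigma), repeat=n)]

# A language is simply a subset of sigma*; here, say, the words in which
# every "b" is preceded somewhere by an "a" (an illustrative constraint,
# standing in for "the system's allowable execution sequences").
language = [w for w in words
            if "b" not in w or ("a" in w and w.index("a") < w.index("b"))]
```

The point is only that the ‘system’ picks out a subset of all conceivable operation sequences.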

If the system being studied has some concurrency in it, so that certain of the operations are (causally) independent of each other, then one way of modelling such a system is to use a (Mazurkiewicz) trace alphabet, that is, one in which the independence is explicitly taken into consideration.

## Definition

A (Mazurkiewicz) trace alphabet is a pair $(\Sigma, I)$ where $\Sigma$ is a (usually finite) set of actions and $I \subseteq \Sigma \times \Sigma$ is an irreflexive and symmetric relation.

The relation $I$ is usually referred to as independence. Its complement $D = (\Sigma \times \Sigma) - I$ is called the dependency relation; it will be reflexive and symmetric. (This sense of ‘dependency relation’ is related to, but slightly different from, its use for dependency graphs in project planning.)
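The definition can be sketched concretely: check that $I$ is irreflexive and symmetric, and form $D$ as the complement. The particular $\Sigma$ and $I$ below are illustrative choices, not from the original:

```python
# Illustrative trace alphabet: three actions, with b and c independent.
sigma = {"a", "b", "c"}
I = {("b", "c"), ("c", "b")}

def is_trace_alphabet(sigma, I):
    """I must be an irreflexive, symmetric relation on sigma x sigma."""
    irreflexive = all((x, x) not in I for x in sigma)
    symmetric = all((y, x) in I for (x, y) in I)
    in_range = all(x in sigma and y in sigma for (x, y) in I)
    return irreflexive and symmetric and in_range

def dependency(sigma, I):
    """D = (sigma x sigma) - I; reflexive and symmetric by construction."""
    return {(x, y) for x in sigma for y in sigma if (x, y) not in I}

assert is_trace_alphabet(sigma, I)
D = dependency(sigma, I)
assert all((x, x) in D for x in sigma)   # D is reflexive
assert all((y, x) in D for (x, y) in D)  # D is symmetric
```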

## Example

Consider the following ‘dependency diagram’, with some vertices (states) and labelled edges (actions):

$$a \to b \to d, \qquad a \to c \to e$$

(the arrows going from left to right).

We take $\Sigma = \{ a,b,c,d,e\}$ and note that, starting at the left, $b$ depends on $a$ (written $a\lt b$), since we cannot start action $b$ until action $a$ has been completed. This gives a labelled poset.

Here the first information to encode in $D$ is the poset structure of dependency

$\{(a,a),(a,b),(a,c),(a,d),(a,e),(b,b),(b,d),\ldots\},$

then symmetrise this, and take the complement to get

$I = \{(b,c),(c,b),(d,e),(e,d),(b,e), (e,b), (c,d),(d,c)\}.$
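This computation can be checked mechanically. A sketch in Python, taking as input the covering relations $a \lt b$, $a \lt c$, $b \lt d$, $c \lt e$ read off from the example:

```python
sigma = {"a", "b", "c", "d", "e"}
# Covering relations of the example's poset (a < b < d and a < c < e).
less = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "e")}

# Transitive closure of the strict order.
closed = set(less)
changed = True
while changed:
    changed = False
    for (x, y) in list(closed):
        for (y2, z) in list(closed):
            if y == y2 and (x, z) not in closed:
                closed.add((x, z))
                changed = True

# D: the reflexive pairs plus the symmetrised order relation.
D = {(x, x) for x in sigma} | closed | {(y, x) for (x, y) in closed}

# I is the complement of D in sigma x sigma.
I = {(x, y) for x in sigma for y in sigma} - D
assert I == {("b", "c"), ("c", "b"), ("d", "e"), ("e", "d"),
             ("b", "e"), ("e", "b"), ("c", "d"), ("d", "c")}
```

The final assertion recovers exactly the eight pairs listed above.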

Note that this encoding does not involve the relationship given by adjacency, and hence ignores whether an ‘execution sequence’ is ‘allowable’ (i.e., realistic): $(c,d)$ is in $I$ even though we could not first ‘do’ $c$ and then ‘do’ $d$ (unless, of course, $b$ had already been done).

The language (trace language) corresponding to the model will be a subset of the trace monoid of the trace alphabet $(\Sigma, I)$ given here. It will encode the allowable execution sequences, but will also identify allowable sequences (traces) such as $a b c d e$, $a c b d e$, $a b d c e$, $a c e b d$, etc.
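This identification of sequences can be sketched as repeatedly swapping adjacent independent letters. The breadth-first search below is one assumed way of implementing it, not a construction from the original text:

```python
from collections import deque

# The independence relation computed in the example above.
I = {("b", "c"), ("c", "b"), ("d", "e"), ("e", "d"),
     ("b", "e"), ("e", "b"), ("c", "d"), ("d", "c")}

def trace_class(word, I):
    """All words reachable from `word` by swapping adjacent independent
    letters; this is the trace (equivalence class) of `word`."""
    seen = {word}
    queue = deque([word])
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in I:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return seen

cls = trace_class("abcde", I)
# The sequences named in the text all land in the same trace.
assert {"abcde", "acbde", "abdce", "acebd"} <= cls
```

Since each swap is reversible, the relation generated this way is indeed an equivalence, and the trace monoid is the quotient of $\Sigma^*$ by it.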

## Reference

• A. Mazurkiewicz, Trace theory, in: Advances in Petri Nets, Lecture Notes in Computer Science 255, Springer (1987).

Last revised on November 15, 2010 at 07:50:42.