# Relations
## Binary relations
Definition: a binary relation $R$ between the sets $S$ and $T$ is a subset of the Cartesian product $S \times T$.

- If $(a,b) \in R$ then $a$ is in relation $R$ to $b$, denoted by $aRb$.
- The set $S$ is called the domain of the relation $R$ and the set $T$ the codomain.
- If $S = T$ then $R$ is a relation on $S$.
- This definition can be expanded to $n$-ary relations.
Definition: let $R$ be a relation from a set $S$ to a set $T$. Then for each element $a \in S$ we define $[a]_R$ to be the set

$$[a]_R := \{b \in T \;|\; aRb\}.$$

This set is called the ($R$-)image of $a$. For $b \in T$ the set

$$_R[b] := \{a \in S \;|\; aRb\}$$

is called the ($R$-)pre-image of $b$ or $R$-fiber of $b$.
Relations between finite sets can be described using matrices.
Definition: if $S = \{s_1, \dots, s_n\}$ and $T = \{t_1, \dots, t_m\}$ are finite sets and $R \subseteq S \times T$ is a binary relation, then the adjacency matrix $A_R$ of the relation $R$ is the $n \times m$ matrix, whose rows are indexed by $S$ and columns by $T$, defined by

$$A_{s,t} = \begin{cases} 1 &\text{ if } (s,t) \in R, \\ 0 &\text{ otherwise}. \end{cases}$$
For example, the adjacency matrix of the relation $\leq$ on the set $\{1,2,3,4,5\}$ is the upper triangular matrix

$$\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
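The matrix can be computed mechanically from the relation. A minimal Python sketch (the helper `adjacency_matrix` is my own name, not from the text):

```python
def adjacency_matrix(S, T, R):
    """Adjacency matrix of a relation R ⊆ S × T, as a list of rows.

    S and T are given as ordered lists, so rows are indexed by S
    and columns by T.
    """
    return [[1 if (s, t) in R else 0 for t in T] for s in S]

# The relation <= on {1, ..., 5} yields the upper triangular matrix above.
S = [1, 2, 3, 4, 5]
leq = {(a, b) for a in S for b in S if a <= b}
for row in adjacency_matrix(S, S, leq):
    print(row)
```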
Some relations have special properties.

Definitions: let $R$ be a relation on a set $S$. Then $R$ is called

- Reflexive if $\forall x \in S$ we have $(x,x) \in R$.
- Irreflexive if $\forall x \in S$ we have $(x,x) \notin R$.
- Symmetric if $\forall x,y \in S$ we have that $xRy \implies yRx$.
- Antisymmetric if $\forall x,y \in S$ we have that $xRy \land yRx \implies x = y$.
- Transitive if $\forall x,y,z \in S$ we have that $xRy \land yRz \implies xRz$.
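For a relation on a finite set, each of these properties can be checked directly from the definition. A sketch in Python (function names are my own):

```python
def is_reflexive(S, R):
    # (x, x) in R for every x in S
    return all((x, x) in R for x in S)

def is_symmetric(R):
    # xRy implies yRx
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    # xRy and yRx imply x = y
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    # xRy and yRz imply xRz
    return all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)

# The relation <= on {1, 2, 3} is reflexive, antisymmetric and transitive,
# but not symmetric.
S = {1, 2, 3}
leq = {(a, b) for a in S for b in S if a <= b}
print(is_reflexive(S, leq), is_symmetric(leq),
      is_antisymmetric(leq), is_transitive(leq))  # → True False True True
```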
## Equivalence relations
Definition: a relation $R$ on a set $S$ is called an equivalence relation on $S$ if and only if it is reflexive, symmetric and transitive.
Lemma: let $R$ be an equivalence relation on a set $S$. If $b \in [a]_R$, then $[b]_R = [a]_R$.
??? note "Proof:"
    Suppose $b \in [a]_R$, so that $aRb$. If $c \in [b]_R$, then $bRc$ and, as $aRb$, transitivity gives $aRc$. In particular $[b]_R \subseteq [a]_R$. By symmetry of $R$, $aRb \implies bRa$ and hence $a \in [b]_R$; applying the same argument with the roles of $a$ and $b$ swapped yields $[a]_R \subseteq [b]_R$.
Definition: let $R$ be an equivalence relation on a set $S$. Then the sets $[s]_R$ where $s \in S$ are called the $R$-equivalence classes on $S$. The set of $R$-equivalence classes is denoted by $S/R$.
Theorem: let $R$ be an equivalence relation on a set $S$. Then the set $S/R$ of $R$-equivalence classes partitions the set $S$.
??? note "Proof:"
    Let $\Pi_R$ be the set of $R$-equivalence classes. By reflexivity of $R$, each element $a \in S$ lies in the class $[a]_R$ of $\Pi_R$. If an element $a \in S$ lies in the classes $[b]_R$ and $[c]_R$ of $\Pi_R$, then by the previous lemma $[b]_R = [a]_R$ and $[c]_R = [a]_R$, so $[b]_R = [c]_R$. Hence each element $a \in S$ lies in a unique member of $\Pi_R$, which is therefore a partition of $S$.
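The classes and the quotient set can be computed directly from the definitions. A minimal sketch in Python, using congruence modulo $2$ as the equivalence relation (the function names are my own):

```python
def equivalence_class(S, R, a):
    # [a]_R = {b in S | aRb}
    return frozenset(b for b in S if (a, b) in R)

def quotient(S, R):
    # S/R: the set of R-equivalence classes
    return {equivalence_class(S, R, s) for s in S}

# Congruence modulo 2 on S = {0, ..., 5}: aRb iff a - b is even.
S = set(range(6))
R = {(a, b) for a in S for b in S if (a - b) % 2 == 0}
classes = quotient(S, R)

# The classes partition S: their union is S and distinct classes are disjoint.
assert set().union(*classes) == S
print(sorted(sorted(c) for c in classes))  # → [[0, 2, 4], [1, 3, 5]]
```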
## Composition of relations
If $R_1$ and $R_2$ are two relations between a set $S$ and a set $T$, new relations between $S$ and $T$ can be formed by taking the intersection $R_1 \cap R_2$, the union $R_1 \cup R_2$ or the difference $R_1 \backslash R_2$. Furthermore the converse relation $R^\top$ from $T$ to $S$ can be considered, given by $R^\top = \{(t,s) \in T \times S \;|\; (s,t) \in R\}$, and the identity relation from $S$ to $T$ is given by $I = \{(s, t) \in S \times T \;|\; s = t\}$.
Another way of making new relations out of existing ones is by taking the composition.
Definition: if $R_1$ is a relation between $S$ and $T$ and $R_2$ is a relation between $T$ and $U$, then the composition $R = R_1;R_2$ is the relation between $S$ and $U$ defined by: $sRu$ for $s \in S$ and $u \in U$ if and only if there is a $t \in T$ with $sR_1t$ and $tR_2u$.
Proposition: suppose $R_1$ is a relation from $S$ to $T$, $R_2$ a relation from $T$ to $U$ and $R_3$ a relation from $U$ to $V$. Then $R_1;(R_2;R_3) = (R_1;R_2);R_3$: composing relations is associative.
??? note "Proof:"
    Suppose $s \in S$ and $v \in V$ with $s\,R_1;(R_2;R_3)\,v$. Then a $t \in T$ with $sR_1t$ and $t(R_2;R_3)v$ can be found, and for this $t$ there is also a $u \in U$ with $tR_2u$ and $uR_3v$. For this $u$ we have $s(R_1;R_2)u$ and $uR_3v$, and hence $s\,(R_1;R_2);R_3\,v$.

    Conversely, suppose $s \in S$ and $v \in V$ with $s\,(R_1;R_2);R_3\,v$. Then a $u \in U$ with $s(R_1;R_2)u$ and $uR_3v$ can be found, and for this $u$ there is also a $t \in T$ with $sR_1t$ and $tR_2u$. For this $t$ we have $t(R_2;R_3)v$ and $sR_1t$, and hence $s\,R_1;(R_2;R_3)\,v$.
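The definition of composition translates directly into code, and associativity can be checked on a small example. A sketch in Python (the relations below are made up for illustration):

```python
def compose(R1, R2):
    # s (R1;R2) u  iff  there is a t with s R1 t and t R2 u
    return {(s, u) for (s, t) in R1 for (t2, u) in R2 if t == t2}

R1 = {(1, 'a'), (2, 'b')}          # relation from S = {1,2} to T = {'a','b'}
R2 = {('a', 'x'), ('b', 'y')}      # relation from T to U = {'x','y'}
R3 = {('x', 10)}                   # relation from U to V = {10}

# Associativity: R1;(R2;R3) = (R1;R2);R3
assert compose(R1, compose(R2, R3)) == compose(compose(R1, R2), R3)
print(compose(compose(R1, R2), R3))  # → {(1, 10)}
```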
## Transitive closure
Lemma: let $\ell$ be a collection of relations on a set $S$. If all relations $R$ in $\ell$ are transitive, reflexive or symmetric, then the relation $\bigcap_{R \in \ell} R$ is also transitive, reflexive or symmetric, respectively.
??? note "Proof:"
    Let $\bar R = \bigcap_{R \in \ell} R$. Suppose all members of $\ell$ are transitive. Then for all $a,b,c \in S$ with $a \bar R b$ and $b \bar R c$ we have $aRb$ and $bRc$ for all $R \in \ell$. Thus by transitivity of each $R \in \ell$ we also have $aRc$ for each $R \in \ell$, and therefore $a \bar R c$. Hence $\bar R$ is also transitive.

    Suppose all members of $\ell$ are reflexive. Then for each $a \in S$ we have $(a,a) \in R$ for all $R \in \ell$, hence $(a,a) \in \bar R$ and $\bar R$ is also reflexive.

    Suppose all members of $\ell$ are symmetric. Then $a \bar R b$ implies $aRb$ for all $R \in \ell$, and by symmetry of each $R$ also $bRa$ for all $R \in \ell$. Hence $b \bar R a$ and $\bar R$ is also symmetric.
The above lemma makes it possible to define the reflexive, symmetric or transitive closure of a relation $R$ on a set $S$: the intersection of all reflexive, symmetric or transitive relations containing $R$, which is therefore the smallest such relation containing $R$.
- For example, suppose $R = \{(1,2), (2,2), (2,3), (5,4)\}$ is a relation on $S = \{1, 2, 3, 4, 5\}$.
- The reflexive closure of $R$ is then the relation $\big\{(1,1), (1,2), (2,2), (2,3), (3,3), (4,4), (5,4), (5,5)\big\}$, the symmetric closure of $R$ is then the relation $\big\{(1,2), (2,1), (2,2), (2,3), (3,2), (4,5), (5,4)\big\}$, and the transitive closure of $R$ is then the relation $\big\{(1,2), (1,3), (2,2), (2,3), (5,4)\big\}$.
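The three closures of the example above can be computed directly; for the transitive closure one can keep composing until no new pairs appear. A sketch in Python (function names are my own):

```python
def reflexive_closure(S, R):
    # add (x, x) for every x in S
    return set(R) | {(x, x) for x in S}

def symmetric_closure(R):
    # add the converse pair of every pair in R
    return set(R) | {(b, a) for (a, b) in R}

def compose(R1, R2):
    return {(a, c) for (a, b) in R1 for (b2, c) in R2 if b == b2}

def transitive_closure(R):
    # iterate R ∪ R;R until a fixed point is reached
    closure = set(R)
    while True:
        larger = closure | compose(closure, closure)
        if larger == closure:
            return closure
        closure = larger

S = {1, 2, 3, 4, 5}
R = {(1, 2), (2, 2), (2, 3), (5, 4)}
print(sorted(transitive_closure(R)))
# → [(1, 2), (1, 3), (2, 2), (2, 3), (5, 4)]
```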
It may be observed that the reflexive closure of $R$ equals the relation $I \cup R$ and the symmetric closure equals $R \cup R^\top$. For the transitive closure, writing $R^n$ for the $n$-fold composition $R;\dots;R$, we have:
Proposition: $\bigcup_{n > 0} R^n$ is the transitive closure of the relation $R$ on a set $S$.
??? note "Proof:"
    Define $\bar R = \bigcup_{n>0} R^n$. To show that $\bar R$ is the least transitive relation containing $R$, we must show that $\bar R$ contains $R$, that $\bar R$ is transitive, and that $\bar R$ is the smallest relation with both of those properties.

    By definition $\bar R$ contains all of the $R^n$, $n \geq 1$, so in particular $\bar R$ contains $R = R^1$.

    If $(s_1, s_2), (s_2, s_3) \in \bar R$, then $(s_1, s_2) \in R^j$ and $(s_2, s_3) \in R^k$ for some $j,k$. Since composition is [associative](#composition-of-relations), $R^{j+k} = R^j ; R^k$ and hence $(s_1, s_3) \in R^{j+k} \subseteq \bar R$. Thus $\bar R$ is transitive.

    We claim that if $T$ is any transitive relation containing $R$, then $\bar R \subseteq T$. It suffices to show $R^n \subseteq T$ for all $n \in \mathbb{N}$, which we prove by induction.

    : We first check the base case $n=1$:

    $$
    R^1 = R \subseteq T.
    $$

    : Now suppose that for some $k \in \mathbb{N}$ we have $R^k \subseteq T$. Let $(s_1, s_3) \in R^{k+1} = R ; R^k$, so that $(s_1, s_2) \in R$ and $(s_2, s_3) \in R^k$ for some $s_2$. Then $(s_1, s_2), (s_2, s_3) \in T$ and by transitivity of $T$, $(s_1, s_3) \in T$. Hence $R^{k+1} \subseteq T$.

    Thus if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction now implies that $R^n \subseteq T$ for all $n \in \mathbb{N}$, and therefore $\bar R = \bigcup_{n>0} R^n \subseteq T$.
Suppose a relation $R$ on a finite set $S$ of size $n$ is given by its adjacency matrix $A_R$. Then Warshall's algorithm is a method for finding the adjacency matrix of the transitive closure of the relation $R$.
Algorithm - Warshall's algorithm: for an adjacency matrix $A_R = M_0$ of a relation $R$ on $n$ elements, $n$ steps are taken to obtain the adjacency matrix of the transitive closure of $R$. In step $i$, let $C_i$ and $R_i$ be the sets of indices of the nonzero entries of the $i$th column and the $i$th row of $M_{i-1}$, respectively; the matrix $M_i$ is obtained by adding the pairs in $C_i \times R_i$ to $M_{i-1}$. After $n$ steps $M_n = A_{\bar R}$ is obtained.
- For example, let $R$ be a relation on $S = \{1,2,3,4\}$ with $R = \{(2,1), (2,3), (3,1), (3,4), (4,1), (4,3)\}$; we determine the transitive closure $\bar R$ of $R$ with Warshall's algorithm.
- The adjacency matrix of the relation $R$ is given by

$$A_R = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix}.$$

We have $C_1 = \{2,3,4\}$ and $R_1 = \varnothing$, therefore $C_1 \times R_1 = \varnothing$ and no additions will be made: $M_1 = A_R$.

We have $C_2 = \varnothing$ and $R_2 = \{1,3\}$, therefore $C_2 \times R_2 = \varnothing$ and no additions will be made: $M_2 = M_1$.

We have $C_3 = \{2,4\}$ and $R_3 = \{1,4\}$, therefore $C_3 \times R_3 = \{(2,1), (2,4), (4,1), (4,4)\}$, obtaining the matrix

$$M_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{pmatrix}.$$

We have $C_4 = \{2,3,4\}$ and $R_4 = \{1,3,4\}$, therefore $C_4 \times R_4 = \{(2,1), (2,3), (2,4), (3,1), (3,3), (3,4), (4,1), (4,3), (4,4)\}$, obtaining the final matrix

$$M_4 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \end{pmatrix} = A_{\bar R}.$$
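The steps above can be sketched in Python; here indices are 0-based and the matrix is updated in place, which for Warshall's algorithm yields the same closure as keeping separate matrices $M_{i-1}$ and $M_i$:

```python
def warshall(A):
    """Adjacency matrix of the transitive closure of the relation
    whose adjacency matrix is A (a list of 0/1 rows)."""
    n = len(A)
    M = [row[:] for row in A]       # M_0 = A_R; copy so A is not modified
    for i in range(n):              # step i uses column i and row i of M
        for s in range(n):
            if M[s][i]:             # s indexes a nonzero entry of column i
                for t in range(n):
                    if M[i][t]:     # t indexes a nonzero entry of row i
                        M[s][t] = 1  # add the pair (s, t) from C_i x R_i
    return M

# The relation R = {(2,1), (2,3), (3,1), (3,4), (4,1), (4,3)} from the example.
A = [[0, 0, 0, 0],
     [1, 0, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)
```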