port from mathematics-physics notes

This commit is contained in:
Luc Bijl 2025-08-26 15:48:53 +02:00
parent a4e106ce02
commit c009ea53f0
124 changed files with 13224 additions and 0 deletions

@@ -0,0 +1,27 @@
# Additional axioms
## Axiom of choice
> *Axiom*: let $C$ be a collection of nonempty sets. Then there exists a map
>
>$$
> f: C \to \bigcup_{A \in C} A
>$$
>
> with $f(A) \in A$.
>
> * The image of $f$ is a subset of $\bigcup_{A \in C} A$.
> * The function $f$ is called a **choice function**.
The following statements are equivalent to the axiom of choice.
* For any two nonempty sets $A$ and $B$ there exists a surjective map from $A$ to $B$ or from $B$ to $A$.
* The cardinality of an infinite set $A$ is equal to the cardinality of $A \times A$.
* Every vector space has a basis.
* For every surjective map $f: A \to B$ there is a map $g: B \to A$ with $f(g(b)) = b$ for all $b \in B$.
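For a finite collection of nonempty sets a choice function can be written down explicitly, without invoking the axiom; the axiom is needed precisely when no uniform rule for choosing is available. A minimal sketch in Python (the names are illustrative):

```python
# A choice function for a finite collection C of nonempty sets:
# it assigns to every set A in C one of A's own elements.
def choice_function(C):
    return {A: min(A) for A in C}  # min(A) is one explicit way to choose

C = [frozenset({1, 2}), frozenset({3}), frozenset({2, 4, 6})]
f = choice_function(C)
assert all(f[A] in A for A in C)  # f(A) in A for every A in C
```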
## Axiom of regularity
> *Axiom*: let $X$ be a nonempty set of sets. Then $X$ contains an element $Y$ with $X \cap Y = \varnothing$.
As a result of this axiom no set $S$ can contain itself: applying the axiom to $X = \{S\}$ yields $S \cap \{S\} = \varnothing$, hence $S \notin S$.

@@ -0,0 +1,67 @@
# Cardinalities
## Cardinality
> *Definition*: two sets $A$ and $B$ have the same **cardinality** if there exists a bijection from $A$ to $B$.
For example, two finite sets have the same cardinality if and only if they have the same number of elements. The sets $\mathbb{N}$ and $\mathbb{Z}$ have the same cardinality: consider the map $f: \mathbb{N} \to \mathbb{Z}$ defined by $f(2n) = n$ and $f(2n+1) = -(n+1)$ with $n \in \mathbb{N}$ (taking $0 \in \mathbb{N}$), which may be observed to be a bijection.
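A quick computational check of this bijection on an initial segment; a minimal sketch in Python, with $0 \in \mathbb{N}$ as in the example:

```python
# The bijection f: N -> Z from the example: even numbers map to 0, 1, 2, ...
# and odd numbers map to -1, -2, -3, ...
def f(n):
    return n // 2 if n % 2 == 0 else -(n // 2 + 1)

values = [f(n) for n in range(9)]        # [0, -1, 1, -2, 2, -3, 3, -4, 4]
assert len(set(values)) == len(values)   # pairwise distinct images
```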
> *Theorem*: having the same cardinality is an equivalence relation.
??? note "*Proof*:"
Let $A$ be a set. Then the identity map is a bijection from $A$ to itself, so $A$ has the same cardinality as $A$. Therefore we obtain reflexivity.
Suppose $A$ has the same cardinality as $B$. Then there is a bijection $f: A \to B$. Now $f$ has an inverse $f^{-1}$, which is a bijection from $B$ to $A$. So $B$ has the same cardinality as $A$, obtaining symmetry.
Suppose $A$ has the same cardinality as $B$ and $B$ the same cardinality as $C$. So, there exist bijections $f: A \to B$ and $g: B \to C$. Then $g \circ f: A \to C$ is a bijection from $A$ to $C$. So $A$ has the same cardinality as $C$, obtaining transitivity.
## Countable sets
> *Definition*: a set is called **finite** if it is empty or has the same cardinality as the set $\mathbb{N}_n := \{1, 2, \dots, n\}$ for some $n \in \mathbb{N}$, and **infinite** otherwise.
<br>
> *Definition*: a set is called **countable** if it is finite or has the same cardinality as the set $\mathbb{N}$. An infinite set that is not countable is called **uncountable**.
<br>
> *Theorem*: every infinite set contains an infinite countable subset.
??? note "*Proof*:"
Suppose $A$ is an infinite set. Pick $a_1 \in A$. Since $A$ is infinite, $A \backslash \{a_1, \dots, a_n\}$ is never empty, so we can keep picking pairwise distinct elements $a_1, a_2, \dots$ in $A$. The set of all elements in this sequence forms an infinite countable subset of $A$.
> *Theorem*: let $A$ be a set. If there is a surjective map from $\mathbb{N}$ to $A$ then $A$ is countable.
??? note "*Proof*:"
Will be added later.
## Uncountable sets
> *Lemma*: the set $\{0,1\}^\mathbb{N}$ is uncountable.
??? note "*Proof*:"
Let $F: \mathbb{N} \to \{0,1\}^\mathbb{N}$ be any map. By $f_i$ we denote the function $F(i)$ from $\mathbb{N}$ to $\{0,1\}$. Define $g: \mathbb{N} \to \{0,1\}$ by $g(n) = 1 - f_n(n)$. Then $g$ differs from every $f_i$ in the value at $i$, so $g$ is not in the image of $F$ and $F$ is not surjective. Since there is no surjective map from $\mathbb{N}$ to $\{0,1\}^\mathbb{N}$ and the set is infinite, it is uncountable.
The power set of $\mathbb{N}$ has the same cardinality as $\{0,1\}^\mathbb{N}$ (identify a subset with its indicator function), therefore it is also uncountable.
> *Lemma*: the interval $[0,1)$ is uncountable.
??? note "*Proof*:"
Will be added later.
> *Theorem*: $\mathbb{R}$ is uncountable.
??? note "*Proof*:"
As $\mathbb{R}$ contains the uncountable subset $[0,1)$, it is uncountable.
## Cantor-Schröder-Bernstein theorem
> *Theorem*: let $A$ and $B$ be sets and assume that there are two maps $f: A \to B$ and $g: B \to A$ which are injective. Then there exists a bijection $h: A \to B$.
>
> Therefore $A$ and $B$ have the same cardinality.

@@ -0,0 +1,109 @@
# Maps
## Definition
> *Definition*: a relation $f$ from a set $A$ to a set $B$ is called a map or function from $A$ to $B$ if for each $a \in A$ there is one and only one $b \in B$ with $afb$.
>
> * To indicate that $f$ is a map from $A$ to $B$ we may write $f:A \to B$.
> * If $a \in A$ and $b \in B$ is the unique element with $afb$ then we may write $b=f(a)$.
> * The set of all maps from $A$ to $B$ is denoted by $B^A$.
> * A **partial map** $f$ from $A$ to $B$ is a relation from $A$ to $B$ with the property that for each $a \in A$ there is at most one $b \in B$ with $afb$.
For example, the relation $f$ from $\mathbb{R}$ to $\mathbb{R}$ given by $f(x) = \sqrt{x}$ for $x \geq 0$ is a partial map, since the negative real numbers have no image.
<br>
> *Proposition*: let $f: A \to B$ and $g: B \to C$ be maps, then the composition $g$ after $f$: $g \circ f = f;g$ is a map from $A$ to $C$.
??? note "*Proof*:"
Let $a \in A$ then $g(f(a))$ is an element in $C$ in relation $f;g$ with $a$. If $c \in C$ is an element in $C$ that is in relation $f;g$ with $a$, then there is a $b \in B$ with $afb$ and $bgc$. But then, as $f$ is a map, $b=f(a)$ and as $g$ is a map $c=g(b)$. Hence $c=g(b)=g(f(a))$ is the unique element in $C$ which is in relation $g \circ f$ with $a$.
<br>
> *Definition*: Let $f: A \to B$ be a map.
>
> * The set $A$ is called the *domain* of $f$ and the set $B$ the *codomain*.
> * If $a \in A$ then the element $b=f(a)$ is called the image of $a$ under $f$.
> * The subset of $B$ consisting of the images of the elements of $A$ under $f$ is called the image or range of $f$ and is denoted by $\text{Im}(f)$.
> * If $a \in A$ and $b=f(a)$ then the element $a$ is called a pre-image of $b$. The set of all pre-images of $b$ is denoted by $f^{-1}(b)$.
Notice that $b$ can have more than one pre-image. Indeed if $f: \mathbb{R} \to \mathbb{R}$ is given by $f(x) = x^2$ for all $x \in \mathbb{R}$, then both $-2$ and $2$ are pre-images of $4$.
If $A'$ is a subset of $A$ then the image of $A'$ under $f$ is the set $f(A') = \{f(a) \;|\; a \in A'\}$, so $\text{Im}(f) = f(A)$.
If $B'$ is a subset of $B$ then the pre-image of $B'$, denoted by $f^{-1}(B')$, is the set of elements $a$ from $A$ that are mapped to an element $b$ of $B'$.
<br>
> *Theorem*: let $f: A \to B$ be a map.
>
> * If $A' \subseteq A$, then $f^{-1}(f(A')) \supseteq A'$.
> * If $B' \subseteq B$, then $f(f^{-1}(B')) \subseteq B'$.
??? note "*Proof*:"
Let $a' \in A'$, then $f(a') \in f(A')$ and hence $a' \in f^{-1}(f(A'))$. Thus $A' \subseteq f^{-1}(f(A'))$.
Let $a \in f^{-1}(B')$, then $f(a) \in B'$. Thus $f(f^{-1}(B')) \subseteq B'$.
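The definitions and the theorem above can be checked mechanically on finite maps; a minimal sketch in Python (the dictionary and helper names are illustrative):

```python
# A map f: A -> B as a dictionary, with images and pre-images of subsets.
f = {1: 'a', 2: 'a', 3: 'b'}  # A = {1, 2, 3}, B = {'a', 'b', 'c'}

def image(f, A_sub):
    return {f[a] for a in A_sub}

def preimage(f, B_sub):
    return {a for a in f if f[a] in B_sub}

assert image(f, {1, 2}) == {'a'}
assert preimage(f, {'a'}) == {1, 2}
assert preimage(f, image(f, {1})) == {1, 2}  # f^{-1}(f(A')) may be larger than A'
```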
## Special maps
> *Definition*: let $f: A \to B$ be a map.
>
> * $f$ is called **surjective**, if for each $b \in B$ there is at least one $a \in A$ with $b = f(a)$. Thus $\text{Im}(f) = B$.
> * $f$ is called **injective** if for each $b \in B$, there is at most one $a$ with $f(a) = b$.
> * $f$ is called **bijective** if it is both surjective and injective. So, if for each $b \in B$ there is a unique $a \in A$ with $f(a) = b$.
For example, the map $\sin: \mathbb{R} \to \mathbb{R}$ is neither surjective nor injective. The map $\sin: [-\frac{\pi}{2},\frac{\pi}{2}] \to \mathbb{R}$ is injective but not surjective and the map $\sin: \mathbb{R} \to [-1,1]$ is surjective but not injective. Finally, the map $\sin: [-\frac{\pi}{2},\frac{\pi}{2}] \to [-1,1]$ is bijective.
<br>
> *Theorem*: let $A$ be a set of size $n$ and $B$ a set of size $m$. Let $f: A \to B$ be a map between the sets $A$ and $B$.
>
> * If $n < m$ then $f$ cannot be surjective.
> * If $n > m$ then $f$ cannot be injective.
> * If $n = m$ then $f$ is injective if and only if it is surjective.
??? note "*Proof*:"
Think of pigeonholes. (Not really a proof).
<br>
> *Proposition*: let $f: A \to B$ be a bijection. Then for all $a \in A$ and $b \in B$ we have $f^{-1}(f(a)) = a$ and $f(f^{-1}(b)) = b$. In particular, $f^{-1}$ is the inverse of $f$.
??? note "*Proof*:"
Let $a \in A$. Then $f^{-1}(f(a)) = a$ by definition of $f^{-1}$. If $b \in B$ then by surjectivity of $f$ there is an $a \in A$ with $b = f(a)$. So, by the above $f(f^{-1}(b)) = f(f^{-1}(f(a))) = f(a) = b$.
<br>
> *Theorem*: let $f: A \to B$ and $g: B \to C$ be two maps.
>
> 1. If $f$ and $g$ are surjective then so is $g \circ f$.
> 2. If $f$ and $g$ are injective then so is $g \circ f$.
> 3. If $f$ and $g$ are bijective then so is $g \circ f$.
??? note "*Proof*:"
1. Suppose $f$ and $g$ are surjective, let $c \in C$. By surjectivity of $g$ there is a $b \in B$ with $g(b) = c$. Since $f$ is surjective there is also an $a \in A$ with $f(a) = b$. Therefore $g \circ f(a) = g(f(a)) = g(b) = c$.
2. Suppose $f$ and $g$ are injective, let $a,a' \in A$ with $g \circ f(a) = g \circ f(a')$. Then $g(f(a)) = g(f(a'))$ and by injectivity of $g$ we find $f(a) = f(a')$. Injectivity of $f$ implies $a = a'$.
3. Proofs 1. and 2. imply 3. by definition of bijectivity.
<br>
> *Proposition*: let $f: A \to B$ and $g: B \to A$ be maps with $f \circ g = I_B$ and $g \circ f = I_A$, where $I_A$ and $I_B$ denote the identity maps on $A$ and $B$, respectively. Then $f$ and $g$ are bijections with $f^{-1} = g$ and $g^{-1} = f$.
??? note "*Proof*:"
Suppose $f: A \to B$ and $g: B \to A$ are maps with $f \circ g = I_B$ and $g \circ f = I_A$. Let $b \in B$, then $f(g(b)) = b$, thus $f$ is surjective. If $a,a' \in A$ with $f(a) = f(a')$, then $a = g(f(a)) = g(f(a')) = a'$ and hence $f$ is injective. Therefore $f$ is bijective and by symmetry $g$ is also bijective.
<br>
> *Proposition*: suppose $f: A \to B$ and $g: B \to C$ are bijective maps. Then the inverse of the map $g \circ f$ equals $f^{-1} \circ g^{-1}$.
??? note "*Proof*:"
Suppose $f: A \to B$ and $g: B \to C$ are bijective maps. Then for all $a \in A$ we have $(f^{-1} \circ g^{-1}) (g \circ f)(a) = f^{-1}(g^{-1}(g(f(a)))) = f^{-1}(f(a)) = a$.

@@ -0,0 +1,55 @@
# Orders
## Orders and posets
> *Definition*: a relation $\sqsubseteq$ on a set $P$ is called an **order** if it is reflexive, antisymmetric and transitive.
>
>* The pair $(P, \sqsubseteq)$ is called a **partially ordered set** or for short **poset**.
>* Two elements $x$ and $y$ in a poset $(P, \sqsubseteq)$ are called comparable if $x \sqsubseteq y$ or $y \sqsubseteq x$. The elements are incomparable if $x \not\sqsubseteq y$ and $y \not\sqsubseteq x$.
>* If any two elements are comparable then the relation is called a linear order.
For example on the set of real numbers $\mathbb{R}$ the relation $\leq$ is an order relation. For any two numbers $x,y \in \mathbb{R}$ we have $x \leq y$ or $y \leq x$. This makes $\leq$ into a linear order.
> *Definition* **- Hasse diagram**: let $(P, \sqsubseteq)$ be a poset. The **Hasse diagram** of $(P, \sqsubseteq)$ is the graph with vertex set $P$ in which two vertices $x,y \in P$ are adjacent if and only if $x \sqsubseteq y$ and there is no $z \in P$ different from $x$ and $y$ with $x \sqsubseteq z$ and $z \sqsubseteq y$.
## Maximal and minimal elements
> *Definition*: let $(P, \sqsubseteq)$ be a partially ordered set and $A \subseteq P$. An element $a \in A$ is called the **maximum** ($\top$) of $A$, if for all $a' \in A$ we have $a' \sqsubseteq a$. An element $a \in A$ is called **maximal** if for all $a' \in A$ we have that either $a' \sqsubseteq a$ or $a$ and $a'$ are incomparable.
>
> Similarly we can define the notion of **minimum** ($\bot$) and **minimal** element.
If we consider the poset of all subsets of a set $S$ then the empty set $\varnothing$ is the minimum of the poset, whereas the whole set $S$ is the maximum. The atoms are the subsets of $S$ containing just a single element.
> *Definition*: if a poset $(P, \sqsubseteq)$ has a minimum $\bot$, then the minimal elements of $P\backslash \{\bot\}$ are called the atoms of $P$.
<br>
> *Lemma*: let $(P, \sqsubseteq)$ be a partially ordered set. Then $P$ contains at most one maximum and one minimum.
??? note "*Proof*:"
Suppose $p,q \in P$ are maxima. Then $p \sqsubseteq q$ as $q$ is a maximum. Similarly $q \sqsubseteq p$ as $p$ is a maximum. By antisymmetry of $\sqsubseteq$ we have $p = q$. The argument for minima is analogous.
> *Lemma*: let $(P, \sqsubseteq)$ be a finite nonempty poset. Then $P$ contains a minimal and a maximal element.
??? note "*Proof*:"
Consider the directed graph associated to $(P, \sqsubseteq)$ and pick a vertex in this graph. If the vertex is not maximal, then there is an edge leaving it; move along this edge to the neighbour and repeat this as long as no maximal element is found. Since the graph contains no cycles, a vertex will never be met twice. Hence, as $P$ is finite, the procedure has to stop, implying that a maximal element has been found. A minimal element of $(P, \sqsubseteq)$ is a maximal element of $(P, \sqsupseteq)$ and thus also exists.
> *Definition*: if $(P, \sqsubseteq)$ is a poset and $A \subseteq P$ then an **upper bound** for $A$ is an element $u$ with $a \sqsubseteq u$ for all $a \in A$. A **lower bound** for $A$ is an element $u$ with $u \sqsubseteq a$ for all $a \in A$.
>
> If the set of all upper bounds of $A$ has a minimum then this element is called the **least upper bound** or **supremum** of $A$. Such an element is denoted by $\mathrm{sup} A$.
>
> If the set of all lower bounds of $A$ has a maximum then this element is called the **greatest lower bound** or **infimum** of $A$. Such an element is denoted by $\mathrm{inf} A$.
For example let $S$ be a set. In $(\wp(S), \subseteq)$ any set $A$ of subsets of $S$ has a least upper bound and a greatest lower bound. Indeed
$$
\mathrm{sup} A = \bigcup_{X \in A} X \;\text{ and }\; \mathrm{inf} A = \bigcap_{X \in A} X.
$$
If $(P, \sqsubseteq)$ is a finite poset then the elements from $P$ can be ordered as $p_1, p_2, \dots, p_n$ such that $p_i \sqsubseteq p_j$ implies $i \leq j$. This implies that the adjacency matrix of $\sqsubseteq$ is upper triangular: all its nonzero entries lie on or above the main diagonal.
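Such an ordering can be produced by repeatedly removing a minimal element of what remains, which exists by the lemma above since the poset is finite. A minimal sketch in Python, taking divisibility on a small set as an assumed example poset:

```python
# Order the elements of a finite poset so that p_i ⊑ p_j implies i <= j.
P = {1, 2, 3, 4, 6, 12}
def leq(x, y):
    return y % x == 0  # x ⊑ y iff x divides y

order, remaining = [], set(P)
while remaining:
    # pick an element that no other remaining element lies strictly below
    m = next(x for x in remaining if not any(leq(y, x) for y in remaining - {x}))
    order.append(m)
    remaining.remove(m)

print(order)  # e.g. [1, 2, 3, 4, 6, 12]: every divisor precedes its multiples
```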
> *Definition*: an **ascending chain** in a poset $(P, \sqsubseteq)$ is a sequence $p_1 \sqsubseteq p_2 \sqsubseteq \dots$ of elements $p_i \in P,i \in \mathbb{N}$. A **descending chain** in $(P, \sqsubseteq)$ is a sequence $p_1 \sqsupseteq p_2 \sqsupseteq \dots$ of elements $p_i \in P, i \in \mathbb{N}$.
>
> The poset $(P, \sqsubseteq)$ is called **well founded** if any descending chain is finite.

@@ -0,0 +1,197 @@
# Permutations
## Definition
> *Definition*: let $X$ be a set.
>
> * A bijection of $X$ to itself is called a permutation of $X$. The set of all permutations of $X$ is denoted by $\text{Sym}(X)$ and is called the symmetric group on $X$.
> * The product $g \cdot h$ of two permutations $g,h$ in $\text{Sym}(X)$ is defined as the composition $g \circ h$ of $g$ and $h$.
> * If $X = \{1, \dots, n\}$ we write $\mathrm{Sym}_n$ instead of $\mathrm{Sym}(X)$.
<br>
> *Definition*: the identity map $\mathrm{id}: X \to X$ is defined by $\mathrm{id}(x) = x$ for all $x \in X$; it satisfies $g = g \cdot \mathrm{id} = \mathrm{id} \cdot g$ for all $g$ in $\mathrm{Sym}(X)$. The inverse of $g$, denoted by $g^{-1}$, satisfies $g^{-1} \cdot g = g \cdot g^{-1} = \mathrm{id}$.
In matrix notation: let $g = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1\end{pmatrix}$ and $h = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3\end{pmatrix}$ with $g,h \in \mathrm{Sym}_3$. Since $g \cdot h = g \circ h$ applies $h$ first, we can write $h$ on top and below it $g$ with its columns rearranged so that its first row equals the image row of $h$:
$$
g \cdot h = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \\ \hline 2 & 1 & 3 \\ 3 & 2 & 1\end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1\end{pmatrix},
$$
and we have $g^{-1} = \begin{pmatrix} 2 & 3 & 1 \\ 1 & 2 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix}$.
<br>
> *Theorem*: $\mathrm{Sym}_n$ has exactly $n!$ elements.
??? note "*Proof*:"
A permutation can be described in a matrix notation by a $2$ by $n$ matrix with the numbers $1,\dots,n$ in the first row and the images in the second row. There are $n!$ possibilities to fill the second row.
We can also omit the matrix notation and use the list notation for permutations: $g = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1\end{pmatrix} = [2,3,1]$, as the first row speaks for itself.
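In list notation the product and the inverse are easy to compute; a minimal sketch in Python reproducing the example above (the function names are illustrative):

```python
# perm[i-1] is the image of i. The product g·h = g∘h applies h first.
def multiply(g, h):
    return [g[h[i] - 1] for i in range(len(h))]

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi - 1] = i + 1
    return inv

g, h = [2, 3, 1], [2, 1, 3]
assert multiply(g, h) == [3, 2, 1]           # matches the worked example above
assert multiply(g, inverse(g)) == [1, 2, 3]  # g · g^{-1} = id
```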
<br>
> *Definition*: the order of a permutation $g$ is the smallest positive integer $m$ such that $g^m = \mathrm{id}$.
For example the order of the permutation $[2,1,3]$ in $\mathrm{Sym}_3$ is 2.
If $g$ is a permutation in $\mathrm{Sym}_n$ then the permutations $g, g^2, g^3, \dots$ cannot all be distinct, since there are only $n!$ distinct permutations in $\mathrm{Sym}_n$. So there must exist $r < s$ such that $g^r = g^s$. Since $g$ is a bijection it follows that $g^{s-r} = \mathrm{id}$. So there exist positive integers $m$ with $g^m = \mathrm{id}$ and in particular a smallest such number. Therefore each permutation $g$ has a well-defined order.
## Cycles
> *Definition*: the **fixed** points of a permutation $g$ of $\mathrm{Sym}(X)$ are the elements $x \in X$ for which $g(x) = x$ holds. The set of all fixed points is $\mathrm{fix}(g) = \{x \in X \;|\; g(x) = x\}$.
>
> The **support** of $g$ is the complement of $\mathrm{fix}(g)$ in $X$, denoted by $\mathrm{support}(g)$.
For example consider the permutation $g = [1,3,2,5,4,6] \in \mathrm{Sym}_6$. The fixed points of $g$ are 1 and 6. So $\mathrm{fix}(g) = \{1,6\}$. Thus the points moved by $g$ form the set $\mathrm{support}(g) = \{2,3,4,5\}$.
<br>
> *Definition*: let $g \in \mathrm{Sym}_n$ be a permutation with $\mathrm{support}(g) = \{a_1, \dots, a_m\}$ with $a_i$ pairwise distinct.
>
> We say $g$ is an $m$-cycle if $g(a_i) = a_{i+1}$ for all $i \in \{1, \dots, m-1\}$ and $g(a_m) = a_1$. For such a cycle $g$ we also use the cycle notation $(a_1, \dots, a_m)$.
>
> 2-cycles are called transpositions.
The composition of permutations in $\mathrm{Sym}_n$ is not commutative in general. This means that for $g, h \in \mathrm{Sym}_n$ the products $g \cdot h$ and $h \cdot g$ need not be the same.
Two cycles are called disjoint if the intersection of their supports is empty. Two disjoint cycles always commute.
For example in $\mathrm{Sym}_4$ the permutation $[2,1,4,3]$ is not a cycle, but it is the product of two disjoint cycles $(1,2)$ and $(3,4)$.
<br>
> *Theorem*: every permutation in $\mathrm{Sym}_n$ is a product of disjoint cycles. This product is unique up to rearrangement of the factors.
??? note "*Proof*:"
Will be added later.
For example consider the permutation $g = [8,4,1,6,7,2,5,3]$ in $\mathrm{Sym}_8$. The following steps lead to the disjoint cycles decomposition.
: Choose an element in the support of $g$, for example 1. Now construct the cycle
$$
(1,g(1),g^2(1),\dots),
$$
obtaining the cycle $(1,8,3)$.
Next choose an element in the support of $g$, but outside $\{1,3,8\}$, for example 2. Construct the cycle
$$
(2,g(2),g^2(2),\dots),
$$
obtaining the cycle $(2,4,6)$.
Choose an element in the support of $g$ but outside $\{1,2,3,4,6,8\}$, for example 5. Construct the cycle
$$
(5,g(5),g^2(5),\dots),
$$
obtaining the cycle $(5,7)$. Then $g$ and $(1,8,3) \cdot (2,4,6) \cdot (5,7)$ coincide on $\{1,\dots,8\}$ and the decomposition is finished. As these cycles are disjoint they commute, implying that $g$ can also be written as $(5,7) \cdot (1,8,3) \cdot (2,4,6)$ or $(2,4,6) \cdot (5,7) \cdot (1,8,3)$.
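The steps of this example follow a general procedure: pick an unvisited point of the support and follow $g$ until the cycle closes. A minimal sketch in Python (the function name is illustrative):

```python
# Decompose a permutation in list notation into disjoint cycles of length > 1.
def disjoint_cycles(g):
    n, seen, cycles = len(g), set(), []
    for start in range(1, n + 1):
        if start in seen or g[start - 1] == start:
            continue  # already in a cycle, or a fixed point
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = g[x - 1]  # follow g
        cycles.append(tuple(cycle))
    return cycles

print(disjoint_cycles([8, 4, 1, 6, 7, 2, 5, 3]))  # [(1, 8, 3), (2, 4, 6), (5, 7)]
```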
<br>
> *Definition*: the cycle structure of a permutation $g$ is the sequence of the cycle lengths in an expression of $g$ as a product of disjoint cycles.
This means that every permutation has a unique cycle structure.
## Conjugation
The choice $X = \{1, \dots, n\}$ fixes the set $X$ under consideration. Suppose a different numbering of the elements in $X$ is chosen. How may a permutation of $X$ be compared with respect to two different numberings?
> *Lemma*: let $h$ be a permutation in $\mathrm{Sym}_n$.
>
> * For every cycle $(a_1, \dots, a_m)$ in $\mathrm{Sym}_n$ we have
> $$
> h \cdot (a_1, \dots, a_m) \cdot h^{-1} = (h(a_1), \dots, h(a_m)).
> $$
>
> * If $g_1, \dots, g_k$ are in $\mathrm{Sym}_n$, then $h \cdot g_1 \cdots g_k \cdot h^{-1} = h g_1 h^{-1} \cdots h g_k h^{-1}$. In particular, if $g_1, \dots, g_k$ are disjoint cycles, then $h \cdot g_1 \cdots g_k \cdot h^{-1}$ is the product of the disjoint cycles $h g_1 h^{-1}, \dots, h g_k h^{-1}$.
??? note "*Proof*:"
Will be added later.
Conjugation is similar to basis transformation in linear algebra.
<br>
> *Theorem*: two permutations $g$ and $h$ in $\mathrm{Sym}_n$ have the same cycle structure if and only if there exists a permutation $k$ in $\mathrm{Sym}_n$ with $g = k \cdot h \cdot k^{-1}$.
??? note "*Proof*:"
Will be added later.
<br>
> *Corollary*: being conjugate is an equivalence relation on $\mathrm{Sym}_n$.
??? note "*Proof*:"
Two elements in $\mathrm{Sym}_n$ are conjugate if and only if they have the same cycle structure. But having the same cycle structure is reflexive, symmetric and transitive.
For example in $\mathrm{Sym}_4$ the permutations $g = [2,1,4,3]$ and $h=[3,4,1,2]$ are conjugate, since both have the cycle structure $2,2$: $g = (1,2) \cdot (3,4)$ and $h = (1,3) \cdot (2,4)$. A permutation $k$ such that $k \cdot g \cdot k^{-1} = h$ is $k = [1,3,2,4] = (2,3)$.
<br>
> *Theorem*: let $n \geq 2$. Every permutation of $\mathrm{Sym}_n$ is the product of transpositions.
??? note "*Proof*:"
Since every permutation in $\mathrm{Sym}_n$ can be written as a product of disjoint cycles, it suffices to show that every cycle is a product of 2-cycles. Now every $m$-cycle $(a_1, \dots, a_m)$ is equal to the product
$$
(a_1, a_2) \cdot (a_2, a_3) \cdots (a_{m-1}, a_m).
$$
## Alternating groups
To be able to distinguish permutations by whether they are products of an even or an odd number of 2-cycles, the following result is needed.
> *Theorem*: if a permutation can be written in two ways as a product of 2-cycles, then both products have even length or both products have odd length.
??? note "*Proof*:"
Will be added later.
From this theorem the following definition follows.
> *Definition*: let $g$ be a permutation of $\mathrm{Sym}_n$. The sign of $g$, denoted by $\mathrm{sign}(g)$, is defined as
>
> * 1 if $g$ can be written as a product of an even number of 2-cycles, and
> * -1 if $g$ can be written as a product of an odd number of 2-cycles.
>
> We say that $g$ is even if $\mathrm{sign}(g)=1$ and odd if $\mathrm{sign}(g)=-1$.
<br>
> *Theorem*: for all permutations $g,h$ in $\mathrm{Sym}_n$, we have
>
> $$
> \mathrm{sign}(g \cdot h) = \mathrm{sign}(g) \cdot \mathrm{sign}(h).
> $$
??? note "*Proof*:"
Let $g$ and $h$ be elements of $\mathrm{Sym}_n$, if one of the permutations is even and the other is odd, then $g \cdot h$ can be written as the product of an odd number of 2-cycles and is therefore odd. If $g$ and $h$ are both even or both odd, then the product $g \cdot h$ can be written as the product of an even number of 2-cycles so that $g \cdot h$ is even.
The fact that sign is multiplicative implies that products and inverses of even permutations are even; this gives rise to the following definition.
> *Definition*: by $\mathrm{Alt}_n$ we denote the set of even permutations in $\mathrm{Sym}_n$, called the alternating group on $n$ letters.
>
> The alternating group is closed with respect to taking products and inverse elements.
For example, for $n=3$ the even permutations are $\mathrm{id}$, $[2,3,1] = (1,2,3)$ and $[3,1,2] = (1,3,2)$.
<br>
> *Theorem*: for $n > 1$ the alternating group $\mathrm{Alt}_n$ contains precisely $\frac{n!}{2}$ permutations.
??? note "*Proof*:"
A permutation $g$ of $\mathrm{Sym}_n$ is even if and only if the product $g \cdot (1,2)$ is odd. Hence the map $g \mapsto g \cdot (1,2)$ defines a bijection between the even and the odd permutations of $\mathrm{Sym}_n$. Then half of the $n!$ permutations of $\mathrm{Sym}_n$ are even.
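The sign can be computed from the cycle decomposition, since an $m$-cycle is a product of $m-1$ transpositions by the theorem above. A minimal sketch in Python (the function name is illustrative):

```python
# sign(g) = (-1)^t where t is the number of transpositions in a decomposition.
def sign(g):
    n, seen, transpositions = len(g), set(), 0
    for start in range(1, n + 1):
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:   # walk one cycle and record its length
            seen.add(x)
            x = g[x - 1]
            length += 1
        transpositions += length - 1  # an m-cycle gives m-1 transpositions
    return (-1) ** transpositions

assert sign([2, 3, 1]) == 1    # (1,2,3) is even
assert sign([2, 1, 3]) == -1   # the transposition (1,2) is odd
```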

@@ -0,0 +1,99 @@
# Recursion and induction
## Recursion
A recursively defined function $f$ needs two ingredients:
* a *base*, in which the function value $f(n)$ is defined directly for some initial value(s) of $n$.
* a *recursion*, in which the value $f(n)$ is expressed in terms of the values $f(m)$ for $m$ smaller than $n$.
For example, the sum
$$
\begin{align*}&\sum_{i=1}^1 i = 1,\\ &\sum_{i=1}^{n+1} i = (n + 1) + \sum_{i=1}^{n} i.\end{align*}
$$
Or the product
$$
\begin{align*}&\prod_{i=1}^0 i = 1,\\ &\prod_{i=1}^{n+1} i = (n+1) \cdot \prod_{i=1}^{n} i,\end{align*}
$$
where the empty product $\prod_{i=1}^0 i$ is defined to be $1$; this recursion defines the factorial $n!$.
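Both recursive definitions translate directly into code; a minimal sketch in Python, reading the product as the factorial:

```python
# Base and recursion, exactly as in the definitions above.
def rec_sum(n):
    return 1 if n == 1 else n + rec_sum(n - 1)     # sum of 1..n

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)   # product of 1..n, empty product 1

assert rec_sum(5) == 15
assert factorial(5) == 120
```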
## Induction
> *Principle* **- Natural induction**: suppose $P(n)$ is a predicate for $n \in \mathbb{Z}$, let $b \in \mathbb{Z}$. If the following holds
>
> * $P(b)$ is true,
> * for all $k \in \mathbb{Z}$, $k \geq b$ we have that $P(k)$ implies $P(k+1)$.
>
> Then $P(n)$ is true for all $n \geq b$.
For example, we claim that $\forall n \in \mathbb{N}$ we have
$$
\sum_{i=1}^n i = \frac{n}{2} (n+1).
$$
We first check the claim for $n=1$:
$$
\sum_{i=1}^1 i = \frac{1}{2} (1+1) = 1.
$$
Now suppose that for some $k \in \mathbb{N}$
$$
\sum_{i=1}^k i = \frac{k}{2} (k+1).
$$
Then by assumption
$$
\begin{align*}
\sum_{i=1}^{k+1} i &= \sum_{i=1}^k i + (k+1), \\
&= \frac{k}{2}(k+1) + (k+1), \\
&= \frac{k+1}{2}(k+2).
\end{align*}
$$
Hence if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction implies now that $\forall n \in \mathbb{N}$ we have
$$
\sum_{i=1}^n i = \frac{n}{2}(n+1).
$$
> *Principle* **- Strong induction**: suppose $P(n)$ is a predicate for $n \in \mathbb{Z}$, let $b \in \mathbb{Z}$. If the following holds
>
> * $P(b)$ is true,
> * for all $k \in \mathbb{Z}$, $k \geq b$, we have that $P(b), P(b+1), \dots, P(k-1)$ and $P(k)$ together imply $P(k+1)$.
>
> Then $P(n)$ is true for all $n \geq b$.
For example, we claim for the recursion
$$
\begin{align*}
&a_1 = 1, \\
&a_2 = 3, \\
&a_n = a_{n-2} + 2 a_{n-1}
\end{align*}
$$
that $a_n$ is odd $\forall n \in \mathbb{N}$.
We first check the claim for $n=1$ and $n=2$: from the definition of the recursion it may be observed that the claim is true, since $a_1 = 1$ and $a_2 = 3$ are odd.
Now suppose that $a_i$ is odd for all $i \in \{1, \dots, k\}$, for some $k \geq 2$.
Then by the recursion
$$
a_{k+1} = a_{k-1} + 2 a_k,
$$
in which $a_{k-1}$ is odd by the induction hypothesis and $2 a_k$ is even, so $a_{k+1}$ is odd. The principle of strong induction now implies that $a_n$ is odd for all $n \in \mathbb{N}$.

@@ -0,0 +1,183 @@
# Relations
## Binary relations
> *Definition*: a binary relation $R$ between the sets $S$ and $T$ is a subset of the Cartesian product $S \times T$.
>
> * If $(a,b) \in R$ then $a$ is in relation $R$ to $b$, denoted by $aRb$.
> * The set $S$ is called the domain of the relation $R$ and the set $T$ the codomain.
> * If $S=T$ then $R$ is a relation on $S$.
> * This definition can be expanded to n-ary relations.
<br>
> *Definition*: let $R$ be a relation from a set $S$ to a set $T$. Then for each element $a \in S$ we define $[a]_R$ to be the set
>
> $$
> [a]_R := \{b \in T \;|\; aRb\}.
> $$
>
> This set is called the ($R$-) image of $a$.
>
> For $b \in T$ the set
>
> $$
> _R[b] := \{a \in S \;|\; aRb\}
> $$
>
> is called the ($R$-) pre-image of $b$ or the $R$-fiber of $b$.
<br>
Relations between finite sets can be described using matrices.
> *Definition*: if $S = \{s_1, \dots, s_n\}$ and $T = \{t_1, \dots, t_m\}$ are finite sets and $R \subseteq S \times T$ is a binary relation, then the adjacency matrix $A_R$ of the relation $R$ is the $n \times m$ matrix whose rows are indexed by $S$ and columns by $T$, defined by
>
> $$
> A_{s,t} = \begin{cases} 1 &\text{ if } (s,t) \in R, \\ 0 &\text{ otherwise}. \end{cases}
> $$
For example, the adjacency matrix of relation $\leq$ on the set $\{1,2,3,4,5\}$ is the upper triangular matrix
$$
\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1\end{pmatrix}
$$
<br>
Some relations have special properties:
> *Definitions*: let $R$ be a relation on a set $S$. Then $R$ is called
>
> * *Reflexive* if $\forall x \in S$ we have $(x,x) \in R$.
> * *Irreflexive* if $\forall x \in S$ we have $(x,x) \notin R$.
> * *Symmetric* if $\forall x,y \in S$ we have $xRy \implies yRx$.
> * *Antisymmetric* if $\forall x,y \in S$ we have $xRy \land yRx \implies x = y$.
> * *Transitive* if $\forall x,y,z \in S$ we have $xRy \land yRz \implies xRz$.
## Equivalence relations
> *Definition*: a relation $R$ on a set $S$ is called an equivalence relation on $S$ if and only if it is reflexive, symmetric and transitive.
<br>
> *Lemma*: let $R$ be an equivalence relation on a set $S$. If $b \in [a]_R$, then $[b]_R = [a]_R$.
??? note "*Proof*:"
Suppose $b \in [a]_R$, that is, $aRb$. If $c \in [b]_R$, then $bRc$ and, as $aRb$, transitivity gives $aRc$. In particular $[b]_R \subseteq [a]_R$. By symmetry of $R$, $aRb$ implies $bRa$, hence $a \in [b]_R$, and the same argument gives $[a]_R \subseteq [b]_R$.
<br>
> *Definition*: let $R$ be an equivalence relation on a set $S$. Then the sets $[s]_R$ where $s \in S$ are called the $R$-equivalence classes on $S$. The set of $R$-equivalence classes is denoted by $S/R$.
<br>
> *Theorem*: let $R$ be an equivalence relation on a set $S$. Then the set $S/R$ of $R$-equivalence classes partitions the set $S$.
??? note "*Proof*:"
Let $\Pi_R$ be the set of $R$-equivalence classes. Then by reflexivity of $R$ we find that each element $a \in S$ is inside the class $[a]_R$ of $\Pi_R$. If an element $a \in S$ is in the classes $[b]_R$ and $[c]_R$ of $\Pi_R$, then by the previous lemma we find $[b]_R = [a]_R$ and $[c]_R = [a]_R$. Then $[b]_R = [c]_R$, therefore each element $a \in S$ is inside a unique member of $\Pi_R$, which therefore is a partition of $S$.
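The theorem can be observed on a concrete equivalence relation; a minimal sketch in Python, taking congruence modulo $3$ on a small set as an assumed example:

```python
# The R-equivalence classes of an equivalence relation partition S.
S = set(range(10))
def related(a, b):
    return (a - b) % 3 == 0  # congruence modulo 3 is an equivalence relation

classes = {frozenset(b for b in S if related(a, b)) for a in S}
assert set().union(*classes) == S                  # the classes cover S
assert all(c == d or not (c & d)                   # distinct classes are disjoint
           for c in classes for d in classes)
```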
## Composition of relations
If $R_1$ and $R_2$ are two relations between sets $S$ and $T$, new relations can be formed between $S$ and $T$ by taking the intersection $R_1 \cap R_2$, the union $R_1 \cup R_2$ or the difference $R_1 \backslash R_2$. Furthermore the converse relation $R^\top$ from $T$ to $S$ of a relation $R$ is given by $\{(t,s) \in T \times S \;|\; (s,t) \in R\}$ and the identity relation on a set $S$ is given by $I = \{(s,t) \in S \times S \;|\; s = t\}$.
Another way of making new relations out of existing ones is by taking the composition.
> *Definition*: if $R_1$ is a relation between $S$ and $T$ and $R_2$ is a relation between $T$ and $U$ then the composition $R = R_1;R_2$ is the relation between $S$ and $U$ defined by $sRu$ for $s \in S$ and $u \in U$, if and only if there is a $t \in T$ with $sR_1t$ and $tR_2u$.
<br>
> *Proposition*: suppose $R_1$ is relation from $S$ to $T$, $R_2$ a relation from $T$ to $U$ and $R_3$ a relation from $U$ to $V$. Then $R_1;(R_2;R_3) = (R_1;R_2);R_3$. Composing relations is associative.
??? note "*Proof*:"
Suppose $s \in S$ and $v \in V$ with $sR_1;(R_2;R_3)v$. Then a $t \in T$ with $sR_1t$ and $t(R_2;R_3)v$ can be found. Then there is also a $u \in U$ with $tR_2u$ and $uR_3v$. For this $u$ there is $s(R_1;R_2)u$ and $uR_3v$ and hence $s(R_1;R_2);R_3v$.
Similarly, if $s \in S$ and $v \in V$ with $s(R_1;R_2);R_3v$, then a $u \in U$ with $s(R_1;R_2)u$ and $uR_3v$ can be found. Then there is also a $t \in T$ with $sR_1t$ and $tR_2u$. For this $t$ there is $t(R_2;R_3)v$ and $sR_1t$ and hence $sR_1;(R_2;R_3)v$.
## Transitive closure
> *Lemma*: let $\ell$ be a collection of relations $R$ on a set $S$. If all relations $R$ in $\ell$ are transitive, reflexive or symmetric then the relation $\bigcap_{R \in \ell} R$ is also transitive, reflexive or symmetric respectively.
??? note "*Proof*:"
Let $\bar R = \bigcap_{R \in \ell} R$. Suppose all members of $\ell$ are transitive. Then for all $a,b,c \in S$ with $a \bar R b$ and $b \bar R c$ there is $aRb$ and $bRc$ for all $R \in \ell$. Thus by transitivity of each $R \in \ell$ there is also $aRc$ for each $R \in \ell$. Thus there is $a \bar R c$. Hence $\bar R$ is also transitive.
Proof for symmetric relation will follow.
Proof for reflexive relation will follow.
The above lemma makes it possible to define the reflexive, symmetric or transitive closure of a relation $R$ on a set $S$. It is the smallest reflexive, symmetric or transitive relation containing $R$.
For example suppose $R = \{(1,2), (2,2), (2,3), (5,4)\}$ is a relation on $S = \{1, 2, 3, 4, 5\}$.
: The reflexive closure of $R$ is then the relation
$$
\big\{(1,1), (1,2), (2,2), (2,3), (3,3), (4,4), (5,5), (5,4) \big\},
$$
the symmetric closure of $R$ is then the relation
$$
\big\{ (1,2), (2,1), (2,2), (2,3), (3,2), (4,5), (5,4) \big\},
$$
and the transitive closure of $R$ is then the relation
$$
\{(1,2), (1,3), (2,2), (2,3), (5,4)\}.
$$
It may be observed that the reflexive closure of $R$ equals the relation $I \cup R$ and the symmetric closure equals $R \cup R^\top$.
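These two closures are direct to compute; a minimal sketch in Python on the example relation above:

```python
# Reflexive closure I ∪ R and symmetric closure R ∪ R^T of R on S.
S = {1, 2, 3, 4, 5}
R = {(1, 2), (2, 2), (2, 3), (5, 4)}

reflexive_closure = R | {(s, s) for s in S}        # I ∪ R
symmetric_closure = R | {(b, a) for (a, b) in R}   # R ∪ R^T
print(sorted(symmetric_closure))  # [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2), (4, 5), (5, 4)]
```

For the transitive closure we have the following proposition.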
> *Proposition*: $\bigcup_{n > 0} R^n$ is the transitive closure of the relation $R$ on a set $S$.
??? note "*Proof*:"
Define $\bar R = \bigcup_{n>0} R^n$. To show that $\bar R$ is the least transitive relation containing $R$, we must show that $\bar R$ contains $R$, that $\bar R$ is transitive and that $\bar R$ is contained in every transitive relation containing $R$.
By definition $\bar R$ contains all of the $R^i$, $i \in \mathbb{N}$; in particular $\bar R$ contains $R = R^1$.
If $(s_1, s_2), (s_2, s_3) \in \bar R$, then $(s_1, s_2) \in R^j$ and $(s_2, s_3) \in R^k$ for some $j,k$. Since composition is [associative](#composition-of-relations), $R^{j+k} = R^j ; R^k$ and hence $(s_1, s_3) \in R^{j+k} \subseteq \bar R$.
We claim that if $T$ is any transitive relation containing $R$, then $\bar R \subseteq T$; for this it suffices to show that $R^n \subseteq T$ for all $n \in \mathbb{N}$, which we prove by induction.
: We first check for $n=1$
$$
R^1 = R \subseteq T.
$$
: Now suppose that for some $k \in \mathbb{N}$ we have $R^k \subseteq T$. Let $(s_1, s_3) \in R^{k+1} = R;R^k$, then $(s_1, s_2) \in R$ and $(s_2, s_3) \in R^k$ for some $s_2$. Hence $(s_1, s_2), (s_2, s_3) \in T$ and by transitivity of $T$, $(s_1, s_3) \in T$, so $R^{k+1} \subseteq T$.
Hence if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction now implies that $R^n \subseteq T$ for all $n \in \mathbb{N}$, and therefore $\bar R = \bigcup_{n>0} R^n \subseteq T$.
Suppose a relation $R$ on a finite set $S$ of size $n$ is given by its adjacency matrix $A_R$. Then Warshall's algorithm is a method for finding the adjacency matrix of the transitive closure of the relation $R$.
> *Algorithm* **- Warshall's algorithm**: for an adjacency matrix $A_R = M_0$ of a relation $R$ on $n$ elements, $n$ steps are taken to obtain the adjacency matrix of the transitive closure of $R$. In step $i$, let $C_i$ be the set of elements with a $1$ in the $i$th column of $M_{i-1}$ and $R_i$ the set of elements with a $1$ in the $i$th row of $M_{i-1}$; the matrix $M_i$ is obtained by adding the pairs in $C_i \times R_i$ to $M_{i-1}$. After $n$ steps $A_{\bar R} = M_n$ is obtained.
For example, let $R$ be a relation on $S = \{1,2,3,4\}$ with $R = \{(2,1), (2,3), (3,1), (3,4), (4,1), (4,3)\}$; we determine the transitive closure $\bar R$ of $R$ with Warshall's algorithm.
: The adjacency matrix of the relation $R$ is given by
$$
A_R = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0\end{pmatrix}.
$$
We have $C_1 = \{2,3,4\}$ and $R_1 = \varnothing$, therefore $C_1 \times R_1 = \varnothing$ and no additions will be made, $M_1 = A_R$.
We have $C_2 = \varnothing$ and $R_2 = \{1,3\}$, therefore $C_2 \times R_2 = \varnothing$ and no additions will be made, $M_2 = M_1$.
We have $C_3 = \{2,4\}$ and $R_3 = \{1,4\}$, therefore $C_3 \times R_3 = \{(2,1), (2,4), (4,1), (4,4)\}$ obtaining the matrix
$$
M_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1\end{pmatrix}.
$$
We have $C_4 = \{2,3,4\}$ and $R_4 = \{1,3,4\}$, therefore $C_4 \times R_4 = \{(2,1), (2,3), (2,4), (3,1), (3,3), (3,4), (4,1), (4,3), (4,4)\}$ obtaining the final matrix
$$
M_4 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1\end{pmatrix} = A_{\bar R}.
$$
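The steps above can be carried out mechanically; a minimal sketch of Warshall's algorithm in Python, run on the example (the function name is illustrative):

```python
# Warshall's algorithm: in step i, connect every predecessor of i
# to every successor of i, working on a copy of the matrix.
def warshall(M):
    n = len(M)
    M = [row[:] for row in M]
    for i in range(n):
        for j in range(n):
            if M[j][i]:               # j -> i ...
                for k in range(n):
                    if M[i][k]:       # ... and i -> k, so add j -> k
                        M[j][k] = 1
    return M

A = [[0, 0, 0, 0],
     [1, 0, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)  # reproduces the matrix M_4 computed above
```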

@@ -0,0 +1,165 @@
# Sets
## Sets and subsets
> *Definition*: a set is a collection of elements uniquely defined by these elements.
Examples are $\mathbb{N}$, the set of natural numbers; $\mathbb{Z}$, the set of integers; $\mathbb{Q}$, the set of rational numbers; $\mathbb{R}$, the set of real numbers; and $\mathbb{C}$, the set of complex numbers.
<br>
> *Definition*: suppose $A$ and $B$ are sets. Then $A$ is called a subset of $B$ if every element $a \in A$ is also an element of $B$. Then $B$ contains $A$, denoted by $A \subseteq B$.
A subset $A$ of a set $B$ which is neither the empty set $\varnothing$ nor the full set $B$ is called a proper subset of $B$, denoted by $A \subsetneq B$; the struck-through line under the symbol indicates properness. For example $\mathbb{N} \subsetneq \mathbb{Z}$.
<br>
> *Definition*: if $B$ is a set, then $\wp(B)$ denotes the set of all subsets $A$ of $B$. The set $\wp(B)$ is called the power set of $B$.
Suppose for example that $B = \{x,y,z\}$, then $\wp(B) = \{\varnothing,\{x\},\{y\},\{z\},\{x,y\},\{x,z\},\{y,z\},\{x,y,z\}\}$.
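Power sets of small sets can be generated directly; a minimal sketch in Python using itertools (the function name is illustrative):

```python
# All subsets of B, grouped by size r = 0, 1, ..., |B|.
from itertools import combinations

def power_set(B):
    B = list(B)
    return [set(c) for r in range(len(B) + 1) for c in combinations(B, r)]

subsets = power_set({'x', 'y', 'z'})
assert len(subsets) == 2 ** 3  # matches the proposition below
```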
<br>
> *Proposition*: let $B$ be a set with $n$ elements. Then its power set $\wp(B)$ contains $2^n$ elements.
??? note "*Proof*:"
Let $B$ be a set with $n$ elements. A subset $A$ of $B$ is completely determined by its elements. For each element $b \in B$ there are two options, it is in $A$ or it is not. So, there are $2^n$ options and thus $2^n$ different subsets $A$ of $B$.
<br>
> *Proposition*: suppose $A$, $B$ and $C$ are sets. Then the following hold:
>
> 1. if $A \subseteq B$ and $B \subseteq C$ then $A \subseteq C$,
> 2. if $A \subseteq B$ and $B \subseteq A$ then $A = B$.
??? note "*Proof*:"
To prove 1, suppose that $A \subseteq B$ and $B \subseteq C$. Let $a \in A$, then $a \in B$ and therefore $a \in C$.
To prove 2, every element of $A$ is in $B$ and every element of $B$ is in $A$. As a set is uniquely determined by its elements, $A = B$.
<br>
> *Definition*: let $P$ be a predicate with reference set $X$, then
>
>$$
> \big\{x \in X \;\big|\; P(x) \big\}
>$$
>
> denotes the subset of $X$ consisting of all elements $x \in X$ for which statement $P(x)$ is true.
## Operations on sets
> *Definition*: let $A$ and $B$ be sets.
>
> * The intersection of $A$ and $B$ $(A \cap B)$ is the set of all elements contained in both $A$ and $B$.
> * The union of $A$ and $B$ $(A \cup B)$ is the set of elements that are in at least one of $A$ or $B$.
> * $A$ and $B$ are disjoint if the intersection $(A \cap B)$ is the empty set $\varnothing$.
<br>
> *Definition*: suppose $I$ is a set (an index set) and for each element $i \in I$ there exists a set $A_i$, then
>
> $$
> \bigcup_{i \in I} A_i := \big\{x \;\big|\; \text{there is an } i \in I \text{ with } x \in A_i \big\},
> $$
>
> and
>
> $$
> \bigcap_{i \in I} A_i := \big\{x \;\big|\; x \in A_i \text{ for all } i \in I \big\}.
> $$
>
This defines unions and intersections taken over an index set. For example suppose for each $i \in \mathbb{N}$ the set $A_i$ is defined as $\{x \in \mathbb{R} \;|\; 0 \leq x \leq i \}$, then
$$
\bigcap_{i \in \mathbb{N}} A_i = \{0\},
$$
and
$$
\bigcup_{i \in \mathbb{N}} A_i = \mathbb{R}_{\geq 0}.
$$
<br>
> *Definition*: if $C$ is a collection of sets, then
>
> $$
> \bigcup_{A \in C} A := \big\{x \;\big|\; \text{there is an } A \in C \text{ with } x \in A \big\},
> $$
>
> and
>
> $$
> \bigcap_{A \in C} A := \big\{x \;\big|\; x \in A \text{ for all } A \in C \big\}.
> $$
<br>
> *Definition*: let $A$ and $B$ be sets. The difference of $A$ and $B$ $(A \backslash B)$ is the set of all elements from $A$ that are not in $B$.
>
>: The symmetric difference of $A$ and $B$ $(A \triangle B)$ is the set consisting of all elements that are in exactly one of $A$ or $B$.
>
>: If one is working inside a fixed set $U$ and only considering subsets of $U$, then the difference $U \backslash A$ is also called the complement of $A$ in $U$, denoted by $A^*$. In this case the set $U$ is called the universe.
## Cartesian products
Suppose $a_1, a_2, \dots, a_k$ are elements from some set, then the ordered $k$-tuple of $a_1, a_2, \dots, a_k$ is denoted by $(a_1, a_2, \dots, a_k)$.
> *Definition*: the Cartesian product $A_1 \times \dots \times A_k$ of sets $A_1, \dots , A_k$ is the set of all ordered $k$-tuples $(a_1, a_2, \dots, a_k)$ where $a_i \in A_i$ for $1 \leq i \leq k$.
>
>: If $A$ and $B$ are sets then
>
> $$
> A \times B = \big\{ (a,b) \;\big|\; a \in A,\; b \in B \big\}
> $$
Notice that if $A_i = A$ for all $1 \leq i \leq k$, then $A_1 \times \dots \times A_k$ is also denoted by $A^k$.
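Cartesian products of finite sets can be enumerated directly; a minimal sketch in Python using itertools:

```python
# The Cartesian product A × B and the power A^k.
from itertools import product

A, B = {1, 2}, {'a', 'b'}
assert set(product(A, B)) == {(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')}
assert len(list(product(A, repeat=3))) == len(A) ** 3  # A^k has |A|^k elements
```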
## Partitions
> *Definition*: let $S$ be a nonempty set. A collection $\Pi$ of subsets is called a partition if and only if
>
> * $\varnothing \notin \Pi$,
> * $\bigcup_{X \in \Pi} X = S$,
> * for all $X, Y \in \Pi$ with $X \neq Y$ we have $X \cap Y = \varnothing$.
For example the set $\{1,2, \dots , 10\}$ can be partitioned into the sets $\{1,2,3\}$, $\{4,5\}$ and $\{6,7,8,9,10\}$.
## Quantifiers
> *Definitions*: the universal quantifier "for all" is denoted by $\forall$ and the existential quantifier "there exists" is denoted by $\exists$.
<br>
> *Proposition* **- De Morgan's rule**: the statement
>
> $$
> \neg (\forall x \in X \;[P(x)])
> $$
>
> is equivalent to the statement
>
> $$
> \exists x \in X \;[\neg (P(x))].
> $$
>
> The statement
>
> $$
> \neg (\exists x \in X \;[P(x)])
> $$
>
> is equivalent to the statement
>
> $$
> \forall x \in X \; [\neg (P(x))].
> $$
??? note "*Proof*:"
Will be added later.