port from mathematics-physics notes

Luc Bijl 2025-08-26 15:48:53 +02:00
parent a4e106ce02
commit c009ea53f0
124 changed files with 13224 additions and 0 deletions

@@ -0,0 +1,99 @@
# Direct sums
> *Definition 1*: in a metric space $(X,d)$, the **distance** $\delta$ from an element $x \in X$ to a nonempty subset $M \subset X$ is defined as
>
> $$
> \delta = \inf_{\tilde y \in M} d(x,\tilde y).
> $$
In a normed space $(X, \|\cdot\|)$ this becomes
$$
\delta = \inf_{\tilde y \in M} \|x - \tilde y\|.
$$
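The infimum above can be illustrated numerically. The following Python sketch (the set $M$, here the unit circle in $\mathbb{R}^2$, and the sampling grid are choices made for this example) approximates $\delta$ by minimising $\|x - \tilde y\|$ over finitely many points of $M$; for the unit circle the exact distance is $|\,\|x\| - 1\,|$.

```python
import math

def dist_to_set(x, M):
    # delta = inf over y in M of ||x - y||, approximated over a finite sample of M
    return min(math.dist(x, y) for y in M)

# sample the unit circle in R^2
circle = [(math.cos(t), math.sin(t))
          for t in (2 * math.pi * k / 10000 for k in range(10000))]

x = (3.0, 4.0)                  # ||x|| = 5
delta = dist_to_set(x, circle)  # exact distance is ||x|| - 1 = 4
```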
> *Definition 2*: let $X$ be a vector space and let $x, y \in X$, the **line segment** $l$ between the vectors $x$ and $y$ is defined as
>
> $$
> l = \{z \in X \;|\; \exists \alpha \in [0,1]: z = \alpha x + (1 - \alpha) y\}.
> $$
Using definition 2, we may define the following.
> *Definition 3*: a subset $M \subset X$ of a vector space $X$ is **convex** if for all $x, y \in M$ the line segment between $x$ and $y$ is contained in $M$.
This definition agrees with the notion of convexity for the convex lenses discussed in [optics]().
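Convexity can be spot-checked numerically along sampled points of a line segment. A small Python sketch (the sets and the sample count are choices for this example): the closed unit disc is convex, while the unit circle is not.

```python
import math

def segment_in_set(x, y, contains, steps=100):
    # checks z = a*x + (1-a)*y in M for sampled a in [0, 1]
    for k in range(steps + 1):
        a = k / steps
        z = tuple(a * xi + (1 - a) * yi for xi, yi in zip(x, y))
        if not contains(z):
            return False
    return True

disc = lambda z: math.hypot(*z) <= 1.0              # closed unit disc: convex
circle = lambda z: abs(math.hypot(*z) - 1) < 1e-9   # unit circle: not convex

x, y = (1.0, 0.0), (-1.0, 0.0)
disc_ok = segment_in_set(x, y, disc)      # True
circle_ok = segment_in_set(x, y, circle)  # False: the midpoint (0,0) leaves the circle
```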
We can now provide the main theorem in this section.
> *Theorem 1*: let $X$ be an inner product space and let $M \subset X$ be a complete convex subset of $X$. Then for every $x \in X$ there exists a unique $y \in M$ such that
>
> $$
> \delta = \inf_{\tilde y \in M} \|x - \tilde y\| = \|x - y\|,
> $$
>
> if $M$ is a complete subspace $Y$ of $X$, then $x - y$ is orthogonal to $Y$.
??? note "*Proof*:"
Will be added later.
Now that the foundation is set, we may introduce direct sums.
> *Definition 4*: a vector space $X$ is a **direct sum** $X = Y \oplus Z$ of two subspaces $Y \subset X$ and $Z \subset X$ of $X$ if each $x \in X$ has a unique representation
>
> $$
> x = y + z,
> $$
>
> for $y \in Y$ and $z \in Z$.
Then $Z$ is called an *algebraic complement* of $Y$ in $X$ and vice versa, and $Y$, $Z$ is called a *complementary pair* of subspaces in $X$.
In the case $Z = \{z \in X \;|\; z \perp Y\}$, $Z$ is called the *orthogonal complement* or *annihilator* of $Y$, denoted by $Y^\perp$.
> *Proposition 1*: let $Y \subset X$ be any closed subspace of a Hilbert space $X$, then
>
> $$
> X = Y \oplus Y^\perp,
> $$
>
> with $Y^\perp = \{x\in X \;|\; x \perp Y\}$ the orthogonal complement of $Y$.
??? note "*Proof*:"
Will be added later.
The vector $y \in Y$ in the decomposition $x = y + z$ is called the *orthogonal projection* of $x$ on $Y$, which defines an operator $P: X \to Y: x \mapsto Px \overset{\mathrm{def}}= y$.
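In a finite-dimensional example the projection $P$ can be computed explicitly. A Python sketch (assuming an orthonormal basis of the subspace $Y$, here the $x_1 x_2$-plane in $\mathbb{R}^3$), verifying that $x - Px$ is orthogonal to $Y$:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(x, basis):
    # orthogonal projection of x onto span(basis); basis assumed orthonormal
    y = [0.0] * len(x)
    for e in basis:
        c = dot(x, e)
        y = [yi + c * ei for yi, ei in zip(y, e)]
    return y

# Y = the x1-x2 plane in R^3, a closed subspace
Y = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (3.0, 4.0, 5.0)
y = project(x, Y)                       # Px = (3, 4, 0)
z = [xi - yi for xi, yi in zip(x, y)]   # x - Px = (0, 0, 5)

# x - Px is orthogonal to Y, and x = y + z is the direct-sum decomposition
orth = all(abs(dot(z, e)) < 1e-12 for e in Y)
```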
> *Lemma 1*: let $Y \subset X$ be a closed subspace of a Hilbert space $X$ and let $P: X \to Y$ be the orthogonal projection operator, then we have
>
> 1. $P$ is a bounded linear operator,
> 2. $\|P\| = 1$ if $Y \neq \{0\}$,
> 3. $\mathscr{N}(P) = \{x \in X \;|\; Px = 0\} = Y^\perp$.
??? note "*Proof*:"
Will be added later.
> *Lemma 2*: if $Y$ is a closed subspace of a Hilbert space $X$, then $Y = Y^{\perp \perp}$.
??? note "*Proof*:"
Will be added later.
Then it follows that $X = Y^\perp \oplus Y^{\perp \perp}$.
??? note "*Proof*:"
Will be added later.
> *Lemma 3*: for every non-empty subset $M \subset X$ of a Hilbert space $X$ we have
>
> $$
> \mathrm{span}(M) \text{ is dense in } X \iff M^\perp = \{0\}.
> $$
??? note "*Proof*:"
Will be added later.

@@ -0,0 +1,122 @@
# Inner product spaces
> *Definition 1*: a vector space $X$ over a field $F$ is an **inner product space** if an **inner product** $\langle \cdot, \cdot \rangle: X \times X \to F$ is defined on $X$ satisfying
>
> 1. $\forall x \in X: \langle x, x \rangle \geq 0$,
> 2. $\langle x, x \rangle = 0 \iff x = 0$,
> 3. $\forall x, y \in X: \langle x, y \rangle = \overline{\langle y, x \rangle}$,
> 4. $\forall x, y \in X, \alpha \in F: \langle \alpha x, y \rangle = \alpha \langle x, y \rangle$,
> 5. $\forall x, y, z \in X: \langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$.
Similar to the case in normed spaces we have the following proposition.
> *Proposition 1*: an inner product $\langle \cdot, \cdot \rangle$ on a vector space $X$ defines a norm $\|\cdot\|$ on $X$ given by
>
> $$
> \|x\| = \sqrt{\langle x, x \rangle},
> $$
>
> for all $x \in X$ and is called the **norm induced by the inner product**.
??? note "*Proof*:"
Will be added later.
Proposition 1 makes an inner product space a normed space, and hence also a metric space, referring to proposition 1 in normed spaces.
> *Definition 2*: a **Hilbert space** $H$ is a complete inner product space with its metric induced by the inner product.
Definition 2 makes a Hilbert space also a Banach space, using proposition 1.
## Properties of inner product spaces
> *Proposition 2*: let $(X, \langle \cdot, \cdot \rangle)$ be an inner product space, then
>
> $$
> \| x + y \|^2 + \| x - y \|^2 = 2\big(\|x\|^2 + \|y\|^2\big),
> $$
>
> for all $x, y \in X$.
??? note "*Proof*:"
Will be added later.
Proposition 2 is also called the parallelogram identity.
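The parallelogram identity is easy to verify numerically; a Python sketch with randomly chosen vectors in $\mathbb{R}^5$ (the dimension and seed are arbitrary choices for this example):

```python
import math, random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(5)]
y = [random.uniform(-1, 1) for _ in range(5)]

# ||x + y||^2 + ||x - y||^2 = 2(||x||^2 + ||y||^2)
lhs = (norm([a + b for a, b in zip(x, y)]) ** 2
       + norm([a - b for a, b in zip(x, y)]) ** 2)
rhs = 2 * (norm(x) ** 2 + norm(y) ** 2)
```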
> *Lemma 1*: let $(X, \langle \cdot, \cdot \rangle)$ be an inner product space, then
>
> 1. $\forall x, y \in X: |\langle x, y \rangle| \leq \|x\| \cdot \|y\|$,
> 2. $\forall x, y \in X: \|x + y\| \leq \|x\| + \|y\|$.
??? note "*Proof*:"
Will be added later.
Statement 1 in lemma 1 is known as the Schwarz inequality and statement 2 as the triangle inequality; both will be used throughout the section on inner product spaces.
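Both inequalities can be spot-checked numerically; a Python sketch over random vectors in $\mathbb{R}^4$ (dimension, seed and sample size are arbitrary choices):

```python
import math, random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

random.seed(1)
pairs = [([random.gauss(0, 1) for _ in range(4)],
          [random.gauss(0, 1) for _ in range(4)]) for _ in range(1000)]

# |<x, y>| <= ||x|| ||y||  (Schwarz)
schwarz_ok = all(abs(dot(x, y)) <= norm(x) * norm(y) + 1e-12 for x, y in pairs)
# ||x + y|| <= ||x|| + ||y||  (triangle)
triangle_ok = all(norm([a + b for a, b in zip(x, y)]) <= norm(x) + norm(y) + 1e-12
                  for x, y in pairs)
```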
> *Lemma 2*: let $(X, \langle \cdot, \cdot \rangle)$ be an inner product space and let $(x_n)_{n \in \mathbb{N}}$ and $(y_n)_{n \in \mathbb{N}}$ be sequences in $X$, if we have $x_n \to x$ and $y_n \to y$ as $n \to \infty$, then
>
> $$
> \lim_{n \to \infty} \langle x_n, y_n \rangle = \langle x, y \rangle.
> $$
??? note "*Proof*:"
Will be added later.
## Completion
> *Definition 3*: an **isomorphism** $T$ of an inner product space $(X, \langle \cdot, \cdot \rangle)_X$ onto an inner product space $(\tilde X, \langle \cdot, \cdot \rangle)_{\tilde X}$ over the same field $F$ is a bijective linear operator $T: X \to \tilde X$ which preserves the inner product
>
> $$
> \langle Tx, Ty \rangle_{\tilde X} = \langle x, y \rangle_X,
> $$
>
> for all $x, y \in X$.
As a first application of lemma 2, let us prove the following.
> *Theorem 1*: for every inner product space $(X, \langle \cdot, \cdot \rangle)_X$ there exists a Hilbert space $(\tilde X, \langle \cdot, \cdot \rangle)_{\tilde X}$ that contains a subspace $W$ that satisfies the following conditions
>
> 1. $W$ is an inner product space isomorphic with $X$.
> 2. $W$ is dense in $\tilde X$.
??? note "*Proof*:"
Will be added later.
A subspace $M$ of an inner product space $X$ is defined as a vector subspace of $X$ equipped with the inner product of $X$ restricted to $M \times M$.
> *Proposition 3*: let $Y$ be a subspace of a Hilbert space $X$, then
>
> 1. $Y$ is complete $\iff$ $Y$ is closed in $X$,
> 2. if $Y$ is finite-dimensional, then $Y$ is complete,
> 3. $Y$ is separable if $X$ is separable.
??? note "*Proof*:"
Will be added later.
## Orthogonality
> *Definition 4*: let $(X, \langle \cdot, \cdot \rangle)$ be an inner product space, a vector $x \in X$ is **orthogonal** to a vector $y \in X$ if
>
> $$
> \langle x, y \rangle = 0,
> $$
>
> and we write $x \perp y$.
Furthermore, we can also say that $x$ and $y$ *are orthogonal*.
> *Definition 5*: let $(X, \langle \cdot, \cdot \rangle)$ be an inner product space and let $A, B \subset X$ be subspaces of $X$. Then $A$ is **orthogonal** to $B$ if for every $x \in A$ and $y \in B$ we have
>
> $$
> \langle x, y \rangle = 0,
> $$
>
> and we write $A \perp B$.
Similarly, we may state that $A$ and $B$ *are orthogonal*.

@@ -0,0 +1,95 @@
# Operator classes
## Hilbert-adjoint operator
> *Definition 1*: let $(X, \langle \cdot, \cdot \rangle_X)$ and $(Y, \langle \cdot, \cdot \rangle_Y)$ be Hilbert spaces over the field $F$ and let $T: X \to Y$ be a bounded linear operator. The **Hilbert-adjoint operator** $T^*$ of $T$ is the operator $T^*: Y \to X$ such that for all $x \in X$ and $y \in Y$
>
> $$
> \langle Tx, y \rangle_Y = \langle x, T^* y \rangle_X.
> $$
We should first prove that for a given $T$ such a $T^*$ exists.
> *Proposition 1*: the Hilbert-adjoint operator $T^*$ of $T$ exists, is unique and is a bounded linear operator with norm
>
> $$
> \|T^*\| = \|T\|.
> $$
??? note "*Proof*:"
Will be added later.
The Hilbert-adjoint operator has the following properties.
> *Proposition 2*: let $T,S: X \to Y$ be bounded linear operators, then
>
> 1. $\forall x \in X, y \in Y: \langle T^* y, x \rangle_X = \langle y, Tx \rangle_Y$,
> 2. $(S + T)^* = S^* + T^*$,
> 3. $\forall \alpha \in F: (\alpha T)^* = \overline \alpha T^*$,
> 4. $(T^*)^* = T$,
> 5. $\|T^* T\| = \|T T^*\| = \|T\|^2$,
> 6. $T^*T = 0 \iff T = 0$,
> 7. $(ST)^* = T^* S^*, \text{ when } X = Y$.
??? note "*Proof*:"
Will be added later.
## Self-adjoint operator
> *Definition 2*: a bounded linear operator $T: X \to X$ on a Hilbert space $X$ is **self-adjoint** if
>
> $$
> T^* = T.
> $$
If a basis for $\mathbb{C}^n$ $(n \in \mathbb{N})$ is given and a linear operator on $\mathbb{C}^n$ is represented by a matrix, then its Hilbert-adjoint operator is represented by the complex conjugate transpose of that matrix (the Hermitian conjugate).
Propositions 3, 4 and 5 pose some interesting results on self-adjoint operators.
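This matrix picture can be verified numerically; a Python sketch with a small complex matrix $T$ (chosen arbitrarily for this example), checking $\langle Tx, y \rangle = \langle x, T^* y \rangle$ for the conjugate transpose $T^*$:

```python
def inner(u, v):
    # complex inner product, conjugate-linear in the second argument
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def adjoint(A):
    # Hermitian conjugate: complex conjugate transpose
    n, m = len(A), len(A[0])
    return [[A[i][j].conjugate() for i in range(n)] for j in range(m)]

T = [[1 + 2j, 3j], [0.5, 4 - 1j]]
Ts = adjoint(T)
x, y = [1 + 1j, 2 - 1j], [0 + 1j, 3 + 0j]

lhs = inner(matvec(T, x), y)    # <Tx, y>
rhs = inner(x, matvec(Ts, y))   # <x, T*y>
```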
> *Proposition 3*: let $T: X \to X$ be a bounded linear operator on a Hilbert space $(X, \langle \cdot, \cdot \rangle_X)$ over the field $\mathbb{C}$, then
>
> $$
> T \text{ is self-adjoint} \iff \forall x \in X: \langle Tx, x \rangle \in \mathbb{R}.
> $$
??? note "*Proof*:"
Will be added later.
> *Proposition 4*: the product of two bounded self-adjoint linear operators $T$ and $S$ on a Hilbert space is self-adjoint if and only if
>
> $$
> ST = TS.
> $$
??? note "*Proof*:"
Will be added later.
The product of two commuting bounded self-adjoint operators is therefore self-adjoint.
> *Proposition 5*: let $(T_n)_{n \in \mathbb{N}}$ be a sequence of bounded self-adjoint operators $T_n: X \to X$ on a Hilbert space $X$. If $T_n \to T$ as $n \to \infty$, then $T$ is a bounded self-adjoint linear operator on $X$.
??? note "*Proof*:"
Will be added later.
## Unitary operator
> *Definition 3*: a bounded linear operator $T: X \to X$ on a Hilbert space $X$ is **unitary** if $T$ is bijective and $T^* = T^{-1}$.
A bounded unitary linear operator has the following properties.
> *Proposition 6*: let $U, V: X \to X$ be bounded unitary linear operators on a Hilbert space $X$, then
>
> 1. $U$ is isometric,
> 2. $\|U\| = 1 \text{ if } X \neq \{0\}$,
> 3. $UV$ is unitary,
> 4. $U$ is normal, that is $U U^* = U^* U$,
> 5. $T \in \mathscr{B}(X,X)$ is unitary $\iff$ $T$ is isometric and surjective.
??? note "*Proof*:"
Will be added later.
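Some of these properties can be checked on a concrete matrix; a Python sketch with a $2 \times 2$ unitary matrix (chosen for this example so its columns are orthonormal), verifying the isometry property and $U U^* = I$:

```python
import cmath, math

def inner(u, v):
    # complex inner product, conjugate-linear in the second argument
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

# a 2x2 unitary matrix: a rotation combined with phases
t = 0.7
U = [[cmath.exp(1j * t) * math.cos(t), -math.sin(t)],
     [math.sin(t), cmath.exp(-1j * t) * math.cos(t)]]

x = [1 + 2j, 3 - 1j]
Ux = matvec(U, x)
# U is isometric: ||Ux||^2 = ||x||^2
isometric = abs(inner(Ux, Ux).real - inner(x, x).real) < 1e-12

# U U* = I, so in particular U is normal
Ustar = [[U[i][j].conjugate() for i in range(2)] for j in range(2)]
prod = [[sum(U[i][k] * Ustar[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
identity = all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
               for i in range(2) for j in range(2))
```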

@@ -0,0 +1,65 @@
# Orthonormal sets
> *Definition 1*: an **orthogonal set** $M$ in an inner product space $X$ is a subset $M \subset X$ whose elements are pairwise orthogonal.
Pairwise orthogonality means that for all $x, y \in M$: $x \neq y \implies \langle x, y \rangle = 0$.
> *Definition 2*: an **orthonormal set** $M$ in an inner product space $X$ is an orthogonal set in $X$ whose elements have norm 1.
That is for all $x, y \in M$:
$$
\langle x, y \rangle = \begin{cases}0 &\text{if } x \neq y, \\ 1 &\text{if } x = y.\end{cases}
$$
> *Lemma 1*: an orthonormal set is linearly independent.
??? note "*Proof*:"
Will be added later.
In the case that an orthogonal or orthonormal set is countable, it can be arranged in a sequence and is then called an *orthogonal* or *orthonormal sequence*.
> *Theorem 1*: let $(e_n)_{n \in \mathbb{N}}$ be an orthonormal sequence in an inner product space $(X, \langle \cdot, \cdot \rangle)$, then
>
> $$
> \sum_{n=1}^\infty |\langle x, e_n \rangle|^2 \leq \|x\|^2,
> $$
>
> for all $x \in X$.
??? note "*Proof*:"
Will be added later.
Theorem 1 is known as the Bessel inequality, and the numbers $\langle x, e_n \rangle$ are called the Fourier coefficients of $x$ with respect to the orthonormal sequence $(e_n)_{n \in \mathbb{N}}$.
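The Bessel inequality can be illustrated in $\mathbb{R}^3$; a Python sketch with an orthonormal sequence of only two vectors (chosen for this example, so the inequality is strict for the chosen $x$):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# an orthonormal sequence of two vectors in R^3 (it does not span R^3)
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (1.0, 2.0, 2.0)

fourier = [dot(x, ek) for ek in e]        # Fourier coefficients <x, e_k>
bessel_sum = sum(c * c for c in fourier)  # 1 + 4 = 5
norm_sq = dot(x, x)                       # 9: the inequality is strict here
```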
## Orthonormalisation process
Let $(x_n)_{n \in \mathbb{N}}$ be a linearly independent sequence in an inner product space $(X, \langle \cdot, \cdot \rangle)$, then we can use the **Gram-Schmidt process** to determine the corresponding orthonormal sequence $(e_n)_{n \in \mathbb{N}}$.
Let $e_1 = \frac{1}{\|x_1\|} x_1$ be the first step and let $e_n = \frac{1}{\|v_n\|} v_n$ be the $n$th step with
$$
v_n = x_n - \sum_{k=1}^{n-1} \langle x_n, e_k \rangle e_k.
$$
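The process above can be sketched in Python (for real vectors, assuming the input list is linearly independent):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    # turns a linearly independent list of vectors into an orthonormal list
    es = []
    for x in xs:
        v = list(x)
        for e in es:
            c = dot(x, e)  # <x_n, e_k>
            v = [vi - c * ei for vi, ei in zip(v, e)]
        n = math.sqrt(dot(v, v))
        es.append([vi / n for vi in v])
    return es

es = gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)])
# pairwise inner products are 0 and each norm is 1
ortho = all(abs(dot(es[i], es[j]) - (1.0 if i == j else 0.0)) < 1e-12
            for i in range(3) for j in range(3))
```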
## Properties
> *Proposition 1*: let $(e_n)_{n \in \mathbb{N}}$ be an orthonormal sequence in a Hilbert space $(X, \langle \cdot, \cdot \rangle)$ and let $(\alpha_n)_{n \in \mathbb{N}}$ be a sequence in the field of $X$, then
>
> 1. the series $\sum_{n=1}^\infty \alpha_n e_n$ is convergent in $X$ $\iff$ the series $\sum_{n=1}^\infty |\alpha_n|^2$ is convergent,
> 2. if the series $\sum_{n=1}^\infty \alpha_n e_n$ is convergent in $X$ and $s = \sum_{n=1}^\infty \alpha_n e_n$, then $\alpha_n = \langle s, e_n \rangle$,
> 3. the series $\sum_{n=1}^\infty \langle x, e_n \rangle e_n$ is convergent in $X$ for all $x \in X$.
??? note "*Proof*:"
Will be added later.
Furthermore, we have the following.
> *Proposition 2*: let $M$ be an orthonormal set in an inner product space $(X, \langle \cdot, \cdot \rangle)$, then any $x \in X$ can have at most countably many nonzero Fourier coefficients $\langle x, e_k \rangle$ for $e_k \in M$, where $k$ runs over the (possibly uncountable) index set $I$ of $M$.
??? note "*Proof*:"
Will be added later.

@@ -0,0 +1,68 @@
# Representations of functionals
> *Lemma 1*: let $(X, \langle \cdot, \cdot \rangle)$ be an inner product space, then
>
> 1. if $\langle x, z \rangle = \langle y, z \rangle$ for all $z \in X$, then $x = y$,
> 2. if $\langle x, z \rangle = 0$ for all $z \in X$, then $x = 0$.
??? note "*Proof*:"
Will be added later.
Lemma 1 will be used in the following theorem, known as the Riesz representation theorem.
> *Theorem 1*: for every bounded linear functional $f$ on a Hilbert space $(X, \langle \cdot, \cdot \rangle)$, there exists a $z \in X$ such that
>
> $$
> f(x) = \langle x, z \rangle,
> $$
>
> for all $x \in X$, with $z$ uniquely determined by $f$ and $\|z\| = \|f\|$.
??? note "*Proof*:"
Will be added later.
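In $\mathbb{R}^n$ the representer $z$ can be read off from the values of $f$ on the standard basis, since $z(j) = f(e_j)$ in the real case. A Python sketch with a hypothetical functional $f$ chosen for this example:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# a bounded linear functional on R^3 (a hypothetical example)
def f(x):
    return 2 * x[0] - x[1] + 3 * x[2]

# recover the representer z from the values of f on the standard basis
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
z = [f(e) for e in basis]   # z = [2, -1, 3]

x = (0.5, 1.5, -2.0)
agree = abs(f(x) - dot(x, z)) < 1e-12   # f(x) = <x, z>
```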
## Sesquilinear form
> *Definition 1*: let $X$ and $Y$ be vector spaces over the field $F$. A **sesquilinear** form $h$ on $X \times Y$ is an operator $h: X \times Y \to F$ satisfying the following conditions
>
> 1. $\forall x_{1,2} \in X, y \in Y: h(x_1 + x_2, y) = h(x_1, y) + h(x_2, y)$.
> 2. $\forall x \in X, y_{1,2} \in Y: h(x, y_1 + y_2) = h(x, y_1) + h(x, y_2)$.
> 3. $\forall x \in X, y \in Y, \alpha \in F: h(\alpha x, y) = \alpha h(x,y)$.
> 4. $\forall x \in X, y \in Y, \beta \in F: h(x, \beta y) = \overline \beta h(x,y)$.
Hence, $h$ is linear in the first argument and conjugate linear in the second argument; $h$ is bilinear only when the field $F$ is real.
> *Definition 2*: let $X$ and $Y$ be normed spaces over the field $F$ and let $h: X \times Y \to F$ be a sesquilinear form, then $h$ is a **bounded sesquilinear form** if
>
> $$
> \exists c \geq 0: |h(x,y)| \leq c \|x\| \|y\|,
> $$
>
> for all $(x,y) \in X \times Y$ and the norm of $h$ is given by
>
> $$
> \|h\| = \sup_{\substack{x \in X \backslash \{0\} \\ y \in Y \backslash \{0\}}} \frac{|h(x,y)|}{\|x\| \|y\|} = \sup_{\|x\|=\|y\|=1} |h(x,y)|.
> $$
For example, the inner product is sesquilinear and bounded.
> *Theorem 2*: let $(X, \langle \cdot, \cdot \rangle_X)$ and $(Y, \langle \cdot, \cdot \rangle_Y)$ be Hilbert spaces over the field $F$ and let $h: X \times Y \to F$ be a bounded sesquilinear form. Then there exist bounded linear operators $T: X \to Y$ and $S: Y \to X$, such that
>
> $$
> h(x,y) = \langle Tx, y \rangle_Y = \langle x, Sy \rangle_X,
> $$
>
> for all $(x,y) \in X \times Y$, with $T$ and $S$ uniquely determined by $h$ with norms $\|T\| = \|S\| = \|h\|$.
??? note "*Proof*:"
Will be added later.

@@ -0,0 +1,58 @@
# Total sets
> *Definition 1*: a **total set** in a normed space $(X, \|\cdot\|)$ is a subset $M \subset X$ whose span is dense in $X$.
Accordingly, an orthonormal set in $X$ which is total in $X$ is called a total orthonormal set in $X$.
> *Proposition 1*: let $M \subset X$ be a subset of an inner product space $(X, \langle \cdot, \cdot \rangle)$, then
>
> 1. if $M$ is total in $X$, then $M^\perp = \{0\}$.
> 2. if $X$ is complete and $M^\perp = \{0\}$ then $M$ is total in $X$.
??? note "*Proof*:"
Will be added later.
## Total orthonormal sets
> *Theorem 1*: an orthonormal sequence $(e_n)_{n \in \mathbb{N}}$ in a Hilbert space $(X, \langle \cdot, \cdot \rangle)$ is total in $X$ if and only if
>
> $$
> \sum_{n=1}^\infty |\langle x, e_n \rangle|^2 = \|x\|^2,
> $$
>
> for all $x \in X$.
??? note "*Proof*:"
Will be added later.
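For a total orthonormal sequence the Bessel inequality becomes the equality of theorem 1 (the Parseval relation). A Python sketch with the standard basis of $\mathbb{R}^3$, which is total:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# the standard basis is a total orthonormal set in R^3
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
x = (1.0, 2.0, 2.0)

parseval = sum(dot(x, ek) ** 2 for ek in e)  # 1 + 4 + 4 = 9
norm_sq = dot(x, x)                          # 9: equality, since the set is total
```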
> *Lemma 1*: in every Hilbert space $X \neq \{0\}$ there exists a total orthonormal set.
??? note "*Proof*:"
Will be added later.
> *Theorem 2*: all total orthonormal sets in a Hilbert space have the same cardinality.
??? note "*Proof*:"
Will be added later.
This cardinality is called the Hilbert dimension or the orthogonal dimension of the Hilbert space.
> *Theorem 3*: let $X$ be a Hilbert space, then
>
> 1. if $X$ is separable, every orthonormal set in $X$ is countable.
> 2. if $X$ contains a countable total orthonormal set, then $X$ is separable.
??? note "*Proof*:"
Will be added later.
> *Theorem 4*: two Hilbert spaces $X$ and $\tilde X$ over the same field are isomorphic if and only if they have the same Hilbert dimension.
??? note "*Proof*:"
Will be added later.

@@ -0,0 +1,243 @@
# Completeness
> *Definition 1*: a sequence $(x_n)_{n \in \mathbb{N}}$ in a metric space $(X,d)$ is a **Cauchy sequence** if
>
> $$
> \forall \varepsilon > 0 \exists N \in \mathbb{N} \forall n,m > N: \quad d(x_n, x_m) < \varepsilon.
> $$
A convergent sequence $(x_n)_{n \in \mathbb{N}}$ in a metric space $(X,d)$ is always a Cauchy sequence since
$$
\forall \varepsilon > 0 \exists N \in \mathbb{N}: \quad d(x_n, x) < \frac{\varepsilon}{2},
$$
for all $n > N$. By axiom 4 of the definition of a metric space we have for $m, n > N$
$$
d(x_m, x_n) \leq d(x_m, x) + d(x, x_n) < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon,
$$
showing that $(x_n)$ is Cauchy.
> *Definition 2*: a metric space $(X,d)$ is **complete** if every Cauchy sequence in $X$ is convergent.
In a complete metric space the Cauchy property is therefore equivalent to convergence.
> *Proposition 1*: let $M \subset X$ be a nonempty subset of a metric space $(X,d)$ and let $\overline M$ be the closure of $M$, then
>
> 1. $x \in \overline M \iff \exists (x_n)_{n \in \mathbb{N}} \text{ in } M: x_n \to x$,
> 2. $M \text{ is closed } \iff M = \overline M$.
??? note "*Proof*:"
To prove statement 1, let $x \in \overline M$. If $x \notin M$ then $x$ is an accumulation point of $M$. Hence, for each $n \in \mathbb{N}$ the ball $B(x,\frac{1}{n})$ contains an $x_n \in M$ and $x_n \to x$ since $\frac{1}{n} \to 0$ as $n \to \infty$. Conversely, if $(x_n)_{n \in \mathbb{N}}$ is in $M$ and $x_n \to x$, then $x \in M$ or every neighbourhood of $x$ contains points $x_n \neq x$, so that $x$ is an accumulation point of $M$. Hence $x \in \overline M$.
Statement 2 follows from statement 1.
Statement 2 is equivalent to the following: if $(x_n)$ is in $M$ and $x_n \to x$, then $x \in M$.
> *Proposition 2*: let $M \subset X$ be a subset of a complete metric space $(X,d)$, then
>
> $$
> M \text{ is complete} \iff M \text{ is a closed subset of } X.
> $$
??? note "*Proof*:"
Let $M$ be complete, by proposition 1 statement 1 we have that
$$
\forall x \in \overline M \exists (x_n)_{n \in \mathbb{N}} \text{ in } M: x_n \to x.
$$
Since $(x_n)$ is Cauchy and $M$ is complete, $(x_n)$ converges in $M$, with the limit being unique by statement 1 in [lemma 1](). Hence, $x \in M$, which proves that $M$ is closed because $x \in \overline M$ was chosen arbitrarily.
Conversely, let $M$ be closed and $(x_n)$ Cauchy in $M$. Then $x_n \to x \in X$ which implies that $x \in \overline M$ by statement 1 in proposition 1, and $x \in M$ since $M = \overline M$ by assumption. Hence, the arbitrary Cauchy sequence $(x_n)$ converges in $M$.
> *Proposition 3*: let $T: X \to Y$ be a map from a metric space $(X,d)$ to a metric space $(Y,\tilde d)$, then
>
> $$
> T \text{ is continuous in } x_0 \in X \iff x_n \to x_0 \implies T(x_n) \to T(x_0),
> $$
>
> for any sequence $(x_n)_{n \in \mathbb{N}}$ in $X$ as $n \to \infty$.
??? note "*Proof*:"
Suppose $T$ is continuous at $x_0$, then for a given $\varepsilon > 0$ there is a $\delta > 0$ such that
$$
d(x, x_0) < \delta \implies \tilde d(Tx, Tx_0) < \varepsilon.
$$
Let $x_n \to x_0$ then
$$
\exists N \in \mathbb{N} \forall n > N: \quad d(x_n, x_0) < \delta.
$$
Hence,
$$
\forall n > N: \quad \tilde d(Tx_n, Tx_0) < \varepsilon,
$$
which means that $T(x_n) \to T(x_0)$.
Conversely, suppose that $x_n \to x_0 \implies T(x_n) \to T(x_0)$ and $T$ is not continuous. Then
$$
\exists \varepsilon > 0: \forall \delta > 0 \exists x \neq x_0: \quad d(x, x_0) < \delta \quad \text{ however } \quad \tilde d(Tx, Tx_0) \geq \varepsilon,
$$
in particular, for $\delta = \frac{1}{n}$ there is an $x_n$ satisfying
$$
d(x_n, x_0) < \frac{1}{n} \quad \text{ however } \quad \tilde d(Tx_n, Tx_0) \geq \varepsilon.
$$
Clearly $x_n \to x_0$ but $(Tx_n)$ does not converge to $Tx_0$, which contradicts the assumption $x_n \to x_0 \implies Tx_n \to Tx_0$.
## Completeness proofs
To show that a metric space $(X,d)$ is complete, one has to show that every Cauchy sequence in $(X,d)$ has a limit in $X$. This depends explicitly on the metric on $X$.
The steps in a completeness proof are as follows
1. take an arbitrary Cauchy sequence $(x_n)_{n \in \mathbb{N}}$ in $(X,d)$,
2. construct for this sequence a candidate limit $x$,
3. prove that $x \in X$,
4. prove that $x_n \to x$ with respect to metric $d$.
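The steps above can be illustrated in $(\mathbb{R}, |\cdot|)$ with the partial sums of $\sum_k 1/k!$, whose candidate limit is $e$; a Python sketch (the cut-off indices are arbitrary choices):

```python
import math

# step 1: the partial sums s_n = sum_{k=0}^{n} 1/k! form a Cauchy sequence in (R, |.|)
def s(n):
    return sum(1 / math.factorial(k) for k in range(n + 1))

# step 2: the candidate limit is e; steps 3-4: e lies in R and |s_n - e| -> 0
cauchy_gap = abs(s(25) - s(20))   # small: the sequence is Cauchy
error = abs(s(25) - math.e)       # small: s_n converges to e in R
```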
> *Proposition 4*: the Euclidean space $\mathbb{R}^n$ with $n \in \mathbb{N}$ and the metric $d$ defined by
>
> $$
> d(x,y) = \sqrt{\sum_{j=1}^n \big(x(j) - y(j) \big)^2},
> $$
>
> for all $x,y \in \mathbb{R}^n$ is complete.
??? note "*Proof*:"
Let $(x_m)_{m \in \mathbb{N}}$ be a Cauchy sequence in $(\mathbb{R}^n, d)$, then we have
$$
\forall \varepsilon > 0 \exists N \in \mathbb{N} \forall m, k > N: \quad d(x_m, x_k) = \sqrt{\sum_{j=1}^n \big(x_m(j) - x_k(j) \big)^2} < \varepsilon,
$$
which obtains $|x_m(j) - x_k(j)| < \varepsilon$ for all $j \in \{1, \dots, n\}$.
This shows that $(x_m(j))_{m \in \mathbb{N}}$ is a Cauchy sequence in $\mathbb{R}$ for each fixed $j$. Since $\mathbb{R}$ is complete, the sequence converges: $x_m(j) \to x(j)$ as $m \to \infty$, and $x = \big(x(1), \dots, x(n)\big) \in \mathbb{R}^n$.
Letting $k \to \infty$ in the inequality above gives
$$
d(x_m, x) \leq \varepsilon \implies x_m \to x,
$$
which implies that $\mathbb{R}^n$ is complete.
A similar proof exists for the completeness of the Unitary space $\mathbb{C}^n$.
> *Proposition 5*: the space $C([a,b])$ of all **real-valued continuous functions** on a closed interval $[a,b]$ with $a<b \in \mathbb{R}$ with the metric $d$ defined by
>
> $$
> d(x,y) = \max_{t \in [a,b]} |x(t) - y(t)|,
> $$
>
> for all $x, y \in C$ is complete.
??? note "*Proof*:"
Let $(x_n)_{n \in \mathbb{N}}$ be a Cauchy sequence in $(C,d)$, then we have
$$
\forall \varepsilon > 0 \exists N \in \mathbb{N} \forall n, m > N: \quad d(x_n, x_m) = \max_{t \in [a,b]} |x_n(t) - x_m(t)| < \varepsilon,
$$
which obtains $|x_n(t) - x_m(t)| < \varepsilon$ for all $t \in [a,b]$.
This shows that $(x_m(t))_{m \in \mathbb{N}}$ for fixed $t \in [a,b]$ is a Cauchy sequence in $\mathbb{R}$. Since $\mathbb{R}$ is complete the sequence converges; $x_m(t) \to x(t)$ as $m \to \infty$.
Letting $m \to \infty$ we have
$$
d(x_n, x) = \max_{t \in [a,b]} | x_n(t) - x(t) | \leq \varepsilon,
$$
hence $|x_n(t) - x(t)| \leq \varepsilon$ for all $t \in [a,b]$, so $x_n \to x$ uniformly as $n \to \infty$. As a uniform limit of continuous functions, $x$ is continuous, so $x \in C$, which implies that $C$ is complete.
In contrast, $C$ with the metric $d$ defined by
$$
d(x,y) = \int_a^b |x(t) - y(t)| dt,
$$
for all $x,y \in C$ is incomplete.
??? note "*Proof*:"
Will be added later.
> *Proposition 6*: the space $l^p$ with $p \geq 1$ and the metric $d$ defined by
>
> $$
> d(x,y) = \Big(\sum_{j \in \mathbb{N}} | x(j) - y(j) |^p\Big)^\frac{1}{p},
> $$
>
> for all $x,y \in l^p$ is complete.
??? note "*Proof*:"
Let $(x_n)_{n \in \mathbb{N}}$ be a Cauchy sequence in $(l^p,d)$, then we have
$$
\forall \varepsilon > 0 \exists N \in \mathbb{N} \forall n, m > N: \quad d(x_n, x_m) = \Big(\sum_{j \in \mathbb{N}} |x_n(j) - x_m(j)|^p\Big)^\frac{1}{p} < \varepsilon,
$$
which obtains $|x_n(j) - x_m(j)| < \varepsilon$ for all $j \in \mathbb{N}$.
This shows that $(x_m(j))_{m \in \mathbb{N}}$ for fixed $j \in \mathbb{N}$ is a Cauchy sequence in $\mathbb{C}$. Since $\mathbb{C}$ is complete the sequence converges; $x_m(j) \to x(j)$ as $m \to \infty$.
Letting $m \to \infty$ we have
$$
d(x_n, x) = \Big(\sum_{j \in \mathbb{N}} |x_n(j) - x(j)|^p\Big)^\frac{1}{p} \leq \varepsilon,
$$
which implies that $x_n - x \in l^p$, hence $x = x_n - (x_n - x) \in l^p$ and $x_n \to x$ as $n \to \infty$, so $l^p$ is complete.
> *Proposition 7*: the space $l^\infty$ with the metric $d$ defined by
>
> $$
> d(x,y) = \sup_{j \in \mathbb{N}} | x(j) - y(j) |,
> $$
>
> for all $x,y \in l^\infty$ is complete.
??? note "*Proof*:"
Let $(x_n)_{n \in \mathbb{N}}$ be a Cauchy sequence in $(l^\infty,d)$, then we have
$$
\forall \varepsilon > 0 \exists N \in \mathbb{N} \forall n, m > N: \quad d(x_n, x_m) = \sup_{j \in \mathbb{N}} | x_n(j) - x_m(j) | < \varepsilon,
$$
which obtains $|x_n(j) - x_m(j)| < \varepsilon$ for all $j \in \mathbb{N}$.
This shows that $(x_m(j))_{m \in \mathbb{N}}$ for fixed $j \in \mathbb{N}$ is a Cauchy sequence in $\mathbb{C}$. Since $\mathbb{C}$ is complete the sequence converges; $x_m(j) \to x(j)$ as $m \to \infty$.
Letting $m \to \infty$ we have
$$
d(x_n, x) = \sup_{j \in \mathbb{N}} | x_n(j) - x(j) | \leq \varepsilon \implies |x_n(j) - x(j)| \leq \varepsilon,
$$
for all $j \in \mathbb{N}$. Since $x_n \in l^\infty$ there exists a $k_n \in \mathbb{R}: |x_n(j)| \leq k_n$ for all $j \in \mathbb{N}$. Hence
$$
|x(j)| \leq |x(j) - x_n(j)| + |x_n(j)| \leq \varepsilon + k_n,
$$
for all $j \in \mathbb{N}$, which implies that $x \in l^\infty$ and $x_n \to x$ as $n \to \infty$, obtaining that $l^\infty$ is complete.

@@ -0,0 +1,20 @@
# Completion
> *Definition 1*: let $(X,d)$ and $(\tilde X, \tilde d)$ be metric spaces, then
>
> 1. a mapping $T: X \to \tilde X$ is an **isometry** if $\forall x, y \in X: \tilde d(Tx, Ty) = d(x,y)$.
> 2. $(X,d)$ and $(\tilde X, \tilde d)$ are **isometric** if there exists a bijective isometry $T: X \to \tilde X$.
Hence, isometric spaces may differ at most by the nature of their points but are indistinguishable from the viewpoint of the metric.
In other words, the completion $(\tilde X, \tilde d)$ constructed in the theorem below is unique up to isometry.
> *Theorem 1*: for every metric space $(X,d)$ there exists a complete metric space $(\tilde X, \tilde d)$ that contains a subset $W$ that satisfies the following conditions
>
> 1. $W$ is a metric space isometric with $(X,d)$.
> 2. $W$ is dense in $\tilde X$.
??? note "*Proof*:"
Will be added later.

@@ -0,0 +1,59 @@
# Convergence
> *Definition 1*: a sequence $(x_n)_{n \in \mathbb{N}}$ in a metric space $(X,d)$ is **convergent** if there exists an $x \in X$ such that
>
> $$
> \lim_{n \to \infty} d(x_n, x) = 0.
> $$
>
> $x$ is the **limit** of $(x_n)$ and is denoted by
>
> $$
> \lim_{n \to \infty} x_n = x,
> $$
>
> or simply by $x_n \to x$, $(n \to \infty)$.
We say that $(x_n)$ *converges to* $x$ or *has the limit* $x$. If $(x_n)$ is not convergent then it is **divergent**.
We have that the limit of a convergent sequence must be a point of $X$.
> *Definition 2*: a non-empty subset $M \subset X$ of a metric space $(X,d)$ is **bounded** if there exists an $x_0 \in X$ and an $r > 0$ such that $M \subset B(x_0,r)$.
Furthermore, we call a sequence $(x_n)$ in $X$ a **bounded sequence** if the corresponding point set is a bounded subset of $X$.
> *Lemma 1*: let $(X,d)$ be a metric space then
>
> 1. a convergent sequence in $X$ is bounded and its limit is unique,
> 2. if $x_n \to x$ and $y_n \to y$ then $d(x_n, y_n) \to d(x,y)$, $(n \to \infty)$.
??? note "*Proof*:"
For statement 1, suppose that $x_n \to x$. Then, taking $\varepsilon = 1$, we can find an $N$ such that $d(x_n, x) < 1$ for all $n > N$. Hence, with $a = \max\{d(x_1, x), \dots, d(x_N, x)\}$ we have $d(x_n, x) < 1 + a$ for all $n \in \mathbb{N}$, which shows that $(x_n)$ is bounded. Suppose that $x_n \to x$ and $x_n \to z$, then by axiom 4 of the definition of a metric space we have
$$
0 \leq d(x, z) \leq d(x_n, x) + d(x_n, z) \to 0,
$$
as $n \to \infty$ and by axiom 2 of the definition of a metric space it follows that $x = z$.
For statement 2, we have that
$$
d(x_n,y_n) \leq d(x_n, x) + d(x, y) + d(y, y_n),
$$
by axiom 4 of the definition of a metric space. Hence we obtain
$$
d(x_n, y_n) - d(x, y) \leq d(x_n, x) + d(y_n, y),
$$
such that
$$
|d(x_n, y_n) - d(x, y)| \leq d(x_n, x) + d(y_n, y) \to 0
$$
as $n \to \infty$.

@@ -0,0 +1,82 @@
# Metric spaces
> *Definition 1*: a **metric space** is a pair $(X,d)$, where $X$ is a set and $d$ is a metric on $X$, which is a function on $X \times X$ such that
>
> 1. $d$ is real, finite and nonnegative,
> 2. $\forall x,y \in X: \quad d(x,y) = 0 \iff x = y$,
> 3. $\forall x,y \in X: \quad d(x,y) = d(y,x)$,
> 4. $\forall x,y,z \in X: \quad d(x,y) \leq d(x,z) + d(y,z)$.
The metric $d$ is also referred to as a distance function; for $x, y \in X$, $d(x,y)$ is the distance from $x$ to $y$.
## Examples of metric spaces
For the **Real line** $\mathbb{R}$ the usual metric is defined by
$$
d(x,y) = |x - y|,
$$
for all $x,y \in \mathbb{R}$. Obtaining a metric space $(\mathbb{R}, d)$.
??? note "*Proof*:"
Will be added later.
For the **Euclidean space** $\mathbb{R}^n$ with $n \in \mathbb{N}$, the usual metric is defined by
$$
d(x,y) = \sqrt{\sum_{j=1}^n (x(j) - y(j))^2},
$$
for all $x,y \in \mathbb{R}^n$ with $x = (x(j))$ and $y = (y(j))$. Obtaining a metric space $(\mathbb{R}^n, d)$.
??? note "*Proof*:"
Will be added later.
Similar examples exist for the complex plane $\mathbb{C}$ and the unitary space $\mathbb{C}^n$.
For the space $C([a,b])$ of all **real-valued continuous functions** on a closed interval $[a,b]$ with $a<b \in \mathbb{R}$ the metric may be defined by
$$
d(x,y) = \max_{t \in [a,b]} |x(t) - y(t)|,
$$
for all $x,y \in C([a,b])$. Obtaining a metric space $(C([a,b]), d)$.
??? note "*Proof*:"
Will be added later.
> *Definition 2*: let $l^p$ with $p \geq 1$ be the set of all sequences $x$ of complex numbers with the property that
>
> $$
> \sum_{j \in \mathbb{N}} | x(j) |^p \text{ is convergent}.
> $$
We have that a metric $d$ for $l^p$ may be defined by
$$
d(x,y) = \Big(\sum_{j \in \mathbb{N}} | x(j) - y(j) |^p\Big)^\frac{1}{p},
$$
for all $x,y \in l^p$.
??? note "*Proof*:"
Will be added later.
From definition 2 the sequence space $l^\infty$ follows, which is defined as the set of all bounded sequences of complex numbers. A metric $d$ of $l^\infty$ may be defined by
$$
d(x,y) = \sup_{j \in \mathbb{N}} | x(j) - y(j) |,
$$
for all $x, y \in l^\infty$.
??? note "*Proof*:"
Will be added later.
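Both metrics can be sketched on truncated (finitely supported) sequences; a Python check of the triangle inequality, axiom 4, for $d_p$ and $d_\infty$ (the sample sequences are arbitrary choices for this example):

```python
# metrics on sequences, here truncated to length 4
def d_p(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def d_inf(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

x, y, z = [1, 0, 2, 0], [0, 1, 0, 1], [2, 2, 2, 2]

# triangle inequality d(x,y) <= d(x,z) + d(z,y) for both metrics
tri_p = d_p(x, y, 2) <= d_p(x, z, 2) + d_p(z, y, 2) + 1e-12
tri_inf = d_inf(x, y) <= d_inf(x, z) + d_inf(z, y)
```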

@@ -0,0 +1,97 @@
# Topological notions
> *Definition 1*: let $(X,d)$ be a metric space and let $x_0 \in X$ and $r > 0$, the following may be defined
>
> 1. **open ball**: $B(x_0, r) = \{x \in X \;|\; d(x,x_0) < r\}$,
> 2. **closed ball**: $\tilde B(x_0,r) = \{x \in X \;|\; d(x,x_0) \leq r\}$,
> 3. **sphere**: $S(x_0,r) = \{x \in X \;|\; d(x,x_0) = r\}$.
In all three cases $x_0$ can be thought of as the center and $r$ as the radius.
> *Definition 2*: a subset $M \subset X$ of a metric space $(X,d)$ is **open** if $\forall x_0 \in M \exists r > 0: B(x_0,r) \subset M$.
>
> $M$ is **closed** if $X \backslash M$ is open.
Therefore, one may observe that an open ball is an open set and a closed ball is a closed set.
## Neighbourhoods
> *Definition 3*: let $(X,d)$ be a metric space and let $x_0 \in X$, then $B(x_0, \varepsilon)$ is an **$\varepsilon$-neighbourhood** of $x_0$ for some $\varepsilon > 0$.
Using definition 3 we may define the following.
> *Definition 4*: a **neighbourhood** of $x_0$ is a set that contains an $\varepsilon$-neighbourhood of $x_0$ for some $\varepsilon > 0$.
Therefore $x_0$ is an element of each of its neighbourhoods and if $N$ is a neighbourhood of $x_0$ and $N \subset M$, then $M$ is also a neighbourhood of $x_0$.
> *Definition 5*: let $(X,d)$ be a metric space and let $M \subset X$, a point $x_0 \in M$ is an **interior point** of $M$ if $M$ is a neighbourhood of $x_0$.
One may think of an interior point of a subset as a point that lies within the interior of $M$.
> *Definition 6*: let $(X,d)$ be a metric space and let $M \subset X$, the **interior** of $M$, denoted by $M^\circ$ is the set of all interior points of $M$.
One may observe that $M^\circ$ is open and is the largest open set contained in $M$.
> *Lemma 1*: let $(X,d)$ be a metric space and let $\mathscr{T}$ be the set of all open subsets of $X$, then
>
> 1. $\emptyset \in \mathscr{T} \land X \in \mathscr{T}$,
> 2. the union of a collection of sets in $\mathscr{T}$ is itself a set in $\mathscr{T}$,
> 3. the intersection of a finite collection of sets in $\mathscr{T}$ is a set in $\mathscr{T}$.
??? note "*Proof*:"
    Statement 1 follows by noting that $\emptyset$ is open, since $\emptyset$ has no elements, and that $X$ is open, since every ball about any of its points is contained in $X$.

    For statement 2 we have that any point $x$ of the union $U$ of open sets belongs to at least one of these sets $M$, and $M$ contains a ball $B$ about $x$. Then $B \subset U$, by the definition of a union.

    For statement 3 we have that if $y$ is any point of the intersection of open sets $M_1, \dots, M_n$ with $n \in \mathbb{N}$, then each $M_j$ contains a ball about $y$ and the smallest of these balls is contained in the intersection.

From statements 1 to 3 of *lemma 1* we may define a topological space $(X,\mathscr{T})$ to be a set $X$ together with a collection $\mathscr{T}$ of subsets of $X$ satisfying axioms 1 to 3. The set $\mathscr{T}$ is a topology for $X$, and it follows that a metric space is a topological space.
## Continuity
> *Definition 7*: let $(X,d)$ and $(Y,\tilde d)$ be metric spaces and let $T: X \to Y$ be a map. $T$ is **continuous in** $x_0 \in X$ if
>
> $$
> \forall \varepsilon > 0 \exists \delta > 0 \forall x \in X: \quad d(x,x_0) < \delta \implies \tilde d \big(T(x), T(x_0) \big) < \varepsilon.
> $$
>
> A mapping $T$ is **continuous** if it is continuous in all $x_0 \in X$.
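The quantifier string can be read off concretely: for $T(x) = x^2$ on $\mathbb{R}$, a given $x_0$ and $\varepsilon$, the choice $\delta = \min(1, \varepsilon/(2|x_0| + 1))$ works, since $|x^2 - x_0^2| = |x - x_0|\,|x + x_0| \leq \delta(2|x_0| + 1) \leq \varepsilon$ whenever $|x - x_0| < \delta \leq 1$. A throwaway check of this witness (an illustrative sketch, not part of the notes):

```python
# Epsilon-delta continuity of T(x) = x^2 at x0, with the explicit witness
# delta = min(1, eps / (2|x0| + 1)); sample points x with |x - x0| < delta
# and verify |T(x) - T(x0)| < eps for each of them.

def delta_for(x0, eps):
    return min(1.0, eps / (2 * abs(x0) + 1))

def check_continuity(x0, eps, samples=1000):
    d = delta_for(x0, eps)
    pts = [x0 + 0.999 * d * (2 * k / samples - 1) for k in range(samples + 1)]
    return all(abs(x * x - x0 * x0) < eps for x in pts)
```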
Continuous mappings can be characterized in terms of open sets as follows.
> *Theorem 1*: let $(X,d)$ and $(Y,\tilde d)$ be metric spaces, a mapping $T: X \to Y$ is continuous if and only if the inverse image of any open subset of $Y$ is an open subset of $X$.
??? note "*Proof*:"
    Suppose that $T$ is continuous. Let $S \subset Y$ be open and let $S_0$ be the inverse image of $S$. If $S_0 = \emptyset$, it is open. Let $S_0 \neq \emptyset$. For any $x_0 \in S_0$ let $y_0 = T(x_0)$. Since $S$ is open, it contains an $\varepsilon$-neighbourhood $N$ of $y_0$. Since $T$ is continuous, $x_0$ has a $\delta$-neighbourhood $N_0$ which is mapped into $N$. Since $N \subset S$, we have $N_0 \subset S_0$, so that $S_0$ is open because $x_0 \in S_0$ was arbitrary.

    Suppose that the inverse image of every open set in $Y$ is an open set in $X$. Then for every $x_0 \in X$ and any $\varepsilon$-neighbourhood $N$ of $T(x_0)$, the inverse image $N_0$ of $N$ is open, since $N$ is open, and $N_0$ contains $x_0$. Hence, $N_0$ also contains a $\delta$-neighbourhood of $x_0$, which is mapped into $N$ because $N_0$ is mapped into $N$. Consequently, $T$ is continuous at $x_0$. Since $x_0 \in X$ was chosen arbitrarily, $T$ is continuous.
## Accumulation points
> *Definition 8*: let $M \subset X$ be a subset of a metric space $(X,d)$. A point $x_0 \in X$ is an **accumulation point** of $M$ if
>
> $$
> \forall \varepsilon > 0 \exists y \in M \backslash \{x_0\}: d(x_0,y) < \varepsilon.
> $$
An accumulation point of a subset $M$ is also sometimes called a limit point of $M$, a name that reflects the limiting nature of such points.
> *Definition 9*: the set consisting of all points of $M$ and all accumulation points of $M$ is the **closure** of $M$, denoted by $\overline M$.
Therefore, $\overline M$ is the smallest closed set containing $M$.
> *Definition 10*: let $(X,d)$ be a metric space and let $M$ be a subset of $X$. The set $M$ is **dense** in $X$ if $\overline M = X$.
Hence if $M$ is dense in $X$, then every ball in $X$, no matter how small, will contain points of $M$.
> *Definition 11*: a metric space $(X,d)$ is **separable** if $X$ contains a countable subset $M$ that is dense in $X$.
For example the real line $\mathbb{R}$ is separable, since the set $\mathbb{Q}$ of all rational numbers is countable and is dense in $\mathbb{R}$.
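Density of $\mathbb{Q}$ in $\mathbb{R}$ can be made concrete: for any $x_0 \in \mathbb{R}$ and $\varepsilon > 0$, rounding $x_0$ down to a multiple of $1/n$ with $1/n < \varepsilon$ produces a rational within $\varepsilon$. A small sketch using the standard library (illustrative only; the helper name is an assumption):

```python
from fractions import Fraction
import math

def rational_within(x0, eps):
    """Return a rational q with |x0 - q| < eps (density of Q in R)."""
    n = 1
    while 1 / n >= eps:  # pick a denominator n with 1/n < eps
        n *= 2
    # q <= x0 < q + 1/n, hence |x0 - q| < 1/n < eps
    return Fraction(math.floor(x0 * n), n)

q = rational_within(math.sqrt(2), 1e-6)
```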
Furthermore, $l^\infty$ is not separable while $l^p$ is indeed separable.
??? note "*Proof*:"
Will be added later.
# Compactness
> *Definition 1*: a metric space $X$ is **compact** if every sequence in $X$ has a convergent subsequence. A subset $M$ of $X$ is compact if every sequence in $M$ has a convergent subsequence whose limit is an element of $M$.
A general property of compact sets is expressed in the following proposition.
> *Proposition 1*: a compact subset $M$ of a metric space $(X,d)$ is closed and bounded.
??? note "*Proof*:"
Will be added later.
The converse of this proposition is generally false.
??? note "*Proof*:"
Will be added later.
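A standard counterexample is the closed unit ball of $l^\infty$: the unit sequences $e_n$ all lie in it, yet $d(e_m, e_n) = 1$ for $m \neq n$, so no subsequence is Cauchy and hence none converges. A finite-truncation sketch of this computation (illustrative, not a proof; the truncation length is an assumption):

```python
# In l^infinity the unit sequences e_n (1 in slot n, 0 elsewhere) satisfy
# d(e_m, e_n) = sup_j |e_m(j) - e_n(j)| = 1 for all m != n, so the closed
# unit ball contains a sequence without any convergent subsequence,
# although it is closed and bounded.

N = 10  # truncation length; enough to contain the slots used below

def e(n):
    return tuple(1.0 if j == n else 0.0 for j in range(N))

def d_inf(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

distances = {d_inf(e(m), e(n)) for m in range(N) for n in range(N) if m != n}
```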
However, for a finite dimensional normed space we have the following proposition.
> *Proposition 2*: in a finite dimensional normed space $(X, \|\cdot\|)$ a subset $M \subset X$ is compact if and only if $M$ is closed and bounded.
??? note "*Proof*:"
Will be added later.
A source of interesting results is the following lemma.
> *Lemma 1*: let $Y$ and $Z$ be subspaces of a normed space $(X, \|\cdot\|)$, suppose that $Y$ is closed and that $Y$ is a strict subset of $Z$. Then for every $\alpha \in (0,1)$ there exists a $z \in Z$, such that
>
> 1. $\|z\| = 1$,
> 2. $\forall y \in Y: \|z - y\| \geq \alpha$.
??? note "*Proof*:"
Will be added later.
Lemma 1 gives the following remarkable proposition.
> *Proposition 3*: if a normed space $(X, \|\cdot\|)$ has the property that the closed unit ball $M = \{x \in X \;|\; \|x\| \leq 1\}$ is compact, then $X$ is finite dimensional.
??? note "*Proof*:"
Will be added later.
Compact sets have several basic properties, similar to those of finite sets, that are not shared by non-compact sets, such as the following.
> *Proposition 4*: let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces and let $T: X \to Y$ be a continuous mapping. Let $M$ be a compact subset of $(X,d_X)$, then $T(M)$ is a compact subset of $(Y,d_Y)$.
??? note "*Proof*:"
Will be added later.
From this proposition we conclude that the following property carries over to metric spaces.
> *Corollary 1*: let $M \subset X$ be a compact subset of a metric space $(X,d)$, then a continuous mapping $T: M \to \mathbb{R}$ attains a maximum and a minimum value on $M$.
??? note "*Proof*:"
Will be added later.
# Linear functionals
> *Definition 1*: a **linear functional** $f$ is a linear operator with its domain in a vector space $X$ and its range in the scalar field $F$ of $X$.
The norm $\|\cdot\|: X \to \mathbb{R}$ is a functional on $X$, but not a linear functional, since a norm is not additive; linearity is what distinguishes a linear functional from a mere functional.
> *Definition 2*: a **bounded linear functional** $f$ is a bounded linear operator with its domain in a normed space $X$ and its range in the scalar field $F$ of $X$.
## Dual space
> *Definition 3*: the set of linear functionals on a vector space $X$ is defined as the **algebraic dual space** $X^*$ of $X$.
From this definition we have the following.
> *Theorem 1*: the algebraic dual space $X^*$ of a vector space $X$ is a vector space.
??? note "*Proof*:"
Will be added later.
Furthermore, a secondary type of dual space may be defined as follows.
> *Definition 4*: the set of bounded linear functionals on a normed space $X$ is defined as the **dual space** $X'$ of $X$.
In this case, a rather interesting property of a dual space emerges.
> *Theorem 2*: the dual space $X'$ of a normed space $(X,\|\cdot\|_X)$ is a Banach space with its norm $\|\cdot\|_{X'}$ given by
>
> $$
> \|f\|_{X'} = \sup_{x \in X\backslash \{0\}} \frac{|f(x)|}{\|x\|_X} = \sup_{\substack{x \in X \\ \|x\|_X = 1}} |f(x)|,
> $$
>
> for all $f \in X'$.
??? note "*Proof*:"
Will be added later.
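In the finite-dimensional case the dual norm can be computed exactly: for $f(x) = \sum_j a_j x_j$ on $(\mathbb{R}^n, \|\cdot\|_2)$, the Cauchy-Schwarz inequality gives $\|f\|_{X'} = \|a\|_2$, with the supremum attained at $x = a / \|a\|_2$. A quick numerical check (an illustrative sketch, not part of the notes):

```python
import math

a = [3.0, -4.0, 12.0]               # coefficients of the functional f
f = lambda x: sum(ai * xi for ai, xi in zip(a, x))
norm2 = lambda x: math.sqrt(sum(xi * xi for xi in x))

# ||f|| = sup_{||x|| = 1} |f(x)| = ||a||_2 by Cauchy-Schwarz,
# attained at the unit vector x* = a / ||a||_2.
dual_norm = norm2(a)
x_star = [ai / dual_norm for ai in a]
```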
# Linear operators
> *Definition 1*: a **linear operator** $T$ is a linear mapping such that
>
> 1. the domain $\mathscr{D}(T)$ of $T$ is a vector space and the range $\mathscr{R}(T)$ of $T$ is contained in a vector space over the same field as $\mathscr{D}(T)$.
> 2. $\forall x, y \in \mathscr{D}(T): T(x + y) = Tx + Ty$.
> 3. $\forall x \in \mathscr{D}(T), \alpha \in F: T(\alpha x) = \alpha Tx$.
Observe the notation: $Tx$ and $T(x)$ are equivalent; the parentheses are usually omitted.
> *Definition 2*: let $\mathscr{N}(T)$ be the **null space** of $T$ defined as
>
> $$
> \mathscr{N}(T) = \{x \in \mathscr{D}(T) \;|\; Tx = 0\}.
> $$
We have the following properties.
> *Proposition 1*: let $T$ be a linear operator, then
>
> 1. $\mathscr{R}(T)$ is a vector space,
> 2. $\mathscr{N}(T)$ is a vector space,
> 3. if $\dim \mathscr{D}(T) = n \in \mathbb{N}$ then $\dim \mathscr{R}(T) \leq n$.
??? note "*Proof*:"
Will be added later.
An immediate consequence of statement 3 is that linear operators preserve linear dependence.
> *Proposition 2*: let $Y$ be a vector space, a linear operator $T: \mathscr{D}(T) \to Y$ is injective if
>
> $$
> \forall x_1, x_2 \in \mathscr{D}(T): Tx_1 = Tx_2 \implies x_1 = x_2.
> $$
??? note "*Proof*:"
Will be added later.
Injectivity of $T$ is equivalent to $\mathscr{N}(T) = \{0\}$.
??? note "*Proof*:"
Will be added later.
> *Theorem 1*: if a linear operator $T: \mathscr{D}(T) \to \mathscr{R}(T)$ is injective there exists a mapping $T^{-1}: \mathscr{R}(T) \to \mathscr{D}(T)$ such that
>
> $$
> y = Tx \iff T^{-1} y = x,
> $$
>
> for all $x \in \mathscr{D}(T)$, denoted as the **inverse operator**.
??? note "*Proof*:"
Will be added later.
> *Proposition 3*: let $T: \mathscr{D}(T) \to \mathscr{R}(T)$ be an injective linear operator, if $\mathscr{D}(T)$ is finite-dimensional, then
>
> $$
> \dim \mathscr{D}(T) = \dim \mathscr{R}(T).
> $$
??? note "*Proof*:"
Will be added later.
> *Lemma 1*: let $X,Y$ and $Z$ be vector spaces and let $T: X \to Y$ and $S: Y \to Z$ be bijective linear operators, then $(ST)^{-1}: Z \to X$ exists and
>
> $$
> (ST)^{-1} = T^{-1} S^{-1}.
> $$
??? note "*Proof*:"
Will be added later.
We finish this subsection with a definition of the space of linear operators.
> *Definition 3*: let $\mathscr{L}(X,Y)$ denote the set of linear operators mapping from a vector space $X$ to a vector space $Y$.
From this definition the following theorem follows.
> *Theorem 2*: let $X$ and $Y$ be vector spaces, the set of linear operators $\mathscr{L}(X,Y)$ is a vector space.
??? note "*Proof*:"
Will be added later.
Therefore, we may also call $\mathscr{L}(X,Y)$ the space of linear operators.
## Bounded linear operators
> *Definition 4*: let $(X, \|\cdot\|_X)$ and $(Y,\|\cdot\|_Y)$ be normed spaces over a field $F$ and let $T: \mathscr{D}(T) \to Y$ be a linear operator with $\mathscr{D}(T) \subset X$. Then $T$ is a **bounded linear operator** if
>
> $$
> \exists c \geq 0 \; \forall x \in \mathscr{D}(T): \|Tx\|_Y \leq c \|x\|_X.
> $$
In this case we may also define the set of all bounded linear operators.
> *Definition 5*: let $\mathscr{B}(X,Y)$ denote the set of bounded linear operators mapping from a normed space $X$ to a normed space $Y$.
We have the following theorem.
> *Theorem 3*: let $X$ and $Y$ be normed spaces, the set of bounded linear operators $\mathscr{B}(X,Y)$ is a subspace of $\mathscr{L}(X,Y)$.
??? note "*Proof*:"
Will be added later.
Likewise, we may call $\mathscr{B}(X,Y)$ the space of bounded linear operators.
The smallest possible $c$ such that the statement in definition 4 still holds is denoted as the norm of $T$ in the following definition.
> *Definition 6*: the norm of a bounded linear operator $T \in \mathscr{B}(X,Y)$ is defined by
>
> $$
> \|T\|_{\mathscr{B}} = \sup_{x \in \mathscr{D}(T) \backslash \{0\}} \frac{\|Tx\|_Y}{\|x\|_X},
> $$
>
> with $X$ and $Y$ normed spaces.
The operator norm makes $\mathscr{B}(X,Y)$ into a normed space.
> *Lemma 2*: let $X$ and $Y$ be normed spaces, the norm of a bounded linear operator $T \in \mathscr{B}(X,Y)$ may be given by
>
> $$
> \|T\|_\mathscr{B} = \sup_{\substack{x \in \mathscr{D}(T) \\ \|x\|_X = 1}} \|Tx\|_Y,
> $$
>
> and the norm of a bounded linear operator is a norm.
??? note "*Proof*:"
Will be added later.
Note that the second statement in lemma 2 is nontrivial, since the operator norm is introduced only by a definition and must still be shown to satisfy the norm axioms.
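For a diagonal operator on $(\mathbb{R}^3, \|\cdot\|_2)$ the supremum defining the operator norm can be evaluated exactly: $\|T\| = \max_i |d_i|$, since $\|Tx\|^2 = \sum_i d_i^2 x_i^2 \leq (\max_i d_i^2)\|x\|^2$, with equality at the corresponding basis vector. A small check (an illustrative sketch, not part of the notes):

```python
import math

d = [1.0, -7.0, 2.5]                     # diagonal entries of T
T = lambda x: [di * xi for di, xi in zip(d, x)]
norm2 = lambda x: math.sqrt(sum(v * v for v in x))

# ||T|| = sup_{||x|| = 1} ||Tx|| = max |d_i|, attained at the basis
# vector e_i for the maximising index i (here i = 1).
op_norm = max(abs(di) for di in d)
e = [0.0, 1.0, 0.0]                      # basis vector attaining the sup
```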
> *Proposition 4*: if $(X, \|\cdot\|)$ is a finite-dimensional normed space, then every linear operator on $X$ is bounded.
??? note "*Proof*:"
Will be added later.
By linearity of the linear operators we have the following.
> *Theorem 4*: let $X$ and $Y$ be normed spaces and let $T: \mathscr{D}(T) \to Y$ be a linear operator with $\mathscr{D}(T) \subset X$. Then the following statements are equivalent
>
> 1. $T$ is bounded,
> 2. $T$ is continuous in $\mathscr{D}(T)$,
> 3. $T$ is continuous in a point in $\mathscr{D}(T)$.
??? note "*Proof*:"
Will be added later.
> *Corollary 1*: let $T \in \mathscr{B}$ and let $(x_n)_{n \in \mathbb{N}}$ be a sequence in $\mathscr{D}(T)$, then we have that
>
> 1. $x_n \to x \in \mathscr{D}(T) \implies Tx_n \to Tx$ as $n \to \infty$,
> 2. $\mathscr{N}(T)$ is closed.
??? note "*Proof*:"
Will be added later.
Furthermore, bounded linear operators have the property that
$$
\|T_1 T_2\| \leq \|T_1\| \|T_2\|,
$$
for $T_1, T_2 \in \mathscr{B}$.
??? note "*Proof*:"
Will be added later.
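For diagonal operators on $(\mathbb{R}^3, \|\cdot\|_2)$ each operator norm equals the largest absolute diagonal entry, so the inequality can be checked exactly (an illustrative sketch, not part of the notes):

```python
# Submultiplicativity ||T1 T2|| <= ||T1|| ||T2||, checked for diagonal
# operators on R^3, where the operator norm is max_i |d_i|.

d1 = [2.0, -3.0, 0.5]
d2 = [4.0, 1.0, -6.0]
compose = [a * b for a, b in zip(d1, d2)]   # diagonal of T1 T2

op_norm = lambda diag: max(abs(v) for v in diag)
lhs = op_norm(compose)                      # ||T1 T2||
rhs = op_norm(d1) * op_norm(d2)             # ||T1|| ||T2||
```

The inequality is strict here, since the two operators attain their norms at different basis vectors.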
> *Theorem 5*: if $X$ is a normed space and $Y$ is a Banach space, then $\mathscr{B}(X,Y)$ is a Banach space.
??? note "*Proof*:"
Will be added later.
> *Definition 7*: let $T_1, T_2 \in \mathscr{L}$ be linear operators, $T_1$ and $T_2$ are **equal** if and only if
>
> 1. $\mathscr{D}(T_1) = \mathscr{D}(T_2)$,
> 2. $\forall x \in \mathscr{D}(T_1) : T_1x = T_2x$.
## Restriction and extension
> *Definition 8*: the **restriction** of a linear operator $T \in \mathscr{L}$ to a subspace $A \subset \mathscr{D}(T)$, denoted by $T|_A: A \to \mathscr{R}(T)$, is defined by
>
> $$
> T|_A x = Tx,
> $$
>
> for all $x \in A$.
Conversely, an operator may also be extended.
> *Definition 9*: the **extension** of a linear operator $T \in \mathscr{L}$ to a vector space $M \supset \mathscr{D}(T)$ is an operator denoted by $\tilde T: M \to \mathscr{R}(\tilde T)$ such that
>
> $$
> \tilde T|_{\mathscr{D}(T)} = T.
> $$
This implies that $\tilde T x = Tx$ for all $x \in \mathscr{D}(T)$. Hence, $T$ is the restriction of $\tilde T$ to $\mathscr{D}(T)$.
> *Theorem 6*: let $X$ be a normed space and let $Y$ be a Banach space. Let $T \in \mathscr{B}(A,Y)$ with $A \subset X$ a subspace, then there exists an extension $\tilde T: \overline A \to Y$, with $\tilde T$ a bounded linear operator and $\| \tilde T \| = \|T\|$.
??? note "*Proof*:"
Will be added later.
# Normed spaces
> *Definition 1*: a vector space $X$ is a **normed space** if a norm $\| \cdot \|: X \to \mathbb{R}$ is defined on $X$, satisfying
>
> 1. $\forall x \in X: \|x\| \geq 0$,
> 2. $\|x\| = 0 \iff x = 0$,
> 3. $\forall x \in X, \alpha \in F: \|\alpha x\| = |\alpha| \|x\|$,
> 4. $\forall x, y \in X: \|x + y\| \leq \|x\| + \|y\|$.
Also called a *normed vector space* or *normed linear space*.
> *Proposition 1*: a norm on a vector space $X$ defines a metric $d$ on $X$ given by
>
> $$
> d(x,y) = \|x - y\|,
> $$
>
> for all $x, y \in X$ and is called a **metric induced by the norm**.
??? note "*Proof*:"
Will be added later.
Furthermore, there is a category of normed spaces with interesting properties which is given in the following definition.
> *Definition 2*: a **Banach space** is a complete normed space with its metric induced by the norm.
If we define the norm $\| \cdot \|$ of the Euclidean vector space $\mathbb{R}^n$ by
$$
\|x\| = \sqrt{\sum_{j=1}^n |x(j)|^2},
$$
for all $x \in \mathbb{R}^n$, then it yields the metric
$$
d(x,y) = \|x - y\| = \sqrt{\sum_{j=1}^n |x(j) - y(j)|^2},
$$
for all $x, y \in \mathbb{R}^n$, with respect to which $\mathbb{R}^n$ is complete. Therefore $(\mathbb{R}^n, \|\cdot\|)$ is a Banach space.

This adaptation also works for $C([a,b])$, $l^p$ and $l^\infty$, obtaining that $\mathbb{R}^n$, $C([a,b])$, $l^p$ and $l^\infty$ are all Banach spaces.
> *Lemma 1*: a metric $d$ induced by a norm on a normed space $(X, \|\cdot\|)$ satisfies
>
> 1. $\forall x, y, z \in X: d(x + z, y + z) = d(x,y)$,
> 2. $\forall x, y \in X, \alpha \in F: d(\alpha x, \alpha y) = |\alpha| d(x,y)$.

??? note "*Proof*:"

    We have

    $$
    d(x + z, y + z) = \|x + z - (y + z)\| = \|x - y\| = d(x,y),
    $$

    and

    $$
    d(\alpha x, \alpha y) = \|\alpha x - \alpha y\| = |\alpha| \|x - y\| = |\alpha| d(x,y).
    $$
By definition, a subspace $M$ of a normed space $X$ is a subspace of $X$ with its norm induced by the norm on $X$.
> *Definition 3*: let $M$ be a subspace of a normed space $X$, if $M$ is closed then $M$ is a **closed subspace** of $X$.
By definition, a subspace $M$ of a Banach space $X$ is a subspace of $X$ as a normed space. Hence, we do not require $M$ to be complete.
> *Theorem 1*: a subspace $M$ of a Banach space $X$ is complete if and only if $M$ is a closed subspace of $X$.
??? note "*Proof*:"
Will be added later.
Convergence in normed spaces follows from the definition of convergence in metric spaces and the fact that the metric is induced by the norm.
## Convergent series
> *Definition 4*: let $(x_k)_{k \in \mathbb{N}}$ be a sequence in a normed space $(X, \|\cdot\|)$. We define the sequence of partial sums $(s_n)_{n \in \mathbb{N}}$ by
>
> $$
> s_n = \sum_{k=1}^n x_k,
> $$
>
> if $(s_n)_{n \in \mathbb{N}}$ converges to $s \in X$, then the series
>
> $$
> \sum_{k=1}^\infty x_k
> $$
>
> is **convergent**, and $s$ is the sum of the series, writing
>
> $$
> s = \sum_{k=1}^\infty x_k = \lim_{n \to \infty} \sum_{k=1}^n x_k = \lim_{n \to \infty } s_n.
> $$
>
> If the series
>
> $$
> \sum_{k=1}^\infty \|x_k\|,
> $$
>
> is convergent in $F$, then the series is **absolutely convergent**.
From the notion of absolute convergence the following theorem may be posed.
> *Theorem 2*: in a normed space $(X, \|\cdot\|)$, every absolutely convergent series is convergent if and only if $(X, \|\cdot\|)$ is complete.
??? note "*Proof*:"
Will be added later.
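In the Banach space $\mathbb{R}$ this reduces to the familiar fact that absolute convergence implies convergence; e.g. $\sum_k (-1)^k 2^{-k}$ converges (to $-1/3$, a geometric series with ratio $-1/2$) because $\sum_k 2^{-k} = 1$ does. A numeric sketch (illustrative only):

```python
# Absolutely convergent series in the Banach space R: the series
# sum_k (-1)^k / 2^k converges because sum_k 1/2^k converges.

def partial_sum(n):
    return sum((-1) ** k / 2 ** k for k in range(1, n + 1))

def abs_partial_sum(n):
    return sum(1 / 2 ** k for k in range(1, n + 1))

s = partial_sum(60)        # close to the limit -1/3
t = abs_partial_sum(60)    # close to the limit 1
```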
## Schauder basis
> *Definition 5*: let $(X, \|\cdot\|)$ be a normed space and let $(e_k)_{k \in \mathbb{N}}$ be a sequence of vectors in $X$, such that for every $x \in X$ there exists a unique sequence of scalars $(\alpha_k)_{k \in \mathbb{N}}$ such that
>
> $$
> \lim_{n \to \infty} \Big\|x - \sum_{k=1}^n \alpha_k e_k\Big\| = 0,
> $$
>
> then $(e_k)_{k \in \mathbb{N}}$ is a **Schauder basis** of $(X, \|\cdot\|)$.
The expansion of an $x \in X$ with respect to a Schauder basis $(e_k)_{k \in \mathbb{N}}$ is given by
$$
x = \sum_{k=1}^\infty \alpha_k e_k.
$$
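For the canonical basis $(e_k)$ of $l^2$ and $x(j) = 2^{-j}$ the truncation error can be computed in closed form: $\|x - \sum_{k=1}^n 2^{-k} e_k\|_2^2 = \sum_{j > n} 4^{-j} = 4^{-n}/3 \to 0$. A short check (an illustrative sketch, not part of the notes; the tail cutoff is an assumption):

```python
import math

# Truncation error of the Schauder expansion of x(j) = 2^{-j} in l^2
# with respect to the canonical basis: the tail sum_{j>n} 4^{-j} = 4^{-n}/3.

def truncation_error(n, tail=200):
    # sum the tail numerically, cut off far beyond float precision
    return math.sqrt(sum(4.0 ** -j for j in range(n + 1, tail)))

errors = [truncation_error(n) for n in (1, 5, 10)]
```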
> *Lemma 2*: if a normed space has a Schauder basis then it is separable.
??? note "*Proof*:"
Will be added later.
## Completion
> *Theorem 3*: for every normed space $(X, \|\cdot\|_X)$ there exists a Banach space $(Y, \|\cdot\|_Y)$ that contains a subspace $W$ that satisfies the following conditions
>
> 1. $W$ is a normed space isometric with $X$.
> 2. $W$ is dense in $Y$.
??? note "*Proof*:"
Will be added later.
The Banach space $(Y, \|\cdot\|_Y)$ is unique up to isometry.
## Finite dimension
> *Lemma 3*: let $\{x_k\}_{k=1}^n$ with $n \in \mathbb{N}$ be a linearly independent set of vectors in a normed space $(X, \|\cdot\|)$, then there exists a $c > 0$ such that
>
> $$
> \Big\| \sum_{k=1}^n \alpha_k x_k \Big\| \geq c \sum_{k=1}^n |\alpha_k|,
> $$
>
> for all $\alpha_1, \dots, \alpha_n \in F$.
??? note "*Proof*:"
Will be added later.
As a first application of this lemma, let us prove the following.
> *Theorem 4*: every finite-dimensional subspace $M$ of a normed space $(X, \|\cdot\|)$ is complete.
??? note "*Proof*:"
Will be added later.
In particular, every finite dimensional normed space is complete.
> *Proposition 2*: every finite-dimensional subspace $M$ of a normed space $(X, \|\cdot\|)$ is a closed subspace of $X$.
??? note "*Proof*:"
Will be added later.
Another interesting property of finite-dimensional vector space $X$ is that all norms on $X$ lead to the same topology for $X$. That is, the open subsets of $X$ are the same, regardless of the particular choice of a norm on $X$. The details are as follows.
> *Definition 6*: a norm $\|\cdot\|_1$ on a vector space $X$ is **equivalent** to a norm $\|\cdot\|_2$ on $X$ if there exists $a,b>0$ such that
>
> $$
> \forall x \in X: a \|x\|_1 \leq \|x\|_2 \leq b \|x\|_1.
> $$
This concept is motivated by the following proposition.
> *Proposition 3*: equivalent norms on $X$ define the same topology for $X$.
??? note "*Proof*:"
Will be added later.
Using lemma 3 we may now prove the following theorem.
> *Theorem 5*: on a finite dimensional vector space $X$ any norm $\|\cdot\|_1$ is equivalent to any other norm $\|\cdot\|_2$.
??? note "*Proof*:"
Will be added later.
This theorem is of considerable importance. For instance, it implies that convergence or divergence of a sequence in a finite dimensional vector space does not depend on the particular choice of a norm on that space.
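For instance, on $\mathbb{R}^n$ the Euclidean norm and the sum norm are equivalent, since $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2$ for all $x$. A quick numerical check of these inequalities (an illustrative sketch, not part of the notes):

```python
import math

norm1 = lambda x: sum(abs(v) for v in x)
norm2 = lambda x: math.sqrt(sum(v * v for v in x))

# On R^n:  ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2  for every x,
# so the two norms are equivalent (here n = 4).
samples = [(1.0, -2.0, 3.0, 0.5), (0.0, 0.0, 1.0, 0.0), (-1.0, 1.0, -1.0, 1.0)]
checks = [norm2(x) <= norm1(x) <= math.sqrt(4) * norm2(x) for x in samples]
```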
# Vector spaces
> *Definition 1*: a **vector space** $X$ over a **scalar field** $F$ is a non-empty set, on which two algebraic operations are defined; vector addition and scalar multiplication. Such that
>
> 1. $(X, +)$ is a commutative group with neutral element 0.
> 2. the scalar multiplication satisfies $\forall x, y \in X$ and $\lambda, \mu \in F$
> * $\lambda (x + y) = \lambda x + \lambda y$,
> * $(\lambda + \mu) x = \lambda x + \mu x$,
> * $\lambda (\mu x) = (\lambda \mu) x$,
> * $1 x = x$.
When $F = \mathbb{R}$ we have a real vector space while when $F = \mathbb{C}$ we have a complex vector space.
We have that the metric spaces $\mathbb{R}^n$, $C$, $l^p$ and $l^\infty$ are also vector spaces.
??? note "*Proof*:"
    Will be added later.
> *Definition 2*: a **subspace** of a vector space $X$ is a non-empty subset $M$ of $X$, such that $\forall x, y \in M$ and $\lambda, \mu \in F$:
>
> $$
> \lambda x + \mu y \in M,
> $$
>
> with $M$ itself a vector space.
A special subspace $M$ of a vector space $X$ is the *improper subspace* $M = X$. Every other subspace of $X$ is a *proper subspace*.
## Linear combinations
> *Definition 3*: a **linear combination** of the vectors $\{x_i\}_{i=1}^n$ with $n \in \mathbb{N}$ is a vector of the form
>
> $$
> \alpha_1 x_1 + \dots + \alpha_n x_n = \sum_{i=1}^n \alpha_i x_i,
> $$
>
> with $\alpha_1, \dots, \alpha_n \in F$.
The set of all linear combinations of a set of vectors is defined as follows.
> *Definition 4*: the **span** of a subset $M \subset X$ of a vector space $X$, denoted by $\mathrm{span}(M)$, is the set of all linear combinations of vectors from $M$.
It follows that $\mathrm{span}(M)$ is a subspace of $X$.
## Linear independence
> *Definition 5*: a finite subset of vectors $M = \{x_i\}_{i=1}^n$ is **linearly independent** if
>
> $$
> \sum_{i=1}^n \alpha_i x_i = 0 \implies \forall i \in \{1, \dots, n\}: \alpha_i = 0.
> $$
The opposite notion may also be defined.
> *Definition 6*: a finite subset of vectors $M = \{x_i\}_{i=1}^n$ is **linearly dependent** if there exist $\alpha_1, \dots, \alpha_n \in F$, not all zero, such that
>
> $$
> \sum_{i=1}^n \alpha_i x_i = 0.
> $$
The notions of linear dependence and independence may also be extended to infinite subsets.
> *Definition 7*: a subset $M$ of a vector space $X$ is **linearly independent** if every non-empty finite subset of $M$ is linearly independent.
Again, the opposite notion is defined by negation.
> *Definition 8*: a subset $M$ of a vector space $X$ is **linearly dependent** if $M$ is not linearly independent.
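For finitely many vectors in $\mathbb{R}^n$, linear independence can be tested with Gaussian elimination: the vectors are independent exactly when elimination finds a pivot for every vector, i.e. the rank equals the number of vectors. A minimal sketch (illustrative only; floating-point tolerance and helper name are assumptions):

```python
def is_linearly_independent(vectors, tol=1e-12):
    """Gaussian elimination rank test for vectors in R^n."""
    rows = [list(v) for v in vectors]
    rank, n = 0, len(rows[0])
    for col in range(n):
        # find a pivot row for this column among the remaining rows
        pivot = next((r for r in range(rank, len(rows))
                      if abs(rows[r][col]) > tol), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # eliminate this column from the rows below the pivot
        for r in range(rank + 1, len(rows)):
            factor = rows[r][col] / rows[rank][col]
            rows[r] = [x - factor * y for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(vectors)
```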
## Dimension and basis
> *Definition 9*: a vector space $X$ is **finite dimensional** if there exists a $n \in \mathbb{N}$, such that $X$ contains a set of $n$ linearly independent vectors, while every set of $n+1$ vectors in $X$ is linearly dependent. In this case $n$ is the dimension of $X$, denoted by $\dim X = n$.
By definition $X = \{0\}$ is finite dimensional and $\dim X = 0$.
> *Definition 10*: if a vector space $X$ is not finite dimensional then $X$ is **infinite dimensional**.
The following definition of a basis is both relevant to finite and infinite dimensional vector spaces.
> *Definition 11*: a **basis** $B$ of a vector space $X$ is a linearly independent subset of $X$, that spans $X$.
Such a set $B$ is also called a *Hamel basis* of $X$.
> *Theorem 1*: every vector space $X$ has a Hamel basis.
??? note "*Proof*:"
    Will be added later; for infinite dimensional spaces the proof relies on Zorn's lemma.
> *Theorem 2*: let $X$ be a vector space with $\dim X = n \in \mathbb{N}$. Then any proper subspace $M \subset X$ has dimension less than $n$.
??? note "*Proof*:"
If $n = 0$, then $X = \{0\}$ and $X$ has no proper subspace.
If $\dim M = 0$, then $M = \{0\}$ and $X \neq M \implies \dim X \geq 1$.
    If $\dim M = n$, then $M$ would have a basis of $n$ elements, which would also be a basis for $X$ since $\dim X = n$, so that $X = M$, contradicting that $M$ is a proper subspace.
This shows that any linearly independent set of vectors in $M$ must have fewer than $n$ elements and $\dim M < n$.