port from mathematics-physics notes

This commit is contained in:
Luc Bijl 2025-08-26 15:48:53 +02:00
parent a4e106ce02
commit c009ea53f0
124 changed files with 13224 additions and 0 deletions


@ -0,0 +1,22 @@
# Concavity and inflections
## Concave up
A function $f$ is **concave up** on an open interval $I$ on which it is differentiable if the derivative $f'$ is an increasing function on $I$; where $f''$ exists, this corresponds to $f'' > 0$. The tangent lines then lie below the graph.
## Concave down
A function $f$ is **concave down** on an open interval $I$ on which it is differentiable if the derivative $f'$ is a decreasing function on $I$; where $f''$ exists, this corresponds to $f'' < 0$. The tangent lines then lie above the graph.
## Inflection points
The function $f$ has an inflection point at $x_0$ if
1. the tangent line in $(x_0, f(x_0))$ exists, and
2. the concavity of $f$ is opposite on opposite sides of $x_0$.
If $f$ has an inflection point at $x_0$ and $f''(x_0)$ exists, then $f''(x_0) = 0$.
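These sign conditions can be sketched numerically: a central second difference approximates $f''$, and its sign indicates the concavity (the helper `second_derivative` and the sample function $f(x) = x^3 - 3x$ are my own illustration, not from the notes).

```python
# Approximate f''(x) with the central second difference
# (f(x+h) - 2 f(x) + f(x-h)) / h^2 and read off the concavity.

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: x**3 - 3 * x  # exact f''(x) = 6x

assert second_derivative(f, -1.0) < 0         # concave down for x < 0
assert second_derivative(f, 1.0) > 0          # concave up for x > 0
assert abs(second_derivative(f, 0.0)) < 1e-6  # inflection point at x = 0
```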
## The second derivative test
...


@ -0,0 +1,53 @@
# Continuity
Continuity is a local property. A function $f$ is continuous at an interior point $c$ of its domain if
$$\lim_{x \to c} f(x) = f(c).$$
If either $\lim_{x \to c} f(x)$ fails to exist or it exists but is not equal to $f(c)$, then $f$ is discontinuous at $c$.
## Right and left continuity
$f$ is **right continuous** at $c$ if
$$\lim_{x \downarrow c} f(x) = f(c),$$
which is the appropriate notion when $c$ is a left endpoint of its domain, and **left continuous** at $c$ (appropriate when $c$ is a right endpoint) if
$$\lim_{x \uparrow c} f(x) = f(c).$$
## Continuity on an interval
$f$ is continuous on the interval $I$ if and only if $f$ is continuous at each point of $I$; at the endpoints of $I$, the appropriate one-sided (right or left) continuity suffices.
$f$ is called a continuous function if and only if $f$ is continuous on its domain.
## Discontinuity
A discontinuity at $c$ is removable if and only if $\lim_{x \to c} f(x)$ exists; otherwise the discontinuity is non-removable.
## Combining continuous functions
If the functions $f$ and $g$ are both defined on an interval containing $c$ and both are continuous at $c$, then the following functions are also continuous at $c$:
* the sum $f + g$ and the difference $f - g$;
* the product $f g$;
* the constant multiple $k f$, where $k$ is any number;
* the quotient $\frac{f}{g}$, provided $g(c) \neq 0$; and
* the *n*th root $(f(x))^{\frac{1}{n}}$, provided $f(c) > 0$ if $n$ is even.
This may be proved using the various [limit rules](limits.md/#limit-rules).
## The extreme value theorem
If $f(x)$ is continuous on the closed, bounded interval $[a,b]$, then there exist numbers $p$ and $q$ in $[a,b]$ such that $\forall x \in [a,b]$,
$$f(p) \leq f(x) \leq f(q).$$
Thus, $f$ has the absolute minimum value $m=f(p)$, taken on at the point $p$, and the absolute maximum value $M=f(q)$, taken on at the point $q$. This is a consequence of the completeness property of the real numbers.
## The intermediate value theorem
If $f(x)$ is continuous on the interval $[a,b]$ and if $s$ is a number between $f(a)$ and $f(b)$, then there exists a number $c$ in $[a,b]$ such that $f(c)=s$. This too is a consequence of the completeness property of the real numbers.
In particular, a continuous function defined on a closed interval takes on all values between its minimum value $m$ and its maximum value $M$, so its range is also a closed interval, $[m,M]$.
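The intermediate value theorem is the basis of the bisection root-finding method: if a continuous $f$ changes sign on $[a,b]$, it must take the value $0$ somewhere in between. A minimal sketch (function and variable names are mine):

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of a continuous f on [a, b], assuming f(a), f(b) differ in sign."""
    assert f(a) * f(b) <= 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:  # the sign change lies in [a, m]
            b = m
        else:                 # otherwise it lies in [m, b]
            a = m
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
assert abs(root - 2**0.5) < 1e-8  # converges to sqrt(2)
```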


@ -0,0 +1,231 @@
# Differentiation
## The slope of a curve
The slope $a$ of a curve $C$ at a point $P$ is the slope of the tangent line to $C$ at $P$, if such a tangent line exists. In particular, the slope of the graph of $y=f(x)$ at the point $x_0$ is
$$
\lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h} = a.
$$
### Normal line
If a curve $C$ has a tangent line $L$ at the point $P$, then the straight line $N$ through $P$ perpendicular to $L$ is called the **normal** to $C$ at $P$. The slope $s$ of the normal is the negative reciprocal of the slope $a$ of the curve, that is
$$
s = \frac{-1}{a}.
$$
## Derivative
The **derivative** of a function $f$ is another function $f'$ defined by
$$
f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}
$$
at all points $x$ for which the limit exists. If $f'(x)$ exists, then $f$ is **differentiable** at $x$.
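The definition can be illustrated numerically: the difference quotients of $\sin$ at $x = 0$ approach $\cos 0 = 1$ as $h$ shrinks (the helper name `diff_quotient` is mine).

```python
import math

def diff_quotient(f, x, h):
    """The Newton quotient (f(x+h) - f(x)) / h from the definition of f'."""
    return (f(x + h) - f(x)) / h

# sin'(0) = cos(0) = 1; the approximation error shrinks with h
errors = [abs(diff_quotient(math.sin, 0.0, 10**-k) - 1.0) for k in range(1, 6)]
assert errors == sorted(errors, reverse=True)  # monotonically improving
assert errors[-1] < 1e-5
```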
## Differentiability implies continuity
If $f$ is differentiable at $x$, then $f$ is continuous at $x$.
**Proof:** Since $f$ is differentiable at $x$
$$
\lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = f'(x)
$$
must exist. Then, using the [limit rules](limits.md/#limit-rules)
$$
\lim_{h \to 0} f(x + h) - f(x) = \lim_{h \to 0} (\frac{f(x + h) - f(x)}{h}) (h) = (f'(x)) (0) = 0
$$
This is equivalent to $\lim_{h \to 0} f(x + h) = f(x)$, which says that $f$ is continuous at $x$.
## Differentation rules
* **Differentiation of a sum:** $(f + g)'(x) = f'(x) + g'(x)$.
* **Proof:** Follows from the [limit rules](limits.md/#limit-rules)
$$
\begin{array}{ll}
(f + g)'(x) &= \lim_{h \to 0} \frac{(f + g)(x + h) - (f + g)(x)}{h}, \\
&= \lim_{h \to 0} (\frac{f(x + h) - f(x)}{h} + \frac{g(x + h) - g(x)}{h}), \\
&= f'(x) + g'(x).
\end{array}
$$
* **Differentiation of a constant multiple:** $(C f)'(x) = C f'(x)$.
* **Proof:** Follows from the [limit rules](limits.md/#limit-rules)
$$
\begin{array}{ll}
(C f)'(x) &= \lim_{h \to 0} \frac{C f(x + h) - C f(x)}{h}, \\
&= C \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}, \\
&= C f'(x).
\end{array}
$$
* **Differentiation of a product:** $(f g)'(x) = f'(x) g(x) + f(x) g'(x)$.
* **Proof:** Follows from the [limit rules](limits.md/#limit-rules)
$$
\begin{array}{ll}
(f g)'(x) &= \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h}, \\
&= \lim_{h \to 0} (\frac{f(x+h) - f(x)}{h} g(x+h) + f(x) \frac{g(x+h) - g(x)}{h}), \\
&= f'(x) g(x) + f(x) g'(x).
\end{array}
$$
* **Differentiation of the reciprocal:** $(\frac{1}{f})'(x) = \frac{-f'(x)}{(f(x))^2}$.
* **Proof:** Follows from the [limit rules](limits.md/#limit-rules)
$$
\begin{array}{ll}
(\frac{1}{f})'(x) &= \lim_{h \to 0} \frac{\frac{1}{f(x+h)} - \frac{1}{f(x)}}{h}, \\
&= \lim_{h \to 0} \frac{f(x) - f(x+h)}{h f(x+h) f(x)}, \\
&= \lim_{h \to 0} (\frac{-1}{f(x+h) f(x)}) \frac{f(x+h) - f(x)}{h}, \\
&= \frac{-1}{(f(x))^2} f'(x).
\end{array}
$$
* **Differentiation of a quotient:** $(\frac{f}{g})'(x) = \frac{f'(x) g(x) - f(x) g'(x)}{(g(x))^2}$.
* **Proof:** Follows from the product and reciprocal rule
$$
\begin{array}{ll}
(\frac{f}{g})'(x) &= (f \frac{1}{g})'(x), \\
&= f'(x) \frac{1}{g(x)} + f(x) (- \frac{g'(x)}{(g(x))^2}), \\
&= \frac{f'(x) g(x) - f(x) g'(x)}{(g(x))^2}.
\end{array}
$$
* **Differentiation of a composite:** $(f \circ g)'(x) = f'(g(x)) g'(x)$.
* **Proof:** Follows from the [limit rules](limits.md/#limit-rules)
$$
\begin{array}{ll}
(f \circ g)'(x) &= \lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h} \quad \mathrm{let} \space h = a - x, \\
&= \lim_{a \to x} \frac{f(g(a)) - f(g(x))}{a - x}, \\
&= \lim_{a \to x} (\frac{f(g(a)) - f(g(x))}{g(a) - g(x)}) (\frac{g(a) - g(x)}{a -x}), \\
&= f'(g(x)) g'(x).
\end{array}
$$
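The rules above can be spot-checked numerically against a central difference quotient; a small sketch with $f = \sin$ and $g = \exp$, whose exact derivatives are known (the helper `numderiv` and the test point are my choices):

```python
import math

def numderiv(F, x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

x = 0.7
f, df = math.sin, math.cos
g, dg = math.exp, math.exp

# product rule: (f g)' = f' g + f g'
assert abs(numderiv(lambda t: f(t) * g(t), x) - (df(x) * g(x) + f(x) * dg(x))) < 1e-8
# chain rule: (f o g)' = f'(g(x)) g'(x)
assert abs(numderiv(lambda t: f(g(t)), x) - df(g(x)) * dg(x)) < 1e-8
```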
## The derivative of the sine and cosine function
The derivative of the sine function is the cosine function $\frac{d}{dx} \sin x = \cos x$.
**Proof:** using the definition of the derivative, the addition formula for the sine and the [limit rules](limits.md/#limit-rules)
$$
\begin{array}{ll}
\frac{d}{dx} \sin x &= \lim_{h \to 0} \frac{\sin(x+h) - \sin x}{h}, \\
&= \lim_{h \to 0} \frac{\sin x \cos h + \cos x \sin h}{h}, \\
&= \lim_{h \to 0} (\sin x (\frac{\cos h - 1}{h}) + \cos x (\frac{\sin h}{h})), \\
&= (\sin x) \cdot (0) + (\cos x) \cdot (1) = \cos x.
\end{array}
$$
The derivative of the cosine function is the negative of the sine function $\frac{d}{dx} \cos x = -\sin x$.
**Proof:** using the derivative of the sine and the composite (chain) rule
$$
\begin{array}{ll}
\frac{d}{dx} \cos x &= \frac{d}{dx} \sin (\frac{\pi}{2} - x), \\
&= (-1) \cos (\frac{\pi}{2} - x) = - \sin x.
\end{array}
$$
## Implicit differentiation
Implicit equations, equations that cannot be solved explicitly for $y$, may still be differentiated by implicit differentiation.
**Example:** $x y^2 + y = 4 x$
$$
\begin{array}{ll}
\frac{d}{dx}(x y^2 + y) = \frac{d}{dx}(4 x) &\implies y^2 + 2 x y \frac{dy}{dx} + \frac{dy}{dx} = 4, \\
&\implies \frac{dy}{dx} = \frac{4 - y^2}{1 + 2 x y}.
\end{array}
$$
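The result can be checked against the explicit branch obtained by solving the quadratic $x y^2 + y - 4x = 0$ for $y$ (the branch formula, valid for $x > 0$, is my addition for the check):

```python
import math

def y(x):
    """Positive branch of x y^2 + y - 4x = 0, from the quadratic formula."""
    return (-1 + math.sqrt(1 + 16 * x * x)) / (2 * x)

x0, h = 1.0, 1e-6
assert abs(x0 * y(x0)**2 + y(x0) - 4 * x0) < 1e-12  # the branch lies on the curve

numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)          # central difference quotient
implicit = (4 - y(x0)**2) / (1 + 2 * x0 * y(x0))     # formula from the example
assert abs(numeric - implicit) < 1e-8
```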
## Rolle's theorem
Suppose that the function $g$ is continuous on the closed and bounded interval $[a,b]$ and is differentiable in the open interval $(a,b)$. If $g(a) = g(b)$ then there exists a point $c$ in the open interval $(a,b)$ such that $g'(c) = 0$.
**Proof:** By the [extreme value theorem](continuity.md/#the-extreme-value-theorem) $g$ attains its maximum and its minimum on $[a,b]$. If these are both attained at the endpoints of $[a,b]$, then $g$ is constant on $[a,b]$ and so the derivative of $g$ is zero at every point in $(a,b)$.
Suppose then that the maximum is attained at an interior point $c$ of $(a,b)$. For a real $h$ such that $c + h$ is in $[a,b]$, the value $g(c + h)$ is smaller than or equal to $g(c)$ because $g$ attains its maximum at $c$.
Therefore, for every $h>0$,
$$\frac{g(c + h) - g(c)}{h} \leq 0,$$
hence,
$$\lim_{h \downarrow 0} \frac{g(c + h) - g(c)}{h} \leq 0.$$
Similarly, for every $h < 0$
$$\lim_{h \uparrow 0} \frac{g(c + h) - g(c)}{h} \geq 0.$$
Thereby obtaining,
$$\lim_{h \to 0} \frac{g(c + h) - g(c)}{h} = 0 = g'(c)$$
The proof for a minimum value at $c$ is similar.
## Mean-value theorem
Suppose that the function $f$ is continuous on the closed and bounded interval $[a,b]$ and is differentiable in the open interval $(a,b)$. Then there exists a point $c$ in the open interval $(a,b)$ such that
$$
\frac{f(b) - f(a)}{b - a} = f'(c).
$$
**Proof:** Define $g(x) = f(x) - r x$, where $r$ is a constant. Since $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$, the same is true for $g$. Now $r$ is chosen such that $g$ satisfies the conditions of [Rolle's theorem](differentation.md/#rolles-theorem). Namely
$$
\begin{array}{ll}
g(a) = g(b) &\iff f(a) - ra = f(b) - rb \\
&\iff r(b - a) = f(b) - f(a) \\
&\iff r = \frac{f(b) - f(a)}{b - a}
\end{array}
$$
By [Rolle's theorem](differentation.md/#rolles-theorem), since $g$ is differentiable and $g(a) = g(b)$, there is some $c$ in $(a,b)$ for which $g'(c) = 0$, and it follows from the equality $g(x) = f(x) - rx$ that,
$$
\begin{array}{ll}
g'(x) &= f'(x) - r\\
g'(c) &= 0 \\
g'(c) &= f'(c) - r = 0 \implies f'(c) = r = \frac{f(b) - f(a)}{b - a}
\end{array}
$$
## Generalized Mean-value theorem
If the functions $f$ and $g$ are both continuous on $[a,b]$ and differentiable on $(a,b)$, and if $g'(x) \neq 0$ for every $x$ in $(a,b)$, then there exists a $c \in (a,b)$ such that
$$
\frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)}.
$$
**Proof:** Let $h(x) = (f(b) - f(a))(g(x) - g(a)) - (g(b) - g(a))(f(x) - f(a))$.
Applying [Rolle's theorem](differentation.md/#rolles-theorem), since $h$ is differentiable and $h(a) = h(b)$, there is some $c$ in $(a,b)$ for which $h'(c) = 0$
$$
h'(c) = (f(b) - f(a))g'(c) - (g(b) - g(a))f'(c) = 0,
$$
$$
\begin{array}{ll}
\implies (f(b) - f(a))g'(c) = (g(b) - g(a))f'(c), \\
\implies \frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)}.
\end{array}
$$


@ -0,0 +1,47 @@
# Extreme values
## Absolute extreme values
Function $f$ has an **absolute maximum value** $f(x_0)$ at the point $x_0$ in its domain if $f(x) \leq f(x_0)$ holds for every $x$ in the domain of $f$.
Similarly, $f$ has an **absolute minimum value** $f(x_1)$ at the point $x_1$ in its domain if $f(x) \geq f(x_1)$ holds for every $x$ in the domain of $f$.
## Local extreme values
Function $f$ has a **local maximum value** $f(x_0)$ at the point $x_0$ in its domain provided there exists a number $h > 0$ such that $f(x) \leq f(x_0)$ whenever $x$ is in the domain of $f$ and $|x - x_0| < h$.
Similarly, $f$ has a **local minimum value** $f(x_1)$ at the point $x_1$ in its domain provided there exists a number $h > 0$ such that $f(x) \geq f(x_1)$ whenever $x$ is in the domain of $f$ and $|x - x_1| < h$.
## Critical points
A critical point is a point $x \in \mathrm{Dom}(f)$ where $f'(x) =0$.
## Singular points
A singular point is a point $x \in \mathrm{Dom}(f)$ where $f'(x)$ is not defined.
## Endpoints
An **endpoint** is a point $x \in \mathrm{Dom}(f)$ that does not belong to any open interval contained in $\mathrm{Dom}(f)$.
## Locating extreme values
If the function $f$ is defined on an interval $I$ and has a local maximum or minimum value at a point $x_0$ in $I$, then $x_0$ must be either a critical point of $f$, a singular point of $f$, or an endpoint of $I$.
**Proof:**
Suppose that $f$ has a local maximum value at $x_0$ and that $x_0$ is neither an endpoint of the domain of $f$ nor a singular point of $f$. Then for some $h > 0$, $f(x)$ is defined on the open interval $(x_0 - h, x_0 + h)$ and has an absolute maximum at $x_0$. Also, $f'(x_0)$ exists, and it equals zero by the argument used in the proof of [Rolle's theorem](differentation.md#rolles-theorem), so $x_0$ is a critical point.
## The first derivative test
If $f'$ changes sign from negative to positive at a critical point $x_0$, then $f$ has a local minimum value there; if $f'$ changes sign from positive to negative, $f$ has a local maximum value there.
### Example
Find the local and absolute extreme values of $f(x) = x^4 - 2x^2 -3$ on the interval $[-2,2]$.
$$f'(x) = 4x^3 - 4x = 4x(x^2 - 1) = 4x(x - 1)(x + 1)$$
| $x$ | $-2$| $-1$| $0$ | $1$ | $2$ |
| --- | --- | --- | --- | --- | --- |
| $f'$| |- 0 +|+ 0 -|- 0 +| |
| $f$ | max | min | max | min | max |
| | EP | CP | CP | CP | EP |
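Evaluating $f$ and $f'$ directly confirms the chart (a small check, variable names mine): the absolute maximum $5$ occurs at the endpoints $x = \pm 2$, the absolute minimum $-4$ at the critical points $x = \pm 1$, and a local maximum $-3$ at $x = 0$.

```python
f = lambda x: x**4 - 2 * x**2 - 3
df = lambda x: 4 * x**3 - 4 * x

assert all(df(c) == 0 for c in (-1, 0, 1))  # critical points of f
assert f(-2) == f(2) == 5                   # absolute maximum at the endpoints
assert f(-1) == f(1) == -4                  # absolute minimum at x = -1, 1
assert f(0) == -3                           # local maximum at x = 0
```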


@ -0,0 +1,151 @@
# Improper integrals
Proper integrals are [definite integrals](integration.md/#the-definite-integral) where the integrand $f$ is *continuous* on a *closed, finite* interval $[a,b]$. For positive $f$ the integral corresponds to the area of a **bounded region** of the plane, a region contained inside some disk of finite radius with centre at the origin. The definite integral can be extended by allowing for two possibilities excluded in the situation described above.
1. We may have $a=-\infty$ or $b=\infty$, or both.
2. $f$ may be unbounded as $x$ approaches $a$ or $b$ or both.
Integrals satisfying 1 are called **improper integrals of type I** and integrals satisfying 2 are called **improper integrals of type II**.
## Improper integrals of type I
If $f$ is continuous on $[a,\infty)$, the improper integral of $f$ over $[a,\infty)$ is defined as a limit of proper integrals:
$$
\int_a^\infty f(x)dx = \lim_{R \to \infty} \int_a^R f(x)dx.
$$
Similarly, if $f$ is continuous on $(-\infty,b]$, then the improper integral is defined as:
$$
\int_{-\infty}^b f(x)dx = \lim_{R \to -\infty} \int_R^b f(x)dx.
$$
In either case, if the limit exists, the improper integral **converges**. If the limit does not exist, the improper integral **diverges**. If the limit is $\infty$ or $-\infty$, the improper integral **diverges to (negative) infinity**.
## Improper integrals of type II
If $f$ is continuous on the interval $(a,b]$ and is possibly unbounded near $a$, the improper integral may be defined as:
$$
\int_a^b f(x)dx = \lim_{c \downarrow a} \int_c^b f(x)dx.
$$
Similarly, if $f$ is continuous on $[a,b)$ and is possibly unbounded near $b$, the improper integral may be defined as:
$$
\int_a^b f(x)dx = \lim_{c \uparrow b} \int_a^c f(x)dx.
$$
These improper integrals may also converge, diverge or diverge to (negative) infinity.
## p-integrals
Summarizing the behaviour of improper integrals of types I and II for powers of $x$: if $0 < a < \infty$, then:
1.
$$
\int_a^\infty x^{-p}dx =
\begin{cases}
\text{converges to } \frac{a^{1-p}}{p-1} \quad \text{if } p > 1, \\
\text{diverges to } \infty \quad \text{if } p \leq 1.
\end{cases}
$$
2.
$$
\int_0^a x^{-p}dx =
\begin{cases}
\text{converges to } \frac{a^{1-p}}{1-p} \quad \text{if } p < 1, \\
\text{diverges to } \infty \quad \text{if } p \geq 1.
\end{cases}
$$
**Proof of 1:**
For $p=1$:
$$
\int_a^\infty x^{-1}dx = \lim_{R \to \infty} \int_a^R x^{-1}dx = \lim_{R \to \infty} (\ln R - \ln a) = \infty.
$$
For $p < 1$:
$$
\begin{array}{ll}
\int_a^\infty x^{-p}dx &= \lim_{R \to \infty} \int_a^R x^{-p}dx, \\
&= \lim_{R \to \infty} [\frac{x^{-p+1}}{-p+1}]_a^R, \\
&= \lim_{R \to \infty} \frac{R^{1-p}-a^{1-p}}{1-p} = \infty.
\end{array}
$$
For $p > 1$
$$
\begin{array}{ll}
\int_a^\infty x^{-p}dx &= \lim_{R \to \infty} \int_a^R x^{-p}dx, \\
&= \lim_{R \to \infty} [\frac{x^{-p+1}}{-p+1}]_a^R, \\
&= \lim_{R \to \infty} \frac{a^{-(p-1)}-R^{-(p-1)}}{p-1} = \frac{a^{1-p}}{p-1}.
\end{array}
$$
**Proof of 2:**
For $p=1$:
$$
\int_0^a x^{-1}dx = \lim_{c \space\downarrow\space 0} \int_c^a x^{-1}dx = \lim_{c \space\downarrow\space 0} (\ln a - \ln c) = \infty.
$$
For $p > 1$
$$
\begin{array}{ll}
\int_0^a x^{-p}dx &= \lim_{c \space\downarrow\space 0} \int_c^a x^{-p}dx, \\
&= \lim_{c \space\downarrow\space 0} [\frac{x^{-p+1}}{-p+1}]_c^a, \\
&= \lim_{c \space\downarrow\space 0} \frac{c^{-(p-1)} - a^{-(p-1)}}{p-1} = \infty.
\end{array}
$$
For $p < 1$:
$$
\begin{array}{ll}
\int_0^a x^{-p}dx &= \lim_{c \space\downarrow\space 0} \int_c^a x^{-p} dx, \\
&= \lim_{c \space\downarrow\space 0} [\frac{x^{-p+1}}{-p+1}]_c^a, \\
&= \lim_{c \space\downarrow\space 0} \frac{a^{1-p}-c^{1-p}}{1-p} = \frac{a^{1-p}}{1-p}.
\end{array}
$$
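A numerical illustration of case 1 with $a = 1$ (the midpoint-rule helper `tail_integral` is my sketch): for $p = 2$ the tail integrals approach $\frac{a^{1-p}}{p-1} = 1$, while for $p = 1$ they keep growing like $\ln R$.

```python
def tail_integral(p, R, n=100_000):
    """Midpoint-rule approximation of the integral of x^(-p) over [1, R]."""
    h = (R - 1) / n
    return sum((1 + (i + 0.5) * h) ** -p for i in range(n)) * h

# p = 2 > 1: converges; the proper integral up to R equals 1 - 1/R
assert abs(tail_integral(2, 1000) - (1 - 1 / 1000)) < 1e-3
# p = 1: diverges; the integral up to R equals ln R, growing without bound
assert tail_integral(1, 10_000) > tail_integral(1, 1_000) > 1
```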
## Comparison theorem for integrals
Let $-\infty \leq a < b \leq \infty$, and suppose that the functions $f$ and $g$ are continuous on the interval $(a,b)$ and satisfy $0 \leq f(x) \leq g(x)$ there. If $\int_a^b g(x)dx$ converges, then so does $\int_a^b f(x)dx$, and:
$$
\int_a^b f(x)dx \leq \int_a^b g(x)dx.
$$
Equivalently, if $\int_a^b f(x)dx$ diverges to $\infty$, then so does $\int_a^b g(x)dx$.
**Proof:** Since both integrands are nonnegative, there are only two possibilities for each integral: it can either converge to a nonnegative number or diverge to $\infty$. Since $f(x) \leq g(x)$ on $(a,b)$, it follows by the [properties of the definite integral](integration.md/#properties) that if $a < r < s < b$, then:
$$
\int_r^s f(x)dx \leq \int_r^s g(x)dx.
$$
The result follows by taking limits as $r \space\downarrow\space a$ and $s \space\uparrow\space b$.
### To prove convergence
Find a function $g$ such that
1. $\forall x \in [a,\infty), \space 0 \leq f(x) \leq g(x)$;
2. $\int_a^\infty g(x)dx$ is convergent.
### To prove divergence
Find a function $f$ such that
1. $\forall x \in [a,\infty), \space g(x) \geq f(x) \geq 0$;
2. $\int_a^\infty f(x)dx$ is divergent.


@ -0,0 +1,90 @@
# Integration techniques
## Elementary integrals
$$
\int \frac{1}{a^2 + x^2} dx = \frac{1}{a} \arctan(\frac{x}{a}) + C
$$
$$
\int \frac{1}{\sqrt{a^2-x^2}} dx = \arcsin(\frac{x}{a}) + C
$$
## Linearity of the integral
$$
\int (Af(x) + Bg(x))dx = A\int f(x)dx + B\int g(x)dx
$$
**Proof:** is missing.
## Substitution
Suppose that $g$ is differentiable on $[a,b]$ and satisfies $g(a)=A$ and $g(b)=B$. Also suppose that $f$ is continuous on the range of $g$. Then, letting $u = g(x)$ so that $du = g'(x)dx$,
$$
\int_a^b f(g(x))g'(x)dx = \int_A^B f(u)du.
$$
## Inverse substitution
Inverse substitutions appear to make the integral more complicated, so this strategy should act as a last resort. Substituting $x=g(u)$ in the integral
$$
\int_a^b f(x)dx,
$$
leads to the integral
$$
\int_{x=a}^{x=b} f(g(u))g'(u)du.
$$
## Integration by parts
Suppose $U(x)$ and $V(x)$ are two differentiable functions. According to the [product rule](differentation.md/#differentation-rules),
$$
\frac{d}{dx}(U(x)V(x)) = U(x) \frac{dV}{dx} + V(x) \frac{dU}{dx}.
$$
Integrating both sides of this equation and transposing terms
$$
\int U(x) \frac{dV}{dx} dx = U(x)V(x) - \int V(x) \frac{dU}{dx} dx,
$$
obtaining:
$$
\int U dV = U V - \int V dU.
$$
For definite integrals that is:
$$
\int_a^b f'(x)g(x)dx = [f(x)g(x)]_a^b - \int_a^b f(x)g'(x)dx.
$$
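As a concrete instance, taking $U = x$ and $dV = e^x dx$ gives $\int_0^1 x e^x dx = [x e^x]_0^1 - \int_0^1 e^x dx = e - (e - 1) = 1$; a midpoint-rule approximation (my helper) agrees:

```python
import math

def midpoint(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# integration by parts predicts the value 1 exactly
assert abs(midpoint(lambda x: x * math.exp(x), 0.0, 1.0) - 1.0) < 1e-6
```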
## Integration of rational functions
Let $P(x)$ and $Q(x)$ be polynomial functions with real coefficients, forming a rational function $\frac{P(x)}{Q(x)}$. Let $\frac{P(x)}{Q(x)}$ be a **strictly proper rational function**, that is, $\mathrm{deg}(P(x)) < \mathrm{deg}(Q(x))$. If it is not, it can be made strictly proper by **long division**.
Then, $Q(x)$ can be factored into the product of a constant $K$, real linear factors of the form $x-a_i$, and real quadratic factors of the form $x^2+b_ix+c_i$ having no real roots.
The rational function can be expressed as a sum of partial fractions. Corresponding to each factor $(x-a)^m$ of $Q(x)$ the decomposition contains a sum of fractions of the form
$$
\frac{A_1}{x-a} + \frac{A_2}{(x-a)^2} + ... + \frac{A_m}{(x-a)^m}.
$$
Corresponding to each factor $(x^2+bx+c)^n$ of $Q(x)$ the decomposition contains a sum of fractions of the form
$$
\frac{B_1x+C_1}{x^2+bx+c} + \frac{B_2x+C_2}{(x^2+bx+c)^2} + ... + \frac{B_nx+C_n}{(x^2+bx+c)^n}.
$$
The constants $A_1,A_2,...,A_m,B_1,B_2,...,B_n,C_1,C_2,...,C_n$ can be determined by adding up the fractions in the decomposition and equating the coefficients of like powers of $x$ in the numerator of the sum with those in $P(x)$.
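For instance, equating numerators in $\frac{1}{x^2-1} = \frac{A}{x-1} + \frac{B}{x+1}$ gives $A = \frac{1}{2}$, $B = -\frac{1}{2}$; a pointwise check confirms the decomposition (the example is mine).

```python
# 1 = A (x + 1) + B (x - 1) for all x forces A = 1/2, B = -1/2
A, B = 0.5, -0.5

for x in (-3.0, -0.5, 0.25, 2.0, 7.0):  # sample points away from x = 1, -1
    lhs = 1 / (x**2 - 1)
    rhs = A / (x - 1) + B / (x + 1)
    assert abs(lhs - rhs) < 1e-12
```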


@ -0,0 +1,191 @@
# Integration
## Sigma notation
If $m$ and $n$ are integers with $m \leq n$, and if $f$ is a function defined as $f: \{m,m+1,...,n\} \to \mathbb{R}$, the symbol $\sum_{i=m}^{n} f(i)$ represents the sum of the values of $f$ at those integers:
$$
\sum_{i=m}^{n} f(i) = f(m) + f(m+1) + f(m+2) + ... + f(n).
$$
The explicit sum appearing on the right side of this equation is the **expansion** of the sum represented in sigma notation on the left side.
## Partitions
Let $P$ be a finite set of points arranged in order between $a$ and $b$ on the real line
$$
P = \{x_0, x_1, ... , x_{n-1}, x_n\},
$$
where $a = x_0 < x_1 < ... < x_{n-1} < x_n = b$. Such a set $P$ is called a **partition** of $[a,b]$; it divides $[a,b]$ into $n$ subintervals of which the *i*th is $[x_{i-1},x_i]$. The length of the *i*th subinterval of $P$ is
$$
\Delta x_i = x_i - x_{i-1} \quad \mathrm{for} \space 1 \leq i \leq n,
$$
Then, the **norm** of the partition $P$ is defined as
$$
\parallel P \parallel = \max_{1 \leq i \leq n} \Delta x_i
$$
If the function $f$ is continuous on the interval $[a,b]$, it is continuous on each subinterval $[x_{i-1},x_i]$, and by the [Extreme value theorem](continuity.md/#the-extreme-value-theorem) it attains a minimum at a point $l_i$ and a maximum at a point $u_i$ of each subinterval, such that
$$
f(l_i) \leq f(x) \leq f(u_i) \quad \forall x \in [x_{i-1},x_i].
$$
## Upper and lower Riemann sums
The **lower Riemann sum**, $L(f,P)$, and the **upper Riemann sum**, $U(f,P)$, for the function $f$ and the partition $P$ are defined by:
$$
\begin{array}{ll}
L(f,P) &= f(l_1)\Delta x_1 + f(l_2)\Delta x_2 + ... + f(l_n)\Delta x_n \\
&= \sum_{i=1}^n f(l_i)\Delta x_i, \\
U(f,P) &= f(u_1)\Delta x_1 + f(u_2)\Delta x_2 + ... + f(u_n)\Delta x_n \\
&= \sum_{i=1}^n f(u_i)\Delta x_i.
\end{array}
$$
**Theorem:** for any partitions $P$, $Q$ of $[a,b]$, every lower Riemann sum is smaller than or equal to every upper Riemann sum:
$$
L(f,P) \leq U(f,Q).
$$
**Proof:** let $P$, $Q$ be partitions of $[a,b]$ and define $R = P \cup Q$; then $R$ is a refinement of both $P$ and $Q$. Then,
$$
L(f,P) \leq L(f,R) \leq U(f,R) \leq U(f,Q).
$$
## The definite integral
Suppose there exists exactly one number $I \in \mathbb{R}$ such that for every partition $P$ of $[a,b]$:
$$
L(f,P) \leq I \leq U(f,P).
$$
Then the function $f$ is integrable on $[a,b]$ and $I$ is called the definite integral
$$
I = \int_a^b f(x) dx.
$$
**Theorem:** suppose that a function $f$ is bounded on the interval $[a,b]$, then $f$ is integrable on $[a,b]$ if and only if $\forall \varepsilon > 0$ there exists a partition $P$ of $[a,b]$ such that
$$
U(f,P) - L(f,P) < \varepsilon.
$$
**Proof sketch:** this rests on the fact that if $a,b \in \mathbb{R}$ satisfy $|a-b| < \varepsilon$ for every $\varepsilon > 0$, then $a=b$.
**Theorem:** if $f$ is continuous on the interval $[a,b]$, then $f$ is integrable on $[a,b]$.
**Proof:** is missing...
### Properties
* If $a \leq b$ and $f(x) \leq g(x) \space \forall x \in [a,b]$:
$$
\int_a^b f(x)dx \leq \int_a^b g(x)dx.
$$
**Proof:** is missing...
* The **triangle inequality** for sums extends to definite integrals. If $a \leq b$, then
$$
|\int_a^b f(x)dx| \leq \int_a^b |f(x)|dx.
$$
**Proof:** is missing...
* Integral of an odd function $f(-x) = -f(x)$:
$$
\int_{-a}^a f(x)dx = 0.
$$
**Proof:** is missing...
* Integral of an even function $f(-x) = f(x)$:
$$
\int_{-a}^a f(x)dx = 2\int_0^a f(x)dx.
$$
**Proof:** is missing...
## The Mean-value theorem for integrals
If the function $f$ is continuous on $[a,b]$ then there exists a point $c$ in $[a,b]$ such that
$$
\int_a^b f(x)dx = (b-a)f(c)
$$
**Proof:** let $m$ and $M$ be the minimum and maximum values of $f$ on $[a,b]$, so that $m \leq f(x) \leq M$ for all $x \in [a,b]$. Then
$$m(b-a)=\int_a^b mdx \leq \int_a^b f(x)dx \leq \int_a^b Mdx = M(b-a),$$
$$m \leq \frac{1}{b-a} \int_a^b f(x)dx \leq M,$$
According to [the intermediate value theorem](continuity.md/#the-intermediate-value-theorem) there exists a $c \in [a,b]$ such that
$$\frac{1}{b-a} \int_a^b f(x)dx = f(c)$$
## Piecewise continuous functions
Let $c_0 < c_1 < ... < c_n$ be a finite set of points on the real line. A function $f$ defined on $[c_0,c_n]$, except possibly at some of the points $c_i$ $(0 \leq i \leq n)$, is called piecewise continuous on that interval if for each $i$ $(1 \leq i \leq n)$ there exists a function $F_i$ continuous on the *closed* interval $[c_{i-1},c_i]$ such that on the open interval $(c_{i-1},c_i)$
$$
f(x) = F_i(x).
$$
In this case, the integral of $f$ from $c_0$ to $c_n$ is defined to be
$$
\int_{c_0}^{c_n} f(x)dx = \sum_{i=1}^n \int_{c_{i-1}}^{c_i} F_i(x)dx.
$$
## The fundamental theorem of calculus
Suppose that the function $f$ is continuous on an interval $I$ containing the point $a$.
**Part I.** Let the function $F$ be defined on $I$ by
$$
F(x) = \int_a^x f(t)dt.
$$
Then $F$ is differentiable on $I$, and $F'(x) = f(x)$ there. Thus, $F$ is an antiderivative of $f$ on $I$:
$$
\frac{d}{dx} \int_a^x f(t)dt = f(x).
$$
**Part II.** If $G(x)$ is *any* antiderivative of $f(x)$ on $I$, so that $G'(x) = f(x)$ on $I$, then for any $b$ in $I$ there is
$$
\int_a^b f(x)dx = G(b) - G(a).
$$
**Proof:** using the definition of the derivative and the Mean-value theorem for integrals
$$
\begin{array}{ll}
F'(x) &= \lim_{h \to 0} \frac{F(x+h)-F(x)}{h}, \\
&= \lim_{h \to 0} \frac{1}{h}(\int_a^{x+h} f(t)dt - \int_a^x f(t)dt), \\
&= \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t)dt, \\
&= \lim_{h \to 0} \frac{1}{h} (h f(c)) \quad \mathrm{for \space some} \space c \in [x,x+h], \\
&= \lim_{c \to x} f(c), \\
&= f(x).
\end{array}
$$
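Part I can be illustrated numerically with $f = \cos$, so that $F(x) = \int_0^x \cos t \, dt = \sin x$ (the quadrature helper and tolerances are my choices):

```python
import math

def F(x, n=20_000):
    """Midpoint-rule approximation of the integral of cos t over [0, x]."""
    h = x / n
    return sum(math.cos((i + 0.5) * h) for i in range(n)) * h

x0, h = 1.2, 1e-4
assert abs(F(x0) - math.sin(x0)) < 1e-8     # F(x) = sin x - sin 0, as Part II predicts

Fprime = (F(x0 + h) - F(x0 - h)) / (2 * h)  # numerical derivative of F
assert abs(Fprime - math.cos(x0)) < 1e-4    # F'(x) = f(x), as Part I predicts
```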


@ -0,0 +1,107 @@
# Limits
If $f(x)$ is defined for all $x$ near $a$, except possibly at $a$ itself, and if it can be ensured that $f(x)$ is as close to $L$ as desired by taking $x$ close enough to $a$, but not equal to $a$, then $f$ approaches the **limit** $L$ as $x$ approaches $a$:
$$
\lim_{x \to a} f(x) = L
$$
## One-sided limits
If $f(x)$ is defined on some interval $(b,a)$ extending to the left of $x=a$, and if it can be ensured that $f(x)$ is as close to $L$ as desired by taking $x$ to the left of $a$ and close enough to $a$, then $f(x)$ has **left limit** $L$ at $x=a$ and:
$$
\lim_{x \uparrow a} f(x) = L.
$$
If $f(x)$ is defined on some interval $(a,b)$ extending to the right of $x=a$, and if it can be ensured that $f(x)$ is as close to $L$ as desired by taking $x$ to the right of $a$ and close enough to $a$, then $f(x)$ has **right limit** $L$ at $x=a$ and:
$$
\lim_{x \downarrow a} f(x) = L.
$$
## Limits at infinity
If $f(x)$ is defined on an interval $(a,\infty)$ and if it can be ensured that $f(x)$ is as close to $L$ as desired by taking $x$ large enough, then $f(x)$ **approaches the limit $L$ as $x$ approaches infinity** and
$$
\lim_{x \to \infty} f(x) = L
$$
## Limit rules
If $\lim_{x \to a} f(x) = L$, $\lim_{x \to a} g(x) = M$, and $k$ is a constant then,
* **Limit of a sum:** $\lim_{x \to a}[f(x) + g(x)] = L + M$.
* **Limit of a difference:** $\lim_{x \to a}[f(x) - g(x)] = L - M$.
* **Limit of a multiple:** $\lim_{x \to a}k f(x) = k L$.
* **Limit of a product:** $\lim_{x \to a}f(x) g(x) = L M$.
* **Limit of a quotient:** $\lim_{x \to a}\frac{f(x)}{g(x)} = \frac{L}{M}$, if $M \neq 0$.
* **Limit of a power:** $\lim_{x \to a}[f(x)]^\frac{m}{n} = L^{\frac{m}{n}}$, provided $L > 0$ if $n$ is even.
## Formal definition of a limit
The limit $\lim_{x \to a} f(x) = L$ means,
$$
\forall \varepsilon_{> 0} \exists \delta_{>0} \Big[ 0<|x-a|<\delta \implies |f(x) - L| < \varepsilon \Big].
$$
The limit $\lim_{x \to \infty} f(x) = L$ means,
$$
\forall \varepsilon_{> 0} \exists N_{>0} \Big[x > N \implies |f(x) - L | < \varepsilon \Big].
$$
The limit $\lim_{x \to a} f(x) = \infty$ means,
$$
\forall M_{> 0} \exists \delta_{>0} \Big[ 0<|x-a|<\delta \implies f(x) > M \Big].
$$
The limit $\lim_{x \to \infty} f(x) = \infty$ means,
$$
\forall M_{> 0} \exists N_{>0} \Big[ x > N \implies f(x) > M \Big].
$$
For one-sided limits there are similar formal definitions.
### Example
Applying the formal definition of a limit to show $\lim_{x \to 4}\sqrt{2x + 1} = 3$:
* Given $\varepsilon > 0$
* Choose $\delta = \frac{\varepsilon}{2}$
* Suppose $0 < |x - 4| < \delta$
* Check $|\sqrt{2x + 1} - 3|$
$$
\begin{array}{ll}
|\sqrt{2x + 1} - 3| &= |\frac{(\sqrt{2x + 1} - 3)(\sqrt{2x + 1} + 3)}{\sqrt{2x + 1} + 3}|\\
&= \frac{2|x - 4|}{\sqrt{2x + 1} + 3}\\
&< 2|x-4|\\
&< 2\delta = \varepsilon
\end{array}
$$
## Squeeze Theorem
Suppose that $f(x) \leq g(x) \leq h(x)$ holds for all $x$ in some open interval containing $a$, except possibly at $x=a$ itself. Suppose also that
$$\lim_{x \to a} f(x) = \lim_{x \to a} h(x) = L.$$
Then $\lim_{x \to a} g(x) = L$ also. Similar statements hold for left and right limits.
### Example
Applying the squeeze theorem to $\lim_{x \to 0} x^2 \cos(\frac{1}{x})$: for all $x \neq 0$,
$$
-1 \leq \cos(\frac{1}{x}) \leq 1 \implies -x^2 \leq x^2 \cos(\frac{1}{x}) \leq x^2.
$$
Since $\lim_{x \to 0} x^2 = \lim_{x \to 0} (-x^2) = 0$, it follows that $\lim_{x \to 0} x^2 \cos(\frac{1}{x}) = 0$.
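Sampling ever-smaller $x$ shows $g(x) = x^2 \cos(\frac{1}{x})$ pinned between the two parabolas (a small numerical illustration; the sample points are arbitrary):

```python
import math

g = lambda x: x**2 * math.cos(1 / x)

for k in range(1, 8):
    x = 10.0**-k
    assert -x**2 <= g(x) <= x**2  # squeezed between -x^2 and x^2
assert abs(g(1e-7)) <= 1e-13      # hence g(x) -> 0 as x -> 0
```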


@ -0,0 +1,197 @@
# Taylor polynomials
## Linearization
A function $f(x)$ may be linearized about $x = a$ as
$$
P_1(x) = f(a) + f'(a)(x-a),
$$
obtaining a polynomial that matches the value and derivative of $f$ at $x = a$.
## Taylor's theorem
Even better approximations of $f(x)$ can be obtained by using higher-degree polynomials if $f^{(n+1)}(t)$ exists for all $t$ in an interval containing $a$ and $x$, thereby matching more derivatives at $x = a$,
$$
P_n(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2+ ... + \frac{f^{(n)}(a)}{n!}(x-a)^n.
$$
Then the error $E_n(x) = f(x) - P_n(x)$ in the approximation $f(x) \approx P_n(x)$ is given by
$$
E_n(x) = \frac{f^{(n+1)}(s)}{(n+1)!}(x-a)^{n+1},
$$
where $s$ is some number between $a$ and $x$. The resulting formula
$$
f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + ... + \frac{f^{(n)}(a)}{n!}(x-a)^n + \frac{f^{(n+1)}(s)}{(n+1)!}(x-a)^{n+1},
$$
for some $s$ between $a$ and $x$, is called **Taylor's formula with Lagrange remainder**; the **Lagrange remainder** is the error term $E_n(x)$.
**Proof:**
Observe that the case $n=0$ of Taylor's formula, namely
$$
f(x) = P_0(x) + E_0(x) = f(a) + \frac{f'(s)}{1!}(x-a),
$$
is just the [Mean-value theorem](differentation.md#mean-value-theorem) for some $s$ between $a$ and $x$
$$
\frac{f(x) - f(a)}{x-a} = f'(s).
$$
Using induction, the formula is proved for $n > 0$. Suppose it holds for $n = k-1$, where $k \geq 1$ is an integer, so that
$$
E_{k-1}(x) = \frac{f^{(k)}(s)}{k!}(x-a)^k,
$$
where $s$ is some number between $a$ and $x$. Consider the next higher case $n=k$: apply the [Generalized Mean-value theorem](differentation.md/#generalized-mean-value-theorem) to the functions $E_k(t)$ and $(t-a)^{k+1}$ on $[a,x]$. Since $E_k(a)=0$, a number $u$ in $(a,x)$ is obtained such that
$$
\frac{E_k(x) - E_k(a)}{(x-a)^{k+1} - (a-a)^{k+1}}= \frac{E_k(x)}{(x-a)^{k+1}} = \frac{E_k'(u)}{(k+1)(u - a)^k}.
$$
Since
$$
\begin{array}{ll}
E_k'(u)&=\frac{d}{dt}(f(t)-f(a)-f'(a)(t-a)-\frac{f''(a)}{2!}(t-a)^2-...-\frac{f^{(k)}(a)}{k!}(t-a)^k)|_{t=u} \\
&= f'(u) - f'(a) - f''(a)(u-a)-...-\frac{f^{(k)}(a)}{(k-1)!}(u-a)^{k-1}
\end{array}
$$
is just $E_{k-1}(u)$ for the function $f'$ instead of $f$. By the induction assumption it is equal to
$$
\frac{(f')^{(k)}(s)}{k!}(u-a)^k = \frac{f^{(k+1)}(s)}{k!}(u-a)^k
$$
for some $s$ between $a$ and $u$. Therefore,
$$
E_k(x) = \frac{f^{(k+1)}(s)}{(k+1)!}(x-a)^{k+1}
$$
## Big-O notation
$f(x) = O(u(x))$ as $x \to a$ if and only if there exists a $k > 0$ such that
$$
|f(x)| \leq k|u(x)|
$$
for all $x$ in some open interval containing $a$.
The following properties follow from the definition:
1. If $f(x) = O(u(x))$ as $x \to a$, then $Cf(x) = O(u(x))$ as $x \to a$ for any value of the constant $C$.
2. If $f(x) = O(u(x))$ as $x \to a$ and $g(x) = O(u(x))$ as $x \to a$, then $f(x) \pm g(x) = O(u(x))$ as $x \to a$.
3. If $f(x) = O((x-a)^ku(x))$ as $x \to a$, then $\frac{f(x)}{(x-a)^k} = O(u(x))$ as $x \to a$ for any constant $k$.
If $f(x) = Q_n(x) + O((x-a)^{n+1})$ as $x \to a$, where $Q_n$ is a polynomial of degree at most $n$, then $Q_n(x) = P_n(x)$.
**Proof:** Let $P_n$ be the Taylor polynomial of $f$ about $a$. Properties 1 and 2 of big-O imply that $R_n(x) = Q_n(x) - P_n(x) = O((x - a)^{n+1})$ as $x \to a$. It must be shown that $R_n(x)$ is identically zero, so that $Q_n(x) = P_n(x)$ for all $x$. $R_n(x)$ may be written in the form
$$
R_n(x) = c_0 + c_1(x-a) + c_2(x-a)^2 + ... + c_n(x-a)^n
$$
If $R_n(x)$ is not identically zero, then there is a smallest index $k \leq n$ such that $c_k \neq 0$ while $c_j = 0$ for $0 \leq j \leq k - 1$, so
$$
R_n(x) = (x-a)^k(c_k + c_{k+1}(x-a) + ... + c_n(x-a)^{n-k}).
$$
Therefore,
$$
\lim_{x \to a} \frac{R_n(x)}{(x-a)^k} = c_k \neq 0.
$$
However, by property 3
$$
\frac{R_n(x)}{(x-a)^k} = O((x-a)^{n+1-k}).
$$
Since $n+1-k > 0$, $\frac{R_n(x)}{(x-a)^k} \to 0$ as $x \to a$. This contradiction shows that $R_n(x)$ must be
identically zero.
## Maclaurin formulas
Some Maclaurin formulas with error terms in big-O notation. These may be used in constructing Taylor polynomials of composite functions. As $x \to 0$:
1. $$\frac{1}{1-x} = 1 + x + ... + x^n + O(x^{n+1}),$$
2. $$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - ... + (-1)^{n-1}\frac{x^n}{n} + O(x^{n+1}),$$
3. $$e^x = 1 + x + \frac{x^2}{2!} + ... + \frac{x^n}{n!} + O(x^{n+1}),$$
4. $$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - ... + (-1)^n\frac{x^{2n+1}}{(2n+1)!} + O(x^{2n+3}),$$
5. $$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - ... + (-1)^n\frac{x^{2n}}{(2n)!} + O(x^{2n+1}),$$
6. $$\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - ... + (-1)^n\frac{x^{2n+1}}{2n+1} + O(x^{2n+3}).$$
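The big-O error terms can be checked numerically: if the error of the degree-$n$ partial sum is $O(x^{n+1})$, halving $x$ should shrink it by roughly $2^{n+1}$. A quick illustration for formula 2 (the sample points $x = 0.1$ and $x = 0.05$ are chosen here, not part of the notes):

```python
import math

def ln1p_maclaurin(x, n):
    """Partial sum x - x^2/2 + ... + (-1)^(n-1) x^n / n of ln(1+x)."""
    return sum((-1) ** (k - 1) * x**k / k for k in range(1, n + 1))

n = 4
e1 = abs(math.log1p(0.1) - ln1p_maclaurin(0.1, n))
e2 = abs(math.log1p(0.05) - ln1p_maclaurin(0.05, n))
ratio = e1 / e2  # close to 2**(n+1) = 32 when the error is O(x^5)
```

The observed ratio is near $32$, consistent with an $O(x^{5})$ error for $n = 4$.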
### Example
Construct $P_4(x)$ for $f(x) = e^{\sin x}$ around $x=0$.
$$
e^{\sin x} \approx 1 + (x - \frac{x^3}{3!} + \frac{x^5}{5!}) + \frac{1}{2!}(x - \frac{x^3}{3!} + \frac{x^5}{5!})^2 + \frac{1}{3!}(x - \frac{x^3}{3!} + \frac{x^5}{5!})^3 + \frac{1}{4!}(x - \frac{x^3}{3!} + \frac{x^5}{5!})^4
$$
$$
\begin{array}{ll}
e^{\sin x} &= 1 + x + \frac{1}{2}x^2 + (-\frac{1}{6} + \frac{1}{6})x^3 + (-\frac{1}{6} + \frac{1}{4!})x^4 + O(x^5), \\
P_4(x) &= 1 + x + \frac{1}{2}x^2 - \frac{1}{8}x^4.
\end{array}
$$
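The derived $P_4$ can be tested numerically: the neglected term is $O(x^5)$, so halving $x$ should shrink the error by roughly $2^5 = 32$ (the sample points $x = 0.2$ and $x = 0.1$ are chosen here for illustration):

```python
import math

def p4(x):
    # P_4 for e^(sin x) about x = 0, as derived above
    return 1 + x + x**2 / 2 - x**4 / 8

e1 = abs(math.exp(math.sin(0.2)) - p4(0.2))
e2 = abs(math.exp(math.sin(0.1)) - p4(0.1))
ratio = e1 / e2  # roughly 2**5 = 32 for an O(x^5) error
```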
## Evaluating limits with Taylor polynomials
Taylor and Maclaurin polynomials provide a method for evaluating limits of indeterminate forms.
### Example
Determine the limit $\lim_{x \to 0} \frac{x \arctan x - \ln(1+x^2)}{x \sin x - x^2}$.
$$
\begin{array}{ll}
x \sin x - x^2 = x^2 - \frac{x^4}{6} + O(x^6) - x^2 = - \frac{x^4}{6} + O(x^6), \\
x \arctan x - \ln(1+x^2) = x^2 - \frac{x^4}{3} + O(x^6) - x^2 + \frac{x^4}{2} + O(x^6) = \frac{x^4}{6} + O(x^6),
\end{array}
$$
so
$$
\begin{array}{ll}
\end{array}
$$
$$
\lim_{x \to 0} \frac{\frac{x^4}{6} + O(x^6)}{- \frac{x^4}{6} + O(x^6)} = -1
$$
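The value $-1$ can be confirmed by evaluating the quotient at small $x$ (the sample points $10^{-1}, 10^{-2}, 10^{-3}$ are chosen here, not part of the notes):

```python
import math

def q(x):
    # the quotient from the example above
    return (x * math.atan(x) - math.log1p(x**2)) / (x * math.sin(x) - x**2)

vals = [q(10.0 ** -k) for k in (1, 2, 3)]  # approaches -1 as x -> 0
```

Note that for much smaller $x$ the numerator and denominator suffer catastrophic cancellation in floating point, so the Taylor-polynomial analysis is the reliable tool, not direct evaluation.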
## L'Hôpital's rule
Suppose the functions $f$ and $g$ are differentiable on the interval $(a,b)$ and $g'(x) \neq 0$ there. Suppose also that $\lim_{x \downarrow a} f(x) = \lim_{x \downarrow a} g(x) = 0$ and that $\lim_{x \downarrow a} \frac{f'(x)}{g'(x)} = L$ exists. Then
$$
\lim_{x \downarrow a} \frac{f(x)}{g(x)} = \lim_{x \downarrow a} \frac{f'(x)}{g'(x)} = L.
$$
The outcome is exactly the same as that obtained with Taylor polynomials.
**Proof:** Using Taylor polynomials around $x = a$,
$$
\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f(a) + f'(a)(x - a) + \frac{f''(a)}{2}(x-a)^2 + O((x-a)^3)}{g(a) + g'(a)(x-a) + \frac{g''(a)}{2}(x-a)^2 + O((x-a)^3)}.
$$
If $f(a) = g(a) = 0$, this reduces to
$$
\lim_{x \to a} \frac{f'(a)(x - a) + \frac{f''(a)}{2}(x-a)^2 + O((x-a)^3)}{g'(a)(x-a) + \frac{g''(a)}{2}(x-a)^2 + O((x-a)^3)},
$$
and cancelling the common factor $(x-a)$ gives $\frac{f'(a)}{g'(a)}$ in the limit; the higher-order cases follow in the same way.
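A numerical illustration of the rule (the functions $f(x) = e^x - 1$ and $g(x) = \sin x$ at $a = 0$ are chosen here as an example, not taken from the notes):

```python
import math

f = lambda x: math.exp(x) - 1.0   # f(0) = 0
g = math.sin                      # g(0) = 0
fp = math.exp                     # f'
gp = math.cos                     # g', nonzero near 0

x = 1e-6
direct   = f(x) / g(x)        # 0/0 form evaluated just right of a = 0
lhopital = fp(0.0) / gp(0.0)  # the limit via f'/g' = e^0 / cos 0 = 1
```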
# Exponential and logarithmic functions
## The natural logarithm
The natural logarithm may be defined as the function with derivative $\frac{1}{x}$ and $\ln 1 = 0$. For $x > 0$,
$$
\frac{d}{dx} \ln x = \frac{1}{x}.
$$
### Standard limit
$$
\lim_{h \to 0} \frac{\ln (1+h)}{h} = 1
$$
## The exponential function
The exponential function is defined as the inverse of the natural logarithm
$$
\ln e^x = x.
$$
Furthermore, $e$ and $e^x$ may be obtained as the limits
$$
\begin{array}{ll}
\lim_{n \to \infty} (1 + \frac{1}{n})^n = e, \\
\lim_{n \to \infty} (1 + \frac{x}{n})^n = e^x.
\end{array}
$$
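These limit definitions converge rather slowly, which a direct evaluation shows (the choice $n = 10^6$ and the test exponent $x = 2$ are illustrative, not part of the notes):

```python
import math

n = 10**6
approx_e  = (1 + 1 / n) ** n  # approaches e, with error about e/(2n)
approx_e2 = (1 + 2 / n) ** n  # approaches e^2
```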
### Derivative of exponential function
The derivative of $y = e^x$ may be calculated by [implicit differentiation](../differentation.md#implicit-differentation):
$$
\begin{array}{ll}
y = e^x &\implies x = \ln y, \\
&\implies 1 = \frac{1}{y} \frac{dy}{dx}, \\
&\implies \frac{dy}{dx} = y = e^x.
\end{array}
$$
### Standard limit
$$
\lim_{h \to 0} \frac{e^h - 1}{h} = 1
$$
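Both standard limits of this page can be checked numerically; `log1p` and `expm1` compute $\ln(1+h)$ and $e^h - 1$ without the cancellation that naive formulas suffer for small $h$ (the step $h = 10^{-8}$ is chosen here for illustration):

```python
import math

h = 1e-8
log_ratio = math.log1p(h) / h  # ln(1+h)/h -> 1 as h -> 0
exp_ratio = math.expm1(h) / h  # (e^h - 1)/h -> 1 as h -> 0
```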
# Inverse functions
## Injectivity
A function $f$ is called **injective** if for all $x_1,x_2 \in \mathrm{Dom}(f)$, $x_1 \neq x_2$ implies that $f(x_1) \neq f(x_2)$. Equivalently, for every $y \in \mathrm{Ran}(f)$ there is precisely one $x \in \mathrm{Dom}(f)$ such that $y = f(x)$.
## Inverse function
If $f$ is injective, then it has an inverse function $f^{-1}$. The value of $f^{-1}(x)$ is the unique number $y$ in the domain of $f$ for which $f(y) = x$. Thus,
$$
y = f^{-1}(x) \iff x = f(y)
$$
Suppose $f$ is a continuous function; then $f$ is injective if $f$ is strictly increasing or strictly decreasing. This is the case, for example, when $f' > 0$ on the whole domain or $f' < 0$ on the whole domain.
### Derivative of inverse function
When $f$ is differentiable and injective, $(f^{-1})'(x) = \frac{1}{f'(f^{-1}(x))}$, provided $f'(f^{-1}(x)) \neq 0$.
**Proof:**
$$f(y) = x \implies f'(y) \frac{dy}{dx} = 1$$
$$\frac{dy}{dx} = \frac{1}{f'(y)} = \frac{1}{f'(f^{-1}(x))}$$
Without knowing the inverse function explicitly, a value of its derivative may still be determined this way.
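A sketch of that remark (the function $f(x) = x^3 + x$ and the point $y = 10$ are chosen here as an example; the inverse is evaluated by bisection rather than a closed formula):

```python
def f(x):  return x**3 + x        # strictly increasing: f'(x) = 3x^2 + 1 > 0
def fp(x): return 3 * x**2 + 1    # f'

def f_inv(y, lo=-10.0, hi=10.0):
    """Bisection: f is strictly increasing, so the inverse is well defined."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = 10.0
x = f_inv(y)                 # f(2) = 10, so x is (numerically) 2
formula = 1 / fp(x)          # (f^-1)'(y) = 1 / f'(f^-1(y)) = 1/13
h = 1e-6
numeric = (f_inv(y + h) - f_inv(y - h)) / (2 * h)  # central difference
```

The central-difference estimate agrees with the inverse-derivative formula even though no closed form for $f^{-1}$ was used.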
## The arcsine function
Write $\arcsin$, not $\sin^{-1}$: the notation $\sin^{-1}$ is misleading since $\sin$ itself is not injective. The arcsine is the inverse of $\sin$ restricted to $[-\frac{\pi}{2}, \frac{\pi}{2}]$:

1. for $x \in [-\frac{\pi}{2},\frac{\pi}{2}]$: $\arcsin(\sin x) = x$,
2. for $x \in [-1,1]$: $\sin(\arcsin x) = x$.

The arccosine function is defined similarly.
## Example question
Prove that $\forall x \geq 0$: $\arctan(x + 1) - \arctan(x) < \frac{1}{1 + x^2}$.
For $x = 0$: $\arctan(1) - \arctan(0) = \frac{\pi}{4} < 1$.
For $x > 0$: consider the function $f(t) = \arctan(t)$ on the interval $[x, x+1]$. Applying the [Mean-value theorem](../differentation.md/#mean-value-theorem) to $f$ on $[x,x+1]$ gives
$$\frac{f(x+1) - f(x)}{(x+1) - x} = f'(c).$$
Let $y = \arctan(c)$, so that $c = \tan y$. Differentiating $c = \tan y$ with respect to $c$,
$$
\begin{array}{ll}
1 = \sec^2 (y) \frac{dy}{dc} &\implies 1 = (\tan^2 y + 1) \frac{dy}{dc} \\
&\implies 1 = (c^2 + 1) \frac{dy}{dc} \\
&\implies \frac{dy}{dc} = \frac{1}{c^2 + 1}.
\end{array}
$$
obtaining
$$\arctan(x+1) - \arctan(x) = f'(c) = \frac{1}{c^2 + 1}$$
for some $c \in (x,x+1)$. Since $c > x$,
$$\frac{1}{1 + c^2} < \frac{1}{1 + x^2},$$
thereby
$$\arctan(x+1) - \arctan(x) < \frac{1}{1 + x^2}.$$
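The proved inequality can also be spot-checked numerically (the sample points are chosen here for illustration and do not replace the proof):

```python
import math

def gap(x):
    # left-hand side of the inequality
    return math.atan(x + 1) - math.atan(x)

xs = [0.0, 0.5, 1.0, 10.0]
checks = [gap(x) < 1 / (1 + x**2) for x in xs]  # all True by the theorem
```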