Quite surprising… but elegant

Some antiderivatives cannot be expressed in terms of elementary functions, but you probably knew that already. What you should also know is that there are plenty of ways to find the value of an integral without ever going through the fundamental theorem of calculus. And I find that this only makes integrals more interesting. For example (taking s > 0 so the integral converges):

\displaystyle \int_0^{\infty} \frac{x^s}{e^x-1}\,dx

It would no doubt be very difficult to find its antiderivative, and it would first have to exist… Instead, I’d like to show you a way of evaluating this integral that I find rather elegant. Here we go:

\displaystyle \int_0^{\infty} \frac{x^s}{e^x-1}\,dx=\int_0^{\infty}x^{s}e^{-x}\frac{1}{1-e^{-x}}\,dx

I just multiplied the numerator and the denominator of the fraction by e^{-x}. And if you’re wondering why I did that, it’s because this expression can be expanded as a geometric series (since 0<e^{-x}<1 for x>0):

\displaystyle \frac{1}{1-e^{-x}}=\sum_{n=0}^{\infty} (e^{-x})^n=\sum_{n=0}^{\infty} (e^{-nx})

And this lets us rewrite our integral a bit:

\displaystyle \int_0^{\infty} \frac{x^s}{e^x-1}\,dx=\int_0^{\infty}x^{s}e^{-x}\sum_{n=0}^{\infty} (e^{-nx})\,dx

Next, Fubini’s theorem lets us interchange the sum and the integral:

\displaystyle \sum_{n=0}^{\infty} \int_0^{\infty}x^{s}e^{-x(n+1)}\,dx

Instead of starting the sum at n=0, let’s reindex it to start at n=1:

\displaystyle \sum_{n=1}^{\infty} \int_0^{\infty}x^{s}e^{-x((n-1)+1)}\,dx=\sum_{n=1}^{\infty} \int_0^{\infty}x^{s}e^{-nx}\,dx

Letting u=nx, and thus du=n\,dx,

\displaystyle \sum_{n=1}^{\infty} \int_0^{\infty}\left(\frac{u^s}{n^s}\right)e^{-u}\left(\frac{du}{n}\right)

\displaystyle =\sum_{n=1}^{\infty} \frac{1}{n^{s+1}}\int_0^{\infty} u^s\,e^{-u}\,du

And we immediately recognize the Riemann zeta function and the gamma function!

\displaystyle =\sum_{n=1}^{\infty} \frac{1}{n^{s+1}}\int_0^{\infty} u^s\,e^{-u}\,du=\zeta(s+1)\Gamma(s+1)

And therefore

\displaystyle \int_0^{\infty} \frac{x^s}{e^x-1}\,dx=\zeta(s+1)\Gamma(s+1)
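We can sanity-check this identity numerically. The sketch below (my own, not part of the derivation) integrates the left-hand side for s = 3, where \zeta(4)\Gamma(4)=(\pi^4/90)\cdot 3!=\pi^4/15:

```python
import math

# Numerical check of  ∫_0^∞ x^s/(e^x - 1) dx = ζ(s+1)Γ(s+1)  at s = 3,
# where ζ(4)Γ(4) = (π^4/90) · 3! = π^4/15.
def integrand(x, s=3):
    return x**s / math.expm1(x)  # expm1(x) = e^x - 1, accurate for small x

def simpson(f, a, b, n=100_000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Truncate at x = 60: the tail contributes less than 60^3 · e^(-60), which is tiny
approx = simpson(integrand, 1e-9, 60.0)
exact = math.pi**4 / 15
print(approx, exact)
```

The two printed values agree to many decimal places, which is a reassuring check on the derivation.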

DIY approximation for 3.1415…

There are some very weird and unintuitive formulas out there for approximating the irrational number \pi, each with a different rate of convergence. I’d like to go over one that I think is rather intuitive and that can be derived with some basic calculus and knowledge of series. Let’s go:

We would like to find an infinite series for \arctan(x). It can be shown that

\displaystyle \int_0^x \frac{1}{1+t^2}\,dt=\arctan(x)

What if we change the appearance of the integrand just a little, without actually altering its meaning?

\displaystyle \frac{1}{1+t^2}=\frac{1}{1-(-t^2)}

Now does the latter expression remind you of something? Well I see it as the sum of an infinite geometric series, with r=-t^2. That means that

\displaystyle \frac{1}{1-(-t^2)}=\sum_{n=0}^{\infty}(-t^2)^n=\sum_{n=0}^{\infty}(-1)^n t^{2n}

We can now substitute this series in for our integrand:

\displaystyle \int_0^x \frac{1}{1+t^2}\,dt=\int_0^x \sum_{n=0}^{\infty}(-1)^n t^{2n}\,dt

We now want to switch the order of summation and integration. There are several ways to justify this, but let’s just use Fubini’s theorem, treating the sum as an integral with respect to the counting measure.

\displaystyle \sum_{n=0}^{\infty} \int_0^x (-1)^n t^{2n}\,dt= \sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{2n+1}

Remember that all this is equal to \arctan(x).

\displaystyle \arctan(x)=\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{2n+1}
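Here’s a quick numerical sanity check of this series (my own sketch), evaluated at a sample point x = 0.5:

```python
import math

# Partial sums of  arctan(x) = Σ (-1)^n x^(2n+1)/(2n+1),  checked at x = 0.5
def arctan_series(x, terms):
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

x = 0.5
print(arctan_series(x, 30), math.atan(x))
```

The partial sums settle on math.atan(0.5) quickly for this x; how far we can push x is exactly the convergence question.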

Before moving on, we have to find the interval of convergence of this power series. And we can use the ratio test for this:

\displaystyle \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_{n}}\right|<1

\displaystyle \lim_{n\to\infty} \left|\frac{(-1)^{n+1} x^{2(n+1)+1}}{2(n+1)+1}\cdot\frac{2n+1}{(-1)^n x^{2n+1}}\right|<1

\displaystyle \left|x^2\right|<1

\displaystyle \left|x\right|<1

And we now have our interval of convergence. So now we’re looking for a way to somehow get \pi out of our \arctan function. Let’s imagine a right triangle with an acute angle of \frac{\pi}{6}. Using what we know about 30-60-90 triangles, we know that \tan(\frac{\pi}{6})=\frac{1}{\sqrt{3}}. That means that \pi=6\arctan(\frac{1}{\sqrt{3}}). Since \frac{1}{\sqrt{3}} lies inside our interval of convergence, we will get a pretty accurate approximation of \pi. Substituting this into our series,

\displaystyle 6\arctan(\frac{1}{\sqrt{3}})=\frac{6}{\sqrt{3}}\sum_{n=0}^{\infty}\frac{(-1)^n (\frac{1}{\sqrt{3}})^{2n}}{2n+1}

\displaystyle \pi=\sqrt{12}\sum_{n=0}^{\infty}\frac{(-1)^n 3^{-n}}{2n+1}

And there we go! We have our approximation for \pi! Calculating the sum of the first 100 terms gives us roughly 50 accurate digits, which is not too bad. And summing only up to n=10 gives us this:

\displaystyle \sqrt{12}\sum_{n=0}^{10}\frac{(-1)^n 3^{-n}}{2n+1}=3.14159330...

So it has 6 accurate digits after just 11 terms. However, there are other series that converge even faster, though they look almost random. For example, here is one Ramanujan discovered:

\displaystyle \frac{1}{\pi}=\frac{2\sqrt{2}}{9801}\sum_{n=0}^{\infty} \frac{(4n)!(1103+26390n)}{(n!)^4 (396)^{4n}}

The series produces about 8 new accurate digits of \pi for every new term. I find it amazing that he came up with this series through sheer intuition. The guy was a genius.
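Both rates of convergence are easy to check numerically. Here is a sketch in double precision (which the Ramanujan series already saturates after two terms):

```python
import math

# pi = sqrt(12) * Σ (-1)^n 3^(-n) / (2n+1)   (the series derived above)
def madhava_pi(terms):
    s = sum((-1)**n * 3.0**(-n) / (2*n + 1) for n in range(terms))
    return math.sqrt(12) * s

# Ramanujan's series for 1/pi, truncated after `terms` terms
def ramanujan_pi(terms):
    s = sum(math.factorial(4*n) * (1103 + 26390*n)
            / (math.factorial(n)**4 * 396**(4*n))
            for n in range(terms))
    return 1 / (2 * math.sqrt(2) / 9801 * s)

print(madhava_pi(11))   # the n = 0..10 partial sum from above
print(ramanujan_pi(2))  # two terms of Ramanujan's series
```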

Inverse Integration Technique

Integrals of inverse functions can often be very hard to compute. However, there exists this great “theorem” that allows you to easily find the antiderivative of the inverse of a function, provided you know the integral of the function itself. So here it is:

\displaystyle \int f^{-1}(x)\,dx=x\,f^{-1}(x)-\left(F\circ f^{-1}\right)(x)+C

where F denotes an antiderivative of our function f(x), and f is assumed to be invertible and differentiable. The proof is actually pretty easy, so let’s do it now. It relies on a simple substitution and one round of integration by parts.
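As a quick sanity check of the formula (my own example, not from the proof below), take f(x)=e^x, so that f^{-1}(x)=\ln(x) and F(x)=e^x:

\displaystyle \int \ln(x)\,dx=x\ln(x)-e^{\ln(x)}+C=x\ln(x)-x+C

which is indeed the familiar antiderivative of the natural logarithm.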

\displaystyle I=\int f^{-1}(x)\,dx

Now, we let x=f(u).

\displaystyle I=\int f^{-1}\left(f(u)\right)\,df(u)=\int u\,df(u)

We can now integrate by parts:

\displaystyle I=\int u\,df(u)=u\,f(u) - \int f(u)\,du= u\,f(u) - F(u) + C

And all we have left to do is to substitute back in our original variable:

\displaystyle I=u\,f(u) - F(u) + C=x\,f^{-1}(x)\, -\left(F \circ f^{-1}\right)(x) + C

And that’s the proof! Short, but so useful. Alright, let’s put it to the test by trying to find the antiderivative of \tan^{-1}(x). From our formula,

\displaystyle \int \tan^{-1}(x)\,dx=x\,\tan^{-1}(x)-\left(F\circ f^{-1}\right)(x)+C

So all we need to do is find the integral of \tan(x), which is pretty easy (substituting u=\cos(x), so that du=-\sin(x)\,dx):

\displaystyle \int \tan(x)\,dx=\int \frac{\sin(x)}{\cos(x)}\,dx=-\int\frac{du}{u}=-\ln|u|=\ln|\sec(x)|

Substituting this result in our formula,

\displaystyle \int \tan^{-1}(x)\,dx=x\,\tan^{-1}(x)-\ln|\sec(\tan^{-1}(x))|+C

But ideally, we would like to simplify that mess inside the natural log. How do we do that? Back to basics: construct a right triangle that has an acute angle \theta=\tan^{-1}(x), with opposite side x and adjacent side 1. We are then looking for \sec(\theta). After using our dear Pythagorean theorem, we end up with the following result:

\displaystyle \sec(\tan^{-1}(x))=\sec(\theta)=\sqrt{x^2+1}

Still a little messy, but not as bad as our previous expression, right? We then end up with

\displaystyle \int \tan^{-1}(x)\,dx=x\,\tan^{-1}(x)-\ln|\sqrt{x^2+1}|+C

\displaystyle \int \tan^{-1}(x)\,dx=x\,\tan^{-1}(x)-\frac{1}{2}\ln|x^2+1|+C

And technically, we don’t need the absolute value bars because the expression inside is bound to be positive, but don’t they look cooler?
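As a final check (a quick sketch of my own), we can differentiate the result numerically and compare against \tan^{-1}(x):

```python
import math

# Candidate antiderivative:  x·arctan(x) - (1/2)·ln(x^2 + 1)
def F(x):
    return x * math.atan(x) - 0.5 * math.log(x * x + 1)

# Symmetric finite-difference approximation of F'(x)
def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, 0.3, 1.0, 2.5):
    print(deriv(F, x), math.atan(x))
```

Each pair of printed values matches to well beyond the accuracy of the finite difference.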

The best bedtime story

I just found this awesome short story written by undergrad students who were bored (sorry, enthralled) in their Analysis class and then proceeded to write the best math short story ever.

Featuring \epsilon-Red Riding Hood and the–hilarious–Big Bad Bolzano-Weierstrass Theorem, this parody is just too good. So yeah, shoutout to the authors for writing this funny story; here it is.

Before finishing, let me just quote one of the best lines of the story…

“And you know what they say about lemmas: ‘…when life hands you lemmas, make lemmanade.’”

The Gaussian Integral

I think the Gaussian integral deserves an article of its own: formidable and full of mystery, it will never cease to amaze even the best mathematicians. On top of that, it’s this integral that lets us compute the value of \displaystyle \left(\frac{1}{2}\right)!.

Alright, let’s go! First, we need to know what the Gaussian integral actually is. Well, here it is:

\displaystyle \int_{-\infty}^{\infty} e^{-x^2}\,dx

It’s important to know, first of all, that this integral is non-elementary, which means its antiderivative cannot be expressed in terms of elementary functions (those built from a finite number of trigonometric, exponential, logarithmic, and constant functions). All this to say that we’ll need to get a bit creative if we want to pull it off.

First, let’s set up an equation that will let us perform a few interesting manipulations. We’ll set the integral equal to I.

\displaystyle I = \int_{-\infty}^{\infty} e^{-x^2}\,dx

What if we squared both sides of the equation?

\displaystyle I^2 = \left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)

I know that, for now, this looks like it has made things worse, but you’ll see that this step is what allows us to evaluate the integral.

In one of the integrals, we can replace x and dx with y and dy without changing anything.

\displaystyle I^2 = \left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)\left(\int_{-\infty}^{\infty} e^{-y^2}\,dy\right)

Next, since the integral on the right is a constant (whose value we don’t yet know), we can move it inside like this:

\displaystyle I^2 =\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} e^{-y^2}\,dy\right)e^{-x^2}\,dx

Using Fubini’s theorem,

\displaystyle I^2 =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-y^2}e^{-x^2}\,dxdy

\displaystyle I^2 =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dxdy

So we’re now dealing with a double integral. However, it remains impossible to evaluate if we stubbornly stick to Cartesian coordinates, which means we’ll have to perform a change of variables. In this case, the natural choice is polar coordinates. Here are the substitutions we’ll use:

\displaystyle x=r\cos(\theta)

\displaystyle y=r\sin(\theta)

\displaystyle x^2 + y^2=r^2

And lastly, with the help of the Jacobian determinant,

\displaystyle dxdy\to r\,drd\theta

We now have to convert the region of integration to polar coordinates. Our region is the entire plane \displaystyle \mathbb{R}^2. In polar coordinates, that means our radius runs from 0 to infinity, and our angle from 0 to 2\pi. We’re now ready to change variables:

\displaystyle I^2 =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dxdy=\int_0^{\infty}\!\!\!\int_0^{2\pi} e^{-r^2}r\,d\theta dr

The inner integral (the one over \theta) is wonderfully simple:

\displaystyle I^2 =2\pi\int_0^{\infty} e^{-r^2}r\,dr

As for the remaining one, it takes just a bit more effort, but not that much.

\displaystyle u=-r^2

\displaystyle \frac{du}{-2}=r\,dr

\displaystyle I^2 =\frac{-2\pi}{2}\int_{0}^{-\infty} e^{u}\,du

\displaystyle I^2 =-\pi e^{-r^2}\Big |_0^{\infty}

\displaystyle I^2 =-\pi \left(\lim_{r\to\infty} \frac{1}{e^{r^2}} - e^0\right)

\displaystyle I^2=-\pi(-1)

\displaystyle I^2=\pi

We’re almost there! What we want is I, not I^2. All that’s left is to take the square root of both sides (the positive root, since the integrand is positive, so I>0):

\displaystyle I=\sqrt{\pi}

\displaystyle \int_{-\infty}^{\infty} e^{-x^2}\,dx=\sqrt{\pi}
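And since we’re here, a quick numerical sanity check (my own sketch) with the midpoint rule:

```python
import math

# Midpoint-rule check of  ∫_{-∞}^{∞} e^{-x^2} dx = sqrt(pi).
# The tail beyond |x| = 8 is below e^{-64} ≈ 1.6e-28, so [-8, 8] is plenty.
n = 400_000
a, b = -8.0, 8.0
h = (b - a) / n
total = h * sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n))
print(total, math.sqrt(math.pi))
```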

Complex numbers and Trig

So the other day I was thinking about the complex representations of some trig functions, which are derived from Euler’s formula, and I wanted to share some interesting ways of arriving at the Pythagorean identity.

\displaystyle \cos^2(\theta)+\sin^2(\theta)=1

Before jumping into \mathbb{C}, let’s try to understand this equation using simple trig and geometry. The reason it’s called the “Pythagorean” identity is that it’s usually derived from the Pythagorean theorem. How? Let’s see:

Imagine a circle (it’s usually a unit circle, but don’t worry about the unit part) centered at the origin, and a right triangle inscribed in it. The length of its hypotenuse will be equal to the radius, the x-coordinate will be the length of one of its shorter sides, and the y-coordinate will be the length of its last side. Now we know from the Pythagorean theorem that

\displaystyle x^2+y^2=r^2

And then we can use some trigonometry to express x and y in terms of r, sine, and cosine. We’ll let our angle \theta be the angle that the hypotenuse makes with the x-axis. Let’s write down what we know about \theta:

\displaystyle \sin(\theta)=\frac{y}{r}

\displaystyle \cos(\theta)=\frac{x}{r}

Or

\displaystyle y=r\sin(\theta)

\displaystyle x=r\cos(\theta)

We can then substitute these last equations in for x and y in our Pythagorean equation:

\displaystyle (r\cos(\theta))^2+(r\sin(\theta))^2=r^2

\displaystyle r^2\cos^2(\theta)+r^2\sin^2(\theta)=r^2

Dividing both sides by r^2 (we can do this because r is never zero):

\displaystyle \cos^2(\theta)+\sin^2(\theta)=1

And there we go: we’ve recovered our original Pythagorean identity. But now, let’s prove it in a very different but interesting way: using complex numbers! We can actually do this in more than one way, so let’s start with the first method. Recall Euler’s formula:

\displaystyle e^{i\theta}=\cos(\theta)+i\sin(\theta)

But what happens if we replace \theta with -\theta ?

\displaystyle e^{-i\theta}=\cos(-\theta)+i\sin(-\theta)

Ideally, we want to get rid of the negative sign inside the cosine and sine functions. So let’s think about what happens when we take the cosine of an angle \theta versus the cosine of the angle -\theta. Well, the cosine doesn’t change, right? That’s because cosine is an even function, so

\displaystyle \cos(-\theta)=\cos(\theta)

Now we need to know what happens to the sine of an angle \theta when we take the sine of -\theta instead. Here we can use the fact that sine is an odd function, so

\displaystyle \sin(-\theta)=-\sin(\theta)

So the sine flips sign when we take the negative angle. We can now substitute these values into our equation to replace all the -\theta’s (well, except the one in the exponent) with \theta:

\displaystyle e^{-i\theta}=\cos(\theta)-i\sin(\theta)

Combining this equation with Euler’s original formula:

\displaystyle e^{-i\theta}=\cos(\theta)-i\sin(\theta)

\displaystyle e^{i\theta}=\cos(\theta)+i\sin(\theta)

Notice that if we add both of these together, the sines will “cancel out” and we can solve for cosine. After adding them, we have

\displaystyle e^{i\theta}+e^{-i\theta}=2\cos(\theta)

Solving for cosine:

\displaystyle \cos(\theta)=\frac{e^{i\theta}+e^{-i\theta}}{2}

And we have the complex exponential representation of cosine! Now let’s solve for sine by subtracting one equation from the other:

\displaystyle e^{-i\theta}=\cos(\theta)-i\sin(\theta)

\displaystyle e^{i\theta}=\cos(\theta)+i\sin(\theta)

After subtracting, we have one equation:

\displaystyle e^{i\theta}-e^{-i\theta}=2i\sin(\theta)

Solving for sine:

\displaystyle \sin(\theta)=\frac{e^{i\theta}-e^{-i\theta}}{2i}
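These two exponential representations are easy to verify numerically; here’s a quick sketch using Python’s cmath module at a sample angle:

```python
import cmath
import math

theta = 0.7  # any sample angle works

# The exponential representations derived above
cos_theta = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_theta = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)

print(cos_theta.real, math.cos(theta))
print(sin_theta.real, math.sin(theta))
print(cos_theta**2 + sin_theta**2)  # the Pythagorean identity
```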

Alright, so now that we have equations for both sine and cosine, we can square them and add them together to make sure they do indeed add up to 1:

\displaystyle \left(\frac{e^{i\theta}-e^{-i\theta}}{2i} \right)^2 + \left(\frac{e^{i\theta}+e^{-i\theta}}{2} \right)^2=1

\displaystyle \frac{e^{2i\theta} -2 + e^{-2i\theta}}{-4} + \frac{e^{2i\theta}+2+e^{-2i\theta}}{4} = 1

\displaystyle \frac{-e^{2i\theta} +2 - e^{-2i\theta}}{4} + \frac{e^{2i\theta}+2+e^{-2i\theta}}{4} = 1

\displaystyle \frac{-e^{2i\theta} +2 - e^{-2i\theta} + e^{2i\theta}+2+e^{-2i\theta}}{4}=1

And just like magic—or just like math—we are left with

\displaystyle \frac{2+2}{4}=1

\displaystyle \frac{4}{4}=1

\displaystyle 1=1

And we have confirmed the Pythagorean identity. There is, however, another way, and it’s much simpler and shorter. Recall Euler’s formula:

\displaystyle e^{i\theta}=\cos(\theta)+i\sin(\theta)

Now let’s take the reciprocal of both sides:

\displaystyle \frac{1}{e^{i\theta}}=\frac{1}{\cos(\theta)+i\sin(\theta)}

We can also rewrite this as

\displaystyle {e^{-i\theta}}=\frac{1}{\cos(\theta)+i\sin(\theta)}

But we also know that

\displaystyle e^{-i\theta}=\cos(\theta)-i\sin(\theta)

Using this, we can set the two right-hand sides equal to each other:

\displaystyle \cos(\theta)-i\sin(\theta)=\frac{1}{\cos(\theta)+i\sin(\theta)}

And if we eliminate the fraction by multiplying both sides by \cos(\theta)+i\sin(\theta),

\displaystyle (\cos(\theta)-i\sin(\theta))(\cos(\theta)+i\sin(\theta))=1

Now the mathematician inside you should recognize the left-hand side as a difference of squares:

\displaystyle \cos^2(\theta)-(i\sin(\theta))^2=1

\displaystyle \cos^2(\theta)- i^2\sin^2(\theta)=1

\displaystyle \cos^2(\theta)- (-\sin^2(\theta))=1

\displaystyle \cos^2(\theta) +\sin^2(\theta)=1

Voilà!