(Math 141 Summer 2015, Teaching)

Imagine that you have a pendulum such that the length of the rod is \( L \). With a bunch of simplifying assumptions that you usually learn in an intro physics course, you arrive at the following equation of motion

$$ \frac{d^2\theta}{dt^2} + \frac{g}{L} \theta = 0,$$

where \( g \) is the acceleration due to gravity. The solution of this differential equation (with the pendulum released from rest at angle \( \theta_0 \)) turns out to be

$$ \theta(t) = \theta_0 \cos\left( \sqrt{ \frac{g}{L} } t \right).$$

For our purposes, let's assume \( \theta_0 = 1 \) and \( L = g \), so that the solution becomes

$$ \theta(t) = \cos(t).$$

We know the values of \( \cos(t) \) for \( t = 0, \frac{\pi}{3}, \frac{\pi}{2}, \) and so on.
But those are essentially the only values of \( \cos \) for which we have simple closed-form expressions.
In other words, we get into trouble when \( t \) is a number that doesn't involve \( \pi \).
Let's take \( t = 1 \), for example.
\( \cos 1 \) corresponds to the angle of the pendulum after 1 second, which is a reasonable thing to ask.
Yet we have no way of expressing this quantity using constants familiar to us, such as \( \pi, e, \sqrt{3}, \) etc.

This is where estimation of a series comes in.

The Alternating Series Estimation Theorem says that, given a convergent alternating series

$$ \sum_{n=1}^\infty (-1)^n b_n, \qquad b_n > 0, $$

we have \( \lvert s - s_k \rvert \leq b_{k+1} \), where \( s \) is the value of the series and \( s_k \) is the \( k \)-th partial sum.

The cosine function has the power series expansion

$$ \theta(t) = \cos(t) = \sum_{n=0}^\infty \frac{(-1)^n t^{2n}}{(2n)!}.$$

So \( \theta(1) \) is given by

$$ \theta(1) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}.$$

In the Sage cell below, we define the functions we use for estimating the series: \( a_n \) is the \( n^\text{th} \) term of the sequence, and \( b_n = \lvert a_n \rvert \) is its absolute value.
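The original Sage cell is not reproduced here; a plain-Python sketch of those definitions (the names `a` and `b` are my own) might look like:

```python
from math import factorial

def a(n):
    """n-th term of the series for cos(1): (-1)^n / (2n)!"""
    return (-1) ** n / factorial(2 * n)

def b(n):
    """Absolute value of a(n): 1 / (2n)!"""
    return 1 / factorial(2 * n)
```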

Let's get the first 10 partial sums:
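As a stand-in for the missing cell, here is one way to print the first 10 partial sums in plain Python (`partial_sum` is a name I've introduced for this sketch):

```python
from math import factorial

def partial_sum(k):
    """k-th partial sum: sum of (-1)^n / (2n)! for n = 0, ..., k."""
    return sum((-1) ** n / factorial(2 * n) for n in range(k + 1))

for k in range(10):
    print(k, partial_sum(k))
```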

The same thing visually:
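The interactive plot is not included here; a rough matplotlib sketch of the same picture (my own code, not the original cell) could be:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, writes to a file
import matplotlib.pyplot as plt
from math import factorial, cos

ks = list(range(10))
sums = [sum((-1) ** n / factorial(2 * n) for n in range(k + 1)) for k in ks]

plt.plot(ks, sums, "o-", label="partial sums $s_k$")
plt.axhline(cos(1), color="gray", linestyle="--", label=r"$\cos(1)$")
plt.xlabel("$k$")
plt.ylabel("$s_k$")
plt.legend()
plt.savefig("partial_sums.png")
```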

From the 4th partial sum on, the successive values agree to the 3rd decimal place. We know that the partial sums must converge to the value of the series, so it seems the 3rd partial sum already approximates the series to within \( 10^{-3} \). In fact, according to our Theorem,
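(The original cell computed this bound; as a quick plain-Python check, the bound is just the first omitted term, \( b_5 = 1/10! \):)

```python
from math import factorial

# Error bound for the 4th partial sum: the first omitted term, 1/10!
bound = 1 / factorial(10)
print(bound)  # roughly 2.8e-7
```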

the discrepancy between the series and the 4th partial sum is *at most* \( \approx 2.8 \cdot 10^{-7} \).
I say this is a pretty good approximation already.
But what if we want our approximation to be even better?
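The cell that produced the output discussed below is missing; a plain-Python sketch that finds the needed partial sum under the theorem's bound might be:

```python
from math import factorial

target = 1e-10
k = 0
# The error of s_k is at most b_{k+1} = 1 / (2(k+1))!,
# so increase k until that bound drops below the target.
while 1 / factorial(2 * (k + 1)) >= target:
    k += 1
print(k)  # 6: the 6th partial sum has error below 1e-10
```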

According to the output, if we want the error to be less than \( 10^{-10} \), we'd go up to the 6th partial sum. Note that the 2nd partial sum, which we can easily calculate by hand, already approximates the series to within \( \sim 10^{-3} \).

This cell gives you the difference between the value of \( \cos(1) \) calculated by Sage (which should be a very good approximation) and the \( k \)-th partial sums. Compare these with the theoretical error bounds calculated above.
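Since the cell itself is not shown, here is a plain-Python analogue, using `math.cos` in place of Sage's value:

```python
from math import factorial, cos

reference = cos(1)  # a very accurate approximation of cos(1)
s = 0.0
for k in range(8):
    s += (-1) ** k / factorial(2 * k)
    # actual error of the k-th partial sum vs. the theoretical bound 1/(2(k+1))!
    print(k, abs(reference - s), 1 / factorial(2 * (k + 1)))
```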