Fourier series

Here we will introduce trigonometric series and investigate the possibility of expressing a function using such a series. First we work on the whole real line and introduce the theory behind Fourier series. Then we apply this to functions on intervals and introduce the concept of sine and cosine Fourier series.

The underlying idea of Fourier series is to investigate what can be done with sines and cosines. These naturally live on the interval [0,2π], but we want to work on the interval [0,T ] for some T > 0 and consider all sines and cosines whose periods are fractions of this T, because then they fit this interval nicely (see this note). It is a good idea to think of this T as a wavelength; the sines and cosines then have to be scaled by the corresponding frequency ω, given by the usual formula known from physics, ω = 2π/T. We will now consider the system of functions {sin(kωt), cos(kωt)}, where k are integers, the so-called trigonometric system. We would like to know what functions can be expressed using this system (see the section Systems of functions). Before we address this question we will investigate the system itself.

First, what are the values of k in the above system? Recall (see Systems of functions) that we want to obtain the largest result with the smallest possible set, so we definitely want a linearly independent set. Now since the sines in the above set are all odd functions and cosines are even functions, we should not use negative integers for k, since then we would only double functions that are already in the system, thus spoiling independence. If k = 0, then the sine function gives sin(0) = 0, which would again spoil independence. On the other hand, cos(0) = 1, and there is no reason to rule this function out, it does contribute to the space of functions that can be reached. Thus when we say the trigonometric system, we really mean the set of functions

{1, sin(kωt), cos(kωt) for k natural numbers}.

Note that there are in fact many trigonometric systems; they correspond to different choices of T. We do not mix them together, so when we talk about a trigonometric system, it is always assumed to be the system corresponding to some fixed positive T (and the corresponding frequency ω).

Having clarified this, it is now a standard fact that the sines and cosines in a trigonometric system form a linearly independent set. However, there is a deeper way in which these functions are distinct. In many areas of mathematics the following criterion is used to judge how far functions are from each other. Given two functions f and g on an interval I, we multiply them and integrate this product over I. The larger the resulting number is (in absolute value), the more these two functions have in common. Obviously, the largest independence of these two functions occurs if this integral is zero; this corresponds to vectors being perpendicular. And that is exactly what happens for two distinct functions from our system.

Fact.
Let T > 0, denote ω = 2π/T.
For all integers m,n > 0 the following are true.

∫[0,T] sin(mωt)·cos(nωt) dt = 0,
∫[0,T] sin(mωt)·sin(nωt) dt = 0 if m ≠ n, and = T/2 if m = n,
∫[0,T] cos(mωt)·cos(nωt) dt = 0 if m ≠ n, and = T/2 if m = n.

There is one more function in the trigonometric system, the constant function cos(0) = 1. Also this function is perpendicular in the above sense to all other functions from the trigonometric system, but it is exceptional in one respect. The above formulas among other things show that if we integrate squares of sines or cosines over the basic interval [0,T ], we get T /2. However, when we integrate the square of cos(0), we get T. This shows that this function is somewhat exceptional and therefore it will be treated a bit differently, as we will see in the next definition.
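These orthogonality relations are easy to check numerically. The following sketch (Python, with an arbitrarily chosen T = 3; the helper names are our own, not from the text) approximates the integrals by Riemann sums:

```python
import math

T = 3.0                       # any fixed period works; T = 3 is our arbitrary choice
w = 2 * math.pi / T           # the corresponding frequency omega

def inner(f, g, n=20_000):
    """Approximate the integral of f*g over [0, T] by an equispaced Riemann sum."""
    h = T / n
    return sum(f(i * h) * g(i * h) for i in range(n)) * h

def sin_k(k): return lambda t: math.sin(k * w * t)
def cos_k(k): return lambda t: math.cos(k * w * t)

# distinct functions from the system are "perpendicular": the integral is ~ 0
assert abs(inner(sin_k(1), sin_k(2))) < 1e-9
assert abs(inner(sin_k(1), cos_k(1))) < 1e-9
# a sine or cosine against itself integrates to T/2 ...
assert abs(inner(sin_k(1), sin_k(1)) - T / 2) < 1e-9
# ... except the constant cos(0) = 1, whose square integrates to T
assert abs(inner(cos_k(0), cos_k(0)) - T) < 1e-9
```

The tolerances can be this tight because equispaced sums over a full period integrate such trigonometric products essentially exactly.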

The above property of perpendicularity makes the trigonometric system very special, and in particular it comes in handy when we address one of the core questions of this section: How do we express other functions using trigonometric functions? When expressing functions using this system, the starting point is traditional linear combinations of functions from the trigonometric system, but the main object will be "infinite linear combinations" - that is, series. There are many ways in which the sines and cosines can be arranged and ordered, but one turned out to be the most practical; we will now set up the forms that we will use in the sequel.

Definition.
Let T > 0, denote ω = 2π/T.
By a trigonometric polynomial of degree N we mean the function

TN(t) = a0/2 + Σ(k=1 to N) [ ak·cos(kωt) + bk·sin(kωt) ].

By a trigonometric series we mean the function series

a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ],

where ak and bk are real numbers.

What functions can be expressed by series of this form? As usual, this is a very difficult question and we will apply the traditional approach. We will assume that a certain function f was already expressed as such a series and then we will try to find out what it means for this f and the series.

We start with something simple. Note that all functions in a trigonometric system are T-periodic, so automatically also all finite linear combinations TN(t) are T-periodic. Passing to the limit we get that also the trigonometric series (when they converge) are T-periodic. This immediately gives us the following observation.

Fact.
Let T > 0, denote ω = 2π/T. Assume that for all real numbers t we have

f(t) = a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].
Then f is necessarily T-periodic.

Consequently, if we want to express a function f using a trigonometric series, it is pointless to try functions other than periodic ones. Thus in particular it is enough to do our investigations on the interval [0,T ] (or some shift of it). Unfortunately, there are no other immediately useful observations about f that can be made using the assumption that our function is expressed as a trigonometric series. If we want more, we have to also assume more, namely uniform convergence of this series.

Theorem (uniqueness).
Let T > 0, denote ω = 2π/T. Assume that f is a T-periodic function such that for all real numbers t we have

f(t) = a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].

Moreover, assume that the convergence of this series is uniform on [0,T ]. Then the coefficients in this series are necessarily given by

a0 = (2/T) ∫[0,T] f(t) dt,
ak = (2/T) ∫[0,T] f(t)·cos(kωt) dt,
bk = (2/T) ∫[0,T] f(t)·sin(kωt) dt.

Actually, note that also a0 is given by the second formula, since for k = 0 the cosine in this integral is always 1. However, it is traditional to state it in this way, since even if we wrote it as one formula, in actual calculations we would have to handle two cases anyway; after all, as we observed before, the function 1 is somewhat special.

We called this theorem the "uniqueness theorem", since it says that a periodic function can be expressed as a trigonometric series in only one way. However, note that we have uniqueness only in the case when the series converges uniformly. Otherwise it may happen that a function can be expressed as a trigonometric series and this series is not the one from the above theorem. This is quite different from the behavior of power series. The core cause for this difference lies in the fact that for power series, convergence already implies uniform convergence (on almost the whole region of convergence), which yields uniqueness in all cases. Here it can happen (and it does happen quite often) that we have convergence but not uniform convergence.

Note also that this uniqueness refers only to series whose basic period is T. If the function f is also S-periodic, then we can use it to get a different expansion.

Now we know that if we want to expand a function using a trigonometric series, the only way that has a good chance of succeeding is to use the coefficients as above. In order to do that we need to make sure that the integrals actually exist. (We did not have to worry in the above theorem, since uniform convergence of a series whose terms are continuous yields a continuous - hence integrable - function.)

Definition.
Let f be a T-periodic function for some T > 0, denote ω = 2π/T. Assume that f is Riemann integrable on [0,T ].
We define the Fourier series of f as the series

a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ],

where the coefficients are given as

ak = (2/T) ∫[0,T] f(t)·cos(kωt) dt for k ≥ 0,
bk = (2/T) ∫[0,T] f(t)·sin(kωt) dt for k ≥ 1.

Note that this is a purely formal assignment. Given f, we calculate those integrals and create a series, but there is no guarantee that this series actually converges, and if it does, that it converges to f. We denote this formal assignment as follows.

f(t) ∼ a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].
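This formal assignment can be sketched in code. The following Python fragment (the name fourier_coefficients is ours, not from the text) approximates the coefficient integrals by Riemann sums and sanity-checks them on a function that already is a trigonometric polynomial:

```python
import math

def fourier_coefficients(f, T, N, n=20_000):
    """Formally assign the coefficients ak, bk (k = 0..N) to f, approximating
    the integrals over one period [0, T] by equispaced Riemann sums."""
    w = 2 * math.pi / T
    h = T / n
    ts = [i * h for i in range(n)]
    a = [2 / T * sum(f(t) * math.cos(k * w * t) for t in ts) * h for k in range(N + 1)]
    b = [2 / T * sum(f(t) * math.sin(k * w * t) for t in ts) * h for k in range(N + 1)]
    return a, b

# sanity check on a function that already is a trigonometric polynomial:
# f(t) = 1 + 3 cos(2t) - 2 sin(t) with T = 2 pi (so w = 1)
T = 2 * math.pi
a, b = fourier_coefficients(lambda t: 1 + 3 * math.cos(2 * t) - 2 * math.sin(t), T, 3)
# a[0] ~ 2 (so that a0/2 = 1), a[2] ~ 3, b[1] ~ -2, all other entries ~ 0
```

Note how the constant term 1 comes back as a0 = 2, consistent with the a0/2 convention in the series.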

What good can be expected of this series? Convergence of Fourier series is very tricky and difficult; mathematicians have been working on it for over a hundred years. Obviously here in Math Tutor we are far from the level needed to understand all that, so we will just outline some useful results. For starters, note that even if f is continuous, its Fourier series need not converge to it; as a matter of fact, in a typical case it will not, and often it even fails to converge at many points. This does not sound too promising. From a practical point of view some results are hopeful, though. They show that for convergence we need to look deeper into f, but on the other hand we do not mind a little discontinuity here and there.

Theorem (Dirichlet).
Let f be a T-periodic function for some T > 0, denote ω = 2π/T. Assume that f is Riemann integrable on [0,T ]. Let

f(t) ∼ a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].

Assume that f is differentiable on a reduced neighborhood of some t0 and that this derivative has one-sided limits at t0. Then the Fourier series of f converges at t0 and its sum there is

[ f (t0+) + f (t0−) ] / 2.

Recall that f (t0+) stands for the limit of f at t0 from the right and f (t0−) denotes the limit of f at t0 from the left.

This theorem has three important aspects. First, the convergence of the Fourier series can be deduced from differentiability, which is often used. Second, this convergence (and the value of the sum) depends only on the behavior of f around the point t0. This means that for the behavior of the Fourier series at t0 it makes no difference what f looks like further away from this point. Results of this form are called the principle of localization.

It also means that the value of f at t0 itself is irrelevant. Indeed, it does not appear at all in the above theorem, not even indirectly, and it actually should not be surprising. Since the coefficients of a Fourier series are given by integrals, it follows that we can change the given function at finitely many points without changing the resulting series.

The third important aspect is that the Fourier series recovers not the original function, but a sort of average of it. Given t0 as above, the Fourier series looks a bit to the left and a bit to the right and then it chooses exactly the middle value.

As we saw, a Fourier series does not yield the function value itself but an average of one-sided limits. Still, in practical use it would be nice to have an actual equality between f and its Fourier series. There is only one way to make this happen: we have to make the function value equal to the one-sided limits, and this means continuity.

Theorem.
Let f be a T-periodic function for some T > 0, denote ω = 2π/T. Assume that f is Riemann integrable on [0,T ]. Let

f(t) ∼ a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].

If f is differentiable at some t0, then the Fourier series of f converges at t0 and its sum there is f (t0).

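To see this theorem in action we can borrow a classical example that is not derived in the text: the sawtooth f(t) = t on [−π,π), extended 2π-periodically, whose Fourier series is well known. At an interior point, where f is differentiable, the partial sums indeed approach the function value:

```python
import math

# A classical series (assumed here, not derived in the text): the sawtooth
# f(t) = t on [-pi, pi), extended 2pi-periodically, has the Fourier series
#   sum over k >= 1 of 2 * (-1)**(k+1) * sin(k t) / k.
def sawtooth_partial_sum(t, N):
    return sum(2 * (-1) ** (k + 1) * math.sin(k * t) / k for k in range(1, N + 1))

t0 = math.pi / 2          # an interior point of (-pi, pi), where f is differentiable
approx = sawtooth_partial_sum(t0, 10_000)
# approx is close to f(t0) = pi/2, illustrating convergence at points of differentiability
```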
Now we will look at a global statement. We will not require continuity everywhere (since Fourier series are especially interesting for non-continuous functions), but in order to get something reasonable we cannot allow the function to have too many problems. One possibility is the following. We want the function to consist of "nice" pieces; that is, on every piece we expect the function to be continuous and perhaps to have some other favourable property. At the endpoints of every piece we want the one-sided limits to exist. For a more precise definition, see for instance this note. The key property is that of bounded variation; the convergence of the Fourier series is then guaranteed by the Jordan theorem. However, determining this property is not easy, so in applications we often prefer to check stronger but more tractable properties.

Theorem (Jordan conditions implied by derivative).
Let f be a T-periodic function that is piecewise continuous with piecewise continuous derivative. Denote ω = 2π/T. Let

f(t) ∼ a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].

Then for every t we have

a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ] = [ f (t+) + f (t−) ] / 2.
If f is actually continuous on the real line, then the Fourier series converges uniformly to f on the set of real numbers.

Recall the idea behind the notion of a piecewise continuous and differentiable function: its domain (in our case the real line) can be split into intervals (in our case infinitely many) whose lengths do not become arbitrarily small, such that on the interior of each interval the function is continuous and differentiable, this derivative is continuous, and the function and the derivative have proper one-sided limits at the endpoints of these intervals. As an example, in the following picture we first show a typical function as in the assumptions of the above theorem, and then what the sum of its Fourier series would look like.

As you can see, the series returns the original function on the continuous segments, but at points of discontinuity it returns the average of the left and right limit, regardless of what the actual value at such a point is. For a "real" example, with an actual function given by a formula and all, see below.

These conditions are very useful, but still too restrictive in some situations; for instance, we cannot apply them to functions involving a square root, which may have an improper one-sided derivative at the origin. Another useful version of the conditions uses piecewise monotonicity.

Theorem (Dirichlet).
Let f be a T-periodic function that is bounded and piecewise monotone. Denote ω = 2π/T. Let

f(t) ∼ a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ].

Then for every t we have

a0/2 + Σ(k=1 to ∞) [ ak·cos(kωt) + bk·sin(kωt) ] = [ f (t+) + f (t−) ] / 2.

This theorem could also be applied to the above picture. The difference is that now, when dividing the domain into intervals of monotonicity, we would have to split the intervals where the function has the shape of a hill, whereas the previous theorem could handle them whole.

Note that while for continuous functions we have uniform convergence (which is definitely something desirable), there is no hope of having it when the function has some discontinuities. To see what is happening, imagine a very simple function with discontinuity, a function f that is identically 0 for x from (−1,0) and identically 1 for x from (0,1), this pattern then follows periodically (see picture below). We will be interested in what is happening at 0. Note that we did not specify values at 0,1,−1,2,... since we now know that they will not have any influence on Fourier series anyway.

Partial sums of the corresponding Fourier series try to approximate f well, and because f is continuous and continuously differentiable on (−1,0), the Fourier series must converge to 0 there. Similarly, it must converge to 1 on (0,1). What is happening around the discontinuity at x = 0? Partial sums of the Fourier series must change very quickly from being about 0 to being about 1. As they jump from one level to the other, it seems natural that at x = 0 they will make it halfway up, so the value should be 1/2. Thus it would seem that the averaging that we discussed above is actually something to be expected.

Moreover, being continuous functions, partial sums cannot jump from 0 to 1 at once; this jump takes some room (on the x-axis). In other words, a particular partial sum (when viewed left to right) has to leave the level 0 already before x reaches 0, and for x negative but really close to 0 this partial sum is almost 1/2, while the function f is still 0. Therefore the global quality of approximation cannot be better than this difference 1/2, and global uniform convergence is impossible.

Note that we do have uniform convergence on any closed interval that does not include points of discontinuity. In particular, if we take any small a > 0, then convergence is uniform on [−1/2,−a], and therefore partial sums TN with high N can start their ascent to level 1 only after leaving this interval, that is, after −a; for an analogous reason they must already be almost 1 after a. In other words, those quick changes from level 0 to level 1 happen within a very narrow area around the origin, and this area gets smaller and smaller as N gets large. Thus partial sums have to hurry more and more at this point of discontinuity. This creates an interesting feature of Fourier series.

As those partial sums shoot up really fast, they actually overshoot and go significantly higher than 1, and only then do they settle down. We can also read this situation right-to-left: those partial sums fall down really fast and overshoot on the left as well. This behavior appears every time a Fourier series has to deal with a discontinuity. On each side of it, partial sums exhibit an oscillation that gets progressively narrower, but its height stays large. This disturbance is called the Gibbs phenomenon and you can see it in an actual example in this note.
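A rough numerical sketch of this effect, using the square wave example above (the closed form of its Fourier series is a standard computation, assumed here rather than derived in the text):

```python
import math

# Fourier series of the square wave from the example above
# (0 on (-1,0), 1 on (0,1), extended with period T = 2, so w = pi):
#   1/2 + sum over odd k of (2 / (k pi)) * sin(k pi t)
def square_partial_sum(t, N):
    return 0.5 + sum(2 / (k * math.pi) * math.sin(k * math.pi * t)
                     for k in range(1, N + 1, 2))

# track the highest value of the partial sum just to the right of the jump at t = 0
peaks = {}
for N in (9, 99, 999):
    peaks[N] = max(square_partial_sum(i / (50 * N), N) for i in range(1, 100))
# the peak stays near 1.09 (about 9 % of the jump above the level 1) for every N;
# only the location of the overshoot squeezes toward the discontinuity
```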

We conclude this part with some simple observations. First and importantly, since all functions involved are T-periodic, there is no need to keep preferring the interval [0,T ]. Any of its shifts would do, so in all the above theorems we could state the assumptions for instance like this: "Let f be a function that is Riemann integrable on some interval of the form [a,a + T )." In particular this applies to the formulas for the coefficients of Fourier series. It is sometimes useful, so it is definitely worth stating.

Fact.
Let f be a T-periodic function for some T > 0, denote ω = 2π/T. Assume that f is Riemann integrable on an interval [a,a + T ] for some real number a. Then the coefficients of its Fourier series are given by

ak = (2/T) ∫[a,a+T] f(t)·cos(kωt) dt for k ≥ 0,
bk = (2/T) ∫[a,a+T] f(t)·sin(kωt) dt for k ≥ 1.

One popular choice is to integrate over the interval from −T /2 to T /2. Then the integrating interval is symmetric about the origin. Note that the sines and cosines in the above integrals are also symmetric functions (sines are odd, cosines are even), so if we have some symmetry also for the function f, it may simplify things considerably. Using the usual rules for multiplication of symmetric functions (see Functions - Theory - Real functions - Basic properties) and basic properties of integrals (see Integrals - Theory - Introduction - Properties of Riemann integral) we immediately get the following.

Proposition.
Let f be a T-periodic function that is integrable on some interval of length T > 0. Consider its corresponding Fourier series.
• If f is an odd function, then ak = 0 for all k ≥ 0 and

bk = (4/T) ∫[0,T/2] f(t)·sin(kωt) dt.

• If f is an even function, then bk = 0 for all k ≥ 1 and

ak = (4/T) ∫[0,T/2] f(t)·cos(kωt) dt.

In short, odd functions yield sine series and even functions yield cosine series. We will make use of this later, see sine and cosine Fourier series.

Functions on intervals

We just saw that to create a Fourier series we actually only need to know the function f on some interval of the form [a,a + T ]. This gives us an opportunity to greatly generalize Fourier series: if we have a function defined on such an interval, we can easily turn it into a T-periodic function defined on the whole real line simply by "repeating the given pattern." Actually, one has to be a bit careful, since the endpoint a + T of the given interval is also the starting point of the next interval, so the value of f there should be the same as at a. Since this cannot be guaranteed for a general function defined on [a,a + T ], we simply leave this right endpoint out of the basic interval. Note that any interval of the form [a,b) can be written as [a,a + T ) with T = b − a. Actually, we could also leave out the left endpoint, or both, and it would make no difference to the resulting Fourier series; as we observed above, changing the given function at one point (and its periodic shifts) does not influence its Fourier series at all. The choice of [a,b) is traditional.

Definition.
Let f be a function that is defined on some interval of the form [a,a + T ). We define its periodic extension as the function f defined on the whole real line by the formula

f (t + kT ) = f (t) for all t from [a,a + T ) and all integers k.

Note that there is a little mix-up in that definition. The f on the right is the original f that is defined only on the given interval, while the f on the left is the new function defined on the real line. Since these two functions agree whenever they are both defined, it is customary to use one letter for both, although it may look a bit funny in this definition.
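As a quick sketch, the periodic extension can be implemented by shifting the argument back into the basic interval (the helper name periodic_extension is ours):

```python
import math

def periodic_extension(f, a, T):
    """Extend f, given on [a, a + T), to the whole real line by repeating the
    pattern: shift t back into the basic interval by the right multiple of T."""
    def g(t):
        return f(t - T * math.floor((t - a) / T))
    return g

# example: f(t) = t on [0, 2), repeated with period 2
g = periodic_extension(lambda t: t, 0.0, 2.0)
# g(0.5) = 0.5, and by periodicity g(2.5) = 0.5 and g(-1.5) = 0.5 as well
```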

Now we can create a Fourier series also for functions that are given only on some finite interval.

Definition.
Let f be a function that is defined on some interval of the form [a,a + T ). Assume that it is Riemann integrable there. We define its Fourier series as the Fourier series of its periodic extension.

Note that we formally get a Fourier series defined on the whole real line, but since f was originally defined only on some interval I, we usually care only about what the series does there. However, as we will see below, in order to see this we do have to look also a bit around, at the periodic extension.

Example: We derive the Fourier series for the function

This function is continuous, hence integrable. When we extend it periodically, we get a 2-periodic function (since the length of the basic interval is 2). Thus the frequency is ω = π. We evaluate the appropriate integrals using integration by parts where necessary.

We are ready to form the appropriate trigonometric series. In order to make it look better we recall that cos(2kπ) = 1 and the very useful fact that cos(kπ) = (−1)^k. When this is put into the formula for ak, its numerator becomes either 0 (if k is even) or 2 (if k is odd). This offers an opportunity to further simplify the resulting trigonometric series by introducing an index n and using k = 2n + 1, which is a well-known trick for getting all positive odd integers. It is not necessary to do this, but it is nice to give your answer in a form that clearly shows what is happening, so that the reader does not have to decipher it for himself.

Now what does this series have to do with the given function f? We actually assigned this Fourier series to the periodic extension of the given f, so we should start by visualizing it (see the first graph in the picture below). Can we use the Jordan theorem to determine the convergence of our series? By looking at the picture we see that the periodic extension consists of segments where f is a straight line, so it is definitely continuous and differentiable on these pieces, with proper one-sided limits (of f and f ′) at the endpoints. After all, the definition involves only formulas that are continuous and continuously differentiable with limits at endpoints, which is something that cannot be spoiled by passing to the periodic extension.

Anyway, the Theorem above applies and thus the series behaves as follows. The function f is continuous at all points of the interval (0,2) and its shifts by 2, so the series will converge to f there. Thus it remains to investigate what happens at the point 2 and its shifts by 2. We see that the limit of f at 2 from the left is 1; on the other hand, the limit at 2 from the right is (by periodicity) the same as the limit of f at 0 from the right, that is, it is 0. The average is 1/2. Thus we obtain the sum of the series as shown in the second graph in this picture.

This is usually considered a sufficient answer to the following question: "What is the sum of the resulting Fourier series?" To see more details on convergence of Fourier series in this example, see this note. In particular you will see animation of partial sums and the Gibbs phenomenon.
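Since the original f of this example is not reproduced here, we can at least illustrate the averaging with a stand-in function chosen to have the same one-sided limits as described above; everything about f(t) = t/2 below is our assumption, not the text's example:

```python
import math

# Stand-in function (our assumption, not the text's example): f(t) = t/2 on [0, 2),
# which has the one-sided limits described above (0 at 0 from the right, 1 at 2
# from the left).  With T = 2 (w = pi) a short integration by parts gives
#   a0 = 1,  ak = 0,  bk = -1/(k*pi)  for k >= 1,
# so the Fourier series is  1/2 - sum over k >= 1 of sin(k*pi*t)/(k*pi).
def S(t, N=20_000):
    return 0.5 - sum(math.sin(k * math.pi * t) / (k * math.pi)
                     for k in range(1, N + 1))

inside = S(0.5)    # interior point: close to the function value f(1/2) = 1/4
at_jump = S(0.0)   # at the jump (the point 2 and its shifts): the average (0+1)/2 = 1/2
```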

One reason why the picture is usually considered sufficient is that writing it down using formulas would be ugly and much less transparent. Just to show you, we will make an exception and write the above result "properly", but only for the interval [0,2). After all, the function was originally given only there, so it is natural to focus on this set.

What are we to make of such a result? The basic idea is that we express the given function as a sum of various oscillations, a typical example might be a sound signal that is expressed as a combination of basic harmonic sounds. If a certain coefficient ak or bk is markedly larger than the others, then it means that this particular frequency is very prominent in the given signal. This can be used for "frequency analysis", but we talk more about it in the section on Applications.

Sine and cosine Fourier series

In general, when we express the given function as Fourier series, it features both cosines and sines. Sometimes it would be useful if we could only use functions of our choice - either sines only or cosines only. In many cases there is a way to achieve this.

Recall that we proved earlier that odd functions have Fourier series without cosines and even functions have them without sines. This points to the way to do what we want. Of course, if we are given a function on the whole real line, then we have to rely on the symmetry it already has. However, if we are given a function just on an interval I of length L, then we can sometimes fix it so that its periodic extension becomes even or odd.

Consider a function defined on an interval of the form [a,a + L ). We will want to extend it periodically, so we may assume that this basic interval is positioned in such a way that it contains the origin. There are two cases. If the origin is inside this basic interval, then this interval (and therefore the basic shape of f ) extends to both sides of the origin, and thus the symmetry of f is already determined. The really interesting case is when 0 coincides with the left endpoint, that is, when the function is defined on an interval [0, L ). Then we can do the following trick. We extend the definition also to the interval [−L ,0) and obtain a new basic shape that has length T = 2L. Then we extend this new basic piece to a T-periodic function and find its Fourier series.

The real trick lies in how we do the extension to [−L ,0). We can simply flip the given f around the y-axis, that is, the new function is given by f (−x) for x from [−L ,0). Then the new basic shape and the resulting periodic extension are even and we get a cosine series. We talk about the even periodic extension of f. We can also flip the given shape around both axes so that the new function is symmetric about the origin, that is, the new function is given by − f (−x) for x from [−L ,0). Then the new basic shape and the resulting periodic extension are odd and we get a sine series. We talk about the odd periodic extension of f. See the example below; it should make this clear.
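A small sketch of the two extensions (helper names are ours): reduce x modulo the doubled period 2L and flip where needed.

```python
def even_extension(f, L):
    """Even 2L-periodic extension of f given on [0, L): flip around the y-axis."""
    def g(x):
        y = x % (2 * L)                       # reduce to [0, 2L)
        return f(y) if y < L else f(2 * L - y)
    return g

def odd_extension(f, L):
    """Odd 2L-periodic extension of f given on [0, L): flip around both axes."""
    def g(x):
        y = x % (2 * L)
        return f(y) if y < L else -f(2 * L - y)
    return g

f = lambda x: x                               # a sample shape on [0, 2)
ge, go = even_extension(f, 2), odd_extension(f, 2)
# ge(-0.5) = ge(0.5) = 0.5 (even), go(-0.5) = -go(0.5) = -0.5 (odd),
# and both repeat with period T = 2L = 4, e.g. go(3.5) = go(-0.5) = -0.5
```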

Definition.
Let f be a function defined and integrable on an interval [0,L ).
We define the sine Fourier series of f as the Fourier series of its odd periodic extension.
We define the cosine Fourier series of f as the Fourier series of its even periodic extension.

When we double the period, we also get a different frequency, ω = 2π/T = π/L. For the odd, respectively even extension we then apply the Proposition above about symmetric functions and we get the formulas for the sine, respectively cosine series.

Surprisingly, apart from the different frequency the above formulas ended up exactly the same as if we were doing the usual Fourier series, just instead of T we use L. This is rather convenient.

Fact.
Let f be a function defined and integrable on an interval [0,L ).
The sine Fourier series of f is the trigonometric series with ω = π/L, with ak = 0 and coefficients

bk = (2/L) ∫[0,L] f(t)·sin(kωt) dt.

The cosine Fourier series of f is the trigonometric series with ω = π/L, with bk = 0 and coefficients

ak = (2/L) ∫[0,L] f(t)·cos(kωt) dt for k ≥ 0.
If f is piecewise continuous with piecewise continuous derivative on [0,L ), then its sine Fourier series converges to the odd periodic extension of f modified at discontinuities using averages.
Its cosine Fourier series converges to the even periodic extension of f modified at discontinuities using averages.
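A minimal numerical check of the sine series, with a function of our own choosing (f = 1 on [0,1), so L = 1); the series should converge to the odd periodic extension, which is a square wave:

```python
import math

# Our own test function: f(t) = 1 on [0, 1), so L = 1 and w = pi.
# By hand, bk = 2 * integral of sin(k*pi*t) over [0, 1] = 2 * (1 - (-1)**k)/(k*pi),
# i.e. 4/(k*pi) for odd k and 0 for even k.
def sine_series(t, N=10_001):
    return sum(4 / (k * math.pi) * math.sin(k * math.pi * t)
               for k in range(1, N + 1, 2))

# the sine series converges to the odd periodic extension of f, a square wave
# that equals 1 on (0, 1) and -1 on (-1, 0)
inside = sine_series(0.5)      # close to 1
mirror = sine_series(-0.5)     # close to -1
```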

Example: We return to the above example and find its sine and cosine Fourier series. We have L = 2, therefore for these two series we have ω = π/2. Using the above formulas, we first derive the sine Fourier series.

Now we derive the cosine Fourier series.

If we want to know to what function do these two Fourier series converge, we have to first make the odd and even periodic extensions of f and then apply the averaging trick to points of discontinuity. In the picture we emphasize the basic doubled period.

Note that the even periodic extension is actually a continuous function, therefore the cosine Fourier series converges uniformly to it. When we naturally focus on the interval [0,2) where the function was originally given, we see that the three series (Fourier, sine Fourier and cosine Fourier) differ in what they do at endpoints of the given interval. We again refer to this note for some pictures of partial sums.

