We start by looking at a few basic properties of limits. Then we look at theorems about operations, which lead directly to the limit algebra, our main tool for evaluating limits. Then we investigate the interplay between convergence, monotonicity, and boundedness; we also look at limits and subsequences. At the end we briefly introduce the notion of a Cauchy sequence.
The following statements should be clear if you understand what limit means.
Fact.
A sequence {an} converges to L if and only if {an − L} converges to 0.

Fact.
If a sequence {an} goes to L, then {|an|} goes to |L|.

Fact.
A sequence {an} goes to 0 if and only if {|an|} goes to 0.

Fact.
Assume that the sequences {an} and {bn} converge. The limit of {an} is equal to the limit of {bn} if and only if {an − bn} goes to 0.
Note that the last statement is no longer true if we drop the assumption about convergence.
Fact.
If a sequence {an} has a non-zero limit, then there exist a natural number N and a constant m > 0 such that |an| > m for all n > N.
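As a quick numerical sanity check (not a proof), here is a sketch in Python with a made-up sequence an = 3 + (−1)^n/n, whose limit is the non-zero number L = 3; the classic choice m = |L|/2 from the standard proof works here with N = 1:

```python
# Sketch: a_n = 3 + (-1)^n / n converges to L = 3, so its terms must
# eventually stay above some positive bound m in absolute value.
def a(n):
    return 3 + (-1) ** n / n

L = 3
m = abs(L) / 2          # the usual choice from the standard proof
N = 1                   # for this particular sequence every term already works
assert all(abs(a(n)) > m for n in range(N, 10_000))
print("every sampled term beyond N satisfies |a_n| > m =", m)
```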
When evaluating a limit, the starting point is elementary limits that we are supposed to remember; these are the basic building blocks. More complicated expressions are created by combining such elementary expressions, so we also need to know how to put them together in a limit.
Theorem (limits and algebraic operations).
Assume that the sequence {an} has limit A and the sequence {bn} has limit B. Then the following is true:
(i) For any real number c, the sequence {c·an} has limit c·A if it makes sense.
(ii) The sequence {an + bn} has limit A + B if it makes sense.
(iii) The sequence {an − bn} has limit A − B if it makes sense.
(iv) The sequence {an·bn} has limit A·B if it makes sense.
(v) The sequence {an/bn} has limit A/B if it makes sense.
(vi) The sequence {an^bn} has limit A^B if it makes sense.
Now what is it about making sense? If A and B are real numbers, that is, if the two sequences are convergent, then the operations (i) through (iv) always make sense. However, the ratio in (v) makes sense only if B ≠ 0, and the power in (vi) makes sense only if A^B is defined as a real number.
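The rules above can be checked numerically; here is a small sketch (the sample sequences and tolerances are made up for illustration):

```python
# If a_n -> A = 2 and b_n -> B = 3, the theorem predicts the limits of
# the sum, product and ratio; far along the sequences the terms should
# sit close to those predictions.
def a(n): return 2 + 1 / n        # A = 2
def b(n): return 3 - 1 / n**2     # B = 3

n, A, B = 10_000, 2, 3
assert abs((a(n) + b(n)) - (A + B)) < 1e-3   # sum
assert abs((a(n) * b(n)) - (A * B)) < 1e-3   # product
assert abs((a(n) / b(n)) - (A / B)) < 1e-3   # ratio (B != 0)
print("sum, product and ratio all land near the predicted limits")
```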
We could present a theorem now with many statements, but it is much easier to start from the other end. Note that in the theorem above we did not assume that A and B are finite, and some operations can be defined also for cases featuring infinity. If we use these operations in the above theorem and deem that they "make sense", then all the results we obtain in this way are correct. What operations can we introduce?
For instance, what do we get if we add or multiply two immensely huge numbers? Another immensely huge number. We just argued that ∞ + ∞ = ∞ and ∞·∞ = ∞.
One can also ask what happens when infinities are mixed with ordinary numbers. For instance, when we subtract 13 from a really huge number, we are still left with a huge number (a millionaire who loses a dollar is still basically a millionaire). Indeed, for any real number L we have L + ∞ = ∞ and ∞ − L = ∞.
Now, what do we get if we subtract an immensely huge number from another immensely huge number? Well, that depends. They may be equal and we get zero. Or one might be larger, and then the outcome depends on which one and by how much. This shows that the difference ∞ − ∞ does not make sense; it is an indeterminate expression.
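To see this numerically, here is a small sketch with made-up sample sequences: three pairs of sequences, each going to infinity, whose differences behave completely differently.

```python
# "Infinity minus infinity" cannot be assigned a single value: the
# difference of two sequences tending to infinity can do anything.
n = 10_000
print((n + 7) - n)    # a_n = n + 7, b_n = n: difference is always 7
print(n**2 - n)       # a_n = n^2,  b_n = n: difference runs off to infinity
print(n - n)          # a_n = n,    b_n = n: difference is always 0
assert (n + 7) - n == 7 and n - n == 0 and n**2 - n > 10**6
```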
For a complete list of all operations, along with notes and more details, click here. In particular, you will find there an important remark concerning some limitations of this algebra. In other words, definitely check it out. For a brief list of the limit algebra, click here.
Note that the "making sense" for working with limits is somewhat different from
making sense for numbers. The reason is that now the numbers A,B
do not represent real numbers, that is, fixed quantities, but outcomes of
limits, in other words, they represent processes, "almost numbers". This has
the effect that some operations, although they can be performed with real
numbers, do not work with limits. The best example is the power 0^0. We know that as a number it makes sense: it gives 1. However, if these zeros represent limits of sequences, then we are looking for the limit of the general power an^bn with both an → 0 and bn → 0, and such a limit can come out as anything (or fail to exist). That is why 0^0 is indeterminate.
Indeterminate expressions are as important as those of the limit algebra. When calculating a limit, one should know what works and also what does not. For a complete list of indeterminate expressions, along with notes and more details, click here. For a brief list, click here.
Finally, here you will find some remarks about operations with sequences, some of which do not have a limit. This is of less interest and we include it only for the sake of completeness, or to satisfy the more curious reader.
In practice we thus follow a very simple rule.
If we want to find the limit of a sequence given by some expression, we "substitute" infinity into it, and if the operations involved make sense, then the outcome is the correct answer to our limit.
However, note that this is quite informal and some profs are allergic to it. Therefore it is safer to do all "infinity calculations" on the side. In our calculations we put them, along with other remarks, between big double angled braces ⟪ and ⟫ to indicate that they are not parts of the "official" solution. Here is a very simple example, written in a long way with all the steps; usually you would do it faster.
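A made-up example in this spirit (not necessarily the one referred to above), with the unofficial side computation between the double angled braces:

```latex
\lim_{n\to\infty}\left( 2 + \frac{1}{n} \right)
= \left\langle\!\left\langle\; 2 + \frac{1}{\infty} = 2 + 0 \;\right\rangle\!\right\rangle
= 2.
```

The part between ⟪ and ⟫ is the informal "infinity calculation"; only the outer equality is the official answer.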
We can rarely get an answer that easily, in particular because often the "little pieces" are a bit more complicated and we do not know their limits right away, we need to work them out before we can try to put them together. Then the above theorem as stated is a bit awkward; it is more convenient to express it in this form:
Important note: each of these equalities is true only if the expression on
the right makes sense. They are therefore "conditional": Until we know that
the final answer in our calculations makes sense, all the equalities in it
need not be true. To put it another way, it is no good splitting an
expression into parts if what we get at the end makes no sense. For
example, the constant sequence 1 has limit 1 at infinity. However, if we write it as n/n and split its limit into a ratio of two limits, we end up with ∞/∞, which makes no sense.
Now we will show a simple example:
Example: Find
We see that n is always inside some simple term whose limit at
infinity we already know (see elementary limits). Namely, constants 13 and 5
converge to themselves. Further:
(i) We know that 1 divided by the root of n, which is in fact the power n^(−1/2), converges to 0.
(ii) By the Squeeze theorem (see Limit and comparison in Theory - Limits) we know that
(iii) We know that
(iv) In order to see the second term in the denominator, we first rewrite
it to get a positive power:
It is also possible to write
We now use the above theorem to put these basic facts together and find the
limit of the given sequence. By the theorem, the numerator converges to
We will now show how to write this procedure using the limit notation. Here we will write all the steps to show how we decompose the given expression step by step, usually one would write it much shorter.
This solution was correct, but rather long. Applying the limit algebra and doing some unofficial calculations on the side (between double angled braces) we can do it much faster:
And that is my favourite way of handling this problem - correct and short.
For some tips and insight into applying this theorem, see Methods Survey - Limit.
We still have not covered one important operation: composition.
Theorem.
Let {an} be a sequence with limit A, and assume that an ≠ A for all n. Let f be a function which has limit B as x → A. Then the sequence {f(an)} has limit B.
Here A and/or B may be also infinite, as long as the statements involved make sense. The most typical case is when A is a number and f is continuous at A, which for all practical purposes means that f is given by some formula which does not mind having A substituted into it. In this case, the theorem can be expressed like this:
Application is easy. If we are looking for the limit of a sequence which has the form of "some expression inside a nice function", then we can ignore the function, find the limit of the expression inside, and then put this limit into the function. A simple example is here.
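A numerical sketch of this substitution rule, with the made-up choices an = 1/n and f = exp (which is continuous everywhere):

```python
import math

# a_n = 1/n -> 0, and exp is continuous at 0, so the theorem says that
# exp(1/n) -> exp(0) = 1: ignore the function, find the inner limit,
# then substitute it into the function.
def a(n):
    return 1 / n

n = 10_000
assert abs(math.exp(a(n)) - math.exp(0)) < 1e-3
print("exp(1/n) approaches exp(0) = 1")
```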
Note that this in some sense fits with the above example about using the limit algebra. There we ignored operations for a moment and just focused on simple terms, the basic building blocks from which the given expression is built. We realized that we knew their limits, then we put them together to get the final answer. This theorem tells us that we can also ignore functions at first, evaluate the simple terms, and then not only compose the partial answers using the limit algebra, but also substitute them into functions; if it makes sense, we get a correct answer. If you look, for instance, at this example, it should be clearer.
Thus the practical rule - substitute and see - applies also to expressions with composition, it is a general rule for limits that we use as our first approach. Of course, many, perhaps most limits cannot be solved in this way. Then we have to use tricks that change the given sequence into another one that can be solved the basic way - by putting together simple results using the limit algebra.
One more very important rule: Unless you know what you are doing, always finish all parts. In particular, if you split a limit of a product into a product of smaller limits and one of them comes up as zero, you cannot stop calculations and claim that the whole thing is zero. Granted, zero times a number is again zero, but that only works in the usual algebra. In the limit algebra we can also have "zero times infinity", which is an indeterminate product that can be anything. As an example we try a different decomposition of 1:
Obviously it would be a mistake to stop once we saw that the first limit was zero, but after completing the other part we see the indeterminate product and know that it was not a good idea to split the original limit into two. For more details, see this note.
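The warning can be seen numerically; a small sketch with the decomposition 1 = (1/n)·n (made up for illustration):

```python
# The constant sequence 1 written as (1/n) * n: each factor has a limit
# (0 and infinity respectively), but combining those limits gives the
# indeterminate expression "zero times infinity" -- while the true limit is 1.
def first(n):  return 1 / n      # -> 0
def second(n): return n          # -> infinity

for n in (10, 1000, 100_000):
    assert abs(first(n) * second(n) - 1) < 1e-9   # the product is (essentially) 1
print("stopping after 'the first limit is 0' would wrongly predict 0")
```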
We saw that a convergent sequence can approach its limit in strange ways, so convergent sequences definitely cannot be expected to be monotone in general. Boundedness is not so hopeless:
Theorem.
Every convergent sequence is bounded.
Can we conversely get some information about convergence from these two basic properties? The contrapositive of the above statement says that an unbounded sequence must be divergent; that is one piece of information. Can we get something positive, too? No. The example of the alternating sequence shows that a bounded sequence need not be convergent; it need not have a limit at all (not even an improper one). However, if we are willing to lose some terms of the given sequence, we do get something out of boundedness (cf. Bolzano–Weierstrass theorem in Functions - Theory - Real numbers - Topological notions):
Theorem (Bolzano-Weierstrass theorem).
Every bounded sequence has a convergent subsequence.
Another useful property is monotonicity. If you try to imagine all kinds of increasing sequences, you should start having the (correct) feeling that such sequences either grow towards some upper bound which is then their limit and they converge, or they grow above all possible bounds and they therefore tend to infinity; in any case, they have a limit. Indeed, this is true, and moreover boundedness provides a nice way to avoid that infinity.
Theorem.
Every monotone sequence has a limit.
Every bounded monotone sequence is convergent.
In more detail, every non-decreasing sequence (in particular every increasing sequence) either converges or goes to infinity, and every non-increasing sequence (in particular every decreasing sequence) either converges or goes to minus infinity.
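As an illustration, here is the classic increasing and bounded sequence an = (1 + 1/n)^n (whose limit is the number e ≈ 2.718), checked numerically in a small sketch:

```python
# a_n = (1 + 1/n)^n is increasing and bounded above (by 3, say), so by
# the theorem it must converge; its limit is the number e.
def a(n):
    return (1 + 1 / n) ** n

terms = [a(n) for n in range(1, 2001)]
assert all(x < y for x, y in zip(terms, terms[1:]))   # monotone increasing
assert all(x < 3 for x in terms)                      # bounded above
print("a_2000 =", terms[-1], "(heading towards e = 2.71828...)")
```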
We start with one theoretical result.
Theorem.
If a sequence is convergent, then all its subsequences are also convergent and converge to the limit of the original sequence.
This is not exactly useful when investigating convergence, but the following weaker statement, in sort of the opposite direction, is often useful.
Fact.
If a given sequence has two subsequences that converge to different limits, then the given sequence diverges.
For instance, the alternating sequence {(−1)^n} has one subsequence converging to 1 (the even-indexed terms) and another converging to −1 (the odd-indexed terms), so it diverges.
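This instance of the Fact can be sketched in a few lines:

```python
# The alternating sequence a_n = (-1)^n has two constant subsequences:
# even indices give 1, odd indices give -1. Two subsequences with
# different limits, so the sequence itself has no limit.
def a(n): return (-1) ** n

evens = [a(n) for n in range(0, 20, 2)]   # subsequence a_0, a_2, a_4, ...
odds  = [a(n) for n in range(1, 20, 2)]   # subsequence a_1, a_3, a_5, ...
assert set(evens) == {1} and set(odds) == {-1}
print("subsequential limits 1 and -1 => {(-1)^n} diverges")
```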
One useful observation one can make about a sequence is that as it goes along, it changes less and less. To define this formally we again use the idea of a game: somebody gives us a tolerance and we want to be able to throw away some beginning of the given sequence so that its remaining terms never differ by more than this tolerance.
Definition.
Consider a sequence {an}. We say that this is a Cauchy sequence, or that this sequence is Cauchy, if for every ε > 0 there is some natural number N so that for all m, n ≥ N we have |an − am| < ε.
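The definition can be probed numerically; the following sketch only samples finitely many terms past a cutoff N, so it is a heuristic check (names and tolerances made up), not a proof:

```python
# Heuristic Cauchy check: past the cutoff N, sampled terms should all
# lie within eps of each other. Illustrated with a_n = 1/n.
def a(n): return 1 / n

def looks_cauchy(seq, eps, N, samples=1000):
    terms = [seq(n) for n in range(N, N + samples)]
    return max(terms) - min(terms) < eps

assert looks_cauchy(a, eps=0.01, N=200)       # past N=200 terms differ by < 0.01
assert not looks_cauchy(a, eps=0.001, N=5)    # but N=5 is too early for eps=0.001
print("1/n passes the sampled Cauchy test")
```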
If a sequence converges, then it settles down to some value and does not change much. This seems clear and in fact it is simple to prove. Less easy to prove (for instance using the Bolzano-Weierstrass theorem above) is the fact that if a sequence settles down, then it should converge (which again sounds like common sense). Thus we get the following theorem.
Theorem.
A sequence of real numbers is convergent if and only if it is Cauchy.
Note that this theorem only gives convergence, not the actual value of the limit, suggesting that this result is more theoretical than practical. This is true; still, in theory it is indispensable and extremely useful in many situations.
We mentioned above that one implication is less trivial. While the fact that convergent sequences are Cauchy holds for sequences of elements of fairly general spaces, the fact that Cauchy sequences converge is not automatic. For instance, when we work in the world of rational numbers, this is no longer true (see completeness in Extra - Sets and mappings - Important sets of numbers). The statement that Cauchy sequences of real numbers converge is sometimes called the Bolzano–Cauchy theorem.