Unit 10: Infinite Sequences and Series

Students will explore convergence and divergence behaviors of infinite series and learn how to represent familiar functions as infinite series. They will also learn how to determine the largest possible error associated with certain approximations involving series.

Sequences

Definition of a Sequence and Applying Limits to Understand Convergence

  • A sequence is an ordered list of numbers, typically written as \( \{a_n\} \), where each term is defined by a formula or rule. To determine whether a sequence converges, evaluate \( \lim_{n \to \infty} a_n \) using algebraic or calculus-based methods. If the limit exists and is finite, the sequence converges to that value; if it does not exist or is infinite, the sequence diverges.
  • Step-by-step:
    (1) Identify the general term \( a_n \).
    (2) Simplify the expression for large \( n \) using factoring or dividing by the highest power of \( n \).
    (3) Apply limit laws or L’Hôpital’s Rule if necessary. Example: \( a_n = \frac{3n^2 + 4}{2n^2 - 1} \) → divide by \( n^2 \) to get \( \frac{3 + 4/n^2}{2 - 1/n^2} \) → limit is \( \frac{3}{2} \), so it converges.
  • Connects to series: Since the convergence of a series \(\sum a_n\) requires \( a_n \to 0 \), understanding how to find limits of sequences is the first building block in series analysis. Incorrectly computing these limits can lead to false conclusions about a series’ behavior.
  • Example: \( a_n = \frac{\sin n}{n} \) → since \( |\sin n| \le 1 \), we have \( -\frac{1}{n} \le a_n \le \frac{1}{n} \); both bounds tend to 0, so by the Squeeze Theorem \( a_n \to 0 \) and the sequence converges to 0. This technique is essential for sequences with bounded oscillation divided by a growing denominator.
  • Common mistake: assuming a sequence converges just because the numerator grows slower than the denominator without verifying the limit formally. Always provide a clear step-by-step limit calculation or inequality proof to avoid this.
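  • As a quick numerical sanity check (a sketch, not a substitute for the algebraic limit work), the short Python snippet below tabulates \( a_n = \frac{3n^2+4}{2n^2-1} \) for large \( n \) and shows the values settling toward \( \frac{3}{2} \).

```python
# Numerical check that a_n = (3n^2 + 4)/(2n^2 - 1) approaches 3/2.
# This only illustrates the limit; dividing by n^2 is the actual justification.

def a(n: int) -> float:
    return (3 * n**2 + 4) / (2 * n**2 - 1)

for n in [10, 100, 1000, 10_000]:
    print(f"n = {n:>6}:  a_n = {a(n):.8f}   |a_n - 3/2| = {abs(a(n) - 1.5):.2e}")
```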

Monotonic Sequences and the Monotone Convergence Theorem

  • A sequence is monotonic if it is either always increasing (\( a_{n+1} \ge a_n \)) or always decreasing (\( a_{n+1} \le a_n \)). The Monotone Convergence Theorem states that if a sequence is monotonic and bounded, it must converge. This theorem is useful for proving convergence when direct limit evaluation is difficult.
  • Step-by-step:
    (1) Prove monotonicity by checking \( a_{n+1} - a_n \) or \( \frac{a_{n+1}}{a_n} \).
    (2) Establish a bound by comparing to a constant.
    (3) Conclude convergence from the theorem. Example: \( a_n = 1 - \frac{1}{n} \) → \( a_{n+1} - a_n = \frac{1}{n} - \frac{1}{n+1} > 0 \) so increasing, and clearly \( a_n < 1 \), so it converges.
  • Connection to series: Monotonic sequences appear in the Alternating Series Test’s remainder estimation and in the behavior of partial sums for certain series. Recognizing monotonicity can give insight into the stability of sums.
  • Example: \( a_n = \sqrt{2 + a_{n-1}} \) with \( a_1 = 1 \). Prove \( a_n \) is increasing by induction (if \( a_{n-1} < a_n \), then \( \sqrt{2 + a_{n-1}} < \sqrt{2 + a_n} \)), then show \( a_n \le 2 \) as an upper bound, also by induction. The limit \( L \) must satisfy \( L = \sqrt{2 + L} \) → \( L = 2 \).
  • Common mistake: assuming a sequence is monotonic by looking at the first few terms without proof. Always verify the property algebraically or via induction.
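  • A minimal Python sketch of this example (illustration only): printing the first few terms of \( a_1 = 1,\; a_n = \sqrt{2 + a_{n-1}} \) shows an increasing sequence that never exceeds 2, consistent with the Monotone Convergence Theorem.

```python
# Terms of a_1 = 1, a_n = sqrt(2 + a_{n-1}): each step is positive (increasing)
# and every term stays below the bound 2, so the limit L = 2 is plausible.
import math

a = 1.0
for n in range(1, 11):
    nxt = math.sqrt(2 + a)
    print(f"a_{n} = {a:.6f}   next - current = {nxt - a:.6f}")
    a = nxt
```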

Recursive Sequences and Fixed Points

  • A recursive sequence defines each term from previous terms, such as \( a_{n+1} = f(a_n) \). Convergence often involves finding a fixed point \( L \) where \( L = f(L) \). Stability depends on whether values get closer to \( L \) as \( n \) increases.
  • Step-by-step: (1) Identify the recursion rule. (2) Prove monotonicity and boundedness if possible. (3) Solve \( L = f(L) \) to find potential limits. (4) Verify that the sequence actually approaches \( L \) by testing a few terms and using inequalities. Example: \( a_{n+1} = \frac{1}{2}(a_n + \frac{5}{a_n}) \) converges to \( \sqrt{5} \) (Babylonian method).
  • Connection to calculus: Many numerical methods (like Newton’s Method) generate recursive sequences, so understanding fixed points ties into later BC concepts such as root finding and error estimation.
  • Example: \( a_1 = 2, \; a_{n+1} = \frac{a_n + 3}{a_n + 1} \). The terms are bounded and close in on the fixed point from both sides (the odd- and even-indexed subsequences are each monotonic), so solve \( L = \frac{L + 3}{L + 1} \) → \( L^2 + L = L + 3 \) → \( L^2 = 3 \) → \( L = \sqrt{3} \).
  • Common mistake: assuming the fixed point is the limit without proving stability. A fixed point can be unstable, causing divergence if the starting value is far enough away.
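  • The fixed-point idea can be seen numerically with a short Python sketch of the Babylonian recursion from the step-by-step bullet (the starting value 2.0 is an arbitrary choice for illustration).

```python
# Fixed-point iteration a_{n+1} = (a_n + 5/a_n) / 2; the iterates approach sqrt(5).
import math

a = 2.0                      # illustrative starting guess; any positive start works here
for n in range(1, 7):
    a = 0.5 * (a + 5 / a)
    print(f"a_{n + 1} = {a:.10f}   |a - sqrt(5)| = {abs(a - math.sqrt(5)):.2e}")
```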

Recognizing Divergence Patterns in Sequences

  • Some sequences diverge by growing without bound (\( a_n \to \infty \)), oscillating indefinitely, or failing to approach a single value. Recognizing these quickly can save time on exams. If \( \lim_{n\to\infty} a_n \) does not exist or is infinite, the sequence diverges.
  • Step-by-step: (1) Check for polynomial or exponential growth. (2) If oscillatory, examine whether magnitude decreases (e.g., \( \frac{(-1)^n}{n} \) converges to 0). (3) Look for repeating patterns in trigonometric sequences. Example: \( a_n = (-1)^n \) oscillates between -1 and 1 → diverges.
  • Connection to series: If a sequence’s terms don’t approach 0, its series will fail the n-th term test for convergence. Divergence recognition is thus the first checkpoint before applying any series test.
  • Example: \( a_n = n\sin\left(\frac{\pi n}{2}\right) \) alternates between large positive, large negative, and zero values → diverges by unbounded oscillation.
  • Common mistake: thinking oscillation always means divergence. If oscillation occurs but amplitude decreases to zero, the sequence can converge (e.g., \( \frac{\cos n}{n^2} \)).

Common Sequence Limit Techniques

  • Divide by the highest power of \( n \) for rational expressions. Use L’Hôpital’s Rule for indeterminate forms by treating \( a_n \) as a function \( f(x) \) of a real variable. Example: \( \lim_{n\to\infty} \frac{\ln n}{n} = 0 \) since \( n \) grows faster than \( \ln n \).
  • Apply the Squeeze Theorem when the sequence is bounded above and below by sequences with the same limit. Example: \( -\frac{1}{n} \le \frac{\sin n}{n} \le \frac{1}{n} \) → both bounds → 0.
  • For exponential terms, compare growth rates: exponentials outpace polynomials, which outpace logarithms. This hierarchy is key for both sequences and series tests like the Ratio Test.
  • For factorial terms, use Stirling’s approximation or compare \( n! \) to \( n^n \) growth rates. Example: \( \frac{n^2}{n!} \to 0 \) because factorial growth dominates.
  • Common mistake: applying continuous methods without justification. Always confirm that \( f(x) \) is continuous for \( x\ge N \) before replacing \( n \) with \( x \) to compute limits.
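  • The growth-rate hierarchy is easy to see numerically; the Python sketch below (illustration only) prints ratios that all shrink toward 0 as \( n \) grows.

```python
# Logarithms grow slower than polynomials, which grow slower than exponentials,
# which grow slower than factorials: each ratio below tends to 0.
import math

for n in [5, 10, 20, 50, 100]:
    print(f"n = {n:>3}:  ln(n)/n = {math.log(n) / n:.4f}   "
          f"n^2/2^n = {n**2 / 2**n:.3e}   n^2/n! = {n**2 / math.factorial(n):.3e}")
```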

Series Basics

Definition of a Series and Partial Sums

  • An infinite series is the sum of terms from a sequence: \( \sum_{n=1}^{\infty} a_n \). The key idea is that we don’t add infinitely many numbers all at once; instead, we form partial sums \( S_N = \sum_{n=1}^N a_n \) and see if \( S_N \) approaches a finite limit as \( N \to \infty \).
  • Step-by-step:
    (1) Identify the term \( a_n \).
    (2) Write out the first few partial sums.
    (3) If possible, find a formula for \( S_N \).
    (4) Take \( \lim_{N \to \infty} S_N \). If the limit exists and is finite, the series converges; otherwise, it diverges.
  • Connection to sequences: The sequence \(\{S_N\}\) is the sequence of partial sums. A series converges exactly when this sequence converges, so all series problems boil down to sequence limit problems.
  • Example: \( a_n = \frac{1}{n(n+1)} \) → partial sums telescope to \( S_N = 1 - \frac{1}{N+1} \), and \(\lim_{N\to\infty} S_N = 1\), so the series converges.
  • Common mistake: confusing the limit of \( a_n \) with the limit of \( S_N \). Even if \( a_n \to 0 \), the series can still diverge (harmonic series). Always analyze \( S_N \) for convergence.
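  • The distinction between \( a_n \to 0 \) and convergence of \( S_N \) shows up clearly in a quick Python sketch (illustration only): the partial sums of \( \sum \frac{1}{n} \) keep growing, while those of \( \sum \frac{1}{n^2} \) level off near \( \frac{\pi^2}{6} \).

```python
# Partial sums S_N: the harmonic series drifts upward without bound (slowly),
# while sum 1/n^2 settles near pi^2/6 even though both have a_n -> 0.
import math

for N in [10, 100, 1000, 10_000]:
    harmonic = sum(1 / n for n in range(1, N + 1))
    p_two = sum(1 / n**2 for n in range(1, N + 1))
    print(f"N = {N:>6}:  S_N(1/n) = {harmonic:8.4f}   S_N(1/n^2) = {p_two:.6f}"
          f"   pi^2/6 = {math.pi**2 / 6:.6f}")
```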

Applying Limits to Understand Convergence of Infinite Series

  • The first test to apply to any series is the n-th term test for divergence: if \( \lim_{n \to \infty} a_n \neq 0 \), then \( \sum a_n \) diverges. This is a necessary condition for convergence but not sufficient.
  • Step-by-step:
    (1) Compute \( \lim_{n \to \infty} a_n \).
    (2) If the limit is nonzero or does not exist, conclude divergence immediately.
    (3) If the limit is zero, proceed to another convergence test—never stop here.
  • Example: \( a_n = \frac{n+1}{n} \to 1 \neq 0 \) → series diverges without any further work. Example: \( a_n = \frac{1}{n} \to 0 \) but series diverges (harmonic), so another test is needed.
  • Connection to sequences: This test uses the exact same limit skills as sequence convergence, reinforcing why mastery of sequence limits is essential before moving to series.
  • Common mistake: thinking that if \(\lim a_n = 0\), the series converges. This leads to wrong answers for famous counterexamples like the harmonic series.

Types of Series: Geometric, Harmonic, and \( p \)-Series

  • A geometric series has the form \( \sum_{n=0}^{\infty} ar^n \) with ratio \( r \). It converges if \( |r| < 1 \) and diverges otherwise. Sum formula (when convergent): \( S = \frac{a}{1-r} \).
  • The harmonic series \( \sum_{n=1}^{\infty} \frac{1}{n} \) diverges despite terms going to zero. It is the \( p \)-series case with \( p = 1 \), which is the threshold between convergence and divergence.
  • A \( p \)-series is \( \sum_{n=1}^{\infty} \frac{1}{n^p} \). It converges if \( p > 1 \) and diverges if \( 0 < p \le 1 \). This rule is widely used for comparison tests.
  • Example: \( \sum_{n=0}^{\infty} 3\left(\frac{1}{2}\right)^n \) is geometric with \( r = \frac{1}{2} \) → sum is \( \frac{3}{1 - 1/2} = 6 \). Example: \( \sum \frac{1}{n^2} \) converges (\( p = 2 > 1 \)).
  • Common mistake: forgetting to check \( |r| < 1 \) before applying the geometric sum formula or misidentifying \( p \) in a \( p \)-series with extra terms in the denominator.
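  • A short Python sketch (illustration only) confirms the geometric example: the partial sums of \( \sum 3\left(\tfrac{1}{2}\right)^n \) approach the closed-form value \( \frac{a}{1-r} = 6 \).

```python
# Partial sums of the geometric series 3*(1/2)^n versus the formula a/(1 - r) = 6.
a, r = 3.0, 0.5
for N in [5, 10, 20, 40]:
    S_N = sum(a * r**n for n in range(N + 1))   # terms n = 0 .. N
    print(f"N = {N:>2}:  S_N = {S_N:.10f}   formula = {a / (1 - r)}")
```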

Telescoping Series and Partial Fraction Decomposition

  • A telescoping series collapses because consecutive terms cancel out in partial sums. To reveal this structure, rewrite \( a_n \) as a difference of fractions (partial fractions) or other expressions.
  • Step-by-step:
    (1) Decompose \( a_n \) into \( b_n - b_{n+1} \).
    (2) Write out first few partial sums to observe cancellation.
    (3) Take \( \lim_{N \to \infty} S_N \) using the leftover terms.
  • Example: \( a_n = \frac{1}{n} - \frac{1}{n+1} \) → partial sum \( S_N = 1 - \frac{1}{N+1} \) → limit is 1. Another: \( \frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1} \).
  • Connection to convergence: Telescoping can show exact sums without estimating. Often used when \( a_n \) comes from a rational function that simplifies.
  • Common mistake: claiming telescoping without showing the partial sums. Always demonstrate the actual cancellation pattern before concluding.

Convergence Tests

n-th Term Test for Divergence

  • If \( \lim_{n \to \infty} a_n \neq 0 \) or the limit does not exist, the series \( \sum a_n \) diverges. This is always the first check to perform before applying other tests.
  • Step-by-step: (1) Find \( \lim_{n \to \infty} a_n \). (2) If the limit is not zero or does not exist, conclude divergence immediately. (3) If the limit is zero, continue to another test.
  • Example: \( a_n = \frac{n+2}{n} \) has limit \( 1 \), so \( \sum a_n \) diverges.
  • Example: \( a_n = \frac{1}{n} \) has limit \( 0 \), so the test is inconclusive and another method is needed.
  • Always apply this test before spending time on more complex tests to quickly identify divergence.

Direct Comparison Test (DCT)

  • Use when you can compare \( a_n \) directly to a known convergent or divergent series with positive terms. If \( 0 \le a_n \le b_n \) for all \( n \) and \( \sum b_n \) converges, then \( \sum a_n \) converges. If \( 0 \le b_n \le a_n \) for all \( n \) and \( \sum b_n \) diverges, then \( \sum a_n \) diverges.
  • Step-by-step:
    (1) Identify a simpler benchmark series \( b_n \).
    (2) Write the inequality between \( a_n \) and \( b_n \).
    (3) State the known behavior of \( \sum b_n \).
    (4) Conclude the behavior of \( \sum a_n \).
  • Example: \( a_n = \frac{1}{n^2 + n} \le \frac{1}{n^2} \) and \( \sum \frac{1}{n^2} \) converges (\( p = 2 > 1 \)), so \( \sum a_n \) converges.
  • Example: \( a_n = \frac{1}{n} \ge \frac{1}{2n} \) and \( \sum \frac{1}{2n} \) diverges (harmonic), so \( \sum a_n \) diverges.
  • This test works best when terms can be cleanly compared to a p-series or geometric series.

Limit Comparison Test (LCT)

  • Use when the Direct Comparison Test is hard to apply but the terms behave similarly to a known series. If \( \lim_{n \to \infty} \frac{a_n}{b_n} = L \) where \( L \) is a finite positive number, then \( \sum a_n \) and \( \sum b_n \) either both converge or both diverge.
  • Step-by-step:
    (1) Choose \( b_n \) based on the dominant term in \( a_n \).
    (2) Compute \( \lim_{n \to \infty} \frac{a_n}{b_n} \).
    (3) If \( L > 0 \) and finite, the two series have the same behavior.
  • Example: \( a_n = \frac{n^2 + 3}{5n^4 - 1} \) with \( b_n = \frac{1}{n^2} \) → ratio limit \( = \frac{1}{5} \), and since \( \sum \frac{1}{n^2} \) converges, \( \sum a_n \) converges.
  • Example: \( a_n = \frac{n}{n^2 + 4} \) with \( b_n = \frac{1}{n} \) → ratio limit \( = 1 \), and since \( \sum \frac{1}{n} \) diverges, \( \sum a_n \) diverges.
  • This is most useful for rational expressions and terms involving roots where direct inequality comparisons are difficult.
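  • A Python sketch (illustration only) of the convergent example above shows the ratio \( a_n/b_n \) settling at \( \frac{1}{5} \), which is what licenses the comparison.

```python
# Limit Comparison Test ratio for a_n = (n^2 + 3)/(5n^4 - 1) against b_n = 1/n^2.
def ratio(n: int) -> float:
    a_n = (n**2 + 3) / (5 * n**4 - 1)
    b_n = 1 / n**2
    return a_n / b_n

for n in [10, 100, 1000, 10_000]:
    print(f"n = {n:>6}:  a_n / b_n = {ratio(n):.6f}")   # approaches 0.2
```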

Integral Test

  • Applies when \( a_n = f(n) \) where \( f \) is positive, continuous, and decreasing for \( x \ge N \). The series \( \sum a_n \) converges if and only if \( \int_N^\infty f(x) \, dx \) converges.
  • Step-by-step: (1) Verify that \( f(x) \) meets the criteria. (2) Evaluate \( \int_N^\infty f(x) \, dx \) as an improper integral. (3) Conclude that the series converges if the integral is finite and diverges if it is infinite.
  • Example: \( a_n = \frac{1}{n^p} \) → \( \int_1^\infty x^{-p} dx \) converges if \( p > 1 \) and diverges if \( p \le 1 \).
  • Example: \( a_n = \frac{1}{n \ln n} \) → \( \int_2^\infty \frac{1}{x \ln x} dx = \ln(\ln x) \big|_2^\infty \) diverges, so the series diverges.
  • This test also provides a way to estimate the remainder when approximating a sum by a partial sum.

Alternating Series Test (Leibniz Test)

  • An alternating series has terms that switch signs, typically written as \( \sum (-1)^{n} b_n \) or \( \sum (-1)^{n+1} b_n \) with \( b_n > 0 \). The series converges if \( b_n \) is decreasing and \( \lim_{n \to \infty} b_n = 0 \).
  • Step-by-step:
    (1) Identify \( b_n \).
    (2) Verify \( b_{n+1} \le b_n \) for all large \( n \).
    (3) Check that \( b_n \to 0 \). If both hold, the series converges.
  • Example: \( \sum (-1)^{n+1} \frac{1}{n} \) satisfies decreasing \( b_n = \frac{1}{n} \) and \( b_n \to 0 \), so it converges.
  • Example: \( \sum (-1)^n \frac{n}{n+1} \) fails because \( b_n \to 1 \neq 0 \), so it diverges.
  • The Alternating Series Error Bound states that the error in approximating by the \( N \)-th partial sum is less than or equal to \( b_{N+1} \).
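  • Both the test and its error bound can be sanity-checked numerically; the Python sketch below (illustration only) compares partial sums of the alternating harmonic series to \( \ln 2 \) and to the bound \( b_{N+1} = \frac{1}{N+1} \).

```python
# Alternating harmonic series: S_N approaches ln 2, and the actual error after N terms
# stays below the Alternating Series Error Bound b_{N+1} = 1/(N+1).
import math

for N in [10, 100, 1000]:
    S_N = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
    error = abs(math.log(2) - S_N)
    print(f"N = {N:>4}:  S_N = {S_N:.6f}   |error| = {error:.2e}   bound = {1 / (N + 1):.2e}")
```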

Absolute and Conditional Convergence

  • A series \( \sum a_n \) converges absolutely if \( \sum |a_n| \) converges. Absolute convergence guarantees convergence of \( \sum a_n \).
  • If \( \sum a_n \) converges but \( \sum |a_n| \) diverges, then the series converges conditionally. This often happens with alternating series that fail absolute convergence.
  • Example: \( \sum \frac{(-1)^n}{n} \) converges conditionally because \( \sum \frac{1}{n} \) diverges.
  • Example: \( \sum \frac{(-1)^n}{n^2} \) converges absolutely because \( \sum \frac{1}{n^2} \) converges.
  • Absolute convergence is stronger and should be checked first before considering conditional convergence.

Ratio Test and Root Test

  • Ratio Test: Compute \( L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| \). If \( L < 1 \), the series converges absolutely. If \( L > 1 \) or is infinite, the series diverges. If \( L = 1 \), the test is inconclusive.
  • Root Test: Compute \( L = \lim_{n \to \infty} \sqrt[n]{|a_n|} \). Same conclusions as the Ratio Test: \( L < 1 \) → converges absolutely, \( L > 1 \) → diverges, \( L = 1 \) → inconclusive.
  • Step-by-step for Ratio Test:
    (1) Write \( \frac{a_{n+1}}{a_n} \).
    (2) Simplify completely.
    (3) Take the limit as \( n \to \infty \).
    (4) Compare to 1 for conclusion.
  • Step-by-step for Root Test:
    (1) Take the n-th root of \( |a_n| \).
    (2) Simplify expression.
    (3) Take the limit.
    (4) Compare to 1 for conclusion.
  • Example Ratio Test: \( a_n = \frac{n!}{n^n} \) → \( \left|\frac{a_{n+1}}{a_n}\right| = \left(\frac{n}{n+1}\right)^n \to \frac{1}{e} < 1 \), so converges absolutely. Example Root Test: \( a_n = \left( \frac{n}{n+1} \right)^{n^2} \) → \( \sqrt[n]{a_n} = \left(\frac{n}{n+1}\right)^n \to \frac{1}{e} < 1 \), so converges absolutely. (Note that for \( a_n = \left(\frac{n}{n+1}\right)^n \) the Root Test gives \( L = 1 \) and is inconclusive; that series in fact diverges by the n-th term test since \( a_n \to \frac{1}{e} \neq 0 \).)
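  • The common limit in both examples, \( \left(\frac{n}{n+1}\right)^n \to \frac{1}{e} \), is easy to verify numerically with the Python sketch below (illustration only).

```python
# The simplified Ratio Test ratio for n!/n^n and the simplified Root Test root for
# (n/(n+1))^(n^2) are the same expression, (n/(n+1))^n, which tends to 1/e.
import math

for n in [10, 100, 1000]:
    value = (n / (n + 1)) ** n
    print(f"n = {n:>4}:  (n/(n+1))^n = {value:.6f}   1/e = {1 / math.e:.6f}")
```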

Approximating Sums and Error Bounds

Approximating the Sum of a Convergent Series

  • To approximate the sum of a convergent series \( \sum a_n \), choose a partial sum \( S_N = \sum_{n=1}^N a_n \) that adds the first \( N \) terms. This partial sum is your approximation to the exact infinite sum. The difference between \( S_N \) and the exact sum is called the remainder \( R_N \), and we want this remainder to be as small as needed for the problem.
  • Step-by-step:
    (1) Identify that the series converges using an appropriate test.
    (2) Decide on the number of terms \( N \) you will include based on the desired accuracy.
    (3) Compute \( S_N \) directly by summing those terms.
    (4) Use an error bound method to confirm that \( |R_N| \) is within tolerance.
    (5) State the approximation as \( S \approx S_N \) with an error less than the bound.
  • Example: To approximate \( \sum_{n=1}^\infty \frac{1}{n^2} \) to within \( 0.01 \), use the integral test remainder bound \( R_N \le \int_{N}^\infty \frac{1}{x^2} dx = \frac{1}{N} \). Solve \( \frac{1}{N} \le 0.01 \) → \( N \ge 100 \). Compute \( S_{100} \) and guarantee that the sum is within 0.01 of the true value.
  • This method is required when the infinite sum cannot be computed exactly. The partial sum gives a practical numerical value, while the error bound ensures the answer is accurate to the required number of decimal places.
  • Always explicitly show the steps: proving convergence, calculating the partial sum, and bounding the remainder. Skipping any of these leaves the approximation unsupported.
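  • The worked example above can be verified with a small Python sketch (illustration only; it uses the known exact value \( \frac{\pi^2}{6} \) purely to check the bound).

```python
# Approximating sum 1/n^2 with S_100: the true remainder is below the integral-test
# bound 1/N = 0.01, so the partial sum is within the required tolerance.
import math

N = 100
S_N = sum(1 / n**2 for n in range(1, N + 1))
exact = math.pi**2 / 6
print(f"S_100          = {S_N:.6f}")
print(f"exact (pi^2/6) = {exact:.6f}")
print(f"remainder      = {exact - S_N:.6f}  <=  bound 1/N = {1 / N}")
```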

Integral Test Remainder Estimate

  • For a positive, decreasing series where the integral test applies, the remainder after \( N \) terms satisfies \( \int_{N+1}^\infty f(x) \, dx \le R_N \le \int_{N}^\infty f(x) \, dx \) where \( f(n) = a_n \). This gives an upper and lower bound for the error made when approximating with \( S_N \).
  • Step-by-step:
    (1) Verify \( f(x) \) is positive, continuous, and decreasing for \( x \ge N \).
    (2) Write the upper bound \( R_N \le \int_{N}^\infty f(x) dx \).
    (3) Solve the inequality to find \( N \) needed for the desired error.
    (4) Compute \( S_N \) with that \( N \).
    (5) State the final approximation and guaranteed error.
  • Example: For \( \sum_{n=1}^\infty \frac{1}{n^p} \) with \( p > 1 \), \( R_N \le \int_{N}^\infty x^{-p} dx = \frac{1}{(p-1)N^{p-1}} \). To get error below \( 0.001 \) when \( p = 2 \), solve \( \frac{1}{N} \le 0.001 \) → \( N \ge 1000 \).
  • The integral bounds are tight enough for accurate estimates and work even for slowly converging series when other methods are difficult to apply.
  • Be precise: compute both the bound and the partial sum so the approximation is fully justified. The bound alone is not enough to give the approximation.

Alternating Series Error Bound

  • For an alternating series that meets the Alternating Series Test conditions, the error when stopping at \( N \) terms is less than or equal to the absolute value of the first omitted term: \( |R_N| \le |a_{N+1}| \).
  • Step-by-step:
    (1) Verify \( b_n \) is decreasing and \( b_n \to 0 \) where \( a_n = (-1)^n b_n \).
    (2) Choose \( N \) so that \( b_{N+1} \) is less than the desired error tolerance.
    (3) Compute \( S_N \) as the approximation.
    (4) State \( S \approx S_N \) with \( |R_N| \le b_{N+1} \).
  • Example: For \( \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \), to get error \( \le 0.001 \), solve \( \frac{1}{N+1} \le 0.001 \) → \( N+1 \ge 1000 \) → \( N \ge 999 \). Compute \( S_{999} \) for the approximation.
  • This bound is exact in the sense that it uses the next term size directly, which is simple and avoids integration.
  • Always check decreasing terms and limit to zero before applying; if these fail, the bound is invalid.

Taylor Series Error Bound (Lagrange Remainder)

  • When approximating a function \( f(x) \) with a Taylor polynomial \( P_n(x) \), the Lagrange form of the remainder is \( R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \) for some \( c \) between \( a \) and \( x \). This gives a way to bound the error by finding a maximum possible value of \( |f^{(n+1)}(t)| \) on that interval.
  • Step-by-step:
    (1) Write the Taylor polynomial \( P_n(x) \) for \( f(x) \) centered at \( a \).
    (2) Find \( f^{(n+1)}(x) \) and bound its absolute value on the interval between \( a \) and \( x \).
    (3) Plug the bound into \( |R_n(x)| \le \frac{M}{(n+1)!}|x-a|^{n+1} \).
    (4) Ensure the bound meets the required accuracy.
  • Example: Approximate \( e^x \) at \( x = 0.5 \) using \( n = 3 \) centered at \( a = 0 \). Here \( f^{(4)}(x) = e^x \) and on \( [0, 0.5] \), \( e^x \le e^{0.5} \). So \( M = e^{0.5} \), and \( |R_3(0.5)| \le \frac{e^{0.5}}{4!}(0.5)^4 \).
  • This method ensures the polynomial approximation is within a known error margin, which is essential for accuracy in applications and AP free-response questions.
  • Always choose \( M \) carefully by bounding \( |f^{(n+1)}| \) over the entire interval between \( a \) and \( x \), not by evaluating it only at a single convenient point.
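  • A Python sketch of the \( e^x \) example (illustration only) shows the actual error of \( P_3(0.5) \) sitting comfortably below the Lagrange bound with \( M = e^{0.5} \).

```python
# Degree-3 Maclaurin approximation of e^0.5 versus the Lagrange error bound
# |R_3(0.5)| <= e^0.5 / 4! * 0.5^4, using M = e^0.5 as in the example above.
import math

x = 0.5
P3 = 1 + x + x**2 / 2 + x**3 / 6
actual_error = abs(math.exp(x) - P3)
bound = math.exp(0.5) / math.factorial(4) * x**4
print(f"P_3(0.5)     = {P3:.6f}   e^0.5 = {math.exp(x):.6f}")
print(f"actual error = {actual_error:.6f}   Lagrange bound = {bound:.6f}")
```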

Power Series

Definition and General Form of a Power Series

  • A power series is an infinite series of the form \( \sum_{n=0}^{\infty} c_n (x-a)^n \), where \( c_n \) are constants and \( a \) is the center. The variable \( x \) is the input, and the series becomes a function that outputs a sum depending on \( x \). Convergence depends on how far \( x \) is from \( a \).
  • Step-by-step:
    (1) Identify \( c_n \) and the center \( a \).
    (2) Recognize that for different \( x \) values, the series may converge or diverge.
    (3) Determine the set of \( x \) values where it converges; this set is called the interval of convergence.
    (4) Use convergence tests, typically the Ratio or Root Test, to find where convergence happens.
  • Example: \( \sum_{n=0}^{\infty} \frac{(x-2)^n}{n!} \) is a power series centered at \( a = 2 \) with coefficients \( c_n = \frac{1}{n!} \). This series converges for all real \( x \) because factorial growth in the denominator ensures convergence everywhere.
  • Power series are important because they act like polynomials of infinite degree and can approximate functions extremely accurately within their interval of convergence.
  • When the center is \( a = 0 \), the series is called a Maclaurin series. Otherwise, it is a Taylor series centered at \( a \).

Determining the Radius of Convergence

  • The radius of convergence \( R \) describes the distance from the center \( a \) within which the series converges absolutely. To find \( R \), apply the Ratio Test or Root Test to the general term \( c_n (x-a)^n \).
  • Step-by-step using Ratio Test: (1) Compute \( L = \lim_{n \to \infty} \left| \frac{c_{n+1}(x-a)^{n+1}}{c_n(x-a)^n} \right| \). (2) Simplify to \( L = |x-a| \cdot \lim_{n \to \infty} \left| \frac{c_{n+1}}{c_n} \right| \). (3) Require \( L < 1 \) for convergence. (4) Solve \( |x-a| < R \) to find the radius.
  • Example: For \( \sum_{n=1}^\infty \frac{(x-3)^n}{n} \), \( L = |x-3| \lim_{n \to \infty} \frac{n}{n+1} = |x-3| \). Convergence requires \( |x-3| < 1 \), so \( R = 1 \).
  • The Root Test can be used similarly: \( L = \lim_{n\to\infty} \sqrt[n]{|c_n||x-a|^n} = |x-a| \lim_{n\to\infty} \sqrt[n]{|c_n|} \), and convergence requires \( L < 1 \).
  • The radius \( R \) tells you the “open” part of the interval where the series converges absolutely; endpoints require separate testing.

Determining the Interval of Convergence

  • The interval of convergence is the set of all \( x \) values for which the series converges. When \( 0 < R < \infty \), it has the form \( (a-R, a+R) \) with one, both, or neither endpoint included depending on endpoint behavior; if \( R = \infty \) the series converges for all \( x \), and if \( R = 0 \) it converges only at \( x = a \).
  • Step-by-step:
    (1) Find \( R \) using Ratio or Root Test.
    (2) Write the open interval \( (a-R, a+R) \).
    (3) Test \( x = a-R \) and \( x = a+R \) separately using appropriate convergence tests (often Alternating Series Test, p-series comparison, or direct evaluation).
    (4) Decide whether to include each endpoint based on convergence or divergence there.
  • Example: For \( \sum_{n=1}^\infty \frac{(x+2)^n}{n} \), \( R = 1 \) from Ratio Test. Open interval: \( (-3, -1) \). At \( x = -3 \), terms are \( \frac{(-1)^n}{n} \) which converges by Alternating Series Test → include \( -3 \). At \( x = -1 \), terms are \( \frac{1}{n} \) which diverges (harmonic) → exclude \( -1 \). Final interval: \( [-3, -1) \).
  • Always test both endpoints independently, since convergence inside the radius does not guarantee convergence at the boundary.
  • The interval of convergence must be written carefully, with correct bracket notation showing inclusion or exclusion of endpoints.

Convergence at Endpoints

  • At \( x = a \pm R \), the Ratio and Root Tests give \( L = 1 \), so they are inconclusive. You must substitute the endpoint into the series and analyze it as a separate series using the tests from convergence theory.
  • Step-by-step:
    (1) Plug \( x = a-R \) into \( \sum c_n (x-a)^n \) to get a numeric series.
    (2) Apply tests like the p-series test, Alternating Series Test, or Comparison Tests.
    (3) Record whether the series converges or diverges at that endpoint.
    (4) Repeat for \( x = a+R \).
  • Example: For \( \sum_{n=1}^\infty \frac{(x-4)^n}{n \, 2^n} \), which has \( R = 2 \), at \( x = 6 \) the series is \( \sum \frac{1}{n} \), which diverges (harmonic). At \( x = 2 \), the series is \( \sum \frac{(-1)^n}{n} \), which converges conditionally by the Alternating Series Test.
  • This process ensures the interval of convergence is complete and accurate, avoiding the common mistake of ignoring endpoints.
  • Endpoint analysis is required on AP exams for full credit when determining intervals of convergence.

Taylor and Maclaurin Polynomials

Definition and Purpose of Taylor and Maclaurin Polynomials

  • A Taylor polynomial of degree \( n \) for a function \( f(x) \) centered at \( a \) is a polynomial that matches the value of \( f \) and its first \( n \) derivatives at \( x = a \). Its general form is \( P_n(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n \).
  • When the center \( a = 0 \), the polynomial is called a Maclaurin polynomial, which is a special case of a Taylor polynomial centered at zero. The formula becomes \( P_n(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \cdots + \frac{f^{(n)}(0)}{n!}x^n \).
  • Step-by-step to construct: (1) Identify the function \( f(x) \) and center \( a \). (2) Compute derivatives \( f'(x), f''(x), \dots, f^{(n)}(x) \). (3) Evaluate each derivative at \( x = a \). (4) Substitute into the Taylor formula up to degree \( n \).
  • Example: For \( f(x) = e^x \) centered at \( a = 0 \) with \( n = 3 \), all derivatives are \( e^x \), so \( f^{(k)}(0) = 1 \) for all \( k \). The Maclaurin polynomial is \( P_3(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \).
  • Taylor and Maclaurin polynomials are used to approximate functions near the center \( a \), and the approximation becomes more accurate as \( n \) increases or \( x \) is closer to \( a \).

Approximating Functions with Taylor Polynomials

  • To approximate \( f(x) \) near \( x = a \), compute \( P_n(x) \) and use it as the approximation \( f(x) \approx P_n(x) \). This is especially useful when evaluating \( f(x) \) exactly is difficult or impossible.
  • Step-by-step:
    (1) Construct \( P_n(x) \) using the method above.
    (2) Choose \( n \) based on the desired accuracy or error bound.
    (3) Evaluate \( P_n(x) \) at the target \( x \).
    (4) If required, compute the error bound to confirm the accuracy.
  • Example: Approximate \( \cos(0.1) \) with \( n = 2 \) centered at \( a = 0 \). Derivatives alternate between \( \cos x \) and \( -\sin x \), so \( f(0) = 1 \), \( f'(0) = 0 \), \( f''(0) = -1 \). Polynomial: \( P_2(x) = 1 - \frac{x^2}{2} \). Approximation: \( 1 - \frac{(0.1)^2}{2} = 0.995 \).
  • Higher-degree polynomials generally give more accurate approximations within a given interval around \( a \). However, accuracy can worsen if \( x \) is far from \( a \), even for large \( n \).
  • This technique connects directly to Taylor series, which are the infinite-degree version of Taylor polynomials.
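  • The \( \cos(0.1) \) computation is quick to confirm with a Python sketch (illustration only): the degree-2 value 0.995 agrees with \( \cos(0.1) \) to about five decimal places.

```python
# P_2(x) = 1 - x^2/2 for cos x, evaluated at x = 0.1, compared against math.cos.
import math

x = 0.1
P2 = 1 - x**2 / 2
print(f"P_2(0.1) = {P2}")                     # 0.995
print(f"cos(0.1) = {math.cos(x):.8f}")        # 0.99500417...
print(f"error    = {abs(math.cos(x) - P2):.2e}")
```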

Error Estimation with Lagrange Remainder

  • The Lagrange Remainder formula gives a bound for the error when approximating \( f(x) \) by \( P_n(x) \): \( R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \) for some \( c \) between \( a \) and \( x \). The magnitude \( |R_n(x)| \) is bounded by replacing \( |f^{(n+1)}(c)| \) with a maximum \( M \) on that interval.
  • Step-by-step:
    (1) Determine \( f^{(n+1)}(x) \).
    (2) Find the maximum value \( M \) of \( |f^{(n+1)}(x)| \) for \( x \) between \( a \) and the target \( x \).
    (3) Substitute \( M \), \( n \), \( a \), and \( x \) into \( \frac{M}{(n+1)!}|x-a|^{n+1} \) to get an error bound.
    (4) Compare the bound to the required tolerance to confirm adequacy.
  • Example: Approximate \( e^x \) at \( x = 0.5 \) with \( n = 2 \) centered at \( a = 0 \). The third derivative is \( e^x \), and on \( [0,0.5] \), \( e^x \le e^{0.5} \). Bound: \( |R_2(0.5)| \le \frac{e^{0.5}}{3!}(0.5)^3 \).
  • This method is necessary to prove accuracy rather than just assuming it. AP exam problems often require stating the error bound explicitly.
  • Calculating \( M \) correctly ensures the bound is valid; use the maximum derivative value on the entire interval, not just at a point.

Taylor and Maclaurin Series

Definition of a Taylor Series

  • A Taylor series is the infinite sum obtained by extending a Taylor polynomial to infinitely many terms. For a function \( f(x) \) centered at \( a \), the Taylor series is \( \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n \). This series can represent \( f(x) \) exactly for all \( x \) within its interval of convergence.
  • When \( a = 0 \), the series becomes a Maclaurin series: \( \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n \). This is a special case where the series is centered at zero, making many formulas simpler.
  • Step-by-step to form a Taylor series: (1) Compute derivatives \( f'(x), f''(x), \dots \). (2) Evaluate each at \( x = a \). (3) Substitute into the general term \( \frac{f^{(n)}(a)}{n!}(x-a)^n \). (4) Write the infinite sum starting at \( n = 0 \).
  • Example: For \( f(x) = e^x \) centered at \( a = 0 \), \( f^{(n)}(0) = 1 \) for all \( n \), so the Maclaurin series is \( \sum_{n=0}^{\infty} \frac{x^n}{n!} \).
  • Unlike polynomials, which are finite sums, a Taylor series can capture the exact behavior of a function on its interval of convergence if the remainder term goes to zero.

Common Maclaurin Series

  • Certain functions have well-known Maclaurin series that are worth memorizing for quick use on exams:
  • \( e^x = \sum_{n=0}^\infty \frac{x^n}{n!} \), valid for all \( x \).
  • \( \sin x = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}x^{2n+1} \), valid for all \( x \).
  • \( \cos x = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n} \), valid for all \( x \).
  • \( \frac{1}{1-x} = \sum_{n=0}^\infty x^n \), valid for \( |x| < 1 \).
  • \( \ln(1+x) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}x^n \), valid for \( -1 < x \le 1 \).

Generating New Series from Known Series

  • You can create new Taylor or Maclaurin series by substituting, differentiating, or integrating known series. Term-by-term differentiation and integration preserve the radius of convergence (though endpoint behavior may change), while substitution reshapes the convergence condition: for example, replacing \( x \) with \( 2x^2 \) in a series valid for \( |x| < 1 \) gives one valid for \( |2x^2| < 1 \), i.e., \( |x| < \tfrac{1}{\sqrt{2}} \).
  • Step-by-step:
    (1) Start with a known series, like \( \frac{1}{1-x} = \sum x^n \).
    (2) Replace \( x \) with another expression, e.g., \( 2x^2 \) to get \( \frac{1}{1-2x^2} = \sum (2x^2)^n \).
    (3) Differentiate or integrate term-by-term if needed to match your target function.
  • Example: To find the Maclaurin series for \( \frac{x}{1-x} \), start with \( \frac{1}{1-x} = \sum x^n \) and multiply the whole series by \( x \) to get \( \sum x^{n+1} \).
  • Example: To find \( \arctan x \), start with \( \frac{1}{1+x^2} = \sum (-1)^n x^{2n} \) for \( |x| < 1 \), then integrate term-by-term to get \( \arctan x = \sum \frac{(-1)^n}{2n+1}x^{2n+1} \).
  • These transformations are standard exam techniques and save time compared to recomputing derivatives from scratch.
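  • A Python sketch (illustration only) checks the \( \arctan x \) series obtained by term-by-term integration: at \( x = 0.5 \), inside \( |x| < 1 \), the partial sums approach \( \arctan(0.5) \).

```python
# Partial sums of arctan(x) = sum (-1)^n x^(2n+1)/(2n+1) at x = 0.5, compared to math.atan.
import math

x = 0.5
for N in [2, 5, 10, 20]:
    S_N = sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(N + 1))
    print(f"N = {N:>2}:  S_N = {S_N:.10f}   arctan(0.5) = {math.atan(x):.10f}")
```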

Determining Intervals of Convergence for Taylor/Maclaurin Series

  • Just like power series, a Taylor or Maclaurin series converges on an interval determined by its radius of convergence \( R \) and possible endpoint behavior. The interval must be explicitly determined for complete accuracy.
  • Step-by-step:
    (1) Apply the Ratio or Root Test to the general term of the series to find \( R \).
    (2) Write the open interval \( (a-R, a+R) \) where convergence is guaranteed.
    (3) Test each endpoint individually using standard convergence tests.
    (4) State the full interval using brackets for included endpoints and parentheses for excluded ones.
  • Example: For \( \ln(1+x) \), the Ratio Test gives \( R = 1 \). At \( x = 1 \), series becomes \( \sum \frac{(-1)^{n+1}}{n} \) which converges (alternating harmonic). At \( x = -1 \), series becomes \( \sum \frac{(-1)^{n+1}}{n}(-1)^n = \sum \frac{-1}{n} \) which diverges. Interval: \( (-1, 1] \).
  • Without testing endpoints, you cannot determine the full convergence set, which can lead to incomplete answers.
  • This process ensures you can use the series safely within the range where it truly represents the function.

Practice Problems

Problem 1 (Representing a Function as a Taylor Series and Finding Its Interval)

  • Question: Find the Maclaurin series for \( \ln(1+3x) \). Determine the radius and interval of convergence, and write the first four nonzero terms explicitly.
  • Step 1 (series construction): Start from the known Maclaurin series \( \ln(1+u)=\displaystyle \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}u^{\,n} \), valid for \( -1 < u \le 1 \). Substitute \( u = 3x \) to obtain \( \ln(1+3x)=\displaystyle \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}(3x)^{n}=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}3^{n}}{n}x^{n} \).
  • Step 2 (radius of convergence): The ratio test on the general term \( \frac{(-1)^{n+1}}{n}(3x)^n \) gives \( \lim_{n\to\infty}\big|\frac{a_{n+1}}{a_n}\big|=|3x| \). Convergence requires \( |3x|<1 \), so the radius is \( R=\tfrac{1}{3} \). The open interval is \( (-\tfrac{1}{3},\tfrac{1}{3}) \) before endpoint testing.
  • Step 3 (endpoint analysis): At \( x=\tfrac{1}{3} \), the series becomes \( \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n} \), which converges (alternating harmonic) and sums to \( \ln 2 \). At \( x=-\tfrac{1}{3} \), the series becomes \( \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}(-1)^n=-\sum_{n=1}^{\infty}\frac{1}{n} \), which diverges. Therefore the interval of convergence is \( \left(-\tfrac{1}{3},\,\tfrac{1}{3}\right] \).
  • Step 4 (first four nonzero terms): Expand to get \( \ln(1+3x)=3x-\dfrac{(3x)^2}{2}+\dfrac{(3x)^3}{3}-\dfrac{(3x)^4}{4}+\cdots \). Simplify coefficients: \( 3x-\dfrac{9}{2}x^2+9x^3-\dfrac{81}{4}x^4+\cdots \). State the final result as the series with its interval \( \left(-\tfrac{1}{3},\,\tfrac{1}{3}\right] \) and radius \( R=\tfrac{1}{3} \).

Problem 2 (Taylor Approximation with a Lagrange Error Bound)

  • Question: Approximate \( e^{0.5} \) using the third-degree and then the sixth-degree Maclaurin polynomials for \( e^x \). Find a degree \( n \) so that the error is less than \( 10^{-4} \) and justify it using the Lagrange remainder.
  • Step 1 (setup): The Maclaurin series for \( e^x \) is \( \displaystyle \sum_{k=0}^{\infty}\frac{x^k}{k!} \). The degree-\( n \) Maclaurin polynomial is \( P_n(x)=\sum_{k=0}^{n}\frac{x^k}{k!} \). The Lagrange remainder satisfies \( |R_n(x)|=\left|\dfrac{e^{c}}{(n+1)!}x^{n+1}\right| \) for some \( c \) between \( 0 \) and \( x \).
  • Step 2 (degree 3 approximation): At \( x=0.5 \), \( P_3(0.5)=1+0.5+\dfrac{0.5^2}{2!}+\dfrac{0.5^3}{3!}=1+0.5+0.125+0.020833\ldots=1.645833\ldots \). The error bound is \( |R_3(0.5)|\le\dfrac{e^{0.5}}{4!}(0.5)^4 \). Since \( e^{0.5}\approx1.6487 \), \( 4!=24 \), and \( 0.5^4=0.0625 \), the bound is \( \le \dfrac{1.6487\cdot0.0625}{24}\approx0.00429 \), which is larger than \( 10^{-4} \).
  • Step 3 (find \( n \) for error \( <10^{-4} \)): Use \( |R_n(0.5)|\le\dfrac{e^{0.5}}{(n+1)!}(0.5)^{n+1} \). Test \( n=5 \): \( \dfrac{e^{0.5}\cdot0.5^{6}}{6!}=\dfrac{1.6487\cdot1/64}{720}\approx3.6\times10^{-5} \), which is \( <10^{-4} \). Therefore \( n=5 \) (degree 5) suffices; using \( n=6 \) is also valid and even tighter.
  • Step 4 (degree 6 approximation (tighter)): \( P_6(0.5)=\sum_{k=0}^{6}\dfrac{0.5^k}{k!}=1+0.5+0.125+0.020833+\dfrac{0.5^4}{24}+\dfrac{0.5^5}{120}+\dfrac{0.5^6}{720} \). Numerically this is \( \approx1.648720 \) to six decimals. The remainder satisfies \( |R_6(0.5)|\le\dfrac{e^{0.5}\cdot0.5^{7}}{7!}\approx2.6\times10^{-6} \), guaranteeing accuracy better than \( 10^{-4} \).
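  • A Python sketch (illustration only) confirms both the partial-sum values and the Lagrange bounds used in this problem.

```python
# Maclaurin partial sums P_n(0.5) for e^x and the Lagrange bounds e^0.5 * 0.5^(n+1)/(n+1)!.
import math

x = 0.5
exact = math.exp(x)
for n in [3, 5, 6]:
    P_n = sum(x**k / math.factorial(k) for k in range(n + 1))
    bound = math.exp(0.5) * x ** (n + 1) / math.factorial(n + 1)
    print(f"n = {n}:  P_n(0.5) = {P_n:.8f}   actual error = {abs(exact - P_n):.2e}   bound = {bound:.2e}")
```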