Limit of a Sum Theorem
Let $f(x)$ and $g(x)$ be functions that admit finite limits as $x \to x_0$
$$ \lim_{x \to x_0} f(x) = l $$
$$ \lim_{x \to x_0} g(x) = m $$
Then the limit of their sum exists and equals the sum of the limits
$$ \lim_{x \to x_0} [f(x)+g(x)] = l + m $$
where $l, m \in \mathbb{R}$
Equivalently, the limit of a sum is the sum of the limits, provided that both limits exist and are finite.
The same limit law applies to differences:
$$ \lim_{x \to x_0} [f(x)-g(x)] = l - m $$
This theorem plays a central role in evaluating limits because it allows each term to be handled independently before combining the results.
Why is this true?
The reasoning is intuitive. As $x$ approaches $x_0$, the values of $f(x)$ become arbitrarily close to $l$, and the values of $g(x)$ become arbitrarily close to $m$.
Accordingly, the sum $f(x)+g(x)$ becomes arbitrarily close to $l+m$.
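This intuition can be checked numerically. A minimal sketch, with illustrative choices not taken from the text ($f(x) = 3x$ with $l = 3$ and $g(x) = x^2$ with $m = 1$ at $x_0 = 1$):

```python
# Illustrative choices (not from the text): f(x) = 3x with limit l = 3
# and g(x) = x^2 with limit m = 1 as x -> 1, so the sum should approach 4.
def f(x):
    return 3 * x

def g(x):
    return x ** 2

l, m, x0 = 3.0, 1.0, 1.0

# The closer x is to x0, the closer f(x) + g(x) is to l + m.
for h in (0.1, 0.01, 0.001):
    x = x0 + h
    print(x, f(x) + g(x), abs(f(x) + g(x) - (l + m)))
```

Each step toward $x_0$ shrinks the deviation of the sum from $l + m$.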
Note. The theorem applies when both limits exist and are finite real numbers. If either limit does not exist or is infinite, the result cannot be invoked automatically, and the expression must be examined case by case. For example:
$$ \lim_{x \to +\infty} x = +\infty $$
$$ \lim_{x \to +\infty} (-x) = -\infty $$
Their sum produces the indeterminate form
$$ +\infty - \infty $$
so the theorem is not directly applicable in this situation.
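The indeterminacy of $+\infty - \infty$ means the outcome depends on the specific functions. A small sketch (the functions and the shift constant $c$ are illustrative assumptions): here $f(x) = x \to +\infty$ and $g_c(x) = c - x \to -\infty$, yet their sum is exactly $c$, so the sum can converge to any value.

```python
# The form +inf - inf is indeterminate: f(x) = x -> +inf and
# g_c(x) = c - x -> -inf, yet f(x) + g_c(x) = c for every x,
# so the sum can converge to any chosen value c.
def f(x):
    return x

def g(c, x):
    return c - x

for c in (0.0, 7.0, -3.0):
    x = 1e9  # a very large x, standing in for x -> +infinity
    print(c, f(x) + g(c, x))
```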
Within the framework of the extended real numbers, the rule can be extended whenever the sum does not produce an indeterminate form. For instance, if two functions both diverge to $+\infty$, their sum also diverges to $+\infty$.
$$ (+\infty) + (+\infty) = +\infty $$
A systematic analysis of all possible combinations shows that the indeterminate forms associated with sums are $+\infty - \infty$ and $-\infty + \infty$.
| $f+g$ | $\lim f = l$ | $\lim f = +\infty$ | $\lim f = -\infty$ |
|---|---|---|---|
| $\lim g = m$ | $l + m$ | $+\infty$ | $-\infty$ |
| $\lim g = +\infty$ | $+\infty$ | $+\infty$ | indeterminate |
| $\lim g = -\infty$ | $-\infty$ | indeterminate | $-\infty$ |
More generally, this theorem is a particular case of the linearity property of limits, which governs sums, differences, and scalar multiples.
$$ \lim_{x \to x_0} [a f(x) + b g(x)] = a \lim_{x \to x_0} f(x) + b \lim_{x \to x_0} g(x)$$
The term “linearity” emphasizes that the limit operator preserves linear combinations.
This property is one of the cornerstones of calculus and underlies the standard technique of decomposing complex expressions into simpler components.
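The linearity property can also be verified numerically. A minimal sketch, with illustrative assumptions not taken from the text ($f(x) = \cos x$ with limit $1$, $g(x) = x$ with limit $0$ as $x \to 0$, and coefficients $a = 2$, $b = 3$):

```python
import math

# Illustrative linearity check: f(x) = cos x (limit 1) and g(x) = x
# (limit 0) as x -> 0, with a = 2 and b = 3, so
# lim [2 cos x + 3x] should be 2*1 + 3*0 = 2.
def combo(x):
    return 2 * math.cos(x) + 3 * x

# Approach x0 = 0 from both sides; the values settle near 2.
for h in (0.1, 0.01, 0.001):
    print(h, combo(h), combo(-h))
```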
A Practical Example
Evaluate the limit
$$ \lim_{x \to 2} (3x + 5) $$
Rewrite the expression as a sum:
$$ 3x + 5 = (3x) + 5 $$
Compute the limits of the individual terms.
$$ \lim_{x \to 2} 3x = 3 \cdot 2 = 6 $$
$$ \lim_{x \to 2} 5 = 5 $$
Apply the theorem:
$$ \lim_{x \to 2} (3x + 5) = 6 + 5 = 11 $$
The computation is immediate because the limit distributes over addition.
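The result can be confirmed by sampling the expression near the point; a quick sketch (the sample offsets are arbitrary choices):

```python
# Values of 3x + 5 approach 11 from both sides of x = 2.
def s(x):
    return 3 * x + 5

for h in (0.1, 0.01, 0.001):
    print(s(2 - h), s(2 + h))
```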
Example 2
Consider the functions
$$ f(x) = x^2 $$
$$ g(x) = \sin x $$
Evaluate the limit of their sum:
$$ \lim_{x \to 0} [x^2 + \sin x] $$
Using linearity:
$$ \lim_{x \to 0} [x^2 + \sin x] = \lim_{x \to 0} [x^2 ] + \lim_{x \to 0} [\sin x] $$
The expression is thereby reduced to elementary limits.
Since both functions approach zero:
$$ \lim_{x \to 0} x^2 = 0 $$
$$ \lim_{x \to 0} \sin x = 0 $$
Substituting the results:
$$ \lim_{x \to 0} [x^2 + \sin x] = 0 + 0 = 0 $$
Therefore
$$ \lim_{x \to 0} [x^2 + \sin x] = 0 $$
No further manipulation is required. The linearity property makes the evaluation direct.
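A numerical check of this example (the sample offsets are arbitrary choices):

```python
import math

# x^2 + sin x as x -> 0: each term tends to 0, so the sum should too.
def total(x):
    return x ** 2 + math.sin(x)

# Approach 0 from both sides; the values shrink toward 0.
for h in (0.1, 0.01, 0.001):
    print(h, total(h), total(-h))
```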
Example 3
Compute the limit
$$ \lim_{x \to 1} (2x^2 + 3x) $$
Apply the sum law for limits:
$$ \lim_{x \to 1} (2x^2) + \lim_{x \to 1} (3x) $$
Next, invoke the constant multiple law by factoring the constants 2 and 3 outside the limits
$$ 2 \cdot \lim_{x \to 1} x^2 + 3 \cdot \lim_{x \to 1} x $$
Evaluate the limits separately. Both converge to 1.
$$ 2 \cdot \underbrace{ \lim_{x \to 1} x^2}_{=1} + 3 \cdot \underbrace{ \lim_{x \to 1} x}_{=1} $$
$$ = 2 \cdot 1 + 3 \cdot 1 $$
$$ = 2 + 3 = 5 $$
Therefore, the original limit is
$$ \lim_{x \to 1} (2x^2 + 3x) = 5 $$
Here, the limit laws reduce the problem to elementary evaluations.
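The stepwise argument can be mirrored in a few lines; a minimal sketch (the probe point $x = 1.001$ is an arbitrary choice):

```python
# Reproducing the stepwise computation: lim (2x^2 + 3x) as x -> 1.
lim_x_sq = 1  # lim_{x->1} x^2 = 1
lim_x = 1     # lim_{x->1} x = 1
result = 2 * lim_x_sq + 3 * lim_x
print(result)  # 5

# A direct numerical check near x = 1 agrees with the value above.
p = lambda x: 2 * x ** 2 + 3 * x
print(p(1.001))
```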
Note. This example is deliberately simple. While the limit could be obtained immediately by direct substitution, the stepwise argument clarifies how the limit laws function. In more advanced problems, these rules are essential. They enable a complex expression to be decomposed into simpler terms, whose limits can be computed independently and then recombined. The result is a procedure that is transparent, structured, and often computationally efficient.
Proof
Let $f(x)$ and $g(x)$ be functions with finite limits $l, m \in \mathbb{R}$ as $x \to x_0$
\[ \lim_{x \to x_0} f(x)=l \]
\[ \lim_{x \to x_0} g(x)=m \]
We show that
\[ \lim_{x \to x_0}[f(x)+g(x)]=l+m \]
By the definition of limit applied to $f(x)$, for every \( \varepsilon>0 \), there exists a neighborhood \(I_1\) of $ x_0 $ such that
\[ l - \varepsilon < f(x) < l + \varepsilon \qquad \forall x \in I_1,\ x \neq x_0 \]
Since \( \varepsilon \) is arbitrary and \( \varepsilon/2 \) is also positive, the definition may be applied with \( \varepsilon/2 \) in place of \( \varepsilon \)
\[ l-\frac{\varepsilon}{2} < f(x) < l+\frac{\varepsilon}{2} \qquad \forall x \in I_1,\ x \neq x_0 \]
Applying the definition of limit to $g(x)$, for the same \( \varepsilon/2 \), there exists a neighborhood \(I_2\) of $ x_0 $ such that
\[ m-\frac{\varepsilon}{2} < g(x) < m+\frac{\varepsilon}{2} \qquad \forall x \in I_2,\ x \neq x_0 \]
Consider the intersection of these neighborhoods
\[ I = I_1 \cap I_2 \]
This intersection is itself a neighborhood of $x_0$.
Thus, for every \(x \in I\), with \(x \neq x_0\), both inequalities hold simultaneously:
\[ l-\frac{\varepsilon}{2} < f(x) < l+\frac{\varepsilon}{2} \]
\[ m-\frac{\varepsilon}{2} < g(x) < m+\frac{\varepsilon}{2} \]
Add the inequalities term by term:
\[ \left(l-\frac{\varepsilon}{2}\right)+\left(m-\frac{\varepsilon}{2}\right) < f(x)+g(x) < \left(l+\frac{\varepsilon}{2}\right)+\left(m+\frac{\varepsilon}{2}\right) \]
Simplifying:
\[ (l+m)-\varepsilon < f(x)+g(x) < (l+m)+\varepsilon \qquad \forall x \in I,\ x \neq x_0 \]
This double inequality is equivalent to
\[ \big|[f(x)+g(x)]-(l+m)\big| < \varepsilon \qquad \forall x \in I,\ x \neq x_0 \]
Since the condition holds for every \( \varepsilon>0 \), by definition
\[ \lim_{x \to x_0}[f(x)+g(x)] = l+m \]
As required.
Note. The choice of \( \varepsilon/2 \) partitions the total tolerance \( \varepsilon \) between the two functions. Each function deviates from its limit by at most half the allowed margin, ensuring that the deviation of the sum from \( l+m \) remains below \( \varepsilon \).
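The mechanics of the proof can be illustrated numerically. A minimal sketch, with illustrative assumptions not taken from the text ($f(x) = 3x$ with $l = 3$, $g(x) = x^2$ with $m = 1$ at $x_0 = 1$, and hand-picked neighborhood radii):

```python
# Sketch of the eps/2 argument with illustrative functions:
# f(x) = 3x with l = 3 and g(x) = x^2 with m = 1 at x0 = 1.
x0, l, m = 1.0, 3.0, 1.0
eps = 0.1

# Hand-picked radii: |3x - 3| < eps/2 when |x - 1| < eps/6, and near
# x0 = 1, |x^2 - 1| = |x - 1| * |x + 1| < eps/2 when |x - 1| < eps/6.
delta1 = eps / 6
delta2 = eps / 6
delta = min(delta1, delta2)  # corresponds to the intersection I = I1 ∩ I2

# Every sampled x in the intersection (x != x0) keeps the sum
# within eps of l + m.
for x in (x0 - delta / 2, x0 + delta / 2):
    assert abs((3 * x + x ** 2) - (l + m)) < eps
print("sum stays within eps of l + m")
```

Each function is held within half the tolerance, so their sum stays within the full tolerance, exactly as in the proof.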
