Linear Dependence and Independence of Vectors
Given m vectors v_{1}, v_{2}, ..., v_{m} in a vector space V over a field K, the vectors are linearly independent if the only linear combination of them equal to the zero vector is the trivial one. $$ a_1 \cdot \vec{v}_1 + a_2 \cdot \vec{v}_2 + \ ... \ + a_m \cdot \vec{v}_m = \vec{0} $$ Conversely, if nontrivial solutions also exist, the vectors are linearly dependent.
A linear combination of vectors equal to the zero vector is called trivial when all the scalar coefficients are zero.
$$ a_1 = a_2 = \ ... = \ a_m = 0 $$
If at least one coefficient is different from zero, the solution of the linear combination is nontrivial.
Linearly Independent Vectors
In a vector space V over a field K, vectors v_{1},...,v_{m} ∈ V are linearly independent if no linear combination of them equals the zero vector, except for the trivial linear combination. $$ \alpha_1 \cdot \vec{v}_1 + ... + \alpha_m \cdot \vec{v}_m = \vec{0} \ \ \ \ \ \ \alpha_1, ..., \alpha_m \in K $$
What is a trivial linear combination?
It's the linear combination in which all the scalar coefficients are zero: { a_{1}=0, ..., a_{m}=0 }.
Why is the trivial linear combination excluded?
The trivial linear combination is not considered because any linear combination with all-zero coefficients equals the zero vector.
$$ 0 \cdot v_1 + ... + 0 \cdot v_m = 0 $$
Thus, the trivial solution always exists and cannot inform us about the linear dependence or independence of vectors.
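This fact is easy to see numerically; a minimal sketch with NumPy (the two vectors are arbitrary examples):

```python
import numpy as np

# Any vectors at all: the trivial combination (all coefficients zero)
# always produces the zero vector.
v1 = np.array([3.0, 1.0])
v2 = np.array([2.0, 5.0])

trivial = 0 * v1 + 0 * v2
print(trivial)  # [0. 0.]
```

The trivial combination therefore holds for every choice of vectors, which is exactly why it carries no information about dependence.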
Linearly Dependent Vectors
In a vector space V over the field K, vectors v_{1},...,v_{m} ∈ V are linearly dependent if some linear combination other than the trivial one equals the zero vector. $$ \alpha_1 \cdot \vec{v}_1 + ... + \alpha_m \cdot \vec{v}_m = \vec{0} \ \ \ \ \ \ \alpha_1, ..., \alpha_m \in K $$
The trivial solution is excluded because it does not allow us to distinguish between linear dependence and independence of vectors.
In this case, there is at least one nonzero scalar coefficient in the linear combination.
From this follows the theorem:
The vectors v_{1},...,v_{m} are linearly dependent if at least one of them (e.g., v_{1}) can be written as a linear combination of the remaining vectors.
$$ v_1 = \beta_2 \cdot v_2 + ... + \beta_m \cdot v_m $$ $$ \text{with} \ \ \beta_2, ..., \beta_m \in K $$
The theorem does not specify which vector can be written as a combination of the others; it only states that such a vector exists if the vectors are linearly dependent.
It could be any of the vectors v_{1},...,v_{m} and not necessarily v_{1}.
Geometric Interpretation
In R^{2} space, two vectors are linearly dependent if they are parallel or coincident, i.e., they lie on the same line through the origin (with the same or opposite direction).
In R^{3} space, three vectors are linearly dependent if they belong to the same plane.
Note. These geometric considerations only apply to the vector spaces R^{2} and R^{3}. They cannot be visualized in higher-dimensional spaces such as R^{4}, R^{5}, etc.
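The geometric conditions above translate into determinant tests: two vectors in R^2 are parallel exactly when the 2×2 determinant they form is zero, and three vectors in R^3 are coplanar exactly when their 3×3 determinant is zero. A sketch with NumPy (the vectors are illustrative):

```python
import numpy as np

# R^2: two vectors are linearly dependent exactly when they are parallel,
# i.e. when the 2x2 determinant of the matrix they form vanishes.
parallel = np.linalg.det(np.array([[1.0, 2.0],
                                   [2.0, 4.0]]))  # rows: v1 and v2 = 2*v1

# R^3: three vectors are linearly dependent exactly when they are coplanar,
# i.e. when the 3x3 determinant vanishes.
coplanar = np.linalg.det(np.array([[1.0, 0.0, 0.0],
                                   [0.0, 1.0, 0.0],
                                   [1.0, 1.0, 0.0]]))  # v3 = v1 + v2

print(abs(parallel) < 1e-12, abs(coplanar) < 1e-12)  # True True
```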
A Practical Example
Example 1
Let's consider two vectors in the vector space V=R^{2} over the field of real numbers K=R:
$$ \vec{v}_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} $$
$$ \vec{v}_2 = \begin{pmatrix} 2 \\ 5 \end{pmatrix} $$
The linear combination is as follows:
$$ a_1 \cdot \vec{v}_1 + a_2 \cdot \vec{v}_2 = \vec{0} $$
$$ a_1 \cdot \begin{pmatrix} 3 \\ 1 \end{pmatrix} + a_2 \cdot \begin{pmatrix} 2 \\ 5 \end{pmatrix} = \vec{0} $$
The null vector is a vector made up of two zeros:
$$ \begin{pmatrix} 3a_1 \\ a_1 \end{pmatrix} + \begin{pmatrix} 2a_2 \\ 5a_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$
I'll rewrite the vector equation as a system of equations:
$$ \begin{cases} 3a_1 + 2a_2 = 0 \\ \\ a_1 + 5a_2 = 0 \end{cases} $$
Then, I'll solve the system using the substitution method, isolating a_{1} in the second equation:
$$ \begin{cases} 3a_1 + 2a_2 = 0 \\ \\ a_1 = -5a_2 \end{cases} $$
Note. Any other method of solving the system is equally valid, for example Cramer's rule.
Next, I substitute a_{1} into the first equation and find the value of a_{2}=0.
$$ \begin{cases} 3(-5a_2) + 2a_2 = 0 \\ \\ a_1 = -5a_2 \end{cases} $$
$$ \begin{cases} -15a_2 + 2a_2 = 0 \\ \\ a_1 = -5a_2 \end{cases} $$
$$ \begin{cases} -13a_2 = 0 \\ \\ a_1 = -5a_2 \end{cases} $$
$$ \begin{cases} a_2 = 0 \\ \\ a_1 = -5a_2 \end{cases} $$
Once I know a_{2}, I substitute it into the second equation and get the value of a_{1}=0.
$$ \begin{cases} a_2 = 0 \\ \\ a_1 = -5(0) \end{cases} $$
$$ \begin{cases} a_2 = 0 \\ \\ a_1 = 0 \end{cases} $$
The only solution of the system is the null vector, i.e., a_{1}=0 and a_{2}=0.
Therefore, the two vectors are linearly independent.
Note. In the case of two vectors in the two-dimensional space R^{2}, i.e., on the plane, linear dependence occurs only when the vectors are parallel. Therefore, if the vectors are linearly independent, they are not parallel. However, this characterization is only valid in the plane, not in higher-dimensional spaces.
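The conclusion of Example 1 can also be reached programmatically; a sketch with SymPy solving the same homogeneous system:

```python
import sympy as sp

a1, a2 = sp.symbols('a1 a2')

# Homogeneous system from Example 1: 3*a1 + 2*a2 = 0 and a1 + 5*a2 = 0
solution = sp.solve([sp.Eq(3*a1 + 2*a2, 0), sp.Eq(a1 + 5*a2, 0)], [a1, a2])
print(solution)  # {a1: 0, a2: 0} -> only the trivial solution: independent
```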
Example 2
Consider three vectors in the two-dimensional real vector space V=R^{2}:
$$ \vec{v_1} = \begin{pmatrix} 3 \\ 1 \end{pmatrix} $$
$$ \vec{v_2} = \begin{pmatrix} 1 \\ 4 \end{pmatrix} $$
$$ \vec{v_3} = \begin{pmatrix} 2 \\ 1 \end{pmatrix} $$
The three vectors are linearly independent if there is no nontrivial linear combination of them equal to the null vector.
$$ a_1 \cdot \vec{v_1} + a_2 \cdot \vec{v_2} + a_3 \cdot \vec{v_3} = \vec{0} $$
$$ a_1 \cdot \begin{pmatrix} 3 \\ 1 \end{pmatrix} + a_2 \cdot \begin{pmatrix} 1 \\ 4 \end{pmatrix} + a_3 \cdot \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \vec{0} $$
$$ \begin{pmatrix} 3a_1 \\ a_1 \end{pmatrix} + \begin{pmatrix} a_2 \\ 4a_2 \end{pmatrix} + \begin{pmatrix} 2a_3 \\ a_3 \end{pmatrix} = \vec{0} $$
To determine if they are linearly independent or dependent, I'll convert the vector equation into a system of equations.
$$ \begin{cases} 3a_1+a_2+2a_3=0 \\ a_1+4a_2 +a_3 = 0 \end{cases} $$
Then, I'll check whether there is a nontrivial solution, i.e., one different from the null vector of coefficients.
In this case, however, no computation is needed: the homogeneous system has 3 unknowns and only 2 equations.
Thus, by the Rouché–Capelli theorem, the linear system has infinitely many solutions beyond the trivial null vector solution.
Therefore, the three vectors are linearly dependent.
This means one of the three vectors can be obtained as a linear combination of the other two.
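The rank argument behind Example 2 can be checked with NumPy: three vectors in R^2 span at most a 2-dimensional space, so the rank of the matrix having them as columns is necessarily below 3 (a sketch):

```python
import numpy as np

# Columns: v1, v2, v3 from Example 2.
M = np.array([[3.0, 1.0, 2.0],
              [1.0, 4.0, 1.0]])

rank = np.linalg.matrix_rank(M)
print(rank)  # 2, which is less than the 3 vectors -> linearly dependent
```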
Example 3
In a vector space V=R^{3} over the field K=R, consider two vectors:
$$ v_1 = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} $$ $$ v_2 = \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix} $$
I need to determine if they are linearly dependent.
By definition, vectors are linearly dependent if their linear combination equals zero, with the coefficients α not all being zero simultaneously.
$$ α_1 \cdot v_1 + α_2 \cdot v_2 = 0 $$ $$ α_1 \cdot \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} + α_2 \cdot \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix} = 0 $$ $$ ( α_1, 2α_1, α_1 ) + ( 3α_2, 2α_2, 3α_2) = 0 $$
I can represent these vectors as a linear system with three equations and two unknown variables:
$$ \begin{cases} α_1 + 3α_2 = 0 \\ 2α_1 + 2α_2 = 0 \\ α_1 + 3α_2 = 0 \end{cases} $$
I solve the system to check whether there is a nontrivial solution (one where α_{1} and α_{2} are not both zero).
$$ \begin{cases} α_1 + 3α_2 = 0 \\ 2α_1 + 2α_2 = 0 \\ α_1 = -3α_2 \end{cases} $$
$$ \begin{cases} (-3α_2) + 3α_2 = 0 \\ 2(-3α_2) + 2α_2 = 0 \\ α_1 = -3α_2 \end{cases} $$
$$ \begin{cases} 0 = 0 \\ -4α_2 = 0 \\ α_1 = -3α_2 \end{cases} $$
The system has no solutions other than the trivial one (α_{1}=0, α_{2}=0)
Therefore, vectors v_{1} and v_{2} are not linearly dependent.
They are linearly independent.
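A quick check of Example 3 with NumPy: the 3×2 matrix with v1 and v2 as columns has rank 2, equal to the number of vectors, which confirms independence (a sketch):

```python
import numpy as np

# Columns: v1 = (1, 2, 1) and v2 = (3, 2, 3) from Example 3.
M = np.array([[1.0, 3.0],
              [2.0, 2.0],
              [1.0, 3.0]])

rank = np.linalg.matrix_rank(M)
print(rank)  # 2 -> rank equals the number of vectors: independent
```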
Example 4
In the same vector space V=R^{3} over the field K=R, consider two other vectors:
$$ v_1 = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} $$ $$ v_2 = \begin{pmatrix} 3 \\ 6 \\ 3 \end{pmatrix} $$
They are linearly dependent if their linear combination equals zero, with the coefficients α not all being zero simultaneously.
$$ α_1 \cdot v_1 + α_2 \cdot v_2 = 0 $$ $$ α_1 \cdot \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} + α_2 \cdot \begin{pmatrix} 3 \\ 6 \\ 3 \end{pmatrix} = 0 $$ $$ ( α_1, 2α_1, α_1 ) + ( 3α_2, 6α_2, 3α_2) = 0 $$
I express these vectors as a linear system with three equations and two unknowns:
$$ \begin{cases} α_1 + 3α_2 = 0 \\ 2α_1 + 6α_2 = 0 \\ α_1 + 3α_2 = 0 \end{cases} $$
Next, I solve the system:
$$ \begin{cases} α_1 + 3α_2 = 0 \\ 2α_1 + 6α_2 = 0 \\ α_1 = -3α_2 \end{cases} $$
$$ \begin{cases} (-3α_2) + 3α_2 = 0 \\ 2(-3α_2) + 6α_2 = 0 \\ α_1 = -3α_2 \end{cases} $$
$$ \begin{cases} 0 = 0 \\ 0 = 0 \\ α_1 = -3α_2 \end{cases} $$
The system has infinitely many nontrivial solutions: any α_{2} ≠ 0 with α_{1} = -3α_{2} works.
Hence, vectors v_{1} and v_{2} are linearly dependent.
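Example 4 can be confirmed the same way; here the rank drops to 1 because v2 is a scalar multiple of v1 (a sketch with NumPy):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 1.0])
v2 = np.array([3.0, 6.0, 3.0])

M = np.column_stack([v1, v2])
rank = np.linalg.matrix_rank(M)
print(rank)                     # 1 -> fewer than 2 vectors: dependent
print(np.allclose(v2, 3 * v1))  # True: v2 is a scalar multiple of v1
```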
Example 5
Given 3 vectors in the vector space V=R^{3}
$$ v_1 = \begin{pmatrix} 3 \\ 1 \\ -2 \end{pmatrix} $$
$$ v_2 = \begin{pmatrix} 2 \\ -1 \\ 0 \end{pmatrix} $$
$$ v_3 = \begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix} $$
Their linear combination equal to the null vector is
$$ a_1 \cdot \vec{v_1} + a_2 \cdot \vec{v_2} + a_3 \cdot \vec{v_3} = \vec{0} $$
$$ a_1 \cdot \begin{pmatrix} 3 \\ 1 \\ -2 \end{pmatrix} + a_2 \cdot \begin{pmatrix} 2 \\ -1 \\ 0 \end{pmatrix} + a_3 \cdot \begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} $$
$$ \begin{pmatrix} 3a_1 \\ a_1 \\ -2a_1 \end{pmatrix} + \begin{pmatrix} 2a_2 \\ -a_2 \\ 0 \end{pmatrix} + \begin{pmatrix} a_3 \\ -3a_3 \\ 2a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} $$
Converting the vector equation into a system of Cartesian equations.
$$ \begin{cases} 3a_1 + 2a_2 + a_3 = 0 \\ a_1 - a_2 - 3a_3 = 0 \\ -2a_1 + 2a_3 = 0 \end{cases} $$
Looking for solutions using the substitution method.
Isolating variable a_{1} in the third equation and substituting it into the other equations.
$$ \begin{cases} 3a_1 + 2a_2 + a_3 = 0 \\ a_1 - a_2 - 3a_3 = 0 \\ a_1 = a_3 \end{cases} $$
$$ \begin{cases} 3a_3 + 2a_2 + a_3 = 0 \\ a_3 - a_2 - 3a_3 = 0 \\ a_1 = a_3 \end{cases} $$
$$ \begin{cases} 4a_3 + 2a_2 = 0 \\ -a_2 - 2a_3 = 0 \\ a_1 = a_3 \end{cases} $$
Isolating variable a_{2} in the second equation and substituting it into the other equations.
$$ \begin{cases} 4a_3 + 2a_2 = 0 \\ a_2 = -2a_3 \\ a_1 = a_3 \end{cases} $$
$$ \begin{cases} 4a_3 + 2(-2a_3) = 0 \\ a_2 = -2a_3 \\ a_1 = a_3 \end{cases} $$
$$ \begin{cases} 4a_3 - 4a_3 = 0 \\ a_2 = -2a_3 \\ a_1 = a_3 \end{cases} $$
$$ \begin{cases} 0 = 0 \\ a_2 = -2a_3 \\ a_1 = a_3 \end{cases} $$
Thus, the system has infinitely many solutions: a_{3} can take any value, which then determines a_{1} and a_{2}.
Consequently, the vectors v_{1}, v_{2}, v_{3} are linearly dependent.
This means one of the three vectors can be obtained as a linear combination of the other two vectors.
Note. Observing the three vectors, it's evident that vector v_{1} can be derived as a linear combination of v_{2} and v_{3} $$ \begin{pmatrix} 3 \\ 1 \\ -2 \end{pmatrix} = 2 \cdot \begin{pmatrix} 2 \\ -1 \\ 0 \end{pmatrix} - \begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix} $$ that is, $$ v_1 = 2v_2 - v_3 $$
Geometrically, this means the three linearly dependent vectors v_{1}, v_{2}, v_{3} lie in the same plane in three-dimensional space.
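The dependence relation of Example 5 can be verified directly; a sketch with NumPy, using the vectors v1 = (3, 1, -2), v2 = (2, -1, 0), v3 = (1, -3, 2):

```python
import numpy as np

v1 = np.array([3.0, 1.0, -2.0])
v2 = np.array([2.0, -1.0, 0.0])
v3 = np.array([1.0, -3.0, 2.0])

# A nontrivial solution of the system: a1 = a3, a2 = -2*a3 (take a3 = 1).
combo = 1 * v1 - 2 * v2 + 1 * v3
print(np.allclose(combo, np.zeros(3)))  # True -> linearly dependent
print(np.allclose(v1, 2 * v2 - v3))     # True: v1 = 2*v2 - v3
```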
The Demonstration
The vectors v_{1},...,v_{m} are linearly dependent if their linear combination equals the zero vector, with scalar coefficients α_{1},...,α_{m} not all zero.
$$ α_1 \cdot \vec{v}_1 + ... + α_m \cdot \vec{v}_m = \vec{0} $$
Assuming the scalar coefficient α_{1} is nonzero,
$$ α_1 \ne 0 $$
Highlighting vector v_{1} by moving all other vectors to the right side,
$$ α_1 \cdot \vec{v}_1 = -α_2 \cdot \vec{v}_2 - ... - α_m \cdot \vec{v}_m $$
Knowing α_{1} is nonzero, I divide both sides of the equation by α_{1}
$$ \left(\frac{1}{α_1}\right) \cdot α_1 \cdot \vec{v}_1 = \left(\frac{1}{α_1}\right) \cdot ( -α_2 \cdot \vec{v}_2 - ... - α_m \cdot \vec{v}_m ) $$
$$ \vec{v}_1 = -\frac{α_2}{α_1} \cdot \vec{v}_2 - ... - \frac{α_m}{α_1} \cdot \vec{v}_m $$
For simplicity, I represent the ratios between the scalars with the letter beta (β)
$$ β_2 = -\frac{α_2}{α_1} \\ β_3 = -\frac{α_3}{α_1} \\ \vdots \\ β_m = -\frac{α_m}{α_1} $$
And thus, I obtain the expression I wanted to demonstrate.
$$ \vec{v}_1 = β_2 \cdot \vec{v}_2 + ... + β_m \cdot \vec{v}_m $$
Note: The beta coefficients are the ratios $$ β_2 = -\frac{α_2}{α_1} $$ $$ ... $$ $$ β_m = -\frac{α_m}{α_1} $$
Now, I demonstrate the converse: if vector v_{1} is obtained from the linear combination of the remaining vectors,
$$ \vec{v}_1 = β_2 \cdot \vec{v}_2 + ... + β_m \cdot \vec{v}_m $$
then the vectors v_{1},v_{2},...,v_{m} are linearly dependent.
Indeed, moving v_{1} to the right side gives a linear combination equal to the zero vector: $$ -\vec{v}_1 + β_2 \cdot \vec{v}_2 + ... + β_m \cdot \vec{v}_m = \vec{0} $$
Even if, hypothetically, all the coefficients β_{2}=0, ..., β_{m}=0 were zero,
$$ -\vec{v}_1 + 0 \cdot \vec{v}_2 + ... + 0 \cdot \vec{v}_m = \vec{0} $$
the linear combination would reduce to
$$ -\vec{v}_1 = \vec{0} $$
This is still not a trivial linear combination, since the coefficient of vector v_{1} is nonzero (-1).
Therefore, a nontrivial linear combination equal to the zero vector exists, and the vectors v_{1},...,v_{m} are linearly dependent.
Corollary 1
If v_{1},...,v_{m} are linearly dependent through a nontrivial combination in which the coefficient of one of them (e.g., α_{m}) is zero, then the remaining vectors v_{1},...,v_{m-1} are also linearly dependent.
Proof
Start from a nontrivial linear combination equal to the zero vector, with α_{1} ≠ 0; the vectors are therefore linearly dependent.
$$ α_1 \cdot \vec{v}_1 + ... + α_m \cdot \vec{v}_m = 0 $$
If α_{m}=0, I can eliminate the last term of the linear combination.
$$ α_1 \cdot \vec{v}_1 + ... + α_{m1} \cdot \vec{v}_{m1} + 0 \cdot \vec{v}_m = 0 $$
$$ α_1 \cdot \vec{v}_1 + ... + α_{m1} \cdot \vec{v}_{m1} = 0 $$
Since α_{1} is nonzero by assumption, this shorter linear combination is also nontrivial, so the vectors v_{1},...,v_{m-1} are linearly dependent.
Corollary 2
If v_{1},...,v_{m} are linearly dependent vectors, then any larger set of vectors { v_{1},...,v_{m+k} } containing them is also linearly dependent: it suffices to set the extra coefficients α_{m+1}=0,...,α_{m+k}=0 to zero.
Proof
$$ α_1 \cdot \vec{v}_1 + ... + α_m \cdot \vec{v}_m + ( α_{m+1} \cdot \vec{v}_{m+1} + ... + α_{m+k} \cdot \vec{v}_{m+k} ) = 0 $$
$$ with \: α_{m+1} = 0 \; , \; .... \; , \; α_{m+k} = 0 $$
$$ α_1 \cdot \vec{v}_1 + ... + α_m \cdot \vec{v}_m + ( 0 \cdot \vec{v}_{m+1} + ... + 0\cdot \vec{v}_{m+k} ) = 0 $$
$$ α_1 \cdot \vec{v}_1 + ... + α_m \cdot \vec{v}_m = 0 $$
How to Determine Linear Dependence Using Matrix Rank
To check the linear dependence or independence of numerical vectors, an effective method is to examine the rank of the matrix formed by the vectors.
A set of n numerical vectors v_{1}, ..., v_{n} is linearly independent if the rank of the matrix M_{m,n}(R) having them as columns equals n.
$$ r_k = n $$
The column rank of a matrix M(R) is the maximum number of linearly independent columns, regarding each column as a numerical vector.
- If r_{k}=n, the columns are linearly independent
- If r_{k}<n, the columns are linearly dependent
When the columns are linearly dependent (r_{k} < n), it's still possible to find a subset of independent vectors by examining the nonzero minors of the matrix.
The vectors included in a minor with a nonzero determinant are independent.
Note. The same principle applies when using rows instead of columns, considering rows as numerical vectors. In a matrix M_{m,n}(R), the matrix rank coincides with both the row and column ranks.
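In practice this rank test is a one-liner in most linear algebra libraries; a sketch with NumPy (`np.linalg.matrix_rank` computes the rank via the SVD, and the helper name below is my own):

```python
import numpy as np

def are_independent(vectors):
    """Numerical vectors are independent iff the rank of the matrix
    having them as columns equals the number of vectors."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(are_independent([np.array([3.0, 1.0]), np.array([2.0, 5.0])]))  # True
print(are_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False
```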
A Practical Example
In a vector space V=R^{4} with field K=R, consider four vectors:
$$ v_1 = ( 1, 0, 1, 2 ) $$ $$ v_2 = ( 1, 3, 0, 0 ) $$ $$ v_3 = ( 1, -3, 2, 4 ) $$ $$ v_4 = ( 2, 3, 1, 2 ) $$
The linear combination of these vectors is as follows:
$$ α_1 \cdot v_1 + α_2 \cdot v_2 + α_3 \cdot v_3 + α_4 \cdot v_4 $$
$$ α_1 \cdot ( 1, 0, 1, 2 ) + α_2 \cdot ( 1, 3, 0, 0 ) + α_3 \cdot ( 1, -3, 2, 4 ) + α_4 \cdot ( 2, 3, 1, 2 ) $$
Arrange these numerical vectors as the rows of a 4x4 square matrix:
$$ M_{4,4} = \begin{bmatrix} 1 & 0 & 1 & 2 \\ 1 & 3 & 0 & 0 \\ 1 & -3 & 2 & 4 \\ 2 & 3 & 1 & 2 \\ \end{bmatrix} $$
Then, calculate the determinant of M_{4,4}.
$$ \det M = 0 $$
The matrix has a zero determinant, indicating a rank r_{k} less than 4.
To determine the matrix rank, use the bordered determinant theorem.
Take a second-order minor with a nonzero determinant; for example, the top-left 2x2 minor has determinant 1·3 - 0·1 = 3 ≠ 0.
This suggests that the matrix has a rank of at least 2.
To check if the matrix has a rank of 3, verify if at least one of the bordered minors has a nonzero determinant.
All bordered minors have a zero determinant.
Therefore, according to the bordered determinant theorem, the matrix does not have a rank of 3 or higher.
The matrix has a rank of 2.
Once the matrix rank is calculated, the dependence or independence of the vectors becomes clear.
The matrix has n=4 columns (one for each vector) but rank r_{k}=2.
$$ r_k \ne n $$
According to the theorem:
- If r_{k}=n, the vectors are linearly independent.
- If r_{k}<n, the vectors are linearly dependent.
Thus, the vectors v_{1}, v_{2}, v_{3}, v_{4} are linearly dependent.
And to find the independent vectors?
Simply look at the minors with nonzero determinant: the vectors included in a second-order minor with a nonzero determinant are independent of each other.
For example, vectors v_{1} and v_{2} are independent because they are included in a 2x2 minor with a nonzero determinant.
In this case, the matrix rank (2) matches the number of vectors (2) in the minor.
The same calculation can be applied to check the independence relationship between vectors v_{2} and v_{3}, v_{3} and v_{4}, v_{2} and v_{4}, v_{1} and v_{3}, v_{1} and v_{4}.
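The determinants used above can be reproduced numerically; a sketch with NumPy, arranging the four vectors as rows (with v3 = (1, -3, 2, 4)):

```python
import numpy as np

# Rows: v1, v2, v3, v4.
M = np.array([[1.0, 0.0, 1.0, 2.0],
              [1.0, 3.0, 0.0, 0.0],
              [1.0, -3.0, 2.0, 4.0],
              [2.0, 3.0, 1.0, 2.0]])

det_M = np.linalg.det(M)          # numerically ~0 -> rank below 4
minor = np.linalg.det(M[:2, :2])  # 2x2 minor built from v1, v2: nonzero
rank = np.linalg.matrix_rank(M)

print(abs(det_M) < 1e-9, round(minor), rank)  # True 3 2
```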
Gauss's Elimination Algorithm
The linear dependence of vectors can also be determined from the rank of the matrix, which can be computed with the Gauss–Jordan elimination algorithm.
The rank of a matrix equals the number of pivots in its row echelon form.
Example
In the previous exercise, the four vectors were arranged as the columns of a 4x4 matrix.
$$ M_{4,4} = \begin{bmatrix} 1 & 1 & 1 & 2 \\ 0 & 3 & -3 & 3 \\ 1 & 0 & 2 & 1 \\ 2 & 0 & 4 & 2 \\ \end{bmatrix} $$
Then, I transform the matrix into row echelon form using Gauss's rules.
$$ M_{4,4} \rightarrow \begin{bmatrix} 1 & 0 & 2 & 1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} $$
Note. To keep the explanation concise, I will not detail every step of reducing the matrix to row echelon form with the Gauss–Jordan algorithm.
The reduced matrix has two pivots, so its rank is two (r_{k} = 2).
Therefore, the vectors v_{1}, v_{2}, v_{3}, and v_{4} are linearly dependent because r_{k} < n.
The two vectors v_{1} and v_{2}, whose columns contain the pivots, are linearly independent of each other.
Note. The row echelon form is only used to calculate the rank. The original vectors remain unchanged, i.e., v_{1} = ( 1, 0, 1, 2 ), v_{2} = ( 1, 3, 0, 0 ), v_{3} = ( 1, -3, 2, 4 ), and v_{4} = ( 2, 3, 1, 2 ).
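The reduction can be reproduced with SymPy's `rref`, which also reports the pivot columns; a sketch, with the four vectors as columns (and v3 = (1, -3, 2, 4)):

```python
import sympy as sp

# Columns: v1, v2, v3, v4.
M = sp.Matrix([[1, 1, 1, 2],
               [0, 3, -3, 3],
               [1, 0, 2, 1],
               [2, 0, 4, 2]])

reduced, pivots = M.rref()
print(pivots)    # (0, 1): the pivots fall in the columns of v1 and v2
print(M.rank())  # 2 -> the four vectors are linearly dependent
```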
Theorems on the Linear Independence of Vectors

If a set of vectors {v_{1}, v_{2}, ..., v_{n}} are linearly independent, then none of these vectors is the zero vector $$ \vec{v}_1 \ne \vec{0} \\ \vec{v}_2 \ne \vec{0} \\ \vdots \\ \vec{v}_n \ne \vec{0} $$
In a finitely generated vector space V, consider a set of generators of V $$ \{ v_1, v_2, ..., v_n \} $$ and a set of linearly independent vectors in V $$ \{ w_1, w_2, ..., w_p \} $$. Then $$ p \le n $$
In essence, the number of linearly independent vectors in V is always less than or equal to the number of vectors in a set of generators for V.
Observations
Some useful notes on vector linear dependency and independence.
- The zero vector is linearly dependent
According to the definition, a vector is linearly independent if its linear combination equals the zero vector only with the trivial coefficient (α=0). $$ \alpha \cdot \vec{v} = \vec{0} $$ The zero vector, however, equals the zero vector even when the scalar coefficient α of the linear combination is nonzero (α≠0). $$ \alpha \cdot \vec{0} = \vec{0} $$ Therefore, the zero vector is always linearly dependent.
And so on.
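The zero-vector observation is easy to confirm numerically: any set containing the zero vector can never reach full rank (a sketch with NumPy):

```python
import numpy as np

zero = np.zeros(3)
v = np.array([1.0, 2.0, 3.0])

# A set containing the zero vector can never reach full rank.
M = np.column_stack([zero, v])
rank = np.linalg.matrix_rank(M)
print(rank)  # 1, below the 2 vectors -> the set is linearly dependent
```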