\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Needs:
Linear Combinations
Needed by:
Matroids
Vector Space Bases
Links:
Sheet PDF
Graph PDF

Linearly Dependent Vectors

Why

We want to find a set of vectors so that there exists a unique linear combination of these vectors corresponding to each vector in the space.

Definition

First, we introduce terminology for when it is possible to write a vector as a linear combination of a set of vectors. A vector $x$ is linearly dependent on a set of vectors if $x$ is the result of some linear combination of them. For example, $x$ is linearly dependent on $\set{x}$.

Naturally, the interesting case is when the vector is not contained in the set it is dependent on. In this case, we imagine constructing it from those vectors in the set. For example, the vector $y = 2x$ is linearly dependent on the set $\set{x}$.
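Whether a given vector is linearly dependent on a set can be checked numerically by solving a least-squares system. The following is a minimal sketch assuming NumPy; the helper name `is_dependent_on` and the tolerance are our own choices, not part of the definition.

```python
import numpy as np

def is_dependent_on(target, vectors, tol=1e-10):
    """Return True if `target` is (numerically) a linear combination of `vectors`."""
    A = np.column_stack(vectors)                      # columns are the given vectors
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    # `target` is dependent on `vectors` exactly when the residual vanishes.
    return np.linalg.norm(A @ coeffs - target) < tol

x = np.array([1.0, 2.0])
print(is_dependent_on(2 * x, [x]))                    # True: y = 2x depends on {x}
print(is_dependent_on(np.array([0.0, 1.0]), [x]))     # False: not a multiple of x
```

The least-squares solution finds the combination of the columns closest to the target; a zero residual means the target is exactly such a combination.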

A set of vectors is a linearly dependent set if one element is linearly dependent on the set excluding it. The idea is that if a set is linearly dependent, we can drop one of its elements and still “build” that vector out of the rest. More precisely, suppose that a vector is linearly dependent on this set, so it can be written as a linear combination of the set. If the set is itself linearly dependent, then one of its elements can be expressed as a linear combination of the others, and substituting this expression shows that the original vector is a linear combination of the set without the dependent element. The upshot is that we can shrink the set of vectors and still represent that vector.

We feel that the above definitions better capture the intuition. The usual definition of linear dependence is given in the following proposition.

A sequence of vectors is linearly dependent if and only if there exists a nontrivial linear combination of the sequence whose result is the zero vector.
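This characterization can be tested numerically: a finite sequence is linearly dependent exactly when the matrix whose columns are those vectors has rank strictly less than the number of columns, since a rank deficiency means some nontrivial combination of the columns is the zero vector. A minimal sketch assuming NumPy (the helper name is ours):

```python
import numpy as np

def is_linearly_dependent(vectors):
    """Return True if the finite sequence `vectors` is linearly dependent."""
    A = np.column_stack(vectors)
    # Rank < number of columns means the columns admit a nontrivial
    # combination equal to the zero vector.
    return np.linalg.matrix_rank(A) < len(vectors)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = u + 2 * v                              # w is a combination of u and v
print(is_linearly_dependent([u, v, w]))    # True
print(is_linearly_dependent([u, v]))       # False
```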

A finite sequence of vectors is linearly dependent if one of its terms is identical to a nontrivial linear combination of the others.

Two vectors, then, are linearly dependent if one is a scalar multiple of the other. The sequence $(\mathbfsf{0})$ is linearly dependent since $a_1 \mathbfsf{0} = \mathbfsf{0}$ for any nonzero scalar $a_1$.

An infinite sequence of vectors is linearly dependent if some finite subsequence is dependent. It is linearly independent if every finite subsequence is linearly independent.

Any sequence (finite or infinite) which contains the zero vector is linearly dependent: assigning coefficient $1$ to the zero vector and $0$ to every other term gives a nontrivial linear combination whose result is the zero vector.

If a finite sequence of vectors is linearly independent, then its terms are distinct.

A set of vectors is linearly independent if every sequence of distinct vectors in the set is linearly independent. The set is linearly dependent if some finite sequence of distinct vectors is linearly dependent.

Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc