Intro
Red cars are over-represented in traffic accident statistics, so why isn't the insurance cost of a red car higher than that of other-colored cars of the same model?
When digging a bit deeper, we find that it is not the color red itself that is a risk factor on the roads, but the color is linked with other features that are.
Red is a common color of choice for sports cars, which tend to have powerful engines, male drivers, and high price tags.
The cost of insurance depends linearly on the car's value and the risk of crashing, meaning that if either goes up, the insurance fee rises by a proportional amount.
The probability of a car being red and its insurance cost thus depend on the same underlying factors, but do not directly affect each other.
Concept
Let's say that a set of vectors is given to you and your task is to form a path that starts and ends at the origin of a coordinate system by placing vectors after each other.
You can use any number of vectors from the set, but each vector at most once, and when using a vector you may stretch, squeeze or flip it (in other words, scale it by any non-zero number), but not change its direction in any other way.
If this is possible, the set of vectors you were given is said to be linearly dependent. In contrast, if every time you try to arrange the vectors to take you back to the origin, you end up going in a direction you cannot return from, they are linearly independent.
The set will always be linearly dependent if the number of vectors is greater than their dimension.
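As a small illustration (with numbers chosen here purely for the example), take the three vectors $\vec{v}_1 = (1, 0)$, $\vec{v}_2 = (0, 1)$ and $\vec{v}_3 = (1, 1)$ in the plane. Walking along $\vec{v}_1$, then $\vec{v}_2$, and finally along $\vec{v}_3$ flipped takes us back to the origin:

$$\vec{v}_1 + \vec{v}_2 - \vec{v}_3 = \vec{0}$$

so the set is linearly dependent, just as expected for three vectors in only two dimensions.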
Math
If the set of vectors

$$\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n\}$$

is linearly independent, then none of the vectors can be expressed as a linear combination of the others.
That means that the vector equation

$$x_1\vec{v}_1 + x_2\vec{v}_2 + \dots + x_n\vec{v}_n = \vec{0}$$

only has the trivial solution:

$$x_1 = x_2 = \dots = x_n = 0$$
We can check whether this is the case by arranging the vectors as columns of a matrix $A$.
Solving the matrix equation:

$$A\vec{x} = \vec{0}$$

will then either reveal a dependence relation among the vectors, or verify their independence.
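As a minimal sketch of this check, assuming NumPy is available (the vectors below are chosen only for illustration), one can compare the rank of $A$ with its number of columns:

```python
import numpy as np

# Example vectors; v3 = v1 + v2, so the set is dependent.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 4.0])
v3 = np.array([1.0, 3.0, 7.0])

# Arrange the vectors as columns of a matrix A.
A = np.column_stack([v1, v2, v3])

# The columns are linearly independent exactly when the rank of A
# equals the number of columns, i.e. Ax = 0 has only the trivial solution.
if np.linalg.matrix_rank(A) == A.shape[1]:
    print("linearly independent")
else:
    print("linearly dependent")
```

Here the script prints "linearly dependent", since the rank is 2 while $A$ has 3 columns.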
Linear combination
A general expression for a linear combination is:

$$\vec{w} = c_1\vec{v}_1 + c_2\vec{v}_2 + \dots + c_n\vec{v}_n$$

that is, a sum of a series of vectors $\vec{v}_i$, each weighted by a scalar $c_i$. Two examples are the parametric forms for the line and the plane in $\mathbb{R}^3$:

$$\vec{r} = \vec{r}_0 + t\vec{v}$$

$$\vec{r} = \vec{r}_0 + s\vec{u} + t\vec{v}$$
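Computing a linear combination is just scaling and adding. A small sketch, with illustrative values and NumPy assumed:

```python
import numpy as np

# Two vectors in R^3 (illustrative values).
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])

# The linear combination w = c1*u + c2*v.
c1, c2 = 3.0, -2.0
w = c1 * u + c2 * v
print(w)  # [ 3. -2.  4.]
```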
Linear dependence
Linear dependence means that a collection of vectors can be expressed using each other in a linear combination. In the sections on the equation of a line and the equation of a plane, we write about their respective parametric forms $\vec{r} = \vec{r}_0 + t\vec{v}$ and $\vec{r} = \vec{r}_0 + s\vec{u} + t\vec{v}$. If these expressions are generalized, we see that a vector $\vec{w}$ in $\mathbb{R}^n$ can be expressed as:

$$\vec{w} = c_1\vec{v}_1 + c_2\vec{v}_2 + \dots + c_k\vec{v}_k$$

where $\vec{v}_1, \dots, \vec{v}_k$ are vectors in $\mathbb{R}^n$. $\vec{w}$ is then called a linear combination of the $\vec{v}$-vectors, and is linearly dependent on them. If all such vectors $\vec{w}$ are considered as a collection, they form a subspace of $\mathbb{R}^n$.
The above equation can be set up as a system of linear equations:

$$\begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_k \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_k \end{bmatrix} = \vec{w}$$

and then solved with the Gauss-Jordan method (a numerical sketch of this check follows the definition below). If a solution $c_1, c_2, \dots, c_k$ exists, it means that $\vec{w}$ can be expressed as a linear combination of the $\vec{v}$-vectors, and thus the collection of the $\vec{v}$-vectors and $\vec{w}$ is linearly dependent. If there is no such solution, then the vectors are linearly independent. Note that if $\vec{w}$ is linearly dependent on $\vec{u}$ and $\vec{v}$, then $\vec{u}$ is also linearly dependent on $\vec{w}$ and $\vec{v}$. This leads us to the formal definition:
A set of vectors $\{\vec{v}_1, \vec{v}_2, \dots, \vec{v}_k\}$ in $\mathbb{R}^n$ is said to be linearly independent if the only scalars that solve the equation:

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \dots + c_k\vec{v}_k = \vec{0}$$

are $c_1 = c_2 = \dots = c_k = 0$, that is, the trivial solution. If there is any non-zero scalar $c_i$ that solves the equation, the set of vectors is said to be linearly dependent.
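Here is a sketch of the Gauss-Jordan check described above, using SymPy for exact row reduction (the vectors are illustrative):

```python
import sympy as sp

# Is w a linear combination of v1 and v2? (Illustrative vectors in R^3.)
v1 = sp.Matrix([1, 0, 2])
v2 = sp.Matrix([0, 1, 1])
w  = sp.Matrix([3, -2, 4])

# Row reduce the augmented matrix [v1 v2 | w] with Gauss-Jordan.
augmented = sp.Matrix.hstack(v1, v2, w)
rref, pivots = augmented.rref()

# No pivot in the last column means the system is consistent:
# w is then linearly dependent on v1 and v2.
print(rref)    # first two rows read c1 = 3, c2 = -2
print(pivots)  # (0, 1) -> consistent, so w = 3*v1 - 2*v2
```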
One conclusion is that any set of more than $n$ vectors in $\mathbb{R}^n$ is always linearly dependent. The reason is that the equation above then has more variables than equations, so row reduction leaves at least one free variable, which gives non-trivial solutions.
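This can be seen directly by computing the null space; a minimal sketch with SymPy and three illustrative vectors in $\mathbb{R}^2$:

```python
import sympy as sp

# Three vectors in R^2 as columns: more vectors than dimensions.
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])

# A non-empty null space means A*c = 0 has non-trivial solutions,
# so the columns are linearly dependent.
print(A.nullspace())  # [Matrix([[-2], [-3], [1]])]
```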