Intro
Most of the problems we encounter in our lives can be solved in multiple ways. That is not to say that we are equally satisfied with each of the different solutions.
Optimization is a branch of mathematics that aims not only to solve problems but also to find the best possible solution, given some constraints. The process often involves studying the solution spaces of these problems.
The technique has applications in a wide range of subjects, from determining how best to plan production in a factory to managing energy grid systems effectively.
Concept
For problems with multiple solutions, we often talk about their solution set. As the name suggests, it is simply the set of values satisfying the problem's equations or inequalities.
Take, as an example, a homogeneous linear equation:
$$a_1x_1 + a_2x_2 + \dots + a_nx_n = 0$$
The solution set contains all the combinations of values $(x_1, x_2, \dots, x_n)$ for which the equation holds.
A solution space is a set of solutions, subject to certain constraints
Since this set of solutions fulfills the following criteria:
The zero vector $\mathbf{0}$ is a solution.
The sum of any two solutions is also a solution.
Multiplying a solution by any scalar gives another solution.
then the solution set also qualifies as a solution space.
Math
Solving homogeneous systems of linear equations is a special type of problem that we can write in the form:
$$A\mathbf{x} = \mathbf{0}$$
We can use the Gauss-Jordan method to find the solution to such a system. If the set we find meets the previously mentioned requirements, it is a solution space.
Furthermore, the solution space to a homogeneous linear system is a subspace of $\mathbb{R}^n$.
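As a small computational sketch of this (the matrix below is an arbitrary example, not one taken from the text), SymPy can perform the Gauss-Jordan elimination and return a basis for the solution space of $A\mathbf{x} = \mathbf{0}$:

```python
from sympy import Matrix

# An arbitrary example matrix for the homogeneous system A x = 0
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# Gauss-Jordan elimination: reduced row echelon form and pivot columns
rref, pivots = A.rref()
print(rref, pivots)

# A basis for the solution space (null space) of A x = 0
print(A.nullspace())
```

Each basis vector returned by nullspace() satisfies the criteria above, and every linear combination of them is again a solution.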
Subspace
For a collection of vectors $W$, being closed under addition and scalar multiplication means that:
if $\mathbf{u}$ and $\mathbf{v}$ are in $W$, then $\mathbf{u} + \mathbf{v}$ is in $W$ (closed under addition)
if $\mathbf{u}$ is in $W$, then $c\mathbf{u}$ is in $W$ for all scalars $c$ (closed under scalar multiplication)
it follows from the two points above that $\mathbf{0}$ must be in $W$
This leads us to the definition of a subspace:
A set $W$ of vectors in $\mathbb{R}^n$ is called a subspace of $\mathbb{R}^n$ if it is closed under addition and closed under scalar multiplication.
Note that because a subspace is closed under addition and scalar multiplication, it is also closed under all linear combinations. In addition, the zero vector $\mathbf{0}$ must belong to the space $W$ for it to be a subspace of $\mathbb{R}^n$. Also note that regardless of the dimension of $\mathbb{R}^n$, there are always two subspaces called the trivial subspaces, namely $\{\mathbf{0}\}$ and $\mathbb{R}^n$ itself. That $\mathbb{R}^n$ meets the definition usually follows naturally, but that $\{\mathbf{0}\}$ is always a subspace we now show:
$$\mathbf{0} + \mathbf{0} = \mathbf{0} \quad \text{(closed under addition)}$$
$$c\,\mathbf{0} = \mathbf{0} \text{ for all scalars } c \quad \text{(closed under scalar multiplication)}$$
As an example, we take a plane $W$ through the origin in $\mathbb{R}^3$. Let its parametric form be:
$$\mathbf{x} = s\,\mathbf{v}_1 + t\,\mathbf{v}_2, \qquad s, t \in \mathbb{R}$$
where $\mathbf{v}_1$ and $\mathbf{v}_2$ are direction vectors of the plane.
Then $W$ is a subspace of $\mathbb{R}^3$ because:
The zero vector $\mathbf{0}$ lies in $W$
if $\mathbf{u}$ and $\mathbf{w}$ are in $W$, then $\mathbf{u} + \mathbf{w}$ is also in $W$ (closed under addition)
if $\mathbf{u}$ is in $W$, then $c\mathbf{u}$ is also in $W$ for all scalars $c$ (closed under scalar multiplication)
We now show the above three statements.
$\mathbf{0}$ exists in $W$ because $\mathbf{0} = 0\,\mathbf{v}_1 + 0\,\mathbf{v}_2$, which follows the parametric form with $s = t = 0$.
Let $\mathbf{u}$ and $\mathbf{w}$ be in $W$. This means that there are scalars $s_1$, $t_1$, $s_2$ and $t_2$ that satisfy the parametric form, so that $\mathbf{u} = s_1\mathbf{v}_1 + t_1\mathbf{v}_2$ and $\mathbf{w} = s_2\mathbf{v}_1 + t_2\mathbf{v}_2$. Then we have:
$$\mathbf{u} + \mathbf{w} = (s_1 + s_2)\,\mathbf{v}_1 + (t_1 + t_2)\,\mathbf{v}_2$$
which follows the parametric form for $W$, and thus $\mathbf{u} + \mathbf{w}$ is also in $W$.
Let $\mathbf{u}$ be in $W$. Then there are scalars $s$ and $t$ that satisfy the parametric form, so that $\mathbf{u} = s\mathbf{v}_1 + t\mathbf{v}_2$. We then have:
$$c\mathbf{u} = (cs)\,\mathbf{v}_1 + (ct)\,\mathbf{v}_2$$
which follows the parametric form for $W$, and thus $c\mathbf{u}$ exists in $W$.
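As a numerical sanity check of these closure properties, here is a minimal sketch in NumPy; the direction vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are made up for the illustration:

```python
import numpy as np

# Hypothetical direction vectors spanning a plane W through the origin in R^3
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
D = np.column_stack([v1, v2])

def in_plane(x):
    """Check whether x can be written as s*v1 + t*v2, i.e. whether x lies in W."""
    coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.allclose(D @ coeffs, x)

u = 2.0 * v1 + 3.0 * v2    # a vector in W
w = -1.0 * v1 + 0.5 * v2   # another vector in W

print(in_plane(u + w))     # closed under addition -> True
print(in_plane(4.2 * u))   # closed under scalar multiplication -> True
```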
All subspaces of $\mathbb{R}^2$ fall into one of three categories:
Zero vector subspace
Lines passing through the origin
All of $\mathbb{R}^2$
All subspaces of $\mathbb{R}^3$ fall into one of four categories:
Zero vector subspace
Lines passing through the origin
Planes passing through the origin
All of $\mathbb{R}^3$
Note, therefore, that all lines and planes that do not cross the origin are not subspaces, but they are translated subspaces and are usually called linear manifolds. Let $M$ be such a translated subspace, while $W$ is a subspace of $\mathbb{R}^n$. Then there exists a vector $\mathbf{p}$ such that $M$ can be expressed as:
$$M = \mathbf{p} + W = \{\mathbf{p} + \mathbf{w} : \mathbf{w} \in W\}$$
Recall the general parametric forms for a line and a plane, where a fixed point is used; that fixed point corresponds to the vector $\mathbf{p}$ above.
Span
The span of a set of vectors is the subspace consisting of all possible linear combinations of those vectors, and it is written as:
$$\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$$
This means that for every vector $\mathbf{u}$ in this span, there is a set of scalars $c_1, c_2, \dots, c_k$ such that:
$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_k\mathbf{v}_k$$
That is, there is a linear combination of the $\mathbf{v}$-vectors that expresses $\mathbf{u}$. Furthermore, all vectors in the span are linearly dependent on $\mathbf{v}_1, \dots, \mathbf{v}_k$.
As an example, we take a plane $W$ through the origin in $\mathbb{R}^3$. Let the parametric form be:
$$\mathbf{x} = s\,\mathbf{v}_1 + t\,\mathbf{v}_2, \qquad s, t \in \mathbb{R}$$
Then $W$ is a subspace of $\mathbb{R}^3$, and we can write:
$$W = \operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2\}$$
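To test whether a given vector lies in such a span, one can try to solve for the scalars in the linear combination. A small sketch with made-up vectors (the values are assumptions for illustration only):

```python
from sympy import Matrix, symbols, linsolve

# Hypothetical spanning vectors of a plane W in R^3, and a candidate vector u
v1 = Matrix([1, 0, 2])
v2 = Matrix([0, 1, -1])
u  = Matrix([3, -2, 8])

c1, c2 = symbols('c1 c2')
# Solve c1*v1 + c2*v2 = u; a non-empty solution set means u lies in span{v1, v2}
print(linsolve((Matrix.hstack(v1, v2), u), [c1, c2]))   # {(3, -2)}
```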
Solution space
A solution space is what we call the collection of points that solve the linear system of equations
$$A\mathbf{x} = \mathbf{b}$$
for our $m \times n$ matrix $A$. For the special case of a homogeneous linear system of equations, meaning the right-hand side is $\mathbf{0}$, our solution space satisfies the definition of a subspace of $\mathbb{R}^n$. Let's say that we have a homogeneous system $A\mathbf{x} = \mathbf{0}$ whose solutions can be expressed in parametric form as:
$$\mathbf{x} = t_1\mathbf{v}_1 + t_2\mathbf{v}_2 + \dots + t_k\mathbf{v}_k$$
This solution set equals $\operatorname{span}\{\mathbf{v}_1, \dots, \mathbf{v}_k\}$, which is also a subspace of $\mathbb{R}^n$. In addition, this set of solutions contains only the trivial solution $\mathbf{x} = \mathbf{0}$ if, and only if, the column vectors of $A$ are linearly independent.
For non-homogeneous systems $A\mathbf{x} = \mathbf{b}$, that is to say the right-hand side is non-zero, the solution spaces are not subspaces of $\mathbb{R}^n$. They are translated subspaces, so-called linear manifolds, and can always be related to their associated homogeneous system $A\mathbf{x} = \mathbf{0}$. The two are related as follows:
$$\{\mathbf{x} : A\mathbf{x} = \mathbf{b}\} = \{\mathbf{x}_p + \mathbf{x}_h : A\mathbf{x}_h = \mathbf{0}\}$$
where $\mathbf{x}_p$ is a particular solution to $A\mathbf{x} = \mathbf{b}$. Note that the two solution spaces above only differ by a single element, namely the particular solution $\mathbf{x}_p$.
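A sketch of this relationship in SymPy; the system below is an arbitrary example chosen only to illustrate the translated-subspace structure:

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')

# Arbitrary consistent non-homogeneous system A x = b
A = Matrix([[1, 1, 1],
            [1, 2, 3]])
b = Matrix([6, 14])

# General solution of A x = b: a particular solution translated by the
# solution space of the associated homogeneous system A x = 0
print(linsolve((A, b), [x1, x2, x3]))   # {(x3 - 2, 8 - 2*x3, x3)}
print(A.nullspace())                    # basis of the subspace being translated
```

Setting the free variable to 0 gives one particular solution $\mathbf{x}_p$; varying it traces out the translated copy of the homogeneous solution space.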
Geometric interpretation
The solution set for each linear system of equations falls into one, and only one, of three cases:
a (unique) point
infinitely many points, or
no points
where the term point would more often be expressed as a solution, but has been chosen deliberately to make a point in this section. The first and last cases are not very interesting from a geometric standpoint, since they consist of one or no points. The middle case, however, may sound chaotic, but the points are not scattered randomly in space. On the contrary, they are structured beautifully into shapes with names such as line, plane and hyperplane, depending on the shape's dimension.
A single point
If the solution set to the equation
$$A\mathbf{x} = \mathbf{b}$$
consists of a single point, this means that the inverse $A^{-1}$ exists, and the solution can therefore be expressed as:
$$\mathbf{x} = A^{-1}\mathbf{b}$$
The dimension of the solution space is thus 0.
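A minimal sketch of this case, assuming a square, invertible coefficient matrix (the numbers below are made up):

```python
import numpy as np

# Arbitrary invertible 3x3 matrix and right-hand side
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])

# The unique solution x = A^{-1} b; np.linalg.solve avoids forming A^{-1} explicitly
x = np.linalg.solve(A, b)
print(x, np.allclose(A @ x, b))
```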
Infinitely many points
In this case, the inverse of $A$ does not exist. While it may sound chaotic, the points are never randomly scattered in space, but always follow a beautiful shape. This case is the most interesting because geometric interpretations can be made, and it can in turn be divided into three sub-cases:
a line
a plane
a hyperplane
Line
Since the solution set is a line, the points lie along a single direction, e.g. $\mathbf{v}$, and its dimension is 1. We can then say that the parametric form of the line, and thus of the solution set, is:
$$\mathbf{x} = \mathbf{p} + t\,\mathbf{v}, \qquad t \in \mathbb{R}$$
where $\mathbf{p}$ is the contribution to the translated subspace, and $t\,\mathbf{v}$ is the solution set to the associated homogeneous system $A\mathbf{x} = \mathbf{0}$. If $\mathbf{p} = \mathbf{0}$, the line intersects the origin and the solution set can be expressed as:
$$\operatorname{span}\{\mathbf{v}\}$$
Plane
When the solution set is a plane, the solution set spreads in two directions and its dimension is 2. The parametric form is written as:
$$\mathbf{x} = \mathbf{p} + s\,\mathbf{v}_1 + t\,\mathbf{v}_2, \qquad s, t \in \mathbb{R}$$
If $\mathbf{p} = \mathbf{0}$, the plane intersects the origin, and the solution set can be expressed, as for the line, as:
$$\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2\}$$
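For a concrete sketch, a homogeneous system whose coefficient matrix has rank 1 in $\mathbb{R}^3$ has a two-dimensional solution space, i.e. a plane through the origin (the matrix is an arbitrary example):

```python
from sympy import Matrix

# Rank-1 example matrix: the second row is a multiple of the first
A = Matrix([[1, 2, -1],
            [2, 4, -2]])

# The solution space of A x = 0 has dimension 3 - rank(A) = 2: a plane through the origin
basis = A.nullspace()
print(len(basis))   # 2
print(basis)        # two direction vectors spanning the plane
```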
Hyperplane
A hyperplane is the general expression for an equation in $\mathbb{R}^n$ of the format:
$$a_1x_1 + a_2x_2 + \dots + a_nx_n = b$$
and its dimension is $n - 1$. This means that the dimension of the hyperplane depends on the space $\mathbb{R}^n$, and it has special names for $n = 2$ and $n = 3$, namely line and plane respectively. The hyperplane hence has no special name for $n > 3$. If $b = 0$, the hyperplane crosses the origin, and its solution set can be expressed as:
$$\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_{n-1}\}$$
where $\mathbf{v}_1, \dots, \mathbf{v}_{n-1}$ are direction vectors for the hyperplane.
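For example, a single homogeneous equation in $\mathbb{R}^4$ defines a hyperplane of dimension 3; a small sketch with an arbitrary coefficient vector:

```python
from sympy import Matrix

# One homogeneous equation in R^4: x1 + 2*x2 - x3 + 3*x4 = 0
A = Matrix([[1, 2, -1, 3]])

# Its solution set is a hyperplane through the origin of dimension 4 - 1 = 3
directions = A.nullspace()
print(len(directions))   # 3 direction vectors span the hyperplane
```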
No points
The last case is that no points solve the equation $A\mathbf{x} = \mathbf{b}$. This means that the solution set is empty, or as we mathematicians say, the empty set, which we denote as:
$$\emptyset$$
Summary for $\mathbb{R}^3$
For each consistent, homogeneous linear system of equations in $\mathbb{R}^3$, the following applies for the solution space $W$:
If $\dim(W)$ is not defined, then $W$ is nothing, the empty set.
If $\dim(W) = 0$, then $W$ is a point, the origin.
If $\dim(W) = 1$, then $W$ is a line through the origin.
If $\dim(W) = 2$, then $W$ is a plane through the origin.
If $\dim(W) = 3$, then $W$ is all of $\mathbb{R}^3$.
Solution space theorems
If $A\mathbf{x} = \mathbf{0}$ is a homogeneous system of linear equations with $n$ unknowns, then its solution set is a subspace of $\mathbb{R}^n$.
The proof follows by checking the three requirements for a subspace: the set includes the zero vector, is closed under addition, and is closed under scalar multiplication.
The zero vector $\mathbf{0}$ obviously satisfies the equation, since $A\mathbf{0} = \mathbf{0}$, and is therefore part of the solution set.
Let $\mathbf{u}$ and $\mathbf{v}$ be solutions to the system. Then it holds that:
$$A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = \mathbf{0} + \mathbf{0} = \mathbf{0}$$
thus the solution set is closed under addition.
Let $c\mathbf{u}$ be a scalar multiple of the solution $\mathbf{u}$. Then it holds that:
$$A(c\mathbf{u}) = c\,A\mathbf{u} = c\,\mathbf{0} = \mathbf{0}$$
for all scalars $c$. Thus, the solution set is closed under scalar multiplication.
If $A\mathbf{x} = \mathbf{b}$ is a consistent, non-homogeneous, linear system of equations, then let $W$ be the solution set to the associated homogeneous system $A\mathbf{x} = \mathbf{0}$. Then the solution space of $A\mathbf{x} = \mathbf{b}$ is the translated subspace
$$\mathbf{x}_p + W = \{\mathbf{x}_p + \mathbf{w} : \mathbf{w} \in W\}$$
where $\mathbf{x}_p$ is an arbitrary solution to $A\mathbf{x} = \mathbf{b}$.
First we prove that if $\mathbf{x}$ is a vector in $\mathbf{x}_p + W$, then it is also a solution to $A\mathbf{x} = \mathbf{b}$. Then we prove the reverse, that each solution to $A\mathbf{x} = \mathbf{b}$ belongs to the set $\mathbf{x}_p + W$.
Let $\mathbf{x}$ be a vector in $\mathbf{x}_p + W$ and $\mathbf{w}$ a vector in $W$, which means we can express:
$$\mathbf{x} = \mathbf{x}_p + \mathbf{w}$$
where $\mathbf{w}$ belongs to $W$. We then have:
$$A\mathbf{x} = A(\mathbf{x}_p + \mathbf{w}) = A\mathbf{x}_p + A\mathbf{w} = \mathbf{b} + \mathbf{0} = \mathbf{b}$$
which shows that $\mathbf{x}$ is indeed a solution to $A\mathbf{x} = \mathbf{b}$.
Now let $\mathbf{x}$ be an arbitrary solution to $A\mathbf{x} = \mathbf{b}$. Then we have:
$$A(\mathbf{x} - \mathbf{x}_p) = A\mathbf{x} - A\mathbf{x}_p = \mathbf{b} - \mathbf{b} = \mathbf{0}$$
which shows that $\mathbf{x} - \mathbf{x}_p$ belongs to $W$, and thus $\mathbf{x} = \mathbf{x}_p + (\mathbf{x} - \mathbf{x}_p)$ belongs to the set $\mathbf{x}_p + W$.
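A quick numerical check of the theorem with NumPy; the matrix, right-hand side, particular solution and homogeneous solution below are made-up values for the sketch:

```python
import numpy as np

# Arbitrary consistent system A x = b with a non-trivial homogeneous solution
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([6.0, 14.0])

x_p = np.array([-2.0, 8.0, 0.0])   # a particular solution: A @ x_p == b
w   = np.array([1.0, -2.0, 1.0])   # a homogeneous solution: A @ w == 0

# Every vector in x_p + W solves A x = b
for t in (0.0, 1.0, -3.5):
    x = x_p + t * w
    print(np.allclose(A @ x, b))   # True in each case
```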
More useful theorems on the topic are:
A general solution to a consistent linear system of equations $A\mathbf{x} = \mathbf{b}$ can be constructed by adding a particular solution to $A\mathbf{x} = \mathbf{b}$ to the general solution of its associated homogeneous system, $A\mathbf{x} = \mathbf{0}$.
If $A$ is an $m \times n$ matrix, then the following statements are equivalent:
$A\mathbf{x} = \mathbf{0}$ has only the trivial solution
$A\mathbf{x} = \mathbf{b}$ has either one solution or no solutions for each $\mathbf{b}$ in $\mathbb{R}^m$
If $A$ is an $m \times n$ matrix, then the solution space (solution set) of the homogeneous system $A\mathbf{x} = \mathbf{0}$ consists of all vectors in $\mathbb{R}^n$ that are orthogonal to each row vector of the matrix $A$.
The reasoning behind this theorem is that the system $A\mathbf{x} = \mathbf{0}$ can be expanded into:
$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = 0 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = 0 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = 0 \end{cases}$$
where each row equation can be considered a hyperplane in $\mathbb{R}^n$, and the system's solution space can be considered the intersection of all these hyperplanes. A simplification of the above system is to consider the coefficients of each row equation as a row vector $\mathbf{r}_i$ multiplied by the variable vector $\mathbf{x}$. We then get:
$$\mathbf{r}_i \cdot \mathbf{x} = 0, \qquad i = 1, 2, \dots, m$$
which means that the scalar product between $\mathbf{r}_i$ and $\mathbf{x}$ is 0, and thus the vectors are orthogonal to each other.
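A small check of this orthogonality with SymPy (the matrix is an arbitrary example):

```python
from sympy import Matrix

# Arbitrary example matrix for the homogeneous system A x = 0
A = Matrix([[1, 2, -1],
            [0, 1,  1]])

# Every vector in the solution space is orthogonal to every row of A:
# all scalar products below evaluate to 0
for x in A.nullspace():
    print([A.row(i).dot(x) for i in range(A.rows)])   # [0, 0]
```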