9. Euclidean affine spaces

Part 1: Problems, solutions.
Part 2: Problems, solutions.

Perpendicular projections and reflections with respect to affine subspaces

How to calculate the projection of a vector onto an affine subspace, or its image under reflection with respect to such a subspace? Simply reduce the problem to the known case of linear subspaces: translate everything so that the affine subspace in question passes through zero, calculate the projection (or reflection) there, and translate back to the initial setting.

E.g. let us calculate the projection of (2,2,1) onto (2,1,0)+lin((-1,-1,0)). First we calculate the projection of (2,2,1)-(2,1,0)=(0,1,1) onto lin((-1,-1,0)):

    \[\frac{\langle (0,1,1),(-1,-1,0)\rangle}{\langle (-1,-1,0),(-1,-1,0)\rangle}\cdot(-1,-1,0)=\frac{-1}{2}\cdot(-1,-1,0)=\left(\frac{1}{2},\frac{1}{2},0\right).\]

Then we translate this projection back, so the final result is \left(\frac{1}{2},\frac{1}{2},0\right)+(2,1,0)=\left(\frac{5}{2},\frac{3}{2},0\right).
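This translate-project-translate-back recipe is easy to automate. Below is a minimal numpy sketch (the function name `affine_projection` is my own, not from the text), checked on the example above:

```python
import numpy as np

def affine_projection(v, p, directions):
    """Project v onto the affine subspace p + lin(directions).

    Translate by -p, project onto the linear span (via least squares),
    then translate back by +p.
    """
    A = np.column_stack(directions)                    # span vectors as columns
    coeffs, *_ = np.linalg.lstsq(A, v - p, rcond=None) # best coefficients in the span
    return p + A @ coeffs

# The example from the text: project (2,2,1) onto (2,1,0) + lin((-1,-1,0)).
result = affine_projection(np.array([2.0, 2.0, 1.0]),
                           np.array([2.0, 1.0, 0.0]),
                           [np.array([-1.0, -1.0, 0.0])])
# result is (5/2, 3/2, 0), as calculated by hand above
```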


We can define the distance between two points p,q as d(p,q)=\|p-q\|.

Notice that d(p,q)=0 iff p=q, that d(p,q)=d(q,p), and that for all points p,q,r we have d(p,r)\leq d(p,q)+d(q,r) (the triangle inequality).
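These three metric axioms can be spot-checked numerically; a minimal numpy sketch (the sample points are chosen arbitrarily):

```python
import numpy as np

def d(p, q):
    """Distance between points p and q: the norm of p - q."""
    return np.linalg.norm(np.asarray(p) - np.asarray(q))

# Spot-check the metric axioms on sample points.
p = np.array([1.0, 0.0, 2.0])
q = np.array([0.0, 3.0, 1.0])
r = np.array([2.0, 2.0, 2.0])

symmetric = np.isclose(d(p, q), d(q, p))      # d(p,q) = d(q,p)
triangle = d(p, r) <= d(p, q) + d(q, r)       # triangle inequality
```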

Orthogonal basic system

A basic system p;v_1,\ldots, v_n of an affine space H is called orthogonal (respectively, orthonormal) if v_1,\ldots, v_n is an orthogonal (respectively, orthonormal) basis of T(H).

Distance between a point and an affine subspace.

The distance between a point p and an affine subspace H, denoted d(p,H), is the distance between p and its perpendicular projection onto H.
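Combining the projection recipe from above with this definition gives a direct way to compute d(p,H); a numpy sketch (the function name is mine), reusing the subspace from the projection example:

```python
import numpy as np

def dist_point_subspace(p, q, directions):
    """d(p, H) for H = q + lin(directions): the distance from p to its
    perpendicular projection onto H."""
    A = np.column_stack(directions)                    # span vectors as columns
    coeffs, *_ = np.linalg.lstsq(A, p - q, rcond=None)
    projection = q + A @ coeffs                        # perpendicular projection of p onto H
    return np.linalg.norm(p - projection)

# Distance from (2,2,1) to (2,1,0) + lin((-1,-1,0)): the projection is (5/2, 3/2, 0),
# so the distance is ||(2,2,1) - (5/2,3/2,0)|| = sqrt(3/2).
dist = dist_point_subspace(np.array([2.0, 2.0, 1.0]),
                           np.array([2.0, 1.0, 0.0]),
                           [np.array([-1.0, -1.0, 0.0])])
```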

Parallelepipeds and simplexes

Let p_0\in H and let v_1,\ldots, v_k\in T(H) be linearly independent. Then the set

    \[R(p_0;v_1,\ldots, v_k)=\left\{p_0+\sum_{i=1}^k a_iv_i\colon a_i\in [0,1]\right\}\]

is called a k-parallelepiped given by this system.

Given an affinely independent system of points p_0,\ldots, p_k\in H, the set

    \[S(p_0,\ldots, p_k)=\left\{\sum_{i=0}^k a_ip_i\colon \sum_{i=0}^k a_i=1\land  \forall_{0\leq i\leq k} a_i\geq 0\right\}\]

is called a k-dimensional simplex with vertices p_0,\ldots, p_k. Obviously, a one-dimensional simplex is a line segment, a two-dimensional simplex is a triangle, and a three-dimensional simplex is a tetrahedron.
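Membership in a simplex can be tested by solving for the coefficients a_i in the definition above (the condition \sum a_i=1 becomes one extra linear equation); a numpy sketch (the helper `in_simplex` is my own):

```python
import numpy as np

def in_simplex(x, vertices, tol=1e-9):
    """Test whether x lies in S(p_0, ..., p_k): solve for coefficients a_i
    with sum(a_i * p_i) = x and sum(a_i) = 1, then check all a_i >= 0."""
    P = np.column_stack(vertices)               # vertices as columns
    A = np.vstack([P, np.ones(len(vertices))])  # append the row enforcing sum a_i = 1
    b = np.append(np.asarray(x, dtype=float), 1.0)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # x is in the simplex iff the system is solved exactly and all a_i >= 0
    return bool(np.allclose(A @ a, b) and np.all(a >= -tol))

# A triangle (two-dimensional simplex) in the plane.
tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
inside = in_simplex([0.25, 0.25], tri)
outside = in_simplex([1.0, 1.0], tri)
```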

k-dimensional measure of parallelepipeds and simplexes

k-dimensional measure \mu_k is a generalization of length (k=1), area (k=2) and volume (k=3).

To explain how to calculate the k-dimensional measure of a k-dimensional simplex or parallelepiped, notice first that if v_1,\ldots, v_k is a system of linearly independent vectors in V, v\in V, and w is the projection of v onto (lin(v_1,\ldots, v_k))^\bot, then v=w+w', where w'\in lin(v_1,\ldots, v_k). Let w'=\sum_{i=1}^k a_i v_i. Thus,

    \[W(v_1,\ldots, v_k,v)=W(v_1,\ldots, v_k,w+w')=\]

    \[=\left|\begin{array}{cccc}\langle v_1,v_1\rangle &\ldots &\langle v_1,v_k\rangle & \langle v_1,w'\rangle\\ \langle v_2,v_1\rangle &\ldots &\langle v_2,v_k\rangle & \langle v_2,w'\rangle\\ &\ldots&&\ldots \\ \langle v_k,v_1\rangle &\ldots &\langle v_k,v_k\rangle & \langle v_k,w'\rangle\\\langle w',v_1\rangle &\ldots &\langle w',v_k\rangle & \langle w,w\rangle+\langle w',w'\rangle\end{array}\right|,\]


since, for each i,

    \[\langle v_i,w+w'\rangle=\langle v_i,w\rangle+\langle v_i,w'\rangle=\langle v_i,w'\rangle\]

(as w\in (lin(v_1,\ldots, v_k))^\bot), and the bottom-right entry is \langle w+w',w+w'\rangle=\langle w,w\rangle+\langle w',w'\rangle, because

    \[\langle w,w'\rangle=0.\]

Substituting \langle v_i,w'\rangle=\sum_{j=1}^k a_j\langle v_i,v_j\rangle and \langle w',w'\rangle=\sum_{j=1}^k a_j\langle w',v_j\rangle, and splitting the last column by linearity of the determinant in this column, we get


    \[W(v_1,\ldots, v_k,v)=\left|\begin{array}{cccc}\langle v_1,v_1\rangle &\ldots &\langle v_1,v_k\rangle & \sum_{i=1}^k a_i\langle v_1,v_i\rangle\\ \langle v_2,v_1\rangle &\ldots &\langle v_2,v_k\rangle & \sum_{i=1}^k a_i\langle v_2,v_i\rangle\\ &\ldots&&\ldots \\ \langle v_k,v_1\rangle &\ldots &\langle v_k,v_k\rangle & \sum_{i=1}^k a_i\langle v_k,v_i\rangle\\\langle w',v_1\rangle &\ldots &\langle w',v_k\rangle & \sum_{i=1}^k a_i\langle w',v_i\rangle\end{array}\right|+\]

    \[+\left|\begin{array}{cccc}\langle v_1,v_1\rangle &\ldots &\langle v_1,v_k\rangle & 0\\ \langle v_2,v_1\rangle &\ldots &\langle v_2,v_k\rangle & 0\\ &\ldots&&\ldots \\ \langle v_k,v_1\rangle &\ldots &\langle v_k,v_k\rangle & 0\\\langle w',v_1\rangle &\ldots &\langle w',v_k\rangle &  \langle w,w\rangle\end{array}\right|.\]

But the first determinant is equal to zero, because its last column is a linear combination of the other columns. And the second is equal to \langle w,w\rangle W(v_1,\ldots, v_k) (expansion along the last column). Thus

    \[W(v_1,\ldots, v_k,v)=\|w\|^2W(v_1,\ldots, v_k).\]
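This identity can be verified numerically for random vectors; a quick numpy sanity check (not from the text):

```python
import numpy as np

def gram_det(vectors):
    """Gram determinant W(v_1, ..., v_k): det of the matrix of inner products."""
    V = np.array(vectors)
    return np.linalg.det(V @ V.T)

# Check W(v_1, ..., v_k, v) = ||w||^2 * W(v_1, ..., v_k), where w is the
# projection of v onto lin(v_1, ..., v_k)^⊥, on random vectors in R^4.
rng = np.random.default_rng(0)
vs = [rng.standard_normal(4) for _ in range(2)]
v = rng.standard_normal(4)

A = np.column_stack(vs)
coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
w = v - A @ coeffs                    # component of v orthogonal to lin(v_1, v_2)

lhs = gram_det(vs + [v])
rhs = np.dot(w, w) * gram_det(vs)
```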

Since the k-dimensional measure of a parallelepiped should be the product of the (k-1)-dimensional measure of its base and its height (the length of the appropriate projection), using induction one easily comes to the conclusion that

    \[\mu_{k}(R(p_0;v_1,\ldots,v_k))=\sqrt{W(v_1,\ldots, v_k)}.\]

and, for simplexes,
    \[\mu_{k}(S(p_0,\ldots,p_k))=\frac{1}{k!}\sqrt{W(p_1-p_0,\ldots, p_k-p_0)}.\]
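Both formulas translate directly into code via the Gram determinant W; a numpy sketch (the function names are mine):

```python
import numpy as np
from math import factorial

def measure_parallelepiped(vs):
    """mu_k(R(p_0; v_1, ..., v_k)) = sqrt(W(v_1, ..., v_k))."""
    V = np.array(vs)
    return np.sqrt(np.linalg.det(V @ V.T))   # W is the Gram determinant

def measure_simplex(ps):
    """mu_k(S(p_0, ..., p_k)) = sqrt(W(p_1 - p_0, ..., p_k - p_0)) / k!."""
    ps = [np.asarray(p, dtype=float) for p in ps]
    k = len(ps) - 1
    return measure_parallelepiped([p - ps[0] for p in ps[1:]]) / factorial(k)

# A right triangle in R^3 with legs of length 1 has area 1/2.
area = measure_simplex([np.array([0.0, 0.0, 0.0]),
                        np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 1.0, 0.0])])
```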

Orientation of a space

We say that bases \mathcal{A}, \mathcal{B} of a vector space V are consistently oriented if \det M(id)_{\mathcal{A}}^{\mathcal{B}}>0, and inconsistently oriented if \det M(id)_{\mathcal{A}}^{\mathcal{B}}<0. Being consistently oriented is thus an equivalence relation on the set of all bases, and this relation has exactly two equivalence classes.

An orientation of a vector space is a choice of one of those two equivalence classes (e.g. by choosing a basis). The bases from the chosen class are then said to be positively oriented, and all the remaining bases negatively oriented.
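Checking whether two bases are consistently oriented amounts to one determinant; a numpy sketch (bases are given by the coordinates of their vectors in the standard basis):

```python
import numpy as np

def consistently_oriented(A, B):
    """True iff bases A and B (lists of vectors in standard coordinates) are
    consistently oriented, i.e. det M(id)_A^B > 0."""
    MA = np.column_stack(A)
    MB = np.column_stack(B)
    M = np.linalg.solve(MB, MA)   # change-of-basis matrix M(id)_A^B
    return np.linalg.det(M) > 0

e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
swapped = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]

same = consistently_oriented(e, e)            # a basis is consistent with itself
opposite = consistently_oriented(e, swapped)  # swapping two vectors flips orientation
```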

Vector product

Given an n-dimensional Euclidean space V, the vector product defines a method of completing a system of n-1 linearly independent vectors to a basis of V.

We say that v_n is the vector product of v_1,\ldots, v_{n-1} (denoted by v_n=v_1\times \ldots \times v_{n-1}), if

  • v_n=0 iff v_1,\ldots, v_{n-1} are linearly dependent,
  • otherwise, v_1,\ldots, v_{n-1}, v_n is a positively oriented basis of V,

        \[v_n\in (lin(v_1,\ldots, v_{n-1}))^{\bot}\]

and

        \[\|v_n\|=\sqrt{W(v_1,\ldots, v_{n-1})}.\]
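For computations, the vector product can be obtained by cofactor expansion; a numpy sketch (my own helper, assuming V=R^n with the standard inner product and the standard basis taken as positively oriented):

```python
import numpy as np

def vector_product(*vs):
    """Vector product of n-1 vectors in R^n.

    The i-th coordinate is, up to sign, the minor obtained by deleting
    column i from the (n-1) x n matrix of the input vectors; the sign
    (-1)^(n-1+i) makes (v_1, ..., v_{n-1}, v_n) positively oriented."""
    M = np.array(vs)                 # shape (n-1, n)
    n = M.shape[1]
    assert M.shape[0] == n - 1, "need exactly n-1 vectors in R^n"
    return np.array([(-1) ** (n - 1 + i) * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(n)])

# In R^3 this is the ordinary cross product: e_1 x e_2 = e_3.
result = vector_product(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```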