13 Multilinear Algebra

José A. Dias da Silva
Universidade de Lisboa
Armando Machado
Universidade de Lisboa

13.1  Multilinear Maps
13.2  Tensor Products
13.3  Rank of a Tensor: Decomposable Tensors
13.4  Tensor Product of Linear Maps
13.5  Symmetric and Antisymmetric Maps
13.6  Symmetric and Grassmann Tensors
13.7  The Tensor Multiplication, the Alt Multiplication, and the Sym Multiplication
13.8  Associated Maps
13.9  Tensor Algebras
13.10 Tensor Product of Inner Product Spaces
13.11 Orientation and Hodge Star Operator
References

13.1 Multilinear Maps
Unless otherwise stated, within this section V, U, and W as well as these letters with subscripts, superscripts,
or accents, are finite dimensional vector spaces over a field F of characteristic zero.
Definitions:
A map ϕ from V1 × · · · × Vm into U is a multilinear map (m-linear map) if it is linear in each coordinate, i.e., for every vi, v′i ∈ Vi, i = 1, . . . , m, and for every a ∈ F the following conditions hold:
(a) ϕ(v1, . . . , vi + v′i, . . . , vm) = ϕ(v1, . . . , vi, . . . , vm) + ϕ(v1, . . . , v′i, . . . , vm);
(b) ϕ(v1, . . . , avi, . . . , vm) = aϕ(v1, . . . , vi, . . . , vm).
The 2-linear maps and 3-linear maps are also called bilinear and trilinear maps, respectively.
If U = F then a multilinear map into U is called a multilinear form.
The set of multilinear maps from V1 × · · · × Vm into U , together with the operations defined as follows,
is denoted L (V1 , . . . , Vm ; U ). For m-linear maps ϕ, ψ, and a ∈ F ,
(ψ + ϕ)(v1, . . . , vm) = ψ(v1, . . . , vm) + ϕ(v1, . . . , vm),
(aϕ)(v1, . . . , vm) = aϕ(v1, . . . , vm).
Let (bi1, . . . , bi ni) be an ordered basis of Vi, i = 1, . . . , m. The set of sequences (j1, . . . , jm), 1 ≤ ji ≤ ni, i = 1, . . . , m, will be identified with the set Γ(n1, . . . , nm) of maps α from {1, . . . , m} into N satisfying 1 ≤ α(i) ≤ ni, i = 1, . . . , m.
For α ∈ Γ(n1, . . . , nm), the m-tuple of basis vectors (b1α(1), . . . , bmα(m)) is denoted by bα.
Unless otherwise stated, Γ(n1, . . . , nm) is considered ordered by the lexicographic order. When there is no risk of confusion, Γ is used instead of Γ(n1, . . . , nm).
Let p, q be positive integers. If ϕ is an ( p + q )-linear map from W1 × · · · × Wp × V1 × · · · × Vq into
U , then for each choice of wi in Wi , i = 1, . . . , p, the map
(v1 , . . . , vq ) −→ ϕ(w1 , . . . , w p , v1 , . . . , vq ),
from V1 × · · · × Vq into U , is denoted ϕw1 ,...,w p , i.e.
ϕw1 ,...,w p (v1 , . . . , vq ) = ϕ(w1 , . . . , w p , v1 , . . . , vq ).
Let η be a linear map from U into U′ and θi a linear map from V′i into Vi, i = 1, . . . , m. If (v1, . . . , vm) → ϕ(v1, . . . , vm) is a multilinear map from V1 × · · · × Vm into U, then L(θ1, . . . , θm; η)(ϕ) denotes the map from V′1 × · · · × V′m into U′ defined by
(v′1, . . . , v′m) → η(ϕ(θ1(v′1), . . . , θm(v′m))).
Facts:
The following facts can be found in [Mar73, Chap. 1] and in [Mer97, Chap. 5].
1. If ϕ is a multilinear map, then ϕ(v1 , . . . , 0, . . . , vm ) = 0.
2. The set L (V1 , . . . , Vm ; U ) is a vector space over F .
3. If ϕ is an m−linear map from V1 × · · · × Vm into U , then for every integer p, 1 ≤ p < m, and
vi ∈ Vi , 1 ≤ i ≤ p, the map ϕv1 ,...,v p is an (m − p)-linear map.
4. Under the same assumptions as in (3.), the map (v1, . . . , vp) → ϕv1,...,vp from V1 × · · · × Vp into
L (Vp+1 , . . . , Vm ; U ), is p-linear. A linear isomorphism from L (V1 , . . . , Vp , Vp+1 , . . . , Vm ; U ) into
L (V1 , . . . , Vp ; L (Vp+1 , . . . , Vm ; U )) arises through this construction.
5. Let η be a linear map from U into U′ and θi a linear map from V′i into Vi, i = 1, . . . , m. The map L(θ1, . . . , θm; η) from L(V1, . . . , Vm; U) into L(V′1, . . . , V′m; U′) is a linear map. When m = 1 and U = U′ = F, then L(θ1; I) is the dual or adjoint linear map θ1* from V1* into V′1*.
6. |Γ(n1, . . . , nm)| = ∏_{i=1}^m ni, where | | denotes cardinality.
7. Let (yα)α∈Γ be a family of vectors of U. Then there exists a unique m-linear map ϕ from V1 × · · · × Vm into U satisfying ϕ(bα) = yα, for every α ∈ Γ.
8. If (u1, . . . , un) is a basis of U, then (ϕi,α : α ∈ Γ, i = 1, . . . , n) is a basis of L(V1, . . . , Vm; U), where ϕi,α is characterized by the conditions ϕi,α(bβ) = δα,β ui. Moreover, if ϕ is an m-linear map from V1 × · · · × Vm into U such that, for each α ∈ Γ,
ϕ(bα) = Σ_{i=1}^n ai,α ui,
then
ϕ = Σ_{α,i} ai,α ϕi,α.
Examples:
1. The map from F^m into F, (a1, . . . , am) → ∏_{i=1}^m ai, is an m-linear map.
2. Let V be a vector space over F. The map (a, v) → av from F × V into V is a bilinear map.
3. The map from F^m × F^m into F, ((a1, . . . , am), (b1, . . . , bm)) → Σ_{i=1}^m ai bi, is bilinear.
4. Let U, V, and W be vector spaces over F. The map (θ, η) → θη from L(V, W) × L(U, V) into L(U, W), given by composition, is bilinear.
5. The multiplication of matrices, (A, B) → AB, from F^{m×n} × F^{n×p} into F^{m×p}, is bilinear. Observe that this example is the matrix counterpart of the previous one.
6. Let V and W be vector spaces over F. The evaluation map, from L(V, W) × V into W,
(θ, v) → θ(v),
is bilinear.
7. The map
((a11, a21, . . . , am1), . . . , (a1m, a2m, . . . , amm)) → det([aij])
from the Cartesian product of m copies of F^m into F is m-linear.
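A minimal numerical sketch of Example 7, assuming NumPy (the helper name is illustrative, not from the text): the determinant, viewed as a function of the columns of a matrix, is linear in each column separately.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))   # the columns of A play the role of the m arguments
u, v = rng.standard_normal(m), rng.standard_normal(m)
a = 2.5

def det_with_column(M, j, col):
    """Determinant of M after replacing column j by col."""
    M = M.copy()
    M[:, j] = col
    return np.linalg.det(M)

j = 1  # check linearity in the second argument (column); the index is arbitrary
assert np.isclose(det_with_column(A, j, u + v),
                  det_with_column(A, j, u) + det_with_column(A, j, v))
assert np.isclose(det_with_column(A, j, a * u), a * det_with_column(A, j, u))
```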
13.2 Tensor Products
Definitions:
Let V1 , . . . , Vm , P be vector spaces over F . Let ν : V1 × · · · × Vm −→ P be a multilinear map. The pair
(ν, P ) is called a tensor product of V1 , . . . , Vm , or P is said to be a tensor product of V1 , . . . , Vm with
tensor multiplication ν, if the following condition is satisfied:
Universal factorization property
If ϕ is a multilinear map from V1 × · · · × Vm into the vector space U, then there exists a unique linear map h from P into U that makes the triangle formed by ν : V1 × · · · × Vm → P, h : P → U, and ϕ : V1 × · · · × Vm → U commutative, i.e., hν = ϕ.
If P is a tensor product of V1 , . . . , Vm , with tensor multiplication ν, then P is denoted by V1 ⊗ · · · ⊗ Vm
and ν(v1 , . . . , vm ) is denoted by v1 ⊗ · · · ⊗ vm and is called the tensor product of the vectors v1 , . . . , vm .
The elements of V1 ⊗ · · · ⊗ Vm are called tensors. The tensors that are the tensor product of m vectors
are called decomposable tensors.
When V1 = · · · = Vm = V, the vector space V1 ⊗ · · · ⊗ Vm is called the mth tensor power of V and is denoted by ⊗^m V. It is convenient to define ⊗^0 V = F and to assume that 1 is the unique decomposable tensor of ⊗^0 V. When we consider simultaneously different models of the tensor product, we sometimes use alternative symbols for the tensor multiplication (primed or accented versions of ⊗) to emphasize these different choices.
Within this section, V1 , . . . , Vm are finite dimensional vector spaces over F and (bi 1 , . . . , bi ni ) denotes
a basis of Vi , i = 1, . . . , m. When V is a vector space and x1 , . . . , xk ∈ V , Span({x1 , . . . , xk }) denotes the
subspace of V spanned by these vectors.
Facts:
The following facts can be found in [Mar73, Chap. 1] and in [Mer97, Chap. 5].
1. If V1 ⊗ · · · ⊗ Vm and V1 ⊗′ · · · ⊗′ Vm are two tensor products of V1, . . . , Vm, then the unique linear map h from V1 ⊗ · · · ⊗ Vm into V1 ⊗′ · · · ⊗′ Vm satisfying
h(v1 ⊗ · · · ⊗ vm) = v1 ⊗′ · · · ⊗′ vm
is an isomorphism.
2. If (ν(bα))α∈Γ(n1,...,nm) is a basis of P, then the pair (ν, P) is a tensor product of V1, . . . , Vm. This
is often the most effective way to identify a model for the tensor product of vector spaces. It also
implies the existence of a tensor product.
3. If P is the tensor product of V1 , . . . , Vm with tensor multiplication ν, and h : P −→ Q is a linear
isomorphism, then (hν, Q) is a tensor product of V1 , . . . , Vm .
4. When m = 1, it makes sense to speak of a tensor product of one vector space V and V itself is used
as a model for that tensor product with the identity as tensor multiplication, i.e., ⊗^1 V = V.
5. Bilinear version of the universal property — Given a multilinear map from V1 × · · · × Vk × U1 ×
· · · × Um into W,
(v1 , . . . , vk , u1 , . . . , um ) → ϕ(v1 , . . . , vk , u1 , . . . , um ),
there exists a unique bilinear map χ from (V1 ⊗ · · · ⊗ Vk ) × (U1 ⊗ · · · ⊗ Um ) into W satisfying
χ (v1 ⊗ · · · ⊗ vk , u1 ⊗ · · · ⊗ um ) = ϕ(v1 , . . . , vk , u1 , . . . , um ),
vi ∈ Vi, uj ∈ Uj, i = 1, . . . , k, j = 1, . . . , m.
6. Let a ∈ F and vi, v′i ∈ Vi, i = 1, . . . , m. As a consequence of the multilinearity of ⊗, the following equalities hold:
(a) v1 ⊗ · · · ⊗ (vi + v′i) ⊗ · · · ⊗ vm = v1 ⊗ · · · ⊗ vi ⊗ · · · ⊗ vm + v1 ⊗ · · · ⊗ v′i ⊗ · · · ⊗ vm,
(b) a(v1 ⊗ · · · ⊗ vm) = (av1) ⊗ · · · ⊗ vm = · · · = v1 ⊗ · · · ⊗ (avm),
(c) v1 ⊗ · · · ⊗ 0 ⊗ · · · ⊗ vm = 0.
7. If one of the vector spaces Vi is zero, then V1 ⊗ · · · ⊗ Vm = {0}.
8. Write b⊗α to mean
b⊗α := b1α(1) ⊗ · · · ⊗ bmα(m).
Then (b⊗α)α∈Γ is a basis of V1 ⊗ · · · ⊗ Vm. This basis is said to be induced by the bases (bi1, . . . , bi ni), i = 1, . . . , m.
9. The decomposable tensors span the tensor product V1 ⊗ · · · ⊗ Vm . Furthermore, if the set C i spans
Vi , i = 1, . . . , m, then the set {v1 ⊗ · · · ⊗ vm : vi ∈ C i , i = 1, . . . , m} spans V1 ⊗ · · · ⊗ Vm .
10. dim(V1 ⊗ · · · ⊗ Vm) = ∏_{i=1}^m dim(Vi).
11. The tensor product is commutative,
V1 ⊗ V2 = V2 ⊗ V1 ,
meaning that if V1 ⊗ V2 is a tensor product of V1 and V2 , then V1 ⊗ V2 is also a tensor product of
V2 and V1 with tensor multiplication (v2 , v1 ) → v1 ⊗ v2 .
In general, with a similar meaning, for any σ ∈ Sm ,
V1 ⊗ · · · ⊗ Vm = Vσ (1) ⊗ · · · ⊗ Vσ (m) .
12. The tensor product is associative,
(V1 ⊗ V2 ) ⊗ V3 = V1 ⊗ (V2 ⊗ V3 ) = V1 ⊗ V2 ⊗ V3 ,
meaning that:
(a) A tensor product V1 ⊗ V2 ⊗ V3 is also a tensor product of V1 ⊗ V2 and V3 (respectively of V1 and
V2 ⊗ V3 ) with tensor multiplication defined (uniquely by Fact 5 above) for vi ∈ Vi , i = 1, 2, 3,
by (v1 ⊗ v2 ) ⊗ v3 = v1 ⊗ v2 ⊗ v3 (respectively by v1 ⊗ (v2 ⊗ v3 ) = v1 ⊗ v2 ⊗ v3 ).
(b) And, (V1 ⊗ V2) ⊗ V3 (respectively V1 ⊗ (V2 ⊗ V3)) is a tensor product of V1, V2, V3 with tensor
multiplication defined by v1 ⊗ v2 ⊗ v3 = (v1 ⊗ v2 ) ⊗ v3 , vi ∈ Vi , i = 1, 2, 3 (respectively
v1 ⊗ v2 ⊗ v3 = v1 ⊗ (v2 ⊗ v3 ), vi ∈ Vi , i = 1, 2, 3).
In general, with an analogous meaning,
(V1 ⊗ · · · ⊗ Vk ) ⊗ (Vk+1 ⊗ · · · ⊗ Vm ) = V1 ⊗ · · · ⊗ Vm ,
for any k, 1 ≤ k < m.
13. Let Wi be a subspace of Vi , i = 1, . . . , m. Then W1 ⊗ · · · ⊗ Wm is a subspace of V1 ⊗ · · · ⊗ Vm ,
meaning that the subspace of V1 ⊗ · · · ⊗ Vm spanned by the set of decomposable tensors of the
form
w 1 ⊗ · · · ⊗ wm ,
wi ∈ Wi , i = 1, . . . , m
is a tensor product of W1 , . . . , Wm with tensor multiplication equal to the restriction of ⊗ to
W1 × · · · × Wm .
From now on, the model for the tensor product described above is assumed when dealing with
the tensor product of subspaces of Vi .
14. Let W1, W′1 be subspaces of V1 and W2, W′2 be subspaces of V2. Then
(a) (W1 ⊗ W2) ∩ (W′1 ⊗ W′2) = (W1 ∩ W′1) ⊗ (W2 ∩ W′2).
(b) W1 ⊗ (W2 + W′2) = (W1 ⊗ W2) + (W1 ⊗ W′2),
(W1 + W′1) ⊗ W2 = (W1 ⊗ W2) + (W′1 ⊗ W2).
(c) Assuming W1 ∩ W′1 = {0},
(W1 ⊕ W′1) ⊗ W2 = (W1 ⊗ W2) ⊕ (W′1 ⊗ W2).
Assuming W2 ∩ W′2 = {0},
W1 ⊗ (W2 ⊕ W′2) = (W1 ⊗ W2) ⊕ (W1 ⊗ W′2).
15. In a more general setting, if Wij, j = 1, . . . , pi, are subspaces of Vi, i ∈ {1, . . . , m}, then
(Σ_{j=1}^{p1} W1j) ⊗ · · · ⊗ (Σ_{j=1}^{pm} Wmj) = Σ_{γ∈Γ(p1,...,pm)} W1γ(1) ⊗ · · · ⊗ Wmγ(m).
If the sums of subspaces on the left-hand side are direct, then
(⊕_{j=1}^{p1} W1j) ⊗ · · · ⊗ (⊕_{j=1}^{pm} Wmj) = ⊕_{γ∈Γ(p1,...,pm)} W1γ(1) ⊗ · · · ⊗ Wmγ(m).
Examples:
1. The vector space F^{m×n} of the m × n matrices over F is a tensor product of F^m and F^n with tensor multiplication (the usual tensor multiplication for F^{m×n}) defined, for (a1, . . . , am) ∈ F^m and (b1, . . . , bn) ∈ F^n, by
(a1, . . . , am) ⊗ (b1, . . . , bn) = [a1 · · · am]^T [b1 · · · bn] = [ai bj],
the m × n matrix whose (i, j) entry is ai bj. With this definition, ei ⊗ ej = Eij, where ei, ej, and Eij are standard basis vectors of F^m, F^n, and F^{m×n}.
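A minimal NumPy sketch of this model (illustrative only, not part of the handbook text): the tensor multiplication of F^m and F^n is the outer product, and ei ⊗ ej is the matrix unit Eij.

```python
import numpy as np

m, n = 3, 4
a = np.arange(1, m + 1, dtype=float)      # (a1, ..., am) in F^m
b = np.arange(1, n + 1, dtype=float)      # (b1, ..., bn) in F^n

tensor = np.outer(a, b)                   # column [a] times row [b]: (i, j) entry is a_i * b_j
assert tensor.shape == (m, n)

# e_i (x) e_j equals the matrix unit E_ij
i, j = 1, 2
e_i, e_j = np.eye(m)[i], np.eye(n)[j]
E_ij = np.zeros((m, n)); E_ij[i, j] = 1.0
assert np.array_equal(np.outer(e_i, e_j), E_ij)
```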
2. The field F , viewed as a vector space over F , is an mth tensor power of F with tensor multiplication
defined by
a1 ⊗ · · · ⊗ am = ∏_{i=1}^m ai,   ai ∈ F, i = 1, . . . , m.
3. The vector space V is a tensor product of F and V with tensor multiplication defined by
a ⊗ v = av,
a ∈ F,
v ∈ V.
4. Let U and V be vector spaces over F . Then L (V ; U ) is a tensor product U ⊗ V ∗ with tensor
multiplication (the usual tensor multiplication for L (V ; U )) defined by the equality (u ⊗ f )(v) =
f (v)u, u ∈ U, v ∈ V.
5. Let V1 , . . . , Vm be vector spaces over F . The vector space L (V1 , . . . , Vm ; U ) is a tensor product
L (V1 , . . . , Vm ; F ) ⊗ U with tensor multiplication
(ϕ ⊗ u)(v1 , . . . , vm ) = ϕ(v1 , . . . , vm )u.
6. Denote by F^{n1×···×nm} the set of all families of elements of F indexed by {1, . . . , n1} × · · · × {1, . . . , nm} = Γ(n1, . . . , nm). The set F^{n1×···×nm}, equipped with the sum and scalar product defined, for every (j1, . . . , jm) ∈ Γ(n1, . . . , nm), by the equalities
(a_{j1,...,jm}) + (b_{j1,...,jm}) = (a_{j1,...,jm} + b_{j1,...,jm}),   α(a_{j1,...,jm}) = (αa_{j1,...,jm}),   α ∈ F,
is a vector space over F. This vector space is a tensor product of F^{n1}, . . . , F^{nm} with tensor multiplication defined by
(a11, . . . , a1n1) ⊗ · · · ⊗ (am1, . . . , amnm) = (∏_{i=1}^m a_{i ji})_{(j1,...,jm)∈Γ}.
7. The vector space L(V1, . . . , Vm; F) is a tensor product of V1* = L(V1; F), . . . , Vm* = L(Vm; F) with tensor multiplication defined by
g1 ⊗ · · · ⊗ gm(v1, . . . , vm) = ∏_{t=1}^m gt(vt).
Very often, for example in the context of geometry, the factors of the tensor product are vector
space duals. In those situations, this is the model of tensor product implicitly assumed.
8. The vector space
L (V1 , . . . , Vm ; F )∗
is a tensor product of V1 , . . . , Vm with tensor multiplication defined by
v1 ⊗ · · · ⊗ vm (ψ) = ψ(v1 , . . . , vm ).
9. The vector space L (V1 , . . . , Vm ; F ) is a tensor product L (V1 , . . . , Vk ; F ) ⊗ L (Vk+1 , . . . , Vm ; F ) with
tensor multiplication defined, for every vi ∈ Vi , i = 1, . . . , m, by the equalities
(ϕ ⊗ ψ)(v1 , . . . , vm ) = ϕ(v1 , . . . , vk )ψ(vk+1 , . . . , vm ).
13.3 Rank of a Tensor: Decomposable Tensors
Definitions:
Let z ∈ V1 ⊗ · · · ⊗ Vm . The tensor z has rank k if z is the sum of k decomposable tensors but it cannot be
written as sum of l decomposable tensors, for any l less than k.
Facts:
The following facts can be found in [Bou89, Chap. II, §7.8] and [Mar73, Chap. 1].
1. The tensor z = v1 ⊗ w1 + · · · + vt ⊗ wt ∈ V ⊗ W has rank t if and only if (v1 , . . . , vt ) and
(w1 , . . . , wt ) are linearly independent.
2. If the model for the tensor product of F m and F n is the vector space of m × n matrices over F with
the usual tensor multiplication, then the rank of a tensor is equal to the rank of the corresponding
matrix.
3. If the model for the tensor product U ⊗ V ∗ is the vector space L (V ; U ) with the usual tensor
multiplication, then the rank of a tensor is equal to the rank of the corresponding linear map.
4. x1 ⊗ · · · ⊗ xm = 0 if and only if xi = 0 for some i ∈ {1, . . . , m}.
5. If xi , yi are nonzero vectors of Vi , i = 1, . . . , m, then
Span({x1 ⊗ · · · ⊗ xm }) = Span({y1 ⊗ · · · ⊗ ym })
if and only if Span({xi }) = Span({yi }), i = 1, . . . , m.
Examples:
1. Consider as a model of F^m ⊗ F^n the vector space of the m × n matrices over F with the usual tensor multiplication. Let A be a tensor of F^m ⊗ F^n. If rank A = k (using the matrix definition of rank), then
A = M [ Ik 0 ; 0 0 ] N,
where M = [x1 · · · xm] is an invertible matrix with columns x1, . . . , xm and N is an invertible matrix with rows y1, . . . , yn. (See Chapter 2.) Then
A = x1 ⊗ y1 + · · · + xk ⊗ yk
has rank k as a tensor.
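As a numerical sketch of this example, assuming NumPy (the variable names are illustrative): a rank-k matrix is a sum of k outer products, i.e., of k decomposable tensors, and a rank-k decomposition can be read off from the SVD.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 5, 4, 2
# Build a rank-k matrix as a sum of k decomposable tensors x_i (x) y_i.
xs = rng.standard_normal((k, m))
ys = rng.standard_normal((k, n))
A = sum(np.outer(x, y) for x, y in zip(xs, ys))

assert np.linalg.matrix_rank(A) == k      # matrix rank equals the tensor rank in this model

# Conversely, a rank-k decomposition can be extracted from the SVD A = U S V^T:
U, S, Vt = np.linalg.svd(A)
B = sum(S[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
assert np.allclose(A, B)
```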
13.4 Tensor Product of Linear Maps
Definitions:
Let θi be a linear map from Vi into Ui , i = 1, . . . , m. The unique linear map h from V1 ⊗ · · · ⊗ Vm into
U1 ⊗ · · · ⊗ Um satisfying, for all vi ∈ Vi , i = 1, . . . , m,
h(v1 ⊗ · · · ⊗ vm ) = θ1 (v1 ) ⊗ · · · ⊗ θm (vm )
is called the tensor product of θ1 , . . . , θm and is denoted by θ1 ⊗ · · · ⊗ θm .
Let At = (a^{(t)}_{ij}) be an rt × st matrix over F, t = 1, . . . , m. The Kronecker product of A1, . . . , Am, denoted A1 ⊗ · · · ⊗ Am, is the (∏_{t=1}^m rt) × (∏_{t=1}^m st) matrix whose (α, β)-entry (α ∈ Γ(r1, . . . , rm) and β ∈ Γ(s1, . . . , sm)) is ∏_{t=1}^m a^{(t)}_{α(t)β(t)}. (See also Section 10.4.)
Facts:
The following facts can be found in [Mar73, Chap. 2] and in [Mer97, Chap. 5].
Let θi be a linear map from Vi into Ui , i = 1, . . . , m.
1. If ηi is a linear map from Wi into Vi , i = 1, . . . , m,
(θ1 ⊗ · · · ⊗ θm )(η1 ⊗ · · · ⊗ ηm ) = (θ1 η1 ) ⊗ · · · ⊗ (θm ηm ).
2. I V1 ⊗···⊗Vm = I V1 ⊗ · · · ⊗ I Vm .
3. Ker(θ1 ⊗ · · · ⊗ θm ) = Ker(θ1 ) ⊗ V2 ⊗ · · · ⊗ Vm + V1 ⊗ Ker(θ2 ) ⊗ · · · ⊗ Vm + · · · + V1 ⊗ · · · ⊗
Vm−1 ⊗ Ker(θm ).
In particular, θ1 ⊗ · · · ⊗ θm is one to one if θi is one to one, i = 1, . . . , m, [Bou89, Chap. II, §3.5].
4. θ1 ⊗ · · · ⊗ θm (V1 ⊗ · · · ⊗ Vm ) = θ1 (V1 ) ⊗ · · · ⊗ θm (Vm ). In particular θ1 ⊗ · · · ⊗ θm is onto if θi is
onto, i = 1, . . . , m.
In the next three facts, assume that θi is a linear operator on the ni -dimensional vector space Vi ,
i = 1, . . . , m.
5. tr(θ1 ⊗ · · · ⊗ θm) = ∏_{i=1}^m tr(θi).
6. If σ(θi) = {ai1, . . . , ai ni}, i = 1, . . . , m, then
σ(θ1 ⊗ · · · ⊗ θm) = { ∏_{i=1}^m ai,α(i) : α ∈ Γ(n1, . . . , nm) }.
7. det(θ1 ⊗ θ2 ⊗ · · · ⊗ θm ) = det(θ1 )n2 ···nm det(θ2 )n1 ·n3 ···nm · · · det(θm )n1 ·n2 ···nm−1 .
8. The map ν : (θ1 , . . . , θm ) → θ1 ⊗ · · · ⊗ θm is a multilinear map from L (V1 ; U1 ) × · · · × L (Vm ; Um )
into L (V1 ⊗ · · · ⊗ Vm ; U1 ⊗ · · · ⊗ Um ).
9. The vector space L(V1 ⊗ · · · ⊗ Vm; U1 ⊗ · · · ⊗ Um) is a tensor product of the vector spaces
L (V1 ; U1 ), . . . , L (Vm ; Um ), with tensor multiplication (θ1 , . . . , θm ) → θ1 ⊗ · · · ⊗ θm :
L (V1 ; U1 ) ⊗ · · · ⊗ L (Vm ; Um ) = L (V1 ⊗ · · · ⊗ Vm ; U1 ⊗ · · · ⊗ Um ).
10. As a consequence of (9.), choosing F as the model for ⊗^m F with the product in F as tensor multiplication,
V1* ⊗ · · · ⊗ Vm* = (V1 ⊗ · · · ⊗ Vm)*.
11. Let (vij)_{j=1,...,ni} be an ordered basis of Vi and (uij)_{j=1,...,qi} an ordered basis of Ui, i = 1, . . . , m. Let Ai be the matrix of θi with respect to the bases fixed in Vi and Ui. Then the matrix of θ1 ⊗ · · · ⊗ θm with respect to the bases (v⊗α)_{α∈Γ(n1,...,nm)} and (u⊗α)_{α∈Γ(q1,...,qm)} (induced by the bases (vij)_{j=1,...,ni} and (uij)_{j=1,...,qi}, respectively) is the Kronecker product of A1, . . . , Am,
A1 ⊗ · · · ⊗ Am.
12. Let n1 , . . . , nm , r 1 , . . . , r m , t1 , . . . , tm be positive integers. Let Ai be an ni × r i matrix, and Bi be an
r i × ti matrix, i = 1, . . . , m. Then the following holds:
(a) (A1 ⊗ · · · ⊗ Am )(B1 ⊗ · · · ⊗ Bm ) = A1 B1 ⊗ · · · ⊗ Am Bm ,
(b) (A1 ⊗ · · · ⊗ Ak ) ⊗ (Ak+1 ⊗ · · · ⊗ Am ) = A1 ⊗ · · · ⊗ Am .
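The following NumPy sketch (illustrative only) checks, for m = 2, the mixed-product rule of Fact 12(a) together with the trace and determinant formulas of Facts 5 and 7, using numpy.kron as the Kronecker product.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 4
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
B1, B2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))

# Fact 12(a): (A1 (x) A2)(B1 (x) B2) = (A1 B1) (x) (A2 B2)
assert np.allclose(np.kron(A1, A2) @ np.kron(B1, B2), np.kron(A1 @ B1, A2 @ B2))

# Fact 5: tr(A1 (x) A2) = tr(A1) tr(A2)
assert np.isclose(np.trace(np.kron(A1, A2)), np.trace(A1) * np.trace(A2))

# Fact 7: det(A1 (x) A2) = det(A1)^n2 * det(A2)^n1
assert np.isclose(np.linalg.det(np.kron(A1, A2)),
                  np.linalg.det(A1) ** n2 * np.linalg.det(A2) ** n1)
```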
Examples:
1. Consider as a model of U ⊗ V* the vector space L(V; U) with tensor multiplication defined by (u ⊗ f)(v) = f(v)u. Use a similar model for the tensor product of U′ and V′*. Let η ∈ L(U; U′) and θ ∈ L(V′; V). Then, for all ξ ∈ U ⊗ V* = L(V; U),
(η ⊗ θ*)(ξ) = ηξθ.
2. Consider as a model of F^m ⊗ F^n the vector space of the m × n matrices over F with the usual tensor multiplication. Use a similar model for the tensor product of F^r and F^s. Identify the set of column matrices, F^{m×1}, with F^m and the set of row matrices, F^{1×n}, with F^n. Let A be an r × m matrix over F. Let θA be the linear map from F^m into F^r defined by
θA(a1, . . . , am) = A [a1 · · · am]^T.
Let B be an s × n matrix. Then, for all C ∈ F^{m×n} = F^m ⊗ F^n, (θA ⊗ θB)(C) = ACB^T.
3. For every i = 1, . . . , m, consider the ordered basis (bi1, . . . , bi ni) fixed in Vi and the ordered basis (b′i1, . . . , b′i si) fixed in Ui. Let θi be a linear map from Vi into Ui and let Ai = (a^{(i)}_{jk}) be the si × ni matrix of θi with respect to the bases (bi1, . . . , bi ni), (b′i1, . . . , b′i si). For every z ∈ V1 ⊗ · · · ⊗ Vm,
z = Σ_{j1=1}^{n1} · · · Σ_{jm=1}^{nm} c_{j1,...,jm} b1j1 ⊗ · · · ⊗ bmjm = Σ_{α∈Γ(n1,...,nm)} cα b⊗α.
Then, for β = (i1, . . . , im) ∈ Γ(s1, . . . , sm), the component c′_{i1,...,im} of θ1 ⊗ · · · ⊗ θm(z) on the basis element b′1i1 ⊗ · · · ⊗ b′mim of U1 ⊗ · · · ⊗ Um is
c′β = c′_{i1,...,im} = Σ_{j1=1}^{n1} · · · Σ_{jm=1}^{nm} a^{(1)}_{i1,j1} · · · a^{(m)}_{im,jm} c_{j1,...,jm} = Σ_{γ∈Γ(n1,...,nm)} (∏_{i=1}^m a^{(i)}_{β(i)γ(i)}) cγ.
4. If A = [aij] is a p × q matrix over F and B is an r × s matrix over F, then the Kronecker product of A and B is the matrix whose partition into r × s blocks is
A ⊗ B = [ a11 B   a12 B   · · ·   a1q B ]
        [ a21 B   a22 B   · · ·   a2q B ]
        [  ...     ...    ...      ...  ]
        [ ap1 B   ap2 B   · · ·   apq B ].
13.5 Symmetric and Antisymmetric Maps
Recall that we are assuming F to be of characteristic zero and that all vector spaces are finite dimensional
over F . In particular, V and U denote finite dimensional vector spaces over F .
Definitions:
Let m be a positive integer. When V1 = V2 = · · · = Vm = V, L^m(V; U) denotes the vector space of multilinear maps L(V1, . . . , Vm; U). By convention, L^0(V; U) = U.
An m-linear map ψ ∈ L m (V ; U ) is called antisymmetric or alternating if it satisfies
ψ(vσ (1) , . . . , vσ (m) ) = sgn(σ )ψ(v1 , . . . , vm ),
σ ∈ Sm ,
where sgn(σ ) denotes the sign of the permutation σ .
Similarly, an m-linear map ϕ ∈ L m (V ; U ) satisfying
ϕ(vσ (1) , . . . , vσ (m) ) = ϕ(v1 , . . . , vm )
for all permutations σ ∈ Sm and for all v1 , . . . , vm in V is called symmetric. Let S m (V ; U ) and Am (V ; U )
denote the subsets of L m (V ; U ) whose elements are respectively the symmetric and the antisymmetric
m-linear maps. The elements of Am (V ; F ) are called antisymmetric forms. The elements of S m (V ; F )
are called symmetric forms.
Let Γ_{m,n} be the set of all maps from {1, . . . , m} into {1, . . . , n}, i.e., Γ_{m,n} = Γ(n, . . . , n) (n repeated m times).
The subset of Γ_{m,n} of the strictly increasing maps α (α(1) < · · · < α(m)) is denoted by Q_{m,n}. The subset of the increasing maps α ∈ Γ_{m,n} (α(1) ≤ · · · ≤ α(m)) is denoted by G_{m,n}.
Let A = [aij] be an m × n matrix over F. Let α ∈ Γ_{p,m} and β ∈ Γ_{q,n}. Then A[α|β] is the p × q matrix over F whose (i, j)-entry is aα(i),β(j), i.e.,
A[α|β] = [aα(i),β(j)].
The m-tuple (1, 2, . . . , m) is denoted by ιm. If there is no risk of confusion, ι is used instead of ιm.
Facts:
1. If m > n, then Q_{m,n} = ∅. The cardinality of Γ_{m,n} is n^m, the cardinality of Q_{m,n} is the binomial coefficient (n choose m), and the cardinality of G_{m,n} is (m+n−1 choose m).
2. A^m(V; U) and S^m(V; U) are vector subspaces of L^m(V; U).
3. Let ψ ∈ L^m(V; U). The following conditions are equivalent:
(a) ψ is an antisymmetric multilinear map.
(b) For 1 ≤ i < j ≤ m and for all v1 , . . . , vm ∈ V ,
ψ(v1 , . . . , vi −1 , v j , vi +1 , . . . , v j −1 , vi , v j +1 , . . . , vm )
= −ψ(v1 , . . . , vi −1 , vi , vi +1 , . . . , v j −1 , v j , v j +1 , . . . , vm ).
(c) For 1 ≤ i < m and for all v1 , . . . , vm ∈ V ,
ψ(v1 , . . . , vi +1 , vi , . . . , vm ) = −ψ(v1 , . . . , vi , vi +1 , . . . , vm ).
4. Let ψ ∈ L m (V ; U ). The following conditions are equivalent:
(a) ψ is a symmetric multilinear map.
(b) For 1 ≤ i < j ≤ m and for all v1 , . . . , vm ∈ V ,
ψ(v1 , . . . , vi −1 , v j , vi +1 , . . . , v j −1 , vi , v j +1 , . . . , vm )
= ψ(v1 , . . . , vi −1 , vi , vi +1 , . . . , v j −1 , v j , v j +1 , . . . , vm ).
(c) For 1 ≤ i < m and for all v1 , . . . , vm ∈ V ,
ψ(v1 , . . . , vi +1 , vi , . . . , vm ) = ψ(v1 , . . . , vi , vi +1 , . . . , vm ).
5. When we consider L m (V ; U ) as the tensor product, L m (V ; F ) ⊗ U , with the tensor multiplication
described in Example 5 in Section 13.2, we have
Am (V ; U ) = Am (V ; F ) ⊗ U
and
S m (V ; U ) = S m (V ; F ) ⊗ U.
6. Polarization identity [Dol04]. If ϕ is a symmetric multilinear map, then for every m-tuple (v1, . . . , vm) of vectors of V and for any vector w ∈ V, the following identity holds:
ϕ(v1, . . . , vm) = (1 / (2^m m!)) Σ_{ε1,...,εm} ε1 · · · εm ϕ(w + ε1 v1 + · · · + εm vm, . . . , w + ε1 v1 + · · · + εm vm),
where εi ∈ {−1, +1}, i = 1, . . . , m.
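A numerical sketch of the polarization identity for m = 2, assuming NumPy; the symmetric bilinear form ϕ(x, y) = x^T S y with S symmetric is an illustrative choice, not from the text.

```python
import numpy as np
from itertools import product
from math import factorial

rng = np.random.default_rng(3)
n, m = 4, 2
S = rng.standard_normal((n, n)); S = S + S.T          # symmetric matrix -> symmetric bilinear form
phi = lambda x, y: x @ S @ y

v1, v2, w = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)

total = 0.0
for eps in product((-1, 1), repeat=m):                 # eps = (eps1, eps2) ranges over {-1, +1}^m
    u = w + eps[0] * v1 + eps[1] * v2
    total += eps[0] * eps[1] * phi(u, u)
assert np.isclose(phi(v1, v2), total / (2 ** m * factorial(m)))
```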
Examples:
1. The map
((a11 , a21 , . . . , am1 ), . . . , (a1m , a2m , . . . , amm )) → det([ai j ])
from the Cartesian product of m copies of F m into F is m-linear and antisymmetric.
2. The map
((a11 , a21 , . . . , am1 ), . . . , (a1m , a2m , . . . , amm )) → per([ai j ])
from the Cartesian product of m copies of F m into F is m-linear and symmetric.
3. The map ((a1, . . . , an), (b1, . . . , bn)) → [ai bj − bi aj] from F^n × F^n into F^{n×n} is bilinear and antisymmetric.
4. The map ((a1, . . . , an), (b1, . . . , bn)) → [ai bj + bi aj] from F^n × F^n into F^{n×n} is bilinear and symmetric.
5. The map χ from V m into Am (V ; F )∗ defined by
χ (v1 , . . . , vm )(ψ) = ψ(v1 , . . . , vm ),
v1 , . . . , vm ∈ V,
is an antisymmetric multilinear map.
6. The map χ from V m into S m (V ; F )∗ defined by
χ (v1 , . . . , vm )(ψ) = ψ(v1 , . . . , vm ),
v1 , . . . , vm ∈ V,
is a symmetric multilinear map.
13.6 Symmetric and Grassmann Tensors
Definitions:
Let σ ∈ Sm be a permutation of {1, . . . , m}. The unique linear map from ⊗^m V into ⊗^m V satisfying
v1 ⊗ · · · ⊗ vm → vσ−1(1) ⊗ · · · ⊗ vσ−1(m),   v1, . . . , vm ∈ V,
is denoted P(σ).
Let ψ be a multilinear form of L m (V ; F ) and σ an element of Sm . The multilinear form (v1 , . . . , vm ) →
ψ(vσ (1) , . . . , vσ (m) ) is denoted ψσ .
The linear operator Alt from ⊗^m V into ⊗^m V defined by
Alt := (1/m!) Σ_{σ∈Sm} sgn(σ) P(σ)
is called the alternator. In order to emphasize the degree of the domain of Alt, Alt_m is often used for the operator having ⊗^m V as domain.
Similarly, the linear operator Sym is defined as the following linear combination of the maps P(σ):
Sym := (1/m!) Σ_{σ∈Sm} P(σ).
As before, Sym_m is often written to mean the Sym operator having ⊗^m V as domain.
The range of Alt is denoted by ∧^m V, i.e., ∧^m V = Alt(⊗^m V), and is called the Grassmann space of degree m associated with V or the mth exterior power of V.
The range of Sym is denoted by ∨^m V, i.e., ∨^m V = Sym(⊗^m V), and is called the symmetric space of degree m associated with V or the mth symmetric power of V.
By convention, ⊗^0 V = ∧^0 V = ∨^0 V = F.
Assume m ≥ 1. The elements of ∧^m V that are the image under Alt of decomposable tensors of ⊗^m V are called decomposable elements of ∧^m V. If x1, . . . , xm ∈ V, x1 ∧ · · · ∧ xm denotes the decomposable element of ∧^m V,
x1 ∧ · · · ∧ xm = m! Alt(x1 ⊗ · · · ⊗ xm),
and x1 ∧ · · · ∧ xm is called the exterior product of x1, . . . , xm. Similarly, the elements of ∨^m V that are the image under Sym of decomposable tensors of ⊗^m V are called decomposable elements of ∨^m V. If x1, . . . , xm ∈ V, x1 ∨ · · · ∨ xm denotes the decomposable element of ∨^m V,
x1 ∨ · · · ∨ xm = m! Sym(x1 ⊗ · · · ⊗ xm),
and x1 ∨ · · · ∨ xm is called the symmetric product of x1, . . . , xm.
Let (b1, . . . , bn) be a basis of V. If α ∈ Γ_{m,n}, then b⊗α, b∧α, and b∨α denote respectively the tensors
b⊗α = bα(1) ⊗ · · · ⊗ bα(m),   b∧α = bα(1) ∧ · · · ∧ bα(m),   b∨α = bα(1) ∨ · · · ∨ bα(m).
Let n and m be positive integers. An n-composition of m is a sequence
µ = (µ1 , . . . , µn )
of nonnegative integers that sum to m. Let Cm,n be the set of n-compositions of m.
Let λ = (λ1 , . . . , λn ) be an n-composition of m. The integer λ1 ! · · · λn ! will be denoted by λ!.
Let α ∈ m,n . The multiplicity composition of α is the n-tuple of the cardinalities of the fibers of α,
(|α −1 (1)|, . . . , |α −1 (n)|), and is denoted by λα .
Facts:
The following facts can be found in [Mar73, Chap. 2], [Mer97, Chap. 5], and [Spi79, Chap. 7].
1. ∧^m V and ∨^m V are vector subspaces of ⊗^m V.
2. The map σ → P(σ) from the symmetric group of degree m into L(⊗^m V; ⊗^m V) is an F-representation of Sm, i.e., P(στ) = P(σ)P(τ) for any σ, τ ∈ Sm and P(I) = I_{⊗^m V}.
3. Choosing L^m(V; F), with the usual tensor multiplication, as the model for the tensor power ⊗^m V*, the linear operator P(σ) acts on L^m(V; F) by the transformation
P(σ)ψ = ψσ.
4. The linear operators Alt and Sym are projections, i.e., Alt² = Alt and Sym² = Sym.
5. If m = 1, we have Sym = Alt = I_{⊗^1 V} = I_V.
6. ∧^m V = {z ∈ ⊗^m V : P(σ)(z) = sgn(σ)z, ∀σ ∈ Sm}.
7. ∨^m V = {z ∈ ⊗^m V : P(σ)(z) = z, ∀σ ∈ Sm}.
8. Choosing L^m(V; F) as the model for the tensor power ⊗^m V* with the usual tensor multiplication,
∧^m V* = A^m(V; F)   and   ∨^m V* = S^m(V; F).
9. ⊗^1 V = ∧^1 V = ∨^1 V = V.
10. ⊗^2 V = ∧^2 V ⊕ ∨^2 V. Moreover, for z ∈ ⊗^2 V,
z = Alt(z) + Sym(z).
The corresponding equality is no longer true in ⊗^m V if m ≠ 2.
11. ∧^m V = {0} if m > dim(V).
12. If m ≥ 1, any element of ∧^m V is a sum of decomposable elements of ∧^m V.
13. If m ≥ 1, any element of ∨^m V is a sum of decomposable elements of ∨^m V.
14. Alt(P(σ)z) = sgn(σ)Alt(z) and Sym(P(σ)z) = Sym(z), z ∈ ⊗^m V.
15. The map ∧ from V^m into ∧^m V defined for v1, . . . , vm ∈ V by
∧(v1, . . . , vm) = v1 ∧ · · · ∧ vm
is an antisymmetric m-linear map.
16. The map ∨ from V^m into ∨^m V defined for v1, . . . , vm ∈ V by
∨(v1, . . . , vm) = v1 ∨ · · · ∨ vm
is a symmetric m-linear map.
17. (Universal property for ∧^m V) Given an antisymmetric m-linear map ψ from V^m into U, there exists a unique linear map h from ∧^m V into U such that
ψ(v1, . . . , vm) = h(v1 ∧ · · · ∧ vm),   v1, . . . , vm ∈ V,
i.e., there exists a unique linear map h making the triangle formed by ∧ : V^m → ∧^m V, h : ∧^m V → U, and ψ : V^m → U commutative.
18. (Universal property for ∨^m V) Given a symmetric m-linear map ϕ from V^m into U, there exists a unique linear map h from ∨^m V into U such that
ϕ(v1, . . . , vm) = h(v1 ∨ · · · ∨ vm),   v1, . . . , vm ∈ V,
i.e., there exists a unique linear map h making the triangle formed by ∨ : V^m → ∨^m V, h : ∨^m V → U, and ϕ : V^m → U commutative.
Let p and q be positive integers.
19. (Universal property for ⊗^m V, bilinear version) If ψ is a (p + q)-linear map from V^{p+q} into U, then there exists a unique bilinear map χ from ⊗^p V × ⊗^q V into U satisfying (recall Fact 5 in Section 13.2)
χ(v1 ⊗ · · · ⊗ vp, vp+1 ⊗ · · · ⊗ vp+q) = ψ(v1, . . . , vp+q).
20. (Universal property for ∧^m V, bilinear version) If ψ is a (p + q)-linear map from V^{p+q} into U, antisymmetric in the first p variables and antisymmetric in the last q variables, then there exists a unique bilinear map χ from ∧^p V × ∧^q V into U satisfying
χ(v1 ∧ · · · ∧ vp, vp+1 ∧ · · · ∧ vp+q) = ψ(v1, . . . , vp+q).
21. (Universal property for ∨^m V, bilinear version) If ϕ is a (p + q)-linear map from V^{p+q} into U, symmetric in the first p variables and symmetric in the last q variables, then there exists a unique bilinear map χ from ∨^p V × ∨^q V into U satisfying
χ(v1 ∨ · · · ∨ vp, vp+1 ∨ · · · ∨ vp+q) = ϕ(v1, . . . , vp+q).
22. If (b1, . . . , bn) is a basis of V, then (b⊗α)_{α∈Γ_{m,n}} is a basis of ⊗^m V, (b∧α)_{α∈Q_{m,n}} is a basis of ∧^m V, and (b∨α)_{α∈G_{m,n}} is a basis of ∨^m V. These bases are said to be induced by the basis (b1, . . . , bn).
23. Assume L^m(V; F) as the model for the tensor power ⊗^m V*, with the usual tensor multiplication. Let (f1, . . . , fn) be the dual basis of the basis (b1, . . . , bn). Then:
(a) For every ϕ ∈ L^m(V; F), ϕ = Σ_{α∈Γ_{m,n}} ϕ(bα) f⊗α.
(b) For every ϕ ∈ A^m(V; F), ϕ = Σ_{α∈Q_{m,n}} ϕ(bα) f∧α.
(c) For every ϕ ∈ S^m(V; F), ϕ = Σ_{α∈G_{m,n}} (1/λα!) ϕ(bα) f∨α.
24. dim ⊗^m V = n^m, dim ∧^m V = (n choose m), and dim ∨^m V = (n+m−1 choose m).
25. The family
((µ1 b1 + · · · + µn bn) ∨ · · · ∨ (µ1 b1 + · · · + µn bn))_{µ∈C_{m,n}}
is a basis of ∨^m V [Mar73, Chap. 3].
26. Let x1 , . . . , xm be vectors of V and g 1 , . . . , g m forms of V ∗ . Let ai j = g i (x j ), i, j = 1, . . . , m. Then,
choosing ( m V )∗ as the model for m V ∗ with tensor multiplication as described in Fact 10 in
Section 13.4,
g 1 ⊗ · · · ⊗ g m (x1 ∧ · · · ∧ xm ) = det[ai j ].
27. Under the same conditions as in the former fact,
g 1 ⊗ · · · ⊗ g m (x1 ∨ · · · ∨ xm ) = per[ai j ].
28. Let (f1, . . . , fn) be the dual basis of the basis (b1, . . . , bn). Then, choosing (⊗^m V)* as the model for ⊗^m V*:
(a) (f⊗α)_{α∈Γ_{m,n}} is the dual basis of the basis (b⊗α)_{α∈Γ_{m,n}} of ⊗^m V.
(b) (f⊗α |_{∧^m V})_{α∈Q_{m,n}} is the dual basis of the basis (b∧α)_{α∈Q_{m,n}} of ∧^m V.
(c) ((1/λα!) f⊗α |_{∨^m V})_{α∈G_{m,n}} is the dual basis of the basis (b∨α)_{α∈G_{m,n}} of ∨^m V.
Let v1 , . . . , vm be vectors of V and (b1 , . . . , bn ) be a basis of V .
29. Let A = [aij] be the n × m matrix over F such that vj = Σ_{i=1}^n aij bi, j = 1, . . . , m. Then:
(a) v1 ⊗ · · · ⊗ vm = Σ_{α∈Γ_{m,n}} (∏_{t=1}^m aα(t),t) b⊗α;
(b) v1 ∧ · · · ∧ vm = Σ_{α∈Q_{m,n}} det A[α|ι] b∧α;
(c) v1 ∨ · · · ∨ vm = Σ_{α∈G_{m,n}} (1/λα!) per A[α|ι] b∨α.
30. v1 ∧ · · · ∧ vm = 0 if and only if (v1 , . . . , vm ) is linearly dependent.
31. v1 ∨ · · · ∨ vm = 0 if and only if one of the vi s is equal to 0.
32. Let u1 , . . . , um be vectors of V .
(a) If (v1 , . . . , vm ) and (u1 , . . . , um ) are linearly independent families, then
Span({u1 ∧ · · · ∧ um }) = Span({v1 ∧ · · · ∧ vm })
if and only if
Span({u1 , . . . , um }) = Span({v1 , . . . , vm }).
(b) If (v1 , . . . , vm ) and (u1 , . . . , um ) are families of nonzero vectors of V , then
Span({v1 ∨ · · · ∨ vm }) = Span({u1 ∨ · · · ∨ um })
if and only if there exists a permutation σ of Sm satisfying
Span({vi }) = Span({uσ (i ) }),
i = 1, . . . , m.
Examples:
1. If m = 1, we have Sym = Alt = I_{⊗^1 V} = I_V.
2. Consider as a model of ⊗^2 F^n the vector space of the n × n matrices over F with the usual tensor multiplication. Then ∧^2 F^n is the subspace of the n × n antisymmetric matrices over F and ∨^2 F^n is the subspace of the n × n symmetric matrices over F. Moreover, for (a1, . . . , an), (b1, . . . , bn) ∈ F^n:
(a) (a1, . . . , an) ∧ (b1, . . . , bn) = [ai bj − bi aj]_{i,j=1,...,n}.
(b) (a1, . . . , an) ∨ (b1, . . . , bn) = [ai bj + bi aj]_{i,j=1,...,n}.
With these definitions, ei ∧ ej = Eij − Eji and ei ∨ ej = Eij + Eji, where ei, ej, and Eij are standard basis vectors of F^n and F^{n×n}. (A numerical sketch of these formulas follows Example 3 below.)
3. For x ∈ V , x ∨ · · · ∨ x = m!x ⊗ · · · ⊗ x.
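A NumPy sketch of Example 2 (m = 2, the n × n matrices as a model of ⊗²F^n); illustrative only. In this model the transposition permutation acts by matrix transposition, so Alt and Sym become the antisymmetrizing and symmetrizing projections.

```python
import numpy as np

n = 4
rng = np.random.default_rng(4)
a, b = rng.standard_normal(n), rng.standard_normal(n)

# In the matrix model of (x)^2 F^n, P(transposition) is matrix transposition, hence:
Alt = lambda Z: (Z - Z.T) / 2          # alternator on 2-tensors
Sym = lambda Z: (Z + Z.T) / 2          # symmetrizer on 2-tensors

wedge = np.outer(a, b) - np.outer(b, a)        # a ^ b = 2! Alt(a (x) b)
vee   = np.outer(a, b) + np.outer(b, a)        # a v b = 2! Sym(a (x) b)
assert np.allclose(wedge, 2 * Alt(np.outer(a, b)))
assert np.allclose(vee,   2 * Sym(np.outer(a, b)))

# e_i ^ e_j = E_ij - E_ji
i, j = 0, 2
E = np.zeros((n, n)); E[i, j] = 1.0
assert np.allclose(np.outer(np.eye(n)[i], np.eye(n)[j]) - np.outer(np.eye(n)[j], np.eye(n)[i]),
                   E - E.T)
```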
13.7 The Tensor Multiplication, the Alt Multiplication, and the Sym Multiplication
Next we introduce "external multiplications" for tensor powers, Grassmann spaces, and symmetric spaces. Let p, q be positive integers.
Definitions:
The (p, q)-tensor multiplication is the unique bilinear map (z, z′) → z ⊗ z′ from (⊗^p V) × (⊗^q V) into ⊗^{p+q} V satisfying
(v1 ⊗ · · · ⊗ vp) ⊗ (vp+1 ⊗ · · · ⊗ vp+q) = v1 ⊗ · · · ⊗ vp+q.
The (p, q)-alt multiplication (briefly, alt multiplication) is the unique bilinear map (recall Fact 20 in Section 13.6) (z, z′) → z ∧ z′ from (∧^p V) × (∧^q V) into ∧^{p+q} V satisfying
(v1 ∧ · · · ∧ vp) ∧ (vp+1 ∧ · · · ∧ vp+q) = v1 ∧ · · · ∧ vp+q.
The (p, q)-sym multiplication (briefly, sym multiplication) is the unique bilinear map (recall Fact 21 in Section 13.6) (z, z′) → z ∨ z′ from (∨^p V) × (∨^q V) into ∨^{p+q} V satisfying
(v1 ∨ · · · ∨ vp) ∨ (vp+1 ∨ · · · ∨ vp+q) = v1 ∨ · · · ∨ vp+q.
These definitions can be extended to include the cases where either p or q is zero, taking as multiplication
the scalar product.
Let m, n be positive integers satisfying 1 ≤ m < n. Let α ∈ Q_{m,n}. We denote by α^c the element of Q_{n−m,n} whose range is the complement in {1, . . . , n} of the range of α, and by α̃ the permutation of Sn
α̃ = ( 1      · · ·   m       m+1       · · ·   n
       α(1)   · · ·   α(m)    α^c(1)    · · ·   α^c(n−m) ).
Facts:
The following facts can be found in [Mar73, Chap. 2], [Mer97, Chap. 5], and in [Spi79, Chap. 7].
1. The value of the alt multiplication for arbitrary elements z ∈ ∧^p V and z′ ∈ ∧^q V is given by
z ∧ z′ = ((p + q)! / (p! q!)) Alt_{p+q}(z ⊗ z′).
2. The product of z ∈ ∨^p V and z′ ∈ ∨^q V by the sym multiplication is given by
z ∨ z′ = ((p + q)! / (p! q!)) Sym_{p+q}(z ⊗ z′).
3. The alt multiplication z ∧ z′ and the sym multiplication z ∨ z′ are not, in general, decomposable elements of any Grassmann or symmetric space of degree 2.
4. Let 0 ≠ z ∈ ∧^m V. Then z is decomposable if and only if there exists a linearly independent family of vectors v1, . . . , vm satisfying z ∧ vi = 0, i = 1, . . . , m.
5. If dim(V) = n, all elements of ∧^{n−1} V are decomposable.
6. The multiplications defined in this subsection are associative. Therefore, the expressions
z ⊗ z′ ⊗ z′′,   z ∈ ⊗^p V,   z′ ∈ ⊗^q V,   z′′ ∈ ⊗^r V;
w ∧ w′ ∧ w′′,   w ∈ ∧^p V,   w′ ∈ ∧^q V,   w′′ ∈ ∧^r V;
y ∨ y′ ∨ y′′,   y ∈ ∨^p V,   y′ ∈ ∨^q V,   y′′ ∈ ∨^r V
are meaningful, as are similar expressions with more than three factors.
7. If w ∈ ∧^p V and w′ ∈ ∧^q V, then
w ∧ w′ = (−1)^{pq} w′ ∧ w.
8. If y ∈ ∨^p V and y′ ∈ ∨^q V, then
y ∨ y′ = y′ ∨ y.
Examples:
1. When the vector space is the dual V ∗ = L (V ; F ) of a vector space and we choose as the models of
tensor powers of V ∗ the spaces of multilinear forms (with the usual tensor multiplication), then
the image of the tensor multiplication ϕ ⊗ ψ (ϕ ∈ L p (V ; F ) and ψ ∈ L q (V ; F )) on (v1 , . . . , v p+q )
is given by the equality
(ϕ ⊗ ψ)(v1 , . . . , v p+q ) = ϕ(v1 , . . . , v p )ψ(v p+1 , . . . , v p+q ).
2. When the vector space is the dual V ∗ = L (V ; F ) of a vector space and we choose as the models for
the tensor powers of V ∗ the spaces of multilinear forms (with the usual tensor multiplication), the
alt multiplication of ϕ ∈ A p (V ; F ) and ψ ∈ Aq (V ; F ) takes the form
(ϕ ∧ ψ)(v1, . . . , vp+q) = (1 / (p! q!)) Σ_{σ∈S_{p+q}} sgn(σ) ϕ(vσ(1), . . . , vσ(p)) ψ(vσ(p+1), . . . , vσ(p+q)).
3. The equality in Example 2 has an alternative expression that can be seen as a "Laplace expansion" for antisymmetric forms (a numerical comparison of the two expressions is sketched after Example 7 below):
(ϕ ∧ ψ)(v1, . . . , vp+q) = Σ_{α∈Q_{p,p+q}} sgn(α̃) ϕ(vα(1), . . . , vα(p)) ψ(vα^c(1), . . . , vα^c(q)).
4. In the case p = 1, the equality in Example 3 has the form
(ϕ ∧ ψ)(v1, . . . , vq+1) = Σ_{j=1}^{q+1} (−1)^{j+1} ϕ(vj) ψ(v1, . . . , vj−1, vj+1, . . . , vq+1).
5. When the vector space is the dual V ∗ = L (V ; F ) of a vector space and we choose as the models
of tensor powers of V ∗ the spaces of multilinear forms (with the usual tensor multiplication), the
value of sym multiplication of ϕ ∈ S p (V ; F ) and ψ ∈ S q (V ; F ) on (v1 , . . . , v p+q ) is
(ϕ ∨ ψ)(v1, . . . , vp+q) = (1 / (p! q!)) Σ_{σ∈S_{p+q}} ϕ(vσ(1), . . . , vσ(p)) ψ(vσ(p+1), . . . , vσ(p+q)).
6. The equality in Example 5 has an alternative expression that can be seen as a “Laplace expansion”
for symmetric forms
(ϕ ∨ ψ)(v1, . . . , vp+q) = Σ_{α∈Q_{p,p+q}} ϕ(vα(1), . . . , vα(p)) ψ(vα^c(1), . . . , vα^c(q)).
7. In the case p = 1, the equality in Example 6 has the form
(ϕ ∨ ψ)(v1, . . . , vq+1) = Σ_{j=1}^{q+1} ϕ(vj) ψ(v1, . . . , vj−1, vj+1, . . . , vq+1).
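The following NumPy sketch (illustrative, with hypothetical helper names) compares the full-sum formula of Example 2 with the Laplace expansion of Examples 3 and 4, for p = 1, q = 2 and antisymmetric forms on R^4.

```python
import numpy as np
from itertools import permutations
from math import factorial

rng = np.random.default_rng(5)
n, p, q = 4, 1, 2
f = rng.standard_normal(n)                      # phi in A^1(V; F):  phi(v) = f . v
M = rng.standard_normal((n, n)); M = M - M.T    # psi in A^2(V; F):  psi(u, v) = u^T M v (antisymmetric)
phi = lambda v: f @ v
psi = lambda u, v: u @ M @ v
vs = [rng.standard_normal(n) for _ in range(p + q)]

def sgn(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    P = np.eye(len(perm))[list(perm)]
    return round(np.linalg.det(P))

# Example 2: sum over S_{p+q}, divided by p! q!
full = sum(sgn(s) * phi(vs[s[0]]) * psi(vs[s[1]], vs[s[2]])
           for s in permutations(range(p + q))) / (factorial(p) * factorial(q))

# Example 4 (Laplace expansion for p = 1): sum over the position of the phi-argument
laplace = sum((-1) ** j * phi(vs[j]) * psi(*[vs[i] for i in range(p + q) if i != j])
              for j in range(p + q))
assert np.isclose(full, laplace)
```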
13.8 Associated Maps
Definitions:
Let θ ∈ L(V; U). The linear map θ ⊗ · · · ⊗ θ from ⊗^m V into ⊗^m U (the tensor product of m copies of θ) will be denoted by ⊗^m θ. The subspaces ∧^m V and ∨^m V are mapped by ⊗^m θ into ∧^m U and ∨^m U, respectively. The restrictions of ⊗^m θ to ∧^m V and to ∨^m V will be denoted, respectively, by ∧^m θ and ∨^m θ.
Facts:
The following facts can be found in [Mar73, Chap. 2].
1. Let v1, . . . , vm ∈ V. The following properties hold:
(a) ∧^m θ(v1 ∧ · · · ∧ vm) = θ(v1) ∧ · · · ∧ θ(vm).
(b) ∨^m θ(v1 ∨ · · · ∨ vm) = θ(v1) ∨ · · · ∨ θ(vm).
2. Let θ ∈ L(V; U) and η ∈ L(W; V). The following equalities hold:
(a) ∧^m(θη) = (∧^m θ)(∧^m η).
(b) ∨^m(θη) = (∨^m θ)(∨^m η).
3. ∧^m(I_V) = I_{∧^m V};  ∨^m(I_V) = I_{∨^m V}.
4. Let θ, η ∈ L(V; U) and assume that rank(θ) > m. Then
∧^m θ = ∧^m η
if and only if θ = aη and a^m = 1.
5. Let θ, η ∈ L(V; U). Then ∨^m θ = ∨^m η if and only if θ = aη and a^m = 1.
6. If θ is one-to-one (respectively onto), then ∧^m θ and ∨^m θ are one-to-one (respectively onto).
From now on, θ is a linear operator on the n-dimensional vector space V.
7. Considering ∧^n θ as an operator on the one-dimensional space ∧^n V,
(∧^n θ)(z) = det(θ) z, for all z ∈ ∧^n V.
8. If the characteristic polynomial of θ is
pθ(x) = x^n + Σ_{i=1}^n (−1)^i ai x^{n−i},
then
ai = tr(∧^i θ),   i = 1, . . . , n.
9. If θ has spectrum σ(θ) = {λ1, . . . , λn}, then
σ(∧^m θ) = { ∏_{i=1}^m λα(i) : α ∈ Q_{m,n} },   σ(∨^m θ) = { ∏_{i=1}^m λα(i) : α ∈ G_{m,n} }.
10. det(∧^m θ) = det(θ)^{(n−1 choose m−1)},   det(∨^m θ) = det(θ)^{(m+n−1 choose m−1)}.
Examples:
1. Let A be the matrix of the linear operator θ ∈ L(V; V) with respect to the basis (b1, . . . , bn). The linear operator on ∧^m V whose matrix with respect to the basis (b∧α)_{α∈Q_{m,n}} is the mth compound of A is ∧^m θ.
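A NumPy sketch (the helper name is illustrative) of the mth compound matrix, whose entries are the m × m minors det A[α|β], α, β ∈ Q_{m,n}; the checks illustrate Fact 8 (ai = tr(∧^i θ)) and Fact 10 (the Sylvester–Franke determinant formula).

```python
import numpy as np
from itertools import combinations
from math import comb

def compound(A, m):
    """mth compound matrix: entries are the m x m minors det A[alpha|beta], alpha, beta in Q_{m,n}."""
    n = A.shape[0]
    idx = list(combinations(range(n), m))
    C = np.empty((len(idx), len(idx)))
    for i, rows in enumerate(idx):
        for j, cols in enumerate(idx):
            C[i, j] = np.linalg.det(A[np.ix_(rows, cols)])
    return C

rng = np.random.default_rng(6)
n, m = 4, 2
A = rng.standard_normal((n, n))
Cm = compound(A, m)

# Fact 10 (Sylvester-Franke): det(^m A) = det(A)^C(n-1, m-1)
assert np.isclose(np.linalg.det(Cm), np.linalg.det(A) ** comb(n - 1, m - 1))

# Fact 8: in det(xI - A) = x^n + sum_i (-1)^i a_i x^(n-i), the coefficient a_m equals tr(^m A),
# i.e., the sum of the m x m principal minors.  np.poly returns the coefficients of det(xI - A).
coeffs = np.poly(A)
a_m = (-1) ** m * coeffs[m]
assert np.isclose(a_m, np.trace(Cm))
```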
13.9 Tensor Algebras
Definitions:
Let A be an F-algebra and (Ak)_{k∈N} a family of vector subspaces of A. The algebra A is graded by (Ak)_{k∈N} if the following conditions are satisfied:
(a) A = ⊕_{k∈N} Ak.
(b) Ai Aj ⊆ A_{i+j} for every i, j ∈ N.
The elements of Ak are known as homogeneous of degree k, and the elements of ∪_{k∈N} Ak are called homogeneous.
By condition (a), every element of A can be written uniquely as a sum of (a finite number of nonzero) homogeneous elements, i.e., given u ∈ A there exist uniquely determined uk ∈ Ak, k ∈ N, satisfying
u = Σ_{k∈N} uk.
These elements are called the homogeneous components of u. The summand of degree k in the former equation is denoted by [u]k.
From now on, V is a finite dimensional vector space over F of dimension n. As before, ⊗^k V denotes the kth tensor power of V.
Denote by ⊗V the external direct sum of the vector spaces ⊗^k V, k ∈ N. If zi ∈ ⊗^i V, zi is identified with the sequence z ∈ ⊗V whose ith coordinate is zi and whose remaining coordinates are 0. Therefore, after this identification,
⊗V = ⊕_{k∈N} ⊗^k V.
Consider in ⊗V the multiplication (x, y) → x ⊗ y defined for x, y ∈ ⊗V by
[x ⊗ y]k = Σ_{r,s∈N, r+s=k} [x]r ⊗ [y]s,   k ∈ N,
where [x]r ⊗ [y]s is the (r, s)-tensor multiplication of [x]r and [y]s introduced in the definitions of Section 13.7. The vector space ⊗V equipped with this multiplication is called the tensor algebra on V.
Denote by ∧V the external direct sum of the vector spaces ∧^k V, k ∈ N. If zi ∈ ∧^i V, zi is identified with the sequence z ∈ ∧V whose ith coordinate is zi and whose remaining coordinates are 0. Therefore, after this identification,
∧V = ⊕_{k∈N} ∧^k V.
Recall that ∧^k V = {0} if k > n. Then
∧V = ⊕_{k=0}^n ∧^k V,
and the elements of ∧V can be written uniquely in the form
z0 + z1 + · · · + zn,   zi ∈ ∧^i V,   i = 0, . . . , n.
Consider in ∧V the multiplication (x, y) → x ∧ y defined, for x, y ∈ ∧V, by
[x ∧ y]k = Σ_{r,s∈{0,...,n}, r+s=k} [x]r ∧ [y]s,   k ∈ {0, . . . , n},
where [x]r ∧ [y]s is the (r, s)-alt multiplication of [x]r and [y]s referred to in the definitions of Section 13.7. The vector space ∧V equipped with this multiplication is called the Grassmann algebra on V.
Denote by ∨V the external direct sum of the vector spaces ∨^k V, k ∈ N. If zi ∈ ∨^i V, we identify zi with the sequence z ∈ ∨V whose ith coordinate is zi and whose remaining coordinates are 0. Therefore, after this identification,
∨V = ⊕_{k∈N} ∨^k V.
Consider in ∨V the multiplication (x, y) → x ∨ y defined for x, y ∈ ∨V by
[x ∨ y]k = Σ_{r,s∈N, r+s=k} [x]r ∨ [y]s,   k ∈ N,
where [x]r ∨ [y]s is the (r, s)-sym multiplication of [x]r and [y]s referred to in the definitions of Section 13.7. The vector space ∨V equipped with this multiplication is called the symmetric algebra on V.
Facts:
The following facts can be found in [Mar73, Chap. 3] and [Gre67, Chaps. II and III].
1. The vector space ⊗V with the multiplication (x, y) → x ⊗ y is an algebra over F graded by (⊗^k V)_{k∈N}, whose identity is the identity of F = ⊗^0 V.
2. The vector space ∧V with the multiplication (x, y) → x ∧ y is an algebra over F graded by (∧^k V)_{k∈N}, whose identity is the identity of F = ∧^0 V.
3. The vector space ∨V with the multiplication (x, y) → x ∨ y is an algebra over F graded by (∨^k V)_{k∈N}, whose identity is the identity of F = ∨^0 V.
4. The F-algebra ⊗V does not have zero divisors.
5. Let B be an F-algebra and θ a linear map from V into B satisfying θ(x)θ(y) = −θ(y)θ(x) for all x, y ∈ V. Then there exists a unique algebra homomorphism h from ∧V into B satisfying h|V = θ.
6. Let B be an F-algebra and θ a linear map from V into B satisfying θ(x)θ(y) = θ(y)θ(x) for all x, y ∈ V. Then there exists a unique algebra homomorphism h from ∨V into B satisfying h|V = θ.
7. Let (b1, . . . , bn) be a basis of V. The symmetric algebra ∨V is isomorphic to the algebra of polynomials in n indeterminates, F[x1, . . . , xn], by the algebra isomorphism whose restriction to V is the linear map that maps bi to xi, i = 1, . . . , n.
Examples:
1. Let x1, . . . , xn be n distinct indeterminates. Let V be the vector space of the formal linear combinations of the indeterminates x1, . . . , xn with coefficients in F. The tensor algebra on V is the algebra of polynomials in the noncommuting indeterminates x1, . . . , xn ([Coh03], [Jac64]). This algebra is denoted by F⟨x1, . . . , xn⟩. The elements of this algebra are of the form
f(x1, . . . , xn) = Σ_{m∈N} Σ_{α∈Γ_{m,n}} cα xα(1) ⊗ · · · ⊗ xα(m),
with all but a finite number of the coefficients cα equal to zero.
13.10 Tensor Product of Inner Product Spaces
Unless otherwise stated, within this section V, U, and W, as well as these letters with subscripts, superscripts, or accents, are finite dimensional vector spaces over R or over C, equipped with an inner product. The inner product of V is denoted by ⟨·, ·⟩_V. When there is no risk of confusion, ⟨·, ·⟩ is used instead. In this section, F means either the field R or the field C.
Definitions:
Let θ be a linear map from V into W. The notation θ ∗ will be used for the adjoint of θ (i.e., the linear map
from W into V satisfying θ(x), y = x, θ ∗ (y) for all x ∈ V and y ∈ W).
The unique inner product ⟨·, ·⟩ on V1 ⊗ · · · ⊗ Vm satisfying, for every vi, ui ∈ Vi, i = 1, . . . , m,
⟨v1 ⊗ · · · ⊗ vm, u1 ⊗ · · · ⊗ um⟩ = ∏_{i=1}^m ⟨vi, ui⟩_{Vi},
is called the induced inner product associated with the inner products ⟨·, ·⟩_{Vi}, i = 1, . . . , m.
For each v ∈ V, let fv ∈ V* be defined by fv(u) = ⟨u, v⟩. The inverse of the map v → fv is denoted by ΦV (briefly Φ). The inner product on V*, defined by
⟨f, g⟩ = ⟨Φ(g), Φ(f)⟩_V,
is called the dual of ⟨·, ·⟩_V.
Let U, V be inner product spaces over F. We consider L(V; U) equipped with the Hilbert–Schmidt inner product, i.e., the inner product defined, for θ, η ∈ L(V; U), by ⟨θ, η⟩ = tr(η*θ).
From now on, V1 ⊗ · · · ⊗ Vm is assumed to be equipped with the inner product induced by the inner products ⟨·, ·⟩_{Vi}, i = 1, . . . , m.
Facts:
The following facts can be found in [Mar73, Chap. 2].
1. The map v → fv is bijective; it is linear if F = R and conjugate-linear (i.e., fcv = c̄ fv) if F = C.
2. If (bi1, . . . , bi ni) is an orthonormal basis of Vi, i = 1, . . . , m, then {b⊗α : α ∈ Γ(n1, . . . , nm)} is an orthonormal basis of V1 ⊗ · · · ⊗ Vm.
3. Let θi ∈ L (Vi ; Wi ), i = 1, . . . , m, with adjoint map θi∗ ∈ L (Wi , Vi ). Then,
(θ1 ⊗ · · · ⊗ θm )∗ = θ1∗ ⊗ · · · ⊗ θm∗ .
4. If θi ∈ L(Vi; Vi) is Hermitian (normal, unitary), i = 1, . . . , m, then θ1 ⊗ · · · ⊗ θm is also Hermitian (normal, unitary).
5. Let θ ∈ L(V; V). If ⊗^m θ (or ∨^m θ) is normal, then θ is normal.
6. Let θ ∈ L(V; V). Assume that θ is a linear operator on V with rank greater than m. If ∧^m θ is normal, then θ is normal.
7. If u1, . . . , um, v1, . . . , vm ∈ V:
⟨u1 ∧ · · · ∧ um, v1 ∧ · · · ∧ vm⟩ = m! det[⟨ui, vj⟩],
⟨u1 ∨ · · · ∨ um, v1 ∨ · · · ∨ vm⟩ = m! per[⟨ui, vj⟩].
8. Let (b1, . . . , bn) be an orthonormal basis of V. Then the basis (b⊗α)_{α∈Γ_{m,n}} is an orthonormal basis of ⊗^m V, ((1/√(m!)) b∧α)_{α∈Q_{m,n}} is an orthonormal basis of ∧^m V, and ((1/√(m! λα!)) b∨α)_{α∈G_{m,n}} is an orthonormal
Examples:
The field F (recall that F = R or F = C) has an inner product, (a, b) → ⟨a, b⟩ = a b̄. This inner product is called the standard inner product on F, and it is the one F is assumed to carry from now on.
1. When we choose F as the mth tensor power of F with the field multiplication as the tensor multiplication, the inner product induced on ⊗^m F by the standard inner product of F is again the standard inner product.
2. When we assume V as the tensor product of F and V with the tensor multiplication a ⊗ v = av,
the inner product induced by the canonical inner product of F and the inner product of V is the
inner product of V .
3. Consider L (V ; U ) as the tensor product of U and V ∗ by the tensor multiplication (u ⊗ f )(v) =
f (v)u. Assume in V ∗ the inner product dual of the inner product of V . Then, if (v1 , . . . , vn ) is an
orthonormal basis of V and θ, η ∈ L (V ; U ), we have
⟨θ, η⟩ = Σ_{j=1}^n ⟨θ(vj), η(vj)⟩ = tr(η*θ),
i.e., the induced inner product of L(V; U) is the Hilbert–Schmidt one.
4. Consider F^{m×n} as the tensor product of F^m and F^n by the tensor multiplication described in Example 1 in Section 13.2. Then, if we consider in F^m and F^n the usual inner products, the induced inner product on F^{m×n} is
(A, B) → tr(B*A) = Σ_{i,j} aij \overline{bij}.
5. Assume that Vi* carries the inner product dual of ⟨·, ·⟩_{Vi}, i = 1, . . . , m, and let (bi1, . . . , bi ni) be an orthonormal basis of Vi, i = 1, . . . , m. Then, choosing L(V1, . . . , Vm; F) as the tensor product of V1*, . . . , Vm* with the usual tensor multiplication, the inner product of L(V1, . . . , Vm; F) induced by the duals of the inner products on Vi, i = 1, . . . , m, is given by
⟨ϕ, ψ⟩ = Σ_{α∈Γ} ϕ(b1,α(1), . . . , bm,α(m)) \overline{ψ(b1,α(1), . . . , bm,α(m))}.
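A small NumPy check of the induced inner product (illustrative only): on F^m ⊗ F^n, realized in coordinates via the Kronecker product of vectors, ⟨u1 ⊗ u2, v1 ⊗ v2⟩ = ⟨u1, v1⟩⟨u2, v2⟩; in the matrix model of Example 4 this is the inner product tr(B*A).

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 3, 4
cplx = lambda k: rng.standard_normal(k) + 1j * rng.standard_normal(k)
u1, v1, u2, v2 = cplx(m), cplx(m), cplx(n), cplx(n)

ip = lambda a, b: np.vdot(b, a)          # <a, b> = sum_i a_i * conj(b_i)

lhs = ip(np.kron(u1, u2), np.kron(v1, v2))          # induced inner product in coordinates
rhs = ip(u1, v1) * ip(u2, v2)
assert np.isclose(lhs, rhs)

# Matrix model: u1 (x) u2 is the outer product, and the induced inner product is tr(B* A).
A, B = np.outer(u1, u2), np.outer(v1, v2)
assert np.isclose(lhs, np.trace(B.conj().T @ A))
```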
13.11 Orientation and Hodge Star Operator
In this section, we assume that all vector spaces are real finite dimensional inner product spaces.
Definitions:
Let V be a one-dimensional vector space. The equivalence classes of the equivalence relation ∼, defined by the condition v ∼ v′ if there exists a positive real number a > 0 such that v′ = av, partition the set of nonzero vectors of V into two subsets.
Each one of these subsets is known as an open half-line.
An orientation of V is a choice of one of these subsets. The fixed open half-line is called the positive
half-line and its vectors are known as positive. The other open half-line of V is called negative half-line,
and its vectors are also called negative.
The field R, regarded as a one-dimensional vector space, has a "natural" orientation that corresponds to choosing the set of positive numbers as the positive half-line.
If V is an n-dimensional vector space, ∧^n V is a one-dimensional vector space (recall Fact 22 in Section 13.6). An orientation of V is an orientation of ∧^n V.
A basis (b1 , . . . , bn ) of V is said to be positively oriented if b1 ∧ · · · ∧ bn is positive and negatively
oriented if b1 ∧ · · · ∧ bn is negative.
Throughout this section, ∧^m V will be equipped with the inner product ⟨·, ·⟩_∧, a positive multiple of the induced inner product, defined by
⟨z, w⟩_∧ = (1/m!) ⟨z, w⟩,
where the inner product on the right-hand side of the former equality is the inner product of ⊗^m V induced by the inner product of V. This is also the inner product that is considered whenever the norm of antisymmetric tensors is referred to.
The positive tensor of norm 1 of ∧^n V, uV, is called the fundamental tensor of V or the element of volume of V.
Let V be a real oriented inner product space. Let 0 ≤ m ≤ n.
The Hodge star operator is the linear operator ⋆m (denoted also by ⋆) from ∧^m V into ∧^{n−m} V defined by the following condition:
⟨⋆m(w′), w⟩_∧ uV = w′ ∧ w, for all w ∈ ∧^{n−m} V.
Let n ≥ 1 and let V be an n-dimensional oriented inner product space over R. The external product on V is the map
(v1, . . . , vn−1) → v1 × · · · × vn−1 = ⋆n−1(v1 ∧ · · · ∧ vn−1),
from V^{n−1} into V.
Facts:
The following facts can be found in [Mar75, Chap. 1] and [Sch75, Chap. 1].
1. If (b1 , . . . , bn ) is a positively oriented orthonormal basis of V , then uV = b1 ∧ · · · ∧ bn .
2. If (b1, . . . , bn) is a positively oriented orthonormal basis of V, then
⋆m b∧α = sgn(α̃) b∧_{α^c},   α ∈ Q_{m,n},
where α̃ and α^c are defined in Section 13.7.
3. Let (v1, . . . , vn) and (u1, . . . , un) be two bases of V and vj = Σ_{i=1}^n aij ui, j = 1, . . . , n. Let A = [aij]. Since (recall Fact 29 in Section 13.6)
v1 ∧ · · · ∧ vn = det(A) u1 ∧ · · · ∧ un,
the two bases have the same orientation if and only if their transition matrix has a positive determinant.
4. ⋆ is an isometric isomorphism.
5. ⋆0 is the linear isomorphism that maps 1 ∈ R onto the fundamental tensor.
6. ⋆m ⋆n−m = (−1)^{m(n−m)} I_{∧^{n−m} V}.
Let V be an n-dimensional oriented inner product space over R.
7. If m ≠ 0 and m ≠ n, the Hodge star operator maps the set of decomposable elements of ∧^m V onto the set of decomposable elements of ∧^{n−m} V.
8. Let (x1, . . . , xm) be a linearly independent family of vectors of V. Then
y1 ∧ · · · ∧ yn−m = ⋆m(x1 ∧ · · · ∧ xm)
if and only if the following three conditions hold:
(a) y1, . . . , yn−m ∈ Span({x1, . . . , xm})⊥;
(b) ‖y1 ∧ · · · ∧ yn−m‖ = ‖x1 ∧ · · · ∧ xm‖;
(c) (x1, . . . , xm, y1, . . . , yn−m) is a positively oriented basis of V.
9. If (v1 , . . . , vn−1 ) is linearly independent, v1 ×· · ·×vn−1 is completely characterized by the following
three conditions:
(a) v1 × · · · × vn−1 ∈ Span({v1 , . . . , vn−1 })⊥ .
(b) ‖v1 × · · · × vn−1‖ = ‖v1 ∧ · · · ∧ vn−1‖.
(c) (v1 , . . . , vn−1 , v1 × · · · × vn−1 ) is a positively oriented basis of V .
10. Assume V* = L(V; F), with dim(V) ≥ 1, is equipped with the dual inner product. Consider L^m(V; F) as a model for the mth tensor power of V* with the usual tensor multiplication. Then ∧^m V* = A^m(V; F). If λ is an antisymmetric form in A^m(V; F), then ⋆m(λ) is the form whose value at (v1, . . . , vn−m) is the component on the fundamental tensor of λ ∧ Φ^{−1}(v1) ∧ · · · ∧ Φ^{−1}(vn−m), where Φ is the map defined in the definitions of Section 13.10:
⋆m(λ)(v1, . . . , vn−m) uV* = λ ∧ Φ^{−1}(v1) ∧ · · · ∧ Φ^{−1}(vn−m).
11. Assuming the above setting for the Hodge star operator, the external product of v1, . . . , vn−1 is the image under Φ of the form (uV*)v1,...,vn−1 (recall that (uV*)v1,...,vn−1(vn) = uV*(v1, . . . , vn−1, vn)), i.e.,
v1 × · · · × vn−1 = Φ((uV*)v1,...,vn−1).
The preceding formula can be unfolded by stating that, for each v ∈ V, ⟨v, v1 × · · · × vn−1⟩ = uV*(v1, . . . , vn−1, v).
Examples:
1. If V has dimension 0, the isomorphism ⋆0 from ∧^0 V = R into ∧^0 V = R is either the identity (in the case we choose the natural orientation of V) or −I (in the case we fix the nonnatural orientation of V).
2. When V has dimension 2, the isomorphism ⋆1 is usually denoted by J. It has the property J² = −I and corresponds to the positively oriented rotation by π/2.
3. Assume that V has dimension 2. Then the external product is the isomorphism J.
4. If dim(V ) = 3, the external product is the well-known cross product.
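A NumPy sketch for n = 3 (illustrative; the helper name is not from the text): the coordinates of v1 ∧ v2 are the 2 × 2 minors (Fact 29(b) in Section 13.6), and Fact 2 gives ⋆2 b∧α = sgn(α̃) b_(α^c); combining the two reproduces the classical cross product, in agreement with Example 4.

```python
import numpy as np
from itertools import combinations

def external_product(v1, v2):
    """star_2(v1 ^ v2) on R^3 with the standard positively oriented orthonormal basis."""
    A = np.column_stack([v1, v2])                       # 3 x 2 matrix with columns v1, v2
    out = np.zeros(3)
    for rows in combinations(range(3), 2):              # alpha in Q_{2,3} (0-based)
        minor = np.linalg.det(A[list(rows), :])         # det A[alpha | iota]
        comp = ({0, 1, 2} - set(rows)).pop()            # alpha^c
        sign = (-1) ** sum(r - i for i, r in enumerate(rows))   # sgn(alpha~)
        out[comp] += sign * minor
    return out

rng = np.random.default_rng(8)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(external_product(v1, v2), np.cross(v1, v2))
```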
References
[Bou89] N. Bourbaki, Algebra, Springer-Verlag, Berlin (1989).
[Coh03] P. M. Cohn, Basic Algebra–Groups Rings and Fields, Springer-Verlag, London (2003).
[Dol04] Igor V. Dolgachev, Lectures on Invariant Theory. Online publication, 2004. Cambridge University Press, Cambridge–New York (1982).
[Gre67] W. H. Greub, Multilinear Algebra, Springer-Verlag, Berlin (1967).
[Jac64] Nathan Jacobson, Structure of Rings, American Mathematical Society Publications, Volume
XXXVII, Providence, RI (1964).
[Mar73] Marvin Marcus, Finite Dimensional Multilinear Algebra, Part I, Marcel Dekker, New York (1973).
[Mar75] Marvin Marcus, Finite Dimensional Multilinear Algebra, Part II, Marcel Dekker, New York (1975).
[Mer97] Russell Merris, Multilinear Algebra, Gordon and Breach, Amsterdam (1997).
[Sch75] Laurent Schwartz, Les Tenseurs, Hermann, Paris (1975).
[Spi79] Michael Spivak, A Comprehensive Introduction to Differential Geometry, Volume I, 2nd ed., Publish
or Perish, Inc., Wilmington, DE (1979).