Row and column vectors

From Wikipedia, the free encyclopedia
{{short description|Matrix consisting of a single row or column}}
{{more footnotes|date=November 2022}}


In [[linear algebra]], a '''column vector''' with {{tmath|m}} elements is an <math>m \times 1</math> [[Matrix_(mathematics)|matrix]]<ref name="Artin">{{cite book |last1=Artin |first1=Michael |title=Algebra |date=1991 |publisher=Prentice-Hall |location=Englewood Cliffs, NJ |isbn=0-13-004763-5 |page=2}}</ref> consisting of a single column of {{tmath|m}} entries, for example,
<math display="block">\boldsymbol{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.</math>


Similarly, a '''row vector''' is a <math>1 \times n</math> matrix for some {{tmath|n}}, consisting of a single row of {{tmath|n}} entries,
<math display="block">\boldsymbol a = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}. </math>
(Throughout this article, boldface is used for both row and column vectors.)
The [[transpose]] (indicated by {{math|T}}) of any row vector is a column vector, and the transpose of any column vector is a row vector:
<math display="block">\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}</math>
and
<math display="block">\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}.</math>
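The transpose relations above can be sketched in plain Python (a minimal illustration using lists of rows; the helper name <code>transpose</code> is ours, not notation from the article):

```python
def transpose(matrix):
    """Transpose a matrix represented as a list of rows."""
    return [list(column) for column in zip(*matrix)]

row = [[1, 2, 3]]       # a 1 x 3 row vector
col = transpose(row)    # the 3 x 1 column vector [[1], [2], [3]]
back = transpose(col)   # transposing again recovers the row vector
```

Transposing twice returns the original vector, matching the two displayed identities.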


The set of all row vectors with {{mvar|n}} entries in a given [[field (mathematics)|field]] (such as the [[real numbers]]) forms an {{mvar|n}}-dimensional [[vector space]]; similarly, the set of all column vectors with {{mvar|m}} entries forms an {{mvar|m}}-dimensional vector space.

The space of row vectors with {{mvar|n}} entries can be regarded as the [[dual space]] of the space of column vectors with {{mvar|n}} entries, since any linear functional on the space of column vectors can be represented as the left-multiplication of a unique row vector.
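As a concrete sketch of this correspondence (plain Python; the particular functional and its coefficients are our own illustrative choice), the linear functional ''f''('''x''') = 2''x''<sub>1</sub> − ''x''<sub>2</sub> + 3''x''<sub>3</sub> on column vectors with three entries is exactly left-multiplication by the row vector with entries (2, −1, 3):

```python
def left_multiply(row, column):
    """Apply a 1 x n row vector to an n x 1 column vector (both given as
    flat lists of entries); the 1 x 1 matrix product is returned as a scalar."""
    return sum(r * c for r, c in zip(row, column))

f = [2, -1, 3]               # row vector representing the functional
x = [1, 4, 2]                # a column vector
value = left_multiply(f, x)  # 2*1 + (-1)*4 + 3*2 = 4
```

Every linear functional on the space of column vectors arises this way from a unique row vector.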

== Notation ==
To simplify writing column vectors in-line with other text, sometimes they are written as row vectors with the transpose operation applied to them.


<math display="block">\boldsymbol{x} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T}</math>


or


<math display="block">\boldsymbol{x} = \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T}</math>


Some authors also use the convention of writing both column vectors and row vectors as rows, but separating row vector elements with [[comma]]s and column vector elements with [[semicolon]]s (see alternative notation 2 in the table below).{{fact|date=March 2021}}
{| class="wikitable"
|-
!
! Row vector
! Column vector
|-
! Standard matrix notation<br />(array spaces, no commas, transpose signs)
| <math>\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}</math>
| <math>\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T}</math>
|-
! Alternative notation 1<br />(commas, transpose signs)
| <math>\begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}</math>
| <math>\begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T}</math>
|-
! Alternative notation 2<br />(commas and semicolons, no transpose signs)
| <math>\begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}</math>
| <math>\begin{bmatrix} x_1; x_2; \dots; x_m \end{bmatrix}</math>
|}

== Operations ==
[[Matrix multiplication]] involves the action of multiplying each row vector of one matrix by each column vector of another matrix.


The [[dot product]] of two column vectors {{math|'''a''', '''b'''}}, considered as elements of a coordinate space, is equal to the matrix product of the transpose of {{math|'''a'''}} with {{math|'''b'''}},


<math display="block">\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^\intercal \mathbf{b} = \begin{bmatrix}
a_1 & \cdots & a_n
\end{bmatrix} \begin{bmatrix}
b_1 \\ \vdots \\ b_n
\end{bmatrix} = a_1 b_1 + \cdots + a_n b_n \,. </math>
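This identity can be sketched in plain Python (vectors as flat lists of entries; <code>dot</code> is a hypothetical helper of ours, not a library call):

```python
def dot(a, b):
    """Dot product of two vectors, i.e. the 1 x n by n x 1
    matrix product of a-transpose with b."""
    return sum(x * y for x, y in zip(a, b))

a = [1, 2, 3]
b = [4, 5, 6]
result = dot(a, b)   # 1*4 + 2*5 + 3*6 = 32
```

Since multiplication of scalars commutes, <code>dot(a, b)</code> and <code>dot(b, a)</code> always agree, which is the symmetry invoked in the text.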


By the symmetry of the dot product, the [[dot product]] of two column vectors {{math|'''a''', '''b'''}} is also equal to the matrix product of the transpose of {{math|'''b'''}} with {{math|'''a'''}},


<math display="block">\mathbf{b} \cdot \mathbf{a} = \mathbf{b}^\intercal \mathbf{a} = \begin{bmatrix}
b_1 & \cdots & b_n
\end{bmatrix}\begin{bmatrix}
a_1 \\ \vdots \\ a_n
\end{bmatrix} = a_1 b_1 + \cdots + a_n b_n\,. </math>


The matrix product of a column and a row vector gives the [[outer product]] of two vectors {{math|'''a''', '''b'''}}, an example of the more general [[tensor product]]. The matrix product of the column vector representation of {{math|'''a'''}} and the row vector representation of {{math|'''b'''}} gives the components of their dyadic product,


<math display="block">\mathbf{a} \otimes \mathbf{b} = \mathbf{a} \mathbf{b}^\intercal = \begin{bmatrix}
a_1 \\ a_2 \\ a_3
\end{bmatrix}\begin{bmatrix}
b_1 & b_2 & b_3
\end{bmatrix} = \begin{bmatrix}
a_1 b_1 & a_1 b_2 & a_1 b_3 \\
a_2 b_1 & a_2 b_2 & a_2 b_3 \\
a_3 b_1 & a_3 b_2 & a_3 b_3 \\
\end{bmatrix} \,, </math>
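The dyadic product can be sketched in plain Python (vectors as flat lists; <code>outer</code> is our illustrative helper):

```python
def outer(a, b):
    """Outer product a b-transpose: an m x 1 column times a 1 x n row
    gives an m x n matrix whose (i, j) entry is a[i] * b[j]."""
    return [[x * y for y in b] for x in a]

ab = outer([1, 2], [3, 4, 5])   # [[3, 4, 5], [6, 8, 10]]
ba = outer([3, 4, 5], [1, 2])   # [[3, 6], [4, 8], [5, 10]]
```

Here <code>ba</code> is the transpose of <code>ab</code>, matching the relation between the two displayed products.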


which is the [[transpose]] of the matrix product of the column vector representation of {{math|'''b'''}} and the row vector representation of {{math|'''a'''}},


<math display="block">\mathbf{b} \otimes \mathbf{a} = \mathbf{b} \mathbf{a}^\intercal = \begin{bmatrix}
b_1 \\ b_2 \\ b_3
\end{bmatrix}\begin{bmatrix}
a_1 & a_2 & a_3
\end{bmatrix} = \begin{bmatrix}
b_1 a_1 & b_1 a_2 & b_1 a_3 \\
b_2 a_1 & b_2 a_2 & b_2 a_3 \\
b_3 a_1 & b_3 a_2 & b_3 a_3 \\
\end{bmatrix} \,. </math>


==Matrix transformations==
{{main|Transformation matrix}}
An {{math|''n'' × ''n''}} matrix {{mvar|M}} can represent a [[linear map]] and act on row and column vectors as the linear map's [[transformation matrix]]. For a row vector {{math|'''v'''}}, the product {{math|'''v'''''M''}} is another row vector {{math|'''p'''}}:


<math display="block">\mathbf{v} M = \mathbf{p} \,.</math>


Another {{math|''n'' × ''n''}} matrix {{mvar|Q}} can act on {{math|'''p'''}},


<math display="block"> \mathbf{p} Q = \mathbf{t} \,. </math>


Then one can write {{math|1='''t''' = '''p'''''Q'' = '''v'''''MQ''}}, so the [[matrix product]] transformation {{mvar|MQ}} maps {{math|'''v'''}} directly to {{math|'''t'''}}. Continuing with row vectors, matrix transformations further reconfiguring {{mvar|n}}-space can be applied to the right of previous outputs.


When a column vector is transformed to another column vector under an {{math|''n'' × ''n''}} matrix action, the operation occurs to the left,


<math display="block"> \mathbf{p}^\mathrm{T} = M \mathbf{v}^\mathrm{T} \,,\quad \mathbf{t}^\mathrm{T} = Q \mathbf{p}^\mathrm{T},</math>


leading to the algebraic expression {{math|''QM'' '''v'''<sup>T</sup>}} for the composed output from the input {{math|'''v'''<sup>T</sup>}}. When column vectors are used as inputs, the matrix transformations mount up to the left.
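Both conventions can be sketched in plain Python (matrices as lists of rows; <code>mat_mul</code> is our illustrative helper). With row-vector input the maps compose on the right; with column-vector input they compose on the left:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M = [[0, 1], [1, 0]]    # swap the two coordinates
Q = [[2, 0], [0, 3]]    # scale the axes by 2 and 3

# Row-vector input: t = (v M) Q = v (M Q), transformations applied on the right.
v_row = [[1, 2]]
t_row = mat_mul(mat_mul(v_row, M), Q)

# Column-vector input: t^T = Q (M v^T), transformations mounting up on the left.
v_col = [[1], [2]]
t_col = mat_mul(Q, mat_mul(M, v_col))
```

The two outputs are transposes of one another, as the transpose discussion above predicts.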

Nevertheless, using the [[transpose]] operation these differences between inputs of a row or column nature are resolved by an [[antihomomorphism]] between the groups arising on the two sides. The technical construction uses the [[dual space]] associated with a vector space to develop the [[Dual space#Transpose of a linear map|transpose of a linear map]].

For an instance where this row vector input convention has been used to good effect see Raiz Usmani,<ref>Raiz A. Usmani (1987) ''Applied Linear Algebra'' [[Marcel Dekker]] {{isbn|0824776224}}. See Chapter 4: "Linear Transformations"</ref> where on page 106 the convention allows the statement "The product mapping ''ST'' of ''U'' into ''W'' [is given] by:
:<math>\alpha (ST) = (\alpha S) T = \beta T = \gamma</math>."
(The Greek letters represent row vectors).

[[Ludwik Silberstein]] used row vectors for spacetime events; he applied Lorentz transformation matrices on the right in his [[List of important publications in physics#Special|Theory of Relativity]] in 1914 (see page 143).
In 1963, when [[McGraw-Hill]] published ''Differential Geometry'' by [[Heinrich Guggenheimer]] of the [[University of Minnesota]], he used the row vector convention in chapter 5, "Introduction to transformation groups" (eqs. 7a, 9b, and 12 to 15). When [[H. S. M. Coxeter]] reviewed<ref>Coxeter [http://www.ams.org/mathscinet/pdf/188842.pdf Review of ''Linear Geometry''] from [[Mathematical Reviews]]</ref> ''Linear Geometry'' by [[Rafael Artzy]], he wrote, "[Artzy] is to be congratulated on his choice of the 'left-to-right' convention, which enables him to regard a point as a row matrix instead of the clumsy column that many authors prefer." [[J. W. P. Hirschfeld]] used right multiplication of row vectors by matrices in his description of projectivities on the [[Galois geometry]] PG(1,q).<ref>[[J. W. P. Hirschfeld]] (1979) ''Projective Geometry over Finite Fields'', page 119, [[Clarendon Press]] {{isbn|0-19-853526-0}}</ref>

In the study of stochastic processes with a [[stochastic matrix]], it is conventional to use a row vector as the [[stochastic vector]].<ref>[[John G. Kemeny]] & [[J. Laurie Snell]] (1960) ''Finite Markov Chains'', page 33, D. Van Nostrand Company</ref>
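As a minimal sketch of this convention (plain Python; the two-state chain is our own example), a row stochastic vector is advanced one step by right-multiplying it with the transition matrix:

```python
def step(pi, P):
    """One step of a Markov chain: the row vector pi times the matrix P."""
    return [sum(pi[i] * P[i][j] for i in range(len(pi)))
            for j in range(len(P[0]))]

P = [[0.9, 0.1],
     [0.5, 0.5]]    # row-stochastic transition matrix (each row sums to 1)
pi0 = [1.0, 0.0]    # start surely in state 0
pi1 = step(pi0, P)  # distribution after one step
```

Because each row of the transition matrix sums to 1, each step preserves the total probability mass of the row vector.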


== See also ==
* [[Covariance and contravariance of vectors]]
* [[Index notation]]
* [[Vector of ones]]
* [[Single-entry vector]]
* [[Standard unit vector]]
* [[Unit vector]]


== Notes ==
|url = http://www.matrixanalysis.com/DownloadChapters.html
|url-status = dead
|archive-url = https://web.archive.org/web/20010301161440/http://matrixanalysis.com/DownloadChapters.html
|archive-date = March 1, 2001
}}
* {{Citation
| edition = 7th
}}

{{Linear algebra}}


[[Category:Linear algebra]]
