連立一次方程式

 以下の形式を持つ方程式の集合があるとします.

 \left. \begin{array}{ccc}  a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n & = & r_1 \\  a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n & = & r_2 \\  \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots & \cdots & \cdots \\  a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n & = & r_m   \end{array} \right\}\cdots(16)

これらは n 個の未知数 x_1,\ x_2,\ \cdots,\ x_n についての m 個の連立方程式 と呼びます.仮に r_1,\ r_2,\ \cdots,\ r_m がすべてゼロならその連立方程式は 斉次 と呼び,すべてがゼロというわけではないなら 非斉次 と呼びます.(16) を満たすいかなる数 x_1,\ x_2,\ \cdots,\ x_n の集合もその連立方程式の 解 と呼びます.

行列においては (16) の形式は以下のように記述できます.

\displaystyle \left( \begin{array}{cccc}  a_{11} & a_{12} & \cdots & a_{1n} \\  a_{21} & a_{22} & \cdots & a_{2n} \\  \cdots & \cdots & \cdots & \cdots \\  a_{m1} & a_{m2} & \cdots & a_{mn}     \end{array} \right)  \left( \begin{array}{c} x_1 \\ x_2 \\ \cdots \\ x_n \end{array} \right) =   \left( \begin{array}{c} r_1 \\ r_2 \\ \cdots \\ r_m \end{array} \right) \cdots (17)

または短縮して  AX = R \cdots (18)

ここで A, X, R はそれぞれ (17) における対応する行列を表現しています.
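 (18) の形 AX = R は数値的にも解けます.以下は numpy が利用可能と仮定した最小限のスケッチで,係数行列と右辺は説明用に仮に選んだ値です.

```python
import numpy as np

# AX = R の形の連立一次方程式 (18) を解く最小例.
# 係数行列 A と右辺 R は説明用に仮に選んだ値です.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
R = np.array([1.0, 5.0, 2.0])

# det(A) != 0 (非特異) なら唯一の解 X が存在します.
assert abs(np.linalg.det(A)) > 1e-12
X = np.linalg.solve(A, R)

# 解の検証: AX が R に一致するか確認します.
assert np.allclose(A @ X, R)
print(X)
```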

Orthogonal vectors

The scalar or dot product of two vectors a_1\bold{i} + a_2\bold{j} + a_3\bold{k} and b_1\bold{i} + b_2\bold{j} + b_3\bold{k} is a_1b_1 + a_2b_2 + a_3b_3 and the vectors are perpendicular or orthogonal if a_1b_1 + a_2b_2 + a_3b_3 = 0. From the point of view of matrices we can consider these vectors as column vectors

\displaystyle A = \left( \begin{array}{c} a_1 \\ a_2 \\ a_3 \end{array} \right),\ B = \left( \begin{array}{c} b_1 \\ b_2 \\ b_3 \end{array} \right)

from which it follows that A^TB = a_1b_1 + a_2b_2 + a_3b_3.

This leads us to define the scalar product of real column vectors A and B as A^TB and to define A and B to be orthogonal if A^TB = 0.

It is convenient to generalize this to cases where the vectors can have complex components and we adopt the following definition:

Definition 1. Two column vectors A and B are called orthogonal if \bar{A}^TB = 0 , and \bar{A}^TB is called the scalar product of A and B.

It should be noted also that if A is a unitary matrix then \bar{A}^TA = 1, which means that the scalar product of A with itself is 1 or equivalently A is a unit vector, i.e. having length 1. Thus a unitary column vector is a unit vector. Because of these remarks we have the following

Definition 2. A set of vectors X_1,\ X_2,\ \cdots for which

\displaystyle \bar{X}^T_jX_k = \left\{\begin{array}{cc} 0 & j \ne k \\ 1 & j = k \end{array} \right.

is called a unitary set or system of vectors or, in the case where the vectors are real, an orthonormal set or an orthogonal set of unit vectors.
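As a small numerical sketch (numpy is assumed; the vectors below are arbitrary examples), the scalar product \bar{A}^TB of Definition 1 and the unitary-set condition of Definition 2 can be checked directly:

```python
import numpy as np

# Scalar product of complex column vectors per Definition 1: conj(A)^T B.
A = np.array([1.0 + 1.0j, 0.0, 1.0j])
B = np.array([1.0j, 2.0, 1.0 - 1.0j])
scalar_product = np.vdot(A, B)   # np.vdot conjugates its first argument

# Definition 2: a unitary (orthonormal) set X_1, X_2 satisfies
# conj(X_j)^T X_k = 1 if j == k, and 0 otherwise.
X1 = np.array([1.0, 1.0j]) / np.sqrt(2)
X2 = np.array([1.0, -1.0j]) / np.sqrt(2)
assert np.isclose(np.vdot(X1, X1), 1)
assert np.isclose(np.vdot(X2, X2), 1)
assert np.isclose(np.vdot(X1, X2), 0)
```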

直交ベクトル

 二つのベクトル a_1\bold{i} + a_2\bold{j} + a_3\bold{k} および b_1\bold{i} + b_2\bold{j} + b_3\bold{k} のスカラー積またはドット積は a_1b_1 + a_2b_2 + a_3b_3 であり, a_1b_1 + a_2b_2 + a_3b_3 = 0 ならばそれらのベクトルは垂直または直交します.行列の観点からこれらのベクトルは列ベクトルと考えることができます.

\displaystyle A = \left( \begin{array}{c} a_1 \\ a_2 \\ a_3 \end{array} \right),\ B = \left( \begin{array}{c} b_1 \\ b_2 \\ b_3 \end{array} \right)

これより A^TB = a_1b_1 + a_2b_2 + a_3b_3 が従います.

 これにより,実数の列ベクトル A および B の スカラー積 を A^TB と定義し, A^TB = 0 なら A および B は 直交 するという定義に至ります.

 ベクトルが複素数の成分を持ちうる場合へこれを一般化すると便利であり,次の定義を採用します.

定義 1. 二つの列ベクトル A および B\bar{A}^TB = 0 なら 直交 と呼び, \bar{A}^TBA および Bスカラー積 と呼びます.

 仮に A がユニタリ行列なら \bar{A}^TA = 1 であることに注意が必要です.それは A とそれ自身とのスカラー積が 1 であり, A単位ベクトル であることすなわち長さが 1 であることと等価です.ゆえにユニタリ列ベクトルは単位ベクトルです.これらの特徴から以下を得ます.

定義 2. ベクトルの集合 X_1,\ X_2,\ \cdots について

\displaystyle \bar{X}^T_jX_k = \left\{\begin{array}{cc} 0 & j \ne k \\ 1 & j = k \end{array} \right.

を満たすものを ベクトルのユニタリ集合 または ユニタリ系 と呼び,ベクトルが実数の場合には 正規直交集合 または 単位ベクトルの直交集合 と呼びます.

直交行列とユニタリ行列

 実行列 A はその転置行列が自身の逆行列と同じ場合,すなわち仮に A^T = A^{-1} または  A^TA= I ならば 直交行列 と呼びます.

 複素行列 A は自身の複素共役転置行列が逆行列と同じなら,すなわち仮に  \bar{A}^T = A^{-1} または  \bar{A}^TA = I ならば ユニタリ行列 と呼びます.実のユニタリ行列は直交行列であることに注意が必要です.
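 以下は numpy が利用可能と仮定した最小限のスケッチで,直交行列の条件 A^TA = I とユニタリ行列の条件 \bar{A}^TA = I を具体例で確認します(行列は説明用に仮に選んだものです).

```python
import numpy as np

# 直交行列の例: 2 次元回転行列 (Q^T Q = I を満たす).
t = 0.7  # 回転角は例として仮に選んだ値
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
assert np.allclose(Q.T @ Q, np.eye(2))

# ユニタリ行列の例: conj(U)^T U = I を満たす複素行列.
U = np.array([[1.0, 1.0j],
              [1.0j, 1.0]]) / np.sqrt(2)
assert np.allclose(U.conj().T @ U, np.eye(2))
```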

Inverse of a matrix

If for a given square matrix A there exists a matrix B such that  AB = I , then B is called an inverse of A and is denoted by A^{-1}. The following theorem is fundamental.

11. If A is a non-singular square matrix of order n [i.e. \det(A) \ne 0], then there exists a unique inverse A^{-1} such that AA^{-1} = A^{-1}A = I and we can express  A^{-1} in the following form

\displaystyle A^{-1} = \frac{(A_{jk})^T}{\det(A)} \cdots(14)

where (A_{jk}) is the matrix of cofactors A_{jk} and (A_{jk})^T = (A_{kj}) is its transpose.

The following express some properties of the inverse:

(AB)^{-1} = B^{-1}A^{-1} ,\ (A^{-1})^{-1} = A \cdots(15)
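As an illustrative sketch of Theorem 11 (numpy assumed; the helper name inverse_via_cofactors and the test matrix are hypothetical examples, not from the original text), formula (14) can be implemented directly and checked against the built-in inverse:

```python
import numpy as np

def inverse_via_cofactors(A):
    """Inverse by formula (14): A^{-1} = (A_jk)^T / det(A).
    A sketch for small matrices; np.linalg.inv is preferable in practice."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    cof = np.zeros_like(A, dtype=float)
    for j in range(n):
        for k in range(n):
            # Minor: delete row j and column k, then take the determinant.
            minor = np.delete(np.delete(A, j, axis=0), k, axis=1)
            cof[j, k] = (-1) ** (j + k) * np.linalg.det(minor)
    return cof.T / det_A   # transpose of the cofactor matrix

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # det = 1, chosen as an example
assert np.allclose(inverse_via_cofactors(A), np.linalg.inv(A))
assert np.allclose(A @ inverse_via_cofactors(A), np.eye(2))
```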

逆行列

 仮にある正方行列 A に対して  AB = I  を満たす行列 B が存在するなら, B は A の 逆行列 と呼ばれ A^{-1} と記述します.以下の定理は基本的です.

11. 仮に A が n 次の非特異正方行列の場合,すなわち \det(A) \ne 0 の時,唯一の逆行列 A^{-1} が存在し, AA^{-1} = A^{-1}A = I であって  A^{-1} を次の形で表現できます.

\displaystyle A^{-1} = \frac{(A_{jk})^T}{\det(A)} \cdots(14)

ここで (A_{jk}) は余因子 A_{jk} の行列であって (A_{jk})^T = (A_{kj}) はその転置行列です.

 以下は逆行列のいくつかの性質を示しています.

(AB)^{-1} = B^{-1}A^{-1} ,\ (A^{-1})^{-1} = A \cdots(15)

Theorems on determinants

  1. The value of a determinant remains the same if rows and columns are interchanged. In symbols, \det(A) = \det(A^T).
  2. If all elements of any row [or column] are zero except for one element, then the value of the determinant is equal to the product of that element by its cofactor. In particular, if all elements of a row [or column] are zero the determinant is zero.
  3. An interchange of any two rows [or columns] changes the sign of the determinant.
  4. If all elements in any row [or column] are multiplied by a number, the determinant is also multiplied by this number.
  5. If any two rows [or columns] are the same or proportional, the determinant is zero.
  6. If we express the elements of each row [or column] as the sum of two terms, then the determinant can be expressed as the sum of two determinants having the same order.
  7. If we multiply the elements of any row [or column] by a given number and add to corresponding elements of any other row [or column], then the value of the determinant remains the same.
  8. If A and B are square matrices of the same order, then
    \det(AB) = \det(A)\det(B)\cdots(11)
  9. The sum of the products of the elements of any row [or column] by the cofactors of another row [or column] is zero. In symbols,
    \displaystyle \sum^n_{k=1}a_{qk}A_{pk} = 0 or \displaystyle \sum^n_{k=1}a_{kq}A_{kp} = 0 if p \ne q \cdots(12)

    If  p = q , the sum is \det(A) by (10).

  10. Let v_1,\ v_2,\ \cdots,\ v_n represent row vectors [or column vectors] of a square matrix A of order n. Then \det(A) = 0 if and only if there exist constants [scalars] \lambda_1,\ \lambda_2,\ \cdots,\ \lambda_n not all zero such that
    \lambda_1v_1 + \lambda_2v_2 + \cdots + \lambda_nv_n = O \cdots(13)

    where O is the null or zero row matrix. If condition (13) is satisfied we say that the vectors v_1,\ v_2,\ \cdots,\ v_n are linearly dependent. A matrix A such that \det(A) = 0 is called a singular matrix. If \det(A) \ne 0, then A is a non-singular matrix.

In practice we evaluate a determinant of order n by using Theorem 7 successively to replace all but one of the elements in a row or column by zeros and then using Theorem 2 to obtain a new determinant of order n – 1. We continue in this manner, arriving ultimately at determinants of order 2 or 3 which are easily evaluated.
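The theorems above lend themselves to quick numerical spot-checks. The following sketch (numpy assumed; the random matrices are arbitrary examples) verifies Theorems 1, 7, and 8:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Theorem 1: det(A) = det(A^T).
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))

# Theorem 7: adding a multiple of one row to another row
# leaves the determinant unchanged.
C = A.copy()
C[2] += 3.5 * C[0]
assert np.isclose(np.linalg.det(C), np.linalg.det(A))

# Theorem 8: det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```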

行列式の定理

  1. 行列式の値は,行と列が入れ替わっても変化しません.記法では \det(A) = \det(A^T).
  2. 任意の行または列の 1 つを除く全要素がゼロならば,行列式の値はそのゼロでない要素とその余因子との積に等しくなります.特に,ある行または列の全要素がゼロならば行列式はゼロになります.
  3. 任意の 2 行または 2 列を交換すると行列式の符号が変化します.
  4. 任意の行または列の全要素にある数をかけると,行列式もその数倍になります.
  5. 任意の 2 行または 2 列が同じか比例するならその行列式はゼロになります.
  6. 各行または各列の要素を 2 項の和として表現するなら,その行列式は同じ次数の二つの行列式の和で表現できます.
  7. 任意の行または列の要素にある数をかけ,他の行または列の対応する要素に加えても,行列式の値は変わりません.
  8. 仮に A および B が同じ次数の正方行列なら
    \det(AB) = \det(A)\det(B)\cdots(11)
  9. 任意の行または列の要素と,他の行または列の余因子との積の和はゼロになります.記法では
    \displaystyle \sum^n_{k=1}a_{qk}A_{pk} = 0 または \displaystyle \sum^n_{k=1}a_{kq}A_{kp} = 0 ( p \ne q の時)\cdots(12)

    仮に  p = q なら,その和は (10) により \det(A) になります.

  10. ここで v_1,\ v_2,\ \cdots,\ v_n が n 次正方行列 A の行ベクトルまたは列ベクトルを表すとします.すると \det(A) = 0 となるのは,すべてがゼロではない定数(スカラー) \lambda_1,\ \lambda_2,\ \cdots,\ \lambda_n が存在して以下を満たすとき,かつそのときに限ります.
    \lambda_1v_1 + \lambda_2v_2 + \cdots + \lambda_nv_n = O \cdots(13)

    ここで O はヌル行列または零行列です.仮に条件式 (13) が満たされるなら,ベクトル v_1,\ v_2,\ \cdots,\ v_n は 線形従属 であるといいます.ある行列 A が \det(A) = 0 を満たすなら 特異行列 と呼びます.仮に \det(A) \ne 0 であるなら A は 非特異行列 です.

 実際には, n 次の行列式を評価するには,定理 7 を繰り返し用いてある行または列の一つを除く全要素をゼロで置き換え,次に定理 2 を用いて n – 1 次の新しい行列式を得ます.この方法を続けることで,最終的に評価の容易な 2 次または 3 次の行列式に到達します.

Determinants

If the matrix A in (1) is a square matrix, then we associate with A a number denoted by

\displaystyle \Delta = \left| \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\  a_{21} & a_{22} & \cdots & a_{2n} \\  \cdots & \cdots & \cdots & \cdots \\  a_{n1} & a_{n2} & \cdots & a_{nn} \end{array} \right|\cdots (9)

called the determinant of A of order n, written det(A). In order to define the value of a determinant, we introduce the following concepts.

1. Minor

Given any element a_{jk} of \Delta, we associate a new determinant of order (n – 1), obtained by removing all elements of the jth row and kth column, called the minor of a_{jk}.

2. Cofactor

If we multiply the minor of a_{jk} by (-1)^{j+k}, the result is called the cofactor of a_{jk} and is denoted by A_{jk}. The value of the determinant is then the sum of the products of the elements in any row [or column] by their corresponding cofactors and is called the Laplace expansion. In symbols,

\displaystyle \det{A} = \sum^{n}_{k=1}a_{jk}A_{jk} \cdots (10)

We can show that this value is independent of the row [or column] used.
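As a didactic sketch (numpy assumed; det_laplace is a hypothetical helper name), the Laplace expansion (10) along the first row can be implemented recursively and compared with a library determinant. It runs in exponential time, so it is illustrative only:

```python
import numpy as np

def det_laplace(A):
    """Determinant by the Laplace expansion (10) along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        minor = np.delete(A[1:], k, axis=1)        # drop row 0 and column k
        cofactor = (-1) ** k * det_laplace(minor)  # (-1)^{0+k} * minor det
        total += A[0, k] * cofactor
    return total

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
assert np.isclose(det_laplace(A), np.linalg.det(A))
```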

行列式

 仮に (1) における行列 A が正方行列なら, A に対して下記に示すある数を関連付けられます.

\displaystyle \Delta = \left| \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\  a_{21} & a_{22} & \cdots & a_{2n} \\  \cdots & \cdots & \cdots & \cdots \\  a_{n1} & a_{n2} & \cdots & a_{nn} \end{array} \right|\cdots (9)

これを A の n 次の 行列式 と呼び, det(A) と記述します.行列式の値を定義するために次の概念を導入しましょう.

1. 小行列式

\Delta の任意の要素 a_{jk} がある時, j 番目の行および k 番目の列の全要素を除去して得られた (n – 1) 次の新しい行列式を関連付け,これを a_{jk} の小行列式と呼びます.

2. 余因子

 a_{jk} の小行列式に (-1)^{j+k} を乗算した結果を a_{jk} の 余因子 と呼び, A_{jk} と記述します.行列式の値は,任意の行(または列)の要素とその対応する余因子との積の和として与えられ,これを ラプラス展開 と呼びます.記法では

\displaystyle \det{A} = \sum^{n}_{k=1}a_{jk}A_{jk} \cdots (10)

 この値は,用いる行または列によらないことを示せます.

Some special definitions and operations involving matrices

1. Equality of Matrices

Two matrices A = (a_{jk}) and B = (b_{jk}) of the same order [i.e. equal numbers of rows and columns] are equal if and only if a_{jk} = b_{jk}.

2. Addition of Matrices

If A = (a_{jk}) and B = (b_{jk}) have the same order we define the sum of A and B as  A + B = (a_{jk} + b_{jk}) .

Note that the commutative and associative laws for addition are satisfied by matrices, i.e. for any matrices A,\ B,\ C of the same order

A + B = B + A,\ A + (B + C) = (A + B) + C \cdots (2)

3. Subtraction of Matrices

If A = (a_{jk}) , B = (b_{jk}) have the same order, we define the difference of A and B as A - B = (a_{jk} - b_{jk}).

4. Multiplication of a Matrix by a Number

If A = (a_{jk}) and \lambda is any number or scalar, we define the product of A by \lambda as \lambda A = A\lambda = (\lambda a_{jk}).

5. Multiplication of Matrices

If A = (a_{jk}) is an m\times n matrix while B = (b_{jk}) is an n\times p matrix, then we define the product A\cdot B or AB as the matrix C = (c_{jk}) where

\displaystyle c_{jk} = \sum_{l = 1}^n a_{jl}b_{lk} \cdots (3)

and where C is of order m\times p.

Note that in general AB \ne BA, i.e. the commutative law for multiplication of matrices is not satisfied in general. However, the associative and distributive laws are satisfied, i.e.

A(BC) = (AB)C,\ A(B + C) = AB + AC,\ (B + C)A = BA + CA \cdots (4)

A matrix A can be multiplied by itself if and only if it is a square matrix. The product A\cdot A can in such case be written A^2. Similarly we define powers of a square matrix, i.e.  A^3 = A\cdot A^2,\ A^4 = A\cdot A^3, etc.

6. Transpose of a Matrix

If we interchange rows and columns of a matrix A, the resulting matrix is called the transpose of A and is denoted by A^T. In symbols, if A = (a_{jk}) then A^T = (a_{kj}).

We can prove that

(A + B)^T = A^T + B^T,\ (AB)^T = B^TA^T,\ (A^T)^T = A \cdots(5)

7. Symmetric and Skew-Symmetric matrices

A square matrix A is called symmetric if A^T = A and skew-symmetric if A^T = - A.

Any real square matrix [i.e. one having only real elements] can always be expressed as the sum of a real symmetric matrix and a real skew-symmetric matrix.

8. Complex Conjugate of a Matrix

If all elements a_{jk} of a matrix A are replaced by their complex conjugates \bar{a}_{jk}, the matrix obtained is called the complex conjugate of A and is denoted by \bar{A}.

9. Hermitian and Skew-Hermitian Matrices

A square matrix A which is the same as the complex conjugate of its transpose, i.e. if  A = \bar{A}^T , is called Hermitian. If  A = -\bar{A}^T , then A is called skew-Hermitian. If A is real these reduce to symmetric and skew-symmetric matrices respectively.

10. Principal Diagonal and Trace of a Matrix

If A = (a_{jk}) is a square matrix, then the diagonal which contains all elements a_{jk} for which  j = k is called the principal or main diagonal, and the sum of all its elements is called the trace of A.

A matrix for which a_{jk} = 0 when  j \ne k is called a diagonal matrix.

11. Unit Matrix

A square matrix in which all elements of the principal diagonal are equal to 1 while all other elements are zero is called the unit matrix and is denoted by I. An important property of I is that

 AI = IA = A,\ I^n = I\ (n = 1,2,3,\cdots) \cdots(6)

The unit matrix plays a role in matrix algebra similar to that played by the number one in ordinary algebra.

12. Zero or Null matrix

A matrix whose elements are all equal to zero is called the null or zero matrix and is often denoted by O or simply 0. For any matrix A having the same order as 0 we have

 A + 0 = 0 + A = A \cdots(7)

Also if A and 0 are square matrices, then

 A0 = 0A = 0 \cdots(8)

The zero matrix plays a role in matrix algebra similar to that played by the number zero of ordinary algebra.
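Several of the operations above can be spot-checked numerically. The following sketch (numpy assumed; the random matrices are arbitrary examples) illustrates non-commutativity of multiplication, the transpose rule (5), the symmetric/skew-symmetric decomposition, and the trace:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Multiplication is generally non-commutative: AB != BA.
assert not np.allclose(A @ B, B @ A)

# Transpose rule (5): (AB)^T = B^T A^T.
assert np.allclose((A @ B).T, B.T @ A.T)

# Any real square matrix = symmetric part + skew-symmetric part.
S = (A + A.T) / 2        # symmetric:       S^T ==  S
K = (A - A.T) / 2        # skew-symmetric:  K^T == -K
assert np.allclose(S + K, A)

# Trace: the sum of the principal-diagonal elements.
assert np.isclose(np.trace(A), A.diagonal().sum())
```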

いくつかの行列を含む特殊な定義と演算

1. 行列が等しい

 二つの行列 A = (a_{jk}) および B = (b_{jk}) が同じ次数で(すなわち行と列の数が同じで)あるとき, a_{jk} = b_{jk} の時かつその時に限り 等しい といいます.

2. 行列の和

 仮に A = (a_{jk}) および B = (b_{jk}) が同じ次数ならば, A および B の 和 を  A + B = (a_{jk} + b_{jk})  と定義します.

 行列の和については交換法則と結合法則が成り立ちます.すなわち同じ次数の任意の行列 A,\ B,\ C について

A + B = B + A,\ A + (B + C) = (A + B) + C \cdots (2)

3. 行列の差

 仮に A = (a_{jk}) , B = (b_{jk}) が同じ次数を有するなら A および BA - B = (a_{jk} - b_{jk}) と定義できます.

4. 行列のスカラー倍

 仮に A = (a_{jk}) があって \lambda が任意の数またはスカラーの時, A の \lambda による 積 を \lambda A = A\lambda = (\lambda a_{jk}) と定義します.

5. 行列の積

 仮に A = (a_{jk}) が m\times n 行列で B = (b_{jk}) が n\times p 行列の時,積 A\cdot B または AB を行列 C = (c_{jk}) と定義します.ここで

\displaystyle c_{jk} = \sum_{l = 1}^n a_{jl}b_{lk} \cdots (3)

また Cm\times p 次です.

 一般に AB \ne BA すなわち行列の積の交換法則は成り立たないことに注意してください.しかしながら結合法則と分配法則は成り立ちます,すなわち

A(BC) = (AB)C,\ A(B + C) = AB + AC,\ (B + C)A = BA + CA \cdots (4)

 ある行列 A がそれ自身との積をつくれるのは正方行列の場合のみです.積  A \cdot A  A^2 と記述します.同様に行列の累乗を定義できます.すなわち  A^3 = A\cdot A^2,\ A^4 = A\cdot A^3 などです.

6. 行列の転置

 行列 A の行と列を入れ替えると,その結果得られる行列を A の 転置 と呼び, A^T と記述します.記号では,仮に A = (a_{jk}) ならば A^T = (a_{kj}) です.

以下を証明できます.

(A + B)^T = A^T + B^T,\ (AB)^T = B^TA^T,\ (A^T)^T = A \cdots(5)

7. 対称行列と歪対称行列

 ある正方行列 AA^T = A の時 対称 と呼び,A^T = - A の時 歪対称 と呼びます.

 任意の実正方行列(すなわち実数の要素のみからなる実正方行列)は常に実対称行列と実歪対称行列の和で表現できます.

8. 行列の複素共役

 仮に行列 A のすべての要素 a_{jk} が複素共役 \bar{a}_{jk} で置き換えられたら,その結果得られた行列は A複素共役 と呼び, \bar{A} と記述します.

9. エルミート行列及び歪エルミート行列

 ある正方行列 A がそれ自身の転置の複素共役に等しい時,すなわち A = \bar{A}^T であるなら エルミート行列 と呼びます.仮に  A = -\bar{A}^T の場合は A を 歪エルミート行列 と呼びます.仮に A が実行列なら,これらはそれぞれ対称行列および歪対称行列に帰着します.

10. 主対角線と行列のトレース

 仮に A = (a_{jk}) を正方行列とすると, j = k であるすべての要素 a_{jk} を含む対角線を 主対角線 と呼び,主対角線上の全要素の和を A の トレース と呼びます.

 ある行列の  j \ne k なる要素について a_{jk} = 0 の時その行列を 対角行列 と呼びます.

11. 単位行列

 ある正方行列について主対角線上の全要素が 1 に等しく他の要素が全てゼロに等しい時 単位行列 と呼び, I と記述します. I の重要な性質として以下があります.

 AI = IA = A,\ I^n = I\ (n = 1,2,3,\cdots) \cdots(6)

 単位行列は行列代数において,通常の代数における数 1 と同じ役割を果たします.

12. 零行列またはヌル行列

 ある行列についてその要素が全てゼロに等しいなら ヌル行列 または 零行列 と呼び, O または単に 0 と記述します. 0 と同じ次数を持つ任意の行列 A について

 A + 0 = 0 + A = A \cdots(7)

 また仮に A および 0 が正方行列なら

 A0 = 0A = 0 \cdots(8)

 零行列は行列代数において,通常の代数における数 0 と同じ役割を果たします.

Definition of a matrix

A matrix of order m × n, or m by n matrix, is a rectangular array of numbers having m rows and n columns. It can be written in the form

 A = \left( \begin{array}{ccccc}  a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\  a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\  \cdots & \cdots & \cdots & \cdots & \cdots \\  a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{array} \right)\cdots(1)

Each number a_{jk} in this matrix is called an element. The subscripts j and k indicate respectively the row and column of the matrix in which the element appears.

We shall often denote a matrix by a letter, such as A in (1), or by the symbol (a_{jk}) which shows a representative element.

A matrix having only one row is called a row matrix or row vector while a matrix having only one column is called a column matrix or column vector. If the number of rows m and columns n are equal the matrix is called a square matrix of order n \times n or briefly n. A matrix is said to be a real matrix or complex matrix according as its elements are real or complex numbers.
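As a minimal sketch of these definitions (numpy assumed; the matrix is an arbitrary example), the order m \times n, element indexing, and row/column vectors look like this:

```python
import numpy as np

# A 2 x 3 matrix: m = 2 rows, n = 3 columns.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
assert A.shape == (2, 3)

# Element a_{jk}: row j, column k (note numpy indices start at 0,
# while the text's subscripts start at 1).
a_12 = A[0, 1]            # the element in row 1, column 2, i.e. 2

row_vector = A[0:1, :]    # a 1 x 3 row matrix (row vector)
col_vector = A[:, 0:1]    # a 2 x 1 column matrix (column vector)
```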

行列の定義

 m × n 行列 または m 対 n 行列 は, m 個の行と n 個の列を有する数の長方形配列です.下記の形式で記述できます.

 A = \left( \begin{array}{ccccc}  a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\  a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\  \cdots & \cdots & \cdots & \cdots & \cdots \\  a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{array} \right)\cdots(1)

 この配列内のそれぞれの数 a_{jk}要素 と呼びます.添字の j および k は,要素の出現する行列における行と列をそれぞれ示しています.

 行列はしばしば (1) における A のような 1 文字や,代表的な要素を示す記号 (a_{jk}) で記述します.

 ただ 1 行からなる行列を 行行列 または 行ベクトル と呼び,ただ 1 列からなる行列を 列行列 または 列ベクトル と呼びます.仮に行数 m と列数 n が等しいならその行列を次数 n \times n または単に n 次の 正方行列 と呼びます.行列はその要素が実数か複素数かによって 実行列 または 複素行列 と呼ばれます.

Special curvilinear coordinates

1. Cylindrical coordinates (\rho, \phi, z)

Transformation equations:  x = \rho\cos\phi ,\ y = \rho\sin\phi ,\ z = z

where \rho \ge 0 ,\ 0 \le \phi \le 2\pi,\ -\infty < z < \infty .

Scale factors: h_1 = 1,\ h_2 = \rho,\ h_3 = 1

Element of arc length:  ds^2 = d\rho^2 + \rho^2 d\phi^2 + dz^2

Jacobian: \displaystyle \frac{\partial(x, y, z)}{\partial(\rho, \phi, z)} = \rho

Element of volume:  dV = \rho d\rho d\phi dz

Laplacian: \displaystyle \nabla^2U = \frac{1}{\rho}\frac{\partial}{\partial\rho}\left( \rho\frac{\partial U}{\partial\rho} \right) + \frac{1}{\rho^2}\frac{\partial^2U}{\partial\phi^2} + \frac{\partial^2U}{\partial z^2}   = \frac{\partial^2U}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial U}{\partial\rho} + \frac{1}{\rho^2}\frac{\partial^2U}{\partial\phi^2} + \frac{\partial^2U}{\partial z^2}

Note that corresponding results can be obtained for polar coordinates in the plane by omitting z dependence. In such case for example, ds^2 = d\rho^2 + \rho^2d\phi^2, while the element of volume is replaced by the element of area, dA = \rho d\rho d\phi.

2. Spherical coordinates (r, \theta, \phi)

Transformation equations: x = r\sin\theta\cos\phi,\ y = r\sin\theta\sin\phi,\ z = r\cos\theta

where r \ge 0,\ 0 \le \theta \le \pi,\ 0 \le \phi \le 2\pi .

Scale factors: h_1 = 1,\ h_2 = r,\ h_3 = r\sin\theta

Element of arc length:  ds^2 = dr^2 + r^2d\theta^2 + r^2\sin^2\theta d\phi^2

Jacobian: \displaystyle \frac{\partial(x, y, z)}{\partial(r, \theta, \phi)} = r^2\sin\theta

Element of volume:  dV = r^2\sin\theta drd\theta d\phi

Laplacian: \displaystyle \nabla^2U = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial U}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial U}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2U}{\partial\phi^2}

Other types of coordinate systems are possible.
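As a small symbolic sketch (sympy assumed; the symbol names are illustrative), the Jacobians quoted above for cylindrical and spherical coordinates can be verified directly from the transformation equations:

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)
r, theta = sp.symbols('r theta', positive=True)

# Cylindrical coordinates: the Jacobian should come out as rho.
xyz = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi), z])
J_cyl = xyz.jacobian([rho, phi, z]).det()
assert sp.simplify(J_cyl - rho) == 0

# Spherical coordinates: the Jacobian should come out as r^2 sin(theta).
xyz = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
                 r * sp.sin(theta) * sp.sin(phi),
                 r * sp.cos(theta)])
J_sph = xyz.jacobian([r, theta, phi]).det()
assert sp.simplify(J_sph - r**2 * sp.sin(theta)) == 0
```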

特殊な曲線座標

1. 円柱座標系 (\rho, \phi, z)

変換式:  x = \rho\cos\phi ,\ y = \rho\sin\phi ,\ z = z

ここで \rho \ge 0 ,\ 0 \le \phi \le 2\pi,\ -\infty < z < \infty .

スケール因子: h_1 = 1,\ h_2 = \rho,\ h_3 = 1

弧長要素:  ds^2 = d\rho^2 + \rho^2 d\phi^2 + dz^2

ヤコビアン: \displaystyle \frac{\partial(x, y, z)}{\partial(\rho, \phi, z)} = \rho

体積要素:  dV = \rho d\rho d\phi dz

ラプラシアン: \displaystyle \nabla^2U = \frac{1}{\rho}\frac{\partial}{\partial\rho}\left( \rho\frac{\partial U}{\partial\rho} \right) + \frac{1}{\rho^2}\frac{\partial^2U}{\partial\phi^2} + \frac{\partial^2U}{\partial z^2}   = \frac{\partial^2U}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial U}{\partial\rho} + \frac{1}{\rho^2}\frac{\partial^2U}{\partial\phi^2} + \frac{\partial^2U}{\partial z^2}

 z 依存性を省略した平面内での極座標系で対応する結果が得られることに注意してください.そのような場合,例えば ds^2 = d\rho^2 + \rho^2d\phi^2 となり,体積要素は面積要素 dA = \rho d\rho d\phi で置き換えられます.

2. 球面座標系 (r, \theta, \phi)

変換式: x = r\sin\theta\cos\phi,\ y = r\sin\theta\sin\phi,\ z = r\cos\theta

ここで r \ge 0,\ 0 \le \theta \le \pi,\ 0 \le \phi \le 2\pi .

スケール因子: h_1 = 1,\ h_2 = r,\ h_3 = r\sin\theta

弧長要素:  ds^2 = dr^2 + r^2d\theta^2 + r^2\sin^2\theta d\phi^2

ヤコビアン: \displaystyle \frac{\partial(x, y, z)}{\partial(r, \theta, \phi)} = r^2\sin\theta

体積要素:  dV = r^2\sin\theta drd\theta d\phi

ラプラシアン: \displaystyle \nabla^2U = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial U}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial U}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2U}{\partial\phi^2}

他の種類の座標系も可能です.

Gradient, divergence, curl and Laplacian in orthogonal curvilinear coordinates

If \Phi is a scalar function and \bold{A} = A_1\bold{e_1} + A_2\bold{e_2} + A_3\bold{e_3} a vector function of orthogonal curvilinear coordinates u_1, u_2, u_3, we have the following results.

1. \displaystyle \nabla\Phi = grad\Phi = \frac{1}{h_1}\frac{\partial\Phi}{\partial u_1}\bold{e_1} + \frac{1}{h_2}\frac{\partial\Phi}{\partial u_2}\bold{e_2} + \frac{1}{h_3}\frac{\partial\Phi}{\partial u_3}\bold{e_3}

2. \displaystyle \nabla\cdot\bold{A} = div\bold{A} = \frac{1}{h_1h_2h_3}\left[ \frac{\partial}{\partial u_1}(h_2h_3A_1) + \frac{\partial}{\partial u_2}(h_3h_1A_2) + \frac{\partial}{\partial u_3}(h_1h_2A_3) \right]

3. \displaystyle \nabla\times\bold{A} = curl\bold{A}   = \frac{1}{h_1h_2h_3}\left| \begin{array}{ccc} h_1\bold{e_1} & h_2\bold{e_2} & h_3\bold{e_3} \\ \frac{\partial}{\partial u_1} & \frac{\partial}{\partial u_2} & \frac{\partial}{\partial u_3} \\ h_1A_1 & h_2A_2 & h_3A_3 \end{array} \right|

4. \displaystyle \nabla^2\Phi = Laplacian\ of\ \Phi\\   = \frac{1}{h_1h_2h_3}\left[ \frac{\partial}{\partial u_1}\left( \frac{h_2h_3}{h_1}\frac{\partial\Phi}{\partial u_1} \right) + \frac{\partial}{\partial u_2}\left( \frac{h_3h_1}{h_2}\frac{\partial\Phi}{\partial u_2} \right) + \frac{\partial}{\partial u_3}\left( \frac{h_1h_2}{h_3}\frac{\partial\Phi}{\partial u_3} \right) \right]

These reduce to the usual expressions in rectangular coordinates if we replace (u_1, u_2, u_3) by (x, y, z), in which case \bold{e_1}, \bold{e_2} and \bold{e_3} are replaced by \bold{i}, \bold{j} and \bold{k} and h_1 = h_2 = h_3 = 1.
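As a symbolic sketch (sympy assumed), substituting the spherical scale factors h_1 = 1, h_2 = r, h_3 = r\sin\theta into the general Laplacian formula 4 reproduces the spherical Laplacian quoted in the previous section:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
U = sp.Function('U')(r, theta, phi)

# Scale factors for spherical coordinates (u1, u2, u3) = (r, theta, phi).
h1, h2, h3 = 1, r, r * sp.sin(theta)

# General orthogonal-coordinate Laplacian, formula 4 above.
lap = (sp.diff(h2 * h3 / h1 * sp.diff(U, r), r)
       + sp.diff(h3 * h1 / h2 * sp.diff(U, theta), theta)
       + sp.diff(h1 * h2 / h3 * sp.diff(U, phi), phi)) / (h1 * h2 * h3)

# Compare with the spherical Laplacian quoted earlier.
expected = (sp.diff(r**2 * sp.diff(U, r), r) / r**2
            + sp.diff(sp.sin(theta) * sp.diff(U, theta), theta)
              / (r**2 * sp.sin(theta))
            + sp.diff(U, phi, 2) / (r**2 * sp.sin(theta)**2))
assert sp.simplify(lap - expected) == 0
```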

直交曲線座標における勾配,発散,回転およびラプラシアン

 仮に \Phi が一つのスカラー関数であり,また \bold{A} = A_1\bold{e_1} + A_2\bold{e_2} + A_3\bold{e_3} が直交曲線座標 u_1, u_2, u_3 のベクトル関数の時,下記の結果を得ます.

1. \displaystyle \nabla\Phi = grad\Phi = \frac{1}{h_1}\frac{\partial\Phi}{\partial u_1}\bold{e_1} + \frac{1}{h_2}\frac{\partial\Phi}{\partial u_2}\bold{e_2} + \frac{1}{h_3}\frac{\partial\Phi}{\partial u_3}\bold{e_3}

2. \displaystyle \nabla\cdot\bold{A} = div\bold{A} = \frac{1}{h_1h_2h_3}\left[ \frac{\partial}{\partial u_1}(h_2h_3A_1) + \frac{\partial}{\partial u_2}(h_3h_1A_2) + \frac{\partial}{\partial u_3}(h_1h_2A_3) \right]

3. \displaystyle \nabla\times\bold{A} = curl\bold{A}   = \frac{1}{h_1h_2h_3}\left| \begin{array}{ccc} h_1\bold{e_1} & h_2\bold{e_2} & h_3\bold{e_3} \\ \frac{\partial}{\partial u_1} & \frac{\partial}{\partial u_2} & \frac{\partial}{\partial u_3} \\ h_1A_1 & h_2A_2 & h_3A_3 \end{array} \right|

4. \displaystyle \nabla^2\Phi = Laplacian\ of\ \Phi\\   = \frac{1}{h_1h_2h_3}\left[ \frac{\partial}{\partial u_1}\left( \frac{h_2h_3}{h_1}\frac{\partial\Phi}{\partial u_1} \right) + \frac{\partial}{\partial u_2}\left( \frac{h_3h_1}{h_2}\frac{\partial\Phi}{\partial u_2} \right) + \frac{\partial}{\partial u_3}\left( \frac{h_1h_2}{h_3}\frac{\partial\Phi}{\partial u_3} \right) \right]

 仮に (u_1, u_2, u_3) を (x, y, z) で置き換えるなら,これらの結果は直交座標系における通常の式に帰着します.その場合 \bold{e_1}, \bold{e_2} および \bold{e_3} は \bold{i}, \bold{j} および \bold{k} に置き換えられ, h_1 = h_2 = h_3 = 1 となります.

Orthogonal curvilinear coordinates. Jacobians

The transformation equations

x = f(u_1, u_2, u_3),\ y = g(u_1, u_2, u_3),\ z = h(u_1, u_2, u_3)\cdots(17)

where we assume that f, g, h are continuous, have continuous partial derivatives, and have a single-valued inverse, establish a one-to-one correspondence between points in an xyz and a u_1u_2u_3 rectangular coordinate system. In vector notation the transformation (17) can be written

\bold{r} = x\bold{i} + y\bold{j} + z\bold{k} = f(u_1, u_2, u_3)\bold{i} + g(u_1, u_2, u_3)\bold{j} + h(u_1, u_2, u_3)\bold{k}\cdots (18)

A point P can be defined not only by rectangular coordinates (x, y, z) but by coordinates (u_1, u_2, u_3) as well. We call (u_1, u_2, u_3) the curvilinear coordinates of the point.

If u_2 and u_3 are constant, then as u_1 varies, \bold{r} describes a curve which we call the u_1 coordinate curve. Similarly we define the u_2 and u_3 coordinate curves through P.

From (18), we have

\displaystyle d\bold{r} = \frac{\partial\bold{r}}{\partial u_1}du_1 + \frac{\partial\bold{r}}{\partial u_2}du_2 + \frac{\partial\bold{r}}{\partial u_3}du_3 \cdots (19)

The vector \partial\bold{r}/\partial u_1 is tangent to the u_1 coordinate curve at P. If \bold{e_1} is a unit vector at P in this direction, we can write  \partial \bold{r} / \partial u_1 = h_1\bold{e_1} where h_1 = |\partial\bold{r}/\partial u_1|. Similarly we can write \partial\bold{r} / \partial u_2 = h_2\bold{e_2} and  \partial\bold{r}/\partial u_3 = h_3 \bold{e_3}, where h_2 = |\partial\bold{r}/\partial u_2| and  h_3 = |\partial\bold{r}/\partial u_3| respectively. Then (19) can be written

d\bold{r} = h_1du_1\bold{e_1} + h_2du_2\bold{e_2} + h_3du_3\bold{e_3}\cdots(20)

The quantities h_1, h_2, h_3 are sometimes called scale factors.

If \bold{e_1}, \bold{e_2}, \bold{e_3} are mutually perpendicular at any point P, the curvilinear coordinates are called orthogonal. In such case the element of arc length ds is given by

ds^2 = d\bold{r} \cdot d\bold{r} = h_1^2du_1^2 + h_2^2du_2^2 + h_3^2du_3^2 \cdots(21)

and corresponds to the square of the length of the diagonal of the elemental parallelepiped formed by the vectors h_1du_1\bold{e_1}, h_2du_2\bold{e_2}, h_3du_3\bold{e_3}.

Also, in the case of orthogonal coordinates the volume of the parallelepiped is given by

 dV = |(h_1du_1\bold{e_1}) \cdot (h_2du_2\bold{e_2}) \times (h_3du_3\bold{e_3})| = h_1h_2h_3du_1du_2du_3 \cdots (22)

which can be written as

\displaystyle dV = \left| \frac{\partial\bold{r}}{\partial u_1} \cdot \frac{\partial\bold{r}}{\partial u_2} \times \frac{\partial\bold{r}}{\partial u_3} \right| du_1du_2du_3   = \left| \frac{\partial(x, y, z)}{\partial(u_1, u_2, u_3)} \right|du_1du_2du_3 \cdots (23)

where

\displaystyle \frac{\partial(x, y, z)}{\partial(u_1, u_2, u_3)}   = \left| \begin{array}{ccc}   \frac{\partial x}{\partial u_1} & \frac{\partial x}{\partial u_2} & \frac{\partial x}{\partial u_3} \\   \frac{\partial y}{\partial u_1} & \frac{\partial y}{\partial u_2} & \frac{\partial y}{\partial u_3} \\   \frac{\partial z}{\partial u_1} & \frac{\partial z}{\partial u_2} & \frac{\partial z}{\partial u_3} \end{array} \right|\cdots (24)

is called the Jacobian of the transformation.

It is clear that when the Jacobian is identically zero there is no parallelepiped. In such case there is a functional relationship between x, y and z, i.e. there is a function \phi such that \phi(x, y, z) = 0 identically.
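As a small symbolic sketch (sympy assumed; the transformation is a contrived example), a Jacobian that vanishes identically signals exactly such a functional relationship:

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')

# A transformation whose Jacobian vanishes identically:
# x, y, z are functionally dependent, since z = x + y.
x = u1 + u2
y = u2 + u3
z = u1 + 2*u2 + u3
J = sp.Matrix([x, y, z]).jacobian([u1, u2, u3]).det()
assert sp.simplify(J) == 0   # no volume element: phi(x, y, z) = x + y - z = 0
```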