Sine of a Matrix

The sine of a square matrix \( A \) is defined using the Taylor series expansion for sine: $$ \sin(A) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} A^{2k+1} $$

To compute the sine of a matrix, we evaluate this infinite series by calculating the powers of \( A \) and applying the corresponding coefficients.

This series converges for every square matrix \( A \), because the scalar sine series converges for all inputs.
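As an illustration, here is a minimal Python sketch (using NumPy, my own choice of tool, not part of the original notes) that approximates \( \sin(A) \) by truncating the series:

```python
import numpy as np

def matrix_sin(A, n_terms=20):
    """Approximate sin(A) by summing n_terms of the Taylor series."""
    A = np.asarray(A, dtype=float)
    result = np.zeros_like(A)
    power = A.copy()      # holds A^(2k+1), starting at A^1
    A2 = A @ A            # step between consecutive odd powers
    fact = 1.0            # holds (2k+1)!, starting at 1!
    sign = 1.0            # holds (-1)^k
    for k in range(n_terms):
        result += sign * power / fact
        power = power @ A2
        sign = -sign
        fact *= (2 * k + 2) * (2 * k + 3)
    return result
```

The factorial in the denominator grows much faster than the matrix powers, so a modest number of terms is usually enough. For production use, SciPy provides `scipy.linalg.sinm`, which computes the matrix sine via the eigendecomposition rather than a truncated series.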

Note. The sine of a matrix has various applications, especially in physics and engineering, where it helps model dynamic systems and solve matrix differential equations.

A Practical Example

Consider the matrix \( A \):

$$ A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} $$

This matrix represents a 90° clockwise rotation of the plane, and its simple structure keeps the calculations manageable.

Now, we can compute \( \sin(A) \) using the first few terms of the Taylor series:

$$ \sin(A) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} A^{2k+1} $$

The series for \( \sin(A) \) expands as:

$$ \sin(A) = A - \frac{A^3}{3!} + \frac{A^5}{5!} - \dots $$

Observe that the square of \( A \) is:

$$ A^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = -I $$

where \( I \) is the identity matrix.

Note. To compute a power of a matrix, we multiply \( A \) by itself; we do not raise each element to the power. To calculate \( A^2 \), we proceed with matrix multiplication as follows:

$$ A^2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} $$

$$ = \begin{pmatrix} 0 \cdot 0 + 1 \cdot (-1) & 0 \cdot 1 + 1 \cdot 0 \\ -1 \cdot 0 + 0 \cdot (-1) & -1 \cdot 1 + 0 \cdot 0 \end{pmatrix} $$

$$ = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} $$

This outcome is \(-I\), where \( I \) is the identity matrix:

$$ A^2 = -I $$
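This distinction is easy to check numerically. A quick NumPy snippet (NumPy being my assumption, not part of the original notes) shows that the matrix product `A @ A` gives \(-I\), while elementwise exponentiation does not:

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]])

# Matrix multiplication: A @ A (equivalently np.linalg.matrix_power(A, 2))
print(A @ A)    # [[-1  0]
                #  [ 0 -1]]  i.e. -I

# Elementwise squaring is NOT the matrix power:
print(A ** 2)   # [[0 1]
                #  [1 0]]
```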

Knowing \( A^2 = -I \), it follows that \( A^3 \) can be found as:

$$ A^3 = A \cdot A^2 = A \cdot (-I) = -A $$

Similarly, we find that \( A^4 = I \):

$$ A^4 = A^2 \cdot A^2 = (-I) \cdot (-I) = I $$

This means \( A^5 = A \), since multiplying any matrix \( A \) by the identity matrix \( I \) yields \( A \):

$$ A^5 = A \cdot A^4 = A \cdot I = A $$
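The powers of \( A \) therefore cycle with period four: \( A, -I, -A, I, A, \dots \). This short NumPy check (again, NumPy is my choice of tool, not part of the original notes) confirms the cycle:

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]])

# Powers of A repeat with period 4: A^1 = A, A^2 = -I, A^3 = -A, A^4 = I, A^5 = A, ...
for n in range(1, 6):
    print(f"A^{n} =\n{np.linalg.matrix_power(A, n)}\n")
```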

Using \( A^3 = -A \) and \( A^5 = A \), we can rewrite the series as:

$$ \sin(A) = A - \frac{A^3}{3!} + \frac{A^5}{5!} - \dots $$

$$ \sin(A) = A - \frac{(-A)}{3!} + \frac{A}{5!} - \dots $$

Since \( A^{2k+1} = (-1)^k A \), the sign of each power cancels the \( (-1)^k \) coefficient in the series, so every term is positive:

$$ \sin(A) = A \left(1 + \frac{1}{3!} + \frac{1}{5!} + \dots \right) $$

The scalar series in parentheses is the Taylor series of \( \sinh(1) \), so in this case the matrix sine has a closed form:

$$ \sin(A) = \sinh(1)\, A \approx 1.1752\, A $$

Because the factorials grow rapidly, truncating the series after just a few terms already gives an accurate approximation.
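Since \( A^{2k+1} = (-1)^k A \), every term of the series reduces to \( +A/(2k+1)! \), and the scalar sum \( 1 + 1/3! + 1/5! + \dots \) equals \( \sinh(1) \). A quick numerical check in Python (my addition, not part of the original notes):

```python
import math

import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Partial sum of 1 + 1/3! + 1/5! + ... , the Taylor series of sinh(1)
coeff = sum(1 / math.factorial(2 * k + 1) for k in range(10))

print(coeff)                                     # ~1.1752011936438014
print(np.allclose(coeff * A, math.sinh(1) * A))  # True
```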


Please feel free to point out any errors or typos, or share suggestions to improve these notes. English isn't my first language, so if you notice any mistakes, let me know, and I'll be sure to fix them.
