Page / Description / Equation
428 consistent, inconsistent, dependent systems For a system of two linear equations in two unknowns, the graphs are two lines, and exactly one of the following is true (a short computational illustration follows the list):
  1. The lines intersect at one point $P(a,b)$. In this case the system has exactly one solution, namely the ordered pair $(a,b)$, and the system is said to be consistent.
  2. The lines are parallel and do not intersect. In this case the system has no solution and is said to be inconsistent.
  3. The lines coincide and so intersect at infinitely many points. In this case the system has infinitely many solutions and is said to be dependent.
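As a quick illustration of these three cases, here is a minimal Python sketch (our own example, not from the text; `classify_2x2` is a hypothetical helper) that classifies the system $a_1x+b_1y=c_1$, $a_2x+b_2y=c_2$ by comparing cross-products of the coefficients. It assumes each equation genuinely describes a line (not all coefficients zero).

```python
# Classify the system  a1*x + b1*y = c1,  a2*x + b2*y = c2
# by comparing the coefficients of the two lines.

def classify_2x2(a1, b1, c1, a2, b2, c2):
    if a1 * b2 - a2 * b1 != 0:          # the lines have different directions
        return "consistent (exactly one solution)"
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return "dependent (infinitely many solutions)"   # same line
    return "inconsistent (no solution)"                  # parallel lines

print(classify_2x2(1, 1, 3, 2, -1, 0))   # intersecting lines -> one solution
print(classify_2x2(1, 1, 3, 2, 2, 6))    # same line -> infinitely many
print(classify_2x2(1, 1, 3, 2, 2, 5))    # parallel lines -> no solution
```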
429 Gaussian elimination A systematic approach to solving a system that can be extended to systems of linear equations in more than two unknowns.
429 equivalent systems Systems that have the same solution set. From a given system, an equivalent one can be obtained by any of the following operations (a worked example follows the list):
  1. Interchange the position of two equations.
  2. Multiply both sides of one equation by a nonzero constant.
  3. Add a multiple of one equation to another.
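For instance (a small illustration of our own, not an example from the text), each operation below converts the system on the left into an equivalent system:
\[
\begin{cases} x+y=3\\ 2x-y=0\end{cases}
\;\xrightarrow{\,E_1\leftrightarrow E_2\,}\;
\begin{cases} 2x-y=0\\ x+y=3\end{cases},
\qquad
\begin{cases} x+y=3\\ 2x-y=0\end{cases}
\;\xrightarrow{\,3E_2\to E_2\,}\;
\begin{cases} x+y=3\\ 6x-3y=0\end{cases},
\]
\[
\begin{cases} x+y=3\\ 2x-y=0\end{cases}
\;\xrightarrow{\,-2E_1+E_2\to E_2\,}\;
\begin{cases} x+y=3\\ -3y=-6\end{cases}.
\]
Each of these systems has the same solution, $(x,y)=(1,2)$.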
429 echelon form, back substitution A system of two equations is in echelon form when the coefficient of $x$ in the second equation is $0$. To use back substitution, solve the second equation for $y$, replace $y$ with that value in the first equation, and solve for $x$, clearing any fractions by multiplying through by a suitable constant.
430 echelon form is not unique The echelon form of a system is not unique; it depends on the sequence of operations that we employ.
432 systems of $n$ linear equations in $n$ unknowns Such a system is known as an $n\times n$ linear system. Double-subscript notation is used for the coefficients, and Gaussian elimination is used to solve the system (a computational sketch appears after the list below). The solution is always one of the following:
  1. exactly one solution (consistent system).
  2. no solution (inconsistent system).
  3. infinitely many solutions (dependent system).
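The following Python sketch (ours, not the textbook's) carries out Gaussian elimination to echelon form followed by back substitution. It assumes the system has a unique solution and that no zero pivot is encountered, so no equation interchanges are needed.

```python
# A minimal sketch of Gaussian elimination with back substitution
# for an n x n system  A x = b.

def solve_by_elimination(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward phase: reduce to echelon form (zeros below the diagonal).
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]           # multiple of row k to subtract
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution: solve from the last equation upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x - y = 3,  x + y = 3   ->   x = 2, y = 1
print(solve_by_elimination([[2, -1], [1, 1]], [3, 3]))
```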
435 dependent, inconsistent, and non-square systems When we generate equivalent systems in $n$ unknowns $x_1$, $x_2$, $x_3,$ $\ldots,$ $x_n,$ the coefficients of $x_1,$ $x_2,$ $x_3,$ $\ldots,$ $x_n$ in one of the equations may all become $0$. If the constant on the right side of that equation is also $0$, then the system has infinitely many solutions and is dependent. However, if that constant is nonzero, then the system has no solution and is inconsistent. A system in which the number of equations equals the number of unknowns is called a square system. If the number of equations is less than the number of unknowns, the system is called a nonsquare system.
442 matrix, element, row, column, dimension A rectangular array of numbers enclosed by a pair of brackets is called a matrix, and each number in the matrix is called an element. In matrix notation, double subscripts are used: the first subscript gives the row and the second gives the column. The dimension of the matrix is $m\times n$ ($m$ rows and $n$ columns). \[ \begin{bmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ a_{21}&a_{22}&a_{23}&\cdots&a_{2n}\\ a_{31}&a_{32}&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\ &\vdots\\ a_{m1}&a_{m2}&a_{m3}&\cdots&a_{mn}\\ \end{bmatrix} \]
443 augmented matrix To every such $m\times n$ linear system there corresponds an enlarged, or augmented, matrix of dimension $m\times (n+1)$.
443 echelon form A matrix is also in echelon form if the elements $a_{ij}=0$ when $i\gt j;$ that is, the element in the first column of every row after the first is $0,$ the element in the second column of every row after the second is $0,$ and so on.
444 elementary row operations The elementary row operations on a matrix are the following (a small numerical illustration is given after the list):
  1. Interchange the position of two rows.
  2. Multiply all elements in a row by a nonzero constant.
  3. Add a multiple of one row to another.
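As a small numerical illustration (our own example, not from the text), here are the three row operations applied with NumPy to the augmented matrix of the system $x+y=3$, $2x-y=0$.

```python
import numpy as np

# Augmented matrix of the system  x + y = 3,  2x - y = 0.
M = np.array([[1.0, 1.0, 3.0],
              [2.0, -1.0, 0.0]])

# 1. Interchange rows 1 and 2 (R1 <-> R2).
M1 = M[[1, 0], :]

# 2. Multiply row 2 by a nonzero constant (3*R2 -> R2).
M2 = M.copy()
M2[1] = 3 * M2[1]

# 3. Add a multiple of one row to another (-2*R1 + R2 -> R2).
M3 = M.copy()
M3[1] = -2 * M3[0] + M3[1]

print(M3)   # [[ 1.  1.  3.]
            #  [ 0. -3. -6.]]  -- echelon form; back substitution gives y = 2, x = 1
```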
444 row equivalent If one matrix is obtained from another by a sequence of elementary row operations, we say that the two matrices are row equivalent.
445 matrix method for solving a system of linear equations (a quick numerical check follows the steps below)
  • Step 1. Write the augmented matrix for the system of linear equations.
  • Step 2. Use row operations to generate a row-equivalent matrix in echelon form.
  • Step 3. Write the system of linear equations that corresponds to the matrix in step 2.
  • Step 4. Solve the system of linear equations in step 3 by using back substitution.
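Once the matrix method has produced a solution, a library solver can serve as a quick check. A minimal sketch, assuming NumPy is available and the coefficient matrix is square and invertible:

```python
import numpy as np

# Check the solution of  x + y = 3,  2x - y = 0  with a library solver.
A = np.array([[1.0, 1.0],
              [2.0, -1.0]])
b = np.array([3.0, 0.0])

print(np.linalg.solve(A, b))   # [1. 2.], i.e. x = 1, y = 2
```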
447 dependent, inconsistent, and non-square systems When performing row operations on an augmented matrix, all the elements in one of the rows may become $0$. In this case the corresponding equation is $0=0$, and we can conclude that the system has infinitely many solutions; it is dependent. However, if all the elements in one of the rows except the last entry become $0$, then the corresponding equation has no solution and the system is inconsistent. A non-square system does not have a unique solution.
448 Gauss-Jordan elimination If the augmented matrix for a system of $n$ linear equations in $n$ unknowns $(x_1,x_2,x_3,\ldots,x_n)$ can be reduced by row operations to a form in which the coefficient portion is the identity matrix $I_n$, then the solution can be read directly from the last column. This procedure is known as Gauss-Jordan elimination.
453 equality of matrices Two matrices $A=[a_{ij}]$ and $B=[b_{ij}]$ are equal if they both are of dimension $m\times n$ and \[ a_{ij}=b_{ij} \] for all $i=1,\ldots,m$ and all $j=1,\ldots,n.$
454 matrix addition If $A=[a_{ij}]$ and $B=[b_{ij}]$ are $m\times n$ matrices, then their sum $A+B$ is the $m\times n$ matrix defined by \[ A+B=[a_{ij}+b_{ij}] \]
455 scalar multiplication If $A=[a_{ij}]$ is an $m\times n$ matrix and $k$ is a scalar, then the scalar multiple $kA$ is the $m\times n$ matrix defined by \[ kA=[ka_{ij}] \]
456 matrix subtraction If $A$ and $B$ are $m\times n$ matrices, then \[ A-B=A+(-B) \]
457 properties of matrix addition and scalar multiplication In the properties below, $A,$ $B,$ and $C$ are $m\times n$ matrices, $0$ is the $m\times n$ zero matrix, and $c$ and $d$ are scalars.
457 commutative property for addition \[ A+B=B+A \]
457 associative property for addition \[ (A+B)+C=A+(B+C) \]
457 identity property for addition \[ A+0=A \]
457 inverse property for addition \[ A+(-A)=0 \]
457 associative property for scalar multiplication \[ (cd)A=c(dA) \]
457 identity property for scalar multiplication \[ 1A=A \]
457 distributive property of a scalar over matrix addition \[ c(A+B)=cA+cB \]
457 distributive property of a matrix over scalar addition \[ (c + d)A = cA + dA \]
460 matrix multiplication If $A=[a_{ij}]$ is an $m\times n$ matrix and $B=[b_{ij}]$ is an $n\times p$ matrix, then the product $AB$ is the $m\times p$ matrix defined by \[ AB=[c_{ij}], \quad\text{where}\quad c_{ij} =a_{i1}b_{1j}+a_{i2}b_{2j} +a_{i3}b_{3j}+\cdots+a_{in}b_{nj}. \]
461 properties of matrix multiplication In the properties below, $A$, $B$, and $C$ are matrices of the appropriate dimensions and $c$ is a scalar.
461 associative property \[ A(BC)=(AB)C \]
461 left distributive property of a matrix over matrix addition \[ A(B+C)=AB+AC \]
461 right distributive property of a matrix over matrix addition \[ (B+C)A=BA+CA \]
461 associative property of a scalar with a matrix product \[ c(AB)=(cA)B=A(cB) \]
466 square matrix A matrix of dimension $n\times n$.
466 main diagonal The elements $a_{11},a_{22},a_{33},\ldots,a_{nn}$.
466 identity matrix A square matrix with $1$s along its main diagonal and zeros elsewhere.
467 identity property of matrix multiplication When an $n\times n$ matrix $A$ is multiplied by the identity matrix $I_n$, the result is the original matrix: \[ AI_n=I_nA=A \]
467 verifying inverse matrices To verify that $B$ is the inverse of $A$, check that \[ AB=BA=I_n, \] where $I_n$ is the $n\times n$ identity matrix.
468 inverse property of matrix multiplication If $A$ is an invertible $n\times n$ matrix and $I_n$ is the $n\times n$ identity matrix, then \[ AA^{-1}=A^{-1}A=I_n \]
470 procedure for finding the inverse of a matrix To find the inverse of an $n\times n$ square matrix $A$, proceed as follows (a computational sketch follows the list):
  1. Write the $n\times 2n$ matrix $[A|I_n]$ consisting of matrix $A$ on the left and the identity matrix $I_n$ on the right of the dashed line.
  2. Use elementary row operations on the matrix $[A|I_n]$ to obtain the matrix $[I_n|B]$. The matrix $B$ is the inverse matrix of $A$, that is, $B=A^{-1}$.
  3. Check to see if $AA^{-1}=A^{-1}A=I_n$.
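A minimal NumPy sketch of this $[A\,|\,I_n]\rightarrow[I_n\,|\,A^{-1}]$ procedure (our own code, not the textbook's; it assumes $A$ is invertible and that no zero pivot is encountered, so no row interchanges are needed):

```python
import numpy as np

def inverse_by_row_reduction(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for k in range(n):
        M[k] = M[k] / M[k, k]                     # scale pivot row so pivot is 1
        for i in range(n):
            if i != k:
                M[i] = M[i] - M[i, k] * M[k]      # clear the rest of column k
    return M[:, n:]                               # right half is A^{-1}

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = inverse_by_row_reduction(A)
print(A_inv)                   # [[ 1. -1.]
                               #  [-1.  2.]]
print(A @ A_inv)               # approximately the 2x2 identity matrix
```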
471 invertible, nonsingular, noninvertible, singular If a matrix $A$ has an inverse, it is said to be invertible, or nonsingular. However, not every $n\times n$ square matrix has an inverse. If elementary row operations on the matrix $[A|I_n]$ yield a row of zeros in the $A$ portion of this matrix, then we cannot write $[A|I_n]$ in the form $[I_n|B]$. A matrix that does not have an inverse is noninvertible, or singular.
471 solving matrix equations A matrix equation is an equation in which $A$ and $B$ are known matrices and $X$ is an unknown matrix. Keep the following two facts in mind when solving a matrix equation (see the example after the list):
  1. Matrix division is not defined.
  2. Matrix multiplication is not commutative.
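The NumPy example below (ours, not from the text) shows why these facts matter: $AB\neq BA$ in general, so $AX=B$ is solved by left-multiplying both sides by $A^{-1}$ rather than by "dividing" by $A$.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A @ B)            # [[2. 1.], [4. 3.]]
print(B @ A)            # [[3. 4.], [1. 2.]]  -- different, so AB != BA

X = np.linalg.inv(A) @ B       # X = A^{-1} B solves A X = B
print(np.allclose(A @ X, B))   # True
```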
474 coefficient matrix, variable matrix, constant matrix A linear system can be written as $AX=B$, where (here $X$ and $B$ are shown for a particular system in three unknowns) \[ A= \begin{bmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ a_{21}&a_{22}&a_{23}&\cdots&a_{2n}\\ a_{31}&a_{32}&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&&\vdots\\ a_{m1}&a_{m2}&a_{m3}&\cdots&a_{mn}\\ \end{bmatrix},\quad X= \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} ,\quad\text{and}\quad B= \begin{bmatrix} -1\\ 2\\ 5\\ \end{bmatrix} \] $A$ is the coefficient matrix, $X$ is the variable matrix, and $B$ is the constant matrix.
474 inverse method of solving a linear system of equations If the matrix equation $AX=B$ represents a system of $n$ linear equations in $n$ unknowns and the coefficient matrix $A$ is invertible, then the system has a unique solution given by \[ X = A^{-1}B. \] If the coefficient matrix $A$ is not invertible, then the system is either dependent or inconsistent.
481 determinant of $A$ The determinant $|A|$ is a real number associated with every square matrix $A.$
481 determinant of a $2\times2$ matrix The determinant of the $2\times2$ matrix \[ A= \begin{bmatrix} a_{11}&a_{12}\\ a_{21}&a_{22}\\ \end{bmatrix} \] is \[ |A|= \begin{vmatrix} a_{11}&a_{12}\\ a_{21}&a_{22}\\ \end{vmatrix} =a_{11}a_{22}-a_{21}a_{12} \]
482 minors, cofactor If $A$ is an $n\times n$ matrix, then the minor of an element $a_{ij}$, denoted $M_{ij}$, is the determinant of the matrix that remains after deleting the row and column in which the element $a_{ij}$ appears. The cofactor of an element $a_{ij}$, denoted $C_{ij}$, differs from $M_{ij}$ at most in sign and is given by \[ C_{ij}=(-1)^{i+j}M_{ij} \]
483 expansion by cofactors The determinant of an $n\times n$ matrix can be found by multiplying each element in any row or column by its corresponding cofactor, and then adding the products.
485 diagonal method For a $3\times3$ matrix \[ A= \begin{bmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\\ \end{bmatrix}, \] start by rewriting the first and second columns to the right of the matrix: \[ \begin{bmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\\ \end{bmatrix} \begin{matrix} a_{11}&a_{12}\\ a_{21}&a_{22}\\ a_{31}&a_{32}\\ \end{matrix} \] Now obtain the determinant of $A$ by adding the products along the three left-to-right diagonals and subtracting the products along the three right-to-left diagonals: \[ |A|= (a_{11}a_{22}a_{33}+{a_{12}}a_{23}a_{31}+a_{13}a_{21}a_{32}) -(a_{12}a_{21}a_{33}+{a_{11}}a_{23}a_{32}+a_{13}a_{22}a_{31}) \]
488 Cramer's rule for a system of two linear equations in two unknowns Associated with the system \[ \begin{aligned} a_{11}x+a_{12}y&=k_1\\ a_{21}x+a_{22}y&=k_2 \end{aligned} \] are the three determinants \[ |A|= \begin{vmatrix} a_{11}&a_{12}\\ a_{21}&a_{22}\\ \end{vmatrix},\quad |A_x|= \begin{vmatrix} k_{1}&a_{12}\\ k_{2}&a_{22}\\ \end{vmatrix},\quad\text{and}\quad |A_y|= \begin{vmatrix} a_{11}&k_{1}\\ a_{21}&k_{2}\\ \end{vmatrix} . \] The system has the unique solution \[ x=\frac{|A_x|}{|A|} \quad\text{and}\quad y=\frac{|A_y|}{|A|}, \quad\text{provided }|A|\neq0. \] (A small numerical check of this rule appears below, after the row-operation properties of determinants.)
494 determinant of a matrix in echelon form For a $3\times3$ matrix in echelon form, \[ |A|=a_{11}a_{22}a_{33}, \] the product of the elements along its main diagonal. This statement is true for any $n\times n$ matrix in echelon form: if $A$ is an $n\times n$ matrix in echelon form, then $|A|$ is the product of the elements along its main diagonal.
495 row operation properties of determinants Suppose $A$ is an $n\times n$ matrix (a numerical check follows the list).
  1. If two rows of $A$ are interchanged to form the row-equivalent matrix $B,$ then $|B|=-|A|.$
  2. If a row of $A$ is replaced by $c$ times that row to form the row-equivalent matrix $B,$ then $|B|=c|A|.$
  3. If a row of $A$ is replaced by the sum of that row and $c$ times another row to form the row-equivalent matrix $B,$ then $|B|=|A|.$
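A quick numerical check of these three properties with NumPy, using a small example matrix of our own:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
dA = np.linalg.det(A)

B1 = A[[1, 0, 2], :]             # interchange rows 1 and 2

B2 = A.copy()
B2[1] = 5 * B2[1]                # multiply row 2 by c = 5

B3 = A.copy()
B3[2] = B3[2] + 7 * B3[0]        # add 7*(row 1) to row 3

print(np.isclose(np.linalg.det(B1), -dA))      # True:  |B| = -|A|
print(np.isclose(np.linalg.det(B2), 5 * dA))   # True:  |B| = c|A|
print(np.isclose(np.linalg.det(B3), dA))       # True:  |B| = |A|
```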
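And here is the promised numerical check of Cramer's rule for a $2\times2$ system (example values are ours, not from the text; `det2` is a hypothetical helper):

```python
# Cramer's rule for the system
#    x + y = 3
#   2x - y = 0

def det2(a, b, c, d):
    return a * d - b * c

a11, a12, k1 = 1.0, 1.0, 3.0
a21, a22, k2 = 2.0, -1.0, 0.0

D  = det2(a11, a12, a21, a22)   # |A|
Dx = det2(k1,  a12, k2,  a22)   # |A_x|: constants replace the x-column
Dy = det2(a11, k1,  a21, k2)    # |A_y|: constants replace the y-column

print(Dx / D, Dy / D)           # 1.0 2.0, i.e. x = 1, y = 2
```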
497 elementary column operations for a matrix
  1. Interchange the position of two columns. $$C_i\leftrightarrow C_{j}$$
  2. Multiply all elements in a column by a nonzero constant. $$cC_i\rightarrow C_i$$
  3. Add a multiple of one column to another. $$cC_i+C_j\rightarrow C_j$$
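These column operations can also be checked numerically (a small NumPy example of our own). Since $|A^{T}|=|A|$, a column operation on $A$ amounts to a row operation on $A^{T}$, so column operations affect a determinant exactly as the corresponding row operations do; the sketch verifies this for a column interchange and for adding a multiple of one column to another.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])

B = A[:, [1, 0, 2]]                       # interchange columns 1 and 2
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))   # True: sign changes

C = A.copy()
C[:, 2] = C[:, 2] + 3 * C[:, 0]           # add 3*(column 1) to column 3
print(np.isclose(np.linalg.det(C), np.linalg.det(A)))    # True: unchanged
```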
497 column equivalent If one matrix is obtained from another by a sequence of elementary column operations, the two matrices are column equivalent (the same concept as row equivalence, with columns in place of rows).
497 column operation properties of determinants Column operations affect a determinant in the same way the corresponding row operations do: interchanging two columns changes the sign of the determinant, multiplying a column by $c$ multiplies the determinant by $c$, and adding a multiple of one column to another leaves the determinant unchanged.
498 combining row and column operations with expansion by cofactors
500 zero determinants
504 linear inequalities, solution
505 linear programming
505 graph of a linear inequality
507 system of linear inequalities, solution, graph
508 linear programming, convex, objective function, constraints
509 fundamental principle of linear programming