Test #2 Topics
Test #2 covers the following sections of the book:
- Systems of Linear Equations (Chapter 2, pages 49 - 81, 83 - 85, 88, 89,
plus the Jacobi and Gauss-Seidel iterative methods, which are covered in
Sections 11.5.2 - 11.5.3, pages 469 - 471)
- Eigenvalue Problems (Chapter 4, pages 157 - 162, 165 - 166, Sections
4.5.1 - 4.5.2, pages 173 - 177)
Topics
- Gaussian Elimination: Know how to solve a system using Gaussian
elimination with the three pivoting strategies we studied; understand
why the naive (no-pivoting) approach can be unstable and why scaled
pivoting is sometimes necessary. (A sketch of elimination with partial
pivoting appears after this list.)
- Complexity of Solving Systems: Know the complexity of performing
Gaussian elimination and of forward and backward substitution (solving
triangular systems); see the operation-count note after this list.
- Condition Number: Know the definition and how to compute it using
both the one and infinity norms; know what it means, that is, what it
tells you about the system and how accurate your solution is; know what
the problem is with computing it in "real life" and why we need ways
to approximate it; understand the approximation algorithms used in
your assignment (Problem 2.4). (A sketch of the direct computation
appears after this list.)
- LU Decomposition: Know how to compute the LU decomposition as you
perform Gaussian elimination. Understand that if you pivot, the LU is
the decomposition of PA, a permutation of A. Understand why we compute
the LU and how it is used to solve Ax = b for different b's once it
has been computed. (See the LU sketch after this list.)
- Implementation Issues: Understand how (and why) we compute and store
both L and U in the same matrix, without explicitly storing the 0s or
the 1s on the diagonal of L. Understand why we do not literally swap
rows (and what we do instead to keep track of such swaps). (The LU
sketch after this list illustrates both points.)
- Residual: Know how to compute the residual for a solution and know
how (together with the condition number) to interpret it. (See the
residual note after this list.)
- Iterative Refinement (Improvement): Know how to perform iterative
refinement on a solution. (A sketch appears after this list.)
- Iterative Methods: Know the Jacobi and Gauss-Seidel algorithms for
solving systems of equations iteratively. Know the concept of diagonal
dominance and how it relates to the convergence of these methods.
(Sketches of both iterations appear after this list.)
- Eigenvalues and Eigenvectors: Know the definitions and the
traditional (linear algebra) way of computing them. Be able to explain
the numerical problems posed by this way of computing them. (See the
note after this list.)
- Localizing Eigenvalues: Know the relationship between "where"
eigenvalues reside in the complex plane and the norm of the matrix;
know how to compute Gershgorin circles and what they tell you about
where eigenvalues are. (A Gershgorin sketch appears after this list.)
- Iterative Methods for Computing Eigenvalues: Know the Power
Iteration, Normalized Power Iteration, and Inverse Iteration
algorithms and what each finds. Understand why normalization is used;
know the limitations of these methods. (Sketches appear after this
list.)
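Illustrative sketches (Python / NumPy)
The sketches below are minimal examples added as study aids; the
function names, variable names, test matrices, and iteration counts
are my own choices and are not taken from the book or the assignments.

Gaussian elimination with partial pivoting (the naive and scaled
strategies differ only in how the pivot row is chosen):

    import numpy as np

    def gauss_solve(A, b):
        """Gaussian elimination with partial pivoting, then backward
        substitution.  Minimal sketch with no error checking."""
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            # Partial pivoting: largest magnitude entry in column k.
            p = k + np.argmax(np.abs(A[k:, k]))
            if p != k:
                A[[k, p]] = A[[p, k]]
                b[[k, p]] = b[[p, k]]
            # Eliminate the entries below the pivot.
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Backward substitution on the upper-triangular system.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x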
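Operation counts (standard results, stated here for reference):
Gaussian elimination on an n x n system costs on the order of n^3/3
multiplications, i.e. O(n^3); forward or backward substitution on a
triangular system costs on the order of n^2/2, i.e. O(n^2).  This gap
is why a saved LU factorization pays off when the same matrix must be
solved against many right-hand sides.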
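Condition number in the one and infinity norms.  Computing it exactly
requires A^-1, which is more work than solving the system itself; that
is the "real life" problem, and it is why the cheaper approximation
algorithms referenced above exist.  The sketch below only does the
direct computation, useful for checking hand work:

    import numpy as np

    A = np.array([[1., 2.],
                  [2., 3.999]])           # nearly singular example

    # cond(A) = ||A|| * ||A^-1|| in the chosen norm.
    Ainv = np.linalg.inv(A)
    cond_1   = np.linalg.norm(A, 1)      * np.linalg.norm(Ainv, 1)
    cond_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(Ainv, np.inf)

    # NumPy computes the same quantities directly:
    print(cond_1,   np.linalg.cond(A, 1))
    print(cond_inf, np.linalg.cond(A, np.inf))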
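LU factorization, covering both the LU Decomposition and the
Implementation Issues items: L and U share one matrix, the unit
diagonal of L is implicit, rows are never physically swapped (a pivot
index vector records the ordering), and the result factors PA rather
than A.  Once factored, each new right-hand side costs only two
triangular solves.  A minimal sketch:

    import numpy as np

    def lu_factor(A):
        """In-place LU with partial pivoting.  Returns (LU, piv): the
        multipliers of L sit below the diagonal of LU, U sits on and
        above it, and piv records the row order, so this factors a
        row permutation of A."""
        LU = A.astype(float).copy()
        n = LU.shape[0]
        piv = np.arange(n)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(LU[piv[k:], k]))
            piv[[k, p]] = piv[[p, k]]      # swap indices, not rows
            for i in range(k + 1, n):
                LU[piv[i], k] /= LU[piv[k], k]        # multiplier of L
                LU[piv[i], k+1:] -= LU[piv[i], k] * LU[piv[k], k+1:]
        return LU, piv

    def lu_solve(LU, piv, b):
        """Solve Ax = b by reusing the factorization: forward then
        backward substitution through the pivot vector."""
        n = len(b)
        y = np.zeros(n)
        for i in range(n):                 # Ly = Pb (unit diagonal)
            y[i] = b[piv[i]] - LU[piv[i], :i] @ y[:i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):     # Ux = y
            x[i] = (y[i] - LU[piv[i], i+1:] @ x[i+1:]) / LU[piv[i], i]
        return x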
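Residual, for reference: with computed solution xhat, the residual is
r = b - A*xhat.  A small residual alone does not guarantee an accurate
answer; the standard bound is

    ||x - xhat|| / ||xhat||  <=  cond(A) * ||r|| / (||A|| * ||xhat||)

so a small relative residual implies an accurate solution only when
the condition number is modest.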
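Iterative refinement, sketched with a generic solve standing in for
reuse of an existing LU factorization (in practice you would reuse L
and U, and ideally compute the residual in higher precision):

    import numpy as np

    def refine(A, b, x, steps=2):
        """A few steps of iterative refinement on an approximate x."""
        for _ in range(steps):
            r = b - A @ x                # residual of the current x
            d = np.linalg.solve(A, r)    # correction: solve A d = r
            x = x + d                    # improved solution
        return x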
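Jacobi and Gauss-Seidel, as minimal sketches.  Both converge whenever
A is strictly diagonally dominant (each |a_ii| exceeds the sum of the
magnitudes of the other entries in row i); the fixed iteration counts
here are arbitrary:

    import numpy as np

    def jacobi(A, b, x0, iters=50):
        """Jacobi: every new component uses only old values."""
        x = x0.astype(float).copy()
        d = np.diag(A)
        for _ in range(iters):
            x = (b - (A @ x - d * x)) / d
        return x

    def gauss_seidel(A, b, x0, iters=50):
        """Gauss-Seidel: each component uses the newest values."""
        x = x0.astype(float).copy()
        n = len(b)
        for _ in range(iters):
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
        return x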
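Eigenvalue definitions, for reference: lambda is an eigenvalue of A
with eigenvector v != 0 when Av = lambda*v, equivalently when
det(A - lambda*I) = 0.  The classical route of forming the
characteristic polynomial and finding its roots is numerically
troublesome because the roots of a polynomial can be extremely
sensitive to small errors in its coefficients.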
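Gershgorin circles (together with the norm bound: every eigenvalue
satisfies |lambda| <= ||A|| for any induced matrix norm).  Each disk
is centered at a diagonal entry with radius equal to the sum of the
absolute off-diagonal entries in that row, and every eigenvalue lies
in the union of the disks.  A minimal sketch:

    import numpy as np

    def gershgorin(A):
        """Return (center, radius) for each Gershgorin row disk."""
        centers = np.diag(A)
        radii = np.abs(A).sum(axis=1) - np.abs(centers)
        return list(zip(centers, radii))

    A = np.array([[ 4., 1.,  0.],
                  [ 1., 3.,  1.],
                  [ 0., 1., 10.]])
    for c, r in gershgorin(A):
        print(f"disk centered at {c} with radius {r}")
    print("eigenvalues:", np.linalg.eigvals(A))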
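Normalized power iteration and inverse iteration.  Power iteration
converges to the eigenvector of the dominant (largest magnitude)
eigenvalue, provided there is one and the starting vector has a
component in its direction; normalization just keeps the iterates from
overflowing or underflowing.  Inverse iteration is power iteration
applied to (A - shift*I)^-1 and finds the eigenvalue closest to the
shift.  A minimal sketch (fixed iteration counts instead of a
convergence test):

    import numpy as np

    def power_iteration(A, x0, iters=100):
        """Normalized power iteration: dominant eigenvalue (by
        magnitude) and its eigenvector."""
        x = x0 / np.linalg.norm(x0, np.inf)
        for _ in range(iters):
            y = A @ x
            lam = np.linalg.norm(y, np.inf)   # estimate of |lambda_1|
            x = y / lam                       # normalize each step
        return lam, x

    def inverse_iteration(A, x0, shift=0.0, iters=100):
        """Inverse iteration: eigenvalue of A closest to `shift`."""
        M = A - shift * np.eye(A.shape[0])
        x = x0 / np.linalg.norm(x0, np.inf)
        for _ in range(iters):
            y = np.linalg.solve(M, x)         # power step on M^-1
            x = y / np.linalg.norm(y, np.inf)
        lam = (x @ A @ x) / (x @ x)           # Rayleigh quotient for A
        return lam, x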
Ch. 4 Review Problems: pages 205 - 207 #4.20, 4.30, 4.41, 4.44,
4.46; pages 208 - 209 #4.3 (a) - (f), 4.16.