Iterative Methods for Sparse Linear Systems (Second Edition)
  • ISBN: 9787030234834
    Author: Yousef Saad
  • Original title: Iterative Methods for Sparse Linear Systems
  • Binding: Paperback
    Format: B5
  • Pages: 552
    Word count: 665,000
    Language: English
  • Publisher: Science Press
    Publication date: 2009-01
  • Category: Mathematics > Computational Mathematics
  • List price: ¥198.00
    Sale price: ¥156.42
  • Medium: Print on demand

Description

Iterative Methods for Sparse Linear Systems, Second Edition gives an in-depth, up-to-date view of practical algorithms for solving large-scale linear systems of equations. These equations can number in the millions and are sparse in the sense that each involves only a small number of unknowns. The methods described are iterative, i.e., they provide sequences of approximations that will converge to the solution.
This new edition includes a wide range of the best methods available today. The author has added a new chapter on multigrid techniques and has updated material throughout the text, particularly the chapters on sparse matrices, Krylov subspace methods, preconditioning techniques, and parallel preconditioners. Material on older topics has been removed or shortened, numerous exercises have been added, and many typographical errors have been corrected. The updated and expanded bibliography now includes more recent works emphasizing new and important research topics in this field.
This book can be used to teach graduate-level courses on iterative methods for linear systems. Engineers and mathematicians will find its contents easily accessible, and practitioners and educators will value it as a helpful resource. The preface includes syllabi that can be used for either a semester- or quarter-length course in both mathematics and computer science.
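To make the idea of "a sequence of approximations converging to the solution" concrete, the short Python sketch below is an illustration only (it is not taken from the book): it applies one of the basic relaxation schemes treated in Chapter 4, Jacobi iteration, to a small one-dimensional Poisson model problem of the kind set up in Chapter 2. The problem size, iteration limit, and tolerance are arbitrary example values.

    # Illustration only: Jacobi iteration (a Chapter 4 topic) on a 1-D Poisson model problem.
    # The problem size, iteration cap, and tolerance are arbitrary example values.
    import numpy as np

    n = 5
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal model matrix
    b = np.ones(n)

    x = np.zeros(n)                   # initial guess x_0
    D = np.diag(A)                    # diagonal of A drives the Jacobi update
    for k in range(200):              # build the sequence of approximations x_1, x_2, ...
        r = b - A @ x                 # residual of the current iterate
        if np.linalg.norm(r) < 1e-10:
            break                     # stop once the residual is negligibly small
        x = x + r / D                 # Jacobi update: x_{k+1} = x_k + D^{-1} r_k

    print(k, x)                       # x approaches the exact solution of A x = b

The Krylov subspace methods of Chapters 6 and 7 (CG, GMRES, BICGSTAB, and others) together with the preconditioning techniques of Chapters 9 and 10 replace this plain update with far more effective ones, but the basic pattern of repeatedly correcting an approximation using its residual is the same.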

Contents

  • Preface to the Second Edition
    Preface to the First Edition
    1 Background in Linear Algebra
    1.1 Matrices
    1.2 Square Matrices and Eigenvalues
    1.3 Types of Matrices
    1.4 Vector Inner Products and Norms
    1.5 Matrix Norms
    1.6 Subspaces, Range, and Kernel
    1.7 Orthogonal Vectors and Subspaces
    1.8 Canonical Forms of Matrices
    1.8.1 Reduction to the Diagonal Form
    1.8.2 The Jordan Canonical Form
    1.8.3 The Schur Canonical Form
    1.8.4 Application to Powers of Matrices
    1.9 Normal and Hermitian Matrices
    1.9.1 Normal Matrices
    1.9.2 Hermitian Matrices
    1.10 Nonnegative Matrices, M-Matrices
    1.11 Positive Definite Matrices
    1.12 Projection Operators
    1.12.1 Range and Null Space of a Projector
    1.12.2 Matrix Representations
    1.12.3 Orthogonal and Oblique Projectors
    1.12.4 Properties of Orthogonal Projectors
    1.13 Basic Concepts in Linear Systems
    1.13.1 Existence of a Solution
    1.13.2 Perturbation Analysis
    Exercises
    Notes and References
    2 Discretization of Partial Differential Equations
    2.1 Partial Differential Equations
    2.1.1 Elliptic Operators
    2.1.2 The Convection Diffusion Equation
    2.2 Finite Difference Methods
    2.2.1 Basic Approximations
    2.2.2 Difference Schemes for the Laplacian Operator
    2.2.3 Finite Differences for One-Dimensional Problems
    2.2.4 Upwind Schemes
    2.2.5 Finite Differences for Two-Dimensional Problems
    2.2.6 Fast Poisson Solvers
    2.3 The Finite Element Method
    2.4 Mesh Generation and Refinement
    2.5 Finite Volume Method
    Exercises
    Notes and References
    3 Sparse Matrices
    3.1 Introduction
    3.2 Graph Representations
    3.2.1 Graphs and Adjacency Graphs
    3.2.2 Graphs of PDE Matrices
    3.3 Permutations and Reorderings
    3.3.1 Basic Concepts
    3.3.2 Relations with the Adjacency Graph
    3.3.3 Common Reorderings
    3.3.4 Irreducibility
    3.4 Storage Schemes
    3.5 Basic Sparse Matrix Operations
    3.6 Sparse Direct Solution Methods
    3.6.1 Minimum Degree Ordering
    3.6.2 Nested Dissection Ordering
    3.7 Test Problems
    Exercises
    Notes and References
    4 Basic Iterative Methods
    4.1 Jacobi, Gauss-Seidel, and Successive Overrelaxation
    4.1.1 Block Relaxation Schemes
    4.1.2 Iteration Matrices and Preconditioning
    4.2 Convergence
    4.2.1 General Convergence Result
    4.2.2 Regular Splittings
    4.2.3 Diagonally Dominant Matrices
    4.2.4 Symmetric Positive Definite Matrices
    4.2.5 Property A and Consistent Orderings
    4.3 Alternating Direction Methods
    Exercises
    Notes and References
    5 Projection Methods
    5.1 Basic Definitions and Algorithms
    5.1.1 General Projection Methods
    5.1.2 Matrix Representation
    5.2 General Theory
    5.2.1 Two Optimality Results
    5.2.2 Interpretation in Terms of Projectors
    5.2.3 General Error Bound
    5.3 One-Dimensional Projection Processes
    5.3.1 Steepest Descent
    5.3.2 MR Iteration
    5.3.3 Residual Norm Steepest Descent
    5.4 Additive and Multiplicative Processes
    Exercises
    Notes and References
    6 Krylov Subspace Methods, Part I
    6.1 Introduction
    6.2 Krylov Subspaces
    6.3 Arnoldi's Method
    6.3.1 The Basic Algorithm
    6.3.2 Practical Implementations
    6.4 Arnoldi's Method for Linear Systems
    6.4.1 Variation 1: Restarted FOM
    6.4.2 Variation 2: IOM and DIOM
    6.5 Generalized Minimal Residual Method
    6.5.1 The Basic GMRES Algorithm
    6.5.2 The Householder Version
    6.5.3 Practical Implementation Issues
    6.5.4 Breakdown of GMRES
    6.5.5 Variation 1: Restarting
    6.5.6 Variation 2: Truncated GMRES Versions
    6.5.7 Relations Between FOM and GMRES
    6.5.8 Residual Smoothing
    6.5.9 GMRES for Complex Systems
    6.6 The Symmetric Lanczos Algorithm
    6.6.1 The Algorithm
    6.6.2 Relation to Orthogonal Polynomials
    6.7 The Conjugate Gradient Algorithm
    6.7.1 Derivation and Theory
    6.7.2 Alternative Formulations
    6.7.3 Eigenvalue Estimates from the CG Coefficients
    6.8 The Conjugate Residual Method
    6.9 Generalized Conjugate Residual, ORTHOMIN, and ORTHODIR
    6.10 The Faber-Manteuffel Theorem
    6.11 Convergence Analysis
    6.11.1 Real Chebyshev Polynomials
    6.11.2 Complex Chebyshev Polynomials
    6.11.3 Convergence of the CG Algorithm
    6.11.4 Convergence of GMRES
    6.12 Block Krylov Methods
    Exercises
    Notes and References
    7 Krylov Subspace Methods, Part II
    7.1 Lanczos Biorthogonalization
    7.1.1 The Algorithm
    7.1.2 Practical Implementations
    7.2 The Lanczos Algorithm for Linear Systems
    7.3 The Biconjugate Gradient and Quasi-Minimal Residual Algorithms
    7.3.1 The BCG Algorithm
    7.3.2 QMR Algorithm
    7.4 Transpose-Free Variants
    7.4.1 CGS
    7.4.2 BICGSTAB
    7.4.3 TFQMR
    Exercises
    Notes and References
    8 Methods Related to the Normal Equations
    8.1 The Normal Equations
    8.2 Row Projection Methods
    8.2.1 Gauss-Seidel on the Normal Equations
    8.2.2 Cimmino's Method
    8.3 Conjugate Gradient and Normal Equations
    8.3.1 CGNR
    8.3.2 CGNE
    8.4 Saddle-Point Problems
    Exercises
    Notes and References
    9 Preconditioned Iterations
    9.1 Introduction
    9.2 Preconditioned Conjugate Gradient
    9.2.1 Preserving Symmetry
    9.2.2 Efficient Implementations
    9.3 Preconditioned Generalized Minimal Residual
    9.3.1 Left-Preconditioned GMRES
    9.3.2 Right-Preconditioned GMRES
    9.3.3 Split Preconditioning
    9.3.4 Comparison of Right and Left Preconditioning
    9.4 Flexible Variants
    9.4.1 FGMRES
    9.4.2 DQGMRES
    9.5 Preconditioned Conjugate Gradient for the Normal Equations
    9.6 The Concus, Golub, and Widlund Algorithm
    Exercises
    Notes and References
    10 Preconditioning Techniques
    10.1 Introduction
    10.2 Jacobi, Successive Overrelaxation, and Symmetric Successive Overrelaxation Preconditioners
    10.3 Incomplete LU Factorization Preconditioners
    10.3.1 ILU Factorizations
    10.3.2 Zero Fill-in ILU (ILU(0))
    10.3.3 Level of Fill and ILU(p)
    10.3.4 Matrices with Regular Structure
    10.3.5 MILU
    10.4 Threshold Strategies and Incomplete LU with Threshold
    10.4.1 The ILUT Approach
    10.4.2 Analysis
    10.4.3 Implementation Details
    10.4.4 The ILUTP Approach
    10.4.5 The ILUS Approach
    10.4.6 The ILUC Approach
    10.5 Approximate Inverse Preconditioners
    10.5.1 Approximating the Inverse of a Sparse Matrix
    10.5.2 Global Iteration
    10.5.3 Column-Oriented Algorithms
    10.5.4 Theoretical Considerations
    10.5.5 Convergence of Self-Preconditioned MR
    10.5.6 AINVs via Bordering
    10.5.7 Factored Inverses via Orthogonalization: AINV
    10.5.8 Improving a Preconditioner
    10.6 Reordering for Incomplete LU
    10.6.1 Symmetric Permutations
    10.6.2 Nonsymmetric Reorderings
    10.7 Block Preconditioners
    10.7.1 Block Tridiagonal Matrices
    10.7.2 General Matrices
    10.8 Preconditioners for the Normal Equations
    10.8.1 Jacobi, SOR, and Variants
    10.8.2 IC(0) for the Normal Equations
    10.8.3 Incomplete Gram-Schmidt and ILQ
    Exercises
    Notes and References
    11 Parallel Implementations
    11.1 Introduction
    11.2 Forms of Parallelism
    11.2.1 Multiple Functional Units
    11.2.2 Pipelining
    11.2.3 Vector Processors
    11.2.4 Multiprocessing and Distributed Computing
    11.3 Types of Parallel Architectures
    11.3.1 Shared Memory Computers
    11.3.2 Distributed Memory Architectures
    11.4 Types of Operations
    11.5 Matrix-by-Vector Products
    11.5.1 The CSR and CSC Formats
    11.5.2 Matvecs in the Diagonal Format
    11.5.3 The Ellpack-Itpack Format
    11.5.4 The JAD Format
    11.5.5 The Case of Distributed Sparse Matrices
    11.6 Standard Preconditioning Operations
    11.6.1 Parallelism in Forward Sweeps
    11.6.2 Level Scheduling: The Case of Five-Point Matrices
    11.6.3 Level Scheduling for Irregular Graphs
    Exercises
    Notes and References
    12 Parallel Preconditioners
    12.1 Introduction
    12.2 Block Jacobi Preconditioners
    12.3 Polynomial Preconditioners
    12.3.1 Neumann Polynomials
    12.3.2 Chebyshev Polynomials
    12.3.3 Least-Squares Polynomials
    12.3.4 The Nonsymmetric Case
    12.4 Multicoloring
    12.4.1 Red-Black Ordering
    12.4.2 Solution of Red-Black Systems
    12.4.3 Multicoloring for General Sparse Matrices
    12.5 Multi-Elimination Incomplete LU
    12.5.1 Multi-Elimination
    12.5.2 ILUM
    12.6 Distributed Incomplete LU and Symmetric Successive Overrelaxation
    12.7 Other Techniques
    12.7.1 AINVs
    12.7.2 EBE Techniques
    12.7.3 Parallel Row Projection Preconditioners
    Exercises
    Notes and References
    13 Multigrid Methods
    13.1 Introduction
    13.2 Matrices and Spectra of Model Problems
    13.2.1 The Richardson Iteration
    13.2.2 Weighted Jacobi Iteration
    13.2.3 Gauss-Seidel Iteration
    13.3 Intergrid Operations
    13.3.1 Prolongation
    13.3.2 Restriction
    13.4 Standard Multigrid Techniques
    13.4.1 Coarse Problems and Smoothers
    13.4.2 Two-Grid Cycles
    13.4.3 V-Cycles and W-Cycles
    13.4.4 FMG
    13.5 Analysis of the Two-Grid Cycle
    13.5.1 Two Important Subspaces
    13.5.2 Convergence Analysis
    13.6 Algebraic Multigrid
    13.6.1 Smoothness in AMG
    13.6.2 Interpolation in AMG
    13.6.3 Defining Coarse Spaces in AMG
    13.6.4 AMG via Multilevel ILU
    13.7 Multigrid versus Krylov Methods
    Exercises
    Notes and References
    14 Domain Decomposition Methods
    14.1 Introduction
    14.1.1 Notation
    14.1.2 Types of Partitionings
    14.1.3 Types of Techniques
    14.2 Direct Solution and the Schur Complement
    14.2.1 Block Gaussian Elimination
    14.2.2 Properties of the Schur Complement
    14.2.3 Schur Complement for Vertex-Based Partitionings
    14.2.4 Schur Complement for Finite Element Partitionings
    14.2.5 Schur Complement for the Model Problem
    14.3 Schwarz Alternating Procedures
    14.3.1 Multiplicative Schwarz Procedure
    14.3.2 Multiplicative Schwarz Preconditioning
    14.3.3 Additive Schwarz Procedure
    14.3.4 Convergence
    14.4 Schur Complement Approaches
    14.4.1 Induced Preconditioners
    14.4.2 Probing
    14.4.3 Preconditioning Vertex-Based Schur Complements
    14.5 Full Matrix Methods
    14.6 Graph Partitioning
    14.6.1 Basic Definitions
    14.6.2 Geometric Approach
    14.6.3 Spectral Techniques
    14.6.4 Graph Theory Techniques
    Exercises
    Notes and References
    Bibliography
    Index