This book covers the following topics: an introduction to optimization problems; foundations of convex analysis; the theory of unconstrained optimization and the line search algorithm framework; trust region methods; convergence and convergence rate analysis of line search methods; the semismooth Newton method; conjugate gradient methods; constrained optimization theory and its extensions; penalty methods; the augmented Lagrangian method; and applications of these algorithms to practical problems (support vector machine models, hypergraph matching). The treatment of each topic is closely tied to current research frontiers, and by applying optimization algorithms to real problems the book shows students how optimization theory connects with real data, presenting the material from multiple complementary perspectives.
Contents
Preface III
CHAPTER 1
Introduction 1
1.1 About Optimization 1
1.2 Classification of Optimization 4
1.3 Preliminaries in Convex Analysis 10
1.4 Exercises 15
CHAPTER 2
Fundamentals of Optimization 17
2.1 Unconstrained Optimization Problem 17
2.2 What is a Solution? 18
2.2.1 Definitions of Different Solutions 18
2.2.2 Recognizing a Local Minimum 20
2.2.3 Nonsmooth Problems 23
2.3 Overview of Algorithms 25
2.3.1 Line Search Strategy 26
2.3.2 Trust Region Strategy 30
2.4 Convergence 31
2.5 Scaling 32
2.6 Exercises 33
CHAPTER 3
Line Search Methods 35
3.1 Step Length 35
3.1.1 The Wolfe Conditions 37
3.1.2 The Goldstein Conditions 40
3.1.3 Sufficient Decrease and Backtracking 41
3.2 Convergence of Line Search Methods 42
3.3 Rate of Convergence 44
3.3.1 Steepest Descent Method 44
3.3.2 Newton's Method 46
3.3.3 Quasi-Newton Methods 48
3.4 Exercises 50
CHAPTER 4
Trust Region Methods 51
4.1 Outline of the Trust Region Approach 52
4.2 Algorithms Based on the Cauchy Point 54
4.2.1 The Cauchy Point 54
4.2.2 The Dogleg Method 56
4.2.3 Two-Dimensional Subspace Minimization 58
4.3 Global Convergence 59
4.3.1 Reduction Obtained by the Cauchy Point 59
4.3.2 Convergence to Stationary Points 61
4.4 Local Convergence 65
4.5 Other Enhancements 66
4.6 Exercises68
CHAPTER 5
Conjugate Gradient Methods 69
5.1 Linear Conjugate Gradient Method 69
5.1.1 Conjugate Direction Method 69
5.1.2 Conjugate Gradient Method 72
5.1.3 A Practical Form of the Conjugate Gradient Method 75
5.1.4 Rate of Convergence 76
5.1.5 Preconditioning 77
5.2 Nonlinear Conjugate Gradient Methods 78
5.2.1 The Polak-Ribière Method and Variants 80
5.2.2 Global Convergence 81
5.3 Exercises 83
CHAPTER 6
Semismooth Newton's Method 85
6.1 Semismoothness 85
6.2 Nonsmooth Version of Newton's Method 87
6.3 Support Vector Machine 89
6.4 Semismooth Newton's Method for SVM 91
6.5 Exercises 96
CHAPTER 7
Theory of Constrained Optimization 97
7.1 Local and Global Solutions 97
7.1.1 Smoothness 98
7.2 Examples 99
7.3 Tangent Cone and Constraint Qualifications 103
7.4 First-Order Optimality Conditions 105
7.5 Second-Order Conditions 106
7.6 Duality109
7.7 KKT Condition 112
7.8 Dual Problem 114
7.9 Exercises 118
CHAPTER 8
Penalty and Augmented Lagrangian Methods 119
8.1 The Quadratic Penalty Method 119
8.2 Exact Penalty Method 122
8.3 Augmented Lagrangian Method 123
8.4 Quadratic Penalty Method for Hypergraph Matching 125
8.4.1 Hypergraph Matching 126
8.4.2 Mathematical Formulation 126
8.4.3 Relaxation Problem 128
8.4.4 Quadratic Penalty Method for (8.21) 129
8.4.5 Numerical Results 130
8.5 Augmented Lagrangian Method for SVM 132
8.5.1 Support Vector Machine 132
8.5.2 Mathematical Formulation 133
8.5.3 Augmented Lagrangian Method (ALM) 133
8.5.4 Semismooth Newton's Method for the Subproblem 136
8.5.5 Reducing the Computational Cost 137
8.5.6 Convergence Result of ALM 138
8.5.7 Numerical Results on LIBLINEAR 139
8.6 Exercises 141
CHAPTER 9
Bilevel Optimization and Its Applications 143
9.1 Introduction 143
9.2 Bilevel Model for a Case of Hyperparameter Selection in SVC 145
9.2.1 An MPEC Formulation 147
9.3 The Global Relaxation Method (GRM) 148
9.4 MPEC-MFCQ: A Hidden Property 149
9.5 Numerical Results 150
Bibliography 153