Overview
This book is a world-renowned classic. Its coverage is both comprehensive and modular: it introduces the fundamentals of the field, surveys the current state of research, and looks ahead to future developments, making it one of the most complete references on pattern recognition; it has been adopted as a textbook by many universities worldwide. It is suitable as a textbook for graduate students and senior undergraduates in computer science, electronics, communications, and automation, and as a reference for engineers working in computer information processing, automatic control, and related fields.
Key features:
· Provides up-to-date material on clustering algorithms for large and high-dimensional data sets, and on applications in web mining and bioinformatics.
· Covers a wide range of applications, including image analysis, optical character recognition, channel equalization, speech recognition, and audio classification.
· Presents recent results on kernel methods for classification and robust regression.
· Introduces classifier-combination techniques, including the boosting approach.
· Offers more worked examples and figures to deepen the reader's understanding of the various methods.
· Adds new sections on hot topics, including nonlinear dimensionality reduction, nonnegative matrix factorization, relevance feedback, robust regression, semi-supervised learning, spectral clustering, and clustering combination techniques.
About the Authors
Sergios Theodoridis is a professor in the Department of Informatics at the University of Athens, Greece. His research interests include adaptive signal processing, communications, and pattern recognition. He served as chairman of PARLE-95 (the European conference on parallel architectures and languages) and general chairman of EUSIPCO-98 (the European signal processing conference), and is a member of the editorial board of the journal Signal Processing.
Konstantinos Koutroumbas received his Ph.D. from the University of Athens, Greece, in 1995. Since 2001 he has been with the Institute for Space Applications of the National Observatory of Athens, and is an internationally recognized expert in the field.
Contents
preface
chapter 1 introduction
1.1 is pattern recognition important?
1.2 features, feature vectors, and classifiers
1.3 supervised, unsupervised, and semi-supervised learning
1.4 matlab programs
1.5 outline of the book
chapter 2 classifiers based on bayes decision theory
2.1 introduction
2.2 bayes decision theory
2.3 discriminant functions and decision surfaces
2.4 bayesian classification for normal distributions
2.5 estimation of unknown probability density functions
2.6 the nearest neighbor rule
2.7 bayesian networks
2.8 problems
references
chapter 3 linear classifiers
3.1 introduction
3.2 linear discriminant functions and decision hyperplanes
3.3 the perceptron algorithm
3.4 least squares methods
3.5 mean square estimation revisited
3.6 logistic discrimination
3.7 support vector machines
3.8 problems
references
chapter 4 nonlinear classifiers
4.1 introduction
4.2 the xor problem
4.3 the two-layer perceptron
4.4 three-layer perceptrons
4.5 algorithms based on exact classification of the training set
4.6 the backpropagation algorithm
4.7 variations on the backpropagation theme
4.8 the cost function choice
4.9 choice of the network size
4.10 a simulation example
4.11 networks with weight sharing
4.12 generalized linear classifiers
4.13 capacity of the l-dimensional space in linear dichotomies
4.14 polynomial classifiers
4.15 radial basis function networks
4.16 universal approximators
4.17 probabilistic neural networks
4.18 support vector machines: the nonlinear case
4.19 beyond the svm paradigm
4.20 decision trees
4.21 combining classifiers
4.22 the boosting approach to combine classifiers
4.23 the class imbalance problem
4.24 discussion
4.25 problems
references
chapter 5 feature selection
5.1 introduction
5.2 preprocessing
5.3 the peaking phenomenon
5.4 feature selection based on statistical hypothesis testing
5.5 the receiver operating characteristics (roc) curve
5.6 class separability measures
5.7 feature subset selection
5.8 optimal feature generation
5.9 neural networks and feature generation/selection
5.10 a hint on generalization theory
5.11 the bayesian information criterion
5.12 problems
references
chapter 6 feature generation i: data transformation and dimensionality reduction
6.1 introduction
6.2 basis vectors and images
6.3 the karhunen-loeve transform
6.4 the singular value decomposition
6.5 independent component analysis
6.6 nonnegative matrix factorization
6.7 nonlinear dimensionality reduction
6.8 the discrete fourier transform (dft)
6.9 the discrete cosine and sine transforms
6.10 the hadamard transform
6.11 the haar transform
6.12 the haar expansion revisited
6.13 discrete time wavelet transform (dtwt)
6.14 the multiresolution interpretation
6.15 wavelet packets
6.16 a look at two-dimensional generalizations
6.17 applications
6.18 problems
references
chapter 7 feature generation ii
7.1 introduction
7.2 regional features
7.3 features for shape and size characterization
7.4 a glimpse at fractals
7.5 typical features for speech and audio classification
7.6 problems
references
chapter 8 template matching
8.1 introduction
8.2 measures based on optimal path searching techniques
8.3 measures based on correlations
8.4 deformable template models
8.5 content-based information retrieval: relevance feedback
8.6 problems
chapter 9 context-dependent classification
9.1 introduction
9.2 the bayes classifier
9.3 markov chain models
9.4 the viterbi algorithm
9.5 channel equalization
9.6 hidden markov models
9.7 hmm with state duration modeling
9.8 training markov models via neural networks
9.9 a discussion of markov random fields
9.10 problems
references
chapter 10 supervised learning: the epilogue
10.1 introduction
10.2 error-counting approach
10.3 exploiting the finite size of the data set
10.4 a case study from medical imaging
10.5 semi-supervised learning
10.6 problems
references
chapter 11 clustering: basic concepts
11.1 introduction
11.2 proximity measures
11.3 problems
references
chapter 12 clustering algorithms i: sequential algorithms
12.1 introduction
12.2 categories of clustering algorithms
12.3 sequential clustering algorithms
12.4 a modification of bsas
12.5 a two-threshold sequential scheme
12.6 refinement stages
12.7 neural network implementation
12.8 problems
references
chapter 13 clustering algorithms ii: hierarchical algorithms
13.1 introduction
13.2 agglomerative algorithms
13.3 the cophenetic matrix
13.4 divisive algorithms
13.5 hierarchical algorithms for large data sets
13.6 choice of the best number of clusters
13.7 problems
references
chapter 14 clustering algorithms iii: schemes based on function optimization
14.1 introduction
14.2 mixture decomposition schemes
14.3 fuzzy clustering algorithms
14.4 possibilistic clustering
14.5 hard clustering algorithms
14.6 vector quantization
14.7 problems
references
chapter 15 clustering algorithms iv
15.1 introduction
15.2 clustering algorithms based on graph theory
15.3 competitive learning algorithms
15.4 binary morphology clustering algorithms (bmcas)
15.5 boundary detection algorithms
15.6 valley-seeking clustering algorithms
15.7 clustering via cost optimization (revisited)
15.8 kernel clustering methods
15.9 density-based algorithms for large data sets
15.10 clustering algorithms for high-dimensional data sets
15.11 other clustering algorithms
15.12 combination of clusterings
15.13 problems
references
chapter 16 cluster validity
16.1 introduction
16.2 hypothesis testing revisited
16.3 hypothesis testing in cluster validity
16.4 relative criteria
16.5 validity of individual clusters
16.6 clustering tendency
16.7 problems
references
appendix a hints from probability and statistics
appendix b linear algebra basics
appendix c cost function optimization
appendix d basic definitions from linear systems theory
index