Abstract
Most of this book concerns supervised learning methods such as regression and classification. In the supervised learning setting, we typically have access to a set of p features \(X_{1},X_{2},\ldots,X_{p}\), measured on n observations, and a response Y also measured on those same n observations. The goal is then to predict Y using \(X_{1},X_{2},\ldots,X_{p}\).
Notes
1. On a technical note, the principal component directions \(\phi_1, \phi_2, \phi_3, \ldots\) are the ordered sequence of eigenvectors of the matrix \(\mathbf{X}^T\mathbf{X}\), and the variances of the components are the eigenvalues. There are at most \(\min(n-1, p)\) principal components.
2. The prcomp() function names this matrix rotation because, when we matrix-multiply the \(\mathbf{X}\) matrix by pr.out$rotation, we obtain the coordinates of the data in the rotated coordinate system. These coordinates are the principal component scores.
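The two notes above can be verified numerically. The book's labs use R's prcomp(); the sketch below is a minimal NumPy analogue on simulated data, where phi plays the role of pr.out$rotation and Z the role of the score matrix pr.out$x. The data, variable names, and dimensions are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))      # simulated data: n = 50 observations, p = 3 features
X = X - X.mean(axis=0)            # center each column, as prcomp() does by default

# Note 1: the principal component directions are the ordered eigenvectors of X^T X
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]           # sort by decreasing eigenvalue
phi = eigvecs[:, order]                     # loading vectors; analogue of pr.out$rotation

# Note 2: multiplying X by the rotation matrix gives the coordinates of the
# data in the rotated coordinate system, i.e. the principal component scores
Z = X @ phi                                 # analogue of pr.out$x

# Sample variances of the scores correspond to the eigenvalues (up to the 1/(n-1) factor)
print(np.allclose(Z.var(axis=0, ddof=1), eigvals[order] / (X.shape[0] - 1)))  # → True
```

With centered data there are at most \(\min(n-1, p) = 3\) components here, and the columns of phi form an orthonormal basis, so the rotation preserves distances between observations.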
Copyright information
© 2013 Springer Science+Business Media New York
About this chapter
Cite this chapter
James, G., Witten, D., Hastie, T., Tibshirani, R. (2013). Unsupervised Learning. In: An Introduction to Statistical Learning. Springer Texts in Statistics, vol 103. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-7138-7_10
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-7137-0
Online ISBN: 978-1-4614-7138-7
eBook Packages: Mathematics and Statistics (R0)