Machine Learning: PCA for Image Processing
References
- A Step-by-Step Explanation of Principal Component Analysis (PCA)
- In Depth: Principal Component Analysis
- Principal Component Analysis: In-depth understanding through image visualization
- A One-Stop Shop for Principal Component Analysis
- Understanding Concepts behind PCA
- Principal Component Analysis in 3 Simple Steps
- Implementing a Principal Component Analysis (PCA) – in Python, step by step
- The correct way to load sklearn's official datasets
- An end-to-end comprehensive guide for PCA
- Computing the signal-to-noise ratio of color images in Python
- PCA derivation
Organizing information in principal components this way allows you to reduce dimensionality without losing much information: you discard the components that carry little information and treat the remaining components as your new variables.
An important thing to realize here is that the principal components are less interpretable and don’t have any real-world meaning, since they are constructed as linear combinations of the initial variables.
Geometrically speaking, principal components represent the directions of the data that explain a maximal amount of variance, that is to say, the lines that capture most of the information in the data. The relationship between variance and information is that the larger the variance carried by a line, the larger the dispersion of the data points along it, and the larger the dispersion along a line, the more information it carries. Put simply, think of principal components as new axes that provide the best angle from which to view and evaluate the data, so that the differences between the observations are most visible.
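To make this concrete, here is a minimal sketch (assuming NumPy and scikit-learn are available; the toy 2-D data and all variable names are purely illustrative) showing that the first principal direction carries most of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

# Build a small 2-D dataset whose two features are strongly correlated,
# so most of the variance lies along a single direction.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = np.column_stack([x, 0.6 * x + rng.normal(scale=0.3, size=200)])

pca = PCA(n_components=2).fit(data)

# Each row of components_ is a principal direction (a unit vector);
# explained_variance_ratio_ shows how much of the total variance
# each direction captures.
print("directions:\n", pca.components_)
print("variance ratios:", pca.explained_variance_ratio_)
```

For data like this, the first direction accounts for the large majority of the variance, which is exactly the sense in which the first principal component is the line along which the data is most spread out.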
How does PCA work?
We are going to calculate a matrix that summarizes how our variables all relate to one another.
We’ll then break this matrix down into two separate components: direction and magnitude. We can then understand the “directions” of our data and its “magnitude” (or how “important” each direction is).
We will transform our original data to align with these important directions (which are combinations of our original variables).
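The following is a minimal NumPy sketch of these three steps under the standard PCA recipe (the toy data and variable names are illustrative, not from the original article): the covariance matrix, its decomposition into directions (eigenvectors) and magnitudes (eigenvalues), and the projection of the data onto those directions.

```python
import numpy as np

# Toy data: 100 samples with 3 features (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# Step 1: summarize how the variables relate to one another
# with the covariance matrix of the centered data.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# Step 2: break that matrix into directions (eigenvectors) and
# magnitudes (eigenvalues). eigh returns eigenvalues in ascending
# order, so sort them so the most important direction comes first.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Step 3: transform the original data to align with these directions.
X_transformed = X_centered @ eigenvectors
```

Keeping only the first few columns of X_transformed is what the next paragraph describes as projecting the data into a smaller space.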
While the visual example here is two-dimensional (and thus we have two “directions”), think about a case where our data has more dimensions. By identifying which “directions” are most “important,” we can compress or project our data into a smaller space by dropping the “directions” that are the “least important.” By projecting our data into a smaller space, we’re reducing the dimensionality of our feature space.
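As a hedged end-to-end sketch of that compression step, the snippet below uses scikit-learn's PCA on the 8×8 digit images from sklearn.datasets (the dataset and the 95% variance threshold are assumptions chosen for illustration, not prescribed by the article):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Each digit image is an 8x8 grid, flattened to 64 features.
X = load_digits().data
print("original shape:", X.shape)       # (1797, 64)

# Keep only as many directions as needed to retain ~95% of the
# variance; the least important directions are dropped.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print("reduced shape:", X_reduced.shape)

# The reduced representation can be mapped back to 64 dimensions,
# losing only the variance carried by the discarded components.
X_restored = pca.inverse_transform(X_reduced)
```

Varying n_components trades reconstruction quality against the size of the compressed representation, which is the trade-off at the heart of using PCA for image processing.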