7. DIMENSIONALITY REDUCTION (CONTEXT-DEPENDENT - Seventh Priority) (1/5)
├── Linear Methods (3/5)
│ ├── Principal Component Analysis (PCA) (4/5)
│ │ ├── sklearn.decomposition.PCA
│ │ └── ✓ Projects data onto orthogonal components ordered by retained variance
│ ├── Linear Discriminant Analysis (LDA) (3/5)
│ │ ├── sklearn.discriminant_analysis.LinearDiscriminantAnalysis
│ │ └── ✓ Maximizes class separability, supervised
│ ├── Independent Component Analysis (ICA) (2/5)
│ │ ├── sklearn.decomposition.FastICA
│ │ └── ✓ Separates multivariate signal into independent components
│ └── Factor Analysis (2/5)
│   ├── sklearn.decomposition.FactorAnalysis
│   └── ✓ Explains variance using a smaller number of latent factors
│
├── Non-Linear Methods (3/5)
│ ├── t-Distributed Stochastic Neighbor Embedding (t-SNE) (4/5)
│ │ ├── sklearn.manifold.TSNE
│ │ └── ✓ Best for visualization, preserves local structure
│ ├── UMAP (Uniform Manifold Approximation and Projection) (4/5)
│ │ ├── umap.UMAP (package umap-learn)
│ │ └── ✓ Faster than t-SNE, good for visualization and general embedding
│ ├── Kernel PCA (2/5)
│ │ ├── sklearn.decomposition.KernelPCA
│ │ └── ✓ Non-linear PCA via the kernel trick
│ └── Autoencoders (3/5)
│   ├── tensorflow.keras.models.Sequential
│   ├── torch.nn.Module
│   └── ✓ Neural network for learning compressed data representation
│
└── Sparse Methods (2/5)
  ├── Sparse PCA (2/5)
  │ ├── sklearn.decomposition.SparsePCA
  │ └── ✓ PCA with sparse components, improves interpretability
  └── Dictionary Learning (2/5)
    ├── sklearn.decomposition.DictionaryLearning
    └── ✓ Learns a dictionary of sparse components
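A minimal sketch of the PCA entry above, assuming scikit-learn and NumPy are installed; the data is synthetic, with one deliberately correlated feature pair so the first component dominates:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 samples, 5 features; feature 1 is nearly a scaled copy of feature 0
X = rng.normal(size=(200, 5))
X[:, 1] = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)          # project onto the top 2 components

print(X_2d.shape)                    # reduced data: (200, 2)
print(pca.explained_variance_ratio_) # fraction of variance per component
```

Because the components are orthogonal and ordered by variance, inspecting `explained_variance_ratio_` (or its cumulative sum) is the usual way to choose `n_components`.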
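For the non-linear branch, a t-SNE sketch using scikit-learn's `TSNE`; the digits dataset and the 200-sample subsample are illustrative choices (t-SNE scales poorly with sample count):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X = X[:200]  # subsample: t-SNE is expensive on large n

# perplexity trades off local vs. global neighborhood size;
# it must be smaller than the number of samples
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
X_emb = tsne.fit_transform(X)

print(X_emb.shape)  # 2-D embedding for plotting: (200, 2)
```

Note t-SNE has no `transform` for unseen data; it only embeds the set it was fit on, which is why it is listed as a visualization tool rather than a general-purpose reducer (UMAP does support transforming new points).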
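For the sparse branch, a Sparse PCA sketch on synthetic data; `alpha` (the L1 penalty weight, here the default-like 1.0) is the knob that drives component loadings to exact zeros, which is what improves interpretability:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # illustrative random data

# alpha controls sparsity: larger alpha -> more zero loadings
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
X_sp = spca.fit_transform(X)

print(X_sp.shape)            # transformed data: (100, 3)
print(spca.components_.shape)  # loadings: (3, 10), many entries exactly 0
```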