This repository contains well-structured Jupyter notebooks that implement and demonstrate a variety of classical machine learning algorithms using Python and Scikit-learn. It is designed for learners and practitioners who want to understand how traditional models work and how to apply them effectively to real-world datasets.
| Notebook | Description |
|---|---|
| 01_Perceptron_and_Adaline.ipynb | Implementation of the Perceptron and Adaptive Linear Neuron (Adaline) |
| 02_classification_algorithms.ipynb | Classification models: Logistic Regression, SVM, Decision Trees, KNN |
| 03_data_preprocessing.ipynb | Data cleaning, normalization, encoding, and splitting |
| 04_data_compression.ipynb | Dimensionality reduction using PCA and LDA |
| 05_model_evaluation.ipynb | Model validation and performance metrics |
| 06_ensemble_methods.ipynb | Ensemble methods: Random Forest, Bagging, Boosting, XGBoost |
| 07_regression_analysis.ipynb | Regression using Linear Regression and RANSAC |
| 08_clustering_analysis.ipynb | Unsupervised learning: K-Means, DBSCAN, Hierarchical Clustering |
- **Dimensionality Reduction** (see the sketch after this list)
  - Principal Component Analysis (PCA)
  - Linear Discriminant Analysis (LDA)
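For a quick sense of the difference, here is a minimal scikit-learn sketch, not taken from the notebooks: the Iris dataset and the 2-component setting are illustrative choices. PCA is unsupervised and keeps the directions of maximum variance, while LDA is supervised and keeps the directions that best separate the classes.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale

# PCA: unsupervised, ignores the labels y
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: supervised, uses y to maximize class separability
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)
```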
- **Classification Algorithms** (see the sketch after this list)
  - Logistic Regression
  - Support Vector Machine (SVM)
  - Decision Trees
  - K-Nearest Neighbors (KNN)
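All four classifiers share the same scikit-learn fit/score API, so they are easy to compare side by side. The sketch below is illustrative only; the dataset, split, and hyperparameters are arbitrary choices, not the settings used in the notebooks.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision Tree": DecisionTreeClassifier(max_depth=4),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    accuracy = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: {accuracy:.3f}")
```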
- **Ensemble Methods** (see the sketch after this list)
  - Random Forest
  - Bagging
  - AdaBoost & Gradient Boosting
  - XGBoost
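As a rough sketch of how these ensembles line up under cross-validation (not the notebooks' code; the dataset and estimator counts are arbitrary):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=1),
    "Bagging": BaggingClassifier(n_estimators=100, random_state=1),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=1),
    "Gradient Boosting": GradientBoostingClassifier(random_state=1),
}
for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold CV accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

# XGBoost (a separate install) exposes the same fit/score API
# via xgboost.XGBClassifier and slots into the loop above unchanged.
```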
- **Regression Analysis** (see the sketch after this list)
  - Linear Regression
  - RANSAC Regressor
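The point of pairing these two is outlier robustness: ordinary least squares is pulled toward outliers, while RANSAC repeatedly fits on random subsets and keeps the largest inlier consensus. A minimal sketch on synthetic data (illustrative values throughout, not from the notebooks):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(scale=0.5, size=100)
y[::10] += 30  # inject outliers into every 10th sample

ols = LinearRegression().fit(X, y)             # skewed by the outliers
ransac = RANSACRegressor(random_state=0).fit(X, y)  # fits the inliers

print("OLS slope:   ", ols.coef_[0])
print("RANSAC slope:", ransac.estimator_.coef_[0])  # close to the true 3.0
```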
- **Clustering Algorithms** (see the sketch after this list)
  - K-Means
  - DBSCAN
  - Hierarchical Clustering
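All three clusterers follow scikit-learn's fit_predict pattern; only their assumptions differ (K-Means and hierarchical clustering need a cluster count, DBSCAN instead needs density parameters and can flag noise). A minimal sketch on synthetic blobs; the parameter values here are illustrative:

```python
from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=2)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)  # -1 marks noise
hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

print(set(kmeans_labels), set(dbscan_labels), set(hier_labels))
```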
- **Model Evaluation Techniques** (see the sketch after this list)
  - K-Fold Cross-Validation
  - Confusion Matrix
  - ROC Curve and AUC
  - Precision, Recall, F1 Score
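These techniques fit together in a single workflow: cross-validate on the training split, then report a confusion matrix, precision/recall/F1, and ROC AUC on a held-out test set. A compact sketch, with an arbitrary dataset and classifier standing in for whatever model is being evaluated:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (classification_report, confusion_matrix,
                             roc_auc_score)
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# K-fold cross-validation on the training split only
print(cross_val_score(clf, X_train, y_train, cv=5).mean())

# Confusion matrix plus precision, recall, and F1 on the test set
y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

# ROC AUC needs probability scores, not hard class labels
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```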
- **Tools & Libraries**
  - Python
  - NumPy, Pandas
  - Matplotlib, Seaborn
  - Scikit-learn
  - XGBoost