- 🤖 AI Enthusiast | Multi-language Coder | Tech Explorer
- 🧠 Interests: Machine Learning · Neural Networks · LLMs · Generative AI · Computer Vision · Natural Language Processing
- 🌐 Tech Stack: Python · JavaScript/TypeScript · Rust · C++ · Java · Go · Web
- 🎯 Always learning something new... and breaking it 😄
Here are some of the key AI algorithms I work with:
- Linear Regression - Fits a linear model to predict continuous target values (a worked gradient-descent sketch appears after this list).
- Logistic Regression - Used for classification tasks.
- Decision Trees - Used for classification and regression tasks.
- Random Forest - Ensemble method combining decision trees for better performance.
- Support Vector Machines (SVM) - Used for classification and regression, especially in high-dimensional spaces.
- k-Nearest Neighbors (k-NN) - A simple, instance-based learning algorithm for classification and regression.
- Naive Bayes - A probabilistic classifier based on Bayes' theorem with a strong feature-independence assumption.
- Gradient Boosting Machines (GBM) - Ensemble method that builds models sequentially, each one correcting the errors of the previous.
- XGBoost/LightGBM - Popular implementations of gradient boosting techniques.
- Principal Component Analysis (PCA) - Used for dimensionality reduction (sketched after this list).
- Neural Networks - Used for a wide range of tasks, including classification, regression, and generation.
- Convolutional Neural Networks (CNNs) - Mainly used for image processing and computer vision (a minimal sketch appears after this list).
- Recurrent Neural Networks (RNNs) - Used for sequence-based data, such as time series or text.
- Long Short-Term Memory (LSTM) - A type of RNN that handles long-term dependencies better.
- Transformer Networks - The backbone of state-of-the-art NLP models (e.g., GPT, BERT).
- Autoencoders - Used for unsupervised learning and data compression.
- Generative Adversarial Networks (GANs) - A deep learning framework used for generating new data samples.
- Q-learning - A model-free reinforcement learning algorithm that learns action values for decision making (a toy example appears after this list).
- Deep Q Networks (DQN) - Combines Q-learning with deep neural networks.
- Policy Gradient Methods - Algorithms like REINFORCE that optimize the policy directly.
- Bag of Words (BoW) - A simple method for representing text data.
- TF-IDF - Term Frequency-Inverse Document Frequency, a weighting scheme that scores terms by how distinctive they are within a corpus (used in the text-classification sketch after this list).
- Word2Vec - A method for learning word embeddings.
- BERT - Bidirectional Encoder Representations from Transformers, a transformer-based NLP model.
- GPT - Generative Pre-trained Transformer, used for language generation tasks.
- Named Entity Recognition (NER) - A technique used to identify entities like names and locations in text.
- Image Classification - Identifying objects in an image using CNNs.
- Object Detection - Detecting and localizing objects in images using algorithms like YOLO (You Only Look Once).
- Semantic Segmentation - Classifying each pixel of an image into a class (e.g., human, sky).
- Generative Models for Image Creation - Using GANs for generating realistic images.
- Gradient Descent - An iterative optimization algorithm for minimizing a function.
- Stochastic Gradient Descent (SGD) - A variant of gradient descent that updates on small batches of data, making it practical for large datasets.
- Genetic Algorithms - Search heuristics inspired by natural selection.
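
To make the list a bit more concrete, here is a minimal NumPy sketch of linear regression fit with plain gradient descent, tying together the regression and optimization entries above. The toy data, learning rate, and iteration count are all made up for illustration.

```python
import numpy as np

# Toy data: y = 3x + 2 plus noise (synthetic, for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.1, size=200)

# Add a bias column so the intercept is learned as just another weight.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.zeros(Xb.shape[1])

lr = 0.1
for _ in range(500):
    preds = Xb @ w
    grad = 2 * Xb.T @ (preds - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                          # gradient descent step

print("learned (slope, intercept):", w)     # roughly [3.0, 2.0]
```

The same loop becomes SGD if you compute the gradient on a random mini-batch instead of the full dataset each step.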
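A similarly small sketch of PCA via the SVD. The synthetic data and the choice of two components are arbitrary, just to show the mechanics of centering, projecting, and reading off explained variance.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components using the SVD."""
    X_centered = X - X.mean(axis=0)              # center each feature
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]               # principal directions
    explained_var = (S[:n_components] ** 2) / (len(X) - 1)
    return X_centered @ components.T, components, explained_var

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))  # correlated features
Z, comps, var = pca(X, n_components=2)
print(Z.shape, var)   # (300, 2) plus the variance captured by each component
```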
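A minimal convolutional network for image classification, assuming PyTorch is available. The layer sizes and the 28x28 grayscale input are placeholder choices (MNIST-sized), not a tuned architecture.

```python
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
dummy = torch.randn(4, 1, 28, 28)   # a fake batch of 4 images
print(model(dummy).shape)           # torch.Size([4, 10]) -- one logit per class
```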
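A toy tabular Q-learning loop on a hypothetical five-state corridor environment defined inline; the hyperparameters (alpha, gamma, epsilon) and episode count are arbitrary, chosen only so the example runs quickly.

```python
import numpy as np

# Deterministic 1-D world: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 gives reward 1 and ends the episode. A toy setup, not a benchmark.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(2)

def step(state, action):
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(500):                      # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy exploration
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(np.argmax(Q[:-1], axis=1))   # greedy policy for non-terminal states: typically all 1s (move right)
```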
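Finally, a short text-classification sketch combining TF-IDF features with logistic regression, assuming scikit-learn is installed. The four-sentence corpus and its labels are invented purely to show the shape of the pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus: 1 = positive review, 0 = negative review.
texts = [
    "loved this movie, great acting",
    "fantastic film and a wonderful story",
    "terrible plot, waste of time",
    "boring and poorly acted",
]
labels = [1, 1, 0, 0]

# TF-IDF turns raw text into weighted term vectors; logistic regression classifies them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["what a wonderful story"]))   # likely [1] (positive)
```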
"Code is not just instructions. It's imagination turned into logic."