Skip to content
View adarsh-rai-secure's full-sized avatar

Block or report adarsh-rai-secure

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Maximum 250 characters. Please don't include any personal information such as legal names or email addresses. Markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse

Popular repositories Loading

  1. secml-unsupervised-anomaly-detection secml-unsupervised-anomaly-detection Public

    Unsupervised anomaly detection model trained on process level endpoint telemetry (BETH dataset) and Isolation Forests to study malicious events detection, false positives, and SOC implementation.

    Jupyter Notebook

  2. secml-adversarial-ml-attacks secml-adversarial-ml-attacks Public

    Builds and evaluates adversarial ML attacks (data poisoning, targeted misclassification, and model extraction) and discusses defensive tradeoffs for real deployments.

    Jupyter Notebook

  3. secml-model-drift-detection secml-model-drift-detection Public

    Detects concept and model drift in DNS traffic using ML, analyzes attack recall collapse, engages alarm for threshold drop, and compares retraining feasibility in a SOC detection environment.

    Jupyter Notebook

  4. secml-llm-secure-coding-review secml-llm-secure-coding-review Public

    Iterative LLM-assisted code review on a CLI program, tracking how prompts change code quality, robustness, and security posture across versions.

    Python

  5. secml-llm-prompt-rag-attacks secml-llm-prompt-rag-attacks Public

    Evaluates LLM safety failure modes across prompt attacks, context overflow, and RAG poisoning.

    Jupyter Notebook