adarsh-rai-secure
💭
Generationally locked in

Pinned

  1. adversarial-ml-attacks (Public)

    Testing adversarial ML attacks (data poisoning, targeted misclassification, and model extraction) and discussing the defensive trade-offs they force in real deployments.

    Jupyter Notebook · ★ 1
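A minimal sketch of the data-poisoning idea the repo tests: flip a fraction of training labels and compare model accuracy against a clean baseline. The dataset, model, and 30% flip rate here are illustrative assumptions, not the repo's actual experiment.

```python
# Label-flipping data-poisoning sketch (illustrative assumptions throughout).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Poison: flip 30% of the training labels, then retrain.
y_poison = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poison[idx] = 1 - y_poison[idx]
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poison).score(X_te, y_te)

print(f"clean acc={clean_acc:.3f}, poisoned acc={poisoned_acc:.3f}")
```

Comparing the two accuracies is the simplest way to quantify a poisoning attack's impact; targeted misclassification and model extraction need attack-specific metrics instead.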

  2. pharma-enterprise-threat-modeling (Public)

    Designing a full Cyber Threat Intelligence & Fusion Center program for a global pharmaceutical enterprise, including attack surface analysis, OSINT collection, STRIDE/PASTA threat modeling, insider…

  3. model-drift-detection (Public)

    Detects concept and model drift in DNS traffic using ML, analyzes attack-recall collapse, raises an alarm when recall drops below a threshold, and compares retraining feasibility in a SOC detection environment.

    Jupyter Notebook
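One common way to detect the kind of distribution drift this repo targets is a two-sample Kolmogorov–Smirnov test between a training-time reference window and live traffic. The "DNS query-length" feature, the synthetic shift, and the p-value threshold below are assumptions for illustration, not the repo's actual pipeline.

```python
# Hypothetical drift-alarm sketch using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.normal(50, 5, 2000)  # e.g., DNS query-length feature at training time
drifted = rng.normal(65, 5, 2000)   # same feature after a distribution shift

def drift_alarm(reference, live, p_threshold=0.01):
    """Flag drift when the KS test rejects 'same distribution' at p_threshold."""
    stat, p = ks_2samp(reference, live)
    return p < p_threshold, stat

alarm, stat = drift_alarm(baseline, drifted)
print(f"drift alarm={alarm}, KS statistic={stat:.2f}")
```

In a SOC setting this check would run per feature on sliding windows, with the alarm feeding the recall-monitoring and retraining-decision steps the description mentions.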

  4. biometric-breach-privacy-regulatory-analysis (Public)

    Privacy-first analysis of biometric data breaches examining regulatory gaps, consumer harm, AI-driven fraud, and post-quantum cryptographic risk.

  5. flight-delay-forecasting (Public)

    Predicting flight delays across U.S. regions using the U.S. Bureau of Transportation Statistics (BTS) dataset.

    Jupyter Notebook · ★ 1
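A delay-forecasting baseline along these lines can be sketched as a regression on schedule features. The synthetic stand-ins for BTS fields (month, carrier, distance), the toy delay signal, and the random-forest model below are assumptions, not the repo's actual feature set or approach.

```python
# Illustrative flight-delay regression sketch on synthetic BTS-style features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
month = rng.integers(1, 13, n)          # scheduled month
carrier = rng.integers(0, 10, n)        # carrier id (noise feature here)
distance = rng.uniform(100, 2500, n)    # route distance in miles

# Toy target: delays worsen in winter months and on longer routes, plus noise.
delay = 5 * np.isin(month, [12, 1, 2]) + 0.004 * distance + rng.normal(0, 3, n)

X = np.column_stack([month, carrier, distance])
X_tr, X_te, y_tr, y_te = train_test_split(X, delay, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(f"R^2 on held-out data: {r2:.2f}")
```

With real BTS data the same shape applies: engineer schedule/route features, hold out recent months for testing, and report held-out error rather than training fit.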

  6. gpt-vs-claude-agent-security-benchmark (Public)

    Benchmarked GPT and Claude tool agents across finance, legal, and network audits to evaluate accuracy, hallucination risk, and AI decision reliability.

    ★ 1