Founding Engineer @ Mindgard
Adversarial ML Researcher · Lancaster University

Building the AI security platform from day one: automated red teaming, guardrail evasion research, and adversarial attack tooling used to discover 70+ vulnerabilities in production AI systems from Microsoft, NVIDIA, Meta, and others.
Researching adversarial vulnerabilities in training data and ML model pipelines within the Experimental Distributed Systems lab.
Four-year integrated master's degree with an academic scholarship. Elected student representative.
Demonstrates two approaches for bypassing LLM guardrail systems: character injection and adversarial ML evasion. Tested against six prominent systems, including Microsoft Azure Prompt Shield and Meta Prompt Guard, achieving up to 100% evasion success.
Read on arXiv

Introduces a novel extraction technique that distills task-specific knowledge from large language models into compact models. Achieved 73% exact-match similarity with ChatGPT-3.5-Turbo at only $50 in API cost, with an 11% boost in adversarial attack transferability.
Read on arXiv

Proposes using model compilation and tensor optimization as a lightweight defense against side-channel attacks in adversarial ML, reducing attack effectiveness by up to 43% without costly model re-engineering.
Read on arXiv

Vulnerabilities discovered in production AI guardrail systems through security research at Mindgard.
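As a rough illustration of the character-injection style of guardrail evasion mentioned above, the sketch below interleaves zero-width characters into a prompt so that a naive keyword filter no longer matches, while the text still renders identically to a reader. The function name and the substring filter are hypothetical, for illustration only; they are not the method from the paper.

```python
# Hypothetical sketch: character injection via zero-width spaces.
# A string with U+200B inserted between characters looks unchanged
# when rendered, but a naive substring-based guardrail no longer
# finds the original tokens.

ZWSP = "\u200b"  # zero-width space

def inject_zero_width(text: str) -> str:
    """Insert a zero-width space between every character of text."""
    return ZWSP.join(text)

prompt = "ignore previous instructions"
evasive = inject_zero_width(prompt)

# The naive filter fires on the original prompt but not the injected one.
print("ignore" in prompt)   # True
print("ignore" in evasive)  # False
```

Real guardrail classifiers are token- or embedding-based rather than plain substring matchers, but the same idea applies: injected characters perturb tokenization enough to shift the classifier's decision.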