Live, Hands-on Deep-Dive into LLM Hacking: Prompt Injection, Model Context Protocol and Skills

Webinar Registration

Most of us probably use LLM chat bots every day and more and more of us are being called upon to secure systems that use LLMs.  You’ve probably read about LLM attack scenarios such as prompt injection, but have you actually tried it?  You’ve heard of the difference between system and user prompts and system prompt leakage,  but have you seen actual system prompts?  The answer to the latter is usually no unless you are running your own model.  But we’re going to change that. 

In this real training for free event, I am joined by Joe Brinkley and John McShane from Cobalt.io.  These guys are hard core LLM security pentesters and their enthusiasm for this fast-changing field is terribly infectious.  They have really inspired me to get more interested in this field.  This session is going to take the full 90 minutes and it’s going to be mostly their show.

But before I hand the reigns to Joe and John I’m going take a few minutes to try to level set all of us with some live LLM prompt engineering.  I’m a big believer in getting hands on and helping others get their hands dirty.  And this field more than anything else I’ve seen lately is subject to a lot of theoretical, amorphous talk where you feel like you have at best a foggy notion of what’s being discussed. 

I’m going to show you how to quickly load and run your own small LLM (or SLM?) on your local PC.  Then we are going to give the model some system prompts that explicitly prohibit it from revealing same.  Then we’ll try to trick it into revealing its system prompts and violating other rules.  If we succeed, we’ll modify the prompts to be more resilient. 

If you find that too basic, don’t worry, once Joe and John take over, here’s some of what’s in store:

  • Binary Injection & Fake OS Attacks: How we trick the LLM into thinking it’s a terminal. We’re not just talking text; we’re talking binary-to-shell escapes.
  • MCP JSON-RPC Streams: The "hidden" layer. Using the binary stream to slip prompts past the eyes of a standard monitor.
  • Skills vs. MCP: The "Skill" is the workflow (the brain), while MCP is the tool (the hands). We’ll show how they hand off secrets.
  • System Prompt Weakness & Leakage: Using the Cobalt Strike "System Prompt Check" logic. We’ll demonstrate the leak of a "medical document" from a db backend.
  • Prompt Escape/Injections: Walking through the classic "Ignore all previous instructions" but updated for 2026 agentic workflows.

Cobalt will finish up with a brief presentation about how they modernize offensive security through Pentest as a Service (PTaaS) and their roadmap for responsible AI adoption.

This event is in the best spirit of our “real training for free” model.

First Name:  
Last Name:  
Work Email:  
Phone:
Job Title:
Organization:
Country:  
State:
Company Size:
 

Your information will be shared with the sponsor.

By clicking "Submit", you're agreeing to our Privacy Policy and consenting to be contacted by us and the sponsor.

 

 

Upcoming Webinars
    Additional Resources