VEXA-ASE · Specialist
VEXA Certified AI Security Engineer
Security testing for LLM apps, RAG systems and AI pipelines.

What you will learn
- Prompt injection, jailbreaks and model abuse patterns.
- RAG-specific risks and data exfiltration techniques.
- Designing safer prompts, guardrails and monitoring.
- Communicating AI security risks to engineering and leaders.
Syllabus overview
Module 1 · AI Threat Landscape
- LLM basics and attack surfaces
- Lab: basic prompt injection attacks
Module 2 · RAG & Data Risks
- Indirect prompt injection and poisoning
- Lab: attacking a RAG demo app
Module 3 · Defences & Guardrails
- Input validation and model routing patterns
- Lab: hardening an AI demo pipeline
Module 4 · AI Security Review
- Risk assessment templates and checklists
- Capstone: security review for a sample app
Requirements & assessment
- Basic security or IT knowledge recommended.
- Laptop with a modern browser and ability to run labs.
- Commitment to complete labs, quizzes and a final capstone project.
Certification & digital badge
Complete all mandatory labs and the final assessment to receive your verified VEXA-ASE digital badge, ready to share on LinkedIn and CVs.