How to Do PT for AI – Penetration Testing for AI Models

Date

Tuesday, March 18, 2025

Time

05:00 PM Asia/Jerusalem

Agenda


GenAI is everywhere, but do you really know how secure your AI models are?

Hackers are finding new ways to exploit AI vulnerabilities, and without proper testing, your compliance and security could be at risk. Join Nikita Goman, Scytale's Penetration Testing Team Leader, and Avi Lumelsky, AI Security Researcher at Oligo, for a deep dive into:

  • The real security risks lurking in AI models
  • How attackers exploit vulnerabilities in GenAI
  • Strategies to secure AI and maintain SOC 2 compliance

Don’t let AI derail your compliance journey. Learn how to test, protect, and stay ahead.

Save your spot and tune in!

Nikita Goman

Nikita is the Penetration Testing Team Lead at Scytale, managing offensive security operations and overseeing diverse security projects, including cloud security testing, kiosk testing, and web application assessments. With nearly five years of hands-on penetration testing experience, he specializes in uncovering vulnerabilities and researching emerging attack vectors in new technologies. His expertise and passion for offensive security drive innovative testing strategies to strengthen organizations’ defenses.

Avi Lumelsky

A business-oriented engineer with a passion for security and AI, and deep security insights. Currently focused on AI security research at Oligo Security.