The Security Summit for Researchers / By Researchers

It’s time to declare your INTENT.

25–26 November • Petah Tikva

CyberArk offices, Hapsagot 9, Petah Tikva, Israel

INTENT is where the global cyber research community comes to connect, create, and challenge the status quo. Built by researchers, for researchers, this Petah Tikva-based summit dives deep into AI threats, offensive security, and hands-on innovation. It’s a high-energy, research-first experience that draws hundreds of passionate pros ready to shape what’s next in cyber defense.

INTENT 2025 – Register Now!

CAPTURE THE FLAG

Researchers Are Coming from Afar to Capture Our Flag!

Join our colossal CTF to test and hone your skills and come away with knowledge only other researchers have to offer. We’ll throw relevant, thought-provoking challenges at you that will make this flag one worth fighting for.

Join us if you think you can hack it!
  • When? November 25–26, 5:00 PM – 8:30 PM
  • Where? CyberArk offices, Hapsagot 9, Petah Tikva. There will be parking on site, plus a shuttle from/to Kiryat Arye train station.

AGENDA

Descend into the rabbit hole with us for a night full of mind-blowing insights, research innovation, and your chance at the flag. Whether you choose to join INTENT sessions or test your skills with our challenges, expect a night packed with content, networking and lots of fun!

November 25, Petah Tikva

17:00

Registration & Dinner

18:00

Welcome!

Lavi Lazarovitz, Vice President of Cyber Research, CyberArk

18:10

Michael Bargury, Co-founder and CTO, Zenity

18:40

Inga Cherny, Security Researcher, Cato Networks

19:10

Tomer Agayev, Staff Security Researcher, Cato Networks

Workshop:

Shai Dvash, Software Engineer, CyberArk

Eran Shimony, Principal Researcher, CyberArk

*The workshop is fully booked!

19:40

Break

20:00

Avi Lumelsky, AI Security Researcher, Oligo
Gal Elbaz, Co-founder & CTO, Oligo Security

20:30

Aluma Shaari, Security Researcher, ActiveFence
Vladi Krasner, Director of AI Security, ActiveFence

Workshop:

Shai Dvash, Software Engineer, CyberArk

Eran Shimony, Principal Researcher, CyberArk

*The workshop is fully booked!

21:00

Closing remarks

21:10

Event End

November 26, Petah Tikva

17:00

Registration & Dinner

18:00

Welcome!

Lavi Lazarovitz, Vice President of Cyber Research, CyberArk

18:40

Kobi Ben-Naim, Co-founder and CEO, malanta.ai

19:10

Liora Itkin, Security Researcher, CardinalOps

Workshop:

Shai Dvash, Software Engineer, CyberArk

Eran Shimony, Principal Researcher, CyberArk

*The workshop is fully booked!

19:40

Break

20:00

Tal Skverer, Head of Research, Astrix Security

20:30

Chen Shiri, Cyber Security Researcher, Accenture Security

Workshop:

Shai Dvash, Software Engineer, CyberArk

Eran Shimony, Principal Researcher, CyberArk

*The workshop is fully booked!

21:00

Closing remarks

21:10

Event End

Become a Sponsor

INTENT brings together the world’s leading cyber and AI researchers who are shaping the future of cyber defense. As a sponsor, your brand will be front and center – where strategies are shared, partnerships are built, and industry reputations are made.
Sponsorship opportunities are limited – secure your spot now!

Submit your request to Sponsorships@cyberark.com

Key Deadlines
August 19

Early Bird Deadline (Enjoy a 5% Discount)

September 18

Final Sponsorship Commitments

Pwn the Enterprise – Thank You AI!
18:10 – 18:40

Compromising a well-protected enterprise used to require careful planning, proper resources, and the ability to execute. Not anymore! Enter AI.

Initial access? AI is happy to let you operate on its users’ behalf. Persistence? Self-replicate through corp docs. Data harvesting? AI is the ultimate data hoarder. Exfil? Just render an image. Impact? So many tools at your disposal. There’s more. You can do all this as an external attacker. No credentials required, no phishing, no social engineering, no human-in-the-loop. In-and-out with a single prompt.

Last year at Black Hat USA, we demonstrated the first real-world exploitation of AI vulnerabilities impacting enterprises, living off Microsoft Copilot. A lot has changed in the AI space since… for the worse. AI assistants have morphed into agents. They read your search history, emails and chat messages. They wield tools that can manipulate the enterprise environment on behalf of users – or a malicious attacker once hijacked. We will demonstrate access-to-impact AI vulnerability chains in most flagship enterprise AI assistants: ChatGPT, Gemini, Copilot, Einstein, and their custom agents. Some require one bad click by the victim, others work with no user interaction – 0click attacks.

The industry has no real solution for fixing this. Prompt injection is not another bug we can fix. It is a security problem we can manage! We will offer a security framework to help you protect your organization: the GenAI Attack Matrix. We will compare mitigations set forth by AI vendors, and share which ones successfully prevent the worst 0click attacks. Finally, we’ll dissect our own attacks, breaking them down into basic TTPs, and showcase how they can be detected and mitigated.

Web Poisoning: How LLMs Trust Lies – and How to Stop Them
18:40 – 19:10

Large language models excel at summarizing and synthesizing vast amounts of information, but they have a critical blind spot: they cannot independently verify the credibility of their sources. This session shows how generative AI plus simple web hosting can fabricate companies, studies, or personas in hours, and any web-connected LLM agent will treat them as credible. Learn how zero-knowledge attackers hijack due diligence, spread misinformation, and skew decisions – and walk away with concrete steps to verify sources and lock down AI workflows.

We’ll begin with a live walkthrough of a rapid-fire exploit that uses generative AI tools and a $15 domain to create entirely fabricated entities, companies, scientific studies, and expert profiles – in just a few hours. Once that content is indexed by search engines, any web-connected LLM agent can be tricked into citing it as legitimate.

We’ll then unpack the implications:

  • Attack Surface Expansion: How zero-knowledge threat actors leverage web poisoning for fraud, corporate sabotage, and regulatory manipulation
  • Systemic Vulnerability: Why every domain—finance, healthcare, legal, academia—is at equal risk
  • Trust Erosion: The downstream impact on AI-driven research, due diligence, and decision-making

Finally, we’ll equip defenders with a multi-layered playbook:

  • Source Verification Workflows: Incorporate provenance checks and manual audits into AI-augmented processes
  • Model-Level Mitigations: Filter or flag unverified web content before it influences LLM outputs
  • Organizational Best Practices: Establish AI sourcing policies, incident response playbooks, and continuous monitoring of indexed content

By the end of this session, you’ll understand how trivial it is to poison AI systems at scale, the breadth of industries at risk, and exactly what controls to implement today to reclaim trust in AI-driven intelligence.

Trusted Clouds, Fake CAPTCHAs: How Lumma Stealer Targets Privileged Users
19:10 – 19:40

Threat actors are increasingly abusing legitimate cloud infrastructure to host malicious content, blending in with trusted services to evade detection.

Recent campaigns linked to suspected Russian groups demonstrate this shift, using object storage platforms like Tigris, Oracle Cloud, and Scaleway to host fake reCAPTCHA pages that trick users into executing clipboard-injected PowerShell commands.

These attacks specifically target technically proficient and privileged users, exploiting their access to escalate compromise deeper into enterprise environments.

This novel research traces the evolution of Lumma Stealer, a malware-as-a-service infostealer, from earlier campaigns against gamers through malvertising to its latest delivery through trusted cloud platforms. We will dissect the attack chain, including the use of living-off-the-land binaries (mshta.exe), obfuscation techniques, and manual user interaction to bypass automated defenses. Join this session to gain an insider view of how these campaigns operate, why cloud abuse makes detection harder, and what controls organizations need to protect high-access accounts against this evolving threat.

Exploiting Guardrails: Advanced Techniques in LLM Jailbreaking
20:15 – 21:30

Delve into the dynamic back-and-forth of outsmarting evolving LLM defenses designed to block jailbreaks. In this hands-on CTF, you’ll explore prompt engineering, sophisticated jailbreaking tactics, and practical ways to bypass model safeguards. Along the way, you’ll also gain valuable insights into defending against adversarial attacks, equipping you with a solid blend of offensive and defensive skills for working with LLMs.

*The workshop is fully booked!

A Worm in the Apple – Wormable Zero-Click RCE in AirPlay Impacts Billions of Apple and IoT Devices
20:00 – 20:30

Since its introduction in 2010, AirPlay has transformed the way Apple users stream media. Today, it is integrated into a wide range of devices, including speakers, smart TVs, audio receivers and even automotive systems, making it a key part of the world’s multimedia ecosystem.

In this session, we will share new details about AirBorne – a series of vulnerabilities within Apple’s AirPlay protocol that can compromise Apple devices as well as AirPlay-supported devices that use the AirPlay SDK. These attacks can be carried out over the network and on nearby devices, since AirPlay supports peer-to-peer connections.

Among the AirBorne class of vulnerabilities, there are multiple vulnerabilities that lead to remote code execution, access control bypass, privilege escalation and sensitive information disclosure. When chained together, the vulnerabilities allowed us to fully compromise a wide range of devices from Apple and other vendors.

In this talk, we’ll demonstrate full exploits on three devices: a MacBook, a Bose speaker and a Pioneer CarPlay device. We will reveal, for the first time, the technical details of the Zero-Click RCE vulnerabilities impacting nearly every AirPlay-enabled device, including IoT devices that may take years to update and some that may never be patched.

No Human in the Loop - The New Era of Autonomous Malware
20:30 – 21:00

These days, the term AI seems to appear as every third word in a sentence, so it is only a matter of time before malware evolves to adopt these new methods as well.

In this talk, we’ll explore how the new generation of malware might leverage a wide range of AI capabilities – not just to generate scripts, but also to make decisions, fundamentally changing the techniques we’ve known until now.

What does network propagation look like when the malware decides on its own where to spread? How do you detect persistence mechanisms that change with every execution? And how can defenders hope to win the race when the opponent is no longer human?

Exploiting Guardrails: Advanced Techniques in LLM Jailbreaking
21:45 – 22:45

Delve into the dynamic back-and-forth of outsmarting evolving LLM defenses designed to block jailbreaks. In this hands-on CTF, you’ll explore prompt engineering, sophisticated jailbreaking tactics, and practical ways to bypass model safeguards. Along the way, you’ll also gain valuable insights into defending against adversarial attacks, equipping you with a solid blend of offensive and defensive skills for working with LLMs.

*The workshop is fully booked!

Breaking and Securing LLMs: Evolving Jailbreaks and Mitigation Strategies
18:10 – 18:40

As LLMs become more integrated into applications, understanding and preventing jailbreak attacks is critical. This talk explores cutting-edge techniques for bypassing LLM safeguards and the strategies to defend against them. We’ll start with semantic fuzzing, showcasing how taxonomies and language-disruptive paraphrasing can evolve to defeat alignment. Then, we’ll delve into iterative refinement mechanisms, where multiple LLMs collaborate to create increasingly effective jailbreak prompts.

The session will also cover evaluation methods, including how to numerically distinguish compliance from rejection in LLM outputs. Finally, we’ll present mitigation strategies, highlighting the strengths and limitations of model alignment, external safeguards, LLMs as judges, and hybrid defenses.

Hijacker to the Galaxy
18:40 – 19:10

From a forgotten subdomain to a full-scale organizational breach — this talk takes you on a high-speed journey through the attacker’s playbook. We’ll uncover how subdomain hijacking creates hidden backdoors, how valid certificate forging gives attackers the cloak of trust, and how a single act of cookie theft can spiral into a complete corporate takeover.

Packed with real-world examples and live attack flows, this session reveals how small cracks in the surface can expand into galactic-sized breaches. You’ll leave with a clear view of the attack chain, the tools adversaries use to weaponize trust, and the defense strategies that can keep your organization safe from takeover.

Fast-paced, eye-opening, and rooted in practical security lessons, Hijacker to the Galaxy is a must-attend session for anyone responsible for defending digital infrastructure.

The Shapeshifting Threat: Inside Polymorphic AI Malware
19:10 – 19:40

AI-powered attacks are here, and they’re evolving fast. Nation-state groups like APT28 have already experimented with AI-driven tooling, proving that polymorphic AI malware isn’t just theory – it’s operational reality.

In this talk, we unveil a real-world proof-of-concept: a polymorphic AI keylogger, generated in real time with GPT-4o, that executes fully in memory and mutates on every run to slip past static defenses. Think living malware – constantly rewriting itself to survive.

We’ll walk through how this AI-generated malware interacts with EDR environments, why traditional signatures crumble against it, and most importantly, how defenders can fight back. Blue teamers, SOC analysts, and detection engineers will leave with practical detection strategies to hunt and contain these shape-shifting threats before they become tomorrow’s APT toolkit.

Breakin’ ’Em All – Overcoming Pokémon Go’s Anti-Cheat Mechanism
20:00 – 20:30

It was the summer of 2016, and like everyone else, I was out playing Pokémon Go. Except my rural location barely spawned anything interesting. Naturally, I dove into the game’s code, reverse engineered its protocol, and built a custom Pokémon scanner.

But the story doesn’t end there. One day, a switch was flipped, enabling a fancy new anti-cheating feature that locked out any custom implementations.

In this talk, I’ll begin by exploring how mobile games like Pokémon Go handle communication through specialized protocols—and how I replicated that behavior to build a scanner. Then, I’ll walk you through a 4-day hacking marathon where I teamed up with a group of like-minded enthusiasts to overcome the anti-cheating mechanism that nearly broke our scanners.

We’ll examine how mobile games attempt to thwart such applications, unravelling the anti-cheating mechanism that was deployed by Pokémon Go. We’ll explore how we managed, through obfuscated cryptographic functions, unexpected use of smartphone peripherals and hidden protobuf definitions, to break the anti-cheating system and release a publicly available API for the game’s protocol.

Almost a decade later, the full story is ready to be told. Join me for an inside look at the anti-cheating mechanisms of online mobile games—and how to hack them.

"I Own your Cluster" -Taking over AWS EKS cluster with Chain Attack
20:30 – 21:00
