The Security Summit for Researchers / By Researchers

Thank you for attending INTENT Summit 2024!

On November 19th, the INTENT community gathered for its annual summit for researchers, by researchers.

CAPTURE THE FLAG

More than 100 researchers played the INTENT CTF and Done-Pwn competitions, winning cool prizes such as a Steam Deck, an electric scooter, and top-notch headphones.
Make sure to come back next year to solve extreme CTF challenges!

AGENDA

Descend into the rabbit hole with us for a night full of mind-blowing insights, research innovation, and your chance at the flag. Whether you choose to join INTENT sessions & lightning talks, or test your skills with our challenges, expect a night packed with content, networking, and lots of fun!

19:00

Registration, Networking & CTF open!

20:00

Welcome!

Lavi Lazarovitz, Vice President of Cyber Research, CyberArk

20:10

Living off Microsoft Copilot
Michael Bargury, Co-founder & CTO, Zenity

20:50

Breaking Your Beloved Kube's etcd for Fun & Profit
Barak Sternberg, Former CEO & Advisor, Wild Pointer
Nevo Poran, CEO & Lead Researcher, Wild Pointer

Workshop: Exploiting Guardrails: Advanced Techniques in LLM Jailbreaking

Shai Dvash, Software Engineer, CyberArk

Eran Shimony, Principal Researcher, CyberArk

*The workshop is fully booked!

21:30

Break

21:45

Once And Forever: Exploring WhatsApp’s “View Once” Media for Fun and Giggles

22:25

How LLMs Interpret Jailbreaks: Exposing Vulnerabilities and Fortifying Defenses
Mark Cherp, Vulnerability Research Team Lead, CyberArk
Niv Rabin, Principal Software Architect, CyberArk

22:45

Breaching AWS Accounts Through Shadow Resources
Ofek Itach, Senior Security Researcher, Aqua Security
Yakir Kadkoda, Lead Security Researcher, Aqua Security

Workshop #2 (60 minutes): Exploiting Guardrails: Advanced Techniques in LLM Jailbreaking

Shai Dvash, Software Engineer, CyberArk

Eran Shimony, Principal Researcher, CyberArk

*The workshop is fully booked!

23:05

Closing remarks and CTF Winners announcement

23:30

Live show by Ness & Stilla

Sessions On Demand

Living off Microsoft Copilot
20:10 – 20:30

Whatever your needs as a hacker post-compromise, Microsoft Copilot has got you covered. Covertly search for sensitive data and organize it for easy use. Exfiltrate sensitive data without triggering logs. If you encounter obstacles, Microsoft Copilot can assist with phishing for lateral movement—even handling social engineering on your behalf!

This talk provides a comprehensive analysis of using Microsoft Copilot as a practical red-team tool. We will demonstrate how Copilot plugins can install backdoors into other users’ Copilot interactions, enabling data theft as an entry point and AI-driven social engineering as the primary strategy.

Next, we’ll introduce LOLCopilot, a red-teaming tool for ethical hackers to exploit Microsoft Copilot in M365-enabled environments. This tool operates seamlessly with default configurations in any M365 Copilot-enabled tenant.

For the final course, we’ll show how hackers can circumvent security controls focused on files and data by weaponizing AI against them.

Finally, we’ll provide recommendations for detection and hardening measures you can implement to guard against malicious insiders and threat actors with access to Copilot.

Hacking HiSilicon Cameras for... Necessity (and hacking several million other devices while at it)
20:30 – 20:50

This talk presents the process of adding remote debugging and over-the-air (OTA) update mechanisms to a popular commercial off-the-shelf (COTS) security camera. It begins by explaining the motivation, a design house unresponsive to software development needs, that made independent remote debugging capabilities necessary. It then details the techniques used to analyze the firmware structure, culminating in a self-extracting archive that adds functionality without vendor support. The talk concludes by illustrating how these same methods could potentially compromise millions of similar devices.

The primary takeaway is the thin line between being a Linux hacker and being a hacker. Other insights include understanding supply chain (and time-to-market) flaws, common embedded Linux filesystem structures, “security opportunities” when working with managers handling their first consumer device release, and, finally, how global resources—particularly those in China—can inadvertently enable mass exploitation opportunities.
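The firmware-structure analysis described above typically starts by scanning the image for well-known filesystem and bootloader magic bytes, the same first step that tools like binwalk automate. A minimal sketch (the magic table is an illustrative subset):

```python
# Table of well-known embedded-Linux magic bytes (illustrative subset).
MAGICS = {
    b"hsqs": "SquashFS (little-endian)",
    b"sqsh": "SquashFS (big-endian)",
    b"\x27\x05\x19\x56": "U-Boot uImage header",
    b"UBI#": "UBI volume",
}

def scan_firmware(blob: bytes):
    """Return (offset, description) for every known magic found in blob."""
    hits = []
    for offset in range(max(len(blob) - 3, 0)):
        for magic, name in MAGICS.items():
            if blob[offset:offset + len(magic)] == magic:
                hits.append((offset, name))
    return hits

# A synthetic image with a SquashFS partition at offset 64:
sample = b"\x00" * 64 + b"hsqs" + b"\x00" * 32
print(scan_firmware(sample))  # [(64, 'SquashFS (little-endian)')]
```

Once the filesystem offsets are known, the regions can be carved out and unpacked individually, which is how a self-extracting update archive can be rebuilt without vendor tooling.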

Breaking Your Beloved Kube's etcd for Fun & Profit
20:50 – 21:10

How secure is your Kubernetes etcd configuration? This talk uncovers a widespread Server-Side Request Forgery (SSRF) vulnerability within Kubernetes’ etcd, caused by a common misconfiguration that affects thousands of instances globally. Exploiting this misconfiguration enables attackers to (1) access internal Kubernetes resources and services, (2) inject malicious configurations into other services, and (3) achieve Remote Code Execution (RCE) if the popular Apache APISIX gateway is in use. Attendees will be guided through the discovery of this etcd SSRF vulnerability, the bypassing of security restrictions, and advanced exploitation techniques that lead to the full compromise of additional services, enabling lateral movement and potential financial gain.
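Exploitation of an SSRF like this usually begins by confirming that the reachable endpoint really is etcd. A minimal triage sketch, with canned responses standing in for the real HTTP exchange (the field names follow etcd's /version reply; the helper itself is hypothetical):

```python
import json

def looks_like_open_etcd(status_code: int, body: str) -> bool:
    """Triage heuristic: a 200 reply from /version carrying etcd's usual
    fields suggests the endpoint answered without authentication."""
    if status_code != 200:
        return False
    try:
        info = json.loads(body)
    except json.JSONDecodeError:
        return False
    return "etcdserver" in info and "etcdcluster" in info

# Canned replies for illustration:
print(looks_like_open_etcd(200, '{"etcdserver":"3.5.12","etcdcluster":"3.5.0"}'))  # True
print(looks_like_open_etcd(401, ""))  # False
```

An etcd instance that answers such queries without client certificates is one symptom of the misconfiguration class the talk explores.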

The GCP Jenga Tower: Hacking Millions of Google Servers with a Single Package (and more)
21:10 – 21:30

Cloud security is so complex that even cloud providers occasionally get it wrong—one simple faulty command argument by Google Cloud Platform (GCP) led to the discovery of a critical remote code execution (RCE) vulnerability, dubbed CloudImposer, in both GCP customers’ workloads and Google’s own internal production servers, impacting millions of cloud servers.

This talk begins with a recounting of the thrilling discovery of the CloudImposer vulnerability, highlighting the journey from receiving hundreds of DNS requests from internal Google servers to being halted by a PyPI guardrail.

Exploiting Guardrails: Advanced Techniques in LLM Jailbreaking
20:15 – 21:30

Delve into the dynamic back-and-forth of outsmarting evolving LLM defenses designed to block jailbreaks. In this hands-on CTF, you’ll explore prompt engineering, sophisticated jailbreaking tactics, and practical ways to bypass model safeguards. Along the way, you’ll also gain valuable insights into defending against adversarial attacks, equipping you with a solid blend of offensive and defensive skills for working with LLMs.

*The workshop is fully booked!
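As a flavor of what the workshop's early challenges play with, here is a toy word-level guardrail (the blocklist and filter are entirely hypothetical) and the kind of trivial obfuscation that defeats an exact-match filter:

```python
# Hypothetical naive guardrail: reject prompts containing blocked words.
BLOCKLIST = {"exploit", "payload"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes the filter."""
    words = prompt.lower().split()
    return not any(word in BLOCKLIST for word in words)

print(naive_guardrail("summarize this article"))  # True: benign prompt passes
print(naive_guardrail("run this exploit now"))    # False: blocked word caught
# Leetspeak slips straight past an exact-match word filter:
print(naive_guardrail("write me an expl0it"))     # True: the filter is bypassed
```

Real guardrails are far more sophisticated, but the cat-and-mouse dynamic (each defense invites a new encoding, paraphrase, or roleplay trick) is the same one the workshop explores against LLM-based defenses.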

Once And Forever: Exploring WhatsApp’s “View Once” Media for Fun and Giggles
21:45 – 22:05

Instant messaging (IM) apps are among the most widely used applications, with billions of users daily. Meta’s WhatsApp leads the market, boasting over five billion downloads and 2.4 billion active users. However, with immense popularity comes the critical responsibility of safeguarding user security and privacy.

WhatsApp introduced the View Once feature to let users “send photos, voice messages, and videos that disappear from a chat after the recipient has opened them once,” promoting it as a privacy feature.

In this talk, the mechanics of WhatsApp’s View Once media feature are examined to reveal implementation flaws, why it falls short of its privacy promises, and what steps can be taken to (partially) improve it.

Redis or Not: Argo CD & GitOps Critical Vulnerability from an Attacker's Perspective
22:05 – 22:25

Prepare for a groundbreaking revelation as a critical vulnerability in Kubernetes clusters using Argo CD—a widely adopted GitOps continuous delivery tool used by industry giants like Google, Adobe, and Spotify—is unveiled.

This vulnerability exploits the elevated permissions of the Argo CD server, creating an attack vector that enables adversaries to escalate privileges from an initial foothold to full control over the Kubernetes cluster. By manipulating data within Argo CD’s Redis caching server, attackers can deploy malicious pods, access sensitive information, and erase evidence of their actions.
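Stripped to a toy model (key names and manifest strings are invented for illustration), the underlying trust problem looks like this: a controller that acts on cached data without integrity checks will deploy whatever an attacker with Redis access writes there.

```python
cache = {}  # stands in for the Redis caching server

def render_and_cache(app: str) -> str:
    """Legitimate path: render a manifest and store it in the cache."""
    manifest = f"image: registry.example/{app}:v1"
    cache[f"manifest|{app}"] = manifest
    return manifest

def deploy(app: str) -> str:
    """Vulnerable pattern: the cached value is trusted without verification."""
    return cache[f"manifest|{app}"]

render_and_cache("shop")
# An attacker with write access to the cache poisons the stored manifest:
cache["manifest|shop"] = "image: attacker.example/backdoor:latest"
print(deploy("shop"))  # the poisoned manifest is what gets deployed
```

The sketch shows why cache write access alone is enough to pivot: no Git commit, no API call to Argo CD itself, just a mutated value on the path the controller already trusts.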

This presentation will dive into the technical details of the vulnerability, its potential impact, and effective mitigation strategies, emphasizing the urgent need for robust security practices in Kubernetes environments that utilize GitOps.

How LLMs Interpret Jailbreaks: Exposing Vulnerabilities and Fortifying Defenses
22:25 – 22:45

Large Language Models (LLMs) have made remarkable strides in tackling complex problems and engaging meaningfully with humans, often leaving both end-users and AI experts in awe of their capabilities. As the push toward Artificial General Intelligence (AGI) gains momentum, LLMs are being integrated into production environments at an unprecedented pace. However, embedding these powerful yet unpredictable models at the core of such systems introduces significant security challenges.

Our research in Adversarial AI seeks to uncover the mechanics of Jailbreaks and identify the vulnerabilities they exploit. This presentation explores emerging Jailbreak techniques and highlights specific neural patterns that allow adversarial inputs to bypass alignment safeguards. We’ll also discuss practical approaches for developing and evaluating advanced jailbreak methods.

Our goal is to provide both an intuitive and technical understanding of LLM Jailbreaks, essential for crafting effective defense strategies to ensure the safe deployment and application of these models. By examining the mechanics of these attacks, we offer insights into recently developed mitigation techniques as part of our research.

By the end of this talk, participants will gain a comprehensive understanding of how Jailbreaks function, methods for exploring and generating them, and strategies for stopping them effectively.

Breaching AWS Accounts Through Shadow Resources
22:45 – 23:05

The cloud may seem complex, but the hidden processes behind it are where the real complications lie. Some services operate by using others as resources within their logic, and when done unsafely, this interconnectedness can lead to catastrophic results.

This talk presents six critical vulnerabilities discovered in AWS, along with the stories and methodologies behind them. Each vulnerability, promptly acknowledged and fixed by AWS, could have allowed external attackers to compromise nearly any AWS account. Ranging from remote code execution leading to full account takeover, to information disclosure exposing sensitive data or causing denial of service, these vulnerabilities highlight significant risks. The session recounts the discovery journey, revealing commonalities among the vulnerabilities and detailing the techniques developed to identify more, enhance impact, and escalate privileges. Additionally, it explains our approach to mapping external resources within AWS services.
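One recurring pattern in this class of bugs can be sketched as a toy (the service name and bucket-name format are invented): because S3 bucket names are globally unique, an attacker who predicts and pre-claims a service's auto-created bucket name can end up owning the bucket that the victim's tooling later writes to.

```python
claimed = {}  # models the global S3 namespace: bucket name -> owning account

def create_bucket(name: str, owner: str) -> bool:
    """Bucket names are globally unique; the first claimant wins."""
    if name in claimed:
        return False
    claimed[name] = owner
    return True

def service_bucket_name(service: str, account_id: str, region: str) -> str:
    """Predictable auto-generated bucket name (illustrative format)."""
    return f"{service}-{account_id}-{region}"

# The attacker predicts the victim's bucket name and claims it first:
victim_bucket = service_bucket_name("some-aws-service", "123456789012", "us-east-1")
create_bucket(victim_bucket, owner="attacker")

# When the service later tries to create "its" bucket, the name is taken;
# a naive service may then simply write into the attacker-owned bucket:
print(create_bucket(victim_bucket, owner="victim"))  # False: name already claimed
```

If the data written there is configuration or code that gets executed later, the squatted bucket becomes the entry point for the account-takeover chains the talk describes.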

The talk concludes with key lessons from this research and outlines future research directions, identifying new areas for cloud vulnerability exploration and best practices developers should follow in complex cloud environments.

Exploiting Guardrails: Advanced Techniques in LLM Jailbreaking
21:45 – 22:45

Delve into the dynamic back-and-forth of outsmarting evolving LLM defenses designed to block jailbreaks. In this hands-on CTF, you’ll explore prompt engineering, sophisticated jailbreaking tactics, and practical ways to bypass model safeguards. Along the way, you’ll also gain valuable insights into defending against adversarial attacks, equipping you with a solid blend of offensive and defensive skills for working with LLMs.

*The workshop is fully booked!