Speakers & talks

KEYNOTE
Journey towards the Confidential Cloud
Mark Russinovich
CTO, Microsoft Azure
KEYNOTE
Welcome Session and Introduction to Confidential Computing

Welcome to OC3! In this session, we'll outline the agenda for the day and give an introduction to confidential computing. We'll assess the current state of the confidential-computing space and compare it to last year's. We'll take a look at key problems that still require attention. We'll close with an announcement from Edgeless Systems.

Felix Schuster
Edgeless Systems
cloud-native
Confidential Containers: Bringing Confidential Computing to the Kubernetes Workload Masses

The new Confidential Computing security frontier is still out of reach for most cloud-native applications. The Confidential Containers project aims to close that gap by seamlessly running unmodified Kubernetes pod workloads in their own, dedicated Confidential Computing environments.

Description

Confidential Computing expands the cloud threat model into a drastically different paradigm. In a world where more and more cloud-native applications run across hybrid clouds, no longer having to trust your cloud provider is a very powerful and economically attractive proposition. Unfortunately, current confidential computing cloud offerings and architectures are either limited in scope, intrusive to workloads, or provide node-level isolation only. In contrast, the Confidential Containers open-source project integrates the Confidential Computing security promise directly into cloud-native applications by allowing any Kubernetes pod to run in its own, exclusive trusted execution environment.

This presentation will start by describing the Confidential Containers software architecture. We will show how it reuses components of the hardware-virtualization-based Kata Containers software stack to build confidential micro-VMs for Kubernetes workloads to run in. We will explain how those micro-VMs can transparently leverage the latest Confidential Computing hardware implementations, such as Intel TDX, AMD SEV, or IBM SE, to fully protect pod data while it is in use.

Going into more technical detail, we will walk through several key components of the Confidential Containers software stack, such as the Attestation Agent, the container image management Rust crates, and the Kubernetes operator. Overall, we will show how these components integrate to form a software architecture that verifies and attests tenant workloads, which pull and run encrypted container images on top of encrypted memory only.

The final part of the presentation will cover the project roadmap and where the project wants to go after its initial release. We will then conclude with the mandatory demo of a Kubernetes pod running in its own trusted execution environment, on top of an actual Confidential Computing-enabled machine.
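
For readers who want a concrete picture: from the Kubernetes side, opting a workload into such a confidential micro-VM typically amounts to selecting the corresponding runtime class. The following minimal sketch uses the official Kubernetes Python client; the runtime class name and image are illustrative assumptions, not values prescribed by the Confidential Containers project.

    # Minimal sketch: scheduling a pod onto a Confidential Containers runtime class.
    # The runtime class name "kata-cc" and the image are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="confidential-demo"),
        spec=client.V1PodSpec(
            runtime_class_name="kata-cc",  # hypothetical CoCo runtime class
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/app:latest",  # hypothetical image
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)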

Samuel Ortiz
Apple
Apps & Solutions
Confidential Computing in German E-Prescription Service

Description

Approximately 500 million medical prescriptions are issued, dispensed, and procured each year in Germany. gematik is legally mandated to develop the involved processes into their digital form within its public digital infrastructure (Telematikinfrastruktur). Due to the staged development of these processes, as well as their variable, collaborative nature involving medical professionals, patients, pharmacists, and insurance companies, a centralized approach to data processing was chosen, since it provides adequate design flexibility. In this setup, data protection regulations require any processed medical data to be reliably protected from unauthorized access from within the operating environment of the service. Consequently, the solution is based on Intel SGX as the Confidential Computing technology. This talk introduces the solution, focusing on the trusted computing base, attestation, and availability requirements.

Andreas Berg
Gematik
Apps & Solutions
Proof of Being Forgotten: Verified Privacy Protection in Confidential Computing Platform

For data owners, whether their data has been erased after use is questionable and needs to be proved, even when executing in a TEE. We introduce a security proof that verifies that sensitive data only lives inside the TEE and is guaranteed to be erased after use. We call it proof of being forgotten.

Description

One main goal of Confidential Computing is to guarantee that the security and privacy of data in use are under the protection of a hardware-based Trusted Execution Environment (TEE). The TEE ensures that the content (code and data) inside it is not accessible from outside. However, for data owners, whether their sensitive data has been intentionally or unintentionally leaked by the code inside the TEE is still questionable and needs to be proved. In this talk, we'd like to introduce the concept of Proof of Being Forgotten (PoBF). What PoBF provides is a security proof: enclaves with PoBF can assure users that sensitive data only lives inside an SGX enclave and will be erased after use. By verifying this property and presenting a report with proof of being forgotten to data owners, the complete data lifecycle protected by the TEE can be strictly controlled, enforced, and audited.
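
To make the "forgotten" property tangible, here is a small, purely conceptual sketch in ordinary Python: a secret is used once and then overwritten. PoBF verifies a much stronger, enclave-level version of this guarantee; the snippet is an analogy, not the PoBF interface, and it also hints at why language-level erasure alone is not enough (a runtime may copy the key internally).

    # Conceptual analogy only: use a secret, then erase the buffer that held it.
    # PoBF proves a far stronger, enclave-level property; this is not its API.
    import hmac
    import hashlib

    def use_and_forget(secret: bytearray, message: bytes) -> bytes:
        try:
            # Use the secret (here: compute an HMAC over a message).
            return hmac.new(bytes(secret), message, hashlib.sha256).digest()
        finally:
            for i in range(len(secret)):  # wipe the caller's buffer after use
                secret[i] = 0

    key = bytearray(b"sensitive key material")
    tag = use_and_forget(key, b"payload")
    assert all(b == 0 for b in key)  # the original buffer no longer holds the secret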

Mingshen Sun
Baidu
cloud-native
Kubernetes meets confidential computing - the different ways of scaling sensitive workloads

Cloud-native and confidential computing will inevitably grow together. This talk maps the design space for confidential Kubernetes and shows the latest corresponding developments from Edgeless Systems.

Description

Kubernetes is the most popular platform for running workloads at scale in a cloud-native way. With the help of confidential computing, Kubernetes deployments can be made verifiable and can be shielded from various threats. The simplest approach towards "confidential Kubernetes" is to run containers inside enclaves or confidential VMs. While this simple approach may look compelling on the surface, on closer inspection, it does not provide great benefits and leaves important questions unanswered: How to set up confidential connections between containers? How to verify the deployment from the outside? How to scale? How to do updates? How to do disaster recovery?

In this talk, we will map the solution space for confidential Kubernetes and discuss pros and cons of the different approaches. In this context, we will give an introduction to our open-source tool MarbleRun, which is a control plane for SGX-based confidential Kubernetes. We will show how MarbleRun, in conjunction with our other open-source tools EGo and EdgelessDB, can make existing cloud-native apps end-to-end confidential.

We will also discuss the additional design options for confidential Kubernetes that are enabled by confidential VM technologies like AMD SEV, Intel TDX, or AWS Nitro. In this context, we will introduce and demo our upcoming product Constellation, which uses confidential VMs to create "fully confidential" Kubernetes deployments, in which all of Kubernetes runs inside confidential environments. Constellation is an evolution of MarbleRun that strikes a different balance between ease of use and TCB size.
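
As a rough illustration of one of the questions above, verifying the deployment from the outside ultimately reduces to comparing remotely attested measurements against values the verifier has pinned locally. The sketch below is tool-agnostic; the endpoint path and claim name are assumptions and do not reflect MarbleRun's or Constellation's actual interfaces.

    # Tool-agnostic sketch of verifying a confidential deployment from the outside.
    # Endpoint path and claim names are hypothetical, not a real product API.
    import json
    import urllib.request

    EXPECTED_MEASUREMENT = "9f86d081884c7d65..."  # value pinned by the verifier

    def deployment_is_trustworthy(coordinator_url: str) -> bool:
        with urllib.request.urlopen(coordinator_url + "/attestation") as resp:
            evidence = json.load(resp)
        # A real verifier would first check the quote's signature chain back to
        # the hardware vendor before trusting any claim it contains.
        return evidence.get("measurement") == EXPECTED_MEASUREMENT

    if deployment_is_trustworthy("https://coordinator.example.com"):
        print("Measurement matches the pinned value; safe to connect.")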

Moritz Eckert
Edgeless Systems
Apps & Solutions
SGX-protected Scalable Confidential AI for ADAS Development

Privacy is an important aspect of AI applications. We combine Trusted Execution Environments, a library OS, and a scalable service mesh for confidential computing to achieve these security guarantees for TensorFlow-based inference and training with minimal performance and porting overheads.

Description

Access to data is a crucial requirement for the development of advanced driver-assistance systems (ADAS) based on Artificial Intelligence (AI). However, security threats, strict privacy regulations, and potential loss of Intellectual Property (IP) ownership when collaborating with partners can turn data into a toxic asset (Schneier, 2016): Data leaks can result in huge fines and in damage to brand reputation. An increasingly diverse regulatory landscape imposes significant costs on global companies. Finally, ADAS development requires close collaboration across original equipment manufacturers (OEMs) and suppliers. Protecting IP in such settings is both necessary and challenging.
Privacy-Enhancing Technologies (PETs) can alleviate all these problems by increasing control over data. In this paper, we demonstrate how Trusted Execution Environments (TEEs) can be used to lower the aforementioned risks related to data toxicity in AI pipelines used for ADAS development.

Contributions

The three most critical success factors for applying PETs in the automotive domain are low overhead in terms of performance and efficiency, ease of adoption, and the ability to scale. ADAS development projects are major efforts generating infrastructure costs on the order of tens to hundreds of millions. Hence, even moderate efficiency overheads translate into significant cost overhead. Before the advent of 3rd Generation Intel Xeon Scalable processors (Ice Lake), the overhead of SGX-protected CPU-based training of a TensorFlow model was up to 3-fold compared to training on the same CPU without SGX. In a co-engineering effort, Bosch Research and Intel have been able to effectively eliminate these overheads.

In addition, ADAS development happens on complex infrastructures designed to meet the highest demands in terms of storage space and compute power. Major changes to these systems for implementing advanced security measures would be prohibitive in terms of time and effort. We demonstrate that Gramine’s (Tsai, Porter, & Vij, 2017) lift-and-shift approach keeps the effort for porting existing workloads to SGX minimal. Finally, being able to process millions of video sequences consisting of billions of frames in short development cycles necessitates a scalable infrastructure. By using the MarbleRun (Edgeless Systems GmbH, 2021) confidential service mesh, Kubernetes can be transformed into a substrate for confidential computing at scale.

To demonstrate the validity of our approach, Edgeless Systems and Bosch Research jointly implemented a proof of concept of an exemplary ADAS pipeline using SGX, MarbleRun, and Gramine as part of the Open Bosch venture client program.

Stefan Gehrer
Bosch
Scott Raynor
Intel
Moritz Eckert
Edgeless Systems
low-level magic
AMD Secure Nested Paging with Linux - Development Update

Support for AMD Secure Nested Paging (SNP) in Linux is under heavy development. Work is ongoing to make Linux run as an SNP guest and to host SNP-protected virtual machines. I will explain the key concepts of SNP and talk about the ongoing work and the directions being considered to enable SNP support in the Linux kernel and the higher software layers. I will also talk about proposed attestation workflows and their implementation.

Jörg Rödel
SUSE
Apps & Solutions
Confidential Computing Governance

Regulated institutions have strong business reasons to invest in confidential computing. As with any new technology, governance takes center stage. This talk explores the vast landscape of considerations involved in provably and securely operationalizing Confidential Computing in the public cloud.


Description

Heavily regulated institutions have a strong interest in strengthening protections around data entrusted to public clouds. Confidential Computing is an area that will be of great interest in this context. Securing data in use raises significantly more questions around proving the effectiveness of new security guarantees than securing data in transit or data at rest does.

Curiously, this topic has so far received no attention in the CCC, the IETF, or anywhere else that we're aware of.

This talk will propose a taxonomy of confidential computing governance and break the problem space down into several constituent domains, with requirements listed for each. Supply chain and toolchain considerations, controls matrices, control plane governance, attestation and several other topics will be discussed.

Mark Novak
JPMorgan Chase
low-level magic
Transparent Release Process for Releasing Verifiable Binaries

Binary attestation allows a remote machine (e.g., a server) to attest that it is running a particular binary. However, the other party (e.g., a client) is usually interested in guarantees about properties of the binary. We present a release process that allows checking claims about the binaries.

Description

Project Oak provides a trusted runtime and a generic remote attestation protocol for a server to prove its identity and trustworthiness to its clients. To do this, the server, running inside a trusted execution environment (TEE), sends TEE-provided measurements to the client. These measurements include the cryptographic hash of the server binary signed by the TEE’s key. This is called binary attestation.

However, the cryptographic hash of the binary is not sufficient for making any guarantees about the security and trustworthiness of the binary. What is really desired is semantic remote attestation, which allows attestation to the properties of a binary. However, such approaches are expensive, as they require running checks (e.g., a test suite) during the attestation handshake.

We propose a release process to fill this gap by adding transparency to binary attestation. For transparency, the release process publishes all released binaries in a public and externally maintained verifiable log. Once an entry has been added to the log, it can never be removed or changed. So a client, or any other interested party (e.g., a trusted external verifier or auditor), can find the binary in the verifiable log. Finding the binary in the verifiable log is important for the client, as it gives the client the possibility to detect, with higher likelihood, whether it is interacting with a malicious server. Having a public verifiable log is important because it supports public scrutiny of the binaries.

In addition, we are implementing an ecosystem to provide provenance claims about released binaries. We use SLSA provenance predicates to specify provenance claims. Every entry in the verifiable log corresponding to a released binary contains a provenance claim, cryptographically signed by the team or organization releasing the binary. The provenance claim specifies the source code and the toolchain for building the binary from source. The provenance details allow reproducing server binaries from the source, and verifying (or, more accurately, falsifying) security claims about the binaries by inspecting the source, its dependencies, and the build toolchain.
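
As a small illustration of the consumer side of such a scheme, the sketch below checks that a binary's SHA-256 digest matches a subject digest recorded in an in-toto/SLSA provenance statement. Signature verification and the log inclusion proof are deliberately left out; the field names follow the general in-toto statement layout.

    # Sketch of a consumer-side check: does this binary match a subject digest
    # recorded in an in-toto/SLSA provenance statement? Signature and log
    # inclusion-proof verification are omitted for brevity.
    import hashlib
    import json

    def binary_matches_provenance(binary_path: str, provenance_path: str) -> bool:
        with open(binary_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        with open(provenance_path) as f:
            statement = json.load(f)
        return any(
            subject.get("digest", {}).get("sha256") == digest
            for subject in statement.get("subject", [])
        )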

Razieh Behjati
Google
cloud-native
Project Veraison - Verification of Attestation

OSS Project Veraison builds software components that can be used to create Attestation Verification services required to establish that a CC environment is trustworthy. These flexible & extensible components can be used to address multiple Attestation technologies and deployment options.

Description

Establishing that a Confidential Computing environment is trustworthy requires the process of Attestation. Verifying the evidential claims in an attestation report can be a complex process, requiring knowledge of token formats and access to a source of reference data that may only be available from a manufacturing supply chain.

Project Veraison (VERificAtIon of atteStatiON) addresses these complexities by building software components that can be used to create Attestation Verification services.

This session discusses the requirements for determining that an environment is trustworthy, the mechanisms of attestation, and how Project Veraison brings consistency to the problems of appraising technology-specific attestation reports and connecting to the manufacturing supply chain, where the reference values of what is 'good' reside.
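
At its core, appraisal means comparing the claims extracted from an attestation report against reference values obtained from the supply chain. The toy sketch below illustrates only that core idea; the claim names are invented and do not follow Veraison's actual schemas.

    # Toy illustration of appraisal: measured claims vs. supply-chain reference values.
    # Claim names are invented and do not follow Veraison's schemas.
    REFERENCE_VALUES = {
        "platform-firmware": {"a1b2c3", "d4e5f6"},  # acceptable digests per component
        "boot-loader": {"0011aa"},
    }

    def appraise(evidence_claims: dict) -> bool:
        for component, measurement in evidence_claims.items():
            accepted = REFERENCE_VALUES.get(component)
            if accepted is None or measurement not in accepted:
                return False  # unknown component or unexpected measurement
        return True

    print(appraise({"platform-firmware": "a1b2c3", "boot-loader": "0011aa"}))  # True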

Simon Frost
Arm
Thomas Fossati
Arm
Apps & Solutions
From zero to hero: making Confidential Computing accessible

How can we make Confidential Computing accessible, so that developers of all levels can quickly learn and use this technology? In this session, we welcome three Outreachy interns, who had zero knowledge of Confidential Computing, to showcase what they've developed in just a few months.

Description

Implementing state-of-the-art Confidential Computing is complex, right? Developers must understand how Trusted Execution Environments work (whether they are process-based or VM-based), be familiar with the different platforms that support Confidential Computing (such as Intel's SGX or AMD's SEV), and have knowledge of complex concepts such as encryption and attestation.

Enarx, an open-source project that is part of the Confidential Computing Consortium, abstracts all these complexities and makes it really easy for developers of all levels to implement and deploy applications to Trusted Execution Environments.

The Enarx project partnered with Outreachy, a diversity initiative from the Software Freedom Conservancy, to welcome three interns who had zero knowledge of Confidential Computing. In just a few months, they learned the basics and started building demos in their favorite languages, from simple to more complex.
In this session, they'll have the opportunity to showcase their demos and share what they've learned. Our hope is to demonstrate that Confidential Computing can be made accessible and easy to use for all developers.

Nick Vidal
Profian
cloud-native
Understanding trust relationships for Confidential Computing

Confidential Computing requires trust relationships. What are they, how can you establish them, and what are the possible pitfalls? Our focus will be cloud deployments, but we will look at other environments such as telecom and Edge.

Description

Deploying Confidential Computing workloads is only useful if you can be sure what assurances you have about trust. This requires establishing relationships with various entities, and sometimes deciding that certain entities should not be trusted. Possible entities include:
- hardware vendors
- CSPs
- workload vendors
- open source communities
- independent software vendors (ISVs)
- attestation providers 

This talk will address how and why trust relationships can be established, the dangers of circular relationships, some of the mechanisms for evaluating them, and what they allow when (and if!) they are set up. It describes the foundations for considering when Confidential Computing makes sense, and when you should mistrust the claims of some of those offering it!

Mike Bursell
Profian
low-level magic
Exploring OSS guest firmware for Confidential VMs

As confidential VMs become a reality, trusted components within the guest, such as the guest firmware, become increasingly relevant for the trust and security posture of the VM. In this talk, we will focus on our explorations in building “customer-managed guest firmware” for increased control and auditability of the CVM’s TCB.

Description

Confidential computing developers like flexibility and control over the guest TCB, because that allows managing which components make up the trusted code base. In a VM, these requirements are tricky to meet. In this talk, you will learn how, in Azure, we are enabling new capabilities that help you use a full VM as a Trusted Execution Environment and help your app perform remote attestation with another trusted party in a Linux VM environment with OSS guest firmware options.

Pushkar V. Chitnis
Microsoft
Ragavan Dasarathan
Microsoft
Apps & Solutions
Mystikos Python support with demo of confidential ML inference using PyTorch

In this talk, we present the Mystikos project’s progress on Python programming language support and an ML workflow in a cloud environment that preserves the confidentiality of the ML model and the privacy of the inference data even if the cloud provider is not trusted. In addition, we provide a demo showing how to protect the data using secret keys stored in Azure Managed HSM, how to retrieve the keys from MHSM at run time using attestation, and how to use the keys for decryption. We also demonstrate how an application could add the secret-provisioning capability with simple configuration.

Description

Confidential ML involves many stakeholders: the owner of the input data, the owner of the inference model, the owner of the inference results, etc. Porting ML workloads to Confidential Computing and managing keys and their retrieval into the Confidential Computing ML application securely and confidentially is challenging for users who have a limited understanding of Confidential Computing confidentiality and security. We provide a solution that implements the heavy lifting in the Mystikos runtime: the programming language runtime, the attestation, the encryption/decryption, the key provisioning, etc., so that users only have to convert their Python-based ML applications and configure them with a few lines of JSON. While the demo takes advantage of the Secure Key Unwrap capability of Azure Managed HSM, the solution is based on an open framework that can be extended to other key vault providers.
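
The pattern described above (attest, obtain the key, then decrypt the model inside the TEE) can be summarized in a short sketch. The key-retrieval function below is a placeholder for the attestation-gated release (e.g., secure key release from a managed HSM); it is not a real Mystikos or Azure API.

    # Sketch of the decrypt-after-attestation step. fetch_key_via_attestation() is a
    # placeholder for the attestation-gated key release, not a real Mystikos/Azure API.
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def fetch_key_via_attestation() -> bytes:
        raise NotImplementedError("key is released only after successful attestation")

    def load_model(path: str) -> bytes:
        with open(path, "rb") as f:
            blob = f.read()
        nonce, ciphertext = blob[:12], blob[12:]  # assumed layout: nonce || ciphertext
        key = fetch_key_via_attestation()
        return AESGCM(key).decrypt(nonce, ciphertext, None)  # plaintext model bytes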

Xuejun Yang
Microsoft
Apps & Solutions
Smart Contracts with Confidential Computing for Hyperledger Fabric

Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Intel SGX to protect the integrity and confidentiality of Smart Contracts. This talk is an FPC 101 and will showcase the benefits of Confidential Computing in the blockchain space.

Description

Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Confidential Computing Technology to protect the integrity and confidentiality of Smart Contracts.

In this talk, we will learn what Fabric Private Chaincode is and how it can be used to implement privacy-sensitive use cases for Hyperledger Fabric. The goal of this talk is to equip developers and architects with all the necessary background and first hands-on experience to adopt FPC for their projects.

We start with an introduction to FPC, explaining the basic FPC architecture, security properties, and hardware requirements. We will cover the FPC Chaincode API and application integration using the FPC Client SDK.
The highlight of this talk will be a showcase of a new language support feature for Fabric Private Chaincode using the EGo open-source SDK.

Marcus Brandenburger
IBM Research
Apps & Solutions
Using Secure Ledger Technology to Tackle Compliance and Auditing

Secure ledger technology enables customers who need to maintain a source of truth where even the operator is outside the trusted computing base. Top examples: recordkeeping for compliance purposes and enabling trusted data.

Description

This session will dive into how secure ledgers provide security and integrity to customers in compliance- and auditing-related scenarios, specifically customers who must maintain a source of truth that remains tamper-protected from everyone. We will also discuss how secure ledgers benefit from confidential computing and open source.
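
The tamper evidence that secure ledgers provide can be illustrated with a minimal hash chain, in which every entry commits to its predecessor so that rewriting any record breaks verification of everything after it. Real secure-ledger services add TEE-backed signing and receipts on top; this sketch shows only the core idea.

    # Minimal hash-chain sketch of tamper evidence; real secure ledgers add
    # TEE-backed signing and receipts on top of this basic structure.
    import hashlib
    import json

    def append(ledger: list, record: dict) -> None:
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
        ledger.append({"prev": prev, "record": record,
                       "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(ledger: list) -> bool:
        prev = "0" * 64
        for entry in ledger:
            body = json.dumps({"prev": prev, "record": entry["record"]}, sort_keys=True)
            if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True

    ledger: list = []
    append(ledger, {"event": "compliance-record-1"})
    append(ledger, {"event": "compliance-record-2"})
    assert verify(ledger)
    ledger[0]["record"]["event"] = "tampered"  # any rewrite is detected
    assert not verify(ledger)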

Shubhra Sinha
Microsoft
Apps & Solutions
Unlock the mysteries of data with confidential computing powered by Intel SGX

We all understand that data sovereignty in highly regulated industries like government, healthcare, and fintech is critical, often prohibiting even the most basic data insights because data cannot be moved to a centralized location for collaboration or model training. Confidential computing powered by Intel Software Guard Extensions (Intel SGX) changes all of that. Join us to learn how customers across every industry are gaining insights never before possible.

Laura Martinez
Intel
Apps & Solutions
PCI compliance with Azure confidential computing

Storing payment data in your e-commerce site may expose your business to challenges with PCI compliance. Azure confidential computing provides a platform for protecting your customers’ financial information at scale.

Stefano Tempesta
Microsoft
Apps & Solutions
"PODfidential" Computing - Protecting Workloads with Cloud Native Scale and Agility

Balancing data privacy and runtime protection with the ease and nimbleness of deployments is the reality of the current state of confidential computing.
This talk explores the simplicity of pods and the availability of orchestration for confidential computing, examining the adoption of Kata pod isolation with protected virtualisation.

Description

We discuss the use of Kata pod isolation with protected virtualisation, striving for confidential computing with a cloud-native model while preserving most of the K8s compliance. This talk will summarise the state of the technical discussion in the industry, discuss solutions and open questions, and give a hint at the future of confidential computing with cloud-native models. The speed of adoption of confidential computing will, to a large extent, depend on how easily developers and administrators can incorporate runtime protection into their established technology stack. From use cases to technology demos, the technology team is moving forward.

Stefan Liesche
IBM
James Magowan
IBM
AI
Confidential GenAI as a Service: Building Privatemode

As Generative AI becomes a cornerstone of innovation, ensuring data privacy and intellectual property protection in cloud-based deployments remains a critical challenge. Privatemode addresses this by providing a SaaS platform for Confidential GenAI, leveraging Confidential Computing to deliver end-to-end encryption for inference workloads.

This talk will cover how Privatemode is built on Contrast and Confidential Containers to deliver a secure and scalable platform for GenAI as a service. We will explain how these technologies enable confidential orchestration, ensure workload attestation, and solve challenges like integrating GPUs securely. Key features of Privatemode, such as provider exclusion, support for cutting-edge open-source LLMs, and compatibility with OpenAI’s API, will also be discussed.

Real-world use cases will demonstrate Privatemode in action, from privacy-compliant AI services for public administration to confidential LLM-as-a-Service deployments for academic research. We will also touch on ongoing advancements, such as multi-GPU support and retrieval-augmented generation (RAG) capabilities, showcasing how Privatemode continues to evolve to meet diverse requirements.

This session provides insights into building practical, confidential GenAI solutions and demonstrates how Confidential Computing can enable leveraging GenAI’s potential without compromising confidentiality.
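
Since the service is OpenAI API-compatible, existing client code should mostly need a different base URL. The sketch below uses the openai Python package; the base URL, model name, and the assumption of a local encrypting proxy are illustrative, not Privatemode's documented values.

    # Sketch of calling an OpenAI-compatible endpoint. Base URL, model name, and the
    # local-proxy assumption are illustrative, not Privatemode's documented values.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # hypothetical local encrypting proxy
        api_key="placeholder-key",
    )

    response = client.chat.completions.create(
        model="open-source-llm",  # placeholder model identifier
        messages=[{"role": "user", "content": "Summarize this confidential report."}],
    )
    print(response.choices[0].message.content)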

Felix Schuster
CEO
Edgeless Systems
Apps & Solutions
Confidential Computing: Preventing Fraud through Secure Industry Collaboration

Year after year, the volume of fraudulent activity in payments continues to rise, prompting banks to ramp up their investments in prevention and compliance measures to safeguard the integrity of the financial system. Recognizing the imperative for industry-wide collaboration, Swift, as a member-driven cooperative, is spearheading efforts to mitigate the impact of fraud through innovative approaches.

In this presentation, we will showcase Swift's groundbreaking initiative to drive industry collaboration in fraud reduction. Leveraging its unparalleled network and community data, Swift is pioneering a foundation model for anomaly detection with unprecedented accuracy and speed. Central to this endeavor is Swift's strategic integration of confidential computing and verifiable attestation, ensuring the highest standards of security and privacy in data and AI collaboration.

By partnering with key technology vendors and leading an Industry Pilot Group comprising the world's largest banks, Swift is tackling some of the toughest challenges that have plagued the industry for decades. This collaborative effort underscores the recognition that no single entity possesses all the answers, but together, industry stakeholders can forge solutions that benefit all.

Attendees will gain invaluable insights into Swift's holistic approach to combating fraud, and how confidential computing serves as a linchpin in enabling secure collaboration among industry players. Join us to discover how this work is championing a global, inclusive economy that prioritizes the interests of end-customers, while maintaining the highest standards of security and privacy.

Johan Bryssinck
Swift
Ecosystem & foundations
Advancements in Azure Confidential Computing: Technical Innovations and Real-World Applications

This session will provide an in-depth look at the latest advancements in Azure Confidential Computing, focusing on technical innovations around AMD SEV-SNP, Intel TDX, and confidential Nvidia GPUs and their real-world applications. Attendees will gain insights into the practical benefits and challenges faced during deployment, with examples from both within and outside Azure. Additionally, we will present technical demos ranging from confidential VMs and containers to confidential AI, including Retrieval-Augmented Generation (RAG) and Azure ML-based confidential AI inferencing. We will also explore how Azure Confidential Computing integrates with Microsoft Cloud for Sovereignty (MCFS) to address regulatory and compliance needs. This session will highlight how Azure's confidential computing capabilities ensure data privacy and integrity throughout the application lifecycle, making it possible to develop secure and private applications.

Vikas Bhatia
Microsoft
Ecosystem & foundations
Ubuntu Core - An Immutable OS for Securing Confidential Virtual Machines and Empowering ISVs to Build Trusted Appliances

Confidential Virtual Machines (CVMs) introduce new security paradigms, necessitating operating systems that are specifically architected to support their unique threat models. Ubuntu Core, Canonical’s immutable, containerized Linux OS, offers an ideal platform for CVMs, addressing both security and operational resilience at scale.

This presentation will delve into the key technical attributes that make Ubuntu Core well-suited for CVMs. We will highlight its architecture, which is built around an immutable, read-only base system, ensuring that the operating environment remains secure from tampering. Through atomic updates, Ubuntu Core ensures that system changes are applied consistently, preventing issues that could arise from partial updates or system corruption. The use of strict application isolation via containerization mitigates the risk of cross-application vulnerabilities, while enabling fine-grained control over what software can access system resources. This approach also enables CVMs to maintain their integrity even in hostile environments where physical access is limited.

Drawing parallels with IoT devices, which share similar constraints such as remote deployment and lack of manual intervention, we will explain how Ubuntu Core’s self-healing capabilities, secure boot, and seamless rollback features offer a critical advantage for CVM environments. The OS is designed to be highly composable, allowing individual system components (such as the kernel, applications, and system services) to be independently updated or rolled back, ensuring minimal disruption and high uptime in sensitive environments.

We will also discuss Ubuntu Core's runtime integrity support, including the use of signed snaps and encrypted channels, which provide real-time verification of system and application integrity. This is essential for CVMs, where runtime integrity and protection from unauthorized access are paramount.

By leveraging Ubuntu Core’s hardened architecture, CVMs can achieve robust security, predictable system behavior, and operational flexibility, making it the ideal foundation for running confidential workloads in both cloud and edge environments. This talk will cover the technical foundations of Ubuntu Core’s suitability for CVMs, demonstrating its potential to meet the unique demands of secure virtualized environments.

Ijlal Loutfi
Canonical
Ecosystem & foundations
Intel TDX Connect: Understanding Goals, Lifecycle, and Ecosystem Integration

In the evolving field of confidential computing, Intel's TDX Connect stands out as a transformative framework, enabling seamless integration of Trusted Execution Environments (TEEs) with diverse computing devices. This session provides an end-to-end overview of TDX Connect, exploring its goals, operational lifecycle, and user-centric features such as Trusted Domain Operating System (TD OS) enablement and driver/application integration. Attendees will also learn about Intel's contributions to Linux upstreaming and collaborations with Cloud Service Providers (CSPs) to drive adoption. Join us to discover how TDX Connect is advancing secure computing and unlocking new possibilities for confidential data processing.

Arie Aharon
Intel
Shalini Sharma
Intel
Attestation
Attested End-to-End Encrypted Channel with Noise Protocol

Trusted Execution Environments in combination with open-source and reproducible builds provide transparency by relying on reviewers to analyze the Trusted Computing Base (TCB). The size of the TCB directly influences the speed at which new releases and bug fixes are deployed to production, since reviews can be time consuming. Hence, low-TCB environments enable transparency and trust to scale.

Crucial security features required to implement a trusted workload, such as end-to-end encryption, can significantly expand the TCB. For example, more general approaches like TLS, with an extensive feature set including support for many signature schemes, certificate parsing, and session-resumption logic, may be less than ideal for low-TCB environments.

To address this issue, we will present a remote attestation scheme that uses the Noise Protocol Framework to create an end-to-end encrypted, attested channel between an end-user device and a TEE. The Noise Protocol Framework allows minimizing the number of cryptographic primitives required to establish an encrypted session bound to the attestation evidence.
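
A common way to tie attestation evidence to such a channel is channel binding: the TEE embeds a hash derived from the handshake (or from its static public key) in the report data of its attestation evidence, and the client checks that it matches the session it actually negotiated. The sketch below shows only this binding idea with standard-library hashing; it is not the concrete protocol presented in the talk.

    # Conceptual sketch of channel binding between an encrypted session and
    # attestation evidence; this is not the talk's concrete protocol.
    import hashlib

    def report_data_for_session(handshake_hash: bytes) -> bytes:
        # Value the TEE would embed in its attestation evidence for this session.
        return hashlib.sha256(b"attested-channel-binding" + handshake_hash).digest()

    def client_accepts(evidence_report_data: bytes, local_handshake_hash: bytes) -> bool:
        # The client recomputes the binding from its own view of the handshake and
        # only trusts the channel if the attested value matches.
        return evidence_report_data == report_data_for_session(local_handshake_hash)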

Ivan Petrov
Google DeepMind
Katsiaryna Naliuka
Google DeepMind
Attestation
Meaningful Attestations from Supply Chain to the World

The IETF RATS working group has been designing a standard format (CoRIM) for attestation endorsements so that their behavior in any compliant Attestation Verification Service is fully determined. This talk will discuss the appraisal model we've come up with and how its composable design accommodates a model of endorsement syndication to build the global web of trust, from supply chain to service operation governance.

Joshua Krstic
Google
Dionna Glaze
Google
AI
The Future of AI is Confidential

Artificial intelligence (AI) is rapidly transforming our world, but it also presents new challenges for data security and privacy. How can we ensure these increasingly powerful models and the sensitive data they rely on are protected throughout their lifecycle?

Join us to explore how confidential computing, powered by Intel CPUs, provides a critical answer. We'll delve into practical applications of confidential computing for securing AI workloads, including how confidential computing ensures privacy and confidentiality, how confidential computing solutions are being used to build and deploy secure AI applications, and how confidential computing can protect sensitive data without sacrificing the performance needed, even for demanding AI tasks.

Join us to unlock the power of confidential AI and learn how to build secure, privacy-preserving AI solutions for the future.

Anand Pashupathy
Intel
Nelly Porter
Google
Ecosystem & foundations
Advancing Secure Processing with FUJITSU-MONAKA and Arm Confidential Compute Architecture

In an era where sensitive data is increasingly processed in distributed environments, the need to ensure its protection during processing is critical. The growing reliance on Arm CPUs, known for their energy efficiency and versatility, further underscores the demand for robust and scalable security frameworks. Addressing this challenge, Arm Confidential Compute Architecture (Arm CCA) represents a transformative approach, leveraging hardware-supported encryption and dynamic memory isolation to secure data in use.

This talk explores the evolving landscape of secure processing with a focus on Arm CCA, its security model, practical tools, and real-world applications in domains such as healthcare, finance, and government. The session examines Arm CCA's Realm environment, which enhances traditional systems like TrustZone by offering advanced secure memory isolation. Using tools like QEMU Virt and FVP, we will demonstrate how developers can prepare for upcoming Armv9-A architecture chips and integrate these capabilities into modern workloads.

The session also highlights safeguarding applications in Realm VMs against malicious hypervisors and ensuring system integrity through remote attestation. By examining a real-world threat scenario, we’ll showcase the potential of confidential computing in mitigating risks and securing workloads in privacy-sensitive contexts.

FUJITSU-MONAKA, Fujitsu’s next-generation 2nm low-power Arm processor, incorporates Arm CCA to redefine secure processing for future hardware systems. Join us to gain technical insights on leveraging Arm CCA to meet the growing need for secure, energy-efficient computing.

This presentation is based on results obtained from a project, JPNP21029, subsidised by the New Energy and Industrial Technology Development Organisation (NEDO).

Darshan Patel
Fujitsu Research
Tatsuya Kitamura
Fujitsu
Attestation
Remote Attestation for NVIDIA Hopper and Blackwell GPUs, CPUs and Beyond

This session delves into NVIDIA’s latest advancements in confidential computing on NVIDIA systems and beyond, focusing on key updates to the Attestation Services that expand platform support: single- and multi-GPU attestation, switch and network attestation, and upcoming capabilities being released this year. Attendees will discover how these advancements fortify security by enabling relying parties to securely validate claims, while also learning about optimized deployment strategies, supported usage patterns, and SLA benchmarks for NVIDIA’s cloud-based services, ensuring robust, scalable solutions for developers and system integrators.

Rob Nertney
NVIDIA
AI
Secure Deployment of Hugging Face Models in Trusted Execution Environments Using Evidence-API and Confidential AI Loader

We present a comprehensive framework for securely deploying Hugging Face models, particularly large language models (LLMs), within Trusted Execution Environments (TEEs). The framework encompasses the entire process from evidence collection to attestation, ensuring robust protection of AI models. Central to this approach is the Confidential AI Loader, which encrypts models prior to loading them into memory, thereby safeguarding them within the TEE. The preprocessing steps involve generating an AES GCM key, encrypting the AI model, uploading the encrypted model to a model server, and registering the encryption key with a Key Broker Service (KBS) that interfaces with a Key Management Service (KMS). The architecture facilitates the seamless integration of encrypted models and the Hugging Face project into a container, enabling secure execution within the TEE. This methodology ensures that AI models are protected throughout their lifecycle, from preprocessing to deployment, leveraging TEEs to maintain confidentiality and integrity. https://github.com/cc-api/confidential-huggingface-runner.
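
The preprocessing steps listed above (generate an AES-GCM key, encrypt the model, register the key with the KBS) map to a short script along the following lines; the KBS-registration call is a placeholder and does not reflect the project's actual interface.

    # Sketch of the preprocessing side: generate an AES-GCM key, encrypt the model
    # file, register the key with a KBS. register_key_with_kbs() is a placeholder.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def register_key_with_kbs(key_id: str, key: bytes) -> None:
        raise NotImplementedError("performed against the deployment's KBS/KMS")

    def encrypt_model(model_path: str, out_path: str, key_id: str) -> None:
        key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        with open(model_path, "rb") as f:
            plaintext = f.read()
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        with open(out_path, "wb") as f:
            f.write(nonce + ciphertext)  # assumed layout: nonce || ciphertext
        register_key_with_kbs(key_id, key)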

Wenhui Zhang
ByteDance
Attestation
Attestation Everywhere: The Path Towards Making Attestation Comprehensible and Standards-Based

Attestation for Confidential Computing presents a data management problem that spans an entire industry. Multiple entities need to both produce and consume data at various stages in the supply chain, creating many points of interaction across trust boundaries.

To address the complexity of this landscape, we first need to make it understandable. The RATS architecture is a great starting point. But better yet is to have working software and hands-on experiences. Arm and Linaro have been collaborating on an end-to-end experimental platform for attestation, based on components from the Veraison project. We will present a demonstration of this work.

The next step is to reduce the fragmentation of solutions in open-source projects that are built for production. The RATS group works extensively to create alignment around data formats and interaction models. We present some recent work in the Confidential Containers project as a case study for how these are being adopted.

In the final part of our presentation, we will look at some future work that focuses on the distribution of endorsements and reference values within the RATS framework. We will show how the Veraison project can once again be the essential proving ground for these next steps towards a harmonised ecosystem for attestation.

Paul Howard
Arm
Kevin Zhao
Linaro
Ecosystem & foundations
COCONUT-SVSM: Architecture, Advancements, and Future Directions for Secure CVM Services

This talk provides a technical deep dive into the COCONUT-SVSM project, a platform designed to provide secure services for Confidential Virtual Machines. We'll explore the project's architecture, detail significant advancements made over the last year, and discuss current challenges in securing CVM services. The talk will conclude with an overview of planned developments and the project roadmap for the coming year.

Jörg Rödel
SUSE
Ecosystem & foundations
OpenHCL, the New Open-Source Paravisor

Microsoft has embraced a different approach that offers flexibility to customers through the use of a “paravisor”. A paravisor executes within the confidential trust boundary and provides the virtualization and device services needed by a general-purpose operating system (OS), enabling existing VM workloads to execute securely without requiring continual servicing of the OS to take advantage of innovative advances in confidential computing technology. Microsoft developed the first paravisor in the industry (and gave a talk about it at OC3 two years ago), and for years we have been enhancing the paravisor offered to Azure customers. This effort now culminates in the release of a new, open-source paravisor called OpenHCL. We will talk about our journey to get here, our goals, and future milestones for this new project.

Chris Oo
Microsoft
Apps & Solutions
Confidential Computing Solution to Implement Digital Healthcare Transformation Mandate

Major German health insurance companies and healthcare providers need a solution to support the country's electronic patient record (ePA) system. Given the sensitivity of private medical information, the technology infrastructure must ensure data security for 50 to 60 million citizens. IBM worked with Edgeless Systems to enable Confidential Computing at scale with support from Intel Xeon processors and Intel SGX.

Ram Pai
IBM Cloud
Rachel Wan
IBM
Ecosystem & foundations
Making Confidential Containers Accessible: The Story of Contrast

Confidential Containers are a key enabler of confidential cloud-native workloads, but integrating them into existing environments can be complex. Contrast addresses this challenge by simplifying the adoption of Confidential Containers, offering a practical, Kubernetes-native solution for Confidential Computing.

In this talk, we will outline how Contrast adds a Confidential Computing layer to existing Kubernetes platforms, enabling policy-based workload attestation, secrets management, and a service mesh for mTLS-based workload authentication without disrupting existing workflows. We will also discuss Contrast’s architecture and its compatibility with hybrid environments, making it suitable for both cloud and bare-metal deployments.

The presentation will highlight real-world use cases and in-production deployments, including securing AI workloads, protecting sensitive financial data, and protecting sensitive information in hostile environments like military battlefields. We’ll also provide a hands-on demo of how Contrast enables confidential application deployment with minimal effort. This session will offer valuable insights for those looking to adopt Confidential Containers and leverage Confidential Computing in practical scenarios.

Moritz Eckert
Edgeless Systems
Attestation
RA-WEBs: Remote Attestation for WEB services

Data theft and leakage, caused by external adversaries and insiders, demonstrate the need for protecting user data. Trusted Execution Environments (TEEs) offer a promising solution by creating secure environments that protect data and code from such threats. The rise of confidential computing on cloud platforms facilitates the deployment of TEE-enabled server applications, which are expected to be widely adopted in web services such as privacy-preserving LLM inference and secure data logging. One key feature is Remote Attestation (RA), which enables integrity verification of a TEE. However, compatibility issues with RA verification arise as no browsers natively support this feature, making prior solutions cumbersome and risky.

To address these challenges, in this talk, we present RA-WEBs (Remote Attestation for Web services), a novel RA protocol designed for high compatibility with the current web ecosystem. RA-WEBs leverages established web mechanisms for immediate deployability, enabling RA verification on existing browsers. We will show preliminary evaluation results and highlight open challenges when introducing RA to the web.

Kosei Akama
Keio University