
Mark Russinovich is Chief Technology Officer and Technical Fellow for Microsoft Azure, Microsoft’s global enterprise-grade cloud platform. A widely recognized expert in distributed systems, operating systems, and cybersecurity, Mark earned a Ph.D. in computer engineering from Carnegie Mellon University. He later co-founded Winternals Software, joining Microsoft in 2006 when the company was acquired. Mark is a popular speaker at industry conferences such as Microsoft Ignite, Microsoft Build, and RSA Conference. He has authored several nonfiction and fiction books, including the Microsoft Press Windows Internals book series, Troubleshooting with the Sysinternals Tools, and the cybersecurity thrillers Zero Day, Trojan Horse, and Rogue Code.
Welcome to OC3! In this session, we'll outline the agenda for the day and give an introduction to confidential computing. We'll assess the current state of the confidential-computing space and compare it to last year's. We'll take a look at key problems that still require attention. We'll close with an announcement from Edgeless Systems.

Felix Schuster is an academic turned startup founder. After his PhD in computer security, he joined Microsoft Research, where he worked for four years on the foundations of Azure Confidential Computing before co-founding Edgeless Systems. The startup’s vision is to build an open-source stack for cloud-native Confidential Computing. Throughout his career, Felix has frequently given technical talks at top-tier conferences, including Usenix Security Symposium, IEEE Symposium on Security & Privacy, and ACM CCS. His 2015 paper on the “VC3” system is believed by some to have coined the term Confidential Computing.
The new Confidential Computing security frontier is still out of reach for most cloud-native applications. The Confidential Containers project aims at closing that gap by seamlessly running unmodified Kubernetes pod workloads in their own, dedicated Confidential Computing environments.
Description
Confidential Computing expands the cloud threat model into a drastically different paradigm. In a world where more and more cloud-native applications run across hybrid clouds, no longer having to trust your cloud provider is a very powerful and economically attractive proposition. Unfortunately, the current confidential computing cloud offerings and architectures are either limited in scope, workload intrusive, or provide node-level isolation only. In contrast, the Confidential Containers open-source project integrates the Confidential Computing security promise directly into cloud-native applications by allowing any Kubernetes pod to run in its own, exclusive trusted execution environment.
This presentation will start by describing the Confidential Containers software architecture. We will show how it reuses some of the hardware-virtualization-based Kata Containers software stack components to build confidential micro-VMs for Kubernetes workloads to run in. We will explain how those micro-VMs can transparently leverage the latest Confidential Computing hardware implementations such as Intel TDX, AMD SEV, or IBM SE to fully protect pod data while it is in use.
Going into more technical detail, we will walk through several key components of the Confidential Containers software stack, such as the Attestation Agent, the container image management Rust crates, and the Kubernetes operator. Overall, we will show how those components integrate to form a software architecture that verifies and attests tenant workloads, which pull and run encrypted container images on top of encrypted memory only.
The final parts of the presentation will first expand on the project roadmap and where it wants to go after its initial release. Then we will conclude with a mandatory demo of a Kubernetes pod being run in its own trusted execution environment, on top of an actual Confidential Computing-enabled machine.

Samuel Ortiz is a software engineer at Apple. He enjoys playing with containers and virtualization, and maintains a few related open source projects, such as Kata Containers. When not messing with software, Samuel runs across mountain trails and builds radio-controlled toys.
Description
Approximately 500 million medical prescriptions are issued, dispensed, and procured each year in Germany. gematik is legally mandated to develop the involved processes into their digital form within its public digital infrastructure (Telematikinfrastruktur). Due to the staged development of these processes, as well as their variable collaborative nature involving medical professionals, patients, pharmacists, and insurance companies, a centralized approach for data processing was chosen, since it provides adequate design flexibility. In this setup, data protection regulations require any processed medical data to be reliably protected from unauthorized access from within the operating environment of the service. Consequently, the solution is based on Intel SGX as the Confidential Computing technology. This talk introduces the solution, focusing on trusted computing base, attestation, and availability requirements.

Andreas Berg is an IT Architect at gematik, where he played a leading role in defining the security architecture of the German electronic patient records service (ePA) as well as the e-prescription service (E-Rezept). He has a long-standing interest in Confidential Computing technologies and methods for high-assurance and trustworthy IT systems.
For data owners, whether their data has been erased after use is questionable and needs to be proved, even when executing in a TEE. We introduce a security proof that verifies that sensitive data only lives inside the TEE and is guaranteed to be erased after use. We call it proof of being forgotten.
Description
One main goal of Confidential Computing is to guarantee that the security and privacy of data in use are under the protection of a hardware-based Trusted Execution Environment (TEE). The Trusted Execution Environment ensures that the content (code and data) inside the TEE is not accessible from outside. However, for data owners, whether their sensitive data has been intentionally or unintentionally leaked by the code inside the TEE is still questionable and needs to be proved. In this talk, we'd like to introduce the concept of Proof of Being Forgotten (PoBF). What PoBF provides is a security proof. Enclaves with PoBF can assure users that sensitive data only lives inside an SGX enclave and will be erased after use. By verifying this property and presenting a report with proof of being forgotten to data owners, the complete data lifecycle protected by the TEE can be strictly controlled, enforced, and audited.
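The "erased after use" property at the heart of PoBF can be pictured with a small, hypothetical sketch. This is not the PoBF implementation, and all names are made up; it only illustrates the behavior being proved: sensitive data is used inside the trusted boundary and zeroized before control returns.

```python
# Hypothetical illustration of the "erased after use" property that PoBF
# proves; names and logic are illustrative, not from the PoBF project.

def process_secret(secret: bytearray) -> int:
    """Compute on the secret, then zeroize it before returning."""
    try:
        return sum(secret) % 256  # stand-in for a real computation on the data
    finally:
        for i in range(len(secret)):  # erase the sensitive bytes in place
            secret[i] = 0

secret = bytearray(b"api-key-123")
result = process_secret(secret)
assert all(b == 0 for b in secret)  # the buffer no longer holds the key
```

PoBF's contribution is formally verifying that enclave code always behaves this way; the sketch merely shows the property a data owner wants enforced.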

Mingshen Sun is a Staff Security Researcher at Baidu. He leads, maintains and actively contributes to Apache Teaclave (incubating) (a confidential computing platform), and several open source projects. Mingshen regularly gives talks at industry events in security. He also collaborates with academic researchers in multiple interesting research projects on solving real-world problems in industry. His interests lie in the areas of security and privacy, operating systems, and programming languages.
Cloud-native and confidential computing will inevitably grow together. This talk maps the design space for confidential Kubernetes and shows the latest corresponding developments from Edgeless Systems.
Description
Kubernetes is the most popular platform for running workloads at scale in a cloud-native way. With the help of confidential computing, Kubernetes deployments can be made verifiable and can be shielded from various threats. The simplest approach towards "confidential Kubernetes" is to run containers inside enclaves or confidential VMs. While this simple approach may look compelling on the surface, on closer inspection it does not provide great benefits and leaves important questions unanswered: How to set up confidential connections between containers? How to verify the deployment from the outside? How to scale? How to do updates? How to do disaster recovery?
In this talk, we will map the solution space for confidential Kubernetes and discuss pros and cons of the different approaches. In this context, we will give an introduction to our open-source tool MarbleRun, which is a control plane for SGX-based confidential Kubernetes. We will show how MarbleRun, in conjunction with our other open-source tools EGo and EdgelessDB, can make existing cloud-native apps end-to-end confidential.
We will also discuss the additional design options for confidential Kubernetes that are enabled by confidential VM technologies like AMD SEV, Intel TDX, or AWS Nitro. In this context, we will introduce and demo our upcoming product Constellation, which uses confidential VMs to create "fully confidential" Kubernetes deployments, in which all of Kubernetes runs inside confidential environments. Constellation is an evolution of MarbleRun that strikes a different balance between ease of use and TCB size.

Moritz Eckert is a cloud security enthusiast. With a past in software security research he now leads product development at Edgeless Systems. Moritz is a passionate engineer and has presented at top-tier conferences including Usenix Security Symposium, EuropeClouds Summit, and OC3 in the past. Alongside his professional work, Moritz is part of Shellphish, one of the highest-ranked competitive hacking groups in the world.
Privacy is an important aspect of AI applications. We combine Trusted Execution Environments, a library OS, and a scalable service mesh for confidential computing to achieve these security guarantees for Tensorflow-based inference and training with minimal performance and porting overheads.
Description
Access to data is a crucial requirement for the development of advanced driver-assistance systems (ADAS) based on Artificial Intelligence (AI). However, security threats, strict privacy regulations, and potential loss of Intellectual Property (IP) ownership when collaborating with partners can turn data into a toxic asset (Schneier, 2016): Data leaks can result in huge fines and in damage to brand reputation. An increasingly diverse regulatory landscape imposes significant costs on global companies. Finally, ADAS development requires close collaboration across original equipment manufacturers (OEMs) and suppliers. Protecting IP in such settings is both necessary and challenging.
Privacy-Enhancing Technologies (PETs) can alleviate all these problems by increasing control over data. In this talk, we demonstrate how Trusted Execution Environments (TEEs) can be used to lower the aforementioned risks related to data toxicity in AI pipelines used for ADAS development.
Contributions
The three most critical success factors for applying PETs in the automotive domain are low overhead in terms of performance and efficiency, ease of adoption, and the ability to scale. ADAS development projects are major efforts generating infrastructure costs in the order of tens to hundreds of millions. Hence, even moderate efficiency overheads translate into significant cost overhead. Before the advent of 3rd Generation Intel Xeon Scalable processors (Ice Lake), the overhead of SGX-protected CPU-based training of a TensorFlow model was up to 3-fold compared to training on the same CPU without SGX. In a co-engineering effort, Bosch Research and Intel have been able to effectively eliminate these overheads.
In addition, ADAS development happens on complex infrastructures designed to meet the highest demands in terms of storage space and compute power. Major changes to these systems for implementing advanced security measures would be prohibitive in terms of time and effort. We demonstrate that Gramine’s (Tsai, Porter, & Vij, 2017) lift-and-shift approach keeps the effort of porting existing workloads to SGX minimal. Finally, being able to process millions of video sequences consisting of billions of frames in short development cycles necessitates a scalable infrastructure. By using the MarbleRun (Edgeless Systems GmbH, 2021) confidential service mesh, Kubernetes can be transformed into a substrate for confidential computing at scale.
To demonstrate the validity of our approach, Edgeless Systems and Bosch Research jointly built a proof-of-concept implementation of an exemplary ADAS pipeline using SGX, MarbleRun, and Gramine as part of the Open Bosch venture client program.

Stefan Gehrer is a Research Engineer with Robert Bosch Research in Pittsburgh, Pennsylvania. His current research interests are in Trusted Execution Environments, Intrusion Detection Systems, and AI Security. In the past he also worked on Deep Learning-based Side-Channel-Attacks, Physically Unclonable Functions, and Automotive Safety&Security with publications at top-tier conferences. Stefan has a PhD in Hardware Security from the Technical University of Munich.

Scott Raynor is the lead Security Solutions Architect within the Security Software and Services group at Intel. Scott has worked at Intel for over 25 years in various roles including CPU and platform validation, OS kernel and driver development, and BIOS architecture and development, and is currently in a customer-facing role working to enable customers to successfully develop and bring their security-based products to market, in particular products based on Intel® Software Guard Extensions (Intel® SGX).

Moritz Eckert is a cloud security enthusiast. With a past in software security research he now leads product development at Edgeless Systems. Moritz is a passionate engineer and has presented at top-tier conferences including Usenix Security Symposium, EuropeClouds Summit, and OC3 in the past. Alongside his professional work, Moritz is part of Shellphish, one of the highest-ranked competitive hacking groups in the world.
Support for AMD Secure Nested Paging (SNP) for Linux is under heavy development. There is work ongoing to make Linux run as an SNP guest and to host SNP protected virtual machines. I will explain the key concepts of SNP and talk about the ongoing work and the directions being considered to enable SNP support in the Linux kernel and the higher software layers. I will also talk about proposed attestation workflows and their implementation.

Jörg is a Linux Kernel engineer highly involved in the Confidential Computing work at SUSE. He enabled Linux to run as an AMD SEV-ES guest and helps with the enablement of Secure Nested Paging. He maintains the IOMMU subsystem of the Linux kernel and is also involved in the KVM, X86, and PCI subsystems.
Regulated institutions have strong business reasons to invest in confidential computing. As with any new technology, governance takes center stage. This talk explores the vast landscape of considerations involved in provably and securely operationalizing Confidential Computing in the public cloud.
Download the accompanying paper here.
Description
Heavily regulated institutions have a strong interest in strengthening protections around data entrusted to public clouds, and Confidential Computing will be of great interest in this context. Securing data in use raises significantly more questions around proving the effectiveness of new security guarantees than securing either data in transit or data at rest.
Curiously, this topic has so far received no attention in the CCC, the IETF, or anywhere else that we're aware of.
This talk will propose a taxonomy of confidential computing governance and break the problem space down into several constituent domains, with requirements listed for each. Supply chain and toolchain considerations, controls matrices, control plane governance, attestation and several other topics will be discussed.

Mark Novak is a researcher in JPMorgan's Future Lab for Applied Research and Engineering (FLARE) focusing on Confidential Computing for the Financial Technology sector. Prior to joining FLARE, Mark spent three years in JPMorgan's Cybersecurity, Technology and Controls organization, and before that was an architect for confidential computing services and fleet health at Microsoft Azure.
Binary attestation allows a remote machine (e.g., a server) to attest that it is running a particular binary. However, usually, the other party (e.g., a client) is interested in guarantees about properties of the binary. We present a release process that allows checking claims about the binaries.
Description
Project Oak provides a trusted runtime and a generic remote attestation protocol for a server to prove its identity and trustworthiness to its clients. To do this, the server, running inside a trusted execution environment (TEE), sends TEE-provided measurements to the client. These measurements include the cryptographic hash of the server binary signed by the TEE’s key. This is called binary attestation.
However, the cryptographic hash of the binary is not sufficient for making any guarantees about the security and trustworthiness of the binary. What is really desired is semantic remote attestation, which allows attesting to the properties of a binary. However, such approaches are expensive, as they require running checks (e.g., a test suite) during the attestation handshake.
We propose a release process to fill this gap by adding transparency to binary attestation. The release process publishes all released binaries in a public, externally maintained verifiable log. Once an entry has been added to the log, it can never be removed or changed. A client, or any other interested party (e.g., a trusted external verifier or auditor), can therefore find the binary in the verifiable log. This matters because it makes it more likely that the client can detect when it is interacting with a malicious server, and the public nature of the log supports public scrutiny of the binaries.
In addition, we are implementing an ecosystem to provide provenance claims about released binaries. We use SLSA provenance predicates for specifying provenance claims. Every entry in the verifiable log corresponding to a released binary contains a provenance claim, cryptographically signed by the team or organization releasing the binary. The provenance claim specifies the source code and the toolchain for building the binary from source. The provenance details allow reproducing server binaries from the source, and verifying (or more accurately falsifying) security claims about the binaries by inspecting the source, its dependencies, and the build toolchain.
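As a rough, hypothetical sketch of the check this enables (not Project Oak's actual API, and with an HMAC standing in for a real digital signature), a client can recompute the attested binary's hash and match it against a signed log entry carrying the provenance claim:

```python
# Illustrative sketch of verifying a released binary against a
# transparency-log entry; names and the HMAC "signature" are stand-ins.
import hashlib
import hmac

RELEASER_KEY = b"releaser-signing-key"  # stand-in for a real signing key

def log_entry(binary: bytes, provenance: str) -> dict:
    """Build a log entry: binary hash + provenance claim, signed by releaser."""
    digest = hashlib.sha256(binary).hexdigest()
    payload = (digest + provenance).encode()
    return {
        "binary_sha256": digest,
        "provenance": provenance,  # e.g. a SLSA-style provenance claim
        "signature": hmac.new(RELEASER_KEY, payload, "sha256").hexdigest(),
    }

def client_verifies(binary: bytes, entry: dict) -> bool:
    """Check the attested binary hash and the entry's signature."""
    digest = hashlib.sha256(binary).hexdigest()
    payload = (entry["binary_sha256"] + entry["provenance"]).encode()
    expected = hmac.new(RELEASER_KEY, payload, "sha256").hexdigest()
    return digest == entry["binary_sha256"] and hmac.compare_digest(
        entry["signature"], expected)

server_binary = b"\x7fELF...server"
entry = log_entry(server_binary, "built from commit abc123 with toolchain X")
assert client_verifies(server_binary, entry)
assert not client_verifies(b"tampered", entry)
```

In a real deployment the entry would live in an append-only, Merkle-tree-backed log and be signed with an asymmetric key, so anyone can audit it without holding the releaser's secret.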

Razieh is a senior software engineer at Google, currently working on Project Oak. In this role, her focus is on developing building blocks for more secure and privacy-preserving software systems. She has a PhD in Software Engineering from University of Oslo. Prior to Google, she has been involved in various industrial and research projects aimed at developing more effective techniques for verification and validation of safety-critical systems, software product families, and information-intensive systems.
OSS Project Veraison builds software components that can be used to create Attestation Verification services required to establish that a CC environment is trustworthy. These flexible & extensible components can be used to address multiple Attestation technologies and deployment options.
Description
Establishing that a Confidential Computing environment is trustworthy requires the process of Attestation. Verifying the evidential claims in an attestation report can be a complex process, requiring knowledge of token formats and access to a source of reference data that may only be available from a manufacturing supply chain.
Project Veraison (VERificAtIon of atteStatiON) addresses these complexities by building software components that can be used to create Attestation Verification services.
This session discusses the requirements for determining that an environment is trustworthy, the mechanisms of attestation, and how Project Veraison brings consistency to the problems of appraising technology-specific attestation reports and connecting to the manufacturing supply chain where the reference values of what is 'good' reside.
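At its core, the appraisal step can be pictured as matching evidence claims from an attestation report against known-good reference values from the supply chain. A minimal, hypothetical sketch (not the Veraison API; all names are illustrative):

```python
# Hypothetical appraisal step: evidence claims vs. reference values.
# Names are illustrative, not from Project Veraison.

REFERENCE_VALUES = {  # e.g. sourced from the device manufacturer
    "firmware_digest": "a1b2c3",
    "boot_state": "secured",
}

def appraise(evidence: dict) -> bool:
    """Accept the evidence only if every reference value matches."""
    return all(evidence.get(k) == v for k, v in REFERENCE_VALUES.items())

good = {"firmware_digest": "a1b2c3", "boot_state": "secured", "nonce": "42"}
bad = {"firmware_digest": "deadbeef", "boot_state": "secured"}
assert appraise(good)
assert not appraise(bad)
```

Real verifiers must first parse and authenticate technology-specific token formats before reaching this comparison, which is exactly the complexity Veraison's components aim to absorb.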

Simon Frost is a Software Architect in the Architecture and Technology Group at Arm. He runs a software prototyping team responsible for building components that will assist using technologies based on Arm architectures in various environments. Areas of interest include attestation and firmware provenance.

Thomas Fossati is an engineer in the Architecture and Technology Group at Arm where he deals with attestation in various capacities and is the tech lead for Project Veraison. He is the Arm representative on the Confidential Computing Consortium TAC and a co-chair of the CCC Attestation SIG.
How can we make Confidential Computing accessible, so that developers of all levels can quickly learn and use this technology? In this session, we welcome three Outreachy interns, who had zero knowledge of Confidential Computing, to showcase what they've developed in just a few months.
Description
Implementing state-of-the-art Confidential Computing is complex, right? Developers must understand how Trusted Execution Environments work (whether they are process-based or VM-based), be familiar with the different platforms that support Confidential Computing (such as Intel's SGX or AMD's SEV), and have knowledge of complex concepts such as encryption and attestation.
Enarx, an open source project that is part of the Confidential Computing Consortium, abstracts away all these complexities and makes it really easy for developers of all levels to implement and deploy applications to Trusted Execution Environments.
The Enarx project partnered with Outreachy, a diversity initiative from the Software Freedom Conservancy, to welcome three interns who had zero knowledge of Confidential Computing. In just a few months, they learned the basics and started building demos in their favorite languages, from simple to more complex.
In this session, they'll have the opportunity to showcase their demos and share what they've learned. Our hope is to demonstrate that Confidential Computing can be made accessible and easy to use by all developers.

Nick Vidal is the Community Manager of Profian and the Enarx project, which is part of the Confidential Computing Consortium from the Linux Foundation. Previously, he was the Director of Community and Business Development at the Open Source Initiative, Director of Americas at the Open Invention Network, and one of the community leaders of the Drupal project in Latin America.
Confidential Computing requires trust relationships. What are they, how can you establish them, and what are the possible pitfalls? Our focus will be cloud deployments, but we will look at other environments such as telecom and Edge.
Description
Deploying Confidential Computing workloads is only useful if you can be sure what assurances you have about trust. This requires establishing relationships with various entities, and sometimes rejecting certain entities as appropriate. Examples of possible entities include:
- hardware vendors
- CSPs
- workload vendors
- open source communities
- independent software vendors (ISVs)
- attestation providers
This talk will address how and why trust relationships can be established, the dangers of circular relationships, some of the mechanisms for evaluating them, and what they allow when (and if!) they are set up. It describes the foundations for considering when Confidential Computing makes sense, and when you should mistrust the claims of some of those offering it!

Mike Bursell is CEO of Profian, a company in the Confidential Computing space. He is one of the co-founders of the Enarx project (https://enarx.dev) and a visible presence in the Confidential Computing Consortium. Mike has previously worked at companies including Red Hat, Intel and Citrix, with roles working on security, virtualisation and networking. After training in software engineering, he specialised in distributed systems and security. He regularly speaks at industry events in Europe, North America and APAC.
Professional interests include: Confidential Computing, Linux, trust, open source software, security, distributed systems, blockchain, virtualisation.
Mike has an MA from the University of Cambridge and an MBA from the Open University, and is author of "Trust in Computer Systems and the Cloud", published by Wiley.
As confidential VMs become a reality, trusted components within the guest, such as guest firmware, become increasingly relevant for the trust and security posture of the VM. In this talk, we will focus on our explorations in building “customer-managed guest firmware” for increased control and auditability of the CVM’s TCB.
Description
Confidential computing developers like flexibility and control over the guest TCB, because that allows them to manage which components make up the trusted computing base. In a VM, these requirements are tricky to meet. In this talk, you will learn how we in Azure are enabling new capabilities to help you use a full VM as a Trusted Execution Environment and help your app perform remote attestation with another trusted party in a Linux VM environment with OSS guest firmware options.

Pushkar is a principal architect in Azure Confidential Computing at Microsoft. Pushkar worked in Microsoft Research for 9 years, focused on systems and protocols for search and compression, before turning his efforts to practical confidential computing. He then joined Azure to lay the foundations for Azure Confidential Computing, with a charter to take cloud confidential computing to the masses.

Ragavan is a Senior Software Engineer working in Azure Confidential Computing, Microsoft. He has 16 years of software development experience in building system software and antimalware systems.
In this talk, we present the Mystikos project's progress on Python programming language support and an ML workflow in a cloud environment that preserves the confidentiality of the ML model and the privacy of the inference data even if the cloud provider is not trusted. In addition, we provide a demo showing how to protect data using secret keys stored in Azure Managed HSM, how to retrieve the keys from MHSM at run time using attestation, and how to use the keys for decryption. We also demonstrate how an application can add the secret-provisioning capability with simple configuration.
Description
Confidential ML involves many stakeholders: the owner of the input data, the owner of the inference model, the owner of the inference results, etc. Porting ML workloads to Confidential Computing, and managing keys and their retrieval into the Confidential Computing ML application securely and confidentially, are challenging for users who have a limited understanding of Confidential Computing. We provide a solution that implements the heavy lifting in the Mystikos runtime: the programming language runtime, the attestation, the encryption/decryption, the key provisioning, etc., so that users only have to convert their Python-based ML applications and configure them with a few lines of JSON. While the demo takes advantage of the Secure Key Unwrap capability of Azure Managed HSM, the solution is based on an open framework that can be extended to other key vault providers.
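The secret-provisioning flow described above (attest, release the key, decrypt) can be outlined with stub functions. This is a hypothetical sketch, not the Mystikos or Azure Managed HSM API; the function names are made up and XOR stands in for real authenticated decryption:

```python
# Hypothetical outline of an attest-then-release-key flow; all names and
# the XOR "cipher" are stand-ins, not Mystikos or Azure MHSM APIs.

def get_attestation_report(nonce: bytes) -> dict:
    """Stub for TEE-provided evidence binding a fresh nonce."""
    return {"measurement": "enclave-hash", "nonce": nonce}

def release_key(report: dict, nonce: bytes) -> bytes:
    """Stub key-release service: verify the evidence, then unwrap the key."""
    assert report["nonce"] == nonce  # freshness check before releasing
    return b"\x42" * 16

def decrypt(blob: bytes, key: bytes) -> bytes:
    # XOR is symmetric, so the same call "encrypts" in this toy example.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

nonce = b"fresh-nonce"
key = release_key(get_attestation_report(nonce), nonce)
ciphertext = decrypt(b"ml-model-weights", key)   # toy "encryption"
assert decrypt(ciphertext, key) == b"ml-model-weights"
```

A real runtime performs each of these steps with hardware evidence, a verifying key-release policy, and authenticated encryption; the point of the Mystikos approach is that the application author never writes this code by hand.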

Xuejun Yang is a developer for Azure Confidential Computing. Xuejun tries to help developers build confidential applications in their favorite languages, Python, C#, or C/C++, while minimizing the hassle of attestation and secret provisioning. Xuejun has worked on a wide spectrum of systems software ranging from compilers and language runtimes, including CRT, to libos. His work on random testing of C compilers led to a best paper award at PLDI and influenced the random testing of various systems software.
Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Intel SGX to protect the integrity and confidentiality of Smart Contracts. This talk is an FPC 101 and will showcase the benefits of Confidential Computing in the blockchain space.
Description
Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Confidential Computing Technology to protect the integrity and confidentiality of Smart Contracts.
In this talk we will learn what Fabric Private Chaincode is and how it can be used to implement privacy-sensitive use cases for Hyperledger Fabric. The goal of this talk is to equip developers and architects with all the necessary background and first hands-on experience to adopt FPC for their projects.
We start with an introduction to FPC, explaining the basic FPC architecture, security properties, and hardware requirements. We will cover the FPC Chaincode API and application integration using the FPC Client SDK.
The highlight of this talk will be a showcase of a new language support feature for Fabric Private Chaincode using the EGo open-source SDK.

Marcus is a researcher at the IBM Research Lab in Zurich, Switzerland working in the area of Blockchain Security and Applications. His research is focused on secure distributed systems using confidential computing. Marcus holds a PhD in computer science from TU Braunschweig, Germany. He is an active contributor to the Hyperledger Community, particularly, he writes code for the Fabric Private Chaincode project, and he has spoken at various conferences such as the Hyperledger Global Forum 2021 and 2020, SRDS 2019, and DSN 2017.
Secure ledger technology is enabling customers who need to maintain a source of truth where even the operator is outside the trusted computing base. Top examples: recordkeeping for compliance purposes and enabling trusted data.
Description
This session will dive into how secure ledgers provide security and integrity to customers in compliance- and auditing-related scenarios, specifically customers who must maintain a source of truth that remains tamper-protected from everyone. We will also discuss how secure ledgers benefit from confidential computing and open source.

Shubhra leads product management for secure ledger services that utilize confidential computing at Microsoft. She is passionate about solving end user problems, learning about new technological concepts and domains, and diving deep into data to surface insights. Her past experiences include building enterprise and consumer products.
We all understand that data sovereignty is critical in highly regulated industries like government, healthcare, and fintech, prohibiting even the most basic data insights because data cannot be moved to a centralized location for collaboration or model training. Confidential computing powered by Intel Software Guard Extensions (Intel SGX) changes all of that. Join us to learn how customers across every industry are gaining insights never before possible.

Laura Martinez directs marketing strategy for cybersecurity at Intel Corporation. Laura has spoken on a wide variety of topics, including artificial intelligence, analytics, and IT security spanning healthcare, banking, and transportation. Laura has been a key contributor to Intel’s participation in Cyberweek and RSA, focusing efforts on translating customer security needs into everyday language.
Laura spent the first part of her career in IT security at Trend Micro, where she managed Premium Support Services before moving into program & product management. After moving into product management, she saw a gap in the market and proposed a new security solution that was sold in the consumer market. In her tenure at Trend Micro, she found that there was a growing need for security in the healthcare market, and joined UC Davis Medical Center, where she managed the areas of IT Communications and Analytics before joining Intel Corp in their security marketing division.
Storing payment data in your e-commerce site may expose your business to PCI compliance challenges. Azure confidential computing provides a platform for protecting your customers’ financial information at scale.

Stefano Tempesta works at Microsoft in the Azure Confidential Computing product group to make the Cloud a more secure place for your data and apps.
Balancing data privacy and runtime protection with the ease and nimbleness of deployments is the reality of the current state of confidential computing.
The simplicity of pods and the availability of orchestration for confidential computing: exploring the adoption of Kata pod isolation with protected virtualisation. Secure ledger technology enables customers who need to maintain a source of truth where even the operator is outside the trusted computing base. Top examples: recordkeeping for compliance purposes and enabling trusted data.
Description
We are discussing the use of Kata pod isolation with protected virtualisation, striving for confidential computing with a cloud-native model while preserving most of the Kubernetes (K8s) compliance. This talk will summarise the state of the technical discussion in the industry, discuss solutions and open questions, and give a hint of the future of confidential computing with cloud-native models. The speed of adoption of confidential computing will to a large extent depend on how easily developers and administrators can incorporate runtime protection into the established technology stack. From use cases to technology demos, the technology team is moving forward.

Stefan Liesche is the Architect for IBM Hybrid Cloud on Z. Stefan is focused on security, transparency, protection of data and services, and confidential computing technology in flexible cloud environments. He has developed a broad spectrum of technology areas for IBM, including IBM Cloud Hyper Protect Services and IBM's Watson Talent Portfolio, where Stefan created AI-driven solutions that transform recruiting and career decisions to enhance fairness and tackle biases. Stefan also innovated within the Exceptional User Experience products for several years with a focus on open solutions and integration. Stefan has more than 20 years of experience as a global technical leader collaborating with partners and customers through joint initiatives.

James works within the IBM Hyper Protect family of offerings, which deliver Confidential Computing to the Cloud using IBM LinuxONE and IBM Z Systems technology. He has responsibility for the technical architecture to leverage the IBM Secure Execution for Linux capability (Trusted Execution Environment) in Cloud Native solutions. James has an MA from the University of Cambridge and over 20 years of experience solving customer problems with emerging technologies, contributing to and using open-source projects as part of the solution. He is an active contributor to the open-source Confidential Containers project, which is looking to enable Cloud Native Confidential Computing by leveraging Trusted Execution Environments to protect containers and data.
See abstract
As Generative AI becomes a cornerstone of innovation, ensuring data privacy and intellectual property protection in cloud-based deployments remains a critical challenge. Privatemode addresses this by providing a SaaS platform for Confidential GenAI, leveraging Confidential Computing to deliver end-to-end encryption for inference workloads.
This talk will cover how Privatemode is built on Contrast and Confidential Containers to deliver a secure and scalable platform for GenAI as a service. We will explain how these technologies enable confidential orchestration, ensure workload attestation, and solve challenges like integrating GPUs securely. Key features of Privatemode, such as provider exclusion, support for cutting-edge open-source LLMs, and compatibility with OpenAI’s API, will also be discussed.
Real-world use cases will demonstrate Privatemode in action, from privacy-compliant AI services for public administration to confidential LLM-as-a-Service deployments for academic research. We will also touch on ongoing advancements, such as multi-GPU support and retrieval-augmented generation (RAG) capabilities, showcasing how Privatemode continues to evolve to meet diverse requirements.
This session provides insights into building practical, confidential GenAI solutions and demonstrates how Confidential Computing can enable leveraging GenAI’s potential without compromising confidentiality.

Felix Schuster is an academic turned startup founder. After his PhD in computer security, he joined Microsoft Research to work on the foundations of Azure Confidential Computing. He co-founded Edgeless Systems in 2020. The company’s mission is to make scalable confidential computing accessible to everyone. Felix has given talks at top-tier cybersecurity conferences, including Usenix Security Symposium and RSA Conference.
See abstract
Annually, the volume of fraudulent activities in payments continues to rise – prompting banks to ramp up their investments in prevention and compliance measures to safeguard the integrity of the financial system. Recognizing the imperative for industry-wide collaboration, Swift, as a member-driven cooperative, is spearheading efforts to mitigate the impact of fraud through innovative approaches.
In this presentation, we will showcase Swift's groundbreaking initiative to drive industry collaboration in fraud reduction. Leveraging its unparalleled network and community data, Swift is pioneering a foundation model for anomaly detection with unprecedented accuracy and speed. Central to this endeavor is Swift's strategic integration of confidential computing and verifiable attestation, ensuring the highest standards of security and privacy in data and AI collaboration.
By partnering with key technology vendors and leading an Industry Pilot Group comprising the world's largest banks, Swift is tackling some of the toughest challenges that have plagued the industry for decades. This collaborative effort underscores the recognition that no single entity possesses all the answers, but together, industry stakeholders can forge solutions that benefit all.
Attendees will gain invaluable insights into Swift's holistic approach to combating fraud, and how confidential computing serves as a linchpin in enabling secure collaboration among industry players. Join us to discover how this work is championing a global, inclusive economy that prioritizes the interests of end-customers, while maintaining the highest standards of security and privacy.

Johan Bryssinck is the technical lead of Swift’s Federated AI and Data Collaboration program. He has more than 20 years’ experience in leadership in corporate strategy, business transformation and innovation, technology, architecture and capacity management. At Swift, Johan is driving the adoption of artificial intelligence to enhance its products and services and innovate with customers. He oversees banking, vendor, FinTech, and research partnerships to solve common industry challenges with artificial intelligence. ‘Responsible AI’ is central to how Swift employs the technology as a key enabler for instant and frictionless cross-border payment and security transactions. His career spans critical market infrastructures at SWIFT, CLS and Euroclear. He holds a doctorate in Nuclear Physics from the University of Ghent, Belgium.
See abstract
This session will provide an in-depth look at the latest advancements in Azure Confidential Computing, focusing on technical innovations around AMD SEV-SNP, Intel TDX, and confidential Nvidia GPUs and their real-world applications. Attendees will gain insights about the practical benefits and challenges faced during deployment, with examples from both within and outside Azure. Additionally, we will present technical demos ranging from confidential VMs, containers and confidential AI, including Retrieval-Augmented Generation (RAG) and Azure ML based confidential AI inferencing. We will also explore how Azure Confidential Computing integrates with Microsoft Cloud for Sovereignty (MCFS) to address regulatory and compliance needs. This session will highlight how Azure's confidential computing capabilities ensure data privacy and integrity throughout the application lifecycle, making it possible to develop secure and private applications.

Vikas Bhatia is the Head of Product for Azure Confidential Computing (ACC) and is responsible for designing products and services that organizations around the world leverage to ensure that their workloads are running confidentially on Azure by protecting data when in use. He leads a team responsible for products and services including confidential virtual machines, containers, and applications on confidential hardware across CPUs and GPUs. He is also responsible for services in Azure that leverage the confidential platform to themselves become confidential. Prior to this role, Vikas led the Product team for Project Rome in the Windows Developer Platform team. He has also done stints on Cloud Game Streaming, Xbox One and the C++ Compiler in DevDiv. Vikas has an MBA from the University of Washington, Foster School of Business and a MS in Computer Science from the University of Alabama.
See abstract
Confidential Virtual Machines (CVMs) introduce new security paradigms, necessitating operating systems that are specifically architected to support their unique threat models. Ubuntu Core, Canonical’s immutable, containerized Linux OS, offers an ideal platform for CVMs, addressing both security and operational resilience at scale.
This presentation will delve into the key technical attributes that make Ubuntu Core well-suited for CVMs. We will highlight its architecture, which is built around an immutable, read-only base system, ensuring that the operating environment remains secure from tampering. Through atomic updates, Ubuntu Core ensures that system changes are applied consistently, preventing issues that could arise from partial updates or system corruption. The use of strict application isolation via containerization mitigates the risk of cross-application vulnerabilities, while enabling fine-grained control over what software can access system resources. This approach also enables CVMs to maintain their integrity even in hostile environments where physical access is limited.
Drawing parallels with IoT devices, which share similar constraints such as remote deployment and lack of manual intervention, we will explain how Ubuntu Core’s self-healing capabilities, secure boot, and seamless rollback features offer a critical advantage for CVM environments. The OS is designed to be highly composable, allowing individual system components (such as the kernel, applications, and system services) to be independently updated or rolled back, ensuring minimal disruption and high uptime in sensitive environments.
We will also discuss Ubuntu Core's runtime integrity support, including the use of signed snaps and encrypted channels, which provide real-time verification of system and application integrity. This is essential for CVMs, where runtime integrity and protection from unauthorized access are paramount.
By leveraging Ubuntu Core’s hardened architecture, CVMs can achieve robust security, predictable system behavior, and operational flexibility, making it the ideal foundation for running confidential workloads in both cloud and edge environments. This talk will cover the technical foundations of Ubuntu Core’s suitability for CVMs, demonstrating its potential to meet the unique demands of secure virtualized environments.

Ijlal Loutfi is the Product Lead for Platform Security at Canonical. She is also a part-time lecturer at the University of Oslo. She started working on confidential computing during her PhD studies at the University of Oslo, where she researched trusted execution environments for commodity endpoint devices. She also spent time as a visiting researcher at various labs, including HP Security Labs in Bristol. After completing her PhD, Ijlal joined the Norwegian Computing Center to focus on privacy-enhancing technologies. Earlier in her career, she worked as a field consultant with Microsoft, deploying identity management and access control solutions to enterprises.
See abstract
In the evolving field of confidential computing, Intel's TDX Connect stands out as a transformative framework, enabling seamless integration of Trusted Execution Environments (TEEs) with diverse computing devices. This session provides an end-to-end overview of TDX Connect, exploring its goals, operational lifecycle, and user-centric features such as Trusted Domain Operating System (TD OS) enablement and driver/application integration. Attendees will also learn about Intel's contributions to Linux upstreaming and collaborations with Cloud Service Providers (CSPs) to drive adoption. Join us to discover how TDX Connect is advancing secure computing and unlocking new possibilities for confidential data processing.

Arie Aharon is the Chief Architect for Intel TDX Connect and a leading expert in confidential computing, specializing in Intel TDX virtualization-based TEEs, memory encryption, I/O virtualization (VT-d), and PCIe security. Arie leads the full-stack hardware-to-software definition of TDX Connect, serving as platform security and TDX software architect, ensuring seamless integration with BIOS, OS, VMM, and driver software for scalable and secure confidential virtualization.

Shalini Sharma is a Principal Engineer and System Security Architect at Intel Corporation, specializing in system security and architecture, and confidential computing. Shalini is a key architect for TDX Connect, focusing on enabling secure I/O virtualization in Trusted Execution Environments (TEE). Her work addresses challenges related to secure data transfer, secure device assignment, and isolation. She also contributed to enabling TDX Connect for Linux. Previously, Shalini served as the lead hardware architect for Intel's Security Engine, advancing silicon security features and working on security solutions for Intel’s next-generation products. She is also involved in PCIe technologies like TDISP and IDE.
See abstract
Trusted Execution Environments in combination with open-source and reproducible builds provide transparency by relying on reviewers to analyze the Trusted Computing Base (TCB). The size of the TCB directly influences the speed at which new releases and bug fixes are deployed to production, since reviews can be time consuming. Hence, low-TCB environments enable transparency and trust to scale.
Crucial security features required to implement a trusted workload, such as end-to-end encryption, can significantly expand the TCB. For example, more general approaches like TLS with an extensive feature set including support for many signature schemes, certificate parsing and session resumption logic may be less ideal for low-TCB environments.
To address this issue, we will present a remote attestation scheme that uses the Noise Protocol Framework to create an end-to-end encrypted, attested channel between an end-user device and a TEE. The Noise Protocol Framework makes it possible to minimize the number of cryptographic primitives required to establish an encrypted session bound to the attestation evidence.
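As a rough illustration of the channel-binding idea (not the speakers' actual protocol), the sketch below uses a single primitive, HMAC-SHA256, both to derive a session key from a handshake transcript and to tie the attestation evidence to that transcript. All names and the transcript contents are assumptions for illustration only.

```python
import hashlib
import hmac
import os

# Illustrative sketch of binding attestation evidence to an encrypted
# channel, in the spirit of the Noise-based scheme described above.
# The labels and report_data layout are assumptions, not the speakers'
# actual protocol.

def derive_session_key(shared_secret: bytes, transcript: bytes) -> bytes:
    """Derive a session key from the handshake secret and transcript
    using only HMAC-SHA256, keeping the cryptographic TCB small."""
    prk = hmac.new(transcript, shared_secret, hashlib.sha256).digest()
    return hmac.new(prk, b"session-key", hashlib.sha256).digest()

def make_report_data(transcript: bytes, static_pub: bytes) -> bytes:
    """The TEE embeds this digest in its attestation report's user-data
    field, tying the evidence to this specific handshake."""
    return hashlib.sha256(transcript + static_pub).digest()

# Toy run: both endpoints derive the same key, and a verifier can check
# that the evidence was produced for the channel it observes.
shared_secret = os.urandom(32)                         # e.g. from an X25519 exchange
static_pub = os.urandom(32)                            # TEE's static public key
transcript = hashlib.sha256(b"e, ee, s, es").digest()  # Noise-style token transcript

assert derive_session_key(shared_secret, transcript) == \
       derive_session_key(shared_secret, transcript)
evidence_user_data = make_report_data(transcript, static_pub)
```

Because the evidence commits to the handshake transcript, a verifier that completes the handshake and checks `evidence_user_data` knows the attested TEE is the peer on this very channel, ruling out relay of stale evidence.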

Ivan Petrov is a Software Engineer working on Project Oak at Google DeepMind. He holds a PhD from Moscow State University, where he focused on intrusion detection systems and software-defined networking.

Katsiaryna Naliuka is a Software Engineer working on Project Oak at Google DeepMind. She holds a PhD in Computer Science from the University of Trento, where her doctoral research specialized in mobile phone security.
See abstract
The IETF RATS working group has been designing a standard format (CoRIM) for attestation endorsements so that their behavior in any compliant Attestation Verification Service is fully determined. This talk is going to discuss the appraisal model we've come up with, and how its composable design accommodates a model of endorsement syndication to build the global web of trust from supply chain to service operation governance.

Joshua is a Software Engineer on the Confidential Space team at Google.

Dionna has been at Google for 8 years working on Confidential Computing, with over a year on the IETF RATS working group. She is primarily focused on improving the standards story around trustworthiness for measured boot, but has also been involved in new SLSA standards for attested builds. She is a maintainer on the gce-tcb-verifier, go-sev-guest, go-configfs-tsm, and Veraison OSS projects.
See abstract
Artificial intelligence (AI) is rapidly transforming our world, but it also presents new challenges for data security and privacy. How can we ensure these increasingly powerful models and the sensitive data they rely on are protected throughout their lifecycle?
Join us to explore how confidential computing, powered by Intel CPUs, provides a critical answer. We'll delve into practical applications of confidential computing for securing AI workloads, including how confidential computing ensures privacy and confidentiality, how confidential computing solutions are being used to build and deploy secure AI applications, and how confidential computing can protect sensitive data without sacrificing the performance needed, even for demanding AI tasks.
Join us to unlock the power of confidential AI and learn how to build secure, privacy-preserving AI solutions for the future.

Anand Pashupathy is Vice President and General Manager of the Security Software and Services (S3) Division in the Office of the Corporate Technology Officer, where he leads a team of senior leaders whose purpose is to deliver security software technologies, services, and practices that empower customers to achieve their security objectives. His organization’s product portfolio includes Confidential Compute, Ecosystem Enabling, the Security Corporate Technology Initiative, and Open Source Security Supply Chain. Anand is also responsible for Intel’s Confidential Compute vision, strategy, and execution. Previously, Anand has held many engineering, program-wide, and GM leadership roles at Intel. Additionally, Anand has been granted six patents and currently serves as a governing board member of the Confidential Computing Consortium. A strong advocate for women and underrepresented people in technology, he serves as the Executive Sponsor for an internal employee resource group; for his advocacy and leadership contributions to diversity and inclusion, Anand received the 2023 Global Diversity and Inclusion Achievement Award for Executive Advocate for D&I. Outside of work, Anand and his family love to travel and experience cultures from around the world. He earned his MBA from the Kellogg School of Management and a Master’s degree in Computer Science, and has been with Intel since the nineties.

Nelly Porter is an accomplished technology leader currently serving as the Director of Product Management for GCP Confidential Computing and Encryption at Google, where she leads end-to-end encryption efforts, manages secure software supply chains, and defines the vision and strategy for platform and virtualization security. At Microsoft, she held roles including Lead Program Manager, Principal Group Program Manager, and Senior Program Manager, focusing on product management and development in areas including authentication, remote desktop technologies, and augmented reality. Earlier, at Hewlett-Packard Labs, she helped establish technical partnerships with Microsoft, contributed to high-availability solutions, and developed tools for clustered systems. Nelly's career began at Scitex Vision, where she developed low-level device drivers and firmware for various operating systems.
See abstract
In an era where sensitive data is increasingly processed in distributed environments, the need to ensure its protection during processing is critical. The growing reliance on Arm CPUs, known for their energy efficiency and versatility, further underscores the demand for robust and scalable security frameworks. Addressing this challenge, Arm Confidential Compute Architecture (Arm CCA) represents a transformative approach, leveraging hardware-supported encryption and dynamic memory isolation to secure data in use.
This talk explores the evolving landscape of secure processing with a focus on Arm CCA, its security model, practical tools, and real-world applications in domains such as healthcare, finance, and government. The session examines Arm CCA's Realm environment, which enhances traditional systems like TrustZone by offering advanced secure memory isolation. Using tools like QEMU Virt and FVP, we will demonstrate how developers can prepare for upcoming Armv9-A architecture chips and integrate these capabilities into modern workloads.
The session also highlights safeguarding applications in Realm VMs against malicious hypervisors and ensuring system integrity through remote attestation. By examining a real-world threat scenario, we’ll showcase the potential of confidential computing in mitigating risks and securing workloads in privacy-sensitive contexts.
FUJITSU-MONAKA, Fujitsu’s next-generation 2nm low-power Arm processor, incorporates Arm CCA to redefine secure processing for future hardware systems. Join us to gain technical insights on leveraging Arm CCA to meet the growing need for secure, energy-efficient computing.
This presentation is based on results obtained from a project, JPNP21029, subsidised by the New Energy and Industrial Technology Development Organisation (NEDO).

Darshan Patel is a Lead Software Engineer at FUJITSU-MONAKA Software R&D Unit, HPC AI Lab, Fujitsu Research of India Private Limited, Bengaluru. He specializes in the research and development of AI software optimized for Arm high-performance computing. Currently, his primary focus is on designing and developing the ARM CCA (Confidential Compute Architecture) software stack for FUJITSU-MONAKA, a next-generation 2nm Arm-based CPU set to launch in 2027. His work plays a key role in enhancing security and performance in high-performance computing environments.

Tatsuya Kitamura is a Lead Software Engineer at Fujitsu. He is in the software development team for Fujitsu's next generation Arm-based processor, FUJITSU-MONAKA, and currently working on Arm CCA (Confidential Compute Architecture) software stack and use case development for FUJITSU-MONAKA.
See abstract
This session delves into NVIDIA’s latest advancements in confidential computing on NVIDIA systems and beyond, focusing on key updates to Attestation Services that expand platform support: single and multi-GPU attestation, switch and network attestation, and upcoming capabilities being released this year. Attendees will discover how these advancements fortify security by enabling relying parties to securely validate claims, while also learning about optimized deployment strategies, supported usage patterns, and SLA benchmarks for NVIDIA’s cloud-based services, ensuring robust, scalable solutions for developers and system integrators.

Rob Nertney is a senior software architect for confidential computing & attestation. He has spent nearly 15 years architecting the features and deployment of accelerator hardware into hyperscale environments for both internal and external use by developers. He has several patents in processor design relating to secure solutions that are in production today. In his spare time, he loves golfing when the weather is nice, and gaming (on RTX hardware of course!) when the weather isn’t.
See abstract
We present a comprehensive framework for securely deploying Hugging Face models, particularly large language models (LLMs), within Trusted Execution Environments (TEEs). The framework encompasses the entire process from evidence collection to attestation, ensuring robust protection of AI models. Central to this approach is the Confidential AI Loader, which encrypts models prior to loading them into memory, thereby safeguarding them within the TEE. The preprocessing steps involve generating an AES GCM key, encrypting the AI model, uploading the encrypted model to a model server, and registering the encryption key with a Key Broker Service (KBS) that interfaces with a Key Management Service (KMS). The architecture facilitates the seamless integration of encrypted models and the Hugging Face project into a container, enabling secure execution within the TEE. This methodology ensures that AI models are protected throughout their lifecycle, from preprocessing to deployment, leveraging TEEs to maintain confidentiality and integrity. https://github.com/cc-api/confidential-huggingface-runner.
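The preprocessing flow described above (generate an AES-GCM key, encrypt the model, register the key with a KBS) can be sketched roughly as follows. The KBS/KMS interaction is mocked with a dictionary, and all function names are illustrative assumptions rather than the project's actual API; see the linked repository for the real implementation.

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_model(model_bytes: bytes) -> tuple[bytes, bytes, bytes]:
    """Generate an AES-GCM key and encrypt the model; returns
    (key, nonce, ciphertext). The key never travels with the model."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, model_bytes, None)
    return key, nonce, ciphertext

kbs_registry: dict[str, bytes] = {}  # stand-in for the KBS/KMS

def register_key(model_bytes: bytes, key: bytes) -> str:
    """Register the key under the model's digest, mirroring the
    'register the encryption key with a KBS' step above."""
    key_id = hashlib.sha256(model_bytes).hexdigest()
    kbs_registry[key_id] = key
    return key_id

def load_in_tee(key_id: str, nonce: bytes, ciphertext: bytes) -> bytes:
    """Inside the TEE, fetch the key and decrypt the model in memory.
    In the real flow the KBS releases the key only after verifying
    the TEE's attestation evidence."""
    key = kbs_registry[key_id]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

The key release gate is the crucial piece: because decryption can only happen after the KBS accepts the attestation evidence, the plaintext model exists only inside the verified TEE.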

Wenhui Zhang is a Researcher/Software Engineer at Bytedance, where she has been since 2021. She holds a PhD from Penn State. Wenhui’s research interests include operating systems and runtimes, security, architecture, compiler optimization, and concurrency. Her recent work has been on system support for trusted execution environments, CXL memory, serverless computing, and safety guardrails for large language models. She has served on the PC for WWW'[24, 25], NeurIPS’24, IPDPS’25, and AAAI'25 DATASAFE, the Shadow PC for Eurosys'25, the AEC PC for SOSP'23, OSDI'24, ATC'24, and SIGCOMM’24, and the PC for the Demo/Poster Track of CCS'24. Her publication recognition includes a Best Paper runner-up at Eurosys'24. She also serves as security co-chair for Akraino Edge Stack and co-lead for the cc-api project, and is involved in many other open-source projects.
See abstract
Attestation for Confidential Computing presents a data management problem that spans an entire industry. Multiple entities need to both produce and consume data at various stages in the supply chain, creating many points of interaction across trust boundaries.
To address the complexity of this landscape, we first need to make it understandable. The RATS architecture is a great starting point. But better yet is to have working software and hands-on experiences. Arm and Linaro have been collaborating on an end-to-end experimental platform for attestation, based on components from the Veraison project. We will present a demonstration of this work.
The next step is to reduce fragmentation of solutions in open-source projects that are built for production. The RATS group works extensively to create alignment around data formats and interaction models. We present some recent work in the Confidential Containers project as a case study for how these are being adopted.
In the final part of our presentation, we will look at some future work that focuses on the distribution of endorsements and reference values within the RATS framework. We will show how the Veraison project can once again be the essential proving ground for these next steps towards a harmonised ecosystem for attestation.

Paul Howard is a Principal System Solutions Architect in the Architecture and Technology Group at Arm. Paul focuses on secure solutions that combine hardware and software across cloud, edge and IoT. His work includes prototypes, proofs-of-concept, and open-source collaborations. He is a founding maintainer of the Parsec security project in the CNCF Sandbox.

Kevin Zhao is the tech lead at Linaro Data Center Group. He has been working on the Arm server ecosystem for about 10 years, including open-source IaaS solutions, distributed storage, and confidential computing. Now he is actively working on the Arm Confidential Computing Architecture (CCA) software stack.
See abstract
This talk provides a technical deep dive into the COCONUT-SVSM project, a platform designed to provide secure services for Confidential Virtual Machines. We'll explore the project's architecture, detail significant advancements made over the last year, and discuss current challenges in securing CVM services. The talk will conclude with an overview of planned developments and the project roadmap for the coming year.

Jörg is leading the development of COCONUT-SVSM and is working with AMD on enabling SEV and related technologies. In this role at SUSE he implemented major parts of the AMD SEV-ES guest support in the Linux kernel and brought it upstream into kernel 5.10. He is also active in the Linux kernel community as the maintainer for the IOMMU subsystem and a contributor to other areas like KVM or the X86 architecture.
See abstract
Microsoft has embraced a different approach that offers flexibility to customers through the use of a “paravisor”. A paravisor executes within the confidential trust boundary and provides the virtualization and device services needed by a general-purpose operating system (OS), enabling existing VM workloads to execute securely without requiring continual servicing of the OS to take advantage of innovative advances in confidential computing technology. Microsoft developed the first paravisor in the industry (and gave a talk about it at OC3 two years ago), and for years we have been enhancing the paravisor offered to Azure customers. This effort now culminates in the release of a new, open-source paravisor called OpenHCL. We will talk about our journey to get here, and our goals and future milestones for this new project.

Chris is a developer on the Core OS Virtualization team at Microsoft, focused on confidential computing. His background includes work across different operating systems and architectures such as Windows, Linux, x86-64 and AArch64. He received a B.S.E. from the University of Michigan. One of his main areas of focus is OpenHCL, a paravisor for Confidential VMs, part of the broader open-source OpenVMM project.
See abstract
Major German health insurance companies and healthcare providers need a solution to support the country's electronic patient record (ePA) system. Given the sensitivity of private medical information, the technology infrastructure must ensure data security for 50 to 60 million citizens. IBM worked with Edgeless Systems to enable Confidential Computing at scale, with support from Intel Xeon processors and Intel SGX.

Ram is a Confidential Computing Architect for IBM Cloud VPC based in Portland, Oregon. He has a Master's in Computer Science from OHSU in Portland. He was instrumental in enabling the Intel-based Confidential Computing services on IBM Cloud. Prior to that he led the architecture and development of the technology on IBM's OpenPOWER and co-authored an IEEE paper on Confidential Computing on POWER. He has also contributed to the open-source community: virtual file systems, memory management, device drivers, volume managers, and many more.

Rachel Wan is a Lead Product Manager at IBM, specializing in data privacy and security, with a focus on Confidential Computing and AI technologies. She leads product development in Confidential Computing/Confidential AI strategy and Virtual Desktop Infrastructure, helping companies ensure data compliance and security. Additionally, Rachel sits as chair of the Outreach Committee at the Confidential Computing Consortium (CCC).
See abstract
Confidential Containers are a key enabler of confidential cloud-native workloads, but integrating them into existing environments can be complex. Contrast addresses this challenge by simplifying the adoption of Confidential Containers, offering a practical, Kubernetes-native solution for Confidential Computing.
In this talk, we will outline how Contrast adds a Confidential Computing layer to existing Kubernetes platforms, enabling policy-based workload attestation, secrets management, and a service mesh for mTLS-based workload authentication without disrupting existing workflows. We will also discuss Contrast’s architecture and its compatibility with hybrid environments, making it suitable for both cloud and bare-metal deployments.
The presentation will highlight real-world use cases and in-production deployments, including securing AI workloads, protecting sensitive financial data, and safeguarding information in hostile environments such as military battlefields. We'll also provide a hands-on demo showing how Contrast enables confidential application deployment with minimal effort. This session will offer valuable insights for those looking to adopt Confidential Containers and leverage Confidential Computing in practical scenarios.
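To give a flavor of what "Kubernetes-native" means here, the sketch below shows the generic mechanism Confidential Containers builds on: a pod selects a TEE-backed runtime via Kubernetes' standard RuntimeClass field. The class name and image are illustrative placeholders and do not depict Contrast's actual interface.

```yaml
# Hypothetical pod spec; runtimeClassName depends on the confidential
# container runtime installed in the cluster and is illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app
spec:
  runtimeClassName: kata-cc        # selects a TEE-backed container runtime
  containers:
    - name: app
      image: registry.example.com/app:1.0
```

Because the change is a single field on an otherwise ordinary pod spec, existing deployment workflows (Helm, GitOps, CI pipelines) remain untouched, which is the point of the "without disrupting existing workflows" claim above.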

Moritz Eckert is Chief Architect at Edgeless Systems. He conducted research at EURECOM and UC Santa Barbara working on the next generation of Cyber Reasoning Systems. He joined Edgeless Systems in 2020 with the mission of making confidential computing scalable and accessible for everyone. He has presented at conferences such as Microsoft Build, Usenix Security, and RSA Conference.
See abstract
Data theft and leakage, caused by external adversaries and insiders, demonstrate the need for protecting user data. Trusted Execution Environments (TEEs) offer a promising solution by creating secure environments that protect data and code from such threats. The rise of confidential computing on cloud platforms facilitates the deployment of TEE-enabled server applications, which are expected to be widely adopted in web services such as privacy-preserving LLM inference and secure data logging. One key feature is Remote Attestation (RA), which enables integrity verification of a TEE. However, compatibility issues arise with RA verification because no browser natively supports it, making prior solutions cumbersome and risky.
To address these challenges, in this talk, we present RA-WEBs (Remote Attestation for Web services), a novel RA protocol designed for high compatibility with the current web ecosystem. RA-WEBs leverages established web mechanisms for immediate deployability, enabling RA verification on existing browsers. We will show preliminary evaluation results and highlight open challenges when introducing RA to the web.
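At its core, remote attestation is a challenge-response protocol: the verifier sends a fresh nonce, and the TEE returns a hardware-signed "quote" binding that nonce to a measurement of the running code. The toy Python sketch below illustrates only this generic pattern; an HMAC with a shared key stands in for the hardware signature and vendor certificate chain, and all names are illustrative, unrelated to the actual RA-WEBs protocol.

```python
import hashlib
import hmac
import os

# Toy stand-in for a TEE's attestation key. In real RA, the quote is
# signed inside the hardware and verified against the vendor's PKI.
ATTESTATION_KEY = b"demo-shared-secret"

def make_quote(nonce: bytes, measurement: bytes) -> bytes:
    """TEE side: bind the verifier's fresh nonce to the code measurement."""
    return hmac.new(ATTESTATION_KEY, nonce + measurement, hashlib.sha256).digest()

def verify_quote(nonce: bytes, expected_measurement: bytes, quote: bytes) -> bool:
    """Verifier side: recompute the quote and compare in constant time."""
    expected = hmac.new(ATTESTATION_KEY, nonce + expected_measurement,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

# Challenge-response: the verifier-chosen nonce prevents replaying old quotes.
nonce = os.urandom(16)
measurement = hashlib.sha256(b"enclave-code").digest()
quote = make_quote(nonce, measurement)
assert verify_quote(nonce, measurement, quote)          # fresh quote accepted
assert not verify_quote(os.urandom(16), measurement, quote)  # stale nonce rejected
```

The browser-compatibility problem the talk addresses is that no browser can perform the verifier role above natively, which is the gap RA-WEBs fills using existing web mechanisms.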

Kosei Akama is a second-year master's student at Keio University's Graduate School of Media and Governance. He earned his bachelor's degree from the Faculty of Environment and Information Studies at Keio University in 2023. His research focuses on applied cryptography and trusted computing. Notably, his paper, "Scrappy: SeCure Rate Assuring Protocol with PrivacY," was accepted to the Network and Distributed System Security (NDSS) Symposium 2024, a prestigious international cybersecurity conference. He is also active online under the handle "akakou."