A Turning Point for Verified Spectre Sandboxing

2 August 2022
Sunjay Cauligi
Marco Guarnieri
D. Moghimi
D. Stefan
Marco Vassena
arXiv:2208.01548
Abstract

Spectre attacks enable an attacker to access restricted data in an application's memory. Both academia and industry have developed several mitigations to block Spectre attacks, but to date, very few of them have been formally vetted; most are "best effort" strategies. Formal guarantees are particularly crucial for protecting isolated environments like sandboxing against Spectre attacks. In such environments, a subtle flaw in the mitigation would allow untrusted code to break out of the sandbox and access trusted memory regions. In our work, we develop principled foundations for building isolated environments resistant to Spectre attacks. We propose a formal framework for reasoning about sandbox execution and Spectre attacks. We formalize properties that sound mitigation strategies must fulfill, and we show how various existing mitigations satisfy (or fail to satisfy!) these properties.
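
To make the threat concrete, the sketch below shows the classic Spectre v1 (bounds-check bypass) gadget that sandbox mitigations of this kind must rule out. It is an illustrative example, not code from the paper; the names sandbox_mem, sandbox_len, probe, and gadget are hypothetical. Under speculation, the processor may execute the body of the if before the bound check resolves, so an out-of-bounds idx can let a byte outside the sandbox reach attacker-observable cache state.

/* Illustrative sketch only (not from the paper): a Spectre v1
   bounds-check-bypass gadget inside sandboxed code. All names here
   (sandbox_mem, sandbox_len, probe, gadget) are hypothetical. */
#include <stddef.h>
#include <stdint.h>

uint8_t sandbox_mem[4096];   /* memory the sandboxed code may legally read   */
size_t  sandbox_len = 4096;  /* architectural bound on sandbox accesses      */
uint8_t probe[256 * 512];    /* attacker-observable cache side channel       */

void gadget(size_t idx) {
    if (idx < sandbox_len) {                /* branch may be speculated past   */
        uint8_t secret = sandbox_mem[idx];  /* speculative out-of-bounds read  */
        (void)probe[secret * 512];          /* encodes secret into cache state */
    }
}

Mitigations such as index masking or speculative load hardening aim to force the speculative load to return a harmless value whenever the bound check is mispredicted; the properties formalized in the paper are about establishing which such strategies are actually sound.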
