The last year has been brutal for businesses globally. Taking examples from my home country, the UK, cybercrime has cost over £1B - and counting - and has claimed at least one life.
- Marks & Spencer lost £300M when ransomware crippled its systems for weeks.
- The Co-op suffered a related attack, losing over £200M in sales and the customer data of more than 20 million people.
- Jaguar Land Rover’s assembly lines have been shut down for weeks, haemorrhaging £70M per week and requiring a £1.5B loan secured by the government.
- Transport for London’s systems were compromised, with the ensuing disruption lasting months, costing £39M and exposing 5,000 customers’ banking details. Two teenagers are being prosecuted for the attack.
- Most tragically, a patient at King’s College Hospital died after ransomware delayed critical blood test results. Speaking to friends who sat in the daily meetings deciding who would get blood tests, the human toll was evident. Cyberattacks aren’t just about money!
These aren’t isolated incidents - they’re symptoms of a systemic vulnerability in how we build computer systems.
According to the Verizon 2025 Data Breach Investigations Report, credential abuse and exploitation of vulnerabilities continue to dominate as attack vectors, accounting for 22% and 20% of breaches respectively. The exploitation of vulnerabilities saw a 34% surge year-over-year, creating what Verizon describes as a “concerning threat landscape”.
We’re yet to learn the root causes and attack chains involved in each of the examples above, but many involved ransomware, which frequently uses software exploits as a post-initial-access vector to gain control of target systems and spread across a network.
Here’s the kicker: approximately 70% of all software vulnerabilities stem from a single root cause - memory safety issues. This isn’t a new problem. Google, Microsoft, Apple, Mozilla and the Linux Foundation have all reported similar figures for their software over the last two decades. The uncomfortable truth is that current CPUs are fundamentally incapable of preventing these vulnerabilities, and traditional software patches have proven woefully inadequate.
Rewriting all the world’s software into memory-safe languages, such as C#, Java and Rust, is unviable. While new projects may be adopting Rust over C/C++, and some critical components are being rewritten into safe languages, the scale and depth of the C and C++ ecosystems make it practically impossible to rewrite all the world’s unsafe software. The risk of introducing other (non-memory-safety) issues during a rewrite also poses a substantial barrier. Given sufficient software compatibility, it is actually easier to swap the hardware!
Two architectural approaches have emerged to tackle this trillion-dollar problem at the hardware level:
- CHERI: Capability Hardware Enhanced RISC Instructions - pioneered at University of Cambridge (UK), and
- OMA: Object Memory Architecture - pioneered at University of Bristol (UK) and now being commercialised by Doubtless Computing.
Both aim to make memory-unsafe systems safe by design, but they take different paths to get there. Understanding these differences matters because the choice between them will shape the security and performance characteristics of computing for decades to come.
The Memory Safety Crisis
Before diving into solutions, it’s worth understanding what we’re solving. When software runs, it constantly allocates and deallocates memory - think of it like booking rooms in an office building. Memory safety vulnerabilities arise when this process goes wrong. A person might keep using a room after their booking has ended (use-after-free), wander into a neighbouring room (buffer overflow), or use a room without ever booking one (invalid pointer dereference). Software has exactly these problems with its memory allocations (its room bookings).
These bugs become catastrophic vulnerabilities when attackers exploit them to read sensitive data they shouldn’t access, manipulate critical system variables, or inject malicious code. The underlying architecture of today’s processors - paging-based virtual memory - lacks the granularity needed to enforce security within a single application or process.
Memory safety breaks down into three categories, each illustrated in the short C sketch after this list:
- Referential safety ensures pointers genuinely reference allocated memory and can’t be forged. Think of it as ensuring software holds a valid booking for a room, that its accesses are authorized, and that bookings can’t be faked.
- Spatial safety prevents accessing memory outside allocated bounds - no going into neighbouring rooms on the same corridor.
- Temporal safety addresses what happens over time, ensuring memory can’t be accessed after it’s been freed and reallocated. In our building analogy, if you move office, you shouldn’t be able to use your badge to access your old office.
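To make the categories concrete, here is a minimal C sketch - illustrative only - with one violation of each kind. A conventional CPU will happily execute all three:

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    char *room = malloc(16);              /* book a 16-byte "room" */

    /* Spatial violation: write past the end of the allocation
       (buffer overflow into the "neighbouring room"). */
    memset(room, 'A', 32);

    /* Temporal violation: keep using the room after the booking has ended
       (use-after-free - the memory may already belong to someone else). */
    free(room);
    room[0] = 'B';

    /* Referential violation: forge a "booking" from a raw integer and
       dereference it - no allocation ever backed this pointer. */
    char *forged = (char *)(uintptr_t)0xdeadbeef;
    *forged = 'C';

    return 0;
}
```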
Traditional architectures like x86, Arm, and RISC-V rely on coarse-grained page-level protection (typically 4KB or larger pages), which is far too blunt an instrument for modern security needs.
CHERI: Capabilities Meet Legacy Systems
CHERI, developed over more than a decade by the University of Cambridge and SRI International, extends conventional instruction set architectures with hardware-enforced capabilities. A CHERI capability is a form of fat pointer - it contains not just a memory address but also bounds information, permissions, and validity metadata. Every memory access gets checked against these constraints in hardware, catching violations before they can be exploited.
The architecture provides strong referential and spatial safety guarantees. When you have a CHERI capability, you provably have legitimate access to a specific bounded region of memory, and the hardware won’t let you stray outside those bounds. CHERI achieves this while maintaining compatibility with existing paged memory architectures, which is both its greatest strength and a source of limitations.
Here’s where it gets interesting: CHERI’s capabilities are large. On a 64-bit system, a CHERI pointer requires 129 bits (including the hidden tag bit) - essentially double the data width of the base architecture. This decision to encode all protection metadata within the pointer itself has profound implications. Every data structure that stores pointers effectively doubles in memory consumption for those fields. Capabilities in memory (stack/heap) must be aligned to natural 128-bit boundaries. Cache lines, which are precious and limited, now hold fewer actual pointers. And memory bandwidth requirements increase, because moving each pointer means moving twice as much data.
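Conceptually, a CHERI capability carries the metadata below alongside every pointer. The struct is purely an illustration of the logical fields - real CHERI hardware uses a compressed 128-bit encoding plus an out-of-band tag bit, not a plain C struct - but it makes the size pressure easy to see:

```c
#include <stdint.h>

/* Illustrative only (not the real compressed encoding): the logical
   contents of a CHERI capability versus a conventional pointer. */
typedef struct {
    uint64_t address;      /* where the pointer currently points           */
    uint64_t base;         /* lower bound of the accessible region         */
    uint64_t length;       /* size of the accessible region                */
    uint32_t permissions;  /* load / store / execute / sealing rights      */
    /* plus a hidden 1-bit validity tag kept out-of-band by the hardware   */
} logical_capability;      /* hardware compresses all this to 128 bits + tag */

typedef struct {
    uint64_t address;      /* a conventional 64-bit pointer: address only  */
} plain_pointer;
```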
CHERI provides hardware-enforced referential and spatial safety but leaves temporal safety to software. You can achieve temporal memory safety with CHERI, but it requires modifying your memory allocator and implementing pointer revocation mechanisms - essentially software to scan memory to find and invalidate stale pointers. This software-based approach to temporal safety remains part of the trusted computing base and requires careful verification. It’s also closely related to software garbage collection.
Research has explored various temporal safety mechanisms for CHERI, but they all involve non-trivial software complexity and performance overhead. In theory, hardware acceleration may be possible but is likely to always require software involvement. This is because a CHERI capability covers a range of memory, which may include more than one object. Software allocation and object type information is required to differentiate objects and thus revoke capabilities appropriately.
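In heavily simplified form, the shape of such a revocation pass looks like the sketch below. The types and structure are hypothetical and only gesture at the idea; published CHERI revocation schemes are considerably more sophisticated, using quarantine buffers and tag-assisted sweeping:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical model of a tagged capability slot in memory. On real CHERI
   the tag lives out-of-band and the sweep is tag-assisted; this sketch only
   shows the shape of the work the allocator must do in software. */
typedef struct {
    uint64_t base;      /* start of the region the capability authorises */
    uint64_t length;    /* size of that region                           */
    bool     tag;       /* validity bit: false means unusable            */
} cap_slot;

/* Revoke every still-valid capability pointing into a freed region, so the
   allocator can safely reuse that memory afterwards. */
void revocation_sweep(cap_slot *slots, size_t count,
                      uint64_t freed_base, uint64_t freed_length)
{
    for (size_t i = 0; i < count; i++) {
        if (!slots[i].tag)
            continue;                                   /* already invalid  */
        if (slots[i].base >= freed_base &&
            slots[i].base < freed_base + freed_length)
            slots[i].tag = false;                       /* stale: revoke it */
    }
}
```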
The software ecosystem for CHERI has made impressive progress. Most code recompiles with minimal changes, though the capability width difference can require significant rewrites for certain applications. The wider capabilities also split the ISA: loads and stores of capabilities must be handled separately from ordinary data, which adds compiler complexity in edge cases where it cannot be certain whether a register or memory slot holds a capability. C/C++ code that abuses pointers by treating them as integers - uncommon, but frequent enough to cause a headache - requires some effort to address, as the example below shows.
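For instance, a round trip through a plain 64-bit integer - a common trick for tagging pointer bits or squeezing a pointer into a generic field - discards the capability metadata under CHERI, so code like the following needs rework:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);
    *p = 42;

    /* Common idiom: smuggle a pointer through a 64-bit integer, e.g. to
       stash a flag in its low bit or store it in a generic integer field. */
    uint64_t bits = (uint64_t)(uintptr_t)p;
    bits |= 1;                               /* flag in the low bit */

    /* Round-trip back to a pointer. On a conventional 64-bit machine this
       works; on CHERI a 64-bit integer cannot carry the 128-bit capability
       plus its tag, so the recovered pointer is no longer dereferenceable
       without source changes (e.g. keeping the original capability around). */
    int *q = (int *)(uintptr_t)(bits & ~(uint64_t)1);
    printf("%d\n", *q);

    free(p);
    return 0;
}
```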
Arm’s Morello project, which implemented CHERI on a modified Neoverse N1 core, revealed performance challenges that have pushed commercial CHERI efforts toward smaller embedded processors for the time being. Notably, Arm declined to join the CHERI Alliance, instead indicating they will take a step back from new work on Morello and wait to see if CHERI gains the long-sought commercial traction.
OMA: Rethinking Memory From the Ground Up
Doubtless Computing’s Object Memory Architecture takes a fundamentally different approach. Rather than extending paged memory, OMA implements object-based memory management directly in hardware. Every allocation becomes a first-class hardware object with its own identity, bounds, and metadata maintained by the processor itself.
This architectural choice enables several key advantages. OMA pointers are leaner - 65 bits on a 64-bit architecture, including the hidden tag bit. Rather than carrying all metadata with every pointer, OMA stores object information centrally in hardware-managed directories. This reduces memory bandwidth requirements and means that multiple pointers to the same object don’t duplicate metadata. The hardware maintains a complete understanding of object relationships and lifecycles, enabling optimizations that software-only approaches can’t match.
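Doubtless Computing has not published OMA’s internal structures, so the sketch below is purely conceptual. It is meant only to contrast the two models: object metadata held once in a hardware-managed directory and referenced by slim pointers, rather than duplicated inside every fat pointer:

```c
#include <stdint.h>
#include <stdbool.h>

/* Purely conceptual - OMA's real hardware structures are not public.
   The idea: object metadata lives once, in a hardware-managed directory,
   and each pointer carries only an object identity plus offset. */
typedef struct {
    uint64_t base;          /* where the object lives in memory      */
    uint64_t size;          /* its bounds, checked on every access   */
    bool     live;          /* cleared when hardware GC reclaims it  */
} object_directory_entry;

/* A 64-bit pointer (plus a hidden 1-bit tag held by the hardware) names an
   object and an offset within it; the metadata above is shared by every
   pointer to the same object rather than duplicated per pointer. */
typedef struct {
    uint64_t object_id_and_offset;
} oma_pointer;
```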
A critical differentiator is temporal safety. OMA implements garbage collection in hardware, scanning for and reclaiming unreachable objects in real-time as part of the processor’s normal operation. This isn’t the same as software garbage collection - it’s parallel, highly optimized, and doesn’t block program execution. By managing object lifecycles in hardware, OMA provides hardware-guaranteed temporal safety alongside referential and spatial protections, completing the trinity of memory safety properties.
It would be tempting to say that memory safety is solved by using a managed language like Java, JavaScript, Swift or Python. Unfortunately, this doesn’t hold up in practice. Managed language runtimes, as well as many supporting libraries, are written in C/C++ and suffer memory safety issues just as much as any other C/C++ code. The operating systems and hypervisors underneath are also written in unsafe languages, presenting yet another attack surface. This leaves managed language apps vulnerable. Memory-safe languages, including both Rust and managed languages, are a distinct improvement over traditional C and C++, but only hardware can provide the safety guarantees we need in today’s systems.
For managed languages like Java, JavaScript, Python, C# and Go, the OMA architecture delivers dramatic performance improvements. Doubtless Computing’s analysis of CPython 3.12 reveals that 32-44% of instructions are spent on memory management operations - allocation, deallocation, reference counting, and garbage collection. Moving these operations into parallel hardware execution, along with microarchitectural optimisations derived from hardware’s new understanding of the structure of data in memory, yields 2-5x speedups for managed language applications. Even C/C++ applications see 1.2-2x improvements as the hardware optimizes memory management functions and eliminates per-object metadata from cache.
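To give a flavour of where those instructions go: almost every object access in the interpreter adjusts a reference count, roughly as in the simplified C below (modelled loosely on CPython’s Py_INCREF/Py_DECREF, not the actual source). OMA’s claim is that this bookkeeping, together with allocation and collection, runs in parallel hardware instead:

```c
#include <stdlib.h>

/* Simplified model of CPython-style reference counting - not the real
   CPython source, just the shape of the work done on almost every object
   access in the interpreter. */
typedef struct object {
    long refcount;
    /* ... type pointer and payload would follow ... */
} object;

static void incref(object *o) {
    o->refcount++;                    /* executed on nearly every use */
}

static void decref(object *o) {
    if (--o->refcount == 0)           /* last reference dropped       */
        free(o);                      /* deallocate immediately       */
}
```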
The architecture maintains full source code compatibility for managed languages - all changes are confined to the runtime. For C/C++, the story is much the same as with CHERI: recompilation with modified standard libraries and a modified compiler, such as LLVM or GCC. Keeping the pointer width equal to the data width, with the same alignment requirements, avoids the ISA-level split for handling pointers, which simplifies the compiler and improves compatibility with legacy C/C++ code. This compatibility approach differs from CHERI’s and aligns with OMA’s target market: server-class and application processors, where managed languages dominate.
Fundamental Trade-offs: Where the Architectures Diverge
The philosophical differences between CHERI and OMA create distinct trade-off profiles. CHERI carries all metadata with pointers, enabling incremental adoption where different parts of a program can use capabilities independently. OMA’s centralized metadata requires the hardware to maintain a consistent view of all objects but enables more aggressive optimization. CHERI works within the existing paged memory model, simplifying system software migration. OMA introduces a new memory model that requires deeper changes but delivers performance gains that paged architectures can’t match.
These differences manifest in pointer width - CHERI’s 129-bit capabilities versus OMA’s 65-bit pointers. While both exceed the base address width, the doubling effect in CHERI has more severe implications for data structure layouts, cache efficiency, and memory bandwidth. Research on CHERI implementations has shown there is a long road ahead to achieve performance parity for managed languages; OMA, by contrast, offers a shorter path and substantial speedups rather than mere parity.
Temporal safety represents perhaps the most significant divergence in security. CHERI’s software-based pointer revocation requires explicit memory scanning and manipulation, adding complexity to the trusted computing base and verification burden. OMA’s hardware garbage collection happens transparently and continuously, providing stronger guarantees with less software complexity. This matters enormously for total cost of ownership - every line of security-critical software that doesn’t need to be written, verified, and maintained is a win.
The instruction set philosophies differ too. CHERI historically opts for ISA changes beyond pure memory safety to achieve its security goals, which can complicate adoption. OMA has historically prioritized backward compatibility, though this is adaptable based on market requirements. The consensus in the industry is that software compatibility presents the primary barrier to new processor designs, which favours architectures that minimize disruption.
Industrial Relevance and Market Fit
CHERI and OMA target fundamentally different computing environments, which is why calling them competitors misses the point. They’re complementary solutions to a shared problem, each optimized for distinct use cases.
CHERI finds its natural home in embedded systems and microcontrollers. These environments predominantly use C, C++, or Rust with restricted or no dynamic memory allocation. The code bases are smaller and more amenable to the verification required to ensure CHERI capabilities are used correctly. The memory overhead from wider pointers, while still present, matters less in resource-constrained designs that carefully manage every allocation. Four companies - SCI Semiconductor, Codasip, lowRISC, and Secqai - are actively commercializing CHERI for embedded applications. SCI’s ICENI family of CHERIoT microcontrollers, built on Microsoft’s open-source CHERIoT-Ibex core, targets the IoT and operational technology markets. Codasip offers CHERI-enabled RISC-V IP cores for custom processor designs. lowRISC’s Sonata platform provides an open-source FPGA-based development environment for CHERIoT research and prototyping.
Arm’s experience with CHERI tells an important story about scaling limitations. The Morello project, which implemented CHERI on a modified Neoverse N1 server-class core, yielded results that Arm appears to have found unsatisfactory. There has been no apparent follow-up on the substantial initial investment made into the Arm Morello designs. This assessment seems to reflect the performance challenges that CHERI faces in larger systems.
OMA’s sweet spot sits at the opposite end of the spectrum. Application-class and server-class processors running managed languages benefit enormously from hardware-accelerated memory management. Python, Java, JavaScript, C#, and Go all share similar memory models that align naturally with OMA’s object-based approach. These environments already use garbage collection extensively, so moving that functionality into hardware removes overhead, rather than adding it as it would in embedded systems. The performance gains - up to 5x for managed languages - become transformational for data centre workloads where every percentage point of efficiency translates to millions in operating costs.
The market dynamics favour different adoption paths. CHERI benefits from strong government backing, particularly from the UK’s now-ended Digital Security by Design programme and recognition from the US White House and NSA. This institutional support is intended to accelerate adoption in defence and critical infrastructure applications. CHERI’s open-source foundation through the CHERI Alliance creates a broad ecosystem but limits opportunities for proprietary differentiation.
OMA’s proprietary nature and performance advantages position it for commercial data centre deployment. The technology directly addresses the performance problems that hindered CHERI at scale. While OMA lacks CHERI’s first-mover advantage and government momentum, it offers compelling value for cloud providers and enterprises running managed language workloads. The economic argument is straightforward: if you can eliminate 70% of vulnerabilities while quintupling performance for your Python services, the return on investment is measured in weeks from deployment, rather than years.
OMA’s proprietary technology makes it attractive for investment as it can be patented. However, CHERI’s openness allows independent security teams to verify the safety of the architecture, and open implementations of CHERI processors enable those designs to be independently verified as well. Doubtless Computing will need to make its ISA public - inevitable anyway for a new CPU, as customers will require it - and offer a public platform for independent researchers to build confidence in its security claims.
The CHERI Ecosystem: Who’s Building What
The CHERI Alliance, formally launched in 2024, coordinates standardization and adoption efforts across industry and academia. Founding members include the FreeBSD Foundation, Capabilities Limited, SCI Semiconductor, Codasip, lowRISC, and the University of Cambridge. Google’s participation as a founding member signals serious industry interest, though notably Arm is not a member.
SCI Semiconductor, based in Cambridge, leads commercialization of Microsoft’s open-source CHERIoT Ibex implementation for embedded systems. Their ICENI family of processors targets microcontroller applications in automotive, industrial control, defence, and aerospace. The company has secured strategic distribution through EPS Global, which specializes in automotive tier-one suppliers and contract manufacturers. SCI’s early access program, in collaboration with lowRISC, allows select partners to begin development on lowRISC’s FPGA-based Sonata platform with guaranteed migration paths to production silicon.
Codasip, a RISC-V processor IP vendor, offers the X730 - a CHERI-enabled 64-bit application-class core based on their A730 design. Their “Custom Compute” methodology allows customers to license CHERI-enhanced cores or customize them further using Codasip Studio. The company has donated a CHERI SDK built on open-source tools to the CHERI Alliance, making it freely available for anyone implementing CHERI on RISC-V. Codasip is also developing Linux kernel support for RISC-V CHERI, which will be crucial for broader adoption.
lowRISC, a not-for-profit organization spun out of Cambridge University, maintains the Sonata evaluation platform and leads the UK-government-funded Sunburst Project. Sonata provides a complete FPGA-based development environment for CHERIoT, enabling software development and hardware experimentation before silicon is available. The Sunburst Project’s recent expansion to include SCI Semiconductor aims to validate CHERIoT designs through commercial tapeout on GlobalFoundries’ 22nm process, with all project deliverables remaining open-source.
Microsoft’s role deserves special mention. Microsoft Research developed CHERIoT-Ibex, an open-source RISC-V core optimized for embedded systems. They’ve made this core freely available and co-maintain the CHERIoT Platform repository with SCI Semiconductor. David Weston, Microsoft’s VP of Enterprise and OS Security, has publicly endorsed SCI’s commercialization efforts, stating that CHERI represents a “promising technology that can be used to enhance computer security.” This corporate backing from a major software vendor adds credibility to the embedded CHERI ecosystem.
The UK government’s support through the now-ended Digital Security by Design programme and UKRI funding has been instrumental in advancing CHERI. The programme provided ~£190 million in research funding over five years and continues to support development through initiatives like Sunburst. This institutional backing, combined with endorsements from the US White House and NSA, positions CHERI advantageously for government and defence procurements.
Conclusions
CHERI and OMA represent two responses to the memory safety crisis, each with distinct strengths that make them suited to different computing environments. The notion that one must “win” while the other “loses” misunderstands the landscape - the computing world is large enough, and varied enough, that multiple approaches can and should coexist. Cybersecurity principles also demand diversity of solutions.
CHERI’s compatibility with existing paged memory architectures and incremental deployment model make it an excellent fit for embedded systems where code bases are manageable, languages are predominantly C/C++/Rust, and the verification burden is acceptable. The active CHERI ecosystem, backed by government support and open-source collaboration, has created momentum that shouldn’t be underestimated. For IoT devices, industrial control systems, and safety-critical embedded applications, CHERI offers a practical path to hardware-enforced memory safety that companies can adopt today.
OMA’s object-based architecture and integrated hardware garbage collection (IHGC) deliver transformational performance for managed language workloads. By tackling temporal safety in hardware alongside referential and spatial protections, OMA provides more complete memory safety with less software complexity. The performance gains - up to 5x for Python, Java, JavaScript, C#, and Go - directly address the scalability problems that have limited CHERI in larger systems. For data centres, cloud infrastructure, and application servers where managed languages dominate, OMA presents compelling advantages.
Both architectures eliminate memory safety vulnerabilities. The formal guarantees that CHERI can provide are a subset of what OMA delivers, since OMA includes hardware-enforced temporal safety. However, CHERI’s earlier start and ecosystem momentum matter significantly in technology adoption. The question isn’t which architecture is “better” in absolute terms but rather which is more appropriate for specific use cases and deployment contexts.
Looking ahead, memory safety will increasingly become a non-negotiable requirement. The UK National Cyber Security Centre, US White House, and NSA have all called for fundamental changes in how we build secure systems. The attacks on Marks & Spencer, Co-op, Jaguar Land Rover, the NHS, Transport for London, and many others, demonstrate that our current approach isn’t working. Software-only solutions like Rust, while valuable, face adoption barriers that make them insufficient on their own. Hardware-based memory safety, whether through CHERI, OMA, or future approaches we haven’t yet invented, represents the most practical path to eliminating this class of vulnerability at scale.
The semiconductor industry moves slowly, with design cycles measured in years and deployment timelines measured in decades. Today’s architectural decisions will shape computing security through 2040 and beyond. The good news is that we now have proven approaches to memory safety that work in real hardware. CHERI has demonstrated its viability in embedded systems. OMA has shown it can deliver both security and performance for managed languages with a hardware prototype on AWS Cloud FPGAs supporting CPython 3.12 and Jupyter Notebooks. The challenge now isn’t technical feasibility - it’s economic deployment and ecosystem coordination.
For embedded designers, CHERI offers immediate benefits with manageable overhead. For cloud and data centre operators, OMA promises to eliminate vulnerabilities while dramatically improving performance. The fundamental insight is that both approaches work by making the right choices for their target markets. We don’t need to pick one winner. We need both, deployed where each makes the most sense, steadily displacing the insecure architectures that enabled the attacks we’ve seen this year. The trillion-dollar memory safety problem is solvable - and the will to deploy the solutions we’ve built is growing as organisations can no longer afford the risk of being vulnerable.