RISC-V chips are everywhere now—powering everything from smartwatches to data center servers. But there’s a growing problem that could fracture the entire ecosystem: nobody agrees on what counts as a “real” RISC-V processor. Two chips claiming to be RISC-V compatible might execute the same software completely differently, or worse, not run it at all. The industry is scrambling to solve this before the fragmentation becomes unfixable.
The core issue comes down to two different types of verification that chip designers often confuse or conflate. Architectural conformance asks: “Is this actually a RISC-V chip according to the official specification?”
Implementation verification asks: “Does our specific design work without bugs in real-world conditions?”
Most verification engineers have decades of experience with the second question but almost none with the first—and RISC-V’s flexibility makes conformance exponentially harder than it was for closed architectures like x86 or Arm. For those looking for the “source of truth,” the official RISC-V Specifications remain the primary reference for defining what a core must legally implement.
This matters because RISC-V’s biggest selling point—its open, customizable nature—also creates its biggest risk. Companies can legally add custom instructions, modify behavior, or implement optional features differently. That’s great for innovation but terrible for software compatibility. If your app works on one RISC-V chip but crashes on another “RISC-V” chip, developers will simply avoid the platform entirely. The industry learned this lesson the hard way with fragmented Android devices in the 2010s. Now semiconductor companies are trying to avoid repeating that mistake with processors.
⚡ WireUnwired • Fast Take
- RISC-V’s flexibility creates fragmentation risk—chips claiming compatibility may run software differently
- Architectural conformance (proving you’re RISC-V) vs implementation verification (proving it works) are separate challenges
- Exhaustively testing every instruction combination is computationally infeasible, so strategic verification approaches are required
- Industry lacks standardized performance and safety verification beyond basic instruction compliance

Conformance vs. Implementation: What’s the Difference?
The distinction sounds academic but has massive practical implications. Vladislav Palfy, solutions management director at Siemens EDA, explains:
“Architectural conformance is about asking if what you designed is actually a RISC-V core—does it execute instructions as defined, handle exceptions properly, implement the memory model correctly? Implementation verification is about making sure your specific design works in reality, covering all the microarchitectural details like pipeline implementation, cache coherency, and those corner cases that only reveal themselves at the least convenient moments.”
Think of it this way: architectural conformance is like verifying a car meets safety regulations—does it have working brakes, airbags, and emission controls as defined by law? Implementation verification is making sure your specific car model doesn’t have a design flaw where the transmission fails after 50,000 miles. Both are critical, but they’re fundamentally different problems requiring different testing approaches.
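In tooling terms, architectural conformance checking usually reduces to co-simulation against a golden model. Below is a minimal sketch of that lock-step pattern; the `dut` and `reference` objects and their `step()`/`state()` methods are hypothetical stand-ins for an RTL simulation interface and an instruction-set simulator such as Spike.

```python
# Minimal sketch of lock-step conformance checking. The `dut` and
# `reference` objects are hypothetical; a real flow drives an RTL
# simulator and an ISS (e.g., Spike) behind these interfaces.

ARCH_REGS = [f"x{i}" for i in range(32)] + ["pc"]

def compare_state(dut_state: dict, ref_state: dict) -> list[str]:
    """Return the architectural registers where DUT and reference disagree."""
    return [r for r in ARCH_REGS if dut_state[r] != ref_state[r]]

def run_lockstep(dut, reference, max_instructions: int = 1_000_000) -> None:
    """Retire one instruction at a time on both models and diff the results."""
    for n in range(max_instructions):
        dut.step()        # retire one instruction on the design under test
        reference.step()  # retire the same instruction on the golden model
        mismatches = compare_state(dut.state(), reference.state())
        if mismatches:
            raise AssertionError(
                f"conformance divergence after {n} instructions: {mismatches}")
```

Implementation verification probes everything this diff cannot see: pipeline timing, cache coherency traffic, and the microarchitectural corner cases Palfy describes.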
For closed architectures like Intel x86 or Arm, conformance was relatively straightforward because a single company controlled the specification and the reference implementations. If you wanted to build an x86-compatible chip, Intel’s processors served as the de facto standard. But RISC-V is open—there’s no single reference implementation that everyone must match. This freedom enables innovation but creates ambiguity about what “conforming to RISC-V” actually means.
The Mathematics of Impossibility
Here’s where the problem becomes genuinely hard: it’s computationally infeasible to test every possible instruction combination and state a RISC-V processor could encounter. Even a relatively simple RISC-V core with basic extensions has trillions of possible states when you account for different instruction sequences, memory configurations, privilege levels, and exception conditions.
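Some deliberately conservative back-of-the-envelope arithmetic makes the point; the numbers below are illustrative, and they ignore operand values, memory contents, and CSR state entirely, so they badly undercount the real space.

```python
# Back-of-the-envelope state-space arithmetic (illustrative, and a severe
# undercount: operand values, memory contents, and CSR state are ignored).

base_instructions = 40   # rough size of the RV32I base instruction set
privilege_levels = 3     # machine, supervisor, user
sequence_depth = 8       # a short window of interacting instructions

sequences = base_instructions ** sequence_depth
combinations = sequences * privilege_levels

print(f"{sequences:.3e} distinct 8-instruction sequences")    # ~6.554e+12
print(f"{combinations:.3e} sequence/privilege combinations")  # ~1.966e+13
# Trillions, before a single operand value or memory location is considered.
```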
To handle these vast state spaces, many verification teams rely on the Universal Verification Methodology (UVM), which provides the standardized framework needed to build scalable and reusable testbenches.
Dave Kelf, CEO of Breker Verification Systems, notes the scale: “RISC-V core vendors are now facing the same issues as Arm and Intel, investing vast sums in new verification process development.” The industry has moved beyond just “blasting tests”—running random instruction streams and hoping to catch bugs. Modern verification requires strategic coverage planning: identifying which instruction combinations and edge cases actually matter for software compatibility versus which are theoretically possible but practically irrelevant.
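As a toy illustration of that coverage-driven mindset, in the spirit of open-source generators such as riscv-dv: bias the random stream toward the bins the verification plan still needs instead of sampling blindly. The opcodes, bins, and the 80% weighting below are all invented for illustration.

```python
import itertools
import random

# Toy coverage-driven stimulus generation. Bins and weights are invented;
# real flows pair a generator like riscv-dv with functional coverage
# collected by the simulator.

OPCODES = ["add", "lw", "sw", "beq", "jal", "csrrw"]
PRIV_MODES = ["M", "S", "U"]

plan = set(itertools.product(OPCODES, PRIV_MODES))  # the coverage plan
hit: set[tuple[str, str]] = set()

def next_stimulus() -> tuple[str, str]:
    """Prefer bins the plan still needs; occasionally stay purely random."""
    remaining = list(plan - hit)
    if remaining and random.random() < 0.8:  # 80% coverage-directed
        return random.choice(remaining)
    return random.choice(OPCODES), random.choice(PRIV_MODES)

while hit != plan:
    stimulus = next_stimulus()
    # ... run the stimulus on the DUT and the reference model here ...
    hit.add(stimulus)

print(f"covered {len(hit)}/{len(plan)} opcode x privilege-mode bins")
```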
RISC-V International (the standards body) has been developing test suites through partnerships with institutions like Harvey Mudd College, but these tests focus primarily on unprivileged instructions—the basic operations any software might use. Privileged instructions (used by operating systems and hypervisors) are exponentially harder to test comprehensively and often require hand-written test cases rather than automated generation.
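Those hand-written privileged tests tend to be directed and state-focused. Here is a toy model of one spec-defined behavior: on an illegal instruction, mcause must be set to 2, mepc must capture the faulting pc, and control must transfer to mtvec (direct mode). A real conformance test asserts the same invariants against RTL rather than a Python model.

```python
# Directed-test sketch for machine-mode trap handling (toy model).
ILLEGAL_INSTRUCTION = 2  # mcause exception code defined by the privileged spec

class ToyHart:
    def __init__(self, mtvec: int):
        self.pc = 0x8000_0000
        self.csr = {"mtvec": mtvec, "mepc": 0, "mcause": 0}

    def raise_illegal(self) -> None:
        self.csr["mepc"] = self.pc            # save the faulting pc
        self.csr["mcause"] = ILLEGAL_INSTRUCTION
        self.pc = self.csr["mtvec"]           # direct-mode trap vector

def test_illegal_instruction_trap():
    hart = ToyHart(mtvec=0x8000_1000)
    faulting_pc = hart.pc
    hart.raise_illegal()
    assert hart.csr["mcause"] == ILLEGAL_INSTRUCTION
    assert hart.csr["mepc"] == faulting_pc
    assert hart.pc == 0x8000_1000

test_illegal_instruction_trap()
```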
The Software Compatibility Problem
The ultimate goal of conformance testing is ensuring software portability—that code compiled for RISC-V runs correctly on any conforming processor. But RISC-V’s flexibility complicates this significantly. As Ashley Stevens, director of product management at Arteris, explains: “RISC-V standardization efforts focus on architectural conformance, ensuring that all software-visible aspects behave as defined by the ISA. But RISC-V’s greatest strength—flexibility of the open instruction set architecture—is also its Achilles heel, potentially resulting in incompatibility between devices from different sources.”
This is why RISC-V International created “profiles”—standardized subsets of the specification that define specific feature combinations. Detailed documentation on these can be found in the RISC-V Profiles Repository. If your processor conforms to the RVA23 profile, for example, software written for that profile should work reliably. But profiles only solve part of the problem. Many companies extend RISC-V with custom instructions for domain-specific acceleration, and these extensions aren’t covered by standard conformance tests.
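In tooling, a profile gate can be as simple as a set difference against the profile's mandatory extension list. A sketch, using an illustrative subset of RVA23's mandatory extensions rather than the full mandate:

```python
# Sketch of a profile-conformance gate. The list below is an illustrative
# subset of RVA23's mandatory extensions, not the complete mandate.

RVA23_MANDATORY_SUBSET = {
    "I", "M", "A", "F", "D", "C", "V",
    "Zicsr", "Zifencei", "Zba", "Zbb", "Zbs",
}

def missing_extensions(core_extensions: set[str],
                       required: set[str] = RVA23_MANDATORY_SUBSET) -> set[str]:
    """Return the mandatory extensions the core lacks (empty set = passes)."""
    return required - core_extensions

# A core that omits the vector extension fails the profile gate, even
# though it may still be a perfectly legal RISC-V core.
core = {"I", "M", "A", "F", "D", "C", "Zicsr", "Zifencei", "Zba", "Zbb", "Zbs"}
print("missing:", sorted(missing_extensions(core)))   # -> missing: ['V']
```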
Aimee Sutton, product manager at Synopsys, puts it bluntly: “There’s no single definition of RISC-V. The creation of profiles facilitates software portability, but nobody is ever going to say ‘that isn’t a RISC-V core’ just because you deviated from the ISA.” This permissiveness enables innovation but creates verification headaches—how do you prove conformance to a standard that explicitly allows deviation?

Beyond Functional Correctness: Performance and Safety
Even if a chip passes all conformance tests, it might still be practically unusable. Palfy recounts a recent example: “An engineer had just taped out a RISC-V core for an automotive application. The core passed all architectural tests, but three months into software bring-up they discovered their branch predictor had the accuracy of a coin flip. It was technically correct but functionally useless—all that promised performance simply wasn’t there.”
This highlights a critical gap: the industry has standardized functional conformance (does the chip execute instructions correctly?) but not performance verification (does it execute them efficiently?). Every company builds custom infrastructure to answer questions like “Does this thing actually run fast?” or “Will it overheat under real workloads?” These aren’t covered by standard test suites because they depend on microarchitectural implementation details that vary wildly between designs.
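A sketch of the kind of ad-hoc check Palfy's anecdote implies, using an invented trace format of (predicted, actual) pairs per retired branch; nothing in a standard conformance suite would ever run it.

```python
import random

# Performance sanity check on branch prediction. The trace format is
# invented for illustration: (predicted_taken, actually_taken) per branch.

def predictor_accuracy(trace: list[tuple[bool, bool]]) -> float:
    """Fraction of retired branches where the prediction matched reality."""
    return sum(p == a for p, a in trace) / len(trace)

def assert_not_a_coin_flip(trace, minimum: float = 0.90) -> None:
    acc = predictor_accuracy(trace)
    assert acc >= minimum, f"branch predictor accuracy {acc:.1%} < {minimum:.0%}"

# A predictor guessing at random hovers near 50% and fails the bar,
# despite the core being architecturally conformant.
coin_flip = [(random.random() < 0.5, random.random() < 0.5)
             for _ in range(10_000)]
try:
    assert_not_a_coin_flip(coin_flip)
except AssertionError as err:
    print(err)
```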
Safety-critical applications add another layer of complexity. For automotive or industrial uses, compliance with the ISO 26262 functional safety standard requires fault injection testing, error handling verification, and safety mechanism validation. Standard RISC-V conformance tests don’t address any of this—companies building processors for cars or medical devices must create entirely separate verification flows from scratch.
The Verification Toolkit: Multiple Approaches Required
Modern RISC-V verification combines multiple techniques, each addressing different aspects of the problem. Traditional simulation remains foundational but is too slow for comprehensive coverage: in a given time budget you might verify on the order of 10,000 test cases. Hardware-assisted verification (emulation) accelerates this by 100 to 1,000x, enabling millions of test cases. Post-silicon validation on actual fabricated chips can run billions of cases but happens too late to fix fundamental design flaws.
Formal verification—using mathematical proofs rather than test execution—has gained traction for specific problems. As Ashley Stevens notes:
“Formal is particularly strong for architectural compliance, ensuring ISA properties hold for all legal instruction sequences. But it’s not a replacement for dynamic verification, especially when full SoC behavior and real workloads are involved.”
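A toy example of that strength, using the z3 SMT solver's Python bindings (pip install z3-solver): rather than sampling operands, the solver proves an identity for all 2^64 input values at once. The property itself, that `add rd, rs, rs` and `slli rd, rs, 1` always agree, is chosen purely for illustration.

```python
from z3 import BitVec, prove  # pip install z3-solver

# Prove an ISA-level identity over every 64-bit operand value: adding a
# register to itself equals shifting it left by one. Simulation could
# only ever sample a vanishing fraction of these 2**64 inputs.
rs = BitVec("rs", 64)
prove(rs + rs == rs << 1)   # prints: proved
```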
Collaborative efforts within the CHIPS Alliance are helping to standardize these open-source verification tools to ensure broader industry adoption.
The verification industry is also exploring AI-driven approaches. William Wang, CEO of ChipAgents, sees opportunity:
“RISC-V is an excellent domain for applying agentic AI to verification. Processor designs are dominated by control-based signals, making them well-suited for formal reasoning and symbolic exploration. As RISC-V adoption grows, AI-driven formal approaches can significantly accelerate both conformance and implementation verification.”
What Needs to Happen Next
The RISC-V ecosystem needs several things to mature beyond the current fragmentation risk:
Standardized interface specifications beyond the core ISA. Currently, RISC-V only standardizes the instruction set itself. How processors connect to memory systems, interconnects, and peripherals remains largely undefined. Ashley Stevens argues for “a RISC-V-focused subset of streamlined interface standards aligned with real-world use cases. Such standardization would significantly reduce redundant verification work and improve multi-vendor interoperability.”
Expanded conformance test coverage for privileged operations. The existing open-source test suites handle basic instruction execution well but struggle with complex operating system-level features. As Frank Schirrmeister at Synopsys notes, this is critical for “proving whether the core interprets the ISA architecturally so that software can run across different cores.”
Performance and power verification frameworks. The industry needs standardized ways to measure and compare RISC-V implementations on metrics that matter for real applications—not just functional correctness but actual performance, power consumption, and thermal characteristics under realistic workloads.
Better tooling for coverage analysis. Verification teams need unified dashboards that combine coverage from simulation, emulation, formal verification, and post-silicon testing—with traceability back to specific specification requirements. Currently, this integration requires significant manual effort.
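At its simplest, that unification is a merge with traceability back to requirement IDs; the sketch below uses invented data shapes, where real flows merge coverage databases in formats such as Accellera's UCIS.

```python
# Sketch of merging coverage across engines with spec traceability.
# Requirement IDs and per-engine sets are invented for illustration.

simulation = {"REQ-ADD-001", "REQ-LW-003"}
emulation  = {"REQ-LW-003", "REQ-CSR-017"}
formal     = {"REQ-CSR-017", "REQ-FENCE-002"}

required = {"REQ-ADD-001", "REQ-LW-003", "REQ-CSR-017",
            "REQ-FENCE-002", "REQ-AMO-009"}

merged = simulation | emulation | formal
print(f"coverage: {len(merged & required)}/{len(required)} requirements")
print("uncovered:", sorted(required - merged))   # -> ['REQ-AMO-009']
```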
The verification challenge isn’t unique to RISC-V, but the open nature of the architecture makes it more visible and more urgent. If the industry doesn’t solve conformance and compatibility soon, we risk repeating past mistakes where “standard” became meaningless and software developers abandoned the platform due to fragmentation. The technical capabilities exist to address this—what’s needed now is coordination and standardization across the ecosystem before incompatibility becomes entrenched.
FAQ
Q: How is RISC-V conformance different from what Intel or Arm do?
A: Intel and Arm control their architectures centrally—they define the spec, build reference implementations, and can enforce conformance through licensing. If you build an x86 chip, Intel’s processors are the de facto standard you must match. RISC-V is open-source with no single reference implementation, so “conformance” means matching a written specification rather than matching a specific chip. This is both liberating (enables innovation) and challenging (creates ambiguity about what “correct” looks like).
Q: Can’t formal verification just mathematically prove a chip is correct?
A: In theory, yes. In practice, it’s computationally intractable for complex modern processors. Formal methods excel at proving specific properties (like “this cache coherency protocol never loses data”) but struggle with full-system verification involving billions of transistors, complex software interactions, and performance characteristics. The industry uses formal verification for targeted corner cases while relying on simulation, emulation, and silicon testing for comprehensive validation. It’s a complementary tool, not a complete solution.
Q: Why does it matter if RISC-V chips are incompatible as long as they work?
A: Software ecosystem health depends on portability. If developers must test and debug their code separately for every RISC-V chip variant, they’ll avoid the platform entirely—it’s too expensive and time-consuming. This is exactly what happened with fragmented Android devices in the 2010s, where “Android compatible” became meaningless and app developers struggled with thousands of device-specific bugs. For RISC-V to succeed as a mainstream architecture, software written for one conforming chip must run reliably on any other conforming chip. Without that guarantee, the ecosystem can’t scale beyond niche applications.