WireUnwired Research • Key Insights
- Design and verification are distinct jobs: for dedicated DV engineers, 100% of the work is verification, while many designers spend a significant but smaller share on module-level tests.
- Verification dominates project time and cost, not because tools are primitive, but because state spaces are huge and many core tasks resist easy parallelization.
- Large chip companies lean on mature commercial EDA stacks and in-house CAD automation; the real pain is in debug, test generation, and foundry-specific scripting, not button-clicking GUIs.
- Developers see opportunities for AI in triaging failures and debugging, but expect verification to remain the main chokepoint in advanced chip design.
- Foundry coordination is tedious rather than glamorous: engineers write tech-node and foundry-specific scripts to align flows, checks, and deliverables.
Intro: The Myth of “70% Verification”
Chip designers are tired of hearing that “70% of their time” goes into verification. Discussions in the developer community paint a sharper picture: verification is its own job, and in serious silicon teams, it consumes entire careers, not leftover hours.
Digital verification engineers live in the trenches. For them, 100% of their time is verification. Designers, by contrast, may spend zero time on full testbenches in large organizations, while still burning a substantial slice of their week on module-level tests and debug.
The obsession with the 70% number misses the point. The truth is harsher and more interesting: most of the schedule and cost of a modern chip is swallowed by proving it won’t fail in the field.
Analysis: Why Verification Feels Like a Brick Wall

Verification isn’t a “hellscape” because tools are primitive. It’s brutal because the math is unforgiving.
- State space is enormous. Engineers describe a landscape where the number of possible behaviors explodes, while controllability and observability of internal signals stay limited. You can’t brute-force your way through all scenarios, no matter how many simulations you throw at the design.
- Test generation is the grind. A huge share of DV effort revolves around generating and refining test vectors that probe this vast space efficiently. Teams simulate architectural behavior, compare it against microarchitectural simulations, and hunt down mismatches or assertion failures that surface as signatures.
- Debug is human-heavy. When a failure signature appears, DV engineers investigate, map it to a known bug, or file a new one. This is where experience matters. It is also where naive “just throw AI at it” thinking tends to crash.
- Parallelism hits limits. Many of the key algorithms and tasks in pre-silicon verification are not inherently parallelizable. That makes simple scaling with more cores, more simulators, or more cloud capacity far less effective than outsiders expect.
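The scale argument above can be made concrete with back-of-envelope arithmetic. The sketch below uses purely illustrative numbers (flop count, simulator speed, farm size are assumptions, not measurements from any real design) to show why even a massive simulation farm covers a vanishing fraction of a block’s state space:

```python
# Back-of-envelope: why brute-force simulation cannot cover the state space.
# All numbers are illustrative assumptions, not data from a real design.

FLOPS = 1_000                 # state elements in a modest RTL block (assumed)
SIM_CYCLES_PER_SEC = 10_000   # cycles/s for one simulator instance (assumed)
SIMULATORS = 100_000          # size of a parallel simulation farm (assumed)
SECONDS_PER_YEAR = 365 * 24 * 3600

# Upper bound on distinct states: each flop doubles the state space.
states = 2 ** FLOPS

# Total cycles the whole farm can execute in a year.
cycles_per_year = SIM_CYCLES_PER_SEC * SIMULATORS * SECONDS_PER_YEAR

# Even if every single cycle visited a brand-new state (it won't),
# the fraction of the state space touched is effectively zero.
fraction_covered = cycles_per_year / states

print(f"state-space upper bound ~ 2^{FLOPS}")
print(f"farm cycles per year    ~ {cycles_per_year:.2e}")
print(f"fraction reachable      ~ {fraction_covered:.1e}")
```

The point is not the exact numbers but the shape of the math: the state space grows exponentially with flop count, while simulation throughput grows only linearly with cores, which is why DV effort goes into steering stimulus rather than enumerating it.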
From the outside, the commercial tool stack can look dated. Developers coming from modern software ecosystems see Cadence and Synopsys flows and assume they can be rebuilt as slick, cloud-native platforms. But inside major chip companies, engineers say something very different:
- Tools aren’t the bottleneck. For large organizations, license cost is a line item, not an existential threat. They are not arguing about whether Cadence or Synopsys “live up to the price tag” day to day; they’re focused on coverage, debug speed, and schedule risk.
- CAD teams automate the boring parts. Anything that is truly repetitive and scriptable is usually handled by internal CAD (computer-aided design) engineers. Flow glue, report munging, regression orchestration, lint and DRC wrappers – most of that already lives in in-house scripts.
- Module test still eats time. Even when verification is a separate discipline, many designers spend a nontrivial portion of their job writing Verilog testbenches for modules and subsystems. One engineer pegs their own split at roughly half design, half testbench code.
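The designer-side checking loop described in that last bullet has a simple shape, regardless of language: drive stimulus, compare the module under test against a golden reference, count mismatches. The sketch below uses Python as a stand-in for a Verilog testbench, with an invented saturating-adder spec; in a real flow the `dut_*` function would wrap a simulator call, not a Python expression:

```python
import random

def ref_saturating_add(a: int, b: int, width: int = 8) -> int:
    """Golden reference model: unsigned saturating add (illustrative spec)."""
    return min(a + b, (1 << width) - 1)

def dut_saturating_add(a: int, b: int, width: int = 8) -> int:
    """Stand-in for the RTL module under test; imagine this wrapping
    a simulator invocation rather than a one-line Python expression."""
    return min(a + b, (1 << width) - 1)

def run_module_tests(n_vectors: int = 1_000, seed: int = 0) -> int:
    """Drive random stimulus, compare DUT against reference, return mismatches."""
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    mismatches = 0
    for _ in range(n_vectors):
        a, b = rng.randrange(256), rng.randrange(256)
        if dut_saturating_add(a, b) != ref_saturating_add(a, b):
            mismatches += 1
    return mismatches

print("mismatches:", run_module_tests())
```

Even this toy loop shows where the time goes: the reference model, the stimulus constraints, and the mismatch debug are all handwritten, which is why “half design, half testbench” splits are common.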
The net result: the low-hanging automation fruit has largely been picked inside serious chip houses. What remains are the hard problems: coverage closure, failure triage, and root-cause analysis across millions of lines of RTL and testbench code.
Where AI Fits – and Where It Doesn’t (Yet)
Engineers are not dismissing AI outright. On the contrary, they are already pointing to concrete niches:
- Signature triage. When a regression farm produces a flood of failing tests, AI could help cluster signatures, correlate them with known bugs, and route them to the right owners faster.
- Assisted debug. Pattern-matching across prior fixes, waveform patterns, and assertion histories could suggest likely failure modes or suspicious blocks.
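The signature-triage idea above does not require deep learning to be useful; even crude normalization collapses a flood of failures into a handful of buckets. The sketch below is a minimal illustration, with an invented log format and normalization rules (hex addresses and numerals masked out) standing in for whatever a real regression farm emits:

```python
import re
from collections import defaultdict

def normalize(signature: str) -> str:
    """Collapse run-specific details (hex addresses, cycle counts, seeds)
    so failures with the same root cause share one key.
    The patterns here are illustrative, not a real log format."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<ADDR>", signature)
    sig = re.sub(r"\b\d+\b", "<N>", sig)
    return sig.strip().lower()

def cluster_failures(failures):
    """Group raw failure messages by their normalized signature."""
    clusters = defaultdict(list)
    for msg in failures:
        clusters[normalize(msg)].append(msg)
    return dict(clusters)

# Hypothetical regression output.
logs = [
    "Assertion fifo_ovf failed at cycle 10234 addr 0xdeadbeef",
    "Assertion fifo_ovf failed at cycle 998 addr 0x1000",
    "Scoreboard mismatch: exp 42 got 17",
]
clusters = cluster_failures(logs)
for key, members in clusters.items():
    print(f"{len(members)}x {key}")
```

An AI-assisted version would replace `normalize` with learned similarity over messages, waveforms, and stack traces, but the triage workflow, cluster first, then route to an owner, stays the same.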
But the community is wary of software developers who see hardware verification as “unit testing, but worse” and assume that a clever new framework will magically fix it. The cautionary stories are blunt:
Bright software engineers arrive convinced they’ll “revolutionize” pre-silicon verification, then discover after a couple of tapeouts that the constraints are physical, mathematical, and organizational – not just tooling.
Toy frameworks and experimental Python-based environments often get abandoned, leaving behind clutter that seasoned DV teams quietly delete. The message is clear: if you want to improve verification, you have to respect its complexity, not abstract it away with buzzwords.
Foundry Coordination: The Unsexy Pain Point
On the back end, foundry coordination is less about existential drama and more about chronic friction.
Each foundry, and often each process node, comes with its own quirks: rule decks, file formats, checklists, and signoff expectations. Engineers report writing scripts that are both foundry-specific and tech-node-specific just to keep flows consistent and deliverables clean.
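That kind of foundry- and node-specific glue often reduces to a lookup that resolves the right decks and checks for a given target, and fails loudly on unsupported combinations. The sketch below is purely illustrative: the foundry names, deck paths, and check names are hypothetical placeholders, not real deliverables:

```python
# Sketch of foundry/node-specific "flow glue". All names, paths, and
# options below are hypothetical placeholders, not real foundry data.

FLOW_CONFIG = {
    ("foundry_a", "5nm"): {
        "drc_deck": "decks/foundry_a/5nm/drc.rules",
        "lvs_deck": "decks/foundry_a/5nm/lvs.rules",
        "extra_checks": ["double_patterning"],
    },
    ("foundry_b", "7nm"): {
        "drc_deck": "decks/foundry_b/7nm/drc.rules",
        "lvs_deck": "decks/foundry_b/7nm/lvs.rules",
        "extra_checks": [],
    },
}

def signoff_plan(foundry: str, node: str) -> dict:
    """Resolve the per-foundry, per-node deck set, failing loudly on an
    unsupported combination instead of silently running wrong decks."""
    try:
        return FLOW_CONFIG[(foundry, node)]
    except KeyError:
        supported = ", ".join(f"{f}/{n}" for f, n in FLOW_CONFIG)
        raise ValueError(f"No flow for {foundry}/{node}; supported: {supported}")

plan = signoff_plan("foundry_a", "5nm")
print(plan["drc_deck"])
```

The interesting engineering is in keeping such tables correct across PDK updates, which is exactly the “institutional memory” work the article describes.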
For big players, this is a solved-but-annoying problem. It is part CAD, part project management, part institutional memory. For smaller teams or new entrants, those differences can feel like “coordination nightmares,” but internally, veterans treat it as another layer of automation and scripting, not as an unsolved systems problem.
Community Sentiment: Respect the Craft
Strip away the sarcasm, and the message from experienced chip engineers is consistent.
- Verification is not an afterthought. It is the main driver of schedule risk and cost for serious chips. DV engineers are specialists, not designers moonlighting on testbenches.
- Designers still verify – differently. Even when formal DV teams exist, designers spend significant time on module tests, bring-up, and debug. That work is verification in everything but job title.
- EDA stacks are entrenched for a reason. Cadence and Synopsys are not loved, but they are deeply integrated, heavily automated, and battle-tested. New tools must interoperate and add clear value, not just prettier UIs.
- AI has a role, not a revolution. Engineers see plausible wins in triage and debug assistance. They do not expect AI to erase verification as the chokepoint of chip development.
For software developers eyeing this space, the opportunity is real but narrow. The leverage points are where human attention is burned today: smarter failure clustering, better coverage visualization, automated constraint generation, and more ergonomic interfaces to existing flows.
But the bar is high. Any new tool must slot into established Cadence/Synopsys-based flows, respect the realities of limited observability and massive state spaces, and prove it can survive more than one tapeout cycle.
If you’re tracking how verification, AI, and EDA tooling are colliding, and you want to compare notes with other engineers, consider joining WireUnwired Research on WhatsApp or LinkedIn – it’s where we see these war stories and experiments first, long before they hit conference decks.