Executive Summary
For production cryptographic software, memory safety alone does not define security. Real-world crypto must run on every platform, maintain stable assumptions over decades, and allow explicit control over hardware behavior. While memory-safe languages like Rust offer real benefits, serious cryptographic implementations inevitably rely on unsafe code, assembly, and low-level control, eroding those guarantees. At that point, the added abstraction often increases complexity without meaningfully reducing risk. For mature, heavily validated crypto libraries, process, testing, and time-in-field matter far more than the implementation language.
The Core Problem: Abstractions vs. Cryptography
Modern security discussions often overemphasize memory safety. In mature cryptographic libraries, memory bugs are no longer the dominant risk.
Instead, cryptography is constrained by:
- Extreme portability requirements
- Long-term ecosystem stability
- Explicit control over memory layout and execution
- Predictable performance and auditability
These constraints conflict with rapidly evolving language ecosystems and high-level abstractions.
Why C Fits Cryptography’s Reality
Portability and Reach
Cryptographic libraries must run everywhere:
bare metal, embedded systems, legacy RTOSes, constrained IoT devices, general-purpose OSes, and regulated environments with frozen toolchains.
At wolfSSL, we target C89 deliberately to maximize portability and predictability. Memory-safe alternatives do not offer comparable reach without significant tradeoffs.
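As an illustration of what C89-level portability looks like in practice, here is a constant-time comparison routine written with nothing beyond the base language: block-scope declarations, `/* */` comments, no compiler intrinsics, no OS headers. This is a generic sketch, not wolfSSL's actual implementation, but code in this style builds unmodified on bare metal, legacy RTOSes, and frozen toolchains alike.

```c
#include <stddef.h>

/* Constant-time byte comparison: illustrative sketch only, not
 * wolfSSL code. Strict C89 (declarations at block top, no '//'
 * comments), so it compiles on essentially any conforming
 * toolchain, however old or constrained. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        diff |= (unsigned char)(a[i] ^ b[i]);
    }
    return (int)diff; /* 0 when the buffers are equal */
}
```

Note that the loop always runs to `len` regardless of where the first mismatch occurs, so the comparison time does not leak the mismatch position.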
Ecosystem Stability
C’s ecosystem evolves slowly, which is a feature—not a flaw.
Cryptographic code depends on assumptions that must remain valid for decades. High-tempo changes in language semantics, compiler behavior, or idioms introduce risk. When assumptions break in crypto, the result is often a CVE, not a refactor.
Unsafe Is Inevitable
High-performance cryptography requires:
- Precise memory layout
- Predictable calling conventions
- Tight control over inlining and execution
- Direct use of assembly
Rust can support this—but only through extensive use of unsafe, inline assembly, volatile operations, and compiler fences. Once a crypto codebase reaches that point, it has largely opted out of the language’s safety model.
This is visible in today’s strongest Rust crypto libraries: they rely heavily on assembly and unsafe, recreating C-like semantics behind additional abstraction layers.
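The volatile operations mentioned above are a concrete example of this low-level control. A hedged sketch (names hypothetical, not from any particular library): wiping key material after use requires stores the optimizer cannot prove dead, because a plain `memset` before `free` is a textbook candidate for dead-store elimination. Writing through a `volatile`-qualified pointer is one portable way to force the stores to happen; real libraries often go further with compiler fences or platform primitives such as `explicit_bzero` or `SecureZeroMemory`.

```c
#include <stddef.h>

/* Hypothetical secure-wipe sketch. Each store goes through a
 * volatile-qualified pointer, so the compiler must treat it as
 * an observable side effect and cannot elide it as dead, the
 * way it legally could a final memset before deallocation. */
void secure_wipe(void *buf, size_t len)
{
    volatile unsigned char *p = (volatile unsigned char *)buf;
    size_t i;

    for (i = 0; i < len; i++) {
        p[i] = 0;
    }
}
```

The same pattern exists in Rust via `core::ptr::write_volatile`, which is exactly the kind of escape hatch the paragraph above describes: the safety model steps aside and the programmer reasons about the hardware directly.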
Memory Safety ≠ Security
Memory-safe languages eliminate certain bug classes—but they do not guarantee:
- Correct specifications
- Correct error handling
- Correct API usage
- Absence of vulnerabilities
Even in memory-safe languages, logic errors and exception-handling mistakes remain common. Features like unwrap() don’t cause memory corruption, but they can introduce panics or denial-of-service paths in security-critical code.
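C's answer to this problem is explicit status codes checked at every call site, which carries its own discipline burden but makes failure paths visible in review. A minimal sketch of the convention (all names here are hypothetical, not a real API): arguments are validated up front and a distinct error code is returned rather than aborting, so a caller can recover instead of taking down the process the way an unchecked panic would.

```c
#include <stddef.h>

/* Hypothetical status codes and key-derivation stub, shown only
 * to illustrate the check-every-return-code convention; the
 * "derivation" is a trivial XOR placeholder, not real crypto. */
#define CRYPTO_OK        0
#define CRYPTO_BAD_ARG (-1)

int derive_key(const unsigned char *secret, size_t secret_len,
               unsigned char *out, size_t out_len)
{
    size_t i;

    /* Validate every argument before touching memory: failures
     * become recoverable error codes, not crashes. */
    if (secret == NULL || out == NULL || secret_len == 0 || out_len == 0) {
        return CRYPTO_BAD_ARG;
    }
    for (i = 0; i < out_len; i++) {
        out[i] = (unsigned char)(secret[i % secret_len] ^ (unsigned char)i);
    }
    return CRYPTO_OK;
}
```

Callers are expected to propagate any non-`CRYPTO_OK` result upward; tooling such as static analyzers can then flag ignored return values.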
No language replaces discipline, review, and testing.
What Actually Makes Crypto Safe
Security in production cryptography comes from process, not language choice:
- Static and dynamic analysis
- Continuous fuzzing
- Expert code review
- Long-term customer usage
- Formal validation (e.g., FIPS 140)
- Regression testing against known attacks
This infrastructure already achieves mostly memory-safe outcomes in practice. Meaningful improvements come from better coverage and tooling, not wholesale rewrites.
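To make the regression-testing item concrete, here is the shape of a known-answer-test (KAT) harness. The "cipher" is a trivial XOR stand-in and the vectors are invented for illustration; a real suite runs published vectors (for example, NIST CAVP vectors) through the production primitives in exactly this pattern and fails the build on any mismatch.

```c
#include <string.h>

/* Toy transform standing in for a real primitive: XOR every
 * byte with a one-byte key. Illustrative only. */
static void toy_xor(unsigned char *buf, size_t len, unsigned char key)
{
    size_t i;

    for (i = 0; i < len; i++) {
        buf[i] ^= key;
    }
}

/* Known-answer test: run a fixed input through the transform and
 * compare against a precomputed expected output. Returns 1 on
 * pass, 0 on failure. Real harnesses loop over many vectors. */
static int run_kat(void)
{
    unsigned char msg[3]      = { 0x00, 0x01, 0x02 };
    unsigned char expected[3] = { 0x55, 0x54, 0x57 };

    toy_xor(msg, sizeof(msg), 0x55);
    return memcmp(msg, expected, sizeof(msg)) == 0;
}
```

Because the expected outputs are fixed, the same harness doubles as a regression net: any change in compiler, flags, or platform that alters the primitive's output is caught immediately.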
A Note on Side Channels
Side-channel resistance is difficult in any language.
What is true is that as abstractions obscure execution behavior, side-channel properties become harder to reason about and harder to verify at scale. Rust provides tools to mitigate this—but using them extensively means abandoning most of its safety abstractions, bringing the problem space back toward C.
The issue is not impossibility—it’s verifiability without abstraction collapse.
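The standard mitigation in any language is branch-free, mask-based code. A sketch of a constant-time select (generic technique, not taken from any particular library): the condition is expanded into an all-zeros or all-ones mask, so the result is computed without a data-dependent branch the CPU could leak through.

```c
/* Constant-time select: returns a when (mask & 1) == 1, b when
 * it is 0, using an arithmetic mask instead of a branch. Note
 * the caveat from the text: an optimizer is free to rewrite
 * this into a branch, which is why such code must be verified
 * on the actual compiled output, not just the source. */
unsigned int ct_select(unsigned int mask, unsigned int a, unsigned int b)
{
    unsigned int m = (unsigned int)0 - (mask & 1u); /* 0 or all ones */

    return (a & m) | (b & ~m);
}
```

Verifying that the compiler preserved the branch-free form, across every target and optimization level, is exactly the "verifiability" problem the paragraph above describes.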
The Bottom Line
For production cryptographic software, several requirements are non-negotiable:
- Universal platform support
- Stable assumptions
- Explicit hardware control
- Auditability
- Predictable toolchains
C provides these directly.
Memory-safe languages offer useful tools—but in cryptography, they often reintroduce C-like semantics with additional complexity. For battle-tested libraries, the language matters far less than the rigor behind the implementation.
Q&A: Cryptographic Software Security Practices
Q1: When is a low-level C crypto component “secure enough” to ship?
We require standard test vectors and testing on common platforms, but we ship early to get real customer usage. Trust is earned through time-in-field, FIPS 140 CAVP testing, extensive fuzzing and stress testing, and continuous static analysis. We also track cryptographic research and collaborate with researchers to address subtle issues.
Q2: What residual risks still concern you in mature crypto code?
The primary concern is side-channel leakage. Compiler optimizations and hardware or microcode updates can subtly alter timing or power behavior. These risks require ongoing awareness and, in some cases, empirical validation on representative targets.
Q3: If time and staffing were unlimited, what additional assurance work would you do?
We would add ground-truth constant-time verification on representative hardware as part of nightly testing, improve coverage requirements, and invest in better tooling for side-channel analysis, including power and current measurements. This is an area where broader community and academic collaboration would help.
Q4: Was there a turning point where wolfCrypt became “trusted”?
Trust develops gradually, not at a single moment. It grows as code is deployed by many customers without issues, as comprehensive tests mature, and as long-running fuzzing and validation uncover fewer surprises. Longevity and usage are as important as design.
If you have questions about any of the above, please contact us at facts@wolfssl.com or call us at +1 425 245 8247.
Download wolfSSL Now

