Commit 18973dd

Update of 69 after review
1 parent dd42292 commit 18973dd

File tree

1 file changed (+1, -1 lines)


content/newsletter/69.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Let's be honest, when AWS, Microsoft, Google, and Meta all doubled down on Rust,
 
 **The AWS Effect**: Amazon's <a href="https://firecracker-microvm.github.io/" target="_blank">Firecracker microVMs written in Rust</a> continue to prove Rust's enterprise readiness. When you're managing millions of serverless functions, memory safety isn't just nice to have, it's mission-critical.
 
-**Rust at Microsoft – Security-Driven Push** (May 2025) At Rust Nation UK, Microsoft ,<a href="https://www.infoq.com/news/2025/05/microsoft-cto-rust-commitment/?utm_source=chatgpt.com" target="_blank">Azure CTO Mark Russinovich detailed why Microsoft is scaling Rust adoption</a>. A decade of data showed that 70 percent of security vulnerabilities originated from unsafe memory usage in C/C++ code. Rust’s ownership model and memory safety directly address this risk. Microsoft is accelerating its migration from vulnerable C/C++ to safer Rust, especially for security-critical infrastructure.
+**Rust at Microsoft – Security-Driven Push** (May 2025) At Rust Nation UK, Microsoft, <a href="https://www.infoq.com/news/2025/05/microsoft-cto-rust-commitment/?utm_source=chatgpt.com" target="_blank">Azure CTO Mark Russinovich detailed why Microsoft is scaling Rust adoption</a>. A decade of data showed that 70 percent of security vulnerabilities originated from unsafe memory usage in C/C++ code. Rust’s ownership model and memory safety directly address this risk. Microsoft is accelerating its migration from vulnerable C/C++ to safer Rust, especially for security-critical infrastructure.
 
 **Cloudflare Builds High-Performance AI Inference Engine in Rust** (August 2025)
 Cloudflare developed <a href="https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/" target="_blank">Infire, a custom LLM inference engine written in Rust</a>. Infire delivers faster inference (up to 7% faster than vLLM) with lower CPU overhead and more efficient GPU utilization. It now powers Llama 3.1 8B on Cloudflare’s edge network.
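The ownership guarantees the newsletter text credits for eliminating memory-safety bugs can be sketched minimally. The snippet below is an illustrative example, not code from the commit or from any of the projects mentioned; the function name `consume` is hypothetical.

```rust
// Minimal sketch of Rust's ownership rules: values have a single owner,
// and moves/borrows are checked at compile time, ruling out the
// use-after-free class of C/C++ bugs cited in the Microsoft data.

fn consume(data: Vec<u8>) -> usize {
    // `data` is moved in; this function now owns it and frees it on return.
    data.len()
}

fn main() {
    let buffer = vec![1u8, 2, 3];
    let len = consume(buffer);
    // `buffer` was moved into `consume`, so a use-after-free cannot compile:
    // println!("{:?}", buffer); // error[E0382]: borrow of moved value
    assert_eq!(len, 3);

    // Borrowing lends read access while the caller keeps ownership.
    let engines = vec![String::from("Firecracker"), String::from("Infire")];
    let first = &engines[0]; // shared borrow; `engines` cannot be mutated here
    assert_eq!(first, "Firecracker");
}
```

The point is that the commented-out line is rejected by the compiler, so the vulnerability class never reaches production, which is the mechanism behind the "70 percent of security vulnerabilities" statistic above.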
