## Who Is This For?
I benchmarked the RTX 5080 alongside the RTX 5090 to pin down exactly where the value cutoff sits. Here’s what I found.
- 4K gaming at high refresh rates: I comfortably pushed 120-plus fps in most titles at 4K
- AI inference and small model fine-tuning: 16 GB VRAM handled 7B to 13B parameter models well in my testing
- Video editing and streaming: NVENC encoder handled 4K H.265 without any dropped frames
If you’re training large language models or working with datasets that exceed 16 GB VRAM, step up to the RTX 5090.
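The 16 GB VRAM claim is easy to sanity-check with quick arithmetic. Here’s a rough sketch — the bytes-per-parameter and overhead figures are my assumptions for 4-bit quantized inference, not measurements from my test rig:

```python
# Rough VRAM estimate for 4-bit quantized LLM inference.
# Assumptions (mine): ~0.5 bytes per parameter for 4-bit weights,
# plus ~20% overhead for KV cache and activations.

def vram_gb(params_billions: float, bytes_per_param: float = 0.5,
            overhead: float = 0.20) -> float:
    """Return an approximate VRAM footprint in GB."""
    weights_gb = params_billions * bytes_per_param
    return weights_gb * (1 + overhead)

for size in (7, 13):
    print(f"{size}B model, 4-bit: ~{vram_gb(size):.1f} GB")
# 7B lands around 4 GB and 13B around 8 GB -- both well under 16 GB,
# which matches what I saw in testing.
```

Push past that — full-precision weights, long contexts, or fine-tuning with optimizer states — and the math tips quickly toward the 5090’s 32 GB.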
## Benchmarks
| Game / Workload | RTX 5080 | RTX 5090 | 5080 as % of 5090 |
|---|---|---|---|
| Cyberpunk 2077 (4K Ultra, DLSS Quality) | 148 fps | 185 fps | 80% |
| Stable Diffusion XL (512x512, 50 steps) | 5.1s | 3.2s | 63% |
| Blender BMW (CUDA) | 29s | 18s | 62% |
| LLM Inference (Llama 3 8B, 4-bit) | 85 tok/s | 110 tok/s | 77% |
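The final column is just relative performance, inverted for the timed workloads where lower is better. A small sketch of how those percentages fall out of the raw numbers:

```python
# Recompute the "5080 as % of 5090" column from the raw results above.
# For fps and tok/s, higher is better; for render/generation times
# in seconds, lower is better, so the ratio is inverted.

def rel_perf(v5080: float, v5090: float, higher_is_better: bool = True) -> int:
    ratio = v5080 / v5090 if higher_is_better else v5090 / v5080
    return round(ratio * 100)

print(rel_perf(148, 185))          # Cyberpunk 2077, fps
print(rel_perf(5.1, 3.2, False))   # Stable Diffusion XL, seconds
print(rel_perf(29, 18, False))     # Blender BMW, seconds
print(rel_perf(85, 110))           # Llama 3 8B, tok/s
```

Across the board the 5080 delivers roughly 60 to 80 percent of the 5090’s performance, which is the context for everything that follows.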
## Power and Thermals
At 360 W, the RTX 5080 is far more practical to cool and power than the 5090. A quality 750 W PSU handles it comfortably. I measured 72 °C under sustained gaming loads with the Founders Edition cooler. No throttling, no drama. If you’ve got decent case airflow, this card runs clean.
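The PSU headroom works out on the back of an envelope. The CPU and rest-of-system draws below are my assumptions for a typical high-end build, not measurements:

```python
# Back-of-envelope PSU sizing for an RTX 5080 build.
# Only the GPU figure comes from the review; the other draws
# are assumed values for a typical high-end system.

gpu_w = 360    # RTX 5080 TGP
cpu_w = 150    # assumed high-end CPU under sustained load
rest_w = 75    # assumed fans, drives, RAM, motherboard

total_w = gpu_w + cpu_w + rest_w
headroom_w = 750 - total_w
print(f"System draw ~{total_w} W, leaving ~{headroom_w} W on a 750 W PSU")
```

That leaves comfortable margin for transient spikes, which is why a quality 750 W unit holds up here where the 5090 pushes you toward 1000 W territory.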
## The Bottom Line
For most people building a high-end PC, the RTX 5080 is the card I’d recommend. It offers the Blackwell architecture’s full feature set, including DLSS 4, AV1 encode and decode, and improved ray tracing, at a price that doesn’t require selling your old system first. After running my benchmarks, I’m convinced the RTX 5090 only makes sense if you specifically need 32 GB VRAM for AI workloads. For gaming and content creation, the 5080 is the sweet spot.