Performance

Benchmark Catalog

Transparent, reproducible benchmarks across single-node, cluster, and heterogeneous GPU configurations.

Single Node

Single-Node Inference — Moreh vLLM

Per-server inference performance powered by Moreh vLLM.

DeepSeek R1 671B | 8× AMD MI300X | Output TPS geomean: Moreh vLLM 1.68× vs ROCm vLLM 1.0× [Technical Report]

DeepSeek R1 671B | 8× AMD MI300X | TTFT (lower is better): Moreh vLLM 0.57× vs ROCm vLLM 1.0× [Technical Report]

Llama 3.3 70B | 2× AMD MI300X | Output TPS geomean: Moreh vLLM 1.74× vs ROCm vLLM 1.0× [Technical Report]

Llama 3.3 70B | 2× AMD MI300X | TTFT (lower is better): Moreh vLLM 0.50× vs ROCm vLLM 1.0× [Technical Report]

Step3 321B | 8× AMD MI308X | Decode TPS: Moreh vLLM 4,082 vs NVIDIA H20 baseline 3,147 [Customer Case]

InferenceMAX DeepSeek R1 0528 | 8× AMD MI300X | Throughput geomean: Moreh vLLM 1.47× vs SGLang 1.0× [Blog Post]

InferenceMAX DeepSeek R1 0528 | 8× AMD MI300X | E2E latency geomean (lower is better): Moreh vLLM 0.68× vs SGLang 1.0× [Blog Post]
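Several of the metrics above are geometric means of per-workload speedup ratios. A quick sketch of why the geomean, rather than an arithmetic mean, is the right aggregator for ratios; the numbers here are illustrative, not the published results:

```python
import math

def geomean(ratios):
    """Geometric mean of per-workload speedup ratios.

    Ratios must be aggregated multiplicatively: a 2.0x win on one workload
    and a 0.5x regression on another should net out to exactly 1.0x, which
    the geometric mean gives and the arithmetic mean (1.25x) does not.
    (Python 3.8+ also ships statistics.geometric_mean.)
    """
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Illustrative per-workload speedups (hypothetical, not measured data).
print(f"{geomean([1.4, 1.9, 1.6]):.2f}x")  # aggregate speedup
print(f"{geomean([2.0, 0.5]):.2f}x")       # win and loss cancel: 1.00x
```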

Cluster

Cluster Inference — MoAI Inference Framework

Prefill/decode (PD) disaggregation, intelligent routing, and other optimizations at cluster scale.

DeepSeek R1 671B | 5× AMD MI300X nodes | Output tok/s per decode node: PD disaggregation + expert parallelism (EP) 22,000+ [Docs]

DeepSeek R1 671B | 5× AMD MI300X nodes | End-to-end latency (lower is better): PD disaggregation 0.74× vs non-disaggregated 1.0× [Coming Soon]
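Prefill and decode stress the hardware differently: prefill is compute-bound (the whole prompt in one pass), decode is memory-bandwidth-bound (one token at a time). PD disaggregation runs the two phases on separate GPU pools and hands the KV cache from prefill to decode. A minimal sketch of that request flow, with hypothetical class and pool names, not the MoAI implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    kv_cache: dict = field(default_factory=dict)  # handed off prefill -> decode
    output: list = field(default_factory=list)

class PrefillNode:
    """Compute-bound stage: processes the full prompt once, builds the KV cache."""
    def prefill(self, req: Request) -> Request:
        req.kv_cache = {"tokens": req.prompt.split()}
        return req

class DecodeNode:
    """Bandwidth-bound stage: generates tokens one at a time against the cache."""
    def decode(self, req: Request, max_tokens: int = 3) -> Request:
        assert req.kv_cache, "decode needs the transferred KV cache"
        for i in range(max_tokens):
            req.output.append(f"tok{i}")
        return req

def serve(req, prefill_pool, decode_pool):
    # The two pools can be sized and scheduled independently, which is
    # where the latency and throughput wins at cluster scale come from.
    req = prefill_pool[0].prefill(req)
    return decode_pool[0].decode(req)

done = serve(Request("hello world"), [PrefillNode()], [DecodeNode()])
print(done.output)  # three placeholder tokens from the decode pool
```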

DeepSeek R1 671B | 2× vs 5× AMD MI300X nodes | Throughput: cache-aware routing (2 nodes) 2.2× vs naive routing (5 nodes) 1.0× [Docs]

DeepSeek R1 671B | 2× vs 5× AMD MI300X nodes | TTFT (lower is better): cache-aware routing (2 nodes) 0.03–0.05× vs naive routing (5 nodes) 1.0× [Docs]
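Cache-aware routing keeps requests that share a prompt prefix (a common system prompt, a long shared document) on the node whose KV cache already holds that prefix, so only the new suffix needs prefill. A minimal sketch of one possible policy; the hashing scheme and node names are hypothetical, not the MoAI router:

```python
import hashlib

class CacheAwareRouter:
    """Route requests sharing a prefix to the node with that cache warm."""

    def __init__(self, nodes, prefix_len=32):
        self.nodes = nodes
        self.prefix_len = prefix_len
        self.prefix_owner = {}  # prefix digest -> node index

    def route(self, prompt: str) -> str:
        digest = hashlib.sha256(prompt[: self.prefix_len].encode()).hexdigest()
        if digest not in self.prefix_owner:
            # First time this prefix is seen: pick a node by hash.
            self.prefix_owner[digest] = int(digest, 16) % len(self.nodes)
        return self.nodes[self.prefix_owner[digest]]

router = CacheAwareRouter(nodes=["node-a", "node-b"])
shared = "You are a helpful assistant. " * 4  # common system prompt
# Both requests share the hashed prefix, so both land on the same node
# and the second skips prefill for the cached portion.
print(router.route(shared + "Summarize document A."))
print(router.route(shared + "Summarize document B."))
```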

Heterogeneous

Heterogeneous GPU Integration

Higher throughput and lower latency by orchestrating GPUs across vendors and generations.

GPT-OSS 120B | H100 + AMD MI300X | Throughput: cross-vendor PD disaggregation 1.7× vs same-vendor PD disaggregation 1.0× [Coming Soon]

DeepSeek R1 671B | AMD MI300X + MI308X | Throughput: PD disaggregation 1.53× vs load-balanced 1.0× [Blog Post]

GPT-OSS 120B | H100 + AMD MI250 | Throughput: speculative decoding 1.17× vs all-inference baseline 1.0× [Technical Report]
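In speculative decoding, a small draft model proposes several tokens that the large target model then verifies in a single forward pass; in a heterogeneous cluster the draft can live on older GPUs. A toy sketch of the accept/verify step, using token-level comparison and hypothetical token streams rather than the production sampler:

```python
def speculative_step(draft_tokens, target_tokens):
    """Accept the longest prefix where draft and target agree, then take
    the target's next token. One target pass verifies k draft tokens,
    so accepted drafts amortize the large model's cost."""
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)  # target's correction replaces the mismatch
            break
    else:
        # Every draft token accepted: bonus token from the target's extra position.
        if len(target_tokens) > len(draft_tokens):
            accepted.append(target_tokens[len(draft_tokens)])
    return accepted

# Draft proposes 4 tokens; target agrees on the first 3 and corrects the 4th.
print(speculative_step(["the", "cat", "sat", "on"],
                       ["the", "cat", "sat", "in", "a"]))
# -> ['the', 'cat', 'sat', 'in']
```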

GPT-OSS 120B | 4× AMD MI250 nodes | TTFT at 100K context (lower is better): multi-node prefill engine <2s vs single-node baseline ~9s [Blog Post]
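TTFT, used throughout this catalog, is the wall-clock time from request submission until the first streamed token arrives; at 100K context it is dominated by prefill, which is what a multi-node prefill engine parallelizes. A minimal client-side measurement sketch against a stand-in stream (the simulated delay is illustrative, not a real endpoint):

```python
import time

def measure_ttft(token_stream):
    """Time-to-first-token: start the clock, block until the first token
    arrives. Subsequent tokens determine decode TPS, not TTFT."""
    start = time.perf_counter()
    first = next(iter(token_stream))
    return first, time.perf_counter() - start

def fake_stream(prefill_delay_s=0.05):
    # Stand-in for a streaming inference response; the prefill phase
    # accounts for the delay before the first token.
    time.sleep(prefill_delay_s)
    yield "Hello"
    yield " world"

tok, ttft = measure_ttft(fake_stream())
print(f"first token {tok!r} after {ttft * 1000:.0f} ms")
```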