Blog

Moreh Unlocks AMD MI300X Potential: 1.5× Faster DeepSeek R1 Inference vs. SGLang (InferenceMAX)
Moreh's optimized inference engine achieves a 1.47× improvement in end-to-end latency and throughput per GPU for DeepSeek R1 on AMD MI300X, compared to the InferenceMAX baseline.

TIDE: Temporal Incremental Draft Engine for Self-Improving LLM Inference
TIDE continuously improves inference speed by training a lightweight draft model in the background, using idle GPUs in the cluster — no extra data preparation or downtime required.
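
For readers new to the technique, here is a minimal, self-contained sketch of the draft-and-verify loop that draft-model (speculative) decoding relies on. The toy models, sizes, and greedy acceptance rule are illustrative assumptions, not TIDE's actual implementation.

```python
# Toy speculative decoding: a small draft model proposes k tokens cheaply,
# and the large target model verifies them in a single forward pass.
import torch
import torch.nn as nn

VOCAB = 100

class ToyLM(nn.Module):
    """Stand-in language model: embeds each token and predicts the next."""
    def __init__(self, hidden):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, tokens):                  # tokens: (seq_len,)
        return self.head(self.embed(tokens))    # logits: (seq_len, VOCAB)

draft = ToyLM(hidden=16)    # small, fast "draft" model
target = ToyLM(hidden=256)  # large "target" model being accelerated

@torch.no_grad()
def speculative_step(prefix, k=4):
    """Draft k tokens autoregressively, then verify with one target pass."""
    tokens = prefix.clone()
    for _ in range(k):  # cheap autoregressive drafting
        next_tok = draft(tokens)[-1].argmax()
        tokens = torch.cat([tokens, next_tok.view(1)])
    # One target forward over the drafted span scores all k tokens at once.
    logits = target(tokens[:-1])
    accepted = prefix.clone()
    for i in range(len(prefix), len(tokens)):
        if logits[i - 1].argmax() == tokens[i]:  # greedy acceptance rule
            accepted = torch.cat([accepted, tokens[i].view(1)])
        else:
            # First mismatch: take the target's own token and stop.
            accepted = torch.cat([accepted, logits[i - 1].argmax().view(1)])
            break
    return accepted

prefix = torch.randint(0, VOCAB, (8,))
print(speculative_step(prefix))
```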

Step3 Inference Optimization on AMD Instinct MI308X: 1.30× Higher Decode Throughput vs. NVIDIA H20
Moreh optimized StepFun’s Step3 321B MoE model for AMD Instinct MI308X GPUs, achieving 1.30× higher decode throughput and 23% lower decode latency compared to NVIDIA H20.

Optimizing Long-Context Prefill on Multiple (Older-Generation) GPU Nodes
SLOPE Engine improves long-context prefill performance by applying context parallelism across multiple GPU servers. This also helps efficiently utilize older-generation GPUs.
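
As a rough illustration of the idea, the following single-process sketch shards a long prompt along the sequence axis and lets each simulated rank compute causal attention for its slice over the full context. The shapes and the simulated all-gather are assumptions for illustration, not the SLOPE Engine's actual design.

```python
# Toy context parallelism for prefill: the prompt is split along the
# sequence axis, and each "rank" attends its query shard over the full
# key/value set (gathered here with a cat instead of a collective op).
import torch
import torch.nn.functional as F

seq_len, d, world_size = 4096, 64, 4           # long prompt, 4 simulated ranks
q = torch.randn(seq_len, d)
k = torch.randn(seq_len, d)
v = torch.randn(seq_len, d)

shard = seq_len // world_size
# Each rank holds one contiguous slice of Q/K/V after sequence sharding.
q_shards, k_shards, v_shards = q.split(shard), k.split(shard), v.split(shard)

# Simulated all-gather: in a real multi-node setup this would be a
# collective (e.g., torch.distributed.all_gather) across GPU servers.
k_full = torch.cat(k_shards)
v_full = torch.cat(v_shards)

outputs = []
for rank in range(world_size):
    # Causal attention for this shard, with the mask offset by the
    # shard's absolute position in the sequence.
    q_r = q_shards[rank]
    scores = q_r @ k_full.T / d**0.5
    pos = torch.arange(rank * shard, (rank + 1) * shard).unsqueeze(1)
    mask = torch.arange(seq_len).unsqueeze(0) > pos
    scores = scores.masked_fill(mask, float("-inf"))
    outputs.append(F.softmax(scores, dim=-1) @ v_full)

out = torch.cat(outputs)                        # full prefill attention output
assert out.shape == (seq_len, d)
```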

Moreh-Tenstorrent AI Data Center Solution System Architecture
Moreh combines Tenstorrent's lightweight and scalable hardware with its proprietary software stack to deliver an efficient and flexible solution for large-scale AI data centers.

21K Output Tokens Per Second DeepSeek Inference on AMD Instinct MI300X GPUs with Expert Parallelism
Moreh demonstrated that DeepSeek-R1 inference can be executed at a decoding throughput of more than 21,000 tokens/sec by implementing expert parallelism (EP) on the ROCm software stack.
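
For context, here is a minimal single-process sketch of how expert parallelism partitions a MoE layer: each simulated rank owns a disjoint subset of experts and processes only the tokens routed to them. The sizes, top-1 routing, and local indexing stand in for real all-to-all collectives and are illustrative assumptions, not Moreh's ROCm implementation.

```python
# Toy expert parallelism: experts are partitioned across ranks, and each
# token is handled by whichever rank hosts its router-selected expert.
import torch
import torch.nn as nn

n_experts, d, n_tokens = 8, 32, 16
world_size = 4
experts_per_rank = n_experts // world_size     # rank r owns experts [2r, 2r+1]

experts = nn.ModuleList(nn.Linear(d, d) for _ in range(n_experts))
router = nn.Linear(d, n_experts)

with torch.no_grad():
    x = torch.randn(n_tokens, d)
    expert_ids = router(x).argmax(dim=-1)      # top-1 routing per token

    out = torch.zeros_like(x)
    for rank in range(world_size):
        owned = range(rank * experts_per_rank, (rank + 1) * experts_per_rank)
        for e in owned:
            sel = expert_ids == e              # tokens routed to expert e
            if sel.any():
                # In a real EP deployment these tokens arrive via an
                # all-to-all exchange between GPUs; here we index locally.
                out[sel] = experts[e](x[sel])

print(out.shape)  # torch.Size([16, 32])
```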

Runtime Draft Model Training: Adapting Speculative Decoding to Real-World Workloads
TIDE provides a method to optimize inference computation on newer GPUs by utilizing older or idle GPUs for runtime draft model training, resulting in better overall cost-performance at the system level.
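
A rough sketch of the background training step such a design implies: distilling the draft model toward logits recorded from the serving target model, on hardware that would otherwise sit idle. The toy model, data, and loss below are illustrative assumptions, not TIDE's actual training recipe.

```python
# Toy runtime distillation: match the draft model's next-token distribution
# to logits logged from the target model during live serving.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN = 100, 16

draft = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Linear(HIDDEN, VOCAB))
opt = torch.optim.AdamW(draft.parameters(), lr=1e-3)

def distill_step(tokens, target_logits):
    """One background training step, e.g., on an idle or older GPU."""
    logits = draft(tokens)                      # (seq_len, VOCAB)
    loss = F.kl_div(F.log_softmax(logits, -1),
                    F.softmax(target_logits, -1),
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stand-in for (token, logit) pairs recorded from the serving target model.
tokens = torch.randint(0, VOCAB, (32,))
target_logits = torch.randn(32, VOCAB)
print(distill_step(tokens, target_logits))
```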

Distributed Inference on Heterogeneous Accelerators Including GPUs, Rubin CPX, and AI Accelerators
The MoAI Inference Framework supports automatic and efficient distributed inference across heterogeneous accelerators such as AMD MI300X + MI308X and NVIDIA Rubin CPX + GPU.

Moreh vLLM Performance Evaluation: Llama 3.3 70B on AMD Instinct MI300X GPUs
Moreh vLLM achieves 1.68× higher output TPS, 2.02× lower TTFT, and 1.59× lower TPOT compared to the original vLLM for Meta's Llama 3.3 70B model.

Moreh vLLM Performance Evaluation: DeepSeek V3/R1 671B on AMD Instinct MI300X GPUs
Moreh vLLM achieves 1.68× higher output TPS, 1.75× lower TTFT, and 1.70× lower TPOT compared to the original vLLM for the DeepSeek V3/R1 671B model.

DeepSeek V3 and R1 on MoAI: 1. Fine-Tuning on AMD GPU Clusters
MoAI provides a PyTorch-compatible environment that makes it easy to fine-tune LLMs, including the DeepSeek 671B MoE model, on hundreds of AMD GPUs.

Introducing Motif: A High-Performance Open-Source Korean LLM by Moreh
Moreh announces the release of Motif, a high-performance 102B-parameter Korean large language model (LLM), which will be made available as open source.

Fine-tuning Llama 3.1 405B on AMD GPUs
There are no barriers to fine-tuning Llama 3.1 405B on the MoAI platform. The Moreh team has demonstrated fine-tuning the model on 192 AMD GPUs.

GPU Virtualization in the MoAI Platform
The MoAI platform provides comprehensive GPU virtualization including fine-grained resource allocation, multi-GPU scaling, and heterogeneous GPU support.

Training a 221B-Parameter Korean LLM on a 1,200-GPU AMD MI250 Cluster
Moreh trained the largest-ever Korean LLM, with 221B parameters, on the MoAI platform and a 1,200-GPU AMD MI250 cluster system.

KT’s Success Stories in AI Cloud Service and Large AI Model Training on AMD Instinct MI250 and Moreh AI Platform
KT collaborated with Moreh and AMD to overcome challenges in public cloud services and in-house AI model development.