Real-world training projects we successfully supported

Customer | Model | Training scale | Date
KT (Korea Telecom) | Mi:dm 221B LLM | Pretraining on 1,200 AMD Instinct MI250 GPUs | June 2023
Motif Technologies | Motif 102B LLM | Continual training on 640 AMD Instinct MI250 GPUs | December 2024
Motif Technologies | Motif 2.6B LLM | Pretraining on 384 AMD Instinct MI250 GPUs | June 2025
Motif Technologies | Motif Image 6B | Pretraining on 96 AMD Instinct MI250 GPUs | July 2025

Powerful models must start with powerful infrastructure software. Moreh brings that capability to your models.

2x

Lower Infrastructure Cost

Contrary to common belief, building a large-scale GPU cluster and training LLMs does not necessarily require NVIDIA’s latest high-end GPUs. With our software support, the same workloads can run on cost-effective hardware such as AMD GPUs, RoCE networking, and Tenstorrent accelerators. In addition, Moreh helps customers optimize their models and training pipelines with full awareness of GPU acceleration.
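As a minimal framework-level sketch of why vendor choice need not dictate the code (this is generic, stock PyTorch, not Moreh's own software): PyTorch's ROCm build exposes AMD GPUs through the same device interface as CUDA, so an unmodified training step runs on either vendor's hardware.

```python
# Minimal sketch (stock PyTorch, not Moreh's software): the same training
# step runs on NVIDIA or AMD GPUs, because PyTorch's ROCm build exposes
# AMD devices through the familiar "cuda" device interface.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device=device)  # dummy batch
loss = model(x).pow(2).mean()            # dummy objective
loss.backward()
optimizer.step()
print(f"one training step completed on {device}")
```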

10x

Faster LLM Development

Speed up every iteration of your model development cycle and build cutting-edge, high-performing AI models faster. Our software lets customers focus on high-level tasks such as model architecture and training algorithms, without dealing with the complexities of optimizing performance across many GPUs.
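For context, below is a hedged sketch of the generic PyTorch distributed-data-parallel boilerplate (process-group initialization, per-rank device binding, model wrapping) that teams otherwise write and maintain by hand. It is not Moreh's API; it only illustrates the kind of multi-GPU plumbing that infrastructure software can take off the developer's plate.

```python
# Generic PyTorch DDP boilerplate (not Moreh-specific): the distributed
# setup that model developers otherwise maintain themselves.
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Process-group setup and per-rank GPU binding.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model; in practice this is the LLM under development.
    model = nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # One dummy training step; gradients are all-reduced across ranks.
    x = torch.randn(8, 1024, device=f"cuda:{local_rank}")
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```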