MoAI Training Framework
Programmable AI infrastructure at scale
MoAI Training Framework unlocks a wide range of accelerators at massive scale and makes them easy to use for AI training.
Key Focuses
Automatic Clustering with Moreh AI Compiler
The graph compiler analyzes the training workload at runtime, determines the optimal parallelization and optimization strategy, and executes the workload across GPUs accordingly.
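The sketch below illustrates the general idea of such an automatic placement pass: given per-operation cost estimates, assign each operation to the currently least-loaded GPU. It is a minimal illustration under assumed names (`Op`, `plan_parallelization`) and a simple greedy cost-balancing heuristic; none of it is the actual Moreh AI Compiler algorithm or MoAI API.

```python
# Illustrative sketch only -- not the MoAI API or the Moreh AI Compiler.
# It mimics the idea of a compiler pass that inspects a model's operations
# and decides where each one should run.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    flops: float  # estimated cost used by the (hypothetical) planner

def plan_parallelization(ops, num_gpus):
    """Greedy cost balancing: assign each op to the least-loaded GPU so far."""
    load = [0.0] * num_gpus
    placement = {}
    for op in sorted(ops, key=lambda o: o.flops, reverse=True):
        gpu = load.index(min(load))
        placement[op.name] = gpu
        load[gpu] += op.flops
    return placement

ops = [Op("embed", 2.0), Op("attn_0", 8.0), Op("mlp_0", 6.0), Op("lm_head", 4.0)]
print(plan_parallelization(ops, num_gpus=2))
```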
Extensive GPU Operation Libraries
MoAI Training Framework is equipped with a rich set of libraries covering a wide range of tensor operations, from AI primitives to complex LLM-specific ones, ensuring portability and performance across various accelerators.
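As a rough illustration of how a multi-backend operation library can be organized, the sketch below registers a backend-specific kernel under an operation name and dispatches to it at call time. Everything here (`register_kernel`, `dispatch`, the "amd" backend tag, the pure-Python softmax) is a hypothetical stand-in, not MoAI's actual library interface.

```python
# Hypothetical sketch of a multi-backend operation registry; the names and
# structure are illustrative assumptions, not MoAI identifiers.
import math

KERNELS = {}

def register_kernel(op, backend):
    """Register a kernel implementation for (operation, backend)."""
    def wrap(fn):
        KERNELS[(op, backend)] = fn
        return fn
    return wrap

@register_kernel("softmax", "amd")
def softmax_amd(x):
    # A real library would call a vendor kernel here; plain Python for clarity.
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def dispatch(op, backend, *args):
    """Look up and run the kernel registered for this operation and backend."""
    return KERNELS[(op, backend)](*args)

print(dispatch("softmax", "amd", [1.0, 2.0, 3.0]))
```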
Non-NVIDIA Accelerator Support
MoAI Training Framework supports non-NVIDIA chips, including AMD GPUs and Tenstorrent AI accelerators. This broadens the choice of chips for AI training without raising software-compatibility concerns.
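The sketch below shows the intent from the user's point of view: the training loop contains no vendor-specific calls, so the same code can run on whichever accelerator the framework targets. The `Accelerator` class is an assumed stand-in, not a MoAI API; a real framework would launch vendor-specific kernels behind it.

```python
# Illustrative only: user code stays vendor-neutral, and the (hypothetical)
# Accelerator abstraction maps work onto whatever chip is available.
class Accelerator:
    def __init__(self, vendor):
        self.vendor = vendor
    def run(self, fn, *args):
        # A real framework would dispatch to a vendor-specific kernel here.
        return fn(*args)

def train_step(w, x, y, lr=0.1):
    # One gradient-descent step on a single least-squares weight.
    grad = 2 * (w * x - y) * x
    return w - lr * grad

for vendor in ("amd", "tenstorrent"):
    acc = Accelerator(vendor)
    w = 0.0
    for _ in range(100):
        w = acc.run(train_step, w, 2.0, 4.0)
    print(vendor, round(w, 3))  # converges to 2.0 regardless of backend
```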