Blog
Introducing Motif: A High-Performance Open-Source Korean LLM by Moreh
December 2, 2024
Moreh announces the release of Motif, a high-performance 102B-parameter Korean large language model (LLM), which will be made available as an open-source model.
Fine-tuning Llama 3.1 405B on AMD GPUs
September 3, 2024
There are no barriers to fine-tuning Llama 3.1 405B on the MoAI platform. The Moreh team has demonstrated fine-tuning the model on 192 AMD GPUs.
GPU Virtualization in the MoAI Platform
August 19, 2024
The MoAI platform provides comprehensive GPU virtualization, including fine-grained resource allocation, multi-GPU scaling, and heterogeneous GPU support.
Training 221B Parameter Korean LLM on 1,200 AMD MI250 GPU Cluster
August 14, 2023
Moreh trained the largest-ever Korean LLM, with 221B parameters, on top of the MoAI platform and a 1,200-GPU AMD MI250 cluster system.
KT’s Success Stories in AI Cloud Service and Large AI Model Training on AMD Instinct MI250 and Moreh AI Platform
November 11, 2022
KT has collaborated with Moreh and AMD to overcome challenges in public cloud services and in-house AI model development.