
  • The non-US AI startups competing with Silicon Valley heavyweights

    January 28, 2025

    Global Corporate Venturing — The startup that perhaps comes closest to DeepSeek’s approach is South Korea’s Moreh, which has created a software tool that allows users to build and optimise their own AI models using a more flexible, modular approach.

  • Introducing Motif: A High-Performance Open-Source Korean LLM by Moreh

    December 2, 2024

    Moreh announces the release of Motif, a high-performance 102B-parameter Korean large language model (LLM), which will be made available as an open-source model.

  • Moreh partners with Tenstorrent to challenge NVIDIA in AI data center market

    November 18, 2024

    Joint R&D of AI data center solutions integrating Tenstorrent's semiconductors with Moreh's software, targeting the NVIDIA-dominated market with competitive solutions.

  • Fine-tuning Llama 3.1 405B on AMD GPUs

    September 3, 2024

    There are no barriers to fine-tuning Llama 3.1 405B on the MoAI platform. The Moreh team has demonstrated this by fine-tuning the model on 192 AMD GPUs.

  • GPU Virtualization in the MoAI Platform

    August 19, 2024

    The MoAI platform provides comprehensive GPU virtualization, including fine-grained resource allocation, multi-GPU scaling, and heterogeneous GPU support.

  • Korean startup Moreh tops global large language model test

    January 18, 2024

    The Korea Economic Daily — South Korean artificial intelligence tech startup Moreh Inc. said on Thursday that its large language model (LLM) topped a performance assessment by global leading AI platform operator Hugging Face Inc.

  • AMD and Korean telco KT back AI software developer Moreh in $22M Series B

    October 26, 2023

    TechCrunch — Advanced Micro Devices (AMD) and Korean telco KT are among the investors of Moreh, which builds an AI software tool that optimizes and creates AI models.

  • Moreh joins the LLM race, proving AMD can outperform NVIDIA

    August 22, 2023

    Moreh announced the successful completion of a monumental LLM (large language model) training project in collaboration with KT, a leading CSP in Korea.

  • Training 221B Parameter Korean LLM on 1,200 AMD MI250 GPU Cluster

    August 14, 2023

    Moreh trained the largest-ever Korean LLM, with 221B parameters, on the MoAI platform and a 1,200-GPU AMD MI250 cluster system.

  • KT and KT Cloud invest 15B won in AI startup Moreh

    July 23, 2023

    Korea JoongAng Daily — KT and KT Cloud will jointly make a 15 billion won ($11.6 million) investment in Moreh, an artificial intelligence computing infrastructure startup, the telco said Sunday.


© 2025 Moreh, Inc. All rights reserved.
