
  • Moreh vLLM Performance Evaluation: Llama 3.3 70B on AMD Instinct MI300X GPUs

    August 30, 2025

    Moreh vLLM achieves 1.68x higher output TPS, 2.02x lower TTFT, and 1.59x lower TPOT compared to the original vLLM for Meta's Llama 3.3 70B model.

  • Moreh vLLM Performance Evaluation: DeepSeek V3/R1 671B on AMD Instinct MI300X GPUs

    August 29, 2025

    Moreh vLLM achieves 1.68x higher output TPS, 1.75x lower TTFT, and 1.70x lower TPOT compared to the original vLLM for the DeepSeek V3/R1 671B model.

  • DeepSeek V3 and R1 on MoAI: 1. Fine-Tuning on AMD GPU Clusters

    February 20, 2025

MoAI provides a PyTorch-compatible environment that makes it easy to fine-tune LLMs, including the 671B-parameter DeepSeek MoE model, on hundreds of AMD GPUs.

  • The non-US AI startups competing with Silicon Valley heavyweights

    January 28, 2025

    Global Corporate Venturing — The startup that perhaps comes closest to DeepSeek’s approach is South Korea’s Moreh, which has created a software tool that allows users to build and optimise their own AI models using a more flexible, modular approach.

  • Introducing Motif: A High-Performance Open-Source Korean LLM by Moreh

    December 2, 2024

Moreh announces the release of Motif, a high-performance 102B-parameter Korean large language model (LLM), which will be made available as an open-source model.

  • Moreh partners with Tenstorrent to challenge NVIDIA in AI data center market

    November 18, 2024

Joint R&D of AI data center solutions integrating Tenstorrent's semiconductors with Moreh's software, targeting the NVIDIA-dominated market with competitive alternatives.

  • Fine-tuning Llama 3.1 405B on AMD GPUs

    September 3, 2024

There are no barriers to fine-tuning Llama 3.1 405B on the MoAI platform. The Moreh team has demonstrated fine-tuning the model on 192 AMD GPUs.

  • GPU Virtualization in the MoAI Platform

    August 19, 2024

    The MoAI platform provides comprehensive GPU virtualization including fine-grained resource allocation, multi-GPU scaling, and heterogeneous GPU support.

  • Korean startup Moreh tops global large language model test

    January 18, 2024

    The Korea Economic Daily — South Korean artificial intelligence tech startup Moreh Inc. said on Thursday that its large language model (LLM) topped a performance assessment by global leading AI platform operator Hugging Face Inc.

  • AMD and Korean telco KT back AI software developer Moreh in $22M Series B

    October 26, 2023

    TechCrunch — Advanced Micro Devices (AMD) and Korean telco KT are among the investors of Moreh, which builds an AI software tool that optimizes and creates AI models.


Moreh, Inc.


© 2025 Moreh, Inc. All rights reserved.
