
AMD-YES 🎮

A Curated Collection of Cool Projects on AMD GPUs

Making AI more fun, unleashing creativity


🚀 Quick Navigation

Local Run (Single GPU):

  • 🧸 toy-cli ✅️
  • 🎮 WeChat Jump ✅️
  • 🎭 Chat-Huanhuan ✅️
  • ✈️ Smart Travel Planner ✅️
  • 🦞 OpenClaw Private Assistant ✅️

Cluster Deployment (Multi-GPU):

  • 🚀 happy-llm ✅️

✅ Supported | 🚧 In progress

Introduction

AMD-YES brings together practical example projects built on AMD GPUs and the ROCm platform. Whether you want to quickly try out LLM applications, learn computer vision, or dive deep into distributed training, you will find a suitable example project here.

This module is divided into two stages: First, quickly get started with various applications on a single local GPU, then advance to cluster deployment and distributed training. Each project includes complete code and detailed tutorial guidance.

Figure 5.1 AMD-YES project learning path overview

Project List

Local Run (Single GPU)

Suitable for individual developers to quickly get started – you can do it all with just one AMD GPU!


🧸 toy-cli - Lightweight LLM Terminal Assistant

toy-cli is a simplified code Agent that provides a minimal command-line interface for calling LLM APIs.

  • Target Audience: Beginners, users wanting to quickly learn API calls
  • Difficulty Level: ⭐
  • Estimated Time: 30 minutes

📖 Start Learning toy-cli
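The core of a minimal terminal assistant like this is just assembling a chat request and posting it to an LLM endpoint. A stdlib-only sketch, assuming an OpenAI-compatible local server; the URL and model name are placeholders, and toy-cli's actual interface may differ:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical local endpoint
MODEL = "my-local-model"                               # placeholder model name

def build_request(prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat-completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

Wrapping `ask()` in an input loop is essentially all a minimal CLI assistant needs.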


🎮 YOLOv10 WeChat Jump

YOLOv10 WeChat Jump is an automated game AI based on YOLOv10 object detection: it recognizes jump targets in real time and calculates the required jump distance automatically.

  • Target Audience: Developers interested in computer vision and game AI
  • Difficulty Level: ⭐⭐
  • Estimated Time: 1 hour

📖 Start Learning WeChat Jump
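Once the detector returns bounding boxes for the player and the target, the control step is simple geometry: press duration is roughly proportional to the pixel distance between the two box centers. A sketch of that mapping; the coefficient is a hypothetical screen-dependent constant that must be tuned per device, and the project's actual logic may differ:

```python
import math

def center(box):
    """Center point of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def press_duration_ms(player_box, target_box, coeff=1.35):
    """Map the pixel distance between player and target to a press time.

    coeff is a resolution-dependent constant (1.35 here is illustrative);
    too small undershoots the jump, too large overshoots it.
    """
    (px, py), (tx, ty) = center(player_box), center(target_box)
    dist = math.hypot(tx - px, ty - py)
    return dist * coeff
```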


🎭 Chat-Huanhuan - Palace Language Model

Chat-Huanhuan is a LoRA fine-tuned model trained on dialogue excerpts from the TV series Empresses in the Palace, closely imitating Huanhuan's tone and speaking style.

  • Target Audience: Developers wanting to learn model fine-tuning and LoRA techniques
  • Difficulty Level: ⭐⭐
  • Estimated Time: 1.5 hours

📖 Start Learning Chat-Huanhuan
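The reason LoRA makes this kind of fine-tuning cheap is worth seeing in numbers: instead of updating a full weight matrix W, it trains a low-rank update B·A and adds it on top. A small numpy sketch of the idea; the 4096×4096 layer and rank 8 are illustrative dimensions, not the project's actual configuration:

```python
import numpy as np

def lora_update(W, A, B):
    """Effective weight after a LoRA update: W' = W + B @ A.

    W is the frozen (d_out, d_in) base weight; A is (r, d_in) and
    B is (d_out, r), with rank r << min(d_out, d_in).
    """
    return W + B @ A

d_out, d_in, r = 4096, 4096, 8
full_params = d_out * d_in            # parameters updated by full fine-tuning
lora_params = r * d_in + d_out * r    # parameters trained by LoRA (A and B)

# A rank-8 adapter on a 4096x4096 layer trains under 0.4% of the weights.
ratio = lora_params / full_params
```

The base model stays frozen, so only A and B need gradients and optimizer state, which is what lets a single consumer GPU handle the job.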


✈️ Smart Travel Planner

Smart Travel Planner is an intelligent Agent application based on the HelloAgents framework. It integrates the MCP protocol to call the Amap API, with the large model running locally on an AMD GPU.

  • Target Audience: Developers wanting to learn Agent frameworks and MCP protocol
  • Difficulty Level: ⭐⭐⭐
  • Estimated Time: 2 hours

📖 Start Learning Smart Travel Planner
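MCP is layered on JSON-RPC 2.0, so a tool invocation from the agent is ultimately a small JSON message. A sketch of that request shape; the `maps_geocode` tool name is a hypothetical stand-in for whatever the Amap MCP server actually exposes:

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC requests need unique ids

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    msg = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

# e.g. asking a (hypothetical) Amap MCP server to geocode an address:
request = mcp_tool_call("maps_geocode", {"address": "Beijing"})
```

The framework handles transport and response parsing; the point is that "agent calls a map API" reduces to messages of this form.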


🦞 OpenClaw - Fully Private Local AI Agent Platform

OpenClaw is a unified message processing and AI agent platform. Deploy a large model locally on an AMD Ryzen AI Max+ 395 machine for a fully private personal AI assistant.

  • Target Audience: Developers wanting to learn AI Agent platforms and local privacy deployment
  • Difficulty Level: ⭐⭐⭐
  • Estimated Time: 1.5 hours

📖 Start Learning OpenClaw


Cluster Deployment (Multi-GPU)

Advanced usage – distributed training and deployment!


🚀 Happy-LLM - Train Large Models from Scratch

Happy-LLM provides comprehensive tutorials for distributed multi-machine multi-GPU training, including core principles of large models and hands-on training implementation.

  • Target Audience: Advanced developers wanting to gain deep knowledge of large model training
  • Difficulty Level: ⭐⭐⭐
  • Estimated Time: 3+ hours

📖 Start Learning Happy-LLM
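The arithmetic at the heart of multi-GPU data-parallel training is gradient averaging: each worker computes gradients on its own shard, then an all-reduce makes every worker hold the mean. A plain-array simulation of that step, assuming nothing about Happy-LLM's actual training stack (in real code this is a collective over the GPU interconnect, e.g. `torch.distributed.all_reduce`):

```python
import numpy as np

def all_reduce_mean(grads):
    """Average per-worker gradients -- the core of data-parallel training.

    Each element of grads is one worker's gradient for the same parameter;
    after the reduce, every worker applies the identical averaged update,
    so all model replicas stay in sync.
    """
    return sum(grads) / len(grads)

# Four simulated workers, each with a gradient for the same 2-element parameter.
worker_grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]),
                np.array([5.0, 6.0]), np.array([7.0, 8.0])]
avg = all_reduce_mean(worker_grads)  # identical on every worker after the reduce
```

Everything else in distributed training (sharding, overlap, communication backends) is engineering around making this step fast.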


Environment Requirements

Hardware Requirements

  • A ROCm-compatible AMD GPU (e.g. Radeon RX 7000 series, Instinct MI series)
  • Recommended VRAM: 8 GB or more

Software Requirements

  • Operating System: Linux (Ubuntu 22.04+) or Windows 11
  • ROCm 7.12.0/7.2.0 or later
  • Python 3.10–3.12

Frequently Asked Questions

Q: How do I verify if my AMD GPU supports ROCm?

Please refer to the ROCm Official Support List to check supported GPU models.
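Besides the support list, you can inspect the machine directly: `rocminfo` prints one agent per device, and a GPU agent's `Name:` field is its gfx architecture target (e.g. gfx1100 for RDNA 3), which is what the support matrix is keyed on. A small helper for pulling those targets out of captured `rocminfo` output; the sample in use below is hypothetical:

```python
import re

def gfx_targets(rocminfo_output: str) -> list:
    """Extract GPU architecture names (gfx...) from `rocminfo` output.

    Returns the sorted, de-duplicated gfx targets found, which can be
    matched against the ROCm support matrix.
    """
    return sorted(set(re.findall(r"\bgfx[0-9a-f]+\b", rocminfo_output)))
```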

Q: What is the recommended learning order for these projects?

It is recommended to learn in the following order:

  1. Start with toy-cli first to get familiar with basic LLM API calls
  2. Choose projects of interest to dive deeper (game AI, dialogue models, Agent applications)
  3. After completing basic projects, challenge yourself with Happy-LLM for advanced distributed training



Contributions of more project examples are welcome! 🎉

Submit Issue | Submit PR