
hello-rocm: ROCm tutorials and practice cases for AMD GPUs

A structured learning path for AMD ROCm in LLM workflows, from environment setup and deployment to fine-tuning and operator optimization.


hello-rocm Skill

Hand this project to your AI assistant with a single copy-and-paste

Copy the sentence below and paste it into any tool that supports Skills, Rules, or Agent configuration. Let the tool decide how to load src/hello-rocm-skill.

Use it for GPU architecture lookup, ROCm quick installs, vLLM / Ollama / llama.cpp deployment, troubleshooting, and learning paths.

  1. Read Environment first and finish the ROCm, PyTorch, and Python toolchain setup.
  2. Move to Deploy and validate your environment with LM Studio, vLLM, or another inference workflow.
  3. Continue with Fine-tune to understand LoRA fine-tuning on ROCm.
  4. Explore Operator Optimization if you want to go deeper into low-level performance and operator development.
  5. Check AMD Practice when you need complete project examples.
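Step 1 above can be sanity-checked with a few commands. This is a minimal sketch assuming ROCm and a ROCm build of PyTorch are already installed; the exact GPU architecture string (e.g. `gfx1100`) and any override values depend on your hardware.

```shell
# Minimal ROCm + PyTorch validation sketch.
# Assumes ROCm drivers/runtime and a ROCm build of PyTorch are installed.

# List detected AMD GPUs and their architecture (e.g. gfx1100 for RDNA3).
rocminfo | grep -i "gfx"

# Confirm PyTorch was built against ROCm (HIP) and can see the GPU.
# torch.version.hip prints the HIP version on ROCm builds (None on CUDA/CPU builds);
# torch.cuda.is_available() returns True when a ROCm GPU is usable.
python -c "import torch; print(torch.version.hip); print(torch.cuda.is_available())"

# Some consumer GPUs need an architecture override before running workloads,
# e.g. for RDNA3 cards (value shown is an example; check your GPU's documentation):
export HSA_OVERRIDE_GFX_VERSION=11.0.0
```

If `torch.version.hip` prints `None`, the installed PyTorch wheel is not a ROCm build and the Environment section's install instructions should be revisited.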

Content Map

| Section | Content |
| --- | --- |
| Environment | ROCm installation, configuration, validation, and GPU architecture references |
| Deploy | Multi-framework local deployment for models such as Qwen3 and Gemma4 |
| Fine-tune | LLM fine-tuning notes and examples on ROCm |
| Operator Optimization | AMD AI hardware, ROCm software stack, HIP operators, and PyTorch custom operators |
| References | ROCm and AMD AI ecosystem resources |
| AMD Practice | Application cases and community projects for AMD platforms |

Who This Is For

  • Learners who have an AMD GPU and want to run or fine-tune LLMs locally.
  • Developers who want a structured ROCm workflow instead of scattered setup notes.
  • Practitioners who want to move from inference deployment into operator development and performance analysis.