A structured learning path for AMD ROCm in LLM workflows, from environment setup and deployment to fine-tuning and operator optimization.

Unified environment baseline
Set up ROCm, PyTorch, and uv first so the deployment, fine-tuning, and operator-optimization tutorials share the same foundation.
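The shared baseline can be sanity-checked from Python. A minimal sketch, assuming a PyTorch ROCm build; the helper name and report strings are illustrative, not part of the repo:

```python
def rocm_baseline_report() -> str:
    """Return a one-line summary of the PyTorch/ROCm baseline.

    Illustrative helper: the name and messages are not part of this repo.
    """
    try:
        import torch
    except ImportError:
        return "torch not installed"
    # On ROCm builds torch.version.hip is a version string; on CUDA/CPU
    # builds it is None. The CUDA-named device API (torch.cuda.*) is
    # reused for AMD GPUs under ROCm.
    hip = getattr(torch.version, "hip", None)
    if hip is None:
        return "torch installed, but not a ROCm build"
    if not torch.cuda.is_available():
        return f"ROCm build (HIP {hip}), but no AMD GPU visible"
    return f"ROCm build (HIP {hip}), {torch.cuda.device_count()} GPU(s)"


if __name__ == "__main__":
    print(rocm_baseline_report())
```

Running this before any tutorial quickly distinguishes "wrong wheel installed" from "driver cannot see the GPU".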

hello-rocm Skill
Copy the sentence below into any tool that supports Skills, Rules, or agent configuration, and let the tool decide how to load src/hello-rocm-skill.
Use it for GPU architecture lookups, quick ROCm installs, vLLM / Ollama / llama.cpp deployment, troubleshooting, and learning paths.
| Section | Content |
|---|---|
| Environment | ROCm installation, configuration, validation, and GPU architecture references |
| Deploy | Multi-framework local deployment for models such as Qwen3 and Gemma4 |
| Fine-tune | LLM fine-tuning notes and examples on ROCm |
| Operator Optimization | AMD AI hardware, ROCm software stack, HIP operators, and PyTorch custom operators |
| References | ROCm and AMD AI ecosystem resources |
| AMD Practice | Application cases and community projects for AMD platforms |
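The Environment row covers installation and validation; a minimal command sketch, assuming a recent ROCm release and the official PyTorch ROCm wheel index (the rocm6.2 path and gfx output are illustrative, match them to your installed ROCm version):

```shell
# Verify the driver and runtime can see the GPU (both tools ship with ROCm).
rocminfo | grep -i gfx        # reports the GPU architecture, e.g. gfx1100
rocm-smi                      # utilization, VRAM, temperature

# Install a ROCm build of PyTorch with uv; pick the index path that
# matches your ROCm version (rocm6.2 here is only an example).
uv pip install torch --index-url https://download.pytorch.org/whl/rocm6.2

# Confirm PyTorch sees the GPU (ROCm reuses the CUDA-named API).
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```

If `rocminfo` fails but the GPU is present, check that your user is in the `render` and `video` groups before debugging the PyTorch install.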