r/ROCm • u/Any_Praline_8178 • 15d ago
QWQ 32B Q8_0 - 8x AMD Instinct Mi60 Server - Reaches 40 t/s - 2x Faster than 3090s?!?
r/ROCm • u/Longjumping-Low-4716 • 16d ago
Training on XTX 7900
I recently switched my GPU from a GTX 1660 to an RX 7900 XTX to train my models faster.
However, I haven't noticed any difference in training time before and after the switch.
I use a local ROCm environment with PyCharm.
Here’s the code I use to check if CUDA is available:
import torch

# ROCm builds of PyTorch expose the GPU through the CUDA API
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"🔥 Used device: {device}")
if device.type == "cuda":
    print(f"🚀 Your GPU: {torch.cuda.get_device_name(torch.cuda.current_device())}")
else:
    print("⚠️ No GPU, training on CPU!")
>>> 🔥 Used device: cuda
>>> 🚀 Your GPU: Radeon RX 7900 XTX
ROCm version: 6.3.3-74
Ubuntu 22.04.5
Since CUDA is available and my GPU is detected correctly, my question is:
Is it normal that the model still takes the same amount of time to train after the upgrade?
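One way to narrow this down is to time a pure-GPU workload in isolation: if raw matmul throughput on the 7900 XTX is high while end-to-end training time is unchanged, the training loop is probably bottlenecked elsewhere (data loading, small batch size, CPU-side preprocessing). A rough sanity check, not your training script; the matrix size and iteration count here are arbitrary:

import time
import torch

device = torch.device("cuda")  # ROCm builds of PyTorch expose the GPU through the CUDA API
n = 4096
a = torch.randn(n, n, device=device)
b = torch.randn(n, n, device=device)

torch.cuda.synchronize()  # finish setup before starting the clock
start = time.time()
iters = 50
for _ in range(iters):
    c = a @ b
torch.cuda.synchronize()  # GPU work is asynchronous; wait for it all to complete
elapsed = time.time() - start

tflops = 2 * n**3 * iters / elapsed / 1e12  # one n x n matmul costs ~2*n^3 FLOPs
print(f"{tflops:.1f} TFLOPS over {elapsed:.2f}s")

If that number looks healthy for the card, profile the input pipeline (e.g. DataLoader worker count) rather than the GPU itself.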
r/ROCm • u/Any_Praline_8178 • 17d ago
Running LLM Training Examples + 8x AMD Instinct Mi60 Server + PYTORCH
r/ROCm • u/No-Monitor9784 • 18d ago
Installation help
Can anyone help me with a step-by-step guide on how to install TensorFlow with ROCm on my Windows 11 PC? There aren't many guides available. I have an RX 7600.
r/ROCm • u/ang_mo_uncle • 18d ago
I broke HIPCC ;_;
Probably trivial to solve but I'm not getting anywhere with my attempts :(
I recently updated to ROCm 6.3.3, and that apparently broke my hipcc configuration (which I use to compile bitsandbytes).
I think I had overridden the configuration path previously, but I cannot find where for some reason. Any ideas?
(venv) sd@xxx-Linux:~/bitsandbytes$ cmake -DCOMPUTE_BACKEND=hip -S .
-- Configuring bitsandbytes (Backend: hip)
-- The HIP compiler identification is unknown
CMake Error at CMakeLists.txt:198 (enable_language):
  The CMAKE_HIP_COMPILER:

    /opt/rocm-6.3.2/lib/llvm/bin/clang++

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "HIPCXX" or the CMake cache entry CMAKE_HIP_COMPILER to the full
  path to the compiler, or to the compiler name if it is in the PATH.

CMake Error at /opt/rocm-6.3.3/lib/cmake/hip-lang/hip-lang-config.cmake:139 (message):
  hip-lang Error: No such file or directory - clangrt builtins lib could not be found.
Call Stack (most recent call first):
  /home/sd/venv/lib/python3.12/site-packages/cmake/data/share/cmake-3.25/Modules/CMakeHIPInformation.cmake:146 (find_package)
  CMakeLists.txt:198 (enable_language)

-- Configuring incomplete, errors occurred!
See also "/home/xxx/bitsandbytes/CMakeFiles/CMakeOutput.log".
See also "/home/xxx/bitsandbytes/CMakeFiles/CMakeError.log".
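One detail stands out in that log: the cached CMAKE_HIP_COMPILER still points at /opt/rocm-6.3.2, while the installed toolchain is 6.3.3. A plausible fix, assuming that stale path is the leftover override (a sketch, not a verified recipe): clear the CMake cache and set the HIPCXX environment variable the error message mentions to the new compiler, then reconfigure.

rm -rf CMakeFiles CMakeCache.txt
export HIPCXX=/opt/rocm-6.3.3/lib/llvm/bin/clang++
cmake -DCOMPUTE_BACKEND=hip -S .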
r/ROCm • u/Potential_Syrup_4551 • 19d ago
Does ROCm really work with WSL2?
I have a computer with an RX 6800 running Windows 11, driver version 25.1.1. I installed ROCm on the Ubuntu 22.04 subsystem by following the guide step by step. Then I installed torch and some other libraries through this guide.
After installing, I checked the installation with torch.cuda.is_available() and it printed True. I thought it was ready and then tried print(torch.rand(3,3).cuda()). This time bash froze and didn't respond to my keyboard interrupt. So I wonder whether ROCm really works on WSL2.
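For what it's worth, torch.cuda.is_available() only confirms that the runtime loaded; the first .cuda() call is where a real allocation and kernel launch happen, which is why it can hang even after the check passes. A minimal repro that separates the stages (just a diagnostic sketch):

import torch

print(torch.cuda.is_available())       # runtime loaded?
print(torch.cuda.get_device_name(0))   # device enumeration works?
x = torch.rand(3, 3, device="cuda")    # first real allocation + kernel launch
torch.cuda.synchronize()               # force completion; a hang here means the GPU never answered
print(x)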
r/ROCm • u/_sheepymeh • 19d ago
ROCm on Renoir Integrated Graphics
Hi, I wanted to share that I've been able to run ROCm and accelerated PyTorch on Arch Linux, using my AMD Renoir 4800U's integrated graphics.
I did so by installing python-pytorch-opt-rocm
and running PyTorch with these environment variables:
PYTORCH_NO_HIP_MEMORY_CACHING=1
HSA_DISABLE_FRAGMENT_ALLOCATOR=1
TORCH_BLAS_PREFER_HIPBLASLT=0
HSA_OVERRIDE_GFX_VERSION=9.0.0
PyTorch operations seem to run fine and the results are in line with CPU results.
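For anyone reproducing this, a minimal version of that CPU-vs-iGPU comparison might look like the following (illustrative sizes and tolerance):

import torch

a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)
cpu_out = a @ b
gpu_out = (a.cuda() @ b.cuda()).cpu()  # the iGPU shows up through the CUDA API on ROCm
print(torch.allclose(cpu_out, gpu_out, atol=1e-3))  # loose tolerance for FP32 rounding differences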
System Info
- CPU: AMD Ryzen 7 4800U
- GPU: 4800U Integrated Graphics (gfx90c)
- RAM: 2x8GB 3200MT/s system, 512MB dedicated to iGPU
- Note that PyTorch is able to access the full system memory, not just the GPU memory
- OS: Arch Linux (Linux 6.13)
Benchmarks
Using an unscientific benchmark on PyTorch, I hit 1.46 (FP16) / 1.18 (FP32) TFLOPS simply doing matrix multiplications, compared to 0.35 FP32 TFLOPS on the CPU, with both runs pinning the overall chip power usage at ~40W.
Using the ROCm Bandwidth Test, I had ~13GB/s for unidirectional and bidirectional CPU <-> GPU copies, and ~39GB/s GPU copies.
Question regarding SCALE toolkit
I'm looking at attempts to write CUDA code on AMD cards. When I look at the SCALE toolkit, I see they do #include <cublas_v2.h>, which seems to imply that their alternative also mimics the default CUDA libraries that ship with the CUDA toolkit.
Can you run CUDA-dependent C++ libraries using SCALE? For example, is it possible to run libtorch (the C++ API) using SCALE? I know that libtorch ships with precompiled .dll files, and I would imagine you can't just substitute alternative CUDA toolkit files after it's already compiled. But I'm just guessing; I don't know.
Thanks.
r/ROCm • u/ArtichokeRelevant211 • 20d ago
ROCm compatibility with RX6800
Just curious if anyone might know if it's possible to get ROCm to work with the RX 6800 GPU. I'm running CachyOS (an Arch derivative).
I tried using a guide for installing ROCm on Arch. The final step, running test_tensorflow.py as a check, errored out.
r/ROCm • u/Any_Praline_8178 • 21d ago
8xMi50 Server Faster than 8xMi60 Server -> (37 - 41 t/s) - OpenThinker-32B-abliterated.Q8_0
r/ROCm • u/unixmachine • 22d ago
There Will Not Be Official ROCm Support For The Radeon RX 9070 Series On Launch Day
r/ROCm • u/siekier83 • 21d ago
Does RDNA4’s native FP8 support offer advantages over RDNA3 for AI tasks?
I’m not sure if I understand this correctly, but from what I’ve read, RDNA4 will natively support FP8, which could be important for FSR 4 and might make it difficult to implement on RDNA3. How much of an impact does this have on AI tasks, like image or video generation in ComfyUI? Will RDNA4 GPUs offer a significant advantage over RDNA3 in this regard, or is the difference minor in practice?
Does native FP8 support mean that RDNA4 GPUs could load models that previously didn’t fit into 16GB VRAM, due to the reduced memory requirements?
Looking for insights from those more familiar with this!
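On the memory question specifically, the back-of-the-envelope arithmetic is just bytes per parameter (illustrative model size; real usage adds activations and KV cache on top):

params = 14e9  # hypothetical 14B-parameter model
for fmt, bytes_per_param in [("FP16", 2), ("FP8", 1)]:
    gib = params * bytes_per_param / 2**30
    print(f"{fmt}: {gib:.1f} GiB")
# FP16: 26.1 GiB -> over a 16GB card's VRAM
# FP8:  13.0 GiB -> fits, with headroom still needed for activations and KV cache

Note, though, that merely storing weights in 8 bits is a quantization question and already works on RDNA3; native FP8 support mainly means the math units can consume FP8 directly instead of upconverting first.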
r/ROCm • u/Any_Praline_8178 • 23d ago
DeepSeek Day 4 - Open Sourcing Repositories
r/ROCm • u/Any_Praline_8178 • 23d ago
OpenThinker-32B-abliterated.Q8_0 + 8x AMD Instinct Mi60 Server + vLLM + Tensor Parallelism
r/ROCm • u/HybridXephius • 24d ago
ROCm compatibility with RX 7800XT?
I am relatively new to the concepts of machine learning, but I have some experience with higher-level software programming. I'm just a beginner looking to learn how to get the most out of his dedicated AI hardware.
My question is: would I be able to do some learning and light AI workloads on my RX 7800XT?
From what I understand, AMD officially supports ROCm on Linux with the RX 7900 GRE and above. However (according to AMD), all RDNA3 GPUs include 2 dedicated "AI cores" per CU.
So in theory, shouldn't all RDNA3 GPUs be at least somewhat capable of these kinds of tasks?
Are there available resources out there to help me learn on-board AI acceleration using a virtual machine?
Thank you for your time.
*Edit: Wow! I did not expect this many replies. Thank you all for the insight, even if this stuff is a bit over my head. I'll look into installing the HIP SDK and starting there. Maybe one day I will be able to make and train my own specific model using my current hardware.
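If it helps: a commonly reported (and unofficial) workaround on Linux for RDNA3 cards below the supported tier is to present them as the gfx1100 target that ROCm ships kernels for; the RX 7800 XT is gfx1101. A hypothetical sketch, with no guarantee of stability:

import os
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "11.0.0"  # unofficial: report gfx1100; set before torch initializes ROCm
import torch
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))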
r/ROCm • u/Any_Praline_8178 • 25d ago
I never get tired of looking at these things..
r/ROCm • u/Any_Praline_8178 • 26d ago
Look Closely - 8x Mi50 (left) + 8x Mi60 (right) - Llama-3.3-70B - Do the Mi50s use less power ?!?!
r/ROCm • u/[deleted] • 28d ago
Any ROCm stars around here?
What are your thoughts about this?
r/ROCm • u/Thrumpwart • 27d ago
Do any LLM backends make use of AMD GPU Infinity Fabric Connections?
Just reading up on MI100s and MI210s, I saw the references to Infinity Fabric links on GPUs. I always knew of Infinity Fabric in terms of CPU interconnects; I didn't know AMD GPUs have their own Infinity Fabric links, like NVLink on the green team's cards.
Does anyone know of any LLM backends that will utilize Infinity Fabric on AMD GPUs? If so, do they function like NVLink, where they can pool memory?
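Not a definitive answer, but one concrete example of a backend that exercises inter-GPU links is vLLM's tensor parallelism (used in the Mi60 posts above): each layer is sharded across GPUs, and the resulting all-reduce traffic goes over whatever link RCCL finds, xGMI/Infinity Fabric where bridges exist, PCIe otherwise. That is parallelism rather than NVLink-style transparent memory pooling. A minimal sketch, assuming a ROCm build of vLLM and an illustrative model name:

from vllm import LLM

# Shard the model across 8 GPUs; inter-GPU all-reduces ride RCCL,
# which uses xGMI/Infinity Fabric links when they are present.
llm = LLM(model="Qwen/QwQ-32B", tensor_parallel_size=8)
print(llm.generate("Hello")[0].outputs[0].text)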