Basically, ROCm and CUDA let you do math on the GPU. Most linear algebra operations (which is what LLMs, NNs, and ML in general boil down to) can be parallelized on a GPU, which is much more performant than a CPU.
To perform calculations on the GPU, you need some sort of interface to your programming language of choice. NVIDIA's answer is CUDA, which is C++ with bindings for Python (PyTorch, TensorFlow, etc.) and Julia (Flux, etc.).
ROCm is AMD's solution; its bindings are young and not widely implemented.
My advice: play around with Flux and PyTorch on ROCm just to get an idea. Suffice it to say, when I started doing RL and LLMs more seriously, I gave up my Colab and sold my AMD cards to fund a 3060.
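If you want a feel for what those bindings buy you, here's a minimal PyTorch sketch (the sizes are just illustrative). One thing worth knowing: ROCm builds of PyTorch reuse the `torch.cuda` API, so the same code runs on AMD and NVIDIA GPUs:

```python
import torch

# ROCm builds of PyTorch expose AMD GPUs through the torch.cuda
# namespace, so this check works on both vendors.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A big matrix multiply -- exactly the kind of linear algebra
# that parallelizes well on a GPU.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(device, c.shape)  # e.g. "cuda torch.Size([4096, 4096])"
```

On a GPU this runs orders of magnitude faster than the same multiply on a CPU, which is the whole point of the exercise.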
I agree with the reveal.js suggestion. I recommend trying it with Org mode in Emacs via an export plugin like ox-reveal (plus you also get Beamer export for free).
Alternatively, it also works with Jupyter.
This is what I use for every presentation I need to give.
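If you go the Jupyter route, nbconvert can turn a notebook straight into reveal.js slides; something like `jupyter nbconvert --to slides talk.ipynb --post serve` (the notebook name here is just a placeholder) builds the deck and serves it locally.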