It is also first in the DistroWatch ranking:
https://distrowatch.com/table.php?distribution=cachyos
I distro hopped to it from Bazzite a couple of months ago, and I could not be happier.
If you try the installer, be careful when selecting multiple DEs/WMs: the conflicts between them were not listed anywhere during the installation process.
What worked for me was picking a single environment and adding the others later.


Zram kills my LLM speeds, big time. I had to disable it, or nearly so (capping it at around 0.5 GB), and move swap to the SSD to get a usable system.
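For reference, on distros that use systemd's zram-generator (Arch-based distros like CachyOS typically do), capping zram looks roughly like this; the exact size is just my assumption for illustration:

```shell
# /etc/systemd/zram-generator.conf
# Cap the zram device at ~512 MB; delete or comment out the whole
# [zram0] section to disable zram entirely.
[zram0]
zram-size = 512
compression-algorithm = zstd
```

After editing, run `sudo systemctl daemon-reload` and reboot (or restart `systemd-zram-setup@zram0.service`) for the change to take effect.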
The issue is that LLM frameworks allocate a large chunk of CPU RAM up front, statically. So if I'm right at the edge of my memory limit, the RAM zram reserves pushes me over into swapping, which obliterates token generation speed and system responsiveness, and it never recovers once it starts.
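One partial mitigation worth mentioning (whether it applies to your framework is an assumption on my part): llama.cpp can pin the model in RAM so those up-front allocations can't be swapped out, and you can check what is actually landing in swap versus zram. The model path here is a hypothetical placeholder:

```shell
# --mlock pins model weights in RAM so the kernel can't swap them out
# (requires a sufficient memlock ulimit for your user).
./llama-cli -m model.gguf --mlock

# Inspect where swap pressure is actually going:
swapon --show   # lists active swap devices and their usage
zramctl         # shows zram device size and compressed data
```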
I think CachyOS's better default scheduling helps hybrid inference, though; that is another thing llama.cpp gets very picky about. More optimized default compilation flags, GPU libraries, and the like would also go a long way.
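On the compilation-flags point, a sketch of what an optimized llama.cpp build looks like; the CUDA flag is an assumption about your GPU stack (swap it for your backend as needed):

```shell
# GGML_NATIVE tunes the build for the host CPU's instruction set;
# GGML_CUDA enables the CUDA backend for GPU offload.
cmake -B build -DGGML_NATIVE=ON -DGGML_CUDA=ON
cmake --build build --config Release -j
```

Distro packages built with conservative generic flags leave some of this on the table, which is where CachyOS's x86-64-v3/v4 package repos come in.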