I briefly had a MacBook M3 Max with 64GB. It was pretty good at running local LLMs, but I couldn't stand the ergonomics or the inability to run Linux, so I returned it. I picked up a ThinkPad P16s with an AMD 7840 to give Linux hardware a chance to catch up with Apple silicon. It's an amazing computer for the price, and it can run LLMs. Here's how I set up llama.cpp to use ROCm.

Install ROCm, then set an environment variable for the 780M:

<code>export HSA_OVERRIDE_GFX_VERSION=11.0.0</code>
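Beyond the override, llama.cpp also has to be built with its HIP backend before it will use the GPU. A rough sketch of what that looks like, under the assumption that your llama.cpp checkout uses the CMake <code>GGML_HIP</code> flag (older trees used <code>LLAMA_HIPBLAS</code> instead; check the README for your version):

```shell
# Make ROCm treat the 780M (gfx1103) as the supported gfx1100 target.
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Build llama.cpp with the HIP (ROCm) backend enabled.
# The exact flag name depends on the llama.cpp version you have checked out.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON
cmake --build build -j
```

The override is needed because ROCm does not ship kernels for the 780M's gfx1103 target, so without it llama.cpp falls back to the CPU; remember to set it in the same shell (or your profile) before running the built binaries as well.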