{{#description2:Using a 7840u for llama.cpp}}
I briefly had a Macbook M3 Max with 64GB. It was pretty good at running local LLMs, but I couldn't stand the ergonomics or being unable to run Linux, so I returned it.