Ask HN: M5 MacBook Pro buyers, worth spending the $$$ to maybe run LLMs local?

To anyone upgrading their daily-driver Mac this year: are you considering going to a Max chip with a high-memory config, e.g. in the hope (now or in the near future) of being able to usefully run agents/LLMs locally on your main machine?

Or are the few extra thousand dollars between a base and max-spec MBP still better spent on literally any other practical option (different hardware, remote hardware, cloud AI subscriptions or credits)? Or should you wait to see whether there will be an M5 Studio, or what inference performance next year's M6 may bring?

I am tempted, but even with some new models getting skinnier and more efficient, I am not sure Moore's law and the M5 generation are quite there yet to be worth the trouble.

What calls are y'all making, and why?

7 points | by tpurves 15 hours ago

5 comments

  • al_borland 7 hours ago
    I have an M1 Pro MBP. I keep being tempted to upgrade, but when I look at the improvements being made, it makes me continue to wait. In day to day use, I’m not hurting at all with the M1 Pro and it seems like I’ll be able to get so much more in a couple years if trends continue.

    When it comes to local AI, I’m of the opinion that this is where things should go in the long term. However, I want to see the market mature more to understand what that will look like. I don’t want to buy today in hopes of something down the road.

    So pretty much all my buy signals are telling me to kick the can down the road.

    This site was posted a couple weeks back. You can select the M5 Max from the list and see how it would run various local models. That may help make the decision.

    https://www.canirun.ai/

  • satvikpendem 11 hours ago
    You should ask r/LocalLLaMA; they have more benchmarks such as this [0]. As an aside, the other comments in this HN thread are useless: one is talking about using cloud models when the question is specifically about local models, which have various reasons to exist beyond cloud models; and another is about not buying any more Apple products, which is irrelevant to the question at hand.

    [0] https://www.reddit.com/r/LocalLLaMA/comments/1rzkw4x/m5_max_...

  • i_have_an_idea 11 hours ago
    I thought about it for a moment, but the real reason I got the new M5 Pro with 64GB is to be able to run several large projects concurrently in Docker envs.

    I didn't go for a Max chip because I value the better battery life on the Pro more than I value the additional GPU cores.

    Personally, I think until the LLMs start to plateau, it will always be more valuable to run a frontier LLM vs just a very capable local LLM. I have no idea when that will happen, so I simply decided to not overbuy the hardware now.

  • raw_anon_1111 14 hours ago
    I have never come close to hitting a limit using the Codex CLI with my $20/month ChatGPT subscription, all day for five months.
  • tim-tday 14 hours ago
    [flagged]