Show HN: macOS GUI for running LLMs locally


58 points | by cztomsik 280 days ago


  • BrutalCoding 280 days ago
    Good to see that other devs share the same passion for making AI accessible to everyone.

    I’ve got a similar app, with the same goals and eventually the same supported platforms. The slight difference is that mine is open source.

    • syncbehind 279 days ago
      Yours looks ... shady.

      Jokes aside, thanks for building this! It looks swell, and I really enjoy the sense of humor in your README as well. Looking forward to poking around tonight!

      • BrutalCoding 279 days ago
        If you're on macOS ARM64, give it a go. I have the hardware from the README but need more time. I often switch between languages and systems. Kudos to the OP for their work.
    • aaomidi 279 days ago
      Open source when running LLMs locally is...kinda the entire point.

      Thank you for this!

      • BrutalCoding 279 days ago
        Cheers for your nice words!

        I still salute products like Avapls. Hope you succeed @cztomsik. It’s a good initiative to make offline models accessible.

        • cztomsik 278 days ago
          Thank you, and I wish you luck with your project too, of course :)
  • marcellus23 280 days ago
    > What is a language server?

    > A language server is a specialized program that processes language-related tasks. This includes activities like text generation, grammar correction, rephrasing, summarization, data extraction, and more.

    This confused me initially. I've never heard language server in this context. To me, it has always been in the context of the LSP[0]. But maybe this is a common usage of the term and I've just missed it?

    Even if so, it seems like an odd term, since this is a self-contained desktop application and not really a server, right?


    • duskwuff 280 days ago
      Now I'm imagining a natural-language language server -- a plugin for text editors which can provide services like text completion and transformations in plain text documents based on an underlying LLM. Why let programming languages have all the fun? :)
    • cztomsik 280 days ago
      No, you are right, it's a new term I made up. The idea is that I see this as a RAD tool (for non-programmers). So you would have this app open all the time and it would serve different tasks you have prepared before. But that part is not finished yet and it will be limited in the free version.
      • CharlesW 280 days ago
        > …it's a new term I made up.

        On the bright side, now that you know what a language server is, you can rename your thing so it can be searched for.

  • achrono 280 days ago
    >Is it open-source? No, it is not.

    Why not?

    As to why: this one actually has a decent GUI, and one that's not (visibly) based on a locally running webserver, so it seems like it's got a shot at getting 100x more popular with just a YouTube video (or, gasp, a TikTok). With the popularity of open source today, I find it a little hard to trust that something is "ensuring maximum privacy", as you say, without the open-source badge (yes, I know I shouldn't be so naive, but no, I am not going to look at network-level logs to find out whether the app is truly contained to local).

    • cztomsik 280 days ago
      Fair point, but if this were open source, would you really go and read all of the source code in several programming languages, and also audit all of the dependencies?

      IMHO, at some point, it's just about trust.

      Now, to be 100% honest, I'd love to release this as OSS, but my current business model is based on freemium, and on selling it as white-label to companies.

      EDIT: if you have a better idea, I'd love to hear it.

      • lagniappe 280 days ago
        I resent having to step in and say something here, but the way you describe things, I effectively don't exist. I read the app code. I write stdlib code. I grok every line of every library I use. I share alike.

        I'm not saying change your ways, just consider being a bit less flippant when you say things like "IMHO, at some point, it's just about trust". As someone who seeks to profit from this, that's about the worst example you can give a prospective user like me.

        My face is hot just having to type this.

        • cztomsik 279 days ago
          > IMHO, at some point, it's just about trust

          Yes, but you are also literally going to multiply several billion numbers together and pray for a random outcome.

          I don't know, maybe some middle ground would be feasible, like having the server-side part open source, because that's the only part where anything harmful might be (and it's also fairly thin, BTW).

          But no matter what, the project needs to be profitable for me. Sorry to say it, but let's be a bit realistic: you wouldn't work for free either.

        • unshavedyak 280 days ago
          Also, FOSS has a lot more chances to have eyes on it. I definitely don't read every line, and i definitely don't expect anyone else to - but i do expect more eyes to be on it, to raise questions, spot interesting behavior, etc. If you see questionable network I/O, you can look in the source to investigate, etc.

          It has value even if every person doesn't individually read every line.

        • carlosjobim 280 days ago
          You're at most a prospective user, never a prospective customer. Developers deserve to get paid for what they create, and those who don't want to pay have no right to make any demands.

          Maybe your face is getting hot from trying to shame somebody into giving their work away for free?

          • cztomsik 279 days ago
            Thank you for your kind words, I couldn't say it better.
        • sieabahlpark 280 days ago
      • achrono 280 days ago
        >it's just about trust.

        Yes, and a big part of establishing trust is sending the right signals. The risk vs reward differential is way higher for open-source compared to closed source, so all other things being equal I would trust open-source more. Note that all other things are actually not equal in this case, e.g. your closest competition is open-source: Ollama, GPT4All, Llamero.

        • cztomsik 279 days ago
          Not really, my closest competition is LM Studio, which is closed-source too. There's also Machato, which is also closed-source (and not even free).
          • photoGrant 279 days ago
            And not updated in quite some time. And very buggy (Machato)
      • SkyMarshal 280 days ago
        > Fair point, but if this was open-source, would you really go and read all of the source code in several programming languages and also audit all of the dependencies?

        This probably isn't the best justification to use on tech forums. Odds are higher here and in similar forums that some of the audience do indeed read the source code, run static analysis on it, check the libraries for CVEs, etc.

      • beezlewax 280 days ago
        It is not just about trust. On an OSS project I expect other people to have already read the source code when I can't: people with no incentive to lie or to hide whatever they find.
        • cztomsik 279 days ago
          Ok, let's try this: if this were (fully) open source, would you pay $5/month for access to a premium discord/forum with videos, examples, and the ability to propose new features or vote on the backlog?
          • unshavedyak 279 days ago
            Sidenote, i value FOSS but i'm supportive of your desire to keep it closed and make money.

            With that said, i'd never pay $5/m these days for something unless it had very good value to me and obvious ongoing costs i'm incurring (servers/etc) for the dev/company. Those costs would need to be clear though, not just "we chose to run some servers so that we'd have features which justified subscriptions"

            I would however, pay for upgrades - ie the licensing model employed by JetBrains/etc. Which can come out to $5/m (or yearly purchases, etc), or w/e, but most importantly the software keeps working if you decide to cancel.

            edit: Also i'm on Linux. Looks like i can't buy it anyway haha

            • cztomsik 279 days ago
              Exactly! I don't want to subscribe to anything, I want to pay once and have the license forever (for that specific version + bugfixes).

              But that makes OSS more complicated - people usually don't pay for something they can clone & build themselves.

              BTW: Linux and Windows will definitely happen, I just haven't had time yet. Sorry, it's a lot of work, so I had to pick macOS first, and even that alone is a lot of testing on different configurations.

              • unshavedyak 279 days ago
                I will say, as someone who values FOSS - if you see areas in your app that don't feel super proprietary and that you can library-out - that puts forward good will to people like me.

                Ideally your app would be all FOSS components and i pay for the glue to tie it all together, but i've seen that become too easy to knock-off. Depends on the app of course. Still, FOSSing parts can do a fair amount of good will to the FOSS crowd, imo.

  • syntaxing 280 days ago
    Do the models run in a docker container? That's probably one of my favorite things about Ollama.
    • mchiang 280 days ago
      disclaimer: I'm one of the maintainers working on Ollama.

      I would love to hear how you are using Ollama. One of the upcoming releases will include an official release of Ollama on Linux with CUDA support out of the box. From there, we will publish Ollama Docker images to enable GPU support as well.
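      For anyone curious, once an Ollama server is running (natively or in Docker), it can be queried over its local HTTP API. A minimal sketch; it only builds the request, and the model name is an assumption:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    Ollama listens on port 11434 by default; stream=False asks for a
    single JSON response instead of a stream of chunks.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With a server running, you would send it like this:
# with urllib.request.urlopen(build_generate_request("llama2", "Hi")) as resp:
#     print(json.loads(resp.read())["response"])
```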

      • syntaxing 280 days ago
        Thank you for your amazing work! I more or less use it as a llama.cpp replacement because I honestly can’t figure out a good prompt structure, so I get bad results otherwise. On my M1 MBP, I use it for stuff that can’t go to GPT, but I can only use up to 13B. I also run a 34B model with cuBLAS enabled on my Linux server by modifying the Go files mentioned in one of the issue tickets.
    • cztomsik 278 days ago
      Sorry, I thought I had already answered. No, everything runs as one native-like application (a mix of native code and a webview).
  • neontomo 279 days ago
    I like this GUI.


    - I was missing file sizes on the models list, and even in the installer window there's only a percentage indicator. Blurring the window while installing also stops me from visually exploring the app while waiting for the install to finish (maybe just a lil' bit of blur is enough to show that the rest of the window is inactive?).

    - Scrolling up should stop auto-scroll in the chat; I couldn't even reach the Stop generation button while it was generating. Maybe put that button in a static location.

    - Maybe an uninstall button for models is appropriate.

    • cztomsik 279 days ago
      Thanks for the feedback, this is what I was hoping for.

      > Scrolling up should stop auto-scroll in the chat

      This is at the top of the list for today; check the website/discord/twitter for updates. I want to do both: pin the button at the bottom, and also make sure that if you scroll up, the view stays where you left it. Currently it's one component which contains the button and also scrolls, and that was a bad idea.
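      The intended behavior can be sketched as a small policy function (the name and tolerance value are hypothetical; the real app runs in a webview, so this is just the logic):

```python
def should_autoscroll(scroll_top: float, viewport_height: float,
                      content_height: float, tolerance: float = 10.0) -> bool:
    """Auto-scroll only while the user is at (or near) the bottom.

    Scrolling up past the tolerance stops the auto-scroll, so earlier
    messages and the Stop button stay reachable; scrolling back down
    to the bottom re-enables it.
    """
    distance_from_bottom = content_height - (scroll_top + viewport_height)
    return distance_from_bottom <= tolerance
```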

      > Blurring the window while installing

      Agreed 100%, I definitely want to improve the downloader a lot more, but this was good enough for the first release.

      > Maybe an uninstall button for models is appropriate.

      It should be there (in the upper area). I hope the 24h cache in CloudFront is not too much of a delay, but the link should be pointing to the 09-18 version, which has a Delete button.

  • pedalpete 279 days ago
    I'm tempted to try this, not because I want a locally running version of OpenAI, but just because the OpenAI website is so painful to use.

    I'm currently on an Intel Mac mini with almost no memory, so I can't even download this to test it (I have to close all other programs just so I can build an app in Xcode).

    I've got a Razer Blade as well, but obviously mac-only doesn't help there.

  • alexstore06 280 days ago
    How does this compare to GUIs like llamero and gpt4all?
    • cztomsik 280 days ago
      It has a playground with prompt saving and (simple) templates, chat saving, and a built-in model downloader. It's also very small; most people don't care about that, but I do.
  • walth 279 days ago
    How does this compare with ?
    • cztomsik 278 days ago
      This is a macOS application: you download it, double-click it, and it should work.
  • carlosjobim 279 days ago
    > 6. Does it support older macOS? 12.6 is the lowest version at the moment.

    Do you have any interest in supporting older versions of OS X?

    • cztomsik 278 days ago
      It depends, what version do you need?
      • carlosjobim 278 days ago
        I'm on Catalina, but to be fair I haven't used LLMs for more than toying, so I wouldn't call it a "need", just something cool to try out.
        • cztomsik 278 days ago
          I can't promise anything, but there's apparently some problem with Intel Macs, and I only have an older machine here (with Catalina), so I might give it a shot, and if it works I can publish the build.

          EDIT: it's too old, sorry.

  • krm01 280 days ago
    Is there a central place where most downloadable models are available for quick download?
    • SparkyMcUnicorn 280 days ago
    • cztomsik 280 days ago
      It works with the current version of GGUF models for llama.cpp - you can find them on Hugging Face, or you can convert them manually.

      Only a few download links are baked in at the moment, but whatever *.gguf file you put in your Downloads folder should appear in the dropdown.
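      A minimal sketch of that folder-scan behavior (the function name and newest-first ordering are assumptions; the app's actual implementation isn't public):

```python
from pathlib import Path

def discover_models(folder: Path) -> list[Path]:
    """Return the *.gguf files in a folder, newest first.

    Mirrors the behavior described above: any GGUF file dropped into
    ~/Downloads shows up in the model dropdown. The folder is a
    parameter here so the sketch stays self-contained.
    """
    return sorted(folder.glob("*.gguf"),
                  key=lambda p: p.stat().st_mtime, reverse=True)

# e.g. discover_models(Path.home() / "Downloads")
```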

  • swyx 280 days ago
    very cool. why did you work on it? curious on motivations. congrats on shipping.