14 comments

  • glerk 2 minutes ago
    This is awesome work, thanks for sharing!

    How do you plan on keeping up with upstream changes from the API providers? I have implemented something similar, and the biggest issue I have faced with Go is that providers don't usually have SDKs (compared to JavaScript and Python), and there is work involved in staying up to date at each release.
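
    To give a sense of the work: without official SDKs you end up hand-rolling thin clients like the sketch below and re-checking them at every provider release (everything here is illustrative, not from any real SDK):

      // Hypothetical hand-rolled provider client of the kind you end up
      // maintaining yourself when there is no official Go SDK.
      package provider

      import (
          "bytes"
          "context"
          "encoding/json"
          "net/http"
      )

      type Message struct {
          Role    string `json:"role"`
          Content string `json:"content"`
      }

      type ChatRequest struct {
          Model    string    `json:"model"`
          Messages []Message `json:"messages"`
      }

      type Client struct {
          BaseURL string // tends to drift across providers and releases
          APIKey  string
          HTTP    *http.Client
      }

      func (c *Client) Chat(ctx context.Context, req ChatRequest) (*http.Response, error) {
          body, err := json.Marshal(req)
          if err != nil {
              return nil, err
          }
          // Most providers clone the OpenAI chat completions path, but
          // headers, auth schemes, and fields vary, which is the churn.
          httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost,
              c.BaseURL+"/v1/chat/completions", bytes.NewReader(body))
          if err != nil {
              return nil, err
          }
          httpReq.Header.Set("Authorization", "Bearer "+c.APIKey)
          httpReq.Header.Set("Content-Type", "application/json")
          return c.HTTP.Do(httpReq)
      }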

  • nzoschke 6 minutes ago
    Looks nice, thanks for open sourcing and sharing.

    I'm all in on Go and integrating AI up and down our systems for https://housecat.com/ and am currently familiar and happy with:

    https://github.com/boldsoftware/shelley -- full Go-based coding agent with LLM gateway.

    https://github.com/maragudk/gai -- provides Go interfaces around Anthropic / OpenAI / Google.

    Adding this, as well as Bifrost, to the list to look into.

    Any other Go-based AI / LLM tools folks are happy with?

    I'll second the request to add support for harnesses with subscriptions, specifically Claude Code, into the mix.

  • pizzafeelsright 15 minutes ago
    I have written and maintained AI proxies. They are not terribly complex, except for the inconsistent structure of input and output, which changes with each model and provider release. I figure that if there isn't a < 24 hour turnaround for new model integration, the project is not properly maintained.

    Governance is the biggest concern at this point: proper logging, plus integration with third-party services that provide inspection and DLP-style threat mitigation.
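
    A minimal sketch of that governance layer, assuming a pluggable inspector interface (all names here are illustrative, not from any specific product):

      // Log every proxied request and hand the body to a pluggable
      // inspector (e.g. a third-party DLP service) before forwarding.
      package gateway

      import (
          "bytes"
          "io"
          "log"
          "net/http"
      )

      // Inspector stands in for whatever DLP/inspection service you use.
      type Inspector interface {
          Allow(body []byte) (bool, error)
      }

      func Govern(next http.Handler, insp Inspector) http.Handler {
          return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              body, err := io.ReadAll(r.Body)
              if err != nil {
                  http.Error(w, "bad request", http.StatusBadRequest)
                  return
              }
              r.Body = io.NopCloser(bytes.NewReader(body)) // restore for upstream
              log.Printf("model request: path=%s bytes=%d", r.URL.Path, len(body))
              if ok, err := insp.Allow(body); err != nil || !ok {
                  http.Error(w, "blocked by policy", http.StatusForbidden)
                  return
              }
              next.ServeHTTP(w, r)
          })
      }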

  • crawdog 21 minutes ago
    I wrote a similar golang gateway, with the understanding that having solid API gateway features is important.

    https://sbproxy.dev - engine is fully open source.

    Another reason golang is interesting for a gateway is clear control of the supply chain at compile time. With tools like LiteLLM, supply chain attacks can have more impact at runtime; a compiled binary helps there.

  • mosselman 44 minutes ago
    Does this have a unified API? In playing around with some of these, including unified libraries to work with various providers, I've found you are, at some point, still forced to do provider-specific work for things such as setting temperatures, setting reasoning effort, setting tool choice modes, etc.

    What I'd like is for a proxy or library to provide a truly unified API where it will really let me integrate once and then never have to bother with provider quirks myself.
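
    As a hypothetical illustration of that surface (names made up, not from any real library), I mean something like a single request shape the gateway translates per provider:

      // A made-up unified request: one shape, quirks translated per provider.
      package unified

      type Message struct {
          Role    string
          Content string
      }

      type Request struct {
          Model           string   // e.g. "openai/gpt-4o"
          Messages        []Message
          Temperature     *float64 // nil = provider default
          ReasoningEffort string   // "low"/"medium"/"high", mapped per provider
          ToolChoice      string   // "auto"/"required"/"none", mapped per provider
      }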

    Also, are you planning on doing an open-source rug pull like so many projects out there, including litellm?

  • sowbug 40 minutes ago
    Are these kinds of libraries a temporary phenomenon? It strikes me as weird that providers haven't settled on a single API by now. Of course they aren't interested in making it easier for customers to switch away from them, but if a proprietary API was a critical part of your business plan, you probably weren't going to make it anyway.

    (I'm asking only about the compatibility layer; the other tracking features would be useful even if there were only one cloud LLM API.)

    • simonw 2 minutes ago
      I've been maintaining an abstraction layer over multiple providers for a couple of years now - https://llm.datasette.io/

      The best effort we have at defining a standard is OpenAI harmony/responses - https://developers.openai.com/cookbook/articles/openai-harmo... - but it hasn't seen much pickup. The older OpenAI Chat Completions API is much more of an ad-hoc standard: almost every provider ends up serving up a clone of it, albeit with frustrating differences, because there's no formal spec to work against.

      The key problem is that providers are still inventing new stuff, so committing to a standard doesn't work for them because it may not cover the next set of features.

      2025 was particularly turbulent because everyone was adding reasoning mechanisms to their APIs in subtly different shapes. Tool calls and response schemas (which are confusingly not always the same thing) have also had a lot of variance - some providers allow for multiple tool calls in the same response, for example.
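
      As a rough illustration of those subtly different shapes, here's the same "think harder" intent expressed for two providers (field names paraphrased from public docs; treat the details and model names as approximate):

        package main

        import "fmt"

        func main() {
            // OpenAI-style: a single enum knob on the request.
            openAIStyle := map[string]any{
                "model":            "o3-mini",
                "reasoning_effort": "high",
            }
            // Anthropic-style: a nested object with an explicit token budget.
            anthropicStyle := map[string]any{
                "model": "claude-sonnet-4",
                "thinking": map[string]any{
                    "type":          "enabled",
                    "budget_tokens": 2048,
                },
            }
            fmt.Println(openAIStyle, anthropicStyle)
        }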

      My hunch is we'll need abstraction layers for quite a while longer, because the shape of these APIs is still too frothy to support a standard that everyone can get behind without restricting their options for future products too much.

    • harikb 25 minutes ago
      The providers themselves can't keep this straight even within their own ecosystem. Plus everyone is running at a million miles/hour.

      For example, `Claude Code` used to set two specific beta headers with some version numbers in order for their Max subscription to be supported.

      OAuth tokens for the Max plan are different from how their API keys look. They look kind of similar, but have a specific prefix that these tools pre-validate.

      It is barely working at this point, even within a single provider.

  • driese 54 minutes ago
    Nice one! Let's say I'm serving local models via vLLM (because Ollama comes with huge performance hits): how would I implement that in gomodel?
    • devmor 41 minutes ago
      This is way more interesting to me as well. I have projects that use small limited-purpose language models that run on local network servers and something like this project would be a lot simpler than manually configuring API clients for each model in each project.
      • santiago-pl 8 minutes ago
        Thanks for raising it! Since vLLM has an OpenAI-compatible API (served on port 8000 by default, unlike Ollama's 11434), this should work for now:

          docker run --rm -p 8080:8080 \
            -e OPENAI_API_KEY="some-vllm-key-if-needed" \
            -e OPENAI_BASE_URL="http://host.docker.internal:8000/v1" \
            ...
            enterpilot/gomodel
        
        I'll add a more convenient way to configure it in the coming days.
  • pjmlp 2 hours ago
    To be expected, given that LiteLLM is implemented in Python.

    However, kudos for the project; we need more alternatives in compiled languages.

    • santiago-pl 2 hours ago
      Agree and thank you! Please let us know if you'd like to give it a try and if you miss any feature in GoModel.
    • goodkiwi 1 hour ago
      It's also badly implemented: everything is a global import. I had to stop using it.
  • Talderigi 2 hours ago
    Curious how the semantic caching layer works. Are you embedding requests on the gateway side and doing a vector similarity lookup before proxying? And if so, how do you handle cache invalidation when the underlying model changes or gets updated?
    • giorgi_pro 2 hours ago
      Hey, contributor here. That's right, GoModel embeds requests and does vector similarity lookup before proxying. Regarding the cache invalidation, there is no "purging" involved – the model is part of the namespace (params_hash includes the LLM model, path, guardrails hash, etc). TTL takes care of the cleanup later.
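
      A simplified sketch of the idea (not the actual GoModel code): the namespace key pins model/path/guardrails, so a model change naturally misses the old entries, and TTL ages them out:

        package cache

        import (
            "crypto/sha256"
            "encoding/hex"
            "math"
            "time"
        )

        type Entry struct {
            Embedding []float32
            Response  []byte
            ExpiresAt time.Time
        }

        // The model is part of the namespace, so switching models changes
        // the key and old entries are simply never matched again.
        func Namespace(model, path, guardrailsHash string) string {
            h := sha256.Sum256([]byte(model + "|" + path + "|" + guardrailsHash))
            return hex.EncodeToString(h[:])
        }

        // Lookup returns a cached response when a stored embedding in the
        // same namespace is similar enough; expired entries are skipped.
        func Lookup(store map[string][]Entry, ns string, emb []float32, threshold float32) ([]byte, bool) {
            now := time.Now()
            for _, e := range store[ns] {
                if now.After(e.ExpiresAt) {
                    continue // TTL takes care of cleanup
                }
                if cosine(emb, e.Embedding) >= threshold {
                    return e.Response, true
                }
            }
            return nil, false
        }

        func cosine(a, b []float32) float32 {
            var dot, na, nb float64
            for i := range a {
                dot += float64(a[i]) * float64(b[i])
                na += float64(a[i]) * float64(a[i])
                nb += float64(b[i]) * float64(b[i])
            }
            if na == 0 || nb == 0 {
                return 0
            }
            return float32(dot / (math.Sqrt(na) * math.Sqrt(nb)))
        }
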
  • indigodaddy 1 hour ago
    Any plans for AI provider subscription compatibility? E.g., ChatGPT, GH Copilot, etc.? (à la opencode)
    • santiago-pl 1 hour ago
      You are not the first person who has asked about it.

      It looks like a useful feature to have. Therefore, I'll dig into this topic more broadly over the next few days and let you know here whether, and possibly when, we plan to add it.

  • rvz 1 hour ago
    I don't see any significant advantage over mature routers like Bifrost.

    Are there even any benchmarks?

    • lackoftactics 0 minutes ago
      It's a heavily vibe-coded project: just a proxy, with terrible benchmark design. The benchmarks are basically vibe-coded too, and they lie through ignorance: they hit a mocked, super-fast endpoint without using the full power of litellm across multiple processes.

      Too bad so many people fall for it.

      Other than that, the "it's faster" claim is almost useless, since this workload will be IO-bound, not CPU-bound.

  • anilgulecha 2 hours ago
    How does this compare to Bifrost, another golang router?
    • santiago-pl 2 hours ago
      First of all, GoModel doesn't have a separate private repository behind a paywall/license.

      It's more lightweight and simpler. The Bifrost docker image looks 4x larger, at least for now.

      IMO GoModel is more convenient for debugging and for seeing how your request flows through different layers of AI Gateways in the Audit Logs.

      • anilgulecha 2 hours ago
        That would be valuable if there's a commitment to never have a non-open-source offering under GoModel. If so, you could document it in the repo.
  • tahosin 1 hour ago
    This is really useful. I've been building an AI platform (HOCKS AI) where I route different tasks to different providers — free OpenRouter models for chat/code gen, Gemini for vision tasks. The biggest pain point has been exactly what you describe: switching models without changing app code.

    One thing I'd love to see is built-in cost tracking per model/route. When you're mixing free and paid models, knowing exactly where your spend goes is critical. Do you have plans for that in the dashboard?

    • santiago-pl 1 hour ago
      This comment looks AI-generated.

      However, IIUC what you're asking for, it's already in the dashboard! Check the Usage page.