4 comments

  • _moog 1 day ago
    I recently started diving into LLMs a few weeks ago, and one thing that immediately caught me off guard was how little standardization there is across all the various pieces you would use to build a chat stack.

    Want to swap out your client for a different one? Good luck - it probably expects a completely different schema. Trying a new model? Hope you're ready to deal with a different chat template. It felt like every layer had its own way of doing things, which made understanding the flow pretty frustrating for a newbie.

    So I sketched out a diagram that maps out what (rough) schema is being used at each step of the process - from the initial request all the way through Ollama and an MCP server with OpenAI-compatible endpoints - showing what transformations occur where.

    Figured I'd share it as it may help someone else.

    https://moog.sh/posts/openai_ollama_mcp_flow.html
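    For anyone who hasn't hit it yet, the first hop in that flow is the OpenAI-style request body, which Ollama's OpenAI-compatible endpoint also accepts. A minimal sketch (the model name and tool definition here are just placeholders):

```python
import json

# Rough sketch of an OpenAI-style chat completion request body, as accepted
# by OpenAI-compatible endpoints (e.g. Ollama's /v1/chat/completions).
# Model name and tool definition are placeholders.
request_body = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in London?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request_body, indent=2))
```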

    Somewhat ironically, Claude built the JS hooks for my SVG with about five minutes of prompting.

    • 1dom 18 hours ago
      I found this really helpful. I've read a few different bits around this area, and being able to quickly click and scroll around this has confirmed my understanding of it now - thanks!

      I thought it funny to think how this is all to give the impression to the user that the AI, for example, _knows_ the weather. The AI doesn't: it's just getting it from a weather API and wrapping some text around it.

      Now, imagine being given a requirement 5 years ago like: "When the user asks, we need to be able to show them the weather from this API, and wrap some text around it". Imagine something like your diagram came back as the proposed solution :| Not at all a criticism of any of your stuff, but it blows my mind how tech develops.

    • youdont 1 day ago
      Have you tried BAML? We use it to manage APIs and clients, as well as prompts and types. It gives great low-level control over your prompts and logic, but acts as a nice standardisation layer.
      • redhale 1 day ago
        +1 for BAML. I find that the "prompts as typed functions" concept really simplifies the mental model here, making LLM apps easier to reason about.
      • _moog 1 day ago
        That's going to be super useful for some of the high-level prompt-testing work I'm doing. Thanks!

        I'm also getting more into the lower-level LLM fine-tuning, training on custom chat templates, etc. which is more of where the diagram was needed.
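        To make "chat template" concrete: it's just the serialization step that flattens the messages array into the single string the model is actually trained on. A hand-rolled sketch of a ChatML-style template (several open models use this token convention; real templates ship with the model, usually as Jinja in the tokenizer config):

```python
# Hand-rolled sketch of a ChatML-style chat template: flattens the messages
# array into the single prompt string the model actually sees.
# Real templates ship with the model (e.g. as Jinja in the tokenizer config).
def render_chatml(messages):
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave an open assistant turn for the model to complete.
    return out + "<|im_start|>assistant\n"

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])
```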

  • upghost 1 day ago
    I think it's interesting and odd that tool calling took the form of this gnarly json blob. I much prefer the NexusRaven[1] style where you provide python function stubs with docstrings and get back python function invocations with the arguments populated. Of course I don't really understand why MCP is popular over REST or CLI, either.

    [1]: https://github.com/nexusflowai/NexusRaven-V2
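    To illustrate the style (my own sketch, not NexusRaven's exact prompt format): you show the model Python signatures with docstrings, and it answers with a plain call expression.

```python
import inspect

# Sketch of the "function stubs as prompt" style (not NexusRaven's exact
# format): the model is shown Python signatures + docstrings and replies
# with a call expression it wants executed.
def get_weather(city: str, units: str = "metric"):
    """Return the current weather for a city."""

def prompt_for(funcs):
    stubs = []
    for f in funcs:
        stubs.append(f'def {f.__name__}{inspect.signature(f)}:\n    """{f.__doc__}"""')
    return "\n\n".join(stubs)

prompt = prompt_for([get_weather])
# A model trained on this style would answer with something like:
#   get_weather(city="London")
```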

    • max-privatevoid 1 day ago
      The actual API call is still going to be JSON. How do you deal with that? Pack your Python function definitions into an array of huge opaque strings? And who would want to write a parser for that?
      • upghost 1 day ago
        I'm sure you realize it gets reassembled into "huge opaque strings" when it is fed into the LLM as context. The arbitrary transport of the context as JSON is just a bit of protocol theater.

        You don't really have to parse the output, Python already has a parser in the form of the AST library[1].

        But I get your drift. Depending on your workflow this could seem like more work.

        [1]: https://docs.python.org/3/library/ast.html#module-ast
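        Concretely, something like this sketch covers the simple case (function name plus literal arguments):

```python
import ast

def parse_tool_call(text):
    """Parse a Python-style tool call like get_weather(city="London")."""
    tree = ast.parse(text.strip(), mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Name):
        raise ValueError("not a simple function call")
    name = call.func.id
    # literal_eval accepts AST nodes directly, and only allows literals,
    # so arbitrary expressions in the arguments are rejected.
    args = [ast.literal_eval(a) for a in call.args]
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return name, args, kwargs

name, args, kwargs = parse_tool_call('get_weather(city="London", units="metric")')
```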

        • max-privatevoid 1 day ago
          The inference engine can do whatever it wants. This is already the case. The actual format of the tool call text varies by model, and the inference engine handles the translation from/to the JSON representation, that's the least of its concerns.

          What I don't want to happen is for some shitty webdev who writes an AI client in JavaScript to be forced to write a custom parser for some bespoke tool call language (call it "MLML", the Machine Learning Markup Language, to be superseded by YAMLML and then YAYAMLML, ...), or god forbid, somehow embed a WASM build of Python in their project to be able to `import ast`, instead of just parsing JSON and looking at the fields of the resulting object.
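          And for reference, the JSON representation really is that boring to consume; the OpenAI-style shape looks roughly like this (sketch; note the arguments field arrives as a JSON-encoded string, so it's decoded twice):

```python
import json

# Sketch of consuming an OpenAI-style tool call from a response message.
# The "arguments" field is itself a JSON-encoded string.
message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"city": "London"}',
            },
        }
    ],
}

call = message["tool_calls"][0]["function"]
name = call["name"]
arguments = json.loads(call["arguments"])
```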

          • upghost 1 day ago
            Yeah that's fair, I concede the point.

            I got a good snicker out of the YAYAMLMLOLOL :D

            Seems like it's tools calling tools all the way down heh

  • rapidaneurism 1 day ago
    How do you pass a user token to MCP calls? Do you hand the token to the LLM and expect it to fill an argument?
    • therealpygon 1 day ago
      Easy. The LLM is never making MCP calls. The LLM simply identifies an endpoint it thinks would be useful and provides the required request parameters (like the text to be searched for or processed). As far as an LLM is concerned, MCP calls are handled “client-side” (from its perspective). This is why you configure MCP servers in your client and not on the server. (Yes, some providers allow you to configure MCP servers, but that is just a layer between you and the LLM and not a feature of the LLM itself.)

      So back to the credentials: that means the credentials are managed “client-side” and the LLM never needs to see any of that. Think of it like this: say you set up an MCP url (my-mcp.com); the LLM knows nothing of this url, or what MCP server you use. So if instead you called my-mcp.com/<some-long-string>/, the LLM still doesn’t know. Now, instead of a URL parameter, your tool calls the MCP with an Authorization header (Bearer <token>); the LLM still doesn’t know, and you’ve accessed an OAuth endpoint.
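      In code, the token only ever lives in the client's HTTP layer. A sketch (URL, token, and tool name are placeholders; the body follows the MCP tools/call JSON-RPC shape roughly):

```python
import json
import urllib.request

# Sketch: the MCP client attaches credentials to the HTTP request; the LLM
# only ever sees the tool's name/arguments, never the URL or the token.
# URL, token, and tool name below are placeholders.
def build_mcp_request(url, token, tool_name, arguments):
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # never in the model's context
        },
    )

req = build_mcp_request("https://my-mcp.example/", "secret-token",
                        "search", {"query": "weather"})
```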

    • asabla 1 day ago
      I know I'm a bit late. But for MCP servers running over HTTP/custom transports it should use OAuth 2.0. If it's served via stdio, it should use configuration files and/or environment variables.

      ref: https://modelcontextprotocol.io/specification/2025-03-26/bas...

    • theblazehen 1 day ago
      Usually via environment variables in the MCP server definition, or a config file
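      For example, in the common client config format (a sketch; the server name, command, package, and token are all placeholders):

```json
{
  "mcpServers": {
    "weather": {
      "command": "npx",
      "args": ["-y", "my-weather-mcp"],
      "env": {
        "WEATHER_API_TOKEN": "..."
      }
    }
  }
}
```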
  • nullorempty 1 day ago
    I don't think Spring is well regarded on HN.
    • sorokod 1 day ago
      "Just write this...." adds an annotation

      One of the many issues with Spring is that the abstractions it provides are extremely leaky [1]. They leak frequently, and when they do, an engineer is faced with the need to comprehend a pile of technology[2] that was supposed to be abstracted away in the first place.

      - [1] https://en.wikipedia.org/wiki/Leaky_abstraction

      - [2] https://github.com/spring-projects/spring-ai

      • xienze 1 day ago
        In what ways are the abstractions leaky? @Tool or @GetMapping make no demands on how to implement “this is a tool” or “this is a GET REST endpoint.” That they’re coupled with Spring (or rather, Spring is the only implementation for the semantics of these annotations) doesn’t constitute a leaky abstraction.
        • xyzzy123 1 day ago
          This is fair. I think the complaint is that Spring is _beautiful_ in small to medium-sized demos, but in any sufficiently large application you always seem to need to dig in, figure out what Spring is doing inside those annotations, and do something unspeakable involving the giant stack of factory factory context thread local bean counter manager handler method proxy managers etc.

          Also Spring is a kind of franchise or brand, and the individual projects under the umbrella vary a lot in quality.

          • delecti 1 day ago
            Just about any tool will require a bunch of work at some point as you scale. Some front-load that, and some make it easy to get started but then you hit a point where you need to peek under the covers. Personally I prefer the latter, though I'm sure there's a lot of Stockholm syndrome involved in how I feel about Spring. And Spring's popularity means you're probably not the only one to hit any given problem.
            • sorokod 1 day ago
              This is a rational attitude, but my experience is that engineers do not get to "the latter" at their leisure. What typically happens is that peeking under the covers is forced on them, along with a tight timeline.
        • layer8 1 day ago
          The precise semantics usually aren’t that well specified, and debugging is difficult when something goes wrong. Annotation-based frameworks are generally more difficult to reason about than libraries you only call in to. One reason is that with frameworks you don’t know very well which parts of the framework code are all involved in calling your code, whereas with libraries the answer usually is “the parts you call in to”.

          Spring has more “synergy” in a sense than using a bunch of separate libraries, but because of that it’s also a big ball of mud that your code sits on top of, without your code ever really being in control.

          • physix 1 day ago
            I've been actively working with Spring since about 2008. About 3-4 times a year, I cuss and curse some strange side effects that occur during refactorings. And in some areas we've painted ourselves into a corner.

            But all in all, it's a great set of frameworks in the enterprise Java/Kotlin space. I'd say it's that synergy, which makes it worth the while.

            I'm curious, though. Is the use of dependency injection part of the portfolio of criticisms towards Spring?

      • th0ma5 1 day ago
        I think about this occasionally, trying to rationalize it. I see similar patterns in other things like R and Julia: they design something in the environment to seem like a composable tool, and maybe it is, but only within two or three specific compositions, yet the way the environment is described sure seems to imply some kind of universality. It just doesn't work. Some even seem to keep patching every leak (maybe Spring means "spring a leak"? Haha), and there's a sunk-cost-fallacy thing with an immense documentation page.
        • sorokod 1 day ago
          There is a similarity between Spring and "buy now, pay later" schemes. You do often get a working feature quickly, while the price of evolving and maintaining that feature is spread out over the future.

          This is the best I can do for rationalizing Spring.

    • greenchair 1 day ago
      Nothing is well regarded on HN so that's fine.
    • esafak 1 day ago
      It's a lumbering framework like Django. People opt for lighter and simpler these days.