Building an AI agent inside a 7-year-old Rails monolith

(catalinionescu.dev)

33 points | by cionescu1 2 hours ago

5 comments

  • pell 49 minutes ago
    Was there any concern about giving the LLM access to this return data? Reading your article, I wondered whether there could be an approach that limits the LLM to running the function calls without ever seeing the full output, e.g., seeing only the start of a JSON string with a status like “success” or “not found”. But I guess it would be complicated to keep a continuous conversation going that way.
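
    A minimal sketch of that idea, assuming a status-only envelope around tool calls; execute_tool and the RESULTS store are hypothetical, not from the article:

        # Sketch: the model sees a status plus an opaque handle, never the payload.
        require "securerandom"
        require "json"

        RESULTS = {} # server-side store for full tool outputs

        def run_tool_for_llm(name, args)
          payload = execute_tool(name, args) # assumed app-specific dispatcher
          handle  = SecureRandom.uuid
          RESULTS[handle] = payload          # kept out of the prompt entirely
          { status: payload ? "success" : "not_found", ref: handle }.to_json
        end

    Later tool calls could accept ref and resolve RESULTS[ref] server-side, which is exactly why a continued conversation gets awkward: the model can route handles around but never reason over the data itself.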
  • sidd22 39 minutes ago
    Hey, interesting read. I am working on a product in the Agent <> Tool layer. Would you be open to a quick chat?
  • tovej 9 minutes ago
    If all this does is give you the data from a contact API, why not just let the users directly interact with the API? The LLM is just extra bloat in this case.

    Surely a fuzzy search by name or some other field is a much better UI for this.
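
    As a concrete version of that suggestion, assuming a Rails Contact model and a Postgres backend (hypothetical, not from the article):

        # Sketch: direct fuzzy lookup endpoint, no LLM in the loop.
        # ILIKE is Postgres-specific; sanitize_sql_like escapes % and _.
        class ContactsController < ApplicationController
          def index
            pattern = "%#{Contact.sanitize_sql_like(params[:q].to_s)}%"
            render json: Contact.where("name ILIKE ?", pattern).limit(20)
          end
        end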

  • MangoToupe 46 minutes ago
    Bruh, this cannot seriously be considered interesting by Hacker News guidelines. Where's the beef? Can I submit my Instagram client for points next?
    • nomilk 26 minutes ago
      I found it interesting because they:

      - Made a RAG in ~50 lines of Ruby (practical and efficient)

      - Performed authorization on chunks in 2 lines of code (!!)

      - Offloaded retrieval to Algolia. Since a RAG is essentially LLM + retriever, the retriever typically ends up being most of the work, so using an existing search tool (rather than setting up a dedicated vector DB) could save a lot of time/hassle when building a RAG (rough sketch of that shape below).
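
      A minimal sketch of that shape, assuming Algolia's REST query endpoint and OpenAI's chat completions API; the "contacts" index, record fields, and visible_to? policy check are hypothetical stand-ins, not the article's code:

          # Sketch: Algolia as the RAG retriever, with per-user chunk authorization.
          # Assumptions: a "contacts" Algolia index, a User#visible_to? policy,
          # and credentials in ENV. Not the article's code.
          require "net/http"
          require "json"

          def search_algolia(query)
            app_id  = ENV.fetch("ALGOLIA_APP_ID")
            api_key = ENV.fetch("ALGOLIA_API_KEY")
            uri = URI("https://#{app_id}-dsn.algolia.net/1/indexes/contacts/query")
            req = Net::HTTP::Post.new(uri, "X-Algolia-Application-Id" => app_id,
                                           "X-Algolia-API-Key"        => api_key,
                                           "Content-Type"             => "application/json")
            req.body = { query: query }.to_json
            res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
            JSON.parse(res.body)["hits"]
          end

          def answer(user, question)
            # The "2 lines of authorization": filter retrieved chunks per user.
            hits = search_algolia(question)
            hits = hits.select { |h| user.visible_to?(h) } # hypothetical policy check

            context = hits.first(5).map(&:to_json).join("\n")
            uri = URI("https://api.openai.com/v1/chat/completions")
            req = Net::HTTP::Post.new(uri, "Authorization" => "Bearer #{ENV.fetch('OPENAI_API_KEY')}",
                                           "Content-Type"  => "application/json")
            req.body = { model: "gpt-4o-mini",
                         messages: [
                           { role: "system", content: "Answer using only this context:\n#{context}" },
                           { role: "user",   content: question }
                         ] }.to_json
            res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
            JSON.parse(res.body).dig("choices", 0, "message", "content")
          end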