Language models on the command line

(simonwillison.net)

56 points | by rednafi 30 days ago

6 comments

  • simonw 30 days ago
    This was a workshop I gave about my https://llm.datasette.io/ CLI tool.

    What other CLI tools are people using to work with LLMs in the terminal?

    There's one comment here about https://github.com/paul-gauthier/aider and Ollama is probably the most widely used CLI tool at the moment: https://github.com/ollama/ollama/blob/main/README.md#quickst...

    • evmar 30 days ago
      I am using a hacky one I wrote myself.

      I looked at llm but it doesn't appear to have a mechanism for multi-shot prompting, where you provide both prompts and responses within your query. (Ref https://platform.openai.com/docs/guides/prompt-engineering/t... .) Maybe take this as a feature request?

      It feels like the 'template' system in llm might be able to encompass this, but the docs don't appear to provide a reference for the YAML format, only examples. I guess that is another feature request, sorry!

      (BTW if you haven't seen https://docs.divio.com/documentation-system/ it really changed how I think about documentation)

      • simonw 30 days ago
        Yeah, I've been thinking a bit about the multi-shot thing. I've had great results from Claude by "faking" the previous conversation to include example question/answer pairs.

        With LLM you can do that using the undocumented Python Conversation API, but it's undocumented for a reason (I don't think it's good enough yet). You could also fake a previous conversation through the CLI tool but that is VERY undocumented - you would have to write fake rows into the SQLite database!
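        The "faking" approach boils down to ordinary chat messages. A minimal sketch (a hypothetical helper, not part of llm's API) that interleaves example question/answer pairs as fake history before the real question:

```python
# Hypothetical helper (not llm's API): fake a prior conversation by
# interleaving example (question, answer) pairs before the real prompt.
def build_multishot_messages(examples, question):
    messages = []
    for q, a in examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_multishot_messages(
    [("Translate to French: cheese", "fromage"),
     ("Translate to French: bread", "pain")],
    "Translate to French: apple",
)
# msgs: two faked user/assistant turns, then the real question last
```

        The resulting list is what you would pass as the messages array to a chat-style API; the model treats the faked turns as genuine conversation history.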

        I also want to support the Claude feature where you can prefill the start of the response - it's amazing for things like forcing HTML output by prefilling an HTML doctype.
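        A sketch of that prefill trick, assuming the Anthropic Messages API convention that a trailing assistant turn is treated as the already-written start of the model's reply (the model continues from it):

```python
# Sketch of Claude-style response prefilling (assumes the Anthropic
# Messages API convention: a trailing assistant turn becomes the start
# of the model's reply, so it continues from that text).
def prefill_messages(prompt, prefill):
    return [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": prefill},
    ]

# Forcing HTML output by prefilling the doctype:
prefilled = prefill_messages(
    "Write a landing page for a bakery.", "<!DOCTYPE html>"
)
```

        Because the response must continue from `<!DOCTYPE html>`, the model can't open with prose like "Sure, here's a landing page...".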

        • evmar 30 days ago
          In case it helps any, here are some more details about what I used it for. (Summary: providing examples of a specific translation task, to make it use a particular API in the output.) https://inuh.net/@evmar/112001414385042731

          And what I expected of llm is for the template file to (optionally) contain an array of prompt/response pairs. You could even imagine the --save flag constructing one from an ongoing conversation of llm -c maybe.
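          Something like this hypothetical template file - not real llm syntax, just an illustration of the feature request, with made-up `examples` and `prompt` keys:

```yaml
# Hypothetical llm template format (NOT currently supported syntax):
# example prompt/response pairs sent as faked history, then the prompt.
examples:
  - prompt: "Translate to the log API: print('hi')"
    response: "log.info('hi')"
  - prompt: "Translate to the log API: print(x)"
    response: "log.info(str(x))"
prompt: "Translate to the log API: $input"
```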

    • pjot 30 days ago
      OpenAI's Python client actually includes a CLI - I'm not sure if it's really documented anywhere.

        $ openai api chat.completions.create -g 'user' 'say hello' -m 'gpt-3.5-turbo'

        Hello! How can I assist you today?
    • throwup238 30 days ago
      > What other CLI tools are people using to work with LLMs in the terminal?

      I use aichat: https://github.com/sigoden/aichat

      I especially like the terminal integration where I can type a natural language request at the terminal and press Alt+E to have it converted to a command to run.

    • saulpw 30 days ago
      I've appreciated https://github.com/cthulahoops/chatcli which is simple and straightforward.
    • bt1a 30 days ago
      (aichat in a tangential comment looks much better.) Besides aider and ollama, I think shell_gpt https://github.com/TheR1D/shell_gpt is great for quick chats / lookups. Being able to quickly cat files into REPL sessions saves a lot of time.

      I need to integrate distil whisper large v3, aider, and shell_gpt to tidy up a lot of my disjointed LLM use. As someone else mentioned, the commits created by aider allow me to "skip" some intermediate steps that would be required when working on coding tasks with other frameworks.

  • dvt 30 days ago
    Fantastic work here! I'm working on a local tool, affectionately called Descartes, which does something similar—but with a spotlight-like UX for the non-hackers out there.

    I do think that LLMs have the potential to fundamentally change the way we interact with our computers. There are a lot of edge cases (especially when combining it with the inexact science of screen readers), but it's pretty mind-blowing when it works. I'm working on a blog post, but here's my little proof of concept working on both Windows in a web browser [1] and macOS in the Finder [2].

    [1] https://vimeo.com/931907811

    [2] https://dvt.name/wp-content/uploads/2024/04/image-11.png

  • bagels 30 days ago
    I wrote one to help with creating command line commands. It just hits the OpenAI API with a prompt asking for only a bash code block for whatever is passed in, and then it prints the command out. I wrote it because I can never remember all the weird command args for all the tools.

    $ bashy find large files over 10 gb

    find / -type f -size +10G
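    A tool like that is a thin wrapper around the API. A hedged sketch of the request-building side (hypothetical code, not the author's; model name and system prompt are assumptions):

```python
# Hypothetical sketch of a "bashy"-style wrapper (not the author's code).
# It builds a chat-completions request asking for a bare bash command;
# the actual network call (e.g. via the openai package) is left out.
def build_request(task_words):
    return {
        "model": "gpt-3.5-turbo",  # assumed model choice
        "messages": [
            {"role": "system",
             "content": "Reply with a single bash command only - no "
                        "explanation, no markdown code fences."},
            {"role": "user", "content": " ".join(task_words)},
        ],
    }

# e.g. the argv tail from: bashy find large files over 10 gb
req = build_request(["find", "large", "files", "over", "10", "gb"])
```

    The system message does the real work: it keeps the reply to a single printable command instead of a chatty explanation.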

    • QuiDortDine 30 days ago
      Do you have a followup command to execute the suggested command?
      • bagels 30 days ago
        No, I want to review or tweak them, just in case it's trying to do something bad. It's usually pretty good though.
      • sdf4j 30 days ago
        $ bashy evaluate the previous command output
  • liamYC 30 days ago
    This is awesome, thanks for sharing. I find this kind of tool really useful, Aider in particular. I made my own CLI tool for interacting with GPT; with the -c flag it's great for generating code, especially bash commands I've forgotten.

    https://github.com/ljsimpkin/chat-gpt-cli-tool

  • bearjaws 30 days ago
    Aider continues to be the best way to interact with LLMs while coding, and it's a command-line tool.

    Copilot is pretty good, but the change > commit > QA process that Aider forces you through is really powerful.

  • behnamoh 30 days ago
    I wish llm were more stable, but unfortunately things just kept breaking out of the blue without me touching any settings of the program. I often had to reinstall the package, but finally I gave up and implemented my own.
    • simonw 30 days ago
      Which plugins are you using? Did you install via pipx or Homebrew or something else?
      • behnamoh 30 days ago
        I installed via pipx and used AnthropicAI's plugin to use Claude. After two weeks of working perfectly, it suddenly started giving me the "this model is invalid" error.

        PS: I appreciate the work put into llm, though - it's a neat program I used with my Automator scripts to bring LLMs to macOS before Apple Intelligence was announced. I just wish stability weren't a concern.

        • simonw 30 days ago
          That's weird. Were you using llm-claude or llm-claude-3?
        • foobarqux 30 days ago
          The plugins seem to need to be reinstalled after every upgrade.