5 comments

  • enricoros 403 days ago
    What does it take to make a basic ChatGPT-like frontend, with code highlighting, run-in-sandbox, drop-files, and 'acting' in prompts? Clone away and enjoy. First-time poster
    • anonzzzies 402 days ago
      Thanks, it is good :) You don’t need the server side though; you can just call the OpenAI APIs straight from the client. Makes things easier!
      • capableweb 402 days ago
        It seems to support both? First it tries to load the key from an environment variable, and if it can't, it'll ask for it client-side.

        Nonetheless, if you're building a project for others, you most likely don't want the secret key to be public, which it would be if you embedded it in the client-side code.

        • anonzzzies 402 days ago
          > It seems to support both?

          Ah, only saw the api one!

      • yodon 402 days ago
        If you call the OpenAI APIs straight from the frontend, you are likely leaking your API keys to visitors, who can then use your keys (and API key limits) for their own purposes.
        • anonzzzies 402 days ago
          Nah, you ask them to enter their own key; if you use yours, then yes, backend only. In this case the author isn't using his, you have to bring your own, so frontend is fine.
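          A minimal sketch of that bring-your-own-key pattern (not the posted app's actual code; the endpoint and payload shape follow OpenAI's public chat completions API, and `buildChatRequest` is a hypothetical helper):

```typescript
// Build a fetch request for OpenAI's chat completions endpoint using a key
// the user typed in — no server-side secret involved.
function buildChatRequest(userKey: string, userMessage: string) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${userKey}`, // the user's own key, not an embedded secret
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: userMessage }],
    }),
  };
}

// Usage: const { url, ...init } = buildChatRequest(key, "Hello");
// const res = await fetch(url, init);
```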
    • nico 402 days ago
      What do you mean by “acting”?

      That it pretends to be a Scientist? Or that it can perform actions (like sending an email or searching the web)?

      If the latter, do you have more info/docs about it? Didn’t see it on the roadmap.

      • capableweb 402 days ago
        You can kind of program something around the API to make it able to perform actions. That's the way I'm guessing Microsoft did it with Bing. Here is an example:

        System prompt:

            - You are a text-based AI that delegates to other AIs.
            - You only respond in JSON with two keys `delegate_to` and `parameters`.
            - `delegate_to` defines what other AI you should delegate to.
            - `parameters` are parameters to send to that AI.
            - Here is the list of possible delegates
            - delegate_to: email_ai, parameters: $to $subject $body, used for sending emails
            - delegate_to: calendar_ai, parameters: $time, $date, $title, used for creating events in a calendar
            - delegate_to: search_ai, parameters: $search-term
            - Be creative and helpful.
            - Your goal is to take user input and successfully delegate the task you receive from the user to the correct AI.
            - Your initial message to the user should welcome them and ask them what you can do for them.
            
        Example conversation:

            > {"message": "Welcome! I'm your AI assistant. What can I do for you today?"}
            
            > User: Send email to john@example.com asking if he wants to have dinner tonight
            
            {
              "delegate_to": "email_ai",
              "parameters": {
                "to": "john@example.com",
                "subject": "Dinner Invitation",
                "body": "Hi John, would you like to have dinner together tonight? Let me know your thoughts. Best regards."
              }
            }
        
        Then you'd make your program read the JSON messages it generates and perform the correct action.
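        A sketch of that dispatch step (delegate names and parameter shapes mirror the example prompt above; the handler bodies are hypothetical stand-ins):

```typescript
// Parse the model's JSON reply and route it to the matching handler.
type Delegation = { delegate_to: string; parameters: Record<string, string> };

const handlers: Record<string, (p: Record<string, string>) => string> = {
  email_ai: (p) => `sending email to ${p.to}: ${p.subject}`,
  calendar_ai: (p) => `creating event "${p.title}" on ${p.date} at ${p.time}`,
  search_ai: (p) => `searching for ${p["search-term"]}`,
};

function dispatch(modelOutput: string): string {
  let msg: Delegation;
  try {
    msg = JSON.parse(modelOutput); // the model was told to reply in JSON...
  } catch {
    return "error: model did not return valid JSON"; // ...but it won't always comply
  }
  const handler = handlers[msg.delegate_to];
  return handler ? handler(msg.parameters) : `error: unknown delegate ${msg.delegate_to}`;
}
```

        In practice you'd also validate the parameters before acting on them, since nothing forces the model to fill them all in.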
        • nico 402 days ago
          Nice. How would you go about providing a dynamic list of delegates? Could it work to just give one delegate that can provide a list of delegates with a description of the actions they can perform (then that delegate can query a db and return a list)?

          Re-reading, I’m guessing the prompt could also be dynamically generated to include the most relevant delegates.

          • sprobertson 402 days ago
            > the prompt could also be dynamically generated to include the most relevant delegates

            Yup that's how I'm doing it - the system prompt is re-generated for every request, and that includes getting a list of available delegates and the arguments they accept. I only have 10 so I'm just listing all of them, but if you had some huge number you could combine that with embeddings / vector lookup.
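            A sketch of that regeneration step (the `Delegate` record shape and registry are made up for illustration):

```typescript
// Rebuild the system prompt on every request from whatever delegates
// are currently registered.
interface Delegate {
  name: string;
  params: string[];
  description: string;
}

function buildSystemPrompt(delegates: Delegate[]): string {
  const delegateLines = delegates.map(
    (d) =>
      `- delegate_to: ${d.name}, parameters: ${d.params
        .map((p) => "$" + p)
        .join(" ")}, used for ${d.description}`
  );
  return [
    "- You are a text-based AI that delegates to other AIs.",
    "- You only respond in JSON with two keys `delegate_to` and `parameters`.",
    "- Here is the list of possible delegates:",
    ...delegateLines,
  ].join("\n");
}
```

            With a large registry you'd first rank the descriptions against the user's request (e.g. via embeddings) and include only the top few.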

          • behnamoh 402 days ago
            At that point we're basically back to the `AI is just nested if-else expressions` story. The only difference is that now there's a language reader on top that understands the semantics of your language. But actors (or agents, in LangChain lingo) are just if-else. The tools you connect them to must be developed separately.
            • nico 402 days ago
              Sure, you could also say that human language/action capacity is just a biological LLM with some ifs on top that give it access to actions.

              In the case you describe, you can have an LLM write the tools.

              Yes, the first tools and bridge code might need to be manually built. But after that it could be LLMs all the way down.

              Kind of similar to writing a new programming language. At first you write it in another language but after compiling it for the first time, you can then write the language in the language itself.

            • enricoros 402 days ago
              Very good point. Once you start breaking down an LLM into presets/delegators, you basically introduce if-else, with all the problems of that split: lack of visibility, local vs. global optimization, lack of control and predictability, asymmetry of information. I wonder if the current agents approach is a stopgap solution.
    • justplay 402 days ago
      • behnamoh 402 days ago
        This hijacks the Back button on Chrome. Not trustworthy.
  • tonyoconnell 402 days ago
    Here's one made with Astro https://github.com/ddiu8081/chatgpt-demo There's a fork in Chinese somewhere with loads more features.
  • yayr 402 days ago
    nice, maybe you can make the initializing prompts in

    https://github.com/enricoros/nextjs-chatgpt-app/blob/main/pa...

    transparent to the user and even changeable by them.

    • enricoros 402 days ago
      When the user selects one of those, any query will reveal the prompt. Can be changed but the change won't be persisted yet. We added a 'Custom' preset today that requires editing. Agree with your point tho - rn editing happens via 'forking' :)
  • phil42 402 days ago
    But I need a GPT-4 API key for that, right? The normal API keys don't work?
    • capableweb 402 days ago
      The key for GPT-3 (`gpt-3.5-turbo` is the actual model ID) is the same as for GPT-4. You specify the model when you make the request to the API.
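      Sketched out, the only thing that differs between the two models is the `model` field in the request body (the `chatBody` helper here is hypothetical):

```typescript
// Same API key in the Authorization header; only the model field changes.
function chatBody(model: string, content: string): string {
  return JSON.stringify({
    model, // e.g. "gpt-3.5-turbo" or "gpt-4"
    messages: [{ role: "user", content }],
  });
}
```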
      • layer8 402 days ago
        Does this work if you only have GPT-4 access via a ChatGPT subscription (ChatGPT Plus)?
        • fredliu 402 days ago
          It doesn't. Even if you have ChatGPT Plus, the key you have only supports 3.5 unless you are explicitly given the gpt-4 key.
          • capableweb 402 days ago
            It's true that you need explicit access to GPT-4 to use it via the API, but again, it's not a different key. I'm using the same API key for accessing `gpt-3.5-turbo` as I use for `gpt-4`.
          • fredliu 402 days ago
            There seems to be already a PR for adding 3.5 support. The community and speed of change in this field is mind blowing!
            • capableweb 402 days ago
              Oh yeah, changing 33 lines is truly revolutionary!
              • enricoros 402 days ago
                OP: I went to sleep with this as my 1st post and 1 star, and woke up with a PR for 3.5-Turbo pending. Community for the win!
    • tonyoconnell 402 days ago
      Seems to fall back to GPT-3 if you haven't got access to the GPT-4 API yet
      • fredliu 402 days ago
        Hmm... doesn't seem to be the case; when I provided my GPT-3.5 Turbo key, the error message indicated the gpt-4 model doesn't exist.
        • capableweb 402 days ago
          Change this https://github.com/enricoros/nextjs-chatgpt-app/blob/466a366... to say "gpt-3.5-turbo" and it should work for you
          • trendoid 402 days ago
            I updated the model in that file and deployed again using Vercel, but still get the same error.
            • capableweb 402 days ago
              It seems like it's setting gpt-4 here as well: https://github.com/enricoros/nextjs-chatgpt-app/blob/466a366...

              But come on, read through the source, look for the issue, I'm sure you can track down at least something :)

              • trendoid 402 days ago
                Sorry, as soon as posted my comment, I looked again and found the other occurrence. It's quite late here and I just followed your advice blindly :)

                Works now.

                • enricoros 402 days ago
                  Hey guys, op here. Merged the PR for 3.5-Turbo support and cleaned up the code (very good observations on all the places 'gpt-4' was hardcoded). Combo box to select the model. GPT-4 will need a 4-enabled key, while 3.5-Turbo will work with any GPT key.