8 comments

  • gsuuon 310 days ago
    This is a cool demo! It does a good job hinting at how immersive the experience can be.

    I'm excited about large language models for gaming. The first thing I tried with ChatGPT was to run a text-based RPG, and it was fun on the very first attempt. I imagine we'll have (a text-based) holodeck/Westworld soon enough.

    In the demo I ran out of tokens pretty quickly, after talking to 3 of the characters. I wonder how the economics of this works - I'm guessing token markup?

    I wrote a library for structured inference in-browser[1] and recently started building a murder-mystery demo as well[2] (very rough WIP, not yet playable).

    The library might make a good backend for your project. A local LLM in-browser means it's way easier to distribute since there's no need to install anything. Structured inference lets you get consistent responses out of even smaller models, so you can always get a conversation turn or an action JSON. And of course, no token limits since it runs client-side.
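
    For example, the action-JSON case, sketched here in Python just to show the shape (the library itself runs in the browser, and all the names below are made up, not the library's API):

      import json

      # The model only fills the blanks of a fixed template, so even a small
      # model always produces something parseable.
      ACTION_TEMPLATE = '{{"action": "{action}", "target": "{target}", "line": "{line}"}}'

      def next_action(complete, scene: str) -> dict:
          # `complete(prompt, stop)` stands in for whatever completion call you have.
          slots = {}
          for slot in ("action", "target", "line"):
              prompt = f"{scene}\nThe character's next {slot} (a short phrase): "
              slots[slot] = complete(prompt, stop=["\n"]).strip().replace('"', "'")
          return json.loads(ACTION_TEMPLATE.format(**slots))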

    [1] https://github.com/gsuuon/ad-llama [2] https://github.com/gsuuon/ad-llama/tree/murder-mystery-solid...

    • broken_clock 310 days ago
      Thanks!

      > I wonder how the economics of this works - I'm guessing token markup?

      We're mostly thinking about building tools now for a future in which many gamers will have the compute to locally run a sizable ~30B model. It's hard for me to see LLM gaming taking off in the very near term due to the compute costs for any model of reasonable immersiveness.

      > I wrote a library for structured inference in-browser[1]

      Very cool! Excited to see where this ends up. Yeah, OpenAI is a bit annoying since you can't ask for JSON or the logit distribution. But GPT-3.5-turbo right now is even cheaper than running your own Llama 2 70B, so we stuck with that for the demo, since most people probably don't have sufficient compute for this.

      • gsuuon 310 days ago
        I've actually been able to get pretty remarkable results out of Llama 2 7B on just my laptop GPU (8GB). Just don't give it any logic puzzles. I wonder if a 13B model (more or less runnable on consumer GPU hardware), fine-tuned on a gaming dataset, could actually be pretty immersive already?

        And good luck! Will be keeping an eye on the gaming framework.

  • freilanzer 311 days ago
    Well, I told the first character to kill me and he beat me into the hospital. :D
  • ibdf 311 days ago
    I spent a few minutes chatting around - this is fun. I tried to get Amelia to go fetch me a shovel to unbury the body, but she did not cooperate much. 10/10.
  • lakomen 311 days ago
    Imagine something like Baldur's Gate 3 with AI-generated, infinite stories and characters in those stories, including voice acting.

    I think you have the right idea.

  • all2 312 days ago
    I have to laugh.

    First, the j_l button on my _eyboard is not functional so I have to sort out how to words without it.

    Second, the characters, when faced with vague questions, will give away pieces of the plot before I've discovered them. For example, a doctor let slip the manner of death before I ever saw the body of the deceased. Someone, when questioned vaguely about the "night of the party", first said the deceased had died, and then when questioned more closely said he had collapsed.

    There's a component of information discovery here that should be rolled out to each character. The doctor should not have the information before it has actually been discovered.
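
    Roughly what I mean, sketched in Python (the names and the example fact are all made up, just for illustration):

      from dataclasses import dataclass, field

      @dataclass
      class Fact:
          key: str                                    # e.g. "manner_of_death"
          text: str                                   # what the character can reveal
          requires: set = field(default_factory=set)  # what the player must already know

      @dataclass
      class Character:
          name: str
          facts: list = field(default_factory=list)

          def speakable(self, discovered: set) -> list:
              # Only facts whose prerequisites the player has uncovered
              # make it into this character's prompt.
              return [f.text for f in self.facts if f.requires <= discovered]

      doctor = Character("Doctor", facts=[
          Fact("manner_of_death", "It looked like poisoning.", requires={"saw_body"}),
      ])
      doctor.speakable(discovered=set())          # [] -- nothing to let slip yet
      doctor.speakable(discovered={"saw_body"})   # ["It looked like poisoning."]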

    I'm going to laugh if this is a "Murder on the Orient Express" ending.

    • broken_clock 312 days ago
      I guess in our minds the doctor has already inspected the body before you get there (that's why it's at their house).

      > Someone, when questioned vaguely about the "night of the party", first said the deceased had died, and then when questioned more closely said he had collapsed.

      Can you elaborate on why you think this is suboptimal? In my mind, this seems consistent with how interrogations go - people make assumptions (he died) until you press them further (oh, I only saw him collapse).

      > There's a component of information discovery here that should be rolled out to each character. The doctor should not have the information before it has actually been discovered.

      We're actively working on this as part of the framework :)

      Thank you for the feedback.

      • all2 312 days ago
        I'm going to write out a short idea...

        We need a prompt that is situational and morphs with interaction.

        So we have a prompt built with Python f-strings:

        f""" # Actions context could be up here. You would ammend the "reply" section below with "reply with words or an action".

        This is a summary of your recent interaction with PLAYER CHARACTER: { self.recent_memory() }

        This is how you feel about PLAYER CHARACTER: { self.disposition() }

        Your current conversation with PLAYER CHARACTER is: { self.conversation }

        Score how the last reply effects you emotionally using this format <some extractable format with a function call>

        Replying in a tone consistent with your disposition towards PLAYER CHARACTER, you say: """

        self.recent_memory would be a summary of previous discussions, with decreasing granularity the farther back in time they are. Eventually the content of the memory fades, and only the emotional impact remains.

        This could also be applied to general memories: you would run the situation you want the character to remember through a prompt, and the result gets embedded into the emotional/memory store and recalled "realistically".

        self.disposition would be a short text description of how the character feels about the player's character as well as a scoring mechanism that weights the effect of recent conversations against prior conversations.

        Actions could be embedded into the prompt to allow the character to do stuff when so inclined.
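
        A rough skeleton of the two pieces, just to make it concrete (all names made up, untested):

          from dataclasses import dataclass, field

          PLAYER = "PLAYER CHARACTER"

          @dataclass
          class Memory:
              summary: str           # what happened, summarized
              emotional_impact: int  # this part lingers after the summary fades
              age: int = 0           # how many conversations ago it happened

          @dataclass
          class NPC:
              memories: list = field(default_factory=list)
              disposition_score: int = 0
              conversation: str = ""

              def recent_memory(self) -> str:
                  # Older memories get less granular; eventually only the feeling is left.
                  lines = []
                  for m in self.memories:
                      if m.age < 3:
                          lines.append(m.summary)
                      else:
                          lines.append(f"(a hazy memory, emotional impact {m.emotional_impact:+d})")
                  return "\n".join(lines)

              def disposition(self) -> str:
                  # Short description derived from a running score; a fuller version would
                  # weight recent conversations more heavily than older ones.
                  if self.disposition_score > 2:
                      return f"You trust {PLAYER} and speak warmly."
                  if self.disposition_score < -2:
                      return f"You distrust {PLAYER} and answer curtly."
                  return f"You feel neutral about {PLAYER}."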

  • jerrysievert 312 days ago
    I'm not sure I understand why you need to harvest my personal details from GitHub or Google in order to see a demo of a game?
    • broken_clock 312 days ago
      The only detail we're asking for is your email, so we don't get spammed/botted by people taking advantage of a publicly hosted LLM endpoint. We're not using your personal details for anything.
      • broken_clock 311 days ago
        btw we turned off auth for now - you can just hit Play
  • hgo 311 days ago
    I haven't had the opportunity to try this yet, but it's a brilliant use of LLMs! Thanks for sharing.