Ask HN: LLM coin flipping – lands on heads

As per the title: ask an LLM to flip a coin and it lands on heads. Is "heads" over-represented in language?

I'm aware of how LLMs work internally; it's just an interesting observation.

Token generation uses pseudo-random sampling to produce more varied language, so a run of flips from the model should, in theory, tend towards semi-randomness.
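A minimal sketch of why sampling alone doesn't guarantee a fair flip: even with temperature sampling over pseudo-random draws, any asymmetry in the model's logits for the two candidate tokens carries straight through to the outcome frequencies. The logit values below are hypothetical, just to illustrate a slight "heads" preference.

```python
import math
import random

def sample_flip(logit_heads, logit_tails, temperature=1.0, rng=random):
    """Sample 'heads' or 'tails' via a temperature softmax over two logits."""
    zh = logit_heads / temperature
    zt = logit_tails / temperature
    m = max(zh, zt)  # subtract max for numerical stability
    p_heads = math.exp(zh - m) / (math.exp(zh - m) + math.exp(zt - m))
    return "heads" if rng.random() < p_heads else "tails"

# Hypothetical logits with a mild bias towards "heads".
rng = random.Random(0)
flips = [sample_flip(2.0, 1.5, rng=rng) for _ in range(10_000)]
frac_heads = flips.count("heads") / len(flips)
print(f"heads fraction: {frac_heads:.3f}")  # ~sigmoid(0.5) ≈ 0.62, not 0.50
```

So the pseudo-randomness makes individual flips unpredictable, but the long-run rate is pinned to whatever probability the model assigns to "heads" at that position.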

1 point | by razodactyl 6 hours ago

0 comments