I'm aware of how LLMs work internally; this is just an interesting observation.
We use pseudo-random sampling to select more varied language during token generation, so a run of flips from the model should, in theory, tend towards semi-randomness.
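A minimal sketch of what that pseudo-random selection looks like at the sampling step (the two-token "vocabulary", logits, and temperature value here are made up purely for illustration; a real model samples over tens of thousands of tokens):

```python
import numpy as np

rng = np.random.default_rng()

vocab = ["heads", "tails"]       # hypothetical two-token "coin flip" vocabulary
logits = np.array([1.2, 1.0])    # assume the model slightly prefers "heads"
temperature = 0.8

# Softmax with temperature: lower T sharpens the distribution,
# higher T flattens it toward uniform randomness.
scaled = logits / temperature
probs = np.exp(scaled - scaled.max())
probs /= probs.sum()

# Pseudo-random draw from the resulting distribution.
flip = rng.choice(vocab, p=probs)
print(flip, probs)
```

So the draw itself is pseudo-random, but it's weighted by whatever the model's logits happen to prefer, which is why the output only tends towards semi-randomness rather than a fair 50/50.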