Ran several experiments using small local LLMs (a few billion parameters each) like llama3.2 and phi3 to generate a random number between 1 and 100.
The prompt itself simply asked for a random number between 1 and 100.

[Figure: output distribution for llama3.2]
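For context, here’s a minimal sketch of how such an experiment can be run against a locally served model. It assumes an Ollama server on its default port (localhost:11434), and the prompt string is a placeholder, not the exact wording used in these experiments:

```python
import re

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

# Placeholder prompt; the exact wording used in these experiments may differ.
PROMPT = "Generate a random number between 1 and 100. Reply with only the number."

def ask_for_number(model: str, temperature: float) -> int | None:
    """Query a local model once and parse the first integer from its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "prompt": PROMPT,
            "stream": False,  # return the full completion as one JSON body
            "options": {"temperature": temperature},
        },
        timeout=60,
    )
    resp.raise_for_status()
    text = resp.json()["response"]
    match = re.search(r"\d+", text)  # models often wrap the number in extra words
    return int(match.group()) if match else None

print(ask_for_number("llama3.2", temperature=0.8))
```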
I didn’t expect this approach to work as a uniform random number generator, but it was interesting to see exactly how it fails. At low temperatures, most models produce only a handful of distinct values, clustered in the 40–60 range, with little to no variability. As temperature increases (roughly 1–3), the distribution begins to look bimodal for several models. Past that range, outputs break down entirely, collapsing to single-digit numbers at temperatures of 7 and higher. (I’m aware this is generally not a recommended way of using a language model.)
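To see this distributional behavior, one can repeat the query many times per temperature setting and tally the results. A sketch, reusing the `ask_for_number` helper from the previous snippet (sample counts and temperature values here are illustrative):

```python
from collections import Counter

def sample_distribution(model: str, temperature: float, n: int = 100) -> Counter:
    """Sample n completions at a fixed temperature and count each parsed value."""
    counts = Counter()
    for _ in range(n):
        value = ask_for_number(model, temperature)  # helper from the sketch above
        if value is not None:  # skip replies with no parseable integer
            counts[value] += 1
    return counts

# Sweep the temperature range discussed above and show the most common outputs.
for t in (0.2, 1.0, 2.0, 5.0, 7.0):
    counts = sample_distribution("llama3.2", t)
    print(f"temperature={t}: {sum(counts.values())} parsed, top values {counts.most_common(5)}")
```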