01-15-2025
Next-token-prediction-based Self-reflection
Predicting my future behavior with a next-token-prediction mechanism.
#Reflections

Traditional large language models (LLMs) produce text by repeatedly predicting the most likely next token (roughly, a word) given the existing text. For example, if you feed the phrase "ChatGPT needs more" to an LLM, it will most likely predict "training" and not "therapy".
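To make the mechanism concrete, here is a minimal sketch of greedy next-token prediction. The candidate tokens and their probabilities are made up purely for illustration; a real LLM scores its entire vocabulary at every step.

```python
# Toy illustration of next-token prediction: score every candidate token
# and emit the most likely one. The probabilities below are invented
# for illustration only.
candidate_scores = {
    "training": 0.62,
    "data": 0.21,
    "compute": 0.12,
    "therapy": 0.05,
}

def predict_next_token(scores: dict[str, float]) -> str:
    """Greedy decoding: pick the highest-probability next token."""
    return max(scores, key=scores.get)

print(predict_next_token(candidate_scores))  # -> "training"
```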
Inspired by this next-token-prediction mechanism, I came up with an experiment to know myself better. Each day I predict my "state" for the next day, where "state" covers the things I do out of internal motivation and the time I spend on social media. The next day, I validate that prediction against my actual state and update my subjective prediction strategy. So on day 10, I write a few bullet points predicting what I will do on day 11, and by day 100 I will have accumulated 100 validations and updates. As I type this, I'm on day 13.
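The experiment itself is just bullet points in a journal, but here is a rough sketch of how the daily predict-then-validate bookkeeping could be tracked. The field names, example entries, and the overlap-based "hit rate" metric are my own illustrative assumptions, not part of the original setup.

```python
from dataclasses import dataclass, field

@dataclass
class DayRecord:
    """One day's entry: what I predicted I would do vs. what I actually did.
    The structure is a hypothetical stand-in for plain journal bullet points."""
    day: int
    predicted: set[str] = field(default_factory=set)
    actual: set[str] = field(default_factory=set)

    def hit_rate(self) -> float:
        """Fraction of predicted items that actually happened (a crude accuracy)."""
        if not self.predicted:
            return 0.0
        return len(self.predicted & self.actual) / len(self.predicted)

# Example: a prediction written on day 10, validated on day 11.
day_10 = DayRecord(
    day=10,
    predicted={"write blog post", "read a paper", "<30 min social media"},
    actual={"write blog post", "<30 min social media", "spontaneous walk"},
)
print(f"Day 10 hit rate: {day_10.hit_rate():.2f}")  # -> 0.67
```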
I'm running this experiment to answer two questions: can we get better at predicting our own future behavior, and is there a natural limit to that predictability? Intuitively there has to be a limit, or else I would simply follow a daily routine and never form a spontaneous idea - a robot, basically.
If you pause for a second and try to predict what you will do tomorrow, you'll find it hard, because every day carries countless random factors (e.g., finding out that an ). But imagine actually finding patterns in the things you do each day, whether in their timing or their subject matter - that excites me.
Let's see how it goes.