Mini-Site 4 · Lesson 2

How Prompt Prediction Works

A simple interactive way to show what an LLM is actually doing: taking the words so far, weighing what probably comes next, and choosing the next token based on learned patterns and probabilities.

No APIs. No backend. Just HTML, CSS, JavaScript and three.js.
Prediction Preview

What the model is doing

It is not “thinking” in the human sense. It looks at the prompt so far, splits it into tokens, compares those patterns to what it learned during training, and predicts which token is most likely to come next.
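The “turns it into tokens” step can be pictured with a whitespace splitter. Real models use subword schemes such as BPE, so treat this as a simplification:

```javascript
// Toy tokeniser: real models split text into subword units (e.g. BPE),
// but whitespace splitting is enough to show the idea.
function tokenize(prompt) {
  return prompt.toLowerCase().split(/\s+/).filter(Boolean);
}

console.log(tokenize("Act as a social media manager"));
// → ["act", "as", "a", "social", "media", "manager"]
```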

1
Read the context

The model takes the words already written and treats them as the current context window.

2
Score possibilities

The model assigns a probability to each candidate next token. Some are far more likely than others.

3
Choose the next token

The model selects one, appends it to the sequence, and repeats the whole process.
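The three steps above can be sketched as a loop over a toy lookup-table “model”. The table and its probabilities are invented for illustration; a real model scores every token in its vocabulary using the full context, not just the last token:

```javascript
// Invented candidate table: given the last token, a list of
// [nextToken, probability] pairs.
const model = {
  the:   [["hotel", 0.5], ["city", 0.3], ["guest", 0.2]],
  hotel: [["opens", 0.6], ["offers", 0.4]],
  opens: [["soon", 0.7], ["today", 0.3]],
};

// Step 1: read the context. Step 2: score candidates.
// Step 3: pick one (greedy argmax here) and repeat.
function predict(tokens, steps) {
  const out = [...tokens];
  for (let i = 0; i < steps; i++) {
    const candidates = model[out[out.length - 1]];
    if (!candidates) break; // unknown context: stop
    const [best] = candidates.reduce((a, b) => (b[1] > a[1] ? b : a));
    out.push(best);
  }
  return out;
}

console.log(predict(["the"], 3).join(" "));
// → "the hotel opens soon"
```

Greedy argmax always takes the top candidate; real systems usually sample from the distribution instead, which is why the same prompt can produce different outputs.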

Why prompts matter so much

Change the prompt, and you change the probability landscape. A vague prompt leaves many paths open. A clear prompt narrows the options and makes stronger output more likely.
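One way to make “probability landscape” concrete is entropy. In this sketch, a hypothetical vague prompt spreads probability evenly across candidate next tokens, while a specific prompt concentrates it; both distributions are invented numbers:

```javascript
// Two hand-made distributions for the token after "Write a":
// the vague prompt leaves many paths open, the specific one narrows them.
const vague    = { story: 0.25, poem: 0.25, list: 0.25, tagline: 0.25 };
const specific = { tagline: 0.85, slogan: 0.10, poem: 0.05 };

// Shannon entropy in bits: lower means the prompt has narrowed the options.
const entropy = (dist) =>
  -Object.values(dist).reduce((s, p) => s + p * Math.log2(p), 0);

console.log(entropy(vague).toFixed(2));    // → "2.00"
console.log(entropy(specific).toFixed(2)); // lower than the vague prompt
```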

Simple example

Prompt: “Act as a social media manager for a luxury hotel in Sydney. Please write three elegant launch taglines.”

That prompt strongly nudges the model toward marketing language, hospitality tone, and short polished options rather than a random essay.

“Act as a social media manager” gives the model a role and narrows its likely vocabulary.
“for a luxury hotel in Sydney” adds context, so probabilities shift toward the right subject matter.
“Please write three elegant launch taglines” states the task clearly, so the output shape becomes more predictable.

Interactive prompt prediction simulator

Choose a sample prompt below, then watch the model score possible next tokens. This is a simplified teaching demo, but it clearly shows the central idea.

Current token stream

Candidate probabilities

Ready to predict the next token.
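A common way to turn raw candidate scores into probabilities like the ones shown here is the softmax function. This is a minimal sketch with invented scores, not necessarily the demo’s actual code:

```javascript
// Softmax: map raw scores to probabilities that sum to 1.
function softmax(scores) {
  const max = Math.max(...scores);              // subtract max for stability
  const exps = scores.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Invented scores for three candidates: higher score, higher probability.
console.log(softmax([2, 1, 0]).map((p) => p.toFixed(2)));
```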

Try your own prompt wording

Paste a prompt below and this mini-site will analyse it locally. It will not generate text, but it will estimate how structured the prompt is and suggest what kinds of next-token behaviour it encourages.

What this prompt is signalling

Structure score
Role clarity
Context depth
Task specificity
Output guidance
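A local structure check like the one described could be sketched with crude keyword heuristics. The signal names mirror the panel above, but the regexes and the equal weighting are invented assumptions, not the site’s actual scoring logic:

```javascript
// Hedged sketch: score a prompt on four invented keyword signals.
function analysePrompt(prompt) {
  const p = prompt.toLowerCase();
  const signals = {
    roleClarity:     /\bact as\b|\byou are\b/.test(p),      // a stated role
    contextDepth:    p.split(/\s+/).length > 8,              // enough detail
    taskSpecificity: /\bwrite\b|\blist\b|\bsummarise\b/.test(p),
    outputGuidance:  /\b(one|two|three|\d+)\b.*\b(taglines?|bullets?|words?)\b/.test(p),
  };
  // Structure score: fraction of signals present (equal weights assumed).
  const score = Object.values(signals).filter(Boolean).length / 4;
  return { signals, score };
}

const result = analysePrompt(
  "Act as a social media manager for a luxury hotel in Sydney. " +
  "Please write three elegant launch taglines."
);
console.log(result.score); // → 1
```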