Notes on using AI

Short practical observations on how language models behave in use.

Implicit metaphors and unintended mappings

How training data shapes interpretation of vague instructions

If a prompt contains an implicit metaphor, the model may map the task to a different but statistically related concept. Asking a model to “annotate a chart” may produce stock-chart-style annotations because many annotated-chart examples in its training data are financial.

Overloaded terms and ambiguous intent

Why common words lead to inconsistent outcomes

Terms such as “model”, “prediction” and “explain” each carry several meanings. The model resolves the ambiguity from statistical context rather than by asking what the user intended but did not specify.

Consistency and reproducibility

Why identical inputs do not guarantee identical outputs

Model outputs are sampled from a probability distribution. Repeated runs may produce different wording, structure or occasionally different conclusions unless decoding is tightly constrained, and even greedy decoding can vary across serving environments because of batching and nondeterministic floating-point kernels.
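A toy sketch of why this happens, using an invented three-token logit distribution rather than a real model: at temperature 1 repeated runs with different seeds produce different tokens, while temperature 0 collapses to argmax and is repeatable.

```python
# Toy illustration (not a real language model): sample one "next token"
# from a fixed logit distribution. Tokens and logits here are invented.
import math
import random

def sample_token(logits, temperature, rng):
    """Sample a key from a dict of logits, softmax-scaled by temperature."""
    if temperature <= 0:
        # Greedy decoding: always pick the highest-logit token.
        return max(logits, key=logits.get)
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

logits = {"yes": 2.0, "no": 1.6, "maybe": 0.5}

# Temperature 1: outputs vary across runs with different seeds.
varied = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}

# Temperature 0: the same token on every run.
greedy = {sample_token(logits, 0.0, random.Random(seed)) for seed in range(50)}
print(sorted(varied), sorted(greedy))
```

The same mechanism operates per token in a real model, which is why small sampling differences early in a response can compound into structurally different outputs.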

Local coherence vs global correctness

Why plausible outputs may still be wrong

Language models are good at producing text that is locally consistent. That does not guarantee that the output is globally correct, current or grounded in the right data: a fluent, well-structured answer can still cite a source that does not exist.

Distribution bias and default patterns

How models fall back to statistically common behaviour

Where a request is underspecified, the model tends to produce the most common pattern associated with that prompt type. This can introduce unintended domain assumptions.
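This fallback behaves like taking the mode of the training distribution. A minimal sketch, using invented frequency counts (not real training data) for the chart example above:

```python
# Toy illustration of distributional fallback: when a prompt is
# underspecified, the most frequent pattern associated with it wins.
# The pattern names and counts below are hypothetical.
from collections import Counter

observed_patterns = Counter({
    "financial stock chart": 520,
    "scientific scatter plot": 140,
    "project timeline": 60,
})

# The statistically dominant pattern becomes the implicit default.
default, count = observed_patterns.most_common(1)[0]
print(default)
```

Stating the intended domain explicitly in the prompt is the practical countermeasure: it shifts the conditional distribution away from this global default.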