Notes on using AI
Short practical observations on how language models behave in use.
Implicit metaphors and unintended mappings
If a prompt contains an implicit metaphor, the model may map the task to a different but statistically related concept. Asking a model to “annotate a chart” may produce stock-chart style additions because many annotated chart examples are financial.
Overloaded terms and ambiguous intent
Terms such as “model”, “prediction” and “explain” carry multiple meanings. The model resolves the ambiguity from statistical context rather than by asking the user what they intended but did not specify.
Consistency and reproducibility
Model outputs are probabilistic. Repeated runs may produce different wording, structure or occasionally different conclusions unless the system is tightly constrained.
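The variability comes from sampling. A minimal sketch of temperature-based token sampling (toy logits, not any real model's API) shows why greedy decoding is deterministic while stochastic sampling is reproducible only under a fixed seed:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; lower values sharpen the distribution.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def sample_next(logits, temperature=1.0, rng=None):
    # Temperature 0 means greedy decoding: always pick the argmax.
    if temperature == 0.0:
        return int(np.argmax(logits))
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(logits), p=softmax(logits, temperature)))

logits = [2.0, 1.5, 0.3]  # hypothetical scores for three candidate tokens

# Greedy decoding: identical output on every run.
assert all(sample_next(logits, temperature=0.0) == 0 for _ in range(10))

# Stochastic sampling: reproducible only when the seed is fixed.
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
run_a = [sample_next(logits, 1.0, rng_a) for _ in range(5)]
run_b = [sample_next(logits, 1.0, rng_b) for _ in range(5)]
assert run_a == run_b
```

Even with temperature 0, hosted systems may still vary across runs due to batching and floating-point nondeterminism, so tight constraints reduce rather than eliminate variation.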
Local coherence vs global correctness
Language models are good at producing text that is locally consistent. That does not guarantee that the output is globally correct, current, or grounded in the right data.
Distribution bias and default patterns
Where a request is underspecified, the model tends to produce the most common pattern associated with that prompt type. This can introduce unintended domain assumptions.
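The effect can be illustrated with a toy stand-in for a model's learned associations. The completion counts below are hypothetical; the point is only that an underspecified prompt resolves to the highest-frequency pattern, which carries its own domain assumptions:

```python
from collections import Counter

# Hypothetical frequencies of completions seen for the prompt type
# "annotate a chart" (counts invented for illustration).
seen_completions = Counter({
    "add price/volume callouts (finance style)": 7,
    "label axes and data points (general)": 2,
    "highlight anomalies (monitoring)": 1,
})

def complete(counter):
    # With nothing else to condition on, the most frequent
    # pattern wins, along with its domain assumptions.
    return counter.most_common(1)[0][0]

print(complete(seen_completions))  # the finance-style default dominates
```

Adding explicit constraints to the prompt is equivalent to conditioning on more context, which shifts the distribution away from these defaults.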