Humans are pretty bad at articulating what we want. For a perfect example of this, look at almost any performance management or recruitment process ever.
Ask the average employee what it is that they do, and how they provide value, and you’ll find that a large chunk of the working population doesn’t have a great answer. This isn’t to say that they don’t provide value or know what they do, but they can’t cram that into language on the spot.
As it happens, this is a known, hard problem in machine learning – deciding what objective a model should optimize towards is as much art as it is science, and any objective we specify has tradeoffs.
LLMs provide a window into just how bad we are at this, because they let non-ML-engineers specify what they’re looking for using natural language – and the model outputs will only ever be as good as the input tokens they get. Just like with humans: the less useful context you provide and the more general your ask, the less likely you are to like the output.
However, this also helps explain why coding has been so heavily disrupted by these models. Coding has a few specific qualities:
- Code has a much simpler grammar and vocabulary than natural language
- A large chunk of code can be verified at the point of creation
- Software engineers are trained to translate messy human language into precise code
This final point also means that engineers are very good at using these models, because they already deal with the problems that arise from mis-specification.
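The verification point above can be made concrete. If a request is expressed as executable checks rather than prose, any candidate implementation – whether written by a human or generated by a model – can be validated the instant it exists. A minimal sketch (the function and its spec here are purely illustrative):

```python
# The "spec": a precise, checkable statement of what we want,
# in place of a vague natural-language request like "clean up this title".

def slugify(title: str) -> str:
    """Candidate implementation: lowercase, hyphen-separated slug."""
    # Replace every non-alphanumeric character with a space, then
    # collapse the runs of whitespace into single hyphens.
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

# Verification at the point of creation: run the checks immediately.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs & Code  ") == "llms-code"
```

Natural-language deliverables – a strategy memo, a performance review – rarely admit this kind of instant, unambiguous check, which is part of why coding workloads were disrupted first.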
People who can clearly and unambiguously articulate what they want have always been well-rewarded – Tim Ferriss has an excellent quote:
> Life punishes the vague wish and rewards the specific ask
So it goes with these models.
Workloads that can be specified in natural language are already being eaten by them, and I predict that your ability to articulate what you want will become even more valuable than it is today.