On the abstraction of an LLM in coding
And yet, this time something is fundamentally different, something sinister and dangerous: the ability to be wrong, or, to be precise, wrongness as a baked-in design principle. The industry-leaping abstractions that came before focused on removing complexity, but they did so with the fundamental assertion that the abstraction they created was correct. That is not to say they were perfect, or that they never caused bugs or failures. But those events were failures of a given implementation, a departure from what the abstraction was SUPPOSED to do, and every mistake, once patched, led to a safer, more robust system. LLMs, by their very design, are probabilistic prediction engines; they merely approximate correctness for varying stretches of time.
By Jay
My interpretation: every layer of abstraction we've previously built (from assembly to Python) was crafted into a robust system. An LLM is not robust in the same manner.
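To make the "probabilistic prediction engine" point concrete, here is a minimal toy sketch in Python. The token strings and probabilities are invented for illustration, but the mechanism (weighted sampling over candidate next tokens, as in decoding with temperature above zero) is the gist of how LLM output is produced. A compiler run twice on the same input gives the same answer; a sampled model gives the correct answer only with some probability.

```python
import random

# A toy stand-in for an LLM's output layer: a fixed probability
# distribution over candidate completions for some prompt.
# (Hypothetical numbers, for illustration only.)
NEXT_TOKEN_PROBS = {
    "return x + y": 0.62,  # the correct completion
    "return x - y": 0.21,  # a plausible but wrong one
    "return x + x": 0.17,  # another wrong one
}

def sample_completion(rng: random.Random) -> str:
    """Sample one completion, the way temperature > 0 decoding does."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# A deterministic abstraction maps the same input to the same output
# every time. A sampled model does not: correctness here is a
# probability, not a guarantee.
rng = random.Random()
runs = [sample_completion(rng) for _ in range(10)]
accuracy = runs.count("return x + y") / len(runs)
print(runs)
print(f"correct on {accuracy:.0%} of runs")
```

Run it a few times and the list of completions changes; the correct answer dominates but is never guaranteed, which is exactly the contrast with a compiler or interpreter being drawn above.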