A take from Pete Koomen
Pete Koomen has a refreshing take on what good vs. bad AI applications look like. I've put some excerpts here for my own reference, but I could have copied the full essay, as it's very well put.
He starts by going over the example of Gmail's AI assistant. If you prompt it:
"Let my boss Garry know that my daughter woke up with the flu this morning and that I won't be able to come in to the office today."
The AI assistant will write a perfect email, but one that doesn't sound anything like the user.
To make it sound like the user, you need to adjust the system prompt. But you also need to categorize emails first so the assistant has the right context: some emails are personal, some are business, and some are newsletters that don't require a reply at all.
"The division of labor is clear: developers decide how software behaves in the general case, and users provide input that determines how it behaves in the specific case.
By splitting the prompt into System and User components, we've created analogs that map cleanly onto these old world domains. The System Prompt governs how the LLM behaves in the general case and the User Prompt is the input that determines how the LLM behaves in the specific case."
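The System/User split Koomen describes maps directly onto the message format used by OpenAI-style chat APIs. A minimal sketch of his proposal, with illustrative prompt text (the key point being that the system prompt is user-editable rather than hard-coded by the developer):

```python
# The system prompt governs the general case; in Koomen's proposal the
# user, not the developer, gets to edit it. This default text is
# illustrative, not from the essay.
DEFAULT_SYSTEM_PROMPT = (
    "You are my email assistant. Write short, casual emails in my voice. "
    "Skip corporate boilerplate like 'I hope this finds you well'."
)

def build_messages(user_prompt: str,
                   system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> list[dict]:
    """Combine the user-editable system prompt (general case) with a
    specific request (specific case) into a chat-API message list."""
    return [
        {"role": "system", "content": system_prompt},  # how to behave generally
        {"role": "user", "content": user_prompt},      # what to do right now
    ]

messages = build_messages(
    "Let my boss Garry know that my daughter woke up with the flu "
    "this morning and that I won't be able to come in to the office today."
)
```

The resulting `messages` list is what an app would pass to a chat completion endpoint; letting the user rewrite `DEFAULT_SYSTEM_PROMPT` is the "teaching" Koomen argues for below.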
And:
"My core contention in this essay is this: when an LLM agent is acting on my behalf I should be allowed to teach it how to do that by editing the System Prompt."
and also:
"This is what AI's "killer app" will look like for many of us: teaching a computer how to do things that we don't like doing so that we can spend our time on things we do.
One of the reasons I wanted to include working demos in this essay was to show that large language models are already good enough to do this kind of work on our behalf. In fact they're more than good enough in most cases. It's not a lack of AI smarts that is keeping us from the future I described in the previous section, it's app design."
It's a different way of looking at AI-powered software. I like it!
Source: Pete Koomen