The average user doesn’t want (or isn’t able) to decide which model to use or how to finesse a useful prompt. We need software applications that can handle this.
One sign that we’re still very early in the evolution of AI is how much heavy lifting is left to the user. As Community Leadership Core founder Jono Bacon laments, even the act of “need[ing] to choose between [large language] models” to run a query is “complex and confusing for most people.” Once you’ve chosen the “right” model (whatever that means), you still have to do all sorts of work to get it to return relevant results. (Forget about consistent results; those aren’t really a feature of current LLMs.)
All that said, when I asked RedMonk co-founder James Governor if AI/genAI had lost its shine, his response was an emphatic “No.” We may currently be sitting in the trough of disillusionment (my phrase, not his), but that’s just because we’re following the same timeline all important new technologies seem to take: from indifference to worship to scorn to general adoption. Some software developers are already jumping into that last phase; for others, things are going to take more time.
Eventually consistent
It’s been clear for a while now that AI would take time to really hit its stride. Spend a little time generating images with something like Midjourney and you’ll notice, as Governor did, that “the majority of AI art trends to kitsch.” Is that because computers don’t know what good art looks like? As inveterate AI grumbler Grady Booch notes, we sometimes pretend that AI can reason and think, but neither is true: “Human thinking and human understanding are not mere statistical processes as are LLMs, and to assert that they are represents a profound misunderstanding of the exquisite uniqueness of human cognition.”
People are different from LLMs and machines. AI can’t paint the way Van Gogh could. It can’t write the way Woolf could. It can mimic, but it will always fall short of human cognition.
That’s not to say AI isn’t useful. For example, I recently asked a group of friends for their opinions on a thorny business problem. I dumped their varied, unstructured responses into ChatGPT and asked for a summary. It was…astoundingly good. I went back and double-checked that it hadn’t simply regurgitated something one of my human respondents had told me. Nope. Summarizing, it turns out, is something machines do quite well.
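For the curious, here’s roughly what that exercise looks like in code. This is a minimal sketch using OpenAI’s Python client; the model name and the sample responses are illustrative assumptions, not a recommendation.

```python
# Minimal sketch: summarize unstructured feedback with an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and sample data are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

responses = [
    "I think we should raise prices and narrow the product line.",
    "Honestly, the problem is churn, not pricing.",
    "Have we considered partnering instead of building this ourselves?",
]

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {"role": "system", "content": "Summarize the key themes and disagreements."},
        {"role": "user", "content": "\n\n".join(responses)},
    ],
)

print(completion.choices[0].message.content)
```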
GenAI is also quite useful for software developers, without threatening the heart of their work at all. Coding is the least interesting thing a software developer does, as I’ve suggested, following Honeycomb CTO Charity Majors’ excellent thoughts on the topic. Kelsey Hightower argues that “writing code should be the last thing a developer does.” Instead, genAI can help developers think around obstacles, fill in boilerplate code, show them how their code might look in other languages, and more.
So, yes, there are great uses for genAI and AI today. But it’s still way too hard to use.
Turning AI into applications
Back to Jono Bacon. “Don’t make me pick [a model],” he demands. “Pick one for me based on my request.” He’s really asking for someone to turn cumbersome, complicated AI infrastructure into applications. We’re starting to see this with Apple Intelligence, Google search, and more. Companies are baking AI into their applications, rather than making users do the undifferentiated heavy lifting of picking the infrastructure and generating the prompts.
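What might “pick one for me” look like in practice? Below is a deliberately naive sketch, in Python, of the kind of routing an application could do on the user’s behalf. The model names and heuristics are placeholders invented for illustration; a real router would weigh cost, latency, and measured quality.

```python
# A deliberately naive model router: the application, not the user,
# decides which model should handle a request. The model names and
# heuristics below are made-up placeholders.

def route(request: str) -> str:
    """Pick a model based on crude features of the request."""
    looks_like_code = any(tok in request for tok in ("def ", "class ", "{", "```"))
    if looks_like_code:
        return "code-model-large"      # hypothetical code-tuned model
    if len(request.split()) > 300:
        return "long-context-model"    # hypothetical long-context model
    return "general-model-small"       # cheap default for short queries

print(route("What's the capital of France?"))  # general-model-small
print(route("def fib(n): ..."))                # code-model-large
```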
This is essential, given that there’s still so much manual work behind the scenes to get genAI to deliver usable output. As Dan Price says, “You have to provide all of the context the model needs to answer your question.” The only way to learn which context yields (somewhat) consistent results is to “play with the models.” The application provider should do that work for you.
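In code, “doing that work for you” might look something like the following sketch: the application gathers the context the model needs (documents, history, preferences) before the user’s question ever reaches the model. Everything here, from the helper names to the data, is a hypothetical stand-in.

```python
# Sketch: the application assembles context on the user's behalf.
# All helper functions and data are hypothetical stand-ins for whatever
# retrieval and state a real application maintains.

def retrieve_relevant_docs(question: str) -> list[str]:
    # Stand-in for a real retrieval step (search index, vector store, etc.).
    return ["Q3 revenue was down 4%.", "Churn rose in the SMB segment."]

def build_prompt(question: str, history: list[str]) -> str:
    context = "\n".join(retrieve_relevant_docs(question))
    past = "\n".join(history[-3:])  # last few turns, so the model has continuity
    return (
        f"Context:\n{context}\n\n"
        f"Recent conversation:\n{past}\n\n"
        f"Question: {question}"
    )

print(build_prompt("Why is revenue down?", ["User asked about pricing."]))
```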
Price continues: “It’s better to break up complex tasks into smaller sub-tasks which you complete over the course of several conversations instead of trying to one-shot your task with one very complicated initial instruction.” Again, the application provider should do this work for the user. And from Cristiano Giardina, “You’re interacting with a superposition of all humanity, so defining a specific persona that would be helpful for your task produces better results.” Why is this your job? Let the application provider do that work for you.
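Here’s what pushing that burden into the application might look like: the app splits a complex request into sub-tasks, runs each as its own conversation, and sets a persona in the system prompt, so the user never thinks about any of it. This is a sketch under stated assumptions; `call_llm` and the persona are stand-ins for whatever model API and framing an application actually uses.

```python
# Sketch: the application decomposes a task and sets a persona,
# instead of asking the user to do either. `call_llm` is a stand-in
# for a real model API call.

PERSONA = "You are a meticulous business analyst."  # assumed persona

def call_llm(system: str, user: str) -> str:
    # Placeholder: swap in a real chat-completion call here.
    return f"[model output for: {user[:40]}...]"

def answer(complex_task: str) -> str:
    # Break one complicated instruction into smaller sub-tasks,
    # each handled in its own fresh conversation.
    subtasks = [
        f"List the facts relevant to: {complex_task}",
        f"Identify the main trade-offs in: {complex_task}",
        f"Recommend one course of action for: {complex_task}",
    ]
    results = [call_llm(PERSONA, t) for t in subtasks]
    return call_llm(PERSONA, "Combine into a final answer:\n" + "\n".join(results))

print(answer("Should we raise prices next quarter?"))
```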
I could go on. The point is that in these early days of AI, we keep expecting mainstream users to do all the work of understanding and manipulating still-janky LLMs. That’s not their job, just as it wasn’t the “job” of mainstream enterprises to get under the hood and compile Linux for their servers. Red Hat and others came along to package distributions of Linux for mass-market use. We need the same thing for genAI, and soon. Once we get it, we’ll see adoption (and the productivity it can generate) soar.