Imagine AI Beyond Chat

Despite advances in human-computer interaction, typing at least part of our commands remains a staple. By now, ChatGPT is pretty much synonymous with AI-human interaction. But is this just the remnant of a very strong mental model, or is it genuinely the most effective way to interface with computers? It might be both.

BENCE BELICZAY

As digital interfaces evolve from rigid terminals to AI-driven canvases, how humans explore technology shifts. This transformation isn’t merely technical—it’s a cognitive revolution fueled by the innate human drive to push boundaries. At the intersection of generative interfaces, autonomous agents, and ethical AI, every interaction becomes an opportunity to question, adapt, and reimagine what’s possible.

When typing commands felt like magic

Remember the first time you used a command line? (Or maybe you’ve seen it in old hacker movies?) That was the terminal era—a club for tech wizards only.

Then came the 90s, and boom: icons, folders, and that satisfying click (not the last time skeuomorphic design made a difference). GUIs turned computers into something your grandma could use. Putting files in folders and opening a calendar felt obvious. The mental model worked, and the desktop was such a strong metaphor that even running into its limitations never dethroned it.

It also shaped how our messy smartphone homescreens look today, and they work much the same way now as they did back then. Small tweaks here and there, but the core idea is unchanged. The one big addition that came later is search, and it opened new perspectives for users.

Search made the web accessible too, by letting us use the associative side of our brains to find information where we would otherwise be hopelessly lost on the world wide web. Search was a big innovation: it changed how we think about information, navigation, and finding stuff. But do you remember how search engines looked back then? They went all the way back to terminal or teletext graphics.

[Image: Google's homepage in 1997]

Now, here’s the plot twist: AI’s new chat interfaces also rely heavily on typing. They’re like next-level terminals, a democratized CLI — instead of memorizing code, you just… talk. “Hey ChatGPT, explain quantum physics like I’m five.” Magic, right? It encourages natural language exploration. Users probe these systems not through memorized commands but through iterative questioning.

The complexity behind the simple chat is hidden, just as it is behind the GUI. Many people use LLMs without knowing how they work or what to expect from them. And that's a bigger issue than not knowing what happens to a file on your SSD when you put it in a folder.

What’s happening around the UX of AI applications is similar to every big leap in human-computer interaction. The rapid development of digital technology creates challenges and opportunities at the same time.

HCI evolves in interaction with innovation: as devices and technologies keep changing, the scientific backing develops to support them, influencing the technologies in turn. Chat may seem like the definitive way to interact, but there's room for more. Search is still a major player on the web, but the web isn't just search; think of the web apps we use every day.

LLMs, and agents in general, enable us to build complex workflows and self-critical or self-reflective systems, and to draw insights from extremely varied documents (tables, PDFs, images). These usually need more than a simple chat. Furthermore, generative UI is just around the corner, and zero UI is on the horizon as well (yes, I'm looking at you, Jony and Sam).

[Image: Sam Altman and Jony Ive]

I strongly believe that chat is here to stay for many use cases. However, often there are more effective ways of interaction. As UX designers, we must explore these options to enable our users to confidently interact with, understand, and navigate the 'black box' of AI.

But what the heck is AI?

Is finding a webpage based on keywords AI? Forecasting the weather? Estimating arrival times? Calculating a route? Driving a car? Scanning or generating an image? Winning a Nobel Prize by predicting the 3D structures of proteins found in nature?

In a sense, design is a major driving force in how we see AI; it defines how AI is integrated into our work and lives. But what we believe to be the right way to design today might not cut it in just a few months.

LLMs are all the buzz, and when we say AI, most of the time there is an LLM behind it. As their name suggests, they are trained on a very large set of data, and they can read and generate good, bad, or messy sentences. But in every case, the output seems insanely intelligent, and most of the time it's correct.

Knowing that the next word in a given sentence is calculated using a bunch of matrix equations, it's fascinating how good the insights it gives can be. You can think of the whole operation as a student at an exam, guessing the answer word by word: panic, first word, panic, second word, and so on.
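Stripped of the matrices, that word-by-word guessing can be sketched in a few lines: the model emits a raw score (a logit) for every word in its vocabulary, softmax turns those scores into probabilities, and decoding picks a word. The scores below are hard-coded stand-ins, not real model output.

```python
import math

def next_token(logits):
    """Greedily pick the next token from raw model scores (logits).

    A real LLM produces one logit per vocabulary entry after a stack of
    matrix multiplications; here the scores are simply hard-coded.
    """
    # Softmax: turn arbitrary scores into a probability distribution.
    total = sum(math.exp(score) for score in logits.values())
    probs = {tok: math.exp(score) / total for tok, score in logits.items()}
    # Greedy decoding: take the most probable token every time.
    return max(probs, key=probs.get), probs

# Toy three-word vocabulary of candidate next words for some prompt.
token, probs = next_token({"physics": 2.1, "magic": 0.3, "cats": -1.0})
print(token)  # "physics" wins: it has the highest score
```

Real systems usually sample from this distribution (with a temperature knob) instead of always taking the maximum, which is one reason the same prompt can produce different answers.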

Even as models constantly get more capable, you might still get incorrect information. This is why keeping a human in the loop is necessary, especially where an AI-enhanced system supports decision-making.

The trust paradox

AI might be able to create better insights, emails, and reports than I ever could, but at the same time it's not deterministic and can make huge mistakes. Beyond the models themselves working well, users need a clear view of how the system works, where the information comes from, and some level of transparency throughout the journey, and a lot of this hinges on UX design. We talked about this issue in a previous article. Without transparency, your users will distrust your product, quickly sinking the entire project.

Bias also influences user trust. These are difficult choices to make: who decides what generated content is correct? LLMs are trained on us; if we don't like the results, it might be time to look in the mirror as a society. We need to make sure the training data is accurate and in line with our minimum standards; otherwise, things can go off the rails fast.

On the other hand, providing accurate data for AI is crucial. I'm sure it sounds like a delightful prospect, conversing with aggregated LinkedIn data concerning your cinematic preferences: a truly efficient pathway to discovering the precise spectrum of your agreeable, if ultimately unremarkable, diversions. (Praise Kier!) But in real life, this would be boring as hell; everyone would expect at least IMDb data to talk with.

Recap

Chat is currently the primary way we interact with LLMs, and I believe it will stay with us, just as the desktop and search did. However, when we look at the historical trajectory of human-computer interaction, it becomes clear that there's an enormous, untapped realm of design choices still ahead of us.

Agents bring new challenges that will influence this discovery. Providing context and clear sources to users becomes increasingly difficult with agents. Furthermore, Generative User Interfaces (GenUI) bring their own set of complexities: everything is temporary and hyper-personal.

It's highly probable that AI will ultimately move beyond chat interfaces, leading to user journeys that are individually crafted on a hyper-personal canvas. In such a future, designing the states through which users and agents achieve their goals will be more important than the journey itself.
 
