posted on Fri, Dec 16 '22 under tag: technology

Is there any difference between ChatGPT and programmers if ChatGPT can write React components just the way programmers do? Will programming jobs be snatched by the model?

A computer program that talks to me like a human. Answering my queries, occasionally asking me a counter-question. A conversation that keeps on giving. I’m not talking about ChatGPT; I’m talking about ELIZA, a 20th-century chatbot that knew just enough to keep you occupied.

At the beginning of the 21st century, I spent hours with ELIZA. I was fascinated by it. With the little I knew about computer programming back then, I opened up the source code in a notepad, and I could see how it was pattern matching on certain words and phrases.

Did that kill the magic? Maybe it did. But the magic was dead long before I looked at the source code. I could see the patterns myself when my conversations almost always evolved the same way. I could even predict what it would reply when I sent a message. You would say something like, “I am happy today”, and ELIZA would ask, “Why are you happy today?”. The source code only confirmed what I knew by then.
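For readers who never opened that source code, here is a minimal sketch of the kind of keyword-and-template pattern matching I’m describing. This is not the actual ELIZA program, just an illustrative toy in Python with made-up rules:

```python
import re
import random

# Toy ELIZA-style rules: a regex to match part of the user's message,
# and response templates where {0} is filled with the captured phrase.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why are you {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bbecause\b", re.IGNORECASE),
     ["Is that the real reason?"]),
]

# Generic fallbacks when nothing matches, to keep the conversation going.
FALLBACKS = ["Please tell me more.", "I see. Go on."]

def reply(message: str) -> str:
    """Return a canned response from the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            captured = match.group(1).rstrip(".!?") if match.groups() else ""
            return random.choice(templates).format(captured)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I am happy today"))  # e.g. "Why are you happy today?"
```

Once you see the rules laid out like this, it’s obvious why every conversation eventually felt the same.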

Today we have ChatGPT. Trained on lots and lots of data. Not simply pattern matching. It seems to “understand” a lot of things just like humans do. People are suggesting it does a better job than Google search at uncovering information. Doctors are making it write letters to insurance companies. Professors are worried about being unable to catch students cheating with ChatGPT on homework. How far have we come from ELIZA!

Humans being humans, we do not want to acknowledge the power of large language models. ChatGPT has already reached the level of understanding an average person has of an average topic (or even an advanced topic). It is indisputably becoming an authority on how we interact with existing knowledge in the human world.

The comparisons with Google are fair in that regard. Haven’t we always trusted that Google’s algorithms surface the most relevant information when we search for stuff? When was the last time we did a systematic review of the literature to arrive at a conclusion on a question we had? If we could use Google without questioning its intelligence, we might as well use ChatGPT without questioning its intelligence.

The core debate, though, is whether ChatGPT “understands” the world like humans do. If ChatGPT can write bug-free software, can it replace human beings at a programming job? And so on.

I think the answer has already been provided in the great Rajinikanth movie Enthiran. The initial version of Chitti was smart, fast, and powerful. But Chitti couldn’t understand human emotions and complex, nuanced situations. Dr Vaseegaran had to spend a lot of time training Chitti to understand human emotions. At which point Chitti became a sentimental human being.

ChatGPT, presently, is like the initial version of Chitti. It can understand everything, but only through language and logic.

It cannot read between the lines.

And that is going to be the fundamental difference between ChatGPT and humans. A programmer, for example, understands not just what has been stated in a software requirement, but also what hasn’t been stated. That’s because a programmer has the capacity to ask clarifying questions by guessing the motivations of other human beings. And to understand motivation, incentives, and purpose, one needs to understand human emotions.

In the movie, Vaseegaran solves this with a few weeks of training on emotions and exposure to sensations. But I think that’s infeasible for the machine learning models we have today.

It seems like these models need a lot of data to develop accuracy and understanding. How do we obtain such large amounts of data to train models on human emotions? Many emotions are conveyed not through words, but through para-verbal and non-verbal communication. How do we even capture these in a form that models can be trained on?

This is going to be the next hard challenge in front of AI researchers: shifting AI from knowing everything about an emotion to feeling that emotion and predicting its effects.

Without that, AI models can’t accurately understand software design requirements and implement them fully and correctly. And until then, they cannot replace a programmer completely.

It is indeed probable that human beings will take on an interlocutor role, slightly shifting the day-to-day workflow of a programmer. But that’s about the maximum that’s feasible till computers can read between the lines.

Like what you are reading? Subscribe (by RSS, email, mastodon, or telegram)!