Impel Blog

Good Will Hunting & ChatGPT


Nauman Sheikh, Sr. Director – Head of Data and AI Products
Transforming & Reinventing operations using Data/AI

When you hear about ChatGPT and other conversational AI bots, a few typical reactions are common. Alarm: the Terminator prophecy is coming true and machines are taking over. Skepticism: that can't be real, there must be some trick at work, technology cannot be this capable. Or perhaps cynicism: seizing on that one incorrect AI answer someone posted on social media and discarding the entire achievement. However, if you spend enough time playing with a Large Language Model (LLM for short), interacting with it over chat on a wide variety of topics through any of the various offerings, you are bound to be amazed eventually. In a way somewhat reminiscent of Deep Blue's 1997 defeat of Garry Kasparov, this software is actually pretty smart.

Understanding the Capabilities of ChatGPT: The “Will Hunting” Analogy
Once you reach this conclusion, two questions normally arise: how does it work, and what are its limitations? Let's address both using Matt Damon's character, and no, it's not Jason Bourne, it's Will Hunting from the movie Good Will Hunting. An LLM is like Will Hunting: smart, strong analytical ability, and extremely well read. Just like in the movie, Will could talk at length on any topic with any expert because he had read all the popular material on that topic and had absorbed and synthesized it. LLMs go a step further in the sheer expanse of reading material consumed. And, as with the movie character, there was no objective or real purpose behind that knowledge. LLMs have read millions of books and articles on every subject from history to philosophy to chemistry to travel guides, and have extensive knowledge of healthcare, financial management, and handyman tasks. They have read the classics from all the great poets and writers of human history. In absorbing that material, LLMs have also acquired a very deep understanding of language structure, vocabulary, sentiment, and the idiosyncrasies of words and phrases. But they have no sense of context or purpose as to what to do with all that knowledge and learning. When you ask something and engage with an LLM, all it does is respond as best it can from the knowledge it has acquired, drawing on pretty much everything published on that topic in the public domain.

Navigating Context and Prompts
Can they make assumptions and speculate on things, ideas, and possibilities they have never seen? To some degree they can, like solving a problem or puzzle for which they have not seen a solution. But if you ask an LLM about the outcome of a college basketball game or the next US presidential election, it wouldn't know and "may not" speculate. That "may not" in quotes is important to understand, because that is where human control comes into the picture through what are called prompts. In the movie, when Will Hunting was talking to a girl, a professor, a friend, or an adversary, he would apply his knowledge differently, adjusting his response and sentiment in his choice of words; he was, after all, a human character. LLMs have no such notion of the person chatting with them (or another system, for that matter), and therefore need context to go with a question or conversation in order to formulate an appropriate response. If we leave it to the bot's own intuition, no one really knows where the conversation will go. Still, during that conversation, LLM-based chatbots don't judge, have no bias against the other person, are not moody, and will try to continue the conversation to the best of their ability without any malicious intent or deception.
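To make the idea of a prompt concrete, here is a minimal sketch of supplying that missing context alongside a question. It assumes the OpenAI Python SDK and its Chat Completions interface; the model name and the prompt text are illustrative choices, not a recommendation.

```python
# A minimal sketch of supplying context via a system prompt, assuming the
# OpenAI Python SDK (v1.x). Model name and prompt contents are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would work here
    messages=[
        # The system message is the "context" the model cannot infer on its
        # own: who it is talking to and how it should respond.
        {"role": "system",
         "content": "You are a patient tutor explaining concepts to a "
                    "high-school student. Keep answers short, avoid jargon."},
        {"role": "user",
         "content": "Why does the moon have phases?"},
    ],
)
print(response.choices[0].message.content)
```

Change only the system message, and the same question comes back in a very different register; that is the human control the prompt provides.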

Limitations and the Need for Specific Details
To understand its limitations, let's take the Will Hunting character beyond the story in the movie. What if Will Hunting were asked to perform cardiovascular surgery on a heart patient? While he could certainly read all the available books and articles and watch all the videos, he would still need to understand the patient's history, medications, reports, and vitals, and would need surgical precision with the instruments to carry out the operation, which would require years and years of practice and training. The same is true for LLMs: while they are great at conversation and seem to know everything about anything, they need to learn the specific details of a given situation in order to formulate a decisive action. That knowledge is not available in books and articles; it may be buried in corporate or government databases. Therefore, that information has to be passed to the LLM in the form of a prompt that directs it to review and prefer that material over its own training when formulating a response. All of the major LLMs provide open access and interfaces for prompts ranging from the simple to the complex to the creative, and that is where human control, and the human limitations of imagination, come in. How the LLM will interpret a prompt to formulate a response is anyone's guess, and therefore extensive prompt engineering, review, and experimentation are warranted before LLMs can be allowed to make decisions that impact real-life situations.
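As a hedged illustration of that pattern, the sketch below pastes situation-specific records into the prompt and instructs the model to prefer them over its general training, the approach often called retrieval-augmented generation. The patient_record fields and the model name are hypothetical placeholders, again assuming the OpenAI Python SDK.

```python
# A sketch of grounding the model in data it was never trained on, by placing
# retrieved records directly in the prompt. patient_record is a hypothetical
# stand-in for data fetched from a private corporate or government database.
from openai import OpenAI

client = OpenAI()

patient_record = {  # hypothetical data pulled from a private database
    "history": "Type 2 diabetes, prior stent (2019)",
    "medications": "metformin, atorvastatin",
    "latest_vitals": "BP 142/90, HR 88",
}

# Flatten the record into plain text the model can read.
context = "\n".join(f"{key}: {value}" for key, value in patient_record.items())

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Instruct the model to prefer the supplied material over its own
        # training, and to admit when the record does not cover the question.
        {"role": "system",
         "content": "Answer ONLY from the record below. If the record does "
                    "not contain the answer, say so instead of guessing.\n\n"
                    f"PATIENT RECORD:\n{context}"},
        {"role": "user",
         "content": "Is this patient on any cholesterol medication?"},
    ],
)
print(response.choices[0].message.content)
```

Constraining the model to the supplied record is what keeps the response tied to facts the model never saw during training, rather than its own plausible-sounding guesses.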

Changing Perceptions and Exploring Possibilities
The tech pundits and some influential naysayers need to stop the fear-mongering against this evolution of artificial intelligence. They need to simplify its explanation, access, and possibilities so people can decide for themselves how and when to make use of it. What would you do if you had your own personal, dedicated Matt Damon at your disposal 24/7? Heck, I'd take Ben or Casey Affleck for that matter. And for those who have not seen this great movie, it may be about time you picked it up.