
August 21, 2023

Language models are just tools — be careful how you use them

It looks like artificial intelligence is here to stay, and it’s still in its infancy.

The branch of artificial intelligence (AI) concerned with language models focuses on enabling computers to understand and generate human language. AI, broadly, is the machine simulation of human intelligence processes. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.


AI language models are a major component of natural language processing (NLP). A large language model (LLM) consists of a neural network with many parameters trained on large quantities of unlabeled text using self-supervised or semi-supervised learning.

An article in Analytics India Magazine notes that in supervised learning, AI systems are fed labeled data. With bigger models, however, it becomes impractical to label all the data, hence the need for self-supervised learning, in which the model derives its own training signal from the unlabeled text itself.
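For readers curious what that distinction looks like in practice, here is a minimal sketch in plain Python. The function names are illustrative only, not from any library: supervised learning pairs each text with a human-provided label, while self-supervised learning manufactures its own targets from the text, as in the next-word prediction used to train large language models.

```python
def supervised_examples(texts, human_labels):
    # Supervised learning: every text needs a label supplied by a person.
    return list(zip(texts, human_labels))

def self_supervised_examples(text):
    # Self-supervised learning: the "labels" come from the text itself.
    # Each word becomes the prediction target for the words before it.
    words = text.split()
    return [(words[:i], words[i]) for i in range(1, len(words))]

# Each resulting pair is (context words so far, next word to predict).
pairs = self_supervised_examples("language models predict the next word")
```

No human annotator is needed for the second function, which is why the approach scales to the enormous text corpora used to train LLMs.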

This commentary will stop at any further differences between the two. You will have to settle for that basic description.

Nothing is sacred

Examples of language models include voice assistants like Siri and Alexa.

You may have seen in the news that Amazon will pay the Federal Trade Commission more than $30 million to settle allegations of privacy lapses in its Alexa and Ring divisions. I suggest you mute your Alexa when not in use. Nothing is sacred.

Other language models power Google Translate and Microsoft Translator, which translate words and text between languages.

In a BMC blog post, Jonathan Johnson describes four types of AI: reactive machines, limited memory, theory of mind and self-aware.

Reactive machines are the simplest and perform very basic operations with no learning. An example is a system that can identify a human face. Another is Deep Blue, the chess-playing IBM supercomputer that beat world champion Garry Kasparov.

Limited memory is AI that can store previous data and/or predictions and use that data to make better predictions. Here, machine learning architecture becomes a bit more complex. An example is a self-driving car that uses limited memory to observe the car's speed and direction, allowing the vehicle to adjust to the road as needed and avoid accidents. At least that's the goal. However, the word "limited" is appropriate because in self-driving vehicles, the information is not saved in the car's long-term memory.

Theory of mind AI is only in its beginning phases. What is interesting about this AI is that it begins to interact with the thoughts and emotions of humans. For example, you can hold a conversation with an emotionally intelligent robot that looks and sounds like a real human being. This type of AI allows machines to acquire a true decision-making capability similar to that of humans.

Well, at least some humans.

However, there is still a lot to be done, because we know that behavior and emotions are extremely variable in human communication. As clinicians, we experience that every day with the training of residents. And if you have ever raised children, you can appreciate shifting behavior and emotions.

Johnson describes self-aware AI as an intelligence beyond the human, independent enough that it would likely force individuals to negotiate terms with the entity they created. Whether the endpoint is good or bad is anyone's guess. Machines with this type of AI will be aware of their own internal emotions and mental states. This sophisticated AI still lacks the hardware and algorithms to support it. We need to get this one right.

The sky is the limit

As you can tell, these four types of artificial intelligence aren't all equal. Some aren't even scientifically possible yet.

The potential of AI and the progress it is making make one wonder if a fifth type will be developed. Will it be possible to develop a superintelligent AI that surpasses the current intelligence of humans?

For some humans, that may be a breeze.

However, as Asha Zimmerman, MD, stated, “These language models are ultimately just tools. Tools can be used well or poorly by people in responsible or unethical ways.”

At this time, all we can say is that science continues to push the limits, and it looks like the sky is the limit when it comes to AI. Let's hope that as we push the limits, we don't lose control of our own emotions and that we use these tools wisely.

Stay safe.

For more information:

Nicholas J. Petrelli, MD, FACS, is Bank of America endowed medical director of ChristianaCare’s Helen F. Graham Cancer Center & Research Institute and associate director of translational research at Wistar Cancer Institute. He also serves as Associate Medical Editor for Surgical Oncology for HemOnc Today. He can be reached at npetrelli@christianacare.org.