There’s a lot of controversy over AI. The passion of those for and against its proliferation is almost religious in nature, complete with zealots and heretics. The zealots believe that AI is a revolutionary technological innovation that will quickly transform our lives for the better and that it is a good development for humanity. A few of these believers do concede that it has the potential to be evil, turning against its creators.
The heretics say that AI certainly is artificial but is not intelligent. Some of them say it never will be intelligent. It is basically a statistical probability model that can digest huge amounts of information from the Internet but lacks the ability to recognize and correct its own mistakes, which is a key attribute of intelligence.
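To make the heretics’ point concrete, here is a deliberately tiny sketch of a “statistical probability model”: a bigram counter that predicts the next word purely from how often words followed one another in its training text. This illustrates the principle only; production LLMs use neural networks trained on vastly more data, but the core task is still next-word prediction, with no built-in test for truth.

```python
# A tiny "statistical probability model": a bigram counter that predicts
# the next word purely from how often words followed one another in its
# training text. It has no notion of whether a continuation is true,
# only of how frequent it is.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word is followed by each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen in training, or None."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the bear lives in space . the bear lives in russia ."
model = train_bigrams(corpus)
print(predict_next(model, "lives"))  # -> "in", chosen by frequency alone
```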
Consider the following intelligent opinions:
(1) Sam Altman, the CEO of OpenAI, said the following about AI:
“This will be the greatest technology humanity has yet developed.” He believes that it has the potential to revolutionize nearly every industry, not just the ones most obviously ripe for radical transformation, such as healthcare, finance, and education.
(2) Elon Musk has been warning that AI creates an “existential risk” for humanity. He sees tremendous benefits but also a great need to manage the risks of AI. He has said,
“I’m particularly worried that these models could be used for large-scale disinformation.”
(3) Gary N. Smith, the Fletcher Jones Professor of Economics at Pomona College, has written extensively about AI. He wrote a book titled The AI Delusion. He was also one of my professors at Yale. He believes that AI isn’t intelligent and that it has the potential to pollute the Internet with lots of disinformation. See, for example, his January 15, 2024 article titled “Internet Pollution—If You Tell A Lie Long Enough…” He argues that:
“ChatGPT, Bing, Bard, and other large language models (LLMs) are undeniably astonishing. Initially intended to be a new-and-improved autocomplete tool, they can generate persuasive answers to queries, engage in human-like conversations, and write grammatically correct essays. So far, however, their main successes have been in providing entertainment for LLM addicts, raising money for fake-it-till-you-make-it schemes, and generating disinformation efficiently.
“It is said that if a lie is told long enough, people will come to believe it. In our internet age, a lie repeated in a large number of places on the Internet will eventually be accepted as truth by LLMs—particularly because they are not designed to know what words mean and consequently have no practical way of assessing the truth or falsity of the text they input and output.
“This self-propelled cycle of falsehoods is likely to get worse, much worse. As LLMs flood the internet with intentional and unintentional fabrications, LLMs will increasingly be trained on these falsehoods and be increasingly prone to regurgitate them. It won’t just be amusing garbage about Russian bears in space. It will be vicious lies about people, businesses, and governments—all spouted confidently and authoritatively—and many people will be conditioned to believe the LLMs’ rubbish.”
Artificial Intelligence II: Autofill on Speed
The simplest versions of AI have been around for a while. Microsoft Word has long had an autofill feature. When you turn it on, it anticipates your next words and suggests words or phrases as you type. When you use it, you must check that it is correctly predicting the word you intend to spell or the next few words you intend to write.
If it makes the wrong prediction, you immediately recognize its mistake and just keep typing, ignoring autofill’s suggestions. Google describes its Autocomplete as “a feature within Google Search that makes it faster to complete searches that you start to type.”
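The mechanic behind such features can be sketched in a few lines. The toy example below (illustrative only, not Microsoft’s or Google’s actual implementation) ranks previously typed phrases by frequency and suggests the most common one that starts with whatever the user has typed so far:

```python
# A toy autocomplete: rank previously typed phrases by frequency, then
# suggest the most common one that starts with the user's prefix.
from collections import Counter

class Autocomplete:
    def __init__(self):
        self.phrases = Counter()

    def record(self, phrase):
        """Learn from text the user has previously typed or searched."""
        self.phrases[phrase.lower()] += 1

    def suggest(self, prefix):
        """Propose the most frequent known completion of the prefix.
        A wrong guess costs nothing; the user just keeps typing."""
        prefix = prefix.lower()
        matches = [(n, p) for p, n in self.phrases.items() if p.startswith(prefix)]
        return max(matches)[1] if matches else None

ac = Autocomplete()
for query in ["weather today", "weather tomorrow", "weather today"]:
    ac.record(query)
print(ac.suggest("wea"))  # -> "weather today" (typed most often)
```

The design point matches the workflow described above: the tool only suggests, and the human accepts or ignores.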
Other examples of AI that have been around for a while are Apple’s Siri and Amazon’s Alexa. They can accurately answer lots of questions. They can play music, videos, and audio books. They can wake you up in the morning and tell you the weather. But they can’t converse with you. They are one-trick ponies as personal assistants. In my opinion, AI will be ready for primetime once Siri and Alexa can function as multitasking personal assistants.
Google offers Google Assistant for Android, which does many of the same things as Siri and Alexa. I tried asking it for directions to JFK airport from my home on Long Island. It worked as well as Waze. But when I asked for the nearest gasoline station, it suggested one 45 miles away. When I chastised it for the mistake, it apologized and said it was still in training.
Artificial Intelligence III: Search on Speed
AI can also be viewed as search on speed. Indeed, Google now often includes a short “AI Overview” at the top of search pages. At Yardeni Research, we’ve been using Microsoft’s Copilot as a search engine. It also functions as a research assistant because it provides a short write-up of the subject we are researching with links to the sourced articles. That makes it much easier to fact-check Copilot’s summary and to avoid cribbing.
In effect, Copilot is “Search and Summarize” on speed and is an efficient way to use AI while reducing the risks of disinformation, since the sources of the AI-generated summary are readily available.
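The pattern is simple enough to sketch. In the outline below, web_search and llm_summarize are hypothetical stand-ins for whatever search backend and language model a tool like Copilot actually uses; the point is the shape of the pipeline, in which the summary is returned together with the sources it was built from so a human can fact-check every claim:

```python
# A sketch of the "Search and Summarize" pattern. `web_search` and
# `llm_summarize` are hypothetical stand-ins, not a real API; the point
# is that the summary travels with the sources it was built from.

def search_and_summarize(query, web_search, llm_summarize, k=5):
    """Retrieve top-k results, summarize only that material, and return
    the summary alongside the URLs it came from."""
    results = web_search(query)[:k]  # assumed: [{"url": ..., "text": ...}, ...]
    sources = [r["url"] for r in results]
    context = "\n\n".join(r["text"] for r in results)
    summary = llm_summarize(
        f"Summarize the following material on '{query}'. "
        f"Use only this material; do not add outside claims.\n\n{context}"
    )
    return {"summary": summary, "sources": sources}
```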
Artificial Intelligence IV: Big Data Combined with Supercomputing
AI that is fed all the data and information that are available on the Internet will collect a lot of disinformation and produce a lot of disinformation, as Gary Smith observes. However, when it is used solely to analyze limited pools of content known to be reliable—e.g., data proprietary to the researcher or information from external sources that have been properly vetted—then AI should be a significant source of productivity gains.
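A minimal sketch of that discipline, with illustrative names rather than any real API, might look like this: filter the document pool to a whitelist of vetted sources before retrieval, so unvetted text never reaches the model:

```python
# A sketch of "vetted pool only" retrieval: filter documents to an
# approved whitelist *before* retrieval, so unvetted text never reaches
# the model. Source names here are illustrative placeholders.

VETTED_SOURCES = {"internal-research", "audited-financials", "licensed-data"}

def vetted_retrieve(query, documents, k=3):
    """Keep only documents from approved sources, then rank the
    survivors by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    pool = [d for d in documents if d["source"] in VETTED_SOURCES]
    scored = sorted(
        pool,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]  # only these vetted documents go to the model
```

The keyword scoring is a placeholder for a real relevance ranker; the substantive safeguard is the whitelist filter, which runs before anything reaches the model.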
Again, AI has been around for quite some time but with limited applications and accuracy. What has changed is the use of Nvidia’s lightning-fast GPU chips to speed up the processing and statistical analysis of all the data provided to the LLM. So in effect, AI is Big Data combined with supercomputing. Widespread adoption of such powerful capabilities should provide a big boost to productivity.
A great example of this is the way Walmart is leveraging generative AI to improve its customers’ experience. Walmart is using LLMs to create or improve more than 850 million pieces of data across its product catalog, a process that executives said would have required 100 times the headcount to complete in the same amount of time.
Walmart’s employees are using mobile devices to quickly locate inventory and get items on shelves or to waiting customers. That’s a significant upgrade from the “treasure hunt” of finding items in years past, according to John Furner, president and CEO of Walmart U.S. He also said that inventories were down 4.5% thanks to AI.
Artificial Intelligence V: Hallucinations
A search of “AI hallucinations” on Google produced the following AI Overview: “AI hallucinations, or artificial hallucinations, are when AI models generate incorrect or misleading results that are presented as fact. These errors can be caused by several factors. AI hallucinations can be a problem for AI systems that are used to make important decisions, such as medical diagnoses or financial trading.” AI has warned you about AI.