Every time you use an AI chatbot, someone loses money. That's a problem.

Using ChatGPT as its leading example, this Washington Post piece investigates the hidden costs of GPU computing. The article highlights the environmental toll of the enormous energy consumed in training machine learning models, a problem frequently overlooked amid the excitement over AI advances.

Although the advantages of these models are evident, the piece urges a balanced approach, supporting efficient GPU computing and sustainable practices in AI development.

Companies aren't releasing their most advanced models to the general public because running these systems is so expensive.

AI chatbots have a problem: they lose money on every chat.

Today's large language models, which power tools like ChatGPT and Bard, are extremely expensive to run, a cost that is limiting their quality and threatening the global AI boom they have sparked.


Their high cost, along with the scarce supply of the computer chips they require, is pressuring even the world's biggest firms to turn chatbots into moneymakers sooner than they may be ready to.

"The models being deployed right now, as impressive as they seem, are really not the best models available," said Tom Goldstein, a professor of computer science at the University of Maryland. As a result, the models you see "have a lot of weaknesses" that could be avoided if money were no object, such as a tendency to produce results that are plainly biased or inaccurate.


The major tech companies betting on AI rarely discuss the technology's cost. Google, Microsoft, and OpenAI, the maker of ChatGPT, all declined to comment. But experts say it is the most glaring obstacle to Big Tech's vision of generative AI tearing through every industry, cutting headcounts, and boosting productivity.

Because AI demands so much computing power, OpenAI has held back its powerful new language model, GPT-4, from ChatGPT's free edition, which still runs on GPT-3.5. ChatGPT's underlying data set hasn't been updated since September 2021, making it useless for looking up or discussing recent events. And because GPT-4 is so expensive to run, even subscribers who pay $20 per month can send only 25 messages every three hours. (It also responds considerably more slowly.)

These expenses may also help explain why Google hasn't added an AI chatbot to its flagship search engine, which handles billions of queries daily. When Google unveiled its Bard chatbot in March, it chose not to use its largest language model. Dylan Patel, chief analyst at the research firm SemiAnalysis, estimated that a single chat with ChatGPT could cost up to 1,000 times as much as a simple Google search.

The Biden administration identified the computing costs of generative AI as a national concern in a recent report on artificial intelligence. The White House wrote that the technology is expected to "dramatically increase computational demands and the associated environmental impacts" and that there is an "urgent need" to design more sustainable systems.

Generative AI, more than other forms of machine learning, requires staggering amounts of computing power and specialized processors, known as GPUs, that only the wealthiest companies can afford. The intensifying battle for those chips has helped turn their leading suppliers into tech behemoths in their own right, handing them the keys to what has become the technology industry's most precious resource.


By giving away services such as social media, email, and online search while losing money at first, Silicon Valley came to dominate the internet economy, eventually making large profits from targeted advertising. Ads are likely coming to AI chatbots as well. But analysts say ads alone probably won't be enough to make cutting-edge AI tools profitable anytime soon.

In the meantime, the companies offering AI models for consumer use must balance their desire to win market share against the money they're losing.

Those poised to profit most from the search for more dependable AI are the cloud computing giants that already dominate much of the digital realm, along with the chipmakers whose hardware the models need to run.

It's no coincidence that the companies building the top AI language models are either among the biggest cloud computing providers, like Google and Microsoft, or closely partnered with them, as OpenAI is with Microsoft. Businesses buying those firms' AI tools don't realize they're being locked into a heavily subsidized service that will eventually cost far more than they're currently paying, said Clem Delangue, CEO of Hugging Face, an open-source AI company.

At a Senate hearing last month, Sen. Jon Ossoff (D-Ga.) warned that Congress "will look very harshly" on OpenAI if it tried to make ChatGPT addictive in a way that harmed children. OpenAI CEO Sam Altman replied that Ossoff need not worry: "We try to design systems that do not maximize for engagement," he said. If anything, Altman added, the less people use OpenAI's products the better, because the company is running low on GPUs.

The costs of AI language models begin with their development and training, which requires vast amounts of data and software to recognize patterns in language. AI companies also typically hire top researchers, whose compensation can rival that of professional athletes. That presents an initial barrier for any company hoping to build a model of its own, though a few well-funded start-ups have managed it, including Anthropic, founded by OpenAI alumni with financial backing from Google.

Each query to a chatbot like ChatGPT, Microsoft Bing, or Anthropic's Claude is then routed to data centers, where supercomputers run the models, performing many high-speed calculations in parallel. The system first interprets the user's request, then works to predict the most likely response, one "token" at a time, a token being a chunk of text roughly four characters long.
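To make the token-by-token process concrete, here is a minimal sketch of autoregressive decoding in Python. The `predict_next_token` function is a hypothetical stand-in for the real model, which would run a large neural network over the whole context on GPUs at every step.

```python
# A minimal sketch of autoregressive decoding. Each step feeds the
# entire context back through the model to predict one more token,
# which is why long prompts and long replies both drive up compute.

def predict_next_token(context: list[str]) -> str:
    """Hypothetical stand-in for a large language model."""
    canned_reply = ["Sure", ",", " here", " is", " an", " answer", ".", "<end>"]
    emitted = len(context) - 1  # tokens generated after the prompt
    return canned_reply[min(emitted, len(canned_reply) - 1)]

def generate(prompt: str, max_tokens: int = 20) -> str:
    context = [prompt]
    for _ in range(max_tokens):
        token = predict_next_token(context)  # one full model pass per token
        if token == "<end>":
            break
        context.append(token)
    return "".join(context[1:])

print(generate("What can you do?"))  # -> "Sure, here is an answer."
```

The loop makes the cost driver visible: generating each new token requires another pass through the model over everything that came before, so compute scales with the length of the conversation.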

Crunching through a large language model takes enormous computational power, which requires graphics processing units, or GPUs: chips originally designed for video games that turned out to be the only hardware capable of handling the workload. The best of them are sold almost exclusively by one company, Nvidia, and cost tens of thousands of dollars apiece. Nvidia's valuation recently soared to $1 trillion on the strength of projected sales. The valuation of TSMC, the Taiwanese company that manufactures many of those chips, has climbed as well.

Elon Musk, who recently bought 10,000 GPUs for his own AI start-up, claimed at a Wall Street Journal conference on May 23 that “GPUs at this point are considerably harder to get than drugs.”

Those computing requirements also help explain why OpenAI is no longer the nonprofit it was founded to be.

Launched in 2015 with the stated mission of developing AI "in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," it switched to a for-profit model in 2019 to attract investors, including Microsoft, which invested $1 billion and became OpenAI's exclusive computing provider. (Microsoft has since invested a further $10 billion and built OpenAI's technology into Bing, Windows, and other products.)

The exact cost of running a chatbot like ChatGPT is a moving target, as companies work to make the models more efficient.

Soon after ChatGPT's debut in December, Altman estimated its cost at "probably single-digits cents per chat." That may not sound like much until it's multiplied across the more than 10 million daily users analysts estimated the service had. In February, SemiAnalysis calculated that ChatGPT was costing OpenAI some $700,000 per day in computing costs alone, based on the processing needed to run GPT-3.5, its default model at the time.
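A rough back-of-the-envelope calculation, treating the figures above as assumptions, shows how single-digit cents per chat become a large daily bill:

```python
# Back-of-the-envelope daily cost estimate. Both inputs are assumptions
# drawn loosely from the figures quoted above, not official numbers.

cost_per_chat_usd = 0.05       # "single-digits cents per chat" (assume 5 cents)
chats_per_day = 10_000_000     # analysts' estimate of daily users, ~1 chat each

daily_cost = cost_per_chat_usd * chats_per_day
print(f"Estimated compute cost: ${daily_cost:,.0f} per day")
# -> Estimated compute cost: $500,000 per day
```

With more than one chat per user, or a slightly higher per-chat cost, the figure quickly reaches the $700,000 per day that SemiAnalysis estimated.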

Multiply those computing costs by the 100 million daily users of Microsoft's Bing search engine, or the more than 1 billion said to use Google, and one can begin to see why the tech giants are reluctant to make their best AI models available to the public.


"This is not a sustainable equation for the democratization or widespread availability of generative AI, the economy or the environment," said Sid Sheth, founder and CEO of d-Matrix, a start-up building more efficient AI chips.

When Google announced Bard in February, it said the chatbot would run a "lightweight" version of its LaMDA language model because the smaller model "requires significantly less computing power, enabling us to scale to more users." In other words, even a company as big as Google wasn't willing to foot the bill for putting its most advanced AI into a free chatbot.


Those cost-cutting efforts showed: Bard made factual errors in its debut demonstration, knocking $100 billion off Google's market value. Bing, meanwhile, stumbled out of the gate as well, forcing Microsoft to rein in both its personality and the number of queries users could pose in a single session.

Such mistakes, known as "hallucinations," have emerged as a major problem with AI language models as people and businesses come to rely on them. Experts say they are a byproduct of the models' fundamental design: they are built to generate probable sequences of words, not true statements.

Google's DeepMind division designed a chatbot called Sparrow to minimize false information; it searches the internet and cites its sources. But Google has yet to release it.


Each of the major competitors is currently scrambling to find ways to reduce the cost of AI language models.

OpenAI's new, lightweight GPT-3.5 Turbo model costs less than a tenth as much per query as its top-of-the-line GPT-4. Google, along with start-ups like d-Matrix, is building its own AI chips, which it says are more efficient than Nvidia's. And many start-ups are building on open-source language models, such as Meta's LLaMA, to avoid paying OpenAI or Google, even though those models aren't yet as capable and lack safeguards against misuse.
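The gap is easy to see with a small illustrative calculation. The per-token prices below are assumptions based on OpenAI's published mid-2023 list prices, which change over time:

```python
# Illustrative per-query cost comparison. Prices are assumed mid-2023
# list prices in dollars per 1,000 tokens and may not be current.

PRICES_PER_1K_TOKENS = {
    "gpt-3.5-turbo": {"prompt": 0.0015, "completion": 0.002},
    "gpt-4":         {"prompt": 0.03,   "completion": 0.06},
}

def query_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    price = PRICES_PER_1K_TOKENS[model]
    return (prompt_tokens / 1000) * price["prompt"] \
         + (completion_tokens / 1000) * price["completion"]

# A typical chat turn: 500 tokens in, 500 tokens out.
for model in PRICES_PER_1K_TOKENS:
    print(f"{model}: ${query_cost(model, 500, 500):.4f} per query")
# gpt-3.5-turbo: $0.0018 per query
# gpt-4: $0.0450 per query -- roughly 25 times more
```

Under these assumed prices, GPT-4 costs roughly 25 times more per query, consistent with the "less than a tenth" comparison above.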

Demand for smaller, cheaper models has suddenly turned the industry around, according to Maryland's Goldstein.

"We spent the last four years just trying to make the biggest models we could," he said. But back then, the goal was to publish research papers, not to put AI chatbots in the hands of the general public. "Now, just in the last few months, the community has completely changed, and all of a sudden, everyone is trying to build the smallest model they can to control the costs."

For consumers, that could mean the days of unrestricted access to powerful general-purpose AI models are numbered.

Microsoft is already experimenting with placing ads in the AI-powered search results of Bing. At the Senate hearing, OpenAI's Altman said he prefers a paid subscription model but did not entirely rule out doing the same.

Both companies insist the economics will work out in the end. "There is so much value here, it's incomprehensible to me that we can't figure out how to ring the cash register on it," Altman told the tech blog Stratechery in a February interview.

Critics point out, however, that generative AI has societal costs as well.

"All this processing has implications for greenhouse gas emissions," said Bhaskar Chakravorti, dean of global business at Tufts University's Fletcher School. The energy spent on that computation could be put to other uses, including computing tasks less fashionable than AI language models. The demand "could even slow down the development and application of AI for other, more meaningful uses, such as in healthcare, drug discovery, cancer detection, etc.," Chakravorti said.

Based on estimates of ChatGPT's usage and computing needs, data scientist Kasper Groes Albin Ludvigsen calculated that in January it may have consumed as much electricity as 175,000 people, the equivalent of a large city.
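Ludvigsen's figure can be loosely reconstructed with a back-of-the-envelope estimate. Every input below is an illustrative assumption chosen to land near his result, not a number from his analysis:

```python
# Rough reconstruction of the style of estimate Ludvigsen made.
# All inputs are illustrative assumptions, not his actual figures.

wh_per_request = 25            # assumed energy per ChatGPT request (watt-hours)
requests_per_day = 30_000_000  # assumed January request volume
days_in_january = 31

total_kwh = wh_per_request * requests_per_day * days_in_january / 1_000
kwh_per_person = 136           # assumed monthly electricity use per person

print(f"ChatGPT: ~{total_kwh / 1e6:.1f} million kWh in January")
print(f"Comparable to ~{total_kwh / kwh_per_person:,.0f} people's monthly usage")
# -> ~23.2 million kWh, roughly 171,000 people
```

Estimates like this are highly sensitive to the assumed energy per request, which is why published figures vary widely.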

For now, Goldstein said, the tech giants are willing to lose money to win market share for their AI chatbots. But what if they never become profitable? Once the hype cycle ends, he noted, investors will care only about the bottom line.

Even with all its shortcomings, Goldstein predicted, generative AI will be hard for many people and businesses to resist. The tools may be pricey, he said, but "human labor is still much more expensive."
