AI – at what cost?

Written by Vincent Bryce 

The last four years have seen remarkable advances in artificial intelligence, especially in the development and availability of ‘foundation models’. In particular, generative large language models (LLMs) have astounded users with their capabilities, and many organisations are seeking to integrate them into services and processes. Yet, behind their utility lie hidden costs that challenge the sustainability of their increasingly widespread use.

LLM AI – visible benefits, less-visible downsides

Large Language Models (LLMs) are AI systems trained on vast amounts of text data to predict and generate human-like language. Their abilities range from drafting essays to answering complex questions in conversational interfaces. They are in some cases free to end users, and enable individual benefits as well as the potential for scientific and other beneficial uses. However, these capabilities also come with potential downsides for society and the environment.

The energy impacts of large language models are an inconvenient truth that has only recently received wider attention. The dismissal of Timnit Gebru from Google in 2020, following the controversial “Stochastic Parrots” paper, sounded a note of alarm. Her paper highlighted the environmental and ethical risks of large-scale AI models, including the energy required to train and run them. The controversy over her dismissal, and the commercial sensitivity of the issue for a corporate AI behemoth, underscored a growing tension between LLM-based AI and the cost of its arcane methods of production.

Since then, OpenAI’s ongoing corporate governance microdrama has indicated a tension between ambitious emancipatory aims and commercial imperatives, which may drive AI companies to encourage widespread LLM use with a goal of future commercial returns – much as the picture of Elon Musk’s motivations for electric vehicle development has shifted over time. This raises the question of whether the relationship between LLM AI and society is as beneficial as it first seems.

Parasitism of energy and ideas

The rapidly increasing capabilities of LLMs are powered by increasingly massive datasets, with their use spurred on by competition and ‘fear of missing out’ on a perceived game-changing technology. Training models like GPT-4 involves a large energy investment. While subsequent inference – running queries against these models – uses less energy, the cumulative cost remains significant. Increasing adoption of LLMs in industry – particularly when encouraged by ‘technology push’ such as the provision of free preview access, the addition of LLM features to existing software, and growing awareness in industry of the need to train or otherwise locally tailor models to overcome their limitations – exacerbates the energy burden, with each deployment adding to global demand.
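
To make the ‘cumulative cost’ point concrete, here is a minimal back-of-envelope sketch comparing a one-off training energy budget with ongoing inference at scale. All figures (the training budget, per-query energy, and query volume) are illustrative assumptions broadly in line with published GPT-3-scale estimates, not measured values for any particular system.

# Back-of-envelope sketch: one-off training energy vs cumulative inference energy.
# All numbers are illustrative assumptions, not measurements of any real system.
TRAINING_MWH = 1300.0          # assumed one-off training energy budget (MWh)
WH_PER_QUERY = 3.0             # assumed energy per inference query (Wh)
QUERIES_PER_DAY = 10_000_000   # hypothetical deployment serving 10 million queries a day

daily_inference_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh
days_to_match_training = TRAINING_MWH / daily_inference_mwh

print(f"Inference energy: ~{daily_inference_mwh:.0f} MWh per day")
print(f"Cumulative inference equals the training budget after ~{days_to_match_training:.0f} days")

On those assumptions, inference overtakes the one-off training cost within a couple of months, which is why widespread deployment, rather than training alone, drives the energy burden.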

Literature on blockchain has highlighted the ways in which hyped digital technologies can consume outsized energy resources, in particular when broader recreational or commercial uses are popularised. The broader adoption of blockchain technologies, apparently favoured by the incoming US administration, indicates that private commercial considerations can outweigh public interest considerations in energy use. This issue may become increasingly prominent as OpenAI starts to compete with Google by using its LLM as a search tool – while the calculations involved are complex and speculative, the energy cost of a ‘ChatGPT search’ may be many times that of a traditional ‘Google search’.
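
As a rough illustration of that last comparison, the sketch below contrasts commonly cited per-query energy estimates for a conventional web search and an LLM-backed search. Both per-query figures and the daily query volume are assumptions chosen for illustration; real values vary with hardware, model size, and workload.

# Back-of-envelope sketch: per-query energy of an LLM search vs a conventional web search.
# Figures are assumed, commonly cited estimates, not measurements.
WEB_SEARCH_WH = 0.3              # assumed energy per conventional web search (Wh)
LLM_SEARCH_WH = 3.0              # assumed energy per LLM-backed search query (Wh)
QUERIES_PER_DAY = 1_000_000_000  # hypothetical daily query volume

ratio = LLM_SEARCH_WH / WEB_SEARCH_WH
extra_mwh_per_day = (LLM_SEARCH_WH - WEB_SEARCH_WH) * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh

print(f"An LLM-backed search uses roughly {ratio:.0f}x the energy of a conventional search")
print(f"At {QUERIES_PER_DAY:,} queries per day, that is ~{extra_mwh_per_day:,.0f} MWh of extra demand daily")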

The increasing energy demands of AI raise concerns about the strain on power grids (Henderson et al., 2023). Reports indicate that the growing workloads of AI systems increasingly burden infrastructure, with residents liable to bear the financial costs, and with new nuclear reactors potentially required to meet demand.

Beyond this, the reliance of LLMs on datasets scraped without consent has sparked legal and ethical debates about intellectual property rights. A specific example of hidden impacts concerns creative professionals such as artists and authors, whose output has been scraped, without payment, into the LLM, and whose livelihoods may subsequently be at risk when the resulting generative AI tool is made available (at least initially) to the public for free.

A further social cost relates to the reliance of LLM systems on extensive human labour for tasks such as data labelling, often outsourced to low-paid ‘ghost workers’ in the Global South (Gray & Suri, 2019) – in countries where the automation of Global North processes using the resulting AI models may then displace better-paying customer service jobs. Further costs may emerge in practice, with AI-enabled ‘algorithmic management’ practices liable to affect workers negatively alongside their potential benefits.

Toxic in large doses – reframing how we consider potent technologies

Throughout history, humanity has celebrated transformative materials like mercury and plastics, only to later recognise alarming downsides. Similarly, LLMs represent a ‘meta-technology’ with seemingly limitless applications, and correspondingly high potential for good or ill. Responsible management, akin to modern controls on mercury and plastics, may help us harness their benefits while managing risks – we still use mercury in thermometers and switches, but not as a tonic or infant teething aid. This ‘hormetic’ perspective suggests the need for cautious, purpose-driven deployment of AI, with limits to its availability to avoid insidious long-term impacts.

Frameworks like Responsible Research and Innovation (RRI), and the methods and perspectives developed in journals such as JRI, JRT, and Novation, offer approaches to managing the benefits of technologies alongside their downsides. They encourage anticipation, reflection, and inclusion in technology development (Von Schomberg, 2013; Owen et al., 2013). By applying them, we can better assess when and how to deploy transformative technologies like AI in ways that align with societal values and sustainable development goals.

They open us up to the viewpoint that, sometimes, the responsible way to use a technology in a given situation may be to use it carefully and sparingly – or not at all.

References and recommended reading