
Exploring the Risks of AI Art – Part 2

  • Alex Vezina
  • 3 days ago
  • 3 min read

The continuation of last week's piece on the risks of AI art. This week focuses on the other two sections previously referenced: resource use and the Terminator problem.


Resource Use


The two resources most commonly discussed in connection with increased AI usage are electricity and water. In both cases, the challenge AI presents is primarily one of increased demand; it does not use these resources in a way that is especially environmentally problematic compared to other industries.


The primary issue is the increased strain on limited available resources.


Most AI processes involve massive numbers of computations in data centers, which require a proportionately massive amount of energy to operate.


The International Energy Agency (IEA) estimates the global energy demand of AI in 2024 at about 415 terawatt-hours (TWh), approximately 1.5% of global electricity consumption.


While 1.5% might not sound like much, in the context of the global energy industry it is a very large number, and it is expected to continue increasing rapidly over time.
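As a quick sanity check on the scale implied by these figures, the two numbers quoted above can be combined to back out the global total (a sketch; the inputs are simply the article's own figures):

```python
# Back-of-envelope check using the IEA figures cited above.
ai_demand_twh = 415        # estimated global AI electricity demand, 2024 (TWh)
share_of_global = 0.015    # ~1.5% of global electricity consumption

# Implied total global electricity consumption
implied_global_twh = ai_demand_twh / share_of_global
print(f"Implied global electricity consumption: ~{implied_global_twh:,.0f} TWh")
# prints: Implied global electricity consumption: ~27,667 TWh
```

In other words, the 1.5% share implies a global total on the order of tens of thousands of terawatt-hours, which is why even a single-digit percentage represents an enormous absolute demand.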


When looking at AI within the context of global energy demand there are a few broad observations that are relevant:


1. The human population is still increasing; more energy is required to support each new person.


2. Demand for energy per person is also increasing.


3. New renewable, non-hydrocarbon energy systems cannot keep up with the increased demand, even without AI. Hydrocarbon fuels still make up over 80% of global energy supply and remain fundamentally required to maintain the current quality of life on the planet.

4. Individuals are generally willing to accept moderate harm or inconvenience to another person, especially someone who is geographically distant, in exchange for personal benefit.


Taken together, these observations create an interesting situation. To many, AI-assisted tools simply represent another modern convenience. Meeting their energy cost will require convenient, reliable power generation, which is likely to mean reliable hydrocarbons such as liquefied natural gas (LNG).


The sheer scale of energy required creates a situation where the most efficient and/or cheapest energy producers will likely dominate. There is simply too much rapidly increasing demand.


A similar phenomenon will likely be observed with water. AI data centers require water for cooling, and just as with electricity, they account for only a fraction of global water use.


For context, total global data center water usage is approximately 560 billion litres per year; this covers all data centers, not just those running AI. It is projected to reach 1.2 trillion litres per year by 2030.


The total current agricultural water usage for almonds grown in California is 5.9 trillion litres per year.


To be fair, almonds are a particularly inefficient crop when it comes to water usage, but the comparison still demonstrates the point: many things use these resources inefficiently, and AI is simply one symptom of a larger challenge in overall resource management.
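The scale comparison above can be made concrete with the article's own figures (a sketch; all inputs are the litres-per-year numbers quoted in the preceding paragraphs):

```python
# Water usage comparison, in litres per year, using the figures quoted above.
datacenter_water_now = 560e9    # all data centers globally, current
datacenter_water_2030 = 1.2e12  # projected global data center use by 2030
almond_water = 5.9e12           # California almond agriculture

# Data center water use as a share of California almond water use
print(f"Data centers today vs. almonds: {datacenter_water_now / almond_water:.1%}")
print(f"Projected 2030 vs. almonds:     {datacenter_water_2030 / almond_water:.1%}")
# prints: Data centers today vs. almonds: 9.5%
#         Projected 2030 vs. almonds:     20.3%
```

Even the projected 2030 figure for every data center on Earth is roughly a fifth of what a single crop in a single state consumes, which is the point about proportion being made here.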


If a small town has its watershed disrupted, it does not really matter whether the cause is AI (the tech sector) or almonds (the agricultural sector); the major issue is that the watershed is disrupted.


When looking at AI's effect on resources, the conversation appears to focus on AI itself as the problem. My hypothesis is that AI has effectively become a scapegoat for the much larger issue of overall resource management, which ought to be the main conversation.


The Terminator Problem


The Terminator is a 1984 film depicting an apocalyptic scenario in which sentient AI robots attempt to eliminate humanity. The theme has appeared many times in media, and The Terminator remains the most common reference for the idea.


There is currently negligible risk of this scenario with existing AI technology, which is nowhere close to achieving true general artificial intelligence. Common AI tools are effectively highly sophisticated search engines.


Machine learning has been extremely useful in rapidly advancing many technologies, but there is no current evidence that the computers are sentient or that the robots are coming to kill humanity.


In virtually all cases where an 'industry expert' has gone public to discuss this risk, that expert or their company stood to benefit from the news breaking. People get afraid of the Terminator; the tech company's stock price goes up. An online influencer stokes fear; engagement rises and they earn more ad revenue. Such conflicts of interest bring the messenger's credibility into question.


The conversation around more efficient resource use and what our priorities ought to be could be highly productive. The conversation around the robots coming to kill us, while entertaining, is strategically a waste of time.


Vezina is the CEO of Prepared Canada Corp. and is the author of Continuity 101. He can be reached at info@prepared.ca.

