Overdependence on AI Art
- Alex Vezina
I received a question relating to a previous piece on AI art:
“What a great discussion. This is a topic I wish I understood better. One point I’ve seen raised is that if AI has to train on human work, but that human work is washed out by a much greater abundance of AI-generated content, which then trains off this derivative, the work will deteriorate over time and generations of training. What do you make of this? Is that something people are thinking about on the horizon, or does it not find itself in the current discourse on preserving human-made art?”
This question touches on multiple risks, all of which stem from two underlying assumptions:
1. AI-generated content becomes dominant and effectively replaces non-AI-generated content in the market.
2. AI’s ‘inspirational’ source material becomes scarce or is effectively eliminated, creating a ‘void’ of new potential inputs. This seems to imply that non-AI art and artists would be obsolete. (A toy sketch of the resulting feedback loop follows this list.)
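As an aside, the degradation the question describes is discussed in machine learning circles under the name ‘model collapse’. Below is a minimal toy sketch of the feedback loop, under the assumption that each generation of content is produced only by imitating the previous generation; the population size and ‘styles’ are invented for illustration, not drawn from any real system.

```python
import random

random.seed(42)

# Generation 0: 100 works, each in its own distinct human "style".
population = list(range(100))

for generation in range(1, 31):
    # The next generation is produced only by imitating (sampling with
    # replacement from) the previous generation -- no new human input.
    population = random.choices(population, k=len(population))
    if generation in (1, 5, 10, 20, 30):
        print(f"generation {generation:2d}: "
              f"{len(set(population))} distinct styles remain")
```

The count of distinct styles can only shrink: once no work in a generation carries a style, no later generation can recover it. Real training pipelines are far more complicated, but this is the ‘washing out’ dynamic in its barest form.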
The first immediate risk that comes to mind is organizational memory loss.
Organizational memory is the skills, knowledge, and know-how held by individuals in an organization, which are lost when those individuals leave. The concept is usually applied to critical-staff risk in an employment context.
In this case, the organization is macro in scale: effectively society, or humanity at large.
The idea is that, over a significant span of time, the people with traditional art skillsets fail to pass those skills on, much like lost ancient traditions. Future generations might not even know what was lost.
This risk appears to have a few variables that affect its likelihood of occurrence:
- The rate at which AI models develop (the chance of detecting the issue). The faster models develop, the earlier the flaw would be identified.
- Human lifespan. For this knowledge to be lost, enough time has to pass that the individuals who could address the problem are no longer around.
- The assumption that these skills will not be considered relevant to developing the AI.
On initial review, it is unlikely this specific risk will be a problem. Here is the rationale:
Human lifespan versus the rate of risk identification: a flaw like this in the AI is likely to be identified on a timescale of months or years, well before decades pass. Art experts should outlive the identification of the issue, and humanity would be able to course-correct as needed.
Also, AI ‘hallucination’ is widely understood as a relevant issue. When trying to get AI to produce a specific output, one major limit is the skill of the user: the greater the user’s expertise, the better they can identify when the AI makes errors.
This necessitates the continued development of skilled users, both to improve the models and to edit what the prompts produce.
Another risk is that humanity loses the ability to properly appreciate art.
This risk stems from a perspective many artists hold about individuals with discerning taste. Think about it this way:
Movie A is incredibly popular with the public, but artists consider it pandering slop that does not advance the art form, otherwise ‘trash’.
Movie B goes unappreciated by the public, but artists consider it significant work.
Movie A is the greater commercial success, but industry professionals might consider Movie B the superior work.
One immediate issue with this risk is that it becomes a philosophical debate before it can even be analysed: who is the authority on what qualifies as ‘better’ or ‘good’ art? Consider the following perspectives that might be argued to determine what art is ‘better’:
- Whatever makes the most money is better (total revenue).
- Whatever makes the most money relative to its cost is better (% rate of return).
- Whatever the public scores higher on a review website is better.
- An insider group of industry experts tells you what is better, because they understand it better than the public.
- A different industry group that knows better than the above tells you what is better.
To manage the risk, the first step is hazard identification. As it stands, it is unclear what the specific hazard is in this instance, and the nature of the hazard matters: if we don’t agree on what the ‘bad’ thing is, it is extremely difficult to reasonably apply any risk control to it.
It might seem that risk controls could be applied here similarly to how ‘black swans’ are handled.
(A ‘black swan’ is jargon in disaster risk reduction for an event that has never been seen before and cannot reasonably be anticipated.)
But the problem here is that we do understand the candidate hazards; we just don’t know which one applies. That creates a sort of decision paralysis.
This leads to a fundamental issue with approaching this from a disaster risk reduction angle:
Through the lens of disaster risk reduction, this sort of impact is generally considered negligible, irrelevant, or out of scope. Generally, the focus is primarily on life preservation and secondarily on economic impact reduction.
With this risk, no one is dying and the market is just choosing what it likes. On a risk matrix drawn up by a disaster risk reduction professional, the consequences would likely score as effectively irrelevant; a rough sketch of that scoring follows.
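As an illustration (my own sketch, not an actual assessment; the scores below are assumptions), a standard 5×5 risk matrix multiplies likelihood by consequence, with the consequence scale anchored to life safety and economic loss:

```python
def risk_score(likelihood: int, consequence: int) -> int:
    """Score a risk on a standard 5x5 matrix (1 = negligible, 5 = severe)."""
    return likelihood * consequence

# Hypothetical (likelihood, consequence) scores for illustration only.
risks = {
    "pandemic": (3, 5),
    "regional flood": (4, 4),
    "loss of art appreciation": (2, 1),  # no lives lost, no clear economic loss
}

for name, (likelihood, consequence) in risks.items():
    score = risk_score(likelihood, consequence)
    band = "high" if score >= 15 else "medium" if score >= 8 else "low"
    print(f"{name:26s} -> {score:2d} ({band})")
```

Scored this way, the art-appreciation risk lands in the lowest band, which is exactly why it tends to fall out of scope for the profession.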
This is not to say the issue does not matter; surely it matters to some individuals. It just does not matter enough to warrant consideration from the people who could make a difference in its outcome.
Learn more and watch the full video here.
Vezina is the CEO of Prepared Canada Corp. and is the author of Continuity 101. He can be reached at info@prepared.ca.