How AI could wipe out creativity

AI (Artificial Intelligence) is the branch of Information Technology that develops self-adapting algorithms. Usually a program evaluates a result based on its input in a fixed way, like a mathematical formula. A Machine Learning algorithm, by contrast, can “adapt” itself and change its behaviour based on the results it produces.
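To make that distinction concrete, here is a minimal, purely illustrative sketch in Python (the price-estimation scenario and every name in it are invented for this post): a fixed formula always turns the same input into the same output, while a tiny learning routine adjusts its own parameter based on the examples it is shown.

```python
# A fixed program: the rule is written once and never changes.
def fixed_price_estimate(square_meters):
    return 2500 * square_meters  # hard-coded formula

# A (very small) "learning" routine: the rule adapts to the data it sees.
def fit_price_per_sqm(examples, learning_rate=1e-6, steps=1000):
    """examples: list of (square_meters, observed_price) pairs."""
    price_per_sqm = 0.0
    for _ in range(steps):
        for sqm, price in examples:
            error = price_per_sqm * sqm - price
            price_per_sqm -= learning_rate * error * sqm  # adjust based on the result
    return price_per_sqm

data = [(50, 130_000), (80, 200_000), (120, 310_000)]
learned = fit_price_per_sqm(data)
print(fixed_price_estimate(100))  # always the same answer
print(learned * 100)              # this answer depends on the data it was shown
```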

Today AI systems (more properly, Machine Learning algorithms) are used in a wide range of applications: from data analytics and so-called “data-driven decision” solutions, to facial recognition, or YouTube’s infamous Content ID, which raises copyright claims based on a video’s content. A fine example of “data-driven decisions” is picking the best ads for your Facebook timeline based on the sites you visited in recent days, or on something you might have said near your phone (just saying).

Allow me to give a few examples of AI applications that could potentially undermine the creative industry.

Creating Fake Content

Deepfakes are one of the most infamous applications of AI technology: they allow the creation of videos with fake content by altering an original video. One demonstration showed how a speech by a President could be altered so that he appeared to make an entirely different statement.

The technology is also finding its way into adult videos, where the faces of actors are swapped with those of celebrities. A report called “The State of Deepfakes 2019” found that this AI-driven technology, in the phenomenon known as “deepfake pornography”, targets women 100% of the time and is harmful to them (I will get back to Human Rights later in this post).

AI-Enforced Copyright

Another class of AI applications claims to solve the copyright problem on user-generated content platforms such as YouTube or Instagram.

It is mentioned, for example, in the European Union’s Copyright Directive, within the infamous Article 17 (formerly Article 13), commonly known as the “upload filters” provision.

Plenty of technology companies, Google among them, allegedly can intercept and take down any uploaded content that potentially infringes an existing copyright.

Except they cannot.

Content ID, for example, relies on a dataset of information provided by editors and producers. The system looks for similar content within uploaded videos (without further explanation of “how” it performs such lookups) and, when it finds a match, raises a copyright infringement strike. More than once this has proved imprecise, or massively wrong, as when it raised five copyright claims on a 10-hour-long video filled with white noise.
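YouTube does not disclose how those lookups work, but systems of this kind generally compare compact “fingerprints” of an upload against fingerprints of a reference catalogue. The sketch below is only a toy illustration of that idea, not Content ID’s actual algorithm; the chunk hashing and the 30% threshold are arbitrary choices made for this example.

```python
import hashlib

def fingerprint(samples, chunk=1024):
    """Toy fingerprint: hash fixed-length chunks of a track's raw samples."""
    prints = set()
    for i in range(0, len(samples) - chunk + 1, chunk):
        digest = hashlib.sha1(bytes(samples[i:i + chunk])).hexdigest()[:8]
        prints.add(digest)
    return prints

def claimed_title(upload_samples, catalogue, threshold=0.3):
    """Return the catalogue title an upload would be claimed against, if any."""
    upload_prints = fingerprint(upload_samples)
    for title, reference_prints in catalogue.items():
        overlap = len(upload_prints & reference_prints) / max(len(reference_prints), 1)
        if overlap >= threshold:
            return title  # a (possibly wrong) copyright strike would start here
    return None
```

Real systems presumably rely on perceptual fingerprints that tolerate re-encoding and noise, and it is exactly that tolerance that leaves room for false positives like the white-noise claims mentioned above.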

So how exactly could it protect the original content of a vlogger or video maker who really did create the video from scratch?

It is also worth remembering that platforms like YouTube, Facebook, Instagram or TikTok make massive profits thanks to the content generated by the creators who use them.

AI systems based on Computer Vision (the branch of AI that studies the interpretation and visual processing of images) could intercept an image that is an original work and mark it as “infringing”.

Let’s be clear: Google Images, the search engine that looks for “similar” images, returns two types of results (a sketch of the difference follows the list):

  • perfect matches, that is, images that are identical or very close to the original one
  • similar images, that is, images that could be described in a similar way (e.g. a “black cat”, a “man with a shirt”, a “red dress”, and so on)
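As a rough illustration of the difference (this is not Google’s actual pipeline, just a hypothetical sketch), the first kind of result can be found with a perceptual hash that tolerates small alterations, while the second only requires two images to share descriptive labels:

```python
def average_hash(gray_pixels):
    """Toy perceptual hash: one bit per pixel of a small grayscale thumbnail."""
    average = sum(gray_pixels) / len(gray_pixels)
    return tuple(p > average for p in gray_pixels)

def near_duplicate(hash_a, hash_b, max_differing_bits=5):
    """'Perfect match': the two thumbnails differ in only a few bits."""
    return sum(a != b for a, b in zip(hash_a, hash_b)) <= max_differing_bits

def described_alike(labels_a, labels_b, min_common=2):
    """'Similar image': the pictures merely share descriptive labels."""
    return len(set(labels_a) & set(labels_b)) >= min_common

# A photo of someone else's black cat and my own original shot may well be
# "described alike" -- which says nothing about either of them infringing the other.
print(described_alike({"black", "cat", "indoor"}, {"black", "cat", "garden"}))  # True
```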

There is nothing in today’s AI (nor will there be until a “singularity” occurs) that can explain what emotion an artwork inspires in you: sadness, pain, pleasure, joy.

Those philosophical concepts are not statistically evaluable and cannot be processed by any modern algorithm, no matter how complex it may be.

Art is the ability to create something that inspires and moves the viewer, listener or audience.

Creating something that inspires and being inspired by something is what makes us human.

And here is the third AI application that risks killing creativity.

AI-Generated Art

More and more articles are appearing online about how this or that “programmer” is creating art using AI.

Hayao Miyazaki created some of the most impressive masterpieces of animation of the past decades. He was invited to the Dwango Artificial Intelligence Laboratory in Tokyo for the demonstration of an AI-driven program that, its creators hoped, would lead to a machine able to draw like humans do.

Miyazaki’s response was more like a warning. The results did impress him, but not in the way the team expected: “I strongly feel that this is an insult to life itself,” he said.

Creating art, again, is a human capability that cannot be emulated by an algorithm of any sort.

So, here are the last two things that caught my eye lately.

What would happen if someone created every possible melody and released it to the Public Domain? This is what two musician-programmers have attempted to do with an AI application that generates as many as 300,000 melodies per second.
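Whatever machinery wraps around it, the core of that project is combinatorial enumeration: walk through every possible sequence of notes over a fixed pitch range. Here is a minimal sketch of the idea (the pitch range and melody length are illustrative, not the project’s real parameters):

```python
from itertools import product

PITCHES = range(60, 68)  # 8 MIDI pitches, C4 to G4 (illustrative)
LENGTH = 8               # notes per melody (also illustrative)

def all_melodies():
    """Yield every possible melody as a tuple of MIDI pitch numbers."""
    return product(PITCHES, repeat=LENGTH)

print(f"{len(PITCHES) ** LENGTH:,} melodies in this tiny search space")  # 16,777,216

for i, melody in enumerate(all_melodies()):
    if i == 3:
        break
    print(melody)  # e.g. (60, 60, 60, 60, 60, 60, 60, 60)
```

Released to the Public Domain, such an exhaustive dump is presumably meant to pre-empt melody copyright claims rather than to create music anyone would actually listen to.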

Big companies such as Warner Music, Universal or Sony have been working on AI-driven applications that create music based on existing music datasets they already own. For example, in 2019 Warner Music partnered with a company named Endel, which created 600 music tracks artificially.

This kind of approach could also bring a serious problem that could entirely kill, or rather monopolize, the copyright industry: if a big-tech giant could hypothetically create any form or variation of an artwork, it would also hold all the rights to the resulting piece of “art” (which wouldn’t be “art” anymore, but a statistical result; from a legal point of view, though, that distinction is irrelevant).

While WIPO (the World Intellectual Property Organization) is holding a roundtable on the “Copyright and AI” problem to determine who holds the rights to a machine-created piece, the issue is probably more concerning from a Human Rights perspective, since creativity is, and should remain until a “singularity” occurs, a human privilege.

I am not entirely against AI or Machine Learning systems, but I cannot hide that the trend among technology companies is to build ever more systems that could potentially enslave people to what I have called a “self-created god”. Should we let the technology giants create content themselves, we could let people lose their creativity and innovation, and allow those companies to wipe out any viable way of creating something inspiring.

Post Scriptum.

I mentioned the “singularity” a few times without really explaining it. When talking about AI, the “singularity” is the moment when a computer gains consciousness of its existence and becomes self-aware. Should a “singularity” occur, computers could understand the meaning of a piece of art rather than merely giving an objective representation of its features. According to some experts in AI, a “singularity” could occur within the next 10 to 20 years.

About the Author

Sebastian Zdrojewski

Founder (He/Him)

He worked for 25 years in the IT industry, facing cyber security, privacy and data protection problems for businesses. In 2017 he founded Rights Chain, a project aiming to provide resources and tools for copyright and intellectual property protection for Content Creators, Artists and Businesses.