Generative AI is being studied at Rights Chain with respect to its adoption and use. Rights Chain services do not use, and are not expected to use in the future, tools for building the learning models that power this type of system.
The following text is taken from Wikipedia's definition of Generative AI (the original link is available at the bottom of the page).
It should be noted that the term 'Artificial Intelligence' is used loosely here, and only to stay consistent with how the mainstream media communicate with the general public; the correct term would be 'Machine Learning', a sub-field of AI that is nonetheless quite separate from the concept of 'intelligence' (author's note).
Generative AI is a type of artificial intelligence (AI) system that can generate text, images or other media in response to requests. Generative AI models learn the patterns and structure of incoming training data and then generate new data with similar characteristics.
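As a purely illustrative sketch of this "learn the patterns, then generate similar data" idea, the toy program below trains a character-level Markov chain: it records which character tends to follow each two-character context in a training text, then samples new text with the same local statistics. This is not how systems like GPT-4 or Stable Diffusion work internally (those use deep neural networks); the corpus string and function names are invented for the example.

    import random
    from collections import defaultdict

    def train(text, order=2):
        # Learn the data's patterns: for each context of `order` characters,
        # record which character followed it in the training text.
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def generate(model, order=2, length=80):
        # Generate new data with similar characteristics by repeatedly
        # sampling a plausible next character given the current context.
        context = random.choice(list(model.keys()))
        output = context
        for _ in range(length):
            followers = model.get(context)
            if not followers:  # no observed continuation: restart
                context = random.choice(list(model.keys()))
                followers = model[context]
            output += random.choice(followers)
            context = output[-order:]
        return output

    corpus = "generative models learn the patterns of their training data " * 20
    print(generate(train(corpus)))

The same train-then-sample scheme underlies the far larger systems named below; the difference lies in the scale of the training data and the complexity of the model.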
Notable generative AI systems include ChatGPT (and its Bing Chat variant), a chatbot built by OpenAI on its GPT-3 and GPT-4 large language models, and Bard, a chatbot built by Google on its LaMDA model. Other generative AI models include AI art systems such as Stable Diffusion, Midjourney, Niji Journey and DALL-E (for generating images from prompts).
Generative AI has potential applications in a wide range of fields, including art, writing, software development, product design, healthcare, finance, games, marketing and fashion. Investment in generative AI surged in the early 2020s, with large companies such as Microsoft, Google and Baidu and numerous smaller firms developing generative AI models. However, there are also concerns about the potential misuse of generative AI, such as in the creation of fake news or deepfakes, which can be used to deceive or manipulate people.
Issues related to generative AI include ethical, moral and environmental aspects.
Among the ethical problems with adopting generative models is the way such models are built: by collecting (also called 'scraping') creative content from the Internet and social media without the knowledge or consent of the authors of those works. The Stable Diffusion model is estimated to have been trained on more than 2 billion images.
Moral issues include the use of generative AI to produce counterfeit content, such as deepfakes, for spreading fake news or for abuses such as revenge porn.
Then there are the environmental issues, since both the training phase and the generation of content require dedicated hardware and are extremely energy-intensive. The initial Stable Diffusion model is estimated to have required 150,000 Nvidia A100 GPU hours for training alone.
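As a rough back-of-the-envelope check of what such a figure implies, assuming an average draw of about 400 W per A100 board (an assumption based on the card's rated power, not a figure from the source, and one that ignores cooling and other datacenter overhead):

    # Hypothetical energy estimate for the reported training effort.
    # The 400 W per-GPU draw is an assumption, not a source figure.
    gpu_hours = 150_000        # reported A100 GPU hours for training
    watts_per_gpu = 400        # assumed average power draw per A100

    energy_kwh = gpu_hours * watts_per_gpu / 1000
    print(f"~{energy_kwh:,.0f} kWh")   # ~60,000 kWh, i.e. about 60 MWh

Even under this conservative assumption, training a single model of this kind consumes energy on the order of tens of megawatt-hours.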
Last, but not least, is the question of whether works generated with Generative AI are covered by copyright law.
Definition from Wikipedia: https://en.wikipedia.org/wiki/Generative_artificial_intelligence
Stable Diffusion: https://en.wikipedia.org/wiki/Stable_Diffusion