Last week, OpenAI released the ChatGPT research preview. Since its release, people worldwide have been posting on social media with fascination, excitement, and fear about the potential impact of such a tool.
If you have not heard of ChatGPT, it is a large language model: a machine-learning model trained to process and generate text. These systems are typically trained on large amounts of text data, including books, articles, and other written materials, to learn the patterns and structures of human language. Training a large language model allows an AI system to produce natural-sounding, readable text similar to human writing.
The company behind ChatGPT is OpenAI, a research institute and technology company focused on developing AI technologies and advancing the field of artificial intelligence. One of its projects is training large language models, such as GPT-3 (Generative Pre-trained Transformer 3). Today GPT-3 is one of the largest and most powerful language models. These models can be used for various applications, such as language translation, summarization, question answering, and more.
When considering the potential impact of such technology, one can look back to the Industrial age for some examples. During the Industrial Revolution, machines and technology began to augment and, in some areas, replace human power. These changes allowed goods to be produced faster, reducing costs and, in many cases, reducing or removing the need for humans to complete those tasks. As a result, jobs were lost, but significantly more new jobs emerged.
In today’s Digital age, Artificial Intelligence will augment and, in some cases, replace human intelligence-based tasks. As a result, people will need to accept and embrace change. These changes will come fast and will be very disruptive. As we saw in the Industrial age, jobs will be lost, but new industries and markets will emerge from the disruption.
Embracing AI-based technologies to generate human intelligence-based content raises some critical questions about transparency and accountability. When consuming information and making decisions today, we often utilize multiple sources such as news media, friends, family, Internet search tools, and other online content. As we digest and process information, we tend to weight it based on our trust in and experience with those sources.
Thanks to AI-based technologies, we can now create pictures, write papers, write application code, draft articles and social media posts, and generate videos and audio recordings simply by writing a few sentences. However, while these capabilities help accelerate the creation of knowledge-based content, there are risks. For example, AI-generated results could deceive or mislead readers because of bias, data quality issues, malicious intent, lack of diverse thought, or a simple failure to disclose the source of the content. Therefore, while the results may be excellent, we should always weigh the generated information cautiously, as AI output reflects the data available at the time and how the model was trained. This is no different from receiving information from a friend: treat it as one more source to be weighed against other viewpoints and sources.
Given that, we should know who or what created the information we are consuming. If people start representing AI-generated content as their own, it will impact diverse thinking, break trust in others, and is unethical. In addition, if you read the terms of service on AI-based platforms, there are rules against passing off AI-generated content as your own. For example, the following line is from OpenAI's terms of service.
"represent that output from the Services was human-generated when it is not" (https://openai.com/api/policies/terms/)
One immediate action that may help reduce risk is utilizing some type of disclosure system. Initially, the disclosure may need to be done manually, but over time it could be handled in an automated way. Disclosures will be critical for us to know and understand how to interpret, analyze, and respond to the information we consume. For example, here are three high-level disclosures that could accompany written content so the person consuming it knows how best to handle the information.
Disclosure: The following content was generated entirely by an AI-based system based on specific requests asked of the AI system.
Disclosure: The following content was generated by me with the assistance of an AI-based system to augment the effort.
Disclosure: The following content was generated entirely by me without assistance from an AI-based system.
As you can see, these are high-level disclosures designed to convey whether the content was created entirely by a person, by a person with assistance from AI, or entirely by an AI. This approach may be a good solution until we come up with more ideas on how best to blend in tools like the ChatGPT-based AI platform.
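To illustrate how such a disclosure could eventually be automated, here is a minimal sketch of the three levels as structured metadata that a publishing tool might prepend to content. All names here are hypothetical and not part of any existing standard:

```python
from enum import Enum


class AIDisclosure(Enum):
    """Hypothetical disclosure levels mirroring the three examples above."""
    AI_GENERATED = ("The following content was generated entirely by an "
                    "AI-based system based on specific requests asked of it.")
    AI_ASSISTED = ("The following content was generated by me with the "
                   "assistance of an AI-based system to augment the effort.")
    HUMAN_ONLY = ("The following content was generated entirely by me "
                  "without assistance from an AI-based system.")


def with_disclosure(content: str, level: AIDisclosure) -> str:
    """Prepend a human-readable disclosure line to a piece of content."""
    return f"Disclosure: {level.value}\n\n{content}"


# Example: tag an article draft as AI-assisted before publishing.
print(with_disclosure("Article body goes here...", AIDisclosure.AI_ASSISTED))
```

A real system would likely carry this as machine-readable metadata (for example, in HTML meta tags or a feed schema) rather than inline text, but the same three-level model applies.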
My disclosures regarding this blog post
Image Disclosure: The blog post image was generated by an AI-based system called "Text to Image" from Canva based on two requests. The results were then merged using Photoshop.
Content Disclosure: The preceding written content was generated entirely by me without assistance from an AI-based system.