Fear of Artificial Intelligence and Its Impact on Creativity
What are a writer's responsibilities regarding the use of AI?
Something Is Missing from the AI Discussion…
I recently attended the Clemens Lecture at St. John’s University, where Simon Johnson, a professor at the MIT Sloan School of Management, gave a presentation titled Technology and Inequality in the Age of AI. He used economic data from previous technology revolutions to anticipate different scenarios of how AI could disrupt national and world economies. A question-and-answer session followed.
A central point in the lecture was that today’s Narrow AI (NAI) technology could reduce or replace entry-level white-collar jobs, and even some higher-level ones. But what about my work? I wondered as I walked back to the guest house after the event. What about writers, designers, musicians? How might NAI disrupt creativity?
Creativity and NAI
In a recent post on values, I defined creativity as observing the world and cultivating new ideas and things. People use their creativity to bring comfort, health, safety, and joy to society through technology and the arts. They also use their creativity to subjugate, mislead, and destroy. Creativity is an essential characteristic of a fulfilling life and a rich culture. The modern world with its amazing technologies exists because past and present generations combined knowledge and material in new ways. Today, knowledge technology allows people to use widely available tools to create at an accelerating pace—a pace some find exhilarating, and some find bewildering.
Knowledge is essential to reduce fear and bewilderment. The critical piece of knowledge to have about NAI is that it is software. It is lines of computer code designed to mimic a human response to a question or request. NAI code runs on internet-accessible server farms that consume large amounts of electric power. The program combs through large data sets¹, drawn from sources such as books, newspapers, websites, academic journals, financial reports, and personal data (in the case of corporations or governments), and assembles an image or text response based on patterns gleaned from the data and guidelines given by the user.
NAI does not observe the world or cultivate new ideas; it is not creative. It may appear creative by combining or relating things the user may not have considered in response to a query or request. Its advantage over search engines, which return a list of websites related to a word or question, is that NAI formulates a single response tailored to the user's requirements, drawing on data and patterns from multiple sources.
Creativity, Tools, and Decisions
As a young research assistant at Woods Hole Oceanographic Institution, I built a device to extract freon compounds from seawater, based on a concept developed by the scientist for whom I worked. At first, I had to operate the machine by hand, turning valves in the correct order with the correct timing to successfully collect data from samples. Later, I added electronic controls so I could push buttons to turn the valves. Once I'd refined the process, I added a programmable logic controller to the system. I wrote a program to control the sequence and timing of the valves' operation required to process a sample. I could now execute the entire process by hitting one button.
The controller did not make decisions about when to open and close valves. I made those decisions by virtue of writing the program. The controller simply executed the program to operate the valves in the correct sequence and report the results of the sample.
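The kind of sequencing program I'm describing can be sketched in a few lines of code. This is a hypothetical reconstruction, not the original controller program; the valve names and timings are invented for illustration. The point is visible in the code itself: every choice lives in the table the programmer wrote, and the machine only executes it.

```python
import time

# Hypothetical valve sequence for one sample run: (valve, action, seconds
# from start). Every entry is a decision made by the programmer in advance;
# the controller merely carries the schedule out.
SEQUENCE = [
    ("inlet_valve", "open", 0.0),   # admit the seawater sample
    ("purge_valve", "open", 2.0),   # strip dissolved gases from the sample
    ("purge_valve", "close", 5.0),
    ("inlet_valve", "close", 5.5),
    ("trap_valve", "open", 6.0),    # route stripped gases to the trap
    ("trap_valve", "close", 9.0),
]

def run_sample(actuate, clock=time.monotonic, sleep=time.sleep):
    """Execute the valve schedule once, pressing 'one button'.

    `actuate(valve, action)` is whatever drives the hardware; `clock` and
    `sleep` are injectable so the sequence can be tested without waiting.
    Returns a log of the actions performed, in order.
    """
    start = clock()
    log = []
    for valve, action, at in SEQUENCE:
        delay = at - (clock() - start)
        if delay > 0:
            sleep(delay)   # wait until this step's scheduled time
        actuate(valve, action)
        log.append((valve, action))
    return log
```

In use, `actuate` would drive relays; in a test it can simply record calls. Either way, the program never decides anything at run time: it replays decisions that were already made when `SEQUENCE` was written.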
NAI, like the controller used to execute the freon extraction process, does not make decisions. In fact, I’ll let ChatGPT speak for itself, in response to the question, ‘Does ChatGPT make decisions?’
No, ChatGPT doesn't make decisions. It processes and generates responses based on patterns in the data it was trained on but doesn't have the ability to make choices or have opinions. It provides information, suggestions, or ideas based on what you ask, but the ultimate decisions remain with the user.
A succinct but incomplete answer².
In the case of ChatGPT, the programmers at OpenAI made the decisions, embedded in computer code, as to the form and content of the output for a given input. ChatGPT 'learns' through training runs designed by OpenAI's engineers, who adjust the system based on comparing its responses to a given query against different data sets. And like the extraction machine controller, it automates and/or accelerates the process of acquiring data and information, freeing the user to do something else. In economic terms, it increases the user's productivity.
The Line Between Creativity and NAI: Whose Work Is It?
In the case of the freon extraction machine, there is no doubt as to who gets the credit for the results. The scientist developed the concept for the machine, and I built and refined the machine using automation technology. Together we produced oceanographic freon data. The scientist analyzed the data. The results were his work, produced with my assistance.
In the case of AI-generated images, creative credit is more complicated. For example, I asked ChatGPT to generate an illustration of a cycling criterium. Below is the ChatGPT result, alongside a photo from a real criterium. ChatGPT produced a decent, although not artful, result. I wrote the prompt that directed ChatGPT to generate the image. ChatGPT used its data to determine what a criterium is and produced the image. I did not produce the illustration, as a graphic designer would have in the recent past. However, without my prompt, the image would not exist.
Who or what should get credit? Me? ChatGPT? Me and ChatGPT? At present, AI-generated images are not eligible for copyright protection, since they were not created by a person. However, I own the criterium illustration and could sell it or use it in advertising.
Yikes! Imagine the impact NAI is having on advertising and graphic design firms, and their clients!
The Writer’s Responsibility
Short-story master George Saunders has said that good stories are the result of ‘micro-choices’ made by the writer. The placement of each word, comma, and period is a micro-decision, and in the case of the best writers, those decisions are made with full awareness—by the writer.
As mentioned earlier, NAI does not make decisions—it produces text based on programmers’ decisions embedded in the NAI code. What if a writer produces a work of creative writing in which NAI produced a portion of the text? Should the writer be denied copyright protection since part of the work was not produced by a person? How could such a denial be enforced? Most important, what responsibility does a writer have to disclose any use of NAI in creating the work?
Right now, there are few guidelines regarding transparency and ethical use of AI in creative work. In the absence of widely accepted guidelines, I believe writers must take individual responsibility for disclosing their use of AI technology in their process. Writers must disclose if, how, and where in their work NAI was used. Further, writers must advocate for standardized guidelines to protect their work from copycats and scammers, who disclose nothing. If we writers don’t determine the rules, someone else will determine them for us.
Decision making is the foundation of the creative process. It is what makes each writer's work unique. Machine-generated writing is not unique; it is the result of programming decisions made by computer code developers. Readers should know whether the person in the byline made all the decisions. To not disclose the use of AI is to lie to the reader³.
To my fellow writers, I believe it comes down to this: do you want the programmers at OpenAI to make the micro-decisions in your work, or do you want to make them yourself?
I look forward to seeing your comments.
¹ How AI companies acquire large data sets is a copyright issue for artists and a personal data issue for everyone. Publishers and media companies have sued AI companies for using copyrighted material without permission to build the large language models used by NAI systems to mimic different styles of human response.
² The first rule of using any AI technology is to critically evaluate the output to determine the validity of the response. It's worthwhile to note that the role of OpenAI's designers and programmers in determining the form and content of ChatGPT's output is not mentioned in the response to the query about ChatGPT's ability to make decisions.
³ AI was not used to produce or edit the text in this post, or any of my previous posts. I do take advice from an excellent editor, who is a good human and wishes to remain anonymous.