Opinion: I asked ChatGPT to write me a symphony, a letter to an ex and more

File photo/Jovelle Tamayo/The New York Times / Microsoft headquarters in Redmond, Wash., is shown on Dec. 7, 2022. On Monday, Jan. 23, 2023, Microsoft announced that it is making a “multiyear, multibillion-dollar” investment in OpenAI, the San Francisco artificial intelligence lab behind the experimental online chatbot ChatGPT.

I mean, what was I expecting from a chatbot? A formula for world peace? Clues on how to mend a broken heart? A cheesy joke?

Sure, all that, why not?

I have absolutely no urge to use the first iteration of anything. But so many AI stories have swirled around the media sphere, including how AI is going to replace journalists, that it seemed irresponsible not to plunge in.

The Atlantic predicted that in the next five years, AI will reduce employment opportunities for college-educated workers. (Actually, ChatGPT predicted that outcome after the Atlantic prompted it to address the issue.)

The New York Times recently had a story about how chatbots like ChatGPT are writing entire papers for undergrads, forcing universities to change how they assign work. So far, The Times reported, more than 6,000 teachers from institutions including Harvard, Yale and the University of Rhode Island have signed up to use GPTZero, a program developed by a Princeton University senior to detect artificial-intelligence-generated text.

A day later, my friend Drex forwarded a video about the latest evolution of Atlas, the Boston Dynamics humanoid robot that has captivated viewers with its uncanny dance and parkour moves. Atlas can now run, jump, grab and throw.

But AI can certainly run amok. (See: Tesla autonomous car crashes.)

CNET, the popular tech website, had to amend dozens of its news stories after admitting it was using bots to write them. The bots were error-prone, miscalculating basic things like compound interest. Futurism, the website that discovered the ruse, was less charitable: "CNET is now letting an AI write articles for its site. The problem? It's kind of a moron." CNET claimed the bots were an experiment.

ChatGPT was straightforward about its limitations. It could tell me what a symphony is, but it could not compose one. It was also a little oversteppy. When I asked it to compose a letter to someone who broke my heart, it did, but it warned: "It's also important to consider the person who broke your heart's feelings and whether contacting them is the best course of action for you." Who asked you?

I also indulged in nonsense.

"How much does Czechosolvakia weigh?" I wondered. ("As it is a former country and not a physical object, it does not have a weight.")

"To be or not to be?" (Hamlet, said ChatGPT, "is weighing the pros and cons of life, and considering whether it would be better to end his life or continue living and dealing with his troubles.")

And -- how could I not? -- I asked if it knew any dirty jokes.

"Some types of jokes, including dirty jokes, can be considered offensive or disrespectful to certain individuals or groups and it's important to be mindful of that before sharing any type of joke." How uptight. It did, however, offer a bunch of Dad jokes: "Why was the math book sad? Because it had so many problems." "Why was the computer cold? Because it left all its windows open."

My final request to ChatGPT was to see if it could edit the opening lines of three recent columns to make them better.

I am happy to report that in my entirely subjective, all-too-human opinion, it made no edits that improved my copy and, in fact, made it clunkier.

You ain't putting me out of a job yet, robot.

The Los Angeles Times
