May 10, 2024: I will not truly be equal until…
Bobby Allyn, writing for NPR in “ChatGPT maker OpenAI exploring how to ‘responsibly’ make AI erotica,” dares to “delve” into a question that I ask all the time: As an adult over 18 years old who is legally responsible for my actions, why can’t I have an uncensored AI tool?
I sort of get why this stupid predicament exists. And the answer isn’t pretty. My explanation is that the media and the Authors Guild types hate AI. They view it as a threat. Perhaps they should, but I don’t have a dog in that fight.
As a former soldier, I don’t share the worldview of, say, an author lucky enough to make a living from writing. An AI tool could study imagery and spot the enemy before they shoot me. A weapon-equipped AI tool could be the first soldier to enter an enemy bunker or building. These are just a few of many examples, but why would I protest losing my job to a robot that absorbs the blast instead of my brain getting turned to mush by an IED? A lot of cops and their families probably feel the same way.
I suspect it’s lost on many folks, but for me, at a deeply personal level, the NPR story touches on one of my pet peeves. My first novel, AI Machinations: Tangled Webs and Typed Words, isn’t the book I would have authored had the circumstances been different. By that I mean it was the only type of book that I COULD write because of ChatGPT’s hideous guardrails.
Believe it or not, I was getting traumatized by those ridiculous guardrails at various junctures. To understand that, you need to know that I try to obey and respect rules. The military ingrained that into me. So, imagine that I’m cruising along, prompting ChatGPT with the scenes that I see in my head, and wham, I get hit with a content violation because my protagonist gave her husband an innocent kiss. “WTF just happened?” I was left asking myself. On the other hand, I somehow managed to not get in trouble for the murder that went down later in another chapter.
I’m not just whining. Let’s talk about how these ridiculous guardrails harm someone like me, who is reliant on an AI tool to create. Journalist Ryan Heath captures the heart of my bitching in another article: “The AI moment is here” and “it’s arriving from the bottom up” in workplaces, LinkedIn CEO Ryan Roslansky told Axios in an interview.
Pay attention to “from the bottom up,” because it’s a phrase that I use a lot. For the true “democratization” of writing to occur, I must have access to AI tools that can produce NSFW AI-generated material. How do I weave the true story of an uncle, who later died in a mental institution after ingesting a bottle of rubbing alcohol, sexually molesting me as a child into a novel with ChatGPT’s guardrails? Oh, and that’s only a minute portion of the things that I desire to share through characters and plots.
While creating my book, I experimented with nearly all the AI tools. GPT4ALL was the most promising, but also one of the least capable. It’s since gotten better, but when it tried to generate a sentence last September, my Mac Studio with 32 gigs of memory was brought to its knees. At every turn, I hit an obstacle. Every tool either had tight guardrails or, if uncensored, was incompetent.
This needs to change. On January 9, 2022, a Dodge Challenger Hellcat with over 700 HP ran a red light in Las Vegas and killed nine innocent people. Every other month, some whacko in America with an assault rifle or a pistol with a high-capacity magazine murders anywhere from 5 to 23 people in a hail of bullets. When these tragedies happen, how does the media respond? Is the response similar to how they catastrophize what “could happen” with AI tools? Is the media screaming at Stellantis to stop selling high-horsepower Dodge Challengers that can be used to kill people?
I don’t think I’m some kind of weirdo for seeing a problem here. But make no mistake, there is one. And the malfunction is this: I can’t write like a Hollywood writer with an MFA if the tool that’s supposed to equalize me can’t produce NSFW content. Guardrails are what’s keeping these smug folks safe from people like me at the bottom. The barrier isn’t a lack of equally great ideas for characters and plots born of my life experiences; the impediment is the restrictions placed on the content that I’m allowed to create.
Don’t we already have laws in place that can deal with the theorized abuses the media and other “bad actors” are dredging up to discredit and hamstring AI?
That needs to change. True democratization of writing must be unleashed. I’m willing to give the command to OpenAI and the others: “Unleash the hounds.”
May 10, 2024: /Imagine an AI tool that can do this.
I bookmarked this Washington Post story, “The children who remember their past lives,” by Caitlin Gibson because I believe it could provide a fascinating character or piece of plot for a future book. Bookmarking articles like this is something that I do daily. And the number of stored articles is reaching into the hundreds. Before long, it may climb to over 1,000.
However, there’s a lot more to it than just that. What if I could feed the links of those 1,000 articles into an AI tool like ChatGPT to brainstorm the plot and characters for my next book? Plus, what if the AI tool had access to all the world’s books and all the knowledge on the Internet to know for certain that the story we are going to create has never been told?
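The first half of that wish is already within reach, at least crudely. Below is a minimal sketch, not anything OpenAI or the Washington Post ships: it assumes the bookmarked URLs sit in a plain text file called bookmarks.txt (my name, purely hypothetical), that the OpenAI Python SDK is installed with an API key set, and that the pages are publicly fetchable, which paywalls often prevent. The second half, checking against all the world’s books that a story has never been told, is well beyond a sketch like this.

```python
# brainstorm.py -- a rough sketch of feeding bookmarked articles to a model
# for character and plot brainstorming. Assumptions (mine): one URL per line
# in bookmarks.txt, OPENAI_API_KEY set in the environment, articles reachable
# without a paywall. The raw HTML would really need cleaning; this skips that.

import requests
from openai import OpenAI

client = OpenAI()


def load_urls(path="bookmarks.txt"):
    """Read one bookmarked URL per line, skipping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


def fetch_snippet(url, max_chars=2000):
    """Grab the raw page text and keep a short excerpt to respect the model's context window."""
    try:
        return requests.get(url, timeout=10).text[:max_chars]
    except requests.RequestException:
        return ""  # quietly skip articles that won't load


def brainstorm(urls):
    """Ask the model for characters and plot threads grounded in the bookmarked articles."""
    # A handful at a time, not all 1,000 -- context windows are finite.
    excerpts = "\n\n".join(fetch_snippet(u) for u in urls[:20])
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a brainstorming partner for a novelist."},
            {"role": "user", "content": "Suggest characters and plot threads inspired by these articles:\n\n" + excerpts},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(brainstorm(load_urls()))
```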
Don’t just imagine this. Build the technology.