Updated 1.22.23
We affirm to our readers and contributors that The Fabulist Magazine is, first and foremost, a venue for connections and encounters with unadulterated human creative works.
• Unless otherwise specified in any given call for submissions, The Fabulist is not open to works created with AI processes of any sort, including the generation of prompts, titles, names, outlines, dialogue, plot elements, descriptive passages, etc.
• We have updated our contractual and submissions materials to reflect this prohibition as clearly as possible.
• This policy is retroactive: we will remove from our archives any works found to include undisclosed AI adulteration. That said, because we had no formal policy prior to this date, we welcome the opportunity to work with previous contributors to update such works or to consider new submissions.
• Moving forward, willful violators of this policy will be permanently banned from our pages.
• No, running a spellchecker or grammar tool on your finished text is not AI.
Background
Text and image generation by AI systems has achieved remarkable verisimilitude to actual writing and art created by human beings.
We acknowledge, and are interested in, the intentional and fully disclosed use of AI as a creative tool. (We also recognize that AI technology has legitimate uses outside the arts in research, education, commerce, government, logistics, etc.)
Despite this, we are deeply concerned by the potential for AI’s abuse as an inauthentic, appropriative mechanism that can enable an extraordinary range of exploitative, deceitful and fraudulent behavior in the arts, academia, and mass media.
We assert that algorithmic capitalism will certainly enable the widespread and casual adoption of this technology, thereby greatly increasing the likelihood of abuse.
Safeguards against, and costs of, undisclosed AI text adulteration
The goal of The Fabulist’s AI policy is to ensure to the best of our ability that the media spaces and artifacts we produce and publish represent uniquely, specifically, ethically and honestly human artistic expression; and that any use of AI in this effort will be transparent, disclosed, and described as part of the larger human creative process.
• To enact this policy, we’ve updated our contract to include a no-AI section in the “originality and ownership” clause.
• Our Submittable form now includes a checkbox with which the submitter affirms that the work is their own creative effort, unadulterated by machine-generated text, and that they understand we will delete the work and ban them from future submissions if we discover it does in fact incorporate AI text outputs.
We do expect there will be some folks out there who will consider our AI policy a challenge: Can they fool us, and all of you as well? It will certainly get easier for them to do so, as AIs become more sophisticated — and we’re not ruling out the use of AI-detection software to guard against this.
Please note that some types of AI-detection software come with a financial cost of a few cents per batch of text. If at some point we feel it is necessary to run AI detection on every submission we receive, we may have to consider passing that cost along to submitters in the form of a small surcharge, hopefully less than a dollar.
We hate this idea. It is our policy never to charge reading fees. The cost of AI detection would be a passthrough, not a reading fee, and we would do everything we can to avoid it, but we can’t rule it out in the future.
For now, we just don’t know how it’s going to play out; for example, even given the use of good AI-detection software, we would be extremely concerned about false positives.
Rationale: Why no AI?
Creative works are an essential means of connection between human hearts and minds. They open doors between the self and the other, deepen empathy, and enable the survival and flourishing, on myriad complex levels, of both the individual and the community.
At its best, art’s connection and expression is authentically human and has integrity: that is, it contains and embodies a sincere integration of creative vision or inspiration, training, practice, and technical craft; it is genuine and experiential, and it has intention. This is true both of the act and process of making art and of the experience of witnessing it, as a viewer, a reader, or a listener (in the case of audio works).
Can AI be used with that kind of intention, as a creative tool? Absolutely. What matters, then, is that its use is both invited and disclosed. The viewer, the reader, the audience has to understand that they are witnessing something created, in varying degrees, with AI mechanisms; they have to know this prior to the experience, and choose to continue with it.
A picture hangs on a wall in a gallery or museum. A placard placed alongside it by the curators identifies the title of the work, the creator, and the medium: the nature of the work, its origin and process, and its conceptual framing are all disclosed, on display, with the art itself.
There is a place for disclosed use of AI in human creativity. We’re all still learning what that is. In the meantime, we at The Fabulist are drawing a hard line in opposition to any deception and lack of transparency. It is an abuse of trust, and it lacks integrity. In mass media and politics, it’s actually dangerous. In academia, it’s cheating. In the arts, it’s degrading.
We consider the analogy of performance-enhancing drugs in athletics. It cannot be said that Lance Armstrong’s Tour de France achievements are truly his, nor that the records set by the “steroid sluggers” of baseball (Jose Canseco, Mark McGwire, Sammy Sosa, Barry Bonds and the like) are truly theirs. They supplemented their performance to give themselves an advantage they would not normally have had. Ultimately it was not about their athleticism, nor even their innate ability as athletes. They knew it was dishonest; after all, they did it in secret. They did not disclose, because what they did was cheat.
Sadly, our peers in the literary- and genre-publishing worlds are already seeing the undisclosed use of AI in the production of fiction, nonfiction and imagery that purport to be the product of the human hand and heart, but are in fact adulterated in varying degrees by mechanical text and image generation. We consider this lack of disclosure to be fraudulent.
For now, this does not appear to be a widespread phenomenon, and AI works are at this time fairly easy to spot. Unfortunately, we expect this sort of fraud to become more widespread, and more sophisticated, in the months and years to come, and we expect that it will over time degrade human experience in four ways:
1) through the uncompensated and unethical appropriation of the styles and characteristics of authors and artists, by which AI systems “learn” to generate text and images,
2) through the normalization of inauthentic creative expression, in which the human heart and hand is self-disintermediated,
3) through the proliferation of algorithmically generated media objects intended solely to mine audiences for profit, and,
4) through the propagation of exploitative and manipulative simulacra and deepfakes.
Potential abuses of AI
Let’s take a moment to look beyond the thoughtful efforts by artists to use AI with intention and transparency, and consider the downsides.
Capitalism already loves AI. Why bother hiring poets, writers, artists, when you can just enter your parameters and push the “go” button?
Malign actors love AI. Moving forward, we expect AI media generation to become one of the default tools for the misrepresentation and degradation of human connections and experience, with great consequence. It’s a gift to tricksters, exploiters, demagogues, con artists and manipulators, who will find it an extraordinary vehicle for the generation of deepfakes and simulacra that produce ever-greater degrees of reality distortion.
This is deeply concerning, given the deception and distortion that is already inherent in our mediated society. In the continuum from advertising puffery and political propaganda to “alternative facts” and QAnon, AI is the diving board at the end of the runway.
The ease with which AI generates realistic media objects makes deception more seamless, and easier to swallow. The normalization of AI distortions creates a slippery slope even for casual uses of the technology by individual actors. How many steps is it from a little white lie (an AI-generated outline or essential paragraph for an essay, story or personal statement) down into an abyss of delirious deceptions? The web of false corroborations George Santos could weave is breathtaking.
Beyond the arts and media, and AI’s potential benefits notwithstanding, the inevitability of its abuse, misuse and malfunction in government activities such as intelligence-gathering or law enforcement, for example, is chilling.
In all these cases, we hope that the worst abuses will remain rare. But we should most certainly expect the worst.
What about the upsides? The intentional and transparent use of AI
Setting aside the ultimate destination of that road paved with good intentions, as well as AI’s potential value to a variety of necessary civic functions, let’s look here at how artists use AI as a tool, with sophistication and intention.
At The Fabulist, for example, we published the works of Hawaii-based artist Michael Powers, who fed 600 modern-art portraits into a neural network, and then generated a spectrum of images that are curious and fascinating “reflections” of the human face. He designed the digital system himself, and is very clear in describing what it does, and how it works.
Here in San Francisco, we recently attended a talk on AI and the arts, staged at the Arion Press and featuring the avant-garde musician and artist Laurie Anderson, who spent time as an artist-in-residence at the Australian Institute for Machine Learning. She was joined in the conversation by former software engineer Helena Sarin, who uses “generative adversarial networks” to create intriguing images as well as pottery.
Over the course of the talk, Anderson described, and demonstrated, an AI tool developed for her by AIML scientists that outputs lyrics, based on prompts, in her style, or the style of her late husband (the iconic musician Lou Reed), or a combination of both of them.
“AI as seance,” the moderator Davia Nelson said, to which Anderson responded that she doesn’t really think she’s collaborating with her dead husband.
This sense of reality, and of differentiation between the real and the simulacrum, is important. Anderson is, frankly, lucky to be able to make that distinction.
It cannot be said that this capacity for differentiating the true and the false is a universal human quality.
Fabulist Magazine contributor Laird Harrison explores this tension in his poignant, quasi-ghost story “Abbreviation,” in which a woman begins receiving text messages apparently sent by her dead husband.
It is likewise tempting to look at AI as a form of divination; but Philip K. Dick’s famous use of the I Ching to help direct his writing of The Man in the High Castle is not at all the same as getting plot outlines and prompts from ChatGPT. The I Ching, and likewise the western tarot, are complex, historic systems of archetype coming out of specific cultural traditions. They are entirely human in origin, deeply obscure, and their value is derived from human interpretation. An AI text generator removes the opportunity for and necessity of interpretation altogether.
AI lacks motive, feeling, originality, mythic depth, or original analytical insight. It is only as good as the purpose to which it is put, and the skill of the hand that wields it.
It is at best merely fascinating, useful, and exploitable, for good or ill — like all tools.
By the Human Heart and Hand
Art of any sort — written, drawn, painted, sculpted, composed, performed by the human heart, hand, voice and body — is in some essential way a sacred thing. Even at its most minor degree of accomplishment, it is at least an effort to express, to connect, and this is to be recognized as fundamentally human.
Regardless of any alarmism on our part: in the AI era, as text and image generation become increasingly sophisticated and “content” production becomes increasingly generic, algorithmic and marketing-driven (as if it could get any worse!), there is a clear need for publishers that create spaces unambiguously committed to creative works unadulterated by AI processes.
That space is inside all of us, and a surround that we create together: Here we are.