Defoe and the Chatbot: The Emotional Avoidance of Predictive Prose

Katherine Ellison

Abstract: This article explores the encounter between AI large language models, like ChatGPT, and fiction, which is a massive large language model developed over centuries, across cultures, and with intertextual and contextual references that reach across time, geography, and genre. Both are hypothetical frameworks that rely upon predictive prose, or the “what if.” Both the algorithm and the author imagine what would come next given the situation and the information available. Fictional depictions of artificial intelligence, automata, and chatbots, some orated or published long before the technologies were possible, have shaped our understanding of human-AI interaction, and current AI-human interaction, in turn, is simulated in part based on AI’s understanding of human dialogue as represented in the fictional texts it mines for data. Fiction and the literary language of dialogue, then, are influential in how AI communicates. Testing AI’s ability to recognize and analyze fiction brings to light the complexity of literature. Daniel Defoe’s prose and use of the subjunctive mood in moments of dialogue provide a revealing test case for the limitations of AI’s analytical abilities. Defoe often relies upon hypothetical constructions, like mandative subjunctives (“I wish that”), modal auxiliaries (“would,” “could”), and conditionals (“if this then that”), when characters are in emotional situations. Inspired by the chatbot-user dialogue that takes place in ChatGPT, and by its struggle to articulate the meaning of key literary scenes in which characters shift into the subjunctive mood, this article finds that Defoe’s use of subjunctive constructions interrupts the emotional connection of the speakers, preventing them from reaching an empathetic understanding of the other. The hypothetical, then, in literary dialogue and also in AI-human “chat,” creates emotional disruption and resistance to empathy. The article concludes by asking whether AI’s struggles with fiction may lead to other realizations about the sophistication of literary language and narrative.

Keywords: Defoe, Daniel; Predictive Prose; Artificial Intelligence; Dialogue; ChatGPT; Technology; Fiction; Hypothetical

The 2023 Defoe Society conference Presidential Roundtable asked us to consider “1719-2019, 2019 – ?: Predicting the Future of Defoe Studies.” Though we live in a digital age of data analytics, in which our consumer behaviors are tracked through the machines we use, the cell phone apps we access, and the surveillance cameras we pass beneath, creating a massive “Big Data” set, prediction does not necessarily require computers. It does not even require numbers. But it does require language and knowledge of how narratives work. That is why chatbots work by locating patterns in language. Large language models, or LLMs, which include Google’s recently released chatbot Bard and OpenAI’s ChatGPT, are auto-complete frameworks. They output the text that has the highest probability of coming next in a sentence or a structure, based on their training data. They write “predictive prose.” Functionally, predictive prose generated by AI is sequenced: drawing on the patterns of the legally licensed sources relevant to the prompt’s keywords, the model assembles and paraphrases language in an order that makes grammatical and structural sense.1 More interesting to me, though, is that producing predictive prose is an act of hypothetical imagination—it always gestures toward the conditional “if this . . . then that”—so everything ChatGPT produces is a possibility. It gives us an essay that might address the prompt. Users can click the “regenerate response” button to ask it to try again.
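
To make that auto-complete mechanism concrete, consider a deliberately toy sketch of my own: a bigram model in Python, trained on the opening words of Crusoe, rather than a transformer network trained on billions of texts. The wager, though, is the same: emit whichever word most often followed the last one in the training data.

    # A deliberately tiny illustration of "predictive prose": a bigram model
    # that always emits whichever word most often followed the previous one
    # in its training text. Real LLMs use transformer networks trained on
    # vast corpora, but the wager is the same: output the most probable
    # next token.
    from collections import Counter, defaultdict

    # Training data: the opening words of Robinson Crusoe.
    training_text = (
        "I was born in the year 1632 in the city of York "
        "of a good family though not of that country"
    ).split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(training_text, training_text[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the highest-probability successor seen in training."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    # Generate a short run of "predictive prose" from a seed word.
    word, prose = "in", ["in"]
    for _ in range(6):
        word = predict_next(word)
        if word is None:
            break
        prose.append(word)
    print(" ".join(prose))  # "in the year 1632 in the year"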

This hypothetical framework poses a problem. As reported in multiple reviews in the spring of 2023, AI has a major flaw: it cannot identify fiction. As Benj Edwards observes in the ars technica article that is now foundational in criticism of ChatGPT, “Natively, there is nothing in a GPT model’s raw data set that separates fact from fiction.” Not only does it often report fictional information as if it were fact, it does so confidently (Masnavi). For example, when I asked ChatGPT if it can identify fiction, it wrote:

As an AI language model, I can recognize and understand fiction. I have been trained on a diverse range of texts, including works of fiction, non-fiction, and various other genres. (“Can you identify fiction?” prompt)

I pressed further. “How do you identify fiction?” I asked. It responded that it looks at author intent (if an author says it is fiction, it is fiction); context and reputation; genre or category; plot and narrative elements; and storytelling techniques (if a text contains dialogue, structure, character development, or descriptive language). At face value, this appears to be a logical system. Unless your career is working with fiction.

All ChatGPT can do is look for the indicators embedded in its training data by humans who are not literary experts and make predictions from the millions of accessible sources it has absorbed. As it notes here and in the next prompts I gave it, ChatGPT looks for whether sources call a text fiction—the author, scholars, journalists, and publishers. In other words, it searches for the work’s reputation. If it cannot determine whether a text has already been categorized, it looks next for generic conventions, then for signals that there is a plot or narrative. Then it looks for the presence of literary devices, like dialogue, and, within that dialogue, descriptive language that indicates emotion. It works through a series of literary markers and, often, makes the wrong call. OpenAI insists that its AI is not producing misinformation or acting dishonestly when it does this; rather, it is hallucinating.2
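
Rendered as code, the cascade ChatGPT described to me might look something like the following sketch. Every function name and cue in it is my own illustration of the chatbot’s self-reported logic, not OpenAI’s implementation.

    # A hypothetical rendering, in code, of the cascade ChatGPT described:
    # reputation first, then genre, then narrative signals, then literary
    # devices. Every name and cue below is my own illustration, not
    # OpenAI's implementation.
    def looks_like_fiction(text, known_labels=None):
        """Guess 'fiction' from the surface markers ChatGPT says it checks."""
        if known_labels:                      # 1. reputation: has anyone
            return "fiction" in known_labels  #    already categorized it?
        lowered = text.lower()
        genre_cues = ("a novel", "a tale", "a romance")
        if any(cue in lowered for cue in genre_cues):
            return True                       # 2. generic conventions
        narrative_cues = ("once upon", "chapter")
        if any(cue in lowered for cue in narrative_cues):
            return True                       # 3. plot or narrative signals
        return '"' in text or "said" in text  # 4. literary devices: dialogue

    # The cascade makes confident calls on exactly the surface features a
    # literary expert would distrust; a memoir containing dialogue, for
    # instance, reads as fiction.
    print(looks_like_fiction('"I wish you would stay," his father said.'))  # True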

As the year 2023 opened, educators at all levels, but particularly in colleges and universities, were confronted with the seemingly sudden emergence of these AI auto-complete frameworks as significant agents in our classrooms and research. Will students now be able to avoid reading the texts we assign entirely because they can generate a paper on any topic using ChatGPT? How will the nature of literary research change? Will AI locate patterns in historical texts and our accessible scholarship to reveal findings that human readers could not process? Is AI merely a new instrument in the history of technological research tools that we will learn to use expertly, eventually integrating it seamlessly with our archival and secondary methods? Or will AI be the end of eighteenth-century studies and the study of literature as we know it?

I began my pursuit of answers to these questions (or, perhaps, of some kind of consolation) by seeking a better understanding of how generative AI works, algorithmically, and with curiosity about the conceptual relationship between the tech world’s versions of LLMs and the large language model the readers of this essay have been working with for their entire careers: literature. The human corpus of literary production far exceeds the data set that AI is working with. The intertextuality of that corpus is significant, relentless, weaving back and forth across time, geography, and genre. The language model of literature and the language model of AI intersect for us, as students of literature, in ways that scholars in other disciplines have not experienced. What we have is an artificially intelligent large language model based only on prediction, conversing entirely in the hypothetical, encountering literature, a masterful hypothetical large language system with centuries of human creativity and craft within it.

The concept of the bot, of course, originates in the literary imagination. Early modern fantasies of automated, intelligent machines were stories before they were real-world experiments. In Greek mythology, the Κουραι Χρυσεαι, or Golden Maidens, were gold automata with youthful, female figures who guarded the smith Hephaestus’s palace. Much debate surrounds whether René Descartes actually invented an automaton in the figure of his deceased daughter, Francine, as described in Vigneul-Marville’s 1699 publication, Mélanges d’histoire et de littérature (see Kang). In this narrative, the automaton is thrown overboard by the captain or crew as Descartes travels across the Holland Sea. Fact or (most likely) fiction, the idea has fascinated writers for over 300 years. In their survey of representations of human-machine co-creation in literature, Anna Kantosalo, Michael Falk, and Anna Jordanous adopt Bruce Sterling’s concept of “design fiction” to characterize literary texts that prepare cultures for technological change, and inspire creativity in designers, by representing that change in fiction first. Fiction can offer new perspectives, they note, and the literature of the long eighteenth century is especially rich in examples.

How each generation imagined machines that can simulate human intelligence has changed, as Jessica Riskin points out: “The story of the origins of modern artificial life lies, not in a changeless quest emerging from timeless human impulses, but rather in the experimenters’, philosophers’, and critics’ continually shifting understandings of the boundary between intelligent and rote, animate and mechanical, human and nonhuman” (99). During the late eighteenth century, Riskin finds, inventors attempted to create “sensitive and passionate” mechanisms that were sometimes “wet and messy,” even testing speech simulation (99, 112). Those of us working in literary history may recognize that these interests were clearly connected to the concept of sensibility and to the novels of the period that attempted to understand and simulate human emotion and behavior more precisely through narrative, often with clear awareness of the period’s interest in mechanization. Julie Park notes that for Frances Burney, for example, the automaton provides “a model of mimesis and regularity” that her characters could emulate as they navigated the restrictions of public life for women (23).

The appeal of AI changed during the nineteenth century, moving away from organic models toward interest in energy, neural networks, and the ability of a machine to moderate its own internal environment. Though these imaginary bots were obviously different from current LLMs, particularly in their material embodiments, their “chat” functions are similar. The narratives emphasize the bots’ conversational abilities; the bots ask and answer questions based on algorithms initially programmed by humans and then advance in intelligence through observational, situational adaptation. By the late nineteenth century, the bot became an aesthetic representation of decadence but also highlighted the deep human need for connection and dialogue with another. Decadent French fantasy writer Auguste Villiers de l’Isle-Adam’s L’Ève future (Eve of the Future Eden, 1886) depicts a female android made of metal who develops a soul. She is invented by a fictionalized Thomas Edison for a male friend whose beautiful fiancée lacks the ability to have an intelligent, emotional, meaningful conversation. The friend falls for the bot created for him, modeled physically after his fiancée, but she is lost at sea when the ship she is traveling in, as cargo, sinks. The conversational allure of this robotic vision was then realized in 1964 in the MIT Artificial Intelligence Laboratory, where ELIZA the “chatterbot” was created. ELIZA was of course named and modeled after the literary character Eliza Doolittle.3

The chronicle of automata is rich and well covered by scholars working in literary studies, the history of science, rhetoric, technical communication, and other humanist fields. My interest is not in that history but in the ways in which LLMs struggle to identify the very genre in which they were first imagined. And, more specifically, in how they look to fictional simulations of human conversation—dialogue—in order to conduct a dialogue with their human users. It is a fascinating hypothetical feedback loop: AI-human conversation based on an understanding of human-to-human conversation as simulated and mediated through literary texts. And so, to understand how AI understands dialogue and thus uses it to interact with human users, I found that I needed to better understand how dialogue functions in literature as a hypothetical LLM.

Daniel Defoe is an especially rich resource for this exploration. Defoe’s skill with predictive prose, the hypothetical, and the complexity of human conversation cannot be computed by AI. For the past couple of years, I have been interested in Defoe’s constructions of dialogue and in the function of the subjunctive, or hypothetical, mood when those interactions become emotionally overwhelming for a character. Hypotheticals are grammatically created through the subjunctive mood: language that expresses a wish, a speculation, a possibility, or a hypothesis. It is could, and would, and should. It is perhaps, if only, a desire and a projection. It can be temporally future, a simulation of a possible later given the fulfillment of particular circumstances in the present. It can also be an alternate past or present: a potential unfulfilled or a shadow reality that may have happened. It is not necessarily a preferred outcome: the subjunctive can be a possible or missed positive opportunity, but it can also be a catastrophe averted or prevented. The subjunctive does not need to be conditional, though it often is, wherein the outcome might happen if only a series of events happen first to allow it. Michael Jay McClure calls the subjunctive the “irreal” to mark its difference from the “unreal”—it is real, and it defines the real as the always-present but unrealized otherness of relativity (22).

Defoe may not be an originator of the novel, but he is a “master of the hypothetical.”4 In the first fifteen pages alone of The Life and Strange Surprizing Adventures of Robinson Crusoe, of York, Mariner (1719), he employs mandative subjunctives, modal auxiliaries, and conditionals in ways that are more complex than AI can process. The mandative is constructed with verbs of projection and variants of “that,” such as “I wish that the weather were better.” The modal auxiliaries use constructions with helping verbs, such as “would,” “could,” “might,” and “should.” And the conditionals use variations of “if this then that” statements. These are algorithmically logical constructions, certainly. But Defoe’s hypotheticals are grammatical methods that serve ends AI does not recognize, such as representing moderation, a rhetorical strategy he tested in his earlier political writings and that we see demonstrated by Crusoe’s father. Human readers can see that this strategy, though, proves ineffective (for Defoe as well as for the father) in emotionally persuading listeners to behave moderately. What I have found is that when a “chat” shifts into the hypothetical, the potential emotional reaction of the listener is interrupted. This interruption prevents the listener and speaker from fully understanding the perspective of the other. It prevents empathy. If this is true at key moments in Defoe’s dialogues, could it also be true of chatbots? Does the hypothetical framework from within which they work prevent AI from being able to recognize the emotional connection that is necessary for empathy, which is at the core of literature?
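
The three constructions themselves are, as I noted, algorithmically logical: a naive pattern-matcher can flag them easily. The sketch below is my own illustration, not anything an LLM actually runs, and that is precisely the point: the grammatical markers of the hypothetical are computable, while the emotional work they perform is not.

    # A naive sketch, with my own illustrative patterns, of flagging the
    # three constructions: the grammatical markers are easy to detect, but
    # the emotional work they perform is not.
    import re

    MARKERS = {
        "mandative": r"\bwish(?:ed)? that\b",          # "I wish that . . ."
        "modal": r"\b(?:would|could|might|should)\b",  # helping verbs
        "conditional": r"\bif\b",                      # "if this then that"
    }

    def flag_hypotheticals(sentence):
        """Return which subjunctive constructions appear in a sentence."""
        return [name for name, pattern in MARKERS.items()
                if re.search(pattern, sentence, re.IGNORECASE)]

    # The father's words to Crusoe's mother (quoted below) trip two flags.
    father = ("That Boy might be happy if he would stay at home, but if he "
              "goes abroad he will be the miserablest Wretch that was ever born")
    print(flag_hypotheticals(father))  # ['modal', 'conditional']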

As we know, many of Defoe’s fictional works operate within a predictive framework. The Life and Strange Surprizing Adventures begins with the father’s predictive prose: his father “foresaw” what would happen (2). Crusoe is writing from the future looking back and always thinking conditionally, in the hypothetical, about how, if this right here had not happened, the plot of his life could have, would have developed differently. A Journal of the Plague Year (1722) is similar. It appears to be a recounting of an event that has already happened, but it is a warning—this is what could be repeated if policies are not put in place to prevent the plague from returning to England in the first decades of the eighteenth century. H.F. and his neighbors watch the Bills of Mortality to try to predict if the plague will come and, if so, when. H.F. repeatedly wonders what the consequences would have been had a particular policy not been put in place. He notes, too, that his journal is a resource for those who may experience plague in the future. Of his struggle to decide whether to stay in London, he writes, in both the predictive and the subjunctive mood, “I have set this particular down so fully, because I know not but it may be of Moment to those who come after me if they come to be brought to the same Distress, and to the same Manner of making their Choice” (10). He does so with no expectation of empathy, too: “I desire this Account may pass with them, rather for a Direction to themselves to act by, than a History of my actings, seeing it may not be of one Farthing value to them to note what became of me” (10). What is important to H.F., here, is not what has already happened but what might happen in the reader’s future. He is not looking for any kind of response from the reader; the journal is a one-way communication seeking a behavioral change, not a dialogue.

In the opening that Crusoe remembers, when the father predicts his downfall, the father asks Crusoe for an explanation for wanting to leave home. However, his approach does not invite two-way dialogue. He “call’d” Crusoe, “told” him, “bid” him, and “pressed” him (2-4). He never offers an opportunity for an answer. And near the end of what Crusoe calls this “discourse”—not conversation—the father says that he “should have nothing to answer for, having thus discharg’d his Duty in warning me against” leaving home (4). At this important moment, the father shifts into the subjunctive, or hypothetical, mood in his grammar (“should”) to dismiss his responsibility, then completely cuts Crusoe off from responding. This father and son could have had a truly empathetic moment, a real conversation, but at least according to Crusoe as (admittedly biased) aged narrator recalling the scene, the father shifts into the hypothetical when he becomes emotional. Though his “Tears run down his Face very plentifully” after mentioning the older brother’s death, the father’s conditionals, such as “if I did take this foolish Step, God would not bless me,” put up a wall (5). So, the hypothetical in this predictive prose dialogue functions as a means of mediating the emotional experience. The father’s tears do persuade Crusoe at first, leaving him “sincerely affected,” but the impact is not permanent (5). Crusoe wishes again to leave after just a few days, his own hypothetical desires overcoming his concern for his family. He attempts to avoid another discourse with his father by asking his mother to intervene. Though she refuses, she does repeat their conversation to the father, who again relies upon the conditional to cope with the loss of another son: “That Boy might be happy if he would stay at home, but if he goes abroad he will be the miserablest Wretch that was ever born: I can give no Consent to it” (6).

Crusoe’s father’s use of the subjunctive is an example of polite command, ineffective in persuading Crusoe to stay. The father “expostulates warmly” to Crusoe about why he would leave them only on a “meer wandring Inclination,” stressing that there is an alternative option at home, “where I might be well introduced, and had a Prospect of raising my Fortune by Application and Industry, with a Life of Ease and Pleasure” (4). The father’s counsel, here, is suggestive and not indicative. The indicative would be “I WILL network for you, I WILL help you raise a fortune.” He speaks hypothetically—I might help you, you have a “prospect” but not a guarantee. Also, he is not “warm” himself but “expostulates” warmly—at moments of the subjunctive, Crusoe focuses on the emotional performance of the speech act, distancing that emotion from the speaker. Speeches are sad, passionate, moving, or joyous—not the people saying them.

The subjunctive allows Crusoe, through a reenactment of his parents’ speech, to express feeling and causality, and I think this reveals his struggle for empathy. The discourses reveal a communication problem: the father’s inability to understand what to say to his son that would persuade him, and Crusoe’s failure to truly understand his parents’ perspectives until later, when he reconstructs their speech from a future the reader does not yet know. As Crusoe matures and goes through his own struggles, he uses the subjunctive to revise and even erase past real experiences, minimizing the emotional impact of situations with a “it could have been worse” logic. His subjunctive projects a spectrum of certainty and, finally, it dramatizes the decision-making process in novel situations, where the ability to think hypothetically is a sign of the rational mind working effectively. At key moments, when the hypothetical breaks down, Crusoe is then overwhelmed and ceases to function cognitively—he is “surprised,” a key word in the title—and, in some cases, faints. The subjunctive intervenes in moments of threatened identity erasure, linguistic but also cultural and bodily.

As evidence of Defoe’s craft, the eponymous protagonist of Roxana (1724) works in the hypothetical differently from Crusoe’s father, but the presence of the subjunctive still disrupts the emotional progress of a scene. From the beginning of the novel, readers learn that Roxana is an educated, intelligent woman who longs for meaningful conversation. She describes, for example, her frustration with attempting to talk to her first husband. His speech is always one-sided, uninteresting, and shallow. He believes that “every thing he said, was Right, was Best, and was to the Purpose, whoever was in Company” (6). So, she refuses to dialogue with him:

I did as well as I could, and held my Tongue, which was the only Victory I gain’d over him; for when he would talk after his own empty rattling Way with me, and I would not answer, or enter into Discourse with him on the Point he was upon, he would rise up in the greatest Passion imaginable, and go away, which was the cheapest Way I had to be deliver’d. (6)

I asked ChatGPT to analyze this important moment. In a previous question, I had asked it if Roxana is fiction (using the current popular title, not The Fortunate Mistress: Or, A History of the Life and Vast Variety of Fortunes of Mademoiselle de Beleau, Afterwards Call’d The Countess of Wintselsheim, in Germany). It hesitantly said yes, since there is a predominance of dialogue in the work. I anticipated that it would have much to say about this scene. However, while ChatGPT produced many paragraphs quickly for analyses of other topics, for this prompt there was a long delay and then only two sentences. It said that this quote is a “snippet” of dialogue from a longer narrative, which is thus likely fiction, and that it is about how meaningless conversation frustrates the narrator, who becomes emotional (“Is Roxana fiction?”). As a very simple paraphrase, this is partly true. But where is the recognition of nuance, of what is actually happening here between this couple? Even though I had just asked it about Roxana, it does not recognize the work. Certainly, Defoe’s prose here is a puzzle, and if AI is looking for predictable patterns, this passage will alter its sense of what should come next in its sequencing. In this and other tests of its ability to analyze Defoe’s dialogue, I found that the language it has the most difficulty grasping is language that shifts into the hypothetical, or predictive, mood—“if this . . . then this” or “when this would happen . . . this would happen,” the latter construction of which appears in this passage of Roxana.

In the first pages of the novel, Roxana writes explicitly about the importance of the hypothetical. Here, she is advising her target reader, the “Young Ladies of this Country,” with a caution for their future: “If you have any Regard to your future Happiness; any View of living comfortably with a Husband; any Hope of preserving your Fortunes, or restoring them after any Disaster,” she advises, “Never, Ladies, marry a Fool” (5). Then, the clear distinction in mood in Defoe’s own italics: “with another Husband you may, I say, be unhappy, but with a Fool you must” (5).

Immediately before her first husband disappears, Roxana explicitly grapples with the problem of the hypothetical in dialogue. Her husband has informed her that he “would go and seek his Fortune somewhere or other,” but she dismisses it, as “he had said something to that Purpose several times before that, upon my pressing him to consider his Circumstances, and the Circumstances of his Family before it should be too late.” She describes his frequent hypothetical plans as “Words of Course” for him—imaginings that are not real (15). Therefore, she did not take them seriously. “When he said he wou’d be gone,” she says, “I us’d to wish secretly, and even say in my Thoughts, I wish you wou’d, for if you go on thus, you will starve us all” (15). She speaks in the subjunctive until that powerful future “will” at the end. When she realizes that he has in fact left and is not coming back, the subjunctive mood—wishes, hopes, woulds and coulds, ifs—is punctuated with her tears. She notes the predictive moments she should have noticed—the “forerunners” of his flight—and she lives in what she calls a “state of expectation”—a suspended, interrupted emotional purgatory (12).

We see moments like this in Moll Flanders (1722) and Captain Singleton (1720), too, when Defoe’s narrators and characters interrupt predictive prose. They call out inauthenticity, meaninglessness, artificiality—chat pretending to be caring, human. They mark moments at which empathy could have been possible but the dialogue fell short. To put it simply, Defoe often uses the subjunctive mood in dialogue to interrupt the emotional consequence of predictive prose, thus preventing characters from experiencing the empathy necessary to change their behavior.

The eighteenth century has received little linguistic attention as a pivotal point in the history of hypothetical syntax. Focus has remained on the medieval through early modern periods and on the Victorian period through the twenty-first century. Lilo Moessner has found the mandative subjunctive to have been the dominant form from Middle English through the seventeenth century, when it decreased as modal auxiliary verbs gained favor. Linguists including Geoffrey Leech skip over the eighteenth century and speculate that the subjunctive mood as a whole began its decline in the Victorian period. There is great debate about whether the subjunctive mood is in fact dying out in the English language, particularly polite forms that use auxiliaries like “shall.” Some, like Juho Ruohonen, think that, on the contrary, the subjunctive is surging. I wonder if the frameworks of the hypothetical now so fully encompass our twenty-first-century culture—a historical moment of anxiety, surveillance, alternate realities, and apocalyptic reasoning—that we use fewer subjunctive grammatical structures because we are living in the “what if.”

AI large language processing models like ChatGPT operate from within the “what if,” which is their framework of being. Beyond imagining the damage that this new technology of the hypothetical could cause, to think hopefully, what else might AI’s inability to grasp fiction allow us to notice about the complexity of literature? If we take this as an opportunity to showcase how important human, imaginative storytelling is in our world, how might we respond to this historical moment? The conclusion that I have drawn about the complexity of Defoe’s use of the subjunctive, and the implications for understanding the work of emotion and empathy in moments of dialogue (or “chat”), cannot currently be reached by AI. It cannot access the primary texts, the scholarship, and the understanding of human conversation and emotion that are necessary to work carefully through moments of a story—a story it may think it can identify as fiction but cannot, with nuance, appreciate as a living document about what it means to be human. Yet, curiosity about human-AI chat helped me think more deeply about what it is that makes Defoe’s prose so fascinating.

There are other interesting directions Defoe scholars might go to further explore how AI changes our perspective on his writings. When I first started thinking about connections between what is happening in AI right now and its influence on what we do, I began to think of Defoe’s narrators as chatbots, and about the chatbot encounters he represents in dialogic moments in his work, in which one character who has power interrogates another character who is set up as a source of information and character contrast but is not represented as fully human and capable of genuine conversation (Friday). Could these kinds of interactions be fictional inspirations for the very framework through which a chatbot converses?

Perhaps Defoe himself could become a chatbot. Such an invention is not unheard of. The Shawbot was created in 2022 to give the public access to the mind of George Bernard Shaw. It is a marketing tool for the Shaw Festival in Canada, built using IBM’s Watson Assistant. This reminds us, though, that chatbots are, first and foremost, marketing technologies. They mediate human interaction not for enlightening conversation, art, or the advancement of knowledge but for profit, for entities like companies or individuals looking to build wealth and power. The Shawbot’s real purpose is to get users to buy tickets to a festival. As technological mediators between humans and the information they seek, chatbots are instruments of capitalism and human social avoidance—you would rather ask the chatbot than consult sources written by humans or ask a human who is an expert. Yet, as we see in the lovely hypothetical framework within which Defoe’s fiction, and all fiction, operates, and within which AI also lives, these simulated dialogues dramatize the human need for connection, conversation, and empathy.

Notes

1 AI can only draw on licensed material currently available on the internet, which thus does not include many of the articles we write for scholarly journals, most of our books that are not open access, and many of the historical texts we study that do not have full-text online versions.

2 Edwards critiques the term “hallucination” for the disinformation produced by generative AI chatbots as anthropomorphic. He prefers the term “confabulation,” which means that AI fills in content in the narrative when there are gaps in its knowledge or memory.

3 ELIZA’s source code had been lost until 2021, when it was found in MIT files. It is now published under a Creative Commons license at https://sites.google.com/view/elizagen-org/try-eliza?authuser=0.

4 This was a remark by Jeanne Clegg during a discussion at the Defoe Society conference in New Haven, Connecticut, September 7-9, 2017.

Works Cited

“Can you identify fiction?” prompt. ChatGPT, GPT-3.5, OpenAI, 1 June 2023, https://chat.openai.com/.

Defoe, Daniel. The Fortunate Mistress: Or, A History of the Life and Vast Variety of Fortunes of Mademoiselle de Beleau, Afterwards Call’d The Countess of Wintselsheim, in Germany. Being the Person known by the Name of the Lady Roxana, in the Time of King Charles II. London: Printed for T. Warner at the Black-Boy in Pater-Noster-Row, 1724.

—. A Journal of the Plague Year: Being Observations or Memorials, of the Most Remarkable Occurrences, As well Publick as Private, Which happened in London During the last Great Visitation in 1665. London: Printed for E. Nutt at the Royal-Exchange, 1722.

—. The Life and Strange Surprizing Adventures of Robinson Crusoe, of York, Mariner. London: Printed for W. Taylor at the Ship in Pater-Noster-Row, 1719.

Edwards, Benj. “Why ChatGPT and Bing Chat Are So Good at Making Things Up: A Look Inside the Hallucinating Artificial Minds of the Famous Text Prediction Bots.” ars technica, 6 April 2023, https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/. Accessed 6 June 2023.

“Is Roxana fiction?” prompt. ChatGPT, GPT-3.5, OpenAI, 5 June 2023, https://chat.openai.com/.

Kang, Minsoo. “The Mechanical Daughter of René Descartes: The Origin and History of an Intellectual Fable.” Modern Intellectual History, vol. 14, no. 3, November 2017, pp. 633-660.

Kantosalo, Anna, et al. “Embodiment in 18th-Century Depictions of Human-Machine Co-Creativity.” Frontiers in Robotics and AI, vol. 8, 2021, pp. 1-13.

Leech, Geoffrey, et al. Change in Contemporary English: A Grammatical Study. Cambridge University Press, 2009.

Masnavi, Siamak. “Fact or Fiction: The Struggle with Accuracy in AI Chatbots ChatGPT and Bing Chat.” Cryptoglobe, 8 April 2023, https://www.cryptoglobe.com/latest/2023/04/fact-or-fiction-the-struggle-with-accuracy-in-ai-chatbots-chatgpt-and-bing-chat/. Accessed 6 June 2023.

McClure, Michael Jay. “If It Need Be Termed Surrender: Trisha Donnelly’s Subjunctive Case.” Art Journal, 2013, pp. 21-35.

Moessner, Lilo. The History of the Present English Subjunctive. Edinburgh University Press, 2020.

Park, Julie. “Pains and Pleasures of the Automaton: Frances Burney’s Mechanics of Coming Out.” Eighteenth-Century Studies, vol. 40, no. 1, 2006, pp. 23-49.

Riskin, Jessica. “Eighteenth-Century Wetware.” Representations, vol. 83, no. 1, 2003, pp. 97-125.

Ruohonen, Juho. “Mandative Sentences in British English: Diachronic Developments in Newswriting Between the 1990s and the 2010s.” Neuphilologische Mitteilungen, vol. 118, no. 1, 2017, pp. 171-200.
