Not Trying to Make It Weird, But I AI'd Myself
Testing What an LLM Knows About Me, a (Kind Of) Public Figure
I can’t believe you caught me. It isn’t what it looks like.
I can explain.
It’s the first time, I promise, and I don’t know what came over me. I just got this idea, and, well, there’s no sense in hiding it, is there? Fine. I’ll come clean.
I admit it. I AI’d myself.
There! Happy now?
Even if you’re not, I actually was. Am! I learned something about myself, or at least about myself as the internet might see me, and maybe that will prove useful in better understanding my books from an outsider’s perspective. Maybe it’ll even inform how I approach marketing going forward. Who knows?!
Now that we’ve got the unpleasantness out in the open, let’s at least explore what ChatGPT 4o thinks about a topic on which I am a subject-matter expert, including—
What it got right
What it got wrong
What this tells us about how large language models (LLMs) work
What this means for how trustworthy we should find LLM outputs
How we might still make good use of what LLMs do well
To do this, we’ll walk through the full text of the conversation, step by step. The transcript we’ll be using for this discussion can be found here.

First Impressions
Before we dive in too deep, I should say I was surprised ChatGPT knew about me at all, not that I necessarily should have been. ChatGPT 4o can browse the web for current information, an improvement over its predecessor models, which were trained only on data through September 2021.
Even setting browsing aside, though, I really shouldn’t have been surprised it knew about me: the bulk of my digital footprint as an author dates to 2016 through 2020, well within that training window.
Yes, it’s been a few years. Please don’t think me washed up.
Or do. I’ll be over here in the meantime telling myself in the mirror that I’m good enough, I’m smart enough, and gosh darn it, people like me.
What It Gets Right
Here’s a list of where ChatGPT went right—
Wisconsin author
Speculative and science fiction
Pen name R.R. Campbell
Imminent Dawn
Contributions to community
Topics of speculative fiction work
Where It Flatters Me
I’ll be honest: it was nice to see a nod to genre blending, complex characters, and thought-provoking themes. If this is what the internet really thinks about my books, that’s a win, and one I’ll take.
Where It Goes Wrong
For as much as I might want to be reassured by its observations about genre, character, and theme work, well, that praise gets harder to take as gospel given what ChatGPT got wrong, which were far more basic facts. A couple of notes—
Imminent Dawn is not a series; it’s the first book in a series
EMPRESS is not the name of that series; it’s EMPATHY
EMPATHY (or, if you’re ChatGPT, EMPRESS) is not a trilogy
I’ll expound below on why I think it got these particular facts wrong, but my first thought about the errors was actually, Dang, can you imagine trying to use this thing as a student to write papers about people or their works?
Not that I expect anyone to be writing papers about me or my books, but… it was a thought I had.
What? ChatGPT may be capable of flattery, but no one flatters me like I flatter me.
Needless to say, given these errors, I thought I’d call one of them out and see what ChatGPT had to say about it.

First, it was reassuring to see ChatGPT get it right when challenged, though that tracks with how it ordinarily responds to pushback, at least in my experience.
That said, I do still get nervous when ChatGPT is so accommodating; historically, that eagerness to please has meant it might overcorrect and become a citizen of hallucination nation, but, thankfully, that doesn’t appear to be the case here.
I could quibble with the characterization of mind-to-mind communication, but it is technically true. I was a little surprised it had gotten this far into describing the series without explaining how that mind-to-mind communication occurs, namely via an internet-enabled brain implant.
At this juncture, though, I became interested not only in the objective but in the subjective as well. To get at the latter, I responded to the question at the end of its most recent message with—
Yes, please tell me more about the series and how it has been received. Is it popular?
ChatGPT responded with a wall of text to this one, so let’s take it section by section.

This improves on mind-to-mind communication by getting more specific with direct neural communication, in my opinion, but it drops a big oof on us by listing Event Horizon as having a publication year of 2020.
More on that later, though. Here’s the next chunk of text it sent along.

This is actually a fantastic summary of the key themes and storyline, especially for the first book.
The problem? It gets the last name wrong of the only character it chooses to name.
It is close, however. The character’s actual name is Dr. Wyatt Halman, and the error ChatGPT makes here is in the same vein as the error it made earlier by calling it the EMPRESS series instead of the EMPATHY series.
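The model isn’t literally computing edit distances, of course, and I’ll leave the real explanation for later, but a toy string-similarity check (using Python’s standard-library difflib and os.path, purely for illustration) shows just how superficially close the confused names are:

```python
from difflib import SequenceMatcher
from os.path import commonprefix

# Illustration only: any real EMPATHY/EMPRESS mix-up happens in the
# model's learned representations, not via string comparison.
a, b = "EMPATHY", "EMPRESS"

print(commonprefix([a, b]))                           # shared prefix: EMP
print(round(SequenceMatcher(None, a, b).ratio(), 2))  # similarity: 0.43
```

Both names open with the same three letters, which is exactly the kind of surface overlap that can nudge a next-token predictor toward a plausible-looking but wrong completion.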
I’ll expound on the likely reason for this mistake later, but for now, let’s look at the next bit of its response.

This is, again, an exceptional summary overall. It does err in continuing to refer to EMPATHY as a trilogy, but I haven’t had a chance to correct it on that yet; remember, this is all still in response to a single question I asked it earlier.
I also can’t say whether EMPATHY compares to The Circle. I’ll admit I’m not familiar with Eggers’ work (should I check it out?), and this was the first time I’d seen my work compared to his, which means this is either a hallucination or there’s some media out there in which this comparison is made.

Maybe I’m being harsh on myself—or, well, the series—but I was surprised to see it be so generous with respect to EMPATHY’s popularity. I thought for sure it’d take a look at metrics like the number of reviews on the most common platforms and go, “Meh, looks like it didn’t do as hot as the author might have liked, sales-wise,” but no! Instead, it focused on a loyal readership and that it is—and this is according to ChatGPT, mind you—a standout in its genre.
The more I thought on it, though, the more I came around: it’s fair to say EMPATHY has a loyal readership. People do still reach out from time to time to ask if I have more planned for the series (more on that here), which isn’t nothing.
But okay, back to the chat. ChatGPT has asked me a question, and I have an answer for it.
Okay, well, I have a question for it, and it’s one based on the glaring error it made much earlier in its most recent response.

When challenged, ChatGPT again does get it (mostly) right, which is reassuring.
It is true that Event Horizon was announced as the third installment in the series, and, to be fair, it was originally slated for publication in 2020. The rub is that, as a result of disagreements between me and my publisher with respect to how the book’s contents should be presented (they wanted to do a Martin-esque A Feast for Crows-A Dance With Dragons split given the number of POV characters and amount of ground covered), I ultimately pulled the book from publication.
To be fair, I don’t know if I ever shared that information publicly. This all happened in February and March of 2020, so one might understand how an announcement would have gotten lost in the fray. Confusion and delay, indeed!
ChatGPT does continue to insist on the bit about it being a trilogy—it was (is?) planned to be a pentalogy—but I don’t need to keep picking that nit.
Where does this leave us?
We’ve had a chance to review the conversation, to see where ChatGPT got it right, where it got it mostly right, and where it got me plain wrong.
In the next installment in this series, we’ll take a closer look at how ChatGPT likely made the mistakes it made, what that tells us about how LLMs work, and how we might use this knowledge to good effect going forward.
In the meantime, if it’s not too personal of a question, tell me in the comments: have you ever AI’d yourself? What did the LLM get right about you, and where did it come up short?
I tried it out after reading this post, and surprisingly, it had zero trouble telling me about myself. It instantly listed my novella and even my no-longer-available Kindle Vella stories (plus an erotica book written by someone who has the same name as me - I have nothing against that genre, but I wouldn't mind if it hadn't credited me lol). It was even able to immediately supply my Twitter account when asked, plus mention writing projects that aren't published but that I've Tweeted about.
Ryan — I loved your tongue-in-cheek approach to this post—it was so much fun to read! I've never ventured out to ChatGPT or “AI’d” myself, so it was fascinating to see what it knows (and doesn't know) about you. The hits and misses made for an intriguing read. Thanks for sharing!