It’s interesting when you know so much about a topic that you’re actually ‘smarter’ than an AI.
It doesn’t happen often. I really like ChatGPT. It’s the perfect companion to menial tasks—asking what prompts mean when completing confusing paperwork, for example, or creating multiple-choice questions for a low-stakes review quiz that you’re basically only using to entertain your students until Thanksgiving Break.
You’ve probably been cautioned against using ChatGPT for ‘real’ research, and for good reason. There isn’t a single piece of information that I’d retrieve from ChatGPT without first confirming its accuracy against other sources.
Why? Well, I’ve actually witnessed it mess up quite a few times. Never with the ordinary stuff, the things I have base-level knowledge of.
However, I’m something of an expert on George Orwell (as much as I might deny it), and I’ve seen ChatGPT err in reporting his story many times.
I’ll sometimes consult the robot if I’m doubting myself and in search of rapid-fire answers for my book Breaking Big Brother (not as the source of my knowledge, mind you, and not as the ultimate fact-checker, either, but as a security blanket of sorts, confirming any information I have short-term doubts about so I can keep on writing). I might ask it a few questions, confirming what I already know just to give myself the confidence to write it down on the page without falling into an endless loop of Googling that would halt my actual writing for a day or more. Then, I’ll be hit with something that’s both a confidence boost and a source of absolute terror. Are you ready?
The AI gets something wrong.
Wow. Shocker. You’ve heard it reported on the Internet. Maybe, if you’re not a hopeless, lazy nerd like me, this is the exact reason why you stay away from the technology. Nevertheless, if you’re a habitual user of AI, it’s something that, when using it for ‘casual’ research, you assume can’t happen.
Consider this a cautionary tale (and whatever you do, don’t use this to discredit Breaking Big Brother—I repeat, I have read basically everything Orwell has written, and I would not be torturing myself writing this stupid book if I thought it was something an AI could write for me). However, catching mistakes the AI makes about Orwell highlights a pretty important problem, which is that AI breeds laziness. We typically ask AI questions we don’t already know the answers to, and we typically take the answers for granted. This is not how we should be using this technology. Rather, we should be treating it as an extremely advanced search engine.
This next part can be a mouthful, so without further ado, let’s kick off the actual reason for me writing this post, which was to help you, not to shame you:
Melissa’s List of Actual, Guilt-Free Uses for AI
1. As a search engine
Although Google has strayed from this role lately (largely due to AI), when it was first developed, it actually wasn’t in the business of providing concrete answers to you. It just provided you with the sources you needed to find out the answers for yourself. So, if you’re having trouble finding an obscure source (for example, a letter that George Orwell received in 1936 that’s probably available in some out-of-print Collected Works that you don’t own), ChatGPT will link it for you in seconds. It’s remarkable.
2. As a quote-finder
Technically, this could be categorized under the ‘search engine’ thing, but I needed to make this a list, dammit. If you have a quote in your head, and you can paraphrase it and describe what it means but aren’t quite sure where you’ve read it, just input your hazy memory of the quote into ChatGPT and it’ll provide the book, edition, and page number for you. Just make sure to double-check it yourself, because as we’ve already discussed, it can’t always be trusted.
3. As a way to jog your memory
This might be my favorite use for AI. It bypasses a lot of the ‘mental drudgery’ of scholarly work—instead of having to reread an entire essay or chapter or book, you can ask the AI to summarize it in however many words you’re willing to read, skim the summary in seconds, and remember exactly what you read however long ago you actually read it.
This is a convenient and dangerous tool (and if you’re knowledgeable in the thing you’re asking it about, it’s probably where you’re going to notice the most mistakes).
There you have it—the legitimate uses for AI as I see them. They are few and far between, and they’re basically the same things we use Google for.
Herein lies the problem. Although, as I mentioned before, AI is changing this somewhat, in the past, Google results were never considered a definitive answer to things. They were just a result to get things started. When AI is approached in this way, it’s remarkable. It completely cuts out the drudgery of sifting through hundreds of search results, skimming articles only to realize halfway through that they’ll never actually give you what you’re looking for. For this particular purpose, people in the future are going to be looking at typical search engines the way kids today look at dial-up Internet (“wait, you guys actually had to go through all that?”).
However, the problem with this whole thing is that AI actually purports to have answers. It does sometimes. Usually, even. But if we rely on it, within days we’ll lose all the toughness it takes to figure things out for ourselves, and what will survive is a base-level approximation of the truth.
It’s still early. Maybe AI will eventually get so good that it doesn’t make any mistakes anymore (aside from the ones that its programmers decide that it should make, of course). Still, is this thoughtless life one that you want to live?
In many ways, when our computers get smarter, we do too. AI frees up a lot of time. When approached with caution, it allows us to think deeper about things. Still, there’s something strange about conversing with a computer. Collaborating with a computer. It makes you wonder where the human part ends and where the computer begins. Everyone makes mistakes, right?
And what does it say about us that, in certain matters, we trust the word of these inventions that we have created more than our own intuition?
Thank you for reading. If you enjoyed this post and would like to support my work, consider becoming a free or paid subscriber.
You can also buy me a coffee. Or a ChatGPT subscription.