
5 things I learned by writing a white paper with ChatGPT
I recently wrote an entire white paper using AI.
Here are 5 things I learned from that experience.
To start, take a look at that white paper for yourself.
You can click the thumbnail cover below to see it.
Or download it from this link: https://thatwhitepaperguy.com/wp-content/uploads/2023/03/9-Reasons-Why-ChatGPT-Cant-Write-Your-Next-White-Paper-v2-1.pdf
AI lesson #1: We’re a long way from pushbutton white papers
A few quick metrics on that paper:
• V1 done with ChatGPT 3.5 in February 2023
• V2 revised with the new Bing in March 2023
• Total words drafted by ChatGPT: 3,250
• Words cut by me: 250 or so
• Total number of drafts: 10
• Total number of prompts: 150
As you can see, doing this project took far more than simply prompting, “Write me a white paper.”
In fact, it took 150 prompts over several sessions.
Some of those prompts were due to my learning curve.
But I still had to ask, beseech, coax, command, demand, order, plead, pressure, query, suggest, and urge ChatGPT to generate the output I could use.
We’re still a long way from pressing a button to generate a finished white paper.
And that’s a good thing for writers!
AI lesson #2: ChatGPT is lightning fast
No human writer can possibly spit out coherent words faster than ChatGPT.
That’s probably its greatest strength.
Here’s a table that shows how long that white paper took to do with ChatGPT and the new Bing, compared to doing a similar white paper without any AI.
| | White paper with AI | White paper without AI |
|---|---|---|
| Time to generate ideas | A few seconds | 2 hours or more |
| Time to research | 0.5 hour | 10 hours or more |
| Time to first draft | 1.5 hours | 10 hours or more |
| Time to revise (10 drafts) | 4 hours | 4 hours or more |
| Time to fact-check and redo research | 6 hours | A few minutes |
| Total | 12 hours | 26 hours |
Using AI shaved off 14 hours—two full workdays—or more than half the time to get to a usable draft.
You tell me: Would you like to save 14 hours writing your next white paper?
AI lesson #3: ChatGPT 3.5 only writes C-level copy
What do you think of the style in the “9 Reasons” white paper?
To me, it’s clear and authoritative. So it gets those two things right.
But plowing through page after page of its text became a chore.
The style seems bland, flat, wooden, and uninspired.
I’d give ChatGPT 3.5’s output no more than a C or maybe C–.
These impressions were confirmed when I checked the readability.
This table shows the readability of the part ChatGPT wrote—from the Introduction to About the Author—compared to the Afterword I wrote.
| | 9 Reasons text by ChatGPT | Afterword by Gordon Graham |
|---|---|---|
| Flesch-Kincaid Reading Ease | 36 out of 100 | 72 out of 100 |
| Grade Level | 13.5 | 6.6 |
It turns out, my text is twice as readable as the AI’s text.
In a white paper, I shoot for Reading Ease of 50 or more, and a Grade Level of 10 or less.
ChatGPT failed to reach these basic benchmarks for acceptable copy.
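If you’re curious where those numbers come from, the Flesch Reading Ease and Flesch-Kincaid Grade Level are standard formulas based on words per sentence and syllables per word. Here’s a rough Python sketch; the syllable counter is a naive heuristic I wrote for illustration, so its scores will drift a few points from dedicated tools like the readability checker in Microsoft Word:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels,
    # treating a trailing silent 'e' as non-syllabic.
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    vowel_groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(vowel_groups))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Reading Ease, Grade Level) for a passage of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)

    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word

    # Standard Flesch formulas: longer sentences and longer words
    # push Reading Ease down and Grade Level up.
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Short words in short sentences score high on Reading Ease; polysyllabic jargon in long sentences drags it down fast, which is exactly the pattern in the table above.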
AI lesson #4: ChatGPT makes sh*t up
You’ve heard this before. When it comes to research, ChatGPT falls flat on its face.
Here are two more metrics from this white paper…
- Sources proposed by ChatGPT: 18
- Sources that were bogus: 16
Those sources from ChatGPT sounded good.
They were from people who actually exist, saying things they might have said, in books they might have written.
But on each draft, those sources would bounce around:
- ChatGPT would quote a different person saying the same thing
- Or the same person saying the same thing in a different book
- Or the same quote from a different page
When I tried to find those sources, I couldn’t.
Neither could Google, Bing, nor Amazon.
My conclusion: You can’t trust the sources from ChatGPT, because it just makes sh*t up.
AI lesson #5: ChatGPT needs real-time web access
I shudder when I hear of anyone using ChatGPT to do research.
Without access to the Web, that’s asking for trouble.
Fortunately, you can now use the AI along with the new Bing to access the web.
And there are plug-ins to do the same thing.
That means you can now prompt ChatGPT to find reliable sources and create footnotes that stand up to fact-checking.
But you must never, ever skip that step.
For the second edition of this paper, the new Bing proposed 20 sources, but I could only confirm 15 of them.
So I ended up discarding the other 5 too.
All told, ChatGPT and the new Bing proposed 38 sources, of which 21 failed my fact-checking. That’s 55% bogus!
Conclusions
Like everything, ChatGPT and competing large language models have some strengths and some weaknesses.
To get the most from AI, marketers really must learn what it does well, and what it does poorly.
Then you can rely on AI for what it does best.
And use human skills in areas where it does poorly.
Does that make sense?
I’ll be releasing more experimental white papers in the coming weeks.
To hear about those, make sure to subscribe to my free newsletter.
And if you create any white papers with AI, I’d love to hear about them.
Good luck on your own journey of learning about AI.