We recently reviewed all the white papers we worked on from 1997 to 2020.
The results were fascinating.
Of these 300 white papers, a full 50 were abandoned and never published. That’s 1 in 6.
Three dozen more were published, but with clear defects. That’s about 1 in 8.
That means 214 of our papers (71%) were published as planned, while 29% weren’t.
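For readers who want to check the arithmetic, here is a quick sketch of the breakdown. The counts are the ones reported above; the percentages are rounded to whole numbers.

```python
# Counts reported above (300 white papers, 1997-2020)
total = 300
failed = 50                               # started but never published
challenged = 36                           # published with defects ("three dozen")
successful = total - failed - challenged  # everything published as planned

# Shares of the total, rounded to whole percentages
print(f"failed:     {failed / total:.0%}")      # about 1 in 6
print(f"challenged: {challenged / total:.0%}")  # about 1 in 8
print(f"successful: {successful / total:.0%}")
```

Running this gives 214 successful projects, or 71%, with the remaining 29% split between failed (17%) and challenged (12%).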
You can hear about some of the more outrageous projects in these posts:
And you can hear about what went right more than 200 times here:
Now we’d just like to describe why and how we did that analysis.
Our methodology, if you will.
Why do this?
For years, Gordon has been keeping a list of every white paper he worked on. That’s why he knew the count was getting close to 300.
We can’t remember anyone else claiming to have personally worked on 300 white papers over a stretch of 20+ years.
That must be some kind of record.
When Gordon reached that mark in mid-2020, he figured it was time to look back and see what he’d learned from all that.
And he wanted to share his observations and advice with the upcoming generation of long-form content writers… and the clients who hire them.
What we counted
For this analysis, we counted any white paper that Gordon worked on directly.
He wrote most of these himself from scratch.
There were a few exceptions.
Some of these 300 white papers were:
- Revised from a draft done by someone else
- Updated from an earlier version
- Planned in detail, but never written
- Subcontracted to another writer (about a dozen in all)
And we included 3 works-in-progress that are all going smoothly.
That’s only 1% of the total, so even if one of them goes sideways, it won’t meaningfully alter these results.
How we rated each project
Gordon remembers many of these projects like they were yesterday.
And for most projects, we have a finished PDF on hand.
Where his memory was hazy, he went back to review any notes, e-mails, drafts and invoices for that project.
To report the results, we were inspired by The Standish Group.
This organization has analyzed many thousands of software projects, using the simple metaphor of a stoplight.
We used the same metaphor, with these definitions:
- Red: white paper started but never published (failed)
- Orange: white paper published, but with defects (challenged)
- Green: white paper published as planned (successful)
Over several weeks, Gordon collected his thoughts for each project.
Then he grouped similar types of problems together, refined the descriptions and counted the projects in each group.
A few papers suffered from more than one problem. In those cases, he assigned the paper to the category he figured was the main issue.
He didn’t sugarcoat anything.
When a project flopped, he admitted it. Then he tried to identify why.
Editorial success, not business results
Our data covers the process of creating a white paper and the finished results from an editorial point of view.
That’s what we know.
We have only sketchy data on the business results generated by any project. Clients don’t often share that with writers.
We’d love to know about results too. From now on, we plan to circle back and ask.
That said, our ability to impact business results is limited. We currently have no hand in running the campaigns for the white papers we create.
So it’s possible for a white paper to be excellent in editorial terms but still not generate any significant business results.
It could be sent to the wrong list, with a weak e-mail, a muddled landing page, and so on.
Down the road, the sponsor’s offering may have a deep technical flaw, an outlandish price-point, or some other show-stopper.
To publish a white paper that succeeds editorially and generates good business results takes many players contributing at the top of their game.
That’s sort of like creating a show that both the fans and the critics love.
When it happens, everybody’s happy.
Let’s all continue to push for that.