Better writing measured
As writers, we say we can make writing better, but how can we measure it?
You can rely on editorial authority or user research, but I wanted an approach that was simple to analyse, could be done by anyone and could justify the work we’d been doing.
As before, the task was to get the data and decide what to analyse. Unlike my previous work on sentiment, here I wanted something more solid and less controversial.
It helped that my content editor had already written about the general principles of how we improve government writing, which gave me somewhere to start.
Treating this like a proper experiment meant coming up with testable theories.
Writing ideas to test
Keeping the tests simple, these were my four writing improvements to test:
- Better titles
- Shorter content
- More readable content
- The reader, not the government, is the focus
All of these can be measured with simple tools and compared over time. In practice this meant:
- Better titles – these will be longer, as they’ll be more descriptive of what’s on the page
- Shorter content – if each page addresses just one or two user needs, the average page length should fall
- More readable content – improved readability scores, less jargon, shorter words and sentences
- The reader is the focus – as the government style is to use ‘you’ to address the user, there’ll be more use of ‘you’ and ‘your’, and fewer mentions of Defra, the government, us and so on
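The checks above can be sketched in a few lines of code. This is a minimal illustration, not the actual method used in the analysis: the word lists and the tokenising rules are my own assumptions, and a real study would use a proper readability formula rather than average word and sentence length.

```python
import re

# Illustrative word lists (assumptions, not the study's actual lists)
READER_WORDS = {"you", "your"}
GOV_WORDS = {"defra", "government", "us", "we"}

def writing_metrics(title, body):
    """Return simple, comparable metrics for one page of content."""
    words = re.findall(r"[a-zA-Z']+", body.lower())
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    return {
        # Better titles: longer, more descriptive titles score higher
        "title_length": len(title.split()),
        # Shorter content: total word count should fall over time
        "word_count": len(words),
        # Readability proxies: shorter sentences and shorter words
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        # Reader focus: 'you'/'your' versus mentions of the organisation
        "reader_focus": sum(w in READER_WORDS for w in words),
        "gov_focus": sum(w in GOV_WORDS for w in words),
    }

m = writing_metrics("Apply for a fishing licence",
                    "You must apply online. Your form is short.")
print(m["reader_focus"], m["gov_focus"])  # reader-centred page scores 2 vs 0
```

Running the same function over a 2013 page and its 2015 rewrite would give the before-and-after comparison the tests describe.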
Did our writing pass the test?
Yes. The blog has the full details, but all of these tests were passed.
Though the blog is only one case study, I also compared all Defra content from February 2015 with February 2013, and the results were even more impressive.
What next for writing analysis?
This was a success. Sentiment analysis is still an important tool, but measuring less contentious features is a better way to get others to trust an analysis.
Last year I was excited by claims that there’s a way to measure the writing success of past books, until I carried out the test myself. Next I plan to publish my findings on what was wrong and what was glossed over, and to propose a better way.