Spinning Data Yarns

Unreliable narration may be unavoidable, but it doesn't have to hinder your analytics practice.
Storytelling is a hot term in the analytics field right now, and for good reason: narrative forms have an undeniable power to deliver information in a way that people find compelling, prompting them to take action on data.



James Richardson, Senior Director, Global Product Marketing at Qlik
A tale with a beginning, a middle and an end is a form we're all used to, and it's as applicable to data as it is to any other aspect of human experience. However, storytelling is more sophisticated than it might at first seem. Any narrative has a narrator, and that narrator inevitably has a viewpoint, which may make them unreliable*.

In some situations, an unreliable data narrative arises because the narrator has their own agenda and reasons for presenting information with some 'spin'. That could mean telling us the facts they've decided they want us to know, while omitting or downplaying things they'd rather we didn't. This is being wittingly unreliable. In business, this type of narrative is most commonly seen when someone is trying to persuade colleagues, customers or suppliers to adopt a course of action, and so presents arguments backed with selective data. There can be a darker side here too, though: one of the serious issues undermining BI take-up that I first wrote about years ago is managers' desire to dance with the numbers, usually using spreadsheets to put their own <ahem> gloss on data trends.

On the other hand, and far more commonly, an unreliable narrative arises because humans can't fully recall every piece of information needed to replay a wholly accurate account. Human memory is inherently unreliable. This is a well-known phenomenon, captured by what psychologists call the 'Ebbinghaus forgetting curve'. The curve shows that our memories decay over time at a predictable rate, and this happens surprisingly quickly: research in the field suggests that eyewitness memories can begin to degrade within two hours. What happens then is very interesting. As the memories fade, we start to tell different narratives to ourselves, gradually omitting pieces of information we subconsciously consider irrelevant, and even reshaping our memories of what happened into a coherent story.


It might seem to us that the narrative we're telling is exactly the way events occurred, or exactly what the data said when we looked at it, but we're likely omitting important information without intending to. To any audience, these untold stories could be of prime importance. A narrator may not know they're being unreliable through omission; rather, they're just telling their audience as much as they can from their point of view, working within their own frame of reference.

So why is all of this relevant in the world of data and business analytics?

Business decision makers are increasingly provided with BI outputs in the workplace that are intended to inform them how a situation began and will develop, whether that's triggered by the sales figures for the past quarter or the levels of staff absence over the past couple of months. Often the data is narrated as a series of time-series or chronologically ordered visualizations, frequently highly crafted. The question is: can the story be taken at face value? Is the narration reliable?

To ameliorate unreliability and bias, data stories need to be part of the flow of analysis, within the same environment. This is why it's critical that Qlik Sense preserves the context (or viewpoint) of the data that the narrator/author used in building a data story – it acts as a springboard to ask questions, test assumptions and flex the original narrative context.

Unreliable narration is a given. Any business developing its analytics practice and program should consider how it can use the inherent unreliability of data stories as a trigger to drive greater engagement. Data stories need to be unpicked, debated, and their authors/narrators questioned on reliability to reach the best decision outcomes. People might not be trying to trick their colleagues with their narratives, but organizations could still end up with misdirected outcomes if they blindly assume that what they're hearing and seeing is the whole story.

*The term 'unreliable narrator' was coined in 1961 by Wayne C. Booth of the University of Chicago in his seminal work 'The Rhetoric of Fiction'. Although strictly relating to fictional works, it's interesting and worthwhile to consider how we can apply critical thinking about narratives to the emerging field of data storytelling.
