Good Example of Meaningless Statistics

Nielsen published a survey of internet use amongst children aged 2 to 11 (Growing Up, and Growing Fast: Kids 2-11 Spending More Time Online). This is one of those reports that lends credence to the proclamation famously attributed to 19th-century British Prime Minister Benjamin Disraeli:

There are three kinds of lies: lies, damned lies, and statistics.

With nary a reflection on the breakdown of internet use across the age group (i.e. how much time does a 2-year-old spend surfing YouTube, compared to an 11-year-old?), nor a remark on the methodology (did they depend on parents for these reports, or did they gather the information firsthand?), Nielsen paints a broad picture of evolving information consumption amongst 16 million of our most impressionable netizens.

As anyone who has ever been even remotely involved in the early childhood field recognizes, the age range of 2 to 11 spans an incredible developmental period that simply cannot be lumped into a single survey. Comparing an 11-year-old to a 2-year-old is, in a nutshell, silly.

It’s the rough equivalent of comparing an 80-year-old to a 20-year-old. There are tremendous cultural, cognitive, educational, and health differences that collectively represent too much of a qualitative disparity for the represented data to be at all meaningful.

Plus, how was this survey even conducted on toddlers? How was the information collected? What is the margin of error?

Anyone reading this report should be very cautious about applying its conclusions in any pragmatic fashion whatsoever.

2 thoughts on “Good Example of Meaningless Statistics”

  1. Hard to know what is true when the participants can’t speak for themselves. While better age divisions may be MORE representative of the sample, there is no way to get data that won’t be dismissed for that age group, so I can understand why they lumped them together.

  2. I disagree with Bree. There are ways to get valid data for the various youth age groups. It’s just a matter of deciding to capture the data.

    I see several other issues with the article Andrew referenced. A link to the full report would have been helpful.
