40 or so studies about human perception in 25-30 minutes. Maybe 35.
May 29, 2016 10:27 AM

Kennedy Elliott, graphics editor at The Washington Post, presents a broad, graphics-filled overview of how humans perceive data graphics. [Links to Medium, not WaPo.]
posted by Room 641-A (9 comments total) 64 users marked this as a favorite
 
The conference website has videos for this (and the other OpenVis 2016 talks), in a viewer with slide thumbnails and keywords.
posted by James Scott-Brown at 10:43 AM on May 29, 2016 [4 favorites]


Thanks for adding that!
posted by Room 641-A at 10:46 AM on May 29, 2016


This is really interesting, and I'm glad it was posted.

But, if invited to pick nits, it seems like one problem with this and nearly all studies related to the visual perception of quantitative data is that they fail to distinguish between different audiences. As someone who spends 2% of my plot-making time creating things for the general public and 98% of my plot-making time creating things for science PhDs, this makes a really big difference in deciding which lessons are important. That a handful of freshman psych majors inevitably associate altitude with the y-axis is interesting in the abstract, but I have a hard time believing it's actually important except in very specific contexts. Also,
Two of these studies individually reject Tufte’s popular “high data-to-ink ratio” philosophy. . . The final two experiments by Spence (19, 20) deal with Steven’s law, which again (very simplistically) says that an object’s size appears larger when presented with larger objects, or smaller when presented with smaller objects. Spence found that contrary to popular physics, this distortion does not happen when comparing two shapes of the same dimensionality. Only when you vary the dimensionality among shapes does this distortion occur.
Huh? First of all, what kind of psychopath makes plots comparing objects of different dimensionality? Is there a single example of this in the real world that isn't part of a contrived experiment? This seems like crazy town.

Second, I'm pretty sure putting objects of different dimensionality into a plot isn't exactly Tufte canon, given the extent to which he rails against mixing representative areas and visual volumes in his first book. Don't get me wrong - I'm mostly an opponent of the data-to-ink ratio as anything more than a polemic to generate discussion. Adding hats to error bars is essential if you want someone across the room to actually see the extent of your error bars. Adding tick marks on the right axis is really handy when you're trying to read numbers off a plot. But, this particular argument is pretty unsatisfying. People say they like 3D graphics, and also your plot should only represent values with the same number of dimensions? I'm not surprised.
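For what it's worth, here's a minimal matplotlib sketch of those two conveniences; the data and styling are my own invention, purely to illustrate:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up measurements with uncertainties, purely for illustration.
x = np.arange(5)
y = np.array([2.0, 3.1, 2.7, 4.2, 3.6])
err = np.array([0.3, 0.5, 0.2, 0.6, 0.4])

fig, ax = plt.subplots()
# capsize draws the "hats" (crossbars) that make the error-bar
# extents visible from across the room.
ax.errorbar(x, y, yerr=err, fmt="o", capsize=4)
# Mirror the tick marks onto the right-hand spine so values can be
# read off from either side of the plot.
ax.tick_params(right=True)
plt.show()
```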

Third and most petty, "contrary to popular physics?" The most sympathetic reading I can conjure still doesn't make this into a meaningful statement. It's so irrelevant that it doesn't really hurt the argument, but what the hell was it supposed to mean?
posted by eotvos at 12:28 PM on May 29, 2016 [5 favorites]


After looking up the "same dimensionality" papers (happy to help if you hit a paywall), the actual study was focused on the number of dimensions that were allowed to vary in order to represent data. If you make a 3D object where only one dimension conveys data, people understand the result more quickly than with a 1D representation. (Where by "people," I mean 20 U. Toronto undergrads.)
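To make the contrast concrete, here's a rough matplotlib sketch with made-up numbers (not the study's actual stimuli): an ordinary bar chart next to 3D glyphs whose width and depth are held fixed, so height is still the only dimension carrying data:

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the "3d" projection

values = np.array([3, 7, 5, 9])
x = np.arange(len(values))

fig = plt.figure(figsize=(8, 4))

# 1D encoding: plain bars, where length alone carries the data.
ax1 = fig.add_subplot(121)
ax1.bar(x, values)
ax1.set_title("1D encoding (bar length)")

# 3D glyphs where width (dx) and depth (dy) are fixed constants,
# so height (dz) is still the only dimension conveying data.
ax2 = fig.add_subplot(122, projection="3d")
ax2.bar3d(x, np.zeros(len(values)), np.zeros(len(values)),
          dx=0.6, dy=0.6, dz=values)
ax2.set_title("3D glyph, one data dimension (height)")

plt.tight_layout()
plt.show()
```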

I withdraw my criticism of the original study. This is pretty neat, and quite anti-Tufte. But, the summary presented in the original article is thoroughly misleading. I can't tell whether it was entirely misunderstood by the author or just very badly described, but the result is embarrassing.
posted by eotvos at 12:50 PM on May 29, 2016 [2 favorites]


Maybe 35.

I see what you did there.
posted by spacewrench at 5:31 PM on May 29, 2016


In contrast to eotvos, I spend 98% of my plot-making time creating things for undergraduates (on behalf of higher education publishers). And really, the focus is usually on making things easily understandable for the C+ and below students. Skimming through the various studies, this is really great. Some of the conclusions are 'obvious' to me (either through years of practice, or just because these are well-established 'rules' in the industry). Some were surprising, though. For example, in the section on pictographs and shapes vs. single bars, the graph styles labelled "don't do this" are actually the ones preferred by publishers (I'm blaming them, not me!).
posted by Kabanos at 9:24 AM on May 30, 2016 [1 favorite]


But, if invited to pick nits, it seems like one problem with this and nearly all studies related to the visual perception of quantitative data is that they fail to distinguish between different audiences.

I noticed the use of Mechanical Turk mentioned here and there throughout the piece, which would be far from a "general public" study, skewing more toward the tech-nerd side of the equation, a group that generally tends to prefer complexity over clarity (based on my experience preparing graphics for various reports at a tech firm). Most average people have probably never even heard of MT, let alone used it.

Oh, and, chord diagrams need to die a hideous, painful death.
posted by Thorzdad at 9:27 AM on May 30, 2016


Also, the bits about interactive data visualizations were interesting. I can't help but laugh and agree at what our digitally-trained minds now consider broken and unusable:
Originally the researchers wanted to also include a 1-second delay option [in interactive latency] but in pilot studies, users found this unusable.
I look forward to even more research on the effectiveness of interactives; a lot is still guesswork right now. Another facet that begs for more research is how data visualizations optimized for universal accessibility (which is increasingly a requirement) intersect with conventional best practices. A lot of powerful infographic tools, like color-coding, have to be modified, if not discarded outright.
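For instance, here's a minimal matplotlib sketch of one common modification (the toy data and pattern choices are my own): a colorblind-safe Okabe-Ito palette paired with hatch patterns, so color is never the only channel separating categories:

```python
import matplotlib.pyplot as plt

categories = ["A", "B", "C"]
values = [4, 7, 5]
# Okabe-Ito colorblind-safe hues, each paired with a distinct hatch so
# the categories stay distinguishable in grayscale or to color-blind readers.
colors = ["#0072B2", "#E69F00", "#009E73"]
hatches = ["//", "..", "xx"]

fig, ax = plt.subplots()
bars = ax.bar(categories, values, color=colors, edgecolor="black")
for bar, hatch in zip(bars, hatches):
    bar.set_hatch(hatch)
plt.show()
```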
posted by Kabanos at 9:39 AM on May 30, 2016


The references to Stevens' law are also somewhat confused/over-simplified: she says that "We know that from Steven's law, when an object is seen in context of other larger objects, it appears larger itself", and "The final two experiments by Spence (19, 20) deal with Steven's law, which again (very simplistically) says that an object's size appears larger when presented with larger objects, or smaller when presented with smaller objects".

But what Stevens' law actually says is that the relationship between the magnitude of a physical stimulus and its perceived intensity is given by a power law: perceived intensity ≈ k × (stimulus magnitude)^β. The value of the exponent β depends on the stimulus, and can be measured empirically (~1 for length, 0.8 for area, 0.6 for volume).

This perceptual non-linearity affects the perception of the relative sizes of objects: if we have two objects, scaled to represent some quantities, the perceived proportion of the total represented by one is a non-linear function of the true proportion. Specifically, for an exponent < 1, proportions of less than a half are over-estimated, and proportions of over a half under-estimated. The more dimensions are used, the further the exponent falls below 1, and the larger the effect of this distortion.
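As a quick numeric check of that distortion, here is a sketch assuming the standard Stevens formulation (perceived magnitude = stimulus^β) with the empirical exponents above; the specific stimulus values are just an example:

```python
# Perceived share of the total represented by s1, under a Stevens-style
# power law: perceived magnitude = stimulus ** beta.
def perceived_proportion(s1, s2, beta):
    return s1**beta / (s1**beta + s2**beta)

# s1 is truly one quarter of the total (s1 = 1, s2 = 3).
for label, beta in [("length", 1.0), ("area", 0.8), ("volume", 0.6)]:
    p = perceived_proportion(1, 3, beta)
    print(f"{label:>6} (beta = {beta}): perceived {p:.3f} vs. true 0.250")
# As beta falls below 1 (more dimensions), the quarter-share is
# increasingly over-estimated: 0.250, 0.293, 0.341.
```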

The point of the paper eotvos links to is that the exponent depends on the number of dimensions encoding data, rather than the total number of dimensions of the glyph.

However, the way it was introduced risks confusion with the Ebbinghaus illusion, in which an object does actually appear larger or smaller depending on its context, rather than appearing larger or smaller relative to its context. The Ebbinghaus illusion also acts in the opposite direction: the circle surrounded by larger circles appears smaller.
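For anyone who wants to see it, here's a minimal matplotlib sketch of the classic stimulus (the layout parameters are my own guesses): both central circles have the same radius, but the one ringed by large circles reads as smaller:

```python
import numpy as np
import matplotlib.pyplot as plt

def ebbinghaus(ax, cx, cy, r_ring, dist, n):
    """Central circle of fixed radius 0.5, ringed by n circles of radius r_ring."""
    ax.add_patch(plt.Circle((cx, cy), 0.5, color="tab:orange"))
    for theta in np.linspace(0, 2 * np.pi, n, endpoint=False):
        ax.add_patch(plt.Circle((cx + dist * np.cos(theta),
                                 cy + dist * np.sin(theta)),
                                r_ring, color="tab:gray"))

fig, ax = plt.subplots(figsize=(8, 4))
ebbinghaus(ax, 2.5, 2.5, r_ring=1.0, dist=2.2, n=6)   # ringed by larger circles
ebbinghaus(ax, 8.5, 2.5, r_ring=0.25, dist=1.0, n=8)  # ringed by smaller circles
ax.set_xlim(-1, 11)
ax.set_ylim(-1, 6)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```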
posted by James Scott-Brown at 3:57 AM on May 31, 2016 [1 favorite]




This thread has been archived and is closed to new comments