Graphs Maps Trees
January 13, 2006 9:54 AM Subscribe
Graphs, Maps, Trees. The Valve is hosting a literary event for professor Franco Moretti's new book, Graphs, Maps, Trees. Moretti aims to reinvigorate literary studies by constructing abstract models based upon quantitative history, geography, and evolutionary theory. PDFs of the original articles: Graphs, Maps, Trees. A review at n+1 is here.
Great post. Can we all meet back here in a couple of days to discuss it?
posted by gesamtkunstwerk at 10:33 AM on January 13, 2006
without having read the article, here's my kneejerk:
yes, because graphs really grab the imagination and inspire people.
posted by shmegegge at 11:24 AM on January 13, 2006
Fascinating, especially the segment on clues and detective stories. Thanks.
posted by Falconetti at 11:46 AM on January 13, 2006
Thanks for posting this - I think his work is fascinating and look forward to reading all this.
posted by josh at 11:54 AM on January 13, 2006
Another "Thanks - I'll have to read this later"! Looks fascinating, though.
posted by carter at 12:00 PM on January 13, 2006
I got to see Moretti speak a few years ago--a brilliant guy.
I'm looking forward to spending time with this.
posted by bardic at 12:00 PM on January 13, 2006
I paid most attention to "Graphs", which argued for an emphasis on the study of texts as a population, rather than focussing on individual readings. This immediately suggests a more "scientific" approach, and the tribute that is the source of the links above - http://www.thevalve.org/go - duly gives the author the title of "scientist".
Now in what follows I suspect much can be explained by the difference in emphasis between the social sciences (history, geography, economics) and the natural sciences (physics, chemistry, biology). As a graduate of the latter, I find the former somewhat disturbing.
"Disturbing" rather than "wrong" largely because while it does, indeed, feel wrong, it is very difficult to explain exactly why (I don't manage to do so here).
When I read the "Graphs" essay I had a couple of initial responses. First, I was interested. The aim is for a violent cultural change in an academic discipline (or the creation of a new one). And the argument is clear and moderately convincing.
Second, I was amused and concerned. The "science" that the article illustrated seemed very immature. Now any new discipline will suffer from this, but my worry focussed (ironically) more on this individual text, written by a "scientist".
You can see this most clearly in the way information is presented. The graphs are poorly labelled and crudely presented: for example, the first illustration shows several curves which, it is claimed, are similar apart from a translation along the x axis, yet they are plotted without normalisation in that direction, and although they appear to show exponential growth they are given a linear y scale (a log scale would turn exponential growth into a straight line and make the claimed similarity easy to check by eye).
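To make the normalisation point concrete, here's a quick numpy sketch with invented numbers (not Moretti's data): two exponential curves differing only by a shift along x look quite different on linear axes, but collapse onto a single line once you take logs and apply the shift.

```python
import numpy as np

# Two exponential growth curves, identical apart from a translation
# along the x axis (invented numbers, purely for illustration).
x = np.arange(20)
curve_a = np.exp(0.3 * x)        # growth starting at x = 0
curve_b = np.exp(0.3 * (x - 5))  # same growth, delayed by 5 units

# On a linear y scale these look very different.  Taking logs turns
# exponential growth into a straight line, and applying the x shift
# (equivalently, adding 0.3 * 5 in log space) makes them coincide.
log_a = np.log(curve_a)
log_b = np.log(curve_b)
aligned_b = log_b + 0.3 * 5

print(np.allclose(log_a, aligned_b))  # prints True
```

Presented that way, the claim "similar apart from a translation" becomes something you can actually check rather than just assert.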
Once this issue is raised in the illustrations it is apparent throughout the text. I did not notice any reference to statistical error; in a study of a population of objects that is a remarkable omission. This is not just a problem of procedure - there are interesting questions that cannot be asked without the appropriate framework. For example, in the essay on maps, I wondered whether there exist two-dimensional non-parametric statistics (ie based only on relative position) that might allow analysis of maps of "imaginary worlds".
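I don't know what a proper answer would look like, but one existing spatial statistic of roughly this flavour is the Clark-Evans nearest-neighbour index, which uses only the positions of points (plus the area of the region they sit in). A sketch, with invented points:

```python
import numpy as np

# Clark-Evans nearest-neighbour index: R < 1 suggests clustering,
# R near 1 randomness, R > 1 regular spacing.  It depends only on
# point positions plus the area of the region studied.
def clark_evans(points, area):
    points = np.asarray(points, dtype=float)
    n = len(points)
    # all pairwise distances; mask the zero self-distances
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()       # mean nearest-neighbour distance
    expected = 0.5 / np.sqrt(n / area)    # expectation under random placement
    return observed / expected

# Invented example: 200 points scattered at random over a 10 x 10 square
rng = np.random.default_rng(0)
random_points = rng.uniform(0, 10, size=(200, 2))
r = clark_evans(random_points, area=100.0)
print(round(r, 2))  # close to 1 for randomly placed points
```

Something like that, applied to village positions on a map of an "imaginary world", would at least let you say whether the settlements are clustered or spread out relative to chance.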
Perhaps the most worrying "analysis" was the emphasis on "cycles". This is a notoriously difficult subject - detecting periodicity in existing data. Several pages of wild speculation make no reference to even the most basic analytic tools (eg power spectra).
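For what it's worth, the most basic version of such a tool is only a few lines. A sketch on a synthetic signal (invented, not Moretti's data): build a noisy series with a known 32-step cycle and recover the period from the peak of its power spectrum.

```python
import numpy as np

# A noisy series with a genuine 32-step cycle (synthetic data).
n = 256
t = np.arange(n)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * t / 32) + 0.3 * rng.normal(size=n)

# Power spectrum via the FFT; the dominant frequency gives the period.
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n)                 # in cycles per step
peak = freqs[np.argmax(power[1:]) + 1]     # skip the zero-frequency term

print(round(1 / peak))  # prints 32 - the cycle length, recovered
```

Whether the literary data would survive this treatment is another question - but that is exactly the question the essay never asks.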
But, as I have already asked, perhaps this is normal for the "soft" sciences? I have no idea. What disturbs me is to what extent the differences above are qualitative (the use of a different professional vocabulary) rather than quantitative (allowing a different level of confidence in the conclusions).
Two related topics make this particularly clear:
1 - Inferring models from existing data. If you study some data, and draw some conclusion, then I, at least, am uncertain how you test whether that conclusion is correct.
2 - Little emphasis on testable hypotheses.
The problem with (1) is that the complete set of "what might be interesting" in a set of data is impossible to define (I believe - although it seems like it might be possible). We have standard tools that can say whether a particular peak in a power spectrum is significant, but does that significance include the fact that the data did not show any distinct change on the date when my favourite pet goat died? Probably not. So what happens when I look at some data and see that correlation? In other words - we don't approach data with a list of hypotheses pre-determined. We are not testing anything.
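The goat problem can be simulated. The sketch below (invented noise, no real data) estimates a single-bin 95% significance threshold for a power spectrum, then checks how often pure noise pushes *some* bin above it when every bin is scanned - which is exactly what we do when we trawl data for anything interesting.

```python
import numpy as np

# How often does pure noise produce a "significant" peak somewhere,
# if we scan every bin of the spectrum?  (Simulation only - no real data.)
rng = np.random.default_rng(2)
n, trials = 256, 1000

def noise_spectra():
    # power spectra of pure white noise, zero-frequency bin dropped
    series = rng.normal(size=(trials, n))
    return np.abs(np.fft.rfft(series, axis=1))[:, 1:] ** 2

# Single-bin 95% threshold, estimated from noise-only spectra.
threshold = np.quantile(noise_spectra(), 0.95)

# Fraction of fresh noise spectra whose *largest* bin clears it.
false_hits = (noise_spectra().max(axis=1) > threshold).mean()
print(false_hits > 0.5)  # True: with ~128 bins scanned, a "find" is near-certain
```

A threshold that is honest for one pre-chosen bin is wildly dishonest when applied to the best-looking of a hundred.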
And so (1) is tied to (2). With all the problems that entails for evolution, astronomy, etc.
This is interesting and incomplete and probably wrong in many places, but I've run out of time (breakfast over - time for work!). Perhaps more later.
[self link - from post on own blog]
posted by andrew cooke at 6:29 AM on January 15, 2006
Interesting, Andrew. Thanks! I was also worried that the statistical analysis had too much handwaving contained within, but I don't have the vocabulary or the training to really comment. Incidentally, Moretti himself is reading comments at the Valve and is responding. You might want to directly present these criticisms to him and see what he says.
posted by painquale at 2:12 PM on January 15, 2006
hell no! they were just notes scribbled down before breakfast.... :o)
posted by andrew cooke at 4:57 AM on January 16, 2006
posted by jokeefe at 10:12 AM on January 13, 2006