The Croquet Project - Web 2.0
October 1, 2006 2:37 AM
The Croquet Project is a staggeringly ambitious attempt to create 'an operating system for the post-browser Internet' - a multi-platform, open-source, extensible, decentralised, peer-to-peer, 3D virtual reality metaverse [2,3], designed for 'highly scalable deep collaboration', led by Alan Kay.
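For context, Croquet's peer-to-peer collaboration rests on replicated computation: every participant runs the same deterministic world and executes the same timestamped messages, so no central server has to own the "true" state. A toy sketch of that idea follows - it is not Croquet's actual TeaTime/Smalltalk implementation, and the names and message format are invented for illustration:

```python
# Toy sketch of replicated computation, the idea behind Croquet's peer-to-peer
# collaboration (NOT Croquet's actual TeaTime/Smalltalk code; names and message
# format are invented for illustration).
#
# Every replica executes the same timestamped messages in the same order, so
# every participant ends up with an identical copy of the shared world.
import heapq

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {}   # the shared "world"
        self.queue = []   # pending (timestamp, tiebreak, target, value) messages

    def receive(self, msg):
        heapq.heappush(self.queue, msg)

    def advance(self, now):
        # Execute everything due up to 'now', strictly in timestamp order.
        while self.queue and self.queue[0][0] <= now:
            _ts, _seq, target, value = heapq.heappop(self.queue)
            self.state[target] = value

# A router simply forwards each message, with one agreed timestamp, to all peers.
replicas = [Replica("alice"), Replica("bob")]
messages = [(1, 0, "cube.colour", "red"), (2, 1, "cube.x", 3)]
for msg in messages:
    for r in replicas:
        r.receive(msg)

for r in replicas:
    r.advance(now=10)
    print(r.name, r.state)   # both replicas print identical state
```

Run it and both replicas end up with the same state, which is the property the decentralised, serverless design leans on.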
Hmmm. Microsoft is interested. That means it won't work, but we'll have to use it.
posted by dhartung at 3:10 AM on October 1, 2006
It would be rather depressing if the person who finally moves us past the windowing metaphor for GUIs developed 30 years ago at Xerox PARC is... Alan Kay. A new set of interface metaphors has been needed for at least a decade now; the fact that we're still stuck with windows, desktops, files and folders is a major example of how the MS monopoly's dependence on backwards compatibility has held back the computer industry, yet is very difficult for Joe User to understand. Nowadays, everyone just thinks that windows and directories are how computers work; it's hard to even imagine a world without them. Hopefully projects like this, along with the new kinds of interfaces and input devices the next generation of gaming consoles are providing, will help people to see what other metaphors for human-computer interaction are possible. Fortunately, Kay is brilliant. Even though Smalltalk itself didn't get much traction, its influence on language design was enormous; hopefully, Croquet will be similarly influential.
I also liked the subtle Snow Crash reference in their screenshot tour.
posted by gsteff at 3:39 AM on October 1, 2006
Christ...
the fact that we're still stuck with windows, desktops, files and folders is a major example of how the MS monopoly's dependence on backwards compatibility has held back the computer industry, yet is very difficult for Joe User to understand.
posted by gsteff at 3:41 AM on October 1, 2006
Call me old fashioned, but I don't need another way to interact with my computer, much like I don't need another way to read a book.
I immediately thought of Second Life when I saw some of the Croquet graphics, a big turn off for me. Virtually moving around in 3D spaces makes me feel restless, lost and out of control.
And I don't like the name!
posted by beno at 7:52 AM on October 1, 2006
If only they'd stopped at "multi-platform, open-source, extensible, decentralised, peer-to-peer." An operating system along these lines seems to me like something transcendently worth pursuing, if not an absolute precondition if any of the more ambitious ubicomp scenarios under consideration are to have any hope of working smoothly.
It's the "3D virtual reality metaverse" part that gives me a serious case of the icks. It's not so much that I don't think the processor cycles will be there to underwrite that kind of computationally-intensive interface - although I do have some questions about that, if Croquet were to be rolled out to any significant extent - as that it offends my sense of parsimony and elegance.
I'm aware that this is something of a minority position, and that many people I respect are taking baby steps toward similar visions, whether through Google Earth + SketchUp, Second Life hacks, or what have you. But I personally find 3D/VR interfaces a little embarrassing, and more than a little adolescent in their underlying motivations.
What scares me about Croquet is this: I can't seem to remember who said it, but some dude once said that "the best way to predict the future is to invent it." Given Alan's stature and the nature of the institutions he's enlisted in support of this vision, I'd say there's a nonzero chance that Croquet or something very similar is how we're going to bridge the virtual and the actual in the years to come...no matter how grotesque or unseemly we find it.
posted by adamgreenfield at 7:59 AM on October 1, 2006
It's the "3D virtual reality metaverse" part that gives me a serious case of the icks. It's not so much that I don't think the processor cycles will be there to underwrite that kind of computationally-intensive interface - although I do have some questions about that, if Croquet were to be rolled out to any significant extent - as that it offends my sense of parsimony and elegance.
I'm aware that this is something of a minority position, and that many people I respect are taking baby steps toward similar visions, whether through Google Earth + SketchUp, Second Life hacks, or what have you. But I personally find 3D/VR interfaces a little embarrassing, and more than a little adolescent in their underlying motivations.
What scares me about Croquet is this: I can't seem to remember who said it, but some dude once said that "the best way to predict the future is to invent it." Given Alan's stature and the nature of the institutions he's enlisted in support of this vision, I'd say there's a nonzero chance that Croquet or something very similar is how we're going to bridge the virtual and the actual in the years to come...no matter how grotesque or unseemly we find it.
posted by adamgreenfield at 7:59 AM on October 1, 2006
Holy hell, beno. That was textbook late adopter. Did that just come out naturally? It sounded too perfect.
posted by quite unimportant at 8:03 AM on October 1, 2006
If someone can invent something better than the standard interface I'd love to see it. But "better" can't just mean "flashier": it has to mean some way of getting to the information faster and easier, or just supplying more of it in a way I can use.
At present, a computer interface is still limited by the screen: one or two square feet of two-dimensional surface. Every attempt I've seen to make that into a three-dimensional metaphor has failed: it takes too long to navigate to anything, and you lose everything that's not visible on the screen in front of you.
The desktop metaphor has a huge advantage in that you're mapping the real two-dimensional surface of the screen to the virtual two-dimensional surface of the desktop. I suspect nothing much is going to improve on it until we get some kind of 3-D display, or virtual reality goggles that don't give you motion sickness, eyestrain and a headache.
posted by TheophileEscargot at 8:07 AM on October 1, 2006
I was fascinated by the presentation - much more by Kay's snarking at the inefficiencies of modern computing than the environment itself. If I were twenty years younger I might even have found it inspiring (I mean, it is inspiring, but it's probably a bit late for me to be inspired).
posted by Grangousier at 8:13 AM on October 1, 2006
Just want to point out, TheophileEscargot, that a computer interface is only "limited by the screen" if you think of a computer as something that lives in a box on your desk.
I don't think that kind of scenario will disappear completely, but given where Moore's Law appears to be taking us, I'm willing to bet that it'll be a low-frequency use case before too terribly long. We have, instead, to get used to thinking about information processing as something that will be invested in shoes and coffee mugs and bathroom mirrors - and it's clear that in all of these contexts, the standard GUI is inapplicable.
We do need new interface vocabularies, and badly. I'm just not convinced that they need to look anything like environments. Robert Scoble may think that such is the future - but then, I've never found him to be particularly insightful, and I don't remember ever having substantively agreed with any viewpoint advanced by him. I certainly don't take him seriously as a guide to next-generation ubiquitous interfaces.
posted by adamgreenfield at 8:25 AM on October 1, 2006
Adamgreenfield: the dude who said that was, in fact, Alan Kay.
As for me, I'm with Theophile on this. If someone can invent a better 2D desktop, I'd be all for it, but first-person 3D still seems rather gratuitous to me. I have the same problem with games - I love 2D sidescrollers and Zelda-style adventure games and such because all the information I need is right there. The screen itself is a 2D surface, so treating it like some kind of window onto a 3D world is only as immersive as peering through a window would be: there might be important things behind you, above you, or on the other side of something blocking your vision, but you can't see them because you can only see out of a tiny rectangular porthole.
Two-dimensional graphics make no pretenses; you're seeing a representation of something, and nobody's trying to convince you that you're "really there". Instead, you're working with things through an explicitly visible avatar (whether mouse pointer or Megaman), and I find it's easier to map controls onto something I can see - easier, ironically, to "become" that thing and immerse myself in its world - than to control a glorified camera sending pictures to a tiny rectangle.
posted by wanderingmind at 8:46 AM on October 1, 2006
Maybe your sarcasm detector needs a tune-up, wanderingmind? ; . )
posted by adamgreenfield at 8:49 AM on October 1, 2006
Call me old fashioned, but I don't need another way to interact with my computer, much like I don't need another way to read a book.
Command line 4 lyf! Or is your arbitrary line in the UI sand switches and LEDs?
posted by revgeorge at 9:33 AM on October 1, 2006
As a consumer:
I want augmented reality glasses now. Right now. There's no reason I shouldn't have them, I'm willing to pay... oh... $7500 for them.
With Google maps and all the tagged crap out there and wireless internet and everything else, I want to just look at my desk (my actual, steel work desk) and see virtual shit appear and type on it then file it, I should see driving directions in front of me and all manner of awesome virtual floating shit all around me at all times.
I don't understand why this isn't possible.
I'm sick of lugging around my damned laptop everywhere.
You! Computer people! Smaller, cheaper, more accessible, on-demand, pervasive! Chop chop!
Thank you, that is all.
posted by Baby_Balrog at 10:48 AM on October 1, 2006
So, yeah, I'm basically saying that I'm sick of having to look at a monitor or my laptop screen or something to get the information I need.
I don't care if it's 3-d or whatever, I still have to look at my damn computer to get it.
I want to be jacked in. That's the only future I'm interested in. "Exploring my computer" like some kind of N-64 game is not really going to make my life any easier.
posted by Baby_Balrog at 10:50 AM on October 1, 2006
...but given where Moore's Law appears to be taking us...
Moore's Law isn't taking us anywhere and never has been.
posted by vacapinta at 11:52 AM on October 1, 2006
Lemme go check the date, I think I'm in the wrong decade.
posted by TwelveTwo at 11:58 AM on October 1, 2006
Looks like William Gibson is even more of an unwitting technological visionary than he's already been given credit for.
posted by NeonSurge at 12:29 PM on October 1, 2006
My point is: I don't see the problem this project is trying to solve.
Really new ways of interacting with information will probably come when we move beyond the keyboard/screen combo (direct brain interfaces and such). Also, it will likely not come from a committee of visionaries setting out to invent the future.
posted by beno at 1:09 PM on October 1, 2006
This kind of leap-of-faith innovation almost never works. More often than not, overnight paradigm shifts cause disaster due to an extreme lack of feedback into the process. In this case in particular it seems these guys are going off and working on 3D videogame stuff without realizing that text, for the most part, is still the "killer app" for home computing.
posted by nixerman at 1:38 PM on October 1, 2006
What was the name of that 3d apple interface experiment in 1995 or so?
posted by craniac at 2:22 PM on October 1, 2006
Project X
Funny you should mention that, I worked at Apple at the time... (although not on this project)
posted by beno at 2:34 PM on October 1, 2006
This is very cool. I'm going to have to give it a shot when I have more time.
I think a two dimensional implementation with the same ideals would probably be a better starting point for this sort of experiment. It would be easier to implement and I think it would probably still provide a lot of insight into the challenges and benefits associated with collaborative environments like this. It would probably run better too. Still I can see why they went with 3D. Who could resist trying to implement cyberspace in its full glory?
posted by benign at 2:57 PM on October 1, 2006
gsteff: A new set of interface metaphors has been needed for at least a decade now; the fact that we're still stuck with windows, desktops, files and folders is a major example of how the MS monopoly's dependence on backwards compatibility has held back the computer industry, yet is very difficult for Joe User to understand.
Well, I don't know that it's just due to MS's monopoly. Technology alone doesn't change human processes and workflows. You have to change both the technology and the culture. (Which is one of the reasons why we are not using Xerox systems.) This creates quite a bit of cultural inertia. The more successful socio-technical processes actually limit innovation due to their success.
Having said that, I think the desktop metaphor is interpreted very weakly, in much the same way that a "telephone" is no longer interpreted as a speaker, microphone and dialing device connected to a wire. I don't think that many people consider a GUI window as directly analogous to a house window, a GUI file directly analogous to an older paper file, or a folder directly analogous to an older paper folder. The concept of a "desktop" has become much more abstract and removed from a literal physical metaphor as well.
Nowadays, everyone just thinks that windows and directories are how computers work; it's hard to even imagine a world without them.
Windows and directories are computer interfaces designed around human perceptual limits and certain kinds of human productivity. In any environment you need a way to focus attention onto a particular task. Windows provide a way to shuffle tasks to the foreground or background as needed.
I also don't believe that folders/directories are going away in the near future, because they leverage one of the fundamental ways that human beings make sense of the universe. A folder/directory also provides a way to manipulate a collection of objects as a group rather than as individual items. (Tag and search fanboys make some basic assumptions about directories/folders - that they are static and that they are mutually exclusive - which have not been true since the 1970s.)
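As a concrete, purely illustrative example of those two points (hypothetical file names, POSIX-style symlinks assumed): one operation on a folder manipulates everything inside it as a group, and a symlink lets the same file show up under two folders at once.

```python
# Hypothetical illustration: directories are neither static nor mutually
# exclusive, and they support group operations.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "projects", "croquet"))
os.makedirs(os.path.join(root, "reading"))

notes = os.path.join(root, "projects", "croquet", "notes.txt")
with open(notes, "w") as f:
    f.write("replicated computation\n")

# Not mutually exclusive: the same file is now reachable from a second folder.
os.symlink(notes, os.path.join(root, "reading", "notes.txt"))

# Group manipulation: one rename of the folder "moves" everything inside it.
shutil.move(os.path.join(root, "projects", "croquet"),
            os.path.join(root, "projects", "croquet-archive"))
print(os.listdir(os.path.join(root, "projects", "croquet-archive")))  # ['notes.txt']
```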
adamgreenfield: We have, instead, to get used to thinking about information processing as something that will be invested in shoes and coffee mugs and bathroom mirrors - and it's clear that in all of these contexts, the standard GUI is inapplicable.
Well, I would argue that we already are in that situation. The microprocessor in most cars is more sophisticated than the earliest computers. We have calculators, clocks, thermostats, automatic teller machines, POS systems, digital cameras and camcorders. And in those cases, the awesome (compared to 1970) processing power is harnessed in a way that those things still function like cars, calculators, clocks, thermostats, ATMs, cash registers, digital cameras and camcorders.
The standard GUI was not designed for any of those things. The standard GUI was designed for organizing, manipulating, reading and composing a variety of documents. To this end a GUI that functions rather loosely like an office desk is probably better than a GUI that functions like a shoe or coffee cup.
New interface vocabularies will develop when the underlying structure of the task changes. We have GUIs based on radically different tasks. So World of Warcraft, Solitaire, and The Sims2 have very different interface vocabularies compared to WinXP Explorer, OS X Finder, Gnome Nautilus, or KDE Konqueror.
posted by KirkJobSluder at 3:14 PM on October 1, 2006
Yeah, after playing around with second life for a while, I see this as a real non-starter. Second Life got real boring, real quick for me.
As many people have mentioned, the problems now aren't what you're seeing on the computer, it's what you're using to interact with what you're seeing. I'm so sick of keyboards I could puke, but it's essentially the same tech as it was 30 years ago.
I've even played around with the idea of buying a Twiddler just to give myself a new and novel type of repetitive stress disorder. I'm bored with the ones I have.
posted by lumpenprole at 3:21 PM on October 1, 2006
A new set of interface metaphors has been needed for at least a decade now;
Well, get to work man! I have yet to see one that worked better for me. I remember like 8 years ago downloading The Brain and setting it up on my work computer. It was the first type of 'tag cloud' interface I had ever seen and I was convinced it was going to revolutionize the way we thought.
Of course, after a few days I turned it off and went back to my prosaic nested folders. It was just easier. And easy is my primary desire from an interface.
posted by lumpenprole at 3:27 PM on October 1, 2006
vacapinta, do you seriously mean to argue that a trend of (a) falling prices per increment of processing speed and memory and (b) increased processing power or storage in a given volume does not weigh very heavily in determining the sorts of systems that it becomes economical to propose?
beno, "direct brain interfaces" is (and will hopefully remain) science fiction. By contrast, as KirkJobSluder implies, we are already implicated in a wide variety of everyday computational tasks for which the GUI is poorly suited. My concern is not so much that "we need new interfaces" as "we need interfaces that work consistently in the different contexts presented by ubiquitous/pervasive computing." In other words, if you bring your hands together to zoom out on your dashboard map interface, it sure would be nice if you didn't have to move your hands apart to accomplish the same thing on your wallscreen display at home.
posted by adamgreenfield at 4:05 PM on October 1, 2006
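A toy sketch of that "same gesture, same meaning everywhere" idea; the gesture names, surfaces, and zoom behaviour below are invented for illustration, not any real API.

```python
# One shared gesture vocabulary, many display contexts (all names hypothetical).
GESTURE_SEMANTICS = {
    "pinch_in":  "zoom_out",
    "pinch_out": "zoom_in",
}

class Surface:
    """Any display context: dashboard map, wallscreen, handheld..."""
    def __init__(self, name):
        self.name = name
        self.zoom = 1.0

    def handle(self, gesture):
        action = GESTURE_SEMANTICS[gesture]   # the meaning never changes per device
        if action == "zoom_out":
            self.zoom /= 2
        elif action == "zoom_in":
            self.zoom *= 2
        print(f"{self.name}: {gesture} -> {action} (zoom={self.zoom})")

for surface in (Surface("dashboard map"), Surface("wallscreen")):
    surface.handle("pinch_in")   # same gesture, same result, on both surfaces
```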
beno, "direct brain interfaces" is (and will hopefully remain) science fiction. By contrast, as KirkJobSluder implies, we are already implicated in a wide variety of everyday computational tasks for which the GUI is poorly suited. My concern is not so much that "we need new interfaces" as "we need interfaces that work consistently in the different contexts presented by ubiquitous/pervasive computing." In other words, if you bring your hands together to zoom out on your dashboard map interface, it sure would be nice if you didn't have to move your hands apart to accomplish the same thing on your wallscreen display at home.
posted by adamgreenfield at 4:05 PM on October 1, 2006
adamgreenfield: My concern is not so much that "we need new interfaces" as "we need interfaces that work consistently in the different contexts presented by ubiquitous/pervasive computing." In other words, if you bring your hands together to zoom out on your dashboard map interface, it sure would be nice if you didn't have to move your hands apart to accomplish the same thing on your wallscreen display at home.
Well, I'm not convinced that they should use the same interface conventions because the implications regarding attention and motion are different for both of these applications. Within a car you want to minimize elements which draw visual attention away from the road. With a wallscreen display you typically want to capture a person's attention for entertainment purposes.
Ideally I would think that the point of pervasive computing would be to have no apparent interface at all.
posted by KirkJobSluder at 4:17 PM on October 1, 2006
[Y]ou typically want to capture a person's attention for entertainment purposes.
See, I don't ever think this is OK. As an interface designer, one of my fundamental principles is respect for the user, and in practical terms this generally translates into peripheral and "pull" over focal and "push." It goes without saying that there are plenty of other designers who are happy to design attention-grabbing interfaces...but in the final analysis I find the implications of such decisions more than a little dispiriting.
As you imply, unless further depth and complexity is specifically invoked by the user, as far as I am concerned the best thing an interface can do is get out of the way.
posted by adamgreenfield at 4:41 PM on October 1, 2006
adamgreenfield: See, I don't ever think this is OK.
I do, for example, playing games and watching DVDs. For that matter, I think most applications are ideally used in a "widescreen" mode with the full focal attention of the user.
posted by KirkJobSluder at 4:53 PM on October 1, 2006
I installed this thing and I keep getting red squares that say "Waiting for connection..." and then the entire UI locks up. It's a neat idea, but I'm so sick of half assed projects like this. If I can't make it work after messing with it for half an hour, someone has done something seriously wrong.
Plus, the UI (when it isn't locked up) feels terrible and is as slow to respond as I'd expect from something written in Smalltalk. Why must people write software that makes my very fast computer feel like a 386?
posted by knave at 7:54 PM on October 1, 2006
Alan Kay is a visionary, yes. A "sexy product innovator" he is not. Smalltalk always felt awkward to me, but watching experts use it is amazing. Ditto Squeak. I made my way around the Croquet demo, and found a case of RSI building up. Heck, intuitive navigation in 3D worlds is a "cracked issue" in the FPS game world, but this interface doesn't come close.
I love the idea, though -- MetaVerse meets MOO.
posted by dylanjames at 7:57 PM on October 1, 2006
That video is really old - in 2003, Croquet was <0.1; just this year they released the 1.0 SDK. There is a much more up-to-date presentation (in wmv, yuck) over here (http://mslive.sonicfoundry.com/mslive/Viewer/NoPopupRedirector.aspx?peid=172f6de5-135b-4ba0-9207-ac6d383812c9&shouldResize=False#) from Accelerate Madison. Please actually watch the video and read about the software before judging it - it is so much more than just a "glitzy interface"; in fact, the 3D programming could use some work. I'm mostly a CLI person myself, but this project is really exciting. It's still quite slow, and you need a fast network connection for collaborative work, but it has lots of promise. More importantly, download it, read the manual and play with it.
I'd also recommend watching this presentation of students using Squeak in Extremadura, Spain, and these videos from Alan Kay's 2003 ETech presentation.
posted by nTeleKy at 9:42 AM on October 2, 2006
WTF? That link came out fine in preview. It still works, anyway.
posted by nTeleKy at 9:45 AM on October 2, 2006
I can just see it now -- endless rows of cubicles full of office workers looking at their identical green-grass-and-blue-sky virtual "worlds," all of them inhabited entirely by floating 3D Word windows.
posted by rusty at 11:35 AM on October 2, 2006
Alright. Here's the deal. (I did UI design before I got into security... amusingly enough, there are actually way, way fewer people who understand good UIs than even infosec...)
Most of the information value of what we put into and pull out of computers is textual. Yes, there are certain applications in which the output (geospatial, quantitative) or the input (group manipulation) or both (games) is nontextual, but by and large we're manipulating ideas, and not even speech is as fast or efficient for concept manipulation as text.
That's not to say the CLI is more efficient than the GUI. There's an entirely different conversation there; let's just leave it at this: the GUI is utterly masterful at prompting (checkboxes rather than recalled settings) and information hiding (not showing you settings you don't care about). In other words, the GUI is a Shell (literally, on Windows) for text.
Text comes in through our visual system, which essentially maps a complex system of curved edges into the language system of choice. There are straight-up neural links between our visual and auditory systems; I've always suspected that learning to read is about remapping acoustic comprehension of syllables onto our visual edge detectors. Text, even in pictographic languages, remains nothing but a highly efficient form of audio visualization, one that is almost entirely centered on our ability to parse curved edges.
Now, edge detection is interesting. If you take a look at the physiology of vision, you'll find a region of the occipital lobe that actually has individual neurons that respond to angles at 15 degrees, 30 degrees, 45 degrees, and so on. What's interesting is that this system is far more forgiving of changes in size than it is of angle: A word may be large or a word may be small, but the series of detected angles is roughly the same. Take the same word and rotate it, or even worse, tilt it Star Wars style...and now you're getting a completely different set of angles coming out of the edge detector. Such faults can be corrected for, of course, but my guess is that something in the imaging pipeline has to try to rectify the incoming image.
Rectifying scale is easy -- you don't even think about what it takes to handle watching a movie at different distances from the screen. Rectifying angular distortion is much harder; you don't consider this until you're forced to sit in the front row of a packed theatre all the way off to one side. Then you claw your eyes out.
So, check out these screen shots. Tilted virtual windows...a necessary consequence of the place metaphor, perhaps. But it is unusable just the same. The place metaphor is, for the most part, a fairly poor mechanism for information transfer. Sure, tilt is just an amateurish flaw that can be resolved with simple tricks like showing every viewer a display that points directly at them no matter what their viewing angle happens to be. This seemingly works well because of the scale invariance property I discussed earlier...except there are two problems that the place metaphor keeps injecting:
1) If text is physically somewhere, words can overlap. Overlapping edges, especially when the overlaps are both of characters, kill text parsing engines dead.
2) Suppose text renders near its correct position but is made not to catastrophically overlap. Either each word is the same size despite apparent distance to the source, causing a break in the visual analogy, or some text is bigger than other text, forcing the visual system to constantly adapt to what is essentially new font after new font after new font. (Rough numbers in the sketch below.)
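A back-of-the-envelope illustration of that distance/size trade-off, and of the earlier tilt point; the text height, distances, angles, and the simple cos(tilt)/distance model are assumptions, not measurements.

```python
# Rough numbers only: how a line of text's apparent height changes with viewing
# distance and tilt, under a crude perspective model (assumed values throughout).
import math

def apparent_height(true_height, distance, tilt_deg):
    # Perspective scaling ~ 1/distance; tilting the text plane foreshortens it
    # by roughly cos(tilt).
    return true_height * math.cos(math.radians(tilt_deg)) / distance

for distance in (1.0, 2.0, 4.0):
    for tilt in (0, 30, 60):
        h = apparent_height(true_height=1.0, distance=distance, tilt_deg=tilt)
        print(f"distance={distance:.0f}  tilt={tilt:2d} deg  apparent height={h:.2f}")
```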
These faults aren't necessarily fatal, but they're fairly problematic. One interesting thing is that standard 2D GUIs have an entire world of overlap bugs that nobody is bothering to fix. The concept of overlapping windows is fundamentally broken: any application which is somewhat occluded has lost the entire use of its occluded region. I believe it was Windows 2000 when MS added the ability to have non-rectangular windows for applications. Everyone played with it for a little while, and then tried very hard to forget it ever existed. But while we've all gone back to rectangular GUIs, we're still doing screen layout manually. This is ridiculous, flat out, and it's where I'm actually cautiously optimistic the new GUI toolkits (Aero, Quartz, XGL) might actually lead to improvements in user efficiency.
How did we get all the way to 2006 without GUIs that know IM clients need to be long vertical strips along one side of the screen? I mean, PowerPoint figured out years ago that I want templates for laying stuff out on screen. And yet I still have to manually resize my web browser when I choose to pay attention to GAIM!
As I wrote earlier, the two great things a GUI does are showing you things you'd otherwise forget and hiding stuff you don't presently care about. As such, my expectation is that we'll see screen real estate actively managed, with a region for one to three windows where active work occurs, a second region for three to twelve visually scaled "background apps" that need to be monitored (perhaps with a way for their newest data to be zoomed in before shrinking back down to their limited view), a third region for IM client lists and any other vertical visual strips (newsfeeds, log drops, etc.), and the stock fourth taskbar for any processes not interesting enough to visually represent. The obsession with the crazy things 3D hardware is capable of has obscured the genuinely useful scaling functionality it can potentially bring to the table.
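A minimal sketch of what that kind of role-based screen management might look like; the region geometry, roles, and window names are invented for illustration.

```python
# Hypothetical role-based screen layout: each window declares a role and the
# manager divides the matching region among the windows that share it.
REGIONS = {
    "active":  (0,    0, 1400,  900),   # one to three windows of active work
    "monitor": (0,  900, 1400,  180),   # scaled-down background apps
    "strip":   (1400, 0,  520, 1080),   # IM lists, newsfeeds, log drops
}

def layout(windows):
    """windows: list of (name, role). Returns name -> (x, y, w, h)."""
    placed = {}
    for role, (x, y, w, h) in REGIONS.items():
        group = [name for name, r in windows if r == role]
        if not group:
            continue
        slice_w = w // len(group)            # split the region evenly, side by side
        for i, name in enumerate(group):
            placed[name] = (x + i * slice_w, y, slice_w, h)
    return placed

windows = [("editor", "active"), ("browser", "active"),
           ("build log", "monitor"), ("GAIM", "strip")]
for name, (x, y, w, h) in layout(windows).items():
    print(f"{name:10s} -> x={x:4d} y={y:4d} w={w:4d} h={h:4d}")
```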
Apple, of course, has touched on the use of image scaling for application selection in their deployment of Exposé. However, if you look at this screen shot, you'll see something very noticeable missing...
Margins, or any sort of visual segmentation. Yikes. Granted, text is way more edge-sensitive than big graphical blobs, but these are scaled-down windows delivering text. It would be far more usable if Apple gave the eye consistent left margins to lock onto. Also, there is a constant tradeoff between the value of having a smaller window that has the same total appearance as its parent, and having text that is actually readable. It's clear that one of the driving forces keeping HTML around is the fact that rescaling fixed-size text to an arbitrarily sized window is something it does very well. Scaling text separately from imagery is technically possible, but I'm left wondering what the results would look like.
In summary, Croquet's sort of 3D UI work has been repeatedly rediscovered to be useless, though at least the network/collaboration stuff looks cool. But there are a few interesting things that universal 3D acceleration brings to the table, not the least of which is some hope that we can finally get actively intelligent window management sometime this century.
posted by effugas at 12:22 AM on October 3, 2006 [2 favorites]
posted by MetaMonkey at 2:40 AM on October 1, 2006