Half a step away from Minority Report UI
May 22, 2012 2:34 AM
"For the first time, you can control a computer in three dimensions with your natural hand and finger movements". [SLYT] LEAP is a USB peripheral that provides real-time motion tracking accurate to a hundredth of a millimeter.
Cheaper than the Microsoft Kinect, they are accepting pre-orders, and dev-kits are also available.
Polonius.
Yet here, Laertes? Aboard, aboard, for shame!
The wind sits in the shoulder of your sail,
And you are stay'd for. There- my blessing with thee!
And these few precepts in thy memory
Look thou character. Give thy thoughts no tongue,
Nor any unproportion'd thought his act.
Be thou familiar, but by no means vulgar:
Submit thyself not to the tempts of preorders.
posted by Blazecock Pileon at 2:43 AM on May 22, 2012 [9 favorites]
Google "Gorilla Arm"
Anyway, other than the need to do lots of curls in order to be able to use something like this continuously, the real problem is the lack of software support. With a 2D monitor, it's kind of pointless. If everything is 2D and you want more precision than a mouse gives you, you can just use a touch screen or get a cheapo digitizer tablet, which will give you a lot more control because you'll be able to rest your arm on the table and just move your fingers.
With a true 3D display, you could have buttons and objects appear in space in front of you. Then you could use this thing to interact with them. But we don't even have much software that takes advantage of 3D display tech anyway. Maybe some cheesy games could be made for it, but that's about it.
A long time ago I bought some cheesy 3D shutter glasses. They hooked up through some crazy VGA adapter so that they could sync to your VGA CRT's vertical blank interval. Anyway, they worked, kind of, if the programs you were trying to use used the right 3D library (Direct3D, I think). Quake used OpenGL, so the glasses didn't even work with that.
Basically I watched some D3D demos in 3D, then put the glasses in a drawer and forgot about them. They probably wouldn't even work with an LCD, and definitely not on anything other than a VGA-connected monitor.
So yeah. Even if something seems cool, if it doesn't have lots of software support, it's not really worth getting.
posted by delmoi at 2:49 AM on May 22, 2012 [1 favorite]
The developers more or less confirmed someone's guess on Hacker News that it uses "2 normal digital cameras and a uv or infrared dot pattern [and] the glass is a uv filter." (the dev is ChrisFornof).
Like phone-based AR, I have yet to see a killer app for this, but they're being smart by giving away 20,000 developer units so that at least they increase the chance of someone making one.
posted by adrianhon at 3:01 AM on May 22, 2012 [2 favorites]
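If that two-camera guess is right, the underlying depth math would be classic stereo triangulation: a feature seen by both cameras appears shifted by some pixel disparity d, and depth falls out as Z = f * B / d for focal length f and camera baseline B. A minimal sketch, with made-up numbers (not Leap's actual optics):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a feature from its pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("feature must be visible in both cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical optics: 700 px focal length, 4 cm baseline.
# A 100 px disparity puts the fingertip at 28 cm.
z = stereo_depth(700, 0.04, 100)
print(round(z, 3))  # 0.28
```

Note how accuracy improves as the disparity grows: close objects shift more between the two views, which is one reason a short-range desktop sensor can claim much finer precision than a room-scale one.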
But it'll be so cool for playing your theremin! I'll still wait until it comes out with its Badger skinning.
posted by oneswellfoop at 3:01 AM on May 22, 2012
This device combined with a robotic arm seems like a really fancy way to rip your junk off and toss it across the room if you don't have it set kind of close to a 1:1 ratio during your masturbation sessions. And yes, you know it will come to that.
posted by Purposeful Grimace at 3:12 AM on May 22, 2012 [11 favorites]
This interface would be fairly useless for games like Quake, where you spend a lot of effort trying to avoid touching any of the objects you are concerned with. It would be better for something like The Incredible Machine, where you need to arrange objects with great precision.
posted by LogicalDash at 3:13 AM on May 22, 2012
Google "Gorilla Arm"
Yes, though some tasks are unavoidably likely to lead to gorilla arm.
For instance, I'd like to see it used to convert sign language to text or voice, or to help translate between different sign languages (such as ASL to BSL), or to teach signing.
Also, long-distance hand jobs.
posted by pracowity at 3:23 AM on May 22, 2012 [1 favorite]
Combined with tiny, or huge, manipulator arms it could do some very interesting things.
posted by aeschenkarnos at 3:26 AM on May 22, 2012
The developers more or less confirmed someone's guess on Hacker News that it uses "2 normal digital cameras and a uv or infrared dot pattern [and] the glass is a uv filter." (the dev is ChrisFornof).
Well, one easy way to make this possible would simply be to straight up violate whatever patents the Kinect is using. If the Kinect was $240 a couple years ago, it stands to reason it could be made more cheaply today.
posted by delmoi at 3:27 AM on May 22, 2012
(The real problem with Kinect is the latency. They had to have had brutal arguments about whether 30fps was fast enough. Turns out, no, it isn't.)
posted by effugas at 3:36 AM on May 22, 2012 [1 favorite]
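To put numbers on that 30fps complaint: a new frame only arrives every 33 ms, and capture, processing, and display refresh all stack on top of that. A rough motion-to-photon budget; the capture and processing figures below are illustrative guesses, not measured Kinect numbers:

```python
def motion_to_photon_ms(sensor_fps, capture_ms, processing_ms, display_fps):
    """Rough input latency: wait for the next sensor frame, capture it,
    process it, then wait for the next display refresh."""
    frame_interval = 1000.0 / sensor_fps
    refresh_interval = 1000.0 / display_fps
    return frame_interval + capture_ms + processing_ms + refresh_interval

# 30 fps sensor vs. a hypothetical 120 fps sensor feeding a 60 Hz display.
print(round(motion_to_photon_ms(30, 15, 20, 60), 1))   # 85.0
print(round(motion_to_photon_ms(120, 5, 20, 60), 1))   # 50.0
```

Even with generous assumptions, the 30 fps frame interval alone eats a third of the roughly 100 ms humans start to notice as lag, which is why the frame rate argument matters so much.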
I had an itch on my nose, and next thing I know, I erased everything on my hard drive.
posted by crunchland at 4:17 AM on May 22, 2012 [5 favorites]
A keyboard with this built in would be pretty excellent. Swipe two fingers left for undo, a pincer motion to close, thumbs up to confirm a dialogue box, middle finger to force close…
posted by dudekiller at 4:43 AM on May 22, 2012 [5 favorites]
Even if something seems cool, if it doesn't have lots of software support, it's not really worth getting.
I hope we'll see that happen with devices like this coming on the market. I agree, I think the big 'leap' would have to be in software development before this can take off.
With a true 3D display, you could have buttons and objects appear in space in front of you. Then you could use this thing to interact with them.
This is just my personal observation, having worked on and used 3D displays in combination with touchless controls. Although it is really fun to use in this configuration, I'm not sure that 3D has much advantage over 2D displays when it comes to usability. The difficulty in designing touchless interfaces is direct feedback, but also switching between operations.
Let's say you are viewing a 3D widget, and you want to 'swipe' your hand to rotate it. As a gesture it works pretty well, you wave your hand in front of the screen, and the widget rotates. But what happens if there's a button on your widget that you want to press? As soon as you move your hand within the interaction zone, the widget starts to rotate and your button is now on the other side.
That's just a simple example, but it gets infinitely more complicated the more you want your software to do. This stuff can be solved, but it's hard.
I had an itch on my nose, and next thing I know, I erased everything on my hard drive.
For example, this could happen.
posted by romanb at 4:44 AM on May 22, 2012
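One common workaround for the rotate-versus-press ambiguity described above is an explicit mode switch: swipes do nothing until a deliberate gesture arms them. A toy sketch of that idea; the gesture names and event format here are invented for illustration:

```python
class WidgetController:
    """Routes raw hand events to rotate or press depending on mode.
    A deliberate 'pinch' toggles modes, so a hand entering the
    interaction zone can't rotate the widget by accident."""

    def __init__(self):
        self.mode = "press"
        self.rotation = 0
        self.pressed = []

    def handle(self, event):
        if event == "pinch":                       # explicit mode switch
            self.mode = "rotate" if self.mode == "press" else "press"
        elif event == "swipe" and self.mode == "rotate":
            self.rotation += 90
        elif event.startswith("touch:") and self.mode == "press":
            self.pressed.append(event.split(":", 1)[1])
        # anything else (e.g. a swipe in press mode) is ignored

c = WidgetController()
for e in ["swipe", "touch:ok", "pinch", "swipe", "pinch", "touch:cancel"]:
    c.handle(e)
print(c.rotation, c.pressed)  # 90 ['ok', 'cancel']
```

The cost, of course, is that the mode switch itself is one more thing the user has to learn and remember, which is exactly the kind of trade-off that makes touchless UI design hard.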
Like phone-based AR, I have yet to see a killer app for this
Ya know what would really be killer? Explain AR. (For us oldsters - "Augmented Reality")
posted by rough ashlar at 5:05 AM on May 22, 2012
The responsiveness seems excellent, on that front this is a real step forward. Maybe some kind of arm support would help with fatigue (which is a huge problem with Kinect too).
On a cnet article there's some talk about an "app store", so I'm guessing the business plan is to give out the devkits for free and get royalties from developers.
posted by ikalliom at 5:18 AM on May 22, 2012
This device combined with a robotic arm seems like a really fancy way to rip your junk off and toss it across the room if you don't have it set kind of close to a 1:1 ratio during your masturbation sessions. And yes, you know it will come to that.
Purposeful Grimace
Yes, this is a real danger. Which is why we need to get a Congressional oversight on this right away.
posted by victory_laser at 5:42 AM on May 22, 2012
You mean you have to use your hands? That's like a baby's toy!
posted by banished at 5:44 AM on May 22, 2012 [4 favorites]
Can anyone explain why it has to be on the desk pointing up? That implies the field of view is really shallow. For example, why couldn't it just sit above the monitor and point down at the desk? This might eliminate the Gorilla Arm problem: you can draw on the desk and lift your hands off when you need to manipulate the third dimension. I don't know.
posted by victory_laser at 5:50 AM on May 22, 2012
I'm fairly excited about using this for design. Combine it with a glove that has haptic finger vibrators on it and... oh never mind. It's just going to be used for sex, isn't it.
posted by hanoixan at 5:55 AM on May 22, 2012 [1 favorite]
The important thing is that it's unobtrusive enough so it won't be noticed among your desk clutter as you convince your young child you have superpowers.
posted by mikepop at 6:09 AM on May 22, 2012
The point cloud at 0:52 is waaaay better quality than the Kinect. This is interesting.
posted by scose at 6:25 AM on May 22, 2012
Although the coverage seems too good for a single camera...
posted by scose at 6:27 AM on May 22, 2012
OT: It seems like the best solution to gorilla arm would be to put the screen above you, tilt back your chair 90 degrees, and point your arms up. That way your bones can support the weight of your arms.
posted by leotrotsky at 6:29 AM on May 22, 2012 [1 favorite]
rough ashlar: AR is when you have footage of real life with computer graphics painted on top in real time. Movies attempt to portray this with overly elaborate heads-up displays, like those captions on everything Robocop sees. That's certainly one way to implement augmented reality, and a sensible way when you're putting it in a fighter jet (which already happens) or using it to ease the process of inventorying a warehouse (coming soon, probably).
It's easy to conceive of potential killer apps for AR: What if everyone you met at a party had a big ol' status bar with background information and stuff, instead of that piddly little nametag everyone wears? What if you didn't have to look down to see where your GPS was routing you? And it can do those things, yeah, but most of the time it still requires the user to provide "markers" that are easy for the image processor to locate. Simply putting a barcode on an object of interest is usually enough, but that barrier is still too high if you want your glasses to draw a line on the road where you're supposed to go. If the dividing line gets interrupted for whatever reason, and there's no other road paint in eyeshot, your glasses no longer know where the center of the road is. Maybe they can find the edges instead, but that's dodgy if the asphalt is broken or there's something growing on the shoulder.
posted by LogicalDash at 6:43 AM on May 22, 2012
LogicalDash: Yep, I can see the potential of heads-up display AR, but I am much more dubious about phone-based AR. I don't see many people enjoying looking at the world through a 4" display held up by their arm.
posted by adrianhon at 6:46 AM on May 22, 2012
adrianhon: I think the goggles are going to be Bluetooth peripherals for phones.
posted by LogicalDash at 6:48 AM on May 22, 2012 [1 favorite]
The point cloud at 0:52 is waaaay better quality than the Kinect.
I initially thought that segment was a clue that it was using some radically different mechanism than Kinect -- notice the points on top of the hands, out of the view of a camera.
Given what adrianhon says above, it seems like that segment is "faked". i.e. Those aren't point samples; it's just a hand model, rendered only with vertices, controlled by an input skeleton, just like Kinect.
I don't get how they would ever get the sample points on top of the hands. Well, without using some high-energy radiation anyway, which might be a tough sell to consumers.
posted by CaseyB at 7:10 AM on May 22, 2012
I'm really interested in this bit from the Engadget article:
"Users can customize it to suit their needs with custom gestures and sensitivity settings, in addition to chaining multiple Leap devices together to create a larger workspace."
There's a lot you can do if these chain together. 3d scanners! Motion-controlled performance spaces and art installations! Gigantic interactive maps at tourist information centers!
A thought, though: if these are camera-based, and they do become ubiquitous and eventually break out of the "pack of gum on a desk" form factor and get embedded in all sorts of appliances, I reaaally hope the developers put a lot of effort into security, because it's going to end in some interesting privacy violations if they didn't.
posted by jason_steakums at 7:18 AM on May 22, 2012
This technology's been around for years.
posted by resurrexit at 7:19 AM on May 22, 2012 [1 favorite]
Watched the first minute or so of the video, pre-ordered. I can find some sort of use for this, and it slides in under my $100 "impulse purchase" price limit.
posted by mrbill at 7:30 AM on May 22, 2012 [2 favorites]
I still maintain that the best way to implement something like this needs a haptic feedback element. We are used to having a touch response with our fingers to tell us where they are. I don't doubt that with practice that becomes less of an issue, but I think it would help people get used to the interface.
posted by Hactar at 7:34 AM on May 22, 2012
>I still maintain that the best way to implement something like this needs a haptic feedback element.
I'm curious if developers are hung up on the expense of engineering precise haptic response. The thing is, I would imagine that precision matters much less than simple time proximity, when it comes to building subjective connection.
The bell doesn't need to be intrinsically related to dog food, in order to trigger salivation; it just needs to be heard quickly and consistently.
posted by darth_tedious at 8:02 AM on May 22, 2012
CaptainCaseous beat me to the Oblong link, but here's Underkoffler's TED talk explaining their approach to gestural user interfaces. We've seen one major revolution in UI in the past few years; multi-touch interfaces led by Apple's work on the iPhone. Gestural interfaces may well be the next revolution. Affordable hardware like Kinect and this LEAP thing are necessary prerequisites, but the really interesting work is in the interaction design.
posted by Nelson at 8:07 AM on May 22, 2012 [1 favorite]
... oh nevermind. It's just going to be used for sex, isn't it.
Not at all! I will be using it to punch people over the internet. TCP/IPunch.
posted by Ritchie at 8:25 AM on May 22, 2012 [4 favorites]
Everyone trots out the Gorilla Arm without stopping to think that this could go ALONG WITH existing interaction.
For instance, yesterday I was dyeing my hair. I realized I didn't have any music playing after my hands were covered in bleach. I had two ways to make music happen: poke the media keys on my keyboard, or pick up my stylus and navigate to iTunes via the Wacom tablet. Both of these would involve either getting bleach everywhere, or stripping off my gloves. One of these on my desk with a few gestures bound to music control would have been perfect.
Also if you poke around the site you'll find the usecase that inspired them to make it: 3D modelling.
Mice didn't replace keyboards. Gesture doesn't have to replace mice, either.
I'll find out how well it works this winter, when the one I've pre-ordered arrives.
posted by egypturnash at 8:29 AM on May 22, 2012
3d interfaces are the kind of thing somebody who has never designed interfaces thinks is cool. Really, it's just a further layer of distracting abstraction.
Let's say you have two folders on your desktop, one has some word docs, one has some spreadsheets. Now, they are four virtual feet apart, one behind the other. What information have you added? None.
I worked with the Kinect team when it was still a secret project trying to prototype how the xbox dashboard would work with it. We made all sorts of fancy 3d wheels and virtual library stacks and flying piles and so on and so on.
Eventually we went back to a slightly modified version of how it had worked before. Everything else made it more of a pain to navigate and added nothing except a wow factor that was shortly destroyed by frustration.
Now, if this in some way can be used as a keyboard replacement, I'm all for it. But so far, most attempts at using this kind of thing to replace a keyboard fail miserably because of the lack of haptic feedback from the keys.
Of course, for less than $100, I'll probably buy one for hacky fun, so there's that.
posted by lumpenprole at 8:48 AM on May 22, 2012
I'd love to see this technology integrated into SixthSense to get around the color coded fingertips thing.
posted by jason_steakums at 9:11 AM on May 22, 2012 [1 favorite]
The 10 micrometer accuracy they are claiming is insane and would be useful for a lot of high-end industrial applications if true.
posted by Potsy at 9:31 AM on May 22, 2012
>3d interfaces are the kind of thing somebody who has never designed interfaces thinks is cool. Really, it's just a further layer of distracting abstraction
Yeah. It's important to address purpose:
3D is great for emotional immersion; on the other hand, because efficient info sorting is fundamentally a process of streamlining, 3D is probably directly counter-productive for information management.
posted by darth_tedious at 9:48 AM on May 22, 2012
scose: The point cloud at 0:52 is waaaay better quality than the Kinect. This is interesting.
I agree, that more than anything got my interest, as I can see real applications in my working life for such a graph network presentation. CaseyB thinks it's fake, but I'm wondering if he took you literally and watched at 0:52 and not 0:32.
posted by hincandenza at 10:08 AM on May 22, 2012
I've had the good fortune to see one of these up close and play with it a little bit. An online video really doesn't do justice to how cool and powerful it is.
posted by eustacescrubb at 10:18 AM on May 22, 2012 [1 favorite]
They mentioned in the HN thread that they are planning to release this to developers a few months before they release it to the general public, in hopes of getting the app store off the ground by the time it's generally available.
As others have mentioned, it's not ever going to be your only input device, but it seems like it could definitely be useful for certain tasks.
posted by !Jim at 10:24 AM on May 22, 2012
A computer game based on all the hilarious 3D computer interfaces from movies (Jurassic Park!) controlled with your hands could be incredible. Live the dream of the 90s!
posted by wemayfreeze at 10:33 AM on May 22, 2012
...
The 3D file browser used on the Silicon Graphics IRIX Operating Environment was a real piece of software. FSN, fusion, is quite real. In fact, it lives on in FSV.
UNIX geeks have always been able to live the dream of the 90's... even in the 90's.
But, even as these file system browsers look cool, they are woefully useless.
A shell prompt is where the fastest of manipulations take place. This is true for workstations, servers, mainframes, and supercomputers.
It does not matter which OS is the host.
/PedanticComputerGeek
~]#
posted by PROD_TPSL at 11:27 AM on May 22, 2012 [2 favorites]
If they can honestly meet the price-point and accuracy promises made on the site then these will sell themselves, and developers will fight each other with knives for the opportunity to write THE killer app for it.
But I think those are two very big ifs, indeed.
posted by lekvar at 11:49 AM on May 22, 2012
It's easy to conceive of potential killer apps for AR: What if everyone you met at a party had a big ol' status bar with background information and stuff, instead of that piddly little nametag everyone wears
That sounds horrible.
Although the potential for second-life flying penises now makes me yearn for the over-the-top hacking opportunities of such a system.
posted by rough ashlar at 11:51 AM on May 22, 2012
Those are not proper 3d file browsers. What's needed here is a cone tree; better-quality images may be more self-explanatory. You really need to see it working in animated form, but I'm having difficulty finding the right video on YouTube.
posted by anigbrowl at 11:51 AM on May 22, 2012
There's some cool new research (SLYT) that showed you can use the built-in hardware (a mic and speakers) of existing devices to sense gestures. They just emit a tone at ~20 kHz and use the doppler effect. It only gives you 3 gestures, essentially (towards, away, side-side), but it still solves the shit-on-my-hands problem.
posted by victory_laser at 12:47 PM on May 22, 2012 [1 favorite]
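The Doppler arithmetic behind that trick is small enough to sketch. For sound bouncing off a moving hand, the two-way reflection roughly doubles the usual Doppler term, so the shift is about 2vf/c. A quick check of the numbers (assuming room-temperature air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def reflected_doppler_shift_hz(tone_hz, hand_velocity_ms):
    """Frequency shift of a tone reflected off a moving hand.
    Two-way reflection doubles the one-way Doppler term:
    delta_f ~= 2 * v * f / c (positive = hand approaching)."""
    return 2.0 * hand_velocity_ms * tone_hz / SPEED_OF_SOUND

# A hand moving toward the laptop at 0.5 m/s shifts a 20 kHz tone
# by roughly 58 Hz -- small, but visible in an FFT of the mic input.
print(round(reflected_doppler_shift_hz(20_000, 0.5)))  # 58
```

The sign of the shift distinguishes towards from away, which is why the technique naturally yields only a handful of coarse gestures rather than fingertip tracking.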
Which incidentally is also really great for masturbating.
posted by victory_laser at 1:20 PM on May 22, 2012
Well, that's nice for people whose hearing is damaged to the point of being unable to hear anything above 15 kHz. If you try that around me, though, I will hunt down the sound-emitting device and disable it by any means necessary.
posted by anigbrowl at 1:50 PM on May 22, 2012
Playing a first-person shooter while going "pew pew!" at the screen with a little pointy-finger gun movement looks like a lot of fun!
posted by ShutterBun at 9:24 PM on May 22, 2012
Wait. Who said anything about reaching out and touching an LCD? Gorilla arm isn't a problem if the system can capture the kind of subtle "talking with your hands" gestures we make all the time.
What I can't figure out is how the system knows when you're performing a command versus just reaching for your coffee cup or absentmindedly wiggling your fingers... unless you engage it with a pedal or something like that, which just seems kind of hilarious.
posted by snuffleupagus at 8:52 AM on May 23, 2012
What I can't figure out is how the system knows when you're performing a command versus just reaching for your coffee cup
Most of these systems, and from the video it looks like this is no exception, have a fairly limited sensor area and are set to discard the motion of pulling out of it.
posted by lumpenprole at 9:29 AM on May 23, 2012
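The "limited sensor area" idea above is easy to sketch. This is a minimal illustration (the box dimensions and all names are hypothetical, not LEAP's actual API): only hand positions inside an interaction zone count as input, so the motion of reaching out of the zone for your coffee is simply dropped.

```python
# Hypothetical sketch: gate gesture input by an interaction zone.
# Coordinates are (x, y, z) in millimetres above the sensor.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned interaction zone; everything outside is ignored."""
    x: tuple = (-150, 150)
    y: tuple = (50, 400)    # height band: ignore the desk below it
    z: tuple = (-150, 150)

    def contains(self, p):
        px, py, pz = p
        return (self.x[0] <= px <= self.x[1]
                and self.y[0] <= py <= self.y[1]
                and self.z[0] <= pz <= self.z[1])

def filter_track(points, box=Box()):
    """Keep only in-zone samples; frames where the hand leaves are dropped."""
    return [p for p in points if box.contains(p)]

# Last frame: hand leaves the zone (reaching for the coffee cup).
track = [(0, 200, 0), (10, 210, 0), (300, 500, 0)]
print(filter_track(track))  # the out-of-zone sample is discarded
```

A shipping driver would also debounce the exit (discarding the last few frames before the hand crosses the boundary) so the reach itself isn't interpreted as a swipe.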
It won't really be Minority Report until the output is on laser-etched wooden balls. Speaking of which, if you thought ink cartridges were expensive...
posted by Trace McJoy at 10:09 AM on May 23, 2012
Gorilla arm isn't a problem if the system can capture the kind of subtle "talking with your hands" gestures we make all the time.
This is an interesting mental track to take. I would guess that most people are thinking Minority Report-style interface, with wide, sweeping gestures. The example video on-site tones this down a bit, but there's still a lot of gesticulate-wildly-at-the-monitor going on.
But why does the LEAP reader-thingy have to be under the monitor? Imagine just having it on your lap so that you can make gestures in a more natural posture, like a gestural keyboard. Not for text entry, but for apps like Blender or Photoshop where a user might find themselves using the mouse and keyboard simultaneously. Pointing at the monitor seems like a distraction at best, especially given that most users are already comfortable with the abstraction of using a mouse.
posted by lekvar at 2:32 PM on May 23, 2012
Yeah. I was thinking either directly to the left of caps lock or to the right of numpad-enter.
posted by snuffleupagus at 9:02 AM on May 24, 2012
(Depending on whether it sort of replaces or augments the mouse...and thus the drivers they release.)
posted by snuffleupagus at 9:03 AM on May 24, 2012
This thread has been archived and is closed to new comments