HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Welcome to Project Soli

Google ATAP · YouTube · 73 HN points · 22 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Google ATAP's video "Welcome to Project Soli".
YouTube Summary
Project Soli is developing a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.

Follow Google ATAP on Twitter for updates on Project Soli: https://twitter.com/GoogleATAP

Visit https://groups.google.com/forum/#!forum/soli-announce to sign up for updates.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Oct 16, 2019 · melling on Soli
Google finally ships a research project in a real product and people don’t seem to understand the usefulness.

Human nature for some reason often devolves to “what’s the point?”

Here's the 2015 announcement. There's a lot of potential. Maybe we'd all benefit from "imagine what's now possible."

https://youtu.be/0QNiZfSsPc0

meowface
It's really cool and innovative technology, but I also struggle to see the point for the common user. If this were a computer monitor or projector screen, sure. But my phone is pretty much always in my hand if it's being used in any way. If it's already in my hand, what do I gain from this?

I can see how it'd be very useful for people who mount their phones on some sort of distant stand. But I feel like that's pretty uncommon, though maybe I'm just ignorant of how common it actually is. It's probably much more common for tablets, like for watching videos/movies at a distance. I do that, and I could definitely see myself using this with a tablet. But for something small that's pretty much always either in my pocket or in my hand?

drusepth
One thing that might help with the "imagine what's possible" mentality here, if you're convinced your phone will always be in your hand (which is a totally valid assumption, btw), is to start imagining what other devices this chip could be inserted into, and what you could do with them "remotely".
rtkwe
One spot where it's already improving things is face unlock. It uses the Soli sensor to detect when you're approaching the device, then uses the camera (maybe combined with the Soli sensor) to unlock much faster than even the iPhone. In MKBHD's initial review it was fully on and unlocked before he even brought it all the way up.
kllrnohj
Your phone isn't always in your hand. In fact, your phone isn't in your hand for the vast majority of the day most likely. And it's in those scenarios that something like this could be cool if it works well.

If it can detect your presence before you actually pick it up, it could do things like fire up the face-auth system for a quicker unlock. It can auto-lock if you set it down and walk away, without manually turning it off or waiting for the timeout. If the alarm is going off, it can lower the volume when it detects your presence, before you actually dismiss or snooze the alarm.

There's all sorts of possibilities for all the times where you are not holding your phone. If it works well, that is.
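To make that concrete, here is a minimal sketch of the presence-driven behavior kllrnohj describes. Everything here is hypothetical - the event names and the Device class are invented for illustration, not the actual Pixel implementation:

    # Hypothetical sketch: radar presence events driving device state.
    class Device:
        def __init__(self):
            self.locked = True
            self.alarm_ringing = False

        def on_presence_detected(self):
            # Pre-warm the face-auth pipeline so unlock feels instant
            # by the time the phone is actually picked up.
            if self.locked:
                self.prewarm_face_auth()
            # Duck the alarm as soon as the user reaches for the phone.
            if self.alarm_ringing:
                self.lower_alarm_volume()

        def on_presence_lost(self):
            # Lock immediately instead of waiting for the screen timeout.
            self.locked = True

        def prewarm_face_auth(self):
            print("warming up face-auth sensors...")

        def lower_alarm_volume(self):
            print("ducking alarm volume")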

Oct 15, 2019 · cbolton on Google Announces Pixel 4
The radar chip is especially interesting. I remember being intrigued when it was presented by the ATAP team years ago: https://www.youtube.com/watch?v=0QNiZfSsPc0

The team showed an impressive amount of cool stuff back then, it's nice to see one of these projects shipping on a flagship product. There's more info here on how it's used: https://www.blog.google/products/pixel/new-features-pixel4/

EDIT: found the original presentation: https://www.youtube.com/watch?v=mpbWQbkl8_g . I think the touch sensitive cloth project also made it into a few products.

imagiko
It is Project Soli that graduated from ATAP!
Video from Google about the technology (2015): https://www.youtube.com/watch?v=0QNiZfSsPc0
srcmap
Very cool demo video.

Wonder if a phone camera + AI software could do an equivalent demo today?

Training a model to recognize certain types of finger movements seems trivial for AI now.

tanilama
Interesting. This might finally enable those holographic user interfaces shown in many sci-fi movies.
lawrenceyan
Go big or go home. That's one thing I like about Google/Alphabet: not being afraid to try completely new things outside the normal comfort zone of their traditional product space, and aiming for the most radical, potentially game-changing ones at that.

I'm glad that Sergey and Larry have been able to stay at the helm, keeping Google/Alphabet from devolving into just another short-term, quarterly-earnings-chasing company and continuing to put institutions in place that will ensure the culture of innovation which has made the company so successful lasts long into the future.

kkarakk
Go big, let your product stagnate for a couple of years because it doesn't improve AdSense dollar revenue in a direct way, and then go home.

Reminder that Inbox is getting the shotgun to the head this year while Gmail still feels like it's stuck in the early 2010s, and that's just the most recent one.

Any area that Google dominates nowadays feels almost accidental, like they don't actually want to dominate that area, but the alternatives don't have their datacenters and thus aren't as good. YouTube, for example.

blfr
This is, sadly, an accurate comment. Although to be fair to Google, they ported snoozing to regular Gmail and polished the UI a little.
lawrenceyan
What exactly is stagnating in your opinion? The Google Ads platform is currently one of the primary means of generating steady income for funding research and development efforts into new future technology, innovation, and growth.

I personally believe focusing so much on advertising is a short-term, narrow-minded way of judging Google/Alphabet, as the company is pretty much a fully fledged conglomerate at this point. Even so, advertising revenue has consistently grown 20-30% a year for almost 15 years now. Considering its sheer size, that's definite success regardless of your qualms with how Google culture encourages employees to try new things even if they might fail. In fact, though it may seem random, it is exactly because Google/Alphabet is so willing to try new things that it can succeed in so many immensely different industries and markets, from molten salt energy storage all the way to pest control using genetically modified mosquitoes.

kkarakk
> What exactly is stagnating in your opinion?

All of Google's email software offerings, all their chat software offerings, their AR platform offerings, their VR platform offerings. Does Android Things even exist anymore beyond a toy development kit?

I don't even bother with Google products anymore unless I see two years of concrete support and development. Google product early adoption does not pay off.

glenrivard
What product/service? It seems like they win a segment and keep pushing, which was not their way in the past.
kace91
Popular wisdom is that Google culture rewards the creation of new projects far more than their maintenance, which looks comparatively worse on a CV or when being considered for raises. Since they also hire mostly high-achievers, the result is that everyone wants to move on to new things and "old" products are soon left to wither.

I'd be curious to know the opinion of actual Googlers about this common theory.

izacus
I've yet to see a single explanation of why Google is supposed to maintain a non-profitable project/product indefinitely.

If Inbox (or the other projects that commonly come up in these whines) were a startup, it would have ended up on https://ourincrediblejourney.tumblr.com/ a long time ago. Why would Google keep maintaining and burning money on an unsuccessful free (!) product? Isn't "fail fast" the main praised mentality of startups here?

opencl
A lot of these projects Google has killed were very obviously going to be non-profitable from the start because they had no business model.

Take Allo or Wave, for example: how was a messaging app with no ads supposed to make any money even if it was successful? So why did they even bother in the first place if they're going to kill non-profitable projects?

rbritton
I wouldn’t expect anyone to maintain something unprofitable indefinitely, but Google’s reputation is to offer something free long enough to destroy any existing market and then kill it when there’s no one left to fill the void. It’d generally be better if they never offered it at all.
Hand gesture controls bother me. I see all these fancy implementations enabling hand gestures, such as the Google ATAP project called Soli [1], but none of these replace haptic feedback. One of the major aspects of any haptic human-to-machine interface is feedback and robustness. When you turn a high-quality 30-detent encoder from Alps [2] or an avionics-grade Elma encoder [3], you'll realize the value of haptic feedback. There is a reason why cockpits are full of knobs and dials, although unfortunately, that's changing due to highly integrated multi-function glass instruments.

The problem with hand gestures is robustness. Theremins (a hand-gesture-based musical instrument) are cool, but they take forever to master. I am not saying they are trying to replace pianos, but let's just say iPad pianos are not fun, precisely because of the lack of feedback.

I really think gesture-based controls are inferior. They have no place in my living room, my car, or anywhere in my life.

That said, I think the implementations of hand-gesture-based systems pointed out in this article are truly interesting. It is a challenging problem.

[1]https://www.youtube.com/watch?v=0QNiZfSsPc0

[2]http://www.alps.com/prod/info/E/HTML/Potentiometer/RotaryPot...

[3]https://www.elma.com/en-as/products/rotary-switches/rotary-s...

themagician
I wish we could meet somewhere in the middle for user interfaces in general. We went from dedicated buttons and knobs to slates of glass. I was really hoping for reprogrammable buttons and knobs instead, where labels and scales are replaced with LCD displays.

IMO things like the Model 3 just take it too far.

fermienrico
I don't know how to explain this but I'll try my best:

Categorized/Binary controls: Things like clapping to turn on/off lights, or a hand gesture to navigate to home in a car.

Continuous controls: Changing temperature, modulating a note (Theremin), controlling a quadcopter as in this article.

Binary/categorized controls are kind of ok. Hand gestures for continuous controls are not ok.

yorwba
I remember reading about some Google project that used haptic self-feedback for continuous gesture controls: To turn a virtual knob you'd rub your fingertips in a turning motion and to move a slider you'd slide your thumb along your index finger.

I think that's a pretty clever solution; unfortunately I can't find the blog post right now.

EDIT: They call it Project Soli: https://atap.google.com/soli/

vadimberman
I was about to say the same. Hand gesture control is the new "drag and drop", which was overhyped back in the late 1990s.

While "drag and drop" has its uses today, it's nowhere near as ubiquitous as its proponents back then wanted. Back then, it was forcefully pushed into spaces where it didn't belong.

Same for the gesture controls. Damn you, Minority Report, for putting this idea into the heads of the software business people. It's not all slick hand waving in the air.

In my dreams the keyboard disappears. My eyes, or fingers, move the cursor, I can speak interactively with the computer, gesture, etc. Can fingers/hands be tracked sufficiently to provide an accurate soft keyboard?

https://youtu.be/0QNiZfSsPc0

zamalek
Leap Motion Orion is pretty magical - there is a whole new level of presence when your hands come with you into VR. Sadly it seems as though nobody noticed; 95% of the stuff is tech demos. I only used it for a few weeks but totally felt it was worth the cost.
And Project Soli.

https://www.youtube.com/watch?v=0QNiZfSsPc0

carapace
Wow, radar gesture sensing.
mediocrejoker
I don't think this was even mentioned at Google IO this year. Anyone know what's going on with Soli?
How old are you? It’s really not that hard to imagine the keyboard being replaced by one, or a combination, of technologies in the next decade or two.

Voice has gotten dramatically better in the past few years. Another decade? There are also gesture technologies:

https://m.youtube.com/watch?v=0QNiZfSsPc0

Anyway, if you've got 3 or 4 decades of life left, do you still think we won't have sufficient advances to replace the keyboard?

jodrellblank
I can't imagine what a replacement would look or behave like. You can replace your keyboard with almost anything that proxies human to computer, but nothing else is the same or as capable.

The radar video is impressive (as were the Leap Motion videos, which didn't live up to the hype), and sure, it makes sense for extending a watch or phone, but the problem with autocorrect and voice recognition and predictive keyboards and spell checkers is that much of the time I'm not writing clear, coherent English sentences.

Sufficient advances to write foreign words and jargon and deliberate misspellings and usernames and hostnames and slang and quotes which are deliberately not spelled normally?

A lot of the claims about "you'll just think it" hide the reality - e.g. when I try to speak a password and I can't, because it's just a pattern of motion (and you can't escape this overall problem by saying you won't need passwords in the future).

Human hands are dextrous and sensitive, almost no other body parts are anything like as much - voice is laborious and prone to problems of pronunciation and tiredness and background noise and homonyms, anything else isn't going to come close until you can have brain surgery implants - and even then a) no thanks and b) I bet that still underestimates the complexity of reading clear intent.

Voice, eye tracking, and gesturing don't offer possibilities?

https://m.youtube.com/watch?v=0QNiZfSsPc0

http://ergoemacs.org/emacs/using_voice_to_code.html

And yes, if you're clicking away in a room of programmers, it won't work for you, but with a HoloLens I'll find a quiet location.

How about a little more innovation in keyboards? It is 30 years later. Split keyboards?

https://shop.keyboard.io

Tenting capable?

Add a Soli chip?

https://www.youtube.com/watch?v=0QNiZfSsPc0

cholantesh
I'm disappointed that the King's Assembly and Keymouse both turned out to be vaporware.
falcolas
As with beds, people aren't usually willing to spend lots of money on a good keyboard. It's unfortunate, especially considering how much time you spend typing on one. But the moment you start talking mechanical switches, your keyboard is going to be above the hundred-dollar mark (knock-offs notwithstanding). Split the keyboard, another two hundred dollars. Throw in something like Soli, and you're looking at a thousand-dollar keyboard.

How many people would be willing to pay over a thousand dollars for a good keyboard?

mark-r
I prefer using a standard layout, because then I don't have to adjust when I'm using a keyboard that isn't mine. I wouldn't mind having better keys though. I had an old mechanical keyboard at home for many years, but I finally retired it when my wife told me she couldn't stand the noise. I've always felt my typing was better with the mechanical keyboard.
Sorreah
Is there any good reason why splitting a hundred dollar keyboard would have to cost three times as much?

Anyway, I paid 30 euros for my last mechanical (with RGB lighting), shipped to my door. Patent expiration is a beautiful thing.

zck
> Is there any good reason why splitting a hundred dollar keyboard would have to cost three times as much?

Far lower demand, and its effect on economies of scale. I'm typing on a split keyboard right now, but most people don't want one.

m3Lith
And what is that secret model?
falcolas
Extra controllers (and PCBs), cabling to connect the two halves, extra plastic for the housing... there are definite real costs involved. However, once you're above a hundred dollars you're in a niche market anyways, so why not charge more boutique prices? I imagine this is the real explanation for the price.

As for patent expiration - yeah, it can be a beautiful thing for prices, but I'm still quite leery of picking up a cheap keyboard; quality parts that last (and more importantly behave consistently through their lifespan) still cost money.

speeder
People willing? Lots of them... people having the money? Not so much.

For example, I am from Brazil, currently without any work. But my peak income was 2000 USD a month, with 1000 USD going straight to paying student debts... I had other things to do with the money (like buy decent glasses; I can type on a crappy keyboard, but I can't type if I can't see anything).

Now if I had enough money for a keyboard, I would use it all to repay more debts.

And I am actually in the elite... the average Brazilian makes around 300 USD per month in total (and yes, the average Brazilian can't afford student loans).

melling
You're doing that thing where you're dragging the conversation to the developing world, which no one was talking about.

There are absolutely lots of people who can afford it.

Most of the 7 billion people on the planet would never buy it. There are, however, tens of millions of people who would.

Once any innovation is made, it'll become super cheap within a decade or so then even more people will benefit.

Consider that there are people who pay $10,000 for a mattress. http://www.marketwatch.com/story/the-175000-mattress-2013-04...

All you need is a sufficient market for a product. It doesn't need to be purchased by everyone in the world.

falcolas
Well, when you consider how many hours (days, years) are spent typing on a keyboard, and how important your hands are to your career - hence the bed analogy - $100 for a good keyboard is a steal. Even if it only lasts a year (many will last 5+), that's a pennies-a-day investment in your body and your career.

Put another way, would you expect a professional mechanic to use a $30 set of Harbor Freight wrenches?

speeder
You missed my point.

I am quite willing to spend 100 on a keyboard, I just literally don't have it.

When you are struggling to pay for rent, medicine, food, you don't have much choice... Many people go years with rotten teeth because they can't afford a dentist. It is obvious they need to go to the dentist, and they do want to; they just can't do it.

For example, the glasses I mentioned in my post are now very old and slightly outdated compared to my current prescription, yet I don't have the money to replace them either. So I am keeping them and hoping they will last a little longer.

falcolas
> I am quite willing to spend 100 on a keyboard, I just literally don't have it.

You're right, that is a very different situation than the one I'm referencing - where people do have the money (or could make their health a priority) but still use crappy keyboards and wonder why their hands ache.

Dec 16, 2016 · 2 points, 0 comments · submitted by wener
I'm disappointed that we haven't made more progress with voice interfaces. Many people are happy with the keyboard/mouse as their primary input devices, but combined with subtle gestures like you might get with Google's Soli, we could have a better user interface for everyone.

http://www.youtube.com/watch?v=0QNiZfSsPc0

gurkendoktor
Good trackpads/touchpads with gestures aren't that far away from Soli, once you accept that people are lazy and don't want to fight gravity with their hands.
melling
How's that going to work with VR?
Someone could devise a more subtle set of movements for text entry. Google's Soli would recognize small movements:

http://www.youtube.com/watch?v=0QNiZfSsPc0

There are others: https://github.com/melling/ErgonomicNotes#gesture-computing

Apr 09, 2016 · melling on Running Emacs on Android
Now we need a gesture keyboard so we can type, or twitch, instead of using an onscreen keyboard.

Finger IO: http://fingerio.cs.washington.edu

Google's Soli: http://www.youtube.com/watch?v=0QNiZfSsPc0

Perhaps a modified game controller: http://www.amazon.com/iGrip-Ergonomic-Keyboard-by-AlphaGrip/...

agumonkey
I've seen people being proficient with complex GUIs (Final Fantasy) on game pads. I'd love to lisp on a paddle.
Really interesting.

Reminds me of SOLI (which is radar rather than sonar): https://www.youtube.com/watch?v=0QNiZfSsPc0

Is there a way of trying this out? I know it'd only be demo line drawing applications but it'd still be interesting to try.

You can provide audible feedback when you move your hands. Many gestures don't provide any tactile feedback: move your hands or fingers and something happens on the screen. Google's new chip could probably help interpret a series of gestures:

http://www.youtube.com/watch?v=0QNiZfSsPc0

Paper notebooks might be. http://www.catholiceducation.org/en/culture/art/inside-leona...

The cell phone, etc will be a much bigger extension. However, we still need better ways to input, organize and access the information. Handwritten notes, voice memos, pictures, drawings, unstructured and structured data, etc.

Cortana/Siri with hand tracking ( http://www.youtube.com/watch?v=0QNiZfSsPc0 ) and a HoloLens will allow people to organize everything in creative ways.

"The keyboard is the only thing I need, except for the glasses."

I think this loses a lot of appeal until we can throw away the keyboard. We should be able to do better. Hopefully, Google ships this silicon this year:

https://www.youtube.com/watch?v=0QNiZfSsPc0

I'd rather have those monitors and no keyboard.

Animats
Now that's a step forward. Gloves in VR without force feedback have sucked. The insight here is that there are new kinds of gestures, such as touching fingers together and rubbing them sideways. They're able to sense those gestures with Doppler radar. They probably don't even try to resolve the gestures into positions; they may just throw a machine learning algorithm at the problem of recognizing the gestures from the Doppler radar outputs.

This starts to make VR look useful for something other than roller coaster simulators.

Google has a nice demo of a volume control knob. Will this scale up to a virtual mix board?
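A rough sketch of the recognition approach Animats suggests - classify gestures straight from Doppler-domain features instead of reconstructing positions. This is a toy illustration on synthetic stand-in data with made-up gesture labels; real Soli signals and training sets are not public:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def doppler_features(iq_samples, window=64):
        # Short-time FFT magnitudes over the radar return: each window's
        # spectrum captures the velocity (Doppler) content of the motion.
        usable = len(iq_samples) // window * window
        spectra = np.abs(np.fft.fft(iq_samples[:usable].reshape(-1, window), axis=1))
        return spectra.flatten()

    rng = np.random.default_rng(0)
    X = [doppler_features(rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
         for _ in range(300)]
    y = rng.choice(["rub", "tap", "swipe"], size=300)  # stand-in labels

    clf = RandomForestClassifier().fit(X, y)  # learn gesture from Doppler features
    print(clf.predict([X[0]]))

The appeal of this design is exactly what Animats notes: the classifier only has to separate a small vocabulary of motion signatures, which is a much easier problem than recovering finger positions from radar.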

Gesture based computing is almost here in some form. Apple, Google, Intel and Microsoft are all working on it:

Apple: http://9to5mac.com/2016/02/02/apple-proximity-sensors-patent...

Intel Real Sense - http://www.intel.com/content/www/us/en/architecture-and-tech...

Google's Soli chip - http://www.youtube.com/watch?v=0QNiZfSsPc0

Microsoft shipped the Kinect 6 years ago.

The need for hand tracking with VR headsets should give it another boost.

By the 20th anniversary of Minority Report?

http://youtu.be/7SFeCgoep1c

rm_-rf_slash
Gesture input is ultimately pointless without force feedback, or else you don't feel the interaction and the lack of intuitive feeling makes you want to go back to comfortable interfaces.

I remember being really excited for the Wii and swinging my sword for the first time in Legend of Zelda. My sword was blocked; my hand kept moving. Immersion gone.

melling
What you have done is give an example where it would be an improvement to have force feedback. It's a common fallacious way to try and disprove something.

It does not mean that this is the general case. Actually, it doesn't matter if it is. If there are a dozen uses without feedback, and 3 dozen uses with feedback, it's still a big win to get the first dozen uses.

rm_-rf_slash
It is a big loss regardless: if the lack of convenient feedback means the application is never used, it doesn't matter that force feedback isn't necessary for it to function.
melling
Yes, it's a big loss if you have nothing now and add a solution for some people. But since it can't meet the needs of everyone... </sarcasm> You want it all or nothing. That seems unreasonable.
rm_-rf_slash
As Tim Cook said to me, if you make something that doesn't change behavior, it's a gimmick, and it won't last.

If motion sensors have an application - perhaps for people with disabilities - by all means go for it. But innovation for its own sake can be a waste of time.

AndrewKemendo
I like that quote. Thanks.
melling
Do you mean like speech recognition before it's 100% ready? It's pretty limited now and I've noticed that Siri is easily confused and people seem to have to repeat themselves quite often.

Obviously, motion sensors have a lot of use without force feedback. Feel free to wait until that point. Telling the rest of us that we don't need it seems pointless.

xjay
The imaginary eWii would trigger electroshocks on collisions, causing short muscle spasms.
hammock
I don't believe that's true; it may depend on the action.

I'm very comfortable pointing a person or animal where to go, and the lack of force feedback doesn't make me want to go up to them and push them to where they ought to be. A bit different than wielding a sword.

No one said that the solution has to be all speech. Throw in some small gestures and that'll be faster than reaching for a key.

http://m.youtube.com/watch?v=0QNiZfSsPc0

Throw in eye-tracking and you can pick out any word quickly.

taurath
Totally right - I think, however, it will be a long time before things become so natural, especially with the concept of writing as a whole. I'm sure you'll be able to find specialists with any interface who can perform far faster than even experts on another interface. I'm not being cynical here, but I think the keyboard is an excellent interface right now. Neural feedback and direct thought transcription seem like the most likely things to replace it, but they might also require far too much attunement. Consider all the ways that humans communicate - writing and text have survived as long as they have for a reason.
It would be great if some of the repetitive tasks didn't even require a keyboard but rather use subtle gestures, eye tracking, and voice commands. Leap Motion, for example, never panned out but Google's Soli project might get us there: https://www.youtube.com/watch?v=0QNiZfSsPc0

Throw in better autocompletion (http://www.benkuhn.net/autocomplete) and software development becomes more precise and less repetitive.

smt88
I find autocompletion in JetBrains and Visual Studio (disclaimer: haven't tried many others) to be outrageously good. Sometimes, in a dynamic/weak language, I wonder how the hell it can know what to suggest.
melling
There's probably an even more advanced level of autocompletion that could be achieved. If I type this Swift code, how much autocompletion would I get?

let helloWorld = "hello world"

Basically, there needs to be an additional English word dictionary added to the process. And understanding camel case would be killer.
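For the camel-case half of that wish, here is a sketch of the "camel humps" matching that JetBrains IDEs already do, where "hW" finds helloWorld; the symbol list is made up for the example:

    import re

    def camel_words(identifier):
        # helloWorld -> ["hello", "World"]
        return re.findall(r"[a-z]+|[A-Z][a-z]*|[0-9]+", identifier)

    def matches(query, identifier):
        words = camel_words(identifier)
        parts = re.findall(r"[a-z]+|[A-Z][a-z]*", query)
        if len(parts) > len(words):
            return False
        # Each query part must prefix the corresponding identifier word.
        return all(w.lower().startswith(p.lower()) for p, w in zip(parts, words))

    symbols = ["helloWorld", "helpWanted", "handleWindow", "hello"]
    print([s for s in symbols if matches("hW", s)])    # the three *W identifiers
    print([s for s in symbols if matches("heWo", s)])  # just helloWorld

The English-dictionary idea would slot in on top of this: rank candidates whose humps are real words higher, or expand abbreviated query parts against the dictionary before matching.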

PopeOfNope
Good, but slow. Even most of the autocomplete solutions I've used in Vim fall into that category. The only one that doesn't is Exuberant Ctags, and that's because it's not intelligent; it does tag lookups and nothing more. If you have two classes with the same function name, it can't tell which one you want. Which is too bad. Asking for fast autocomplete doesn't seem unreasonable in 2015. Then again, between PHP, HTML, CSS and JS, Zeal takes up many gigabytes of my hard drive. The WordPress docs are even bigger. Maybe it's not that autocomplete is slow; it's that the data set it has to operate on is increasing at an alarming rate.

Slightly OT food for thought.
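For contrast, the reason a pure tag lookup is fast but "not intelligent" fits in a few lines: it is just a name-to-locations index, so two classes with the same method name are indistinguishable. A minimal sketch, with file names invented for the example:

    from collections import defaultdict

    tags = defaultdict(list)
    tags["save"].append(("post.php", 12))  # Post::save
    tags["save"].append(("user.php", 40))  # User::save

    # O(1) lookup, no parsing of the call site...
    print(tags["save"])
    # ...but both candidates come back, and without type information
    # the editor can't tell which one you meant.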

spdionis
How is JetBrains autocomplete slow? It's instant for me 90% of the time. The other 10% always happens in the same patterns, and I already expect it not to work immediately there. The rest of the time I just expect the autocomplete to be there and keep typing without thinking about it.
PopeOfNope
What type of development do you do with it? Like I mentioned, I use it for WordPress development. That means on any given keystroke, IntelliJ has to find the right symbol buried in gigabytes of data, since any given PHP file can contain HTML, CSS, JS, PHP, and/or WordPress code. Not to mention the WordPress plugins I'm developing are (imho) unnecessarily large. Half a second to several seconds is the typical autocomplete time for me, with the time increasing the longer it's open.

Just writing this makes me long for the good old days of HTML 4, CSS 2, and JS only being used to animate mouse trails.

smt88
I've worked on the same stack in PhpStorm, and autocomplete was instant for me.
spdionis
I use PhpStorm and write in Symfony. All framework-related plugins are enabled and even those work instantly. And I have just an average laptop.

Maybe your problem is lack of RAM. JetBrains IDEs need enough RAM to work properly; otherwise they do get slow.

smt88
Slow storage (HDD instead of SSD) would also make a huge difference.
spdionis
I have an HDD
PopeOfNope
23 GB of RAM, a modern i7, an SSD. I was doing development on Vagrant via VMware, which slows things down, but not that much.

Maybe the deciding factor here is that what we consider to be "instantly" is different. I do most of my work in Vim in a terminal and find most web-based apps[0] unbearably slow.

[0]: I'm not saying IntelliJ is web-based. I'm using it as a common metric.

Sep 14, 2015 · 1 points, 0 comments · submitted by antr
Sep 12, 2015 · 3 points, 0 comments · submitted by gmays
Sep 10, 2015 · 1 points, 0 comments · submitted by todd8
This tool is for people who can't use their hands, of course.

For the average person, we're pretty close to being able to use voice instead of typing.

https://www.extrahop.com/blog/2014/programming-by-voice-stay...

http://ergoemacs.org/emacs/using_voice_to_code.html

Throw in eye tracking and precise gestures (http://www.youtube.com/watch?v=0QNiZfSsPc0) and the keyboard isn't necessary.

learc83
There's also this http://voicecode.io/

I'm waiting for the windows version to be released, so I don't have any first hand experience. The videos look promising though.

I use Dragon and python extensions to supplement typing for now.

hasenj
Thanks for the Project Soli video!

Mind blown!

tcdent
Instead of coming home from the office with Carpal Tunnel we'll be coming home with hoarse voices. Color me skeptical.
melling
Some people will gladly take a hoarse voice. Ever see someone have to resort to using their nose?

http://www.looknohands.me

Sep 10, 2015 · 1 points, 0 comments · submitted by nikropht
How long before gestures move off the screen surface? This technology got off to an exciting start but it quickly hit a wall. It was disappointing, for example, when Microsoft had to unbundle the Kinect.

Microsoft's Kinect - 2010

Leap Motion - 2010

Intel Real Sense - http://www.intel.com/content/www/us/en/architecture-and-tech...

Google's Soli chip - 2016 - http://www.youtube.com/watch?v=0QNiZfSsPc0

Google's Touch Sensitive Fabric: http://www.wired.com/2015/05/google-atap-project-soli-gestur...

drzaiusapelord
The Kinect as-is really had no future. It was too limited for gaming and too clunky for anything else. I do think the tech is great - if not amazing - but only if it's part of a larger pie. MS's HoloLens with a Kinect reading your body/hands/face/fingers makes a lot of sense to me. AR that's aware of your every move can be big, perhaps even a game changer in some/many industries.

At my desktop, I really don't need anything like Kinect. I can type on a keyboard or use a mouse. Or swipe on the nearby screen. Gesturing is clunky in those scenarios. Now put an AR headset on my face and set me loose. Now I need some kind of remote input that I don't carry around. That's something AR needs and MS has a major lead here.

i_am_ralpht
When the basic essential interactions like "select" feel natural and unambiguous -- holding your hand in place for Kinect wasn't a good replacement for "click"/"tap".
mcphage
The future is fingerguns. "Pew pew to open program".
Qworg
I've done a lot of full-3D gesture work. Gestures work well, but people are finding it hard to come up with workable interactions worth caring about/spending battery on.

Google's new stuff had an opportunity to move gestures off the screen, mostly because it is low-power and localized rather than high-power, installation-scale, and invasive.

Better pen input would be great, especially for drawing.

How about some form of chorded keyboard or glove? A chorded keyboard was demonstrated in 1968 by Engelbart:

http://web.stanford.edu/dept/SUL/library/extra4/sloan/mouses...

Google's Soli project might lead to better input:

http://youtu.be/0QNiZfSsPc0

abawany
I have been using Windows Tablet PCs since 2003 (HP, Toshiba, and now Fujitsu).

The (typically Wacom) stylus works very well and, with OneNote, allowed me to take extensive class notes across two master's programs.

OneNote also interprets the handwriting, so it was quite the pleasant surprise to find that my scribblings were somewhat searchable.

dpflan
Soli looks very interesting. There is Smarter Objects from MIT Fluid Interfaces that is similar in nature, but has a more dynamic approach. It's certainly not as simple as Soli seems because of the augmented-reality component it uses. But if Google brings back Glass and ships Soli in IoT/smart household objects, then you'd have the same result.

--EDIT-- Link to MIT Fluid Interfaces Smart Objects: http://fluid.media.mit.edu/projects/smarter-objects

Jun 16, 2015 · 3 points, 2 comments · submitted by awjr
dang
https://news.ycombinator.com/item?id=9625786
awjr
"Project Soli is developing a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects."
Jun 10, 2015 · 3 points, 0 comments · submitted by rufus42
Jun 02, 2015 · 2 points, 0 comments · submitted by uptown
May 29, 2015 · 57 points, 8 comments · submitted by mmastrac
earleybird
Winner of PopSci "What's New" award, 1994 [0]

[0] https://str.llnl.gov/str/pdfs/01_96.2.pdf

alirazaq
They say the hardware is ready, so when can we expect developer kits?
obulpathi
If someone were to integrate this into a smartphone/smartwatch as an input device and customize the software, it would be a killer project.
SergeyHack
I wonder what new possibilities for total spying this approach enables.
pbreit
I wonder how this compares to Leap Motion and Kinect?
joeyo
The most obvious difference is that it doesn't require line-of-sight. The spatial resolution is presumably better as well.
digi_owl
It seemed capable of telling fingers apart when rubbing against each other, through a sheet of paper or similar.

I suspect they could embed this behind the screen of a smartwatch and allow broader motions in the air above the watch to select objects on the smaller watch face. This in particular because it can detect 3D location, so it can tell a press (moving closer to the screen) from a selection (horizontal and vertical motion).
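A small sketch of that press-vs-selection distinction, assuming the sensor yields a stream of (x, y, z) hand positions. The threshold rule here is invented for illustration; a real system would presumably learn the boundary:

    import numpy as np

    def classify(positions):
        deltas = np.diff(np.asarray(positions, dtype=float), axis=0)
        lateral = np.abs(deltas[:, :2]).sum()  # horizontal/vertical motion
        toward = np.abs(deltas[:, 2]).sum()    # motion toward the screen
        return "press" if toward > lateral else "select"

    print(classify([(0, 0, 10), (0, 0, 7), (0, 0, 4)]))    # press
    print(classify([(0, 0, 10), (3, 1, 10), (6, 2, 10)]))  # select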

mikhailt
I wish the title was more useful. For those wondering what Soli is:

> Project Soli is developing a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.