HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Pointing to the future of UI | John Underkoffler

TED · Youtube · 118 HN points · 2 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention TED's video "Pointing to the future of UI | John Underkoffler".
Youtube Summary
http://www.ted.com Minority Report science adviser and inventor John Underkoffler demos g-speak -- the real-life version of the film's eye-popping, tai chi-meets-cyberspace computer interface. Is this how tomorrow's computers will be controlled?

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes. TED stands for Technology, Entertainment, Design, and TEDTalks cover these topics as well as science, business, development and the arts. Closed captions and translated subtitles in a variety of languages are now available on TED.com, at http://www.ted.com/translate.

Follow us on Twitter
http://www.twitter.com/tednews

Check out our Facebook page for TED exclusives
https://www.facebook.com/TED
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Which reminds me of this TED talk from 12 years ago, https://youtu.be/b6YTQJVzwlI Pointing to the future of UI
Jun 05, 2010 · 118 points, 83 comments · submitted by chuhnk
fjabre
Everybody seems to be stuck on hand gestures and arm movement for the future, but while this looks cool I wonder just how comfortable it is to keep your arms waving about like that for hours on end. Also, it's hard to argue that 3D is always superior to 2D when presenting information. In some cases 2D is more than sufficient.

I also wonder why there isn't more talk about brain–computer interfaces. It seems to me that the most natural UI is one that can be navigated just by thinking. It might be a little Borg-like, but I can't imagine HCI going in any other direction long-term.

*http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interfac...

gojomo
The way things are going, the first mass-market UI that can be "navigated just by thinking" will probably require users to think in Mandarin.
Groxx
I've generally thought that too, gestures (especially when your arm is involved) are hugely tiring if you have to do them for a long time on anything larger than an iPhone. And 3D imposes extra thinking. Subtle 3D could work - we have minimal already, with layered windows and shadows and "3D buttons" - but nothing drastic.

And my hopes too are for brain interfaces, though I think I've got a pretty good idea of the difficulty inherent in that. I can hope, right?

inimino
Physical inactivity (sitting at your desk all day moving only your fingertips) is implicated in many of the health problems most hackers are likely to suffer from and eventually die from. I think many of us would welcome the opportunity to stand up and move around a bit while still getting our work accomplished, even if it wasn't the only interface we used.

BCIs, on the other hand, would mean you can potentially finish that software module, or at least read your email, while simultaneously going for a run ...as long as you don't run into a tree or the road.

donw
I'm going to vote for no on that; at least for me, running takes a lot of focus, to the point where ditching the iPod helped me become a stronger runner. I had always run with music before, but I found that if I focused on the run itself, I could push myself harder.

Weightlifting and climbing work the same way, in my experience, and I can't imagine that my limited martial arts training would have been any better if I had something else to focus on other than not getting punched in the face.

Our society is obsessed with trying to do everything at once, rather than giving full focus to each activity in turn. Personally, I'd rather that technology reduce the busy work, so that we can focus more directly on each activity, rather than trying to amalgamate them together all the time.

musclman
Sure, it looks cool, but it appears to require a lot of slow and inefficient physical movement to accomplish the most basic of tasks. Imagine a bunch of cubicle workers sitting at their desktops waving their arms around trying to navigate their computer's file system :-)
josh33
All technology feels lousy out of its time. He's giving the hackers/entrepreneurs the tech. It's up to us to create some valuable/entertaining/necessary functions out of it.
uptown
Here's a video demo of Project Natal in use that looks exhausting.

http://www.youtube.com/watch?v=Jm0KKa6wACQ

ghempton
It's kind of crazy how well Project Natal seems to work, especially considering that in the TED talk, gloves were required.
jules
It looks like being exhausting was the point of the application.
JesseAldridge
Holy Christ. The game seems designed to make people look like spastic dorks.
whimsy
No kidding. There was a comic about this a while back.

http://okcancel.com/comic/3.html

ryanjmo
Imagine the repetitive stress injury. My body hurts from just using the Wii.
EAMiller
Agreed, every time I see this my arms feel tired.
mechanical_fish
Keep in mind that, like most of the things designed at the Media Lab, this was built to look really good in demos. The movements are big and inefficient so they will show up well on stage and on camera.

In the real world the movements might be much more subtle. Certainly no broader than, say, American Sign Language, which is quite analogous to what we are reinventing here.

lkozma
Sorry, I'm skeptical about that. The movements are big so that they will be captured and correctly interpreted by their pattern recognition system. I think they would not be able to detect very subtle movements like ASL. It is ingenious to claim that it was made artificially big and clumsy for the audience.
jexe
Not so sure that these huge gestures are necessary, depending on the technology - you can play games perfectly fine on the Wii with little more than the flick of your wrist.
lkozma
But there the device itself has accelerometers, whereas here, at least if I understood it correctly, the camera has to capture the markers on the hand, which is much more noisy.
stanleydrew
I think you mean disingenuous.
lkozma
Thanks, I did mean that :)
tgandrews
Moving physical objects makes sense, touch makes sense. Learning a series of gestures to manipulate data on a screen makes less sense to me than a mouse.

The mouse is movement and control; the gestures need you to hold your hands in strange positions and move within a field of view (what the camera can see). An improved design will need to be intuitive.

jules
What would make sense, I think, is just using your hands on the table instead of using a mouse. Touchpads do some of this, but they don't work very well and the surface is usually very small.
Scriptor
The brain is able to make itself think that external tools are mere extensions of the body, so it's no surprise that the mouse has been successful. I wonder if the lack of any tactile response in these interfaces hinders them somewhat or if it's all the same for the brain.
johnthedebs
"It has to be for every human being...It's been 25 years. Can there really be only one interface? There can't."

I love that, and I totally agree. I think what we're going to see in the future is applications that primarily use the interfaces they're best suited for with fallbacks to other less well-suited interfaces.

Is this going to be the future for every application? No way. The same way touch-based interfaces aren't the future for every application. But (as with any demo) he's only scratching the surface here and I believe that a UI which matches the way we already think about things has some huge implications.

treblig
At the end he mentions "[In 5 years time,] when you buy a computer, you'll get this."

I think that's about 5 years too soon on that one. Incredible demonstrations though... awesome to see science fiction become reality.

stcredzero
All you need is a large display and a webcam. Isn't Microsoft already doing this for gaming? The enhanced resolution Wii controller with a large flatscreen already has most of this capability.
evo_9
Yes, Microsoft is releasing Project Natal (rumored to be renamed Wave) for the Xbox this September. It uses a similar array of cameras to track multiple users for gaming, and it can be used to navigate the 360's UI.

Should be interesting for gaming, but as in general computing applications, this type of control mechanism isn't suited as a complete replacement for a mouse or a joystick.

ams6110
Sigh. In 5 years' time I'm quite certain I'll still be spending 90% of my time in Emacs or a shell, just like I was doing 5, 10, 15, 20 years ago.
bruceboughton
I find the conclusion of the talk hard to swallow: that these sorts of interfaces will be common in the computer you buy 5 years from now.

Why? This goes against the current major trend in the industry: mobilisation / pocket-isation. It is inconceivable that our built environments will have the required sensors, projectors, etc. to enable these interfaces. Even more than that, our computing is becoming ever more mobile. Computing has to fit our environment, not the other way round.

Maybe this stuff is the future, but it's certainly not the near future and I didn't really see much value in the interfaces demoed.

Then again, the point of R&D is to discover what doesn't work as much as what does and you can't do that without realising your ideas.

est
I saw similar demos from Microsoft Surface and MIT SixthSense
RevRal
The thrilling potential of SixthSense technology: http://www.ted.com/talks/lang/eng/pranav_mistry_the_thrillin...

I haven't seen the one by Microsoft. I really recommend watching this, it's pretty intense.

what
That's pretty amazing. He said it's easy to build and he wanted to open source the code, so that people can build their own. I hope he actually does.
mambodog
It's a shame this thing didn't get more discussion the first time around (i.e., here: http://news.ycombinator.com/item?id=944923)

To my mind, this idea seems much more realistically applicable than the one that Underkoffler presented. Partly because Pranav Mistry just has much better ideas for applications of the tech, perhaps because his designs seem more application oriented.

Maybe the submitters need to work on their linkbait titles (like 'The Future of UI').

megaman821
That is awesome. Using a projector is novel, and as technology advances I could see that being replaced by clear LCD glasses, contact lenses, and perhaps bionic implants.
elblanco
I'd trade the projector for a pair of glasses with a transparent display.

The true power of this is really in the software.

It's amazing, stunning, work.

daralthus
The potential is that these applications react to the world you are in. For example, the knowledge that there are products on the shelf and I want the cheapest and the best. The other thing is that I don't need to carry a camera, a phone, a book, a laptop, an iPod, or things like paper, pencil, wallet, puppy, drum, guitar, TV, a different-looking shirt, shoe, wallpaper... So many things could be virtual. (I mean augmented with the real world.)
daralthus
This is still 2D. You can't interact in 3D if it is just projected in 2D. He is just pointing and flying. You don't have to fly between documents if you have real 3D augmented reality. Apps would be like real 3D objects: you can touch them and manipulate their shapes like you do with your keyboard or a door-knob; the only difference is that they won't be made of real material. So they won't have physical boundaries, if not programmed that way. (It just depends on the needs.) I want to make a demo of it; is there somebody who wants to join?
bruceboughton
It's quite a scary thought that with a truly 3D/AR computing experience, we might not be able to tell which elements of our environment are Real World and which are virtual.

Imagine a virus that injects fake flooring into your vision where instead there is a 40ft fall!

daralthus
Wow, what a cool crime sci-fi that would be... But there are always good and bad things in a new technology. Imagine all the junk that can be saved with virtual stuff when people buy the new and throw away the old.
Tycho
I'm not so sure about all'a'dat (although I do remember thinking the stuff in Minority Report was awesome, years ago) but I can see a need for 3D/depth augmentation of standard desktop interfaces. I want to be able to tuck windows away 'in the distance' or twist them round to a slanted pane (so they take up a fraction of the space but are still more or less visible/legible). I also want to step in and out of '3D mode' when making ER diagrams or UML diagrams and such, for when there's too many criss-crossing lines.

Undoubtedly these things have already been tried (I saw a nice Linux demo somewhere, years ago, with 3D windows) but I'd like them standardized, and touch-operated. It'd make a big difference IMO.

Periodically when using a cluttered interface I mutter 'this is why the need for 3D is so great' and my colleagues laugh at me. But I'm only half joking.

koeselitz
"We didn't have networks at all at the time of the Macintosh's introduction." Seriously? Seems like that estimation might be about fifteen years off to me.

Edit to say: In fact, he's an intelligent and well-versed enough guy that I'm sort of puzzled by this remark. Does anyone know what Mr Underkoffler means when he says that networks weren't around then? I think he must have something different in mind than I do.

stcredzero
I thought there were Ethernet LANs at Xerox PARC in the '70s.
inimino
I presume "we" there means the general market to which the Mac was introduced. Ubiquitous Internet access was years away.
ghempton
He kind of fumbles on the question of "what is the killer app?" You'd think it would be on the tip of his tongue considering how long he's been working on this...

That said, there really needs to be more open source software to enable this front. I think we will really see some innovation once a hacker can take a $10 webcam and an open source lib and start creating software with these types of user interfaces.

davidalln
A lot of this has been done with openFrameworks (http://openframeworks.cc/). It ties together a plethora of open source frameworks, including OpenCV, with easy-to-use bindings that amount to a somewhat simplified version of C++. If you search YouTube for openFrameworks, you'll find a lot of demos demonstrating this technology using free libs.
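As a rough sketch of the "$10 webcam plus an open source lib" experiment described above, the crude core of camera-based motion detection is just frame differencing plus a centroid of the changed region. This toy version uses NumPy on synthetic frames (it is not openFrameworks; in practice the frames would come from a webcam, e.g. via OpenCV):

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Boolean mask of pixels that changed between two grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_centroid(mask):
    """Centroid (row, col) of the moving region, or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic frames: a bright "hand" blob appears on the right side.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 100:120] = 255

print(motion_centroid(motion_mask(prev, curr)))  # → (49.5, 109.5)
```

Tracking the centroid across successive frames gives a crude pointer; recognizing actual gestures (the hard part) means classifying centroid trajectories or hand shapes on top of this.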
pedrokost
What I really hate about operating systems is that they are the same as they were in the beginning. The only thing that changed was the visual appearance. We still have a taskbar, a desktop, etc. Can't someone reinvent how operating systems work?
pmarin
http://cm.bell-labs.com/plan9/
frou_dh
I respect the chops of those creating these things but I just don't feel like I'd want to use them daily. Perhaps I'm already locked in to a legacy mindset by my mid 20s!
ryanjmo
This talk seems like a whole bunch of 3-D snake oil.
elblanco
Nice first effort, but after watching all I can think is:

arms = tired

looks like a clumsy, highly particular and low volume way to sift through data

donaq
5 years seems way too optimistic for this stuff to appear in consumer products. I'm betting no.
georgieporgie
The gestures presented in the video look like a creative way to replace the tendinitis in my wrists with a variety of shoulder, neck, and hip issues.

I think the most compelling UI development in recent years is the Wii remote. Motion sensitivity and IR pointing in one device, and it's in the hands of consumers, conditioning them to expect a new level of interaction. Like everything we thought the PowerGlove would be, but without the fanfare or bad movie.

I anticipate laptops with multiple cameras built-in for better sensing of gestures and eye movements. In fact, I think I'm going to pick up an extra webcam or two and play around with just that...

sdfx
These new UI concepts are always awe inspiring, but what is really the benefit of using them?

One of the most popular demonstrations is browsing through a bunch of photos on a huge screen with large gestures. While this looks quite impressive, it's inaccurate when performing specific tasks (e.g., the color-selection planes in the video), the gestures are limited, and you have to learn somewhat unintuitive ones apart from the more obvious "select", "move left" and "zoom" gestures. His more real-world example (the table and the globe) didn't quite work, and what he could show us wasn't a step up from using a mouse.

Another favorite is the "physical elements on a table" example. This works reasonably well, but his examples again were not convincing. Using it as a wind tunnel without being able to rotate it in three dimensions? Calculating the shadows of buildings?

But what's holding us back? Processing power? Cost? Hardware requirements? Or a general lack of use cases, of areas where this really makes sense?

DannoHung
I don't know. Why did it take so long for touch screen interfaces to become huge when almost everyone loves them?

On the other hand, why are command line interfaces still the most efficient way for experts to interact with a system?

jacobolus
> why are command line interfaces still the most efficient way for experts to interact with a system?

This is far from generally true. I defy you to make a command line interface for performing a piano concerto, painting a landscape, or flying an airplane.

alain94040
True. I also don't really feel the need for a command line on my iPhone.

My point being that yes, command line interfaces are the most efficient way to interact with Linux, but elsewhere, not so much. Maybe it means that Linux was designed for the command line.

The fun fact is that deep inside, the iPhone runs some variant of Unix.

hernan7
Well, if we are talking about computer interfaces, the more sophisticated graphic and music programs have some kind of scripting capabilities. Think Photoshop, AutoCAD, C-Music...

If you are talking about computer vs. non-computer interfaces, then yes, you are not going to play a piano concerto with a mouse either.

jacobolus
Who said anything about a mouse? The three interfaces I was thinking about were (a) a piano (that is, the keys and pedals), (b) a brush, a canvas, and some tubes of paint, and (c) a joystick and a few walls of dials and switches and buttons.

If you want to put things directly in a computer, you probably want a digital piano keyboard, a graphics tablet, and a joystick + keyboard.

Scripting it is an alternative way to interact with, e.g., Photoshop, but you wouldn’t want to “paint” a digital picture by typing in some JavaScript to direct it.

bad_user
> but you wouldn’t want to “paint” a digital picture by typing in some JavaScript to direct it

A painting, no, but the architecture of a building is better if it's built using a CLI ... because you need control over every detail of your construction. Just look at AutoCAD sometime.

Also, for playing the piano you first need to read a sheet of music that describes your melody.

jacobolus
> [...] the architecture of a building is better if it's built by using a CLI interface [...]

Bullshit. The architecture of buildings is refined from models made in clay, drawings made on paper, people walking around the physical site, and annotations to photographs. Eventually, the building is realized with cranes and hammers and two-by-fours and drywall.

There’s potentially one step in there where each part is precisely specified numerically, and you could maybe make that step easier by typing into a CLI. But saying that’s “the interface”, or “a better interface” to the design or construction of the building is an almost impossibly myopic analysis.

bad_user
> why are command line interfaces still the most efficient way for experts to interact with a system?

That's an easy one to answer.

It's for the same reason humans developed natural language by sounds emitted by vocal cords, with a pretty simple mechanism ... you've got a vocabulary of words that describe something, like an action, or an attribute or a physical object, and then you can mix them together to form phrases, with multiple phrases used to describe entire plots.

So you're not limited in any way, being able to communicate designs, strategies, emotions, just with words ... and the combinations you can come up with are infinite, the more you talk and write, the more skilled you get in communicating.

It was the most efficient way it could happen ... there's only so much you can describe with hand gestures without the movements becoming unintuitive.

So it is with command-line interfaces ... you've got commands you can issue perfectly described by words. And you can mix and match them however you like in totally unpredictable ways.

It's not natural language, but it's a lot closer than graphical interfaces. And a lot of the work programmers or sysadmins do is just story-telling. Personally I don't see any way around that unless we are talking about very limited niches.

The question is why don't normal people do it? Well, 10 years ago very few people could type. Now 12-year-olds do it efficiently, and typing on a computer is quickly replacing hand-writing. I believe that in the future most people will be familiar with a programming language / CLI (while also having more advanced graphical interfaces).

Sooner or later everyone will, for the same reason humans learned how to speak ... evolution didn't favor those who didn't. And learning a natural language is the hardest thing humans do ... we just don't notice it anymore since we've been learning it since birth and it's in our genes already.
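The composability described above is easy to make concrete. The snippet below (a generic illustration, not anything from the talk) chains small single-purpose "word" functions into a "phrase", the way shell commands compose through pipes:

```python
from functools import reduce

def pipe(data, *stages):
    """Thread data through a sequence of small single-purpose functions,
    the way a shell pipeline threads text through commands."""
    return reduce(lambda acc, stage: stage(acc), stages, data)

words = ["move", "copy", "move", "open", "move"]

most_common = pipe(
    words,
    sorted,                                        # like `sort`
    lambda ws: {w: ws.count(w) for w in set(ws)},  # like `uniq -c`
    lambda counts: max(counts, key=counts.get),    # like `sort -rn | head -1`
)
print(most_common)  # → move
```

None of the stages knows about the others; the expressive power comes entirely from recombining them, which is the property graphical interfaces struggle to match.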

hernan7
Great post. You bring a tear to this old Unix hand's eye.
euccastro
> there's only so much you can describe with hand gestures without the movements becoming unintuitive.

Even more so for sounds. Most spoken words are no more intuitive than arbitrary gestures in modern gestural languages (which can similarly be articulated in powerful systems just like spoken languages).

Aural, visual, tactile and chemical channels each have advantages and disadvantages for different applications. I'm not convinced intrinsic 'intuitiveness' is one of these factors. In communication, intuition is mostly hardwired convention. Lowering ears and wagging tail has a different (and quite opposite) instinctive meaning for cats and dogs.

Scriptor
Command-line interfaces seem to involve two main parts: the commands themselves (ls, mv, rm, and so on) and the arbitrary parameters supplied to them (all files with the .txt extension, or compiler flags). For the former, hand gestures would work well since there is a limited and well-defined set of commands; the latter would be better served with regular text input. Even here the distinction gets blurred by commands created by applications you install yourself.

A combination of the two with instant switching between the two modes would be nice. But that still seems like an uncreative solution.
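The split suggested above, a fixed gesture vocabulary selecting the command and typed text supplying the arbitrary parameters, can be sketched in a few lines. The gesture names and the mapping here are entirely hypothetical:

```python
# Hypothetical gesture vocabulary: a recognized gesture selects the
# command; typed text supplies the free-form arguments.
GESTURE_COMMANDS = {
    "swipe_left": "ls",
    "pinch": "rm",
    "grab_drag": "mv",
}

def build_command(gesture, typed_args=()):
    """Combine a recognized gesture with typed arguments into a command line."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        raise ValueError(f"unrecognized gesture: {gesture}")
    return " ".join([command, *typed_args])

print(build_command("grab_drag", ["draft.txt", "final.txt"]))
# → mv draft.txt final.txt
```

The mode switch would then amount to routing input either to the gesture recognizer or to the text field, depending on which half of the command is being filled in.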

bad_user
> Aural, visual, tactile and chemical channels each have advantages and disadvantages for different applications

I wasn't talking about the method of delivery.

A wagging tail may not be intuitive, but moving a physical object from A to B by picking it up and dropping it is very much intuitive, not just by hardwired convention (both cats and dogs do it in the same way).

And the only rationale for the futuristic graphical interfaces we see demoed (including stuff like multitouch in iPhone/iPad) is based on the implicit intuitiveness of handling physical objects with your hands.

But that's just a niche ... beyond that you need a language for building stories out of composable phrases, and the demos of futuristic graphical interfaces I've seen just don't cut it.

levesque
There is benefit for a small subset of applications: applications which can gain from having 3D input. Most of the stuff he does in that demonstration is not of this category. But interfaces like that can be very nice for architecture (or computer-assisted design) and 3D data visualization (medical or other). Gaming is also bound to be an interesting application. It will be nice to see what Microsoft comes up with for Natal as well.

For general computing I am not sure there is a use for this kind of interface; this stuff takes a lot of space and is slow to set up, while we are going the other way - towards smaller and more portable computers.

Also, let us not forget that gestural interfaces are very tiresome and it would be hard to use one for a long period of time.

hop
Their incentive is to make it look cool, wow an audience, and bring more grant money to the MIT Media Lab. Contrast this with a company that puts the pieces together and ships a useful product - like the iPad UI...

I always thought crazy concept cars were a waste of time and resources for car companies. If they instead focused on massive in-house iteration (like Apple's 10-3-1 prototyping process), better cars would be brought to market.

watmough
Philip Greenspun had some pretty choice words on MIT Media Lab.

http://philip.greenspun.com/panda/suck

Scroll down to Example 3.

It sure looks like Greenspun's page, using simple formatting, has aged far better than anything the MIT Media Lab would have been doing.

jacobolus
The Media Lab does all kinds of neat research. The point of Greenspun’s argument is that the purely promotional website structure designed by generic managers, even ones who manage a lab that does neat stuff, is much less useful to its intended audience (potential grad students) than a website designed for some concrete purpose like explaining the first-person history of computer science.

You shouldn’t misrepresent his words to be an all-purpose indictment of the Media Lab.

andreyf
like the iPad UI

Imagine a 60 inch iPad hanging on the wall of your kitchen. Imagine being able to point and gesture instead of touching. Want to watch a movie while cooking? Switch back to the cook-"book" app when it's time to add the next ingredient, then go back to your film.

ahoyhere
Your arms would either be very fit, or very painful, after not long at all.
angrycoder
A 60-inch iPad I could see. But multitasking? That's just crazy talk.
MikeCapone
> I always thought crazy concept cars were a waste of time and resources for car companies. If they instead focused on massive in-house iteration (like Apple's 10-3-1 prototyping process), better cars would be brought to market.

Definitely. I've been watching the car industry for a few years now, and I'm still waiting for an "Apple" to emerge.

riffer
I've been watching the car industry for a few years now, and I'm still waiting for an "Apple" to emerge.

Give Tesla a few years

stcredzero
I thought Toyota had 10 parallel teams working on the hybrid power train, of which they picked 1.
watmough
A great point, but car companies do iterate. The iteration just occurs over model years.

Take the widely acclaimed Chevy Malibu as an example. This started off as a pretty cruddy GM, but now it's reputedly competitive with Japanese models.

I'm sure that hard deadlines for production, very strict cost controls etc., conspire to impose a more rigid than agile process.

Perhaps, take a look at UK car companies in the 50's for a looser approach to car design. http://www.aronline.co.uk/index.htm?ado15story1f.htm [Development of the Austin-Morris Mini]

elblanco
They iterate even more out of the public eye. There are endless reams of concept art that never even made it to scale-model clay. All of the major manufacturers have design houses staffed with people whose sole job is to draw neat-looking car ideas. Eventually a few float to the top and they make scale models out of clay (or, in this day and age, on rapid prototyping machines). Then those design studies may end up as full-sized, half-car (lengthwise) mock-ups made out of clay. If they survive that, they may even get made into a pure, non-functional concept car like we see at trade shows.

If the public reception is positive, they turn manufacturing engineers loose on the design; they cut out all the artistic junk that would cause the car to cost 4 times as much to make, and introduce safety devices and such that change the styling. It might make it to a functional prototype at that point, where they show it at next year's auto show; if reception is still good, it'll probably make it into production.
joelhaus
Wasn't sure what "Apple's 10-3-1 prototyping process" involves; it turns out that the title is pretty self-explanatory:

Apple designers come up with 10 entirely different mock ups of any new feature. Not, Lopp said, "seven in order to make three look good", which seems to be a fairly standard practice elsewhere. They'll take ten, and give themselves room to design without restriction. Later they whittle that number to three, spend more months on those three and then finally end up with one strong decision.

[Source: http://www.businessweek.com/the_thread/techbeat/archives/200...]

tomlin
Except mock-ups != prototype.
joelhaus
Picky, picky.

Semantics: http://en.wikipedia.org/wiki/Prototype#Semantics

tomlin
I'm picky because, as a coder, I have to personally deal with different variations of what people think "prototype" means.

One client thinks prototype means "mock-up", another thinks it means "ready to go" after a round of QA.

Surely you can see how this might affect a project overall?

joelhaus
No doubt. That's why specifying project requirements and writing proposals is usually so important for both client and coder. Managing expectations is key.
tomlin
Managing is a waste of time. Working smart is better.

It's simple. We have a word for mock-up or comp. We have a word for prototype which is...prototype.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.