Hacker News Comments on Dasher: information-efficient text entry
Google TechTalks · YouTube · 43 HN points · 5 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

⬐ alexstageint
RIP David MacKay. The world could really do with him right now.

⬐ lbotos
You can download it here: http://www.inference.org.uk/dasher/Download.html

------
The text below this line was completely written with daSher ! where is the period . Oh, there it is down at the bottom of the box .
----
Played with it for about 10 min, and I timed myself. 5 minutes to type the above. :P
I think it could be really fun if it learns my patterns and I learn it. I found that turning the speed up helped me get paths better, but as you can see the S was fighting me, and sometimes getting "back" felt tough, but it's the same with a keyboard I am just more used to it.
⬐ bradrn
⬐ tosh
Dasher is actually pretty easy to use. I installed it on my computer awhile ago but never really got around to using it. It's certainly a lot slower than touch typing though. This comment entirely written with Dasher.
minute 32: breathing as a one-dimensional input method (!): https://www.youtube.com/watch?v=wpOxbesRNBc&t=1920s
current version of Dasher in the browser: https://dasherv6demo.netlify.app
⬐ news_to_me
Incredible! I feel like there’s so much room for improvement for mainstream input methods. Amazing this existed in 2007 — too bad I can’t use it on my iPhone.

⬐ sidpatil
There's a Dasher Mobile app in the App Store. I had it installed for a while.
⬐ patrickthebold
Funny, I was just thinking today that this would be good for a smartwatch.

⬐ jFriedensreich
loved that demo back then but it seems swiping mobile keyboards ate up most of the non-disabled target audience. a bit off topic but it's weird how sentimental i get seeing that google tech talks intro. it's like a flashback to a past where google was still only the good guy and i still naively thought open source and open rest apis would inevitably lead to an open, connected and free digital world.

⬐ Nzen
tl;dr: demonstrates a software tool for typing by navigating through a probability-weighted letter cloud. The 'standard' navigation involves mousing up/down to steer toward letters and forward/back for speed and backspace. 'Standard' because they determined this can be useful for people for whom a QWERTY keyboard is a challenge (disability, or a non-Latin alphabet). Jump to 6:43 to see it in action.

This feels like a possible tool for getting through text adventures more easily. I find that I am not good at guessing the right words for something very free-form, like Emily Short's _Galatea_. With Dasher embedded, I could navigate through the language model weighted by the verbs she built the game around, with less artifice / fewer spoilers than choosing from the full list of words.
I was going to suggest maybe using this to augment my phone's swipe keyboard, such that, when it doesn't guess the word and the word doesn't show up in the three suggested alternatives, I could touch the side and flow through all the alternatives it thought of. But, like the author, I can see that mode switching is worse than pure swipe or pure Dasher.
⬐ neolog
Is it possible to get this on Android?
⬐ severine
Yes! Check http://www.inference.org.uk/dasher/MobileDasher.html
⬐ dang
Also on F-Droid: https://f-droid.org/packages/dasher.android/
A few past (tiny) threads. Maybe this one will get more attention:

Dasher: Information-efficient text entry (2007) - https://news.ycombinator.com/item?id=27724932 - July 2021 (1 comment)
Dasher: information-efficient text entry - https://news.ycombinator.com/item?id=807997 - Sept 2009 (1 comment)
A keyboard without keys - Dasher - https://news.ycombinator.com/item?id=132630 - March 2008 (4 comments)
Dasher: information-efficient text entry - https://news.ycombinator.com/item?id=16521 - April 2007 (2 comments)
There's also https://en.wikipedia.org/wiki/Dasher_(software).
I'd love a neural interface to Dasher, which adaptively learns as you use it which letter combinations are the most popular, and adjusts over time to make it easier and faster to input common text.

It would be wonderful integrated with a context- and language-sensitive IDE.
https://en.wikipedia.org/wiki/Dasher_(software)
https://github.com/dasher-project
http://www.inference.org.uk/dasher/
>To make the interface efficient, we use the predictions of a language model to determine how much of the world is devoted to each piece of text. Probable pieces of text are given more space, so they are quick and easy to select. Improbable pieces of text (for example, text with spelling mistakes) are given less space, so they are harder to write. The language model learns all the time: if you use a novel word once, it is easier to write next time. [...]
>Imagine a library containing all possible books, ordered alphabetically on a single shelf. Books in which the first letter is "a" are at the left hand side. Books in which the first letter is "z" are at the right. In picture (i) below, the shelf is shown vertically with "left" (a) at the top and "right" (z) at the bottom. The first book in the "a" section reads "aaaaaaaaaaaa..."; somewhere to its right are books that start "all good things must come to an end..."; a tiny bit further to the right are books that start "all good things must come to an enema...". [...]
>.... This is exactly how Dasher works, except for one crucial point: we alter the SIZE of the shelf space devoted to each book in proportion to the probability of the corresponding text. For example, not very many books start with an "x", so we devote less space to "x..." books, and more to the more plausible books, thus making it easier to find books that contain probable text.
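The space-allocation idea the quoted passage describes can be sketched as a recursive interval partition: each candidate next character gets a slice of the current interval proportional to its probability under the language model, and zooming into a slice repeats the partition for the next character. A minimal sketch, with a made-up probability table standing in for a real language model:

```python
# Sketch of Dasher-style space allocation: each candidate next character
# gets a share of the available interval proportional to its probability
# under a toy, hand-made model (a real model is context-sensitive).

def partition(interval, probs):
    """Split (lo, hi) into per-character sub-intervals sized by probability."""
    lo, hi = interval
    span = hi - lo
    slices = {}
    for ch, p in sorted(probs.items()):
        slices[ch] = (lo, lo + p * span)
        lo += p * span
    return slices

# Toy probabilities: 'a' is likely, 'x' is not.
probs = {'a': 0.5, 'b': 0.3, 'x': 0.2}

slices = partition((0.0, 1.0), probs)
print(slices['a'])   # (0.0, 0.5): half the space, the easiest target
print(slices['x'])   # only a fifth of the space, harder to steer into

# Zooming into 'a' and partitioning again allocates space for the
# second character -- exactly the nested bookshelf from the analogy.
inner = partition(slices['a'], probs)
print(inner['b'])
```

Running the partition twice makes the "books within books" picture concrete: the slice for "ab" is nested inside the slice for "a", and its width is the product of the two probabilities.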
The classic Google Tech Talk by the late David MacKay, the inventor of Dasher:
https://www.youtube.com/watch?v=wpOxbesRNBc&ab_channel=Googl...
It's based on the concept of arithmetic coding from information theory.
http://www.inference.org.uk/mackay/dasher/
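The connection to arithmetic coding is direct: steering through Dasher's nested boxes amounts to picking a point inside the interval an arithmetic coder would assign to your text, and the interval's width tells you the information content. A toy encoder, using an invented fixed character distribution (Dasher's real model is adaptive):

```python
import math

# Toy arithmetic-coding-style encoder: maps a string to the sub-interval
# of [0, 1) that Dasher would have you steer into. The probability table
# is invented for illustration.
PROBS = {'a': 0.5, 'b': 0.3, 'c': 0.2}

def encode_interval(text, probs=PROBS):
    lo, hi = 0.0, 1.0
    for ch in text:
        span = hi - lo
        # Cumulative probability of all symbols ordered before ch.
        cum = sum(p for c, p in probs.items() if c < ch)
        lo, hi = lo + cum * span, lo + (cum + probs[ch]) * span
    return lo, hi

lo, hi = encode_interval("ab")
# Interval width is the product of the character probabilities, so
# probable text gets a wide interval -- few bits needed to pin it down.
width = hi - lo              # 0.5 * 0.3 = 0.15
bits = -math.log2(width)     # roughly 2.7 bits for the whole string
```

Improbable text shrinks the interval fast, which is exactly why misspellings are "harder to write" in Dasher: their boxes are tiny.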
Ada Majorek, who has ALS, uses it with a Headmouse to program (and worked on developing a new open source version of Dasher) and communicate in multiple languages:
https://www.youtube.com/watch?v=LvHQ83pMLQQ&ab_channel=Dashe...
Dasher is fantastic, because it's based on rock-solid information theory, designed by the late David MacKay.

Here is the seminal Google Tech Talk about it:
https://www.youtube.com/watch?v=wpOxbesRNBc
Here is a demo by Ada Majorek, an engineer at Google who has ALS and uses Dasher with a Headmouse to program:
https://www.youtube.com/watch?v=LvHQ83pMLQQ
Another one of her demonstrating Dasher:
Ada Majorek Introduction - CSUN Dasher
https://www.youtube.com/watch?v=SvsSrClBwPM
Here’s a more recent presentation that covers the latest open source release of Dasher 5:
Dasher - CSUN 2016 - Ada Majorek and Raquel Romano
https://www.youtube.com/watch?v=qFlkM_e-sDg
Here's the github repo:
Dasher Version 4.11
https://github.com/GNOME/dasher
>Dasher is a zooming predictive text entry system, designed for situations where keyboard input is impractical (for instance, accessibility or PDAs). It is usable with highly limited amounts of physical input while still allowing high rates of text entry.
Ada referred me to this mind-bending prototype:
D@sher Prototype - An adaptive, hierarchical radial menu.
https://www.youtube.com/watch?v=5oSfEM8XpH4
>( http://www.inference.org.uk/dasher ) - a really neat way to "dive" through a menu hierarchy, or through recursively nested options (to build words, letter by letter, swiftly). D@sher takes Dasher, and gives it a twist, making slightly better use of screen real estate.
>It also "learns" your typical usage, making more frequently selected options larger than sibling options. This makes it faster to use each time you use it.
>More information here: http://beznesstime.blogspot.com and here: https://forums.tigsource.com/index.php?topic=960
Dasher is even a viable way to input text in VR, just by pointing your head, without a special input device!
Text Input with Oculus Rift:
https://www.youtube.com/watch?v=FFQgluUwV2U
>As part of VR development environment I'm currently writing ( https://github.com/xanxys/construct ), I've implemented dasher ( http://www.inference.org.uk/dasher ) to input text.
One important property of Dasher is that you can pre-train it on a corpus of typical text, and dynamically train it while you use it. It learns the patterns of letters and words you use often, and those become bigger and bigger targets that string together so you can select them even more quickly!
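That pre-train-then-adapt behavior can be approximated with a simple count-based model: every character you enter bumps a counter, and the model's probabilities (and hence Dasher's box sizes) shift toward what you actually type. A minimal sketch; Dasher's real model is a context-sensitive PPM model, not the unigram counter used here:

```python
from collections import Counter

class AdaptiveModel:
    """Unigram frequency model that learns as text is entered.

    This is just the simplest thing that shows targets growing with
    use; real Dasher conditions on preceding characters (PPM).
    """

    def __init__(self, alphabet):
        # Start every symbol at count 1 so nothing has zero probability.
        self.counts = Counter({ch: 1 for ch in alphabet})

    def prob(self, ch):
        return self.counts[ch] / sum(self.counts.values())

    def observe(self, text):
        # Called both for pre-training on a corpus and for live input.
        self.counts.update(text)

model = AdaptiveModel("abcdefghijklmnopqrstuvwxyz ")
before = model.prob('e')
model.observe("the letters you use often get easier to select")
after = model.prob('e')
assert after > before  # 'e' now gets a bigger slice of the screen
```

Feeding the counts into the interval partition from earlier is all it takes for frequently used letters to become bigger, faster targets over time.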
Ada Majorek has it configured to toggle between English and her native language, so she can switch between writing email to her family abroad and to co-workers at Google.
Now think of what you could do with a version of dasher integrated with a programmer's IDE, that knew the syntax of the programming language you're using, as well as the names of all the variables and functions in scope, plus how often they're used!
I have a long term pie in the sky “grand plan” about developing a JavaScript based programmable accessibility system I call “aQuery”, like “jQuery” for accessibility. It would be a great way to deeply integrate Dasher with different input devices and applications across platforms, and make them accessible to people with limited motion, as well as users of VR and AR and mobile devices.
http://donhopkins.com/mediawiki/index.php/AQuery
Here’s some discussion on hacker news, to which I contributed some comments about Dasher:
A History of Palm, Part 1: Before the PalmPilot (lowendmac.com)
⬐ blixt
Dasher is really impressive. I really like the idea of bringing it into VR, and maybe it can be taken even further. If you turn the exploration into a 3D graph of X/Y options extending in the Z direction, as opposed to a 2D graph of Y options in the X direction, combined with the eye tracking of newer VR headsets, you should be able to get a decent improvement in accuracy and an increase in speed.
People seem to pick up Dasher pretty quickly (http://www.inference.phy.cam.ac.uk/dasher/users.html), and the demo makes it seem much easier than it is: https://www.youtube.com/watch?v=wpOxbesRNBc
This is really awesome. It looks hard, but it isn't. Here's a talk from 2007 that explains it well: https://www.youtube.com/watch?v=wpOxbesRNBc