HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Xanadu Basics 1a - VISIBLE CONNECTION

TheTedNelson · YouTube · 75 HN points · 7 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention TheTedNelson's video "Xanadu Basics 1a - VISIBLE CONNECTION".
YouTube Summary
The original hypertext concept of the 1960s
got lost on the way to the Web--
and all current document standards oppose it.

This is an important fight.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Nov 19, 2021 · DonHopkins on Forth vs Lisp
Thank you for your kind words, and for asking a great question!

My secret system that has gotten me through the coronavirus pandemic is simply an investment in an automatic coffee machine that grinds beans and foams milk for me. ;)

I use HN search and Google site search, copy and paste and clean up old postings, and then check through all the links (since HN abbreviates long links with "...", I have to copy and paste the full links manually), and I update the broken and paywalled links with Internet Archive links.

I realize some of my posts get pretty long, and I apologize if that overwhelms some people, but it's a double-edged sword. One important goal is to save other people's time and effort, since there are many more readers than writers, and I have to balance how long it takes for somebody who's interested to read against how long it takes for somebody who's not interested to skip.

The new HN "prev" and "next" buttons that were recently added to help make HN posts more accessible to people with screen readers are helpful to everyone else too. (Accessibility helps everybody, not just blind people!)

And as the internet has gotten faster and storage cheaper, while the user interface quality, usability, and smooth flow and interactivity of browsers have stagnated (especially on mobile), the cost of skipping over a long post gets lower, while the cost of jumping back and forth between many different links and contexts stays ridiculously and unjustifiably expensive (just ask Ted Nelson). That's especially true with pay sites, slow sites, and links that have decayed and need to be looked up on archive.org (which itself is quite slow and requires waiting between several clicks).

Another consideration is to make it easier for people to find all the information in one place in the distant future, and for search engines and robots and evil overlord AIs to scan and summarize the entire text.

I think of what I try to do as manually implementing Ted Nelson's, Ivan Sutherland's, Douglas Engelbart's, and Ben Shneiderman's important ideas about "transclusion".

https://en.wikipedia.org/wiki/Transclusion

[Oh the irony of this "Transclusion on Transclusion":]

>This article includes a list of general references, but it remains largely unverified because it lacks sufficient corresponding inline citations. Please help to improve this article by introducing more precise citations.

>In computer science, transclusion is the inclusion of part or all of an electronic document into one or more other documents by hypertext reference. Transclusion is usually performed when the referencing document is displayed, and is normally automatic and transparent to the end user. The result of transclusion is a single integrated document made of parts assembled dynamically from separate sources, possibly stored on different computers in disparate places.

>Transclusion facilitates modular design: a resource is stored once and distributed for reuse in multiple documents. Updates or corrections to a resource are then reflected in any referencing documents. Ted Nelson coined the term for his 1980 nonlinear book Literary Machines, but the idea of master copy and occurrences was applied 17 years before, in Sketchpad.
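
To make the idea concrete, here's a minimal sketch of transclusion as assembly-at-view-time (the Span/assemble names and the span format are made up for illustration, not any actual Xanadu format): a document is just a list of spans pointing into stored sources, and the readable form is assembled on demand, so a correction to a source shows up everywhere it's transcluded.

    # Minimal transclusion sketch: a document is a list of spans into sources.
    from dataclasses import dataclass

    @dataclass
    class Span:
        source: str   # identifier of the source document
        start: int    # character offset into the source
        length: int   # number of characters to include

    SOURCES = {
        "permanent/apple.txt": "An apple a day keeps the doctor away.",
        "permanent/pie.txt":   "Pie menus place items around a circle.",
    }

    def assemble(spans):
        """Resolve each span against its stored source at view time."""
        return "".join(
            SOURCES[s.source][s.start:s.start + s.length] for s in spans
        )

    doc = [Span("permanent/apple.txt", 0, 8),    # "An apple"
           Span("permanent/pie.txt", 9, 29)]     # " place items around a circle."
    print(assemble(doc))  # "An apple place items around a circle."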

I err on the side of transcluding relevant text that I and other people have posted before, instead of just linking to it, because often the links need to be updated or get lost over time, it's clumsy to link into the middle of a page, there's no way to indicate the end of the relevant excerpt, and I can leave out the redundant stuff.

Following links is distracting and costly, so most people aren't going to click on a bunch of inline links, read something, then come back, re-establish their context, and keep on reading from where they left off; flip-flopping back and forth loses your context and takes a lot of time (especially on mobile). Today's web browsers make that extremely clumsy and inefficient, and force you to wait a long time while you lose the flow and context of the text.

So I aspire to simulate Ted Nelson's and other people's ideals with the crude stone knives and bearskins that we're stuck with today:

S1E28 ~ The City On The Edge Of Forever / stone knives and bearskins

https://www.youtube.com/watch?v=F226oWBHvvI

>"Captain, I must have some platinum. A small block would be sufficient. Five or six pounds." -Spock

See what I did there? Most people have seen that a million times before, instantly recognize the quote, and don't need to actually click the link to watch the video, but there it is if you haven't, or if you just like to watch it again. If it's a long video, then I'll use a timecode in the YouTube link. And I also take the time to quote and transcribe the most important speech in videos, so you don't have to watch them to get the important points. The most interesting videos deserve their own articles with transcripts and screen snapshots, which I've done on my Medium pages.

https://donhopkins.medium.com/

For example, this is an illustrated transcript of a video of a talk I gave at the 1995 WWDC, including animated gifs showing the interactivity, so you don't have to wade through the whole video to get the important points of the talk, and can quickly scroll through and see the best parts of the video all playing out in parallel, telling most of the story:

1995 Apple World Wide Developers Conference Kaleida Labs ScriptX DreamScape Demo; Apple Worldwide Developers Conference, Don Hopkins, Kaleida Labs

https://donhopkins.medium.com/1995-apple-world-wide-develope...

ScriptX and the World Wide Web: “Link Globally, Interact Locally” (1995)

https://donhopkins.medium.com/scriptx-and-the-world-wide-web...

And I made animated gifs of the interesting parts of this video transcript, to show how the pie menus and continuous direct gestural navigation worked:

MediaGraph Demo: MediaGraph Music Navigation with Pie Menus. A prototype developed for Will Wright’s Stupid Fun Club.

https://donhopkins.medium.com/mediagraph-demo-a7534add63e5

Here is another video demo transcript with animated gifs of some related work, another approach to the same general problem of continuous navigation and editing with pie menus and gestures:

iPhone iLoci Memory Palace App, by Don Hopkins @ Mobile Dev Camp

https://donhopkins.medium.com/iphone-app-iloci-by-don-hopkin...

>A talk about iLoci, an iPhone app and server based on the Method of Loci for constructing a Memory Palace, by Don Hopkins, presented at Mobile Dev Camp in Amsterdam, on November 28, 2008.

Dang posted this great link to a video by Ted Nelson explaining the most important ideas of his life's work: document structure, transclusion, and the idea of visible connection in text on screen:

https://news.ycombinator.com/item?id=23491701

>dang on June 11, 2020, on: Xanaflight: Three Pages

>This is software implementing one version of Ted Nelson's idea of visible connection in text.

>For more on that idea, see https://www.youtube.com/watch?v=hMKy52Intac, which contains beautiful examples.

>Xanadu Basics 1a - VISIBLE CONNECTION

>The original hypertext concept of the 1960s got lost on the way to the Web-- and all current document standards oppose it.

>This is an important fight.

>Ted Nelson: "Here we have a Xanadoc. Right now it's disguised as plain text. But if we want to see connections, here they are. The ones outlined in blue are Xanalinks. They aren't just jumplinks, what other people call hyperlinks. I've called them jumplinks since before the web. You're jumping to you know not where: it's a diving board into the darkness. Whereas Xanalinks visibly connect to other content, with a visible bridge. The other documents open and I can scroll around in them! I can close them again by clicking. This is of course only one possible interface."

In the '90s, I worked with Ben Shneiderman and Catherine Plaisant at the University of Maryland Human-Computer Interaction Lab on a NeWS PostScript-based hypermedia browser and Emacs-based authoring tool called "HyperTIES".

HyperTIES had pie menus for navigation and link selection, a multimedia formatted text and graphics browser with multiple coordinated article windows and definition panes, and a multi-window Emacs authoring tool with tabs and pie menus for editing the intertwingled databases of documents, graphics, and interactive user interfaces.

Each article had a short required "definition" so you could click on a link to show its definition in a pane and read it before deciding whether you wanted to double-click to follow the link, so you didn't have to lose your context to see where each link led.

HyperTIES even had embedded interactive graphical PostScript scriptable "applets", like JavaScript+SVG+Canvas, long before Java applets, but implemented using James Gosling's own Emacs and NeWS, instead of his later language Java.

Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland. Published in Hypermedia, vol. 3, 2 (1991), 101–117.

https://donhopkins.medium.com/designing-to-facilitate-browsi...

>Hyperties allows users to traverse textual, graphic or video information resources in an easy way (6,7). The system adopts the ‘embedded menu’ approach (8), in which links are represented by words or parts of images that appear in the document itself. Users merely select highlighted words or objects that interest them, and a brief definition appears at the bottom of the screen. Users may continue reading or ask for the full article (a node in the hypertext network) about the selected topic. An article can be one or several pages long. As users traverse articles Hyperties retains the path history and allows easy and complete reversal. The user’s attention should be focused on the document contents and not on the interface and navigation. Hyperties was designed for use by novices, giving them a sense of confidence and control, but we also sought to make it equally attractive to expert users.

Don Hopkins and pie menus in ~ Spring 1989 on a Sun Workstation, running the NEWS operating system.

https://www.youtube.com/watch?v=8Fne3j7cWzg

>After an 1991 intro by Ben Shneiderman we see the older 1989 demo by Don Hopkins showing many examples of pie menus on a Sun Workstation, running the NEWS operating system. This is work done at the Human-Computer Interaction Lab at the University of Maryland.

>A pie menu is a menu technique where the items are placed along the circumference of a circle at equal radial distance from the center. Several examples are demonstrated on a Sun running the NeWS window system, including the use of pie menus and gestures for window management, the simultaneous entry of 2 arguments (by using angle and distance from the center), scrollable pie menus, precision pie menus, etc. We can see that gestures were possible (with what Don calls "mouse ahead"), so you could make menu selections without even displaying the menu. Don uses an artifact he calls "mousee" so we can see what he is doing, but that extra display was only used for the video, i.e. as a user you could make selections with gestures without the menu ever appearing; the description of those more advanced features was never published.

>Pretty advanced for 1989... i.e. life before the Web, when mice were just starting to spread, and you could graduate from the CS department without ever even using one.

>This video was published in the 1991 HCIL video but the demo itself - and recording of the video - dates back to 1989 at least, as pictures appear in the handout of the May 1989 HCIL annual Open House.

>The original Pie Menu paper is Callahan, J., Hopkins, D., Weiser, M., Shneiderman, B., An empirical comparison of pie vs. linear menus, Proc. ACM CHI '88 (Washington, DC) 95-100. Also Sparks of Innovation in Human-Computer Interaction, Shneiderman, B., Ed., Ablex (June 1993) 79-88. A later paper mentions some of the more advanced features in a history of the HyperTies system: Shneiderman, B., Plaisant, C., Botafogo, R., Hopkins, D., Weiland, W., Designing to facilitate browsing: a look back at the Hyperties workstation browser, Hypermedia, vol. 3, 2 (1991), 101-117.

>PS: For another fun historic video showing very early embedded graphical links (may be the 1st such link) + revealing all the links/menu items + gestures for page navigation:

HCIL Demo - HyperTIES Browsing

https://www.youtube.com/watch?v=fZi4gUjaGAM

>Demo of NeWS based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.
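
The pie-menu description quoted above (the angle from the menu center picks the item, and the distance from the center can carry a second, continuous argument) is simple enough to sketch. This is an illustrative toy with made-up names, not the NeWS implementation shown in the videos:

    # Pie-menu hit testing sketch: angle selects the item, distance is a 2nd argument.
    import math

    def pie_select(items, center, pointer, dead_zone=10.0):
        """Return (item, distance), or (None, distance) inside the inactive center."""
        dx = pointer[0] - center[0]
        dy = pointer[1] - center[1]
        distance = math.hypot(dx, dy)
        if distance < dead_zone:
            return None, distance                 # too close to the center to choose
        angle = math.degrees(math.atan2(-dy, dx)) % 360   # 0 = east, counterclockwise
        slice_width = 360.0 / len(items)
        index = int(((angle + slice_width / 2) % 360) // slice_width)  # first slice centered on east
        return items[index], distance

    items = ["Open", "Copy", "Paste", "Close"]                       # 4 items -> 90 degree slices
    print(pie_select(items, center=(100, 100), pointer=(160, 100)))  # ('Open', 60.0)
    print(pie_select(items, center=(100, 100), pointer=(100, 40)))   # ('Copy', 60.0)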

HCIL Demo - HyperTIES Authoring with UniPress Emacs on NeWS

https://www.youtube.com/watch?v=hhmU2B79EDU

>Demo of UniPress Emacs based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.

HyperTies Browser (1989)

https://www.youtube.com/watch?v=azVVl26HgvQ

>This is the HyperTies-based hypertext version of Shneiderman & Kearsley's 1989 book "Hypertext Hands-On!" included in the book (on 2 x 5.25" disks). Running in DOS on a Win XP 32-bit VM.

Hypertext on Hypertext CACM1988

https://www.youtube.com/watch?v=29b4O2xxeqg

>A demo by Ben Shneiderman of the widely circulated ACM-published disk “Hypertext on Hypertext”. It contained the full text of the eight papers in the July 1988 Communications of the ACM. The Hyperties software developed at the University of Maryland HCIL was used to author and browse the hypertext data.

>See http://www.cs.umd.edu/hcil/hyperties/ for more information.

I hope this addresses your questions! Thanks for giving me an excuse to rant and rave about topics I love.

ngcc_hk
Not the question guy, but ... holy smoke. Great article, and in fact jumping in and out is hard. I think the approach used by iOS Books for checking out a word or even a link works because Books is different software than Safari. But Safari to another link and back ... Good analysis and good work!
Jun 11, 2020 · dang on Xanaflight: Three Pages
This is software implementing one version of Ted Nelson's idea of visible connection in text.

For more on that idea, see https://www.youtube.com/watch?v=hMKy52Intac, which contains beautiful examples.

Nov 04, 2019 · pratio on Roam – Tool for Thinking
That's the first thought that came to my mind. I love hearing Ted Nelson explain it: https://www.youtube.com/watch?v=1gPM3GqjMR4 It is a good effort to bring that concept to life.

This video is better https://www.youtube.com/watch?v=hMKy52Intac

Jun 23, 2019 · 73 points, 25 comments · submitted by nathcd
simonh
I get the same feelings reading his descriptions of Xanadu as I did reading descriptions of Google Wave. It all sounds as though it ought to be great and wonderful, but I can't actually visualise how it would be useful every day. What actual value would I get out of it, constantly, on an ongoing basis?

I can see how perhaps you could construct interesting use cases, but I just don't see the value as a primary interface. It doesn't even sound all that hard to implement. I know there's a basic implementation, but his description of it is hedged around with caveats. Certainly with modern development environments, it should be many orders of magnitude easier than in the 60s or 70s to get this working, yet it's still struggling to get off the ground. Contrast with the web he is so critical of, which was useful and valuable the first day it was released.

If it takes 50 years and you still don't even have a compelling proof of concept, I can't help suspecting the grand vision is just far too elaborate and heavyweight to be workable in practice.

yourapostasy
Pre-teen me was utterly fascinated with Xanadu, but that was before I learned about the computational and storage costs of data structures, and how large-scale human interaction with systems evolves. Now, I can't wrap my head around what Nelson promotes and what I extrapolate to a scaled-up global system. I'd really like to know from those more deeply immersed in the implementation of his vision how the following pieces are addressed, because I can't quite reconcile the dribs and drabs I've pieced together about the internals with what I know of large-scale systems.

1. As near as I can tell, the ZigZag data structure at its core is a type of directed graph. Fast path traversal has always been a challenge for me with these at scale. When I want to pull a list of all referring/predecessor links, I'm either walking the graph, or maintaining a continuously-updated cross-reference lookup. I must be overlooking how he's solved this problem, because the kind of micro-payment environment he envisions seems to me to require knowing this information (is this what he calls a "cell"?) to compute the presented price of a piece of information derived from a foundation of other pieces of information.

2. How does transclusion work for non-text data like audio, video, rich media, chats, or in a generalized way, arbitrary data sets?

3. Was there ever a notion of transclusion quality or security? I don't see any acknowledgement in Nelson's design of the need to address bad actors abusing transclusion, giving rise to equivalents to spammers, trolls, phishers, etc. within the xanadocs environment.

4. What determines a cell? As near as I can tell (and my understanding is admittedly flawed), cells are decided by the author, but I must be wrong here. That seems a high cognitive load to make those decisions, and I don't see any support for crowd-defined cells, nor an API that would facilitate building such an interaction.

BTW, for those who lament the quiescence of the Gzz [1] project due to patent protection, it can pick up again now that the patent has expired [2]. Whoever picks it up should probably read about the experiences of one of the original Gzz authors [3]. The most programmer-oriented write-up of the data structure I could find [4] didn't erase my impression that Nelson's vision was fine for a small group, but doesn't address scaling up to the challenges we see on a global level.

It's still fascinating to think about, and I see value within a corporate environment, but I struggle seeing it staying useful in the public realm due to the this-is-why-we-can't-have-nice-things principle.

[1] http://www.nongnu.org/gzz/

[2] https://en.wikipedia.org/wiki/ZigZag_(software)#History

[3] http://lambda-the-ultimate.org/node/233#comment-1715

[4] https://hackernoon.com/an-engineers-guide-to-the-docuverse-d...

enkiv2
I am the author of [4].

Regarding your first question:

ZigZag is not really intended for large shared multi-user stuff. It's not supposed to scale huge. We have completely different data structures for translit (i.e., the hypertext part).

ZigZag can be used for fast manipulation of spatial relations (basically, applying multiple kinds of projections to a multi-dimensional directed graph) & we use it under the hood for representing display layout in XanaSpace, but its main use is as a personal mind-mapping utility. It has only extremely tenuous connections to how we do hypertext, and none at all to micropayments.

The transcopyright system (i.e., the thing for ownership and micropayments) is based on transclusion. Basically, all that means is that we provide facilities to fetch things piecemeal, and we provide facilities for owners to control access to usable forms of things. (Specifically, we support applying a one-time pad, so that if you've got distributed fetching of the real content, you can still require folks to fetch the OTP from a trusted oracle & maybe pay for it too before they can actually decode the data.)
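
A toy sketch of that gating idea (illustrative only; the function names are made up and this is not the actual transcopyright interface): the encoded content can be mirrored and fetched from anywhere, but it stays unreadable until the pad is obtained from the oracle, which is where a payment check could live.

    # One-time-pad gating sketch: ciphertext is freely distributable, the pad is not.
    import secrets

    def encode(content: bytes):
        """Return (ciphertext, pad); the pad is as long as the content."""
        pad = secrets.token_bytes(len(content))
        ciphertext = bytes(c ^ p for c, p in zip(content, pad))
        return ciphertext, pad

    def decode(ciphertext: bytes, pad: bytes) -> bytes:
        """XOR with the same pad recovers the original bytes."""
        return bytes(c ^ p for c, p in zip(ciphertext, pad))

    original = b"transcluded span of restricted text"
    ciphertext, pad = encode(original)          # ciphertext can be mirrored widely
    assert decode(ciphertext, pad) == original  # the pad comes from the trusted oracle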

Question 2:

We can always fall back to byte offsets. We floated the idea of supporting more user-friendly offset formats for audio & video (specifically adopting ideas from the W3C media fragments spec), because audio & video have complicated compression schemes & finding safe frame boundaries requires deep knowledge of the underlying format. We put that on hold in order to focus on getting the text stuff really solid in the UI department.

Question 3:

People can create spammy documents. They can also transclude from or link to more popular documents in the hopes that their low-quality documents will get more exposure. But, links are not universally resident (which is to say, creating a link to a document doesn't mean that everybody viewing that document immediately sees it). Instead, links get distributed the same way as documents do: by word of mouth (as people recommend to their friends to view particular documents or load up particular sets of links). Links do double-duty as connections between documents and as formatting information (like themes or skins), and collections of links (ODLs) can be passed around separately from documents. (By 'passed around' I generally mean that their addresses get circulated, although we also had the notion of an archive format for passing around collections of documents and links that weren't to be made generally available.)

In other words, the spam potential of linking between a popular document and a low-quality document is akin to the spam potential of creating a low-quality website that is hidden from google.

Question 4:

A cell is a data structure consisting of a pair of associative arrays (pointers to adjacent cells, keyed by strings) & a 'value' (typically a string).

Cells are created by the author (or maybe by a program), which is fine, because cells never travel beyond the vicinity of a single machine. (We had a 'slice' mechanism where cells owned by different folks could join up together, but this was never intended to scale beyond small groups.)

Again: since ZigZag is not a part of the hypertext/transliterature system, the performance of ZigZag is a lot less important. (Within the translit realm, we had exotic high-performance data structures during the Autodesk / XOC era, which we've basically dropped in favor of simple stuff like spanpointers.)

If you're interested in ZigZag, I recommend reading my implementation (https://github.com/enkiv2/misc/blob/master/ds-lib/ZZCell.py). It's cleaner than GZZ & is almost identical to what's still used internally at Xanadu (last I've heard), because this is a rewrite from memory of what we wrote for XanaSpace & when Ted occasionally sends people to me for advice on how to implement new versions of ZigZag I point them to this.
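
For readers who just want the shape of it, here's a stripped-down sketch of that cell structure (illustrative only; it is not the ZZCell.py code linked above): a value plus two dicts of neighbor links keyed by dimension name, one per direction.

    # Simplified ZigZag cell sketch: neighbors keyed by dimension, in two directions.
    class Cell:
        def __init__(self, value=""):
            self.value = value
            self.pos = {}   # dimension name -> next cell in the positive direction
            self.neg = {}   # dimension name -> next cell in the negative direction

        def connect(self, dim, other):
            """Link self -> other along dimension `dim`, and back."""
            self.pos[dim] = other
            other.neg[dim] = self

        def walk(self, dim):
            """Yield cells along `dim` in the positive direction, starting here."""
            cell = self
            while cell is not None:
                yield cell
                cell = cell.pos.get(dim)

    # A tiny two-dimensional example.
    a, b, c = Cell("alpha"), Cell("beta"), Cell("gamma")
    a.connect("d.list", b)
    b.connect("d.list", c)
    a.connect("d.notes", Cell("a note about alpha"))
    print([cell.value for cell in a.walk("d.list")])  # ['alpha', 'beta', 'gamma']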

enkiv2
> It all sounds as though it ought to be great and wonderful, but I can't actually visualise how it would be useful every day. What actual value would I get out of it, constantly, on an ongoing basis?

There are a couple of releases you can use, if you want to get a feel for it. The integrated editor we had planned out isn't implemented in any of them, unfortunately, so editing is still awkward.

Most of the big benefits will only come out of having a community using this stuff, and because Ted keeps tweaking formats & there's no compatibility between the released implementations, even those of us within the Xanadu group can't really experience that.

> It doesn't even sound all that hard to implement.

As a former implementer, yes: none of the core concepts are hard to implement.

Ted is a stickler for smooth UIs, & nearly 100% of the difficulty has been in trying to convince existing UI libraries to display his interesting visualization and input ideas in a performant way.

For instance, the XanaduSpace demo used OpenGL to display everything in 3d but internally a lot of stuff was hardcoded, & when I took over development of it, we found that we had a lot of overhead when editing text, so we switched from the OpenGL 1.1 API to the current API & idioms, which let us do faster rendering. However, we had to completely rewrite glyph rendering in order to fit more than about 20 pages of text in GPU RAM (and cards didn't expose standard API calls for freeing these allocations). Our super-elegant proposed method of storing text in GPU RAM ended up never quite working (and we got burnt out by our day jobs at the same time as trying to wrangle with OpenGL), so we never met the benchmark of displaying an entire copy of the King James Bible in real 3d. We plugged the same backend into a TK frontend, which was usable but ugly. Then we drifted away from the project. The code is still around, but I don't think anybody picked it up.

Currently-available web-based Xanadu systems (OpenXanadu and Xanadu Cambridge) are crippled by the same-origin policy. Xanadu doesn't want to host arbitrary user data locally (at least, not unless and until customers pay for storage). I wrote a caching proxy for OpenXanadu back in 2013 or 2014, but it wasn't used, for basically the same reason: storing arbitrary user data costs us money & makes us liable for takedown requests & other such stuff. While desktop-based systems like XanaSpace could fetch from arbitrary remote hosts using arbitrary protocols (mostly HTTP, but I snuck ftp and gopher support in, and pushed for IPFS support as well), OpenXanadu is stuck only fetching from xanadu.com & a handful of sites that have whitelisted us (and attempts to get Project Gutenberg & the Internet Archive to whitelist us for SOP failed, I think). Again: trying to work around SOP was a lot harder than the actual implementation.
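
The caching proxy idea is simple enough to sketch (this is a hypothetical, stdlib-only illustration, not the actual OpenXanadu proxy): the page asks a local endpoint for a remote document, the proxy fetches it once and caches it, and serves it back with a permissive CORS header so the browser's same-origin policy no longer blocks the fetch.

    # Minimal caching fetch proxy sketch (illustrative only).
    import urllib.parse
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CACHE = {}  # url -> response body bytes

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = urllib.parse.urlparse(self.path).query
            url = urllib.parse.parse_qs(query).get("url", [None])[0]
            if not url:
                self.send_error(400, "missing ?url= parameter")
                return
            if url not in CACHE:
                with urllib.request.urlopen(url) as resp:  # fetch once...
                    CACHE[url] = resp.read()               # ...then serve from cache
            body = CACHE[url]
            self.send_response(200)
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()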

I wrote an experimental span-based editor for XanaSpace, and I was asked to port it to JavaScript for use with OpenXanadu. How it worked was that you selected text in a 'source document' panel (some text or stripped HTML fetched from the wild blue yonder), the text you highlighted would be colored & a copy would be inserted into a kind of visible pastebin, and you'd insert chunks from that bin into an accumulating new document in another panel. Then, in a box below, the EDL format for the document you're creating would be listed, which you could copy into a file to be loaded by OpenXanadu. Unfortunately, there was a hard-to-find off-by-one error in the code for ignoring color tags in the source document -- in other words, when highlighting, the character offsets would get less and less accurate the further along you went. It was an intermittent problem. I didn't end up fixing it before I left the project. The XanaSpace version worked fine.
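
That offset bookkeeping is easy to illustrate with a toy sketch (hypothetical names, not the actual editor code): once highlight tags have been inserted into the displayed source, a selection offset in the tagged text has to be mapped back to an offset in the raw text by skipping over the tags, and that mapping is exactly the kind of place an off-by-one can hide.

    # Map an offset in tag-containing display text back to a raw-text offset.
    # Illustrative only; not the XanaSpace/OpenXanadu editor code.
    import re

    TAG = re.compile(r"<[^>]*>")  # e.g. the inserted highlight color tags

    def display_to_source_offset(displayed: str, display_offset: int) -> int:
        """Count raw (non-tag) characters before display_offset."""
        source_offset = 0
        i = 0
        while i < display_offset:
            m = TAG.match(displayed, i)
            if m:                      # a tag contributes no raw characters
                i = m.end()
            else:
                i += 1
                source_offset += 1
        return source_offset

    displayed = 'An <span class="hl">apple</span> a day'
    # The 'a' of "a day" sits at index 33 of the tagged text, but index 9 of the raw text.
    assert display_to_source_offset(displayed, 33) == 9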

This editor was supposed to be a stopgap, & the eventual editor we planned to make would involve dragging selections directly out of existing documents, snapping them together to form new documents, & being able to then insert or delete text from within these composite documents directly. Existing GUI toolkits, to the extent that they allowed dragging of widgets, didn't show the dragged item as it crossed widget boundaries (and anyway, couldn't show beams between scrolling text things), so we were going to have to write our own text rendering/editing code (which isn't so bad -- we did it for XanaduSpace and again for XanaSpace) and make it handle mouse events reliably on arbitrary unicode text using proportional truetype fonts (substantially more complicated), make that handler differentiate click from select from drag, etc. It was gonna be complicated, & every little noodle of text needed to store its own arbitrarily-complex history, with the beams it was attached to animating along with the drag and such. We put it off, and it looks like everybody else also put it off. We knew that if it wasn't sufficiently pretty, it would never see the light of day (even as open source).

I prefer to develop in public, so after I left Xanadu I started developing my own (less pretty, but at least conceptually-pure) implementations. I also wrote down software-engineer-centric descriptions of concepts we used, so that other people can do the same. (They can be found here: https://hackernoon.com/an-engineers-guide-to-the-docuverse-d... ) None of this stuff is actually hard to implement, but for a backend guy like me, making it pretty was a lot harder than making it usable.

SuperPaintMan
Started playing around with https://gitlab.com/krampus/wormwood; it's a client that seems pretty good. It takes a bit of elbow grease to get going, but this should be fun to play with.

Setup is a bit of a pain and it could use a README, but once you install the Perl modules, build the assets, and set up the server, it's pretty nice. It could probably use a Dockerfile as well.

Give it a poke! http://lain.gboards.ca/view.cgi?url=static/doc/doc.xan.org

underd0g
Information (idea in mind) => write a document => write on a PC and print out on paper => display on a screen

Two-dimensional paper or screens are never the best way of displaying information, but they are the cheap and easy way.

The core problem is how we organize and display information. I don't think the old screen will solve it well, but recently MR and AR have shown us a three-dimensional way; maybe we can find more dimensions in the future.

simonh
I see people write things like this from time to time, but I'm not convinced it will work out in practice. Human 3D vision is quite limited. We have distance perception, but it's quite crude, and we have no way to perceive through things to see their actual depth or full dimensionality. We only get a view of the side facing us. In fact, fake 3D on two-dimensional displays provides a very convincing analogue.

The primary way we exchange information is text, and for that 3D provides no advantage; in fact it makes things worse by being distracting. Visual representations are a niche for certain constrained forms of formatted data, in the form of charts. These can be very compelling for very structured and regularised information.

The old saying that a picture paints a thousand words is all very well, but if I give you an arbitrary thousand word text, you'll have a heck of a time representing that entirely visually.

pharke
We do have 3D senses, but the primary ones aren't vision. Proprioception tells us the position of our body in space and is relevant to the discussion because we can tag and encode information based on posture or movement: think hand gestures, body language, the right-hand rule, counting on fingers, etc. Another is enabled by the existence of grid cells[1] in the brain. These allow us to be aware of (and imagine) our position in Euclidean space. This is where AR/MR could really shine, by bringing the Memory Palace mnemonic to life and possibly extending it in all kinds of fantastic ways. People have exploited these senses for millennia as memory aids and ways of understanding and interpreting information; we shouldn't lose that tradition, and should make every effort to enhance it with modern technology rather than replace it with a single interface metaphor.

[1] https://en.wikipedia.org/wiki/Grid_cell

Rhinobird
1a - https://www.youtube.com/watch?v=hMKy52Intac&t=327s

1b - https://www.youtube.com/watch?v=1gPM3GqjMR4

1c - https://www.youtube.com/watch?v=hGKbRcvIZT8

1d - https://www.youtube.com/watch?v=qyzgoeeloJA

2 - https://www.youtube.com/watch?v=9_o8q4Sp-w0

2.5- https://www.youtube.com/watch?v=_xYwgJW7T8o

3 - https://www.youtube.com/watch?v=kV_vYkSmkxk

Rhinobird
and, these:

4 - https://www.youtube.com/watch?v=jLYz2vWv1wg

5 - https://www.youtube.com/watch?v=YsbrEOxVPcg

dredmorbius
Someone might care to suggest to Mr. Nelson the prospect of the visible connections afforded by YouTube playlists....
tudorw
Lots of interesting things on Ted's twitter https://twitter.com/TheTedNelson
vermilingua
One of those interesting things is the ridiculous cult of personality that’s formed around him. Nelson is brilliant, but some of the crap his followers put out is just unreal.
slim
What's with hiding complexity? Is it desirable to show connections? From the examples in the video, it seems the connections do not convey any additional meaning.

It seems to me that there are indeed systems where connections convey meaning. In those systems, the connected nodes are reduced to dots, so as to show the topology of the connections.

Maybe it's not useful to show texts and connections at the same time.

schpaencoder
The point is that humans navigate and orient themselves more easily in a persistent spatial environment.
z3t4
One problem is CORS creating a firewall around every web page. Another problem is copyright: showing someone else's work on your web page. Fair use would have to be expanded to cover showing parts of a document.

Wikipedia did something like this where you can see content when hovering over a link ...

But I think what the old guy is talking about is actually showing a visual graph of how documents and text are linked together. We humans are good at scanning text, so I think something like this would work well in a search engine: instead of getting a list of links, you'd get a visual graph of interlinked documents, which makes it faster to explore different branches and find what you are looking for.

nabla9
Ted Nelson is a visionary but a bad businessman and project leader.

The Curse Of Xanadu: https://www.wired.com/1995/06/xanadu/

He patented some of his ideas (like the ZigZag data structure) but failed to commercialize them despite multiple attempts. Then he killed an open source attempt to make it happen outside his control (GZigZag). Being very defensive and insisting on owning the project when one lacks the ability to lead is unfortunate. Nelson had very good ideas.

The idea that you could make a closed source hypertext project seemed possible at the time, before the web, but it has no chance of happening now.

enkiv2
It's not true that he killed GZZ. He had filed the patent already, & the GZZ developer (who had previously been working closely with Ted) dropped the project when he heard that a patent existed. (As far as I am aware, all patents filed with respect to ZigZag or Xanadu are expiring within the next few years.)

Several Xanadu implementations are already released as open source (the first one in 1999).

Believe me: I've tried to convince Ted that the appropriate way to get stuff to have traction these days is to develop in public, rather than keeping stuff secret during development & only releasing source when it's been polished. It's not that he wants a closed & commercialized system (though he hasn't completely given up on having some stake in monetization), but that he wants enough control to avoid the kinds of runaway misunderstandings that produced the web. If I had seen a warped version of my own work ruin the world, I'd be conservative too.

Now that web standards are so complicated that writing a new web browser is impossible for even large & very rich companies (Microsoft is ditching its own browser engine in favor of Chromium, following dozens of others), a properly-designed (and thus simple) hypertext system becomes a lot more sensible -- for more and more people, the web will seem less and less like a reasonable compromise (it was never a particularly good system for hypertext, and it's a far worse application sandbox platform). Some folks have been jumping to gopher, but others will probably migrate to new systems along the lines of Xanadu.

nabla9
Maybe the reason was his incompetence, indecisiveness, or something else, but he ended up putting people to work on a project without revealing relevant details about his intentions and the underlying situation. Not communicating, and stringing people along, is not a good personality trait.

see: http://lambda-the-ultimate.org/node/233#comment-1715

enkiv2
When I was brought onto the project, he made sure I was OK with working on patent-encumbered designs & possibly closed-source code, specifically because of his experience with the GZZ developer. It's a misunderstanding he learned from.
carapace
Ted Nelson responded:

'Errors in "The Curse of Xanadu," by Gary Wolf'

http://xanadu.com.au/ararat

Look, I'm as bummed as anyone that he never got Xanadu off the ground, but please don't link to that "hit piece". Ted Nelson is a visionary and deserves more respect.

dredmorbius
With due respect, Nelson's response suggests an inability to distinguish the forest from the trees.

There is a difference between narrative and factual error. Nelson fails to draw that distinction and gets stuck in minutiae.

There's also considerable prickliness, if not outright paranoia, shown.

(And yes, I've read the Wired piece as well.)

alehul
Some further reading:

A recent and great interview with Nelson - https://www.notion.so/tools-and-craft/03-ted-nelson

A long-form read by Wired from 1995 - https://www.wired.com/1995/06/xanadu/

Nelson's response to Wired - http://xanadu.com.au/ararat

jacobush
Thanks, I read the Wired piece back then but never Nelson's response.
LeonB
Also -- he has a recent response: https://www.youtube.com/watch?v=-_-5cGEU9S0
Apr 22, 2019 · shalabhc on Project Xanadu
Ted Nelson recently did a series of videos on the important ideas in Xanadu. Start here: https://www.youtube.com/watch?v=hMKy52Intac

Also you can buy his classic 'Computer Lib' book here: https://computerlibbook.com/

Apr 14, 2019 · 2 points, 0 comments · submitted by nathcd
Great to see those ideas discussed! It is a little strange, however, that Ted Nelson's ideas around Xanadu and its link representations aren't even mentioned.

https://youtu.be/hMKy52Intac?t=1m44s

goodmachine
Check the full dissertation, they are.

https://www.reinterpretcast.com/pdfs/savage-j-dissertation-2...

Related: Ted Nelson recently posted a series of videos describing the important ideas behind Xanadu, the original hypertext project: https://m.youtube.com/watch?v=hMKy52Intac#
hinkley
There is almost nothing like his ideas out in the wild. The closest analogs I know of are, ironically enough, tools for software developers.

We can take this farther. There are a lot of people trying to raise the bar on development tools, and at least some of us believe that people tend to do what they know: what is demonstrated for them.

Give them crappy tools and they will be satisfied to create crappy things with them. Give someone a better tool and many will rise to that level.

There are several aspects of code comprehension that I think would be conducive to his designs. And monitors are getting wide enough now that we nearly have the space to do them.

jancsika
> There is almost nothing like his ideas out in the wild.

Go to Ted Nelson's Wikipedia page and mouse over the "Project Xanadu" link.

> Give them crappy tools and they will be satisfied to create crappy things with them. Give someone a better tool and many will rise to that level.

The value of Wikipedia page previews has very little to do with the quality of the technology. The quality of the technology is perfectly adequate for at least one version of what Ted Nelson is describing.

The value is determined by what those tooltips are legally allowed to link-- can they show all primary documents which anyone on HN can trivially discover and access in a digital format, or can they merely show other Wikipedia pages?

As long as it is illegal to link to even those primary documents whose authorship was funded by public money (like everything on Sci-Hub), it really doesn't matter what type of view or document format you have.

setr
That may not be entirely true. If you imagine a universe where Xanadu exists and is popular as a general tool, then you can envision open primary documents being preferred over closed ones, simply on the basis that you can actually link to them consistently (in the same way that, today, non-paywalled sites are naturally favored over paywalled ones for sharing on HN).

And if Xanadu documents were more useful (because of their interlinking and history) than closable flat documents (or perhaps closed documents that only link to other closed sources), then things would naturally trend towards open documents by default, as it becomes more difficult to constrain yourself to the closed world.

The problem today, then, might be that everyone already accepts that primary documents are likely to be sealed off, and there's no significant incentive otherwise. But if a sufficiently useful document could be upgraded from a flat PDF to a history-preserving Xanadu document...

nabla9
Ted Nelson had great ideas but he totally screwed up the delivery.

He insisted on total control over ZigZag. He wanted it to exist as proprietary software, not as an open protocol or anything like that. Decades passed and nothing came of his ideas and software.

Then came the web, HTML, RDF, etc., and Ted Nelson became desperate. He received a patent for the zzstructure in 2001. When a group of talented individuals approached him, he first approved their work on GZZ (GNU ZigZag), then screwed them over completely when they started to get work done.

Ted Nelson's genius is voided by his insecurities, greed, or something. Maybe when his patent expires in 2021 something will come of it.

http://www.nongnu.org/gzz/

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.