Hacker News Comments on
Humans can't read URLs. How can we fix it? - HTTP 203

Google Chrome Developers · YouTube · 4 HN points · 6 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention the Google Chrome Developers video "Humans can't read URLs. How can we fix it? - HTTP 203".
YouTube Summary
In this episode, Jake makes the case that URLs are impossible for humans to interpret, especially when it comes to security. What are browsers doing today to overcome that? And, is there a better way?


Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Yes. Years. I think a URL bar that sticks to the security-relevant parts is a good idea. It's hard for even experienced users to figure that stuff out.

I tried to explain it here https://youtu.be/0-wB1VY3Nrc

bityard
Well, I mean, the whole story is that Google's own evidence was that the experiment didn't bear any fruit.

The web is made of URLs, and hiding them would be like taking the express train back to the AOL days. This "feature" would have brought negligible (if any) security, but would have made the web many times more difficult to use for inexperienced and power users alike.

Jake Archibald (a Googler) presents some decent points on this on the HTTP 203 podcast: https://youtu.be/0-wB1VY3Nrc

I still don't agree with the removal of URLs, but I do recommend watching the entire video if you want more perspective on the issue (beyond just the conspiracy theories about AMP and control).

Reading URLs is actually really hard - even for experts. This video covers the problems well: https://www.youtube.com/watch?v=0-wB1VY3Nrc

This is bad for web security, since the registrable domain is the part you have to trust, yet it's surprisingly difficult to figure out which part that is.

However, I feel a bit uneasy about this, since URLs are important and tell you where you are on a website. I prefer Firefox's approach, which emphasises the registrable domain in the URL bar and fades out the rest, making it easier to spot the important bit. It's still quite subtle, though; the distinction could be clearer.

anewdirection
Is this a parody comment?
Lammy
The way we use DNS (reversed) does make URLs kind of confusing for specificity, like:

https://specific.more.less.example.com/less/more/specific.ht...
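
To make that inversion concrete, here is a minimal sketch using the standard URL class (runnable in any modern browser console or in Node.js); the URL itself is a made-up illustration of the pattern in the comment:

    // Hostname labels read specific -> general from left to right,
    // while path segments read general -> specific.
    const url = new URL("https://specific.more.less.example.com/less/more/specific.html");

    console.log(url.hostname.split("."));
    // ["specific", "more", "less", "example", "com"]

    console.log(url.pathname.split("/").filter(Boolean));
    // ["less", "more", "specific.html"]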

AshleysBrain
The video points out things like: how do you spot an eTLD? There's .com, but what about .co.uk? .github.io? Do you know all the exceptions? There's basically a database of them and you just have to know them to correctly interpret the security origin of the domain.
It is; there's a long special list for domains that host arbitrary subdomains, and GitHub Pages is one of them. It's explained well in https://youtu.be/0-wB1VY3Nrc
dTal
Wait, seriously? Their default behaviour is so broken that they have to whitelist a ton of sites, and screw anybody who slips through the net (or is trying to start a new service)?
jraph
Most browsers use the Mozilla public suffix list [1], which is frequently updated, and a new service can easily submit a new entry. This list is used for many different features.

This list is necessary anyway, because there are effective top-level domains that look like .co.uk, so you can't just split the domain name on dots and take the last component to determine the top-level domain.

So, while I'm not saying I unconditionally like the fact that they are hiding important information in the URL bar, I would not say their default behavior is that broken.

[1] https://publicsuffix.org - probably the list mentioned in the video linked by the parent commenter
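
As a concrete illustration of why dot-splitting fails, here is a minimal sketch assuming the psl npm package, one of several implementations of the public suffix list (any PSL library works much the same way):

    import psl from "psl"; // implements the Mozilla public suffix list

    // Naive approach: take the last two labels.
    const naive = (host: string) => host.split(".").slice(-2).join(".");

    console.log(naive("www.example.co.uk"));    // "co.uk" - wrong: that's the eTLD
    console.log(psl.get("www.example.co.uk"));  // "example.co.uk" - the registrable domain

    // github.io is on the list, so each GitHub Pages user site is its
    // own registrable domain (and its own cookie/security boundary).
    console.log(psl.get("jakearchibald.github.io")); // "jakearchibald.github.io"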

ubercow13
This list seems to actually be used to determine how cookies can be shared across domains, but not by Safari? Does this mean Safari might behave differently as to when cookies can be shared?
A couple of Google Chrome devs talk about the issues surrounding the readability of URLs, their security implications, and possible solutions in an episode of their podcast [0]. I think they make a compelling argument for hiding most of the URL, in part to prevent phishing; however, I do think they should allow this behaviour to be toggled via a flag.

[0] https://youtu.be/0-wB1VY3Nrc

mkl
Do you know if this information is available somewhere more accessible than a 20-minute video?

Hiding the https and www is already frustrating enough, and this change would make Chrome barely usable for my purposes.

delouvois
Their goal is full AMP dominance. Just look at these evil guys' faces. It's clear enough that they're going to pass their frustrations onto you, no matter what.
ubercow13
The claimed purpose is basically just to prevent phishing.

They explain a number of reasons why it is difficult for people to extract from a URL the part that is relevant to security, i.e. the bit that determines who has authority over the page and how your cookies will be separated by the browser. The cookie sharing actually has some rules I didn't know about as a non-web-developer but experienced URL user. They show how every browser is already going some way towards this, but they all have problems; for example, Safari shows the full domain, not just the important part.
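
For the cookie rules mentioned here, a rough sketch of the behaviour most browsers implement (RFC 6265 plus the public suffix list; exact enforcement can vary by browser, which is what the Safari question above is about):

    // Running on https://app.example.co.uk: allowed, because
    // example.co.uk is the registrable domain, so the cookie is
    // also visible to www.example.co.uk.
    document.cookie = "session=abc; domain=example.co.uk; Secure";

    // Running on https://jakearchibald.github.io: silently rejected,
    // because github.io is a public suffix - one Pages site can never
    // set a cookie readable by another.
    document.cookie = "session=abc; domain=github.io; Secure";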

megous
Looks like this will be great for reflected XSS attacks. Even advanced users will not be able to notice there's something weird going on outside of the domain name part of the URL. Perfect!

Basically any page on a website with this vulnerability will be usable to show a fake login page, and the user will not even notice they're not on /login but on some weird path plus ?_sort=somejavascript.

Not that it's currently hard to clean up the URL via the History API after you get access to the page via XSS, but there's still a short period of time where the full URL is shown, which may provoke suspicion.

joshuamorton
Stick "?jsessionid=<random 80 character string>" in front of the XSS and no one will ever look.
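
A hedged sketch of what these two comments describe, as it might look inside a reflected-XSS payload (all paths and parameter names here are hypothetical):

    // 1. With script access, rewrite the visible URL immediately via the
    //    standard History API, so the odd path/query never lingers.
    history.replaceState(null, "", "/login");

    // 2. Or, if the full URL is shown, bury the injected part behind a
    //    long, legitimate-looking parameter so nobody reads that far.
    const lure = "/account?jsessionid=" + "x".repeat(80) + "&_sort=<payload>";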
Mar 01, 2020 · 1 points, 0 comments · submitted by kinlan
Jan 28, 2020 · 3 points, 2 comments · submitted by asdf-asdf-asdf
dfabulich
In this video, a Googler proposes adding the "eTLD+1" part of the domain name as a "chip" where the extended-validation certificate name used to be.

So this link: https://jakearchibald.github.io/svgomg/

Would look like this in the browser:

    jakearchibald.github.io | https://jakearchibald.github.io/svgomg/
The proposal appears at 16:45: https://www.youtube.com/watch?v=0-wB1VY3Nrc&t=16m41s
jaffathecake
More specifically, https://workforus.theguardian.com/index.php/careers/ would look like this:

     theguardian.com | workforus.theguardian.com/index.php/careers/
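
For illustration, one way such a chip could be computed, again leaning on a public suffix list implementation (a sketch only, not Chrome's actual code; the psl package and its API are assumptions):

    import psl from "psl";

    // Derive the proposed "chip" text: the eTLD+1 of the page's host.
    function chipFor(href: string): string {
      const host = new URL(href).hostname;
      return psl.get(host) ?? host; // fall back for IPs, localhost, etc.
    }

    console.log(chipFor("https://jakearchibald.github.io/svgomg/"));
    // "jakearchibald.github.io"
    console.log(chipFor("https://workforus.theguardian.com/index.php/careers/"));
    // "theguardian.com"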
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.