HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Admiral Grace Hopper Explains the Nanosecond

funbury · Youtube · 273 HN points · 15 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention funbury's video "Admiral Grace Hopper Explains the Nanosecond".
Youtube Summary
Admiral Grace Hopper was one of the first programmers of the Harvard Mark I computer. She developed the first compiler for a computer programming language. Here she explains a nanosecond to a room of learners.

Transcript:

They started talking about circuits that acted in nanoseconds. Billionths of a second. Well, I didn't know what a billion was (I don't think most of those men downtown knew what a billion is either). And uh .. if you don't know what a billion is, how on Earth do you know what a billionth is? I fussed and fumed.

Finally one morning, in total desperation, I called over to the engineering building and I said: "Please cut off a nanosecond and send it over to me." And I brought you some today. Now, what I wanted when I asked for a nanosecond was: I wanted a piece of wire which would represent the maximum distance that electricity could travel in a billionth of a second. Now, of course, it wouldn't really be through wire. It'd be out in space; the velocity of light. So, if you start with the velocity of light and use your friendly computer, you'll discover that a nanosecond is 11.8 inches long (the maximum limiting distance that electricity can travel in a billionth of a second).

Finally, at the end of about a week, I called back and said: "I need something to compare this to. Could I please have a microsecond?" I've only got one microsecond (so I can't give you each one). Here's a microsecond: 984 feet. I sometimes think we ought to hang one over every programmer's desk (or around their neck) so they know what they're throwing away when they throw away microseconds.

Now I hope you all get the nanoseconds. They're absolutely marvelous for explaining to wives and husbands and children and Admirals and Generals and people like that. An Admiral wanted to know why it took so damn long to send a message via satellite. And I had to point out that between here and the satellite there were a very large number of nanoseconds. You see -- you can explain these things. It's really very helpful, so be sure to get your nanoseconds.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
So they state:

> One could argue that we don’t really need PTP for that. NTP will do just fine. Well, we thought that too. But experiments we ran comparing our state-of-the-art NTP implementation and an early version of PTP showed a roughly 100x performance difference:

While I'm not necessarily against more accuracy/precision, what problems specifically are they experiencing? They do mention some use cases of course:

> There are several additional use cases, including event tracing, cache invalidation, privacy violation detection improvements, latency compensation in the metaverse, and simultaneous execution in AI, many of which will greatly reduce hardware capacity requirements. This will keep us busy for years ahead.

But given that NTP (either ntpd or chrony) tends to give me an estimated error of around (tens of) 1e-6 seconds, and PTP can get down to 1e-9 seconds, I'm not sure how many data centre applications need that level of accuracy.

> We believe PTP will become the standard for keeping time in computer networks in the coming decades.

Given the special hardware needed for the grand master clock to get down to nanosecond time scales, I'm doubtful this will be used in most data centres or most corporate networks. Adm. Grace Hopper elegantly illustrates 'how long' a nanosecond is:

* https://www.youtube.com/watch?v=9eyFDBPk4Yw

How many things need to worry about the latency of a signal travelling ~300 mm?

SEJeff
Disclaimer: I work in finance, and have for 15+ years.

> Given the special hardware needed for the grand master clock to get down to nanosecond time scales, I'm doubtful this will be used in most data centres or most corporate networks.

The "special hardware" is often just a gps antenna and a PCI card though. In fact, many tier 1 datacenters actually provide a "service" where they'll either cross connect you directly to a PPS feed from a tier 0 grandmaster time service or plug your server into a gps antenna up on the roof. It isn't really that exotic. For financial application, especially trading ones, syncing a LAN timesync to a handful of nanoseconds is doable and optimal.

It is just a matter of time before industries outside finance see reasons why better timesync is useful. Precision Time Protocol aka IEEE 1588 was released in 2002 and IEEE 1588 version 2 was released in 2008. This isn't exactly a new thing.

With the right hardware and a tier 0 timesource, modern NTP on modern hardware with modern networks can keep a LAN in sub-second sync. However, as a protocol, NTP only guarantees 1 second accuracy.

bradknowles
Disclaimer: I've been involved in supporting the NTP Public Services Project since 2003.

I assure you, with the right hardware and paying attention to your latencies, NTP can get you down below one millisecond accuracy. Poul-Henning Kamp was doing nanosecond level accuracy with NTP back in the mid-aughts, but then he had rewritten the NTP server code, the NTP client code, and the kernel on the server.

As an NTP service provider, what you really want to keep an eye on is the Clock Error Bound, which gives you a worst-case estimate of how far off the time you serve to your customers could be. For the client side, you mainly care about the accuracy you're actually getting.

SEJeff
Yes, I've seen it get down to a few milliseconds of sync on the right hardware (boundary clock on the switches, stratum 0 timeserver with pps, etc), but the protocol only guarantees 1 second of sync. Am I incorrect in that assertion?
throw0101a
> However, as a protocol, NTP only guarantees 1 second accuracy.

The "one second" number is not inherent to the protocol, but comes from a survey from 1999:

> A recent survey[2] suggests that 90% of the NTP servers have network delays below 100ms, and about 99% are synchronized within one second to the synchronization peer.

* http://www.ntp.org/ntpfaq/NTP-s-algo.htm#Q-ACCURATE-CLOCK

A 2005 survey found (AFAICT, see Figure 1) that north of 99% — more like 99.5% — of servers had offsets less than 100ms, and that 90% have offsets less than 10ms:

* PDF: https://web.archive.org/web/20081221080840/http://www.ntpsur...

The output of "chronyc tracking" from a randomly-selected system I help run:

    Reference ID    : […]
    Stratum         : 3
    Ref time (UTC)  : Wed Nov 23 13:35:21 2022
    System time     : 0.000002993 seconds fast of NTP time
    Last offset     : +0.000003275 seconds
    RMS offset      : 0.000008091 seconds
    Frequency       : 13.191 ppm slow
    Residual freq   : +0.001 ppm
    Skew            : 0.017 ppm
    Root delay      : 0.001615164 seconds
    Root dispersion : 0.000048552 seconds
    Update interval : 65.3 seconds
    Leap status     : Normal
So chrony's algorithm thinks that the system time is currently off by 0.000 002 993 seconds (3e-6), and on average it is off by 0.000 008 091 seconds (8e-6).

All I had to do to achieve this was have some infra nodes (that do other things as well) point to *.pool.ntp.org, and then have my cattle and pets point their NTP software to those infra nodes. No special hardware, no special software, no configuring of network switches and routers: just a few lines of Ansible and I get (estimated) microsecond (1e-6) error levels.
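
For a worst-case figure rather than chrony's estimate, the classic NTP root-distance bound (roughly the "Clock Error Bound" idea mentioned above) is root_dispersion + root_delay / 2. Plugging in the "chronyc tracking" numbers above, as a rough Python sketch (illustrative arithmetic only):

    root_delay = 0.001615164       # seconds, from "chronyc tracking" above
    root_dispersion = 0.000048552  # seconds
    system_offset = 0.000002993    # seconds, chrony's current estimate

    # Classic NTP "root distance": worst-case distance from the reference clock.
    root_distance = root_dispersion + root_delay / 2
    print(f"estimated offset : {system_offset * 1e6:8.1f} us")
    print(f"worst-case bound : {root_distance * 1e6:8.1f} us")  # ~860 us on this box

So the estimated error really is in the microseconds, but the defensible worst-case bound on this particular box is closer to a millisecond, which is the gap PTP with hardware timestamps is chasing.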

akira2501
> But given that NTP (either ntpd or chrony) tends to give me an estimated error of around (tens of) 1e-6 seconds

Is that a hard bound or an average? If it's an average, then what are the limits of the bounds, both in magnitude and duration?

> and PTP can get down to 1e-9 seconds

We use it for audio, and the reason it works well there is that there is no exponential backoff with your peers; backoff would let even small timing slips grow large enough to notice. 1 ms of latency is far too much for our application; we typically aim for 0.25 ms, and we're only running at 96 kHz. If we lose PTP sync, we notice within a few minutes.

Another advantage of PTP is it can operate as a broadcast and, as the article notes, switches can be PTP aware and help update the timing as the broadcast flows through the network. Conveniently, PTP also allows for multiple timing domains and masters to co-exist on the same network.

It's also an absurdly simple protocol; you can build a receiver for it in about 200 lines of C code. I've actually become quite taken with it since it was forced into our space about 10 years ago.
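
For a flavour of how small it is, here is a rough Python sketch that listens for PTPv2 Sync messages on the default multicast group. The field offsets are from memory of IEEE 1588-2008 and should be checked against the spec, and a real client would also need precise receive timestamps (ideally from the NIC), which this ignores:

    import socket
    import struct

    PTP_EVENT_PORT = 319            # Sync / Delay_Req use the "event" port (binding needs privileges)
    PTP_MULTICAST = "224.0.1.129"   # default PTP multicast address

    def parse_sync(buf):
        """Return (sequence_id, seconds, nanoseconds) for a v2 Sync message, else None."""
        if len(buf) < 44:
            return None
        msg_type = buf[0] & 0x0F        # low nibble of byte 0: 0x0 = Sync
        version = buf[1] & 0x0F         # low nibble of byte 1: PTP version
        if version != 2 or msg_type != 0x0:
            return None
        (seq_id,) = struct.unpack_from(">H", buf, 30)
        secs_hi, secs_lo, nanos = struct.unpack_from(">HII", buf, 34)  # 48-bit s + 32-bit ns
        return seq_id, (secs_hi << 32) | secs_lo, nanos

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PTP_EVENT_PORT))
    mreq = socket.inet_aton(PTP_MULTICAST) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(2048)
        parsed = parse_sync(data)
        if parsed:
            print("Sync #%d from %s: t = %d.%09d" % (parsed[0], addr[0], parsed[1], parsed[2]))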

hoseja
I wouldn't call it absurdly simple.
throw0101a
> Another advantage of PTP is it can operate as a broadcast

NTP can work via broadcast and multicast.

yAak
The way I read it: a larger "Window Of Uncertainty" impacts performance. Having a smaller WOU by using PTP gave them a 100x perf increase in their experiments with the commit-wait solution for "ensuring consistency guarantee."

I'm completely ignorant on the topic in general though... so I'm probably missing something. =)

pclmulqdq
I have never seen NTP in a datacenter of reasonable size get below an error of 1e-4. PTP without custom hardware can easily get 3 orders of magnitude better.
bradknowles
PTP is harder to do within a datacenter. You either need hardware support in every switch and network interface, or you're doing software timestamping at which point you might as well be using NTP. And PTP doesn't support DNS, only IPv4 or IPv6 or Layer 2 MAC addresses.

PTP also requires some intelligence with regards to configuring your Ordinary Clocks, your Transparent Clocks, and your Boundary Clocks. And you have to have these configured on every device in the network path.

PTP does have a unicast mode as well as multicast, which can help eliminate unknowable one-way latencies.

It's a pain.

Check the documentation at https://linuxptp.nwtime.org/documentation/ and especially https://linuxptp.nwtime.org/documentation/ptp4l/

forrestthewoods
> How many things need to worry about the latency of a signal travelling ~300 mm?

Arguably every program? The slowest part of modern programs is memory access. L1 cache memory access is ~1 nanosecond and RAM is ~50 nanoseconds.

Is 49 nanoseconds a lot? No, if you do it once. Yes, if every line of code pays the price.
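
A back-of-the-envelope sketch of what that gap costs once it is paid millions of times (the 1 ns / 50 ns figures are the rough numbers above):

    accesses = 100_000_000          # e.g. chasing pointers through a 100M-node structure
    for label, ns in (("all L1 hits", 1), ("all DRAM misses", 50)):
        print(f"{label:16s}: {accesses * ns * 1e-9:.1f} s")
    # all L1 hits     : 0.1 s
    # all DRAM misses : 5.0 s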

stingraycharles
> But given that NTP (either ntpd or chrony) tends to give me an estimated error of around (tens of) 1e-6 seconds, and PTP can get down to 1e-9 seconds, I'm not sure how many data centre applications need that level of accuracy.

I know that in trading, auditing trades / order books requires extremely accurate timing, and they typically deploy GPS hardware to get the required level of accuracy. As GPS is accurate to around 30 ns, going from 1e-6 to 1e-9 (1 ns) is exactly the kind of improvement needed to no longer need GPS hardware.

throw0101a
> […] improvement needed to not need GPS hardware anymore.

You're simply trading NTP hardware for PTP hardware (grandmaster clocks). There is no way to get to 1e-9 scales without hardware support.

bradknowles
These days, all the vendors I know of are shipping hardware that does both. So, it's not just an NTP server, it's also a PTP server. Maybe you don't use one or the other part of that functionality, or maybe they are licensed separately, but they are there.
bradknowles
A good Stratum-1 GNSS server from a company like Meinberg or Microchip will include a Rubidium or Cesium reference clock that is then disciplined by GPS and can get you down to sub-nanosecond level accuracy.
kierank
Most professional network cards have PTP support and a grandmaster is cheap enough or your colo provider will provide PTP as a service.
zh3
It seems a little like it might mean that software errors caused by race conditions can be reduced by making the timing windows smaller. As this is a complex area (it might not appear so, but it is) the pragmatic solution could be to reduce the windows rather than fix the issue (maybe an FB engineer can speak OTR).
DannyBee
First, 300mm is not the real measure in practice for the common use case. PTP is often used to distribute GPS time for things that need it but don't have direct satellite access, and also so you don't have to have direct satellite access everywhere.

For that use case, 1ns of inaccuracy is about 10ft all told (IE accounting for all the inaccuracy it generates).

It can be less these days, especially if not just literally using GPS (IE a phone with other forms of reckoning, using more than just GPS satellites, etc). You can get closer to the 1ns = 1ft type inaccuracy.

But if you are a cell tower trying to beamform or something, you really want to be within a few ns, and without PTP that requires direct satellite access or some other sync mechanism.

Second, I'm not sure what you mean by special. Expense is dictated mostly by holdover and not protocol. It is true some folks gouge heavily on PTP add-ons (orolia, i'm looking at you), but you can ignore them if you want. Linux can do fine PTP over most commodity 10G cards because they have HW support for it. 1G cards are more hit or miss.

For dedicated devices: Here's a reasonable grandmaster that will keep time to GPS(/etc) with a disciplined OCXO, and easily gets within 40ns of GPS and a much higher end reference clock i have. https://timemachinescorp.com/product/gps-ntpptp-network-time...

It's usually within 10ns. 40ns is just the max error ever in the past 3 years.

Doing PTP, the machines stay within a few ns of this master.

If you need better, yes it can get a bit expensive, but honestly, there are really good OCXO out there now with very low phase noise that can more accurately stay disciplined against GPS.

Now, if you need real holdover for PTP, yes, you will probably have to go with rubidium, but even that is not as expensive as it was.

Also, higher end DOCXO have nearly the same performance these days, and are better in the presence of any temperature variation.

As for me, i was playing with synchronizing real-time motion of fast moving machines that are a few hundred feet apart for various reasons. For this sort of application, 100us is a lot of lag.

I would agree this is a pretty uncommon use case, and I could have achieved it through other means; this was more playing around.

AFAIK, the main use of accurate time at this level is cell towers/etc, which have good reasons to want it.

I believe there are also some synchronization applications that have need of severe accuracy (synchronous sound wave generation/etc) but no direct access to satellite signal (IE underwater arrays).

bradknowles
That's the one I was thinking about getting for my home lab. I'm also looking at: https://www.meinbergglobal.com/english/products/synchronizat...

I already have an ancient Meinberg Stratum-1 somewhere that I should pull out of storage and send back to Heiko so that they can put it in a museum. These days, for proper datacenter use, I'd go for something like this one: https://www.meinbergglobal.com/english/products/modular-2u-s...

bergenty
Probably high frequency hedge funds because being first matters a lot. That 300mm is the difference between winning and losing, it’s pretty binary up there.
BaconPackets
I would be really curious to drill down into one of these problems. Is super precise really the solution?
ransom1538
"But given that NTP (either ntpd or chrony) tends to give me an estimated error of around (tens of) 1e-6 seconds, and PTP can get down to 1e-9 seconds, I'm not sure how many data centre applications need that level of accuracy."

I would NOT want to be working on this project at META. Seems like a prime place for more layoffs.

hotpotamus
My guess is the problem this solves is keeping your team looking busy when Zuck is hunting around for redundancy. I set up PTP for a customer in 2011 and I remember back then that NTP errors were estimated into the 10s to even 100s of milliseconds (I suspect because stratum 1 clocks were much rarer back then, but I haven't followed this too closely). Even the customer admitted it was a bit persnickety on his part, but we were all impressed that PTP was able to get clocks synced to under 1ms difference.

I'd also be happy to hear more concrete use cases, but I suppose having your clocks synced with really really good precision just for its own sake is hardly a bad thing.

klodolph
I would also love to see an explanation of “why do we need this much accuracy?” that actually goes through the derivation of how much accuracy you need.

Some of the justification for Google’s TrueTime is found in the Spanner docs:

https://cloud.google.com/spanner/docs/true-time-external-con...

Basically, you want to be able to do a “snapshot read” of the database rather than acquiring a lock (for reasons which should be apparent). The snapshot read is based on a monotonic clock. You can get much better performance out of your monotonic clock if all of your machines have very accurate clocks. When you write to the database, you can add a timestamp to the operation, but you may have to introduce a delay to account for the worst-case error in the clock you used to generate the timestamp.

More accurate timestamps -> less delay. From my understanding, less delay -> servers have more capacity -> buy fewer servers -> save millions of dollars -> use savings to pay for salaries of people who figured out how to make super precise timestamps and still come out ahead.
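
A toy sketch of that commit-wait rule (illustrative only, not Google's actual API): the clock hands back an interval [earliest, latest] whose width is the error bound, and the writer stalls until its chosen timestamp is guaranteed to be in the past, so the stall is roughly twice the error bound and shrinking the bound shrinks the stall:

    import time
    from dataclasses import dataclass

    @dataclass
    class TTInterval:
        earliest: float
        latest: float

    def tt_now(error_bound_s):
        """Pretend TrueTime: wall clock +/- an assumed worst-case error bound."""
        t = time.time()
        return TTInterval(t - error_bound_s, t + error_bound_s)

    def commit(error_bound_s):
        ts = tt_now(error_bound_s).latest              # chosen commit timestamp
        while tt_now(error_bound_s).earliest <= ts:    # "commit wait": ~2x the bound
            time.sleep(error_bound_s / 10)
        return ts

    ts = commit(0.001)   # with a 1 ms bound, every commit stalls about 2 ms;
                         # with a 10 us (PTP-class) bound the stall all but disappears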

This kind of engineering effort makes sense at companies like Google and Meta because these companies spend such a large amount of money on computer resources to begin with.

lifeisstillgood
This is something like my third attempt to read the spanner paper - I get how it helps ordering of transactions but I am confused if it is used in making transactions atomic across machines ?
throw0101a
> I get how it helps ordering of transactions but I am confused if it is used in making transactions atomic across machines ?

AIUI, you cannot quite think of it like a regular database where a particular row has a particular value, which would necessitate only one writer doing (atomic) updates at a time.

Rather, it is an MVCC-like database and a bit like an append-only log: as many writers as needed can write and there are 'multiple values' for each row. The "actual" value of the row is the one with the highest transaction ID / timestamp. So updates can happen without (atomic) locking by just adding to the value(s) that already exist.

When reading, applications just generally get served the value with the highest-value timestamp, and since time is synchronized to such a tiny interval, it is a reasonably sure bet that the highest value is the most recent transaction.
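
A toy sketch of that append-only, highest-timestamp-wins idea (nothing like Spanner's real implementation, just the shape of it):

    import time
    from collections import defaultdict

    class MVCCStore:
        def __init__(self):
            self._versions = defaultdict(list)   # key -> [(commit_ts, value), ...]

        def write(self, key, value):
            # Writers never overwrite; they append another timestamped version.
            self._versions[key].append((time.time(), value))

        def read(self, key, snapshot_ts=None):
            # Readers see the newest version at or before their snapshot timestamp.
            snapshot_ts = snapshot_ts if snapshot_ts is not None else time.time()
            visible = [(ts, v) for ts, v in self._versions[key] if ts <= snapshot_ts]
            return max(visible)[1] if visible else None

    store = MVCCStore()
    store.write("row1", "a")
    time.sleep(0.01)
    snapshot = time.time()
    time.sleep(0.01)
    store.write("row1", "b")
    print(store.read("row1", snapshot))  # "a" -- the later write is invisible to this snapshot
    print(store.read("row1"))            # "b"

The tighter the clocks, the smaller the window in which "highest timestamp" and "most recent write" can disagree between machines.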

This is similar in concept to a vector clock (see also Lamport):

* https://en.wikipedia.org/wiki/Vector_clock

But instead of logical clocks with 'imaginary time', 'real time' is used down to the sub-microsecond level.

jasonwatkinspdx
Meta uses some variations on Hybrid Logical Clocks, which are very similar to TrueTime, so yes this does apply. Besides performance they very much want to avoid consistency issues that could result in a security breach, eg, if I block Alan and then post "Alan is a dookie head" you don't want some node seeing the second event first. Well really the bigger concern is someone spots this as a potential vulnerability point and scripts something.
Aug 02, 2022 · BenoitEssiambre on Use one big server
I'm glad this is becoming conventional wisdom. I used to argue this in these pages a few years ago and would get downvoted below the posts telling people to split everything into microservices separated by queues (although I suppose it's making me lose my competitive advantage when everyone else is building lean and mean infrastructure too).

In my mind, reasons involve keeping transactional integrity, ACID compliance, better error propagation, avoiding the hundreds of impossible to solve roadblocks of distributed systems (https://groups.csail.mit.edu/tds/papers/Lynch/MIT-LCS-TM-394...).

But also it is about pushing the limits of what is physically possible in computing. As Admiral Grace Hopper would point out (https://www.youtube.com/watch?v=9eyFDBPk4Yw ) doing distance over network wires involves hard latency constraints, not to mention dealing with congestion over these wires.

Physical efficiency is about keeping data close to where it's processed. Monoliths can make much better use of L1, L2, L3, and RAM caches than distributed systems, for speedups often on the order of 100X to 1000X.

Sure it's easier to throw more hardware at the problem with distributed systems but the downsides are significant so be sure you really need it.

Now there is a corollary to using monoliths. Since you only have one db, that db should be treated as somewhat sacred; you want to avoid wasting resources inside it. This means being a bit more careful about how you are storing things, using the smallest data structures, normalizing when you can, etc. This is not to save disk, disk is cheap. This is to make efficient use of L1, L2, L3 and RAM.

I've seen boolean true or false values saved as large JSON documents. {"usersetting1": true, "usersetting2": false, "setting1name": "name", etc.} with 10 bits of data ending up as a 1k JSON document. Avoid this! Storing documents means the keys, effectively the full table schema, are repeated in every row. It has its uses, but if you can predefine your schema and use the smallest types needed, you gain a lot of performance, mostly through much higher cache efficiency!
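
A quick sketch of the difference for the example above (ten booleans as a self-describing JSON document versus a predefined schema):

    import json

    settings = {f"user_setting_{i}_enabled": (i % 2 == 0) for i in range(10)}
    doc = json.dumps(settings).encode()

    print(len(doc), "bytes as a JSON document (keys repeated in every row)")
    print(10, "bytes as ten 1-byte boolean columns,", (10 + 7) // 8, "bytes bit-packed")

Across millions of rows that factor decides whether the hot part of the table lives in L1/L2/L3, or even in RAM at all.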

Swizec
> I'm glad this is becoming conventional wisdom

My hunch is that computers caught up. Back in the early 2000's horizontal scaling was the only way. You simply couldn't handle even reasonably mediocre loads on a single machine.

As computing becomes cheaper, horizontal scaling is starting to look more and more like unnecessary complexity for even surprisingly large/popular apps.

I mean you can buy a consumer off-the-shelf machine with 1.5TB of memory these days. 20 years ago, when microservices started gaining popularity, 1.5TB RAM in a single machine was basically unimaginable.

cmrdporcupine
Honestly from my perspective it feels like microservices arose strongly in popularity precisely when it was becoming less necessary. In particular the mass adoption of SSD storage massively changed the nature of the game, but awareness of that among regular developers seemed not as pervasive as it should have been.
tsmarsh
'over the wire' is less obvious than it used to be.

If you're in a k8s pod, those calls are really kernel calls. Sure you're serializing and process switching where you could be just making a method call, but we had to do something.

I'm seeing less 'balls of mud' with microservices. That's not zero balls of mud. But it's not a given for almost every code base I wander into.

Tainnor
> I'm seeing less 'balls of mud' with microservices.

The parallel to "balls of mud" with microservices is tiny services that seem almost devoid of any business logic and all the actual business logic is encapsulated in the calls between different services, lambda functions, and so on.

That's quite nightmarish from a maintenance perspective too, because now it's almost impossible to look at the system from the outside and understand what it's doing. It also means that conventional tooling can't help you anymore: you don't get compiler errors if your lambda function calls an endpoint that doesn't exist anymore.

Big balls of mud are horrible (I'm currently working with a big ball of mud monolith, I know what I'm talking about), but you can create a different kind of mess with microservices too. Then there are all the other problems, such as operational complexity, or "I now need to update log4j across 30 services".

In the end, a well-engineered system needs discipline and architectural skills, as well as a healthy engineering culture where tech debt can be paid off, regardless of whether it's a monolith, a microservice architecture or something in between.

gizzlon
> I'm seeing less 'balls of mud' with microservices. That's not zero balls of mud.

They are probably younger. Give them time :P

BenoitEssiambre
To clarify, I think stateless microservices are good. It's when you have too many DBs (and sometimes too many queues) that you run into problems.

A single instance of PostgreSQL is, in most situations, almost miraculously effective at coordinating concurrent and parallel state mutations. To me that's one of the most important characteristics of an RDBMS. Storing data is a simpler secondary problem. Managing concurrency is the hard problem that I need most help with from my DB, and having a monolithic DB enables the coordination of everything else, including stateless peripheral services, without resulting in race conditions, conflicts or data corruption.

SQL is the most popular mostly-functional language. This might be because managing persistent state and keeping data organized and low entropy is where you get the most benefit from using a functional approach that doesn't add more state. This adds to the effectiveness of using a single transactional DB.

I must admit that even distributed DBs, like Cockroach and Yugabyte, have recognized this and use the PostgreSQL syntax and protocol. This is good though, it means that if you really need to scale beyond PostgreSQL, you have PostgreSQL compatible options.

FpUser
>"I'm glad this is becoming conventional wisdom. "

Yup, this is what I've always done and it works wonders. Since I do not have bosses, just clients, I do not give a flying fuck about the latest fashion and do what actually makes sense for me and said clients.

lmm
I've never understood this logic for webapps. If you're building a web application, congratulations, you're building a distributed system, you don't get a choice. You can't actually use transactional integrity or ACID compliance because you've got to send everything to and from your users via HTTP request/response. So you end up paying all the performance, scalability, flexibility, and especially reliability costs of an RDBMS, being careful about how much data you're storing, and getting zilch for it, because you end up building a system that's still last-write-wins and still loses user data whenever two users do anything at the same time (or you build your own transactional logic to solve that - exactly the same way as you would if you were using a distributed datastore).

Distributed systems can also make efficient use of cache, in fact they can do more of it because they have more of it by having more nodes. If you get your dataflow right then you'll have performance that's as good as a monolith on a tiny dataset but keep that performance as you scale up. Not only that, but you can perform a lot better than an ACID system ever could, because you can do things like asynchronously updating secondary indices after the data is committed. But most importantly you have easy failover from day 1, you have easy scaling from day 1, and you can just not worry about that and focus on your actual business problem.

Relational databases are largely a solution in search of a problem, at least for web systems. (They make sense as a reporting datastore to support ad-hoc exploratory queries, but there's never a good reason to use them for your live/"OLTP" data).

BenoitEssiambre
Http requests work great with relational dbs. This is not UDP. If the TCP connection is broken, an operation will either have finished or stopped and rolled back atomically, and unless you've placed unneeded queues in there, you should know of success immediately.

When you get the http response, you will know the data is fully committed, data that uses it can be refreshed immediately and is accessible to all other systems immediately so you can perform next steps relying on those hard guarantees. Behind the http request, a transaction can be opened to do a bunch of stuff including API calls to other systems if needed and commit the results as an atomic transaction. There are tons of benefit using it with http.
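
A minimal sketch of that pattern, using stdlib sqlite3 as a stand-in RDBMS and a plain function as a stand-in HTTP handler: everything behind one request commits atomically or rolls back, and the caller only ever sees the success response after the commit has happened:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
    conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
    conn.commit()

    def handle_transfer_request(src, dst, amount):
        """Pretend HTTP handler: returns (status_code, body) only after the commit."""
        try:
            with conn:  # opens a transaction; commits on success, rolls back on exception
                cur = conn.execute(
                    "UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?",
                    (amount, src, amount))
                if cur.rowcount != 1:
                    raise ValueError("insufficient funds")
                conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                             (amount, dst))
            return 200, "transfer committed"
        except Exception as exc:
            return 409, f"nothing changed: {exc}"

    print(handle_transfer_request(1, 2, 60))   # (200, 'transfer committed')
    print(handle_transfer_request(1, 2, 60))   # (409, ...) and both balances are untouched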

lmm
But you can't do interaction between the two ends of a HTTP request. The caller makes an inert request, whatever processing happens downstream of that might as well be offline because it's not and can never be interactive within a single transaction.
Tainnor
Now you're shifting the goalposts. You started out by claiming that web apps can't be transactional, now you've switched to saying they can't be transactional if they're "interactive" (by which you presumably mean transactions that span multiple HTTP requests).

Of course, that's a very particular demand, one that doesn't necessarily apply to many applications.

And even then, depending on the use case, there are relatively straightforward ways of implementing that too: For example, if you build up all the data on the client (potentially by querying the server, with some of the partial data, for the next form page, or whatever) and submit it all in one single final request.

Tainnor
I really don't understand how anything of what you wrote follows from the fact that you're building a web-app. Why do you lose user data when two users do anything at the same time? That has never happened to me with any RDBMS.

And why would HTTP requests prevent me from using transactional logic? If a user issues a command such as "copy this data (a forum thread, or a Confluence page, or whatever) to a different place" and that copy operation might actually involve a number of different tables, I can use a transaction and make sure that the action either succeeds fully or is rolled back in case of an error; no extra logic required.

I couldn't disagree more with your conclusion even if I wanted to. Relational databases are great. We should use more of them.

lmm
> I really don't understand how anything of what you wrote follows from the fact that you're building a web-app. Why do you lose user data when two users do anything at the same time? That has never happened to me with any RDBMS.

> And why would HTTP requests prevent me from using transactional logic? If a user issues a command such as "copy this data (a forum thread, or a Confluence page, or whatever) to a different place" and that copy operation might actually involve a number of different tables, I can use a transaction and make sure that the action either succeeds fully or is rolled back in case of an error; no extra logic required.

Sure, if you can represent what the user wants to do as a "command" like that, that doesn't rely on a particular state of the world, then you're fine. Note that this is also exactly the case that an eventually consistent event-sourcing style system will handle fine.

The case where transactions would actually be useful is the case where a user wants to read something and modify something based on what they read. But you can't possibly do that over the web, because they read the data in one request and write it in another request that may never come. If two people try to edit the same wiki page at the same time, either one of them loses their data, or you implement some kind of "userspace" reconciliation logic - but database transactions can't help you with that. If one user tries to make a new post in a forum thread at the same time as another user deletes that thread, probably they get an error that throws away all their data, because storing it would break referential integrity.

Tainnor
> Sure, if you can represent what the user wants to do as a "command" like that, that doesn't rely on a particular state of the world, then you're fine. Note that this is also exactly the case that an eventually consistent event-sourcing style system will handle fine.

Yes, but the event-sourcing system (or similar variants, such as CRDTs) is much more complex. It's true that it buys you some things (like the ability to roll back to specific versions), but you have to ask yourself whether you really need that for a specific piece of data.

(And even if you use event sourcing, if you have many events, you probably won't want to replay all of them, so you'll maybe want to store the result in a database, in which case you can choose a relational one.)

> If two people try to edit the same wiki page at the same time, either one of them loses their data, or you implement some kind of "userspace" reconciliation logic - but database transactions can't help you with that.

Yes, but

a) that's simply not a problem in all situations. People will generally not update their user profile concurrently with other users, for example. So it only applies to situations where data is truly shared across multiple users, and it doesn't make sense to build a complex system only for these use cases,

b) the problem of users overwriting other users' data is inherent to the problem domain; you will, in the end, have to decide which version is the most recent regardless of which technology you use. The one thing that events etc. buy you is a version history (which btw can also be implemented with a RDBMS), but if you want to expose that in the UI so the user can go back, you have to do additional work anyway - it doesn't come for free.

c) Meanwhile, the RDBMS will at least guarantee that the data is always in a consistent state. Users overwriting other users' data is unfortunate, but corrupted data is worse.

d) You can solve the "concurrent modification" issue in a variety of ways, depending on the frequency of the problem, without having to implement a complex event-sourced system. For example, a lock mechanism is fairly easy to implement and useful in many cases. You could also, for example, hash the contents of what the user is seeing and reject the change if there is a mismatch with the current state (I've never tried it, but it should work in theory).
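
A minimal sketch of the check described in (d), using a version column instead of a content hash (same principle, less hashing): the UPDATE only takes effect if the row is still at the version the user originally read:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
    conn.execute("INSERT INTO pages VALUES (1, 'original text', 1)")
    conn.commit()

    def save_page(page_id, new_body, version_read):
        with conn:
            cur = conn.execute(
                "UPDATE pages SET body = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_body, page_id, version_read))
        return cur.rowcount == 1   # False => someone else saved first; ask the user to merge

    print(save_page(1, "edit by user A", version_read=1))  # True
    print(save_page(1, "edit by user B", version_read=1))  # False: B was editing a stale copy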

I don't wish to claim that a relational database solves all transactionality (and consistency) problems, but they certainly solve some of them - so throwing them out because of that is a bit like "tests don't find all bugs, so we don't write them anymore".

lmm
> Yes, but the event-sourcing system (or similar variants, such as CRDTs) is much more complex.

It's really not. An RDBMS usually contains all of the same stuff underneath the hood (MVCC etc.), it just tries to paper over it and present the illusion of a single consistent state of the world, and unfortunately that ends up being leaky.

> a) that's simply not a problem in all situations. People will generally not update their user profile concurrently with other users, for example. So it only applies to situations where data is truly shared across multiple users,

Sure - but those situations are ipso facto situations where you have no need for transactions.

> b) the problem of users overwriting other users' data is inherent to the problem domain; you will, in the end, have to decide which version is the most recent regardless of which technology you use. The one thing that evens etc. buy you is a version history (which btw can also be implemented with a RDBMS), but if you want to expose that in the UI so the user can go back, you have to do additional work anyway - it doesn't come for free.

True, but what does come for free is thinking about it when you're designing your dataflow. Using an event sourcing style forces you to confront the idea that you're going to have concurrent updates going on, early enough in the process that you naturally design your data model to handle it, rather than imagining that you can always see "the" current state of the world.

> c) Meanwhile, the RDBMS will at least guarantee that the data is always in a consistent state. Users overwriting other users' data is unfortunate, but corrupted data is worse.

I'm not convinced, because the way it accomplishes that is by dropping "corrupt" data on the floor. If user A tries to save new post B in thread C, but at the same time user D has deleted that thread, then in a RDBMS where you're using a foreign key the only thing you can do is error and never save the content of post B. In an event sourcing system you still have to deal with the fact that the post belongs in a nonexistent thread eventually, but you don't start by losing the user's data, and it's very natural to do something like mark it as an orphaned post that the user can still see in their own post history, which is probably what you want. (Of course you can achieve that in the RDBMS approach, but it tends to involve more complex logic, giving up on foreign keys and accepting that you have to solve the same data integrity problems as a non-ACID system, or both).

> d) You can solve the "concurrent modification" issue in a variety of ways, depending on the frequency of the problem, without having to implement a complex event-sourced system. For example, a lock mechanism is fairly easy to implement and useful in many cases. You could also, for example, hash the contents of what the user is seeing and reject the change if there is a mismatch with the current state (I've never tried it, but it should work in theory).

That sounds a whole lot more complex than just sticking it in an event sourcing system. Especially when the problem is rare, it's much better to find a solution where the correct behaviour naturally arises in that case, than implement some kind of ad-hoc special case workaround that will never be tested as rigorously as your "happy path" case.

Tainnor
> It's really not. An RDBMS usually contains all of the same stuff underneath the hood (MVCC etc.), it just tries to paper over it and present the illusion of a single consistent state of the world, and unfortunately that ends up being leaky.

There's nothing leaky about it. Relational algebra is a well-understood mathematical abstraction. Meanwhile, I can just set up postgres and an ORM (or something more lightweight, if I prefer) and I'm good to go - there's thousands of examples of how to do that. Event-sourced architectures have decidedly more pitfalls. If my event handling isn't commutative, associative and idempotent I'm either losing out on concurrency benefits (because I'm asking my queue to synchronise messages) or I'll get undefined behaviour.

There's really probably no scenario in which implementing a CRUD app with a relational database isn't going to take significantly less time than some event sourced architecture.

> Sure - but those situations are ipso facto situations where you have no need for transactions.

> Using an event sourcing style forces you to confront the idea that you're going to have concurrent updates going on

There are tons of examples like backoffice tools (where people might work in shifts or on different data sets), delivery services, language learning apps, flashcard apps, government forms, todo list and note taking apps, price comparison services, fitness trackers, banking apps, and so on, where some or even most of the data is not usually concurrently edited by multiple users, but where you still will probably have consistency guarantees across multiple tables.

Yes, if you're building Twitter, by all means use event sourcing or CRDTs or something. But we're not all building Twitter.

> If user A tries to save new post B in thread C, but at the same time user D has deleted that thread, then in a RDBMS where you're using a foreign key the only thing you can do is error and never save the content of post B.

I don't think I've ever seen a forum app that doesn't just "throw away" the user comment in such a case, in the sense that it will not be stored in the database. Sure, you might have some event somewhere, but how is that going to help the user? Should they write a nice email and hope that some engineer with too much time is going to find that event somewhere buried deep in the production infrastructure and then ... do what exactly with it?

This is a solution in search of a problem. Instead, you should design your UI such that the comment field is not cleared upon a failed submission, like any reasonable forum software. Then the user who really wants to save their ramblings can still do so, without the need of any complicated event-sourcing mechanism. And in most forums, threads are rarely deleted, only locked (unless it's outright spam/illegal content/etc.)

(Also, there are a lot of different ways how things can be designed when you're using an RDBMS. You can also implement soft deletes (which many applications do) and then you won't get any foreign key errors. In that way, you can still display "orphaned" comments that belong to deleted threads, if you so wish (have never seen a forum do that, though). Recovering a soft deleted thread is probably also an order of magnitude easier than trying to replay it from some events. Yes, soft deletes involve other tradeoffs - but so does every architecture choice.)

> That sounds a whole lot more complex than just sticking it an event sourcing system. Especially when the problem is rare, it's much better to find a solution where the correct behaviour naturally arises in that case.

I really disagree that a locking mechanism is more difficult than an event sourced system. The mechanism doesn't have to be perfect. If a user loses the lock because they haven't done anything in half an hour, then in many cases that's completely acceptable. Such a system is not hard to implement (I could just use a redis store with expiring entries) and it will also be much easier to understand, since you now don't have to track the flow of your business logic across multiple services.

I also don't know why you think that your event-sourced system will be better tested. Are you going to test for the network being unreliable, messages getting lost or being delivered out of order, and so on? If so, you can also afford to properly test a locking mechanism (which can be readily done in a monolith, maybe with an additional redis dependency, and is therefore more easily testable than some event-based logic that spans multiple services).

And in engineering, there are rarely "natural" solutions to problems. There are specific problems and they require specific solutions. Distributed systems, event sourcing etc. are great where they're called for. In many cases, they're simply not.

faeriechangling
>As Admiral Grace Hopper would point out (https://www.youtube.com/watch?v=9eyFDBPk4Yw ) doing distance over network wires involves hard latency constraints, not to mention dealing with congestions over these wires.

Even accounting for CDNs, a distributed system is inherently more capable of bringing data closer to geographically distributed end users, thus lowering latency.

threeseed
> I'm glad this is becoming conventional wisdom

It's not though. You're just seeing the most popular opinion on HN.

In reality it is nuanced like most real-world tech decisions are. Some use cases necessitate a distributed or sharded database, some work better with a single server and some are simply going to outsource the problem to some vendor.

dist1ll
Exactly. The HN crowd is obsessed with minimalism and reducing "bloat".

It has become a cult, where availability and scale requirements are apparently fiction. "You are not FAANG, you don't have these requirements."

Aeolun
> outsource the problem to some vendor

At least that way you can be certain of failure.

Jun 17, 2022 · 176 points, 46 comments · submitted by scrlk
lordleft
Can we take a moment to acknowledge her incredible presentational ability? She was charming, wry, slightly subversive, and still conveyed a really cool scientific concept in one brief talk.
bsder
Charming, wry, and slightly subversive is all her personality. However, more than a few people who have been on the receiving end would argue with you about "charming" and "slightly".

Presentation ability, however, was learned and practiced a lot.

She used to make all her subordinates give oral reports weekly on written articles she would pass out and then discuss as a group.

If you committed any of various presentation sins, you had to dump a quarter into the penalty jar.

Her subordinates got very good at presentations.

corrral
I've noticed that the prep-school-to-Ivy pipeline is great at producing people with that quality.

I just checked and, sure enough, that's exactly what she did.

[EDIT] The quality of being a confident, engaging conversationalist and presenter, I mean.

For her it was The Hartridge School and then Yale. Hartridge, in its modern form as the Wardlaw-Hartridge School, runs a bit over $40k/yr by the time you're nearing the end, down to about $16k for pre-k, though many won't be paying full sticker price.

zahma
The fastest reactions in our body (hydride shifts, very small rearrangements of atoms to maximize charge stability) take a mere picosecond. A nanosecond is 1000 times longer than a picosecond. Try to imagine that the molecules in your body are spinning at crazy frequencies and rearranging themselves incessantly at that speed.
johnsanders
"...we should hang one over [programmers' desks], or around their necks so they know what they're throwing away when they throw away a microsecond." Relevant forty years later.
ajdude
That line really stuck with me
lelandfe
https://youtu.be/3N_ywhx6_K0?t=33

Hopper on Letterman

shakezula
So interesting to hear the street-level policy effects of the Carter administration talked about like this. It adds something intangible to this video.
dominotw
So sharp and quick witted at that age.
sbarre
What a great interview
tomwheeler
It's too bad that talk shows aren't like this anymore. The Letterman producers from the 1980s deserve kudos for finding interesting guests (not only Grace Hopper, but also Isaac Asimov, Doc Edgerton, Don Herbert, and plenty of others).

Aside from Neil deGrasse Tyson or Dr. Fauci, it's pretty rare to see a scientist on a late night show now. For all the empty talk about the importance of STEM, it's pretty unlikely that you'll see a pioneer of computer science.

KerrAvon
The flip side is that this was the _only_ place you ever saw them. It's not difficult to find endless videos of Neil deGrasse Tyson or Dr. Fauci on YouTube today. In the 1980's, you didn't see people like Grace Hopper or Asimov _at all_ except for this sort of appearance on a talk show or maybe something brief on PBS if you were lucky to catch it at the time it was broadcast.
guenthert
That doesn't mean we were unaware of Asimov. We ... read.
dredmorbius
That's true, and there were only the three commercial networks and PBS (as of the 1970s) on which to catch appearances.

Radio was somewhat more open, and Asimov writes of hearing (and not recognising) his own voice coming from the radio in his autobiography (his wife clued him in).

That said, I just searched Invidious for any appearances of Kim Stanley Robinson, one of the most notable current science fiction authors, on any of the late-night shows (Colbert, Kimmel, Fallon) ... and there's nothing. Though tons of other videos featuring KSR:

https://yewtu.be/search?q=%22kim+stanley+robinson%22+%28kimm...

jbandela1
The foot has been criticized for being an arbitrary measurement with no real relation to a repeatable physical distance.

However, it turns out that a foot is within 2% of the distance light travels in a nanosecond!

Because of this, the foot becomes really convenient when talking about latencies. For example, if something is 6 inches away from the cpu on a motherboard, the lowest possible latency to reach that is 0.5 nanoseconds.

Time to push for the adoption of feet everywhere /s
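
The arithmetic, for anyone who wants to check the 2% claim (illustrative only):

    c = 299_792_458                 # m/s, speed of light in vacuum
    light_ns = c * 1e-9             # metres per nanosecond, ~0.2998 m
    print(f"light-nanosecond = {light_ns / 0.0254:.2f} inches")          # ~11.80
    print(f"a foot is {100 * (0.3048 / light_ns - 1):.1f}% longer")      # ~1.7%
    print(f"6 inches one-way = {0.1524 / c * 1e9:.2f} ns")               # ~0.51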

doliveira
Reminds me of the whole "Fahrenheit is awesome because 0°F is too cold and 100°F is too hot"

(BTW, for me 0°C is too cold and 40°C is too hot)

8note
Celsius is nice because -30 is the limit of cold that I want, and +30 is the limit of hot
stn8188
Love your comment, this quick rule is critical, but... Don't forget that the relative permittivity of PCB material is roughly 4, so the rule for circuit boards is 6" per nanosecond :)
InitialLastName
A similar heuristic is very useful for acoustics: the speed of sound is close enough to 1 foot/ms to be a great rule of thumb for estimation.
ChainOfFools
Or... rule of foot, for the compulsive unit-cancelers out there
protomyth
I thought if the US ever switched from Imperial we should just switch to light-nanoseconds since it's so close and then make fun of the metric folks for being Earth-centric.

Units of weight / volume would be a pain though since a cubic light-nanosecond of water is about 7.118 US gallons and weighs (@1g) about 59.227 lbs at the melting point of ice.

Of course, we should still go with base 8 like the Yuki tribe (spaces between fingers instead of fingers because that's how many bottles you can carry).
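
Sanity-checking those numbers with water at 1 g/mL (so kilograms equal litres): the gallons match, and the weight comes out a shade higher, around 59.4 lb:

    c = 299_792_458
    side_cm = c * 1e-9 * 100            # one light-nanosecond in cm
    litres = side_cm ** 3 / 1000        # cm^3 -> L, and kg at 1 g/mL
    print(f"{litres:.2f} L = {litres / 3.785411784:.3f} US gallons")   # ~7.118
    print(f"{litres:.2f} kg = {litres * 2.20462:.1f} lb")              # ~59.4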

ArnoVW
Light nanosecond depends on 'second', which is literally Earth centric :-)
protomyth
Less Earth centric than the distance from the North Pole to the Equator. We'll still use seconds on Mars.
dredmorbius
And you can use feet on Mars, despite there being no actual feet there.

The second is 1/86,400 of a nominal Earth solar day. More or less.

Slightly less with time as the Earth's rotation is in fact slowing.

chrisseaton
> For example, if something is 6 inches away from the cpu on a motherboard, the lowest possible latency to reach that is 0.5 nanoseconds.

Isn't that going to be insignificant compared to everything inside the computer on both ends?

mlonkibjuyhv
Not if you're making computers.
dredmorbius
To expand on this: if you're designing / building / assembling a computer or cluster, and components are 6" apart, then the minimum round-trip latency for communicating between those components is about 1 nanosecond.

Given clock speeds of multiple GHz, that means spending an entire clock cycle or more simply communicating between components.

See also the case of the 500 mile email: https://www.ibiblio.org/harris/500milemail.html

(A Sendmail misconfiguration resulted in a maximum response time of 3 milliseconds, or roughly 500 miles of travel at the speed of light. The observed behaviour was that a uni campus computer could send email only within a 500 mile radius, as noted by the statistics department.)

chrisseaton
> Given clock speeds of multiple GHz, that means spending an entire clock cycle or more simply communicating between components.

Yeah that's expected isn't it? That's why we have caches on die. Nobody is out there expecting main memory reads to retire in a clock cycle, let alone IO! I don't think even lower tier cache accesses retire in a single clock cycle. That's just not how processors work these days.

dredmorbius
It's not just single systems.

It's clusters. It's datacentres. It's tools which span the globe. Or extend into space.

The Web by default is now transacted over HTTPS. This means that every session requires a TLS handshake:

- Client hello

- Server hello + key

- Client key exchange.

- Server finished.

- Client finished.

- Data transfer begins.

That's six exchanges, and three round trips. For an antipodal set of hosts, at 300ms per trip, that's nearly 2 seconds just to set up a session. If you're communicating with a Moon base, it's eight seconds.
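
The arithmetic, using the per-trip figures above (three handshake round trips before any payload moves):

    def tls_setup_s(one_way_s, round_trips=3):
        return round_trips * 2 * one_way_s

    for label, one_way in (("same metro, ~1 ms", 0.001),
                           ("antipodal, ~300 ms", 0.300),
                           ("Earth-Moon, ~1.3 s", 1.3)):
        print(f"{label:20s}: {tls_setup_s(one_way):.2f} s before the first byte of data")
    # same metro, ~1 ms   : 0.01 s
    # antipodal, ~300 ms  : 1.80 s
    # Earth-Moon, ~1.3 s  : 7.80 s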

And if you're using a tool or protocol which presumes cheap or fast round-trips, and uses a lot of round trips, you may find it's unusable.

Some years back a multi-campus site rolled out a remote-console tool that worked across platforms in datacentres --- we had both Linux and Windows hosts.

Working locally with the DC one building over in the campus, or even with a facility elsewhere in the province, performance was laggier than local, but tolerable. The team operating out of Dubai was waiting five minutes to see login screens presented.

Distance is time.

wolf550e
TLS 1.3 removes one roundtrip. HTTP/3 (QUIC) removes more by combining the TCP handshake with the TLS handshake.
dredmorbius
SPAs return those roundtrips with a vengeance.
dredmorbius
Which gets back to my, and Hopper's, initial point:

Space adds time.

If you're doing something, anything, which involves communicating between two or more components frequently, then the further apart those components are, the longer it will take.

(It's also more likely to be affected by other issues --- latency, unreliability, interference, injection, exfiltration, ...)

And that will grow linearly with distance as a multiple of interactions.

There's a lot of code and processing which presumes delays are small and components are near. As those assumptions are violated, performance tends to degrade spectacularly.

_joel
Welcome to the UK in 2022!

You'll even get a crown logo emblazoned on the nanosecond ;)

Arcorann
Of course, if one were to switch measurements for that reason it'd be better to use the Japanese shaku [1], which at 30.303 cm is closer to a light-nanosecond than a foot.

[1] https://en.wikipedia.org/wiki/Japanese_units_of_measurement

_benj
I loved this!

It ties in so well with another comment about the speed of computers on the front page:

> On a 3GHz CPU, one clock cycle is enough time for light to travel only 10cm. If you hold up a sign with, say, a multiplication, a CPU will produce the result before light reaches a person a few metres away.

<https://news.ycombinator.com/item?id=31769936>

wiredfool
I had a genuine Grace Hopper nanosecond from when she visited my high school. Sadly lost now.
LanceH
I saluted her once. No idea who she was, she was an old lady standing at a bus stop on base, wearing an odd (dated) uniform with an unusual rack of ribbons.
askin4it
Is there a long version of this story?
Abekkus
I'd heard that in the army, once you reach General, you get to pick out your own uniform (she made it to some level of admiral during her service)
hinkley
Rear Admiral
croes
Previous discussion

https://news.ycombinator.com/item?id=24341229

pvg
And those from 6, 9 and 10 years ago

https://news.ycombinator.com/item?id=12130933

https://news.ycombinator.com/item?id=5045842

https://news.ycombinator.com/item?id=3655886

dougmwne
I love this so much. Computing is right up against the limits of the universe and seeing that your 4 GHz processor has a cycle time of about 7 light-cm shows you exactly how close we are to that limit. Looking at the computing time spent on accessing a remote server 8000 km away also keeps things in perspective.
mcdonje
That was charming. Great demonstration of scientific communication. Great visualization.
mywittyname
This is the first time that I've ever heard her speak. I have to say, she's amazingly charming and charismatic.
paganel
Sometimes I forget how militarized the computer industry used to be, probably still is, in one way or another. For every Stallman and Aaron Swartz there’s a Grace Hopper dressed in military attire while talking about computers.

Sad, too, that I haven’t seen any news in here today about Assange’s extradition to the US, at least not on the front page.

nosefrog
My grandfather got his PhD because he lost an argument to Grace Hopper because she had a PhD and he didn't. He was working for the Navy at the time, and he wanted to use a higher level language for some operating system they were building. Grace Hopper thought that higher level languages were only suitable for business applications, not other computing purposes where performance was more important.

This is a link to a paper he wrote about one of the first cross compilers that they had built: https://dl.acm.org/doi/abs/10.1145/367436.367477

This comparison reminds me of Grace Hopper explaining how long a nanosecond is: https://www.youtube.com/watch?v=9eyFDBPk4Yw
I would add that software infrastructure can run incredibly fast and scale incredibly well on modern hardware if you're a bit careful about resource usage.

Traditional relational DBs like PostgreSQL are very scalable on modern hardware. If you take the time to craft a normalized schema with low redundancy, being careful about keeping data small, you can achieve performance, resource efficiency and cache efficiency hundreds of times better than bloated distributed document-based NoSQL systems. You can also get better transactional integrity, reduce reliance on queues (use load balancers instead) and get more instant and more atomic error propagation and edge case resolution. You can really build something lean and mean, to a point that would be physically impossible in distributed systems or systems that involve multiple hops over potentially congested networks (as Admiral Grace Hopper likes to remind us, distance matters in computing https://www.youtube.com/watch?v=9eyFDBPk4Yw). As far as I know, normalized relational databases are still the best at efficiently using multiple levels of physically near caches to their full potential. Most applications don't need to scale horizontally and can easily fit on single servers (which can scale to dozens of cores and terabytes of RAM).

I hear arguments that it's the engineers who are expensive, so it's OK to be wasteful with hardware if it saves engineering time. But in my experience the engineers who are good at optimization are often good at engineering in general, and the ones who forget to think about efficiency are also sloppy with other things. It doesn't mean you have to write your code in C: very fast code can be written in high level languages with the proper skills (see projects like Fastify).

nikanj
Unfortunately, in a lot of cases it's _our_ engineers who are expensive, so we are wasting _your_ CPU/RAM/etc.
bob1029
> Very fast code can be written in high level languages with the proper skills

I sometimes wonder why we don't have mandatory coursework that demonstrates the upper bound of what a modern x86 system is capable of in practical terms.

If developers understood just how much perf they were leaving on the table, they would likely self-correct out of shame. Latency is the ultimate devil and we need to start burning that into the brains of developers. The moment you shard a business system across more than one computer, you enter into hell. We should be trying to avoid this fate, not embrace it.

We basically solved the question "what's the fastest way to synchronize work between threads" in ~2010 with the advent of the LMAX Disruptor. But for whatever reason, this work has been relegated to the dark towers of fintech wizardry rather than being perma-stickied on HN. Systems written in C# 10 on .NET 6 which leverage a port of this library can produce performance figures that are virtually impossible to meet in any other setting (at least in a safe/stable way).

This stuff is not inaccessible at all. It is just unpopular. Which is a huge shame. There is so much damage (in a good way) developers could do with these tools and ideologies if they could find some faith in them.
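For readers who haven't met the Disruptor pattern, here is a deliberately tiny single-producer/single-consumer sketch of its core idea: a pre-allocated ring buffer coordinated by monotonically increasing sequence counters instead of locks. It is written in Python purely for illustration; the real LMAX Disruptor and its .NET port rely on pre-allocation, cache-line padding, and CPU memory-ordering guarantees that an interpreted runtime cannot reproduce, so treat this as a picture of the mechanics, not of the performance.

    import threading

    # Toy ring buffer in the spirit of the LMAX Disruptor: slots are
    # pre-allocated and the two threads coordinate only through sequence numbers.
    RING_SIZE = 1024                 # power of two, so we can mask cheaply
    ring = [None] * RING_SIZE
    published = -1                   # highest sequence the producer has published
    consumed = -1                    # highest sequence the consumer has handled

    def producer(n_events):
        global published
        for seq in range(n_events):
            while seq - consumed > RING_SIZE:      # wait until the slot is free again
                pass                               # (spin-waiting is wasteful here; it only mirrors the pattern)
            ring[seq & (RING_SIZE - 1)] = seq * 2  # write the "event"
            published = seq                        # publish only after the write

    def consumer(n_events):
        global consumed
        total = 0
        for seq in range(n_events):
            while published < seq:                 # spin until the event is published
                pass
            total += ring[seq & (RING_SIZE - 1)]
            consumed = seq                         # free the slot for reuse
        print("sum of events:", total)

    n = 100_000
    threads = [threading.Thread(target=producer, args=(n,)),
               threading.Thread(target=consumer, args=(n,))]
    for t in threads: t.start()
    for t in threads: t.join()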

lenocinor
Wow, reading about the LMAX Disruptor is fascinating. Thank you for sharing!
TobTobXX
https://computers-are-fast.github.io/
Somewhat similar is Grace Hopper explaining the nanosecond: https://www.youtube.com/watch?v=9eyFDBPk4Yw
> Light is slow, not exactly 1 foot per nanosecond but it's a reasonable mnemonic.

"Admiral Grace Hopper Explains the Nanosecond" is a good illustration of this (segment is only 2m):

* https://www.youtube.com/watch?v=9eyFDBPk4Yw

* https://en.wikipedia.org/wiki/Grace_Hopper

Full lecture:

* https://www.youtube.com/watch?v=ZR0ujwlvbkQ

Aug 27, 2021 · dvh on Latency Sneaks Up on You
Grace Hopper explaining 1 nanosecond: https://youtu.be/9eyFDBPk4Yw
Relevant video for those who haven’t seen the lecture:

https://www.youtube.com/watch?v=9eyFDBPk4Yw

Pretty reminiscent of Grace Hopper's nanosecond lecture [1]

Certainly there are reasons why a feature might slow down a system beyond the branching (for example, if it requires you to load other resources)... but really, the issue is the one pointed out in the YouTube video: software companies very frequently do not care about performance.

[1] https://www.youtube.com/watch?v=9eyFDBPk4Yw

Speaking as somebody who never stopped optimizing like it's the 90s: there's definitely a learning curve, but when microbenchmarking becomes a reflex, you learn your language in depth and the burden drops quickly. It only seems esoteric to you because the industry has collectively decided that Grace Hopper is an old codger who doesn't need to be listened to.

https://www.youtube.com/watch?v=9eyFDBPk4Yw
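As a concrete example of the "microbenchmarking as a reflex" habit, this is the kind of throwaway measurement that means in practice (a sketch using Python's standard timeit module; the two ways of building a list are just placeholder workloads):

    import timeit

    # Two ways to build the same list; a reflexive microbenchmark tells you,
    # in time per call, which one your runtime actually prefers.
    def with_append():
        out = []
        for i in range(10_000):
            out.append(i * i)
        return out

    def with_comprehension():
        return [i * i for i in range(10_000)]

    for fn in (with_append, with_comprehension):
        per_call = min(timeit.repeat(fn, number=200, repeat=5)) / 200
        print(f"{fn.__name__:20s} {per_call * 1e6:8.1f} microseconds per call")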

ddingus
so good
Sep 11, 2020 · 2 points, 0 comments · submitted by jamesadevine
Sep 01, 2020 · 93 points, 14 comments · submitted by zack6849
simonebrunozzi
"So they know what they're throwing away when they're throwing away microseconds"

She was a genius, and a great character.

jkinudsjknds
I enjoyed listening to her, but I don't know if I understand her point here. There was something intuitive about why certain processes require a large number of nanoseconds due to physics. But I have a tough time thinking of any process besides financial trading that can't spare a microsecond.
metiscus
Safety critical hard-realtime systems come to mind as an example.
ncmncm
Anything you need to do a lot of in a second suffers from wasted microseconds.

If your input processing and font rendering pathways waste microseconds (and they do), it adds a delay before the characters you type show up on the screen, and a corresponding, measurable reduction in your editing throughput.

When our machines were a thousand times slower, characters showed up on the screen in substantially less time than they do today. The difference is mainly a result of cumulative wasted microseconds, in so many places that nobody can afford to gather them up.

jkinudsjknds
hmm... idk. This feels difficult to believe. I don't think my editing speed would suffer if you threw 100 additional microseconds of delay into font rendering. I feel any meaningful loss would need to be measured in milliseconds or longer.

Typing tests don't really help here because you don't read what you type if you're competent. I don't know how I could test...

ncmncm
It is funny (to me anyway) that the nanosecond wires she used to give out take rather more than a nanosecond to traverse.

The speed of signal propagation is largely determined by the dielectric constant of the wire's insulation (which roughly tracks its density): the signal is an electromagnetic wave guided along the surface of the conductor, with its electric field oscillating in the insulation, so propagation is limited by the properties of that material. The distance covered in a nanosecond is typically between seven and eight inches. In wires with foam insulation (typically coax) signals go a little faster. In optical fiber, a nanosecond is about eight inches, because the speed there is set by the refractive index of the glass.
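The numbers are easy to reproduce (a sketch; the velocity factors below are typical textbook values, not measurements of Hopper's actual hookup wire):

    C_INCHES_PER_NS = 299_792_458 * 100 / 2.54 * 1e-9   # ~11.8 inches per nanosecond in vacuum

    # Typical velocity factors: the fraction of c at which the signal actually travels.
    media = {
        "vacuum / free space":        1.00,
        "solid-dielectric coax wire": 0.66,
        "foam-dielectric coax":       0.80,
        "optical fiber (n ~ 1.47)":   1 / 1.47,
    }

    for name, vf in media.items():
        print(f"{name:28s} {C_INCHES_PER_NS * vf:5.1f} inches per nanosecond")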

jgrahamc
Based on this I made downloadable and printable nanoseconds: https://blog.jgc.org/2012/10/a-downloadable-nanosecond.html
dls2016
She brought some nanoseconds to Letterman: https://www.dailymotion.com/video/x35dsz7
jki275
Posted pretty regularly here, it's a great talk. She was a true treasure to the CS world and to the Navy.
zack6849
Yeah, I wasn't sure how often this pops up. It came up in conversation and I'm not familiar enough with HN to see when it was last posted, so I figured I'd just post it and let it go unnoticed if it's been posted too often recently.
nayuki
What year was this filmed in?
DavidSJ
1993 it would appear:

https://youtu.be/Sn0f0vpn8jE?t=7m50s

kingisaac
She died in 1992, so I highly doubt it was recorded in '93.
csixty4
She's the ghost in the machine.

(Given the quality of the video, I'd be inclined to think 1983 rather than 1993)

jihadjihad
Great stuff. I'm reminded of the classic "we can't send an email over 500 miles!" which is somewhat related, and equally entertaining: http://web.mit.edu/jemorris/humor/500-miles
Here's a short must-see video of Grace Hopper explaining a nanosecond:

https://www.youtube.com/watch?v=9eyFDBPk4Yw

> The nanosecond wire alone should be a compelling argument.

For those that haven't seen it:

https://www.youtube.com/watch?v=9eyFDBPk4Yw

(Of course, I expect that most here have, but xkcd 10,000 and all that.)

acqq
"Now I hope you all get the nanoseconds. They're absolutely marvelous for explaining to wives and husbands and children and Admirals and Generals and people like that. An Admiral wanted to know why it took so damn long to send a message via satellite. And I had to point out that between here and the satellite there were a very large number of nanoseconds."
Yes it is and no it isn't.

Learn from Admiral Grace Hopper: https://www.youtube.com/watch?v=9eyFDBPk4Yw

The speed of light in fiber optic cable is about 2/3 of its speed in a vacuum, which is why SpaceX's Starlink will have a potential latency advantage over oceanic fiber optics.

It also gives good perspective on what's being thrown away when you add a single millisecond of latency through a router or a display: 186 miles at the speed of light. Many on HN don't get this, but it's why cloud gaming is perfectly feasible if we're running with low-latency displays and inputs, a short distance to the edge compute, and few routing hops.
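The arithmetic behind those figures (a sketch; the 2/3 velocity factor and the transatlantic distance are round illustrative numbers):

    C_KM_PER_MS = 299_792.458 / 1000      # ~300 km per millisecond in vacuum
    MILES_PER_KM = 0.621371

    # One millisecond of added latency, expressed as distance at light speed.
    print(f"1 ms at c: {C_KM_PER_MS * MILES_PER_KM:.0f} miles")          # ~186 miles

    # One-way propagation time over a rough New York-London path (~5,600 km).
    distance_km = 5_600
    for name, velocity_factor in [("vacuum / free space", 1.0), ("fiber (~2/3 c)", 2 / 3)]:
        one_way_ms = distance_km / (C_KM_PER_MS * velocity_factor)
        print(f"{name:20s} {one_way_ms:5.1f} ms one way")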

Nov 18, 2019 · 2 points, 1 comments · submitted by dedalus
mmcclellan
Thanks. Definitely worth the 120 billion nanoseconds it takes to watch.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.