# HN Theater

The best talks and videos of Hacker News.

### Hacker News Comments on The Birth and Death of JavaScript

www.destroyallsoftware.com
HN Theater has aggregated all Hacker News stories and comments that mention www.destroyallsoftware.com's video "The Birth and Death of JavaScript".
HN Theater Rankings
• Ranked #10 all time

#### Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
https://www.destroyallsoftware.com/talks/the-birth-and-death...
That's one of my all-time favourite talks, but I lost the reference, and googling "JavaScript talk" is futile.

Thanks for the link, I'll be sure to bookmark it this time...

Sep 22, 2020 · Fej on DOS Subsystem for Linux
You're thinking of this:

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Gary Bernhardt was prophetic https://www.destroyallsoftware.com/talks/the-birth-and-death...
WASM will never replace JavaScript. Something else might, and it might have support for compiling to WASM, but we'll see.
Wasm is amazing, but it's not intended to replace JavaScript. If you just want to add a form to a website, animate a drop-down menu, or the like, JS (or TS) will probably always be preferable to writing in another language and compiling to wasm. The web platform moves fast and maybe in a couple of years I'll eat these words. But nothing I've seen so far points towards the demise of JS.
A form like these?

https://www.qt.io/web-assembly-example-pizza-shop?hsCtaTrack...

https://gallery.flutter.dev/#/

Gary's talk is not exactly about the "demise of JS". Watch the talk, it's a great one :)
If you haven't seen The Birth & Death of JavaScript, you're in for a treat.

Gimp running in Chrome running inside Firefox

You can't really make this statement without linking to https://www.destroyallsoftware.com/talks/the-birth-and-death...
I now want to use the Javascript-based Gigatron emulator[1] in a browser on a Windows 2000 VM under the jslinux emulator[2]. (I wonder how jslinux would handle a few-year-old version of Firefox...)

Then I can run the Gigatron-based 6502 emulator in that browser to run the 8080 simulator you referenced to run CP/M. Under CP/M I should be able to find a COBOL program to run. I would be achieving an immense coefficient-of-"Inception" and re-enacting "The Birth and Death of all Software" [3] simultaneously.

Doing all of this in Windows NT 4.0 or Linux on my DEC Multia w/ an Alpha CPU would just be icing on the cake.

Apr 23, 2020 · p1necone on Wgpu-rs on the web
https://www.destroyallsoftware.com/talks/the-birth-and-death...

This is becoming (somewhat) more true every day.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

The talk is hilarious, and very much on point.

https://www.destroyallsoftware.com/talks/the-birth-and-death... I guess Gary was right. In other news, we have five years of war ahead of us.
It's absolutely going to happen. WASM isn't required for this future-- it just helps optimize it. There is a ton of money out there for the company who makes a performant and compatible browser-in-a-browser w/ proper accessibility. Somebody will eventually take the "deal with the devil" to develop it.

An obligatory link to an important talk: https://www.destroyallsoftware.com/talks/the-birth-and-death...

WebAssembly in the kernel being faster than native reminds me of the comedic talk by Gary Bernhardt, "The Birth & Death of JavaScript" [1]. Great foresight.
You should check this talk about JS and ASM :) https://www.destroyallsoftware.com/talks/the-birth-and-death...
Feb 03, 2020 · pdkl95 on WebUSB is dead
Viewing a document on the web needs to be decidable. The original design of the web was HTML documents with forms. This IBM 3270-style design used the browser as the user interface to server-side programs. The browser's job of presenting a document was decidable, and the form-submission/page-load process allowed the user to understand and control the data sent to a server. The server learned what was in the form and URL only when the user decided to click the submit button.

Moving the software into the browser improved latency, but questions like "Is this webpage doing something dangerous/annoying?" became undecidable. If we provably cannot determine whether a webpage will halt without running it, we obviously cannot answer any more complicated questions about the webpage's behavior. As long as webpages have access to a Turing-complete language, the browser will be a de facto OS. Unfortunately, returning to a web of documents isn't going to happen anytime soon; too many people profit from this ability to run programs on other people's computers.

(Gary Bernhardt's amazing (and terrifying) talk "The Birth & Death of JavaScript"[1] was absurdist comedy, not a guide to future browser designs)

Decidability is absolutely not the problem. Subjectivity is a problem: there is no objective definition of an obnoxious ad.

I mean, the behaviour of a website may be undecidable. Behaviour over the five minutes you're viewing it is decidable. I don't think notions from computability theory are particularly enlightening here.

There is no way to know for sure that the minified JS on a website isn't sending every keystroke you enter into a password field before you click submit, until you run the JS.
Why wouldn’t “deminifying” work? Then you just read the code.
Funnily enough, Gary Bernhardt seems to only write and talk about JavaScript nowadays.
> Unfortunately, returning to a web of documents isn't going to happen anytime soon; too many people profit from this ability to run programs on other people's computers.

Yes, I've resigned myself to this truth long ago. Since then, I've been watching the web get smaller and smaller as more and more websites break unless they're allowed to run code client side.

I fully expect that the majority of the web will become inaccessible to me within my lifetime. Alas, the future does not always bring wonderful things!

Yeah, I’m wondering the same. I thought of that talk too when I saw this post.
Sorry, I disagree with the reference to academia: the median quality of academic conference talks is abysmal in my experience. Sure, they are more technical, but they are also that much less engaging, and target a much narrower audience. No experimenting with styles and flows, just cookie-cutter formats with lots of text and plenty of citations.

Programmer conferences may have a more open format and obviously that invites some low quality talks, but it also leaves the door open to really amazing, totally experimental formats and topics. I'm thinking of stuff like Gary Bernhardt's "The Birth and Death of Javascript" [1], which would never fly at an academic conference in my experience (or at least would not be appreciated), but was immensely influential in programming circles.

It's really uncanny how Gary Bernhardt predicted it all [0] a few years ago.
Seems like the obvious next step is using WebAssembly outside the browser so we can really go full-circle on this one

EDIT: some quick googling shows it's already being done

Yup. In addition to JavaScript runtimes that have added WebAssembly support, such as Node, there are dedicated WebAssembly runtimes, like wasmtime: https://github.com/bytecodealliance/wasmtime

If you want to provide a plugin or extension interface, and want to give those plugins a limited interface rather than making them all-powerful, embedding a WebAssembly runtime gives you all of that plus the ability for people to easily write plugins in any language.

Also see https://hacks.mozilla.org/2019/08/webassembly-interface-type... for an illustrated description of how that'll work smoothly across languages.

> embedding a WebAssembly runtime gives you all of that plus the ability for people to easily write plugins in any language

No, you would still have to provide an application programming interface for every target language you want to support.

No, you don't.

If you export functions to WebAssembly, any language that can run in WebAssembly can call those functions.

And if you define WebAssembly Interface Types for your exported functions (note: still in development), any language that handles interface types can automatically handle things like "how does this language represent a string safely".

Either way, you don't need to define a new API for every language.

What you are describing is more like __asm__("") in C, not an interface for application developers. Those are still required for every single target language because of the mismatch between the levels of abstraction of those languages, WebAssembly, and the actual exported logic.
That's not accurate, and people are not expected to write an API adapter for every target language. Please read https://hacks.mozilla.org/2019/08/webassembly-interface-type... .

Interface Types are the mechanism for handling the different abstractions in different languages without having to write language-specific interfaces.

WebAssembly prior to Interface Types is more akin to an exported C function than to inline assembly. That already gives you enough for many kinds of interfaces, most notably the common pattern of obtaining an opaque handle and calling functions on that handle.

WebAssembly with Interface Types gives you everything needed for high-level interfaces in any language. That lets you define strings, buffers, handles, arbitrary structured types, and pretty much anything you'd expect of a high-level interface.
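A concrete sketch of the "exported C function" style of interop described above: a tiny, hand-assembled Wasm module that exports add(i32, i32) -> i32, called here from JavaScript through the standard WebAssembly API (available in browsers and Node). The bytes correspond to the WAT `(module (func (export "add") (param i32 i32) (result i32) local.get 0 local.get 1 i32.add))`; any host language with a Wasm runtime could instantiate the same bytes and call the export.

```javascript
// Minimal hand-assembled Wasm module exporting add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // prints 5
```

Scalars like these work today; anything richer (strings, buffers, structured types) is exactly what the Interface Types proposal is meant to standardize.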

I have read the article (when it came out, actually); it's merely about type-mapping rules to an intermediate common representation. And it absolutely doesn't imply that interface types give you everything for high-level interfaces in any language. Not that it's even possible: a high level <-> wasm <-> another high level transformation cannot realistically be automated for an arbitrary high-level language. What can be automated is high level <-> wasm <-> wasm-level-looking code in another high-level language. It's like decompiling: you can't produce high-level code automatically, only low-level-looking code in high-level languages.
It isn't just done, it is productized. Both Cloudflare [1] and Fastly [2] have been marketing that they support WASM in their edge networks. Both companies seem to suggest that this is a competitive advantage they could have over other cloud offerings.
Completely! WebAssembly has great potential outside of the browser. Mozilla has been doing great work with their posts showcasing this new possibility.

Check out Wasmer! https://wasmer.io/ : along with other runtimes, we are enabling the use case of WebAssembly programs as standalone applications that can run on any platform, or as libraries usable from any programming language (disclaimer: I work at Wasmer!)

I think we're still waiting for a WASM-only OS, though :)
Like this? https://browsix.org/
ksec
I wonder if Cloudflare will do something like this given they want to run WASM in their Edge Servers.
I think I remember reading something on HN about some kind of tool for running WASM in some kind of kernel module to speed up app performance.

EDIT: here it is: https://medium.com/wasmer/running-webassembly-on-the-kernel-...

I remember something like that being posted too; I think this is it? https://github.com/nebulet/nebulet
So cool!
Nebulet is a great piece of engineering! <3
Repeating what I wrote here [1], Fabrice Bellard wrote JSLinux in 2011, which is a CPU emulator written in JavaScript that runs the Linux kernel (using typed arrays and relying on fast JITs).

That's just a way of saying that "the best way to predict the future is to invent it" (and do so before people who were "predicting" it).

If you knew about asm.js and Google's (now abandoned) NaCl and PNaCl, there's nothing surprising about the development of wasm. It's been 10+ years in the making.

And 20+ years ago Microsoft's browser had ActiveX plugins (which didn't use a VM and were really unsafe and unportable). Making a portable, sandboxed bytecode solves an obvious problem with that.

Also the JVM ran in the browser, etc.

You are technically correct, but at the same time I do think this way of framing it is selling Gary Bernhardt a little bit short. There is still an uncanny part in knowing all the right things at the right time to predict the future.

"The future will be just like the past, but with a different name".

I guess it's just as accurate when Gary Bernhardt says it as when everyone else says it. It's not exactly a theme that's gone overlooked before. But still... executing bytecode in the browser goes way, way, way, way back.

jchw
It’s not just executing code in the browser. It’s been a while since I’ve watched the video, but it’s more about JavaScript becoming the universal assembly language. I think at least that aspect is wrong because Wasm came into existence. But ignoring that detail and subbing in Wasm for JS, it’s uncanny.
It's even more uncanny than that. The talk hypothesized that JavaScript will not be the universal assembly language but rather a lower level language (or more accurately an OS-like system I suppose as the talk presents it), "Metal," would be. In that respect it predicted WebAssembly on the nose.
In December that same year, the Internet Archive started letting people boot and run MS-DOS emulators in the browser. And this was well after binfmt_misc had been (ab)used with JS engines to execute JS like a native program.

A couple years earlier, NetBSD device drivers were running in the browser, and JS-engine-as-hypervisor was an explicit, if distant, goal. https://news.ycombinator.com/item?id=4757581

I personally see WASM as a natural descendant of ASM.js, so in that sense I'd say it's not really wrong, just missing a step.

I don't know that it's necessarily uncanny though either; ASM.js was already a thing back when that presentation was given, so the existence of WASM isn't really surprising. The real central "prediction" of that talk wasn't WASM, it was METAL, which hasn't quite taken off yet. (The idea is out there[0], but so far mostly just as a self-fulfilling prophecy).

JSLinux was largely a toy, AFAIK. I don’t think it was meant as an exhortation to write all software that way.
I think that's underselling the talk. Apart from the very enjoyable presentation, it offers valuable insights.

The talk isn't trying to sell itself as 100% original. It makes reference to asm.js and a game demo that already existed at the time of the talk as well as repl.it.

Despite that, I do think it makes a unique insight: even though JavaScript is ubiquitous, it will NOT be the language that future languages compile to; rather, a bytecode perhaps inspired by JavaScript will be the language of the future. Also, importantly, this bytecode will win; that is, most languages will either compile directly to it or have a VM implemented in it.

Moreover this bytecode has the potential to entirely supplant native code and can do so with equal or better performance.

At least to me, neither of those were obvious insights even though I knew of these plugins and JSLinux.

First off, those plugins died. Silverlight, ActiveX, Java on the web, Flash, all of these died out and were replaced by JavaScript before wasm really took off. It might've looked like the end state would be a version of JavaScript "winning."

Second, things like PNacl, Emscripten, etc. still seemed like curiosities (as the talk refers to when showing Repl.it). It wasn't clear that they or the ideas they championed would get widespread adoption.

These days it is looking more and more likely that wasm is going to become a target for all sorts of different compilers. The fact that it's a major compilation target of Rust, a language that's about as far away from what I would've thought of as a language for the web as possible, is striking.

And though we're still a long ways away from running everything on WebAssembly, it no longer seems as exotic an idea as it once did to me.

And because of that, as well as the fantastic presentation, I still return to this talk every so often awed at how much closer we are to realizing Metal.

There's still a lot of room for the talk to go very wrong, but it's not as far-fetched as when I first watched it.

EDIT: Put another way; the talk is interesting to me because it emphasizes the birth and death of JavaScript. It talks about a world where the same forces that propelled JavaScript to towering heights of popularity ultimately cast it aside and create a world not possible without JavaScript, but in which JavaScript itself essentially no longer exists.

OK, yeah, the "metal" part is fair. He showed asm.js and then posited that there would be something called "metal" that causes JavaScript to die and enables applications written in more languages. And major apps could be ported to it.

Originally my conception of the video was more like this commenter below: "It’s been a while since I’ve watched the video, but it’s more about JavaScript becoming the universal assembly language."

I guess a lot of people are saying "he predicted this" without saying what specifically he predicted.

FWIW it's not clear to me that WASM is going to do that. Everything I've heard from the team says that WASM and JS are complementary. Not that WASM will cause JavaScript to die.

I think there's some possibility of that happening in the distant future, but it's far from obvious. I think JS VMs will always be better at running JS than WASM VMs running JS engines, and all the JS out there will exist for a long time.

Also, JS is at a pretty good level of abstraction to manipulate the DOM, whereas C, C++ and Rust aren't. And it has some good syntactic shortcuts. Despite being a Python person, I would probably even argue that JS is better to manipulate the DOM, despite JS and Python being very similar otherwise. Function literals might be one reason.

So when people say "he predicted this", it would be nice to be specific about what the prediction was. WASM is a step in that direction but I would argue it's also fairly clear given that asm.js existed and he showed it. The real question is if WASM can handle all these use cases. Working on a language has made me appreciate many reasons that it's hard to make a polyglot VM. Tiny changes can bias your VM towards one compilation source vs. another.

The video creator is on HN so if the gods of internet attention shine upon us he may be able to comment here.

In lieu of that, I'll offer my one-line take of the prediction of the video. JavaScript will fade from popularity, but its (original) popularity will inspire a low-level assembly-like language (looking more and more like WebAssembly these days) that will provide a new substrate for most application development, web-based or otherwise, replacing traditional binaries.

As you point out it's not at all clear, even five years on from this talk, that this prediction will be correct. WebAssembly is currently complementary to JS and cannot fully replace it. The vast majority of websites these days use JS but not WebAssembly. Use of WebAssembly for applications that traditionally have not been run inside a web browser (e.g. GIMP, LibreOffice, etc.) is still nascent and it's nowhere near a sure bet that it'll take off there.

But maybe, just maybe, it'll happen.

The sheer number of big corporate backers, and standardization, is what will make it happen. That's really what is different here versus Java applets, Silverlight, NaCl etc. Those all failed because nobody was big enough to single-handedly push something like that onto the ecosystem. Now that they're acting in concert, things are very different.

Everything else is "just engineering". E.g. as far as being complementary to JS, and not being able to replace it - they are already working on access to DOM.

> Also, JS is at a pretty good level of abstraction to manipulate the DOM

I agree with your point in general, but surely the fact that there are 10 million js frameworks invented every week is proof that the native DOM APIs are not a good abstraction? As a mostly front end dev, most of my UI logic these days target _React_, not the dom APIs. To the extent that I write JavaScript, it’s pure data manipulation, which can be written in any language.

That's true, although for another example, React is also commonly used with JSX to express DOM fragments. And JS has that syntax but Python, C, Rust, etc. don't.

In other words, the particulars of the language matters. I wouldn't underestimate 20+ years of JS evolution toward expressing the problem better.

I'm working on my own language and all those details are hard. When they work, they're invisible to users. You only notice when it's not there or doesn't work! I would agree that Python is a better language than JS in most respects, but it's not clear to me that it's a better language for writing web front ends.

e.g. the async abstractions and promises are different and I believe that matters.

cbhl
> Moreover this bytecode has the potential to entirely supplant native code and can do so with equal or better performance.

I interpreted that part of the talk as hyperbole and sarcasm. It was saying that programmers will be so far removed from how computers work that they'll happily program against a model that has five layers of abstraction that simply serve to provide the original interface of the bottom layer.

Bytecode, by its nature, has to be translated into native code -- the way for it to be 'faster' than native code is to be native code. In software, you can do this with static or JIT compilation. The hardware people do this by changing their CPUs to make the things that people do in the bytecode faster. (Apple introduced new floating point CPU instructions in their iPhones just to make JavaScript faster.)

Around 1999, HP had a project called Dynamo where they implemented a JIT PA-RISC virtual machine on actual PA-RISC hardware. In some cases, they got better performance than native because the JIT would recognize hot paths at run time. I only bring it up to show that it's not 100% certain VMs can't win over conventional native code. When I compile with GCC or Clang, I don't think my executable is tracing the hot paths and rewriting itself as it runs.
I don't think this was entirely sarcasm, it's mostly hyperbole. I am sure there will be some generally used configuration that will do basically that, and for a good reason. I just have no idea what the reason may be.

About speed, JIT-optimized bytecode can be faster than what is possible for static binaries. The JIT has more information than the compiler to optimize your code. Currently I don't know of any that is that fast, but there is no inherent limitation here.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

My understanding is that using interpreted bytecode you remove the need for hardware-based process isolation, which incurs some pretty significant performance penalties. Basically, if your software does a lot of IPC or syscalls, it's very possible for an interpreter-based solution to work better, if it's integrated at the kernel level.

I didn't get that understanding at all. The insight is that by going bytecode only you can rethink your security model. You can do things that you can't do safely if you allow native code to run.

It's swapping out one abstract model for another, just that we are so used to the current abstract model that we don't perceive it as one.

There's also a lot of sticky points at the bottom of the stack that lead towards native solutions, starting with memory management and basic I/O functionality. Nobody wants to code directly against the hardware for very long, so you end up with a driver, and then driver and resource management, and then an operating system of some kind. Even on the early microcomputers it was the case that you have a boot ROM of some sort and would code against that for most tasks.

With WASM you have the same kind of thing but the added wrinkle of the browser-based I/O being a different set of "basic abstractions" from what you get in libc, and every solution that bridges the gap being a bit of a hack. Being bytecode doesn't really change the fact that you still have to deal with the resulting dependencies at some level.

Qt is already experimenting with rendering to Canvas using WASM in the browser; I've tried to call them out on it as bad practice a couple of times in the past.

Rust on the other hand is doing some genuinely exciting, powerful stuff with allowing WASM to talk to the DOM and allowing native developers to target HTML directly within their apps. Rust's approach is to treat the language like a minimal, drop-in replacement for Javascript that doesn't require you to ship an entire rendering engine alongside it.

It is yet to be seen which approach to web portability is going to win. Obviously I'm rooting for Rust, and I personally think apps written using Rust's strategy will nearly always be higher quality than apps written using Qt's strategy. But that doesn't necessarily mean that Rust will win; there are a lot of factors at play here. It'll be interesting to see.

But agreed, native apps are definitely coming to the web in some form or another. Funnily, the opposite is also true, since there's been a lot of buzz about using WASM for native sandboxing. I like to think that Gary Bernhardt[0] is pleased about that.

WASM machines--the next (hopefully) better version of Lisp machines (https://en.wikipedia.org/wiki/Lisp_machine)!

It looks like Gary Bernhardt was pretty spot on in his talk "The Birth and Death of JavaScript": (https://www.destroyallsoftware.com/talks/the-birth-and-death...)

Lisp machines didn't go away because of some conspiracy, they just stopped making sense. The vast majority of the benefit was that since they ran on (and were compared to other) 1980s minicomputer hardware without instruction caches, pulling the interpreter into microcode meant that the interpreter's overhead wasn't competing with data fetches on the von Neumann memory bus.

Instruction caches (and JITs to a degree) solve the same problem in much more general ways. That's why Azul went out of their way to create an appliance to run Java code with custom CPUs, and ended up with a pretty standard RISC for the most part.

All of that applies to WASM machines too.

Nov 28, 2019 · 1 point, 0 comments · submitted by traverseda
Nov 24, 2019 · m_sahaf on Jslinux (2018)
Gary Bernhardt's prophecy from his The Birth and Death of JavaScript[0] is coming true.
JSLinux predates that talk by 3 years. Also the “prophecy” is a special case of Atwood’s law from 2007, where the “anything” is an operating system.
> Your browser is going to act as a VM to run a browser that will display the content.

Gary Bernhardt's talk "The Birth & Death of JavaScript"[1] was an ominous portent of a terrifying future. Unfortunately, some people apparently saw it as development roadmap.

>Running a javascript interpreter, written in C and cross compiled to WASM, in a browser, does feel like a joke.

Every day I see more clearly how prophetic "The Birth & Death of Javascript"[1] (2014) was. I'd love to pluck 1996 Brendan Eich into the future and show him how far his little programming language would go.

I was closely involved as CTO and then SVP of Engineering at Mozilla from the inceptions of both WebGL (originally Canvas 3D) and asm.js (see http://asmjs.org/ for docs), which led to the 4-day port via Emscripten of Unreal Engine 3 and Unreal Tournament from native to the web, running in Firefox at 60fps. This prefigured WebAssembly, which came in 2015 after it was clear from MS and the V8 team that one VM (not two, as for Dart or PNaCl) would win.

Gary added the insight that system call overhead is higher than VM safety in one process (he may have exaggerated just a little) to predict migration of almost all native code. The general idea of a safe language VM+compiler being smaller and easier to verify than a whole OS+browser-native-codebase, as well as having lower security check overhead, I first heard articulated by Michael Franz of UCI, and it inspired my agenda at Mozilla that led to the current portable/multiply implemented JS+WebAssembly VM standard.

Aug 07, 2019 · MrRadar on Wine on Windows 10
Have you seen The Birth and Death of Javascript? https://www.destroyallsoftware.com/talks/the-birth-and-death...
I like to think of it more as Javascript/WebASM finally accomplishing what Java spent decades trying to do: be the completely ubiquitous, hardware-independent code platform.

Javascript has truly become the "Write Once, Run Anywhere" language.

How do you figure, when certain features of javascript are supported on some browsers and not others? You've just swapped OS dependence for runtime dependence. JS's solution to this problem? Another layer of abstraction to make the JS cross-browser.

WASM already has this problem, what with 5 or 6 different incompatible runtimes already in existence.

You just use the lowest common denominator, depending on your definition of “everywhere”. When in need, use shims.

It’s not literally “run anywhere” it’s “for all intents and purposes run anywhere”.
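The "when in need, use shims" approach above can be sketched as follows: a minimal, illustrative polyfill for `Array.prototype.flat` (modern engines already ship it, so the feature-detection guard makes this a no-op there; this is a sketch of the pattern, not a spec-complete polyfill).

```javascript
// Hypothetical shim: install Array.prototype.flat only where it's missing,
// so code can target the lowest common denominator and still "run anywhere".
if (!Array.prototype.flat) {
  Object.defineProperty(Array.prototype, "flat", {
    configurable: true,
    writable: true,
    value: function flat(depth = 1) {
      // Recursively concatenate nested arrays down to `depth` levels.
      return depth > 0
        ? this.reduce(
            (acc, v) =>
              acc.concat(Array.isArray(v) ? flat.call(v, depth - 1) : v),
            []
          )
        : this.slice();
    },
  });
}

console.log([1, [2, [3]]].flat(2)); // prints [ 1, 2, 3 ]
```

Either way, calling code stays the same on every runtime; only the environments that lack the feature pay for the shim.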

xena
I basically want to do this _without_ javascript though. My implementation is in Go: https://github.com/Xe/olin
xena
Bad news, that talk has been a constant source of inspiration for my entire endeavors :)
That's fine, I'll continue to disagree and understand that we'll never work together since we move fast and don't break things where I work.
dang
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
Spectre came along and ruined the awesome conclusion of that talk.

The idea was that the cost of using WASM would be entirely offset by the speedup of removing the cost of hardware memory protection. We could do that if everything ran in one big VM because the VM would enforce the memory protection.

Unfortunately, now we can't rely on a VM for memory protection anymore. We have to assume that any code running in the same address space can access any memory in the same address space.

So we need the hardware memory protection after all. You can say goodbye to your WASM-only future.

Well, Spectre is a largely-theoretical class of vulnerabilities, that doesn't even apply to chips that don't do speculation in hardware, and that is purely about information disclosure via side-channel mechanisms. It might be a bit of a concern for some users, but it's not the end of the world - for instance, the designers of the Mill architecture have a whole talk discussing how Spectre as such doesn't really apply given the architectural choices they make. And if running stuff in different address spaces is enough to mitigate it effectively, that still provides quite a bit of efficiency compared to an entirely conventional OS.
Nitpick re "chips that don't do speculation in hardware" - load forwarding and speculative cache prefetching and branch prediction are done even by lots of current (and past) processors that don't do speculative execution and hence are considered "in-order" microarchitectures.
tntn
> doesn't even apply to chips that don't do speculation in hardware

This is an interesting way to put it. I would have said "applies to pretty much every CPU manufactured in the last decades." Your statement would make sense if speculation in hardware was some niche thing, but I think you would be hard-pressed to find an invulnerable CPU that is used in situations where people care about both performance and security.

That's great for the mill, but isn't relevant to the world outside of mill computing.

xena
This is part of why I want to make a custom OS where each WebAssembly process can be in its own hardware protected memory space. I'm looking at building on top of seL4.
I assume that new chips will address this vulnerability, correct? Couldn't the VM detect whether the hardware is secure and decide whether to use hardware memory protection or not?
tntn
> new chips will address [these vulnerabilities]

It doesn't seem likely. The chipmakers will fix the vulnerabilities that break isolation between processes and between user-kernel, but the within-process issues will probably stick around.

At this point it seems practically impossible to deal with completely.

The V8 team, at least, has given up on trying to protect memory within a single address space.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

We're well on our way.

One of my favorite examples of life imitating art.
Not GP, but Gary Bernhardt is the guy who gave the classic "Wat" [0] and "Birth and Death of JavaScript" [1] talks, and some searching turns up "pretzel colon" as the "&:" operator in Ruby [2]. I assume he's mentioned it in a screencast or something, but I wasn't able to find it.
I love the talk on "The Birth and Death of JavaScript" by Gary Bernhardt: https://www.destroyallsoftware.com/talks/the-birth-and-death...

I highly encourage everybody to watch it and consider how people may increasingly interact with JavaScript through things like WASM. Very funny talk too :)

There's little relationship between wasm and js.
The talk predicts that JS will be dead (for user space) the moment it conquers the OS. Dead in this context means it will be invisible to the app developer, just like C.
At the time, asm.js looked like it might become the universal bytecode that runs everything; WebAssembly hadn't gone public yet.
The talk is more about the web as an application platform, and its values, coming down to OS userland. Whether that means JS or Wasm or both doesn't really matter.
The age is nigh: it's already become the easiest way to run and install Safari's JavaScriptCore on every platform:

    $ wapm install -g jsc

Can then use jsc to execute .js, as a REPL or inline expressions:

    $ jsc -e "console.log([1,2,3].map(x => x*2).reduce((x,acc)=>acc+x))"
Alternatively, you can symlink the JavaScriptCore framework (which contains the executable) there without installing anything.
Every time someone submits wasm related content, I feel obliged to link this classic talk:

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Bernhardt, Gary – The Birth & Death of JavaScript (PyCon 2014)

This talk is (thankfully) obsolete now. We don't need to write (or even compile to) JS to be multi-platform; we just need to write/generate wasm :-)
Honest question, why is this talk so relevant? I agree it had a good foresight, but I watched it and am a bit surprised by how much it is mentioned.
The presentation style is quite funny. It was even funnier at the time, when such a thing was considered almost inconceivable. Now, it seems prescient.
My guess as to why it’s mentioned so much is that the general expectation is that many people haven’t seen it yet. When people do watch it, they’re surprised by how much good foresight there was, and so they repeat the cycle.
wasm and wasi will eventually take over, enabling higher-level languages such as C#, Java and Python to be used on the frontend (the Blazor project is an example).

As Mr. Bernhardt says: JavaScript had to be bad for the evolution to happen. (https://www.destroyallsoftware.com/talks/the-birth-and-death...)

Apr 05, 2019 · streblo on I'm Joining CloudFlare
All of this reminds me of a talk by Gary Bernhardt called The Birth and Death of JavaScript (https://www.destroyallsoftware.com/talks/the-birth-and-death...), which although farcical is actually a really compelling vision of the future of infrastructure.

Congrats Steve! Excited to see how this turns out.

I was lucky enough to be present for one of the times Gary gave this talk live. I’ve been joking that for the past few years, his nightmare is my dream. It, like a lot of Gary’s stuff, has been very influential on me.

Thanks!

Just make sure you stay clear of the exclusion zone and you'll be fine.
This reminded me of the talk "The Birth and Death of JavaScript" (2014) by Gary Bernhardt, where he goes into some of the more absurd possible implications of what happens when applications cross-compiled to JS approach or surpass traditional desktop performance: https://www.destroyallsoftware.com/talks/the-birth-and-death...
Can't help being reminded of that talk by Gary Bernhardt: “The Birth & Death of JavaScript”[0] — exploring a hypothetical future where JS takes over everything without (most) anyone using it of their own volition.
Except entirely irrelevant as WASM is not Javascript.
_jn
Sure, though it’s still a web technology taking over an otherwise unrelated space ¯\_(ツ)_/¯
I don't think it's unrelated. Despite the name, WASM isn't really a "web technology" - it's a sandbox technology and a compile-once-run-everywhere technology, and there has always been demand for that outside the web, even before the web existed. It might be that the web is what created enough demand for it to happen in the end, but what do we care?

The problem with JS was never that it's a web technology. It's that it's a bad technology that happened to be in the wrong place at the wrong time to get a first mover advantage.

Can you show me an example of a WebAssembly app that runs in the browser with JavaScript enabled?
There are probably newer examples, but from when WebAssembly was coming out:

https://alpha.iodide.io/

https://github.com/mdn/webassembly-examples/

Meant to say with JavaScript disabled.
This talk was really prophetic https://www.destroyallsoftware.com/talks/the-birth-and-death...
> the text format defined by the WebAssembly reference interpreter (.wat)
I like to share this talk with new junior devs instead of ranting about strangely defined behavior in javascript. Saves time, and it's more fun.
Maybe prophetic in the sense of "people really wanted this for years and it has finally been implemented". People have been talking about it since at least when NaCl debuted in 2011.
“If I had asked people what they wanted, they would have said faster horses.” - Henry Ford

JavaScript by itself is not a “nice” language, I would ask what the root of your request is: an easy garbage collected language that is ubiquitous?

This "long-held fervor" that culminated in WASM started back before Node.js existed, so back then, "Javascript" was just a thing browsers did (except for, say, Windows Scripting Host's support of it.) The fervor back then wasn't really driven by a desire for a ubiquitous anything; it was driven specifically by people thinking about browsers, and what is required to program web-apps in browsers.

What people have wished for, since... oh, 2001 or so, is the ability to write web-app frontends without needing to grok and deal with the awful runtime semantics of Javascript—the way you do when you write Javascript, yes, but also the way you do when you write in a language that directly transpiles to Javascript, like TypeScript or ClojureScript.

Languages like TypeScript may add semantics on top of the JS runtime's semantics, but they can't get away from the fact that the JS runtime is the "abstract machine" they program, any more than e.g. Clojure can get away from the fact that the JVM is the abstract machine it programs. That's why none of these languages were ever seen as a "saving grace" from the "problem of Javascript", the way WASM is.

WASM has its own abstract machine (which runs efficiently in browsers), which finally frees people from the tyranny of the Javascript-runtime-as-abstract-machine.

It's great that it's also now replacing the Javascript-runtime-abstract-machine in other contexts (e.g. Node-like server-side usage, plain-V8-like embedded usage) but that was never really "the thing" that anyone cared about.

---

Mind you, the NaCl fervor was for a ubiquitous VM—but the NaCl fervor wasn't nearly as large, and isn't really what's propelling WASM to prominence right now. Even then, it wasn't about "an easy garbage collected language that is ubiquitous", no. The goal of it was to be able to take code that you already have—native code, written to run fast, like a AAA game—and put it in a browser-strength sandbox, such that it can be zero-installed just by visiting a URL, with full performance. You know, like ActiveX was supposed to be. But better.

NaCl didn't really get us there, because it happened right as the architectural split between x86 and ARM really started heating up, and NaCl's solution to that split—PNaCl, a.k.a. sandboxed LLVM IR—was both too late and not really efficient-enough at the time to fully supplant the "Native" NaCl in NaCl messaging. (LLVM IR works well-enough now for Apple to rely on it for being a "unified intermediate" of both x86 and ARM target object code, but that shift only began with ~8 additional years of LLVM development after the version of LLVM that PNaCl's IR came from.)

WASM seems to get us there. But do we care any more? Everyone already has other solutions to this problem. ChromeOS can run Android apps; Ubuntu Snappy packages can expose GUIs to Wayland; Windows has a Linux ABI to run Docker containers. Ubiquity is a lot easier now than it was back then, for any particular use-case you might want.

On the embedded-scripting side of things, everyone has seemingly settled on embedding LuaJIT or V8. Do people even need embedded scripts to be fast, in a way that "WASM as compilation target" would help with? Maybe for the more esoteric use-cases like CouchDB's "design documents" or the Ethereum VM (https://medium.com/coinmonks/ewasm-web-assembly-for-ethereum...) But I doubt WASM will hit ubiquity here. Why would OpenResty switch? Why would Redis? Etc. You're not writing-and-compiling native code for any of these in the first place, so adding WASM here would only break existing workflows.

> What people have wished for, since... oh, 2001 or so, is the ability to write web-app frontends without needing to grok and deal with the awful runtime semantics of Javascript

It was so even before 2001. Consider why the <script> element has the "type" attribute to begin with (and why it had the "language" attribute originally, before "type"). As I recall, W3C specs from that era gave examples such as Tcl! On Windows, you could use anything that implemented the Active Scripting APIs - e.g. Perl. JS just happened to be the one that everybody had, because it was the first, and so it became the common denominator, to the detriment of the industry.

> I would ask what the root of your request is: an easy garbage collected language that is ubiquitous?

Honestly, I just want to be able to write the stuff I have to in JS today in my favorite language.

If by "people" you mean "a vocal minority", then sure.
A lot longer than that. There was a standard called ANDF (Architecture Neutral Distribution Format) in 1989 trying to solve the same problem. I'm curious why it didn't work out, but I imagine part of the problem was that it had to support many different flavors of Unix as well as multiple CPU architectures.
> I'm curious why it didn't work out

I worked at a multi-platform UNIX software vendor back then, and did the project of compiling our products to ANDF. At one point I counted 20 distinct UNIX variants that we had in our office (all workstations).

It was believed that UNIX systems complied with government procurement standards (e.g. this is where POSIX is relevant), but there were always bugs and differences in behaviour. In C code this is handled by #if pre-processor directives to adapt as needed. ANDF required turning that into runtime if statements instead. (An example would be if there were two different network interface calls depending on platform.) That would have been a herculean task and was in some places impossible, such as when the same API took different numbers of parameters on different platforms. The ANDF compiler would have to pick one for its header files.

Even something like endianness is usually handled by #if and not an if statement, so you can see just how much effort would be needed just to get it to compile. You still had to do all the multi-platform installation instructions and testing, so no one had an incentive to make things more complicated.
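To make the #if-versus-runtime-check distinction concrete, here is a hedged sketch of a runtime endianness probe in JavaScript. JS, like ANDF, has no preprocessor, so this kind of platform check necessarily happens at run time rather than being compiled away:

```javascript
// A runtime endianness probe. In C this would typically be an #if on a
// compiler-provided macro; here it must execute on the actual hardware.
function isLittleEndian() {
  const buf = new ArrayBuffer(4);
  new Uint32Array(buf)[0] = 0x11223344;
  // On a little-endian machine the least-significant byte comes first.
  return new Uint8Array(buf)[0] === 0x44;
}

console.log(isLittleEndian()); // true on x86 and typical ARM hosts
```

The ANDF problem was exactly this: every such decision, normally resolved once at compile time, had to be deferred into the distributed artifact.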

Java killed ANDF stone dead. While Java was most prominent for applets in the browser, it also competed with C++ on the backend. C++ compilers cost a lot of money, had licensing daemons that locked you to one platform, and implemented different subsets of the C++ standard and library. Java was more forgiving and the JVM provided standard features like multi-threading and networking. You will also note that Sun spent a lot of effort to keep Java standard.

It looks like that talk is from 2014. Ideas like that had been talked about for many years -- that is, moving all of computing to the browser.

July 2007:

Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript.

https://blog.codinghorror.com/the-principle-of-least-power/

JSLinux by Fabrice Bellard in 2011:

https://bellard.org/jslinux/

As far as I remember, the addition of Typed arrays to JavaScript made this feasible.

In typical Bellard fashion, he didn't really talk about it -- he just demonstrated it!
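Typed arrays matter here because they give JavaScript a flat, byte-addressable block of memory, which is what an emulator needs for guest RAM. A toy sketch (illustrative only, not Bellard's actual code):

```javascript
// A toy model of emulated "guest RAM" backed by a typed array:
const ram = new Uint8Array(1 << 20); // 1 MiB of emulated memory

// Poke and peek single bytes, as an emulator's load/store path would:
ram[0x7c00] = 0x90; // x86 NOP opcode at the classic boot-sector address
console.log(ram[0x7c00]); // 144 (0x90)

// Views of different widths can alias the same underlying buffer,
// which is how wider loads/stores are implemented:
const words = new Uint32Array(ram.buffer);
words[0] = 0xdeadbeef;
console.log(ram[0].toString(16)); // "ef" on a little-endian host
```

Before typed arrays, the only option was ordinary JS arrays of numbers, which are far slower and far more memory-hungry for this access pattern.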

Quite possibly! Or maybe at install or link time. Or maybe we're looking at a future where almost all code goes through a JIT engine and you've only got a few normal cores that run the OS that manages everything. Or possibly everything will be JavaScript[1].
So in practice, nobody uses actual x86 assembly anymore, but everybody uses it as a target for compilation. It’s just like The Birth & Death of JavaScript¹, but non-fictional, and on a lower level – a Birth & Death of x86 assembly, if you will.
Or JVM byte code.
>nobody uses actual x86 assembly anymore,

well almost.

https://www.destroyallsoftware.com/talks/the-birth-and-death...
I saw that video in the past, but watching it again now, it took me 15 minutes to realize this was supposed to be a fun video about an absurd future.
Good grief we've reached the point of a JS interpreter running in ring 0? Gary Bernhardt was a prophet.
Well, strictly speaking, WASM and JS are two different things, but basically yeah.
Dec 08, 2018 · earenndil on Nginx on Wasmjit
This talk is relevant https://www.destroyallsoftware.com/talks/the-birth-and-death.... Tl;dw: in-kernel JIT has the potential to be 4% faster than direct execution. I am still dubious, however, as a JIT requires far more resources than direct binary execution.
Dec 07, 2018 · lwb on Nginx on Wasmjit
This is satire but still very interesting: https://www.destroyallsoftware.com/talks/the-birth-and-death...
Dec 07, 2018 · Ajedi32 on Nginx on Wasmjit
One step closer to METAL[1]
ksec
Everything said in the talk came true. Which means we are very close to a nuclear war. (And it certainly looks like a possibility, the way things are going.)
Dec 07, 2018 · traverseda on Nginx on Wasmjit
Obligatory birth-and-death-of-javascript

https://www.destroyallsoftware.com/talks/the-birth-and-death...

I suppose calling it METAL would have been too on-the-nose.

molf
Not sure why you're downvoted, this is a very interesting talk.
It's pretty relevant to the discussion of "why would you want to run wasm in the kernel", but I'm not too worried about the votes.
It's not, though... WebAssembly doesn't really have all that much to do with js, any more than Flash or Java plugins would if they ended up being standardized instead. Every time there's a wasm thread it gets posted, but it misses the point of the talk to suggest that wasm is the prediction bearing fruit.

The talk is great, but I'd suggest that's the reason for the downvotes.

> it misses the point of the talk to suggest that wasm is the prediction bearing fruit

Does it? It seems that the talk has two main points:

1. Javascript succeeded because it was (at least initially) just good enough not to be completely unbearable, but bad enough that people ended up using it primarily as a target for other languages.

2. Ring 0 JIT can be 4% faster than normal binaries.

WASM is primarily a target for other languages, and qualifies as a language that can theoretically be JITted 4% faster than native code can be run.

Point number one isn't applicable to wasm.

The execution inside the kernel is related, but nobody replies to a lua in kernel post with a link to the talk.

Because Lua isn't related to Web development, while JavaScript and WASM are.
Right, but that's my point. Wasm is tenuously connected with JavaScript because both are web technologies, so people link the talk.

But they couldn't be more different technically, and if wasm does indeed become the lingua franca of future computing it will be much more boring than the craziness of js doing the same.

The talk was great because it was about an insane yet plausible future. We now have a boring and probable future.

I don't believe in that; I have too much experience to believe in a miracle bytecode format that will magically succeed where others failed.

It will just be yet another VM platform.

Even less reason to link to the talk then :-)
Electron is becoming the new Win32. Might as well own it and make it faster, more native.

I think it’s a better alternative to Apple's Marzipan.

It’s a tragedy for the Web but makes sense for Windows.

Anyway, as others have noted, this seems more and more prophetical every year:

https://www.destroyallsoftware.com/talks/the-birth-and-death...

A much better video about the same topic:
Well, according to the "The Birth & Death of JavaScript" prophecy, we will soon have war from 2020 all the way to 2025. So far everything in the prophecy has come true, so I am not sure if I will still be alive in 10 years.
Gary Bernhardt kind-of predicted this. https://www.destroyallsoftware.com/talks/the-birth-and-death...
Indeed! Amazing talk and amazing mind, thanks for sharing. Reminds me of Rich Hickey (whom he mentions in the talk), similarly independent and deep thinking from the first principles.
This will happen sooner than the 2030s: https://github.com/piranna/wasmachine (WebAssembly on FPGA)
The point about this approach being "closer to the metal" than other cloud providers definitely brought this talk to my mind. I just hope the nuclear war he also predicted for 2020 doesn't come to pass :/
Relevant: https://www.destroyallsoftware.com/talks/the-birth-and-death...
So everything is eventually going to WebAssembly. Eventually.
The parallels with the Destroy All Software talk "The Birth and Death of Javascript" [0] are crazy. Seeing the section where they address the possibility of Node modules and system access from WASM is like seeing a flying car advertisement in real life.

Ditto the feeling. I don't "get" WASM from a typical in-house CRUD development perspective. Game makers, maps, movie editors: sure, they need the speed. But some say it's gonna revolutionize most browser-based application dev, and I can't get specific examples that are relevant to us. And even for those domains listed, relying on the inconsistent and buggy DOM found across browser variations is a problem it probably won't solve. DOM will still suck as a general-purpose UI engine.

WASM just makes DOM bugs run faster.

Between WASM and modern graphics APIs, we might be able to actually kill DOM altogether. Something like this:
Let's not do that until we have a way to make non-DOM-based web applications accessible to screen readers and other assistive technologies.
Meta-data can be embedded to describe and categorize content. But accessibility is usually not a goal for many "office productivity" applications (per my domain-specific standards suggestion). Usage of DOM alone does not guarantee accessibility either.

As far as Qt, while it may be a good starting point, I don't think direct C++ calls are practical. Some intermediate markup or declarative language should be formed around it: a declarative wrapper around Qt.

> Some intermediate markup or declarative language should be formed around it: a declarative wrapper around Qt.

QML is exactly that.

> accessibility is usually not a goal for many "office productivity" applications

I think I might be misunderstanding you. Are you saying accessibility is usually not a goal for the kind of applications that people need to be able to use to do their jobs?

I believe there's a reasonable limit to how adaptable workplace software has to be to those with poor vision, etc.
The DOM isn't particularly "buggy", relative to, say, Win32 or Cocoa. It may or may not be a bad API, but implementations are pretty solid.

(I have to confess I've never understood the objections to the DOM. I have literally never had an instance in which I had to use the raw Win32 API that didn't turn into a miserable experience.)

With Win32 and Cocoa you pretty much have one vendor with roughly 3 or so major releases. But with browsers you have roughly 8 vendors above 1% market-share with 3 or so (notable) major releases each. Therefore, you have to target roughly 8x more variations of the UI engine.

Look how hard Wine's job is to be sufficiently compatible.

I believe we need to either simplify the front-end standards (pushing as much as possible to the server), or fork browsers into specialities: games, media, CRUD, documents, etc. What we have now sucks bigly. Try something different, please!

Interoperability, and the standards process, is how we get specs that are sensible. Whenever I have to program using Win32, Cocoa, etc. I inevitably spend a ton of time reverse engineering how the proprietary APIs work. For DOM APIs, things generally work how they are supposed to work, because they were actually designed in the first place (well, the more recent APIs were).

Wine isn't comparable, because the Web APIs are designed by an open vendor-neutral standards committee and have multiple interoperable open-source implementations.

Your proposals break Web compatibility and so are non-starters. Coming up with fixes for problems in the DOM and CSS is the easy part. Figuring out how to deploy them is what's difficult.

Re: "the standards process, is how we get specs that are sensible." - They are not sensible: different vendors interpret the grey areas differently. A sufficiently thorough written standard would be longer and harder-to-read than the code itself to implement such.

Re: "Your proposals break Web compatibility" -- Web compatibility is whatever we make it. A one-size-fits-all UI/render standard has proven a mess. What's the harm in at least trying domain-specific standards? We all have theories, but only the real world can really test such theories.

Also https://www.destroyallsoftware.com/talks/the-birth-and-death...
The screenshot of GIMP running in Chrome running in Firefox is one of my favorites!
I really enjoyed "The Birth And Death Of Javascript".
"The Birth and Death of Javascript" becomes more and more real
I like the part about the Bay Area being a radioactive wasteland. :)
Fallout: New Francisco
We're firmly within NCR territory here.
The Hobologists send their regards.
Technically, he only said it was an "Exclusion Zone", so it could refer to real estate prices in the 2035 Bay Area for non-quadrillionaires.
I still don't buy it. Javascript might buy a 4% performance improvement, but the increased resource usage makes that impractical for most scenarios. For server use, the increased wear makes it cheaper to buy more computers and have them last longer. For application use, performance is not relevant enough to make it interesting. So really the only possible application is video games.

EDIT: before people accuse me of making a false dichotomy: I acknowledge that there are other uses for computers, but I am unable to think of any others where the increased resource consumption would be worth it. Another thing: on cell phones, the battery would drain much more quickly if everything were a webapp. Perhaps video game consoles will switch to such a JIT, though...

I mean Bitcoin is pretty much built on “increased resource consumption is worth it”. But I get your point.
It’s not. It’s built on “here’s a way to convert energy into financial security and have a transaction platform on top of it”.
Any person in the situation where “financial security” based on government-issued currency is tenuous enough to make cryptocurrency an attractive option is either 1. doing something illegal, or 2. would be better off with access to the energy used.

“There are no financial instruments that will protect you from a world where we no longer trust each other.” https://www.mrmoneymustache.com/2018/01/02/why-bitcoin-is-st...

there are 3 assertions here and they are all dumb.

first two aren't even about cryptocurrencies - every currency is more attractive than bolivar in Venezuela these days. i guess all those people are doing something illegal (surviving?) or will be better off with electricity (spoiler alert - they already have electricity).

the last one is even dumber, coming from somebody who supposedly knows their way around finances, and yet quotes like the one you posted, or this one:

> These are preposterous numbers. The imaginary value of these valueless bits of computer data represents enough money to change the course of the entire human race, for example eliminating all poverty or replacing the entire world’s 800 gigawatts of coal power plants with solar generation. Why? WHY???

just show how clueless he is.

there is a gigantic difference between not trusting and not having to trust.

and of course market cap numbers are preposterous. it's because they are meaningless and don't represent anything that exists in reality. not the amount of money that was spent acquiring those currencies, not the amount of money that can be made selling those currencies. it's meaningless numbers.

> There is a gigantic difference between not trusting and not having to trust.

This is the point, though. Everyone expecting financial security has to trust. There is no alternative.

the alternative is math. when you sign a transaction and it gets included in the blockchain - i don't need to trust anyone that i got the money, i don't need to fear the transaction will be reversed due to some banking policies, i don't need to fear my account will be closed, etc.

so you're right, i have to trust math and i am ok with that.

You still have to trust your counterparties, though, and they will always be less trustworthy in aggregate than the functioning of a financial system.

You're worried about the wrong player in this game.

no, that's not how it works. counterparty risk exists regardless of which financial system you operate in. you may pay for insurance to reduce the damages, you may sue them in a court of law for breaking a contract - all of that is independent of the financial system.

cryptocurrency is simply a different financial system where you don't need to trust middlemen.

it's really astonishing how complacent people have become towards trusting middlemen in financial systems. if you ignore the banking services that you're paying for either directly or indirectly via taxes - what is the bank doing for you? why should you pay for having a record in a database? why should a bank be involved in facilitating or even censoring your transactions?

i'm worried about exactly the right player in this game.

“...[Bitcoin] also has some ideology built in – the assumption that giving national governments the ability to monitor flows of money in the financial system and use it as a form of law enforcement is wrong.”

Seems like the author I cited has it exactly right, then.

he has some things right, but not nearly enough.

is this the way you concede your opinion expressed above was wrong? because you just jumped from "financial system is not a risk, counterparty is a risk" to "but what about transparency and law enforcement"..

They are the same problem, and not a jump at all. You even used the phrase “court of law” yourself.

Financial security depends on the ability to show who bad actors are and undo their transactions. I don't see how this is possible without some sort of middleman involving civil government. Yes, the current banking system has flaws, but they are not technological flaws and cannot be solved by technological means.

> You even used the phrase “court of law” yourself

yes, in context of resolving dispute with counterparty

> Financial security depends on the ability to show who bad actors are and undo their transactions

no, that's in my opinion the opposite of financial security. i feel financially secure when i know no government, bank or corporation fuck up can affect the state of my account.

> Yes, the current banking system has flaws, but they are not technological flaws and cannot be solved by technological means.

the flaw of current banking system is humans. humans are often corrupt, incompetent, unreliable and with malicious intent.

basing financial security on assumption that only honest, competent, reliable and well intentioned humans end up in positions of power is obviously wrong.

i don't hold that assumption and think that taking humans out of the loop is the best way to address flaws of existing system.

the fact that flaws are not technological in no way means technology can't solve them.

> the flaw of current banking system is humans. humans are often corrupt, incompetent, unreliable and with malicious intent.

Software, including a blockchain or Bitcoin, is designed by humans.

A clock is designed by humans. It doesn't rely on humans to tell time. You can figure out the rest, I hope.
Something designed by humans can have design flaws.
yeah, i guess we should throw all those watches away.
With projects such as https://github.com/rianhunter/wasmjit it looks like this talk is basically coming true.

Not exactly JavaScript, but running untrusted code in a safe language in the kernel can already give performance improvements for some workloads by avoiding system call overhead. It will be interesting to see where this goes in the future.

Catching up with 1961.
Oh, to be sure, the technology will be there (and will quite likely make it into game consoles). I just struggle to see any other outlet for it.

Not to mention that that benchmark is very synthetic and doesn't really reflect the kinds of speedups that will generally be found.

Another step towards Metal[0] becoming a reality.

[0] A (half) joke from Gary Bernhardt's excellent The Birth and Death of JavaScript (pronounced "yavascript"): https://www.destroyallsoftware.com/talks/the-birth-and-death...

Prescient https://www.destroyallsoftware.com/talks/the-birth-and-death...
Not really, it was what everyone was hoping would happen for ages. Remember PNaCl?
Except, WebAssembly isn't even close to Javascript. They're completely different languages. WebAssembly is closer to C than to Javascript.
WebAssembly is basically just a binary encoding of asm.js, which is the subset of JavaScript discussed in the talk.
WebAssembly is a statically-typed language which passes values around using a stack. It’s very different from asm.js.
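To make the contrast concrete, here is a minimal asm.js-style module (a hypothetical sketch, not from the spec's examples). asm.js is just a typed subset of JavaScript: the `| 0` coercions mark int32 values so a validating engine can compile the body ahead of time:

```javascript
// A minimal asm.js-style module. The "use asm" pragma and | 0 coercions
// act as static type annotations for the validator/JIT.
function AsmAdd(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0; // parameter annotation: int32
    b = b | 0;
    return (a + b) | 0; // return annotation: int32
  }
  return { add: add };
}

// Because it's a JS subset, it still runs as ordinary JavaScript even in
// engines that never implemented asm.js validation:
const mod = AsmAdd(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

WebAssembly's stack-machine bytecode expresses the same typed semantics directly, without being shoehorned into JavaScript syntax.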
Yeah, but it doesn't have "Javascript" in the name, which makes it automatically better by way of bypassing everyone's irrational hatred of JS
While asm.js was basically just a textual encoding of C in JavaScript... round and round we go! :)
I'd say it more a textual encoding of LLVM IR. Which makes the s-expression text format of WebAssembly a text encoding of a binary encoding of a javascript encoding of a compiler intermediate representation of your program. Round and round indeed.
jchw
On the surface level, sure. However, it's mostly just a lower abstraction way of accessing largely the same JIT. I'm pretty sure browsers supporting WebAssembly are doing so by reusing most of what they already have. And if you dig deeper, this was almost certainly inspired by tools like Emscripten and the Asm.js concept. After all, Asm.js accomplished a similar goal to WebAssembly, at the end of the day; Wasm is a cleaner, higher performance, less backwards compatible way of doing largely the same thing.

JS already unhinged from the browser pretty thoroughly. I think when it comes to Wasm it's almost as much about what it doesn't have as what it does have. Lack of DOM bindings and a GC make it much more suitable for hosting in more environments like the kernel.

As predicted by Gary Bernhardt[1].

FWIW, I don't think doing that will be a net win. I/O-bound applications might run slightly faster (I think Gary Bernhardt's number was 5%), but in exchange they'll take quite a bit more system resources, which means increased power use and reduced lifespan.

Could you explain what you mean by saying that they'll take more system resources?
More ram, cpu usage, etc.

So, imagine you have a given task. One option is it takes 60ms to execute and during that time it takes 50% cpu. Or, it takes 50ms to execute but takes 80% cpu. The second one takes less wallclock time but more cpu time.
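The arithmetic in that example, made explicit (the numbers are the commenter's hypothetical ones): CPU time consumed is wall-clock duration times average utilization, so the option that finishes sooner actually burns more CPU.

```python
# CPU time = wall-clock duration x average CPU utilization.
# Numbers are the hypothetical ones from the comment above.
def cpu_time_ms(wall_ms: float, utilization: float) -> float:
    return wall_ms * utilization

option_a = cpu_time_ms(60, 0.50)  # 60 ms at 50% CPU -> 30 ms of CPU time
option_b = cpu_time_ms(50, 0.80)  # 50 ms at 80% CPU -> 40 ms of CPU time
# Option B wins on wall-clock time (50 < 60) but costs more CPU time (40 > 30).
```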

And his visionary talk on JavaScript, asm.js and others:

The reference in the above comment is to the future “metal architecture” Gary starts detailing about halfway through.

Gary Bernhardt’s presentations are always masterfully done. This one is particularly funny.

On HN: https://news.ycombinator.com/item?id=7605687
Obligatory mention of this tongue-in-cheek yet visionary presentation, "The Birth & Death of JavaScript": https://www.destroyallsoftware.com/talks/the-birth-and-death...

This is basically what is happening with wasm, and it's happening much faster than Gary Bernhardt was anticipating in that presentation.

IMHO wasm finally displaces javascript as the only practical language to run stuff in a browser. Frontend stuff happens in javascript primarily because browsers cannot run anything else now that plugins have been killed off. Wasm changes that.

At the same time a lot of desktop apps are in fact javascript apps running on top of electron/chrome. Anything like that can also run wasm.

Finally people have been porting vms to javascript for pretty much as long as wasm and its predecessors have been around. So booting a vm that runs windows 95 or linux in a browser is not very practical but was shown to work years ago. This is what probably inspired the presentation above.

I've actually been pondering doing some frontend stuff in Kotlin or Rust. I'm a backend guy and I just never had any patience for javascript. I find working in it to be an exercise in frustration. But I actually did some UI work in Java back in the day (swing and applets). Also there's a huge potential for stuff like webgl, webvr, etc. to be driven using languages more suitable to building performance-critical software if they can target wasm. I think wasm is going to be a key enabler for that.

Most of the stuff relevant for this has been rolling out in the last few years. E.g. webgl is now supported in most browsers. Webvr is getting there as well. Wasm support has been rolled out as well. Unity has been targeting html5 + wasm for a while now. And one of the first things that Mozilla demoed years ago with wasm was Unreal running in a browser.

I would not be surprised to see some of this stuff showing up in OSes like fuchsia (if and when that ships) or chrome os.

I was lucky enough to see that talk live, and it's stuck with me. It's certainly where a big part of my enthusiasm for WebAssembly comes from.
yes
I remember YavaScript!
Jun 19, 2018 · datalus on Qt for WebAssembly
Obligatory Gary Bernhardt video reference: https://www.destroyallsoftware.com/talks/the-birth-and-death...
We can't really get rid of speculative execution, but we will effectively need to get rid of the idea that you can run untrusted code in the same process as data you want to keep secure.

It's interesting that the future predicted by excellent (and entertaining) talk The Birth & Death of Javascript [1] will now never come to pass.

> We can't really get rid of speculative execution

Why not?

> but we will effectively need to get rid of the idea that you can run untrusted code in the same process as data you want to keep secure.

If we accept that, then we also need to get rid of the idea that we can branch differently in trusted code based on details of "untrusted data".

Your last point is basically what Spectre v1 mitigations are all about, at least if you throw in the word "speculation" somehow. The rule is: don't speculate past branches that depend on untrusted data (though there are certain additional considerations about what the speculated code would actually do).

It's just that there are a lot of branches that don't depend on untrusted data. Speculatively executing past them is perfectly fine and extremely valuable for performance. That's why nobody wants to get rid of speculative execution.

It sounds like what we really need is a memory model that reflects this notion of trusted and untrusted data. The "evil bit", basically, but for real.
Obligatory:
If it's actually prophecy coming true, people might want to leave SF before it becomes part of the exclusion zone.
It's already happened, it's just that the SF/MV overmind migrated to another host and has resumed normal operation.
It's a really funny talk, but this is nothing we haven't seen before with Java, Python, .Net... heck one of the most popular unikernel implementations is in OCaml! We'll survive having more languages in/as kernels.
Gary Bernhardt explains it better in this talk: https://www.destroyallsoftware.com/talks/the-birth-and-death...

But to summarize, jumps between kernel-space and user-space are expensive. Instead of doing that, we can run a well-vetted interpreter in kernel-space, and run "userspace" programs in kernel-space, in the interpreter.

This actually isn't slower (or so it is claimed), because a JITed interpreter can be native speed on hot code paths, and the inefficiencies for most workloads are more than made up for by not having expensive syscalls.

So what you end up with is something that is about as fast as normal compiled code for cpu-intensive workloads (maybe faster sometimes), much faster for workloads involving a lot of syscalls, and interpreted languages like python/javascript end up much faster as well, presuming they can take advantage of the efficient JIT implementation.

Personally, what most excites me about this technology path is that it should reduce the cost of interprocess communication to near zero. Combined with a shared object model and a capabilities system, it could be pretty awesome.
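The syscall cost these comments are trading against can be observed directly. A rough sketch (results vary a lot by OS, CPU, and mitigations; `os.getpid` is used here only as a stand-in for a cheap real syscall, and Python's own call overhead inflates both numbers):

```python
# Time a loop of real syscalls (os.getpid) against a loop of plain
# user-space function calls. Only the relative difference is meaningful,
# and even that is noisy; this is illustrative, not a benchmark.
import os
import time

def plain():
    return 42

def time_calls(fn, n=100_000):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

syscall_cost = time_calls(os.getpid)  # crosses into kernel space each call
call_cost = time_calls(plain)         # stays entirely in user space
print(f"syscall loop: {syscall_cost:.4f}s, plain loop: {call_cost:.4f}s")
```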

The next step of evolution predicted by Gary Bernhardt. https://www.destroyallsoftware.com/talks/the-birth-and-death...
A few related matters which might seem unrelated until one starts seeing the bigger picture:

http://lampwww.epfl.ch/~amin/pub/collapsing-towers.pdf Amin, Nada; Rompf, Tiark - Collapsing Towers of Interpreters [January 2018]

sequel to that paper: https://dl.acm.org/citation.cfm?id=3136019

https://www.reddit.com/r/nosyntax

https://grothoff.org/christian/habil.pdf The GNUnet System

https://wiki.debian.org/SameKernel

http://drops.dagstuhl.de/opus/volltexte/2017/7276/pdf/LIPIcs... Wang, Fei; Rompf, Tiark - Towards Strong Normalization for Dependent Object Types (DOT)

https://news.ycombinator.com/item?id=16343020 Symbolic Assembly: Using Clojure to Meta-program Bytecode - Ramsey Nasser

http://conal.net/papers/compiling-to-categories/

https://icfp17.sigplan.org/event/icfp-2017-papers-kami-a-pla...

Each of these, and this, paves another brick into a road to a very very different computational paradigm... I post this here without much explanation (and I've left a lot of other very relevant stuff out) of how these parts fit together, and I apologize for that, but I simply lack the time to give this the proper writeup it would deserve.

The end is neigh and it is the death of javascript.
I think it's the start of something wonderful...
Don't horse around. The word is nigh
I've seen this talk linked so many times, and now that I've finally watched it I definitely recommend it; it's pretty good.
And somewhere, Gary Bernhardt is shaking his head in disbelief.

https://www.destroyallsoftware.com/talks/the-birth-and-death... was supposed to be satire... but "asm everything" might actually kind of happen.

Yeah, I definitely thought of Gary when I was writing the piece. The first time as tragedy, the second time as farce!

You jest, but this sort of thing probably is the death of almost all native software, sadly.

As long as performance is important, native software will always have an edge. Depends on the application.
I’m a big fan of native UI, but I doubt this particular claim. The web stack is almost certainly more performant than GTK, for example.
Apologies, when I say native software, I don't just mean the UI.
I don't agree on this count either. You can have performance and maintainability by writing your business logic in a daemon. Only very few applications will find this IPC cost unacceptable, and WASM promises to raise the performance ceiling even further. There are lots of good arguments for native software, but I don't find the performance argument to be particularly compelling.
There are some things we mourn. Nobody mourns cross platform distribution of native apps. Anyone who does hasn’t had to manage the insanity of installer apps and of papering over a million different OS versions and app versions.

It’s 2018 and we still don’t have a common, wide spread, OSS framework for self-updating native apps that runs on all the major desktop operating systems. And let’s not even go into app store territory...

Well it would be great if there weren't hundreds of FOSS desktop variants....
There are only a handful you'd ever care about, and Qt apps work fine on all of them.
But Qt itself is a dumpster fire. It just happens to be one of the least bad toolkits for Linux.

Disclaimer: My last job was Qt developer.

This seems to be a common complaint as if we were all standing on a life raft pushing it down with the collective weight of our respective keisters.

If only some people would stop contributing to creative work you don't see the value of and distributing it for free on the internet!

At present you have

Debian and a bazillion ubuntu derivatives using a debian package. ubuntu/debian are going to have different versions of some libraries but you can package deps with your app.

Arch and derivatives have a pkgbuild. This is quite simple if anyone cares about your app your arch users will probably upload one for you to the arch user repo.

Fedora and suse have rpms. These will be similar but not identical.

3 packages and you can cover most of your potential users.

In the future you reasonably may expect to be able to distribute a flatpak and be done with it.

It is not only about package formats, since even when the format is the same, the expected directory layouts or installed libraries will be completely different.

No one really took the FHS seriously; each installation is a special snowflake with its own GUI, and the dynamic-libraries story across Linux variants is even worse than it was on other OSes.

The cross platform part is called source code. If all the chunks you use to build your app are cross platform, packaging won't be horrible.
Quick example: to deliver my app safely as a web app, I need to add HTTPS and that's basically it. These days I can add Letsencrypt trivially.

With modern Windows, macOS, Linux, iOS, Android, I'm going to have to run around for ages figuring out how to sign my packages. Despite what you might say, it's never easy or pleasant.

The best native apps aren't cross platform. The worst native apps often are (at least from a mac perspective). We are simply moving to the lowest common denominator for all platforms, to the profit of business and the loss of the end-user. We can now develop software that is equally shitty for everyone much cheaper than we could before.

Is that the end of the world? No, of course not. But we aren't building better tools over time--for instance, google docs is distinctly worse than the word I used on macintosh back in the 90s for all the use cases I care about. So is pages! All this software is more complex, and generally for little benefit.

I don't disagree that the best native apps are amazing. But it seems so wrong to regard a desire to build cross-platform software as some kind of scheme by developers against their users. In so many cases, being able to use a program/service on many platforms is a huge part of what makes it valuable to the end-user. You can get your gmail from your computer, or your phone, or from a web cafe, and have it all work the same. You can share google docs with anyone and know they'll be able to access them. You can decide you're fed up with windows/macos/android/ios and you want to switch to something else, and most of your software will still be there waiting for you on the other side...
> In so many cases, being able to use a program/service on many platforms is a huge part of what makes it valuable to the end-user.

Sure, some users, in some cases, may happen to use some features that are new. The pitch isn't "this is a good tool", it's "you have to use this tool to interact with others or retain data portability." Seems pretty user hostile to me.

Should we actually start calling it "Yavascript"?
I was wondering why he kept pronouncing it that way. I thought he was just doing a funny German pronunciation thing.
It's very common to hear Scandinavian speakers pronounce it that way.
did he call it yavascript to avoid something like this?
Phonetic drift and "information lost to time" is a pretty obvious reference, I think.
He was from the future in the presentation, so he knew that after this oracle debacle, everyone since 2018 onwards called it "yavascript"
I'm still holding out for JawaScript
In that case everybody in the exclusion zone better start selling their real estate. Help the short term housing crunch while they're at it.
Another comment pointed out that Netscape originally planned to call it LiceScript but was encouraged by Sun to change it JavaScript so I say if we're going to change the name we go with the original one. Either that or argue that the original holder of the Java trademark essentially blessed the use of its mark and continue calling it JavaScript.
LiceScript? Pretty apt name for something that has infested the web!
> LiceScript

Ahem, "LiveScript": https://en.m.wikipedia.org/wiki/JavaScript#History

Of course you're right. I'll chalk that one up to poor proofreading on my part and not even try to blame autocorrect.
I was gonna say, dodged a bullet by not calling it _LiceScript_
I don't know... LiceScript has a nice ring to it.
Fitting. The direction the JavaScript community has been going in for the past half decade has left me scratching my head often.
The Birth & Death of JavaScript https://www.destroyallsoftware.com/talks/the-birth-and-death...

A talk from the 'future' about how everything became 'YavaScript'.

This talk is awesome. It's funny but I learned a lot. And the prediction of the talk is happening! [1]
let's hope the exclusion zone doesn't happen, k?
you're reminding me of this 30 minute talk from 2014 "the birth and death of javascript", in which he supposes a future where CPUs basically run webassembly directly https://www.destroyallsoftware.com/talks/the-birth-and-death...
Reminds me of Jazelle[1], which allowed some ARM CPUs to run Java bytecode directly. AFAIK it never really caught on.
not exactly, the idea is to have a normal cpu like arm or x86 but the kernel doesn't ever deal with native binaries, only web assembly, and the performance penalty of running stuff in a VM gets offset by disabling memory protection and its performance penalty, since memory protection is already guaranteed by the VM.
It also probably didn't help that Jazelle was locked behind an NDA.
And here I thought this entire time that Gary was joking when he predicted this, though I'm glad we didn't have to suffer a war to make this happen.
You've basically just described "The Birth & Death of JavaScript" https://www.destroyallsoftware.com/talks/the-birth-and-death...
Hmm, HyperCard in the same list as Zope and node? Interesting. :-)

The idea that JavaScript "won" is a little controversial to me. I think it's huge and important, but the world is still changing. Embedded Python goes places that Node still can't. I absolutely see the value you describe in sticking to one ecosystem, but I don't think JavaScript/TypeScript/Node is the only way to get those benefits. (See also: Transcrypt) I really enjoyed the PyCon 2014 talk on the general subject: https://www.destroyallsoftware.com/talks/the-birth-and-death...

The most recent conversation I had with Ted was after someone had just demonstrated the HoloLens for him and a few others. Ted had some feedback for the UI developer, and it didn't have anything to do with JavaScript or that level of implementation detail at all. It was all about the user experience. I don't want to put words into his mouth, but like he says in this recent interview, this is all hard to talk about because it really has changed so quickly.

I do think you're right that a lot of what Ted wanted to see could be implemented today in JavaScript and Git. But I think of the technical meat of that vision as being about data-driven interfaces. I am simply not old enough to really understand how notions of "scripting" changed between the 60s and the 80s. But the fact that Xanadu was started in Smalltalk suggests to me that scripting was part of the vision, even if a notion like "browser extensions" might not have been in mind.

Completely agree that there are other voices to learn from, and other important mistakes that have been made since Xanadu! (I think Ted would agree, too.)

It's more than that even - given that a high portion of the code being run now is using virtual machines, a lot of that protection is redundant. If all code is run inside VMs and zero 'native' code is allowed, then you could run without needing protection rings, system call overhead, memory mapping, etc - which in theory could more than make up for the virtual machine overhead.

This was being explored with Microsoft's Singularity and Midori projects but seems to be a dormant idea.

A fun talk on this idea with JavaScript: https://www.destroyallsoftware.com/talks/the-birth-and-death...

And that, of course, is how you get Spectre.
It should be possible to apply retpoline in the JIT to mitigate that.
Emphasis on "mitigate".
Isn't that a unikernel?
No. Each copy of the VM is still its own process. The kernel just trusts the VM to be safe.
I don’t think it works like that. You still need protection between the rings inside the VM and the CPU is providing that protection. The VM is not only a user process for the host, it is executing its kernel code in the virtualized ring 0 on the CPU (where its still the CPU that provides protection).
As I understand it, with this approach you wouldn't have a separation between kernel code and the virtual machine - everything runs in a single address space and you rely on the virtual machine verifying the code as it JITs it.
All this seems like a pretty compelling reason to move to using VMs and type safety to provide process isolation instead of using the hardware, e.g. https://www.destroyallsoftware.com/talks/the-birth-and-death... or Microsoft's Singularity and Midori projects.

It's a shame there's so much inertia behind the current setup of hardware memory management etc that it seems it'll be a long time before anything actually happens here.

How would this approach have helped in this case?

As I understand it, the flaw in question allows reading kernel memory by executing user space code. How can a layer of software, on top of this type of buggy hardware, fix this issue?

In one sense, the issue has already been fixed by a layer of software on top — which restricts a bunch of stuff, and reduces performance — but I assume this isn’t what you’re looking for.

Those systems don't use the buggy aspect of the hardware (memory protection) at all. Instead, all code is run inside a VM which provides memory protection and process isolation - there is no 'native' code at all.

Not using the hardware memory protection provides a ~20% performance boost, which makes up for the ~20% overhead of running everything through VMs.

Jan 02, 2018 · 1 points, 0 comments · submitted by xwvvvvwx
Ah yes. The fall and rise of JavaScript.
Every time I see such news, I tell myself maybe it's time we port a browser into the Linux kernel and build METAL, like Gary Bernhardt foretold.
Or directly into the cpu. Since there already is a web server in the cpu [1], adding a browser might enable us to run and use web applications without even installing an operating system.
METAL (proposed by Gary Bernhardt) is not about running in the CPU, but about running all software as bytecode and rendering native code obsolete, implementing process isolation in software and saving the overhead of syscalls, memory mapping, and protection rings.
IBM’s AS/400 (now eServer) has basically been implemented this way since 1988 (some of it going back to 1979). Everything old is new again.

(It has 128-bit pointers, btw - so it’s future proof for at least 20 more years)

The original bytecode OS is AFAIK the UCSD p-System (from 1978) https://en.wikipedia.org/wiki/UCSD_Pascal
I guess this just means we’re one step closer to Gary Bernhardt’s vision:
Every time something like this is posted, I always think of that video.

As ridiculous and played-for-laughs as it is, it's looking more and more accurate (in one form or another) every day.

If we have learned anything in the past 2 years, it is that the difference between parody and reality is vanishingly small.
What happens when the stuff like the “exclusion zone” and the war from 2020–2025 come true...? ¯\_(ツ)_/¯
Now you just need to compile a JavaScript interpreter from C++ to WebAssembly... Makes me think of “The Birth & Death of JavaScript” talk [1].
We need brainfuck in this pipeline somewhere
I'd prefer C-INTERCAL or LOLCODE
Upvote for INTERCAL. Keywords like "please" and "maybe" should be part of every language :)
I have bad news for you ;) https://github.com/mbbill/JSC.js

Or good depending on the perspective of course.

To me, this is the most exciting prospect of WebAssembly by far: an IR that works natively across all major platforms, including the web. It's funny because it makes Gary Bernhardt's "The Birth & Death of JavaScript"[1] seem a lot less like a joke.
A good occasion to watch Gary Bernhardt's talk "The Birth & Death of JavaScript" [0] again, where he talks about the precursor of WebAssembly, asm.js, and the implications it "could" have in the future, in a really humorous way. A few years old but still relevant.

You want Gimp for Windows running in Firefox for Linux running in Chrome for Mac? Yeah sure.

>You want Gimp for Windows running in Firefox for Linux running in Chrome for Mac? Yeah sure.

I actually do. I want all code ever written and every environment it was ever written for to have a URL that will let me run it in the browser. Everyone else seems to want the web to go back to being whitepapers but I want actual cyberspace already!

There's another reason why I want JavaScript in the browser to die:

We haven't had a new browser engine written from scratch since KHTML. Firefox is a descendant of Netscape, Chrome (and Safari) is a descendant of WebKit which is itself a descendant of KHTML, Edge is closed source, but I'm almost sure there's some old IE code in there.

Why?

It's simply too expensive to create a fast (and compatible) JS engine.

If WebAssembly takes off, I hope that one day we'll have more than three browser engines around.

What do you base the assumption on that Javascript is the critical piece of complexity here? (it might very well be, but it's not obvious to me)

At least some of the JS engines are used in non-browser projects (V8 and Microsofts), which at least superficially would suggest you could write a new browser engine and tie it to one of those existing JS interpreters. WebAssembly will gain interfaces to the DOM as well, so the complexity of that interaction will remain.

> Edge is closed source, but I'm almost sure there's some old IE code in there

EdgeHTML is a fork of Trident, so yes. That said, I'm led to believe there's about as much commonality there as there is between KHTML and Blink: they've moved quite a long way away from where they were.

> It's simply too expensive to create a fast (and compatible) JS engine.

I don't think that's so clear cut: Carakan, albeit now years out of date, was ultimately done by a relatively small team (~6 people) in 18 months. Writing a new JS VM from scratch is doable, and I don't think that the bar has gone up that significantly in the past seven years.

It's the rest of the browser that's the hard part. We can point at Servo and say it's possible for a comparatively small team (v. other browser projects) to write most of this stuff (and break a lot of new ground doing so), but they still aren't there with feature-parity to major browsers.

That said, browsers have rewritten major components multiple times: Netscape/Mozilla most obviously with NGLayout; Blink having their layout rewrite underway, (confusingly, accidentally) called LayoutNG; IE having had major layout rewrites in multiple releases (IE8, IE9, the original Edge release, IIRC).

Notably, though, nobody's tried to rewrite their DOM implementation wholesale, partly because the pay off is much smaller and partly because there's a lot of fairly boring uninteresting code there.

> Notably, though, nobody's tried to rewrite their DOM implementation wholesale

Depending on your definition of "wholesale", the Edge team claims it took them 3 years to do exactly that:

Oh, yeah, that definitely counts. I was forgetting they'd done that. (But man, they had so much more technical debt around their DOM implementation than others!)
I completely disagree that the issue is JavaScript here.

In my opinion, the issue is the DOM. Its API is massive, there are decades of cruft and backwards compatibility to worry about, and its codebase is significantly larger in all major open source browsers out there.

I'm not sure I agree that the DOM is that bad(more that people are using it improperly and for the wrong things), but yeah, modern JavaScript is hardly to blame for anything. The closest thing to an argument I've heard is "mah static typing".

Asking WebASM to be everything, including a rendering engine, is asking for problems at such an atrocious level.

> I'm not sure I agree that the DOM is that bad(more that people are using it improperly and for the wrong things)

If an API makes it easy to make mistakes it's a bad API. Blaming "people" is a cop-out.

If an API is meant for documents and you're using it for applications, that's not the API's fault
So what you're saying is that people are stupid?

If people are misusing the DOM API at a fundamental level(to do non-DOM related things), that's not a fault of the API. It's as if everyone has forgotten that DOM means Document Object Model. The vast majority of websites and web apps are not very complicated on the client-side, so I'd say that the DOM API generally does its job well. Trying to do anything that's not about constructing a document or is doing heavy amounts of animation or node replacement using a document-building API is asking for a bad time. It's quite implicitly attempting to break fundamentals of computer science.

Making the API one uses to render pages in a browser a free-for-all doesn't solve the problem, and you end up losing many of the advantages of having actual standards. What would be better is for the browser to provide another set of APIs for things beyond the "expertise" of the DOM. This kind of the case right now in some regards, but there's a reason why React and Glimmer use things like virtual DOMs and compiled VMs. I'd argue that a standardization of such approaches could be a different API that probably shouldn't even be referred to as a DOM because they are meant to take a lot of shortcuts that aren't required when you are simply building a document. In a sense, WASM is intended to fulfill this purpose without replacing the DOM or JavaScript.
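The virtual-DOM shortcut mentioned above can be illustrated with a toy diff over plain object trees (a hypothetical sketch; real libraries like React also handle keys, attributes, and components, and are far more involved):

```javascript
// Toy virtual-DOM diff: compare two plain-object trees and emit a patch
// list, instead of mutating a live DOM directly.
function diff(oldNode, newNode, path = "root") {
  if (oldNode === undefined) return [{ op: "create", path, node: newNode }];
  if (newNode === undefined) return [{ op: "remove", path }];
  // Text nodes: replace only if the text actually changed.
  if (typeof oldNode === "string" || typeof newNode === "string") {
    return oldNode === newNode ? [] : [{ op: "replace", path, node: newNode }];
  }
  // Different elements: replace the whole subtree.
  if (oldNode.tag !== newNode.tag) return [{ op: "replace", path, node: newNode }];
  // Same element: recurse into children by position.
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`));
  }
  return patches;
}

const before = { tag: "ul", children: [{ tag: "li", children: ["one"] }] };
const after  = { tag: "ul", children: [{ tag: "li", children: ["one"] },
                                       { tag: "li", children: ["two"] }] };
console.log(diff(before, after)); // one "create" patch for the new <li>
```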

Object-oriented programming is quite often misused/misunderstood. Does that mean it's a "bad" concept? I wouldn't say so. Education tends to suck, and people are generally lazy and thus demand too much from the tools they use.

I'm not copping-out because I'm not putting the DOM on a pedestal. Calling it a bad API because it doesn't work well for a small minority of cases is a total mischaracterization. If it was an objectively bad API, it wouldn't have seen the astounding success it has.

EDIT: I'm not saying that programmers are stupid... but that their expectations are sometimes not congruent with reality.

I'm not implying that the DOM is bad (IMO it's one of the most powerful UI systems that I've ever used. Both in what it's capable of, as well as the speed that I'm able to develop and iterate something with it), just that it's BIG.

There's a lot of "legacy" there, a lot of stuff that probably should be removed, or at least relegated to a "deprecated" status.

Naw, the DOM is fairly small. Go here for a summary http://prettydiff.com/guide/unrelated_dom.xhtml

HTML is far larger than the DOM. Like comparing an ant to Jupiter.

WebAssembly has nothing to do with JavaScript. When people make this association it is clear they are painfully unaware of what each (or both) technologies are.

WebAssembly is a replacement for Flash, Silverlight, and Java Applets.

At the moment it is because the only performant visual output is the canvas.

They are adding native DOM access, which changes things.

and js. eventually!
JS is a language and not a bytecode media. Perhaps chocolate will replace cars and airplanes. I love me some chocolate.
JS is a language and not a bytecode media.

That's an arbitrary distinction that's driven by developer group politics, not a meaningful technical distinction. (Much like the old Rubyist, "It's an interpreter, not a VM.")

Machine languages were originally intended to be used by human beings, as were punch cards and assembly language. There's no reason why a person couldn't program in bytecode. In fact, to implement certain kinds of language runtime, you basically have to do something close to this. Also, writing Forth is kinda close to directly writing in Smalltalk bytecode. History also shows us that what the language was intended for is also pretty meaningless. x86, like a lot of CISC ISAs, was originally designed to be used by human beings. SGML/XML was intended to be human readable, and many would debate that it succeeded.

> That's an arbitrary distinction that's driven by developer group politics

Not at all. JavaScript is a textual language defined by a specification. Modern JavaScript does have a bytecode, but it is completely proprietary to the respective JIT compiler interpreting the code and utterly unrelated to the language's specification.

> There's no reason why a person couldn't program in bytecode.

True, but that isn't this discussion.

The point is that a "textual language defined by a specification" can serve the exact same purpose that a bytecode does. And JavaScript is very much on this path.

That is this discussion, because the fact that people program directly in JavaScript does not prevent it from being in the same class of things as a bytecode.

>JS is a language and not a bytecode media

Does it even matter when most people are using it as a compiler target (even just from newer versions of the language)?

Yes but the point is JavaScript as the "one true way to do client side scripting" can be replaced by webassembly in that capacity.
It cannot. WebAssembly bytecode does not have an API to web objects or the DOM. It is literally a sandbox for running bytecode only.
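The "sandbox for bytecode" point is easy to see by instantiating a module with no imports at all: it can compute, but it touches nothing else, no DOM and no I/O. A sketch in Node.js, hand-assembling the canonical two-argument add module per the WebAssembly MVP binary format:

```javascript
// The classic (func (export "add") (param i32 i32) (result i32)) module,
// written out byte by byte. The module receives no imports, so the only
// thing it can do is arithmetic on the arguments the host passes in.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: 1 func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```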
I love it when computer nerds get so convinced ephemeral things are exactly that which they claim
sli
I am extremely unclear on whatever point you're trying to make, here, because it really does seem to come from a place of ignorance on WASM and JS. It makes no sense.

It seems like you're claiming, in a really roundabout way, that WASM will never have DOM access, even though it's planned[1]. There are even VDOMs[2] for WASM already. Future WASM implementations that include DOM access can absolutely, and for many folks will, replace Javascript.

It will take more than just DOM access to replace Javascript. Just off the top of my head you'd also need access to the event loop, XHR, websockets, audio & video, webRTC, Canvas, 3D, Filesystem, cookies & storage, encryption, Web Workers and more.
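For context on how any of those APIs would be reached today: wasm code can only touch the outside world through functions the host imports into the module. A minimal sketch of that glue pattern, with a hand-assembled module and a made-up `js.log` import standing in for a DOM or browser call (the import name is illustrative, not a real API):

```javascript
// A hand-assembled wasm module that imports one function ("js" "log")
// and exports "run", which just calls the import. This is the only way
// wasm reaches the DOM today: through JS functions the host passes in.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // \0asm, version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,                         // type 0: () -> ()
  0x02, 0x0a, 0x01, 0x02, 0x6a, 0x73,                         // import module "js"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   field "log", func, type 0
  0x03, 0x02, 0x01, 0x00,                                     // one defined func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export it as "run"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b,             // body: call 0; end
]);

const calls = [];
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {
  js: { log: () => calls.push("called") }, // stand-in for e.g. a DOM mutation
});

instance.exports.run(); // the wasm code can only do what this import allows
```

Everything the module can do is enumerated in the import object, which is also where the sandboxing argument below comes from.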
> It seems like you're claiming, in a really roundabout way, that WASM will never have DOM access

I am not going to say never. It does not now and will not for the foreseeable future though. I know DOM interop is a popular request, but nobody has started working on it and it isn't a priority.

Part of the problem in implementing DOM access for an unrestricted bytecode format is security. Nobody wants to relax security so that people who are JavaScript challenged can feel less insecure.

> I know DOM interop is a popular request, but nobody has started working on it and it isn't a priority.

I linked to the latest proposal downthread; people are absolutely working on this.

It's not about being "javascript challenged", it's about not wanting to use a crappy language riddled with problems.
Which of the security concerns browser Javascript deals with do you think are intrinsic to the language, as opposed to the bindings the browser provides the language? If the security issues are in the bindings (ie: when and how I'll allow you to originate a request with credentials in it), those concerns are "portable" between languages.
It isn't due to the language but to context of known APIs provided to the language that can only be executed a certain way by the language.

How would a browser know to restrict a web server compiled into bytecode specifically to violate same origin? The browser only knows to restrict this from JavaScript because such capabilities are allowed only from APIs the browser provides to JavaScript.

I really don't understand your example. Are you proposing a web server running inside the browser as a WebAssembly program, and the browser attempting to enforce same origin policy against that server? That doesn't make much sense.
Yep, it doesn't make sense and that is the problem. There is no reason why you couldn't write a web server in WASM that runs in an island inside the browser to bypass the browser's security model.
This does not make any sense, sorry.
Not sure if this is directly relevant, but there have been all sorts of type confusion bugs when resizing arrays, etc. Stuff in the base language. They exist independent of API, but merely because the language is exposed.
Flash, Silverlight and Java Applets all provided APIs and functionality above and beyond what JavaScript or the browser DOM provides. WebAssembly is the opposite, as it is much more restricted than JavaScript. WASM is a sandbox inside a sandbox.
yeah, wasn't this (or related to it) at the top of HN just yesterday?
IIRC Servo uses quite a bit of Firefox code

Edit: Looking at the project it seems like it uses SpiderMonkey, but is otherwise new code

Servo doesn't render major websites properly (last I checked). Their UI is placeholder. Their Network/Caching layer is placeholder. There's no updates, configuration, add-ons, internationalization.

Servo is not meant to be a real browser. That's not a bad thing, but I don't think you can use it as an example of a browser built quickly by a small team.

Chrome's V8 engine was actually written from scratch, unlike Webkit's JavaScriptCore (which descended from Konqueror/KJS, as you say). Google made a big deal about marketing this fact at the time. (1)

And while yes, Mozilla's Spidermonkey comes from the Netscape days, and Chakra in Edge descends from JScript in IE, plus aforementioned JavaScriptCore, each of those engines still evolved massively: most went from interpreted to LLVM-backed JITs over the years. I suspect that no more than interface bindings remain unchanged from their origins, if even. ;-)

(1) I can't currently find the primary sources from when Chrome released on my phone, but here's a contemporary secondary one: https://www.ft.com/content/03775904-177c-11de-8c9d-0000779fd...)

If the issue is JavaScript, what explains the explosion of JavaScript engines? I agree that JavaScript is a cancer whose growth should be addressed, but implementation complexity isn't a reason.
If these proposed browsers don’t ship with a JS engine [1], do you also hope to have more than one internet around?

[1] Such as V8, Chakra, JavaScriptCore, SpiderMonkey, Rhino, or Nashorn; there is a variety to choose from, plus experimental ones such as Tamarin. They are almost certainly not the critical blocker for developing a browser.

IE/Edge heritage goes back to Spyglass Mosaic.
The real problem is CSS. Implementing a compliant render engine is nearly impossible, as the spec keeps ballooning and the combinations of inconsistencies and incompatibilities between properties explode.

Check out the size of the latest edition of the book "CSS: The Definitive Guide":

Until CSS is replaced by a sane layout system, there's not going to be another web browser engine created by an independent party.

Isn't Grid and Flexbox supposed to be that sane layout system? At least that's what I've heard from those who have used them.
woah
React Native has what seems like a pretty sane css-like layout system. Maybe this could become the basis for a "css-light" standard that could gradually replace the existing css, and offer much faster performance for website authors who opt in.
I presume parent's point is about how they then interact with other layout modes (what if you absolutely position a flex item, for example), along with the complexity of things that rely on fragmentation (like multicol).
Even if Grid and Flexbox are awesome and perfect and the solution to all our problems, they don't make everything else in CSS suddenly disappear; a new layout/render engine still has to implement every bit of it, quirks included.
I think Blink's LayoutNG project and Servo both show that you can rewrite your layout implementation (and Servo also having a new style implementation, now in Firefox as Stylo). I think both of those serve as an existence proof that it's doable.
It's doable if you already have a large team of experienced web engine development experts, a multi-million budget and years to spend on just planning.

Implementing an open standard shouldn't be like this. Even proprietary formats like PDF are much simpler to implement than CSS.

A minimal, 80% PDF, maybe. A complete PDF? No.
It's not doable, apart from when it is.

'Open standard' says nothing about something being simple and straightforward. What CSS is trying to do is complicated because of a whole bunch of pragmatic reasons.

Last time a browser tried to 'move things forward', ngate aptly summed it up as

    Google breaks shit instead of doing anything right, as usual.
    Hackernews is instantly pissed off that it is even possible to
    question their heroes, and calls for the public execution of
    anyone who second-guesses the Chrome team
I never claimed that the existing browser vendors can’t do it incrementally — they certainly can. What I wrote was: “... there's not going to be another web browser engine created by an independent party.”
TCP/IP is an open standard, yet I suspect you would have the same problems implementing it (Microsoft famously copied BSD stack at first).

You could probably say the same thing about any complex open standard, like FTP etc.

Netscape navigator on DOS in Firefox via WebAssembly.

Or how about a live coding environment for an Atari VCS (2600) emulator ;)
My CS background is a bit weak... is the hypothetical Metal architecture he describes supposed to be satire or actually a good idea?
__s
Implement a WASM JIT in kernelspace & you don't have to have a userspace while still having hot code hopefully optimized to remove bounds checking. Now all your programs are WASM modules & we can replace your CPU with some random architecture that doesn't have to care about supporting more than ring0. Oh why not implement a nearly-WASM CPU? Probably just change branches to GOTO. Now the only program people care about, their browser, can have a dead simple JIT for this architecture, with WASM-in-the-browser being nearly as fast as any other program
There’s prior art for this too, Microsoft started a research project called Singularity that was essentially a kernel that only executed .NET bytecode, and had similar advantages (everything in ring0, no syscall overhead, etc.)

It died pretty unceremoniously though.

It died because it couldn't become an actual product and had a lot of very smart engineers spending a lot of time on something that had no future. Some of the core tech was reused and turned into other products.
Mostly satire because the math doesn't really work out in such a way.
It doesn't? How so? I was under the impression that the performance-savings calculations he used were at least plausible. (Though obviously just a back-of-the-napkin estimate.)
Some say that joking is a socially acceptable way to say socially unacceptable ideas.

I think it's a great idea, though many disagree. It's basically ChromeOS but to the next level.

fny
I'm more excited about the prospects of running V8 inside ChakraCore inside Quantum.
I'm more excited about the prospect of running all of FF or Chromium inside of Edge so I can cut my workload down by 50%
And for a somewhat more practical but at the same time more exotic example, the Internet Archive has a ton of old minicomputers and arcade games running in MESS/MAME, each compiled to webasm. One click and you can boot anything and play it in your browser. https://archive.org/details/softwarelibrary
ricw
This is amazing. If only it would also work on my phone. Probably for the best, to stop me "wasting" time ;).
Are you sure that's actually using WASM? It sounds to me like it's currently using ASM.js compiled via Emscripten. (Though in theory there's no reason why it _couldn't_ be WASM, since Emscripten supports WASM as a compiler target.)
I thought they switched over back in July. https://twitter.com/textfiles/status/884084207688892416
That is gorgeous. I just booted Win 3.1 to Minesweeper in less than a minute on my phone's browser.

Too bad makers of the original Minesweeper did not think to build touch input support.

There's the Gary Bernhardt classic "The Birth & Death of Javascript"
Obligatory "The Birth & Death of JavaScript" reference: https://www.destroyallsoftware.com/talks/the-birth-and-death...

METAL is coming.

Oddly, no one here linked to Gary Bernhardt's talk: The Birth & Death of JavaScript (YavaScript)
As always, the fantastic Gary Bernhardt takes this through to its logical conclusion https://www.destroyallsoftware.com/talks/the-birth-and-death...
jerf
First, while presented humorously, I take it somewhat seriously as well. And one place where I disagree with it is that unless you consider WebAssembly as Javascript, it isn't true. It isn't Javascript destined to take over the world, it's WebAssembly.

You will know WebAssembly is here and in charge when the browsers compile Javascript itself into WebAssembly and take at most a 10% performance hit, thus making it the default. Call it 2022 or so.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Your comment immediately made me think of this. Highly recommended talk for anyone that hasn't seen it. It goes through a "fictional" (maybe not so much anymore) history of javascript until 2035. We are getting pretty close to javascript all the way down.

Reminds me of a 'future' talk about how JavaScript took over the world as a language even though no one actually programmed in it. This was because as long as you could transpile to asm.js, you could get native performance via JS.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

(Worth watching no matter what your background is, it's funny and informative)

Aug 10, 2017 · thousande on WebAssembly: A New Hope
Like running the Windows version of Gimp in Wine in X Window in Chrome inside Firefox on a Mac? http://imgur.com/a/wRals
Are you referring to this talk?
https://www.destroyallsoftware.com/talks/the-birth-and-death...

...this is extremely relevant to what you're saying. A talk worth watching.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Unfortunately some people seem to have missed that this talk was labeled "comedy" and "absurdist", and are actually trying to implement a JavaScript kernel. The talk just got the name wrong - it was "Electron", not "Metal".

I know it's an experimental project, but that's exactly the time to learn Ruby+Tk, or {anything}+Qt, or any of the other cross-platform toolkits. If you have to bundle an entire GUI server ("the browser") with your app to shoehorn HTML into use cases it wasn't designed for, you're doing it wrong.

> you're doing it wrong

"wrong" is subjective here. You could also argue that Ruby+Tk is "wrong" because Ruby is not as efficient as coding in C++, or C, or Assembly.

What we're talking about here, (and every time this comes up, again, and again, and again, over and over) is what shortcuts you're willing to take to get to MVP. There are a lot of Electron success stories out there, and when you're a solo dev starting out, and want to make a cross platform app, Electron is approachable and proven.

Apr 18, 2017 · psiclops on Google Earth Redesigned
Last part of your comment reminds me of the birth & death of javascript [0]
In 2014 I laughed.

No longer laughing.

Apr 07, 2017 · 1 points, 0 comments · submitted by mromnia
I found that the killer application for Javascript would be to write github repos for Javascript wrappers for typescript. Seriously, why is CSS as Javascript objects a thing? Are we so bored with the stack we've been using that we can only make things interesting by cross compiling everything? It makes me think of this video: https://www.destroyallsoftware.com/talks/the-birth-and-death...
That's exactly why we made styled-components![0]

I really dislike writing styles as JavaScript objects, so styled-components lets you write actual CSS in JavaScript. We have also added support for a bunch of editors, so you don't miss out on syntax highlighting just because of that.

I'd encourage you to check it out!

I just discovered http://typestyle.io/ which should make it much easier to write CSS in JS... Auto-completion everywhere...

I'll probably steal the approach for my own CSS-in-JS lib (j2c).

+1 for typestyle. My team and I have been using it for the past few weeks in trying to clean up a rather monolithic css project, and it's been really cool to work with.
The future is coming!
We are hurtling faster and faster towards this talk every day https://www.destroyallsoftware.com/talks/the-birth-and-death...
I kinda see the future in that talk as a worthwhile goal. If we could get all the security and interoperability advantages of the web but with native-level performance, support for multiple languages, and backwards-compatibility with legacy software, that'd be huge.
So let me get this straight: WASM is basically asm.js, but without having to go through javascript?

The future has changed! https://www.destroyallsoftware.com/talks/the-birth-and-death...

I know it's easy to dismiss it, but wasm will be the biggest deal in the world.
I'm not sure I see any interest in that tbh: what's the point of porting a memory-safe language (with its GC and runtime) onto a VM specifically designed to remove the costly memory-safety parts of JavaScript?

Plus, the concurrent model of Go doesn't really shine if you run it in a single threaded configuration (which is most likely to be the case in wasm).

I might be missing something but to me it sounds like pure hype to put go and wasm together.

>> I'm not sure I see any interest in that tbh:

Running other languages(than JS) in the browser is interesting for sure.

>what's the point of porting a memory-safe (with GC and runtime) on a VM especially dedicated to remove the costly memory safety part of JavaScript.

a) binary portability - compile and ship WASM, link/run on any machine with a WASM VM - e.g. package up to NPM and run through any V8 target

b) extra sandboxing layer for security (it's designed for browsers where you are supposed to run non-trusted code)

c) portable APIs (assuming WASM targets expose the same underlying platform like node or w/e)

d) WASM is single threaded right now but from what I've seen it's a top priority for next release to spec out shared memory threads

I agree with d), but I'm quite skeptical about the other 3.

a) and c) mean it's portable on everything a JS VM runs on, which is not that much more than what Go runs on.

b) is legit, but that really sounds like an overkill.

Well a) and c) also mean you can run in the browser - client/server code sharing and such can be a really big win.
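The binary-portability point is easy to make concrete: a wasm binary is just bytes that any engine with a WebAssembly VM (browser or Node) loads unchanged. A hand-assembled sketch of the classic two-i32 `add` module:

```javascript
// The smallest useful wasm binary: exports add(a, b) on two i32s.
// The same bytes load unchanged in any engine with a WebAssembly VM.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1; i32.add; end
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;
add(2, 3); // 5, with i32 wrap-around semantics rather than JS doubles
```

Shipping the same `.wasm` file to a browser and to Node is the portability story in a); nothing in the bytes is host-specific.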
You're confusing concurrency and parallelism.

Even with GOMAXPROCS=1 (1 CPU running Go code), it's very liberating to be able to write blocking Go code and not worry about callbacks or async/await and let Go's runtime deal with it all while you write concurrent code.

> You're confusing concurrency and parallelism.

I'm not, it's just that the most appealing feature of Go is its ability to use parallelism for concurrency with a decent overhead, which makes it straightforward to scale vertically.

> Even with GOMAXPROCS=1 (1 CPU running Go code), it's very liberating to be able to write blocking Go code and not worry about callbacks or async/await and let Go's runtime deal with it all while you write concurrent code.

IMO, async/await is a much cooler pattern than goroutines + channels to do concurrent stuff on one thread. I find it way easier to use, and less error prone. The drawback is that you need a different paradigm when you want to take advantage of parallelism.

Of course it's a matter of personal preferences, but the prevalence of async/await in different programming languages indicates that at least I'm not the only one thinking this way :).

async/await is possibly simpler to implement. At least: in Go effectively every method is async (the API doesn't show what is and is not), and that means you can't afford the inefficiencies that typical async/await implementations have, because you'd be paying them all over the place.
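For comparison, the two single-threaded styles being debated look like this in JavaScript itself; `step` is a made-up async operation used only for illustration:

```javascript
// A made-up async operation: resolves with its input plus one.
const step = (x) => Promise.resolve(x + 1);

// Promise-chain (callback) style: control flow is spelled out by nesting.
function twoStepsChained(x) {
  return step(x).then((y) => step(y).then((z) => z * 10));
}

// async/await style: same single-threaded scheduling, but it reads like
// blocking code; the runtime suspends and resumes the function at each await.
async function twoStepsAwait(x) {
  const y = await step(x);
  const z = await step(y);
  return z * 10;
}

twoStepsAwait(1).then((v) => console.log(v)); // prints 30
```

Both produce the same result on one thread; the `async` keyword marks the suspension points explicitly, whereas a goroutine-style runtime has to treat every call as potentially suspending.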
ojr
JavaScript shows no sign of dying since that talk: more people are doing desktop software (Electron), more people are using JS to make native apps (React Native), and Node.js is popular on the server. Babel also took off, which lessened the need for compile-to-JS languages and makes the future of JavaScript more JavaScript. JavaScript is more ubiquitous than ever; wasm is best suited to hardware-intensive applications and library authors, and is overkill for normal client-side JavaScript.
Those techs you mentioned are successful largely because they use Javascript as a runtime. Supposing those runtimes also eventually support WASM (with a good story of linking binaries), I see no reason why other languages won't begin to be used. Perhaps not for a few years though.
> WASM is basically asm.js, but without having to go through javascript?

Sort of, but not exactly. See section 2 of the linked article.

>WASM is basically asm.js, but without having to go through javascript?

No. It also has non-JS accessible language features, and the article lists several of them.
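For readers who haven't seen it: asm.js is just a statically analyzable subset of JavaScript, where coercions like `| 0` pin every value to an i32 so the engine can compile it ahead of time. A tiny sketch, which also runs as ordinary JS in any engine:

```javascript
// An asm.js-style module. The "use asm" directive hints to the engine
// that the body sticks to the typed subset; everywhere else it is
// still plain, valid JavaScript.
function AsmMath(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // parameter type annotation: i32
    b = b | 0;
    return (a + b) | 0; // return type annotation: i32
  }
  return { add: add };
}

const math = AsmMath();
math.add(2, 3); // 5
```

Wasm keeps this model (typed, ahead-of-time compilable) but ships the bytecode directly instead of encoding it as JS syntax, which is the "without going through javascript" part of the question.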

I really suggest watching these talks for anyone who hasn't. The first especially gets at what kinds of issues javascript has, and what might happen to it.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

And thus the era of METAL[1] begins...
This is super enjoyable thank you ^_^
And compelling. Can you really make that work securely in the kernel?
It's a nice idea, but it goes against layered security [1]. Perhaps computer-assisted proofs can make that a non-issue.

In any case, I would very much like to see an implementation of this.

Mar 04, 2017 · 1 points, 0 comments · submitted by mromnia
The Birth & Death of JavaScript.

A talk by Gary Bernhardt.

The man is funny :)
His other talks are great too! My favourite is "A Whole New World". https://www.destroyallsoftware.com/talks
Just as planned:
Dec 12, 2016 · oevi on Show HN: Web Bluetooth
This talk revolves around exactly this question: https://www.destroyallsoftware.com/talks/the-birth-and-death...

You might enjoy watching it.

Reminds me of this talk by Gary Bernhardt:
fidz
I don't want his prediction to happen, unless we have WebAssembly.
There it is, shoot yourself. https://i.imgur.com/pC6EV0v.png
I've not watched that in a while! Enjoyed it again, so thanks!

Have we not almost arrived at this dystopian javascript hella-future though?

I mean, with unikernels which run node in ring 0:

http://runtimejs.org/

And that we can run an emulated linux in browser:

http://bellard.org/jslinux/

I'm doing my best to ignore it all....

Nov 24, 2016 · nitemice on The Lua VM, on the Web
The future is coming!
This really was a fantastic talk.

Additionally: as I understand things, the methodology behind asm.js could (in principle) be applied to any JIT-ed language. Is the choice of doing this in JS only because JS is the language of browsers (i.e.: kernel-level sandboxing can easily use existing browser sandboxing code) or are there other technical considerations behind this decision?

I have watched this talk on two occasions now. Gary Bernhardt is really a great speaker. I am curious: has anyone subscribed to the screencasts at:

https://www.destroyallsoftware.com/screencasts

It's a bit pricey at $29 a month, but if all of the content is as interesting, entertaining and thought-provoking as this talk, I could see it being worth the price. I would be curious to hear anyone's feedback.

Oct 24, 2016 · kenOfYugen on Operating Systems
I believe something along the lines of runtime.js [1] is implied. The timeline suggests a reference to Gary Bernhardt's talk, "The Birth & Death of JavaScript" [2].

"The Birth and Death of Javascript" by Gary Bernhardt (probably the most talented speaker in tech) at https://www.destroyallsoftware.com/talks/the-birth-and-death...

I'd mention Bret Victor's work first (maybe Drawing Dynamic Visualizations?), but Bret cheats by writing a lot of amazing code for each of his talks, and most of the awesome comes from the code, not his (great nonetheless) ability as a speaker.

Then you have John Carmack's QuakeCon keynotes, which are just hours and hours of him talking about things that interest him in random order, and they still beat most well-prepared talks because of how good he is at what he does. HN will probably like best the one where he talks about his experiments in VR, a bit before he joined Oculus (stuff like when he tried shining a laser into his eyes to project an image, against the recommendations of... well, everyone): https://www.youtube.com/watch?v=wt-iVFxgFWk

Agreed. Was gonna post this if it wasn't up already.

After this, The Birth and Death of Javascript: https://www.destroyallsoftware.com/talks/the-birth-and-death...

He could've taken the concept further, though. I think there are real hardware simplifications you could do if the OS is a JITing VM: no memory management unit, and take out the expensive fully-associative TLBs.

This is great. I love that he goes into the future. :)

caub
this is quite outdated but it's from 2035...

I always have trouble when telling people in person to go watch this - how should I pronounce the "J" in Javascript?

* SPOILER ALERT, and seriously go watch it first *

If I pronounce "J" I do him an injustice, and if I pronounce "Y" I ruin a great surprise that comes quite a few minutes into the talk.

I always go with the "J" pronunciation. It maintains the expectation that the talk then makes a joke out of breaking. I would rather give everyone that first-time experience of hearing the "Y" pronunciation than do Gary an injustice.

Gary Bernhardt mentions an OS written in JS in his video: https://www.destroyallsoftware.com/talks/the-birth-and-death...

I'm not an expert, but the memory required to render the seemingly simplest of interfaces in HTML/JS/CSS in a browser today seems excessive. Some web sites bring a reasonably powered desktop to its knees. I'm not sure I want this problem on my desktop too. Though I guess this is just one step closer to having METAL. https://www.destroyallsoftware.com/talks/the-birth-and-death...

I feel your pain. My first computer had 4K of RAM; my second had 48K; my third, 640K. I learned to work small. It pains me to see the equivalent of Hello World taking up untold MB. But when I think about software as a business rather than an art, I have to concede that it's a very rare circumstance where RAM efficiency matters as much as I'd like. Note the way the cost of memory has declined: http://www.jcmit.com/mem2015.htm Watches now have 100,000 times the RAM that I started with. Costs are dropping by 1-2 orders of magnitude per decade. Something that is absurdly wasteful now could well be economically reasonable very soon.

The weight still hurts the user experience. All that RAM still has to get written to disk for sleep/hibernate, and loading it back from disk is in the critical path of wake from that sleep. It still fills up CPU caches, which keeps them from running at maximal efficiency (both speed-wise and power-wise). Extra weight will always matter for people who want to deliver first-rate user experiences.

Memory isn't the issue to me. Latency is.
Most of the applications I work with today do a lot, that's true, but their UI latency is often worse than it was on my 7.16MHz M68000 Amiga 500, and other things as well are just slow.

I've mentioned here several times in the past that on my laptop I can "boot" Linux-hosted AROS (so the problem is not the Linux kernel, nor X) with a custom startup script straight into a full-featured, scriptable text editor in less time than it takes to start Emacs. I'm sure it's possible to tune my Emacs setup (for example, I found out by a fluke that the default Emacs setup on Debian will wait for a DNS request to complete or time out before it starts - break your DNS setup and Emacs will hang for ages) or pick another editor (many of the other ones I've tried are either just as slow or feature-limited compared to the Amiga editor in question - FrexxEd, co-written by the same guy that started curl), but the point is that we've come to accept the kind of slow startup and UI latency that was unacceptable back then. E.g. people spent weeks tuning and trimming AmigaOS commands to make them the smallest possible, so we could make as many of them as possible RAM-resident to avoid the tiny fractions of a second it'd take the load-time linker to load them.

I'm happy we don't need to think that much about the RAM any more. But we do need to think about the latency. There's the attitude that we should just throw servers at this instead of developer time. That's fine when you can compensate by e.g. throwing more RAM in, when a program is run relatively rarely, or where paying for a beefier server in some data centre can achieve the same performance. But it's not true when latency grows to user-noticeable levels because you can't get high enough single-core performance, and that program is run a lot.

I don't disagree. I am certainly frustrated every time my phone feels sluggish, which is several times a day.

On the other hand, I've been frustrated with the slowness of computers for a long time. CPU speed, RAM, disk, everything has gotten way better. But I'm still just about as irritated, and I suspect that things are just about as sluggish. Again, I think it's an economic equilibrium. Things are fast enough that most people buy them; those of us who want things faster aren't numerous enough to outvote those who want fancier features or cooler UI bling instead. I hope this changes, but I'm not holding my breath.

Poorly written websites can bring a desktop PC to its knees, but so can a poorly written native application. That isn't an argument against leveraging web technology so much as an argument in favour of well written applications. And at least with a chromeless-browser-pretending-to-be-application you have the protection of the browser process sandbox, so an app that goes awry isn't going to take your computer down that badly.

> That isn't an argument against leveraging web technology so much as an argument in favour of well written applications

The difference is there is very little room for optimisation in most JavaScript runtimes. They don't support multi-threading and the memory is impossible to manage. "Lower" level languages always allow better performance tweaking when necessary. JavaScript allows next to none. You can't tell JavaScript "give me an array of 10 elements", or "give me an integer of that length". So no "headroom" for performance with JavaScript.

Most performance problems with websites/webapps out there are just due to the sheer nastiness of the shovelware that they embed. No need for fine-tuned debugging and profiling when the main fix is "don't accidentally run this jQuery selector 1000 times on every click". Very low-hanging fruit.

> the memory is impossible to manage.

Nonsense. There are plenty of strategies for managing memory efficiently in JavaScript.
Yes, you can't do a C++ level of allocation, deallocation, etc., but you most certainly can manage the amount of memory your code uses.

This is a non-argument. Firstly, JS does indeed have typed arrays and other tools for managing memory. More importantly, unless you're doing 3D or similar, it's vanishingly rare that the memory used by objects you've allocated in JS will be measurable compared to the memory used for DOM objects and rendering generally. Replacing JS with some other language wouldn't affect the memory usage of typical web pages.

I think the spirit of the argument is that if you have requirements for heavy computation such that you need multithreading + low-level memory management, then you probably shouldn't use this. Use the right tool for the job. This is just one of them. Plus there are alternatives to threads. Look at the state of Atom or VSCode. Much progress has been made in terms of perf, and these are not trivial applications.

> Use the right tool for the job

That's a funny argument in a thread about JS. JS started as a small language to do scripting on pages; now they are sticking it everywhere.

Because people don't actually use the right tool for the job when it comes to software. Instead, they take the tool they know, ignore the better tools they would rather not have to learn, and make the known tool do things it was never designed for. Excel is a great example of that, but it's done with programming languages as well.

mrec
Slightly nitpicky, but while lack of support for multithreading might make a particular app slow or unresponsive, I'd have thought it makes it less likely for that app to be able to bring the entire system to its knees.

> They don't support multi-threading

Yes, they do, via Web Workers. They just don't support multithreading at the level of concurrent access to the DOM, and they don't support shared memory.

Very few native libraries support concurrent access to UI widgets, and not many native applications make heavy use of shared memory for compute either. (Most native applications don't have heavy compute needs in the first place…)

> Give me an array of 10 elements

new Array(10)?

> give me an integer of that length

Uint8Array, Uint16Array, Uint32Array?

Please:

> Yes, they do, via Web Workers.

Which are not part of the JavaScript spec; they are DOM-related. And Web Workers were never meant to increase performance. In fact, in practice they don't; they often make code slower. They just guarantee that the UI thread will not block.

> new Array(10)?

Which doesn't allow any specific runtime optimization, as the array can be resized at any time.

> Uint8Array, Uint16Array, Uint32Array?

Which doesn't give me an integer of a specific size, but an array of integers.

> That isn't an argument against leveraging web technology so much as an argument in favour of well written applications

On point, as someone who has written both native and web apps. It really comes down to execution. Elitism aside, web-based apps can work for some scenarios.

> Poorly written websites can bring a desktop PC to its knees, but so can a poorly written native application.

Even well-written web applications tend to use more system resources than their well-written desktop counterparts. A few days ago, I was surprised to find that Chromium was using 6 GB of RAM, while all other processes combined (including three Emacs instances, running fancy modes) were using just 1 GB. And, no, I wasn't playing browser games or doing anything fancy in Chromium: just viewing text and images.

> And at least with a chromeless-browser-pretending-to-be-application you have the protection of the browser process sandbox so an app that goes awry isn't going to take your computer down that badly.

It would be much better to use programs that don't need to run in a sandbox in the first place.
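To make the typed-array point above concrete: `Uint8Array` and friends are fixed-size, fixed-element-type views over raw bytes, which is exactly the headroom a plain `Array` lacks. A small sketch:

```javascript
// One raw 16-byte buffer, viewed two ways over the same memory.
const buf = new ArrayBuffer(16);
const bytes = new Uint8Array(buf);   // 16 unsigned 8-bit slots
const words = new Uint32Array(buf);  // the same bytes as 4 uint32 slots

words[1] = 0xdeadbeef;  // writes bytes 4..7 in place, no allocation

// Unlike new Array(10), a typed array can't be resized or hold mixed
// types; out-of-range writes wrap to the element width:
bytes[0] = 300;         // stored as 300 % 256 = 44
```

Because the element type and length are fixed, the engine can lay the data out contiguously and skip the boxing and resize checks a generic `Array` needs.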
(To be fair, OS-enforced memory protection can be considered a kind of sandboxing too.) No disagreement about memory usage, except to say that [some web sites] are usually either poorly or unethically coded. I was on theverge.com yesterday, wondering why the network indicator was going crazy on a blog post. Pull up dev tools and it's ad code downloading megs worth of data, endlessly. Whether that's on The Verge or a bad actor in the ad code, I don't know or care, but when people talk about bad websites that's usually the primary example in my experience. See the Spotify client, Popcorn Time, Visual Studio Code. These are JavaScript. They run great. Also the Slack web client Have you checked how much memory they use? I know we have 8-core laptops with 16 gigs of RAM, but, still, it's excessive. Any idea why the memory usage is so high? Is it related to the Electron/Chromium platform, or bad application design? Would need to check, but my bet is caching of partially rendered HTML as bitmaps, as well as a full JavaScript JIT environment. It's just a high-footprint environment. I don't have an 8-core, or 16 gigs of RAM. I'm talking about my desktop. I have a 4-core i5 from 2013. Spotify has 173M in res. Visual Studio Code with 10 files open has 99M in res. I have Firefox open with about 125 tabs, 6 terminal windows, 2 PDF files open, VLC open but not playing, IRC in about 15 channels. Load average is 0.60. Load across all CPUs is about 10%. 8 gigs used out of 12 gigs of RAM, with Firefox using 5.1 gigs. It is pretty excessive, but maybe WebAssembly will find a real home in making applications that do well on both native and web. WebAssembly will hopefully end JavaScript's stranglehold on the web. The justification for using JavaScript for server/desktop/mobile applications seems to essentially be that a certain large group of programmers only know JavaScript. 
Udik I guess you tend to reuse the technologies you're familiar with, especially when they are very well proven for solving a very complicated problem that requires a lot of expertise (I'm talking about fancy UIs, of course). Which is also why I think the emphasis on JS is wrong: the main selling point of these technologies is NOT JavaScript, it's HTML5/CSS. JavaScript is just a (very handy indeed) scripting language like any other: frankly, the logic and requirements of most applications around are not complex enough to justify using anything more solid or complicated than that. Just today I was contacted by a former colleague, a Java developer, asking advice on using Electron to develop a quick desktop application. On the other hand, there is a long list of languages that transpile to JavaScript. So where are the masses of serious C#/ Java/ <pick-your-language> developers using transpilers to write the logic of their web applications? > So where are the masses of serious C#/ Java/ <pick-your-language> developers using transpilers to write the logic of their web applications? I think it is catching on with C++ game developers, who want to be able to run the same codebase natively or in the web browser. The Unity 3D game engine has built-in support for this using WebGL, for example. I don't know if CLR or JVM languages have been successfully transpiled to the browser using asm.js / WebAssembly (it wouldn't be very efficient at all, so I doubt that will ever be popular). Agreed, I'm so tired of being forced to write JS. I'm not sure. All the browser APIs will still use the JS object model. If we look at the situation on the JVM or CLR, it seems like languages specifically designed for those object models (even if adapted from other languages) have been more successful than attempts to directly port existing languages. e.g., Scala, Clojure, F# or Powershell vs. Jython, JRuby, C++/CLI or IronPython. 
My bet would be that languages specifically designed for the JS ecosystem (whether JS itself or something like CoffeeScript or TypeScript) will continue to be the most popular for writing most of an app, with some apps dropping down to C or C++ or Rust compiled to wasm for a subset of performance-critical code. I'm sure there'll be a mix. But consider things like Opal (Ruby => javascript), and it seems like there's at least some people who very much want to be able to work in a single language, but for that language to not be javascript... I'm sure once webassembly is ready, that'll at least get more popular. Too much wishful thinking during the second half of the talk, though. Also, while fancy runtime systems can improve the performance of dynamic[0] languages, it doesn't come for free: the price to be paid is the loss of elegance. For instance, a JIT compiler could inline a virtual method that seems not to be overridden anywhere, but if later on it turns out that the virtual method was overridden somewhere, the “optimization” has to be undone. How can anyone in their right mind trust a language that requires such dirty implementation tricks to achieve decent performance? [0] By which I mean “less amenable to static analysis”, regardless of whether the language has a static type system. For instance, Java and C# are dynamic languages in this sense. > How can anyone in their right mind trust a language that requires such dirty implementation tricks to achieve decent performance? Isn't this a rather broad brush with which to paint all JIT language implementations, including Java, C#, and, say, PyPy? it's interesting that C# has recently moved away from JIT and reflection for client apps (.NET Native), though. Could you name a JIT compiler that doesn't do this kind of thing, yet offers performance comparable to AOT-compiled languages? (FWIW, I'm not saying it's impossible. 
It's perfectly possible, but you'd need a source language that offers much better static guarantees than the typical language that a JIT compiler is written for.) --- Sorry, can't reply to you directly, because “I'm submitting too fast”. So my reply goes here: > for the simple reason that type systems cannot capture all relevant runtime context. Type checking isn't the only kind of static analysis out there. And there's no need to use statistics to optimize anything at runtime when your ahead-of-time compilation step already emits optimal target machine code. > Java is a good example here, since it's strongly statically typed. Java is as dynamically typed as it gets: instanceof, downcasts and reflection, all conspire to reduce the usefulness of static type information to zero. > By your reckoning, all greedy optimizations that CPUs do like branch prediction and prefetching are also similarly 'inelegant', because they can be wrong and require rolling back. Yes, indeed. It's more elegant to know beforehand what exactly you have to do, and then do just that and nothing else. > that _doesn't_ do this kind of thing Why does that matter to anyone? > you'd need a source language that offers much better static guarantees than the typical language that a JIT compiler is written for Java and C# are both statically, strongly typed languages where JITs are the dominant implementation. Almost by definition, it's impossible to write a JIT compiler that outperforms AOT compilation without looking at runtime data, because AOT compilers have a lot more time to look for difficult static optimizations. The reason JITs can keep up is because they have access to information that an AOT compiler does not. > And there's no need to use statistics to optimize anything at runtime when your ahead-of-time compilation step already emits optimal target machine code. This is simply untrue. For example, it's not possible to statically determine whether a function should be inlined or not. 
However, a JIT can see that it's used in a hot loop and dynamically inline. For any language, no matter the type system, runtime information will always be a superset of compile-time information. There will always exist optimizations in a JIT that aren't possible in an AOT compiler. > Java is as dynamically typed as it gets: instanceof, downcasts and reflection, all conspire to reduce the usefulness of static type information to zero. Idiomatic Java code doesn't use these features heavily. Just because it's possible to wipe out type information doesn't mean that the vast majority of code that an AOT or JIT compiler sees won't be strongly typed. > For example, it's not possible to statically determine whether a function should be inlined or not. MLton has absolutely no problems inlining functions, even higher-order functions, at compile time. This is only difficult in languages with virtual methods, because they can be overridden anywhere. If anything, that's an indictment of virtual methods, not AOT compilers. > However, a JIT can see that it's used in a hot loop and dynamically inline. What if it's a virtual method call that's known to be overridden in several places? You can't inline it, even if it's in the middle of a hot loop. > For any language, no matter the type system, runtime information will always be a superset of compile-time information. Runtime information is always anecdotal, specific to one particular run of a program, so... > There will always exist optimizations in a JIT that aren't possible in an AOT compiler. ... for every “optimization” a JIT can perform, there will always exist a program for which the “optimization” will have to be rolled back after it has already been performed, because it turned out to be unsound. > Idiomatic Java code doesn't use these features heavily. Language implementations must work correctly whether you write idiomatic or unidiomatic code. 
> Just because it's possible to wipe out type information doesn't mean that the vast majority of code that an AOT or JIT compiler sees won't be strongly typed. Most code I write in Python could be given static types too. That doesn't make Python a statically typed language. And “strongly typed” doesn't really mean anything. > MLton has absolutely no problems inlining functions, even higher-order functions, at compile time Of course, but how does it know which functions to inline? If you inline everything, then you will blow through your cache. > What if it's a virtual method call that's known to be overridden in several places? You can't inline it, even if it's in the middle of a hot loop. That's not true--a JIT could optimistically replace with a concrete realization. > Runtime information is always anecdotal, specific to one particular run of a program, so... That's a benefit. No matter what AOT compiled code you have, it is possible to speed up execution if you know what code paths you will take. > ... for every “optimization” a JIT can perform, there will always exist a program for which the “optimization” will have to be rolled back after it has already been performed, because it turned out to be unsound. Yes, but so what? As long as it improves performance in the average case, and the worst case is bounded, then that is a net win. You can equally well write deliberately obfuscated code that an AOT compiler has trouble with. > Language implementations must work correctly whether you write idiomatic or unidiomatic code. Implementation is correct. Only reflection is slow. If you don't want that, don't write reflection. > Of course, but how does it know which functions to inline? Small functions and higher-order functions are the most natural candidates. (The two categories greatly overlap in most cases.) > That's not true--a JIT could optimistically replace with a concrete realization. You'd have to roll back an unsound optimization in the middle of a hot loop. 
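The back-and-forth above about optimistic inlining and rollback can be illustrated with a toy model. This is emphatically not how V8 or HotSpot is implemented internally; it's a sketch of the control flow of a one-entry call-site cache, with all names (`makeCallSite`, the shape check via `constructor`) invented for illustration:

```javascript
// Toy model of an optimistic call site: assume the receiver shape seen
// last time will repeat, and fall back (the "rollback") when that
// assumption breaks. Real JITs do this at the machine-code level;
// this sketch only mimics the control flow.
function makeCallSite(lookup) {
  let cachedKey = null;
  let cachedFn = null;
  return function call(obj, ...args) {
    const key = obj.constructor;     // stand-in for a hidden class/shape
    if (key === cachedKey) {
      return cachedFn(obj, ...args); // fast path: assumption held
    }
    cachedKey = key;                 // miss: "deoptimize" and re-cache
    cachedFn = lookup(obj);
    return cachedFn(obj, ...args);
  };
}

class Circle { constructor(r) { this.r = r; } }
class Square { constructor(s) { this.s = s; } }
const areas = new Map([
  [Circle, (o) => Math.PI * o.r * o.r],
  [Square, (o) => o.s * o.s],
]);

const area = makeCallSite((obj) => areas.get(obj.constructor));
console.log(area(new Circle(1))); // caches the Circle path
console.log(area(new Circle(2))); // hits the cache
console.log(area(new Square(3))); // assumption broken: re-caches
```

This is exactly the trade both sides are arguing about: the monomorphic case gets a cheap check-and-call, at the cost of a slower, more complex path whenever the optimistic guess turns out wrong.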
I'm pretty sure that's not what you want. > it is possible to speed up execution if you know what code paths you will take. That knowledge can be encoded statically in many cases, if only you used the right languages. > As long as it improves performance in the average case, and the worst case is bounded, then that is a net win. This is only the case when your program wasn't close to optimal to begin with. > You can equally well write deliberately obfuscated code that an AOT compiler has trouble with. Yes, but languages amenable to static analysis will actively get in your way if you try to write such obfuscated code. The static analysis either tells you that your program is gibberish, or outputs nonsensical gibberish of its own. So the path of least resistance is to write code that the static analysis knows how to optimize. Which is not the case in dynamic languages (including pseudo-static ones like Java). > Only reflection is slow. If you don't want that, don't write reflection. So, basically, you're telling me to ditch Java's entire library and framework ecosystem? > Small functions and higher-order functions are the most natural candidates. (The two categories greatly overlap in most cases.) But then you're just guessing. Isn't that also inelegant? Let's play devil's advocate: how do you decide the cutoff on function size for inlining? Well, you would profile a bunch of programs with various cutoffs... now all you have is a heuristic, and MLton will inline some functions that it shouldn't, and it will fail to inline other functions that it should. It will do worse than a JIT at this, because the JIT has more information. > That knowledge can be encoded statically in many cases, if only you used the right languages. Sure, but you won't ever succeed in encoding all of it, which is why runtime techniques can have a place. > This is only the case when your program wasn't close to optimal to begin with. 
That's simply untrue, and you can prove that formally -- given a machine M and a program P that produces outputs on a set of inputs I, it is always possible to come up with a program P' that produces those outputs with fewer steps on some subset of I, in return for taking more steps on the rest of I (except in the trivial case where the running time is completely independent of input). You can view a JIT as iteratively replacing P with P' after it sees which inputs I are most common, and this is true no matter what M, P, or I are. In particular, there exists a version of P' that is faster than the statically optimized version of P on your program's input. > Yes, but languages amenable to static analysis will actively get in your way if you try to write such obfuscated code. I don't see how that isn't equally applicable to writing code to fool your JIT. > So. basically, you're telling me to ditch Java's entire library and framework ecosystem? Framework code doesn't generally run inside your inner loops, so I don't see how that should affect either your AOT or JIT compiler much. > In particular, there exists a version of P' that is faster than the statically optimized version of P on your program's input. Sure, but I'm interested in what the program does on all meaningful inputs, not a specific one. Otherwise, I'd just precompute the answer and hardcode it. > I don't see how that isn't equally applicable to writing code to fool your JIT. AOT compilers are supposed to provide feedback to the programmer about what the program means (e.g., inferred types, type errors). JIT compilers are not. > I don't really understand why optimistic heuristics bother you so much. Um, because they can be wrong, and then you need to fix errors, which makes the system more complex? > Seems like you just have an aesthetic preference. Yes, for simplicity, and for thinking before writing code. 
> Um, because they can be wrong, and then you need to fix errors, which makes the system more complex? Yes, but I don't understand why compiler complexity concerns you, so long as the whole thing works. AOT compilers are also extremely complex, and are also full of heuristics. You seem really hung up on the fact that an optimization can be rolled back at some point, but why should you care? JIT optimizations can be 'wrong' in the same way that caches can miss. Is it the right engineering solution to eliminate caching, over some belief that one should never be 'wrong' anywhere in a program, even though the eventual answer is always correct? This might matter in a system with real-time performance demands, but then you should be equally concerned about the garbage collector, for example. > Yes, but I don't understand why compiler complexity concerns you, so long as the whole thing works. Because I find it easier to trust simpler systems than complex ones. > AOT compilers are also extremely complex, and are also full of heuristics. Yep, those heuristics are annoying too. (But less so than the ones JIT compilers use, because at least they don't involve temporarily breaking my program.) > unless you've profiled it and see this having a real adverse effect on overall performance? How many times do I have to repeat that what annoys me is the excessive complexity? > Let's play devil's advocate: how do you decide the cutoff on function size for inlining AOT compiler writers have been tuning inlining heuristics for more than 40 years. Sure, sometimes you have to help the compiler with annotations or PGO, but in the large majority of cases things just work. In fact, AOT compilers can deal much better with the massive code explosion due to aggressive inlining than JIT compilers, which have a very tight time budget for optimisations. > trust a language that ... Where's the problem? JIT compilers work. 
Most high-level languages require compiler tricks for achieving performance, from Scala to Haskell to Prolog. It's useful to distinguish between (1) having a clean, easy to understand semantics and (2) having a fast implementation. Use all the hackery in the world to get your language fast, as long as its abstract semantics is easy and canonical. > Most high-level languages require compiler tricks for achieving performance, from Scala to Haskell to Prolog. To make things perfectly clear: I'm not against optimizations being performed automatically by compilers or runtime systems. What I'm against is unclean designs: deliberately performing an unsound optimization and then rolling it back is an unclean design. > Use all the hackery in the world to get your language fast, The language implementation is a program itself, and I don't have any good reasons to trust a hackish language implementation any more than I trust other hackish programs - that is, not at all. > as long as its abstract semantics is easy and canonical. Hah! This thread is about JavaScript. > is an unclean design. It's not unsound; otherwise you'd get incorrect results. You could say the design is wasteful, because you optimise and then throw away the optimisation. But it's hard to do better for some kinds of languages. > trust a hackish language implementation I agree. And indeed JIT compilers are hard to get right. But in practice even JIT compilers are much higher quality than applications: ask yourself, how many of the bugs in your code turned out to be compiler bugs, vs how many were ultimately your mistakes? > It's not unsound. Otherwise you'd get incorrect results. You could say it's wasteful, because you optimise and then throw away the optimisation. The optimization is unsound. If it weren't, it wouldn't have to be rolled back occasionally. If you're talking about the combination of the optimization and the rollback mechanism, it's not unsound, but it's inelegant. 
A runtime system designed this way only understands your program in a statistical sense (based on concrete execution profiles, which may vary from one run to another), never with the full certainty that static analyses (type checking, abstract interpretation) can give you. > how many of the bugs in your code turned out to be compiler bugs, vs how many were ultimately your mistakes? Of course, most were my mistakes. But the very reason why those bugs made it into the final executable is the lack of powerful static analyses in the first place. Curiously enough, when I use languages that make static analyses possible, I write programs with fewer bugs and they perform better without relying on fancy runtime system tricks. --- Sorry, can't reply to you guys, because “I'm submitting too fast”. So my replies go here: @mafribe: > That's an orthogonal issue. It's not. Static analyses gather valuable information that can be used to emit efficient code. > More powerful static analysis is also more time-consuming. So perform it ahead of time! > One of the design goals of Javascript JITs is to make web-pages as responsive as possible. That rules out complicated static analysis. Of course, a browser can't spend much time statically analyzing JavaScript programs, but programs can be statically analyzed (gasp!) before they're deployed. --- @smallnamespace: > Why does that matter to anyone? Because this implementation technique is unnecessarily complex, and a far simpler alternative exists: Know beforehand what your program has to do. Think before you write code. > Java and C# are both statically, strongly typed languages where JITs are the dominant implementation. Their type systems can be easily subverted, so they're not “strongly typed” in my book. --- @smallnamespace: Oops, sorry, I accidentally swapped my two replies to you: this one and https://news.ycombinator.com/item?id=12481956 . > The optimization is unsound. The JIT compiler is sound with respect 
to the source language's semantics. That's the only thing that matters for the programmer. > but it's inelegant. Elegance is in the eye of the beholder. I was blown away when I first encountered JIT compilers. > lack of powerful static analyses That's an orthogonal issue. More powerful static analysis is also more time-consuming. One of the design goals of Javascript JITs is to make web-pages as responsive as possible. That rules out complicated static analysis. > it's not unsound, but it's inelegant. A runtime system designed this way only understands your program in a statistical sense (based on concrete execution profiles, which may vary from one run to another), never with the full certainty that static analyses (type checking, abstract interpretation) can give you. This is a false dichotomy. You can always build a JIT that uses runtime statistics to speed things up, even in languages that are quite amenable to static analysis, for the simple reason that type systems cannot capture all relevant runtime context. Java is a good example here, since it's strongly statically typed. By your reckoning, all optimistic heuristics that CPUs do like branch prediction and prefetching are also similarly 'inelegant', because they can be wrong and require rolling back. Optimistic heuristics have a long history in computer science, and IMO it seems strange to single out one particular use case as being particularly evil. No one likes that, but 1) HTML/CSS/JS is still the only truly cross-platform (mac/windows/linux/web/ios/android/etc.) UI platform/ecosystem in 2016, and it will probably stay that way in the foreseeable future because OS makers love their walled gardens. 2) An app's memory efficiency is not a top 10 priority of an average solo-or-small-team developer's concerns, because that's not what most users pay for. Electron/NW.js apps will only become more common, so I hope things will improve with asm.js/WebAssembly/etc. 
It would be interesting to see the evolution of memory usage for the popular platform du jour from the earliest decades until today. My uneducated guess says it will be an exponential. Python would like a word with you about point 1) woah I use a network simulation software inside of lubuntu inside of a VirtualBox inside my Mac. VirtualBox is a stream of headaches, and you suffer a big performance hit. What I wouldn't give for the software to serve an http/css/js interface so that I could run it with docker or natively instead. Make sure that the virtual machine has "paravirtualization interface: kvm" setting for linux guests, that the guest tools are installed and, if you really use the network, adapter type: virtio-net doesn't hurt. And also, that you have enough RAM. Then, virtual machines in VirtualBox are fine. woah Thanks! Good samaritan :) Honest question – how do you build cross-platform application UI in python? Kivy. eyko I'm not familiar with the Python ecosystem but I imagine there are bindings for GTK, Qt, or other UI toolkits. The same applies to most (popular) languages. The claim that HTML/CSS/JS is the "only" truly cross platform stack is simply not true. The main hurdle that other languages faced was not having a native UI. For example, GTK on OSX or Windows did not feel native. Key bindings were often not native. Similar story with Java (was it Swing?). The HTML/JS/CSS combo has exactly the same issue (it's not comparable to a native GUI, Cocoa or whatever it is). > The main hurdle that other languages faced was not having a native UI. Electron faces this same hurdle, but with a twist. It just doesn't care; it picks a rendering target (the web) and simply uses it everywhere. Perhaps that's the approach Qt and others need to take as well: stop trying to match Apple's UI on Apple, and Windows' UI on Windows. > stop trying to match Apple's UI on Apple, and Windows' UI on Windows. Great idea, but make sure it at least looks/feels good. 
I remember Swing stuff from the late 90s and... IMO the primary issue wasn't so much that "it doesn't look like Windows" but that ... it was a very poor experience. Copy/paste/keys - yeah, that's an annoyance, but if the UI is clean, friendly, easy to understand and be productive on, people can look past the differences of the native host OS. Swing (and GTK and others) really don't provide a 'better' UX (imo). corv Python on iOS? It's not pretty but: https://kivy.org/docs/guide/packaging-ios.html corv Interesting. Python can work on desktop, but its solutions for mobile are iffy at best. Java as well for basically anything but iOS. C# might even cover that now. Xamarin, I reckon?[1] Well, I have yet to see Python make a decent cross-platform application on all of those platforms, especially iOS and Android. I love Python, but people overstate its benefits and effectiveness. Loic Python is wonderful for business logic and scientific computations. HTML/JavaScript is wonderful to make beautiful interactive user interfaces. So at the moment, I am using PyQt/PySide and load my HTML/JavaScript/ReactJS GUI in a webview with a simple bridge object between my Python code and the JavaScript code. As soon as I need native interactions with the file system, etc., I am using Python, which is robust and proven (and I am used to it). For example, if I need a dialog to save a file, I just use QFileDialog. At the end, because of a clear separation between the GUI and the computation/system interactions with the bridge object, I just need a different bridge object to have everything running fully online. One point is of course that I am not developing for phones and tablets; I package everything with pyinstaller for the good old desktop users. There's also Qt, but they have been moving in the web direction too. And don't forget JavaFX Do people use that? On point (1) you've kind of loaded the comparison by including web in that list... 
It's a common business scenario: start out as a web app, then as business picks up let's start providing native apps. Or the other way: start out with a native app on the platform we most care about + a web app for "access from anywhere". Sep 01, 2016 · rybit on Hosting My Static Site +1 on screw the absolutely necessary caching. :) I couldn't agree more: too much tooling can abstract away what is usually simple, just serving some HTML. I think that the whole API economy and saas/paas stuff really has to be evaluated carefully. You have business considerations around lock-in, time to integrate vs time to build your own, etc. I think that they work really well when you're building something simple, but there is a range of the size of your site where it is more of a hindrance. The decision to use a service should be about what it gives you, not because it is cool. Aside: I have totally been that engineer that has made something "clever". I am sure there are other engineers that curse me for what I thought was a great tool b/c I looked at the site for 0.1 seconds (sorry!). I really wanted to address the talk about static. Let's take the instance of a blog (like any of the heroku/rails tutorials out there). Yes, you must have a canonical place for the copy to live. Be it in a db or flat files on disk or in your git repo. But you don't need to have the actual request go to the origin for that info and then jam it through some jinja/unicorn/etc template, just to render a silly article to the end user. When you write that article, you know what that page is going to look like, _why dynamically generate it_? This is the way that static can work: generate all the versions of the content and rely on JS to do magic on the frontend (https://www.destroyallsoftware.com/talks/the-birth-and-death...), removing the whole call back to the origin db for what is essentially static content. 
This obviously is going to be faster than a DB query + template render + network traffic, as well as more secure. It is an HTTP GET, a hard exploit vector. Now does this extend into the arena of apps (React, Angular, newest fanciest JS framework)? The actual assets are also static, no? They should be served exactly the same as the HTML we have. Then it is up to the JS to query whatever service/API you want and automagically generate some HTML. The big thing is that services like Wordpress/Drupal/Rails have made it very easy for people to build sites in a classic LAMP stack, but that is kinda flawed in a lot of ways. Wordpress's plugin system essentially lets you remotely run your code on their servers. That is a dangerous game to play. All to do something that doesn't even need a server in the first place. Why risk it when you don't need to? And you'd get some nice improvements if you don't. People shouldn't even know what a LAMP stack is to make their business site. Now is this approach right for every site? Nopezzzz. I don't believe in silver bullets, but there are a lot of sites that fit this mold. And it is a different approach to building your site out. Either way - sorry to hear about Capistrano. Shell scripts ftw (though I have some that are terrible out there too). This (funny and accessible, don't worry) talk by Gary Bernhardt is what helped me understand asm.js for the first time: https://www.destroyallsoftware.com/talks/the-birth-and-death... Aug 04, 2016 · Analemma_ on Visual Studio Code 1.4 The future as foretold by "The Birth and Death of JavaScript" (https://www.destroyallsoftware.com/talks/the-birth-and-death...) gets closer every day :) I honestly wouldn't be surprised if someone like MS announces a new open source "JS to efficient native machine code" compiler one of these days. MS Chakra JS VM is already open source: https://github.com/Microsoft/ChakraCore Like all high-performance JS VMs, their JITs already emit efficient machine code. 
If Google, Apple or MS could get JS running any faster, they would've. Ever watched "The Birth & Death of JavaScript"? https://www.destroyallsoftware.com/talks/the-birth-and-death... It's a future prediction of how "javascript lost but programmers won". Just wait until after the nuclear world war in the early 2020s. Jul 23, 2016 · nhaliday on Rust: The New LLVM Speaking of which, https://www.destroyallsoftware.com/talks/the-birth-and-death... Great talk, Gary is a cool dude. May 04, 2016 · tambourine_man on Node OS THE BIRTH & DEATH OF JAVASCRIPT Great talk to watch. We should just let V8 run in the kernel and do away with system calls. https://www.destroyallsoftware.com/talks/the-birth-and-death... Someone else linked to a talk that mentioned removing all the layers in some theoretical architecture called METAL (this is an old talk), basically running asm.js (again, the talk is old) directly through the kernel and even removing overhead that kernels need for making native code safe (such as the Memory Management Unit), and as a result it would run faster than normal native code. https://www.destroyallsoftware.com/talks/the-birth-and-death... The major thing to be gained from all this, then, is software that can run fast but not have to be recompiled for all the different systems and hardware. Your comment reminded me of this highly entertaining talk - https://www.destroyallsoftware.com/talks/the-birth-and-death... Although tongue in cheek, I think it gives some food for thought. I feel like WebAssembly is to asm.js what the modern JS profession is to the old follow-your-cursor effect on webpages - it becomes something to take seriously and use, and having done a bunch of porting things with Emscripten, the idea of a browser within a browser doesn't sound as crazy as it used to! jerf To be honest, WebAssembly isn't really JavaScript anymore. asm.js was, albeit only sorta-kinda-just-barely (but in an important way), but WebAssembly isn't. 
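For readers who haven't run into asm.js itself: it is a strict subset of JavaScript in which coercions like `|0` act as int32 type annotations, so a validating engine can compile the whole module ahead of time, while any other engine just runs it as ordinary JavaScript with identical results. A minimal sketch:

```javascript
// A minimal asm.js-style module. The "|0" coercions mark these as
// 32-bit integer operations; an asm.js-aware engine can compile the
// module ahead of time, and everywhere else it's just plain JS.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // coerce arguments to int32
    b = b | 0;
    return (a + b) | 0; // result is also int32
  }
  return { add: add };
}

const mod = AsmAdder(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3));          // 5
console.log(mod.add(0x7fffffff, 1)); // wraps to -2147483648
```

The int32 wraparound in the last line is the point: the module's arithmetic is fully typed, which is what let engines compile asm.js to near-native code and what WebAssembly later standardized properly.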
There's a reasonable case to be made that in 20 years "everything" will be WebAssembly, but we won't be calling it JavaScript, thinking of it like JavaScript, or using it like JavaScript. In the long term, this is the death knell for JavaScript-as-the-only-choice. JavaScript will live on, but when left to fend for itself on its own merits, it's just another 1990s-style dynamic scripting language with little to particularly recommend it over all the other 1990s-style dynamic scripting languages. But JavaScript programmers need not fear this... it will be a very long, gradual transition. You'll have abundant time to make adjustments if you need to, and should you not want to, there will still be JavaScript jobs for a very long time.

You act like JavaScript's only upside is the fact that it's required in the browser. IME the opposite is true. I'm seeing companies flock to it outside of browser contexts in areas where "code reuse" or "isomorphic/universal" style programs aren't even possible.

jerf
It's not that it's the only upside. It's that once out of the browser, it isn't really a standout language. For instance, if you're going to have a "fair fight" out there, one can't help but notice that Python already has everything that we're standing around waiting for in ES6, plus the next couple of iterations. And that's just Python. You should also check into Perl, Ruby, Lua, and PHP, and that's without straying even a little outside of the "1990s dynamic scripting language" field, to say nothing of what you can find if you leave that behind. It just isn't that impressive of a language once you remove the browser prop. It isn't necessarily bad, or at least no worse than some other popular languages, but there's nothing uniquely good about it in the greater language landscape. To be honest, anyone who thinks that JavaScript does have some sort of unique advantage needs to get out more and learn a few more languages.
Even Python, which you'll find goes very quickly if you already know JS. JavaScript is very, very not special. Again, since people seem to confuse these things, that does not make it bad, but it's very not special. Very boring, middle-of-the-road scripting language that is, if anything, well behind the other ones in features because of its multi-head, no-leader-in-practice development model.

But JS isn't just a "worse Python" either. I've gotten around when it comes to languages, from business BASIC to C++ to Go to Python, PHP, Ruby, JS, Lua, and Lisp. Having spent a non-trivial amount of time in each, JS has by far one of the best ecosystems I've ever seen. See, I hate talking about languages because it's hard to define what a language even is. Is it purely the syntax? Is it syntax + standard library? Or is it the whole set of syntax + libraries + ecosystem + idioms? From a purely syntactic point of view, JS is lacking some things, and while they are getting fixed, it's taking longer than most would like. And I agree that in this aspect JS is currently "mediocre" at best. From a "whole ecosystem" point of view, JS is wonderful. It's fast, secure enough to run arbitrary code from anyone in a browser, has a stupidly huge set of libraries, an "idiomatic" style which works very well for some problems, is almost literally everywhere and on everything, and has multiple competing implementations that help drive performance and reduce bugs. Yeah, it's got its quirks (and in JS's case, a lot of them), but every language its age and older does. Now if there were some way to magically take all of the "other" parts from JS and apply them to another language, you'd have an overnight success, but the fact is that the language syntax is such a small part of what a language truly is.

You should check out runtime.js [1] if you haven't. Instead of duktape it uses V8. I'd like to see an equivalent project using SpiderMonkey so that C/C++ code could be compiled into efficient JS.
Check out this talk if you haven't already: The Birth & Death of JavaScript [2]

Cool. "We took a dynamic language and made it fast(er again)". And you're gonna apply that to the darkest corners of ES6? Whee, language-theory-nerd paradise! I can't wait for the compilers targeting that using all sorts of really odd idioms that in effect work like fuzz testing.

Jan 13, 2016 · lumpypua on Elm in the Real World
Any commentary on "The Birth and Death of Javascript"? https://www.destroyallsoftware.com/talks/the-birth-and-death... As far as I can tell, JavaScript the runtime is a juggernaut that there's no stopping. If a platform gets popular, tooling fixes itself.

It does not fix itself. It gets fixed through a lot of hard work by many people who shouldn't be hidden behind the curtain. But you are right that if a platform is popular enough, it will usually attract people who will do this work.

Personally, even though I also have quite some webdev experience, I am wishing that mobile native development wins. But on HN that is a sure way to be downvoted.

I'm personally hoping the future holds some way to blend the openness/interoperability of the web with the performance of native apps. Maybe React Native, maybe WebAssembly, etc. I hope that pure mobile development doesn't win, because the mobile app ecosystems are way more controlled than desktops/servers, and if everything went Android/iOS, it would be a major loss for freedom.

This whole conversation just makes me smile at how much truth and more is coming out of Gary Bernhardt's The Birth and Death of JavaScript[1].

This library reminds me of some of the "future" introduced in this (satirical, yet insightful) talk: https://www.destroyallsoftware.com/talks/the-birth-and-death... Discussion: https://news.ycombinator.com/item?id=7605687

This is extremely cool, by the way. Conceptually, this is a reasonable line to draw in the sand.
Minified JavaScript should be thought of as equivalent to assembly language or Java bytecode[1], and if the policy is not to ship compiled binaries without source available, minified JS should fall into that category.

[1] https://www.destroyallsoftware.com/talks/the-birth-and-death... is funny, but the honest truth is that it's likely the future of web technologies, as JavaScript is so deeply entrenched in the fiber of the web standards and the practical implementation of the browsers.

Jun 07, 2015 · wz1000 on HTML is done
Isn't this problem solved by having HTML/JS/CSS as a compile target, which is what we are heading toward now with languages and technologies like CoffeeScript, PureScript, Elm, ClojureScript, GHCJS, Emscripten, etc.? Even on the backend, in the end you have to run native machine instructions. HTML/JS/CSS substitute for that on the front end. Of course, x86_64 may be a better compile target, but the advantage of the existing languages over any new system is that a lot of work has already gone into making them fast and backwards-compatible, all browsers already implement them, and there are a lot of things that can target them now.

> HTML/JS/CSS as a compile target
That only works if your abstractions don't leak and your libraries are stable enough that you can avoid having to debug at the lower level anyway.

Personally, to me that's "ugh." HTML and CSS as a compile target? What's the lowest-level primitive in HTML? A div? A span? Ew. It should be compiled to something that has access to primitives such as lines, polygons or pixels. Compiling to JS is OK, as that language is as close to the metal as you can get on a VM anyway. So long as performance isn't affected, it's really no different to the developer. Still, the platform would be more elegant if it didn't compile to some intermediate high-level language.

Access to primitives such as lines, polygons or pixels, you say? Canvas, SVG, and WebGL fit that bill. Sure.
And I know about these. The original poster was talking about languages with HTML+CSS+JavaScript as compile targets, not canvas, SVG, or WebGL. I'm simply addressing his comment. Either way, WebGL/canvas operate as child elements in an HTML page, you still need to use JavaScript, and SVG isn't GPU-accelerated. It's still, overall, an ugly mess. But definitely, with improvements, canvas, SVG, and WebGL are all candidates for my aforementioned "next steps."

How is it that much different from using OS APIs to do common tasks like text, windows, buttons, scroll bars, etc.? If you really want to do pixel-level stuff, there are canvases and images and SVGs.

How would I replace HTML in the browser? Create a new rendering API in canvas? Can it be done? Sure. Will it be pretty? Not so sure. Imagine your operating system can only compile one single language: Perl. And with Perl the only way you can render anything on the screen is with a Qt API and Qt UI primitives. Programmers can still do anything within this ecosystem. Technically you can have Perl as a compile target for any other language. Let's also pretend that Qt has this little UI element called canvas with an API allowing you pixel-level control. While technically you could do anything in a platform like the one I described above, I'm sure you can easily see why it's still bad.

pron
You need HTML for search.

_RPM
> Compiling to js is ok as that language is as close to the metal as you can get on a vm anyway
JavaScript is as low as the metal you can get in a Virtual Machine? hmm...

asm.js carries you rather close to bare metal. Of course you can go lower, but there will be sacrifices. Brendan Eich actually addressed this same topic in response to some other thread I started a while back. I quote him below: "Apart from syntax wins, you can't get much lower-level semantically and keep both safety and linear-time verifiability.
Java bytecode with unrestricted goto and type confusion at join points makes for O(n^4) verification complexity. asm.js type checking is linear. New and more concise syntax may come, but it's not a priority (gzip helps a lot), and doing it early makes two problem-kids to feed (JS as source language; new syntax for asm.js), which not only costs more but can make for divergence and can overconstrain either child. (This bit Java, pretty badly.)"

Actually, we should abandon the ring concept altogether for more promising security models.

frik
The suggested solution is to have process isolation implemented in software - namely asm.js-enabled JavaScript virtual machines embedded in a Linux kernel, which saves you from needing hardware isolation, reducing overhead. Gary calls this idea "METAL". I found few resources about the project, but there is a discussion on Reddit: http://www.reddit.com/r/compsci/comments/25w7vt/javascript_b... And we had the discussion on HN too: https://news.ycombinator.com/item?id=7605687 Nevertheless an interesting topic, which doesn't deserve the downvote my parent got.

Have you seen Gary Bernhardt's THE BIRTH & DEATH OF JAVASCRIPT? I saw this a few months ago, and it never gets any less insane (in a good way) to watch. Just following Atwood's Law, I suppose.

Apr 21, 2015 · derefr on Going “Write-Only”
Gary Bernhardt's "The Birth and Death of Javascript"[1] is a rebuttal to this, I think. Effectively, imagine that a compiler, put in static-compilation mode, would link everything required to run a piece of code (the relevant windowing toolkit, all the system libraries, copies of any OS binaries the code spawns, copies of any OS binaries the code sends messages to over an IPC bus, etc.) into something resembling a Docker container, or a Unikernel VM image.
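Eich's point above about linear-time verification is easiest to see in a tiny hand-written asm.js module: every parameter and expression carries a type annotation (`|0` for int, unary `+` for double), so the validator never has to infer types across join points. This is only an illustrative sketch; real asm.js is normally emitted by a compiler like Emscripten, and the names here are made up. Because asm.js is a subset of JavaScript, the module also runs unchanged in any JS engine.

```javascript
// A minimal hand-written asm.js-style module (illustrative names).
// The |0 and +x coercions give every expression a static type,
// which is what makes asm.js validation linear-time.
function AsmMath(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // declare a as a 32-bit int
    b = b | 0;            // declare b as a 32-bit int
    return (a + b) | 0;   // result is also an int
  }
  function average(a, b) {
    a = +a;               // declare a as a double
    b = +b;
    return +((a + b) / 2.0);
  }
  return { add: add, average: average };
}

// Even without an asm.js-aware engine, this is plain JavaScript:
var m = AsmMath(globalThis, null, new ArrayBuffer(0x10000));
console.log(m.add(2, 3));      // 5
console.log(m.average(1, 2));  // 1.5
```

The coercions are not decoration: `(a + b) | 0` is what guarantees the engine an integer result, so it can compile the function without type checks or boxing.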
Imagine, additionally, that all this then gets translated into some standard bytecode for a high-level abstract platform target that has things like graphics and networking primitives—say, asm.js. Now everything can live as long as something still executes that bytecode.

This will be really cool in the future, someday. But it's a sci-fi-like talk for a reason. (And funny enough, when I saw him give this talk, the above-quoted situation is exactly what I thought of...)

Maybe it's unrealizable at the moment for x86 code. On the other hand, for random digraphs of arcade-game microcontrollers, JSMESS is doing pretty well at achieving this. :)

Probably not. While it wasn't intended to be so, JavaScript has become the lingua franca of web browsers and is therefore heading towards a future of being a compilation target. If your program can't compile to JavaScript, it doesn't really run on browsers (in much the same sense as "If your program can't be compiled to x86 assembly, it doesn't really run on computers").

> heading towards a future of being a compilation target
Having to trace/debug compiled JavaScript from another language is a horrific future. In your analogy, C programmers would have to debug assembly. I agree it's the most probable future, but let no one call it innovation when it's simply resignation from lack of options.

You haven't heard of source maps, then. "Having to trace/debug compiled x86 machine code from another language is a horrific future." There's no reason JavaScript can't get better at being the assembly language of the web, including making debugging of other languages easier.

It seems to me that we should really be looking for browsers to compile (or at least appear to compile, even if they skip this as an optimization) JavaScript into a lower-level 'assembly' language, so that Dart can be made to compile to the same form. An LLVM for browsers, basically.

JS is a terrible assembly language.
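For what it's worth, the source maps mentioned above rest on a one-line convention: the generated JS ends with a comment pointing at a map file, and source-map-aware devtools then show the original CoffeeScript/TypeScript/etc. while the compiled JS executes. The file name `app.js.map` below is illustrative; the rest is ordinary JavaScript.

```javascript
// Typical output of a compile-to-JS toolchain: plain JavaScript plus
// a trailing pointer to its source map. Browsers that support source
// maps fetch app.js.map and display the original source in the
// debugger; engines that don't simply ignore the comment.
var square = function (x) {
  return x * x;
};

console.log(square(4)); // 16

//# sourceMappingURL=app.js.map
```

So the "debugging compiled JavaScript" problem is real, but it is the same problem native toolchains solved with DWARF debug info, and the web solved it the same way: a side table mapping generated positions back to original source.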
But the work that's been done on making it passable for that purpose is remarkable, so we're getting by with it. You have to ask yourself whether it's easier to get all browser vendors to agree on an intermediate language or to get them to agree on new APIs that make JavaScript better for that purpose.

Alternatively, make a new type of browser that completely replaces JavaScript, make it popular, and leave the old world behind.

You'll never get adoption for a new browser that can't parse existing websites, unfortunately. There's no 'clean break' option.

People adopt new things all the time. Look at what happened when smartphones came along - everybody started making apps for them. I think people on smartphones spend more time in apps than they do on the web. Anyway, the new browser could totally embed another browser engine to show legacy web pages.

A new browser that fully supports existing technologies is probably the best option. Similarly, there's nothing technically stopping someone from building a new scripting core in the browser itself and then implementing the JavaScript engine atop that (other than the need to support a brand-new untested framework in an environment of high-performance already-existing JavaScript engines, of course).

It's easier to make them agree on an intermediate language. Mostly because one can make such a VM as a BSD-licensed plugin. Just make the DOM available as some virtualized "system calls" and you'll avoid the fate of Java and Flash.

What's a plugin? Is that one of those things that "nobody" willingly installs in their browser unless they absolutely have to, because much of the market doesn't even understand what they are, and the ones that do still want to avoid them because they decrease browser stability?

Whatever it is, somehow most people still have Flash installed on their computers, and a lot of them have Java.

Those are two of the exceptions to the rule.
Quite literally; there is a section of the Mozilla browser codebase that actively seeks out those plugins (along with QuickTime, Windows Media Player, and Acrobat Reader) because users who lack them experience a "broken web" (http://lxr.mozilla.org/mozilla-central/source/dom/plugins/ba...). I'm not saying becoming one of those plugins is impossible---clearly, it's been done more than zero times. I'm saying that I really wouldn't base any business decisions around the assumption that it will happen for your plugin. Mar 25, 2015 · Buge on Fear of Apple > a web-based web browser Reminds me of this: https://www.destroyallsoftware.com/talks/the-birth-and-death... Also, https://www.destroyallsoftware.com/talks/the-birth-and-death... jerf I actually acknowledged that in the version of this I posted 4 days ago: https://news.ycombinator.com/item?id=9071064 However, he has JS hanging on for longer than I bet on... once asm.js gets a good DOM binding I expect the explosion of language diversity to take about two years, tops, and for it to rapidly become clear that JS is now just another way of accessing the DOM. I think there's more pressure built up there than people realize, because right now there's no point in thinking about it, but once it's possible, kablooie. Node's value proposition, IMHO, is in some sense correct, but backwards; it's not that we want to write in Javascript on the server, it's that we want "client language = server language"... and once there's no longer a technical handcuff pinning the client side of that equation to Javascript, it will not take that long for it to no longer be Javascript. It is not an impressive language, even within its own 1990s-style dynamic language niche. (I think this is not because it's "bad", but because it has been developed in this really terrible multiple-vendors-that-actively-don't-want-to-cooperate way for most of its lifetime. 
It's gotten past that, I think, but during those decades all the other scripting languages were marching right along. None of the other languages could have survived such a process and gotten to where they are today, either.) I wonder how much React-style UI libraries can make up for the poor DOM access. Do all the calculation in asm.js and just dump out the diff for the plain js dom updater to deal with. GC-ed languages are going to have to include the GC which I doubt can compete with JavaScript. No one wants to write CRUD apps and manually manage memory. I don't see asm.js being used outside of games. wmf Unless someone invents a language where non-GC memory management is easy and that language can compile to both the client and server... iopq Rust makes it not quite "easy" but at least it makes it automatic and not error-prone. jerf "GC-ed languages are going to have to include the GC which I doubt can compete with JavaScript." There's no particular reason why not. It's all just bits and bytes in the end, and asm.js gives a pretty low-level view of the world. And if you're starting from a baseline of a language that can easily be 5-10x faster than browser-based JS you can afford a bit extra on the GC side. Javascript isn't magic. It's just a language. It isn't even a particularly special one, once you ignore its browser support, and it certainly isn't one focused on performance (I stopped buying the "languages don't have performance characteristics" line a while ago). It gets to run the same assembly instructions everybody else does. It isn't as fast as a lot of people here suppose, and it isn't that hard to beat out its performance even now. But why, what is the upside vs. just transpiling like the dozens of languages that already do? jerf Why are you asking as if it's some sort of theoretical question when asm.js is in hand, right now, and it performs wildly better than raw Javascript? 
Of course we'd rather compile to something that's faster than JavaScript than compile to JavaScript. (Sorry, I can't condone the word "transpile". Usage of it just reveals someone who doesn't understand compilation technology and thinks there's somehow something "special" about compiling to one intermediate language ("javascript") vs. another ("assembler").) I can't believe how many people seem to believe that JavaScript is a C-speed-level language, and downmod anyone who observes it's not. Well, it's still not. It's easy to see that it's not. It's not even close. If it were, asm.js wouldn't exist. (I mean, if you're having trouble with my claim here, stop and think about that for a moment... if JavaScript is so fast, why does asm.js even exist?)

Bear in mind that Unreal and Unity both have some form of internal garbage collection system that is compiled to asm.js. In the case of Unity, C# is transpiled into asm.js code. You could write in potentially any GC language, it's just that the GC needs to be included.

"For example, will asm.js eventually take over traditional web development? Theoretically, you can compile any compiled language to asm.js, so you'll have a lot more choice for the language you want to use to create your webapps."

I've outlined this progression before, which seems obvious to me, but I haven't seen anyone else discuss it:

1. Get asm.js into every browser.
2 or 3. Observe that asm.js is very verbose; define a simple binary bytecode for it.
3 or 2. Figure out how to get asm.js decent DOM access.

The last two can come in either order. And the end result is the language-independent bytecode that so many people have asked for over the years. We just won't get there in one leap, it'll come in phases.
We in fact won't be using JavaScript for everything in 20 years [1], but those of you still around will be explaining to the young bucks why certain stupid quirks of their web browser's bytecode execution environment can be traced back to "Javascript", even when they're not using the increasingly deprecated JavaScript programming language.

Hey, you're good at this. ;)

It's funny to see that people so opposed to JS are too short-sighted to see that asm.js could enable what they've wanted all along. Look at that! asm.js is a cross-platform bytecode; it just happens to be ASCII and look like JS. :) Arguments against asm.js as a bytecode are mostly about aesthetics and "elegance".

I think that eventually we might ditch the DOM and use WebGL or canvas or something instead of it, like on the desktop.

And throw accessibility out the window. Sorry, visually-impaired people. The web of the future isn't for you. :(

You can have accessibility without the DOM, and really the DOM is not such a great way to do this anyway. Just do things like write explicit audio-only interfaces.

I can't reply to bzbarsky for some reason, but: I assumed we were talking about vision impairment, because that's what the comment I replied to mentioned. Of course you can implement whatever else you want as well. I question this "semantic DOM" idea: the trend has been towards filling the DOM with tons of crap in order to make applications, not documents. Do accessibility agents even work well on JavaScript-heavy sites today? Accessibility can and will be had without the DOM; while it is a concern, it shouldn't prevent things like WebGL + asm.js apps on the web.

> I can't reply to bzbarsky for some reason
There's a rate limit to stop threads exploding.

No idea why you couldn't reply to me, but.... My point is that visual impairment is not mutually exclusive with other impairment, even though people often assume it is, consciously or not. This is an extremely common failure mode, not just in this discussion.
And while of course you _can_ implement whatever else you want, in practice somehow almost no one ever does. Doubly so for the cases they don't think about or demographics they decide are too small to bother with.

How well accessibility agents work on JS-heavy sites today really depends on the site. At one end of the spectrum, there are JS-heavy sites that still use built-in form controls instead of inventing their own, have their text be text, and have their content in a reasonable order in the DOM tree. At the other end there are the people who are presenting their text as images (including canvas and WebGL), building their own <select> equivalents, absolutely positioning things all over the place, etc. Those work a lot worse.

You are of course right that accessibility can be had without the DOM, but "webgl" is not going to be it either. Accessibility for desktop apps typically comes from using OS-framework-provided controls that have accessibility built in as an OS service; desktop apps that work in low-level GL calls typically end up just as not-accessible as your typical WebGL page is today. So whatever you want to use instead of the DOM for accessibility purposes really will need to contain more high-level, human-understandable information than which pixels are which colors or what the audio waveform is. At least until we develop good enough AI that it can translate between modalities on the fly.

Speaking of AI, is it really that hard to do the OCR'ing of the images? I'm no expert, but I was under the impression that this was a solved problem.

That's a pretty narrow view of "accessibility". For example, you just assumed that your user can't see (or can't see very well?) but can hear. Users who are deaf and blind? Out of luck. Users who are deaf and not blind but need your thing zoomed to a larger size? Maybe out of luck, maybe not (depends on whether the WebGL app detects browser zoom and actively works to defeat it like some do).
Users who are deaf and not particularly blind but happen to not be able to tell apart the colors you chose to use in your WebGL? Also out of luck. What the DOM gives you is a semantic representation that the user can then have their user agent present to them in a way that works best for them. Reproducing that on top of WebGL or canvas really is quite a bit of effort if you really want to target all users and not just a favored few groups.

frik
You can use HTML-DOM and WebGL together (overlays, or render as texture). The WebGL support could be improved in Internet Explorer.

Is it possible to mix them while showing, for example, videos which are clipped by paths or partly obscured by overlaying elements?

frik
I would say yes, e.g. with CSS Regions and WebGL. The other way around, you have to render the HTML and the video to textures.

You use WebGL to create standard GUI applications on the desktop? WebGL and canvas are in no way replacements for the DOM.

Perhaps it's to make it backward-compatible, so that, for example, the traditional DOM is implemented as a JavaScript library that parses HTML and renders it onto a WebGL surface?

> You use WebGL to create standard GUI applications on the desktop?
Increasingly, high-end apps do. Graphically intensive apps do it for performance. But the tradeoff is you lose all the native UI controls that the OS gives you--form fields, text selection, animation, etc. So I don't think replacing the DOM with OpenGL or similar is a good solution for general-purpose apps.

All of those would be implemented in a framework.

Yes, they already are. That framework is called the DOM. People keep complaining about it and trying to come up with replacement frameworks that end up slower and less capable... The DOM definitely has its problems, mind you. Investing some time in designing a proper replacement for it is worth it, as long as people understand going in that the project might well fail.

Which framework has tried to replace the DOM?
Flipboard's, from the recent story? The resulting site breaks accessibility and the layout obviously has less capability, but it's certainly not slower. Huh, that's actually seeming not that dissimilar from what I had in mind.

None have tried to replace all of it that I know of. People have tried to replace things like text editing (with canvas-based editors), CSS layout of the DOM (with various JS solutions involving absolute positioning), and native MathML support (MathJax; this one has of necessity done better than most, because of so many browsers not having native MathML support). There are a bunch of things out there that attempt to replace or augment the built-in form controls, with varying degrees of success.

That's my point. Currently, you cannot really replace the DOM since that's kind of the extent of the exposed APIs. None of the projects that you mention are really relevant to the discussion. I agree that they didn't change shit, but it's precisely because they are still built on the DOM and cannot really go below that in the abstraction layer.

> I agree that they didn't change shit but it's precisely because they are still built on the DOM and cannot really go below that in the abstraction layer.
There exist UIs done in Canvas and WebGL that are arguably below the DOM in the "abstraction layer" and don't need many more DOM nodes besides opening a canvas/GL panel... (Most full-screen 2D/3D web games fall into this, for starters, and those are in the tens of thousands. But there are also lotsa apps.)

> None have tried to replace all of it that I know of.
> Yes, they already are. That framework is called the DOM. People keep complaining about it and trying to come up with replacement frameworks that end up slower and less capable...
The ones I've seen are actually faster -- Flipboard for one gets to 60fps scrolling on mobile, IIRC.
And of course all the WebGL-based interfaces in native mobile apps that re-implement parts of Cocoa Touch et al. are not that shabby either. Sure, it doesn't have as much accessibility, but that's something that can be fixed in the future (and of course people needing more accessibility can always use more conservative alternatives). E.g. Mac OS X uses OpenGL to render the GUI. I guess I should have made myself more clear.

> WebGL and canvas are in no way replacements for the DOM.
That's kind of debatable. If you have access to a fast graphics layer from the browser, you can build a DOM replacement of sorts. I think that famo.us works kind of like that.

To some degree, yes. You just have to be able to re-use system UI controls like fields. So you wouldn't be able to just use WebGL/canvas/whatever in place of the DOM, you'd need to come up with a new API.

I know. I was thinking that there'd be something like Qt that would render the widgets using WebGL.

Until the web becomes the dominant operating system, I don't think that's reasonable, because you'd have to implement an entire UI kit (with all UI components, behaviors, animations, etc.) but can't guarantee that it will behave at all like the underlying OS. There's only so much you can re-create in the browser.

It's true that OS X uses OpenGL for GUI compositing, but that's only the lowest level. Above it, there's a very important piece of the GUI stack called Core Animation, which provides layer compositing. Core Animation is used by both the native GUI and the browser DOM. When you use layer-backed compositing on a web page (e.g. CSS 3D transforms), WebKit implements it with a Core Animation layer. So DOM-based rendering enjoys the same benefits of GPU-accelerated compositing as native apps -- although obviously with rather different semantics, since HTML+CSS doesn't map directly to Core Animation.
If you implement your own GUI framework on top of WebGL or Canvas, you're not getting Core Animation compositing for free, so you need to replicate that functionality in your custom framework. (This applies equally to native apps: a WebGL app is equivalent to a Cocoa app that renders everything into a single OpenGL view, and a HTML Canvas app is equivalent to using a single CoreGraphics view.) I don't think the WebGL/Canvas route makes sense for most apps other than games and highly visual 3D apps. You'll just spend a huge amount of time building your own implementations of all high-level functionality that is already provided by the OS and/or the browser: layer compositing, text layout, view autosizing, and so on. If you're doing standard GUIs, why go to all that trouble? I agree that it would be a lot of effort to pull off since you'd have to duplicate a lot of the standard OS features in the browser but if eventually the DOM becomes an even bigger bottleneck, it might be a viable solution. > You'll just spend a huge amount of time building your own implementations of all high-level functionality that is already provided by the OS and/or the browser Not only that, but you can't make a 100% guarantee that your implementation will look and work exactly the same as the native one on the underlying OS. For instance, I can re-create all the native Windows UI controls and re-implement all their behavior in exactly the same way, but what if the user has a custom theme installed? Everything breaks. (WPF has a similar problem.) Maybe, as the DOM becomes more loaded with more abstractions, people will start re-implementing abstractions the DOM already provides; just the subset of abstractions they want. Whether their implementation can beat the native code of the DOM, and the bandwidth concerns of reshipping the same logic is another story. Yea, it's already happening with React and other frameworks which are using virtual DOM. 
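That virtual DOM idea (diff plain data somewhere cheap, then hand a minimal patch list to the one piece of code that touches the real DOM) can be sketched in a few lines. This is only an illustration of the pattern, not any particular library's API; all names are made up, and a plain object stands in for a DOM node. The differ works on pure data, so in principle it could run in a worker, in asm.js, or in wasm.

```javascript
// The differ compares two plain property maps and emits a flat list
// of patch operations. It never touches the DOM.
function diff(oldProps, newProps) {
  const patches = [];
  for (const key of Object.keys(newProps)) {
    if (oldProps[key] !== newProps[key]) {
      patches.push({ op: "set", key, value: newProps[key] });
    }
  }
  for (const key of Object.keys(oldProps)) {
    if (!(key in newProps)) patches.push({ op: "remove", key });
  }
  return patches;
}

// The updater is the only part that touches the (stand-in) DOM node;
// it just interprets the patch list.
function patch(node, patches) {
  for (const p of patches) {
    if (p.op === "set") node[p.key] = p.value;
    else delete node[p.key];
  }
  return node;
}

const node = { title: "old", hidden: true };
const ops = diff({ title: "old", hidden: true }, { title: "new" });
patch(node, ops);
console.log(node); // { title: 'new' }
```

The point of the split is exactly the one made in the thread: the expensive comparison work is pure computation over data, while DOM access stays a thin JS interpreter of `set`/`remove` operations.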
Shared memory multi threading will still be a big barrier to porting over many native applications, like games. Unless asm.js fixes that?

There are plans for that: https://bugzilla.mozilla.org/show_bug.cgi?id=933001

Nice! I'll finally be able to implement Glitch for the web (Glitch is like React but uses replay to shake out data races).

I share your vision. However, I think good DOM access for asm.js is definitely coming, as in 2), because it's an easy way to get visual feedback from anything running in a browser. Oh, and thanks for the very enjoyable link :)

woah JS seems to actually be picking up a lot of speed outside the browser?

jerf First, that's not particularly relevant to the question of what happens to the browser itself. Second, the field of "things you can run that are not Javascript" when not in the browser is already incredibly rich, so we already live in a flexible world. Third, frankly I'm not particularly overwhelmed by the prospect of Javascript's longevity in the server space being a long-term phenomenon... an awful lot of what gets linked on HN is less "cool things to do with JS" and more "how to deal with the problems that come up when trying to use JS on the server". And fourthly, and why this reply is worth making, bear in mind that if the browser becomes feasibly able to run any language, rather than having Javascript occupy a privileged position by virtue of being the only language, the biggest putative advantage that Javascript-on-the-server has goes poof in a puff of smoke.
If Javascript has to compete on equal footing, it really doesn't have a heck of a lot to offer; every other 1990s-style dynamically typed scripting language (Perl, Python, Ruby, etc) is far more polished by virtue of being able to be moved forward without getting two or three actively fractious browser vendors to agree on whatever the change is (just look at how slow ES6 has been to roll out, when I'd rate it to contain roughly as much change as your choice of any two 2.x Python releases). And it has no answer to the ever-growing crop of next-gen languages like Clojure or Rust. Without its impregnable foothold in the browser, Javascript's future is pretty dim. (In fact I consider the entire language category of "1990s-style dynamic scripting language" to be cresting right about now in general, so Javascript's going to be fighting for a slowly-but-surely ever-shrinking portion of the pie.)

Depends on how JS evolves. It got a pretty serious setback when ES4 blew up and everyone went back to the drawing board on ES5 and ES6; the ES6 launch makes it (I think) better for most use cases than Python/Ruby/et al, because the VM is an order of magnitude faster than most of the popular choices and ES6 is a reasonably usable language even for someone unused to Javascript's current quirks: it has a real module system, the class syntax is sane and similar to how every other lang does it, the confusing this binding is fixed with arrow functions, generators and Promises get rid of deeply-nested callback chains, let and const get rid of confusing variable hoisting, etc. Google and Microsoft are both very seriously experimenting with typed variants of JS (TypeScript from Microsoft and SoundScript from the V8 team), and Mozilla had in fact already proposed adding static typing back in the ES4 days, so I wouldn't be surprised if the next couple of versions of the ES spec include static types.
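Two of the ES6 fixes listed above can be shown in a few lines (a minimal sketch; the `counter` object and `closures` function are invented for illustration):

```javascript
// 1. Arrow functions fix the confusing `this` binding: they capture
//    the enclosing `this` lexically instead of rebinding it.
const counter = {
  count: 0,
  increment() {
    // A plain `function () { this.count++; }` callback would get its
    // own `this`; the arrow function keeps `this` bound to `counter`.
    [1, 2, 3].forEach(() => { this.count++; });
  }
};
counter.increment(); // counter.count is now 3

// 2. `let` is block-scoped with a fresh binding per loop iteration,
//    so there is no `var`-style hoisting surprise in closures:
function closures() {
  const fns = [];
  for (let i = 0; i < 3; i++) {
    fns.push(() => i); // each closure sees its own `i`
  }
  return fns.map(f => f()); // [0, 1, 2]; with `var i` it would be [3, 3, 3]
}
```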
The future for JS is brighter than you think — although it's brighter only because it looks like JS will become a better language, not because of JS in its current, mostly-still-ES5 state.

> Observe that asm.js is very verbose, define a simple binary bytecode for it.

I suspect that is never going to happen, for two reasons:

1) Verbosity, on its own, does not make a difference on the web - HTML and Javascript are both generally super verbose, and haven't had any accepted "simple binary encoding" designed for them in those 20 years. What does get implemented is minifiers and compressors (gzip encoding or otherwise), both of which will provide benefits to asm.js comparable to what a bytecode would, and would not require any buy-in from browser makers (the same attribute that has made asm.js successful and PNaCl unsuccessful so far).

2) Historically, anything that is not backwards compatible and does not degrade gracefully is NOT easily adopted by browser makers, or by websites, unless it provides something that cannot be achieved without it (e.g. WebGL gets some adoption because there is no alternative; but ES6 will get little to none in the next 3 years except as a source language translated to ES5).

Well, if such a bytecode were being standardized, someone would surely write a shim JS library to convert it to JavaScript on browsers that didn't have native support yet. And I think the idea of making a binary format would be more popular for a sublanguage which is essentially guaranteed to be machine-generated and inscrutable than for HTML/CSS/JavaScript, which have a long history of being written and read manually without any (de)compilation steps, even if most big webapps are minified (especially given that I've seen a lot of commenters here and elsewhere have a knee-jerk reaction against the idea of using JavaScript syntax).

But it works the other way around: it will not be standardized before there's an implementation.
If it ever happens (which I think is unlikely), the standardization will follow the shim. And the knee-jerk reactions are meaningless. The people who ship stuff don't seem to mind, and they are the ones who make things matter.

Jan 23, 2015 · alexvoda on The Emularity

This prediction is becoming true day by day:

Jan 07, 2015 · sarciszewski on JavaScript in 2015

> All you have done is used a function without understanding what it was doing, or reading the documentation

These aren't my examples. I haven't done anything. I credited the person who provided them: Gary Bernhardt.

https://www.destroyallsoftware.com/talks/wat

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Next time before you make an accusation, reread the post before pressing the reply button.

Nov 13, 2014 · jakozaur on AWS Lambda

Since Intel already ships CPUs for them, maybe they should ask for a CPU with asm.js as its assembly language :-P. The jokes ppl make are starting to get a bit more real: https://www.destroyallsoftware.com/talks/the-birth-and-death...

How to solve the context switch overhead issue: https://www.destroyallsoftware.com/talks/the-birth-and-death...

How about: a CPU that has scores of hyperthreads? They don't block in the kernel; they stall on a semaphore register bitmask. That mask can include a timer register matching another register, interrupt complete, or event signaled. Now I can do almost all of my I/O, timer and inter-process synchronization without ever entering a kernel or swapping out thread context. I've been waiting for this chip since the Z80.

While not exactly a chip (it never reached board stage), I designed a processor in college where the register file was keyed to a task-id register. This way, context switches could take no longer than an unconditional jump. I dropped this feature when I switched to a single-task stack-based machine (inspired by my adventures with GraFORTH - thank you, Paul Lutus). This ended up being my graduation project.
> But it frustrates me how many people complain about how much it sucks when there are no projects (with any support) attempting to really change things. If my opinion of javascript is wrong and it really is that bad - I'd think there would be more of a movement to move away from it.

You should watch this https://www.destroyallsoftware.com/talks/the-birth-and-death...

Programmers can't always just start something "new"; they have to work on existing platforms to be able to deploy their software quickly and efficiently. This video explains that even though JavaScript was never intended to run a 3D engine, it now is able to. As he explains in the video, it's a hack, and it doesn't work that well for most binaries.

If I could, I would start a new browser, without HTML, with more dynamic languages, with a clang VM, with protocol buffers, etc. But in this age of patents and market shares in IT, I don't expect having enough exposure to get users installing this future browser. If one software company can't deploy its app on the dominant systems, it's screwed. But it doesn't mean JS fits every possible job.

This talk just keeps getting more and more relevant: https://www.destroyallsoftware.com/talks/the-birth-and-death...

It may become a self-fulfilling prophecy.

I think the best answer to your question is this video: https://www.destroyallsoftware.com/talks/the-birth-and-death...

Title: THE BIRTH & DEATH OF JAVASCRIPT
By: Gary Bernhardt
From: PyCon 2014

Reminds me of this talk:

"The Birth & Death of JavaScript"[0] talked about this. From my memory, in "the future" described in the talk, everything is ported to JS so that the VM does the isolation automatically and all the overhead can be thrown away.

And another by the same guy, in a similar vein: https://www.destroyallsoftware.com/talks/the-birth-and-death...

This seems oddly relevant: https://www.destroyallsoftware.com/talks/the-birth-and-death...

haha :) Yes... influenced the conception of Breach for sure!
This is eerily close to some of the things discussed in The Birth and Death of JavaScript[1]. Though that imagines a future where we're using asm.js and probably OdinMonkey, not generic runtimes like V8.

My thoughts exactly. I recently posted the link to that talk here on HN somewhere, and now seeing this is really creepy. Especially:

> All kernel components, device drivers and user applications execute in a single address space and unrestricted kernel mode (CPU ring 0). Protection and isolation are provided by software. Kernel uses builtin V8 engine to compile JavaScript into trusted native code. This guarantees only safe JavaScript code is actually executable.

> Every program runs in its own sandboxed context and uses a limited set of resources.

How's that creepy? It's freakin awesome!

The number of uses we put Javascript to is indeed frightening, given its "fragile" nature and the heavy criticism it attracts every now and then. There is a great, amusing, borderline sci-fi talk by Gary Bernhardt about the future of Javascript and traditional languages compiled to Javascript. My recommendations: https://www.destroyallsoftware.com/talks/the-birth-and-death...

Jun 20, 2014 · wildpeaks on Webkit.js

That reminds me of the hilarious talk "The Birth & Death of Javascript" where everything gets converted to asm.js, even operating systems.

Check out this kernel built on the V8 engine, designed to run JavaScript code: https://github.com/runtimejs/runtime

Jun 20, 2014 · RussianCow on Webkit.js

Reminds me of Gary Bernhardt's talk "The Birth and Death of JavaScript": https://www.destroyallsoftware.com/talks/the-birth-and-death...

Holy hell, Gary Bernhardt was right all along and the future will be METAL... https://www.destroyallsoftware.com/talks/the-birth-and-death...

transpile it all into js and run in node! (I know this may be limiting for some features/libraries, but only a matter of time.) See https://www.destroyallsoftware.com/talks/the-birth-and-death...
on taking this to an "extreme" -- linux, Gimp on X windows, even Chrome -- transpiled and running in a firefox tab.

May 21, 2014 · phillmv on Arrakis

Heh, this sounds analogous to what Gary Bernhardt finished "The Birth & Death of Javascript" with: https://www.destroyallsoftware.com/talks/the-birth-and-death...

Or what a bunch of projects have been doing for years -- Erlang on Xen, HalVM, etc. etc.

Makes me think of this video, "The Life and Death of Javascript", which shows what the future could be like with asm.js: https://www.destroyallsoftware.com/talks/the-birth-and-death...

May 08, 2014 · loup-vaillant on How fast is PDF.js?

> Web standards (HTML/CSS) and language (Javascript) were not designed to be used as a compilation target for complex programs. Now they are.

> I don't have any solutions to this, it's too late, we are already committed to browsers being full operating systems.

When we do get to that point, and ditch the underlying MacWinuX, there's a good chance they won't be much more complex and much less secure than what they replaced. A typical MacWinuX desktop setup is already over 200 million lines of code. I'd be happy to drop that to a dozen million lines instead (even though 20K are probably closer to the mark http://vpri.org/html/work/ifnct.htm).

It also shouldn't be much slower than current native applications. Heck, it may even be significantly faster. Without native code, hardware doesn't have to care about backward compatibility any more! Just patch the suitable GCC or LLVM back end, and recompile the brO-Ser. New processors will be able to have better instruction sets, be tuned for JIT compilation… The Mill CPU architecture for instance, with its low costs for branch mispredictions, already looks like a nice target for interpreters.

---

> I do think it's worth considering the security price we are paying to make things like PDF.js possible.

Remember the 200 million lines I mentioned above? We're already paying that security price.
For a long time, actually.

---

That said, I agree with your main point: the whole thing sucks big time, and it would be real nice if we could just start over, and have a decent full-featured system that fit in, say, 50,000 lines or so. Of course, that means forgoing backward compatibility, planning for many cores right away… Basically going back to the 60s, with hindsight. Alas, as Richard P. Gabriel taught us, it'll never happen.

> Heck, it may even be significantly faster. Without native code, hardware doesn't have to care about backward compatibility any more! Just patch the suitable GCC or LLVM back end, and recompile the brO-Ser. New processors will be able to have better instruction sets, be tuned for JIT compilation… The Mill CPU architecture for instance, with its low costs for branch mispredictions, already looks like a nice target for interpreters.

Heh, I hope you appreciate the irony in that one. On the one hand we have people arguing that we have to stick with the existing web platform for backwards compatibility reasons, but on the other you are suggesting it would be easy to switch the entire world to new, totally incompatible processor architectures to make the aforementioned web platform performant.

It's a matter of how many people you piss off. Ditch the browser, you have to change the whole web. Ditch the processor, and you have only a couple browsers to change. Apple did it, you know? Changing from PowerPC to x86. And they had native applications to contend with. I believe they got away with an emulation mode of some kind, I'm not sure.

I for one wouldn't like to see the web take over the way it currently does. It's a mess, and it encourages more centralization than ever. But if it does, that will be the end of x86. (Actually, x86 would die if any virtual machine took over.)

Ah, thanks for posting Gary's talk. Great stuff.

I don't think you have to forgo backwards compatibility.
Implement a standard VM and library set that everyone can compile to. Implement HTML/JS as a module in the new system. Problem solved.

Well, it's not just HTML/JS. It's Word/OpenDocument, SMTP/POP/IMAP… Those modules are going to make for the vast majority of the code. We could easily go from 50K lines to several millions.

May 02, 2014 · jarrett on Thinking in Types

I hear that, and I hope that project works out. But I'm still interested in developing native apps. There's a reason so many professional applications (games, intensive apps like Photoshop and Blender, etc) are still native. You may have seen The Birth and Death of JavaScript: https://www.destroyallsoftware.com/talks/the-birth-and-death... I don't know whether that prediction will come true. But obviously it hasn't thus far. For now, if I need native performance, I need native code. Sadly, that means C, C++, or Java until such time as Haskell libraries compile reliably. shudder.

Node.js-powered linux kernel builds? One small step towards https://www.destroyallsoftware.com/talks/the-birth-and-death... coming true ;)

If you have not seen Gary Bernhardt's talk, you should:

Apr 17, 2014 · 635 points, 227 comments · submitted by gary_bernhardt

First, I very much love the material of the talk, and the idea of Metal. It's fascinating, really makes me think about the future. However, I also want to rave a bit about his presentation in general! That was very nicely delivered, for many reasons. His commitment to the story, of programming from the perspective in 2035, was excellent and in many cases subtle. His deadpan delivery really added to the humor; the fact that he didn't even smile during any of the moments when the audience was laughing just made it all the more engaging. Fantastic talk, I totally loved it!

I was lucky enough to hear Gary give this talk in January at CUSEC and it was even better in person.
Everyone in the room was clearly hanging on his every word, the actual technical content was pretty insightful and his humour was spot on.

Also, Java-YavaScript

Many people from Europe do that. It does sound cooler.

Since JavaScript and Java have almost nothing in common, I think that's a very reasonable pronunciation. The words look similar, but have very different functional meaning.

It sounds so natural that I immediately started thinking I had actually been saying it wrong all these years.

fwiw, it's the way everyone in Russia pronounces it

Yup. I thought it was part of the joke (that in a few generations, we might pronounce old language names differently).

Kiro How do you usually pronounce it?

I think I'm going to adopt this new pronounciation.

This is actually how you say it in Russian. Try Google Translate.

pronunciation* (it's one of few weird words that change spelling when you add a suffix, such as fridge/refrigerator)

The reason why METAL doesn't exist now is because you can't turn the memory protection stuff off in modern CPUs. For some weird reason (I'm not an OS/CPU developer), switching to long mode on an x86 CPU also turns on the MMU stuff. You just can't have one without the other. There's a whole bunch of research done on VM software managed operating systems, back when the VMs started becoming really good. Microsoft's Singularity OS was the hippest, I think.[0] Perhaps ARM CPUs don't have this restriction, and we will benefit from ARM's upward march sometime?

In a way that is no different from the older Xerox PARC systems or the Oberon-based ones at ETHZ. All of them are based on the concept of using memory safe languages for coding while leaving the runtime the OS role. Except for C and C++, language standard libraries tend to be very rich and to a certain extent also offer the same features one would expect from OS services.
As such, bypassing what we know as a standard OS with direct hardware integration, coupled with language memory safety, could be an interesting design as well. That is why I follow the Mirage, Erlang on Xen and HaLVM research.

I didn't want to go into this level of detail in the talk, but... I think you still want the MMU enabled, just not used for process isolation. With virtual memory totally disabled, a 1 GB malloc takes 1 GB of physical memory even if it's not touched, you can't have swap at all, memory fragmentation kills you dead, etc. It still has a lot of utility outside of isolation.

I don't have a good sense of how the performance cost of hardware isolation breaks down into {virtual memory enabled, TLB thrashing, protection ring switching}. That's one of the reasons that I reduced the speed-up from "25-33%" in the MSR paper down to 20% in METAL. Maybe the speed-up would be less than that if virtual memory were still enabled.

Unfortunately, that distinction may have been blurred in the talk. That is, I may have implied that METAL would turn the MMU off entirely. If so, it was an oversight. I've done the talk end-to-end at least fifty times, which is how I smooth my execution out. Occasionally it can "smooth" the ideas out a bit too, leading to small inaccuracies. It's sort of like playing the telephone game with yourself (which is a very strange experience).

The MSR paper that I quote came from the Singularity team, so your reference is right on. Reading "Deconstructing Process Isolation" in fall of 2012 was probably the germ of the core narrative of the talk.

There's a new CPU architecture in the pipe (should have good silicon within five years, if their projections can be trusted), and it has a very good design around system calls, and also in terms of virtual memory.
It has a single 64-bit address space and only protection contexts, and due to the general design of the system it doesn't require any register push (or registers at all, at least in the traditional sense). In addition, it has primitives which would allow programs to call directly into drivers and kernel services without a context switch. Anyway, I don't mean to sound like an advertisement, and we've yet to see any silicon, so the jury's out.

Aside: Starting process address spaces at 0 is not really a convenience as far as I know (other than offering consistent addresses for jumping to static symbols); it's a way to enable PAE on 32-bit machines so that single contexts (typically processes) can use the whole address space.

There's an even bigger revolution in CPU design, with Ivan Sutherland's Fleet - which does away with the clock and sequential execution of instructions - instead, the programming model is based on messaging and traffic control - you direct signals to the units in the chip which perform the computations you want, asynchronously by default - if you need synchronicity, you need to program your own models for it. While these probably won't be available in the next 5 years, and probably won't be acknowledged by existing programmers for decades - I think these ideas will take over.

You forgot to provide a link: http://millcomputing.com/docs/

After the Operating Systems and Computer Organization courses in my first year of university I became a little obsessed with the idea of software managed operating systems. Cool to hear Singularity inspired you as well.

Now I didn't read the paper, but I think the 20% is purely the MMU; I think the protection ring switching thing is much less significant, so I think if you leave the MMU on, that 20% gain is still very optimistic.
Now, if you forget about compiling C (which defeats the purpose of your talk) and just compile managed languages like regular JS, the garbage collector can build a great model of memory usage. Therefore I think it could be much better to let the garbage collector manage both the isolation, and the swapping. So everything in software. The swapping process would suffer some performance, but that's just CPU cycles, as everyone knows persistent data access isn't even in the same league as CPU memory access. So yeah, that would mean that you would have to run all untrusted code in managed mode. And with untrusted I would mean code you can't trust with full physical memory access. On linux, system calls don't result in a TLB flush - kernel data structures and code are in a different portion of the virtual address space (starting from the top of VM memory, if I remember right) that is tagged as not being available from ring 3. So system calls are quite fast. EDIT: Kernel memory begins at PAGE_OFFSET, see here: https://www.kernel.org/doc/gorman/html/understand/understand... Kernel memory lacks the flag _PAGE_USER so that it isn't accessible from userspace: https://www.kernel.org/doc/gorman/html/understand/understand... I didn't know that! It certainly makes sense. Context switches still thrash the TLB, though. The performance cost of that has gotten better as time has gone on, but I wonder how many transistors (and how much power) CPUs are burning for that mitigation. The "how computers actually work" digression originally had a section on context switches, but I removed it early on because I felt like that section was dragging. To try to paint a very rough picture of the larger thoughts from which this talk was taken: I think that microkernels and the actor model are both the right thing (most of the time). When implemented naively, they both happen to take a big penalty from context switch cost. 
But Erlang can host a million processes in its VM, and we're using VMs for almost everything now anyway. The obvious (to me) solution is to move both the VM and an Erlang-style, single-address-space scheduler into the kernel. Then you can have a microkernel and a million native processes without the huge overhead of naive implementations. There are surely many huge practical hurdles to overcome with that, and maybe some that can't be overcome at all, but it sure sounds right when written in two paragraphs. ;)

jerf You know about http://erlangonxen.org/ ? Also something like http://corp.galois.com/halvm . That's probably still not quite low enough level to turn off the MMU, but it's getting there.

nly What you seem to be missing with re: asm.js is that, while the JIT to native code gets you your super-fast integer operations, it's still critically incomplete with regard to memory access. Every single individual memory access has to be bounds checked or pushed through some other indirection inside the runtime. Google demonstrated similar ideas with NaCl, which achieved safety with a similarly restricted native code and a just-in-time verification step. Even if these memory accesses could be made as efficient as those performed by the CPU's access protection, you're still not gaining anything you don't already have.

Regarding context switches: A full CPU context switch on x86 (not to ring1 but between two arbitrary points within a single userland address space) takes a few dozen instructions and about 40-80 cycles. A single cache-line miss resulting in a load from main memory on the other hand takes at least twice that (~200 cycles). Again, hits from jumping around in memory will dominate. How significant is a 20% overhead from virtual memory? Probably about the same as getting 1% more of your memory accesses back into high level caches.

I agree, and I think the whole premise of the performance gain is based on a "have the cake and eat it too" fallacy.
Sure, the virtualized syscall to the virtualized OS will be free, but the painting of the font on the screen or the reading of the socket data will be done by the actual bare-metal OS, which the VM will invoke to get the actual job done. So as long as we are talking about interprocess communication there will be a gain, but not for the actual hardware-facing operations. Then again, you are trading hardware-enforced isolation, which is simple and proven, for isolation enforced by a complex and fragile VM.

On x64 in Firefox, at least, there are no bounds checks; the index is a uint32; the entire accessible 4GB range is mapped PROT_NONE with only the accessible region mapped PROT_READ|PROT_WRITE; out-of-bounds accesses thus reliably turn into SIGSEGVs which are handled safely, after which execution resumes. Thus, bounds checking is effectively performed by the MMU.

nly Interesting approach, might have to look at the code. Nonetheless it highlights how useful the MMU is and how none of this is free.

Looks like Erlang is already getting one step closer to the metal:

Also there is another project that can be related to that goal: "Our aim is to remove the bloated layer that sits between hardware and the running application, such as CouchDB or Node.js"

I guess this is in a way a response to Bret Victor's "The Future of Programming"?

Thanks for the link. Liked the 70s vibe and humour. From about 14:40 he gets animated, basically conducting! Would love to know a programmer's explanation for the function or purpose of arm waving and hand signals in a presentation. Not knocking, just curious!

Well, he just tries to reinforce that we have two symmetrical interconnected systems and yet they have to figure out how to talk to each other, which is what he has on the screen.

It is in a sense. I had an early form of the idea that became this talk in the spring or early summer of 2013. Bret's talk (which I loved!) was released shortly after.
That made me think "I have to do this future talk now in case the past/future conceit gets beaten into the ground." jerf It's not far off my predictions: https://news.ycombinator.com/item?id=6923758 Though I'm far less funny about it. Coincidentally, I just released a podcast interview with Gary right after he gave this talk at NDC London in December 2013: http://herdingcode.com/herding-code-189-gary-bernhardt-on-th... It's an 18 minute interview, and the show notes are detailed and timestamped. I especially liked the references to the Singularity project. I'm missing some obvious joke...but why is he pronouncing it yava-script. I thought it was supposed to be some future pronunciation thing, imagining the way languages evolve. I've seen SciFi movies where in the future english is heavily influenced by spanish. I thought it was supposed to be a callback to this scene in Anchorman, but I'm not sure. https://www.youtube.com/watch?v=N-LnP3uraDo No hard 'J's in many languages (Like Slavic languages). It's pronounced 'y'. Anyone have a list? The sound ʤ seems to occur in most Slavic languages [1], I guess the primary reason why "Java" is read as "Yava" is because people tend to apply local pronunciation rules to commonly used foreign words, either because of lack of knowledge of native pronunciation or because native pronunciation sounds just silly. YavaScript is a very common pronunciation in Germany, the dj sound only appears in "loannames" like Jennifer. My grandfather always told me to find a nice yob :) bttf Ask a Hispanic friend. gotcha. Rather, ask a Scandinavian friend. My scandinavian friends call it yay-va-script. Kiro I'm from Scandinavia and have never heard anyone pronounce it like that. I'm Icelandic and we use yava-script, I find it hard to figure out how yay-va sounds. 
Perhaps it would help to see that pronunciation rendered in IPA for English [0]: /ˈjeɪ.və.ˌskrɪpt/

Note particularly the phoneme (eɪ), which corresponds to the "long A" sound in English, e.g., the 'a' in 'face'. I'm unfamiliar with the Icelandic language, but according to the English equivalents listed on Wikipedia's page on IPA for Icelandic [1], the corresponding phoneme in that language appears to be ei.

YAY as opposed to nay
VUH as in vagina
SCRIPT pronounced normally

At the 8:00 mark, he accidentally pronounces it correctly for a moment, and then "corrects himself" by mispronouncing it :-) I'm assuming the original pronunciation was lost in the war.

Kiro How else would you pronounce it?

He's in character of it being 2035 and the pronunciation was lost/changed.

100k I was hoping he'd drop in some reference that would explain it, like the takeover of world government by Norway after the war (sort of like Poul Anderson's Tau Zero http://en.wikipedia.org/wiki/Tau_Zero). But I guess he just wanted it to be inscrutable.

I think you're probably right -- he almost slips up at one point, but corrects himself before pronouncing the "va".

For context, this was one of the most enjoyed talks at PyCon this year.

cbhl I was fortunate enough to get to see this at CUSEC (the Canadian University Software Engineering Conference) and would similarly agree that this was one of the most enjoyed talks there, too.

JavaScript at PyCon?

SSLy Well, you need JS for the client side even if you use Python for the server side (e.g. Flask or Django).

Yep. It was on the schedule. I think you'll find that the python ecosystem is very large and varied. Apropos, Bokeh.

> xs = ['10', '10', '10']
> xs.map(parseInt)
[10, NaN, 2]

Javascript is beautiful.

It's due to parseInt having an optional second parameter, the radix, and map passing the index as the second parameter, hence:

  xs = [ parseInt('10', 0), parseInt('10', 1), parseInt('10', 2) ]

Thank you.
I read many comments to find an explanation.

It's not optional if you lint your code :)

And since 0 is falsy we get 10 for base 0. Which is odd. I wonder why they checked for falsiness and not for the argument being undefined.

It's not exactly checking for falsy values, although all falsy values will lead to a radix of 10 being applied. parseInt internally uses the ToInt32 abstract operation on the radix parameter. Once it has that value, it explicitly looks to see if the value is 0. If it is, it uses a radix of 10. https://people.mozilla.org/~jorendorff/es6-draft.html#sec-pa...

Edit: I hope that doesn't come off as pedantic. My point wasn't to disagree so much as to add some further explanation.

This is HN, there's no such thing as being pedantic. :) Or rather, there is, but it's thoroughly welcome. Although keep in mind that excessive pedantry is frowned upon.

Just like with any language, as long as you read the docs of the stuff you use, you don't get this problem (you might get others with automatic type conversion and missing arguments, like the speaker says, but not this)... this is just stupid. Try this:

int subtract(int b, int a) { return a - b; }
int test = subtract(5, 3); // != 2, just read the damn docs

Oh, C sucks now! The talk is quite fun and interesting to watch though. And the end is pretty cool.

By "this problem" do you mean "the this problem"? And by "this is just stupid" do you mean "this === just stupid"? If you think that's bad, you should see this!

The thing is, good language design means you don't have to read the docs. The number one thing taught in user interaction / usability courses is that users don't read the documentation, or skim it and go directly to the one or two parts they want to check (sure, some bizarro outliers do read it all). Besides, a golden rule from the UNIX era is the "principle of least surprise". Don't define stupid behavior as the default, as in this case (both for parseInt and map).
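The radix fallback described in the thread is easy to poke at directly; these are plain parseInt calls against standard behavior, nothing hypothetical:

```javascript
// parseInt runs its radix argument through ToInt32; any value that
// converts to 0 (0, undefined, NaN) falls back to radix 10.
parseInt('10', 0);         // 10 — radix 0 means "use 10"
parseInt('10', undefined); // 10 — same fallback
parseInt('10', 8);         // 8  — an explicit radix is honored
parseInt('10', 1);         // NaN — radix 1 is out of range (2..36)
```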
Both behaviors make sense in isolation. It's not always so easy. Arguably the issue is not with the behavior but rather a deeper design problem within the language itself. Notice that in Obj-C no one would ever get confused regarding the second parameter of parseInt:withRadix:.

Juggling these interactions is exactly what makes good language design so difficult and time-consuming. In the talk, I mention JS' ten-day design time several times for exactly this reason. Language design is hard, and ten days just isn't enough time to carefully consider how everything will fit together. Try to imagine a programmer, even a brilliant one, noticing the map/parseInt interaction ten days after starting to learn JS, especially in 1995 when these high-level languages were far less common. Seems unlikely!

> good language design means you don't have to read the docs

Let's assume JavaScript is a poorly-designed language, and Clojure is a well-designed language. In the first month of language use, the user of Clojure will have looked at the docs many more times. But the user of javascript will have more bugs. That's because: 1) Clojure has a larger API -- Javascript doesn't have 1/10 that. 2) Javascript has a familiar (to many) Algol-derived braced syntax and lots of common C/C++/Java/etc keywords. Clojure is only familiar to Lisp/Scheme users. If those things were equal, Clojure would win the "don't have to look caveats up" contest, because its design is more coherent, and doesn't give you unexpected results and undefined behavior like Javascript does.

Obviously, you somehow need to first know that "parseInt" is called "parseInt()" and not "atoi()", for example. But I wasn't implying never reading anything, including the function reference. Just being able to code without needing to study and/or memorize lots of arcane edge cases.

I just wrote this line about two hours ago and my tests weren't thorough enough to catch the bug it introduced. Just when I thought I knew JavaScript.
Thanks for saving me some time.

I'm sure you know this by now, but you can keep the syntax and use Number instead: xs.map(Number)

I think xs.map(Math.floor) takes the cake, if I recall speed tests properly.

Always remember - it's not a language but a loosely parsable texty thing.

Always give the base to parseInt in Javascript. Always. The moment you don't, all kinds of bugs follow. I have to go cry now at the number of times this has bitten me.

Are there any other problems aside from octal?

Not just javascript; a whole bunch of languages' parseInt implementations will interpret the base from a leading zero etc.

A useful function:

function overValues(f) { return function(x) { return f(x); } }

Then you can do:

['10', '10', '10'].map(overValues(parseInt));

However, usually you're going to want to do the equivalent of this:

['0101', '032'].map(function(s) { return parseInt(s, 10); })

Because Javascript interprets a leading zero as an indicator of base.

I think you accidentally a word...

> as an indicator of base.

As an indicator of base 8 (octal).

v413
You need:

['10', '10', '10'].map(Number);

var xs = ['10', '10', '10'];
xs.map(function (str) { return parseInt(str, 10); });
> [10, 10, 10]

Fixed that for you. Why?

map callback params: (value, index, originalArray)
parseInt params: (string, radix)

Your code is passing the map array index to parseInt's radix.

Where people use parseInt, they usually should use Number: ['10', '10', '10'].map(Number); // [10, 10, 10] ;)

There are so many good WTFs in JS, but this is not one. parseInt expects 2 arguments and Array.prototype.map provides 3 to the callback it is given. Both of these facts are very well documented and known.

var mappableParseInt = function(str){ return parseInt(str, 10); };
['10', '10', '10'].map(mappableParseInt);

I'd suspect this snippet is more a snipe at people who don't know JS very well and expect parseInt to be base-10 only.
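Pulling the thread's fixes together, a minimal sketch of the failure and the two usual repairs (plain JavaScript, standard library only):

```javascript
const xs = ['10', '10', '10'];

// map invokes its callback with (value, index, array); parseInt takes
// (string, radix), so each element's index silently becomes its radix.
xs.map(parseInt);              // [10, NaN, 2]
// i.e. [parseInt('10', 0), parseInt('10', 1), parseInt('10', 2)]

// Fix 1: pin the radix explicitly.
xs.map(s => parseInt(s, 10));  // [10, 10, 10]

// Fix 2: use Number, which takes exactly one argument.
xs.map(Number);                // [10, 10, 10]
```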
You could say it's a snipe at the weak type system that Javascript has.

I dunno, as someone without much experience with Javascript, it is a little odd that arrays return the index alongside the value by default.

arrays don't do that, but map() does. Normally in js you can just ignore arguments you don't care about, but it does lead to surprises like this one.

It has little to do with the type system. Variadic arguments can be typed given a type system that supports it.

Alternatively, if a function expects two arguments, the language could take exception at the fact that three were handed in. Quietly accepting arbitrary arguments could be considered breaking contract. It does have a wtf-ey whiff.

A bit less annoying with ES6:

>>> ['10', '10', '10'].map(x => parseInt(x, 10))
[10, 10, 10]

With a function named "parseInt" anyone would expect the inputs as base 10... otherwise this should be called "parseHex" for 15, or at least "parseBytes(input, base)". The programmers are not the ones to blame for that... this is really a bad contract between the language and the programmer. It's the equivalent of a function named "getStone()" returning you a "Paper{}" :)

djur
"int" doesn't imply anything about the base.

It does for human beings. We use base-10 for basically everything. This is true even for most programmers in most situations. Human beings aren't computers, and we aren't abstract math processing units who by default consider numbers abstracted (e.g. as elements of Rings). This goes double for string representations of numbers--in the majority of cases a number represented by a string is a number meant for human consumption in a normal human context; not some machine running in base-2 (or -8 or -16).
It is certainly reasonable to expect "parseInt" to parse an integer out of a string in base-10 by default, and entirely unreasonable to expect to be required to provide a base as anything except an optional argument, and certainly it is unreasonable to expect that that second optional argument is treated as not optional in a composition operation.

djur
I agree with your conclusions. However, the poster I was responding to was suggesting that the category of "int" necessarily excludes non-decimal representations in the same sense that the category of "stone" excludes "paper".

I think in this case it's not parseInt that's at fault, it's the fact that map optionally passes additional arguments.

> It is certainly reasonable to expect "parseInt" to parse an integer out of a string in base-10 by default

It does.

> and entirely unreasonable to expect to be required to provide a base as anything except an optional argument

It is optional.

> and certainly it is unreasonable to expect that that second optional argument is treated as not optional in a composition operation

I don't understand what you're saying here. It's never treated as required; it's just that map supplies a parameter in that position, so it gets used. That's how optional parameters work. The wat (if there is one) is that map provides extra arguments.

>> ...parse an integer out of a string in base-10 by default
> It does.

Not quite: in some browsers (IE), a string starting with '0' gets interpreted as octal. So parseInt('041') === 33 in IE. Guess how I found out about that.

I don't think parseInt accepting an optional second argument is the surprising behavior there. The real WTF is map passing more than one argument, and the loose behavior of JS regarding argument passing overall.

It's only WTF because it's not the same as other implementations of map. Once you internalize the map implementation, it's no longer WTF & actually makes sense. That's the weird part.
Why does array provide three arguments? But I agree, that's something you can learn. I guess. But it's WTF anyway. I have a function that takes either one or two arguments, I provide three, and everyone seems to be OK with that.

Well, that decision is pretty necessary when you realize that JS has no syntax to indicate a function is variadic (we use the arguments magic variable, but use of it does not necessarily indicate that a function is variadic) and that implementation-supplied functions are not required to have their arity exposed via Function.prototype.length (http://es5.github.io/#x15.3.5.1). There's no way to know, even at run-time, whether a function is being called with too few or too many arguments, since that's equivalent to the halting problem. So the sensible alternative is just to default everything to undefined, and silently ignore extraneous arguments. But yes, if JS was strict with how it handled argument definition lists and had support for indicating infinite arity, I'd agree, this would be a WTF, or at least strange. But I think it makes a lot of sense, all things considered.

So much for abstraction if you need to understand the implementation of every function you'll ever use in JavaScript.

> understand the implementation of every function

Rather: remember three things that make up the majority of Array iterators' callback functions' signatures: Element, Index, Array. Shared by: .map, .every, .forEach, .filter, and probably some that I am forgetting. The exception I think is just .reduce[Right], which by definition requires its previous return value, so you have (retVal, elem, i, arr). Quite literally, if you remember the .map callback, you remember the .every callback :) Javascript deserves shtick for its truly bad parts (with, arguments, ...) and some missing parts, but .map and its friends aren't it.

It makes sense but it still strongly violates the principle of least surprise.
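The arity looseness discussed above can be demonstrated in a few lines; `pair` is a made-up function for illustration:

```javascript
// Extra arguments are silently dropped; missing ones become undefined.
function pair(a, b) { return [a, b]; }

pair(1, 2, 3);  // [1, 2] — third argument ignored, no error
pair(1);        // [1, undefined]

// This is exactly how xs.map(parseInt) goes wrong: map always supplies
// (value, index, array), and the callback quietly takes the first two.
['10', '10', '10'].map(pair);  // [['10', 0], ['10', 1], ['10', 2]]
```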
No other language I know of does this, nor do I think this would be a particularly desirable feature. vorg I suspect Nashorn, the just released edition of JavaScript for the JVM, will be heavily promoted by Oracle and become heavily used for quick and dirties manipulating and testing Java classes, putting a dent into use of Groovy and Xtend in Java shops. After all, people who learn and work in Java will want to learn JavaScript for the same sort of reasons. Very impressive to have been recorded "April 2014" and released "April 2013." Seriously, though, great presentation. Agreed! I too was wondering isn't the discovery of time travel the bigger story here? /s No, at the end of the day, the discovery of time travel ended up being a really trivial achievement because of paradox. Now, the scientific knowledge we picked up en route was monumental, but that's something else. He says several times that JavaScript succeeded in spite of being a bad language because it was the only choice. How come we're not all writing Java applets or Flash apps? Because Java, Flash, etc. couldn't easily manipulate the DOM. Javascript won for this reason. or VBScript for that matter.. I think there's some confusion about why JS won. JS couldn't easily manipulate the DOM either until JQuery in 2005-2006. The fact that Java, ActiveX etc.. had full control of the system and causes problems ensuring security was an issue, but it is not the reason why JS beat them all. Don't discount the power of 1) free and 2) easy to use software that is 3) not controlled by a single corporation. JS is the only web programming language that is all of these. Yea, maybe Python or Clojure in the browser would be cool. I would argue Clojure is absolutely more difficult for a novice to learn, and Python provides what additional benefit? JS was there first. The only reasons why plugins existed is you couldn't do these kinds of things in the DOM. 
JQ, and the subsequent advances in browser technology, HTML, CSS, JS - made it so you can. Also, other things being equal, programmers will choose elegance over bloat, fewer layers of abstraction over more. Plugin architecture became just an unnecessary layer between the programmer and the browser after HTML/JS/CSS caught up. JS did not become ubiquitous by accident, or because it was the only choice. There were many choices (all being pimped by big well-funded companies). JS won because it was better than the alternatives.

The DOM in that time wasn't as fancy as it is now. The real reason is security.

There were other advantages. To write JS you just need a text editor, and it's easy to pick up. To write Flash requires spending several hundred dollars. To write Java requires the JDK and learning Java.

I used to do a good amount of flash development - you could actually do it with just a text editor and a compiler (which was free). There were also quite nice free IDEs, like FlashDevelop.

We're talking late 90s/early 2000s here. If anything like that existed during Flash's heyday, I certainly wasn't aware of it.

If I had to pinpoint it, I'd say Flash's primetime was around 2005-2008 perhaps, and FlashDevelop was available then.

Guess we probably define its prime differently haha, I'm thinking more of when it matured - AS3 as a language, lots of tooling choices, etc.

I wasn't ever anything close to a professional Flash developer; I'll take your word for it if you say that was the best time to be developing for it. I was thinking about the days of Homestar Runner, Weebl and Bob, Newgrounds, and so on, when flash cartoons and games were (for kids, at least) a huge part of internet culture, and everyone wanted to be a Flash animator. Youtube kinda killed the Flash cartoon medium, sadly.
Sure, videos are simpler and don't rely on a proprietary binary blob, but there's nothing like loading up a Strong Bad email and clicking random things (or, uh, holding down tab) trying to find secrets.

Ah, don't give me too much credit haha, it was more of a side-project thing for me; I definitely wasn't a professional, especially on the animation side of things (as opposed to the programming side). I was also more involved with the games side of Flash, which Flash became much stronger at as ActionScript 3 came out, which coincided with much better Flash performance. Flash advertising and simple animations were probably stronger earlier. I'm just interested in the topic because it's kind of neat to look back at the internet and observe its history and the changes it's gone through. Just did a little wikipediaing for fun - here are when a few different websites / notable games were released:

Newgrounds: 1995
Homestarrunner: 2000
Miniclip: 2001
Armor Games: 2005
Kongregate: 2006
Fancy Pants Adventures: 2006
Desktop Tower Defense: 2007

Flash and Java also required a compile. Javascript just required that you click refresh. Especially on 1995 technology, that mattered.

Compiling Java took a while. I didn't use Flash enough to retain an impression of speed, but it sure wasn't instantaneous.

It's also the reason why Flash was so prevalent until recently and is still installed on 90-something % of desktop computers: it's faster. Significantly faster, and especially so in the 90s and early 2000s.

While security is the main answer, it was also that Java and Flash aren't necessarily available. That is, getting them to run on another machine was frequently a huge issue, especially if you tried to put in any kind of complexity. Javascript, on the other hand, was omnipresent and comparatively accessible. It was the least bad option by a wide, wide margin. For a different comparison, I switched from Java applets to PHP in the early 2000s.
I didn't really get into Javascript until many, many years later around 2009: before that, Javascript was mostly a way to make Flash work properly. Oh, yeah, especially after Microsoft stopped shipping Java. There was also the version issue to worry about. "Pardon me, Mr./Ms Customer/User -- would you mind terribly going and downloading and installing a 20 MB Java update on your 14.4k dialup connection before using this page?" Nightmarish, it was. I always found it a bit hilarious how Sun, after getting Microsoft rather onboard the Java train, albeit with their necessary native extensions, decides to sue them and put an end to it. And promptly kills off Java distribution and adoption by the largest software developer in the world. Even stranger is how Sun, a hardware/platform company, decided making a popular platform that's hardware and platform independent would help their business. Sometimes I wonder if there was a really well thought-out plan, or people were just doing things. The "necessity" of those extensions is debatable, and they meant that code wouldn't be portable to Sun's implementation. There was real cause for concern, and there weren't a lot of other options for fixing it. Sun probably also realized that they weren't about to compete directly with mighty Microsoft on platform lockin of all things, so they played a different game. Flash still powers Youtube for most users, Silverlight for Netflix and Unity's plugin is required for most 3D games on Chrome's Marketplace (not sure where else to look for successful HTML5 games). because there a lot of bad programmers use it to write a lot of page effects, like alert("log in required"), not apps. cbhl Well, about ten or fifteen years ago, "we all were" would have been the answer. Except that back then, there were multiple choices -- plug-ins meant you could choose Java, or Flash, or ActiveX (Visual Basic 6, anyone?), or VRML for that matter. 
The number of security issues that plug-ins have had in the last two decades makes most of them non-starters nowadays, although there are still plenty of sites that use them extensively (say, children's game websites like Neopets and Nick Jr.'s website) depending on the target audience. Also, apparently internet banking and ecommerce in South Korea relies heavily on ActiveX.

Stellar stuff. Hugely enjoyable. Very interesting thought experiment. I won't spoil it for any of you, just go and watch! Mr. Bernhardt, you have outdone yourself, sir :)

Consider the relationship between Chromebooks and METAL. (I'm typing this from my Pixel...)

cbhl
Bernhardt later tweeted: "I gave The Birth & Death of JavaScript seven times and no one ever asked why METAL wasn't written in Rust."

It was assumed because that was/will be a foregone conclusion.

Extraordinarily entertaining and well presented.

Where did you get the footage of Epic Citadel used in the talk? http://unrealengine.com/html5 seems to have been purged from the internet (possibly due to this year's UE4 announcements?) and I can't find any mirrors anywhere. Which is a shame, because that demo was how I used to prove to people that asm.js and the like were a Real Thing.

https://web.archive.org/web/*/https://www.unrealengine.com/h...

Not Sure If Serious, but this doesn't work at all in any browser I've tried it in. I don't think archive.org especially knows how to mirror a giant weird experimental single-page app.
After I'm done with JS I'll probably jump into something else (Rust, Go, C, C++, Java, whatever helps do the stuff I want). But watching this video, I'm confused: I avoided CoffeeScript because I read in their documentation that in order to debug the code you have to actually know JavaScript, so I figured that the best thing to do is learn JS and then use an abstraction (i.e. CoffeeScript) and tools like AngularJS and Node.js... Is my approach wrong? :-/

You can get around it to some extent with source maps and so on - just make sure you're generating them with whatever build process you use. In practice, however, all that lovely CoffeeScript syntax can easily trip you up; often something will compile successfully, but not to the 'right' Javascript. I wouldn't recommend CS until you get your head around the fundamentals of JS. In particular, CS does some very 'clever' things with the Javascript this object; I have certainly lost my scope at unexpected points in CS programs (often in loops). When you're optimising code, furthermore, you definitely need a strong sense of what JS code you'll get out the other end. I'd recommend Reginald Braithwaite's Javascript Allonge as an overview of JS semantics - the material on scopes and environments is very useful, given that JS behaviour on that score is ... idiosyncratic. https://leanpub.com/javascript-allonge/read

I guess I don't really get the point here. This video walks a line between comedy and fact where I'm not really satisfied with either. I can't always tell what's a joke; does he actually believe people would write software to compile to ASM instead of javascript because there are a few WTFs in js's "hashmaps"? More likely a newer version will come out before 2035? Or was that a joke? I also feel like poking fun at "yavascript" at a python conference is cheap and plays to an audience's basest desires. Really I see a mixture of the following:

- Predictions about the future, some of which are just clearly jokes (e.g.
5 year war)
- Insulting javascript, preferring clojure
- Talking about weird shit you could, but never would, do with ASM js
- Talking about a library that allegedly runs native code 4% faster in some benchmarks, with a simplistic explanation about ring0-to-ring3 overhead.

I'm not sure I understand the claims toward the end of the talk about there no longer being binaries and debuggers and linkers, etc. with METAL. I mean, instead of machine code "binaries", don't we now have asm blobs instead? What happens when I need to debug some opaque asm blob that I don't have the source to? Wouldn't I use something not so unlike gdb? Or what happens when one asm blob wants to reuse code from another asm blob -- won't there have to be something fairly analogous to a linker to match them up and put names from both into the VM's namespace?

nice

nice, ultimately languages don't die, unless they are closed source and used for a single purpose (AS3). In 2035, people will still be writing Javascript. I wonder what the language will look like though. Will it get type hinting like PHP? Or type coercion? Will it enforce strict encapsulation and message passing like Ruby? Will I be able to create ad hoc functions just by implementing call/apply on an object? Or subclass Array? Anyway, I guess we'll still be writing a lot of ES5 in the 5 years to come. I think there is a good chance that an alpha version of ES6 will be tentatively rolled out by 2035, 2036 latest.

AS3 is not dead, and it is now open source.

source?

http://flex.apache.org/

Dead for all practical purposes.

Ad hoc functions can be written using ES6 Proxy [1].

I like that he mentions "integer". It is still very incredible how JavaScript can work well without an integer construct. Or threads and shared memory. Or bells and whistles.

Yes, as a high-level scripting language it has many uses. You're never gonna write a kernel in it, admittedly.

pekk
And it would work better with integers.
Or are we claiming now it was a good decision to force all numbers to be floats because look how awesome node is?

modern browsers support webworkers so you get "threads" but still no shared memory.

"JavaScript can work well" - depends on what is understood by 'well'. Some craftsmen are capable of building cars from junk.

I wish some of those talks were available for purchase on their own and not in the season packets. Definitely a few I'd buy since I liked this talk and the demo on the site. Guy has good vim skills for sure.

The pricing model used to be different -- $8/mo. He changed it when he stopped producing the series. I agree the current pricing doesn't make sense. I feel slighted for having subscribed for several months, but would now have to pay _more_ for content that I used to have access to. Ah, too bad. That said, the material was compelling enough to buy at the time!
Kiro
A bit OT but what is the problem with omitting function arguments?
They'll have the value undefined which will do god knows what after some implicit type coercion
Not necessarily anything as such, but it's the sort of thing that can easily lead to bugs if you don't know what you're doing. It's the only way to overload a function with multiple signatures, though, so most libraries and frameworks make heavy use of it.
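Concretely, the "god knows what" is whatever the coercion rules make of undefined; `add` and `greet` here are hypothetical functions for illustration:

```javascript
function add(a, b) { return a + b; }
function greet(name) { return 'hi ' + name; }

// Numeric + coerces the missing argument (undefined) to NaN.
add(1);   // NaN
// String + coerces undefined to the string "undefined" instead.
greet();  // 'hi undefined'
```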
I absolutely loved this.
ika
I want a C interpreter
LLVM ships with lli, which can interpret LLVM bitcode generated by clang, a C compiler.
http://en.wikipedia.org/wiki/CINT
why not put that in browsers ?
To make a complete platform, you also need APIs, so there's more to it than just picking a language. You also need to figure out how to sandbox it; CINT appears to give programmers access to unrestricted pointers. You also want to get multiple browser vendors to agree, so some kind of specification is desired; CINT targets its own unstandardized subset of C. And you ideally want it to go fast, but CINT appears to be pretty slow:

http://benchmarksgame.alioth.debian.org/u32/compare.php?lang...

So there'd be some work to do. You could also compile the code, but complete C compilers are not fast, in browser terms.

ha-zum yavascript
jr06
Video tl;dw:

Gary Bernhardt (rightly) says that JavaScript is shit (with some other insights).

50%: "Waahhh, JavaScript is awesome and Node.js is wonderful, shut up Gary Bernhardt."

25%: Smug twats talking about how they're too busy changing the world with JavaScript to even bother to comment.

25%: Pedants and know-it-alls having sub-debates within sub-debates.

Pretty standard turnout. See you tomorrow.

pekk
Thanks for the summary, but I didn't exactly see that he was saying JavaScript is shit so much as that it was imperfect (10 days, etc.) but that didn't even matter.
It's been kind of fun watching JS developers reinventing good chunks of computer science and operating systems research while developing node.

This talk has convinced me that their next step will be attempting to reinvent computer engineering itself.

It's a pretty cool time to be alive.

"I get back to the DOM"
somebody tell this to the node.js crowd
Can so many lemmings be wrong?
This is actually not a bad lecture. Very interesting, a nice idea and surprising.
"It's not pro- or anti-JavaScript;"

OK

Did you watch it?
For those unfamiliar, Gary Bernhardt is the same guy who did the famous "Wat" talk on JavaScript:
vor_
Classic video, though it's wrong at times. For instance, the audience member who corrected him was right.
couldn't quite make it out, what did he correct?
vor_
The second JavaScript example, when he told someone in the audience, "No, that's just an object." It was a string.
I knew that it was a string. If you listen closely, you'll hear that he asked "is that an array of object?" He probably asked that because it's in square brackets. I said "No, it's just an object".

I've probably seen twenty people call this "wrong", which frustrates me. It's not wrong. It was a stringified object! I didn't say "stringified" because it wasn't relevant to the question of whether the object was in an array!

There are other things in Wat that are genuinely wrong, though, like the fencepost error about "16 commas", which mistake will haunt me forever.

Maybe worth releasing a transcript, at this point?
It's only 15 commas man, 15 commas. That extra comma could kill someone.

Speaking of which, why does WAT do different things on node.js?

> why does WAT do different things on node.js?

Wrapping the same input in parenthesis (eg, '({} + [])') yields different results.

"That extra comma could kill someone."

Only in old versions of Internet Explorer.

It is wrong because it is the toString of Object, because the + operator wants to do string concatenation. You were misleading the audience, both by using a shell which doesn't show strings with quotes, and saying that the toString of the object is 'just an object'.

And that is not the only thing that is misleading, as you clearly said that {} was an object. Yes, the syntax in js is weird as it looks like an object, but it isn't. Again, a better shell would not let you do this.

As Gary explained, he wasn't wrong in his response to the question, because the audience member was asking whether [object Object] means the object is in an array. It doesn't. The string point is moot.

I do agree the {} + [] example has always felt a bit unfair to me (for the reason that {} is a block), but whatever, it's a light-hearted talk.
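The block-vs-expression point can be checked directly; eval is used here only to force statement vs expression position:

```javascript
// At the start of a statement, {} parses as an empty block, so the
// remaining expression is the unary +[] — which is 0.
eval('{} + []');    // 0

// Parenthesized, {} is an object literal, and + concatenates the
// toString results: '[object Object]' + ''.
eval('({} + [])');  // '[object Object]'
```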

The video seems to be down now? Anyone have a mirror?

edit: nevermind, I clicked the download link. But I'm still wondering why the video's unplayable on the site.

Works with Chrome not Firefox. Yay! 2014 ;)
Funniest tech video I've seen! Actually maybe even minus the "tech".
That was really funny. First time I've laughed all day. Thanks for sharing.
Brilliant - everybody working with JavaScript should watch this!
> Javascript is a bad choice

Javascript is great once used in a "good" way. It's flexible & it's almost everywhere. Spending all your time complaining about how "bad" Javascript is seems kind of pointless. If you use Javascript, learn how to use it well.

Master sushi chefs don't sit there complaining how bad knives are because the knives need to be constantly sharpened.

If you are cooking spaghetti, learn how to strain the noodles, instead of using silly examples assuming people are incompetent at learning a tool.

The fact that something is "almost everywhere" doesn't make it good. It makes it useful at most.

And your examples make sense, javascript doesn't (in some cases) so it's very appropriate to complain.

> so it's very appropriate to complain

In that case, it's appropriate to complain about gravity & being restricted to the speed of light?

Are you seriously equating the laws of physics and human artefacts? That would be ludicrous. While the laws of physics are set in stone, human artefacts can be remade.

That changes everything

I'm sure you have the skills required to, say, write a preprocessor for whatever language you are using, and add some special constructs in it. Missing feature? Done in a few days. So…

If the laws of physics suck, suck it up.

If your tools suck, change them.

> Are you seriously equating the laws of physics and human artefacts?

Yes I am. The property that they share is they will not be changed or avoided in the near future.

Another property is that despite certain limitations, you can still accomplish many things. If you focus on these limitations, you will accomplish less.

> If the laws of physics suck, suck it up.

Not in all cases. Physics is just a model of our understanding of physical existence. Einstein demonstrated that.

> If your tools suck, change them.

I guess if it's worth it to spend that much effort, then go ahead. Just know that the frequent examples of javascript's "problems" are easily surmountable, that is if you don't dwell on these "problems". Javascript has some great attributes to it.

Indeed, it does not "suck". That's like saying the human body sucks because we have this ridiculous tail bone and wisdom teeth. No accounting for taste, I suppose.

I choose to focus on that and progress in mastery of my craft. If you want to complain and/or change your tools, go ahead. I don't judge you.

> Yes I am. The property that they share is they will not be changed or avoided in the near future.

You vastly overestimate the effort it takes to change your tools. When I was talking of a few days to add a feature to a language, that was a conservative estimate. With proper knowledge it's more like hours. And I'm not even assuming access to the implementation of the language. Source-to-source transformations are generally more than enough.

Heck, I have done it to Haskell and Lua. And it wasn't a simple language feature, it was Parsing Expression Grammars (the full business). I used no special tools. I just bootstrapped from MetaII (I wrote the first version by hand, then wrote about 30 compilers to the nearly final version). (For Haskell, I took a more direct route by using the Parsec library.)

Granted, writing a full compiler to asm.js is a fairly large undertaking. But fixing bits and pieces of the language is easy. Real easy.
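To make the "fixing bits and pieces is easy" claim concrete, here is a deliberately toy sketch of a source-to-source transform. Everything in it is invented for illustration: the `|>` pipeline operator is not real JavaScript, `preprocess` is a made-up helper, and a real transform would use a proper parser rather than a regex:

```javascript
// Toy source-to-source transform: rewrite a hypothetical `x |> f`
// pipeline operator into ordinary nested function calls.
function preprocess(src) {
  let prev;
  do {
    prev = src;
    // Fold the leftmost `a |> f` into `f(a)`, then repeat until stable.
    src = src.replace(/([\w.()\[\]"']+)\s*\|>\s*([\w.]+)/, "$2($1)");
  } while (src !== prev);
  return src;
}

console.log(preprocess("x |> trim |> shout")); // shout(trim(x))
```

The point is only that a few lines of rewriting can paper over a missing language feature; anything production-grade would build on a real parser (the way transpilers do).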

> Not in all cases. Physics is just a model of our understanding of physical existence.

Oh, come on, don't play dumb. You know I was talking about the way the universe really works, not the way we think it works.

> I choose to focus on that and progress in mastery of my craft. If you want to complain and/or change your tools, go ahead. I don't judge you.

I'm not sure what you're saying. It sounds like you want to focus on particular programming languages. This would be a mistake, pure and simple. You want to master the underlying principles of programming languages. It can let you pick up the next big thing in a few days. It can let you perceive design flaws (such as dynamic scoping). It can let you manipulate your tools, instead of just using them.

---

My advice to you: if you haven't already, go learn a language from a paradigm you don't know. I suggest Haskell. Also write an interpreter for a toy language. Trust me, that's time well spent. For instance, knowing Ocaml made me a better C++ programmer.

First, I'd like to point out that your tone is attacking & condescending. Why?

> You vastly overestimate the effort it takes to change your tools.

Cool! If you don't mind the asset overhead, having to recreate the existing javascript ecosystem, & the abstraction mapping, & the other unknown unknowns, then it's all good. Are there any well-known production sites that use such techniques? I don't doubt there will be, but are such techniques "ready for prime time"?

I personally have not experienced enough pain to be motivated to all that.

> Oh, come on, don't play dumb. You know I was talking about the way the universe really works, not the way we think it works.

The thing about existence is we don't know about it in its entirety. Even if we know the rules, there are many mysteries to explore. It's wonderful :-)

> It sounds like you want to focus on particular programming languages. This would be a mistake, pure and simple. You want to master the underlying principles of programming languages.

I am mastering the underlying principles of programming languages.

I want to focus on getting better, faster, & smarter. For the web, it's nice to have everything in one language. Lots of sharing of logic. Keeping DRY. Being efficient with time. Smaller team sizes. More stuff getting done.

Maybe compiling to javascript will help for other languages.

I'm a fan of dynamic languages. There's more than one way to master the craft. Asserting your one true way is a failure of imagination.

I doubt it. You vastly underestimate my ability to adapt & evolve ;-)

Maybe one day. In the mean time, I'm focusing on becoming a more fully rounded thinker. That means subjects outside of programming. Learning yet another language has diminishing returns.

I'm humble enough to not give you unsolicited advice, which would only serve my ego.

Ooh, and I agree. OCaml, Erlang, & Lisp are fun languages. Javascript is also fun.

With this logic you can defend anything. PHP's great if used in a good way: Zuckerberg's a billionaire. Right? Why is anyone even bothering with PL these days?

Sushi knives don't decide to cut you because the rice came from a different origin. And I'd guess that most craftsmen, outside of ritual and tradition, would love for their tools to have fewer disadvantages.

If you want to draw an analogy to spaghetti (?), it'd be like complaining that the only kind of spaghetti you can buy locally cooks only at a specific temperature, and even a bit more turns it to mush. And the reason is because the local government passed a bylaw with only a few hours of consultation that ended up banning imports of better kinds of spaghetti.

While it might be "pointless" to spend all your time complaining, there's certainly value in asking "wat" and pointing out absurdity.

The "wat" is a good first step in identifying problems. However, what I see is people getting stuck on "wat" and not moving forward. People would rather win an argument than advance knowledge & the practice. Lots of ego, programmers have.

In the mean time, one can learn to appreciate & use javascript's strengths. It can be quite fun, liberating, & useful. Anecdotally, I have not run into these crazy issues, and I program in javascript every day. I also have a large app and the framework I built is custom.

I liken this to using C++, Unix, & bash as a base. Yes, you could say these tools suck and spend time creating, marketing, & community-building for a new tool. Or you can iterate & improve upon these existing tools. There's no wrong answer. What do you want to accomplish?

> Sushi knives don't decide to cut you because the rice came from a different origin.

That analogy seems like a stretch. Care to explain? Javascript works with different locales. There are many international websites that use javascript.

Also, javascript does not "decide" to create a bug in your program. You create that bug by misusing the tool. You will get further if you take some responsibility and improve your practice.

> And I'd guess that most craftsmen, outside of a ritual and tradition would love for their tools to have less disadvantages.

I agree with that. Usually the improvements are iterative. One could use a laser cutter (which does not need sharpening) to cut sushi, but that would also burn it. Here's a good talk (Clojure: Programming with Hand Tools).

Ritual & tradition is a social tool to propagate knowledge, idioms, & practices across generations. It makes sense to challenge ritual & tradition so they improve over time. It does not make sense to whine about it without doing anything.

> it'd be like complaining that the only kind of spaghetti you can buy locally cooks only at a specific temperature, and even a bit more turns it to mush

Not getting your analogy. This seems like a stretch, similar to the person who cannot strain the spaghetti noodles on the video. Care to explain?

The following perfectly describes my sentiments about all the complaints people have about JavaScript, PHP, <name your favourite hated language>:
So we should just give up completely on trying to make better things?

I don't think that was the point of the video (some amount of gratitude for the things we have) -- but then maybe I'm misunderstanding your point.

On the contrary. We should strive to correct all the "wats" that obviously exist in all these languages, but most of what I see is just complaints, most of them ignoring the amazing things that can be done with these technologies. Over 30 years I've been programming in more languages than I care to count, and I don't remember at any point having a specific language stop me from achieving my goal because it has some traps or design flaws. I always made sure to know about them and make use of the language's strong points instead of concentrating on the weak.

And we both understood the point of the video. The fact that there is much to be grateful for does not mean that we shouldn't improve on what needs improving. But for heaven's sake, if you're not going to improve on it, stop whining about it and be grateful for the amazing things it does enable.

tl;dr: you can't necessarily change the troublesome technology, so you might have to leave. But in order to have a viable alternative to that "bad" technology, you need other people (case in point: mindshare of JS). In order to get more people to "your side", you might need to point out what is wrong with the original technology.

> But for heaven's sake, if you're not going to improve on it, stop whining about it and be grateful for the amazing things it does enable.

Sometimes you're not in a position to even be able to change something, even if you wanted to. The ideas you have in mind for a technology might fly in the face of how the community around that technology, or the guardians/maintainers of it, thinks of it - introducing these changes might break too much stuff that is dependent on it, the changes might fly in the face of the culture around that technology.

So if you have some technology that you think is flawed - subjectively, or even somewhat objectively if you have enough conviction - and you can not do anything about it, you only have two choices: embrace it and try to work with it despite its flaws, or abandon ship.

But if you want to abandon ship, you probably want to find a safe harbor, eventually. ie a place where you can develop or utilize some other technology. But that place might be sparsely populated, because everyone else is working with that other technology. So what do you do? You suggest that others jump ship. :)

Assuming that there is actually some kind of objective merit to complain about a specific technology, it might be wise to complain to others about that technology. That way they can hopefully use that info to make an informed choice, and perhaps abandon their current technology for another technology. In time, you might even get enough people to come over to this other technology that that community is big enough to support that technology as a valid alternative to the "bad" technology. But what if everyone just stfu'ed about what their "negative" thoughts are on a technology? Would that other technology be able to get enough "acolytes" in order to be a viable alternative? Probably not, because everyone was too "positive" and polite to point out how that technology might be better than the old technology.

Would JS even be so controversial if it wasn't for that it is so entrenched in Web development? Is that not a great example of how important mindshare can be?

jerf
"I don't remember at any point having a specific language stop me from achieving my goal because it has some traps or design flaws."

I have to admit, a language has never stopped me personally. But it most assuredly has hurt me when trying to program with other people, who do not have a direct psychic hotline into my brain that tells them what preconditions must hold before my code will work properly, and what things they can and can not do with a certain library, and most importantly, why they can and can not do those things. Languages that allow me to encode more of those things into the program itself, instead of the ambient documentation-that-nobody-ever-reads-even-when-I've-put-tons-of-work-into-it, work better.

And as my memory isn't all that great, it turns out that if I'm away from my own code for long enough, I become one of those people who don't have a direct hotline to my-brain-in-the-past.

> But it most assuredly has hurt me when trying to program with other people, who do not have a direct psychic hotline into my brain that tells them what preconditions must hold before my code will work properly, and what things they can and can not do with a certain library, and most importantly, why they can and can not do those things

Programming is hard. It's an ongoing process of mastery. This is true with any programming language. There is no silver bullet.

> Languages that allow me to encode more of those things into the program itself

There are plenty of tools that almost every language provides for you. It's an architectural concern to ensure that there is as little mapping as possible between the domain and the code.

I personally find Javascript to be flexible, which allows me to architect my software in a way that is communicative of the domain, without many restrictions.

> I become one of those people who don't have a direct hotline to my-brain-in-the-past

A story is a great way to communicate information. Automated functional (black box) testing is also good. Also, try to reduce the mapping between the domain and the software. Ideally, the software (naming) should have a 1-1 map to the domain.

Also, keep the structures flat, as this idiom tends to reduce complexity.

Keep consistent & iterate on architectural idioms between projects.

These are some ways to improve communicability of the codebase & to have insight into the business domain logic.

jerf

Ah, you see, there's the problem... this wasn't business logic. To put it in Haskell terms, I had code that was not in IO, but I couldn't actually encode that restriction in the language.

Most of your post amounts to "program better", which is vacuous advice. We've spent decades telling each other to "program better". We've proved to my satisfaction that's not enough. Have you used languages not from the same tradition as Javascript? It is possible, even likely, that you are not aware of the options that are available out there, even today.

> Ah, you see, there's the problem... this wasn't business logic.

What is "this"?

> Most of your post amounts to "program better", which is vacuous advice

No it's not. It's certainly better than dwelling on some edge case shortcomings and limiting your growth by blaming the tools.

No tool is perfect. Learn to use it better. Master it. Improve it. If you want to use a different tool, then use a different tool. There's no need to spread negativity.

There has been plenty of progress in Javascript idioms & programming idioms in the past few decades. You can accomplish many things with Javascript and the environment will only continue to improve. Programmers will continue to get better from the ecosystem & practices that have been learned over time.

Even your mighty Haskell is not perfect. Time to accept non-perfection & evolve :-)

> Have you used languages not from the same tradition as Javascript?

Yes, I have. I also draw inspiration from other languages & environments.

> It is possible, even likely, that you are not aware of the options that are available out there, even today.

Yes, I'm aware. When they prove themselves, I'll consider using them. In the meantime (and always), I'm happily mastering my craft free of unnecessary angst.

In that video, Louis is talking about himself in the third person: he's the one complaining on the plane. It's not about some group of "others" who are "bad" and don't appreciate the world; it's about our nature as humans.
> he's the one complaining on the plane.

Hmm, "the guy next to me goes 'pfft, this is bullsh*t'".

> it's about our nature as humans

Well, it's about our current generation of Americans (maybe Westerners). This complaining seems like unnecessary stress to me. I understand, because I used to do it.

I read (or watched) Louis say that he was the guy on the plane, but this was a couple of years ago and I'm failing to google it now.

I'm not convinced that this behavior is specific to Americans or Westerners; it may just be that we're most attuned to our own ways of expressing it. I'm also not convinced that it's a general property of the species, though; it would be arrogant for me to claim that kind of fundamental knowledge of how the human mind works. That kind of arrogance is the bread and butter of Hacker News, of course, so this is now necessarily a bad HN comment. ;)

I should've said something more like "it's about the way that we all act towards technology".

Brendan Eich covered this subject at O'Reilly Fluent conference in 2012:
Seems he took the Wat talk a little personally, but I'm not sure why he defends {} + [] by saying the first { is a statement... wat?
It's automatic semicolon insertion. The browser translates {} + [] into {}; + [], so + [] === 0 too. {}; is undefined.
Being pedantic: it's NOT semicolon insertion. Your actual point is correct, the {} is an empty block statement, and the +[] is a separate expression statement. It's equivalent to "{} 0".

However, semicolon insertion is only triggered when there's a newline at a position where there would otherwise be a syntax error. Here, neither is the case: blocks don't have to be terminated by semicolon (so no syntax error), and there's no newline in the source code!

It's because they wanted to say both:

    if (a) b;

and

    if (a) { b; c; }

which tends to make you think of curly braces as a syntactic feature which can appear anywhere, turning many lines of code into one line of code. If you think that way then these should possibly also be valid:

    {b; c}
    {b}
    {}

but, since JS scope is function-oriented and in other places (e.g. functions) the braces are ultimately needed anyways, even for one-liners, it seems like this was a stupid choice and we should have just rejected the form:

    if (a) b;

and then the reuse of {} for lightweight (if non-robust) hashmaps would perhaps be unambiguous again.
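The parsing quirk described above is easy to verify directly; using `eval` keeps each snippet in statement position, which is where the ambiguity lives:

```javascript
// At the start of a statement, {} parses as an empty block, so what
// remains is the separate expression statement +[], i.e. Number("") === 0.
console.log(eval("{} + []"));   // 0

// Parentheses force expression position: both operands are converted
// to strings and concatenated.
console.log(eval("({} + [])")); // "[object Object]"

// With the array first there is no ambiguity, since [] can't open a block.
console.log(eval("[] + {}"));   // "[object Object]"
```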
Which is exactly what Perl did, but to alleviate the clumsiness of single-statement if conditionals they added a post-conditional if statement of the form STATEMENT if CONDITION; (which has the benefit of being how some people express simple conditionals in real life: "Go left if you see the blue house.")
Brendan Eich is a homophobe and whoever links to his videos are complicit in his bigotry.

All his opinions should be discarded.

You have no idea whether he's a homophobe or not, and clearly when the subject is Javascript his opinions should be considered very carefully.
Nobody's opinions should be discarded based on their behavior. Otherwise, we'd have a scant few scientists, artists, and thinkers in history actually worth discussing. (Also, people who link to him are complicit? Are you serious?)
If by cover you mean he essentially shrugs and says "it was the 90s" and moves on to ES6.
Which seems a pretty appropriate reaction, no? :-)
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.
~ [email protected]