HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Why Continuations are Coming to Java

Ron Pressler · InfoQ · 199 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Ron Pressler's video "Why Continuations are Coming to Java".
Watch on InfoQ [↗]
InfoQ Summary
Ron Pressler discusses and compares the various techniques of dealing with concurrency and IO in both pure functional and imperative programming languages.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jul 02, 2019 · 197 points, 177 comments · submitted by lelf
trias
Honest question, maybe obvious: Why is the java-scheduler/ thread model so much better than the OS-level one? What does the java one do, or better what does it not do? And why can't the principles which make java-scheduling fast be applied to OS-threads?
aardvark179
There's a few different things to unpack in that question.

1. Fibers are more lightweight than threads because we only need to carry around a small amount of state representing the stack used so far by the fiber and a few other things. So it should be possible to have many more of them than we can have OS threads.

2. We can in theory make decisions about scheduling based on what the program is doing in ways that the OS cannot do. For example if one fiber releases a lock we might know to schedule the fiber waiting for that lock in preference to anything else, and would not have to wake all fibers waiting on that lock and let them race to acquire it as commonly happens with OS threads.

3. Some of these advantages can be done with OS threads when combined with user scheduling. This is available in Windows, but has not made it into mainline Linux yet. It allows the application to give the OS useful hints on which thread to schedule next, but it doesn't really reduce the weight of the threads themselves.

(I work for Oracle, and spend some of my time on project loom. These opinions are my own.)

TickleSteve
Regarding point 2)

That is not how OS level threads would work. When a lock is released, the next ready-to-run task (blocked on that lock) will be made runnable. They won't all be released and then 'race' to acquire the lock. The behaviour is identical in user-space or kernel-space.

scott_s
That depends on the synchronization mechanism used, and whether or not that mechanism is hooked into the kernel scheduler. Sometimes it is (such as pthread_mutex) and sometimes it is not (such as rolling your own spin locks).
TickleSteve
ok, any "sane" implementation would not use spin-locks for this purpose, but yes it is technically possible to do so.

Spin-locks are not really the mechanism of choice for this level of abstraction.
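
For readers less familiar with the distinction being drawn here: a hand-rolled spin lock busy-waits in user space and never tells the scheduler that anyone is waiting, whereas a parking lock (pthread_mutex, or java.util.concurrent.locks.ReentrantLock on the JVM) suspends the waiter so the scheduler can hand the lock to a specific thread. A minimal illustrative sketch of the spin-lock side in Java (not production code, for the reasons given above):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal test-and-set spin lock: waiters burn CPU and effectively "race"
// for the lock, because the scheduler has no idea anyone is waiting on it.
final class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Keep retrying until we win the compare-and-set.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU; still busy-waiting
        }
    }

    void unlock() {
        locked.set(false); // no waiter is woken; they just notice on their next spin
    }
}
```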

codebje
The OS thread model is lightweight processes - each OS thread is essentially a separate process sharing a common section of the process control block. Context switches are still relatively heavyweight, involving a kernel mode switch and a CPU state save and restore.

Application threads are a part of the application runtime environment. Thread scheduling is little more than a function call and doesn't involve a syscall invocation, and can be hooked into runtime environment trigger mechanisms.

Kernels have to be one size fits all, application threads can be tuned more precisely, with greater visibility over the application's state.

marktangotango
Seems like you're saying application threads are “green threads”, when the JVM has used system threads one to one for a long time now. So for the JVM at least there’s no difference between a JVM thread and an OS thread. Some JVM impls may still use green threads, but the major ones haven’t for many years. Apologies if I’ve misunderstood.
avodonosov
OS threads use a statically allocated stack. This limits the number of threads one can have (say, if the stack is 1MB and you have 1GB of memory, you can have 1024 threads max). That's the main difference. I think some savings also come from avoiding full-blown context switches.
garmaine
This is demonstrably not true. That address space isn’t allocated until it is used. Even with initial allocations of 4kB pages, there’s plenty more space than you’re estimating.
pron
Even 4k is a lot, and whatever memory is committed stays committed (and worse, it may even be paged). Perhaps it would be possible for the kernel to uncommit unused stack pages, but I don't think kernels want to assume threads never access memory below their stack pointer.
garmaine
Oh I agree OS threads are really heavyweight and there is a real need for fibers / green threads. I think we're arguing over whether they are 100x more efficient or 10,000x more efficient. Details matter ;) Thanks for your work on this for Java!
lostmsu
I think you severely overestimate the efficiency gain both times.
garmaine
It's a bit dated, but here's a reference:

https://blog.tsunanet.net/2010/11/how-long-does-it-take-to-m...

wahern
4k isn't that much, although realistically it's probably more like 8k--stack + guard page. Even then it's not that much. Consider that even in languages like Go or environments like Node.js where hundreds of thousands of concurrent task "contexts" are common, people still do ridiculous stuff like slurping in entire request and response bodies rather than processing them in a stream-oriented fashion.

The whole point of [partial] user space scheduling is that the process itself has the smarts to handle things like stack mapping and context switching. The opacity of kernel stack management and scheduling is what limits more efficient solutions in this space. Because some user space solutions won't be as smart as others doesn't mean we shouldn't provide the tools needed. It's also worth pointing out that such tools don't prevent other approaches, such as language-level generators. Choosing fibers for logical tasks (like an HTTP request) and generators or coroutines for simpler interfaces (like iterators) would come naturally if both were well-supported up and down the software stack. That people are forced to choose one or the other is a result of engineers making premature architectural compromises.

chrisseaton
No you have it backwards - the address space is always allocated. The physical memory is not allocated (we say committed) until it is used.

But allocating the address space even if it is not committed still consumes finite resources and still limits the number of threads you can create.

xyzzyz
Yeah, but there’s plenty of space to put 2MB stacks in 48 bits (that’s 256 TBs) of address space.
chrisseaton
Again, there's plenty of address space in 48 bits, but you need to commit physical memory in order to allocate the space for the page table for that address space. And that's per-process, and it's going to thrash the TLB.
garmaine
That's not how the OS's page management tables work. The OS assigns space for the stack at a (typically random) location in the process' address space, but no physical allocation is made. The very first time the stack is used a page fault exception is generated, causing a context switch to the OS. Only then does the memory management subsystem allocate a page for the stack and return control back to the program.

Handling memory allocation lazily like this is necessary to handle a number of edge cases, such as spinning up a massive number of short-lived threads. It also prevents thrashing of the TLB cache.

In practice, real operating systems finely tune their behavior here. I would not be surprised at all if a 4kB allocation is made for a thread's stack upon creation in modern operating systems. But I would be very surprised if, e.g., Linux allocated a full 1MB of memory at thread creation time instead of handling the vast majority of it lazily.

EDIT: Oh wait, I think you were mostly agreeing with me :) Yes, my original comment did mess up address allocation vs physical page allocation due to a brain fart. I meant it the other way around and I think we're saying nearly the same thing.

The one major point of difference is that making an address allocation in the page table doesn't require a physical allocation. The OS can either leave that allocated space unconfigured, or assign it a protected page table entry. In either case an access faults, and before killing the process with a core exception the OS knows to first look in its internal lazy delayed-allocation tables to see if the access was to an allocated area of the address space with deferred allocation.

avodonosov
These are good points, thank you.

BTW, I dug a little deeper: Java actually commits the stack memory upon thread creation.

The memory distribution of java process may be seen using jcmd <pid> VM.native_memory as suggested here http://xmlandmore.blogspot.com/2014/09/jdk-8-thread-stack-si...

But due to Linux memory overcommit this memory is not really allocated.

I've managed to create 127000 Java threads on my machine, and the resident memory of this process, as shown by `top`, is 2.167G, i.e. ~17k per thread.

If I disable memory overcommit, only around 32000 threads are created.
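
For anyone who wants to repeat a rough version of this experiment, a minimal sketch (the numbers will vary a lot with OS, overcommit settings, ulimits and -Xss, and running it may make the machine unresponsive):

```java
import java.util.concurrent.locks.LockSupport;

// Spawns plain OS-backed Java threads until creation fails, then reports the count.
// Each thread just parks so it stays alive without doing work; watch RES in `top`.
public class ThreadLimit {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> LockSupport.park());
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (Throwable e) { // typically OutOfMemoryError: unable to create new native thread
            System.out.println("Created " + count + " threads before failing: " + e);
        }
    }
}
```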

garmaine
Thanks for digging deeper into it, that’s some good info.
chrisseaton
> The one major point of difference is that to make an address allocation in the page table doesn't require a physical allocation

No it does require a physical allocation. The page table entry for the new virtual memory needs to be a physical allocation!

People say 'you can have 256 TB of virtual memory'! Yes you can, but you will need 256 GB of physical memory to hold the page table for that, won't you, assuming 4 KB pages, even if none of that 256 TB is committed to physical memory.

You say 'that's not how the OS's page management tables work' - yes it is! Look up Intel 64 and IA-32 Architectures Software Developer’s Manual, Volume 3, Chapter 4, formats of page-directory entry and page-table entry. It's committed memory.

garmaine
You don't have to allocate a page on the table for an area of memory that is not physically allocated at all. Yes, accessing it will trap to the operating system, but that's the entire point.
ygra
You only lose address space as long as that stack memory isn't used, right?
avodonosov
Good point, yes. As mentioned in a sibling comment, even 4k for one page of really allocated memory is significant.
chrisseaton
No you use up address space no matter what.

You're thinking of physical memory - you only use that when the stack memory is used.

But address space is also limited, and it stops you creating as many threads as you may want.

tpxl
The lightweight threads (fibers) are actually not threads on the OS level, they just behave similarly. As such there are no context switches etc., which are slow.
trias
I know, but why doesn't the OS do this too? Most programmers don't care if their "thread" is an actual thread on the machine or just a "fiber". If the benefits are this large, it should be provided by the OS in my opinion.
jacques_chester
OS scheduling requires you to switch contexts between different modes in the CPU. Essentially, the OS has the power to perform operations that regular code does not. Threads and processes are usually built on top of these additional permissions.

Context switching is relatively expensive. The CPU needs to push a lot of application state out of the way to clear the path for the OS code to run, then after the OS is done, push the OS out of the way and retrieve the application's state and code again.

Whereas a fiber remains entirely inside the application code. It never requires a context switch. But a fiber loses some of the powers of the OS: for example, it can't draw hard memory boundaries between fibers that will be enforced by the CPU.

For some things you want processes, for some threads, for some fibers.

wahern
Context switching doesn't need to be as expensive as it is in these cases, it's just that Linux doesn't provide the mechanisms needed to make it more efficient. See, e.g., https://blog.linuxplumbersconf.org/2013/ocw/system/presentat... which implemented a syscall for a process-directed context switch directly to another thread.
TickleSteve
If the OS were to do it, it would involve a switch to kernel-space and would then be termed an OS-level thread...

Another way of putting it is that yes, Operating Systems do do it. That's what an OS-level thread is.

garmaine
They do: it’s called threads.

It’s the OS context switch that makes process threads so much more costly than user-scheduled fibers. You cannot involve the OS if you want efficiency.

Otherwise it is the same exact thing.

chrisseaton
> If the benefits are this large, it should be provided by the OS in my opinion.

Like with most things, there are drawbacks as well as benefits.

lmm
The OS doesn't know the details of what things are specific to a given "thread". So it has to take a "big dumb sledgehammer" approach to switching tasks: it has to switch out the whole stack (4k or 8k) even when there are only a few bytes that belong to that particular task (because it has no way of knowing which bytes those are), and it has to interrupt the tasks at essentially arbitrary times rather than waiting for them to yield (because, again, it has no way of knowing what the yield points are) which then means the tasks have to have more synchronization overhead etc. to work around the fact that they might be preempted at any point.

In this age of VMs/containers for everything I'm not convinced a conventional OS offers a lot of value - OSes made sense when programs needed to access different kinds of hardware and we liked to have multiple processes/users sharing a single machine while broadly trusting each other, but neither of those things is really true any more. Look at unikernels for where I think the future is going - bootable VMs that act as a language runtime that controls things like threading directly, no need for an OS intermediary.

deathanatos
> it has to switch out the whole stack (4k or 8k) even when there are only a few bytes that belong to that particular task (because it has no way of knowing which bytes those are)

This reads to me like you believe the OS must memcpy/move the whole stack out of the way on a context switch. It doesn't; the other thread has its own dedicated memory for its stack, and on a context switch, the stack pointer is simply adjusted to point at the other stack.

Assuming Java's fibers work similar to other green thread implementations, green threads/fibers work similarly — it's just a pointer change, except the adjustment is done in userspace.

(And, to some degree, the OS does know what bytes are stack for any given task. It's whole pages, yes, but that allows the program to manage it otherwise. But I think most green-thread/userspace threads are similar: the stack is preallocated ahead of time and left to the thread to manage. Sure, you might not know the exact range, but you don't really need to? Go, I think, is an interesting outlier here; IIRC, it dynamically expands and contracts the allocated space on the stack in response to the application's demands, though I do think they had some interesting issues w/ loops thrashing allocations if they fell along an allocation boundary. I think they've also long since fixed that issue.)

wahern
Java's fibers actually work like the previous poster described, which may explain the confusion:

> The current prototype implements the mount/dismount operations by copying stack frames from the continuation stack – stored on the Java heap as two Java arrays, an Object array for the references on the stack and a primitive array for primitive values and metadata. Copying a frame from the thread stack (which we also call the vertical stack, or the v-stack) to the continuation stack (also, the horizontal stack, or the h-stack) is called freezing it, while copying a frame from the h-stack to the v-stack is called thawing. The prototype also optionally thaws just a small portion of the h-stack when mounting using an approach called lazy copy; see the JVMLS 2018 talk as well as the section on performance for more detail.

Source: https://wiki.openjdk.java.net/display/loom/Main

The video presentation describes it better. I think in their expected usage scenarios the stacks of fibers aren't particularly deep so the copying isn't that expensive. Also, IIRC, doing it this way was less intrusive to the existing JVM architecture; it's possible in time they'll rearchitect things to use a more traditional technique.

pavlov
Windows provides fibers in addition to threads. Apparently it’s a quite good API although not widely used:

https://nullprogram.com/blog/2019/03/28/

vkaku
The deal is that userland switching basically gives more control and lower latencies and can reduce the overhead of multiple processes taking over the CPU. Busy waiting / polling is not often a bad thing.
vkaku
We don't have a JavaOS outside smartcards yet, which is basically a custom Java runtime with the IDT handlers in it. :) If that happens, we may get that missing super efficient implementation.
exabrial
Simply: less stuff to do. Saving a few stack frames vs a transition into kernel space and subsequent hardware calls.

I imagine Java can do some party tricks with lock sharing too with fibers backed by the same OS thread.

pron
There are a couple of issues here: scheduling, and managing the stack. The reason a runtime can do better than the kernel is that its constraints are (hopefully) less constraining than those of the kernel.

In the case of managing the stack, the kernel must be able to support languages like C/C++ and Rust, that can have internal pointers pointing into the stack itself. This makes it hard to reallocate stacks dynamically and move them around in memory. The JVM doesn't allow such pointers in application code, and internal bookkeeping pointers are known.

As to the scheduler, the kernel must support very different kinds of threads: threads that serve server transactions that block very often, and threads that, say, encode a video, that rarely block at all. These different kinds of threads are best served by different schedulers (e.g. the "frequently blocking" kind of thread is best served by a work-stealing scheduler, but that may not be the best scheduler for other kinds of threads). When you implement fibers in the runtime, you can let the developer choose a scheduler for their needs.

avodonosov
pron, isn't it simpler to just allocate everything on the heap than to reallocate coroutine stacks? Or is allocating on the heap significantly less performant than managing coroutine stacks (stacks need to be copied, while a heap-allocated object can stay the same)?

Also, is there any timeline or estimate when Loom will be released with official JDK?

pron
In the current Loom implementation, continuation stacks are allocated on the heap; they can grow and shrink as ArrayLists do -- the object stays the same, but the backing array needs to be reallocated.
avodonosov
I meant every stack frame to be a separate heap-allocated object.
kasperni
> Also, is there any timeline or estimate when Loom will be released with official JDK?

From another place in this thread:

As OpenJDK has switched to time-based releases, we no longer plan releases based on features. When a feature is ready, it is merged into the mainline and released in the following release. We never commit to a timeline, but I would say that the probability for fibers to land in one of the two releases next year as quite high. Of course, we release projects gradually, and it's possible that some planned fiber features will not land in the first release, or that the performance in the first release will later be improved etc. However, early access Loom binaries (with API still very much in flux) are expected in a month or so, with the intention of gathering feedback on the API. The decision on when fibers are ready to be merged will greatly depend on that feedback.

jimmaswell
That sounds like the runtime has more constraints and thus can make more assumptions, not fewer constraints, while the OS one must be more general.
clhodapp
Constraints on the user free the platform. Freedoms for the user constrain the platform.

If you are interested in someone pontificating about this at length, see https://youtu.be/GqmsQeSzMdw

lilactown
It depends on your perspective.

The OS one has more _requirements_ (the number of "it must do..." is much higher), which constrains its design space much more. I think you can describe that as "having more constraints" because it must meet more needs to even be viable as a solution.

The number of requirements for JVM threads is much lower, thus less constraining, allowing it more freedom to implement solutions that can meet its narrower window of features.

ScottBurson
Ron Pressler!? Isn't that our very own 'pron'?
oblio
You should see him go on /r/java!

I'm kidding, his posts are awesome :)

kasperni
> You should see him go on /r/java!

Yeah, if just someone had banned him, we could have had Loom by now.

Just kidding as well... great work pron

kristianp
The Project Loom webpage: https://wiki.openjdk.java.net/display/loom/Main
dancek
We had a weird network issue in a production system, and because the interplay of different components was very difficult to debug I briefly investigated each component. This led me to run Jersey (a JAX-RS implementation, i.e. a REST library) with a debugger.

Jersey is written in a continuation passing style. That's the only time I've seen the style outside academic discussions of Scheme. I found the code flow difficult to grok and thought continuation passing didn't fit Java very well. That may all be just due to my lack of experience, though.

Granted, the alternatives to handling concurrency each have their own problems. Maybe this is a good idea; it will be interesting to see in practice.

aardvark179
We're using the underlying continuations in Loom to build fibers. You don't need to pass continuations, you just write normal blocking code and the runtime takes care of yielding and scheduling. All you have to do is run things as fibers, which is pretty much like running them as threads.
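
The API was still in flux when this thread was written; in the form that eventually shipped (virtual threads, standard since Java 21), "running things as fibers" looks roughly like this sketch:

```java
import java.io.IOException;
import java.net.Socket;

public class BlockingOnVirtualThread {
    public static void main(String[] args) throws InterruptedException {
        // Ordinary blocking code; when run on a virtual thread, the blocking calls
        // unmount the virtual thread instead of tying up an OS thread.
        Thread t = Thread.ofVirtual().start(() -> {
            try (Socket socket = new Socket("example.com", 80)) {
                socket.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());
                System.out.println(socket.getInputStream().read()); // blocks only this fiber
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        t.join();
    }
}
```
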
lmm
I found iteratees to be the best of all worlds: you get the performance advantages of continuation-passing style, but you can still extract the start, middle, or end of a stream pipeline as an ordinary value that makes sense to reason about and test.
mbrock
Having native access to continuation control lets you avoid writing in continuation passing style. It’s common to implement native continuation control with a compiler pass that automatically converts code into continuation passing style. (Similar to how you would transpile async/await in JavaScript.)
zeveb
Wow, continuations are pretty awesome, because with them you can build any control structure you like. Now if Java only had macros, so that homegrown control structures could be concise & succinct, like built-ins …

It's kind of funny how languages are slowly becoming more and more Lisp-like — even Lisp is available, still offering what it offers.

As an aside: argh, in Firefox it stole the spacebar, so there's no way to scroll down the page, even if I click outside of the video.

Why do web pages do that?

waste_monk
Lack of Continuations / coroutines has been a huge pain in my rump for a while now; it is fantastic that they're finally adding support for it.
Decabytes
Is tail call elimination the same as tail call optimization? I know that schemes use this to keep the stack from blowing up during recursive function calls, and that currently the JVM, and by extension Clojure is not able to handle this.

I also remember reading somewhere that this wasn't possible to add/not on the road map for the JVM. Does project Loom change this?

timgilbert
> Is tail call elimination the same as tail call optimization? I know that schemes use this to keep the stack from blowing up during recursive function calls, and that currently the JVM, and by extension Clojure is not able to handle this.

Yes, the elimination of the tail call is the optimization.

FWIW, Clojure handles this through syntax, so recursive algorithms are generally implemented with loop/recur forms that don't blow up the stack (instead of having a function literally call itself): https://clojuredocs.org/clojure.core/loop
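
In Java terms (the JVM does not eliminate tail calls today), the transformation that TCE performs automatically is roughly what you do by hand when you turn a tail-recursive method into a loop, which is also what Clojure's recur expresses explicitly. A small sketch:

```java
public class TailCalls {
    // Tail-recursive form: every call pushes a frame, so a large n overflows the stack.
    static long sumTo(long n, long acc) {
        if (n == 0) return acc;
        return sumTo(n - 1, acc + n); // a tail call, but the JVM still grows the stack
    }

    // Hand-applied "tail call elimination": the same logic as a loop, constant stack.
    static long sumToLoop(long n, long acc) {
        while (n != 0) {
            acc = acc + n;
            n = n - 1;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumToLoop(10_000_000L, 0)); // fine
        System.out.println(sumTo(10_000_000L, 0));     // StackOverflowError
    }
}
```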

michaelmrose
Clojure does explicit tail calls with recur.
dan-robertson
Yesish.

The way I read it, tail call elimination is the guaranteed application of tail call optimisation with a well defined meaning of what a tail call is. Eg scheme requires TCE.

Tail call optimisation is a transformation a compiler may choose to do to make code that looks like “return foo(...)” run faster. A compiler may choose to not do it because it might not be implemented or faster or it could make debugging harder. There is no guarantee it will happen.

TCO makes some code faster. TCE allows one to write different looking programs knowing they won’t blow up the stack.

This all being said I think most people mean what I have referred to as TCE when they say TCO.

jonathanstrange
Seems like every sufficiently advanced language at one time or another turns into LISP.
Blackthorn
I'm not too familiar with Scheme, but Common Lisp definitely doesn't support anything like this.
qazpot
The rabbit hole goes much deeper than that. All programs eventually turn into LISP. https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
pjmlp
"Invited Talk - Guy Steele" at ClojureTV.

We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp.

avodonosov
The question is not why, but when. When?

Our server has long-waiting HTTP handlers, which occupy Java threads while waiting, limiting the server's throughput to the number of threads Java can maintain. I've been holding off a rewrite to async servlets for several years already to avoid complicating the code, because I hope cheap threads (fibers) will become available, but there is no timeline for Loom. When can we hope it will be released?

digaozao
Maybe with kotlin coroutines you can rewrite the application partially. Is there a difference between that and fibers?
avodonosov
Kotlin coroutines, as well as JS and C# async/await, require us to change the declaration of the calling function. We can call a "suspend" function (which I assume = "async" in JS) only from a "suspend" function, which in turn can only be called by another "suspend" function. So if I change some of my functions to "suspend", everything higher up the call stack should be changed to "suspend", which is not always possible because that may be 3rd party code.

That's not necessarily a deal breaker, but that's the difference.

For Java there is Quasar lib, which introduces async/await through bytecode instrumentation. It's by the same authors who work on the project Loom currently.

I don't use it because: a) the project web site feels a bit abandoned (I guess they concentrate on Loom now); b) I need analogues of wait/notify, synchronized, etc. which are used in this code and not provided by Quasar; Quasar only provides lower-level primitives. One either needs to implement such constructs on top of Quasar, or reimplement the logic. So it's not a drop-in replacement.

Project Loom aims to be mostly a drop-in replacement; in particular, fibers will support synchronized, wait/notify, etc.
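
A Java-flavoured sketch of the difference being described: with async/await-style APIs the asynchrony shows up in the method signature, so every caller has to change, whereas with fibers the same blocking signature just becomes cheap to block on. The method names here are made up for illustration:

```java
import java.util.concurrent.CompletableFuture;

class ColoringExample {
    // Async style: the return type changes, so every caller up the stack
    // has to be rewritten to deal with the future ("function colouring").
    static CompletableFuture<String> fetchUserAsync(long id) {
        return CompletableFuture.supplyAsync(() -> loadFromDb(id));
    }

    // Fiber / lightweight-thread style: the signature stays plain and blocking;
    // only the thread it runs on changes, so callers (including 3rd-party code)
    // don't need to change at all.
    static String fetchUser(long id) {
        return loadFromDb(id); // blocks this fiber, not an OS thread
    }

    // Hypothetical blocking helper standing in for a real JDBC or HTTP call.
    static String loadFromDb(long id) {
        return "user-" + id;
    }
}
```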

jarym
Quasar was created by the same guy (Ron Pressler) that is the tech lead for Project Loom. The reason Quasar feels a bit abandoned is because he's bringing the thinking right into the JVM.

pron
As OpenJDK has switched to time-based releases, we no longer plan releases based on features. When a feature is ready, it is merged into the mainline and released in the following release. We never commit to a timeline, but I would say that the probability for fibers to land in one of the two releases next year as quite high. Of course, we release projects gradually, and it's possible that some planned fiber features will not land in the first release, or that the performance in the first release will later be improved etc.

However, early access Loom binaries (with API still very much in flux) are expected in a month or so, with the intention of gathering feedback on the API. The decision on when fibers are ready to be merged will greatly depend on that feedback.

stcredzero
Because Java eventually implements everything that was done years ago in Smalltalk. Despite some having said it was unnecessary back then. Also, expect that a feature that was implemented in Smalltalk by a single dev as a library (this applies to continuations) will take a small team in Java. (Though the Java implementation may well be more robust and faster than the Smalltalk one from years ago.)
eatonphil
The bits about multi-node computation remind me of Racket's Places [0]. Have any other mainstream languages (Racket doesn't count) tried this technique before?

[0] https://docs.racket-lang.org/distributed-places/index.html

lilactown
On its face, sounds like Erlang/OTP.
eatonphil
Ah duh! Good catch.
punnerud
30min in and I think:

Are they now trying to make Java work like PL/SQL, Cobol, SAP (ABAP = SQL+COBOL)?

I think it is a good idea, and a way to bridge the gap between the old and “new” ways of writing programs. The real reason is not just scalability but traceability, being able to change code at runtime, and being more closely connected to the database...

nimbix
Whatever happened to continuations in Scala? A lot of people were excited about them back in 2009, but if I google them now I still only get pages from 10 years ago. I haven't used Scala in a very long time, so I stopped paying attention to any changes in the language.
hderms
As far as I can tell, Scala concurrency basically boils down to a few different categories that have some overlap: 1. streaming/observable focused, like Monix; 2. IO effect types, like Cats Effect and ZIO; 3. Twitter futures/stdlib futures, which are different from each other in some crucial ways.
wmfiv
Probably need to add 4. Akka.
rzzzt
Apache Commons Javaflow is an early implementation of continuations: https://commons.apache.org/sandbox/commons-javaflow/tutorial...
mcv
I vaguely seem to recall that Rhino, Cocoon's pre-Node serverside javascript, had continuations too. That must have been around 2006 or thereabouts.
bokchoi
I remember them too. I had to dig a little to find the links I remember from the past.

http://web.archive.org/web/20190127111220/https://wiki.apach...

From the Rhino Google group, it looks like some people are still using continuations in Rhino:

https://groups.google.com/forum/#!topic/mozilla-rhino/Gfo-dO...

https://github.com/szegedi/spring-web-jsflow

dmix
That Rhino js engine stuff was all written in Java too.
scanr
They were great. You could serialise them, store them in a database and then deserialise them some time later and keep going. Cheap workflow engine.
stingraycharles
Continuations are a very old concept, and you'll find that many programming languages had support long before this. For example, Scheme has had continuations (call/cc) since 1975, and I'm sure there must have been implementations before that.
dfgdghdf
In the talk he says that continuations compose much better than monads. Can someone elaborate on this?
jerf
In a Java context, I'd say it's undeniably true. Haskell-style composition of monadic values depends on type system features that Java doesn't have and on no account should even be thinking about adding at this point in time. And that's ignoring the syntax mess it would be in Java today, too.

In a Haskell context, since continuations are typically implemented as a monadic value, the statement would be more or less nonsensical. But I don't think that would be the context in question.

Edit: noelwelsh speaks of another level of composition than I was, where one is trying to take, say, State and STM and create a new monadic value that does both those things. The analysis given there is correct as well. My point here is more syntactic and basic functioning, in that even if you pick a single monadic value type to stick with, it's not going to play well in Java anyhow. So, in conclusion, generalized monad-type code in Java is just comprehensively not really possible. Continuations, or what are being called that in modern times, however, fit into Java-type languages, as proved by several very similar languages that have had them for many years, and they work fine.

pron
We're trying to minimize and discourage the use of monads, not encourage it :)

Having said that, monads don't compose well regardless of language.

noelwelsh
Monads don't compose, in the sense that if you have two monads of different types you can't always mash them together to create a single monad that encompasses both of the effects the two monads represent. This is not surprising, as (side-)effects don't in general compose, so there is no reason that monads should.

Now I disagree with Ron's assertion that continuations compose better than monads. Here's why:

All monads can be expressed in terms of the continuation monad. As the name suggests this is just a monadic encoding of continuations. (See https://www.schoolofhaskell.com/school/to-infinity-and-beyon...).

If your language has continuations built-in then all programs are effectively running in some "ambient" monad that can express all other monads. Just like if you have exceptions you're effectively running within an error-handling monad.

So essentially rather than writing `F A` (or `F[A]` or `F<A>` or whatever syntax your language supports) to represent effectful programs all programs are implicitly running within such an effect type.

rjn
There are ways to compose monads, https://en.wikibooks.org/wiki/Haskell/Monad_transformers
pron
It's true that monads and delimited continuations are equivalent, but what we have here aren't ordinary delimited continuations but what's known as multi-prompt delimited continuations, and I call them "scoped continuations" (http://blog.paralleluniverse.co/2015/08/07/scoped-continuati...). Those compose well, and their pure-FP counterpart is algebraic effect systems (based on monads, but which don't use monad transformers).
darkpuma
What impact, if any, would this have on Clojure?
Blackthorn
It would basically be a better and much more seamless Pulsar which is awesome.
didibus
I had a discussion about this with some other folks recently.

Clojure already supports forms of delimited continuations. You can use core.async for example, or cloroutine. That said, it can't yield across stack frames. One problem with that is that if you use a go block for example, calling anything inside it which will block can sabotage your go thread. With Project Loom, all blocking call made within a Fiber will yield it instead of blocking the thread. That would be the big advantage, and possibly something Clojure could leverage as well, so that say making a blocking IO call even indirectly inside a go block would park instead of blocking.

agumonkey
anybody get this feeling that all languages are converging to a similar core set ?
drcode
All languages are becoming Lisp, except for the added challenge of avoiding parentheses.
cultus
Everyone hates lisp, but can't avoid reinventing it.
agumonkey
I'd say ml but yeah
pjmlp
Dylan did it first though.
pantulis
Modula-2 had coroutines!
pjmlp
Indeed, but I was replying to "All languages are becoming Lisp, except for the added challenge of avoiding parentheses.".

Now if we are speaking about coroutines, there are other languages even older than Modula-2, although it was probably the most mainstream one at its peak.

oddthink
Does anyone have a good link to the state of the modern thinking on continuations?

I remember from back in the Scheme R5RS era there was a conflict between call/cc and dynamic-wind.

If you have a dynamic extent (like for example a try {} finally {} block that does resource allocation/deallocation), and you exit, but then jump back into the block, how far do you try to re-wind the state? You probably don't re-open files, but you would probably want to re-set any dynamic-scoped variables.

I don't remember seeing this raised as an issue recently, so it must have been solved somehow, and I'm just wondering how.

firethief
Nothing is right about call-cc: http://okmij.org/ftp/continuations/against-callcc.html
ghusbands
Dynamic-wind was created to tame call/cc and exceptions, rather than being in conflict with it. It directly provides the facilities for re-re-setting things when you re-enter a left scope.

However, you might want to look up delimited continuations as a different approach to similar kinds of control flow.

oddthink
Yes, thank you. I was thinking of Kent Pitman's critique that unwind-protect didn't have the right semantics when implemented with dynamic-wind, and that (in general) it seemed like fluid-let and unwind-protect wanted different sorts of continuations to implement them. (And now I need to go read more about one-shot vs multi and delimited continuations.)

Someone (I forget who) had a critique about dynamic-wind itself that (IIRC) involved capturing continuations from within the before- and after-thunks of dynamic-wind, but that seems more like an edge case.

exabrial
How will ThreadLocal behave under different fiber context switches?
pron
A fiber will appear to be a thread. You can even call Thread.currentThread and get a consistent Thread object representing the fiber. This is done so that as much existing code as possible would be able to run, unchanged, in fibers. We may (or may not), however, introduce newer, better constructs that new code will be encouraged to use.
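
With the API as it eventually shipped (virtual threads), that behaviour looks like the sketch below: each fiber gets its own copy of a ThreadLocal, and Thread.currentThread() returns a Thread object representing the fiber.

```java
public class ThreadLocalOnFibers {
    static final ThreadLocal<String> USER = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // Each virtual thread sees its own value, just as with OS threads.
            USER.set("value set on " + Thread.currentThread().getName());
            System.out.println(Thread.currentThread().isVirtual() + ": " + USER.get());
        };
        Thread t1 = Thread.ofVirtual().name("fiber-1").start(task);
        Thread t2 = Thread.ofVirtual().name("fiber-2").start(task);
        t1.join();
        t2.join();
    }
}
```
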
exabrial
Oooo cool! It may be important or useful to include the 'old' contract elsewhere in the SDK.
cryptos
What is called continuations in Java is available as "coroutines" in Kotlin, today. Both terms mean more or less the same from a programmer's point of view; actually, continuations are part of Kotlin's coroutines.

https://kotlinlang.org/docs/reference/coroutines-overview.ht...

vlaaad
What is a language feature of Kotlin is available as a library in Clojure: https://github.com/clojure/core.async/
repolfx
Almost all of Kotlin's coroutines support is also in a library. The stuff that comes from the compiler is really the smallest possible slice, required because there is syntax for it.
avmich
Interesting that, AFAIK, computer science considers continuations a more generic mechanism than coroutines. Continuations allow continuation-passing style, creating from scratch other control flow mechanisms (like return from a subroutine), exceptions and coroutines (and maybe something else which can't be reduced to items from this list, I'm not sure). Wonder if all of that can be done with coroutines?
mbrock
Continuations can be reused many times. With delimited continuations you capture the continuation as a function that works basically like any other function. With undelimited continuations you capture the continuation as a weird function that never returns but can still be kept as a closure-like thing and called repeatedly. The classic use of multiple continuation calls is “logic programming” or “nondeterminism” where for example you say “let x = choose [1,2,3]” and choose then arranges to have the continuation called three times (perhaps even concurrently).
raverbashing
Is there some special jvm/dalvik support for it?
cryptos
There is no JVM part to Kotlin coroutines; it just uses plain JVM features like any other Java program.
lupajz
It's a compiler plugin that generates bytecode for you, so there's no need for support from the runtime.
pron
While continuations and coroutines are often used interchangeably (continuations are usually one-shot delimited continuations), Loom's continuations are very different from Kotlin's coroutines, which are more similar to C#'s async/await (they are stackless), while the fibers built on top of Loom continuations are more like Erlang's processes or Go's goroutines.

The difference is very big from the programmer's perspective. Kotlin's coroutines are a syntactic concept; i.e. a piece of code is either a subroutine or a coroutine, and you can use one or the other in different syntactic contexts. Loom's continuations, however, are a purely dynamic construct, like a thread. You can run any piece of code inside a continuation. The implication is that no language change is required for continuations and the fibers that are based on them -- they are just a different implementation of threads -- and no API changes. In fact, even though this talk provides some background, developers don't need to learn about continuations at all to use them. All they need to know is that they can use a light-weight implementation of threads.
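
In the shape that eventually landed in the JDK, that light-weight implementation of threads is reachable through the same Thread and ExecutorService abstractions; a sketch of running many concurrent blocking tasks:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ManyBlockingTasks {
    public static void main(String[] args) {
        // One cheap virtual thread per task; 10,000 blocking tasks at once would be
        // unreasonable with a thread-per-task pool of OS threads.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    Thread.sleep(1_000); // the blocking call parks the fiber, not an OS thread
                    return null;
                });
            }
        } // close() waits for the submitted tasks to finish
    }
}
```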

giancarlostoro
Any idea how these fibers compare to the ones used by D as well? I also wonder if this means Kotlin will adopt Fibers since it does build on top of JVM capabilities.
pron
I don't know D; I also don't currently know what Kotlin's plans are.
giancarlostoro
Fair enough, I know D has fibers, but I'm not sure how different implementations are between languages in regards to fibers. As for Kotlin it was more of a personal reflection, that is fine if you don't know.

Thanks for answering though, I look forward to this. I shared it across a number of dev communities I frequent and one joke was "java and lightweight in the same sentence", which made me chuckle since I know fibers are amazing.

SeanLuke
How does this differ from scheme-style continuations? IIRC, full continuations in the scheme sense of the term are currently impossible in Java as they require forbidden stack manipulation. Or is what's being proposed here something entirely else?
aardvark179
We're adding delimited one-shot continuations to the JVM. The common public interface to these is likely to be fibers rather than raw continuations, as they are much easier to work with and allow the standard library and lots of existing code to take advantage of the new features with little to no change.
maxiepoo
By one-shot you mean dynamically enforced, right? I.e. if you invoke it a second time it will raise an exception?
ufo
Not exactly, since they are using a fiber API instead of one built with raw continuations. In this context, being one-shot means that there is no operation for cloning or "forking" a fiber.

Each time you resume the execution of a fiber, it continues to execute until it reaches its next yield point. There is no way to go "back in time" and resume execution from a previous state of the fiber, which is what multishot coroutines or the fork system call would let you do.
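
The raw construct ended up in current JDKs as a JVM-internal class rather than a public API; a rough sketch of the run-until-yield shape described above, assuming jdk.internal.vm is exported to the application (this is unsupported and may change):

```java
// Unsupported internal API: compiling/running this needs something like
//   --add-exports java.base/jdk.internal.vm=ALL-UNNAMED
import jdk.internal.vm.Continuation;
import jdk.internal.vm.ContinuationScope;

public class RawContinuation {
    public static void main(String[] args) {
        ContinuationScope scope = new ContinuationScope("demo");
        Continuation cont = new Continuation(scope, () -> {
            System.out.println("step 1");
            Continuation.yield(scope); // suspend back to whoever called run()
            System.out.println("step 2");
        });

        cont.run();                        // prints "step 1", returns at the yield
        cont.run();                        // resumes after the yield, prints "step 2"
        System.out.println(cont.isDone()); // true; one-shot: no way to re-run "step 1"
    }
}
```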

chrisseaton
> currently impossible in Java as they require forbidden stack manipulation

They're changing the JVM and Java specs, so what is forbidden or not will change.

chrisseaton
> currently impossible in Java as they require forbidden stack manipulation

As long as you maintain your logical stack (which is completely virtualised away in machine code anyway), Java has no opinion on how you manipulate your system stack - it isn't forbidden.

bjoli
You almost never want full continuations. They are less efficient, harder to reason about (because they capture things outside the call/cc) and are a generally fantastic way to accidentally leak memory.

Oleg Kiselyov has written about it extensively: http://okmij.org/ftp/continuations/

teilo
So will fibers in the JVM be as fast, robust, and as cheap as processes in Erlang's BEAM?
yarg
Here's a decent video with some code included: https://www.youtube.com/watch?v=vbGbXUjlRyQ.

The intro music's a little weird for a dev con.

mamcx
One thing I learned implementing continuations is that in the end they have a huge runtime cost and complicate implementation and debugging greatly.

Even a simple interpreter using them quickly gets out of control.

Is there updated info on how to tame this?

_delirium
One approach, which Java is taking, is to implement a more structured form of continuation, rather than say the Scheme-style call/cc ones. A quote from the article: "To be more precise, if you're interested in the theory of continuations in the academic literature, the kind of continuation of this class implements are called one-shot multi prompt delimited continuations..."
mamcx
There isn't much info about that kind of continuation, as far as I remember, and I'm probably not the only one:

https://www.reddit.com/r/ProgrammingLanguages/comments/9r2kt...

but it still looks interesting if it somehow manages to be performant enough and tame enough. I think continuations are better kept in the internals of the compiler/VM than surfaced to the end user...

cetpa
Java threads go down to the OS level and are relatively resource-expensive (RAM and context switching), so they don't scale well.
mapcars
>if a continuation gets into an infinite loop, we can ask it to preempt itself, and forcefully remove it from the CPU.

This is interesting, any idea of how they can do it?

repolfx
That's easy. All Java threads regularly pass through "safe points" where the runtime can choose to suspend them. Think about garbage collection - the runtime must be able to reliably and regularly pause threads so their stacks can be scanned. Whether interpreted or JIT compiled, there are ways to queue a piece of code 'into' a thread that's currently running and a short time later that code will execute. Safe points are used for many different things in HotSpot.

So whilst I haven't looked at the code, presumably continuations are pre-empted by forcing the host thread to a safepoint and then unmounting it.

Note that forced pre-emption + a scheduler + a userspace notion of a thread gives you the tiniest core of an operating system. The next big leap in operating system design is certainly language-generic virtual machines fused with the basics of an OS.

mb720
There's an error in the presentation's slides: The type of returnIO is "a → IO a" and not "IO a → IO a".
pron
Correct. That was a typo that was then copied and pasted so the mistake appears twice.
vkaku
About time. I definitely could use a yield keyword, if someone is reading this.
gigatexal
So it’s like python’s yield statement aka generators?
ufo
It seems that these are stackful, so there is no distinction between regular functions and generator functions. You can yield from many levels deep in the call stack, without needing to use "yield from".
topspin
It's more general than generators. You can create generators using it, and also other control flow constructs.
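
One way to see that: a generator is just a producer running on its own cheap thread, handing values to the consumer at a rendezvous point. A toy sketch using the virtual-thread API that eventually shipped (any sufficiently cheap thread would do; the Generator class here is purely illustrative, not a JDK API):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.function.Consumer;

// Toy generator: the producer blocks on put() until the consumer asks for the
// next value, which is roughly what `yield` does in languages with generators.
class Generator<T> {
    private final SynchronousQueue<T> handoff = new SynchronousQueue<>();

    Generator(Consumer<Consumer<T>> body) {
        Thread.ofVirtual().start(() -> body.accept(value -> {
            try {
                handoff.put(value); // "yield": park here until next() is called
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
    }

    T next() throws InterruptedException {
        return handoff.take();
    }
}

class GeneratorDemo {
    public static void main(String[] args) throws InterruptedException {
        Generator<Integer> naturals = new Generator<>(emit -> {
            for (int i = 0; ; i++) emit.accept(i);
        });
        System.out.println(naturals.next()); // 0
        System.out.println(naturals.next()); // 1
        System.out.println(naturals.next()); // 2
    }
}
```
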
billsix
Yup, I did exactly that in Scheme

http://billsix.github.io/bug#_make_generator

y4mi
I find it amusing that whenever people talk about Java, they're somehow using magic to describe it. It's like Java programmers see themselves as magicians...

This is especially widespread with Spring Boot. I've even read sentences like "It's hard to say what it does, you can only see its effects!"

rendaw
I think that's the writer/Project Loom's technical lead's thing. I got into a discussion with him about how magical Quasar's docs make it seem here, but we didn't reach accord:

https://www.reddit.com/r/java/comments/7ulmwn/coroutines_in_...

It's quite possible to make non-magical coroutines in Java. It's about how you document them (and perhaps how you try to hide implementation details).

AlEinstein
Speaking as a dev who started with Java 1.2, I actually much prefer to write code that doesn’t rely on magic unseen effects.

The only exception to this is dependency injection via annotations but for god’s sake use it in moderation.

But then I still prefer to write for loops rather than streams because I know what the jit compiler is doing.

raverbashing
Unless you're writing assembly or maybe C then "magic" is happening
saagarjha
Assembly and C have plenty of magic to go around, don't worry.
nosianu
You are just rearranging electrons and holes anyway, and nobody understands what's happening on the quantum level of billions of tiny structure elements in silicon (plus "stuff"). When I program I always envision something like a gigantic planet sized steam punk engine, and wonder about the equally gigantic apparent disconnect between my code and what is actually happening. I'm moving matter around on an incredibly tiny scale, along unseen paths, rearranging it in different patterns, billions of times per second in the middle (CPU), collecting some in pools to which I occasionally refer (memory). It also helps me to remind myself that I'm not doing something abstract, that every thing I do has a cost and takes energy. The more stuff (matter) I have to move around, the more costly. And whether I engage a giant sub-machine for a given task or manage to build a small one that achieves the same matters.

Yes, plenty of magic.

And then I look at biological systems, not even the brain yet, just biochemical pathways. Also lots of structure, mixed in with lots of... mixing, and lots of molecules mix and sometimes match. The whole thing is much slower than man-made electronics - but many orders of magnitude more parallel. And probabilistic. And I wonder how to go from programming the man-made stuff to describing those machines in a similar way. Right now we are only tinkering at the far edges of it. Whenever somebody posts about a revolutionary new programming languages I think to myself, you are still just working with the exact same machine. It's like when you zoom in to a flat surface, that under a microscope looks ragged. The only reason we see a huge difference between various programming languages is because we are waaayyyyy zoomed into one paradigm.

In a biological system the elements doing the work are many different molecules with distinct shapes. In CPUs there only are shapeless electrons, so unlike the molecules which meet and match all the time based on probabilities the electrons don't actively contribute to the computing themselves. Also, the molecules are almost all active, all the time, in computers our "code" lies dormant and waiting in pools (memory) doing nothing most of the time. In comparison, only an incredibly tiny fraction of our human-made computing elements are actually doing something at any point!

In nature/biology shape and structure on the nano level are the major design factor. Shapes of molecules and shapes of the structures used to separate or guide or hold them. Most of the process is determined by those shapes. In human-made computing we are extremely limited in what shapes we use under the hood. We use higher frequencies and limiting ourselves to few goals, otherwise nature would outcompute us by many orders of magnitude (and that's only true because we decide to ignore most of the computation going on in biological systems as not relevant "it's just random noise with no purpose"). I see a disconnect between how we program and what is going on. We think it's a vital "abstraction". I think that abstraction is a blessing, sure, but also a great hindrance. In the end those shapes and structures matter a lot. If somehow we could get a translation into more and more flexible (nano) shapes/structures than now, where we have a 100% fixed structure and only electrons (so, no shapes)... even biological systems are actually quite limited in what kinds of shapes they use, there was a path-dependency on which random path evolution took. I think it would be possible to far exceed biological systems, but we would have to get down to the shapes (molecules and nano-structures). That is far away from now, where it takes us years to come up with one fixed structure and shapeless moving point-forms inside of it.

earenndil
C yes, assembly--where? Its mapping to machine code is relatively straightforward, and as is the mapping from machine code to ELF. Or do you mean all the magic the CPU does to make it go fast?
todd8
Well, it’s turtles all the way down. [1]

Even the assembly language programmer is running on top of the OS. The OS is deep magic: runtime interrupts happening to make the floating point hardware handle certain edge cases correctly. Fork system calls that semantically provide a copy of the parent’s address space in almost no time by using virtual memory tricks (copy on write pages). Virtual memory itself.

The OS lets the user space assembly language programmer touch some of the hardware directly, but not all of it.

[1] https://en.m.wikipedia.org/wiki/Turtles_all_the_way_down

kryptiskt
It's black magic that summons spectres.
saagarjha
Yes, along with complications that arise as you interact with your OS and other programs.
EmpirePhoenix
Yeah, in the best case. In the worst case it is more like this: https://news.ycombinator.com/item?id=5261598 which is pretty magic.
jacques_chester
Assembler code used to be a close mapping of what the CPU does. But it too is now a high-level language with a funny syntax, it's just that the complexity is hidden by the CPU frontend instead of by a language compiler.
jacques_chester
> The only exception to this is dependency injection via annotations but for god’s sake use it in moderation.

Dependency injection comes in two varieties: constructor injection and future pain injection.

Choose wisely.

noisy_boy
Having used Spring Boot for a while, I made up an analogy: Spring Boot is like a room which has a 1000 switches and a staircase leading to the basement. Usually these switches are enough for most use cases. However, if your use case isn't served by them, you can follow the stairs to a much bigger room.

This room is called Spring MVC and it has 2 million switches which will let you do anything you please. Good luck.

dtech
It's because the Java ecosystem relies heavily on things automatically being done for you behind the scenes, like run-time reflection and classpath scanning.
lmm
> I find it amusing that whenever people talk about Java, they're somehow using magic to describe it. It's like Java programmers see themselves as magicians...

It's just the opposite, Java programmers are doing something much more limited. Java has a deliberately simplified language design, but the result is that most serious Java systems have to use one or more frameworks that step outside of what's possible in Java proper (by using reflection, proxies, bytecode manipulation, JVM agents or other such shenanigans) to meet their requirements. And such frameworks are necessarily "magic" from the perspective of someone thinking in Java proper.

provolone
Just avoid bloated frameworks and your problem is solved.
swsieber
Considered removing Spring where I work to speed up startup times. Found the biggest bang for the buck was rewriting a bean factory to parallelize singleton pre-instantiation.

That's what I thought the optimal route was. That's how much we benefit from the strong spring ecosystem.

(It was a little less of a pain than you might think because I didn't have to support all Spring features, just the ones we use, which happen to mostly not be features used during singleton pre-instantiation)

cryptos
This is not a balanced argument. Spring offers a lot of helpful things and can increase productivity. But it also has its downsides, like when something is not working because all the magic behind the scenes, which normally makes you more productive, just doesn't work in your case.

If I look at Go code, for example, I can hardly bear all the tedious repetition and dumb code! There is hardly any magic in Go, but the downside is that you have to write every `filter`, `map` and `reduce` by hand.

provolone
That's fine. Use what works for you. There's more than one way to skin a cat.

It isn't particularly balanced to assume that the spring way is the only way to do things because magicians...

If the OP has that problem with spring then he can use the base servlet API or another solution.

geodel
I do exactly same as you described. Use plain servlets with embedded tomcat in application. It all works very well for me.

But now I have to support multiple Spring boot projects. I can't help noticing one thing common in these projects that it is about 10% functionality and 90% of Spring turd nuggets strewn all over project repos.

provolone
Feels like today there are some libraries that are only offered for Spring. Other than that I have been happy to do as you do with Jetty.

I'll add that autoconfiguration is a symptom, not a solution.

mrighele
It's magic because some frameworks that are used a lot offer quite a bit of functionality, terrible documentation and awful diagnostics, so when things go wrong your best recourse is to randomly tweak snippets posted on Stack Overflow by other hapless victims.
twic
I wonder if it's because there is such a sharp line between what a Java programmer can do in Java, and what the JVM does to make that possible. Much of what the JVM does is impossible to even describe in normal Java. That line doesn't exist in, say, C++, where the compiler is just another C++ program.

With the caveat that there are JVMs written in Java, so that line can be crossed:

https://www.jikesrvm.org/

The same mechanism could apply at the Spring Boot boundary. Using Spring Boot (for simple applications) is easy enough. Implementing it takes code that is beyond bizarre.

pjmlp
I guess you haven't seen too much C++ magic.

You just need to master SFINAE, ADL, template metaprogramming including tag dispatch, constexpr and eventually you will reach C++ Gandalf status.

earenndil
You could apply the same metaphor to machine code on a CPU.
groestl
The JVM is not magic, neither are Java programmers magicians, but I think the reason magical terms are used to describe Java systems is that the ecosystem had lots and lots of time to develop libraries and abstractions (a few of which happen to be of the right kind, read: leak just the right amount). Your typical Java programmer is well aware of what it takes to implement a battle-proof connection pool, and will not attempt to implement their own. To them, adding the config stanza to enable a connection pool indeed feels like magic. Other platforms, that come with preconfigured connection pools, do not leak this to the programmer, and 80% of their users will not know what a connection pool is.
james-mcelwain
The way I would phrase it is that the Java ecosystem is simultaneously both higher level and lower level (in terms of abstraction, not the machine) than many other platforms.

The ability to swap out basically any component of a framework like Spring simply by including a different dependency or replacing a bean at runtime allows for fine grained tuning that isn't possible in many other frameworks.

But this flexibility and/or modularity is enabled by a heap of pretty magical abstractions that glues everything together.

groestl
> simultaneously both higher level and lower level (in terms of abstraction, not the machine) than many other platforms.

I think that's a good way to characterize it.

twic
The Java library developer's aesthetic has traditionally been to make things possible, rather than to make things easy. So you may have to write a lot of code or config to get something to happen, but you will have a lot of control over what happens.

This is the opposite emphasis to, say, the Ruby community, which values a simple, lickable, surface interface with a large amount of opaque magic hiding behind it.

lhotkins
It’s really interesting how the lines between imperative and functional languages are becoming more and more blurred over time as they’re borrowing features from one another.
mruts
Will this mean that Akka can support true multitasking like Erlang? If so, that sounds amazing. Being able to write blocking code in actors is such a big win.
twic
> I serve as a technical lead for Project Loon. That is the project that's intended to add continuations and fibers to the JDK.

> However, actually, Project Loom, the goal of the project is to add continuations, fibers, and tail call elimination.

I'm guessing that the project is called either Loom or Loon (and i believe it's the former), but i like the idea that there are actually two cooperating projects, each of which occasionally suspends and lets the other run.

thaumasiotes
It's probably Loom. It's thematically connected to "fibers" and doesn't make the immediate statement that you think your own project is doomed. Neither of those is true of Loon.
twic
Doesn't seem to have slowed the Canadians down much:

https://en.wikipedia.org/wiki/Loonie

rileyphone
There is a Project Loon somewhere, it's a google attempt to deliver internet through balloons.
brighter2morrow
More bandwidth than Comcast
lesbaker
"Loon" is an error in transcription; if you listen to the video of the talk, he says "Loom", but with a British-sounding accent.
mkobit
It is indeed Project Loom - https://openjdk.java.net/projects/loom/
vkaku
I don't mind a stack size increase if we can push for better inlining/folding :D Also, stick to one name, it's easier.
Tharkun
It is indeed Loom, you know, because a loom is used to weave threads into cloth. It's possibly also a loony project, but who am I to judge.
Jun 22, 2019 · 2 points, 0 comments · submitted by ptx
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.