HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
"Unison: a new distributed programming language" by Paul Chiusano

Strange Loop Conference · Youtube · 22 HN points · 10 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Strange Loop Conference's video "Unison: a new distributed programming language" by Paul Chiusano.
Youtube Summary
Unison is an open source functional programming language with special support for building distributed, elastic systems. It began as an experiment: rethink all aspects of the programming experience, including the core language, runtime, tooling, as well as code versioning and publishing, and then do whatever is necessary to eliminate needless complexity and make building software once again delightful, or at the very least, reasonable.

We're used to thinking of a program as a thing that describes what a single OS process will do, and then using a separate layer of technologies outside of our programming languages to "configure" many separate programs into a single distributed, elastic "system". This gets complicated. The core language of Unison starts with the premise that no matter how many nodes a computation occupies, it should be expressible via a single program, not many separate programs. Unison programs can describe their own deployment, elastically scale and orchestrate themselves, and deploy themselves in parallel onto any number of nodes for execution.

This talk introduces the Unison language and its tooling and shows what it can be like to program systems of any size with this model of computing.

Paul Chiusano
Unison Computing
@pchiusano

Paul Chiusano started the research that led to the Unison language and is a cofounder of Unison Computing, a public benefit corp. He has over a decade of experience with purely functional programming in Haskell and Scala and coauthored the book Functional Programming in Scala. He lives and works in Somerville, MA.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I first encountered Algebraic Effects in Unison, where they're called "abilities" [0], via the Strange Loop talk from 2 years ago [1]. Just from the little I've seen of it, I feel like AE is a fundamental abstraction tool that's been missing in programming language design. "Fundamental" as in on the same level as function arguments. So many problems that were solved with myriad complex programming language constructs are just absorbed as a trivial user-implementation with an effect system: exceptions, sync vs async, dependency injection, cancellation tokens, dynamic contexts... all of these problems where the essential complexity is a need to have an effect that cuts through the function call stack.

I'm not saying that all our problems are solved and the programming world will now be rainbows and butterflies, I'm just saying that this feature is the correct framing and abstraction for issues we've run into many times in the past, and it has the potential to greatly simplify and unify the hacky, bespoke, situational solutions we've found.

[0]: https://www.unisonweb.org/docs/abilities

[1]: https://youtu.be/gCWtkvDQ2ZI
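To make the "effect that cuts through the call stack" idea concrete, here is a rough Python sketch of how an effect request and a handler interact (this is not Unison's abilities syntax; the `Ask` effect and `run_with_env` handler are invented for illustration):

    # A computation yields "effect requests"; a separate handler decides how to
    # interpret each request and resumes the suspended computation with a result.
    from dataclasses import dataclass

    @dataclass
    class Ask:                # effect request: "give me the value for this key"
        key: str

    def greet():
        # Business logic performs the effect without knowing how it will be handled.
        name = yield Ask("user.name")
        lang = yield Ask("user.lang")
        return f"Hello, {name}! ({lang})"

    def run_with_env(gen, env):
        # Handler: answer each Ask by looking it up in a dict, then resume.
        try:
            request = next(gen)
            while True:
                request = gen.send(env[request.key])
        except StopIteration as done:
            return done.value

    print(run_with_env(greet(), {"user.name": "Ada", "user.lang": "en"}))  # Hello, Ada! (en)

Swapping in a different handler (one that reads a real config file, or raises, or logs every request) changes the interpretation of the effect without touching `greet`, which is the unification of dependency injection, exceptions, and async that the comment is pointing at.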

georgehm
Adding some more related articles. This was mostly a result of me trying to find some more useful articles to better understand it, before it all got lost in my browsing history.

https://overreacted.io/algebraic-effects-for-the-rest-of-us/

https://users.scala-lang.org/t/from-scala-monadic-effects-to...

https://dl.acm.org/doi/pdf/10.1145/3122975.3122977

thesz
As I keep saying, what is a language feature in other languages is a library in Haskell: https://hackage.haskell.org/package/effect-handlers

And this is how it should be: not a language feature, but a library. With a language feature you have to deal with the compiler and may affect more people than needed; with a library you can use (and extend) it as you wish.

resoluteteeth
Haskell is probably going to need to get some language features for extensible effects to have acceptable performance (e.g. the unmerged work on eff).
grumpyprole
Java has always had checked exceptions, a weak form of type-checked effect. They were controversial because developers didn't like being forced to handle them, but I always thought they were a great idea. Algebraic effect handlers just generalise the idea of an exception, by providing a continuation that can be called to resume execution.
infogulch
The problem with Java exceptions is that they are used to paper over the lack of multiple return values for totally mundane situations where there are genuinely multiple possible outcomes. "Tried to open a file that doesn't exist", "tried to open a socket to a domain that couldn't be resolved", "user doesn't have permission to perform that action", etc., are normal failures, not exceptional ones. But all of these totally normal outcomes are mediated by the same language feature that also deals with indexing past the end of an array or dereferencing null, both of which are clearly program bugs. That's why checked exceptions were controversial: they were a noisy workaround for a proper language tool to manage multiple outcomes. Go takes a small step towards fixing this by making packing and unpacking tuples easy and normalizing returning an error as the last tuple value; Rust and other languages with discriminated unions and an integrated match actually solve this.

I guess if it helps you to understand typed effects to describe them as "Java checked exceptions with an option to resume", then I'm glad that works for you, but for me Java exceptions have so much other baggage surrounding their design that I would prefer describing it from the other direction: "typed effects would enable you to implement a host of cross-stack features, including a checked exception system like Java's".
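To make the "expected failures as ordinary return values" style from the first paragraph concrete, here is a small Python sketch (the `OpenOk`/`OpenErr` names are invented; it mirrors the Go/Rust approach the comment mentions, not any particular library):

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class OpenOk:
        handle: object            # the opened file object

    @dataclass
    class OpenErr:
        reason: str               # "not found", "no permission", ...

    def open_config(path: str) -> Union[OpenOk, OpenErr]:
        # Normal failures become ordinary values; only genuine bugs should raise.
        try:
            return OpenOk(open(path))
        except FileNotFoundError:
            return OpenErr("not found")
        except PermissionError:
            return OpenErr("no permission")

    result = open_config("settings.ini")
    if isinstance(result, OpenOk):
        print(result.handle.read())
    else:
        print("could not open config:", result.reason)   # a normal outcome, not a crash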

grumpyprole
I am not advocating Java the language, and its shortcomings are really the topic of another thread. I am also not seeking to understand algebraic effects starting from Java; I've read the original Eff paper and would encourage others to do so. I raised them only as an example of a form of type-checked effect that is already in widespread use.
This seems like exactly the sort of issue the Unison language is trying to tackle. They want straightforward execution of distributed computations using the same language, with no friction across machine boundaries and no sacrificing composability.

The article mentions:

> remote procedure calls with higher-order functional arguments would require serializing those functions, which is not easy to do safely or efficiently.

This is exactly what Unison is designed to do. Code is not stored as text, but is stored in a serialized AST form and identified by hash. Therefore, transmission between nodes is trivial. The language is still a work in progress, but it's making rapid strides.

Here's an explanation: https://www.youtube.com/watch?v=gCWtkvDQ2ZI

The part relevant to distributed computation is at 30:08, but the earlier parts go through the basics of how code is stored, and why this design choice was made, which might help with understanding what's going on.

Jun 27, 2021 · umvi on Unison Programming Language
I had a really hard time wrapping my mind around this just reading the website alone. If you are in the same boat, watch the first 10 minutes of this video at 1.5x speed: https://www.youtube.com/watch?v=gCWtkvDQ2ZI

and it will make so, so much more sense.

...and if you are like me you'll probably need to read this twitter thread to get the answer to your #1 question: https://twitter.com/unisonweb/status/1173942974381744134

Basically the core idea (or one of the core ideas) is that instead of a function (like fib(n), which returns the nth Fibonacci number) being identified by its name (fib), as is the case with most traditional languages, it's identified by a hash of its implementation.
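A rough Python sketch of that idea (Unison's real scheme also normalizes variable names and handles recursion specially; this helper is purely illustrative):

    import ast, hashlib

    def implementation_hash(source: str) -> str:
        # Hash a canonical dump of the parsed AST, so formatting and comments
        # don't change a function's identity, but structural changes do.
        tree = ast.parse(source)
        return hashlib.sha3_256(ast.dump(tree).encode()).hexdigest()[:12]

    fib_a = "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)\n"
    fib_b = "def fib(n):  # same code, extra comment\n    return n if n < 2 else fib(n - 1) + fib(n - 2)\n"

    print(implementation_hash(fib_a) == implementation_hash(fib_b))  # True: same implementation, same hash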

dcposch
Cool to see people thinking this big!

One challenge I foresee is unintentional coupling. Say you have two functions:

func serialize(MyRecord) ...

func debugToString(MyRecord) ...

Now if you ever make the mistake of giving those the same implementation, then in Unison they'd be the same hash reference, right?

Then if you later want to update, say, the debug print, it would update all call sites for that hash, including the ones that originally called serialize(). The two are no longer distinguishable.

sgk284
The names are just pointers, and they're both pointing to the same definition in your example. But when you redefine one of those, you would point one of the names to a new definition.

It's similar to how DNS can have two domains point to the same IP, but then you can change one of those domains to point to a new IP.
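A toy Python sketch of that pointer model (the hashes here are stand-ins, not Unison's real content hashing):

    import hashlib

    definitions = {}      # hash -> implementation (content-addressed, immutable)
    names = {}            # name -> hash (mutable, like a DNS record)

    def define(name, body):
        h = "#" + hashlib.sha3_256(body.encode()).hexdigest()[:8]
        definitions.setdefault(h, body)
        names[name] = h

    define("serialize",     "record -> text, field order X")
    define("debugToString", "record -> text, field order X")      # identical body: same hash
    print(names["serialize"] == names["debugToString"])            # True, one shared definition

    define("debugToString", "record -> text, with field labels")   # redefine: only this name moves
    print(names["serialize"] == names["debugToString"])            # False; serialize is untouched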

ajuc
> The names are just pointers, and they're both pointing to the same definition in your example. But when you redefine one of those, you would point one of the names to a new definition.

But how do you know which name was called where, if the callers referenced the content hash, not the name?

aparsons
Would it not be correct for those callers to keep calling the old (shared) implementation?
ajuc
well it would be nice to have a way to update old code
milansuk
I also think that the DNS analogy is wrong because all callers are hash-based. The only solution I see is to go through the list of all callers and manually update selected ones.

If I understand Unison right, the names are used only on the developer's layer (to write code), but when you save code, it's all hash-based.

Still, Unison got my attention.

acjohnson55
It knows what name you intended to use, because that's in your source, so I'm pretty sure it isn't a problem if implementations converge and diverge.
refried_
Hello, Unison author here.

This is definitely a real issue and currently a problem, and one that we will fix, probably by giving the function author an option to salt the hash of new definitions that have some semantic meaning beyond their implementations (appropriate for most application/business logic). No salt for definitions whose meanings are defined by their implementations (appropriate for most generic "library" functions like `List.map`).

We already make this distinction for data types, but not yet for value/function definitions.
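A rough illustration of the salting idea (not the actual Unison mechanism; it just shows how a per-definition salt keeps two identical bodies from collapsing into one reference):

    import hashlib

    def definition_hash(body: str, salt: str = "") -> str:
        # An empty salt gives pure content addressing; a non-empty salt ties the
        # hash to the definition's identity as well as its implementation.
        return hashlib.sha3_256((salt + "\x00" + body).encode()).hexdigest()[:12]

    body = "render the record's fields as text"
    print(definition_hash(body) == definition_hash(body))                                # True: library-style
    print(definition_hash(body, "serialize") == definition_hash(body, "debugToString"))  # False: salted apart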

vanderZwan
Why not also show that your definition already exists elsewhere, together with a warning? Or is it doing that too?
dcposch
Nice, makes sense!
hota_mazi
This seems to be very developer hostile.

Not only do they have to provide a salt themselves, but on top of that they need to make a judgment call about when something has "more semantic meaning beyond their implementation" (to use your words) rather than being some more "fundamental" code.

I'm also surprised that you haven't solved this problem yet: at least once a day, IDEA warns me that some portion of my code is duplicated exactly in some other area of my code, so this kind of duplicated logic is already quite common.

billytetrud
Why not simply record where each reference occurs and ensure that if one definition is modified, the other is not? The programmer shouldn't have to think about salting any hashes, it should be automatic and hidden under the hood.
tgbugs
Interesting. You can write a macro and some buffer modifying code to do this in elisp. But having now written up the rest of my response, why not just use Smalltalk?

The hard part is coming up with the normalization routine which guarantees that (lambda (a) b a) == (lambda (b) a b) and coming up with the rules for statement reordering for top level and internal definitions so that you can identify semantically equivalent statements where the outcome is order invariant. This is critical for making the hash functions useful and I suspect preventing denial of service attacks on the human brains that have to audit the code.

Being able to write a version of the code and then do the equivalent of creating a package.lock file to crystallize the hashes seems like a reasonable workflow. This probably winds up being easier in common lisp though since you can put the crystallized implementations in their own packages.

You could also view this as a kind of extreme type theory where every function (with regular names) has the type of its normalized representation (compacted to a hash for sanity's sake) and then you can run the checker to see if the types/hashes have changed. If you have somewhere that keeps track of every hash that a function with a particular name has had then you could automatically refactor, or could even support having multiple versions of the function with the same name used in a program at the same time. I'm not sure how users would feel about having to carry around `(funcall ((with-norm-id '(lambda (+ a b)) f)) a b)` though ... probably just give up on editing the textual representation and go back to the image based approach of Smalltalk and Interlisp where you can hide the hashes.

Will be interesting to see how Unison evolves.

mpweiher
> identified by a hash of its implementation

Sounds a lot like darklang.

Like others, I am dubious about this being in any way a useful feature. Separating implementation from name (/interface) and binding to that interface/name instead of the implementation is one of the fundamental and useful parts of abstraction.

remram
This reminds me of Kubernetes, where all cluster state is neatly structured and placed in a replicated data store (etcd) that is the source of truth for operation, with the right parts immutable (e.g. volumes).

The first thing people do is check in textual representations of those things in version control and operate on that instead.

lisper
Sounds like a cool idea, but how do you fix bugs in functions with lots of callers?
acjohnson55
The same way we do now: release a new version and tell people to migrate.
lisper
No, now I can just redefine the buggy function and all the callers will get the new version automatically. Having to update all callers seems like a high price to pay. Seems like the Right Answer is something like a hash of the api or the contract rather than the implementation.
ucarion
The (public) name of the implementation is the unique identifier of the contract in most systems, so I think your "Right Answer" is roughly the status quo?
lisper
No. The name of a function in current languages has no connection at all to what the function does. (What would be the contract for a function named ‘foo’?)
gbhn
That's kind of the thing that makes APIs possible, right? It sounds to me like "what if programming were done in a completely flat global namespace in which abstractions, encapsulation, and structure were impossible."
lisper
No. An API specifies more than the name of the function. It will specify the arguments, their types, the type of the return value, and, at least informally, what the function does. You can change the underlying implementation without changing the API. That's the whole point of an API. The problem with current API technology is the informality of the spec of what the function does. That allows some aspects of the behavior of the function to change without triggering any warnings.

By having the linker work on hashes of implementations you eliminate that problem but create a new problem. You can no longer change the behavior of the function because you can't change the function. That means you can't suddenly change behavior that some caller is counting on, but it also means you can't fix bugs without changes in the caller.

tgbugs
Reading this now, I'm imagining all the horrors of static linking but applied to every function instead of whole modules.

Maybe the simplest solution is to allow the function to change to the new version, but make it easy to revert in the event that something breaks. This of course means that you can't make the names of the functions their hash (without lying, preventing the runtime from checking that hashes always match, or modifying emitted bytecode or native code to do what you want); it has to be an orthogonal layer on top of them, like types (as I mentioned elsewhere in the thread).

ajuc
Nope, according to https://twitter.com/unisonweb/status/1173942969726054401

when you change a function implementation, the system has to walk the caller graph backwards, starting from all the places where the function was called, updating all the implementations with the new hash, then the callers of these with the new implementation, and so on up to main (or whatever it's called).

I had a chance to implement something like this in a system that used the jBPM 3 graph language (basically process X version 1 called process Y version 1, and I updated process Y to version 2). It's nontrivial, especially with recursion; I'm wondering how they are dealing with that.
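A toy Python sketch of that backwards walk (an acyclic example; real Unison has to treat recursive groups as a unit, which this ignores):

    callers = {                    # callee hash -> hashes of its direct callers
        "#f1": {"#g1"},
        "#g1": {"#h1"},
        "#h1": {"#main1"},
        "#main1": set(),
    }

    def needs_rehash(changed):
        # Everything reachable by walking the caller edges must get a new hash,
        # because its body now points at a different dependency hash.
        result, stack, seen = [], [changed], {changed}
        while stack:
            for caller in callers.get(stack.pop(), ()):
                if caller not in seen:
                    seen.add(caller)
                    result.append(caller)
                    stack.append(caller)
        return result

    print(needs_rehash("#f1"))     # ['#g1', '#h1', '#main1']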

lamontcg
A git-like datastore for your AST+callgraph.
ajuc
Let's say you have definitions like these:

    f: Nat -> Nat
    g: Nat -> Nat
    h: Nat -> Nat
    h x = g (x * 2)
    g x = f (x * 3)
    f x = x < 0 ? 1 : h (x / 4)
And now you change f to be

    f x = x < 1 ? 1 : h (x / 4)
How do you do that? There's a cycle in the call graph. In fact, how do you calculate the hash of a function that calls itself, if you need its hash to calculate its hash? :)

EDIT: nevermind, recursion is a special case handled differently.
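For what it's worth, a common way to make the recursive case well-defined (sketched here in Python, not Unison's actual algorithm) is to hash the definition with the self-reference replaced by a placeholder, so a function no longer needs its own hash to compute its hash; mutually recursive groups like the f/g/h cycle above get hashed together as one unit:

    import hashlib

    def hash_self_referential(name: str, body: str) -> str:
        # Crude stand-in for real normalization: the recursive occurrence is
        # anonymized, so the hash depends only on the structure, not the name.
        return hashlib.sha3_256(body.replace(name, "<self>").encode()).hexdigest()[:12]

    print(hash_self_referential("fib", "fib n = if n < 2 then n else fib (n-1) + fib (n-2)"))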

umvi
That's basically what the twitter thread I linked explains. It sounds like there is an automatic propagation mechanism for updating downstream callers if the type hasn't changed, otherwise it sounds like a manual update process.
torginus
Sounds like trading one set of problems for another.
capableweb
Welcome to software engineering, where there are no silver bullets, only different tradeoffs :)
nytgop77
There are also circles of hell. (opposite of silver bullet)
aparsons
Law of conservation of complexity
vanderZwan
Tangent: here's what I find fascinating: when we evaluate which algorithm or data structure is best to handle a given situation, we know how to reason about algorithmic complexity and pick a best option for our situation.

But then when it comes to ideas like this we just tend to say "we're trading one set of problems for another", as if we can't evaluate the problems in a similar manner. And I'm not picking on you here, I tend to do the same!

Yes, we're trading one set of problems for another, but what if the old set of problems was "O(n²)" and the new set of problems is "O(nlog(n))"? Or maybe it's the other way around. Why isn't it obvious how to apply those earlier skills here?

nytgop77
I would like to note that even with algorithms/data structures, "best" is USUALLY not the word (even assuming that all algorithms/structures are discovered/known). "Good enough" / "fits in my multi-dimensional budget" is the reality:

(0) It is easy to order two real numbers: 1 is greater than 0. But can we order points on the Cartesian plane, (1,2) or (2,1) or (0,100000000000)? Already at the 2nd dimension not all points are easily ordered. There is a solution, "just give priority to the 1st coordinate", but can you really completely disregard memory usage and focus solely on CPU?

(1) Theoretically, to get the "best", one needs to evaluate not only the O(n) of average CPU usage and average memory usage, but also worst and best cases, while using knowledge of the input data distribution (e.g. maybe the input is almost sorted). Memory access patterns, cache locality, battery life, and suitability for your hardware must also come into play. (many dimensions)

(2) Practically, one has to do profiling on real hardware with real configurations / inputs / workloads. Different workloads may favor different algorithms/structures. (again, results with many dimensions)

(3) Due to constrained time, people will not even go over all algorithms. Real people will immediately rule out the really bad ones, then pick 1 or 2 algorithms that theoretically are a good enough match, and see if their profiling results fit in the CPU/memory budgets. (even if the budget is not on paper, but just part of intuition)

fouc
Checked the Twitter thread, and I was thinking it sounds a lot like how strings are linked lists in Elixir.
andi999
Cool, but why/what for?
umvi
I'm just as much of a novice as you, but one of the use cases the creators had in mind is distributed computing systems. For example, if you have to crunch a bunch of data in the cloud, you would write your data-crunching function/algorithm (which is represented by some hash '#asdjh238ad'), then spin up nodes to crunch data using '#asdjh238ad'. When a new node in a cluster comes up it can say "I don't have '#asdjh238ad'", and the orchestrator or one of the node's peers can send over a copy of it.

With a traditional programming language you couldn't do this, because "send me a copy of sort()" would be met with "which sort()?". Whereas with Unison every different sorting implementation would have a different hash, so there would be no confusion.
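A toy Python sketch of that exchange (the hash scheme and in-memory "stores" are invented; only the shape of the protocol is the point):

    import hashlib

    orchestrator_store = {}                      # hash -> serialized definition

    def publish(code: bytes) -> str:
        h = hashlib.sha3_256(code).hexdigest()[:12]
        orchestrator_store[h] = code
        return h

    def fetch(h: str, local_store: dict) -> bytes:
        if h not in local_store:                 # the node says "I don't have <hash>"
            code = orchestrator_store[h]         # a peer or the orchestrator sends it over
            # Content addressing makes the transfer verifiable:
            assert hashlib.sha3_256(code).hexdigest()[:12] == h
            local_store[h] = code
        return local_store[h]

    crunch_hash = publish(b"def crunch(chunk): return sorted(chunk)")
    node_cache = {}
    fetch(crunch_hash, node_cache)               # first request: fetched and verified
    fetch(crunch_hash, node_cache)               # second request: served from the local cache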

Nullabillity
That makes sense as a build system (and is more or less how Nix works). The question would be why you'd subject your source code to this.
mst
There was a paper that implemented an R7RS-compatible module system for Termite Scheme that used hashes for identification for network transfers of code, but left the source files still normal. I think focusing on the textual representation too much misses the point a bit here.
taneq
Why not just identify it as the function text at that point?
torginus
I'm not totally convinced by this.

- Storing the AST on the disk in a million files is not necessarily the best use of the filesystem. In contrast, most languages store text files on the disk, and build up a similar AST in memory only

- You can't view your code without special tools, which means all text editors/version control etc. need to be Unison-aware

- Since the language is append only, all edits look like additions in version control

- Their solution for the diamond problem (depending on multiple versions of the same library) is having hard dependencies on exact versions, and including both copies can be at best wasteful, at worst bad (what if v2 fixes a bug that was in the v1 dependency?). I think this is a hard problem, and the reason why semver exists

- As others have mentioned, the append-only nature of the language makes bugfixes difficult

- Solutions that dynamically discover code dependencies and automatically run tests exist for both procedural and functional languages

- Detecting that 2 things are the same through hashing is nontrivial; can it detect that 1 + x + 1 is the same as x + 2? The ASTs are different

asoltysik
> Storing the AST on the disk in a million files is not necessarily the best use of the filesystem

A new codebase format just uses a sqlite database instead of a million files

> Since the language is append only, all edits look like additions in version control

Traditional methods of showing change in version control, that is, text diffs, don't make sense here anyway

> Detecting that 2 things are the same through hashing is nontrivial, can it detect that 1 + x + 1 is the same as x + 2? The ASTs are different

It can't detect that. If it could it would be pretty cool, but I don't think it would improve the usability too much

the-smug-one
What's the point of detecting that 1 + x + 1 is the same as x + 2 anyway? If I wrote it in one way, I meant it to be that way for a reason. Should it also be able to prove arbitrary code is semantically equivalent? Well, it can't do that for obvious reasons.
ballenf
Why not use hashing with locality for similarity? That is, if the two samples above "hashed" to a similar value, it might be helpful for finding similar code.

Hashing was created to prevent collisions and ensure small changes have big differences in result. The first requirement makes sense here, but not sure how the second helps.
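For illustration, a small Python sketch of "similarity instead of exact identity" over code: compare token sets with Jaccard similarity (real locality-sensitive hashing, e.g. MinHash, approximates this with fixed-size signatures; this just computes it directly):

    import re

    def tokens(src: str) -> set:
        return set(re.findall(r"\w+|[^\s\w]", src))

    def similarity(a: str, b: str) -> float:
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb)

    print(similarity("x + 2", "1 + x + 1"))          # 0.5: related despite different ASTs
    print(similarity("x + 2", "open(path).read()"))  # 0.0: unrelated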

xpe
Semver is an uneasy compromise at best. Rich Hickey has a nice talk that digs into the principles around changing software. Once you see this POV, you are unlikely to view Semantic Versioning as anything other than a messy hack.

I'm not saying it is worse than nothing, but sometimes ideas have a way of sticking around too long and making people comfortable.

modernerd
The talk for those interested: https://youtube.com/watch?v=oyLBGkS5ICk
Pet_Ant
And the transcript: https://github.com/matthiasn/talk-transcripts/blob/master/Hi...
infogulch
I think Unison paired with a strong graph database instead of the filesystem would be a powerful combo. It would very naturally represent the AST graph directly and would benefit from graph db optimizations. The cost would be the need to invest a lot in new tooling: you'd want a graph db-based source control implementation that offers similar cryptographic certainty to git; you'd have trouble using existing tooling directly like text editors that expect files on disk; etc.
musingsole
The combination of the two makes me think auto/AI-generated code would be much more feasible and powerful in such an ecosystem.
jackcviers3
Semver doesn't help in the case of transitive binary incompatibility. If lib A depends on B v2, and lib C depends on B v1, and application D depends on A and C, you cannot load a version of B that satisfies D, A, and C. Semver tells you that B v1 and B v2 are incompatible, but not how to solve the issue.

Unison solves the issue - there isn't any binary incompatibility, because the transitive versions of B v1 and B v2 cannot be in conflict - the function references are to guaranteed unique and different versions of the AST.
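A toy Python sketch of why the diamond dissolves under content addressing (every name here is invented; the point is just that A's calls and C's calls resolve to different hashes, so both "versions of B" coexist without any loader-level conflict):

    definitions = {
        "#b_parse_v1": lambda s: s.split(","),
        "#b_parse_v2": lambda s: [x.strip() for x in s.split(",")],
    }

    def lib_a(text):                 # compiled against B v2
        return definitions["#b_parse_v2"](text)

    def lib_c(text):                 # compiled against B v1
        return definitions["#b_parse_v1"](text)

    def app_d(text):                 # depends on both A and C; nothing to resolve
        return lib_a(text), lib_c(text)

    print(app_d("a, b"))             # (['a', 'b'], ['a', ' b'])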

As for bug fixes - you can specify in your code exactly which version to use.

As for editors needing to be Unison-aware - they just delegate everything to the compiler via LSP and BSP.

Bug fixes are no more difficult than making the change. A new version is created, and your code can now depend on it. Old code will still run off of the old version. It's up to the code owner to decide to use the new, but fixed version.

Version control is all handled in the language itself.

As for the hard hashing problem... Runar is a particularly intelligent individual. I expect that his algorithm works pretty well.

The first argument about storing the AST is moot in an age where cached compiled TypeScript, Python, and .class files take up inordinate amounts of disk space.

> Solutions that dynamically discover code dependencies and automatically run tests exist for both procedural and functional languages

Eh. Pip and Yarn ain't got nothing on Maven and Ivy and apt. But yes, dependency management isn't anything new under the sun. Dynamically resolving individual function versions in packages alongside binary-incompatible functions is.

billytetrud
Honestly, programming without language aware tools in this day and age is very inefficient. Sure, in a pinch you can use a text editing program to edit stuff, but it wouldn't be so hard to install the standard editor in that case.
lamontcg
Loading both copies of a library can be very useful to deal with the situation where one piece of code has been ported to v2 (due to bugs/features or just generally keeping up with updates) and another piece of code is hard-blocked on the v1->v2 migration because it is much more costly, and it's possible that v2 is actually buggier for that other use case. There's a bit of a naive idea that software always gets better for everyone and that projects have an infinite amount of spare time to drop everything to bump dependencies. That feature is actually very useful.

(Which is not to defend the append-only immutability of the rest of the language, that looks a bit whack -- but then I've seen whack stuff get wildly popular, so I have no idea -- but while having 2 versions loaded at the same time might be useful, I'm not sure I want to deploy every version that has ever existed; that smells way too bloated)

torginus
You are right - but choosing the correct solution IMHO needs to be done with human oversight - I think semver-based dependency resolution works great here; for example, if bar requires foo 1.0 and baz needs 1.0.1 they will happily use the same version, but if baz used foo 1.1 they would use separate ones.
fastball
I think another key idea is that you're still thinking about libraries as complete packages where you kinda install two versions of the same thing. But it seems more likely in the Unison ecosystem that you'd end up with the ability to much more easily only extract the specific functions you need.

So say there are v1 and v2 of a utility lib in my dep tree, but I'm actually only using func A from v1 and func B from v2. Then I just have the AST of v1.A and v2.B in my deps and everything works.

Nullabillity
You still need some unit of atomicity to be able to maintain invariants. You can't combine HashMap_LinearProbe::insert with HashMap_Chains::remove, because they both depend on implementation details in order to maintain HashMap's invariants.
lamontcg
Except 1.0.1 can fix a bug that one piece of code needs, while another piece of code can be happily bug dependent upon it.

You can scream at the developers that they've violated semver but a "bugfix" is entirely subjective (relevant xkcd, spacebar heating, etc).

And even when developers violate semver in a point release the problem still exists. They actually rarely, if ever, rollback with a 1.0.2 that is equivalent to 1.0.0 and instead usually move forwards.

And if you have a language that supports loading 1.0 and 1.1 then there's no point in being artificially constrained over which two versions can be loaded at the same time based on the label, the underlying framework shouldn't be built to care. There's no need for a multi-version library loader to care about what a bugfix is.

uncomputation
(Mentioned xkcd: https://xkcd.com/1172/)
slver
SemVer remains a pragmatic approach that works in a vast number of cases. It's unclear what alternative we have here which works in more cases.
jonahx
Go takes an alternative approach:

https://www.youtube.com/watch?v=wWApoImHuf8

slver
That's not a new idea; I also put the version in my package name, so that you can use v1.x with v2.x, etc.

It doesn't contradict SemVer.

However when you do that, it becomes a big deal to jump from one version to another even if the breaks are minor. So pros and cons.

slver
It doesn't take an alternative approach, it's literally walking in the footsteps of Java.

As a young platform it promised compatibility forever and kept releasing 1.x versions. Eventually it got bloated and stale, and people started moving to other platforms that offer a more modern feature set.

So Java changed their approach and started releasing major versions regularly and deprecating packages for removal.

This video shows how poor we are at communicating these lessons to one another, and we constantly have this moment of "teen rebellion spirit" where some new project purports to come up with a Totally New And Way Better Approach To Things, and eventually it relearns the same lessons as everyone else (sometimes even poorly).

It's like the saying: "those who don't learn their history are doomed to repeat it".

lamontcg
We don't have any better alternative, but let's not be naive about it when it comes to building bits of framework.

Semver would just be an artificial impediment at this level.

slver
SemVer is an impediment only if you insist on making it so.
Not necessarily the same thing, but have you looked into the Unison project at all?

Unison is an open source functional programming language based on a simple idea with big implications: code is content-addressed and immutable.

Homepage: https://www.unisonweb.org/

Strange Loop talk: https://www.youtube.com/watch?v=gCWtkvDQ2ZI

I was just racking my memory and searching through my library of interesting links to find exactly this! Paul Chiusano gave a nice introductory talk at Strange Loop last year: https://www.youtube.com/watch?v=gCWtkvDQ2ZI
> Why can't methods (for instance) get displayed as convenient, without worrying about where they are in the file?

This is something Unison is playing with in their codebase manager.

https://www.youtube.com/watch?v=gCWtkvDQ2ZI

zubairq
Actually Pilot works in the same way. It treats all code as independent of the file system, identified by the SHA256 of the content. https://github.com/zubairq/pilot
Jan 10, 2020 · reggieband on The Unison language
I recently came across Unison through YouTube recommending me a series of videos from the "Strange Loop" channel [1]. The fundamental idea of uniquely addressing functions based on a hash of their AST is mind-blowing to me. Immediately my mind started to consider many of the possible paths such an idea could lead down, many of which are clearly tickling the minds of many of the commenters in this thread.

My first thought was the same insight from von Neumann architectures: code is data. So I thought of package repositories with URLs corresponding to hashes of function definitions. http://unison.repo/b89eaac7e61417341b710b727768294d0e6a277b could represent a specific function. A process/server could spin up completely dumb other than being bootstrapped with the language and an ability to download necessary functions. You could seed it with a hash URL pointer to the root algorithm to employ and a hash URL to the data to run against, and it could download all of its dependencies and then run. I imagine once such a repo was up and running you could do something like a co-routine-like call, and that entire process just happens in the background, either on a separate thread or even scheduled to an entirely different machine in a cluster. You could then memoize that process such that the result of running repo-hash-url against data-hash-url is cached.

e.g. I have http://unison.myorg.com/func/b89eaac run against http://unison.myorg.com/data/c89889 and store the result at http://unison.myorg.com/cache/b89eaac/c89889

1. https://www.youtube.com/watch?v=gCWtkvDQ2ZI
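A toy Python sketch of that memoization idea (the hashing and the cache layout are invented; only the keying by function hash plus data hash is the point):

    import hashlib, json

    def h(obj) -> str:
        return hashlib.sha3_256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:10]

    cache = {}                                   # (func_hash, data_hash) -> result

    def run(func_hash, func, data_hash, data):
        key = (func_hash, data_hash)
        if key not in cache:                     # e.g. a miss on /cache/<func_hash>/<data_hash>
            cache[key] = func(data)              # could be scheduled on any node; the key pins the result
        return cache[key]

    crunch = lambda xs: sum(xs)
    data = [1, 2, 3]
    print(run(h("crunch v1"), crunch, h(data), data))   # computed
    print(run(h("crunch v1"), crunch, h(data), data))   # served from the cache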

Jan 10, 2020 · tracnar on The Unison language
There's an example in this talk: https://youtu.be/gCWtkvDQ2ZI?t=1903

Basically you can quote code (like in Lisp) and pass it to an executor (thread, remote server, ...).

The main difference is that, thanks to the immutability, you can also have a protocol which efficiently shares all the dependencies required to execute the code (like a git pull).

It also uses an effect system, so the implementation does not need to decide which executor to use, it only uses an interface called an "ability", and the caller can choose the implementation.

Jan 10, 2020 · ed on The Unison language
For a good introduction, watch this talk at 1.5x speed from 2:50 to 5:40

https://youtu.be/gCWtkvDQ2ZI?t=170

Jan 10, 2020 · emmanueloga_ on The Unison language
"Unison: a new distributed programming language" by Paul Chiusano [1]

1: https://www.youtube.com/watch?v=gCWtkvDQ2ZI

the_duke
This talk highlights the benefits and interesting features of Unison much better than the docs.

Highly recommended.

capableweb
The video is from Sep 15, 2019 and still more useful than their website? Anyone have a transcript to link?
Sep 25, 2019 · 9 points, 3 comments · submitted by mpweiher
imglorp
I was hoping to see a conversation here.
eterps
Yep this post went completely under the radar :-(
eterps
There's some discussion on proggit:

https://old.reddit.com/r/programming/comments/d55jr3/unison_...

Sep 23, 2019 · 3 points, 0 comments · submitted by mpweiher
Sep 16, 2019 · 6 points, 0 comments · submitted by pfraze
Sep 16, 2019 · 4 points, 0 comments · submitted by truth_seeker
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.