HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Keynote: The Value of Values

Rich Hickey · InfoQ · 189 HN points · 23 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Rich Hickey's video "Keynote: The Value of Values".
Watch on InfoQ
InfoQ Summary
Rich Hickey compares value-oriented programming with place-oriented programming, concluding that the time of imperative languages has passed and that the time of functional programming has come.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> An object's internal state is "none of your business."

An object's state is my business, as immutable objects can be used in ways that mutable ones cannot. They can be passed to arbitrary functions with no need for defensive copying. They can also be useful in concurrent programming. None of that means breaching the separation of interface and implementation.
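The defensive-copying point can be sketched in Java; the `Meeting` class and its fields here are hypothetical names for illustration:

```java
import java.util.Date;
import java.util.List;

class DefensiveCopy {
    // A mutable Date must be copied on the way in and on the way out,
    // or callers could mutate this object's internal state.
    static class Meeting {
        private final Date start;
        Meeting(Date start) { this.start = new Date(start.getTime()); } // copy in
        Date getStart() { return new Date(start.getTime()); }           // copy out
    }

    public static void main(String[] args) {
        Date d = new Date(0);
        Meeting m = new Meeting(d);
        d.setTime(99999);                           // caller mutates its own Date...
        System.out.println(m.getStart().getTime()); // ...the Meeting is unaffected

        // An immutable value needs no copying at all:
        List<Integer> xs = List.of(1, 2, 3);        // List.of returns an unmodifiable list
        List<Integer> shared = xs;                  // safe to hand out freely
        System.out.println(shared);
    }
}
```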

> Strings in Java are technically a class, but they're really treated like primitives (evidenced by the fact that literals are magically made into String objects).

Immutable objects can generally be treated as values, that's their charm. There's a good talk on this topic, The Value of Values. [0]

> immutable class instances aren't really "objects" anymore- they're just (possibly opaque) data types

They're certainly still objects. The essence of object-orientation is in dynamic dispatch, not in stateful programming.

[0] https://www.infoq.com/presentations/Value-Values/ (Perhaps skip to 22:00 to get a sense of the general point.)

ragnese
> An object's state is my business, as immutable objects can be used in ways that mutable ones cannot. They can be passed to arbitrary functions with no need for defensive copying. They can also be useful in concurrent programming. None of that means breaching the separation of interface and implementation.

I'm not advocating for object-oriented programming. What I'm saying is that if you "buy in" to the actual, abstract concept of object-oriented programming, then the internal structure or state of the object you're communicating with is, by definition, out of your control. Of course, in practice, you know that sending a "+ 3" message to the object "Integer(2)" is always going to return the same result, but you have no idea if the Integer(2) object you're talking to is logging, writing to a database, tweeting, or anything else. And in "true" OOP, you're not supposed to know; you just take your Integer(5) response message and go on your way. When I say "true OOP" I'm thinking about something like Smalltalk or an Actor framework/language.

I'm not talking about anything practical here. Just the "pure" concepts. Obviously, Java has made pragmatic choices to allow escape hatches from "true" OOP in a few places: unboxed primitives, static methods, and a handful of other things, probably.

So it's just very un-Smalltalk-like for an object's API/protocol/contract to make any kind of reference or promise about its internal state at all. That is implementation in a pure OO sense.

MaxBarraclough
> if you "buy in" to the actual, abstract, concept of object oriented programming, then the internal structure or state of the object you're communicating with is, by definition, out of your control

That's not specific to OOP though, it's a very general concept in programming.

A program is generally decomposed into smaller units which make some promise about how they will behave, hiding their internal workings from the programmer who makes use of them. This is just as true for C/Forth/Haskell as for Python/Java/Smalltalk, depending on how a program is designed.

> you have no idea if the Integer(2) object you're talking to is logging, writing to a database, tweeting, or anything else. And in "true" OOP, you're not supposed to know- you just take your Integer(5) response message and go on your way

Right, you're meant to interact with an object in such a way that you rely only on the documented behaviour that the object promises to provide, you aren't meant to rely on knowledge of its internals. Objects are also a good way of cleanly separating concerns, and then composing the solutions.

On further thought I got it wrong earlier. You're right that internal state isn't my business, but immutability isn't about internal state.

Whether String (or some other class) is mutable or not isn't an implementation detail; it's an important property of the public interface offered by the class, and it's only a property of the public interface. I don't care whether my JVM implements String in Java or in assembly code, nor do I care if it's immutable internally, but I do care that the implementation satisfies the advertised behaviour of the class, and String promises to be (that is, to appear) immutable.

The internal implementation is required to meet the constraints imposed by the class's public interface, and in the case of String, those constraints include that the class must appear immutable to the user, even under concurrent workloads. In principle the implementation is permitted to have mutable internal state, provided the object always appears immutable to the user.

Similarly, whether a class is thread-safe is a public-facing attribute of the class. The class can implement thread-safety any way it wants.
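The "mutable inside, immutable outside" point can be illustrated with the familiar lazy hash-caching pattern; the `Symbol` class below is a hypothetical sketch (OpenJDK's String uses a similar cache, but this is not its source):

```java
// An externally immutable class with mutable internal state: the hash is
// computed lazily and cached, yet callers can never observe a change in
// the object's value.
final class Symbol {
    private final char[] chars;   // never exposed or modified
    private int hash;             // mutable cache; 0 means "not yet computed"

    Symbol(String s) { this.chars = s.toCharArray(); }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {             // benign race: worst case we recompute
            for (char c : chars) h = 31 * h + c;
            hash = h;
        }
        return h;
    }

    public static void main(String[] args) {
        Symbol a = new Symbol("abc");
        System.out.println(a.hashCode() == a.hashCode());     // stable across calls
        System.out.println(a.hashCode() == "abc".hashCode()); // same formula String's javadoc documents
    }
}
```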

Disclaimer: I use both at work and prefer clojure.

Common Lisp is huge, which is fine. In some respects it is closer to traditional languages (mutable data structures, multiple-inheritance OO, tools to write low-level efficient code). It is more extensible than clojure (reader macros, symbol macros, CLOS and the MOP). With quicklisp you can find most of the libraries you might need.

Clojure is much further from traditional languages. It puts immutable maps and vectors up front and expects you to work with data directly. Why that is preferable is hard to explain in a short post; if you're interested in the ideas I'd recommend watching Rich Hickey's talks like Value of Values[1].

> neither of these languages can help me much in finding a new job

I don't know about CL, but there are clojure shops hiring. Depending on your employer, if the JVM is being used you might get a chance to use clojure as well (or CL through ABCL, for that matter). It's also used by some shops for the frontend. And for your own work you can often choose what to use. Both can be used as a scripting language (clojure through babashka[2]).

> CL sounds like the logical choice because of SLIME

I'm not sure what you mean.

> Clojure sounds like the choice to avoid because of the JVM and the fact that I already know Go that plays in the same domain...

I don't get the JVM hate. Sure, every platform has its issues, but the JVM is a highly tuned virtual machine that has decades of development behind it. It is cross platform. It is used for high performance projects as well. It has libraries for everything.

[1] http://infoq.com/presentations/Value-Values [2] https://github.com/babashka/babashka

lispm
> concepts it is closer to traditional languages

and in others it's not: it has fully introspective and reflective implementations, source-level interpreter implementations, Lisp on various levels (instead of using a static VM infrastructure) -> Lisp implementations largely written in itself incl. its compilers, highly interactive with integrated condition handling, a dynamic object system, image-based development, ...

TornadoFlame
Hi there, thanks for your answer!

> Disclaimer: I use both at work and prefer clojure.

Could you please elaborate? Just want to know how you use both at work! (i just can't really think of a 2021 scenario for CL, legacy code aside)

> I'm not sure what you mean.

Nothing at all, it was a typo. Swap SLIME with 'cl-lib. What I wanted to say is that Common Lisp seems to have a certain level of compatibility in Emacs itself through this package; whether the same is also feasible with Clojure, I don't know.

> I don't get the JVM hate. Sure, every platform has its issues, but the JVM is a highly tuned virtual machine that has decades of development behind it. It is cross platform. It is used for high performance projects as well. It has libraries for everything.

No no! It wasn't hate, I just stumbled upon this article [1] and thought that the level of interactivity between CL and Clojure was clearly in favor of the former.

Sorry if I'm talking rubbish, I discovered the Lisp ecosystem less than a week ago. CL and Clojure were just names before, and all these dialects with all those implementations are confusing, because I don't have a clear overview!

Thank you again :)

[1] https://gist.github.com/vindarel/3484a4bcc944a5be143e74bfae1...

kazinator
> i just can't really think of a 2021 scenario for CL, legacy code aside

I've been reading similar statements every year since I started programming in Lisp in around 2000, and could easily find more of the same going back in Usenet archives and whatnot before that.

"Against the tide of Common Lisp" (Usenet, 1986): https://www.usenetarchives.com/view.php?id=net.lang.lisp&mid...

"Considered Opinion: LISP is Terrible" (Usenet, 1983): https://www.usenetarchives.com/view.php?id=net.lang.lisp&mid...

Excerpt: "I have been attempting to evaluate LISP as a programming language for VLSI design, and after having read MANY LISP programs, I have come to the same set of conclusions about LISP as a programming environment. To make LISP usable, each particular interest group defines/modifies/extends LISP for its own purposes. The result is a set of islands of users, each with their own "flavour" of language, which they claim is the "one, true, and holy LISP. I like the LISP data structuring; As a programming language I rank it in the same class as APL: useful, highly unmaintainble, very hard to document, and VERY unportable."

TornadoFlame
Thank you, now it's worse than before!

No really, I get your points (through your other answers too).

I decided to just dive into CL. If you have some valuable resources, I would like to know!

Dec 24, 2020 · 1 point, 0 comments · submitted by mbrodersen
I recommend watching this talk:

“The Value of Values” https://www.infoq.com/presentations/Value-Values/

It explains what the difference is between state and value and why most (almost all) programs actually have very little state and can be written mostly stateless. It was a big eye opener for me.

Jul 22, 2020 · 1 point, 0 comments · submitted by tosh
> can't the compiler work that bit out

Technically no, not in all cases, due to the halting problem. In practical terms, read-before-write issues do happen in real C code, so it makes sense to take steps to avoid them. (Languages like Java force the programmer to write code where the compiler can guarantee the absence of read-before-write errors; sometimes the programmer just assigns a dummy value of zero to satisfy the compiler, though, and may accidentally end up using it.)

> Source code is for humans.

Yes, that's precisely my point. It's about making the code readable and easy for a programmer to reason about. It's unlikely there will be any performance impact either way; decent compilers should be good at lifetime-analysis and register-allocation.

It's more readable to declare a short-lived local on its first use. This makes its precise type more apparent, as you don't need to scroll up to its declaration. This is particularly important in C, where using the wrong type can have especially nasty consequences.

The new style also makes it immediately clear over what scope the variable is relevant, as the local does not exist in scope until it is declared and assigned. That is to say, it only exists when it should. I expand on this in my other comment in this thread.

Related to this, the new style helps prevent undefined behaviour by making it less likely you'll accidentally introduce a read-before-write. Again, those errors do happen in real production code. It's the kind of error static analysers pick up in long-trusted codebases.

The old style makes your code less dense, artificially increasing the number of lines in a function.

The new style also enables you to use const, which of course requires assignment at the point of declaration. If you use const with your locals, you do not have to scan the code to determine if the local is modified later on, you know at a glance that it will not be. This lets you reason about values, rather than the current state of a local. If you can access the local, you know it holds the right value. [0]

If it turns out the lifetime of a local needs to be broadened, you can move the declaration up to a broader scope, but in my experience this is surprisingly rare.

It's not exactly relevant, but in C++, with RAII, you don't really have a choice, and you pretty much must use the new style rather than the old-school C style. But that doesn't tell us much here. In a similar vein, Java and C# programmers could use the old-school declare-at-the-top style, but none of them ever do.

It's just a style that used to be necessary in old versions of the C standard, which people got accustomed to. For what it's worth, the Linux kernel seems to use both styles. [1] [2]

> here's all the scratch space I'll be needing in this block

For the reasons I've given above, I don't think this is a good way to approach locals. It makes sense to leverage scope and constness to improve readability, not to just introduce a free-form set of uninitialised locals with overly broad lifetimes. That approach opens the door to avoidable bugs, and needlessly burdens the reader with having to scan the code to determine basic properties of the locals (which they may then get wrong).

[0] https://www.infoq.com/presentations/Value-Values/

[1] https://github.com/torvalds/linux/blob/master/init/do_mounts...

[2] https://github.com/torvalds/linux/blob/master/kernel/sched/c...
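The declare-at-first-use, const-by-default style argued for above translates directly to Java, where `final` plays the role of `const`. A minimal sketch with hypothetical names:

```java
import java.util.List;

class Locals {
    static int totalLength(List<String> words) {
        // Old C style would hoist "int total; int n;" to the top of the function.
        // Instead, each local appears at first use, final where it is a value.
        int total = 0;
        for (final String w : words) {
            final int n = w.length();   // exists only where it is meaningful,
            total += n;                 // and provably never reassigned
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalLength(List.of("value", "of", "values")));
    }
}
```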

Dec 30, 2016 · JMStewy on Why Clojure? (2010)
That talk was "The Value of Values," which I enjoyed as well.

Links for anyone interested:

https://www.infoq.com/presentations/Value-Values (1 hour version)

https://www.youtube.com/watch?v=-6BsiVyC1kM (1/2 hour version)

cle
That's a great talk and has had a lot of influence. It's specifically referenced in Project Valhalla's proposal: http://openjdk.java.net/jeps/169
agumonkey
Haa good catch, that's a strong sign if Oracle/Java is using it as an inspiration.
Jun 15, 2016 · 1 point, 0 comments · submitted by adgasf
So, rather than reinvent the wheel, I just ported the Clojure collections over, added generics and made them more Java-ery [1]

In the Clojure world, (and indeed in Haskell and other pure-fp languages) there is a lot more garbage collection. Rich Hickey makes a great case for this trade off here [2].

On the JVM now very little of it is stop-the-world GC, it mostly does incremental. It's been a while since I profiled a Java app in anger, but even back on JDK5 this was the case, and I know they changed it significantly in JDK7 to the G1 collector.. anyone have a clue about that?

[1] https://github.com/robmoffat/pure4j/blob/master/docs/tutoria... [2] http://www.infoq.com/presentations/Value-Values [3] http://www.oracle.com/webfolder/technetwork/tutorials/obe/ja...

The big takeaway for me was the idea of blobs and trees as Hickeyian values and commits as Von Neumann places. [1] If there is one weakness in the article, it is ending with stash, which to me seems a bit of a feature in search of a workflow more than something that enhances distribution and sharing among a team, insofar as the class of problem it addresses seems to result from larger issues of team structure and workflow [if it's worth saving, then the whole team should know about it and have access].

It's useful to know about, but placing it at the end makes it seem like a high order bit rather than something for a corner case.

[1]: http://www.infoq.com/presentations/Value-Values

erikb
Commits are also immutable in git. But your first sentence sounds like they are mutable, or not?
brudgers
I suppose I was thinking more in terms of idempotency than persistence. N Shakespearean monkeys type:

   $> echo "A rose" > any_other_name
   $> git init
   $> git add any_other_name
   $> git commit -m "Initial Commit"
There's one blob hash, one tree hash, and M commit hashes where M is the size of the set of the tuples of email addresses and author names among the monkeys.
JoshTriplett
I find stash useful primarily as a place to hold things for a few minutes, and no longer. In particular, I frequently use "git stash", followed by "git pull --rebase", and if all went well, "git stash pop". I could just as easily do "git commit -a -m 'WIP'", "git pull --rebase", and "git reset HEAD^", but I find stash more intuitive.
brudgers
That sort of gets at my point, stash makes sense in a corner of a high disciplined workflow. But the article presents it as approaching "best practice" and as a belt to sport with one's lederhosen. By analogy it's a bit like multiple inheritance in C++, on occasion and for some people it might be just the thing, but it's probably not a good starting assumption at the design phase.

Though again, it's a minor criticism and mostly related to the impression of importance positioning it at the end of the article suggests [i.e. placing the strongest point in paragraph four of the five paragraph essay].

davvolun
> as a belt to sport with one's lederhosen

That sounds to my American ears like the most German thing I've ever heard.

Thanks, and good questions. Here's a first pass:

- Not yet: But I'm a big fan of Rich Hickey's thoughts on data and time if that gives any clues on where I'd like to take it (http://www.infoq.com/presentations/Value-Values)

- Search function is very light, just a client-side search of the titles, but if people want it we'll probably build a server-side full-text search

- We've been debating different tagging schemes and actually built one out already but decided not to include it in the MVP until we get more feedback (let us know [email protected])

- Not yet, but we do want to build at least a minimal export functionality to give everyone peace of mind

- No API plans

- It's something we have to feel out, but it closely follows my own use case, where I need to store some information that I constantly refer to; self-hosted is a thought but that automatically rules out most casual users. We'll see...

jszymborski
Can't use this without full-text search... it's a shame. I was looking for something more pretty than Tomboy Notes, but so far nothing is beating it.
Similar interesting talk by Rich Hickey:

http://www.infoq.com/presentations/Value-Values

luddypants
I was wondering how this relates to Datomic... I'm not really familiar enough to say much about similarities and differences, but would be interested if someone who is could comment.
None
None
ludwigvan
I asked the same question at the end of his talk, see the relevant section in the video:

https://www.youtube.com/watch?v=fU9hR3kiOK0&t=2579

Recursion is looping. Some recursive definitions express iterative processes; these are the ones that can be tail-call optimized. SICP has a good explanation:

https://mitpress.mit.edu/sicp/full-text/sicp/book/node15.htm...

The advantage of using recursive definitions for loops is that it avoids modifying places in favor of returning values. Rich Hickey gives a good talk on the subject in The Value of Values:

http://www.infoq.com/presentations/Value-Values

This seems to be spreading the downsides of mutable objects across boundaries. I find a more functional approach which deals primarily with values rather than variables to be the best way to get a grip on the complexity of modern web apps.

See Rich Hickey's "The Value of Values" for better elucidation than I can provide:

http://www.infoq.com/presentations/Value-Values

Great explanation regarding how to think about state and mutability (place oriented programming).

A brilliant related talk by Rich Hickey: http://www.infoq.com/presentations/Value-Values

> The world is very much mutable

This depends on your definition of "the world". If time and memory are taken into account, you could easily make the case that the world is very much immutable. For example, if your friend changes his email address, it's not as if his old address no longer exists. Even if it only exists in memory, it still exists somewhere, and you're still able to reference it.

If you're interested in this topic, I highly recommend Rich Hickey's talk "The Value of Values"[1].

[1] http://www.infoq.com/presentations/Value-Values

seanmcdirmid
In that case, the reference is still mutable even if the email address isn't. One may also delete an account; i.e. you can change the state of something. A language should set clear boundaries on what is mutable or not, and I see both FP and IP languages going with extreme boundaries in either direction.
One of Clojure's fundamental opinions, asserted by the language eagerly, is that vector quantities should be treated as values the way scalar quantities are in most languages. Specifically, this means that when you operate with a list and append to it, instead of modifying the value of the next (cdr) pointer in the last node, you create a new list and return the new list as the result of the append operation. These immutable collections afford certain classes of algorithm that destructively modified ones do not. Rich reviews this concept pretty well here: http://www.infoq.com/presentations/Value-Values

This story doesn't spend much time discussing this particular opinion of Clojure's, or why you would choose to circumvent it by making a mutable linked list. Additionally, by skipping the immutability discussion, it fails to discuss how appending to the head of a list is crucially different from appending to the tail when data structures share underlying data.
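The head/tail asymmetry mentioned above can be made concrete with a minimal persistent list; this is a hypothetical Java sketch, not Clojure's implementation:

```java
// A persistent (immutable) singly linked list: prepending is O(1) with full
// structural sharing, while appending must rebuild the entire spine because
// the last node's next pointer can never be mutated.
final class PList {
    final int head;
    final PList tail;   // shared, never mutated
    PList(int head, PList tail) { this.head = head; this.tail = tail; }

    static PList cons(int x, PList l) { return new PList(x, l); }   // O(1): shares l

    static PList append(PList l, int x) {                           // O(n): copies every node
        return l == null ? new PList(x, null)
                         : new PList(l.head, append(l.tail, x));
    }

    static String show(PList l) {
        StringBuilder sb = new StringBuilder("(");
        for (PList p = l; p != null; p = p.tail)
            sb.append(p.head).append(p.tail != null ? " " : "");
        return sb.append(")").toString();
    }

    public static void main(String[] args) {
        PList xs = cons(1, cons(2, cons(3, null)));
        PList ys = cons(0, xs);        // shares all of xs
        PList zs = append(xs, 4);      // xs untouched, but its spine was copied
        System.out.println(show(xs));
        System.out.println(show(ys));
        System.out.println(show(zs));
        System.out.println(ys.tail == xs);      // true: structural sharing
        System.out.println(zs.tail == xs.tail); // false: fresh nodes
    }
}
```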

Additionally it prefers to define an interface over a protocol (using definterface instead of defprotocol), and it uses camelCase naming instead of Clojure's culturally embraced hyphen-cased naming.

In short this describes a way to build a data structure in Clojure, but it is not particularly good as an example of how you should build data structures in Clojure. johnwalker cited a couple of more idiomatic examples, e.g. data.finger-trees: https://github.com/clojure/data.finger-tree/blob/master/src/...

OK, that comment was too light on substance.

The problem with strings being mutable by default is that one has to do a lot of defensive copying to avoid unexpected behavior. Mutable strings do have their place, inside a function that's building up a string, before that string is visible to any other part of the program. But it shouldn't be the default.
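That division of labour can be sketched in Java, where StringBuilder is the mutable builder confined inside the function and String is the immutable result (`join` is a hypothetical name for illustration):

```java
class JoinDemo {
    static String join(String[] parts, String sep) {
        StringBuilder sb = new StringBuilder();  // mutable, but never escapes this function
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(sep);
            sb.append(parts[i]);
        }
        return sb.toString();  // immutable value: safe to share, no defensive copy needed
    }

    public static void main(String[] args) {
        System.out.println(join(new String[]{"a", "b", "c"}, ", "));
    }
}
```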

See also: "The Value of Values" by Rich Hickey (http://www.infoq.com/presentations/Value-Values)

alexchamberlain
I disagree. Variables are mutable in C++, unless otherwise stated. Interfaces can indicate they will not modify a value by taking a const (reference).
StefanKarpinski
The primary consideration when deciding whether a type should be mutable is psychological: mutable things are containers with identity independent of their content; things that are identified by their value should be immutable. The prototypical mutable type is an array; the prototypical immutable types are numbers: if you change the imaginary part of a complex number, you don't have the same number with different content, you have a different number. When you think about strings as arrays of bytes, as you do in C, it makes sense for them to be mutable; in a higher-level language where they behave much more like atomic values, it makes much more sense for them to be immutable. It can be really jarring when some called code deep down in the guts of a program mutates a string and you see the results at the top level. In C++, which sits somewhere between a low-level and a high-level language, it's hard to say which way it should be, but the STL approach does seem to treat strings more like values than containers, which implies that they probably should be immutable.
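Both the "jarring" mutation case and the value-semantics alternative can be shown in Java, which has mutable char arrays alongside immutable strings (a minimal sketch):

```java
class Jarring {
    // Deep in the guts of the program: treats the "string" as a mutable array
    static void shout(char[] s) {
        for (int i = 0; i < s.length; i++) s[i] = Character.toUpperCase(s[i]);
    }

    public static void main(String[] args) {
        char[] name = {'a', 'd', 'a'};
        shout(name);                          // mutates the caller's data in place
        System.out.println(new String(name)); // the change is visible at the top level

        String s = "ada";
        String t = s.toUpperCase();           // value semantics: a new value is returned
        System.out.println(s + " " + t);      // the original is untouched
    }
}
```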
Have a bunch of stateful stuff that you need to keep track of? Encapsulation is nice.

I think this one is open for debate. There are other models for managing state which make things simpler to reason about than classical OO with encapsulation. One such example is Clojure's epochal time model.

http://www.infoq.com/presentations/Value-Values

RogerL
I think everything I wrote is a matter for debate. It's impossible to capture all of software design in two paragraphs. Ten minutes ago I was writing some code to display a baseball game - a quick Python hack, not a game or anything. To me a class plus a bit of encapsulation for things like players, ball, the field is 'just right' in a Goldilocks way. A bigger problem, a different domain, and your link may make a lot more sense. My real point was that you pick and choose based on your needs, not that some incredibly hastily written list is immutable and not open to argument.

I would also suggest you (perhaps through equal haste in writing) made a category error. Encapsulation != OO. For example, I can achieve encapsulation in C just by putting variables in my .c file, and not distributing the .c, but only the headers and a lib. I am not trying to nitpick, but wondering if 'classical OO' is part of your assumption.

In any case, I have never programmed in Clojure, and know nothing about epochal time models. It looks interesting enough, but is it a tool I can readily reach for if I am programming in C++, Python, or what have you? Will others understand it? Googling provides only a dozen or so relevant links. I think all in all I stand behind "Encapsulation is nice". It is nice, it is not the only way or necessarily the best.

chongli
I would also suggest you (perhaps through equal haste in writing) made a category error.

Yes, indeed it was haste. Where I really want to draw a distinction is between values and mutable objects (which may or may not use encapsulation). Encapsulation is a leaky abstraction when applied to mutable objects because the hidden state of some object may impact other parts of the system in various ways. Values (and functions of values) are a much sounder abstraction because they are referentially and literally transparent.

know nothing about epochal time models

The epochal time model is a mechanism used to coordinate change in a language which otherwise uses only immutable values (e.g. Clojure). It provides the means to create a reference to a succession of values over time. This means that any one particular list is immutable but the reference itself is mutated to point to different lists over time. The advantage of this is that these references can be shared -- without locks, copying or cloning -- because the succession of values is coordinated atomically.

The value Feb 23 2013 7:32:14AM never changes, but the result of the function now() does change from time to time.

I don't know if the above makes any sense to you.

http://www.infoq.com/presentations/Value-Values
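For readers more at home on the JVM, the epochal time model can be sketched with an AtomicReference standing in for a Clojure atom; this is an illustrative analogy, not Clojure's implementation:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// An identity (the AtomicReference) points to a succession of immutable
// values over time. Observers holding an old value are never affected by
// later updates, and the reference can be shared without locks or copying.
class Epochal {
    public static void main(String[] args) {
        AtomicReference<List<Integer>> ref =
            new AtomicReference<>(List.of(1, 2));

        List<Integer> snapshot = ref.get();   // a stable value, not a place

        // Advance the identity to a *new* immutable value (like Clojure's swap!)
        ref.updateAndGet(old -> {
            var next = new java.util.ArrayList<>(old);
            next.add(3);
            return List.copyOf(next);         // freeze before publishing
        });

        System.out.println(snapshot);         // the past is immutable
        System.out.println(ref.get());        // the identity has moved on
    }
}
```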

Jan 25, 2013 · krosaen on The IDE as a value
I like this trend towards values instead of objects, Rich's talk is really worth watching too http://www.infoq.com/presentations/Value-Values. Using this approach in UIs is stretching the idea even further into a realm where I previously thought OO had a sweet spot.

I still like to convince myself, though, as to why this approach makes sense over the traditional OO / encapsulation approach. My current line of reasoning is that while having well defined services and interfaces can help organize a software system, the actual arguments and return values of these end points are really better thought of as values - and the expected structure of these values can certainly be part of the API. You just don't force people to construct and destruct objects for everything you are passing around.

A couple of related resources on this topic:

"Stop writing classes" from PyCon http://www.youtube.com/watch?v=o9pEzgHorH0

Rob Pike's reflection on using data vs OO:

https://plus.google.com/101960720994009339267/posts/hoJdanih...

chipsy
I agree with the trend, and also with the sentiment around interfaces.

Mutable/encapsulated approaches(which don't end at OOP, but rather continue into language design) act to define protocols. Protocols have a necessarily stateful nature; in the best case, the state is a single value at a single time, but in many real-world situations, state change events are frequent and need careful attention. Source code itself tends to need hundreds of changes to reach application goals, regardless of the language.

Functional and immutable style, on the other hand, acts on processes that are inherently computational, and this is the "nitty gritty" of most business logic. Even if the system is designed in an imperative fashion, it can bolster itself through a combination of a few "key methods" that apply functional style, and a type system that imposes a substantial amount of immutability.

The tricky part with OOP as we know it is to recognize when you're making a protocol, and when you're making an immutable computation. Many of the OO type systems used in industry lack the expressiveness to distinguish the two, and rush the natural life cycle of the architecture by imposing classes too early.

seanmcdirmid
I don't really get it. From the blog post, Granger is describing a very imperative object system, but you seem to be claiming that it is somehow a value-oriented functional system. What am I missing?
krosaen
That's a good question - there are still objects, or groupings of data, and even tags that identify which behaviors or functions apply to them in various contexts. What strikes me as different is that the entire hierarchy is a nested data structure that is easy to reason about and modify at runtime without the use of a fancy debugger. A use of 'encapsulation' that hides the underlying data in each grouping and also binds functions/methods directly to each object would in this case only make it harder to work with. Why? Because to view, construct, augment at runtime, or serialize the hierarchy would require constructors, serializers, deserializers, etc., instead of having something that is just a data structure, ready to be viewed, put on a queue, sent over the wire, etc.

The idea of 'behaviors' also provides flexibility in what functions can act on any grouping of data: that the key-value pairs needn't be associated with a 'class' which dictates what the associated functions will be adds more flexibility. As the author hints, there are other ways of achieving this agility, such as dynamic mixins.

Finally, while having a well defined protocol (or API or endpoint or whatever you want to call it) is valuable and helps organize code, I think taking this idea to the extreme and saying that every single object or piece of data you pass around as arguments or return values from these end points needs to be expressed as an abstract protocol itself is where you really start to lose. An expected format of the data structures of the arguments and return values can and should be part of an API, but needing to wrap them in objects doesn't really help - and that's where this trend I speak of begins to seem like progress to me.

seanmcdirmid
This is quite the standard dynamic languages argument, I'm not seeing much new here but this wasn't meant to be new. However, my point is that this is heavily object-oriented and heavily imperative. It is just a different kind of object system that people might not be used to. If this ever catches on, we'll just need another Treaty of Orlando to re-harmonize the object community.

To be honest, a lot of the object system seems to be confused and muddled. I mean, there are a million different ways to get the kind of flexibility in what the author call "behaviors" (a term incredibly overloaded, BTW), protocols, traits, type classes, etc...dynamic mixins are nothing new here also (I've designed a couple of research languages with dynamic mixin inheritance, fun stuff).

As an academic, I want to see reasons why X was chosen and comparisons with previous systems that chose Y instead; but I know I won't get this from most of the people who do the interesting work in this field, so I have to figure it out for myself. Calling this an object system opens up a floodgate of related systems that I can compare it against.

snprbob86
> I still like to convince myself, though, as to why this approach makes sense over the traditional OO / encapsulation approach.

I've become a 100% believer that Values are preferable to Objects & Encapsulation, full stop. However, I don't think that means you abandon objects, state, encapsulation 100%. I think that just means that you minimize them significantly.

I just watched a talk by Stuart Halloway [1] on Datomic where he makes a little "play at home" fill in the blanks table of design decisions and implications. He makes the assertion that if you take "program with values" to be the only given filled in table cell, you can make some arbitrary decisions for one or two other spots and the rest of the spots will fall out trivially with interesting properties. I guess the point is that programming with Values just affords you so much freedom and flexibility.

[1] http://www.infoq.com/presentations/Impedance-Mismatch

I knew about clojars.org, but I hadn't known about http://www.clojure-toolbox.com/. Very nice!

I also recommend the Value of Values video presentation: http://www.infoq.com/presentations/Value-Values. It will help you internalize Clojure's approach to data.

I really wish I could remember why I didn't experience nearly as much pain starting out as the author, but I'm glad that he took the trouble to put this together.

jrheard
That talk's definitely a good one. I particularly like how he frames it as a discussion about the difference between value-oriented programming (good) and place-oriented programming (what you're used to, bad). Giving the two paradigms those names really helps clarify how you think about the distinctions between the two.
graue
Off topic perhaps, but is Rich Hickey's wisdom only summarized in talks, or has he written some of this down? I see a number of glowing reviews of videos like this. While I don't want to miss out, I would personally much rather consume this info in writing.

The one I did bite the bullet on and watch in full was his talk on “hammock-driven development”. I liked it, but couldn't help feeling it would have made a very nice blog post that I could've read in 5 to 10 minutes as opposed to watching a 30 minute video.

jrheard
I really suggest biting the bullet on "are we there yet" - it changed the way I think about state and identity (in that it got me to start thinking about them at all). I haven't found anything by him in written form aside from interviews, though, but I'm still really new here.
megrimlock
Try reading this. It concisely gets at some of the main ideas of his talks regarding values, identity, and perception. http://clojure.org/state
wonderzombie
I can't speak to text, but you don't necessarily have to watch them. InfoQ offers MP3 downloads of talks, so you could listen to them on your commute, on a walk/bike ride/jog, etc. You'll still get 80% - 95% of the value.

I second fogus' comment about the Joy of Clojure, as well. I started with Simple Made Easy and the Value of Values before I even jumped into Clojure, and the Joy of Clojure dovetails quite nicely with all of the above.

vdm
The Unofficial Guide to Rich Hickey's Brain http://www.flyingmachinestudios.com/programming/the-unoffici...
fogus
While 'Joy of Clojure'[1] can in no way be considered "the way Rich thinks", we have tried very hard to capture the Clojure philosophy as we understand it. This is based on personal interactions with Rich, his talks, the Clojure implementation, Datomic, IRC chats, and experience using the language every day.

[1]: http://www.joyofclojure.com

This is a very interesting thought experiment. The consistent nature of Redis cluster has significant advantages and disadvantages, and an eventually-consistent solution might be better for many applications.

> * Sets are merged performing the set union of all the conflicting versions.

That's an interesting approach. Dealing with compound types is very tricky in this kind of system, because it's not clear which of the options are what you want. Perhaps I am not following your approach correctly, but this seems to have a high probability of causing deleted items to reappear in a set when nodes come back online (or partitions heal). Obviously intersection isn't the right operation either, because that will cause similar consistency problems with added items.
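The resurrection concern is easy to make concrete. A toy sketch in Python, with two replicas of a set diverging during a partition and then merging by plain union:

```python
# Start from a shared set containing "x".
original = {"x"}

# During a partition, replica A deletes "x" and replica B adds "y".
replica_a = original - {"x"}        # A's view: set()
replica_b = original | {"y"}        # B's view: {"x", "y"}

# Merge on heal by set union, as the proposal describes.
merged = replica_a | replica_b

# The union cannot distinguish "A never had x" from "A deleted x",
# so the deleted item reappears.
assert "x" in merged
```

Union is safe for concurrent adds but loses deletes; that asymmetry is exactly the trade-off discussed below.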

It seems like a quorum-based approach to handling sets would give a much better consistency experience for most applications, potentially at the cost of doing more reads. I wonder if antirez considered that approach.

> The Dynamo design partially rely on the idea that writes don't modify values, but rewrite an entirely new value. In the Redis data model instead most operations modify existing values.

This is something that Rich Hickey touches on in his 'value of values' talk (http://www.infoq.com/presentations/Value-Values). Making values immutable and copying them on writes simplifies many of the complexities of merging, especially if techniques like vector clocks are used to provide ordering information.

codewright
> quorum-based approach...wonder if antirez considered that approach.

I'm sure he's aware of it, but I doubt he took it incredibly seriously unless he has a very specific strategy in mind. He's explicitly avoiding the Dynamo model for a lot of reasons.

The mixture of composite types and mutation-centric semantics has boxed Redis in a bit, although I love using it.

Hickey was probably right. Pity Datomic is commercial.

antirez
In quorum-based systems a write is performed only if the majority of the players agree to accept it. If you want a system that is write-available even in a minority partition, you can't successfully use a quorum-based system, as in the famous case of the highly available shopping cart.
antirez
> but this seems to have a high probability of causing deleted items to reappear in a set when nodes come back online.

Basically there is no right way to do this, it depends on what the application goal is. For instance Dynamo queries the application in this case, so the application can merge things if needed, intersect things otherwise, and so forth.

For Sets the union was picked in order to guarantee safety. For instance if you model a shopping cart this way, and there is a net split where a client gets into a minority partition and writes a new item (as the user put a new item in the shopping cart during the partition), if you do union on merge the user will still have the item.

In other applications, of course, this side effect (resurrection of deleted items) is not a good idea, but there are different ways to deal with it. For instance, in the case of the shopping cart, this can be made more resistant by adding special items that mark old items as deleted, letting the application display only the right thing.

rdtsc
How about providing the ability to set merge handlers in Lua for different data types / keys?

Or if there are 3 different and well known strategies, somehow let users pick one and set it as a default. If not let users run Lua scripts that will be executed to resolve conflicts in a custom way.

I do this for CouchDB, it has a very convenient changes feed that can also stream conflicts when they appear. So there is a custom (and separate) conflict resolver process that resolves conflicts in an application specific way.

For consistency though you'd need a way to run those synchronously somehow as soon as you detect the conflict.

"Visualize data, not code. Dynamic behavior, not static structure."

Yes! This reminds me of what Rich Hickey has been enlightening the world about as well [1]. Bravo Bret! Thank you for writing and sharing these ideas.

[1] http://www.infoq.com/presentations/Value-Values

More notes on the video:

- Rich's whole view on the world is pretty consistent with respect to this talk. If you know his view on immutability, values vs identity, transactions, and so forth, then you already have a pretty good idea about what kind of database Rich Hickey would build if Rich Hickey built a database (which, of course, he did!)

- The talk extends his "The Value of Values" keynote [1] with specific applicability to databases

- Further, there is an over-arching theme of "decomplecting" a database so that problems are simpler. This follows from his famous "Simple made easy" talk [2]

- His data product, Datomic, is what you get when you apply the philosophies of Clojure to a database

I've talked about this before, but I still think Datomic has a marketing problem. Whenever I think of it, I think "cool shit, big iron". Why don't I think about Datomic the same way I think about, say, "Mongodb". As in, "Hey, let me just download this real quick and play around with it!" I really think the folks at Datomic need to steal some marketing tricks from the NoSQL guys so we get more people writing hipster blog posts about it ;-)

[1] http://www.infoq.com/presentations/Value-Values

[2] http://www.infoq.com/presentations/Simple-Made-Easy

Aug 14, 2012 · 186 points, 42 comments · submitted by dmuino
talaketu
Great talk. Love the phrase "Information Technology not Technology Technology".

But I do think he has been a bit unfair to databases (and primary keys) generally, in characterizing them as "place oriented". The relational model is actually a brilliantly successful example of a value-oriented information technology.

The very foundation of the relational model is the information principle, in which the only way information is encoded is as tuples of attribute values.

As a consequence, the relational model provides a technology that is imbued with all of the virtues of values he discusses:

* language independence
* values can be shared
* don't need methods
* can send values without code
* are semantically transparent
* composition, etc.

It's true that we can think of the database itself as a place, but that's a consequence of having a shared data bank in which we try to settle a representation of what we believe to be true. Isolation gives the perception of a particular value. In some ways, this is just like a CDN "origin".

Also regarding using primary key as "place". Because capturing the information model is the primary task in designing a relational database schema, the designer wants to be fairly ruthless by discarding information that's not pertinent. For example, in recording student attendance, we don't record the name of the attending student - just their ID. This is not bad. We just decided that in the case of a name change, it's not important to know the name of the student as at the time of their attendance. If we decide otherwise, then we change the schema.

hueyp
It wasn't a knock against relational databases. The issue is update-in-place. If you have a relational database that is append-only, there is no problem. He actually wrote one (Datomic).

The criticism of the primary key is again not anything against having primary keys, but that in a database that allows updates in place a primary key doesn't name a value: you pass a primary key around, and the record it refers to could be anything by the time the receiver gets around to using it. If values were immutable, passing a primary key would be fine.

I've done work with ERP systems and having the ability to query against arbitrary points in time would be amazing. What was the value of inventory on this date? There are other ways to go about this (event sourcing) but it moves all the complexity to application code. The goal would be for the database itself to do the work for us.

mickeyp
> you pass a primary key and it could be anything by the time the receiver gets around to using it. If instead the value was immutable passing a primary key would be fine.

Not sure what you mean by receiver here -- receiver as in the database or another component in your software hierarchy? The best way to ensure that your data goes unchanged across atomically disparate events is to insist that the database (Oracle, which is what I use in enterprise) lock the row. The easiest way is to use SELECT ... FOR UPDATE. The cursor you receive will have all the SELECTed rows locked until you close the cursor -- by commit or rollback. This ensures that nobody can change your data while you're working with it, even if your goal is never to actually modify it, but merely to capture a snapshot of the data. Obviously, if you have a lot of different processes hitting the same data they will block and wait for the lock to free (though this behaviour can be changed), so depending on what you're doing this may not be the most efficient way, though it certainly is the most transactionally safe way.

Another way is to use Oracle's ORA_ROWSCN which is, greatly simplified, incremented for a row when that row is changed. So: read your data including its ORA_ROWSCN, and when you update, only update if the ORA_ROWSCN is the same. A similar approach could be done with Oracle's auditing or a simple timestamp mechanism, but you obviously lose some of the atomicity from doing it that way.

> I've done work with ERP systems and having the ability to query against arbitrary points in time would be amazing.

You can do that in Oracle. You can SELECT temporally; so you could tell the DB to give you the state of the select query 5 minutes ago, subject to 1) flashback being enabled; and 2) the logs still having the temporal data.

Another way is to use an audit log table to store changes to data. We use this all the time in accounting/finance apps as people fiddling with invoices and bank account numbers must be logged; you can either CREATE TRIGGER your way to a decent solution, or use Oracle's built-in auditing version which is actually REALLY advanced and feature rich!

N.B.: I do not use other databases so my knowledge of them is limited, but it should give you some ideas at least!

yason
I think the parent meant the value vs. reference separation.

Consider that you create a record, give it a primary key N, then start referring to that value by the primary key, and at some point make an update to the record: the same primary key now refers to another value. The primary key is just a reference pointer to a placeholder (=record), and the value in the placeholder can change to anything. So you have to be careful about what you mean by the primary key, because it's just a reference, not a value. In the value paradigm your primary key would be a hash, like in git, that would forever be that one value instead of referring to some value.
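The git-style alternative can be sketched in a few lines of Python (the canonical-JSON hashing scheme here is just one illustrative choice):

```python
import hashlib
import json

def value_key(value):
    """Derive a key from the value's content, git-style: the key names
    exactly one immutable value, forever."""
    canonical = json.dumps(value, sort_keys=True).encode()
    return hashlib.sha1(canonical).hexdigest()

record_v1 = {"name": "Alice", "balance": 10}
record_v2 = {"name": "Alice", "balance": 20}

# An "update" yields a *new* key rather than changing what the old key means.
assert value_key(record_v1) != value_key(record_v2)

# The key depends only on the content, not on field order or a place.
assert value_key(record_v1) == value_key({"balance": 10, "name": "Alice"})
```

Anyone holding the v1 key can always retrieve exactly the v1 value, which is the property a mutable primary key cannot give you.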

talaketu
Exactly. Primary key is a subset of attributes - thus necessarily populated by values.

It's worth considering the correspondence between the concept of functional dependency in the relational model, and the concept of a pure function in functional programming. The issue under discussion, then, is whether referential transparency is afforded in the database.

While referential transparency in a database is achieved momentarily at the right isolation level, it is not achieved in the eternal sense of a pure function.

This is because the functional dependency in FP encodes an intensional definition, whereas the functional dependency captured in a relation is extensional, usually modelling the state of the knowledge of the relevant world, and therefore being subject to change.

gingerlime
Great talk, and without having any experience with FP, it really makes sense on many levels. I love data, and how transparent it is, and how objects seem to get in the way a lot of the time. I like queues, and shipping data from one process to another rather than sharing objects. RESTful interfaces, etc. Those concepts and tools are powerful.

The only thing I'm not too comfortable with is that space isn't really infinite. Yes, it's much cheaper, but still not infinite. If we stored all our logs in an ever growing database, and expect to be able to access it all the time, this is really very expensive. This is why we rotate logs, archive them and trash them eventually. Sure, we can afford this expense for source control, because this data (source code) is amazingly small in comparison. I'm not sure how it translates to real data on our systems, which is immensely bigger.

Also thinking about it in the context of technologies like Redis. Redis manifests a lot of the advances in technology in how memory is used. It's so vastly bigger and cheaper than before that we can afford to store our whole database in it, instead of on much slower disks. But then this super-fast in-memory database definitely faces storage size constraints that need to be considered...

Just a few random thoughts. Wish I could have a chat to Rich Hickey one day. Even if I could, I have a lot more to learn until then, so I'd make the most of this chat.

dustingetz
> This is why we rotate logs, archive them and trash them eventually.

i think an organization trashes old logs for out-of-band reasons - acquiring more disk space requires following an organization's procurement process which imposes tons of friction, or because compliance with applicable regulations requires saving, e.g., emails, for three months and saving them for longer is a legal risk.

barrkel
Old logs may be a privacy liability; and depending on the type of log and the load on the system, the fully loaded cost of keeping logs indefinitely may be too high to justify based on the revenue for a customer (I'm thinking of ISP logs in particular, for example).
true_religion
Or... because I don't really care what time Postgres started up 2 years ago.
calibraxis
I think the notion is like garbage collection (calling "new", as he mentioned) — the illusion of infinite space. (http://mitpress.mit.edu/sicp/full-text/sicp/book/node119.htm...)
oskarkv
If you found this interesting and have not tried Clojure yet, you should really give it a go. Learning Clojure teaches a lot about programming just because it is very well-designed.
elliot42
Is there any convenient way to get notified when Rich Hickey pushes a new talk or article? I can't seem to find a RSS feed, mailing list or Twitter account to follow. Any advice appreciated!
sferik
You could follow @richhickey on Twitter or just read Hacker News ;)
pydave
You could try a Google Alert [ http://www.google.com/alerts ] for "Rich Hickey (talk | article)"
notimetorelax
I've never used it, but it seems to be a very nice way to keep track of such things. Thanks a lot for the suggestion!
skardan
I think there is another great example of value based programming we use every day even on small scale: unix pipes.

cat file | grep .... | wc

There are no complex protocols involved between cat, grep and wc - just passing around the value (now I am not talking about mutable files, directories etc).

I have seen very few systems which are as simple yet as flexible and versatile. Conventional wisdom says it is because unix is a set of small utilities where each program does just one thing right. After watching the talk we should note that these utilities pass around text values.

If you want to build something as powerful and flexible as the unix command line, you should think about the value of decomposition as well as the value of values :)
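The pipeline analogy translates directly into function composition over plain values. A sketch in Python of `cat file | grep ... | wc -l`, with each stage taking a value in and returning a value out:

```python
# Each "utility" is a pure function over lines of text; the only protocol
# between stages is the value itself.
lines = ["error: disk full", "ok", "error: timeout"]

def grep(pattern, lines):
    # Keep only the lines containing the pattern, like `grep pattern`.
    return [l for l in lines if pattern in l]

def wc_l(lines):
    # Count the lines, like `wc -l`.
    return len(lines)

# Composing stages is just nesting calls, exactly like piping.
assert wc_l(grep("error", lines)) == 2
```

As in the shell, no stage knows or cares what produced its input or what will consume its output.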

kamaal
Great talk. Most of these talks on functional programming make perfect sense. They also look ideologically superior.

My only problem is that object-oriented programming looks more pragmatic in the real world. There are libraries, tools, tutorials, help forums and a lot of other stuff out there which helps anybody who wants to learn OO go from nothing to far places.

You can't say the same thing about functional programming. The community is too elitist, the tutorials are math heavy. And the tools are too ancient. Having to use simple text editors and selling them as technologies used to build large applications is a contradictory philosophy.

ozataman
I won't claim any level of tooling/IDE parity, but FPs are quickly getting up there.

As an example, Haskell has a very helpful community, lots of stimulating content (yes, some are math heavy but many/most are not), over 3000 packages on Hackage (many of which are really excellent), 700+ people on IRC at anytime constantly talking/answering questions, at least 3 major web frameworks, many concurrency libraries, database drivers/libraries for almost anything, an astounding number of utility libraries and a real world-class, top of the line compiler (GHC) that produces blazing fast, robust code. Many companies are building commercial/proprietary tools with it for mission critical applications.

YuriNiyazov
I just love it when people try to juxtapose "pragmatic" with "math-heavy". Obviously, in the real world no one ever uses math, all they do is print "Hello World" to the screen.
sbov
Unless you use Haskell, in which even printing "Hello World" to the screen is math heavy.
Munksgaard
Clojure in Emacs with Slime is hardly a simple text editor. Emacs might be ancient, but that doesn't make it less powerful.

There are also plenty of great documentation for Clojure, Programming Clojure is one, and the online documentation is excellent.

mattdeboard
Well now, I love Clojure and write it for my hobby projects, but I have to disagree about the online documentation for it. Every command is documented, sure, but that doesn't make it excellent. For example, here's the doc for the `with-open' macro:

"bindings => [name init ...]

Evaluates body in a try expression with names bound to the values of the inits, and a finally clause that calls (.close name) on each name in reverse order."

I deliberately chose a bit of code that's actually pretty simple and straightforward; also, one that I know intimately. Now, knowing what `with-open' does as well as I do, this doc string almost makes sense on the first pass. But to the layman this is almost impenetrable. I've written a couple variants of this macro and I STILL have to read the docstring a couple of times to understand what it's saying.

The online docs are comprehensive but I would never use the term "excellent" to describe them.
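For a reader who doesn't already know the macro, what the docstring is describing can be restated in another language. A rough Python analogue (this is an illustration of the semantics, not Clojure): bind names to opened resources, evaluate the body, and close each resource in reverse order in a finally clause.

```python
class Resource:
    """A stand-in for anything with a .close() method."""
    closed_order = []

    def __init__(self, name):
        self.name = name

    def close(self):
        Resource.closed_order.append(self.name)

# (with-open [a (Resource. "a")
#             b (Resource. "b")]
#   body)
a = Resource("a")
b = Resource("b")
try:
    pass  # the body would run here, with `a` and `b` bound
finally:
    # the finally clause calls .close on each name in reverse order
    b.close()
    a.close()

assert Resource.closed_order == ["b", "a"]
```

Reverse order matters because later resources may depend on earlier ones (e.g. a reader wrapping a stream).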

Munksgaard
Good point. The clojuredocs version[1] is a bit better, but you're right, it could be better.

[1] http://clojuredocs.org/clojure_core/clojure.core/with-open

ehsanu1
Not Clojure, but here's some non-elitist/non-math-heavy FP: http://learnyouahaskell.com/

There are plenty of helpful people around in various mailing lists and IRC channels. There are sour people everywhere, can't let that ruin things for you.

You need less IDE support with FP languages in general than with OO languages, but that's coming too: https://www.google.com/search?q=%22light%20table%22

lrenn
> The community is too elitist, the tutorials are math heavy. And the tools are too ancient. Having to use simple text editors and selling them as technologies used to build large applications is a contradictory philosophy.

Careful. Statements like this only create more "elitists" by insulting people. Have you seen leiningen, Counter Clockwise, or La-Clojure? Part of the reason you need all that tooling is because of the objects. If you haven't become proficient in a functional language, can you really say the tooling is insufficient? It's like telling someone their movie sucked without seeing it, or only staying for the first 5 minutes.

When I get rid of my Foo, FooImpl, JPAFoo, FooService, FooServiceImpl, FooDao, Bar, BarImpl, etc, the requirements for my editor and tooling suddenly change. If I'm not using big heavy frameworks, I no longer need all those plugins. I don't need to be able to click on my spring configuration and jump to the implementation. When I'm working in a repl, I don't need heavy integration with Jetty (for example) because I never need to restart the server. If my restful webservice just needs to be a function that returns a map (ring), then I don't need annotation support, or some framework plugin. If my code is data, my editor already supports both.

I need to move around functions, and compile my code. Code completion? Navigation? Sure, but Emacs, CC, La-Clojure can all do that. I hope you aren't insinuating that Emacs/Slime is a "simple" text editor ;)

Tutorials are their own issue. A new object oriented programming language only needs to teach you their syntax and their API. A Clojure tutorial targeted at someone who has only ever done serious work in an OOPL is going to have to explain not only Clojure, but fundamental concepts related to functional programming. Once you really learn one, the rest all make sense in the same way it's relatively easy to jump around OOPLs.

If you've accepted the technical argument, don't let those other things hold you back. The Clojure community is great, and the Eclipse and IntelliJ stuff has really come a long way.

mark_l_watson
+1 the requirement for tooling is less. I switch between IntelliJ and Emacs for Clojure development, and tools don't hold me back.

I did a fair amount of Java GWT/SmartGWT development last year and having both client side Java code and server side code running in twin debuggers was handy, but really only crucial because of the complexity of the whole setup. That said, I only write simple web apps and web services in Clojure and Noir and perhaps that is why I don't feel that I need complex tools.

sbov
This video, along with many others I've watched from him, espouses the purity of data, and talks about not tangling data with functions (object orientation).

In this video, he seems to go along and say that values are great for software projects that use multiple languages, in part because values are universal meaning one doesn't need to translate classes into multiple languages.

However, regardless of whether you use an object oriented design or not, don't you usually have a set of functions you tend to perform on a set of data or values? For instance, you may not wrap your customer data behind a class and methods, but there are still going to be some rules related to all that data you're passing around. So in the multiple languages scenario, wouldn't you still have to translate those rules from language to language?

DeepDuh
You would. The thing is that it's supposed to be easier and involving less code, since you don't need to port the interface boilerplate. At least that's how I understood it.
ajcronk
Has anyone seen a Sales CRM implemented with a temporal+value approach? Seems quite useful for tracking movement through a funnel.
jcromartie
Anything that touches financial records-keeping would be ripe for this kind of software, too. I spent quite some time on a financial services system and our number one enemy was mutable state.
mahmud
You can use a stock RDBMS and still keep track of changes. Just keep a separate "Updates" table which consists of the tuple {class, object, change-description, changed-when, changed-by}

You don't need both class and object, but I prefer to log both object type and id.
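A minimal version of that audit table, sketched with SQLite (column names follow the tuple above; everything else is illustrative):

```python
import sqlite3

# Every change to a domain object is also recorded as a tuple
# {class, object, change-description, changed-when, changed-by}.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE updates (
        obj_type     TEXT,
        obj_id       INTEGER,
        change       TEXT,
        changed_when TEXT DEFAULT CURRENT_TIMESTAMP,
        changed_by   TEXT
    )""")

def log_update(obj_type, obj_id, change, who):
    conn.execute(
        "INSERT INTO updates (obj_type, obj_id, change, changed_by) "
        "VALUES (?, ?, ?, ?)",
        (obj_type, obj_id, change, who))

log_update("lead", 42, "stage: contacted -> qualified", "alice")

rows = conn.execute(
    "SELECT obj_type, obj_id, change FROM updates").fetchall()
assert rows == [("lead", 42, "stage: contacted -> qualified")]
```

Querying "what was the state at time T" then becomes a matter of replaying or filtering this table, which is a poor man's version of what a temporal database gives you natively.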

mybuddymichael
Rich Hickey is a great thinker.
jacoblyles
Is there anywhere I can go to get a collection of useful programming videos? Somewhere that aggregates videos like these after they are uploaded?
nickik
You should be more clear about what you want; programming videos can be anything from printing hello world in Java to theoretical computer science where they don't even know how large the problems are.

Are you interested in languages, compilers, algorithms, data structures, parallelism, concurrency, databases, operating systems, virtual machines, garbage collectors, graphics, or something more meta like this 'value of values' video?

I can provide links to almost everything, but I'm not going to do the work until you tell me what you want.

pfraze
InfoQ is actually a pretty good place. Just scroll through their backlog and you're sure to find something.
lrenn
This talk needs a different title, because it's way more important than the "Value of Values".

It's a call to stop writing object-oriented software. He gives a convincing argument. You can probably find a thing or two to disagree with, but like he says, this is something we all know to be true. It's just that place-oriented programming was a necessary limitation imposed by hardware. Eventually, that limitation will no longer exist, or will cease to be relevant. At that point, the only thing that makes sense is "value-oriented programming" and, by extension, immutable functional programming.

Datomic takes this same argument and applies it to databases.

Edit: And this might be crazy, but perhaps this is the answer to "why can a building be built in a week, but a software project will be a year late and broken". What if you started on the first floor of the building, came in the next day, and the dimensions had changed? What if, when you needed two similar walls, you took a copy of one, but when you put a light switch on the copy, you accidentally put one on the wall you copied from? Buildings are made up of values: a wall of a certain length, a staircase of 42 steps. These values don't change, and if they did, constructing buildings would be a hell of a lot harder.

Evbn
Why would you want to use software that hasn't changed since your house was built? I wish my house could be updated as frequently and easily as my software.
lrenn
The fact that you can't update your house easily is because it is a physical object. Using values doesn't change how easy it is to change your software. If anything, it makes it easier because you know your changes can't screw anyone else up.
Chris_Newton
[This talk is] a call to stop writing object oriented software. He gives a convincing argument. You can probably find a thing or two to disagree with, but like he says, this is something we all know to be true.

I agree very much with the underlying theme of the talk. But changing the focus to working with immutable values is only one step in a direction away from the dominant, imperative style of programming typified by OOP. There are (at least) two more big steps that need to be taken before I can see this more functional style of programming having any chance of going mainstream.

Firstly, we have to deal with the time dimension. The real world is stateful. All useful programs interact with other parts of the world in some form, and the timing of those interactions is often important. While programming with pure functions has advantages and lends itself very well to expressing some ideas, sooner or later we have to model time. There are plenty of relevant ideas with potential, but I don’t think we’re anywhere near getting this right yet.

Secondly, there are some algorithms that you simply can’t implement efficiently without in-place modification of data. If programs are to be expressed in a way that pretends this doesn’t happen, then the compilers and interpreters and VMs need to be capable of optimising the implementation to do it behind the scenes. At best, this is a sufficiently smart compiler argument, and even as those optimisations develop, I suspect that programmers will still have to understand the implications of what they are doing at a higher level to some extent so that they can avoid backing the optimiser into a corner.

We know from research into program comprehension that as we work on code we’re simultaneously forming several different mental models of that code. One of these is what we might call control flow, the order of evaluating expressions and executing statements. Another is data flow, where values come from and how they are used. Often the data flow is what we really care about, but imperative code tends to mix these two different views together and to emphasize control flow even when it’s merely an implementation detail. Moving to a more functional, value-based programming style is surely a step in the right direction, since it helps us understand the data flow without getting lost in irrelevant details.

To really get somewhere, though, I suspect we’ll need to move up another level. I’d like to be able to treat causes and effects (when a program interacts with the world around it or an explicitly stateful component) as first class concepts, because ultimately modelling of time matters exactly to the extent that it constrains those causes and effects. Within that framework, everything else can be timeless, at least in principle, and all those lovely functional programming tools can be applied.

Sometimes, for efficiency, I suspect we’ll still want to introduce internal concepts of time/state, but I’m hoping that however we come to model these ideas will let us keep such code safely isolated so it can still have controlled, timeless interactions with the rest of the program. In other words, I think we need to be able to isolate time not only at the outside edge of our program but also around the edge of any internal pieces where stateful programming remains the most useful way to solve a particular problem but that state is just an implementation detail.

So, I agree with you that this idea is about much more than just programming with immutable values. But I don’t think we can ever do away with a time dimension (or, if you prefer, place-oriented programming) completely. Rather, we need to learn to model interactions using both “external time” and “internal time” with the same kind of precision that modern type systems have for modelling relationships between data types. And whatever model we come up with had better not rely on scary words like “monad”, at least not for the general programming population, as opposed to the people designing programming languages. In fact, ironically (or not), it starts to sound a lot like the original ideas behind OOP in some respects. :-)

lrenn
> a time dimension (or, if you prefer, place-oriented programming)

Place-oriented programming is the opposite of having a time dimension. It means that at any time, the thing at that place might be different. Places were adopted for exactly your second reason: efficiency. The talk argues that if functional programming with values is efficient enough for you, then you shouldn't be writing object oriented software.

Values, on the other hand, absolutely have a time dimension. His "Facts" slide says as much, as do Datomic and his revision control example.

He has another great talk that touches a bit more on time:

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...

Chris_Newton
For the avoidance of doubt, when I’m talking about (not) doing away with a time dimension here, I mean from the perspective of the world changing over time as our program runs, not of associating values that effectively last forever with a certain point in time (as in purely functional data structures, version control systems, etc.).

That is, even if we follow the current trend of adopting more ideas from functional programming in mainstream programming languages, I’m saying that I doubt we will ever completely remove variable state, which is what I understand Rich to mean by “place-oriented programming”, or events that must happen in a certain order.

Instead, I think we will learn to control these aspects of our programs better. When we model time-dependent things, we want to have well-specified behaviour based on a clean underlying model, so we can easily understand what our code will do. Today, we have functions and variables, and we have type systems that can stop us passing the colour orange into a function eat(food). Tomorrow, I think we’ll promote some of these time-related ideas to first-class entities in our programming languages too, and we’ll have rules to stop you doing time-dependent things without specifying valid relationships to other time-dependent things. Some of the ideas in that second talk you linked to, like recognising that we’re often modelling a process, are very much what I’m talking about here.

As an aside, it’s possible that instead of adding first-class entities for things like effects, we will instead develop some really flexible first-class concepts that let us implement effects as just another type of second-class citizen. However, given the experience to date with monads in Haskell and with Lisps in general, I’m doubtful that anything short of first-class language support is going to cut it for a mainstream audience. It seems that for new programming styles to achieve mainstream acceptance, some concepts have to be special.

In any case, my hope is that if we make time-related ideas explicit when we care about them, it will mean that when we don’t need to keep track of time, we needn’t clutter our designs/code with unnecessary details. That contrasts with typical imperative programming today, where you’re always effectively specifying things about timing and order of execution whether you actually care about them or not, but when it comes to things like concurrency and resource management the underlying models of how things interact usually aren’t very powerful and allow many classes of timing/synchronisation bug to get into production.

danenania
I'm far from an expert on this topic, but it seems that you're still missing the main point: this talk is exactly about how to model time-dependent data (immutable data), and how not to (mutable state, OO). Hickey definitely isn't advocating a system that can't change with time; such a system would be pointless. He wants changes in state (which naturally occur on a time axis) to be represented by new values, not by in-place, destructive updates of old values, as is done in OO and currently popular databases.

If you look at the results of this approach in Datomic, I think you actually do see a design that treats time as much like a first-class citizen as it's ever been treated, in the sense that time essentially acts as a primary key, and developers are provided with a time machine that allows them easy and efficient access to any state that has ever existed in their data over the history of the application. (In theory, at least; I haven't personally tried Datomic.)
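As a rough illustration of that idea (a toy sketch of my own, not Datomic's actual API): an append-only store of facts keyed by a transaction counter never updates anything in place, yet any past state can be reconstructed on demand.

```python
class FactStore:
    # Toy, Datomic-inspired sketch: every assertion is a new,
    # immutable fact tagged with a transaction number; nothing
    # is ever overwritten.
    def __init__(self):
        self._facts = []  # (tx, entity, attribute, value) tuples
        self._tx = 0

    def assert_fact(self, entity, attribute, value):
        self._tx += 1
        self._facts.append((self._tx, entity, attribute, value))
        return self._tx

    def as_of(self, tx):
        # The "time machine": rebuild the latest value of each
        # (entity, attribute) pair as of transaction tx.
        state = {}
        for t, e, a, v in self._facts:
            if t > tx:
                break
            state[(e, a)] = v
        return state

db = FactStore()
t1 = db.assert_fact("alice", "email", "a@old.example")
t2 = db.assert_fact("alice", "email", "a@new.example")

# The "update" created a new fact; the old state is still queryable.
assert db.as_of(t1)[("alice", "email")] == "a@old.example"
assert db.as_of(t2)[("alice", "email")] == "a@new.example"
```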

Chris_Newton
I’m pretty sure I understand where Rich is coming from. I’m just arguing that while moving to persistent, immutable values is a big step in what could be a good direction, it’s not sufficient by itself to justify or cause a shift in mainstream programming styles on the scale of abandoning OOP (as suggested in the original post I replied to).

You lose things in that transition, very useful things that are widely applicable. We’re not going to just give those up without having something good enough to replace them, and I thought that in this specific talk those two areas I mentioned were glossed over far too readily.

For example, although Rich said very clearly that he thought it was OK to manipulate data in-place as an implementation detail of how a new value is built, he then argued that once the finished result was ready it should become an immutable value, and that we no longer need to use abstractions that are based on or tied to that kind of underlying behaviour. I contend that there are many cases where it is not so simple even with today’s technology, and that the idea of constraining in-place mutation to the initial creation of a value is a leaky abstraction that will not survive a lot of practical applications.
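The pattern Rich endorses, mutation confined to a value's construction (as with Clojure's transients), might be sketched in Python like this. It's my own example, and it shows exactly the boundary I'm arguing can leak: it only works because the mutable accumulator never escapes.

```python
def build_value(items):
    # In-place mutation is confined to this function: accumulate
    # into a local, mutable list...
    acc = []
    for item in items:
        if item % 2 == 0:
            acc.append(item * item)
    # ...then "freeze" the result into an immutable tuple before
    # it escapes. No alias can mutate it after this point.
    return tuple(acc)

squares = build_value(range(10))
assert squares == (0, 4, 16, 36, 64)
```

Algorithms that need ongoing in-place updates to a long-lived structure, rather than a single build-then-freeze step, don't fit this shape.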

Later on, processes are briefly mentioned, but that part of the talk is about information systems, which are mostly concerned with pure data analysis. That makes it rather easy to dismiss the idea of modelling interactive processes in context, but unfortunately, a great many real world programs do need to be concerned with wider time-related concepts like effects.

I’m sure Rich himself is well aware of all of these issues. He’s discussed related ideas in far more detail on other occasions, including in the talk that lrenn cited above. But I find his challenge near the end of this talk, “If you can afford to do this, why would you do anything else? What’s a really good reason for doing something else?” to be rather unconvincing. For one thing, that’s a mighty big “if”, whether you interpret “afford” in terms of performance or dollars. For another thing, the answer to those questions could simply be “Because ‘this’ isn’t capable of modelling my real world, interactive system effectively.”
