HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Rich Hickey: Deconstructing the Database

InfoQ · Youtube · 238 HN points · 8 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention InfoQ's video "Rich Hickey: Deconstructing the Database".
Youtube Summary
Rich Hickey, author of Clojure and designer of Datomic, presents a new way to look at database architectures in this talk from JaxConf 2012.

** For more presentations from JaxConf 2012, head to http://mrkn.co/txtch

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Nov 05, 2022 · 2 points, 0 comments · submitted by tosh
Oct 23, 2022 · 1 point, 0 comments · submitted by tosh
Rich Hickey's talk on the design of Datomic is also something every programmer should see at least once: https://youtu.be/Cym4TZwTCNU
Mar 10, 2019 · tosh on The Essence of Datalog (2018)
Datalog is also used as the query language for Datomic

https://www.datomic.com/

https://youtube.com/watch?v=Cym4TZwTCNU

tannhaeuser
No, it's not. The query language of Datomic, and the one taught on learndatalogtoday.com, is a Datomic-proprietary language similar to the Prolog-in-Lisp implementations or frame-based expert systems of old.
souenzzo
There is DataScript, a FOSS implementation of the same query language (with nice performance):

https://github.com/tonsky/datascript And there is this (deprecated) project, which translates Datalog to SQL on top of SQLite: https://github.com/mozilla/mentat
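
For a rough sense of what that query syntax looks like in practice, here is a minimal DataScript sketch; the :person/* attributes are made up for illustration:

    (require '[datascript.core :as d])

    ;; build an in-memory db value from plain tx data
    (def db (d/db-with (d/empty-db)
                       [{:db/id -1 :person/name "Alice" :person/age 29}
                        {:db/id -2 :person/name "Bob"   :person/age 34}]))

    ;; the same datalog query shape Datomic uses: data, not strings
    (d/q '[:find ?name
           :where [?e :person/age ?age]
                  [(> ?age 30)]
                  [?e :person/name ?name]]
         db)
    ;; => #{["Bob"]}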

Mar 06, 2019 · 3 points, 0 comments · submitted by tosh
Mar 03, 2019 · 1 point, 0 comments · submitted by tosh
You will enjoy the fact-based database Datomic, which is the reason Rich Hickey made Clojure: https://www.youtube.com/watch?v=Cym4TZwTCNU

Datalog is a much, much better query language than SQL: http://www.learndatalogtoday.org/
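
To illustrate the comparison, here is roughly how an implicit join reads in the Datomic/DataScript flavour of Datalog taught there (the attribute names are hypothetical), with a rough SQL equivalent in a comment:

    ;; the join on ?person falls out of the shared variable, and the
    ;; query itself is plain data you can build programmatically
    '[:find ?title
      :in $ ?name
      :where [?person :person/name    ?name]
             [?order  :order/customer ?person]
             [?order  :order/title    ?title]]

    ;; roughly equivalent SQL:
    ;;   SELECT o.title
    ;;   FROM person p JOIN orders o ON o.customer_id = p.id
    ;;   WHERE p.name = ?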

hood_syntax
I take issue with some of Rich Hickey's technical opinions, but Datomic seems pretty cool. I knew Datalog had been around for a while, but I had never looked into it further.
hardwaresofton
I do remember Datomic, and I think it's a great tool, but I fell out of love with the Clojure ecosystem and JVM-based languages as a whole, and I don't think I'll be getting back into them.

I do remember wanting to check out Datomic (I believe after seeing a talk on how it was being used at a bank in South America[0]), but I found it unreasonably hard to find, download, and experiment with the community edition. Compare this to something like Postgres, which is much more obvious and more F/OSS friendly (I understand that they need to make money). Datomic doesn't really look that appealing to me these days.

At this point in my learning of software craftsmanship I can't do non-statically-type-checked/inferred languages anymore -- I almost never use JS without TypeScript, for example. Typed Clojure was in relatively early stages when I was last actively using Clojure, and I'm sure it's not bad (probably way more mature now), but type declarations are a staple in other languages like Common Lisp (the declare form, IIRC). The prevailing mood in the Clojure community seemed to be against static type checking, and I just don't think I can jibe with that anymore.

Thinking this way, right now Datomic wouldn't be a good fit for me personally, but I believe it is probably a high-quality paradigm.

[EDIT] - I found the talk: https://www.youtube.com/watch?v=7lm3K8zVOdY

[0]: https://www.datomic.com/nubanks-story.html

pgt
Also look at Magic, an experimental typed JVM Lisp with immutable namespaces: https://github.com/mikera/magic
pgt
Yes, Datomic is the killer app for Clojure [^1]. Have a look at Datascript[^2] and Mozilla's Mentat[^3], which is basically an embedded Datomic in Rust.

Hickey's Spec-ulation keynote is probably his most controversial talk, but it finally swayed me toward dynamic typing for growing large systems: https://www.youtube.com/watch?v=oyLBGkS5ICk

The Clojure build ecosystem is tough. Ten years ago, I could not have wrangled Clojure with my skillset - it's a stallion. We early adopters are masochists, but we endure the pain for early advantages, like a stable JavaScript target, immutable filesets and hot-reloading way before anyone else had it.

Is it worth it? Only if it pays off. I think ClojureScript and Datomic are starting to pay off, but it's not obvious for whom - certainly not for every organisation.

React Native? I tore my hair out having to `rm -rf ./node_modules` every 2 hours to deal with breaking dependency issues.

Whenever I try to use something else (like Swift), I crawl back to Clojure for the small, consistent language. I don't think Clojure is the end-game, but a Lisp with truly immutable namespaces and data structures is probably in the future.

[^1]: In 2014 I wrote down "Why Clojure?" - http://petrustheron.com/posts/why-clojure.html
[^2]: https://github.com/tonsky/datascript
[^3]: https://github.com/mozilla/mentat

Dec 20, 2016 · 1 point, 0 comments · submitted by tosh
Although tangential to your question, I feel you might enjoy presentations by Rich Hickey wherein he describes traditional database architectures and presents the whys and wherefores behind Datomic. For example:

[1] https://www.youtube.com/watch?v=Cym4TZwTCNU
[2] http://www.infoq.com/presentations/datomic-functional-databa...

Mar 18, 2014 · 1 point, 0 comments · submitted by toxik
We recently started using Solr in a similar manner where I work. We don't go as far as to pack an index in with our code, but each page serving instance has an embedded Solr server that it uses to pull a lot of fairly static data. Those indexes are updated nightly from a master Solr server which imports its data from a normal relational database.

So far it has worked really well, but I don't think it really offers many advantages scalability wise over other NoSQL solutions. Like most other solutions it trades fast writes/consistent reads for fast reads, which is fine in many cases. Around the time we were adding Solr though, I heard about Datomic (http://www.datomic.com/). I haven't had time to really investigate it, but it seems to me that Datomic provides fast reads in a manner very similar to the Solr configuration you described. Live versions of the database's index are deployed to agents that live on your app servers, but unlike the Solr configuration these agents receive updates to the index in real time. At a glance at least, it seems to offer all of the advantages of the Solr solution while still allowing relatively fast updates to the data.

This is an interesting talk on the topic by Rich Hickey (who created Datomic): http://www.youtube.com/watch?v=Cym4TZwTCNU. It's long but I thought it was worthwhile.

cvillecsteele
At Room Key we're investigating Datomic. It's definitely interesting.
Aug 28, 2012 · 229 points, 90 comments · submitted by noidi
lhnz
There are two people that I will stop what I'm doing and watch every new lecture they make: Rich Hickey and Bret Victor. Both are visionaries.
puredanger
And both will be at Strange Loop this year! http://thestrangeloop.com/sessions
Sandman
Strange Loop is such a great conference. I wish there was something like that here in Europe.
puredanger
What about Devs Love Bacon? http://devslovebacon.com/
KevinEldon
Strange Loop is a very hot ticket. I tried to get a ticket w/ company support within 6 weeks of the early admission offer... no luck, the conference was sold out. I'm happy to see such a heavily tech-focused conference doing so well. I'll be looking to get a very early ticket next year.
agumonkey
The Clojure crowd seems to be filled with people like that. They even attract gurus like Oleg Kiselyov. Most of their lectures are entirely worth the time.
kylecordes
One of the reasons I've adopted Clojure (for small things... so far...) is the high density of very smart folks using it and building it.
agumonkey
I did too, until I realized I wasn't ready for so much Lispiness; it's too abstract for me right now.
mattdeboard
"too abstract" is not a phrase I'd think to associate with a lisp. It's all just data. If anything I'd expect people to say there's too little abstraction.
agumonkey
But the right amount of macros and alternative design can throw a noob off quite easily. Besides that, I agree with you.
minikomi
Wholeheartedly concur. I'm also a fan of Rob Pike's straightforward way of "getting to the core" of quite tricky concepts and putting them forth in such a way that they become obvious. Pike's razor, perhaps?
lhnz
Can you recommend a Rob Pike lecture? :)
minikomi
I enjoyed this one about using message-passing functions to compose a lexical scanner in Go:

http://www.youtube.com/watch?v=HxaD_trXwRE

chasingtheflow
Every time I watch "Inventing on Principle" I learn something new. Are there any other great Bret Victor talks available that the community would suggest?

Or Rich (Value of Values, Simple Made Easy ...) for that matter.

scottjad
For Rich, two of my favorites are:

Are We There Yet http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...

Clojure Concurrency http://blip.tv/clojure/clojure-concurrency-819147 These early videos of Rich demoing his little project have such a cool feel. There are several more on the blip.tv account.

And in addition to the two you mention, these are good:

Hammock Driven Development http://blip.tv/clojure/hammock-driven-development-4475586

Clojurescript http://blip.tv/clojure/rich-hickey-unveils-clojurescript-539...

leibniz
Interesting that you name them 'together'. On the surface, they are doing quite different things. On a deeper level, it seems to me, they approach things in a very similar manner. I think what they share is a style of work very detached from the hectic, local-improvement approach which is usually forced upon us in industry for efficiency reasons. They inspiringly take their time to dig deep, to identify hidden assumptions, and to get to the root causes of problems. Quite in the sense of the artist or scientist that Bertrand Russell spoke of. http://downloads.bbc.co.uk/rmhttp/radio4/transcripts/1948_re...
kinleyd
At a deeper level they are both pragmatic philosophers. They think at a high level, but they are hands-on and have their feet firmly on the ground. Inventing on Principle is Bret Victor's contribution, but Rich Hickey surely lives it; and Hammock-Driven Development is Rich's notion, but there wouldn't be "Inventing on Principle" without HDD on Bret's part. These two are awesome.
erikpukinskis
Fascinating stuff. Some things that came up for me while watching this and the other videos on their site[1]:

It's not Open Source, for anyone who cares about that. It's interesting how strange it feels to me for infrastructure code to be anything other than Open Source.

I'm sort of shocked that the query language is still passing strings, when Hickey made a big deal of how the old databases do it that way. I guess for me a query is a data structure that we build programmatically, so why force the developer to collapse it into a string? Maybe because they want to support languages that aren't expressive enough to do that concisely?

[1] http://www.datomic.com/videos.html

jkkramer
You are not forced to pass strings. That's merely a convenience (for Java). You can construct and pass data structures instead.
nickik
Yes, and in Clojure you actually use data literals, not just strings.
sriram_malhar
I'm always puzzled when the Datomic folks speak of reads not being covered under a transaction. This is dangerous.

Here's the scenario that, in a conventional update-oriented store, is termed a "lost update". "A" reads object.v1, "B" reads the same version, "B" adds a fact to the object making it v2, then "A" comes along and writes obj.v3 based on its own _stale_ knowledge of the object. In effect, it has clobbered what "B" wrote, because A's write came later and has become the latest version of the object. The fact that Datomic's transactor serializes writes is meaningless, because it doesn't take read dependencies into account.

In other words, Datomic gives you an equivalent of read committed or snapshot isolation, but not true serializability. I wouldn't use it for a banking transaction, for sure. To fix it, Datomic would need to add a test-and-set primitive to implement optimistic concurrency, so that a client can say, "process this write only if this condition is still true". Otherwise, two clients are only going to be talking past each other.

fogus

    ...would need to add a test-and-set primitive
Datomic provides CAS via its `:db.fn/cas` database function. I'm not sure that it's documented at the moment.
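
A minimal sketch of what that looks like in a transaction, assuming the classic peer API; :account/balance, conn, and account-id are placeholders:

    (require '[datomic.api :as d])

    ;; succeeds only if the balance is still 100 when the transactor
    ;; processes it; otherwise the whole transaction aborts
    @(d/transact conn
       [[:db.fn/cas account-id :account/balance 100 150]])
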
tensor
I think "transaction functions" are intended to provide a solution to that, although I do wonder how it would fair performance wise under very cpu intensive transaction functions.
richhickey
Addressed here: http://news.ycombinator.com/item?id=4448351
sriram_malhar
Lovely. That addresses my concern.
Tuna-Fish
You are incorrect -- Datomic transactions can depend on the previous state of the DB and prevent it from being modified out from under them. To do this, you do the transaction in the transactor process.
bsaul
Does anyone understand how this system would deal with the CAP theorem, in the case of a regular "add $100 then remove $50 from the bank account, in that order and in one go" type of transaction? The transactor is supposed to "send" novelty to peers so that they update their live index. That's one point where I would see trouble (suppose it lags: one "add" request goes to one peer, the "read" goes to the second, and you don't find what you just added...). Another place I see where it could mess things up is the "data store" tier, which uses the same traditional techniques as today to replicate data between different servers (one peer requests facts from a "part" of the data store that's not yet synchronized with the one a second peer requests). It seems like all those issues are addressed on his "a fact can also be a function" slide, but he skips over it very quickly, so if anyone here could tell me more...
azolotko
In Datomic you can set up a function the transactor can call within a transaction. This function takes the current value of the database and other supplied arguments (e.g. $100 and -$50 from your example) and, according to its logic, produces and returns a list of "changes" the transactor should apply to the database. The function is pure in the sense that it doesn't have any side effects; it just "expands" into new data. And of course this "expansion" and the application of its results happen in the same transaction.
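
A hedged sketch of such a transaction function using the classic peer API; the :account/* names, conn, and account-id are made up for illustration:

    (require '[datomic.api :as d])

    ;; install the function as data in the database
    @(d/transact conn
       [{:db/ident :account/credit
         :db/fn (d/function
                 '{:lang   "clojure"
                   :params [db account amount]
                   :code   (let [bal (or (:account/balance
                                          (datomic.api/entity db account)) 0)]
                             ;; return the tx data the transactor should apply
                             [[:db/add account :account/balance (+ bal amount)]])})}])

    ;; later: this runs on the transactor against the current db value,
    ;; and its expansion is applied in the same transaction
    @(d/transact conn [[:account/credit account-id 100]])
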
olivergeorge
This might only be vaguely related. It's an example of managing bank account balances using Datomic, taking advantage of transaction functions.

https://gist.github.com/3134849

ilaksh
Well he did put "atomic" in the name.
plaeremans
Well, there is one transactor, and the transactor handles each transaction sequentially, so the transactor can abort a transaction when the world has changed since it got queued to the transactor.

There is (virtually) infinite read scalability. Each datum has a time associated with it, so while you might not see the latest information (yet), you do know the state of the world at a certain point in time.

I think it's a really well-designed system.

arscan
I recall Datomic making a bit of a splash on HN when it was announced 6+ months ago, but basically crickets since then. Anybody build something cool that took advantage of Datomic's unique design?
Tuna-Fish
People tend to be more conservative about data than they are about the other parts of their stack -- probably for a good reason.

I don't think Datomic (or its kin) will have that huge of an influence until years from now.

it
If you want to be conservative about your data, it looks like Datomic is an excellent choice. It preserves the entire history of your data instead of allowing it to be modified in place.
dagw
Conservative generally means using tried and tested technologies that have run stably for years on thousands of servers and where every corner case is well documented and well understood. Not using some new, largely untested technology that sounds awesome on paper but has yet to truly prove itself in the real world.

Disclaimer: I'm a huge fan of clojure and Hickey and I think Datomic sounds amazing. I'm currently looking for an excuse to play with Datomic and learn more about it, so don't take what I said as any indication that I'm somehow against Datomic.

kinleyd
I think Datomic is potentially disruptive and represents some great thinking on the part of an individual. Whether it will be disruptive will hinge on how well that thinking has subsumed the years of industry experience and practicalities, not to forget the conservative approach to data. I'd be interested to see how it pans out.
nickik
I think it will also be important what other databases come out of this. There is space for other priorities in terms of CAP in a world where perception and process are separated. Or just other implementations of the same ideas, open source maybe.
lucian1900
At least I avoid using it (even though I really like its design) because it isn't open source. I will not trust my data to something I can't even attempt to maintain myself if necessary.
puredanger
We're building some stuff with it atm but I can't go into details. I've run into several others using it for a variety of things as well and the support group is active. http://groups.google.com/group/datomic
brlewis
Anyone have a summary for those of us who don't want to watch an hour-long video?
bct
I haven't watched the video yet either, but I'm guessing he's talking about Datomic's design, which would make this a good place to start: http://docs.datomic.com/architecture.html
breckinloggins
I'm only a few minutes into it, but so far the tl;dr is:

"There are lots of things we do with respect to databases simply because of the limitations of our databases. Here are some things that could be made far simpler if you just had a better database."

Yes it's partly a sales presentation for Datomic, but you could do worse than be sold by Rich Hickey.

breckinloggins
More notes on the video:

- Rich's whole view on the world is pretty consistent with respect to this talk. If you know his view on immutability, values vs identity, transactions, and so forth, then you already have a pretty good idea about what kind of database Rich Hickey would build if Rich Hickey built a database (which, of course, he did!)

- The talk extends his "The Value of Values" keynote [1] with specific applicability to databases

- Further, there is an over-arching theme of "decomplecting" a database so that problems are simpler. This follows from his famous "Simple made easy" talk [2]

- His data product, Datomic, is what you get when you apply the philosophies of Clojure to a database

I've talked about this before, but I still think Datomic has a marketing problem. Whenever I think of it, I think "cool shit, big iron". Why don't I think about Datomic the same way I think about, say, "Mongodb". As in, "Hey, let me just download this real quick and play around with it!" I really think the folks at Datomic need to steal some marketing tricks from the NoSQL guys so we get more people writing hipster blog posts about it ;-)

[1] http://www.infoq.com/presentations/Value-Values

[2] http://www.infoq.com/presentations/Simple-Made-Easy

danecjensen
this reminds me a lot of "How to beat the CAP theorem" http://nathanmarz.com/blog/how-to-beat-the-cap-theorem.html
sbmassey
How would you idiomatically fix invalid data in Datomic? For example, if you needed to update a badly entered value in a record, but keep the record's timestamp the same so as not to screw up historical queries?
puredanger
You can retract facts, just like you can assert facts.
bmurphy1976
You wouldn't. You can't change history either. You would insert a compensating transaction, and the data would be fixed from that point forward.
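
A minimal sketch of such a correction, assuming the peer API and a hypothetical :person/email attribute; the original assertion and its transaction time stay visible through the history view:

    ;; retract the bad value and assert the corrected one in a single
    ;; (new) compensating transaction
    @(d/transact conn
       [[:db/retract person-id :person/email "rihc@example.com"]
        [:db/add     person-id :person/email "rich@example.com"]])
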
sbmassey
Figures. So I guess that, in the case where you both need to worry about fixing invalid data, and also need to do historical queries, you would have to add your own timestamp to the data to represent the actual time of the event, because the built-in timestamp is just giving you the time of the state of the database. Hopefully that wouldn't get too hairy.
hobbyist
I often wonder: is a PhD in computer science really required to do awesome work?
leibniz
When I watched the first couple of Rich's videos, I also asked myself this question. Here's Matt Welsh's take on "Do you need a PhD?": http://matt-welsh.blogspot.co.at/2012/03/do-you-need-phd.htm...
Jach
I would say no, but a good chunk of knowledge is (though probably less is required than imagined), and if the academic methods of acquiring knowledge suit you (for many hackers they don't), then a Ph.D. is a fine way to get that knowledge. Here's Rich Hickey's recommended reading for Clojure specifically: http://www.amazon.com/Clojure-Bookshelf/lm/R3LG3ZBZS4GCTH/re... It's not exhaustive for even functional programming and design, let alone the entirety of computer science, but it's certainly a good enough chunk of knowledge to do awesome work from.
swannodette
Rich Hickey never went to school for Computer Science as far as I know. He studied music composition.
wooby
Rich has a CS master's degree.
swannodette
Did not know that!
tensor
Of course not. You can certainly learn all you need to know on your own. However, that doesn't make the process of learning any easier. If you can get into a PhD program, it is a wonderful way to get access to information of various sorts.

If not, then the new free CS courses that are now being offered by Stanford and others provide extra help beyond reading books and papers. I'd highly recommend trying some!

duck
I'm getting "This video is currently unavailable"?
anatoly
I don't know - are you?
saurik
Watching this talk has so far (I'm halfway through, and now giving up) been very disappointing, primarily because many of the features and implementation details ascribed to "traditional databases" are not true of the common modern SQL databases, and almost none of them are true of PostgreSQL. As an initial trivial example, many database systems allow you to store arrays. In the case of PostgreSQL, you can have quite complex data types, from dictionaries and trees to JSON, or even whatever else you want to come up with, as it is a runtime extensible system.

However, it really gets much deeper than these kinds of surface details. As a much more bothersome example that is quite fundamental to the point he seems to be taking with this talk, at about 15:30 he seriously says "in general, that is an update-in-place model", and then has multiple slides about the problems of this data storage model. Yet, modern databases don't do this. Even MySQL doesn't do this (anymore). Instead, modern databases use MVCC, which involves storing all historical versions of the data for at least some time; in PostgreSQL, this could be a very long time (until a manual VACUUM occurs; if you want to store things forever, this can be arranged ;P).

http://en.wikipedia.org/wiki/Multiversion_concurrency_contro...

This MVCC model thereby directly solves one of the key problems he spends quite a bit of time at the beginning of his talk attempting to motivate: that multiple round-trips to the server are unable to get cohesive state; in actuality, you can easily get consistent state from these multiple queries, as within a single transaction (which, for the record, is very cheap under MVCC if you are just reading things) almost all modern databases (Oracle, PostgreSQL, MySQL...) will give you an immutable snapshot of what the database looked like when you started your transaction. The situation is actually only getting better and more efficient (I recommend looking at PostgreSQL 9.2's serializable snapshot isolation).

At ~20:00, he then describes the storage model he is proposing, and keys in on how important storing time is in a database; the point is also made that storing a timestamp isn't enough: that the goal should be to store a transaction identifier... but again, this is how PostgreSQL already stores its data: every version (as again: it doesn't delete data the way Rich believes it does) stores the transaction range that it is valid for. The only difference between existing SQL solutions and Rich's ideal is that it happens per row instead of per individual field (which could easily be modeled, and is simply less efficient).

Now, the point he makes at ~24:00 actually has some merit: that you can't easily look up this information using the presented interfaces of databases. However, if I wanted to hack that feature into PostgreSQL, it would be quite simple, as the fundamental data model is already what he wants: so much so that the indexes are still indexing the dead data, so I could not only provide a hacked up feature to query the past but I could actually do so efficiently. Talking about transactions is even already simple: you can get the identifier of a transaction using txid_current() (and look up other running transactions if you must using info tables; the aforementioned per-row transaction visibility range is even already accessible as magic xmin and xmax columns on every table).

lucian1900
The fundamental problem is that even though it uses MVCC internally, Postgres doesn't expose that model. Instead, it exposes an update-in-place model.
saurik
Please see the response I left from ten minutes ago to DanWaterworth's similar comment (the one where I point out that, as a client, you can tell the difference between those two models by testing for some of the specific problems that are mentioned near the beginning of this talk; yadda yadda).
TheMiller
To elaborate further on what saurik has already pointed out: Hickey's "update-in-place" characterization of relational databases places blame in the wrong place. This is more about how people think about the data models; they think of their primary keys as identifying places, and updating rows as modifying those places. The relational model itself does not encourage this mode of thinking, although SQL arguably does, unless you think of UPDATE as shorthand for DELETE followed by INSERT. It's not that uncommon to build data models, or parts of models, that have the characteristic of never deleting facts. It's true that time-based as-of queries in such models are unwieldy, but that's a problem of the query languages that can be addressed by how queries are constructed, without redesigning the entire database system. There is research on "temporal databases" that addresses this, although I'm not up to date on it.

What I could not understand from the talk or from googling for more information on Datomic afterwards, is how it supposedly simplifies anything about the consistency issues that he talks about, with respect to the read/decide/write sequence. You read; time passes; other people make changes; when you write, the real world (modeled in your "transactor" as the single common store and arbiter of what is) will have changed.

lucian1900
That's the whole point, there are never any changes. You only ever have new versions, in new transactions. It's much more similar to a DVCS than a relational db.
richhickey
Re: read/decide/write - Datomic supports transaction functions (written in Java or Clojure) which run within the transaction, on the transactor, and are true functions of the prior state of the database (with full access to the query API). They can be used to do all of the transformative work (traditional read/modify/write), or merely to confirm that preconditions of work done prior to transaction submission still hold true (optimistic CAS). The model is simple - there are no complexities of isolation levels etc. All transactions are serialized, period. Completely ACID.

The critical thing is that queries and reads not part of a transformation never create a transaction, and don't interact with the transactor at all.

TheMiller
Thanks for the reply. In the talk, I missed the point that transaction functions are able to implement "optimistic CAS" (which I assume is the same as what is more traditionally but inaccurately called "optimistic locking"). To clarify, though: Wouldn't precondition testing have to be done /inside/ the transaction, not before, in order to be effective?

That's a nice model, I suppose, but doesn't strike me as particularly novel. As has been pointed out elsewhere, the idea that read-only operations can avoid creating a transaction is not particularly important per se. What's more important is avoiding locking, and this is already available in several DBMSes. Indeed, in the sense that reads are consistent "as of" a certain point in time, Datomic really does have a notion of a read transaction. I saw in another of your replies here that you consider these to be more flexible than transactions because they are in effect immutable snapshots of the database state, and independent of a single process. I guess for some applications that could be important, but for many applications the process independence is irrelevant (perhaps it helps with scaling a middle tier?), and the eventual storage cost of true immutability will become intractable (although perhaps you'll eventually have ways to release state prior to a point in time). Coordinating different parts of code within a single process over a specific span of time doesn't strike me as any easier with Datomic's model than with traditional transactions.

This talk just didn't focus on the kinds of data modeling problems that are important to me and which I consider difficult, which mostly have to do with maintainability of the data model - except, possibly, for the ease of dealing with temporal queries. And I share others' unease with what seemed to be inaccurate characterizations of current DBMS implementations.

krosaen
I had no idea one could get a consistent read view across multiple queries within a transaction using most SQL databases. That does poke a hole in a major benefit that I thought was unique to Datomic; great to know!

However, I do think trying to set up a SQL database to be able to query against any previous view of the world based on a transaction ID, as Datomic allows, wouldn't be as "simple" as you make it out to be.

richhickey
Datomic allows one to get a consistent basis for multiple queries, separated by arbitrary amounts of time, outside of any transactions. Having to group queries motivated and conducted by different parts of your system into a single transaction in order to get a consistent basis is a source of coupling.
saurik
Given that I need to share that common basis among the different parts of my system that need the consistent view, I am already taking the hit on that coupling: I am going to need to make certain that all of those parts have access to the shared basis, whether it be a timestamp, a transaction identifier, or an active connection.

If the concern is coupling across space, you can store the transaction on the server in most existing databases. Past that, I would be highly interested in knowing the motivating use case that is causing the need for this much global and distributed snapshot isolation, but otherwise agree: I'd love to see more databases actually expose the ability to more easily "query the past" as well as request guarantees on vacuum avoidance.

richhickey
I would characterize the level of coupling involved in sharing "the basis is 12345" as categorically different from having to nest database access within the same transaction. Consider, e.g. the ease of moving the former to different processes.
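
A sketch of that workflow with the peer API (the attribute name is hypothetical): one process notes the basis t of a db value, and any other process can later reconstitute the same consistent view from that number alone:

    (require '[datomic.api :as d])

    ;; process A: capture a db value and its basis point
    (def dbv (d/db conn))
    (def t   (d/basis-t dbv))          ; a plain number, e.g. 12345

    ;; process B, any time later: the same consistent view, with no
    ;; transaction held open anywhere
    (def dbv-then (d/as-of (d/db conn) t))
    (d/q '[:find ?e :where [?e :person/name]] dbv-then)
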
jeltz
Agreed, but this is due to MVCC databases not saving the snapshots of committed transactions. If you have an active transaction in PostgreSQL you can start a new transaction using the same snapshot from another process. (This feature is exposed in SQL in version 9.2, and was motivated by the plan to implement parallel pg_dump.)

EDIT: You can find documentation of this at http://www.postgresql.org/docs/9.2/static/sql-set-transactio...

If PostgreSQL were to save all historical snapshots and you disabled vacuum, you could communicate any point in time simply with a snapshot number. Now, this probably won't be very efficient, since PostgreSQL is not optimized for this kind of use.

saurik
This feature is very interesting! However, it is definitely not designed for this purpose, and is thereby fairly heavyweight: it is writing a file to disk with information about the snapshot instead of just returning it to the client, and the result is the filename.

Not knowing this existed, I spent the last hour implementing this feature in a way that just requires getting a single integer out (the txid) and then restoring it as the snapshot being viewed (but not changing the txid of the running transaction, which solves a lot of the "omg what would that mean" problems).

http://news.ycombinator.com/item?id=4448767

With this implementation, you can just use txid_current() to save a transaction snapshot and you can restore it using my new variable (which correctly installs this functionality only into the currently executing transaction's snapshot). (In a more ideal world, I'd re-parse the string from txid_current_snapshot().)

I didn't also solve the vacuum problems, but I think there might be some reasonable ways to do that. Regardless, I imagine that most of the interesting usages of this are not "restore a snapshot from three days ago" but more "share a snapshot between multiple processes and machines for a few seconds".

However, if you can get even one of those processes to hold open a transaction, you can copy its snapshot to other transactions in other sessions, and then even this naive implementation should be guaranteed to have access to the old data.

If you are using it only for these short periods of time, for purposes of "distributed and decoupled consistency", we also don't need to worry about the overhead of never running a vacuum: the vacuum process runs as normal, as we are only holding on to data very temporarily.

That said, in practice, you really can go for quite a while without running a vacuum on your database without issues, and I imagine any alternative system is going to run into similar problems anyway (log structured merge trees, for example, get screwed on this after a while, as the bloom filters become less selective).

You can't, however, with PostgreSQL, make this work "forever" (if you wanted to store data going back until the beginning of time) due to 32-bit txid wraparound problems :(. (This also should affect my silly implementation, as I should save the epoch in addition to the txid.)

jeltz
To get a consistent view across multiple queries you just use the SERIALIZABLE isolation level. In PostgreSQL REPEATABLE READ also works, but the standard does not guarantee this (the standard allows for phantom reads, since it assumes you use locking rather than MVCC snapshots to implement REPEATABLE READ).

The ability to query any historical view of the data is indeed not there in PostgreSQL in any simple or reliable way. That is an advantage of Datomic, but I do not see why it would be impossible to implement in a "traditional database".

saurik
The reason I claim this would be simple is that PostgreSQL is almost already doing this. The way the data is stored on disk, every row has two transaction identifiers, xmin and xmax, which represent the transaction in which that row was inserted and the transaction in which that row was deleted; rows, meanwhile, are never updated in place, so the old data stays around until it is deleted by a vacuum.

To demonstrate more tangibly how this works, I just connected to my database server (running PostgreSQL 9.1), created a table and added a row. I did so inside of a transaction, and printed the transaction identifier. I then queried the data in the table from a new transaction, showing that the xmin is set to the identifier of the transaction that added the row.

Connection 1:

    demo=> create table q (data int);
    CREATE TABLE
    demo=> begin; select txid_current();
    BEGIN
    189028
    demo=> insert into q (data) values (0); commit;
    INSERT 0 1
    COMMIT
    demo=> begin; select xmin, xmax, data from q;
    BEGIN
    189028|0|0
Now, while this new transaction is still open, from a second connection, I'm going to create a new transaction in which I am going to update this row to set the value it is storing to 1 from 0, and then commit. In the first connection, as we are still in a "snapshot" (I put this term in quotes, as MVCC is obviously not copying the entire database when a transaction begins) from a transaction started before that update, we will not see the update happen, but the hidden xmax column (which stores the transaction in which the row is deleted) will be updated.

Connection 2:

    demo=> begin; select txid_current();
    BEGIN
    189029
    demo=> update q set data = 1; commit;
    UPDATE 1
    COMMIT
    demo=> select xmin, xmax, data from q;
    189029|0|1
Connection 1:

    demo=> select xmin, xmax, data from q;
    189028|189029|0
As you can see, the data that the other transaction was referencing has not been destroyed: the old row (the one with the value 0) is still there, but the xmax column has been updated to indicate that this row no longer exists for transactions that began after 189029 committed. However, at the same time, the new row (with the value 1) also exists, with an xmin of 189029: transactions that begin after 189029 committed will see that row instead. No data was destroyed, and this data is persisted this way to disk (it isn't just stored in memory).

My contention then is that it should be a fairly simple matter to take a transaction and backdate when it began. As far as I know, there is no reason that this would cause any serious problems as long as a) it was done before the transaction updated or inserted any data, b) there have been no vacuums during the backdated period, c) HOT (heap-only tuple) updates are disabled (in essence, this is an optimization designed to do online vacuuming), and maybe d) the new transaction is read only (although I am fairly confident this would not be a requirement).

For a more complete implementation, one would then want to be able to build transactions (probably read-only ones; I imagine this would cause serious problems if used from a writable transaction, and that really isn't required) that "saw all data as if all data in the database was alive", which I also believe would be a pretty simple hack: you just take the code that filters dead rows from being visible based on these comparisons and add a transaction feature that lets you turn them off. You could then use the already-implemented xmin and xmax columns to do your historical lookups.

P.S. BTW, if you want to try that demo at home, to get that behavior you need to use the "repeatable read" isolation level, which uses the start of the transaction as the boundary as opposed to the start of the query. This is not the default; you might then wonder if it is because it is expensive and requires a lot more coordination, and as far as I know the answer is "no". In both cases, all of the data is stored and is tagged with the transaction identifiers: the difference is only in what is considered the reference time to use for "which of the rows is alive".

However, it does mean that a transaction that attempts to update a value that has been changed from another transaction will fail, even if the updating transaction had not previously read the state of the value; as most reasonable usages of a database actually work fine with the relaxed semantics that "data truly committed before the query executes" provides (as that still wouldn't allow data you update to be concurrently and conflictingly updated by someone else: their update would block) and those semantics are not subject to "this transaction is impossible" errors.

Both Connections (setup):

    demo=> set session characteristics as transaction isolation level repeatable read;
    SET
campnic
A couple questions about your example:

I noticed that the xmin changed from Connection 1 from 189018 -> 189028. Is that just a typo?

Is the concept of transaction in this regard a 'state of the entire db'? If a transaction included multiple modifications would they all get the same xmax? If so, I see this as a difference between the presentation and your example. The transaction is a modification to the entirety of the db and is a state of the db. In Hickey's presentation, he very clearly says that the expectation is that the transaction component of the datom is specific to an individual datom.

Since it's been a while since I've worked on DBs, and even then I didn't know much, your demo has helped put it in perspective.

saurik
Thank you so much for noticing that error. What happened is that I started doing it, and then made a mistake; I then redid the entire flow, but forgot I needed to re-copy/paste the first half as the transaction identifier would have changed. I have updated my comment to fix this.

Yes: the transaction is the state of the entire database. However, you can make your transactions as fine-grained as you wish; the reason not to do so is that you are likely to end up with scenarios where you want to atomically roll back many changes that other people using the database should not see in the case of a failure. You certainly then will at least want the ability to make multiple changes at once.

The tradeoff in doing so, however, is that you will need to make a new transaction, which will have a different basis. I agree: if you then have a need to be able to make tiny changes to individual items one at a time that need to be from a shared consistency basis and yet have no need to be atomic with respect to each other (which is the part I am going to be quite surprised by), then yes: you need the history query function to implement this.

I would be fascinated by a better understanding of that use case. Does he go into an explicit example of why that would be required later in the talk? (If so, I could try to translate that use case as best I can into a "traditional database" to see whether you really need that feature; if you do, it might be valuable to try to get something related to this design into PostgreSQL: I am starting to get a better understanding of the corner cases as I think more about it, and think I can come up with a proposal that wouldn't make this sound insane to the developers.)

krosaen
Thanks for taking the time to elaborate, very interesting. I wonder if the SQL db vendors or open source projects will take the next step to make querying against a transaction ID possible, given that the underlying implementation details bring it pretty close.

I also see Rich has made some interesting points elsewhere in this thread about consistent views being available outside of transactions and without need for coordination (within datomic) - seems more appropriate to comment directly there though.

Overall I think it's important to understand these nuances, and not view datomic as some revolutionary leap, even if I am excited about the project. I appreciate your insight into the power already within sql db engines.

saurik
(edit: Apparently, nearly an hour ago, jeltz pointed out that PostgreSQL 9.2 actually has implemented nearly this identical functionality through the usage of exported snapshots, so I recommend people go read that comment and the linked documentation. However, my comment is still an example of the functionality working.)

(edit: Ah, but the feature as implemented actually saves a file to disk and thereby has a lot of server-side state: the way I've gone ahead and implemented it does not have this complexity; I simply take a single integer and store nothing on the server.)

http://news.ycombinator.com/item?id=4448472

> I wonder if the sql db vendors or open source projects will take the next step to make querying against a transaction ID possible given the underlying implementation details bring it pretty close.

For the hell of it, I just went ahead and implemented the "backdate a transaction" feature; I didn't solve the vacuum guarantees problem, however: I only made it so that a transaction can be backdated to another point in time.

To demonstrate, I will start with a very similar sequence of events to before. However, I am going to instead use txid_current_snapshot(), which returns the range (and an exception set that will be unused for this example) of transaction identifiers that are valid.

Connection 1:

    demo=# create table q (data int);
    CREATE TABLE
    demo=# begin; select txid_current_snapshot();
    BEGIN
    710:710:
    demo=# insert into q (data) values (0); commit;
    INSERT 0 1
    COMMIT
    demo=# begin; select txid_current_snapshot();
    BEGIN
    711:711:
    demo=# select xmin, xmax, data from q;
    710|0|0
Connection 2:

    demo=# begin; select txid_current_snapshot();
    BEGIN
    711:711:
    demo=# update q set data = 1; commit;
    UPDATE 1
    COMMIT
    demo=# select xmin, xmax, data from q;
    711|0|1
Connection 1:

    demo=# select xmin, xmax, data from q;
    710|711|0
    demo=# begin; select txid_current_snapshot();
    BEGIN
    712:712:
    demo=# select xmin, xmax, data from q;
    711|0|1
So far, this is the same scenario as before: I have two connections that are seeing different visibility to the same data, based on these snapshots. Now, however, I'd like to "go back in time": I want our first connection to be able to use the same basis for its consistency that we were using in the previous transaction.

Connection 1:

    demo=# set snapshot_txid = 711;
    SET
    demo=# select txid_current_snapshot();
    711:711:
    demo=# select xmin, xmax, data from q;
    710|711|0
This new variable, snapshot_txid, is something I created: it gets the current transaction's active snapshot and modifies it to be a range from that transaction id to that same id (I think a better version of this would take the exact same string value that is returned by txid_current_snapshot()).

From that previous basis, the row with the value 0 is visible, not the row with the value 1. I can, of course, go back to the future snapshot if I wish, in order to view the new row. (I am not yet certain what this will do to things writing to the database; this might actually be sufficient, however I feel like I might need to either mess with more things or mark the transaction read-only.)

Connection 1:

    demo=# set snapshot_txid = 712;
    SET
    demo=# select txid_current_snapshot();
    712:712:
    demo=# select xmin, xmax, data from q;
    711|0|1
saurik
I am not certain whether your response comes from a reading of my comment before or after the paragraph I added that started with "for a more complete implementation", but if it was from before I encourage you to read that section: the ability to do the query is pretty much already there due to the xmin/xmax fields that PostgreSQL is already reifying.
DanWaterworth
In another talk he addresses your point specifically. He said, and I'm paraphrasing:

"It doesn't matter if you're using append only data-structures if your view of the world is update in place".

PostgreSQL exposes a view to the world of an update-in-place database, no matter what it's doing underneath. You could create a new interface to PostgreSQL's internals that doesn't, and if you did, it would look a lot like Datomic.

saurik
First, there is a major difference between MVCC and update-in-place that you can detect as a client, and that difference is that the problems that Rich outlines at the beginning of his talk do not happen: if one client edits something in the database, other transactions do not get an inconsistent view because the data on disk has already been permanently and irrevocably "updated in place". (Which, to be clear, means that modern SQL databases do not "expose a view to the world of an update in place database".)

Second, if all that is required to get his model is to add a command to an existing database (such as PostgreSQL, as I feel I know enough about how it works to be confident that this would be a reasonably simple task) "mark the current transaction read-only and pretend that it is as old as transaction X" (something that can be implemented quite rapidly in an existing system like PostgreSQL) we really aren't talking about something that is either very new, or that totally reinvents the "traditional database".

azolotko
AFAIK there was a Postgres extension that did something like that. But then it became deprecated and was eventually removed.
DanWaterworth
> Which, to be clear, means that modern SQL databases do not "expose a view to the world of an update in place database".

I disagree with you on this point. MVCC may prevent you from suffering from locking problems, but you can still, from a user perspective, modify rows. A row is a place and you are updating it. It's definitional.

On your second point. The data model that datomic databases expose is very different from SQL databases. That's enough of a difference to say that it's fundamentally new. Furthermore, I don't think anyone would disagree that the architecture of datomic is very different from that of PostgreSQL.

When you compare a distributed database that uses immutable data against PostgreSQL, the thing that is immediately apparent to me is that garbage collection is much more difficult in the distributed setting. You can't just rewrite the network interface for PostgreSQL and get Datomic, but you might be able to get single-server Datomic.

I think you are seeing a simple design and thinking, "anyone could have thought that up", when actually it's not that easy. The design that you are looking at has obviously gone through extensive refinement.

saurik
In your last paragraph, I feel like you are mischaracterizing my overall thesis. I am not claiming the design is simple: MVCC took many lives in sacrifice to its specification and discovery, and I certainly am not claiming "anyone could have thought that up". Instead, my primary issue is that this is a talk about databases and database design that is providing motivation vs a strawman: specifically, the way Rich seems to believe "traditional databases" work, and for which we spend the first almost 20 minutes learning the negatives, roadblocks, and general downsides.

However, almost none of the things that he indicates actually are downsides of most modern database systems, and certainly not of PostgreSQL. His downsides include that the data structuring is simplistic, that you can't have efficient and atomic replication of it (not multi-master mind you, but seemingly even doing real-time replication of a master to a read-only slave while maintaining serialization semantics seems to be dismissed), and that if you attempt to make multiple queries you will get inconsistent data due to update-in-place storage.

Yes: update-in-place "storage", not "update-in-place semantics within the scope of an individual transaction". Even if he was very clear about the latter (which is again quite different from "update-in-place semantics", which MVCC definitely does not have), that would still undermine his points, as the problem of inconsistent data from multiple reads, a problem he goes into great detail about with an example involving a request for a webpage that needs to make a query first for its backend data and then for its display information, does not exist with MVCC.

During this discussion of storage, he specifically talks about how existing database storage systems work, not at the model level, but at the disk level, discussing how b-trees and indexes are implemented with their destructive semantics... and all of these details are wrong, at least for PostgreSQL and Oracle, and I believe even for MySQL InnoDB (although a lot of its MVCC semantics are in-memory-only AFAIK, so I'm happily willing to believe that it actually destroys b-tree nodes on disk).

The talk then discusses a new way of storing data, and that new way of storing data happens to share the key property he calls new with the old way of storing data. The result is that it is very difficult to see why I should be listening to this talk, as the speaker either doesn't know much about existing database design or is purposely lying to me to make this new technology sound more interesting :(. Your response that in a different talk he attempted to backpatch his argument with something that still doesn't seem to address MVCC's detectably-not-the-same-as-update-in-place-semantics doesn't help this.

Now, as I stated up front, after listening to half of this talk, I couldn't take it anymore, and I gave up: I thereby didn't hear an entire half hour of him speaking. Maybe somewhere in that second half there is something new about how some particular quirk of his model allows you to get a distributed system, but that seemed sufficiently unlikely after the first half that it really doesn't seem worth it, and based on the comments from discussion (such as in the threads started by bsaul and sriram_malhar, which seems to indicate that writes are centralized and reads are distributed, something you can do with any off-the-shelf SQL solution these days) that seems to hold up.

fauigerzigerk
I agree with most of what you say, but I don't think that MVCC is really what this is about. The qualities you describe are a feature of ACID. MVCC is just a way of implementing ACID so that it requires less locking.

More importantly, I think, there are issues with some data structures that are not well supported by postgres or any other DBMS (relational or otherwise). I do a lot of text analytics work and there are things I need to store about spans of text that I could model in a relational fashion but I don't because it would lead to 99% of my data being foreign keys and row metadata.

There will always be domains where you need highly specialized combinations of data structures and algorithms that are not efficient to model relationally and even less in terms of some of the other datamodels that you find in the NoSQL space.

That said, I found that even in natural language processing, RDBMSs do a lot of things surprisingly more efficiently than conventional wisdom would have it. Storing lots of small files, for instance, something that file systems are surprisingly bad at.

Sometimes I'm surprised how many people like to complain about premature optimization using languages that are hundreds of times slower than others, but then go ahead and use horribly inflexible crap like the BigTable data model just in case they need to scale like Google.

Of course that's off topic because it's not remotely what Hickey proposes.

saurik
If you implement ACID using the "normal" locking semantics (such as the ones the SQL standard authors who defined the isolation levels used in the language were assuming) you can tell the difference because old values are not preserved.

Instead, we would have had contention: in the case of my example walkthrough, to implement the repeatable read semantics that I requested, the first connection would have taken a share lock on the rows it queried, causing the second connection to block on the update until after the first connection committed.

This means that you would not have been able to have the semantics where the first connection and the second connection were seeing different things at the same time (which, to be again clear, is due to none of the data being destroyed: MVCC is providing the semantics of a snapshot).

(As for your text analytics work, I am curious: are you using gin and trigrams at all? There are a bunch of things I dislike about PostgreSQL's implementation of both, but if you haven't looked at them yet you really should: if your use case fits into them they are amazing, and if not the entire point of PostgreSQL is to let you build your own data types and indexes using the ones it comes with as examples.)

fauigerzigerk
I don't use gin or trigrams because I don't do much general purpose text search. I do things like named entity recognition, anaphora resolution, collecting statistics about the usage of terms over time, etc.

But you're right, it might be a good idea to look into the postgres extension mechanism. I've never seriously done that.

richhickey
The model of consistency envisioned by Datomic is one in which consistency normally available only within a transaction is available outside of any transactions, and without any central authority. Consistent views can be reconstituted the next hour, day or week. Consistent points in time can be efficiently communicated to other processes. Nothing about MVCC gives you any of that. MVCC is an implementation detail that reduces coordination overhead in transactional systems. I used MVCC in the implementation of Clojure's STM. While you might imagine it being simple to flip a bit on an MVCC system and get point-in-time support, it is a) not efficient to do so, and b) still a coordinated transactional system.

The differences I am pointing out, and the notion of place I discuss, are not about the implementation details in the small (e.g. whether or not a db is MVCC or updates its btree nodes in place) but the model in the large. If you 'update' someone's email is the old email gone? Must you be inside a transaction to see something consistent? Is the system oriented around preserving information (the facts of events that have happened), or is the system oriented around maintaining a single logical value of a model?
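To make that distinction concrete, here is a toy relational sketch of the accretion-of-facts model (this is emphatically not how Datomic is implemented; the table and values are purely illustrative):

    -- a toy "facts" table: assertions and retractions accrete; nothing is updated in place
    CREATE TABLE email_facts (
      person_id bigint,
      email     text,
      tx        bigint,    -- monotonically increasing transaction / time basis
      added     boolean    -- true = asserted, false = retracted
    );

    -- "changing" an email adds new facts; the old fact is not destroyed
    INSERT INTO email_facts VALUES (1, 'old@example.com', 100, true);
    INSERT INTO email_facts VALUES (1, 'old@example.com', 200, false),
                                   (1, 'new@example.com', 200, true);

    -- the view as of tx 150 can be reconstituted at any later time, by any
    -- process, without holding a transaction open
    SELECT email FROM email_facts
    WHERE person_id = 1 AND tx <= 150 AND added
      AND email NOT IN (SELECT email FROM email_facts
                        WHERE person_id = 1 AND tx <= 150 AND NOT added);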

The fact is with PostgreSQL et al, if you 'update' someone's email the old one is gone, and you can only get consistency within a transaction. It is a system oriented around maintaining a single logical value of a model. And there's nothing wrong with that - it's a great system with a lot of utility. But it isn't otherwise just because you say it could be.

Also, you seem to be reacting as if I (or someone) has claimed that Datomic is revolutionary. I have never made such claims. Nothing is truly novel, everything has been tried before, and we all stand on the shoulders of giants.

I'm sorry my talk didn't convey to you my principal points, and am happy to clarify.

saurik
First of all, thank you very much for the reply: you really didn't need to bother, as despite being a Clojure user who stores a lot of data, I'm probably simply not in your target market segment ;P.

For the record, I do not believe that you have explicitly stated this is revolutionary, although I believe various other people on HN in various threads on Datomic have. However, my specific reactions in the comment you are responding to are due to DanWaterworth's insistence that I believe that it is trivial: my original comment does not touch on this angle, and is entirely about "real databases aren't implemented like this".

That said, I do believe that after 30 minutes of listening to a talk that doesn't mention "this is largely how existing systems are implemented, but we provide the ability to see all the rows at once", there is an implication that "this isn't at all like anything you've ever seen or implemented before", which is why, after DanWaterworth's comment, I started exploring that angle.

Yes: in the case of PostgreSQL's MVCC, the old e-mail is gone, from the perspective of the model, for anyone viewing the contents outside of a transaction; however, the kinds of problems you were describing at the beginning of the talk did not need to avoid transactions.

However, the implementation is so close that if I were explaining this concept to someone else, I'd probably use it as a model, especially given that it even already reifies the special columns required to let you do the historical lookups (xmin and xmax).

As I mentioned in another comment on this thread (albeit in an edit a few minutes later), you can get historical lookup in PostgreSQL by just adding a transaction variable that turns off the mechanism that filters obsolete tuples: you can then use the already-existing transaction identifier mechanism and the already-existing xmin and xmax columns as the ordering.
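For reference, xmin and xmax are already queryable as system columns on every row; what stock PostgreSQL does not give you is that visibility-filter switch, so a query like the following (table name illustrative) only ever shows you the currently visible version:

    -- system columns carried by every PostgreSQL row version
    SELECT xmin, xmax, id, email FROM users WHERE id = 1;

    -- xmin: id of the transaction that created this row version
    -- xmax: id of the transaction that deleted/superseded it (0 while the row is current)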

The result is then that I'm watching the talk wondering where the motivation is: many of the listed motivations weren't really true faults of the existing systems, and the ones that remain seem like implementation details of the database technology.

In the latter situation, when I say it "could be" I really do mean "it is": PostgreSQL can take advantage of the fact that it is built out of MVCC when it builds other parts of itself, such as its streaming master/slave replication (which is another feature of many existing systems that you seemed to discount in your motivation section).

I am thereby simply not certain what problem Datomic is trying to solve for me, whether it be revolutionary or evolutionary (again: I don't really care; I'm just commenting on the motivation section). The listed motivations seem to be fighting against a strawman database design that doesn't have transactions to get you 90% of the way there and isn't itself implemented on top of, and taking advantage of, append-only storage.

nickik
Well, all you point out is that one aspect of Datomic could be implemented with some SQL systems. Datomic, however, has many other aspects that are interesting.

Other than that, the true genius is in recognizing that a system like that would be worthwhile. Just pointing out that one could theoretically do it with something else is kind of pointless if nobody has ever done it.

saurik
I am not saying "Datomic is stupid" or anything so simple; I'm saying I was "disappointed" in this talk because it motivated Datomic against a strawman that mischaracterized the actual problems that people using "traditional databases" have, to the point that it was no longer possible to determine what was actually being claimed as an advantage.

I realize that to many people it is impossible to dislike a presentation of something without disliking the thing being presented, the person making the presentation, and the entire ideology behind the presentation, but that is a horrible thing to assume and is unlikely to ever be the case to such a simple extreme.

I will even go so far as to say that watching this talk seems to be doing a disservice to many people on the road to doing them a legitimate service: some of the people commenting on this thread (or previous ones on HN about similar talks and articles about Datomic) actually do/did not realize that "traditional databases" can even do this at a transaction level, as the argument in the talk downright claims they can't.

The result is that when I bring up that you actually get even some of these advantages with off-the-shelf copies of PostgreSQL, I get comments of the form "I had no idea one could get a consistent read view across multiple queries within a transaction using most sql databases. That does poke a hole in a major benefit that I thought was unique to datomic, great to know!"; that can only happen when there is some serious misinformation (accidentally) being presented.

Now, does that mean that Datomic is something no one should use, and that it doesn't put things together in a really nice way, and that it doesn't have a single thing in it that is innovative, or that Rich is wasting his time working on it? No: certainly it does not. I did not claim that. I can't even claim that, as I gave up on the talk after the first half so I could spend my time attempting to clarify some of the things said in the first half that were confusing people.

jeltz
> When you compare a distributed database that uses immutable data against PostgreSQL, the thing that is immediately apparent to me is that garbage collection is much more difficult in the distributed setting. You can't just rewrite the network interface for PostgreSQL and get datomic, but you might be able to get single-server datomic.

Postgres-XC solves this problem by adding a global transaction management server. I guess it does the same thing as the transactor for Datomic, so there really is no major difference here.
