Hacker News Comments on
How SQL Database Engines Work

tcation · YouTube · 815 HN points · 1 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention tcation's video "How SQL Database Engines Work".
YouTube Summary
Dr. Richard Hipp, creator of SQLite, presents "How SQL Database Engines Work" at OpenSQLCamp 2008.

The description:
To many programmers, SQL RDBMSes are a magical black box. A talk can help clear up a lot of the mystery. SQL users tend to make better use of the language once they have a rough idea of what is going on beneath the covers.
HN Theater Rankings
  • Ranked #26 all time

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Oct 26, 2022 · 2 points, 0 comments · submitted by metadat
Jun 24, 2018 · 813 points, 129 comments · submitted by zbaylin
dmoreno
I recently created a database engine (exosql [1]), only query and no storage. It uses postgres-like foreign data wrappers to get all data.

It's not suited for big datasets, as it stores everything in memory (or maybe it is?), but as a learning experience it has been amazing to think through and develop a real database: planner, executor, choosing algorithms, implementing features like lateral joins, and so on.

I will definitely listen very carefully to these talks.

[1] https://gitHub.com/Serverboards/exosql

zerr
Any books/lectures/articles you followed?
dmoreno
I mostly followed the Postgres and SQLite documentation. For some specific areas I checked their source code. But mainly I used EXPLAIN from Postgres as a reference for which algorithms (seq scan, hash join, and so on) it uses for specific queries.
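For readers unfamiliar with it, here is a minimal sketch of that workflow in Postgres (the table, column and index names are invented for illustration):

    -- Ask the planner which strategy it would use for a query:
    EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';
    -- With no useful index this typically reports a Seq Scan.

    CREATE INDEX idx_users_email ON users (email);

    EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';
    -- With the index in place, the plan typically switches to an Index Scan.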
suj1th
A similar learning experience for me was when I was exploring Apache Calcite. That again is only query, and no storage. It has a concept of 'adapters' which, I assume, is similar to the postgres-like foreign data wrappers you mention.
pipu
I truly recommend CMU professor Andy Pavlo's video lectures on the topic (and also more advanced stuff)

https://www.youtube.com/playlist?list=PLSE8ODhjZXjYutVzTeAds...

manigandham
Yes, they've put up several courses over the years and they're all great: https://www.youtube.com/channel/UCHnBsf2rH-K7pn09rb3qvkA/pla...
swinghu
very good
PretzelFisch
I found Prof. Dr. Jens Dittrich's database playlists interesting and pleasant to watch. https://www.youtube.com/channel/UCC9zrtAkl6yY4dpcnWrCHjA
logicallee
Any video filter experts here?

Request to any video filter expert

------------------------------------

I started watching this. The slides are unreadable, but the camera is perfectly still and each slide stays up across several "key frames", where the compression algorithm replaces one set of compression artifacts with another.

For example try to read the first keyword under "Translates into:":

https://www.youtube.com/watch?v=Z_cX3bzkExE&t=2m14s

The keyword is unreadable at the start but as you keep looking at it over 50 keyframes it becomes readable to me.

Since the camera is in a fixed position it should be possible to combine the data from those artifacts into a single superresolution video with very small assumptions. (i.e. the assumption that the image is the same image until at least 5% change or something). There's not even anyone moving in front of it.

-> Can someone who actually knows this stuff apply a superresolution interlacing filter to this video and post the superresolution version somewhere?

I hope this is not too much work, and I am sure we would all appreciate the results since the slides are not human-readable before applying some kind of superresolution!

coleifer
How precious that you think someone cares.
logicallee
I think there's an Adobe After Effects plugin. Granted, the people who are set up for heavy video editing are probably a very small percentage of HN readers :) There are probably more HN readers who would take it as a challenge to do an unsupervised convolutional neural net :D and come back with a research paper. I myself don't use Adobe After Effects.
peterwwillis
I'm sorry you got downvoted for this comment. HN voting is the worst.
logicallee
Perhaps I was naive about the state of the art. After the now-dead reply I received, I searched, and found a couple of papers like this

https://arxiv.org/abs/1801.04590 - "Frame-Recurrent Video Super-Resolution"

but if you look at p. 8, I think many of the algorithms still wouldn't end up with readable text. This paper is from this year, so it is an area of active research.

I wrote a quick mail to the authors to see if they would put the video through their setup (since the last paper update was just 3 months ago) and share their results.

2. Trying it myself...

After my downvotes I tried this small piece of software:

http://www.infognition.com/VideoEnhancer/

Which shows a before/after. Here is their page on their super-resolution algorithm:

http://www.infognition.com/articles/what_is_super_resolution...

I used their plugin on virtualdub on a sample of the video. The results weren't useable. Here is a picture which shows the before and after:

https://imgur.com/a/0rhy7q7

(The diagonal lines are a watermark because I didn't pay to register video enhancer.) Also note that though it might look like a sharpen mask was applied, in fact it was not: this is just the superresolution that video enhancer came up with.

Now granted I don't think that this particular site uses state of the art algorithms (its references on the page I linked are decades old) but it's the first one I found.

The site also has a page explaining when it doesn't work:

http://www.infognition.com/articles/when_super_resolution_do...

It specifically calls out "If your video is compressed to a low bitrate, in many cases this is very bad for super-resolution."

This certainly seems to be the case here. On my comparison picture above you can see that it certainly is an improvement; it is just not enough. I still can't read most of the lines. I think this also doesn't use as many keyframes as it could. (Which makes sense - it is rare that a static image is up for, in this case, 25 full seconds!)

There are at least 14 full keyframes there so I think there is more detail to be extracted, but it would, obviously, take longer analysis. I'll let you know if I find anything better or get an answer from the paper authors.

acqq
It's certainly interesting that you're trying to figure out the "state of the art", but as far as I can see the amount of information is too low to recover more readability, and that was my estimate even before your experiments. The way I saw it, the algorithm would have to model the "compression interference patterns" to do that, and even if something like that existed the amount of information seems to be far too low.

If it's really about the given talk, and not about exploring the state of the art in algorithms for recovering (textual?) information lost to video compression, you'd be better off communicating

1) with Dr Hipp who gave the talk -- he published the more recent original slides on the sqlite site:

https://www.sqlite.org/talks/

so it seems reasonable that he'd be willing to publish these older slides (which he probably considers outdated in some respects). Then, only if that fails:

2) with the author of the video who possibly still has a higher quality version of the video, if the quality was dropped during the video compression or preparation for youtube, e.g. while trying to reduce bandwidth or reencode from the native recording format.

logicallee
yeah, after I wrote those paper authors I also wrote the slide author - who is the only one who emailed me back, so I got the slides.

Still, I think more information is recoverable. For example I could tell Video Enhancer wasn't using that many frames (from the encoding progress and framerates), maybe just a few adjacent frames.

Yet you could see a very clear improvement:

https://imgur.com/a/0rhy7q7

(As though a sharpen mask was applied, but it wasn't.)

I did see if I could get an improvement if I forced it to use more frames by running through a few times, each time doubling video speed so I end up with 1 frame from the whole segment - but it didn't end up better. Anyway, now I have the original.

reilly3000
There was a similar issue with detecting text at an angle when people were trying to decipher whether the first lady was wearing a jacket with a dog-whistle insensitive message on it. The original source image was at an angle, so determining its authenticity was challenging until other images emerged.
dicroce
so I was trying to figure out why a query was slow the other day... it was a nasty query with like 14 joins... I used explain and saw that it was a mess... now in my case I was able to switch to outer joins and nest related joins and got it fast.. but I had some interesting thoughts.

In SQL, indexes are implicit.. they are used if available, but it's easy to get a large query to scan sometimes when it shouldn't... what if there was a different query language with explicit index syntax.. I think you'd get a lot more predictable performance.

vidarh
MySQL has or had a way of forcing index use, and while it was very occasionally a life-saver, it was much less useful than you might think: often when a DB engine falls back to a sequential scan, it's for a reason (e.g. the query planner might have found that the indexes don't cover the columns you need, and the indexed lookups into the table that would be required are costly enough that a sequential scan could end up being faster).

It was useful occasionally "back in the day" when MySQL's query planner was really bad, but today it will mostly appear useful if there's something subtly wrong with your config. I don't use MySQL much any more, but on Postgres one typical mistake might be that the costs configured for the query planner don't match your hardware (e.g. if your seek cost is configured high enough relative to sequential reads, sequential scans start to look really good even when you need only a small portion of the data; the principle will be the same on MySQL, but I don't know whether you have the same control over the query planner costs or not).
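For reference, on Postgres those planner costs are ordinary settings you can inspect and experiment with; the values below are purely illustrative and depend on your storage:

    SHOW seq_page_cost;      -- cost of reading a page sequentially (default 1.0)
    SHOW random_page_cost;   -- cost of reading a page at random (default 4.0)

    -- On fast SSDs the default random_page_cost is often too pessimistic,
    -- which makes sequential scans look artificially attractive by comparison:
    SET random_page_cost = 1.1;                  -- per-session experiment
    -- ALTER SYSTEM SET random_page_cost = 1.1;  -- persistent; reload config afterwards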

wgjordan
> what if there was a different query language with explicit index syntax..

There is, it's a feature in MySQL called Index Hints [1].

[1] https://dev.mysql.com/doc/refman/8.0/en/index-hints.html
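For example (a quick sketch of that feature; the table and index names are made up):

    SELECT * FROM orders USE INDEX (idx_customer)     -- prefer this index if usable
    WHERE customer_id = 42;

    SELECT * FROM orders FORCE INDEX (idx_customer)   -- treat a table scan as very expensive
    WHERE customer_id = 42;

    SELECT * FROM orders IGNORE INDEX (idx_customer)  -- never consider this index
    WHERE customer_id = 42;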

brianwawok
Or Oracle, which has it too.

A DBA can even sit there as queries fly past, and add hints on the fly.

And then you change a query from "select" (lowercase) to "SELECT" (uppercase), and query plans break and you break production.

Fun times

Matt3o12_
Could you elaborate why changing select from lowercase to uppercase would break anything?
kimdotcom
Stored procedures where master and slave are on different OSs and/or case-sensitivity settings are not the same?

Just a guess.

Amezarak
I would suggest that there's potentially something you need to look at with your database schema - a couple dozen joins shouldn't be causing any problems you have to think about.

Part of this is because of the way a well-normalized database is organized. Most databases have a few large tables and many smaller tables. So in the general case, most of your joins will be against smaller tables. Joins with larger tables are usually very fast, as long as the fields you join on are indexed (and you're not doing a CROSS JOIN or something.) The other thing that helps (which it sounds like you did by "nesting related joins") is to always think about limiting (filtering) the datasets you're joining against at as many stages as possible; that way you're always doing the least amount of work necessary, and it's usually conceptually simpler to read and understand.
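A small sketch of that "filter early" idea, with invented table names (a good planner will often push the filter down by itself, but writing it this way keeps the intent obvious and the intermediate result small):

    SELECT c.name, o.total
    FROM (
        SELECT customer_id, total
        FROM orders
        WHERE created_at >= DATE '2018-01-01'   -- restrict the big table first
    ) AS o
    JOIN customers AS c ON c.id = o.customer_id;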

As others have said, most databases do have index hinting as part of the query language. However, in my (long) experience, you should almost never use it. Index hints should be a huge code smell.

kimdotcom
I use UUIDs as primary keys, you insensitive clod!
DenisM
Two reasons to not have indexes in the query:

1. A query expresses the result it produces, not the method used to obtain it. Semantics vs implementation. It may be a pain to write, but it will be easier to read later.

2. A DBA could add/drop indexes on the fly to tune performance of a live system without making any changes to the application code, and be 100% certain he is not changing the semantics of what's going on.

As others noted, if you must, you can use query hints to force a particular index to be used for a particular operation. MSSQL also lets you pin down a query plan you like for a given query so that it doesn't drift away later due to environment changes.

I agree it is sometimes a pain to force SQL to use the index you wanted it to use.
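As a concrete illustration of both mechanisms on SQL Server (a sketch only; the object names are invented, and Query Store plan forcing is just one of the ways plans can be pinned):

    -- Table hint forcing a specific index for this one query:
    SELECT o.id, o.total
    FROM dbo.orders AS o WITH (INDEX (ix_orders_customer))
    WHERE o.customer_id = 42;

    -- Query Store (SQL Server 2016+) can force a previously captured plan:
    -- EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;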

cryptonector
So, the way I think this should work is that there should be a way for an application to address the table sources in a query it has parsed, and then externally (i.e., in the application) provide planning hints to the compiler.

Something like (in some terrible pseudocode):

    q = parse_query("...");
    q.hint(FIRST_TABLE, "a");
    q.hint(INDEX, "b", "b_idx1");
    c = q.compile();
    r = c.run(...);
AmericanChopper
I’ve never worked anywhere where I had to worry about DBAs running around dropping indexes. The main reasons not to build an index are usually storage and write overhead. Every index a table has means you have to do another write operation on every insert, which can really start to add up. They can also add significant overhead to any migration operation that happens to require an index rebuild.

In my experience, the most common reason for an optimizer choosing not to use an existing index, is out of date statistics. For those who aren’t aware, the database collects table statistics for things like cardinality, number of distinct values, etc... This is the information the optimizer uses when it’s building a plan. If they get out of date the optimizer will start to come up with nonsense plans. Even worse, if your stats get too out of date, you can become scared to update them, because a new set of stats can potentially change the plans built for every single query in ways that are hard to predict.
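For anyone unfamiliar, refreshing those statistics is usually a one-liner; the exact command varies by engine (the table name below is made up):

    ANALYZE orders;                  -- Postgres (omit the table name to analyze everything)
    ANALYZE TABLE orders;            -- MySQL
    UPDATE STATISTICS dbo.orders;    -- SQL Server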

As others have stated, you can put index hints directly into your queries, but this should be avoided as they're hard to maintain. Most 'enterprise' RDBMS also have some form of plan management, but this should be avoided even more, as managed plans permanently bypass the optimizer, which is even harder to maintain.

api
Sqlite is incredible. Tiny and usually used for small stuff but I have heard of 1TB+ databases with acceptable performance.
stevoski
If you are a Java programmer and want to learn how an SQL database engine works, take a look at the source code of H2.

Even better, try to add a basic feature to H2 (eg. a new built-in function). It is surprisingly easy, and you come away with a decent understanding of the basics of building an SQL database engine.

A_Person
Gosh, I must say there seems to be some misunderstanding of RDBMS concepts in some posts in this thread!

I was writing database systems professionally, back in the days before the RDBMS concept was even a thing. So here's my (enormously long and convoluted) 2 cents worth. Please be sure to pack a sandwich and get hydrated before you continue.

Say you were dealing with doctors and patients, and needed to store that information in a database. Back in the day, you'd typically use a so-called hierarchical database. To design one of those, you need to decide, what is the most common access method expected to be: getting the patients for a given doctor, or the doctors for a given patient? You'd design the schema accordingly. Then, the preferred access method was easy to code, and efficient to run. The "other" access method was still possible, but harder to code, and slower to run. The database schema depended on how you thought the users would access the data.

But that is absolutely NOT what to do with an RDBMS. Certainly you look at the users' screens, reports, and so on - but that's just to determine what unique entities the system must handle - in this case, doctors and patients. Then you ignore how the users will access the data, and work out what the inherent logical relationships between all the entities are.

Your initial answer might be this. A doctor can have many patients, and a patient can have many doctors. As any competent relational designer will instantly know, this means you need a resolving table whose primary key is a composite key comprising the primary keys of the other two tables. So if Mary was a patient of Tom, you'd add Mary to the patients table (if not already there), Tom to the doctors table (ditto), then add a Mary/Tom record to the resolving table. By that means, a doctor could have any number of patients, a patient could have any number of doctors, and it's trivially easy to write simple, performant SQL to access that data however you want.

But then you'd have a ghastly thought: patients can also be doctors, and doctors can also be patients! Say Tom was also a patient of Mary! Now you need a Tom record in the patient's table, but that would inappropriately duplicate all his data from the doctors table! Something's clearly wrong. You'd soon see that from a data modelling viewpoint, you don't want doctors and patients as separate entities - you want a single entity Person, and a resolving table to relate arbitrary people in specified ways.
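A minimal sketch of that final model (the table and column names here are mine, purely for illustration):

    CREATE TABLE person (
        person_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL
    );

    -- Resolving table: a composite primary key built from two person keys,
    -- relating arbitrary people in a specified way (here, doctor treats patient).
    CREATE TABLE treats (
        doctor_id  INTEGER NOT NULL REFERENCES person (person_id),
        patient_id INTEGER NOT NULL REFERENCES person (person_id),
        PRIMARY KEY (doctor_id, patient_id)
    );

    INSERT INTO person (person_id, name) VALUES (1, 'Tom'), (2, 'Mary');
    -- Mary is a patient of Tom, and Tom is also a patient of Mary:
    INSERT INTO treats (doctor_id, patient_id) VALUES (1, 2), (2, 1);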

So what?!!

So this. IMHO, many developers using relational databases have absolutely no idea about any of that. They design hopelessly unnormalised schemas, which then need reams of ghastly SQL to get anything out. The query planner can barely parse all that crap, let alone optimise it. The database has to stop every five minutes to wet its brow and take a drink.

So here's my advice to any inexperienced relational database designers who have actually managed to get this far!! If you can answer the following questions off the top of your head, you're probably on the right track. If you can't, you're lacking basic knowledge that you need in order to use an RDBMS properly:

- what is a primary key?
- what is a foreign key?
- what is an important difference between primary keys and foreign keys?
- what is a composite key? When would you use one?
- what are the three main relations?
- what is normalisation?
- what is denormalization?
- what is a normal form?

and so on.

Just my 2c! :-)

mmjaa
I imagine you've written more than your fair share of PROGRESS 4GL code in the past .. your qualification questions are pretty much straight out of the PROGRESS 4GL user guide .. ;)
A_Person
Nope! I've never used postgres at all. Most of my RDBMS work was done on HN NSFW ALERT - oracle :-) But your comment supports my general point, which is that data modelling and relational schema design skills are product-agnostic; normalization is normalization, as it were.
justinclift
Just to point out, "PROGRESS 4GL" is a different thing, not PostgreSQL based.

It seems to be called OpenEdge Advanced Business Language these days:

https://en.wikipedia.org/wiki/OpenEdge_Advanced_Business_Lan...

And yeah, the terms you mentioned are foundational RDBMS terms, common to all reasonable implementations of RDBMS's. :)

A_Person
Ok. Oracle had/has a presumably similar thing (PL/SQL). I misread his comment to mean Postgres SQL.
mmjaa
Yes indeed, I meant PROGRESS 4GL, which was sort of around before all the "RDBMS" hoopla congealed all this wonderful info into a single art. It's just that your last paragraph really sounded like a quote, near-verbatim, from the very guides published by PROGRESS just for the purpose of educating people on how to make their data relatable .. but of course that is because these are natural laws for databases.
A_Person
I remember when 4GLs were the hot new thing. Let's get into those! Oops, nope, forget that, now it's Data Dictionaries, let's get into those instead! Rinse & repeat ...
kimdotcom
So, were you writing DB software before Codd's research at IBM was available?
A_Person
Yes indeed. I haven't checked his research dates, but I was writing database software in the late (uh) 60s and early 70s. I think that was even before "goto harmful". I still remember our standards officer saying, let's try some of this structured programming stuff!
A_Person
Further to my other reply, I've just checked the intertubes, and found that Codd's paper "A Relational Model of Data for Large Shared Data Banks" was published in 1970, and Dijkstra's "Go To Statement Considered Harmful" in 1968. So I think my fading memories of all this are accurate (for once!).
vram22
I'm guessing A_Person is making up some of what he/she said (to be entertaining) (I don't mean the DB facts - which are right, of course, and the questions at the end), but it was an amusing post anyway :)

Well done.

A_Person
Thanks :-)
okket
Sadly bad audio (room mic with all the ambient noise) and bad video quality (slides are almost unreadable). But great content.
randop
Thank you. Very educational. Interesting to know that ORDER BY carries a significant performance penalty without LIMIT.
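Roughly, the point is that without LIMIT the engine must sort every matching row, whereas with LIMIT it only has to track the current top N while scanning, and a suitable index can avoid the sort altogether (a sketch; the names are invented):

    SELECT * FROM events ORDER BY created_at DESC;             -- sorts the whole result
    SELECT * FROM events ORDER BY created_at DESC LIMIT 10;    -- only the top 10 are kept
    CREATE INDEX idx_events_created_at ON events (created_at); -- can remove the sort entirely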
serioushaha
slides : https://www.slideshare.net/VikasBansal23/how-sqlite-works
angelfreak
Really great, thanks for posting.
bitmapbrother
This is a much better talk with better video and sound.

https://www.youtube.com/watch?v=Jib2AmRb_rk

cup-of-tea
What is that acronym he keeps saying? MBCC?
ntonozzi
MVCC - Multi version concurrency control.
ericnyamu
check this out https://tweetsverse.blogspot.com/2018/06/welcome-to-new-foru...
anothergoogler
Love how he started. People who don't stop their conversations for a presenter are the worst. People who don't stop their conversations for a presentation by Richard Hipp deserve a spell of laryngitis.
Jacq5
Spell of laryngitis for Darwin awards winner is the new prize.
blackrock
I'm not going to call anyone out here, but why do people keep using the word orthogonal?

It doesn't even compute. It doesn't even make any sense, in how they use it in relation to the topic.

Are the issues at right angles of one another? No.

Are the issues statistically independent of one another? Perhaps.

I suggest to use a more appropriate descriptive word to describe the situation.

You folks should read the urban meaning of orthogonal, to understand how people roll their eyes at you, when you inappropriately use the term.

https://www.urbandictionary.com/define.php?term=orthogonal

Just another friendly PSA.

sanderjd
Oh wow I totally disagree. I have always found the computer science usage of "orthogonal" to be very analogous to orthogonal sets in linear algebra. It is only in two dimensions that it boils down to right angles; That's the uninteresting case! The more general concept is one of independence, of something being broken down into its constituent parts, such that each part is pulling its weight in some way that even all the other parts combined could not. Which is exactly how I see it being used here. I still remember the moment my programming languages professor introduced the concept of orthogonality in that field; I intuitively grasped the meaning and was awed by that power of analogy.
wwweston
What does it mean for something to be at a right angle to something else?

There's a euclidean geometric answer to that statement, but it's hardly the only correct answer.

When people use it to mean that they're speaking of two issues that have a range of independent possibilities, it's not wrong to invoke linearly independent bases.

abiox
is english a second language for you? 'orthogonal' is frequently used to indicate two things are not directly related or dependent.

> You folks should read the urban meaning of orthogonal

nope, nope, nope. that site's a hive of scum and villainy, and a massive number of entries are just random nonsense.

i'd rather go to wiktionary[0], which includes:

"Of two or more problems or subjects, independent of or irrelevant to each other."

[0] https://en.wiktionary.org/wiki/orthogonal

Izkata
> You folks should read the urban meaning of orthogonal, to understand how people roll their eyes at you, when you inappropriately use the term.

If that mattered at all, then we'd have stopped using other remapped words first, like "tree".

ternaryoperator
That's how languages evolve. Words that meant one very distinct thing come to mean something only partially like the original. People decry the misuse. And finally the new meaning becomes the one true meaning and the original sense is marked in dictionaries as "archaic."

For a word that's gone through that exact cycle, have a stare at "artificial," which was the adjective for "artifice," which at one time meant craftsmanship. When St. Paul's Cathedral was first shown to King Charles II, he praised it for being "very artificial" -- a compliment. [1]

In the meantime, I agree that it can be frustrating to see words apparently misused. But I think this is hardly the mark of an "idiot," as you put it.

[1] https://quoteinvestigator.com/2012/10/31/st-pauls-cathedral/

jokoon
I don't like to use SQL engines because I don't understand how they work; I never really know if my query will be O(1), O(log(n)), O(n), etc, or what kind of algorithm will optimize my query.

Who really understands how a SQL engine works? Don't you usually need to understand how something works before you start using it? Which SQL analyst or DB architect really knows about the internals of a SQL engine? Do they know about basic data structures? Advanced data structures? Backtracking?

That's why I tend to avoid systematically using a SQL engine unless the data schema is very, very simple, and instead manage and filter the data case by case in code. SQL is good for archiving and storing data, and for working as an intermediary, but I don't think it should drive how a piece of software works. Databases can be very complex, and unfortunately, since developers like to make things complicated, it becomes hairy.

I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, has data that cannot fit in RAM, apart from big actors.

I tend to advocate for simple designs and avoid complexity as much as I can, so I might be biased, but many languages already offer things like sets, maps, multimaps, etc. Tailoring data structures might yield good results too.

Databases still scare me.

coldtea
>I don't like to use SQL engines because I don't understand how they work; I never really know if my query will be O(1), O(log(n)), O(n), etc, or what kind of algorithm will optimize my query.

Unless you're generating totally dynamic queries that's a moot point.

You can always try it and measure it -- just like, you know, you would profile a program in any programming language. And you can trivially have the database show you the query plan as well.

Do you also not use APIs because you don't know a priori if a call is O(1) or O(N) or O(Nlog(N)) etc?

>I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, has data that cannot fit in RAM, apart from big actors.

That's really orthogonal.

Speed and indexes still matter today with big data (or plain "tens of thousands of web users" loads), where we often have to denormalize or use indexed non-sql stores just to get more speed for the huge data we still need to be able to query fast.

Besides, something indexed will be faster whether they are in disk or in RAM compared to something in the same storage that's not indexed.

So unless we're coding something trivial, server side we still want all the speed we can get from our data, rather than just having it as the simple structures RAM provides.

You wouldn't use a linked list as opposed to a hash table just because your data "fit in RAM". Even in RAM ~O(1) vs ~O(N) matters [1].

SQL was invented and caught on because companies had tried and been burned by no-sql stores with incompatible storage standards, lax reliability guarantees, no interoperability between client programs, no all-encompassing logical abstraction (compared to relational algebra) but ad-hoc reinventions of the square wheel, and so on. Ancient issues the wave of Mongo fanboys brought back all over again.

[1] unless the linked list is so tiny as to fit in cache and avoid the hash table indirection, but I digress

ludsan
Your concern about the opaque and abstract layers below you applies to language compilers as well (which I think is the "better" alternative you seem to prefer).

There is literature legion on the implementation of the database that would assuage you, should you concern yourself with reading it. I don't think you need to. Trust that many many smart people have engineered many many decades of excellent software.

That is, not to say, that you won't need to peek below or concern yourself with certain choices -- indices, commits, columnar-vs-row, etc. as your performance or access patterns dictate.

More importantly, the relational model is still the gem that shines as a beacon to model logic and data and is too often undervalued due to its association with 'enterprise software' and the implementation language (SQL is a bit warty).

nostalgeek
> There is literature legion on the implementation of the database that would assuage you, should you concern yourself with reading it. I don't think you need to. Trust that many many smart people have engineered many many decades of excellent software.

can you recommend some material?

pg314
"Architecture of a Database System" by Hellerstein, Stonebraker and Hamilton [1], gives a good overview. The source code and documentation of PostgreSQL is excellent if you want to dive deeper.

[1] http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf

glhaynes
100% agreed - it's exactly the same pattern as other tools. Write it in the most simple/obvious/maintainable way; then, if you have a performance issue (quite rare IME when building something that doesn't obviously need a database FTE from the outset), spend a few minutes semi-educatedly poking at it to see if you can stumble onto a drastic improvement; then, if not, dive deeper.
greenyoda
> I really wonder who, today, have data that cannot fit in RAM, apart from big actors.

Keeping all your data in RAM has significant problems, even if it all fits. For example, would you want to lose all your customers' orders and billing information if your code crashed?

In addition to the relational database model, SQL databases offer ACID transactions, which are useful if you want to have consistent and reliable data:

https://en.wikipedia.org/wiki/ACID

PuercoPop
> For example, would you want to lose all your customers' orders and billing information if your code crashed?

There are things like WAL and snapshots. Having your dataset in RAM and querying it directly doesn't exclude persisting it to disk. Read Stonebraker's "The End of an Architectural Era"[0]. Basically the OP is right in that SQL DBs were designed assuming that RAM was scarce, and that assumption is no longer valid. They are inefficient for every common use case. By at least an order of magnitude.

[0]: http://cs-www.cs.yale.edu/homes/dna/papers/vldb07hstore.pdf

jstimpfle
> exclude

As a non-native speaker, I think that preclude is the word you're looking for. Not disagreeing with what you say.

mmt
> Read Stonebraker's "The End of an Architectural Era"

I tried, but it lost me in section 2.3:

>It seems plausible that the next decade will bring domination by shared-nothing computer systems, often called grid computing or blade computing.

No, it doesn't seem plausible at all. This has been, by some accounts, the future of computing, since at least the 80s.

https://en.wikipedia.org/wiki/Transputer

But shared-nothing is just too darned hard to program for.

Also, main memory is still scarce. We're just barely up to the 1TB of just some of the (small, by today's standards) databases the paper mentions. Ironically, it seemed to emphasize traditional business database needs over what might happen with the tech industry itself, which has turned out to be the main driving force behind database usage (and data creation).

slow_donkey
To be fair, using redis or elasticsearch as a main datastore is doable. Although I'm not sure they're much better choices in terms of understanding how they work.

You could summon Antirez I guess

kimdotcom
Doesn't redis by default have recovery via the filesystem enabled?
philjohn
It does - BUT depending on how often you have it syncing changes to disk you can lose data.
jnwatson
For Postgres at least, you can literally ask it how a query works, via EXPLAIN. Now, there’s a skill to understanding the output of that, but at least it isn’t a black box.
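And beyond the plain plan, EXPLAIN can also run the query and report what actually happened, which is usually where the surprises show up (the table name is invented):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM orders WHERE status = 'shipped';
    -- Shows estimated vs actual row counts, per-node timings and buffer usage.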
sebojanko
There's something similar for SQL Server too.
ebikelaw
And the query plan can literally change out from under you at any time. SQL sucks. You should be able to dictate the query plan to the engine directly. If SQL exists as a tool to create and serialize such plans via exploration and experimentation, that’s fine. As a runtime query system it is completely unsuitable.
wvenable
> You should be able to dictate the query plan to the engine directly.

That's like arguing you should be able to dictate the assembly that your compiler produces. The entire point of SQL (and most compilers) is that they can use their knowledge to optimize the result in ways that are too difficult or too involved for humans to do. Most people cannot out-optimize a compiler in the general case. And most people cannot out-optimize a SQL DBMS.

Also, with SQL, the result might be highly dependent on the data itself. A table with 1,000 rows yesterday might be queried entirely differently from the same table now with 100,000 rows. Are you going to constantly go back and dictate the query plan to the engine every few months as the data changes? Probably not. Use the tool as intended and you'll be fine. Anything else is premature optimization at best.

dustingetz
You're both right, there is a values mismatch here. The reason we need optimizing compilers to target modern CPUs is because the hardware architecture is so complicated. The root cause of the complexity is less essential concerns and more an artifact of the history of how things unfolded, compounded by the difficulty of disrupting the current local maxima that we're stuck in.
ryanworl
Funny enough, in the more recent lecture I posted for SQLite below, it actually doesn’t work that way. The query planner will generate the same plan every time provided you don’t run the analyze command over your data to generate new statistics.
matthewmacleod
You are almost certainly going to be worse at deciding how your query should run than the query planner will be.

A straightforward Postgres database will almost certainly fulfill your performance requirements unless you have a pretty edge-case scenario. Under those circumstances, it would be foolish to start by assuming that you are doing something that the database cannot easily accomplish. After all, it's much easier to migrate the parts of your stack that require custom code to a new infrastructure when you reach scale, rather than trying to constantly patch bugs and performance issues in your shitty wannabe SQL engine!

blattimwind
IME query planners do a rather good job at this and scale nicely from small (few pages) to very large data sets. Missteps are rare, but will always happen. It's a heuristic, not magic.
GhostlyHacker
You can force a certain query plan in SQL Server's newer versions (https://www.red-gate.com/simple-talk/sql/database-administra...), but most of the time query plans won't change unless the statistics or cardinality of the data changes. You control both of those things.
pg314
> As a runtime query system it is completely unsuitable.

Millions of users beg to differ.

makoz
There are some tools to force specific hints/plans on the engine depending on the database, e.g. SQL Plan Management in Oracle.
purerandomness
SQL is a declarative language: You describe what you want, not how to get it.

If that's not what you need, there are plenty of procedural languages with which you describe how to get things.

A query plan will change based on statistics. It's the engine's job to decide if your query is best served by parallelizing it to multiple cores, or deciding if it's worth JITing it before execution.

The actual execution of the query is an implementation detail of the engine, and should be. That's the entire point of a SQL standard: To provide users with an interface to talk to engines, which then retrieve the data you asked for.

It is its core strength and the reason why it's so successful.

ebikelaw
"You describe what you want, not how to get it."

The entire archive of the pgsql users' mailing list disagrees. Everyone wants to know why their plan is suboptimal, or why it changed. People also want to know why the planner takes 100ms to generate the plan and only 1ms to execute the query, and so forth.

The idea that you just say what you want and you get the optimal result from your database is just ridiculous to anyone who has had to use them under any significant load.

wvenable
The thing is, 99.999999% of the time (and I'm probably missing a few 9's) the engine does exactly the most optimal thing.

However, database engines aren't perfect -- I know I've encountered bugs in older SQL server versions where the query never finishes but making some trivial adjustments fixes it. This is a bug. And most mailing lists are filled with people encountering bugs. Saying what you want and getting the best result is exactly what you should expect.

zbentley
> The entire archive of the pgsql users' mailing list disagrees. Everyone wants to know why their plan is suboptimal, or why it changed.

And rather a lot of the archives of $scripting_language_of_your_choice are people confused about duck-typing/type system failures. That doesn't mean scripting languages should be replaced with statically typed ones; just that there are pain points in every system, and right (or wrong) tools for every job.

Don't believe me? Check how much of the FAQ traffic from first-time Rustaceans (or Swift/Java/etc. newcomers) has to do with how to satisfy their language's type system.

You pick your poison. SQL gives you a clearly defined set of tradeoffs up front. If that's not for you, no worries, move along.

zzzcpan
The biggest tradeoff SQL gives you is exactly the one GP points out: it's a leaky abstraction where understanding performance is tied to implementation so much, that you pretty much lose all the benefits of SQL, except familiarity with the technology.
mmt
> Everyone wants

This seems like mere selection bias.

As the sibling comment points out, users who have no problem aren't likely to post "Everything is fine!" to the mailing list. In fact, it would likely be rude to do so.

wolf550e
All RDBMS engines have some kind of EXPLAIN, otherwise it's impossible to troubleshoot performance issues. The differences are in the amount of detail you get from the optimizer/query planner, whether you get profile of actual run side-by-side with the plan, etc.
dragonwriter
> Who really does understand how a SQL engine work?

Presumably, at a minimum, all the people who work on such engines, including committers to the various open-source ones.

But also lots of other people.

> Don't you usually require to understand how something work before starting using it?

No. Very few programmers understand how compilers work before they start using them. I'd say it's more common to require working on something to really understand how it works than the reverse.

> I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, have data that cannot fit in RAM, apart from big actors.

Indexing is no less important for in-memory data access.

josteink
This is what you get when developers are afraid to touch anything which doesn't look like Javascript or JSON.

The incompetence and ignorance shown in this post is simply astounding.

dagw
I really wonder who, today, has data that cannot fit in RAM, apart from big actors.

It's also a question of price. Once you get above about 256 GB of RAM, server prices start to go up really, really fast. And while there are systems with dozens of TB of RAM, they are stupidly expensive.

So even if, in theory, most databases could fit in RAM, most people cannot afford that. And at the end of the day, 100+TB isn't that large a database in the grand scheme of things and you're not easily fitting that into RAM.

mmt
> Once you get above about 256 GB of RAM

I think it might be as high as 1TB these days, though with what's going on with DDR4 prices, the situation is strange at the moment.

Of course, I don't dispute your point that a 100+TB database isn't all that large, especially with indexes.

I suspect that this false dichotomy of "fit in RAM" versus "big data" has resulted in many needless forays into distributed computing.

DreamSpinner
I like to break up data problems like this:

* Trivially Small

* Fits In RAM

* Fits on one server (CPU / Storage)

* Fits on one Big Hardware (Mainframe or other Specialized equipment)

* Requires Distributed Storage And/Or Processing

mmt
I think it's important to know that other boundaries exist, at least conceptually, but, ultimately, it's the practical considerations that are important. For example, for what you listed, what would you say are the storage cutoffs for each tier today?

> data problems

I find there is also, occasionally, a lack of awareness of when a problem is data-heavy, encouraged by abstraction layers like ORMs.

This can lead to casual or naive (neither meant derogatorily) distributed computing, where the app hoovers up the data from database to be processed. This can be great for anything CPU-intensive but terrible for the I/O-intensive.

> Fits on one Big Hardware (Mainframe or other Specialized equipment)

I don't think this really exists today, unless you're including an otherwise commodity high-end (e.g. 8-socket) server that carries up to a 4x price premium in "specialized equipment".

I'm aware that mainframes still exist, but, for a variety of reasons [1], I'd consider them as being in a world of their own, rather than a step on this continuum.

[1] e.g. inherently distributed architecture, not obvious if storage scale actually greater than high-end commodity, interop issues

collyw
Understanding the relational model properly will allow you to write simple, performant code. That's unlike getting rid of a relational database, where you'll have to write and test all that replacement code yourself.
dmoreno
I completely disagree.

Unless your data requirements are very specific, mem-only, or not adapted to the relational paradigm, any SQL engine will provide you with the best and most efficient algorithms to manipulate your data in the most common situations.

I think that, if anything, databases should be used more.

PuercoPop
> Unless your data requirements are very specific, mem-only, or not adapted to the relational paradigm, any SQL engine will provide you with the best and most efficient algorithms to manipulate your data in the most common situations.

Too bad that Michael Stonebraker, Turing Award winner, disagrees with you. SQL databases are not the best solution for any common use case from a performance perspective.

Never mind what they do to the design of an application. IMHO fewer people should default to using a database upfront. At least while prototyping the idea.

https://cs.brown.edu/~ugur/fits_all.pdf

manigandham
You missed the "in the most common situations" part.

Modern RDBMS can handle millions of ops/sec and terabytes of data. They do the job fine 99% of the time and are constantly adding new features.

If you have the 1% need for another data store, there are hundreds of options, and interestingly many of them are also starting to implement SQL as an interface now.

Sean1708
> IMHO fewer people should default to using a database upfront. At least while prototyping the idea.

Surely it should be the opposite? While you're prototyping you should use a DB by default, then switch to your own implementation if you find out that it will speed things up (and you need that speed). It's not like the code that you would replace a DB with is going to be trivial, using a DB is going to keep the code simple until you need it to be complex.

1stranger
How do you prefer to persist your data?
tobyhinloopen
An in-memory database, dumped to a file? Not saying how I would do it, just thinking of alternatives.

I’m using postgresql usually

jacquesm
That's got some seriously limiting characteristics that make it unusable for anything but the simplest settings. Note that the subject here (SQLite) is mostly used in an embedded context and that's roughly what you are describing. Even though that is a very large number of applications it is not the one you are likely to encounter when doing stuff in services for multiple users.
purerandomness
Ideally, if your DB fits into RAM, it should be served from RAM entirely.

If less frequently used parts of it don't fit, offload to disk.

Ideally, the DB should guarantee data integrity after a crash even if the DB is served from RAM.

That's the ideal scenario: You have the best of all worlds.

Coincidentally, that's exactly what Postgres does.

On top of that, it's the best NoSQL database currently available.

Of course, you can use it with SQL if you ever feel the need.
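For what it's worth, the "NoSQL" usage usually means its JSONB support; a tiny sketch with invented names:

    CREATE TABLE docs (
        id   SERIAL PRIMARY KEY,
        body JSONB NOT NULL
    );

    CREATE INDEX idx_docs_body ON docs USING GIN (body);

    INSERT INTO docs (body) VALUES ('{"user": "mary", "tags": ["sql", "json"]}');

    -- Containment query; can be served by the GIN index:
    SELECT body->>'user' FROM docs WHERE body @> '{"tags": ["sql"]}';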

collyw
> On top of that, it's the best NoSQL database currently available.

Surely that depends on the load? From what I understand, Postgres isn't easy to set up when the data spans more than one server, which many NoSQL databases actually handle fairly well.

zzzcpan
> I never really know if my query will be O(1), O(log(n)), O(n), etc, or what kind of algorithm will optimize my query

Aka leaky abstractions. SQL just wasn't designed for performance. A query language that takes performance into account should definitely ignore any ideas from SQL. Maybe have declared data structures instead of tables with operations that use known algorithms, explicitly chosen.

gaius
SQL just wasn't designed for performance.

That is strictly true in the literal sense that SQL is just a textual representation of relational algebra and calculus, and no one says a mathematical notation is "designed for performance" or otherwise.

But in a more practical, useful sense, it's the language most designed for performance, since the query planner has so much leeway to perform optimisation. It can do more dramatic transformations of the parse tree even than a C compiler.

zzzcpan
If you cannot predict performance of a query you cannot write a query with guaranteed performance characteristics and cannot guarantee performance of your application. So, no, SQL and RDBMSs in general are neither designed for performance nor are any good at it. Which is one of the reasons we have the whole world outside of them.
olavk
SQL and the relational model is designed for good performance rather than predictable performance. It may use statistics collected at runtime to improve performance, so this is not predictable - but it will be fast and getting faster. Hardcoding the query plan will most likely be slower unless you know the usage patterns up front - including the amount of data in the tables. This is very rarely the case.
gaius
you cannot... you cannot... cannot

You keep using that word. I don’t think it means what you think it means.

Because people have been doing it since the 1980s...

mmt
> Which is one of the reasons we have the whole world outside of them.

The other reason seems to be a failure to learn (or a belief in "this time it's different") the lessons from the 70s and 80s that informed many of the fundamental design decisions of RDBMSes.

The "NoSQL" world has had a remarkable number of incidents with ACID failures. Unsurprisingly, a fix involves sacrificing performance.

This isn't to say that the trade-off is never worth it. In fact, RDBMSes can and do offer such trade-offs as options. It's just not the default.

It may be accurate that RDBMSes are designed and "shipped" with default configurations that are ACID-first [1] (to coin a term), whereas the "world outside" is performance-first [2].

However, it's nowhere near accurate, and maybe even disingenuous, to suggest that SQL or the relational model somehow prevents high performance. The reality of the actual tools contradicts your claim, as the sibling comment pointed out.

[1] with the exception of early MySQL which defaulted to MyISAM as a storage engine

[2] as the joke goes, so is writing to /dev/null and reading from /dev/zero

blattimwind
You're not scared, you're just too lazy to learn the tools of your trade.

Databases are not very complex and use pretty much only textbook data structures and algorithms. Understanding how they process a given query and how a query will probably perform/scale (even without EXPLAIN ANALYZE) is not hard to learn. You do need to learn it (at some point; you don't for small data, which is most). But it's far from difficult.

> That's why I tend to avoid systematically using a SQL engine unless the data schema is very very simple, and manage and filter the data case by case in code.

And that's the mentality that gives us webshops where applying a simple filter results in a couple of seconds of load time and uses hundreds of MB of RAM per request, server side.

aardvark291
>Databases are not very complex and use pretty much only textbook data structures and algorithms

skeptical expression

collyw
If you are used to imperative programming it does take a bit of time before you get to thinking in sets. Looking at the code I have inherited, even some reasonably experienced developers don't get to that stage.

Do you really think that replacing an "ORDER BY" statement with your own sort is going to be simpler?

aardvark291
"________ is not very complex and uses pretty much only textbook data structures and algorithms" is pretty much a false statement for any engineered product.
blattimwind
And the difference is usually close to nil for understanding them. (Also see squirrelicus' comment). Whether my database uses a B-tree or a proprietary, patented, insanely complex and finely tuned data structure that does essentially the same thing as a B-tree just a little bit faster is, for all intents and purposes, not a relevant distinction.

Now it's true that most databases have a ton of features and even more optional features that can (and will) interact in interesting ways, but I thought it was fairly obvious that most applications use very few or none of these. They are the 90/10-sort of features; 10 % of a database vendor's customers need 90 % of the specialized features in a database, and every single one of them uses a different handful.

Obviously you don't need to understand all these specialized features to use a database; you only need to grasp the handful, if any at all, that you actually need at a time. Applications striving for wide database compatibility tend to rarely use any of these, simply because they don't exist in all databases, or work differently, or have divergent interfaces.

So any time you have an application that runs on MySQL or postgres in production but is developed and tested on SQLite (an antipattern itself, but I digress), you can be assured that you'll only see fairly basic DDL and SQL.

(You also seem to be intermingling understanding and building. I can use and understand how a typewriter works without having a clue how to build one. Yes, there are lots of hard problems solved by databases, but how they do it is mostly a don't-care. I don't have to care how SQLite does power-fail-safe transactions; that it does, and what that means for me, is all I have to know as a user.)

squirrelicus
Whether they use highly tuned or proprietary or obscure algorithms (they do) is less important than the fact that understanding b-trees and basic normal forms will get you 90% of the way to understanding how to use one. It's just not that hard as a database user.
barrkel
Databases are amongst the most complex systems you will ever use as a developer. At the limit, they rival operating systems for complexity - distributed concurrent systems with lots of low-level memory and filesystem action, along with a parser, optimizing compiler and often a code generator too.
vidarh
While that may be true for some of them, a lot of databases are far simpler.

Sqlite for example is conceptually very simple, and it's fairly cleanly layered, so you can aim to understand the parser, the compiler and the bytecode execution engine separately.

It's also exceptionally well documented. Look at the "Technical and Design Documentation" section here [1].

It won't tell you everything about how a more complex database like e.g. Postgres works, but it will give a very good overview of most of what is relevant for a database user that wants a better understanding of why a database does what it does with a given query.

[1] https://sqlite.org/docs.html

bpicolo
The performance characteristics of them, though, are easy to learn, and easier to get a grasp on via EXPLAIN and such.
barrkel
Given a database and a query, yes, you can understand the execution profile.

What you can't do is start out with a schema and a query and understand the execution profile. Query planners take statistics into account when determining which indexes to use or to fall back to scanning, and can normally only examine a fraction of possible join orders. So you can usually only fully predict the performance profile of simple queries. Databases can run perfectly well one day and fall off a performance cliff the next, when statistics change; doesn't happen often, but it can happen.

More likely, something works perfectly well in test and in the early days, but rapidly becomes untenable as the data grows and superlinear slowdowns emerge.

Thus you need to put run-time monitoring into place, and have a routine process of examining slow-running queries, and fixing things that are slow: rewriting queries, adding indexes, possibly denormalizing data, materializing views, etc. It's an ongoing process in any application that works with lots of data.

blattimwind
I might add, those processes have little to do with SQL. When you have a lot of data, and a lot of queries, then you're going to have to monitor and optimize your databases.

I might also add that a basic understanding of data modelling and what an index can and can't do is sufficient to avoid many, many performance pitfalls (this again has mostly nothing to do with SQL per se). Any undergrad course on databases teaches these basics.

barrkel
The variance provided by an enormous pile of state and an algorithm which uses that state means it's a bit more unpredictable than most other systems, where you're used to seeing lines, curves and almost always monotonic series.
ryanworl
There is a more recent lecture on the same topic from 2015 at CMU: https://youtu.be/gpxnbly9bz4
okket
Much better quality, thank you.
prudhvis
At the 34:30 mark he goes on to say that there are some buggy implementations of mmap. Is that in any way related to how Linux handles pages marked as free?
codetrotter
Mods please update link to this and change title to reflect the year of the replacement video
dang
Does it cover the same material though? I don't want to deprive people of the original survey.
ryanworl
I don’t think it is worth changing the link. They are not the exact same material and anyone interested in SQLite should just watch both.
wilburlikesmith
And wow, there I thought programmers know it all... Best practices, instances and implementations... Suppose we must get a Google / Tesla AI bot to create and write us an efficient and suitable database (plus structure and auto AI joins :p) haha, what do I know...

Hey: @ryanworl wasn't aimed at your comment, you're right, they should watch and learn it all..

provlem
This needs more upvotes to remain on top.
_wmd
this video is a 100ft overview, original link is about SQL internals
netgusto
Detailed explanations start at 26m40s:

https://youtu.be/gpxnbly9bz4?t=26m40s

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.