HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
Noise: A Flaw in Human Judgment

Daniel Kahneman, Olivier Sibony, Cass R. Sunstein · 1 HN comment
HN Books has aggregated all Hacker News stories and comments that mention "Noise: A Flaw in Human Judgment" by Daniel Kahneman, Olivier Sibony, Cass R. Sunstein.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
From the Nobel Prize-winning author of Thinking, Fast and Slow and the coauthor of Nudge, a revolutionary exploration of why people make bad judgments and how to make better ones—"a tour de force" (New York Times).

Imagine that two doctors in the same city give different diagnoses to identical patients—or that two judges in the same courthouse give markedly different sentences to people who have committed the same crime. Suppose that different interviewers at the same firm make different decisions about indistinguishable job applicants—or that when a company is handling customer complaints, the resolution depends on who happens to answer the phone. Now imagine that the same doctor, the same judge, the same interviewer, or the same customer service agent makes different decisions depending on whether it is morning or afternoon, or Monday rather than Wednesday. These are examples of noise: variability in judgments that should be identical.

In Noise, Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein show the detrimental effects of noise in many fields, including medicine, law, economic forecasting, forensic science, bail, child protection, strategy, performance reviews, and personnel selection. Wherever there is judgment, there is noise. Yet, most of the time, individuals and organizations alike are unaware of it. They neglect noise. With a few simple remedies, people can reduce both noise and bias, and so make far better decisions. Packed with original ideas, and offering the same kinds of research-based insights that made Thinking, Fast and Slow and Nudge groundbreaking New York Times bestsellers, Noise explains how and why humans are so susceptible to noise in judgment—and what we can do about it.
HN Books Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
Oooh.. this sounds like a great computer science problem.

"How to get an objective rating in the presence of adversaries"

It is probably extensible to generic reviews as well... so things like Amazon review scams. But in contrast to Amazon, conference participants are motivated to review.

I honestly don't see why all participants can't be considered part of the peer review pool, with everybody voting. I'd guess you run a risk of being scooped, but maybe a conference should consist of all papers, with the top N considered worthy of publication. Maybe the remaining ones could be considered pre-publications... I mean, everything is on arXiv anyway.

So instead of bids you have randomization. Kahneman's latest book talks about this, and it's been making the rounds on NPR, the NYTimes, etc.

https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/...
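
As a rough illustration of the "everyone reviews a random subset, then keep the top N" idea above (and of randomized assignment instead of bids), here is a minimal Python sketch. All names and numbers are hypothetical, and excluding a paper's own authors stands in for real conflict-of-interest handling.

    import random

    def assign_reviewers(papers, participants, authors_of, k=3, seed=0):
        """Randomly assign k reviewers to each paper, skipping its own authors."""
        rng = random.Random(seed)
        return {
            paper: rng.sample([p for p in participants if p not in authors_of[paper]], k)
            for paper in papers
        }

    def top_n(scores, n):
        """Rank papers by mean reviewer score and keep the top n."""
        means = {paper: sum(s) / len(s) for paper, s in scores.items()}
        return sorted(means, key=means.get, reverse=True)[:n]

    # Toy usage: 5 papers, 6 participants, accept the top 2.
    papers = [f"paper{i}" for i in range(5)]
    participants = [f"rev{i}" for i in range(6)]
    authors_of = {p: {participants[i]} for i, p in enumerate(papers)}
    assignment = assign_reviewers(papers, participants, authors_of)
    scores = {p: [random.uniform(1, 10) for _ in assignment[p]] for p in papers}
    print(top_n(scores, 2))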

bitL
Reinforcement Learning with Game Theory - precisely what Littman, the author of the article, specializes in.
cortesoft
All of those solutions assume that an objective rating exists. There might just not be one.
Al-Khwarizmi
Indeed. There is a lot of talk in my field equating large variance in reviews with bad reviewing, but sometimes it's just because we are humans.

Take for example a paper that presents a very innovative method, but with subpar results; and another one that presents an incremental improvement on some existing method, but with results that advance the state of the art. Which is better?

Even if you ask knowledgeable, careful, and honest reviewers, you will get contradictory responses, because it's highly subjective whether you rate originality as more important than results or vice versa (and there are other factors, like whether you think the first method can eventually be improved enough to be useful, which is often just an educated guess). I see this happening all the time, and I don't think it's something that can be "fixed"; it's just how humans work.

cubano
Bingo...we have a winner.

This issue reeks of the rank smell of base politics and in/out-group dynamics, and humans have been fighting these issues, in the abstract, since the Egyptians were building the pyramids.

How can there possibly be an "objective rating" when career advancement, peer respect, and big money are all in the mix, depending on the results?

visarga
It's not just personal interest here; it's that we can't tell what good research is at the outset, and it might take years to be able to appreciate it. It's like climbing a mountain when you can't see the path ahead and have no map: it might lead to the top, or to a lesser peak, or you might have to cross a chasm.

In other words, objectives are deceiving, and ratings are based on objectives.

Example: the inventor of the mRNA method being sidelined at her university when her method wasn't yet famous.

Example 2: Schmidhuber inventing things and being forgotten because data and compute were just too small back then.

It's all about building a diverse collection of stepping stones. Any new discovery might seem useless, and we can't tell which ones are going to matter years later, but we need the diversity to hedge against the unknown.

anon_tor_12345
This is not a CS problem (unless everything is a CS problem) but a very well-known market design problem:

https://en.m.wikipedia.org/wiki/Collusion

PartiallyTyped
I believe they were thinking of it as a consensus problem, where parties need to agree on an objective evaluation in the presence of adversaries, e.g. authors, people with very similar or identical publications, and plagiarists.
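
One simplistic way to frame "agree on an evaluation in the presence of adversaries" is to aggregate scores with a trimmed mean, so a bounded number of colluding reviewers can't drag a paper's rating arbitrarily far. This is only an illustrative sketch with made-up numbers, not something proposed in the thread.

    def trimmed_mean(scores, trim=1):
        """Drop the `trim` highest and lowest scores before averaging, so up to
        `trim` adversarial scores on either end cannot move the result outside
        the range of the remaining scores."""
        s = sorted(scores)
        kept = s[trim:len(s) - trim]
        if not kept:
            raise ValueError("trim is too large for this few scores")
        return sum(kept) / len(kept)

    # Four plausible scores plus one colluding reviewer pushing a 10.
    print(trimmed_mean([6.0, 6.5, 7.0, 6.8, 10.0], trim=1))  # ~6.77, the 10.0 is discarded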
visarga
There are too many papers, reviewing is a thankless job (there is no personal upside to it), and there would be a conflict of interest.
PeterisP
In many such events all participants are required to be part of the peer review pool.

However, they review a limited number of papers (e.g. 3) - "everybody votes" presumes that everybody has an opinion on the rating of every paper. That does not scale - getting a reasonable opinion about a random paper, i.e. reviewing it, takes significant effort. An event may have 1,000 or 10,000 papers; having every participant review 3 papers is already a significant amount of work, and getting many more "votes" than that for every paper is impractical.

It's infeasible, and even undesirable, for everyone to even skim all the submitted papers in their subfield - one big purpose of peer review is to filter papers so that everyone else can focus on reading a smaller selection of the best ones instead of sifting through everything submitted. The deluge of papers (even "diarrhea of papers", as it was called in a lecture linked in another comment) is a real problem; I'm a full-time researcher and I still barely have time to read a fraction of what's being written.
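
A back-of-the-envelope version of the scaling argument above; the participant count is a made-up assumption, while the "1,000 or 10,000 papers" and "3 reviews each" figures come from the comment.

    def reviews_per_paper(n_participants, n_papers, reviews_each):
        """Average number of reviews per paper if every participant reviews
        `reviews_each` submissions."""
        return n_participants * reviews_each / n_papers

    # Hypothetical large event: 5000 participants, 10000 submissions.
    print(reviews_per_paper(5000, 10000, 3))   # 1.5 reviews per paper
    print(reviews_per_paper(5000, 10000, 30))  # 15 per paper, but 10x the reviewing workload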

sjg007
I disagree; plus, meta-moderation might help. Then again, we see voting rings on HN... but a conference has an entrance fee, so maybe that would limit it.
foota
In theory you could probably do something like have three runoff rounds, such that low-scoring papers are eliminated before people do their second review.
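
A minimal sketch of that runoff idea, with a stand-in scoring function; in practice each round's scores would come from fresh reviews of the surviving papers.

    def runoff(papers, score_fn, rounds=3, keep_fraction=0.5):
        """Eliminate low-scoring papers between rounds so later reviewing effort
        concentrates on the papers still in contention."""
        survivors = list(papers)
        for _ in range(rounds):
            ranked = sorted(survivors, key=score_fn, reverse=True)
            survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
        return survivors

    # Toy usage: 16 papers, keep half each round, three rounds -> 2 finalists.
    print(runoff(range(16), score_fn=lambda p: p % 7))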
HN Books is an independent project and is not operated by Y Combinator or Amazon.com.