Hacker News Comments on Noise: A Flaw in Human Judgment
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this book.

Oooh.. this sounds like a great computer science problem: "How to get an objective rating in the presence of adversaries."
It is probably extensible to generic reviews as well... so things like the Amazon scam. But in contrast to Amazon, conference participants are motivated to review.
I honestly don't see why all participants can't be considered part of the peer review pool, with everybody voting. I'd guess you run a risk of being scooped, but maybe a conference should consist of all papers, with the top N being considered worthy of publication. Maybe the remaining could be considered pre-publication... I mean, everything is on arXiv anyway.
So instead of bids you have randomization. Kahneman's latest book talks about this, and it's been making the rounds on NPR, the NYTimes, etc.
https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/...
⬐ bitL
Reinforcement Learning with Game Theory - precisely what Littman, the author of the article, specializes in.
⬐ cortesoft
All of those solutions assume that an objective rating exists. There might just not be one.
⬐ Al-Khwarizmi
⬐ anon_tor_12345
Indeed. There is a lot of talk in my field equating large variance in reviews with bad reviewing, but sometimes it's just because we are humans. Take for example a paper that presents a very innovative method, but with subpar results, and another one that presents an incremental improvement on an existing method, but with results that advance the state of the art. Which is better?
Even if you ask knowledgeable, careful, and honest reviewers, you will get contradictory responses, because it's highly subjective whether you rate originality as more important than results or vice versa (along with other factors, like whether you think the first method can eventually be improved to be useful, which is often just an educated guess). I see this happening all the time, and I don't think it's something that can be "fixed"; it's just how humans work.
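The scenario in the comment above - two honest reviewers who simply weight originality vs. results differently - can be illustrated with a minimal sketch. All papers, scores, and weights here are invented for the example:

```python
import statistics

# Two hypothetical papers on a 1-10 scale per criterion, mirroring the comment:
# one innovative but with subpar results, one incremental but with strong results.
papers = {
    "innovative_but_subpar":  {"originality": 9, "results": 4},
    "incremental_but_strong": {"originality": 4, "results": 9},
}

def score(paper, w_orig):
    """Weighted average; originality gets weight w_orig, results get 1 - w_orig."""
    return w_orig * paper["originality"] + (1 - w_orig) * paper["results"]

# Both weightings are defensible; neither reviewer is careless or dishonest.
reviewer_weights = {"values_originality": 0.8, "values_results": 0.2}

for name, paper in papers.items():
    scores = [score(paper, w) for w in reviewer_weights.values()]
    print(name, scores, "variance:", statistics.variance(scores))
```

With these invented numbers, each paper gets scores of 8.0 from one reviewer and 5.0 from the other - substantial variance with no bad reviewing involved.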
⬐ cubano
Bingo... we have a winner. This issue reeks with the rank smell of base politics and in/out-group dynamics, and humans have been fighting these issues, in the abstract, since the Egyptians were building the pyramids.
How can there possibly be an "objective rating" when career advancement, peer respect, and big money all are in the mix depending upon results?
⬐ visarga
It's not just personal interest here; it's that we can't tell what good research is at the outset - it might take years to be able to appreciate it. It's like climbing a mountain when you can't see the path ahead and have no map. It might lead to the top, or to a lesser peak, or you may have to cross a chasm. In other words, objectives are deceiving, and rating is based on objectives.
Example: mRNA inventor being sidelined at her university when her method wasn't famous
Example 2: Schmidhuber inventing stuff and being forgotten because data and compute were just too small back then
It's all about building a diverse collection of stepping stones. Any new discovery might seem useless and we can't tell which are going to matter years later, but we need the diversity to hedge against the unknown.
This is not a CS problem (unless everything is a CS problem) but a very well-known market design problem.
⬐ PartiallyTyped
⬐ visarga
I believe they were thinking of it as a consensus problem, where parties need to agree on an objective evaluation in the presence of adversaries, e.g. authors, people with very similar/identical publications, plagiarists.
Too many papers, and reviewing is a thankless job (there is no personal upside from reviewing), and there would be a conflict of interest.
⬐ PeterisP
In many such events all participants are required to be part of the peer review pool. However, they review a limited number of papers (e.g. 3) - "everybody votes" presumes that everybody has an opinion on the rating of every paper. That does not scale: getting a reasonable opinion about a random paper, i.e. reviewing it, takes significant effort. An event may have 1,000 or 10,000 papers; having every participant review 3 papers is already a significant amount of work, and getting many more "votes" than that for every paper is impractical.
It's infeasible and even undesirable for everyone to even skim all the submitted papers in their subfield - one big purpose of peer review is to filter papers so that everyone else can focus on reading a smaller selection of the best papers instead of sifting through everything submitted. The deluge of papers (even a "diarrhea of papers," as it's called in a lecture linked in another comment) is a real problem; I'm a full-time researcher and I still barely have time to read a fraction of what's getting written.
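The scaling point in the comment above can be made concrete with back-of-envelope arithmetic, using the figures the commenter mentions (10,000 submissions, 3 reviews each):

```python
# Numbers taken from the comment above; all are illustrative, not from any real venue.
submissions = 10_000
reviews_per_paper = 3        # reviews each paper needs
reviews_per_participant = 3  # load each participant can realistically carry

total_reviews_needed = submissions * reviews_per_paper
participants_needed = total_reviews_needed // reviews_per_participant
print(total_reviews_needed, participants_needed)  # 30000 reviews, 10000 reviewers
```

Even at three reviews per head, covering every paper three times already requires a reviewer pool as large as the submission count - which is why "everybody votes on everything" cannot work.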
⬐ sjg007
I disagree; plus, meta-moderation might help. Then again, we see voting rings on HN... but a conference has an entrance fee, so maybe that would limit it.
⬐ foota
In theory you could probably do something like have three runoff rounds, such that low-scoring papers are eliminated before people do their second review.
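The runoff idea in the last comment can be sketched as a small simulation: each round, every surviving paper gets a noisy review score and the bottom half is eliminated, so later review effort concentrates on the stronger papers. Paper qualities, noise levels, and the cut fraction are all invented for illustration:

```python
import random

random.seed(0)

def review(quality):
    # A noisy review: true quality plus Gaussian reviewer noise
    return quality + random.gauss(0, 1.5)

# 100 hypothetical papers with uniformly random "true quality" in [0, 10]
papers = {f"paper_{i}": random.uniform(0, 10) for i in range(100)}
survivors = dict(papers)

for round_no in range(3):
    scores = {p: review(q) for p, q in survivors.items()}
    cutoff = sorted(scores.values())[len(scores) // 2]  # drop the bottom half
    survivors = {p: papers[p] for p, s in scores.items() if s >= cutoff}
    print(f"round {round_no}: {len(survivors)} papers remain")
```

After three halving rounds, 100 papers are whittled down to 13 survivors; because early rounds only need to be accurate enough to reject clearly weak papers, the per-round review burden can stay low.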