Hacker News Comments on
You Can Always Get What You Want — But Not What You Need (Graduation 2016)
Berkeley School of Information · YouTube · 136 HN points · 2 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.
"You Can Always Get What You Want — But Not What You Need" > https://youtu.be/QCw_D7dr7Rw
If you spare half an hour to Google around, you can find out what we want collectively:
1) Find global spending on fusion research and note it down.
2) Pick some other things to compare it to: the global annual military budget, the annual profits of the largest companies and what they sell to earn them, how much we spend on 'entertainment', the popularity of the Kardashians (given that even I know that f#cking name...), or whatever.
You'll quickly realize that fusion is sitting in the corner, waiting for us to "want to have it" (a rough version of the comparison is sketched below).
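A back-of-the-envelope sketch of that comparison in Python. Every figure below is an assumed, order-of-magnitude estimate for around 2018, not a sourced number:

    # Rough annual spending comparison; each figure is an
    # order-of-magnitude assumption, not sourced data.
    SPENDING_USD_PER_YEAR = {
        "fusion research (global)": 2e9,          # a few billion at most
        "military (global)": 1.8e12,              # roughly SIPRI's 2018 scale
        "entertainment & media (global)": 2e12,
    }

    fusion = SPENDING_USD_PER_YEAR["fusion research (global)"]
    for name, usd in SPENDING_USD_PER_YEAR.items():
        print(f"{name}: ${usd:,.0f}/year (~{usd / fusion:.0f}x fusion)")

Even if the fusion figure is off by a factor of a few, the ratio to the comparison budgets stays in the hundreds.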
It makes me feel depressed that every fusion article in 2018 still allocates a paragraph to explaining how fusion is different from fission and how it is... basically not a bomb.
The topic feels at home on HN, but I do not know if you have ever brought up fusion in a conversation with family, or even among your young circle of friends. I certainly have run that experiment myself. People do not even know what fusion is, let alone lobby to fund it with tax money. Given that state of affairs, I'd say the progress on fusion is rather stellar, and I wholeheartedly congratulate anyone putting sweat and money into bringing things to this level.
⬐ pfdietz
Fusion isn't marginal because funding is low. Rather, funding for fusion is low because fusion is marginal.
The money would be there if the technology warranted it. But examined closely, fusion doesn't appear promising, so people close their checkbooks.
⬐ jimbokun
Yes! This is something I (and, I suspect, many other engineers) feel but have never been able to express nearly so simply and elegantly.
Before you sit down and start writing that code or creating that new algorithm, stop and think about what you really want to achieve.
⬐ truculation
Fascinating. Instead of banning money or fixing prices (did they try that in Russia?), we retain free markets, but now with a new layer of government intervention. Not intervention by social spending or regulation, but by buying and selling according to the dictates of an AI. And the utility function of this AI is set according to what, exactly? My guess is it would have to be set according to our values, not to specific goals.
⬐ UncleEntity
⬐ rmateus
> Not intervention by social spending or regulation, but by buying and selling according to the dictates of an AI.
Which would still run up against the Economic Calculation Problem, which has constrained all central-planning proposals and (probably) always will.
Once they solve that, then we can talk about an AI that can predict the outcomes of the billions upon billions of individual decisions that market actors make to fulfill their wants and needs.
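A toy sketch of the combinatorics behind that objection, with hypothetical numbers: even splitting a hundred units of a single good among fifty agents yields an astronomical number of possible allocations, long before you get to millions of goods and billions of agents.

    # Ways to split n identical units of one good among k agents
    # ("stars and bars"); all numbers here are hypothetical.
    from math import comb

    def allocations(n_units: int, k_agents: int) -> int:
        return comb(n_units + k_agents - 1, k_agents - 1)

    print(allocations(100, 50))  # ~6.7e39 allocations for one toy good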
⬐ zokier
> but now with a new layer of government intervention
As far as I can tell, the government is the least active actor in this area, so it's a curious choice to single it out here.
⬐ crdoconnor
The Soviet Union always had money and banking. It was just less meaningful than it is in America.
⬐ fossuser
This is the goal alignment problem that MIRI, FHI, and some safety groups at OpenAI are working on. It seems like a hard problem to get right (with bad consequences if an unaligned superintelligence happens first).
Multicriteria value decision analysis theory and methods can provide a way to model the need (utility). For instance, in reinforcement learning the payoff (goal) function is always taken as a given, without much thought about its construction. It can be modeled using the methods referred to previously.
⬐ xapata
⬐ akshayB
I thought that theory told us multicriteria optimization is intractable?
⬐ arbie
If that's the same as multivariate optimization, there are billion-dollar industries that rely on it every day.
⬐ xapata
⬐ rmateus
No, unless I've mistaken the phrase, they're not the same. I'm thinking of the problem where competing criteria can't be satisfactorily combined as inputs to a function with a scalar output. This comes up in ethical paradoxes, for example.
⬐ rmateus
Making value trade-offs between competing criteria ultimately boils down to hard ethical choices (not exactly paradoxes). Multicriteria optimization is distinct from multicriteria value decision analysis: the former does not model the value/utility of the criteria, while the latter does. If you do not model value/utility (what you need), then you cannot solve the underlying multicriteria problem.
Companies make things that they can sell in a marketplace; that is not necessarily what that marketplace needs. For example, social media companies just want to sell their social networks in any country with internet access, irrespective of whether that country has a drinking-water problem or a humanitarian crisis.
⬐ debt
Peter Norvig's book "Artificial Intelligence: A Modern Approach" is pretty cool. I don't know what they use now in intro AI courses, but I remember that book being very complete.
I still believe, though, that hard AI will only ever be possible if we make serious breakthroughs in the materials sciences.
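A minimal numerical sketch of the distinction rmateus draws above, with made-up scores and value functions: the same weighted sum can rank two options one way over raw criterion scores and the opposite way over utility-transformed scores, which is why leaving value/utility unmodeled can pick the wrong option.

    import math

    # Two options scored 0-10 on two competing criteria (made-up numbers).
    options = {"A": {"cheapness": 9.0, "quality": 4.0},
               "B": {"cheapness": 4.0, "quality": 8.0}}
    weights = {"cheapness": 0.5, "quality": 0.5}

    # Hypothetical per-criterion value functions: the part that plain
    # multicriteria optimization leaves out.
    utility = {"cheapness": math.sqrt,      # diminishing returns on cheapness
               "quality": lambda x: x}      # roughly linear value of quality

    def raw(o):
        return sum(weights[c] * o[c] for c in o)

    def valued(o):
        return sum(weights[c] * utility[c](o[c]) for c in o)

    for name, o in options.items():
        print(name, f"raw={raw(o):.2f}", f"valued={valued(o):.2f}")
    # raw scores prefer A (6.50 vs 6.00); utility-weighted scores
    # prefer B (5.00 vs 3.50)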
⬐ dikkechill
To differentiate between needs and (economic) wants, I found the "fundamental human needs" framework quite useful: https://en.wikipedia.org/wiki/Fundamental_human_needs (as developed by the Chilean economist Manfred Max-Neef). Basically: Subsistence, Protection, Affection, Understanding, Participation, Leisure, Creation, Identity, and Freedom.
⬐ jonbaer
The key point @ 3:15 ... https://youtu.be/QCw_D7dr7Rw?t=193
⬐ dang
Url changed from https://www.youtube.com/watch?v=Zsbb1Ze4T_M&list=PLrAXtmErZg..., which is a clip from this.
⬐ tw1010
Great video. I think the book The Elephant in the Brain is required reading in this context: http://elephantinthebrain.com/