HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Kenneth Stanley: Why Greatness Cannot Be Planned: The Myth of the Objective

TTI/Vanguard · Youtube · 98 HN points · 21 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention TTI/Vanguard's video "Kenneth Stanley: Why Greatness Cannot Be Planned: The Myth of the Objective".
Youtube Summary
Presented at TTI/Vanguard's Collaboration and the Workplace of the Future.
September 30, 2015 | Washington, DC

Why Greatness Cannot Be Planned: The Myth of the Objective
Kenneth O. Stanley, Associate Professor, University of Central Florida
In artificial intelligence and elsewhere, it has long been assumed that the best way to achieve an ambitious outcome is to set it as an explicit objective and then to measure progress on the road to its achievement. Upending this conventional wisdom, a series of unusual experiments in machine learning has shown that, for a broad class of outcomes, the very act of setting objectives can block their achievement. More fundamentally, the same so-called “objective paradox” applies not only in computer algorithms but across many human endeavors: Often, to achieve our highest aspirations, we must be willing to abandon them. As a corollary, collaboration can sometimes thwart innovation by tacitly forcing its participants into an objective-driven mindset. The moral is both sobering and liberating: We can potentially achieve more by following a non-objective yet still principled path, after throwing off the shackles of objectives, metrics, and mandated outcomes.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
This is similar to that AI researcher, Kenneth Stanley, and his talk/book about how planning and having an objective can sabotage reaching it. "Greatness cannot be planned" I think was the title.

https://www.youtube.com/watch?v=dXQPL9GooyI

Remember why it was easier to learn new things when you were younger; yes, there is a certain biology to it. (Side note: I'm pretty sure I heard Andrew Huberman talk about this, and he recommended choline as a supplement to bring your levels closer to those of someone younger. No idea how correct this is; I've made a note to read more into it myself.)

But I would say it is that sense of wonder and imagination that you lose when you are older. You've seen stuff; even interesting stuff becomes 'oh, just another interesting thing, much like this other thing I saw before'.

What you need to do is optimise for novelty, interest, excitement, and challenge rather than for outcome. As you get older, you likely have things you want, and these become goals, but they are based on what you already know, and your supposed path to them will also be based on your current knowledge. The problem is, that won't get you anywhere interesting; it might not even get you to that 'goal', because the world is far more complex than you think, and you begin to believe you know it all as you get set in your ways.

When you did interesting things, when you learned the most, it was when you didn't know any better, when you didn't know what you were doing. You need to get rid of the idea that you know; you need to accept that you don't. Get out of your comfort zone and do things, try things, optimise for that novelty rather than an outcome, and you might find something interesting, and that is where you will learn the most.

If you really want to learn, change your goal from achieving things and outcomes to seeking out novelty: things that interest or excite you and anything that is a little bit challenging for you. It will put you back in that space again, or at least as close to it as you can be.

I found this talk interesting on the subject: it covers lessons from AI and machine learning that show this to be a valid path. https://www.youtube.com/watch?v=dXQPL9GooyI&t=1s

Jan 16, 2022 · teabee89 on Letting Go of Random
Somewhat related to this is the excellent presentation "Why greatness cannot be planned: myth of the objective" by Kenneth Stanley which I highly recommend! https://www.youtube.com/watch?v=dXQPL9GooyI
I've also read this book, but unfortunately I found that whenever the authors talk about things beyond their immediate research - it gets bad.

The good parts are the descriptions of their research and discussion of the findings as far as it directly interacts with their research. Fortunately you don't have to read the whole book to get to the good parts. There is a 40 minute presentation/talk on YouTube that covers the good parts. [1] This talk is what originally got me to read the book.

The bad parts are where the authors take their AI algorithm quirks and jump to insane conclusions, going as far as to claim that setting goals for your own life is a bad idea. The fundamental misstep the authors take is that they conflate goals with the measurement of goals. They keep giving examples of naive ways one could measure progress towards goals, basically the functions they write for their AI program. These functions suck. Instead of just admitting that the functions they wrote suck, the authors conclude that one shouldn't have goals at all, because they couldn't come up with a good way to measure progress.

--

[1] https://www.youtube.com/watch?v=dXQPL9GooyI
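For what it's worth, the "deception" the talk demonstrates is easy to reproduce with a toy objective. This is a made-up sketch (the landscape and numbers are invented for illustration, not the authors' code): greedily climbing a naive measurement of progress gets stuck on a local peak and never reaches the global one.

```python
def fitness(x):
    """A deceptive objective: a local peak at x=2 hides the global peak at x=8."""
    return max(3 - abs(x - 2), 10 - 4 * abs(x - 8))

def hill_climb(x, step=0.5, iters=100):
    """Greedy search: always move to the neighbor with the best fitness."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:  # no neighbor improves the objective
            break
        x = best
    return x

print(hill_climb(0.0))  # stuck at the local peak, x=2.0
print(fitness(8.0))     # the global peak it never finds
```

Whether that toy failure justifies the book's life-advice conclusions is exactly the question being argued here.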

X=[0,∞]

"Why Greatness Cannot Be Planned: The Myth of the Objective" by AI researcher Kenneth Stanley https://www.youtube.com/watch?v=dXQPL9GooyI

The meta-principle is that in complex domains there is no straight line, well worn, "plannable" path to greatness.

Kenneth Stanley calls it "the Myth of the Objective" and has spent the last several years trying to formalize this idea (within AI as well) and gain more traction for it.

Book -- https://www.goodreads.com/book/show/25670869-why-greatness-c...

Talk -- https://youtu.be/dXQPL9GooyI

ppsreejith
Thank you! This was a very interesting watch. I've also started reading the book.

For some reason, this reminds me of the debate between Peter Thiel and David Graeber on the topic "Where did the future go?" where Graeber points out the lost potential of people as one major reason.

Relevant youtube video and section: https://youtu.be/eF0cz9OmCGw?t=2373

The video is interesting in its own right.

bvsrinivasan
In all probability, YouTube's algorithm will already suggest this to anyone who watches the talk, but there is a (very) long ML Street Talk interview on this idea here: https://youtu.be/lhYGXYeMq_E

Stanley addresses some strong and subtle criticisms here but I actually preferred the book the most. The book is a bit repetitive but has some very good ideas in the appendix.

paulz_
Just finished the talk. Plan on reading the book. This is one of the most interesting concepts I've come across in recent memory. Really appreciate you sharing.
ddoubleU
Woah, thanks for sharing this. I can see a connection with the way Feynman stopped trying to go for his objective, yet he still achieved it.
slx26
Very interesting talk, thanks for sharing, it really adds some weight to the intuitive idea showcased in Feynman's note.
humanistbot
Reminds me of Suchman's "Plans and Situated Actions" --- an anthropological study of plans and planning, which also has a lot of implications for AI. It resonates with that famous Eisenhower quote: "Plans are nothing, planning is everything."

https://www.goodreads.com/book/show/342380.Plans_and_Situate...

But we’re not done yet. Give it time. We can go in plenty of directions. Just because you don’t think the current direction is right, that doesn’t rule out other directions happening. And who’s to say this stuff won’t end up being somehow useful? There’s a great talk on how trying to make progress toward an objective in evolutionary algorithms is not a good way to get there.

https://www.youtube.com/watch?v=dXQPL9GooyI

Of course evolutionary algorithms are just one direction as well. But that doesn’t mean that nothing else is happening.

Hey! this is Joel, the maker of Artbreeder (originally called ganbreeder). The project was inspired by interactive evolutionary algorithms and novelty search (https://www.youtube.com/watch?v=dXQPL9GooyI&t=1s)

The goal was to make a tool that reflected the collaborative and explorative aspects of creativity. Also to make GAN's (and high-dimensional spaces more generally) accessible to everyone in a fun way.

Let me know if you have any questions :)

programmarchy
How did you make it so fast? Are the multiple combinations generated in parallel ahead of time once you breed images? Seems like the GPU power must be immense.
joelS
Nothing fancy, just running on AWS :) Specifically, https://aws.amazon.com/ec2/instance-types/g4/ - The uploading is actually the expensive part since it can take ~2 minutes per image.
stevofolife
How much are you paying to keep this up? Is it expensive to run GPU instances like this?
joelS
It's not cheap, but there are ways to make it manageable, e.g. an AWS spot g4 instance can be as low as 15 cents an hour.
vagab0nd
How do you handle spot interruptions? Do you keep multiple instances in different zones?
sillysaurusx
The 1024x1024 StyleGAN2 network can generate an image in around 200ms - 500ms. And Artbreeder may be powered by some beefy GPUs, so that might reduce the latency to something like 100ms. Throw in 100ms of server lag time, and it feels almost instantaneous. It's quite a technical achievement!
netsec_burn
Joel, heads up the images that aren't supposed to be visible allow directory browsing. e.g. https://s3.amazonaws.com/artbreederpublic-shortlived/ https://s3.amazonaws.com/artbreederpublic/ https://s3.amazonaws.com/ganbreederpublic/ s3cli can navigate and download the images. Cool project!
aabbcc1241
I had a similar idea inspired by JWildfire but didn't get enough momentum to make it. You made it! And you have so much content on it already. Appreciate your effort. Spent half a day having fun on your website :)
joelS
Thanks! Glad you liked it. JWildfire is super cool, I would love to add a fractal flame category to Artbreeder.
battery_cowboy
I recall you had 'a font one' in the works, but I can't find anything about it now, is that still coming? The demo is really neat in the header image!
joelS
I still have the model! I started working on the image -> vector conversion (which is more complicated than I expected) and then got distracted. I should finish that up, thanks for the reminder :)
battery_cowboy
It's such a cool idea, imagine a webfont that changes slightly each page load.

I could make a document that slowly became more illegible after each new visitor, just for fun!

archibaldJ
Great work mate! Any plans of openning up an API for a subscription fee or something? :))
joelS
Thanks! No plans at the moment but it’s possible!
sillysaurusx
Hi Joel,

Given that nVidia has a pretty clear license saying that their model and all derivates may only be used non-commercially (https://github.com/NVlabs/stylegan2/blob/cec605e0834de5404d5...) and Artbreeder is clearly commercial (https://artbreeder.com/pricing) but is using StyleGAN (https://artbreeder.com/i?k=755521d9fe07456286a06f06), do you feel that Artbreeder is intentionally violating nVidia's license?

For what it's worth, I fully support your project and I nearly launched something similar myself. But many people choose not to do projects such as yours due to licensing considerations, and I'm wondering in general how to reconcile the moral dilemma. On one hand, nVidia almost certainly doesn't care. On the other hand, they did say very clearly "thou shalt not use this model, or any derivative thereof, commercially."

(It's tempting to believe the license only covers the code, not the model. But the same license appears in their download drive: https://drive.google.com/drive/folders/1QHc-yF5C3DChRwSdZKcx...)

detaro
How does that image link tell you which specific model they use?
sillysaurusx
Answering that requires a bit of technical detail.

If you click "Edit Genes" (e.g. on the upper-right of https://artbreeder.com/i?k=755521d9fe07456286a06f06) you'll find a bunch of sliders. StyleGAN uses sliders like this to control various facial features. For any given StyleGAN model, you can find a slider for a given attribute as follows: create a classifier for that attribute (for example, "blond hair" probability), generate 50,000 or so outputs from the model, classify each image, then fit a line through all 50k latents. You now have a blond-hair slider for your specific model.
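As a rough sketch of that fitting step (everything here is synthetic: the dimensions, the "scores", and the hidden direction are made up; real code would score actual generated images with a trained classifier), a least-squares fit through scored latents recovers the attribute direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a StyleGAN-style 512-dim latent space.
latent_dim = 512
n_samples = 5000
latents = rng.standard_normal((n_samples, latent_dim))

# Hypothetical attribute scores (e.g. "blond hair" probability).
# We fake them with a hidden ground-truth direction plus noise.
true_direction = rng.standard_normal(latent_dim)
true_direction /= np.linalg.norm(true_direction)
scores = latents @ true_direction + 0.1 * rng.standard_normal(n_samples)

# "Fit a line through all the latents": least-squares regression of
# score against latent yields the slider direction.
slider, *_ = np.linalg.lstsq(latents, scores, rcond=None)
slider /= np.linalg.norm(slider)

# The recovered slider closely matches the hidden direction.
print(float(slider @ true_direction))  # close to 1.0
```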

Ok, but sliders aren't unique to nVidia's licensed model. How do we check?

Suppose Artbreeder used a model that was trained from scratch. Since it has a random initialization, its latent space wouldn't be compatible at all with nVidia's FFHQ model. Sliders on Artbreeder wouldn't work with nVidia's model, and vice-versa. A "blond hair" slider for Artbreeder would result in nonsense when applied to nVidia's model.

Artbreeder has an API, through which you can get the latent vector of any given image. You can use this to extract Artbreeder's sliders as follows: Extract the latent vector for a given Artbreeder image; move a slider to the maximum setting; extract the latent vector for the result; subtract the two latent vectors. Presto, you now have the slider.

If you add the resulting slider values to nVidia's FFHQ model, you will find that they control the same facial features on both Artbreeder and nVidia's FFHQ model. They don't match exactly – Artbreeder appears to be fine-tuned from nVidia's base model – but it's very close. So it's clearly derivative, and thus falls under nVidia's "no commercial usage for derivative work" restriction.
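A toy illustration of why this check works (using random linear maps as stand-ins for the real generators, which is a big simplification): a fine-tuned copy responds to a slider almost identically to its parent, while a model trained from scratch responds in an unrelated way.

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim = 512

# Hypothetical stand-ins for three models' latent-to-feature maps.
parent_map = rng.standard_normal((latent_dim, latent_dim))
finetuned_map = parent_map + 0.05 * rng.standard_normal((latent_dim, latent_dim))
scratch_map = rng.standard_normal((latent_dim, latent_dim))

base = rng.standard_normal(latent_dim)
slider = rng.standard_normal(latent_dim)  # latent_after - latent_before

def feature_change(model_map, base, slider):
    """How a model's output features move when the slider is applied."""
    return model_map @ (base + slider) - model_map @ base

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

c_fine = cosine(feature_change(parent_map, base, slider),
                feature_change(finetuned_map, base, slider))
c_scratch = cosine(feature_change(parent_map, base, slider),
                   feature_change(scratch_map, base, slider))

print(c_fine)     # near 1.0: fine-tuning preserves the latent geometry
print(c_scratch)  # near 0.0: a from-scratch model shares no geometry
```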

momokoko
Can we safely take your comment as legal advice in making business decisions about the use of the nVidia model? If so, you may want to consider the liability you are opening yourself up to. Since HN is an international site, it may be helpful to specify which geographic region you practice law out of as well.

Thanks for the free legal advice! It definitely will save a few people the cost of paying for a legal interpretation of this situation. It is very generous of you.

citizenkeen
In a previous career, I was an IP lawyer in the United States, so I've honed a decent sense of what can and can't be said without running afoul of bar guidelines.

> Can we safely take your comment as legal advice in making business decisions about the use of the nVidia model?

What advice were they giving? It seemed like they were mostly asking questions. I don't think any bar in the United States would take those comments as advice, legal or otherwise.

Maybe I missed something. In the interests of quality comments and constructive feedback, maybe you could walk us through your line of reasoning?

gumby
FWIW I read sillysaurusx's comment as someone being friendly and pointing out a potential bug, not as snark or arrogance.
aspenmayer
He’s not nvidia’s lawyer either so what does it matter either way? Please don’t do this here.
sillysaurusx
I feel bad for asking the question, so maybe I deserved that.

But I asked it because I've spent many nights wondering the same thing about my work in AI, in the United States. Should I pursue something knowing it's legally questionable? How about knowing it's illegal? Big companies seem to walk the tightrope. Joel isn't a big company, yet he's guiding Artbreeder to commercial success. Should I be more ambitious?

These questions affect us all, not just Joel. So I ask it with the sole intention that it might help us navigate the strange waters we often find ourselves in, both as AI researchers and as individuals. I suspect in the coming years that we'll see many more lone-wolf type projects (https://thisfursonadoesnotexist.com/ has recently been making headlines) and legal issues are becoming more and more of an elephant in our shared room.

There are broader ethical questions at play here, too. nVidia trained their model using people's faces, without paying any of them, then claims to hold the rights to limit how people are allowed to make money on the result. Legal, perhaps, but ethical? And if it's unethical, what does that mean we should do, if anything?

Joel's project is a concrete example to point to when someone says "This project has license X, so get real and stop dreaming about starting a business." (The sentiment is shockingly common with almost any AI project, since most AI work comes from big companies with licenses like nVidia's.) Personally, I find it inspiring that someone is doing it anyway.

battery_cowboy
It's sad you even had to ask; don't be sad yourself. Your contribution to the conversation taught me several important things I didn't know before about this area, so thanks a bunch!
aspenmayer
Don’t feel bad you asked the question. I can tell from content and context that you were asking in good faith. The reply you got, well, let’s just say it seemed less so.
joelS
Hi there! Good question, it's one I get a lot and don't mind.

First off, I am not a lawyer, but I have consulted them. A lot of this is a gray area and not well defined. As you mention, there is the code, the given models, models fine-tuned on those models, models trained from scratch, stylegan reimplementations, etc. Even with the license included, it doesn't necessarily follow that the output of a model trained from scratch counts as derivative work. And technically I just sell things like upscaling, keeping images private, Google sync, etc., and provide unlimited image creation for free (not sure if that would hold up in court, but I think it's relevant). So, no, I do not think I am violating the license. But some lawyers may feel differently.

Also, I try hard to cite everything involved. Google was thrilled by my use of the biggan model and how it showed off the model. With the amount of money I have spent on Nvidia cards, I hope they are happy too :)

And then image ownership is a whole other question. Everything in Artbreeder is CC0 to keep things simple: https://artbreeder.com/terms.pdf

Best, Joel

sillysaurusx
Are you saying that Artbreeder is using a model trained from scratch?

If so, was that a recent change? When I checked several months ago, your latent space sliders were compatible with nVidia's FFHQ model: https://news.ycombinator.com/item?id=23149739

The only explanation seemed to be that Artbreeder was fine-tuned from nVidia's released FFHQ model.

To rephrase the question: It seems like you were using nVidia's model (or a fine-tuned version) at some point in the past. At the time, you were also selling subscriptions via your pricing page. Do you just sort of ignore the legal implications and hope that it never becomes an issue?

(That's sometimes an effective strategy.)

spacefearing
Quit your concern trolling and create your own service if you're envious. Jeeez.
mandelbrotwurst
^ IANAL, but I could imagine it being a slightly less effective strategy if he replies here in the affirmative.
sillysaurusx
Here's a notebook showing that at one time, Artbreeder's latent vectors were close to StyleGAN's: https://colab.research.google.com/drive/1Yr-YUvmtsDVUWD_Pau9...

If you open the notebook in playground mode, then runtime -> run all, you'll get someone who looks similar to Elon Musk: https://i.imgur.com/tjNouoM.png

Compare this to the Elon Musk from https://artbreeder.com/i?k=ecbc24f8a45295415dc80fac

Screenshot: https://i.imgur.com/wzdybdw.png

The latent vector is Artbreeder's. If it was a model trained from scratch, it would produce gibberish. But since it produces something very similar to Elon Musk, we can infer that the Artbreeder face network was a fine-tuned version of StyleGAN as of 7 months ago.

How did I generate something that is substantially close to Artbreeder's output, using their own latent vector, if I only have access to nVidia's StyleGAN model? :)

Anyway. Artbreeder is a wonderful project and I hope it succeeds. I was just curious about the legal and ethical implications, because I've had to wrestle with similar questions.

j4nt4b
Awesome work! I love this concept, but I'm a writer who's interested in collaborative generative writing. Do you know of any efforts to create something similar for writing, such as poetry or micro-fiction? If not, I'm curious if you could point me in the right direction for putting something like that together.
joelS
Hey, thanks! I don't know of anything collaborative for writing. https://aidungeon.io/ is maybe collaborative to play. And then there are a lot of writers exploring fine-tuning text models like GPT-2 for personal use. https://twitter.com/MagicRealismBot is one of many examples. https://towardsdatascience.com/how-to-fine-tune-gpt-2-so-you...

Maybe you should make it!

j4nt4b
Thank you!
kleer001
If you check out the archives of NaNoGenMo over at github you'll likely find what you're looking for:

https://nanogenmo.github.io/

Check out the yearly resources.

Jack000
A direct transfer of this concept to text isn't possible because the SotA for text generation is based on transformers, which do not work via latent space sampling. The directly analogous version of StyleGAN in the NLP world would be VQ-VAE, but currently it doesn't match transformers in generation quality.

There are other ways to do generative writing though, mostly with prompts.

MattRix
Something like https://talktotransformer.com/ ?
gwern
If you'd like to write poetry or fanfiction, I provide GPT-2-1.5b models for both on https://www.gwern.net/GPT-2 - I don't provide a slick interface for writing (although Astralite has an upcoming web interface which is very snazzy), but they could be plugged into any such interface.
Apr 24, 2019 · laughingman2 on Thinking on Your Feet
On a similar vein, "Kenneth Stanley: Why Greatness Cannot Be Planned: The Myth of the Objective".

https://www.youtube.com/watch?v=dXQPL9GooyI&app=desktop

Kenneth leads AI research at Uber. He has worked on evolutionary strategies for machine learning, artificial life, etc.

I don't think this algorithm is very common. In particular, the colour palette can usually be chosen last, and a lot of generative art (at least all that shown in the article) will not need multiple resolution passes. I think a more common discovery/refinement process is to go from more random to less random, or random in particular kinds of ways. For example, you can specify the values or distributions (or at least ranges of values) for particular parameters based on things you think will look good or which should end up similar in ways you like to things you've found already.

Generative art is a search in an extremely high dimensional space. The more you go into it aiming for something in particular, the more nailed down it is and the closer to traditional art (and the harder to pull off well IMHO, think how much difference slightly changing the position of one of the elements in these images would make, versus slightly changing the position of someone's eye). If you go looking for "new and interesting" or "looks good" without really aiming for anything in particular, it can be a lot easier to find good things.

I've seen a couple of really interesting talks recently about the surprising success of objective-less search, by Ken Stanley:

https://www.youtube.com/watch?v=dXQPL9GooyI (40 min.)

https://www.youtube.com/watch?v=tZBViI8ZaU0 (90 min.)

He started off looking at human-directed machine-evolved generative art, and ended up with a far more general discovery.

Aug 28, 2018 · js8 on Why Love Generative Art?
I think people who don't consider it being art do not understand that part of the art is the interpretation that happens in the spectator's mind.

Dan Dennett has a beautiful example in his talk on consciousness (https://www.ted.com/talks/dan_dennett_on_our_consciousness/t...), where he talks about the Bellotto's painting.

"But, more importantly, I cringe at the idea that randomness leads to discovery or spontaneity or human approachableness."

I am not sure I completely agree with that. See this presentation: https://www.youtube.com/watch?v=dXQPL9GooyI

dahart
Thanks for the video, I’ll watch it later this morning.

It’s totally okay to disagree with me, but FWIW, there’s a big difference in my mind between arbitrary and random, or between accident and random, or between discovery and random. A uniform random variable has an expected value, a guaranteed average, and known bounds, so to frame it according to that video, using a random number generator is designed and planned, and it won’t lead to anything happening that you didn’t expect. You can’t predict each instance, but you can predict the behavior. So I’m saying I already buy the title of that video (“Why greatness can’t be planned”) but I don’t think rand() is a vehicle to get you there.
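To make that point concrete (a trivial sketch): rand() is unpredictable per instance, but its aggregate behavior, the expected value, average, and bounds, is entirely predictable.

```python
import random

rng = random.Random(0)
n = 100_000
samples = [rng.random() for _ in range(n)]

# You can't predict any single draw, but the behavior is fully known:
# the mean converges to 0.5 and every draw stays within [0, 1).
mean = sum(samples) / n
print(mean)                 # very close to 0.5
print(min(samples) >= 0.0)  # True
print(max(samples) < 1.0)   # True
```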

Kenneth Stanley: Why Greatness cannot be planned.[1]

https://www.youtube.com/watch?v=dXQPL9GooyI&index=49&t=2s&li...

The significant differences between people are primarily cultural.

This may be true, but I suspect that the reason for the "diversity effect", if any, is simply that it creates diversity of ideas. Whether that arises from cultural differences, biological differences, or some other source is likely immaterial.

There is some evidence that novelty-searching is more productive overall than goal-searching, at least in some contexts (video of a talk on the subject by its principal researcher, Ken Stanley, PhD: https://youtu.be/dXQPL9GooyI). Perhaps diversity contributes simply by increasing novelty within the organization---it's not that people are doing things better, but that they are doing them differently.
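For reference, the core of novelty search is small enough to sketch in a few lines (this is a toy 1-D version for illustration, not Stanley's implementation): candidates are scored by distance to previously seen behaviors rather than by an objective, so the search keeps spreading into new territory.

```python
import random

def novelty(behavior, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(steps=200, seed=0):
    rng = random.Random(seed)
    archive = []
    x = 0.0  # current behavior (1-D for simplicity)
    for _ in range(steps):
        # Propose mutations; keep the most NOVEL one, not the "best" one.
        candidates = [x + rng.gauss(0, 1) for _ in range(10)]
        x = max(candidates, key=lambda c: novelty(c, archive))
        archive.append(x)
    return archive

archive = novelty_search()
# The search spreads across behavior space rather than converging on a goal.
print(min(archive), max(archive))
```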

I don't think we necessarily require intent to consider something as art.

If a pianist playfully improvises, mindlessly, without intent, does that mean she is not creating art? Sure, you can say that her intent is to maximize her own pleasure... but then, how is that different from evolution?

It reminds me, is intent the same thing as objective? https://www.youtube.com/watch?v=dXQPL9GooyI

AstralStorm
The pianist still improvises within the grammar and syntax of the art form, say jazz or pop (or devises an entirely new vocabulary and grammar of their own). This is different from just style, as these kinds of improvisation have a set of rules and are very different from free-form play.
There's an interesting book on the topic discussed in these emails, entitled "Why Greatness Cannot Be Planned":

http://amzn.to/2CtIrRR

There's a YouTube talk by the author on the subject here:

https://youtu.be/dXQPL9GooyI

The rough idea is this:

- creativity often arises by stumbling around in a problem space, or in operating "randomly" under an artificially-imposed constraint

- modern life is obsessed with metrics and goal-setting, and this has extended into creative pursuits including science, research, and business

- sometimes, short-term focus on the goal defeats the goal-given aims (see e.g. shareholder value focus)

- the authors point out that when they were researching artificial intelligence, they discovered that systems that focused too much on an explicitly-coded "objective" would end up producing lackluster results, but systems that did more "playful" exploration within a problem space produced more creative results

- using this backdrop, the authors suggest perhaps innovation is not driven by narrowly focused heroic effort and is instead driven by serendipitous discovery and playful creativity

I found the ideas compelling, as I do find Kay's description of the "art" behind research.

mathattack
This is the difference between Research (which Kay excelled at) and Development and Commercialization (which he didn't). Both are important. The latter requires focus, discipline, and deadlines.
pbowyer
Have you any references to the Development and Commercialization he didn't excel at? I know of Kay by name, but not his life story.
pixelrevision
See Apple. They took a lot of the inventions there and brought them to market.
username223
He was a Xerox PARC guy back in the Smalltalk days, when they developed incredibly expensive systems that never really sold. In the 2000s, he was still griping about how everything was worse than PARC in the 1970s. I haven't paid much attention to him since then.
noblethrasher
Well, in a few of his presentations, he talks about the difference between invention, which is what PARC did, and innovation, which is what companies like Apple do.
phodo
There is a related book, Strategic Intuition [1], by William Duggan, which speaks to the nature of creativity. Highly recommended read if you are interested in this topic.

[1] https://www.amazon.com/Strategic-Intuition-Creative-Achievem...

pvg
"Erasmus Darwin had a theory that once in a while one should perform a damn-fool experiment. It almost always fails but when it does come off, it is terrific. Darwin played the trombone to his tulips. The result of this particular experiment was negative"
arca_vorago
- the authors point out that when they were researching artificial intelligence, they discovered that systems that focused too much on an explicitly-coded "objective" would end up producing lackluster results, but systems that did more "playful" exploration within a problem space produced more creative results

A comment of mine from a year ago: "I think the way forward for AI will be in simulating human consciousness processes, including whatever limitations computationally that come with that. In this sense, I feel that the kind of AI that will break barriers first is going to be in a game, and not on a factory floor."

bewe42
I'm coming here from the hackerbooks newsletter where I saw this book mentioned. Just wanted to chime in how compelling I find the ideas.

Though the book is slightly repetitive, what stuck with me is the idea of a "stepping stone approach". In order to discover something new or interesting, all we need to do is find the next stepping stone to jump to. The rest is not in our hands. I tell this to myself whenever I think I have to have a "grand vision", which I never have, so I don't do anything. I'm happy I stumbled upon it.

j_s
Off-topic: always interesting to see how referral links fare here.

I personally am not opposed if disclosed, since AFAIK (documented as of 2012 https://news.ycombinator.com/item?id=4156414) this gives access to other items purchased in the same time frame (whether the specific item is purchased or not) and analytics for links with no orders.

tacon
That book was written by computer scientists, and they demonstrate how the optimal search algorithm is often to pick the most unusual thing to try that you haven't tried yet. I'd like to see more discussion about this as an approach to personal development. Many people advise students and new graduates to sample the search space broadly, without detailed goals, just to see what's out there. It almost becomes the "do something every day that scares you" maxim. There is an old saying about "Everything you want in life is waiting just outside your comfort zone."
maroonblazer
I've not watched the video nor read the book but I take it slightly differently.

Reinertsen, in "Managing the Design Factory", describes the need for companies to generate high-value information as efficiently as possible. Using Shannon's theory of information, he argues that low-information events are those where the outcome, success or failure, is nearly certain. From this it follows that you maximize the information output when the probability of failure is 50%.

How many of us take risks - professionally or personally - where the probability of failure is 50%?
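That 50% figure falls out of binary Shannon entropy, which is easy to verify with a quick sketch:

```python
import math

def binary_entropy(p):
    """Shannon information (in bits) of an event with success probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Information yield peaks when success and failure are equally likely;
# near-certain outcomes carry almost no information.
for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"p={p:.2f}  H={binary_entropy(p):.3f} bits")
# p=0.50 gives the maximum: exactly 1.000 bits.
```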

There may even be theoretical support for this idea, in the form of Ken Stanley's (now with Uber) work with "novelty search."

https://youtu.be/dXQPL9GooyI

A similar conclusion was made using AI. Here is the video: https://youtu.be/dXQPL9GooyI

The takeaways:

- The path to success is through NOT trying to succeed
- To achieve our highest goals we must be willing to abandon them
- It is in your interest that others DO NOT follow the path you think is right

manmal
Thank you for posting this; it has the potential to change one's outlook on life (planning life vs going with the flow).
Aug 10, 2016 · 1 points, 1 comments · submitted by fenollp
fenollp
From same author (2002) "Evolving Neural Networks through Augmenting Topologies" []

[]: http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf

There is a very good reason for that which is explained very eloquently in this speech. Really worth a look.

https://youtu.be/dXQPL9GooyI

"You cannot find things by looking for them"

Any person who has ever tried to solve problems (not just rationalize next steps based on previous findings) will know that there is really something unfruitful and irreproducible about logical reasoning.

One of the primary reasons is that any logical reasoning you make is based on its premises. As an example, you can rationalize your way through Game of Thrones as long as your premises are set to accept a number of sub-conclusions. Or be a theologian rationalizing your way through the Bible.

But this powerful process is also a cage which keeps you trapped inside the same logical consequences of your premises.

And so to invent something new you have to be "irrational" to break out of the boundaries of the premise.

So the claim is not to abandon logic, but to abandon the belief that problems are solved and inventions made by rational, logical process alone. It turns out that most things are actually more trial and error, which history also bears out.

YeGoblynQueenne
>> One of the primary reasons is that any logical reasoning you make is based on its premises.

Yes, that's called the closed-world assumption. Something is true only if you can prove it from the things you know.

You know what else we call the "things you know"? Data.

You know what else we do with data? Train machine learning algorithms.

You know where machine learning algorithms would be if they abandoned their (implicit) closed-world assumption?

skybrian
If a machine learning algorithm takes closed-world assumptions too far, it overfits the input data. We test whether it's working by testing it on data it hasn't seen.

You can't tell whether overfitting is happening by just looking at the input data. You'd at least need a holdout set for testing.
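This point can be shown with a deliberately extreme toy model (my own sketch, not from the thread): a memorizing 1-nearest-neighbor "learner" trained on pure noise gets zero training error, and only the holdout set reveals that it learned nothing.

```python
import random

random.seed(7)

# Labels are pure noise, so no model can genuinely predict held-out points.
xs = list(range(100))
labels = {x: random.choice([0.0, 1.0]) for x in xs}
train = [x for x in xs if x % 2 == 0]  # even x: training set
test = [x for x in xs if x % 2 == 1]   # odd x: holdout set

def memorizer_predict(x):
    """1-nearest-neighbor lookup on the training set: pure memorization."""
    nearest = min(train, key=lambda t: abs(t - x))
    return labels[nearest]

def mean_squared_error(points):
    return sum((memorizer_predict(x) - labels[x]) ** 2 for x in points) / len(points)

train_error = mean_squared_error(train)  # 0.0: every training label was memorized
test_error = mean_squared_error(test)    # nonzero: the memorization doesn't generalize
```

Looking at training error alone, the memorizer appears perfect; the gap between train_error and test_error is what exposes the overfit.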

ThomPete
Where would machines be if they increased their available input sources? We know that about as well as we know where a kid will end up when we put them into the world and they start to interact with it, being the pattern-recognizing feedback loops that we are.

The point is that premises allow us to make logical arguments, but the premises themselves are not necessarily logical, universally true, objective, or subject to any other notion of right or wrong.

Look through history. Most of the really big innovations weren't the product of some objective but rather of accidents and curious phenomena we never expected.

Rational reasoning will take you some of the way as long as you have the premise, but the question is how to get to the premise.

Mar 11, 2016 · vannevar on Hard Tech is Back
For founders, the trick is finding that sweet spot where a problem sounds hard on paper (such as self-driving cars or VR headsets, wooo!), but actually is feasible using current technology...

This is how true innovation happens. Like evolution, innovation is not a steady inching forward (though that kind of progress certainly operates continuously in the background), but a series of sudden leaps punctuated by relative stability. Those sudden leaps happen when the right brain is in the right place at the right time to make the right analogy or synthesis with existing ideas. You can't fund people to produce it on command; you can only hope to be among the first to recognize that it has happened and to invest in it at that early point, before the innovation is disseminated.

For an approach that does seek to produce innovation on command, see Ken Stanley's work on novelty search: https://www.youtube.com/watch?v=dXQPL9GooyI

That reminds me of this amazing talk called Why Greatness Cannot be Planned:

https://www.youtube.com/watch?v=dXQPL9GooyI

TLDV: Innovation is not driven by narrowly focused heroic effort. We'd be wiser and the outcomes would be better if instead we whole-heartedly embraced serendipitous discovery and playful creativity. We can potentially achieve more by following a non-objective yet still principled path, after throwing off the shackles of objectives, metrics, and mandated outcomes.

This also matches my experience 100%. All my best discoveries are accidental.

steveeq1
Dude, this is an awesome talk!

Kinda reminds me a lot about Edward DeBono: https://www.youtube.com/watch?v=jFFZ0XSfCRw

He was, in turn, recommended by Alan Kay for learning and creativity: http://www.squeakland.org/resources/books/readingList.jsp

Now going to order "Why Greatness Cannot Be Planned".

dang
Discussed a few days ago at https://news.ycombinator.com/item?id=11078635.
agumonkey
It's a profound talk, but incomplete in my view. It's mostly that we're culturally far too used to the linear and predictable (culturally only, because we know very well about randomness; laughter and surprise are wired into our brains). Also, we cannot predict the path, nor the number of possible paths, that will approach a given structure.
mentos
Great video! Everyone on HN should watch this.
None
None
visakanv
+1. As a writer, literally all of my best work is completely unplanned. In hindsight there was preparation, background, context, etc., but still.
Outdoorsman
Congrats on being a Quora Top Writer!

I agree...the best ideas "come in on little cat feet"...

visakanv
Thanks! And to riff off of the cat feet analogy, you have to be comfortable enough to let the cat come to you. You can't scare it away by getting anxious. That sort of calm comes with lots of practice.
meowface
I've heard this sentiment expressed for the creation of music and art as well.
golergka
Yep.

You're just playing around with stuff: you create a lot of drafts and play with them if you feel that they're fun. And then, suddenly, one of the drafts doesn't get dull; surprisingly, it gets more and more interesting the more things you add to it.

And then you realize that this one is actually done.

cableshaft
Maybe, although in my experience a piece of work is never really done; you just eventually realize you have to declare a stopping point and move on to the next project.

A piece of software's features can always be improved, a book can always be revised further, a song can always be tweaked or rearranged, and you can always knock the sand castle down and try to make the parapets straighter this time.

Most of my projects have a wishlist of features, and 80-90% of them eventually get cut when I decide I am finally, finally going to release the damn thing.

telesilla
I'm a composer as well as a software developer. There is no way in hell I could plan an interesting piece of music. You really just have to wait for it to come. However, it does mean you have to be ready to listen: i.e., put yourself in environments and states of mind that are most receptive, and be extremely well-honed in your craft on a technical level so you can execute well.
hypertexthero
This reminds me of John Cleese’s [wonderful lecture on creativity](https://www.youtube.com/watch?v=Qby0ed4aVpo). A rough [transcript](http://genius.com/John-cleese-lecture-on-creativity-annotate...).
derefr
I think this is a sentiment most people share, because most creative people only output—at most—a few masterworks in their lifetimes.

I bet the people who are constantly pumping out works that, on their own, would be considered achievements—the Picassos and Frank Zappas and Stephen Kings of the world—have a pretty different view on how much of creating something good is inspiration vs. a schlepp.

Swizec
Any famous writer I've ever read about has said that the amateur waits for inspiration, whereas the professional just sits down and gets to work.

12 months from now you won't be able to tell what you wrote while inspired and what while schlepping.

soft_dev_person
Yes, this is so important to communicate. I think "waiting for inspiration" is a bad approach and bad advice for anyone who wants to do something professionally. Unless you are inspired 20-40 hours per week, waiting for inspiration is a luxury only hobbyists have.

Also, how will you ever improve in something if you only do it while inspired? And how will your inspired work be any good if you have no practice and experience to build upon?

Swizec
The main thing is that getting 4 masterpieces is extremely likely if you produce 10,000 works, and next to impossible if you produce only 4.
allengeorge
Also, you (hopefully!) learned something while failing at those 9,996 attempts that you fed into the 4.
cJ0th
> The main thing is that getting 4 masterpieces is extremely likely if you produce 10,000 works

Yes and no. Once you succeed in creating a masterpiece it can become incredibly hard to ever create something worthwhile again, because success (especially when rewarded by the markets) can screw up your mind for good. You lose your sense of wonder, you feel the pressure to succeed again, and so on. It could very well be that you just create more of the same or get worse because of this. I think bands that are considered one-hit wonders might provide a good example.

It's a bit like an optimization algorithm that found a local optimum but can't escape its current neighborhood.
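That local-optimum picture can be made concrete with a toy greedy hill climber (my own sketch; the two-bump function is made up for illustration): starting near the small peak, every small step toward the taller one first looks worse, so greedy ascent halts early.

```python
import math

def f(x):
    """A landscape with a small peak near x = 2 and a taller one near x = 8."""
    return 3 * math.exp(-(x - 2) ** 2) + 5 * math.exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1):
    """Greedy ascent: move to a neighbor only while it scores strictly higher."""
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x
        x = best

stuck = hill_climb(0.0)  # climbs the nearby small peak and halts there
found = hill_climb(6.0)  # a different starting point reaches the taller peak
```

Escaping the local optimum required starting somewhere else entirely, not climbing harder, which is the one-hit-wonder trap in miniature.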

visakanv
> 12 months from now you won't be able to tell what you wrote while inspired and what while schlepping.

I've been writing almost every day for probably about a decade now. The reality of it is a little more nuanced than that.

It's more like... 20% of the time you were inspired, you'll realize that what you produced was still crap.

Similarly, about 20% of the time you were schlepping, you'll realize it was actually pretty good.

And about maybe 20-30% of the time, you'll find yourself getting 'inspired' midway through a schlep. And as you do it more and more often, you start subconsciously preparing things in anticipation of the work.

Swizec
So, 20% chance of crap while inspired, 36% to 44% chance of good while schlepping?

I'll take those chances. Especially if it increases my chance of getting started at all.

Remember, the only option with 0% chance of great is not doing anything.

visakanv
Yup, exactly!
chris11
So what is your writing process like? I'm personally really interested in how well-designed things are created. It seems like there is a paradox: you can't force great things to happen with objectives and planning, but you can objectively tell that something is great. So are you personally concerned with whether a specific piece of writing is objectively great, or are you more interested in your ability to write well in general?
visakanv
>So what is your writing process like?

Thanks for asking! I have several different but interconnected processes:

1. I have a regular writing practice where I just open up something (Evernote, Byword, notepad, whatever) and start writing about whatever comes to mind. This is primarily just to develop the habit of writing. I mainly do this with my 1,000,000 word writing project [1].

2. I do content marketing for work, which is much more systematic and planned out: I work backwards from common queries and search terms. The most successful post I've ever written, though, was something that I had written out of curiosity [2].

3. Sometimes 'inspiration' hits: somebody asks the right question, or something happens and I feel a need to respond, and everything just comes right out of my subconscious, almost as if it were already written. But when I examine this, it's clear that I was already ruminating about it for a long time before.

> It seems like there is a paradox, you can't force great things to happen with objectives and planning.

Not exactly a paradox. The objectives and planning aren't about making great things happen; they're about creating a stable context in which great things happen. It's about putting in the hours of practice so that you can improvise freely when the moment calls for it.

> So are you personally concerned about whether a specific piece of writing is objectively great, or are you more interested in your ability to write well in general?

I think I cycle between the two, back and forth, depending on what I'm doing. If I'm working on something with a sense of purpose, I want it to be great. Otherwise I'm happy just working on my craft on a day to day basis.

_____

[1] http://www.visakanv.com/1000/

[2] http://www.referralcandy.com/blog/47/

Feb 11, 2016 · 96 points, 18 comments · submitted by henning
chillacy
Steve Jobs said something like: you can only connect the dots looking backwards. I've found that to be true in many ways. If you look at any success story, it usually involves many degrees of luck, chance, crazy coincidence, etc. From business stories to how you met your closest friends, or even your significant other.

Of course, this is survivorship bias, and we're constantly missing great opportunities every day.. but it's still interesting how things turn out.

agumonkey
But there's a meta-level theory there. After spending years failing to learn music, I finally hit a few sweet spots. I couldn't have been more wrong before. But knowing how failing can lead to knowing, how being wrong helps you map the possible space and home in on the right part of it, leads you to a very different approach to discovery.
kevinwang
Some of the extrapolations from the picture platforms to other domains are a bit hand-wavy in the talk, but his response to the Edison question [0] is pretty enlightening.

Additionally - just pondering here - this heuristic of novelty seems present in human nature:

- the drive that scientists, mathematicians, researchers, and technologists have to discover new things
- the drive that hackers and artists have to make new and different things

Does anyone agree that these aspects of humans are in tune with the subject of the talk, or are they more unrelated than I think?

[0]: https://www.youtube.com/watch?v=dXQPL9GooyI

coderKen
Found this pretty enlightening; I think this applies to careers as well. When I look back at the steps I took towards getting to where I am now, I don't see any connection: they were seemingly random steps.
thewarrior
Didn't Steve Jobs say the same thing? You can't connect the dots going forward; they only make sense when looking back.
lkrubner
This was much better than I expected from the title.
agumonkey
I dismissed it and only watched it because it was linked on this HN thread https://news.ycombinator.com/item?id=11123374
jakeogh
Agreed. The book has gotta be worth $20: http://www.springer.com/us/book/9783319155234
sah2ed
The kindle version on Amazon [0] is slightly cheaper at $16.19.

[0] http://www.amazon.com/Why-Greatness-Cannot-Planned-Objective...

WalterBright
There are counterexamples, for instance, the Wright Bros. invented the airplane through a directed research and development process.
greggman
Interesting talk. I really couldn't tell how much of it really meant something and how much of it was fluff like "power poses".

In any case he's been giving similar talks for a while.

https://www.youtube.com/watch?v=tZBViI8ZaU0

turbohz
But there are still too many javascript libraries out there!
thewarrior
That's a very interesting point. When I look at React and React Native, I do feel that they could be stepping stones to something really interesting.

The Decco app that was posted on HN only reinforced this idea. I feel that one day we'll be flinging fully formed React components as fast as we fling for loops while coding today. Some form of smart component search and automatic insertion could speed up programming by a huge amount.

Abstracting out the DOM rendering means we could have universal components.

But these aren't here yet. You don't have to use the latest framework in your project if you don't need it. But that shouldn't stop the experimentation and the evolution.

So go forth and write ye Single page app frameworks and ye Coffee script to brainfuck transpilers.

WalterBright
This fits right in with the Connections thread:

https://news.ycombinator.com/item?id=11075430

which is all about serendipitous discoveries.

argonaut
Speaking from an AI/ML perspective, I'd like to point out one flaw, which is that exploration (of unexplored states) is certainly something that can be incorporated into your objective function, or accounted for in other ways, depending on the specific AI/ML algorithm and domain. This is a very common thing to do in reinforcement learning, for example.
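A minimal sketch of this point (my own toy example; the three arm means, noise level, and bonus weight are made up): a UCB-style count-based exploration bonus added directly to a bandit's selection score, so "try what you haven't tried" becomes part of the objective itself.

```python
import math
import random

def run_bandit(steps=300, bonus_weight=1.0, seed=0):
    """Pick arms by estimated value plus a bonus that shrinks with visit count."""
    rng = random.Random(seed)
    arm_means = [0.2, 0.5, 0.8]        # hidden reward means; arm 2 is best
    counts = [0] * 3
    totals = [0.0] * 3
    for t in range(1, steps + 1):
        def score(a):
            if counts[a] == 0:
                return float("inf")    # never-tried arms are maximally attractive
            exploit = totals[a] / counts[a]
            explore = bonus_weight * math.sqrt(math.log(t) / counts[a])
            return exploit + explore   # exploration folded into the objective
        arm = max(range(3), key=score)
        counts[arm] += 1
        totals[arm] += arm_means[arm] + rng.gauss(0, 0.1)
    return counts

counts = run_bandit()
```

Every arm gets sampled at least once, yet the best arm still ends up pulled most often: exploration and an objective can coexist in one score.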
chillacy
I think he just about says this in his talk when he discusses developing "novelty search". The applications to AI seemed more straightforward (don't blindly run up to a local maximum if your search space isn't monotonic), but the application to life is less straightforward but more interesting.
kevindeasis
I'd like to hear arguments against his claim. Even better if it is a machine learning experiment that invalidates his idea.

It would be interesting to compare the two sides.

ssivark
The story of how Feynman found out about the Challenger disaster is quite interesting. It is not that he was a clever detective ferreting out the facts -- it's just that he tried something different and let himself be nudged along in the process.

To quote from his book "What do you care what other people think?"...

Well, a few days after the accident, I get a telephone call from the head of NASA, William Graham, asking me to be on the committee investigating what went wrong with the shuttle! [...] When I heard the investigation was going to be in Washington, my immediate reaction was not to do it: I have a principle of not going anywhere near Washington or having anything to do with government, so my immediate reaction was -- how am I going to get out of this? [Gweneth, his wife says] "If you don't do it, there will be twelve people, all in a group, going around from place to place together. But if you join in the commission, there will be eleven people -- all in a group going around from place to place together -- while the twelfth one runs around all over the place, checking all kinds of unusual things. There probably won't be anything, but if there is, you'll find it." She said, "There isn't anyone else who can do that like you can."

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.