HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Karl Sims - Evolved Virtual Creatures, Evolution Simulation, 1994

MediaArtTube · YouTube · 6 HN points · 10 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention MediaArtTube's video "Karl Sims - Evolved Virtual Creatures, Evolution Simulation, 1994".
YouTube Summary
This video shows results from a research project involving simulated Darwinian evolution of virtual block creatures. A population of several hundred creatures is created within a supercomputer, and each creature is tested for its ability to perform a given task, such as the ability to swim in a simulated water environment. Those that are most successful survive, and their virtual genes, containing coded instructions for their growth, are copied, combined, and mutated to make offspring for a new population. The new creatures are again tested, and some may be improvements on their parents. As this cycle of variation and selection continues, creatures with more and more successful behaviors can emerge.
More info: http://www.karlsims.com/evolved-virtual-creatures.html
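As a rough illustration of the variation-and-selection cycle described in the summary, here is a minimal sketch of that loop in Python. The genome representation, fitness function, and parameter values are placeholder assumptions for illustration only; Sims' actual system evolved creature morphologies and neural controllers scored by a physics simulation.

    import random

    # Minimal sketch of the cycle described above: test a population,
    # keep the most successful, and copy/combine/mutate their genes.
    # The genome here is just a list of floats, not a real creature.

    GENOME_LEN = 16
    POP_SIZE = 300          # "a population of several hundred creatures"
    GENERATIONS = 50
    MUTATION_RATE = 0.1

    def random_genome():
        return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

    def fitness(genome):
        # Placeholder for "ability to perform a given task" (e.g. distance swum).
        return -sum(g * g for g in genome)

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        # Test every creature; the most successful survive as parents.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 5]
        # Copy, combine, and mutate parent genes to build the next population.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]
        print(f"generation {gen}: best fitness {fitness(ranked[0]):.3f}")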
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
And in 1994, Karl Sims used virtual environments to evolve and train virtual creatures:

https://youtu.be/JBgG_VSP7f8

I think evolution is not an undirected process in that sense, because it's an optimization process: it optimizes for making more copies of itself. Superintelligence will likely use some form of Evolutionary Computation (see en.wikipedia.org/wiki/Evolutionary_computation).

Also see Karl Sims' 'Creatures' from the 90s: youtube.com/watch?v=JBgG_VSP7f8 or OpenAI's Multi-Agent Hide and Seek: youtube.com/watch?v=kopoLzvh5jY

Also relevant is Karl Sims' Evolving Virtual Creatures:

https://www.youtube.com/watch?v=JBgG_VSP7f8

Not sure how they're connected, but I remember being obsessed with that paper and actually recreating it in 2D as a grad project.

Jun 07, 2019 · 4 points, 0 comments · submitted by walkingolof
The article's philosophical discussion of "surprise" feels a little klunky and academic to me, and I think conflates some different kinds of surprise, but it's still really fun to think about.

Many of the projects here are inspired by Karl Sims' work in the early 90s.

http://www.karlsims.com/

Sims evolved virtual creatures by specifying goals to achieve and then running a physics simulation. He noted at the time that the evolution process was great at exploiting bugs in the simulation.

https://www.youtube.com/watch?v=JBgG_VSP7f8
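As a hedged sketch of that "specify a goal, then score it in a simulation" setup (not Sims' actual code), here is a toy 1-D "swimmer" fitness function in Python. The drag model, constants, and plausibility cutoff are all invented for illustration; the cutoff hints at why evolution exploiting simulation bugs matters: without some sanity limit, a genome that triggers a numerical blow-up would score as the best swimmer.

    import random

    DT = 0.01
    DRAG = 0.5
    MAX_PLAUSIBLE_SPEED = 10.0   # assumed units/second; tune for a real simulator

    def swim_fitness(thrusts):
        """Score a genome (a list of per-step thrust values) by distance traveled."""
        position, velocity = 0.0, 0.0
        for thrust in thrusts:
            velocity += (thrust - DRAG * velocity) * DT
            position += velocity * DT
            if abs(velocity) > MAX_PLAUSIBLE_SPEED:
                return 0.0   # reject physically implausible (probably buggy) runs
        return abs(position)

    genome = [random.uniform(-1.0, 1.0) for _ in range(1000)]
    print(swim_fitness(genome))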

I was inspired enough by Sims' genetic images (http://www.karlsims.com/genetic-images.html) that I spent a few years trying to get surprising and beautiful results of my own, with some limited success (https://flic.kr/s/3Xoz).
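For readers unfamiliar with genetic images, the core idea is evolving symbolic expressions over pixel coordinates and rendering them as images. The sketch below is a loose illustration in Python with an invented, tiny function set; it is not dahart's code or Sims' actual operator set.

    import math, random

    # Roughly in the spirit of Sims' genetic images: grow a random expression
    # tree over (x, y) and evaluate it per pixel. Favorites would be kept,
    # mutated, and recombined by the user.

    OPS = {
        "sin": lambda a, b: math.sin(math.pi * a),
        "mul": lambda a, b: a * b,
        "avg": lambda a, b: (a + b) / 2.0,
        "mix": lambda a, b: math.cos(a) * b,
    }

    def random_expr(depth=4):
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", "y", random.uniform(-1, 1)])
        op = random.choice(list(OPS))
        return (op, random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(expr, x, y):
        if expr == "x":
            return x
        if expr == "y":
            return y
        if isinstance(expr, float):
            return expr
        op, a, b = expr
        return OPS[op](evaluate(a, x, y), evaluate(b, x, y))

    # Render one channel of a small image over [-1, 1] x [-1, 1] coordinates.
    expr = random_expr()
    size = 64
    image = [[evaluate(expr, 2 * i / size - 1, 2 * j / size - 1)
              for i in range(size)] for j in range(size)]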

VyseofArcadia
> The article's philosophical discussion of "surprise" feels a little klunky and academic to me

To me it sounded like something a referee would request or something they suspected a referee would request.

abecedarius
Nice work! Your inspired-by-Sims collection is the best I've seen in that genre, I think. I always look when someone links to their genetic images because I once made some too: https://djb.deviantart.com/gallery/ (They were less garish at the original gamma setting; that's how long ago it was.)

It always felt like there's more potential in this direction. Maybe with a magic sprinkling of neural nets?

dahart
Hey thank you!! Yours look great too. I know what you mean about gamma, mine were probably a little more washed out 15 years ago, I accidentally made them a bit gamma-proof I think. ;) I’m dying to find the time to do a JavaScript+WebGL version of this that other people can play with.

The big thing I spent time on was animating cross-fades between evolved expressions. I’ll send a link to the vid & paper if you’re interested.

abecedarius
I'd be interested, thanks.

A web version would make a great time-waster. At least, as the author, I wound up spending much more time clicking through the space of images than working on the code; it might not feel as compelling without the sense of getting something for almost nothing when your own program comes to life.

dahart
Here they are. Yeah, I agree with you, a big piece of the enjoyment was writing the code and then being surprised by what happened while exploring the image space. You definitely don't have as much love for the surprises in someone else's program, not knowing what it's expected to do or how it works. During at least one of the years I worked on this, I spent more time using it than coding it, and it was pretty educational and productive to be more of a user.

https://www.youtube.com/watch?v=kLmtvIt6ihA

http://dahart.com/paper/hart_evomusart_2007_paper.pdf

http://dahart.com/paper/hart_evomusart_2007_slides.pdf

BTW, Karl Sims read this paper and recommended it for Siggraph, but a few other reviewers were fairly opposed to generative art papers. C'est la vie.

abecedarius
Wow, the cross-fades really sell it. I never even tried to do anything like that; my genetic programs were for a kind-of-weird stack machine as the easiest thing to do. https://github.com/darius/tusdl/blob/master/evo.c

Slightly related to the starting topic of learned exploits: op_hwb_color() producing most of the colors seems to have suffered from some undefined or implementation-defined behavior -- I couldn't reproduce old pictures when I came back to this many years later. Yet that one misbehavior deserves a lot of the credit for whatever aesthetic value turned up. C'est encore la vie.
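A flat, stack-machine encoding like the one mentioned above makes crossover and mutation trivial, since programs are just lists of ops that can be spliced. Here is a hedged sketch of such a per-pixel interpreter in Python; the opcodes are made up and are not the ones in the linked evo.c.

    import math

    def run_program(program, x, y):
        # The stack starts with the pixel coordinates; each op consumes and
        # produces values, and the top of the stack is the pixel value.
        stack = [x, y]
        for op in program:
            if isinstance(op, float):
                stack.append(op)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "sin":
                stack.append(math.sin(stack.pop()))
            elif op == "dup":
                stack.append(stack[-1])
        return stack[-1]

    # Example program: sin(x * y) + 0.5
    print(run_program(["mul", "sin", 0.5, "add"], 0.3, 0.7))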

Nov 06, 2017 · dahart on Self-Replicating Functions
I hadn't before, thanks for the tip! It's been a while since I played with this stuff, but I might have to get a copy and explore. I have heard of MetaSynth in audio circles. I'm not sure, but the images and videos on the ArtMatic site look (to me) very inspired and influenced by the work of Karl Sims, which is the main source of inspiration behind my project... http://www.karlsims.com/papers/siggraph91.html

You can find a bunch of his videos on YouTube too.

https://youtu.be/AgeuRukfZLE https://youtu.be/vIVjEkWTEXI https://youtu.be/SLAa-CnUEW4 https://youtu.be/JBgG_VSP7f8

FraKtus
Thank you for the link to the article! I have known Eric Wenger, the author of MetaSynth / ArtMatic / Voyager, for many years; his motivation for creating ArtMatic was to generate textures to feed into MetaSynth and create music from visuals.
Oct 12, 2017 · ehsankia on Competitive Self-Play
Also, much better competing AIs (and wittier narration), from back in 1994 [1]

[1] https://www.youtube.com/watch?v=JBgG_VSP7f8&t=2m10s

It seems to me that there are several levels of misunderstanding here.

And the most interesting part is when Suzuki asks "So, what is your goal?", and the engineer answers "Well, we would like to build a machine that can draw pictures like humans do.".

This is certainly a commendable goal.

Then we cut to another sequence, where Miyazaki is drawing, and reflecting: "I feel like we are nearing the end of times. We humans are losing faith in ourselves..."

It may or may not be so, but this is an interesting point to discuss. I can certainly understand how it could feel like that, but you'd have to examine the motive for wanting to "replace" humans with machines, or just to have machines perform like humans.

One key misunderstanding in this exchange, and foremost on the part of the journalists reporting it (damn dumb journalists!), is that "This is a presentation of an artificial intelligence model which learned certain movements."

Obviously, they didn't teach it to move; they gave it a certain model of the body and let the AI loose to learn how to move by itself. This is a classical application of genetic programming. See how some of those creatures look "creepy" too:

https://www.youtube.com/watch?v=JBgG_VSP7f8 https://www.youtube.com/watch?v=HgWQ-gPIvt4 https://www.youtube.com/watch?v=CXTZHHQ7ZiQ https://www.youtube.com/watch?v=yci5FuI1ovk

They obtained some creepy result, added a creepy texture and made it into a quick & dirty demo for "animating zombies".

Clearly, the people who attended this demo didn't understand what it was.

And the artist's answer was the worst. Imagine attending the Wright brothers' demo and responding with: "I have a friend who is blind; this doesn't respect the pain and the suffering of those with vision disabilities who will never be able to pilot a plane." What a tool!

jackfrodo
> And the artist's answer was the worst. Imagine attending the Wright brothers' demo and responding with: "I have a friend who is blind; this doesn't respect the pain and the suffering of those with vision disabilities who will never be able to pilot a plane." What a tool!

This isn't what he was saying at all. He has a friend with something perhaps like Parkinson's. The zombie animation was clearly evocative of his friend's difficult movements. So to him, it's morally repulsive to create a disturbing animation, not even drawn by a human, that so closely resembles a physical disability.

Whether or not it makes for good horror or art is another matter. He just finds the whole concept disgusting.

anigbrowl
> And the most interesting part is when Suzuki asks "So, what is your goal?", and the engineer answers "Well, we would like to build a machine that can draw pictures like humans do.".

> This is certainly a commendable goal.

Why? It's not like there is some shortage of human artists. Indeed, the main problem for artists is the difficulty of making a living at it. As a painter I feel quite depressed when I see a friend on Facebook or somesuch upload an attractive new piece of artwork that turns out to be a new Snapchat filter.

I think they just took a concept figured out in 1994 (https://www.youtube.com/watch?v=JBgG_VSP7f8) and applied it to robots, in an attempt to get an easy publication written up.

Really, though, we know we can make robot parts. The hard part is the software. Making this robotic was completely pointless (building the robots is a huge amount of unnecessary added work) and doesn't add to our knowledge until we can solve all the software problems in simulation first.

Something like evolving virtual creatures would be a fun (although pretty ambitious) addition [1].

There's a Processing 2D virtual creatures model called Sticky Feet [2] (source available) that perhaps could be ported to JS pretty quickly.

[1] https://www.youtube.com/watch?v=JBgG_VSP7f8

[2] http://www.cc.gatech.edu/~turk/stickyfeet/

matt1
For sure -- that is one of the medium term projects I want to work on. We'll see how it goes in the browser though...
Jul 12, 2013 · 2 points, 0 comments · submitted by pjbrow
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.