HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
OpenAI Model Generates Python Code

Fazil Babu · YouTube · 18 HN points · 6 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Fazil Babu's video "OpenAI Model Generates Python Code".
YouTube Summary
Full Video: https://www.pscp.tv/Microsoft/1OyKAYWPRrWKb?t=29m19s
Credits: Microsoft, OpenAI
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
There have been demos of it writing code from English comments.

https://www.youtube.com/watch?v=fZSFNUT6iY8

It's not clear how much the demos have been gamed for presentation, and it seems more of an opportunity than a threat - it will still need devs to put stuff together, and (assuming it is as impressive as demoed) will take a lot of the donkey work away.

bananaface
A significant chunk of what devs are paid for is the donkey work. Making every dev significantly more productive increases the supply of dev power relative to the demand, dropping the price.
maps7
I would need to see a lot more than that to be impressed/worried. I love how there's even a serious bug in there: it applies the discount at 80% instead of 20%.
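The class of bug being described is easy to illustrate. A hypothetical reconstruction (the actual demo code isn't reproduced here; the function name is invented), assuming a simple price function:

```python
def apply_discount(price: float) -> float:
    """Apply a 20% discount to a price."""
    # The buggy generated version would effectively do:
    #   return price * 0.2   # charges 20% of the price: an 80% discount
    return price * 0.8       # correct: the customer pays 80% of the price

print(apply_discount(100.0))  # 80.0
```

The two lines read almost identically, which is exactly why this kind of mistake is easy to miss in a generated snippet.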
Apparently it was trained on GitHub repositories.

https://youtu.be/fZSFNUT6iY8

aerovistae
What I'm wondering, though, is how it matched the natural English description with the repository content.
taylorfinley
I'd like to see this done with tests as the input, and the working app as the output. Then writing an app would be as simple as defining the tests it must pass.
FabioBertone
> Then writing an app would be as simple as defining the tests it must pass

In most cases, if you can define the tests, you have already figured out 80% of the solution.
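To make that concrete, here is a small invented example (the function and its test cases are mine, not from the video). The assertions pin down nearly every decision the implementation has to make, which is the sense in which writing them is most of the work:

```python
import re

def slugify(text: str) -> str:
    """Lowercase, strip, and replace non-alphanumeric runs with hyphens."""
    text = text.strip().lower()
    return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

# The tests that would serve as the "input": each one fixes a design
# decision (casing, whitespace handling, punctuation, the empty string).
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("") == ""
```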

p49k
Probably through tutorials and verbosely-commented code examples?
zumachase
The prompt contained a few React examples with annotated English descriptions that were similar enough. You're not seeing the entire prompt, and as impressive as this is, it's likely you'd end up writing more code trying to get it to do what you want than if you just wrote it yourself. Future GPTs may of course be better, but there's a bit of "magic" to these demos that the author isn't super upfront about.
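For background, few-shot prompting of this kind just prepends worked examples to the input and lets the model continue the pattern. A schematic sketch, with invented examples and delimiters rather than the demo's actual prompt:

```python
# Invented few-shot examples: (description, code) pairs the model sees
# before the real request, establishing the pattern it should continue.
FEW_SHOT_EXAMPLES = [
    ("a button that says 'hello'", "<button>hello</button>"),
    ("a heading that says 'welcome'", "<h1>welcome</h1>"),
]

def build_prompt(description: str) -> str:
    """Assemble a few-shot prompt ending at the point the model completes."""
    parts = [f"description: {desc}\ncode: {code}"
             for desc, code in FEW_SHOT_EXAMPLES]
    # The final entry has no code; the model's continuation is the "output".
    parts.append(f"description: {description}\ncode:")
    return "\n\n".join(parts)

print(build_prompt("a button that says 'goodbye'"))
```

The point of zumachase's comment is that the quality of the completion depends heavily on how close those hidden examples are to the request.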
Jul 21, 2020 · 1 point, 0 comments · submitted by andreaorru
There's been some pretty cool work in this area recently: https://www.youtube.com/watch?v=fZSFNUT6iY8
mulmen
Wow. This is exactly what I imagined but it already exists. This is a great illustration of how we as humans could use our abilities to disambiguate to collaborate with a computer and write code. Very impressive stuff.

To me it seems like learning how to talk to Alexa or Cortana or "Google" is a limitation or regression for humans. This shows that it could actually be beneficial.

Thanks for this philosophical rabbit hole just in time for a weekend.

lunixbochs
As someone who works in voice tech, I think talking to Alexa is setting back our expectations of voice tech by at least a decade. The actual tech and capabilities we have available right now are so much better than static capabilities over a high-latency internet connection.
Udik
That's seriously cool. I think a tool that could take a high-level description of a small piece of code and write it for me (it doesn't have to be correct: I can double-check it and fix it quicker than I could write it) would make my coding much quicker and less stressful. There are so many times when I think: "I know exactly how to do this, step by step; I'm just annoyed at having to type it out for the umpteenth time".
Jun 11, 2020 · 1 point, 0 comments · submitted by lopespm
May 29, 2020 · 3 points, 1 comment · submitted by emsy
thephyber
I'm curious to see where this goes.

The skeptical part of me worries that the docstring is being used to select which code to borrow from, so this technique probably wins and loses based on "the wisdom of crowds" unless there is some other code quality scoring.

I have personally been interested in genetic programming and have written something similar (except the function signature and unit tests become the fitness function).
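The "unit tests become the fitness function" idea can be sketched in a few lines. In this toy version (entirely illustrative), brute force over two integer coefficients stands in for the mutation-and-crossover search over expression trees that a real genetic-programming system would use:

```python
# The "unit tests" that define the fitness function: (input, expected)
# pairs for an unknown function we want to synthesize, here f(x) = 2x + 3.
TESTS = [(0, 3), (1, 5), (2, 7), (5, 13)]

def fitness(a: int, b: int) -> int:
    """How many tests the candidate program f(x) = a*x + b passes."""
    return sum(1 for x, want in TESTS if a * x + b == want)

def synthesize() -> tuple[int, int]:
    """Return the candidate in a small search space with the best fitness.

    A GP system would explore this space by evolving a population instead
    of enumerating it, but the scoring step is the same.
    """
    return max(
        ((a, b) for a in range(-10, 11) for b in range(-10, 11)),
        key=lambda ab: fitness(*ab),
    )

a, b = synthesize()  # (2, 3): the unique candidate passing all four tests
```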

I found this video interesting. It takes comments and generates code.

https://www.youtube.com/watch?v=fZSFNUT6iY8

May 24, 2020 · 1 point, 1 comment · submitted by sharemywin
sharemywin
wonder how specific the training was for this?
May 21, 2020 · 1 point, 0 comments · submitted by andybak
May 21, 2020 · 1 point, 0 comments · submitted by csantini
May 21, 2020 · 6 points, 3 comments · submitted by pimterry
Jeff_Brown
More interesting to me is the capacity of Idris 2 to complete a function based on its type signature -- because (when it works) you can be sure it got it correct.
jepler

    def f() -> bool:
        """return True iff the Collatz Conjecture is true"""

    def g() -> bool:
        """return False iff OpenAI correctly completes this code"""
These tricks will only work for simple things and won't work for complex things (though there they're likely to give obviously nonsensical answers); even worse, there will be a middle swamp where it'll write something complicated, plausible, and wrong.

As usual, a video of 2 or 3 simple examples, curated beforehand so you're confident they'll work, tells a different story.

Just for the sake of curiosity, I also asked GPT-2 whether it would work. Here's what it said:

    Here's why this Python code-generating AI won't work. The other
    reason is that it will require you to make something that performs a
    large and complex program. This, after all, is what the function
    call is to do, a "simple function" that returns a function that
    accepts an integer number as its result. Let's say that I say
    "hello" in the code and I get the value by making a change. To do
    this, I divide the value by a few. Next, I increment the value by
    two. The process will fail, because I need a new function which will
    return a result, but in this case I get a value by incrementing the
    value by two. This gives me the answer that is most likely true. We
    need to know exactly which value we want for the computation and to
    implement ways to handle returning it.
You said it, GPT-2.
pimterry
> These tricks will only work for simple things, won't work for complex things

Even complex systems include a lot of simple code.

A practical tool based on this doesn't need to be able to generate large working systems or solve complex problems hands-free. It just needs to provide autocomplete suggestions for the simple parts. It can do so only when it's very confident it knows what you're looking for, and you'll shave a meaningful percentage of time off all software development work. Never write any boilerplate code for anything ever again.

Accuracy is likely to improve in that case too, since inline suggestions have much more context than just a function declaration.

That's before you start thinking about running this in reverse, and using it as AI-powered linting. If I write some code and the AI is very confident that part of it isn't what it would suggest, flag it for examination. It'll only catch simple mistakes, of course, but a lot of developers make a lot of simple mistakes that they miss every day.
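The gating logic behind that kind of confidence-based flagging is simple to sketch. Here the model is stubbed out with invented per-line probabilities; a real tool would derive these from the model's log-probs rather than a hard-coded dict:

```python
# Hypothetical per-line probabilities a code model might assign.
MODEL_CONFIDENCE = {
    "total = sum(prices)": 0.97,
    "avg = total / len(prices)": 0.94,
    "tax = total * 10.8": 0.12,  # surprising to the model: likely a typo'd rate
}

FLAG_THRESHOLD = 0.3

def lint(lines: list[str]) -> list[str]:
    """Flag lines the model finds very unlikely, for human review.

    Lines the stub model hasn't scored default to 1.0 (never flagged);
    a real tool would score every line.
    """
    return [
        line for line in lines
        if MODEL_CONFIDENCE.get(line, 1.0) < FLAG_THRESHOLD
    ]

flagged = lint(list(MODEL_CONFIDENCE))
# flagged == ["tax = total * 10.8"]
```

As the comment notes, this only catches "simple" surprises, but those are exactly the mistakes that slip past tired humans.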

May 21, 2020 · 4 points, 0 comments · submitted by rchaudhary
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.