Hacker News Comments on "OpenAI Model Generates Python Code"
Fazil Babu · YouTube · 18 HN points · 6 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

There have been demos of it writing code from English comments. https://www.youtube.com/watch?v=fZSFNUT6iY8
It's not clear how much the demos have been gamed for presentation, and it seems more of an opportunity than a threat - it will still need devs to put stuff together, and (assuming it is as impressive as demoed) will take a lot of the donkey work away.
⬐ bananaface
A significant chunk of what devs are paid for is the donkey work. Making every dev significantly more productive increases the supply of dev power relative to the demand, dropping the price.
⬐ maps7
I would need to see a lot more than that to be impressed/worried. I love how there's even a serious bug in there - discounts at 80% instead of 20%
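The demo's generated code isn't shown here, so this is only a hypothetical sketch of the kind of bug maps7 describes: a 20% discount mistakenly applied as an 80% one. The function names are invented for illustration.

```python
def buggy_discounted_price(price: float) -> float:
    # Intended: 20% off, i.e. price * 0.8.
    # The generated code instead leaves only 20% of the price:
    return price * 0.2

def correct_discounted_price(price: float) -> float:
    # 20% off: the customer pays 80% of the original price.
    return price * 0.8

print(buggy_discounted_price(100.0))    # customer pays 20.0
print(correct_discounted_price(100.0))  # customer pays 80.0
```

A plausible one-character slip in a docstring-driven generator, and exactly the kind of error that is easy to miss in a live demo.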
Apparently it was trained over GitHub repositories.
⬐ aerovistae
What I'm wondering is how it matched up the natural english description with the repository content though.
⬐ taylorfinley
I'd like to see this done with tests as the input, and the working app as the output. Then writing an app would be as simple as defining the tests it must pass.
⬐ FabioBertone
⬐ p49k
> Then writing an app would be as simple as defining the tests it must pass
In most cases, if you can define the tests, you have already figured out 80% of the solution.
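A hypothetical illustration of taylorfinley's tests-as-input idea, and of p49k's point that the tests already pin down most of the solution. The `slugify` spec below is invented for this sketch; note how completely the assertions dictate the implementation a generator would have to find.

```python
def slugify(title: str) -> str:
    """The implementation the hypothetical generator is asked to produce."""
    return "-".join(title.lower().split())

# The "input" to the generator: a test suite acting as the spec.
def test_slugify():
    assert slugify("Hello") == "hello"              # lowercases
    assert slugify("Hello World") == "hello-world"  # hyphenates words
    assert slugify("a   b") == "a-b"                # collapses whitespace

test_slugify()
print("all spec tests pass")
```

Writing those three assertions already required deciding on lowercasing, hyphenation, and whitespace handling, which is most of the design work.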
Probably through tutorials and verbosely-commented code examples?
⬐ zumachase
The prompt contained a few react examples with annotated English descriptions that were similar enough. You’re not seeing the entire prompt and as impressive as this is, it’s likely that you’d end up writing more code trying to get it to do what you want than if you just wrote it yourself. Future GPTs may of course be better, but there’s a bit of “magic” to these demos that the author isn’t super upfront about.
There's been some pretty cool work in this area recently: https://www.youtube.com/watch?v=fZSFNUT6iY8
⬐ mulmen
Wow. This is exactly what I imagined but it already exists. This is a great illustration of how we as humans could use our abilities to disambiguate to collaborate with a computer and write code. Very impressive stuff.

To me it seems like learning how to talk to Alexa or Cortana or "Google" is a limitation or regression for humans. This shows that it could actually be beneficial.
Thanks for this philosophical rabbit hole just in time for a weekend.
⬐ lunixbochs
As someone who works in voice tech, I think talking to Alexa is setting back our expectations of voice tech by at least a decade. The actual tech and capabilities we have available right now are so much better than static capabilities over a high latency internet connection.
⬐ Udik
That's seriously cool. I think that a tool that could take a high level description of a small piece of code and write it for me (it doesn't have to be correct - I can double check it and fix it quicker than I write it) would make my coding much quicker and less stressful. There are so many times in which I think: "I know exactly how to do this, step by step: I'm just annoyed at having to type it down for the umpteenth time".
Here's a video of the demo:
⬐ thephyber
I'm curious to see where this goes.

The skeptical part of me worries that the docstring is being used to select which code to borrow from, so this technique probably wins and loses based on "the wisdom of crowds" unless there is some other code quality scoring.
I have personally been interested in genetic programming and have written something similar (except the function signature and unit tests become the fitness function).
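The commenter's genetic-programming setup isn't shown, so this is only a minimal sketch of the approach described: candidate programs are tiny expression trees, and the fitness function is how many unit-test cases a candidate passes. The target function and all names are invented for illustration.

```python
import random

# The "unit tests": input/output pairs the evolved program must satisfy.
# Target behaviour here is f(x) = x*x + 1 (an arbitrary choice).
TEST_CASES = [(x, x * x + 1) for x in range(-3, 4)]

def random_expr(depth=0):
    """Build a random expression tree over x, small constants, and + * -."""
    if depth > 2 or random.random() < 0.3:
        return random.choice(["x", "1", "2"])
    op = random.choice(["+", "*", "-"])
    return (op, random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, x):
    """Interpret an expression tree at a given x."""
    if expr == "x":
        return x
    if isinstance(expr, str):
        return int(expr)
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "+" else a * b if op == "*" else a - b

def fitness(expr):
    """Fitness = number of unit-test cases the candidate passes."""
    return sum(evaluate(expr, x) == want for x, want in TEST_CASES)

def mutate(expr):
    """Replace a random subtree with a fresh random expression."""
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr()
    op, a, b = expr
    if random.random() < 0.5:
        return (op, mutate(a), b)
    return (op, a, mutate(b))

random.seed(0)
best = random_expr()
for _ in range(5000):          # simple hill climb over mutations
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(best, "passes", fitness(best), "of", len(TEST_CASES), "tests")
```

A real system would add crossover and a population rather than a single hill climb, but the core idea from the comment survives: the test suite is the fitness function.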
I found this video interesting. It takes comments and generates code.
⬐ sharemywin
wonder how specific the training was for this?
⬐ Jeff_Brown
More interesting to me is the capacity of Idris 2 to complete a function based on type signatures -- because (when it works) you can be sure it got it correct.
⬐ jepler
These tricks will only work for simple things, won't work for complex things (but is likely to give obviously nonsense answers), but even worse there will be a middle swamp where it'll write something complicated, plausible, and wrong.

def f() -> bool:
    """return True iff the Collatz Conjecture is true"""

def g() -> bool:
    """return False iff OpenAI correctly completes this code"""
As usual, a video of 2 or 3 simple examples, curated beforehand so you're confident they'll work, tells a different story.
Just for the sake of curiosity, I also asked GPT-2 whether it would work. Here's what it said:
Here's why this Python code-generating AI won't work. The other reason is that it will require you to make something that performs a large and complex program. This, after all, is what the function call is to do, a "simple function" that returns a function that accepts an integer number as its result. Let's say that I say "hello" in the code and I get the value by making a change. To do this, I divide the value by a few. Next, I increment the value by two. The process will fail, because I need a new function which will return a result, but in this case I get a value by incrementing the value by two. This gives me the answer that is most likely true. We need to know exactly which value we want for the computation and to implement ways to handle returning it.

You said it, GPT-2.
⬐ pimterry
> These tricks will only work for simple things, won't work for complex things
Even complex systems include a lot of simple code.
A practical tool based on this doesn't need to be able to generate large working systems or solve complex problems hands-free. It just needs to be able to provide autocompletable suggestions for the simple code parts. If it does so only when it's very confident it knows what you're looking for, you'll shave a meaningful percentage of time off of all software development work. Never write any boilerplate code for anything ever again.
Accuracy is likely to improve in that case too, since inline suggestions have much more context than just a function declaration.
That's before you start thinking about running this in reverse, and using it as AI-powered linting. If I write some code and the AI is very confident that part of it isn't what it would suggest, flag it for examination. It'll only catch simple mistakes, of course, but a lot of developers make a lot of simple mistakes that they miss every day.
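Sketch of the "run it in reverse" linting idea above. There is no real model here: `model_confidence` is a stand-in that scores how plausible a line looks to a hypothetical language model, and the linter flags lines scoring below a threshold. All names and the threshold value are invented for illustration.

```python
FLAG_THRESHOLD = 0.2  # arbitrary cutoff for "the model is surprised"

def model_confidence(line: str) -> float:
    """Stand-in for a language model's score of how likely this line is.
    Here we just hard-code one classic simple mistake: assignment
    where a comparison was almost certainly intended."""
    return 0.05 if "if x = " in line else 0.9

def lint(source: str):
    """Return (line number, line) pairs the 'model' finds implausible."""
    flagged = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if model_confidence(line) < FLAG_THRESHOLD:
            flagged.append((lineno, line.strip()))
    return flagged

code = """\
total = 0
if x = 10:
    total += x
"""
print(lint(code))  # -> [(2, 'if x = 10:')]
```

The interesting design question is the threshold: set it high enough and the tool stays quiet except for exactly the simple, high-confidence mistakes the comment describes.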
Relevant section: https://youtu.be/fZSFNUT6iY8