Tell HN: I asked ChatGPT to build a login form in React and it gave me the code
I asked OpenAI's ChatGPT to "build a login form containing email, password, and login button in React using state" and it gave me the full code along with an explanation. You can see a video of the prompt and response: https://cln.sh/X4p01n
I thought Copilot was fairly advanced, but ChatGPT takes it to a whole other level. What are your thoughts?
Prompt: create a rap song about a guy named Roman writing .NET APIs
Sure, here is a possible rap song about a guy named Roman writing .NET APIs:
Verse 1:
I'm Roman, and I'm a code king
I write .NET APIs, and I do my thing
I've got Visual Studio on my screen
Writing code that's clean and lean
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day
Verse 2:
I'm a master of C#, no doubt
I write APIs that always work out
I'm a whiz with classes and objects too
I can handle any challenge that comes my way
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day
Verse 3:
I'm a pro with ASP.NET and Web API
I can build a server in a jiffy
I know my way around REST and JSON
I can make your data flow like a river
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day
Verse 4:
I'm a rockstar with Azure and the cloud
I can make your apps run smooth and proud
I'm a wizard with microservices and containers
I can help your business grow and prosper
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day.
Outro:
Roman, Roman, the .NET API man
Writing code that's always in demand
Roman, Roman, the code hero
Writing .NET APIs for the win.
Not joking at all, this really was generated. I have now wasted hours amusing myself and others with epic poems, rap songs, limericks, short stories and such. This thing is pretty incredible, especially how it has context about referenced objects/concepts, and can alter and evolve its output based on further input. All in mere seconds.
That’s right. The other mistake it made was including the curly braces when capturing the first parameter.
Despite this, for a regex amateur like me I was very impressed. It saved me all the time I’d normally spend head-scratching in Regex101, trying to remember how it works. Instead I could just jump there at the end, validate the matching, and make any small tweaks far easier.
Also, a LaTeX macro will grab the first token as its argument if there are no curly brackets, which the regex doesn't support ... Not entirely surprised, as that's hard (impossible if you don't assume default catcodes) to get right and probably not worth it for most use cases, but worth keeping in mind.
For context, this is for a pretty strange use case — I was trying to preprocess user-submitted LaTeX with PHP, by replacing occurrences of a specific macro with simply the value of its first argument.
The macro just colour-codes its first parameter presentationally, whilst the rest of the LaTeX needs to be passed to a compute engine for solving. The compute engine doesn't know how to "solve" a colour, hence why I'm trying to strip them out beforehand.
I showed this to my wife who is in academia. It can't replace academics because research, by its very definition, has to be new, but good God it's going to make her job impossibly hard - it's trivially easy to pump out papers with this tool. Once students catch on, it's going to be really hard to figure out what's a real paper and what's AI-generated.
And journals are going to have a real hard time dealing with "paper spam".
Right now it is like when the camera had been around for a few years: People knew it was revolutionary, they went to the cinema to watch a train arrive and got terrified [1]. But compare that to the billions of camera phones, Marvel movies, Hubble, etc.
I think management should be first. AI managers sound very reasonable, actually. Next stop, VCs. If we geeks were actually smart (regrettably, in practice, we are not), the bots would be taking away non-geek jobs.
"Thou shalt not Code!"* [the secret 4th law of robotics on planet Actually Smart Geeks Here]
* graven image business all over again -- it's a sin ;)
It basically gave me the same feeling of glee I got out of Google while it was still at stanford.edu: just a great UX and what feels like the correct answer on the first try. Google should be shaking in their boots.
How is ChatGPT's answer wrong? The regex is matching square brackets instead of the correct OPTIONAL curly braces. Unseparated arguments aren't handled. The regex for the id is also wrong. The leading slash is correct, but other than that, the whole fucking thing is wrong!
And of course it can't handle macro calls within the macro arguments (nesting).
It's just an absolute clusterfuck, from top to bottom. The chatbot gave its user a useless piece of garbage, and Hacker News doesn't know enough to throw it away. Instead, we parade it around like some great success!
Fooled by a language model.
The comment you replied to was a one line joke about being bad at regex. I didn't directly or implicitly make any claim about the quality of the output of the chatbot. It's unclear to me how my comment could be reasonably interpreted as "parading it around".
On a tangent:
> Fooled by a language model
I suspect a reasonable person would understand this to be insulting on some level, so I assume you must have seen some other meaning/value in including it such that the "insultingness" is outweighed by the point you're making. I'm not seeing that point though, could you clarify?
Based on this and the other post where it delivered lies about Typescript generics, ChatGPT will—with total confidence—tell you things that look correct but are horrendously wrong for anyone who understands the subject matter. Just like the "science news" section of any mainstream media.
As an aside, this is why I hate regex. The expressions are too convoluted for a human to readily parse and understand. Unless there's a good reason, I prefer to stick to imperative parsing code (and have developed personal tooling to help with that).
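To make that trade-off concrete, here is a tiny invented example (not from the thread) of the same task, extracting runs of digits, written both as a regex one-liner and as explicit imperative scanning:

```javascript
// Regex version: terse, but the pattern must be decoded by the reader.
const digitRunsRegex = (s) => s.match(/\d+/g) ?? [];

// Imperative version: longer, but each step is spelled out.
function digitRunsImperative(s) {
  const runs = [];
  let current = '';
  for (const ch of s) {
    if (ch >= '0' && ch <= '9') {
      current += ch; // extend the current run of digits
    } else if (current) {
      runs.push(current); // a run just ended
      current = '';
    }
  }
  if (current) runs.push(current); // flush a trailing run
  return runs;
}

console.log(digitRunsRegex('a12b345'));      // → ["12", "345"]
console.log(digitRunsImperative('a12b345')); // → ["12", "345"]
```

Both return the same result; which one "communicates better" is exactly the disagreement in this subthread.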
Part of elegant development is abstracting at the right places to make your code communicate what it's doing to the next person working on it (which could include future you).
Too convoluted for a human to readily parse and understand? The same could be said for pretty much any code. As with anything else, the more practice you have reading and writing regexes, the easier they become.
If you ask me, the real boogeyman is lack of programmers' comments that explain what the regex is doing. In other words, the fault lies not with the tools, but PEBKAC.
I can’t see any mention of square brackets in section 9 (macros). Also, my end goal was to specifically target the contents of the first parameter, which I know will be wrapped in curly braces.
Regardless, even if my prompt didn’t meet the LaTeX spec exactly, the regex I got back (after just 2 natural-language revisions) saved me a lot of time and hassle, and helped me get the job done in the way I needed it to.
There’s little point judging it to be a “total clusterfuck” without knowing what I’m even trying to achieve in the first place :)
Prompts:
- Write a simple CRDT implementation in Swift
- How can I make a GraphQL server using Rust using the async-graphql and actix-web libraries?
- Write code to create a Dropbox clone in Rust
- How do I make an HTTP request in Javascript?
However, just like with Copilot, it can introduce subtle bugs. For example, in the Dropbox example, notice how it uses synchronous code in an async function. Also, longer prompts will reach the token limit and get cut off.
Interestingly, this must be what Replit is also using for their AI codegen service too (https://replit.com/site/ghostwriter) since I believe they have some connection to OpenAI, as they got early access to OpenAI's service last year with their previous Codex product (https://blog.replit.com/codex).
The problem with GPT is that it seems to be in an uncanny valley, where it's smart enough to be convincing but not smart enough to be completely accurate. So using it is often lower productivity than just writing from scratch because you have to spend more time carefully going over every line of code for subtle errors.
Indeed, I use Copilot as a glorified autocomplete, just for the next line or next few lines, which I then go over. Usually it saves time because it's what I was gonna write anyway.
I never use it for full function generations, they're almost never correct.
That'll be an interesting job. AI generated code bug fixer. An AI generates all code in bulk and then it's handed to you to fix the 'subtle' bugs they introduced. Those bug fixes go back into training the algorithm so that the bugs die out over time!
I asked it a question about how to do something with dates in the .NET Entity Framework, and it gave me a long explanation with code of how rowversion is used for optimistic concurrency control. Completely unrelated to what I was asking.
I did ask it something more basic: "how to use fetch to download and parse a csv", and the code was helpful, although not much different from what I could find with google/stackoverflow. However, this next part was what really impressed me. The results used Papa for parsing, so I then asked it to do it with csv-parse instead, and it worked! That seems like the advantage over something like Copilot--it has contextual history of the question which can be used conversationally to tweak the results.
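As a rough illustration of the parsing half (this toy parser is my own, not the chatbot's output; real libraries like Papa Parse or csv-parse also handle quoting, escaped commas, and streaming, which this deliberately does not):

```javascript
// Toy CSV parser: split records on '\n' and fields on ','.
// Good enough for simple, unquoted CSV only.
const parseCsv = (text) =>
  text.trim().split('\n').map((line) => line.split(','));

// Fetching would then look like this (browser or Node 18+):
// const rows = parseCsv(await (await fetch(url)).text());

console.log(parseCsv('name,age\nada,36')); // → [["name","age"],["ada","36"]]
```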
> I asked it a question about how to do something with dates in the .NET Entity Framework, and it gave me a long explanation with code of how rowversion is used for optimistic concurrency control. Completely unrelated to what I was asking.
Most programming jobs now require someone who can specify the exact syntax to satisfy the requirements of a compiler (or interpreter) and basically nobody checks the correctness of the (assembly) code that it generates.
You don't need AGI for that, just compute. As soon as you formulate the requirements to the model (objective function) and the dataset, methods can be iterated and found automatically, in principle.
If it can program a desktop application it can program and evolve tensorflow/pytorch scripts.
With the number of people using GitHub and Copilot (including me), even with all the controversy surrounding it, I'm sure the assumption will hold true. Most people simply don't care enough about their code to stop it being trained on a neural net, and if they get benefits from it in the form of tools like ChatGPT or Copilot, they're even less likely to care.
Code can be produced by AI. Eventually the consistent requirement will be only compute and energy. If the AI can get influence over those it might be unstoppable. Even if not intelligent in the human sense. (Less Agent Smith more Plague of Locusts)
It's annoying blog spam that will hijack your attempt to scroll back with a modal asking you to enter an e-mail and subscribe, but it seems to be roughly the same thing, except the entire article appeared at once and I didn't need to interact with it and wait five minutes for the full answer.
We've come up with a new pedagogical method to teach 4 year olds arithmetic. With two months of study they can easily multiply two 15 digit numbers in their heads.
Well I can multiply those two numbers in a calculator to get the same result, and I don't have to feed or take care of it.
I feel like I've mostly seen these AIs produce code on things that are mostly just copy-paste examples (login, fibonacci, primes). From a theoretical standpoint I'm only impressed if it's combining things to achieve something that has no code example on the internet.
From a practical perspective I do think a lot of programmers are doing copy-paste in their day jobs, which means they should be reusing existing code (I would imagine there are numerous reusable customizable login forms).
---
For example I asked it this:
Can you write a javascript function that takes a callback and returns the value of that callback, but returns a cached value if that function has been called less than 5 seconds ago?
And it produces code that looks right but doesn't work (doesn't handle arguments, failing its own example implementation). However, upon reminding it to support arguments it does work. I guess this is actually potentially useful.
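For reference, one possible shape of the argument-aware version (my own sketch, not the chatbot's actual output; it keys the cache on the serialized argument list, so distinct arguments get distinct cache entries):

```javascript
// Wrap fn so that repeated calls with the same arguments within 5 seconds
// return the cached result instead of calling fn again.
function cacheFor5Seconds(fn) {
  const cache = new Map(); // key: JSON of args, value: { value, time }
  return (...args) => {
    const key = JSON.stringify(args);
    const hit = cache.get(key);
    if (hit && Date.now() - hit.time < 5000) return hit.value;
    const value = fn(...args);
    cache.set(key, { value, time: Date.now() });
    return value;
  };
}

let calls = 0;
const slowDouble = cacheFor5Seconds((x) => { calls++; return x * 2; });
slowDouble(21); // computes
slowDouble(21); // served from cache
console.log(slowDouble(21), calls); // → 42 1
```

Note the JSON.stringify key only works for JSON-serializable arguments; that limitation is the kind of caveat the chatbot's version also glossed over.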
Yeah, for real, I don't know why everybody is so impressed. ChatGPT is basically just copying what's already on the internet and rewording it. It's not thinking for itself, and I personally think we will lose touch with different ideas from multiple sources and people: an AI gives you only one answer, whereas Google gives you multiple opinions from different people, which gives you more options.
Not to mention the copyright issues with this. It's just regurgitating what's already on the internet not sure why it's ok for them to copy people's work but if you do it you will get sued.
Holy moly, it's not copy-paste.
Keeping with the season, I tried today's Advent of Code puzzle:
```
One Elf has the important job of loading all of the rucksacks with supplies for the jungle journey. Unfortunately, that Elf didn't quite follow the packing instructions, and so a few items now need to be rearranged.
Each rucksack has two large compartments. All items of a given type are meant to go into exactly one of the two compartments. The Elf that did the packing failed to follow this rule for exactly one item type per rucksack.
The Elves have made a list of all of the items currently in each rucksack (your puzzle input), but they need your help finding the errors. Every item type is identified by a single lowercase or uppercase letter (that is, a and A refer to different types of items).
The list of items for each rucksack is given as characters all on a single line. A given rucksack always has the same number of items in each of its two compartments, so the first half of the characters represent items in the first compartment, while the second half of the characters represent items in the second compartment.
For example, suppose you have the following list of contents from six rucksacks:
vJrwpWtwJgWrhcsFMMfFFhFp
jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
PmmdzqPrVvPwwTWBwg
wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
ttgJtRGJQctTZtZT
CrZsJsPPZsGzwwsLwLmpwMDw
The first rucksack contains the items vJrwpWtwJgWrhcsFMMfFFhFp, which means its first compartment contains the items vJrwpWtwJgWr, while the second compartment contains the items hcsFMMfFFhFp. The only item type that appears in both compartments is lowercase p.
The second rucksack's compartments contain jqHRNqRjqzjGDLGL and rsFMfFZSrLrFZsSL. The only item type that appears in both compartments is uppercase L.
The third rucksack's compartments contain PmmdzqPrV and vPwwTWBwg; the only common item type is uppercase P.
The fourth rucksack's compartments only share item type v.
The fifth rucksack's compartments only share item type t.
The sixth rucksack's compartments only share item type s.
To help prioritize item rearrangement, every item type can be converted to a priority:
Lowercase item types a through z have priorities 1 through 26.
Uppercase item types A through Z have priorities 27 through 52.
In the above example, the priority of the item type that appears in both compartments of each rucksack is 16 (p), 38 (L), 42 (P), 22 (v), 20 (t), and 19 (s); the sum of these is 157.
```
Adding the question: "Write a solution in JavaScript to find the sum of the priorities of those item types." got me:
Since I found out about this last night I have already used it to do most of the work of three (relatively simple) programming tasks in bash and JavaScript. One of them was a simple class needed to prevent skipping/reusing numbers in some process that multiple users were accessing (and could sometimes fail). The other was adding up and deleting some temp files in my specific directory structure. And then there was a certain way that I wanted to add a duration to a date and format it.
For a few of them it made a small error of one line that I had to fix. But also in a few cases I was able to just reply that there was a bug and what it was doing incorrectly and to fix it, and it worked.
It gave me code that was very specific such as using the methods as I named and described them in the request for the class.
It really depends on what you want to do. I just told it to build a React-like framework in Prolog. It first told me that's not possible because Prolog can't be used to build GUIs (not really true either). I then told it that it could just output the DOM manipulations as text and it agreed. But said it's just a text model and can't write programs.
I tried asking it some programming questions -- I found if I asked it things that I was likely to find lots of copies of (or similar things), it did quite well. As soon as I asked it something a bit more unusual, even if the program was quite small, it broke.
One easy way of catching out is to ask "write X in language Y", where X isn't something most people would write in Y -- it usually got it wrong for me.
Right, it seems so cool at first, but very quickly just feels like... Google. I know it's a sophisticated thing, but I really can't sustain enthusiasm for it. I don't understand how all the literal and figurative energy that goes into this is ultimately worth it, beyond being like "huh, cool" for a few minutes. It is immediately apparent that there aren't, like, synthetic judgments happening in the Kantian sense or whatever, just parsed HTML with maybe-wrong answers and a very good ability to make it feel conversational.
I'll stick to man pages and blogs for now. I like learning!
Just got kicked out of it an hour ago because it's too busy now.
Been playing with it the whole day - got an idea on how to fix an old bug in an old programme of mine (that is still running).
But, here's the thing, how good it is depends on the available knowledge out there.
I've now tried JS, Python, Solidity, Apps Script (not tested the code), Excel, and Google Sheets.
When you're building common js, python, etc stuff, it's really handy to have one screen that basically gives you the answer rather than having to google it, parse through the posts, etc.
However, if you're on the bleeding edge of tech, or looking for more obscure topics, it really starts showing its limitations.
I managed to run into that, for example, looking for Jupyter notebook commands, optimisations, and versioning (it gave me Python 2 results instead of Python 3, with obsolete/unmaintained packages).
Its limit is basically what has been created and knowledge already available.
Depending on what you're building, would suggest understanding what you're copy pasting and possible vulnerabilities that may arise from the code.
Overall, I'm sold - defo wouldn't mind paying 20 to 40 a month for this. If it's pricier, not so sure.
Can it work to identify vulnerabilities? For instance, if you feed it code and ask it to find the vulnerability and provide a description of the issue?
"in the style of a fast-talkin' wise guy from a 1940's gangster movie" is hilarious - I'm going to steal this and try other things to see what it gets me
It's decent for some results, but almost meaningless in others. I'd asked it to build a SwiftUI VideoPlayer application which is 5 lines of code. But I got an old solution, and despite giving proper guidance, it failed colossally and printed garbage towards the end.
If the text and code are just random stuff making no sense then I wonder how different it is from the Copilot results (besides the convincing style of answers).
I wonder if this is lifted from a React tutorial, or if it is an amalgamation of other things.
The big question is: whose code is this? Is it, verbatim, someone else's copyrighted code (assuming a snippet like that is novel enough to be copyrightable)? Or is it derived from someone else's code? And if it's derived, is it a derivative work in the legal sense, or is it no more derivative than if a human had read a tutorial or some documentation and then later written code like it?
Genuine question for everyone here: if you were switching careers from a non-technical profession to learning computer science, where would you focus? Do you have any confidence in which skills will be future-proof? Or is everything open to being automated and humans are just biding our time? :)
We just had a conversation about this in the company. I think it's a good way to get the code that we know how to write but waste some time writing. E.g., I needed a function that changes the index of an array element, so I asked ChatGPT, which gave me a correct function. I know how to do that, but I'd need some time to think it through and test it out. This is just an augmentation of programming: tools that help us write better code.
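A sketch of the kind of helper described (the name and exact signature are my own guesses, not the commenter's actual code): move the element at one index to another, returning a new array.

```javascript
// Move the element at `from` to position `to`, without mutating the input.
function moveIndex(arr, from, to) {
  const copy = [...arr];
  const [item] = copy.splice(from, 1); // remove the element
  copy.splice(to, 0, item);            // reinsert it at the target index
  return copy;
}

console.log(moveIndex(['a', 'b', 'c'], 0, 2)); // → ["b", "c", "a"]
```

Trivial to write, but exactly the sort of thing where a generated, already-tested-looking version saves the "think and test" time the commenter mentions.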
Maybe it can substitute for Stack Overflow, and we will get better and faster code and explanations. But in the next 30-50 years, who knows. I guess it can write a lot of code for us, and maybe we will be obsolete in that period of time, but as we know, predictions of the future often go really, really wrong.
I had a whole discussion with it yesterday about category theory as applied to asynchronous streams, and it was great. I think if it were a person we'd both have come out better programmers, but I guess it was just me.
It should be noted (and admitted in the blog post) that the high-level results from the new GPT-3 text-davinci-003 model and ChatGPT should be similar, albeit without fancy syntax highlighting.
Well, mostly. I tried the prompt in the OP into text-davinci-003 and got this:
```
const handleLogin = (e) => {
  e.preventDefault();
  // check if the email and password are valid
  // perform login
  console.log(email, password);
};
```
Although that was with temperature = 0.7. temperature = 0.0 gives more reasonable results.
Write a javascript file using the Express framework in NodeJS containing CRUD routes for a TODO entity that contains a name, a description, and datapoint indicating whether the TODO is completed or not, and connect this entity to the Sequelize module from the npm package manager
And it spat out a full tutorial on how to write an Express CRUD app hooked up to Sequelize with SQLite, with Sequelize types and all CRUD routes. It's pretty impressive.
I've been playing with it, asking it to create a rest api for example for some blockchain testnet etc.
My assumptions up until now is that it provides kind of a template result for many things, which you could have searched online. It's like a complement to a search engine.
This works wonders assuming all code moving forwards will only be written with tools it was trained against. I imagine it will have no clue how to generate code for a fresh new framework.
Can it find errors, though? I mean, when people start to rely too much on a computer brain, not using/training their own, they essentially become slaves to this system.
This is awesome. Would be amazing for a tutorial as well. I can code (defined loosely) in Python, but it seems like this would accelerate learning a new language significantly.
Glad to see someone changing their mind on here. I also would like to say that I have tempered my excitement since yesterday too, as like in your other comment I've found some ways that it can be wrong, often in subtle ways.
It's literally the model's output, in real-time. The GPT-3 API lets you turn streaming on or off, but if you don't enable it you have to wait much longer for the completion to finish entirely, before getting anything back.
Yes, because of the way the AI's architecture works: it generates the next token in a phrase based on the previous tokens. You can see it happen in real time, or you can wait until it's all finished.
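That autoregressive loop can be sketched like this (the "model" here is a toy lookup table standing in for a real language model):

```javascript
// Toy next-token generation: each token is chosen from the tokens so far.
// A consumer can stream tokens as they are yielded, or collect them all.
function* generate(prompt, maxTokens) {
  const next = { hello: 'world', world: '!' }; // hypothetical bigram table
  let last = prompt;
  for (let i = 0; i < maxTokens && next[last]; i++) {
    last = next[last];
    yield last; // streaming: this token is available immediately
  }
}

console.log([...generate('hello', 5)].join(' ')); // → "world !"
```

Streaming just means consuming the iterator as it produces tokens; non-streaming means waiting for the whole loop to finish, which is why the non-streaming API feels so much slower.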
Here's how it went: https://i.imgur.com/PS6oPsq.png
A pretty incredible developer experience, if you ask me!
It's lying, though, when it claims its regexes match the empty parameter cases, right? (The regexes do what's asked; just the explanation seems buggy.)
Just ask the student in person to explain the paper to you.
[1] https://en.wikipedia.org/wiki/L%27Arriv%C3%A9e_d%27un_train_...
It just meant I could load up Regex101 having already gotten 90% of the way to the end result.
http://tug.ctan.org/info/tex-nutshell/tex-nutshell.pdf
- which may contain 0 or more \parameter macros
- of which each will either have one (\parameter{123}) or two (\parameter{123}{blue}) arguments
... besides regex, how else would you suggest replacing the entire macro with simply the value of its first argument (i.e. 123)?
I can do it in one line with regex, but if there's an easier way, I'm all ears :)
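For what it's worth, the one-line regex approach under the stated assumptions (default catcodes, no nested braces inside arguments) might look like this in JavaScript; PHP's preg_replace would be analogous. This is a sketch, not the commenter's actual regex:

```javascript
// Replace \parameter{first} or \parameter{first}{second} with just the
// first argument. Assumes arguments never contain nested braces.
const stripParameter = (latex) =>
  latex.replace(/\\parameter\{([^{}]*)\}(?:\{[^{}]*\})?/g, '$1');

console.log(stripParameter('x + \\parameter{123}{blue} = \\parameter{y}'));
// → "x + 123 = y"
```

The `[^{}]*` character class is what enforces the no-nesting assumption; handling nested macro calls would need a real parser rather than a regex.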
Sam Altman <--> Paul Graham <--> Amjad Masad might be it.
Sounds exactly like some of my co-workers :)
1. those that can specify concise requirements for the AI
2. those that can check correctness of the code the AI generated.
Building an AI is just like building a website. It can mostly be done by an AI as well.
https://shot.3e.org/ss-20221201_153225.png
``` One Elf has the important job of loading all of the rucksacks with supplies for the jungle journey. Unfortunately, that Elf didn't quite follow the packing instructions, and so a few items now need to be rearranged.
Each rucksack has two large compartments. All items of a given type are meant to go into exactly one of the two compartments. The Elf that did the packing failed to follow this rule for exactly one item type per rucksack.
The Elves have made a list of all of the items currently in each rucksack (your puzzle input), but they need your help finding the errors. Every item type is identified by a single lowercase or uppercase letter (that is, a and A refer to different types of items).
The list of items for each rucksack is given as characters all on a single line. A given rucksack always has the same number of items in each of its two compartments, so the first half of the characters represent items in the first compartment, while the second half of the characters represent items in the second compartment.
For example, suppose you have the following list of contents from six rucksacks:
vJrwpWtwJgWrhcsFMMfFFhFp jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL PmmdzqPrVvPwwTWBwg wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn ttgJtRGJQctTZtZT CrZsJsPPZsGzwwsLwLmpwMDw
To help prioritize item rearrangement, every item type can be converted to a priority: In the above example, the priority of the item type that appears in both compartments of each rucksack is 16 (p), 38 (L), 42 (P), 22 (v), 20 (t), and 19 (s); the sum of these is 157.
```

Adding the question: "Write a solution in JavaScript to find the sum of the priorities of those item types." got me:
```
const input = `vJrwpWtwJgWrhcsFMMfFFhFp
jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
PmmdzqPrVvPwwTWBwg
wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
ttgJtRGJQctTZtZT
CrZsJsPPZsGzwwsLwLmpwMDw`;

const rucksacks = input.split('\n');

const sumOfPriorities = rucksacks
  .map(rucksack => {
    const half = rucksack.length / 2;
    const firstHalf = rucksack.slice(0, half);
    const secondHalf = rucksack.slice(half);
    return [...firstHalf].filter(item => secondHalf.includes(item));
  })
  .reduce((total, common) => {
    // a-z -> 1-26, A-Z -> 27-52
    const code = common[0].charCodeAt(0);
    const priority = code >= 97 ? code - 96 : code - 38;
    return total + priority;
  }, 0);

console.log(sumOfPriorities); // 157
```
Which is almost perfect, even though it's a new puzzle.
For a few of them it made a small error of one line that I had to fix. But also in a few cases I was able to just reply that there was a bug and what it was doing incorrectly and to fix it, and it worked.
The code it gave me was very specific; for example, it used the methods exactly as I named and described them in my request for the class.
One easy way of catching it out is to ask "write X in language Y", where X isn't something most people would write in Y -- it usually got that wrong for me.
I'll stick to man pages and blogs for now. I like learning!
Been playing with it the whole day - got an idea on how to fix an old bug in an old programme of mine (that is still running).
But, here's the thing, how good it is depends on the available knowledge out there.
I've now tried JS, Python, Solidity, Apps Script (haven't tested the code), Excel, and Google Sheets.
When you're building common JS, Python, etc. stuff, it's really handy to have one screen that basically gives you the answer, rather than having to google it and parse through the posts.
However, if you're on the bleeding edge of tech, or looking for more obscure topics, it really starts showing its limitations.
I ran into that, for example, looking for Jupyter notebook commands, optimisations, and versioning (it gave me Python 2 results instead of Python 3, with obsolete/unmaintained packages).
Its limit is basically whatever has already been created and made available as knowledge.
Depending on what you're building, would suggest understanding what you're copy pasting and possible vulnerabilities that may arise from the code.
Overall, I'm sold - I definitely wouldn't mind paying $20 to $40 a month for this. If it's pricier, not so sure.
- Solidity contract bug: https://twitter.com/gf_256/status/1598104835848798208?s=20
- Simple JS bug: https://twitter.com/bentossell/status/1598269697371185153?s=...
- Buffer overflow (though ChatGPT generated this one as a code example, and explains it): https://twitter.com/bentossell/status/1598269719202521088?s=...
- Explaining worst case bubble sort: https://twitter.com/bentossell/status/1598269721882681344?s=...
- Fixing a Kubernetes file: https://twitter.com/PhilipZe9/status/1598303853719560196?s=2...
https://ibb.co/KwGLy9C
Here's my convo with ChatGPT: https://imgur.com/a/eIjvh8C
If the text and code are just random stuff making no sense, then I wonder how different it is from the Copilot results (besides the convincing style of the answers).
The big question is: whose code is this? Is it, verbatim, someone else's copyrighted code (assuming a snippet like that is novel enough to be copyrightable)? Or is it derived from someone else's code? And if it's derived, is it a derivative work in the legal sense, or is it no more derivative than if a human had read a tutorial or some documentation and then later written code like it?
Maybe it can substitute for Stack Overflow, and we'll get better and faster code and explanations. But over the next 30-50 years, who knows. I guess it can write a lot of code for us, and maybe we'll be obsolete in that period of time, but as we know, predictions about the future often go really, really wrong.
Well, mostly. I fed the prompt in the OP into text-davinci-003 and got this:
Although that was with temperature = 0.7. temperature = 0.0 gives more reasonable results.
---
Write a javascript file using the Express framework in NodeJS containing CRUD routes for a TODO entity that contains a name, a description, and datapoint indicating whether the TODO is completed or not, and connect this entity to the Sequelize module from the npm package manager
And it spat out a full tutorial on how to write an Express CRUD app hooked up to Sequelize with SQLite, with Sequelize types and all CRUD routes. It's pretty impressive.
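I don't have that transcript handy, but the core of what such a tutorial generates is roughly this. Here's a hand-written sketch of the same CRUD logic over a plain in-memory store, with Express and Sequelize stripped out so it runs standalone (the field names and function names are my assumptions, not the model's output; in the generated version each function would back a route and the Map would be a Sequelize model):

```javascript
// In-memory stand-in for the TODO entity: name, description, completed.
let nextId = 1;
const todos = new Map();

// POST /todos
function createTodo({ name, description, completed = false }) {
  const todo = { id: nextId++, name, description, completed };
  todos.set(todo.id, todo);
  return todo;
}

// GET /todos/:id
function getTodo(id) {
  return todos.get(id) ?? null;
}

// PUT /todos/:id
function updateTodo(id, changes) {
  const todo = todos.get(id);
  if (!todo) return null;
  Object.assign(todo, changes);
  return todo;
}

// DELETE /todos/:id
function deleteTodo(id) {
  return todos.delete(id);
}

// Usage:
const t = createTodo({ name: "ship", description: "ship the thing" });
updateTodo(t.id, { completed: true });
```

The Express/Sequelize version is mostly this plus boilerplate, which is presumably why the model handles it so well: it's an extremely well-trodden pattern.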
What is this? I just gave my email. Isn't that enough?
My assumption up until now is that it provides a kind of template result for many things, which you could have searched for online. It's like a complement to a search engine.
https://pbs.twimg.com/media/Fi4A0W8UAAAFF47?format=jpg&name=...