A fun experiment, but LLM-generated stories will never escape the uncanny valley, because they're not novel, they're remixes. I know the critique of that position is "but humans just remix the things they know from the world", but that is the same misunderstanding that those who thought truck driving jobs would be automated by 2018 made about that profession. I.e., the less you know about a field, the more likely you are to believe it will be easy to automate.
Art, in this case illustration and narrative, requires a coherent viewpoint. In literature this is often referred to as the authorial voice. Voice is special because of the uniqueness of the human life behind it, which I realize sounds a bit airy-fairy, but is very true once you start digging into what works and what doesn't. When you read something and wonder why it's so engaging when it's not _that_ much different than other very similar pieces, that's often what's at work.
It's also why some pieces really don't work for some people. The "a kid could draw that" critiques of various modernist styles come from this place.
I don't begrudge people experimenting with art through LLM generation, and perhaps it can be a great tool for scaffolding ideas, I grant that. However, I do worry about the deluge of poorly conceived LLM content that will make the good stuff harder to find. We had a glorious age where the gatekeepers had lost most of their power. With the flood of content about to be released, I'm afraid we'll find ourselves beholden to them once again.
> A fun experiment, but LLM-generated stories will never escape the uncanny valley, because they're not novel, they're remixes.
Much of children's best-selling fiction is extremely derivative. The way the market works, it's nearly impossible to get things published that aren't extremely derivative.
To the extent that's true, I think this might actually be a good place to start - with auto-generated derivative garbage - and then fix the "problems" with the plot through editing.
My first thoughts on this story were pretty negative. The plot is almost incoherent. There's not much conflict. But it does have a little bit of a mystery, and it does get resolved.
Much of it came out of nowhere (the theme park), but Alice's Adventures in Wonderland and many other extremely popular children's books are literary nonsense too.
The illustrations definitely do not fit together well, and they're not what you'd expect to get from a book you paid $20 for.
But, overall, it's really not bad.
I could see a huge market for parents wanting custom books like this generated for their kids about all their zany interests. If your kid loves baking & cute animals & mystery - they'd probably love this story.
Maybe it costs $1 or $2 - it's definitely not a big business - it's probably not going to "disrupt" the market (because it's so entrenched I don't even think disruption is possible).
If this is too much literary nonsense, I'd love to see what it could do if you asked for the same story "in the style of A. A. Milne" or Robert Louis Stevenson or James Matthew Barrie or Beatrix Potter.
I can definitely see a near future where almost every parent does this for their kid once or twice a year.
Kids happily get sucked into Minecraft video narrative "series" with totally insane plots that make no sense and are basically gibberish even line-to-line, and into dirt-cheap Asian and Eastern European CG cartoon imports on Netflix that are barely better. I'm 100% sure a mostly-AI creative process could produce something higher quality than those... um, genres, I guess, today.
(seriously, there are some so-cheap-I'm-not-sure-why-they-bothered kids' cartoons on Netflix, if you look for them—you're on the right track if it looks like it was rendered with mid-grade 2005 desktop 3D rendering tech using stock models and stock scenes with way-too-little decoration or clutter, despite in fact dating no earlier than 2015. A lot of them have whole scenes that are basically just the characters talking in circles. The plots are always extremely straightforward, with about 1/5 as much plot as they need to fill the time, even by very generous reckoning. Lots of interpersonal conflict that amounts to nothing, existing just so they can have two characters have some conflict in a scene to absolutely no thematic-, message-, characterization-, or plot-related end. Most turn out to have been originally produced in India or Romania or something, if you start digging. It's crazy. But young kids will watch these.)
Oh, I'm not claiming that content's good for them, I'm reinforcing the notion that it won't be kids who reject crappy AI stories on account of their being crappy. Adults might—and should, if the stories are indeed crappy. Kids will binge content that's truly already on par with or worse than what current AI can accomplish, if you let them.
I think it depends on how you use it. If you write a one-line prompt, you'll get a "generic" result. With more guidance, you can get something much more personal. For example, you could imagine a story and let ChatGPT pick the actual words.
In the end, you can put an arbitrary amount of work into getting a generated story. You can treat ChatGPT as a co-author who will help you with the parts you want (e.g. find ideas, find text structure, write the actual sentences). The end result depends on how much work you want to do yourself.
This is very achievable by layering it. Start by asking for a synopsis, then characters, then story arc, then outline, then chapters, then chapter outlines, then content. Each time seeding the prompt with the hierarchy of content above that level.
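For what it's worth, the layering loop is simple to sketch. Here's a minimal Python sketch of the idea, assuming a hypothetical `complete()` helper standing in for whatever LLM API you'd actually call (the placeholder body just echoes the request so the structure is visible):

```python
# Hierarchical story generation: each stage's output is appended to the
# context, so every later (narrower) stage is seeded with the full
# hierarchy generated above it.

STAGES = ["synopsis", "characters", "story arc", "outline", "chapter outlines"]

def complete(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real LLM API call here.
    return f"<generated: {prompt.splitlines()[-1]}>"

def layered_story(premise: str) -> dict:
    context = f"Premise: {premise}"
    results = {}
    for stage in STAGES:
        # Seed each prompt with everything generated so far.
        prompt = f"{context}\nNow write the {stage}."
        results[stage] = complete(prompt)
        context += f"\n{stage.title()}: {results[stage]}"
    return results
```

The same pattern extends one more level for the chapter contents; the point is just that no single prompt ever has to hold the whole book, only the layer above it.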
> A fun experiment, but LLM-generated stories will never escape the uncanny valley, because they're not novel, they're remixes. I know the critique of that position is "but humans just remix the things they know from the world", but that is the same misunderstanding that those who thought truck driving jobs would be automated by 2018 made about that profession. I.e., the less you know about a field, the more likely you are to believe it will be easy to automate.
I'm curious how many modern children's books you've read. There are a ton of very shitty... remixes with poor, cheap art. Especially books for cities and holidays. I could see this eating into that.
This piece did not seem to be written by a human. The writing style would arguably escape notice, though it was even a little too twee for a children's book. But the plot was so far out there that it's hard to argue that it seems natural. I think even most children, after having experience with children's books, would find something odd about the sequence of events.
Perhaps, if the cover was engaging enough. However this seems like the kind of story that my child (4 1/2) might read once and then never pick up again. She'd be like "ew, who wants butter all over their kitchen?"
I've been using AI image generators to help me brainstorm compositions/colors/themes for traditional artworks. Your note about lack of "coherent viewpoint" very much rings true for AI-generated visual arts. The AI will get many patterns/images "right" in a given piece but has no sense of how far to take those patterns or how different components attach or relate to each other.
Even so, it's a fun and interesting tool to use alongside traditional media, so I'm excited to see it improve.
> A fun experiment, but LLM-generated stories will never escape the uncanny valley, because they're not novel, they're remixes
Kids' books aren't intended to be novel. It's not like a four year old is going to say, "Man, this is a retread of Hamlet!" They are often re-tellings packaged up in a new way to appeal to kids in the current generation. Sometimes as simple as removing sexist, classist, and/or racist elements.
It's initially the parent who'll pick the book though (or the critic? the people who decide what to put in the front window of the bookstore? probably not the four year old though), so the uncanniness could come into play.
> Voice is special because of the uniqueness of the human life behind it
This IMO is a handwavy criticism of AI-generated content by people judging content who have the foreknowledge that the content was AI-generated.
To put it another way, pretend it was a year (or more) ago and ChatGPT/Stable Diffusion were ~unknown. You stumble across this kids book. Be completely honest here: would your reaction be "there's no way a human created this book"?
I clicked on the link agreeing with you and came back disagreeing. Maybe I wouldn't have thought "an LLM made this", but the story is disjointed in a way that makes no sense for a children's book. The structure is all wrong.
-> Hedgehog making bread
-> Loves making bread
-> One day bread stolen
-> Butter everywhere
-> Hedgehog sets out to find culprit
-> Finds culprit
-> Opens butter themed amusement park with culprit
-> The End
There is no overlapping theme or logic to the story. Children's books will have some form of gimmick and stick to it (young humans aren't all that bright), so you can have a baker hedgehog that encounters adversity (bread is gone) and maybe tie it in with a human culprit (bread thief, didn't say please or something), but the butter amusement park is out of place and uncanny.
> Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park.
It seems like the LLM did exactly what the author wanted.
Not sure why you’re getting downvoted. That prompt is in the credits and super relevant to the discussion.
I read the prompt and I can see the human’s intention: write a story about how you turn something bad into something good. But the AI transitions didn’t land and instead of getting a heartwarming story of a clever hedgehog, it just feels like a badly executed Deus Ex Machina.
> There is no overlapping theme or logic to the story.
I think that's pretty different than GP's point about art needing to have "voice." It seems entirely clear to me that AI will get to the point where it can generate something coherent in terms of plot/theme/logic - that's just a matter of time.
The ones that have stood the test of time are masterpieces of storytelling. They shine insight into a facet of life. They have a strong throughline, a message without moralizing. The best ones ground an experience to make it tangible to parent and child.
Where the Wild Things Are - a fever dream about rebellion and unrealized sexuality, hero's journey into the latent world and back.
Green Eggs and Ham - picky eating works on its own, or can stand in for just about any other unpleasant experience. The repetition mirrors the real-life experience of getting used to something you don't like.
Can't remember the name, about fuzzy little monkey people living in a hut? Really cozy, about the warmth and safety of being a cuddled-up tiny person without worry.
There's no throughline in this one. The prickly porcupine stands in for a stressed mom. The fairy stands in for a child who messes up the kitchen. The solution is play all day and leave the kitchen dirty? Doesn't hold together.
I'd take out the filler "journey" bit, add in some conflict between the mom and kid, and make the ending more realistic (fairy uses magic to make icing for the cake?). That might bring it up to library-filler tier book.
To me, the art looks really impressive though. Again, no throughline - I'd expect a progression that goes from lonely porcupine -> in the wilderness -> sharing the stage -> back at home but happy
>I'd take out the filler "journey" bit, add in some conflict between the mom and kid, and make the ending more realistic (fairy uses magic to make icing for the cake?). That might bring it up to library-filler tier book.
THIS. This is exactly the power of these models. The AI generated an entire story in an extremely short amount of time, and an experienced human can edit and curate the results. It generates first drafts trivially.
It's not an expectation that a book will be a masterpiece of storytelling. Take The Very Hungry Caterpillar; it has the simplest premise:
-> Caterpillar eats food -> Caterpillar eats more food -> Caterpillar eats even more food -> Caterpillar gets sick -> Caterpillar eats less food -> Butterfly.
The succession of events has a thread that joins them together. I'm pretty sure ChatGPT could write it as well if you gave it the above summary, but the hedgehog story doesn't have that and feels uncanny because of it.
Yeah but The Very Hungry Caterpillar is a good children's book. Maybe you don't have children but trust me there are plenty of bad children's books (especially now it's easy to self publish).
Take a look at Penguin in Peril for example.
Cats are hungry -> they go to see a film -> they are still hungry so they decide to steal a penguin -> the penguin escapes -> the penguin swims through a fountain back to the zoo -> the cats get put in prison.
I dunno maybe it's still fairly coherent, but we've gone from "completely impossible" to "bad children's books" in like 2 years. How many more years do you think it will take to get to "good children's book"? I bet less than 10.
As someone that has been reading multiple children's books per day for the last 5 years to my children, let me tell you that some books are like what you describe, with a nice intertwined story that makes sense together, and then some other books feel like randomly thrown together scenes like this that are just "and then this happened… and then this happened…". I tend to notice a lot of the latter come from either 1. Celebrities that decide they want to be children's book authors, or 2. Somebody that wants to make cool pictures and adds the story as an afterthought. With how nice these pictures are for OP's book, I would probably be tricked into thinking this book was written by person #2.
I suspect with AI we'll always be in the "this story (song, picture) is OK" category and never the "is great" category. I recall a member of ABBA being asked whether they ever felt a song would be a hit. She (this was either Agnetha or Anni-Frid) said never, except for "Dancing Queen". So AI will be able to mimic the ABBA style, but it would struggle to create something like that song.
My reaction would still be, "This is a pretty bad children's book."
The reason I might not have guessed it was a machine creation is that the prose is adequate on a technical level, and previous markers for machine creation included wonky prose. But going forward we'll all learn new markers for machine-created dreck.
There are only so many words in the English language. If I ask ChatGPT to write me a story about how I turn into a superhero tomato that saves the world against the villainy of pasta, that's a novel story, I'm pretty sure.
> A fun experiment, but LLM-generated stories will never escape the uncanny valley
Never is a really, really long time.
> Art, in this case illustration and narrative, requires a coherent viewpoint.
Why can't an AI have a coherent viewpoint, especially in the context of a children's book, where that viewpoint need only be incredibly simple? "John has lost his <insert item>, feels sad about that, and wants to get it back" is a coherent viewpoint, and certainly something AI can (if not now, at some point in the future) write a coherent story about.
> Voice is special because of the uniqueness of the human life behind it, which I realize sounds a bit airy-fairy, but is very true once you start digging into what works and what doesn't.
So what works and what doesn't? You're saying all of this at a high level and giving a timeline of forever on your prediction, but then you claim it's true once you start digging in, without actually digging in.
> It's also why some pieces really don't work for some people. The "a kid could draw that" critiques of various modernism styles come from this place.
The thing about modernism is that it requires outside context to understand. It is a movement in response to art movements that preceded it, and the fact that its style uses simplistic shapes and lines is because of what it's a response to. You need to understand that history to appreciate why it is what it is.
In that sense, you're absolutely right - a child can't produce modern art, because even if it looks visually similar to existing art, it lacks that context.
The problem with that argument is that it's totally irrelevant to the topic at hand - children's books. They do not need history or context - those things serve no purpose because, for obvious reasons, they would be lost on the audience. All you have to hit is the superficial level - if your book looks and reads like a great children's book, it's a great children's book!
On a somewhat related note, I think AI art will eventually be seen as a movement somewhat akin to modernism. In response to the fact that art has become increasingly elite and insular (the whole fact that a modern painting can't be fully appreciated just by looking at the painting - you must know art history, which most people don't have the time or inclination to learn), people will be inclined to appreciate the flood of beautiful but democratized art that can be made by anyone. Just as the generation of artists before modernism thought it was awful, so will the current generation of artists about AI art. I'm a believer that art should be about aesthetics, not history, and AI art is certainly great for that.
True but irrelevant. If we want to get to decent art, we'll need something well beyond an LLM.
Humans generate stories using introspection. Every writer is their own first reader. They rapidly and iteratively try out story elements to create experiences and feelings. They then find the right words, again seeking out particular feelings. After iterating on the words for a while, they then have others read the words: partners, friends, eventually an editor. Those people all experience feelings, which get observed and sometimes articulated to the writer, driving further revisions.
LLMs can create story-shaped things. They can write glib, imitative dreck. But to write a new, compelling book, they need to be able to iterate against high-quality simulations of reader emotions. That may be something that we can create in the lab, but if we do, it won't be called an LLM.
Right, I think humanity will some day create something resembling an AGI, but the current LLM statistical approach is extremely unlikely to be the path to that. You need many more layers of sophistication.
Like, AI art generators don't just have issues with human hands or eyes, they also create buildings which are architecturally incoherent. I don't know how you solve that with mere data or refinements.
Yea, I love how people just throw out statements like "they're not novel, they're remixes" with absolutely zero justification. What a complex AI like ChatGPT is and isn't is a very interesting conversation. But I'm not sure people realize just how much their personal bias factors in to the conversation. So often people offer something as obvious fact which has no factual basis.
As actual usage? I enjoy making stories up (or misremembering existing ones) for my daughter. The human touch is valuable to me. I also wonder: many well-loved stories of today started as a story the writer was telling his/her children; one example is The Hobbit. What if Tolkien had thought "screw this, I'm overworked and tired, I'll just type some prompt for ChatGPT and it will be enough for Christopher".
I know mine is a kneejerk reaction. But I can't help thinking that this is solving a problem no actual humans have. We know how to tell stories, good or bad. We don't need automation for this. We don't need help writing terribly bland and generic stories, either. So it feels vaguely dystopian to me.
Technically it is impressive. ChatGPT and things like it are the most impressive thing in years, to me.
It is vaguely dystopian, true, but one thing I remember in order to sleep at night is that ChatGPT and the like are trained on human-written text. So we might currently be looking at ChatGPT at its very best, or close to its very best. Reasoning: from here on out, the stuff it trains on will be polluted with the automatically generated text. Photocopies of photocopies eventually lead to blurrier and crummier images of the real thing.
We can keep paying people to come up with optimisations to the algorithm itself, keep paying annotators to manually pepper human common sense into the system, but it's my theory that these payments won't keep up with the spread of automatically generated content in the source dataset and the negative impact that has on the language model that the algorithm outputs.
ChatGPT currently enshrines insight and style from 2020-2021 (more-or-less indistinguishable from insight and style from 2022-2023), but now that the system exists, rather than observing a rapid pace of new writing styles and original insights emerging on the web of 2024, we'll potentially see a slightly slower style/original insights emergence rate, then the next year an even slower emergence rate. This will continue until it reaches a stage where the spoken world of language and world wide web world of language have completely diverged, similar to the way 1950s film dialogue bore little resemblance to 1950s speaking styles.
Short-term, ChatGPT has called creative pursuits into question, but long term, I think such systems will strongly validate creative pursuits, and only really replace non-creative roles. By turning the web into a wasteland of written cruft, GPT will validate the need for human flourishes, error, divergences from the norm and the arbitrary rewriting of unspoken rules. I think only a strong AI raised like us in our own societies could infuse that kind of culture into its writing, but the process of developing such an AI would basically just be a reinvention of slavery, and we probably don't have the resources here on earth to support it longterm anyway.
I agree with the premise that content will get more polluted, but there is an element of human vote every time we choose a prompt output and say “this is good enough for me to post/use/turn into a book”.
This is just a very convoluted way of manually labeling data as good and bad.
Even if western governments adopt similar laws, however, I'm not sure if they would be that effective. People would start messing with the definition of AI. E.g. 80 years ago a spelling and grammar checker would probably have fit society's definition of AI, and both of those techs arguably have a cultural impact on the web. Spellcheckers lead to fewer new words or dialectal variations of words coming into existence, for example.
I'm starting to worry that we're going to run pretty short of activities that humans find fulfilling and that have any notable amount of extrinsic reward or value.
What happens when the computer does tell better stories to your kids than you do? When you turn the computer loose on them for a week and, well, god damn, by the end of the week they seem to have taken exactly the lessons they needed and are bringing them up in real-life situations and are even getting better at reading themselves, and raving about how good the stories are, and you can't deny that if you'd done it, the results wouldn't have been as good?
Now repeat for all the ways that humans find it satisfying to serve others using their creativity.
I very much doubt it's healthy for all the things we find fulfilling and enjoyable to have only intrinsic motivation to drive them.
It's hard to turn down better outcomes, but what if a side-effect is that as a species we become psychologically (even spiritually, if you will) lost? So kids are raised more effectively—but for what? So that all the things they enjoy doing are valuable to, and needed or wanted by, exactly no-one?
So we get more efficient at producing goods and entertainment—for what? So the brushes and paints we buy for our paintings that nobody cares to see and that your talentless cousin can outdo—in every way, including sublimity and mood and all that—with a lazy prompt on some future image generator, are somewhat cheaper? What's the point?
> What happens when the computer does tell better stories to your kids than you do?
Others have answered "but ChatGPT will never get as good at this". Let's, for the sake of argument, suppose that it does: suppose ChatGPT does get better.
What then? Why is it a net win for mankind that a computer program can tell better (or "good enough") stories? If mechanical tasks get done by machines, and creative tasks get done by machines, and games get played by machines, and everything under the sun gets done by machines...
...what's left for us, then? Writing is an enjoyable activity for humans. It's not something to "optimize", it's fine as it is. If we optimize humans out of the activity of writing (or composing music, or drawing, etc), have we truly "won"?
That's exactly my point about the extrinsic vs. intrinsic motivation. Of course we can still do whatever we want, we can just no longer count even as much as we can now (which is already much reduced by e.g. recording and broadcast or mass manufacturing) on anyone else caring about it, needing it, or wanting it. I think being able to feel like one's work and creativity are genuinely wanted and needed by others in one's family, friend circle, and community is probably pretty psychologically important to people. It looks like all that's going to be left of that is some very-forced variety, and even that, not to most people—the organic need and desire for other people's creativity and effort may soon be sharply reduced, even further.
My question is precisely that, what will we want to do if computers do everything, including writing, reading, critiquing, buying, selling and stealing art?
We will be left to ponder our existence and meaning in the vastness of the cosmos? Nope: in this hypothetical future, computers will also be better at that than us.
I'm not threatened by computers if they can design a car, or write a better programs (though my livelihood is threatened for obvious reasons). But why would I want computers writing novels? What is there for me to gain, as a human?
(I'm specifically excluding the "wow, this tech is amazing" angle from it. I'm also amazed by ChatGPT.)
I understand your point of view, but writing novels is an activity that is enjoyable for humans. This to me feels like "we developed an AI that enjoys food better than humans": do we need an automated improvement in food-enjoyment?
Think about whatever you enjoy doing: what if computers do it better, and become cheaper and faster at it, and you are no longer necessary? Sure, you could do it for your mom or partner to see, I guess. Disappointing, isn't it? Now repeat for every conceivable hobby or human activity. Do you see where I'm going?
> access to better novels
Are current novels bad? Do we have a lack of novels? Does the process of novel-writing need "disruption" by AI? Is mankind complaining, "if only we had more and better novels"?
I think this is true, but there are (likely) more children's books out there than you can read in a lifetime, so it seems like adding more middling AI ones to the mix would not be a value-add. At least (skilled) human authors adding to the pile have a better chance to make the story meaningful.
Maybe we need a critic AI that can identify the most touching, meaningful, instructive, and morally enlightening stories out there and help us sort through the already daunting number of options.
> But I can't help thinking that this is solving a problem no actual humans have.
This. There are 10000 children's books out there with coherent art sets and a storyline that's not just total gibberish.
This kind of crap is no better than SEO blogs. Its only purpose is to pump out massive amounts of content in the hopes of earning a small amount of money on each squirt of crap that maybe amounts to something over the long term.
Build good things. Solve problems. That's how you make money.
Most writers learn by writing. This is a "solved" problem, in the sense that mankind has had writers -- successful and otherwise, good and bad -- for hundreds of years now. We got this, this isn't an activity that needs improving by AI tools. It's not an actual problem that humans have, e.g. "I would write better, if only I had an AI assistant!". Creative writing is also not about speed, it's not something that needs optimizing for that. For art in general, actually: "if only Picasso could paint faster!" is not an actual problem mankind has.
I'm talking about creative writing, mind you. Not some technical writing like creating manuals or reports -- I'm all for AI helping us with those.
I can see your point but I'm not sure it's that straightforward. Let's say you simply took a hat with your kid's favorite characters from tv/movies/books/whatever on slips of paper, drew some, then did the same for plots, then made up a story to tell your kid.
At some level, sure, it's not the same as coming up with a story on your own - there's not going to be anything meaningful in why you selected those particular elements of the story - but that merely constrains those choices to free you up to be creative with other aspects of the story.
I think there's still room for someone who wants to be creative to use the tools to provide the same experience you're describing. It's not like one has to give the raw output to one's child. And I think such raw outputs will be generally bland, missing something relative to those with a human touch. However, giving bland and meaningless entertainment to children wouldn't really be something new, and I still see a fair concern in there. I suppose this is already part of what articles on "Kids' YouTube" are talking about.
What actual problem about writing would this solve?
This is like coming up with a plot idea and having some hired writer write it. This already existed before computers. So you're essentially using GPT as a ghostwriter, only (unlike human ghostwriters) it has no way to come up with something innovative; it just regurgitates whatever is already in its data in some way or another.
"But," I can guess you'll object, "human (ghost)writers are already writing derivative crap, what's the difference?". Exactly. We already know how to hire humans to write derivative crap, and the truly good stuff is out of the question for GPT. So what is this solving?
But what about children's bedtime stories? Well, there are tons of really good bedtime stories, lovingly crafted by good authors. Do we need automated ones written by ChatGPT?
The world doesn't need another love song. But the human composing it might need it, as a form of expression, even if the result is garbage or trite. Humans enjoy creating stuff, even if nobody wants that stuff. Writers write, or their minds will explode. The creation process is its own goal.
This is not the case for ChatGPT; a LLM has no motivations.
Go ahead and create, no technology is stopping anyone from doing that. But I don't see why you can't create while using a technology like ChatGPT. Coming up with the proper prompts, rewriting some of its output, using its output as an input, etc. There's a lot of things you can do to combine your own novel thoughts with the text generated by a machine. That's a way to create things that has never really been possible.
Imagine being an OK writer and now you potentially have an editor. Imagine asking it to come up with a few ideas regarding something you're working on and you can use one as inspiration. There's just unlimited ways people will work with and integrate these new tools that you or I can't even possibly dream up.
As for asking if we need more bedtime stories - why not? Maybe it would be cool if my kid and I could use some experience we had together to create a story that features him and his friends in the mold of a classic story.
> Go ahead and create, no technology is stopping anyone from doing that
Yeah, but where did I say that? That's not my point at all.
Here are my thoughts, rephrased:
- We, mankind, don't need AI to "improve" writing. This is a problem nobody has. There's a plethora of writing, both for children and for adults.
- People enjoy writing, so this isn't an activity we want to "optimize". There's no need for ChatGPT here.
- When computers take over creating everything, what is left for us humans? If/when books are being written and published by AI, sure you can go on writing for your mom & friends I guess, but something fundamental will be lost.
- When most "content" (I hate that word) is created like this, some sort of extreme Netflixization of movies and books taken to its logical consequence, we will become consumers. Isn't that depressing?
> As for asking if we need more bedtime stories - why not? Maybe it would be cool if me and my kid could use some experience we had together [...]
I do this every night with my daughter, without AI. It's called "parenting". You don't need gadgets or computers for this. ChatGPT isn't solving an actual problem in this case either.
"Computer, tell my kid a funny story about that time something fun happened to us; I think it was yesterday, it's in my mobile logs anyway."
Why is it not parenting to create a story with an AI with your kid? Why is what you are doing better?
How do you know something is lost when using an AI to assist in writing? Could it be that something will be gained? Perhaps it will unleash creativity in people that wouldn’t have written before? Maybe the greatest literary work ever created will be built by someone using an AI. Or maybe we’ll see the limits emerge and this won’t be another John Henry story.
Your argument very much sounds like the argument of the Luddites. But here's the thing - you can always check out of it. The Amish have certainly thrived while not participating in very much of what we call "advancements". I think that's a better approach than Ted Kaczynski's.
Did I miss it somewhere, or do you not actually make physical copies of these? I would love that; I know the perfect people to gift that subscription to. Let them generate a few stories and maybe once per month select a book to get printed and delivered. This is a genuinely exciting product.
I tried it and I wonder how I could generate more images. I made a story and only had one image.
I would also love to print it out as a book (don't see myself reading a book to a kid on a laptop / tablet) and that every page would have an image.
Would also be good if I could define which content should be represented as an image.
To be clear, Bedtimestories is just the beginning; it's an easy target, and easy to market. Our mission is a lot bigger: we want to become "the narrative company" and are building the tools to make that happen.
Excited, and hope to have you and the wife and kids along for the ride!
We've built a few adapters and the layer between us and the model(s) makes it easy to swap out or support multiple.
Currently we are running 100% on OpenAI (GPT-3 & DALL·E 2), but we have tried some unofficial MJ solutions and our own trained SD models. We are working on a feature we call "magic photos" that essentially allows you to upload a set of images of a person, train your own model, and use that in as many stories as you want; you can then create unlimited variations of that character in different situations.
Here are some early versions of that: my daughter as a scientist, in her sci-fi lab. She is created on SD using around 12 images for training, with the ~5GB model hosted directly on S3. Then we just use a pay-per-use GPU provider to generate the images based on prompts.
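A minimal sketch of how that character-consistency trick might work, assuming a Dreambooth-style fine-tune that binds the uploaded photos to a rare placeholder token (the token, function name, and model path below are all hypothetical, not the product's actual code):

```python
# Sketch of the "magic photos" idea: a Dreambooth-style fine-tune binds
# the uploaded photos to a rare placeholder token, and scene prompts
# splice that token in wherever the hero should appear.
HERO_TOKEN = "sks"  # hypothetical token chosen during fine-tuning

def hero_prompt(scene: str, style: str = "children's book illustration") -> str:
    """Compose a prompt that places the trained character in a new scene."""
    return f"a portrait of {HERO_TOKEN} person {scene}, {style}, highly detailed"

# The generation step itself would load the ~5GB fine-tuned weights
# (fetched from object storage such as S3) onto a rented GPU, e.g. with
# Hugging Face's diffusers library:
#
#   pipe = StableDiffusionPipeline.from_pretrained("/models/hero-sks")
#   image = pipe(hero_prompt("as a scientist in her sci-fi lab")).images[0]
```

Keeping the placeholder token out of ordinary vocabulary is what stops the model from bleeding the hero's likeness into unrelated prompts.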
I did something similar for my younger daughter using MagicStory (not mine). I have told her the story quite a few times at her bed time. She has shown it to her teacher and her friends are "jealous." It has her pictures, kinda morphed like a princess and stuff.
The images look neat, but the stories are a little bit too simple/shallow, aren't they?
Another thing: don't you have a bad feeling uploading images of your kids, conveniently tagged with their age, to a random web service that promises to delete the images without even revealing who its makers are?
I mean, yes, it looks legitimate, and you can find a person on Twitter who seems to be the maker of it, but that's not a guarantee for the safety of the uploaded images. I'm not paranoid about data, but carelessly sharing my kids' images with a random service like that crosses a line.
> loading images of your kids ... to a random web service
I used to worry about that... but then someone pointed out to me that there are pretty much no stories of anyone being hurt by that.
Worst case, those images leak all over the web... And then what? The internet is already filled with pictures of people and children; yours won't stand out. The photos aren't tagged by name, and even if they were, few people care about Obama as a 5-year-old pirate, so why would they care about your child?
Overall, the risk of personal harm from a data leak seems really small - so I put my efforts elsewhere, like making sure my child gets to spend more time with me - that's infinitely more valuable.
I agree with this, but also note that a lot of children's books written by humans (I suppose) are terribly bland. My daughter has some Disney books which have illustrations from the films, and the writing in those is exceptionally bland. Same goes for some wonderfully illustrated, but terribly written, books from the 50s and 60s. I'd actually much prefer reading her GPT stories than those books which we have on the shelf (I guess I should get rid of them actually).
Hey! Maker of https://magicstory.ai here! Super glad to hear your daughter loved it :) we're pushing updates to improve the stories and definitely take privacy seriously - right now all images get deleted once your character's model is trained.
Also planning to have stock characters so you can make stories without uploading images since I get that's definitely a concern for most parents. Would love any other feedback on how to improve the experience!
Thanks -- totally understand! In a few weeks we will have some starter characters so that you don't have to upload any images to get a story.
Re: pricing bc of the manual time it takes to approve stories + GPU costs + super large queue we actually just started offering a paid tier to move up the queue and get your story within 24 hours. Definitely still experimenting tho!
Congrats, looks really amazing! May I ask what your AI stack is? Do you use MidJourney for the images and GPT for the stories or something else? How do you ensure the output stays consistent across images - do you use the seed image and a custom set of prompts for each image, or did you use eg StableDiffusion and did some customized training à la Dreambooth to ensure consistency? Is the workflow fully automated, or are the outputs reviewed and the prompts iterated on manually?
Thanks so much! Happy to share our stack:
- GPT3 davinci-003 writes the story
- we did a lot of manual experimenting w prompts and used that to fine-tune GPT3 which now generates the image prompts for us
- Stable Diffusion + Dreambooth generate the images (we train a model on your hero which gets deleted once your story is finished)
Right now the workflow is mostly automated, but we still manually approve stories before they get sent out to ensure quality. Unfortunately bc of manual time it takes + GPU costs + super high demand we will probably start offering a paid tier. Hope that helps!
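As a rough illustration of how those two text stages might chain together (the function names and prompt wording here are invented; `complete` stands in for whatever model call sits behind the adapter layer):

```python
from typing import Callable

def make_story_pipeline(complete: Callable[[str], str]):
    """Wire the two text stages described above: one completion call writes
    the story, and a second call (in production, a fine-tuned model) turns
    it into image prompts. `complete` is any prompt -> text function."""
    def run(premise: str) -> dict:
        # Stage 1: write the story from the parent's premise
        story = complete(f"Write a short children's bedtime story about: {premise}")
        # Stage 2: turn each scene into an illustration prompt, one per line
        image_prompts = complete(
            "Write one illustration prompt per scene of this story, "
            f"one per line:\n{story}"
        ).splitlines()
        return {"story": story, "image_prompts": image_prompts}
    return run
```

With the real stack, `complete` would wrap a call to text-davinci-003; isolating the model behind one function is what makes it cheap to swap providers later.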
The prompt didn't fully transfer to the generated story.
This is an important part of the prompt:
> she had too much butter. Because of that, things got slippery.
> Until finally she opened the first ever butter theme park.
But the generated story doesn't say that the layer of butter in the kitchen was slippery, or explain that butter in general is slippery (and hence could create "slippery slides"), which makes the idea seem like a bizarre jump:
> Sparkles was sorry for her mischief, and offered to help Penelope turn the butter back into bread. But Penelope had a better idea.
> Together, they opened the first ever butter theme park
I'm of the opinion that all kids' books should be heavy on metaphor and allegory and all of the other important and joyful aspects of language. Restricting children to books that have been "written at their level" or "made easy to read" is doing them a disservice. It's no wonder so many people grow up to rarely read if their first exposure to reading is saccharine pablum that presents no challenge, no meaning, no mystery, and no danger. A proper children's book should be one which an adult can also read and enjoy if they give way to that childlike wonder that still lives within them.
Yes, indeed! Children have a huge capacity for stories that engage their imaginations, creativity, and open sense of wonder. And, I think if we're honest, kids are often more honest critics than adults.
Author of "A Wrinkle in Time" Madeleine L'Engle said, "You have to write the book that wants to be written. And if the book will be too difficult for grown-ups, then you write it for children."
I also ain't a big fan of the idea. A few thoughts:
* (Currently) Most AI generated images have an aesthetic to them, so it's easy to spot sometimes. Typically low-res, smeared details, bad hands/body horror. This may improve over time, but I don't know how I'd feel ethically (both morally and in work/effort) leaning so hard on AI.
* For me at least, as soon as I learn something creative is completely (or mostly) AI generated I lose all interest in engaging with the content beyond surface-level consumption. I think fundamentally AI generated creative works are banal and will age horribly over time.
Honestly I think it would be much better to buy a nice paperback kid's book made by a human as a gift for your friend's kid. They'll love it and the kid will have a connection with a human author and artist.
Really? This may be AI generated to an extent, but it's also showing that the author and friends worked together and put together a labor of fun and love for a pair of friends. That'll have more meaning to that family than a random kid's book.
Yeah, that's a nice sentiment, but I don't know; kids don't care about that sort of stuff. I'd rather spend a few hours researching paperback kids' books that could positively impact their growth and plant seeds - books made by career artists and writers. It's about the kid, not OP and the parents.
This is like giving a kid an AI-generated JRPG game you made in an afternoon vs giving them a Final Fantasy game.
Honestly when I was a kid I would have been all about the AI-generated JRPG at least 60% over playing the latest Final Fantasy. I couldn't get enough different games at that age, so I'd absolutely enthusiastically dive into both.
There is a market, at least within Southeast Asia, that mass-produces stories like Aesop's fables and regional folklore. The books produced are thin 2-minute reads and are usually available for purchase for less than 1 USD. It's not surprising to see grammatical and spelling errors slip into the prints of some of these books. The artwork is basic, and while stylistically more consistent than OP's effort, it's entirely generic and forgettable (unlike the artworks of legends like Axel Scheffler or Sara Ogilvie).
I can foresee a very near future where this market is absolutely overrun with auto generated books + artwork. I don't think of this as a good or bad thing. It's just going to be interesting to see who gets the first jump on this.
I fully expect that any kind of content store (Amazon books, podcasts, audio books, newspapers, news sites, social media) will be absolutely overrun with generated word soup trying to scam people out of a few bucks - e.g. just like YouTube is overrun with autogenerated creepy children videos.
Whether having to sift through massive amount of trash to get anything created with some quality is a good or a bad thing... decide for yourself.
There'll be a lot of low effort stuff but there's also going to be a torrent of good work done with it. A lot of people seem to be missing the developments in these tools that allow you to take the initial result and then shape it through further prompts or by running it through other tools. If you're willing to put the work in to guide and polish the result you can produce some pretty good work with much less effort than it would normally take.
I wouldn’t mind if a separate genre sprang up, labeled AI-generated or such. The problem I have is, as you said, I don’t want cheap content masquerading as human-made. Human content can be cheap too, but automating it would send a torrent of trash upon us to sift through.
If you look around on Amazon a bit for books, you'll find a lot of low quality, high volume, and highly formulaic stuff aimed at different types of readers. Basically authors are churning out masses of stories with relatively simple, predictable plots with a lot of action/sex/intrigue/whatever geared to readers that gobble that stuff up as fast as it can be produced.
With AI you can improve the quality of the writing, tweak the style of delivery, add detail to story lines, and deliver stories faster.
Seen a few people trying to make these and honestly, it’s ok as a novelty thing. Like for a bday or unique occasion maybe. But using these as any kind of replacement for professionally done work feels like a disservice to children.
Good authors+illustrators for children are just so far ahead of any AI generated content right now it’s not even comparable. Something as seemingly simple as Julia Donaldson’s Smartest Giant in Town is a heart warming tale, told along a continuous theme with a consistent voice where the story wraps up beautifully at the end. Then it’s all brought together with delightful illustrations that match and delight so well. They aren’t just good illustrations. They are delightful. Heck. They even have Easter eggs of sorts. All of that gave my son and I so much to discuss, point at, draw, and sing together. Wouldn’t want any child who could have that experience to miss out on it.
Don't take this as a trolling question, but I can't help but wonder if this sentiment towards AI generated art and creativity is similar to how a chef might feel about shrink wrapped frozen food factories. Just like how a frozen meal can be thrown in the microwave and still be considered a meal, AI generated art can still be considered creative and beautiful. But, as we all know, the timing and execution to properly deliver a "real meal" from a chef is far different than a frozen meal from a factory.
I find myself conflicted on this topic as I can see how the makers of yesterday have been replaced by process automation, taking away some of the magic of the creation process. But, just as a frozen meal from Costco has its place, so does AI generated art. The macro level ingredients may be the same, but the final product is different.
Before fast food, the number of chefs and home cooking would have had to be far higher and/or more frequent than now. Some would argue that we are paying the price for all that processed food now, but others would argue it's the only way to scale people's lives and their time.
I agree that something "feels" different about Midjourney making artwork that I likely could never make myself, but I'm struggling to articulate a complete reason why. And then I wrestle with: is this a normal evolution problem? Just this time it hit something nearer to me personally and it impacts my life.
I have gone from using midjourney for hours a day in amazement to being completely sick of it and anything it outputs in less than a month.
I have seen hundreds of perfect variations on Duchamp's Fountain in my life too. AI Art is very much like Fountain variations without the human Duchamp involved. Then some AI Art fool extrapolates from that and believes Home Depot has an entire section dedicated to museum-quality variations on Fountain.
The actual output is a very small aspect of the phenomenon of "art".
Food is essential to our survival. No matter how much effort you put into making food, you have to make something to eat or buy food that is cooked by someone else. So making it more accessible for people who don't have the time, money, or necessary skills has no downsides at all.
Art is just something we do and consume for fun. We don't need to make art more efficient, because there is already too much art to consume in a lifetime. Also there is no requirement to consume as much art as possible. And automating art making means that there will be even more average art that anyone interested will have to sift through to find something nice.
Overall, all of this effort could be put into automating something that will make everyday life easier. If you want to make art more accessible, then, for example, automate translations so we can read books or watch movies that we previously couldn't.
Art is more necessary to survival than one might think. Consider the "Quartet for the End of Time", an eight-movement piece of chamber music by the French composer Olivier Messiaen, written and first performed while he was a Nazi prisoner of war in 1941. And even when it comes to food, eating well involves creativity: arranging textures, colors, nutrition, etc. The French speak of l'art de la table, or the art of a well-served meal that encourages conviviality and conversation.
But it is not like that ... at all. Analogies run the risk of being completely wrong.
A frozen food is made of real food; the process all starts from scratch with "natural" ingredients (well, ingredients).
Frozen food vs. fresh-cooked food is more like a film vs. live acting, or listening on tape vs. a live concert. Even this analogy is lacking: frozen food is made fresh at some point, so it is about mass production, not origin and ownership.
The frozen food is not food from your kitchen (or trashcan) remixed and added onto other people's plate.
Um, I'm pretty sure your analogy fell off the truck and got run over at some point...
Trying to take the 'freshness' of food and compare it to storytelling is just an abject failure here, as the vast majority of storytelling has no freshness at all; it's a rehashing of the same basic plot elements. Hell, wasn't it the Greeks who stated this a few thousand years ago?
And you're mistaking the tall flowers for the field. Dime novels and pulp mags are a great example of this. You're remembering the good stuff because it's stuck around, while the cheap, mass-produced trash ended up in a burn pile.
I think there are important differences between this situation and your chef/frozen food analogy that cause it to break down:
* A digital image is permanent. A meal, by design, is consumed and has a very specific lifespan.
* A digital image is trivial to duplicate and consume again. A meal can be very hard to duplicate (and may even require another chef, or food scientists). No two people can consume exactly the same meal.
* A digital image is not required for survival. Regular intake of food and water is (at least for humans).
All advances in creative tooling are automating creativity. What are you on about? Look at Photoshop: the color selector automates mixing paint, and digital filters for audio automate analog mixers. This attitude towards AI creative tools is gatekeeping.
The people who wrote the Photoshop source code were paid for their contributions with full consent that their work would be included in the product and sold for a profit by Adobe.
The artists whose work was fed into these systems did not consent, nor were they paid for their contribution. In fact, I have seen a few reports of people using these systems to generate lookalike artworks intended to resemble the work of specific working artists—artists whose livelihoods depend on client commissions. The only way that these systems could produce knock-off artwork after receiving a prompt containing that artist's specific name is if the artist's own copyrighted work was used in the training data.
Do you have any doubt that many working artists' copyrighted work was included in the training data that produced these systems, without those artists licensing their work for that purpose?
It's not gatekeeping to insist that a project obtain a voluntary license from the people who contribute their work to a project.
And it's not gatekeeping to point out the fact that these platforms have only been successful by using working artists' copyrighted work in violation of their rights, without asking permission or providing compensation.
No, your perspective on this is all wrong. The outrage about AI art is not about how it empowers its users.
Rather, the problem is that these systems only exists today because of the involuntary contributions of the very same people who are likely to be put out of business.
It really is about the how the contributors to these systems are being treated. The coders that contributed to Photoshop are analogous to the artists whose work was used to train these models. The difference between them is that the former gave their consent to the contribution, and the latter did not.
So, my perspective on this is all wrong, despite the fact that the original poster wrote, pretty damn specifically, "Automating creativity is so gross. Never mind the fact that these models are trained on stolen artwork and texts."
That's it, that's the entire post. Never mind the "stolen artwork" argument, you still have people upset about "automating creativity".
> Be real, copyright/attribution is only one complaint, a tiny part of the sudden hatred against AI art/creativity.
When the artists whose work was stolen to train these models are compensated for that work, then maybe it's a "tiny part". Until then, it is taking people's work and using it to create an ultimate competitor in their own market that they cannot compete with. They now have to compete against themselves, and they haven't even been paid for that "privilege".
This kind of wording reminds me of a very old (probably outdated) argument about theft of digital goods. In meatspace when you steal something you are depriving someone of a thing. If I steal your biscuit, you don't have the biscuit. But if I copy a picture of the biscuit you are still left with your own picture, perfect and identical in every way. If I'm not reselling that image to everyone, you lost on that one sale. I wonder if openAI goes and pays retail price for each image they used, would this discussion finally end?
We are talking past each other. I am not talking about, and indeed am not interested in, the compensation discussion. I will let other people battle it out. I was just responding to calling automation of art gross. I don't know who you are talking to.
This is a legit complaint, but one that's realistic to overcome in the future, I think. There have been reports of OpenAI hiring software engineers to train AI models to write code in their language. The same can be done with art; it will just take time. It's an impressive demonstration of what is achievable, though.
It’s not gatekeeping to value the outcomes of work created this way less than you would from the mind of a human.
That said, I agree with you in that I see this technology as a tool that can be used by humans to make amazing things. No one really has yet, as the outputs have just been blunted, sloppy things. But if I were an artist, I'd see these tools as the latest tool to help make things.
> It’s not gatekeeping to value the outcomes of work created this way less than you would from the mind of a human.
There is a difference between saying something is less than and calling it gross. Also, imho, value of art is usually the art not the process. Sometimes it is the process, like someone showing off a new method but that is of interest to very, very few people. Just like academic papers' methodology section. Even within the larger community, it's useful to only a few people. (Apart from just checking someone's work)
What is wrong with gatekeeping? It's not like everyone needs to be able to make art. Also, I don't know much about Photoshop, but most digital filters usually act the same as analog mixers. They do not automate anything.
It's not like everyone needs to make photographic reproductions, why not delegate the task to photorealistic painters? Now that we've gotten used to the convenience of having a camera in our pocket, who would suggest banning photography?
Most people don't make good photographic reproductions. They have mobiles with cameras, sure, but they cannot take an interesting or even technically good photo.
Good human photographers are still a thing. They can even get a job out of it! There are books about photos by photographers of renown.
AI automation here would be more like AI deciding which photos to take, and taking them for you, with your interaction limited to "take some nice photos of my vacation". Then some other AI could give likes to the photo, closing the loop so that humans need not be involved at all in that silliness of photography!
On an emotional level, I totally agree. But as a developer of creative tools, I've been trying to make sense of what's good and bad in this field. Arguably, all creative software is at least partially automating creativity (photoshop, ableton etc). But my conclusion so far is that the gross part is when the user inputs a vague request and a finished product comes out. Contrastingly, if the user can make continuous changes with realtime monitoring of their actions, then that is an instrument - mostly amplifying the creative impulses of its user (which is good).
When you're demonstrating or studying a tech (like we're doing with gpt now), there's a tendency to go "here's what it can do." Once people start using it for more mundane work, the focus becomes "here's what I can do with it."
It probably doesn't matter what children's stories language models can write themselves. It matters what stories people can write using them. Same for software/graphic design/law/etc. These are just tech demos.
This becomes clearer once demos play out. 2 years ago, scripting a philosophical debate using language models felt like AIs expressing their opinion. By now, we're past that.
I think (and hope) you're right. Recently, someone coined the phrase "Bach faucet" to describe the mundanity of an "AI" that can compose millions of novel Bach fugues that are indistinguishable from the real thing. At the end of the day that is just pointless - abundant art serves no purpose in society, and its value will be zero when considering supply and demand.
OTOH... using AI to create art shifts around supply and demand such that there will be practical implications. The way email correspondence between a customer and a bank may look very different once both are using GPT-enabled email clients.
As someone who is not that great at creating art, I like it. It enables me to build more things than I could before. I don't find it much different than a website builder is for people who lack those skills.
Thinking more deeply, I find that AI art has no real value to me, though. I wouldn't hang it on my walls or hope it survives for my kids. If I had written the algorithms that power the model, then I'd feel like more of a stakeholder and maybe I would see the output as something with long term value. To me it feels more like fast food. It fills the void, but it's nothing special-- it's disposable 'art'. But maybe that's just me.
If a person learns how to paint expertly by looking at (and even copying, stroke for stroke) paintings, would you consider that immoral, and those paintings having been stolen? If not, what would you say the difference is?
The difference is that a human made the artistic stroke-for-stroke reproduction through human skill and expertise. Even a counterfeit painting is an exercise of perception and interpretation. The value is in the human's hand in the output.
Making a lump of silicon generate a painting is far from trivial, but that's not relevant. Being trained on paintings was described as stealing, and unless your argument is that stealing must be easy, so anything non-trivial can't be stealing, I don't see the relevance.
So if there were a genius painter that learned various artistic styles on sight you would say that their work had no value?
I think you might be in trouble there: a number of savants have been known throughout history who have been able to learn various things with no instruction, and who, simply by hearing or seeing a work done by someone else, have been able to recreate it or produce something in its style.
Have you read "Pierre Menard, Author of the Quixote" by Borges?
Part of the conceit of the story is that even a mechanical copy of a known work can become something else because of the inner experience of the person reproducing it. Menard's Quixote, his fictional author thought, was different because Menard knew things and lived through things that Cervantes before him didn't, and so the exact same words are injected of different connotations because of this.
It's absurd, but it explores something about authorship.
ChatGPT and AI have no inner lives, no authorial perspective, and so they cannot use this defense. In their case, a copy is truly "stolen" with no additional value.
This is kind of a tired line of reasoning. I don't know what its goal is. The fact is that virtually no one learns or does art this way. No statistically meaningful number of people have a photographic memory and use it to create new art stroke by stroke.
There are many paintings that have been made following along to a Bob Ross video - using the same paints, the same brushes, and the same elements within the painting.
Even after following along and gaining the confidence to paint on their own, they continue to use the same approach to painting. Are those later paintings derivative works, to be treated the same way as an AI painting asked to create a scene with certain elements in the style of Bob Ross?
My question is "what is a derivative work?" This is equally applicable to images generated for a hobby as it is for professionals. It applies to art generated with a paintbrush and canvas, on a painting app on a digital tablet, and as generated though an AI model.
The definition of a derivative work is agnostic to the means to create it. The final product is what is being evaluated.
If the answer is "but the Bob Ross paintings that are derivative works by amateurs are never sold and only hung at home - it would only be an issue if they sold them or tried to display them publicly as described https://www.law.cornell.edu/uscode/text/17/106 " - then I would agree.
If I create an image through any means - be it paint on canvas, stylus on tablet, or prompt and ML and don't violate 17 U.S. Code § 106 then there shouldn't be any issue.
If, however, I publish that image - again, no matter how it is created - then there is an issue. But it isn't the maker of the paints, or Adobe, or the creators of whatever ML model generated it. A program cannot hold a copyright and cannot itself create a derivative work. ( https://www.smithsonianmag.com/smart-news/us-copyright-offic... ) and so the issue of creating and publishing a derivative work is upon the human who did it - not the computer.
If you want to sue someone for publishing an image generated as "create a landscape of San Francisco in the style of Pixar" then have Pixar go after the person who published that image. It doesn't matter what process created that image.
I'm more referring to the companies offering the tools. While some are free to use (for now), they are still profiting from this type of personal use, often being able to convert to a paid model later down the line - think GitHub Copilot.
This “gatekeeping” accusation is so stupid. It’s a laughable conclusion of the current woke/victimization culture. I have seen people on HN refer to the idea of needing some kind of skill to do something creative/difficult as “ableist”.
I mean the concept of public domain is fungible when companies like Disney can pay off their favorite politicians to get 20 more years of monopolies on their ideas that 'steal' from the society they live in.
Given that the debate on this topic alone has been raging for decades, it's not easily settled by a one-liner.
true. Personally I think the concept of public/private domain is a good thing. But like all good things, they get abused by bad actors who are huge corporations. But the artwork and texts used for training these models are often coming from small independent creators. The same creators who will be sidelined by this new technology. Bitter pill to swallow
On one side, it might be nice to create stories that fit a specific want/desire for what a child might need. We might even go for the concept of a segment of one kind of story.
On the other hand, I am picky with the stories I read to my child. I see stories I read as education. They transmit some non-conscious cultural things about being human, about handling this reality and the world.
For me, the books I read to my kid are the modern equivalent of people telling stories around the fire thousands of years ago to transmit necessary and vital knowledge. I believe (without any scientific evidence) they are shaping both how my child sees the world (through word constructs and imagination) and my child's value system.
Of course, I am trying to expose him to various stories and books as I am also not sure what I think is important is actually important for his life.
But I am a bit reluctant/skeptical whenever I buy a book launched recently. I am not 100% sure if the author just wrote a story because they want to sell something, and it does not have any other meaning. Don't get me wrong, it is ok to just buy a book because it looks nice, even if it is shallow. But there are so many books that I don't want all the books to be the same.
> But I am a bit reluctant/skeptical whenever I buy a book launched recently. I am not 100% sure if the author just wrote a story because they want to sell something, and it does not have any other meaning.
I understand your concern. But in this case the whole "book" is just a few pages long. You can vet the thing in minutes.
It is a lot harder with longer books. I vividly remember an instance from my childhood when my mom was reading us a Doctor Dolittle book. I remember it because, in the middle of the book, the heroes had a fight with a crocodile who tried to eat them, and according to the story they bound and gagged the crocodile and left it for good. As a child I felt so sorry for the crocodile, who had just followed its natural instinct, and now, bound and gagged, would surely die a slow and painful death. My mom, seeing that this upset me, turned the page back and improvised a new story, pretending she was reading from the page. In the revised story they made the croc promise not to eat them, and then they untied him.
And that was a book published half a century before us reading it. So recency is not necessarily a safeguard.
Also sometimes the chat you have with your kid about a story is as important as the story itself. For example about this linked one I would ask any kids if they also found it odd that the hedgehog gave up their passion of baking and opened a theme park. That can lead to deep conversations about the purposes of life, or maybe a much shallower but equally delightful one about the fun of theme parks.
Absolutely, there's always a moral to a story in any well-known children's book. As for trusting the old, well-tested stories more, I agree, we all grew up on them, but they also hold a darker side: many of the classics hide a lot of subtle old-fashioned social and gender stereotypes and references, as well as many hidden sexual allegories that I personally don't really want my kids to buy into - from banal ones like portraying aristocrats as superior, to the really foobar ones with all these passive princesses just sitting and waiting for that one prince charming to come and rescue them. Hats off to the new wave of fairytales (like e.g. Shrek) that are now trying to move away from all these stereotypes.
I like AI producing human-like work, but I would caution against consuming everything AI produces in use cases like this. Human creativity should take first preference and shouldn't be replaced based on the output of AI work.
It’s obviously not perfect, but it’s pretty darn impressive that AIs can do this now. Something like this was not possible just three years ago. I can’t wait to see what this technology will be capable of in the next few years.
To be honest I had the same thought 10-15 years ago when I was reading stories to my daughter. It was all written after a template, changing names of places, characters and goals for the story. Many were based on Harry Potter with kids going to magic schools and saving the world. Even a series of them with exactly the same story in each book, just a new goal for each book. The series covered half a shelf at the local library. Daughter quickly got tired of them and we started reading Terry Pratchett instead.
There's a huge industry of ghostwriting bad children's books based on what's popular. Zoe Bee did a great video on her experience writing unlicensed Minecraft stories.
But the point is that these are basically paying humans to churn out low-quality cookie cutter stories in a time frame and compensation rate that hardly permits for any real creativity. This is an obvious example for something GPT can do well, but that's just automating garbage production.
Automating Terry Pratchett on the other hand is impossible without "genuine" creativity: his stories work not just because of the technicalities (a fleshed-out world with consistent rules and characters) but also because he had "something to say". He had personal and political views, experiences and opinions that influenced his work and world building.
In essence, works like OP's and the similar ones others have linked here remind me of Hayao Miyazaki's infamous reaction to machine learning driven animation: it's an insult to life itself. It's a soulless approximation of the trappings of art that doesn't understand what it supposedly attempts to capture and can say nothing about it because it doesn't have an opinion on it.
Does anyone here really think that AI will produce a new "Jungle Book" by Kipling, a new "Mona Lisa" by Da Vinci, or a new "Requiem" by Mozart that could have even a remote possibility of becoming a cultural icon and reference? And is that a goal or event that the spectators are waiting for? And if it occurs, what would it imply about human cultural evolution?
Does anyone here really think artists using computers will be able to create great works?
This is really just a dumb question. These are tools, like paintbrushes and paint software. Any idiot can put paint on a canvas. Any idiot can make doodles in GIMP. Any idiot can write a prompt. None of these statements say anything about the upper bound of creative potential for any of these tools. The fact that a creative work could be summoned deterministically from a seed in an embedding space is irrelevant. And no, we shouldn't put old works on a pedestal. Note that your examples are basically all pre-digital.
These are not sentient AI. These are tools used by humans. And you can't judge tools' potential by the least competent users, the average user, or even the best users.
Maybe. At least ChatGPT seems to "think" so. I was curious about how ChatGPT would go with creating memes (this era’s main cultural output).
> Could the memes generated from this have a remote possibility of becoming cultural icon and reference? And is that a goal or event that the humans are waiting for? And if it occurs what would it infer about human cultural evolution?
It is possible that the memes generated from this image of a robot with a blank, emotionless expression could become a cultural icon and reference, although it is difficult to predict which memes will become popular and enduring cultural phenomena. The goal of creating memes is often to entertain and engage with a particular online community, rather than to become a cultural icon.
If a meme generated from this image were to become a cultural icon, it could suggest that humans find the idea of a robot trying to understand human behavior and emotions to be relatable or amusing. It could also indicate that the concept of AI and robots trying to understand human behavior has become a prevalent theme in popular culture.
However, it's important to note that memes are a form of digital culture that can be ephemeral and change rapidly, so it's not a certainty that this meme will become a cultural icon or reference, but it could be a possibility.
Famous art is about context, circumstances and marketing. Strip a famous piece of art of its context and it doesn’t have meaning. If you find someone who can put an AI generated piece of art in the right context, it could become the next Mona Lisa. But then, is the AI the artist or the person who uses AI to create context?
In that case are publishers also authors when the artists and writers are (contracted) tools like any other?
This is just a rehash of the debate around authorship for prompting a single image. If you had typed the same prompt into Google and picked a picture, or hired an artist with that prompt, you wouldn't claim to be the artist. Therefore you're playing a different role in the process, ML model or not: publisher, producer, whatever.
The sooner these people come to terms with it, the better, so they can promote themselves for the skills they actually have (lackluster as I find them) and leave actual authors to carve their own niches among the incoming flood of generated content.
'On your kitchen' is not something most English speakers would say; they would say 'in your kitchen'. Otherwise it seems like the butter is 'on top of' the kitchen, like on the roof. And instead of 'your kitchen' it would be 'the kitchen', since the story takes place at the house of the characters in the story and not the reader's house. Also usually children's books are much more descriptive: "Butter went everywhere! On the ceiling, on the floor, on the refrigerator, on the toaster..."
And to the point of not being kids friendly, imagine your child has this book read to them, and you find the next day that they've illustratively covered your own kitchen in butter... Most children's stories are written in a way that spells out danger or bad behavior meaningfully.
I don't know, I'm imagining reading this to my two-year-old, and using words like "layer" seems pointlessly complicated. The story seems too silly for a child with a better vocabulary. I could be off.
It's almost like the "I" should be in quotes, because if anyone else in the world gave ChatGPT the same prompts, they would get more-or-less the same story. (Visual generative AI is different.)
I think this makes things like college essay generation less scary. If two students prompt ChatGPT with "write a personal statement for the common application for college" or whatever, ChatGPT will more-or-less produce the same generic statement, with minor variations. It can not personalize for each student because it doesn't know anything about either student. Prompt engineering? Ok sure, but by the time I have written a prompt that is comprehensive enough to make an essay that describes me, I've basically written the essay myself.
So anyway, I imagine people will try to copyright output from generative AI responses - interesting question about "who" produced that output.
An absurd analogy by somebody who obviously doesn't play the piano.
To get to that point, one has to spend a painstaking amount of time perfecting their craft and improving their virtuosity on the instrument, not to mention that everyone adds in their own interpretation and dynamics to a piece.
I can sit down at my Kawai and rattle off the Maple Leaf Rag by Scott Joplin without too much issue. I'm happy to hand John Q. Public the sheet music, but there ain't no way they're playing it.
If anyone's interested in the process for doing something like this, I wrote about generating children's books at the end of last year.
The most interesting part was using ChatGPT to target and tweak certain parts of the story and merge them back in. When trying to make edits to a very small part of the story it understood what I wanted extremely well.
The most frustrating part was trying to get a consistent character in Midjourney. I tried using identical seeds and parameters. I even used a base image of the character to try and influence it. Nothing worked that well. My solution was to come up with scenes that obscured the main character's face—the ones with him peeking out from under the covers or walking down the street.
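For what it's worth, the seed problem can be illustrated in miniature. The toy sketch below is not Midjourney's actual internals (those aren't public); it only assumes that generation is a deterministic function of the prompt plus the seed. That's enough to show why reusing a seed reproduces an image exactly, yet fixing the seed alone can't hold a character steady once the scene prompt changes:

```python
import hashlib
import numpy as np

def toy_latents(prompt: str, seed: int, dim: int = 8) -> np.ndarray:
    """Derive a deterministic toy 'latent' vector from a prompt and seed.

    A deliberately simplified model of image generation: the output is a
    pure function of (prompt, seed). Reusing both reproduces the result
    bit-for-bit, but changing the prompt (e.g. to describe a new scene)
    changes the output even with the seed pinned.
    """
    # Mix the seed and prompt into one reproducible RNG stream.
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.standard_normal(dim)

same_a = toy_latents("a hedgehog baking bread", seed=42)
same_b = toy_latents("a hedgehog baking bread", seed=42)
drifted = toy_latents("a hedgehog in the woods", seed=42)

print(np.allclose(same_a, same_b))   # identical seed + prompt: exact reproduction
print(np.allclose(same_a, drifted))  # same seed, new scene prompt: output drifts
```

In real diffusion pipelines the seed fixes only the initial noise while the prompt steers the denoising, so per-scene prompts shift the character for the same underlying reason: the seed pins one input among several.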
Going to liken this to music, because I'm sure AI generated music is coming. Long before AI entered the picture, I realized that the difference between just enjoying and really getting into music was the ability to relate to the artist in some way, and through that a kind of bond is forged. As the tools become more sophisticated and there are more layers between the artist and their output, this connection becomes more tenuous. With AI in the picture it starts to disappear. I don't expect my attitude to change on this, so I doubt when AI generated music comes along I'm going to be able to enjoy it more than casually.
I wrote a few children's books when these tools first came out, but ultimately I decided against publishing them due to ethical concerns. I couldn't quite put my finger on it, but something felt dirty with the act of publishing and profiting off this work as my own.
That said, it was a very fun exercise and I look forward to the stories that are created in the future where the authors don't have the same hang up I did. Even if you don't publish, I would recommend going through the creation process at least once to familiarize yourself with how simple it actually is (tedious, but simple) to create this type of work.
I keep coming back to the fact that these technologies enable people to create simulacrums of high quality content quickly - but that shows, because of the obvious shortcomings identified by other posters.
If I gave myself a week to write a children's book using these tools with a human editor loop - could I create (or perhaps someone more talented) a great book? That seems possible. This seems like it was created in a couple of hours - and that is slightly interesting - but perhaps we will quickly move beyond the amazement of this as we start to see lots of quickly generated low quality content.
What elevates a human story higher than ChatGPT's? Do our brains have some hidden transcendental property that LLMs can never reach? Or is it simply our own bias once we learn that this is the product of an algorithm rather than human-made?
I find that Midjourney tends to have its own characteristic touch as well. It's great if you like it, but it can be tricky if you want something different. I've found StableDiffusion to be a bit more versatile, even though it's harder to get a similar level of artistic quality.
I was going for more of a kid's book tone but I think I hadn't quite learned enough about the depth and nuance you can introduce into the output via prompting. In retrospect now, I agree it has that generic-AI tone.
Absolutely. This was recycled almost directly from some of the prompt material I gave while refining. I've learned more about "prompt engineering" (as it seems to be called) since and think it would be relatively easy to smooth these edges out.
Is there any feasible way to get midjourney to use the same "character" each time ? The hedgehog and the girl (and the woods) just change - losing some emotional constant across the book.
I am "writing" a book about fairies with my seven-year-old and I think I might use Midjourney for the background scenes and Photoshop in her fairies, but... yeah, ChatGPT and Midjourney are kinda like Twitter/FB - a mirror of our society and its artifacts.
Nice idea, probably not long before people start doing this to self-publish "fiction" to clog up Amazon and Wattpad further.
Sidenote: Why does Stable Diffusion/Midjourney/Dall-E all generate animal/cartoon art that looks like a Disney/Pixar ripoff? It's worse than the generic "Corporate Memphis" art style that every company uses for their marketing websites.
Others have commented on the aspects of the story and whether we should do this with AI, etc. I'll say that I'm not opposed to the idea, but I think the results were a little weak. I think they could be made better by improving your prompts to the AI. For example, the words say that the butter trail led from the house into the woods, but the illustration doesn't show that. There's one large puddle in the kitchen, and that's it.
Another issue is that the oven that Penelope puts the bread into is very different from the oven that she later finds empty. One looks like a wood-burning oven, almost like a fireplace, the other looks like a microwave without a door. (Note the top of the oven in the first image is a curved arch, whereas the second one is a rounded rectangle.)
I think this lack of continuity between images is part of what gives the uncanny valley feeling to it. It's like some incoherent dream where neither the story nor setting is the same from scene to scene.
I used very basic prompts including words like "cute" and "happy" and environment descriptions but not much beyond that (no art style or specific colors of the characters). I felt overall the character consistency has room for improvement but could have been a lot worse. Midjourney's default "style" has a sort of specific flavor to it.
Oh, I like the simplicity of the delivery. I did the same but wanted to publish it on Amazon Kindle and got lost in the final steps. Maybe I should just put the website up as you did. Nice graphics, the character consistency is very good, and that is a bit hard to do with standard tools. Bravo.
A friend of mine has introduced his first grader to ChatGPT, Dall-E, etc.
Now one of his favorite games is "AI" and he lights up in wonder typing the kinds of things at a prompt a first grader would.
I don't have kids and wouldn't dare wade into parenting approaches but compared to how most of the kids I've seen interact with technology this (to me) is very promising. Compared to watching endless YouTube videos or even playing Minecraft or whatever it's a lot easier to see how "playing AI" could progress to a genuine interest in technology beyond staring at a tablet.
My 9-year-old wrote and published a book with DALL-E 2 and the OpenAI Playground. ChatGPT was not a thing then. It is quite interesting and powerful to see how these AIs complement a child's unbounded imagination. I would have to say, children's imagination has more entropy/perplexity than the AI could come up with - in the end, GPT was merely applying correct grammar and adding bulk to the story.
Children's imagination is the most undocumented, unexplored frontier of human imagination and I still have hope for humans on this edge of creativity.
My youngest son loves the story about the Three Goats Gruff and also the show Blaze And The Monster Machines on Nickelodeon. One of the first things I thought to do with ChatGPT was ask it to write the story of the Three Goats Gruff as an episode of Blaze. The result was pretty impressive, with Blaze and his friends matching off against a troll to race over a bridge to save three goats. Totally plausible plotline for an episode, and most importantly, my son was thoroughly entertained when I read it.
It is obvious here that computer programmers have taken ChatGPT straight to the most sensitive grounds of cultural transmission. Since there is a documented and ongoing mental-health crisis involving young teens and electronic communication, this appears to be adding fuel to the fire of whatever it is that is causing real illness on a large scale.
I did something similar back in early 2021 (that I ended up self-publishing for fun). It turned out ok at best, but it was a pretty fun project - it's now a cool snapshot of AI generation technology of the time.
The progress that has been made in text and image generation over the last ~2 years is insane.
~15 minutes with ChatGPT to turn the initial plot into a story with the tone and length I was looking for, an hour or two with Midjourney illustrating and a few more hours for the site from zero to up and running.
I had ChatGPT write a similar kids' story. There are quite a few surprises in it too! (:
Once upon a time in a faraway land, there was a king named Maximus. He was a kind and fair ruler, but he was also very demanding. He wanted to make sure that all of the subjects in his kingdom were happy and well taken care of, but he didn't always know how to do that.
One day, King Maximus decided to hold a contest to see who could come up with the best idea for improving the lives of the people in his kingdom. He announced the contest to all of the people in the land, and many of them started thinking of ideas right away.
One young girl named Emma was struggling to come up with an idea. She didn't know what to do, and she was starting to feel discouraged.
Then, a girl dragon named Supreet came to her and said, "Emma, I have an idea for how we can make sure that everyone in the kingdom has enough food to eat. We can divide the food into smaller pieces and give some to each person. This way, everyone will have enough to eat, and there will be no waste."
Emma was very impressed with Supreet's idea. She thanked her and said that she would present it to the king. However, she decided to keep Supreet's idea to herself and pretend that it was her own. She didn't want to share the credit or the reward with Supreet.
When she went to the castle to present her idea to the king, King Maximus was very impressed. He thanked her and said that he would consider it carefully. He also told her that she had done a very good job of using factors to solve the problem.
"Factors are the parts that make up a whole," the king explained. "In this case, the food is the whole, and the smaller pieces are the factors. By dividing the food into smaller pieces, you are using factors to make sure that everyone has enough to eat."
Emma was very happy to have won the contest, and she was proud of herself for using factors to come up with a solution to a problem. She didn't tell anyone that the idea was actually Supreet's, and she accepted the reward for herself.
However, the king soon found out that Emma had not been truthful about the origin of the idea. He was very disappointed in her and decided to punish her for her dishonesty.
He ordered Emma to roam all over the kingdom and teach the concept of factors to all of the children, dogs, and cats. She was to spend the rest of her life traveling from place to place, sharing her knowledge and helping others to understand this important concept.
Emma was very sorry for what she had done, and she accepted her punishment. She set out on her journey, traveling from village to village and teaching the concept of factors to anyone who would listen.
She met many new people and animals along the way, and she made many new friends. She also learned a lot from the people and animals she met, and she became a much wiser and better person as a result.
Despite the challenges and difficulties she faced, Emma remained determined and committed to her task. She continued to teach the concept of factors to all who would listen, and she helped many people and animals to understand and apply this important concept in their own lives.
In the end, Emma's hard work and dedication paid off. She became known throughout the kingdom as a great teacher and a wise and kind person. And she lived the rest of her life in peace and happiness, knowing that she had made a positive difference in the lives of others.
I hate to be the contrarian voice, but I really don't like where things are headed with these types of assistive tools.
I like the idea of using it partially to help complement some effort that you've done yourself, but the idea that you're using both an LLM to produce the story and a diffusion model to produce the pictures makes the entire piece of work almost devoid of individual effort.
Using them to produce a private story for your own child is a noble enough effort, but I'm seeing lots of comments on wanting to get these types of things published. With the explosion of these vanity press level dime novel tabloid literary works, it will be that much more difficult to sift out the wheat from the chaff.
Completely agree. People really don't understand where this is going. It's not like society lacks mediocre mid-brow content as it is. LLMs and diffusion models reduce the cost of production to basically zero, and we now have an endless supply of mediocre content. Extend it out a few 10x improvements, and we get an endless stream of so-so Netflix shows produced with zero effort. Basically WALL-E made real.
It doesn't look to me like the cost of production was zero and the content really wasn't bad. For anyone who didn't read to the end, the author (operator?) explains his process:
> I typed a series of prompts to ChatGPT to get it to write me a story with the following plot:
> Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park.
He mentions elsewhere that generating the images was a lot of work.
So basically he and his friends had an oddball premise for a story, and with some non-trivial amount of labor, he produced a story which was far from the worst thing I've read on HN.
It was actually pretty clever for a kids' story and the illustrations were good. The finished product was better than the prompt. Somehow the process produced a fairy, a plot twist and slabs of butter riding a roller coaster.
Where's the crisis here? A person came up with an idea and used a tool to generate a work product from it. Without that tool he would have needed to hire a professional illustrator and maybe a writer. I bet this took a fair bit of work to do right. But the technology enabled a creative work that wouldn't have occurred otherwise.
I have no doubt there will be tons of crappy spam enabled by ChatGPT too, but frankly this little story wasn't it, it was humorous and I doubt it was a trivial endeavor.
People are ultimately going to do that. And if we really can't tell a book written by an AI from one written by a human, then what's the point of any of it? Eventually good authors will still succeed, as their writing style will not be squeezed out of the grand summation of all the writing in the world.
1. Entire classes of skilled jobs are eliminated, only the most elite writers and artists can make money
2. Value for story creation gets concentrated in the few companies that own and operate the best LLM's and best stable diffusion models
3. Society becomes more fragmented because we are all watching different content
These are just a few I'm able to think of in 2 minutes.
It's a common mistake to assume all technological progress is an unadulterated good. Just look at all the unanticipated negatives that came out of the computer revolution, like every technological revolution before it.
Agree that the assumption is that progress, no matter what, is good and healthy.
If one considers how inefficient humans are (daily consumption, heating, transport, etc.) and how, in comparison, computers are extremely efficient (once they have been built), you realise where "pure" progress will take us.
Ephemeralization is the term Buckminster Fuller used.
I'm not at all assuming all technological progress is an unadulterated good - in fact, nothing is, everything would have a downside if it has an upside. All I'm saying is it's still bound to happen and we can do nothing about it.
It's not the first time that 'entire classes of skilled jobs' have been eliminated. It's the natural side effect of automation and has happened with every generation of new products. Industry automation might have taken millions of jobs away so far to give us better products faster. Tomorrow my job as a usual developer might be gone to these AI tools (though it's a far fetched thought), and I have to be prepared for it, and re-skill/up-skill myself if I ever sense that day is about to arrive.
Another strong agree. These models don't understand anything, they just produce something that is statistically (at many levels) close enough that it will fool most of the people, most of the time.
For those that are increasingly happy to live in, and raise their children in, that fake and simplistic environment, I suppose it's their call. But reading/watching/living in that simpler, dumber environment can't be good for our intellect and higher abilities.
And what the brain produces is not something that is statistically (at many levels) close enough that it will fool most of the people, most of the time?
Have you looked into even basic neuroscience research or perhaps at least personally known someone with, for example, dementia? It would perhaps teach you something about what the mind is and isn't and what we know and don't know and how perhaps AIs might fit in this story.
Just handwaving because it's "just statistics" is not useful.
As for the environment - well, I have my opinion on social media and online content and even most print content and most books. Will AI make that worse? Yes. But it's already bad enough that this is not a meaningful change. The good stuff has always been accessible and for a long time.
In Roald Dahl's short story "The Great Automatic Grammatizator", it becomes impossible to separate the generated from the human. The machine's output is initially published under fake names, but what most don't realize is that even for established authors the profession no longer exists, and they are left with the choice of destitution or licensing their names, under which the machine's books can continue to be published.
"And worse is yet to come. Today, as the secret spreads, many more are hurrying to tie up with Mr Knipe. And all the time the screw turns tighter for those who hesitate to sign their names.
This very moment, as I sit here listening to the howling of my nine starving children in the other room, I can feel my own hand creeping closer and closer to that golden contract that lies over on the other side of the desk.
Give us strength, Oh Lord, to let our children starve"
Thank you for saying that. The tech is really cool but it gives utterly untalented storytellers the tools to inflict really bad content on the world. The insipid and pointless plot of this one is a case in point. (I do admire the OP’s get-something-done adventurous spirit of trying out the tech and making something, kudos, but not their artistic sensibilities on this one unfortunately.) At the same time it will allow others to create fantastic stuff. We are going to need some better filters.
Seems like in another era you might get upset at the invention of the record player, because now poor people would have the ability to listen to music without having to pay to see it live.
Yes, better tools for making art means more art will be generated, and lowering the bar means less talented people might start making up the bulk of the new art that is generated, but new mediums inevitably spark creativity in a handful of emerging artists, and that enables them to create masterpieces that could not have been created before.
Counterpoint: separating the wheat from the chaff is already terribly difficult, and perhaps this will provide the necessary motivation to produce an effective method of doing so.
Google, once upon a time, solved a similar problem: with all the information on the internet, how do you find the good stuff? The relevant stuff? I don't think it does that job well at the moment - and perhaps flooding the world with content will create the conditions for the next major disruptor to build something that actually works, rather than the ad-bloated nonsense we have now.
It already takes a ton of my time seeking out books and other media that align with what I want. Humans have done a spectacular job of creating stuff that's minimally acceptable to make a sale. AI isn't creating the problem you're talking about, even if it is making it worse.
Perhaps it will become bad enough that it becomes worthwhile to fix.
Yep. Part of the problem is the set of incentives surrounding these huge centralized databases of consumer preference - the folks holding the reins have no reason to fix the situation because they make money off the circumstances as well.
This was effectively the situation with the internet when the Goog came in and wrecked everyone's cash cow - because it was valuable enough to take the hit and prospect for future value to undermine the status quo.
I see two ways to break the current stalemate:
- Some 'new google' comes by and does to google what google did to Yahoo et al.
- Somebody creates a system that does the same job, but distributed. You control your recommendation algorithm, so you can prune it as you want.
I really really want the second one, but I haven't yet figured out how one would get the necessary data. There's a bootstrapping problem here - in order for recommendation algorithms to work, you need a ton of folks creating data for them. In order to get those folks, you would need to have a good enough recommendation algorithm to attract them.
I'm wondering now, though, if data on user preferences is for sale somewhere. Like, if I wanted to bootstrap such a system, is there some adtech business somewhere that has information on what books / music / etc people like, and how much would it cost to get that in there?
But now, of course, because you're looking at spending a bunch of money to bootstrap - you're hobbled by the need to be profitable to make that back. Stuck in Trapitalism.
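The "distributed recommender" idea above can start very small: scoring unseen items by user-to-user similarity needs nothing beyond the standard library. Here's a minimal sketch of item scoring via cosine similarity over shared ratings - all users, titles, and ratings are made up for illustration:

```python
from math import sqrt

# Hypothetical user -> {item: rating} preference data. In the distributed
# scheme described above, each user would hold and prune their own row.
ratings = {
    "alice": {"dune": 5, "hyperion": 4, "twilight": 1},
    "bob":   {"dune": 4, "hyperion": 5, "foundation": 4},
    "carol": {"twilight": 5, "dune": 1},
}

def cosine(u, v):
    """Cosine similarity over the items two users rated in common."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = sqrt(sum(u[i] ** 2 for i in common))
    norm_v = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(user, ratings, top=1):
    """Score items the user hasn't rated, weighted by neighbour similarity."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:top]

print(recommend("alice", ratings))  # -> ['foundation']
```

In the distributed version, users would only need to exchange similarity scores with peers rather than hand their full rating history to a central database - which is exactly where the bootstrapping problem bites, since similarity is meaningless until enough peers have shared something.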
It can easily be polished, and that _does_ take work, so it's not zero effort. Yes, there will be a lot of generated content, and that will probably lead to a recalibration of what's important and what's not (in life, work, art, etc.).
I have been trying to use GPT-J or GPT-NeoX to create stories, but they all come out very strange. Here's one I generated using https://textsynth.com/playground.html and a prompt "Once upon a time there was a mouse called Miisa. One day she":
"Once upon a time there was a mouse called Miisa. One day she was driving through the woods and she saw a beautiful castle in the middle of nowhere. She didn’t go in but wanted to. Then she saw a door which said “The Secret Passage.” She went inside and met the owner who was the witch of the woods. “I’ve been lonely for a while and would like to live with you” said Miisa. The witch agreed. “But there’s one condition,” she said. “No matter how many times I ask you to do something, you must do it.” Then the witch asked Miisa to put a hat on her, which would make her invisible. Miisa was terrified. But she did it any way. Then the witch asked Miisa to kill a mouse which lives in her home. The witch wanted to have the mouse’s blood. “I can’t do that,” Miisa said. “I hate mice.” “Well,” said the witch. “You can’t kill a living thing. So then you must torture a living thing.” Miisa was getting scared but then a thought came into her head. She asked the witch, “Does this mean that I can torture a mouse?” The witch didn’t say anything, so Miisa said that she didn’t want to do it and started to leave the castle. The witch was so sad that she threw a fireball to stop Miisa. And that was that."
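That locally-plausible-but-globally-incoherent quality isn't unique to the big models; it's what any next-token sampler produces when it has no plan. As a toy illustration (a word-level Markov chain, a far weaker technique than GPT-J/NeoX, shown only because it runs without a model download; the corpus is made up):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=20, seed=0):
    """Walk the chain from `start`, picking each successor at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("once upon a time there was a mouse called Miisa . "
          "one day she saw a castle in the woods . "
          "one day the mouse went into the castle and met a witch .")
chain = build_chain(corpus)
print(generate(chain, "one"))
```

Each two-word window reads fine; the story as a whole goes nowhere. Larger models push that coherence horizon out to sentences or paragraphs, but the Miisa story above shows it still breaks down at the level of plot.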
Children learn much more than morals when they are small. Even "nothing happens" stories have some value for them, they teach them other normal things (bread making?), or trigger some thoughts (you need to investigate sometimes). The problem with this story is that it's stupid, there's no explanation for the fairy's behaviour, and the butter park doesn't make sense.
Kids absorb all sorts of lessons parallel/orthogonal to and sometimes completely opposite to the ones intended by the committee of creators, like "I'm being palmed off with garbage", "I'm being subtly coerced by forces I don't quite understand."
> I can not identify any deeper moral or social or any idea.
The fairy destroys the bread that Penelope has created by turning it into butter. The fairy then expresses remorse, apologises, and offers to undo this ("Sparkles was sorry for her mischief, and offered to help Penelope turn the butter back into bread"). Penelope accepts the apology, and they set aside their differences to engage in a joint venture together.
Arguably this isn't a good expression of a moral lesson, but there are moral themes here (destroying what other people have worked hard to create is bad; apologising and offering to fix your mistakes is good, forgiveness is better than holding a grudge).
Sometimes a story is just something happening, plus nice pictures. Not every story needs to indoctrinate the specific, arbitrary morals that humans like to impose on others. Let a story just be a story in which nothing of great consequence happens. Most of life is like that, and should be enjoyed that way. There's the moral story you asked for.
This is the plot he used to feed the AI: "Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park."
I can bet you that if you specifically asked the AI to include some morals, it would do so.
In fact I'll show you.
Input: what are some values that could be included in a children's book with the following plot: Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park.
Some values that could be included in a children's book with this plot could include:
Perseverance: The porcupine continues to bake bread even after it goes missing, and she uses her excess butter to create something new and exciting.
Creativity: The porcupine comes up with the idea for a butter theme park, showing that it's possible to turn a problem into an opportunity.
Resourcefulness: The porcupine uses her excess butter to create something new and exciting, rather than letting it go to waste.
Entrepreneurship: The porcupine opens the first ever butter theme park, demonstrating the importance of taking initiative and starting one's own business.
GPT-3 is great at giving generic advice, true. But that aside, it surprises me how uncritical HN seems to be when it comes to children's books. Sure, they may look simple at first glance but a lot of thought and work goes into good children's books and likewise children can get a lot out of them if they're done right.
It's not that the story necessarily has to have a moral, though many do. It's that they're funny and artful and witty. They can make kids think and dream and inspire them. Also a real quality kid's book doesn't just speak to children but humans in general. If you read it as an adult and think it's a boring story, it's not a good book for your child imo.
The linked butter story doesn't have any of this. It's grammatically correct and each sentence is somewhat related to the previous one but that's about all the positive I can find in it. It's completely meaningless.
The thing is, the majority of users here aren't even VC technologists but simple coders and tech enthusiasts. Giving this sort of reading to your child seems like the nerd equivalent of parents that let their kids watch the Disney channel all day.
It's bad enough what media some adults consume, but when it comes to children it's twice as concerning, because kids have a special ability to learn that is largely lost later in life. We're only young once; it's incomprehensible to me that someone would knowingly fill their child's brain with junk information.
Every time copyright is challenged by a newcomer, we get a revolution in communication. When Google and YouTube came along and scraped everyone's content with little regard for copyright, a new world was created. I hope all the lawsuits fall through, so that AI-generated content can do the same thing.