I will agree that AI is the limiting factor; it puts Apple in a very odd position where they can’t throw developers at a problem to make it better in the traditional way. You can’t prompt-engineer your way out of some of these issues. I know they’re working on various other AI initiatives as well, so I don’t wanna pretend that all they’re doing is prompt engineering.
All that said, Apple deserves to be smacked hard for how much they pushed AI in the new iPhone while having delivered practically none of it.
Lastly, who thought it would be a good idea to update Siri’s UI without updating Siri’s capabilities? I can only imagine how they’re going to pitch the “new Siri for real this time” when the UI hasn’t changed.
"If it’s 100% accurate, it’s a fantastic time saver. If it is anything less than 100% accurate, it’s useless."
This.
Finally, some of the mainstream media is seeing through the BS. I'm not an Apple fanboy, but they're one of the few companies refusing to drink the AI Kool-Aid the cool kids keep pushing, and that's a good thing for users.
The complaint about reliability is exactly the one I've had from the start. There are places where this doesn't matter much and adds to the product (e.g. generating conversational content for Skyrim characters), but there are a lot of places where overconfidence in a semi-accurate method ranges from annoying to fatal (code generation is a good example).
Look, if you want to waste your time, go ahead. It's your time. But I'm really over people confidently predicting that all code will be written by AI when I've seen the slop that's produced. I'd much rather craft something that works as intended than "save time" only to end up debugging shit code.
It’s a great example, too. Think if your calendar moved some of your events around without telling you 5% of the time. Or if your email server didn’t send 5% of your emails. Or if your car didn’t start 5% of the time. And I use these examples because AI people want AI to be completely rooted in your life, like a car or a calendar.
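To put a number on that, here's a quick back-of-the-envelope sketch. The 5% failure rate comes from the examples above; the per-day and per-month action counts are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope: how a small per-action failure rate compounds.
# The 5% rate is from the examples above; the action counts are
# illustrative assumptions.

failure_rate = 0.05  # tool silently gets it wrong 5% of the time

for actions in (1, 10, 100, 1000):
    # P(at least one failure) = 1 - P(every single action succeeds)
    p_any_failure = 1 - (1 - failure_rate) ** actions
    print(f"{actions:>5} actions -> {p_any_failure:6.1%} chance of at least one failure")
```

At that rate, ten interactions already carry a ~40% chance of at least one silent failure, and a hundred are effectively guaranteed one, which is why "mostly accurate" doesn't cut it for a tool that's rooted in your daily life.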
I find that I use AI tools quite regularly, including integrating them into data pipelines, using them to help with coding, and using them as a research tool. But I'm also a data scientist who works on ML applications in geoscience; I should be expected to have some nuanced understanding of when things work and when they don't. Why would some average person care to do that?
It depends on how you use the tool. If you're just having it generate the whole codebase for you, then yeah, it's probably shit, but you can definitely make it useful in ways that improve your productivity.
A great example of this is tab/jump completion in Cursor.