AI enables new tools & features but in itself is not a product.
There's a good essay from Andrew Chen on this topic: Revenge of the GPT Wrappers: Defensibility in a world of commoditized AI models
"Network effects are what defended consumer products, in particular, but we will also see moats develop from the same places they came from the past decades: B2B-specific moats (workflow, compliance, security, etc), brand/UX, growth/distribution advantages, proprietary data, etc etc." [1]
Also check out the podcast with the team at Cursor/Anysphere for details into how they integrate models into workflows [2]
Technology enables UX. When the underlying technology is a commodity—which is often the case—it's easy for competitors to copy the UX. But sometimes UX arises from the tight marriage of design and proprietary technology.
Good UX also arises from good organization design and culture, which aren't easy to copy. Think about a good customer support experience where the first agent you talk with is empowered to solve your issue on the spot, or there's perfect handoff between agents where each one has full context of your customer issue so you don't have to repeat yourself.
One could argue the "non-moats" together can accumulate into something considerable, making them a moat. A brand is definitely a moat, but it exists in the minds of consumers. That is not something you can overcome easily, even if the branded product is inferior.
Kleenex is probably a good example of this. Tissues are a commodity but nevertheless people will still pay more for Kleenex branded tissues. That feels like a moat to me.
Lots of people these days just use ChatGPT to search the web. I've actually never understood Google search ads, as I haven't clicked on one, even by accident, in 10+ years. If I want to buy something, I search within Amazon for it.
YouTube however, yeah, that is a stellar advertising platform.
Unless you adblock. But I concede that if you don't block, you do get advertised to in an amount I would describe as "astronomical", so there's even a parallel there.
An enterprise using RAG, fine tuning etc. to leverage their data and rethinking how RL and vector DBs etc. can improve existing ops ... is likely going to make some existing moats much better moats
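The RAG pattern mentioned above can be sketched in a few lines. This is a toy sketch, not any particular library's API: keyword overlap stands in for real embeddings plus a vector DB, and the names (`score`, `retrieve`, `build_prompt`) are illustrative.

```python
# Toy retrieval-augmented generation (RAG) sketch: score internal documents
# by keyword overlap with the query, then assemble a grounded prompt.
# A real system would use embeddings and a vector DB instead of this scoring.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (stand-in for cosine similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt that would go to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Shipping to Europe takes 7 to 10 days.",
]
prompt = build_prompt("how long do refunds take", docs)
```

The point of the comment holds here: the retrieval plumbing is generic, but the proprietary `docs` corpus is what makes the result valuable.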
If your visibility into the current state of AI is limited to hallucinogenic LLM prompts, it's worth digging a bit deeper; there is a lot going on right now.
ML is still a thing. I believe that most AI research is still non-LLM ML — things like CNNs + computer vision, RL, etc. In my opinion, the hype around LLMs has a lot to do with their accessibility to the general public compared to existing ML techniques, which are highly specialised.
To be fair, I remember that some 5 years ago a lot of ML was quite accessible to programmers, as it was often just a couple of lines of Python using TensorFlow, or later PyTorch.
I am almost in disbelief that LLMs are the thing that reached the "tipping point" for most companies to suddenly care about ML. The number of products that could have been built properly 5 years ago, but exist now in a slower form because of "reasoning" LLMs, is likely astonishing.
scikit too, I think. LLMs probably took off because companies think they're a legal shield against "IP infringement/theft". Amazon has access to whatever magnitude of books, and so does Google, so if FB, X, Mistral (actually I am not sure — is that a university-led project? they probably got books if so), or OpenAI want to return decent results about books, they gotta get the books too. Buying, scanning, OCR, copyediting, feeding the training scripts (JSONifying the input, most likely)? Forget that — Anna's Archive and Russia and the high seas are literally right there, 4 octets away.
I'd hope that the very first commercially successful "AI media" — be it a 1-minute commercial or a 10-minute TV segment or whatever — brings the lawsuits. I really want to know if I can feel any vindication about arguing about this (IP specifically) for the last 3 decades of my life.
More to your member-berries: whole swaths of interesting research disappeared, either abandoned or bought and closed-sourced. Genetic algorithms, artificial life, stuff with optics, 3-atom-thick transistors (hey, IBM patented that, but Microsoft also did basically the same thing with their STP qubits — everything has to be arranged at atomic widths or whatever). IBM also built a USS Enterprise (unsure if the D — I am not a huge fan) out of atoms in like 2003; I forget if it was to scale. Microsoft spent 17 years playing catch-up with *the* hardware people.
Yeah. Is the conclusion that moneyed interests suck?
Convolutional neural networks for image recognition and more generally image processing. They are much better than they were a few years ago, when they were all the rage, but the hype has disappeared. These systems improve the performance of radiologists at detecting clinically significant cancers. They can be used to detect invasive predators or endangered native wildlife using cameras in the bush, in order to monitor populations, allocate resources for trapping of pests, etc.
ML generally is for pattern recognition in data. That includes anomaly detection in financial data, for example. It is used in fraud detection.
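As a hedged illustration of that anomaly-detection idea: a robust median-absolute-deviation rule can flag an outlying transaction. Real fraud systems use learned models over many features; this toy version only shows the core pattern-recognition step, and the cutoff value is an illustrative choice.

```python
# Minimal anomaly detection sketch for transaction amounts, using the
# robust median-absolute-deviation (MAD) rule rather than mean/stdev,
# which a single large outlier would inflate.
from statistics import median

def flag_anomalies(amounts, cutoff=6.0):
    """Return the amounts whose deviation from the median exceeds cutoff * MAD."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) > cutoff * mad]

amounts = [20.0, 25.0, 19.5, 22.0, 21.0, 23.5, 20.5, 980.0]
flagged = flag_anomalies(amounts)  # the 980.0 charge stands out from the rest
```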
Image ML/AI is used in your phone's facial recognition, in various image filtering and analysis algorithms in your phone's camera to improve picture quality or allow you to edit images to make them look better (to your taste, anyway).
AI image recognition is used to find missing children by analysing child pornography without requiring human reviewers to trawl through it - they can just check the much fewer flagged images.
AI can be used to generate captions on videos for the deaf or in text to speech for the blind.
There are tons of uses of AI/ML. Another example: video game AI. Video game upscaling. Chess and Go AI: NNUE makes Chess AI far stronger and in really cool creative ways which have changed high level chess and made it less drawish.
Well “AI” is a lot more than just generic text generators. ML (read: AI that makes money) is the bread and butter of all of the largest internet companies. There’s no LLM that can accurately predict user behavior.
And even if there was, the fast follower to the Bitter Lesson is the Lottery Ticket Hypothesis: if you build a huge, general model to perform my task, I can quickly distill and fine tune it to be better, cheaper, and faster.
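The distillation step referenced here — sketched below in the spirit of Hinton-style knowledge distillation, which is an assumption about the commenter's meaning — trains the student to match the teacher's temperature-softened output distribution rather than hard labels. The loss is computed in plain Python; a real setup would run it inside a training loop with autograd.

```python
# Knowledge-distillation loss sketch: cross-entropy between the teacher's
# temperature-softened distribution and the student's softened output.
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with optional temperature scaling."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of student's softened output against teacher's soft targets."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
good_student = [3.8, 1.1, 0.4]   # close to the teacher's distribution
bad_student = [0.2, 3.0, 1.0]    # concentrated on the wrong class
```

The loss is smaller for the student whose distribution tracks the teacher's, which is the signal the fine-tuned small model trains against.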
> In short, unless you’re building or hosting foundation models, saying that AI is a differentiating factor is sort of like saying that your choice of database is what differentiated your SaaS product: No one cares.
I still think the analogy of AI to databases is perfect. No database comes set up for the applications where it gets deployed. The same is true for LLMs, except for some very broad chatbot stuff where the big players already own everything.
And if AI is just these chat bots, the technology is going to be pretty minor in comparison to database technology.
The choice of AI does matter, and older models can potentially do your task much better than newer ones.
In the course of building my side project (storytveller), I've found that newer models tend to do worse at storytelling. I've tested basically every model under the sun that is available for use in production, and one stands out. Now, it may be that others will come along and not do the research that I've done to choose the same model, and thus my application will be better than theirs in part because of the AI model I chose.
Having “AI” as part of your application will not matter as much, that I agree with, but having “the right AI” will.
Of course, the user experience will definitely matter as well, as will the marketing and other criteria — another point of agreement with the article. But that does not diminish the fact that, if your use case is not well covered by benchmarks, there is a good chance that a model that is not the newest could still be the best for your particular use case, so you should not just blindly choose the latest, shiniest model, as this article sort of implies.
No, it is not — especially the filter controls and a few other "add-ons" that are not actually core parts of LLMs. As for what they actually do with it, we don't know that either, except through leaks.
I now see AI as part 2 of the CPU evolution. I think there are lots of correlations we can draw on looking at it that way:
1) Lots of players enter at the start because there are no giant walled gardens yet.
2) Being best in class will require greater and greater capex (like new process nodes) as things progress.
3) New classes of products will be enabled over time depending on how performance improves.
There is more there, but, with regard to this post, what I want to point out is that CPUs were pretty basic commodities in the beginning, and it was only as their complexity and scale exploded that margins improved and moats were possible. I think it will play out similarly with AI. The field will narrow as efficiency and performance improves, and moats will become possible at that point.
When will people realize that the use of AI art in any piece of content is almost as bad as a typo or ads? It devalues the content, adds a barrier to acceptance, and produces the feeling that the creator does not value my time and attention.
That is true, but the current context around it — the fast rush to appropriate freely from others, of course without contributing or crediting anything back, not to mention the sludge that has exploded all over the internet — puts me in agreement with the OP. I'm more than sure that AI could be used in very intelligent ways by artists themselves, though — and I don't mean in a lazy way to cut corners and pump out content, but in a more deliberate way where the effort is visible (and I don't just mean visual arts).
Depends. I like when it's used artistically and you're intended to notice. I've been listening to a lot of AI covers and some of them lean into the artifacts to a high degree in different ways, akin to noise music. First track here is a great example:
That's something these companies don't seem to understand. Any model that is smart enough to be considered a true AI is also smart enough to teach what it knows to other AI models. So the process of creating a complex AI is commoditized. It just takes another group with access to the original AI to train other models with similar knowledge.
I also believe that, just like humans, AI models will be specialized so we'll have companies creating all kinds of special purpose models that have been trained with all knowledge from particular domains and are much better in particular fields. Generic AI models cannot compete there either.
This article didn't really say all too much. Essentially: you can't differentiate your product with prompts alone, and you need deeper integrations with workflows. OK, that's pretty clear — what else?
DeepSeek showed us very well that OpenAI, at the very least, does not have a significant moat. Also, that the ridiculous valuations dreamt up for AI companies are make-believe at best.
If — and that's a big if — LLM tech turns out to be the path to true AGI (not the OpenAI definition), everybody will be able to get there in time; the tech is well known, and the research teams are notoriously leaky. If not, another AI winter is going to follow. In either case, the only ones who are going to make a major profit are the ones selling shovels during the gold rush — Nvidia. Well, and the owners who promised investors all kinds of ridiculous nonsense.
Anyway, the most important point, in my opinion, is that it's a bad idea to believe people financially incentivized to hype their AIs into unrealistic heights.
It seems increasingly likely that LLM development will follow the path of self-driving cars. Early on in the self-driving car race, there were many competitors building similar solutions and leaders frequently hyped full self-driving as just around the corner.
However, it turned out to be a very difficult and time-consuming process to move from a mostly-working MVP to a system that was safe across the vast majority of edge cases in the real world. Many competitors gave up because the production system took much longer than expected to build. However, today, a decade or more in to the race, self-driving cars are here.
Yet even for the winners, we can see some major concessions from the original vision: Waymo/Tesla/etc have each strictly limited the contexts in which you can use self-driving, so it's not a 100% replacement for a human driver in all cases, and the service itself is still very expensive to run and maintain commercially, so it's not necessarily cheaper to get a self-driving car than a human driver. Both limitations seem likely to be reduced in the years ahead: the restrictions on where you can use self-driving will gradually relax, and the costs will go down. So it's plausible that fleets of self-driving cars are an everyday part of life for many people in the next decade or two.
If AI development follows this path, we can expect that many vendors will run out of cash before they can actually return their massive capital investment, and a few dedicated players will eventually produce AIs that can handle useful subsets of human thoughtwork in a decade or so, for a substantial fee. Perhaps in two decades we will actually have cost-effective AI employees in the world at large.
There are plenty of real applications that Nvidia is fueling. Things that make money. There will be a reckoning for the hype men, but there is a good amount of value still there.
(As always, the task of the hype man isn’t to maintain the bubble indefinitely, but just long enough for him to get his money out.)
There are more fundamental issues at play, where I see stock price fairly divorced from actual, tangible value, but the line still goes up because people are going to keep buying tulips forever, right?
It sucks because I think investing in the stock market takes away from dynamic investment in innovative startups and real R&D, and shifts capital towards shiny things.
I think what they're saying is that Cursor makes money because it's a good editor in general that integrates AI well, not just because of the fact that it uses AI.
If you just slap a ChatGPT backend onto your product, your competitors will do it too and you gain nothing without some additional innovation.
Cursor without AI is just VSCode. They came up with an AI-native code-crafting experience that no one else had thought of before, and if you asked me how they did it, I wouldn't be able to answer you.
(1) That's what the original author is saying. Their valuation is possibly incorrect.
(2) On the other hand, Cursor's value is essentially gluing the two things together. If your data is already in the castle (e.g. my codebase and historical context of building it over time is now in Cursor's instance of Claude) then the software is very sticky and I likely wouldn't switch to my own instance of Claude. The author also addresses this noting that "how data flows in and out" has value, which Cursor does.
> The AI Code Editor - Built to make you extraordinarily productive, Cursor is the best way to code with AI.
Cursor is literally a VS Code fork + AI.
> unless you’re building or hosting foundation models, saying that AI is a differentiating factor is sort of like saying that your choice of database is what differentiated your SaaS product: No one cares.
Cursor is doing exactly what they say "no one cares" about.
The article heavily emphasises the point that having the "smartest" AI isn't a moat, it's the experience and integrations that build the moat. That's exactly why Cursor is more popular than Aider.
TFA is saying that it's your product that matters, as always, and that using AI can't be your moat since everyone has access to AI.
It seems cursor did a bunch of things right, from choosing to base it on an already popular editor, to the vision and specific ways they have integrated AI, to the flexibility of which models to use. No doubt there was some "early mover" advantage too.
Certainly the AI isn't their moat since it's mostly using freely available models (although some of their own too I believe), and it remains to be seen how much of a moat of any kind or early-mover advantage they really have. The AI-assisted coding market is going to be huge, and presumably will attract a lot more competition.
I'm old enough to remember when the BRIEF (basic reconfigurable interactive editing facility) editor (by Underware) took the world by storm, but where is it now?
They are not safe against Microsoft, which has the resources to copy every feature Cursor has into VS Code, can afford to offer it for "free" for a very long time, and has access to the exact same models as Cursor.
So not only does that tell you there is no moat — offering the best tools and models for free is exactly what Microsoft's modern definition of "Extinguish" is, from their EEE strategy.
Copilot does seem to be catching up in some areas but from my testing Cursor still has better UX. There's substantial value in the "glue" that Cursor provides, one that Microsoft has failed to replicate so far.
An alternative phrasing is “you can’t build a moat with an AI model”. Which Cursor exemplifies by way of supporting 10 different models and adding more all the time.
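Supporting ten different models usually comes down to a thin adapter layer, which is part of why no single model is a moat. A minimal sketch of that design — the backend classes here are illustrative stubs, not real client libraries:

```python
# Model-adapter sketch: the application codes against one interface, and
# each backend (hosted API, local model, ...) is a small adapter, so
# individual models stay swappable commodities.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class LocalStub:
    """Stand-in for a locally hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt[:20]}"

class RemoteStub:
    """Stand-in for a hosted API client."""
    def complete(self, prompt: str) -> str:
        return f"[remote] {prompt[:20]}"

BACKENDS: dict[str, ChatModel] = {"local": LocalStub(), "remote": RemoteStub()}

def ask(model_name: str, prompt: str) -> str:
    """Route a prompt to whichever backend the user selected."""
    return BACKENDS[model_name].complete(prompt)
```

Adding an eleventh model is then one more adapter class, which is exactly why the model choice itself confers so little defensibility.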
People who say this are forgetting what it was like in the 2000s, when patent suits were flying back and forth like WWI gas shells. Once people realized that they could patent almost any old idea simply by adding "with a computer" or "on the Internet," the floodgates opened.
Rest assured, right now people are filing claims to the same old stuff, only now "with AI" tacked on. And rest assured, the rubber-stamping machine in the USPTO's basement is running 24/7 approving them.
What’s different now is that tech companies all have mutually assured destruction pacts.
Many key pieces of AI technology, like transformers, have patents. If you start trying to enforce your “…with AI” patent against Google, they’re just going to turn around and sue you for using their patented technology.
> In many ways, the whole point of AI applications is that it should feel like magic because something that you previously had to do by hand is now fully automated with believable intelligence. If you’re thinking about taking traditional forms of UX and adding AI to them, that’s an okay starting point but not a defensible moat.
No. Stop! Please! I want the UX in an app to do the damn thing I precisely intend it to do, not what it believes I intend to do — which is increasingly common in UX design, and I hate it. It's a completely opaque black box, and when the "magic" doesn't work (which is frequent, especially if you fall outside the "normal" range), the UX is abysmal to the point of hilarity.
That's fine, but there are many tasks (for example, natural-language processing) where there is no classical way to precisely define the steps you want to be performed, and hand-tuning a list of keywords or heuristics is time-consuming and less accurate than reading through the texts yourself. In this case, providing high-level instructions to an LLM to read through the text and attempt to apply the spirit of your instructions is invaluable.
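A minimal sketch of that workflow — high-level instructions wrapped around the text, with `call_model` as a placeholder for whatever LLM API you actually use (the function names and label set are illustrative, not any real library's):

```python
# Sketch of instructing an LLM to apply "the spirit" of a classification
# task instead of hand-tuned keyword lists. Only the prompt assembly is
# real here; `call_model` is any function mapping a prompt to a reply.

INSTRUCTIONS = (
    "Classify the customer message as one of: complaint, question, praise. "
    "Reply with the single label only."
)

def build_prompt(instructions: str, text: str) -> str:
    """Wrap the high-level instructions around the text to classify."""
    return f"{instructions}\n\nMessage:\n{text}\n\nLabel:"

def classify(text: str, call_model) -> str:
    """Send the assembled prompt to the model and normalize its reply."""
    return call_model(build_prompt(INSTRUCTIONS, text)).strip().lower()

# With a stubbed model, the plumbing can be exercised end to end:
label = classify("Why was I charged twice?", call_model=lambda p: "question")
```

Swapping the instruction string is the whole "tuning" step — which is exactly the time saving over maintaining keyword lists by hand.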
It's been going downhill for a while, even before AI. The VC-funded React SaaS craze took the "put something barely functional on the web, make some subscription cash" model and scaled it to what is essentially a scam/spam industry.
So if the UX feels increasingly framed in terms of what the "developers" see/want/believe to be profitable, and less from the actual user's perspective, that's because the UX was sketched by hustlers who see software development explicitly as a "React grind to increase MRR/ARR."
I think React is nice if you actually have a state-heavy application (maps, video players, editors, ...) to build, and the web is OK as a distribution platform. But most web applications are just data display and form posting. I'm still disappointed with GitHub going full SPA.
I mean, a video player is actually quite stateful. Buffering video data and then decoding delta-changes to get the current image, while adapting to varying connection speeds, external CDNs, and syncing with audio, is a lot! Not counting user interactions like seeking, volume, full-screen, etc. Web browsers have a built-in player that would do a lot of this for you now, but if you were Netflix and wanted a really good player you'd probably customize a lot of this.
Granted, React would not be too helpful with the core video player engine, but actual video apps have lots of other features like comments, preview the next videos, behind-the-scenes notes, etc.
And then, if it's two-way video, you have a whole new level of state to track, and you'd definitely need to roll your own to make a good one.
This is a good summary of the downturn of Google search. It constantly tries to offer you the kind of results it thinks you should want, not what you actually searched for. Thank god for Kagi and other up and coming alternatives.
Similar vein - I can't even type "2 + 2 = 5" on my iPhone anymore because the keyboard keeps suggesting "2 + 2 = 4" as soon as I finish the first part and autofills it when I hit the space. It's extremely frustrating, and I feel like a keyboard is a prime example of a system that should do what I say. If I make a mistake, I'll correct it.
I don't see integrating AI into UI/UX to be incompatible with enabling the user to do as she intends. In fact, I think the thoughtful use of AI could better help users do what they want. It is annoying when "smart" features completely miss the mark, but it's a worthy pursuit in my book.
It's just stating a preference for deterministic behavior, i.e. tools not agents. Once you've learned how to use a tool it will reliably do what you intend it to do. With agents, whether human or AI, less initial learning is required but there's always a layer of indirection where errors can arise. The skill ceiling is lower.
But isn't that generally true? That's why anyone bothers to question their existing beliefs — because they prefer "the truth" over "what [they] believe to be true". Not everyone does, of course, but if everyone just valued "what [they] believe to be true", then any sort of self-reflection on belief would be a strict net negative!
The point is that you never get past what you believe to be true. If you discover that what you believe to be true is false, the only thing you can do is start believing something else to be true. It's silly to say that you want to stop doing this, as if there's some way to escape the cycle.
I think the "the user doesn't really know what they want" idea has been taken a bit too far.
Basically all applications these days are like this. Rather than assume users are sentient, intelligent beings capable of controlling devices and applications in order to achieve some goal, modern app design seems to be driven by a philosophy which views the operators of applications as imbeciles that require constant hand-holding and who must be given as little control and autonomy as possible. The analog world becomes more appealing every day because of this.
"They" have been working for decades to turn the computer back into a TV. And they've mostly succeeded. People don't use the internet, they consume social media algorithms. And they don't play video games, they watch cut-scenes with animated button clicks in between.
Developers expect UI controls to afford fine-grained control over functions, as if they're a thin wrapper for the API. For example, if there's some condition that prevents something from happening in a program and there are one or two things you can do to mitigate it, most developers would rather know the process failed and be given the choice about how/when to proceed. Our understanding of what controls 'should' do is based on a sophisticated mental model of software architecture, networking, etc. We don't have to deliberately reason about what the computer is doing — we just intuit that it's writing a file, or making a web request, etc., and automatically reason about the appropriate next steps based on what happens during that operation, sort of like writing code. Knowing that and how the process failed gives us valuable information that we can use to troubleshoot the underlying problem and possibly improve something.
Nontechnical users do not have that mental model: they base their estimation of what a control should do on what problem they believe that control solves. The discrepancy starts with misalignment between the user's mental model of the problem and how software solves the problem. In that hypothetical system where some condition is preventing something from happening and there are one or two things you can do to mitigate it, the nontechnical user doesn't give a fuck if and how something failed if there's a different possible approach that won't fail. They just want you to solve their problem with the least possible amount of resistance, and they don't have the requisite knowledge to understand what the program is telling them, let alone how it relates to the larger problem. That's why developers often find UIs built for nontechnical users frustrating and "dumbed down". For users concerned only with the larger problem, who have no understanding of the implementation, giving them a bunch of implementation-specific controls is far dumber than trying to pivot around a stumbling block in the execution and still do what needs to be done without user intervention. Moreover, even having that big mess of controls on the screen for more technically sophisticated users increases cognitive load and makes it more difficult for nontechnical users to figure out what they need to do.
It’s a frustrating disconnect, but it’s not some big trend to make terrible UIs as developers often assume. Rather, it’s becoming more common because UI and UX designers are increasingly figuring out what the majority of users actually want the software to do, and how to make it easier for them to do it. When developers are left to their own devices with interfaces, the result is frequently something other developers approve of, while nontechnical users find it clunky and counterintuitive. In my experience, that’s why so few nontechnical users adopt independent user-facing FOSS applications instead of their commercial counterparts.
What we call today "AI" will replace human thought about as much as the TI-99/4A Speech Synthesizer replaced the human voice. Despite not putting talented voice actors out of work by a long shot, artificial voices have found many uses: automated announcements, weather information for aviation and maritime applications, assistive software for the blind, etc. So it will be with machine learning. Use it as a tool to augment your ability to find trends in seas of unstructured data, but the hard intellectual work you'll still need to do yourself. I wish more people would get this.
It’s something that everyone has to implement because their products will be inferior without it. But it’s not something you can use to build a monopoly easily, and since everyone has to do it there will be many people racing to the bottom pushing the price down.
I feel like every enterprise ad or marketing site is like this. I have no idea what it does, but it says it'll make numbers go up, so execs buy. It must work to a degree, because it's so common.
[1] https://andrewchen.substack.com/p/revenge-of-the-gpt-wrapper...
[2] https://www.youtube.com/watch?v=oFfVt3S51T4&t=1398s
Moats are the logistics network that Amazon has... OK, spend $10bn over 5 years and then come at me — if I didn't sit still...
Moats are what Google has in advertising... OK, pull 3% of the market for more money than God and see if it works...
Brand/UX is not a moat; it's table stakes.
1. Status symbols - my Lambo signifies that my disposable income is greater than your disposable income
2. Fan clubs - I buy Nikes because they do a better job at promoting great athleticism, and an iPhone to pay double for hardware from 3 years ago
3. Visibility bias - As a late adopter I use whatever the category leader is (i.e. ChatGPT = AI, Facebook = the Internet)
What you describe sounds more like market power resulting from a monopoly.
Except for the technical advantage of M-series Macs, that's basically all of the Apple moat. Apple's brand and UX are what sell the hardware.
They make the UX depend on the number of Apple devices you have, so there's a little bit of network effect. But that's mostly still UX.
https://news.ycombinator.com/item?id=43115079
ML generally is for pattern recognition in data. That includes anomaly detection in financial data, for example. It is used in fraud detection.
Image ML/AI is used in your phone's facial recognition, in various image filtering and analysis algorithms in your phone's camera to improve picture quality or allow you to edit images to make them look better (to your taste, anyway).
AI image recognition is used to find missing children by analysing child pornography without requiring human reviewers to trawl through it - they can just check the much fewer flagged images.
AI can be used to generate captions on videos for the deaf or in text to speech for the blind.
There are tons of uses of AI/ML. Another example: video game AI. Video game upscaling. Chess and Go AI: NNUE makes Chess AI far stronger and in really cool creative ways which have changed high level chess and made it less drawish.
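To make the fraud/anomaly-detection use case above concrete, here is a toy sketch of flagging outlier transactions with a robust (MAD-based) z-score. The threshold, the 0.6745 scaling constant, and the sample amounts are illustrative assumptions; real fraud systems use far richer models and features.

```python
# Toy anomaly detection: flag values far from the median, measured
# in robust z-scores based on the median absolute deviation (MAD).
# This only sketches the pattern-recognition idea behind fraud detection.
from statistics import median

def flag_anomalies(values, threshold=3.5):
    """Return indices of values whose robust z-score exceeds `threshold`."""
    med = median(values)
    mad = median(abs(x - med) for x in values)  # median absolute deviation
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [i for i, x in enumerate(values)
            if 0.6745 * abs(x - med) / mad > threshold]

txns = [12.5, 9.9, 11.2, 10.8, 13.1, 950.0, 12.0, 10.4]
print(flag_anomalies(txns))  # [5]  (the 950.0 transaction stands out)
```

The median/MAD variant is used here instead of a plain mean/standard-deviation z-score because a single huge outlier inflates the standard deviation enough to mask itself.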
And even if there was, the fast follower to the Bitter Lesson is the Lottery Ticket Hypothesis: if you build a huge, general model to perform my task, I can quickly distill and fine tune it to be better, cheaper, and faster.
"better" = "smaller, more specialized, domain-specific"
I still think the analogy between AI and databases is perfect. No database comes set up for the applications where it gets deployed. The same is true for LLMs, except for some very broad chatbot stuff where the big players already own everything.
And if AI is just these chat bots, the technology is going to be pretty minor in comparison to database technology.
In the course of building my side project (storytveller), I’ve found that newer models tend to do worse at storytelling. I’ve tested basically every model under the sun that is available for use in production, and one stands out. Now, it may be that others will come along and not do the research that I’ve done to choose the same model, and thus my application will be better than theirs in part because of the AI model I chose.
Having “AI” as part of your application will not matter as much, that I agree with, but having “the right AI” will.
Of course, the user experience will definitely matter as well, as will the marketing and other criteria: another point of agreement with the article. But that does not diminish the fact that, if your application does not involve testing benchmarks, there is a good chance that a model that is not the newest could still be the best for your particular use case, so you should not just blindly choose the latest, shiniest model as this article sort of implies.
The hammer does matter.
Once an algorithm/technique is discovered, it becomes a free library to install.
Data and user-base are still the moat. The traffic that ChatGPT stole from Google is the valuable part.
1) Lots of players enter at the start because there are no giant walled gardens yet. 2) Being best in class will require greater and greater capex (like new process nodes) as things progress. 3) New classes of products will be enabled over time depending on how performance improves.
There is more there, but, with regard to this post, what I want to point out is that CPUs were pretty basic commodities in the beginning, and it was only as their complexity and scale exploded that margins improved and moats were possible. I think it will play out similarly with AI. The field will narrow as efficiency and performance improves, and moats will become possible at that point.
That is true, but the current context around it (the rush to freely appropriate from others without contributing or crediting anything back, not to mention the sludge that has exploded all over the internet) leaves me in agreement with the OP. I'm more than sure that AI could be used in very intelligent ways by artists themselves, though, and I don't mean in a lazy way to cut corners and pump out content, but in a more deliberate way where the effort is visible (and I don't just mean visual arts).
https://youtu.be/HgfsKS-Ux_A
I also believe that, just like humans, AI models will be specialized so we'll have companies creating all kinds of special purpose models that have been trained with all knowledge from particular domains and are much better in particular fields. Generic AI models cannot compete there either.
If - and that's a big if - LLM tech turns out to be the path to true AGI (not the OpenAI definition), everybody will be able to get there in time; the tech is well known and the research teams are notoriously leaky. If not, another AI winter is going to follow. In either case, the only ones who are going to make a major profit are the ones selling shovels during the gold rush: Nvidia. Well, and the owners who promised investors all kinds of ridiculous nonsense.
Anyway, the most important point, in my opinion, is that it's a bad idea to believe people financially incentivized to hype their AIs into unrealistic heights.
However, it turned out to be a very difficult and time-consuming process to move from a mostly-working MVP to a system that was safe across the vast majority of edge cases in the real world. Many competitors gave up because the production system took much longer than expected to build. However, today, a decade or more in to the race, self-driving cars are here.
Yet even for the winners, we can see some major concessions from the original vision: Waymo/Tesla/etc have each strictly limited the contexts in which you can use self-driving, so it's not a 100% replacement for a human driver in all cases, and the service itself is still very expensive to run and maintain commercially, so it's not necessarily cheaper to get a self-driving car than a human driver. Both limitations seem likely to shrink in the years ahead: the restrictions on where you can use self-driving will gradually relax, and the costs will go down. So it's plausible that fleets of self-driving cars will be an everyday part of life for many people in the next decade or two.
If AI development follows this path, we can expect that many vendors will run out of cash before they can actually return their massive capital investment, and a few dedicated players will eventually produce AIs that can handle useful subsets of human thoughtwork in a decade or so, for a substantial fee. Perhaps in two decades we will actually have cost-effective AI employees in the world at large.
In a limited fashion, though. We don't have generalized fully autonomous vehicles just yet.
(As always, the task of the hype man isn’t to maintain the bubble indefinitely, but just long enough for him to get his money out.)
There are more fundamental issues at play, where I see stock price fairly divorced from actual, tangible value, but the line still goes up because people are going to keep buying tulips forever, right?
It sucks because I think investing in the stock market takes away from dynamic investment in innovative startups and real R&D, and shifts capital towards shiny things.
I read the first one when it was posted here too and I don't get their point. It's a lot of words, but what are you trying to say?
If you just slap a ChatGPT backend onto your product, your competitors will do it too and you gain nothing without some additional innovation.
(2) On the other hand, Cursor's value is essentially gluing the two things together. If your data is already in the castle (e.g. my codebase and historical context of building it over time is now in Cursor's instance of Claude) then the software is very sticky and I likely wouldn't switch to my own instance of Claude. The author also addresses this noting that "how data flows in and out" has value, which Cursor does.
> The AI Code Editor - Built to make you extraordinarily productive, Cursor is the best way to code with AI.
Cursor is literally a VS Code fork + AI.
> unless you’re building or hosting foundation models, saying that AI is a differentiating factor is sort of like saying that your choice of database is what differentiated your SaaS product: No one cares.
Cursor is doing exactly what they say "no one cares" about.
It's bad writing (and thinking).
It seems cursor did a bunch of things right, from choosing to base it on an already popular editor, to the vision and specific ways they have integrated AI, to the flexibility of which models to use. No doubt there was some "early mover" advantage too.
Certainly the AI isn't their moat since it's mostly using freely available models (although some of their own too I believe), and it remains to be seen how much of a moat of any kind or early-mover advantage they really have. The AI-assisted coding market is going to be huge, and presumably will attract a lot more competition.
I'm old enough to remember when the BRIEF (basic reconfigurable interactive editing facility) editor (by Underware) took the world by storm, but where is it now?
Any other former BRIEF users/fans out there ?!
First mover advantage.
They are not safe against Microsoft, which has the resources to copy every feature Cursor has into VS Code, can afford to offer it for "free" for a very long time, and has access to the exact same models as Cursor.
So not only does that tell you there is no moat, but offering the best tools and models for free is exactly what Microsoft's modern definition of "Extinguish" is in their EEE strategy.
Cursor was released after Copilot
Rest assured, right now people are filing claims to the same old stuff, only now "with AI" tacked on. And rest assured, the rubber-stamping machine in the USPTO's basement is running 24/7 approving them.
Many key pieces of AI technology, like transformers, have patents. If you start trying to enforce your “…with AI” patent against Google, they’re just going to turn around and sue you for using their patented technology.
Start there, no matter the tool.
No. Stop! Please! I want my UX in an app to do the damn thing I precisely intend it to do, not what it believes I intend to do - which is increasingly common in UX design - and I hate it. It's a completely opaque black box and when the "magic" doesn't work (which it frequently does not, especially if you fall outside of "normal" range) - the UX is abysmal to the point of hilarity.
So if the UX feels increasingly framed in terms of what the "developers" see/want/believe to be profitable, and less from the actual user's perspective, that's because the UX was sketched by hustlers who see software development explicitly as a "React grind to increase MRR/ARR."
Granted, React would not be too helpful with the core video player engine, but actual video apps have lots of other features like comments, preview the next videos, behind-the-scenes notes, etc.
And then, if it's two-way video, you have a whole new level of state to track, and you'd definitely need to roll your own to make a good one.
This is pretty weird epistemological phrasing. It's a bit like saying "I want to know the truth, not just what I believe to be the truth!"
Which seems… pretty reasonable to me. Both involve the other party substituting some vaguely patronizing judgment of theirs for the first party’s.
Basically all applications these days are like this. Rather than assume users are sentient, intelligent beings capable of controlling devices and applications in order to achieve some goal, modern app design seems to be driven by a philosophy which views the operators of applications as imbeciles that require constant hand-holding and who must be given as little control and autonomy as possible. The analog world becomes more appealing every day because of this.
Nontechnical users do not have that mental model: they base their estimation of what a control should do on what problem they believe that control solves. The discrepancy starts with misalignment between the user’s mental model of the problem and how software solves the problem. In that hypothetical system where some condition is preventing something from happening and there are one or two things you can do to mitigate it, the nontechnical user doesn’t give a fuck if and how something failed if there’s a different possible approach that won’t fail. They just want you to solve their problem with the least possible amount of resistance, and they don’t have the requisite knowledge to understand what the program is telling them, let alone how it relates to the larger problem. That’s why developers often find UIs built for nontechnical users frustrating and seemingly “dumbed down”. For users concerned only with the larger problem, who have no understanding of the implementation, giving them a bunch of implementation-specific controls is far dumber than trying to pivot if there’s a stumbling block in the execution and still doing what needs to be done without user intervention. Moreover, even having that big mess of controls on the screen for more technically sophisticated users increases cognitive load and makes it more difficult for nontechnical users to figure out what they need to do.
It’s a frustrating disconnect, but it’s not some big trend to make terrible UIs as developers often assume. Rather, it’s becoming more common because UI and UX designers are increasingly figuring out what the majority of users actually want the software to do, and how to make it easier for them to do it. When developers are left to their own devices with interfaces, the result is frequently something other developers approve of, while nontechnical users find it clunky and counterintuitive. In my experience, that’s why so few nontechnical users adopt independent user-facing FOSS applications instead of their commercial counterparts.
First: AI requires an awful lot of resources, which in itself is a moat.
Second: having a moat doesn’t prevent your service from being attacked. See Tesla.
Third: not having a moat doesn’t prevent your service from dominating. See TikTok.
It’s something that everyone has to implement because their products will be inferior without it. But it’s not something you can use to build a monopoly easily, and since everyone has to do it, there will be many people racing to the bottom, pushing the price down.
A Super Bowl Salesforce ad that a friend shared with me to get my comments. I still have no idea wtf this is or what AI has to do with it.
https://www.youtube.com/watch?v=rcLAeURXvHY