Having worked with AI and LLMs for quite a bit now as a "wrapper", I think the real key is that doing well (fast, accurate, relevant) requires a really, really good ETL process in front of the actual LLM.
A "wrapper" will always be better than the the foundation models so long as it can do the domain-specific pre-generation ETL and data aggregation better; that is the true moat for any startup delivery solutions using AI.
Your moat as a startup is really how good your domain-specific ETL is (ease of use and integration, comprehensiveness, speed, etc.)
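Concretely, that pre-generation step has roughly this shape. A minimal hypothetical sketch (names made up, stages stubbed out):

    def extract(source: str) -> list[dict]:
        # E: pull raw domain records (tickets, CRM rows, logs) -- stubbed here
        return [{"text": f"record from {source}"}]

    def transform(records: list[dict]) -> str:
        # T: dedupe, rank, and truncate into a prompt-sized context
        seen, keep = set(), []
        for r in records:
            if r["text"] not in seen:
                seen.add(r["text"])
                keep.append(r["text"])
        return "\n".join(keep)[:8000]

    def load_into_prompt(question: str, source: str) -> str:
        # L: "load" here means assembling the final prompt the model sees
        return f"Context:\n{transform(extract(source))}\n\nQuestion: {question}"

The quality of extract() and transform() - comprehensiveness, freshness, ranking - is exactly the part a competitor can't copy from your UI.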
QuantumGood 32 days ago [-]
ETL: Extract, Transform, Load
imjonse 34 days ago [-]
'In recent years, innovative AI products that didn’t build their own models were derided as low-tech “GPT wrappers.” '
The ones derided were those claiming to be 'open-source XY' while being a standard Tailwind template over an OpenAI call, or those claiming revolutionary XY while being 90% the proprietary model underneath.
I am not sure how many were truly innovative yet not cloneable in a very short time. Using models to empower your app is great; having the model be all of your app while you pitch it otherwise is to be derided.
muzani 34 days ago [-]
I was mentoring at a hackathon this weekend. Someone asked how they could integrate a certain open source pentesting agent into their tool.
I asked them, "Well, it's open source. Instead of making a bunch of adapters, couldn't you just copy the code you want?"
Turns out the whole agent was 11 files or so. The files were about 200 lines. Over half were just different personas to do the same thing. They just needed to copy one of the prompts and have a mechanism to break the loop.
The funny part with open source is nobody reads the code even though it's literally open. The AI pundits don't read what they criticize. The grifters just chain it forward. It's left-pad all over again.
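(The "mechanism to break the loop" is usually just this shape; a hypothetical sketch, not that agent's actual code:)

    def run_agent(task: str, llm, tools: dict, max_steps: int = 10) -> str:
        # llm: callable(history) -> {"content": str, "tool": str | None, "args": dict}
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):              # break mechanism 1: hard step budget
            reply = llm(history)
            if reply.get("tool") is None:       # break mechanism 2: no tool call
                return reply["content"]         # means the agent decided it's done
            result = tools[reply["tool"]](reply["args"])
            history.append({"role": "tool", "content": str(result)})
        return "stopped: step budget exhausted"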
t_mann 34 days ago [-]
Contrary take: "AI founders will learn the bitter lesson" (263 comments): https://news.ycombinator.com/item?id=42672790 the gist: "Better AI models will enable general purpose AI applications. At the same time, the added value of the software around the AI model will diminish."
Both essays make convincing points; I guess we'll have to see. I like the Uber analogy here: maybe the winners will be those who apply the tech in innovative ways that merely leverage the underlying models.
bearjaws 33 days ago [-]
Not to mention, if you have a good idea, OAI, Anthropic, or Google will implement it.
e.g. OAI Operator, Anthropic Computer Use, and Google NotebookLM.
deepsquirrelnet 33 days ago [-]
The differentiator is whether or not your company operates with domain specific data and subject matter experts that those big companies don’t have (which is quite common).
There’s plenty of applications to build that won’t easily get disrupted by big AI. But it’s important to think about what they are, rather than chase after duplication of the shiny objects the big companies are showing off.
kridsdale3 33 days ago [-]
And they don't have to pay the margin on the API calls. So an equal UX on the same model API will be twice as profitable when operated by the first party.
danenania 33 days ago [-]
They may implement it, but it's questionable whether they'll have the best implementation in any particular category.
iamwil 33 days ago [-]
Is this not consensus yet that people in the model layer are fighting commoditization and so-called wrappers have all the moats? I'd written something similar back in Nov of last year, and I thought I was late in writing it down.
https://interjectedfuture.com/the-moats-are-in-the-gpt-wrapp...
>Is this not consensus yet that people in the model layer are fighting commoditization and so-called wrappers have all the moats?
Yes. It will become a duopoly, where the leading frontier model holds >90% market share, and most useful products will be built around it. With the remaining 10% being made up by large portions of the other big vendors, and then everyone else for niche cases.
The idea of picking and choosing between individual models for each specific use case is going away rapidly as the top ones pull away from the pack, and inference prices are falling exponentially.
glooglork 34 days ago [-]
> Imagine it becomes truly trivial to copy cat another product — something as simple as, “hey AI, build me an app that does what productxyz.com does, and host it at productabc.com!” In the past, a new product might have taken a few months to copy, and enjoyed a bit of time to build its lead. But soon, perhaps it will be fast-followed nearly instantly. How will products hold onto their users?
It's actually not that easy to copy/paste AI agents: prompts take quite a lot of tweaking, and it's a rather slow, manual process because it's not easy to verify that they work for all possible inputs.
This gets even more complicated when you have a number of agents in the same application that need to interact with each other.
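Even a crude check means re-running every prompt variant against a corpus of inputs, as in this hypothetical harness, so each tweak costs a full batch of completions:

    def eval_prompt(prompt: str, cases: list[tuple[str, str]], llm) -> float:
        # cases: (input, expected substring); llm: callable(str) -> str wrapping your API
        passed = sum(expected in llm(prompt.format(input=inp))
                     for inp, expected in cases)
        return passed / len(cases)

And a substring match is a weak oracle, which is why so much of the verification ends up manual.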
tossandthrow 34 days ago [-]
Have you tried using AI to write your prompts? It is quite efficient.
Besides that, you quote "imagine it becomes..."; it is fair to assume that these technologies will become better.
glooglork 34 days ago [-]
Yeah, I'm using it and I agree it will probably become a lot better, but I don't think we're really close to a point where AI itself will be able to just write an app that has 100s of prompts that interact with one another. Even if it does, you'll probably be able to get it running better by manually optimizing a bunch of stuff (when I say manually, I'm also including iterating over a prompt in a chat with an LLM).
It's capable of creating CRUD apps from scratch more or less by itself, and I can see how in this area we soon might get to a point where you can get your own clone of a lot of apps up and running in 30 minutes.
But I imagine a lot of future value we might see created will come from:
1) specialized prompts - this looks simple but I don't think it is, especially when you have 100s of them in your application, complex logic for how they interact with each other, different models for different parts of your application based on their strengths (see the routing sketch after this list), etc
2) access to structured data you can connect your agents to
3) network effects - the app that is most used gets better just from its usage data (the article did talk about network effects)
I don't think it's really easy to replicate these 3 factors. The article also mentions some of this; I'm not really arguing with it, just pointing out that I don't think it will be that simple to c/p full applications.
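The model-routing half of (1) is roughly this shape; a minimal sketch with made-up model names and task labels:

    # Route each sub-task to the model that's strongest at it (names illustrative).
    ROUTES = {
        "code": "big-coder-model",
        "summarize": "cheap-fast-model",
        "extract": "structured-output-model",
    }

    def route(task_kind: str, prompt: str, call) -> str:
        # call: callable(model_name, prompt) -> str, wrapping whatever API you use
        return call(ROUTES.get(task_kind, "general-model"), prompt)

The table is trivial; knowing which of your 100s of prompts belongs on which model is the slow, empirical part.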
satisfice 33 days ago [-]
“Have you tried…”
It’s not “trying” that matters. What matters is testing. But nobody is testing LLMs… Or what they call testing is mostly shrugging and smiling and running dubious benchmarks.
kridsdale3 33 days ago [-]
If the imperative-code based apps that I've been shipping my whole career had failure rates on par with the *best* LLM prompts (think 10 to 35 percent), I'd not have a career.
satisfice 31 days ago [-]
This is why I comment on Hacker News. Because sometimes I feel a little less alone. Cheers, friend.
deepsquirrelnet 33 days ago [-]
Stanford NLP's framework DSPy really encourages a traditional ML development process. It's about the only one I'd consider to be a true ML framework.
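Roughly like this, from memory of DSPy's API (worth checking the current docs):

    import dspy

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # model name illustrative

    # You declare a signature instead of hand-writing a prompt; DSPy
    # compiles and optimizes the actual prompt text against your metric.
    qa = dspy.ChainOfThought("question -> answer")
    print(qa(question="What does ETL stand for?").answer)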
EternalFury 34 days ago [-]
Business as usual. While electricity is remarkable, no one gets extremely rich selling it.
End-user value is the only value that can be sold at a profit.
mohsen1 34 days ago [-]
And guess who has a grip on the end user? Operation System owners. Now that you might not need an app for most things, OS vendors are in an even more powerful position. Gone are the days of "this amazing app can do X"; now it's going to be "have you noticed you can ask Siri to do X?" They have all the context that app developers are going to miss about the user.
Both Apple and Google are doing a poor job of integrating AI capabilities into their Operation Systems today. Maybe there is room for a new player to make a real AI-first Operation system.
echelon 33 days ago [-]
> AI-first Operation system.
An AI-first pane of glass (OS, browser, phone, etc.) with an agent that acts on my behalf to nuke ads, rage bait, click bait, rude people on the internet, spam, sales calls and emails, marketing materials, commercials, and more.
If you want to market to me, you need to pay me directly. If you want to waste my time, goodbye.
nicewood 33 days ago [-]
I agree that the OS vendors are in a great position to add value via broad, general purpose features. But they cannot cover it all - it's breadth over depth. So I think the innovation for niches and specific business processes will still be owned by specialized 'GPT Wrappers'.
koakuma-chan 33 days ago [-]
Does anyone actually use Siri?
nasmorn 32 days ago [-]
Siri will not even play my very specifically named Spotify playlist unless I prefix the name with "my". It will instead just play some completely random public thing.
oarsinsync 34 days ago [-]
> Operation System
OS is generally expanded to Operating System, not Operation System, in English
raincole 34 days ago [-]
No one gets extremely rich selling food, water and electricity because these fields attract government intervention all the time.
(Not saying it's a bad or good thing, nor saying AI is comparable)
abrichr 33 days ago [-]
Food:
- Ray Kroc – Turned McDonald's into a global fast-food empire.
- Howard Schultz – Scaled Starbucks into an international giant.
- Michele Ferrero – Created Nutella, Kinder, and Ferrero Rocher, making his family billionaires.
Water:
- François-Henri Pinault – Controlled Evian via Danone.
- Antoine Riboud – Expanded Danone into a bottled water empire (Evian, Volvic).
- Peter Brabeck-Letmathe – Former Nestlé CEO; Nestlé owns Perrier, Pure Life, Poland Spring, etc.
Electricity:
- Warren Buffett – Berkshire Hathaway Energy owns multiple utilities.
- Li Ka-shing – Built major energy holdings through CK Infrastructure.
- David Tepper – Invested heavily in power utilities via Appaloosa Management.
kgwgk 33 days ago [-]
> While electricity is remarkable, no one gets extremely rich selling it.
Enron did!
esafak 33 days ago [-]
I better buy some shares in them!
wcrossbow 33 days ago [-]
I read this and of course couldn't believe it. Isn't $14.7B enough to be considered extremely rich these days [1]? In the Forbes real-time billionaires list it is quite easy to find _many_ such examples.
[1] https://www.forbes.com/profile/sarath-ratanavadi/?list=rtb/
If everyone has incredibly good AI, then perhaps the unique asset will be training data.
Not everyone will have the training data that demonstrates precisely the behavior that your customers want. As you grow, you'll generate more training data. Others can clone your product immediately... but the clone just won't work as well. In your internal evals, you'll see why. It misses a lot of stuff. But they won't understand, because their evals don't cover this case.
(This is quite similar to why Bing had trouble surpassing Google in search quality. Bing had great engineers, but they never had the same data, because they never had the same userbase.)
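Mechanically, the moat compounds like this (a hypothetical sketch): every real-world miss becomes a regression case a clone doesn't have.

    import json

    def log_failure(inp: str, bad_output: str, path: str = "evals.jsonl") -> None:
        # Each production miss becomes a permanent regression test.
        with open(path, "a") as f:
            f.write(json.dumps({"input": inp, "bad": bad_output}) + "\n")

    def regression_pass_rate(llm, path: str = "evals.jsonl") -> float:
        # llm: callable(str) -> str; the oracle here is crude on purpose
        with open(path) as f:
            cases = [json.loads(line) for line in f]
        ok = sum(llm(c["input"]) != c["bad"] for c in cases)
        return ok / len(cases) if cases else 1.0

A competitor starts with an empty evals.jsonl; that is the gap.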
KaoruAoiShiho 33 days ago [-]
I predict this article to be embarrassingly wrong. The moat of models is compute, wrappers are just software engineering, one of the first things to be commoditized by AI in general.
pchristensen 33 days ago [-]
Software engineering followed by product research and product market fit. Those are less at risk.
KaoruAoiShiho 33 days ago [-]
Idea guys are a dime a dozen.
DebtDeflation 33 days ago [-]
Probably worth thinking more about what we mean by "wrapper". A year or so ago, it often meant a prompt builder UI. There's no moat for that. But if in 2025 a "wrapper" means a proprietary data source with a pipeline to deliver it along with some proprietary orchestration along with the UI (and the LLM API being called), then it likely warrants looking at it differently.
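i.e. the 2025-style wrapper looks less like a prompt box and more like this hypothetical sketch, where the retrieval pipeline is the proprietary part and the model call is the commodity:

    from openai import OpenAI

    client = OpenAI()

    def retrieve(query: str) -> list[str]:
        # Proprietary pipeline: your data source, your ranking -- stubbed here.
        return ["internal doc snippet 1", "internal doc snippet 2"]

    def orchestrate(query: str) -> str:
        docs = retrieve(query)  # the part a copycat can't clone
        prompt = f"Context:\n{chr(10).join(docs)}\n\nQuestion: {query}"
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content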
delifue 34 days ago [-]
Software gets a free ride on hardware improvements. GPT wrappers can likewise get a free ride on foundation model improvements.
kridsdale3 33 days ago [-]
I always roll my eyes when someone makes a "Show HN" post that their wrapper app has amazing new capabilities. All they did was push a commit where they typed "gpt4o-ultra-fancy-1234" into some array.
ggm 34 days ago [-]
If you believe a prompt of the form "hey, GPT A, make yourself behave like GPT B" can be articulated as a Chinese room, I put it to you that the amount of missing information between what informs A and what informs B will make this a mountain of work.
Do you think it's less work than just making GPT B, and why? By what quality of the system (inductance aside) is this simply additive?
My strawman reads as "wishing for fairytales", basically. But this strawman, to me, is the reductive intent inside the article. "Ask a GPT to perform like another, later, different GPT" epitomises magical thinking.
Why bother training if the recursive application is that simple? Because... it's not that simple.
returnInfinity 34 days ago [-]
A wrapper will do more than this.
Imagine a new UI/UX for a CRM. Completely redesigned from the ground up.
Multiple GPT wrappers in a single product. All wrappers working together to achieve a single goal.
Also throw in agents.
And distribution matters: if some app goes viral, it has a high chance to succeed and beat the current incumbent.
hobs 34 days ago [-]
The better the LLM GPT thing you build, the more you arm your competition to build better LLM GPT things; there's no moat there.
TeMPOraL 34 days ago [-]
There's no moat in any of it, but it's not like it matters much if you're not one of the major platforms. This is not weather for sand-castle builders; it's weather for surfers. LLM winds a-blowin: pick a wave to jump on, ride down the profits gradient until it's spent, jump to another. It's a time for a million products to bloom, each promising stars, every one gone in 10 months.
I'd be more worried about platform dependency at this point. The ol' adage about building your business on someone else's API applies, doubly so in the current geopolitical climate. All those hot AI startups are but a single executive order away from losing half their market, or getting erased from existence altogether.
swyx 34 days ago [-]
it's truly interesting to see this come around full circle from 2023, when i started writing about the role of the AI Engineer, and now this: https://www.latent.space/p/gpt-wrappers
daxfohl 33 days ago [-]
The real question is how do they achieve vendor lock in? My bets are on Microsoft to figure that out.
A "wrapper" will always be better than the the foundation models so long as it can do the domain-specific pre-generation ETL and data aggregation better; that is the true moat for any startup delivery solutions using AI.
Your moat as a startup is really how good your domain-specific ETL is (ease of use and integration, comprehensiveness, speed, etc.)
The ones derided were those claiming to be 'open-source XY' while being a standard tailwind template over an OpenAI call or those claiming revolutionay XY while 90% being the proprietary model underneath. I am not sure how many were truly innovative that weren't cloneable in a very short time. Using models to empower your app is great, having the model be all of your app while you pitch it otherwise is to be derided.
I asked them, "Well, it's open source. Instead of making a bunch of adapters, couldn't you just copy the code you want?"
Turns out the whole agent was 11 files or so. The files were about 200 lines. Over half were just different personas to do the same thing. They just needed to copy one of the prompts and have a mechanism to break the loop.
The funny part with open source is nobody reads the code even though it's literally open. The AI pundits don't read what they criticize. The grifters just chain it forward. It's left-pad all over again.
Both essays make convincing points, I guess we'll have to see. I like the Uber analogy here, maybe the winners will be some who use the tech in innovative ways that only leverage the underlying tech.
e.g. OAI Operator, Anthropic Computer Use, and Google NotebookLM.
There’s plenty of applications to build that won’t easily get disrupted by big AI. But it’s important to think about what they are, rather than chase after duplication of the shiny objects the big companies are showing off.
https://interjectedfuture.com/the-moats-are-in-the-gpt-wrapp...
Yes. It will become a duopoly, where the leading frontier model holds >90% market share, and most useful products will be built around it. With the remaining 10% being made up by large portions of the other big vendors, and then everyone else for niche cases.
The idea of picking and choosing between individual models for each specific use case is going away rapidly as the top ones pull away from the pack, and inference prices are falling exponentially.
It's actually not that easy to copy/paste AI agents, prompts take quite a lot of tweaking and it's a rather slow and manual process because it's not that easy to verify that they're working for all the possible inputs. This gets even more complicated when you get a number of agents in the same application and they need to interact with each other.
Besides that, you quote "imagine it becomes...", it is a fair to assume that these technologies will become better.
It's capable of creating CRUD apps from scratch more or less by itself, and I can see how in this area we soon might get to a point where you can get your own clone of a lot of apps up and running in 30 minutes.
But I imagine a lot of future value we might see created will come from:
1) specialized prompts - looks simple but I don't think it is, especially if you have 100s of them in your application and you have complex logic on how they interact between each other, you're using different models for different parts of your application based on their strengths, etc
2) access to structured data you can connect your agents to
3) network effects - app that is mostly used gets better just by using the usage data (the article did talk about network effects)
I don't think it's really easy to replicate these 3 factors. The article is also mentioning some of this, I'm not really arguing with that, just pointing out that I don't think it will be that simple to c/p full applications.
It’s not “trying” that matters. What matters is testing. But nobody is testing LLMs… Or what they call testing is mostly shrugging and smiling and running dubious benchmarks.
Both Apple and Google are doing a poor job of integrating AI capabilities into their Operation Systems today. Maybe there is room for a new player to make a real AI-first Operation system.
An AI-first pane of glass (OS, browser, phone, etc.) with an agent that acts in my behalf to nuke ads, rage bait, click bait, rude people on the internet, spam, sales calls and emails, marketing materials, commercials, and more.
If you want to market to me, you need to pay me directly. If you want to waste my time, goodbye.
OS is generally expanded to Operating System, not Operation System, in English
(Not saying it's a bad or good thing, nor saying AI is comparable)
- Ray Kroc – Turned McDonald's into a global fast-food empire.
- Howard Schultz – Scaled Starbucks into an international giant.
- Michele Ferrero – Created Nutella, Kinder, and Ferrero Rocher, making his family billionaires.
Water:
- François-Henri Pinault – Controlled Evian via Danone.
- Antoine Riboud – Expanded Danone into a bottled water empire (Evian, Volvic).
- Peter Brabeck-Letmathe – Former Nestlé CEO; Nestlé owns Perrier, Pure Life, Poland Spring, etc.
Electricity:
- Warren Buffett – Berkshire Hathaway Energy owns multiple utilities.
- Li Ka-shing – Built major energy holdings through CK Infrastructure.
- David Tepper – Invested heavily in power utilities via Appaloosa Management.
Enron did!
[1] https://www.forbes.com/profile/sarath-ratanavadi/?list=rtb/
Not everyone will have the training data that demonstrates precisely the behavior that your customers want. As you grow, you'll generate more training data. Others can clone your product immediately... but the clone just won't work as well. In your internal evals, you'll see why. It misses a lot of stuff. But they won't understand, because their evals don't cover this case.
(This is quite similar to why Bing had trouble surpassing Google in search quality. Bing had great engineers, but they never had the same data, because they never had the same userbase.)
Do you think it's less work than just making GPT B and why? What quality in the system (inductance aside) is this simply additive?
My strawman reads as "wishing for fairytales" basically. But this strawman to me, is the reductive intent inside the article. "Ask a GPT to perform like another later different GPT" epitomises magical thinking.
Why bother training if the recursive application is that simple? Because... it's not that simple.
Imagine a new UI/UX for a CRM. Completely redesigned from the ground up.
Multiple GPT wrappers in a single product. All wrappers working together to achieve a single goal.
Also throw in agents.
And distribution matters, if some app goes viral, it has a high chance to succeed and beat the current incumbent.
I'd be more worried about platform dependency at this point. The ol' adage about building your business on someone else's API applies, doubly so in the current geopolitical climate. All those hot AI startups are but a single executive order away from losing half their market, or getting erased from existence altogether.