11 points by janalsncm 11 months ago | 23 comments
And as a follow up question, what would you call a company that was training original ML models (not even just LLMs)? Is that also an “AI” company and if so, should we draw a distinction?
onel 11 months ago
That's an interesting question, and at first I read it as the common critique I hear: are AI wrapper companies actual companies? My answer is yes: as long as you provide value and someone is paying, you are a company.
Now, are they "AI" companies? I'm leaning towards no. It depends on how much AI you are actually doing. If you provide a nice UI over a single API call to get the answer, then no. If you do your own embedding, vector DB querying, and re-ranking, I'm not sure, but still leaning no. But if you start optimizing your embedding model, or maybe fine-tuning LLMs, then yes.
It's not black and white, and you can't blame startups for trying to appear bigger than they are.
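The embed / vector-search / re-rank pipeline mentioned above can be sketched in a few lines. This is a toy in-memory stand-in (bag-of-words "embeddings", overlap-based re-ranker); a real app would call an embedding model and a vector DB, so every function here is a hypothetical placeholder.

```python
# Toy sketch of the retrieval steps: embed -> vector search -> re-rank.
# All functions are illustrative stand-ins, not a real RAG stack.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in for an embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Stand-in for a vector DB query: rank all docs by cosine similarity.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    # Hypothetical re-ranker: prefer exact term overlap with the query.
    terms = set(query.lower().split())
    return sorted(candidates,
                  key=lambda d: len(terms & set(d.lower().split())),
                  reverse=True)

docs = ["llm fine tuning guide", "vector db basics", "cooking pasta at home"]
print(rerank("fine tuning llm", retrieve("fine tuning llm", docs))[0])
# -> llm fine tuning guide
```

The point of the comment stands either way: wiring these steps together is real engineering work, but it's a different kind of work than training or fine-tuning the models themselves.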
null_investor 11 months ago
It's silly nomenclature: some VCs and the entrepreneurial community started labelling AI companies "GPT wrappers" to make them seem less useful, when in reality what matters is whether they have users who are happy with their service.
And we can all be clear on this: at the moment, with the exception of Meta (because they use it for their recommendation algorithm), all AI companies using H100s are losing money, and lots of it.
I just read an article that OpenAI will spend $7B this year to make $1B in ARR, and they are literally the leader in that space (well, Claude is better at the moment, but we expect OpenAI to release a better model, since they started training one a while ago).
Every AI company doing LLMs is a default-dead company on its way to bankruptcy. The ones that provide the API layer are even more likely to end up in Chapter 11, given the high training and staffing costs.
fdarkaou 11 months ago
I choose to embrace it and that's why my company is literally called AnotherWrapper lol
muzani 11 months ago
The real companies should be doing better on every angle: cheaper, faster, higher-quality output, more reliable. It doesn't matter if it's simply "prompt engineering". You can add value by adding exponential backoff, or by falling back to different AI providers when one is down or slow (as they usually are).
There's a lot of excitement around RAG because it lets startups compete on faster + cheaper + quality. There's domain expertise too: a skilled writer can make a better writing app than some random with good prompting skills. Perplexity is probably adding extra caching layers in front of the AI so they can serve results cheaper and faster.
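The retry-and-failover idea above can be sketched as follows. This is a minimal illustration, assuming each provider is just a callable that may raise; names like `call_with_fallback` are made up for this example, not any real SDK's API.

```python
# Sketch of wrapper-layer reliability: exponential backoff plus failover
# across providers. Providers are hypothetical callables, not real clients.
import time

def call_with_fallback(prompt, providers, max_retries=3, base_delay=1.0,
                       sleep=time.sleep):
    last_err = None
    for provider in providers:            # failover: try each provider in turn
        for attempt in range(max_retries):
            try:
                return provider(prompt)
            except Exception as err:      # real code would catch specific errors
                last_err = err
                sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("all providers failed") from last_err
```

Injecting `sleep` as a parameter keeps the backoff testable; in production you would also want jitter so retries from many clients don't synchronize.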
muzani 11 months ago
They might still be subsidizing some of the cost, but maybe it's more like LEDs vs incandescent bulbs at this point: the power of quantum lighting, cheaper and better.
solardev 11 months ago
Now, if they had their own Llama instance or similar running on their own machines, that would be closer to the true cost. But I doubt the quality is sufficient, or that it can keep up with the latest training.
solardev 11 months ago
And then to actually see it all happen, rise to the top, only for a failed coup to almost dismantle your organization, and then to survive THAT and keep going. Must've been nuts. I hope some of them write a memoir about it later.
yamumsahoe 11 months ago
If it sells at margins above its cost of production, it's a money-making machine; it's a company right there.