Every other article these days on this site is about AI. And it's incredibly tedious and annoying.
Isn't it enough that clueless marketers who get their Tech knowledge from Business Insider and Bloomberg are constantly harping on about AI?
Seems we as a community have resigned or given up in this battle for common sense. Maybe long ago. Still, there should be some form of moderation penalizing these shill posts that only glorify AI as being the future, ... the same way that not everything about crypto or the blockchain ended up on the FP. Seems with AI we're looking the other way and are OK with it?
Or maybe it's me.
steveBK123 13 days ago [-]
Not just you. Clearly there are useful tools coming out of this AI craze, but a LOT of fluff.
Outside of pure tech companies, there's a lot of "Head of AI" hiring by CTOs to show "we are doing AI", regardless of whether they have found any application for it yet.
I've also seen a lot of product pivots to AI where they don't really have a need for, or explanation of, the use case that AI helps with.
Further, I've seen a number of orgs that were laggards in their internal technology become incredibly distracted, thinking AI will solve for them not having even a rudimentary 2010s-class IT org.
I think the comedown from this will be worse than crypto: while there will be more real use cases, there is far more hype-based "we have to do something" adoption that hasn't found a use yet. A lot of orgs that remained wary of crypto got fully on the AI bandwagon. Investment must be an order of magnitude more.
akra 13 days ago [-]
A lot of the fluff is all about boosting sales IMO, which is where a lot of the money for tech comes from. When MBA types (a large chunk of tech's buyers) hear the promises of efficiency and replacing workers, they get all excited very, very quickly, unless it requires lots of capital to do so - in which case they might instead look at cheaper labor (think offshore). AI is the ultimate SaaS product to these types, or at least that's how it is pitched to them. These people see tech workers and IP as just "resources" - fungible bodies, not qualified professionals. Obviously this creates a lot of technology delivery issues and dysfunction.
Places run by these types (many corporations) see technology as an expensive cost centre, secondary to the main business. I've seen this even in companies that had to pivot to being a tech business - because of market competition they were forced to invest, but because it was reluctant investment the old culture remains. Engineers and other "builders/doers" are usually second class to the "decision makers" in these places. These are the places where engineers keep the place running, sometimes doing a lot of the work and being absolutely critical, but they get paid little and receive little recognition. This is a very common position for a software engineer outside the US.
With this kind of thinking often comes being a laggard in technology as you put it - engineers are a "forced necessary cost" because competitors are forcing us to keep up; not because we actually value it.
AI in their minds has vindicated their thinking, hence the excitement about it. As a product it is very easy to "sell/fluff" to these kinds of people; it really excites them. They think engineers are now the expendable people they always wanted them to be, rather than the people they had to put up with to get what they wanted. They now feel justified in being "laggards" - they have AI to do it for less than they would have had to pay an engineer before.
Yes, there's a lot wrong with the thinking above (overestimation of current capabilities, etc.), and genuinely innovative, growth-leading companies don't think this way. But the decision makers in these companies don't have that perspective. Much of the technology trend and corporate hype is around things you can sell to these decision makers, who often overpay for the wrong kind of technologies if you sell to them right (think typical RFQ/RFP corporate processes) - AI is an easy sell/dream to these people.
steveBK123 13 days ago [-]
Yes, I think this encapsulates my critique and observation from inside the belly of the beast. To these guys, $100/year or even $1000/year per head to give AI to 1000 Excel-jockey analysts is great. It's cheaper, by an order of magnitude, than upsizing their IT org to the size of their leading competitors'. I mean, we are talking single-digit SWEs for the numbers above.
Of course they probably aren't thinking through how AI will allow their leading competitors to maintain or expand their lead if it is any good anyway.
Interestingly, I was in a meeting with our firm's IT org recently where they were describing some of the "upgrades" they are making across systems, some of which were going to degrade service. Upon enough prodding they conceded the reasoning was not value, or even cost, but cost attribution. That is, it was too hard to figure out how to meter usage & charge back to business lines, so they are essentially going to discontinue those services and make business lines self-manage. Crazy.
pfdietz 13 days ago [-]
> Clearly useful tools coming out of this AI crazy, but a LOT of fluff.
Isn't this true of every boom? Like A.C. Clarke said, you find the limits of the possible by venturing into the impossible.
steveBK123 13 days ago [-]
On the adoption side, no.
This feels way more like the 90s IT offshoring wave where it was forced bluntly, top down because they thought it would save money.
Things like crypto or cloud or big data had a much longer and measured adoption cycle.
swatcoder 14 days ago [-]
It's you.
The AI discussions can indeed be repetitive and tiresome here, especially for regulars, but they already seem to be downweighted and clear off the front page quite fast.
But it's a major focus of the industry right now, involving a genuinely novel and promising new class of tools, so the posts belong here and the high engagement that props them up seems expected.
gorjusborg 14 days ago [-]
> It's you.
Not just him.
> But it's a major focus of the industry right now, involving a genuinely novel and promising new class of tools, so the posts belong here and the high engagement that props them up seems expected.
In your opinion (and admittedly others'), but that doesn't make the overhype any less tiresome. Yes, it is novel technology, but there's always novel technology, and it isn't all in one area - though you wouldn't know it by what hits the front page these days.
Anyway, it's useless to shake fists at the clouds. This hype will pass, just like all the others before it, and the discussion can again be proportional to the relevance of the topic.
SubiculumCode 14 days ago [-]
I don't know about the professional professionals, but as a science professor, I have to wear a lot of hats, which has required me to gain skills in a multitude of areas outside my area of deep expertise.
I use Claude and Chatgpt EVERY DAY.
Those services help me churn out scripts for data munging, etc., very quickly. I don't use it for high-expertise writing, as I find it takes more effort than I get back, but I do use it to put words on a page for more general things. If your deep expertise is programming, you may not use it much either for that. But man oh man has it magnified my output on the constellation of things I need to get done.
What other innovation in the last decade has been this disruptive? Two years ago, I didn't use this. Now I do as part of my regular routine, and I am more valuable for it. So yes, there is hype, but man oh man, is the hype deserved. Even if AI winter started right now, the productivity boom from Claude level LLMs is nothing short of huge.
outworlder 14 days ago [-]
> I use Claude and Chatgpt EVERY DAY.
We use several tools derived from "AI research" every single day in our lives.
They are tools and, at every cycle, we gain new tools. The hype is the issue.
goatlover 14 days ago [-]
Personal anecdotes on the benefits of using LLMs don't address complaints about tedious articles over-marketing AI tech. That LLMs provide benefits is well known at this point, it doesn't mean we can't recognize the latest hype cycle for what it is. There's a long list of previous technologies that were going to "change everything".
SubiculumCode 14 days ago [-]
Yes, of course, but they almost always did too. Internet. Mobile Phones.
I think the issue is whether you think that HN posts on AI are basically marketing, or about sharing new advances with a community that needs to be kept on top of new advances. Some posts are from a small startup trying something, or from a person sharing a tool. I think these are generally valuable. I might benefit from a RAG, but won't build one from scratch. In terms of this crowd, I can't think of advances in other areas that are as impactful as machine learning lately. It's not like crypto. Crypto was an interesting innovation, but one that mostly sought a market, instead of a market seeking an innovation. There is no solid "just use a database" analogous response here, like the well-used refrain in reply to attempts at practical uses of cryptocurrency tech. Sure, AI companies built on selling something silly like "the perfect algorithm to find you a perfect date!" are pure hackery, but even at the current level of LLMs, I don't think we are anywhere near understanding their full potential/application. So even if we are on the brink of an AI winter, it's in the Bahamas.
Also, looking at the most popular stories with AI in the title over the last month shows quite a varied array of topics: https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=fa...
If HN readers feel that AI-related articles are showing up too much, then I'd say it would be on them to find articles on topics that interest them and post them to HN.
exe34 14 days ago [-]
surely it's not hype if it works?
gorjusborg 13 days ago [-]
No, hype is an over-presence and/or over-representation of the benefits. Just because there's a hint of truth to the sentiment does not mean there is no hype.
guappa 12 days ago [-]
It is hype if it works but doesn't provide nearly enough revenue to cover the costs.
n_ary 14 days ago [-]
> I use Claude and Chatgpt EVERY DAY […] What other innovation in the last decade has been this disruptive?
I use some sort of IDE every day. Previously, in my early days, I was a “true hacker” using Vim in a console to type out massive code bases. We had waterfall development practices, I would fumble through the source code of various poorly documented features of the language or libraries for hours to figure out the needed function/method/attribute, how anything really worked, and the quirks of strange error messages, needing to call up the vendor or buy a book on the topic and hope it had answers… and of course, I practiced typing every weekend to speed up my typing speed.
Now, I just type something and the IDE reads my mind and shows appropriate suggestions, helpfully imports the packages for me, constantly formats my codebase on save, and raises red/yellow squigglies to flag my mistakes. I copy-paste any quirky errors into a search engine and immediately find other human beings reporting the problem and solutions. I can happily continue developing on the same codebase and parallel features while other teams continue with their features, and we’ll know shortly in the CI pipeline if we have stepped on each other’s toes. What took me weeks now takes like a few hours, and if someone told me to do the same in Vim, I’d be blankly looking at them because that is arcane. Of course my IDE misguesses sometimes and I can correct it, but I am insanely productive now compared to decades back. What other innovation has made this kind of gains?
Also, I could take a horse and go on duty travel to another continent, a journey of several months or even a year, but now I can take a flight, be there in hours, and save an insane amount of time.
The examples can go on; we have new novelties which are groundbreaking, but AI is too much hype, and it is not well deserved. All the hype comes from the VC money burning on it and the need to prep the market for a massive return. Too much hype can turn sour if the topic has been seen to rise and die a few times, hence it is natural that a lot of the HN audience does not feel great looking at this anymore.
Kbelicius 13 days ago [-]
Last decade... GP said last decade. IDEs and airplanes are older.
> The examples can go on
Since the examples never started not sure how they could go on.
norir 13 days ago [-]
Counterpoint: I programmed exclusively in vim for a decade, switched to intellij for scala, did find it more productive (although I found intellij annoyingly sluggish relative to vim -- especially at startup), but then realized that scala itself was limiting my productivity even with the help of an IDE. I abandoned scala, went back to vim and wrote my own language in the most minimal way possible. I don't even use simple tab completions. Yet I am more productive in my language than in any other that I've previously used with or without an IDE.
I don't doubt that you are more productive with an IDE than without, but I personally think the magnitude is reflective of poor language and system design rather than the magic of IDEs (which I believe is relatively minor compared to using a fast compiler with good error reporting). In fact, I sort of think IDEs lead to a kind of trap where people design systems that require their use to be effective which then makes it seem as though the features of the IDE are essential rather than perhaps a source of complexity that is actually making the system worse.
I also will say that your horse vs flight example raises something for me. It's a bit like saying I could drive the Camino de Santiago in a day which saves me an insane amount of time. Sure, it's true, but it misses the entire point of the journey. I basically think the vast majority of programming efficiency boosting tools (ides and llms alike) are mainly just taking us faster on a road to nowhere. I live in San Francisco, supposedly the mecca of technology, and almost never encounter anyone working on anything of truly significant value (according to my personal value system). But I do find a lot of people slinging code for cash, which is a fine and understandable choice, but deeply uninspiring to me. Which also reflects how I feel about LLMs and the like.
jchw 14 days ago [-]
> It's you.
I disagree.
contextfree 14 days ago [-]
How does any of that apply to this particular article? Isn't a broader historical perspective exactly what's needed if you want to be free from the immediate hype cycle?
One of my biggest irritations with HN comment sections is how frequently people seem to want to ignore the specific interesting thing an article is about and just express uninteresting and repetitive generic opinions about the general topic area instead.
auggierose 14 days ago [-]
It's a CACM article. Without having read this one, I'd say CACM articles on HN are absolutely appropriate.
DyslexicAtheist 14 days ago [-]
That's not really a justification in my view. The entire education industry is complicit in this circus. It's not just engineers hoping to get a payday, it's academics too, hoping to get funding and tenure.
CACM was totally complicit in spreading the blockchain hype: https://cacm.acm.org/?s=blockchain
That said, I'm not hating the player, people gotta eat. But I totally lack appreciation for the game.
tomrod 14 days ago [-]
I've worked in the analytics space for over ten years building what today is called "AI" as a service or product. The hype seems more like pent-up release for the valid stuff, and blockchain for the tech-marketer-type stuff.
tincholio 14 days ago [-]
It's been a common problem with HN. I remember when NodeJS came out, it was exactly the same, and then with all the crypto-craze.
dgfitz 14 days ago [-]
Nah, it’s not just you.
AI is really neat. I don’t understand how a business model that makes money pops out on the other end.
At least crypto cashed out on NFTs for a while.
gopalv 14 days ago [-]
> I don’t understand how a business model that makes money pops out on the other end
Tractors and farming.
By turning what is traditionally a labour intensive product into a capital intensive one.
For now, the farmers who own tractors will beat the farmers who need to hire, house and retain workers (or half a dozen children).
This goes well for quite some time, where you can have 3 people handle acres & acres.
I'll be around explaining how coffee beans can't be picked by a tractor or how vanilla can't be pollinated with it.
dragontamer 14 days ago [-]
And I'll be around explaining why it's a bad idea to stockpile $X00,000,000 worth of Equipment in Columbia, where coffee grows readily.
Capital intensive industries require low crime and geopolitical stability. Strongman politics means that investors who buy such equipment will simply be robbed at literal gunpoint by local gangs.
mangamadaiyan 14 days ago [-]
Nitpick: s/Columbia/Colombia/
dgfitz 14 days ago [-]
I may be mistaken, but I was under the impression that, largely, farmers do not own their equipment. They lease it, and it costs a lot.
Edit: Also, 3 people can handle 100 acres of land, given the crop. That happens today.
SubiculumCode 14 days ago [-]
depends on the crop. Strawberries? No. Wheat, yes.
dgfitz 14 days ago [-]
Sure does. I agree. Crop-type wasn't specified.
Edit: Crop-type was specified, I was incorrect.
svara 14 days ago [-]
> I don’t understand how a business model that makes money pops out on the other end.
What issues do you see?
I pay for ChatGPT and for cursor and to me that's money very well spent.
I imagine tools like cursor will become common for other text intensive industries, like law, soon.
Agreed that the hype can be over the top, but these are valuable productivity tools, so I have some trouble understanding where you're coming from.
dgfitz 14 days ago [-]
I feel like the raw numbers kind of indicate that the amount of money spent on training, salary, and overhead doesn't add up. "We'll beat them in volume" keeps jumping out at me.
tdeck 14 days ago [-]
What you're paying for ChatGPT is not likely covering their expenses, let alone making up their massive R&D investment. People paid for Sprig and Munchery too [1][2], but those companies went out of business. Obviously what they developed wasn't nearly as significant as what OpenAI has developed, but the question is: where will their pricing land once they need to turn a profit? It may well end up in a place where it's not worth paying ChatGPT to do most of the things it would be transformative for at its current price.
[1]: https://www.fooddive.com/news/sprig-is-the-latest-meal-deliv...
[2]: https://techcrunch.com/2019/01/21/munchery-shuts-down/?gucco...
Looking at history, anything in its first few iterations costs an insane amount and stays a luxury, or is sold at a massive loss. Once the research goes on for several years, the costs keep coming down, first very slowly and then in an avalanche. The question is always whether one can continue "selling at a loss" long enough to reach the point where costs keep coming down while people are used to paying the standard price (see smartphones), or the product is so market dominant that competition does not have the resources to compete and the price can be raised (see Netflix).
tdeck 13 days ago [-]
One very viable possibility is that the technology sticks around and does great things, but the first mover entities like OpenAI go out of business anyway because it's gotten cheaper to copy their work.
ssl-3 14 days ago [-]
I paid money to Amazon for most of a decade before they had a profitable year.
dgfitz 14 days ago [-]
You realize they had razor-thin margins on purpose, right?
ssl-3 13 days ago [-]
Of course.
Does that aspect differ from this other emerging market?
goatlover 14 days ago [-]
Question is whether these companies are profitable off the services they're providing, or still being propped up by all the VC money pouring in.
DyslexicAtheist 14 days ago [-]
Good point about the business model. AI probably has more going for it, even if the ones reaping the rewards are only 4 or 5 big corps.
It seems with crypto the business "benefits" were mostly adversarial (the winners were those doing crimes on the darknet, or those enabling ransomware operators to get paid). The underlying blockchain Tech itself, though, failed to replace transactions in a database.
The main value of AI today seems to be generative Tech to improve the quality of deepfakes, or to help everyone in business write their communication in an even more "neutral", non-human-like voice, free of any emotion, almost psychopathic. Like the dudes who write about their achievements on LinkedIn in the 3rd person, ... only now it's psychopathy enabled by the machine.
Also, I've seen people who, without AI, are barely literate now sending emails that look like they've been penned by a post-doc in English literature. The result is it's becoming a lot harder to separate the morons and knuckle-draggers from those who are worth reaching out to and talking to.
yes old man yelling at cloud.
xarope 14 days ago [-]
+1. The other concern is that AI is potentially removing junior-level jobs that are often a way for people to gain experience before stepping up into positions with more agency and autonomy. Which means in the future we will be dealing with the next generation of "AI told me to do this", but "I have no experience to know whether this is good or not", so "let's do it".
n_ary 14 days ago [-]
On the contrary, junior positions will be more numerous and growing, because now you do not have the fear of juniors being a liability for a while; with very little guidance they can get stuff done while seniors keep an eye on bigger things and have free time, not being spammed by n00b queries.
Also, given the saturation of STEM graduates now, you have a proven group of juniors who can learn on their own, over the expensive lottery of bootcampers who might bail out the moment it is no longer surface-level React anymore.
With AI, more tiny businesses can launch into the market and hire juniors and part-time expertise to guide the product slowly, without the massive VC money requirement.
dgfitz 14 days ago [-]
I agree with you. I just don't see the AI "summer" happening.
tartoran 14 days ago [-]
Crypto is coming back for another heist. Will probably die a bit once Trump finishes his term
tim333 13 days ago [-]
>every other article
On a quick count it seems to be more like 1/10. Maybe just ignore them and read something else?
I'm interested in the AI stuff personally.
Grimblewald 14 days ago [-]
My problem is the abuse of the term AI to a point where it has lost all meaning. I'd be all for a ban on the term in favour of the specific method driving the 'intelligence', as I would rule some of them out from qualifying simply because they are not capable of making intelligent decisions, even if they can make complex ones (looking at you, random forest).
defanor 13 days ago [-]
Do you mean that cryptocurrency submissions were penalized that way? I recall them being about as annoying and similarly filling the front page with uninformative submissions, but have not heard of such penalties. Same as with other subjects during their hype waves.
dismalaf 14 days ago [-]
Well AI probably is the future. Might not necessarily be LLMs (I personally don't rate LLMs) but enough people are interested in it nowadays that it's almost certain AGI will happen in our lifetimes.
teleforce 14 days ago [-]
Honestly, I'm intrigued as to why you don't rate LLMs. Arguably the main reason AI got out of its winter is the emergence of LLMs.
dismalaf 14 days ago [-]
Because I can't see current techniques for creating LLMs fixing the pre-training problem. Right now big tech companies are training LLMs on, well, pretty much all human knowledge ever assembled, and they're still pretty dumb. They're wrong far too often and they don't have the capacity to learn and figure out things with a limited amount of data as humans do. Also, it's pretty clear that LLMs are flatlining.
Now, they are good text interfaces. They're good for parsing and creating text. There even seems to be very, very basic thought and maybe even creativity (at a very, very basic level). At this point though, I can't see them improving much more without a major change in technology, techniques, something. The first time I saw them I thought they were just regression analysis on steroids, and not going to lie, they still have that vibe considering tech companies have clusters up to 350k H100s and LLMs still are dumber than the average person for most tasks.
I'm currently creating an app that uses an LLM as an interface and it's definitely interesting, but most of the heavy lifting of the app will be the functions it calls and a knowledge database since it needs to have more concrete and current knowledge. But hey, it's nicer than implementing search from scratch I guess.
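Roughly, the shape I have in mind is something like the minimal sketch below. Everything in it is a hypothetical stand-in (fake_llm, search_knowledge_base, and get_current_price are made-up names, not any particular vendor's SDK); the point is only that the model routes requests while the real work happens in ordinary functions and a knowledge lookup.

    # Sketch of an "LLM as interface" app: the model only routes requests;
    # the heavy lifting lives in ordinary functions plus a knowledge lookup.
    # fake_llm() stands in for a real model call; the tool names are made up.
    import json

    def search_knowledge_base(query: str) -> list[str]:
        # Placeholder: a real app would query a database or search index here.
        return [f"stored fact about {query!r}"]

    def get_current_price(item: str) -> float:
        # Placeholder domain function; the model never computes this itself.
        return {"widget": 9.99}.get(item, 0.0)

    TOOLS = {"search_knowledge_base": search_knowledge_base,
             "get_current_price": get_current_price}

    def fake_llm(prompt: str) -> str:
        # Stand-in for the model: asks for a tool on the first pass,
        # then phrases a final answer once a tool result is present.
        if "Tool result:" in prompt:
            return "Here's what I found: " + prompt.split("Tool result:")[-1].strip()
        return json.dumps({"tool": "get_current_price", "args": {"item": "widget"}})

    def handle(user_message: str, llm=fake_llm) -> str:
        reply = llm(user_message)
        try:
            request = json.loads(reply)      # model asked for a tool call
        except json.JSONDecodeError:
            return reply                     # model answered directly
        result = TOOLS[request["tool"]](**request["args"])
        # Feed the tool result back so the model can word the final answer.
        return llm(f"{user_message}\nTool result: {result}")

    if __name__ == "__main__":
        print(handle("How much does a widget cost?"))

So the LLM only decides which function to call and words the final answer; correctness and freshness come from the knowledge database and the functions themselves.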
tokioyoyo 13 days ago [-]
Because almost once a quarter there's a big release that raises the expectations for top AI companies. Which brings up discussions, new articles, and eventually posts on the front page.
In 2020-2022 the HN front page was full of crypto news, mostly in a negative light, but still. And before that there were more hot bubble topics. It's very usual.
Animats 14 days ago [-]
The 1980s AI "boom" was tiny.
In the 1980s, AI was a few people at Stanford, a few people at CMU, a few people at MIT, and a scattering of people elsewhere. There were maybe a half dozen startups and none of them got very big.
nyrikki 14 days ago [-]
Quite incorrect; even smaller colleges, like one in Greeley, Colorado, had Symbolics machines, and there were threads of Expert Systems all throughout the industry.
The industry as a whole was smaller though.
The word sense disambiguation problem did kill a lot of it pretty quickly though.
Animats 14 days ago [-]
Threads, yes. We had one Symbolics 3600, the infamous refrigerator-sized personal computer, at the aerospace company. But it wasn't worth the trouble. Real work was done with Franz LISP on a VAX and then on Sun workstations.
There were a lot of places that tried a bit of '80s "AI", but didn't accomplish much.
nyrikki 14 days ago [-]
2/3 of the Fortune 100 companies used Expert Systems in their daily operations, and their knowledge bases survived.
I don't know how that can be dismissed as nothing.
YeGoblynQueenne 12 days ago [-]
And they still do, except now they're called "business rules" and they are ad-hoc, buggy versions of an expert system.
YeGoblynQueenne 12 days ago [-]
>> In the 1980s, AI was a few people at Stanford, a few people at CMU, a few people at MIT, and a scattering of people elsewhere.
Maybe that's the view from the US. In the '70s, '80s and '90s, symbolic and logic-based AI flourished in Europe, in the UK and France, with seminal work on program verification and model checking and rich collaborations on logic programming between mainly British and French institutions; in Japan with the 5th Generation Computer project; and in Australia with the foundational work of J. Ross Quinlan and others on machine learning, which at the time (late '80s and early '90s) meant primarily symbolic approaches, like decision tree learners.
But, as usual, the US thinks progress is only what happens in the US.
jekude 14 days ago [-]
> Artificial life fizzled as a meta discipline
I've wondered for a while if Artificial Life is in its own winter, waiting for someone to apply the lessons of scale we learned from neural nets.
We're seeing artificial life come back as non-player characters in video games.
antipaul 13 days ago [-]
From “big program, small data” to “big data, small program” seems like a useful way to summarize the main shift from elaborate rules in the first generation, to huge piles of data today.
drcwpl 13 days ago [-]
The biggest problem was "expert systems and a flood of public money" - that public money led to complacency and a lot of research in unproductive areas. It is private money that has really kickstarted the new systems since Google bought AlexNet
massimosgrelli 13 days ago [-]
I'm not an expert in the field, but I find this article incredible. For someone like me who didn't major in AI at CS, it's clear and entertaining.