> The EU trails the US not only in the absolute number of AI-related patents but also in AI specialisation – the share of AI patents relative to total patents.
E.U. patent law takes a very different attitude towards software patents than the U.S. does. Even if that weren't the case: “specialisation” means that no innovation unrelated to AI gets mind share, investment, or patent applications. And that's somehow a good thing? That's not something you can just throw out there as a presupposition without explaining your reasoning.
FWIW, these studies come too early. Large orgs have very sensitive data privacy considerations, and they're only now going through the evaluation cycles.
Case in point: just this past week, I learned Deloitte only recently approved Gemini as their AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.
To say "AI is failing to deliver" based on only a 4% efficiency increase is a premature conclusion.
If rollout at Deloitte has not yet begun... How on earth did this clusterfuck [0] happen?
> Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.
Because even if an organisation hasn't rolled out generative AI tools and policies centrally yet, individuals might just use their personal plans anyway (potentially in violation of their contracts)? I believe that's called "shadow AI".
Exactly. My company started carefully dipping its toes into org-wide AI in the middle of last year (IT had been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data privacy considerations involved.
And for the record, I think they are absolutely right to be cautious: a mistake in my industry can be disastrous, so a considered approach to integrating this stuff is absolutely warranted. Most established companies outside of tech really can’t have the “move fast and break things” mindset.
Meanwhile, "shadow" AI use is around 90%. And if you'd guess IT leads the pack on that, you'd be wrong. It's actually sales and HR that are the most avid unsanctioned AI tool users.
They had official trainings on how to use Copilot/ChatGPT and some other tools, plus security and safety trainings and so on. This is not a case of people just deciding to use whatever feature Microsoft shipped by default.
OpenAI is buying up something like half of the world's RAM production, presumably on the basis of how great the productivity boost is, so from that perspective this doesn't seem any more premature than the OpenAI scaling plan. And the OpenAI scaling plan is basically all the growth in the US economy...
Yeah. We are only just beginning to get the most out of the internet, and the WWW was invented almost 40 years ago — other parts of it even earlier. Adoption takes time, not to speak of the fact that the technology itself is still developing quickly and might find more and more use cases as it gets better.
As a counterpoint, someone from SAP in Walldorf told me they have access to all models from all the companies, at their choosing and at a more or less unlimited rate. Don't quote me on that, though; maybe I misunderstood him, it was a private conversation. Anyway, it sounded like they're using AI heavily.
Yes, I was recently talking to a person working as a BA who specialises in corporate AI adoption: they didn't realise you could paste screenshots into ChatGPT.
The “corporate” in “corporate AI” can mean tons of work building metrics decks, collecting pain points from users, negotiating with vendors…none of which requires you to understand the actual tool capabilities. For a big company with enough of a push behind it, that’s probably a whole team, none of whom know what they are actually promoting very well.
It’s good money if you can live with yourself, and a mortgage and tuitions make it easy to ignore what you are becoming. I lived that for a few years and then jumped off that train.
I cannot read the paper this article is based on, but it seems to refer to the use of big data analytics and AI in 2024, not LLMs. It concludes that the use of AI leads to a 4% increase in productivity. Nowadays the debate over AI productivity centres on LLMs, not big data analytics. This article does not seem to contradict more recent findings that LLMs do not (yet) provide any increased productivity at the company level.
I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Nowadays an AI assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
For example, "how much does a Ford F-150 cost" will give you something ballpark in a second, compared to annoying "research" to find the answer shrouded in corporate obfuscation.
The killer app for AI might just be unenshittifying search for a couple of years.
Then SEO will catch up and we'll have spam again, but now we'll be paying by the token for it. Probably right around the time hallucination drops off enough to have made this viable.
Too much money in ads, and search is just a huge cash pipeline straight towards them. No way we'll have non-ad-infested LLM search out in the wild from any major vendor in the foreseeable future. Google-fu just becomes LLM-google-fu, except sometimes it goes off the rails and then apologises in that typical, super-annoying way (and screws up something else).
Maybe smaller vendors can somehow provide an almost comparable but ad-free service; heck, even mildly worse but genuine results would win many people over, this one included.
The thread seems to be about the opposite problem. The OP can't find the page they're looking for because Google is too strict about whitespace, according to the top comment.
I used to be able to google a question like that and get an accurate answer within the top 3 results nearly every time about 20 years ago. Then it got worse and worse and became pretty much completely useless about 10 years ago.
Now AI will give me a confident answer that is outright wrong 20% of the time, or kind of right but not really 30% of the time. So now I ask something using an AI chatbot, carefully wording it so it doesn't get off topic and focuses on what I actually want to know; wait 30 seconds for its long-ass answer to finish; skim it for the relevant parts; then google the answer to see where the AI sourced it from and determine whether it misinterpreted or mixed up results, or is accurate. What used to be a 10-second Google search is now a 2-3 minute exercise.
I can very much see how people say AI has somehow led to productivity losses. It's shit like this, and it floods the internet and makes real info harder to find, making this cycle worse and worse and basic stuff take more and more time.
We always had the technology to do things better; it's the money-making part that has made things worse, technologically speaking. In the same way, I don't see how AI will resolve the problem: our productivity was never the goal, and that won't change any time soon.
Yup. Any LLM recommendation for a product or service should be viewed with suspicion (no different than web search results or asking a commission-based human their opinion). Sponsored placements. Affiliate links. Etc.
Or when asking an LLM for a comparison matrix or pros and cons between choices... beware paid placements or sponsors. Bias could be a result of available training data (forgivable?) or of paid prioritisation (or de-prioritisation of competitors!).
> then declined as sponsored results and SEO degraded things
It didn't decline because of this. It declined because of a general, decade-long trend of websites becoming paywalled and hidden behind logins. The best and most useful data is often inaccessible to crawlers.
In the 2000s, everything was open because of the ad-driven model. Then ad blockers, the mobile subscription model, and the dominance of a few apps such as Instagram and YouTube sucking up all the ad revenue made the open web unsustainable.
How many Hacker News-style open forums are left? Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc. The only reason HN is alive is that HN doesn't need to make money. It's an ad for Y Combinator.
SEO only became an issue when all that was left for crawlers was SEO content instead of genuine content.
Why is it depressing? Personally, unless the alternative is literally starving, I wouldn't want to do a job that a robot could do instead just so that I could be kept busy. That sounds like an insult to human dignity tbh.
Apropos, I once had a boss who said he was running a headcount reduction pilot and anyone who had the time and availability to help him should email him saying how much time they had to spare. I cannot deny this had a satisfying elegance.
I've always asked the managers: can you kindly disclose all confidential business information? To which they obviously respond with condescending remarks. Then I respond: how am I going to give you an answer without knowing how the business runs and operates? You can go away and figure out what is going to work for the business, and then you can delegate what you want me to do; that is why you pay me money.
What stands out for me is that the productivity gains for small and medium-sized enterprises are actually negative. But in Germany, for example, these companies are the backbone of the entire economy. So it would be interesting to know how the average was calculated: what method was used, what weighting was applied, and so on.
All in all, it's an interesting study, but it leaves out a lot, such as long-term effects, new dependencies, loss of skills, employee motivation, and much more.
Of note, "AI adoption" here means using "technologies that intelligently automate tasks and provide insights that augment human decision making, like machine learning, robotic process automation, natural language processing (NLP), algorithms, neural networks" and not just LLMs.
AI is swallowing all attention the same way Covid did; we've been in one single-topic hysteria after another since 2020, with one short break for attaching bottle caps to bottles.
Not even the Russian invasion or the collapse of their automotive industry rattled them.
[0] https://fortune.com/2025/10/07/deloitte-ai-australia-governm...
But obviously people were copy/pasting content to ChatGPT and Claude long before that.
Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?
"The Internet" is completely dead. Both as an idea and as a practical implementation.
No, Google/Meta/Netflix is not the "world wide web", they're a new iteration of AOL and CompuServe.
These are not the openclaw folks
Genuinely confused, I don't get it
I kind of want to become Amish sometimes.
https://news.ycombinator.com/item?id=30130535
The result started with 3 "sponsored links" which threw her down the rabbit hole.
This used to be easy.
I found it a sad condemnation of how far the tech industry has fallen into enshittification and is failing to provide tools that are actually useful.
For those hearing this at work, better prepare an exit plan.
If anyone still resigns, that is. They seem to have automated that too.
If the manager doesn’t have ideas, it is they who deserve the boot.