ashwinnair991 hour ago
The concern isn't that AI reasons differently. It's that we start outsourcing the slow thinking entirely and then forget we ever had it.
n_u17 minutes ago
Are you an LLM? This comment appears twice in this thread, and of your last 10 comments, 6 use the pattern "X isn't Y" or "X didn't Y, Z did":

https://news.ycombinator.com/item?id=47469767 > The concern isn't that AI reasons differently.

https://news.ycombinator.com/item?id=47469834 > The concern isn't that AI reasons differently.

https://news.ycombinator.com/item?id=47470111 > The problem isn't time.

https://news.ycombinator.com/item?id=47469760 > Airlines have been quietly expanding what they can remove you for. This isn't really about headphones.

https://news.ycombinator.com/item?id=47469448 > Good tech losing isn't new, it's just always a bit sad when it happens slowly

https://news.ycombinator.com/item?id=47469437 > The tool didn't fail here, the person did

christophilus11 minutes ago
Definitely AI. Every comment sounds like GPT.
pepperoni_pizza1 hour ago
I already noticed that. When I feel lazy, I reach for the AI. It's exactly the same laziness voice that nudges me to drive instead of walking.

But then I go running and swimming for fun, and there is no laziness voice there telling me to stop, because I enjoy it. Similarly with AI: I only use it for things I don't care about, like various corporate BS. Maybe the cure for AI-brain is to care about and be passionate about things.

Conversely, does this mean that the kind of people who use AI for everything don't care about anything?

necrotic_comp38 minutes ago
There's something interesting I've found about my interactions with AI: I use it as a thought partner. I don't ask it to solve a problem for me (well, not at first, at least!). I treat it as a tool to work with, engage with the problem, and spit out a result that I then test and review.

I see it as part of the feedback loop: it speeds up some of the mechanical drudgery while not removing any of the semantic problems inherent in problem solving. In other words, there are things machines are good at and things humans are good at; if we each stick to our strengths, we can move incredibly fast.

delijati51 minutes ago
That is why I compare it to fast food. From time to time you enjoy it, but you should not consume it too much ;)
keiferski11 minutes ago
"Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization, because as soon as we started thinking for you, it really became our civilization, which is of course what this is all about."
gmuslera1 hour ago
The main problem with "System 3" is that it has its own kind of "cognitive biases", like System 1, but these new cognitive biases are shaped by marketing, politics, culture, and whatever censors or surfaces the original training data. And that's even if the process, the processing, and everything else around it were perfect (which they are not, e.g. hallucinations).

But we still have System 1, and we survived and reached this stage because of it: even a bad guess is often better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.

HPsquared29 minutes ago
I suppose the publishing process has always existed as a kind of System 3. It's just that now we have a new way to read and write with an abstract "rest of the world".
kikkupico34 minutes ago
Contrary to the general opinion, I feel that AI has IMPROVED my cognitive skills. I find myself discovering solutions to problems I've always struggled with (without asking AI about it, of course). I also find myself becoming much better at thinking on my feet during regular conversations. I believe I'm spending more time deep thinking than ever before because I can leave the boring cognitive stuff to AI, and that's giving my mind tougher workouts and making it stronger; but I could be completely wrong.
eslaught1 minute ago
Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.

I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.

siva713 minutes ago
It's so fascinating. I feel the same, but at the same time I feel like most people are getting dumber than before AI (and most seem to struggle to adapt to it).
Ozzie_osman32 minutes ago
When humans have an easy way to do something that is almost as good, we choose that easy way. Call it laziness, energy conservation, coddling, etc. The hard thing then becomes hard to do even when the easy thing isn't available, because the cognitive muscle and the discipline atrophy.

Like kids who are never taught to do things for themselves.

tac1927 minutes ago
Do you refuse to use a calculator or spreadsheet because doing long division by hand helps you exercise your mental muscle? Do you refuse to use a database because it will make your memory weaker? Or do you refuse to use a car because it makes you less able to walk when the car is unavailable? No, because the car empowers you to do something that, at the very least, takes a lot longer on foot.

People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.

ashwinnair991 hour ago
The concern isn't that AI reasons differently. It's that we stop practicing the slow thinking ourselves and then quietly forget we ever had it.