> Claude is really good at helping here, mostly because thinking quickly saturates: when you’ve thought about a problem for five minutes, you’ve had all the thoughts you’re gonna have, and it’s time to talk to someone else.
It reminds me of:
"if you're thinking too much, write; if you're not thinking enough, read"
It's as if the act of writing engages you in a sort of conversation with the future reader.
> Ironically the one situation I’ve found where Claude noticeably hallucinates is when I ask questions about a large and novel text that’s entirely in context. Which is the opposite of what I would expect to see. …
Sigh.
Accurate responses in such situations would be useful for busy professionals in low-stakes scenarios.
But LLMs cannot replace the effect on the human mind that results from actually reading, understanding, and thinking about a text. There is no substitute: we must do our own thinking, because it is the work that matters — the journey, not the destination, yields the benefits.
What warrants a "sigh" here? The author is talking about specific situations where the whole text is in context, which I agree is surprising, but you reply with a generic dismissal of LLMs.
You can also argue with humans, who can motivate you to understand topics better and learn new perspectives. That has been happening since the dawn of humanity.
Yeah, but it's very hard to find someone on demand on that very specific topic that you are very interested in at that given moment. Back and forth takes forever, you have to keep checking your notifications, and some of them resort to personal attacks rather than focusing on the content.
> thinking quickly saturates: when you’ve thought about a problem for five minutes, you’ve had all the thoughts you’re gonna have, and it’s time to talk to someone else.
Perhaps consider that you suffer from an incredibly short attention span. Thinking for five minutes does not exhaust all the thoughts you are going to have on a topic, and if you spend five minutes considering the implications of such a thought you quickly realise how absurd it is.
History is filled with stories of “shower thoughts” and bolts of inspiration which came from thinking on a topic long and hard and immersing yourself in it. If your idea were true, humanity would still believe the Earth is the center of the Universe and we wouldn’t have computers. Yours is precisely the type of mentality which leads to the proliferation of scams and conspiracy theories. It’s also a worrying trend with LLMs, that people are so willing to turn off their brains sooner and sooner.
> You never called any question impossible, said Harry, until you had taken an actual clock and thought about it for five minutes, by the motion of the minute hand. Not five minutes metaphorically, five minutes by a physical clock.
> And furthermore, Harry said, his voice emphatic and his right hand thumping hard on the floor, you did not start out immediately looking for solutions.
> [...]
> So Harry was going to leave this problem to Fred and George, and they would discuss all the aspects of it and brainstorm anything they thought might be remotely relevant. And they shouldn't try to come up with an actual solution until they'd finished doing that, unless of course they did happen to randomly think of something awesome, in which case they could write it down for afterward and then go back to thinking. And he didn't want to hear back from them about any so-called failures to think of anything for at least a week. Some people spent decades trying to think of things.
If you're studying for the SAT, just do timed practice tests until your score is where you want. This is probably more efficient than using an LLM, because it trains you in every way (time management, etc.) to perform well on the test. Any LLM chat service will be fine for explaining answers you don't understand.
I am utterly repelled by LLMs. I don’t know why otherwise thoughtful people use them, and this piece doesn’t explain the attraction either, except apparently what strikes me as creepy and pointless doesn’t strike everyone that way.
I notice little evidence of testing the information that he gets from Claude. From my own testing, which I repeat every so often, I find I cannot rely on anything I get from LLMs. Not anything. Have you tried AI summaries of documents or meetings that you know well? Are you happy with the results? I have not yet seen a summary that was good enough, personally.
Also, a lot of the example use cases he offers sound like someone who is not very confident in his own thinking, but strangely super-confident in whatever an LLM says ($2000/hr consultant? really?).
Claude cannot perform an inquiry. No LLM can. These tools do not have inquiring minds, nor learning minds. He says hallucinations have reduced. How can he know that, unless he cross-checks everything he doesn’t already know?
I find LLMs exhausting and intellectually infantilizing. From this piece I cannot rule out that there is something very nice about Claude. But I also can’t rule out that there is a certain kind of addictive or co-dependent personality who falls for LLMs for unhealthy reasons primarily.