Yup, "use it or lose it" applies. So you have to be careful to follow through and make sure you actually understand the reasoning you're relying on.
I use AI tools (LLMs like Claude) daily, but as thinking partners, not substitutes for thinking. The key difference: I'm using them to organize and articulate what I already know from 38 years teaching, or to help me work through complex material I'm studying. I'm not asking them to think for me.
Search engines (Google, DuckDuckGo) are different animals - they retrieve existing information. LLMs can reason through problems with you, but they can also lead you astray if you don't verify and understand what they're telling you. And yes, they have no long-term memory - each conversation starts fresh.
The deeper issue is the same one we've always had with calculators, spell-check, GPS - any tool. If you use it instead of developing understanding, you're in trouble. If you use it to extend what you can do with your understanding, you're fine. The critical thing is: can you recognize when the tool is giving you garbage?
I should mention: I bounced this off Claude before posting to catch any technical mix-ups and tighten the language. That's actually part of my point - I knew what I wanted to say, but I wanted help saying it clearly. That's the difference between using AI and being used by it.