
Ms. Toad (37,467 posts)
Thu May 15, 2025, 02:37 PM

15. First - I asked about "will" (your assertion), not "can." I indicated there are ways in which I can see it being used to enhance critical thinking. I'm all for the use of tools.

But that is not how AI, as we now think of it (generative, rather than data-crunching), is currently being used in the vast majority of cases.

Second - data gathering and crunching is not what is being described in the article. That kind of "dumb" artificial intelligence has been in use for decades - it has just become trendy to refer to all such things as AI.

So the only thing in your list that is both part of the kind of AI discussed in the article and a contribution to critical (rather than efficient) thinking is "see(ing) different sides of an issue." What percentage of generative AI use do you believe actually helps users see and think about different sides of an issue - as opposed to generating a response designed to get a good grade, or to avoid thinking altogether?

I am not anti-generative AI, and I agree it is here to stay. BUT, unless things change, I see AI diminishing the critical thinking ability of those who use it as it is currently used (haphazard, untrained use to generate answers).

To change that, we need to start educating students in elementary school on how to use it.

We need to teach how to evaluate output to determine whether it is accurate. AI is designed to be conversational, not factual, yet people currently rely on what it spits out as if it were accurate. I have never found an accurate summary from the AI tool Google uses (and I check regularly - including today). Yet in conversations I've had with people about AI, they tell me about asking ChatGPT even about things which could be life-threatening, and relying on the answers. It's like doctors' fears about their patients relying on "Dr. Google" - but on steroids, because no assembly or integration with other sources is required. The process of evaluating output is a critical thinking skill. AI could be used to teach that skill - but using AI, without more, does not inherently develop it.

We need to teach using it as a tool - not as a producer of end results. In your scenarios, generative AI should not be used to test hypotheses, because of its propensity to lie (gap-fill). Harnessing computers to test hypotheses is great - because the program is designed and tested by humans who control the contents of the "black box," rather than leaving the contents of the "black box" to a device designed to leave no gaps - and what it fills those gaps with is largely crap. Teaching students to create the "black box" used to evaluate the outcome of a hypothesis - EVEN IF generative AI is used to write the code - teaches critical thinking, because the first step is knowing what process is needed to evaluate the outcome, the second is describing that process to the AI to generate the code, and the third is evaluating the code to ensure that what was generated actually does what was intended (see the sketch below).
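To make that concrete, here is a minimal sketch of the three steps in Python. The language, the example hypothesis (that a coin is fair), and all the names and numbers are my own illustration, not anything from the article. The evaluation code is small enough to read and verify by hand, and the sanity checks at the end confirm it does what was intended - whether a human or a code-generating AI wrote the first draft.

import random

def simulate_fair_coin(flips, trials, seed=0):
    # Head counts for `trials` runs of `flips` fair-coin flips.
    rng = random.Random(seed)
    return [sum(rng.random() < 0.5 for _ in range(flips)) for _ in range(trials)]

def p_value(observed_heads, flips, trials=10000):
    # Two-sided Monte Carlo p-value: how often does a fair coin deviate
    # from flips/2 at least as much as the observed count did?
    deviation = abs(observed_heads - flips / 2)
    counts = simulate_fair_coin(flips, trials)
    return sum(abs(c - flips / 2) >= deviation for c in counts) / trials

# Step 3 from above: check the code against outcomes you can reason
# about yourself before trusting it on real data.
assert p_value(50, 100) > 0.5    # exactly half heads is unsurprising
assert p_value(90, 100) < 0.01   # 90 of 100 heads should be very rare
print("sanity checks passed; p(60/100 heads) =", p_value(60, 100))

The last step is where the critical thinking lives: the checks encode outcomes a student can reason about independently (half heads should be unremarkable; 90 of 100 heads should be rare), so the code is validated against human judgment rather than trusted on faith.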

And (unrelated to critical thinking) - we need to unwind the intellectual property theft by, at a minimum, compensating people for the use of their IP in training and allowing people to remove their IP from the training data - and, best option, obtaining true informed consent.

