General Discussion
Everyone Is Cheating Their Way Through College
ChatGPT has unraveled the entire academic project.
https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html
Archive: https://archive.md/2025.05.07-143501/https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html#selection-2129.0-2133.50
Excellent but long article, well worth the read.
A few things off the top of my head. Do I have an issue with students who use AI to fake their way through college? Yup. Do I have an issue with colleges charging impossible amounts for tuition that will probably never be paid off in the students' lifetimes? Damn right I do. So I stand at a crossroads. Do I feel it's okay for students to cheat using AI because of the grossly inflated tuition and impossible interest rates on the loans? I can say I kinda see the students' point.
The problem, though, is that at this rate, critical thinking of any kind will be completely dead in a few years.
After reading this article: if I feared for the future of our youth before, I'm now terrified. Students think AI is an easy way to get an A, which it is, but in the long run our workforce will be degraded by the inability of our new hires to think critically, do hard work, or put in what a professional job requires.
I'll be 82 in 20 years (if I live that long). For many, many reasons, I'm terrified of the coming future. This adds to that mix in a new and scary way.

surfered
(6,947 posts)
nitpicked
(1,209 posts)
Just kidding...
surfered
(6,947 posts)
IMO.
JustABozoOnThisBus
(24,146 posts)
... and there is a Michigan / Ohio State game on television and a cold six-pack in the fridge.
Feel like studying?
WhiskeyGrinder
(24,906 posts)
remote learning, asynchronous learning, and so on. Their experience was completely torpedoed. Mainstream K-12 education generally teaches to the test; anything else requires extensive resources that communities just aren't willing to provide. The transactional nature of education itself is partly to blame -- particularly when their professors are using ChatGPT to make syllabi, grade papers, and assess in-class work. It's shitty all around.
newdeal2
(2,604 posts)For most people, college is all about getting a diploma so that you can get a certain job.
Companies are introducing AI and now want their employees to be able to use ChatGPT to make their work more efficient. Somehow that's not called cheating.
So it makes sense to me that kids are mastering these tools before they get into the real world. Employers will see it as a valuable skill.
gab13by13
(28,255 posts)
they are doing it in grade school.
anciano
(1,811 posts)
that is time efficient and IMO will enhance critical thinking. It is still in its infancy, but it will continue to improve in its applications as it evolves. AI will play a dominant role in our lives as we move forward.
Ms. Toad
(36,972 posts)
I believe it CAN be used that way, but that few, if any, are actually using it that way.
anciano
(1,811 posts)
in several ways. It is a very time-efficient way to assemble relevant data for verification and input into the decision-making process, and it can assist in speculative and hypothetical analyses by helping one see different sides of an issue and posit likely outcomes based on what decisions you may eventually make.
Bottom line: IMO, AI is a useful interactive tool that can enhance critical thinking when used judiciously.
Ms. Toad
(36,972 posts)
used to enhance critical thinking. I'm all for the use of tools.
But that is not how AI, as we currently think about it (generative, rather than data-crunching), is being used in the vast majority of cases.
Second - data gathering and crunching is not what is being described in the article - that kind of "dumb" artificial intelligence has been in use for decades - and it has become the trendy thing to refer to all such things as AI.
So the only thing on your list that involves the kind of AI discussed in the article, and that would also contribute to critical (rather than merely efficient) thinking, is "see(ing) different sides of an issue." What percent of generative AI use do you believe is actually helping users see and think about different sides of an issue, as opposed to generating a response designed to get a good grade, or to avoid thinking?
I am not anti-generative AI. And I agree it is around to stay. BUT, unless things change, I see AI diminishing the critical thinking ability of those who use it - as it is currently used (haphazard untrained use to generate answers).
To change that, we need to start educating students in elementary school on how to use it.
We need to teach how to evaluate output to determine whether it is accurate. (AI is designed to be conversational, not factual.) Yet currently people rely on what it spits out as accurate. I have never found an accurate summary from the AI tool Google uses (and I check regularly, including today). Still, in conversations I've had with people about AI, they tell me about asking ChatGPT even about things that could be life-threatening, and relying on the answers. It's like doctors' fears about patients relying on Dr. Google, but on steroids, because no assembly or integration with other sources is required. The process of evaluating output is a critical thinking skill. AI could be used to teach these skills, but using AI, without more, does not inherently develop them.
We need to teach using it as a tool, not as a producer of end results. In your scenarios, generative AI should not be used to test hypotheses because of its propensity to lie (gap-fill). Harnessing computers to test hypotheses is great, because the program is designed and tested by humans who control the contents of the "black box," rather than leaving the contents of the "black box" to a device designed to leave no gaps, where what fills those gaps is largely crap. Teaching students to create the "black box" used to evaluate the outcome of a hypothetical analysis, EVEN IF generative AI is used to write the code, teaches critical thinking: the first step is knowing what process is needed to evaluate the outcome, the second is describing that to the AI to generate the code, and the third is evaluating the generated code to ensure it actually does what was intended.
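A minimal sketch of that three-step workflow in Python (the function name `compound_interest` and the test values are purely illustrative, not from the thread): the human writes the expectations first, then checks whatever code the AI produces against them rather than trusting it.

```python
# Step 1: the human decides what correct behavior looks like and
# encodes it as expectations BEFORE asking an AI for any code.
expected = [
    # (principal, annual_rate, years) -> balance after compounding
    ((1000.0, 0.05, 1), 1050.0),
    ((1000.0, 0.05, 2), 1102.5),
    ((1000.0, 0.00, 10), 1000.0),
]

# Step 2: suppose this function body came back from a generative-AI prompt.
def compound_interest(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

# Step 3: the human verifies the generated code against the independently
# written expectations, instead of assuming the AI output is correct.
for args, want in expected:
    got = compound_interest(*args)
    assert abs(got - want) < 1e-9, f"{args}: got {got}, want {want}"
print("all checks passed")
```

The critical-thinking work lives in steps 1 and 3, which the human keeps; only step 2 is delegated.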
And (unrelated to critical thinking) - we need to unwind the intellectual property theft by (at a minimum) compensating people for the use of their IP in training, allowing people to remove their IP from the training, and (best option) obtaining true informed consent.
highplainsdem
(56,149 posts)
results are checked for errors it frequently makes. And newer models make even more errors.
It's also trained illegally and unethically on stolen intellectual property. It's wrecking education. It degrades both the internet and the environment.
highplainsdem
(56,149 posts)It's fine to post again, but I'd like people seeing this OP to see the earlier replies as well.
RJ-MacReady
(569 posts)
AI is here to stay, and it's only going to become more and more ingrained in society.