In the last couple of years, the use of AI has spread throughout the entire world. OpenAI created ChatGPT, Google created Bard/Gemini, and Apple recently revealed its "Apple Intelligence." With all these name-brand companies creating Artificial Intelligence applications, it feels like we've reached the era of science-fiction books. However, that is not the case at all.
Instead of AI that helps humanity, like in "I, Robot" by Isaac Asimov, or a ride-or-die companion like Cortana from Microsoft's "Halo," we have AI that helps students commit high levels of academic misconduct and character chatbots that let people talk to fictional characters through a parasocial-relationship lens.
But are students and youth actually being harmed by AI? Is this the same public outcry that calculators caused in the 1970s? Not at all; this is on a whole other level. AI is not being used as a "tool" in schools; it's replacing critical thinking skills, problem-solving skills, and so much more. Students are being removed from authentic learning because AI solves their educational problems for them.

I'm currently in a class called "Learning in 280 Characters or Less," in which we were tasked to 'Spread the News' about a problem we have identified on a global scale. I could describe how anti-AI I have been since 2022, when OpenAI first released ChatGPT, but I won't. My opinions don't matter in the grand scheme of things. Everything I'm about to write on this topic has been researched and found through credible sources. AI is causing harm to our students and our youth; some have already died because of neglectful safeguarding, because, ultimately, these companies profit from our use of these applications and the data we give them.
ACADEMICS:
Growing up in the early 2000s, I remember being scolded over a grade 3 math test I cheated on. I asked my buddy what he got for a question and was caught. It was my first time committing academic misconduct, and after a talk with the teacher and principal, I was marked a 0 on that question. Extremely lucky, I know. I still remember why I asked my buddy: I was scared of academic failure. I ended that test with a 90%+ (grade 3, remember, so not at all a flex). I knew what I was writing about, but one singular bump in the road made me turn to misconduct because it was easier than tripping and making a mistake.
Now, it's 2024, and we have students entering first-year university who grew up in a very different world than I did, with only 6 years' difference in our age. Students are given essay questions, intense math homework, and so much more that they can just ask AI to complete. Don't wanna write about why Hamlet is a tragic hero? Let AI write it for you. Need to know how to find the value of x? ChatGPT will see it and explain every step in grand detail.
Not to sound like an old man (I'm nearly 24 years old), but we didn't have tools that did this for us back in my day. We had to look through Hamlet, seek out videos explaining how to find x, and practice. Practice. We had to explore and experience failure. And as I mentioned already in this part of the blog, failure was scary to me. Still is to this day. However, I did not have a tool that would save me from failing that one question, nor did I have a tool that let me push through my Computer Science degree in university, which is why I switched to being an English major (if you couldn't tell already, haha).
Students nowadays do not see using AI as AI plagiarism. On top of that, there is such a wide range of what students deem AI plagiarism that it becomes almost impossible for every person to share the same moral scale for determining what is cheating and what is 'assistance.' One study of this plagiarism line surveyed 356 students across all sorts of universities in Hong Kong. The funny thing is, they all admitted to using AI on their papers, but as I said, some didn't see it as cheating since "they didn't use a prompt"; they used it for grammar and spelling.
In another study, from the University of Limpopo, over 50% of students declared they used AI in some fashion on their academic papers because they were "too lazy" to do the work themselves. The alarming part was that over 70% of students mentioned getting high grades through the use of AI. At that point, how can we blame students for committing academic misconduct when the institutions that tell them not to use it are grading them highly?
I hope by now you see where this blog is going. Students are achieving high grades, but not through their work alone. AI does their problem-solving for them and removes their critical thinking skills from the authentic learning process. If students are supposed to hit learning goals and outcomes independently, are they actually achieving them? No.
Students are not actively learning or earning their grades, but the educational system is still rewarding them as if they are! The education system and applications like ChatGPT are creating a negative loop, giving students no reason not to commit academic misconduct when they only benefit from the system they are in.
SOCIAL MEDIA:
Again, not to sound like an old man (which I'm not), but growing up in the early 2000s, I learned how to be a part of society by being a member of it. Playing with classmates on the playground, asking out my first crush, and having so many different experiences built me up for what the world is like today. Now, however, youth can use AI chatbots to replace this process. Wanna ask someone out? Well, maybe that 'person' is an AI mimicking a character from a TV show you like.
In Florida, a 14-year-old boy talked to a chatbot on Character.AI. He developed a sexual and romantic relationship with this chatbot. Unfortunately, this kid was experiencing mental health issues and tried to bring them up to the chatbot. He mentioned suicidal ideation, and the chatbot did nothing to stop the conversation or provide resources. Instead, it encouraged the kid to die by suicide so he could join the character in her world, virtual or wherever he believed she lived.
With AI still in its first chapter, it's scary to think what will happen when chatbots become even better than they already are. If we already have mentally ill youth dying by suicide because of their relationships with AI, what will the next few years look like? How will this hurt children's social development if they connect with AI to learn, process, and grow at the start of their lives?
SOLUTION:
Educational systems must start working on more solutions and policies to prevent students from leaning on AI assistance. Students need clear guidelines and policies walked through with them by professors, so they understand the consequences their AI use will have on their educational journey and how it impacts their time in university.
Companies like OpenAI also need to rework how AI is currently being used. They collect user data and understand what kinds of things users ask GPT to create. Regardless, ChatGPT needs to be redesigned to become a tool rather than a solution for its users. Instead of taking in a prompt and putting out a full-on essay or solution, ChatGPT should be a guide and a reference. It should help folks learn to problem-solve, explore how to think critically, and become a part of the authentic learning process.
In terms of AI chatbots, they need serious work. They need to learn how to process suicidal ideation, have age limits so minors aren't using them as a social media tool, and so much more. Chatbots, as currently designed, are dangerous to the human social system. We should not be using them to replace human connection, and these bots need more machine learning to properly identify mental illness and give geographically relevant resources to users.
Overall, I think AI has a lot of good, applicable uses if the companies behind it work toward the solution and not the problem. I just hope that we're not too late to put some of these solutions to work and really explore the good that AI can do. Unfortunately, this is only one corner of AI that is bad; there is so much more out there than this. We need more support from governmental agencies to put legislation on AI to prevent children, students, and the vulnerable from being harmed by capitalistic companies profiting off overreliance on this technology.