I have to give my school credit for the creative way they caught Richard. The teacher reviewed the history of his edits and discovered the spontaneous appearance of chunks of text. The teacher immediately suspected that Richard had copied and pasted the text in, so she asked Richard for his source document where he wrote the text. He didn’t have one.
Being on my high school’s disciplinary committee, I have seen a lot of students make impulsive decisions and take up four hours of my Sunday afternoon as a result. This case particularly intrigued me because it involved technology, where the line between appropriate and inappropriate use is far from clear.
Richard was drafting a news story for his journalism class and decided to write about the basketball season. Conveniently, the basketball coach, let’s call him Mr. Lydian, records every game in the form of an email that he shares with the entire school. Richard read all of them for inspiration and thought that Mr. Lydian’s accounts provided great insight. So he summarized the accounts with ChatGPT and used those summaries as the bulk of his engaging narrative.
During the hearing, Richard explained his belief in AI’s increasingly important role in the future; he saw proficiency with ChatGPT as a skill worth cultivating. To Richard, AI was a tool like PowerPoint in the ’90s—an essential ability that would only benefit him more the earlier he developed it. “It’s good prompt design practice,” he argued. If ChatGPT could capture the soul of Mr. Lydian’s emails better than he could, why write a summary himself that would be both slower to produce and worse in quality? The sentences that AI generates simply flow.
Although my job on the disciplinary committee was to help Richard recognize his mistakes, I could not help agreeing with many of his takes on AI. In education, many teachers have considered the use of AI and embraced the technology as a helper in the classroom. They have used ChatGPT to improve curricula, generate examples, and in some cases even grade students’ assignments—saving an immense amount of time that they can then devote to their students. In addition, technologies such as ChatGPT have a reach that exceeds our control; they “hold authority and influence independent of the intention or control of the humans in charge” (JWU). Given that ChatGPT is already readily available, banning its use is immensely difficult.
Prohibition would require that all essays be handwritten, which would drastically decrease their quality because, in higher education, nearly everybody types faster than they write. Even teachers who initially despised ChatGPT are therefore slowly adjusting to the technology. In fact, the Impact Research/Walton Foundation survey found that “88 percent gave the AI program a good review, saying it has had a positive impact on instruction,” and that “[t]hirty-eight percent of teachers say they have given their students the green light to use the program” (Edweek). These viewpoints clearly support Richard’s opinions on AI’s increasing prominence.
To clarify, I agree with the school that Richard’s action should be considered an academic dishonesty violation, because the teachers wanted to assess his writing capabilities without unauthorized aid. Still, I am guilty of having similar thoughts. He voiced the honest, logical reasoning of many high school students: especially now that most have realized how inaccurate AI checkers are, why sacrifice your free time when this magical device can convey your thinking more fluently? I was actually glad that Richard revealed how widespread and normalized this mindset of leaning on ChatGPT is—he was just unlucky enough to be caught for not being discreet. Then I suddenly realized how problematic this abuse of the technology can be.
Many believe that AI is a crutch that hides one’s authentic work behind perfectly articulate sentences and prevents young adults from actually learning the content. Although these opponents recognize ChatGPT’s helpfulness in small tasks such as emailing teachers, they argue that students then fail to learn the interpersonal skills those tasks build. They also doubt AI’s ability to construct complete logical arguments because of how the technology works: text generators do not think critically like humans; they predict the next words from probability distributions, completing sentences by educated guessing. Journalists such as Rodolfo Delgado have realized as much: “it [ChatGPT] lacked the touch of humanity that was inherently mine… However, I’ve found that in the long run, what truly captivates readers is the presence of genuine emotion. In a digital landscape saturated with grammatically impeccable articles, it is crucial to remember that our audience comprises humans” (Forbes). Today, phrasing thoughts into coherent sentences is an essential skill for communication, but in the future, would writing and expressing emotions no longer matter? What about talking? Would people start using ChatGPT to converse? A wave of dystopian technological possibilities flooded into my head.
To satisfy my curiosity, I had to determine the appropriate uses of ChatGPT, and my judgment required a deeper understanding of how well ChatGPT actually writes. Since it would be unfair to compare a robot’s work to that of authors and professors, I decided to use myself, an average high school student, as the benchmark. I had spent my entire Thanksgiving break drafting a paper analyzing AI’s effects on work, so I wrote another paper on the same subject in five minutes with ChatGPT and compared the two. The initial AI draft took a completely different approach than I had, making an objective comparison difficult, so I entered my thesis as part of the prompt. The essays’ length was also a problem for ChatGPT, which only outputs a maximum of about 600 words at once; I had to force the machine to split the essay into ten parts and write each one individually.
In short, the result confirms Delgado’s claim: despite being easy to read, generative AI’s analytical writing is awful. The robotic, formulaic paper overemphasizes summaries that lack arguable content, provides minimal references to studies or statistics, overuses subheadings that destroy continuity, and repeats information it has already mentioned.
After this experiment with ChatGPT, I developed a framework for deciding whether an assignment should incorporate AI’s help. For homework focused on content and facts, ChatGPT is genuinely helpful as a teacher’s assistant that summarizes complex events, concepts, or philosophies. Although the tool is infamous online for hallucinating false information, I haven’t encountered any such cases while researching straightforward high school material such as Lincoln’s approach to the Civil War or photosynthesis explained at a molecular level. In these cases, we can use ChatGPT like a search engine. Unlike Google, though, where a search can return an endless stream of unsatisfactory results, ChatGPT provides a concise response that usually answers the question directly. Even if we are still confused, we can reference ChatGPT’s previous answer in our next question. This way of learning can save us hours of scouring dense research papers on specific topics such as the Cultural Revolution’s effects on Chinese feminist movements.
However, we should rarely use AI when assignments shift away from simple summaries and explanations and instead require us to make connections and apply critical thinking. When asked to generate thoughtful answers to more analytical prompts, ChatGPT tends to output complex sentences full of abstract nouns in an assertive tone, yet when we try to decipher their meaning, we realize the machine is simply stating the obvious. In fact, to avoid bias, OpenAI specifically designed its chatbot to express few opinions—it won’t say anything substantial. In these cases, using the technology is instead a waste of time.
Educators should not dissuade students from using ChatGPT, because there is definitely space for such technologies in today’s education. Banning AI in learning is like scrapping the invention of nuclear power because of atomic bombs. In fact, treating ChatGPT as taboo would only increase students’ curiosity to explore it themselves (as I have), often without considering the ethical implications. We must teach students the proper applications. I have benefited too much from ChatGPT’s intuitive interface and coherent clarifications to dismiss it as a gimmick that breeds harmful reliance. I remember when teachers despised grammar checkers such as Grammarly just a couple of years ago. If that tool could integrate so well into the curriculum that my school now offers students free subscriptions, I believe ChatGPT can do the same.
As his class’s representative on the Disciplinary Committee, Harry felt an urgent need to reflect on a student’s recent academic dishonesty incident. With a strong interest in the ethics of technology, Harry saw the case as an opportunity to express his opinions on ChatGPT, raising awareness of its effects on education while accurately reflecting his peers’ thoughts on AI text generation. Outside of school, he is also passionate about playing tennis, producing music, and writing blogs on Medium (@Harrycats2019).
—this essay was previously published in the Santa Barbara Independent
References:
https://www.axios.com/2024/03/06/ai-tools-teachers-chatgpt-writable
https://www.edweek.org/technology/what-do-teachers-think-of-chatgpt-you-might-be-surprised/2023/03
https://www.forbes.com/sites/forbesbusinesscouncil/2023/07/11/the-risk-of-losing-unique-voices-what-is-the-impact-of-ai-on-writing/?sh=3dba63914db6