Alan C. Moore

    Let’s Talk About ChatGPT and Cheating in the Classroom

May 23, 2025

    There’s been a lot of talk about how AI tools like ChatGPT are changing education. Students are using AI to do research, write papers, and get better grades. So today on the show, we debate whether using AI in school is actually cheating. Plus, we dive into how students and teachers are using these tools, and we ask what place AI should have in the future of learning. 

    You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Katie Drummond on Bluesky at @katie-drummond. Write to us at [email protected].

    How to Listen

    You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

    If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.

    Transcript

    Note: This is an automated transcript, which may contain errors.

Michael Calore: Hey, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have a tech-related question that’s been on your mind, or just a topic that you wish we’d talk about on the show? If so, you can write to us at [email protected], and if you listen to and enjoy our episodes, please rate the show and leave a review on your podcast app of choice. It really helps other people find us. How’s everybody doing? How are you feeling this week?

    Katie Drummond: I’ll tell you how I’m feeling. It’s Katie here. My vibe levels are up. I’m feeling really good. I was at Columbia University earlier this week with five of our fantastic editors and reporters at WIRED because we were honored at the Columbia Journalism School this week for our politics reporting. And so we got dressed up, I gave a speech and it was so wonderful to have a minute to sit back and take a breath and think about all of the journalism we’ve done in the last several months and celebrate that. And it was also really, really cool to just see and talk to journalists who were graduating from journalism school and feel their energy and their excitement and their drive to do this work. Because I think, as you guys know, and you probably agree, we’re all quite tired. Lauren, how are you?

Lauren Goode: When you said, “Because we’re tired,” I wasn’t sure if you meant we’re just tired in this moment or we are existentially tired, because I am a little tired in this moment, but I am not existentially tired. I’m here for the fight, Katie.

    Katie Drummond: Oh, I’m so glad to hear that.

    Lauren Goode: Yeah.

    Katie Drummond: Yeah, I’m tired in this moment. I just think it’s so nice to spend some time with a couple hundred people who are new to this and just so excited to get down to business. It was very cool.

    Michael Calore: How much ChatGPT use is there at Columbia University in the journalism department, do we think?

    Lauren Goode: Good question, Mike.

    Katie Drummond: I really hope very little.

Michael Calore: Me too. For the sake of us all. This is WIRED’s Uncanny Valley, a show about the people, power, and influence of Silicon Valley, and today we are talking about how AI tools like ChatGPT are changing education, from middle school to graduate school. More and more students are using generative chatbot tools to gather information, finish assignments faster, get better grades, and sometimes just write things for them. Just this month, there has been a ton of reporting and discourse on this trend. Some of it has been fairly optimistic, but a lot of it has also been critical; as one user on X put it, “The kids are cooked.”

    Lauren Goode: The kids are all right.

    Katie Drummond: Which X user was it? I can think of a few. I’m just curious. We don’t actually know.

    Michael Calore: So on this episode, we’re going to dive into how students are using ChatGPT, how professors are using it, whether we think this trend is, in fact, cheating when the students use it, and what AI’s place could be in the future of learning. I’m Michael Calore, director of consumer tech and culture here at WIRED.

    Lauren Goode: I’m Lauren Goode. I’m a senior correspondent at WIRED.

    Katie Drummond: And I’m Katie Drummond, WIRED’s global editorial director.

    Michael Calore: So before we dive into what has been happening with AI and students potentially using ChatGPT to cheat in their coursework, I want to have all of our cards on the table. Did either of you cheat in high school or in college? And if so, how?

Katie Drummond: I feel like I should go first here because I’m the boss and I want to set Lauren up for success in her answer. I did not cheat in college. I was a very serious person in college. I was getting an undergraduate degree in philosophy, which felt like a very serious thing to be doing at the time. So I was totally above board. And also, as I was thinking about this earlier, this was in the early 2000s and it wasn’t, I don’t think, or wouldn’t have been particularly easy to cheat at philosophy back then, whereas interestingly, it would be pretty easy to cheat at philosophy now. You’re reading a lot. You’re writing a lot of essays. It’s hard to imagine how I would’ve effectively cheated, but I didn’t cheat. I did cheat in high school though. Everybody cheated all the time. I’m not saying I cheated all the time. I’m not going to answer that question, but I did cheat. I specifically remember we had graphing calculators, and we would program equations and answers into the calculators using special code so that if teachers went through our calculators, they wouldn’t be able to tell that they were cheats. We went to pretty great lengths to cheat on math exams, which is so stupid because I would’ve done great on the math exam regardless, but there was just something about being able to get away with it.

    Lauren Goode: Do you feel like a weight has been lifted from you now that you have confessed?

    Katie Drummond: No, I don’t care. Look, I think that most students, at least in middle school and high school, dabble with cheating, and so I have no shame. What are they going to do? Strip me of my high school diploma. Good luck.

    Lauren Goode: Yeah, it’s kind of a rite of passage.

    Katie Drummond: Exactly.

    Lauren Goode: I was very similar to Katie in that I did not cheat in college. In high school though, I remember reading Cliff’s Notes for some book assignments. My best friend and I also did some light cheating in high school because the first initial of our last names wasn’t that far apart, and it was a small school as well, so she was often sitting in front of me and I was directly behind her. And we had a tapping scheme where we’d tap our pencils during Scantron tests.

    Katie Drummond: Wow.

    Michael Calore: Oh, like sending secret messages to each other.

Lauren Goode: Yeah, yeah. So if she was on question 13, she would sort of slide her Scantron to the side of the desk so that you could see which question number it was, and then the person who had the answer would tap their pencil a corresponding number of times to be like, answer A, answer B, answer C. Anyway, I don’t want to implicate her. Totally. She’s an adult now with a career and two grown children, and I’m not sure if the statute of limitations has expired on this grand felony from Notre Dame Catholic High School. So maybe we can scrap that from the record. Thank you very much. Mike, did you cheat?

    Michael Calore: No, I was a total goody-goody, like super-duper do everything by the book Eagle Scout kind of kid. Didn’t cheat in high school. I did encounter a course in college that I had a really hard time keeping up with. It was the 19th century British novel, and the reading list was absolutely brutal. It was one super long, boring book every week. And I mean, there was some good stuff in there, like Jane Eyre and Frankenstein. And then there were absolutely terrible books in there, like Barchester Towers and The Mayor of Casterbridge. So I learned the art of the shortcut. I would zoom in on one chapter and I would read the Cliff’s Notes, and then I would read that chapter and I would be able to talk about that chapter in depth on a test.

    Katie Drummond: Oh, that’s very smart. That’s smart. But not cheating.

    Michael Calore: Not necessarily cheating. I don’t consider Cliff’s Notes to be cheating. I’m one of those people.

    Lauren Goode: Why not?

Michael Calore: Well, because you’re still actually doing the work and comprehending. And I think some of the examples that we’re going to talk about don’t even have that step in them. They just sort of skip over all the learning.

    Lauren Goode: Yeah, but you’re not understanding the full context of where that author fits into a certain category of other writers.

    Katie Drummond: Lauren, I think that what you’re trying to do right now is distract both us and our audience from your Scantron felony, when in fact, it seems like Mike is the most innocent party here. I just need to say.

    Lauren Goode: Fair enough.

Michael Calore: At least I did the reading. All right, well, we’ve all come clean, so thank you for all of that. And we can acknowledge that, of course, cheating is nothing new, but we’re talking about it now because the use of AI tools like ChatGPT by students has exploded in recent years. It’s become a topic of debate in both the tech and education spheres. So just to get a sense of the scale of how much students are using AI: one estimate by the Digital Education Council says that around 86% of students, globally, regularly use AI. During the first two years that ChatGPT was publicly available, monthly visits to ChatGPT steadily grew and then started to dip in June, when school gets out.

    Katie Drummond: 86%.

Michael Calore: 86%. So yeah, “I’ve used AI in my school.”

    Katie Drummond: That is an astonishing figure.

Michael Calore: So the appeal of something like ChatGPT, if you’ve used it, you understand why it would be useful to students. The appeal of using it is pretty obvious. It can write, it can research, it can summarize, it can generate working code. But the central question remains: Is using ChatGPT in schoolwork cheating? Where do we draw the line here?

    Katie Drummond: So I don’t think that there’s a black and white answer, which is good for the length of this episode, but I think that that informs my overall view about AI and education, which is that this technology is here, you can’t hide it, you can’t make it go away. You can’t prevent teenagers and young adults from accessing it. So you need to learn to live with it and evolve and set new rules and new guardrails. So in that context, I think there are a lot of uses of AI for students that I would not qualify as cheating. So getting back to the Cliff Notes debacle, I think using AI to summarize information, like say you’re coming up with notes to help you study and you use AI to summarize information for you and come up with a study guide for you, I think that’s a fantastic use of AI and that would actually just save you a lot of time and allow you to focus on the studying part instead of the transcription and all of that stuff. Or honestly to me, using it to compile research for you that you’ll use to then write a paper, I think use cases like that are a natural evolution of technology and what it can help us do. I think for me, where AI becomes cheating is when you use AI to create a work product that was assigned and meant to come from you and now doesn’t. But Lauren, I’m curious to hear what you think.

Lauren Goode: Well, it would make for a really good podcast if I vehemently disagreed with you right now. I think we’re pretty aligned on this. Earlier this week I happened to be at the Google I/O conference, which is their annual software conference, and it’s a huge AI fest. It’s an AI love fest. And so I had the opportunity to talk to a bunch of different executives, and many of these conversations were off the record. But after we got through the round of like, “Okay, what’s the latest thing you announced?” I just said, “How are you feeling about AI and education? What’s your framework for thinking about this?” And one of them said, “Are you using it to replace the goal of the exercise?” And it’s a blurry line, but it’s, I think, a line to draw in terms of whether or not you’re “cheating.” So if you’re going to ask that question, you first have to determine the goal and then you have to determine what the product is. The product of an education is not actually test scores or assignments. The product is, are you learning something from doing it? So if you’re using AI to generate an output, it’s understandable that you would say, “Does this output demonstrate cheating?” But the cheating actually happens during the generative part of generative AI. And once again, that’s very fuzzy, but I think that if the goal of an assignment is not just to turn this thing in on your teacher’s desk on Tuesday morning, but rather, did you learn something, and you’re using AI to cheat through the learning part, which is, I think, what we’re going to be discussing, then yes, I guess that is cheating. But the use of these tools in education, just broadly speaking, doesn’t scream cheating to me.

    Katie Drummond: I think that’s a really interesting way of thinking about it actually. I like that a lot. Thank you person at Google.

Michael Calore: Yeah. If the assignment is to write 600 words about the French Revolution, then that’s obviously something that ChatGPT can do for you pretty easily. But if the assignment is getting knowledge into your brain and then being able to relay it, to prove that you’ve memorized it and internalized it and understand it, then I think there are a lot of things that ChatGPT and tools like it can do for you. Like you mentioned, Katie, you can use it to summarize books, you can use it to help you with the research. One of the most ingenious uses that I’ve seen is people asking it to generate practice tests. They upload their whole textbook and they say, “I have a test on Friday on chapters four and six, can you generate five practice tests for me to take?” And that helps them understand what sort of questions they would be getting, and whatever keeps popping up in all of those practice tests is probably the most important thing to learn. So let me quickly share a real-world example of AI cheating to see what you think about it. The most infamous case perhaps comes from a recent New York Magazine story about students using ChatGPT for their coursework. The story starts off with Chungin Roy Lee, a former Columbia student who created a generative AI app explicitly to cheat on his computer science schoolwork. He even ended up using it in job interviews with major tech companies. He scored an internship with Amazon after using his AI helper during the interview for the job. He declined to take that job, by the way. So that’s pretty ingenious. He’s coding an app. He’s using generative AI to make an app to help him cheat on things and get jobs. Do you think that the “ingenuity” behind building something like this is cheating? Do we think that his creation of this AI tool carries any merit?

Lauren Goode: I mean, it’s so clearly cheating because the intent is to cheat. If we go back to that question of, are you using it to replace the goal of what you’re trying to do? His goal is cheating. His goal is like, “Look how clever I am,” and then he’s cheating. Lee strikes me as the irritant in the room. What he’s doing is bubbling to the surface a lightning-rod topic that is much bigger than this one specific app.

Katie Drummond: Well, and in April of this year, something I thought was interesting, in terms of he’s the irritant, but how many complicit irritants does he have on his team: Lee and a business partner raised $5.3 million to launch an app that scans your computer screen, listens to the audio, and then gives AI-generated feedback and answers to questions in real time. And my question when I read that was, “Who are these investors? Who are these people?” The website for this company says, “We want to cheat on everything.” And someone was like, “Yes, I am writing a check.” Of course it’s cheating. They say that it’s cheating. I mean, I appreciate the creativity. It’s always interesting to see what people dream up with regards to AI and what they can create. But using AI to ace a job interview in real time, not to practice for the job interview beforehand, but to answer the interviewer’s questions in real time, like, you’re setting yourself and your career up for failure. If you get the job, you do need to have some degree of competence to actually perform the job effectively. And then I think something else that I’m sure we’ll talk about throughout this show is the erosion of skill. It’s knowing how to think on your feet or answer tough questions or engage with a stranger, make small talk. There are all of these life skills that I worry we’re losing when we start to use tools like the tools that Lee has developed. And so of course I think there are interesting potential use cases for AI; interview prep or practice is an interesting way to use that technology. So again, it’s not about the fact that AI exists and that it’s being used in the context of education or a job interview, it’s about how we’re using it. And certainly in this case it’s about the intent: someone is developing these tools specifically with the intention of using them and marketing them for cheating, and I don’t like that. I don’t like a cheater, other than when I cheated in high school.

    Michael Calore: Well, we’ve been talking a lot about ChatGPT so far and for good reason because it’s the most popular of the generative AI tools that students are using, but there are other AI tools that they can use to help with their coursework or even just do their schoolwork for them. What are some of the other ones that are out there?

Lauren Goode: I think you can literally take any of these AI products that we write about every day in WIRED, whether it’s ChatGPT, whether it’s Anthropic’s Claude, whether it’s Google Gemini or the Perplexity AI search engine, or Gamma for putting together fancy decks. Beyond those tools, there are also highly specialized AI tools like Wolfram or MathGPT, which are both math-focused models. And you can see folks talking about that on Reddit.

Katie Drummond: Something interesting to me, too, is that there are now also tools that basically make AI detectors pretty useless. So there are tools that can make AI-generated writing sound more human and more natural. You basically would have ChatGPT write your paper, then run it through an additional platform to finesse the writing, which helps get that writing around any sort of AI detection software that your professor might be using. Some students have one LLM write a paper or an answer, and then they run it through a few more to basically make sure that nothing can be detected using AI detection software. And I think students are also getting smarter about the prompts they use. So there was a great anecdote in this New York Magazine story about asking the LLM to make you sound like a college student who’s kind of dumb, which is amazing. It’s like, maybe you don’t need the A plus, maybe you’re okay getting the C plus or the B minus. And so you set the expectations low, which reduces your risk, in theory, of getting caught cheating.

    Michael Calore: And you can train a chatbot to sound like you.

    Katie Drummond: Yes. Yeah.

    Michael Calore: To sound actually like you. One of the big innovations that’s come up over the last year is a memory feature, especially if you have a paid subscription to a chatbot, you can upload all kinds of information to it in order to teach it about you. So you can give it papers, you can give it speeches, YouTube videos of you speaking so it understands the words that you’d like to use. It understands your voice as a human being. And then you can say, “Write this paper in my voice.” And it will do that. It obviously won’t be perfect, but it’ll get a lot closer to sounding human. So I think we should also talk about some of the tools that are not necessarily straight chatbot tools that are AI tools. One of them is called Studdy, which is study with two Ds, which I’m sure the irony is not lost on any of us that they misspelled study in the name, but it’s basically an AI tutor. You download the app and you take a picture of your homework and it acts like a tutor. It walks through the problem and helps you solve it, and it doesn’t necessarily give you the answer, but it gives you all of the tools that you need in order to come up with the answer on your own. And it can give you very, very obvious hints as to what the answer could be. There’s another tool out there called Chegg, C-H-E-G-G.

    Katie Drummond: These names are horrific, by the way. Just memo to Chegg and Studdy, you have some work to do. You both have some work to do.

    Lauren Goode: Chegg has been around for a while, right?

    Katie Drummond: It’s a bad name.

    Lauren Goode: Yeah.

    Michael Calore: It has been, it’s been very popular for a while. One of the reasons it’s popular is the writing assistant. Basically you upload your paper and it checks it for grammar and syntax and it just helps you sound smarter. It also checks it for plagiarism, which is kind of amazing because if you’re plagiarizing, it’ll just help you not get caught plagiarizing and it can help you cite research. If you need to have a certain number of citations in a paper, oftentimes professors will say, “I want to see five sources cited.” You just plug in URLs and it just generates citations for you. So it really makes that easy.

    Katie Drummond: I mean, I will say there are some parts of what you just described that I love. I love the idea of every student, no matter what school they go to, where in the country they live, what their socioeconomic circumstances are, that they would have access to one-on-one tutoring to help support them as they’re doing their homework, wherever they’re doing it, whatever kind of parental support they do or don’t have. I think that that’s incredible. I think the idea of making citations less of a pain in the ass is like, yeah, that sounds good. Not such a huge fan of helping you plagiarize, right? But it’s again, it’s like this dynamic with AI in education where not all good, not all bad. I’ve talked to educators and the impression I have gotten, and again, this is just anecdotal, but there is so much fear and resistance and reluctance and this feeling among faculty of being so overwhelmed by, “We have this massive problem, what are we going to do about it?” And I just think that too often people get caught up in the massive problem part of it and aren’t thinking enough about the opportunities.

Michael Calore: Of course, it’s not just students who are using AI tools in the classroom; teachers are doing it too. In an article for The Free Press, an economics professor at George Mason University says that he uses the latest version of ChatGPT to give feedback on his PhD students’ papers. So kudos to him. Also, The New York Times recently reported that in a national survey of more than 1,800 higher education instructors last year, 18% of them described themselves as frequent users of generative AI tools. This year, that percentage has nearly doubled. How do we feel about professors using generative AI chatbots to grade their PhD students’ papers?

Lauren Goode: So I have what may be a controversial opinion on this one, which is: just give teachers all the tools. Broadly speaking, I don’t think it is wrong for teachers to use the tools at their disposal, provided it aligns with what their school system or university policies say, if it is going to make their lives easier and help them teach better. So there was another story in The New York Times, written by Kashmir Hill, about a woman at Northeastern University who caught her professor using ChatGPT to prepare lecture notes, because of some string of a prompt that he accidentally left in the output for the lecture notes. And she basically wanted her $8,000 back for that semester, because she was thinking, “I’m paying so much money to go here and my teacher is using ChatGPT.” It currently costs $65,000 per year to go to Northeastern University in Boston. That’s higher than the average for ranked private colleges in the US, but it’s all still very expensive. So for that price, you’re just hoping that your professors will saw off the top of your head and dump in all the knowledge that you need, and then you’ll enter the workforce and nab that six-figure job right out of the gate. But that’s not how that works, and that is not your professor’s fault. At the same time, we ask so much of teachers. At the university level, most are underpaid. It is increasingly difficult to get a tenure-track position. Below the university level, teachers are far outnumbered by students. They’re dealing with burnout from the pandemic. They were dealing with burnout before then, and funding for public schools has been on the decline at the state level for years because fewer people are choosing to send their kids to public schools.

Katie Drummond: I mean, I totally agree with you in terms of one group of people in this scenario are subject matter experts, and one group of people in this scenario are not. They are learning a subject. They are learning how to behave and how to succeed in the world. So I think it’s a mistake to conflate or compare students using AI with teachers using AI. I think that what a lot of students, particularly at a university level, are looking for from a professor is that human-to-human interaction, human feedback, human contact. They want to have a back-and-forth dialogue with their educator when they’re at that academic level. And so if I wrote a paper and my professor used AI to read the paper and then grade the paper, I would obviously be very upset to know that. That feels like cheating at your job as a professor, and I think it cheats the student out of that human-to-human interaction that, ostensibly, they are paying for. They’re paying for access to these professors, not for access to an LLM.

    Lauren Goode: Lesson plan, yeah.

    Katie Drummond: But for me, when I think about AI as an efficiency tool for educators, so should a professor use AI to translate a written syllabus into a deck that they can present to the classroom for students who are maybe better visual learners than they are written learners? Obviously. That’s an amazing thing to be able to do. You could create podcast versions of your curriculum so that students who have that kind of aptitude can learn through their ears. You know what I mean? There are so many different things that professors can do to create more dynamic learning experiences for students, and also to save themselves a lot of time. And none of that offends me, all of that actually, I think is a very positive and productive development for educators.

    Michael Calore: Yeah, I mean essentially what you’re talking about is people using AI tools to do their jobs in a way that’s more efficient.

Katie Drummond: Right, which is sort of the whole promise of AI. In theory, in a best-case scenario, that’s what we’re hoping for.

    Lauren Goode: What it’s supposed to be. Yeah.

    Katie Drummond: Yeah.

    Michael Calore: Honestly, some of these use cases that we’re talking about that we agree are acceptable, are much the same way that generative AI tools are being used in the corporate world. People are using AI tools to generate decks. They’re using them to generate podcasts so that they can understand things that they need to do for their job. They’re using them to write emails, take meeting notes, all kinds of things that are very similar to the way that professors are using it. I would like to ask one more question before we take a break, and I want to know if we can identify some of the factors or conditions that we think have contributed to this increasing reliance on AI tools by students and professors. They feel slightly different because the use cases are slightly different.

    Katie Drummond: I think that Lauren had a really good point about teachers being underpaid and overworked. So I think the desire for some support via technology and some efficiency in the context of educators, I think that that makes total sense as a factor. But when I think about this big picture, I don’t really think that there is a specific factor or condition here other than just the evolution of technology. The sometimes slow, but often very fast march of technological progress. And students have always used new technology to learn differently, to accelerate their ability to do schoolwork and yes, to cheat. So now AI is out there in the world, it’s been commercialized, it’s readily available, and they’re using it. Of course they are. So I will acknowledge though that AI is an exponential leap, I think, in terms of how disruptive it is for education compared to something like a graphing calculator or Google search. But I don’t think there is necessarily some new and novel factor other than the fact that the technology exists and that these are students in this generation who were raised with smartphones and smart watches and readily accessible information in the palms of their hands. And so I think for them, AI just feels like a very natural next step. And I think that’s part of the disconnect. Whereas for teachers in their thirties or forties or fifties or sixties, AI feels much less natural, and therefore the idea that their students are using this technology is a much more nefarious and overwhelming phenomenon.

Michael Calore: That’s a great point, and I think we can talk about that forward march of technology when we come back. But for now, let’s take a break. Welcome back to Uncanny Valley. So let’s take a step back for a second and talk about that slow march of technology and how various technologies have shaped the classroom in our lifetimes. The calculator first made its appearance in the 1970s. Of course, critics were up in arms. They feared that students would no longer be able to do basic math without the assistance of a little computer on their desk. The same thing happened with the internet when it really flowered and came into being in the late 90s and early 2000s. So how is this emergence of generative AI similar to or different from the arrival of these other technologies?

    Lauren Goode: I think the calculator is a false equivalence. And let me tell you, there is nothing more fun than being at a tech conference where there’s a bunch of Googler PhDs and you ask this question. And they go, “But the calculator.” Everyone’s so excited about the calculator, which is great, an amazing piece of technology. But I think it’s normal that when new technology comes out, our minds tend to reach for these previous examples that we now understand. It’s the calculator, but a calculator is different. A standard calculator is deterministic. It gives you a true answer, one plus one equals two. The way that these AI models work is that they are not deterministic. They’re probabilistic. The type of AI we’re talking about is also generative or originative. It produces entirely new content. A calculator doesn’t do that. So if you sort of broadly categorize them all as new tools that are changing the world, yes, absolutely tech is a tool, but I think generative AI is in a different category. I was in college in the early 2000s when people were starting to use Google, and you’re sort of retrieving entirely new sets of information in a way that’s different from using a calculator, but also different from using ChatGPT. And I think if you were to use that as the comparison, the question is: is skipping all of those processes through which you typically learn something the critical part? Does that make sense?

    Katie Drummond: That makes sense. And this is so interesting because when I was thinking about this question and listening to your answer, I was thinking about it more in that way of thinking about the calculator, thinking about the advent of the internet and search, comparing them to AI. Where my brain went was what skills were lost with the advent of these new technologies and which of those was real and serious and maybe which one wasn’t. And so when I think about the calculator, to me that felt like a more salient example vis-a-vis AI because the advent of the calculator, are we all dumber at doing math on paper because we can use calculators?

    Michael Calore: Yes.

    Katie Drummond: For sure.

    Lauren Goode: Totally, one hundred percent.

    Katie Drummond: For sure. You think I can multiply two or three numbers? Oh no, my friend, you are so wrong. I keep tabs on my weekly running mileage, and I will use a calculator to be like, seven plus eight plus 6.25 plus five. That’s how I use my calculator. So has that skill atrophied as a result of this technology being available? 100%. When I think about search and the internet, I’m not saying there hasn’t been atrophy of human skill there, but that to me felt more like a widening of the aperture in terms of our access to information. But it doesn’t feel like this technological phenomenon where you are losing vital brain-based skills, the way a calculator feels that way. And to me, AI feels that way. It’s almost like when something is programmed or programmable, that’s also where I feel like you start to lose your edge. Now that we program phone numbers into our phones, we don’t know any phone numbers by heart. I know my phone number, I know my husband’s phone number. I don’t know anyone else’s phone number. Maybe Lauren, maybe you’re right. It’s this false equivalence where you can’t draw any meaningful conclusion from any one new piece of technology. And AI again, I think is just exponentially on this different scale in terms of disruption. But are we all bad at math? Yes, we are.

    Michael Calore: Yeah.

    Lauren Goode: Well, I guess I wonder, and I do still maintain that it’s kind of a false equivalence to the calculator, but there were some teachers, I’m sure we all had them, who would say, “Fine, use your calculator, bring it to class.” Or, “We know you’re using it at home for your homework at night, but you have to show your work.” What’s the version of show your work when ChatGPT is writing an entire essay for you?

    Michael Calore: There isn’t one.

    Katie Drummond: Yeah, I mean, I think some professors have had students submit chat logs with their LLMs to show how they use the LLM to generate a work product, but that starts from the foundational premise that ChatGPT or AI is integrated into that classroom. I think if you’re just using it to generate the paper and lying about it, you’re not showing your work. But I think some professors who maybe are more at the leading edge of how we’re using this technology have tried to introduce AI in a way that then allows them to keep tabs on how students are actually interacting with it.

    Lauren Goode: Mike, what do you think? Do you think it’s like the calculator or Google or anything else you can think of?

    Michael Calore: Well, so I started college in 1992, and then while I was at college, the web browser came around and I graduated from college in 1996. So I saw the internet come into being while I was in the halls of academia. And I actually had professors who were lamenting the fact that when they were assigning us work, we were not going to the library and using the card catalog to look up the answers to the questions that we were being asked in the various texts that were available in the library. Because all of a sudden we basically had the library in a box in our dorm rooms and we could just do it there. I think that’s fantastic.

    Katie Drummond: Yes.

    Michael Calore: I think having access at your fingertips to literally the knowledge of the world is an amazing thing. Of course, the professor who had that view also thought that the Beatles ruined rock and roll and loved debating us about it after class. But I do think that when we think about using ChatGPT and whether or not it’s cheating, like yes, absolutely, it’s cheating if you use it in the ways that we’ve defined, but it’s not going anywhere. And when we talk about these things becoming more prevalent in schools, our immediate instinct is like, “Okay, well how do we stop it? How do we contain it? Maybe we should ban it.” But it really is not going anywhere. So I feel like there may be a missed opportunity right now to actually have conversations about how we can make academia work better for students and faculty. How are we all sitting with this?

    Lauren Goode: I mean, banning it isn’t going to work, right? Do we agree with that? Is the toothpaste out of the tube?

    Katie Drummond: Yes, I think-

    Lauren Goode: And you could be a school district and ban it and the kids are going to go, “Haha, Haha, Ha.”

    Michael Calore: Yeah.

    Katie Drummond: I mean that’s a ridiculous idea to even…

    Lauren Goode: Right.

    Katie Drummond: If you run a school district out there in the United States, don’t even think about it.

    Lauren Goode: Right. And what’s challenging about the AI detection tools that some people use is that they’re often wrong. So I think, I don’t know, I think we all have to come to some kind of agreement around what cheating is and what the intent of an educational exercise is in order to define what this new era of cheating is. So a version of that conversation has to happen at all these different levels of society to say, “What is acceptable here? What are we getting from this? What are we learning from this? Is this bettering my experience as a participant in society?”

    Katie Drummond: And I think ideally from there, it’s sort of, “Okay, we have the guardrails. We all agree what cheating is in this context of AI.” And then it’s about how do we use this technology for good? How do we use it for the benefit of teachers and the benefit of students? What is the best way forward there? And there are some really interesting thinkers out there who are already talking about this and already doing this. So Chris Ostro is a professor at the University of Colorado at Boulder, and they recommend actually teaching incoming college students about AI literacy and AI ethics. So the idea being that when students come in for their first year of college that we need to actually teach them about how and where AI should be used and where it shouldn’t. When you say it out loud, you’re like, “That’s a very reasonable and rational idea. Obviously we should be doing that.” Because I think for some students too, they’re not even aware of the fact that maybe this use of AI is cheating, but this use of AI is something that their professor thinks is above board and really productive. And then there are professors who are doing, I think, really interesting things with AI in the context of education in the classroom. So they’ll have AI generate an essay or an argument, and then they will have groups of students evaluate that argument, basically deconstruct it and critique it. So that’s interesting to me because I think that’s working a lot of those same muscles. It’s the critical thinking, the analysis, the communication skills, but it’s doing it in a different way than asking students to go home and write a paper or go home and write a response to that argument. 
The idea being, “No, don’t let them do it at home because if they go home, they’ll cheat.” It’s an interesting evolution, and Lauren, to the point that you’ve brought up repeatedly, which I think is totally right: what is the goal here, and then, given that AI is now standard practice among students, how do we get to that goal in a new way?

    Michael Calore: Yeah, and we have to figure out what we’re going to do as a society with this problem because the stakes are really, really high. We are facing a possible future where there’s going to be millions of people graduating from high school and college who are possibly functionally illiterate because they never learned how to string three words together.

    Katie Drummond: And I have a second grader, so if we could figure this out in the next 10 years, that would be much appreciated.

    Lauren Goode: So she’s not using generative AI at this point?

    Katie Drummond: Well, no, she’s not. Certainly not. She gets a homework packet and she loves to come home and sit down. I mean, she’s a real nerd. I love her, but she loves to come home and sit down and do her homework with her pencil. But my husband is a real AI booster. We were playing Scrabble a couple of months ago, adult Scrabble with her. She’s seven, Scrabble is for ages eight and up, and she was really frustrated because we were kicking her ass, and so he let her use ChatGPT on his computer and she could actually take a photo of the Scrabble board and share her letters. Like, “These are the letters that I have, what words can I make?” And I was like, “That’s cheating.” And then honestly, as we kept playing, it was cool because she was discovering all of these words that she had never heard of before and so she was learning how to pronounce them. She was asking us what they meant. My thinking about it softened as I watched her using it. But no, it’s not something that is part of her day to day. She loves doing her homework and I want her to love doing her homework until high school when she’ll start cheating like her mother.

    Michael Calore: This is actually a really good segue into the last thing that I want to talk about before we take another break, which is the things that we can do in order to make these tools more useful in the classroom. So thought exercise, if you ran a major university, or if you’re in the Department of Education before you lose your job, what would you be doing over your summer break coming up in order to get the institutions under your charge ready for the fall semester?

    Katie Drummond: I love this question. I have a roadmap. I’m ready. I love this idea of AI ethics, so I would be scouring my network, I would be hiring a professor to teach that entry level AI ethics class, and then I would be asking each of my department heads because every realm of education within a given college is very different. If you have someone who runs the math department, they need to think about AI very differently than whoever runs the English department. So I would be asking each of my department leads to write AI guidelines for their faculty and their teachers. You can tell I’m very excited about my roadmap.

    Michael Calore: Oh yes.

    Katie Drummond: I would then review all of those guidelines by department, sign off on them, and also make sure that they laddered up to a big picture, institutional point of view on AI. Because obviously it’s important that everyone is marching to the beat of the same drum, that you don’t have sort of wildly divergent points of view within one given institution.

    Lauren Goode: What do you think your high level policy on AI would be right now if you had to say?

    Katie Drummond: I think it would really be that so much of this is about communication between teachers and students, that teachers need to be very clear with students about what is and is not acceptable, what is cheating, what is not cheating, and then they need to design a curriculum that incorporates more, I would say, AI friendly assignments and work products into their education plan. Because again, what I keep coming back to is, you can’t send a student home with an essay assignment anymore.

    Lauren Goode: No, you can’t.

    Katie Drummond: You can’t do that. So it comes down to, what are you to do instead?

    Lauren Goode: I like it.

    Katie Drummond: Thank you. What would you do?

    Lauren Goode: I would enroll at Drummond. Drummond, that actually sounds like a college. Where did you go to school? Drummond.

    Michael Calore: It does.

    Lauren Goode: Well, I was going to say something else, but Katie, now that you said you might be hiring an ethics professor, I think I’m going to apply for that job, and I have this idea for what I would do as an ethics professor teaching AI to students right now. On the first day of class, I would bring in a couple groups of students. Group A would have to write an essay right there on the spot, and Group B would presumably be doing the same, but actually they weren’t. They were just stealing Group A’s work and repurposing it as their own. And I haven’t quite figured out all the mechanics of this yet, but basically I would use it as an example of what it feels like when you use ChatGPT to generate an essay, because you’re stealing some unknown person’s work, essentially cut up into bits and pieces, and repurposing it as your own.

    Katie Drummond: Very intense, Lauren.

    Lauren Goode: I would start off the classroom fighting with each other, basically.

    Katie Drummond: Seriously?

    Michael Calore: It’s a good illustration. I would say that if I was running a university, I would create a disciplinary balance in the curriculum across all of the departments. You want to make sure that people have a good multi-disciplinary view of whatever it is that they’re studying. So what I mean is that some percentage of your grade is based on an oral exam or a discussion group or a blue book essay, and some other percentage is based on research papers and tests and other kinds of traditional coursework. So I think there has to be some part of your final grade that is based on things you cannot use AI for: learning how to communicate, how to work in teams, sitting in a circle and talking through diverse viewpoints in order to understand an issue or solve a problem from multiple different angles. This is how part of my college education worked, and in those courses where one third of our grade was based on a discussion group, one class each week was devoted to sitting around and talking. I learned so much in those classes, and not only about other people, but also about the material. The discussions that we had about the material went places my brain would not normally have gone. So yeah, that’s what I would do. I think that’s the thing that we would be losing if we all just continued to type into chatbots all the time. There are brilliant minds out there that need to be unleashed, and the only way to unleash them is to not have them staring at a screen.

    Lauren Goode: Mike’s solution is touch some grass. I’m here for it.

    Michael Calore: Sit in a circle, everybody. Okay, let’s take one more break and then we’ll come right back. Welcome back to Uncanny Valley. Thank you both for a great conversation about AI and school and cheating, and thank you for sharing your stories. Before we go, we have to do real quick recommendations. Lightning round. Lauren, what is your recommendation?

    Lauren Goode: Ooh. I recommended flowers last time, so…

    Katie Drummond: We are going from strength to strength here at Uncanny Valley.

    Lauren Goode: My recommendation for flowers has not changed for what it’s worth. Hood River, Oregon. That’s my recommendation.

    Michael Calore: That’s your recommendation. Did you go there recently?

    Lauren Goode: Yeah, I did. I went to Hood River recently and I had a blast. It’s right on the Columbia River. It’s a beautiful area. If you are a Twilight fan, it turns out that much of the first Twilight movie was filmed right where we were. We happened to watch Twilight during that time just for kicks. Forgot how bad that movie was, but every time the river valley showed up on screen, we shouted, “Gorge.” Because we were in the gorge. I loved Hood River. It was lovely.

    Michael Calore: That’s pretty good. Katie?

    Katie Drummond: My recommendation is very specific and very strange. It is a 2003 film called What a Girl Wants, starring Amanda Bynes and Colin Firth.

    Michael Calore: Wow.

    Katie Drummond: I watched this movie in high school, where I was cheating on my math exams. Sorry. For some reason, just the memory of me cheating on my high school math exams makes me laugh, and then I rewatched it with my daughter this weekend, and it’s so bad and so ludicrous and just so fabulous. Colin Firth is a babe. Amanda Bynes is amazing, and I wish her the best. And it’s a very fun, stupid movie if you want to just disconnect your brain and learn about the story of a seventeen-year-old girl who goes to the United Kingdom to meet the father she never knew.

    Michael Calore: Wow.

    Lauren Goode: Wow.

    Katie Drummond: Thank you. It’s really good.

    Lauren Goode: I can’t decide if you’re saying it’s good or it’s terrible.

    Katie Drummond: It’s both. You know what I mean?

    Lauren Goode: It’s some combination of both.

    Katie Drummond: It’s so bad. She falls in love with a bad boy with a motorcycle, but a heart of gold who also happens to sing in the band that plays in UK Parliament, so he just happens to be around all the time. He has spiky hair. Remember 2003? All the guys had gel, spiky hair.

    Lauren Goode: Yes, I still remember that. Early 2000s movies, boy, did they not age well.

    Katie Drummond: This one though, aged like a fine wine.

    Michael Calore: That’s great.

    Katie Drummond: It’s excellent.

    Lauren Goode: It’s great.

    Katie Drummond: Mike, what do you recommend?

    Lauren Goode: Yeah.

    Michael Calore: Can I go the exact opposite?

    Katie Drummond: Please, someone. Yeah.

    Michael Calore: I’m going to go literary.

    Katie Drummond: Okay.

    Michael Calore: And I’m going to recommend a novel that I read recently that just shook me to my core. It’s by Elena Ferrante, and it is called The Days of Abandonment. Ferrante, the great pseudonymous novelist, wrote it in Italian, and it has been translated into English and many other languages. It is about a woman who wakes up one day and finds out that her husband is leaving her, and she doesn’t know why, and she doesn’t know where he’s going or who he’s going with, but he just disappears from her life and she goes through it. She accidentally locks herself in her apartment. She has two children that she is now all of a sudden trying to take care of, but somehow neglecting because she’s-

    Katie Drummond: This is terrible.

    Michael Calore: But the way that it’s written is really good. It is a really heavy book. It’s rough, really rough subject matter-wise, but the writing is just incredible, and it’s not a long book, so you don’t have to sit and suffer with her for a great deal of time. I won’t spoil anything, but I will say that there is some resolution in it. It’s not a straight trip down to hell. It is, really, just a lovely observation of how human beings process grief and how human beings deal with crises, and I really loved it.

    Katie Drummond: Wow.

    Michael Calore: I kind of want to read it again, even though it was difficult to get through the first time.

    Katie Drummond: Just a reminder to everyone, Mike was the one who didn’t cheat in high school or college, which that totally tracks from the beginning of the episode to the end.

    Michael Calore: Thank you for the reminder.

    Katie Drummond: Yeah.

    Michael Calore: All right, well, thank you for those recommendations. Those were great, and thank you all for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and to rate it on your podcast app of choice. If you’d like to get in touch with us with any questions, comments, or show suggestions, write to us at [email protected]. We’re going to be taking a break next week, but we will be back the week after that. Today’s show is produced by Adriana Tapia and Kiana Mogadam. Greg Obis mixed this episode. Jake Loomis was our New York studio engineer, and Daniel Roman fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED’s global editorial director, and Chris Bannon is the head of Global Audio.
