—Citation Person
Dear Citation,
The short answer is that disclosure probably isn't required if you're using generative AI for research purposes. However, disclosure is absolutely required if you compose with ChatGPT or another AI tool.
Here are two guiding questions you can ask yourself if you're ethically conflicted about disclosing your use of AI software: Did I use AI for research or for content? And might the recipient of this AI-assisted writing feel deceived if they learned the words were machine-generated rather than human? These questions may not map neatly onto every situation (academics, for one, are held to a higher standard of proper citation), and I'm sure you can imagine exceptions. But I firmly believe that taking five minutes to reflect can help you land on appropriate usage and avoid unwanted trouble.
A crucial first step is to distinguish between research and content. If I'm using generative AI as a kind of fuzzy encyclopedia that can point me to various sources or expand my perspective on a subject, without including its output in the actual writing, I believe that's lower risk and unlikely to leave behind the stench of deception. Never treat a ChatGPT output or a Perplexity page as the ultimate source of truth; always double-check any information you find in a chatbot's outputs. Most chatbots can now link out to external websites, so you can click through to read more. Think of the chatbot, in this context, as part of the information infrastructure: ChatGPT may be the road you travel on, but the destination should always be an outside source.
Now let's say you decide to use a chatbot to sketch out a first draft, or to generate writing, images, audio, or video to blend with your own. In that situation, I believe it's prudent to err on the side of disclosure. Even the AI-written food descriptions in the Uber Eats app, including the one for Domino's cheese sticks, now carry a disclaimer that the description was generated by AI and may list inaccurate ingredients.
Whenever you use AI for creation, and in some cases for research, you should be homing in on that second question. In essence, ask yourself whether the reader or viewer would feel deceived if they later learned that some of what they experienced was generated by AI. If so, you should absolutely disclose, describing how you used the tool in language appropriate to your audience. Not only would generating parts of this column without any disclosure go against WIRED's policy, it would also make for a dry and unfun experience for the both of us.
You can put your AI usage in context by considering the people who will encounter your work and your intentions in creating it in the first place. That context is helpful for navigating tricky situations. A work email drafted by AI and carefully proofread by you is typically going to be just fine. On the other hand, using generative AI to compose a condolence email after a death would be a clear case of insensitivity. If the person on the other side of the communication is trying to connect with you on a personal, emotional level, consider closing that ChatGPT browser tab and grabbing a notepad and pen.
How can educators teach children to use AI tools responsibly and ethically? Do the benefits of AI outweigh the dangers?
—Raised Hand
Dear Raised,
I believe education about generative AI should start young and stay realistic. Children begin learning computer literacy skills in elementary school and continue through their senior year of high school. Lessons on the safe, effective use of AI tools could help students build solid technical skills as well as a healthy emotional distance from chatbots.
Teachers and parents are right to worry about students using ChatGPT and other homework helpers, like ByteDance's Gauth AI, to quickly get answers, or using generative AI to write their essays for them. Lesson plans that shift more of the focus to in-class discussion and practice might help alleviate this problem. Homework, however, is not the only risk. Over the next few years, I expect teenagers to dive further into long, heartfelt, and sometimes inappropriate conversations, not with random strangers online but with sweet-talking chatbots like Character.AI or Replika.
Already navigating a challenging, awkward stage of life, one complicated by the harsh spotlight of contemporary social media, teenagers will likely turn even more inward and asocial, relying on artificial companions to make sense of the world around them. According to reporting from The New York Times, a teenager in Florida who died by suicide in early 2024 had frequently used role-playing chatbots and confided in the AI before his death. Teaching children how to use AI safely means teaching them not only to avoid false information but also to avoid one-sided relationships with software and to stay anchored in reality.
In 2025, whether the benefits of generative AI in the classroom outweigh the drawbacks is still up for debate. But the tools have already permeated students' daily lives, and it is crucial for educators to equip these kids with the knowledge and skills they need to navigate the world around them. Blindly avoiding generative AI isn't just unwise; it could be disastrous.
At your service,
Reece