Artificial intelligence is everywhere, from the recommendations on our social media feeds to the autocompletion of text in our e-mails. Generative AI creates original text, images, audio and even video based on patterns it has identified in the data used to train it. AI chatbots, or interactive AI, harness that predictive power to string together text into humanlike conversations, answering users’ questions and offering personalized engagement.
More and more, teens are using generative AI, popularized by platforms such as ChatGPT. According to a report from the nonprofit Common Sense, 72 percent of teens have used AI companions, or chatbots designed to have personal or emotionally supportive conversations, and more than half of teens use them regularly.
I’m a psychologist who studies how technology affects children. Recently, I was part of an expert advisory panel convened by the American Psychological Association (APA) to explore what effects these tools may be having on adolescent well-being.
The truth? We are still learning.
The AI landscape is quickly evolving, and researchers are scrambling to catch up. Will AI create a new frontier for supporting teens’ well-being, with opportunities for personalized emotional support, active learning and creative exploration? Or will it crowd out their real-life social connections, expose them to harmful content, and fuel loneliness and isolation?
The answer will likely be: all of the above, depending on how AI platforms are designed and how they are used. So where do we start? And what can we—parents, educators, lawmakers, AI designers—do to support young people’s well-being on these platforms?
The APA panel made a series of recommendations in a new report. Here’s what we think parents need to know.
AI for teens needs to be made differently than AI for adults
So often, in developing new technology, we don’t think ahead of time about how kids might use it. Instead we race to create adult-centered products and hope for widespread adoption. Then, years later, we try to retrofit safeguards onto these products to make them safer for teens.
As with other new technologies such as smartphones and social media, the burden for managing these tools cannot rest on parents alone. It’s not a fair fight. This is the responsibility of everyone—including lawmakers, educators and, of course, the tech companies themselves.
With AI, we have an opportunity to design specifically for young people from the beginning. For example, AI companies could aim to limit teens’ exposure to harmful content, work with developmental experts to create age-appropriate experiences, limit features designed to keep kids using platforms longer, make it easier to report problems (such as inappropriate conversations or mental health concerns) and regularly remind teens of the limits of AI chatbots (e.g., that chatbots’ information may be inaccurate and that they should not replace human professionals). AI platforms should also take steps to protect teens’ data privacy, ensure that young people’s likenesses (their images and voices) cannot be misused and create effective, user-friendly parental controls.
Children need to learn what AI is and how to use it safely, starting in school. That means basic education on how AI models work, how to use AI responsibly in ways that do not cause harm, how to spot false information or AI-generated content and what ethical considerations exist. Teachers will need guidance on how to teach these topics and the resources to do so—an effort that will require collaboration from policymakers, tech developers and school districts.
Talk early and talk often
As a parent, broaching conversations about AI with teens can seem daunting. What topics should you cover? Where should you start? First, test out some of these platforms for yourself—get a sense of how they work, where they may have limitations and why your child might be interested in using them.
Then consider these key conversation topics:
Human relationships matter
Of the 72 percent of teens who have ever used AI companions, 19 percent say they spend as much or more time with them as they do with their real friends. As the technology improves, this trend may become more pervasive, and teens who are already socially vulnerable or lonely may be at greater risk of letting chatbot relationships interfere with real-life ones.
Talk to teens about the limits of AI companions compared with human relationships, including how many AI models are designed to keep them on the platform longer through flattery and validation. Ask them whether they’ve used AI to have meaningful conversations and what kinds of topics they’ve discussed. Make sure they have plenty of opportunities for in-person social interactions with real-life friends and family. And remind them that these human relationships, no matter how awkward, messy or complicated, are worth it.
Use AI for good
When used well, AI tools can offer incredible opportunities for learning and discovery. Many teens have already experienced some of these benefits, and this can be a good place to start conversations. Where have they found AI to be helpful?
Ask how their schools are approaching AI when it comes to schoolwork. Do kids know their teachers’ policies on using AI for homework? Have they used AI in the classroom? We want to encourage teens to use AI to support active learning—stimulating critical thinking and digging deeper into concepts they’re interested in—rather than to replace that thinking.
Be a critical AI consumer
AI models do not always get things right, and this can be especially problematic when it comes to health. Teens (and adults) frequently get information about physical and mental health online. In some cases, they may be relying on AI for conversations that would previously have taken place with a therapist—and those models may not respond appropriately to disclosures around issues such as self-harm, disordered eating or suicidal thoughts. It’s important for teens to know that any advice, “diagnoses” or recommendations that come from chatbots should be verified by a professional. It can also help for parents to emphasize that AI chatbots are often designed to sound persuasive and authoritative, so we may need to actively resist the urge to take their answers at face value.
The APA recommendations also highlight the risks associated with AI-generated content, which teens may create themselves or encounter via social media. Such content may not be trustworthy. It could be violent or harmful. It could, in the case of deepfakes, be against the law. As parents, we can remind teens to be critical consumers of images and videos and to always check the source. We can also remind them never to create or distribute AI-doctored images of their peers, which is not only unethical but also, in some states, illegal.
Watch out for harmful content
With few safeguards in place for younger users, AI models can produce content that negatively affects adolescents’ safety and well-being. This could include text, images, audio or videos that are inappropriate, dangerous, violent, discriminatory or suggestive of violence.
While AI developers have a crucial role to play in making these systems safer, as parents, we can also have regular conversations with our children about these risks and set limits on their use. Talk to teens about what to do if they encounter something that makes them uncomfortable. Discuss appropriate and inappropriate uses for AI. And when it comes to communication about AI, try to keep the doors open by staying curious and nonjudgmental.
AI is changing fast, and rigorous scientific studies are needed to better understand its effects on adolescent development. The APA recommendations conclude with a call to prioritize and fund this research. But just because there’s a lot to learn, that does not mean we need to wait to act. Start talking to your kids about AI now.
IF YOU NEED HELP
If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat.