AI chatbots are becoming part of everyday life. They can help us study or get work done, but relying on them for conversation or emotional support can also affect our mental health and weaken real social connections. Your fellow Canadians recently met at a youth forum to discuss these issues and came up with some recommendations. Now we need your help to refine them!
Please choose your answer.
Chatbots are everywhere, helping us study, work, and socialize. But as these AI tools become more common, young people are asking important questions: Can AI chatbots make us too emotionally dependent, or more isolated? Can they compromise our critical thinking skills? Can they expose us to harmful content?
Gen(Z)AI’s first forum brought together 100 representatives to participate in expert briefings, interactive workshops, and deliberative policymaking sessions. The result: young Canadians are dissatisfied with the status quo and support stricter regulation of AI chatbots.
The representatives came up with a series of issue statements highlighting their concerns about AI chatbots, and a set of policy recommendations for the Canadian federal government.
Now, we want to hear from you! Which of the issues are most important to you? How do you feel about the recommendations?
💬 Join the conversation. Your experiences and ideas can help shape safer, more supportive AI for everyone.
Please choose your answer.

Over-reliance on AI chatbots can lead to emotional dependence that exacerbates social isolation and atomization and contributes to mental health issues.
Please choose your answer.

Your fellow Canadians recommended that the federal government ought to:
Mandate that AI platforms address the addictive design of AI chatbots by requiring measures such as content filters and optional data cache deletion, and explicitly providing users with the ability to determine levels of responsiveness and conversationality.
ℹ️ This means that the government should introduce measures requiring AI chatbot platforms to let users control the kinds of responses they receive from chatbots, including how quickly and conversationally the chatbot replies, and to delete older conversations.
Not at all
Completely

The forum recommended that the federal government ought to:
Mandate accessible flagging capacity for users; require platforms to report these instances regularly and in a timely fashion to an independent body with enforcement capacity; and make such reports accessible to the Canadian public.
ℹ️ This means: Companies would be required to give users the option to report problematic or harmful content during their conversations with chatbots, and then provide that information to an independent organization, so that the public can see how these problems are handled.
Not at all
Completely

AI chatbots increase users’ potential exposure to harmful content, including sexually explicit, extremist, and/or self-harm content.
Please choose your answer.

Participants recommended that the federal government ought to:
Establish a new, independent government body to enforce AI safety standards, conduct systems evaluations, algorithmic audits, and risk assessments, and intake user complaints, including by offering dispute resolution and other recourse mechanisms.
ℹ️ This means: A new, independent government organization would check how AI works, review its decisions, assess risks, and help users report problems, resolve disputes, and get remedies when a chatbot causes harm.
Not at all
Completely
We’d like to better understand how AI chatbots affect you in everyday life. Have you had particularly positive or negative experiences, for example around your emotional well-being or exposure to harmful content?
Whatever you share will remain fully anonymous.
So far, we’ve focused on emotional dependence and harmful content. But other issues might matter just as much to you, for example AI and jobs, bias and discrimination, or the environmental impact of AI.
Your information is anonymous and helps make our results representative 🔒
Continue your experience