AI-generated content and recommendation systems can mislead us, push extreme content, and make it harder to find trustworthy information.
At Gen(Z)AI’s second forum, young Canadians discussed these issues and suggested ways that the federal government could address them.
💬 Now, we want to hear from you! Which issues matter most, and what do you think of these recommendations?
Please choose your answer.

⚠️ AI-generated content, including mis- and dis-information, overwhelms users and undermines confidence in reliable, accurate information, with distinct and disproportionate effects on vulnerable populations.
Please choose your answer.

Your fellow Canadians recommended that the federal government ought to:
Mandate that digital platforms explicitly label AI-generated content and give users the option to exclude this content from their feeds.
Not at all
Completely
Your fellow Canadians recommended that the federal government ought to:
Give people copyright over their own features and likeness, and create an online regulator to enforce the removal of non-consensual AI-generated material, including Child Sexual Abuse Material (CSAM).
ℹ️ This means that governments should make it a rule that you control how your image, voice, or other personal features are used online, and that platforms must remove any AI-generated content depicting you that is created without your permission.
Not at all
Completely

⚠️ AI-recommendation systems push ideologically extreme content and reinforce information echo chambers, resulting in social and political polarization with effects in both online and offline spaces.
ℹ️ AI-recommendation systems are the algorithms that decide what content you see online, based on your past behaviour and what the platform thinks you will engage with. Echo chambers occur when these systems mostly show you content that matches what you already believe, so you rarely see different perspectives.
Please choose your answer.
Participants recommended that the federal government ought to:
Mandate that platforms monitor, flag, and transparently share information with both the public and government about the spread of mis- and dis-information, especially during high-risk moments such as elections and public health crises.
Not at all
Completely
Participants recommended that the federal government ought to:
Require that platforms introduce standards for AI-recommendation systems and data-profiling processes to limit the spread of harmful content, curb known and suspected bot activity, and promote local content.
ℹ️ This means that platforms should follow rules for how their recommendation systems work, so they reduce harmful or misleading content, limit fake accounts or bots, and highlight content from local creators and communities.
Not at all
Completely

⚠️ An information environment driven by engagement-based incentives and flooded with mis- and dis-information contributes to mistrust in news, government, and other traditional institutions.
ℹ️ What this means: Online platforms prioritize interaction, measured in clicks and likes, over ensuring content is true and factual. This makes it easy for attention-grabbing false and misleading information to spread, which can lead people to doubt news, government, and other trusted sources.
Please choose your answer.

Participants recommended that the federal government ought to:
Mandate that platforms implement an independent third-party authentication mechanism to validate content posted by news and other public service organizations, in order to promote and prioritize credible and reliable content.
Not at all
Completely
We’d like to better understand how AI-generated content and recommendation systems affect you in everyday life.
Have you noticed times when false information, extreme content, or algorithm-driven recommendations shaped what you saw or believed? Whatever you share will remain fully anonymous.
So far, we’ve focused on AI-generated content, misinformation, and how recommendation systems affect what people see online. But other issues might matter just as much to you: for example, AI and jobs, bias and discrimination, or the impact of AI on the environment.
Thank you for helping to enrich this project and move it forward. We want every voice to be heard so that, together, we can build projects that reflect the needs of the many.
To keep following this project and stay informed about its concrete outcomes, please leave your email and don’t forget to check the box!
Your information is anonymous and helps make our results representative 🔒