AI is everywhere, helping us study, work, and socialize. But the way AI companies collect, use, and share our personal data can put our privacy at risk, expose sensitive information, and make it harder to control what happens to our data.
At Gen(Z)AI’s third forum, young Canadians discussed these issues related to data privacy and suggested ways the federal government could address them.
💬 Now, we want to hear from you! Which issues matter most, and what do you think of these recommendations?
Please choose your answer.

⚠️ The way that AI systems collect user data, and disclose that collection, is deliberately opaque, making it difficult for users to provide informed consent about how their data is being used and sold. This may disproportionately affect vulnerable groups, including children and the elderly.
Please choose your answer.
Your fellow Canadians recommended that the federal government ought to:
Mandate that platforms and AI companies implement meaningful and informed consent mechanisms for users, including by publishing a plain-language version of their terms and conditions that is accessible by default.
Not at all
Completely
Your fellow Canadians recommended that the federal government ought to:
Impose privacy-by-default standards for all AI systems.
ℹ️What this means: AI platforms and systems should automatically protect your personal data without requiring extra steps from you, such as changing your settings. Privacy settings should be strong by default, so users are protected even if they don’t change any options.
Not at all
Completely

⚠️ AI systems are not subject to adequate and enforceable safeguards for the collection, storage, and sharing of user data. This may lead to intended and unintended harms including, but not limited to, excessive surveillance and profiling.
Please choose your answer.

Participants recommended that the federal government ought to:
Develop enforceable data privacy standards that require that platforms and AI companies:
- Prevent purpose creep by strictly limiting data handling to clearly defined, user-consented purposes;
- Provide accessible options to delete user data upon request; and,
- Be subject to harsher legal consequences for mishandling users’ sensitive data, including, but not limited to, health, financial, and identity-related information.
ℹ️ Purpose creep is when a company starts using your data for new purposes you didn’t agree to.
Not at all
Completely
We’d like to better understand how you think AI systems, platforms, and companies gather your personal data. Have you noticed times when your information was collected, shared, or used in ways that surprised you or made you uncomfortable?
Whatever you share will remain fully anonymous.
So far, we’ve focused on AI-generated content, chatbots, misinformation, privacy, and how recommendation systems affect what people see online. But other issues might matter just as much to you, such as AI and jobs, bias and discrimination, or the impact of AI on the environment.
Thank you for helping enrich and move this project forward. We want every voice to be heard so that, together, we can build projects that reflect the needs of the many.
To keep following this project and stay informed about its concrete outcomes, please leave your email and don’t forget to check the box!
Your information is anonymous and helps make our results representative 🔒