AIDA: patchworking AI risk management for Canadian federal regulators while the act is on pause
Canada’s Artificial Intelligence and Data Act (AIDA) was the country’s first attempt at private-sector regulation of artificial intelligence (AI) systems. It was introduced in June 2022 as part of the wider Digital Charter Implementation Act, known as Bill C-27. But a combination of sustained criticism and a prorogued parliament effectively ‘killed the bill’ in January 2025.
As a result, AIDA is not law, and Canada currently lacks a binding, comprehensive federal AI regulatory framework. With high regulatory uncertainty and Canadian businesses in limbo, we’re exploring how companies can use AI transparently while protecting their business against legal threats.
What’s in place while AIDA is paused?
While AIDA remains paused, Canada is relying on existing federal and provincial laws, alongside temporary measures.
The voluntary AI regime
The Voluntary Code of Conduct was released in September 2023 as a guide for corporations that either:
- Develop advanced generative AI models, such as foundation models or large language models
- Manage or deploy such systems in Canada
This temporary measure was designed to provide immediate, practical guardrails and promote safe, transparent, and accountable practices, bridging the gap until formal legislation could be implemented.
However, it is entirely voluntary: no penalties, no enforcement body, no audits.
The six core pillars of the Voluntary Code of Conduct
| Pillar | Guidelines | Reasoning |
| --- | --- | --- |
| Safety | Conduct pre-deployment testing to identify and mitigate harmful outputs; evaluate model behavior under normal, stress, and misuse scenarios; monitor for emergent behaviors | Determine whether the model is fit for purpose, or whether it can be manipulated to generate harmful outcomes |
| Fairness and equity | Assess and mitigate bias; avoid discriminating against protected groups; document fairness-testing approaches | Take a risk-based approach to outcome generation, ensuring outcomes are accessible and fair to all |
| Transparency | Disclose that content is AI-generated; provide explanations covering model definitions, capabilities, and limitations; maintain documentation | Ensure a minimum standard of explainability across all AI systems |
| Accountability | Assign responsibility for AI oversight to specific teams; conduct impact assessments regularly; maintain risk-management programs | Establish clear internal governance structures for AI oversight |
| Human oversight and monitoring | Use AI under human supervision so critical outcomes are not fully automated; monitor deployed systems for misuse or unexpected behavior; provide channels for users to report harmful or inaccurate output | Prioritize human-in-the-loop practices to protect against hallucinations and harm |
| Validity and robustness | Ensure models operate as intended and within the domains where they’re deployed; document testing regimes; protect models from adversarial attacks; update systems as risks evolve | Continue to prioritize model and human safety and security even as new threats emerge |
The code encourages internal risk reports and data provenance documentation, giving every type of entity something to follow and providing the building blocks of an AI impact assessment.

Echoing the obligations AIDA would have imposed, the code also proposes that firms publish summaries of their systems’ capabilities, risks, and mitigations. Although the code is non-mandatory, firms are also expected to disclose any high-impact incidents they experience.
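To make the documentation expectations concrete, here is a minimal sketch of what an internal impact-assessment record with a publishable summary could look like. The schema and field names are hypothetical illustrations, not taken from the code itself.

```python
from dataclasses import dataclass, field
import json

@dataclass
class AIImpactAssessment:
    """Hypothetical internal record mirroring the voluntary code's
    documentation expectations: capabilities, risks, mitigations,
    data provenance, and any high-impact incidents."""
    system_name: str
    capabilities: list
    known_risks: list
    mitigations: list
    data_provenance: list
    high_impact_incidents: list = field(default_factory=list)

    def public_summary(self) -> dict:
        # Only the fields the code suggests publishing externally;
        # provenance and incident details stay internal here.
        return {
            "system": self.system_name,
            "capabilities": self.capabilities,
            "risks": self.known_risks,
            "mitigations": self.mitigations,
        }

assessment = AIImpactAssessment(
    system_name="support-chatbot",
    capabilities=["answers billing questions"],
    known_risks=["may hallucinate policy details"],
    mitigations=["human review of escalations"],
    data_provenance=["licensed support transcripts, 2021-2023"],
)
print(json.dumps(assessment.public_summary(), indent=2))
```

Keeping the internal record and the published summary in one structure helps ensure the public disclosure never drifts out of sync with the underlying assessment.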
PIPEDA
Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) is a data privacy law that applies to the private sector. It sets the rules for how companies must collect, use, and disclose personal information, built around 10 principles:
1. Be accountable for the information you hold
2. Identify its purpose for collection
3. Obtain consent
4. Limit collection to what is necessary
5. Limit use, disclosure and retention
6. Ensure accuracy
7. Provide safeguards
8. Be open about data practices
9. Allow individuals to access their information
10. Allow individuals to challenge compliance
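Some of these principles translate directly into code. Below is an illustrative sketch of how a data pipeline might enforce principle 3 (consent) and principle 4 (collection limitation); the field names and declared purpose are hypothetical.

```python
# Fields permitted under a hypothetical declared purpose: account creation.
ALLOWED_FIELDS = {"name", "email"}

def collect(record: dict, consent_given: bool) -> dict:
    """Refuse collection without consent, and drop any field
    beyond what the declared purpose requires."""
    if not consent_given:
        raise PermissionError("consent required before collection")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# An over-broad record is trimmed to the declared purpose:
print(collect({"name": "A", "email": "a@x.ca", "sin": "123"},
              consent_given=True))
# → {'name': 'A', 'email': 'a@x.ca'}
```

Encoding the allowed-field list in one place also supports principle 8 (openness): the same list can be published as part of the organization’s data-practices disclosure.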
Although its principles are echoed in privacy laws around the world, PIPEDA is considered outdated compared with newer regimes such as the GDPR, not least because it came into force in 2001. Bill C-27 attempted to replace it with stronger privacy legislation, but that effort ‘died’ too as part of the overall collapse.
Why is AI regulation necessary?
Canada is known as a global AI hub, not least as home to Yoshua Bengio, one of the ‘Godfathers’ of AI. But this innovation and willingness to experiment are precisely why guardrails are necessary.
Protecting human rights
AI systems often inherit the biases present in their training data. So without regulation, these ‘black box’ algorithms can lead to systemic discrimination in critical areas, such as:
- Hiring: resume-screening tools have been known to penalize candidates based on gender or race
- Lending: AI used by banks can inadvertently deny loans to marginalized communities based on zip codes or historical data
- Law enforcement: facial recognition technologies have shown higher error rates for people with darker skin tones, leading to concerns about wrongful arrests
Promoting safety
As AI moves from chatbot functionality to integration into everyday systems, such as autonomous vehicles, the safety stakes get much higher:
- Physical harm: AIDA proposed that high-impact systems be rigorously tested before reaching the public, much as the EU AI Act requires; without such rules in place, the risk of physical harm must be weighed carefully
- Liability: if an AI system makes a catastrophic error, the law must clarify who is responsible. Is it the developer? The user? Or the company that deployed it?
What are the gaps in the public sector, and what playbooks are there to close these gaps?
While 92% of Canadian business leaders say Canada’s approach to regulating AI should be agile, flexible, and relatively light-touch, the same proportion say the federal government should regulate high-impact AI as soon as possible.
This is because there are several limitations of the voluntary code, including that:
- It’s non-binding: there is no legal force behind it
- It’s not audited: companies have no way to verify their compliance with it
- It’s not fully comprehensive: it focuses on generative AI rather than all AI types
The code was only ever meant to be a temporary stopgap, but it now serves as the long-term standard until new legislation, and the parliamentary impasse, are resolved. And because PIPEDA lacks the comprehensive individual rights that other global privacy laws offer, Canada is left behind in regulatory coverage.

For example, the General Data Protection Regulation (GDPR) gives individuals the right not to be subject to decisions based solely on automated processing for matters like loans or job applications, as well as the right to contest such decisions. PIPEDA offers no equivalent: a Canadian may be denied a loan or a job without knowing how the decision was made or being able to challenge it.
Fortunately, several playbooks and frameworks already exist to help reduce the exposure to vulnerabilities.
Helpful frameworks
Canadian businesses can rely on international standards to ensure their AI is tested, exported, and trusted according to global criteria. This includes:
- ISO 42001: Also known as the Artificial Intelligence Management System standard, it was adopted by the Standards Council of Canada in 2024. It requires organizations to maintain a documented system for managing data quality, bias, and transparency across the AI lifecycle.
- NIST AI Risk Management Framework: Although developed in the U.S., it is widely adopted by Canadian entities thanks to the interoperability of the two North American economies. The framework breaks AI governance into four functions: govern, map, measure, and manage.
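The four NIST AI RMF functions lend themselves to a simple internal risk register organized by function. The sketch below is a hypothetical illustration of that structure; the item descriptions are invented and not drawn from the framework text.

```python
from collections import defaultdict

# The four functions named by the NIST AI Risk Management Framework.
FUNCTIONS = ("govern", "map", "measure", "manage")

class RiskRegister:
    """Hypothetical register that files each governance item
    under one of the framework's four functions."""

    def __init__(self):
        self._items = defaultdict(list)

    def add(self, function: str, description: str):
        if function not in FUNCTIONS:
            raise ValueError(f"unknown function: {function}")
        self._items[function].append(description)

    def open_items(self, function: str) -> list:
        return list(self._items[function])

register = RiskRegister()
register.add("govern", "assign an accountable owner for the lending model")
register.add("measure", "quarterly bias audit against protected groups")
print(register.open_items("measure"))
# → ['quarterly bias audit against protected groups']
```

Grouping items by function makes it easy to show, during an audit or ISO 42001 certification, which parts of the governance lifecycle have open work.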
With AI-powered communications capture, surveillance, and archiving solutions, we help firms accurately detect risks across every communications channel. Because we operate private data centers, AI models can be hosted directly within our own environment, allowing you to stay aligned with compliance demands and keep your data secure, even at the forefront of innovation. What’s more, robust reconciliation controls ensure the quality and completeness of your data.