Consultations on the AI Strategy for the Federal Public Service: What We Heard

Minister’s message

As President of the Treasury Board, one of my responsibilities is to guide the government’s digital transformation toward modern and integrated systems and services that meet the needs of Canadians in the 21st century.

Artificial intelligence (AI) represents one of the most transformative technological advancements in recent history. Its broad application provides opportunities for innovation and efficiency in service delivery across the federal public service. At the same time, it must be approached with the same ethical standards that underpin the development of policies, guidelines, and strategies, to ensure its responsible use.

This report summarizes the views shared by Canadians in the development of Canada’s first comprehensive AI strategy for the federal public service.

From May to October 2024, experts and citizens alike shared their views on the use of AI by the Government of Canada. Research institutes, academia, industry, civil society, bargaining agents, Indigenous organizations, and members of the public provided feedback across the 4 key pillars outlined in this report: human-centred, collaborative, ready, and trusted. In all, more than 300 submissions and consultations will shape the development of a comprehensive AI strategy.

To all those who participated in the consultations, thank you for sharing your ideas. Your participation is key to developing an AI strategy grounded in democratic values through consultation with citizens. This feedback will help shape not only the use of AI in the Government of Canada, but also the future of government itself.

I would also like to thank the former President of the Treasury Board, the Honourable Anita Anand, for her leadership and work in advancing digital government.

Work is underway to release the Government of Canada’s AI strategy in the spring of 2025. The strategy will provide a roadmap outlining best practices to facilitate the ethical, secure, and successful use of AI.

The federal public service can leverage the tools and resources that AI offers to improve the government’s operational efficiency, increase our science and research capacity, protect our interests, and deliver simpler, faster digital services to Canadians and Canadian businesses. With an extensive consultation process now complete, the government is well positioned to move forward with a strategy that will help provide the modern service Canadians deserve.

Original signed by:

The Honourable Ginette Petitpas Taylor, P.C., M.P.
President of the Treasury Board

About this report

This report provides a summary of the feedback submitted during in‑person and online consultations that took place from April to October 2024.

For this report, we used Microsoft Copilot to help compile and synthesize the data collected through submissions to the online consultations. Any personal identifiers, such as names of individuals or company names, were removed before the data was used. Copilot was also used to assist in drafting and editing sections of the document, but all content was thoroughly reviewed and fact-checked by the team working on the development of the strategy.
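
The report does not describe the exact tooling used to remove identifiers. As a purely illustrative sketch, assuming a simple pattern-based approach, a redaction pass over submission text could look like the following (the patterns, names and function below are hypothetical and are not the process actually used):

    import re

    # Hypothetical patterns for common personal identifiers. A real
    # de-identification step would need a much richer rule set or a
    # dedicated tool.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    }

    def redact(text, known_names):
        """Replace known names and obvious identifiers with placeholders."""
        for name in known_names:  # names collected from the submission form
            text = text.replace(name, "[REDACTED NAME]")
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    # Example: scrub one submission before it is passed to a synthesis tool.
    print(redact("Contact Jane Doe at jane.doe@example.com.", ["Jane Doe"]))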

Background

On April 24, 2024, the President of the Treasury Board, Anita Anand, announced the launch of an AI Strategy for the federal public service that will align and accelerate responsible AI adoption across government.

On May 27, 2024, President Anand hosted a roundtable with academics and researchers from leading AI research institutions such as Amii, Mila, Vector Institute, CIFAR, and several universities, including Carleton, McGill, Concordia and Western universities, and the University of Ottawa.

At the roundtable, President Anand outlined her vision for AI’s transformative impact on the public service and set out three main goals:

  • enhancing service delivery for Canadians
  • automating routine tasks to increase operational efficiency
  • bolstering Canadian research capacity in science

President Anand recognized the potential of AI in boosting government productivity and streamlining interactions with Canadians, but she emphasized the need to adopt it ethically and responsibly.

She also highlighted the need for a human-centred, transparent and secure AI strategy and underscored that innovation and collaboration are essential for the strategy’s success.

Since May, the Treasury Board of Canada Secretariat (TBS) has consulted widely on the strategy, including with the Office of the Privacy Commissioner of Canada, bargaining agents, representatives from the Canadian AI sector, civil society organizations, Indigenous organizations, and the Digital Governance Council and its member organizations.

Based on these consultations, TBS identified four key pillars to define the strategy:

  1. Human-centred
  2. Collaborative
  3. Ready
  4. Trusted

In September and October, TBS held online public consultations on the Consulting with Canadians platform. A diverse group of Canadians, in terms of both professional background and personal identity, responded to the call (see the Appendix).

Participants were asked to comment on the following:

  • The proposed pillars for the Government of Canada’s (GC) AI use
  • Priority areas for the GC to use AI
  • Areas where the GC should not use AI
  • Types of AI the GC should not use

Introduction

To implement AI technologies in ways that maximize benefits to Canadians while maintaining ethical standards, the federal government needs an AI strategy rooted in making sure that all its uses of AI are human-centred, collaborative, ready and trusted.

AI is not just a technological advancement. It is a tool for:

  • enhancing the government’s capacity for scientific research
  • better protecting Canada’s interests
  • improving the quality of services provided to Canadians

The consultations focused on the four proposed pillars of the AI strategy. The pillars provide flexibility so that the federal government can adapt to changing circumstances and technologies and remain relevant and responsive to new opportunities and challenges.

Feedback on proposed pillars

1. Human-centred

Canadians have been clear: all uses of AI in the GC must put people first.

For the public, AI should streamline service delivery, reduce wait times and consistently provide more accurate information. For example, the GC could use AI-driven chatbots to provide immediate assistance for answering common questions, which can make services more accessible and user-friendly. The GC should aim to gain the trust of the public as the government adopts AI in different departments.

For public servants, AI should automate routine tasks and free up employees to focus on more complex tasks. Used in this way, AI can improve efficiency and increase job satisfaction.

Participants expressed concerns about training and accessibility. Many stressed the need for comprehensive retraining programs to help public servants adapt to changes in their roles due to AI. Many emphasized the need for AI systems to be inclusive and meet the needs of all citizens, including marginalized and vulnerable groups. AI must not create barriers to access, such as widening the digital divide. Rather, it must improve accessibility for everyone.

2. Collaborative

Collaboration can involve sharing best practices, resources, data, expertise and even computing power.

Collaboration—both within the federal government and with external partners—will be crucial in making decisions about adopting AI in the GC. It will help avoid duplicating investments, foster innovation, and standardize AI rulesets so that they work seamlessly across federal institutions and with provinces and territories.

Finally, increased AI adoption and a robust AI Strategy can play a crucial role in facilitating collaboration on global challenges such as climate change, pandemics and cybersecurity threats by analyzing data, predicting trends and coordinating responses.

Participants in the consultations emphasized the need for the GC to work with various parties when adopting AI, including Indigenous communities, equity-seeking groups, academia and industry.

International cooperation is also key. Aligning AI policies with international efforts to rally around a common approach will ensure Canada continues to be a leader on the global AI stage.

3. Ready

Readiness means having the infrastructure, tools and policies needed to adopt AI safely and securely. Having the right supports in place will not only allow the federal public service to adopt AI today, but it will also set us up for long-term success.

Participants highlighted the need for scalable and secure AI systems, as well as for continuous training and purposeful upskilling so that public servants have the skills to use AI technologies effectively.

The foundational role of data in AI solutions was also identified. It was repeatedly noted that reliable and trusted data, such as that provided by Statistics Canada, is the backbone of successful AI initiatives. The output of an AI system is only as good as the data going into it or being used to train it, and considerable effort may be required to make existing government databases AI-ready. Continued investment in data governance frameworks ensures that data will be accurate, complete and relevant for AI applications.
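
As a minimal illustration of what making data AI-ready can involve, the sketch below flags missing and implausible values in a small dataset before it is used to train or feed an AI system. The column names, values and thresholds are invented for illustration and do not represent any actual GC dataset or standard:

    import pandas as pd

    # Hypothetical table; in practice this would be an existing government dataset.
    df = pd.DataFrame({
        "region": ["ON", "QC", None, "BC"],
        "wait_time_days": [12, 15, 140, -3],
    })

    # Completeness: share of missing values in each column.
    completeness = df.isna().mean()

    # Validity: values outside a plausible range are flagged for human review.
    needs_review = df[(df["wait_time_days"] < 0) | (df["wait_time_days"] > 120)]

    print("Missing-value rate by column:")
    print(completeness)
    print("Rows needing review:")
    print(needs_review)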

Robust IT infrastructure, including cloud computing and high-performance computing capabilities, is also essential for AI deployment. Participants also suggested that the GC should develop in-house AI capabilities to better manage infrastructure and reduce reliance on external vendors.

4. Trusted

Trust is essential for successfully adopting AI in the GC. As stewards of the public trust, public servants must use AI in ways that foster and build on that trust.

Respondents stressed the importance of transparency and accountability and of understanding biases in AI models. Addressing biases in AI models is essential to prevent discrimination and ensure fairness in AI-driven decisions. This is key for the public service as it incorporates AI into the delivery of representative and inclusive government programs and services to all Canadians.
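
One common way to surface such biases is to compare an AI system’s outcomes across groups. The sketch below computes a simple demographic parity gap from hypothetical decision data; the groups and figures are invented for illustration, and this is not a prescribed GC method:

    # Hypothetical outcomes from an automated screening step:
    # 1 = favourable decision, 0 = unfavourable, grouped by a demographic attribute.
    decisions = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
    }

    def favourable_rate(outcomes):
        return sum(outcomes) / len(outcomes)

    rates = {group: favourable_rate(o) for group, o in decisions.items()}
    parity_gap = max(rates.values()) - min(rates.values())

    print("Favourable-decision rates:", rates)
    # A large gap is a signal to audit the model and its training data more closely.
    print("Demographic parity gap:", round(parity_gap, 2))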

To build public trust, the GC must communicate clearly about how and when it uses AI. For example, it must label AI-supported content and provide explanations and recourse when AI is involved in decision-making.

Identifying who is accountable for decisions made by AI, and correcting mistakes when they happen, is crucial for ensuring recourse outside the AI system itself and for addressing any issues that arise.

Privacy and security are paramount, and the GC needs a strong focus on requirements to protect sensitive data and maintain public trust.

Potential uses of AI in government

We asked respondents where they think AI could be used in the federal government. Common responses included the following:

  1. Administrative efficiency: AI could automate the creation, sorting and management of documents, reducing the time spent on administrative tasks. It could also help manage group email inboxes by sorting emails, deleting unnecessary ones, and drafting responses to common queries.
  2. Public service delivery: AI-powered chatbots could manage routine inquiries from citizens, providing quick and accurate information and freeing up human agents for more complex issues. AI could also be used to translate large volumes of text quickly and accurately, ensuring that government communications are available in both official languages.
  3. Data analysis and decision support: AI could analyze large datasets to predict trends and outcomes, which could be useful in areas such as public health monitoring, environmental monitoring, and economic forecasting. It could also help in analyzing policy impacts by processing large amounts of information and providing insights that can inform decision-making.
  4. Human resources: AI could streamline the recruitment process by scheduling interviews and exams. It could screen applications to search for specific combinations of skill sets or mark written exams. It could also personalize training programs for employees, identify skill gaps and recommend relevant courses.
  5. Public health and social services: AI could help in finding patterns and supporting public health decisions. It could also help manage and process applications for social services, streamline administrative tasks, and enhance the efficiency of resource allocation to communities most in need.
  6. Security and compliance: AI could monitor transactions and activities to detect and prevent fraudulent activities. It could also support government operations by providing analysis as part of the regulatory impact assessment process.
  7. Environmental management: AI could optimize the use of natural resources by analyzing data on consumption and availability. It could also model and predict the impacts of climate change, which could be helpful in developing effective mitigation strategies.
  8. Public engagement: AI could analyze public sentiment on social media and other platforms to gauge public opinion. It could also process and analyze feedback from public consultations, providing insights that inform policy and program development (a simple illustrative sketch follows this list).
  9. Legal and judicial support: AI could help in reviewing legal documents, identifying relevant information, and ensuring compliance with legal standards. It could also help manage and prioritize cases, ensuring that resources are used efficiently and that cases are resolved in a timely manner.
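
As a minimal illustration of the public engagement example above (item 8), the sketch below tallies recurring themes across consultation submissions using simple keyword matching. The submissions and keywords are invented; a real analysis would rely on more robust natural language processing:

    from collections import Counter

    # Hypothetical submission excerpts and theme keywords, for illustration only.
    submissions = [
        "AI should reduce wait times for services",
        "Privacy protections must come first",
        "Training for public servants is essential",
        "Privacy and transparency build trust",
    ]
    themes = {
        "service delivery": ["wait times", "services"],
        "privacy": ["privacy"],
        "training": ["training", "upskilling"],
    }

    counts = Counter()
    for text in submissions:
        lowered = text.lower()
        for theme, keywords in themes.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1

    # Themes mentioned most often across submissions.
    print(counts.most_common())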

Areas of focus

Participants’ comments centred on four areas: procurement, sustainable AI practices, talent and training, and ethical use.

They emphasized that addressing these areas is crucial to ensuring that AI technologies are properly integrated, aligned with sustainability goals, supported by a skilled workforce, and governed by strong ethical standards. By ensuring the AI strategy covers these areas, the public service will be able to harness AI’s potential while maintaining public trust and accountability.

Procurement

Efficient and cost-effective procurement has a direct impact on the success of government operations. While not unique to AI, respondents said government procurement processes must be transparent and fair, show that the government is using public funds effectively, and ultimately lead to better services and outcomes.

  • The GC should shift to a flexible and agile, outcome-based procurement model, which would emphasize achieving operational outcomes and providing value to citizens, rather than simply meeting technical requirements and specifications. By focusing on outcomes such as improved citizen services, increased efficiency, or cost savings, the GC can encourage the flexible adoption of AI technologies that deliver tangible benefits and drive public sector transformation. This flexibility is crucial in the rapidly evolving field of AI, where new technologies and methodologies are constantly emerging and can drastically change the approach to achieving a particular objective during a contract’s lifespan.
  • Procurement of AI must be ethical and procurement processes must ensure transparency in the tools that are bought. Vendors should provide clear documentation about their AI models, including details on training data and algorithms. Contracts should include clauses that mandate regular audits and adherence to ethical guidelines. Respondents called for open-source AI solutions and stressed the need for interoperability between different AI systems to avoid vendor lock-in and to allow for flexibility.
  • Supporting Canadian businesses, particularly small and medium-sized enterprises, should be considered essential for promoting innovation and economic growth. In the context of the AI strategy, this support can be manifested through initiatives that give opportunities for these enterprises to provide AI technologies, training, and resources to the federal government. By doing so, the GC can foster a vibrant domestic AI ecosystem that drives technological advancements and economic benefits for all Canadians.
  • Participants emphasized sovereignty in procurement decisions, which can help make sure critical AI technologies and services are developed and retained in the country. This not only enhances national security and self-reliance but also stimulates local innovation and job creation. By prioritizing Canadian AI solutions, we can build a robust and secure AI infrastructure that aligns with our national interests, supports the long-term growth of our economy, and reduces reliance on foreign or large multinational enterprises.

Sustainable AI practices

  • The GC should consider the environmental impact of AI technologies when making procurement decisions. It should choose energy-efficient AI solutions that have a minimal carbon footprint.
  • Instead of using vast amounts of data, the GC should use relevant data in smaller, optimized models because they use less energy to store and process data. This approach can result in better performance with less environmental impact.
  • The GC should consider building requirements into procurement decisions that AI tools be energy-efficient and that vendors of these tools use environmentally friendly data centres.
  • All public servants, particularly program managers, technical AI researchers, developers, and policymakers, should understand the environmental impact of AI and the important role of sustainable AI practices in minimizing this impact.

Talent and training

  • Federal public servants at all levels need training in AI. AI literacy programs should cover the basics of potential applications, as well as ethical considerations. Employees who are directly involved in AI projects, such as data scientists and AI engineers, need specialized training. A culture of continuous learning and professional development will help employees keep pace with the developments in AI.
  • To attract and retain top AI talent, the GC will have to provide competitive salaries, career development opportunities, and a supportive work environment. There should be clear career paths and opportunities for professional growth.
  • Many respondents stressed the need for departments to work together to share AI knowledge, tools and best practices. Partnerships with companies and academic institutions can help the GC tap into external expertise and resources.
  • AI teams in the GC need to be diverse. Diversity brings together different perspectives, which helps identify and address biases in AI models, including in data and algorithms. Having diverse teams can lead to fairer and more inclusive AI systems. A mix of perspectives fosters creativity and innovation, resulting in more robust solutions to complex AI challenges.

Ethical use

  • Respondents stressed that comprehensive ethical guidelines for AI use in the federal public service are essential. These guidelines should cover fairness, transparency, accountability, and bias prevention. Respondents suggested regular ethical audits to ensure compliance with the guidelines and prevent harmful biases. Transparency in decisions about AI use is crucial, as are clear explanations. The GC should make AI models available to the public when possible.
  • Respondents called on the GC to put in place mechanisms to mitigate, detect and correct biases in AI systems. Inclusive design can help make sure AI systems consider the needs and perspectives of all user groups, especially marginalized communities.
  • Robust privacy protections are critical to make sure personal data is handled responsibly and in compliance with privacy laws.
  • Strong security measures are also essential to protect AI systems from cyber threats and unauthorized access.
  • Participants insisted that humans must oversee AI decision-making processes, particularly in high-stakes areas such as healthcare and law enforcement.
  • Ethics committees should be set up to make sure AI systems are developed and deployed in compliance with ethical standards.
  • Communication with users, including the public, helps gather input on AI initiatives and address ethical concerns. Raising public awareness about AI benefits and risks through education campaigns and transparent communication is an essential element of a successful AI initiative.

Suggested areas for inclusion in the AI Strategy

The following are key practices related to each pillar and area of focus that were commonly raised throughout the consultations.

Human-centred

  • Automation: Use AI to handle repetitive and administrative tasks, allowing public servants to focus on higher-value activities.
  • Retraining Programs: Develop comprehensive retraining programs to equip public servants with the skills needed to use and work with AI technologies.
  • Inclusive Design: Ensure AI systems are inclusive from the start, addressing the needs of diverse populations and avoiding discrimination or bias.

Collaborative

  • Partnerships: Foster collaborations with academia, industry, Indigenous groups, and international bodies to leverage expertise and ensure ethical AI development.
  • Public Consultations: Conduct regular public consultations to gather feedback and ensure AI initiatives meet public expectations.

Ready

  • AI Infrastructure: Ensure infrastructure can scale to meet future AI demands, using energy-efficient cloud computing services to minimize environmental impact.
  • Data Readiness: Establish strong linkages to existing data strategies to ensure data is readily available for AI applications.
  • Environmental Responsibility: Build or procure AI solutions that demonstrate responsible environmental practices.

Trusted

  • Transparency: Maintain transparency by publishing AI impact assessments and providing clear explanations for AI decisions.
  • Accountability: Establish frameworks to define responsibilities and address issues related to AI use.
  • Ethics and Privacy: Lead in ethical AI practices by championing transparency, addressing bias, upholding fair practices, and protecting privacy through robust policies.

Areas of focus

  • Agile Procurement: Shift to a flexible, outcome-driven procurement model to keep pace with AI. Require transparency from vendors rather than accepting “black box” technology: clear documentation on AI models, training data, and algorithms is a must.
  • Support for Domestic Businesses: Prioritize Canadian businesses, especially small and medium-sized enterprises, to reduce reliance on foreign suppliers and to level the playing field for smaller vendors competing with global firms.
  • Sustainable AI: Make environmental impact a priority in AI procurement and choose AI practices that are green and sustainable.
  • Talent and Training: Foster a culture of continuous learning. Offer competitive salaries and growth opportunities for AI experts. Embrace diversity to create solutions that reflect a wide range of needs.
  • Ethical AI: Ensure human oversight in AI decision-making, especially in critical areas such as healthcare and law enforcement. Set up ethics committees and regular audits to prevent biases. Educate the public on AI’s benefits and risks to support a successful rollout.

Suggested areas where AI should not be used

Consultation participants expressed concern about the risks of using AI, including the following:

  • the risk of reinforcing biases in automated decision-making
  • threats to privacy and civil liberties from AI-driven surveillance
  • the potential for bias in employment-related decisions
  • the limitations and lack of empathy of AI in political decision-making and policy recommendations
  • risks of unfair exclusions in decisions about eligibility for social services

The GC must be extremely careful in how it uses AI in these sensitive areas. It must implement AI applications responsibly, with robust oversight and ethical guidelines, to foster public trust and uphold democratic values.

Many participants mentioned the following as areas where the GC should not use AI:

  • Automated decision-making in criminal justice: Respondents suggested that using AI in criminal justice (such as in sentencing, parole decisions, and predictive policing) could raise ethical concerns by potentially reinforcing racial and socio‑economic biases. This bias could reduce fairness in judicial decisions and lead to harsher outcomes for certain groups. They also suggested that AI systems might operate as “black boxes” and obscure the decision-making process, which would erode public trust and risk systemic discrimination. An overreliance on AI tools could endanger fair, impartial decision-making and might infringe on individuals’ rights.
  • Surveillance and mass data collection: Some respondents expressed concern about AI-driven mass surveillance, facial recognition, and extensive data collection threatening privacy and civil liberties. Some are concerned about government use of AI for tracking individuals in public spaces or monitoring dissent because they fear it could suppress free expression and infringe on personal privacy. Errors in AI surveillance may lead to wrongful accusations and harassment. Perceived constant monitoring could have a negative effect on free speech. Participants suggested that the potential for abuse underscores the urgent need for strict regulatory measures, especially when AI-driven surveillance encroaches on public spaces and personal data.
  • Employment-related decisions: Some fear that AI tools used in hiring, promotion or firing decisions could propagate bias, particularly in the federal public service, because they rely on historically biased data. These tools have been known to disadvantage women and people of colour, particularly in industries where data reflects past preferences for Caucasian male candidates. Such biases could undermine workplace diversity and fairness, and AI’s inability to replicate nuanced human judgment could further complicate employment decisions. Dependence on AI in hiring risks reinforcing structural inequalities and running counter to efforts to create inclusive workforces.
  • Political decision-making and policy recommendations: Political decision-making and policy formulation require human judgment, ethical considerations, and democratic processes that AI alone cannot provide. Participants suggested that a policy developed exclusively by AI, driven solely by data, might propose cost-saving measures that cut social programs, disregarding the adverse impacts on vulnerable populations. AI often optimizes responses for measurable outcomes and may overlook the needs of marginalized people. Relying on AI for policy could create a technocratic system that undermines democratic principles, prioritizes efficiency over inclusion, and potentially marginalizes groups that rely on advocacy.
  • Social services eligibility determination: Some participants noted that automating decisions about eligibility for social services (for example, for income supports or disability benefits) could unfairly exclude people because of algorithmic bias and error. Without human oversight and recourse, these systems may disproportionately affect the most vulnerable and as a result exacerbate social inequalities by excluding people in genuine need. Humans should always be involved in administrative decisions.

Conclusion

The feedback from the consultations provided a comprehensive picture of the key considerations for the GC’s AI strategy. By focusing on making sure all uses of AI are defined by the strategy’s principles of being human-centred, collaborative, ready and trusted, the government can ensure that AI adoption enhances public services, improves efficiency, and maintains ethical standards.

The suggestions and recommendations from consultation participants highlight the importance of inclusion, transparency and continuous engagement with interested parties to build an effective and trustworthy AI ecosystem in the public sector.

The strategy will continue to evolve as the government responsibly and ethically adopts artificial intelligence for different aspects of its work.

As public trust in AI adoption increases, we hope this will result in increased public participation, which will strengthen the pillars of democracy in Canada.

We thank everyone who participated in the consultation process and look forward to working with you again in the future.

Appendix: Data about participants in online public consultations

The consultations were conducted on Consulting with Canadians from mid-September to the end of October 2024.

Breakdown of submissions received

  • Total submissions received: 283
  • Submissions from individuals: 219
  • Submissions from organizations: 64

Breakdown of submissions from individuals, by gender

  • Female: 43%
  • Male: 47%
  • Other: 1%
  • Prefer not to say: 9%

Breakdown of submissions from individuals, by other identity factors

Question | Yes | No | Prefer not to say
I am a person with a disability | 20% | 68% | 12%
I identify as Indigenous | 6% | 83% | 11%
I identify as a racialized person or visible minority | 21% | 66% | 13%
