Questions and Answers – Online Harms

Theme 1: Mandate Commitments

1. What is the purpose of enacting new legislation?

The legislation currently under development would establish a new regulatory framework for social media platforms with a view to making them more transparent, accountable, and responsive in curbing the spread of harmful content.

Under this framework, social media platforms would be required to remove categories of harmful content outlined in the legislation. A key objective of the framework is to create an enabling environment in which all Canadians can participate in online public life without shame, threat, or fear. At the same time, the framework is also intended to ensure that Canadians have access to appropriate tools and recourse mechanisms to report abuse and obtain redress when engaging online.

2. Why is the Government bringing forward new legislation?

The online environment amplifies and spreads content that promotes hateful and extremist ideologies and causes harm to victims of serious crimes. The proliferation of this type of content affects all Canadians, particularly equity-deserving communities, and limits their ability to express themselves online without fear. These harms are well documented.

Hateful online content against Indigenous peoples

For instance, recent events underline the extent to which Indigenous people face this kind of hatred, including racist, derogatory, and threatening content. The killing of Colten Boushie in 2016 saw social media flooded with hate-filled comments targeting Boushie and his friends. In 2018, two women in Flin Flon, Manitoba, were charged with uttering threats and inciting hatred after posting photos of a vandalized car, saying Indigenous people would be killed and calling for a ‘shoot an Indian’ day. In 2020, two ethno-nationalist groups, the Proud Boys and the Sons of Odin, used social media to threaten and attack members of the Wet’suwet’en community during pipeline protests. Data from Statistics Canada show that police-reported hate crimes against Indigenous people are on the rise: between 2016 and 2018, incidents targeting First Nations, Métis, or Inuit communities rose by 17% (from 30 to 35).

Ideologically-motivated violence and hate

Terrorist groups and violent extremists alike use social media platforms to spread propaganda, recruit, organize, and incite violence, and some violent extremists promote and incite hatred against identifiable groups as defined in the Criminal Code. In June 2020, the Institute for Strategic Dialogue published a report on right-wing extremism in Canada, identifying 6,660 right-wing extremist channels, pages, groups and accounts across seven social media platforms that have reached over 11 million users. Since 2014, Canadians inspired in whole or in part by extremist views have killed 21 people and wounded 41 others in Canada. This includes the 2014 Moncton shooting (anti-authority violence), the 2014 Saint-Jean-sur-Richelieu vehicle-ramming attack (religiously motivated violent extremism), the 2015 Edmonton stomping attack (gender-driven, ‘incel’ violence), the 2017 Quebec mosque shooting (ethno-nationalist violence), the 2018 Toronto van attack (‘incel’ violence), the 2019 Sudbury knife attack on a mother and baby (‘incel’ violence), and the 2020 stabbing at a GTA massage parlour (‘incel’ violence).

Child sexual exploitation

Perhaps one of the most egregious harms manifesting online is child sexual exploitation and abuse, which can take the form of the creation, consumption, and sharing of child sexual abuse material, child luring, and child grooming. Offenders are always seeking new ways to exploit children as technology evolves. Canadian law enforcement has seen dramatic increases in reports of online child sexual exploitation in recent years: in 2019, the RCMP saw a 1,106% increase in reports relative to 2014 (from 8,535 to over 102,000 annually). This exploitation disproportionately impacts girls. In 2019, the RCMP found that girls made up 62% of identified Canadian victims depicted in online child sexual exploitation material, and a recent review by the Canadian Centre for Child Protection (C3P) found that 80% of victims appeared to be female and that 78% of images reviewed depicted children under the age of 12.

Gender-based hate

Harmful content on social media is also often gender-specific: a 2017 poll by Amnesty International in eight high-income countries found that 23% of women between the ages of 18 and 55 had experienced some form of harassment or abuse on social media platforms at least once. More recently, a 2020 study by Plan International found that 58% of girls surveyed across 22 countries had experienced harassment or abuse when using social media.

3. Why are you developing this legislation now? Shouldn’t we be focusing on the COVID-19 pandemic?

The COVID-19 pandemic has only exacerbated these problems. There has been a documented increase in COVID-19-related online harms, including racism, hate crimes, and verbal and physical harassment. For example, in a June 2020 survey, close to 30 percent of Chinese Canadians said they had been exposed to racist social media posts or graffiti since the beginning of the pandemic. Similarly, Jewish communities have been exposed to online antisemitism through conspiracy theories and statements blaming Jews and Israel for manufacturing the virus.

It is also clear that Canadians are looking for Government intervention in this area. A recent poll commissioned by the Canadian Race Relations Foundation found that over 90 percent of Canadians think online hate and racism are a problem, and 60 percent believe the federal government should do more to prevent the spread of hateful and racist behaviour online. Notably, by a two-to-one margin, Canadians worry more about the harms caused by hate speech and racism than about limits on freedom of expression or the protection of privacy.

Finally, social media platforms have shown that they will not act aggressively enough to confront these harms on their own; they have consistently demonstrated that they act only when it suits their interests. There is therefore an urgent need to ensure that these companies play an active, consistent, and transparent role in supporting the public interest and removing harmful and illegal content from their platforms.

4. What Government-wide commitments would new legislation fulfill?

The legislation under development would fulfill my mandate letter commitment to create new regulations for social media platforms, starting with a requirement that all platforms remove illegal content, including hate speech, within twenty-four hours. Other online harms in scope include radicalization, incitement to violence, the exploitation of children, and the creation or distribution of terrorist propaganda.

It would also support the principles of Canada’s digital policy framework, the Digital Charter, by diminishing the extent of harmful content on social media platforms and by ensuring strong enforcement and accountability for violations of the laws and regulations that support the Charter’s principles. Moreover, the legislation would support overarching commitments made in the 2020 Speech from the Throne.

Action would also address recommendations made by the Standing Committee on Justice and Human Rights in its 2019 report, Taking Action to End Online Hate, and fulfill key commitments of the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online.

5. Why is a new legislative and regulatory framework necessary to deal with harmful content on social media?

Canada’s existing legislative and regulatory frameworks are unable to meaningfully address the proliferation of harmful content online. Existing laws, including the Criminal Code and the Mandatory Reporting Act, are narrowly focused and enforceable only through the court system.

The legislation under development would address this gap by creating a new regulator to administer, oversee and enforce new legislated requirements to make the reporting, oversight and moderation of harmful content transparent, swift, effective and thorough.

6. How will you ensure that any new regulatory framework is free from systemic racism?

Rooting out systemic racism is very difficult. I think it helps to remember Angela Davis’s line: “In a racist society, it is not enough to be non-racist, we must be anti-racist.”

We cannot simply run through a checklist and say, “Okay, we’ve done it, we’re not racist.” We must set ourselves equity goals, in terms of outcomes and reporting, and we must continually and actively monitor and review those goals and work to improve our progress. Any new regulatory framework would have these goals.

7. Have platforms, victim groups, political parties, or provinces been consulted on the new legislation?

Over the past few months, officials in my Department and I have engaged with stakeholders from civil society organizations and the digital technology sector, including through several roundtable discussions. The Parliamentary Secretary to the Minister of Justice and the Department of Justice also held consultations with stakeholders in 2020 on legal remedies for victims of online hate.

I held four roundtables with academic and technical experts, equity-deserving and advocacy groups, Indigenous communities, industry, think tanks, and platforms. The Parliamentary Secretary to the Minister of Canadian Heritage has held a multi-stakeholder roundtable and consulted substantially with the Liberal caucus to gather views from all corners. The Parliamentary Secretary to the Minister of Justice has also held extensive consultations with community organizations, advocacy groups, police, prosecutors, and academics on hate speech and harmful content online.

8. How does new legislation relate to the Government’s broader policy agenda for the Internet and large tech platforms?

The legislation under development would advance many of the goals and principles that the Government has outlined for digital and Internet policy in Canada’s Digital Charter, notably safety and security, freedom from hate and violent extremism, and strong enforcement and real accountability.

Theme 2: Regulatory Framework

9. What sort of requirements will be imposed on social media platforms?

Canadians can expect to see obligations that draw upon similar frameworks in comparable countries, which could include requirements to remove certain categories of harmful content within set timeframes, accessible tools for users to flag harmful content, and transparency and reporting obligations regarding how platforms moderate that content.

10. What types of harmful content will be subject to regulation?

Canadians can expect that any new legislation would apply to the categories of harmful content outlined in my mandate letter: hate speech; child sexual exploitation content; terrorist content; content that incites violence; and intimate images shared non-consensually.

They can also expect that our approach to selecting the types of content to regulate will be rooted in Canadian criminal law. These categories were selected because they represent the most pernicious forms of harmful content and because they can, on balance, most easily be identified and scoped into a new legislative and regulatory framework that upholds freedom of expression.

11. What social media platforms will be subject to regulation?

Our intention is to be crystal clear in legislation about who the new regulations would apply to.

Canadians can expect that we would regulate large services that Canadians intuitively associate with “social media platforms”, such as Facebook, YouTube and Twitter.

They can also expect that we would regulate platforms where child sexual exploitation and the non-consensual sharing of intimate images are more likely to occur, like Pornhub.

12. How will new requirements for regulated social media platforms be enforced?

A regulatory authority would need to be identified to administer and oversee the framework, and promote and enforce compliance.

Theme 3: Impact

13. What will the benefits of the new legislation be? What about the limitations?

There are a number of benefits associated with the legislation we are developing, the primary one being a reduction in harmful content online – and, with it, an increase in inclusive, safe expression and participation in digital life by those who are marginalized from it because of harmful content.

The legislation would also produce far more information and reporting about how social media platforms conduct their business when it comes to harmful content. This would help develop better research, practices, and oversight of how harmful content is managed.

Of course, we need to be clear and upfront that this legislation would not cover the full range of online harm, which includes bullying, harassment, and other “awful but lawful” behaviours. These are real limitations, but rest assured we are taking incremental steps to confront harmful content online.

Additionally, there are cases in which legislation and regulation may not be the most useful mechanism to address online harms. In some situations, other interventions may be more appropriate.

Consider the Digital Citizen Initiative (DCI) at the Department of Canadian Heritage, which seeks to build citizen resilience by delivering programming and research on misinformation and disinformation. The DCI funds citizen-focused activities and research to achieve concrete outcomes, such as combatting COVID-related racism and misinformation affecting Asian communities in Toronto. It also leads policy development and collaboration on misinformation and disinformation, harmful online content, and platform governance among civil society, academia, and government partners, and it contributes internationally to principles for diversity of content online. Any legislation or regulation we advance would work alongside these activities, supplementing and amplifying their benefits.

Consider, too, that in a number of cases combatting harmful content online will mean confronting content that is illegal in nature. In these cases, law enforcement would play an important role, in addition to any legislation or regulations we may bring forward.

14. What will the new legislation mean for social media users in Canada?

A key objective of the legislation under development is to ensure that users of regulated social media platforms in Canada have the tools they need to flag harmful content in the targeted categories and to understand how platforms moderate this content. New transparency obligations for regulated platforms would allow users to better understand the volume and incidence of content in the targeted categories, enabling them to make more informed decisions about how and where they engage online.

15. How will the regulatory framework interact with policies and procedures already implemented by social media platforms?

The Government recognizes the action that some social media companies have already undertaken to mitigate and prevent the proliferation of harmful content on their platforms. Platforms that are already active in moderating harmful content on their services would have that work acknowledged, with requirements set to improve on their existing content moderation processes.

16. What impact will new legislation and regulations for social media have on freedom of expression?

Any legislation that would regulate how social media platforms moderate content – and which would require them to remove certain types of content – implicates the freedom of expression of social media platforms and their users as guaranteed in section 2(b) of the Canadian Charter of Rights and Freedoms. My eyes are open to this fact.

The legislation we are developing would not sacrifice freedom of expression in the name of safety. In fact, these two values are mutually reinforcing. Our approach would strike an appropriate balance between protecting expression on an open internet and preventing harm on a safer one, fostering an online environment in which Canadians can express themselves freely. The safer people feel from the most pernicious forms of content online, the more openly they will be able to express themselves.

Given the sensitivities at stake, we would take clear and deliberate steps to ensure that any limitations on expression are reasonable and proportionate. For example, the legislation and regulations being developed would be limited to categories of harmful content rooted in the Criminal Code and related jurisprudence – content that is, in other words, already illegal. And we would be very careful to put in place procedural safeguards to protect users from unreasonable, unwarranted, or disproportionate censorship.

17. How does the new legislation compare to similar initiatives in other jurisdictions?

We have benefited from analyzing the experiences of like-minded jurisdictions that have taken action in recent years, notably Australia, France, the United Kingdom, and the European Commission.

Given the worldwide proliferation of online harms and the global consensus that self-regulation is not working, allies and like-minded countries have adopted different approaches to tackling online harms.

We are looking at what Canada can learn and adopt from these countries. In France and Germany, we see approaches characterized by robust requirements for social media platforms to remove broadly defined categories of content within short time periods. In Australia, the United Kingdom, and the European Commission’s proposals, we see approaches that emphasize measures to enhance the transparency and accountability of platform decision-making. These approaches also stress the benefits of non-regulatory mechanisms, including partnerships and collaboration among platforms, civil society organizations, and users. We are taking stock of how these regimes have rolled out and assessing how they might fit the Canadian context.

Across all jurisdictions, we have noted the merits of putting forward an incremental and adaptable approach that can evolve over time, as well as the importance of establishing a regulator with the specific expertise to do this work.

18. How long would it take for new legislation like this to come into force?

We are committed to taking an incremental approach, where new powers and scope would be introduced systematically and strategically.

Similar to regulatory frameworks for online harms introduced in other jurisdictions, this proposal would take time to implement. New legislation would need to be tabled, debated, and passed.

A regulator would then be tasked with developing further details regarding how platforms are to adhere to their obligations. This phase is expected to include consultation with Canadians and implicated stakeholders.
