The Trust & Safety space is growing fast, and with it has come a slew of vocabulary that can be difficult to grasp. This glossary brings the different elements of the T&S space together on one page, so that you are equipped to navigate new legislation, codes, and practices in the future.
Keyword | Definition |
---|---|
Content moderation | Monitoring and screening user-generated content to ensure that it complies with a platform’s T&C as well as with legal guidelines. |
CSAM/CSEM | Child Sexual Abuse Material, or Child Sexual Exploitation Material, in image, video, or audio form. Multiple regulations globally are targeting this issue, including the proposed EU regulation to combat child sexual abuse. |
Dark Patterns | Web design elements that intentionally mislead users or obscure information in order to push them toward potentially harmful choices. Examples include disguised ads, prompts pressuring users to share personal data, and automatic subscriptions. |
Digital Services Coordinators (DSCs) | New national regulatory bodies to be created in each EU Member State following the approval of the Digital Services Act. Under the new regulatory framework, DSCs will be responsible for implementing and enforcing the obligations under the Digital Services Act. Member States must designate their DSCs by 17 February 2024. |
Doxxing | Publishing private and identifiable information about someone on the Internet with malicious intent. Examples include publishing someone’s address or phone number without consent. |
Europol | The European Union’s law enforcement agency, based in The Hague. Its operational activities focus on a broad range of serious and organized crimes, including cybercrime, terrorism, and intellectual property crime. |
GIFCT | The Global Internet Forum to Counter Terrorism is an NGO founded by Facebook, Microsoft, Twitter, and YouTube in 2017. It has since expanded to include a variety of online platforms, with the objective of setting standards and processes to counter terrorist and violent extremist content online. |
Grooming | A form of child sexual exploitation whereby a person attempts to establish some form of relationship with a child prior to sexual abuse. |
LEA | Law Enforcement Agencies. Relevant authorities in the European Trust & Safety ecosystem include the European Commission, Europol, and Digital Services Coordinators, to name a few. |
NCMEC | The National Center for Missing & Exploited Children is a private, non-profit corporation that works to find missing children and combat the sexual exploitation of children. |
Targeted advertisement | A form of advertising that is directed at a specific audience based on their traits and/or personal information. |
Online disinformation | False information that is circulated on online platforms to deceive people. It has the potential to cause public harm and is often done for economic or political gain. |
Online marketplaces | Platforms where businesses and/or consumers can buy and sell goods and services online. An online marketplace can be between businesses, between consumers, or from businesses to consumers. |
Personal data | Any information relating to an identified or identifiable individual. This includes names, home addresses, email addresses, ID numbers, IP addresses, etc. |
Proactive detection | In the context of content moderation, this refers to monitoring carried out on a service’s own initiative to discover new, unknown, and unreported content that breaches legal obligations or the service’s T&C. |
Reactive moderation | In the context of content moderation, this refers to removal or restriction of content after it has been reported by users and/or other third parties. |
Removal orders | Under the EU Terrorist Content Online (TCO) Regulation, this refers to an order from a national authority requiring a service to remove content deemed to be terrorist or violent extremist content. Upon receipt of the order, the content must be removed within one hour. |
Safety by design | The principle of incorporating safety considerations into the architecture of digital spaces to protect those most at risk. |
Sexting | Sending and/or receiving sexually explicit text/audio/video content through online services. |
Terms and Conditions (T&C) or Terms of Service (ToS) | The legal agreement between the provider of an online service and the person who wants to use the service. New regulations oblige providers to make their T&C easily understandable, especially if the service is used by minors. |
Trusted Flaggers | Generally, this refers to individuals or entities with proven expertise in flagging harmful or illegal content to online service providers. Recent legislation in the EU has institutionalized their role and given them more responsibilities: trusted flaggers must now apply for and be awarded their status by competent authorities, and they are held accountable for their actions. |
Terrorist and Violent Extremist Content (TVEC) | Content that promotes, incites, or depicts terrorism or violent extremism; it is the subject of many regulations and codes around the world. |
User generated content (UGC) | Unlike personal data, this refers to content created by users that they can then share with others on the service. This could exist in multiple forms, such as text, audio, or video. |
User notices | Refers to complaints made by users about harmful or illegal content online. Users can complain to trusted flaggers, directly to the service, or to a law enforcement agency. |
Complaint mechanism | A feature on the online service’s interface that allows users to notify the service of content they find problematic. This can take the form of a web form, an email, or an automated report. Once the complaint is sent, the provider of the service decides how the content should be dealt with, depending on whether it violates the T&C or is illegal. |
Out-of-court dispute mechanism | In the Digital Services Act, this refers to a user’s right to contest an action taken against their content by the service provider. The out-of-court dispute settlement mechanism does not replace the user’s right to take the service to court, if they wish to do so. An out-of-court dispute settlement body is recognized by the DSC of the relevant Member State and must be impartial and independent, have expertise, be transparent, act swiftly and efficiently, and follow the established rules of procedure. |
Content moderator | A person engaged in monitoring and taking action against harmful or illegal content online. |
Automated detection tools | Tools that detect harmful or illegal content online through the use of AI and by cross-checking existing databases. They are particularly effective at removing or restricting clearly harmful content and support human moderators by making their work more efficient. |
Hash sharing | Hashes are unique digital identifiers for image or video content. Hash sharing thus refers to the creation of a common database of such identifiers, so that moderators or automated tools can easily flag content that has already been identified and removed elsewhere (see the sketch after this table). An example of this is the GIFCT hash-sharing database. |
Intellectual property | Inventions, literary or artistic works, designs, names, etc., are considered intellectual property and are protected by law through patents, copyright, and trademarks. This system allows creators to earn from what they have made and protects their work from being misappropriated. |
Trust & Safety | The field and practices that manage challenges related to content- and conduct-related risk, including but not limited to consideration of safety-by-design, product governance, risk assessment, detection, response, quality assurance, and transparency. |
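To make the hash sharing and automated detection entries above more concrete, here is a minimal sketch of how a service might check newly uploaded media against a shared database of known hashes. It is illustrative only: it uses a plain SHA-256 cryptographic hash for simplicity, whereas real systems such as the GIFCT database typically rely on perceptual hashes (e.g., PhotoDNA or PDQ) that still match slightly altered copies. The `KNOWN_HASHES` set and the function names are hypothetical, not part of any real API.

```python
import hashlib
from pathlib import Path

# Hypothetical shared database of hashes of previously removed content.
# In practice this would be provided by a consortium such as GIFCT and
# would contain perceptual hashes, not SHA-256 digests.
KNOWN_HASHES = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
}

def compute_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_content(path: Path) -> bool:
    """Flag an upload if its hash appears in the shared database."""
    return compute_hash(path) in KNOWN_HASHES

if __name__ == "__main__":
    upload = Path("upload.jpg")  # hypothetical uploaded file
    if upload.exists() and is_known_content(upload):
        print("Match found: route to moderator review / removal queue.")
    else:
        print("No match: content proceeds through normal moderation.")
```

In a real deployment, a match would not automatically mean removal; it would typically trigger the kind of human review described under "Content moderator" and "Automated detection tools" above.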
To find out more, contact us at info@tremau.com.
Tremau Policy Research Team