
Trust & Safety Glossary

The Trust & Safety space is constantly growing, and with it comes a slew of vocabulary that can be difficult to grasp. This glossary brings together the terminology used in the T&S space on one page, so that you are equipped to navigate the conversations, new legislation, and practices.

a
  • Age Verification

    Age verification is a process used by websites or online services to confirm that a user meets a specific minimum age requirement. Age verification is typically used in jurisdictions where laws prohibit minors from accessing certain online content or services, such as gambling, pornography, or alcohol sales.

  • Algorithmic amplification

    Algorithmic amplification refers to how algorithms can boost the visibility of some content at the expense of other content by modifying how information is distributed, for example through ranking, ordering, promoting, or recommending.

  • Appeal

    In the content moderation space, an appeal is a process whereby a user who is impacted by a company’s decision can contest that decision by requesting that it be reviewed. The company’s decision can range from disabling a user account to denying access to certain services. An appeal can involve a review of the decision by another party within the organization, such as a manager or an escalations department, or by a designated external body.

  • Automated detection tools

    Tools that detect harmful or illegal content online using machine learning algorithms or image copy detection software that checks content against existing databases of prohibited material. They can be used both to automatically remove or restrict content where there is a high probability that it is prohibited, and to support human moderators by signaling potentially violating content; a simplified sketch of this routing appears below.

    See also: Proactive moderation
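
    Below is a minimal, hypothetical Python sketch of the threshold-based routing described above. The score inputs, threshold values, and action names are illustrative assumptions, not a description of any particular platform’s system.

    ```python
    # Hypothetical sketch of threshold-based routing for automated detection.
    # Thresholds and actions are illustrative assumptions only.
    AUTO_ACTION_THRESHOLD = 0.95   # very likely prohibited: act automatically
    HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: route to a human moderator

    def route_content(content_id: str, violation_score: float) -> str:
        """Decide what to do with content given a model's estimated
        probability that it violates policy."""
        if violation_score >= AUTO_ACTION_THRESHOLD:
            return f"{content_id}: automatically removed or restricted"
        if violation_score >= HUMAN_REVIEW_THRESHOLD:
            return f"{content_id}: queued for human moderator review"
        return f"{content_id}: no action"

    print(route_content("post_123", 0.97))  # automatically removed or restricted
    print(route_content("post_456", 0.72))  # queued for human moderator review
    print(route_content("post_789", 0.10))  # no action
    ```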

b
  • Ban

    On online platforms, a ban refers to a decision by a service to prevent access to certain content or to restrict a user account or group from using the service. For instance, this can include an account suspension or termination.

  • Block

    This refers to the disabling of an online service or specific aspects of it for one or more users. There could be various reasons why content is blocked, including legal restrictions in certain jurisdictions. When done between users, the block function may disable other features, such as preventing the blocked user from viewing the blocker’s profile or other related information.

  • Bot Account

    A bot account is an automated account created and managed by a software program, or “bot”, rather than a human. The permissibility and classification of a bot account depends on the rules of the respective service and the bot’s activities, which could be benign, harmful, or neutral. Bot accounts can be used for a variety of purposes, including spamming, amplifying certain messages or content, or performing tasks such as liking or retweeting specific posts.

c
  • Cache server

    A cache server is a server that stores frequently requested web content locally in a network, providing faster access to the content and saving bandwidth. Cached data includes web pages, forms, images, and videos; a minimal sketch of the caching idea follows below.
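
    The sketch below illustrates the caching idea in Python: content fetched once is stored and reused until it expires. The fetch_from_origin function is a hypothetical placeholder for a real network request, and the TTL value is an arbitrary assumption.

    ```python
    # Minimal sketch of the caching idea behind a cache server: keep recently
    # fetched content locally and serve it again without contacting the origin.
    import time

    CACHE_TTL_SECONDS = 60                       # arbitrary expiry for the demo
    _cache: dict[str, tuple[float, bytes]] = {}  # url -> (stored_at, content)

    def fetch_from_origin(url: str) -> bytes:
        # Placeholder for an actual HTTP request to the origin server.
        return f"content of {url}".encode()

    def get(url: str) -> bytes:
        entry = _cache.get(url)
        if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
            return entry[1]                      # cache hit: served locally
        content = fetch_from_origin(url)         # cache miss: fetch and store
        _cache[url] = (time.time(), content)
        return content
    ```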

  • Child Sexual Abuse Material (CSAM)

    Child Sexual Abuse Material refers to any material that portrays sexual activity involving a person who is below the legal age of consent. To accurately describe the exploitation and abuse of children and protect the dignity of victims, the European Parliament recommends using the term “child sexual abuse material” instead of “child pornography.”

  • Community Guidelines

    A set of rules and restrictions that users are required to adhere to when using a digital service. These guidelines typically outline what is and is not acceptable content or behavior on a given service, as well as the consequences for violating these rules. They may also be referred to as content policies or codes of conduct. See also: Terms of Service

  • Community Moderation

    Community moderation is a content moderation approach in which site or service users, rather than site administrators, corporate employees, or contractors, are responsible for reviewing and moderating user-generated content. Wikipedia, Reddit, and Discord are online platforms that largely rely on community enforcement of standards of behavior and content to maintain a safe online environment for all users, although they still apply some centralized moderation for the most serious forms of illegal content, such as CSAM.

  • Complaint Mechanism

    A complaint mechanism on an online service is one whereby a user, whose content has been removed or whose account is disabled, can ask for a review of that decision on the grounds that it was not against the terms of service of the platform and/or not illegal.

    See also: Out-of-court dispute mechanism

  • Content Moderation

    Reviewing user-generated content to ensure that it complies with a platform’s terms and conditions as well as with applicable legal requirements. See also: Content Moderator

  • Content Moderator

    A person engaged in reviewing and taking action against content that is illegal or against the terms of service of a given platform.

  • Content Removal

    This refers to the act of deleting or taking down content, which can be carried out either by the user who posted it or by the hosting platform. The platform’s terms of service typically determine what content is prohibited.

  • Copyright infringement

    Copyright infringement is the unauthorized use of copyrighted material, such as text, images, or videos, in a way that violates the rights of the copyright holder without their permission or a legally valid exception. This can include copying, distributing, displaying, or performing the copyrighted work, or creating derivative works without authorization. Copyright infringement is a violation of the exclusive rights of the copyright owner and can lead to legal action, including damages, injunctions, and other remedies.


  • Counterfeiting

    Counterfeiting is the unauthorized production or sale of products or services using a false trademark, which can deceive consumers or observers into believing they are authentic. This can include money, goods, or other products that are made and sold with the intent to defraud others by using a false representation. 

    See also: Intellectual Property

d
  • Dark Patterns

    Dark patterns are tactics used in user interface (UI) design that aim to deceive people into making purchases or taking specific actions that benefit the business. These techniques are often subtle and hard to notice, making them a threat to internet users. Examples include disguised ads, prompts pressuring users to share personal data, and automatic subscription sign-ups.

  • De-indexed

    The term refers to a web page or website that has been taken off a search engine index, which may occur on a temporary or permanent basis.

  • Demonetization

    Demonetization refers to limiting or restricting the ability of an account, channel, or person to generate income from their content on a specific platform. This may happen if the creator violates the platform’s terms and conditions or if there are changes in the platform’s algorithms that determine eligibility for revenue.

  • Digital Services Coordinators (DSCs)

    New national regulatory bodies to be created in each EU Member State under the Digital Services Act. DSCs are responsible for implementing and enforcing the obligations of the DSA and had to be designated by Member States by 17 February 2024.

  • Disinformation

    False information that is circulated on online platforms to deceive people. It has the potential to cause public harm and is often done for economic or political gain.

  • Doxing

    Publishing private and identifiable information about someone on the Internet with malicious intent. Examples include publishing someone’s address or phone number without consent. 

e
  • Europol

    The European Union’s law enforcement agency, based in The Hague. Its activities focus on a broad range of serious and organized crime, including cybercrime, terrorism, and intellectual property crime. Europol also hosts the EU Internet Referral Unit, which detects and investigates malicious content on the internet and in social media.

f
  • Fact-checking

    Fact-checking is a process used to verify the accuracy of published statements and provide an unbiased analysis of whether the claims in the communication can be trusted. The methods used and the types of content checked vary widely.

  • Filtering

    Filtering, in the context of content moderation, is a typically automated process in which content that meets specific criteria is removed, downranked, or disabled; a simplified sketch follows below.
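
    Below is a simplified, rule-based filtering sketch in Python. The term lists and decisions are illustrative assumptions; real filters combine many signals and are considerably more nuanced.

    ```python
    # Simplified sketch of rule-based filtering: content matching certain
    # criteria is removed or downranked. Term lists are illustrative only.
    BLOCKED_TERMS = {"buy stolen credentials"}           # triggers removal
    DOWNRANK_TERMS = {"miracle cure", "get rich quick"}  # reduces visibility

    def filter_decision(text: str) -> str:
        lowered = text.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return "remove"
        if any(term in lowered for term in DOWNRANK_TERMS):
            return "downrank"
        return "allow"

    print(filter_decision("This miracle cure works!"))  # downrank
    print(filter_decision("Lovely weather today"))      # allow
    ```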

  • Flagging

    Flagging is a term used interchangeably with reporting, which refers to the act of requesting a review of online content, conduct, or a user account. The content can be flagged by an algorithm, a content moderator, or another user.

g
  • Geoblocking

    Geoblocking, also known as geofencing, is a technological feature that limits access to online content or services based on the user’s geographic location. The user’s country is typically inferred from the IP address of the device being used, and access to the content is then granted or denied accordingly; a simplified sketch follows below.
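
    The following Python sketch shows the basic geoblocking logic. The country_from_ip function is a hypothetical stand-in for a real IP-geolocation lookup (such as a GeoIP database), and the allowed-country list is an illustrative assumption.

    ```python
    # Simplified geoblocking sketch: resolve the requester's country from the
    # IP address, then allow or deny access. The lookup is a placeholder.
    ALLOWED_COUNTRIES = {"FR", "DE", "NL"}

    def country_from_ip(ip_address: str) -> str:
        # Placeholder: a real implementation would query a GeoIP database.
        return "FR" if ip_address.startswith("192.0.2.") else "US"

    def is_access_allowed(ip_address: str) -> bool:
        return country_from_ip(ip_address) in ALLOWED_COUNTRIES

    print(is_access_allowed("192.0.2.10"))   # True  (resolved to FR)
    print(is_access_allowed("203.0.113.5"))  # False (resolved to US)
    ```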

  • Global Internet Forum to Counter Terrorism (GIFCT) 

    The Global Internet Forum to Counter Terrorism is an NGO founded by Facebook, Microsoft, Twitter, and YouTube in 2017. It has since expanded to include a variety of online platforms with the objective of setting standards and processes to counter terrorist and violent extremist content online. The GIFCT also manages the Hash-Sharing Database that member platforms can connect to. 

    See also: Hash matching

  • Grooming

    A form of child sexual exploitation whereby a person attempts to establish some form of connection (such as building trust in a relationship) with a child, potentially with an aim of sexual abuse or exploitation either online or offline.

h
  • Hash-based matching

    Hashes are unique digital identifiers for image or video content. Hash-based matching is a technique used to detect identical or closely resembling content.

    The process of assigning a unique hash value to an image using an algorithm is known as image hashing. Hash matching usually refers to a process where the hashes of images on an online service are compared against the hashes in a database of images already determined by human reviewers to be prohibited or illegal; a simplified sketch follows below.

    Hashes are sometimes also called “digital fingerprints”: when multiple copies of one image exist, they all share the same hash value. See also: Flagging
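
    The Python sketch below illustrates the matching step using an exact cryptographic hash. Production systems typically rely on perceptual hashes (for example PhotoDNA or PDQ) so that visually similar images also match; this example only matches byte-for-byte identical files, and the database contents are purely for demonstration.

    ```python
    # Simplified hash-matching sketch using SHA-256 (an exact hash).
    import hashlib

    KNOWN_PROHIBITED_HASHES = {
        # In practice: hashes of images confirmed as prohibited by reviewers.
        # For this demo, the set holds the SHA-256 of empty input so the
        # final call below returns True.
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def image_hash(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def is_known_prohibited(image_bytes: bytes) -> bool:
        return image_hash(image_bytes) in KNOWN_PROHIBITED_HASHES

    print(is_known_prohibited(b"some new image bytes"))  # False
    print(is_known_prohibited(b""))                      # True
    ```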

  • Hate Speech

    Hate speech is any form of communication, whether written, spoken or otherwise expressed, that attacks or incites violence, discrimination or hostility against a particular individual or group on the basis of their race, ethnicity, nationality, religion, sexual orientation, gender identity, or other characteristics.

  • Hosting service

    A hosting service enables individuals, companies, and other service providers to host websites, databases, and applications.
    Within the meaning of the DSA, a hosting service offers the storage of user-generated content. This includes, for example, file-sharing services, social media, video-sharing platforms, and marketplaces.

i
  • Intellectual property

    Inventions, literary or artistic works, designs, names, etc., are considered intellectual property and are protected by law through patents, copyright, and trademarks. This system allows people to earn from what they have created and protects their intellectual property from being stolen.

  • ISP (Internet Service Provider)

    An Internet Service Provider is a company or organization that offers internet access to users through wired or wireless connections. They have the necessary equipment and telecommunication line access to establish a point of presence on the internet in the area they serve. ISPs can operate locally, regionally, nationally, or globally, and may be either public or private.

m
  • Mere Conduits

    “Mere conduits” is a term used in the DSA and it refers to an intermediary service that transmits user information without modifying it in the transmission. See also: ISP

  • Misinformation

    Misinformation refers to incorrect or inaccurate information that is spread unintentionally and usually without malice, but can still lead to confusion or potential harm to individuals. See also: Disinformation

n
  • Notice and Action

    Notice and action is a mechanism that allows users to notify or flag illegal content to an online service. Under the DSA, notice and action mechanisms are mandatory for all hosting service providers and must be easy to access and user-friendly.

o
  • Online Marketplaces

    Platforms where businesses and/or consumers can buy and sell goods and services online. An online marketplace can be between businesses, between consumers, or from businesses to consumers. In the DSA, online marketplaces are understood as digital services that facilitate transactions between consumers and sellers by providing an interface for the presentation of goods or services offered by those sellers. See also: Online Platforms

  • Online Platforms

    An online platform refers to a digital service that enables interactions between two or more sets of users who are distinct but interdependent and use the service to communicate via the internet. The phrase “online platform” is a broad term used to refer to various internet services such as marketplaces, search engines, social media, etc.

    In the DSA, online platforms are described as entities that offer hosting services and distribute user information to the public. This includes various types of online platforms, like social networks, online marketplaces and app stores.

  • Out-of-court dispute mechanism

    In the DSA, this refers to the user having the right to contest the action taken against their content by an online platform. The out-of-court dispute settlement mechanism does not replace the right of the user to take the service to court, if they wish to do so. An out-of-court dispute settlement body is recognized by the DSC of the relevant Member State and must be impartial and independent, have expertise, be transparent, act swiftly and efficiently, and follow the established rules of procedure.  

    See also: Digital Services Coordinator

p
  • Parental Controls

    Parental controls are software tools that enable parents or guardians to manage a child’s online activity and reduce the risks they might encounter. These features allow them to control what content can be accessed, which apps can be downloaded, and whether in-app purchases are restricted.

  • Personal data

    Any data that can identify a person is regarded as personal data. This includes names, home addresses, email addresses, ID numbers, IP addresses, etc. In the US, various laws define personal data in their own terms, such as protected health information under HIPAA. In the EU, the GDPR defines it as any information relating to an identifiable person, such as a name, email address, or location data, along with special categories including data revealing ethnic origin, political opinions, religious beliefs, and biometrics.

  • Proactive detection 

    In the context of content moderation, proactive detection refers to the practice of actively seeking out and identifying problematic content on a platform rather than waiting for user reports. Proactive detection techniques can identify policy violations on an online platform using automated tools and algorithms designed to recognize a wide range of harmful content.

  • Proxy

    A proxy is a server or software application that acts as an intermediary between a client (such as a web browser) and another server. Like a VPN, it can also mask a web user’s IP address. Proxies are commonly used for various purposes, such as to bypass internet censorship and geographic restrictions by allowing users to access content from different locations. See also: VPN

r
  • Reactive moderation

    In the context of content moderation, this refers to removal or restriction of content after it has been reported by users and/or other third parties.

    See also: Community moderation

  • Removal orders

    In the Regulation on Terrorist Content Online, this refers to a legal order from a national authority to remove content established to be terrorist content from an online service. Upon receipt of the order, the content must be removed within one hour.


  • Risk Assessment

    Risk assessment refers to the process of identifying, analyzing, and evaluating the severity and probability of risks and threats associated with business or product development, deployment, and maintenance.

    In the context of the DSA, very large online platforms are required to annually assess the systemic risks stemming from the design, functioning or use of the platforms, including any actual or foreseeable impact on four distinct categories of risks.


s
  • Safety by design

    The principle of incorporating safety considerations into the processes, governance architecture, design, and functionality of online spaces to protect those most at risk. 

t
  • T&C (Terms and Conditions) or ToS (Terms of Service)

    The legal agreement between the provider of an online service and the person who wants to use the service. New regulations oblige providers to make their T&C easily understandable, especially if the service is used by minors. See also: Community Guidelines

  • Targeted advertising

    Targeted advertising is a marketing approach that involves delivering customized ads to specific audiences based on their characteristics, interests, behavior, and preferences. This is achieved by collecting and analyzing data about users from various sources such as websites, apps, social media, and offline interactions. The aim is to increase the effectiveness of advertising campaigns by reaching users who are more likely to be interested in the product or service being promoted.

    See also: Personal data

  • Ticket

    A ticket typically refers to a customer service request or complaint submitted to a company or organization, for example through a social media platform. In customer support software, which is frequently used to track complaints and Trust & Safety cases, a “ticket” refers to a particular support case or incident, typically identified by an ID number or other labeling convention. Tickets are often managed using specialized software tools that help companies track and respond to customer inquiries in a timely and efficient manner.

  • Transparency Report

    A transparency report is a document released by an organization that discloses information related to its policies, practices, and actions. Typically, transparency reports provide details on the handling of requests for user data and content removal, as well as government requests for user records, among other relevant metrics and insights.

  • Trust & Safety

    The field and practices that manage challenges related to content- and conduct-related risk, including but not limited to consideration of safety-by-design, product governance, risk assessment, detection, response, quality assurance, and transparency.

    See also: Safety by design

  • Trusted Flaggers

    Generally, this refers to individuals or entities that have proven expertise in flagging harmful or illegal content to online service providers. Within the meaning of the DSA, trusted flaggers are entities that have been awarded an official status by a Digital Services Coordinator. Online platforms must ensure that notices from such organizations are treated with priority.

  • TVEC (Terrorist and Violent Extremist Content)

    Terrorist and violent extremist content (TVEC) on the internet includes offensive and illegal material that promotes harmful extremist views through various forms such as articles, images, videos, speeches, and websites.

u
  • UGC (User generated content)

    Refers to content created by users that they can then share with others on the service. This could exist in multiple forms, such as text, audio, or video.

  • URL (Uniform Resource Locator)

    A URL, or Uniform Resource Locator, is a string of characters that provides the address or location of a resource on the internet, such as a web page. A typical URL consists of several parts, including a protocol identifier, a domain name or IP address that identifies the server hosting the resource, and a path that specifies the location of the resource on the server; the example below shows how a URL breaks down into these parts.
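
    As an illustration, Python’s standard library can split a URL into these parts; the URL used here is a fictional example.

    ```python
    # Splitting a URL into its components with the standard library.
    from urllib.parse import urlparse

    parts = urlparse("https://www.example.com/articles/glossary?term=vpn")
    print(parts.scheme)  # 'https'              (protocol identifier)
    print(parts.netloc)  # 'www.example.com'    (domain of the hosting server)
    print(parts.path)    # '/articles/glossary' (location of the resource)
    print(parts.query)   # 'term=vpn'           (query string)
    ```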

v
  • VLOPs (very large online platforms)

    The DSA has coined the new terms very large online platforms (VLOPs) and very large online search engines (VLOSEs), which are defined as services with more than 45 million average monthly active users in the EU. Once an online platform or search engine passes this threshold, it is designated as a VLOP or VLOSE by the European Commission.


  • VPN (Virtual Private Network)

    A Virtual Private Network is a secure way for users on public networks to connect to a private network, typically by encrypting all network traffic. VPNs use encryption and other security protocols to ensure that data transmitted over the network is protected from unauthorized access. VPNs can also help web users conceal their IP address, thereby preventing their location from being accurately detected or logged by the service provider.
