by Zhanar Tukyeva, IRM Cyber Group

As of 17 March 2025, the UK’s Online Safety Act is officially in force. The Online Safety Act 2023 is a law aimed at protecting both children and adults online by making social media companies and search services more accountable for their users’ safety.

The Act requires platforms to:

• Reduce risks of illegal activity and remove illegal content.
• Protect children from harmful and age-inappropriate content.
• Be transparent about content that may be harmful to adults and give users more control over what they see.

The Online Safety Act applies to search services and any online platforms that let users post content or interact with others. This includes social media, cloud storage, video-sharing sites, online forums, dating apps, and messaging services. The Act also covers services based outside the UK if they have a significant number of UK users, target the UK market, or pose a material risk of harm to UK users.

The Online Safety Act requires platforms to perform risk assessments on how their algorithms might expose users to harmful or illegal content and to take steps to reduce those risks. The law also addresses the danger of repeated content exposure, such as algorithms pushing large volumes of harmful material to children in a short period. Platforms must also publish annual reports on their safety measures and the impact of their algorithms.
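To make the risk-mitigation duty concrete, here is a minimal sketch of how a recommender feed might throttle exposure. It is purely illustrative: the Act prescribes outcomes, not mechanisms, and the names, thresholds, and upstream harm score below are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical limits; the Act does not prescribe specific values.
MAX_FLAGGED_PER_SESSION = 3   # cap on borderline items served per session
MINOR_HARM_THRESHOLD = 0.4    # score above which content is withheld from minors

@dataclass
class Session:
    user_is_minor: bool
    flagged_served: int = 0  # borderline items already shown this session

def may_recommend(session: Session, harm_score: float) -> bool:
    """Decide whether the feed may serve an item, given a harm score in
    [0, 1] produced upstream (e.g., by a classifier or human review)."""
    if session.user_is_minor and harm_score >= MINOR_HARM_THRESHOLD:
        return False  # age-inappropriate: never shown to minors
    if harm_score > 0:
        # Throttle repeated exposure: once the per-session cap is hit,
        # stop serving borderline items to curb volume-driven amplification.
        if session.flagged_served >= MAX_FLAGGED_PER_SESSION:
            return False
        session.flagged_served += 1
    return True
```

A real platform would pair a check like this with logging, so that the annual report can evidence how often algorithmic exposure was curtailed.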

Companies that breach the Online Safety Act can be fined up to £18 million or 10% of their global revenue, whichever is higher. Senior managers who fail to comply with Ofcom’s information requests could face criminal charges. In severe cases, Ofcom can apply to the courts for orders requiring payment providers, advertisers, and internet services to stop working with a site, preventing it from making money or being accessed in the UK.
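The “whichever is higher” rule is simple arithmetic, as the sketch below shows; the revenue figure is hypothetical.

```python
def max_fine_gbp(global_revenue_gbp: int) -> float:
    """Upper bound on an Online Safety Act fine: the greater of
    £18 million or 10% of global revenue."""
    return max(18_000_000, global_revenue_gbp / 10)

# For a firm with £1bn in global revenue, the cap is £100m, not £18m.
assert max_fine_gbp(1_000_000_000) == 100_000_000
```

For any company with less than £180 million in global revenue, the flat £18 million figure is the binding cap.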

Challenges Faced by Companies
The UK’s Online Safety Act, enacted in 2023, gave companies a transition period to prepare for its enforcement, which began on 17 March 2025. During this time, organizations were expected to align their operations with the Act’s requirements to mitigate risks associated with illegal online content. While major tech companies have taken steps toward compliance, preparedness varies widely across the industry. Despite the transition period, smaller platforms have struggled with resource constraints that make compliance particularly challenging; the London Fixed Gear and Single-Speed (LFGSS) forum, for example, announced it would shut down rather than shoulder the compliance burden.

How Are Various Governments Ensuring Digital Safety?
Several countries have implemented online safety laws similar to the UK’s Online Safety Act:

• Australia: In 2021, Australia passed its own Online Safety Act, mandating that digital platforms take reasonable measures to remove illegal or age-inappropriate content.
• European Union: The EU’s Digital Services Act (DSA) requires online platforms to identify and reduce risks related to illegal and harmful content, with goals that closely parallel the UK’s safety standards.
• Sri Lanka: The country’s Online Safety Act, passed in January 2024, regulates online content and aims to improve user protection, reflecting a similar approach to the UK’s legislation.

These examples reflect a global trend toward enhancing online safety and holding digital platforms accountable for user protection.

Broader Potential Impact
The Online Safety Act aims to create a safer digital environment by holding online platforms accountable for harmful and illegal content. While its intentions are clear, the Act presents several operational, compliance, and strategic challenges that companies must navigate:

  1. Operational and Compliance Challenges
    • High financial and technical burdens, particularly for smaller platforms, to implement safety measures like content moderation and reporting.
    • Financial penalties for non-compliance can reach up to £18 million or 10% of global revenue, posing severe financial risks.
    • High compliance costs may force smaller and niche platforms to close, reducing the diversity of the online ecosystem.
  2. Privacy and Encryption Concerns
    • End-to-end encryption may be weakened, raising privacy concerns and making platforms more vulnerable to cyberattacks.
    • Balancing safety improvements with data protection remains challenging.
    • If encryption backdoors are exploited, users’ private data could be at risk.
  3. AI-Related Risks
    The Act indirectly introduces AI-related risks as platforms increasingly rely on automated systems for content moderation and risk assessment; a minimal illustration follows this list. Some of the key challenges include:
    • Excessive Blocking: Automated moderation may block legitimate content, stifling freedom of expression.
    • Algorithmic Bias: AI systems may disproportionately target specific groups, driven by biased data.
    • Opaque Decision-Making: Complex algorithms used for moderation might lack transparency, making it hard to ensure fairness and accountability.
    • Privacy Issues: Enhanced content monitoring could lead to intrusive data collection.
    • Context Misinterpretation: Automated moderation might misinterpret satire, sarcasm, or cultural references, leading to unjust content removal.
  4. Impact on Innovation and Platform Diversity
    • Smaller platforms may shut down or reduce services due to compliance costs, limiting diversity in online communities.
    • Dominance by major tech companies may increase as they are better positioned to meet compliance requirements.
  5. Reputational Risk
    • Platforms that fail to protect users from harmful content may face backlash, eroding brand reputation and user trust.
    • Excessive content filtering could lead to accusations of censorship or bias.
  6. Financial and Legal Risk
    • Non-compliance can result in hefty fines and legal disputes.
    • Potential class action lawsuits from users affected by data breaches or harmful content.
  7. Global Influence and Regulatory Fragmentation
    • The Act could encourage other countries to adopt similar regulations, creating a fragmented global regulatory landscape.
    • Companies operating globally may face challenges in harmonizing compliance strategies across jurisdictions.
  8. Cybersecurity Risk
    • Backdoor access for monitoring could expose platforms to cyberattacks, putting user data at risk.
    • Greater risk of mass data leaks, damaging user trust and platform credibility.
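As a minimal illustration of the AI-related risks above, the toy filter below shows why crude automated moderation over-blocks. The blocklist and examples are hypothetical; real systems are far more sophisticated, but insensitivity to context is the same failure mode behind the bias and satire problems listed in item 3.

```python
import re

# Deliberately naive keyword filter; the blocklist is hypothetical.
BLOCKLIST = re.compile(r"\b(attack|kill|bomb)\b", re.IGNORECASE)

def is_blocked(text: str) -> bool:
    """Block any post containing a blocklisted word, regardless of context."""
    return bool(BLOCKLIST.search(text))

# A genuinely threatening post is caught...
assert is_blocked("we will attack them at dawn")
# ...but so are idiom and news reporting (false positives):
assert is_blocked("the critics will kill me for this review")
assert is_blocked("report on the safe defusal of a WWII bomb")
```

Production moderation replaces keywords with learned classifiers, but the underlying trade-off between catching harm and over-removing lawful speech persists, which is why transparency and appeal routes matter.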

The Road Ahead
The UK’s Online Safety Act represents a significant shift toward greater accountability and user protection in the digital space. While it aims to create a safer online environment, the law poses numerous challenges for companies, particularly smaller platforms, as they navigate compliance requirements and balance safety with privacy and innovation.

As more countries adopt similar legislation, businesses must proactively develop robust risk management strategies to address compliance, operational, and cybersecurity risks. Navigating the evolving regulatory landscape requires not only technical adaptations but also a careful consideration of the broader social and ethical implications. Ultimately, the success of the Online Safety Act will depend on achieving a delicate balance between safeguarding users and preserving fundamental digital rights.