
SAFE Newsfeed

Former surgeon jailed for five and a half years after admitting offences including assault occasioning actual bodily harm and child cruelty during multiple male circumcision operations

Source: Crown Prosecution Service (CPS) published on this website Friday 17 January 2025 by Jill Powell

A former surgeon was jailed for a combined total of more than five and a half years (67 months) on Wednesday 15 January at Inner London Crown Court, having admitted assault occasioning actual bodily harm, child cruelty and administering a prescription-only medicine to several young and vulnerable patients, whilst ignoring basic hygiene rules and performing non-therapeutic male circumcisions.

Dr Mohammad Siddiqui, 58, from Birmingham, pleaded guilty at Southwark Crown Court on 29 October 2024 to a total of 25 offences: 11 counts of assault occasioning actual bodily harm, 6 counts of cruelty to a child and 8 counts of administering prescription-only medicines contrary to the law. The prosecution was brought because the methods Siddiqui used in private residences between 2014 and 2018 showed a complete disregard for patient health, safety and comfort.

Between June 2012 and November 2013, Dr Siddiqui provided a private mobile circumcision service whilst working as a clinical fellow in paediatric surgery at University Hospital Southampton NHS Foundation Trust. In this capacity he was able to source the anaesthetic Bupivacaine Hydrochloride, which is a prescription-only medication.

In 2015 Siddiqui was ‘struck off’ the General Medical Council Register after a panel of the Medical Practitioners Tribunal Service found him guilty of failures in performing non-therapeutic male circumcisions in the homes of four babies.

Despite having been ‘struck off’, Dr Siddiqui continued to promote and provide a mobile circumcision service. No longer considered a ‘Health Care Professional’, he was able to do so because non-therapeutic male circumcision is unregulated, with no requirement that it be carried out by a medical practitioner. Dr Siddiqui continued to use Bupivacaine Hydrochloride and to carry out circumcisions in unsafe, unsanitary and harmful ways. He advertised his services across the United Kingdom and, by appointment, performed non-therapeutic male circumcisions on young patients up to the age of 14 in their homes.

A serious organised crime prevention order, sought by the Crown Prosecution Service, was granted to prevent Dr Siddiqui from undertaking non-therapeutic circumcisions following his release from custody. Without such an order, and in the absence of any licensing requirement, he could resume these activities, so the order will be significant in safeguarding children in the future.

Age checks to protect children online

Source: Ofcom published on this website Thursday 16 January 2025 by Jill Powell

Children will be prevented from encountering online pornography and protected from other types of harmful content under Ofcom’s new industry guidance which sets out how we expect sites and apps to introduce highly effective age assurance.

Today’s decisions are the next step in Ofcom implementing the Online Safety Act and creating a safer life online for people in the UK, particularly children. It follows tough industry standards, announced last month, to tackle illegal content online, and comes ahead of broader protection of children measures which will launch in the Spring.

Robust age checks are a cornerstone of the Online Safety Act. It requires services which allow pornography or certain other types of harmful content to introduce ‘age assurance’ to ensure that children are not normally able to encounter it. Age assurance methods – which include age verification, age estimation or a combination of both – must be ‘highly effective’ at correctly determining whether a particular user is a child.

We have today published industry guidance on how we expect age assurance to be implemented in practice for it to be considered highly effective. Our approach is designed to be flexible, tech-neutral and future-proof. It also allows space for innovation in age assurance, which represents an important part of a wider safety tech sector where the UK is a global leader. We expect the approach to be applied consistently across all parts of the online safety regime over time.

While providing strong protections to children, our approach also takes care to ensure that privacy rights are protected and that adults can still access legal pornography. As platforms take action to introduce age assurance over the next six months, adults will start to notice changes in how they access certain online services. Our evidence suggests that the vast majority of adults (80%) are broadly supportive of age assurance measures to prevent children from encountering online pornography.

What are online services required to do, and by when?

The Online Safety Act divides online services into different categories with distinct routes to implement age checks. However, the action we expect all of them to take starts from today:

  • Requirement to carry out a children’s access assessment. All user-to-user and search services in scope of the Act – defined as ‘Part 3’ services – must carry out a children’s access assessment to establish if their service – or part of their service – is likely to be accessed by children. From today, these services have three months to complete their children’s access assessments, in line with our guidance, with a final deadline of 16 April. Unless they are already using highly effective age assurance and can evidence this, we anticipate that most of these services will need to conclude that they are likely to be accessed by children within the meaning of the Act. Services that fall into this category must comply with the children’s risk assessment duties and the children’s safety duties.
  • Measures to protect children on social media and other user-to-user services. We will publish our Protection of Children Codes and children’s risk assessment guidance in April 2025. This means that services that are likely to be accessed by children will need to conduct a children’s risk assessment by July 2025 – that is, within three months. Following this, they will need to implement measures to protect children on their services, in line with our Protection of Children Codes, to address the risks of harm identified. These measures may include introducing age checks to determine which of their users are under 18 and protect them from harmful content.
  • Services that allow pornography must introduce processes to check the age of users: all services which allow pornography must have highly effective age assurance processes in place by July 2025 at the latest to protect children from encountering it. The Act imposes different deadlines on different types of providers. Services that publish their own pornographic content (defined as ‘Part 5’ services), including certain generative AI tools, must begin taking steps immediately to introduce robust age checks, in line with our published guidance. Services that allow user-generated pornographic content – which fall under ‘Part 3’ services – must have fully implemented age checks by July.

What does highly effective age assurance mean?

Our approach to highly effective age assurance and how we expect it to be implemented in practice applies consistently across three pieces of industry guidance, published today. Our final position, in summary:

  • confirms that any age-checking methods deployed by services must be technically accurate, robust, reliable and fair in order to be considered highly effective;
  • sets out a non-exhaustive list of methods that we consider are capable of being highly effective. They include: open banking, photo ID matching, facial age estimation, mobile network operator age checks, credit card checks, digital identity services and email-based age estimation;
  • confirms that methods including self-declaration of age and online payments which don’t require a person to be 18 are not highly effective;
  • stipulates that pornographic content must not be visible to users before, or during, the process of completing an age check. Nor should services host or permit content that directs or encourages users to attempt to circumvent an age assurance process; and
  • sets expectations that sites and apps consider the interests of all users when implementing age assurance – affording strong protection to children, while taking care that privacy rights are respected and adults can still access legal pornography.

We consider this approach will secure the best outcomes for the protection of children online in the early years of the Act being in force. While we have decided not to introduce numerical thresholds for highly effective age assurance at this stage (e.g. 99% accuracy), we acknowledge that numerical thresholds may complement our four criteria in the future, pending further developments in testing methodologies, industry standards, and independent research.

Opening a new enforcement programme

We expect all services to take a proactive approach to compliance and meet their respective implementation deadlines. Today Ofcom is opening an age assurance enforcement programme, focusing our attention first on Part 5 services that display or publish their own pornographic content.

We will contact a range of adult services – large and small – to advise them of their new obligations. We will not hesitate to take action and launch investigations against services that do not engage or ultimately comply.

First codes of practice and guidance published on new duties for tech firms under the UK’s Online Safety Act

Source: Ofcom published on this site Wednesday 18 December 2024 by Jill Powell

Ofcom has, four months ahead of the statutory deadline, published its first-edition codes of practice and guidance on tackling illegal harms – such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide – under the UK’s Online Safety Act.

The Act places new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites. Before Ofcom can enforce these duties, it is required to produce codes of practice and industry guidance, following a period of public consultation, to help firms comply.

Ofcom has consulted carefully and widely to inform its final decisions, listening to civil society, charities and campaigners, parents and children, the tech industry, and expert bodies and law enforcement agencies, with over 200 responses submitted to its consultation.

As an evidence-based regulator, Ofcom has carefully considered every response, alongside cutting-edge research and analysis, and has strengthened some areas of the codes since its initial consultation. The result is a set of measures – many of which are not currently being used by the largest and riskiest platforms – that will significantly improve safety for all users, especially children.

What regulation will deliver

The illegal harms codes and guidance mark a major milestone in creating a safer life online, firing the starting gun on the first set of duties for tech companies. Every site and app in scope of the new laws has from today until 16 March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.

Subject to the codes completing the Parliamentary process by this date, from 17 March 2025 sites and apps will then need to start implementing safety measures to mitigate those risks, and the codes set out measures they can take. Some of these measures apply to all sites and apps, and others to larger or riskier platforms. The most important changes Ofcom expects the codes and guidance to deliver include:

  • Senior accountability for safety. To ensure strict accountability, each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties. 
  • Better moderation, easier reporting and built-in safety tests. Tech firms will need to make sure their moderation teams are appropriately resourced and trained and are set robust performance targets, so they can remove illegal material quickly when they become aware of it, such as illegal suicide content. Reporting and complaints functions will be easier to find and use, with appropriate action taken in response. Relevant providers will also need to improve the testing of their algorithms to make illegal content harder to disseminate. 
  • Protecting children from sexual abuse and exploitation online. While developing the codes and guidance, Ofcom heard from thousands of children and parents about their online experiences, as well as professionals who work with them. New research, published today, also highlights children’s experiences of sexualised messages online, as well as teenage children’s views on our proposed safety measures aimed at preventing adult predators from grooming and sexually abusing children. Many young people we spoke to felt interactions with strangers, including adults or users perceived to be adults, are currently an inevitable part of being online, and they described becoming ‘desensitised’ to receiving sexualised messages.

Taking these unique insights into account, our final measures are explicitly designed to tackle pathways to online grooming. This will mean that, by default, on platforms where users connect with each other, children’s profiles and locations – as well as friends and connections – should not be visible to other users, and non-connected accounts should not be able to send them direct messages. Children should also receive information to help them make informed decisions around the risks of sharing personal information, and they should not appear in lists of people users might wish to add to their network.

Action Fraud issues new alert warning people to look out for unusual messages or phishing emails from hotel accounts using the Booking.com platform

Source: Action Fraud published on this website Wednesday 15 January 2025 by Jill Powell

Those using the platform Booking.com to book their holidays or accommodation are being warned they could be targeted with emails or messages requesting payment, sent from the accounts of hotels that have been taken over by fraudsters. Between June 2023 and September 2024, Action Fraud received 532 reports from individuals, with a total of £370,000 lost.

Insight from Action Fraud reports suggests the individuals were defrauded after receiving unexpected messages and emails from a Booking.com account belonging to a hotel they had a reservation with, which had been taken over by a criminal. Using this account, the criminals sent in-app messages, emails and WhatsApp messages to customers, deceiving them into making payments and/or requesting their credit card details.

The specific account takeovers are likely to be the result of a targeted phishing attack against the hotel or accommodation provider, and not Booking.com’s backend system or infrastructure.

Booking.com and Action Fraud are providing the following advice on how to spot signs of fraud and protect your Booking.com account:

  • No legitimate Booking.com transaction will ever require a customer to provide their credit card details by phone, email, or text message (including WhatsApp).
    • Sometimes a hotel provider will manage their own payment and may reach out to request payment information, like credit card details. Before providing any information, always verify that the communication genuinely comes from the hotel’s account.
  • If you receive any urgent payment requests that require immediate attention, like a booking cancellation, immediately reach out to the Booking.com Customer Service team via the details on the official Booking.com website and/or app to confirm.
    • Any payment requests that do not match the information in the original booking confirmation should also be double checked and confirmed with Booking.com Customer Service before proceeding. 
  • Any messages purporting to be from Booking.com that contain instructions to follow links and/or open/download files should be treated with caution.
    • If you have any doubts about a message, contact Booking.com directly. Don’t use the numbers or addresses in the suspicious message; use the details from the official Booking.com website instead.
  • For more information about how to protect your Booking.com account, please visit: Safety Tips for Travellers | Booking.com

If you receive any suspicious emails or text messages, report them by forwarding emails to: report@phishing.gov.uk, or texts to 7726.

Find out how to protect yourself from fraud: https://stopthinkfraud.campaign.gov.uk

If you’ve lost money or provided financial information as a result of any phishing scam, notify your bank immediately and report it to Action Fraud at  https://www.actionfraud.police.uk/report-phishing or by calling 0300 123 2040. In Scotland, call Police Scotland on 101.

Prosecutors publish updated ‘deception as to sex’ guidance

Source: Crown Prosecution Service (CPS) published on this website Monday 16 December 2024 by Jill Powell

Updated prosecution guidance, which clarifies the law on when deceiving someone or failing to disclose birth sex could affect consent in rape cases, was published on 16 December by the Crown Prosecution Service.

The new deception as to sex guidance has been updated to assist prosecutors in their decision making in this complex area of law.

The law, which the guidance reflects, states there is no difference between a deliberate deception about birth sex and a failure to disclose birth sex.

Central to the update, the guidance makes clear:

  • In line with the law on consent – charges will depend on whether a victim was aware of the person’s birth sex and therefore consented to sexual activity by choice. The suspect must also have reasonably believed consent had been given.
  • It also clarifies that a suspect may deceive a complainant as to their birth sex if they choose not to disclose their sex or trans identity, and that there is no expectation for a complainant to confirm the sex of the defendant prior to sexual activity.
  • Not every situation where a trans or non-binary person fails to disclose their sex will involve a criminal offence – each will be assessed on a case-by-case basis.

Prosecutors are given guidance on the evidential considerations to be applied in these cases, including where the suspect is trans or non-binary, as well as relevant case law and an explanation of how a failure to disclose sex could remove consent.

Siobhan Blake, Chief Crown Prosecutor and national lead for rape and serious sexual offences, said:

“We recognise this is a highly sensitive area of law – it is important our guidance provides prosecutors with the knowledge they need to make decisions in the rare cases where deception as to sex may have occurred. Importantly, this guidance also clarifies the law where suspects are non-trans, such as females pretending to be male and vice versa. Every prosecutor has a duty to act with impartiality, each case is always assessed on its individual merits, so we make fair and objective decisions.”

To enable prosecutors to make informed decisions, the updated guidance includes background information on trans and non-binary persons. There have also been revisions to the language used in the guidance, so it better reflects current social terminology.  

The CPS has updated the title of the guidance to more accurately reflect that this part of the law is based on a person’s sex, rather than gender identity.

The new guidance forms part of the Rape and Sexual Offences prosecution guidance.