Apple has begun asking iPhone and iPad users in the UK to confirm they are 18 or older as part of the iOS 26.4 update — a device-level age check that can require scanning an ID or supplying a credit card. The prompt appears once the update is installed and, if a user doesn’t verify (or is underage), Apple will automatically enable child-safety restrictions such as blocking adult web content and blurring messages that contain nudity.
What Apple is asking for — and how it verifies
According to Apple’s support guidance, adults can confirm their age by scanning a driving licence or national ID, or by using a credit card on file (Apple’s guidance specifies credit cards rather than debit cards). If someone already has an Apple account, the company may also use an existing payment method or even the length of time the account has been active to confirm age.
The company says children under 13 won’t be able to create an account without a guardian. For those who decline or can’t verify, the device switches on web content filters and other protections automatically — including the message- and FaceTime-related checks Apple uses to identify nude imagery.
iOS 26.4 doesn’t only bring age checks: the update also introduces a few consumer features, from AI-generated playlists to new emoji. (If you want the changelog, see our coverage of the iOS 26.4 release: /news/ios-26-4-playlist-playground-emojis.)
Why regulators cheered — and privacy campaigners pushed back
Ofcom, the UK communications regulator, welcomed Apple’s step as “a real win for children and families,” noting it aligns with the broader push to keep young people away from harmful online content after the Online Safety Act reforms. The law already forces certain websites and platforms to implement age checks (for example, sites hosting pornographic content), although app stores and device-level controls sit in a more complicated regulatory spot.
Critics, however, see risks. Silkie Carlo of Big Brother Watch described the change as a heavy-handed move that threatens privacy and argued it effectively forces millions of adults to hand over ID or credit-card details to regain full access. Others on social platforms and Reddit have said they plan to try to bypass the checks and voiced concern about how sensitive verification data will be stored and protected.
Concerns aren’t purely theoretical: age-verification systems collect identity documents and payment details that could be exposed if mishandled. That risk has been a central part of the debate since the UK began pressing online services to tighten age assurance.
A device-level choice, not just a legal requirement
The UK government’s Online Safety Act drove much of the momentum to require age verification on services that host adult content, but app stores and operating systems haven’t been explicitly covered in the same way. Apple’s move is notable because it implements checks at the device and account level rather than waiting for each site or app to build its own gate.
Regulators have signalled they’ll assess how effective and appropriate such approaches are — Ofcom said it will monitor how app stores are used by children and evaluate age assurance methods — but for now Apple is treating the UK as one of the first markets to receive these device protections.
A messy rollout detail (and why you might have seen a confusing prompt)
When the iOS 26.4 beta appeared last month, some UK users saw a prompt suggesting they might not be able to download apps without verifying their age. Apple later said that specific message — implying age verification was required to download apps — was displayed in error. The company’s current public guidance clarifies which services and actions require confirmation, though it hasn’t published an exhaustive list of every feature that will be restricted without verification.
The larger context: government experiments and wider industry moves
This update lands as the UK government tests other measures aimed at reducing young people’s exposure to harmful or addictive services — including trials that will disable or limit social apps for groups of teens, and consultations on whether to tighten age limits for social media access. At the same time, firms and campaigners debate how to balance child safety with privacy and freedom of expression.
Apple’s age checks are only one piece of that puzzle. They interact with security work happening elsewhere in iOS — Apple has been quietly patching web and browser issues in recent releases — and with broader feature rollouts in iOS 26 that aim to change how people use their devices day to day. You can read more about Apple’s recent security fixes (/news/background-security-update-apple) and the wider iOS 26 changes (/news/ios-26-features-ai-personal-voice-updates) in our related pieces.
Whether users see this as a sensible nudge toward safety or an unwelcome demand for more personal data will play out in courtrooms, regulatory reports, and in everyday behaviour — people either complying with the prompt, seeking alternatives, or pushing for clearer rules about what tech firms can demand from their customers. For now, the next time an iPhone in the UK asks to see ID, it’s not a scam: it’s Apple trying to square device control with a country intent on tighter online protections.