I don’t like to shill for companies, but I’m glad System76 made a statement. The addendum does feel like their legal team made them add it though:
> Some of these laws impose requirements on System76 and Linux distributions in general. The California law, and Colorado law modeled after it, were agreed in concert with major operating system providers. Should this method of age attestation become the standard, apps and websites will not assume liability when a signal is not provided and assume the lowest age bracket. Any Linux distribution that does not provide an age bracket signal will result in a nerfed internet for their users.
> We are accustomed to adding operating system features to comply with laws. Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional.
Anyways, it feels like all sides of the political spectrum are trying to strip away any semblance of anonymity or privacy online both in the US and abroad. No one should have to provide any personal details to use any general computing device. Otherwise, given the pervasive tracking done by corporations and the rise of constant surveillance outdoors, there will be nowhere for people to safely gather and express themselves freely and privately.
> No one should have to provide any personal details to use any general computing device
I agree. I also agree with S76 that some laws regarding how an operating system intended for wide use should function are acceptable. How would you react to this law if the requirement was only that the operating system had to ask the user what age bracket it should report to sites? You get to pick it, it isn't mandatory that it be checked, and it doesn't need to be a date, just the bucket. Is that still too onerous?
I ask because I feel like if we don't do something, the trajectory is that ~every website and app is going to either voluntarily or compulsorily do face scans, AI behavior analysis, and ID checks for their users, and I really don't want to live in that world.
The main problem with the "report your age to the website" proposals is that they're backwards. You shouldn't be leaking your age to the service.
Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it.
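To make that concrete, here is a minimal sketch of what client-side enforcement could look like. The `Content-Rating` header name and the rating values are made up for illustration; nothing like this is currently standardized.

```python
# Hypothetical sketch: the server labels its content, the client enforces.
# "Content-Rating" is an illustrative header name, not an existing standard.

ADULT_RATINGS = {"adult", "mature"}

def should_display(response_headers: dict, device_is_child: bool) -> bool:
    """Client-side check: hide adult-rated content on a child-configured device."""
    rating = response_headers.get("Content-Rating", "").lower()
    if device_is_child and rating in ADULT_RATINGS:
        return False
    return True

# A parent configures the device once; no age ever leaves it.
print(should_display({"Content-Rating": "adult"}, device_is_child=True))   # False
print(should_display({"Content-Rating": "general"}, device_is_child=True)) # True
```

The key property is that the age signal never crosses the network: the label travels from server to client, and the decision is made locally.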
It may often be trickier than that: content is often mixed, of course. My 10 y/o hit me with a request yesterday to play Among Us, where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused.
There's a good chance that they're never going to verify any of the information you give them, in which case it's another download for Mr M Mouse of 1375 E Buena Vista Dr, 32830, with a SSN that ends in 1234.
I made the mistake of providing my date of birth as being 1/1/1900 on multiple websites, and have been receiving marketing material from the AARP in the mail for many years.
I disagree. Giving fake info adds noise to the mechanism, making it useless. Ultimately I'm inclined to believe that privacy through noise generation is a solution.
If I ever find some idle time, I'd like to make an agent that surfs the web under my identity and several fake ones, but randomly according to several fake personality traits I program. Then, after some testing and analysis of the generated patterns of crawl, release it as freeware to allow anyone to participate in the obfuscation of individuals' behaviors.
> You might want to take a look at differential privacy
Differential privacy is just a bait to make surveillance more socially acceptable and to have arguments to silence critics ("no need to worry about the dangers - we have differential privacy"). :-(
> My 10 y/o hit me with a request yesterday to play Among Us where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused.
The bad actor still gets ROI, eg 'paid', for another bit of user data.
Making the overall system less useful is good. However, what's paramount is not allowing a company to profit, and giving fake info still allows for that. E.g., even with fake info, many metrics on a phone are still gamed and profitable.
That's why they're collected, after all. For profit.
Awesome. Now you have a system where every blog entry, every Facebook post needs a lawyer consultation.
Around 20 years ago, Germany actually made a law that would have enforced such a system. I still have a chart on my blog that explains it: https://www.onli-blogging.de/1026/JMStV-kurz-erklaert.html. Content for people over 16 would have to be marked accordingly or be put offline before 22:00, plus, if your site has a commercial character - which according to German courts is every single one in existence - you would need to hire someone responsible for protecting teenagers and children (a Jugendschutzbeauftragter).
Result: It was seen as a big censorship machine, and I saw many sites and blogs shut down. You can arguably hold that law partly responsible for how far behind German internet enterprises still are. Only a particular kind of bureaucrat wants to do business in an environment that makes laws such as this.
Later the law wasn't actually enforced. Only state media still has a system that blocks films for adults (= basically every action movie) from being accessed without age verification before 22:00.
> Now you have a system where every blog entry, every Facebook post needs a lawyer consultation.
You have that with any form of any of these things. They're almost certainly going to be set up so that you get in trouble for claiming that adult content isn't but not for having non-adult content behind the adult content tag.
Then you would be able to avoid legal questions by labeling your whole site as adult content, with the obvious drawback that then your whole site is labeled as adult content even though most of it isn't.
But using ID requirements instead doesn't get you out of that. You'd still need to either identify which content requires someone to provide an ID before they can view it, or ID everyone.
That's an argument for not doing any of these things, but not an argument for having ID requirements instead of content tags.
Funnily enough, marking content that's harmless as only for adults was also punishable, though that might have been in context of a different law. That would be censorship, blocking people under 18 from accessing legal content, was the reasoning. Welcome to German bureaucracy.
But you are right. It's an argument that the "just mark content accordingly" is also not a better solution, not that ID requirements are in any way better. The only solution is not to enable this censorship infrastructure, because no matter which way it's done, it will always function as one.
> Funnily enough, marking content that's harmless as only for adults was also punishable, though that might have been in context of a different law. That would be censorship, blocking people under 18 from accessing legal content, was the reasoning. Welcome to German bureaucracy.
That's how you get the thing where instead of using different equipment to process the food with and without sesame seeds, they just put sesame seeds in everything on purpose so they can accurately label them as containing sesame seeds.
I understand they can't say "contains sesame seeds" if it doesn't, but why can't they say "processed on equipment that also processes sesame seeds" like some packages do?
> plus, if your site has a commercial character - which according to german courts is every single one in existence - you would need to hire a someone responsible for protecting teenagers and children (Jugenschutzbeauftragten).
> Awesome. Now you have a system where every blog entry, every Facebook post needs a lawyer consultation.
The alternative is that "just to be safe" you'll mark your entire site as needing age (identity, stool sample, whatever) verification. A single piece of sensitive content sets the requirements for the entire site.
I would assume it's fake and an attempt at identity theft at some level of the system. Is their PC infected at the OS level, or is it just a fraudulent browser extension, or something more like a popup ad masquerading as a system dialog? A more trusting person would assume any request made by a computer is totally non-fraudulent and would gladly submit any requested private information.
"Dad, I can't do my math homework, a pop up says you need to provide a copy of your bank statement, your mom's maiden name, and a copy of your birth certificate, SS card, and drivers license, and can you hurry up Dad, my homework is due tomorrow morning." And people will fall for this once they get used to the system being absurd enough.
It feels to me that parental controls are seen as another profit centre. If we want to put laws in place, we should be putting in laws to empower parents.
Heh, that's already what parental controls do (granted, the websites don't report the content, and it's based on blacklists), but they are trivial to bypass. Even the article mentions it:
> The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over
It's precisely how I worked around the parental controls my parents put on my computer when I was ~12. Get VirtualBox, get a Kubuntu ISO, and voilà! The funniest part is, I did not want to access adult content, but the software had The Pirate Bay on its blacklist, which I did want.
In the end, I proudly showed them (look ma!), and they promptly removed the control from the computer, as you can't fight a motivated kid.
That's assuming the parental controls allow the kid to create a virtual machine. And then that the kid knows how to create a virtual machine, which is already at the level of difficulty of getting the high school senior who is already 18 to loan you their ID.
None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest.
We could argue about the technical feasibility all day, as non-KVM qemu does not need any special permission to run a VM (albeit dog slow).
I honestly don't really agree on the difficulty: if this becomes a commonplace way to bypass such laws, you can expect TikTok to be full of videos about how to do it. People will provide already-installed VMs as a turnkey solution. It's not unlike how generations of kids playing Minecraft learnt how to port forward and how to install VPNs for non-alleged-privacy reasons: something that was considered out of a kid's reach became a commodity.
> None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest.
On that we agree, and it makes me sad. The gap between the computer literate and illiterate will only widen as time passes. Unmotivated kids will learn less, and motivated ones will get a kickstart by going around the locks.
> We could argue about the technical feasibility all day, as non-KVM qemu does not need any special permission to run a VM (albeit dog slow).
That's assuming the permission is for "use of kernel-mode hardware virtualization" rather than "installation of virtualization apps".
Notice that if the kid can run arbitrary code then any of this was already a moot point because then they can already access websites in other countries that don't enforce any of this stuff.
I promise there are people who can't figure out how to do it.
And again, the point of the lock on the door where you keep the porn is not to be robustly impenetrable to entry by a motivated 16 year old with a sledgehammer, it's only to make it obvious that they're not intended to go in there.
Depends on how much people want the hidden content. People in Eastern Europe, regular people, not tech whiz kids, know how to use torrents and know about seed ratios etc. At least it was so ca. 5 years ago. People can learn when the thing matters to them.
Regular people want to get things done, the tinkering is not a goal for them in itself and they gravitate to simple and convenient ways of achieving things, and don't care about abstract principles like open source or tech advantages or what they see as tinfoil hat stuff. But if they want to see their favorite TV series or movie, they will jump through hoops. Similarly for this case.
It might be Fort Knox just fine at some point, when computers will require a cryptographically signed government certificate that you're over 18, and you can't use the computer until you provide it.
a kid who can install Linux, or set up an ssh tunnel to a seedbox, is a kid who doesn't need to be told by the government what he or she should be watching
I'd actually argue that's exactly the kid who the government is there to tell them what they shouldn't be watching. The government is never really there to restrict the incompetent, they are pretty good at doing that themselves.
There's an ocean of difference between your device changing behavior based on a flag set by individual sites and your device using a blacklist set by some list maintainer - the main difference being that the latter is utterly useless due to being an example of badness enumeration.
> Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it.
That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run? Then specifically design your telemetry to avoid logging which version is running?
You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag. That's hardly PII. More like a dark/light mode preference, or your language settings (which your browser does send).
> That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run?
Suppose you had an ID requirement instead. Are you going to make two different versions of your game or website, one for people who show ID and another for people who don't? If so, do the same thing. If not, then you have one version and it's either for adults only or it isn't.
> You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag.
Except that you essentially are reporting your age, because when you turn 18 the flag changes, which is a pretty strong signal that you just turned 18 and once they deduce your age they can calculate it going forward indefinitely.
This is even worse if it's an automated system, because then the flag changes exactly when you turn 18, down to the day, which by itself is ~14 bits of entropy towards uniquely identifying you, and in a city of 100,000 people they only need ~17 bits in total.
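For the curious, the bit counts above can be checked with quick back-of-envelope arithmetic. The ~45-year span of plausible birth years is my own assumption, picked to show where a figure near 14 bits comes from.

```python
import math

# Singling one person out of a city of 100,000 needs log2(100,000) bits.
bits_to_single_out = math.log2(100_000)

# Knowing the exact day the flag flipped reveals the exact birth date.
# Assuming ~45 plausible birth years, that's log2(365 * 45) bits.
bits_from_birthdate = math.log2(365 * 45)

print(round(bits_to_single_out, 1))   # ~16.6
print(round(bits_from_birthdate, 1))  # ~14.0
```

So the flip date alone gets a tracker most of the way to a unique identifier; a couple of other weak signals (coarse location, browser fingerprint) easily cover the remainder.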
The shifts between flags will correlate with date of birth though, or do you think someone turning 16 or 18 will wait a year or two to switch to more adult content for privacy? Also I'd guess the tech industry would push for more specific age buckets.
Games already have PG ratings and similar in different countries; I don't see the issue there. Web content could set an age-appropriateness header and let browsers deal with it, either for specific content or for the whole website if it relies on e.g. addictive mechanics.
Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work.
> Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work.
Sure. Take a game with voice chat. Child mode disables voice chat. How does the game, which presumably uses a load of telemetry, avoid incidentally leaking which users are children via the lack of voice telemetry data coming from the client? It's probably possible, but the fact is we're talking about third party code running on a computer, and the computer running different code paths based on some value. The third party code knows that value, and if it has internet access can exfiltrate it. In that sense, if there's an internet connection, there's not a meaningful difference between "the OS tells the service/app your age rating preference" and "the OS changes what it displays based on your age rating preference."
Though while I'm throwing out fantasy policies we could solve this by banning pervasive surveillance outright.
Windows already allows this. Content can be set based on age in Microsoft Family. Set an age on a user's account and MS curates the store experience, regardless of which computer the user is logged into.
It's necessary if the page contains mixed content. Under your proposal, Google Search would need a separate search page that shows adult content, and that would be even worse for privacy - logs would show whether you accessed the adult search page - and adult sites (not only porn) would try quite hard to not be relegated to that second, less discoverable, search page.
What you're describing with Google Search already exists, search engines already offer their own search settings including "safe search" or whatever they call it which filters out adult images.
Services can absolutely decide to provide their own content settings. It doesn't require a universal setting or OS requirements, and it doesn't require providing PII to every website or telling a central authority every site you visit.
Who decides the 'nature' of the content? Who decides what constitutes age appropriate?
These questions of liberty are as old as the hills. And the keepers of the internet and virtually every single government past and present have repeatedly and endlessly shown themselves to be lying, conniving, self interested parties. When will 'we' ever learn?
Exactly. Except this way you can't build a complete biometric database of all citizens! Since it's so obvious how to do it correctly without creating such a database, one could assume the creation of such a database is the actual goal.
> I also agree with S76 that some laws regarding how an operating system intended for wide use should function are acceptable.
The only laws the government should pass regulating software running on someones computer are laws protecting those consumers from the companies writing that software. For example, anti-malware/anti-spyware.
The government has no business telling a random company that their software needs to report my age, whether it's unverified and self-reported or not.
I think a better approach would be incentives versus punishments.
Like - you don't make it illegal to not do age attestations, but you provide a mechanism to encourage it.
You get a certification you can slap on your website and devices stating you meet the requirements of a California Family-Friendly Operating System or whatever. Maybe that comes with some kind of tax break, maybe it provides absolution in the case of some law being broken while using your OS, maybe it just means you get listed on a website of state-recommended operating systems.
That certification wouldn't necessarily have to deal with age attestation at all. It could just mean the device/OS has features for parents - built-in website filtering, whatever restrictions they need. Parents could see the label and go "great, this label tells me I can set this up in a kid-safe way."
Hell, maybe it is all about age certification/attestation. Part of that certification could be when setting it up, you do have to tell it a birthdate and the OS auto-applies some restrictions. Tells app stores your age, whatever.
The point is, if an OS doesn't want to participate, they don't have to. Linux distros etc. would just not be California Family-Friendly Certified™.
I wouldn't have to really care if California Family-Friendly Certified™ operating systems are scanning faces, IDs, birth certificates, collecting DNA, whatever. I'd have the choice to use a different operating system that suits my needs.
> if we don't do something, the trajectory is that ~every website and app is going to either voluntarily or compulsorily do face scans, AI behavior analysis, and ID checks for their users
You're going to get that anyway. Platforms want to sell their userbases as real, monetizable humans. Governments want to know who says and reads what online. AI companies want your face to train the systems they sell to the government, and they want to be the gatekeepers that rank internet content for age appropriateness and use that content as free training material.
Age verification across platforms is already implemented as AI face and ID scans. This is where we're already at.
I am well aware of the alignment of interests and the dismal state of things. I'm of the opinion that the only way to divert is radical legal action that shatters the defense industry and social media titans, and it sure as hell won't be Gavin Newsom who delivers it.
My objection to all this stuff is the requirement to share government ID / biometrics / credit card info etc with arbitrary third party sites, their 228 partners who value my privacy and need all my data for legitimate interest, and whatever criminals any of those leak everything to, and also give the government an easily searchable history of what I read when those sites propagate the info back.
Any scheme that doesn’t require this won’t get pushback from me.
As an alternative: I already have government-issued ID and that branch of government already has my private info; have it give me a cryptographic token I can use to prove my age bracket to the root of trust module in my computer; then allow the OS to state my age to third parties when it needs to with a protocol that proves it has seen the appropriate government token but reveals nothing else about my identity.
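A toy sketch of that idea, using an RSA blind signature: the government signs an age-bracket token without ever seeing it, and anyone can verify the bracket without learning the holder's identity. The textbook key below and the whole framing are illustrative only; a real system would use a vetted anonymous-credential scheme and real key sizes.

```python
import hashlib
import math
import secrets

# Government's published toy RSA key (classic textbook parameters; insecure).
n, e = 3233, 17      # n = 61 * 53
d = 2753             # private exponent, held only by the government

def h(msg: bytes) -> int:
    """Hash the token into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

token = b"age_bracket:18+"

# Device: blind the token so the signer can't see what it is signing.
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (h(token) * pow(r, e, n)) % n

# Government: signs the blinded value after authenticating the citizen
# through its existing ID records. It learns *that* you qualify, not
# which token you hold or which sites you will visit.
blind_sig = pow(blinded, d, n)

# Device: unblind to recover an ordinary signature on the bare token.
sig = (blind_sig * pow(r, -1, n)) % n

# Any site can now check the bracket against the public key alone.
assert pow(sig, e, n) == h(token)
print("bracket proven; identity not revealed")
```

The unblinding works because blind_sig = h(token)^d * r mod n, so multiplying by r^-1 leaves a plain signature the government never saw.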
It's pointless, does not increase security, does increase complexity of every interaction, and introduces a lot of weird edge cases.
What i want is full anonymity enshrined in law, while at the same time giving parents, not governments, but parents, options to limit what their children can do on the internet.
The push to do biometric data collection is entirely the result of entrepreneurs trying to get ahead before laws are passed. Their behavior is the result of the push to restrict the open internet. If we don't do anything, they will stop. You don't always have to do "something". Sometimes the harm comes by trying to do something.
> I agree. I also agree with S76 that some laws regarding how an operating system intended for wide use should function are acceptable. How would you react to this law if the requirement was only that the operating system had to ask the user what age bracket it should report to sites? You get to pick it, it isn't mandatory that it be checked, and it doesn't need to be a date, just the bucket. Is that still too onerous?
What's the point in doing any of this if it doesn't result in materially better outcomes?
The point is that I think it's one of a few things that if done together could result in better outcomes. First, it standardizes parental controls, which ought to be so easy to use that failure to do so is nearly always a proactive decision on the part of the guardian. It doesn't need to be perfect, just reduce friction for parents and increase friction for kids accessing the adult internet.
Second, it would signal to worried parents and busybodies that something has been done to deal with the danger that unmediated internet access might pose to minors. I don't think that it's a big issue, but a lot of energy has gone into convincing a lot of people that it is.
The other part of achieving a good outcome would be to disempower those in the political and private sphere who benefit from a paranoid and censorious public and have worked to foment this panic. That's the much harder part, but it's not really the one being discussed here. I'm pitching the low-intrusiveness version to gauge sentiment here for that easier part of the path.
> First, it standardizes parental controls, which ought to be so easy to use that failure to do so is nearly always a proactive decision on the part of the guardian.
If this mattered to the market, don't you think a company would have implemented it or would have been built to fill the need?
Your last point is the only one I partially agree with. The rest... will make no practical difference to what is going on in the world today.
I genuinely think the only two solutions to this problem that are workable are "zero privacy, zero freedom" or "fuck the children, we don't care".
Now, to be fair... there is a middle ground that is neither of those options, which I believe would be much more effective and allow us to retain our freedom and privacy and keep kids a lot safer. It's called education. But... no one will go for it, because I think for it to truly be effective you'd have to go as far as showing very young kids all the darkness that's out there, laying out in painstaking detail exactly how it works, and deeply drilling it into them. There ain't a snowball's chance in hell anyone would go for that, BUT... would it work? I'd bet your bottom dollar it would. The current extent of this education in public schools is a half-hour visit from a police officer to the classroom, handing out a sheet to the kids, and giving a 'good touch' / 'bad touch' talk. What's needed is a full-length, university-level course on the whole topic from end to end.
If you're in an adversarial relationship and need to defend yourself the best thing you can do is "know your enemy". But no... "they're too young to learn about that stuff, we need to shield them from it - think of the children!" is the reasoning people throw back at you when you suggest it. It hands down has to be the number one thing that could actually move the dial significantly, and it's just completely unpalatable to the majority of the populace.
What makes you think this is going to stave off that world? More likely you'll get both, since I doubt this API is going to satisfy other states' age verification requirements.
Sometimes a token effort or theater is sufficient to quell public sentiment. Like the oft-ignored and ineffective speed limits on roads, or the security theater at airports. That only handles the sentiment angle though. You still have to do something about would-be autocrats who want censorship and surveillance tools, and the oligarchs who want tracking and targeting data.
I'm getting upset by face scan creep too. I do not like it. No sir. But mandating a self-reporting mechanism feels about as useful as DNT cookies, or those "are you 18? yes/no" gates on beer sites.
It'd be more useful than DNT because there would be legal teeth on the side of the sender and receiver of the signal. It'd be more useful than the yes/no gates because an operating system could choose to allow the creation of child accounts.
I.e., it would be a standardization of parental controls, with added responsibility on sites/apps to self-determine whether they should be blocked or limit functionality, rather than relying on big whitelists/blacklists. Basically an infrastructure upgrade, rather than a patchwork of competing private solutions to parental controls and age checks. The hope is also that a system like this would remove concerned parents from the list of supporters for pervasive mass surveillance and age scans. If they feel like you'd need to be a moron to miss the "This is a child device" button while setting up their kid's phone and laptop, and it's broadly understood that just pressing that button locks down what the device can access pretty effectively, that puts a damper on the FUD surrounding their child's internet usage.
> How would you react to this law if the requirement was only that the operating system had to ask the user what age bracket it should report to sites? You get to pick it, it isn't mandatory that it be checked, and it doesn't need to be a date, just the bucket. Is that still too onerous?
Unfortunately no. There's a requirement that the OS disregard the user-indicated age if it has reason to think they're lying. Presumably this creates the obligation to monitor the user for such indicators.
I assume this is less "if they're lying" and more "if you've independently collected this data". It doesn't require you to challenge the user-indicated age, it requires you to use your own signal before that of the OS.
As a silly example, tax software probably has your full birthday, including year, which is more precise. Many social networks collected this data, as did a lot of major tech companies that implemented parental controls already.
Almost. Technically an adult must create an account for any non-adult who wants to use the computer, and configure it with the appropriate age category.
Honestly it’s the dumbest thing ever. Best just not to play that game.
How is that dumb? It seems reasonable and pragmatic. If the current status quo is ID uploads and face scans, this seems like the better approach. It shifts the responsibility back to parents. All adult service operators have to do is filter requests with the underage HTTP header set.
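The operator's side of that really is small. A minimal sketch follows; the `Sec-Age-Bracket` header name and its values are made up for illustration, since the actual signal mechanism is defined by the law and the OS vendors, not by this snippet.

```python
# Hypothetical operator-side filter: refuse adult content when the
# device-set underage signal is present. Per the statement quoted above,
# an absent signal is treated as the lowest age bracket.

from http import HTTPStatus

def handle_request(headers: dict) -> int:
    bracket = headers.get("Sec-Age-Bracket", "under13")  # absent => lowest bracket
    if bracket != "18plus":
        return HTTPStatus.FORBIDDEN.value  # 403: don't serve adult content
    return HTTPStatus.OK.value

print(handle_request({"Sec-Age-Bracket": "18plus"}))  # 200
print(handle_request({}))                             # 403
```

A few lines of middleware, versus building or buying an ID-upload and face-scan pipeline.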
Sadly, the only real response here is non-compliance. Recently, my credit card company wanted me to provide ID upon login (I was amused -- while my setup may not be common, it has not changed for years now). So I didn't, and just ignored it. I checked on it this month and it was suddenly fine. But then... one has to be willing to take a hit to their credit and whatnot.
The point remains though. They have zero way to enforce it if we choose to not comply. Just saying.
They have plenty of ways to enforce it. It's a law, they can take you to court. I guess it's easy to forget these days but laws do still apply to some people. If you're going to host a service, I guess consider using Tor or something.
Friend. On this very forum, you will normally see me argue that further deterioration of civil society is bad and we should be doing everything to maintain society as is. However, as with most things, there is a limit. That limit varies from person to person, but it is getting harder and harder to argue that laws apply ( especially once you recognize they don't quite apply across the board ).
> If you're going to host a service, I guess consider using Tor or something.
I think the person meant that if you don't comply there may be civil or criminal consequences, so if you want to knowingly provide a non-compliant website or app, you should host it on Tor to prevent your person from being subject to the state.
I know the CA law is civil only, so I don't think there is much CA can do if you publish an OS and don't make money from CA folks, but other implementations may decide to impose criminal penalties.
Totally agree, but I think we are heading toward full intrusion into every aspect of life. And this is just the beginning.
Even decentralized identity systems are not that decentralized, of course.
A cornerstone philosophy behind the American legal system is that we must view every single increase in State power as a potential slippery slope, and must prove that it isn't.
In this case, it's a slippery slope; if we become normalized to this, what other incursions into our 1A rights to free speech, religious freedom and public gathering will we allow?
And I say religious freedom, because these kinds of laws are largely peddled by religious folk or people who otherwise have been deeply influenced by early American Puritan religious culture.
Neither I nor my children should be forced to submit to such religiously-motivated laws. I can decide for myself and for my child what is appropriate.
Neither I nor my children can be compelled to enter personal information into a machine created by someone who was also illegally compelled to require it.
Neither I nor my children can be compelled to give up gathering publicly on the internet just because we don't want to show identification and normalize chilling surveillance capitalism.
The problem is that the comparison falls flat. The ADA does not sniff for your birth date and surrender that data to others. One has to look at things as a cohesive unit, e.g. insecure bootloaders by Microsoft surrendering data to others. It seems as if they are trying to turn computers into spy devices. That in itself is suspicious. Why should we support any such move? Some laws are clearly written by lobbyists.
> Anyways, it feels like all sides of the political spectrum are trying to strip away any semblance of anonymity or privacy online both in the US and abroad.
It's not this or that political party; your neighbors simply don't share your values. Maybe you don't agree with their values either, like to what degree we should be ceding privacy in favor of fighting child exploitation on the internet. Child protection arguments work because they are a compass to the true feelings of your neighbors.
The problem with this argument is that everyone agrees with protecting children.
"Think of the children" arguments are the legislator's fallacy: Something must be done, this is something, therefore we must do this.
In reality there are alternative means to accomplish any given goal, and the debate is about what should be done, because no one benefits from using methods that cost more than they're worth.
Well, almost no one. The opportunists who drape themselves in the cloak of "safety" when they want to have the government mandate the use of their services or use it as an excuse to monopolize markets or establish a chokepoint for surveillance and censorship do benefit from the machinations that allow them to screw the majority of the population. But the majority of the population doesn't.
There's fake protecting children and there's real protecting children. The Colorado and California parental controls API laws (not age verification laws, there is no age verification in these laws) are clearly "real protecting children" since all they do is mandate standardisation of a parental controls API on each OS.
We kept saying parental controls are all that's really needed, these states said "ok then, do parental controls" and we're still complaining.
It is as Aristotle said: the average person is a natural-born slave (to their emotions, and thus to the rhetoricians most skilled in changing them). That is why democracy always fails in the end. Americans just had such good geographic and historic luck as to delay this reckoning by a century or two.
If you see politics through this lens then the 'democratic backsliding' that has been universal across the world for the past two decades is entirely unsurprising.
>Anyways, it feels like all sides of the political spectrum are trying to strip away any semblance of anonymity or privacy online both in the US and abroad. No one should have to provide any personal details to use any general computing device.
What was the legislative history for the California law? Who sponsored it, and who are their backers? Is there some coordinated effort by surveillance state proponents?
The California law doesn't strip away any anonymity or privacy except for the additional fingerprinting signal of whether you are a kid or not, which is no worse than Accept-Language.
It kind of does. Depending on how the mechanism works, if I check the user's age bracket every time my executable is launched, and my user launches it daily, I can determine with certainty what the user's birthday is: it is the day the reported bracket changes. That information may be enough to deanonymize someone in rural regions. It certainly gives away PII.
Makes sense; these laws are great for the establishment. They are difficult to adhere to for newcomers or smaller parties, and compliance with this madness eats away a much larger proportion of thin profits.
That quote doesn't imply that those companies are pushing for it. The lawmakers might be pushing for it, and the companies might be ambivalent to whether it's done or not but said "if you're going to do this, then it should be worded this way."
Disclosure: I work at Google, but not on anything related to this.
I remember the western public laughing at the former USSR's requirement to register typewriters. So we have a case of he who slays the dragon becoming the dragon.
This law serves tech mob well, more work and power for us :)
It was never about anything else. Silly media men
> We are accustomed to adding operating system features to comply with laws. Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional.
You're welcome! Boss chuckles mightily!
>Otherwise, given the pervasive tracking done by corporations and the rise of constant surveillance outdoors, there will be nowhere for people to safely gather and express themselves freely and privately.
We can often see through your own eyes, but this is highly "classified" information. And we watch through dog AND cat, hahaha. I still can't believe all of you guys let our brave GRU warriors into your homes!
The children are a distraction. They're a secondary justification. Don't lose the plot. This law serves only one outcome: enablement of further authoritarianism.
Fxxk off, to all political actors pretending this is about child protection. Protecting children is not the job of the OS, the device manufacturer, or the internet service provider. It is the parent’s job. If you cannot supervise, monitor, and discipline your child’s internet use, that is your failure, not theirs.
They can provide tools, sure. But restricting adults because some parents fail at parenting is insane. That is how a totalitarian state grows: by demanding the power to monitor and control every individual.
If you cannot control your children, that is your fault. And if that is the case, you should think twice before having kids.
In general, I argue for less state control on anything. But your argument seems flawed from its core. If someone is a bad parent, should we simply ignore it and let the children turn out idiots as well? And the line is often blurry, so that's why we designed schools that should compensate even for dumb parents.
And, just to be clear on this topic, I think these age restriction laws are mostly bullshit, but I'm deeply against the concept of putting all the responsibility of raising children onto the parents.
> we simply ignore it and let the children turn out idiots as well
There is not a lot of safeguarding against this in the real world tbh. At the very least I think the OS or internet age verification is not the place to start improving this.
There is some. Bars won't serve minors. The standardisation of parental controls law (the CA/CO one) is much closer to "bars won't serve minors" than it is to "camera drones will follow minors around to make sure they don't drink alcohol"
You make a good point that society may be responsible as well; however, we are arguing over trying to use technology to solve meatland problems, and this one should never have been automated into tech, ever. It puts a burden on artists and engineers to solve things they aren't causing or really responsible for.
what about children being fed unhealthy things? childhood obesity is dangerous and also affects their mental and physical health.
let's install cameras in all supermarkets that ensure parents cannot buy unhealthy things for their children.
of course, adults can continue to purchase anything they want for "themselves". but the facial scanning in supermarkets is imperative for child safety!
> should we simply ignore it and let the children turn out idiots as well?
Just because you're an idiot at 18 doesn't mean you are one for life.
> so that's why we designed schools that should compensate even for dumb parents.
Does that actually work?
> against the concept of putting all the responsabiliy of raising children onto the parents.
Then how do you feel about parents requiring a license before they have a child? If you wish to invite yourself into their responsibilities shouldn't you also invite yourself into their bedroom first?
> If you wish to invite yourself into their responsibilities shouldn't you also invite yourself into their bedroom first?
You're turning a question of measure (how much should society be involved in raising children) into an all-or-nothing debate, which I explicitly want to reject.
> Does that actually work?
Yes. Because of mass education, almost every adult you meet can read and write, something that was new within the last 100 years. Just because a system has (currently huge) faults doesn't mean we should remove the system entirely.
Okay, assuming that’s the case for the sake of argument, that’s still a huge problem right? Kids raised by bad parents suffer, which is inhumane. And if you don’t care about that, they also cause problems or costs for society at large (especially if there are a lot of them).
Those are bad outcomes. So is it any wonder that we look for policy/regulatory issues to mitigate the harms of bad parenting?
It’s compelled speech: a transmission of expression required by law. That argument was settled in 1791. The First Amendment does not permit the government to compel a person’s speech just because the government believes the expression furthers that person’s interests.
It's also a consumer product regulation, of which many already exist. The government compels you to speak about the ingredients in a food product you manufacture, and we don't seem to have a problem with that.
A better analogy would be regulation of addictive activities like gambling and regulation of addictive substances like painkillers. Given that the platforms being regulated were intentionally engineered to maximize addictive potential, this seems a fair and reasonable response.
I assume you live in the free world. Some socialist states in history, such as East Germany, pushed child-rearing and early education much further into the hands of the state through extensive state-run childcare and kindergarten systems. That model is gone, and for good reason.
Even with schools in place, the basic responsibility for raising children still belongs to the parents. Schools can support, educate, and compensate to some extent, but they cannot replace parental responsibility.
I also see far too much awful news — in my country, Korea, for example — about terrible parents harassing school teachers because their children are out of control.
I was born in a communist country in Eastern Europe, which is now crony capitalist. The issue is extremely complex, and all I can say in such a short paragraph is that ideologically-driven implementations are doomed to fail. It doesn't matter whether you believe in "free markets", "the state", "free speech", "socialism" or "equality", if you put these above the concrete reality of modern parenting and how much harder it's getting compared to previous generations.
Cops tracking what people did on the internet, checking every image to ensure it's not pornographic, and every transaction online to ensure it's not criminal!
Sounds great! Let's just start by rolling out the program to target elected officials and their families as a trial. If every congressional or senate representative wants to undergo a few years of scrutiny to make sure the system works well, maybe the people will follow gladly.
Sorry, the point I am trying to make is that bullshit laws should be tested on the group of people advocating and passing those laws, because maybe they wouldn't like the law when it applies to them.
this whole thing is part of building a mechanism to restrict free speech down the line to cover for a certain "greatest ally" of the united states. make no mistake, the "not a genocide" over the last two years and the recent "not a war" is very much related to this.
How does mandating every OS to have a parental controls API lead to wholesale suppression of speech? Will they mandate it to always be set to the most restrictive setting?
this isn't "parental controls", this is a mandate to verify your age, and subsequently your identity, to an external third party. can't you see how this is a slippery slope to deanonymizing the internet and being able to restrict access for reasons that won't be revealed until later?
I’m tired of the U.S. nanny state with its pilgrim-religious historical backdrop of prudishness, infecting everyone outside its own borders.
Their ideas are deeply unhealthy for children and worst of all, lazily shift the responsibility of parenting from the parents to the state.
Many European countries have long had a culture of gradually increasing their children's responsibilities and freedoms, letting them slowly and safely test their boundaries. At least the proposed EU solution (for identity) tries to prevent overreach. The wholesale EU spying to "save the children", which seems to be funded by the U.S., is a different topic, and we need to continue to fight it tooth and nail.
The insidiousness lies with major tech companies and their pursuit of eyeballs on screens. The Internet was supposed to be something we used to learn, gain knowledge and connect. They took the internet over, bastardized it and made deeply addictive apps and games to keep you watching ads regardless of age.
These age checks are just for data collection and spying to sell the data to the highest bidder, which is likely governments in order to control and herd their populations.
The reason for this is easy to understand in the context of AI. In the future, the only valuable asset will be data, and access to that data.
In the future, any app will be built, replicated, deployed and maintained by AI. Apps, websites, especially B2B apps - their days are numbered.
If my business needs a billing system tailored to it in the future, I’ll describe it and have an AI build and maintain it. That is not that far away in relative terms.
Our goal collectively (as technology advocates) is to make sure that this consolidation of personal data doesn’t happen. If personal AI is to be built, then the user should have full ownership, kept away from the spying eyes of groups like Palantir and the NSA. They cannot be trusted. The Jews learnt that catastrophically in Germany in the 1940s, putting their trust in a government that became authoritarian and evil.
What is digital will never die and what is digitally given cannot be taken back.
You should look into what's actually happening in other countries before blaming it on "the U.S. nanny state". The rest of the Anglosphere makes the United States look like a Libertarian utopia. I live in the United Kingdom, and brother - this is who they are. I assume Australia, New Zealand, and Canada are similar. There are real problems re: "think of the children" in the United States, but if you think "the U.S. nanny state" is bad then you have no fucking clue how bad things could be.
You literally blamed these moves on US religious prudishness, and then said that they were only about surveillance. Which is it? Just kidding. We all know it's nothing more than surveillance and control, and you just have an anti-religious axe to grind.
If the US actually gave a flying FUCK about "protecting the children," the current administration would be making good on Trump's promise to release the Epstein files -- as now ordered by a federal law passed by an overwhelming majority of both houses of Congress -- and prosecuting everyone involved.
We see what's really going on. We can't do anything about it, apparently, but we see.
> Throwing them into the deep end when they’re 16 or 18 is too late.
I saw this a lot in college. Kids that didn’t have any freedom or autonomy while living at home went wild in college. They had no idea how to self-regulate. A lot of them failed out. Those who didn’t had some rough years. Sheltering kids for too long seems to do more harm than good. At least if they run into issues while still children, their parents can be there to help them through it so they can better navigate on their own once they move out.
It is a common tactic among abusive parents to convince their child that, without them, they'd be unable to survive in the wider world. Any mistake becomes irrefutable proof thereof, and any attempt to break this control and do things on their own is treated as ingratitude, often prompting the abuser to instantly abandon all parental duties to "teach a lesson".
I don't disagree, but in this context I don't think those are the same parents that are yeeting their kids off to board at university as soon as they are of age.
Uh huh. And some kids haven't got their head straight after puberty at 16, and still need (or would have needed) the training wheels. Blaming it on their parents would seem unfair.
Society works on averages. Most people being ready little adults at 16 doesn't mean everyone is.
Edit: yeah, look at the downvotes. How are you all doing with that self-regulation?
I think you're responding to an argument I didn't make. And I feel necessary to point it out because it looks like other people may be reading it like that, too.
This is a comforting fiction. I've seen it go both ways where children with freedom develop pornography or drug habits and sheltered kids turn out well-adjusted and regulated.
That’s usually coupled with a lot of anxiety. Some level of anxiety can be useful, as it can make a person look responsible. But this can come at a heavy cost, which they may not let others see, and might not even realize themselves until later in life.
The point I’m trying to make, albeit very bluntly, is that people often make the banal claim that kids these days are too sheltered, because they themselves turned out fine. But they’re actually just observing survivorship bias.
12 is the magic number when things start going to shit. I'm sorry for what happened to you, but maybe you should start a counselling service for clueless parents and tell them what they should and shouldn't do to correctly shelter their children. Because sheltering is an art. I think about it all the time. I always wish someone would take a bit of money and tell me how to guide (or not guide) my child to be independent in the rough world and to make decisions independently.
Not commenting on this specific law, but I do believe the premise that children should be exposed to everything is wrong, and that the overall view on humans in this post is naive.
These days, exposing an immature brain to the raw internet is basically just handing the brain and personality over to be molded by large corporations and algorithms.
And humans have never been rational, self-contained actors that self-educate perfectly when exposed to information, converging on an objectively good and constructive worldview. Quite the opposite.
Humans develop in relation to one another, increasingly in relation to algorithms, and sometimes become messed up, and sometimes those mess-ups would have been avoidable had relations or exposure been different.
In fact I would say you as a parent are not doing your job if you are not trying to make sure a 12-year-old isn't pulled into, say, an anorexia rabbit hole.
Whether that is best done by making sure exposure doesn't happen, or through exposure and education, will depend on the child and parent (and society) in question. What worked best for a highly rational, self-reliant geek teen may simply be a disaster for another human. And what worked for an upper-class, highly educated family may not work for a poor family with alcoholic parents, or parents working 18 hours a day to make ends meet.
And parents are not perfect -- if all parents were perfect, there also would be no alcoholics and drug addicts or poverty or war. But people are imperfect, and it's natural to make laws to mitigate at least the worst effects of that. (Again, haven't read this specific law proposal, but found the worldview of OP a bit naive.)
> These days, exposing an immature brain to the raw internet is basically just handing the brain and personality over to be molded by large corporations and algorithms.
You make the case that today's internet is unsuitable for young children.
But has this ever been different, apart maybe from the very first days of the internet?
While access through phones has reshaped the internet fundamentally, I'd propose that it has always been dangerous. When I was 12, a single wrong click could destroy your machine, or lead to a physical bill being sent to my parents' home (which has happened), or lead to the most disturbing pictures and videos.
So I think it's not the case that we should allow kids completely unsupervised access (like it always has been), but it's also naive to think that we can regulate our way out of this (on the state or household level, like it always has been).
When my generation "accessed the internet", there was a massive dial-up sound and the single family PC was in the living room, visible to everyone.
Even later when the computer was in my room, I still had to go look for the creepy shit, it didn't appear in my email inbox.
Kids this age browse the internet through algorithmic apps built to maximise engagement, in a corner on their bed in their room. Parental controls for most apps and operating systems are a fucking joke.
Agreed, but isn't this a parental issue? Why aren't parents moving back to a "shared pc in the living room" model?
I absolutely would not allow a kid to have an unregulated smartphone, and then further compound the problem at home by allowing them to access it privately and without interruption. Device-management enrollment is trivial on iPhones.
I think there is a drastic difference between being once off exposed to bad images, and an algorithm making a choice of whether to subtly over time expose the Pokemon-interested child to racist Pokemon videos vs non-racist Pokemon videos on Tiktok. (Or anorexic Pokemon videos, or..)
Amount of time spent and repeated exposure being the key.
The question is really what kind of human is raised, rather than raw exposure as such.
So for that reason things are different IMO than 20 years ago.
Yes, of course some people would fall into internet forum rabbit holes 20 years ago, and paper-letter pen-friend-induced rabbit holes 100 years ago. But it did help that it was like 5% of the population instead of 95% of the population spending their time there.
Regarding your last point, I don't necessarily disagree (again, I didn't check up on this law; I care more about the laws in my own country), but I think arguing against the law will go better if one does not display naivety when making the arguments.
Don't say "it will be better if all kids are exposed to everything early" (it won't), instead say "the medicine will not work and anyway the side-effects are worse than the sickness it intends to cure" (if that is the case).
Then don't have kids if you are not perfect, or if you can't take care of them and nurture them with the attention that they need and rightfully deserve.
I agree, and I believe too many geeks who are now parents (including the author of the blog post) do not realize that the computers they grew up with, and in particular the Internet they grew up with, are nothing like the computers (phones) and the Internet kids have access to today.
> believe the premise that children should be exposed to everything is wrong
imo this is what is wrong with modern parenting. reality does not care about the child's feelings, and if a child is old enough to have a screen with internet unattended, it is old enough for anything
The CA/CO law only requires the option to enable parental controls on an account, and as the article points out, can be worked around by a sufficiently determined child using something like a virtual machine. This is not really the government deciding how children should be raised. The parent still has the ability to choose to apply the parental controls.
It's more like the rule that minors can't buy alcohol in bars - parents can still buy alcohol at the supermarket for their children, and sufficiently determined children can find some other adult to buy it for them.
Probably by the time you know how to install a virtual machine, you can handle the unrestricted internet.
The bigger problem is it sets us on a possible path towards completely government-controlled computing devices. The fact that so many countries are pursuing ID requirements online is somewhat of a canary for this whole OS age check thing imo.
ok, that is the argument with merit in favour of shielding kids from the internet. now let's consider what it looks like when the locus of responsibility is governments:
it's true that kids are vulnerable to certain forms of content on the internet
it's also true that adults are vulnerable to certain forms of content on the internet
it's also true that governments cannot police "harmful content" on the internet effectively, or even meaningfully, if most people can easily surf the internet pseudonymously
it's also very true right now that what's on "social media" is very Sybil-vulnerable, and inordinately so with the advent of LLMs
what do you think the playbook will look like once there is some sort of tight OS level system that is enforced across the board to certify or verify information about the user?
do you think this level of coordination to push for identifying the user at all levels that is happening across the world in a matter of weeks is genuine concern for the kids alone?
My view is that this must be left entirely to the parents. The only time a government should be allowed to interfere is when there are child abuse or neglect cases against the parents and the children are put under child protective care.
It is in my view crazy and irresponsible to allow the government override the parents' decisions about what media their children can consume. It is guaranteed that this power will be abused.
The CA/CO law is literally the government writing a law that says it shall be left to the parents but the device must give the parents the options they need.
Does it effectively outlaw general computing for minors by requiring account holders to set up accounts for minors where account holders are defined as being 18+?
I'm honestly not sure; but I could see that being the result of the law, with companies like Best Buy disallowing minors from purchasing hardware with cash for fear of liability.
I've obviously read about how bad adult literacy in the US is, but I didn't realize how many "technologists" were impacted by it. The law is short and clear and doesn't involve attestation or age verification. Yet all these "hackers" claim it does just that. The reading comprehension and critical thinking skills on display seem to match the national average.
I think most people here are extrapolating from the intent behind this law, the triviality with which it can be bypassed by minor account holders, and what that means for the future. Once this law is in effect, it will be ineffectual. Minors that currently don't know what VMs are, what live booting is, what keyloggers are, etc. will learn immediately once blog posts start circulating about bypass mechanisms. Parents will then go back to the legislature and say the law as written sucks, and they will demand better laws; but the only way to do better is to force all devices to authenticate with the ISP using a gov-issued ID/token to prove the account holder is not a minor. And the only way to prevent even further workarounds, like the OS lying, is to force hardware-based remote attestation. And that means the death of general computing and the death of any anonymity.
This law feels like a battle in The Coming War on General Computation, as Cory Doctorow put it:
> I can see that there will be programs that run on general purpose computers and peripherals that will even freak me out. So I can believe that people who advocate for limiting general purpose computers will find receptive audience for their positions. But just as we saw with the copyright wars, banning certain instructions, or protocols, or messages, will be wholly ineffective as a means of prevention and remedy; and as we saw in the copyright wars, all attempts at controlling PCs will converge on rootkits; all attempts at controlling the Internet will converge on surveillance and censorship, which is why all this stuff matters.
> all attempts at controlling the Internet will converge on surveillance and censorship, which is why all this stuff matters
it really boils down to this sadly, and it should be pretty obvious shouldn't it?
i'm finding it befuddling that even technical audiences seem to resist connecting those dots, but strong motivated reasoning is at play: these are audiences that will often feel it will be them who will be in control, and they're also emotionally nudged by the idea of child safety
"Age verification" is such a politician's way to label this. It doesn't actually verify your age. What it does do is set the groundwork to argue that none of us should use any software on any computer that an App Store with Age Verification doesn't allow us to.
But there's a bigger issue than just what software you're allowed to run on your own computer. What's really insidious is the combination of the corporate and government interest. If every server tracks how old you are, it's a short step to tracking more information. Eventually it's a mandatory collection of metadata on everyone that uses a computer (which is every human). Something both corporations and governments would love.
You were worried about a national ID? No need. We'll have national metadata. Just sign in with your Apple Store/Google Store credentials. Don't worry about not having it, you can't use a computer without it. Now that we have your national login, the government can track everything you do on a computer (as all that friendly "telemetry" will be sent to the corporate servers). Hope you didn't visit an anti-Republican forum, or you might get an unfortunate audit.
Comparing today's internet to the 90s is hardly fair. It has become extremely predatory, and most places youth gravitate towards are controlled by algorithms with the goal of getting them hooked on the platforms to make them available for manipulation by the platform's customers.
Of course, there will be stories of smart kids doing amazing things with access to vast troves of information, but the average story is much sadder.
The EU is working on a type of digital ID that an age-restricted platform would ask for, which only gives the platform the age information and no further PII.
Companies (not talking about System76) amazingly always find the shittiest interpretations of their obligations, making sure to destroy the regulation's intention as much as they can. The cookie popups should have been an option in the browser asking the user whether they want to be tracked, with platforms meant to respect this flag. Not every site asking individually, not all this dark-pattern annoyance. It's mind-blowing that that was tanked so hard.
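For what it's worth, the browser-flag model described above already has real plumbing: the `DNT` header and the newer `Sec-GPC` (Global Privacy Control) header are actual request headers a browser can send once, globally. A minimal sketch of a server honouring them (the helper function and its name are hypothetical, not any site's actual code):

```python
# Hedged sketch of the "one flag in the browser" consent model: the browser
# sends a single preference header and every site honours it, instead of each
# site showing its own popup. Sec-GPC and DNT are real headers; this check
# itself is an illustration only.
def tracking_allowed(headers: dict[str, str]) -> bool:
    """Return False when the user has signalled an opt-out via a header."""
    # HTTP header names are case-insensitive, so normalise before lookup.
    h = {k.lower(): v.strip() for k, v in headers.items()}
    # "Sec-GPC: 1" means "do not sell or share my data"; "DNT: 1" is the
    # older Do Not Track signal.
    return h.get("sec-gpc") != "1" and h.get("dnt") != "1"

# A browser configured once with GPC is opted out everywhere, with no
# per-site consent banner needed.
```

The point is that the machinery is trivial; what was missing was a legal mandate for sites to respect the flag rather than route users through dark-pattern dialogs.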
None of this is for what you're describing, though; there is no reality where such wildly different countries and states in different corners of the world all coincidentally decided to do this within 6 months of each other. We know it's not "well, maybe they saw X country and thought it was a good idea", because even percolating the policy would have taken over a year.
Protecting kids is just the PR reason, the real goal is requiring ID auth for every action taken on a computer. If we normalize it for downloading apps or using websites the next step is to authorize it for connecting to HTTPS at all and then the next step is requiring it to unlock your CPU cores.
If people don't push back on this now there is no world where we get out of 2030 without requiring government ID auth to install linux on your own computer not connected to the internet.
End to end silicon to server auth is absolutely possible and someone is working really hard to make it a reality.
> Comparing today's internet to the 90s is hardly fair. It has become extremely predatory...
I think you're missing the point they're trying to make. It's not that the problem isn't real, it's that the solution won't work. Kids will find a way around. They have a lot more free time than us.
> The EU is working on a type of digital ID that an age-restricted platform would ask for, which only gives the platform the age information and no further PII.
Sure, it might start out that way, but once adoption reaches anything critical the PII will be required to squash free speech as soon as possible. But by then the interaction flow will be familiar, hardly anyone will even notice, never mind care.
The EU has the best frog boiling experts in the world.
> ... will be required to squash free speech as soon as possible.
Maybe I'm pointing out the obvious, but things happen if enough people care about them or don't care to oppose them.
From my perspective speech became "more free" lately - meaning everybody says all kind of incorrect, wrong things without fear of retribution even if there are laws against some of those, because people just don't care.
So maybe we should also focus on teaching people what free speech is, why it is good for them, and why they need it, rather than worry about some hypothetical mechanism that someone will use to prevent it.
Of course both can be done, but I find it a bit funny that if the focus is mostly on not having mechanisms to prevent free speech, we might still end up in a situation where there are no such mechanisms but nobody speaks freely anyway, because they don't care or only stare at their TikTok.
By this reasoning we should oppose every law in case it's a foothold to sneak in a different law.
The CA/CO parental controls API law is very reasonable. It only mandates each OS must have a parental controls API, the use of which is up to the parents.
Yep, the digital wallet will become the authoritarian beating heart of your life. If you don't comply with the EU, you can say bye-bye to your bank account and any online interaction; they'll block your right to travel, and so on.
The corona passports showed the way to achieve ultimate control of the population, and the EU digital wallet will be a permanent corona passport.
The public sheep, in their ignorance, are cheering this on, without knowing what will await them. It is our responsibility as technologists to fight this, and to educate the sheep.
I'm surprised by the complete lack of dissent or even nuance in the discussion here. I'm much more ambivalent on this: the historical record for prohibition is not good, but Instagram and the like are uniquely and disastrously harmful, and the companies pushing them on children are powerful in a way that has no real historical precedent. On balance, anything that reduces the power those companies have over our lives (and our politics) has to be at least considered. In other words, I don't think this is necessarily the right measure - but I'm desperate.
I am not sure what time or country you are talking about but when I grew up (Germany in the 90s) we officially could only buy cigarettes from age 16 (or 18?) and 50% of my friends smoked. So that did absolutely nothing.
Later (I think, man it's been a while) the vending machines needed a driver's license or id to verify the age and guess what, as long as you had access to a single person over the age of 18 you could still get cigarettes.
Stepping away from the cigarette topic... I think mixing the two topics does not make sense.
First one is: Is there stuff on the internet that kids should not be exposed to without supervision? I don't have a strong opinion; I don't have kids. Probably not, but I am not even interested in discussing this.
Second one is: Will some stupid laws like the mentioned ones help in any way? Maybe a little, probably not really and only for kids who don't find a workaround. Will they have catastrophic side effects and thus are not worth implementing for minimal gain? 100% yes.
But why not force age verification / content restriction on Facebook / Instagram / the like instead? There aren't really that many big players, are there?
Also, if all the OS does is require you to pick some number from 0-100 and a date without doing any verification, everyone can lie. It has other flaws, like not considering that many people can share accounts, that some embedded devices with UIs can no longer receive updates, etc. Honestly, if I thought about it for 30 minutes, I could list dozens of such problems. I doubt these laws can work efficiently enough.
For now this might sound like the least of evils, but are we sure that these idiot politicians won't come up with something even more insane after seeing the inefficiency of this?
How would you impose it on them? Facebook's only way to tell your age is for you to upload an ID document and don't we want to avoid that? But when a parent buys a device for their child, they can just enable the setting that says "this device is for a child" and then Facebook can see that setting, with no further identity information transmitted. That's better privacy, not worse.
This might be plausible to an extent (probably for much younger children). If the child can manage to install an OS (which is not that difficult nowadays), or is some kind of power user, then it will not work well. Also, the laws are about offline software; how will this be implemented for websites, where most of the harmful social media actually is? There are no answers (I guess the web must implement some standard, such as a specific HTTP header). There are so many edge cases that I don't even want to talk about them.
you can install very tight parental controls on many devices
but this is not about optionality, this is about forcing the mainstream into verification and certification schemes that most people won't be realistically able to avoid, it's about control and compulsion of the mainstream
Did regulating cigarettes kind of work? I ask just because I don't actually know. I always assumed that the regulations were a reflection of the growing society-wide distaste for cigarettes and not a cause of it. If regulations were enough to change people's attitudes towards something, then why did alcohol prohibition fail so hard?
Sure, age restrictions played a part. But the larger reasons are the increased awareness of direct health effects, banning it in public spaces, and taxing the hell out of tobacco. I’d bet if they restricted app usage in specific locations, that alone would break the habit for some people. And imagine if you charged them each time they logged on.
Most of us are old enough for 'you wouldn't download a car' nonsense...
But as adults that are starting to have kids, this "hard divide" between physical and virtual starts to break down. What I mean is that we can't always use the excuse that we can't apply some reasonable law just because an item isn't "physical".
this is why the vector of attack of "think of the kids" is almost always the first when it comes to try to lock down the internet in some way
it's "protect the kids" or "counter-terrorism" and nowadays also "harmful content" because as the internet is now fully mainstream, softer and softer heads start to prevail
> instagram and the like are uniquely and disastrously harmful
Could we perhaps regulate them to require that they be made less harmful for everyone?
> anything the reduces the power those companies have over our lives (and our politics)
If we're concerned about politics, I presume we're talking about the impact on adults, but these age-based restrictions are not intended to change anything for adults.
>instagram and the like are uniquely and disastrously harmful
...to both adults and children. What kind of worked for cigarettes was the huge tax, so why not create a "mental health tax" based on the number of users times some addictiveness score, and let Meta either fix Instagram, pay for their users' therapy, or pass the cost on to them?
Instead this "protecting children" by giving them "degraded" experience will only motivate them to bypass the age verification and destroy the statistical evidence of the harm those platforms cause.
It's not the job of the operating system to protect children. Social media is bad even for adults. From my point of view, why don't they address the source of the problem by banning what Instagram, TikTok, etc. are doing that is bad even for adults, instead of making laws that restrict even further what a person can do with their personal computer? (If this law comes into effect, it's like saying it would be illegal to run Linux or any other OS that doesn't implement this bullshit.)
Well, surely because the government is full of investors in Meta and uses Meta for their propaganda, and possibly because the government wants more data to put on their databases that is used by ICE and other agencies.
So much for the freedom and democracy lectures from Americans and Westerners to the rest of the world. This is just censorship of every form of free speech. This has nothing to do with children or youth. They will eventually censor and track everyone.
If you don't want to be censored just don't check the checkbox that says "this device is for a child, please censor adult content"? There's no passport check in the CA/CO law, in fact it's expressly forbidden.
> It can get worse. New York’s proposed Senate Bill S8102A requires adults to prove they’re adults to use a computer, exercise bike, smart watch, or car if the device is internet enabled with app ecosystems. The bill explicitly forbids self-reporting and leaves the allowed methods to regulations written by the Attorney General.
I don’t think they have processed the NY law yet. It is completely incompatible with the open source model. When people finally figure it out there will be an uproar.
I feel the same way. Looks like online "privacy" and "anonymity" will cease to exist within our lifetime. It's already starting with those "privacy respecting" solutions like zero-trust age verification, but that will quickly be deemed insufficient, and because the legal framework will already be in place, it will very quickly and smoothly turn into full-blown surveillance and censorship.
Time to set up I2P on my server and donate some bandwidth, but I'm sure they'll make that illegal too.
True. Like many other nefarious things, the British started this with a child protection act that has nothing to do with children, considering their long history of pedophiles. Slowly, all the other governments are using the same pretext and template. Time to use pigeons and our own peer-to-peer communication.
Next step coming soon is “well we need a license scan if you tell the OS you’re over 18”. macOS already requires a 6 step process to “trust” regular programs not from their app store. So this is just end to end control of our machines.
I can't fathom all the rage and confusion here about these laws. It's been a well-known effect since forever that when a government deems that something needs to be done, they'll go for the first "something-shaped" solution.
This all could've been avoided. Governments all over the world have been ringing the alarm bells about lack of self-regulation in tech and social media. And instead of doing even a minimum of regulation, anything to calm or assuage the governments, the entire industry went balls-to-the-wall "line go up" mode. We, collectively, only have ourselves to blame, and now it's too late.
If you look back, it didn't have to be this way:
- Governments told game publishers to find a system to handle age rating or else. The industry developed the ESRB (and other local systems), and no "or else" happened.
- Governments told phone and smart device manufacturers to collectively standardize on a charging standard, almost everyone agreed on USB-C and only many years later did the government step in and force the lone outlier to play ball. If that one hadn't been stubborn, there wouldn't have been a law.
The industry had a chance to do something practical, the industry chose not to, and now something impractical (but you better find a way anyway, or else) will be forced upon them. And I won't shed a tear for the poor companies finally having to do something.
> We, collectively, only have ourselves to blame, and now it's too late.
Why would we have to be blamed for a law written by some lobbyists? That makes no sense at all. There are of course some folks that are in favour of this because "of the children" but their rationale does not apply to me nor to many other people. Why should they be able to force people to surrender their data, with the operating system becoming a sniffer giving out private data to everyone else? That makes no sense.
The invocation of "lobbyists" in this context is meaningless. People lobby for all kinds of things. Doesn't really matter once it becomes a law anyway.
If people could just say I don't agree with this law, it "makes no sense" and it's written by "lobbyists" and the government should not "be able to force" me to comply then we don't have a society anymore.
You had better come up with some better arguments otherwise it just seems like the typical sad case of the losing side suddenly griping about the referee's monopoly of force when it's no longer going their way...
The comment you replied to rightly pointed out one way of getting ahead of said monopoly of force is addressing problems with the status quo before the state takes an interest. It didn't happen, and now you will probably get some heavy handed intervention. But ignoring this basic point to ask why oh why suggests an ignorance of the very nature of the society that is and has been constantly regulating you.
If you only happened to notice now you should consider yourself a rather lucky specimen in the long line of human history, full of those remarking "this makes no sense" as they are nonetheless compelled to comply.
The fact that lobbyists push the law is in fact very meaningful. It means that a minority with power is trying to tip the scales in their favor against the otherwise unbiased will of the majority.
To extend your analogy, it's not one side complaining after a fair match, it's them complaining that refs have been paid off.
Lobbyists do not always mean minority. I'm sure it looks like that from the outside.
There are all kinds of laws that people don't like, me included. With every law there will be some winner/loser trade-off (for lack of better word). As OP said, that is society.
If the people here were so passionate about it, they would help come up with a better solution, not a "f* off" comment.
> We, collectively, only have ourselves to blame, and now it's too late.
Can't believe I'm reading this. I don't want age verification at all, whether it's self-imposed or not. I should be free to use whatever tools I want however I want.
Democracy is not about what you want. If the majority want something you don't, the best you can do is find a compromise. There is no option of doing nothing and keep computers the same as they have been if the majority want change.
But does the majority want that change? If they want it, are they entirely aware of its potential impact on their freedom of speech and access to information? Or were they conditioned to think it's good for them because well-funded corporate entities and governments spend money on promoting that image? Democracy does not work when the majority is stupid and uneducated, because people like that are easily controlled. I wish we were putting as many resources into education as we're putting into cheap entertainment and ads/marketing.
The vast majority wants a parental control setting on the kid's device, and that's what is being imposed in California and Colorado right now.
The vast majority don't want to upload their passports. That's what we should be opposing. Standardized parental controls set by the device owner are a great alternative and not invasive at all.
> The industry had a chance to do something practical, the industry chose not to
Wrong. There was no choice. Any type of identification technology causes more problems than it solves. The right choice is to look for different approaches than identification technology for solving the problems. And as the article points out, the problems are best tackled with education and not with tech.
I'm probably missing something, but when I read the California statute I didn't understand it to be anything like "computers enforcing age" - more like, when you create an account it needs to ask your age, and then provide a system API by which apps can ask what bracket the account holder is in. This seems better than the current solution of every app asking independently?
Again, I'm probably missing something but it strikes me as pretty trivial to comply with?
The government really shouldn't be telling us how/what we can compute at all.
But on this specific point - it's a bellwether. They're doing this to lay the groundwork and test the waters for compulsory identification and/or age verification. Getting macOS, Windows, Linux, etc. to implement this WILL be used as evidence that compulsory identity verification for computer use is legally workable.
>The government really shouldn't be telling us how/what we can compute at all.
You could say the same thing about restaurants. "The government really shouldn't be telling us how/what we can cook at all."
When you are selling a product to the public, that is something that people have decided the government can regulate to reduce the harms of such products.
Being "trivial to comply with" is completely disjunct and not at all an argument against "this type of law is fundamentally at odds with the liberty and self-determination that open source projects require and should protect." It's a shot across the bow to open-source, it's literally the government telling you what code your computer has to run. It is gesturing in the direction of existential threat for Free software and I am not exaggerating. It's purposefully "trivial" so you don't notice or protest too much that this is the first time the State is forcing you to include something purely of their own disturbed ideation in your creative work.
Free software is already mandated to do a lot of things, like not defraud the user. If you make a bitcoin wallet that sends 5% of your money to the developer without asking I'm pretty sure you'll be prosecuted, so the government is compelling you to ask the user for consent to do that.
When you make food you're compelled to write the ingredients. We tolerate these because they are obvious and trivial, but pedantically, food labelling laws also violate the first amendment.
> Free software is already mandated to do a lot of things, like not defraud the user.
Surely you recognize the difference between "you cannot go out of your way to do crime" and "your software must include this specific feature"??
> When you make food you're compelled to write the ingredients.
Well, the point about how this affects open source is that under a similar California law, every home kitchen would need to be equipped with an electronic transponder whose purpose is to announce to the world what ingredient bucket you used for tonight's casserole.
If that’s true, I think the law is fine. There are good solutions for anonymous disclosure of information about you, the most mature being Verifiable Credentials, which is an open standard: https://en.wikipedia.org/wiki/Verifiable_credentials
You can disclose just a subset of a credential, and that can be a derived value (e.g. age bracket instead of date of birth), and a derived key is used so that it's cryptographically impossible to track you. I wish more people discussed using that, but I suspect that it's a bit too secure for their real intentions.
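To make the selective-disclosure idea concrete, here is a toy sketch of the salted-hash approach (the one SD-JWT-style credentials use): the issuer signs only a list of salted claim digests, and the holder reveals just one derived claim plus its salt. This is an illustration only - real Verifiable Credentials use public-key signatures (here a plain HMAC with the issuer's key stands in), and true unlinkability needs more machinery than shown:

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key

def digest(salt: bytes, name: str, value: str) -> str:
    """Salted commitment to one claim; the salt hides the value from guessing."""
    return hashlib.sha256(salt + name.encode() + value.encode()).hexdigest()

# Issuer: salt and hash every claim, then sign only the list of digests.
claims = {"name": "Alice", "date_of_birth": "2001-04-02", "age_bracket": "18+"}
salts = {k: secrets.token_bytes(16) for k in claims}
digests = sorted(digest(salts[k], k, v) for k, v in claims.items())
signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), "sha256").hexdigest()

# Holder: disclose only the derived age bracket, with its salt.
presentation = {
    "digests": digests,
    "signature": signature,
    "disclosed": [(salts["age_bracket"].hex(), "age_bracket", "18+")],
}

# Verifier: check the signature, then that each disclosed claim hashes to one
# of the signed digests. Name and birth date are never seen by the verifier.
expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                    "sha256").hexdigest()
assert hmac.compare_digest(presentation["signature"], expected)
for salt_hex, name, value in presentation["disclosed"]:
    assert digest(bytes.fromhex(salt_hex), name, value) in presentation["digests"]
print("verified: user is 18+")
```

The verifier learns only that the issuer vouched for "age_bracket: 18+", which is exactly the property the parent comment describes.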
In general, any proposal to use government ID for "age verification" over the internet is going to end in someone using it for mass surveillance, and it's probably not wrong to suspect that as the intention to begin with.
There is no benefit in doing that because parents already know how old their kid is. They don't need the government to certify it to them, and then they can configure the kid's device not to display adult content.
Involving government ID is pointless because the parent, along with the large majority of the general population, has an adult ID, and therefore has the ability to configure the kid's device to display adult content or not even in the presence of an ID requirement if that's what they want to do. At which point an ID requirement is nothing but a footgun to "accidentally" compromise everyone's privacy. Unless that was the point.
It doesn't even need to be that complicated. OS asks you your birthday at setup time. Stores it. Later, an app asks whether the user falls into one of the following brackets:
A) under 13 years of age, or B) at least 13 years of age and under 16 years of age, or C) at least 16 years of age and under 18 years of age or D) at least 18 years of age.
that's it. The OS can decide how it wants to implement that, but personally I'd literally just do get_age_bracket_enum(now() - get_user_birthday());
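A minimal sketch of that bracket computation, using the four brackets the law enumerates (names like `AgeBracket` and `age_bracket` are illustrative, not any actual OS API):

```python
from datetime import date
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under 13"
    AGE_13_TO_15 = "at least 13, under 16"
    AGE_16_TO_17 = "at least 16, under 18"
    ADULT = "18 or older"

def age_bracket(birthday: date, today: date) -> AgeBracket:
    """Map the birthday stored at OS setup to the coarse bracket an app may
    query. Only the bracket ever leaves the OS; the birthday is not exposed."""
    # Completed years, accounting for whether this year's birthday has passed.
    years = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day)
    )
    if years < 13:
        return AgeBracket.UNDER_13
    if years < 16:
        return AgeBracket.AGE_13_TO_15
    if years < 18:
        return AgeBracket.AGE_16_TO_17
    return AgeBracket.ADULT
```

An app calling the OS API would receive only the enum value, e.g. `age_bracket(date(2000, 1, 1), date.today())` returns `AgeBracket.ADULT`.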
I think the uproar comes because the well is already poisoned. People are already trained to respond with an outburst of anger to any law that mentions the age of the user, and will find excuses to rationalize that outburst, even when the law isn't that bad.
I mean, "compelled speech"? Really? That's people's argument? This is about as bad as the government compelling you to write a copyright notice.
Compelled speech is bad and it’s something we don’t do, at all. All kinds of bad things come with compelled speech. Mandatory loyalty oaths, erosion of the fifth amendment, compelled work to weaken encryption, etc.
The well should be poisoned. The whole idea is poison.
>We, collectively, only have ourselves to blame, and now it's too late.
No, "we" really don't. I wrote software. It's free. You're welcome to use it, or not. Nobody is forcing my software on you. You are not allowed to tell me that the software I wrote, for free, and gave to you, for free, needs to have features that I don't care about.
You have an LLM now. I'm obsolete, right? Do it. Build your nerfed distro, and make it popular. Oh, yeah... there isn't a single solitary distro built by an LLM, is there? Not even one. Wow. I wonder why...
Rather than age verification, this is what we should be doing instead:
Don't let phone manufacturers lock the bootloader on phones. Let the device owner lock it with a password if they decide to. Someone will make a child-friendly OS if there is demand. Tech-savvy parents should be able to install that on their kid's phone and then lock the bootloader.
What about non-tech-savvy parents?
There should be a toggle in the phone's settings to enable/disable app installation with a password, like sudo. This will let parents control what apps get installed/uninstalled on their kid's device.
But what about apps or online services that adults also use?
Apps and online services can add a password-protected toggle in their user account settings that enables child mode. Parents can take their child's phone, enable it, and set the password.
----
All it takes is some password-protected toggles. They will work better than every remote verification scheme.
The only problem with this solution is that it does not help certain governments build their global mass surveillance and propaganda apparatus, tech companies can't collect more of your personal info to sell, and they can't make your devices obsolete whenever they want.
The idea is to let parents decide which apps are suitable for their child, for each child. Password-gating app installation (just like sudo on Linux) is not only easier to implement and use, but also much more flexible and powerful than a fixed age-based rating system.
It also prevents the legitimization of app store monopolies because no centralized authority is needed to create or enforce a rating system. And there will always be apps that don't comply with a rating system out of privacy concerns (it leaks the user's age, which is just an extra data point to track you with), and then they'll eventually try to ban non-compliant apps from running on the device completely. That's what enforcing an age-based standard would take. And even then it would still not fulfill its (claimed) purpose that well.
Principle-wise, parenting should be the responsibility of parents, not governments or corporations. Those large organizations have their own agendas which are somewhat misaligned with the individual human being.
To the extent code is functional rather than expressive it is not speech, and when the government seeks to compel code, it generally seeks to compel function not expressive content.
(That doesn’t mean it is not a bad idea, and even perhaps unconstitutional for other reasons.)
Code is speech, though, and is protected by the first amendment: see Bernstein v. United States.
I don't think a cryptographic algorithm is "expressive" any more than it is purely functional; indeed, the 9th circuit evaluated and rejected the expressive/functional distinction for source code in the above case.
Regardless - code is speech, and the government cannot compel or prevent speech except in very narrow circumstances.
> Code is speech, though, and is protected by the first amendment: see Bernstein v. United States.
That is very much overstating the holding in the case [0], the most relevant part of which seems to be:
“encryption software, in its source code form and as employed by those in the field of cryptography, must be viewed as expressive for First Amendment purposes”
The ruling spends a key bit of analysis discussing the expressive function of source code in this field as distinct from the function of object code in controlling a computer.
A law compelling the provision of functionality, where writing source code is merely the most convenient way to comply, is not directing speech, any more than a law requiring delivery of physical goods is directing speech just because the most convenient method of delivery involves speaking with the person who physically holds them on your behalf.
> In the government's view, by targeting this unique functional aspect of source code, rather than the content of the ideas that may be expressed therein, the export regulations manage to skirt entirely the concerns of the First Amendment. This argument is flawed for at least two reasons...
I think you should read it a bit more closely. The court threw out the "functional/expressive" argument for source code, like I said in my original comment.
Secondly, what are you talking about, that source code is the most "convenient" way to implement this? It's literally the only possible way to present an interface to a user, ask them a question, and "signal" to other applications whether the user is a minor. You're being completely nonsensical there. There's no other way to do that: someone must write some code. The bill specifically says "an API"!
I think you should read a bit more closely, both to the decision, and to the post you are responding to (which addresses that), and to the context of what is being discussed in the thread (which is not "source code").
That's forced labor. I'm not required to write a line of code to please anyone. It's free software with no warranty. They have LLMs, let's see them build it. :)
Well, that's a 13th Amendment issue not a 1st Amendment one, but, in any case, its not forced if it doesn't direct who does the work to create the functionality, only requires you to have the functionality provided if you are doing some other activity, it is more of an in-kind tax. [0] (Now, if you want to make an argument that when the activity it is conditioned on is expressive that that makes it a 1A violation as a content-based regulation when the condition is tied to the content of the expressive act, that is a better 1A argument, that might actually have some merit against many of the real uses of, say, age verification laws; but “if I am doing this activity, I must either create or acquire and use software that has a specified function” is not, in general, a 1A violation.)
[0] It's not really that other than metaphorically, either, any more than every regulation of any kind is an “in-kind tax”, but its far closer to that than “forced labor”.
I'm not sure what you're trying to say, but if you are suggesting that writing an "API", as is legally required in AB1043, can be done without writing code I would be interested to know how!
Providing an API is required if you do some other thing, but you are not required to do that other thing. Requirements that are triggered by engaging in some other activity are not compulsions if the activity they are triggered by is not compulsory. (Now, whether restricting the thing that triggers the requirement by adding the requirement is permissible is a legitimate question, but that is not the question that is addressed when you ignore the thing triggering the requirement and treat the requirement as a free-standing mandate.)
> Provide a developer who has requested a signal with respect to a
particular user with a digital signal via a reasonably consistent real-time
application programming interface that identifies, at a minimum, which of
the following categories pertains to the user...
Yes, and the other thing you have to do for this to be applicable to you is choose to be an "operating system provider", as defined in the law.
If you don't want to write, hire someone to write, pay someone to provide an implementation that has already been written, or acquire an implementation already written that is available without payment, such an API, you can simply choose not to do what is defined as being an "operating system provider", and no obligation attaches.
No one is putting a gun to your head and forcing you to do labor to write code for an API.
If you write an operating system and distribute it (an act of speech) you are forced to do this or else you risk $7500/child/day in fines from the California AG. The law makes no distinction between Terry Davis and Microsoft. (In fact, you could say TempleOS is protected religious speech...!)
I don't know what's wrong with you, honestly, that you would so vivaciously defend this impractical, immoral and completely nonsensical law.
> I don't know what's wrong with you, honestly, that you would so vivaciously defend this impractical, immoral and completely nonsensical law.
I do, these people are entryists and they have evil goals in mind.
Do not hire people like this, and block them from working on your projects.
Bernstein v US says you’re right but let’s see if it gets there and hope they get legal reasoning right. One can hope the EFF and others are on this. Anyone know about any current challenges?
I'm not American but it seems to me like both things are violations, which is troubling because you enter into the disturbing necessity to start carving exceptions
obviously these regulations are very different, but both do compel speech
Just a reminder of what liability the CA age verification law imposes upon developers and providers.
It's not enough to adhere to the OS age signal:
> (3) (A) Except as provided in subparagraph (B), a developer shall treat a signal received pursuant to this title as the primary indicator of a user’s age range for purposes of determining the user’s age.
> (B) If a developer has internal clear and convincing information that a user’s age is different than the age indicated by a signal received pursuant to this title, the developer shall use that information as the primary indicator of the user’s age.
Developers are still burdened with additional liability if they have reason to believe users are underage, even if their age flag says otherwise.
The only way to mitigate this liability is to confirm your users are of age with facial and ID scans, as it is implemented across platforms already. Not doing so opens you up to liability if someone ever writes "im 12 lol" on your app/platform.
> if they have reason to believe users are underage
The law requires "clear and convincing information", not merely "reason to believe". And since the law requires developers to rely on the provided age signal as the primary indicator of the user's age, developers are not incentivized to create a system that uses sophisticated data mining to derive an estimated age. If someone posts a comment on a YouTube video saying "I'm twelve years old and what is this?", that would absolutely not require YouTube to immediately start treating that account as an under-13 account.
That would have to be litigated in court, and the easiest and cheapest way to avoid litigation is to just scan faces and IDs so you're sure your users won't upload or say anything that can bankrupt you while you sleep.
It would be at least as valid a strategy to avoid collecting any unnecessary personal information about your users, so that you don't have to worry about whether the information you've amassed adds up to "internal clear and convincing information".
Remember, only the state AG can bring a suit under this law, and the penalty is limited to $2500 per child for negligent violations. It's probably cheaper to get insurance against such a judgement than to implement an invasive ID-scanning age verification system (and assume the risks of handling such highly-sensitive personal information).
No platform is going to forgo analytics and demographic targeting for advertising; that's their bread and butter.
I'd also argue it's clear and convincing if a kid changes their profile picture to a selfie of themselves, says they're 12, says they're in grade school, etc. Any reasonable person would take that at face value.
> implement an invasive ID-scanning age verification system (and assume the risks of handling such highly-sensitive personal information)
It's already implemented as face and ID scans by all the major platforms as it is. The systems are already there and they're already deployed.
Apps and platforms already integrate with 3rd party age verification platforms who handle the face and ID data, nothing ever has to touch your servers.
> I'd also argue it's clear and convincing if a kid changes their profile picture to a selfie of themselves, says they're 12, says they're in grade school, etc. Any reasonable person would take that at face value.
That's so fragile, and it's not like they're making those claims to the site, it's natural language posting.
And someone who knows what they're doing would never take "I'm twelve years old and what is this?" at face value.
You've completely changed the scenario. A human doing a one on one examination and personally sending data is totally different from a website allowing an account to exist and browse.
I'm using send as a synonym for serve, would you serve such content to someone who presents themselves as a child on your platform? No.
Like do you really think someone who presents themselves as a child on Pornhub will stay registered and not banned? They aren't going to serve porn to someone who presents themselves as a kid.
I think it's pretty clear and convincing when someone presents themselves as a child on your platform. I'd be convinced and wouldn't take that liability on.
... except that analyzing profile pictures isn't exactly reliable (plenty of people use photos of their cats), people lie in chat, and advertising profiles are at best an educated guess.
The current analytics profiles are closer to "definitely into Roblox, 70% chance of being 13-18" than "This user was beyond any reasonable doubt born on 07-03-2002". Calling them "clear and convincing information" would be a massive exaggeration.
If you aren't already scanning ID or similar then you don't have clear and convincing reasoning to believe the user is underage.
This section targets spyware companies like Facebook, who already know damn well if the user is underage and this section forbids them from pretending they don't know.
It doesn't say you have to go and become Facebook.
I read it the exact opposite way: you are forbidden from using facial and ID scans solely for age verification (as the OS-provided signal shall be the primary indicator of age), but if you already need to obtain the user's age for other reasons using more reliable means (say, a banking KYC law requiring ID scans) you are not required to discard this more reliable source in favor of the OS-provided signal.
Do you want to go to court to find out where the line is? That's expensive, risky and time consuming. It's easier to just scan faces and IDs to make sure your users are of age and not take on that liability.
I think there is an unspoken concern among policymakers about how sophisticated AI is becoming. I think they envision a scenario of swarms of persuasive AI bots controlled by an adversary, pushing people to elect bad actors with bad policies. So the main objective isn't to protect the children; it is to eliminate anonymity! At some point these age verification requirements will go from answering a simple question to providing your ID. I'll add that there is also an aligned interest with companies that rely on ad revenue, as they don't want to serve ads to bots!
I wonder who is behind this sudden push for these age verification laws. This wasn't an issue until recently and suddenly there are not just laws in California and Colorado, but also New York and Brazil.
They are initiated by the same people - the government - and pursue the same goal - mass surveillance. They should 100% be fought against and grouped together.
There is a nascent industry of AI backed surveillance now so you can be 200% sure a lot of lobbying happened in closed rooms.
And then there are the desperate attempts to cover actual pedophilia from the people in power. I'll never look at a politician or a so called member of the elites the same way again.
> The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they’re 16 or 18 is too late. It’s a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them.
This resonates so much with me. I don’t want to control my kids. I will never be able to protect them from everything. I hope I won’t be able because I want to die before them. I want them to be able to navigate in the world and have all the cognitive tools necessary to avoid being fooled. I want to rest in peace knowing they can in turn educate their own children. I want to trust them and be relieved that I can focus on some tasks of my own without needing to constantly worry about them.
This just seems like virtue signaling. It's not a plan or proposed actions. Just a puff piece about how great things used to be. Not sure who is making their opinions based on what System76 has to say, but I guess they can now.
California may be able to target companies like System76, but it will be completely powerless against modular and decentralized distros like Debian and Arch.
Once vendors are forced to add hooks to some enforced age verification system, it will creep everywhere like cookie banners, which you cannot escape even in Antarctica.
The laws should regulate Meta, Google, Apple, TikTok, Microsoft and other closed-source vendors. Those companies wanted their monopolies; why should the cost of that regulation now fall on the speech of individual open-source developers?
Organize and fight the policy. Do not take your frustration out on people and companies that just try to adjust to a law. Talk to your representatives. Create educational websites similar to fightchatcontrol.eu.
But this is actually a really good policy? It just says every OS shall have a UI to turn parental controls on or off, and an API to check if they're turned on. That's a good thing. It satisfies the reasons people wanted age verification, without actually doing age verification.
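A minimal sketch of what that kind of API could look like, assuming the reading above is right that the OS only has to report whether parental controls are on and which bracket was chosen, with no identity or verification involved. All names and brackets here are invented for illustration; the law does not specify an interface.

```python
# Hypothetical parental-controls signal API (illustrative only).
# The OS reports only: are controls enabled, and which age bracket
# the account holder/parent selected. Nothing is verified.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT = "18_plus"

@dataclass
class ParentalControlsStatus:
    enabled: bool
    # Whatever bracket was selected at account setup; the OS
    # performs no verification of it.
    bracket: Optional[AgeBracket]

def get_parental_controls_status(user_settings: dict) -> ParentalControlsStatus:
    """Return the signal an app could query. If controls are off,
    no bracket is reported and the app learns nothing."""
    if not user_settings.get("parental_controls_enabled", False):
        return ParentalControlsStatus(enabled=False, bracket=None)
    raw = user_settings.get("age_bracket", "18_plus")
    return ParentalControlsStatus(enabled=True, bracket=AgeBracket(raw))

# Example: a child account a parent set up with a bracket
status = get_parental_controls_status(
    {"parental_controls_enabled": True, "age_bracket": "13_15"}
)
print(status.enabled, status.bracket.value)  # True 13_15
```

The key property is that the signal is opt-in and self-asserted: an adult account (or one with controls disabled) simply reports nothing, which is why this design avoids actual age verification.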
At least in the US check out the 2014 Princeton study on citizen preferences. Our Democracy is a sham, those mechanisms don’t actually have any power to change anything.
These laws prove one thing: the politicians know nothing about the subject matter.
What is almost more disturbing: at least some of the politicians will have been advised by consultants or lobbyists who know what they're advocating for. What's their game?
I don't really mind age verification itself, since we do it constantly in real life (outside the internet) for products and services meant for adults, like age-rated movies and alcohol.
I do mind a lot of the data processing, though. I do not want my ID, personal preferences or any metadata about myself stored anywhere, ever. And if, by some weird law, some process has to store some data about me somewhere, I want very easy, full access to it so I can delete it whenever I want. You can keep the process itself, but anything else has to go.
Yes, I have a passport. Yes, it was verified and validated. No, you may not know or store the color of my eyes.
I also do not want curious kids to be prosecuted for poking around. They should teach them and thank them for finding flaws.
If my parents had blocked me from doing admin stuff (which was not even possible back then), I would certainly not have started to code by myself when I was a tween.
More concerning than that: none of this seems to be because they actually care about teenagers and kids.
> The only solution is to educate our children about life with digital abundance.
I stopped reading at this point, as this is utter nonsense. I mean, it's a beautiful idea, but any person with more than two neurones knows that real life doesn't work like that. We have law enforcement and prisons because, despite our best efforts in education, some people do go on to become criminals due to a number of factors.
I'm not saying that the present number of laws is adequate, but the solution is a different set of laws, not the complete lack of them. The idea of "simply educate children and we'll all be fine" is utterly moronic.
I don't think there's a good legalistic solution to this other than to delegate the work of finding solutions to a regulator. No single law can be broad enough to cover everything. But empowering a regulatory body to research problems full time and to create solutions is probably the best approach.
This is becoming a wedge issue. It should not be. As an industry, we can solve this. As an industry, we have to. If we don't, legislators will do it for us, and they'll make a bad job of it. If you petition your local legislator wherever you are in the world, that's cool, but if this is solved locally we will see serious fragmentation. As an industry, projecting one's politics isn't going to make much difference.
> A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over.
Er, how does a child install a VM from a non-admin account?
> Or the child can simply re-install the OS and not tell their parents.
It's gonna be pretty easy to detect when the parent finds programs are missing/reset or the adult account they created can't log in with their password.
The California law seems entirely tame and sane, whereas the New York bill seems pretty heavy-handed and authoritarian. They are in no way similar to each other.
I could see them eventually going far enough to bypass all of that and either requiring age verification at the point of the internet uplink on the ISP side or making it a crime similar to using a fake ID to buy alcohol if you try to bypass it. And then also punish companies that happen to be serving underage/non verified users.
There is already age verification at the ISP level: they only sell internet service to adults. What the adults choose to do with it, or with whom they share it, should be of zero concern to the government.
Of course, that's an ineffective argument, because the long-term goal of these laws (in the sense of, "the goal of the system is what it does") was never going to be about keeping kids off the Internet.
Yes, it will be ineffective, so then they will point at all those examples, but will they decide the law is stupid? Of course not.
The computers are not secure, and so they should only be able to run "verified" operating systems using attestation mechanisms. This was always where this was ultimately going. The idea has been fermenting since DVD players got copy protection.
It's the planet-destroying asteroid. We knew the trajectory; we always knew it was coming for us. But once you can see it with the naked eye, it's too late to do anything.
Why are all of these attempts at controlling the web coinciding at the same time now? I don't think it is a coincidence that this is happening at the same time that the younger generation wakes up to our greatest ally.
I'll echo what I've already said elsewhere in the comment section. It's about AI! Particularly massive swarms of persuasive AI controlled by an adversary convincing the public to elect bad people with bad policies. Also, companies that rely on ad revenue would love to serve ads only to humans instead of bots.
> Why is this an international movement? Suddenly, simultaneously, all over the Western world?
Sometimes kids hurt themselves through the use of the internet. And their parents lash out to blame someone [0]. And mainstream media pick up these stories. And the worry spreads. And more and more adults of voting age say that yeah, it's only reasonable to protect the kids from that internet monster, because kids are trusting and vulnerable, and won't somebody please think of the children. And they do not push back against age restriction campaigns. And so it goes...
As for the Western world, it generally moves in lockstep, doesn't it?
So where are those big protests and public calls for online age verification? It all seems to be coming from the very top. I have not heard of anyone who actually wants any of this. The fact that politicians are to be excluded from European regulations is only proof that it's all a scheme to kill what remains of privacy and freedom of speech online.
On using VMs: I suggested something similar earlier https://lemmy.ml/post/43994511/24315514 so it's clearly not a deep or original idea. It will be figured out quickly. In fact, any kid reading that article or those comments is probably already researching the topic and chatting about their successes and failures with friends. No way this can hold.
I was gifted my first computer, running Windows 95, at 11 years of age. By age 13 I was probably among the five people in my town who best understood how to do stuff on a computer. By 16 I was making Pokemon ROM hacks and Flash animations for Newgrounds, and translating manga in Photoshop for pirate sites. By then I knew my entire life would be tied to computers somehow.
Now some 50-60yo politician who has never even created a folder on their desktop without help wants to dictate how I should have used my device?
So maybe instead of trying to control the people we could try controlling the corporations responsible for this problem? Bring back the old internet. Make addictive services and algorithmic recommendations illegal. Make commercial entities more responsible for the services they offer.
The "Age verification" is simply an excuse to add tracking on system software. Very soon they are going to do the same to hardware too, I guess, so it is going to be very hard to find hardware that is truly safe from tampering.
And modern hardware is so complex that it is impossible for individuals to build their own. We are no longer in the 8-bit/16-bit era. And considering the power of AI, individuals pretty much mean nothing to the elites.
I never thought the dystopian future would be so close; I always assumed it was X years away. But the legislation, and the lawyers behind it, are definitely very efficient at this kind of thing.
I'm surprised zero-knowledge proofs have not been mentioned. This is a technique where (for example) the government signs your digital license, then you can present a proof that you are over 18 to a site without revealing anything else about yourself. ZKPassport exists, Privacy Pass is an implementation being standardized by the IETF, and Google is working on a similar implementation. Granted, these are not yet widely used, but I'd be very interested in hearing HN's thoughts on this.
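The core idea can be sketched with a toy Schnorr proof: demonstrate knowledge of a secret x behind a public value y = g^x mod p without revealing x. This is only an illustration of the zero-knowledge mechanism; real systems like ZKPassport and Privacy Pass use vastly larger parameters and richer statements (e.g., "this government-signed credential says over-18"), and the tiny constants below are chosen solely for readability.

```python
# Toy non-interactive Schnorr proof of knowledge (Fiat-Shamir).
# Illustrative only: parameters are far too small for real use.
import hashlib
import secrets

p = 1019   # small safe prime: p = 2q + 1
q = 509    # prime order of the subgroup we work in
g = 4      # generator of the order-q subgroup

def H(*vals: int) -> int:
    """Fiat-Shamir: hash the transcript into a challenge in [0, q)."""
    data = ",".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prove knowledge of x where y = g^x mod p, revealing only (R, s)."""
    k = secrets.randbelow(q - 1) + 1   # one-time random nonce
    R = pow(g, k, p)                   # commitment
    c = H(g, pow(g, x, p), R)          # challenge from hashed transcript
    s = (k + c * x) % q                # response; x itself stays hidden
    return R, s

def verify(y: int, R: int, s: int) -> bool:
    c = H(g, y, R)
    # g^s == R * y^c (mod p) holds exactly when the prover knew x
    return pow(g, s, p) == (R * pow(y, c, p)) % p

x = 123              # the prover's secret "credential"
y = pow(g, x, p)     # public value, e.g. published by an issuer
R, s = prove(x)
print(verify(y, R, s))  # True: the proof checks out, x was never sent
```

In an age-attestation setting the statement proved would instead be something like "I hold a credential signed by the issuer whose age field is >= 18", but the privacy property is the same: the verifier learns only that single bit.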
Let's try to figure out what a good policy solution looks like:
- entities with harmful or adult content must require proof of the user being over 18
- entities cannot ask for, store, or process more detailed information without explicit business needs (this should be phrased in a way that disallows Instagram from asking for your birth year, for example)
- entities cannot share this data with other sites, to avoid privacy leaks, unless there is an explicit business need (this is tricky to get right; someone might try to set up a centralized non-anonymous age-verification service, erasing many benefits)
- entities must in general not store or process information about the user that is not strictly relevant to their function
- there ought to be different treatment for anonymous users (which ideally these protocols will allow, just submit proof of work plus a ZKP that you are a human and authorized to access the resource) compared to pseudonymous and non-anonymous users, who are more at risk of being censored or tracked.
There's some loopholes here, but if the government can enact good policy on this I personally think it's feasible. Please share your thoughts, if you have a minute to do so.
There's also an interesting political split to note among the opposition here. I see a lot of people vehemently against this, and as far as I can see this is largely for concerns regarding one of 1) privacy abuses, 2) censorship, or 3) restriction of general computing. Still, there is a problem with harmful content and platforms on the web. (Not just for minors, I don't think we should pretend it doesn't harm adults too.) The privacy crowd seems to be distinctly different from the computing-freedom crowd; the most obvious example is in attitudes towards iOS. As I personally generally align more towards what I perceive as the privacy-focused side, I'm very interested in what people more focused on software freedom think about zero-knowledge proofs as a politically workable solution here.
Sounds cool but do you believe it's really about protecting children? Since when do politicians care about this so much? I have not heard of any protests or public calls for better child protection online. It's really all about control and elimination of freedom of speech and information. They want to set up a legal framework and get people more comfortable with the idea of closed and controlled internet. Then they'll argue that age verification alone is ineffective because its too easy to circumvent so they'll start rolling out less "private" solutions that will benefit them and their sponsors greatly.
I'm not sure anyone is being this explicitly malicious. Parents' groups, child safety organizations, and researchers have been at this for years, and while I agree with you that the solutions are very misguided, I think it does our own priorities a disservice to stick our fingers in our ears with regards to their concerns.
Can you give an example of how less private solutions will benefit them and their sponsors? I could see big tech / adtech and government surveillance benefitting but I don't think they're the ones behind this push.
As another example, consider the "small web" community, say at Bear Blog, which is a group of technically sophisticated people who routinely complain about the harms of traditional social media. I doubt most of them would support this particular implementation, but they show that there is popular support for solving the ills of at least one of the targets of this legislation.
So to answer your question, yes, I do see this as an attempt to protect people. The restriction of free speech is in my opinion a side effect of this legislation opening the way to worse-designed laws in the future.
I admire the attempt to make a logical argument against these laws taken at face value, but I can't help think that's giving them too much credit.
These laws have spread like wildfire around the world with many countries and regions rolling out similar legislation within months of each other, despite the stereotypical lethargy of any and every legislature. That's not the work of some popular uprising of parents clamouring for age verification.
I fear debating the merits of the argument is missing the point; they don't care. They don't care about children, they don't care about the argument, they just want the control.
Don't see how anyone is gonna make me do anything. I'll just evade anything like this through various means and opt out of things that reduce my quality of life (by destroying my freedom and making me a slave).
Their attempts at a "solution" are quite interesting. One other user suggested that GUI tools ask for the age of the user.
Well... I have a very strong opinion here. I have been using Linux for over 20 years, and I will not ever give any information about my personal data to the computer devices I own and control. So any GUI asking for this specifically would betray me, and I will remove it. (Granted, it is easy enough to patch out the offending betrayal code and recompile the thing; I do this with KDE, where Nate added the pester-donation daemon. Don't complain about this on reddit #kde, he will ban you. KDE needs more money! That's the new KDE. I prefer oldschool KDE, but I digress, so back to the topic of age "verification".)
The whole discussion about age "verification" appears to be about forcing everyone into giving data to the government. I don't buy for a moment that this is about "protecting children". And even if it were, I could not care any less about the government's strategy. Even more so as I am not in the country that decided this in the first place, so why would I be forced to comply with it when it ends up with GUI tools wanting to sniff my information and then give it to others? For similar reasons, one reason I use ublock origin is to give as little information as possible to outside entities when I browse the web. (I am not 100% consistent here, because I mostly use ublock origin to re-define the user interface, which includes blocking annoying popups and whatnot; that is the primary use case. But lessening the information my browser gives to anyone else is also a good thing. I fail to see why I would want to surrender my private data unless there is really no alternative, e.g. online financial transactions.)
I also don't think we should call this an age "verification" law. It was very clearly written by one or several lobbyists who want to sniff more data off of people. The very underlying idea here is wrong; I would not accept Linux becoming a spy-tool for the government. I am not interested in how a government tries to reason about this betrayal, since none of those attempts at "explanation" apply in my case. It is simply not the job of the government to sniff after all people at all times; that would normally require a warrant or reasonable suspicion of a crime. Why would people surrender their rights here? Why is a government suddenly sniffing after people? These are important questions. That such a law suddenly emerges now, but not in the last 25+ years, is super-suspicious.
I agree this is bullshit. But at least you can lie to the OS about your age. Technically it's almost the same as the OS asking you, "Pick a date and a number from 1 to 100 and I'll report it to the software / websites. But don't worry, I won't verify it." If you pick randomly (over age 18 or whatever), technically you don't provide any useful information.
This would be the least of the evils (compared to ID verification). But the bigger problem is that the implementation is very flawed, and it doesn't appear to be very effective. People, including children, can lie. Multiple people can share the same account. And there are many devices that cannot be updated (such as embedded ones). My concern is that these idiots might introduce even more extreme laws when they see that it isn't effective enough.
I hope it causes so many problems (implementation, backlash, etc.) that it is eventually cancelled.
The whole thing is clearly intended to be optional anyway. If your parents want you to have an unrestricted account, they can say you're over 18, or they can give you full control over the computer. If they want you to have a restricted account, they give you a non-root user account with a different age bracket.
It's simple: you can't drink under age and you can't drive a car under age. The harm that can come from the internet is well above this, so it makes sense to also ask for ID. I agree, though, that it needs a system to protect the information.
It's not about the system being fail-safe in every case; it's about the general rule that, by default, this is not legal, so children are protected and the burden isn't put on every parent or family.
"But Jonas's parents allow him to do that": in reality, Jonas's parents should not have a say in this.
The harm is well above (even that is arguable), but the probability of getting harmed is so much lower it's not even comparable.
Alcohol? Yeah, one or two drinks too many and you're in the hospital getting your stomach pumped. Driving? Blink and you run over a kid. The internet? You can spend evenings and nights there and not be harmed in any way.
We have full generations of kids who can now be studied for the effects of the internet, starting with millennials. I won't pretend the internet is a better place now than it was when I was a teen, but it appears to me the "dangerous" things are more focused and concentrated (at least for kids): social networks. It's still a minefield, but with leagues between any two mines.
Porn has always been the topic touted for child safety, because it's scary and resonates with conservatives and religious people. Access to it is roughly the same today as it was then, and it's arguably less dangerous today because the dirty stuff is hidden deeper, and thus less likely to be stumbled upon.
But other than porn, the thing that changed the most is social networks. Addiction, bullying, etc. Facebook 15 years ago was not a serious place. Its equivalent today is the best place to get roasted by fellow kids and bullied 24/7 with no way off the hook. The damage is psychological, which is insidious, but not systematic. Not every kid will get bullied, not every kid will be addicted to the algorithm(tm), etc.
In the end, education plays a bigger role than simple age verification. Stimulate your kids, give them things to do other than doomscrolling, and get them on the dark corners of the internet to give them curiosity about the world and un-sanitized stuff (hacking in all forms, etc).
It's simple. Don't comply. Software engineers, despite not having the same requirements as mechanical engineers, should uphold the ethical obligations of their craft. This law is harmful, and it compels speech; code has been _proven_ to be speech. Do. Not. Comply.
1) The issue doesn't matter much. Corporate takeover of the internet caused severe damage, but overrunning social media with LLM-generated content is a mortal wound. Roughly the same number of humans will be using social media in 2030 as currently use CB radio. Remember, nearly a fifth of the population was using CB radio at its peak in the late 70s. It's too little, too late; closing the barn door after the horses have left is pointless, like rearranging deck chairs on the Titanic after it hit the iceberg. Once the advertisers get wise to the scam that nobody is seeing their ads except bots, the problem will kind of fix itself. I think TPTB want to use "protecting kids from social media" as the public face of why social media will crash and burn soon, to avoid discussion of how LLMs actually killed it, because authoritarians love LLMs and they're in charge (although seemingly everyone else hates LLMs, so I'm sure this will end well).
2) Most of the anti commentary reads a lot like addict speak IRL. Talk to a drunk about how it would be a great idea not to drink or a carb addict about how they should not eat donuts and you'll get absolutely rage blasted in return for threatening their addiction, which in the case of an addict, is their identity. "Well it would be the end of the world if people (me) were not drunk and other people (projection of me) will do anything to feed their addiction so obviously no effort should be made to limit addictions and it won't work anyway because other people (me) will even drink mouthwash or homebrew their own moonshine to get drunk" etc. Note I'm not completely against the anti's and they make some very good points that should be considered, but raging like an addict after their drug of choice is threatened is a VERY bad look and is not helping their case at all, if anything it strengthens the case against the anti's. What the pro's don't understand is you can't fix an addiction externally, addicts gotta addict and punishing them and making them miserable might help the pro's feel superior or at least thankful they're not addicted, but it never helped no one. Social media is "an ill of society" and should be treated as such including sensible regulation, protection of threatened groups, treatment for the addicts, and some compassion and acceptance of the addicts either returning to the real world or dying in the addicted world.
I have been saying this all along. You can't prevent kids from getting around restrictions. All you can do is try to help them understand what they'll find on the other side and what some of their options are. Age-gating is just a way to push forward a surveillance agenda. The fact that it's happening everywhere all at once proves my point.
It's pushed both by those with surveillance agendas and AI companies like Anthropic, who donated millions to PACs and politicians that are pushing online age verification and surveillance laws[1].
The goal for the AI side is that they get to be censors and gatekeepers of all user-generated content on the internet, their models will rank/grade/censor content for age-appropriateness and they will have the pleasure of being paid to train on all new content uploaded to the internet in real-time, in perpetuity.
> What you're saying is we should allow kids to buy tobacco, to gamble, to purchase Meth and Heroine because Kids get around restrictions anyway
This is a false equivalence. All of the above are vices that objectively carry more harm than good. There's no inherent harm in using a computer; there's a subset of ways in which using a computer can be harmful, which kids can be taught to avoid or navigate. There's no subset of meth use that isn't harmful.
Well then, it's good that computers will have the option to disable the harmful stuff, so a parent can let a kid use the good stuff without the harmful stuff.
We should collectively make sure that any PRs trying to land these changes are very well reviewed. We wouldn't want any security holes to slip by. I think a couple dozen rounds of reviews should suffice. I've heard great things about how productive AI can be at generating very thorough code quality assessments. After all, we should only ship it once it's perfect.
To be more direct - if you're in any editorial position where something that smells like this might require your approval, please give it the scrutiny it deserves. That is, the same scrutiny that a malicious actor submitting a PR that introduces a PII-leaking security hole would receive. As an industry we need to civil disobedience the fuck out of this.
I remember idiot lab staff telling me in the early 1990s that I could not install the free version of the Mozilla browser on university lab computers. My go-to reply was: can you effing read the damn TOS? "Free" was in the very first sentence.
I think all this shit is silly. If I could manage to figure out how the internet worked at 12, then so can my son. Or well, a bit younger, but it’s fine. He has at least one parent to guide him.
I'm in two minds about this. I think that by and large We Have A Problem. And I don't mean a problem with children on the internet. We have a problem with people on the internet. There are so many examples of grown adults who have clearly become addled.
I live in the UK and work in London. I can go on X and look at what Elon Musk is posting about the UK, and as a reasonable person I can quite reasonably say he's gone mental. The algorithm has broken that man's brain. And it's not just him: a whole slew of establishment women lost their absolute minds over the trans issue (and Graham Linehan), and Mumsnet became a centre for radicalization. As someone who grew up on the internet at quite a sweet spot, I'm very comfortable looking at that stuff and going "Oh yeah, you guys are being groomed by these algorithms and you're defenceless against it".
There's a whole load of "How do we protect the children from this", but I don't think there's actually been much of a reckoning with how grown adults are getting sucked into this vortex. The algorithms on the internet clearly have some trap doors that just absolutely funnel people into crazy places.
All of which is to say: we have a serious problem that's affecting everyone, not just kids, and I think we've got almost no answers for how to tackle it.
The result is this: poorly-thought-through sweeping laws that aren't solving the problem and have massive negative side effects. I think Jonathan Haidt has a lot to answer for in funnelling this complex issue, which affects everyone, into a reactionary "won't someone think of the children!" campaign for banning technology for kids.
The problem is that putting restrictions on adults comes with a thousand times more outrage than putting restrictions on kids, and look how much outrage there already is about putting restrictions on kids at the discretion of their parents. If we can't even give parents the option to keep kids away from bad stuff, then mandatorily keeping adults away from bad stuff is a complete non-starter. They'd probably burn down Parliament.
Aaaaand then to throw it all away at the end with "well, when the rubber meets the road we'll comply anyway; thanks for inhaling my hot air." Take a damn stand and dare them to sue the hacker known as Linux or whatever.
I'd say that anger is better directed towards the legislators in charge of creating these absurd policies, not the folks at System76. It's not reasonable to expect a company to sacrifice its entire business on a moral battlefield.
I mean, genuine question, is Linux Mint or MX Linux endangered by this?
Unless I'm missing something, I have zero concern for companies who sell out by complying.
The code was "free as in freedom" when you decided to build your company on it; and while you're not legally obligated to defend that freedom, I, and hopefully other consumers, feel that you are morally obligated to.
I think this is the way that Linux desktop distributions are endangered, quoting from the article: "... apps and websites will not assume liability when a signal is not provided and assume the lowest age bracket. Any Linux distribution that does not provide an age bracket signal will result in a nerfed internet for their users."
Good words, glad to see more companies taking a principled stance on these important matters. That leading quote is great for sharing with non-technical friends. We have 365d 23h of non-voting time to take direct action to make our world better.
This is the one thing that truly scares me. I've decided I'm not going to verify my age anywhere or use facial recognition apps to login anywhere. And this is a much bigger fear for my job than AI.
At the moment it's only some countries banning porn, social media, and gambling. But how soon will I have to do it for a work app? And will I lose my job if I refuse?
I don't buy the argument that because children might bypass parental controls, devices should not have parental controls.
>Limiting a child’s ability to explore what they can do with a computer limits their future.
Parents don't want to limit their children from writing software. Saying that limiting minors' access to porn will limit their future is another argument I don't think many will agree with.
The prostitutes pushing for this do not deserve words. They deserve ridicule, public humiliation, and worse. The computer is a tool. Whoever would encumber it is an obvious shill for the corporations (google/apple/microsoft) who would like to attach an identity (i.e. tolls and controls) to actions prior generations could do freely and without surcharge. It is a modern-day enclosure movement. Its proponents should be juicily spat upon.
I do not think the proposal is smart or that it will work, but I am more worried that some people seem to think they hold the absolute truth (on any side of the debate).
Parents keep saying this is untenable and you guys keep telling them to just parent harder. I think that dismissing their concerns will lead to the most egregious worst of all worlds age verification. Never tell people that their problems aren't real.
I don't really see a problem with there being a standard API (or even syscall!) to retrieve a person's age bracket, with various apps being able to easily implement it. But please make it fucking optional.
Make it optional and assume an adult otherwise. It's a good idea if it's optional and doesn't come with dumb fines. You could have fines for not enforcing it / not using the API [porn sites], which already exist [and don't work, since one button is not age verification].
I see this as a good way for parents and institutions to set up their phones, school laptops etc and would pretty much solve the large majority of these issues while having a fraction of the invasiveness.
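A minimal sketch of what such an optional, user-controlled API could look like. Everything here is hypothetical (these class and method names are not any real OS interface); the point is just that the bracket is unset by default and never verified:

```python
from enum import Enum
from typing import Optional

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"

class DeviceSettings:
    """Hypothetical OS-level store for an optional age-bracket preference."""

    def __init__(self) -> None:
        self._bracket: Optional[AgeBracket] = None  # unset by default

    def set_age_bracket(self, bracket: Optional[AgeBracket]) -> None:
        # A parent (or the user) may set this; nothing verifies it.
        self._bracket = bracket

    def get_age_bracket(self) -> Optional[AgeBracket]:
        # Apps receive None when no preference was configured and,
        # per the comments above, should then assume an adult.
        return self._bracket

settings = DeviceSettings()
assert settings.get_age_bracket() is None           # no signal configured
settings.set_age_bracket(AgeBracket.TEEN_13_17)     # e.g. set up by a parent
assert settings.get_age_bracket() is AgeBracket.TEEN_13_17
```

The design choice being argued for is entirely in the `None` default: an unset value carries no information about the user, and the fallback behavior ("assume adult") lives in the apps, not the OS.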
The law says it must ask for age when creating an account other than the first one. So that's mandated. But there's no verification of the age you enter, and in fact, it's forbidden to verify it. It's really just to give parents the option to set up child accounts.
Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it.
If I ever find some idle time, I'd like to make an agent that surfs the web under my identity and several fake ones, randomly, according to several fake personality traits I program. Then, after some testing and analysis of the generated crawl patterns, I'd release it as freeware to allow anyone to participate in obfuscating individuals' behaviors.
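The persona-noise idea above could start as something like this. It is purely illustrative: the personas and topics are made up, and a real version would drive an actual browser rather than emit a plan:

```python
import random

# Hypothetical interest profiles; a real agent would derive richer traits.
PERSONAS = {
    "gardener": ["seed catalogs", "composting", "frost dates"],
    "gearhead": ["engine swaps", "track days", "tire reviews"],
    "baker":    ["sourdough", "oven thermometers", "flour mills"],
}

def crawl_plan(persona: str, steps: int, seed: int = 0) -> list:
    """Produce a reproducible fake browsing plan for one persona."""
    rng = random.Random(seed)  # seeded so test runs are deterministic
    topics = PERSONAS[persona]
    return [rng.choice(topics) for _ in range(steps)]

plan = crawl_plan("baker", 5)
assert len(plan) == 5 and set(plan) <= set(PERSONAS["baker"])
```

Interleaving plans from several personas (plus the real user's traffic) is what would muddy any behavioral profile built from the combined stream.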
You also need to account for how "easy" it is to de-anonymize a profile.
(Sorry I don't have links to sources handy.)
Differential privacy is just a bait to make surveillance more socially acceptable and to have arguments to silence critics ("no need to worry about the dangers - we have differential privacy"). :-(
Yes, but in this case which we're discussing:
It may oftentimes be trickier than that, as content is often mixed, of course. My 10-year-old hit me with a request yesterday to play Among Us, where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused.
The bad actor still gets ROI, eg 'paid', for another bit of user data.
Making the overall system less useful is good. However, not allowing a company to profit is paramount, and giving fake info still allows for that: even with fake info, many metrics on a phone are still gamed and profitable.
That's why they're collected, after all. For profit.
If the info becomes bad, it becomes much less useful and valuable.
I'm in the US and we do need some rights to privacy.
So put the content tag at the granularity of the content.
Around 20 years ago, Germany actually drafted a law that would have enforced such a system. I still have a chart on my blog that explains it: https://www.onli-blogging.de/1026/JMStV-kurz-erklaert.html. Content for people over 16 would have had to be marked accordingly or be taken offline before 22:00; plus, if your site had a commercial character (which according to German courts is every single one in existence), you would have needed to hire someone responsible for protecting teenagers and children (a Jugendschutzbeauftragter).
Result: it was seen as a big censorship machine, and I saw many sites and blogs shut down. You can maybe hold that law partly responsible for how far behind German internet enterprises still are. Only a particular kind of bureaucrat wants to do business in an environment that makes laws such as this.
Later, the law wasn't actually followed. Only state media still has a system that blocks films for adults (= basically every action movie) from being accessed without age verification before 22:00.
You have that with any of these schemes. They're almost certainly going to be set up so that you get in trouble for claiming that adult content isn't adult, but not for putting non-adult content behind the adult content tag.
Then you would be able to avoid legal questions by labeling your whole site as adult content, with the obvious drawback that then your whole site is labeled as adult content even though most of it isn't.
But using ID requirements instead doesn't get you out of that. You'd still need to either identify which content requires someone to provide an ID before they can view it, or ID everyone.
That's an argument for not doing any of these things, but not an argument for having ID requirements instead of content tags.
But you are right. It's an argument that "just mark content accordingly" is also not a better solution, not that ID requirements are in any way better. The only solution is not to build this censorship infrastructure at all, because no matter which way it's done, it will always function as one.
That's how you get the thing where instead of using different equipment to process the food with and without sesame seeds, they just put sesame seeds in everything on purpose so they can accurately label them as containing sesame seeds.
That is pretty much what the UK Online Safety Act requires: https://en.wikipedia.org/wiki/Online_Safety_Act_2023
Many small forums had to simply shut down, as was widely reported on HN at the time.
The alternative is that "just to be safe" you'll mark your entire site as needing age (identity, stool sample, whatever) verification. A single piece of sensitive content sets the requirements for the entire site.
"Dad, I can't do my math homework, a pop up says you need to provide a copy of your bank statement, your mom's maiden name, and a copy of your birth certificate, SS card, and drivers license, and can you hurry up Dad, my homework is due tomorrow morning." And people will fall for this once they get used to the system being absurd enough.
The fraud machine must be kept fed...
> The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over
It's precisely how I worked around the parental controls my parents put on my computer when I was ~12. Get VirtualBox, get a Kubuntu ISO, and voilà! The funniest part is, I did not want to access adult content, but the software had thepiratebay on its blacklist, and that I did want to access.
In the end, I proudly showed them (look, ma!), and they promptly removed the controls from the computer, since you can't fight a motivated kid.
That's assuming the parental controls allow the kid to create a virtual machine. And then that the kid knows how to create a virtual machine, which is already at the level of difficulty of getting the high school senior who is already 18 to loan you their ID.
None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest.
I honestly don't really agree on the difficulty: if this becomes a commonplace way to bypass such laws, you can expect TikTok to be full of videos about how to do it. People will provide already-installed VMs as a turnkey solution. It's not unlike how generations of kids playing Minecraft learnt how to port forward and how to install VPNs for non-alleged-privacy reasons: something that was considered out of a kid's reach became a commodity.
> None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest.
On that we agree, and it makes me sad. The gap between the computer literate and illiterate will only widen as time passes. Unmotivated kids will learn less, and motivated ones will get a kickstart by going around the locks.
That's assuming the permission is for "use of kernel-mode hardware virtualization" rather than "installation of virtualization apps".
Notice that if the kid can run arbitrary code then any of this was already a moot point because then they can already access websites in other countries that don't enforce any of this stuff.
It's just a bunch of clicks, even under linux.
Just install virtualbox. It literally walks you through a VM creation.
I promise there are people who can't figure out how to do it.
And again, the point of the lock on the door where you keep the porn is not to be robustly impenetrable to entry by a motivated 16 year old with a sledgehammer, it's only to make it obvious that they're not intended to go in there.
Regular people want to get things done, the tinkering is not a goal for them in itself and they gravitate to simple and convenient ways of achieving things, and don't care about abstract principles like open source or tech advantages or what they see as tinfoil hat stuff. But if they want to see their favorite TV series or movie, they will jump through hoops. Similarly for this case.
that is the job of parents/guardians
That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run? Then specifically design your telemetry to avoid logging which version is running?
You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag. That's hardly PII. More like a dark/light mode preference, or your language settings (which your browser does send).
Suppose you had an ID requirement instead. Are you going to make two different versions of your game or website, one for people who show ID and another for people who don't? If so, do the same thing. If not, then you have one version and it's either for adults only or it isn't.
> You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag.
Except that you essentially are reporting your age, because when you turn 18 the flag changes, which is a pretty strong signal that you just turned 18 and once they deduce your age they can calculate it going forward indefinitely.
This is even worse if it's an automated system because then the flag changes exactly when you turn 18, down to the day, which by itself is ~14 bits of entropy towards uniquely identifying you and in a city of a 100,000 people they only need ~17 bits in total.
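The back-of-the-envelope numbers above can be checked. If the exact birthdate is drawn from roughly 45 years of plausible adult birthdays (the 45-year window is my assumption for where the comment's figure comes from), it carries about log2(365 × 45) ≈ 14 bits, and singling out one person among 100,000 needs about log2(100,000) ≈ 16.6 bits:

```python
import math

# Exact birthdate, assuming ~45 years of plausible adult birth years.
birthdate_bits = math.log2(365 * 45)

# Bits needed to uniquely identify 1 person out of 100,000.
city_bits = math.log2(100_000)

assert 13.9 < birthdate_bits < 14.1   # ~14 bits, matching the comment
assert 16.5 < city_bits < 16.7        # ~16.6 bits, i.e. "~17 bits in total"
```

So the flag flip alone gets a tracker most of the way to a unique identity in a mid-sized city; a couple more bits from any other signal closes the gap.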
Games already have PG ratings and similar in different countries; I don't see the issue there. Web content could set an age-appropriateness header and let browsers deal with it, either for specific content or for the whole website if it relies on e.g. addictive mechanics.
Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work.
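One way such a header could work, sketched below. The header name `X-Age-Rating` and its values are hypothetical, not an existing standard (though it is in the same spirit as the old RTA label): the server tags the content, and the comparison against a parent-chosen bracket happens entirely on the device, so nothing about the user is sent anywhere.

```python
# Server side: the response carries a hypothetical rating header.
response_headers = {
    "Content-Type": "text/html",
    "X-Age-Rating": "16",   # hypothetical: minimum suitable age for this page
}

# Client side: the browser compares the tag against a locally configured
# bracket chosen by a parent; no information about the user leaves the device.
def should_block(headers: dict, local_max_rating: int) -> bool:
    rating = int(headers.get("X-Age-Rating", "0"))  # untagged content passes
    return rating > local_max_rating

assert should_block(response_headers, local_max_rating=12) is True
assert should_block(response_headers, local_max_rating=18) is False
```

Note the liability asymmetry discussed elsewhere in the thread lives entirely in the default: here untagged content passes, whereas the laws under discussion would make untagged content default to the lowest bracket.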
Sure. Take a game with voice chat. Child mode disables voice chat. How does the game, which presumably uses a load of telemetry, avoid incidentally leaking which users are children via the lack of voice telemetry data coming from the client? It's probably possible, but the fact is we're talking about third party code running on a computer, and the computer running different code paths based on some value. The third party code knows that value, and if it has internet access can exfiltrate it. In that sense, if there's an internet connection, there's not a meaningful difference between "the OS tells the service/app your age rating preference" and "the OS changes what it displays based on your age rating preference."
Though while I'm throwing out fantasy policies we could solve this by banning pervasive surveillance outright.
90% of an R-rated movie might be OK for a 12-year-old, but those one or two violent or sex scenes make it R. Should we be rating every scene in movies?
Give parents general guidance and let them define the controls.
Services can absolutely decide to provide their own content settings. It doesn't require a universal setting or OS requirements, and it doesn't require providing PII to every website or telling a central authority every site you visit.
These questions of liberty are as old as the hills. And the keepers of the internet and virtually every single government past and present have repeatedly and endlessly shown themselves to be lying, conniving, self interested parties. When will 'we' ever learn?
*who decides who 'we' are.
The only laws the government should pass regulating software running on someones computer are laws protecting those consumers from the companies writing that software. For example, anti-malware/anti-spyware.
The government has no business telling a random company that their software needs to report my age, whether it's unverified and self-reported or not.
Like - you don't make it illegal to not do age attestations, but you provide a mechanism to encourage it.
You get a certification you can slap on your website and devices stating you meet the requirements of a California Family-Friendly Operating System or whatever. Maybe that comes with some kind of tax break, maybe it provides absolution in the case of some law being broken while using your OS, maybe it just means you get listed on a website of state-recommended operating systems.
That certification wouldn't necessarily have to deal with age attestation at all. It could just mean the device/OS has features for parents - built-in website filtering, whatever restrictions they need. Parents could see the label and go "great, this label tells me I can set this up in a kid-safe way."
Hell, maybe it is all about age certification/attestation. Part of that certification could be when setting it up, you do have to tell it a birthdate and the OS auto-applies some restrictions. Tells app stores your age, whatever.
The point is an OS doesn't want to participate they don't have to. Linux distros etc would just not be California Family-Friendly Certified™.
I wouldn't have to really care if California Family-Friendly Certified™ operating systems are scanning faces, IDs, birth certificates, collecting DNA, whatever. I'd have the choice to use a different operating system that suits my needs.
You're going to get that anyway. Platforms want to sell their userbases as real, monetizable humans. Governments want to know who says and reads what online. AI companies want your face to train the systems they sell to the government, and they want to be the gatekeepers that rank internet content for age appropriateness and use that content as free training material.
Age verification across platforms is already implemented as AI face and ID scans. This is where we're already at.
Any scheme that doesn’t require this won’t get pushback from me.
As an alternative: I already have government-issued ID and that branch of government already has my private info; have it give me a cryptographic token I can use to prove my age bracket to the root of trust module in my computer; then allow the OS to state my age to third parties when it needs to with a protocol that proves it has seen the appropriate government token but reveals nothing else about my identity.
Other alternatives are possible.
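A toy illustration of that flow, with an HMAC standing in for the government's attestation (everything here is a simplification: a real design would use public-key or blind signatures, or zero-knowledge proofs, so that the verifier doesn't hold the issuing key and the issuer can't link a token to its later use):

```python
import hashlib
import hmac

GOV_KEY = b"demo-only-issuer-key"  # stand-in for the issuing authority's key

def issue_token(bracket: str) -> bytes:
    # The ID authority attests only to the age bracket, not the identity.
    return hmac.new(GOV_KEY, bracket.encode(), hashlib.sha256).digest()

def verify_token(bracket: str, token: bytes) -> bool:
    # A verifier trusting GOV_KEY learns the claimed bracket and nothing else.
    expected = hmac.new(GOV_KEY, bracket.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

tok = issue_token("18_plus")
assert verify_token("18_plus", tok)
assert not verify_token("13_17", tok)
```

The toy collapses issuer and verifier into one trust domain (both need `GOV_KEY`), which is exactly what the real cryptographic constructions avoid; it only shows the shape of "prove the bracket, reveal nothing else".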
It's much easier for clueless lawmakers to write "the computer checks the age" and make it everyone else's problem.
It's pointless, does not increase security, does increase complexity of every interaction, and introduces a lot of weird edge cases.
What I want is full anonymity enshrined in law, while at the same time giving parents (not governments, parents) options to limit what their children can do on the internet.
What's the point in doing any of this if it doesn't result in materially better outcomes?
Second, it would signal to worried parents and busybodies that something has been done to deal with the danger that unmediated internet access might pose to minors. I don't think that it's a big issue, but a lot of energy has gone into convincing a lot of people that it is.
The other part of achieving a good outcome would be to disempower those in the political and private sphere who benefit from a paranoid and censorious public and have worked to foment this panic. That's the much harder part, but it's not really the one being discussed here. I'm pitching the low-intrusiveness version to gauge sentiment here for that easier part of the path.
If this mattered to the market, don't you think a company would have implemented it or would have been built to fill the need?
I genuinely think the only two solutions to this problem that are workable are "zero privacy, zero freedom" or "fuck the children, we don't care".
Now, to be fair... there is a middle ground that is neither of those options, one that I believe would be much more effective and would allow us to retain our freedom and privacy while keeping kids a lot safer. It's called education. But no one will go for it, because I think for it to be truly effective you'd have to go as far as showing very young kids all the darkness that's out there, laying out in painstaking detail exactly how it works, and deeply drilling it into them. There ain't a snowball's chance in hell anyone would go for that, BUT... would it work? I'd bet your bottom dollar it would. The current extent of this education in public schools is a half-hour visit from a police officer to the classroom, handing out a sheet to the kids, and giving a "good touch" / "bad touch" talk. What's needed is a full-length, university-level course on the whole topic from end to end.
If you're in an adversarial relationship and need to defend yourself, the best thing you can do is "know your enemy". But no: "they're too young to learn about that stuff, we need to shield them from it, think of the children!" is the reasoning people throw back at you when you suggest it. It hands-down has to be the number one thing that could actually move the dial significantly, and it's just completely unpalatable to the majority of the populace.
Isn't it just pointless?
I'm getting upset by face scan creep too. I do not like it. No sir. But mandating a self-reporting mechanism feels about as useful as DNT cookies, or those "are you 18? yes/no" gates on beer sites.
I.e., it would be a standardization of parental controls, with added responsibility on sites/apps to self-determine whether they should be blocked or should limit functionality, rather than relying on big whitelists/blacklists. Basically an infrastructure upgrade, instead of a patchwork of competing private solutions for parental controls and age checks. The hope is also that a system like this would remove concerned parents from the list of supporters of pervasive mass surveillance and age scans. If they feel you'd need to be a moron to miss the "This is a child device" button while setting up their kid's phone and laptop, and it's broadly understood that just pressing that button locks down what the device can access pretty effectively, that puts a damper on the FUD surrounding their child's internet usage.
Isn't that what the CA law is?
As a silly example, tax software probably has your full birthday, including year, which is more precise. Many social networks collected this data, as did a lot of major tech companies that implemented parental controls already.
Honestly it’s the dumbest thing ever. Best just not to play that game.
> It shifts the responsibility back to parents.
Without these stupid laws parents already _have_ that responsibility.
The point remains though. They have zero way to enforce it if we choose to not comply. Just saying.
> If you're going to host a service, I guess consider using Tor or something.
That one confused me. What do you mean?
I know the CA law is civil only, so I don't think there is much CA can do if you publish an OS and don't make money from CA folks, but other implementations may decide to impose criminal penalties.
In this case, it's a slippery slope; if we're normalized to this, what other incursions into our 1A rights to free speech, religious freedom and public gathering will we allow?
And I say religious freedom, because these kinds of laws are largely peddled by religious folk or people who otherwise have been deeply influenced by early American Puritan religious culture.
Neither I nor my children should be forced to submit to such religiously motivated laws. I can decide for myself and for my child what is appropriate.
Neither I nor my children can be compelled to enter personal information into a machine created by someone who is themselves being wrongly compelled to require it.
Neither I nor my children can be compelled to avoid publicly gathering on the internet just because we don't want to show identification and normalize chilling surveillance capitalism.
I thought this was fucking America.
The problem is that the comparison falls flat. The ADA does not sniff for birth dates and surrender that data to others. One has to look at things as a cohesive unit, e.g. insecure bootloaders by Microsoft surrendering data to others. It seems as if they are trying to make computers into spy devices. That in itself is suspicious. Why should we support any such move? Some laws are clearly written by lobbyists.
It's not this or that political party: your neighbors simply don't share your values. Maybe you don't agree with their values either, like to what degree we should be ceding privacy in favor of fighting child exploitation on the internet. Child-protection arguments work because they are a compass to the true feelings of your neighbors.
The problem with this argument is that everyone agrees with protecting children.
"Think of the children" arguments are the legislator's fallacy: Something must be done, this is something, therefore we must do this.
In reality there are alternative means to accomplish any given goal, and the debate is about what should be done, because no one benefits from using methods that cost more than they're worth.
Well, almost no one. The opportunists who drape themselves in the cloak of "safety" when they want to have the government mandate the use of their services or use it as an excuse to monopolize markets or establish a chokepoint for surveillance and censorship do benefit from the machinations that allow them to screw the majority of the population. But the majority of the population doesn't.
We kept saying parental controls are all that's really needed, these states said "ok then, do parental controls" and we're still complaining.
If you see politics through this lens then the 'democratic backsliding' that has been universal across the world for the past two decades is entirely unsurprising.
Vae victis.
What was the legislative history for the California law? Who sponsored it, and who are their backers? Is there some coordinated effort by surveillance state proponents?
So it is Microsoft, Google and Apple pushing for this.
Disclosure: I work at Google, but not on anything related to this.
It was never about anything else. Silly media men
They can provide tools, sure. But restricting adults because some parents fail at parenting is insane. That is how a totalitarian state grows: by demanding the power to monitor and control every individual.
If you cannot control your children, that is your fault. And if that is the case, you should think twice before having kids.
And, just to be clear on this topic, I think these age restriction laws are mostly bullshit, but I'm deeply against the concept of putting all the responsibility of raising children onto the parents.
There is not a lot of safeguarding against this in the real world tbh. At the very least I think the OS or internet age verification is not the place to start improving this.
Bars also won't display a copy of your ID on the main street like digital "think of the children" initiatives are likely to.
let's install cameras in all supermarkets that ensure parents cannot buy unhealthy things for their children.
of course, adults can continue to purchase anything they want for "themselves". but the facial scanning in supermarkets is imperative for child safety!
Just because you're an idiot at 18 doesn't mean you are one for life.
> so that's why we designed schools that should compensate even for dumb parents.
Does that actually work?
> against the concept of putting all the responsibility of raising children onto the parents.
Then how do you feel about parents requiring a license before they have a child? If you wish to invite yourself into their responsibilities shouldn't you also invite yourself into their bedroom first?
You're turning a question of measure (how much should society be involved in raising children) into an all-or-nothing debate, which I explicitly want to reject.
> Does that actually work?
Yes: because of mass education, almost every adult you meet can read and write, something that has only been true for the last 100 years. Just because a system has (currently huge) faults doesn't mean we should remove the system entirely.
Those are bad outcomes. So is it any wonder that we look for policy/regulatory issues to mitigate the harms of bad parenting?
Even with schools in place, the basic responsibility for raising children still belongs to the parents. Schools can support, educate, and compensate to some extent, but they cannot replace parental responsibility.
I also see far too much awful news — in my country, Korea, for example — about terrible parents harassing school teachers because their children are out of control.
Cops to track what people did on the internet, checking every image to ensure it's not pornographic, or every transaction online, to ensure it's not criminal!
Sounds great! Let's just start by rolling out the program to target elected officials and their families as a trial. If every congressional or senate representative wants to undergo a few years of scrutiny to make sure the system works well, maybe the people will follow gladly.
So alcohol is more like a gambling website.
What would the regime do without their useful idiots?
Their ideas are deeply unhealthy for children and worst of all, lazily shift the responsibility of parenting from the parents to the state.
Many European countries have long had a culture of gradually increasing their children's responsibilities and freedoms, letting them slowly and safely test their boundaries. At least the proposed EU solution (for identity) tries to prevent overreach. The wholesale EU spying to “save the children”, which seems to be funded by the U.S., is a different topic, and we need to continue to fight it tooth and nail.
The insidiousness lies with major tech companies and their pursuit of eyeballs on screens. The Internet was supposed to be something we used to learn, gain knowledge and connect. They took the internet over, bastardized it and made deeply addictive apps and games to keep you watching ads regardless of age.
These age checks are just for data collection and spying to sell the data to the highest bidder, which is likely governments in order to control and herd their populations.
The reason for this is easy to understand in the context of AI. In the future the only valuable asset will be data and access to that data.
In the future, any app will be built, replicated, deployed and maintained by AI. Apps, websites, especially B2B apps - their days are numbered.
If my business needs a billing system tailored to it in the future, I’ll describe it and have an AI build and maintain it. That is not that far away in relative terms.
Our goal collectively (as technology advocates) is to make sure that this consolidation of personal data doesn’t happen. If personal AI is to be built, then the user should have full ownership, kept away from the spying eyes of groups like Palantir and the NSA. They cannot be trusted. The Jews learnt that catastrophically in Germany in the 1940s, putting their trust in a government that became authoritarian and evil.
What is digital will never die and what is digitally given cannot be taken back.
If the US actually gave a flying FUCK about "protecting the children," the current administration would be making good on Trump's promise to release the Epstein files -- as now ordered by a federal law passed by an overwhelming majority of both houses of Congress -- and prosecuting everyone involved.
We see what's really going on. We can't do anything about it, apparently, but we see.
I saw this a lot in college. Kids that didn’t have any freedom or autonomy while living at home went wild in college. They had no idea how to self-regulate. A lot of them failed out. Those who didn’t had some rough years. Sheltering kids for too long seems to do more harm than good. At least if they run into issues while still children, their parents can be there to help them through it so they can better navigate on their own once they move out.
Society works on averages. Most people being ready little adults at 16 doesn't mean everyone is.
Edit: yeah, look at the downvotes. How are you all doing with that self-regulation?
And if the person is high energy, then that energy needs to be channeled.
You can also ask if the rate of this occurrence is increasing or decreasing.
(And the proper way to do "less long" is to slowly loosen up over time.)
Sounds like you got the first but not the second, which must have been tough. Hope you're doing better now.
These days, exposing an immature brain to the raw internet is basically just handing the brain and personality over to be molded by large corporations and algorithms.
And humans have never been rational, self-contained actors that self-educate perfectly when exposed to information, converging on an objectively good and constructive worldview. Quite the opposite.
Humans develop in relation to one another, increasingly in relation to algorithms, and sometimes become messed up, and sometimes those mess-ups would have been avoidable had relations or exposure been different.
In fact I would say you as a parent are not doing your job if you are not trying to make sure a 12-year-old isn't pulled into, say, an anorexia rabbit hole.
Whether that is best done through making sure exposure doesn't happen, or through exposure and education, will depend on the child and parent (and society) in question. What worked best for a highly rational, self-reliant geek teen may simply be a disaster for another human. And what worked for an upper-class, highly educated family may not work for a poor family with alcoholic parents, or parents working 18 hours a day to make ends meet.
And parents are not perfect -- if all parents were perfect, there also would be no alcoholics and drug addicts or poverty or war. But people are imperfect, and it's natural to make laws to mitigate at least the worst effects of that. (Again, haven't read this specific law proposal, but found the worldview of OP a bit naive.)
You make the case that today's internet is unsuitable for young children. But has this ever been different, apart maybe from the very first days of the internet? While access through phones has reshaped the internet fundamentally, I'd propose that it has always been dangerous. When I was 12, a single wrong click could destroy your machine, or lead to a physical bill being sent to my parents' home (which happened), or lead to the most disturbing pictures and videos.
So I think it's not the case that we should allow kids completely unsupervised access (as has always been true), but it's also naive to think that we can regulate our way out of this (at the state or household level, as has also always been true).
Even later when the computer was in my room, I still had to go look for the creepy shit, it didn't appear in my email inbox.
Kids this age browse the internet through algorithmic apps built to maximise engagement, in a corner of their bed in their room. Parental controls for most apps and operating systems are a fucking joke.
I absolutely would not allow a kid to have an unregulated smartphone and then further compound the problem at home by allowing them to access it privately and without interruption. Device management enrollment is trivial on iPhones.
Amount of time spent and repeated exposure being the key.
The question is really what kind of human is raised, rather than raw exposure as such.
So for that reason, things are different IMO than 20 years ago.
Yes, of course some people would fall into internet forum rabbit holes 20 years ago, and paper pen-pal-letter-induced rabbit holes 100 years ago. But it did help that it was like 5% of the population spending their time there instead of 95%.
Regarding your last point, I don't necessarily disagree (again, I didn't check up on this law; I care more about the laws in my own country), but I think arguing against the law will go better if one does not display naivety when making the arguments.
Don't say "it will be better if all kids are exposed to everything early" (it won't), instead say "the medicine will not work and anyway the side-effects are worse than the sickness it intends to cure" (if that is the case).
In the US, perhaps not...
imo this is what is wrong with modern parenting. reality does not care about the child's feelings, and if a child is old enough to have a screen with internet unattended, they are old enough for anything
Maybe in the future, a governing body will try to age-lock dissenting opinions with some crafty verbiage
It's more like the rule that minors can't buy alcohol in bars - parents can still buy alcohol at the supermarket for their children, and sufficiently determined children can find some other adult to buy it for them.
Probably by the time you know how to install a virtual machine, you can handle the unrestricted internet.
it's true that kids are vulnerable to certain forms of content on the internet
it's also true that adults are vulnerable to certain forms of content on the internet
it's also true that governments cannot police "harmful content" on the internet effectively, or even meaningfully, if most people can easily surf the internet pseudonymously
it's also very true that what's on "social media" right now is very Sybil-vulnerable, and inordinately so with the advent of LLMs
what do you think the playbook will look like once there is some sort of tight OS level system that is enforced across the board to certify or verify information about the user?
do you think this level of coordination to push for identifying the user at all levels, happening across the world in a matter of weeks, is genuine concern for the kids alone?
It is, in my view, crazy and irresponsible to allow the government to override parents' decisions about what media their children can consume. It is guaranteed that this power will be abused.
to be charitable, let's say that it "enhances" parental controls by taking on some of that parental enforcement at the state level
this is taking the parental control largely into their own hands
I'm honestly not sure; but I could see that being the result of the law, with companies like Best Buy disallowing minors from purchasing hardware with cash for fear of liability.
That would be fine for me but AFAIK that's not what these laws state.
> I can see that there will be programs that run on general purpose computers and peripherals that will even freak me out. So I can believe that people who advocate for limiting general purpose computers will find receptive audience for their positions. But just as we saw with the copyright wars, banning certain instructions, or protocols, or messages, will be wholly ineffective as a means of prevention and remedy; and as we saw in the copyright wars, all attempts at controlling PCs will converge on rootkits; all attempts at controlling the Internet will converge on surveillance and censorship, which is why all this stuff matters.
Full talk: https://www.youtube.com/watch?v=HUEvRyemKSg
it really boils down to this sadly, and it should be pretty obvious shouldn't it?
i'm finding it befuddling that even technical audiences seem to resist connecting those dots, but strong motivated reasoning is at play: these are audiences that will often feel it will be them who will be in control, and they're also emotionally nudged by the idea of child safety
But there's a bigger issue than just what software you're allowed to run on your own computer. What's really insidious is the combination of the corporate and government interest. If every server tracks how old you are, it's a short step to tracking more information. Eventually it's a mandatory collection of metadata on everyone that uses a computer (which is every human). Something both corporations and governments would love.
You were worried about a national ID? No need. We'll have national metadata. Just sign in with your Apple Store/Google Store credentials. Don't worry about not having it, you can't use a computer without it. Now that we have your national login, the government can track everything you do on a computer (as all that friendly "telemetry" will be sent to the corporate servers). Hope you didn't visit an anti-Republican forum, or you might get an unfortunate audit.
Of course, there will be stories of smart kids doing amazing things with access to vast troves of information, but the average story is much sadder.
The EU is working on a type of digital ID that an age-restricted platform would ask for, which only gives the platform the age information and no further PII.
Companies (not talking about System76) amazingly always find the shittiest interpretations of their obligations, making sure to destroy the regulation's intention as much as they can. The cookie popups should have been an option in the browser asking the user whether they want to be tracked, with platforms meant to respect that flag. Not every site asking individually, not all this dark-pattern annoyance. It's mind-blowing that that was tanked so hard.
Protecting kids is just the PR reason, the real goal is requiring ID auth for every action taken on a computer. If we normalize it for downloading apps or using websites the next step is to authorize it for connecting to HTTPS at all and then the next step is requiring it to unlock your CPU cores.
If people don't push back on this now there is no world where we get out of 2030 without requiring government ID auth to install linux on your own computer not connected to the internet.
End to end silicon to server auth is absolutely possible and someone is working really hard to make it a reality.
I think you're missing the point they're trying to make. It's not that the problem isn't real, it's that the solution won't work. Kids will find a way around. They have a lot more free time than us.
Banning pubs from selling to minors doesn't work, but we should still do it, right?
Sure, it might start out that way, but once adoption reaches anything critical the PII will be required to squash free speech as soon as possible. But by then the interaction flow will be familiar, hardly anyone will even notice, never mind care.
The EU has the best frog boiling experts in the world.
Maybe pointing out the obvious, but things happen if enough people care about them or do not care to oppose them.
From my perspective, speech became "more free" lately, meaning everybody says all kinds of incorrect, wrong things without fear of retribution, even if there are laws against some of them, because people just don't care.
So maybe we should also focus on teaching people what free speech is, why it is good for them, and why they need it, rather than worry about some hypothetical mechanism that will someday prevent it.
Of course both can be done, but I find it a bit funny that if the focus is mostly on not having mechanisms to prevent free speech, we might still end up in a situation where there are no such mechanisms but nobody speaks freely anyway, because they don't care or only stare at their TikTok.
The CA/CO parental controls API law is very reasonable. It only mandates each OS must have a parental controls API, the use of which is up to the parents.
That has been the successful strategy for e.g. NRA w/ regard to 2nd amendment and they have been proven correct every single time.
The corona passports showed the way to achieve ultimate control of the population, and the EU digital wallet will be a permanent corona passport.
The public sheep, in their ignorance, are cheering this on, without knowing what will await them. It is our responsibility as technologists to fight this, and to educate the sheep.
If we can’t mount a strong defense of free speech on principle alone then it’s doomed anyway.
Didn't regulating cigarettes kind of work?
I am not sure what time or country you are talking about but when I grew up (Germany in the 90s) we officially could only buy cigarettes from age 16 (or 18?) and 50% of my friends smoked. So that did absolutely nothing.
Later (I think, man it's been a while) the vending machines needed a driver's license or id to verify the age and guess what, as long as you had access to a single person over the age of 18 you could still get cigarettes.
Stepping away from the cigarette topic... I think mixing the two topics does not make sense.
First one is: Is there stuff on the internet that kids should not be exposed to without supervision? I don't have a strong opinion, I don't have kids. Probably not, but I am not even interested in discussing this.
Second one is: Will some stupid laws like the mentioned ones help in any way? Maybe a little, probably not really and only for kids who don't find a workaround. Will they have catastrophic side effects and thus are not worth implementing for minimal gain? 100% yes.
Also, if all the OS does is require picking some number from 0-100 or a date, without doing any verification, everyone can lie. It has other flaws, like not considering that many people can share accounts, that some embedded devices with a UI can no longer receive updates, etc. Honestly, if I thought about it for 30 minutes, I could list dozens of such problems. I doubt these laws can work efficiently enough.
For now this might sound like the least of evils, but are we sure that these idiot politicians won't come up with something even more insane after seeing the inefficiency of this?
you can install very tight parental controls on many devices
but this is not about optionality, this is about forcing the mainstream into verification and certification schemes that most people won't be realistically able to avoid, it's about control and compulsion of the mainstream
Everything online is virtual, and implementing surveillance in one area, almost always spreads, infecting everything else, until we've built 1984.
But as adults that are starting to have kids, this "hard divide" between physical and virtual starts to break down. What I mean is that we can't always use the excuse that we can't apply some reasonable law just because an item isn't "physical".
it's "protect the kids" or "counter-terrorism" and nowadays also "harmful content" because as the internet is now fully mainstream, softer and softer heads start to prevail
Could we perhaps regulate them to require that they be made less harmful for everyone?
> anything the reduces the power those companies have over our lives (and our politics)
If we're concerned about politics, I presume we're talking about the impact on adults, but these age-based restrictions are not intended to change anything for adults.
To both adults and children. What kind of worked for cigarettes was the huge tax, so why not create a "mental health tax" based on the number of users times some addictiveness score, and let Meta either fix Instagram, pay for their users' therapy, or pass the cost on to them?
Instead, this "protecting children" by giving them a "degraded" experience will only motivate them to bypass the age verification, and will destroy the statistical evidence of the harm those platforms cause.
Well, surely because the government is full of investors in Meta and uses Meta for their propaganda, and possibly because the government wants more data to put on their databases that is used by ICE and other agencies.
https://arxiv.org/html/2506.06299v4
That’s commerce. The regulatory target in the case is speech. We don’t do that here.
This all could've been avoided. Governments all over the world have been ringing the alarm bells about lack of self-regulation in tech and social media. And instead of doing even a minimum of regulation, anything to calm or assuage the governments, the entire industry went balls-to-the-wall "line go up" mode. We, collectively, only have ourselves to blame, and now it's too late.
If you look back, it didn't have to be this way:
- Governments told game publishers to find a system to handle age ratings or else. The industry developed the ESRB (and other local systems), and no "or else" happened.
- Governments told phone and smart-device manufacturers to collectively standardize on a charging standard; almost everyone agreed on USB-C, and only many years later did the government step in and force the lone outlier to play ball. If that one hadn't been stubborn, there wouldn't have been a law.
The industry had a chance to do something practical, the industry chose not to, and now something impractical (but you better find a way anyway, or else) will be forced upon them. And I won't shed a tear for the poor companies finally having to do something.
Why would we be blamed for a law written by some lobbyists? That makes no sense at all. There are of course some folks who are in favour of this "because of the children", but their rationale does not apply to me nor to many other people. Why should they be able to force people to surrender their data, with the operating system becoming a sniffer giving out private data to everyone else? That makes no sense.
If people could just say I don't agree with this law, it "makes no sense" and it's written by "lobbyists" and the government should not "be able to force" me to comply then we don't have a society anymore.
You had better come up with some better arguments otherwise it just seems like the typical sad case of the losing side suddenly griping about the referee's monopoly of force when it's no longer going their way...
The comment you replied to rightly pointed out one way of getting ahead of said monopoly of force is addressing problems with the status quo before the state takes an interest. It didn't happen, and now you will probably get some heavy handed intervention. But ignoring this basic point to ask why oh why suggests an ignorance of the very nature of the society that is and has been constantly regulating you.
If you only happened to notice now you should consider yourself a rather lucky specimen in the long line of human history, full of those remarking "this makes no sense" as they are nonetheless compelled to comply.
To extend your analogy, it's not one side complaining after a fair match, it's them complaining that refs have been paid off.
There are all kinds of laws that people don't like, me included. With every law there will be some winner/loser trade-off (for lack of a better word). As OP said, that is society.
If the people here were so passionate about it, they would help come up with a better solution, not a "f* off" comment.
Can't believe I'm reading this. I don't want age verification at all, whether it's self-imposed or not. I should be free to use whatever tools I want however I want.
We somehow lost the war for freedom of privacy... or maybe the battle still rages.
The founders were right to try and enshrine some protections against unrestricted democracy in the Bill of Rights.
The vast majority don't want to upload their passports. That's what we should be opposing. Standardized parental controls set by the device owner are a great alternative and not invasive at all.
This really depends on 1) How you frame the problem/solution and 2) what subset of people you ask.
But to answer your question, I could easily see that yes, people want a "change" based on how you frame the problem.
Wrong. There was no choice. Any type of identification technology causes more problems than it solves. The right choice is to look for different approaches than identification technology for solving the problems. And as the article points out, the problems are best tackled with education and not with tech.
Governments demanding computers enforce age is as dumb as governments demanding books, pen, and paper enforce age.
This is unrelated to industry. This is idiots running the government.
Again, I'm probably missing something but it strikes me as pretty trivial to comply with?
But on this specific point - It's a bellwether. They're doing this to lay the groundwork and test the waters for compulsory identification and/or age verification. Getting MacOS and Windows and Linux and etc to implement this WILL be used as evidence that compulsory identity verification for computer use is legally workable.
You could say the same thing about restaurants. "The government really shouldn't be telling us how/what we can cook at all."
When you are selling a product to the public, that is something that people have decided the government can regulate to reduce the harms of such products.
When you make food you're compelled to write the ingredients. We tolerate these because they are obvious and trivial, but pedantically, food labelling laws also violate the first amendment.
Surely you recognize the difference between "you cannot go out of your way to do crime" and "your software must include this specific feature"??
> When you make food you're compelled to write the ingredients.
Well, the point about how this affects open source is that under a similar California law, every home kitchen would need to be equipped with an electronic transponder whose purpose is to announce to the world what ingredient bucket you used for tonight's casserole.
You can disclose just a subset of a credential, and that can be a derived value (e.g. age bracket instead of date of birth), and a derived key is used so that it’s cryptographically impossible to track you. I wish more people discussed using that, but I suspect that it’s a bit too secure for their real intentions.
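A toy sketch of that selective-disclosure idea, assuming nothing beyond Python's standard library. This is NOT a real anonymous-credential scheme (real ones use e.g. BBS+ signatures or SD-JWT); the helper names and fixed reference year are made up. It only shows the shape: the issuer commits to the full date of birth, and the holder discloses nothing but a derived bracket:

```python
import hashlib
import os

def commit(dob: str, salt: bytes) -> str:
    """Issuer commits to the full date of birth without revealing it."""
    return hashlib.sha256(salt + dob.encode()).hexdigest()

def derived_claim(dob: str) -> str:
    """Holder discloses only a derived value, never the date itself."""
    year = int(dob[:4])
    age = 2026 - year  # illustrative fixed reference year
    return "18_plus" if age >= 18 else "under_18"

salt = os.urandom(16)
c = commit("2000-03-14", salt)      # held alongside the issuer's signature
print(derived_claim("2000-03-14"))  # the site sees only "18_plus"
```

A real scheme would additionally let the holder prove, in zero knowledge, that the disclosed bracket is consistent with the signed commitment; that part is exactly what the sketch omits.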
There is no benefit in doing that because parents already know how old their kid is. They don't need the government to certify it to them, and then they can configure the kid's device not to display adult content.
Involving government ID is pointless because the parent, along with the large majority of the general population, has an adult ID, and therefore has the ability to configure the kid's device to display adult content or not even in the presence of an ID requirement if that's what they want to do. At which point an ID requirement is nothing but a footgun to "accidentally" compromise everyone's privacy. Unless that was the point.
A) under 13 years of age, or
B) at least 13 years of age and under 16 years of age, or
C) at least 16 years of age and under 18 years of age, or
D) at least 18 years of age.
that's it. The OS can decide how it wants to implement that, but personally I'd literally just do get_age_bracket_enum(now() - get_user_birthday());
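For what it's worth, that computation really is tiny. A hedged sketch in Python; the bracket labels and function name are invented for illustration (the bill mandates the brackets, not any particular code):

```python
from datetime import date

def age_bracket(birthday: date, today: date) -> str:
    """Map a birthday to one of the four statutory brackets."""
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day)
    )
    if age < 13:
        return "under_13"
    elif age < 16:
        return "13_to_15"
    elif age < 18:
        return "16_to_17"
    return "18_plus"

print(age_bracket(date(2010, 6, 1), date(2026, 1, 15)))  # 13_to_15
```

Everything else (where the birthday comes from, whether it's verified at all) is left to the OS, which is exactly the point of the complaint above.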
The bill is here: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...
The uproar seems to be extremely overblown.
I mean, "compelled speech"? Really? That's people's argument? This is about as bad as the government compelling you to write a copyright notice.
The well should be poisoned. The whole idea is poison.
Uh, no.
No, "we" really don't. I wrote software. It's free. You're welcome to use it, or not. Nobody is forcing my software on you. You are not allowed to tell me that the software I wrote, for free, and gave to you, for free, needs to have features that I don't care about.
You have an LLM now. I'm obsolete now, right? Do it. Build your nerfed distro, and make it popular. Oh, yeah... there isn't a single solitary distro built by an LLM, is there? Not even one. Wow. I wonder why...
This is so true.
Don't let phone manufacturers lock the bootloader on phones. Let the device owner lock it with a password if they decide to. Someone will make a child-friendly OS if there is demand. Tech-savvy parents should be able to install that on their kid's phone and then lock the bootloader.
What about non-tech-savvy parents?
There should be a toggle in the phone's settings to enable/disable app installation with a password, like sudo. This will let parents control what apps get installed/uninstalled on their kid's device.
But what about apps or online services that adults also use?
Apps and online services can add a password-protected toggle in their user account settings that enables child mode. Parents can take their child's phone, enable it, and set the password.
----
All it takes is some password-protected toggles. They will work better than every remote verification scheme.
The only problem with this solution is that it does not help certain governments build their global mass surveillance and propaganda apparatus, tech companies can't collect more of your personal info to sell, and they can't make your devices obsolete whenever they want.
It also prevents the legitimization of app store monopolies because no centralized authority is needed to create or enforce a rating system. And there will always be apps that don't comply with a rating system out of privacy concerns (it leaks the user's age, which is just an extra data point to track you with), and then they'll eventually try to ban non-compliant apps from running on the device completely. That's what enforcing an age-based standard would take. And even then it would still not fulfill its (claimed) purpose that well.
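The password-protected toggle proposed above is also tiny to implement locally. A minimal sketch, assuming only Python's standard library; the class and method names are illustrative, not any real OS API:

```python
import hashlib
import hmac
import os
from typing import Optional

class ChildModeToggle:
    """A local, password-protected child-mode switch. No network, no ID."""

    def __init__(self) -> None:
        self.enabled = False
        self._salt: Optional[bytes] = None
        self._hash: Optional[bytes] = None

    def _digest(self, password: str, salt: bytes) -> bytes:
        # PBKDF2 so the stored value is not a raw password hash.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def enable(self, password: str) -> None:
        """Parent turns child mode on and sets the unlock password."""
        self._salt = os.urandom(16)
        self._hash = self._digest(password, self._salt)
        self.enabled = True

    def disable(self, password: str) -> bool:
        """Only succeeds with the password that enabled the toggle."""
        if not self.enabled:
            return True
        assert self._salt is not None and self._hash is not None
        if hmac.compare_digest(self._digest(password, self._salt), self._hash):
            self.enabled = False
            return True
        return False
```

The point of the sketch is what it does NOT contain: no remote verification, no government ID, no data leaving the device. The parent's knowledge of the password is the whole trust model.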
Principle-wise, parenting should be the responsibility of parents, not governments or corporations. Those large organizations have their own agendas which are somewhat misaligned with the individual human being.
https://www.badinternetbills.com/
(That doesn’t mean it is not a bad idea, and even perhaps unconstitutional for other reasons.)
I don't think a cryptographic algorithm is "expressive" any more than it is purely functional; indeed, the 9th circuit evaluated and rejected the expressive/functional distinction for source code in the above case.
Regardless - code is speech, and the government cannot compel or prevent speech except in very narrow circumstances.
That is very much overstating the holding in the case [0], the most relevant part of which seems to be:
“encryption software, in its source code form and as employed by those in the field of cryptography, must be viewed as expressive for First Amendment purposes”
The ruling spends a key bit of analysis discussing the expressive function of source code in this field as distinct from the function of object code in controlling a computer.
A law compelling the provision of functionality, where writing source code is merely the most convenient way to comply, is not directing speech, any more than a law requiring delivery of physical goods is, where the most convenient method of delivery involves speaking with the person who physically holds them on your behalf.
[0] text here: https://law.resource.org/pub/us/case/reporter/F3/176/176.F3d...
I think you should read it a bit more closely. The court threw out the "functional/expressive" argument for source code, like I said in my original comment.
Secondly, what are you talking about, that source code is the most "convenient" way to implement this? It's literally the only possible way to present an interface to a user, ask them a question, and "signal" to other applications whether the user is a minor or not. You're being completely nonsensical there. There's no other way to do that: someone must write some code. The bill specifically says "an API"!
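To be concrete, here is a minimal sketch of the kind of interface the bill seems to demand. Everything here is hypothetical: the bill only says "an API", so the bracket names, the enum values, and the fallback behavior (treating an unset bracket as the most restrictive one, as described upthread) are my own invention, not anything from the text of the law.

```python
from enum import Enum
from typing import Optional

class AgeBracket(Enum):
    # Hypothetical bracket names; the actual statute does not define these.
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"

def age_signal(user_choice: Optional[AgeBracket]) -> AgeBracket:
    # Per the behavior described in the thread: if the user never set a
    # bracket, consumers of the signal fall back to the most restrictive one.
    return user_choice if user_choice is not None else AgeBracket.UNDER_13
```

Even this trivial sketch is code someone has to write, ship, and maintain, which is the point of the objection above.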
That's forced labor. I'm not required to write a line of code to please anyone. It's free software with no warranty. They have LLMs, let's see them build it. :)
> That's forced labor.
Well, that's a 13th Amendment issue, not a 1st Amendment one, but, in any case, it's not forced if it doesn't direct who does the work to create the functionality, only requires you to have the functionality provided if you are doing some other activity; it is more of an in-kind tax. [0] (Now, if you want to make an argument that when the activity it is conditioned on is expressive, that makes it a 1A violation as a content-based regulation when the condition is tied to the content of the expressive act, that is a better 1A argument, one that might actually have some merit against many of the real uses of, say, age verification laws; but "if I am doing this activity, I must either create or acquire and use software that has a specified function" is not, in general, a 1A violation.)
[0] It's not really that other than metaphorically, either, any more than every regulation of any kind is an "in-kind tax", but it's far closer to that than "forced labor".
Good, because I'm not writing it, f\/ck them. Free software, no warranty. Use it if you want to. Otherwise, pound sand.
Don't you mean "bad"? Shouldn't you want it to be a violation of the constitution so it gets thrown out?
Providing an API is required if you do some other thing, but you are not required to do that other thing. Requirements that are triggered by engaging in some other activity are not compulsions if the activity they are triggered by is not compulsory. (Now, whether restricting the thing that triggers the requirement by adding the requirement is permissible is a legitimate question, but that is not the question that is addressed when you ignore the thing triggering the requirement and treat the requirement as a free-standing mandate.)
If you don't want to write, hire someone to write, pay someone to provide an implementation that has already been written, or acquire an implementation already written that is available without payment, such an API, you can simply choose not to do what is defined as being an "operating system provider", and no obligation attaches.
No one is putting a gun to your head and forcing you to do labor to write code for an API.
I don't know what's wrong with you, honestly, that you would defend this impractical, immoral and completely nonsensical law so vivaciously.
It is seriously disturbing.
I do, these people are entryists and they have evil goals in mind.
Do not hire people like this, and block them from working on your projects.
obviously these regulations are very different, but both do compel speech
It's not enough to adhere to the OS age signal:
> (3) (A) Except as provided in subparagraph (B), a developer shall treat a signal received pursuant to this title as the primary indicator of a user’s age range for purposes of determining the user’s age.
> (B) If a developer has internal clear and convincing information that a user’s age is different than the age indicated by a signal received pursuant to this title, the developer shall use that information as the primary indicator of the user’s age.
Developers are still burdened with additional liability if they have reason to believe users are underage, even if their age flag says otherwise.
The only way to mitigate this liability is to confirm your users are of age with facial and ID scans, as it is implemented across platforms already. Not doing so opens you up to liability if someone ever writes "im 12 lol" on your app/platform.
The law requires "clear and convincing information", not merely "reason to believe". And since the law requires developers to rely on the provided age signal as the primary indicator of the user's age, developers are not incentivized to create a system that uses sophisticated data mining to derive an estimated age. If someone posts a comment on a YouTube video saying "I'm twelve years old and what is this?", that would absolutely not require YouTube to immediately start treating that account as an under-13 account.
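Read mechanically, the two quoted subparagraphs amount to a simple precedence rule. A sketch of that rule (the names and types here are mine, not the statute's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InternalEvidence:
    bracket: str                # age range suggested by the developer's own data
    clear_and_convincing: bool  # whether it meets the statute's high evidentiary bar

def effective_bracket(signal: str, evidence: Optional[InternalEvidence] = None) -> str:
    # (3)(A): the OS signal is the primary indicator of the user's age...
    # (3)(B): ...unless internal information meets the "clear and convincing"
    # standard, in which case that information takes precedence instead.
    if evidence is not None and evidence.clear_and_convincing:
        return evidence.bracket
    return signal
```

A stray "im 12 lol" comment does not meet the clear-and-convincing bar, so under this reading the signal stands.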
Remember, only the state AG can bring a suit under this law, and the penalty is limited to $2500 per child for negligent violations. It's probably cheaper to get insurance against such a judgement than to implement an invasive ID-scanning age verification system (and assume the risks of handling such highly-sensitive personal information).
I'd also argue it's clear and convincing if a kid changes their profile picture to a selfie of themselves, says they're 12, says they're in grade school, etc. Any reasonable person would take that at face value.
> implement an invasive ID-scanning age verification system (and assume the risks of handling such highly-sensitive personal information)
It's already implemented as face and ID scans by all the major platforms as it is. The systems are already there and they're already deployed.
Apps and platforms already integrate with 3rd party age verification platforms who handle the face and ID data, nothing ever has to touch your servers.
That's so fragile, and it's not like they're making those claims to the site, it's natural language posting.
And someone who knows what they're doing would never take "I'm twelve years old and what is this?" at face value.
No one is suggesting a meme should be taken literally.
Like do you really think someone who presents themselves as a child on Pornhub will stay registered and not banned? They aren't going to serve porn to someone who presents themselves as a kid.
I think it's pretty clear and convincing when someone presents themselves as a child on your platform. I'd be convinced and wouldn't take that liability on.
The current analytics profiles are closer to "definitely into Roblox, 70% chance of being 13-18" than "This user was beyond any reasonable doubt born on 07-03-2002". Calling them "clear and convincing information" would be a massive exaggeration.
This section targets spyware companies like Facebook, who already know damn well if the user is underage and this section forbids them from pretending they don't know.
It doesn't say you have to go and become Facebook.
And then there are the desperate attempts to cover actual pedophilia from the people in power. I'll never look at a politician or a so called member of the elites the same way again.
This resonates so much with me. I don’t want to control my kids. I will never be able to protect them from everything. I hope I won’t be able because I want to die before them. I want them to be able to navigate in the world and have all the cognitive tools necessary to avoid being fooled. I want to rest in peace knowing they can in turn educate their own children. I want to trust them and be relieved that I can focus on some tasks of my own without needing to constantly worry about them.
—-system76 customer
once vendors are forced to put on hooks to some enforced age verification system, it will creep everywhere like cookie banners which you cannot escape even in Antarctica
—-system76 customer
I hope things won’t go that way but I do think it’s likely they will.
- Switch to a non-compliant distro. (could put me in a dead end down the line depending on what happens)
- Find a browser that can block the API access and just use two browsers.
- Have an "online accounts" computer and an "old fashioned" computer?
- Switch to books and DVDs?
The whole point. Very well worded post. I weep for the all digital future.
What is almost more disturbing: at least some of the politicians will have been advised by consultants or lobbyists who know what they're advocating for. What's their game?
I do mind a lot of the data processing. I do not want my ID, personal preferences or any metadata of myself stored anywhere, ever. And IF by some weird law some process has to store some data of me somewhere, I want to have very easy, full access to it so I can delete it whenever I want. You can keep the process itself, but anything else has to go.
Yes, i have a passport. Yes, it was verified and validated. No you may not know or store the color of my eyes.
I also do not want curious kids to be prosecuted for poking around. They should teach them and thank them for finding flaws.
More concerning than that is that it all doesn't seem because they care about teenagers and kids.
I stopped reading at this point, as this is utter nonsense. I mean, it's a beautiful idea, but any person with more than two neurones knows that real life doesn't work like that. We have law enforcement and prisons because, despite our best efforts in education, some people do go off to become criminals due to a number of factors.
I'm not saying that the present number of laws is adequate, but the solution is a different set of laws, not the complete lack of them. The idea of "simply educate children and we'll all be fine" is utterly moronic.
Er, how does a child install a VM from a non-admin account?
> Or the child can simply re-install the OS and not tell their parents.
It's gonna be pretty easy to detect when the parent finds programs are missing/reset or the adult account they created can't log in with their password.
The California law seems entirely tame and sane, whereas the New York bill seems pretty heavy-handed and authoritarian. They are in no way similar to each other.
A. If end users will mod their distros to send a "signal" (TBD?) to websites.
B. If end users will just grab a pirate OS with apps compiled to not care about age.
Hopefully the latest TAILS I downloaded is free of Big (over 18) Brother. And (A)
Or just compile, Gentoo and LFS style.
C. If pirates just take care of all this for friends and neighbors.
D. When, not if, this unconstitutional coercion is challenged in court and cancelled via petition. Remember Proposition 8?
Of course, that's an ineffective argument, because the long-term goal of these laws (in the sense of, "the goal of the system is what it does") was never going to be about keeping kids off the Internet.
The computers are not secure, and the claim is that they should only be able to run "verified" operating systems using attestation mechanisms. This was always where this was ultimately going. The idea has been fermenting since DVD players had copy protection.
It’s the planet destroying asteroid. We know the trajectory, we always knew it was coming for us. But once you can see with the naked eye it’s too late to do anything.
This ties in nicely with the international movement to require ID to use social media.
Why is this an international movement? Suddenly, simultaneously, all over the Western world? It's enough to make one believe in conspiracies...
Sometimes kids hurt themselves through the use of the internet. And their parents lash out to blame someone [0]. And mainstream media pick up these stories. And the worry spreads. And more and more adults of voting age say that yeah, it's only reasonable to protect the kids from that internet monster, because kids are trusting and vulnerable, and won't somebody please think of the children. And they do not push back against age restriction campaigns. And so it goes...
As for the Western world, it generally moves in lockstep, doesn't it?
https://www.bbc.co.uk/programmes/m0024x58
Second, where are the protests to keep kids out of bars?
Now some 50-60yo politician who has never even created a folder on their desktop without help wants to dictate how I should use my device?
Fuck'em
The internet you describe has been gone for a long time. The internet that replaced it is several degrees more harmful, to adults and children alike.
And modern hardware is so complex that it is impossible for individuals to build it on their own. We are no longer in the 8-bit/16-bit era. And considering the power of AI -- individuals pretty much mean nothing to the elites.
I have never thought the dystopian future to be so close -- I always thought it would be X years away. But legislation, the lawyers, are definitely very efficient at this kind of thing.
Let's try to figure out what a good policy solution looks like:
- entities with harmful or adult content must require proof of the user being over 18
- entities cannot ask for, store, or process more detailed information without explicit business needs (this should be phrased in a way that disallows Instagram from asking for your birth year, for example)
- entities cannot share this data with other sites, to avoid privacy leaks, unless there is an explicit business need (this is tricky to get right; someone might try to set up a centralized non-anonymous age-verification service, erasing many benefits)
- entities must in general not store or process information about the user that is not strictly relevant to their function
- there ought to be different treatment for anonymous users (which ideally these protocols will allow, just submit proof of work plus a ZKP that you are a human and authorized to access the resource) compared to pseudonymous and non-anonymous users, who are more at risk of being censored or tracked.
There's some loopholes here, but if the government can enact good policy on this I personally think it's feasible. Please share your thoughts, if you have a minute to do so.
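To illustrate the data-minimization goal in the list above: the key property is that the relying site learns exactly one bit. Below is a deliberately simplified toy, not a real zero-knowledge proof; an issuer who has checked ID out-of-band attests only to a boolean, so the site never sees a name, birthdate, or stable identifier. All function and field names are made up, and a real deployment would use public-key signatures or an actual ZKP scheme rather than an HMAC key shared between issuer and verifier.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the age-verification issuer

def issue_token(over_18: bool) -> dict:
    # The issuer verifies ID offline, then attests ONLY to the boolean.
    # A fresh nonce per token prevents two sites from linking the same user.
    payload = json.dumps(
        {"over_18": over_18, "nonce": secrets.token_hex(16)}, sort_keys=True
    )
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check_token(token: dict) -> bool:
    # The relying site learns one bit: over 18 or not. A forged or
    # tampered token simply fails verification.
    expected = hmac.new(
        ISSUER_KEY, token["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return json.loads(token["payload"])["over_18"]
```

The tricky part flagged in the list remains: if the issuer logs which sites redeem which tokens, the centralized service erases much of the benefit, which is exactly why the real cryptographic machinery (blind signatures or ZKPs) matters.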
There's also an interesting political split to note among the opposition here. I see a lot of people vehemently against this, and as far as I can see this is largely for concerns regarding one of 1) privacy abuses, 2) censorship, or 3) restriction of general computing. Still, there is a problem with harmful content and platforms on the web. (Not just for minors, I don't think we should pretend it doesn't harm adults too.) The privacy crowd seems to be distinctly different from the computing-freedom crowd; the most obvious example is in attitudes towards iOS. As I personally generally align more towards what I perceive as the privacy-focused side, I'm very interested in what people more focused on software freedom think about zero-knowledge proofs as a politically workable solution here.
Can you give an example of how less private solutions will benefit them and their sponsors? I could see big tech / adtech and government surveillance benefitting but I don't think they're the ones behind this push.
As another example, consider the "small web" community, say at Bear Blog, which is a group of technically sophisticated people who routinely complain about the harms of traditional social media. I doubt most of them would support this particular implementation, but they show that there is popular support for solving the ills of at least one of the targets of this legislation.
So to answer your question, yes, I do see this as an attempt to protect people. The restriction of free speech is in my opinion a side effect of this legislation opening the way to worse-designed laws in the future.
Thank god totalitarian bureaucrats and lobbyists descended from the heavens like Prometheus and gave you these tools.
These laws have spread like wildfire around the world with many countries and regions rolling out similar legislation within months of each other, despite the stereotypical lethargy of any and every legislature. That's not the work of some popular uprising of parents clamouring for age verification.
I fear debating the merits of the argument is missing the point; they don't care. They don't care about children, they don't care about the argument, they just want the control.
The time is coming where we will unseat legislative traitors who use EU/Old World manipulations in the USA.
An unjust law is no law at all. That is the exception the rule of law requires to remain moral.
One developer began a discussion:
https://lists.ubuntu.com/archives/ubuntu-devel/2026-March/04...
Their attempts at a "solution" are quite interesting. One other user suggested that GUI tools ask for the age of the user.
Well ... I have a very strong opinion here. I have been using Linux for over 20 years and I will not ever give any information about my personal data to the computer devices I own and control. So any GUI asking for this specifically would betray me - and I will remove it. (Granted, it is easy enough to patch out the offending betrayal code and recompile the thing; I do this with KDE where Nate added the pester-donation daemon. Don't complain about this on reddit #kde, he will ban you. KDE needs more money! That's the new KDE. I prefer oldschool KDE, but I digress, so back to the topic of age "verification".)
The whole discussion about age "verification" appears to be about forcing everyone into giving data to the government. I don't buy for a moment that this is about "protecting children". And even IF it were, I could not care any less about the government's strategy. Even more so as I am not in the country that decided this in the first place, so why would I be forced to comply with it when it ends up with GUI tools wanting to sniff my information and then give it to others? For similar reasons, one reason I use ublock origin is to give as little information as possible to outside entities when I browse the web. (I am not 100% consistent here, because I mostly use ublock origin to re-define the user interface, which includes blocking annoying popups and whatnot; that is the primary use case, but lessening the information my browser gives to anyone else is also a good thing. I fail to see why I would want to surrender my private data, unless there is really no alternative, e.g. online financial transactions.)
I also don't think we should call this an age "verification" law. This was very clearly written by a lobbyist or several lobbyists who want to sniff more data off of people. The very underlying idea here is wrong - I would not accept Linux becoming a spy-tool for the government. I am not interested in how a government tries to reason about this betrayal - none of those attempts at "explanation" apply in my case. It is simply not the job of the government to sniff after all people at all times. This would normally require a warrant/reasonable suspicion of a crime. Why would people surrender their rights here? Why is a government suddenly sniffing after people? These are important questions. That this law suddenly emerged now, but not in the last 25+ years, is super-suspicious.
This would be the least of evils (compared with, say, ID verification). But a bigger problem is that the implementation is very flawed. It doesn't appear to be very effective. People, including children, can lie. Multiple people can share the same account. Also, there are many devices that cannot be updated (such as embedded ones). My concern is that these idiots might introduce even more extreme laws when they see that it isn't effective enough.
I hope it will cause so many problems (implementation, backlash, etc.) that it will eventually be cancelled.
"But Jonas's parents allow him to do that" - in reality, Jonas's parents should not have a say in this.
Alcohol? Yeah, one or two too many drinks and you're in the hospital getting your stomach pumped. Driving? Blink and you run over a kid. Internet? You can spend evenings and nights on there and not be harmed in any way.
We have full generations of kids that can now be studied for the effects of the internet, starting with millennials. I won't pretend the Internet is a better place now than it was when I was a teen, but it appears to me the "dangerous" things are more focused and concentrated (at least for kids): social networks. It's still a minefield, but with leagues between the mines.
Porn has always been the topic touted for child safety, because it's scary and resonates with conservatives and religious people. Access to it is roughly the same today as it was then, and arguably less dangerous today because the dirty stuff is hidden deeper, thus less likely to be stumbled upon.
But other than porn, the thing that changed the most is social networks. Addiction, bullying, etc. Facebook 15 years ago was not a serious place. The equivalent today is the best place to get roasted by fellow kids and bullied 24/7 while not being able to get off the hook. The damage is psychological, which is insidious, but not systematic. Not every kid will get bullied, not every kid will be addicted to the algorithm(tm), etc.
In the end, education plays a bigger role than simple age verification. Stimulate your kids, give them things to do other than doomscrolling, and get them on the dark corners of the internet to give them curiosity about the world and un-sanitized stuff (hacking in all forms, etc).
P.S. Is your handle a reference to Cats on Mars by Seatbelts? Yoko Kanno <3
1) The issue doesn't matter much. Corporate takeover of the internet caused severe damage, but overrunning social media with LLM-generated content is a mortal wound. Roughly the same number of humans will be using social media in 2030 as currently use CB radio. Remember, nearly a fifth of the population was using CB radio at its peak in the late 70s. It's too little, too late; closing the barn door after the horses have left is pointless, like rearranging deck chairs on the Titanic after it hit the iceberg. Once the advertisers get wise to the scam that nobody is seeing their ads except bots, the problem will kind of fix itself. I think TPTB want to use "protecting kids from social media" as the public face of why social media will crash and burn soon, to avoid discussion of how LLMs actually killed it, because authoritarians love LLMs and they're in charge (although seemingly everyone else hates LLMs, so I'm sure this will end well).
2) Most of the anti commentary reads a lot like addict speak IRL. Talk to a drunk about how it would be a great idea not to drink or a carb addict about how they should not eat donuts and you'll get absolutely rage blasted in return for threatening their addiction, which in the case of an addict, is their identity. "Well it would be the end of the world if people (me) were not drunk and other people (projection of me) will do anything to feed their addiction so obviously no effort should be made to limit addictions and it won't work anyway because other people (me) will even drink mouthwash or homebrew their own moonshine to get drunk" etc. Note I'm not completely against the anti's and they make some very good points that should be considered, but raging like an addict after their drug of choice is threatened is a VERY bad look and is not helping their case at all, if anything it strengthens the case against the anti's. What the pro's don't understand is you can't fix an addiction externally, addicts gotta addict and punishing them and making them miserable might help the pro's feel superior or at least thankful they're not addicted, but it never helped no one. Social media is "an ill of society" and should be treated as such including sensible regulation, protection of threatened groups, treatment for the addicts, and some compassion and acceptance of the addicts either returning to the real world or dying in the addicted world.
The goal for the AI side is that they get to be censors and gatekeepers of all user-generated content on the internet, their models will rank/grade/censor content for age-appropriateness and they will have the pleasure of being paid to train on all new content uploaded to the internet in real-time, in perpetuity.
[1] https://news.ycombinator.com/item?id=47162956
This is a false equivalence. All of the above are vices that objectively carry more harm than good. There's no inherent harm in using a computer; there's a subset of ways in which using a computer can be harmful, which kids can be taught how to avoid or navigate. There's no subset of meth use that isn't harmful.
I'm guessing you meant can't
To be more direct - if you're in any editorial position where something that smells like this might require your approval, please give it the scrutiny it deserves. That is, the same scrutiny that a malicious actor submitting a PR that introduces a PII-leaking security hole would receive. As an industry we need to civil disobedience the fuck out of this.
I live in the UK, I work in London. I can go on X and look at what Elon Musk is posting about the UK, and as a reasonable person I can quite reasonably say he's gone mental. The algorithm has broken that man's brain. And it's not just him: a whole slew of establishment women lost their absolute minds about the trans issue (and Graham Linehan). Mumsnet became a centre for radicalization. You know, as someone who grew up on the internet at quite a sweet spot, I'm very comfortable looking at that stuff and going "Oh yeah, you guys are being groomed by these algorithms and you're defenceless to it".
There's a whole load of "How do we protect the children from this", but I don't think there's actually been much of a reckoning with how grown adults are getting sucked into this vortex. The algorithms on the internet clearly have some trap doors that just absolutely funnel people into crazy places.
All of which is to say: we have a serious problem that's affecting everyone, not just kids, and I think we've got almost no answers for how to tackle it.
The result is this: poorly thought through, sweeping laws that aren't solving the problem and have massive negative side effects. I think Jonathan Haidt has a lot to answer for in funnelling this complex issue affecting everyone into this reactionary "won't someone think of the children!" campaign for banning technology for kids.
You say that like it’s a bad thing!
Yes. And having a fixed cut off from 'you can't see omg boobs! on the internet' to 'you can see snuff porn on the internet' won't help.
Unless I'm missing something, I have zero concern for companies who sell out by complying.
The code was "free as in freedom" when you decided to build your company on it; and while you're not legally obligated to defend that freedom, I, and hopefully other consumers, find that you are morally obligated to.
At the moment only some countries are banning porn, social media and gambling. But how soon will I have to verify my age for a work app? And will I lose my job if I refuse?
>Limiting a child’s ability to explore what they can do with a computer limits their future.
Parents don't want to limit their children from writing software. Saying that limiting minors from accessing porn will limit their future is another argument I don't think many will agree with.
I do not think the proposal is smart or that it will work, but I am more worried that some people seem to think they hold the absolute truth (on any side of the debate).
I mean... How else would you educate children about computers and evading stupid restrictions?
Make it optional and assume an adult otherwise. It's a good idea if it's optional and doesn't come with dumb fines; you could have fines for not enforcing it or not using the API on sites where age checks already exist [porn sites], since those checks don't work anyway [one button is not age verification].
I see this as a good way for parents and institutions to set up their phones, school laptops etc and would pretty much solve the large majority of these issues while having a fraction of the invasiveness.
Honestly, probably by intention. It's sort of a SLAPP attack on the entire world population.