Last I checked, Signal was not fully open source, which is iffy; I believe their encryption protocol is still closed. That said, it's the best of a bad bunch for E2EE messaging. If you're on Android I'd recommend doing what I do: install from the APK on the site, manually verify the signature locally (you can use Termux for this), and lag ever so slightly behind on updates to avoid potential supply chain or hostile takeover attacks. This is probably overcautious for most threat profiles, but better safe than sorry IMO. Also, their server-side stuff is closed source; technically that isn't an issue as long as the E2EE holds up to scrutiny.
Edit: My information may be out of date; I cannot find any sources saying any part of the app is closed source these days. Do your own research, of course, but I'm comfortable saying it's the most accessible secure platform.
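For anyone who wants a similar manual check, here's a minimal sketch in Python of the hash-comparison part; the APK filename and the expected digest are placeholders you'd fill in with a value obtained out of band (a second device or network path), and this complements rather than replaces proper APK signature verification with apksigner.

```python
import hashlib
import sys

# Hypothetical values: the APK you downloaded and a SHA-256 digest you trust
# from a separate channel. Both are placeholders, not Signal's actual values.
APK_PATH = "Signal-Android.apk"
EXPECTED_SHA256 = "replace-with-the-digest-you-trust"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large APKs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
if actual == EXPECTED_SHA256.lower():
    print("OK: digest matches the value obtained out of band")
else:
    print(f"MISMATCH: got {actual}", file=sys.stderr)
    sys.exit(1)
```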
I set up a forum when I started my site for Linux content creation. Discord had become a black hole for technical know-how on a scale IRC could never dream of, and finding answers to common questions was nigh impossible since the technology has changed and the modern way to solve problem X was never asked in a forum and never indexed by a search engine. Granted, Reddit provided a bit of a stopgap over the last decade, but the solutions in the comments these days are more often than not a confidently incorrect copy-pasta from GPT.
I use Discord for chat and voice calls since that is what I expect from a chat app, but the number of companies that have built their community / knowledge base / support system around Discord is worrying. You know they can just delete that, right?
I'll continue to use Discord for chat until prompted to put my face in the hole :)
The sad thing is that I think many people will en masse pony up their ID or snapshot without a second thought. I'm not sure if enough people will refuse to actually force Discord to back off this decision (unless their idea is to grab as much data as possible at once with the understanding that they are going to back off either way).
Especially if it's presented as a pop-up upon launching the app that suggests the user won't be able to talk to their friends/servers without showing ID. Carefully worded language could spur some % of users to panic at losing years of history and immediately show ID. Folks with less privacy discernment hear "jump" and reply "how high".
I don't imagine this was 100% their decision; it's more like a response to the epidemic of all the world's governments suddenly coming up with adult verification schemes. Discord has already required it in some countries, and it's definitely easier to get everybody to verify themselves than to require it on a per-jurisdiction basis. The personal data they get is a cherry on top.
Also, this is just the beginning, more social networks will require the same soon.
I have done that for stripchat which was also requiring it. Not happy with it but I'd rather use a selfie than a whole ID document which includes an image anyway.
I'll continue using Discord in teen mode, I guess. I'd rather not lose the current connections & servers I have on there, and I'm not optimistic about people migrating away, especially non-tech people.
I get the draconian side of things, but I am also tired of thousands of Russian, Indian, domestic-funded, etc. bots flooding the zone with divisive propaganda.
In theory, this seems like it would at least be a step in the direction of combating disinformation.
I'm curious if there are any better ways to suppress these propaganda machines?
I don't see how disallowing viewing "age-restricted" content through Discord without giving them your ID would have any impact on the spread of disinformation, outside of, like, disinfo in the form of pornographic or gory images.
To add context to the discussion, it is important to recall that Discord was reported to have recently filed paperwork with the SEC for an IPO [1]. Thus it seems likely that the real reason for the age verification (i.e., user identification) policy is to boost its perceived earnings potential among Wall Street investors. According to this theory, Discord is the new Facebook.
It's clear "age verification" is not something we'll get rid of, so I think instead we should push for a publicly verifiable double-blind (zero-knowledge proof) solution that can ensure it only gives the websites a boolean and doesn't allow correlation from either side.
The alternative is having to give your ID to Facebook, Google, Microsoft, and all the other bad actors...
It's kind of surprising that no-one has really come out with a proper privacy-preserving approach to this yet. It is clearly _possible_; there are reasonable-looking designs for this. But no-one's doing it; they're just collecting photos and IDs, and then leaking them all over the place.
What are your thoughts on Apple's approach? You still have to provide your birthdate to Apple. But after that, it only ever shares your age range with other companies that request it, not your birthdate.
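Purely to illustrate the data-minimization idea (the cut-offs and labels below are made up, not Apple's actual ranges or API), here's a sketch of bucketing a birthdate into a range so the relying service never sees the birthdate itself:

```python
from datetime import date
from typing import Optional

# Illustrative buckets only; a real scheme would use whatever ranges the
# issuer defines. The point is the service receives a coarse label, nothing more.
RANGES = [(13, "under 13"), (16, "13-15"), (18, "16-17")]

def age_range(birthdate: date, today: Optional[date] = None) -> str:
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    for upper, label in RANGES:
        if age < upper:
            return label
    return "18+"

print(age_range(date(2001, 5, 17)))  # "18+" (as of 2025)
```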
As others have said, it’s obvious that no real attempts have been made by anyone to create a privacy-focused solution because the end goal is to collect photo IDs.
Occasionally in my free time I have been tinkering with a certificate-based solution that could fulfill this sort of need for age verification. It’s not the most robust idea but it’s simple enough using most of what we already have. Creating a minimal protocol which doesn’t share actual identifying information nor metadata of the site you’re accessing is trivial. If I can make an 80% solution in less than 100 hours of my free time then some groups with more money and intelligence could propose a dead-simple and easy-to-adopt solution just as easily.
They do not want to solve the problem; they want to collect our IDs. If they had actually wanted to solve it, they would not have done this in jurisdictions where it is not a requirement.
It would seem like a naive solution would be some arrangement where Discord asks for a proof-of-age from an official service run by the State (which issues your ID).
Well you could have government-run cryptographically signed tokens. They're already in the business of holding ID data (i.e. they don't need to collect it and this wouldn't increase the attack surface).
But assuming it has to be a private solution, you could do the same thing but make it a non-profit. Then at least _new_ services you wish to use don't need to collect your ID.
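As a rough sketch of the "signed token" idea (not any existing government scheme; it assumes the third-party `cryptography` package), the issuer signs a claim containing only an over-18 flag and an expiry, and a site verifies the signature without ever seeing a name or birthdate. A real scheme would also need unlinkability, e.g. blind signatures or zero-knowledge proofs, so the token itself can't be used to correlate you across sites.

```python
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

# --- Issuer side (hypothetical government or non-profit service) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()  # published so relying sites can verify

def issue_token(over_18: bool, ttl_seconds: int = 600) -> bytes:
    # The claim carries only a boolean and an expiry: no name, no birthdate.
    claim = json.dumps({"over_18": over_18,
                        "exp": int(time.time()) + ttl_seconds}).encode()
    return claim + issuer_key.sign(claim)

# --- Relying site side (e.g. a chat service) ---
def verify_token(token: bytes) -> bool:
    claim, sig = token[:-SIG_LEN], token[-SIG_LEN:]
    try:
        issuer_pub.verify(sig, claim)
    except InvalidSignature:
        return False
    payload = json.loads(claim)
    return bool(payload["over_18"]) and payload["exp"] > time.time()

print(verify_token(issue_token(True)))   # True
print(verify_token(issue_token(False)))  # False: valid signature, not an adult
```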
Having no privacy protections is simpler, and the simpler solution is cheaper. If there's no real incentive to go with another option, companies will go with the cheaper one.
Really, I've come to the conclusion that anything I send out of my LAN is probably kept on a server forever, ingested by LLMs, and indexed to be used against me in perpetuity at this point, regardless of what the terms or conditions of the site I'm using actually say.
Speaking of hosting, Discord used to be one of the biggest (inadvertent) image hosts, so they might have set up the system more to reduce legal exposure than to monitor conversations per se. [1]
A lot of the internet broke the day they flipped that switch off.
Weren't external Tumblr hotlinks also a thing back in the day?
That was always the wrong threat-model hierarchy. I have always been more concerned about what the federal, state, and local governments can do when given more power/information than about what these companies can do.
You have the fervent types who love recording everything "for the good of the people". But then you'll just have piles of people, with separation of duties, doing things with very little understanding of where they fit in the process and very little care to find out.
And E2EE platforms like Mega are now being censored on some platforms specifically because they're E2EE, and so the name itself must be treated as CSAM.
As people who want to talk about words like "megabytes" or "megapixels" or "megaphones" or "Megaman" or "Megan" on Facebook are finding out.
It serves UK, EU, and various US States' regulations to "protect the kids".
Discord is just the next, and biggest, canary in the coal mine. These regulations are going to force a lot more websites and apps to do this, too.
I wish these sorts of regulations had been written hand-in-hand with a more directly technically-minded approach. The world needs a better technical way to verify a person's estimated age cohort without a full ID check and/or AI-analyzed video face scan before regulation means "every" website that may post "adult content" (however you choose to define that) starts to require such checks.
A lot of people are complaining, but it seems like they already rolled it out in the UK and Australia... no real complaints that I know of, and I'm in NZ and am on NZ/Aussie Discords. Also, teen mode doesn't actually seem that restrictive. Seems an OK move to me. But for whatever reason people seem to froth at the mouth when it comes to Discord on here.
I have a discord account that I use very rarely, and just tried it (from the UK) and it didn't ask me for any ID or face scan. If they do start doing that, I'll simply stop using the service.
I predict out-of-the-box deepfake live-camera software will get a bump in popularity; there are already plenty of solutions available that need minimal tinkering. It should be trivial to set up for the purpose of verification and I don't see those identity verification providers being able to do anything about it. Of course, that'll only mean stricter verification through ID only later on, much to the present-and-future surveillance state's benefit.
Here's how Discord works. A third or so of its features, such as forum channels (EDIT: I think this specific example was wrong; stage and announcement channels, but not forum channels) or role self-assignment, are locked behind Community Mode. After enabling Community Mode, server owners are NOT ALLOWED to turn off content filtering anymore, meaning that by default, content in every channel may be filtered out by systems you cannot configure.
The only way for the server owner to circumvent the filter is to mark a channel as "NSFW", which doesn't necessarily mean the channel actually contains any NSFW content.
This change will not actually require ID for content confirmed to be NSFW. It will require ID for each and every "NSFW mode" (unfiltered) channel. The end result is that you have three choices:
- Ditch Discord features implemented in recent years (or at least this is currently possible) - this prevents a server from being listed as public;
- Require ID checks from all your users (per channel);
- Have everything scanned from all your users (per channel).
Are you saying that you can "mark" the channel as "NSFW", and Discord will stop scanning your content, possibly allowing you to share very illegal content through their servers?
Sounds weird to me. Pretty sure that they legally have to make sure that they don't host illegal content. Or does "NSFW" enable some kind of end-to-end encryption?
That has always been the case, yes, though I'm not sure what you mean by "illegal" content. There is only a small overlap between NSFW and illegal content, and the NSFW filter has never been concerned with, uh, violating photograph copyright or something.
You don't have to take my word for it, just check it yourself, although it seems that this week, they renamed the NSFW setting to "Age-Restricted Channel" (in preparation for this change, no doubt). The verification-related portion of the behavior I described was implemented for the UK months ago.
The description still contains: "Age-restricted channels are exempt from the explicit content filter."
EDIT: IANAL (or american) but if Discord was policing content for legality rather than age-appropriateness, wouldn't they lose DMCA Safe Harbor protections?
> The description still contains: "Age-restricted channels are exempt from the explicit content filter."
Wait! This does not mean they do not scan it. What I understand from that statement is that they filter explicit content, as in they prevent it from appearing on the user's screen.
When you enable the "NSFW" mode, you tell Discord "it's okay, don't filter out anything". But Discord probably still scans everything.
So that makes sense to me: if you don't validate your age, then Discord will not allow you to join channels that disable the "adult" filtering. I can personally live without adult content on Discord...
I wonder if Discord is legally forced to do that, or if they would rather do it themselves (and collect the data $$$) than wait to have a solution they don't own imposed on them.
I feel like age verification will come; there is no way around it (unlike ChatControl and the like, age verification seems reasonably feasible and has a lot of political traction right now).
But I would rather have a privacy-preserving solution for that, e.g. from the government (which already knows my age).
There are probably enough regions where it is required or will be required soon, that it makes sense to just get it over with.
The Internet is more or less becoming a locked down, controlled and fully observed thing for end users and citizens, so adapting to that world sooner and working within it is just sensible future-proofing.
This also lets them more safely target older users with ads, purchase requests, etc. and new integrations for gambling and other high ROI systems.
GeoIP this nonsense. Legal liability is solved as a "good-faith effort" and those living in jurisdictions where this doesn't apply (or use a VPN) don't need to be stripped of privacy.
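A sketch of what that kind of good-faith gating could look like; the country lookup and the jurisdiction list here are placeholder assumptions (a real deployment would query a GeoIP database, and VPNs sidestep it entirely, which is the point):

```python
# Placeholder list of jurisdictions with an age-verification mandate.
REQUIRES_VERIFICATION = {"GB", "AU"}  # assumption, not legal advice

def country_of(ip: str) -> str:
    """Placeholder: a real service would query a GeoIP database here."""
    raise NotImplementedError

def needs_age_check(ip: str, already_verified: bool) -> bool:
    """Only send users into the verification flow where the law demands it."""
    if already_verified:
        return False
    try:
        return country_of(ip) in REQUIRES_VERIFICATION
    except NotImplementedError:
        # Whether to fail open or closed is a policy choice; failing closed here.
        return True
```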
If you're a Slack user, I don't think they need your ID to tell that you're an adult
More seriously, it will become a problem once there is a significant user migration there and a repeat of the mass hysteria. Being more niche, these smaller platforms are probably not in danger right now.
There is a bit of an arms race between ID verification systems and users bypassing them with AI generation. Which is really just AI-generated images vs. AI-generated image detection.
In practice, nothing will stop it, the tooling will gradually get better at detecting prior fakes and banning those users while the newer fakes will go undetected for longer.
Putting up the requirement satisfies their CYA requirements here. The race between AI fraud vs. detection is something they can just ignore and let happen on its own.
Finally I feel validated complaining for the last decade about the move away from IRC/teamspeak to centralized services. I've been called all kinds of names.
Now those same people are complaining they're gonna have to submit their faces to discord. Which will eventually be used to prosecute or commit fraud. I'm left wondering if "tech enthusiasts" are ever actually correct.
Another company jumping on the bandwagon to data-farm under the pretext of safeguarding children. I really wonder if there's a method to actually safeguard children while also not holding on to data. Because, genuinely, you can't question this: companies would just say "we are trying to protect kids" and that'd be the end of the argument.
I really wonder whether, when this is fully implemented, they will have any safeguards against selling "adult verified" accounts. With AI being a possible workaround for those who don't want to share an ID, selling accounts would be another big issue unless they check IP addresses and block based on locations and logins. EDIT: I see in another comment that it's against the TOS to sell accounts; I doubt that has stopped anyone before, though.
So my friend group has been looking for alternatives for a while now: something that feels like Discord, works on mobile and desktop, and has voice chat.
I use Signal but the UI is very different from Discord.
I've had very mixed experiences with Element + Matrix, Element keeps crashing on mobile, and while voice chat kinda exists in Element it's not been great imho.
I looked into hosting Rocket.Chat, Zulip, and Mattermost, but from what I recall voice + mobile were either missing or paywalled at a per-user price.
> How do you know one party isn’t 15 when the other is 25?
You don't. That's why parents need to be involved in their children's lives.
CSAM is the easy excuse, anyway. That's the one lawmakers use, and most people are against CSAM, myself included, so the excuse goes down easy. But the impetus they don't talk about is monitoring and control.
The answer isn't to destroy privacy for everyone. The government and these corporations don't need to know what you're doing every second of the day.
> That's why parents need to be involved in their children's lives.
Can't, aren't, look at iPad kids, won't. This is about as logical as saying people should just drive safely, so we don't need guardrails and seat belts. Or saying parents should always watch their children, so we don't need age verification at the alcohol store. Besides, it's not like the school library or friends of friends don't have devices of their own that you as a parent can't see.
Parents should not need to be tech experts or helicopters to feel their kids are safe online. That's fundamentally unreasonable. In which case, privacy and child safety need to come to an unhappy compromise, just like any other conflicting interest.
For that matter, I'm surprised that HN automatically always accepts the "slippery slope" fallacy while lambasting it everywhere else.
> This is about as logical as saying people should just drive safely, so we don't need guardrails and seat belts.
This is a terrible analogy. Regulations related to driving only apply to drivers; if you're a pedestrian then you're not subject to basically any regulations that licensed drivers have to abide by. On the other hand, internet regulation like this punishes absolutely everyone to safeguard a small group, that being parents. It's like legally forcing pedestrians to wrap themselves in bubble wrap while outside, so that the careless drivers who couldn't behave don't dent their cars or get hurt when a pedestrian flies into their windshield after they inevitably collide with one. Why is any of this the pedestrians' responsibility?
The fact that there is absolutely zero effort in pursuing any non-punitive options (like forcing ISPs to put networks of clients with kids in child-friendly mode, where the adult has to enter a password to temporarily view the unrestricted internet on their network, which should cover 90%+ of cases; or doing any of the proposed non-identifying proofs of age, like a generic "I'm an adult" card you can buy at the convenience store) should tell you that this has very little to do with actual concern for children. They went out of their way to enact the least private, most invasive, most disruptive option, which won't even fix anything unless you expect literally every website on the internet to be compliant.
> For that matter, I'm surprised that HN automatically always accepts the "slippery slope" fallacy while lambasting it everywhere else.
Slippery slope arguments are not automatically a fallacy. They can be if the causative relationship is weak or if the slope is massively exaggerated. But if neither of these things is true, a "slippery slope" is just looking at the trends and expecting them to continue. You can't look at a linear graph and say "well, I think there's no most likely option from now on, it could go any way really" without an argument for why the trend would suddenly deviate. The internet has been tightening up and the walls have been closing in for a long time; why would that change?
You have got to be kidding me. What is it with these lawmakers and websites demanding people do all of this stuff using services that nobody has ever heard of? I myself (as someone who is blind) have never been able to do the face scanning thing because the information they provide (for, you know, getting my face focused) is just massively insufficient. And a lot of the ones I've seen also require me to (as an alternative) do some weird ID scanning with my camera instead of, you know, just allowing me to upload my ID or something? (Then again, I really wouldn't want to give my ID to some service nobody has ever heard of either, so there.) I'm also concerned when TFA says "a photo of an identity document": what does this mean? If I have to scan my ID with my camera, that's not exactly going to be simple for me to pull off. I get that we need to protect kids, but this is not the way. Not when it is discrimination by another name for individuals with disabilities (as just one example).
You can, of course, not do this (you meaning the company, Discord)
You can choose to be respectful of people who have valid reasons for not providing ID
But you want that sweet IPO money (as stated elsewhere in this thread). You don't actually care about the internet and how anonymity is a cool thing for certain vulnerable groups
All these tech CEOs should face prison time and I'm not joking. They've displayed a complete laissez faire attitude to all of these concerns
This is just the latest in a long trend of increasing spying on users. Why bother having to guess who your user is, or fingerprint a browser if you can just force them to show you their national ID?
This is transparently about spying on people, not "protecting children". The real world doesn't require you to show your ID to every business you frequent, or every advertiser you walk by. Someone can yell a swear word on the sidewalk, and not everyone within ear shot has to show ID.
Discord is also rolling out an age inference model that analyzes metadata like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.
“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,”
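Discord hasn't published how that inference works, so purely as an illustration of the kind of signals the article describes, here's a toy heuristic over hypothetical per-account features; a real system would be a trained model, not hand-set weights like these.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # All hypothetical features, loosely based on the signals the article lists.
    account_age_years: float
    weekday_daytime_activity_ratio: float  # share of activity during working hours
    mature_rated_games_ratio: float        # share of linked games rated for adults
    hours_per_day_on_platform: float

def adult_likelihood(s: AccountSignals) -> float:
    """Toy score in [0, 1]; hand-picked weights for illustration only."""
    score = 0.0
    score += min(s.account_age_years / 10, 1.0) * 0.3
    score += s.weekday_daytime_activity_ratio * 0.3
    score += s.mature_rated_games_ratio * 0.2
    score += (1.0 - min(s.hours_per_day_on_platform / 8, 1.0)) * 0.2
    return score

# An account scoring above some threshold would skip the explicit ID flow.
signals = AccountSignals(6.0, 0.7, 0.8, 2.0)
print(round(adult_likelihood(signals), 2))  # 0.7 with these toy weights
```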
How does anyone know whether a family is engaging in that time-honored tradition of passing down accounts from grandfather, to father, to son, to child, and their posterity, in perpetuity?
Seriously though, unless you have positively identified the person who created the account in the first place, you have 0% chance of knowing whether it is the same person using it today.
Gamers sell their high-level accounts all the time. It would be a simple matter of economics that the Discord users with the oldest accounts sell them to 12-year-olds. Likewise, accounts are shared willy-nilly, whether or not that violates the rules. And accounts can be stolen or compromised, if you're really hard up.
But under that argument, you would have to prove your age on a regular basis; the plan right now appears to be that each account would only need to do so once.
> You agree not to license, sell, lend, or transfer your account, Discord username, vanity URL, or other unique identifier without our prior written approval. We also reserve the right to delete, change, or reclaim your username, URL, or other identifier.
If transfer of accounts is a policy violation, then Discord has legal cover to confidently assert that, once ID is verified, the ID'd person is the owner and controller of the account thereafter.
Account selling, stealing, and sharing will certainly still happen, but that's grounds for banning, and not Discord's legal liability anymore.
Just ban that in the TOS. As we know, the TOS is inviolable. As such, it is not possible to sell, gift, or otherwise transfer an account. At least that should be considered how it works for age verification. If an account transfer is found out, the account can be terminated, thus closing the loophole.
No law or regulation is ever 100% effective in real life. Income tax is not collected 100% effectively. Should we not do it? Criminals are not caught 100% of the time, should we not do it?
Of course this won't be 100% effective, maybe 80-90% effective. That's all they need and expect from this system.
HN is constantly obsessed with "is it perfectly effective?"
No law, none, is perfectly effective. Speed limits certainly aren’t self enforcing, but remove your neighborhood’s speed limits first if you truly believe laws must be demonstrated perfect.
Yeah, my YouTube/Google account is almost as old as YouTube itself, but it will constantly ask me to verify my age when I click on something marked 'not for kids'. Can we just get the Leisure Suit Larry age-verification system ;)
Alternative: run your own self-hosted messaging server for you, your family and friends. No company should ever get such sensitive data as private conversations.
Use Discord with a throw-away account. Create a character in GTA 5 on your laptop and show its face (in "selfie" mode) to the web-camera on another computer with Discord open. All face scan checks so far gladly accept it. Instagram has been requiring occasional face checks for ages already.
Honestly they're probably big enough to get away with it.
If it was only friend groups it would kill them for sure, we've seen that many times, but given the absurd amount many large online communities on Discord, I'd wager they can force it down and be relatively unscathed.
They played the long game - they provided a good service for 10 years, and got REALLY big before they started the enshittification process.
Haven't cared about Discord in a long time. In fact I'm glad they're continuing to shoot themselves in the foot.
During the pandemic, I was on a Discord server for folks to socialize and blow off steam about the whole situation. Yes, there were some anti-vaxx wackos, but overall the place was civil and balanced, and I met some interesting people through it. We cracked jokes and it was a little bit of fun in a tough time.
One day I came to discover that Discord had banned the server for allegedly violating... something. I wish I had written down everyone's emails because I permanently lost contact with a bunch of friends in an instant.
I never signed in to Discord again, in spite of times where some other social group wanted to use it. I vowed never to use Discord again. Fuck those guys and the Teslas they rode in on. I hope this ID verification thing is another big step towards their irrelevancy.
You should be more tolerant of the "anti-vaxx wackos". The covid 'vaccine' has a very large number of negative externalities, confirmed by scores of credentialed doctors and researchers
The difference with Reddit is it has way more persistent value. Everything on Discord is throwaway, but valuable posts on Reddit from years past are easily retrievable. The two aren't so comparable.
One of the unspoken reasons many people have for using Discord is they don't want what they say to easily be associated with them in perpetuity. Requiring ID really chips away at that, in spite of what Discord has to say about privacy around ID.
By no means am I saying that Discord will go extinct. I just haven't observed anything about it that's irreplaceable. Reddit, on the other hand, has a wealth of discussion dating back to the mid-to-late 00's.
Does it matter? The problem is that everyone uses discord for everything. It's not an isolated platform, it's THE platform if you want to have friends.
Ratings aren't legally binding though, are they? I bought games rated older than I was, and it's totally up to people's parents what they're allowed to play. Are you suggesting a 15-year-old should be allowed to play the 16-rated game but not discuss it?
Hard no. Reality is that this push is everywhere. Authoritarian governments are cracking down hard on dissent, they're not going to leave huge platforms for communication untouched. We'll need open source decentralized alternatives.
Children generally have these things called "parents" who are supposedly responsible for their well being. Oh hey, suddenly there isn't a contradiction.
A lot of whining here about how this is an imperfect response to the issue of children being exploited on Discord / using the platform to engage with inappropriate content.
Until someone offers up something better, I take these types of initiatives from social media platforms as huge wins. Ignoring the problem will not make it better. We've been ignoring it for about 20 years now, and it's only gotten worse.
It is a great irony that the heavy-handed push for "protect da kids" is all happening while we learn, day by day, that the richest and most powerful members of our society have no problem hanging out with a convicted child sex trafficker.
Rules for thee, free love for me.
What do you mean day by day.
We have known this to be the case for quite some time, yet a majority of the public still thought that a convicted felon was good enough to be president.
I think that's the exact irony that the parent is alluding to.
It's all about the kids, unless, idk, you're rich enough?
Hanging out with? Many of the top liberals were going to Epstein's island themselves. That's on top of other revelations in the entertainment industry and Washington DC going way back.
Voters in America stay electing self-centered, sick-minded people. God's Word (the Bible) says pick people who follow God, have great character, take no bribes, and (less important) are skilled for that position. If people will do that, we will see amazing things happen. So far, they elect pagans (Left) and Pharisees (Right) who always give them what they voted for.
I'm fine with the free love and debauchery, but just really keep it to adults and be safe.
I'm just going to go ahead and say that "free love" is a terribly inappropriate way to refer to sex trafficking, regardless of the age of the victims, unless you're being facetious (e.g., The Onion's "Penis Goofin'" allegations against Epstein).
It's a question of scale. Neither crime is less serious but far more children are groomed and abused over Discord than flown in via some super rich sicko's private jet for a 'costume party'.
This is no worse than Discord just banning NSFW content wholesale throughout the platform (which they would be entirely within their rights to do). It's a big fat nothingburger.
do as we say, not as we do
and they keep protecting the pedos from prosecution. lol.
I am not a native English speaker and I may be missing a cultural nuance, but I wouldn't call any of what they did "love". That word has no place anywhere near a sickening child abuse island.
it's just sarcasm.
There's a special phenomenon that happens as startups grow large. They begin to drift away from the ground truth of their product, their users and how it's used. It's a drift away from users. And a drift towards internal politics. A lot like Rasmussen's drift towards danger, https://risk-engineering.org/concept/Rasmussen-practical-dri...
As startups grow beyond a critical threshold, they start to attract a certain type of person who is more interested in mercenarily growing within the company / setting themselves up for future corporate rise than building a product. These people play to the company's internal court and create deeply bitter environments that lead to more mission-driven individuals leaving the company.
Which is why we end up with decisions like OnlyFans hitting $1B / yr in revenue (with extreme profitability) off of porn and then deciding to ban porn, https://www.ft.com/content/5468f11b-cb98-4f72-8fb2-63b9623b7...
Or, Digg deciding to kill its "bury" button and doing a radical "redesign" that made Reddit worth billions.
Unity's decision to update its pricing. Sonos' app "redesign" etc etc.
Decisions that kill the company. Or, in the best case, severely cripple it.
Congratulations Discord, y'all have made the list! :)
I deleted my Facebook account in 2011. After finding out how much critical neighborhood information I have been missing, I finally registered a new Facebook account fifteen years later to follow my neighborhood groups.
A month later, the account was suspended for supposedly breaking guidelines. I never posted a single message, never reacted to any posts.
They then required me to upload a video scan of my face to prove I was a person.
We aren’t quite at the end of the internet, but man I can really see the end of this journey coming sometime soon.
I helped an elderly woman create her first FB account. She'd just lost her husband and wanted to notify his friends about his upcoming memorial service. She knew their names but didn't have contact information.
We created the account from an Apple device, registering from her home cable modem IP, giving FB her cellphone number and ISP issued email address — all strong signals of consumer authenticity. But after she added five of her relatives within half an hour, her account was locked for suspicious activity.
There was an appeal button; she was asked to take a picture of her face from many angles and upload ID. She gave them everything they asked for, but when Facebook reviewed the appeal, they closed her account permanently.
Mark Zuckerberg, folks. It matters when his default philosophy is "They trust me dumb fucks". Copying Snapchat 9 times is more of a priority than account security. He wasn't "making a good point". He's a malicious asshole who deserved jail years ago
Ironically, this may be one of the many straws that breaks the proverbial internet camel’s back. We all wax nostalgic about the old internet, the pre-homogenized, non-corporate, Wild West internet.
Perhaps these constant restrictions will finally spur us to create our own spaces again: our own little groups that exist independent of the corpo-sphere.
The only reason ‘the way things used to be’ went away was because the new thing was convenient. Well, now it isn’t anymore. So let’s just go back to the old thing.
Had a similar experience after rejoining a few years ago. My account wasn't suspended for breaking guidelines AFAIK, but rather flagged as a suspicious account that required an upload of my face and driver's license. I think the account still exists in this limbo state because I'd rather not upload all of that to Facebook, and yet I'm still not able to log in to request that the account be deleted.
That won't guarantee that you get your account back. Many times it's used to permaban you later.
FB/Discord/etc were never the internet. They were walled gardens you could enter via the internet. This could be a revitalization of the internet, pushing people back to decentralized means of communication.
Perhaps you have not read about how Iran is moving to a whitelisted internet. Or perhaps you believe this will not happen in your country.
However, “think of the children” will always result in more restriction in the West, not less. We are watching countries prove that it works to isolate from each other. Europe is not isolating from America in exactly the same way, but it is isolating business processes from American services.
We are not on the cusp of the end of the internet, but the cliff sure seems in view to me.
It should go without saying but,
*CANCEL YOUR NITRO SUBSCRIPTION NOW IF YOU'RE PAYING FOR ONE* (for whatever reason)
This was just announced today, and a flood of canceled payments within the next 24 hours is the easiest way to send a message. And also tell people on the servers you're on to do the same. It's not like they give you anything of real value for that money.
Here's the October 2025 Discord data breach mentioned at the end of the article:
https://www.bbc.com/news/articles/c8jmzd972leo
> Discord, a messaging platform popular with gamers, says official ID photos of around 70,000 users have potentially been leaked after a cyber-attack.
However, their senior director states in this Verge article:
> The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.
Why didn't they do that the first time?
> The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.
This is also contradicted by what Discord actually says:
> Quick deletion: Identity documents submitted to our vendor partners are deleted quickly— in most cases, immediately after age confirmation.
What are the non-most cases?
Also, _Discord_ deleting them is really only half the battle; random vendors deleting them remains an issue.
Not to mention collecting them at all means those servers are a primo location for state actors to stage themselves and make copies of the data before it gets deleted.
To say nothing of insider threats, which likely exist across every major social media platform in service to foreign governments.
Since when is the city one lives in mentioned on a birth certificate?
It was only one example they gave, and they accept multiple different types of ID; a driver's license or national ID card being other likely ones, and DLs do say where you live.
I believe the original finding was that they were not deleting IDs that were involved in disputes.
And do they really actually delete it this time?
They explained it in their announcement at https://discord.com/press-releases/update-on-security-incide...
TL;DR: The IDs were used in age-related appeals. If someone's account was banned for being too young they have to submit an ID as part of the appeal. Appeals take time to process and review.
Discord has 200,000,000 users and age verification happens a lot due to the number of young users and different countries.
They're a nonsense company, and trusting them with any information is foolish. They'll store everything and anything, because data is valuable, and won't delete anything unless legally compelled to and held accountable by third party independent verification. This is the default.
The purpose of things is what they do. They're an adtech user data collection company, they're not a user information securing company.
> The ID is immediately deleted.
I call it bollocks. Likely they have to keep it for audit and other purposes.
"delete" doesn't mean delete anymore, like you say, there are always audit logs, and there is "soft" deleting.
Expect any claims that things are being deleted to be a bald-faced lie.
>Why they didn't do that the first time?
The company they hired to do the support tickets archived them, including attachments, rather than deleting them.
Ah sorry our contractor did all that highly illegal stuff. Too bad we can't pierce the corporate veil anymore... shucks.
Ah, so it was the "staffer" excuse.
How convenient.
Ignoring the implications of this for the moment, let me broach a related (and arguably more important) question: what do you do when you have multiple communities you interact with only on one platform, and suddenly that platform becomes intolerable for a subset of your community?
It is the same as what everyone did after the Reddit fiasco, i.e., protest, boycott, grudgingly use it while complaining, and then finally accept the change.
Maybe this Discord episode will have a better outcome for the masses.
It seems like the answer is pretty obvious. That subset of the community stops using it and uses something else, and the others either follow them or don't.
You, if you're not in the first group, can continue to use both to communicate with everyone, but some of them lose the ability to communicate with each other.
The ideal outcome is for everyone to stop using the intolerable thing and switch to a tolerable thing. That's even what often happens over time, but not always immediately. Probably do anything you can to make it happen faster.
We start a new app. Opensource Discord, Self-hosted, federated. Serving that subsection that cares about privacy and security.
Discord is a good design, and should be replicated rapidly with mutations from competitors galore.
Revolt/stoat has existed for quite a while: https://itsfoss.com/revolt/
> Opensource Discord, Self-hosted, federated
Sounds like you want https://matrix.org/
> Discord is a good design
Then the main, reference client https://element.io/ or https://fluffy.chat would work great for you.
... With the only caveat being that general experience of using Matrix is awful.
I second the other commenter's suggestion of using https://stoat.chat/ or as it used to be called: Revolt, which matches the "Opensource Discord" requirement perfectly.
Matrix is slow, buggy trash with bad clients.
(Incidentally, this is also the incantation that will cause its primary maintainer to show up in the comment thread and tell me that I’m not using their seemingly annual complete new client rewrite that fixes all of the problems and makes it perfect now.)
People tried warning that moving all your discussion forums into a proprietary, closed, unsearchable platform was a bad idea. And it was. But nobody cared.
If this happened 15+ years ago, a huge chunk of the userbase likely would've migrated to alternatives, potentially resulting in Discord being replaced and falling into irrelevance.
Today, though, no chance that happens. The current generation literally grew up with it, same for most of the other established social media apps. The concept of alternatives largely does not exist for them. And besides, they were probably already sending pictures of themselves and other personal data to each other through the app, so it's not like Discord doesn't already have all of that.
There are also people who have been through enough of these moves and community splits that they're incredibly tired of it all.
I mean, I grew up with AOL AIM, Yahoo Messenger, and IRC... yet I switched every time a new tech came out with more of my friends on it. Why do we think discord will be any more sticky than Digg or Slashdot, or any of the above?
People will migrate, some will stay, and it will just be yet another noise machine they have to check in the list of snapchat, instagram, tiktok, reddit, twitter, twitch, discord, group texts, marco polo, tinder, hinge, roblox, minecraft servers, email, whatsapp and telegram, and slack/teams for work.
Absolutely exhausting to be honest.
One of the starkest social desirability biases in tech is between federated and centralized platforms. Most people, in public, say they support distributed, federated systems, but when push comes to shove, they all use centralized platforms anyway.
atproto is a really good attempt at solving this issue
Shake your head and move on.
It's not like we haven't seen closed source applications become hostile to their users before. And it's not like we didn't warn people about it.
I think she is a polarizing figure to some, but journalist Taylor Lorenz has been complaining about this sort of thing for a long time. She has been increasingly warning about a future in which we need to scan IDs for all of our online services, in the name of protecting kids. (With the obvious implications about that data leaking, governments using it to track dissidents, etc.)
What realistic open source alternatives to Discord are there? I'm currently considering moving to one of these with my friend group:
- Matrix
- Stoat, previously revolt (https://stoat.chat/)
- IRC + Mumble
- Signal
This seems like a nice breakdown of some options:
https://taggart-tech.com/discord-alternatives/
(Not affiliated)
One thing most of those lack is an easy way to share screen.
If anyone wants to differentiate their Discord alternative, they should cover most of Discord's functionality and then add the ability to be in multiple voice chats at once (maybe with permissions, a channel hierarchy, and different push-to-talk binds). It's a sorely missed feature when running huge operations in games, and using the Canary client is not always enough.
Matrix screen sharing is a feature of Element Call / MatrixRTC (in development).
For now, I think they do it through their Jitsi integration. I don't know how easy it is, as I haven't tried it.
https://docs.element.io/latest/element-cloud-documentation/i...
Stoat has screen sharing / video calling in the pipeline at least: https://github.com/stoatchat/stoatchat/issues/313
Jitsi does that well
Does matrix have decent 1:N client desktop broadcasting with low latency (and high fps) yet? I use discord for "watch parties", video and tabletop gaming...
Snikket (https://snikket.org ) with Monal as the iOS client
I wonder how Stoat will fare, and how it is currently maintained, in terms of "making money"; my fear is that it would steer into the direction of Discord itself.
Zulip?
Mailing lists. I’m only half joking. I was on some mailing lists in the 1990s that were pretty close to what I see on Discord today.
Which of these has been around for over three decades?
That would be my answer.
Same, depends on what you expect in terms of features and so on, but for chat, IRC works perfectly.
Revolt's rename to Stoat is probably worse than any rebranding MSFT has ever done.
It's because of the trademark: https://stoat.chat/updates/long-live-stoat
Nevertheless, I don't like the new name either, oh well...
I like this comment though:
Imagine you make a free software project and it runs into trademark issues because people have more money than you to register in more classes than your project.
And then even though your project existed first, they still come after you anyway.
And on top of that, you're stuck with an even more expensive rebranding as well.
from: https://news.ycombinator.com/item?id=45626225, not sure how accurate it is, but it makes me want to revolt.
"[beaver emoji] Revolt is Stoat now"
Argh. If there's no stoat emoji, petition the Unicode Consortium for one, don't just use a beaver. It's not even the right family; the badger emoji would be closer.
It's open source, I'm tempted to fork it and do nothing other than change the branding.
I have found Element and Matrix to be totally unusable in iOS
Element’s awful, but I’ve found FluffyChat, another matrix client, to be a lot better, albeit with a very silly name.
For me, the closest alternative to Discord is Stoat. Matrix with Element (or other clients) would be great, but it feels so slow on both desktop and mobile.
IRC was here before Discord, and it will still be here after.
I've never heard of Stoat. Looks like IRC but it's Electron. Total waste of time.
IRC does not support group voice & video calls, which is one of the primary features of Discord (and previously Skype, from which everyone migrated to Discord in the first place)
It's a viable system for the many open source software projects that collaborate over chat; Expo, TypeScript, and Effect are relatively large examples. I'll participate there when it's available; where it isn't and I get locked out, I'll just use the software without contributing, no problem.
Kids these days...
Should we blame the majority of the users, or should we accept that times change?
For most Discord users IRC simply does not have the feature set that people need. Basics like simple drag and drop media sharing, threaded conversations, emoji reactions and voice comms, up to more complicated stuff like screen sharing and video calling.
Last I checked, Signal was not fully open source, which is iffy; I believe their encryption protocol is still closed. That said, it's the best of a bad bunch for E2EE messaging. If you're on Android I'd recommend doing what I do: install from the APK on their site, manually verify the signature locally (you can use Termux for this), and then lag ever so slightly behind on updates to avoid potential supply-chain or hostile-takeover attacks. This is probably overcautious for most threat profiles, but better safe than sorry imo. Their server-side code is also closed source, though technically that isn't an issue as long as the E2EE holds up to scrutiny.
Edit: My information may be out of date; I can't find any sources saying any part of the app is closed source these days. Do your own research, of course, but I'm comfortable saying it's the most accessible secure platform.
I set up a forum when I started my site for Linux content creation. Discord had become a black hole for technical know-how on a scale IRC could never dream of, and finding answers to common questions was nigh impossible since the technology has changed and the modern way to solve problem X was never asked in a forum and never indexed by a search engine. Granted, Reddit provided a bit of a stopgap over the last decade, but the solutions in the comments these days are more often than not a confidently incorrect copy-pasta from GPT.
I use Discord for chat and voice calls since that is what I expect from a chat app, but the amount of companies that have built their community / knowledge base / support system around Discord is worrying. You know they can just delete that, right?
I'll continue to use Discord for chat until prompted to put my face in the hole :)
The sad thing is that I think many people will en masse pony up their ID or snapshot without a second thought. I'm not sure if enough people will refuse to actually force Discord to back off this decision (unless their idea is to grab as much data as possible at once with the understanding that they are going to back off either way).
Sounds like when Netflix reneged on family accounts.
I cancelled my account in protest, but their financials say they made money on the change (and thus all the execs are happy with it).
Especially if it's presented as a pop-up upon launching the app that suggests the user won't be able to talk to their friends/servers without showing ID. Carefully worded language could spur some percentage of users to panic at losing years of history and immediately show ID. Folks with less privacy discernment hear "jump" and reply "how high".
> panic at losing years of history
I used to be like that. It was unsustainable and ultimately mentally unhealthy.
I don't imagine this was 100% their decision; it's more a response to the epidemic of the world's governments suddenly coming up with adult verification schemes. Discord has already required it in some countries, and it's definitely easier to get everybody to verify themselves than to require it on a per-jurisdiction basis. The personal data they get is a cherry on top.
Also, this is just the beginning, more social networks will require the same soon.
They don't have to comply in advance.
I have done that for stripchat which was also requiring it. Not happy with it but I'd rather use a selfie than a whole ID document which includes an image anyway.
The thing is, what other option do I have?
I'll continue using Discord in teen mode, I guess. I'd rather not lose the current connections & servers I have on there, and I'm not optimistic about people migrating away, especially non-tech people.
I get the draconian side of things, but I am also tired of thousands of russian, indian, domestic-funded etc. bots flooding the zone with divisive propaganda.
In theory, this seems like it would at least be a step in the direction of combating disinformation.
I'm curious if there are any better ways to suppress these propaganda machines?
How do I know that this message isn't divisive propaganda posted by a bot?
Because it's not posted by a Russian/Indian account, duh!
I don't see how disallowing viewing "age-restricted" content through Discord without giving them your ID would have any impact on the spread of disinformation, outside of like, disinfo in the form or pornographic or gory images.
To add context to the discussion, it is important to recall that Discord was reported to have recently filed paperwork with the SEC for an IPO [1]. Thus it seems likely that the real reason for the age verification (i.e., user identification) policy is to boost its perceived earnings potential among Wall Street investors. According to this theory, Discord is the new Facebook.
[1] https://techcrunch.com/2026/01/07/discords-ipo-could-happen-...
It's clear "age verification" is not something we'll get rid of, so I think instead we should push for a publicly verifiable double-blind (zero-knowledge proof) solution that can ensure it only gives the websites a boolean and doesn't allow correlation from either side.
The alternative is having to give your ID to Facebook, Google, Microsoft, and all the other bad actors...
It's kind of surprising that no-one has really come out with a proper privacy-preserving approach to this yet. It is clearly _possible_; there are reasonable-looking designs for this. But no-one's doing it; they're just collecting photos and IDs, and then leaking them all over the place.
It is only a matter of time before ID verification means the camera is always on watching the face of the person looking at the screen.
https://www.apple.com/newsroom/2025/06/apple-expands-tools-t...
What are your thoughts on Apple's approach? You still have to provide your birthdate to Apple. But after that, it only ever shares your age range with other companies that request it, not your birthdate.
As others have said, it’s obvious that no real attempts have been made by anyone to create a privacy-focused solution because the end goal is to collect photo IDs.
Occasionally in my free time I have been tinkering with a certificate-based solution that could fulfill this sort of need for age verification. It’s not the most robust idea but it’s simple enough using most of what we already have. Creating a minimal protocol which doesn’t share actual identifying information nor metadata of the site you’re accessing is trivial. If I can make an 80% solution in less than 100 hours of my free time then some groups with more money and intelligence could propose a dead-simple and easy-to-adopt solution just as easily.
They do not want to solve the problem, they want to collect our IDs. If they would have wanted to actually solve it they would not have done this on legislations where it is not a requirement.
> It is clearly _possible_
Is it?
I don't think it is.
I truly don't believe that there's any possible way to verify someone's age without collecting ID from them.
It would seem like a naive solution would be some arrangement where Discord asks for a proof-of-age from an official service run by the State (which issues your ID).
Well you could have government-run cryptographically signed tokens. They're already in the business of holding ID data (i.e. they don't need to collect it and this wouldn't increase the attack surface).
But assuming it has to be a private solution, you could do the same thing but make it a non-profit. Then at least _new_ services you wish to use don't need to collect your ID.
It's possible to (cryptographically verifiably) split up the age verification and the knowledge of what the verification is for.
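For what it's worth, the signed-token half of this is easy to sketch. Here's a toy illustration in Python (the field names, the 64-byte signature split, and the ten-minute expiry are my own choices, not any real scheme); it only shows that a site can check an "over 18" claim from a signature without ever seeing a name or birthdate:

    # Toy sketch: a state-run issuer signs an "over 18" claim; a relying site
    # checks the signature without learning who the holder is.
    import json, os, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # held by the issuer (government)
    issuer_pub = issuer_key.public_key()        # published for relying sites

    def issue_age_token() -> bytes:
        # The token carries only a boolean, a random nonce and an expiry -- no identity.
        claim = {"over_18": True, "nonce": os.urandom(16).hex(),
                 "exp": int(time.time()) + 600}
        payload = json.dumps(claim, sort_keys=True).encode()
        return payload + issuer_key.sign(payload)   # Ed25519 signatures are 64 bytes

    def site_accepts(token: bytes) -> bool:
        payload, sig = token[:-64], token[-64:]
        try:
            issuer_pub.verify(sig, payload)
        except InvalidSignature:
            return False
        claim = json.loads(payload)
        return claim.get("over_18") is True and claim.get("exp", 0) > time.time()

    print(site_accepts(issue_age_token()))   # True

The hard part is exactly the split described above: making the token unlinkable, so the issuer can't see which site it was used on and the site can't tie it back to a person. That's where blind signatures or zero-knowledge proofs come in, and this sketch does none of that.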
Not preserving privacy is simpler, and the simpler solution is cheaper. If there's no real incentive to go with another option, companies will go with the cheaper one.
> and will see content filters for any content Discord detects as graphic or sensitive.
I didn't even realise Discord scans all the images that I send and receive.
Really I've come to the conclusion that anything I send out of my LAN is probably kept on a server forever and ingested by LLMs, and indexed to be used against me in perpetuity at this point, regardless of what any terms or conditions of the site I'm using actually says.
Speaking of hosting, Discord used to be one of the biggest (inadvertent) image hosts, so they might have set up the system more to reduce legal exposure than to monitor conversations per se.[1]
A lot of the internet broke the day they flipped that switch off.
Weren't external Tumblr hotlinks also a thing back in the day?
[1]: https://www.reddit.com/r/discordapp/comments/16uy0an/not_sur...
To be fair, the terms and conditions probably say that they can do whatever they want with that data :-).
Don’t forget all the government creeps snooping on the wires.
Until the current administration, I was much more bothered by private misuse/abuse of data than by the government. Now I worry about both.
That was always the wrong threat-model hierarchy. I have always been more concerned about what the federal, my state, and my local governments can do when given more power/information than I am about private companies.
Good. Being OK with authoritarianism because they are on your side is never good.
Why? People who volunteer to work for these government drag nets must be total psychos.
Volunteer? I mean they do get paid.
The thing is it's a mix of both.
You have the fervent few who love recording everything "for the good of the people". But then you'll just have piles of people, siloed by separation of duties, who do things with very little understanding of where they fit in the process and very little care to find out.
We gave those brogrammers the keys to the machine when we made programming more accessible.
Pretty much every non-E2EE platform is scanning every uploaded image for CSAM at least, that's a baseline ass-covering measure.
And E2EE platforms like Mega are now being censored on some platforms specifically because they're E2EE, and so the name itself must be treated as CSAM.
As people who want to talk about words like "megabytes" or "megapixels" or "megaphones" or "Megaman" or "Megan" on Facebook are finding out.
They have to at least for CSAM.
Everything that is not end-to-end encrypted understandably has to do it.
Key changes are
- ID verification to see porn on Discord.
- Also, some warnings not to befriend strangers.
Not very heavy-handed; you can google porn anytime. I am not sure who this serves.
It serves UK, EU, and various US States' regulations to "protect the kids".
Discord is only the next biggest canary in the coal mine. These regulations are going to force a lot more websites and apps to do this, too.
I wish these sorts of regulations had been written hand-in-hand with a more directly technically-minded approach. The world needs a better technical way to verify a person's estimated age cohort without a full ID check and/or AI-analyzed video face scan, before regulation forces "every" website that may host "adult content" (however you choose to define that) to require such checks.
Curious how this will affect midjourney's earnings
F** that, guess I'm leaving that platform too now...
Discord has always been IRC with extra censorship and spying. Nothing really new, here. Just use IRC.
it’s not that simple. many (if not most) people would rather be where everyone already is, even if there’s less privacy
If you can't think of good reasons for why someone might use discord over IRC, you probably haven't thought about this enough.
In case anyone else can’t read it: https://archive.is/PvpAx
can't wait to beat it with a face-swap or some random driving license found on the internet
Are they going to leak IDs of minors again like they did last time? Who does this protect exactly?
It protects the investors so they can IPO
Lots of people complaining, but it seems like they already rolled this out in the UK and Australia with no real complaints that I know of, and I'm in NZ and on NZ/Aussie discords. Also, teen mode doesn't actually seem that restrictive. Seems an OK move to me. But for whatever reason people seem to froth at the mouth when it comes to Discord on here.
I have a discord account that I use very rarely, and just tried it (from the UK) and it didn't ask me for any ID or face scan. If they do start doing that, I'll simply stop using the service.
When the openclaw/moltbook fad dies, those Mac minis could be repurposed for a p2p forum network.
Great news, there’s finally going to be sufficient motivation for people to both build out and use open source alternatives.
I predict out-of-the-box deepfake live-camera software will get a bump in popularity; there are already plenty of solutions available that need minimal tinkering. It should be trivial to set up for the purpose of verification, and I don't see those identity verification providers being able to do anything about it. Of course, that'll only mean stricter verification through ID later on, much to the present-and-future surveillance state's benefit.
https://github.com/hacksider/Deep-Live-Cam
To be honest it kinda sounds like a benefit for my use-case. I don’t engage with adult content on there and use it for one server with friends.
And this will reduce spam from random accounts. We'll see if it remains usable without uploading my ID.
> Users who aren’t verified as adults will not be able to access age-restricted servers and channels
I genuinely wonder which proportion of the users want access to age-restricted servers and channels...
Feels like it should be just fine not to verify the age.
Here's how Discord works. A third or so of its features, such as forum channels (EDIT: I think this specific example was wrong; stage and announcement channels, but not forum channels) or role self-assignment, are locked behind Community Mode. After enabling Community Mode, server owners are NOT ALLOWED to turn off content filtering anymore, meaning that by default, content in every channel may be filtered out by systems you cannot configure.
The only way for the server owner to circumvent the filter is to mark a channel as "NSFW", which doesn't necessarily mean the channel actually contains any NSFW content.
This change will not actually require ID for content confirmed to be NSFW. It will require ID for each and every "NSFW mode" (unfiltered) channel. The end result is that you have three choices:
- Ditch Discord features implemented in recent years (or at least this is currently possible) - this prevents a server from being listed as public;
- Require ID checks from all your users (per channel);
- Have everything scanned from all your users (per channel).
Are you saying that you can "mark" the channel as "NSFW", and Discord will stop scanning your content, possibly allowing you to share very illegal content through their servers?
Sounds weird to me. Pretty sure that they legally have to make sure that they don't host illegal content. Or does "NSFW" enable some kind of end-to-end encryption?
That has always been the case, yes, though I'm not sure what you mean by "illegal" content. There is only a small overlap between NSFW and illegal content, and the NSFW filter has never been concerned with, uh, violating photograph copyright or something.
You don't have to take my word for it, just check it yourself, although it seems that this week, they renamed the NSFW setting to "Age-Restricted Channel" (in preparation for this change, no doubt). The verification-related portion of the behavior I described was implemented for the UK months ago.
The description still contains: "Age-restricted channels are exempt from the explicit content filter."
EDIT: IANAL (or american) but if Discord was policing content for legality rather than age-appropriateness, wouldn't they lose DMCA Safe Harbor protections?
> The description still contains: "Age-restricted channels are exempt from the explicit content filter."
Wait! This does not mean they do not scan it. What I understand from that statement is that they filter explicit content, as in they prevent it from appearing on the user's screen.
When you enable the "NSFW" mode, you tell Discord "it's okay, don't filter out anything". But Discord probably still scans everything.
So that makes sense to me: if you don't validate your age, then Discord will not allow you to join channels that disable the "adult" filtering. I can personally live without adult content on Discord...
OK, but you're not the one making that decision and you don't know/can't control how that decision is being made.
> I genuinely wonder which proportion of the users want access to age-restricted servers and channels...
Way more than you think. There are tons of Discord servers that only exist to share pornography.
Credit card verification is not an option.
It's a facial video age estimate or submitting an ID card.
Option 3: if we analyze all the data we have on you and see you're not going to bed at 8pm like a middle schooler, you get adult status.
I wonder if Discord is legally forced to do that, or if they would rather do it themselves (and collect the data $$$) than wait to have a solution they don't own imposed on them.
I feel like age verification will come, there is no way around it (unlike ChatControl and the likes, age verification seems reasonably feasible and has a lot of political traction right now).
But I would rather have a privacy-preserving solution for that, e.g. from the government (which already knows my age).
There are probably enough regions where it is required or will be required soon, that it makes sense to just get it over with.
The Internet is more or less becoming a locked down, controlled and fully observed thing for end users and citizens, so adapting to that world sooner and working within it is just sensible future-proofing.
This also lets them more safely target older users with ads, purchase requests, etc. and new integrations for gambling and other high ROI systems.
GeoIP this nonsense. Legal liability is solved as a "good-faith effort" and those living in jurisdictions where this doesn't apply (or use a VPN) don't need to be stripped of privacy.
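To make the "good-faith effort" idea concrete, here's a minimal sketch of per-jurisdiction gating (the country list and names are invented; this says nothing about how Discord actually routes the check):

    # Hypothetical gate: only prompt for age verification where a law requires it,
    # based on a GeoIP lookup of the connecting address.
    REQUIRES_AGE_CHECK = {"GB", "AU"}   # illustrative list only

    def needs_age_check(geoip_country_code: str) -> bool:
        return geoip_country_code.upper() in REQUIRES_AGE_CHECK

    print(needs_age_check("gb"))   # True
    print(needs_age_check("NZ"))   # False

Whether that survives a VPN is another matter, but as noted above, regulators generally only ask for a good-faith effort.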
Privacy preserving between you and the third party, but the implication is that the government now sees what you are using.
“We will find ways to bring people back” yeah because that usually works. I imagine this gets rolled back or siloed to only adult specific channels.
Good riddance Discord. Any alternative for the masses?
They’re not gonna use Slack or phpBB.
Why would Slack not be affected by the same stupid laws?
If you're a Slack user, I don't think they need your ID to tell that you're an adult
More seriously, it will become a problem once there is a significant user migration there and a repeat of the mass hysteria. Due to being more niche, these smaller platforms are probably not in danger right now.
Genuine question, what is stopping users from using AI to generate a fake face or ID to bypass this restriction?
There is a bit of an arms race between ID verification systems and users bypassing them with AI generation, which is really just AI-generated images vs. AI-generated image detection.
In practice, nothing will stop it, the tooling will gradually get better at detecting prior fakes and banning those users while the newer fakes will go undetected for longer.
Putting up the requirement satisfies their CYA requirements here. The race between AI fraud vs. detection is something they can just ignore and let happen on its own.
> prior fakes
But they assured me my biometrics are deleted after uploading!
Finally I feel validated complaining for the last decade about the move away from IRC/teamspeak to centralized services. I've been called all kinds of names.
Now those same people are complaining they're gonna have to submit their faces to discord. Which will eventually be used to prosecute or commit fraud. I'm left wondering if "tech enthusiasts" are ever actually correct.
Great, yet another reason not to use it.
I foresee Discord receiving a lot of identification documents from the likes of Ben Dover
Another company jumping on the bandwagon to data-farm under the pretext of safeguarding children. I really wonder if there's a method to actually safeguard children while also not holding on to data. Because, genuinely, you can't question this: companies just say "we are trying to protect kids" and that's the end of the argument.
I really wonder whether, once this is fully implemented, they will have any safeguards against selling "adult verified" accounts. With AI being a possible workaround for those who don't want to share an ID, selling accounts would be another big issue unless they check IP addresses and block based on locations and logins. EDIT: I see in another comment that it's against the TOS to sell accounts; I doubt that has stopped anyone before, though.
So my friend group has been looking for alternatives for a while now that feel like discord, works on mobile and desktop, and has voice chat.
I use Signal but the UI is very different from Discord.
I've had very mixed experiences with Element + Matrix, Element keeps crashing on mobile, and while voice chat kinda exists in Element it's not been great imho.
I looked into hosting Rocket.Chat, Zulip, and Mattermost, but from what I recall voice + mobile were either missing or paywalled at a per-user price.
Any recommendations?
I seem to recall Jitsi working pretty well.
Jitsi is great but the element integration felt clunky. Maybe I'll have to revisit it.
> Content Filters: Discord users will need to be age-assured as adults in order to unblur sensitive content or turn off the setting. [1]
That presumably includes selfies?
That means that to exchange racy photos on Discord, each person must first record a facial age estimation video or upload identification documents.
That seems dystopian.
1: https://discord.com/press-releases/discord-launches-teen-by-...
How do you know one party isn’t 15 when the other is 25?
You’re never going to convince a parent or a lawmaker or even me that this is dystopian. Seems like a perfectly reasonable safeguard.
> How do you know one party isn’t 15 when the other is 25?
You don't. That's why parents need to be involved in their children's lives.
CSAM is the easy excuse, anyway. That's the one lawmakers use, and most people are against CSAM, myself included, so the excuse goes down easy. But the impetus they don't talk about is monitoring and control.
The answer isn't to destroy privacy for everyone. The government and these corporations don't need to know what you're doing every second of the day.
> That's why parents need to be involved in their children's lives.
Can't, aren't, look at iPad kids, won't. This is about as logical as saying people should just drive safely, so we don't need guardrails and seat belts. Or saying parents should always watch their children, so we don't need age verification at the liquor store. Besides, it's not like the school library or friends of friends don't have devices of their own that you as a parent can't see.
Parents should not need to be tech experts or helicopters to feel their kids are safe online. That's fundamentally unreasonable. In which case, privacy and child safety need to come to an unhappy compromise, just like any other conflicting interest.
For that matter, I'm surprised that HN automatically always accepts the "slippery slope" fallacy while lambasting it everywhere else.
> This is about as logical as saying people should just drive safely, so we don't need guardrails and seat belts.
This is a terrible analogy. Regulations related to driving only apply to drivers; if you're a pedestrian, you're not subject to basically any regulations that licensed drivers have to abide by. On the other hand, internet regulation like this punishes absolutely everyone to safeguard a small group, that being parents. It's like legally forcing pedestrians to wrap themselves in bubble wrap while outside, so that the careless drivers who couldn't behave don't dent their cars or get hurt when a pedestrian flies into their windshield, when they inevitably collide with one. Why is any of this their responsibility?
The fact that there is absolutely zero effort in pursuing any non-punitive options (like forcing ISPs to put networks of clients with kids in child-friendly mode, where the adult has to enter a password to temporarily view the unrestricted internet on their network, which should cover 90%+ of cases; or doing any of the proposed non-identifying proofs of age, like a generic "I'm an adult" card you can buy at the convenience store) should tell you that this has very little to do with actual concern for children. They went out of their way to enact the least private, most invasive, most disruptive option, which won't even fix anything unless you expect literally every website on the internet to be compliant.
> For that matter, I'm surprised that HN automatically always accepts the "slippery slope" fallacy while lambasting it everywhere else.
Slippery slope arguments are not automatically a fallacy. They can be if the causative relationship is weak or if the slope is massively exaggerated. But if neither of these things are true, "slippery slopes" is just looking at the trends and expecting them to continue. You can't look at a linear graph and say "well, I think there's no most likely option from now on, it could go any way really" without an argument for why the trend would suddenly deviate. The internet had been tightening up and the walls have been closing in for a long time, why would that change?
They'll now have kompromat associated with a name, address, and id number (be it social security, BSN, or whatever your country calls it)
You have got to be kidding me. What is it with these lawmakers and websites demanding people do all of this stuff using services that nobody has ever heard of? I myself (as someone who is blind) have never been able to do the face scanning thing because the information they provide (for, you know, getting my face focused) is just massively insufficient. And a lot of the ones I've seen also require me to (as an alternative) do some weird ID scanning with my camera instead of, you know, just allowing me to upload my ID or something? (Then again, I really wouldn't want to give my ID to some service nobody has ever heard of either, so there.) I also am concerned when tfa says "a photo of an identity document" what does this mean? If I have to scan my ID with my camera, that's not exactly going to be simple for me to pull off. I get that we need to protect kids, but this is not the way. Not when it is discrimination by another name for individuals with disabilities (as just one example).
The CEO of Discord is Humam Sakhnini. He's from McKinsey. So that tracks.
You can, of course, not do this (you meaning the company, Discord)
You can choose to be respectful of people who have valid reasons for not providing ID
But you want that sweet IPO money (as stated elsewhere in this thread). You don't actually care about the internet and how anonymity is a cool thing for certain vulnerable groups
All these tech CEOs should face prison time and I'm not joking. They've displayed a complete laissez faire attitude to all of these concerns
This is just the latest in a long trend of increasing spying on users. Why bother having to guess who your user is, or fingerprint a browser if you can just force them to show you their national ID?
This is transparently about spying on people, not "protecting children". The real world doesn't require you to show your ID to every business you frequent, or every advertiser you walk by. Someone can yell a swear word on the sidewalk, and not everyone within ear shot has to show ID.
Source: https://discord.com/press-releases/discord-launches-teen-by-...
Any age verification process that does not consider the age of the account as a verification option is a data trap, plain and simple.
They are planning on doing something similar:
Discord is also rolling out an age inference model that analyzes metadata like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.
“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,”
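Purely as an illustration of what inference from that kind of metadata tends to look like, here's a hypothetical scoring heuristic (the signals, weights, and output are all invented; Discord hasn't published how its model works):

    # Hypothetical age-inference heuristic -- NOT Discord's actual model.
    from dataclasses import dataclass

    @dataclass
    class UserSignals:
        plays_mature_rated_games: bool   # e.g. 18+ titles seen in game activity
        active_during_work_hours: bool   # weekday nine-to-five activity pattern
        account_age_years: float
        avg_daily_hours: float

    def adult_confidence(s: UserSignals) -> float:
        """Crude 0..1 score; only high-confidence users would skip verification."""
        score = 0.35 if s.plays_mature_rated_games else 0.0
        score += 0.25 if s.active_during_work_hours else 0.0
        score += min(s.account_age_years, 10) / 10 * 0.25
        score += 0.15 if s.avg_daily_hours < 4 else 0.0   # very heavy use skews younger
        return score

    print(adult_confidence(UserSignals(True, True, 8.0, 1.5)))   # roughly 0.95

Anything like this is trivially gameable, which is presumably why it only short-circuits the check for high-confidence cases rather than replacing it.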
How does anyone know whether a family is engaging in that time-honored tradition of passing down accounts from grandfather, to father, to son, to child, and their posterity, in perpetuity?
Seriously though, unless you have positively identified the person who created the account in the first place, you have 0% chance of knowing whether it is the same person using it today.
Gamers sell their high-level accounts all the time. It would be a simple matter of economics that the Discord users with the oldest accounts sell them to 12-year-olds. Likewise, accounts are shared willy-nilly, whether or not that violates the rules. And accounts can be stolen or compromised, if you're really hard up.
How often do you suppose they will be re-checking your ID? Once every... never?
They need to have an always-on camera looking at the person using the device. No camera, no discord.
But under that argument, you would have to prove your age on a regular basis, the plan right now appears to be that each account would only need to do so once.
Just remember that the Terms of Service you agreed to are about as firm as explosive diarrhea.
You agree not to license, sell, lend, or transfer your account, Discord username, vanity URL, or other unique identifier without our prior written approval. We also reserve the right to delete, change, or reclaim your username, URL, or other identifier.
If transfer of accounts is a policy violation, then Discord has legal cover to confidently assert that, once ID is verified, the ID'd person is the owner and controller of the account thereafter.
Account selling, stealing, and sharing will certainly still happen, but that's grounds for banning, and not Discord's legal liability anymore.
Then why could they not also legally get away with using account age as a proxy?
Just ban that in the TOS. As we know, the TOS is inviolable; as such, it is not possible to sell, gift, or otherwise transfer an account. At least, that should be considered how it works for age verification. If an account transfer is found out, the account can be terminated, thus closing the loophole.
No law or regulation is ever 100% effective in real life. Income tax is not collected 100% effectively. Should we not do it? Criminals are not caught 100% of the time, should we not do it?
Of course this won't be 100% effective, maybe 80-90% effective. That's all they need and expect from this system.
Exactly.
HN is constantly obsessed with whether something is perfectly effective.
No law, none, is perfectly effective. Speed limits certainly aren't self-enforcing, but go remove your neighborhood's speed limits first if you truly believe laws must be demonstrably perfect.
Has discord even been around for 18 years?
Yeah, my YouTube/Google account is almost as old as YouTube itself, but it constantly asks me to verify my age when I click on something marked 'not for kids'. Can we just get the leisure-suit-larry age-verification system ;)
Apple deleted many legacy mac.com accounts without qualms not long ago. It was the phone accounts, in so many ways, driving it, IMHO.
Looks like it might be opt-in by server.
Alternative: run your own self-hosted messaging server for you, your family and friends. No company should ever get such sensitive data as private conversations.
Use Discord with a throw-away account. Create a character in GTA 5 on your laptop and show its face (in "selfie" mode) to the web-camera on another computer with Discord open. All face scan checks so far gladly accept it. Instagram has been requiring occasional face checks for ages already.
No thanks. Discord, it has been fun, but I decline.
Honestly they're probably big enough to get away with it.
If it was only friend groups, it would kill them for sure; we've seen that many times. But given the absurd number of large online communities on Discord, I'd wager they can force it through and remain relatively unscathed.
They played the long game - they provided a good service for 10 years, and got REALLY big before they started the enshittification process.
Haven't cared about Discord in a long time. In fact I'm glad they're continuing to shoot themselves in the foot.
During the pandemic, I was on a Discord server for folks to socialize and blow off steam about the whole situation. Yes, there were some anti-vaxx wackos, but overall the place was civil and balanced, and I met some interesting people through it. We cracked jokes and it was a little bit of fun in a tough time.
One day I came to discover that Discord had banned the server for allegedly violating... something. I wish I had written down everyone's emails because I permanently lost contact with a bunch of friends in an instant.
I never signed in to Discord again, in spite of times where some other social group wanted to use it. I vowed never to use Discord again. Fuck those guys and the Teslas they rode in on. I hope this ID verification thing is another big step towards their irrelevancy.
You should be more tolerant of the "anti-vaxx wackos". The covid 'vaccine' has a very large number of negative externalities, confirmed by scores of credentialed doctors and researchers
Discord has 150 million monthly active users.
They’ll be fine. To them, this is just another internet boycott, with all that entails. Reddit survived a worse one and grew afterward.
The difference with Reddit is it has way more persistent value. Everything on Discord is throwaway, but valuable posts on Reddit from years past are easily retrievable. The two aren't so comparable.
One of the unspoken reasons many people have for using Discord is they don't want what they say to easily be associated with them in perpetuity. Requiring ID really chips away at that, in spite of what Discord has to say about privacy around ID.
By no means am I saying that Discord will go extinct. I just haven't observed anything about it that's irreplaceable. Reddit, on the other hand, has a wealth of discussion dating back to the mid-to-late 00's.
>valuable posts on Reddit
[removed]
[removed]
[removed]
[removed]
[removed]
There's this thing called the Wayback Machine, but I lol'd at your response. It's not untrue. xD
How many people are doing age-restricted stuff on Discord (besides the crowd that's specifically there for adult content and gooning)?
All of my use is primarily professional and gaming and has no age concerns
Does it matter? The problem is that everyone uses discord for everything. It's not an isolated platform, it's THE platform if you want to have friends.
Gaming certainly has age-concerns, many games are rated 13/15/16+ or 18+
But yeah, leaving discord... they are not getting my ID/Photo
Ratings aren't legally binding though, are they? I bought games rated older than I was, and it's totally up to people's parents what they're allowed to play. Are you suggesting a 15-year-old should be allowed to play the 16-rated game but not discuss it?
Can their parents also approve their discord usage?
Are you saying they need parents to buy the game, but shouldn't to join chats about the same game?
At least Google is pushing on zero-knowledge solutions
Maybe they can force everyone's hand like they did for https
https://blog.google/innovation-and-ai/technology/safety-secu...
Hard no. Reality is that this push is everywhere. Authoritarian governments are cracking down hard on dissent, they're not going to leave huge platforms for communication untouched. We'll need open source decentralized alternatives.
HN: Social media is terrible and ruining kids' mental health.
Also HN: Any attempt to limit access to verified adults is an "authoritarian crackdown" and totally unacceptable.
Children generally have these things called "parents" who are supposedly responsible for their well being. Oh hey, suddenly there isn't a contradiction.
Right, helicopter parenting. Gets a lot of praise here, I forgot.
Indeed, the article basically says as much in more pacifying terms:
> driven by an international legal push for age checks and stronger child safety measures
another one bites the dust.
No thank you, get fucked
This is not OK.
A lot of whining here about how this is an imperfect response to the issue of children being exploited on Discord / using the platform to engage with inappropriate content.
Until someone offers up something better, I take these types of initiatives from social media platforms as huge wins. Ignoring the problem will not make it better. We've been ignoring it for about 20 years now, and it's only gotten worse.
Be responsible for your spawn and don't be a weenie about asserting boundaries for them.
The solution is parents! Stop making your bad parenting my problem!