I've long held a belief that if countries want to mandate compliance, they should be required to provide the mechanism for compliance.
Want to decide whether content is allowed or not? Fine, provide an API that allows content to be scanned and returns a bool.
Want to age-gate content? Fine, provide an identity service.
While both of these will reduce privacy, they'll achieve one of two objectives: either those making these policies will realize the law they wrote is impossible to achieve, or, if they succeed, it will at least provide a level playing field for startups vs incumbents.
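To make that concrete, such an API could be as small as the sketch below. Everything here is hypothetical: no such government endpoint exists, and the URL, fields, and token are invented for illustration.

    import json
    import urllib.request

    # Hypothetical endpoint; no such government service exists today.
    COMPLIANCE_API = "https://compliance.example.gov/v1/scan"

    def content_allowed(text: str, api_token: str) -> bool:
        """Ask the (hypothetical) state-provided scanner for its verdict."""
        req = urllib.request.Request(
            COMPLIANCE_API,
            data=json.dumps({"content": text}).encode("utf-8"),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {api_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["allowed"]  # the bool the comment asks for

The point of the demand is that the regulator, not each platform, would own the classifier behind that bool.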
Many countries fail to provide their citizens with digital identification while enforcing age limits and such, but at least most of Europe will have solutions for both of these soon. Plus, "complying with the law threatens my bottom line" has never been a reason to ignore the law anyway. If restaurants were run like internet companies, people would die of food poisoning every day because "it'd be too much of a burden to check if _every_ ingredient is in date before cooking".
This proves that the laws they write are actually quite possible to achieve. It'll happen at the cost of some freedom and many of the nice things about the internet in general, but if you're a businessman looking at business potential and don't care too much about artistic creativity, the impact is actually quite minimal. I'm glad UK and US companies are hiring the most incompetent and useless "verification" companies out there to punish the government through a weird form of malicious compliance, but that won't last.
> Want to decide whether content is allowed or not? Fine, provide an API that allows content to be scanned and returns a bool.
https://www.iwf.org.uk/our-technology/image-intercept/ exists for the UK, https://www.web-iq.com/solutions/atlas exists within the EU, https://projectarachnid.ca/en/#shield exists in Canada, https://www.microsoft.com/en-us/photodna exists worldwide, as well as https://developers.cloudflare.com/cache/reference/csam-scann... and https://get.safer.io/csam-detection-tool-for-child-safety. Someone even made an open source, CLIP-based tool that doesn't require hashes and can be tweaked in all kinds of ways: https://github.com/Haidra-Org/horde-safety/blob/main/horde_s...
Not only do these mechanisms exist; governments try their best to convince people to use them. I don't think you'll get far in court arguing that there is no accessible service for you to use. What happens when you fail despite applying one of these services is up for debate, but so far, the laws just want you to try your best to prevent abuse, and existing services are enough for that.
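The integration pattern for the hash-based services above is roughly "hash the upload, check it against the service's list, gate publication on the answer". A simplified sketch, with the list format assumed; real services like PhotoDNA match perceptual hashes through their own SDKs rather than exact digests:

    import hashlib

    def load_blocklist(path: str) -> set[str]:
        # Hash list as distributed by a scanning service (format assumed).
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def handle_upload(data: bytes, blocklist: set[str]) -> str:
        # Exact SHA-256 matching is the simplest stand-in; perceptual hashes
        # additionally survive resizing and re-encoding.
        digest = hashlib.sha256(data).hexdigest()
        if digest in blocklist:
            return "rejected"  # plus whatever reporting the law requires
        return "published"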
> Want to age-gate content? Fine, provide an identity service.
This is a problem now, but privacy-safe age verification is rolling out across the EU in the coming years. This is actually a super easy problem to solve when the government puts in the bare minimum amount of work. A decent argument against the UK/US/etc., but not so much for an EU member state like the one in this court case.
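The details of the EU scheme aren't final, but the core idea is that the site only ever sees a signed one-bit claim, never the identity behind it. A toy sketch of that idea (the message format is invented, and real deployments add unlinkability and anti-replay protections):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Demo stand-in for the issuing authority; a real site would hold only
    # the authority's published public key, never the private half.
    issuer = Ed25519PrivateKey.generate()
    issuer_pub = issuer.public_key()

    # The user's wallet presents a minimal claim: no name, no birthdate.
    claim = b"over18=true"
    signature = issuer.sign(claim)

    def site_accepts(claim: bytes, signature: bytes) -> bool:
        """The site learns one bit about the visitor, nothing else."""
        try:
            issuer_pub.verify(signature, claim)
        except InvalidSignature:
            return False
        return claim == b"over18=true"

    print(site_accepts(claim, signature))  # True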
> If restaurants were run like internet companies, people would die of food poisoning every day because "it'd be too much of a burden to check if _every_ ingredient is in date before cooking".
The issue with this analogy is that internet companies are subject to the laws of every country they serve, whereas restaurants are subject only to the laws of the place they are located in. While someone at the scale of McDonald's may be able to handle laws at a worldwide scale, a small food truck cannot.
> privacy-safe age verification is rolling out across the EU in the coming years
"Privacy-safe", yet requiring a Google account and a phone logged into it...
Why should online businesses have fewer obligations than offline businesses?
Do offline businesses have an API?
There are laws and they have to follow them and can’t just wait until someone complains.
Tell me of one case in which a place like a restaurant or a movie theater was found liable because a customer suddenly decided to engage in harmful speech or violate someone else's privacy...
Not the same.
Show me one place like a restaurant or a movie theater where I can put a poster in their shop window without them asking questions about the content of the poster.
[flagged]
Please don't comment like this on HN. The guidelines make it clear we're aiming for something better here.
https://news.ycombinator.com/newsguidelines.html
[flagged]
Please don't fulminate or post shallow dismissals on HN.
It's valid to argue that laws should be legible and reasonably straightforward to comply with, and to point out that society as a whole doesn't benefit if only already-dominant global megacorps can afford to comply with laws (or bear the cost of non-compliance).
Let's make the effort to discuss the topic in a curious, conversational way, rather than with this belligerent tone. The guidelines make it clear we're looking for a higher standard of discourse than this. https://news.ycombinator.com/newsguidelines.html
[flagged]
Please don't comment like this on HN. It's fair enough to express disagreement with the parent comment, and I've replied to them asking them to avoid fulminating and making shallow dismissals.
But this kind of reply is also in breach of the guidelines. Replying to a bad comment with an even worse one is how death-spirals happen, and that's exactly what we're trying to avoid on HN. We've had to ask you several times before to follow the guidelines. We have to ban accounts that keep ignoring our requests. https://news.ycombinator.com/newsguidelines.html
The party which decides to show the advertisement in exchange for payment should be more responsible for what they are showing than a free user posting content.
Now things become interesting when a user pays for ranking or 'verification' checkmarks. What makes that content different from a paid advertisement?
Speaking of responsibility: I want ad networks (a scourge on modern society) to be very liable when their systems are spreading malware and scams.
IIRC the last time my mom came to me with a fake "You Must Upgrade Now" ad on her phone (from an otherwise-legit puzzle game), the ad-network did have a feedback function... but strangely there wasn't any category for fraud.
It might be less of an issue if there were clearer ad-vs-not boundaries, but that starts getting into issues like the browser security line of death [0].
[0] https://textslashplain.com/2017/01/14/the-line-of-death/
I came to the comments to express the same sentiment, expecting to be an unpopular opinion. Pleasantly surprised to find your comment at the top.
Hosts should make sure they know who is posting content on their platforms, so that in the event they are sued, they can countersue the creator of the content.
So anonymity should be impossible?
Websites have to be held responsible for ads they serve. Otherwise they tend to make unfounded excuses we couldn't care less about. Like scam ads on YouTube.
But user generated content? LOL, no.
What a relief this does not apply to https://uk.LokiList.com
Doesn't the site make users pay to show their ads? I thought that was the deal: you pay the platform to host your advertising, and that was one of the justifications for the CJEU.
Ads on YouTube are user-generated. If I were to upload a picture of you with your phone number as a YouTube ad, offering sexual services, YouTube would likely be accountable in the same way.
Seems like a pretty big overreaction IMO. Advertisements deserve more strict regulation than general user-generated content because they tend to reach far more people. The fact that they aren't so regulated has resulted in something like 10% of all ads shown being outright scams or fraud[0]. And they should never have allowed the ad to air in the first place - it was patently and obviously illegal even without considering the GDPR.
If these companies aren't willing to put basic measures in place to stop even the most obviously illegal ads from airing, I have a lot of trouble having sympathy for them getting their just desserts in court.
[0]: https://www.msn.com/en-us/money/personalfinance/meta-showed-...
> Advertisements deserve more strict regulation than general user-generated content because they tend to reach far more people.
They deserve strict regulation because the carrier is actively choosing who sees them, and because there are explicit fiscal incentives in play. The entire point of Section 230 is that carriers can claim to be just the messenger; the only way to make sense of absolving them of responsibility for the content is to make the argument that their conveyance of the content does not constitute expression.
Once you have auctions for ads, and "algorithmic feeds", that becomes a lot harder to accept.
>The entire point of Section 230 is that carriers can claim to be just the messenger
Incorrect, and it's honestly kinda fascinating how often this meme shows up. What you're describing is "common carrier" status, like an ISP (or FedEx/UPS/the post office) would have. The point of Section 230 was specifically to enable not being "just the messenger"; it was part of the overall Communications Decency Act, intended to aid in stopping bad content. Congress added Section 230 in direct reaction to two court cases (against Prodigy and CompuServe) which made service providers liable for their users' content when, rather than acting as pure common carriers, they tried to moderate it and, obviously and naturally, could not perfectly catch everything. The specific fear was that this left only two options: either ban all user content, which would have brutalized the Internet even back then, or cease all moderation, turning everything into a total cesspit. Liability protection was precisely one of the rare genuine "think of the children!" wins, enabling a third path where everyone could do their best to moderate their platforms without becoming the publisher. Not being a common carrier is the whole point!
> Congress added Section 230 in direct reaction to two court cases (against Prodigy and CompuServe) which made service providers liable for their users' content when, rather than acting as pure common carriers, they tried to moderate it and, obviously and naturally, could not perfectly catch everything.
I know that. I spoke imprecisely; my framing is that this imperfect moderation doesn't take away their immunity — i.e. they are still treated as if they were "just the messenger" (per the previous rules). I didn't use the actual "common carrier" phrasing, for a reason.
It doesn't change the argument. Failing to apply a content policy consistently is not, logically speaking, an act of expression; choosing to show content preferentially is.
... And so is setting a content policy. For example, if a forum explicitly for hateful people set a content policy explicitly banning statements inclusive or supportive of the target group, I don't see why the admin should be held harmless (even if they don't also post). Importantly, though, setting (and attempting to enforce) the policy expresses only the view of the policy itself, not that of any permitted content; in US law it would be hard to imagine a content policy expressing anything illegal.
But my view is that if they act deliberately to show something, based on knowing and evaluating what it is that they're showing, to someone who hasn't requested it (as a recommendation), then they really should be liable. The point of not punishing platforms for failing at moderation is to let them claim plausible ignorance of what they're showing, because they can't observe and evaluate everything.
Except this isn't limited to ads, is it? From the post it sounds like the ruling covers any user content. If someone uploads personal data to GitHub, GitHub is now liable. In fact, why wouldn't author names on open source licenses count as PII?
The judgement is a bit more nuanced than that: https://curia.europa.eu/juris/document/document_print.jsf?mo...
The court uses the phrase “an online marketplace, as controller” in key places. This suggests to me that there can be online marketplaces that are not data controllers.
The court cites several contributing factors for treating the platform as data controller: it reserved additional rights over uploaded content, and it selected the ads to display. GitHub claims only limited rights in uploaded content, and I'm not sure if it has any editorialized (“algorithmic”) feeds where GitHub selects repository content for display. That may make it less likely that they would be considered data controllers. On the other hand, licensing their repository database for LLM training could make them liable if personal data ends up in models. I don't think that's necessarily a bad thing.
GitHub does include a small amount of algorithmic content in its recommendation engines. I have half a dozen projects "Recommended for you" on my GitHub home page.
I doubt that is enough to trigger this ruling, but algorithmic content is absolutely pervasive these days.
The author of the article is claiming it extends beyond ads.
That does not appear to be what the court actually said, however.
And I 100% believe that all advertisements should require review by a documented human before posting, so that someone can be held accountable. In the absence of this it is perfectly acceptable to hold the entire organization liable.
The ruling is about an advertisement, but:
> There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
So site operators probably need to assume it doesn’t just apply to ads if they have legal exposure in the EU.
You could always sue GitHub to find out.
Personally, I'm not buying the slippery slope argument. I could be wrong of course but that's the great thing about opinions: you're allowed to be wrong :)
> why wouldn't author names on open source licenses count as PII?
They are, but you can keep PII if it is relevant to the purpose of your activity. In this case the author needs you to share his PII in order to exercise his moral and copyright rights.
Yeah, it sounds like mirroring a repo to GitHub would violate this, as author names and emails are listed in commit history.
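For what it's worth, that metadata is trivially visible in every clone:

    git log --format='%an <%ae>'    # lists each commit's author name and email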
> I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.
I never really understood how that system is supposed to work.
So on the one hand, Section 230 absolves the hoster of liability and tells an affected party to go after the author directly.
But on the other hand, we all rally for the importance of anonymity on the internet, so it's very likely that there will be no way to find the author.
Isn't this a massive vacuum of responsibility?
Anonymous authors have very little reach without external promotion or a long lasting reputation.
If someone builds up a reputation anonymously then that reputation itself is something that can be destroyed when a platform destroys their account etc.
Your premise runs precisely against the basis of the lawsuit itself:
> (...) an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number.(...) The same advertisement nevertheless remains available on other websites which have reproduced it.
Anonymous author, great reach, enough damage for the victim to take a lawsuit all the way to the CJEU.
> great reach
What exactly provided great reach here? Was it the creator, or something else?
> The same advertisement nevertheless remains available on other websites which have reproduced it.
This presumably involved the actions of third parties, not just the original content creator.
> What exactly provided great reach here? Was it the creator, or something else?
Irrelevant, under the initial understanding of Section 230 that this thread was advocating a return to. Only the author is responsible for the content in that framework, not the platforms distributing and/or promoting it.
Under Section 230, platforms only maintain protection as a neutral party; anything they promote puts them at risk.
“letters to the editor” are a very old form of user generated content, but the act of selecting a letter and placing it in a prominent position isn’t a neutral act.
For clarity, they do maintain limited editorial discretion. Section 230(c)(2) states that service providers and users may not be held liable for voluntarily acting in good faith to restrict access to "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" material.
Further, Section 230 contains statutory exceptions. This federal immunity generally will not apply to suits brought under federal criminal law, intellectual property law, any state law "consistent" with Section 230, certain privacy laws applicable to electronic communications, or certain federal and state laws relating to sex trafficking.
Forget the Internet and Section 230 for a second. Anonymous publications, particularly on contentious matters like politics and religion, are a long-standing tradition. If someone anonymously posts a leaflet in a public square, how do you find the author?
A few things:
>But on the other hand, we all rally for the importance of anonymity on the internet, so it's very likely that there will be no way to find the author.
So:
1) We all rally for the importance of anonymity (wrt general speech) EVERYWHERE, and have since before (and critically for) the founding of America. Writings like the Federalist Papers were absolutely central to the arguments for the US Constitution, and they were anonymous. "The Internet" is not anything special or novel here per se when it comes to the philosophy and politics of anonymous speech. There has always been a tension between the risks and the value of anonymous speech, and America has come down quite firmly on the value side of that.
2) That said, "anonymous" on the internet very rarely actually rises to the level of "no way to find the author with the aid of US court-ordered process". Like, I assume that just as my real name is not "xoa", your real name is not "xg15", and to the extent we have made some effort at maintaining our pseudonymity, it'd be somewhat difficult for any random member of the general public to connect our HN speech to our meatspace bodies. But the legal process isn't limited to public information. If you have a colorable lawsuit against someone, you can sue their placeholder and then attempt to discover their identity via private data. HN has our IP addresses if nothing else, as do the intermediaries between the systems we're posting from and HN, along with possibly a valid email address. Those can potentially be enough breadcrumbs by themselves to follow back to a person, and enough cause to engage in specific discovery against them. And this is without any money getting involved; if there are payments of any kind, that leaves an enormous number of ripples. And that's all assuming nobody left any other clues, and that you can't make any inferences about who would be doing defamatory speech against you and narrow it down further that way.
Yes, it's possible someone at random is using a massive number of proxies from a camera-less logless public access point with virgin monero or whatever and perfect opsec, but that really is not the norm.
3) Hosters not being directly liable doesn't make them immune to court orders. If something is defamatory you can secure an order to have it removed even without finding the person in question. And in practice most hosters are probably going to remove something upon notification as fast as possible, as in this case, and ban the poster in question on top.
So no, I don't think it's a "massive vacuum of responsibility" any more than it ever was, and the counterpoint is that eliminating anonymous speech is a long-proven massive risk to basic freedoms.
The combo effectively enshittified swaths of the Internet, which is now full of robo-pamphleteers acting with anonymous impunity, in ways they never would if sitting face-to-face.
I love the Internet, but it normalizes bad behavior, and to the extent the CJEU is tracking toward a new and more stringent standard, it's one well earned by the Internet and its trolls.
I think the author, an American, is confusing common law and civil law. The ruling's language may be overly broad, but in civil law countries (like all of Europe), while it may be cited in other legal cases _as a supporting argument by one side or the other_, it will not fundamentally change the law, and it's not the catastrophe the post author makes it sound like.
> There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
False. A European court's conclusions are specific to the case it is ruling on, so they are inherently limited to the exact circumstances of that case, without any need to say so explicitly. They do not set precedent.
Another analysis, by Heise Verlag, publisher of c't, Europe's largest IT and tech magazine: https://heise.de/-11102550
The Russmedia ruling of the ECJ: Towards a “Cleannet”?
A change in liability privilege for online providers will lead to a “cleaner”, but also more rigid, monitored internet, says Joerg Heidrich.
The case is about advertising, but the ruling is not limited to advertising. That's the problem.
Long, but quite readable English judgement: https://curia.europa.eu/juris/document/document_print.jsf?mo...
And for quick reference, what the judgement actually entails:
On those grounds, the Court (Grand Chamber) hereby rules:
1. Article 5(2) and Articles 24 to 26 of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) must be interpreted as meaning that the operator of an online marketplace, as controller, within the meaning of Article 4(7) of that regulation, of the personal data contained in advertisements published on its online marketplace, is required, before the publication of the advertisements and by means of appropriate technical and organisational measures,
– to identify the advertisements that contain sensitive data in terms of Article 9(1) of that regulation,
– to verify whether the user advertiser preparing to place such an advertisement is the person whose sensitive data appear in that advertisement and, if this is not the case,
– to refuse publication of that advertisement, unless that user advertiser can demonstrate that the data subject has given his or her explicit consent to the data in question being published on that online marketplace, within the meaning of Article 9(2)(a), or that one of the other exceptions provided for in Article 9(2)(b) to (j) is satisfied.
2. Article 32 of Regulation 2016/679 must be interpreted as meaning that the operator of an online marketplace, as controller, within the meaning of Article 4(7) of that regulation, of the personal data contained in advertisements published on its online marketplace, is required to implement appropriate technical and organisational security measures in order to prevent advertisements published there and containing sensitive data, in terms of Article 9(1) of that regulation, from being copied and unlawfully published on other websites.
3. Article 1(5)(b) of Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) and Article 2(4) of Regulation 2016/679 must be interpreted as meaning that the operator of an online marketplace, as controller, within the meaning of Article 4(7) of Regulation 2016/679, of the personal data contained in advertisements published on its online marketplace, cannot rely, in respect of an infringement of the obligations arising from Article 5(2) and Articles 24 to 26 and 32 of that regulation, on Articles 12 to 15 of that directive, relating to the liability of intermediary providers.
This mostly seems to be about advertisers distributing content, not so much making it "effectively impossible to run a user-generated platform legally". Unless you think basic KYC for paying customers is "effectively impossible", perhaps.
> Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person.
Could this lead to censorship as well? For example you could go to a website or community you don’t like, and share information that could be seen as “sensitive personal data” and then file an anonymous complaint so they get into legal trouble or get shut down?
It seems to only apply to user-generated ads you run on your platform, not generic content.
So the community would have to accept community-generated ads without checking them. I'd say that's pretty rare.
Sigh. "...that personal data processed must be accurate and, where necessary, kept up to date."
How do they think a hosting provider can check whether personal data is accurate? Maybe if privacy didn't exist and everybody could be scrutinized... but the ruling refers to the GDPR to justify this, and the GDPR is about _protecting_ privacy. So, which is it?
And for everything else: is the material sensitive or not? How can anyone know in advance?
I suggest every website host simply forward each and every input to an EU Court address, and let them handle it. They're the ones suggesting that hosts should make sure that personal data on someone is "accurate", and they're the ones demanding that the data should not be "sensitive", so they may as well be responsible for vetting the data.
But they're all crazy anyway, as they demand that a website must block anyone from copying the content... so how, at the same time, can you even have a website? A website that people can actually view?
If the ruling were about collecting data that isn't for display, i.e. what an online shop does (address, credit card number), then it would be understandable. But provisions for that already exist; instead they use the GDPR as a tool to extend this to user-created content. It's not limited to ads. And ads do need something done, just something totally different from this.
> It's not limited to ads
Are you sure? We might have different readings, then; I felt it was obvious it was because it was an ad. And even more, an ad displayed through an algorithm, i.e. it wouldn't apply to Craigslist or platforms that display user-generated ads in chronological order.
[flagged]
It only targets user-generated advertising. If a platform chooses to display an ad that contains PII, they have to make sure the person whose PII is displayed has agreed.
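Operationally, the obligation reads like a pre-publication gate. A rough sketch of the logic the ruling describes, with the sensitive-data detection and identity check left as placeholders, since implementing those reliably is exactly the contested part:

    from dataclasses import dataclass

    @dataclass
    class Ad:
        advertiser_id: str
        subject_id: str | None    # person whose data appears, if detected
        explicit_consent: bool    # Article 9(2)(a) consent on file

    def contains_sensitive_data(ad: Ad) -> bool:
        # Placeholder for detecting Article 9(1) data (health, sex life, etc.).
        return ad.subject_id is not None

    def may_publish(ad: Ad) -> bool:
        if not contains_sensitive_data(ad):
            return True
        if ad.subject_id == ad.advertiser_id:  # advertiser is the data subject
            return True
        return ad.explicit_consent             # otherwise: consent, or refuse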
This particular ruling is about advertising, however:
> There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
If you have legal exposure in the EU, you should ask counsel about it ASAP. Especially if your site may be in the crosshairs for whatever reason.
[flagged]