The idea that a control system for an important bit of infrastructure like a power plant would be connected to the internet? A car with a driver assistance system getting software updates over the internet? Utterly inconceivable.
A pandemic that shifts almost the entire economy and almost all socialisation online? I doubt that would ever happen. In my society, we cover our mouths when we sneeze, and wash our hands after using the bathroom.
Just look at the most valuable companies that have driven the growth of the economy over recent decades: Apple, Google, Microsoft, Nvidia, Amazon, Facebook, Netflix. If there's one thing they have in common, it's that they have absolutely nothing to do with the internet.
The internet is, at its heart, nothing more than a chatroom for shut-in losers to talk about pokemon - so there's no need for anything online to be private.
The problem is, the mere fact of communication is sufficient to determine relationships, which can make any sort of organized action simple to identify, root out, and quash:
>The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
If the 2nd Amendment applies to a modern firearm (and it should), then the 4th Amendment has to apply to e-mails and text messages.
Your proposal is too modest. Why should only law enforcement see our messages? All messages should be treated like the banns of marriage and read aloud by the local priest or posted on the church walls, so that all interested parties can learn their contents and raise any relevant legal objection.
In actual fact, your proposal is too modest. Why should only electronic messages be so publicized? After all, relevant communication, either for our protectors in law enforcement (and secret services, don't forget their hard work for our prosperity) or for us members of society, happens via all kinds of mediums.
Privacy of correspondence might have had some relevance in the past, but today, with LLMs helping us work through the huge amount of data, every letter should automatically be scanned and added to a database for further consumption. Every telephone conversation. Actually, we should force phone manufacturers to turn their devices into permanent microphones and record everything they hear, gathered in public databases.
The best results from last week's search can then be read aloud in church or at community meetups.
In fact, your proposal is still too modest. We are still free to think whatever we want! We should fast-forward the development of Neuralink chips and broadcast everyone's thoughts live to everyone at all times, so that you can make a judgement of unethical behavior.
It's your lucky day! I have trained an LLM that reliably detects sarcasm. It was trained on only the finest sarcasm, dramatic irony, situational irony, ridicule and tomfoolery. As an amazing side effect it can even detect whether statements were made in bad faith or good faith with 100% reliability. You need never guess someone's intentions again! The machine will tell you.
I intend to launch it soon, don't miss this investment opportunity!
Recent, related, and cited:
Is Tor still safe to use? - https://news.ycombinator.com/item?id=41583847 - Sept 2024 (562 comments)
I have suspected Tor has been busted for quite a long time. LE is only using this power selectively for now - the last thing that they want is to scare users away lest they go and build something more secure.
The Nym mixnet[0] seems promising but it's still new and unproven.
I had an idea a while back to make traffic analysis more difficult by building circuits distributed across adversarial countries. Would like to hear thoughts on it.[1]
[0]: https://nymtech.net/about/mixnet
[1]: https://cedwards.xyz/adversarial-routing/
It's a basic correlation attack. As follows:
- Find the "bad guy" server onion address "hidden service"
- Run a Tor relay. Ideally many. No exit-node shenanigans needed - this is a hidden service, so traffic never exits Tor. That's quite nice from a legal perspective, since you're not on the hook for hacks coming off an exit node.
- Run a bunch of clients. Instruct to connect to "bad guy" onion.
- Gather data over time for correlation attacks: correlate your client's traffic to your relay's, and your relay's to the endpoint server's.
- At some point, you'll find one of your relays is the guy connecting directly to said hidden service.
Very simple lesson here. One needs to encrypt the information, yes, but failing to consider packet timing as "information" is the fallacy.
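The correlation step above can be sketched concretely: record packet timestamps at two vantage points, bin them, and measure how strongly the series co-vary. Everything below is synthetic toy data, not a real capture.

```python
# Toy sketch of timing correlation: bin packet timestamps from two
# vantage points and check how strongly the per-second counts co-vary.
import random

def bin_counts(timestamps, window=1.0, horizon=60.0):
    """Count packets per `window`-second bin over `horizon` seconds."""
    bins = [0] * int(horizon / window)
    for t in timestamps:
        if 0 <= t < horizon:
            bins[int(t / window)] += 1
    return bins

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

random.seed(0)
client_side = sorted(random.uniform(0, 60) for _ in range(500))
latency = 0.05  # network delay between the two observation points
relay_side = [t + latency for t in client_side]                # correlated flow
unrelated = sorted(random.uniform(0, 60) for _ in range(500))  # background

r_match = pearson(bin_counts(client_side), bin_counts(relay_side))
r_noise = pearson(bin_counts(client_side), bin_counts(unrelated))
print(f"matching flow r={r_match:.2f}, unrelated flow r={r_noise:.2f}")
```

The matching flow stands out sharply from the background even though every payload is encrypted - the timing alone is the information.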
The public tends to have some very strange ideas about a lot of things on this topic, while forgetting that Tor itself was originally a Department of Defense project - or NSW, I forget.
If you’re interested in seeing what the next generation of this stuff looks like (although AFAIK it's not really known outside of defence contracting circles), take a look at this: https://github.com/tst-race/race-docs/blob/main/what-is-race...
> The public tends to have some very strange ideas about a lot of things on this topic, while forgetting that Tor itself was originally a Department of Defense project - or NSW, I forget.
IIRC it was a US Navy project. But I didn't understand your point.
NSW = Naval Special Warfare but yes somewhere within the alphabet soup that is US natsec
But, apart from that, I don't understand your point about it being a military project.
People don't understand Federal entities aren't monolithic. Even within a single agency, you'll have teams doing both offensive (Red Team), and defensive (Blue Team) work.
People think that just because the research came out of the Navy, it was busted or compromised from the start, which it wasn't. Efforts to wrangle it in only spun up once it went from being an academic curiosity to being heavily noticed as a frequent tool/vector in investigations of criminal/adversarial activity.
One advantage of imperfect privacy solutions like Tor is they force authorities to invest if they want to snoop. In the before times, if someone wanted to read your mail they'd need to at least convince a judge and then spend manpower intercepting the envelopes; today they can just ping Google for a bcc.
Is that true? IIRC they still need to do the legal paperwork to get an email from google et al (FISA request?).
Yes, paperwork is required. I think OP is pointing out that it doesn’t require the same amount of work it used to. Especially from law enforcement.
LE mostly avoid this by just buying from the same public ad-network sources that everyone else does, and using that as a way to avoid the paperwork. I know people get very pissy about the EU's GDPR sometimes, but if you want to put an end to that kind of thing you need to tackle the private-sector collection problem: data almost anyone can access, with capabilities comparable to a mid-tier nation state.
Yes, mostly true; exceptions apply based mostly on jurisdiction and capabilities.
The safeguards are actually much better than the opinions on here would lead you to believe.
People really seem to get off on the idea that they are on the targeting list of an intel service, but you actually have to put in some real work to meet that criteria. If you’re buying drugs, for example, even the relevant LE authorities will at most knock on your door to scare you, assuming you live in an English-speaking jurisdiction.
The dark network of the future will be an onion-routed Hyphanet/Freenet, with monthly "bandwidth quotas" that make links communicate uniformly at X GB/hr regardless of traffic (padding when there is none) until the monthly quota is hit right at the end of the month. If internodal links don't vary in externally measurable ways when utilized, the value of netflow data is diminished.
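A minimal sketch of the constant-rate link described above (hypothetical, not how Hyphanet/Freenet works today): each tick the link emits exactly one fixed-size cell - real data if queued, padding otherwise - so the externally measurable rate never varies.

```python
# Constant-rate link: the wire carries RATE cells/sec no matter what,
# with cover traffic filling the slots real data doesn't use.
from collections import deque

CELL = 512          # bytes per cell
RATE = 10           # cells emitted per simulated second

def run_link(sendq, seconds):
    """Emit RATE cells/sec; return what an outside observer sees."""
    wire = []
    for _ in range(seconds * RATE):
        if sendq:
            wire.append(("data", sendq.popleft()))
        else:
            wire.append(("pad", b"\x00" * CELL))   # cover traffic
    return wire

q = deque([b"x" * CELL for _ in range(25)])  # 25 real cells queued
wire = run_link(q, seconds=5)                # 50 identical-looking slots
print(len(wire), sum(1 for kind, _ in wire if kind == "pad"))
```

From outside, the 25 data cells are indistinguishable from the 25 padding cells: every slot is the same size at the same rate, which is exactly what defeats netflow-style observation.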
I2P with more steps, and crypto-enforced minimum quotas to deter timing/correlation attacks.
Minimally-enforced "random" timeouts to prevent DDoS->outage correlation.
Also mirrors. Lots of mirrors.
Have mirrors tied to reputation tied to invites.
Then the barrier to entry is time + money + reputation (which is time + money).
Throw in some 0-KPz, and you are 100% chillin in Belize or 100% in Colorado-ADX
(in minecraft, hypothetically, to sell beets, i ♥ us)
Doesn't i2p also use this model?
i2p is a bit harder because the circuits aren't end-to-end. Your traffic goes through typically 3 relays, then an all-to-all mixing where it goes directly to the start of the recipient's relay chain, then 3 of their chosen relays. A new connection is NOT set up through the whole network for each overlay connection - it uses your same outgoing relay chain, and your last relay sends the packet to the first relay in the recipient's incoming relay chain.
It also uses a separate chain in each direction which makes any attack based on observing timing both ways more difficult.
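A toy model of the routing shape described above, with made-up relay names: the forward path is the sender's outbound tunnel plus the recipient's inbound tunnel, and the reply takes two entirely different tunnels.

```python
# I2P-style path composition: no end-to-end circuit is built; the
# sender's outbound tunnel hands off to the recipient's inbound tunnel.
def path(out_tunnel, in_tunnel):
    return list(out_tunnel) + list(in_tunnel)

alice_out, alice_in = ["A1", "A2", "A3"], ["A4", "A5", "A6"]
bob_out, bob_in = ["B1", "B2", "B3"], ["B4", "B5", "B6"]

forward = path(alice_out, bob_in)   # Alice -> Bob
reply = path(bob_out, alice_in)     # Bob -> Alice, different relays entirely
print(forward)
print(set(forward) & set(reply))    # no relay sees both directions
```

Because the two directions share no relays, an observer at any single relay only ever sees half of the conversation's timing, which is what makes bidirectional timing attacks harder.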
It's also not Sybil resistant at all.
Is there something new here? I’m under the impression that we knew this kind of thing was possible with enough resources.
Nothing new, and I'm pretty sure these sorts of attacks have been possible, and used, ever since its founding.
Tor ultimately works like any old relay system: if you control enough nodes, you can effectively decloak people if they happen to connect through only your nodes. Relays are selected for circuits based on consensus weight, so all a nation state would have to do is host enough nodes (relay + exit) and they'd be able to decloak connections. This inherently gives Tor-decloaking ability to the entities with the most infrastructure, which at that scale basically means nation states.
Tor works well enough for privacy when your adversaries aren't well-funded state actors. (i.e. it's probably enough to mask your traffic if you use Tor to access resources to get out of an abusive relationship, or need to circumvent cult-level inspection of your personal interests by religious schools. Most dictatorships also don't really have the resources to mount this sort of attack - it's probably just the US and some European countries.) That rule also goes for VPNs in general, however.
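A back-of-envelope sketch of why infrastructure share matters: if an adversary controls a fraction f of guard bandwidth and g of exit bandwidth, a fresh circuit is end-to-end compromised with probability roughly f·g, and repeated circuit building compounds the risk. The numbers below are made up for illustration.

```python
# Rough model: a circuit is owned when the adversary holds both its
# entry (guard) and its exit position.
def p_compromised_once(f, g):
    """Chance a single fresh circuit is end-to-end compromised."""
    return f * g

def p_compromised_ever(f, g, circuits):
    """Chance at least one of `circuits` independent circuits is owned."""
    return 1 - (1 - f * g) ** circuits

f, g = 0.10, 0.10            # adversary runs 10% of guards and of exits
print(round(p_compromised_once(f, g), 4))       # per-circuit risk
print(round(p_compromised_ever(f, g, 100), 3))  # risk over 100 circuits
```

This is one reason Tor pins a small set of long-lived entry guards per client: it limits how many times you play this lottery.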
The article confirms that the authorities are conducting a dragnet operation. Everyone who connected to a certain entry relay was tracked and reported.
Does the tor daemon connect automatically? If so, even people who installed Tor for fun and forgot about it may be on the list.
Did the lucky ones have the "Bundestrojaner" (gov surveillance app) installed on their machines?
>If so, even people who installed Tor for fun and forgot about it may be on the list.
Good. That reduces the quality of the list.
They selected that relay after they determined it was the one used by the person they were trying to go after. That isn't a dragnet.
There probably is a dragnet too.
Plenty of people still believe that using a VPN + Tor means they are “private”. What we need to teach others is that this is no longer the case - privacy is not a one-size-fits-all solution. You may be private from other users on your network, but not from nation-state actors.
VPN/Tor provide something else anyway: anonymity. Privacy is a different thing. You can lead an entirely private life in an environment where everyone knows who you are and who you interact with. They just don't know how you are interacting with those people.
Most people don't need anonymity most of the time...
Really curious about your definition of private here, because I have a buttload of evidence that says that is entirely untrue: as a private citizen you can buy detailed demographic and location data at an easily identifiable level, without even needing to talk to another person.
I think the surprising element is that the German government actually deployed enough resources.
It's not directly mentioned in this article, but the four deanonymized users were admins of a CSAM site with hundreds of thousands of users. If you're concerned about being targeted by law enforcement, step one is probably: don't be that.
https://www.dw.com/de/darknet-missbrauchsplattform-boystown-...
https://www.sueddeutsche.de/panorama/kindesmissbrauch-boysto...
Cool, we got the “if you don’t have anything to hide” argument out of the way early.
Now we can discuss the actual privacy implications of this news
Are you discussing the privacy implications? It looks like your only comment is this asinine middlebrow dismissal. Meanwhile, I've given actionable advice.
I don't think you get to charge anyone else with being asinine.
If a tool does not perform as designed, all users of the tool have an interest in knowing that, and working towards correcting that.
It doesn't matter that there are both good and bad users.
This is what everyone here seems to forget when they're ranting on about surveillance: that there are serious criminals out there who need to be caught. In this case, child abusers.
Whoever the engineers are who've worked on the technical aspects of deanonymizing Tor connections, they should feel very proud of their work and the good it's doing in the world.
Are there any projects that generate random traffic? Like a website that, while you have it open, keeps sending random traffic. It would make traffic analysis much harder.
Decades ago, when I first heard about timing attacks, I thought every network switch and NIC should generate essentially white noise at all times on the wire, with the actual traffic just mixed in: random amounts of random data going to random destinations, completely filling the pipe 100% of the time - like how a carrier wave is on at all times - just as a feature of lighting up the port. If the electricity is on, the noise is on. Or at least in the switches, and maybe not needed at the endpoints.
A fantasy.
It probably doesn't help; think about it: most websites already produce a load of random-looking traffic, and all users combined add heaps more randomness. No self-respecting analyst would go through logs manually; it's all fed into search/analysis software that filters through the noise.
Depends, it is a very well-known attack vector https://www.whonix.org/wiki/Speculative_Tor_Attacks#Website_...
Yes, Loopix.
https://www.usenix.org/conference/usenixsecurity17/technical...
It is already hard, with bots drowning out any legit traffic - and you want to add random traffic too.
As I mentioned elsewhere in this thread if you’re looking for proper state of the art it’s coming out of DARPA projects and you can see what I mean here https://github.com/tst-race/race-docs/blob/main/what-is-race...
Would using a VPN prevent prying eyes from detecting the IP address? This issue seems to affect only Tor users who don't use a VPN?
Yes, the German monitoring would point to the VPN provider instead of directly at the user. However, they could then install a monitoring device at the VPN provider's node.
Split entry guards should help: you connect to multiple of them instead of one, your data is split between them, then it travels to the exit through multiple paths (middle nodes), and there it is reconstructed into one data stream. How about that?
Connecting through multiple entry guards should help in this situation: Tor could split the data transfer into many smaller ones travelling through different paths (entry + middle) and then reconstruct it into one stream at the exit node.
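The idea in the two comments above can be sketched as follows - purely illustrative, since Tor doesn't actually support multipath circuits: shard the stream across several paths with sequence numbers, then reassemble in order at the far end.

```python
# Multipath splitting: round-robin chunks across n paths, tag each
# chunk with its offset, and reassemble by sorting on the offsets.
def split_stream(data, n_paths, chunk=4):
    paths = [[] for _ in range(n_paths)]
    for seq in range(0, len(data), chunk):
        piece = data[seq:seq + chunk]
        paths[(seq // chunk) % n_paths].append((seq, piece))
    return paths

def reassemble(paths):
    pieces = sorted(p for path in paths for p in path)
    return b"".join(piece for _, piece in pieces)

msg = b"hello tor world, via 3 entry guards"
paths = split_stream(msg, 3)
print(len(paths), reassemble(paths) == msg)
```

Note that splitting alone doesn't defeat correlation: each sub-stream still carries timing information, so an observer at any one guard sees a (thinner) correlated trickle unless padding is added on top.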
Time for nodes to inject some random traffic. It sounds like if even 0.1% were random fluff, they would not be able to track packets between nodes.
That time happened 10 years ago.
.1% fluff? May as well call em up yourself.
"everyone is assumed to be acting in good faith". Right. I think that when it comes to network service security the old adage told to my father by an Irish Catholic priest applies: "Bill, once you understand that most people are just no damn good, then you'll be fine".
Or, in the words of the NSA, "Trust, but verify".
I agree that HTTPS as it is used is bad, though. We only do one-sided TLS, not mutual TLS. Most people don't verify the server's cert by looking at it. Most apps don't encrypt messages before they go over TLS. In a more secure world, a proxy with stateful packet inspection would not be possible.
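For contrast, mutual TLS is already expressible in standard libraries; the sketch below shows a server-side context configured to refuse clients without a certificate (the cert paths are placeholders and commented out so the snippet stands alone).

```python
# Mutual TLS sketch: unlike the usual one-sided TLS, the server here
# would demand and verify a client certificate too.
import ssl

ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.verify_mode = ssl.CERT_REQUIRED        # reject clients with no cert
# ctx.load_cert_chain("server.crt", "server.key")   # server's own identity
# ctx.load_verify_locations("client-ca.crt")        # CA for client certs

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

The machinery exists; what's missing is deployment - issuing and managing a certificate per user is the operational cost almost no public service is willing to pay.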
As is often the case, the problem isn't technical (or at least not mainly technical). Employers, governments, and ISPs want proxies that inspect traffic, either for CYA or to increase budgets by increasing situational awareness. For governments, situational awareness increases wins by enabling them to catch people they deem bad actors. For employers and governments, increased SA means a decreased chance of leaks and people not doing what they're supposed to do with their time. For ISPs, it means they can monitor the traffic and restrict certain things (like video streaming, or running a server from home) to increase profit.
I can think of at least one potential solution, though it requires a technically savvy public, a patient public, and money: open-source phones in everyone's hands, circles of trust, and a distributed freenet where E2E-encrypted data is passed via gossip protocol whenever two phones come within Bluetooth range (figure 50m roughly) and both are within some N degrees of separation in the circles of trust. However, this means getting/sending data is asynchronous, with long delays and no guarantees.
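A toy simulation of that gossip scheme, with all parameters invented: a message spreads only between phones that are within N degrees of separation in the trust graph, whenever two of them randomly "meet".

```python
# Epidemic gossip over a trust graph: message passes on chance
# encounters, but only within DEGREES hops of friendship.
import random

random.seed(1)
N_PHONES, DEGREES = 30, 2

# build a random friendship graph
friends = {i: set() for i in range(N_PHONES)}
for i in range(N_PHONES):
    for j in random.sample(range(N_PHONES), 3):
        if i != j:
            friends[i].add(j)
            friends[j].add(i)

def within_degrees(a, b, limit):
    """BFS: is b reachable from a in at most `limit` friendship hops?"""
    frontier, seen = {a}, {a}
    for _ in range(limit):
        frontier = {f for n in frontier for f in friends[n]} - seen
        if b in frontier:
            return True
        seen |= frontier
    return False

has_msg = {0}                     # phone 0 originates the message
for _ in range(200):              # 200 random chance encounters
    a, b = random.sample(range(N_PHONES), 2)
    if a in has_msg and within_degrees(a, b, DEGREES):
        has_msg.add(b)
print(len(has_msg))               # how far the message spread
```

The trade-off the comment mentions shows up directly: delivery depends on who happens to meet whom, so propagation is probabilistic and slow rather than guaranteed.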
I guess you weren't around for the fun when https://en.wikipedia.org/wiki/Firesheep was popular.
I have seen the Wall of Sheep in person when HTTPS everywhere was just getting started, and you would still see wireless networks secured with WEP. This is pre-Snowden.
So you're aware of those cases and just going with "yeah, let's ignore those account takeovers, impersonations, data theft, etc., across any service from social media to banking and payments" because only bad people need encryption? Walk me through the process of a non-vile person using banking securely from a cafe/hotel in your scenario.
Do you not understand what I am doing?
If you're being facetious, it's hard to tell, because you're not over the top enough. There are people who genuinely hold that position and you can occasionally find them on HN. Often under the "HTTPS is a scam and makes everything slower and hard to debug" banner though.
Agreed. An added benefit is that ISPs could then monitor traffic and sell the data to ad networks or insurance companies, who could use it to drive more sales or cancel risky insurance policies, increasing the efficiency of the economy.
You imply that the only bad actors one needs protection from are the police. The vilest criminals can also use my data against me.
Yeah that's all fun, I don't have anything to hide either. But what if I actually WILL in the future, retroactively? I've said a few things here and there; what if certain types of speech end up getting banned, and if you don't remove them in time (or have lost access), you risk jail time?
Seems far away, but it's literally happening in England.
Please watch out with this kind of thinking - it's dangerous to everyone.
> what if certain types of speech end up getting banned and if you don't remove it on time (or lost access), you risk jail-time?
So racism, homophobia, and transphobia? Why would you support technologies that promote and support the dissemination of hate speech and misinformation?
For the near future, more like info on safe abortions. Or union organizing.
>So racism, homophobia, and transphobia? Why would you support technologies that promote and support the dissemination of hate speech and misinformation?
Historical content ought not to be censored at the behest of the morals of the present. There is great value in being able to access the content of the past in its primary-source form. If that makes me some sort of "ist", so be it.
In Germany (which also happens to be the country that successfully attacked Tor) it is currently illegal to support Palestine. Is there anything in your private messages that you wouldn't want the government to know?
because they are based
Schools should ban the teaching of math so future criminals won’t be able to use encryption for their evil deeds.
The line you are crossing is not a thin one.
Technology is never a solution to your democracy problems.
It’s 2024 and I can’t tell if this is sarcasm or not.
Then allow me to knock it up a notch!
Encryption isn't needed, because nothing important happens over the internet.
Nobody shops online, or does their banking online. Nobody would ever work from home over the internet - how would the boss know if workers were sleeping on the job? People who want to buy stocks from their phone simply phone their stockbroker. Anyone can post any nonsense on the internet, so it's useless for any serious research. Dating online, where anyone can lie about anything? I hardly think that's likely.
The idea that a control system for an important bit of infrastructure like a power plant would be connected to the internet? A car with a driver assistance system getting software updates over the internet? Utterly inconceivable.
A pandemic that shifts almost the entire economy and almost all socialisation online? I doubt that would ever happen. In my society, we cover our mouths when we sneeze, and wash our hands after using the bathroom.
Just look at the most valuable companies, the ones that have driven the growth of the economy over recent decades. Apple, Google, Microsoft, Nvidia, Amazon, Facebook, Netflix. If there's one thing they have in common, it's that they have absolutely nothing to do with the internet.
The internet is, at its heart, nothing more than a chatroom for shut-in losers to talk about pokemon - so there's no need for anything online to be private.
> so there's no need for anything online to be private
I disagree. Unless you’re being sarcastic?
@michaelt: looks like you need to knock it further up still!
Feel free to debate the statement on its merits, at face value.
The problem is, just the mere fact of communication is sufficient to determine relationships, which can make any sort of organized action simple to identify, root out and quash:
https://kieranhealy.org/blog/archives/2013/06/09/using-metad...
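The linked post ("Using Metadata to Find Paul Revere") makes this concrete: simple co-membership counts, with no message contents at all, are enough to surface the key connector in a network. A hypothetical miniature of that analysis, with made-up groups and names:

```python
from itertools import combinations
from collections import Counter

# Hypothetical metadata: which person was seen in which group.
# No contents, just "who appears alongside whom".
memberships = {
    "tavern": {"revere", "adams", "warren"},
    "lodge":  {"revere", "hancock"},
    "caucus": {"adams", "revere", "hancock", "warren"},
}

# Count how often each pair of people co-occurs across groups.
pair_counts = Counter()
for members in memberships.values():
    for a, b in combinations(sorted(members), 2):
        pair_counts[(a, b)] += 1

# Sum each person's co-occurrence links; the highest total is the
# likely "connector" whose removal would fragment the network.
link_totals = Counter()
for (a, b), n in pair_counts.items():
    link_totals[a] += n
    link_totals[b] += n

print(link_totals.most_common(1)[0][0])  # revere
```

Even this crude counting, run over phone or email records instead of group rosters, is why "it's only metadata" offers little comfort.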
>The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
If the 2nd Amendment applies to a modern firearm (and it should), then the 4th amendment has to apply to e-mails and text messages.
Your proposal is too modest. Why should only law enforcement see our messages? All messages should be treated like the banns of marriage and read aloud by the local priest or posted on the church walls, so that all interested parties can learn their contents and raise any relevant legal objection.
In actual fact, your proposal is too modest. Why should only electronic messages be so publicized? After all, relevant communication, either for our protectors in law enforcement (and secret services, don't forget their hard work for our prosperity) or for us members of society, happens via all kinds of mediums.
Privacy of correspondence might have had some relevance in the past, but today, with LLMs helping us work through the huge amount of data, every letter should automatically be scanned and added to a database for further consumption. Every telephone conversation. Actually, we should force phone manufacturers to turn their devices into permanent microphones and record everything they hear, gathered in public databases.
The best results from last week's search can then be read aloud in church or at community meetups.
In fact, your proposal is too modest still. We are still free to think whatever we want! We should fast-forward the development of Neuralink chips and broadcast everyone's thoughts live to everyone at all times, so that you can pass judgement on unethical behavior.
You lack ambition. We should instead proceed with the Human Instrumentality Project and turn mankind into an abstract singularity.
I’m OK with this so long as I’m in a position of power and exempt from having my communications public.
It's your lucky day! I have trained an LLM that reliably detects sarcasm. It was trained on only the finest sarcasm, dramatic irony, situational irony, ridicule and tomfoolery. As an amazing side effect it can even detect whether statements were made in bad faith or good faith with 100% reliability. You need never guess someone's intentions again! The machine will tell you.
I intend to launch it soon, don't miss this investment opportunity!