I think the social element is one of the roots of the problem.
Basically, people don't understand privacy, and don't see what
is going on, so they don't care about it. Additionally, most
privacy intrusions are carefully combined with some reward or
convenience, and that becomes the status quo.
This leads to the people who stand up to this being ridiculed
as tinfoil-hat types, or ignored as nonconformists.
Once my wife was ill and at home while I was at work. I wondered how she was doing, so I looked at Home Assistant and saw the hallway motion detector had been triggered, with the toilet fan shortly after. I saw in AdGuard that some news sites were accessed. Then came a spike in gas usage and a steep temperature increase in the shower, followed by a one-minute overall power draw of 2500 W; she probably made tea. She turned on the living room Sonos. So I guessed she was doing relatively well.
I showed her all this, and joked about how I'd make a "Wife status tile" in Home Assistant.
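For what it's worth, a tile like that would only be a few lines of configuration. A rough sketch of a Home Assistant template sensor, with hypothetical entity ids (binary_sensor.hallway_motion, media_player.living_room_sonos) standing in for whatever a real setup exposes:

    template:
      - sensor:
          - name: "Wife status"
            # Entity ids below are made up for illustration; substitute your own.
            state: >
              {% if is_state('binary_sensor.hallway_motion', 'on')
                    or is_state('media_player.living_room_sonos', 'playing') %}
                Up and about
              {% else %}
                Resting
              {% endif %}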
Yes. The weirdest example of this ( and most personally applicable ) is the DNA data shared with 23andMe and the like. I did not subscribe to this. Neither did my kids ( or, I might add, the kids of the individual who did subscribe ), but due to some shared geography and blood ties, whether I want it or not, I am now identifiable in that database.
To your point, there is something in us that does not consider what information could do.
If you have nothing to hide, what are you worried about? Or if you are not planning to be a criminal, what are you worried about?
I am 100% not serious and do not believe either statement above. Sadly, I am in the same boat as you: I had a black sheep of a brother who committed some sort of crime and, as a condition, had his DNA taken, so by default I am in the system as well.
I never could understand why people would willingly offer their DNA to companies. Even if they are not selling that data, sooner or later it could leak, and the consequences could determine whether or not you can afford life and medical insurance.
> I never could understand why people would willingly offer their DNA to companies. Even if they are not selling that data, sooner or later it could leak, and the consequences could determine whether or not you can afford life and medical insurance.
I’m the odd one out on this thread but I just… don’t see why it’s a big deal? All the consequences of my dna leaking seem so extremely theoretical and unlikely that I’m willing to take the risk in exchange for a few minutes of fun reading a report about where my ancestors came from.
This is always framed like people who willingly surrender privacy must not know better or must be uneducated about the harms, but I think it's fair for people to just decide they don't evaluate the harms as very serious.
The example you gave about health insurance is implausible because it’s illegal in the US and I assume other developed countries for insurers to charge different amounts for health coverage based on pre-existing conditions. It strikes me as very, very paranoid to worry that someday my DNA might leak, and there’s something bad in it, and the law will change such that insurers can abuse it, and I for some reason won’t have a job that gives me health insurance anyway. That’s a lot of ands making the probability of that outcome very small.
> The example you gave about health insurance is implausible [...]
See [1].
From [1]:
> GINA focuses only on one line of insurance—health; it does not prohibit other insurances—life, disability, long-term care (LTC), auto, or property—from using genetic information [...] in 2020, [...] Florida became the first US state to prohibit life, LTC, and disability insurers from using genetic test results to set premiums or to cancel, limit, or deny coverage
To me that means you are not safe.
> and there's something bad in it
This is just gambling. If enough people's DNA is out there, you will see the whole-population rate for conditions. You might consider it OK to be unexpectedly unable to buy long-term disability insurance because you have a 50x greater risk for YYYY than the general population.
> [...] and I for some reason won’t have a job that gives me health insurance anyway
This is an extremely privileged attitude. This part implies that if you get very ill you *must* continue to work in order to maintain your coverage. Even a highly paid SWE can be laid low by carpal tunnel syndrome.
While the stuff about disability and LTC insurance is slightly concerning, the part about life insurance isn't. I've never seen any convincing evidence that life insurance is anything but a big scam. The only time it seems to make any sense is if you're pretty sure you're going to die very soon and take out a term life insurance policy, but this seems to require either the ability to see the future, or a plan to hire someone to kill you so your family gets the insurance money.
Why auto or property insurance would be affected by your DNA I can't even begin to imagine.
Do you have a disposition making you more likely to end up in an auto accident? Can some other correlation be drawn, not genetic per se, but one that works out to some higher-risk social stratum in aggregate? You never know. The power imbalance is great: they won't tell you why you got your score, and with enough machine learning they probably couldn't even if they wanted to.
Term life insurance is not a scam if you have dependents. It's offloading the potentially severe consequences to someone else if you're the primary wage earner and die during a defined period of time. And it's generally inexpensive.
Think ‘normally I’d be working for another 20 years, would buy a house, send kids to college, etc., but I just got diagnosed with terminal cancer and now my kids are totally screwed.’
I think I know where you are going with this, but could you elaborate? Is the objection here based on math ( whole life is more expensive than term, so it is not cost-effective? -- because otherwise you are simply paying a premium for another benefit ) or something else?
It is term life + an annuity in disguise, with worse returns. That is also why it is the dominant product that life insurance sales folks try to sell. Because term life is well regulated and understood, so not high margin.
They generally get really disappointed when you buy a standard term life policy, but they’ll still sell it to you because money is money.
This stuff still seems frankly theoretical. I finally opted into long-term disability insurance after using the maximum short-term twice in the span of two years because of spinal degeneration in my late 30s. You have to agree to a medical exam and send records to apply for this insurance anyway, and in spite of trying to get it specifically because I'd used up the max short-term, and I am seemingly quite a high risk to actually become disabled, I was still approved.
In practice, in talking to co-workers also applying for the same things, the only people who ever got denied were all obese.
This is all setting aside that, assuming somewhat symmetric distributions of genetically determined traits, half of all people will have above average genetics. The conversation on the Internet always seems to fixate on people being denied coverage or charged more, but that seems to assume pricing models are just plain malicious, in which case they could charge more and deny you anyway, with or without data. Assuming they're actually using the data and building real predictive models, half the population would benefit from insurance companies having more data.
All that said, I would still never submit data to a company like 23andme, and would also never allow the police to have camera feeds of my house, even though I'm extremely confident they would never find a reason to arrest me. It's creepy, feels invasive, and I just don't want it.
> All the consequences of my dna leaking seem so extremely theoretical and unlikely that I’m willing to take the risk in exchange for a few minutes of fun reading a report about where my ancestors came from.
That's one of the things I've found odd about these discussions. Most of the concern seems to be about very theoretical things that we don't see in reality. On the other hand, the actual harm I'm seeing from mass surveillance is the fact that social media mobs often comb through someone's life and try - often successfully - to ruin them.
The way things currently stand, the fact that I'm unable to delete Hacker News comments is much more of a threat than sending my DNA to 23andMe.
<< The example you gave about health insurance is implausible because it’s illegal in the US and I assume other developed countries for insurers to charge different amounts for health coverage based on pre-existing conditions.
As phrased, I am unable to comment as to whether that statement is accurate, but I will go with it for the sake of the argument.
I chuckled a little, because that one phrase immediately reminded me of just how much political capital was spent to even allow 'pre-existing conditions' to be removed as a factor in denying coverage.
What exactly makes you think that law cannot be changed?
Changing the law is extremely difficult in the US because of the gridlocked-by-design political system, so I think it's unlikely. Changing it would also be extremely unpopular.
Of course it could happen. But even if it did, all the other unlikely events I listed would have to happen for me to be harmed. The point of my post was that me being harmed due to having given my DNA to 23&me is unlikely, not impossible. Just like it's theoretically possible a brick could fall on my head while walking outside, but I still don't wear a helmet every time I go outside.
Worrying so much about this stuff just feels to me like the tech geek version of preppers who stock their house with guns and canned food in case the apocalypse comes (which never does).
I appreciate you having the courage to go against the grain on this. I share similar views, specifically about healthcare privacy in general. It's obnoxious how far they go to guard some bland info like my blood type or blood pressure. I'm not saying it should be published on a ticker on the hospital's website, but the only info they should really keep private are the things that could be used to blackmail or shame people: birth control, abortion, STDs, etc. I actually hold the unpopular opinion that HIPAA goes too far. It's "privacy theater". If the concern is health insurers dropping patients, then the agency that regulates insurance should "leak" some information in a sting operation and sue the insurers for breaking the law. We shouldn't foist that liability on IT people and allow insurance to harm people.
Roe v Wade wasn’t a law. Actions by the Supreme Court which are unfavorable are much more likely given that there are only 9 justices, they are appointed regardless of popularity, and they have lifetime appointments.
The discussion is you comparing the overturn of a law to the overturn of Roe v Wade. The weight is completely irrelevant because we’re discussing the difficulty of the action.
Anyone who knows basic federal government structure in the US knows court rulings are significantly easier to move quickly compared to passing real laws.
This isn’t “playing semantics”, it completely invalidates your point. Look at how well overturning Obamacare went to see how difficult passing a law is.
<< This isn’t “playing semantics”, it completely invalidates your point. Look at how well overturning Obamacare went to see how difficult passing a law is.
You do have a point. I disagree that it invalidates mine, but it does weaken it based on how it was originally presented. That said, we are absolutely playing semantics, because while Roe vs Wade was not a law, it was a precedent that effectively kept even the consideration of law changes at bay. So it is not irrelevant, but you are correct from a purely technical standpoint.
<< Anyone who knows basic federal government structure in the US knows court rulings are significantly easier to move quickly compared to passing real laws.
<< Changing the law is extremely difficult in the US because of the gridlocked-by-design political system, so I think it's unlikely. Changing it would also be extremely unpopular.
I am thankful for this response, because it illustrates something OP pointed out directly ( as humans, we mostly suck at estimating future risks ). Changing a law is sufficiently possible ( hard, but possible ). On the other hand, short of current civilization crumbling before our eyes, there is no timeline in which DNA data already in the hands of some other entity could be put back in the bottle. Possible vs impossible ( assuming time machines can't exist ).
<< The point of my post was that me being harmed due to having given my DNA to 23&me is unlikely, not impossible. Just like it's theoretically possible a brick could fall on my head while walking outside, but I still don't wear a helmet every time I go outside.
I think the reality is that we do not know for sure ( although some fun science fiction does exist suggesting it is not a great idea to let that space be unregulated ).
That said, DNA, at its core, is just information. Information by itself is neither good nor bad. However, humans come in all sorts of shapes, sizes and capacities for evil. In some humans, that capacity is rather shallow. In others, it runs very deep indeed. Evil is not a prerequisite to becoming a CEO, but since humans can be pretty evil, it is just a matter of time before at least one is hardcore -- kicking-puppies-for-fun type -- evil. If so, that one evil person can do damage, if they so choose, with the information at their disposal. And the funny part is, there is just so much information hoarded and sold these days that.. really.. it is just a matter of time.
<< Worrying so much about this stuff just feels to me like the tech geek version of preppers who stock their house with guns and canned food in case the apocalypse comes (which never does).
I will not give you a speech here, but never is a really long time. If there is one thing that a person should have picked up since 2018, it is that things can and do change.. sometimes quickly and drastically. It is not a bad idea to consider various eventualities. In fact, DHS suggests it is a good idea[1] to think about your preparedness.
You might be mocking preppers, but I did not suffer from lack of toilet paper during the pandemic.
Supposing that there might be imminent drastic changes to society that would make it perilous for powerful 'evil' actors to know about my DNA, I don't see why those actors wouldn't be so powerful they couldn't just mandate DNA testing for everyone participating in society. My DNA can always be forcibly collected from me later on, regardless of what I do today.
Also, I don't see the relevance of 'never' here? Several lifetimes from now, there will be little to exploit in linking my DNA to whatever artifacts of my identity remain, since by then I'll just be a long-dead stranger. But then when we restrict ourselves to possibilities within my or my immediate descendants' lifetimes, we run into the issue above.
<< My DNA can always be forcibly collected from me later on, regardless of what I do today.
Hmm, would it not be an argument for nipping it in the bud now? I am confused.
<< Several lifetimes from now, there will be little to exploit in linking my DNA to whatever artifacts of my identity remain, since by then I'll just be a long-dead stranger.
Again, hmm. You state it as if it were a given, but it effectively assumes technology does not progress beyond what we have today. That said, all that I was about to type a moment ago is in the realm of pure speculation, so I will stop here.
I still think you are wrong, but I should get some sleep and it seems unlikely I could convince you to reconsider.
Given the proven low quality of most forensic science, and especially hair identification, it's all fun and games until the police decide you're the person they want to find guilty and they let the lab know that.
As I've said many times before in this subthread, I'm not claiming it's impossible that something bad could happen, just that it's very improbable, so it would be irrational to let fear of it control my life.
I feel like "control your life" is kind of a strong statement. I'm on the same side as the other commenters; I don't let fear control my life, I just let it be a factor in my decision to not send $29.99 and a cell sample to a company.
"in exchange for a few minutes of fun" is absolutely not worth enriching some dicks that don't care if I live, die, or get falsely accused.
All that notwithstanding, people regularly plead guilty to crimes they didn't commit because they are told it will make things easier on them, or that they will avoid the death penalty.
The TLDR is that the actual real evidence doesn't matter - what matters is if the prosecution and the police are able to convince a jury that you did the crime. Watch at least the lawyer's section until the end (the cop's section is basically "everything he said is true").
> The TLDR is that the actual real evidence doesn't matter - what matters is if the prosecution and the police are able to convince a jury that you did the crime.
Don’t you think there’s some correlation there though? Typically, the jury is convinced by telling them what evidence the state has.
It’s like saying a laser rangefinder doesn’t actually measure distance but time. OK, but one leads to the other…
"Can you state exactly what your threat model is?"
The threat model is that police and prosecutors need to find someone guilty. If you watch the video to the end, the lawyer explains exactly how even a genuinely, completely innocent person might be convicted of a crime, because they were able to use "some" evidence to show that you maybe were near the crime scene. They don't need definitive proof - they just need enough to sway the jury. And if you take that into consideration, then giving law enforcement any info about you can only ever work to your disadvantage.
> The threat model is that police and prosecutors need to find someone guilty.
Sure. And the chance that I'm the person they decide to pin it on because my DNA happened to be in a database is extremely low. Why are you only focusing on one of the bullet points when the point is that probabilities are multiplicative?
Well, by that standard it is not worth protecting your privacy at all; after all, the probability of any of your data being used against you is extremely low. And it's a difficult point to argue, because obviously it's true - but still, why take the risk?
>>Why are you only focusing on one of the bullet points when the point is that probabilities are multiplicative?
Because I'm saying that all of your points don't need to be true for something bad to happen to you. The probability of all your points happening is probably so close to zero it might as well be zero. But if you've given the state any information it can be used against you - like the point made in the video shows. So I think what I'm saying is that yes, your points are improbable, but not all of them have to happen for you to get screwed over.
> Well, by that standard it is not worth protecting your privacy at all; after all, the probability of any of your data being used against you is extremely low. And it's a difficult point to argue, because obviously it's true
Correct, this is exactly the point I'm making.
To recap: the point is that it is not necessarily the case that someone who sends their DNA to 23&Me is ignorant of the risks or stupid; it's entirely possible that they objectively analyzed the risks and decided it's not serious enough to care.
> but still, why take the risk?
For the same reason I walk outside without wearing a helmet to prevent me from bricks that could randomly fall from building facades. Mitigating the risk is not worth it, to me, relative to the hassle of doing so.
This may be a bit late in the discussion, but one of the biggest problems with allowing DNA to be put into databases which the police can trawl is that the risk of false positives increases as the database size increases. DNA profiling is a probability game, with an underlying assumption that people in the DNA database are at higher risk of being guilty than those not in it. Most convictions on DNA evidence also use partial DNA, meaning they accept an even higher risk of false positives. The current method in DNA forensics is also to use AI to combine multiple partial DNA samples into a single profile, and the false-positive rates of those are not well understood by judges and juries.
The legal system and the evidentiary value of a DNA profile could adapt to a world where every person's DNA is accessible, but it is a slow process and I doubt it will be done in my lifetime.
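To put rough numbers on the scaling problem (the figures below are purely illustrative, not real forensic match rates), here is a sketch in Python of how a tiny per-person chance of a coincidental partial match turns into a near-certainty of at least one false hit as the database grows:

    # Illustrative only: assumed probability that a random, unrelated person
    # coincidentally matches a partial DNA profile.
    p = 1e-6
    for n in (10_000, 1_000_000, 10_000_000):      # database sizes
        p_any = 1 - (1 - p) ** n                   # P(at least one false hit)
        print(f"{n:>10,} profiles: expected false hits {n * p:6.2f}, "
              f"P(>=1 false hit) = {p_any:.4f}")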
<< I never could understand why people would willingly offer their DNA to companies
I can play devil's advocate and come up with some level of rationalization along the lines of 'it will help humanity cure cancer' in a handwavy kind of way, but even then one is trading future potential against a near-100% guarantee that things do change in regard to what you gave -- that is: even if a company promises today that it will not do something with the data, a day will come when that will no longer be the case.
The black sheep example is definitely interesting though, and likely a good idea for a police drama episode ( if it wasn't used already ). Edit: And now that I think about it, if it were made, it would show that the good certainly have nothing to fear indeed.
You could put quotes around the first line to make it clear you're joking or being sarcastic. I was pretty taken aback by someone seemingly actually saying that, at first! :`D
I don't see the logic going from a data leak to not being able to afford medical insurance, as that would imply an insurance company saying "hey, there's been a big DNA data leak - get that data and make profiles of which people we should up the premiums on!" Which, OK, I guess I can't believe they wouldn't because of moral reasons, or even because it may already be illegal to do so, or because they would worry about being found out; it just seems like it would require not just thinking about it but probably also they would need to assemble a team to take advantage of it and that would not be worth it.
<< just seems like it would require not just thinking about it but probably also they would need to assemble a team to take advantage of it and that would not be worth it.
I think it has been somewhat well established that humans will do a whole lot of nasty without much thinking as long as a higher up tells them to. And this does not touch the simple fact that the companies are not exactly entities governed by morality ( and some would argue that it is not entirely certain if humans are either ).
>whole lot of nasty without much thinking as long as a higher up tells them to.
sure
>In short, I think you are wrong.
So, your technical conclusion is that because people will do bad things when told to by authorities, they will not need to start any sort of project to integrate the dumped data into their platforms, paying multiple developers for a potentially long time - it will just magically happen because of the power of evil?
I mean, I want to believe in the power of evil as much as the next guy, but that's a bit much. And once we go back to the whole "they would need to assemble a team to take advantage of it" point, which maybe was not that clear at the end of my post, then again, no matter how you slice the evil, it would not be worth it.
Because assembling the team to analyze and ingest 23&me data might take a while, cost a good amount of money, and might decrease in value over time (or increase in risk), for something that is probably illegal to do in the first place.
Higher ups may want it done, but probably only if it can be done immediately and doesn't cost a lot of money.
<< Higher ups may want it done, but probably only if it can be done immediately and doesn't cost a lot of money.
Since we are talking evil, I suppose money is a good place to start ( being root of it and all that ). So motivation would likely be there, but I am willing to accept your qualification of 'yes, but don't spend a lot'.
Let's look at some of that potential cost structure. The analytics part these days is not exactly expensive. Hell, HN just yesterday had a story about $4.80 of GPU time being used to do some reinforcement learning to find the 'best HN post' on 9 GB of HN data[1]. That used to require a little more time and be more labor intensive. Edit: Yes, what we are discussing would naturally go beyond $4.80, but in terms of bang for buck it is hard to find a better time than now.
One could reasonably argue getting the right people ( right experience and knowledge ) could be prohibitive in terms of cost, but.. if you are already an insurance company, it is not exactly impossible that you already hired people with applicable experience, knowledge and skill. And if you are already paying them, maybe this little offshoot project could be sold to them as a great advancement opportunity. And if they took it too far? Well, no one told them to go overboard. After all, we at <company D> have a strict ethics policy.
If that is the case, two big pieces of the cost structure are either negligible or already part of the annual budget.
>if you are already an insurance company, it is not exactly impossible that you already hired people with applicable experience, knowledge and skill.
My experience with large companies is that those resources are already allocated somewhere, with lots of managers and such; I think it would be a major thing to move people around or to hire new people.
Sure, it's nice to believe Evil is working agile, but really it just says it's adopted some agile methods and it's super slow, as per usual.
> I never could understand why people would willingly offer their DNA to companies. Even if they are not selling that data, sooner or later it could leak, and the consequences could determine whether or not you can afford life and medical insurance.
Reality: nobody cares about your DNA. It's useless for medical or life insurance companies; they can't discriminate based on DNA, by law. And if that is ever repealed, you can bet that life insurance companies will just start asking for your DNA info anyway.
DNA also doesn't provide actionable intelligence for advertisers that is worth more than a week of your purchase history or your Facebook profile.
However, DNA provides actionable intelligence for _you_. Mostly by highlighting the disease risks and other clinically-significant data (like drug metabolism speed).
All true but above all DNA is personal. It's yours to share or not. When someone related to you shares, they're sharing you and yours as well.
Let's rephrase: I never could understand why people would willingly offer their DNA *and the DNA of all those who share some part of their DNA* to for-profit companies subject to data breaches, court orders, nefarious employees, etc.
> Let's rephrase: I never could understand why people would willingly offer their DNA
To get information that benefits them.
And from a practical point of view, a lot of my information has been leaked multiple times already. And I'm carrying a phone that tracks my movements to within a few meters all the time. I walk by multiple Ring cameras every day, etc.
"Stand Out of Our Light " - Similar message as "Surveillance Capitalism" - from an ex-Google'r - but without the depth and breadth. Not as heavy but packs a near similar punch.
Not GP, but I despise this argument because the definition of a criminal depends on who you're asking: in some countries, doctors giving warnings about an impending pandemic could be criminals.
Edit: I was too fast on the comment button and didn't read until the end.
Privacy also seems to be one of those things people claim to care about, but in fact are actually reluctant to do anything about. Easy, low-tech privacy measures:
- leave your phone at home most of the time.
- don't buy a smart TV or other smart devices
- don't use social media
- etc.
None of these measures are bullet-proof, but they are relatively low-cost and don't require much expertise. These are much, much more likely to be things consumers complain about than things that consumers are actually ready to do something about. I think it's clear that consumers ALSO do not understand privacy, but I'd also suggest that they don't care very much. If they cared, there would be more of a market for privacy.
Re: Smart TVs. An old laptop running a browser with an ad blocker and some fancy controller setup (perhaps a wireless multi-button mouse or a specialized remote control) can get you way more privacy (and the sanity of a mostly zero-ad experience, just the occasional in-video affiliate ad) and makes whatever you watch a lot smarter, since you are the one doing the searching/surfing rather than the advertiser-funded smart TV channels that are specifically smart at advertising to you for profit. You can hook up an HDMI cable to a bigger screen/projector if you're playing stuff for lots of people.
"Don't buy a smart TV" is stupid out of touch and tonedeaf advice.
There are no non-smart TVs. You cannot buy one. Your only alternatives are to not buy a TV period, or go to great lengths to firewall your new smart TV.
Yes, non-smart displays exist, but they are not sold to consumers, nor at a price consumers can afford.
This is one of those prime examples of how the idea of "vote with your wallet" is a fantasy. Consumers are not in any way in control of the market.
There is no possible way to protest smart TVs when the only options for a new TV include spyware. Your only possible move is to not participate in the market, which then summarily ignores you.
Similarly, existing without a smartphone in today's society is largely not possible. You can't even park in many cities without an app.
I think it's clear that you don't understand the problems being discussed and are just blithely assuming that deflecting blame onto individuals is a reasonable position. It isn't. It's moronic and unconsidered.
Just as an addendum, I think you can still vote with your wallet, even in a world where there are truly no dumb TVs. The vote would be "do not buy a TV." It's an option, and isn't even that big of a deal. As another commenter noted, you can watch TV on your laptop with an ad-blocker, but you can also just read a book. I don't mean this to be rude or combative -- I'm quite serious here. TVs are luxury items in the narrow sense that they're not necessary for anything; they're just leisure. Because of that, it should be possible to completely avoid it. Take up hiking, learn the guitar, read more books, join a social club, etc. No one needs a smart TV.
Crucially, someone who owns a smart TV has implicitly made a claim "the entertainment of TV is more important to me than privacy." That's all I'm saying. I explicitly did NOT mention vehicle privacy because most people do not have a choice about whether they own a vehicle. They need it for work, child care, etc. It is possible to avoid smart vehicles, but it's getting more difficult, and I suspect it will be impossible in the future. (even with a dumb car, there can be license plate scanners in a lot of locations)
It's cheaper than the 55'' I bought a few years ago, is 4K, which mine isn't, and is roughly on par with what's for sale on the Smart TV side of things.
I just bought a non-smart TV at a yard sale for $10. I wasn't even out looking, it just happened to be there. It works great.
Much scarier, there are "smart monitors" coming which are just computer monitors but will display ads to users. Once that happens, and is wholly unavoidable, I'm honestly finished with computers for good.
As an aside, I have a very clear understanding of privacy issues. If there's a particular issue you'd like to dig into, I'd be happy to talk it out with you. I'll bet we agree more than you think.
But isn’t that the case because no one cares? If the demographic of people purposefully wanting non-smart TVs was large enough, someone would step in and offer a non-smart TV to make some money. The only reason this doesn’t happen is that it wouldn’t work because not enough people would buy it. Basically, the market already anticipates how people would vote with their wallets and has determined that smartdevice-haters are so fringe that they don’t matter.
Do you think people make a conscious choice to share their location thousands of times a day forever because they used the department store app to find the men's section one day?
The problem isn't about the big corporations themselves but about the fact that the network itself is always listening and the systems the big corporations build tend to incentivize making as many metadata-leaking connections as possible, either in the name of advertising to you or in the name of Keeping You Safe™: https://en.wikipedia.org/wiki/Five_Eyes
Transparent WWW caching is one example of a pro-privacy setup that used to be possible and is no longer feasible due to pervasive TLS. I used to have this kind of setup in the late 2000s when I had a restrictive Comcast data cap. I had a FreeBSD gateway machine and had PF tied in to Squid so every HTTP request got cached on my edge and didn't hit the WAN at all if I reloaded the page or sent the link to a roommate. It's still technically possible if one can trust their own CA on every machine on their network, but in the age of unlimited data who would bother?
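For the curious, the setup was nothing exotic. Roughly (from memory, so treat the interface, network and port names as placeholders): a pf rdr rule pushed the LAN's outbound port-80 traffic into a local Squid listening in interception mode, something like:

    # /etc/pf.conf -- redirect outbound HTTP from the LAN to the local Squid
    rdr pass on $int_if inet proto tcp from $lan_net to any port 80 -> 127.0.0.1 port 3129

    # squid.conf -- accept the redirected traffic and cache it on local disk
    http_port 3129 intercept
    cache_dir ufs /var/squid/cache 10000 16 256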
Other example: the Mac I'm typing this on phones home every app I open in the name of “““protecting””” me from malware. Everyone found this out the hard way in November 2020 and the only result was to encrypt the OCSP check in later versions. Later versions also exempt Apple-signed binaries from filters like Little Snitch so it's now even harder to block. Sending those requests at all effectively gives interested parties the ability to run a “Hey Siri, make a list of every American who has used Tor Browser” type of analysis if they wanted to: https://lapcatsoftware.com/articles/ocsp-privacy.html
One man's meta is another man's data. The classification of 'data' and 'metadata' into discrete bins makes it sound like metadata is somehow not also just 'data'.
If every morning I got in my car and left for work and my neighbor followed me, writing down every place I went, what time I got there, how long I stayed, and the name of everyone I called, it would be incredibly intrusive surveillance data, and I'd probably be somewhat freaked out.
If that neighbor were my cell phone provider, it would be Monday.
What we allow companies and governments to do (and not do) with this data isn't something we can solve in the technical realm. We have to decide how we want our data handled, and then make laws respecting that.
And with that, thanks to you, today I am a bit smarter than yesterday.
Thank you very much for that phrase, the rest of your post is a very good example for the layman, but that phrase should be the subtitle of a best selling privacy book.
> If every morning I got in my car and left for work and my neighbor followed me, writing down every place I went, what time I got there, how long I stayed, and the name of everyone I called, it would be incredibly intrusive surveillance data, and I'd probably be somewhat freaked out.
It's not "surveillance data," you are in a public place and have no expectation of privacy. It's only through such neighbourhood watch and open-source intelligence initiatives that our communities can be kept safe from criminals and terrorists.
Why are you so protective of your goings-on and the names of everyone you call? Are you calling terrorists or engaging in illicit activity at the places you visit? What is it that you have to hide?
I would actually take the premise of (national) security even further and extend collection to not only metadata, but data as well. Further, these capabilities should be open-sourced and made available to all private citizens. Our current law enforcement systems are not powerful enough, nor do they move quickly enough to catch criminals - by the time sufficient information has been gathered on a suspect, it may already be too late.
An argument so cliché that it has its own Wikipedia page[1]. In the US, we currently have a presidential candidate from a major party threatening harm to people based on their political, social, and biological qualities, which outsiders often determine by inference from data such as who people are in contact with and where they travel. Further, I would argue the need for individual privacy is innate in humans; as every child matures they find a need to do things without their parents over their shoulder, even without their peers, no matter how innocent the activity, and it is a need that does not vanish in adulthood. We generally agree that things like removing bedroom doors as punishment are abusive because they rob the person of privacy. The same goes for installing monitoring software on your partner's phone, or a GPS tracker on their car. Privacy means we are able to be ourselves without our lives being scrutinized, criticized, judged, rated, shamed, blamed, or defamed by every person on the street. I close the door when I defecate, I draw the blinds when I copulate, I don't tell people my passwords, and I don't scan my grocery receipt to earn points, because there are some things other people don't need to know.
Lol. So who does "deserve" privacy your highness? I'm guessing you do at the very least since you seem so judgemental on those with an "incessant, insatiable need to broadcast their lives 24/7" - which you presumably do not.
You're pretty judgy and seem incapable of even conceptualising a nuanced position on this topic. And your take on Assange, Snowden and Appelbaum is clearly first order trolling.
Unless you forgot the /s at the end of your whole comment.
Observing someone by chance in public is protected. Stalking them is generally a crime, although jurisdictions differ in their inclusion of surveillance (without contact or purpose) only as a form of stalking. Generally speaking, if someone is following you around everywhere, a reasonable person will start to fear for their safety and criminal codes seek to protect people from that.
While not as immediately threatening, realizing that a company is maintaining a large dossier about you may cause some concern about how they will utilize that (obviously against your undisturbed behavior). It is reasonable to be concerned about that usage and intent.
Imagine you are a baker in late-1930s Germany. You deliver bread every day to a synagogue. Imagine cell phones and apps existed. The Nazi government could now, with little effort, see that you went to a synagogue every day for the last couple of years, so they decide to send you to a camp, even though you are not a Jew. Metadata is not dangerous, you think?
There's no need for theoreticals - we know very well that the Nazis used census data which recorded a person's religion to find and kill Jews (and others). At the time, I imagine giving this data to the state felt like not a big deal, but how could they know it would lead to their deaths?
> Why are you so protective of your goings-on and the names of everyone you call? Are you calling terrorists or engaging in illicit activity at the places you visit? What is it that you have to hide?
Basic political associations can become problematic when people get riled up. See “the red scare”.
We’re not far from that again with people cutting out major relationships based on support or disdain of Trump.
I am struggling to comprehend how allowing everyone between you and the services you use to view not only the metadata but the content as well could possibly be considered privacy-preserving.
It’s kind of an unorthodox take, but I’m guessing the idea is that if corporations perceived that they didn’t have secure ways to protect stuff, they would refrain from gathering as much stuff, because they would be afraid of the liability. And btw the perception / reality distinction is important here in supporting this theory.
I disagree. What makes corporations afraid of liability are laws enforcing liability. We never got those, and I don’t see why weaker encryption would’ve created them. We could, for example, have meaningful penalties when a company leaks passwords in plain text.
In your mind, SSL won't leak anything and non-SSL leaks everything.
Make a list of everything you can infer without a cert by looking at an SSL connection. Then add on top of that all the things people with the cert, or with control over CAs, can see, and make a list of them all.
When you're done, you'll notice SSL is not as perfect as you think, and the extra requests and lack of caching compound all that.
> Make a list of everything you can infer without a cert by looking at an SSL connection
This exactly, and not just connection but connections, plural. If the network observes my encrypted connection to ocsp.apple.com followed by another encrypted connection to adobegenuine.com, an analyst could reasonably assume I'd just opened an Adobe Creative Suite app. Or if they see ocsp.apple.com followed by update.code.visualstudio.com, I probably just opened VSCode. Auto-updaters are the same kind of privacy scourge and every additional connection makes it worse.
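A toy sketch of what an on-path observer could do with nothing but destination hostnames, which stay visible via DNS/SNI even when the payloads are encrypted (the signature table is made up for illustration):

    # Toy traffic-analysis sketch: co-occurring destination hostnames -> likely app launch.
    LAUNCH_SIGNATURES = {
        ("ocsp.apple.com", "adobegenuine.com"): "an Adobe Creative Suite app",
        ("ocsp.apple.com", "update.code.visualstudio.com"): "Visual Studio Code",
    }

    def infer_launch(observed_hosts):
        hosts = set(observed_hosts)
        for signature, app in LAUNCH_SIGNATURES.items():
            if hosts.issuperset(signature):
                return app
        return None

    print(infer_launch(["ocsp.apple.com", "update.code.visualstudio.com"]))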
> Transparent WWW caching is one example of a pro-privacy setup that used to be possible and is no longer feasible due to pervasive TLS.
What? You're kidding. If we didn't have pervasive TLS we'd have neither privacy nor security. Sure, a caching proxy would add a measure of privacy, but not relative to the proxy's operator, and the proxy's operator would be the ISP, and the ISP has access to all sorts of metadata about you. Therefore pervasive TLS did not hurt privacy, and it did improve security.
You're making the same mistake as Meredith Whittaker. It's a category mistake.
> Other example: the Mac I'm typing this on phones home every app I open in the name of “““protecting””” me from malware.
What does this have to do with secure cryptography? That's what TFA is about. You are conflating security as-in cryptography with security as-in operating system security. More category errors. These are serious errors because if we accept this nonsense then we accept weak cryptography -- that's DJB's point.
Oh damn, that escalated quickly. Nice! How is that 51nb board? I totally forgot they were a thing. I have many ThinkPads but unfortunately am at the cap of coreboot-able (X230)... It's sadly getting to the point where the web, of all things, is gradually creeping out of reach.
It is the best computer I have ever used but parts availability can be an issue. For example I had the eDisplayPort flex-PCB go bad in my X210 and had to homebrew my own replacement. I have an entire spare machine just in case, since I couldn't just go out and buy one if I needed it Right Now.
Nice, that's cool to hear (best computer), but yeah I suppose it has some inherent "rarity" to it. One of the nice things about the ThinkPads is their popularity/"ubiquitousness" (is that a word?) - I have like, five X230's at this point! So easy to find an amazing deal on one if you're patient. But yeah, these are really starting to show their age. Still fine to use overall, but it can be pretty limiting at times.
Ignore the downvotes - you raise a point worth discussing.
Apple spent a good amount of time and money putting out marketing to convince people that their brand emphasizes privacy. This was part of a brand recovery effort after quite a few folks' intimate photos were leaked out of iCloud.
But it's become evident, as in the post you replied to, that they aren't as privacy-friendly as their marketers want you to believe. You should consider alternatives for your computing needs - specifically, open-source software which is not in control of large corporations.
Apple has been focusing on privacy as a part of their core offering since long, long before the iCloud photo leak. Them being imperfect is not a sign that they are willfully malevolent actors.
The post they replied to doesn’t make anything “evident” it just claims without basis that if you want privacy you should stop using Apple products.
I mean sure in an absolute sense that’s true. Using Apple products gives them some information about you. But relatively speaking, Apple tends to collect significantly less data about its users than its competitors: Meta, Google, Microsoft, et al.
I don't find the "not as bad as" argument to be a convincing one. Given that users can run hardware and software that doesn't give out any information about them, it seems defeatist to only consider software which does give out information. A lot of people have spent a lot of time and effort to make software like Linux and LineageOS available and easy; choosing the least-bad of bad options makes no sense when actual good options are available.
The OP of this thread gave a specific example of Apple circumventing user privacy in a way that I would find unacceptable. "Replied to" was not the best phrasing for that, I admit.
Users can also live in a shack in the woods which is even more privacy-preserving.
Presumably just like most users don’t want to do that, most users also don’t want to learn enough to admin a Linux system, run their own domain and email server, and keep a NAS at home as their “cloud” storage.
If you assume that users want someone else to handle this stuff for them, then yes, “not as bad as” is a great argument.
Wow, nice analogy - you really think that using Linux is like living in a shack in the woods, huh. It's actually very easy to use these days. Have you tried it?
I’ve used Linux for the last twenty five years, both as my daily driver personal desktop and as an admin.
My point is that if you want to chase privacy absolutism, a shack in the woods is where you inevitably end up. If you accept that people want to use consumer-focused goods and services that come with some privacy cost—as basically fucking everyone but a minute rounding error does—there are alternatives that are better than others. And so it’s absolutely worth comparing those alternatives.
If you want to run Tails on RISC V, route all your traffic through Tor, and conduct all your transactions with Monero then more power to you.
I don't accept that, actually. Since you like exaggerated analogies, here's one for you:
Imagine a world where, in the past twenty years, big companies started making transparent bathroom doors. And thanks to marketing, media, celebrity endorsements, etc., transparent bathroom doors have become the new norm. It worked, and most bathroom doors are now transparent or translucent.
I'm one of the people pointing out that we can get doors made of wood, and it's pretty easy to do so.
And you're the guy saying "that's so weird! Basically fucking everyone uses some degree of transparency on their bathroom doors, therefore it's normal and good, and should continue to be encouraged. Besides, this one company makes translucent bathroom doors - that's better, right?"
It is a matter of perspective. Of all Mac users, the number of people wanting to hide their app usage is practically zero compared to the number downloading a free wallpaper app or game who need to be protected from their own actions. For the second set, an OS monitoring activity and blocking potentially harmful apps is more secure.
Where people stand on this question ultimately lies in whether they trust what Apple says. For example, Gatekeeper / OCSP, the service mentioned in the GP. Apple says the following:
> Gatekeeper performs online checks to verify if an app contains known malware and whether the developer’s signing certificate is revoked. We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are using on their devices.
That's either true or it isn't. If it's true, then the GP comment is wrong about "Hey Siri, who is using Tor"; if it's not true, they are correct. Blocking the service using a hosts file works, and does not prevent applications from opening. A case can be made that this should be even easier, with a System Preferences setting, but we come back to the same question: if you trust what Apple says about the service, making it easy to disable (and blocking a DNS entry is not especially difficult) would be foolish, because the threat landscape does include malware, and does not include Apple sharing information (they claim) they don't have about what programs users open.
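(For completeness, the hosts-file approach is just a null route for the OCSP host; the hostname below is the one discussed in the article linked above, and newer macOS releases may use additional endpoints.)

    # /etc/hosts -- null-route the Gatekeeper OCSP check (this also disables the
    # malware-revocation lookup described above; weigh that trade-off yourself)
    0.0.0.0 ocsp.apple.com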
If Apple is lying, or one thinks Apple is lying, then the problems do not end with Gatekeeper. They could be logging every key I type, faking E2EE through some clever obfuscated code, and so on. Blocking the OCSP server will do nothing; they can exfiltrate anything they want from an operating system which they alone control.
I happen to believe Apple's privacy claims are honest. This is based on a couple of things: primarily, privacy is a valuable (to them) and emphasized part of their brand, and any discovered breach of trust would be hugely detrimental to their bottom line. Also, there's a dog which didn't bark, in the form of an absence of whistleblowers reporting on how Apple's privacy claims are bullshit and they actually pwn everything.
TL;DR there are OSes which claim to offer more privacy than Apple's, but now you're trusting ~everyone who has contributed software to those operating systems instead. I also happen to think that e.g. Qubes and Tails do improve on privacy over the macOS etc. baseline, but I can't prove that, any more than I can demonstrate that Apple isn't lying.
It is physically impossible to audit all the code we run personally. It just can't be done. So trust is a non-optional component of making decisions about privacy and security. It will always be thus.
I don't see metadata as a danger; I think it's a great compromise between police work and privacy.
Some of the requirements I see here seem crazy: I want carte blanche access to the global network of other people's computers, and I want perfect privacy, and I want perfect encryption...
Keep in mind that you don't decide who's a terrorist and who isn't. You might be "glad" about the NSA doing their job as long as your definition of terrorism aligns with the government's but what if that ceases to be the case?
I'm too young to truly appreciate this, but I have spent my time going through archives of the Cypherpunk mailing list.
The one thing I always think about on HN is what some of those guys would think (or presently think) about the cultural shift among nerds and otherwise techies such that this comment is even possible.
They all projected, correctly or not, such a potentially dystopian/utopian world. And they definitely didn't agree with each other. But there was still this sense of shared belief and shared cause of generally being, to say the least, skeptical and antagonistic to the state, of the kind of formal potential for liberation in code. That things could be different.
But here we are now. Computers and what they do are no longer a source of hope or doom. They either make us money, or they help us catch ambiguous enemies.
I wish I had been around for the golden era. All that is solid melts into air.
It's no mistake that the rise of cyberpunk and postmodernism coincided with the collapse of competing ideologies to market capitalism. As Capital killed its enemies, you see belief in humanity and its ideals in art go up in smoke.
Personally, I find computers to be harbingers of doom. Not essentially, of course, but it's pretty clear at this point we're not going to see the potential of the technology we already have realized within my lifetime, but we will see a good deal of the predicted use to abuse people. Hell, we already see much of it.
Blaming capitalism doesn’t make any sense because it’s a different axis. The security vs privacy debate is quite old and different societies handle the trade completely independently of how capitalistic their economy is.
Is it really a hypothetical at this point? I was under the impression that relevant cases have already been explored ( to the extent that one can given the nature of IC ). In cases like these, the moment it is actually a problem, it is likely already too late to make sensible adjustments.
Transparent HTTP caching as a way to avoid leaking metadata is not pro-privacy. It only works because the network is always listening, to both metadata and message content. The reason why people worry about metadata is because it's a way to circumvent encryption (and the law). Metadata is holographic[0] to message content, so you need to protect it with the same zeal content is protected.
But letting everyone have the message content so that metadata doesn't leak isn't helpful. Maybe in the context it was deployed, where pervasive deep packet inspection was only something China wasted their CPU cycles on, your proxy made sense. But it doesn't make sense today.
[0] X is holographic to Y when the contents of X can be used to completely reconstruct Y.
How is metadata holographic? Sure, you can know when I communicated with a particular individual, and even the format and size of the message, but it doesn't include the exact message, right?
Gordon Welchman first productionized “traffic analysis” in WW2 at Bletchley Park.
When, in his retirement, he tried to write about it, it was his work on traffic analysis, more than his disclosure that the Allies had cracked Enigma, that most worried the NSA, which tried to stop him from publishing.
Traffic analysis is in many ways more valuable than the contents of the messages.
I won't say that metadata isn't valuable, but I still don't think it's holographic. You can tell I WhatsApp my friend every day around noon, so we're probably talking about lunch, but you don't know that today I had a tuna sandwich.
Old thread but I think there’s a wood and trees thing here.
Traffic analysis is king because who you communicate with is a low noise signal and what you communicate is usually noise.
This is well known for police work and military intelligence etc.
It’s also true for ad sales. Ad networks want the trackers on sites so they can build up a profile of you based on metadata not the content of the pages you visit themselves.
I think I agree with Bernstein that the talk is mostly incoherent about this "privacy" vs. "security" tradeoff.
However, I do want to call out his "Amazon was doing good business before 1999 and the end of the crypto wars", and "companies allocate just a small fraction of their security spend to cryptography":
* Prior to the end of export controls, Amazon was still doing SOTA cryptography
* Export controls themselves boiled down to clicking a link affirming you were an American, and then getting the strong-cryptography version of whatever it was you wanted; there were no teeth to them (at least not in software products)
* Prior to the widespread deployment of cryptography and, especially, of SSH, we had backbone-scale sniffing/harvesting attacks; at one point, someone managed to get solsniff.c running on some pinch point in Sprint and collected tens of thousands of logins. Lack of cryptographic protection was meaningful then in a way it isn't now because everything is encrypted.
I don't think he was arguing that things weren't more secure after the export controls were dropped. I feel like that's why he was arguing to drop them at the time. He's just saying that all the signs point to Amazon/internet commerce becoming a behemoth either way. So we'd just end up in the same situation wrt what the talk sees as the current state of things, but with compromised cryptography.
He was right about export controls. Nobody disagrees with him. I don't even think Meredith Whittaker does. But many times, I've come across a folk belief that strong cryptography was rare in North America before export controls were eliminated; it was not.
Aside from everything else, I don't understand what Whittaker's point was; she seemed to ultimately be advocating for something, but I can't understand what, exactly.
It's probably in the talk's last sentences:
> We want not only the right to deploy e2ee and privacy-preserving tech, but the power to make determinations about how, and for whom, our computational infrastructures work.
> This is the path to privacy, and to actual tech accountability. And we should accept nothing less.
But who are "we" and "whom", and what "computational infrastructure" is she referring to?
I can fill that in for you I think. The "We" and "Whom" are you, me, the arbitrary host/admin/user.
If you look at the regulatory trends developing around tech at the moment, there are a lot of pushes to slap obligations on the host to essentially toe the societal line of their geopolity. You will spy on your users. You will report this and that. You will not allow this group or that group.
This tightening acts in part to encourage centralization, which is regulable by the state, and to discourage decentralization, which the state can regulate only notionally, at best.
The power of technologically facilitated networking was, before the Internet, largely a luxury of the State, or of entities granted legitimacy by the State. With everyone having the potential to take their networks dark enough that state-level actors genuinely have to physically compromise the infrastructure instead of just snooping the line, the currently extant edifice of power is under threat of a bottom-up inversion.
No longer would the big boys in the current ivory tower be able to sit on high and know what threats there may be purely by sitting on SIGINT and data processing and storage alone. The primitive of true communications and signalling sovereignty would be in the hands of every individual. Which the establishment would like to cordially remind you includes those dirty terrorists, pedophiles, communists, <group you are mandated to treat as an outgroup>. So therefore, everyone must give up this power and conduct affairs in a monitorable way to make those other people stand out. Because you're all "good" people. And "good" people have nothing to fear.
You can't deplatform persona non grata from infra they've already largely built for themselves, which is a terrifying prospect to the current power structure.
> The primitive of true communications and signalling sovereignty would be in the hands of every individual.
That's great and all, but how does that help with mass surveillance by big tech? How would "true communications and signalling sovereignty" shield me from Google, Facebook, Whatsapp, Twitter, etc.?
> Aside from everything else, I don't understand what Whittaker's point was; she seemed to ultimately be advocating for something, but I can't understand what, exactly.
The whole talk felt like it was gearing up to making a point but then it ended. It turned out that the point was to blame our current situation on the "sins of the 90s". To be fair, it was in the title all along so I'm not sure why I was expecting otherwise.
I think this article isn't considering wifi. Most early sites were pressured into using SSL because you could steal someone's session cookie on public wifi.
Without cryptography, all wifi is public, and in dense areas, you would be able to steal so many cookies without having to actually get suspiciously close to anything.
I'm guessing without crypto, we would only access financial systems using hard lines, and wifi wouldn't be nearly as popular. Mobile data probably wouldn't have taken off since it wouldn't have been useful for commerce.
I thought WiFi was somewhat secure from other clients, even if your connection is unsecured at the TCP layer, so long as they're not impersonating the hotspot. You're certainly not secure from the hotspot itself, of course.
Only if the WiFi network is password-protected, which causes connections to be encrypted. Pretty much all WiFi is password-protected nowadays -- if a cafe wants to enable public access to their WiFi, they'll write the password on the wall -- but that only became the case after Firesheep and other sniffing tools drew attention to this issue around 2010. In the old days, there were plenty of networks with no password (and hence, no encryption) at all.
The GP specified "without cryptography", in reference to a counterfactual world where we weren't allowed to encrypt things.
> Pretty much all WiFi is password-protected nowadays
I was at Disneyland last week and stayed at one of their hotels - and all the guest Wi-Fi networks were passwordless and therefore insecure. Ditto the free WiFi at the airports at both ends; oh, and the in-flight Wi-Fi too. While walking around the park my iPhone listed a bunch of passwordless mobile hotspots too.
Are you thinking of captive portals with logins/passwords? (E.g. Marriott/Hilton “Enter your room-number and last-name” portals.) I assume you’re aware that’s only used to authenticate after the WiFi connection is already established?
———
(I really hope that I’m wrong on this, but I’m not aware of any modern WiFi standards that address this. Of course, corp/edu networks can just use RADIUS or a client certificate, which works on wired networks too.)
Also, it’s surprising we still haven’t figured out getting TLS to work with home-user-grade routers’ control-panels…
> Pretty much all WiFi is password-protected nowadays
This is absolutely not true in the US. All major hotel chains have no encryption, airports do not, Starbucks doesn’t, etc.
It’s usually small businesses that opt for a WPA passphrase because that’s easier to set up than the captive portal nonsense that all of the big companies use.
Aren’t they still just encrypted against the password itself? So if it is a public place like a coffee shop with a known password, anyone can decrypt the data?
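A minimal sketch (my own illustration, not from the article) of why that intuition is roughly right for WPA2-PSK: everyone who knows the cafe passphrase derives the same pairwise master key, and each client's session key mixes in only values that are sent in the clear during the 4-way handshake, so a passive listener with the passphrase who captures your handshake can compute your keys. WPA3's SAE handshake and "Enhanced Open" (OWE) were designed to close exactly this gap.

```python
# Sketch of the two WPA2-PSK derivation steps (the IEEE 802.11i constructions).
import hashlib, hmac

def pmk(passphrase: str, ssid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes):
    # identical for everyone who knows the cafe password.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def ptk(pmk_: bytes, aa: bytes, spa: bytes, anonce: bytes, snonce: bytes,
        length: int = 48) -> bytes:
    # PRF that expands the PMK with the two MAC addresses and the two nonces,
    # all of which an eavesdropper can read from the unencrypted handshake frames.
    b = min(aa, spa) + max(aa, spa) + min(anonce, snonce) + max(anonce, snonce)
    out, i = b"", 0
    while len(out) < length:
        out += hmac.new(pmk_, b"Pairwise key expansion\x00" + b + bytes([i]),
                        hashlib.sha1).digest()
        i += 1
    return out[:length]  # KCK | KEK | temporal key used to protect the session
```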
Ah ok. I thought they were referring back to SSL in their first paragraph. Interesting, I had forgotten that WiFi networks once didn't all have passwords.
In a nutshell, I don't think we would have seen much change. Corporations only engage in security insofar as they are required to. We've seen that even in this "metastatic SSL-enabled growth" we've basically sold out security to the lowest common denominator, and core actors in the industry just use these security features as a fig leaf to pretend they give a single crap.
Now, would CERTAIN industries exist without strong cryptography? Maybe not, but commerce doesn't really care about privacy in most cases; it cares about money changing hands.
I don't know, they sure make sure the paper trail is shredded with their Azure Document 365 subscription. When it comes to security from liability, everything is top notch.
Right: So what we need to do is make organizations liable for mishandling data.
Imagine if you could sue a company for disclosing your unique email address to spammers and scammers. (They claim it's the fault of their unscrupulous business partner? Then they can sue for damages in turn, not my problem.)
There are some practical issues to overcome with that vision... but I find it rather cathartic.
Cryptocurrency, if you accept it and its ecosystem as an industry, would certainly not exist. And as for privacy, a fairy dies every time someone praises bitcoin for being anonymous.
"A decent number" instead of "every" kind of supports my point, though. I'm not saying you get anonymity for free, but by taking the right steps and being very careful, it's actually pretty straightforward.
How could that have stayed relevant for more than a few more years? The world does not end with the US. Regardless of the ban, strong crypto would have been developed elsewhere, as open source, and proliferated to the point of making continuation of the ban impossible: by ~2005 or earlier, it would have been either the US closing itself off from the global Internet, becoming a digital North Korea of sorts, or allowing strong crypto.
On that note, OpenBSD is from Canada and thus not subject to crypto export restrictions (not that I even know what such restrictions are present in the US today, if any) - https://en.wikipedia.org/wiki/OpenBSD
Popular OSes and browsers have almost entirely come from the US. If people had a choice between IE with weak crypto or Opera with strong crypto they absolutely would have chosen IE.
According to a talk by Eben Moglen (https://softwarefreedom.org/events/2012/Moglen-rePublica-Ber...), the noted connection between strong encryption and mass surveillance was a policy change by the US government. Before 2001, the policy was to repress and delay strong encryption and keep it out of the public sphere in order to maintain the state's ability to monitor communication. After 2001 the policy changed towards mass surveillance strategies, whose methods we got some insight into through the many leaks released a decade later by people like Snowden.
The connection is interesting, but the key word I find important is policy. Mass surveillance is generally not a technology problem, it is a policy problem. If the government wants to surveil every citizen's movements, it can put a camera on every street, regulate that every car has a GPS and network connection that reports its movements, have face recognition on every train and bus, and require government ID to buy a ticket, with the purchase sent to a government database. When the price of mass surveillance went down, the question of using it became a policy question.
> Meredith Whittaker, president of the Signal Foundation, gave an interesting talk at NDSS 2024 titled "AI, Encryption, and the Sins of the 90s".
The lame claim that DJB is tearing to shreds in TFA is quite shocking coming from a senior manager at an institution that works on strong crypto. Really shocking. Is she just clueless?
There was a huge chilling effect on both product and protocol design. In the 90s I had to fill out a form and submit it to RSA in order to get a copy of their library, which I eventually got after waiting 6 months, but I had to agree not to redistribute it in any way.
Efforts to design foundational cryptographic protocols were completely hamstrung by the spectre of ITAR and the real possibility that designs would have to be US-only. Right around the time that the US gave up, the commercial community was taking off, and it wasn't interested in further standardization except where it created moats for its business, which is why we're still stuck in the 90s as far as the network layer goes.
Would be a good day to have that enshrined in case law, maybe the US government would let me work on rocket GNC if code can’t be export controlled at all
I haven't seen the talk, but it sounds plausible to me: Technical people got strong crypto so they didn't worry about legislating for privacy.
We still have this blind spot today: Google and Apple talk about security and privacy, but what they mean by those terms is making it so only they get your data.
> Technical people got strong crypto so they didn't worry about legislating for privacy.
The article debunks this, demonstrating that privacy was a primary concern (e.g. Cypherpunk's Manifesto) decades ago. Also that mass surveillance was already happening even further back.
I think it's fair to say that security has made significantly more progress over the decades than privacy has, but I don't think there is evidence of a causal link. Rather, privacy rights are held back because of other separate factors.
As you point out, decades ago privacy was a widespread social value among everyone who used the internet. Security through cryptography was also a widespread technical value among everyone (well at least some people) who designed software for the internet.
Over time, because security and cryptography were beneficial to business and government, cryptography got steadily increasing technical investment and attention.
On the other hand, since privacy as a social value does not serve business or government needs, it has been steadily de-emphasized and undermined.
Technical people have coped with the progressive erosion of privacy by pointing to cryptography as a way for individuals to uphold their privacy even in the absence of state-protected rights or a civil society which cares. This is the tradeoff being described.
> demonstrating that privacy was a primary concern (e.g. Cypherpunk's Manifesto) decades ago. Also that mass surveillance was already happening even further back.
How does that debunk it? If they were so concerned, why didn't they do anything about it?
One plausible answer: they were mollified by cryptography. Remember when it was revealed that the NSA was sniffing cleartext traffic between Google data centers[0]? In response, rather than campaigning for changes to legislation (requiring warrants for data collection, etc.), the big tech firms just started encrypting their internal traffic. If you're Google and your adversaries are nation state actors and other giant tech firms, that makes a lot of sense.
But as far as user privacy goes, it's pointless: Google is the adversary.
I think it's a bit dismissive to claim that "they didn't do anything about it", just because you're not living in a perfect world right now.
As one prominent example, the EFF has been actively campaigning all this time: "The Electronic Frontier Foundation was founded in July of 1990 in response to a basic threat to speech and privacy.". A couple of decades later, the Pirate Party movement probably reached its peak. These organizations are political activism, for digital rights and privacy, precisely by the kind of people who are here accused of "doing nothing".
In a few decades, people will probably look back on this era and ask why we didn't do anything about it either.
Sure, that line of thinking makes sense, but I do not understand the alternative. Are you saying that if we (the users) got new legislation (e.g., requiring warrants), then big tech wouldn't do mass surveillance anymore?
Yes, I think if there were laws that forbid mass data collection by private companies, or assessed sufficiently high penalties in the case of a breach (such that keeping huge troves of PII became a liability rather than an asset) then big tech firms would largely obey those laws.
The missed opportunity was to provide privacy protection before everyone stepped into the spotlight. The limitations on RSA key sizes etc (symmetric key lengths, 3DES limits) did not materially affect the outcomes as we can see today. What did happen is that regulation was passed to allow 13 year olds to participate online much to the detriment of our society. What did happen was that business including credit agencies leaked ludicrous amounts of PII with no real harm to the bottom lines of these entities. The GOP themselves leaked the name, SSN, sex, and religion of over a hundred million US voters again with no harm to the leaking entity.
We didn't go wrong in limiting export encryption strength to the evil 7, and we didn't go wrong in loosening encryption export restrictions. We entirely missed the boat on what matters by failing to define and protect the privacy rights of individuals until nearly all that mattered was publicly available to bad actors through negligence. This is part of the human propensity to prioritize today over tomorrow.
> What did happen is that regulation was passed to allow 13 year olds to participate online much to the detriment of our society.
That's a very hot take. Citation needed.
I remember when the US forced COP(P?)A into being. I helped run a site aimed at kids back in those days. Suddenly we had to tell half of those kids to fuck off because of a weird and arbitrary age limit. Those kids were part of a great community, had a sense of belonging which they often didn't have in their meatspace lives, they had a safe space to explore ideas and engage with people from all over the world.
But I'm sure that was all to the detriment of our society :eyeroll:.
Ad peddling, stealing and selling personal information, that has been detrimental. Having kids engage with other kids on the interwebs? I doubt it.
Kids are not stupid, though. They know about the arbitrary age limit, and they know that if they are under that limit, their service is nerfed and/or not allowed. So, the end effect of COPPA is that everyone under 13 simply knows to use a fake birthdate online that shows them to be over the limit.
Sure, it's one of the many rules that's bent and broken on a daily basis. Doesn't make it any less stupid. And it falls on the community owner to enforce, which is doubly stupid, as the only way to prove age is to provide ID, which requires a lot of administration, and that data then becomes a liability.
I was one of those kids at one point. In meatspace we have ways to deal with it and online we do as well. Of course if there is no risk to a business then they will put no resources into managing that risk.
Ah, to be 13 and having to lie about being 30 to not be banned from some game. So later you can be 30 and lie about being 13 to be able to play without too many ads.
COPA [0] is a different law which never took effect. COPPA [1] is what you're referring to.
> Ad peddling, stealing and selling personal information, that has been detrimental.
I agree and what's good for the gander is good for the goose. Why did we only recognize the need for privacy for people under an arbitrary age? We all deserve it!
>Ad peddling, stealing and selling personal information, that has been detrimental.
So we agree on this part.
> What did happen is that regulation was passed to allow 13 year olds to participate online much to the detriment of our society.
My claim is that if "we" hadn't allowed 13 year olds to sign away liabilities when they registered on a website, there would be fewer minors using social media in environments mixed with adults. More specifically, guardians of minors would be required to decide whether their kids should have access, and in doing so would provide the correct market feedback to ensure that sites of great value to minors (education resources being top of mind for me) would receive more market demand. At the same time, social platforms would have less impact on children, as there would be fewer kids participating in anti-nurturing environments.
>> Having kids engage with other kids on the interwebs? I doubt it.
Unless those kids aren't interacting with kids at all, but instead pedos masquerading as kids for nefarious reasons. Which yes, has been VERY detrimental to our society.
Nah. I'm not buying it. What's the rate of kids interacting with pedos instead of other kids?
Knee-jerk responses like yours, and "what about the children"-isms in general are likely more detrimental than actual online child abuse. Something about babies and bathwater.
I remember routinely clicking on some checkbox to say I was over 13 well before I was actually over 13. I'm sure most of the kids who actually cared about being on your site were still on it after the ban.
This is a good article, and it thoroughly debunks the proposed tradeoff between fighting corporate vs government surveillance. It seems to me that the people who concentrate primarily on corporate surveillance mostly want government solutions (privacy regulations, for example), and eventually get it in their heads that the NSA are their friends.
I think the social element is one of the roots of the problem.
Basically, people don't understand privacy, and don't see what is going on, so they don't care about it. Additionally, most privacy intrusions are carefully combined with some reward or convenience, and that becomes the status quo.
This leads to the people who stand up to this being ridiculed as tinfoil hat types, or ignored as nonconformist.
Everything after that is just a matter of time.
Once my wife was ill and at home, I was at work. I wondered how she was doing so I looked at Home Assistant and saw the hallway motion detector was triggered, the toilet fan shortly after. I saw in Adguard that some news sites were accessed. Then a spike in gas usage and a steep temp increase in the shower followed by a 1 min overall power usage of 2500 W, probably she made tea. She turned on the living room Sonos. So I guess she was doing relatively well.
I showed her all this, and joked about how I'd make a "Wife status tile" in Home Assistant.
All of a sudden she understood privacy.
Yes. The weirdest example of this ( and most personally applicable ) is the DNA data shared with 23andme and the like. I did not subscribe to this. Neither did my kids ( or kids of the individual who did subscribe I might add ), but due to some shared geography and blood ties, whether I want to or not, I am now identifiable in that database.
To your point, there is something in us that does not consider what information could do.
If you have nothing to hide what are you worried about? Or if you are not planning to be a criminal what are you worried about?
I am 100% not serious and do not believe either statement above. I sadly am in the same boat as you and had a blacksheep of a brother who did some sort of crime and as a condition had his DNA taken so I by default am in the system as well.
I never could understand why people would willingly offer their DNA to companies that even if they are not selling that data sooner or later could have that data leak and the consequences could mean being able to afford life and medical insurance or not.
> I never could understand why people would willingly offer their DNA to companies that even if they are not selling that data sooner or later could have that data leak and the consequences could mean being able to afford life and medical insurance or not.
I’m the odd one out on this thread but I just… don’t see why it’s a big deal? All the consequences of my dna leaking seem so extremely theoretical and unlikely that I’m willing to take the risk in exchange for a few minutes of fun reading a report about where my ancestors came from.
This is always framed like people who willingly surrender privacy must not know better or be uneducated about the harms but I think it’s fair for people to just decide they don’t evaluate the harms as very serious.
The example you gave about health insurance is implausible because it’s illegal in the US and I assume other developed countries for insurers to charge different amounts for health coverage based on pre-existing conditions. It strikes me as very, very paranoid to worry that someday my DNA might leak, and there’s something bad in it, and the law will change such that insurers can abuse it, and I for some reason won’t have a job that gives me health insurance anyway. That’s a lot of ands making the probability of that outcome very small.
> The example you gave about health insurance is implausible [...]
See [1].
From [1]: > GINA focuses only on one line of insurance—health; it does not prohibit other insurances—life , disability, long-term care (LTC), auto, or property—from using genetic information [...] in 2020, [...] Florida became the first US state to prohibit life, LTC, and disability insurers from using genetic test results to set premiums or to cancel, limit, or deny coverage
To me that means you are not safe.
> and there's something bad in it
This is just gambling. If enough people's DNA is out there, you will see the whole-population rate for conditions. You might consider it OK to be unexpectedly unable to buy long-term disability insurance because you have a 50x greater risk for YYYY than the general population.
> [...] and I for some reason won’t have a job that gives me health insurance anyway
This is an extremely privileged attitude. This part seems to imply that if you get very ill you *must* continue to work in order to maintain your coverage. Even a highly paid SWE can be laid low by carpal tunnel syndrome.
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC9165621/
While the stuff about disability and LTC insurance is slightly concerning, the part about life insurance isn't. I've never seen any convincing evidence that life insurance is anything but a big scam. The only time it seems to make any sense is if you're pretty sure you're going to die very soon and take out term life insurance, but this seems to require either the ability to see the future, or a plan to hire someone to kill you so your family gets the insurance money.
Why auto or property insurance would be affected by your DNA I can't even begin to imagine.
Do you have a disposition making you more likely to end up in an auto accident? Can some other correlation be drawn which is not genetic per se but works out to some higher-risk social stratum in aggregate? You never know. The power imbalance is great: they won't tell you why you got your score, and with enough machine learning they probably couldn't even if they wanted to.
Term life insurance is not a scam if you have dependents. It’s offloading the potentially severe consequences to someone else if you’re the primary wage earner and die, during a defined period of time. And it’s generally inexpensive.
Think ‘normally I’d be working for another 20 years, would buy a house, send kids to college, etc. but I just got diagnosed with terminal cancer and now my kids are totally screwed.’.
Whole life insurance is a scam.
<< Whole life insurance is a scam.
I think I know where you are going with this, but could you elaborate? Is the objection here based on math (whole life is more expensive than term, so it is not cost-effective? Because otherwise you are simply paying a premium for another benefit) or something else?
It is term life + an annuity in disguise, with worse returns. That is also why it is the dominant product that life insurance sales folks try to sell. Because term life is well regulated and understood, so not high margin.
They generally get really disappointed when you buy a standard term life policy, but they’ll still sell it to you because money is money.
This stuff still seems frankly theoretical. I finally opted into long-term disability insurance after using the maximum short-term twice in the span of two years because of spinal degeneration in my late 30s. You have to agree to a medical exam and send records to apply for this insurance anyway, and in spite of trying to get it specifically because I'd used up the max short-term, and I am seemingly quite a high risk to actually become disabled, I was still approved.
In practice, in talking to co-workers also applying for the same things, the only people who ever got denied were all obese.
This is all setting aside that, assuming somewhat symmetric distributions of genetically determined traits, half of all people will have above average genetics. The conversation on the Internet always seems to fixate on people being denied coverage or charged more, but that seems to assume pricing models are just plain malicious, in which case they could charge more and deny you anyway, with or without data. Assuming they're actually using the data and building real predictive models, half the population would benefit from insurance companies having more data.
All that said, I would still never submit data to a company like 23andme, and would also never allow the police to have camera feeds of my house, even though I'm extremely confident they would never find a reason to arrest me. It's creepy, feels invasive, and I just don't want it.
> All the consequences of my dna leaking seem so extremely theoretical and unlikely that I’m willing to take the risk in exchange for a few minutes of fun reading a report about where my ancestors came from.
That's one of the things I've found odd about these discussions. Most of the concern seems to be about very theoretical things that we don't see in reality. On the other hand, the actual harm I'm seeing from mass surveillance is the fact that social media mobs often come through someone's life and try - often successfully - to ruin them.
The way things currently stand, the fact that I'm unable to delete Hacker News comments is much more of a threat than sending my DNA to 23andMe.
Full ACK. HN deletion policy is very bad and unfriendly.
<< The example you gave about health insurance is implausible because it’s illegal in the US and I assume other developed countries for insurers to charge different amounts for health coverage based on pre-existing conditions.
As phrased, I am unable to comment as to whether that statement is accurate, but I will go with it for the sake of the argument.
I chuckled a little, because that one phrase immediately reminded me of just how much political capital was spent to even allow 'pre-existing conditions' to be removed as a factor in denying coverage.
What exactly makes you think that law cannot be changed?
Changing the law is extremely difficult in the US because of the gridlocked-by-design political system, so I think it's unlikely. Changing it would also be extremely unpopular.
Of course it could happen. But even if it did, all the other unlikely events I listed would still have to happen for me to be harmed. The point of my post was that me being harmed due to having given my DNA to 23&me is unlikely, not impossible. Just like it's theoretically possible a brick could fall on my head while walking outside, but I still don't wear a helmet every time I go outside.
Worrying so much about this stuff just feels to me like the tech geek version of preppers who stock their house with guns and canned food in case the apocalypse comes (which never does).
I appreciate you having the courage to go against the grain on this. I share similar views, specifically about healthcare privacy in general. It's obnoxious what lengths they go to to guard some bland info like my blood type or blood pressure. I'm not saying it should be published on a ticker at the hospital's website, but the only info they should really keep private are the things that could be used to blackmail or shame people: birth control, abortion, STDs, etc. I actually hold the unpopular opinion that HIPAA goes too far. It's "privacy theater". If the concern is health insurers dropping patients, then the agency that regulates insurance should "leak" some information in a sting operation and sue the insurers for breaking the law. We shouldn't foist that liability on IT people and allow insurance to harm people.
Read this post again but take it as a response to someone claiming in 2020 that Roe V Wade could be overturned.
Roe v Wade wasn’t a law. Actions by the Supreme Court which are unfavorable are much more likely given that there are only 9 justices, they are appointed regardless of popularity, and they have lifetime appointments.
We are playing semantics here, but the impact is about the same.
Would you accept that the decision had a weight of a law?
The discussion is you comparing the overturn of a law to the overturn of Roe v Wade. The weight is completely irrelevant because we’re discussing the difficulty of the action.
Anyone who knows basic federal government structure in the US knows court rulings are significantly easier to move quickly compared to passing real laws.
This isn’t “playing semantics”, it completely invalidates your point. Look at how well overturning obama care went to see how difficult law passing is.
<< This isn’t “playing semantics”, it completely invalidates your point. Look at how well overturning obama care went to see how difficult law passing is.
You do have a point. I disagree that it invalidates mine, but it does weaken it based on how it was originally presented. That said, we are absolutely playing semantics, because while Roe v Wade was not a law, it was a precedent that effectively kept even a consideration of law changes at bay. So it is not irrelevant, but you are correct from a purely technical standpoint.
<< Anyone who knows basic federal government structure in the US knows court rulings are significantly easier to move quickly compared to passing real laws.
Zero disagreement.
I've repeated multiple times now that my post isn't intended to be a claim that no law ever changes in the US or that nothing bad ever happens.
I'm not sure how I can make my point more clear.
<< Changing the law is extremely difficult in the US because of the gridlocked-by-design political system, so I think it's unlikely. Changing it would also be extremely unpopular.
I am thankful for this response, because it illustrates something OP pointed out directly ( as humans we mostly suck at estimating future risks ). Changing a law is sufficiently possible ( hard, but possible ). On the other hand, short of current civilization crumbling before our eyes, there is no timeline, in which DNA data already in the hands of some other entity could be put back in the bottle. Possible vs impossible ( assuming time machines can't exist ).
<< The point of my post was that me being harmed due to having given my DNA to 23&me is unlikely, not impossible. Just like it's theoretically possible a brick could fall on my head while walking outside, but I still don't wear a helmet every time I go outside.
I think the reality is that we do not know for sure ( although some fun science fiction does exist suggesting it is not a great idea to let that space be unregulated ).
That said, DNA, at its core, is just information. Information by itself is neither good nor bad. However, humans come in all sorts of shapes, sizes and capacities for evil. In some humans, that capacity is rather shallow. In others, it runs very deep indeed. Evil is not a pre-requisite to become a CEO, but since humans can be pretty evil, it is just a matter of time before at least one is hardcore, kicking-puppies-for-fun type evil. If so, that one evil person can do damage, if they so choose, with the information at their disposal. And the funny part is, there is just so much information hoarded and sold these days so.. really.. it is just a matter of time.
<< Worrying so much about this stuff just feels to me like the tech geek version of preppers who stock their house with guns and canned food in case the apocalypse comes (which never does).
I will not give you a speech here, but never is a really long time. If there is one thing that a person should have picked up since 2018, it is that things can and do change.. sometimes quickly and drastically. It is not a bad idea to consider various eventualities. In fact, DHS suggests it is a good idea[1] to think about your preparedness.
You might be mocking preppers, but I did not suffer from lack of toilet paper during the pandemic.
[1] https://www.dhs.gov/archive/plan-and-prepare-disasters
Supposing that there might be imminent drastic changes to society that would make it perilous for powerful 'evil' actors to know about my DNA, I don't see why those actors wouldn't be so powerful they couldn't just mandate DNA testing for everyone participating in society. My DNA can always be forcibly collected from me later on, regardless of what I do today.
Also, I don't see the relevance of 'never' here? Several lifetimes from now, there will be little to exploit in linking my DNA to whatever artifacts of my identity remain, since by then I'll just be a long-dead stranger. But then when we restrict ourselves to possibilities within my or my immediate descendants' lifetimes, we run into the issue above.
<< My DNA can always be forcibly collected from me later on, regardless of what I do today.
Hmm, would it not be an argument for nipping it in the bud now? I am confused.
<< Several lifetimes from now, there will be little to exploit in linking my DNA to whatever artifacts of my identity remain, since by then I'll just be a long-dead stranger.
Again, hmm. You state it as if it was a given, but it effectively assumes technology does not progress beyond what we have today. That said, all what I was about to type a moment ago is in the realm of pure speculation so I will stop here.
I still think you are wrong, but I should get some sleep and it seems unlikely I could convince you to reconsider.
It's all fun and games until someone finds your hair at a crime scene.
Sprinkle in a bit of 'a white toyota was spotted leaving and he also owns a white Toyota' and you're in for an adventure.
given the proven low quality of most forensic science, and especially hair identification, it's all fun and games until the police decide you're the person they want to find guilty and they let the lab know that.
Yep. "We found his hair. He matches the profile and has the same car just send it so we can close this case."
Or it could save you. Cameras being ubiquitous kept this guy from being wrongly convicted of murder[1].
[1] https://www.theguardian.com/tv-and-radio/2017/sep/29/larry-d...
The fact that he was already charged and behind bars reinforces my point.
Agree. The negative outcomes start long before you get a day in court. Anyone who's had police at their door that they didn't summon knows this.
As I've said many times before in this subthread, I'm not claiming it's impossible that something bad could happen, just that it's very improbable, so it would be irrational to let fear of it control my life.
I feel like "control your life" is kind of a strong statement. I'm on the same side as the other commenters; I don't let fear control my life, I just let it be a factor in my decision to not send $29.99 and a cell sample to a company.
Do you think people who go outside without wearing a helmet are stupid and don’t understand the risk of bricks falling from buildings? Why or why not?
People regularly get convicted on far less evidence.
Far less evidence than what?
Can you state exactly what your threat model is? As far as I can tell, it's:
* My hair happens to be at the scene of a crime I didn't commit
* The police force 23&me to tell them whose DNA it corresponds to (or the records have already leaked so the police just know)
* I also happen to have the same color and make of car that was seen leaving the scene
* Therefore, the prosecutor successfully tries and convicts me.
Being honest, what do you think is the probability of this sequence of events happening?
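To make the "lots of ands" arithmetic explicit, here's a toy calculation; every probability below is an invented assumption, and the only point is that independent steps multiply down quickly:

```python
# Toy illustration: independent "ands" multiply, so the combined probability
# collapses quickly. All numbers are invented assumptions, not estimates.
steps = {
    "my hair turns up at a crime scene I had nothing to do with": 1e-4,
    "police obtain and match my leaked DNA profile": 1e-2,
    "I also own the same colour and make of car seen leaving": 1e-2,
    "prosecutor charges and convicts me anyway": 1e-1,
}
combined = 1.0
for step, p in steps.items():
    combined *= p
    print(f"{step}: {p:g}")
print(f"combined probability under these assumptions: {combined:.0e}")  # 1e-09
```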
That last step doesn't need to happen to both:
* destroy your life
* let a real criminal get away
"in exchange for a few minutes of fun" is absolutely not worth enriching some dicks that don't care if I live, die, or get falsely accused.
All that notwithstanding, people plead guilty to crimes they didn't commit regularly because they are told it will make things easier on them, or to avoid the death penalty.
Okay. I’ll ask again. What do you estimate is the probability of my life getting destroyed because of this or a similar sequence of events?
I seriously recommend anyone that they watch this video:
https://www.youtube.com/watch?v=d-7o9xYp7eE
The TLDR is that the actual real evidence doesn't matter - what matters is if the prosecution and the police are able to convince a jury that you did the crime. Watch at least the lawyer's section until the end(the cop's section is basically - "everything he said is true").
> The TLDR is that the actual real evidence doesn't matter - what matters is if the prosecution and the police are able to convince a jury that you did the crime.
Don’t you think there’s some correlation there though? Typically, the jury is convinced by telling them what evidence the state has. It’s like saying a laser rangefinder doesn’t actually measure distance but time. Ok, but one leads to the other…
Have you seen the video?
Not recently, but also, I was responding to the TLDR.
What does this have to do with my point?
You asked:
"Can you state exactly what your threat model is?"
The threat model is that police and prosecutors need to find someone guilty. If you watch the video to the end, the lawyer explains exactly how even a genuinely, completely innocent person might be convicted of a crime, because the prosecution was able to use "some" evidence to show that you maybe were near the crime scene. They don't need definitive proof; they just need enough to sway the jury. And if you take that into consideration, then giving law enforcement any info about you can only ever work to your disadvantage.
> The threat model is that police and prosecutors need to find someone guilty.
Sure. And the chance that I'm the person they decide to pin it on because my DNA happened to be in a database is extremely low. Why are you only focusing on one of the bullet points when the point is that probabilities are multiplicative?
Well, by that standard it's not worth protecting your privacy at all; after all, the probability of any of your data being used against you is extremely low. And it's a difficult point to argue, because obviously it's true - but still, why take the risk?
>>Why are you only focusing on one of the bullet points when the point is that probabilities are multiplicative?
Because I'm saying that all of your points don't need to be true for something bad to happen to you. The probability of all your points happening is probably so close to zero it might as well be zero. But if you've given the state any information it can be used against you - like the point made in the video shows. So I think what I'm saying is that yes, your points are improbable, but not all of them have to happen for you to get screwed over.
> Well, by that standard it's not worth protecting your privacy at all; after all, the probability of any of your data being used against you is extremely low. And it's a difficult point to argue, because obviously it's true
Correct, this is exactly the point I'm making.
To recap: the point is that it is not necessarily the case that someone who sends their DNA to 23&Me is ignorant of the risks or stupid; it's entirely possible that they objectively analyzed the risks and decided it's not serious enough to care.
> but still, why take the risk?
For the same reason I walk outside without wearing a helmet to prevent me from bricks that could randomly fall from building facades. Mitigating the risk is not worth it, to me, relative to the hassle of doing so.
You miss the point that being in the frame for an extended period is, surely, incredibly stressful.
How long does it take to get to trial?
Do you think you could have that on your plate for months and suffer no negatives?
Do you prefer to find bail money, or sit in jail?
This may be a bit late in the discussion, but one of the biggest deals with allowing DNA to be put into databases which the police can trawl is that the risk of false positives increases as the database size increases. DNA profiling is a probability game, with an underlying assumption that people in the DNA database are at higher risk of being guilty than those not in it. Most convictions on DNA evidence also use partial DNA, meaning they accept an even higher risk of false positives. The current method in DNA forensics is also to use AI to combine multiple partial DNA samples into a single profile, and the false positives of those are not very well understood by judges and juries.
The legal system and the evidentiary value of a DNA profile could adapt to a world where every person's DNA is accessible, but it is a slow process and I doubt it will be done in my lifetime.
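A quick back-of-the-envelope sketch of that database-size effect; the per-comparison false-match probability below is invented for illustration, and real figures depend heavily on how many loci a (partial) profile contains:

```python
# With per-comparison false-match probability p, the chance that trawling a database
# of n innocent profiles produces at least one spurious hit is 1 - (1 - p)^n.
p_false_match = 1e-6  # assumed, for illustration; partial profiles are far weaker
for n in (1_000, 1_000_000, 20_000_000):
    p_at_least_one_hit = 1 - (1 - p_false_match) ** n
    print(f"database of {n:>10,} profiles: P(>=1 innocent match) = {p_at_least_one_hit:.3f}")
```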
<< I never could understand why people would willingly offer their DNA to companies
I can play devil's advocate and come up with some level of rationalization along the lines of 'it will help humanity cure cancer' in a handwavy kind of way, but even then one is trading future potential against a near-100% guarantee that things will change in regards to what you gave -- that is: even if a company promises today that it will not do something with the data, a day will come when that will no longer be the case.
The blacksheep example is definitely interesting though, and likely a good idea for a police drama episode (if it wasn't used already). Edit: And now that I think about it, if it were made, it would show that the good certainly have nothing to fear indeed.
You could put quotes around the first line to make it clear you're joking or being sarcastic. I was pretty taken aback by someone seemingly actually saying that, at first! :`D
I don't see the logic connecting a data leak to not being able to afford medical insurance, as that would imply the insurance company saying "hey, there's been a big DNA data leak - get that data and build profiles of the people we should up the premiums on!" Which, OK, I guess I can't believe they wouldn't do it for moral reasons, or even because it may already be illegal to do so, or because they would worry about being found out; it just seems like it would require not just thinking about it but probably also assembling a team to take advantage of it, and that would not be worth it.
<< just seems like it would require not just thinking about it but probably also they would need to assemble a team to take advantage of it and that would not be worth it.
I think it has been somewhat well established that humans will do a whole lot of nasty without much thinking as long as a higher up tells them to. And this does not touch the simple fact that the companies are not exactly entities governed by morality ( and some would argue that it is not entirely certain if humans are either ).
In short, I think you are wrong.
>whole lot of nasty without much thinking as long as a higher up tells them to.
sure
>In short, I think you are wrong.
So, your technical conclusion is that because people will do bad things when told to by authorities, they will not need to start any sort of project to integrate the dumped data into their platforms, paying multiple developers for a potentially long time - it will just magically happen because of the power of evil?
I mean, I want to believe in the power of evil as much as the next guy, but that's a bit much. And once we go back to the whole "they would need to assemble a team to take advantage of it", which maybe was not that clear at the end of my post, then again, no matter how you slice the evil, it would not be worth it.
Because assembling the team to analyze and ingest 23&me data might take a while, cost a good amount of money, might decrease in value over time (or increase in risk) for something that is probably illegal to do in the first place.
Higher ups may want it done, but probably only if it can be done immediately and doesn't cost a lot of money.
<< Higher ups may want it done, but probably only if it can be done immediately and doesn't cost a lot of money.
Since we are talking evil, I suppose money is a good place to start ( being root of it and all that ). So motivation would likely be there, but I am willing to accept your qualification of 'yes, but don't spend a lot'.
Let's look at some of that potential cost structure. The analytics part these days is not exactly expensive. Hell, HN just yesterday had a story about $4.80 of GPU time being used to do some reinforcement learning to find the 'best HN post' on 9gb of HN data[1]. That used to require more time and be more labor intensive. Edit: Yes, what we are discussing would naturally go beyond $4.80, but in terms of bang for buck it is hard to find a better time than now.
One could reasonably argue getting the right people ( right experience and knowledge ) could be prohibitive in terms of cost, but.. if you are already an insurance company, it is not exactly impossible that you already hired people with applicable experience, knowledge and skill. And if you are already paying them, maybe this little offshoot project could be sold to them as a great advancement opportunity. And if they took it too far? Well, no one told them to go overboard. After all, we at <company D> have a strict ethics policy.
If that is the case, two big pieces of the cost structure are either negligible or already part of the annual budget.
FWIW, I want to think you are right.
[1] https://openpipe.ai/blog/hacker-news-rlhf-part-1
>if you are already an insurance company, it is not exactly impossible that you already hired people with applicable experience, knowledge and skill.
my experience with large companies is that there is already an allocation of those resources somewhere, with lots of managers and such; I think it would be a major thing to move people around or to hire new people.
Sure, it's nice to believe Evil is working agile, but really it just says it's adopted some agile methods and it's super slow, as per the usual.
> I never could understand why people would willingly offer their DNA to companies that even if they are not selling that data sooner or later could have that data leak and the consequences could mean being able to afford life and medical insurance or not.
Reality: nobody cares about your DNA. It's useless for medical or life insurance companies, they can't discriminate based on the DNA by law. And if it's ever repealed, you can bet that life insurance companies will just start asking for your DNA info anyway.
DNA also doesn't provide actionable intelligence for advertisers that is worth more than a week of your purchase history or your Facebook profile.
However, DNA provides actionable intelligence for _you_. Mostly by highlighting the disease risks and other clinically-significant data (like drug metabolism speed).
All true but above all DNA is personal. It's yours to share or not. When someone related to you shares, they're sharing you and yours as well.
Let's rephrase: I never could understand why people would willingly offer their DNA *and the DNA of all those who share some part of their DNA* to for profit companies subject to data breaches, court orders, nefarious employees, etc.
> Let's rephrase: I never could understand why people would willingly offer their DNA
To get information that benefits them.
And from a practical point of view, a lot of my information has been leaked multiple times already. And I'm carrying a phone that tracks my movements to within a few meters all the time. I walk by multiple Ring cameras every day, etc.
Why care about one more privacy leak?
I’m quite sure nobody in my family is the type of HN privacy geek who would care about this, so I don’t feel bad at all.
Two books that would be good for you to read then:
"The Age of Surveillance Capitalism" - Long and details but extremely thorough. A must read for anyone with that excuse "I have nothing to hide".
https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capita... -
"Stand Out of Our Light " - Similar message as "Surveillance Capitalism" - from an ex-Google'r - but without the depth and breadth. Not as heavy but packs a near similar punch.
https://www.cambridge.org/core/books/stand-out-of-our-light/...
p.s. Not feeling bad and/or not recognizing the issue is a symptom of the problem. Yeah, ironic.
Not GP but I despise this argument because the definition of a criminal depends who you're asking: in some countries, doctors giving warnings about an impending pandemic could be criminals.
Edit: I was too fast on the comment button and didn't read until the end.
Privacy also seems to be one of those things people claim to care about, but in fact are actually reluctant to do anything about. Easy, low-tech privacy measures:
- leave your phone at home most of the time.
- don't buy a smart TV or other smart devices
- don't use social media
- etc.
None of these measures are bullet-proof, but they are relatively low-cost and don't require much expertise. These are much, much more likely to be things consumers complain about than things that consumers are actually ready to do something about. I think it's clear that consumers ALSO do not understand privacy, but I'd also suggest that they don't care very much. If they cared, there would be more of a market for privacy.
Re: Smart TVs. An old laptop running a browser with an ad blocker and some fancy controller setup (perhaps a wireless multi-button mouse or a specialized remote control) can get you way more privacy (and the sanity of a mostly zero-ad experience, just the occasional in-video affiliate ad) and makes whatever you watch a lot smarter, since you are the one doing the searching/surfing rather than the advertiser-funded smart TV channels that are specifically smart at advertising to you for profit. You can hook up an HDMI cable to a bigger screen/projector if you're playing stuff for lots of people.
"Don't buy a smart TV" is stupid out of touch and tonedeaf advice.
There are no non-smart TVs. You cannot buy one. Your only alternatives are to not buy a TV period, or go to great lengths to firewall your new smart TV.
Yes, non-smart displays exist, but they are not sold to consumers, nor at a price consumers can afford.
This is one of those prime examples of how the idea of "vote with your wallet" is a fantasy. Consumers are not in any way in control of the market.
There is no possible way to protest smart TVs when the only options for a new TV include spyware. Your only possible move is to not participate in the market, which then summarily ignores you.
Similarly, existing without a smartphone in today's society is largely not possible. You can't even park in many cities without an app.
I think it's clear that you don't understand the problems being discussed and are just blithely assuming that deflecting blame onto individuals is a reasonable position. It isn't. It's moronic and unconsidered.
Just as an addendum, I think you can still vote with your wallet, even in a world where there are truly no dumb TVs. The vote would be "do not buy a TV." It's an option, and isn't even that big of a deal. As another commenter noted, you can watch TV on your laptop with an ad-blocker, but you can also just read a book. I don't mean this to be rude or combative -- I'm quite serious here. TVs are luxury items in the narrow sense that they're not necessary for anything; they're just leisure. Because of that, it should be possible to completely avoid it. Take up hiking, learn the guitar, read more books, join a social club, etc. No one needs a smart TV.
Crucially, someone who owns a smart TV has implicitly made a claim "the entertainment of TV is more important to me than privacy." That's all I'm saying. I explicitly did NOT mention vehicle privacy because most people do not have a choice about whether they own a vehicle. They need it for work, child care, etc. It is possible to avoid smart vehicles, but it's getting more difficult, and I suspect it will be impossible in the future. (even with a dumb car, there can be license plate scanners in a lot of locations)
> Yes, non-smart displays exist, but they are not sold to consumers, nor at a price consumers can afford.
Bollocks. They're easy to find and are not terribly expensive. They're just not at Walmart or Best Buy. Example with 5 seconds of searching
https://www.amazon.ca/Samsung-Business-QE43T-Commercial-LH43...
It's cheaper than the 55'' I bought a few years ago, is 4K, which mine isn't, and is roughly on par with what's for sale on the Smart TV side of things.
I just bought a non-smart TV at a yard sale for $10. I wasn't even out looking, it just happened to be there. It works great.
Much scarier, there are "smart monitors" coming which are just computer monitors but will display ads to users. Once that happens, and is wholly unavoidable, I'm honestly finished with computers for good.
As an aside, I have a very clear understanding of privacy issues. If there's a particular issue you'd like to dig into, I'd be happy to talk it out with you. I'll bet we agree more than you think.
But isn’t that the case because no one cares? If the demographic of people purposefully wanting non-smart TVs was large enough, someone would step in and offer a non-smart TV to make some money. The only reason this doesn’t happen is that it wouldn’t work because not enough people would buy it. Basically, the market already anticipates how people would vote with their wallets and has determined that smartdevice-haters are so fringe that they don’t matter.
> most privacy intrusions are carefully combined with some reward or convenience
...ie people are making a conscious choice based on what they value?
Do you think people make a conscious choice to share their location thousands of times a day forever because they used the department store app to find the men's section one day?
I think most people don't care, or at least don't care enough to make a different decision.
Yeah that's why everyone is smoking and drinking.
Downside of trading privacy for security: anything that makes a network connection creates metadata about you, and the metadata is the real danger for analyzing your social connections: https://kieranhealy.org/blog/archives/2013/06/09/using-metad...
The problem isn't about the big corporations themselves but about the fact that the network itself is always listening and the systems the big corporations build tend to incentivize making as many metadata-leaking connections as possible, either in the name of advertising to you or in the name of Keeping You Safe™: https://en.wikipedia.org/wiki/Five_Eyes
Transparent WWW caching is one example of a pro-privacy setup that used to be possible and is no longer feasible due to pervasive TLS. I used to have this kind of setup in the late 2000s when I had a restrictive Comcast data cap. I had a FreeBSD gateway machine and had PF tied in to Squid so every HTTP request got cached on my edge and didn't hit the WAN at all if I reloaded the page or sent the link to a roommate. It's still technically possible if one can trust their own CA on every machine on their network, but in the age of unlimited data who would bother?
Other example: the Mac I'm typing this on phones home every app I open in the name of “““protecting””” me from malware. Everyone found this out the hard way in November 2020 and the only result was to encrypt the OCSP check in later versions. Later versions also exempt Apple-signed binaries from filters like Little Snitch so it's now even harder to block. Sending those requests at all effectively gives interested parties the ability to run a “Hey Siri, make a list of every American who has used Tor Browser” type of analysis if they wanted to: https://lapcatsoftware.com/articles/ocsp-privacy.html
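(For what it's worth, the hosts-file mitigation people usually reach for looks like the lines below; whether recent macOS versions still honour it for Apple's own daemons is exactly the Little Snitch caveat above, so treat it as a sketch, not a guarantee.)

    # /etc/hosts -- null-route the Gatekeeper OCSP check
    0.0.0.0    ocsp.apple.com
    # newer systems reportedly use a different endpoint, so the list may need updating
    0.0.0.0    ocsp2.apple.com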
One man's meta is another man's data. The classification of 'data' and 'metadata' into discrete bins makes it sound like metadata is somehow not also just 'data'.
If every morning I got in my car and left for work and my neighbor followed me, writing down every place I went, what time I got there, how long I stayed, and the name of everyone I called, it would be incredibly intrusive surveillance data, and I'd probably be somewhat freaked out.
If that neighbor were my cell phone provider, it would be Monday.
What we allow companies and governments to do (and not do) with this data isn't something we can solve in the technical realm. We have to decide how we want our data handled, and then make laws respecting that.
"One man's meta is another man's data."
And with that, thanks to you, today I am a bit smarter than yesterday.
Thank you very much for that phrase, the rest of your post is a very good example for the layman, but that phrase should be the subtitle of a best selling privacy book.
> If every morning I got in my car and left for work and my neighbor followed me, writing down every place I went, what time I got there, how long I stayed, and the name of everyone I called, it would be incredibly intrusive surveillance data, and I'd probably be somewhat freaked out.
It's not "surveillance data," you are in a public place and have no expectation of privacy. It's only through such neighbourhood watch and open-source intelligence initiatives that our communities can be kept safe from criminals and terrorists.
Why are you so protective of your goings-on and the names of everyone you call? Are you calling terrorists or engaging in illicit activity at the places you visit? What is it that you have to hide?
I would actually take the premise of (national) security even further and extend collection to not only metadata, but data as well. Further, these capabilities should be open-sourced and made available to all private citizens. Our current law enforcement systems are not powerful enough, nor do they move quickly enough to catch criminals - by the time sufficient information has been gathered on a suspect, it may already be too late.
>What is it that you have to hide?
An argument so clichéd it has its own Wikipedia page[1]. In the US, we currently have a presidential candidate from a major party threatening harm to people based on their political, social, and biological qualities, which outsiders often determine by inference from data such as who people are in contact with and where they travel.

Further, I would argue the need for individual privacy is innate in humans; as every child matures they find a need to do things without their parents over their shoulder, even without their peers, no matter how innocent the activity, and it is a need that does not vanish in adulthood. We generally agree that removing a child's bedroom door as punishment is abusive because it robs them of privacy. The same goes for installing monitoring software on your partner's phone, or a GPS tracker on their car. Privacy means we are able to be ourselves without our lives being scrutinized, criticized, judged, rated, shamed, blamed, or defamed by every person on the street. I close the door when I defecate, I draw the blinds when I copulate, I don't tell people my passwords, and I don't scan my grocery receipt to earn points, because there are some things other people don't need to know.
[1] https://en.wikipedia.org/wiki/Nothing_to_hide_argument#Criti...
[flagged]
Lol. So who does "deserve" privacy, your highness? I'm guessing you do, at the very least, since you seem so judgemental about those with an "incessant, insatiable need to broadcast their lives 24/7", which you presumably do not have.
You're pretty judgy and seem incapable of even conceptualising a nuanced position on this topic. And your take on Assange, Snowden and Appelbaum is clearly first order trolling.
Unless you forgot the /s at the end of your whole comment.
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." [0]
[0] https://en.wikiquote.org/wiki/Benjamin_Franklin#1750s
I'm not sure how you mean that, but I take it to be kinda opposite to the position you're espousing?
I.e. you want people to give up some essential Liberty (privacy[1]) in return for some increased Safety (from "criminals and terrorists").
So, that Franklin quote seems pro-privacy, to me.
But maybe I misunderstand you ::shrug::
[1] that is: freedom to live one's life without fear of the constant scrutiny and judgement of others
Observing someone by chance in public is protected. Stalking them is generally a crime, although jurisdictions differ on whether surveillance alone (without contact or a further purpose) counts as stalking. Generally speaking, if someone is following you around everywhere, a reasonable person will start to fear for their safety, and criminal codes seek to protect people from that.
While not as immediately threatening, realizing that a company is maintaining a large dossier about you may cause some concern about how they will use it (obviously in ways at odds with your interest in going about your life undisturbed). It is reasonable to be concerned about that usage and intent.
Imagine you are a baker in late-1930s Germany. You deliver bread every day to a synagogue. Imagine cell phones and apps existed. The Nazi government could now, with little effort, see that you went to a synagogue every day for the last couple of years, so they decide to send you to a camp, even though you are not a Jew. Metadata is not dangerous, you think?
There's no need for hypotheticals - we know very well that the Nazis used census data recording a person's religion to find and kill Jews (and others). At the time, I imagine giving this data to the state felt like no big deal, but how could they know it would lead to their deaths?
Also no need to go so far back. People are being killed based on metadata right now. Even Michael Hayden (former NSA and CIA director) confirmed this.
> Why are you so protective of your goings-on and the names of everyone you call? Are you calling terrorists or engaging in illicit activity at the places you visit? What is it that you have to hide?
Basic political associations can become problematic when people get riled up. See “the red scare”.
We’re not far from that again, with people cutting out major relationships based on support for or disdain of Trump.
I am struggling to comprehend how allowing everyone between you and the services you use to view not only the metadata but the content as well could possibly be considered privacy-preserving.
It’s kind of an unorthodox take, but I’m guessing the idea is that if corporations perceived that they didn’t have secure ways to protect data, they would refrain from gathering as much of it, because they would be afraid of the liability. And by the way, the perception/reality distinction is important here in supporting this theory.
I disagree. What makes corporations afraid of liability are laws enforcing liability. We never got those, and I don’t see why weaker encryption would’ve created them. We could, for example, have meaningful penalties when a company leaks passwords in plain text.
I didn't say there weren't other things that make companies worry about liability. Not sure you read what I said though.
Because you're comparing it wrong!
In your mind, SSL leaks nothing and non-SSL leaks everything.
Make a list of everything you can infer from an SSL connection without the cert. Then add on top of that everything that people with the cert, or with control over CAs, can see.
When you're done, you'll notice SSL is not as perfect as you think, and the extra requests and lack of caching compound all of that.
> Make a list of everything you can infer from an SSL connection without the cert
This exactly, and not just connection but connections, plural. If the network observes my encrypted connection to ocsp.apple.com followed by another encrypted connection to adobegenuine.com, an analyst could reasonably assume I'd just opened an Adobe Creative Suite app. Or if they see ocsp.apple.com followed by update.code.visualstudio.com, I probably just opened VSCode. Auto-updaters are the same kind of privacy scourge and every additional connection makes it worse.
Citations:
- https://helpx.adobe.com/enterprise/kb/network-endpoints.html
- https://code.visualstudio.com/docs/setup/network
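To make the point concrete, here's a toy sketch of the kind of inference an on-path observer could run over nothing more than (timestamp, hostname) pairs pulled from DNS or TLS SNI; the hostname-to-app table below is illustrative, not exhaustive:

    # Toy traffic-analysis sketch: guess which app was launched purely from
    # which telemetry/update hosts were contacted, never touching payloads.
    from collections import defaultdict

    HOST_HINTS = {
        "ocsp.apple.com": "a macOS app was just launched (Gatekeeper check)",
        "adobegenuine.com": "probably an Adobe Creative Suite app",
        "update.code.visualstudio.com": "probably Visual Studio Code",
    }

    def infer(events):
        """events: iterable of (timestamp, hostname) tuples seen on the wire."""
        guesses = defaultdict(list)
        for ts, host in events:
            for suffix, meaning in HOST_HINTS.items():
                if host == suffix or host.endswith("." + suffix):
                    guesses[ts].append(meaning)
        return dict(guesses)

    # A Gatekeeper check followed seconds later by the VS Code updater:
    print(infer([(1700000000, "ocsp.apple.com"),
                 (1700000003, "update.code.visualstudio.com")]))

That two-line log alone is enough to guess "opened VS Code on a Mac", which is the whole problem.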
> Downside of trading privacy for security:
> Transparent WWW caching is one example of a pro-privacy setup that used to be possible and is no longer feasible due to pervasive TLS.
What? You're kidding. If we didn't have pervasive TLS we'd have neither privacy nor security. Sure, a caching proxy would add a measure of privacy, but not relative to the proxy's operator, and the proxy's operator would be the ISP, and the ISP has access to all sorts of metadata about you. Therefore pervasive TLS did not hurt privacy, and it did improve security.
You're making the same mistake as Meredith Whittaker. It's a category mistake.
> Other example: the Mac I'm typing this on phones home every app I open in the name of “““protecting””” me from malware.
What does this have to do with secure cryptography? That's what TFA is about. You are conflating security as-in cryptography with security as-in operating system security. More category errors. These are serious errors because if we accept this nonsense then we accept weak cryptography -- that's DJB's point.
[flagged]
It's my work computer — not my choice. At home I use a Corebooted 51nb neo-ThinkPad.
Oh damn, that escalated quickly. Nice! How is that 51nb board? I totally forgot they were a thing. I have many ThinkPads but unfortunately am at the cap of what's coreboot-able (X230)... It's sadly getting to the point where the web, of all things, is gradually creeping out of reach.
It is the best computer I have ever used but parts availability can be an issue. For example I had the eDisplayPort flex-PCB go bad in my X210 and had to homebrew my own replacement. I have an entire spare machine just in case, since I couldn't just go out and buy one if I needed it Right Now.
Nice, that's cool to hear (best computer), but yeah I suppose it has some inherent "rarity" to it. One of the nice things about the ThinkPads is their popularity/"ubiquitousness" (is that a word?) - I have like, five X230's at this point! So easy to find an amazing deal on one if you're patient. But yeah, these are really starting to show their age. Still fine to use overall, but it can be pretty limiting at times.
I thought Macs were better for privacy?
Ignore the downvotes - you raise a point worth discussing.
Apple spent a good amount of time and money putting out marketing to convince people that their brand emphasizes privacy. This was part of a brand recovery effort after quite a few folks' intimate photos were leaked out of iCloud.
But it's become evident, as in the post you replied to, that they aren't as privacy-friendly as their marketers want you to believe. You should consider alternatives for your computing needs - specifically, open-source software which is not in control of large corporations.
Apple has been focusing on privacy as a part of their core offering since long, long before the iCloud photo leak. Them being imperfect is not a sign that they are willfully malevolent actors.
The post they replied to doesn’t make anything “evident”; it just claims, without basis, that if you want privacy you should stop using Apple products.
I mean sure in an absolute sense that’s true. Using Apple products gives them some information about you. But relatively speaking, Apple tends to collect significantly less data about its users than its competitors: Meta, Google, Microsoft, et al.
I don't find the "not as bad as" argument to be a convincing one. Given that users can run hardware and software that doesn't give out any information about them, it seems defeatist to only consider software which does give out information. A lot of people have spent a lot of time and effort to make software like Linux and LineageOS available and easy; choosing the least-bad of bad options makes no sense when actual good options are available.
The OP of this thread gave a specific example of Apple circumventing user privacy in a way that I would find unacceptable. "Replied to" was not the best phrasing for that, I admit.
Users can also live in a shack in the woods which is even more privacy-preserving.
Presumably just like most users don’t want to do that, most users also don’t want to learn enough to admin a Linux system, run their own domain and email server, and keep a NAS at home as their “cloud” storage.
If you assume that users want someone else to handle this stuff for them, then yes, “not as bad as” is a great argument.
Wow, nice analogy - you really think that using Linux is like living in a shack in the woods, huh. It's actually very easy to use these days. Have you tried it?
I’ve used Linux for the last twenty five years, both as my daily driver personal desktop and as an admin.
My point is that if you want to chase privacy absolutism, a shack in the woods is where you inevitably end up. If you accept that people want to use consumer-focused goods and services that come with some privacy cost—as basically fucking everyone but a minute rounding error does—there are alternatives that are better than others. And so it’s absolutely worth comparing those alternatives.
If you want to run Tails on RISC V, route all your traffic through Tor, and conduct all your transactions with Monero then more power to you.
I don't accept that, actually. Since you like exaggerated analogies, here's one for you:
Imagine a world where, in the past twenty years, big companies started making transparent bathroom doors. And thanks to marketing, media, celebrity endorsements etc., transparent bathroom doors have become the new norm. It worked, and most bathroom doors are now transparent or translucent.
I'm one of the people pointing out that we can get doors made of wood, and it's pretty easy to do so.
And you're the guy saying "that's so weird! Basically fucking everyone uses some degree of transparency on their bathroom doors, therefore it's normal and good, and should continue to be encouraged. Besides, this one company makes translucent bathroom doors - that's better, right?"
It is a matter of perspective. Of all Mac users, the number who want to hide their app usage is practically zero compared to the number downloading free wallpaper apps or games who need to be protected from their own actions. For the second group, an OS that monitors activity and blocks potentially harmful apps is more secure.
This is why I buy AAPL stock and not Apple™ products.
Better than what, is the question.
Where people stand on this question ultimately lies in whether they trust what Apple says. For example, Gatekeeper / OCSP, the service mentioned in the GP. Apple says the following:
> Gatekeeper performs online checks to verify if an app contains known malware and whether the developer’s signing certificate is revoked. We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are using on their devices.
https://support.apple.com/en-us/102445
That's either true or it isn't. If it's true, then the GP comment is wrong about "Hey Siri, who is using Tor"; if it's not true, they are correct. Blocking the service using a hosts file works and does not prevent applications from opening. A case can be made that this should be even easier, via a System Preferences setting, but we come back to the same question: if you trust what Apple says about the service, making it easy to disable (and blocking a DNS entry is not especially difficult) would be foolish, because the threat landscape does include malware, and does not include Apple sharing information (they claim) they don't have about what programs users open.
If Apple is lying, or one thinks Apple is lying, then the problems do not end with Gatekeeper. They could be logging every key I type, faking E2EE through some clever obfuscated code, and so on. Blocking the OCSP server will do nothing; they can exfiltrate anything they want from an operating system which they alone control.
I happen to believe Apple's privacy claims are honest. This is based on a couple of things: primarily, privacy is a valuable (to them) and emphasized part of their brand, and any discovered breach of trust would be hugely detrimental to their bottom line. Also, there's a dog which didn't bark, in the form of an absence of whistleblowers reporting on how Apple's privacy claims are bullshit and they actually pwn everything.
TL;DR there are OSes which claim to offer more privacy than Apple, but then you're trusting ~everyone who has contributed software to those operating systems instead. I also happen to think that e.g. Qubes and Tails do improve on privacy over the macOS etc. baseline, but I can't prove that, any more than I can demonstrate that Apple isn't lying.
It is physically impossible to audit all the code we run personally. It just can't be done. So trust is a non-optional component of making decisions about privacy and security. It will always be thus.
I don't see metadata as a danger; I think it's a great compromise between police work and privacy.
Some of the requirements I see here seem crazy. I want carte blanche access to the global network of other people's computers, and I want perfect privacy, and I want perfect encryption...
Yeah, no
Maybe you don’t, but for some people, it’s lethal.
https://www.justsecurity.org/10318/video-clip-director-nsa-c...
Good. I'm glad the NSA is doing its job. I don't want terrorists to feel safe while using our systems.
Keep in mind that you don't decide who's a terrorist and who isn't. You might be "glad" about the NSA doing their job as long as your definition of terrorism aligns with the government's but what if that ceases to be the case?
I'm too young to truly appreciate this, but I have spent my time going through archives of the Cypherpunk mailing list.
The one thing I always think about on HN is what some of those guys would think (or presently think) about the cultural shift among nerds and otherwise techies such that this comment is even possible.
They all projected, correctly or not, such a potentially dystopian/utopian world. And they definitely didn't agree with each other. But there was still this sense of shared belief and shared cause of generally being, to say the least, skeptical and antagonistic to the state, of the kind of formal potential for liberation in code. That things could be different.
But here we are now. Computers and what they do are no longer a source of hope or doom. They either make us money, or they help us catch ambiguous enemies.
I wish I had been around for the golden era. All that is solid melts into air.
It's no mistake that the rise of cyberpunk and postmodernism coincided with the collapse of competing ideologies to market capitalism. As Capital killed its enemies, you see belief in humanity and its ideals in art go up in smoke.
Personally, I find computers to be harbingers of doom. Not essentially, of course, but it's pretty clear at this point that we're not going to see the potential of the technology we already have realized within my lifetime; what we will see is a good deal of its predicted use to abuse people. Hell, we already see much of it.
Blaming capitalism doesn’t make any sense because it’s a different axis. The security vs privacy debate is quite old and different societies handle the trade completely independently of how capitalistic their economy is.
>completely independently
Well, certainly not completely independently.
The fact that you can make more money when people have less privacy plays a part in the decision-making process.
If only the NSA or the people designating who terrorists are vs who our allies are had such pure, pro-human intentions.
A hypothetical problem that we can tackle when (or if) it's actually a problem. Thanks for your metadata, regardless.
I'd say as soon as this becomes your problem it's too late for you to do anything about it.
Is it really a hypothetical at this point? I was under the impression that relevant cases have already been explored (to the extent that one can, given the nature of the IC). In cases like these, the moment it is actually a problem, it is likely already too late to make sensible adjustments.
>“We kill people based on metadata”
>“metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content.”
Your response to the above quotes is so short-sighted that I don't even know where to begin.
As long as it's the people you don't like dying, I guess it's cool.
Good thing the NSA is the only group in the world that has access to metadata at scale.
Transparent HTTP caching as a way to avoid leaking metadata is not pro-privacy. It only works because the network is always listening, to both metadata and message content. The reason why people worry about metadata is because it's a way to circumvent encryption (and the law). Metadata is holographic[0] to message content, so you need to protect it with the same zeal content is protected.
But letting everyone have the message content so that metadata doesn't leak isn't helpful. Maybe in the context it was deployed, where pervasive deep packet inspection was only something China wasted their CPU cycles on, your proxy made sense. But it doesn't make sense today.
[0] X is holographic to Y when the contents of X can be used to completely reconstruct Y.
How is metadata holographic? Sure, you can know when I communicated with a particular individual, and even the format and size of the message, but it doesn't include the exact message, right?
Gordon Welchman first productionized “traffic analysis” in WW2 at Bletchley Park.
When, in his retirement, he tried to write about it, it was his work on traffic analysis, more than his disclosing that the Allies had cracked Enigma, that most worried the NSA, which tried to stop him publishing.
Traffic analysis is in many ways more valuable than the contents of the messages.
https://en.m.wikipedia.org/wiki/Gordon_Welchman
I won't say that metadata isn't valuable, but I still don't think it's holographic. You can tell I WhatsApp my friend every day around noon, so we're probably talking about lunch, but you don't know that today I had a tuna sandwich.
Old thread but I think there’s a wood and trees thing here.
Traffic analysis is king because who you communicate with is a low noise signal and what you communicate is usually noise.
This is well known for police work and military intelligence etc.
It’s also true for ad sales. Ad networks want the trackers on sites so they can build up a profile of you based on metadata not the content of the pages you visit themselves.
"We kill people based on metadata."
https://www.justsecurity.org/10318/video-clip-director-nsa-c...
It's certainly powerful, but that wasn't the claim I'm asking about.
I think I agree with Bernstein that the talk is mostly incoherent about this "privacy" vs. "security" tradeoff.
However, I do want to call out his "Amazon was doing good business before 1999 and the end of the crypto wars", and "companies allocate just a small fraction of their security spend to cryptography":
* Prior to the end of export controls, Amazon was still doing SOTA cryptography
* Export controls themselves boiled down to clicking a link affirming you were an American, and then getting the strong-cryptography version of whatever it was you wanted; there were no teeth to them (at least not in software products)
* Prior to the widespread deployment of cryptography and, especially, of SSH, we had backbone-scale sniffing/harvesting attacks; at one point, someone managed to get solsniff.c running on some pinch point in Sprint and collected tens of thousands of logins. Lack of cryptographic protection was meaningful then in a way it isn't now because everything is encrypted.
I don't think he was arguing that things weren't more secure after the export controls were dropped. I feel like that's why he was arguing to drop them at the time. He's just saying that all the signs point to Amazon/internet commerce becoming a behemoth either way. So we'd just end up in the same situation wrt what the talk sees as the current state of things, but with compromised cryptography.
He was right about export controls. Nobody disagrees with him. I don't even think Meredith Whittaker does. But many times, I've come across a folk belief that strong cryptography was rare in North America before export controls were eliminated; it was not.
Aside from everything else, I don't understand what Whittaker's point was; she seemed to ultimately be advocating for something, but I can't understand what, exactly.
It's probably in the talk's last sentences:
> We want not only the right to deploy e2ee and privacy-preserving tech, but the power to make determinations about how, and for whom, our computational infrastructures work. This is the path to privacy, and to actual tech accountability. And we should accept nothing less.
But who are "we" and "whom", and what "computational infrastructure" is she referring to?
I can fill that in for you I think. The "We" and "Whom" are you, me, the arbitrary host/admin/user.
If you look at the regulatory trends developing around tech at the moment, there are a lot of pushes to slap obligations on the host to essentially toe the societal line of their geopolity. You will spy on your users. You will report this and that. You will not allow this group or that group.
This tightening acts in part to encourage centralization, which is regulable by the state, and to discourage decentralization, which is, at best, only notionally regulable.
The power of technologically facilitated networking was, prior to the Internet, in large part a luxury of the State, or of entities granted legitimacy by the State. With everyone having the potential to take their networks dark enough that State-level actors have to physically compromise the infrastructure instead of just snooping the line, the currently extant edifice of power is under threat of a bottom-up inversion.
No longer would the big boys in the current ivory tower be able to sit on high and spot threats purely from SIGINT, data processing, and storage alone. The primitive of true communications and signalling sovereignty would be in the hands of every individual. Which, the establishment would like to cordially remind you, includes those dirty terrorists, pedophiles, communists, <group you are mandated to treat as an outgroup>. So therefore, everyone must give up this power and conduct affairs in a monitorable way to make those other people stand out. Because you're all "good" people. And "good" people have nothing to fear.
You can't deplatform persona non grata from infra they've already largely built for themselves, which is a terrifying prospect to the current power structure.
It's all about control.
> The primitive of true communications and signalling sovereignty would be in the hands of every individual.
That's great and all, but how does that help with mass surveillance by big tech? How would "true communications and signalling sovereignty" shield me from Google, Facebook, Whatsapp, Twitter, etc.?
> Aside from everything else, I don't understand what Whittaker's point was; she seemed to ultimately be advocating for something, but I can't understand what, exactly.
The whole talk felt like it was gearing up to making a point but then it ended. It turned out that the point was to blame our current situation on the "sins of the 90s". To be fair, it was in the title all along so I'm not sure why I was expecting otherwise.
Well, I don't know; I think her intentions might be in that last sentence (i.e. how to deal with Chat Control etc.).
I think this article isn't considering wifi. Most early sites were pressured into using SSL because you could steal someone's session cookie on public wifi.
Without cryptography, all wifi is public, and in dense areas, you would be able to steal so many cookies without having to actually get suspiciously close to anything.
I'm guessing without crypto, we would only access financial systems using hard lines, and wifi wouldn't be nearly as popular. Mobile data probably wouldn't have taken off since it wouldn't have been useful for commerce.
I thought WiFi was somewhat secure from other clients, even if your connection is unsecured at the TCP layer, so long as they're not impersonating the hotspot. You're certainly not secure from the hotspot itself, of course.
Only if the WiFi network is password-protected, which causes connections to be encrypted. Pretty much all WiFi is password-protected nowadays -- if a cafe wants to enable public access to their WiFi, they'll write the password on the wall -- but that only became the case after Firesheep and other sniffing tools drew attention to this issue around 2010. In the old days, there were plenty of networks with no password (and hence, no encryption) at all.
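A note on what "password-protected" buys you here: under WPA/WPA2-PSK the Pairwise Master Key is derived from nothing but the passphrase and the SSID, so everyone who reads the password off the wall starts from the same key material. A minimal sketch of that derivation, with made-up credentials:

    # WPA/WPA2-PSK: the 256-bit Pairwise Master Key is PBKDF2-HMAC-SHA1 over
    # the passphrase, salted with the SSID, 4096 iterations.
    import hashlib

    def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    # The cafe and every customer who knows the wall password compute the same PMK;
    # per-session keys are then derived from it plus the (sniffable) handshake nonces.
    print(wpa2_pmk("password-on-the-wall", "CafeWiFi").hex())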
The GP specified "without cryptography", in reference to a counterfactual world where we weren't allowed to encrypt things.
> Pretty much all WiFi is password-protected nowadays
I was at Disneyland last week and stayed at one of their hotels - and all the guest Wi-Fi networks were passwordless and therefore insecure. Ditto the free WiFi at the airports at both ends; oh, and the in-flight Wi-Fi too. While walking around the park my iPhone listed a bunch of passwordless mobile hotspots too.
Are you thinking of captive-portals with logins/passwords? (E.g. Mariott/Hilton “Enter your room-number and last-name” portals) - I assume you’re aware that’s only used to authenticate after the WiFi connection is already established?
———
I really hope that I’m wrong on this, but I’m not aware of any modern WiFi standards that address this… Of course, corp/edu networks can just use RADIUS or a client certificate (which works on wired networks too).
Also, it’s surprising we still haven’t figured out getting TLS to work with home-user-grade routers’ control-panels…
> Pretty much all WiFi is password-protected nowadays
This is absolutely not true in the US. All major hotel chains have no encryption, airports do not, Starbucks doesn’t, etc.
It’s usually small businesses that opt for a WPA pass phrase because that’s easier to setup than the captive portal nonsense that all of the big companies use.
Aren’t they still just encrypted against the password itself? So if it is a public place like a coffee shop with a known password, anyone can decrypt the data?
Ah ok. I thought they were referring back to SSL in their first paragraph. Interesting, I had forgotten that WiFi networks once didn't all have passwords.
Unencrypted WiFi would be equal to broadcasting your unsecured TCP traffic for everyone to eavesdrop on.
Early domestic WiFi would use WEP ("Wired-Equivalent Privacy") which was very vulnerable: https://library.fiveable.me/key-terms/network-security-and-f...
Back in the day, we had a Firefox extension for stealing other people's cookies over WiFi.
https://github.com/codebutler/firesheep https://en.wikipedia.org/wiki/Firesheep
In a nutshell, I don't think we would have seen much change. Corporations only engage in security insofar as they are required to. We've seen that even with this "metastatic SSL-enabled growth", we've basically sold out security to the lowest common denominator, and core actors in the industry just use these security features as a fig leaf to pretend they give a single crap.
Now, would CERTAIN industries exist without strong cryptography? Maybe not, but commerce doesn't really care about privacy in most cases, it cares about money changing hands.
I don't know, they sure make sure the paper trail is shredded and shedded with the Azure Document Abo 365. When it comes to security from liability, everything is top notch.
Right: So what we need to do is make organizations liable for mishandling data.
Imagine if you could sue a company for disclosing your unique email address to spammers and scammers. (They claim it's the fault of their unscrupulous business partner? Then they can sue for damages in turn, not my problem.)
There are some practical issues to overcome with that vision... but I find it rather cathartic.
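One low-tech way to have a per-company "unique email address" to point to in the first place is a tagging scheme like the sketch below (assuming your provider supports plus-addressing or you run your own domain; the names and domain are made up):

    # Hypothetical per-company address scheme: if spam starts arriving at one
    # of these, you know exactly which company disclosed it.
    def address_for(company: str, user: str = "me", domain: str = "example.org") -> str:
        tag = "".join(c for c in company.lower() if c.isalnum())
        return f"{user}+{tag}@{domain}"

    print(address_for("Some Retailer Inc."))  # -> me+someretailerinc@example.org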
Cryptocurrency, if you accept it and its ecosystem as an industry, would certainly not exist. And as for privacy, a fairy dies every time someone praises bitcoin for being anonymous.
> And as for privacy, a fairy dies every time someone praises bitcoin for being anonymous.
And yet countless thefts have happened and had the proceeds exfiltrated via Bitcoin, and the culprits never caught.
If that's not effective/practical anonymity, I don't know what is?
A decent number have been caught and plenty of others are known but can't be held responsible because they're state supported.
"A decent number" instead of "every" kind of supports my point, though. I'm not saying you get anonymity for free, but by taking the right steps and being very careful, it's actually pretty straightforward.
How could that have been relevant for more than a few more years? The world does not end with the US. Regardless of the ban, strong crypto would have been developed elsewhere, as open source, and proliferated to the point of making continuation of the ban impossible: by ~2005 or earlier, it would have been either the US closing itself off from the global Internet, becoming a digital North Korea of sorts, or allowing strong crypto.
On that note, OpenBSD is from Canada and thus not subject to crypto export restrictions (not that I even know what such restrictions are present in the US today, if any) - https://en.wikipedia.org/wiki/OpenBSD
Popular OSes and browsers have almost entirely come from the US. If people had a choice between IE with weak crypto or Opera with strong crypto they absolutely would have chosen IE.
According to a talk by Eben Moglen (https://softwarefreedom.org/events/2012/Moglen-rePublica-Ber...), the connection between strong encryption and mass surveillance traces to a policy change by the US government. Before 2001, the policy was to repress and delay strong encryption and keep it out of public use in order to maintain the state's ability to monitor communication. After 2001, the policy changed towards mass surveillance strategies, whose methods we got some insight into from the many leaks released a decade later by people like Snowden.
The connection is interesting, but the key word I find important is policy. Mass surveillance is generally not a technology problem; it is a policy problem. If the government wants to surveil every citizen's movements, it can put a camera on every street, mandate that every car has a GPS and network connection that reports its movements, run face recognition on every train and bus, and require government ID to buy a ticket, with the record sent to a government database. Once the price of mass surveillance went down, the question of using it became a policy question.
> Meredith Whittaker, president of the Signal Foundation, gave an interesting talk at NDSS 2024 titled "AI, Encryption, and the Sins of the 90s".
The lame claim that DJB is tearing to shreds in TFA is quite shocking coming from a senior manager at an institution that works on strong crypto. Really shocking. Is she just clueless?
I used to work with the guy who was named by DJB in the crypto export case which removed the restrictions. IIRC, the NSA guy used to be his student!
One key part is that the crypto wars were around export, lest we forget "PGP Source Code and Internals".
If there was no international business, any-strength crypto would have been and could have been used.
There was a huge chilling effect on both product and protocol design. In the 90s I had to fill out a form and submit it to RSA in order to get a copy of their library, which I eventually got after waiting 6 months, but I had to agree not to redistribute it in any way.
Efforts to design foundational cryptographic protocols were completely hamstrung by the spectre of ITAR and the real possibility that designs would have to be US-only. Right around the time the US gave up, the commercial community was taking off, and it wasn't at all interested in further standardization except where it created moats for their businesses - which is why we're still stuck in the 90s as far as the network layer goes.
AFAIK the Zimmermann case was quietly dropped instead of being ruled on.
Tinfoil hat: it was dropped to prevent case law establishing that exporting code is 1A-protected.
Seems not tinfoil and rather plausibly a pragmatic decision by the prosecution.
Would be a good day to have that enshrined in case law; maybe the US government would let me work on rocket GNC if code can’t be export controlled at all.
When did local prosecutors get training in international politics and commercial interests?
Federal interests can easily tell the local prosecutor "hey, don't prosecute this, it risks setting bad precedent".
I haven't seen the talk, but it sounds plausible to me: Technical people got strong crypto so they didn't worry about legislating for privacy.
We still have this blind spot today: Google and Apple talk about security and privacy, but what they mean by those terms is making it so only they get your data.
> Technical people got strong crypto so they didn't worry about legislating for privacy.
The article debunks this, demonstrating that privacy was a primary concern (e.g. Cypherpunk's Manifesto) decades ago. Also that mass surveillance was already happening even further back.
I think it's fair to say that security has made significantly more progress over the decades than privacy has, but I don't think there is evidence of a causal link. Rather, privacy rights are held back because of other separate factors.
As you point out, decades ago privacy was a widespread social value among everyone who used the internet. Security through cryptography was also a widespread technical value among everyone (well at least some people) who designed software for the internet.
Over time, because security and cryptography were beneficial to business and government, cryptography got steadily increasing technical investment and attention.
On the other hand, since privacy as a social value does not serve business or government needs, it has been steadily de-emphasized and undermined.
Technical people have coped with the progressive erosion of privacy by pointing to cryptography as a way for individuals to uphold their privacy even in the absence of state-protected rights or a civil society which cares. This is the tradeoff being described.
> demonstrating that privacy was a primary concern (e.g. Cypherpunk's Manifesto) decades ago. Also that mass surveillance was already happening even further back.
How does that debunk it? If they were so concerned, why didn't they do anything about it?
One plausible answer: they were mollified by cryptography. Remember when it was revealed that the NSA was sniffing cleartext traffic between Google data centers[0]? In response, rather than campaigning for changes to legislation (requiring warrants for data collection, etc.), the big tech firms just started encrypting their internal traffic. If you're Google and your adversaries are nation state actors and other giant tech firms, that makes a lot of sense.
But as far as user privacy goes, it's pointless: Google is the adversary.
[0] https://theweek.com/articles/457590/why-google-isnt-happy-ab...
I think it's a bit dismissive to claim that "they didn't do anything about it", just because you're not living in a perfect world right now.
As one prominent example, the EFF has been actively campaigning all this time: "The Electronic Frontier Foundation was founded in July of 1990 in response to a basic threat to speech and privacy.". A couple of decades later, the Pirate Party movement probably reached its peak. These organizations are political activism, for digital rights and privacy, precisely by the kind of people who are here accused of "doing nothing".
In a few decades, people will probably look back on this era and ask why we didn't do anything about it either.
Sure, that line of thinking makes sense, but I do not understand the alternative. Are you saying that if we (the users) got new legislation (e.g., requiring warrants), then big tech wouldn't do mass surveillance anymore?
I think they're saying if they couldn't do cryptography they'd push for legislation.
Yes, I think if there were laws that forbid mass data collection by private companies, or assessed sufficiently high penalties in the case of a breach (such that keeping huge troves of PII became a liability rather than an asset) then big tech firms would largely obey those laws.
The missed opportunity was to provide privacy protection before everyone stepped into the spotlight. The limitations on RSA key sizes etc. (symmetric key lengths, 3DES limits) did not materially affect the outcomes, as we can see today. What did happen is that regulation was passed to allow 13 year olds to participate online, much to the detriment of our society. What did happen was that businesses, including credit agencies, leaked ludicrous amounts of PII with no real harm to the bottom lines of those entities. The GOP themselves leaked the name, SSN, sex, and religion of over a hundred million US voters, again with no harm to the leaking entity.
We didn't go wrong in limiting export encryption strength to the evil 7, and we didn't go wrong in loosening encryption export restrictions. We entirely missed the boat on what matters by failing to define and protect the privacy rights of individuals until nearly all that mattered was publicly available to bad actors through negligence. This is part of the human propensity to prioritize today over tomorrow.
> What did happen is that regulation was passed to allow 13 year olds to participate online much to the detriment of our society.
That's a very hot take. Citation needed.
I remember when the US forced COP(P?)A into being. I helped run a site aimed at kids back in those days. Suddenly we had to tell half of those kids to fuck off because of a weird and arbitrary age limit. Those kids were part of a great community, had a sense of belonging which they often didn't have in their meatspace lives, they had a safe space to explore ideas and engage with people from all over the world.
But I'm sure that was all to the detriment of our society :eyeroll:.
Ad peddling, stealing and selling personal information, that has been detrimental. Having kids engage with other kids on the interwebs? I doubt it.
Kids are not stupid, though. They know about the arbitrary age limit, and they know that if they are under that limit, their service is nerfed and/or not allowed. So, the end effect of COPPA is that everyone under 13 simply knows to use a fake birthdate online that shows them to be over the limit.
Sure, it's one of the many rules that's bent and broken on a daily basis. Doesn't make it any less stupid. And it falls on the community owner to enforce, which is doubly stupid, as the only way to prove age is to provide ID, which requires a lot of administration, and that data then becomes a liability.
If you care about something (say a child, from the guardian's perspective, or perhaps a business, from the owner's perspective) you find solutions.
I was one of those kids at one point. In meatspace we have ways to deal with it and online we do as well. Of course if there is no risk to a business then they will put no resources into managing that risk.
Ah, to be 13 and have to lie about being 30 to not be banned from some game, so that later you can be 30 and lie about being 13 to be able to play without too many ads.
COP(P?)A
COPA [0] is a different law which never took effect. COPPA [1] is what you're referring to.
Ad peddling, stealing and selling personal information, that has been detrimental.
I agree and what's good for the gander is good for the goose. Why did we only recognize the need for privacy for people under an arbitrary age? We all deserve it!
0 - https://en.wikipedia.org/wiki/Child_Online_Protection_Act
1 - https://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Pr...
>Ad peddling, stealing and selling personal information, that has been detrimental.
So we agree on this part.
> What did happen is that regulation was passed to allow 13 year olds to participate online much to the detriment of our society.
My claim is that if "we" hadn't allowed 13 year olds to sign away liabilities when they registered on a website, there would be fewer minors using social media in environments mixed with adults. More specifically, guardians of minors would be required to decide whether their kids should have access, and in doing so would provide the correct market feedback to ensure that sites of great value to minors (education resources being top of mind for me) would receive more market demand. At the same time, social platforms would have less impact on children, as there would be fewer kids participating in anti-nurturing environments.
>> Having kids engage with other kids on the interwebs? I doubt it.
Unless those kids aren't interacting with kids at all, but instead with pedos masquerading as kids for nefarious reasons. Which, yes, has been VERY detrimental to our society.
Nah. I'm not buying it. What's the rate of kids interacting with pedos instead of other kids?
Knee-jerk responses like yours, and "what about the children"-isms in general are likely more detrimental than actual online child abuse. Something about babies and bathwater.
I remember routinely clicking on some checkbox to say I was over 13 well before I was actually over 13. I'm sure most of the kids who actually cared about being on your site were still on it after the ban.
The issue with online kids isn't just the availability of the internet to kids, but the availability of the kids to the internet.
This is a good article, and it thoroughly debunks the proposed tradeoff between fighting corporate vs government surveillance. It seems to me that the people who concentrate primarily on corporate surveillance primarily want government solutions (privacy regulations, for example), and eventually get it in their heads that the NSA are their friends.