That Which Makes Us Safe Makes Us Free: Tech and Privacy
The following is reconstructed from a running e-mail discussion I had the other day with Greg Stene (my old roommate).
Smith: I’ve been thinking about the question of freedom versus security a bit lately. 9.11 is obviously the impetus for a lot of what drives my pondering, but the fact is that 9.11 really has only coalesced and sped up the dynamics that were already in place.
In short, are personal freedom and security mutually exclusive? In the last year we’ve heard it suggested time and again that Americans might have to give up some of the freedoms they have come to take for granted (at this point we have to ask people to think about the differences between actual freedoms and mere conveniences), and at a glance it does seem that privacy/freedom and personal/national security are on opposite ends of a continuum.
Technology has been central to the erosion of privacy and freedom we have seen (and here I’m thinking not just of tech that the gummint uses to snoop on people, but also of marketing tech) – it appears that technology is inherently intrusive, if we take what we see in the world today to be evidence of the necessary character of technology. (Operative word in that last sentence: “if.”)
But is this the reality of technology, and is it the reality of our culture? If we are to be safe from terrorists, does that automatically mean that we have to give up our privacy? Or is there another scenario whereby technology can be used to advance both the cause of security and the cause of privacy?
This concerns me more every time I hear the word “Ashcroft.”
Stene: Here are a couple thoughts. Privacy, as a right, was only brought up around the early 1900s. It’s still in that netherworld as a right. So we really need to separate it from the “basic rights” protected by the Constitution.
Smith: Yes, but. The Supreme Court has held that an implicit right to privacy exists within the Constitution, and in fact that implicit right is the foundation for Roe v. Wade.
Stene: Implicit is not the same as written. Implicit rights are the easiest of the rights to dismantle. Believe me, the idea of “original intent” determines a lot of Constitutional issues. And most “original intent” versions of the Constitution do not include privacy, because most original-intent advocates are literalists. If it ain’t written there, it ain’t in there.
Smith: Fair enough.
Stene: Anyway, to come back to it all. Your privacy is screwed and you won’t get it back. We’ve agreed, as a society, to trade some privacy for convenience (buy things with a card). Live with it, as you live with those bar codes on everything you buy and the happy convenience of bank cards. Personally, I use cash as much as possible. It’s relatively untraceable.
Smith: Of course, the goal is to eventually get rid of cash, isn’t it? Who does that serve?
Stene: Sorry about the dismissive bad attitude here, but what’s done is done and we’re not going to be able to turn back the clock. Tech is going to make privacy intrusion even easier and more comprehensive. Laws will not stop that, because the use of tech in privacy violation is going to become ubiquitous and we will be unable to single out any one person responsible. No one will be found liable for punishment. And as the government increases its use of snooping tech, the pervading atmosphere regarding privacy will be one wherein no single person is permitted privacy, except in one’s own bedroom … and even that will come with its problems.
Smith: So, to my original question, you believe that tech is inherently intrusive, then? (Forgive my determinism, here, but I have long since accepted the fact that technology has its own determining autonomy at some level; I just don’t believe that it is a product of economic determinants. Co-determining, maybe…)
Stene: The nuke is the best example of determining beyond co-determining. Without it, war in Euroland would have happened in the 50s or 60s. USSR and NATO. No weapon other than the nuke could have caused (imposed is the more forceful word) that peculiar form of thinking called MAD: mutually assured destruction.
Smith: This is a good example, but I believe there are a number of more subtle cases, as well. In fact, if you’re willing to use Postman’s definition of technology, which is broad enough to include things like “education,” it really opens up the discussion.
Stene: So, you said: “Or is there another scenario whereby technology can be used to advance both the cause of security and the cause of privacy?”
Certainly. Encrypted e-mail is an example. PGP, though it’s had its faults (in one version) laid open recently, protects our communication and enhances our security.
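[An illustrative aside, for readers who want Stene’s point made concrete: the following is not PGP, which uses hybrid public-key encryption, but a bare one-time pad sketched in standard-library Python. The message text is invented for illustration. It shows the core privacy property encrypted e-mail provides: what an eavesdropper intercepts is unreadable without the key.]

```python
import secrets


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # Applying the same operation twice recovers the original.
    return bytes(b ^ k for b, k in zip(data, key))


message = b"shoes arriving tuesday"                # hypothetical plaintext
key = secrets.token_bytes(len(message))            # one-time pad: random key as long as the message

ciphertext = xor_cipher(message, key)              # this is all an eavesdropper sees
recovered = xor_cipher(ciphertext, key)            # only the key-holder can do this

assert recovered == message
```

The catch, of course, is getting the key to the other party without it being intercepted, which is exactly the problem public-key systems like PGP exist to solve.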
Smith: Yes, but things like PGP are reactive and represent the exception, not the rule, wouldn’t you say? It’s a technology that only comes into existence to combat the dominant line of development. So if I wanted to be argumentative, I might respond that isolated cases of pro-privacy tech don’t really refute the proposition that tech is by nature intrusive, right?
Stene: Weird thinking there. Here goes … you and I encrypt our e-mail, and that tells the feds they should look at it. But we should feel good about that, because encrypted e-mail is probably used by the bad guys, too. So the feds will look at their stuff, also.
In contrast, if we just send regular e-mail with a few key words as our private messaging process (“shoes” stands for “dope,” for example), we can send this kind of unencrypted e-mail all day and it never gets looked at (that’s our privacy being protected). But if we want to help the war effort, we keep the feds vigilant by sending nice and safe messages by encrypted e-mail (that’s our security).
Smith: Except that by doing so we have them wasting their time on the innocent.
Stene: Too flip a response? Possibly. Let’s take a look at the question again …
“Or is there another scenario whereby technology can be used to advance both the cause of security and the cause of privacy?”
The question is based in the assumption that increased security comes from decreased privacy.
Smith: Ummm, not exactly. More like what I posited originally, that security and privacy are on opposite ends of a continuum (in principle, if not in ultimate effect, since I can easily imagine cases where the government annihilates privacy without actually effecting greater security). But go on.
Stene: One example of how decreased privacy actually decreased security should suggest the fallacy of the assumption.
As I understand it, when the NSA was busy taking in messages from around the world (a decrease in privacy), the country’s security pre-9/11 was actually lessened, because the overload of messages kept relevant information from being weeded out and attended to.
Smith: And this would go to what I say above. Maybe I need to find a clearer way of stating my hypothesis, though, because as I put it originally it made it sound like there’s a trade-off in real-world effect, and you illustrate nicely how that just isn’t accurate.
Stene: Information overload may well become the single most important check on the use of information gained from decreased privacy. Too much information hamstrings you even worse than too little. At least with too little, the anomalies begin to surface and can be recognized.
So, decreased privacy, through information overload, may decrease security.
Smith: Of course, there’s an irony in here when you consider this from a purely pragmatic, operational standpoint. In terms of effect, you may have reams of privacy-compromising info on me, but if you have so much information that you can’t evaluate it adequately, then in terms of actual impact my privacy is doing just fine.
So that would lead me to think about privacy not in terms of info gathering or info-gathering capacity, but in terms of what is done with that info. Hmmm…..
Of course, you only talk in these terms when you’ve lost the policy battle, I guess….
Stene: Again: “Or is there another scenario whereby technology can be used to advance both the cause of security and the cause of privacy?”
One may, I suppose, suggest that any advance in security in times of potential disaster results in increased privacy because that increase in security will increase the probability that some majority of the status quo will be sustained.
Smith: Machiavelli would be proud of this line of reasoning, no?
Stene: The alternative to sustaining the status quo would be anarchy, death and destruction, and a complete new way of life, one in which privacy may never emerge as a protected value.
So, yeah. Tech that increases security also increases privacy, in the face of imminent threat.
Smith: Which brings us back to my original subject line: “that which makes us safe makes us free,” which comes from Minority Report, and is arguably the most chilling moment in the entire film. That future is, like all good cyberpunk, extrapolated from our present condition. It takes where we are now and exaggerates slightly.
And we can sure as hell see somebody like Bushcroft concocting such a catchphrase to justify liberty-killing legislation like the “Patriot Act,” can’t we?
Stene: As Einstein said, it’s all relative.
Smith: Relativism is dead. Film at 11.