(The following, plus a few side anecdotes, was delivered at SIGINT 13, Cologne, Germany, July 5th, 2013. Here's the video.)

About a year and a half ago I was in Brussels for a workshop that Google and Privacy International hosted. The goal of this workshop was to develop policy language around privacy that Google could use in negotiating with governments -- I'm guessing trade agreements and things like that, no one was especially able or willing to give me specific details -- about user privacy, what sort of protections have to be applied to data on the wire, data at rest and so on, and what governments can and can't do with respect to the data that private (or publicly traded) companies collect and use in the course of their business.

Now, this workshop was held under the Chatham House rule, which says that I can quote things that people said, but I can't attribute them directly. On the first day of the conference, they offered two tracks, a technical track and a policy track. There were a bunch of really sharp technical people there, academics and industry people and independent researchers and like half of the Tor Project. While I didn't know most of the policy people, there were a whole lot of folks from the EFF and other good-guy kinds of organizations, and I have to figure if they managed to pick up a qualified slate of technical experts they probably did a decent job on the policy side too. But you could go to whichever track you wanted, it wasn't segregated by specialty or anything like that.

So we all meet up, it's about 9 o'clock in the morning, there's coffee and about half an hour to meet-and-greet, and then they sit us all down and give us an overview of the next two days and tell us we can go to whichever room we want, technical or policy. And I notice that every hacker I recognise, along with the computer science academics and so on, they're all headed into the tech room. And I'm like "hmm." Because sure I know a thing or two about Tor, but they've already got half the Tor Project. Not to mention, the academics there knew everything I know about privacy and then some, and there were enough people who knew enough about langsec that even if it came up they didn't really need me, and apart from that I'm not really sure what I have to offer. So I decide okay, since the point of this whole affair is to produce policy language anyway, I'll go see if I can contribute to that. Make sure there's an engineering perspective represented, that kind of thing.

Now remember, Chatham House rule, so I can't directly attribute quotes. But what I can tell you is that maybe 45 minutes, an hour into the discussion, some fuckup (ahem) who'd been sitting there fidgeting at the way things had been going pipes up and says, "Can we take as axiomatic that it's a bad idea to just up and break the Internet?" And the whole room turns and says, "NO." I mean, it wasn't quite as direct as that, there was some spirited discussion, but it very quickly became clear that to everyone in the room that was willing to open their mouth apart from this one fuckup, the very idea of a global interconnected network was something like a lump of modeling clay that you could squish and mold, shape and reshape by fiat. Never mind that there was only a thin wall between them and a whole room minus one full of engineers talking about the incredibly intricate details and constantly moving parts of this really-quite-fragile-when-you-think-about-it putative lump of modeling clay.

A little later, this same fuckup was having lunch, and got into a conversation with one of the other people from the policy room, during which the other person advanced the claim -- and I am pretty sure they were not being ironic -- that mathematics had to be subordinated to national sovereignty.

That was the point where I said to myself shit, y'all, we've got a problem.

Because as far as I can tell, every single person at that workshop was supposed to be one of the Good Guys. But when the Good Guys can't even agree on what reality is, how far can they really get toward agreeing what good is?

So now it's 2013, and the front page of pretty much every major metropolitan newspaper has been carrying articles for weeks on PRISM, on Edward Snowden, on the NSA's actions in Germany and the rest of the EU. It's tempting to think that the lines are really clear: the NSA violated everyone's rights not to mention EU data protection laws, therefore NSA bad, therefore everyone else good, which includes Edward Snowden, therefore what the hell are all these other countries doing hiding behind excuses like "he has to apply for asylum from within our country"? And then Venezuela comes into the picture and there's some arguing about trade agreements, and all of Europe's foreign ministers are suddenly very preoccupied and there's bad news out of the European Central Bank again and we all come off looking like sellouts. And everyone around the world feels vaguely unsatisfied.

Crucially, nothing has actually happened.

Perspectives may have changed. Opinions may have changed. But Edward Snowden is still somewhere in the transit zone at Sheremetyevo, and PRISM is, as far as we know, still in operation. Enormous amounts of cogitation have been expended over this topic. Millions of man-hours of human computation -- and at least an equivalent amount of CPU computation -- have been devoted to it. People obsess over the ins and outs of the rights and wrongs of what Snowden did, or of the legality or the illegality of the NSA's actions, and meanwhile the ingestion systems merrily continue ingesting.

Because nearly everything that matters is a side effect.

I should explain, at this point, exactly what it is I mean by side effect, but I'm going to have to start with a counterexample. In pharmacology, for instance, there's this notion of the clinical effect and the side effect, where the clinical effect is the effect you want to produce, like reducing pain or cooling off a fever, and the side effect is something that you don't want to produce, like a metabolic product that's incredibly toxic to your liver. This is the usage that's made its way into everyday language, and it carries with it this notion that there are always tradeoffs. You can take just enough paracetamol to take away your headache without also killing your liver, and this is reliable enough across the entire human population that we feel comfortable selling it over the counter and giving it to children. And we end up thinking about side effects as something that we manage, in the case of paracetamol adding up to tens of thousands of emergency room visits per year in the US alone due to accidental overdose. But I'll get back to this.

In computer science, what we mean by side effect is anything a computation does, beyond returning a value, that changes the state of the system. If the intended result of your computation produces some change in state, then it's actually a side effect. If an unintended result of your computation produces some change in state, it's also a side effect. Intent means nothing whatsoever. You could have given that person a third dose of paracetamol after they threw up the first two because you were trying to help them with their fever and didn't realise how quickly the stomach absorbs paracetamol -- this actually happened to a friend of mine -- or you could have been straight-up trying to murder them; computer science only acknowledges the side effect of the person landing in the emergency room with a failing liver. (She survived, by the way.)
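To make that concrete in the small, here's a minimal sketch in Python (the dosing names are mine, purely for illustration): one function is pure, the other mutates state outside itself, and the language semantics care not one bit what the programmer intended.

```python
# A minimal illustration of side effects in the computer-science sense:
# any change to system state counts, regardless of intent.

administered_doses = []   # state that outlives any single function call

def total_dose(doses):
    # Pure: the result depends only on the argument, and nothing outside
    # the function changes. No side effects.
    return sum(doses)

def administer(dose_mg):
    # Side-effecting: the module-level list is mutated. Whether the caller
    # meant well or meant murder, the state change is the same.
    administered_doses.append(dose_mg)
    return total_dose(administered_doses)

administer(500)
administer(500)
print(total_dose(administered_doses))   # 1000 -- the state change persists
```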

So this is why, the other day when a Belgian business news reporter interviewed me about PRISM and finished off by asking for my #1 piece of security advice for Belgian companies, I told him, "Follow the OWASP best practices and focus on your responsibility to your customers." And he got that, which I thought was encouraging. If you're a European company and a copy of your trade-secret algorithm is sitting on an NSA hard drive right now because somebody's git traffic transited through the US, it'll still be sitting there tomorrow and there's not a hell of a lot you can do about that. But you can take steps to harden the machines that algorithm is executing on, and those steps are persistent side effects. They have lasting impact. They matter.

And there's an extent to which I feel like I'm preaching to the choir here, because we get that. It's almost like a sense you develop when you observe a system over a long period of time, whether we're talking about a telephone trunk system or the time-sharing systems at MIT or the early ARPAnet -- which is all way before my time, but that's fine, because it's not a sense you have to be in some particular time or place to acquire. I got mine on IRC. It's network proprioception. Boxes come up, boxes go down, the shape of the network changes, and as you interact with that network and start learning what all its little flags and options do, how change propagates, you develop an awareness of the state of the system that I think it's really only fair to compare to your awareness of the state of yourself. Certainly from a philosophical standpoint they're about equally hard to talk about. But I think it's fair to say that hackers who know what they're doing -- reasonably competent hackers, let's say -- correlate inputs to a system with outputs from that system and, when they can, internal state changes in that system, in much the same way that people who are reasonably self-aware can think abstractly about what they experience, consider their internal responses, and produce some outward response (or not, as the case may be). And, for that matter, learn from their mistakes! I think that in much the same way as Douglas Adams characterised the knack of flying as learning to throw yourself at the ground and miss, it's entirely fair to characterise the knack of hacking as learning how you yourself can fail more quickly until whatever you're analysing fails in exactly the way you want it to.

But having this kind of mindset at all -- which is really just the scientific method all over again, nature being obeyed in order to be commanded and all that -- turns out to be rarer than you'd expect, at least if you're me, which, to be fair, means that most of the people you spend any time with at all are scientists, hackers, or both. This is not all that large of a sector of the population to begin with. And we're in a funny situation here, where for the last couple of years there's been an unusually large proportion of international attention paid to the hacker community and hacker culture by people who don't have the faintest fucking idea how we think. There's a saying for this in the United States, "armchair quarterbacking"; the metaphor refers to the guy who's sitting there in his armchair at home, drinking beer and shouting at his big-screen TV what he thinks the quarterback ought to be doing. Maybe he's played some football, maybe he even coaches kids on weekends or something, but there's this tacit understanding that for all his rhetoric -- even when he's right -- he's still just there in his armchair; if he really understood how to coach a team to glory, he'd be out there in the game putting that understanding into practice.

This gets murky in the world of policy, where you can have economists like Felix Salmon who wax rhetorical about Bitcoin without having the first fucking idea what a hash function is, much less how one functions as a component of a billion-dollar financial system. He literally does not understand what he is talking about, but he understands enough about money -- or, at least, what "money" means in the parlance of the modern international finance system -- that he thinks he understands what he's talking about, and worse, that people should listen to him, even though what he's actually talking about and what he thinks he's talking about are systems as wildly disparate as ... two very disparate things. If Bitcoin is going to make any sense to you, you have to accept the notion that the levers and dials that financial regulators are used to being able to fiddle with just aren't there. The currency itself is inherently resistant to regulation, because Satoshi Nakamoto built that system like a Deist God; the parameters of the system, from block difficulty to reward halving time, were built into that system from the moment the genesis block got written out to disk.
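And those parameters really are just constants in code. Here's a compressed sketch in Python -- the subsidy constants are Bitcoin's real consensus values, but I'm eliding the actual header serialisation and the compact difficulty encoding:

```python
import hashlib

# Consensus constants fixed since the genesis block: a 50 BTC starting
# reward that halves every 210,000 blocks. No regulator gets a dial here.
HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50.0

def block_subsidy(height):
    # The miner's reward halves every HALVING_INTERVAL blocks.
    return INITIAL_SUBSIDY / (2 ** (height // HALVING_INTERVAL))

def meets_target(header_bytes, target):
    # Bitcoin's proof of work: double SHA-256 the block header and treat
    # the digest as a 256-bit little-endian integer; the block only counts
    # if that integer falls below the difficulty target.
    digest = hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()
    return int.from_bytes(digest, "little") < target

print(block_subsidy(0), block_subsidy(210_000))       # 50.0 25.0
print(meets_target(b"not a real header", 2 ** 240))   # False, barring astronomical luck
```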

And every time I hear one of these armchair cryptocurrency specialists -- who may be top-notch economists, but who, crucially, spend so much of their time thinking about how to manage systems where "unit of account" and "unit of exchange" mean the same thing that the very idea of decoupling those concepts is crazy moon language -- talking about how inherently doomed Bitcoin is, I kind of want to come back with "so how many billion-dollar financial systems have you built in the last couple of years?" Think about that for a second. We live in a world where if somebody comes up with a useful enough idea, and reduces it to practice in code that other people can actually use, and enough other people decide it's also a useful idea, a couple of years down the line it becomes a billion dollars of monetary capacity. Obviously I'm not going to pretend that's a billion dollars' worth of state change -- a billion dollars' worth of side effects; a lot of the volume that goes into making that number that large is people chasing bubbles, and it's reasonable to expect the usual sort of outcomes you get from chasing bubbles, namely other floating currencies disappearing into thin air when people decide it's time to get out. But I've also lost count of how many times I've heard the usual pundits predict that surely, this Bitcoin price spike is going to be the one that kills the golden goose ... and every time, like clockwork, the damn thing crashes, dusts itself off, and keeps going. It's almost as if being able to move units of exchange around internationally without having to pay rent to the established financial system is something people find value in.

So it should surprise approximately no one that the next line of defense is, of course, regulation. What surprises me is that it's taken as long as it has. Satoshi did an amazing, unprecedented thing: he designed a protocol that inherently resists tampering. In fact it's so inherently tamper-resistant that you can't actually regulate Bitcoin; you have to either take over the entire network or take a step back and regulate the exchanges where people turn other currency into Bitcoin and vice versa. And I get where the Winklevoss twins are going when they say that regulation means that Bitcoin is maturing as a financial instrument, but I don't for a moment think that's necessarily good for users. If Bitcoin "maturing" means that the majority of its users have to rent their liquidity from a regulator-approved set of oligarchs, then Bitcoin's advantages against other currencies will evaporate. If that happens, the status of the financial system remains a lot closer to quo than it otherwise would. And I can't think of many things that an established rentier likes more than the status quo.

Because nearly everything that matters is a side effect.

Now, I'm hardly going to fault Satoshi for not solving the liquidity problem in addition to not only solving the double-spending problem in a distributed setting, but also doing it in a way that people ended up using. One side effect at a time is fine, especially when you expect it's going to be a big one and you want to find out whether it even works the way you anticipate it will. But this leads me to the next category of people who are suddenly especially interested in What Hackers Do without giving much thought to why, and that's people who desperately want to stave off any kind of side effects at all.
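The shape of that solution is easy enough to sketch, even if the distributed consensus that makes it trustworthy is anything but: every node tracks which transaction outputs are still unspent, and any transaction reaching for an already-spent output gets rejected. A toy version in Python, with all the signatures and scripts waved away:

```python
# A toy sketch of the double-spending check at the heart of Bitcoin-style
# validation: track unspent transaction outputs (the "UTXO set") and reject
# any transaction that tries to spend an output that's already gone.

utxo = {("coinbase-tx", 0)}   # one unspent output: (txid, output index)

def apply_transaction(txid, inputs, n_outputs):
    # Valid only if every input refers to a currently unspent output.
    if not all(ref in utxo for ref in inputs):
        raise ValueError("double spend or unknown output: %r" % (inputs,))
    for ref in inputs:
        utxo.remove(ref)        # inputs are consumed...
    for i in range(n_outputs):
        utxo.add((txid, i))     # ...and new outputs created

apply_transaction("tx-a", [("coinbase-tx", 0)], 2)       # fine: output unspent
try:
    apply_transaction("tx-b", [("coinbase-tx", 0)], 1)   # same output again
except ValueError as e:
    print("rejected:", e)                                # the double spend fails
```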

Let me give you a recent example. Maybe a week or two ago on Hacker News, I came across an impassioned article about the difference between science and technology. The author's primary claim was that although the process of scientific discovery and the process of technological creation -- say, performing an experiment to test a hypothesis versus designing and implementing a protocol -- are both performed by humans, who have politics, and so both processes have political effects, the outcome of the scientific process is nonetheless apolitical, because nature remains the same no matter what your view of the world is. And I'd even agree with that. But then he advances this claim:
TCP/IP et al are technologies created by people (smart, well paid white guys, typically) with politics (as much as they deny it, because they're scientists). You can probably say they've made a blip in our politics.

They are inherently political, we need to work out what their politics is, what they encourage or discourage, before we use them to solve political problems.

Okay, Mr. Smart, Well-Paid White Guy. (Dude, do not give me that look. You live in the western hemisphere and have a blog; you are paid better than most of the planet.) We'll just tell damn near everyone in the Middle East, not to mention every single Kenyan who's been coming up with uses for GSM that the makers probably never even imagined, much less intended, since long before there was an Arab Spring to vex your stony political sleep, that they need to put down their mobiles and back away from Twitter, because the Flying Spaghetti Monster only knows what those beastly protocols might encourage or discourage. (Hosni Mubarak had a few ideas, which is why he decided to shut them down entirely. You can see how well that worked out for him.)

I'm used to reactionaries; I grew up in Texas. I'm just not used to reactionaries coming in from the left. I suppose it's a sign that the left is maturing, in much the same way that Bitcoin is maturing, which is to say becoming part of an established system that finds side effects existentially threatening. And if you can con someone into holding still for fear of what waves they might make if they were to move, whether it's through guilt or fear or what-have-you, you no longer have to worry about their side effects. It's the liberal version of "fuck you, got mine."

Now, I got my first taste of this right around 25c3, when there was some press coverage of the biohacking work I'd been doing with lactobacillus. If you ever want to see a Democrat supporting gun rights, telling him one of his neighbours is doing synthetic biology in their kitchen seems to work -- I have never gotten more death threats than I did when the Huffington Post picked up that article. And we can talk until we're blue in the face about why that is, but I think what's most interesting is that when presented with a sufficiently large example, people will blithely throw away what up until then they'd considered some of their most cherished beliefs, like guns being evil or murder being wrong, at least for the sake of argument. Obviously no one's come up and shot me yet, so apparently no one's completely pitched those beliefs out the window, and I'll take that as a good thing. I'm in favour of not being shot. But I'm also in favour of change I can see, not merely change I can believe in. If that means poking the status quo with a stick to see what it does, I'm more inclined to do that than not. And if it responds, I'm just as inclined to do it again, like that XKCD comic with the electric shock button. Maybe I find out a little more about how it works. Maybe I find out a way it breaks. Either way, I've learned more about it than I knew before. And, crucially, I never would have found out if I hadn't picked up that stick.

You can think of the human brain in a lot of ways, but probably the most useful way I know of to think about it is as a massively parallel pattern-matching machine. Your neocortex learns to recognise patterns, and it builds an ontology out of those patterns, so that from light and shadow you can discern edges and from edges you can discern shapes and from shapes you can discern whether what you're looking at is something you've already identified or something novel. We quite literally spend the first couple of weeks of our lives learning how to see and hear: the machinery is there, we've already been using it in utero, but now we have to adapt to this weird outside-the-uterus environment and that means learning how to use those senses all over again. But the secret is, you never stop learning. The human brain is amazingly plastic, well on into adulthood, as long as you're willing to continue exposing it to novel experiences that it has to learn to pattern-match. Preferably lots of them, so that you don't over-train to an input set that's too small.

I can't tell you what a "social sense" feels like, at least not the way I can describe network proprioception. I was born without one and I'm still working on putting one together from the parts I have available. But I want to know what we could build if we had people who developed proprioception for, if you will, the body politic. We may very well already be creating those people, given that Western children now grow up in a society where social graphs as graphs are a major input on a daily basis. I look forward to seeing them grow up. But if those kids aren't kicking the tires -- which is what kids are supposed to do in the first place, and I guess what we never grew out of -- where are they going to find the side effects that will tell them how these network effects behave?
