(The following, plus a few side anecdotes, was delivered at SIGINT 13, Cologne, Germany, July 5th, 2013.)

About a year and a half ago I was in Brussels for a workshop that Google and Privacy International hosted. The goal of this workshop was to develop policy language around privacy that Google could use in negotiating with governments -- I'm guessing trade agreements and things like that, no one was especially able or willing to give me specific details -- about user privacy, what sort of protections have to be applied to data on the wire, data at rest and so on, and what governments can and can't do with respect to the data that private (or publicly traded) companies collect and use in the course of their business.

Now, this workshop was held under the Chatham House rule, which says that I can quote things that people said, but I can't attribute them directly. On the first day of the conference, they offered two tracks, a technical track and a policy track. There were a bunch of really sharp technical people there, academics and industry people and independent researchers and like half of the Tor Project. While I didn't know most of the policy people, there were a whole lot of folks from the EFF and other good-guy kinds of organizations, and I have to figure if they managed to pick up a qualified slate of technical experts they probably did a decent job on the policy side too. But you could go to whichever track you wanted, it wasn't segregated by specialty or anything like that.

So we all meet up, it's about 9 o'clock in the morning, there's coffee and about half an hour to meet-and-greet, and then they sit us all down and give us an overview of the next two days and tell us we can go to whichever room we want, technical or policy. And I notice that every hacker I recognise, along with the computer science academics and so on, they're all headed into the tech room. And I'm like "hmm." Because sure I know a thing or two about Tor, but they've already got half the Tor Project. Not to mention, the academics there knew everything I know about privacy and then some, and there were enough people who knew enough about langsec that even if it came up they didn't really need me, and apart from that I'm not really sure what I have to offer. So I decide okay, since the point of this whole affair is to produce policy language anyway, I'll go see if I can contribute to that. Make sure there's an engineering perspective represented, that kind of thing.

Now remember, Chatham House rule, so I can't directly attribute quotes. But what I can tell you is that maybe 45 minutes, an hour into the discussion, some fuckup (ahem) who'd been sitting there fidgeting at the way things had been going pipes up and says, "Can we take as axiomatic that it's a bad idea to just up and break the Internet?" And the whole room turns and says, "NO." I mean, it wasn't quite as direct as that, there was some spirited discussion, but it very quickly became clear that to everyone in the room that was willing to open their mouth apart from this one fuckup, the very idea of a global interconnected network was something like a lump of modeling clay that you could squish and mold, shape and reshape by fiat. Never mind that there was only a thin wall between them and a whole room minus one full of engineers talking about the incredibly intricate details and constantly moving parts of this really-quite-fragile-when-you-think-about-it putative lump of modeling clay.

A little later, this same fuckup was having lunch, and got into a conversation with one of the other people from the policy room, during which the other person advanced the claim -- and I am pretty sure they were not being ironic -- that mathematics had to be subordinated to national sovereignty.

That was the point where I said to myself shit, y'all, we've got a problem.

Because as far as I can tell, every single person at that workshop was supposed to be one of the Good Guys. But when the Good Guys can't even agree on what reality is, how far can they really get toward agreeing what good is?

So now it's 2013, and the front page of pretty much every major metropolitan newspaper has been carrying articles for weeks on PRISM, on Edward Snowden, on the NSA's actions in Germany and the rest of the EU. It's tempting to think that the lines are really clear: the NSA violated everyone's rights not to mention EU data protection laws, therefore NSA bad, therefore everyone else good, which includes Edward Snowden, therefore what the hell are all these other countries doing hiding behind excuses like "he has to apply for asylum from within our country"? And then Venezuela comes into the picture and there's some arguing about trade agreements, and all of Europe's foreign ministers are suddenly very preoccupied and there's bad news out of the European Central Bank again and we all come off looking like sellouts. And everyone around the world feels vaguely unsatisfied.

Crucially, nothing has actually happened.

Perspectives may have changed. Opinions may have changed. But Edward Snowden is still somewhere in the transit zone at Sheremetyevo, and PRISM is, as far as we know, still in operation. Enormous amounts of cogitation have been expended over this topic. Millions of man-hours of human computation -- and at least an equivalent amount of CPU computation -- have been devoted to it. People obsess over the ins and outs of the rights and wrongs of what Snowden did, or of the legality or the illegality of the NSA's actions, and meanwhile the ingestion systems merrily continue ingesting.

Because nearly everything that matters is a side effect.

I should explain, at this point, exactly what it is I mean by side effect, but I'm going to have to start with a counterexample. In pharmacy, for instance, there's this notion of the clinical effect and the side effect, where the clinical effect is the effect you want to produce, like reducing pain or cooling off a fever, and the side effect is something that you don't want to produce, like a metabolic product that's incredibly toxic to your liver. This is the usage that's made its way into everyday language, and it carries with it this notion that there are always tradeoffs. You can take just enough paracetamol to take away your headache without also killing your liver, and this is reliable enough across the entire human population that we feel comfortable selling it over the counter and giving it to children. And we end up thinking about side effects as something that we manage, in the case of paracetamol adding up to a few hundred thousand emergency room visits per year due to accidental overdose. But I'll get back to this.

In computer science, what we mean by side effect is anything that changes the state of the system. If the intended result of your computation produces some change in state, then it's actually a side effect. If an unintended result of your computation produces some change in state, it's also a side effect. Intent means nothing whatsoever. You could have given that person a third dose of paracetamol after they threw up the first two because you were trying to help them with their fever and didn't realise how quickly the stomach absorbs paracetamol -- this actually happened to a friend of mine -- or you could have been straight-up trying to murder them; computer science only acknowledges the side effect of the person landing in the emergency room with a failing liver. (She survived, by the way.)
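To make the computer-science usage concrete, here's a minimal sketch (my own illustration, not anything from the talk): two functions that compute the same value, one pure and one that mutates external state. Note that nothing in the code records *why* the caller invoked the function -- exactly as described above, intent means nothing, only the state change counts.

```python
# A pure computation: no state outside the function changes.
def double(x):
    return 2 * x

# External state that the second function will mutate.
log = []

# The same computation with a side effect: it appends to `log`.
# Whether the mutation was the goal or an accident is invisible here;
# the state change is the side effect either way.
def double_and_record(x):
    log.append(x)       # state change -- this is the side effect
    return 2 * x

double_and_record(3)
double_and_record(5)
# log now holds [3, 5], regardless of why the caller invoked the function
```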

So this is why, the other day when a Belgian business news reporter interviewed me about PRISM and finished off by asking for my #1 piece of security advice for Belgian companies, I told him, "Follow the OWASP best practices and focus on your responsibility to your customers." And he got that, which I thought was encouraging. If you're a European company and a copy of your trade-secret algorithm is sitting on an NSA hard drive right now because somebody's git traffic transited through the US, it'll still be sitting there tomorrow and there's not a hell of a lot you can do about that. But you can take steps to harden the machines that algorithm is executing on, and those steps are persistent side effects. They have lasting impact. They matter.

And there's an extent to which I feel like I'm preaching to the choir here, because we get that. It's almost like a sense you develop when you observe a system over a long period of time, whether we're talking about a telephone trunk system or the time-sharing systems at MIT or the early ARPAnet -- which is all way before my time, but that's fine, because it's not a sense you have to be in some particular time or place to acquire. I got mine on IRC. It's network proprioception. Boxes come up, boxes go down, the shape of the network changes, and as you interact with that network and start learning what all its little flags and options do, how change propagates, you develop an awareness of the state of the system that I think it's really only fair to compare to your awareness of the state of yourself. Certainly from a philosophical standpoint they're about equally hard to talk about. But I think it's fair to say that hackers who know what they're doing -- reasonably competent hackers, let's say -- correlate inputs to a system with outputs from that system and, when they can, internal state changes in that system, in much the same way that people who are reasonably self-aware can think abstractly about what they experience, consider their internal responses, and produce some outward response (or not, as the case may be). And, for that matter, learn from their mistakes! I think that in much the same way as Douglas Adams characterised the knack of flying as learning to throw yourself at the ground and miss, it's entirely fair to characterise the knack of hacking as learning how you yourself can fail more quickly until whatever you're analysing fails in exactly the way you want it to.

But having this kind of mindset at all -- which is really just the scientific method all over again, nature being obeyed in order to be commanded and all that -- turns out to be rarer than you'd expect, at least if you're me, which, to be fair, means that most of the people you spend any time with at all are scientists, hackers, or both. This is not all that large of a sector of the population to begin with. And we're in a funny situation here, where for the last couple of years there's been an unusually large proportion of international attention paid to the hacker community and hacker culture by people who don't have the faintest fucking idea how we think. There's a saying for this in the United States, "armchair quarterbacking"; the metaphor refers to the guy who's sitting there in his armchair at home, drinking beer and shouting at his big-screen TV what he thinks the quarterback ought to be doing. Maybe he's played some football, maybe he even coaches kids on weekends or something, but there's this tacit understanding that for all his rhetoric -- even when he's right -- he's still just there in his armchair, if he really understood how to coach a team to glory he'd be out there in the game putting that understanding into practice.

This gets murky in the world of policy, where you can have economists like Felix Salmon who wax rhetorical about Bitcoin without having the first fucking idea what a hash function is, much less how one functions as a component of a billion-dollar financial system. He literally does not understand what he is talking about, but he understands enough about money -- or, at least, what "money" means in the parlance of the modern international finance system -- that he thinks he understands what he's talking about, and worse, that people should listen to him, even though what he's actually talking about and what he thinks he's talking about are systems as wildly disparate as ... two very disparate things. If Bitcoin is going to make any sense to you, you have to accept the notion that the levers and dials that financial regulators are used to being able to fiddle with just aren't there. The currency itself is inherently resistant to regulation, because Satoshi Nakamoto built that system like a Deist God; the parameters of the system, from block difficulty to reward halving time, were built into that system from the moment the genesis block got written out to disk.
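The "Deist God" point can be made concrete with the issuance schedule itself: a 50 BTC block subsidy that halves every 210,000 blocks (roughly every four years at one block per ten minutes), which is what caps the supply near 21 million. Below is a simplified sketch of that schedule in floating point -- the real client works in integer satoshis with bit shifts, so treat this as an illustration of the fixed parameters, not Bitcoin Core's actual code.

```python
# Parameters fixed from the genesis block onward -- no regulator's dial.
HALVING_INTERVAL = 210_000   # blocks between reward halvings
INITIAL_SUBSIDY = 50.0       # BTC per block in the first era

def block_subsidy(height):
    """Mining reward for the block at a given height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:       # after 64 halvings the subsidy is zero
        return 0.0
    return INITIAL_SUBSIDY / (2 ** halvings)

def total_supply_limit():
    """Sum the subsidy across all eras: the geometric series
    210,000 * 50 * (1 + 1/2 + 1/4 + ...) converges near 21 million."""
    return sum(block_subsidy(era * HALVING_INTERVAL) * HALVING_INTERVAL
               for era in range(64))

print(block_subsidy(0))         # 50.0 at genesis
print(block_subsidy(210_000))   # 25.0 after the first halving
print(total_supply_limit())     # just under 21 million BTC
```

The design choice worth noticing: because these constants are part of the consensus rules every node enforces, changing them means convincing the whole network to run different software -- which is why regulation has to aim at the exchanges instead.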

And every time I hear one of these armchair cryptocurrency specialists -- who may be top-notch economists, but who crucially spend so much of their time thinking about how to manage systems where "unit of account" and "unit of exchange" mean the same thing that the very idea of decoupling those concepts is crazy moon language -- talking about how inherently doomed Bitcoin is, I kind of want to come back with "so how many billion-dollar financial systems have you built in the last couple of years?" Think about that for a second. We live in a world where if somebody comes up with a useful enough idea, and reduces it to practice in code that other people can actually use, and enough other people decide it's also a useful idea, a couple of years down the line it becomes a billion dollars of monetary capacity. Obviously I'm not going to pretend that's a billion dollars worth of state change -- a billion dollars worth of side effects; a lot of the volume that goes into making that number that large is people chasing bubbles, and it's reasonable to expect the usual sort of outcomes you get from chasing bubbles, namely other floating currencies disappearing into thin air when people decide it's time to get out. But I've also lost count of how many times I've heard the usual pundits predict that surely, this Bitcoin price spike is going to be the one that kills the golden goose ... and every time, like clockwork, the damn thing crashes, dusts itself off, and keeps going. It's almost as if being able to move units of exchange around internationally without having to pay rent to the established financial system is something people find value in.

So it should surprise approximately no one that the next line of defense is, of course, regulation. What surprises me is that it's taken as long as it has. Satoshi did an amazing, unprecedented thing: he designed a protocol that inherently resists tampering. In fact it's so inherently tamper-resistant that you can't actually regulate Bitcoin; you have to either take over the entire network or take a step back and regulate the exchanges where people turn other currency into Bitcoin and vice versa. And I get where the Winklevoss twins are going when they say that regulation means that Bitcoin is maturing as a financial instrument, but I don't for a moment think that's necessarily good for users. If Bitcoin "maturing" means that the majority of its users have to rent their liquidity from a regulator-approved set of oligarchs, then Bitcoin's advantages against other currencies will evaporate. If that happens, the status of the financial system remains a lot closer to quo than it otherwise would. And I can't think of many things that an established rentier likes more than the status quo.

Because nearly everything that matters is a side effect.

Now, I'm hardly going to fault Satoshi for not solving the liquidity problem in addition to not only solving the double-spending problem in a distributed setting, but also doing it in a way that people ended up using. One side effect at a time is fine, especially when you expect it's going to be a big one and you want to find out whether it even works the way you anticipate it will. But this leads me to the next category of people who are suddenly especially interested in What Hackers Do without giving much thought to why, and that's people who desperately want to stave off any kind of side effects at all.

Let me give you a recent example. Maybe a week or two ago on Hacker News, I came across an impassioned article about the difference between science and technology. The author's primary claim was that although the process of scientific discovery and the process of technological creation -- say, performing an experiment to test a hypothesis versus designing and implementing a protocol -- are both performed by humans, who have politics, and therefore have political effects, the outcome of the scientific process is apolitical because nature remains the same no matter what your view of the world is. And I'd even agree with that. But then he advances this claim:
TCP/IP et al are technologies created by people (smart, well paid white guys, typically) with politics (as much as they deny it, because they're scientists). You can probably say they've made a blip in our politics.

They are inherently political, we need to work out what their politics is, what they encourage or discourage, before we use them to solve political problems.

Okay, Mr. Smart, Well-Paid White Guy. (Dude, do not give me that look. You live in the western hemisphere and have a blog; you are paid better than most of the planet.) We'll just tell damn near everyone in the Middle East, not to mention every single Kenyan who's been coming up with uses for GSM that the makers probably never even imagined much less intended since long before there was an Arab Spring to vex your stony political sleep, that they need to put down their mobiles and back away from Twitter, because the Flying Spaghetti Monster only knows what those beastly protocols might encourage or discourage. (Hosni Mubarak had a few ideas, which is why he decided to shut them down entirely. You can see how well that worked out for him.)

I'm used to reactionaries; I grew up in Texas. I'm just not used to reactionaries coming in from the left. I suppose it's a sign that the left is maturing, in much the same way that Bitcoin is maturing, which is to say becoming part of an established system that finds side effects existentially threatening. And if you can con someone into holding still for fear of what waves they might make if they were to move, whether it's through guilt or fear or what-have-you, you no longer have to worry about their side effects. It's the liberal version of "fuck you, got mine."

Now, I got my first taste of this right around 25c3, when there was some press coverage of the biohacking work I'd been doing with lactobacillus. If you ever want to see a Democrat supporting gun rights, telling him one of his neighbours is doing synthetic biology in their kitchen seems to work -- I have never gotten more death threats than I did when the Huffington Post picked up that article. And we can talk until we're blue in the face about why that is, but I think what's most interesting is that when presented with a sufficiently large example, people will blithely throw away what up until then they'd considered some of their most cherished beliefs, like guns being evil or murder being wrong, at least for the sake of argument. Obviously no one's come up and shot me yet, so apparently no one's completely pitched those beliefs out the window, and I'll take that as a good thing. I'm in favour of not being shot. But I'm also in favour of change I can see, not merely change I can believe in. If that means poking the status quo with a stick to see what it does, I'm more inclined to do that than not. And if it responds, I'm just as inclined to do it again, like that XKCD comic with the electric shock button. Maybe I find out a little more about how it works. Maybe I find out a way it breaks. Either way, I've learned more about it than I knew before. And, crucially, I never would have found out if I hadn't picked up that stick.

You can think of the human brain in a lot of ways, but probably the most useful way I know of to think about it is as a massively parallel pattern-matching machine. Your neocortex learns to recognise patterns, and it builds an ontology out of those patterns, so that from light and shadow you can discern edges and from edges you can discern shapes and from shapes you can discern whether what you're looking at is something you've already identified or something novel. We quite literally spend the first couple of weeks of our lives learning how to see and hear: the machinery is there, we've already been using it in utero, but now we have to adapt to this weird outside-the-uterus environment and that means learning how to use those senses all over again. But the secret is, you never stop learning. The human brain is amazingly plastic, well on into adulthood, as long as you're willing to continue exposing it to novel experiences that it has to learn to pattern-match. Preferably lots of them, so that you don't over-train to an input set that's too small.

I can't tell you what a "social sense" feels like, at least not the way I can describe network proprioception. I was born without one and I'm still working on putting one together from the parts I have available. But I want to know what we could build if we had people who developed proprioception for, if you will, the body politic. We may very well already be creating those people, given that Western children now grow up in a society where social graphs as graphs are a major input on a daily basis. I look forward to seeing them grow up. But if those kids aren't kicking the tires -- which is what kids are supposed to do in the first place, and I guess what we never grew out of -- where are they going to find the side effects that will tell them how these network effects behave?

Comments

whswhs
Jul. 6th, 2013 01:11 pm (UTC)
Internet privacy policy is being developed by Ayn Rand villains? No, seriously, you've got your rent-seeking, you've got your belief in the primacy of state goals, you've got your denial of objective reality. . . .
steer
Jul. 6th, 2013 02:59 pm (UTC)
I'm puzzled by your "break the internet" story because I just don't understand what you're saying. On the face of it, it's probably a bad idea to just break the existing internet. On the other hand, it's a very good idea to design something which can replace it or change it in such a way that it's no longer the internet.
I don't know if you think the "fuckup" guy is right or wrong, and you described the debate in such strange terms I don't know whether you sympathise with the room (who seem to want to reengineer the internet, which is great) or the guy (who seems to be cautious about breaking it, which is great).

that mathematics had to be subordinated to national sovereignty.

Already is to some extent -- I know of at least one set of physics equations which (at least as of the mid 90s and at least in the UK and I think US) were not public but instead you could send parameters to people and get a response. You were not to send too many parameters lest you reverse engineer the equations.
maradydd
Jul. 6th, 2013 04:20 pm (UTC)
The "(ahem)" is relevant. Hint: it was not a guy.

More later, conference stuff going on.
steer
Jul. 6th, 2013 04:58 pm (UTC)
I think you misunderstand the nature of my confusion. I'm not trying to guess who it was because I doubt it would mean anything to me as I don't move in those circles. You told the story so circumspectly I didn't actually know what the argument was about or which side you were on.
steer
Jul. 6th, 2013 04:59 pm (UTC)
Doh... sorry, being slow.
darthzeth
Jul. 7th, 2013 01:08 am (UTC)
I assume that you're the "fuckup." As I understand it, you may identify *yourself* under the rule: http://www.chathamhouse.org/about-us/chathamhouserule
whswhs
Jul. 6th, 2013 11:52 pm (UTC)
It seemed clear to me, but then I've been exposed to Hayek's criticisms of scientism.

I took the point to be that you can make piecemeal changes to laws and institutions, but it doesn't work to redesign the entire system comprehensively from first principles: in the first place, because you can't have the omniscience needed to plan an entire system (this is basically a version of the von Mises argument about economic calculation); in the second place, because there is no neutral basis for making evaluative decisions about what you propose to build; and in the third place, because your survival depends on the system you are proposing to rebuild, and if your rebuilding causes it not to work, you're up shit creek with no paddle.

It's kind of like the old medical maxim primum non nocere ("first, do no harm"). Surgery is all very well, but you need to make sure your patient will be alive during and after the surgery.
steer
Jul. 7th, 2013 12:03 am (UTC)
Fortunately I don't believe in Hayek or anything he says. :-)

it doesn't work to redesign the entire system comprehensively from first principles

Right now where I'm looking at it, Internet research is pretty much divided between the "clean slaters" (let's break it all up, start a new one and see where we go) and the "gradualists" (whoa, steady on there, I think we're being a bit hasty... let's wait until IPv6 is completely deployed before doing anything precipitate, I think there's a single bit in the IP header we can play with if we're careful).

The internet itself, of course, was designed (well, more aggregated than designed) by ignoring any kind of message you would ever get from studying the design of existing telephone systems... it seems to work OK. My guess is that the successor network to the internet will happen by a similar process... but it is only a guess. As I always say harking back to my transport network background, the railway network wasn't designed by a canal engineer thinking "how can I modify this?" So my natural sympathy is with the "redesign the entire system comprehensively from first principles" crowd -- because that's historically what has happened in transport and in communications networks.
whswhs
Jul. 7th, 2013 01:01 am (UTC)
But, at the same time, the Internet was not designed by shutting down the entire telephone network, or attempting to replace its existing functions. It seems well on the way to that sort of replacement now, of course, with voice over Internet and Skype. But that took place by an evolutionary process. The Internet was originally designed to perform the quite different function of exchanging text files, which had quite different issues—notably, a delay of a few seconds in the middle of receiving a text file wasn't much of a problem, whereas it's painfully distracting for voice or music or video. It's also worth noting that the Internet originally worked by piggybacking on the existing phone network via early modems that produced signals at acoustic frequencies; in other words, far from comprehensively replacing the telephone system with a new basic technology, it adapted the existing technology of the system to its own new purpose. The change to DSL and cable and satellite and the like came later. It's a lot like the way that the storage of genetic information in DNA went from being a secure backup system to being the primary medium of inheritance and metabolic control.

Shutting down the phone system, or breaking it, on the ground that "we're going to be exchanging information via computers" would simply have taken away a key resource that the early Internet depended on.

As to believing in Hayek, I don't see the relevance. I don't believe in Marxism, or in Christianity, but I have a fair chance of recognizing when a line of reasoning draws on Marxist or Christian ideas, and that gives me a better shot at making sense of where it's headed. I think your lack of knowledge of Hayek may have caused you to be confused when you need not have been.
steer
Jul. 7th, 2013 12:01 pm (UTC)
Sure... for a long time in its mid period the internet came over the same infrastructure as POTS and did stop POTS working (the bad old time when if someone in your house was using the modem you couldn't get calls -- indeed I got a mobile phone because I asked why I hadn't been invited to a particular event and was told "Well, we tried calling you for five days but your line was engaged").

We will see... I've seen a lot of research proposals that "would break the internet if implemented". Some of them have since been implemented -- things once touted to "break the internet" which are now part of the internet:
1) Wide deployment of programmable networks (now, repitched as SDN, completely acceptable)
2) Widespread use of middleboxes
3) Widespread use of P2P
4) Widespread use of DNS hacks for anycast
5) Widespread use of "bulky" data exchange formats like xml
6) Widespread use of "encapsulation" tunnelling etc
7) Widespread use of flows moderated by non-TCP mechanisms
whswhs
Jul. 7th, 2013 01:50 pm (UTC)
The fact that a particular proposal did not turn out to make the Internet nonfunctional hardly says that no one needs to look at new proposals to evaluate the risk of their doing so.
steer
Jul. 7th, 2013 01:52 pm (UTC)
Of course not... people do this, regularly as a standard part of research. However, given the number of people actively taking an interest in breaking the internet deliberately in various ways, we can take it as a given that it's hard to do.
whswhs
Jul. 7th, 2013 02:04 pm (UTC)
Yes, but the point of interest is not whether it's hard to do. It's that apparently a bunch of people formulating policy for the Internet do not take it as basic that their goal conditions should include preserving the functionality of the Internet. In effect, these people are functioning as a kind of government, and I certainly think that the goal conditions for a legitimate government should include the survival of the people it governs and of the economy that supports them and it; if they don't view those things as a trust given to them they aren't fit to govern and should face the response described in the Declaration of Independence.

This whole discussion has a flavor of Bentham to me.
steer
Jul. 7th, 2013 02:12 pm (UTC)
It's that apparently a bunch of people formulating policy for the Internet do not take it as basic that their goal conditions should include preserving the functionality of the Internet.

I presume not "for shits and giggles" but with some other goal in mind though -- hard to say without further knowledge of what was proposed. After all "the Internet" is just a way to deliver data. If there's a better way, let's do that instead.


This whole discussion has a flavor of Bentham to me.


I work only a five minute walk from his preserved corpse.
whswhs
Jul. 7th, 2013 03:06 pm (UTC)
What is your criterion of "better"? I mislike standards of "better" that amount to "a roomful of policy wonks think it sounds like a good idea, so they're going to impose it on everyone." The ideal standard of "better" seems to me to be "large numbers of people, all over the relevant social network, individually decided they liked B better than A and chose to switch to B, without duress or misrepresentation." That's how we moved from physical mail, telegraph, and landlines to cell phones and Internet, after all, at least where I live. The old systems haven't even been disabled; they're just being used less.

Why? Well, in the first place, reversibility, in the thermodynamic sense: If you progress step by step, at any point you can take a step back at low cost. And in the second place, if you make a bad choice for yourself, it hurts you; if that roomful of smart people make a bad choice, it hurts everybody, including people whose costs the policy experts may have discounted steeply.
steer
Jul. 7th, 2013 03:08 pm (UTC)
What is your criterion of "better"?

Let's go with "adopted by more individuals" if that's your criterion; I'm happy to use it.
whswhs
Jul. 7th, 2013 05:14 pm (UTC)
I'd actually prefer a version of the Pareto principle.
maradydd
Jul. 7th, 2013 09:24 pm (UTC)
Now that I'm home and have my laptop out, let me see if some more context will help.

The discussion leading up to that had been generally on two topics: crisis situations like Mubarak disconnecting Egypt, and initiatives like SOPA/PIPA, which would have broken DNS and DNSSEC in several ways that Paul Vixie, Dan Kaminsky and some other people wrote a paper about that I can dig up for you if you aren't able to find it. (The internet blackout got all the press, but the economic argument they made carried a hell of a lot of weight, from what I understand.) The common theme of these, at least as I see it, is unilateral action by policymakers without (much) regard for practical outcomes. I regard this as an obvious negative, for the reason that whswhs captured quite well:

a bunch of people formulating policy for the Internet do not take it as basic that their goal conditions should include preserving the functionality of the Internet

Perhaps, on reflection, I was giving the policy people more technical credit than they deserved, e.g., in expecting them to understand why Vixie et al.'s concerns matter (it has to do with how DNS establishes ground truth, which I imagine you already understand, but let me know if you want me to elaborate). But I also don't think that the arguments in their paper are particularly difficult to understand. (I tried to explain, but obviously I didn't do a very good job; that said, I was being shouted down a lot. It was all pretty stressful.)

Bear in mind that I'm absolutely not opposed to radical new technologies. I'm working on several myself. Fundamentally, though, I am opposed to policy decisions that take away functionality. As I understand it, all of the touted-to-break-the-internet changes you describe above were additive, so that's a meaningful distinction, and thank you for helping me to arrive at it.
steer
Jul. 7th, 2013 09:59 pm (UTC)
I presume the paper you mean is this:

"Security and Other Technical Concerns Raised by the DNS Filtering Requirements in the PROTECT IP Bill"

I'm sure you're right -- I know next to nothing about security stuff (kind of by design).

We're probably mostly on the same side on this one. I guess to you or me, the very idea of the internet having big "off" switches is itself a brokenness. To a corrupt president of a failing regime, or to a dedicated pursuer of internet piracy, the internet not having such off switches is brokenness.

Either way, as I understand the argument, it's talking about DNS filtering and redirection. There's a lot of that out there already in the wild, no? I mean, whether you regard it as a feature, a misfeature, or a brokenness, it's implemented widely enough... the UK certainly does it -- OpenDNS does some.

I take the point about redirection breaking DNSSEC... but if you're big government, you would argue that at the end of it you should be able to sign what you like as whoever you like, because you're the government and hence the trust root (which is not something either of us believes, but I bet it's something they believe).

At least if they're using DNS filters it's easy to get around -- so that's one plus point for it as a technique no? I mean filtering that doesn't work is better than filtering that does. :-)

I am opposed to policy decisions that take away functionality

Depends how you regard it, no? I mean, DNS redirection and filtering are already used. DNSSEC is already used. They don't get on. Which part of that is taking away functionality? We'd like to argue that the DNS redirection is taking away the DNSSEC functionality, because we kind of prefer that functionality. But the person who likes redirection and filtering could equally argue the other. If we were to judge based on which was most widely deployed, I'm not sure DNSSEC would win.
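[To make the conflict concrete, here's a minimal sketch. This is a toy model, not real DNSSEC: HMAC with a shared key stands in for the public-key signatures and chain of trust from the root, and all names and keys are hypothetical. The point it illustrates is the one above: a validating resolver with a pinned trust key accepts the zone operator's signed record but rejects a middlebox's redirected answer, because the middlebox can't forge a valid signature. -- ed.]

```python
import hmac
import hashlib

# Toy stand-in for DNSSEC: the zone's key signs each (name, address) record.
# Real DNSSEC uses public-key signatures (RRSIG records) and a chain of trust
# from the root; HMAC is used here only to keep the sketch self-contained.
ZONE_KEY = b"example-zone-signing-key"  # hypothetical

def sign(name: str, addr: str, key: bytes = ZONE_KEY) -> str:
    """Produce a signature over a (name, address) record."""
    return hmac.new(key, f"{name}={addr}".encode(), hashlib.sha256).hexdigest()

def validate(name: str, addr: str, sig: str, key: bytes = ZONE_KEY) -> bool:
    """A validating resolver checks the signature against its trusted key."""
    return hmac.compare_digest(sig, sign(name, addr, key))

# The authentic record, signed by the zone operator.
authentic = ("example.com", "93.184.216.34",
             sign("example.com", "93.184.216.34"))

# A filtering middlebox redirects the name to a block page. It can sign with
# its own key, but not with the zone's key it doesn't have.
redirected = ("example.com", "10.0.0.1",
              sign("example.com", "10.0.0.1", key=b"censor-key"))

for name, addr, sig in (authentic, redirected):
    print(name, "->", addr, "validates:", validate(name, addr, sig))
```

A non-validating resolver, of course, accepts both answers, which is why redirection "works" today and why the two mechanisms don't get on.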

Which reminds me, I saw an awesome presentation about visualising certificate trust chains at Tech Uni Berlin a few months back. You'd have loved it, I think... a bit over my head, as I don't grok the details of signing, so for a lot of it I was just nodding.

http://notary.icsi.berkeley.edu/trust-tree/
maradydd
Jul. 7th, 2013 10:14 pm (UTC)
Yup, that's the paper.

The DNS filtering that exists in the wild, at least what I've seen, doesn't change the ground truth in quite the same way that SOPA/PIPA would have. (In one hilarious instance, it also facilitated the construction of a kiddie-porn-finding oracle.) Mind if I defer the longer explanation to tomorrow, though? It's been a long and booze-fueled weekend and it's getting on toward my bedtime.
steer
Jul. 7th, 2013 10:21 pm (UTC)
No worries -- sleep well.

Heh... Richard Clayton... haven't seen him in a while (doubt he remembers me). He was the Cambs guru on security for one of the first projects I worked on at UCL.

I don't think it was just the one hilarious incident, as the Australian list of banned sites is supposedly on WikiLeaks -- though I think it's a duff, messed-with list.
moof
Jul. 7th, 2013 11:15 am (UTC)
Sometimes I wonder if it's possible to know one's own or someone else's Gödel number -- or, if it's possible, whether there are only certain classes of minds (or Minds) that can grasp the sufficiently higher-order logic to do so. (Smullyan probably has something interesting to say about it.)

As a corollary, I wonder how close my emulated social sense comes to the real thing.
maradydd
Jul. 7th, 2013 09:29 pm (UTC)
I don't think mine is very much like the real thing at all, but I have some rudimentary processes for improving it. I don't know whether that necessarily implies that I'm iterating toward the real thing, and in fact I'm starting to suspect it doesn't, but I'm actually okay with that.
whswhs
Jul. 8th, 2013 03:30 am (UTC)
I don't think even neurotypical people have anything describable as accurate social modeling. Our brains evolved when we lived in social groups of no more than a few hundred; research on memory thresholds suggests that a normal memory space can contain ~500 distinguishable entities (for example, primary taxonomic terms such as "dog" and "eel" and "fly"), and I've seen anthropological speculation that this comes from the size of social group where we can know everyone by sight. But we don't live in a society with 500 people, but in one with over 10,000,000 times that many; the number of sovereign states is a substantial fraction of 500, in fact. So what our primary social modeling system is doing is not actual social modeling, but folk social modeling, in the sense in which impetus theory is folk physics or the concept of the mind as an engine for taking attitudes toward propositions is folk psychology.

This applies with particular force to economics, because a society of 500 people is one where the basic economic mechanisms of a market system cannot function: there are too few people for "competition" to mean anything, there's no meaningful division of labor beyond sex roles and maybe age grades, there's no need for a circulating medium and thus no prices, there's no separation of economic transactions from political alliances or political from familial alliances, and so on. Market economies and trade make us rich beyond the imagining of tribal societies, but they depend absolutely on things that can only be crudely and inaccurately assimilated to tribal social thinking.
maradydd
Jul. 8th, 2013 03:32 am (UTC)
I'd like to think that a dearth of initial biases is an asset in this, yes.