
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.

— Sir C. A. R. Hoare

On the heels of I-guess-it-was-Wednesday-I-don't-keep-my-thumb-on-the-pulse-of-this-shit's news that "single-tap zero-character communication tool" Yo had raised over $1M in seed capital comes today's news that Yo leaks users' phone numbers and is riddled with other holes.

This should surprise approximately nobody, and yet it apparently does. By and large, people have come to grudgingly expect data breaches at their mobile providers, their banks, the restaurants and stores where they shop, even their government services — these things are complicated — but surely an app that does just one thing can get that one thing right, right?

The problem is not the one thing. The problem is everything else.

Hoare also famously observed that "One of the most important properties of a program is whether or not it carries out its intended function," and later in the same paper (emphasis mine), "The most important property of a program is whether it carries out the intentions of its user." The distinction here is subtle, but important. When a user installs a program, the user intends that the program carry out its intended function, and also intends that the program not carry out any unintended functions1. The fact that your program can carry out its intended function is not enough if your program also carries out other functions that are actively detrimental to your users, like leaking their phone numbers without their consent.

Time will tell how much of Yo's seed round will leach away into the legal system (or to the FTC, as Parker Higgins points out) as a result of its negligence, and I hope that investors, entrepreneurs-in-residence, entrepreneurs generally, and users pay attention to that figure. If this number is low, that's good news for unscrupulous investors and bad news for privacy (and civic engagement, and so on). Although it generally seems to be the case that the market does not give much of a fuck about data breaches, I would really like to see that seed round evaporate away into damages, because honestly whoever signed off on due diligence for this funding round was either criminally ignorant or criminally negligent.

An important lesson here, for users as well as for anyone doing due diligence, is that permissions don't tell the whole story, particularly when viewed in isolation. (You'll have to click on "view details", there doesn't seem to be a direct link.) "Find accounts on the device" sounds like an innocuous enough capability on its own — but as Glympse's explanation of how their Android app uses the permissions it requests points out, "find accounts" is also required for push notifications. Because that obviously makes sense. (I am not actually an Android developer, but from a little searching it looks like Yo's permissions map exactly to those that push requires.)

I know exactly nothing about how Yo actually manages user accounts, but I will bet anyone alive an Airbus A380 stuffed full of dollars that of the tiny fraction of Yo's supposed ~50,000 users who actually looked at the permissions before they installed the app, the only ones who knew that "find accounts" enables push were Android developers themselves. Of course, it makes complete sense that an app whose intended purpose — or, well, declared purpose, anyway — is to send a single, fixed string literal from one mobile device to another would use push notifications for that (what else are you going to do, long polling?), but did anyone actually check that the code could only send that fixed string literal?

If I decompile the Yo .apk, am I going to find something that looks remarkably like:

static final String yo = "Yo";

void pushMessage(User user, String message) {
  somePushFramework.push(user, message);
}

// ...and then, somewhere in the send path:
pushMessage(user, yo);

The users can't check that, unless they're willing to learn how to use a Dalvik decompiler, and while I aspire to the eternally bright outlook where users heroically take the initiative to learn what their tools are really doing, I live in a world where I have had to make sure that unnecessary abstractions like this do not make it into production code. Except in this case it wasn't "there exists a function that can be hijacked to send an arbitrary message," it was "there exists a function that can be hijacked to set an arbitrary environment variable to an arbitrary value." If that doesn't keep you awake at night, you probably shouldn't be writing software. Or funding it. If you are funding it, though, you should really have someone looking at the code who is kept awake at night by bad architecture.
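To make the architectural point concrete: here is a hypothetical sketch, in Java, of the difference between the hijackable helper and a restricted one. I am not claiming this is Yo's actual code — the class, field, and method names are all mine, and the "push framework" is a stand-in — it just illustrates why a helper that accepts an arbitrary message is a bigger attack surface than one that can only ever emit the fixed string.

```java
// Hypothetical sketch (not Yo's actual code). An over-general helper hands
// an attacker who reaches it a free parameter; a restricted one doesn't.
public class YoPush {
    static String lastDelivered;  // stand-in for a real push framework

    static void deliver(String user, String message) {
        lastDelivered = user + ":" + message;
    }

    // Over-general: anything that can call this can send any message.
    static void pushMessage(String user, String message) {
        deliver(user, message);
    }

    // Restricted: the only payload this code path can ever produce is "Yo".
    static void pushYo(String user) {
        deliver(user, "Yo");
    }

    public static void main(String[] args) {
        pushYo("alice");
        System.out.println(lastDelivered);  // alice:Yo
    }
}
```

The restricted version is boring, and boring is the point: a reviewer can verify by inspection that no caller controls the payload.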

How Yo got from enabling push notifications to leaking phone numbers I genuinely do not know. If I had to guess broadly, I'd go with thoughtless endpoint API design, which is a rant all by itself, and yet another argument for careful architectural review. I don't have much insight into how funders make decisions, but I have picked up a fair amount of understanding over the years about how software defies people's expectations of banality, and if you are not looking at how the software you are thinking about funding might unintentionally carry out something other than the intentions of its user, then you are not thinking about one of the ways in which you are exposing yourself to liability.

Edit: Like I said, I don't follow this shit obsessively, so I completely missed Marc Andreessen almost getting the point but missing it spectacularly at the last second:

No, Marc. Those Georgia Tech kids proved by construction that Yo, in the instantiation that attracted its ~50,000 users and over a million dollars, communicates way more than a single bit. That a seasoned investor would see the potential for one-bit apps (which there most certainly is, and he gets that), but not temper that vision by at least acknowledging the potential for a gap between intention and implementation, is troubling. That any investors went so far as to throw seven figures at this project, apparently without investigating that gap, is insanity.

1There is an argument to be made here about undefined behaviour in programming language semantics and why it shouldn't ever be allowed, since technically speaking, a diabolical C compiler writer who decides that the semantics of attempting to modify a string literal or divide an integer by zero should be rm -rf / is not wrong, but that particular framing of said argument is pretty hyperbolic.
Tom Servo: Are you boys cooking up there?
Mike: No.
Tom Servo: Are you building an interociter?
Mike: No!

— Mystery Science Theater 3000: The Movie

If I had to sum up my life as an algorithm that can be expressed in one sentence, a good approximation might be "if it isn't working, try something else." I spent six and a half years in undergrad, for instance, bouncing from major to major like a robot turtle rebounding off obstacles, every time a course of study stopped being interesting, until I stumbled into linguistics looking for an easy upper-division English credit and found the interest-fueled momentum to graduate. My whole career has been like this, really. I can pretend, and on my resume I do pretend, that there's a coherent trajectory behind it all, but the farthest out any of it was planned was maybe fifteen months and that's because I deferred grad school by a year. (Also, if you think the coherent trajectory depicted on anyone's resume was intended, now you have been disabused of that notion. You're welcome.)

Having an operating principle like this means having, and constantly refining, a reliable sense of what "working" means and looks like. In day-to-day life this sense is fuzzy and subjective and took an awful lot of head-on wall collisions to develop into a robot turtle guidance system that mostly only glances off of obstacles these days. Contexts where it's possible to determine, objectively, a yes-or-no answer to some decision problem are outliers. They're really nice outliers, and I like them a lot, because the hardest thing about "solving" most kinds of problems isn't the work itself, but the uncertainty of not knowing whether or how your solution is going to fail on you. You might be able to make peace with the natural evils of flood and Halt and Catch Fire1, but even an honest mistake is a kind of moral evil -- the kind you train and improve your whole career to be able to avoid. Any situation where you can determine, with no uncertainty whatsoever, that an alternative is the correct or incorrect one is a refuge.

So that works well for some problems at the individual-problem scale, but back out even just a little beyond that and uncertainty floods in around the edges. Decidable problems are priceless; for everything else, there's pattern-matching. (And when that inevitably fails, there's MasterCard.) Most "is it working or not?" decisions I run into are ones I can only pattern-match about. There are so many ways that heuristic decision-making can fail that inevitably some edge case will present itself clearly enough that the scale tips in the "not working" direction. The robot turtle casts about randomly -- or, more realistically, casts about according to some learned casting-about heuristic -- and then goes ambling along its robot turtle way. In our case, we looked for problems that lent themselves to the tools we were learning to trust, and hopped from one to the next to the next.

It's a little startling at first to look back and realise that the robot turtle has been hopping along for the last few years with very little course correction because it keeps not being obviously wrong. Of course, langsec so far has mostly kept itself to syntax and those parts of semantics that syntax can constrain, and syntax is decidable. That's starting to change. Not the decidability of syntax, I mean, the scope of langsec. I even think it's a scope expansion that Len expected:
I believe that usability is a security concern; systems that do not pay close attention to the human interaction factors involved risk failing to provide security by failing to attract users.

Usability has been the bête noire of security tools since the beginning of security tools: the sheer number of potential adversaries, the broad differences in scope of their capabilities, and the wealth of strategies (some time-tested, some showing their age, some deprecated but still hanging on in legacy APIs, and some as yet unproven) for countering them means there are no one-size-fits-all tools, only tools that apply in a given context and tools that don't. Tens of thousands of hours go into the design, peer review, implementation, and implementation review of crypto libraries and the applications that use them, yet end-to-end-encrypted instant messaging is only just now coming to Facebook via a third-party plugin. OTR has been available in open-source clients for years, but the fraction of IM users who use these clients is vanishingly small compared to the crushing volume of Facebook. Getting realtime browser-to-browser instant messaging right is hard enough even when you're Facebook. The wealth of browser platforms (and platform versions) out there does not help the situation one bit, and if you want to provide end-to-end encryption in the browser, that's a problem you have to charge head-on. And when your business model is "moar users," fucking up your usability (or the usability goodwill you've developed over time) is Not Done. First they'd have to figure out what security properties they wanted to add to Messenger, then they'd have to work out a protocol that provides those properties, there'd be tons of cross-browser issues to work out, and they still wouldn't hit the mark because browser delivery of end-to-end encryption software doesn't protect the user from whoever's doing the delivering. That's a design issue that goes right to the metal of the browser, and I'll go so far as to argue that a lot of that is because crypto APIs are terrible. Yes, the ones your browser uses the linker for.

The problem cuts that deep for the inverse of the reason that Facebook is conservative about UX: cryptographers are conservative about correctness, because their jobs rely on it. It's not that security and usability are incompatible, it's that people who care more about security are more motivated to do security things and people who care more about usability are more motivated to do usability things. But when the access patterns of software languages and libraries make it easy for the developers who use them to model their intentions, and the design elements and interaction patterns of interfaces make it easy for their users to express their intentions to the software (and those models agree where they meet up) —
And every phrase
And sentence that is right (where every word is at home,
Taking its place to support the others,
The word neither diffident nor ostentatious,
An easy commerce of the old and the new,
The common word exact without vulgarity,
The formal word precise but not pedantic,
The complete consort dancing together

— T.S. Eliot, "Little Gidding"

— the result is disruptive in ways with the potential to rock far-away foundations.

Justin Troutman recently contacted me to let me know that he's looking to meet up with people interested in the boundaries of competence between UX and crypto at HOPE X this summer. (I am assured by a reliable source that the keynote will be amazing. I can't make it, but you should go.) This is in preparation for a Much Bigger Thing to come, which I do not know how much I can speak publicly about yet, but I think it is pretty fair to say that Justin and I are looking at this problem in compatible ways and I think he's putting together a big step toward bridging the conceptual gaps that make Caring About the Opposite Problem(tm) hard.

Our first official academic workshop is tomorrow, at the conference Len always desperately wanted to get a paper into. We're a real little field now. C'mon, robot turtle, let's go try our hands at some even bigger problems.

1Okay, HCF isn't really a natural evil, but the joke wrote itself.


We are in the process of securing a full-depth, 1.80 m tall rack for the flat. It may be possible to squeeze it in the laundry room behind the dryer, although Tom is more of a mind to put it next to the refrigerator, as they're about the same height and it will get better airflow that way.

This will provide a home for thequux's massive collection of vintage hardware, which does not yet contain a LISP Machine but once we get the VAXen racked let's talk. Power management for this project should be fun, since I don't think we intend to leave these machines on all the time, as that would be loud and expensive -- admittedly, once we replace the old 110V power supplies with modern, more efficient 220V ones, power consumption will go down, but the Sun 2 doesn't need to be on all the time -- but it would still be nice to be able to spin machines up or down easily and also remotely. Part of the goal here is to have the world's most baroque malware disassembly lab; I am consumed by the mental image of BadBIOS waking up on an Alpha and mumbling "where am I and who the hell did I go home with last night?"

Incident to all of this is that I have actually made enough headway on the mountain of boxes that has been the front half of my (very open plan) living room for the last year-plus to do something about the rackable boxen. For much of that time I have not actually been here, but now that my life is actually kind of settling down again it is long past time to finish goddamn unpacking. With that in mind, there will probably be progress photos as I get the library / atelier together. (Over the weekend I started an experiment in using bookcases as room dividers; once I have another pair of bookcases, it will also be an experiment in using the ends of bookcases as tool storage. Conveniently I have guidance from an expert in the practise of Billy-based interior remodeling; these are not going to be load-bearing walls, but frankly neither are most of the interior walls here and they're going to be anchored to the concrete wall to which they are at right angles once I find my masonry bits. Maybe the floor too, since it's also concrete.)

Anyway, back to work. Still here, therefore still invincible.

A writing insight

Before using a simile in persuasive writing, think of a story about only the object of the simile, such that the narrative makes the subject of the simile self-evident. Like a fable.

Truth is stranger than fiction because fiction has to make sense.


You're forgetting one thing, Mr. Obama.

Quoth the White House Office of the Press Secretary:
However, we oppose the current effort in the House to hastily dismantle one of our Intelligence Community’s counterterrorism tools. This blunt approach is not the product of an informed, open, or deliberative process. We urge the House to reject the Amash Amendment, and instead move forward with an approach that appropriately takes into account the need for a reasoned review of what tools can best secure the nation.
In what way was the NSA's blanket acquisition1 of everyday Americans' telephony metadata the product of an "informed, open, or deliberative process"?

Congress has not had the opportunity to review these so-called "counterterrorism tools", but Congress absolutely has the authority to do so under its enumerated powers. Let's start with appropriation, since that's what the Amash amendment addresses. If Congress cannot determine whether the executive branch is complying with the law -- and that includes the Constitution, the highest law of the United States -- then Congress may jolly well withhold funding until its concerns are satisfied, because of Article One, Section Nine:
No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account of Receipts and Expenditures of all public Money shall be published from time to time.
That says law, not executive order. Both the executive and legislative branches seem to have forgotten this over the last decade or so, but just as an army marches on its stomach, the executive branch marches on appropriations authorized by Congress. Apparently the White House has been reminded of this fact now, since this is quite possibly the first time the Press Office has been moved to remark on a floor amendment during either Obama administration.

I have to wonder why none of Senator Ron Wyden's many attempts to determine the scope of interception under the FISA Amendments Act prompted a White House call for a "reasoned review of what tools can best secure the nation"2. All the concerns that the "recent unauthorized disclosures" have dragged into the spotlight were already there -- it's just a lot harder for the NSA to deny them now.

Which brings me to the other relevant Congressional power, one which our last Democratic President became all too familiar with, from Article One, Section Two, Clause Five:
The House of Representatives shall choose their Speaker and other Officers, and shall have the sole Power of Impeachment.
This power applies to "the President, Vice President, and all civil officers of the United States" -- which very much includes Director of National Intelligence James R. Clapper.

What this comes down to is that the White House now has a game of chicken to play. It is a game that is perhaps best illustrated by the story of the last time it was played, a time now known to legend as the Watergate Affair.

It began with a break-in3 at the headquarters of the Democratic National Committee in the Watergate Hotel. The criminal and media investigations of the event -- with help from an anonymous leaker who eventually turned out to have been the Deputy Director of the FBI at the time -- turned up evidence of shady financial dealings and even shadier political spying and sabotage. Then one of the Watergate burglary defendants threw his handlers under the bus, which led one of those handlers, John Dean, to throw Nixon under the bus. Nixon fought the investigation like hell, refusing to answer a subpoena for audiotapes made in the Oval Office which ultimately implicated him in the obstruction of justice. He took it all the way to the Supreme Court, he lost, and as the whole conspiracy came undone, he resigned before the House took its full vote on impeachment.

Edward Snowden has conveniently skipped us past all the dull bits before Congress figures out that it has a job to do. There's evidence of wrongdoing on the table, and he's already said there's much more where that came from. Snowden's not a stupid man; he's not going to release documents indiscriminately, but neither is he going to run out of them any time soon.

There are other variables to consider. There may be other current or former three-letter-agency employees or contractors out there, weighing whether to go public with evidence they have. And if Congress appoints a special prosecutor to investigate Clapper or anyone who came up in Clapper's testimony, that's a whole different kind of pressure -- the kind that, as the Watergate investigation showed, tends to lead to people metaphorically4 ending up beneath public transit.

So what the administration has to consider is how long it can keep its foot on the pedal before it collides head-on with Congress versus how long Snowden's willing to wait before ... injecting nitrous into the fuel mixture, I guess, in this weirdly vehicular extended metaphor. They'd like us to think that they're putting the brakes on, with this call for "debate" and "discuss[ing] critical issues with the American people and the Congress", when really they know that Congress has just floored it. They're telling Congress to swerve.

What's the administration worried that a thorough Congressional investigation might reveal?

After all, if the executive branch has done nothing wrong, it ought to have nothing to hide, isn't that how the argument usually goes?

1Since that's the term that the NSA seems to prefer to use where a layperson would use "collect" -- seeing as how they've redefined "collect" to mean "actually looked at" so that they can claim before Congress they haven't been "collecting" the telephony metadata of Americans not under investigation -- I'll just go with that, so that there's no ambiguity.
2I have a few thoughts on that.
3The White House Press Office referred to it as a "third-rate burglary attempt."
(The following, plus a few side anecdotes, was delivered at SIGINT 13, Cologne, Germany, July 5th, 2013. Here's the video.)

About a year and a half ago I was in Brussels for a workshop that Google and Privacy International hosted. The goal of this workshop was to develop policy language around privacy that Google could use in negotiating with governments -- I'm guessing trade agreements and things like that, no one was especially able or willing to give me specific details -- about user privacy, what sort of protections have to be applied to data on the wire, data at rest and so on, and what governments can and can't do with respect to the data that private (or publicly traded) companies collect and use in the course of their business.

Now, this workshop was held under the Chatham House rule, which says that I can quote things that people said, but I can't attribute them directly. On the first day of the conference, they offered two tracks, a technical track and a policy track. There were a bunch of really sharp technical people there, academics and industry people and independent researchers and like half of the Tor Project. While I didn't know most of the policy people, there were a whole lot of folks from the EFF and other good-guy kinds of organizations, and I have to figure if they managed to pick up a qualified slate of technical experts they probably did a decent job on the policy side too. But you could go to whichever track you wanted, it wasn't segregated by specialty or anything like that.

So we all meet up, it's about 9 o'clock in the morning, there's coffee and about half an hour to meet-and-greet, and then they sit us all down and give us an overview of the next two days and tell us we can go to whichever room we want, technical or policy. And I notice that every hacker I recognise, along with the computer science academics and so on, they're all headed into the tech room. And I'm like "hmm." Because sure I know a thing or two about Tor, but they've already got half the Tor Project. Not to mention, the academics there knew everything I know about privacy and then some, and there were enough people who knew enough about langsec that even if it came up they didn't really need me, and apart from that I'm not really sure what I have to offer. So I decide okay, since the point of this whole affair is to produce policy language anyway, I'll go see if I can contribute to that. Make sure there's an engineering perspective represented, that kind of thing.

Now remember, Chatham House rule, so I can't directly attribute quotes. But what I can tell you is that maybe 45 minutes, an hour into the discussion, some fuckup (ahem) who'd been sitting there fidgeting at the way things had been going pipes up and says, "Can we take as axiomatic that it's a bad idea to just up and break the Internet?" And the whole room turns and says, "NO." I mean, it wasn't quite as direct as that, there was some spirited discussion, but it very quickly became clear that to everyone in the room that was willing to open their mouth apart from this one fuckup, the very idea of a global interconnected network was something like a lump of modeling clay that you could squish and mold, shape and reshape by fiat. Never mind that there was only a thin wall between them and a whole room minus one full of engineers talking about the incredibly intricate details and constantly moving parts of this really-quite-fragile-when-you-think-about-it putative lump of modeling clay.

A little later, this same fuckup was having lunch, and got into a conversation with one of the other people from the policy room, during which the other person advanced the claim -- and I am pretty sure they were not being ironic -- that mathematics had to be subordinated to national sovereignty.

That was the point where I said to myself shit, y'all, we've got a problem.

Because as far as I can tell, every single person at that workshop was supposed to be one of the Good Guys. But when the Good Guys can't even agree on what reality is, how far can they really get toward agreeing what good is?

So now it's 2013, and the front page of pretty much every major metropolitan newspaper has been carrying articles for weeks on PRISM, on Edward Snowden, on the NSA's actions in Germany and the rest of the EU. It's tempting to think that the lines are really clear: the NSA violated everyone's rights not to mention EU data protection laws, therefore NSA bad, therefore everyone else good, which includes Edward Snowden, therefore what the hell are all these other countries doing hiding behind excuses like "he has to apply for asylum from within our country"? And then Venezuela comes into the picture and there's some arguing about trade agreements, and all of Europe's foreign ministers are suddenly very preoccupied and there's bad news out of the European Central Bank again and we all come off looking like sellouts. And everyone around the world feels vaguely unsatisfied.

Crucially, nothing has actually happened.

Perspectives may have changed. Opinions may have changed. But Edward Snowden is still somewhere in the transit zone at Sheremetyevo, and PRISM is, as far as we know, still in operation. Enormous amounts of cogitation have been expended over this topic. Millions of man-hours of human computation -- and at least an equivalent amount of CPU computation -- have been devoted to it. People obsess over the ins and outs of the rights and wrongs of what Snowden did, or of the legality or the illegality of the NSA's actions, and meanwhile the ingestion systems merrily continue ingesting.

Because nearly everything that matters is a side effect.

I should explain, at this point, exactly what it is I mean by side effect, but I'm going to have to start with a counterexample. In pharmacy, for instance, there's this notion of the clinical effect and the side effect, where the clinical effect is the effect you want to produce, like reducing pain or cooling off a fever, and the side effect is something that you don't want to produce, like a metabolic product that's incredibly toxic to your liver. This is the usage that's made its way into everyday language, and it carries with it this notion that there are always tradeoffs. You can take just enough paracetamol to take away your headache without also killing your liver, and this is reliable enough across the entire human population that we feel comfortable selling it over the counter and giving it to children. And we end up thinking about side effects as something that we manage, in the case of paracetamol adding up to a few hundred thousand emergency room visits per year due to accidental overdose. But I'll get back to this.

In computer science, what we mean by side effect is anything that changes the state of the system. If the intended result of your computation produces some change in state, then it's actually a side effect. If an unintended result of your computation produces some change in state, it's also a side effect. Intent means nothing whatsoever. You could have given that person a third dose of paracetamol after they threw up the first two because you were trying to help them with their fever and didn't realise how quickly the stomach absorbs paracetamol -- this actually happened to a friend of mine -- or you could have been straight-up trying to murder them; computer science only acknowledges the side effect of the person landing in the emergency room with a failing liver. (She survived, by the way.)
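The computer-science definition is blunt enough to sketch in a few lines of Java. Everything here is my own illustration (the names are mine, and the "liver" is just a state variable): the point is that both the helpful caller and the malicious caller produce the identical side effect, and the language semantics record only the state change, never the intent.

```java
// Minimal illustration: a side effect is any change to system state.
// Intent is invisible to the semantics; only the mutation is real.
public class SideEffects {
    static int bloodstreamMg = 0;  // the system state being mutated

    // Whether this call is meant to cure a fever or to cause an overdose,
    // the side effect -- the state change -- is exactly the same.
    static void dose(int mg) {
        bloodstreamMg += mg;
    }

    public static void main(String[] args) {
        dose(500);  // helpful intent
        dose(500);  // murderous intent
        System.out.println(bloodstreamMg);  // 1000 -- intent left no trace
    }
}
```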

So this is why, the other day when a Belgian business news reporter interviewed me about PRISM and finished off by asking for my #1 piece of security advice for Belgian companies, I told him, "Follow the OWASP best practices and focus on your responsibility to your customers." And he got that, which I thought was encouraging. If you're a European company and a copy of your trade-secret algorithm is sitting on an NSA hard drive right now because somebody's git traffic transited through the US, it'll still be sitting there tomorrow and there's not a hell of a lot you can do about that. But you can take steps to harden the machines that algorithm is executing on, and those steps are persistent side effects. They have lasting impact. They matter.

And there's an extent to which I feel like I'm preaching to the choir here, because we get that. It's almost like a sense you develop when you observe a system over a long period of time, whether we're talking about a telephone trunk system or the time-sharing systems at MIT or the early ARPAnet -- which is all way before my time, but that's fine, because it's not a sense you have to be in some particular time or place to acquire. I got mine on IRC. It's network proprioception. Boxes come up, boxes go down, the shape of the network changes, and as you interact with that network and start learning what all its little flags and options do, how change propagates, you develop an awareness of the state of the system that I think it's really only fair to compare to your awareness of the state of yourself. Certainly from a philosophical standpoint they're about equally hard to talk about. But I think it's fair to say that hackers who know what they're doing -- reasonably competent hackers, let's say -- correlate inputs to a system with outputs from that system and, when they can, internal state changes in that system, in much the same way that people who are reasonably self-aware can think abstractly about what they experience, consider their internal responses, and produce some outward response (or not, as the case may be). And, for that matter, learn from their mistakes! I think that in much the same way as Douglas Adams characterised the knack of flying as learning to throw yourself at the ground and miss, it's entirely fair to characterise the knack of hacking as learning how you yourself can fail more quickly until whatever you're analysing fails in exactly the way you want it to.

But having this kind of mindset at all -- which is really just the scientific method all over again, nature being obeyed in order to be commanded and all that -- turns out to be rarer than you'd expect, at least if you're me, which, to be fair, means that most of the people you spend any time with at all are scientists, hackers, or both. This is not all that large of a sector of the population to begin with. And we're in a funny situation here, where for the last couple of years there's been an unusually large proportion of international attention paid to the hacker community and hacker culture by people who don't have the faintest fucking idea how we think. There's a saying for this in the United States, "armchair quarterbacking"; the metaphor refers to the guy who's sitting there in his armchair at home, drinking beer and shouting at his big-screen TV what he thinks the quarterback ought to be doing. Maybe he's played some football, maybe he even coaches kids on weekends or something, but there's this tacit understanding that for all his rhetoric -- even when he's right -- he's still just there in his armchair, if he really understood how to coach a team to glory he'd be out there in the game putting that understanding into practice.

This gets murky in the world of policy, where you can have economists like Felix Salmon who wax rhetorical about Bitcoin without having the first fucking idea what a hash function is, much less how one functions as a component of a billion-dollar financial system. He literally does not understand what he is talking about, but he understands enough about money -- or, at least, what "money" means in the parlance of the modern international finance system -- that he thinks he understands what he's talking about, and worse, that people should listen to him, even though what he's actually talking about and what he thinks he's talking about are systems as wildly disparate as ... two very disparate things. If Bitcoin is going to make any sense to you, you have to accept the notion that the levers and dials that financial regulators are used to being able to fiddle with just aren't there. The currency itself is inherently resistant to regulation, because Satoshi Nakamoto built that system like a Deist God; the parameters of the system, from block difficulty to reward halving time, were built into that system from the moment the genesis block got written out to disk.
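And those baked-in parameters are concrete enough to write down. Here's a minimal Python sketch of one of them, the block-reward schedule, using the constants Bitcoin's consensus rules actually fixed at genesis (a 50 BTC initial subsidy that halves every 210,000 blocks). The point isn't the arithmetic; it's that this is a dial nobody, regulator or otherwise, gets to turn after the fact:

```python
# The block-subsidy schedule baked into Bitcoin at genesis: 50 BTC,
# halved every 210,000 blocks. There is no knob here for a central
# bank to adjust; the schedule *is* the monetary policy.

HALVING_INTERVAL = 210_000   # blocks between halvings
INITIAL_SUBSIDY = 50.0       # BTC reward at block height 0

def block_subsidy(height: int) -> float:
    """Return the block reward, in BTC, at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:       # after 64 halvings the subsidy is zero
        return 0.0
    return INITIAL_SUBSIDY / (2 ** halvings)

if __name__ == "__main__":
    for height in (0, 209_999, 210_000, 420_000):
        print(height, block_subsidy(height))   # 50.0, 50.0, 25.0, 12.5
```

Ten lines of arithmetic, and it's more immutable than any act of any parliament, because changing it means convincing the entire network to run different code.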

And every time I hear one of these armchair cryptocurrency specialists -- who may be top-notch economists, but who crucially spend so much of their time thinking about how to manage systems where "unit of account" and "unit of exchange" mean the same thing that the very idea of decoupling those concepts is crazy moon language -- talking about how inherently doomed Bitcoin is, I kind of want to come back with "so how many billion-dollar financial systems have you built in the last couple of years?" Think about that for a second. We live in a world where if somebody comes up with a useful enough idea, and reduces it to practice in code that other people can actually use, and enough other people decide it's also a useful idea, a couple of years down the line it becomes a billion dollars of monetary capacity. Obviously I'm not going to pretend that's a billion dollars worth of state change -- a billion dollars worth of side effects; a lot of the volume that goes into making that number that large is people chasing bubbles, and it's reasonable to expect the usual sort of outcomes you get from chasing bubbles, namely other floating currencies disappearing into thin air when people decide it's time to get out. But I've also lost count of how many times I've heard the usual pundits predict that surely, this Bitcoin price spike is going to be the one that kills the golden goose ... and every time, like clockwork, the damn thing crashes, dusts itself off, and keeps going. It's almost as if being able to move units of exchange around internationally without having to pay rent to the established financial system is something people find value in.

So it should surprise approximately no one that the next line of defense is, of course, regulation. What surprises me is that it's taken as long as it has. Satoshi did an amazing, unprecedented thing: he designed a protocol that inherently resists tampering. In fact it's so inherently tamper-resistant that you can't actually regulate Bitcoin; you have to either take over the entire network or take a step back and regulate the exchanges where people turn other currency into Bitcoin and vice versa. And I get where the Winklevoss twins are going when they say that regulation means that Bitcoin is maturing as a financial instrument, but I don't for a moment think that's necessarily good for users. If Bitcoin "maturing" means that the majority of its users have to rent their liquidity from a regulator-approved set of oligarchs, then Bitcoin's advantages against other currencies will evaporate. If that happens, the status of the financial system remains a lot closer to quo than it otherwise would. And I can't think of many things that an established rentier likes more than the status quo.

Because nearly everything that matters is a side effect.

Now, I'm hardly going to fault Satoshi for not solving the liquidity problem in addition to not only solving the double-spending problem in a distributed setting, but also doing it in a way that people ended up using. One side effect at a time is fine, especially when you expect it's going to be a big one and you want to find out whether it even works the way you anticipate it will. But this leads me to the next category of people who are suddenly especially interested in What Hackers Do without giving much thought to why, and that's people who desperately want to stave off any kind of side effects at all.

Let me give you a recent example. Maybe a week or two ago on Hacker News, I came across an impassioned article about the difference between science and technology. The author's primary claim was that although the process of scientific discovery and the process of technological creation -- say, performing an experiment to test a hypothesis versus designing and implementing a protocol -- are both performed by humans, who have politics, and therefore both processes have political effects, the outcome of the scientific process is apolitical, because nature remains the same no matter what your view of the world is. And I'd even agree with that. But then he advances this claim:

TCP/IP et al are technologies created by people (smart, well paid white guys, typically) with politics (as much as they deny it, because they're scientists). You can probably say they've made a blip in our politics.

They are inherently political, we need to work out what their politics is, what they encourage or discourage, before we use them to solve political problems.

Okay, Mr. Smart, Well-Paid White Guy. (Dude, do not give me that look. You live in the western hemisphere and have a blog; you are paid better than most of the planet.) We'll just tell damn near everyone in the Middle East, not to mention every single Kenyan who's been coming up with uses for GSM that the makers probably never even imagined much less intended since long before there was an Arab Spring to vex your stony political sleep, that they need to put down their mobiles and back away from Twitter, because the Flying Spaghetti Monster only knows what those beastly protocols might encourage or discourage. (Hosni Mubarak had a few ideas, which is why he decided to shut them down entirely. You can see how well that worked out for him.)

I'm used to reactionaries; I grew up in Texas. I'm just not used to reactionaries coming in from the left. I suppose it's a sign that the left is maturing, in much the same way that Bitcoin is maturing, which is to say becoming part of an established system that finds side effects existentially threatening. And if you can con someone into holding still for fear of what waves they might make if they were to move, whether it's through guilt or fear or what-have-you, you no longer have to worry about their side effects. It's the liberal version of "fuck you, got mine."

Now, I got my first taste of this right around 25c3, when there was some press coverage of the biohacking work I'd been doing with lactobacillus. If you ever want to see a Democrat supporting gun rights, telling him one of his neighbours is doing synthetic biology in their kitchen seems to work -- I have never gotten more death threats than I did when the Huffington Post picked up that article. And we can talk until we're blue in the face about why that is, but I think what's most interesting is that when presented with a sufficiently large example, people will blithely throw away what up until then they'd considered some of their most cherished beliefs, like guns being evil or murder being wrong, at least for the sake of argument. Obviously no one's come up and shot me yet, so apparently no one's completely pitched those beliefs out the window, and I'll take that as a good thing. I'm in favour of not being shot. But I'm also in favour of change I can see, not merely change I can believe in. If that means poking the status quo with a stick to see what it does, I'm more inclined to do that than not. And if it responds, I'm just as inclined to do it again, like that XKCD comic with the electric shock button. Maybe I find out a little more about how it works. Maybe I find out a way it breaks. Either way, I've learned more about it than I knew before. And, crucially, I never would have found out if I hadn't picked up that stick.

You can think of the human brain in a lot of ways, but probably the most useful way I know of to think about it is as a massively parallel pattern-matching machine. Your neocortex learns to recognise patterns, and it builds an ontology out of those patterns, so that from light and shadow you can discern edges and from edges you can discern shapes and from shapes you can discern whether what you're looking at is something you've already identified or something novel. We quite literally spend the first couple of weeks of our lives learning how to see and hear: the machinery is there, we've already been using it in utero, but now we have to adapt to this weird outside-the-uterus environment and that means learning how to use those senses all over again. But the secret is, you never stop learning. The human brain is amazingly plastic, well on into adulthood, as long as you're willing to continue exposing it to novel experiences that it has to learn to pattern-match. Preferably lots of them, so that you don't over-train to an input set that's too small.

I can't tell you what a "social sense" feels like, at least not the way I can describe network proprioception. I was born without one and I'm still working on putting one together from the parts I have available. But I want to know what we could build if we had people who developed proprioception for, if you will, the body politic. We may very well already be creating those people, given that Western children now grow up in a society where social graphs as graphs are a major input on a daily basis. I look forward to seeing them grow up. But if those kids aren't kicking the tires -- which is what kids are supposed to do in the first place, and I guess what we never grew out of -- where are they going to find the side effects that will tell them how these network effects behave?

If you are one of the people that Georgia Weidman refers to here:

Conference staff was originally very supportive. But then they went to hear his side of the story and they suddenly wouldn’t even look at me. I realize it’s a complicated situation, but what I hit myself in the eye? I asked an organizer point blank if he believed me, and he said he didn’t know. I don’t know what the guy’s story is, but from the police and the conference’s refusal to act, I assume it’s pretty convincing. Hotel staff pulled the security tapes. Someone I thought was a friend of mine watched them with hotel staff. The general jist I got from the interaction was because I was on the tape letting him into my room, walking in the hallway with him, etc. I must be lying. Where in any of that did I consent to unprotected sex, being hit, etc?

The interesting stuff is the reactions. The people who say things like, “This isn’t what I think of course, but I bet a lot of people don’t believe you because you flirt on Twitter,” or “Everyone saw you kiss so and so at this party, so of course no one believes you didn’t want to have sex with that guy.”

then you are fucking terrible at incident response and should find a career field where you are not responsible for the safety of anyone or anything of more value than a common goldfish. Preferably not even yourself, because you're not even competent enough for that.

I refer you to the following excerpt from slightly earlier:

So I gave [the police] my driver’s license and after they left I tore the room apart looking for my passport. In all my passport, wallet, iPad, one of my test phones, one shoe, and my Tag Heuer Carrera watch were stolen. Anyone who is into watches will know my pain at losing it. He originally said he had nothing of mine when questioned by hotel security. Then he magically found my iPad and passport but nothing else. The phone was later found in the hallway of his floor of the hotel. The rest of my things were recovered the next evening from his room by conference staff.

So let me get this straight. Because you have some uncertainty about whether a sexual assault occurred, nothing else happened either? What about the missing watch, the missing iPad, the missing shoe, the missing passport for crying out loud? Have we suddenly Quantum Leaped into a timeline where a person confessing to taking items of value and then returning them is, somehow, magically, not incontrovertible evidence of theft?

Take your time. I'm not going anywhere.

As infosec professionals it falls to us to recognise, as quickly as possible, any and all indicators of compromise, and prioritise our responses to them. If you are somebody who looks at Shaky Evidence For A and Irrefutable Evidence For B (Not Conditional On A), but decides that because A's evidence is shaky then B can't be true, then you fail at logic and should find a job that doesn't require it before your incompetence hurts someone. I don't care whether you have strong feelings about A or not -- taking a job in this field means committing to evaluating evidence objectively and taking action based on that evidence, and if your feelings about A cloud your ability to evaluate B objectively then you suck at your job. This goes for any A. If the Russians know that you have an irrational hate-on for the Chinese, and they hit you with one exploit that might be Chinese but you can't really be certain and another that is indisputably Russian, and your response is, "Those dirty Chinese, let's get 'em!" then the Russians win that round and you deserve to be mocked. Also fired.

I mean, seriously. I believe Georgia completely, but for the sake of this discussion I will go so far as to stipulate a situation where not only no sexual assault occurred, but no physical assault occurred (so, like, they both walked into doors? a particularly vicious door that left him bleeding freely from the temple? OK, whatever you say). How, then, do you explain the bizarre assemblage of stuff he took from her room and subsequently returned? "She loaned me the watch, phone, and iPad" strains the bounds of belief, but "she loaned me one of her shoes" beggars it entirely. Stop straining so hard at those gnats, you'll hurt yourself.

Not to mention the passport. I don't know whether any of you have ever been without a passport in a foreign country; I have. Mine was stolen on a plane from DC to Brussels last August, and I spent four days in a Belgian border detention center because of it. I have also been raped. If I were forced to choose between reliving either experience exactly as it happened originally, I'd pick the rape, no question. Granted, it sounds like Georgia had the support of the US Embassy, and could have probably gotten a replacement passport without a side trip through Club About to Be Deported, but that does not excuse the fact that taking someone's passport is serious fucking business. Keep in mind, a United States passport is not the property of the person to whom it is issued, but of the State Department. I am not a lawyer, and cannot credibly tell you whether a passport is the kind of "public record" that 18 USC 641 applies to, but if I were Fernando Gont -- the thief Georgia was reluctant to name, but said I could -- I would check with an actual lawyer and find out just what my actual liabilities might be, at least before stealing another fucking passport again.

But back to you, the people I'm addressing. I believe it to be the case that this community is one that does not countenance rape or assault. Perhaps the evidence of assault in this situation is too tenuous to meet the burden of proof for that, which is why I spotted you that point to begin with. I also believe it to be the case that this community is one that does not countenance theft, and what else can you call "taking physical objects that aren't yours and not returning them"? Gont didn't wake up the next morning with a throbbing headache, find a mysterious passport and iPad among his belongings, and take them to the hotel staff saying "Dear me, I appear to have come into possession of Georgia Weidman's passport and perhaps this iPad is also hers"; they had to question him -- and at first he lied about it. Then the conference staff had to recover the rest of her things that he'd taken. Including her shoe, let me remind you. Who takes a shoe? What the fuck is wrong with this guy? What the fuck is wrong with you for not being as repelled as I am by this demonstration of his apparent get-hammered-and-steal-shit-from-people proclivities? Keeping on our physical-security toes is all well and good, but if people are wandering around conferences getting plastered and going "oh I like that, it's mine now," then maybe those people don't need to be drinking. Or maybe they don't need to be at our conferences.

Or is someone going to try and rationalize theft away now? Note, please, that I'm not saying "if you accept that Gont stole Weidman's and the State Department's property, you must also accept that he assaulted Weidman"; rather, if you accept that Gont committed crimes of property, one of which is probably a federal felony, why would you just handwave a thing like that away?

I'm waiting.

For many years, one of my rules for dating me has been:
I do not date anyone crazier than I am. Note that I am known for singing in public and juggling fruit in the grocery store, so by "crazy" I do not mean "weird"; I mean "mentally unstable in ways that make it difficult for you to conduct your day-to-day life". Broken is okay. Demolished is not.

(N.B.: This rule is not symmetric. Sure, I'll date people whom I'm crazier than.)

I'm coming to realise it's time I revised this rule, though it's going to be hard to articulate it quite as succinctly as "no one crazier than me." Terseness of expression is often used as a proxy variable for elegance, but if used as the only proxy (which is an oversimplification of the computer-science lines-of-code heuristic), it has a huge false positive rate; an elegant description must also be correct, so a short rule that erases nuance is less elegant than a longer one that preserves it. Terseness is wonderful for readability, and certainly in a marketplace of ideas where the unit of account is your attention, the incentive to sacrifice clarity for eyeballs is enormous. But let's be realistic here, this is my D-list personal blog that has always been more for my benefit than for yours, Gentle Reader, and in any case I have always been more interested in being correct than being popular. So let's see if I can come a little closer to the former.

I was up pretty late a few weeks ago, talking with a close friend about Len. She'd been wanting to have this conversation since roughly around the time he died, but one thing or another had always gotten in the way of it, leading me to anticipate something particularly fraught and minefieldy -- I certainly don't put off a discussion for that long unless I expect it to explode on contact. So I was surprised when the upshot of the whole thing turned out to be that Len was a difficult son of a bitch to deal with sometimes.

That might come off as dismissive, but really, once it got underway, the entire conversation was roughly five hours of this:
FRIEND: So this shitty unresolved thing happened once. [describes]
maradydd: Yeah, that was really shitty and unfair. Here's an experience I had that was similar. [describes] I think both experiences have to do with [reasons].

on repeat. No bombshells, no deep dark secrets -- one or two things I hadn't heard about while he was alive, but everything she brought up was completely consistent with my still-operational model of Len over the eight years I knew him.

I was relieved, in the weeks after Len killed himself, that none of the obituaries or eulogies tried to canonise him in the way that eventually happened to Aaron Swartz. Eleanor Saitta, for instance, wrote:
Len Sassaman had struggled with depression for a long time, but he’d struggled against other things too. Len was a cypherpunk. He worked to give people tools to communicate securely in the face of government oppression and corporate oligarchy, whether that government is in China, Iran, Syria, America, or here in the UK. Len was brilliant, but he also saw this oppression in very immediate terms, even when the signals were less obvious than they have been in past months.

This made him, to be frank, difficult. He framed the world as he saw it, very starkly, and you were either with him or against him on every issue. He spent a decade and a half fighting a war that no one else saw, and it killed him.

Obviously this doesn't touch on his personal life, but it's not a far stretch to conclude that someone who saw the broader social sphere in black-and-white might also see the narrower one the same way, and in Len's case you'd even be right. Gentle Reader, I invite you: if you thought Len was a handful on the Internet or at conferences, imagine living with him.

I suppose in my case, it helps that I have my own mile-wide cantankerous streak, fit to the task of locking horns with any curmudgeon this side of Walter Matthau's character in Grumpy Old Men. Not everyone has this tendency, and not everyone who does embraces it as a defining character trait. (I also invite you to consider, Gentle Reader: in light of how much I argue with other people, imagine how much I argue with myself.) This is tempered by what feels like an innate drive to be kind to people I like, and by some internal incentive reorganisation that followed a realisation: arguing can be an enjoyable pastime, but its rewards are usually transient, and indulging in it as often as I did was detracting from my effectiveness at pursuits which, while more time-consuming, produce more enduring rewards (generally: building things). Still, I will never mute it to the point where I don't object to unreasonable or unfair propositions, even from people I care about, because dammit, that behaviour is useful and more people should have it interpersonally.

My friend does not, with respect to people she cares about; she also has a mile-wide caretaker streak, and Len was both depressed and chronically ill. You do the math. The answer I get is lots of pent-up resentment on her end, which squares with her self-reporting, while Len doesn't feel anything, because he's dead1. Now, I don't think anyone regards "being a pushover when it comes to your friends" as a form of mental illness, myself included, nor do I think it should be. But it is a way of being that can lead to significant unhappiness if one has friends who are abusive or even just overly needy; even the most devoted caretaker still has other things to do every once in a while. As such, I think it's not unreasonable to say that her personality and his were simply incompatible in this respect, despite all the other things they were so compatible about, and that they were either going to need to make some contextual behavioural changes (he stops asking her for help, or she stops offering, or both), or to stop interacting at all.

This gets a bit more difficult when the incompatible traits are somewhat more fundamental: things that a person doesn't want to, or can't, or believes they can't change about themselves. Often these traits aren't especially evident until people have spent a couple of years getting to know each other, or they're not very easy to articulate until people have spent a couple of years trying to put words to them. And I've been staring at this paragraph for a couple of days now; it's becoming clear to me that I'm not ready to write this part right now. So I guess I fail at correctness, since correctness requires completeness.

But one last thing before I go; it's important.

My friend was reluctant to have this conversation with me at all, despite clearly needing to, because of a reluctance to "speak ill of the dead." I can understand not wanting to inflict emotional harm on the family of a recently deceased person, but for fuck's sake, people, "don't speak ill of the dead" is not a categorical imperative. If someone is dead and there is something you need to get off your chest about them, do it. Exercise reasonable discretion about other people's internal states (like, don't go bitching to a dead person's mother about what a terrible neighbour they were a week after the funeral), but fucking talk to somebody. The dead person is dead and cannot be hurt any further. Be kind to the living, but don't forget that you're one of them too.

1If you are sensing some bitterness here, your senses are calibrated correctly.

[meta] Leitmotif

It occurred to me that I should say a little about the common thread that underpins pretty much everything I've written here since lifting radio silence, and will continue to do so for some time.

So I alluded to the whole aspie thing1 a few posts back. The interesting thing there is, when I first decided to look into the various non-normative functions of my brain, just before I started grad school, I was actually diagnosed with a non-verbal learning disorder. I fit all but one of those symptoms extraordinarily well -- and indeed, everyone asked, "but how come you're so good at math?" Still, "close" counts in horseshoes, hand grenades, and psychology; at the time I had recently graduated with a BA in English (minor: African Studies2) after washing out hard in the hard-science/math weed-out courses, was working as a tech writer, and was preparing to start an MA in linguistics. It seemed reasonable to conclude that trig, while enough to land me a part-time job tutoring high schoolers for the math SAT, was about the limit of my native intelligence as far as math went.

A year later, Teodor Rus poached me into a PhD (that I never finished) in computer science, and, well, we all know where that ended up.

Apparently I am actually kind of good at math as long as I can treat it as a language.

(Granted, I have been doing most of my mathematical explorations on the discrete side of the house, but there are fields that bridge the discrete and continuous realms -- michiexile's specialty, algebraic topology, being one of them. And since michiexile and I are pretty good at finding ways to convey ideas to each other up to and including inventing them, there has been something of an osmosis effect, though really I need to just buckle down and get a solid grounding in group theory and then go devour algebraic topology and see whether that goes any more smoothly than, I dunno, going through calculus again on Coursera and then maybe diffeq or something. But I digress.)

I didn't mention this in my post about Len, but Twitter was a lifeline for my sanity in the weeks after he died. There was so much I needed to express, but the last fucking thing I wanted to do was have to talk to somebody in person and have to deal with whatever their reactions were. I tried to write, but I couldn't string ideas together for more than a few sentences. The written language was there -- for the thirty seconds at a time I could focus on anything. And, well, bramcohen had tweeted the news shortly after I called him3 anyway; the shoe fit well enough, so I wore it. Watching your brain put itself back together after severe emotional trauma can, as it turns out, be a fucking fascinating process -- and I was already primed, with help from a kickass therapist back during grad school who basically gave himself the equivalent of an associate's degree in computer science in order to help me come up with a set of coping mechanisms built out of CS metaphors that have significantly reduced the severity of my social anxiety, to treat my internal state as an algorithm with a panoply of inputs and outputs.

It is not too far a leap from that to "what else can language, both formal language theory and the physical science behind how organisms communicate, be a useful framing device for?" I guess when all you have is a hammer, everything really does look like a nail. This has nothing to do with why that library has that name, but it is an unintentionally hilarious coincidence nonetheless.

Anyway, there's that. Hopefully it provides some context.

1SID is pretty common among the autistic.
2It started with the Physical Anthropology 101 class I took for a social science gen ed, wended its way through primatology, human evolution, and archaeology, and rapidly turned into "Holy shit Africa is way more complicated and interesting than World History in high school ever let on."
3He was in fact the first person I called.

Lately I have been much enjoying the blog Slate Star Codex, which treads in sparkling prose much of the same rationality, ethics, cognitive science, &c ground that Less Wrong has gotten bad about stomping into dittoheady mud lately. By which I mean it's actually good and stuff. One recent post sparked off some recollections from, of all things, phonology.

"But, Meredith," I hear you say, "what could the study of how sounds are composed into syllables in different languages have to do with whether people are inherently pretty decent or inherently pretty awful and just want to be seen as nice?" Well, I last cracked a phonology text lo these ten and a half years ago -- you will find posts about it on this very blog if you look back that far -- so I may be off about some of the details, and the field has doubtless moved on despite my inattention. (I welcome correction from practicing linguists [q_pheevr? kirinqueen?], more attentive students, &c.) But here goes.

One of the common underpinnings of the various phonological theories that I studied in undergrad and grad school1 is the notion that every syllable, word, &c that is spoken has an underlying representation2 -- i.e., a mental representation of a sequence of sounds to be produced, some abstractable piece of input for one of the state machines on the composed chain that leads from brain to vocal tract. The output of this state machine is (presumably) the sequence(s) of nerve impulses that make your vocal tract do the necessary to make the sounds you wanted to say -- but the sounds articulated (the surface representation) will vary predictably from the underlying representation. The job of a phonologist is to characterise languages in terms of these transformations, ideally in the most compact (or, as both linguists and computer scientists prefer to say, elegant) way possible.

Here's a concrete (and classic) example: English pluralization. The regular plural affix in English is -s, and in cases such as cat → cat-s or top → top-s, indeed the phoneme produced in the surface representation is /s/. But what about dog → dog-s, pronounced /dɔgz/? Or toy → toy-s, pronounced /tɔɪz/? You get the picture. So this sort of thing got formalised for Sanskrit in the 4th century BC, but the West only got round to working it out starting in the mid-20th century, after lots and lots of descriptive work from people like the Grimm brothers (yes, really). The theoretical frameworks of the '60s and '70s (of which the several I learned about, and have mostly forgotten, grew out of the work of Noam Chomsky and Morris Halle) were all fairly rule-oriented in the way that writing software is rule-oriented, and they all aimed to give linguists the ability to write a complete description of the rules necessary to produce all underlying → surface transformations for whatever language they happened to be studying.
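Since these frameworks are rule-oriented in the way software is rule-oriented, the plural rule above translates directly into code. Here's a toy sketch -- the phoneme classes are simplified and illustrative, not a real phonological inventory:

```python
# A toy version of the English plural rule: the surface form of the
# plural affix -s depends on the final sound of the stem.

SIBILANTS = set("s z ʃ ʒ".split())   # trigger the /ɪz/ allomorph
VOICELESS = set("p t k f θ".split())  # trigger /s/

def plural_affix(final_phoneme: str) -> str:
    """Map a stem's final phoneme to the surface form of the plural affix."""
    if final_phoneme in SIBILANTS:
        return "ɪz"   # bus → /bʌsɪz/
    if final_phoneme in VOICELESS:
        return "s"    # cat → /kæts/
    return "z"        # voiced: dog → /dɔgz/, toy → /tɔɪz/

print(plural_affix("t"))   # s
print(plural_affix("g"))   # z
print(plural_affix("s"))   # ɪz
```

The point is the shape of the thing: a rule that inspects its context and rewrites accordingly, just like the underlying → surface transformations described above.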

By now you may very well be saying, "Where do these underlying representations come from, anyway?" I know I am; it's kind of amazing how much clarity one loses about a field of study when one hasn't touched it in a decade. That said, the Chomskyan family of theories has often been criticised for coming up with "just-so stories" about what goes on between brain and vocal tract (the work of Steven Pinker notwithstanding; let's just say there is a lot of ground still to cover), so it's a good thing we're segueing to optimality theory now3. Optimality theory, which came on the scene in 1991, still relies on this notion of underlying representation, but it posits that instead of a however-intricate-it-needs-to-be spiderweb of rules to describe every little edge case of how pronunciation rules interact together for a given language to map a single input to a single output, there's a ranked set of constraints which, applied against the set of all possible candidate surface representations available at the time (which could, in principle, be any old bullshit your brain decides to come up with -- we are talking about a massively parallel computer here), selects a "least-bad" candidate which is then vocalized. The set is the same for all languages, but the ranking differs from language to language. So now language acquisition (you learn the constraint ranking for the language you're learning) and linguistic typology (linear edit distance between constraint rankings), oh and also phonology, fall out of one theory, albeit one that still needs some empirical validation.
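The evaluation procedure optimality theory describes -- score every candidate against the same ranked constraints, keep the least-bad one -- is easy to sketch. The constraints and forms below are invented for illustration (a cartoon of the standard NoCoda/Max/Dep examples), not a description of any real language:

```python
# A minimal sketch of optimality-theoretic evaluation: all languages
# share the constraint set; only the ranking differs, and the winner is
# the candidate whose violation profile is lexicographically least bad.

VOWELS = set("aeiou")

def segs(form):
    return form.replace(".", "")          # strip syllable boundaries

def no_coda(inp, cand):                   # penalize closed syllables
    return sum(1 for syl in cand.split(".") if syl[-1] not in VOWELS)

def max_io(inp, cand):                    # penalize deleted segments
    return max(0, len(segs(inp)) - len(segs(cand)))

def dep_io(inp, cand):                    # penalize inserted segments
    return max(0, len(segs(cand)) - len(segs(inp)))

def evaluate(inp, candidates, ranking):
    """Pick the candidate with the smallest violation profile,
    comparing constraint by constraint in ranking order."""
    return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

underlying = "tak"
candidates = ["tak", "ta", "ta.ki"]       # faithful, deletion, epenthesis

# Same constraints, different rankings, different "languages":
print(evaluate(underlying, candidates, [no_coda, dep_io, max_io]))  # ta
print(evaluate(underlying, candidates, [max_io, dep_io, no_coda]))  # tak
```

Note that the candidate set really can be any old bullshit; the ranked constraints do all the work of selecting, which is the whole trick.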

So now let's talk about ethics.

Part of doing computer security is being able to think like the bad guy. When operating as a defender4, it is useful to be able to think like an adversary -- to conceive of attacks you would never yourself perform -- while coming up with your defense strategies. Put another way, out of all the possible constraints on things it is possible to do with a Turing machine, developers tend to have one typology of rankings ("who would ever ask our database for anything other than what our application asks for?") and attackers a very different one. But a defender who can't adopt the attacker mindset for the purposes of risk assessment cannot be an effective defender, even if the options considered during that assessment are ones the defender would never pursue independently. Furthermore, if the defender's model of the attacker mindset (in the analogy we're constructing here, the "attacker constraint ranking" that the defender temporarily swaps in for their own) doesn't comport with what attackers actually do, the defender won't be very effective either. So not only do you have to be able to think like a bad guy, you have to be able to do it well. A lot of people have cognitive dissonance over this (e.g., as Rob Graham points out, a particular prosecutor, judge and jury in New Jersey). I don't, but then, that's why I'm a computer security researcher.
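That database parenthetical is the canonical example of the two rankings colliding: SQL injection. A hypothetical sketch (table and values invented for illustration) of the developer's ranking versus the defender's:

```python
# The developer's constraint ranking assumes "name" is always a name;
# the attacker's ranking puts "what can I make this query do?" first.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2'), ('bob', 'swordfish')")

def lookup_naive(name):
    # "Who would ever ask our database for anything else?"
    return db.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Defender mindset: treat the input as attacker-controlled.
    return db.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

evil = "nobody' OR '1'='1"
print(lookup_naive(evil))   # every secret in the table
print(lookup_safe(evil))    # []
```

The fix is boring and mechanical (parameterized queries), but you only reach for it once you've briefly run the attacker's ranking instead of your own.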

But it goes beyond that. As you might expect of someone who thrashed Rob Graham so hard in a Cards Against Humanity game that he wrote a blog post around it, I possess a deep capacity for being a terrible person and I am totally okay with this. There have been times in the past when I have done terrible things that have hurt people. Hell, there are times today when I do terrible things that hurt people, like buy goods made in shitty working conditions, although really I have been doing my best to minimize the amount of direct personal harm I inflict on people. What I've noticed, through introspection and discussion and so on, is that by and large, the harm I bring about is through ignorance or inattention rather than intent; having realised this, my gut response has been to try to be more mindful and less of a fuckup, thereby decreasing the extent to which my fuckups inconvenience anyone else. So one could say, as one example, that I raised the rank of the constraint "be conscientious with other people's things", and that while my brain might produce the idea "juggle your housemate's coffee cups," it would fall hors de combat early on in processing due to its violation of this highly ranked constraint. However, nothing prevents me from altering my constraint ordering in a different situation where it's appropriate to do so -- like changing politeness registers for whatever culture I'm in, or taking all the filters off and optimizing for balls-out hilarious evil in a Cards Against Humanity game. When it is contextually appropriate and safe to be horrible, I can be a son of a bitch with the best of them, which is fun because being good at things is usually fun. (And by "safe" I mean "nobody gets hurt", which is usually the case in a Cards Against Humanity game apart from your illusions about how your friends spend their spare time being shattered.)

Really I guess this isn't so different from Kant's notion of the categorical imperative, but with lots of them and a ranked-choice ordering. And I also see, off in the distance, something that might be a parallel with Jonathan Haidt's moral foundations research, or which might just be an oncoming train. But it could be interesting to, say, design scenarios that require people to make moral-dilemma decisions quickly and look at the correlations between their choices and their scores along those axes.

Anyway, I'm not sure this gets us fundamentally any closer to answering whether humans are inherently good-seeking or good-appearance-seeking, because obviously there's no objective way to evaluate what constraint ranking a person is using, or whether a person is telling you the truth about their self-reporting of the constraint ranking they're using, or even whether they're right about their self-reporting. But it has been of practical use to me, in the sense that I don't feel any particular cognitive dissonance (e.g., revulsion) when my brain suggests particularly horrific or vile responses to stimuli; when I have the time to think about it, at least, if these things register at all they register as "considered and rejected," as a neat little monadic package. I suspect that it's also an instance of the "I made it but that doesn't mean it's part of me" distinction that I have also found of considerable utility in the last year and change, but that is another topic for another time.

1The University of Houston and the University of Iowa were both Chomskyan programs when I went through them; I learned a little about head-driven phrase structure grammar but that was about it as far as exposure to other theoretical frameworks. I know all the cool kids do statistical everything these days; I work for the world leader in the field, turning research code into production code, so I don't actually get all that much theory these days, and also I work in natural language understanding rather than speech recognition anyway. But these are details.

2I always got the feeling the whole underlying-representation thing had to do with historical similarities, especially since when you study a whole bunch of languages all in one family (which I got to do, for a lot of different families), it quickly becomes clear that a lot of the phonological parallels in languages like, say, Dutch and English are predictable because it's the same word, just said differently. But I don't remember any of my profs or any of the books or papers I read explicitly coming out and saying that. Maybe it's obvious? I don't know. It seems kind of simplistic now that I lay it out like that. My memory is kind of shit sometimes.

3Is that your lampshade?

4My research actually operates a level up from this, focusing on hardening software in rigorous ways, because I don't like having to do the same thing over and over again.
