Blog

How to Interview Your Prospective Manager

I’m in the process of negotiating offers for my next role. One of the things I’ve learned the hard way is how important good management is—especially for me, since I’m kind of a hard case, but in general. It’s said that people leave managers, not companies, and that’s certainly been true in my experience. It turned out that I got very lucky in my early jobs, and until recently my first managers were my high-water mark.

Unfortunately, the traditional job interview doesn’t set aside much time to learn about the person who would be managing you. (Sometimes you don’t even meet them.) While you as the candidate are always implicitly interviewing your interviewers, it’s nice to have time explicitly set aside for it.

Mudge had not yet signed on as the new head of security when I got the offer from Stripe, but the recruiting team had told me he was considering it, and I knew I didn’t want to sign on to a new team without talking with the person I’d be reporting to.

I knew Mudge only by reputation, and vaguely at that, and I didn’t want to join a team only to have a new manager come in, clean house, and install all their own people. I delayed accepting until Mudge was ready to talk, and then we had a long phone conversation where I effectively interviewed him as my new manager. (He was great, it turned out. 🙂)

Going through the process again now, I’ve come back to these questions, and I’m going through the same process with my new potential managers.  It’s proving extremely fruitful.

Here’s what I’m asking:

  • What is your vision for the organization?
  • Where do you see the organization fitting in the overall picture at the company?
  • Where do you want the organization to grow?
  • What’s your plan for scaling the organization?
  • What do you like in a manager?
  • What do you dislike in a manager?
  • How do you view your relationship with the people who work for you?
  • What is your philosophy of management?
  • What makes you excited to come to work every day?
  • Can you tell me about a specific time that you were wrong, and how you handled it?
  • You have two employees who don’t get along. What’s your approach?
  • Have you handled harassment complaints before (sexual or otherwise)? What happened?
  • You have an employee who’s struggling. How do you handle that?
  • What do career paths forward look like for this position?
  • How much support is there for presenting at conferences and other professional development?
  • What are your preferences around hours/work from home?
  • How much contact do you need from the folks who work for you?
  • What problems do you see facing the company over the next three years?
  • What problems do you see facing the industry over the next three years?

Interviewing your prospective manager is absolutely something you can and should do, and these are questions I’ve found useful.

Is there something I’ve missed that you like to ask about?  Leave a comment!

Why Is It So Hard To Build Safe Software?

Asking aircraft designers about airplane safety: Hairbun: Nothing is ever foolproof, but modern airliners are incredibly resilient. Flying is the safest way to travel. Asking building engineers about elevator safety: Cueball: Elevators are protected by multiple tried-and-tested failsafe mechanisms. They're nearly incapable of falling. Asking software engineers about computerized voting: Megan: That's terrifying. Ponytail: Wait, really? Megan: Don't trust voting software and don't listen to anyone who tells you it's safe. Ponytail: Why? Megan: I don't quite know how to put this, but our entire field is bad at what we do, and if you rely on us, everyone will die. Ponytail: They say they've fixed it with something called "blockchain." Megan: AAAAA!!! Cueball: Whatever they sold you, don't touch it. Megan: Bury it in the desert. Cueball: Wear gloves.
XKCD #2030: “Voting Software”; used under the terms of its Creative Commons Attribution-NonCommercial 2.5 License.

Or, “Robert Graham is dead wrong”.

This XKCD comic on voting software security has been going around my computer security Twitter feed today, and a lot of folks have Takes on it.

It gets at something fundamental. What is it that makes software safety so hard?

A couple years ago, at the March 2016 STAMP Workshop in Cambridge, Massachusetts I gave a talk titled “Safety Thinking in Cloud Software: Challenges and Opportunities” where I tried to answer that. (As always, I talk about work here but don’t speak on behalf of any former employer.) What follows is based on my notes for that talk.

I would say that responses to the comic have fallen into two big groups:

  1. Software safety is really hard because we have adversaries.
  2. The comic is needlessly nihilistic about software safety.

Robert Graham‘s post “That XKCD on voting machine software is wrong” is the glass-case example of the first argument, that software safety is uniquely hard because we have adversaries.

This line of argument is fundamentally wrong, and betrays an ignorance of systems safety in general and its practice in aviation in particular.

First off, it’s just fundamentally incorrect to say that in software we have adversaries whereas in aviation we don’t. Remember, the statement the comic puts in the mouth of an aircraft designer isn’t a qualified statement—even in the presence of adversaries (9/11, MH17, even the infamous so-called “shoe bomber”) flying is still the safest way to travel.

Systems safety formally defines what we casually call an ‘accident’ as any unacceptable loss. It doesn’t distinguish between adversarially-induced losses and non-adversarially-induced losses.

Considered from a systems safety perspective, the aviation system includes organizations like the TSA, the air marshal program, and air traffic control. It includes cockpit door locks, the fence around the airport, even the folks who go out to scare the geese away from the runways, all of which have important anti-adversarial functions.  (Man, geese, now there’s an advanced persistent threat.)

So Rob’s argument is facially, factually wrong. Now, why is it so hard to build safe software systems?

There are five big factors which make it harder to keep modern software systems safe than to keep best-in-class physical systems like airliners and elevators safe. Namely:

  1. Software is leaner.
  2. Software moves faster.
  3. Software is more complex.
  4. The geography & physics of networks are different.
  5. Consequences for adversaries are lower.

Let’s address each of these in turn.

Software is Leaner

At Akamai, we had a team of 4 people who reviewed about 50 incidents a year for a company of 6,000 people—part-time, around other responsibilities related to managing the incident process.

This isn’t unusual—at Stripe we had I think 2 people part-time reviewing incidents and managing the incident process for a company of around 1,000 people.

By contrast, I had the pleasure of sitting down with a member of the Dutch Safety Board at one of the STAMP workshops, a year or two before I spoke. He told me that they work in teams of 5 or 6, for a total of 30 people, and investigate between 5 and 10 accidents a year.

In software we make do with many fewer people on the problem—and partly, to be sure, this is an area which software companies could resource more heavily, but there’s a bedrock expectation that we can make do with fewer people.

Software Moves Faster

At Akamai, the Infosec design safety review team of four to eight people reviewed about 50 new systems a year. We were generally given two business days to read about 60 pages of design documentation and provide feedback, which a security representative would take to a broader architectural review board session. And Akamai was notably hard-nosed about safety compared to some of our more agile competitors.

By contrast, the aircraft which became the Boeing 787 Dreamliner was announced in 2003, based on technology which had been in development since the late 1990s. The first production aircraft was delivered eight years later, in 2011. And the planes have an expected operational lifetime of something like 40 years. In software, if code I write is still in production six months from now, I’ll consider it to have real longevity.

Software is More Complex

The core understanding of systems safety is that most accidents, especially the really bad ones, happen not because of component failures (a popped tire or a snapped timing belt) but because of interaction failures—two systems, operating according to spec, interacting in ways the designers didn’t foresee. And software systems provide exponentially more opportunities for interaction failures than physical systems do.

There are lots of ways to measure complexity, but the proper way in a systems safety context is to count the number of feedback loops, and software systems have a truly enormous number. Any state in a program, including its stack, provides the opportunity for a feedback loop. Any connection over a network is inherently a feedback loop. And any modern web server can support thousands of them a second.
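To make the idea concrete, here’s a minimal, hypothetical sketch of an interaction failure becoming a feedback loop. Neither component corresponds to any real system—the names and numbers are illustrative. Each part behaves exactly to its own spec (the server slows gracefully under load; the client retries once on timeout), yet together they lock the system into overload:

```python
# Hypothetical components: each correct in isolation, but their
# interaction forms an unplanned, self-sustaining feedback loop.

def server_latency(load: int) -> float:
    """Spec: respond to every request; latency grows with load."""
    return 0.1 * load  # seconds

def client_attempts(latency: float, timeout: float = 1.0) -> int:
    """Spec: retry once if a request times out."""
    return 2 if latency > timeout else 1

clients = 20
load = clients  # one request per client to start
for _ in range(3):
    latency = server_latency(load)
    load = sum(client_attempts(latency) for _ in range(clients))

# Retries double the load, which keeps latency over the timeout,
# which keeps triggering retries: the loop sustains itself.
print(load)  # 40
```

Note that neither component has a bug. The hazard lives entirely in the loop between them, which is why counting feedback loops, rather than counting components, is the right complexity measure here.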

The Geography & Physics of Networks are Different

The geography and physics of networks are very weird compared to the geography and physics of the physical world.  Time and distance limit the interactions which can occur between planes. Three-dimensional space limits the interactions which can occur in an elevator shaft.

On a network, on the other hand, things which are separated by miles or continents can be more or less adjacent. In fact, it’s very hard to make things not be effectively adjacent on the Internet. We go to a lot of effort to erect barriers.

It’s so easy to connect everything to everything else in software that often we do so completely by accident, and it’s frankly a wonder that things short out as rarely as they do.

Consequences for Adversaries are Lower

In order to successfully hijack a plane, an attacker needs to run a very real risk of death or being arrested. In a suicide attack like the 9/11 hijackings, the attacker dying is even part of the plan!  This weeds out all but the most committed, ideological adversaries.  Even an attack like the MH17 shoot-down, where the adversaries weren’t themselves directly risking death, could have resulted in sanctions against their entire country.

By contrast—because of the weird geography of networks—there are very few cyberattacks where the attacker is directly risking death. While countries have been sanctioned over cyberattacks, that’s a relatively new phenomenon, and it’s harder work for law enforcement to track down the perpetrators in the first place, since cyberattacks are so much more likely to cross jurisdictional boundaries.

 

What, then, are we to make of all this? Are the nihilists right? Is software doomed to always be unsafe?

I don’t think so, and of course through my work I hope to make software safer.

Software Needs a Safety Culture

The Wright brothers were more or less two guys in their garage, who flew the first airplane at Kitty Hawk in 1903. Less than 25 years later, the Air Commerce Act assigned the Commerce Department responsibility for investigating accidents, in 1926, at which point air mail was still a pretty neat idea. The first web browser was released in 1991, and over the last 27 years we’ve built something extraordinarily more complicated on the Internet, with far greater access to and effect on people’s daily lives, without the same kind of investment in safety and accident investigation.

Even within large software companies, we’re only beginning to recognize that safety is a discipline and that we need to invest in it, and we’re struggling to identify and pull in knowledge and expertise from older fields like aviation to help us ensure it.

I believe that we can build safer software systems, even in the presence of asymmetric adversaries, in lean, fast-moving organizations, with massive complexity and the weird geography of networks. We have a lot of work ahead of us, but the same principles apply in software as anywhere else, and we can take a lot of inspiration from how fields like aviation have learned to keep us safe.

 

(P.S. Do I think that the comic is right about the current state of voting software? Absolutely.)

Why I Won’t Work For Facebook

I just sent an unintentionally blistering response to a Facebook recruiter. Having invested the time in writing it, I remembered that I have a very disused blog, and perhaps people reading here would find it useful, either as fodder for your own such messages, or as a snapshot of my concerns regarding Facebook and fascism in America in 2018. If either of these apply to you, enjoy.

Continue reading “Why I Won’t Work For Facebook”

A Proposal For Some Fucking Software Liability

…It’s not a Modest Proposal, because that was originally meant in satire, however it’s been corrupted in these latter and debased days, and I’m quite serious here.

(I am less of an expert on this than other things I blog about, although more knowledgeable than some; no warranty expressed or implied, &c.)

Today, basically all software comes with a blanket waiver of liability. The owners and coders of it do not express or imply any warranty, etc, blah blah blah, if it kills you, you pay your own funeral bills, and also you’re dead. And this leads us to situations where we have insulin pumps running consumer-grade software which hit its end of life four years ago at the ripe old age (for a piece of consumer-grade software) of fifteen.

The NSA hoarding vulnerabilities angle on this is a red herring, and I wish people would drop it. As nice as it would be for the US government to invest more than they do today in defense of software, there’s always going to be an interest in offense against software, and if it’s not the NSA’s vulnerability stockpile getting breached, it’s the Bad Guys’®, however we define them today. Even with some kind of MLAT for software vulnerabilities, the Bad Guys® do not sign or abide by those treaties, and unlike building nuclear weapons, exploiting software at this scale is still the province of bored and clever CS undergrads. We must proceed from the assumption that big tranches of vulnerabilities in our software and exploits for those vulnerabilities exist and might get exposed all at once.

There’s been much speculation—basically all of it (including this) ill-informed about the legal frameworks already in place, both for software and for more established engineering products—about what effect some nebulously expressed change in liability law or case law around software would have on the industry and practice of software production.

And my modest contribution is this:

All software does not need a fucking warranty. It’s fine that your shitty Javascript framework is shitty, and you shouldn’t be rung up on charges of criminal negligence if a shitty and obvious bug in your shitty Javascript framework kills somebody because your Javascript framework got used in a medical radiation device.

The people who should be rung up on charges of criminal negligence are the people who decided to integrate your shitty Javascript framework into their shitty medical radiation device. Consumer software is different than safety-critical software, and everything about using one for the other is wrong.

There are many different lines within the software ecosystem you can draw, and probably we will need to draw all of them, but safety-critical versus consumer (and then industrial control, and god knows what else) are some important fucking distinctions.

If requiring this kind of liability of the people who make medical devices causes them to prefer to use upstream Javascript framework providers who are also willing to take on this kind of liability, then, well, bully for everybody.

The other obvious players in this are the insurance industry, who have so far entirely punted on insuring software against this liability, probably because there’s no money in it, probably because nobody is going to get sued, probably because there are no laws requiring that somebody who integrates a shitty Javascript framework into a medical radiation device and kills half a dozen people do some jail time, yet, which is a real fucking shame, because purely from a Hammurabian moral perspective they probably should have hot sand driven under their fingernails.

I don’t know what about the economics of medical devices today causes them to be such a shitshow that this liability regime isn’t in place already, although I assume it’s much like the shitshow of other electronic devices (eg. Android phones), where it’s a commodity market without a way of valuing security, and integrators cobble together whatever shit they can to check the feature boxes the marketing and sales departments want and keep their customers buying new shit fast enough to keep the company from going bankrupt, but not fast enough to give them margins such that they can afford to build not-shitty medical devices, because it’s apparently unreasonable to expect that these companies and the people working for them should value not killing other people who have no choice but to submit themselves to the tender ministrations of the healthcare industrial complex.

Possibly a liability system for safety-critical devices would cause them to rethink their shitty life choices, and, more importantly, realign their market so that they could act on what the goodness in me compels me to assume is the non-shittiness within their hearts.

This anyway is my best explanation for the health insurance industry of today, who have for most of my life been rapacious bastards who will put you on the streets for pre-existing conditions including depression, which is only the natural state of all beings confronted with the enormity of the problem of evil in the world, and who are now championing not going back to the bad old days, because there is a bit of humanity left in their Grinch hearts after all. (And also regulation like this is actually better for business, but shh, don’t tell the capitalists that, it confuses and frightens them.)

And obviously we need legal frameworks such that medical devices can get certified on one version of your shitty but liability-insured Javascript framework and reasonably accept and deploy security patches to same and remain (slightly-less-)shitty and also liability insured without a godawful and too-expensive recertification process, which is apparently part of the problem here as of today, although obviously any such certification system might also quite reasonably be concerned that the security patches not introduce yet other bugs, and balancing that will be an interesting trick.

Reliable sources (the guy who runs the certification company) inform me that we can do this for airplane avionics software, and it’s only (what I presume is) the lack of a (regulated-to-be-)level playing field in the medical device industry which makes this hard today, so it seems plausible that some medical, legal, and technical folks inspired by aviation and other safety-critical industries could sit down and create some proposed legislation which Congress could adopt with minimal editorial oversight which would result in a better medical device industry, fewer hospitals crippled by ransomware attacks, lower insurance premiums, and fewer fucking dead people.

It’s not like people aren’t working on this: (link to I Am The Cavalry) (link to Cyber-ITL) (link to Engineering a Safer World).  Somehow this work hasn’t made the requisite impact yet, and maybe WannaCry will open people up to it, and maybe it won’t, but a mob of people with torches and pitchforks at their legislators’ offices asking “what are you doing about medical device cybersecurity” won’t hurt.

Because any sober and fundamentally good-hearted person can see that it’s past fucking time we fixed this.

Just How Many People Live in Boston’s Millennium Tower, Anyway?

Inspired-slash-frustrated by a conversation on Twitter a few weeks ago, which I am too tired now to find, I went and dug into just how many people actually live, more-or-less full-time, in Boston’s Millennium Tower.  I was reminded of this by a conversation today, and, since the question seems to keep coming up, I might as well post my back-of-the-envelope analysis here.

A very light sketch of the background: Millennium Tower is a 60-story, 442-unit super-swank luxury condo development in the heart of downtown Boston; the tallest to date in the city, and by far the most expensive. This makes it a very visible target for the ire of folks who are rightly concerned about rising housing prices in Boston, with housing prices up 50% over just the last five years and rents quickly following.

It’s been widely reported that many of the buyers at Millennium Tower are non-local, even international, and so while there may be 442 units in the building, the question is, are there 442 units’ worth of people actually living there? (Millennium Tower replaced a Filene’s department store, so it’s not like houses or smaller apartments were torn down and people were put out of housing by its construction, but you can rightly ask the same question of developments where they were.)

 

First an existence proof: you can in fact find properties in Millennium Tower available for rent. I found these by searching Boston Craigslist for “Millennium Tower”: you’ll pay somewhere between $2500/mo (two units available at that price) and $4600/mo for an 800 sq. ft., 1-bedroom apartment. That $2500/mo figure is comparable to what I’ve heard for 1-bedrooms in Harvard and Porter Squares, which surprised me. Considering where and what Millennium Tower gets you, from a tenant perspective that $2500 is actually quite a steal, assuming it’s legit, which I have no reason to doubt. So that’s not just an existence proof but a really positive one. Rents are priced to move.

Now the question: just how many units in Millennium Tower are either owned or rented by people who live there full time? That’s a hard question to answer, so I’m going to answer the related and somewhat easier question of how many units are either owned by people as their principal residence, or rented (and I’m going to assume the rental tenants live there full time). That’s probably not entirely true, but I think it’ll let us start to put some bounds on our uncertainty.

To do that, I turn to the Bates Real Estate report on Millennium Tower from last month, which I encourage you to check out if you’re interested, as it’s the best source of this data I’ve found. (It’s free, you just need to give an email address.)

Bates looks at the first 425 sales (of the 442 total). Of those 425 sales, 20% (85) were declared a “homestead” by their buyers, which means a number of things under Massachusetts law, but relevantly for our purposes means they are claiming it as their principal residence. So that gives us a floor on the number of units where people live full-time. This is low relative to other comparable properties in Boston (the report cites 79% of the units at The Seville declared a homestead).

Now for rentals: as of this report, “more than 100” residences had found tenants. (I count 106 lines, but the report doesn’t make it clear that they correspond 1-to-1 with rentals, so I’ll take the more conservative number for now.) That’s another 23% of the units which have people likely living in them full-time.

So together this gives us a rough guess that, as of March, about 43% of the units had people living in them full-time—much less than other comparable, but more-established, properties in Boston.
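For clarity, the back-of-envelope math above can be written out explicitly. The figures come from the Bates report as cited, and the denominators deliberately mirror the rough mixing in the prose (homesteads against the 425 examined sales, rentals against all 442 units):

```python
# Figures from the Bates Real Estate report, as discussed above.
total_units = 442
sales_examined = 425   # first 425 of 442 sales
homesteads = 85        # declared principal residence: 85/425, about 20%
rentals = 100          # "more than 100" tenanted units, counted conservatively

homestead_share = homesteads / sales_examined     # ≈ 0.20
rental_share = rentals / total_units              # ≈ 0.23
full_time_units = homesteads + rentals            # 185
full_time_share = homestead_share + rental_share  # ≈ 0.43

print(full_time_units, round(full_time_share, 2))  # 185 0.43
```

Mixing denominators like this is fine for a back-of-envelope bound, but it does mean the 43% figure should be read as roughly a fifth plus roughly a quarter, not as a precise occupancy rate.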

The report makes a big deal of how fast the units sold (90% from October 2014 to September 2015, 11 months) relative to other comparable properties, so while it does seem that there’s a disproportionately high number of non-resident owners, some of what may be going on here is that it takes time to find tenants, and perhaps also the Boston market for luxury housing isn’t what the buyers may have hoped.

Those two units listed for $2500/mo are asking below the lowest rent the report found circa March ($3500), so at least some buyers are in fact responding to market pressures and lowering rents to try to find tenants. Mr. Bates, if you read this, I’d love to read an update on Millennium Tower in a year or two’s time. I expect based on this trend line that more of Millennium Tower will have found residents.

There’s a perception among some activists who I respect that luxury developments are mostly unoccupied, and the owners are investors, likely foreign, buying condos like stocks and letting them sit empty, and so cities allowing developers to build luxury housing just makes housing prices go up, and it effectively takes housing away from the people who would otherwise have lived there if the developers had built a more reasonably-priced building.

There’s a question which I don’t have the information to answer on how developers decide how many units to build (would they have built 442 units if they had positioned the building at market rate?). There’s also an argument I don’t have time to pursue about whether it’s better for investors to spend $3 million on a condo in downtown Boston or $1 million each on three houses in Jamaica Plain or Somerville, but I’ll leave those for another time.

This certainly shows that at least this one new development isn’t doing as much to increase the supply of Boston housing as it might appear on paper. It’s not housing a full 442 units’ worth of people, at least not yet.

It is, however, housing at least 185 units’ worth of people, which is nothing to dismiss either.

Featured image: Millennium Tower – Nov 2015. By Jp16103 – Own work, CC BY-SA 3.0, Link.