
Common Sense on Apple’s Recycled Hardware Reuse Policy

I hit some nerves on Twitter when commenting on this Motherboard article on Apple’s recycling policy by Jason Koebler a couple weeks ago, so I want to talk about my issues with the piece in a little less constrained format than Twitter provides.

The usual disclaimer: as always, although I talk about my experience at Akamai here, I don’t work for them any more, and I speak only for myself.

First I want to summarize that article, because I read it three times and still came away with the wrong impression, until I went away, returned the next day, and discovered what my mistake had been. (Judging by some of the commentary on Twitter, I wasn’t alone either. I personally find the article at best careless in how it repeatedly enables that misunderstanding, but the point of this post is not to argue its merits as a written work.)

The article talks about two separate but related programs Apple runs, and doesn’t do a good job of distinguishing them. First, Apple buys back Apple-branded hardware from its customers for refurbishment and resale, and it accepts both Apple-branded and other manufacturers’ hardware through its stores and its mail-in program for recycling.

Second, on top of its Apple-branded recycling programs, many states have laws requiring electronics manufacturers like Apple to accept e-waste for recycling in proportion to their electronics sales in the state. Apple, being a major electronics retailer in the US, accepts a large tonnage of electronics for recycling this way. These collection efforts are not Apple-branded: when your school or office does an e-waste collection drive, the hardware may be sent through this program to e-waste management companies under contract to Apple for recycling.

It also appears from the article that Apple uses the same recycling companies for at least some of its Apple-branded waste stream and the state-mandated waste stream. (My guess is that there are relatively few major recycling companies capable of handling Apple’s volume, but I don’t really know.) Once the hardware arrives at the recycler, it sounds like it doesn’t matter where it came from: it’s handled under the same Apple contract.

The article is very concerned about the stipulations of that Apple contract. Here’s Mr. Koebler quoting a Michigan state report:

“Materials are manually and mechanically disassembled and shredded into commodity-sized fractions of metals, plastics, and glass,” John Yeider, Apple’s recycling program manager, wrote under a heading called “Takeback Program Report” in a 2013 report to Michigan Department of Environmental Quality. “All hard drives are shredded in confetti-sized pieces. The pieces are then sorted into commodities grade materials. After sorting, the materials are sold and used for production stock in new products. No reuse. No parts harvesting. No resale.”

(Emphasis mine.)

Remember, this applies to all the hardware Apple collects, both Apple products and other manufacturers’, under Apple-branded recycling programs and through third-party programs.

This is bad, he explains, because it’s much better for the environment to reuse and repair electronics rather than recycle them.

Kyle Wiens, the CEO of iFixit, notes that recycling “should be a last option” because unrecyclable rare earth metals are completely lost and melted down commodities are less valuable and generally of a lower quality than freshly mined ones. Repair and reuse are much better ways to extend the value of the original mined materials.

Good so far. It quickly becomes obvious that there’s another motive at work, however, and of course that motive is money.

To be clear, Apple’s practices are often against the wishes of the recycling companies themselves, who don’t like to shred products that are still valuable. In a weird twist of fate, I visited ECS Refining before I knew that it did recycling for Apple. While I was there, I watched workers crowbar and crack open recent-model MacBook Pro Retinas—worth hundreds of dollars even when they’re completely broken—to be scrapped into their base materials.

At the time, I asked ECS CEO Jim Taggart how he feels about “must shred” agreements when he sees products that could have data safely deleted before being turned into parts or repaired. He called such deals an “extreme position,” one his company doesn’t like signing but is a core requirement from some manufacturers.

Now none of this is unreasonable—people who care about the environment (and make their living doing it) want to minimize waste, and recycling companies are in a cutthroat, commoditized business and want to maximize their returns. It’s a rare instance where the economic thing is also the environmental thing, even. All of that’s expected, and fine as far as it goes.

The trouble starts, though, when Mr. Koebler omits Apple’s stated rationale for its policy, given by Mr. Yeider in the sentence immediately following the part of that Michigan report he quotes, the part that talks about “No reuse. No parts harvesting. No resale.” However, a partial scan of the report is thoughtfully included below the paragraph in question, and it says this:

This methodology preserves the chain of custody and assures the protection of data contained in the machines.

Now what the heck does that mean?

What it means, broadly, is that Apple sees itself as having a duty toward the data on devices you turn over to them, or recycle through third-party programs that wind up in Apple’s hands. Specifically, they believe they have a duty to keep that data secret. And they’re right, they do.

Certainly for hardware Apple receives as part of their Apple-branded buy-back and recycling programs, they have a duty to maintain a chain of custody from the moment the device leaves the customer’s hands in the store or enters the mail system until it has reached some safe state. In-store (I’ve done this myself with an old phone), the employees will usually walk you through wiping your personal data off the device before you hand it over, but mistakes happen, and there are no guarantees for hardware which has been mailed in. For hardware which I, the consumer, have turned over to Apple for recycling, it’s a serious black eye for them if it turns up on eBay, even if all the data has been wiped, because what if it hadn’t been? The Fappening would pale in comparison.

I’m less clear what chain of custody means in the context of hardware Apple receives through the state-mandated recycling programs, but I presume there is some point at which that hardware enters the possession of Apple-contracted recyclers, and from then on the same argument applies as for hardware obtained through Apple’s branded programs.

Now the article does nod to this periodically but dismisses it as a minor issue, and one the recycling companies are obviously capable of handling.

But in practice, the premature recycling of an iPhone or a MacBook is not ideal. MacBook hard drives can be removed and replaced. And the recyclers Apple uses all advertise industry-standard data destruction tools that can be used to safeguard consumer data without requiring the destruction of all of the rest of the computer or phone’s parts.

There are a few complications with the picture this paints. One is that (to pick an example at random) the newest MacBook’s solid-state drive is in fact soldered to the mainboard, making it much harder to replace. It’s still technically removable, in the sense that any chip on a circuit board is removable, but it requires a lot more work than just pulling a daughter card by hand, as could be done with older MacBooks (including the Air, which surprised me). The onboard solid-state storage in phones has, of course, always been soldered down.

Solid-state drives are also much harder to sanitize than older spinning-platter hard drives. Because of wear leveling and over-provisioning, an SSD’s controller keeps data in flash cells the operating system can’t directly address, so highly sensitive data can remain on the drive after it has been deleted, or even after the drive has been formatted. (Unlike spinning-platter drives, where, despite what the conventional wisdom says, a single-pass format operation is fine.) There are no software tools which can fully sanitize a solid-state drive once it has been used, so anyone considering whether to allow a device to be reused needs to weigh the risk that a sufficiently advanced adversary could recover some of their data.
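
For the curious, here’s a rough illustration of that distinction. The sketch below (Python, with a hypothetical Linux block device path) performs the kind of single-pass overwrite that is generally sufficient for a spinning-platter drive; the comments note why the same pass can’t make the equivalent promise on an SSD.

```python
# Illustrative sketch only: a single-pass zero overwrite of a block device.
# The device path is hypothetical; running this destroys whatever is on the
# disk and requires root. On a spinning-platter drive this overwrites every
# addressable sector, which is generally considered sufficient sanitization.
# On an SSD, wear leveling and over-provisioning mean the controller may keep
# stale copies of data in flash cells this loop never reaches.

import os

DEVICE = "/dev/sdX"          # hypothetical path; point it at the wrong disk and that disk is gone
CHUNK = 4 * 1024 * 1024      # write in 4 MiB chunks

def single_pass_wipe(path: str) -> None:
    fd = os.open(path, os.O_WRONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # a block device reports its size when you seek to the end
        os.lseek(fd, 0, os.SEEK_SET)
        zeros = bytes(CHUNK)
        written = 0
        while written < size:
            written += os.write(fd, zeros[: min(CHUNK, size - written)])
        os.fsync(fd)                          # make sure the writes actually reach the device
    finally:
        os.close(fd)

if __name__ == "__main__":
    single_pass_wipe(DEVICE)
```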

However, the recycling industry has not fully caught up with this. My own encounter with “industry-standard data destruction” for solid-state drives while I was at Akamai did not fill me with any kind of confidence. So little confidence, in fact, that I hired an amazing intern and we successfully prototyped a better method (tl;dr yes they will blend). The NSA’s unclassified data sanitization standard (which our process meets) requires shredding to a 2mm grain size or smaller. I think the “confetti-sized” pieces the Michigan report describes (which I interpret as ~5mm grain) are plausibly sufficient sanitization against non-nation-state adversaries, which, let’s be honest, is who most of us are up against, most of the time.

I could go on a lot longer about SSD destruction (ask my friends! I’m great at parties!), but the long and the short of it is—if I’ve promised someone that I’m going to make the data on their SSD go away, no really, forever, nothing short of physical destruction is going to let me tell them honestly that the job is done and their data is never coming back to haunt them.

Unexamined in all this is the question of who should make the decision whether to allow the device to be reused or require it to be destroyed. The article thinks it should be the recyclers. The status quo, which Apple’s contracts with their recycling vendors enforce, is that the consumer makes that judgment, and I think that that is exactly as it should be. As a consumer, both of Apple products and other electronics, I can choose to sell my device back to Apple or to a third-party service like Gazelle for reuse, if I accept the (very small, but not zero) risk of some of my data being recovered. Or, I can send my device to a recycling program, and, if it goes to a recycler contracted by Apple, I can be assured that my data is really gone. In both cases, I and I alone get to decide how I want my hardware handled.

Given the existence proof provided by services like Gazelle, I don’t see that there’s anything stopping the recycling companies, or refurbishers like the man quoted in the article, from establishing their own brands for electronics reuse, if there’s enough money in it and enough environmental argument for it. They don’t need access to the recycling stream, and Apple is right not to let them have it.

That said, I fully support people choosing to give their hardware up for reuse rather than recycling, as I have myself done in the past. Despite the concerns I mentioned earlier about SSD sanitization, practical issues there are—so far—extremely rare. I think for most of us most of the time, using the device’s operating system function to erase it is sufficient to ensure our data won’t wind up in the wrong hands if the device is subsequently reused. But it currently is, and should continue to be, my choice whether the device gets reused or not.

 

 

Postscript: I want to acknowledge a couple arguments which I think are interesting but which I decided not to pursue in full detail here.

One is the question of whether it’s in the best interests of Apple, Apple’s users, and the hardware ecosystem as a whole for Apple to let its recycling vendors dump product in volume, and old product at that, into the currently quite healthy secondary market for Apple devices (or the markets for other manufacturers’ devices, for that matter). I will note in passing that it is not at all clear to me that it is. I would love to see someone with more of an economics background than I have take this on.

Another related question I’m setting aside is whether it’s really in Apple’s users’ best interest to be encouraged towards old hardware which no longer receives software support. The sketch of that argument is that old Apple hardware which no longer receives security updates is legitimately dangerous both to its users and to the ecosystem at large, and that it is a public-health good for Apple to remove it from circulation.

What Public Health Can Teach Us About How to Give Better Security Advice

A story about the philosophy behind my VPN advice post, and a much deeper elaboration of my followup, for security people, public health people, and interested laypeople. Neither post is required reading for this one.

And a disclaimer, as always: I no longer work for Akamai, and I speak here only in my personal capacity. I don’t represent Akamai here in any way.

 

Most of the time I was at Akamai, I worked for Michael Stone, who has greatly influenced my thinking about security and its place within the broader problem of safety.  (Even the recognition that security exists within a broader context is something I owe to him.)

One of Michael’s great strengths as a problem-solver, which he has worked consciously to develop, is that he has a large collection of what he calls “frames” or “lenses”—ways of thinking about problems, and other model problems to which to draw analogies—drawn both from within technology and outside it.  In collecting these, he recognizes that often the first step to answering a question is to ask (or frame) it in the right way. His frames are “right ways” which have proven useful to others in the past.

Michael has a family background in public health, particularly through his mother’s work, but it was only slowly, over my time working with him, that we came to recognize that those ideas might have value to us.  It was during the Heartbleed crisis, in April 2014, that I remember us first talking seriously about applying frames from public health to computer security issues.  As that crisis wore on, it became clear just how effective they were at helping us understand and manage the complexity of the problems we faced.

Heartbleed affected some companies a great deal, and some companies not very much at all.  For Akamai, which had tens of thousands of vulnerable servers, and thousands of potentially compromised certificates, Heartbleed was a significant crisis.  Separating servers and certificates into vulnerable populations, considering various potential courses of action as interventions, and putting together triage plans were some of the ways that the frames helped us communicate with each other, our management, and our customers.

Five months later, the Shellshock crisis showed that the frame was productive to apply not just inside the company but outside it as well, to an Internet containing populations of servers running vulnerable services. We asked ourselves, what kinds of interventions could Akamai provide to protect our customers while they patched?  Could that have a material effect on the health of the whole Internet?  Would the benefits outweigh the side-effects?  Since then, I increasingly approach all security advice through the frame of public health.

To caveat what follows: I’m not a public health professional, and I have only an interested layperson’s understanding of these concepts. If you are a public health professional, and I have misused terms or misapplied concepts, I would love correction, either privately or in the comments.

The rollback of the Federal broadband privacy rules, and the recommendations of VPN services that followed, which I wrote about last week, make a model opportunity to apply the frame of public health to a security advice question.  We have a population: broadband users in the United States.  We have a threat: the sale of their aggregate browsing information to marketing organizations by their ISPs.  And we have a proposed intervention: using a VPN.

By framing the problem this way, I immediately cut out a great deal of complexity.  No longer do I need to consider all possible users of VPNs, or all possible uses to which they might put them.  Users have different priorities, and different uses dictate different priorities. Sometimes these priorities are even at cross purposes! The number of combinations is large, but I only need to consider the one. As a communicator, defining my audience well helps me communicate better with everyone, not least because those to whom I am speaking can easily recognize that fact, and those to whom I am not speaking can know that they are safe to ignore me.

Then we consider the nature of the particular threat facing our population of broadband users in the United States—the sale of their browsing information to marketing organizations and perhaps other, less savory, actors, by their ISPs.  This is a concerning threat, although it must be weighed against the sale of similar information by ad networks and social media platforms, and will vary from ISP to ISP depending on what each chooses to stipulate in their particular terms of service.

And last, we consider the proposed intervention, a VPN service: its costs, its potential benefits, and its potential side-effects.  VPNs can be free, although, as I’ve mentioned, that is a red flag.  Acceptably good VPNs for our purposes need not be expensive, though; they are often made, after all, out of commodity VPSes and open source software, which can be had cheaply. And as to potential benefits, a properly configured open source VPN software package running on a commodity VPS will protect a vulnerable US residential broadband user from having their browsing data rolled up and sold by their ISP for marketing purposes.

As to potential side effects, though, as I laid out in my previous posts, we know that many VPNs do not provide the protections they claim to provide, and many are actively malicious. In that context, recommending a VPN is like recommending an entirely unlicensed drug in an unregulated market where it is known that many substances sold as that drug are of wildly varying purity and often laced with other, harmful substances.  Some doctors may still agree to monitor individual patients’ use of such drugs, when they believe that their patients understand the risks, the doctors can make efforts to test the drugs for impurities before use, they can monitor the effects, and the drug’s benefits to the patient still outweigh the risks.

However, few public health officials, faced with anything less than life-or-death stakes, would recommend such a drug as an intervention by public policy.  A public health official and a primary care doctor share an obligation to “first, do no harm,” and yet that obligation applies very differently to a population than to an individual.  (As an example of a case where public health officials did choose to suggest that individuals do exactly what I outline above, consider this article on the use of PrEP in Great Britain.  But even there, in a literally life-or-death situation, they did not recommend or require something they weren’t fully confident of.)  And for essentially all US broadband users, having our browsing data aggregated by our ISPs and sold for marketing purposes is not a life-or-death proposition.

So this is why I do not recommend that US residential broadband users use a VPN, as a matter of what we might call public health policy in computer security.  And I think that, as computer security professionals, we will give better, more actionable advice if we at least think about security advice questions in this frame ourselves, use it to structure our advice, and perhaps use terms drawn from public health when we speak in public about security advice as well.

It’s not that there isn’t an explicit user statement or threat model here—I lay out a very clear one.  But this framing allows me to consider a broad group of users facing a narrow threat and so issue extremely specific and actionable advice.  I might even feel safe giving this advice without prefacing it with the threat model, if I feel my advice follows the “first, do no harm” principle well enough, and people outside the target population will not be hurt if they follow it.

I’ve attended a few security trainings where the leaders tried to enumerate every possible combination of user and use case and give advice for each, and the result was what I found to be very confusing security advice. It also overrepresented the one person with challenging or interesting (often technical) needs over the twenty people whose needs were less obviously so.

I have also frequently seen security people respond to every security question with “well, it depends on your threat model,” which is surely a reaction to bad blanket security advice given in the past. While it is strictly true as a statement, and is something I highly encourage for 1-on-1 conversations (as much to break the advice-givers out of our technical mindsets as anything), in group settings we can find some reasonable aggregates and make some reasonable assumptions based on context and perhaps a bit of research.

Also, using this public health frame encourages me to consider not just whether the proposed solution could work, but in what ways it might introduce new problems.  This makes it clear to me, not just as an advice-giver, but as a developer of security solutions and a manager of such teams, that there is more to my work than whether the solution is narrowly effective.  I must consider how much it costs—in money, in time, in my users’ emotional and cognitive energies (sometimes referred to as ‘spoons’).   I need to consider how it might break, how it might be subverted, how my users’ needs and habits might change as a result of its use.

When we started applying this frame to software security, even I, as a layperson, understood roughly that public health officials needed to do the work I’ve just outlined when considering their interventions.  Michael convinced me that not only did we need to do this work in computer security as well—and in practice I do not find it onerous—but that the fact that we weren’t doing it is why our interventions so regularly failed, and did so in ways which regularly hurt and sometimes killed people.  And the key insight he had, from which all of this understanding sprang, was to frame the problem as a public health problem—or even to ask, “Could we?”

I encourage you to try this frame, the next time you have cause to provide security advice to a group of people.  If you do, I would love to hear how well it applies for you: what works, what doesn’t, where you get stuck.  And finally, I hope that this idea of frames spurs us to ask the next questions, about what else this frame can teach us and what other frames exist in the world which might apply to our work. Who else has in the past solved similar problems to the ones we face now?

Why Not Advise Use of a VPN?

I’ve been surprised and gratified by the reception my post on Quick and Dirty VPN Advice has gotten.  Within the last two weeks, I’ve been retweeted by Zeynep Tufekci; invited on the Techdirt podcast, along with Kenn White; and interviewed by no less than the San Jose Mercury News.  That’s not why I do this, but it’s not unappreciated!

One of the questions I’ve gotten regularly, online and off, is why I recommend people not use a VPN.  It’s surprising advice in a context where so many others are recommending them. (Including no less than The New York Times and The Wirecutter, neither of whose articles meets my standards, so I won’t link them.)

Here’s what I wrote to one person who e-mailed me:

When I say people on residential broadband are safest not using a VPN, I mean that advice to serve as a sort of sane default.

Based on my research and the research of others, the median VPN service is somewhere between plain incompetent and outright malicious.  If I just tell someone “use a VPN” and they go off and Google it and select something as best they can, they’re extremely likely to wind up with something which will hurt them more than if they hadn’t used a VPN.

Having ads or malware injected into your browsing by your VPN service is a lot less safe than having your browsing habits included in your city’s aggregate data which Xfinity sells to marketers.

Even with the VPN that I use and recommend, Cloak, I can’t be 100% certain that they aren’t selling my data, and they’re a small company without much reputation on the line, so I don’t have much protection from them doing so, or recourse if they do.

So that’s my motivation for telling people that their default should be not to use a VPN, even if they’re concerned about their privacy, and then to use Cloak only if they’re willing to trust Cloak.

It’s kind of an unusual structure for security advice, which tends to veer drunkenly between infantilizing oversimplification and “JUST RTFM”, but it follows a “first, do no harm” principle that I hope threads the needle both for the general public and for vulnerable subpopulations within it.

Quick and Dirty VPN Advice

I’ve been doing some research on VPNs.  This is a quick and dirty post because I’ve talked about it on Twitter and enough folks there have asked for my opinions.

I won’t call my research exhaustive (see below for that), and the landscape is changing quickly.  This is what I believe with 90% confidence today.  I reserve the right to change my mind in the future.  Also expect me to continue editing this after posting to add more information or clear up misconceptions.

I’m gonna skip over what a VPN is for now. The basic idea and the technical details are better-covered by others. (The Wikipedia article on VPNs is not a shining example but it will do for now.)  It’s sufficient for my purposes to say that VPNs are a tool which some folks are recommending to those who are concerned about the proposed changes to broadband privacy regulations in the US and want to maintain their privacy.

Why should you trust me?  I worked for four years in information security at Akamai Technologies, a leading content delivery network especially for secure content (banks, Fortune 500 ecommerce, government institutions).  Secure content delivery networks have very similar security challenges to VPNs, and I was particularly involved with several of them.  I know what taking customer data security seriously looks like from the inside.  (That said, just to be clear, I don’t work for Akamai any more, and I don’t speak for them here.)

Disclaimer: I am a cybersecurity professional, but this does not constitute professional cybersecurity advice.  I provide this for informational purposes only.  If your browsing history gets sold to marketers despite this advice, you get to delete the resulting spam and/or deal with the resulting divorce, and if you do something the government really doesn’t like while following this advice and you get arrested by the FBI, you get to do the resulting jail time.

I have not received any promotional consideration for the opinions I express here.

This advice is targeted at general US persons (citizens and permanent residents), who are concerned about the potentially-changing residential broadband privacy rules, with a goal of helping you make your own best decisions based on your personal needs and risks.  Other people will have different needs and risks.  This advice is not intended to address all possible reasons someone might consider using a VPN.

A VPN does not provide any anonymity guarantees.  Full stop.  For anonymity protections, use Tor.

A VPN may provide some privacy protections.

In general, US persons today on residential broadband are safest not using a VPN.  This may be changing, hence the renewed interest.  Still, this is the status quo.

My advice, if you are concerned about changing US residential broadband privacy rules:

Call your Representative.  Tell them to oppose Senate Joint Resolution 34.  You can find their number and a script if you put your ZIP code in at the linked site. The House votes Tuesday, March 28 (two days from today, as I write this).  This is the single biggest thing you can do to protect your Internet privacy today.  (Yes, you’re calling your Representative to ask them to oppose a Senate resolution.)

If S.J. 34 passes the House and is signed into law by President Trump, you may still be safer not using a VPN.  Ask your residential broadband provider to guarantee in their Terms of Service that they will not sell your internet connection history, or derived products of it, to third parties for marketing purposes.

If your residential broadband provider won’t do that, should you use a VPN? Maybe. There are a number of caveats to keep in mind, though.

It is hard to verify that VPNs provide the protections that they claim to.  Very few of them have seen third-party security audits, and even the best such audit can only provide so much assurance.  All software has bugs, all systems have failure modes, and any one could make moot the privacy protections the VPN service claims to provide.

It is easy to verify that some VPN services cannot meet their claims, and most VPN services are terrible.
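
One crude check anyone can run themselves is to confirm that your apparent public IP address actually changes when the VPN is connected. It proves very little (nothing about logging, data sales, or DNS leaks), but it does catch the services that fail at step one. Here’s a minimal sketch, using the public api.ipify.org echo service purely for illustration:

```python
# Minimal sanity check, not an audit: prints your apparent public IP address.
# Run it once with the VPN disconnected and once with it connected; if the two
# values match, your traffic isn't leaving through the VPN at all.
# api.ipify.org is a public "what is my IP" service, used here as an example.

from urllib.request import urlopen

def apparent_public_ip() -> str:
    with urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode("ascii").strip()

if __name__ == "__main__":
    print("Apparent public IP:", apparent_public_ip())
```

Passing a check like this is necessary but nowhere near sufficient; failing it is disqualifying.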

That said, if you decide you want to use a VPN:

Don’t ever use a free VPN service.  If you’re not paying, you’re not the customer, you’re the product.  The whole point of the concern over broadband privacy is that you don’t want to be the product.

Don’t expect a VPN to protect you from law enforcement.  That’s not their job.

Don’t use VPN services which advertise BitTorrent anonymity or content geolocking circumvention.  Whatever your views on its ethics and morality, copyright infringement is a crime in the US, and a VPN provider which will turn a blind eye to crimes committed by its users is likely to commit a few of its own.

Only connect to US-based VPN servers while in the US.  Even if your VPN provider offers servers outside the US.  (There’s a lot of complexity here, but it’s a good rule of thumb.)

I use and recommend Encrypt.Me.  They support Windows, macOS, iOS, and Android, their policies are detailed and honest, their technology and security choices are solid and well-defended, they are undergoing a third-party audit, and they’ve nailed the user experience.

TunnelBear look like they may be a solid alternative.  I haven’t done enough research to have full confidence in this pick.  In particular, I have reservations about some of their technology choices, and would like them to publish more detail about their security choices and to undergo a third-party audit, but they avoid all my obvious red flags.

If you have sufficient technical skill, you may choose to run Algo.  I don’t recommend this for general users because of its complexity, and frankly you get the same technology with an Encrypt.Me (formerly Cloak) subscription, at a comparable or better price point, with a better UI, and somebody else on pager duty.

If you are interested in helping advance this research, e-mail me at kevinr@free-dissociation.com and I’ll give you access to the dataset where I’m collecting details about VPN providers’ legal documents and technical choices.

Browse safely. o7

 

Thanks to Christian Ternus for feedback after publication.

 

Edited 2017-03-27 13:00 EDT to clarify threat model/scope of advice in the “This advice is targeted at” paragraph.

Edited 2017-03-30 09:40 EDT to substantially expand “I’m gonna skip over” paragraph and add “If your residential broadband provider” paragraph.

Edited 2018-01-16 16:24 PST to reflect Cloak’s new name (Encrypt.Me) and addition of Windows and Android support.

status update

It seems my primary use of this blog for the last six years has been to talk about the changes in its hosting and make vague promises to update more.  This post is no exception!

I’ve moved the blog to WordPress hosted by Pressable, and, kidding aside, one of the things I like about WordPress is that the activation energy required to make a post is much lower than it was with my previous self-hosted solution.

Mostly this is a shake-down post to figure out what integrations are still running from the old site and what new ones I want to put in place. Expect some dust as everything comes together, but hopefully there will be actual new content here soon.  (Promises, promises.)