In February 2018, the outdoor-clothing retailer L.L. Bean abruptly terminated its famous “lifetime guarantee” for its products, imposing a one-year deadline for returns, which now must also be accompanied by proof of purchase.

     A company statement explained that “A small, but growing, number of customers has been interpreting our guarantee well beyond its original intent,” such as by submitting for refund “heavily worn products used over many years,” or “products that have been purchased through third parties, such as at yard sales.”

     This conduct—like boarding a commercial airline flight with a creature that does not actually qualify as one’s emotional support animal—“takes advantage” not legitimately (as in buying items temporarily offered at discounted prices) but abusively, and could lead to a company’s clarification, qualification, or even, as in L.L. Bean’s case, revocation of its well-meaning rule or policy.

     But it might not quite qualify as a “hack” under the definitions offered in cybersecurity expert Bruce Schneier’s recent, and surprisingly squishy, book A Hacker’s Mind.

     To Schneier, the term indicates a “clever, unintended exploitation of a system that (a) subverts the rules or norms of the system, (b) at the expense of someone else affected by the system”; or, “[s]omething that the system allows but which is unintended and unanticipated by its designers.”  (A “system” is “[a] complex process, constrained by a set of rules or norms, intended to produce one or more desired outcomes.”)

    Neither the yard-sale-goods-returner nor the mere-pet-carrier is being especially “clever” (a term Schneier doesn’t clarify), and their activities were probably not unanticipated by L.L. Bean or by the airlines (although the volume of such conduct might well have been).

    Schneier’s book opens with his own example of a hack: he realized (at an unspecified age) that the address card enclosed with Uncle Milton’s ant farm, which the buyer mails in to request a tube of live ants, would enable him to “have this company send a tube of ants to anyone I want.”

    Thus, hacking “occupies a middle ground between cheating and innovation.” Schneier emphasizes that, as with “an accountant [who] finds a loophole in the tax rules,” “Hacks are often legal.  Because they follow the letter of the rules but evade the spirit, they are only illegal if there is some overarching rule that forbids them.”  (Lawyers, though, might argue that such actions could be challenged as being “not in good faith.”) 

     Though he never clearly defines his central term, Schneier offers examples of hacking that include creative (re)interpretation of a rule’s words.

     The general reader might find few revelations, or even particularly deep insights, in the book’s thirty-five-page section on “Hacking Legal Systems,” which asserts that although “[t]he disempowered hack to subvert existing power structures,” the most effective “hackers” of laws and regulations can be “governments and large corporations” (with their lawyers’ help, of course). 

     It is not especially illuminating for Schneier to characterize common law—the accumulation of judicial decisions rather than statutory or regulatory rules—as evolving through “a series of adjudicated hacks that are either declared illegal or that become future precedent.” 

     Or, at the end of his discussion of “Hacking Legal Systems,” to proclaim that in some cases “Hacking is a process by which those who use a system change it for the better, in response to new technology, new ideas, and new ways of looking at the world.” 

     As Schneier recognizes—and as Steven Levy’s seminal, and much more memorable and thought-provoking, Hackers: Heroes of the Computer Revolution (1984) chronicled—to the pioneering programmers at MIT in the 1950s and 1960s, one meaning of “hacking” was the benevolent and skillful optimization of hardware or software systems to run with fewer but more elegantly interacting components, or lines of code.

     Compounding this semantic confusion, a word beloved by contemporary hackers (and part of both of Schneier’s senses of “hack”) has its own divergent definitions: like “taking advantage,” “exploit” can signify making “productive use of” something, but also “meanly or unfairly [using it] for one’s own advantage.”

    So, the powerful, but also the less powerful, can be hackers.  Hacking itself can be good, or maybe sometimes not.

     Amidst these muddled meanings and morals, law students and lawyers might find their most valuable lesson in a general application of one particular defense against hacking.

      Although Schneier asserts that “declar[ing] manipulative practices illegal” will fail because “any new rules will be hacked,” he recommends educating “potential victims” about methods by which they might be vulnerable to “‘cognitive hacks’ that play on universal human biases like fear and deference to authority.”

     Similarly, Appendix B of my own book recommends that “[i]n the same way that many lawyers prepare their clients to avoid psychological tricks and traps when being deposed or cross-examined, they should encourage executives to read (and might even summarize for them) some of the reporting on cognitive countermeasures,” a variety of which are discussed in that section.  I also suggest that “counsel themselves could . . . apply this information to enhance their own adversarial and cooperative efforts, as well as the ways in which they offer recommendations to clients.”

     Thus, a more practical and rewarding use of counsel’s, clients’, and compliance officers’ time than reading A Hacker’s Mind might be to read (or perhaps re-read) The Art of Deception: Controlling the Human Element of Security (2002), written (with a co-author) by convicted hacker Kevin Mitnick.

     Despite two decades of technological change, Mitnick’s detailed discussions of, and sample dialogues illustrating, “social engineering” techniques for tricking people into revealing sensitive information (such as “pretexting” and “Using Sympathy, Guilt, and Intimidation”) remain relevant.

     Of particular interest are Mitnick’s Chapter 15, which discusses how to create “Information Security Awareness and Training” programs; Chapter 16, on developing “Recommended Corporate Information Security Policies” (including an extensive list of suggested components); and his final summary, “Security at a Glance.”

As Schneier himself (citing Mitnick’s testimony before Congress in 2000) concluded a generation ago, in Secrets and Lies (2000), social engineering “will probably always work . . . People are basically helpful.  And they are easily duped.”