Simple Strategies and Secrets for Success in Law School (A Companion to the Book of the Same Name)

Author: waltereffross

THREE WAYS TO STACK A (STARTER) DECK

     One of the nine “Core Lists” that I recommend that law students construct is “an ABC list of Arguments, Building Blocks, and Clauses. . . that you encounter in your classes and assigned reading.”  I’ve suggested, “[Y]ou might even keep and organize [it] on index cards.”

     Beyond my book’s initial list of thirteen themes, three authors in particular supply entries for such a “starter deck,” which you can expand and customize during your law school journey.

     If—as the titles of two of their books indicate—these practical and theoretical elements constitute the “tools” of law students and practitioners, then a list, table, diagram, or deck would be a handy pegboard with which to neatly separate and organize them.

      ● Joel P. Trachtman’s The Tools of Argument: How the Best Lawyers Think, Argue, and Win (2013) collects, introduces, and explains dozens of arguments (and their corresponding counter-arguments, and even counter-counter-arguments) in the categories of: procedure; interpretation; use of precedent; facts and evidence; and standards of legal responsibility.

     These tactics, and his examples, range across the law school curriculum, including civil procedure, torts, contracts, evidence, constitutional law, international law, and jurisprudence.

      Trachtman, a professor of international law at the Fletcher School of Law and Diplomacy at Tufts University, carefully states in the book’s introduction that “Lawyers are the modern heirs of the ancient Greek sophists, the worst of whom sought to ‘make the weaker argument appear the stronger.’”

    However, he provides no source for this quotation, despite his declaration that “Citations allow the reader to see, and to evaluate for himself, the quality of the support for the author’s statements.”

     In fact, he ultimately adopts the more pejorative, and “ethically unappealing,” meaning of the word: “This book is not intended to school sophists.  But it does include a taxonomy of the tools of sophistry so they can be identified and countered.”

      Thus, Trachtman might to some degree contradict his book’s emphasis on clarifying the meaning of terms, especially to preclude disputes about the drafter’s “original intent.”

     (The current popular understanding of sophist, like that of stoic, cynic, and epicurean, actually departs considerably from the tenets and techniques of the ancient Greek schools of philosophy known by the respective (capitalized) names.)

     ● Ward Farnsworth’s more complex The Legal Analyst: A Toolkit for Thinking About the Law (2007) focuses on “ideas that can be introduced effectively with a bunch of good examples in a short chapter.”  

     The author, a professor at the University of Texas at Austin School of Law, thus acknowledges “notable omissions [that] include ideas from moral theory, from critical legal studies, and from legal realism [and] theories of constitutional interpretation. . . .”

     Farnsworth’s thirty-one chapters, most of which conclude with a helpfully annotated list of related law review articles and books, are “grouped into five parts”: “incentives: the effects that legal decisions have on the choices people make afterwards”; “trust, cooperation, and other problems that arise when people work together”; “subjects from jurisprudence”; “cognitive psychology” and “the ways that people may act irrationally”; and, “problems of proof.”

     Most useful to the beginner might be the jurisprudence section’s discussion of “Rules and Standards,” “Slippery Slopes,” “Property Rules and Liability Rules,” and “Baselines.”

     Those without a background in economics should appreciate much of the first section, which presents such concepts as efficiency, waste, rent seeking (“wasteful efforts to gain a prize,” or “competition over the way some good thing ought to be distributed”), and Coase’s Theorem (“in a world with no transaction costs. . . . rights naturally flow into the hands of whoever will pay the most for them”). 
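
     (For readers who learn by tinkering, the bargaining logic behind the Coase Theorem fits in a few lines of Python.  This is only a hypothetical sketch: the rancher-and-farmer scenario and the dollar figures are illustrative, and are not drawn from Farnsworth’s book.)

        # Hypothetical illustration of the Coase Theorem with zero transaction costs:
        # a rancher's straying cattle damage a farmer's crops, and the disputed
        # "right" is the right to let the cattle roam.
        def final_rights_holder(initial_holder, value_to_rancher, value_to_farmer):
            """Return who holds the right after costless bargaining."""
            if initial_holder == "rancher":
                # The farmer buys the right if (and only if) it is worth more to him.
                return "farmer" if value_to_farmer > value_to_rancher else "rancher"
            return "rancher" if value_to_rancher > value_to_farmer else "farmer"

        # The right flows to whoever values it most, no matter who starts with it:
        assert final_rights_holder("rancher", 100, 150) == "farmer"
        assert final_rights_holder("farmer", 100, 150) == "farmer"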

     The second section examines game theory situations, like the prisoner’s dilemma and the stag hunt, without demanding mathematical skills or a calculator. 
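
     (To make the first of those concrete, here is a minimal Python sketch of the prisoner’s dilemma; the payoff numbers are hypothetical, not Farnsworth’s.)

        # Hypothetical payoffs (years in prison, negated so that higher is better)
        # for the classic prisoner's dilemma: each player chooses to stay "quiet"
        # or to "talk," without knowing the other's choice.
        PAYOFFS = {
            ("quiet", "quiet"): (-1, -1),   # both serve one year
            ("quiet", "talk"):  (-10, 0),   # the talker goes free
            ("talk",  "quiet"): (0, -10),
            ("talk",  "talk"):  (-5, -5),   # both serve five years
        }

        # Talking is each player's best reply to either choice by the other, yet
        # mutual talking leaves both worse off than mutual silence -- the tension
        # that drives the "dilemma."
        for other in ("quiet", "talk"):
            best = max(("quiet", "talk"), key=lambda me: PAYOFFS[(me, other)][0])
            assert best == "talk"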

     Throughout, The Legal Analyst includes examples and illustrations of the concepts, and clearly analyzes their practical import for lawyers, clients, the court, and the public.

     ● Parts of Farnsworth’s fourth section (and some elements of his fifth)—like Trachtman’s discussion of such “Rhetorical Tricks” as misleading logical or causal connections—are  more deeply explored in Nobel Prize laureate Daniel Kahneman’s magisterial Thinking, Fast and Slow (2011).

     Kahneman, an emeritus professor of psychology at Princeton (and also of Psychology and Public Affairs, at The Princeton School of Public and International Affairs), pioneered with Amos Tversky the field of “behavioral economics” (sometimes referred to as “neuroeconomics”).  His longtime best-seller summarizes for non-specialists the findings and implications of his earlier and more technical articles (some of which Farnsworth cites). 

     After reviewing an unsettling and humbling catalog of psychological pitfalls to which even the most (subjectively) careful decision-maker might succumb, the author concludes modestly that, in general, “[M]y intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues.  I have improved only in my ability to recognize situations in which errors are likely. . . .” and thus to “slow down, and ask for reinforcement from System 2” (his shorthand reference to one’s non-intuitive, deliberative, and effortful mental processes).

     Lawyers might well educate themselves and their individual and corporate clients about these potential vulnerabilities, and help develop cognitive countermeasures, including decision-making processes. 

     In particular, “Organizations are better than individuals when it comes to avoiding errors, because they naturally think more slowly and have the power to impose orderly procedures.”

      Appendix B of my own book lists, categorizes, and summarizes dozens of such traps, and offers a starter set of “Preventive Procedures, Policies, and Protocols.”    

IN RE HOLMES (ELIZABETH): RED FLAGS AND BLACK INK

     Investigative reporter John Carreyrou’s gripping and best-selling account of Theranos Inc., Bad Blood: Secrets and Lies in a Silicon Valley Startup (2018), presents many personal and professional questions for law students to consider.

     Theranos claimed that it had developed a device that could, by miniaturizing and combining existing technologies, conduct many different diagnostic tests on the same small sample of a patient’s blood.  

       However, as detailed by Carreyrou (then reporting for The Wall Street Journal, and now with The New York Times), the company never realized that goal, and instead appears to have supplied dangerously inaccurate results to some people whose blood it tested.

     In January 2022, Theranos founder and CEO Elizabeth Holmes was convicted of three counts of wire fraud against the company’s investors, and of one count of conspiracy to commit wire fraud.  That November, she was sentenced to a prison term of more than eleven years.

     In July 2022, Theranos’s former chief operating officer Ramesh “Sunny” Balwani was convicted of ten counts of wire fraud and two counts of conspiracy to commit wire fraud; five months later, he was sentenced to a term of almost thirteen years.

     Balwani, when discussing in 2010 with representatives of Walgreens the early stages of an arrangement to install Theranos devices in their drugstores (a parallel partnership was created for Safeway’s grocery stores), scuttled a suggestion to involve Walgreens’s information technology personnel.  He declared, “IT are like lawyers, avoid them as long as possible.”

     Yet Carreyrou’s chronicle contains no shortage of lawyers, some of whom might have been more advantageously consulted earlier:

     ● The company’s in-house and outside counsel threatened and sued its former employees, claiming that they had stolen trade secrets. 

     ● Theranos’s lawyers attempted to compel an employee to surrender his patent for a method of affixing lights to bicycle wheels (an invention for which he had successfully raised funds on Kickstarter), asserting that the company was legally entitled to all intellectual property created by its employees.

     ● After one employee, distraught about developments at the company and about his professional prospects, ended his own life, his widow left a message for Holmes, only to receive that day “an email from a Theranos lawyer requesting that she immediately return [his] company laptop and cell phone and any other confidential information he might have retained.”

     ● When a newly-terminated laboratory director refused to sign any more legal papers, Balwani laughably “offered to hire him an attorney to expedite matters.”  (After retaining a lawyer on his own, the former employee complied with the company’s demands to destroy his copies of company documents.)

     ● An inventor and (former) friend of Holmes’ family engaged a lawyer to file, in 2006, a patent application essential to some planned functions of Theranos’s devices. 

     ● Because the inventor’s son was a partner at Theranos’s regular patent law firm, that firm declined to represent Theranos in challenging the application.  Theranos then sued the father and both of his sons, alleging that the lawyer had conveyed some of the company’s confidential intellectual property to his father.

      ● Because of the U.S. Central Command’s military interest in using Theranos’s device on the battlefield, an Army lawyer met with Holmes.  He “noted that she had brought no regulatory affairs expert to the meeting.  He suspected the company didn’t even employ one.  If he was right about that, it was an incredibly naïve way of operating.  Health care was the most highly regulated industry in the country and for good reason: the lives of patients were at stake.”

     ● The prestigious advertising agency Chiat\Day, concerned about its potential liability for statements made in marketing materials prepared for Theranos (and reviewed by Theranos’s counsel), consulted its own lawyers. 

      ● Most notably, Theranos ultimately retained Boies, Schiller & Flexner, the firm of prominent lawyer David Boies.  Boies subsequently became a director of the company, and the law firm became a shareholder in Theranos.  (Holmes would later claim that she had believed that Boies had represented not only the company but herself personally; in the course of dismissing that contention, a federal magistrate judge noted in June 2021 that there had been no written retention agreement between the law firm and Theranos.)

     ● Lawyers from Boies, Schiller “ambushed” former employee Tyler Shultz at the home of his grandfather, former Secretary of State (and, at the time, Theranos director) George Shultz, accusing Tyler of having contacted the Wall Street Journal.

     ● Tyler, refusing to sign a document that those lawyers pressed on him, observed, “A Theranos lawyer had drafted this with Theranos’ best interests in mind. . . . I think I need a lawyer to look at it with my best interests in mind.”

     ● Tyler later “arranged for [his parents] to have their own legal counsel.  That way he could communicate with them through attorneys and those conversations would be protected by the attorney-client privilege.”

     ● At a meeting in the Journal’s headquarters, Boies and colleagues  immediately activated “little tape recorders. . . at each end of the conference table” to  capture their conversation with Carreyrou, his editor, and the newspaper’s deputy general counsel. 

     They would later send the newspaper a letter that “sternly demanded that the Journal ‘destroy or return’ all Theranos trade secrets and confidential information in its possession.  Even though Boies must have known there was zero chance we would do that, it was a shot across the bow.” 

     ● However, in response, “the Journal’s legal department dispatched a technician to copy the contents of my laptop and phone in preparation for litigation.”

     ● Finally, after (starting in October 2015) Carreyrou’s front-page stories (which the newspaper’s “standards editor and the lawyers would comb through line by line”) were published, field inspectors from the federal Centers for Medicare and Medicaid Services (CMS), “the chief regulator of clinical laboratories,” visited Theranos for four days.  “As soon as she sat down” for an interview, a lab associate who had worked on the Theranos technology “asked for an attorney.  She looked coached and afraid.”

     Among the practical issues that Bad Blood raises are:

     ● What danger signs should compel current and potential employees, directors, investors, and lawyers to investigate, if not reevaluate, their relationship with a company and its leaders?  What disclosure policies, or other protocols or processes, should be instituted to detect, and follow up on, such “red flags”?

     At Theranos, not only were firings apparently frequent, but employee access to information was extremely compartmentalized.  Although Holmes and Balwani explained that internal secrecy was an aspect of the company’s “stealth mode,” one employee privately mused that other firms had “cross-functional teams with representatives from the chemistry, engineering, manufacturing, quality control, and regulatory departments working toward a common objective.”

     On another level, it was not only some employees who were surprised to learn of the personal relationship between Holmes and Balwani.  Carreyrou asks, “If Holmes wasn’t forthright with her board about her relationship with Balwani, then what else might she be keeping from it?”

     ● By what techniques and processes can individuals and companies, and their counsel, prevent themselves from succumbing to the “almost hypnotic” and “mesmerizing effect” of an Elizabeth Holmes’s “mixture of charm, intelligence, and charisma”?

     Holmes won the support of, among others, the associate dean of Stanford’s School of Engineering (one of her former teachers); venture capitalists; Oracle founder Lawrence Ellison; and board members who included not only Shultz and fellow former Secretary of State Henry Kissinger but also the former head of the U.S. Central Command, a former admiral, a former Secretary of Defense, a former CEO of Wells Fargo, and former Congressmen.

     Carreyrou observes that at an emergency meeting of Theranos’ board, convened after company insiders had reported misleading revenue projections and statements of technical progress, Holmes managed to talk the directors out of removing her as CEO.

      ● To what degree do personal, corporate, and technological interconnections compound, or clarify, complexities?

      Carreyrou, meeting with a source, notes that “As we drove around in her car, I was struck by how small and insular Palo Alto was,” and proceeds to detail the proximity of some of the people and companies involved.

       Holmes “was able to leverage the family connections of a wealthy Mexican student at Stanford” to obtain government authorization to use Theranos devices in Mexico; made her brother the associate director of Theranos’ product management; and later hired some of his friends.

      Of the company’s engineering efforts, Bad Blood notes, “When you packed that many instruments into a small, enclosed space, you introduced unanticipated variations in temperature that could interfere with the chemistry and throw off the performance of the overall system.”

    One component of the product, a cartridge in which the sample of blood was combined with chemicals, “was a complicated, interconnected system compressed into a small space.  One of [the company’s] engineers had an analogy for it: it was like a web of rubber bands.  Pulling on one would inevitably stretch several of the others.”

     ● How can individuals and companies resist the pressure to form agreements out of a Fear of Missing Out (FOMO)?  Carreyrou suggests that such concerns motivated both Walgreens (worried about rival CVS) and Safeway to partner with Theranos.

     ● How do potential whistleblowers document their concerns in the face of company counsel’s demands (excerpts from which are included) that they destroy or return, by a specified deadline, any proprietary information (including, in some cases, emails that employees had forwarded to their own personal accounts)?

     ● How and when, and under what conditions, should current and former employees contact, and/or cooperate with inquiries from, journalists? 

     ● At what point, if any, should counsel for a source, and counsel for the journalist’s organization, become involved?  (In addition, should the journalist ever retain her own personal lawyer in this connection?)

     ● What practices should the journalists and/or lawyers recommend, and what agreements should they adopt, to protect the identities of potential and actual sources?

     Carreyrou himself starts becoming a major part of Bad Blood only in “The Tip,” the nineteenth of the book’s twenty-four chapters.  One potential source “told me he would speak to me only if I promised to keep his identity confidential.  Theranos’s lawyers had been harassing him and he was certain the company would sue him if it found out he was talking with a reporter.  I agreed to grant him anonymity.”

     Another source agreed to talk, “but only off the record.  This was an important journalistic distinction: [other sources] had agreed to speak to me on deep background, which meant I could use what they told me while keeping their identities confidential.  Off the record meant I couldn’t make any use of the information.” 

     Tyler Shultz, rejoining Carreyrou a year after their first meeting (which Tyler had arranged after “calling me from a burner phone that couldn’t be traced back to him”), “didn’t want to discuss the subject in an open place within earshot of other people.” Once in a private setting, he says, “My lawyers forbade me from talking to you, but I can’t keep this bottled up anymore.”  (“I agreed to keep whatever he was about to tell me off the record and only to write about it in the future if he gave me his permission to do so.”)

     ● To what degree do potential sources know about these options, and how standard are they?  Does their meaning depend on the form of media?  The particular publication?  The particular journalist?

     ● How, if at all, are a source’s understandings with a journalist (including with whom, and under what circumstances, the journalist can reveal that source’s identity to one or more of her colleagues) clarified and documented? 

     ● If any form of promised confidentiality is compromised, how can a source demonstrate the terms of the original arrangement if he sues the journalist and her organization?

     ● Finally, under what circumstances should lawyers adopt or adapt for their own purposes one of the Wall Street Journal’s practices, “a cardinal rule called ‘No surprises’”?

      “We never went to press with a story without informing the story subject of every single piece of information we had gathered in our reporting and giving them ample time and opportunity to address and rebut everything.”

NOT STRICTLY BUSINESS: AN INSIDER’S ACCOUNT OF BUSINESS SCHOOL

     Law and pre-law students, and lawyers, might find many valuable perspectives in Philip Delves Broughton’s detailed, critical, and sometimes-irreverent account, Ahead of the Curve: Two Years at Harvard Business School (2008). 

    Although Broughton’s professors appear to have focused on legal issues only sparingly, the book illuminates the education of a lawyer’s potential clients; provides instructive comparisons with, and contrasts to, traditional practices of legal education; and (directly contradicting a classic movie line) offers advice not only for the reader’s professional career but also for her personal life.

    In 1854, Thoreau famously wrote, “I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived.”

    One hundred and fifty years later, Broughton entered “HBS,” about fifteen miles away from Thoreau’s Walden Pond.  After a decade in journalism, he wanted “to learn about business in order to gain control of my own financial fate, and more important, my time,” and to pursue “greater knowledge about the workings of the world and broader choices about the life I might lead.”

    Without revealing the author’s post-graduation opportunities (although his series of job interviews is carefully described), it can be said that the book’s themes and “takeaways” include:

    ● The immersive nature of the business school experience, and its similarities to the process of learning a new language.

    ● Defining and refining one’s short-term and long-term purposes, and maintaining one’s individuality and ethics, while resisting peer pressure and the urge to compare oneself to others.  (Broughton includes a particularly entertaining discussion of the personality tests that he and his classmates were required to take.)

    ● The division, as in law schools, of entering classes into equal sections, the members of which take most, if not all, of their first year classes together, and gradually develop their own classroom and extracurricular cultures.

    ● The degree to which much of the author’s HBS education came not in the classroom but from discussions—often about defining “selling out” or finding the appropriate work/life balance—with his diverse group of classmates, a number of whom had come to the school from other countries and/or had business or military experience.  (“I was developing a soft spot for the ex-military guys.  They were refreshingly sane.”)

    ● On a different level, the importance to some businesses of a local community of complementary (and in some cases competing) firms, such as in the economic ecosystems of Wall Street and Silicon Valley.  

    ● The business school’s “case studies,” which—though adapted from law schools’  Socratic discussions of court decisions—instead involve faculty-written descriptions, “from a couple of pages to more than thirty,” of “business situations drawn from real life.  The question you are expected to answer in each one is, What would you do?  There are no right or wrong answers to these problems. . . . The only thing that matters is how you think about the problems, how you deal with the paucity of information, the uncertainty.”

     ● The degree—extreme, by law school standards—to which class participation counts in the grading of first-year courses: “Fifty percent of our grade would be determined by. . .  the quality and frequency of our comments,” predictably resulting in a “battle for classroom air time.”

      ● The questions—increasingly being considered by law schools—of whether, and how, leadership, including motivational skills, can be systematically and objectively defined and taught.  (“The word leadership lurked in every corner of HBS.”)

     ● More generally, the extent to which complex concepts and systems can be reduced to formulas or diagrams.

     ● The clarification and codification of decision-making processes; and, the possible perils of prolonged profits-only thinking.  (“This was why people hated MBAs.  Too much cost-benefit analysis, too little humanity. . .  However technically correct an analysis might be, it could also be too rational.  Achieving the balance between reason and emotion would never cease to be a challenge.”)

     ● The crucial, but nonformulaic, role of personal reputation, chemistry, and connections in business success.  (“The financial system turns on personalities as much as anything else.”)

     (In one episode of AMC’s Mad Men, the head of a 1960s-era advertising agency dressed down a young account manager who had alienated a major client: “I don’t know if anyone’s ever told you that half the time, this business comes down to, ‘I don’t like that guy!’”  In another episode, that account manager memorably answered an academic’s condescending query, “What do you do every day?”)

     ● Student stress, including the ever-present FOMO (Fear of Missing Out).  Broughton reports that his class was repeatedly advised by HBS administrators that “You had to choose exactly what you wanted to do and do it without fretting about what else was going on.”

     ● The definition, and process, of valuation—of an asset, a business, an opportunity, and a person (including oneself).

     ● How to define, obtain, and maintain a competitive advantage, as a professional businessperson and for one’s business(es).

     ● The definition, measurement, and appropriate levels of uncertainty and of risk.

     ● The roles of, and requirements for, entrepreneurs.  (“The HBS definition of ‘entrepreneurship’ was ‘the relentless pursuit of opportunity beyond resources currently controlled.’”) 

     Broughton discusses the extracurricular lessons of his own attempt, with a classmate, to obtain venture capital funding to “create a website that would be a supermarket of audio content.”

     ● As recommended by one of the HBS faculty (and in my book on law school), gauging, or demonstrating, one’s mastery of a subject by one’s ability to explain it to someone with no background in the area.

     ● Deep focus and constant attention to details, and a relentless search for innovations.  Broughton identifies the “crux” of the Advanced Competitive Strategy course as, “Most ideas are pretty good ones.  It all comes down to execution, staying alert, paying attention.”  (Or, as poet Jane Hirshfield summarized Zen: “Everything changes.  Everything is connected.  Pay attention.”) 

     ● An early anecdote, presented by one of the faculty:

      “[A]n MBA student [was] arguing with someone in the HBS administration.  As tempers rose, the student burst out, ‘Why are you treating me like this?  I’m the customer, goddamnit.’  ‘No, you’re not,’ said the HBS employee.  ‘You’re the product.’”

      (The speaker concluded, to the class: “I guess you’re somewhere between. . . . Sometimes you’re the customer, other times you’ll feel like the product.”)

      ● Among the most fascinating features of Broughton’s book are his summaries of the variety of life advice (including, “return calls and e-mails in a timely way”) provided to his class by the HBS faculty and by notable (and named) businesspeople; the reflections of some of his classmates, a year after graduation; and his recommendations for improving the school.

      But perhaps the book’s single best line is the author’s response to his wife’s characterization of some of his classmates as “freaks”:

      “I know, I know. . . I’m just worried that if I stop recognizing the freaks, I’ll become one of them.”

STRATEGIES, SKUNK WORKS, STRAIGHT ARROWS, AND STEPPING-STONES

    The Clayton M. Christensen Reader (2016) can be read in an afternoon, but its professional and personal lessons might serve well throughout one’s career.

     The book collects eleven Harvard Business Review articles written or co-written by the Harvard Business School professor between 1995 and 2015 on the evolution of industries, companies, markets, executives, and management theories.

     Christensen (1952-2020) is best remembered for his concept of “disruptive innovation.”  His book, The Innovator’s Dilemma (first published in 1997), was reported to have “deeply influenced” Apple’s Steve Jobs.

    Using that strategy, less-established competitors can supplant market leaders by developing products—like transistor radios, smaller-than-standard hard drives, and personal computers—that might “perform far worse along one or two dimensions that are particularly important to [mainstream] customers,” but that “introduce a very different package of attributes” valued “in new markets or new applications.”

    ● Thus, in “Disruptive Technologies” (1995), Christensen warned dominant companies to look beyond their valued customers and successful offerings. 

      Managers should themselves pursue potentially disruptive technologies—especially those opportunities identified by “technical personnel,” as opposed to “[m]arketing and financial managers” or “lead customers”—by “creat[ing] organizations [such as ‘skunk works projects’] that are completely independent from the mainstream business.” Executives should research potential markets “by experimenting rapidly, iteratively, and inexpensively with both the product and the market.”

     ● Twenty years later, in “What is Disruptive Innovation?”, Christensen complained that his theory’s “core concepts have been widely misunderstood and. . . basic tenets frequently misapplied.  Furthermore, essential refinements. . . appear to have been overshadowed by the popularity of the initial formulation.” 

     In particular, he insisted that the term did not apply to every “situation in which an industry is shaken up and previously successful incumbents stumble,” but only to the displacement of established businesses by upstarts that at first serve lower-level customers (and sometimes, as with personal photocopiers, entirely new markets), “frequently at a lower price,” but “then move upmarket, delivering the performance that incumbents’ mainstream customers require, while preserving the advantages that drove their early success.”

     Thus, Uber did not qualify as a disruptive innovation, because it had neither exploited a low-end market that had been overlooked by San Francisco’s established taxi companies nor pursued “people who found the existing [ride-for-hire] alternatives so expensive or inconvenient that they took public transit or drove themselves instead.” 

     On the other hand, Netflix was a disrupter, once it moved from supplying DVDs by mail to “becom[ing] appealing to Blockbuster’s core customers, offering a wider selection of content with an all-you-can-watch, on-demand, low-price, high-quality, highly convenient approach. . . [F]ailing to respond effectively to the trajectory that Netflix was on led Blockbuster to collapse.”

     Christensen also resisted the loose use of “disruptive” to characterize any successful innovation.  Pointing to the large number of failed Web-based stores, he noted, “Not every disruptive path leads to a triumph, and not every triumphant newcomer follows a disruptive path.”

    He concluded by observing that even companies that, as he’d recommended, reevaluate their own operations won’t necessarily be immune to disruptive innovation: in the complex and fluid world of business, “Sometimes this [advice] works—and sometimes it doesn’t.”

    Indeed, his more general essay, “Why Hard-Nosed Business Executives Should Care About Management Theory” (2003), had already observed that, “in business, as in medicine, no single prescription cures all ills”; but added that “Progress comes from refining theories to explain situations in which they previously failed. . . .”

     Law students and lawyers should find much of value in considering the ways in which disruptive innovation (most recently, the changes wrought by ChatGPT and other “generative AI” technology) threatens—and provides opportunities to—not only their current and potential clients, but also their own careers and profession.

     ● Christensen’s most far-reaching and personal article, from 2010, bears the title of his 2012 book: “How Will You Measure Your Life?” 

     He wrote that he concluded each semester by asking his business school classes “to find cogent answers to three questions: First, how can I be sure that I’ll be happy in my career?  Second, how can I be sure that my relationships with my spouse and my family become an enduring source of happiness?  Third, how can I be sure I’ll stay out of jail?  Though the last question sounds lighthearted, it’s not.”

    Christensen cautioned that the temptation to deviate “just this once” from one’s principles “suckers you in, and you don’t ever look at where that path ultimately is headed and at the full costs that the choice entails. . . . You’ve got to define for yourself what you stand for and draw the line in a safe place.”

    This counsel corresponds with part of the “Ten-Step Program to Resist Unwanted Influences” that Stanford psychology professor Philip Zimbardo (1933- )—most memorably connected to the (in)famous Stanford Prison Experiment (1971) and to the broken window theory (1969)—provided in his book, The Lucifer Effect: Understanding How Good People Turn Evil (2007), and on the associated website, as “a starter kit toward building individual resistance and communal resilience against undesirable influences and illegitimate attempts at persuasion.”

     Zimbardo “discourage[s readers] from venial sins and small transgressions, such as cheating, lying, gossiping, spreading rumors, laughing at racist or sexist jokes, teasing, and bullying.  They can become stepping-stones to more serious falls from grace.”

     Both works, in their own ways, present personal morality as something that, like a company’s products, should be carefully researched and developed. 

    Like Christensen’s theory, one’s morality should be defined as clearly as possible, although it might not yet (or ever) be comprehensive.

    But one’s moral line, unlike a line of products, should not be self-disrupted or compromised to head off competitors, and should not be subject to “pivots” or “iterations” induced only by the pressure of peers, employers, or the market.

     Otherwise, as Groucho Marx joked more than eighty years ago, “Those are my principles.  If you don’t like them, I have others.”

MACRO-, MICRO-, AND MINDFUL MANAGEMENT OF TIME: THREE APPROACHES

     Law students and lawyers, deluged daily with data, details, and deadlines, are often inclined towards prescriptions and practices for personal and professional productivity.

     David Allen’s classic Getting Things Done: The Art of Stress-Free Productivity (2001), which spawned numerous “GTD”-related websites, begins from the premise that  “Most people walk around with their [short-term memory] bursting at the seams.  They’re constantly distracted, their focus disturbed by their own internal mental overload.”

      Allen identifies, diagrams, and discusses sequentially “five separate stages” of working, which he recommends implementing at least weekly:

      “We (1) collect things that command our attention; (2) process what they mean and what to do about them; and (3) organize the results, which we (4) review as options for what we (5) do.”
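
      (For readers who think in code, a minimal Python sketch of that pipeline follows; the stage names are Allen’s, but the data structures are hypothetical illustrations, not his.)

        # A minimal sketch of Allen's five stages, modeled as a pipeline.
        inbox = []                          # 1. collect what commands our attention

        def process(item):                  # 2. process what it means
            return {"item": item, "next_action": "decide what to do about " + item}

        organized = []                      # 3. organize the results

        def review():                       # 4. review them as options...
            return list(organized)

        def do(action):                     # 5. ...for what we do
            print("doing:", action["next_action"])

        inbox.append("draft the client memo")
        for entry in inbox:
            organized.append(process(entry))
        for option in review():
            do(option)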

      GTD’s reassuring system and structures might not contain many revelations, but could well repay regular rereading.  Allen’s principles remain relevant, and could easily be adopted and/or adapted, in an increasingly-digital environment.

      Among Allen’s advice:

     ● “It’s. . . just a great habit to date everything you hand-write. . . The 3 percent of the time that this little piece of information will be extremely useful makes it worth developing the habit.”

     ● “Whenever you come across something [in hard copy that] you want to keep, make a label for it, put it in a file folder, and tuck that into your filing drawer. . . . [I]f you can’t get it into your system immediately, you’re probably not ever going to.”

    ● When processing an item, determine the “next action” related to it. “If the next action can be done in two minutes or less, do it when you first pick the item up. . . . If you have a long open window of time in which to process your in-basket, you can extend the cutoff for each item to five or ten minutes.”
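
     (That cutoff reduces to a simple decision rule.  Below is a hypothetical Python sketch; the function and parameter names are mine, and the adjustable cutoff reflects Allen’s five-or-ten-minute variant.)

        # Hypothetical sketch of Allen's "two-minute rule" for processing an item.
        def handle(item, estimated_minutes, cutoff=2):
            """Do the next action now if it fits under the cutoff; otherwise defer it."""
            if estimated_minutes <= cutoff:
                return "do now: " + item
            return "defer, delegate, or schedule: " + item

        print(handle("reply to the scheduling email", 1))       # do now
        print(handle("draft the appellate brief", 90))          # defer
        print(handle("skim the new case", 7, cutoff=10))        # longer open window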

     Getting Things Done frequently features lists (including examples and enumerations of categories of other listable items): current projects; next actions; “someday/maybe” initiatives; action reminders and “incompletion triggers”; elements of a “Weekly Review”; helpful office supplies; physical areas to organize; and, six levels of perspective on one’s work (Current actions; Current projects; Areas of responsibility; One- to two-year goals; Three- to five-year visions; and, Life).

     For a specifically hard-copy-based system of productivity, many have turned to Ryder Carroll’s The Bullet Journal Method (2018), presented as “a way to stem the tide of digital distractions [and] an analog solution that provides the offline space needed to process, to think, and to focus.” 

     Like Allen (who offers detailed instructions for “corralling your ‘stuff’” and then conducting a “mind-sweep”), Carroll invites readers to take an initial inventory (of “all the things you are presently working on”; “all the things you should be working on”; and, “the things you want to be working on”).

     Not everyone might employ his various “signifiers and custom bullets” (symbols such as dashes, < , > , and ◦) to designate and distinguish Tasks, Events, Notes, and other information entered.

     Moreover, the “journal” aspect of Carroll’s method is extremely abbreviated, because “Not having to articulate the complexity of an experience makes it much more likely for us to write it down.  That’s the most important part: to have a record.” (One example: “Signed the lease.  Yay!”)  Similarly short are sample “notes” about aspects of pending, or completed, events.

      The full Bullet Journal system, which some might consider overly elaborate, includes four core “Collections”: the Daily Log (compiled as the day progresses; as with Allen’s approach, “The idea is to be consistently unburdening your mind”); the Monthly Log (prepared before the month begins, and added to during the month); the Future Log (which “stores entries that have specific dates that fall outside of current month”); and the Index, which contains lists of previous page references, organized by topic.
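
      (Although Carroll’s method is deliberately analog, the skeleton of those Collections can be sketched in a few lines of Python; the type names, bullet symbols shown, and sample entries are hypothetical illustrations, not Carroll’s specification.)

        # Hypothetical sketch of the four core Collections in a bullet journal.
        from dataclasses import dataclass, field

        @dataclass
        class Entry:
            bullet: str   # e.g., "-" for a task, "o" for an event, "<" / ">" for migration
            text: str     # kept deliberately short: "Signed the lease.  Yay!"

        @dataclass
        class Journal:
            daily: list = field(default_factory=list)    # compiled as the day progresses
            monthly: list = field(default_factory=list)  # set up before the month begins
            future: list = field(default_factory=list)   # dated beyond the current month
            index: dict = field(default_factory=dict)    # topic -> list of page numbers

        j = Journal()
        j.daily.append(Entry("-", "Outline contracts notes"))
        j.index.setdefault("Contracts", []).append(12)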

    Whatever one’s preferred mechanics of time management, the expansive—and often counterintuitive—philosophy of Oliver Burkeman’s Four Thousand Weeks: Time Management for Mortals (2021) should be a sobering, but also refreshing, companion.

    Burkeman’s title refers to the human lifespan (seventy-seven years comes to about four thousand weeks); and his introduction, “In the Long Run, We’re All Dead,” summarizes the book’s theme:

     “Productivity is a trap.  Becoming more efficient just makes you more rushed, and trying to clear the decks simply makes them fill up again faster.  Nobody in the history of humanity has ever achieved ‘work-life balance,’ whatever that might be, and you certainly won’t get there by copying the ‘six things successful people do before 7:00 a.m.’”

     Burkeman counsels readers to acknowledge their limitations; to disregard FOMO (the Fear Of Missing Out), since no one can travel every road; and to resist the temptation to keep one’s options endlessly open (instead, “deliberately mak[e] big, daunting, irreversible commitments, which you can’t know in advance will turn out for the best, but which reliably prove more fulfilling in the end”).

     He recommends that readers pursue hobbies (“it’s fine, and perhaps preferable, to be mediocre at them”); “develop a taste for having problems” rather than fantasizing about a friction-free future; patiently proceed through “the trial-and-error phase of copying others, learning new skills, and accumulating experience”; work steadily and incrementally on large projects; and, to “strengthen the muscle of patience” and sustain long-term productivity, “be willing to stop when your daily [scheduled] time is up, even when you’re bursting with energy and feel as though you could get much more done.”

    Burkeman’s closing advice is to “Practice doing nothing,” even for only a few minutes at a time, which will help you “begin to regain your autonomy—to stop being motivated by the attempt to evade how reality feels here and now, to calm down, and to make better choices with your brief allotment of life.”

     In those meditative moments, one might recall the (translated) poem of Zen-influenced haiku master Matsuo Basho (1644-1694):

                        Sitting quietly, doing nothing

                        Spring comes

                        And the grass grows by itself.

THE COURAGE (OR NOT) OF THEIR CONVICTIONS (OR NOT)

     In his book, Making Movies (1995), director Sidney Lumet summarizes the theme of his first movie, Twelve Angry Men (1957), simply as, “Listen.”

     The script was written by Reginald Rose, a lawyer’s son inspired by his own jury service (in a manslaughter case in New York City), and by his opposition to McCarthyism.  A shorter form of Rose’s drama had been televised live in 1954 on Studio One; and a version had been performed as a stage play the following year.

    Sequestered in a small and sweltering room as a storm sweeps in, twelve men—portrayed by actors including Henry Fonda (who co-produced the movie with Rose), Lee J. Cobb, E.G. Marshall, Martin Balsam, Ed Begley, and Jack Klugman—attempt to reach a verdict in the case of a young man accused of having stabbed his father. 

     They have been instructed that a guilty or a not-guilty verdict on the single charge, murder in the first degree (i.e., premeditated murder), must be unanimous; and that a verdict of guilt will result in the defendant’s execution.  (The judge and the defendant appear briefly at the beginning of the movie, but in the stage script neither is visible to the audience.)

     To accentuate the jurors’ close quarters, and to heighten the increasing sense of confinement during the ninety real-time minutes of their interaction, Lumet “slowly shift[ed] to longer lenses,” and “shot the first third of the movie above eye level, shot the second third at eye level, and the last third from below eye level.”

     Without including spoilers, it can be said that the drama’s themes include:

     ● The degree to which each juror has in fact listened to the judge’s instructions, to the lawyers, to the witnesses, and, perhaps most importantly, to his colleagues.

     ● The ways in which reasonable people, acting in good faith, can disagree.

     ● Peer pressure and “groupthink”; and the courage, caution, and possible heroism of someone who questions and/or opposes the majority’s opinion.

     ● Depersonalization.  Seated at the jury table in the order of their juror numbers, the men don’t know each other’s names, but occasionally refer to one another as “that gentleman.”

      ● Individuality and Diversity.  Each juror figuratively (and, in one show-stopping moment, literally) brings something to the table, as their varying careers, experiences, prejudices, sympathies, and relative maturity—which are gradually, and sometimes inadvertently, revealed—inform their deliberations. 

     ● The respect, or disrespect, shown by the various jurors to each other, to the defendant and to his (unspecified) demographic group, and to the jury process itself.  (One juror mocks another: “What are you being so polite about?”  The response: “For the same reason you’re not—it’s the way I was brought up.”) 

     Justice Oliver Wendell Holmes, Jr. reportedly described the Supreme Court as “nine scorpions in a bottle.”  The dispositions, distractions, engagement, and self-restraint of the dozen jurors vary widely, and change significantly over the course of their work.

     ● Individual jurors’ commitment to continuing deliberations, which are not subject to any deadline or other time constraints, rather than reporting to the judge that the group cannot reach a unanimous verdict.

      ● The willingness and responsibility of jurors to explain their own positions when questioned or challenged, to carefully evaluate the arguments of others, and to vote in good faith rather than for the sake of expediency.

      ● The men’s assessment of the incomplete information presented at trial (whose details they occasionally, sometimes after consulting notes, recall differently), and of the witnesses’ appearance, character, and credibility. 

     ● The deductive prowess of different jurors, and the degree to which a juror can and should independently investigate issues and collect evidence. (The second issue, not presented as particularly controversial or rule-breaking in the movie, is certainly addressed by current jury instructions, especially with regard to jurors’ conducting their own online research.)

     ● The physical evidence requested by the jurors through the court officer, and its relevance to their reconstruction of events.

     ● The all-important question of whether the prosecution has met its burden of proof by demonstrating the defendant’s guilt beyond a “reasonable doubt” (a term that is never detailed or defined in the jurors’ discussions).

     ● The inherent imprecision of the jury’s considerations of possibilities and probabilities:

     -In this context, are objectivity and certainty themselves suspect?  One juror fumes, “I’m sick and tired of ‘the facts.’  You can twist them any way you want!” At another point, recalling one part of the trial presentation, another juror states, “I began to get a peculiar feeling—I mean, nothing is that positive!”       

      -Is it ever appropriate to answer another juror’s question with, “I don’t know”?

     -Might a verdict of guilt condemn to death a man who’s actually innocent?  Alternatively, could a not-guilty verdict free a murderer, who might well kill again?  And (although this is not specifically the jury’s concern), if the defendant didn’t murder the victim, who did, and why?

      ● The jurors’ reflections on the possible unprofessionalism of the prosecutor and of defense counsel (though not of the judge)—and on the degree to which such lapses complicate the jury’s job.

      ● The direct and indirect methods of persuasion and leadership exercised by the foreman and the other jurors—including appeals to reason, emotions, and biases—and the sometimes-delicate dances of dominance, deference, and decorum within various permutations of the jurors, and among the group as a whole.

     Sixty-six years ago, a few days before the release of the movie, Rose wrote in The New York Times, “Much of the intricate business of living is singled out and held up for scrutiny in this jury room, I feel.  For instance, the alliances formed for purely intellectual reasons and those formed for emotional reasons alone remind us of larger and more important alliances that we can see at every turn in our newspapers, locally, nationally, internationally.”

     ● The jurors’ reaching agreement on the internal governance of their deliberations, such as their seating, choice of foreman, order and form of discussions, and moments and methods of voting.

     ● The differing roles of voting by secret ballot, by voice, and by a show of hands.

     ● The democratic nature of jury selection and deliberations.

      In 2010, Justice Sonia Sotomayor, invited to select a movie to be screened at the Fordham University School of Law’s Film Festival, chose Twelve Angry Men.

     Justice Sotomayor told the audience that, around the time she began college, she had been impressed by the way in which one juror, presented as having emigrated to the United States, championed the jury system: “It sold me that I was on the right path. . . . This movie continued to ring the chords within me.”

     However, she also noted that earlier in her career, she had cautioned jurors that the movie’s depiction of deliberations was unrealistic: “There was an awful lot of speculation.”

     Justice Sotomayor added that, as reflected by the jurors’ discussions, both the prosecution and the defense had “failed in their duties.”

      According to Joan Biskupic’s Breaking In: The Rise of Sonia Sotomayor and the Politics of Justice (2014), the Justice, recalling scathing references by one of the movie’s jurors to “those people,” said: “You have to flinch. . . Those [remarks] are personal.  They were personal when I saw it the first time.  I had heard about ‘those people’ in my life so often.”

     The most clear-headed of Reginald Rose’s characters coolly responded to that juror’s eruption of bigotry (the script’s expanded version of which is even more extreme and toxic): “It’s always difficult to keep personal prejudice out of this.  And wherever you run into it, prejudice obscures the truth.”

     So it could be said that Rose’s writing, and Lumet’s staging and photography, of Twelve Angry Men, and of that scene in particular, also dramatically depict what should, if heard, not be listened to.

247!

     Not only students of law and of history, but anyone interested in writing and editing, might derive at least eight valuable lessons—applied even more easily in this digital era—from MIT historian Pauline Maier’s deeply-documented American Scripture: Making the Declaration of Independence (1997).

    ● First, keep careful contemporaneous records of drafts and revisions. 

    At the beginning of the core of her book—its third chapter, “Mr. Jefferson and His Editors”—Maier mentions the fragmentary and conflicting records of the creation of the initial version of the Declaration, by Thomas Jefferson (with limited assistance from fellow attorneys John Adams, Roger Sherman, and Robert R. Livingston, and also from Benjamin Franklin), between June 11 and June 28, 1776.

     That committee “left no minutes of its proceedings, and the account of its work written nearest the event, . . . Jefferson’s ‘Notes of Proceedings in the [Second] Continental Congress,’ is succinct to a fault.”

     ● Second, when possible, produce a first draft quickly.  Jefferson, like his colleagues on the committee, had many other Congressional responsibilities; but, according to Adams’s account, completed the draft in only one or two days.

     ● Third, draw on previous materials, including your own relevant writings. 

     Jefferson made use of “the draft preamble for the Virginia constitution that he had just finished and which was itself based upon the English Declaration of Rights”; of “a set of draft instructions for Virginia’s delegates to the First Continental Congress” that he had unsuccessfully proposed, on his own initiative, in 1774, and that had subsequently been published by his friends; and of George Mason’s “preliminary version of the Virginia Declaration of Rights.” 

     (Maier includes a helpful one-page “Family Tree” of various documents that influenced, or were influenced by, the Declaration.)

     ● Fourth, especially for those not so extraordinarily well-read as Jefferson, and/or without a literary and historical memory as capacious as his appears to have been, maintain a library (possibly online, or on a hard drive or flash drive) of relevant documents, for ready access.  

     Jefferson referred in his diary to having, with colleagues in Virginia’s House of Burgesses in 1774, “rummaged over” a collection of Puritan-era “revolutionary precedents and forms” in the library of that body’s council chamber, after which they “cooked up a resolution, somewhat modernizing [those] phrases,” in response to the imposition of Parliament’s Boston Port Act.  However, Maier indicates that there is no record of Jefferson’s having searched for written sources in Philadelphia in 1776.

    ● Fifth, consider using a preface, as Jefferson most memorably did, to introduce and set the tone of your document. 

     Maier discusses how the Declaration announced and justified Independence to the American people, including the soldiers who would be fighting for it, and how Congress deliberately had the document disseminated “not only to the state assemblies, congresses, and conventions that were its immediate constituents and to their Committees of Safety, but to the commanders of the Continental Army, [directing] that it be proclaimed not only in all the states, but at the head of the army.”  It was also widely printed in colonial newspapers, and separately circulated in the form of broadsides.

     ● Sixth, read your writing aloud, even if you don’t anticipate that it will be publicly proclaimed (although Jefferson, for that purpose, marked the Declaration’s text to show where those reading to assemblies should pause. The earliest printed copies of the final document included those marks.).

       “Such attention to the cadences of language was natural for Jefferson, a committed violinist fascinated with music.  He had also studied classical oratory and rhetorical theory. . . .”

     ● Seventh, remember that in preparing (at least some) documents, “Less is more.”

     Jefferson, in his draft, abbreviated some of Mason’s writings.  In turn, the Second Continental Congress, “sitting as the Committee of the Whole, . . . [in] an act of group editing that has to be one of the great marvels of history,” trimmed elements of (particularly the second half of) Jefferson’s work, refining his “overlong attack on the British people to a more lean and constrained statement.”

     ● Eighth, try not to take revisions of your work, especially as part of a collaborative effort, personally. 

      In another illustration of the preceding principle, Maier mentions Jefferson’s account of how Franklin “perceived that I was not insensible to these mutilations,” and tried to cheer him up. 

     Franklin told Jefferson a story about a young hatter who, before opening his own store, asked friends for their advice on his proposed sign: “John Thompson, Hatter, makes and sells hats for ready money,” accompanied by an image of a hat. 

     They helpfully eliminated the redundancies one by one, until all that remained was his name, and the picture of the hat.

     (A possible ninth lesson: Franklin advised Jefferson that one moral of that story was to resist, “whenever in my power. . . becoming the draughtsman of papers to be reviewed by a public body.”)

      Two days before the Declaration of Independence was formally approved by the delegates—and signed by John Hancock and Charles Thomson (as the President and the Secretary, respectively, of the Second Continental Congress)—Congress voted for independence from England.

     One of his biographers reports that John Adams wrote to his wife Abigail:

     “The second day of July 1776 will be the most memorable epocha in the history of America.  I am apt to believe that it will be celebrated by succeeding generations as the great anniversary festival.  It ought to be commemorated as the Day of Deliverance by solemn acts of devotion to God Almighty.  It ought to be solemnized with pomp and parade, with shows, games, sports, guns, bells, bonfires, and illuminations from one end of this continent to the other from this time forward forever more.”

     It could well be quibbling to question whether “solemnize” applies to all of the activities in Adams’s last sentence, and to note that Independence Day is celebrated on July 4th, rather than July 2nd.

      But reading closely, or even reviewing quickly, the style and the substance of American Scripture’s seven-page, redlined Appendix C—“The Declaration of Independence: The Jefferson Draft with Congress’s Editorial Changes”—might, for many, be both humbling and inspiring.

SMALL, MEDIUM, AND LARGE VOLUMES OF AMERICAN LEGAL HISTORY: FRIEDMAN, FRIEDMAN & FRIEDMAN

     Anyone in search of a well-written, informative, friendly, and even entertaining overview of American legal history might appreciate one of Stanford law professor Lawrence M. Friedman’s three different treatments of the topic.

     Law in America: A Short History (2002) is the shortest (192 pages, not counting the index).

    A History of American Law (4th ed. 2019) is the most comprehensive and the longest (at 796 pages).

    But the most rewarding and engaging choice for law and pre-law students could be American Law in the 20th Century (2002) (607 pages). 

     Its major themes are “the rise of the welfare-regulatory state”; the increased use of judicial review; the tensions between federal and state laws and courts; “the shift of power and authority to Washington—to the national government”; the enormous growth in “the size and scale of the legal system” (including in the number, and the diversity, of lawyers); and the dramatic rise in cases concerning products liability, civil rights, and immigration and citizenship.

     As the book illustrates, law not only permeates American society, but “is a product of society.”  In particular, “The immediate source of law is not social change but what we can call legal culture.  By this I mean people’s ideas, attitudes, values, and expectations with regard to law,” themselves “a complicated, existing system, a system of crosscurrents and interrelations, a web of values and norms.”

      The longest of Friedman’s three histories compares law to “the reflection of a face in a slowly moving river, that is, somewhat refracted and distorted”; the briefest concludes that “Law, in short, is a mirror held up against life.”

     Indeed, each of these books moves beyond courts, commissions, and agencies to illuminate the interdependence and interplay of legal developments with those in global, national, and local politics, economics, and demographics; in technology (especially in transportation and communication); and in the public’s perspectives and moods (including popular reactions to rulings and regulations).

     About one-quarter of the main text of American Law in the 20th Century concerns “The Old Order” (from the late 19th century through 1932); the next two-thirds examines the history and implications of “The New Deal and Its Successors” (1933-1979); and the concluding 85 pages summarize “The Way We Live Now: The Reagan and Post-Reagan Years.”

     Particularly useful to students will be the book’s introductions to, and contextualization of, such traditional course areas as contracts and corporate law (Chapters 3 and 12), torts (Chapter 11), criminal law (Chapters 4 and 8), property (Chapter 13), family law (Chapter 14), and constitutional law (throughout).  It includes detailed discussions of legal issues of racial and gender equity (Chapters 5 and 10), as well as “freedom of speech and what we might call political justice,” especially during the era of McCarthyism and the Cold War (Chapter 10).

      Friedman reviews a number of decisions traditionally included in first-year casebooks, and begins one chapter by warning, “Civil procedure is the ugly duckling of law.  It is a field only a lawyer can love; and even most lawyers find loving it a struggle.”  

     His survey of the superstructure and infrastructure of American law captures complexities, rejects casual simplifications, and focuses at times on the history-shaping roles of individual lawyers and litigants.

     The book begins with the notorious 1905 decision of Lochner v. New York, in which the Supreme Court struck down as unconstitutional a state statute that restricted the number of hours that bakery employees could work each week and each day.

     However, Friedman quickly clarifies that in “much more obscure” decisions, the Court actually upheld state legislation, including several laws on workers’ rights.  He concludes that, in the early part of the century, the Justices “were not so much reactionary, as soundly upper middle class”; and that they, “and judges in general, were cautious and incremental.  They did not consistently adhere to any economic philosophy,” although judges “were much more likely to sympathize with professional people, and skilled artisans, than with laborers and unskilled workers, and their unions.”

     After devoting a chapter to the century’s changes in the composition, education, and (bar) organizations of lawyers, Friedman chronicles the rise of the “uniform laws” movement, and changes in bankruptcy, corporate, antitrust, tax, and labor law. 

     “On the whole, regulation that appealed to the middle class was much more likely to get enacted than programs pushed by the labor unions.”  In particular, the 1906 federal Food and Drug Act was sparked in part by the deeply disturbing (and disgusting) depictions of meatpacking plants in Upton Sinclair’s The Jungle, published that year.

     “In an age of mass media, and mass communication, the role of scandal and incident in lawmaking was bound to multiply”—if not always predictably.  Sinclair, who had intended his novel as a sweeping indictment of capitalism, concluded, “I aimed at the public’s heart, and by accident I hit it in the stomach.”

     In 1935, the Supreme Court’s Schechter Poultry Corp. v. United States decision struck down the National Industrial Recovery Act as exceeding Congress’s constitutional authority over interstate commerce.  Yet only a few years later (after the failure of President Roosevelt’s “court-packing” plan to enlarge the number of Justices), the Court “renounced its economic activism [and] gave its stamp of approval to all the programs of the New Deal.  It simply abandoned the whole line of thought that Lochner v. New York epitomized.”

     Friedman contends that the New Deal, which included the creation of the Securities and Exchange Commission (1934) (among other agencies), as well as the passage of the Social Security Act of 1935, “was not. . . a total revolution. . . ; but it was a dramatic quickening, a ratcheting upward. . . ,” spurred not just by the Great Depression but by such developments as radio, newsreels, and movies “that pulled national attention away from the neighborhoods and into Washington, D.C.” 

     He also discusses the “second wave,” under the “War on Poverty” and “Great Society” initiatives of the Johnson Administration in the 1960s; and a “third wave,” devoted to protecting the environment, and also “social justice, rights, health and safety, [and] style of life.”

     Friedman devotes a later chapter to the changing demographics and dynamics of the legal profession, and of legal education.  Students, especially, might appreciate his brief discussions and assessments of the emergence (in the 1980s) of the critical legal studies (CLS), critical race theory, and “law and economics” movements, and the increasing tendency of legal scholarship to cite non-law sources (including “everything in the current intellectual circus world from diamond-hard to mushy-soft”).  He reports that “In 1900 law reviews were much concerned with expounding the law.  In 2000 they were most concerned with criticizing it—or suggesting changes.”

     Friedman observes that “competition between [big law] firms became more intense in the last part of the twentieth century. . . . They were more ruthless in pruning out deadwood, even when the deadwood held a partnership interest.  A partner’s income depended on how much business he brought in: as the phrase went, you ‘eat what you kill.’ . . . [These] firms now began to hunt business more actively, prowling about the business world like leopards on the plains.  They made presentations to clients like advertising agencies, they concerned themselves with public relations.” 

     The business of practicing law also changed dramatically for smaller firms.  In 1975, the Supreme Court prevented state bars from forcing their members to adhere to minimum fee schedules (thereby accelerating the shift to charging clients by the number of “billable hours” expended on their matters).  Two years later, the Court allowed attorneys to advertise their services.

     Yet the profession was already losing some of its mystique and its popular respect, which some commentators considered to have peaked in the 1950s and 1960s.  Friedman notes the rise of (anti-)lawyer jokes, and the reportage of American Lawyer and National Law Journal, which were “breezy, gossipy, full of inside dope and human interest.”

     As an abbreviated substitute for parts of, or as an adjunct to, American Law in the 20th Century, one might read Law in America: A Short History.

     Its initial chapter introduces and explains federalism, judicial review, and the basic differences between common law and civil law systems (although observing that the roles of their judges have tended to converge); the second chapter summarizes “American Law in the Colonial Period”; and the third reviews “Economy and Law in the Nineteenth Century.” 

     Of the remaining two-thirds of Law in America, parts of the fourth chapter (“Family, Race, and the Law”), fifth chapter (“Crime and Punishment in the Republic”), and much of the sixth (“The Twentieth Century and the Modern Administrative-Welfare State”) and seventh (“American Law at the Dawn of the Twenty-First Century”) chapters recap some of Friedman’s discussions in the 20th Century volume.

     Although the author’s extensive A History of American Law (4th ed. 2019) covers the subject from colonial times onwards, approximately 82 percent of its almost 800 pages are devoted to pre-20th century developments.  In its preface, Friedman notes, “I have expanded somewhat the treatment of the twentieth century, but it is still, I must confess, something of a poor relation” to the book’s examination of earlier eras.

     On a broader level, one might compare the evolution of American law, and of its structures and strictures, to those of the subjects of The Whole Earth Catalog creator Stewart Brand’s How Buildings Learn: What Happens After They’re Built (1994).

     Chronicling the types of transformations in the forms and functions of particular buildings, Brand (who, according to a biography, wrote in his 1986 journal, with regard to the so-called KISS principle, “Keep it simple stupid, is a good way to keep it stupid”) observes that “Buildings keep being pushed around by three irresistible forces—technology, money, and fashion.” 

     He concludes, “an adapted state is not an end state.  A successful building has to be periodically challenged and refreshed. . . . The scaffolding was never taken completely down around Europe’s medieval cathedrals because that would imply that they were finished and perfect, and that would be an insult to God.”

THE ELEMENTS, AND ELEGANCE, OF (JUDICIAL) STYLE

     [The previous essays in this series are here, here, and here.]

     Generations of undergraduates, and of alumni, have consulted the concise classic on composition, The Elements of Style, by William Strunk and E.B. White.

     Law students and lawyers may well have encountered handbooks for drafting comments, commentaries, complaints, and contracts; and they will certainly have learned at least some of the Bluebook’s rules for legal citations.  

     However, few outside of chambers might be aware of the illuminating Judicial Writing Manual: A Pocket Guide for Judges (2d ed. 2013), which can be downloaded at no charge from the Federal Judicial Center.

     The first edition, published in 1991, was developed by a board of editors that included nine federal judges, a law professor, and the Wall Street Journal’s Supreme Court reporter (Stephen Wermiel, now one of my faculty colleagues at American University Washington College of Law).

     Among the eighteen prominent judges “who participated in telephone interviews to discuss their experience with and views on judicial writing” were: Ruggero J. Aldisert and John J. Gibbons (each of whom had served as Chief Judge of the U.S. Court of Appeals for the Third Circuit; Aldisert was also the author of Opinion Writing, whose first edition, published in 1990, was distributed almost exclusively to newly-appointed federal judges); Stephen Breyer (then sitting on the Court of Appeals for the First Circuit); Ruth Bader Ginsburg (Court of Appeals for the District of Columbia Circuit); Richard A. Posner (Court of Appeals for the Seventh Circuit); and Jack B. Weinstein (of the U.S. District Court for the Eastern District of New York, which he had formerly served as Chief Judge).

     Although the introduction suggests that “newly appointed judges and their law clerks will be the principal users of this manual,” its principles and discussions (in 32 pages, plus appendices) should be useful to legal writers generally, and to anyone interested in the process of constructing, or deconstructing, a court’s opinion.

     Moreover, the Manual’s terseness and tone, like those of The Elements of Style (to which it refers), can be appreciated as demonstrations of the authors’ own recommendations.

     Ten takeaways:

     ● First, the judge must keep her audience in mind.

      “[I]f a case involves an arcane area of law familiar primarily to specialists—tax, labor, or antitrust law, for example—a thorough discussion of the facts and legal background will [be] needed, and the judge should avoid the use of technical language and should define any technical terms that must be used.”

     Yet, “When an opinion addresses an issue of general public interest or is likely to attract media attention, it should be written in a manner that will ensure it cannot be misunderstood” by the media and the general public.

     Deliberately didactic decisions might include A & M Records, Inc. v. Napster, Inc., 114 F.Supp.2d 896 (N.D. Cal. 2000), which addressed, as its first sentence noted, “the boundary between sharing and theft, personal use and the unauthorized worldwide distribution of copyrighted music and sound recordings”; and which enjoined Napster from disseminating, “without express permission of the rights owner,” such recordings owned by the plaintiffs, eighteen record companies.

     Another example—which, like Napster, was part of extended litigation—is In re Walt Disney Co. Derivative Litigation, 907 A.2d 693 (Del. Ch. 2005), aff’d, 906 A.2d 27 (Del. 2006), in which the court held that Disney’s directors had not breached their fiduciary duties to the company.  The directors had been targeted by shareholders for approving an employment contract that enabled the departing president to receive an allegedly excessive severance package after a relatively brief term in office; and for not terminating that executive for cause, which would have contractually deprived him of his right to severance.  

     The introduction to Disney acknowledged: “I have tried to outline carefully the relevant facts and law, in a detailed manner and with abundant citations to the voluminous record. I do this, in part, because of the possibility that the Opinion may serve as guidance for future officers and directors—not only of The Walt Disney Company, but of other Delaware corporations. And, in part, it is an effort to ensure meaningful appellate review.”

     A related point on judicial clarity, from an article by Judge Aldisert and two of his clerks:

     “When judges weigh the case for and against given rulings, they characteristically refer to certain criteria as ‘justice,’ ‘common sense,’ ‘public policy,’ ‘convenience,’ or ‘expediency.’  Decisions should never be justified by such buzzwords without the support of reasoned elaboration. . . . Set forth your rationale and explain your value-based choice, dwelling not in the murky waters of subjectively defined buzzwords.” 

Opinion Writing and Opinion Readers, 31 Cardozo L. Rev. 1, 37 (2009).

     ● Second, indicating the court’s holding at the beginning of the decision not only will “save time for readers, particularly researchers” but will encourage the drafter to “state it precisely and succinctly.”

     Law students mastering the IRAC (Issue-Rule-Analysis/Application-Conclusion) format of answering exam questions might be interested in the five-factor “framework” enunciated, and then elaborated, by the Manual for a “full-dress opinion”: “an introductory statement of the nature, procedural posture, and result of the case; a statement of the issues to be decided; a statement of the material facts; a discussion of the governing legal principles and resolution of the issues; and the disposition and necessary instructions.”

     ● Third, non-material facts, stylish writing, and humor should be included—if at all—with judicial (and judicious) restraint.  “There is the obvious danger. . . that the reader may think the decision is based on these facts”; “colorful writing. . . may be seen by the parties as trivializing the case”; and, joking “may strike the litigants. . . as a sign of judicial arrogance and lack of sensitivity.”

     ● Fourth, judges should also abjure “pompous writing. . . such as arcane or florid language, use of the imperial ‘we’ (by a single district judge), or expressions of irrelevant erudition.” 

     The Manual recommends “plain English”: “There is a place for the elegant word, but it should not be necessary for the reader to have a dictionary at hand while reading an opinion.”

     ● Fifth, a dissenting opinion should adopt (as in the samples in an appendix) “a temperate, reasoned tone in expressing sincere disagreement with the majority,” “although some judges believe that expressing moral outrage and restrained indignation may sometimes be appropriate.”

     Similarly, an appellate opinion, even if reversing a lower court’s decision, “need not attack a trial court’s wisdom or judgment, or even its attitude,” and “should avoid unnecessary criticism of the trial court, such as for failing to consider authority or resting on improper motives.”

     ● Sixth, judges are advised to cite “law review articles, treatises and texts, and non-legal sources. . . sparingly and only to serve a purpose,” such as to “shed light on relevant historical or policy considerations.” 

     ● Seventh, “Judges should quote [only] briefly, and only when the language makes an important point.”

     ● Eighth, “If [material] is not important enough to go into the text, the judge must have some justification for including it in the opinion at all.” 

     Footnotes might be used to “acknowledge and briefly dispose of tangential issues,” or “to convey information that supports the language of the opinion but is not necessary to understand it, such as the text of a statute or material from the record.”  However, they “should not be used simply as a repository for information that the judge wants to keep but does not know what to do with.”

     ● Ninth, a decision remanding a case to a lower court “‘for further proceedings consistent with the opinion’” should (as in another appendix’s examples) “spell out clearly what the lower courts or agencies are expected to do, without trespassing on what remains entrusted to their discretion.”

    ● Tenth, it might be helpful, if time permits, to set aside a near-final draft “for even a few days[, which] may help the judge review things more objectively, gain new insights, and think of new ideas.” 

     (For a 16-point “Checklist for Critiquing [or, Editing/Proofreading] an Opinion,” see Nancy A. Wanderer, Writing Better Opinions: Communicating with Candor, Clarity, and Style, 54 Me. L. Rev. 47, 70 (2002).)

     A number of these recommendations are reflected in the brief profiles written by Justice Daniel J. O’Hern of the members of the Supreme Court of New Jersey under Chief Justice Robert N. Wilentz, and (posthumously) published as What Makes a Court Supreme: The Wilentz Court from Within (2020).  (My review of the book appears on its Amazon page.)       

     Justice O’Hern modestly summed up his own (1981-2000) tenure on the Court: “I loved to unravel complex cases and to try to state their resolution in simple terms that would cover the essential elements of the disposition.”

     The Justice noted that among his colleagues he had acquired the nickname, “the Monsignor,” as “a reference to my reputation for excising from Supreme Court opinions unnecessary graphic details in sex cases. I did not do so out of religious conviction but out of good taste.”

    That chapter includes an excerpt from a law review article written on the occasion of the Justice’s retirement: referring to a longtime establishment in Red Bank, Justice Robert Clifford observed that “It was Dan O’Hern who invented the ‘Sal’s Tavern’ test, now part of the permanent lore of the Court: a judicial opinion that does not make sense to the gang down at Sal’s Tavern is unacceptable. Do it over.”

     In that tribute, Justice Clifford added, “Those who have been privileged to know and work with him will always recall Dan O’Hern’s warmth, dedication to the highest principles, and towering rectitude.”

     As a 1987-1988 clerk for Justice O’Hern, who was a true gentleman, mentor, and wordsmith, I wholeheartedly agree. 

     In his portrait of Justice Clifford, who was perhaps both the most committed and the most passionate grammarian on the Wilentz Court, Justice O’Hern recalled that:

     “I had written an opinion containing only two footnotes. One particularly offended him. This gave him the opportunity for the retort, in In re Opinion 662 [133 N.J. 22, 32 (1993)], which left us all fearful of even a single footnote:
     “’In fact, I deplore resort to footnotes not only in this case in particular but in judicial opinions generally. They distract. They cause the reader to drop the eyes; to absorb what is usually a monumental piece of irrelevancy or pseudo-scholarship but is sometimes – as here – a significant pronouncement that rightly belongs in the text; and then to return, without skipping a beat, to the point of departure on the upper part of the page. The whole irritating process points up the soundness of John Barrymore’s observation that “[reading footnotes is] like having to run downstairs to answer the doorbell during the first night of the honeymoon,” quoted in Norrie Epstein, The Friendly Shakespeare 75 (1992).’”  

MAXIMIZING (MORE THAN SHAREHOLDER) VALUE: TWO REWARDING PERSPECTIVES

     [The previous essays in this series are here and here.]

     In the 1987 movie Wall Street, the corporate raider Gordon Gekko (a performance for which Michael Douglas won an Academy Award) excoriates the management of the fictional Teldar Paper as bloated, self-serving, and incompetent.

    Gekko’s most memorable maxim, delivered from the aisle of a hotel ballroom during the company’s annual meeting of shareholders, was apparently inspired by a 1986 commencement speech at the University of California, Berkeley’s business school: Ivan Boesky (who, late the following year, would be sentenced to prison for insider trading) proclaimed, “I think greed is healthy.  You can be greedy and still feel good about yourself.”

     However, most people misquote Gekko’s iconic remark (from a speech that could be called Icahnic, since it elsewhere echoes several statements, reported in Connie Bruck’s The Predators’ Ball (1988), of raider Carl Icahn—who, like Gekko, had attempted to take over a paper company).

     Gordon Gekko did not say, “Greed is good.”

     He said, “Greed, for lack of a better word, is good.”

     As has been suggested, it might not have been noteworthy, in 1987, had Gekko instead said, “Profit-maximizing is good,” before elaborating that it “clarifies, cuts through, and captures the essence of the evolutionary spirit, and . . . will not only save Teldar Paper but that other malfunctioning corporation called the USA.”

     But in 2023, pure profit-maximizing, and its role in the culture and future of the nation, are themselves controversial.

     The various “Corporate Social Responsibility” (CSR) initiatives of Gekko’s day prefigured the current ESG movement, which seeks to enhance a company’s Environmental, Social, and Governance practices, including: sustainability; fair treatment of the company’s employees, and those of its suppliers; diversity, equity, and inclusion (DEI), on the worker, management, and boardroom levels; product safety; consumer protection and privacy; internal and public statements on issues of social concern; increased transparency and disclosure; evaluation of the potential applications, and customers, of the (especially, high-tech) company’s products and services; and, executive compensation (compared to the average employee’s compensation, and/or correlated with the company’s progress towards its ESG goals).

     Like its predecessor, ESG propels boards to consider the interests of such non-stockholder “stakeholders” as customers, employees, governments, and suppliers, as well as the communities in which the company’s facilities are located.

    Debates over whether, and how much, to accommodate such concerns date back at least ninety years (when Columbia’s Adolf Berle and Harvard’s E. Merrick Dodd published opposing law review articles on the topic).

    Yet Gekko, and even some of today’s real-life lawyers, might be surprised to learn that, as the late Cornell law professor Lynn Stout asserts in her brief, non-technical, and engaging overview, The Shareholder Value Myth (2012), “United States corporate law does not, and never has, required directors of public corporations to maximize either share price or shareholder wealth.”

    In “debunking” the “shareholder primacy” perspective for “executives, investors, and informed laypersons,” Stout reexamines the “core assumptions” that shareholders own corporations; that they are residual claimants (that is, entitled to receive any funds remaining after the company’s debts have been paid); and that they are the principals whom corporate directors and officers serve as agents.

     She also minimizes the import of the Michigan Supreme Court’s 1919 declaration, in Dodge v. Ford Motor Co. (a decision featured in a number of today’s corporate law casebooks), that “a business corporation is organized and carried on primarily for the profit of the stockholders.  The powers of the directors are to be employed for that end.”

     Stout argues that this statement: was extraneous, and non-binding, dictum; was qualified by the word, “primarily”; was contained in a decision that “was not really a case about a public[ly-traded] corporation at all”; and was from a court in Michigan, which “has become something of a backwoods of corporate jurisprudence.”  She adds that, as of 2012, the Dodge decision had not been followed, and had rarely even been cited, by Delaware’s state courts, which are by far the most influential in the corporate context.

     (Delaware is not among the more than thirty states that, beginning in 1983 with Pennsylvania, adopted “other constituency” statutes permitting, but generally not requiring, directors to consider the interests of non-shareholders.  Moreover, in 2010, Delaware’s Court of Chancery rejected what it characterized as “a corporate policy that specifically, clearly, and admittedly seeks not to maximize the economic value of a for-profit Delaware corporation for the benefit of its stockholders.”)

     Stout also shatters the shibboleth of “maximizing shareholder value,” exposing its inaccurate implication that shareholders are a homogeneous group.

     She argues that, unlike those investors inclined to hold shares for long periods, short-term traders like “activist hedge funds” are most likely to attempt aggressively to influence a company’s management, because these funds scrutinize (even if only for relatively brief periods) the operations of a limited number of companies. 

     By contrast, “Most [mutual and other] fund managers rationally conclude it is not in their clients’ interests for them to exercise an active governance role in the dozens or even hundreds of firms whose stocks the fund manager keeps in his portfolio.  If there’s a problem, the fund manager will do the ‘Wall Street Walk’ and sell the shares quickly and quietly, before anyone else catches on.”

    Yet, directors and officers forced to focus on short-term stock prices might (for instance, by short-changing research and development initiatives) short-sightedly deprive their companies of long-lasting benefits.

     Stout summarizes her own holistic “team production” approach to corporate governance (enunciated and expanded on in a series of law review articles co-written with Vanderbilt law professor Margaret Blair), which portrays directors as “‘mediating hierarchs’ who can balance the. . . demands of shareholders against the interests of other stakeholders. . . that make essential contributions to firms.”

     She observes that although a (for-profit) company could explicitly embrace the shareholder primacy approach by installing a provision to that effect in its own corporate charter (also known as its articles of incorporation, or its certificate of incorporation), “virtually no public[ly-traded] corporation does so.”

     A hybrid approach (which might be abbreviated as E$G) is carefully chronicled in Better Business: How the B Corp Movement is Remaking Capitalism (2020), by Christopher Marquis (then a professor at Cornell’s business school, and now a member of the faculty of the University of Cambridge’s business school).

     Marquis examines the emergence of (mostly, privately-held) “social enterprises,” companies specifically dedicated to—and designated as—seeking to achieve not only (some) profits but also social goals.

     He notes that, as of early 2020, the corporate statutes of thirty-five states (including Delaware) recognized for this purpose a special category of “benefit corporations,” and that “more than ten thousand” domestic corporations, including outdoor clothing and gear marketer Patagonia, had formally chosen that form of operation. 

     (In 2008-2009, the leading promoters of such legislation included Arnold Schwarzenegger, then governor of California; U.S. Rep. Jamie Raskin, then a state senator in Maryland, the first state to enact such legislation (in 2010); and principal drafter Bill Clark, a lawyer at Drinker Biddle & Reath.)

     Often quoting from his personal interviews of key participants, Marquis reviews the creation (in 2006) and operations of the private organization B Lab, which invites companies to apply for its own third-party “B Corp” certification of their dual commitment to social goals and profits.  He also profiles a variety of B Corps.

     The home page of B Lab recently stated that there are 6,927 such companies, in a total of 91 countries.  Prominent B Corps include Allbirds, King Arthur Baking Company, Klean Kanteen, Stonyfield, and Tom’s of Maine.

     “The . . . founders [of B Lab] wanted to ensure that the certification process they created [featuring its initial, and then triennial, B Impact Assessment] was standardized across companies of different sizes and comparable across industries, allowing companies to assess the true social and environmental impacts of their operations and work to improve them, and giving consumers and investors the means to hold them accountable.”

     Notably, a company that has met the requirements (if available) of its state of incorporation to be recognized as a benefit corporation might also choose to pursue private certification as a B Corp; but, B Lab requires “B Corps that are incorporated in [such] states. . . to become benefit corporations after certification.”

     Marquis addresses the inception and growth of B Corp certification in other countries; efforts by B Lab to foster a “B Corp community” among its certified companies; the treatment of B Corps by “impact investors” interested in supporting social change, but also in some financial return; traditional corporations’ B Corp subsidiaries (such as Unilever’s Seventh Generation and Ben & Jerry’s, and Danone’s Happy Family); and the non-renewal of certification by some high-profile companies like Etsy (which became publicly-traded), actor Jessica Alba’s The Honest Company (which cited “a number of legal and compliance issues for our company that could lead to risk and uncertainty”), and Warby Parker.  He also considers whether consumers are aware of, and influenced by, the certification of a given company as a B Corp. 

     Thirty-six years ago, profit-maximizer Gordon Gekko publicly warned Teldar’s management, “In my book, you either do it right, or you get eliminated.”

      With the rise of social enterprises and the ESG movement, it remains to be seen whether, and to what degree, a new rule for corporate directors and officers will be, “You either do right, or you get eliminated.”