What Cybersecurity Standard Will a Judge Use in Equifax Breach Suits?

Those affected by data breaches now have increasing opportunities to take their claims to court. Last month, in the federal district court for the Northern District of California, Judge Lucy Koh upheld the right of victims to sue Yahoo over the massive breaches it suffered between 2013 and 2016. Victims of the Equifax hack, which affected millions more people than initially reported, are filing dozens of lawsuits. And in another ruling last month, Koh upheld the right of a class of data breach victims of the health insurance company Anthem to sue over a recently revealed second breach, shortly after Anthem was ordered to pay $115 million for victim compensation and credit monitoring after the first incident.

We’ve previously described the role of theories of harm to victims, and the duty of care for companies, as courts iron out standards in data breach litigation.

But what happens in court? What standards are judges applying for cybersecurity when deciding these lawsuits? What amount of cybersecurity would have been sufficient, in court if not in practice? Because a cybersecurity regime is a series of processes, and because no large-scale entity is impenetrable, we should assume that breaches can and will happen even when a company exercises care. So what standard of care is acceptable, especially in large-scale operations that hold the potential for large-scale breaches? The Equifax case may set the high-water mark for weak precautions and a bungled incident-response plan, coupled with the intimacy of the data and the vast number of people affected. But what is the lower limit of acceptable standards in less clear-cut situations, such as the Deloitte hack disclosed in September, which compromised confidential emails of some of the firm's blue-chip clients?

There is no coherent standard of care that courts look to for data protection. That's partly because each breach is different. Data breach cases vary in the nature, uses, and sources of the information involved. The attack motive also varies, from theft (Equifax) to nation-state espionage (OPM). Companies also have differing disclosure timelines for notifying victims and the authorities. In the Yahoo and Equifax breaches, each company took a long time to disclose (three years for Yahoo, and a month for Equifax, during which executives sold stock). In fact, there is evidence that the lack of timely disclosure figured significantly in Koh's reasoning in the Yahoo case and in members of Congress's questioning at recent hearings.

It is fair to argue that in many, if not all, of the recent large-scale cases in the news, the companies made basic errors and clearly did not exhibit a standard of care. But surely we cannot stop at Justice Potter Stewart's “I know it when I see it” test; that is not replicable or predictable enough to meet basic tenets of justice. For the proactive company, and as a guide for judges to standardize their calculus, there should presumably be some standards of cybersecurity competence, or demonstrations of care, that courts take into account. In this article, we examine the kinds of evidence of care that have and have not persuaded courts.

Federal Regulations

The Federal Trade Commission (FTC) has frequently stepped in to enforce consumer privacy standards. Most of the breaches you read about have a pending FTC investigation. (You can find the database of investigations in the agency's new “Stick with Security” manual.) The FTC has the authority to hold companies accountable under Section 5(a) of the FTC Act (15 U.S.C. § 45), which allows it to sue companies for “deceptive” and “unfair” practices.

Suits concerning deceptive practices generally involve straightforward complaints against companies for falsely advertising compliance with a standard or the use of a preventive practice. FTC suits against companies for unfair practices have been more controversial because “unfair” covers a broader category of violations and often requires the FTC to assert substantive technical claims, which a court must assess for persuasiveness.

In FTC v. Wyndham Worldwide Corporation, the Third Circuit upheld the FTC's authority to bring an enforcement action against the hotel company for failing to implement reasonable cybersecurity measures. The FTC, needing to posit a standard for unfairness in the cybersecurity context, pointed to the National Institute of Standards and Technology's (NIST) then-draft version of the 2014 Cybersecurity Framework. Of course, even in the framework's own words, it is “flexible” and “adaptive,” not a set of hard-and-fast rules. Yet the Third Circuit held that Wyndham should have been on notice. According to the court, the FTC did not need to formally promulgate rules for the case to succeed: “[T]he contour of an unfairness claim in the data security context, like any other, is necessarily flexible such that the FTC can apply it to the facts of particular cases arising out of unprecedented situations.”

If this or a similar case were appealed to the Supreme Court, the high court's previous use of NIST guidance on technical matters suggests that it might accept the NIST framework as a sufficiently clear standard for unfairness.

The NIST framework is certainly a well-respected document in the cybersecurity firmament, and both public and private sector entities reference it. But it is not much help for those seeking more specific guidelines when gauging the likelihood of liability.

State Regulations

The New York Department of Financial Services (NYDFS) set forth mandatory cybersecurity standards for the financial industry that went into effect on March 1. These standards cover all financial institutions operating in New York and their service providers, including law firms. Financial companies and their outside counsel have a legal obligation to comply with the NYDFS standards—opening the door for the state to sue them.

The NYDFS standards consist of generally accepted best practices. Notably, they introduce the requirement that companies no longer grant indiscriminate access to every individual authorized within a firewall. Sensitive, nonpublic information must now be protected under a “need-to-know” framework.

This requirement might have mitigated, for example, the recent Deloitte breach, in which the hackers appear to have obtained unfettered access to sensitive information through an administrative account with unrestricted permissions to all information within a firewall. (Moreover, the account required only a single password and lacked “two-step” verification, now widely considered standard cybersecurity practice.)
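By way of illustration only, here is a minimal Python sketch of what a “need-to-know” access check paired with a second authentication factor could look like in practice. The role names, data classes, and the can_read function are hypothetical examples for this article, not anything the NYDFS rules prescribe.

# Minimal sketch of a "need-to-know" access check (hypothetical names).
# Access to nonpublic data is granted per role, and even an administrative
# account is denied if it has not completed a second authentication factor.

from dataclasses import dataclass, field

@dataclass
class User:
    username: str
    roles: set = field(default_factory=set)  # e.g. {"hr", "legal"}
    mfa_verified: bool = False                # second factor completed?

# Which roles have a business need to read each class of nonpublic data.
NEED_TO_KNOW = {
    "payroll_records": {"hr"},
    "client_emails": {"legal", "audit"},
}

def can_read(user: User, data_class: str) -> bool:
    """Allow access only if the user's role needs the data AND MFA passed."""
    allowed_roles = NEED_TO_KNOW.get(data_class, set())
    return user.mfa_verified and bool(user.roles & allowed_roles)

# An admin account protected by a password alone is denied:
admin = User("admin", roles={"hr", "legal", "audit"}, mfa_verified=False)
print(can_read(admin, "client_emails"))  # False

The point of the sketch is the combination: broad administrative roles alone do not confer access; each request is checked against the data it actually needs and against a second authentication factor.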

Industry Self-Regulation

With few regulations guiding data protection or cybersecurity standards, forward-leaning industries have taken steps to form independent bodies that create standards for self-regulation.

For example, the payment card industry (PCI) created a council in 2006 to establish data security standards (DSS). All five major payment brands have voluntarily adopted the guidelines, which the council periodically updates. While the standards are not mandatory, some state statutes refer to the PCI DSS directly or enact equivalent standards.

Though the guidelines are not mandatory and do not necessarily allow data breach victims to bring suit, failure to comply with them could give corporate clients grounds for breach-of-contract claims. For clients whose information has been compromised, these guidelines might fall within the contract by which they enlisted the services of the other company and create an expectation that specific measures would be taken to maintain the privacy of their data. For example, as part of the fallout from its data breach, and in addition to the class-action suits by consumers, Target settled with payment card companies, including Mastercard, over its failure to comply with the PCI DSS standards.

Courts have yet to find a company negligent for failure to comply with the PCI DSS, but the complexity of the current landscape, in which corporations contract with other corporations for supply chains, payment systems, and so on, means that there will be many contracts, down corporate lines and over time. Standards like the PCI DSS may become an important tool in company-to-company data breach liability.

In the legal world, the Association of Corporate Counsel released a report in April specifying baseline security measures that legal departments and law firms may require of outside counsel. While these standards are also not mandatory, they are a comprehensive review of client expectations in a still-forming space. If a law firm is hacked and found to have fallen short of the precautions now seen as customary in its field, it is open to claims of malpractice: it has failed to perform at the baseline level of competency.

There has also been recent attention to mandating cybersecurity standards for government contractors. The few statutes and regulations that exist in the cybersecurity space generally apply to government entities and the protection of government networks (for example, the 2014 NCPA and FISMA reforms). Laws about government network security don't apply to consumer-facing companies. But if the laws address procurement rather than network defense, they could change practices among government contractors and thereby reach consumers. For example, a recent bill would set improved guidelines for “internet of things” procurement by the government. The government's significant purchasing power could mean that those security standards become more common in consumer-facing products as well.

Consumer Protection Statutes

At least 14 states and the District of Columbia have laws that allow affected consumers to sue a company for data breach damages. Additionally, many states define “personal information,” or the data deemed valuable enough to warrant statutory protection, more broadly than federal standards.

However, the burden of proof for plaintiffs under state law can be quite high. Plaintiffs often must show (1) systematically reckless behavior that is (2) aggravated by a failure to give prompt notice and that (3) caused widespread harm to consumers. State laws focus more on timely breach notification by consumer-facing entities. This is an important dimension of breach response, but it does not speak to our focus here: the standards by which courts evaluate the sufficiency of a company's cybersecurity practices.

What now?

Unfortunately, what we have cobbled together here is not a coherent set of standards. The painful reality is that we lack broadly applicable, specific standards, partially because of the ugly complexity of the problem. It involves not just technical standards but also corporate executive decisionmaking and reevaluation of practices over time.

So what do you do? Well, companies can buy cybersecurity insurance, and most sophisticated companies do, including the ones whose data breaches have made recent headlines. Equifax, for example, holds a policy worth $100 million to $150 million. But insurance is unlikely to cover the costs of recovery, and more importantly, it doesn't erase the permanent privacy costs to the consumer.

There is no overstating the importance of basic cyber hygiene. In almost every notable breach so far, there were basic cybersecurity practices that the entity did not follow. For instance, according to the FTC's Ashley Madison complaint, the defendants had no written information security policy, no reasonable access controls, inadequate security training of employees, no knowledge of whether third-party service providers were using reasonable security measures, and no measures to monitor the effectiveness of their system security. While no system is absolutely secure, vulnerabilities are cumulative; to minimize the mess, hygiene is key.

To change the landscape more pointedly, we have a few more suggestions:

First, we should consider a new corporate structure. The CEO needs to be connected to the IT department, and the IT department needs to be accountable to the general counsel and to the board. We need to define and prioritize corporate cyber strategy, not just IT. This will require some structural repositioning and hiring personnel whose backgrounds look different from the present corporate culture and skill set.

That brings us to our second suggestion: Hire diversely. The best problem-solving comes from having more difference in the room, and unfortunately, our corporate landscape still reflects very little diversity in areas such as race, ethnicity, gender, sexuality, and education. This isn't just about paying lip service to diversity, and it's not about the “brain drain” that economists cite. Nor is it simply about the paltry number of women and other minorities in decision-making positions. The economic explanation is true, but it's not the full story.

The justification we see for hiring diversely as part of a security strategy revolves around the fact that diverse people contribute differently. They think differently, code differently, and interpret problems differently. Because more diversity of thought means more human experience, the stuff that life is made of, we are stronger and more secure when we are diverse. Scientific American concluded that “diversity makes us smarter” because being around different people stimulates our own creativity. For all of these reasons and more, almost all security firms, including Gartner, advocate for diversity; Facebook's Chief Security Officer has advocated for defense and diversity; the Obama White House laid out a roadmap for improving national security through diversity; and the auto industry has acknowledged that diversity makes for better car data security. But the corporate and security landscape needs significant improvement in this arena.

Third, there should be more accountability from the government in the regulatory schema. The cybersecurity landscape is littered with common-sense problems that no one owns, so the level of investment across the board isn't optimal. (Consider, for example, botnets and the impending scale of the internet of things, underscored by this week's KRACK vulnerability.)

Finally, Congress should pass legislation giving the appropriate agencies regulatory and enforcement authority. This would allow government agencies to more faithfully carry out legislative intent, and judges would be able to interpret laws written for present-day issues rather than stretching antiquated language in creative and often unstandardized ways.

To do this, techies and cybersecurity experts need to help our lawmakers understand the potential downsides of any legislative language, because a poorly written law could create unintended consequences. While the tech community generally bemoans Washington antics, there is a role for many of us in better aligning our cybersecurity standards with the data breach landscape we now experience.

We cannot afford to sit this one out. Courts are evaluating the practices of companies that house your personal and enterprise data. This is a personal issue: from health information to home addresses and Social Security numbers, we all have personal information as well as enterprise data, much of which is already compromised and remains vulnerable. We have to ask what standards courts will apply. The cases are already in court.

The views expressed herein are the personal views of the authors and do not necessarily represent the views of the FCC or the U.S. government, for whom one of the authors works.

