7 data breach reporting rules banks need to understand
On May 1, the protocols U.S. financial institutions must follow after a cybersecurity breach changed, and more changes are still to come.
Three bank regulators this month began requiring banks to report cybersecurity incidents within 36 hours when such breaches have caused serious harm or are reasonably likely to do so. The three regulators are the Federal Deposit Insurance Corp., Federal Reserve Board and Office of the Comptroller of the Currency.
Banks already faced a number of requirements to report incidents to various parties, and more such compliance burdens are set to go on the books over the coming years. Some hope a recently signed law on cybersecurity incident notifications will harmonize this web of rules.
Cyber reporting requirements tend to differ in their purpose, but “ultimately, what all these regulators are trying to do is promote information sharing,” said Jorge Rey, chief information security officer for accounting firm Kaufman Rossin.
Part of the impetus behind the new rules is a widely held belief that cybersecurity incidents are chronically underreported. In a 2018 survey of more than 1,500 cybersecurity professionals, three in four respondents said they believe cybersecurity incidents are not fully disclosed.
ISACA, an international professional association focused on IT governance, conducted the survey. In a proposed rule on cybersecurity incident notifications, the Securities and Exchange Commission cited the survey as evidence of underreporting.
Here is a look at the existing, proposed and planned requirements U.S. banks face after a cybersecurity incident.
Bank regulators’ 36-hour rule
As of May 1, banks must report each event — whether an outage or security breach — that materially disrupted or degraded, or is reasonably likely to materially disrupt or degrade, a bank’s ability to carry out banking operations or deliver banking products and services.
Banks must notify their regulator of record “as soon as possible and no later than 36 hours” after they have identified such an incident, per the rule from the FDIC, OCC and Fed.
As for the contents of reports, banks and regulators interact regularly, and especially after a cybersecurity incident, so what a regulator asks of a bank after the initial notification is likely to vary case by case. That’s according to Matt Miller, a cyber professional services principal at KPMG.
“It’s really the stability of the financial system that they’re trying to maintain,” Miller said of the bank regulators. “They like notifications, but they also have the ability to then go in and do a review — and they tend to do this — of the actual incidents themselves and how they were managed.”
These reporting requirements also cover tech vendors of banks that are affected by cybersecurity incidents, Miller said. In that case, the regulator asks the vendor to inform the bank and the bank to then inform the regulator of the incident.
CISA’s 72-hour rule
Because it is not yet fully written, perhaps the largest wild card of the bunch is the cybersecurity reporting rule Congress established in March. The law actually contains two reporting requirements: Banks must report cybersecurity incidents within 72 hours and ransomware payments within 24 hours.
The reporting requirements will apply to financial institutions and 15 other sectors, all considered critical infrastructure. Because the list of sectors includes IT, banks’ tech vendors are expected to also be covered by the requirements. However, many specifics are yet to be addressed.
To implement the law, the Cybersecurity and Infrastructure Security Administration has until March 2024 to issue a proposed rule and begin a rulemaking process that may take up to 18 months to complete.
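Taken together, the bank regulators’ 36-hour rule and the new law’s 72-hour and 24-hour windows give a bank’s compliance team several clocks to track at once. A minimal sketch of that arithmetic (the rule labels and the shared-trigger assumption are mine; in practice each clock starts from a different legal trigger, such as when the bank identifies the incident or when a ransom is paid):

```python
from datetime import datetime, timedelta, timezone

# Illustrative windows only. Each rule's clock has its own legal
# trigger; here every clock is started from one timestamp for clarity.
REPORTING_WINDOWS = {
    "OCC/Fed/FDIC incident notice": timedelta(hours=36),
    "CISA incident report (forthcoming)": timedelta(hours=72),
    "CISA ransomware-payment report (forthcoming)": timedelta(hours=24),
}

def notification_deadlines(trigger_time: datetime) -> dict[str, datetime]:
    """Latest permissible notification time under each rule, assuming
    (as a simplification) the same trigger starts every clock."""
    return {rule: trigger_time + window
            for rule, window in REPORTING_WINDOWS.items()}

# Example: an incident identified at 9:00 UTC on May 2, 2022
identified = datetime(2022, 5, 2, 9, 0, tzinfo=timezone.utc)
for rule, deadline in notification_deadlines(identified).items():
    print(f"{rule}: notify by {deadline.isoformat()}")
```

With that trigger, the sketch yields a May 3, 9:00 p.m. UTC deadline under the 36-hour rule, and the ransomware clock expires first of the three CISA-era windows.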
Some observers hope the rule will consolidate or harmonize the many other federal reporting requirements. “By having the different organizations and government functions create their own security requirements, you’re going to create a little bit of a mess,” said Rey. The proliferation of differing state requirements creates a similar regulatory burden, he said.
Others believe the answer is not necessarily to standardize the rules, as they serve different purposes. Miller said private notices to regulators tend to include details about the breach that would not be appropriate for public disclosure, as that could publicly expose information about affected persons or unpatched computer exploits.
SEC’s current rule
In February 2018, the Securities and Exchange Commission released guidance warning public companies that it was critical for them to “inform investors about material cybersecurity risks and incidents in a timely fashion,” but the guidance did not specify a timeline for reporting such incidents.
After the guidance took effect, some public companies started to report incidents on Form 8-K, the purpose of which is to provide information to shareholders in a timely manner. However, not all companies disclosed incidents to investors in the same way, leading the SEC to propose a new rule this year that would set a timeline for reporting incidents and define more specifically what companies ought to report.
SEC’s proposed four-day rule
The SEC in March released a proposed rule that concerns not only how companies ought to report cybersecurity incidents to investors but also how they must inform investors about their preparedness for breaches.
“There is growing concern that material cybersecurity incidents are underreported and that existing reporting may not be sufficiently timely,” reads the SEC’s proposed rule. “We are proposing to address these concerns by requiring registrants to disclose material cybersecurity incidents in a current report on Form 8-K within four business days after the registrant determines that it has experienced a material cybersecurity incident.”
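A four-business-day window is not the same as 96 hours: weekends push the deadline out. A simple sketch of the counting (my own simplification; it skips weekends only and ignores federal holidays, which a real filing calendar would also account for):

```python
from datetime import date, timedelta

def four_business_days_after(determination: date) -> date:
    """Naive deadline for a filing due four business days after the
    materiality determination. Skips Saturdays and Sundays only;
    federal holidays are ignored in this sketch."""
    d, remaining = determination, 4
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# A determination made on Wednesday, March 9, 2022 would be due the
# following Tuesday, since the weekend does not count.
print(four_business_days_after(date(2022, 3, 9)))  # → 2022-03-15
```

A determination made late on a Friday illustrates the gap most clearly: the deadline lands the following Thursday, nearly six calendar days later.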
The proposed rule would remove much of the ambiguity about which incidents to report to investors and the timeline for reporting them, according to Miller. It would also foster market competition over cybersecurity incident transparency.
“It’s up to the SEC to determine what level of reporting that they mandate, but I think it’s also in some ways going to be the industry itself that defines what becomes standard,” Miller said. “Over time, if your competition is disclosing more or less than you, then that’s going to become the expectation.”
Public companies would also have to provide updates on previously reported incidents to investors, and they would have to report “cybersecurity incidents that have become material in the aggregate” in quarterly and annual filings.
Miller, in what he admitted was “maybe a weird analogy,” likened the aggregate reports to bee stings. While one sting might hurt, getting a bee sting in the same part of your house once a year for 10 years, or many bee stings all at the same time, would be a greater concern. Likewise, a collection of minor security incidents that add up to something major would be something to report to investors.
Fincen advisories on “cyber-events”
Banks regularly send suspicious activity reports (SARs) to the Financial Crimes Enforcement Network when they detect suspected money laundering, fraud or other crimes. In October 2016, the regulator released guidance reminding banks that they must also report each “cyber-event.”
In the guidance, Fincen defined a cyber-event as “an attempt to compromise or gain unauthorized electronic access to electronic systems, services, resources, or information.” Fincen included several examples of cyber-events it expects banks to report. In one hypothetical scenario, a financial institution “knows or suspects a Distributed Denial of Service (DDoS) attack prevented or distracted its cybersecurity or other appropriate personnel from immediately detecting or stopping an unauthorized $2,000 wire transfer.” Fincen expects the financial institution to report both the attack and the transfer in a SAR.
Notices pursuant to Gramm-Leach-Bliley
As the Federal Trade Commission puts it, the Gramm-Leach-Bliley Act requires financial institutions “to explain their information-sharing practices to their customers and to safeguard sensitive data.”
That explanation typically manifests as annual notices to customers about how their personal data is collected. Banks must also let customers know how they can prevent their personal data from being sold or shared with third parties.
In its own overview of cyber incident reporting requirements, the Bank Policy Institute said the Gramm-Leach-Bliley Act also requires notifications to regulators when a bank becomes “aware of unauthorized access to sensitive customer information that is, or is likely to be, a misuse of the customer’s information.”
The web of state requirements
In addition to the many federal requirements to report cybersecurity incidents, most states also require banks to provide notice when residents are impacted by a cybersecurity incident.
According to the National Conference of State Legislatures, all 50 states, Washington, D.C. and three island territories have laws requiring businesses “to notify individuals of security breaches of information involving personally identifiable information.”
Multiple companies and groups maintain lists of requirements by state, including IT Governance USA, the International Association of Privacy Professionals and law firm Perkins Coie.
Rey, the accounting firm CISO, said these requirements tend to be different in kind from federal regulations on cybersecurity incident notifications. “Ultimately, that [state] notification is related to privacy exposure,” he said. The states are typically aiming “to make sure that the consumers are being identified and warned so they can put measures in place to protect their identity from identity theft.” Exceptions include New York State, which has its own cybersecurity laws for financial institutions.
To read the full article, please visit American Banker.