Data Brokers: National Security Starts With Data Security

On March 21st, 2022, location data broker X-Mode filed a lawsuit against one of its own clients for improperly reselling data to an unauthorized third party. This is merely the most recent case of a data broker failing to protect sensitive data, whether through data breaches—all major data brokers have been involved in breaches (Acxiom in 2003, ChoicePoint in 2005, Epsilon in 2011, Experian in 2015, and Social Data in 2020)—or the accidental sale of data to nefarious actors, as ChoicePoint did during a sting operation in 2004. Importantly, we know very little about the collection of personal data because data brokers build, as technology columnist Christopher Mims writes, “enormous and – by design – poorly understood databases containing just about everything there is to know about everyone.” This lack of transparency prevents the design of regulation that would properly mitigate the risk to our personal and national security posed by the collection of sensitive data.

Major data brokers in the United States include Acxiom, LexisNexis, Nielsen, Equifax, CoreLogic, Verisk, Oracle, and Epsilon. Companies like these have amassed an enormous amount of information; Acxiom alone has 20,000 servers that collect data on 700 million individuals worldwide. Their growth has been rapid: in 2014, Acxiom stored 1,500 data points on every U.S. consumer, and it now reports double that figure. Data brokers collect information from three main sources: governments, publicly available websites, and commercial enterprises. If information isn’t collectable online, data brokers will build relationships with local sources to gain access. Few internet or cell phone users are aware of—or can opt out of—sharing their data, making it difficult for even high-level security officials in government or the military to protect their information.

The US regulatory landscape provides little protection for personal data. Only a handful of federal laws protect it: the Health Insurance Portability and Accountability Act and the Family Educational Rights and Privacy Act, which protect certain medical and educational data from being released by the institutions that collect it; the Fair Credit Reporting Act, which protects and requires transparency about data used for consumer credit reports and employment or insurance eligibility checks; the Children’s Online Privacy Protection Act, which prevents online services from selling information that would let individuals identify children under the age of 13; and the Electronic Communications Privacy Act, which prevents internet providers from selling the content of personal communications (like what an email says), but does not protect “non-content” (like your name or address). Vermont and California have passed laws requiring data brokers to register with the state; Vermont also limits some data sharing, and California requires brokers to disclose what data is being sold or shared. Beyond that, state and federal regulation is nonexistent.

The sheer volume of information data brokers store makes them, according to New York Times technology writer Charlie Warzel, “an untenable risk to our personal and national security.” Such massive quantities of information falling into the hands of untrustworthy state actors could have dire consequences. For example, the type of sensitive personal data frequently collected can be combined with a few pieces of additional information, often easily found through geolocation and cell phone data, to identify one’s neighbors, friends, lovers, and location, the nature of one’s job, and even intimate details about one’s home life. These are details that could then be used to blackmail important political or military figures, or even to impersonate them in believable spear phishing emails. To investigate the issue, the New York Times legally bought data from a data broker; combined with pings from cell phones, it easily revealed people’s jobs and, in this case, their proximity to the president. The Times concluded that “Secret Service agents were particularly easy to identify.” The fact that Secret Service agents were unable to keep their sensitive information private shows how easy it is to build disparate data points into valuable information that puts national security at risk.
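To illustrate how little it takes to turn “anonymous” location pings into an identity, consider the minimal sketch below. The data, column names, and thresholds are hypothetical illustrations, not the Times’ methodology or any broker’s actual schema; it simply shows that a device’s most frequent nighttime location is usually a home, and its most frequent daytime location is usually a workplace.

```python
# Minimal sketch of re-identification from "anonymous" location pings.
# All data, column names, and thresholds are hypothetical illustrations.
import pandas as pd

# Hypothetical broker feed: one row per ping, keyed only by a device ID.
pings = pd.DataFrame({
    "device_id": ["abc123"] * 6,
    "timestamp": pd.to_datetime([
        "2022-03-01 02:10", "2022-03-01 03:40",   # overnight
        "2022-03-01 10:15", "2022-03-01 14:30",   # working hours
        "2022-03-02 01:55", "2022-03-02 11:05",
    ]),
    "lat": [38.9001, 38.9002, 38.8977, 38.8976, 38.9001, 38.8977],
    "lon": [-77.0401, -77.0402, -77.0366, -77.0367, -77.0401, -77.0366],
})

# Coarsen coordinates into ~100 m grid cells so repeated visits cluster together.
pings["cell"] = list(zip(pings["lat"].round(3), pings["lon"].round(3)))
pings["hour"] = pings["timestamp"].dt.hour

def likely_home_and_work(device: pd.DataFrame) -> pd.Series:
    """Most-visited cell overnight ~ home; most-visited cell during work hours ~ workplace."""
    night = device[(device["hour"] >= 22) | (device["hour"] < 6)]
    day = device[(device["hour"] >= 9) & (device["hour"] < 17)]
    return pd.Series({
        "likely_home": night["cell"].mode().iat[0] if not night.empty else None,
        "likely_work": day["cell"].mode().iat[0] if not day.empty else None,
    })

profiles = pings.groupby("device_id").apply(likely_home_and_work)
print(profiles)
# Matching "likely_work" against public building addresses is enough to guess
# an occupation -- no name or phone number required.
```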

There are two key ways to regulate data brokers: limit the types of data that can be sold, and limit to whom data can be sold. On the one hand, there is data so personal that it unequivocally should not be sold to anyone, domestic or foreign. Health care data, health metrics, fingerprints, DNA information, and facial recognition images are simply too risky to sell—all of them could be used to gain access to sensitive government or military databases. Alternatively, implementing controls over who can buy data would also improve digital safety. Any such controls should likely include bans on selling to foreign adversaries or political campaigns.

But before any regulation can happen, there must be a push for transparency and accountability, as the FTC outlined in a report back in 2014. Without greater transparency into the types of data collected and the security practices that protect them, policymakers cannot design effective policy that encourages the valuable uses of such data (including in marketing and security) while limiting national security risks. Data brokers are uniquely responsible for securing data, both from hacks and from adversarial customers, so this increased transparency should be paired with increased data broker accountability. Opt-out options for individuals are either nonexistent or so cumbersome that every American would have to find the time to navigate confusing agreements with every potential data broker. Instead, data brokers must be held accountable: if they are willing to profit from amassing so much sensitive information that they create a national security risk, they must also bear the consequences of their security lapses.
