Algorithms rule our lives, so who should rule them?

Volkswagen's recent emissions scandal highlighted the power that algorithms wield over our everyday lives. As technology advances and more everyday objects are driven almost entirely by software, it's become clear that we need a better way to catch cheating software and keep people safe.

A solution could be to model regulation of the software industry after the US Food and Drug Administration's oversight of the food and drug industry. The parallels are closer than you might think.

The case for tighter regulation

When Volkswagen was exposed for programming its emissions-control software to fool environmental regulators, many people called for more transparency and oversight over the technology.

One option discussed by the software community was to open-source the code behind these testing algorithms. This would be a welcome step forward, as it would let people audit the source code and see how the code is changed over time. But this step alone would not solve the problem of cheating software. After all, there is no guarantee that Volkswagen would actually use the unmodified open-sourced code.

Open-sourcing code would also fail to address other potential dangers. Politico reported earlier this year that Google's algorithms could influence the outcomes of presidential elections, since some candidates could be featured more prominently in its search results.

Research by the American Institute for Behavioral Research and Technology has also shown that Google search results could shift voting preferences by 20% or more (up to 80% in certain demographic groups). This could potentially flip the margins of close elections worldwide. But since Google's private algorithm is a core part of its competitive advantage, open-sourcing it is not likely to be an option.

The same problem applies to the algorithms used in DNA testing, breathalyzer tests and facial recognition software. Many defense attorneys have requested access to the source code for these tools to verify the algorithms' accuracy. But in many cases, these requests are denied, since the companies that produce these proprietary criminal justice algorithms fear that disclosure would threaten their bottom line. Yet clearly we need some way to ensure the accuracy of software that could put people behind bars.

What we can learn from the FDA

So how exactly could software take a regulatory page from the FDA in the United States? Before the 20th century, the government made several attempts to regulate food and medicine, but abuse within the system was still rampant. Food contamination caused widespread illness and death, particularly within the meatpacking industry.

Meanwhile, the rise of new medicines and vaccines promised to eradicate diseases, including smallpox. But for every innovation, there seemed to be an equal measure of exploitation by companies making false medical claims or failing to disclose ingredients. The reporting of journalists like Upton Sinclair made it abundantly clear by the early 1900s that the government needed to intervene to protect people and establish quality standards.

In 1906, President Theodore Roosevelt signed the Pure Food and Drug Act into law, which prohibited false advertising claims, set sanitation standards, and served as a watchdog for companies that could harm consumers' welfare. These first rules and regulations served as a foundation for the modern-day FDA, which is critical to ensuring that products are safe for consumers.

The FDA could be a good baseline model for software regulation, both in the US and in countries around the world, many of which have parallel agencies, including the European Medicines Agency, Health Canada, and the China Food and Drug Administration.

Just as the FDA ensures that major pharmaceutical companies aren't lying about the claims they make for drugs, there should be a similar regulator for software to ensure that car companies are not cheating customers and destroying the environment in the process. And just as companies need to disclose food ingredients to prevent people from ingesting poison, companies like Google should be required to provide some level of guarantee that they won't intentionally manipulate search results that could shape public opinion.

It's still relatively early days when it comes to discovering the true impact of algorithms in consumers' lives. But we should establish standards to prevent abuse sooner rather than later. With technology already affecting society on a large scale, we need to address emerging ethical issues head-on.

(I originally wrote this blog post as a guest article for Quartz.)


Katherine Bailey (not verified):

Unfortunately there’s more to assessing algorithms from an ethical perspective than looking at source code. Predictive algorithms are trained on input data and this data can be inherently biased. I recently came across an excellent article about this, which touches on scary things like “predictive policing” (think Minority Report but with linear regression instead of pre-cogs):…


I totally agree, Katherine. In an earlier draft of this blog post I talked about the need for data transparency in addition to algorithm transparency, but then decided to focus on algorithms to keep the article shorter. In the end, we need both. I discussed the need for data transparency in a previous post, though. Thanks for pointing this out!

Andrew Morton (not verified):

I think you missed an opportunity to talk about how the TPA will restrict the ability of governments to require access to source code.


Andrew, do you have a link to a write-up with more information? Would love to study up on that.