Government Regulation and Big Tech: Why Internal Systems Aren’t Enough

By Aden Siebel (PO ’21)

As the field of technological privacy and ethics concerns grows increasingly complex, government regulation of tech giants has made surprising progress. International lawmakers have enacted significant policy and penalized these companies, and while the U.S. federal system has been slower, Congress has increasingly pressured executives and called for industry reform. This has led to a push among tech companies for self-regulation in order to deflect scrutiny of their business practices.

However, a statement from Microsoft’s Corporate Vice President Julie Brill calls into question the efficacy of this approach, arguing that “no matter how much work companies like Microsoft do to help organizations secure sensitive data and empower individuals to manage their own data, preserving a strong right to privacy will always fundamentally be a matter of law that falls to governments.”

Why is a self-regulatory approach insufficient, and why is Microsoft itself calling for outside meddling in its own business strategy? Opaque internal technology ethics commissions and a critical lack of external regulation show that, regardless of good intentions, outside control is necessary for the survival of the industry.

Microsoft’s calls for regulation come amidst a slew of privacy concerns and legal troubles for large tech companies, highlighting the need to reevaluate the current relationship between big tech and government. A number of high-profile court cases, including a recent victory by Google in the E.U., have revealed a troubling tendency among tech companies to ignore or push back against government privacy concerns. Furthermore, data leaks by companies like Facebook, and controversial partnerships between companies selling facial recognition technology and law enforcement agencies like Immigration and Customs Enforcement (ICE), have caused widespread public outrage. This has led to a number of calls by politicians for increased regulation, though little concrete action has yet developed.

Only incomplete, state-level policies like California’s Consumer Privacy Act and Washington’s stalled Privacy Act have made headway in the U.S., although the E.U. has managed to implement a strong data privacy law, the General Data Protection Regulation (GDPR). That law has already had a large impact on the tech industry, including imposing strict restrictions on companies through court cases. The size and scale of modern tech companies have caused some amount of regulatory panic, and although progress has been slow, lawmakers are increasingly aware of both the challenge and the necessity of creating a comprehensive legal framework. With pressure ramping up on tech companies to take more significant steps toward privacy protections and ethical behavior, the industry has turned in part to self-regulatory measures.

These internal systems, while well-meaning in principle, are of questionable utility and are inherently biased. Groups like ethics or legal boards make advisory suggestions to company leadership, or work closely with teams within tech companies to ensure ethical and legal compliance. One example is Palantir, a law enforcement software company whose civil liberties team works within the company to ensure that its products respect privacy concerns. However, Palantir still faces significant backlash for its decision to maintain ties to ICE, as the partnership implies support for the agency’s policy of detention and family separation at the border. Some see Palantir’s team as a way of deflecting criticism over continued ethics concerns rather than as a robust form of internal oversight. Similarly, Google’s ethics board, despite containing some high-ranking members, has been criticized for its toothlessness and lack of impact on the company’s policies. It is unclear how well-grounded these criticisms are, and how much of these boards’ limited impact stems from a lack of internal power as opposed to an intrinsic reluctance to criticize their own company.

Nevertheless, this reveals two significant issues with internal structures: a lack of transparency and an inherent loyalty bias. Whatever happens internally, companies will remain reluctant to disclose every concern their advisory boards address, meaning that potential issues can be obscured from the public. This also means there is no outside standard for performance, as consumers are unable to measure the boards’ effect. Similarly, these boards will always carry some inward bias. However good the intentions of Google’s ethics board, it still exists at the whim of the company, with its members’ salaries funded by the profits of the very organization it seeks to regulate.

Tech giants also have a poor record on transparency and on their interpretations of legal matters. Amazon has a troubling history of lying to politicians, and recently pitched a set of facial recognition laws that faced widespread criticism. Facebook has been at the center of a controversy over recent statements about freedom of speech, arguing that it is not responsible for moderating much of its platform on those grounds. This response has also received significant criticism, as it oversimplifies Facebook’s role in determining how content is distributed in favor of relying on legal excuses. With tech companies showing a lackadaisical and self-serving interpretation of the law and of future policy needs, it is imperative that the federal government step in to more clearly define the expectations placed on such influential businesses.

Without significant movement, we risk allowing this generation of tech companies to define their own rules. As long as these companies continue to break laws and propose their own forms of weak enforcement, they will keep pushing a vision of the future in which legality is shaped by their corporate needs, not the needs of their consumers. There are, however, promising steps forward that governments could take. Examining their own use of algorithmic technology, like ICE’s partnership with Palantir, moving toward “can’t be evil” systems that protect user privacy, and pushing for ethical standards for the use of facial recognition are only a few paths forward. Furthermore, working with companies that want regulation, like Microsoft, without letting them exert undue influence could be a way to make more efficient progress. Allowing tech companies to self-regulate isn’t a bad idea: we just have to make sure there’s enough policy to hold them truly accountable.
