The Black Box Society: An Interview with Dr. Frank Pasquale


Conducted by Wentao Guo (PO ’19) and Jessie Levin (PO ’18)

Transcribed by Wentao Guo

Dr. Frank Pasquale is Professor of Law at the University of Maryland and an expert on the law of artificial intelligence, algorithms, and machine learning. His book, The Black Box Society: The Secret Algorithms that Control Money and Information, develops a social theory of reputation, search, and finance, and offers pragmatic reforms to improve the information economy. Dr. Pasquale is one of the leaders of a global movement for algorithmic accountability. In privacy law and surveillance, his work is among the leading research on regulation of algorithmic ranking, scoring, and sorting systems, including credit scoring and threat scoring. CLJPP interviewed Dr. Pasquale before his talk entitled “Privacy, Secrecy, and Corporate Data Flows,” hosted by Tech for Good.

CLJPP: Dr. Pasquale, in 2015, you published your book, The Black Box Society: The Secret Algorithms that Control Money and Information. Could you briefly describe what it’s about?

Dr. Frank Pasquale: Thank you so much for inviting me. It’s a real honor to appear in your publication. What I want to say about The Black Box Society is that it is a book that operates on two levels. On one level, it is offering very pragmatic reforms for the information economy. I look at problems with personal data protection, with runaway data—data that is used in one context but then goes and gets used in other contexts unfairly. I look at the bias of search engines and social networks, and the problematic practices of finance companies. And, in Chapter 5, I offer various specific recommendations for what different regulatory bodies should do to solve those problems.

But what is more important, I think—the more lasting contribution of the book—is that it is a social theory of reputation, search, and finance. To connect this to the organization of knowledge and how we think about knowledge: the study of politics and economics used to be quite tightly integrated in the 19th century. In the 20th, they became more distinct. That specialization helped advance some forms of knowledge. But we are now seeing an increasing integration of political and economic systems. (For example, with campaign finance, the power of business helps determine what happens in politics, and, in a sort of feedback loop, what happens politically determines which businesses do better.) That has led me to be very dissatisfied with the separation of political and economic expertise. The 21st century needs a study of political economy just as much as it needs distinct inquiries into politics and economics.

What I was trying to develop in the book are categories of social understanding that recognize the integration of the political and economic. I develop those categories as reputation, search, and finance. Reputation is how you are known as a person by society as a whole, search is how you know society as a whole, and finance is power in terms of who can do what (outside of violence). Violence is how we usually think about power, but assuming that a lid is kept on violence, in general, finance determines who can do what, who can mobilize people and resources to their projects. The book is about reputation, search, and finance as politico-economic phenomena. It describes how algorithms are used as the organizational dynamics in each of those fields.

CLJPP: What has changed in the years since publication of the book?

Pasquale: The book has a very long practical/pragmatic section, and concludes with a more utopian vision at the very end. That balance made sense in 2015. But after Trump, pragmatic incrementalism is discredited. We have a political regime right now that has made it clear that it does not want to build on, refine, or tweak reforms that have been developed by a sensible, centrist technocratic administration like Obama’s. Instead, it just wants to throw out almost all the initiatives I covered in the book, for whatever reasons.

What that leads us to is the need for much more dramatic, far-reaching, and sweeping concepts of how we deal with unfairness, bias, discrimination, exclusion, and inequality, in the realms of reputation, search, and finance. That’s been the major change. I think that the social theory of reputation, search, and finance is still solid and should be the foundation of a lot of work, but I don’t necessarily think that the pragmatic proposals are viable right now. They’re not all terrible, and they’re not all outdated, but it’s hard to get excited about them. It’s hard to get excited about something like the Office of Financial Research (OFR) when you see what’s being done to the financial regulatory agencies right now—because even if OFR achieved a much better sense of how global finance is operating, what would Trumpist finance regulators do about it?

CLJPP: What do these dramatic changes look like? How can we make them in such a way that they last?

Pasquale: At a conference called Algorithms and Explanations, I was asked to give a talk about algorithmic credit determination. I began the talk by contrasting the approaches of Larry Summers and Darrick Hamilton. Summers, the former Treasury Secretary, says that the key to financial inclusion is to develop better algorithms with more data that will include more people in society and better judge their creditworthiness. But Hamilton, an economics professor who’s working on things like a job guarantee and other more ambitious economic policies, believes that the algorithms need to be nationalized, that they need to be totally public, and that they need to be a matter of political contestation.

In The Black Box Society, I was very cautious about making the algorithms public. I was very balanced: “Well, we’ve got to balance the trade secrecy interests of the firms against the public interest and the right to know.” Now I say, forget about trade secrecy here. It’s inappropriate in any ranking and rating of people. Make this a space where we have a public determination, where there are laws governing what type of data can be used and how it can be used, and who should be given a chance and who should not be, in various forms of credit determination. That’s a more ambitious approach. Hamilton helped me see the light there.

With respect to finance, I am also increasingly attracted to the ideas of modern monetary theory [MMT], which emphasize the power of sovereign currency issuers like the United States to fund themselves without relying on debt markets. The constraint on the ability to create money is not a debt constraint but an inflation constraint. If prices start going way up, then you tax, or you stop creating more money, or you adjust reserve requirements for banks—there are many ways of dealing with the problem. Increasing government’s role in finance reduces the power of private finance. That would help many people. With student loans, for example, you could have a vision for free college, or you could have a vision where graduates pay a small tax for a set number of years after graduating. That seems much more humane than the current system, where all too many people have some giant lump of debt hanging over their heads, collecting interest.

Now, as for search engines and social networks—that’s another really exciting area where I’m much more attracted to an antitrust approach. I tiptoed around it in the book; I proposed several behavioral remedies, which try to alter the behavior of the dominant firm. But the current thinking is that you need structural remedies. Rather than trying to stop Facebook from doing certain bad things to business partners, you would divide up the firm and say, Facebook needs to be separate from Instagram, which needs to be separate from WhatsApp. You could do similar things with Google and divide up various parts of Google.

Those are some of the more comprehensive reform ideas that I think are critical, and I’ve been working on them piecemeal in various articles and consultations with leaders inside and outside of governments. That’s where I’ve been led the past couple of years. What we’ve learned over the past twenty years is that a lot of the incrementalism that was a cornerstone of neoliberal progressivism, the foundation we thought everything would be built upon, is now crumbling. And so, you’ve got to have something a little more ambitious.

CLJPP: Data and algorithms are increasingly becoming part of our everyday lives. What would you want the balance of collaboration and cooperation between regulatory bodies and private firms to look like, especially considering these more dramatic approaches? What does good collaboration look like?

Pasquale: I think that it depends on the area. If we start with data brokers, the beginning of collaboration is a requirement that all data brokers report to the Federal Trade Commission and to the state attorneys general who act as privacy regulators. They should only be able to operate on a massive scale if they are licensed—to ensure they have proper privacy and security practices, and to ensure they are bonded to the extent necessary to compensate victims if there is a data breach. They need to report where they get their data from, to whom they sell it, what uses they make of it, and what inferences they’re drawing from the data. That needs to be part of a permanent, publicly available record so that we can keep track of and find out about companies that are, for example, selling lists of people whose kids were killed in car crashes, or of “rape sufferers,” a list that was only discovered inadvertently.

Such disturbing lists, and much more, should be part of our public record. That’s the core of the Black Box message, why the book resonated: that our lives are becoming an open book, while corporations use trade secrecy to keep secret all that they’re doing. We need better incentives to reverse that trend.

Sometimes people do the right thing—think about the unraveling of the Facebook/Cambridge Analytica scandal thanks to very brave whistleblowers, Chris Wylie and Shahmir Sanni, who came forward at great personal risk. Gerald Nestler has done excellent work, in “The Future of Demonstration” project, celebrating the courage of the “renegade,” the whistleblower, the person who betrays a corrupt system. Part of the answer has to involve protection for whistleblowers who come forward. In the healthcare sector, we have that with qui tam laws, where we not only protect whistleblowers but also tell them: if you find out that healthcare fraud has been happening in a company, you get fifteen percent of the recovery if it all works out. That’s a huge incentive. We have people walking around today who have twenty, thirty million dollars because they were in a very corrupt organization and didn’t just keep pocketing money within it, but instead reported it to the authorities. That bounty is just. It’s about empowering people to do the right thing.

The other side is that government just needs vastly more experts. There’s an article called “An FDA for Algorithms,” and the analogy is very powerful. First, before the FDA allows people to take drugs, the drugs have to be tested to make sure that they work and that they’re safe. But second, the resources of the FDA are great. I’ve given talks at the FDA, and it is massive. You go there, and it’s this whole campus of buildings with many experts. The other agencies need to be like that. They need to be that large. If you taxed tech, if you did other things that were more aggressive, you could have that level of expertise on the outside trying to insert public values into the industry.

That might seem bizarre now, when we have so-called tech regulators that are little more than a few hundred attorneys with a tiny budget. But I think we’re in the same position relative to tech today that the U.S. was in relative to communications and finance in the 1920s. In the 1920s, there were no New Deal agencies. There was no SEC or FCC. There was an FTC, which was established in 1914, but there were not many of the other agencies that are critical to the administration of the modern economy. But things changed. I think that can happen here as well. I really do hold that hope. But our political system is such that it probably has to happen with a Democratic president, a Democratic House, and sixty Democratic votes in the Senate, and even then it will be very difficult. The U.S. has to politically look like California, essentially, before this happens. That may seem impossible. But if you had told anyone five years ago that Trump would be president, they’d have laughed you out of the room.

The Overton Window has vastly shifted, and that’s where academics need to be. We need to think about what reform looks like now that the Overton Window has shifted this far. The people in the big tech and finance firms keep trying to shift it the other way. The tech firms keep trying to shift it radically rightward by using, for example, the First Amendment to say that nothing they “say” (which is a huge amount of what they do) can be regulated. Unless there’s an equal and opposite response on the other side, it’s going to keep shifting until they just get carte blanche to do anything they want.

CLJPP: In the U.S., certain states tend to be either pro-regulatory or anti-regulatory. Do you think that trend will hold when it comes to data protection?

Pasquale: I think the first thing to start with is the problem of pre-emption by the federal government. You know, right now, the federal government is trying to get rid of the California emissions standards for cars. This is out of the old Bush administration playbook. Lots of states in 2004 and 2005 realized that the mortgage market was out of control and passed tough laws, and Bush’s Office of the Comptroller of the Currency, as well as the Office of Thrift Supervision and some other agencies, came in and just wiped them away. They just got rid of them. They said, “Look, you’re interfering with profits. Things are fine; we don’t see any problems here.” This is a huge problem. I worry about the federal government doing that. And Congress may even do it to California’s consumer privacy law, which is supposed to go into effect in 2020. That’s going to be a huge problem in the future. What’s even worse is that the Supreme Court has issued a lot of opinions that are basically hostile to progressive pre-emption and favorable to conservative pre-emption. Such decisions erode its legitimacy, but as one justice put it decades ago, the court is not final because it’s infallible; it’s infallible because it’s final.

I would say, though, with respect to the state-level laws, there’s a real controversy in the tech policy community over whether diversity of laws is a good thing or a bad thing. If you look at Europe, they keep emphasizing the European single market with one set of rules. I think it [diversity of laws] is a good thing. I think you should allow experimentation. The Illinois biometrics law, which prohibits certain forms of facial recognition unless a user affirmatively opts in, and which could possibly cost Facebook a great deal, is a good thing. It’s a good thing that there are states that can impose those sorts of requirements.

However, there are certain constitutional limits on the variety and variability of state legislation. For example, California has this eraser law that guarantees people in California that anything done on social media (and more) up to age eighteen can be gotten rid of. That’s a very compelling law that helps ensure people are not trapped by their past. We have diversity endorsed under HIPAA, too. It’s widely understood that federal health privacy law is a floor of protection, but not a ceiling on protection. That’s the baseline of privacy protection you get, but if a state wants to put something above that, it’s welcome to do so. And that has not killed the healthcare system.

CLJPP: Do you see that experimentation continuing to happen and develop further in the United States?

Pasquale: I do. With reputation, where you’re going to see it is with states limiting the types of data that can be used to make critical decisions. For example, now there are all these machine learning algorithms that say your voice tone alone—not the content of what you’re saying, but just your tone—gives critical clues to who will be a cultural fit in the company and who will not. That to me is highly offensive. It’s a very offensive treatment of people as things. It’s like putting a Geiger counter to a person.

CLJPP: I’ve been thinking a lot about linguistic racism.

Pasquale: Oh yes, exactly. There’s no doubt that that’s going to be a huge component of this. Cultural fit is itself very contested, a concept that deserves a lot more interrogation and discrediting. Now, there are also people who market the same voice-parsing algorithms to match callers with customer service representatives. If you call in angry, you’ll be put through to a certain person, and if you call in as a nice person, then maybe you’ll get another type of person matched to you. Again, I find that weird and creepy, but it’s harder to regulate than the job-matching software.

There is something about getting a job that is so critical to people’s sense of identity, to their ability to support themselves. Some aspects of fairness and even due process should inform hiring—or else we are going to lose our ability to stop things everyone agrees we should be stopping (like discrimination). This is a really key concept that we’re just realizing now: black-boxed algorithms could effectively make certain anti-discrimination laws obsolete by hiding discrimination inside them. I think that we didn’t realize the degree to which legal concepts of fairness permeated spheres that were ostensibly merely market spheres. With respect to getting a job, no human goes to the interview and says, “Well, I am just a sack of atoms that will lead to higher or lower profits, and there are machine learning algorithms that can find out whether my sack of atoms is one that’s more likely to profit or less.” It’s much more like, “I have a certain level of knowledge and enthusiasm and way of doing things and record of accomplishment, and they’ll give me a fair shake and decide whether I was the best person or not.” That former self-conception is just not bearable, even for someone who is philosophically a physical reductionist, a thoroughgoing materialist. The latter one is the key to human rights—to one’s sense of being a person with dignity and rights. And law is the only way to protect it from an onslaught of computational processing.

This to me is where machine learning in HR [human resources] is a much greater threat to our sense of humanity and self-worth and dignity than is commonly recognized. With a crop, you might say, “this seed is more likely to grow more wheat than the other one”; they’re trying to use the same types of judgments on people, but people demand more than that. They demand dignity, and they demand a sense that there has been a fair process that they have gone through. That really is the core right here. But so few people are talking about that core issue. They don’t want to face up to the metaphysical implications of the big data revolution.

CLJPP: Right, machine learning bias is one issue, but there’s a broader issue: if we’re using machine learning predictive analytics to such a great extent, what does that say about people’s agency, their right to exist outside of boxes, to step outside boxes that they’ve been placed in? Outside of bias, what are the values that we are or aren’t upholding?

Pasquale: I totally agree with you there. To bring this to the educational context, there was a case study recently involving a Georgia Tech professor, where one of his TAs was ostensibly a human but was actually a bot. They were trying to create a bot that could fool students into thinking it was an actual person, and there’s now been this whole controversy about it. He said that it fooled them, and that they were happy to have been fooled. Then, in later classes, he got consent. But the problem is that it’s not clear whether, when he got consent, he actually made that consent meaningful by setting up a separate section, without the bot TA, for students who objected.

What’s critical about all of this ed tech is that it is marketed as a way of ensuring progress over time in learning. For example, there’s software which tracks people’s eye movements while they’re taking a test to make sure they’re not cheating. And there’s Turnitin, the plagiarism software; by using it, you’re performing powerful, uncompensated labor for Turnitin, because your paper becomes part of the database so that later people can’t copy you. But this ed tech is simultaneously reshaping the learning environment to be behavioristic. It’s also trying to appropriate the labor of students in an extractive project. That’s part of the battle for future students: to ask who profits and who shares in the profits. Are administrators changing the nature of education via technology? Will what we count as education be changed because a cheaper tool provides x, so that x becomes what education is? All of these are very tough questions that they don’t like to answer, but we have to hold their feet to the fire, because there’s so much commercial incentive to make it all automated.

What’s odd about it is that a lot of it makes sense theoretically when they say to make it competency-based, so that rather than spending four years at a college, you just specify everything that someone should know once they leave and let them take all the tests to prove they know it in six months. Faster and faster, so you can compress everything. But we’ve all seen the stories about the “prodigy” who graduates from college at age 11 or 12. It’s sad, right? But somehow that’s the ultimate model? They’re seeing education through the lens of how to make it as fast and as cheap as possible, to create a labor force that is itself cheap, to gin up more profit. As if labor is just another resource like wood or soil, for which we must lower the cost as much as possible. But this is a constituent dimension of human experience. You’ve got to have some time to process, time to explore and sometimes go down blind alleys (knowing that won’t count against you, won’t put you behind in some race), to reflect, all these things that are not necessarily provably efficient or chunks of knowledge that you can demonstrate that you have.

CLJPP: I’m interested in going back to accountability. Cambridge Analytica just announced yesterday that they’re closing, but it’s not clear whether they’re actually closing or rebranding as something else. What do you think can be done to increase transparency when these companies can swap identities or disappear so easily when things go south?

Pasquale: This is the theme of Chapter 4 of The Black Box Society: the variable interest entities and shell companies and all that other stuff beloved by complexity-loving corporate lawyers. Part of what we have to start thinking about is the wrong turns we have taken in business law with respect to the flexibility of corporations and limited liability companies to slice and dice ownership interests, responsibilities, and bankruptcy consequences. That’s something we have to look very deeply into. We have to start thinking about ways of assuring responsibility for the people who own or profit from corporations, because corporations don’t have feelings. This is where the debates about robot accountability and corporate accountability are quite parallel. Some people say, give robots personhood and hold them responsible for the things they do. What does that even mean? Do we want a robot that, if we put it in jail, cries or looks really sad? Something very similar is going on with a corporation when we allow this legal shield essentially to take on responsibility for what a group of people did.

There’s a researcher at Berkeley who has actually mentioned the possibility of imposing personal liability on Facebook executives. You have to revive notions of personal responsibility. Sarbanes-Oxley was a law that tried to do that. It has largely failed, but it tried to do something like that. That’s where you have to go: to personal responsibility. In the same way that CEOs have to attest to the accuracy of financial statements, perhaps someday we’ll have standards for good data practices, and CEOs or other very responsible people will have to attest to the quality and integrity of their companies’ data practices. Hopefully that’s where we’re going, but it’ll be hard.

CLJPP: To conclude, if you could have one piece of advice for students who want to create a world in which the use of data and algorithms is transparent and accountable and for the good of people, what would that piece of advice be?

Pasquale: I know this is a bit self-serving, but I really would encourage anyone who’ll eventually work in the firms that are either dominating this field or trying to disrupt it to respect the law and legal expertise. I feel like there is often a huge problem in terms of people treating the law as just another set of obstacles to clear. In fact, much of the law governing the areas in which these algorithms operate is far more than a set of check-the-box compliance requirements. It actually reflects and encodes human values. With many of the problems at these firms—for example, with Facebook and Cambridge Analytica—the guy who was in charge of their data-sharing partnerships with third-party app developers alerted managers that they did not know what was going on with the data and that it was impossible to really police their data-sharing agreements. And their managers were like, “Eh, who cares? We need more growth, more users, up, up, up, up….”

That is the root of a lot of the problems here, and I’ve seen that type of attitude at a lot of these firms. They just see law as a cost. They don’t necessarily respect democratic institutions, or the agencies and bureaucracies that are charged with enforcing the law as a reflection of those institutions. That’s part of it.

I would also encourage them to respect and to value the social sciences and humanities, and to find ways to bring those voices in. At Xerox PARC and at Bell Labs, there used to be some awareness of and effort to bring in other voices, and those voices, I think, are not as respected nowadays. They need to get more respect. Bring in anthropologists; bring in social scientists. Knowledge is both intrinsically and instrumentally valuable, and deserves our respect, wherever we end up working.

CLJPP: Thank you so much for this interview, Dr. Pasquale.
