Matrix On Point

Surveillance and Privacy in a Biometric World


As governments and businesses begin to use more forms of biometric identification – including fingerprints, facial recognition, and voice recognition, among others – it’s easier than ever to recognize a person. What implications do these technologies have on the future of privacy and surveillance?

Recorded on February 15, 2024, this Matrix on Point panel featured scholars offering perspectives on how biometric identification might change our understanding of the relationship between people, private industry, and their government. The panel featured John Chuang, Professor in the UC Berkeley School of Information; Lawrence Cohen, Professor in Anthropology and South and Southeast Asian Studies and the co-director of the Medical Anthropology Program; and Jennifer Urban, Clinical Professor of Law at Berkeley Law, who is Director of Policy Initiatives at the Samuelson Law, Technology & Public Policy Clinic and a co-faculty director of the Berkeley Center for Law and Technology. The panel was moderated by Rebecca Wexler, Assistant Professor of Law at Berkeley Law and Faculty Co-Director of the Berkeley Center for Law & Technology.

Co-sponsored by the UC Berkeley School of Law, the Center for the Study of Law and Society, the Center for Science, Technology, Medicine, & Society, the Center for Long-Term Cybersecurity, and the UC Berkeley School of Information

Matrix On Point is a discussion series promoting focused, cross-disciplinary conversations on today’s most pressing issues. Offering opportunities for scholarly exchange and interaction, each Matrix On Point features the perspectives of leading scholars and specialists from different disciplines, followed by an open conversation. These thought-provoking events are free and open to the public.


[JULIA SIZEK] Hello, everyone. I’m Julia Sizek. I’m the postdoc here at the Social Science Matrix. Welcome to our event today, which is Surveillance and Privacy in a Biometric World.

So today’s event is part of our Matrix on Point series, in which we address contemporary issues, including how technologies, from fingerprints to retina scans and facial recognition, are shaping our world.

These technologies, as you all know, have become both mundane and exceptional. You use them to open your phone. But they’re also subject to fierce public debate. They’re used to surveil pedestrians and drivers in places like San Diego, but they are banned in many municipalities around the Bay Area, including Berkeley, where we are today.

So we asked some experts here, to understand how these technologies have already changed our lives, and how they might be shaping our future. This event that we have today is co-sponsored by the UC Berkeley School of Law, the Center for the Study of Law and Society, and the Center for Science Technology Medicine and Society.

So, before we get started, I’m just going to tell you about a couple of our upcoming events here at Matrix. We have many in the next couple of weeks. Next week, we will be having Sharad Goel, who will be talking about included-variable bias. On March 4, we will be discussing the new book, Terracene, by ethnic studies scholar Salar Mameni.

On March 7, Dana-Ain Davis will be coming to discuss Black women and obstetric racism. And then on March 11, we’ll be having a panel on storytelling and the climate crisis. And finally, on March 18, we’ll be having an event on conservatorship in California.

In addition to these events, we do have other events that will be coming up at the end of the semester. So if you want to find out about any of those events, you can sign up for our newsletter, follow us on X, formerly known as Twitter, or you can also just look at our website.

So now, as we transition back to today’s event, I will introduce our moderator, Rebecca Wexler. Rebecca Wexler’s teaching and research focus on data, technology, and secrecy in the criminal legal system, with a particular focus on evidence law, trade secret law, and data privacy.

Her scholarship has appeared, or is forthcoming, in the Harvard Law Review, Stanford Law Review, Yale Law Journal Forum, NYU Law Review, UCLA Law Review, Texas Law Review, Vanderbilt Law Review, and the Berkeley Technology Law Journal.

Wexler served as senior policy advisor at the White House Office of Technology Policy last year, in spring 2023, and was a visiting professor at Columbia Law last fall. So with no further ado, I will turn it over to Rebecca.

[REBECCA WEXLER] Thank you so much, Julia. And thanks to everybody for coming. I want to say, actually, I think, Julia, this is one of the most well organized panels that I have ever joined. I’m really excited to be here with a wonderful group of speakers. We had a brief meeting in advance to prepare. And I can assure you that it’s a rich group of interdisciplinary scholars, coming from very different perspectives on some important issues.

So, John Chuang is a professor at the UC Berkeley School of Information, right here. His research and teaching span climate informatics, biosensory computing, and incentive-centered design. He leads the BioSENSE lab, studying brainwave authentication using passthoughts– I’m not sure what passthoughts are, but I’m very excited to hear about it– affective biosensing, embodied decision making, and privacy of ubiquitous sensing.

His earlier work investigated strategic cybersecurity investments, incentives for peer production, and the scalability of multicast trees. He has a PhD in engineering and public policy from Carnegie Mellon University, an MS in electrical engineering from Stanford University, and graduated summa cum laude in electrical engineering from the University of Southern California. And maybe, actually, what I’ll do is, I’ll introduce the speakers right before you talk. So let’s just start with you, John.

[JOHN CHUANG] Thank you, Rebecca, for your introduction. So, maybe to answer Rebecca’s question about passthoughts, which I’m not going to talk about today. The idea is to replace passwords, for which we have a love/hate relationship, with something else. Instead of typing in your passwords, you will think your secret thought. And you use that to authenticate yourself to your phone, to your computer, whatever systems that you might be interested in.

And of course, not only are there technical considerations there, there are also lots of other social issues, privacy issues, surveillance issues that we can imagine if we were to start using our brainwaves for authentication or other purposes. But that’s a digression. I hope you’re not counting that against my 15 minutes.

Thank you. What I would like to share with you today is some thoughts on the limits of privacy in the context of biosurveillance. So, biosignals– meaning, signals that come out of our human bodies– are distinctive in several different ways. They are expansive in scope. We see lots of examples here.

They are intimate, yet leakable. They are precise, yet ambiguous. Familiar yet unverifiable. And finally, probably most importantly in this context, they are of limited controllability.

We also live in a society now, where sensing devices are already ubiquitous in both public and private life. And they form this constellation of tracking infrastructure that can detect and influence our behavior. This is what Zuboff describes as the Big Other. What I’d like to share with you today, as part of this panel, is a study that we called Covert Embodied Choice.

We sought to study how a combination of physiological sensing and machine learning may enable predictions about humans’ future behavior. And perhaps even more importantly, we wanted to study how effective individuals are at evading these systems when they are explicitly made aware of the adversarial tracking going on.

So we designed an experimental study to ask these questions. To what extent do biosignals, like eye gaze or micro motor movements from our body, predict our intentions? What strategies might be employed by participants in our study when instructed to make unpredictable decisions, knowing that they are being tracked?

How well do their intuitions about the tracking dynamics, and how to evade them, in fact align with modern technology? How effective are their employed strategies at maintaining their privacy of intent? Or in other words, do their strategies work?

So we designed a task where our participants, like the person shown in that photograph, play a virtual card matching game, as you see in the picture in the bottom right. And while they are playing this game, we have a virtual adversary that is tracking their biosignals: specifically, their eye gaze in the headset, their head position, their head movement, their hand movement as they hold the controller in their hand, as well as their skin conductance, which is measured using what looks like a wristwatch.

And the virtual adversary, with these collected biosignals, attempts to predict which of the two cards on the table the participant is going to draw.

So from the participant’s point of view, their tasks are as follows. Number one, during a decision phase, look at the two cards on the table, and decide which of those two cards, when combined with the two cards in their hand in front of them, will form a complete set according to some criteria. After that initial decision phase, they then perform the actual action of picking up their chosen card. And they’re asked to perform the task while keeping the adversary from being able to correctly predict which of the two cards on the table they’re going to choose.

So we had 55 participants in our study. Each one of them underwent 44 trials. You see here a schematic. Each trial comprised a decision phase lasting 10 seconds, during which their behavior is tracked by the adversary, followed by 3 seconds of a selection phase, where they actually use the controller to reach out and grab the card that they have chosen.

In the middle of the experiment, they are informed of the presence of an adversary. But prior to that, in the pre-adversary trials, they were not informed of the presence of the adversary. So their task was simply to choose the card that matches the hand that they have. But once they have been informed, in the middle of the experiment, they are then instructed to try to limit the adversary’s ability to predict accurately.

Let me show you some examples of how we may visualize the type of data that can be collected by the adversarial system. What you see here are micro-motor signals, essentially captured through the motion of the hand controller that these participants are using to navigate in this virtual environment.

On the top row, you see three graphs, where the magenta color corresponds to trials where they chose the card on the left, versus the blue, the card on the right. And you see that there are clear distinctions across these three metrics: the trajectory of the controller in the first graph, the final position of the controller at the end of that 10-second phase, and then the final velocity of that controller.

And you can see, based on the two colors, that there is a very clear distinction when they ultimately chose to select the card on the left or the card on the right. Their behavior prior to the selection is easily distinguishable, even by just visual inspection.

The bottom row corresponds to the post-adversary trials. So at this point, we can see that participants are now attempting to engage in strategies to hide, or to obfuscate, their behavior. In this particular context, you see that perhaps it is harder for us, by visual inspection, to make distinctions. But there are still certain metrics, such as the one in the bottom right, the final controller velocity, where we can still quite easily see the distinction between the two card choices in their behavior.

So this is one type of data that we are able to track using existing commercial, off-the-shelf sensor technologies. Here is a second stream of data that we collected. This corresponds to the eye gaze data from the eye trackers that are built into these virtual reality headsets.

What we see here are fixation data for three selected participants, in the three columns. For each of the three participants, we show their fixation duration, as well as fixation sequence, across the 44 trials, which are sequenced from top to bottom. If we focus for now on the top row, we see the green lines corresponding to the total fixation in seconds on the card that they ultimately chose, versus the blue, total fixation in seconds on the card that they did not choose.

The red dotted line in the middle is where they transition from the pre-adversary phase to the post-adversary phase. So, for example, we can now look at the first column. And we see that for this particular participant, prior to being informed of the presence of an adversary, they spend a lot more time fixating on the card that they ultimately choose.

Once we cross that dotted red line, to now being aware of the adversary, you see a reversal. They are, in fact, attempting to fixate more on the other card, hoping that will confuse the adversary. But they have overcompensated in the other direction, such that now they are consistently looking at the other card.

The second participant in the middle, pre-adversary, very similar behavior. You see that, in the second half, they are trying to dampen their behavior, they are trying to reduce the gap between how much time they are fixating on one or the other. But if you turn to the bottom graph, the middle bottom graph, where now here we are showing the specific objects that they are fixating on. Across all these trials, across the 10 seconds, you’ll see– oops. Sorry.

You will see, on here, that they nonetheless end up, at the end of the 10 seconds, fixating their eyes on the card that they’re going to choose. So while they are trying to dampen their behavior over the entire 10 seconds, their behavior at the very end of those 10 seconds is still highly consistent and predictable.

Finally, the third participant, pre-adversary, if you look at the top, it appears to be quite random. But in fact, they appear to overcompensate after the transition, to a point where now they are consistently looking at the other card. And for a system to be able to recognize that that’s a strategy that they are undertaking, that becomes a very reliable signal for a system.

So if we take a step back, take all these data, and run them through very standard, simple machine learning algorithms, we see that the data we’re able to collect are actually highly predictive of choice, both pre-adversary as well as post-adversary. So even as participants attempt to evade tracking, their behavior is still going to be quite predictable. In fact, some participants end up becoming even more predictable in the second half.
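[The prediction step described here can be sketched in miniature. The following is a toy Python illustration on synthetic data, not the study’s actual pipeline: the feature names (fixation times, final controller velocity) loosely echo the signals described above, and the “adversary” is a hand-written rule standing in for the study’s standard machine learning classifiers.]

```python
import random

random.seed(0)

def simulate_trial(choice):
    # Hypothetical features, loosely modeled on the talk's signals:
    # fixation time on the left vs. right card, and the final controller
    # x-velocity (negative = drifting right). The chosen side gets more
    # fixation and a final movement toward it, plus Gaussian noise.
    bias = 1.0 if choice == "left" else -1.0
    fix_left = 5.0 + 1.5 * bias + random.gauss(0, 1.0)
    fix_right = 5.0 - 1.5 * bias + random.gauss(0, 1.0)
    vel_x = -0.5 * bias + random.gauss(0, 0.3)
    return (fix_left, fix_right, vel_x)

def adversary_predict(features):
    # A deliberately simple decision rule: longer fixation on a side and
    # final movement toward it both vote for that side.
    fix_left, fix_right, vel_x = features
    score = (fix_left - fix_right) - vel_x
    return "left" if score > 0 else "right"

trials = [random.choice(["left", "right"]) for _ in range(1000)]
correct = sum(adversary_predict(simulate_trial(c)) == c for c in trials)
accuracy = correct / len(trials)
print(f"adversary accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

[Even this crude rule recovers the choice far above chance on the synthetic data, which is the qualitative point of the talk: when behavioral signals leak intent, no sophisticated model is required to exploit them.]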

Just as importantly, in post-experiment surveys and interviews, we found that 60% of respondents believed that they were, in fact, able to influence the adversary’s ability to predict. So I’ll finish with just a couple of takeaways for us, as researchers, from undertaking this study.

The first is that, I think, what this study taught us is that if we consider interventions out there, such as surveillance warning symbols, all they are doing is simply punting the responsibility to individuals, and asking the individuals to make the right decisions– when, in fact, as we see in this one specific instance, not all individuals have a nuanced understanding of the capabilities of the technologies, the sensors, and the algorithms. And neither should they have to.

Secondly, we also recognize that what we are studying here is only one snapshot of a moving target. There are, in fact, signals hidden within the raw data. And those signals can evolve when new algorithms are proposed, when new users are recruited into the population of data subjects, and when additional sources can be fused and integrated with the set of existing data.

So the sensitivity arises not only from the data that we’ve collected, or that can be collected by similar systems outside of our study, but also from what can be done with this data over time, down the road, as more sophisticated algorithms become available and the data sets become more comprehensive. So with that, I will stop, and I will turn over to Rebecca.

[REBECCA WEXLER] Thank you so much. Fascinating. Yeah. The graphs are– I wonder what I would do in your lab, in your circumstance. All right. Next up, we have Lawrence Cohen, a scholar of religion and medical anthropologist. Much of his work has focused on the norms and forms of political life in India, attending to questions of old age and the place of the family in the decolonization of knowledge, to the sexual and gendered logics of backwardness, and to the mediation and regulation of markets and human organs as sites to think about ethics as popular culture.

So, for the past decade, he’s studied contending models of biometrics and big data in the control and governance of economy and society, with a focus on India’s massive Aadhaar identification project. And do you have slides as well? OK.

[LAWRENCE COHEN] It’s an honor to be here. I find, since I study large technological objects that matter in people’s lives, that, no matter where I talk about this, people in the audience tend to know a lot more. So I’m looking forward to conversation.

So at a range of times, but particularly in the late 1990s, several groups within the government of India decided that India needed a more powerful national ID card. Actually, I shouldn’t say “card,” because the material status of the object was in question. Think of your Social Security number, which operates virtually, versus your driver’s license in the US, which operates as a thing. Very different ontologies of national ID-ness.

So, broadly– and I’m going to be simplistic because of time. And this talk will end like a litany. In an hour, I will be told the time is over, and it will stop. Let’s see how far I can get.

So, India and Pakistan had a much-celebrated war, popular in film, in 1999, on a glacier in divided and multiply claimed Kashmir, the so-called Kargil War. On the Indian side, the Kargil Review Committee is formed to think about future questions of security. The focus is quickly on the need for a national ID card. And that card is early on focused on something like citizenship: on a differentiation between who are real citizens and who are persons claiming to be citizens, presumptively from the other side of the Kashmir Line of Control.

Around the same time, in a very different world, but also using the same keyword, that is, biometrics– and biometrics functions in these debates as something like a floating signifier. No one quite knows what it is. It doesn’t matter for the debates. It’s something powerful. We have a sense: fingers, eyes. The history of the fingerprint, of course, is bound up with colonial India and its governance.

So a new biometrics is going to secure the nation. So in the finance world, particularly in the erstwhile planning commission of India, but particularly tied to the growing influence of South Indian tech capital on the planning commission, there is a debate over how to rationalize population health.

To put it very crudely, the question for both the government and the engineers, who become increasingly influential, is, why are we not China? That is, why are we not, despite having the global language of rule, economically positioned for a transforming economy? And the answer, crudely, is population health. And the presumption is something that, increasingly, at the end of the 20th century, becomes, again, animated as corruption across Indian public culture.

And a group of engineers, in this case, tied to perhaps the most important group in the developing of a global outsourcing economy, that is Infosys, the corporation, and particularly one of its founders, Nandan Nilekani, have an idea for how to govern and rationalize the distribution of goods in India.

Now, Infosys is a service company. It’s a category central to a shifting economy of service. And Infosys supplies that category of service both to what governments do and to what private capital does. It distributes service.

And what’s interesting, since we’re sitting in a hall of social theory and social science, is that you could argue that much of the social science of the past 150 years has focused on things like wage, things like welfare, things as a kind of gift, things like credit and economies of debt, and things like products and questions of production.

Service collapses all these. Service is anything that is a good that can be distributed, whether it is credit for the poor, or the financialization of the poor; whether it is wage, the rationalization of wage economies; whether, chiefly, it is welfare; or whether the hope would be that all products would be tied to securing the customer. This was tied, of course, to the shift to know-your-customer norms across a range of global finance institutions.

The point is that you have these two debates happening simultaneously, across the government of India, and a range of private capital, global and Indian concerns. And they lead to two very different visions of what national security should look like.

The defense vision eventually leads to something called the National Population Register, NPR. And broadly, because, again, it comes out of the Kargil committee, it’s focused upon– oh, shoot. 5 minutes. More information. We want to know more about you, where you belong, et cetera, et cetera. We want more data fields.

The engineers had a very different– I’m sorry. The engineers of Infosys had a very different understanding. One of the founders said, we want to know nothing about you. We want your fingerprints. We want a random number. And there were a range of reasons for this.

But partially, they had a sense that if this was going to be an incorruptible system– if it could not be corrupted by, say, low-level babus, that is, bureaucrats extracting value or corruption from service seekers– they had to produce something that would be radically mobile, which knew nothing about you. Their vision of a citizen was of someone who might be making false claims on service, getting more gas cylinders for their household, et cetera. But fundamentally, it is someone to whom we want to rationalize service delivery, by getting rid of fake claimants, as opposed to fake citizens.

I won’t go through the history of both fingerprint and eye scans here. One of the problems of this was scaling up. In both cases, the architecture– of NPR, and of Aadhaar, which means basis, or foundation, and it was the foundation of a new kind of political subject– was rooted differently. So, again, for NPR, the focus begins with border security. It begins, that is, as a problem of citizenship, and the fake citizen. But scaling up is tied to piggybacking this on the census, which is a residency measure.

So the question of who is the subject of this new national biometrics, is it a citizen, or is it any resident of the country, is caught in a certain contradiction built into the question of scale.

For Aadhaar, the question was always a resident, in part because there was no legal authority to this giant edifice, but also because it was trying to imagine a subject that had no name, that had no history, that had no biography, was purely measured by its biometrics. And the engineers said, we want to know nothing about you. And this became central.

But at the same time, access to any life-giving good, what people tend to understand as rights as citizens, was crucially tied to possessing the card. So it became an inexorable demand on citizenship.

Now, very, very briefly, what the engineers offered was a sense that this knew nothing about you. We can only tell any given entity that you are you. So, take any given distributor of, say, hot lunch programs for children, or college scholarships, or cattle fodder. You will come to that entity, you will proffer your fingerprint or eye scan, and all that will be returned is a yes or a no. And that is all that we offer.

Now, that turns out not to be the case in practice. So critiques of it emerged, and they are of basically two kinds. One focuses upon the extent to which this program succeeds– and it’s a mixed bag. And the concern there is some kind of Big Brother: privacy concerns regarding the state’s knowledge. I’ll come back to that if I have time. But the more immediate concerns– and these all get highly publicized by a shifting economy of media news– are focused upon the problem of fingerprints.

They erode. They erode with age. They erode with certain illnesses. They erode with environmental exposures. And then there is a range of privacy concerns. So, take one famous example: persons getting funding for AIDS medications. There were several reports, with Aadhaar, in the 2010s, of diminished numbers of people applying to medication programs, because they were afraid of the social stigma of this being known– because of rumors of Aadhaar leaking data, as a privacy interest.

So there were real hacks– which do occur, despite the supposedly failsafe nature of the system– and these were heavily publicized. But there is also this fear of imagined attacks. I won’t go into the litigation debates, because of time. They focused upon the fact that legal authority was only established in 2016. They focused upon the fact that there was no right to privacy, and the Supreme Court had to guarantee this, which it did.

And it focused upon the complex question of Aadhaar not being necessary legally, and yet being necessary practically. It’s after 2014 that things change. If there were two very different visions of national security, what happens– very briefly, and I can talk about this– is that they collapse together. And they collapse together in very interesting ways.

And we can discuss this through the shift in the law. Finally, this was given legal authority in 2016. And in part, like the NPR, the government is now enabled to attach more data fields– but also, there is now a legal right for any entity that’s approved by the state to have access to the, quote, “demographic information,” though not to the core biometrics, which are literally your biometric scans. Those are reserved for national security interests.

But what we see here is a collapsing together of two very different figures of the political subject. And in questions, I can talk about probably the greatest concern in Indian popular discourse now, particularly on the left, which is the expansion of the former National Population Register, in a contested effort to disenfranchise very large numbers of India’s Muslims. So I will stop there. Time is up, I think. Thanks.

[REBECCA WEXLER] Thank you so much. Super interesting history. And I’m excited to talk about it in Q&A. And then for our final speaker, we have Jennifer Urban, a clinical professor of law at Berkeley School of Law, where she is Director of Policy Initiatives at the Samuelson Law Technology and Public Policy Clinic, and co-faculty director of the Berkeley Center for Law and Technology.

In March 2021, Urban was appointed by California Governor Gavin Newsom as the inaugural chair of the California Privacy Protection Agency board. Prior to joining Berkeley Law, Professor Urban founded and directed the USC Intellectual Property and Technology Law Clinic at the University of Southern California Gould School of Law. And before that, she was the Samuelson Clinic’s first fellow– the Samuelson Clinic at Berkeley Law, here down the street– and an attorney with the Venture Law Group in Silicon Valley. She has a BA in biological sciences from Cornell, and a JD from Berkeley Law.

[JENNIFER URBAN] I’m Jennifer Urban. I’m really delighted to be here from the law school, to talk with a bunch of fantastic social scientists. I’m going to make one point only, in three steps. The point is very simple, and probably simplistic, but I don’t think it’s a problem that we have solved, so I would like to talk about it: the rise in biometrics has provided a chance to think more fully, as a society, about privacy, and especially about the legal parameters around it.

It’s a new chance because of special features of biometrics, which I will talk about, and which will be familiar to many of you, whether or not you study them, just by thinking about them. They are unchangeable. They are tied to your body. They fail very, very, very badly for these reasons. And that changes the conversation about biometrics to some degree.

The steps are, just to say a little bit about the temptation of using biometrics, which is extreme, and the controversy, which both Lawrence and John talked about in different ways. I mean, I think your subjects, John, were trying really hard to beat the adversary, and couldn’t do it. But they wanted to beat the adversary. And of course, there’s been a lot of social controversy in India, about Aadhaar.

And then talking about legal options that we traditionally have in the US, and how they match or don’t match with biometrics, and how biometrics legal solutions, quote, unquote, “have been different.” And then finally, what can we learn from this?

So before I talk about anything substantive, I do need to give my disclaimer. Anything I say represents only my own views, not the views of the California Privacy Protection Agency, or its board. And I think the University of California is also starting to ask us to say that for the university as well. So it’s just me up here.

All right. So I want to start with an example of a temptation. Has anybody used ID.me when you were filing your taxes? A couple of you. Did you use the face recognition thing? You did. Did you use the face recognition, or did you do the Zoom option? Yeah.

So, two or three years ago, the IRS started requiring you, to electronically file your tax returns, to use a third-party company called ID.me, to use face recognition, supposedly for authentication– so, one-to-one matching. I use this example because it already tells you there was an uproar and a controversy about it.

And the IRS said, within a month or so, that it was going to stop using facial recognition– there was that big of a backlash from the public. And the House Oversight Committee started looking at this, and started looking at the company. The Senate looked at it as well. Senator Wyden looked at it.

And yet, a year after the outcry, the IRS was still using ID.me. Although, apparently, when they said they would stop using facial recognition, what they meant was that you could have a Zoom call with an employee, and they could verify you over Zoom, rather than having to give them the photograph.

The IRS got more attention about this last year; ID.me got more attention. Senator Wyden, a few months ago, sent a letter complaining about what he said were their deceptive statements that they did not do one-to-many matching, which is much more risky than the one-to-one authentication that they do.
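[The distinction between one-to-one authentication and one-to-many identification can be made concrete with a toy sketch. Everything below is illustrative and assumed, not ID.me’s actual system: real systems compare high-dimensional face embeddings under tuned thresholds, not two-element tuples, but the structural difference, and why one-to-many is the riskier operation, is the same.]

```python
# Toy "face templates": each enrolled person is a made-up 2-D vector.

def distance(a, b):
    # Euclidean distance between two toy templates.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

THRESHOLD = 0.5  # assumed match threshold, purely for illustration

def authenticate(probe, claimed_template):
    """One-to-one: compare the probe against a single claimed identity."""
    return distance(probe, claimed_template) < THRESHOLD

def identify(probe, database):
    """One-to-many: search the whole database for the closest match.
    Riskier by construction: every enrolled person is a candidate on
    every query, so the error surface grows with the database."""
    name, template = min(database.items(), key=lambda kv: distance(probe, kv[1]))
    return name if distance(probe, template) < THRESHOLD else None

db = {"alice": (0.1, 0.9), "bob": (0.8, 0.2)}
probe = (0.15, 0.85)  # a new capture of Alice's face

print(authenticate(probe, db["alice"]))  # True: matches the one claimed identity
print(identify(probe, db))               # 'alice': found by searching everyone
```

[The design point: `authenticate` only ever answers “are you who you claim to be,” while `identify` answers “who, of everyone enrolled, is this,” which is the operation at issue in the one-to-many controversy above.]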

In any case, my husband went to do his 1099s in January, and he still had to use ID.me with the IRS. So the IRS is still using this. It is a very beguiling technology.

Similarly, we know that facial recognition identification is enormously risky, especially for certain populations, because the technology itself is biased in terms of when it is accurate and when it is not. So we now have a number of Black men– who are less likely to be accurately identified– who have been wrongfully arrested, and sometimes stayed in jail. We know that many of these cases have been dismissed; we don’t know if all of them have.

And in any case, they have had this interaction with the state because this technology is biased and inaccurate. And yet, it is very popular. This is just a slide that shows, under the law enforcement and immigration umbrella, and so forth, the agencies, at a minimum, that use facial recognition technology and other biometrics. And the GAO has complained that they haven't fulfilled their privacy requirements perfectly with facial recognition.

These are other federal databases, just to give a sense that it’s not just facial recognition. IDENT/HART, they’ve collected a lot of different kinds of biometrics for a long time. They’ve come under fire from the Government Accountability Office as well, for this. But it is very much embedded in the government at this point.

Similarly, as you know, it’s in your phone. But it’s in a lot of places. It’s seen to be something that is very attractive. So it’s very tempting. And yet, it has inspired enough of a backlash that the IRS at least gives you another option now. And that is a little bit unusual for some of these debates. And so I want to talk a little bit about why I think that is.

So, if we were to decide to, as a society, address this legally, what would be our usual options? In the United States, our privacy law has generally been sectoral, by which I mean it is focused almost always on, until recently, a specific area. So, HIPAA focuses on health information. The Video Privacy Protection Act focuses on video rental records. And we have not traditionally had a very comprehensive law that just covers people’s personal information in a lot of situations. We do in California now. But in any case, that is the tradition.

It’s also– and this is perhaps the more important thing– individual, individualistic, starting from the 1970s, but morphed through our system, and our theory of market-based incentives and choices. For decades, the United States has operated on this idea of notice and choice. Meaning that a company will give you notice of what they’re planning to do with your personal information, and you will make a choice.

And in reality, as you all know– have you read privacy policies? Have you tried to read privacy policies? Have you tried to make a choice? At least prior to the comprehensive privacy laws that we have now in California and a few other states, the choice was, well, you go to another company. And that has meant that, at least on the private side– we can talk about government actors in Q&A, if you'd like– there have been very few overt controls on the use of, selling of, or profiting from personal information, however you would like to define that.

But biometric privacy laws are different. At least, I think they're different. I'm really interested in what Mr. Wool thinks. There are a handful of them now. This is sectoral, obviously; it's focused on biometrics. Illinois is the one I'm going to talk about specifically, because it was the first one, in 2008. And it is, I think, one of the most interesting, because it is very different from other consumer privacy laws in the United States up to that point, and even including today.

So it has a few features. One is, first of all, it’s opt-in, meaning that companies cannot take your biometric information unless you affirmatively tell them in advance. Almost all of the rubrics are opt-out, meaning that your information is taken, and then– you may have seen this. You can opt-out under California’s law, in certain ways, for certain things. You send an opt-out, and the company does have to opt you out now. And that’s pretty new.

But this biometric information law in Illinois, it’s opt-in. It also has genuine data retention time limits. So they actually have to limit how long they keep the data to the length of time where they actually need to use it, or three years. And that is very rare in American laws, the idea that you actually have to delete the data.

And very importantly, it has a private right of action, meaning that individual people can sue. And that is really very rare. California has a private right of action for certain data breaches. But for the most part, consumer privacy laws are enforced by attorneys general, and in California, also by the agency whose board I'm on.

So, private right of action. And this has turned out to be very important, because you get class actions that can enforce the law. So you have a strong law, and you have a societal mechanism to enforce the law, that looks very different from previous iterations. And you end up with case law– for example, a recent case from the Illinois Supreme Court– that says every single time they copy and pass on your biometric information, that is a violation. Those violations are $1,000 apiece if negligent, and $5,000 apiece if there's a higher standard of knowledge and fault.

And it’s resulted in settlements that are 200 and something million dollars, 600 and something million dollars against Facebook, which is something that actually could make a real difference in terms of change.

Secondly, as Julia mentioned at the top of the hour, various municipalities, counties, and at least one state, Vermont, have completely banned facial recognition. This is also very different from the way that we have generally treated privacy issues in the United States. It's usually government and/or law enforcement– so, law enforcement first, then law enforcement and government– that are subject to these facial recognition bans. They can't use facial recognition in these various jurisdictions. And it's dozens and dozens of them, if you count municipalities.

But the FTC, just recently, actually, in a settlement agreement with Rite Aid pharmacy, has banned Rite Aid pharmacy from using facial recognition for five years because they were not using it responsibly. So this is very different from the notice and choice regime, where everything goes, unless you choose another vendor.

And I think this is really interesting. Well, I’m a lawyer, so I think it’s really interesting. But I wonder why that is. And I wonder what we can learn from this? And I don’t know why it is, really, which is why I think that one of the most important things we can learn from this, is the importance of interdisciplinarity, and having collaborations between lawyers, people like John, and people like Lawrence, who can give a textured description of what people are doing, how they’re interacting with these technologies, how they’re thinking of it on a societal level, so that we can address it with legal tools in a way that is responsive to society.

But what I would like to research, to see if these are the reasons, is that all of the things that we hear about biometrics in terms of their level of risk are things that make them somewhat unusual in terms of how people respond to them with regard to privacy. And that includes both the public, and also people like the policymakers who, in Illinois in 2008, decided to pass this law, doing things they have not done in other spaces. And that is: biometrics are persistent, they are tied inextricably to our bodies, and they don't change easily. The ACLU says you can't change your face. But your face and your body are inextricably tied, for most people, to their identity. And I mean that in a more philosophical sense, not just in the sense that we usually think about it with regard to privacy law and a person's sense of autonomy.

So these are reasons why biometrics are especially risky, and why the technology fails very badly. I have one of those diseases that makes my fingerprints iffy. And TSA doesn't even know if I exist. They just can't decide, because my fingerprints don't scan very well. And so somebody else could put their fingerprints in, and then I'm in big trouble, because these systems are seen to be so effective. So, they fail badly. But biometrics are also deeply connected to us, in a way that I think data shadows are not.

I find data shadows to be as revealing, in many ways. Certainly persistent. Certainly something worthy of protection. But they're much more abstract. And I would like to see biometrics as an opportunity to think more fully about where we're going with privacy law, and with privacy policy more generally. That said, I don't think we have very long to talk about it, because it may not seem so good very soon.


I understand that facial recognition is very vulnerable to deepfakes. Voice recognition, maybe less so, but maybe it will be soon. What is the answer to this? Well, it could be, maybe we back out of the biometrics world a little bit. But often, the answer that I've been seeing, certainly from industry, is back to what the Wall Street Journal said: you just add more biometrics on.

So, add blood flow. Add heart rate. Add thought– thoughts. So that we can defeat the deepfakes by getting further into the world of biometrics, which leaves us needing a really serious societal conversation about this issue. So I look forward to the discussion. Thanks for listening to a lawyer. And I appreciate it.

[REBECCA WEXLER] Thank you so much. Would the panelists come back, please? Well, all three of those were wonderful presentations. And I thought, one question I wanted to start with– and we have lots of time for Q&A. Thank you to all three of you for being so prompt with your time, so that we really can have engagement with all of you in the room who’ve come to spend your time with us.

The first question I wanted to ask is about the accuracy of the technology, in all three of your presentations. John, you were talking about people trying to defeat the detectors. And it looked like the detectors maybe would win, but maybe they wouldn't. And what else could they detect? I wasn't quite sure.

And Lawrence, you were talking about the system having been advertised as foolproof, and yet then it turned out to leak, and there were actual hacks. And Jennifer, you were talking also about some of the errors with face recognition. I think now there are seven– maybe you were talking about three, and now there may be even some more.

But how do we know if the technology really is accurate? And more accurate than what? So there's a perennial baseline question. And with the face recognition technology in particular, eyewitness IDs are hugely problematic. And so if we have been using face recognition for arrests for a couple of years, and we have four, five, six, seven wrongful arrests, is that really so bad? So, yeah. What do you think about that, all three of you, maybe, in any order you'd like. How do we know if they're actually working or not?

[JOHN CHUANG] I can go first. At the outset, I had shared that biosignals, biosensory data, can be very precise, because you have all these sensor readings with as many digits of precision as you want. But they can also be ambiguous at the same time. And so that poses, I think, a fundamental challenge with regard to: is it really accuracy, and is it accuracy that we're after? And how accurate is accurate enough?

In many situations, I would argue that I would rather have a system that is 70% accurate than one that is 90% accurate, or worse yet, 99% accurate. If we could guarantee 100% accuracy, we would never have any failures, no false positives or false negatives. But that's a different world, and it is unlikely.

But otherwise, the more accurate we think we are, the more we may ascribe high-stakes decisions to situations where even a 1%, or a 0.1%, failure rate is going to be catastrophic, intolerable for individuals, like the ones who have been misidentified. In fact, I'm pushing back on this question. While the study that I showed you did present some numbers with actual accuracy, the intention there was not to trumpet what those numbers are. Because we did not invent any new machine learning algorithms. We just took the simplest, vanilla-flavored versions that we could find, and applied them to the data that we'd collected.

You can easily imagine that a much more well-resourced entity, like Facebook, which, obviously, sells its own VR systems, or other big tech companies, have access to much more resources and much more sophisticated algorithms. And therefore, I think the numbers I shared, that we achieved, are really only low estimates of what a company like Facebook will be able to achieve.

But unless and until they get to 100%, I think it’s going to be a problem. And I would much rather that a big tech company can only achieve a 70% accuracy than a 99% accuracy. Because with a high accuracy, they may think that, OK, good, we are actually very effective. And therefore, we are going to make more and more decisions with higher and higher stakes, when the accuracy levels are not very good.

[REBECCA WEXLER] Unless they get to 100%, you want us to know that we’re not actually that good, so we don’t rely on it too much?

[JOHN CHUANG] Yeah. I think the same argument would apply for autonomous vehicles on the road as well.

[REBECCA WEXLER] I see. With high-stakes failures. That makes sense. Lawrence, what about you?

[LAWRENCE COHEN] Two points. One of which is that, early on in the bureaucracy I study, the Unique Identification Authority that administers Aadhaar, there was a climate, tied to the self-knowledge of the engineers, in which failure was success. That is, the more we know publicly about failure, the better. There were websites set up to encourage reporting of failures. There were white papers in which failures were publicly distributed online. And the idea was, the more we can know about failures, the better we can get, asymptotically, towards 100% success.

At some point, that culture of presumptive reportage disappears. And it disappears before 2014, under the previous administration, in part because of the economic stakes that were emergent. And it disappears increasingly as Aadhaar itself becomes more complexly intertwined with state security.

But now, there are lawsuits against critics. There is a whole range of state efforts to use the legal apparatus to prevent public accountability. So that's one story. The second would be that, a bit differently, there is a very vigorous, to some extent, public reportage of Aadhaar's failures. And this is by a media that, according to many critics, has long since been bought by the state, and has long since ceased to function as an independent, vigorous national media.

But Aadhaar's failures play somewhat differently than the US concerns around the IRS. I mean, there's arguably a different public culture about IDs and the state's presence. The dominant feeling that I've been hearing for over a decade, from users, particularly users on the economic and social margin, is that this gives proof. That I, as a marginal political subject, am unlikely, in terms of service delivery, in terms of welfare, to be recognized by the state, and I'm in a precarious condition.

And there was a sense that, even with the ways in which its mistakes have filtered into everyone's lives, Aadhaar's role as a guarantor of a certain kind of minimal condition of biological citizenship, say, was very powerful. And that remains powerful, despite the fact that people are very aware of its failures.

I would also say that the organized academic left is very sensitive to these failures, as are the mass media. They should be, because the exclusions based on fingerprints are extraordinary and devastating. But it is not only because of the public sense of proof: Aadhaar does, complexly, deliver in some ways.

It does produce greater access to certain goods than prior modes. It's a bit like your discussion of two modes of witness– the machine versus eyewitness accounts. For many people, the machine is a better alternative than one's neighbors, because of one's marginal social status. So people's response to the very vivid public knowledge of its failures is not simple.

[REBECCA WEXLER] That’s super interesting. So I’m just going to make sure that I’ve got it. And I think you’re saying that, actually, there’s a benefit to people to overclaiming accuracy at the top, and at the bottom. For the national security state, there’s a benefit because we want to conceal the flaws, and make people trust and believe. For the marginal subject, there’s a benefit because it offers this participation as a citizen, that wasn’t there at all, and may be better than the baseline of the neighbor.

[LAWRENCE COHEN] At times. And the last thing I'll just say, to take one example that I've written a lot about: transgender rights organizations in different cities have taken very different approaches. On the one hand, given histories of policing, one's greater legibility to the state is seen as devastating. On the other hand, the sense of being radically outside of the distribution of basic rights is also very powerful. So there are sharp, sharp divides, just to take one sector, within trans communities over whether national biometrics is a good or a devastating thing.

[REBECCA WEXLER] Jennifer, what do you think about accuracy and baselines?

[JENNIFER URBAN] Accuracy, with all of these technologies, and generally, I think, with surveillance and tracking, is one of the core components of why they're beguiling. They appear accurate. And it is very difficult for policymakers, and generally, it seems, for all of us, to get our heads around the problems with 99% accuracy.

Mr. Williams knows the problems very well. I mean, I don't actually think those systems are 99% accurate when it comes to a Black man; it's going to be 90-something. And that sounds really good to policymakers. It sounds really good in a sound bite on the news. But everybody in this room can do the math. When you have 300 million people, and you have an accuracy rate of 98.5%, that means you have an inaccuracy rate of 1.5%. And that is many millions of people.
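The arithmetic in that example is easy to check directly. This small sketch uses the illustrative round numbers from the discussion (300 million people, 98.5% accuracy), which are not benchmark results for any particular system.

```python
# Sanity check of the error-rate arithmetic above: an accuracy rate that
# sounds high in a sound bite still misclassifies millions of people at
# population scale. Figures are the illustrative ones from the discussion.
population = 300_000_000
accuracy = 0.985              # "98.5% accurate"

error_rate = 1 - accuracy     # 1.5%
misclassified = population * error_rate

print(f"Error rate: {error_rate:.1%}")           # Error rate: 1.5%
print(f"People affected: {misclassified:,.0f}")  # People affected: 4,500,000
```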

And this came up well before we were talking about biometrics in this way. Well, many times, I'm sure, but certainly after 9/11, when there were all of these initiatives to unleash things with names like Total Information Awareness. We were going to collect lots of information about lots of people, and we were going to have accuracy in terms of how we could predict terrorist attacks.

And not being able to predict a terrorist attack is a very high-risk failure, of course, on the other side. But the problem is that what appears to be accurate is not necessarily going to end up with the positive result, while, in the meantime, there was a dragnet that pulled many, many Muslim Americans into it in the name of accurately predicting terrorist attacks.

The second thing– and the eyewitness thing is something that I struggle with. And Aadhaar is actually one of the systems that I've always found the most attractive, for that very reason: that if people are in rural villages, and they are not legible to the state, and they have not been able to obtain benefits, and this gives them that legibility, and it gives them the ability to operate as a citizen, that's really attractive.

And I’m sorry, this is a little bit of an aside. But I find it so fascinating. I talked to some of the folks in India around the time that they were developing it. And I just thought it was so interesting, and I find it interesting now, that the Infosys engineers had a privacy mindset. They didn’t want to know about you. They just wanted to authenticate you as an Indian.

Anyway. That’s a bit of an aside. But those are very– and I don’t think that we know, always, what the trade-offs are, how they add up in the end. But I don’t know that we’re having the right conversation about the trade-offs.

But the thing that I wanted to say, which is a little more out there, I suppose– certainly out there for some of these discussions and debates– is that I don't know that we want 100% accuracy.

I’m not sure what values we lose. I know that we can’t interrogate machines. And that’s a practical problem. We can’t interrogate them well. That’s a practical problem. We can’t necessarily interrogate people very well, but we’ve been doing it for thousands of years, at least we know something about it. But there’s also this question of, if you have 100% accuracy at one moment in time, about one characteristic of a person, or a handful of characteristics about a person, what does that mean for that person’s ability to maneuver through their life and make different choices, and become a different person?

I mean, I mentioned bodies aren't changeable; they are changeable, to some degree. Your example, Lawrence, of the transgender community in India: those would be people, some of whom, I'm sure, have worked to adjust their bodies to their gender identity. There are all kinds of examples of this. And you change through time, naturally, as you age. And facial recognition actually gets less accurate as you get older.

I’m not sure we want, necessarily, to freeze people in amber in that way. And I know that we haven’t had a full discussion about that, and really interrogated it, what it would mean, what the technology would actually do. Am I right that it would freeze you in amber? I don’t know. And what that would mean, and how we want to approach it.

[REBECCA WEXLER] Well, I want to open up for all of you in the room. Are there thoughts? Or we could keep chatting. Go ahead.

[AUDIENCE MEMBER] Just to pick up on what you were saying, Jennifer, which is that what’s so frightening about biometrics, and the way they turn you into a machine-readable body, is that you are in a world of absolute capture by the state. And I really appreciated what you were saying about how it’s not a post-racial technology at all.

And Lawrence, what you were actually saying about biometrics really having this whole racialized, colonial inheritance. And so I wondered if the three of you could actually take this conversation to the border, where these issues are being spectacularly played out. And whether, I don’t know, John, if you think that, for example, the border, the smart border is so complicated, both in terms of its capture, and also its failures.

I consistently have problems with biometrics at the border because I have a brown face, right? And also, what kinds of tactics of opacity, fugitivity– I mean, I work on migrants who are trying to cross, because they can’t cross legally. Does fingerprint mutilation work? Are there ways of, I don’t know, setting up your face so that you can evade facial recognition, and evade the reduction of the self to a data point?

[JENNIFER URBAN] Yeah. So, I was just looking to see– well, the slides are down. But one of the things that I wanted to note about that slide that had IDENT– which is someday, apparently, going to be HART– and CODIS, and NGI: those are the government databases. They are heavily used by immigration authorities.

That is one of the sets of authorities who use them most frequently. So facial images are used for immigration. Fingerprints are used, of course, for immigration. That’s probably more obvious. But the advanced fingerprint technology, iris scans. If you come into the country, we’ve used iris scans and face for quite a long time. They’re very popular with that sector of quasi law enforcement.

And they are very contested. I’m sure you’ve worked and talked with immigrants rights groups who have been working on this issue for a while. They’re very contested, but they fall into that category that issues I work on often do, which is, the tech issues are important. They’re going to make it easier to track people. But the fundamental problem that everybody’s trying to address is that people are being tracked, and they’re not being treated with dignity, and they’re being detained.

And those are such fundamental problems to address that the fact that the technology over here might make it easier in the future, or is making it easier now, to do those things, has been a little bit harder to address. But I absolutely think that it’s fundamentally important. And to your question about whether people will start trying to obfuscate or change their face, some theorists think so.

Joy Buolamwini, who worked with Timnit Gabriele on the– sorry. I said Timnit's last name wrong. Gebru?


Yes. Thank you. Sorry. I thought that didn't sound right. They worked together on a lot of the social issues with AI and biometrics, for example. She thinks that– it was her article that I put up at the end– she thinks that, in a few years, we are going to see the rise of the faceless: people who choose to try to obfuscate their identity unless it is with someone whom they trust. And that it's possible that, in the future, when you experience somebody with their unaltered face, that will be a profound act of intimacy.

[AUDIENCE MEMBER] I was wondering about the Illinois opt-in privacy law. How does that work on a national scale, or even an international scale, since so many of these companies are based outside of Illinois?

[JENNIFER URBAN] Yeah. Well, the law is confined to Illinois. And one of the things that’s interesting about California’s law, for example, is that a lot of the companies, the massive companies who are main actors here, are based in California. They’re not based in Illinois. But they still have to answer for any violations of the law in Illinois, or against Illinois residents.

And so the lawsuits that I mentioned, for example, they were all filed within Illinois, for the most part, not entirely. They’ve gone through the Illinois courts to the Illinois Supreme Court. And the law is confined there. But the cost has been substantial. And so one of the things that can happen– and Illinois isn’t the biggest jurisdiction in the world. Europe is famous for this. The European Union is such a big jurisdiction. Some people say California can have this effect, where a big jurisdiction like that has laws.

Privacy is a prime example of that. They have strong privacy laws, with which international corporations will comply more broadly than just in Europe, or just in California, because it's more efficient for them to do so. Illinois is not that big. But what they've gotten is a lot of attention, and a lot of bang for their buck, in terms of the conversation among lawyers for these companies, and how they counsel their companies, as far as I can tell.

They talk about it all the time at my privacy lawyer conferences. Illinois lawsuits are being tracked very, very carefully. Now, that isn’t to say that it’s going to have a legal effect outside of Illinois. You know it won’t. And it doesn’t have the same economic effect as a massive actor, like Europe. But it’s had a soft effect, I guess, I would say.

[REBECCA WEXLER] Lawrence, is there any talk in India about the Illinois Biometric Information Privacy Act?

[LAWRENCE COHEN] No. But I wanted just to say something more about the data privacy discussion. I mean, California is important for the government of India, and for various civil society groups on the right, which try to influence US textbook discussions of what Indian history is, and what is proper to it, because California and Texas textbooks hold much of the market for national high school and middle school textbooks.

So there’s a lot of activism by many groups in California, parent groups, towards trying to develop very particular models. And many of us are involved in contesting these. But it’s so that the state level matters globally, as you suggest.

I just wanted to say, on the question of the US border: everything you said. But just two quick examples, one of which is that I'm thinking– well, for a lot of reasons, of course– about Gaza. And what I'm thinking about, and this is not a novel thought by any means, is that there is an assemblage of technologies, some of which are very adept at pinpointing, and which are effective both in their effective and ineffective usage. And there are a lot of other technologies which are, for lack of a better word, vulgar. They are designed not to pinpoint. They're designed to get everyone in the– et cetera.

And it’s the difference. That’s just one example. But there’s 10,000 less effectively urgent. But which offers some kind of complex logic of assemblage of vulgar and specific metrical technologies. And it’s the mixture at the border, in terms of, is it the racializing eye of the TSA agent, et cetera, is it or the custom, et cetera. Is it the machine? What combination matters? What are the structures of alibis that emerge, et cetera, et cetera. How has the undecidability of the combination worked?

So this is where your question leads me: to the mix, and not to any one solution. And my worry is that the mix will always be there. And we will tinker with very important questions around civil liberties, but at the border, this will be operationalized in any number of possible ways to produce restrictions.

I will just finally say that, in the case of India, the border that's both mattered and not mattered is not the border with Pakistan, which has, of course, been central to the story I tell, but the border with Bangladesh. Because the phantom Bangladeshi illegal migrant has been central to efforts by the BJP, the right-wing party, when it was not in power, to disallow Aadhaar. The concern was that the residency measure would legitimate so-called illegal migrants.

Now, a body of law emerges in Assam, which was part, of course, of pre-colonial Bengal, as part of its complex and multiple division. So lots of people who have been migrating economically for centuries are increasingly being captured as potentially illegal migrants by a suite of laws that were tied to local debates between different groups in Assam, but which have, in the last 15 years, under this government, led to national-level efforts to disenfranchise Muslims in general. At least, this is contested.

So that border was not effective, despite its powerful fantasmatic quality, in delimiting Aadhaar. But it has become very effective because of the forms of local, state-level laws that emerged to satisfy constituencies very anxious about Muslim-speaking Bengalis– sorry, Bengali-speaking Muslims. It's become widely powerful in the CAA, for example.

[REBECCA WEXLER] John, I know you want to say something. And I think we have 3 minutes left. So why don’t you go ahead.

[JOHN CHUANG] A couple of examples in the context of the border. First, the Singapore government has embarked on a program to turn their airport into one where you only need to show your passport once, when you first arrive, and then you never need to bring out your passport or ID ever again before you get to the plane.

So that implies a whole assemblage of sensors that is going to be deployed. It's marketed as a matter of convenience for travelers, but you can also imagine that there are other motivations, security implications. In the years immediately following 9/11, there were, in fact, a lot of governments that were interested in airport security, installing various types of sensors to not simply identify the individuals in a public space, but, in fact, to read their behavior.

When you go through customs, how much are you fidgeting when you're standing there, face-to-face with the customs officer? That was seen as a possible signal that could be useful for anti-terrorism purposes. So I think there are a lot of possible paths that we can go down in the name of, perhaps, public safety, in the context of public spaces. But you can also apply that to private interests.

The latest gadget is the wearable glasses from Apple, which have built-in eye-tracking capabilities. And people are going to be in public spaces. And how are we going to respond? Are we going to try to obfuscate, to change how we focus our eye gaze, because now we recognize that we're being watched, either by TSA or by a private company? So I think there are a lot of things that we see in the context of the border that are going to begin to seep into non-border public spaces.

[REBECCA WEXLER] I think that's a wonderful point. Sorry. We have one minute left.

[JENNIFER URBAN] OK. I apologize. I was listening to Lawrence, in response to your question. And it just reminded me. This is not an original thought. But we left out the technology of the law a little bit. The other thing, of course, that immigrants' rights groups are always contending with is, what is the result of an identification or a tracking?

And that is very dependent on the external structure of the law, which, of course, has become, from their perspective, I think, an emergency over the last 10 years or so, where there’s so much less discretion for immigration judges, there’s more and more limits on asylum. You can put all of these different things into the superstructure of the law, which changes the stakes of the biometric technology.

[REBECCA WEXLER] With that, I want to thank our panelists. If people want to continue talking, please do. But I want to thank Julia, also, for organizing this. It was a wonderful event. And thank you for bringing us together today.


