California Spotlight

Kate Pennington on Gentrification and Displacement in San Francisco

What impact does new housing have on rents, displacement, and gentrification in the surrounding neighborhood? Read our interview with economist Kate Pennington about her article, “Does Building New Housing Cause Displacement?: The Supply and Demand Effects of Construction in San Francisco.”

Kate Pennington

Kate Pennington is a research economist at the Center for Economic Studies, a division of the U.S. Census Bureau; she earned her PhD from UC Berkeley’s Department of Agricultural and Resource Economics in 2021. Her current work focuses on diverse questions related to inequality and urban issues.

Among her current projects, she is collaborating with PhD candidate Eleanor Wiseman to investigate how the water crisis in Flint, Michigan shaped political participation and voting behavior among residents, and she studies how economic shocks like the Great Recession affect access to reproductive healthcare. Her research has been funded by the Institute for Research on Labor and Employment, the Upjohn Institute for Employment Research, the Institute for Women’s Policy Research, and the National Science Foundation.

Matrix content curator Julia Sizek interviewed Pennington about her recent research on housing and displacement in San Francisco, which won the 2021 Urban Economics Association Prize for best student paper. Her paper, “Does Building New Housing Cause Displacement?: The Supply and Demand Effects of Construction in San Francisco,” explores the impact of new housing construction on rents, displacement, and gentrification in the surrounding neighborhood.  Her work disentangles the supply and demand effects of new construction and compares the different impacts of market rate and affordable housing. (Please note that questions and responses have been lightly edited.)

 

This graph illustrates the rising rent prices in San Francisco from 2003 to 2017. In 2003, one-bedroom apartments on Craigslist rented for approximately $1,300/month; by 2017, the figure was closer to $2,500/month.
Average Monthly Rent for a 1BR on Craigslist, 2003-2017. Courtesy Kate Pennington.

As we can see in this graph, rents have been increasing steadily in San Francisco, and have been climbing dramatically since 2010. Gentrification has long been a hot topic in San Francisco and the Bay Area, especially as the tech sector has brought new — and wealthy — residents to the peninsula. Given all the data that has been collected on this subject, what did you find to be missing? How did you decide to study this topic?   

This is an issue that people care about deeply, but there’s a lot of disagreement on how cities should respond to rising housing prices and demographic change. The construction of new, market rate housing — housing that isn’t restricted to low-income residents — is really controversial because of fears that it may actually accelerate neighborhood change. To me, this is an open empirical question. What is the impact of new market rate buildings on the surrounding people and neighborhoods? I wanted to try to answer this question to help move the discussion forward toward a solution.

The question is difficult to answer because it’s hard to tease apart causation and correlation. Because of the tech boom, higher-income people are moving to the Bay Area, and that’s driving up rents and displacement. Developers want to make money, so they like to build in places where prices are already rising. That means that new market rate housing is positively correlated with rising rents and displacement, but it doesn’t mean that the new buildings are causing the neighborhood change.

The challenge here was to find a natural experiment in housing construction that could help me identify the causal impact of construction on rents and demographic change.

In your paper, you combine data on new construction with data on structural fires and Craigslist rents. Why did you end up using these forms of data to track changing housing conditions in San Francisco? 

The ideal way to determine whether new housing causes changes in rents and displacement would be to do an experiment where we drop down new buildings at random throughout the city and then compare what happens nearby to what happens farther away. Obviously this is impossible for many reasons, so the challenge is to come up with something that mimics that ideal experiment.

I use serious building fires as a source of experimental variation in where new construction happens. San Francisco is famously hard to build in; it’s heavily regulated and it can’t sprawl because it’s surrounded by water on three sides. For the most part, if you want to build something new, you have to tear down something old. Serious building fires make it much cheaper for developers to build on a burned parcel. I use these fires to figure out which construction projects were “exogenously” located, that is, located due to the random occurrence of a fire. This mimics the ideal experiment of random locations for new construction. The maps below show where these construction projects were built in 2015 and 2016.

I use Craigslist rents for two reasons. First, the City of San Francisco doesn’t track rental prices.  I had to figure out how to get access to rental price data at a small spatial scale, without the ability to pay for a big dataset like Zillow. Scraping archived Craigslist posts was free and let me get the specific information I needed. Second, since the housing market is really segmented, Craigslist rents probably do a better job of capturing the rents faced by a lower-income resident who’s actually at risk of displacement. Higher-income people might use Zillow or Redfin to find a rental, and those prices tend to be a couple hundred dollars higher than the average Craigslist rent in any given month.

 

This map shows the locations of new construction in San Francisco in 2015.
Tracing the influence of new housing in San Francisco, 2015-2016. Courtesy Kate Pennington.
This map shows the locations of new construction in San Francisco in 2016.
Tracing the influence of new housing in San Francisco, 2015-2016. Courtesy Kate Pennington.

 

These maps depict new construction in San Francisco, broken down by type. Help us understand these two maps, and how endogenous and exogenous construction are important distinctions for both policymakers and economists.

 

These maps show the 600-meter radius around new construction projects in 2015 and 2016.  They help visualize who might be affected by each new project. The randomly located (exogenous) projects are shown in pink and orange. These are the projects whose impacts I study in the paper. The projects shown in blue and green are not experimentally located; those locations may be “endogenously” driven by developers’ desire to build where prices are already high.

Why did you decide to focus on the local scale of housing, and what does this help to show us? 

Focusing on the local scale is important for two reasons: it helps identify a causal relationship, and it directly answers the question of how new housing impacts people living nearby, which is at the center of the policy debate.

 

This diagram shows the impact of new housing construction on rents at a local scale.
Measuring Exposure to New Construction Projects. Courtesy Kate Pennington.

In this figure, we can start to see how you approach this topic through measuring the impacts of new housing spatially. Help us understand this image. How do the effects of new housing differ based on distance?

For each person in my sample, I count up the number of randomly located new projects and new units completed within different distance bins for each year of my study, from 2003-2017.  This figure shows how. The circle shows the 600m radius around a fictional person’s house.  The yellow dot shows a project built within 200m, so this person would have a value of 1 for the number of projects within 0-200m.  They’d have a value of 0 for projects within 200-400m, and 1 for projects within 400-600m (the red dot).  Similarly, they would have a value of 6 for net units within 200m, 0 within 200-400m, and 200 within 400-600m.
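To make the bookkeeping concrete, here is a minimal sketch (in Python, with hypothetical data structures — not Pennington’s actual code) of the exposure measure described above: for each renter and year, count the exogenous projects and net new units completed within each distance bin.

```python
# Minimal sketch of the distance-bin exposure measure described above.
# Assumptions: coordinates are in a projected system measured in meters, and
# exposure is counted cumulatively over projects completed on or before `year`
# (whether exposure is cumulative or year-by-year is a modeling choice).
from dataclasses import dataclass
from math import hypot

@dataclass
class Project:
    x: float        # easting, meters
    y: float        # northing, meters
    year: int       # year of completion
    net_units: int  # net new units added

BINS = [(0, 200), (200, 400), (400, 600)]  # distance bins in meters

def exposure(resident_xy, projects, year):
    """Count projects and net units, by distance bin, completed by `year`."""
    counts = {b: 0 for b in BINS}
    units = {b: 0 for b in BINS}
    rx, ry = resident_xy
    for p in projects:
        if p.year > year:              # skip projects not yet completed
            continue
        d = hypot(p.x - rx, p.y - ry)  # straight-line distance in meters
        for lo, hi in BINS:
            if lo <= d < hi:
                counts[(lo, hi)] += 1
                units[(lo, hi)] += p.net_units
                break
    return counts, units

# Example mirroring the figure: a 6-unit project about 150m away and a
# 200-unit project about 500m away from the (fictional) resident.
projects = [Project(150, 0, 2012, 6), Project(0, 500, 2014, 200)]
print(exposure((0, 0), projects, year=2015))
# -> one project and 6 units within 0-200m; one project and 200 units within 400-600m
```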

 

This graph shows how rents of 1BR apartments shift after new construction. The X axis measures distance from new construction; the Y axis shows changes in rent.
How do new projects affect 1BR rents and the probability of increased rent? These graphs show Pennington’s results to answer these questions. Courtesy Kate Pennington.

 

This image shows the likelihood of an adverse move for a current resident. X axis shows distance from new construction; Y axis shows likelihood of adverse move.
How do new projects affect 1BR rents and the probability of an adverse move? These graphs show Pennington’s results to answer these questions. Courtesy Kate Pennington.

This pair of figures tracks the impact of new construction on rents and displacement. Can you walk us through these charts, and what you were able to measure? 

These figures show the main results. The first panel shows the average relationship between rents and distance from the new market rate construction project in the four years after completion. Rents are roughly $40 lower for people living close to the new building. This effect decays with distance, fading out to zero within two kilometers.

The second panel shows the impact on one measure of displacement: the probability that a renter moves to a lower-income zip code. The risk of moving to a lower-income zip code falls by about 20% for people living close by, and again fades to zero with distance. The renters who live closest to the new projects benefit from the largest differential rent reductions and the largest fall in the risk of displacement. Displacement refers to push migration, when individual people are pushed to leave their current housing. Gentrification refers to the replacement of lower-income incumbents with higher-income newcomers. Displacement happens to people; gentrification happens to places.

To measure gentrification, the ideal would be to count the net change in the number of richer people at a given address. Since I don’t have individual income data, I use median zip code income as a proxy. I count the net number of people arriving at a given address who came from a richer sending zip code. Panel A shows that the probability of a net increase in richer arrivers — my proxy for gentrification — increases by 2.5 percentage points close to new market rate construction, again fading out with distance. In contrast, panel B shows that new affordable housing doesn’t attract an increase in gentrification.
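As an illustration of this proxy, here is a small, hypothetical sketch of counting “richer arrivers” at an address by comparing the median income of each mover’s sending zip code to that of the destination zip code. The field names, numbers, and exact netting rule are illustrative assumptions, not the paper’s code.

```python
# Hypothetical sketch of the gentrification proxy: classify each mover by whether
# the relevant zip code has a higher median income than the address's own zip code,
# then net richer arrivals against richer departures. All values are made up.
ZIP_MEDIAN_INCOME = {"94110": 85_000, "94115": 130_000, "94124": 60_000}

def net_richer_arrivals(address_zip, arrival_zips, departure_zips):
    """Arrivals from richer zips minus departures to richer zips, one address-year."""
    base = ZIP_MEDIAN_INCOME[address_zip]
    richer_in = sum(1 for z in arrival_zips if ZIP_MEDIAN_INCOME[z] > base)
    richer_out = sum(1 for z in departure_zips if ZIP_MEDIAN_INCOME[z] > base)
    return richer_in - richer_out

# Example: an address in 94110 with two arrivals from a richer zip, one arrival
# from a poorer zip, and one departure to a richer zip. The gentrification
# indicator is 1 if the net change is positive.
net = net_richer_arrivals("94110",
                          arrival_zips=["94115", "94115", "94124"],
                          departure_zips=["94115"])
print(net, int(net > 0))  # -> 1 1
```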

This graph shows how rents change based on distance from exogenous market rate construction.
Impact of new projects on gentrification by construction type. This figure shows exogenous market rate housing, or housing constructed after large structural fires for market rate renters. Courtesy Kate Pennington.

 

This graph shows how rents change based on distance from exogenous affordable housing construction.
Impact of new projects on gentrification by construction type. This figure shows the effects of exogenous affordable housing construction, or affordable housing constructed after structural fires. Courtesy Kate Pennington.

 

This final set of figures helps us consider the differences between different kinds of new housing, comparing specifically the differences between affordable and market-rate housing. What is the conventional wisdom on the differences between market rate and affordable housing, and what do these charts and your research more generally suggest for policymakers interested in housing affordability?

One idea that circulates in policy discussions is that market rate housing might cause local price increases and displacement, but affordable housing won’t. Instead, I find that market rate housing differentially decreases nearby rents and displacement risk, while affordable housing has no spillover effects on the surrounding people and neighborhoods. This suggests that affordable and market rate housing are complementary policy levers. Market rate housing can help many people who live nearby, but its price impacts will become less and less effective if the city continues to gentrify and the nearby residents are less sensitive to small changes in rent. On the other hand, affordable housing only prevents displacement for the people living in it, but it does better at targeting people who are really at risk of displacement, and it can preserve long-term income diversity. Both help — and neither hurts.

 

Methods

Metaketa: A Collaborative Model for Social Science Research

Thad Dunning

Social scientists conducting field-based research often design and conduct their studies in isolation, making their findings difficult to replicate in other contexts. To address this challenge, a team of social scientists at UC Berkeley and other institutions launched an initiative called “Metaketa,” which aims to provide a structure and process for designing and coordinating studies in multiple field sites at once, leading to a more robust body of data and improved standards for transparency and verification.

Metaketa is a Basque word meaning “accumulation.” Funded via Evidence in Governance and Politics (EGAP), a research, evaluation, and learning network, the Metaketa Initiative represents a model for collaboration that “seeks to improve the accumulation of knowledge from field experiments on topics where academic researchers and policy practitioners share substantive interests,” according to the EGAP website. “The key idea of this initiative is to take a major question of policy importance for governance outcomes, identify an intervention that is tried, but not tested, and implement a cluster of coordinated research studies that can provide a reliable answer to the question.”

The model is grounded in eight principles: coordination across research teams; predefined themes and comparable interventions; comparable measures; integrated case selection; preregistration; third-party analysis; formal synthesis; and integrated publication.

Thad Dunning, Robson Professor of Political Science at UC Berkeley, helped launch the Metaketa Initiative, and he participated in the first “cluster” of studies, which focused on understanding how the dissemination of information about candidates influences voter behavior. In 2019, Dunning co-authored a research paper in Science Advances, “Voter information campaigns and political accountability: Cumulative findings from a preregistered meta-analysis of coordinated trials,” that summarizes the importance of the Metaketa model: “Limited replication, measurement heterogeneity, and publication biases may undermine the reliability of published research,” Dunning and his co-authors wrote. “We implemented a new approach to cumulative learning, coordinating the design of seven randomized controlled trials to be fielded in six countries by independent research teams. Uncommon for multisite trials in the social sciences, we jointly preregistered a meta-analysis of results in advance of seeing the data.”

Cover of "Information, Accountability, and Cumulative Learning"Dunning and the co-authors — including UC Berkeley political scientist Susan Hyde — also published a book, Information, Accountability, and Cumulative Learning: Lessons from Metaketa I, that shares lessons learned from the project. This collaborative Metaketa model has since been used for studies on taxation, natural resource governance, community policing, and women’s action committees and local services.

We interviewed Dunning about how the Metaketa Initiative evolved, as well as what his own study suggested about how information influences voters’ choices. (Note that questions and responses have been edited for clarity and content.)

What was the focus of the study that you undertook through the Metaketa Initiative?

The initial thrust of the study was focused on the connection between information provision and political accountability. It almost seems like a truism that information has to matter for politics, and yet we don’t actually know that much about how providing voters with certain kinds of information actually affects political behavior. We haven’t built a coherent body of evidence around that.

The second part was methodological, and had to do with how we build cumulative knowledge. There’s been a big movement in the social sciences toward experimentation as a way of building causal knowledge. That has some advantages, but it’s also really limited. Our project stepped away from a single study and said, here are some aggregate conclusions that might hold across settings. It was really trying to tackle the problem of external validity in experiments: if I find something in a particular study, does it generalize to other contexts?

How did the Metaketa model evolve into a formal initiative?

The methodological goal of the Metaketa was to try to design a study through collaboration across teams that would build in meta-analysis-ready data. That was a major objective. There were also objectives around the reporting of results, including more pre-specification and transparency in the analysis, and working toward this larger model of open science. All of those aspects were important in the project.

There’s a lot of value in academia in planting the flag and being the first one to do something, but maybe we’re a little bit too willing to move on. The idea is not to prioritize new models or innovation, but to prioritize replication, and that was a big part of the model. There have traditionally been a lot of problems in trying to generalize across studies. Often the studies themselves are not comparable. They have different kinds of interventions and outcome measures.

In many ways, we were crossing the river by feeling for stones. We’d been having discussions around this kind of model for quite some time. Part of the project was getting some initial grant funding to support the concept, and then launching this first substantive part of it, focused on political accountability and information provision. It has been an interesting initiative to be involved in. There have now been four or five Metaketas on different substantive topics. It’s a model that’s been funded by different sources; an anonymous donor provided our startup funding, and more recently, the British government and USAID and others have been involved in funding these larger Metaketas.

How did you ensure that data from one study would align with that of the others?

A big part of this is just trying to harmonize the interventions: what kind of information is going to be provided? How do we conceptualize information in relation to political performance? What do voters think before information is provided? How does the information differ from what they already believed? We wanted to try to standardize this across projects, and measure in a symmetric way what outcomes we care about: first, whether voters vote and how they vote, but then also secondary outcomes. And we want to be able to do that consistently across studies. That way we can assess the average effect across the seven study sites, as well as variation across the sites. And we can look at that in a way that makes sense. A lot of that was harmonized at the design stage ex ante through a series of workshops across project teams. Then, sharing public data ex post allowed us to do a meta-analysis of data from the seven studies.
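For readers curious about the mechanics of pooling harmonized estimates, the sketch below shows one simple, generic way to combine site-level effects using inverse-variance weights. It is illustrative only — the preregistered Metaketa I meta-analysis is considerably more elaborate than this — and all numbers are made up.

```python
# Illustrative sketch: inverse-variance (fixed-effect) pooling of site-level
# treatment-effect estimates. Not the Metaketa I analysis; data are hypothetical.
from math import sqrt

def pool(estimates, std_errors):
    """Inverse-variance weighted average of site-level effect estimates."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical effects of information provision on incumbent vote share
# (percentage points) from seven study sites, with standard errors.
site_effects = [0.4, -0.2, 0.1, 0.0, -0.3, 0.5, -0.1]
site_ses     = [0.6,  0.5, 0.4, 0.3,  0.5, 0.7,  0.4]
print(pool(site_effects, site_ses))  # pooled effect near zero, smaller SE
```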

What did your study suggest about the role of information on voter behavior?

Graph showing research findings from the study on information and voter behavior.
It seems self-evident that the information people receive would make a difference in how they vote, but our finding was a big no. Almost everywhere we looked, the provision of information made no difference in how people voted. We had taken a lot of care to try to develop designs that were well-powered enough, particularly once we aggregated the data across seven studies. We could make the claim with a fair degree of precision and certainty, in a statistical sense. That may make the answer itself more compelling and more credible, that the information provision didn’t have any effect.

On the other hand, it may seem mystifying, given the important role we think information is playing in politics. What we can say is that providing this kind of information from neutral third parties about what politicians are doing in office, including about political malfeasance or misspending of funds, didn’t shape voters’ behavior. That’s depressing, but also may be informative. If we want to transform political accountability, maybe we should be looking at other kinds of interventions.

Politicians may think voters are more responsive than they seem to be in some of these instances. But voters can be hard to move, and that might be consistent with some of what we know about partisanship broadly. The idea that information doesn’t move people away from their pre-existing beliefs is a depressing finding from a number of perspectives, although you can’t test everything, and there’s a role for more sustained, cumulative evidence-gathering in some of these areas. I would have said ex ante that this kind of information provision would have mattered much more than it did.

Our methodological message is that we need to be careful and build up piece by piece, and then after having built up carefully, try to put a body of evidence together in an area. We don’t want to overclaim and then say, well, information doesn’t matter. But we do have robust evidence from these studies that this type of information provision doesn’t shape what voters do very much on average, across a wide set of contexts. We think that’s useful, even if not the final word. But we should evaluate other kinds of claims about other sorts of information, hopefully using similarly robust kinds of evidence. That’s the methodological point we want to drive home.

Do you think the Metaketa model could be used in other contexts, for instance among researchers on a single campus like UC Berkeley?

There’s a lot of ground for more collaboration, and this is consistent with an open science model. Often we have incentives to work in small teams, to claim priority, and to embark in new directions. Trying to work collaboratively to further knowledge is also really important, and there’s a big role for it. And it doesn’t all have to entail gathering seven teams in a room and planning a study in advance.

We’ve explored ideas like having a registry of study topics, where three studies would be conducted, and then a researcher would come along and replicate that set of studies, and also tweak them, building in innovation and replication at the same time. All of these things are potentially interesting. Many of them could also be interdisciplinary in character, speaking to the Matrix mission, and could be a way to bring people together from across disciplines. These approaches can have a big upside in terms of knowledge generation.

 

 

Matrix News

Q&A with David Robinson, Visiting Scholar at Social Science Matrix

David Robinson

Social Science Matrix is honored to welcome David Robinson as a Visiting Scholar for the 2021-2022 academic year.

A distinguished researcher working at the intersection of law, policy, and technology, David studies the design and management of algorithmic decision-making, particularly in the public sector. He served as a managing director and cofounder of Upturn, a Washington, DC-based public interest organization that promotes equity and justice in the design, governance, and use of digital technology. Upturn’s research and advocacy combine technical fluency and creative policy thinking to confront patterns of inequity, especially those rooted in race and poverty.

David previously served as the inaugural associate director of Princeton University’s Center for Information Technology Policy, a joint venture between the university’s School of Engineering and its Woodrow Wilson School of Public and International Affairs. He came to Matrix from Cornell University’s AI Policy and Practice Initiative, where he was a visiting scientist. He holds a JD from Yale Law School, and bachelor’s degrees in philosophy from Princeton and Oxford, where he was a Rhodes Scholar.

We interviewed David to learn more about his research interests and the projects he will be pursuing while at UC Berkeley, including an upcoming book on the development of the algorithm used to determine recipients of kidney transplants in the United States. Please note that this interview has been edited for length and content.

Q: How did you develop your interest in the study of algorithms?

I have always been interested in the social impacts of technology. When I was a kid, I had terrible handwriting; because of a mild case of cerebral palsy, I had some fine motor impairment. When writing meant penmanship, I was a bad writer. But then, eventually, I got a word processor in school, and discovered that I loved writing, and it was a really empowering change for me. Word processors had been around for a number of years, so the key change that made the benefits possible in my life was that the rules changed. The school said, let’s get one of these computers into this setting, where it can be beneficial. Ever since then, I’ve been interested in the social impacts of new digital technologies.

I came of age during the first wave of internet optimism in the 1990s and early 2000s, and I returned to Princeton to help start the Center for Information Technology Policy, a growing, thriving organization that brought together people from different disciplinary backgrounds. Part of the idea was that, if you’re navigating the policy and the values choices that come up around new technologies, it’s a big help to have some real depth of technical expertise. My colleague from that center, Ed Felten, later became the Deputy Chief Technology Officer of the United States in the Obama administration. There was a style of work we had there that was very specific to understanding the factual pieces of new technology, and making sure that a clear shared map of the stakes of the debate would be available to all participants.

While there, I got very involved in one issue in particular: open government data, making data transparent to the public, and publishing it in a reusable format, so that, for example, if you have public records about pollution or crime or education, you can put that on a map and track it over time, and not only rely on the government’s presentation of that information.

This was an idea that really took off in the Obama administration, and they created something called Data.gov, and built a multilateral partnership called the Open Government Partnership, along with other different countries. I came together with Harlan Yu, who was a PhD student at Princeton, and we ended up starting a public interest organization, Upturn, to continue this work of informing the public debate.

In the beginning, there was an optimistic view that there was an inherent valence to the technology, that it would make things more democratic and more open and accountable. Over time, we saw that wasn’t the case. Data.gov and similar sites had great data about things like the weather or the real-time location of buses, but if you were thinking this was going to help uncover financial malfeasance or otherwise disrupt the status quo, that didn’t transpire. We published a mea culpa on this, called the “New Ambiguity of Open Government,” where we said, if you’re making the data open, that doesn’t necessarily mean that you’re making the government open. There’s a whole politics to this. It’s not inherent in the technology that things are going to get more open.

Upturn started out as a consulting firm in DC and ended up as an NGO, and we ended up working very closely with civil rights organizations, addressing inequities that are based either on race or poverty or the conjunction of the two. We evolved over time into having a much clearer political or normative mission. While at Upturn, I worked on understanding questions like, how do predictive policing systems work? If we have systems in courtrooms telling us who’s dangerous, what does that mean? What danger or risk is being measured, and what is the impact on real people and their families? Those sorts of questions became more important over time.

Three years ago, I was teaching at the law school at Georgetown, and I was focused on, how do we make algorithms accountable? We’re having software make high-stakes decisions that are impacting people’s lives. What can we do to take the moral innards of these systems and make them visible, and give people a seat at the table who are not the engineers and have them help make some of these values choices? That’s a question that is very much alive today.

What will you be working on during the coming year as a Matrix Visiting Scholar?

One of the projects I’ll be working on is a book with the working title, Voices in the Code. The idea is, I can give you lots of examples of where a system has been built and the values choices have not been made in an accountable way. In courtrooms, in the pre-trial context, where someone hasn’t been convicted of a crime, you’re balancing the liberty of a presumptively innocent person against the risk to the community that they might go out and commit more crimes or something like that. There’s no visibility and no clear understanding of how many of those choices are made in many jurisdictions. The point of these courtroom systems is to predict who’s dangerous. We wrote a paper called “Danger Ahead” that said, we predict these systems are dangerous because they’re hiding the ball on what the moral trade-offs are.

Voices in the Code is about one place where people didn’t hide the ball: in organ transplantation in the United States. If a kidney becomes available, there are 100,000 people waiting for a transplant. So if an organ is donated, it’s a non-market resource. We’re not going to give it to the highest bidder, but we do have to decide collectively, who’s going to get this vital resource and the opportunity to resume a normal life, and not rely on dialysis?

There are all kinds of logistical factors that go into that: how far away is the person? There are also medical factors, like blood type. And there are moral factors: if we wanted to maximize the total benefit from our supply of organs, then we might choose to give the organs to younger, healthier, and by-and-large richer and possibly whiter recipients, with fewer social determinants, co-morbidities, or other health problems. Of course, this is dramatically unfair. If we were to do that in a completely utility-maximizing way, the result would be that people already disadvantaged would lose the chance to get transplants. It’s also the case that older recipients would be greatly disadvantaged in that system.

But what’s interesting about transplants is there’s a very public process of figuring out what that algorithm is going to be. And when they suggested this utility-maximizing idea, the public pushed back, and they switched to something that’s a lot more moderate and smarter than what they were originally going to do. They did that because there was a public comment process, and transparency about what the algorithm was. There was auditing and there were simulations of how it would work if we rolled out different versions of that algorithm.

Those are all things that people are arguing for in other contexts, whether in child welfare, courtrooms, or in the private-sector systems for hiring. We want transparency and accountability. And there are a lot of ideas on the whiteboard. But what does it look like in practice? How can it be done? From my point of view, the transplant example is a really valuable precedent for how to do the ethics inside an algorithm in an accountable way. My book is about this example and what we can learn from it. (Watch a video of a talk that Robinson gave about this work.)

The second half of the work is a book about how algorithms change the stories we tell about who people are. It is looking at how selves are constructed, so it has more of a philosophical bent. When I was working in policy, I noticed that if you tag somebody as having a high productivity score, or a high dangerousness score, it’s not only used to make some narrow decision, but it also changes how the person is perceived by others. If we think about the quantified self movement, with all these self measurements, like a smart watch giving me health points, that’s going to change my view about how healthy I am. If we rate surgeons based on how successful their patients are after the operation, we think we’re finding out who’s a good surgeon, when it turns out, we may really be finding out in part who cherry-picks their cases and takes easy cases or something like that. The book aims to help the public develop a greater sense of confidence in taking apart what some of these scores really mean, to recover a sense of being able to construct our own identities and not ending up outsourcing that to some piece of software. [See this short essay that previews Robinson’s book on the social meaning of algorithms.]

What other lessons does the kidney transplant example teach us about fairness in algorithms?

Sometimes you’ll hear people talk about going out to get public input through some process, and the input is treated like something we’re going to mine and collect. But one of the key insights from this transplant experience is that debate creates opinions. The opinions that people come to the table with tend to change and soften. I always visualize one of those machines for polishing rocks, where you have all of these sharp edges that go in at the beginning, and they tumble around and get polished. Eventually people see where others are coming from, and they are invested in hearing each other out.

The algorithm for transplants is perpetually being revised, which is part of what a real democratic process looks like. People arrived at something they may not have loved, but that they found tolerable. There was a kind of wearing down, a gradual acquiescence into something tolerable. Especially if we look at our politics today, it’s no small feat to find something that is mutually tolerable to people with very different points of view. At some level, that’s part of our ambition for the governance of algorithms.

Based on what you’ve learned about algorithms and transparency, what do you think should be the norm in this area in five or ten years?

People sometimes say there ought to be one centralized regulatory body for algorithms, and I’m skeptical about that, because I think the contexts do differ, and context really matters. If you’re dealing with something medical, you want medical experts, and if you’re dealing with criminal law, then you want experts in the criminal legal system, as well as people and families who’ve encountered the system who can provide input into that.

But I do think there can be a shared layer that emerges, where people in one area talk to people in another and recognize that we have problems of the same shape. We’re doing data science, but we want to do it in an accountable, inclusive, and democratic way. There are places where we can learn how to do that, and we can take examples from one domain and share them with another.

So what does that mean? It means getting people involved in the design process as early as possible to frame a shared understanding of the problem. It means publishing and auditing and simulating. (This is a step I think that hasn’t gotten a lot of attention so far: how can we forecast the consequences of our alternatives?) And then, once the thing is out there, continuing to pay attention to how it’s going and seeing if it needs to be revised. That’s a set of practices that people are learning how to do in parallel, in lots of different places. So it’s about how to share ownership of the ethical choices inside high-stakes software. That’s what I’m working on, and that’s where I think a shared literacy needs to emerge.

Sometimes there’s a pattern of technical “shock and awe,” and people say, you have to be a genius or an expert to have any clue what this system is doing. And yet, at the end of the day, there’s a conference room and a whiteboard somewhere where human beings are sitting around and saying, how does this work, and what do we want to change? The doors to that room can always be opened, no matter how complicated the software is, no matter if it’s changing every second. Answering that question is a job that can be shared.

Part of the mission of Social Science Matrix is to promote cross-disciplinary research. What academic disciplines does your work touch upon?

I’ve taken a deep dive into the legal and policy documents, because one of the things about this transparent process is that there are reams of documents and reports, which are not necessarily easy to understand. I added a qualitative component that draws on sociological and anthropological methods. I conducted semi-structured qualitative interviews with participants in this public deliberation process, including physicians who led committees, and a transplant patient who argued that the original proposal was unfair. Although my original training was not in sociology, I learned a great deal from colleagues and have been able to adapt those methods.

What brought you to UC Berkeley to continue this work?

Berkeley is just an extraordinary community. There’s a public service mission that is very strong because it’s a public university, and one of the world’s great intellectual communities is at Berkeley. It’s a tremendous place. It’s a tremendous opportunity to contribute to those conversations, and to share work in progress and get feedback.

Having looked at the transplant example, part of what I’m trying to do is to make that experience available to other scholars and policymakers who are working on similar problems in other domains — maybe not in transplants, but in a courtroom or a human resources department, where they want to know, how can transparency be made to work? I really want the substance of what I’ve done to be available to people.

I’ve made an intentional choice to step away from the more immediate policy work and think longer term. It’s been a great opportunity to think big picture, but also to think concretely about how we can take insights from the academic field and apply them to the social problems we have that relate to new technologies. In order for all this toil and time to pay off, I’ve got to weave this work into the broader conversation around these issues. I am hoping Matrix and UC Berkeley will be a platform to bring these ideas into conversation with the wider world.

 

Race

A Q&A with Social Psychologist Jack Glaser on Racial Bias and Policing

Jack Glaser

Jack Glaser, Professor in the Goldman School of Public Policy, is a social psychologist whose primary research interest is in stereotyping, prejudice, and discrimination. He studies these intergroup biases at multiple levels of analysis. For example, he investigates the unconscious operation of stereotypes and prejudice using computerized reaction time methods, and he is investigating the implications of such subtle forms of bias in law enforcement. In particular, he is interested in racial profiling, especially as it relates to the psychology of stereotyping, and the self-fulfilling effects of stereotype-based discrimination.

Additionally, Professor Glaser has conducted research on a very extreme manifestation of intergroup bias — hate crime — and he has carried out analyses of historical data as well as racist rhetoric on the internet to challenge assumptions about economic predictors of intergroup violence. Professor Glaser is working with the Center for Policing Equity as one of the principal investigators on a National Science Foundation- and Google-funded project to build a National Justice Database of police stops and use of force incidents. He is the author of Suspect Race: Causes & Consequences of Racial Profiling.

Professor Glaser has been involved with past Matrix Research Teams on community trust and policing. We reached out to Professor Glaser in July 2020 for his insights on bias in policing in the wake of the protests for racial justice and police reform.

How do you describe your research, particularly as it relates to policing?

My research is centered on applying the psychological science around stereotyping and prejudice to understand racial disparities in policing, in stops and searches, and also in use of force.

I do that a number of different ways. The work I’m most associated with is research on how implicit bias gives rise to discriminatory judgments and behaviors. Some of the work I’ve done there is to measure, for example, the extent to which people hold an association between Blacks and weapons, and the extent to which that causes them to make a shooting response to an armed Black man faster than to an armed White man, or to make a no-shoot response to an unarmed White man faster than to an unarmed Black man. What I’ve been doing more recently, though, is working with police departments and with various government agencies to try to figure out what’s going on in the field, and how to reduce the racial disparities that we see time and again, across many different datasets.

Where does racial bias come from?

There’s a century’s worth of psychological science on prejudice and discrimination and stereotyping. But some of the fundamental understandings we have from careful experimental research include the fact that people are hard-wired to categorize others and themselves into racial and ethnic and other kinds of groups. We just do that very spontaneously, we start doing it at a very young age, and it’s not something we can really turn off.

We make those categorizations, and then we have a tendency to prefer the groups we belong to. It’s natural in-group favoritism that people tend to have. On top of that, people who belong to negatively stigmatized groups are less likely to like the group they belong to than the ones who are from the superordinate, high-status, high-power groups. And we also have the specific content of the stereotypes that we have about members of various groups. So we very quickly start to formulate hypotheses about how people from one group or another are going to behave. That might be along gender lines, or racial or ethnic lines, or age lines, or political affiliation lines. We make sense of our complex world by putting people into these categories, and then having predictable traits about those categories.

One of the very prominent stereotypes that’s highly pervasive in American culture is that Black people are associated with crime and weapons and violence. Police officers are not immune to that, so as a consequence, they tend to regard people of color with greater suspicion, because the stereotypes cause them to interpret ambiguous behaviors in a manner that’s consistent with their prior conceptions. In the last 30 or so years, there’s been an avalanche of research on implicit bias and how these biases operate outside of our conscious awareness, and then can be activated automatically, and influence our perceptions and our judgments and behaviors, in spite of our best intentions to behave in a fair and unbiased manner.

How is it possible to bypass or manage this kind of bias?

Training is the usual response. Unfortunately, we don’t know of any training that reduces these biases or consistently reduces the impact that they have on behavior. There is a whole cottage industry of implicit bias trainers across many industries, but especially in policing, and they’re private companies that offer training for a fee. To the extent that they’ve been studied at all, there’s no indication that they actually change performance in the field.

There is a non-trivial number of officers who are explicitly biased and deliberately and overtly engaging in racial profiling or racial oppression, but for the vast majority of officers who are at least trying to operate in an unbiased manner, they are unable to suppress and control the influence of these implicit biases. And so it’s not really realistic to expect that a day’s worth of training, or even multiple days of training, is going to change their biases, or give them the skills that enable them to short-circuit the influence of those biases. You really need chronic motivation, a specific strategy, and then the cognitive resources or the opportunity to impose that strategy to prevent those biases from influencing your judgments. The likelihood that police officers on a day-to-day basis are going to be able to mobilize all three of those dimensions to override their biases is very low.

My view is that the effort should be focused on supervisory staff — sergeants and above — who are determining the decision-making environment the officers are stepping into. They’re the ones who are setting the incentives. If they’re trying to get officers to make a lot of arrests, or find a lot of drugs or weapons, then those officers are going to go out and make a lot of indiscriminate stops and searches of people, most of which (the data show us) are going to be unfruitful.

One of the things we find in the data across many different jurisdictions is that, among the people that officers stopped and searched when looking for guns and drugs, the Whites that they search are more likely to actually be in possession of illegal contraband than the Blacks and Latinos that they search. That’s probably because they are imposing a higher threshold of suspiciousness in order to decide to search a White person in the first place. To the extent that those kinds of discretionary stops are occurring and are being imposed disproportionately on people of color, that is going to be a catalyst for the influence of the implicit or explicit biases on the treatment of minority community members. The best way to have a significant effect on reducing that disparate impact is to reduce those kinds of behaviors that give rise to discriminatory effects.

What kinds of structures can be put into place to help reduce racial bias in policing?

The psychological research on controlling the influence of bias is pretty clear. The first element you need to have in place to be able to make an unbiased judgment is having the cognitive resources, which means not being rushed or stressed or drunk or tired. Then you can make a deliberative judgment and focus on specific indicators of, in this case, suspicion or whatever it is that you’re looking for. And so that needs to be in place in the first place for the implicit bias not to influence you.

But even if you have that, it’s difficult for a normal person to look at another person, take the information that is available to them — which is never going to be complete, and will always have some ambiguity — and differentiate the subtle, implicit things that are causing them to regard that person in a certain way from the actual objective indicators. We can’t subjectively separate those things out very well. You need a specific strategy to help you try to separate those things out. That involves approaches like trying to think of that individual as another person, or relating to them by taking their perspective. Lots of different strategies have been tried, and some of them work. But none of them works for very long.

One approach would be to use some kind of checklist to say, does this person have these three characteristics that have been empirically demonstrated to be related to this sort of suspicious behavior? In the absence of that, they don’t meet the criteria for being searched. The strategic approach would be to formalize the process. But that’s very difficult to do in the real world when you’re encountering things in a fluid situation. So my view is that the incentives matter more. And generally, what you’re asking people to do is going to determine the extent to which what they’re doing is discriminatory.

Have you seen any departments implement these shifts in incentives?

I can’t say that I’ve seen rewards changed to promote accuracy, per se. But what we have seen across multiple jurisdictions is that some police departments are backing away from the incentive to make a lot of drug arrests. In New York City, the city lost a major class action lawsuit over “stop and frisk,” so there’s been a radical reduction in the number of pedestrian stops they are doing in New York. There was also a shift in political winds at the same time, but they’ve gone from almost 700,000 stops a year to under 20,000 stops a year. It’s almost unrecognizable. What we see there is that those racial disparities in the outcomes of the searches have become almost equalized in New York, while the crime rate was flat or declining. Oakland, California also reduced the number of discretionary stops (mostly vehicle stops) that their officers were making. They also saw a reduction in racial disparities for those stops, and there was no impact on crime.

It’s not entirely clear to me as an outsider how those incentives changed, but I have a rough sense that it was the removal of an encouragement to make a lot of stops in New York, and even a prohibition, like, we’re not doing those stops anymore unless you have a high degree of suspicion. In instances where we have seen that, you see not only an overall reduction in stops, but also a reduction in the disparities. And one thing that’s important to bear in mind is that, even if you didn’t see a reduction in the disparities, because the harm of being stopped without good purpose is overwhelmingly borne by communities of color, reducing that activity overall is going to differentially benefit those communities. It’s not going to equalize things, but it is going to have a benefit for those groups.

What are the research questions you’re asking now?

I have a couple of research projects currently in progress with my very impressive colleagues, one of which is with Perfecta Oxholm, a doctoral student at the Goldman School, who is doing her dissertation work with the Oakland Police Department. She’s going to be doing a multi-methodological study where she is interviewing police officers and community members to get a sense in a qualitative way of their perceptions of each other — and their perceptions of their perceptions of each other. And then we’ll be doing survey-based research that’s more structured based on those interviews, and ultimately doing an intervention, a randomized, controlled trial where some police officers engage in particular community contact activities to see how that affects attitudes on both sides.

Communities have a right to have good relations with other people, including agents of government, and to feel enfranchised and to not feel threatened by agents of government. But it’s also in the interest of the state for communities to trust law enforcement, because they’re going to be more likely to report crimes and to cooperate with investigations. It’s generally a win-win all around. Historically, it’s been clear that having an oppressive relationship between law enforcement and minority communities is not helpful.

I’m also conducting research with colleagues at UC Davis and RAND where we have developed a computerized simulation that we’re going to be rolling out with police officers, in which we have experimentally manipulated the race of the person they view on the computer monitor, and they evaluate the suspiciousness of the behaviors he’s engaged in. We have 72 different scenarios, where individuals are doing things ranging from not at all suspicious, like just sitting on a stoop, to highly suspicious behaviors, like dropping a gun behind a bush. In between are the really interesting ones, where they’re dropping some ambiguous object, or they’re picking something up from under a suspicious place. The idea is that we want to see the extent to which there are racial differences in who they regard as suspicious. The question is, would you stop and search this person? The main purpose is to establish a standardized metric for the variation in racial sensitivity that officers have to the race of the suspect, and to look at how that relates to their actual field performance, and the racial distribution of the people that they’re actually stopping in the field. We’ll be measuring lots of other things as well.

We created little animations of three still photographs that depict a process where somebody is moving through space and doing something, but it’s highly standardized. We have Black and White actors playing these parts doing exactly the same thing. We will of course mix up the order in which people see them. So it won’t be like, here’s the Black guy doing it, here’s the White guy doing it. But you know, just respond to this individual. We may give some of our research participants only the Black actors or only the White actors to do what we would call a between-subjects comparison. We’re going to do it a lot of different ways to see what we can pick up.

How might police departments be able to use that kind of standard metric?

If we find a correlation between racial bias in that measure, and the racial distribution of who they’re stopping and searching — or the outcomes of the searches they’re doing — that would give the department quite a bit of insight. Without the metric we’re developing, they could look at those racial disparities in who has been stopped and searched and throw their hands up and say, well, that might just be them responding to what’s happening on the street. But if we can show that there’s a relationship between the sort of preconceptions and the actual performance, that would be enlightening. It could also lead to training opportunities, where they use that information to say, you should be looking at the object, but the officers who do this tend to be influenced by the race of the person dropping the object or picking it up.
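To make the idea of a standardized metric concrete, here is a toy sketch with hypothetical data and a hypothetical operationalization (not the study’s actual analysis): an officer-level score defined as the gap between mean suspicion ratings for Black-actor and White-actor scenarios, which can then be correlated with a field-level stop disparity.

```python
# Toy sketch: an officer-level "racial sensitivity" score from simulated suspicion
# ratings, correlated with a hypothetical field-level stop disparity. Data and
# operationalization are illustrative assumptions only.
from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

def sensitivity_score(ratings):
    """Mean suspicion rating for Black-actor scenarios minus White-actor scenarios."""
    black = [r for race, r in ratings if race == "Black"]
    white = [r for race, r in ratings if race == "White"]
    return mean(black) - mean(white)

# Hypothetical: each officer rates the same standardized scenarios on a 1-7 scale.
officers = {
    "A": [("Black", 5), ("White", 4), ("Black", 6), ("White", 4)],
    "B": [("Black", 4), ("White", 4), ("Black", 5), ("White", 5)],
    "C": [("Black", 6), ("White", 3), ("Black", 7), ("White", 4)],
}
# Hypothetical field data: share of each officer's discretionary stops involving
# Black residents.
field_black_stop_share = {"A": 0.55, "B": 0.40, "C": 0.70}

scores = {o: sensitivity_score(r) for o, r in officers.items()}
xs = [scores[o] for o in officers]
ys = [field_black_stop_share[o] for o in officers]
print(scores)
print("correlation between simulated bias and field disparity:", correlation(xs, ys))
```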

What are common misconceptions that people have about policing and racial bias?

One thing people don’t realize is that the overwhelming majority of police-civilian encounters do not have any public safety-enhancing effect, especially discretionary encounters. Obviously, calls for service — when officers are responding to a witnessed crime or some kind of crisis — have great public safety-enhancing value. But these discretionary stops, or low-level equipment-failure vehicle stops, do not promote public safety. And only a very small minority of them result in any kind of recovery of weapons, and a slightly larger but still small fraction result in recovery of illegal contraband like drugs.

A lot of these discretionary activities that police are engaged in are not only not promoting public safety, but they are disproportionately borne by communities of color. And that has the effect of violating the Constitution — violating those people’s right to equal protection and due process — but it also destabilizes the communities and causes a lack of trust and a lack of cooperation.

In the case of something like the murder of George Floyd, it’s hard to use the usual explanations of automatic bias and the like to explain a nine-minute strangulation. But the more typical cases, where there’s a shooting and maybe even a foot pursuit, those are disproportionately Black victims when they’re unarmed. That’s quite clear in the research, although there’s another body of research that shows that if you look at all of the cases of fatal officer-involved shootings, there doesn’t appear to be a racial disparity. The problem with that analysis is that it’s really the unarmed victims that are the ones who shouldn’t be getting killed by the police, and that’s where the disparities reside. The much larger number of cases are armed victims, and they tend to be White men.

The fact that it took the George Floyd killing to bring this to the public consciousness, to the boiling point where change can actually happen, says something about the way our society is structured and the way people from a hegemonic group are unlikely to relate to the challenges of minority groups. What we don’t want to lose sight of is that the fatal killings and use of force on unarmed Black men are just the tip of the iceberg of the daily indignities that Black people suffer at the hands of police when they’re being overzealous. That’s the big mass of the iceberg under the water that most of us don’t see, but that minority communities feel the weight of very, very heavily.

Podcast

Matrix Podcast: Interview with Rebecca Herman

Rebecca Herman

 

In this podcast, Michael Watts interviews Rebecca Herman, Assistant Professor of History, UC Berkeley. Professor Herman’s research and writing examine modern Latin American history in a global context. Her first book, forthcoming from Oxford University Press, reconstructs the history of U.S. military basing in Latin America during World War II – through high diplomacy and on-the-ground examinations of race, labor, sex and law – to reveal the origins and impact of inter-American “security cooperation” on domestic and international politics in the region. She has also authored past and forthcoming articles and book chapters on the global politics of anti-racism, the Cuban literacy campaign, the Brazilian labor justice system, and U.S.-Latin American relations. She is currently working on a new book project on Antarctica, Latin America, and the World.

Prior to entering academia, she spent several years in Argentina, Chile, Bolivia and Brazil working as a freelance translator, researcher, and documentarian. Before joining the faculty at Berkeley, she was Assistant Professor of International Studies and Latin American Studies at the University of Washington, Seattle. She received her Ph.D. in History from UC Berkeley and her B.A. in Literature and History from Duke.

Produced by the University of California, Berkeley’s Social Science Matrix, the Matrix Podcast features interviews with scholars from across the UC Berkeley campus. The Matrix Podcast is hosted by Michael Watts, Emeritus “Class of 1963” Professor of Geography and Development Studies at UC Berkeley.

Listen on Apple Podcasts or Google Podcasts.

Podcast

Matrix Podcast: Interview with Brittany Birberick

Brittany Birberick

In this episode, Professor Michael Watts interviews Brittany Birberick, an anthropology PhD student at the University of California, Berkeley — and a former Matrix Dissertation Fellow. Birberick’s dissertation project focuses on urban transformation in Johannesburg, South Africa. More broadly, she writes and thinks about economies, migration, temporality, and aesthetics within an urban context. Her dissertation, “Paved with Gold: Urban Transformation in Johannesburg,” situates the city of Johannesburg historically, considering the extractive economy of gold that initiated its development in order to understand the city’s contemporary tensions: a dilapidated post-apartheid city aiming to be a world-class global city. Her research takes place in Jeppestown, a neighborhood in Johannesburg, and focuses on the inhabitants and built environment of a single street. Today, Jeppestown is portrayed either as on its way to becoming a site of redevelopment by the Johannesburg Development Agency, artists, and private developers, or, if left unattended, as a crime-ridden area and hotbed of xenophobic violence. The dissertation posits that rather than transformation and development projects leading to an inherently new city or inherently new object, Jeppestown, like many urban areas around the world, is caught in a back-and-forth between being a successful or a failed urban space—a “good” or “bad” city.

Birberick received the Association for Africanist Anthropology’s 2019 Bennetta Jules-Rosette Graduate Essay Award for her essay, “Dreaming Numbers,” which is an analysis of fafi, a street-based lottery game played by residents in Jeppestown. The piece investigates the ways in which dreams, gambling, and interpreting patterns become meaningful strategies for choosing the next winning number and reducing uncertainty in the city.


Podcast

Matrix Podcast: Interview with Clancy Wilmott

Clancy Wilmott

 

 

In this episode, Professor Michael Watts interviews Clancy Wilmott, Assistant Professor in Critical Cartography, Geovisualisation, and Design in the Berkeley Center for New Media and the Department of Geography. Professor Wilmott comes to UC Berkeley from the Department of Geography at the University of Manchester, where she received her PhD in Human Geography with a multi-site study on the interaction between mobile phone maps, cartographic discourse, and postcolonial landscapes. At UC Berkeley, Professor Wilmott teaches graduate-level combined theory/studio courses on locative media, cross-listed courses in digital geographies, and core curriculum courses on geographic information systems in the Geography department.

Produced by the University of California, Berkeley’s Social Science Matrix, the Matrix Podcast features interviews with scholars from across the UC Berkeley campus. The Matrix Podcast is hosted by Michael Watts, Emeritus “Class of 1963” Professor of Geography and Development Studies at UC Berkeley.

Listen on Apple Podcasts or Google Podcasts.


Podcast

Matrix Podcast: Interview with Mariane Ferme

Mariane Ferme

 

In this episode, Michael Watts talks with Mariane C. Ferme, Professor of Anthropology at UC Berkeley and the author of Out of War: Violence, Trauma, and the Political Imagination in Sierra Leone and The Underneath of Things: Violence, History, and the Everyday in Sierra Leone.

Ferme is a sociocultural anthropologist whose current research focuses on the political imagination, violence and conflict, and access to justice in West Africa, particularly Sierra Leone. Her research encompasses gendered approaches to everyday practices and materiality in agrarian West African societies, and work on the political imagination in times of violence, particularly in relation to the 1991-2002 civil war in Sierra Leone. Her most recent fieldwork in Sierra Leone—carried out in 2015-16—was an interdisciplinary research project on changing agrarian institutions and access to land in the country. Ferme’s latest book, Out of War: Violence, Trauma, and the Political Imagination in Sierra Leone, draws on her three decades of ethnographic engagements to examine the physical and psychological aftereffects of the harms of Sierra Leone’s civil war.


Podcast

Matrix Podcast: Interview with Leigh Raiford

Leigh Raiford

 

In this episode, Michael Watts interviews Leigh Raiford, Associate Professor of African American Studies at UC Berkeley and author of Imprisoned in a Luminous Glare: Photography and the African American Freedom Struggle, finalist for the 2011 Berkshire Conference of Women Historians First Book Prize. In her book, Raiford argues that over the past one hundred years, activists in the black freedom struggle have used photographic imagery both to gain political recognition and to develop a different visual vocabulary about black lives. Offering readings of the use of photography in the anti-lynching movement, the civil rights movement, and the black power movement, Imprisoned in a Luminous Glare focuses on key transformations in technology, society, and politics to understand the evolution of photography’s deployment in capturing white oppression, black resistance, and African American life.

Listen on Apple Podcasts or Google Podcasts.


Podcast

Matrix Podcast: Interview with Desiree Fields

Desiree Fields

In this episode, Michael Watts talks with Desiree Fields, Assistant Professor of Geography and Global Metropolitan Studies at the University of California, Berkeley.

Fields’ research explores the financial technologies, market devices, and historical and geographic contingencies that make it possible to treat housing as a financial asset, and how this process is contested at the urban scale. At the heart of her work is an interest in how economic and technological transformations unevenly restructure urban space and social relations, with a particular concern for how urban struggles for justice coalesce around these changes. Within this broadly defined area, she examines two transformations as they relate to housing, a crucial vector of urban inequality and terrain of grassroots political contestation: first, the shift to a finance-oriented political economy; second, the growing global reach and power of digital platforms.


Listen on Apple Podcasts or Google Podcasts.

Podcast

Matrix Podcast: Interview with Dacher Keltner

Dacher Keltner

In this episode of the Matrix Podcast, Michael Watts talks with Dacher Keltner, Professor of Psychology, Director of the Berkeley Social Interaction Laboratory, and Faculty Director of the Greater Good Science Center.

Keltner’s research focuses on the biological and evolutionary origins of emotion, in particular prosocial states such as compassion, awe, love, and beauty, as well as power, social class, and inequality. He is the co-author of Born to Be Good: The Science of a Meaningful Life, The Compassionate Instinct: The Science of Human Goodness, and The Power Paradox: How We Gain and Lose Influence. He has published over 200 scientific articles, written for many media outlets, and consulted for the Center for Constitutional Rights (to help end solitary confinement), Google, Facebook, the Sierra Club, and Pixar’s Inside Out.


Listen on Apple Podcasts or Google Podcasts.