California Spotlight

Kate Pennington on Gentrification and Displacement in San Francisco

What impact does new housing have on rents, displacement, and gentrification in the surrounding neighborhood? Read our interview with economist Kate Pennington about her article, “Does Building New Housing Cause Displacement?: The Supply and Demand Effects of Construction in San Francisco.”

Kate Pennington

Kate Pennington is a research economist at the Center for Economic Studies, a division of the U.S. Census Bureau; she earned her PhD from UC Berkeley’s Department of Agricultural and Resource Economics in 2021. Her current work focuses on diverse questions related to inequality and urban issues.

Among her current projects, she is collaborating with PhD candidate Eleanor Wiseman to investigate how the water crisis in Flint, Michigan, shaped political participation and voting behavior among residents, and she studies how economic shocks like the Great Recession affect access to reproductive healthcare. Her research has been funded by the Institute for Research on Labor and Employment, the Upjohn Institute for Employment Research, the Institute for Women’s Policy Research, and the National Science Foundation.

Matrix content curator Julia Sizek interviewed Pennington about her recent research on housing and displacement in San Francisco, which won the 2021 Urban Economics Association Prize for best student paper. Her paper, “Does Building New Housing Cause Displacement?: The Supply and Demand Effects of Construction in San Francisco,” explores the impact of new housing construction on rents, displacement, and gentrification in the surrounding neighborhood.  Her work disentangles the supply and demand effects of new construction and compares the different impacts of market rate and affordable housing. (Please note that questions and responses have been lightly edited.)

 

This graph illustrates rising rent prices in San Francisco from 2003-2017. In 2003, one-bedroom apartments on Craigslist rented for approximately $1,300/month; by 2017, this number was closer to $2,500/month.
Average Monthly Rent for a 1BR on Craigslist, 2003-2017. Courtesy Kate Pennington.

As we can see in this graph, rents have been increasing steadily in San Francisco, and have been climbing dramatically since 2010. Gentrification has long been a hot topic in San Francisco and the Bay Area, especially as the tech sector has brought new — and wealthy — residents to the peninsula. Given all the data that has been collected on this subject, what did you find to be missing? How did you decide to study this topic?   

This is an issue that people care about deeply, but there’s a lot of disagreement on how cities should respond to rising housing prices and demographic change. The construction of new, market rate housing — housing that isn’t restricted to low-income residents — is really controversial because of fears that it may actually accelerate neighborhood change. To me, this is an open empirical question. What is the impact of new market rate buildings on the surrounding people and neighborhoods? I wanted to try to answer this question to help move the discussion forward toward a solution.

The question is difficult to answer because it’s hard to tease apart causation and correlation. Because of the tech boom, higher-income people are moving to the Bay Area, and that’s driving up rents and displacement. Developers want to make money, so they like to build in places where prices are already rising. That means that new market rate housing is positively correlated with rising rents and displacement, but it doesn’t mean that the new buildings are causing the neighborhood change.

The challenge here was to find a natural experiment in housing construction that could help me identify the causal impact of construction on rents and demographic change.

In your paper, you combine data on new construction with data on structural fires and Craigslist rents. Why did you end up using these forms of data to track changing housing conditions in San Francisco? 

The ideal way to determine whether new housing causes changes in rents and displacement would be to do an experiment where we drop down new buildings at random throughout the city and then compare what happens nearby to what happens farther away. Obviously this is impossible for many reasons, so the challenge is to come up with something that mimics that ideal experiment.

I use serious building fires as a source of experimental variation in where new construction happens. San Francisco is famously hard to build in; it’s heavily regulated, and it can’t sprawl because it’s surrounded by water on three sides. For the most part, if you want to build something new, you have to tear down something old. Serious building fires make it much cheaper for developers to build on a burned parcel. I use these fires to figure out which construction projects were “exogenously” located, that is, located due to the random occurrence of a fire. This mimics the ideal experiment of random locations for new construction. The maps below show where these construction projects were built in 2015 and 2016.

I use Craigslist rents for two reasons. First, the City of San Francisco doesn’t track rental prices.  I had to figure out how to get access to rental price data at a small spatial scale, without the ability to pay for a big dataset like Zillow. Scraping archived Craigslist posts was free and let me get the specific information I needed. Second, since the housing market is really segmented, Craigslist rents probably do a better job of capturing the rents faced by a lower-income resident who’s actually at risk of displacement. Higher-income people might use Zillow or Redfin to find a rental, and those prices tend to be a couple hundred dollars higher than the average Craigslist rent in any given month.

 

This map shows the locations of new construction in San Francisco in 2015.
Tracing the influence of new housing in San Francisco, 2015-2016. Courtesy Kate Pennington.
This map shows the locations of new construction in San Francisco in 2016.
Tracing the influence of new housing in San Francisco, 2015-2016. Courtesy Kate Pennington.

 

These maps depict new construction in San Francisco, broken down by type. Help us understand these two maps, and why the distinction between endogenous and exogenous construction matters for both policymakers and economists.

 

These maps show the 600-meter radius around new construction projects in 2015 and 2016.  They help visualize who might be affected by each new project. The randomly located (exogenous) projects are shown in pink and orange. These are the projects whose impacts I study in the paper. The projects shown in blue and green are not experimentally located; those locations may be “endogenously” driven by developers’ desire to build where prices are already high.

Why did you decide to focus on the local scale of housing, and what does this help to show us? 

Focusing on the local scale is important for two reasons: it helps identify a causal relationship, and it directly answers the question of how new housing impacts people living nearby, which is at the center of the policy debate.

 

This diagram shows the impact of new housing construction on rents at a local scale.
Measuring Exposure to New Construction Projects. Courtesy Kate Pennington.

In this figure, we can start to see how you approach this topic through measuring the impacts of new housing spatially. Help us understand this image. How do the effects of new housing differ based on distance?

For each person in my sample, I count up the number of randomly located new projects and new units completed within different distance bins for each year of my study, from 2003-2017.  This figure shows how. The circle shows the 600m radius around a fictional person’s house.  The yellow dot shows a project built within 200m, so this person would have a value of 1 for the number of projects within 0-200m.  They’d have a value of 0 for projects within 200-400m, and 1 for projects within 400-600m (the red dot).  Similarly, they would have a value of 6 for net units within 200m, 0 within 200-400m, and 200 within 400-600m.
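The ring-counting logic she describes can be sketched in a few lines of Python. This is a hypothetical illustration of the procedure, not her replication code; the function name, data layout, and cutoffs are assumptions based on the figure (coordinates are taken to be in meters, e.g. from a projected CRS).

```python
# Sketch of the exposure measure: for one person and one year, count
# exogenous projects and net new units completed within the 0-200m,
# 200-400m, and 400-600m rings around the person's home.
from math import hypot

def exposure(person_xy, projects, year):
    """projects: list of (x, y, year_completed, net_units) tuples."""
    bins = {"0-200m": [0, 0], "200-400m": [0, 0], "400-600m": [0, 0]}
    for x, y, yr, units in projects:
        if yr != year:
            continue
        d = hypot(x - person_xy[0], y - person_xy[1])
        if d < 200:
            key = "0-200m"
        elif d < 400:
            key = "200-400m"
        elif d < 600:
            key = "400-600m"
        else:
            continue  # beyond 600m: not counted as exposed
        bins[key][0] += 1       # number of projects in this ring
        bins[key][1] += units   # net new units in this ring
    return bins

# The example from the figure: a 6-unit project at ~150m (yellow dot)
# and a 200-unit project at ~500m (red dot).
result = exposure((0, 0), [(150, 0, 2015, 6), (500, 0, 2015, 200)], 2015)
print(result)  # -> {'0-200m': [1, 6], '200-400m': [0, 0], '400-600m': [1, 200]}
```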

 

This graph shows how rents of 1BR apartments shift after new construction. The X axis measures distance from new construction; the Y axis shows changes in rent.
How do new projects affect 1BR rents and the probability of an adverse move? These graphs show Pennington’s results to answer these questions. Courtesy Kate Pennington.

 

This image shows the likelihood of an adverse move for a current resident. X axis shows distance from new construction; Y axis shows likelihood of adverse move.
How do new projects affect 1BR rents and the probability of an adverse move? These graphs show Pennington’s results to answer these questions. Courtesy Kate Pennington.

This pair of figures tracks the impact of new construction on rents and displacement. Can you walk us through these charts, and what you were able to measure? 

This figure shows the main results. The first panel shows the average relationship between rents and distance from the new market rate construction project in the four years after completion. Rents are roughly $40 lower for people living close to the new building. This effect decays with distance, fading out to zero within two kilometers.

The second panel shows the impact on one measure of displacement: the probability that a renter moves to a lower-income zip code. The risk of moving to a lower-income zip code falls by about 20% for people living close by, and again fades to zero with distance. The renters who live closest to the new projects benefit from the largest differential rent reductions and the largest fall in the risk of displacement. Displacement refers to push migration, when individual people are pushed to leave their current housing. Gentrification refers to the replacement of lower-income incumbents with higher-income newcomers. Displacement happens to people; gentrification happens to places.

To measure gentrification, the ideal would be to count the net change in the number of richer people at a given address. Since I don’t have individual income data, I use median zip code income as a proxy. I count the net number of people arriving at a given address who came from a richer sending zip code. Panel A shows that the probability of a net increase in richer arrivers — my proxy for gentrification — increases by 2.5 percentage points close to new market rate construction, again fading out with distance. In contrast, panel B shows that new affordable housing doesn’t attract an increase in gentrification.
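One way to picture this proxy is a small sketch like the one below. It is a rough illustration of the counting idea under stated assumptions, not the paper's exact construction; the function, data layout, and numbers are all invented.

```python
# Gentrification proxy sketch: at a given address, sum moves whose other
# end is a zip code with higher median income than the home zip code
# (direction is +1 for an arrival, -1 for a departure), and flag a net
# increase in "richer arrivers."
def richer_arriver_indicator(address_moves, zip_income, home_zip):
    """address_moves: list of (other_end_zip, direction) tuples.
    zip_income: dict mapping zip code -> median income."""
    net = sum(direction
              for other_zip, direction in address_moves
              if zip_income[other_zip] > zip_income[home_zip])
    return int(net > 0)

# Hypothetical example: two arrivals from a richer zip, one departure
# involving a poorer zip (which doesn't count toward the proxy).
incomes = {"94110": 90_000, "94105": 150_000, "94124": 60_000}
moves = [("94105", +1), ("94105", +1), ("94124", -1)]
print(richer_arriver_indicator(moves, incomes, "94110"))  # -> 1
```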

This graph shows how rents change based on distance from exogenous market rate construction.
Impact of new projects on gentrification by construction type. This figure shows exogenous market rate housing, or housing constructed after large structural fires for market rate renters. Courtesy Kate Pennington.

 

This graph shows how rents change based on distance from exogenous affordable housing construction.
Impact of new projects on gentrification by construction type. This figure shows the effects of exogenous affordable housing construction, or affordable housing constructed after structural fires. Courtesy Kate Pennington.

 

This final set of figures helps us consider the differences between different kinds of new housing, comparing specifically the differences between affordable and market-rate housing. What is the conventional wisdom on the differences between market rate and affordable housing, and what do these charts and your research more generally suggest for policymakers interested in housing affordability?

One idea that circulates in policy discussions is that market rate housing might cause local price increases and displacement, but affordable housing won’t. Instead, I find that market rate housing differentially decreases nearby rents and displacement risk, while affordable housing has no spillover effects on the surrounding people and neighborhoods. This suggests that affordable and market rate housing are complementary policy levers. Market rate housing can help many people who live nearby, but its price impacts will become less and less effective if the city continues to gentrify and the nearby residents are less sensitive to small changes in rent. On the other hand, affordable housing only prevents displacement for the people living in it, but it does better at targeting people who are really at risk of displacement, and it can preserve long-term income diversity. Both help — and neither hurts.

 

Methods

Metaketa: A Collaborative Model for Social Science Research

Thad Dunning

Social scientists conducting field-based research often design and conduct their studies in isolation, making their findings difficult to replicate in other contexts. To address this challenge, a team of social scientists at UC Berkeley and other institutions launched an initiative called “Metaketa,” which aims to provide a structure and process for designing and coordinating studies in multiple field sites at once, leading to a more robust body of data and improved standards for transparency and verification.

Metaketa is a Basque word meaning “accumulation.” Funded via Evidence in Governance and Politics (EGAP), a research, evaluation, and learning network, the Metaketa Initiative represents a model for collaboration that “seeks to improve the accumulation of knowledge from field experiments on topics where academic researchers and policy practitioners share substantive interests,” according to the EGAP website. “The key idea of this initiative is to take a major question of policy importance for governance outcomes, identify an intervention that is tried, but not tested, and implement a cluster of coordinated research studies that can provide a reliable answer to the question.”

The model is grounded in eight principles: coordination across research teams; predefined themes and comparable interventions; comparable measures; integrated case selection; preregistration; third-party analysis; formal synthesis; and integrated publication.

Thad Dunning, Robson Professor of Political Science at UC Berkeley, helped launch the Metaketa Initiative, and he participated in the first “cluster” of studies, which focused on understanding how the dissemination of information about candidates influences voter behavior. In 2019, Dunning co-authored a research paper in Science Advances, “Voter information campaigns and political accountability: Cumulative findings from a preregistered meta-analysis of coordinated trials,” that summarizes the importance of the Metaketa model: “Limited replication, measurement heterogeneity, and publication biases may undermine the reliability of published research,” Dunning and his co-authors wrote. “We implemented a new approach to cumulative learning, coordinating the design of seven randomized controlled trials to be fielded in six countries by independent research teams. Uncommon for multisite trials in the social sciences, we jointly preregistered a meta-analysis of results in advance of seeing the data.”

Cover of "Information, Accountability, and Cumulative Learning"
Dunning and the co-authors — including UC Berkeley political scientist Susan Hyde — also published a book, Information, Accountability, and Cumulative Learning: Lessons from Metaketa I, that shares lessons learned from the project. This collaborative Metaketa model has since been used for studies on taxation, natural resource governance, community policing, and women’s action committees and local services.

We interviewed Dunning about how the Metaketa Initiative evolved, as well as what his own study suggested about how information influences voters’ choices. (Note that questions and responses have been edited for clarity and content.)

What was the focus of the study that you undertook through the Metaketa Initiative?

The initial thrust of the study was focused on the connection between information provision and political accountability. It almost seems like a truism that information has to matter for politics, and yet we don’t actually know that much about how providing voters with certain kinds of information actually affects political behavior. We haven’t built a coherent body of evidence around that.

The second part was methodological, and had to do with how we build cumulative knowledge. There’s been a big movement in the social sciences toward experimentation as a way of building causal knowledge. That has some advantages, but it’s also really limited. Our project stepped away from a single study and said, here are some aggregate conclusions that might hold across settings. It was really trying to tackle the problem of external validity in experiments: if I find something in a particular study, does it generalize to other contexts?

How did the Metaketa model evolve into a formal initiative?

The methodological goal of the Metaketa was to try to design a study through collaboration across teams that would build in meta-analysis-ready data. That was a major objective. There were also objectives around the reporting of results, including more pre-specification and transparency in the analysis, and working toward this larger model of open science. All of those aspects were important in the project.

There’s a lot of value in academia in planting the flag and being the first one to do something, but maybe we’re a little bit too willing to move on. The idea is not to prioritize new models or innovation, but to prioritize replication, and that was a big part of the model. There have traditionally been a lot of problems in trying to generalize across studies. Often the studies themselves are not comparable. They have different kinds of interventions and outcome measures.

In many ways, we were crossing the river by feeling for stones. We’d been having discussions around this kind of model for quite some time. Part of the project was getting some initial grant funding to support the concept, and then to launch this first substantive part of it, focused on political accountability and information provision. It has been an interesting initiative to be involved in. There have now been four or five Metaketas on different substantive topics. It’s a model that’s been funded by different sources, but we had an anonymous donor who provided the funding for our startup, and more recently, the British government and USAID and others have been involved in funding these larger Metaketas.

How did you ensure that data from one study would align with that of the others?

A big part of this is just trying to harmonize the interventions: what kind of information is going to be provided? How do we conceptualize information in relation to political performance? What do voters think before information is provided? How does the information differ from what they already believed? We wanted to try to standardize this across projects, and measure in a symmetric way what outcomes we care about: first, whether voters vote and how they vote, but then also secondary outcomes. And we want to be able to do that consistently across studies. That way we can assess the average effect across the seven study sites, as well as variation across the sites. And we can look at that in a way that makes sense. A lot of that was harmonized at the design stage ex ante through a series of workshops across project teams. Then, sharing public data ex post allowed us to do a meta-analysis of data from the seven studies.

What did your study suggest about the role of information on voter behavior?

Graph showing research findings from the study on information and voter behavior.
It seems self-evident that the information people receive would make a difference in how they vote, but our finding was a big no. Almost everywhere we looked, the provision of information made no difference in how people voted. We had taken a lot of care to try to develop designs that were well-powered enough, particularly once we aggregated the data across seven studies. We could make the claim with a fair degree of precision and certainty, in a statistical sense. That may make the answer itself more compelling and more credible, that the information provision didn’t have any effect.

On the other hand, it may seem mystifying, given the important role we think information is playing in politics. What we can say is that providing this kind of information from neutral third parties about what politicians are doing in office, including about political malfeasance or misspending of funds, didn’t shape voters’ behavior. That’s depressing, but also may be informative. If we want to transform political accountability, maybe we should be looking at other kinds of interventions.

Politicians may think voters are more responsive than they seem to be in some of these instances. But voters can be hard to move, and that might be consistent with some of what we know about partisanship broadly. The idea that information doesn’t move people away from their pre-existing beliefs is a depressing finding from a number of perspectives, although you can’t test everything, and there’s a role for more sustained, cumulative evidence-gathering in some of these areas. I would have said ex ante that this kind of information provision would have mattered much more than it did.

Our methodological message is that we need to be careful and build up piece by piece, and then after having built up carefully, try to put a body of evidence together in an area. We don’t want to overclaim and then say, well, information doesn’t matter. But we do have robust evidence from these studies that this type of information provision doesn’t shape what voters do very much on average, across a wide set of contexts. We think that’s useful, even if not the final word. But we should evaluate other kinds of claims about other sorts of information, hopefully using similarly robust kinds of evidence. That’s the methodological point we want to drive home.

Do you think the Metaketa model could be used in other contexts, for instance among researchers on a single campus like UC Berkeley?

There’s a lot of ground for more collaboration, and this is consistent with an open science model. Often we have incentives to work in small teams, to claim priority, and to embark in new directions. Trying to work collaboratively to further knowledge is also really important, and there’s a big role for it. And it doesn’t all have to entail gathering seven teams in a room and planning a study in advance.

We’ve explored ideas like having a registry of study topics, where three studies would be conducted, and then a researcher would come along and replicate that set of studies, and also tweak them, building in innovation and replication at the same time. All of these things are potentially interesting. Many of them could also be interdisciplinary in character, speaking to the Matrix mission, and could be a way to bring people together from across disciplines. These approaches can have a big upside in terms of knowledge generation.

 

 

Matrix News

Q&A with David Robinson, Visiting Scholar at Social Science Matrix

David Robinson

Social Science Matrix is honored to welcome David Robinson as a Visiting Scholar for the 2021-2022 academic year.

A distinguished researcher working at the intersection of law, policy, and technology, David studies the design and management of algorithmic decision-making, particularly in the public sector. He served as a managing director and cofounder of Upturn, a Washington DC-based public interest organization that promotes equity and justice in the design, governance, and use of digital technology. Upturn’s research and advocacy combines technical fluency and creative policy thinking to confront patterns of inequity, especially those rooted in race and poverty.

David previously served as the inaugural associate director of Princeton University’s Center for Information Technology Policy, a joint venture between the university’s School of Engineering and its Woodrow Wilson School of Public and International Affairs. He came to Matrix from Cornell University’s AI Policy and Practice Initiative, where he was a visiting scientist. He holds a JD from Yale Law School, and bachelor’s degrees in philosophy from Princeton and Oxford, where he was a Rhodes Scholar.

We interviewed David to learn more about his research interests and the projects he will be pursuing while at UC Berkeley, including an upcoming book on the development of the algorithm used to determine recipients of kidney transplants in the United States. Please note that this interview has been edited for length and content.

Q: How did you develop your interest in the study of algorithms?

I have always been interested in the social impacts of technology. When I was a kid, I had terrible handwriting; because of a mild case of cerebral palsy, I had some fine motor impairment. When writing meant penmanship, I was a bad writer. But then, eventually, I got a word processor in school, and discovered that I loved writing, and it was a really empowering change for me. Word processors had been around for a number of years, but the key change that made the benefits possible in my life was that the rules changed. The school said, let’s get one of these computers into this setting, where it can be beneficial. Ever since then, I’ve been interested in the social impacts of new digital technologies.

I came of age during the first wave of internet optimism in the 1990s and early 2000s, and I returned to Princeton to help start the Center for Information Technology Policy, a growing, thriving organization that brought together people from different disciplinary backgrounds. Part of the idea was that, if you’re navigating the policy and the values choices that come up around new technologies, it’s a big help to have some real depth of technical expertise. My colleague from that center, Ed Felten, later became the Deputy Chief Technology Officer of the United States in the Obama administration. There was a style of work we had there that was very specific to understanding the factual pieces of new technology, and making sure that a clear shared map of the stakes of the debate would be available to all participants.

While there, I got very involved in one issue in particular: open government data, making data transparent to the public, and publishing it in a reusable format, so that, for example, if you have public records about pollution or crime or education, you can put that on a map and track it over time, and not only rely on the government’s presentation of that information.

This was an idea that really took off in the Obama administration, and they created something called Data.gov, and built a multilateral partnership called the Open Government Partnership, along with other countries. I came together with Harlan Yu, who was a PhD student at Princeton, and we ended up starting a public interest organization, Upturn, to continue this work of informing the public debate.

In the beginning, there was an optimistic view that there was an inherent valence to the technology, that it would make things more democratic and more open and accountable. Over time, we saw that wasn’t the case. Data.gov and similar sites had great data about things like the weather or the real-time location of buses, but if you were thinking this was going to help uncover financial malfeasance or otherwise disrupt the status quo, that didn’t transpire. We published a mea culpa on this, called the “New Ambiguity of Open Government,” where we said, if you’re making the data open, that doesn’t necessarily mean that you’re making the government open. There’s a whole politics to this. It’s not inherent in the technology that things are going to get more open.

Upturn started out as a consulting firm in DC and ended up as an NGO, and we ended up working very closely with civil rights organizations, addressing inequities that are based either on race or poverty or the conjunction of the two. We evolved over time into having a much clearer political or normative mission. While at Upturn, I worked on understanding questions like, how do predictive policing systems work? If we have systems in courtrooms telling us who’s dangerous, what does that mean? What danger or risk is being measured, and what is the impact on real people and their families? Those sorts of questions became more important over time.

Three years ago, I was teaching at the law school at Georgetown, and I was focused on, how do we make algorithms accountable? We’re having software make high-stakes decisions that are impacting people’s lives. What can we do to take the moral innards of these systems and make them visible, and give people a seat at the table who are not the engineers and have them help make some of these values choices? That’s a question that is very much alive today.

What will you be working on during the coming year as a Matrix Visiting Scholar?

One of the projects I’ll be working on is a book with the working title, Voices in the Code. The idea is, I can give you lots of examples of where a system has been built and the values choices have not been made in an accountable way. In courtrooms, in the pre-trial context, where someone hasn’t been convicted of a crime, you’re balancing the liberty of a presumptively innocent person against the risk to the community that they might go out and commit more crimes or something like that. There’s no visibility and no clear understanding of how many of those choices are made in many jurisdictions. The point of these courtroom systems is to predict who’s dangerous. We wrote a paper called “Danger Ahead” that said, we predict these systems are dangerous because they’re hiding the ball on what the moral trade-offs are.

Voices in the Code is about one place where people didn’t hide the ball: in organ transplantation in the United States. When a kidney becomes available, there are roughly 100,000 people waiting for a transplant. A donated organ is a non-market resource: we’re not going to give it to the highest bidder, but we do have to decide collectively, who’s going to get this vital resource and the opportunity to resume a normal life, and not rely on dialysis?

There are all kinds of logistical factors that go into that: how far away is the person? There are also medical factors, like blood type. And there are moral factors: if we wanted to maximize the total benefit from our supply of organs, then we might choose to give the organs to younger, healthier, and by-and-large richer and possibly whiter recipients, with fewer social determinants, co-morbidities, or other health problems. Of course, this is dramatically unfair. If we were to do that in a completely utility-maximizing way, the result would be that people already disadvantaged would lose the chance to get transplants. It’s also the case that older recipients would be greatly disadvantaged in that system.

But what’s interesting about transplants is there’s a very public process of figuring out what that algorithm is going to be. And when they suggested this utility-maximizing idea, the public pushed back, and they switched to something that’s a lot more moderate and smarter than what they were originally going to do. They did that because there was a public comment process, and transparency about what the algorithm was. There was auditing and there were simulations of how it would work if we rolled out different versions of that algorithm.

Those are all things that people are arguing for in other contexts, whether in child welfare, courtrooms, or in the private-sector systems for hiring. We want transparency and accountability. And there are a lot of ideas on the whiteboard. But what does it look like in practice? How can it be done? From my point of view, the transplant example is a really valuable precedent for how to do the ethics inside an algorithm in an accountable way. My book is about this example and what we can learn from it. (Watch a video of a talk that Robinson gave about this work.)

The second half of the work is a book about how algorithms change the stories we tell about who people are. It is looking at how selves are constructed, so it has more of a philosophical bent. When I was working in policy, I noticed that if you tag somebody as having a high productivity score, or a high dangerousness score, it’s not only used to make some narrow decision, but it also changes how the person is perceived by others. If we think about the quantified self movement, with all these self measurements, like a smart watch giving me health points, that’s going to change my view about how healthy I am. If we rate surgeons based on how successful their patients are after the operation, we think we’re finding out who’s a good surgeon, when it turns out, we may really be finding out in part who cherry-picks their cases and takes easy cases or something like that. The book aims to help the public develop a greater sense of confidence in taking apart what some of these scores really mean, to recover a sense of being able to construct our own identities and not ending up outsourcing that to some piece of software. [See this short essay that previews Robinson’s book on the social meaning of algorithms.]

What other lessons does the kidney transplant example teach us about fairness in algorithms?

Sometimes you’ll hear people talk about going out to get public input through some process, and the input is treated like something we’re going to mine and collect. But one of the key insights from this transplant experience is that debate creates opinions. The opinions that people come to the table with tend to change and soften. I always visualize one of those machines for polishing rocks, where you have all of these sharp edges that go in at the beginning, and they tumble around and get polished. Eventually people see where others are coming from, and they are invested in hearing each other out.

The algorithm for transplants is perpetually being revised, which is part of what a real democratic process looks like. People arrived at something they may not have loved, but that they found tolerable. There was a kind of wearing down, a gradual acquiescence into something tolerable. Especially if we look at our politics today, it’s no small feat to find something that is mutually tolerable to people with very different points of view. At some level, that’s part of our ambition for the governance of algorithms.

Based on what you’ve learned about algorithms and transparency, what do you think should be the norm in this area in five or ten years?

People sometimes say there ought to be one centralized regulatory body for algorithms, and I’m skeptical about that, because I think the contexts do differ, and context really matters. If you’re dealing with something medical, you want medical experts, and if you’re dealing with criminal law, then you want experts in the criminal legal system, as well as people and families who’ve encountered the system who can provide input into that.

But I do think there can be a shared layer that emerges, where people in one area talk to people in another and recognize that we have problems of the same shape. We’re doing data science, but we want to do it in an accountable, inclusive, and democratic way. There are places where we can learn how to do that, and we can take examples from one domain and share them with another.

So what does that mean? It means getting people involved in the design process as early as possible to frame a shared understanding of the problem. It means publishing and auditing and simulating. (This is a step I think hasn’t gotten a lot of attention so far: how can we forecast the consequences of our alternatives?) And then, once the thing is out there, continuing to pay attention to how it’s going and seeing if it needs to be revised. That’s a set of practices that people are learning how to do in parallel, in lots of different places. So it’s about how to share ownership of the ethical choices inside high-stakes software. That’s what I’m working on, and that’s where I think a shared literacy needs to emerge.

Sometimes there’s a pattern of technical “shock and awe,” and people say, you have to be a genius or an expert to have any clue what this system is doing. And yet, at the end of the day, there’s a conference room and a whiteboard somewhere where human beings are sitting around and saying, how does this work, and what do we want to change? The doors to that room can always be opened, no matter how complicated the software is, no matter if it’s changing every second. Answering that question is a job that can be shared.

Part of the mission of Social Science Matrix is to promote cross-disciplinary research. What academic disciplines does your work touch upon?

I’ve taken a deep dive into the legal and policy documents, because one of the things about this transparent process is that there are reams of documents and reports, which are not necessarily easy to understand. I added a qualitative component that draws on sociological and anthropological methods. I conducted semi-structured qualitative interviews with participants in this public deliberation process, including physicians who led committees, and a transplant patient who argued that the original proposal was unfair. Although my original training was not in sociology, I learned a great deal from colleagues and have been able to adapt those methods.

What brought you to UC Berkeley to continue this work?

Berkeley is just an extraordinary community. There’s a strong public service mission because it’s a public university, and it’s home to one of the world’s great intellectual communities. It’s a tremendous place, and a tremendous opportunity to contribute to those conversations, to share work in progress, and to get feedback.

Having looked at the transplant example, part of what I’m trying to do is to make that experience available to other scholars and policymakers who are working on similar problems in other domains — maybe not in transplants, but in a courtroom or a human resources department, where they want to know, how can transparency be made to work? I really want the substance of what I’ve done to be available to people.

I’ve made an intentional choice to step away from the more immediate policy work and think longer term. It’s been a great opportunity to think big picture, but also to think concretely about how we can take insights from the academic field and apply them to the social problems we have that relate to new technologies. In order for all this toil and time to pay off, I’ve got to weave this work into the broader conversation around these issues. I am hoping Matrix and UC Berkeley will be a platform to bring these ideas into conversation with the wider world.

 

Race

A Q&A with Social Psychologist Jack Glaser on Racial Bias and Policing

Jack Glaser

Jack Glaser, Professor in the Goldman School of Public Policy, is a social psychologist whose primary research interest is in stereotyping, prejudice, and discrimination. He studies these intergroup biases at multiple levels of analysis. For example, he investigates the unconscious operation of stereotypes and prejudice using computerized reaction time methods, and he is investigating the implications of such subtle forms of bias in law enforcement. In particular, he is interested in racial profiling, especially as it relates to the psychology of stereotyping, and the self-fulfilling effects of stereotype-based discrimination.

Additionally, Professor Glaser has conducted research on a very extreme manifestation of intergroup bias — hate crime — and he has carried out analyses of historical data as well as racist rhetoric on the internet to challenge assumptions about economic predictors of intergroup violence. Professor Glaser is working with the Center for Policing Equity as one of the principal investigators on a National Science Foundation- and Google-funded project to build a National Justice Database of police stops and use of force incidents. He is the author of Suspect Race: Causes & Consequences of Racial Profiling.

Professor Glaser has been involved with past Matrix Research Teams on community trust and policing. We reached out to Professor Glaser in July 2020 for his insights on bias in policing in the wake of the protests for racial justice and police reform.

How do you describe your research, particularly as it relates to policing?

My research is centered on applying the psychological science around stereotyping and prejudice to understand racial disparities in policing, in stops and searches, and also in use of force.

I do that in a number of different ways. The work I’m most associated with is research on how implicit bias gives rise to discriminatory judgments and behaviors. Some of the work I’ve done there is to measure, for example, the extent to which people hold an association between Blacks and weapons, and the extent to which that causes them to make a shooting response to an armed Black man faster than to an armed White man, or to make a no-shoot response to an unarmed White man faster than to an unarmed Black man. What I’ve been doing more recently, though, is working with police departments and with various government agencies to try to figure out what’s going on in the field, and how to reduce the racial disparities that we see time and again, across many different datasets.

Where does racial bias come from?

There’s a century’s worth of psychological science on prejudice and discrimination and stereotyping. But some of the fundamental understandings we have from careful experimental research include the fact that people are hard-wired to categorize others and themselves into racial and ethnic and other kinds of groups. We just do that very spontaneously, we start doing it at a very young age, and it’s not something we can really turn off.

We make those categorizations, and then we have a tendency to prefer the groups we belong to. It’s the natural in-group favoritism that people tend to have. On top of that, people who belong to negatively stigmatized groups are less likely to like the group they belong to than those from the superordinate, high-status, high-power groups. And we also have the specific content of the stereotypes we hold about members of various groups. So we very quickly start to formulate hypotheses about how people from one group or another are going to behave. That might be along gender lines, or racial or ethnic lines, or age lines, or political affiliation lines. We make sense of our complex world by putting people into these categories, and then attributing predictable traits to those categories.

One of the very prominent stereotypes that’s highly pervasive in American culture is that Black people are associated with crime and weapons and violence. Police officers are not immune to that, so as a consequence, they tend to regard people of color with greater suspicion, because the stereotypes cause them to interpret ambiguous behaviors in a manner that’s consistent with their prior conceptions. In the last 30 or so years, there’s been an avalanche of research on implicit bias and how these biases operate outside of our conscious awareness, and then can be activated automatically, and influence our perceptions and our judgments and behaviors, in spite of our best intentions to behave in a fair and unbiased manner.

How is it possible to bypass or manage this kind of bias?

Training is the usual response. Unfortunately, we don’t know of any training that reduces these biases or consistently reduces the impact that they have on behavior. There is a whole cottage industry of implicit bias trainers across many industries, but especially in policing, and they’re private companies that offer training for a fee. To the extent that they’ve been studied at all, there’s no indication that they actually change performance in the field.

There is a non-trivial number of officers who are explicitly biased and who deliberately and overtly engage in racial profiling or racial oppression. But the vast majority of officers, who are at least trying to operate in an unbiased manner, are unable to suppress and control the influence of these implicit biases. And so it’s not really realistic to expect that a day’s worth of training, or even multiple days of training, is going to change their biases, or give them the skills that enable them to short-circuit the influence of those biases. You really need chronic motivation, a specific strategy, and then the cognitive resources or the opportunity to impose that strategy to prevent those biases from influencing your judgments. The likelihood that police officers on a day-to-day basis are going to be able to mobilize all three of those dimensions to override their biases is very low.

My view is that the effort should be focused on supervisory staff — sergeants and above — who are determining the decision-making environment the officers are stepping into. They’re the ones who are setting the incentives. If they’re trying to get officers to make a lot of arrests, or find a lot of drugs or weapons, then those officers are going to go out and make a lot of indiscriminate stops and searches of people, most of which (the data show us) are going to be unfruitful.

One of the things we find in the data across many different jurisdictions is that, among the people that officers stopped and searched when looking for guns and drugs, the Whites that they search are more likely to actually be in possession of illegal contraband than the Blacks and Latinos that they search. That’s probably because they are imposing a higher threshold of suspiciousness in order to decide to search a White person in the first place. To the extent that those kinds of discretionary stops are occurring and are being imposed disproportionately on people of color, that is going to be a catalyst for the influence of the implicit or explicit biases on the treatment of minority community members. The best way to have a significant effect on reducing that disparate impact is to reduce those kinds of behaviors that give rise to discriminatory effects.

What kinds of structures can be put into place to help reduce racial bias in policing?

The psychological research on controlling the influence of bias is pretty clear. The first element you need in order to make an unbiased judgment is the cognitive resources, which means not being rushed or stressed or drunk or tired. Then you can make a deliberative judgment and focus on specific indicators of, in this case, suspicion, or whatever it is that you’re looking for. That foundation needs to be in place for implicit bias not to influence you.

But even if you have that, it’s difficult for a normal person to look at another person, take the information that is available to them — which is never going to be complete, and will always have some ambiguity — and differentiate between the subtle, implicit things that are causing them to regard that person in a certain way and the actual objective indicators. We can’t subjectively separate those things out very well. You need a specific strategy to help you try to separate them. That involves approaches like trying to think of that individual as another person, or relating to them by taking their perspective. Lots of different strategies have been tried, and some of them work. But none of them works for very long.

One approach would be to use some kind of checklist to say, does this person have these three characteristics that have been empirically demonstrated to be related to this sort of suspicious behavior? In the absence of that, they don’t meet the criteria for being searched. The strategic approach would be to formalize the process. But that’s very difficult to do in the real world when you’re encountering things in a fluid situation. So my view is that the incentives matter more. And generally, what you’re asking people to do is going to determine the extent to which what they’re doing is discriminatory.

Have you seen any departments implement these shifts in incentives?

I can’t say I’ve seen rewards changed to promote accuracy, per se. But what we have seen across multiple jurisdictions is that some police departments are backing away from the incentive to make a lot of drug arrests. In New York City, the city lost a major class action lawsuit over “stop and frisk,” so there’s been a radical reduction in the number of pedestrian stops they are doing in New York. There was also a shift in political winds at the same time, but they’ve gone from almost 700,000 stops a year to under 20,000 stops a year. It’s almost unrecognizable. What we see there is that the racial disparities in the outcomes of the searches have become almost equalized in New York, while the crime rate was flat or declining. Oakland, California, also reduced the number of discretionary stops (mostly vehicle stops) that their officers were making. They also saw a reduction in racial disparities for those stops, and there was no impact on crime.

It’s not entirely clear to me as an outsider how those incentives changed, but I have a vague sense that it was the removal of an encouragement to make a lot of stops in New York, and even a prohibition, like, we’re not doing those stops anymore unless you have a high degree of suspicion. In instances where we have seen that, you see not only an overall reduction in stops, but also a reduction in the disparities. And one thing that’s important to bear in mind is that, even if you didn’t see a reduction in the disparities, because the harm of being stopped without good purpose is overwhelmingly borne by communities of color, reducing that activity overall is going to differentially benefit those communities. It’s not going to equalize things, but it is going to have a benefit for those groups.

What are the research questions you’re asking now?

I have a couple of research projects currently in progress with my very impressive colleagues, one of which is with Perfecta Oxholm, a doctoral student at the Goldman School, who is doing her dissertation work with the Oakland Police Department. She’s going to be doing a multi-methodological study where she is interviewing police officers and community members to get a sense in a qualitative way of their perceptions of each other — and their perceptions of their perceptions of each other. And then we’ll be doing survey-based research that’s more structured based on those interviews, and ultimately doing an intervention, a randomized, controlled trial where some police officers engage in particular community contact activities to see how that affects attitudes on both sides.

Communities have a right to have good relations with other people, including agents of government, and to feel enfranchised and to not feel threatened by agents of government. But it’s also in the interest of the state for communities to trust law enforcement, because they’re going to be more likely to report crimes and to cooperate with investigations. It’s generally a win-win all around. Historically, it’s been clear that having an oppressive relationship between law enforcement and minority communities is not helpful.

I’m also conducting research with colleagues at UC Davis and RAND where we have developed a computerized simulation that we’re going to be rolling out with police officers, in which we have experimentally manipulated the race of the person they view on the computer monitor, and they evaluate the suspiciousness of the behaviors he’s engaged in. We have 72 different scenarios, where individuals are doing things ranging from not at all suspicious, like just sitting on a stoop, to highly suspicious behaviors, like dropping a gun behind a bush. In between are the really interesting ones, where they’re dropping some ambiguous object, or they’re picking something up from under a suspicious place. The idea is that we want to see the extent to which there are racial differences in who they regard as suspicious. The question is, would you stop and search this person? The main purpose is to establish a standardized metric for the variation in racial sensitivity that officers have to the race of the suspect, and to look at how that relates to their actual field performance, and the racial distribution of the people that they’re actually stopping in the field. We’ll be measuring lots of other things as well.

We created little animations of three still photographs that depict a process where somebody is moving through space and doing something, but it’s highly standardized. We have Black and White actors playing these parts doing exactly the same thing. We will of course mix up the order in which people see them. So it won’t be like, here’s the Black guy doing it, here’s the White guy doing it; you just respond to this individual. We may give some of our research participants only the Black actors or only the White actors, to do what we would call a between-subjects comparison. We’re going to do it a lot of different ways to see what we can pick up.

How might police departments be able to use that kind of standard metric?

If we find a correlation between racial bias in that measure, and the racial distribution of who they’re stopping and searching — or the outcomes of the searches they’re doing — that would give the department quite a bit of insight. Without the metric we’re developing, they could look at those racial disparities in who has been stopped and searched and throw their hands up and say, well, that might just be them responding to what’s happening on the street. But if we can show that there’s a relationship between the sort of preconceptions and the actual performance, that would be enlightening. It could also lead to training opportunities, where they use that information to say, you should be looking at the object, but the officers who do this tend to be influenced by the race of the person dropping the object or picking it up.

What are common misconceptions that people have about policing and racial bias?

One thing people don’t realize is that the overwhelming majority of police-civilian encounters do not have any public safety-enhancing effect, especially discretionary encounters. Obviously, calls for service, when officers are responding to a witnessed crime or some kind of crisis, have great public safety-enhancing value. But discretionary stops, or low-level equipment-failure vehicle stops, do not promote public safety. Only a very small minority of them result in any kind of recovery of weapons, and a slightly larger but still small fraction result in recovery of illegal contraband like drugs.

A lot of these discretionary activities that police are engaged in are not only not promoting public safety, but they are disproportionately borne by communities of color. And that has the effect of violating the Constitution — violating those people’s right to equal protection and due process — but it also destabilizes the communities and causes a lack of trust and a lack of cooperation.

In the case of something like the murder of George Floyd, it’s hard to use the usual explanations of automatic bias and the like to explain a nine-minute strangulation. But the more typical cases, where there’s a shooting and maybe even a foot pursuit, those are disproportionately Black victims when they’re unarmed. That’s quite clear in the research, although there’s another body of research that shows that if you look at all of the cases of fatal officer-involved shootings, there doesn’t appear to be a racial disparity. The problem with that analysis is that it’s really the unarmed victims that are the ones who shouldn’t be getting killed by the police, and that’s where the disparities reside. The much larger number of cases are armed victims, and they tend to be White men.

The fact that it took the George Floyd killing to bring this to the public consciousness, to the boiling point where change can actually happen, says something about the way our society is structured, and about how people from a hegemonic group are unlikely to relate to the challenges of minority groups. What we don’t want to lose sight of is that killings and use of force against unarmed Black men are just the tip of the iceberg of the daily indignities that Black people suffer at the hands of overzealous police. That’s the big mass of the iceberg under the water that most of us don’t see, but that minority communities feel the weight of very, very heavily.

Podcast

Matrix Podcast: Interview with Rebecca Herman

Rebecca Herman

In this podcast, Michael Watts interviews Rebecca Herman, Assistant Professor of History, UC Berkeley. Professor Herman’s research and writing examine modern Latin American history in a global context. Her first book, forthcoming from Oxford University Press, reconstructs the history of U.S. military basing in Latin America during World War II – through high diplomacy and on-the-ground examinations of race, labor, sex and law – to reveal the origins and impact of inter-American “security cooperation” on domestic and international politics in the region. She has also authored past and forthcoming articles and book chapters on the global politics of anti-racism, the Cuban literacy campaign, the Brazilian labor justice system, and U.S.-Latin American relations. She is currently working on a new book project on Antarctica, Latin America, and the World.

Prior to entering academia, she spent several years in Argentina, Chile, Bolivia and Brazil working as a freelance translator, researcher, and documentarian. Before joining the faculty at Berkeley, she was Assistant Professor of International Studies and Latin American Studies at the University of Washington, Seattle. She received her Ph.D. in History from UC Berkeley and her B.A. in Literature and History from Duke.

Produced by the University of California, Berkeley’s Social Science Matrix, the Matrix Podcast features interviews with scholars from across the UC Berkeley campus. The Matrix Podcast is hosted by Michael Watts, Emeritus “Class of 1963” Professor of Geography and Development Studies at UC Berkeley.

Listen on Apple Podcasts or Google Podcasts.

Podcast

Matrix Podcast: Interview with Brittany Birberick

Brittany Birberick

In this episode, Professor Michael Watts interviews Brittany Birberick, an anthropology PhD student at the University of California, Berkeley — and a former Matrix Dissertation Fellow. Birberick’s dissertation project focuses on urban transformation in Johannesburg, South Africa. More broadly, she writes and thinks about economies, migration, temporality, and aesthetics within an urban context. Her dissertation, “Paved with Gold: Urban Transformation in Johannesburg,” situates the city of Johannesburg historically, considering the extractive economy of gold that initiated its development to understand the city’s contemporary tensions: a dilapidated post-apartheid city aiming to be a world-class global city. Her research takes place in Jeppestown, a neighborhood in Johannesburg, and focuses on the inhabitants and built environment of a single street. Today, Jeppestown is portrayed either as on its way to becoming a site of redevelopment by the Johannesburg Development Agency, artists, and private developers, or, if left unattended, a crime-ridden area and hotbed of xenophobic violence. The dissertation posits that rather than transformation and development projects leading to an inherently new city or inherently new object, Jeppestown, like many urban areas around the world, is caught in a back and forth between being a successful or failed urban space — a “good” or “bad” city.

Birberick received the Association for Africanist Anthropology’s 2019 Bennetta Jules-Rosette Graduate Essay Award for her essay, “Dreaming Numbers,” which is an analysis of fafi, a street-based lottery game played by residents in Jeppestown. The piece investigates the ways in which dreams, gambling, and interpreting patterns become meaningful strategies for choosing the next winning number and reducing uncertainty in the city.


Podcast

Matrix Podcast: Interview with Clancy Wilmott

Clancy Wilmott

In this episode, Professor Michael Watts interviews Clancy Wilmott, Assistant Professor in Critical Cartography, Geovisualisation, and Design in the Berkeley Center for New Media and the Department of Geography. Professor Wilmott comes to UC Berkeley from the Department of Geography at the University of Manchester, where she received her PhD in Human Geography with a multi-site study on the interaction between mobile phone maps, cartographic discourse, and postcolonial landscapes. At UC Berkeley, Professor Wilmott teaches graduate-level combined theory/studio courses on locative media, cross-listed courses in digital geographies, as well as core curriculum on geographic information systems in the Geography department.

Produced by the University of California, Berkeley’s Social Science Matrix, the Matrix Podcast features interviews with scholars from across the UC Berkeley campus. The Matrix Podcast is hosted by Michael Watts, Emeritus “Class of 1963” Professor of Geography and Development Studies at UC Berkeley.

Listen on Apple Podcasts or Google Podcasts.


Podcast

Matrix Podcast: Interview with Mariane Ferme

Mariane Ferme

In this episode, Michael Watts talks with Mariane C. Ferme, Professor of Anthropology at UC Berkeley and the author of Out of War: Violence, Trauma, and the Political Imagination in Sierra Leone and The Underneath of Things: Violence, History, and the Everyday in Sierra Leone.

Ferme is a sociocultural anthropologist whose current research focuses on the political imagination, violence and conflict, and access to justice in West Africa, particularly Sierra Leone. Her research encompasses gendered approaches to everyday practices and materiality in agrarian West African societies, and work on the political imagination in times of violence, particularly in relation to the 1991-2002 civil war in Sierra Leone. Her most recent fieldwork in Sierra Leone—carried out in 2015-16—was an interdisciplinary research project on changing agrarian institutions and access to land in the country. Ferme’s latest book, Out of War: Violence, Trauma, and the Political Imagination in Sierra Leone, draws on her three decades of ethnographic engagements to examine the physical and psychological aftereffects of the harms of Sierra Leone’s civil war.


Podcast

Matrix Podcast: Interview with Leigh Raiford

Leigh Raiford

In this episode, Michael Watts interviews Leigh Raiford, Associate Professor of African American Studies at UC Berkeley and author of Imprisoned in a Luminous Glare: Photography and the African American Freedom Struggle, finalist for the 2011 Berkshire Conference of Women Historians First Book Prize. In her book, Raiford argues that over the past one hundred years, activists in the black freedom struggle have used photographic imagery both to gain political recognition and to develop a different visual vocabulary about black lives. Offering readings of the use of photography in the anti-lynching movement, the civil rights movement, and the black power movement, Imprisoned in a Luminous Glare focuses on key transformations in technology, society, and politics to understand the evolution of photography’s deployment in capturing white oppression, black resistance, and African American life.

Listen on Apple Podcasts or Google Podcasts.


Podcast

Matrix Podcast: Interview with Desiree Fields

Desiree Fields

In this episode, Michael Watts talks with Desiree Fields, Assistant Professor of Geography and Global Metropolitan Studies at the University of California, Berkeley.

Fields’ research explores the financial technologies, market devices, and historical and geographic contingencies that make it possible to treat housing as a financial asset, and how this process is contested at the urban scale. At the heart of her work is an interest in how economic and technological transformations unevenly restructure urban space and social relations, with a particular concern for how urban struggles for justice coalesce around these changes. Within this broadly defined area, she examines two transformations as they relate to housing, a crucial vector of urban inequality and terrain of grassroots political contestation: first, the shift to a finance-oriented political economy; second, the growing global reach and power of digital platforms.

Related Materials

Listen on Apple Podcasts.

Podcast Transcript


[MUSIC PLAYING]

Woman’s Voice: The Matrix Podcast is a production of Social Science Matrix, an interdisciplinary research center at the University of California Berkeley. Your host is Professor Michael Watts.

Michael Watts: Hello. This is the Matrix Podcast. Our interview today is with Professor Desiree Fields, Assistant Professor of Geography on the Berkeley campus and also a core faculty member in the Global Metropolitan Studies program.

Desiree is a relatively recent arrival here. She happens to be a Bay Area native. So she’s coming home in that regard. But prior to arriving here, she taught for a number of years at the University of Sheffield in the UK.

Her focus in research terms is primarily on the housing sector and particularly changes in the housing sector that have occurred since the financial crisis of 2008. And she’s especially concerned, and that’s what she’ll be talking about today, with the sorts of changes, technological changes related to, among other things, social media, cloud computing, mobile computing, tech boom 2.0 as it is sometimes popularly referred to, and how that has been transforming the structure and character of the housing market and actors in the housing market. So Desiree, welcome and thank you so much for coming along and talking to us today.

Desiree Fields: Hi, Michael. Thanks for inviting me.

Michael Watts: Good. So let me begin with a sort of general question. Your training and your formation and most of your professional experience has been as a geographer. So let me start with this question. How do you as a geographer think about housing and why for you it’s so central to understanding, for example, contemporary American capitalism?

Desiree Fields: Sure. Yeah. So housing is a really interesting thing to look at as a geographer because it’s both fundamental to the urban landscape and how we make and remake cities, but particularly in places like America and other highly advanced economies, it’s also really central to economic growth and reproduction.

And so that’s really how I think about housing both as something that is actually increasingly important in our economy and increasingly sort of structured by financial interests and the way that those financial interests are affecting urban housing markets.

So I think about how do financial actors seek to use housing as a means of making profits from cities and from housing markets and how are those efforts by financial actors to extract wealth from housing fundamentally changing our cities.

Michael Watts: So is it fair to say that in a way there’s been– there’s been a lot of talk, of course, about the dominance of finance or finance capital in contemporary America or the transatlantic economies. Do you see much of your work then in housing as a type of example of this financialization at work? And these actors that we’ll talk about in just a second, investment banks or whatever they may be. Is this for you not exclusively but a type of financialization story?

Desiree Fields: Yeah, certainly. I mean, my work is about financialization, but I think one thing that distinguishes my work perhaps from– look, I mean, there’s a lot of work on financialization and housing financialization now in geography, urban studies, and lots of other fields.

My particular focus on financialization of housing has always really sought to look not only at that process of treating housing as a financial asset or attempting to treat housing as a financial asset but to also look at how does that change the lived experience of housing and home, how does that create inequalities that then become the site of urban struggles.

So yes, I’m interested in the economic parts of this process but just as much on the politics and power relations and lived experience of this process.

Michael Watts: Let me ask you, why did you gravitate toward housing as a graduate student really? I guess your PhD was awarded after the 2008 financial crisis. Was it triggered by that and changes that you saw when you were in New York, let’s say? Or were there other motives that drew you to housing as being something that was so central to understanding, for example, contemporary inequality?

Desiree Fields: Sure. I mean, so I came into graduate school and into environmental psychology with a background in psychology and social work. And I had been working in San Francisco as a counselor in residential settings and working with a primarily homeless population.

And my motivation for going to graduate school was my frustration with the system that I was working in and how I perceived it as almost necessitating that the people I was working with be in crisis in order to be housed.

And so I started to see that the system was reproducing a cycle of crisis in people’s lives that, of course, was not helpful for their mental health. So I wanted to pursue a PhD in environmental psychology to better understand that process and to intervene in it.

And then of course, in the middle of my graduate training, 2008 happened, and I was working closely with my PhD supervisor on a research project about experiences of foreclosure in families in different cities across the country.

And so I think that’s where the kernel of my interest in financialization of housing really lies. And then I just think– I think getting your PhD at such a pivotal time in our history, it just–

Michael Watts: Right. Exactly

Desiree Fields: Yeah. It was inescapable to focus on housing.

Michael Watts: Well, let me just take up on this issue of your Bay Area origins and working in the Bay Area. I mean, obviously, there’s probably no more central and contentious an issue right now in the greater Bay Area than housing costs, than the homeless question, than gentrification and so on.

And sometimes those things are seen to be in a way if not peculiar to San Francisco, they’re in part, of course, driven by a lot of wealth in Silicon Valley and so on. But is your view that these sorts of issues are peculiar to the likes of San Francisco, or New York, or Los Angeles? Or do we see these showing up in very different types of urban contexts in the Midwest, in Florida, or other parts in the transatlantic economies?

Desiree Fields: I mean, certainly there are particular variations that we associate with San Francisco, Los Angeles, New York, and these kinds of major cities. But in the North Atlantic, we live in late capitalism. And in that system, housing is a crucial part of the economy.

And in that sense, housing financialization, homelessness, gentrification, rising rents, these are problems that we see across the United States in rural contexts, in small cities, in major cities. We see the presence of institutional investors in suburban housing markets, urban housing markets, and in between.

So it’s a variegated problem as geographers would say. So this is a problem that has geographical variation, sociospatial difference, but nonetheless, it’s a pretty generalized process I would say.

Michael Watts: Good. Let me start by moving into your research by focusing specifically on the financial crisis of 2008. Why, in your view, is this such a foundational moment in understanding the changes that you’ve documented, which we’ll get to in just a second, within the broader real estate sector?

Desiree Fields: Sure. So leading up to 2008, of course, there were decades of financial innovation, as we might call it. So a proliferation of different kinds of lending products that really transformed the notion of the 30-year fixed-rate amortizing mortgage.

We began to see different mortgage rates, different mortgage products, interest-only mortgages, et cetera, et cetera. So we saw this kind of real–

Michael Watts: Proliferation of instruments as it were.

Desiree Fields: Of different instruments and different ways of then treating those debts as the raw material of financial products. So mortgage backed securities and all of those other– all those other things.

So the dissolution of all of that in the 2008 crisis, all of that was a turning point in itself. All of those decades of innovation, I think, really changed how people think about housing, so just at a personal level, this idea of being able to take out a loan against your house in order to go on a vacation or pay for your kid’s education or whatever.

So the mortgage as a means of supporting personal consumption, that was a difference. The idea of the mortgage as the ingredient in a financial asset that was a difference. And then so when all of that fell apart in 2008, it became as crisis often is in capitalism an opportunity for different kinds of actors to capitalize on all of this dispossession and loss.

Michael Watts: And these were as it were traditional financial actors, meaning investment banks? Who were the cast of characters who when that massive foreclosure happened in Stockton here in California or in Florida? What were the actors that were piling in to this sector that was literally, as you said, being ravaged?

Desiree Fields: Sure. So what we began to see in the aftermath of 2008 was really the increased involvement in the housing market of actors like private equity funds, who were well known in terms of taking over large businesses like retail businesses and all kinds of other business settings.

So private equity players are known for taking over what they would call distressed assets or distressed businesses and attempting to turn them around. So we began to see the entrance of these private equity players coming into the housing market. So these very large scale investors with a lot of capital.

And this was pretty new. I mean, there’s a history in apartment buildings, particularly in New York and some other major cities, there’s a history of this kind of institutional investing in multifamily housing, not so much by private equity firms but–

Michael Watts: But these were acquisitions of single family rentals that were, quote, “distressed assets”, end quote. So this was in a sense a consolidation and a massive acquisition of these sorts of properties. Is that correct?

Desiree Fields: Right. Yeah. I mean, essentially, the foreclosure crisis created an opportunity ostensibly for all kinds of investors to buy up housing that had been devalued. But what we saw was that the ones who really had access to large pools of capital were private equity investors largely, who were able to come in and buy properties from banks often one by one, so on the courthouse steps as it were every month when foreclosed properties were being auctioned off.

And really, assemble portfolios consisting of thousands and tens of thousands of single family homes that they then rented out. So there was this shift happening from home ownership to just real huge increase in rental demand as people, both homeowners and tenants, lost homes in the foreclosure crisis.

Michael Watts: And these properties that were so acquired, were they distributed nationally or would a private equity firm, for example, be specializing in southern Florida and southern California, were they national in that sense in these large scale holdings that they possessed?

Desiree Fields: So the geography has shifted a bit over time, but there’s definitely a distinctive Sunbelt geography to this phenomenon. And so we see parts of California, largely southern California and a bit of northern California, markets like Las Vegas, Phoenix, Tampa, a heavy presence in Atlanta.

So it’s really western, southwestern, and southeastern metro areas that saw private equity kind of descend on these places. And it kind of started in the west and then moved east.

Michael Watts: But it’s not exclusively private equity firms that are the sole actors in this consolidation of single family rentals. Or are they the major player would you say?

Desiree Fields: There’s been a bit of a shift. So particularly in a place like Oakland, you saw a lot of smaller, local-ish private equity firms, but that was in the immediate aftermath, during and right after the crisis. And then you saw larger firms like Blackstone or Colony Capital coming in.

And so what we’ve seen over the past almost a decade now is this kind of the entrance of smaller players, some private equity backed, some just kind of savvy local investors. And you’ve seen this kind of consolidation happening.

So the smaller players come, the larger players then descend. You see sometimes they buy up the inventory of the smaller actors. You then see some of the larger actors themselves start to consolidate and merge with one another.

And then you have these private equity players like Blackstone, their single family rental company is called Invitation Homes. Invitation Homes is now a real estate investment trust. So there’s also a shift in the corporate structure of these companies.

Michael Watts: Now, is this a peculiarly American phenomenon? Or do we see this in Canada? Do we see this in the UK or versions of it?

Desiree Fields: We see it all over the place. I mean, particularly if you look at other countries that were really hit hard by the crisis in terms of the impact of the 2008 crisis on their housing markets. Ireland, Spain, these places you have seen the entrance of many of the same players, particularly Blackstone, acquiring distressed housing, sometimes distressed public housing, and renting it out.

You’ve also seen in those countries the rollout of legislation by the state that really supports this strategy by allowing real estate investment trusts. You also have seen a heavy financialization of rental housing in Canada, which was not hit as hard by the crisis but nonetheless is subject to this trend.

Less so in the UK because their rental market is still very, very fragmented.

Michael Watts: Quite. Quite. Quite. Quite. Now, let’s take Blackstone. So you have now thousands, perhaps even tens of thousands of properties, across a national space. How are they managed? This seems to be a central part of your story and the role of technologies of various sorts in facilitating that. So walk us through that story.

Desiree Fields: Sure. I mean, so this is so interesting to me because I grew up in a single family home in Concord. And a lot of us grew up in single family homes. So what is tricky to think about doing is managing a portfolio of 100,000 of these properties, which unlike apartment buildings, are all sitting in different places.

So all of Blackstone’s properties are not on the same block or even in the same neighborhood in any one city that they’re operating in. So you have a scattered set of assets that were built at different times, non-standardly constructed, and that is a real challenge in terms of operations and management.

And so I think while the foreclosure crisis and the price dislocation that that created presented an opportunity for these actors, it was not sufficient for them. I think what we really see is the intersection of that price dislocation with this crazy boom in technology that we’ve seen essentially happening over the same time scale.

And that is what has enabled investors like Blackstone to manage such a huge portfolio of properties.

Michael Watts: Let’s start there because you in your work, you flag a number of innovations or a number of forces that you think are central for anyone to understand and what you’ve just described. And maybe before we get into the technologies, let me just ask you about those things that you think are so important.

One is you refer to logistics. So why is logistics– perhaps just walk us through what you mean by that term. And why an understanding of something called logistics is an integral part of the story that you’ve just outlined?

Desiree Fields: Sure. So we often think of logistics in terms of ways of trying to move things efficiently from one place to another. So if we think about cargo ships and the ways in which products that are produced in China are moved in shipping containers on ships, and then how all of those goods get to different points throughout the United States.

So logistics is this kind of scientific or technical approach to the movement of stuff.

Michael Watts: So this could be a type of classical global supply chain when we talk about the iPhone and its various components being outsourced and moved around, something of that sort.

Desiree Fields: Right. And so I try to take this idea of logistics and use it to think about how do we move capital basically and how do we organize rental homes in such a way that rent checks can be moved from tenants’ bank accounts into global financial markets via financial products.

And so I’m thinking about this notion of the supply chain in terms of the supply chain of financial products. So rent checks as the thing that needs to be kind of moved and distributed through an extended supply chain of different kinds of actors.

Michael Watts: Got it. Got it. Now, a second force that you identify or a set of innovations you refer to here some work of our colleague actually, Neil Fligstein on campus here at Berkeley, the vertical integration of the firm. Now why is that an important part of your story too to understand the real estate innovations?

Desiree Fields: Yeah. I mean, it’s interesting if you look at a lot of work about the economy over the past 50 or 60 years, there’s a lot of talk about vertical disintegration and the shift from these huge producers and large factories controlling everything within the factory to the creation of these kinds of flexible, lean, et cetera.

Michael Watts: Decentralized networks of production.

Desiree Fields: Forms of production. But I drew on Neil’s work because he was looking at the production of subprime loans and found that the production of subprime loans in the lead up to the crisis was engineered– some of the most important actors in that space were heavily vertically integrated.

And so these firms were responsible for everything from knocking on doors and originating loans in neighborhoods all the way up to securitization and selling those products on financial markets. And what I observed with actors like Blackstone is exactly the same kinds of behavior.

So keeping all of those activities internal to the company all the way from acquisition of homes to the rehabilitation of those homes, the operation and management of those properties, and the securitization of the rent into a financial asset.

Michael Watts: Absolutely. Since you’re talking about the firms themselves, let me ask you a related question, namely, how does one study these things? I mean, typically studying large corporations is a difficult issue.

They don’t disclose very much. Their records may not even be in the public domain. What was your approach to trying to understand that vertical integration that you just described from the knocking on the door up to the instruments that were securitized?

Desiree Fields: Sure. So I do a lot of what we call in the UK desk research, which consists largely of following a lot of media and journalistic accounts of this process. I think one of the things that was really interesting about this phenomenon, this kind of creation of the rent-backed security, is that it is and was a totally new financial asset class.

And because of that, there was a lot of interest in the process by investors, by credit rating agencies, by people in capital markets. There was a lot of speculation about whether this business model could exist, whether investors like Blackstone were trying to quickly buy and flip properties or whether they were in it for the long haul.

And because of that, there was a lot of material to work with. So I did a lot of this kind of desk research. I do conference ethnography, where I go to investment forums and do participant observation at conferences and use that as a way of networking with actors in this space and interviewing them.

Michael Watts: So we’ve got the logistics. We’ve got the vertical integration. Let’s turn to the technologies themselves. Why don’t you just walk us through then when you’ve got this organizational supply chain challenge in a sense of multiple properties dispersed over space, different housing stock qualities, et cetera, et cetera, et cetera, how does technology, what you call platform capitalism– I’ll come back to that term, I’d like you to explain that. But what are the sorts of technologies that are in play at this point?

Desiree Fields: Sure. So we can think about it starting from the point of acquisition. So if you’re trying to scale up a portfolio that you need to have 1,000 properties in a given city, for example, or a metro area, how do you acquire those properties before all the other investors who are interested in the same set of properties get in there and prices go up?

And so what we began to see was the development of acquisition algorithms or acquisition engines that basically take all kinds of public and proprietary and private data about housing stock and neighborhoods, employment growth, and transportation, the age of the housing stock, the quality of the neighborhood, proximity to schools, all of these kinds of data points, throw them into an algorithm and use that to identify geographies that have concentrations of properties that meet those criteria.

Michael Watts: I see.

Desiree Fields: So we saw companies using these kinds of algorithms to essentially delineate geographical areas where they could acquire properties and to designate what a maximum bid for a particular property in that space might be.

And so they were able to, even though they were largely buying properties one by one and not in bulk, they were able to use that to very efficiently scan what was available and then drill down into, OK, what are the properties that meet our investment criteria and how much are we going to pay.

And so that kind of is like an industrial strategy–

Michael Watts: Acquisition strategy almost.
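The acquisition "engine" described above can be caricatured as a scoring-and-bid-cap loop. The following is a hypothetical sketch only: every field name, weight, and threshold here is an invented assumption for illustration, not a detail from any real investor's model.

```python
# Hypothetical sketch of an acquisition engine: score listings against
# investment criteria, then cap the bid by a target rental yield.
# All fields, weights, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Listing:
    address: str
    asking_price: float
    year_built: int
    school_proximity: float   # 0..1, closer to 1 = nearer good schools
    employment_growth: float  # 0..1, normalized metro job-growth index
    est_monthly_rent: float

def investment_score(p: Listing) -> float:
    """Weighted score combining the kinds of data points mentioned above."""
    # Newer housing stock is assumed cheaper to maintain.
    age_factor = 1.0 if p.year_built >= 1980 else 0.6
    return 0.4 * p.school_proximity + 0.4 * p.employment_growth + 0.2 * age_factor

def max_bid(p: Listing, target_gross_yield: float = 0.08) -> float:
    """Cap the bid so annual rent clears a target gross yield."""
    return (p.est_monthly_rent * 12) / target_gross_yield

def shortlist(listings: list[Listing], min_score: float = 0.6) -> list[tuple[str, float]]:
    """Return (address, bid cap) for listings meeting the score threshold,
    skipping any where the bid cap falls below the asking price."""
    picks = []
    for p in listings:
        if investment_score(p) >= min_score and max_bid(p) >= p.asking_price:
            picks.append((p.address, round(max_bid(p), 2)))
    return picks
```

Run over every listing in a metro area, a filter like this delineates the geographies with concentrations of qualifying properties and attaches a maximum bid to each, which is the "one by one, but at scale" pattern Fields describes.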

Desiree Fields: Right. So then once you have– once you’ve acquired the properties, it’s a question of, OK, well, how do we manage all of this stuff? Certainly, the landlord is not going to be going around to properties one by one and engaging with tenants.

So we see things, all of these companies have tenant portals where you can pay your rent via this portal rather than writing out a rent check and sending it into the management office, where you can submit a maintenance request if something is wrong with your home.

And as I learned in attending these conferences and investment forums, the Holy Grail of this, and it sounds so simple now, is just having tenants upload a picture of whatever is happening, so that they can diagnose the problem remotely rather than having a workman or a contractor come out twice, once to diagnose and again to fix the problem.

So you have that kind of management at a distance enabled by portals for payment and maintenance. And then there’s this question of turnover. So we see actors deciding, OK, well, I need to add 20 properties in this metro area, or actually, we’re not going to be active in Las Vegas anymore, so we want to drop all of these properties in this market.

And when you have tenants in properties and you need to add to or cull your portfolio, then it becomes a real problem: are you going to get the tenants out and put the property on the market vacant, wait for someone else to buy it, and then the person or company who buys it has to get in there, do rehab, and get a lease signed?

So meanwhile, you’re just leaving rent on the table. And so what we have seen is the emergence of platforms that are designed to basically enable the buying and selling of single family rental properties without getting the tenants out. So you have a platform like Roofstock, which is designed specifically for this purpose.

Michael Watts: So let me ask then about the renters themselves and the ways in which these types of technologies and particularly of big data of various sorts– you yourself talk about an interesting legal case from last year in which Facebook was implicated in exactly providing this sort of private data about actual or potential renters. Where does that enter into the tech story that you’ve just walked us through?

Desiree Fields: Sure. So I mean, what is especially important to consider and just understand when you’re looking at the ways in which technology is reconfiguring housing markets is that, of course, our housing markets are already fundamentally unequal and heavily segregated by processes of discrimination, both individual and structural.

And so any kind of technology that’s operating in the housing market is, I would say, likely to perpetuate these same processes. And so this case that HUD brought against Facebook is really interesting because they allege that Facebook violated the Fair Housing Act via the ways in which housing ads and mortgage ads on Facebook Marketplace were made available to different Facebook users.

And so they contend that this happened in two ways. The first way we might think of is like uploading existing biases into the system. So Facebook Marketplace, the platform that advertisers were interacting with to place these ads, essentially allowed people taking out ads to pick and choose which kinds of Facebook users they wanted to see the ad.

And there was even a tool where they could draw a red line around areas that they did or did not want people located in those areas to see the ads, which just the idea of a tool that allows them to draw red lines around specific areas obviously harks back to–

Michael Watts: Redlining and in real estate more generally.

Desiree Fields: Exactly. And the reason that Facebook was able to make different categories of users available for advertisers to pick and choose who will and won’t see the ads is because of our activities on Facebook essentially creating all of this data that Facebook can use to classify us.

So based on my activities on Facebook, they probably know, for example, that I’m an academic, that I’m interested in housing, that I’m a parent, that I live in the Bay Area. And so all of those things might kind of get condensed to put me into some kind of box that says, OK, here’s a middle class white woman living in the Bay Area who’s interested in stuff for her kid.

And so Facebook uses that data to present categories to advertisers.

Michael Watts: So this is also classifying people and families in a sense. You refer to this; again, one of our colleagues, Professor Marion Fourcade, has talked a great deal about this. You cite her use of this notion of an information dragnet, which is in the business of exactly that type of classification.

Desiree Fields: Exactly. And so Facebook Marketplace more or less allowed advertisers to upload their existing biases by picking and choosing who will and will not see these ads. But the lawsuit that HUD brought goes further, to say that the platform itself generated its own kind of discrimination that was a violation of the Fair Housing Act, because regardless of whether advertisers wanted to, say, cast a wide net and have lots of different kinds of groups see the ads, the platform would choose which categories of people did and did not see the ads on the basis of who they deemed most likely to interact with that ad.

So who’s most likely to click on the ad? And so this is a really interesting case where it’s like, OK, yes, platforms allow us to transmit our biases online. But platforms themselves, by virtue of those dynamics of classification and the analysis of data predicting who will and will not interact with certain kinds of content, generate their own kind of discriminatory behavior.
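A toy illustration of the delivery-skew mechanism described here (this is not Facebook's actual system; the groups, click rates, and budget are invented for the example): even when an advertiser targets everyone, a delivery system that ranks users by predicted clicks can expose groups to an ad at very different rates.

```python
# Toy model of ad delivery optimized for predicted clicks.
# All numbers are invented; this is not any real platform's system.

def deliver(users, predicted_click_rate, budget_impressions):
    """Show the ad to the users the model ranks most likely to click."""
    ranked = sorted(users, key=lambda u: predicted_click_rate[u["group"]], reverse=True)
    return ranked[:budget_impressions]

# 100 users: half in group A, half in group B. The advertiser targets all of them.
users = [{"id": i, "group": "A" if i < 50 else "B"} for i in range(100)]

# The model has learned from historical data that group A clicks more often.
rates = {"A": 0.05, "B": 0.02}

shown = deliver(users, rates, budget_impressions=50)
share_b = sum(u["group"] == "B" for u in shown) / 50
# With this ranking, group B receives no impressions at all,
# despite fully neutral targeting by the advertiser.
```

The skew arises purely from the optimization objective, which is the crux of the second part of HUD's allegation as described in the interview: no one has to choose a biased audience for the delivery outcome to be disparate.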

Michael Watts: That’s fascinating. I mean, this points to, I guess, a larger question about the degree to which these technological innovations within this sector are now demanding new sorts of regulatory interventions of various sorts or to what degree are these innovations pushing into frontier areas that effectively are not regulated.

Or we could anticipate exactly these types of HUD-Facebook issues becoming central to transparency and disclosure within this sector.

Desiree Fields: I mean, even as HUD is bringing this suit against Facebook around violations of the Fair Housing Act, HUD is also increasingly reluctant to pursue other modes of algorithmic discrimination in housing.

And so it’s not really clear whether and how HUD, as a regulatory agency charged with upholding and strengthening fair housing, is going to intervene effectively in this space.

But we see other kinds of pushes for regulation around these kinds of technological interventions happening. So one case that a lot of people might have heard about is the use of facial recognition systems to access housing.

So there’s a pretty well known case now in New York of a property management company that wanted to use a facial recognition system as the primary way that tenants would be able to get into and out of the apartment complex.

They had properties in Central Brooklyn and in the Bronx, a primarily Black resident population. And the tenants said, we do not consent to this kind of surveillance. We already feel heavily surveilled by virtue of the cameras and the electronic key fobs that you’re requiring us to use.

And we feel that this is a kind of punitive, carceral intervention that is really designed to criminalize us as tenants. And you see companies advertising this kind of biometric access system as a way for property owners to more or less catch tenants who might be violating the terms of their lease.

And so this is especially poignant in the context of gentrification and the ways in which landlords might be able to use this kind of technology to evict tenants and raise rents. So those tenants in New York fought back. The city is now really pushing to restrict the use of facial recognition systems at all.

And you see this wave of legislation by cities around the country outlawing the use of facial recognition on the basis that it is not only inaccurate but discriminatory and just difficult to govern.

Michael Watts: I mean, obviously, these types of push backs in and around what is effectively forms of surveillance is one area that’s going to be clearly an important policy arena going forward. I’m wondering whether these types of platforms that you’re describing, precisely because you’ve written a lot and have been involved in various types of social justice work around evictions, around rental properties, et cetera, et cetera, are these platforms also being used by advocacy organizations?

Or are there in a sense technological counterweights to exactly this type of process that you’ve been describing, which, if I understand you correctly, is in a way facilitating the deeper structural forms of segregation, exclusion, and eviction that have always inhabited the housing sector?

Desiree Fields: Yeah. I mean, I think what we know about technology is that it’s a tool. And the technology exists in a social context and a political context. And all of the kinds of interventions that we’ve just been discussing are really uses of technology within a market context, within a capitalist context, deployed more or less to further capital accumulation.

The good news is that it doesn’t have to be used that way. And so we see, of course, really sophisticated and exciting uses of technology by activists and advocates that are trying to push back on that process in various ways.

So of course, there’s the Anti-Eviction Mapping Project, which is well known for its work mapping evictors and evictions, and also for developing counter-narratives to push back on this process that we see really turning the Bay Area inside out.

We see groups like justfix.nyc, which has developed a tool called Who Owns What that draws together lots of different public data sources into a user-friendly interface.

So you can put in an address and look at who owns the property, what are all the other buildings that they own. Because this process of financialization means that your landlord could own lots of different properties not just in your neighborhood but all over a city.

And that, in terms of activism, I think demands a different kind of organizing that might be more portfolio-based than strictly neighborhood-based. So this kind of tool from justfix.nyc helps to facilitate that portfolio-based organizing.

I’ve been doing some fieldwork in Berlin on this question. And we see some interesting, more social-enterprise-focused strategies that use legal technology, machine learning, and so forth to scan rental contracts, check them against regulations, find out if tenants are being overcharged, and help them recoup those costs.

And then we see platforms like Doma, which is trying to use smart contracts and crowdfunding to facilitate investment in rental properties and then pay some of the dividends back to tenants.

So it’s basically trying to redistribute equity that develops in property over time, rather than letting that all go to property owners.

Michael Watts: So in a sense, all of these are pointing to the need to innovate political and advocacy strategies that are congruent with the types of technological innovations that are transforming the sector itself.

It’s going to demand, in that sense, a different style of doing politics, given the nature of the beast that’s now so dominant in the housing sector.

Desiree Fields: I think this is true. However, I think it is also true that we need to push for blanket policies that protect and support all renters because even though these technological processes are transforming housing markets and even though they might be affecting greater and greater parts of the housing market, they aren’t going to affect everyone.

And if we target only those kinds of interventions, we might protect and support some people, but not everyone. When we know the housing market itself is operating in fundamentally unequal ways that benefit landlords and property owners and disadvantage renters, we need to be careful, I think, about how we focus our organizing.

And so pushing for things like just-cause eviction protections across the board, and for things like good rent controls across the board, is just as important.

Michael Watts: Thank you, Desiree, for a fascinating conversation. These issues are obviously central not only to the Bay Area politics we’ve been involved with currently, but they also have national and, in fact, global implications. We’ll certainly be circling back to many of these issues in future podcasts. We’ll be posting this and other podcasts on our website, matrix.berkeley.edu. And thank you very much for listening.

Podcast

Matrix Podcast: Interview with Dacher Keltner

Dacher Keltner

In this episode of the Matrix Podcast, Michael Watts talks with Dacher Keltner, Professor of Psychology, Director of the Berkeley Social Interaction Laboratory, and Faculty Director of the Greater Good Science Center.

Dacher’s research focuses on the biological and evolutionary origins of emotion, in particular prosocial states such as compassion, awe, love, and beauty, as well as power, social class, and inequality. He is the co-author of Born to Be Good: The Science of a Meaningful Life, The Compassionate Instinct: The Science of Human Goodness, and The Power Paradox: How We Gain and Lose Influence. Dacher has published over 200 scientific articles, written for many media outlets, and consulted for the Center for Constitutional Rights (to help end solitary confinement), Google, Facebook, the Sierra Club, and Pixar’s Inside Out.

Related Materials

Listen on Apple Podcasts or Google Podcasts.