Podcast

Institutionalizing Child Welfare: An Interview with Matty Lichtenstein

Matty Lichtenstein

How do American child welfare and obstetric healthcare converge? Matty Lichtenstein, a recent PhD from UC Berkeley’s Department of Sociology, studies how state and professional organizations shape social and health inequalities in maternal and child welfare. Her current book project focuses on evolving conceptions of risk in social work and medicine, illustrated by a study of the intertwined development of American child and perinatal protective policies. She is working on several collaborations related to this theme, including studies of maltreatment-related fatality rates, the racialization of medical reporting of substance-exposed infants, and risk assessment in child welfare.

In another stream of research, she has written on social policy change, with a focus on educational regulation and political advocacy, and she has conducted research on culture, religion, and politics. Dr. Lichtenstein’s work has been published in American Journal of Sociology, Qualitative Methods, and Sociological Methods and Research. She is currently a postdoctoral research associate at the Watson Institute for International and Public Affairs at Brown University.

In this podcast episode, Matrix content curator Julia Sizek speaks with Lichtenstein about her research on the transformation of American child welfare — and the impact of that transformation on contemporary maternal and infant health practices.

Excerpts from the interview are included below (edited for length and content).

How has the child welfare system changed over the span of time that you study?

I focused my research on the period after the passage of the Social Security Act, because that is the major dividing line for American child welfare. Prior to 1935, when the Social Security Act was passed, we had a fragmented patchwork of mostly private child welfare agencies throughout the United States. The passage of the Social Security Act enabled an expansion of funding for state and local public child welfare. The main shift had to do with thinking about what welfare meant, and what it still means today.

In general, when we think about welfare, we are referring to government support for individuals or groups. The main distinction, especially in the 1930s, was between financial support — giving people money when they needed it and couldn’t get it any other way — and providing services, such as funded medical services, educational services, or psychological counseling. Across social work, which was in a way the parent discipline of child welfare, there was a tension there: do we help people by giving them financial aid, or through social services?

The Social Security Act made that distinction quite clear for child welfare services, because the section that focused on child welfare services emphasized that this was about services in general, and financial aid was a separate part of the Social Security Act for families. One of the things that needed to be figured out was, what is child welfare, and how do you best serve children?

I’ve found in my research that there was an increased emphasis in the 1930s and 40s on the argument that child welfare should serve all the various needs children have. It was not just poverty-related needs. In fact, they veered away from poverty-related needs toward psychological needs, medical needs, health needs, etc. Child welfare advocates pushed for more funding and more resources for child welfare. What happened is that public child welfare grew exponentially in the 1950s and 1960s. The number of child welfare workers started rising dramatically. This led to a larger shift in child welfare and thinking about what child welfare meant in the 60s and 70s.

What was the focus of the child welfare system in the 1960s and 70s?

One of the major findings of my dissertation conflicts with the conventional narrative of child welfare history. The classic narrative is that the late 50s and 60s saw the discovery of child abuse as a social problem. Before then, scholars argue, nobody was talking about child abuse and neglect, and social workers and the public did not see it as a problem. And then by the 60s, it became a public and political issue, and you saw a number of laws being passed to mandate reporting of child abuse. This led to the creation of child welfare as we know it today, which is heavily focused on child abuse prevention and response.

The problem was that, as I dug through more archival resources, I found that that just wasn’t the case. The most damning piece of evidence I found was a publicly available report put out by the Children’s Bureau in 1959, which stated that 49% of public child welfare in-home services related to abuse and neglect. This was in 1959, when current scholars were saying nobody talked about abuse and neglect.

I spent a few months in a sort of existential crisis: what is the meaning of my dissertation if everything is wrong? Eventually, I figured out that not everything is wrong, and that a lot of what was written about the history of child welfare was correct. There was much more of an emphasis on child abuse. But what it missed was this larger moment of transformation in child welfare.

What I show is that it’s not so much that child welfare agencies rediscovered child abuse, as much as they relinquished (sometimes willingly and sometimes unwillingly) jurisdiction over most other child welfare issues, including poverty, health issues, and education, and they retained jurisdiction only over child abuse and child neglect. I show that this happened largely due to larger trends in the American welfare state, specifically welfare state retraction and an increasing focus on efficiency in welfare governance in the late 1960s and 1970s, which demanded that child welfare focus on issues that could be easily defined and services that you could put a price on.

The Children’s Bureau could no longer say they serve all of the needs of the population of children. Instead, there was an increasing shift toward, what is the problem you’re here to resolve? There were advocates that pushed for more focus, but it was all part of this larger shift in the American welfare state.

I also emphasize that the massive expansion of child welfare — that growth of staffing and funding — was also made possible by laws saying, you need to report child abuse. Where do you report it? To a child welfare agency. So now there were thousands of child welfare workers. It had unintended consequences. All the child welfare workers who were supposed to solve all of children’s problems were now there to solve one problem: responding to the rising number of reports of child abuse and neglect.

How was the category of child abuse and neglect defined, and how did it transform over time?

Early research that tried to define what it meant to have abusive parents was primarily in medical journals. That was usually based on things like X-rays of children with broken bones and trying to figure out, was this an accident, or who caused this? There were also psychiatric evaluations of parents saying, what is wrong with parents who do this? It was a diagnostic model of approaching child abuse and neglect. The cases they were referring to were usually fairly severe cases of child abuse and neglect.

Originally, a lot of the laws addressed medical professionals, but they quickly expanded, in part because medical professionals pushed back and said, we can’t be the only ones mandated to report this. And so it quickly started to expand throughout the 1960s and 1970s to include professionals across the board who have any sort of interaction with children, including anyone in an educational setting, anyone in a medical setting, or people who work in funeral homes, for example. They became mandated reporters, which means they were supposed to be penalized if they did not report what were often very vaguely defined forms of abuse and neglect.

This varied greatly across states. Every state had different laws and different sets of mandated reporters, but child welfare agencies across the country started to receive a skyrocketing number of reports. This does not mean that everyone was reporting every suspicion, but there were enough reports pouring into child welfare that they had to figure out what to do with all of them. In the 1970s, and increasingly in the 1980s, that forced a reckoning with the question of how to define child abuse — and how to figure out if what’s happening is child abuse and neglect.

Out of these millions of reports that started pouring in during this era, the majority were usually unsubstantiated. In the mid-1970s, usually around 60% of reports were unsubstantiated. The majority of reports that were substantiated were neglect reports, which were highly correlated with poverty. The rate of substantiated physical neglect reports was eight times higher among low socioeconomic-status children than among other children. So you had a broad category of neglect, which could include everything from passively allowing your child to starve to leaving your child home alone for a few hours when you go out to work. There was a huge range that varied by county and state.

The question then became, if you have this huge number of reports coming in, and the majority of them are not even abuse and neglect, or it’s not clear if it’s neglect or poverty, how do you create a system to prevent and treat a problem that we’re not even sure exists? And that’s really where you started to see this focus on risk. Child welfare and medical professionals affiliated with child welfare began to develop practical risk assessment tools to determine the risk that there’s an actual case of child abuse happening, or that it might happen in the future. These tools had all sorts of problems built into them.

What was wrong about the risk assessment tools that professionals were using?

In the 70s and 80s, the tools were often built on what was called a consensus approach to risk assessment, based on what social workers considered risk variables. These tools were deemed very problematic by the 1990s, but they were still widely used for the first 20 or so years. They tended to incorporate all kinds of variables having to do with the environment of the child. There may not have been any sign that the child was harmed directly, but you look at the environment and try to assess if there are risk variables there. That had to do with everything from the income status of the family to health issues of the parents to the marital status of the mother.

Childcare access could be a risk factor, as well as issues like the stability of the home. In the 1970s, there were risk assessment tools that had factors like, do the parents take this child to movies? Do they have a camera? Do they take the child fishing? Does the child have a mattress? You can see that it’s really hard to disentangle poverty from this.

There were also sometimes cultural factors. There was an early tool that was approved by the predecessor to the Department of Health and Human Services that asked whether the parent had wider family support in child care, and whether they were overly dependent on their family. That gets at something that is cultural, not just economic: studies have found that in families of color, there’s more interdependence and less of an emphasis on nuclear family units, so this could be problematic.

Drug or alcohol use was assessed as a risk factor. When you look at earlier surveys about child welfare services before this transformation toward a focus on child abuse, they would talk about health and family issues as issues of child welfare, but they weren’t risk factors for abuse. Child welfare might intervene if there was some sort of health issue with a parent, but that was seen as distinct, whereas when you look at the studies in the 1970s and 1980s, those same factors were treated not just as health issues, but as risk factors for abuse or neglect. So you saw a trend of structural inequalities and health issues turning into risk factors.

So instead of trying to say, how do we help this family as a whole, it became, how do we assess whether the parent is harming the child? It’s an approach in which parent and child are seen as distinct units, and the question is, are they in some sort of conflict? What’s interesting is that this is a relatively rare problem, in which there’s an intentional effort by the parents to harm the child. It certainly happens, but it’s relatively rare.

How does what you’ve learned matter for people thinking about child welfare policy today?

First, child welfare is under-equipped for multi-dimensional problems. In some states, agencies might have access to more resources, and in other states, the only thing they can really do is child removal or interventions that are often quite disruptive to the family. Having child welfare in charge conflicts with the multidisciplinary approach that’s favored by most professionals.

Second, child welfare is associated with an enormous amount of trauma, especially for families that are low-income and for families of color in the United States. Fifty percent of African-American children in the United States today have experienced a child welfare investigation — one out of two. That’s just crazy. Huge numbers of children are experiencing these kinds of investigations. Perhaps some are very minimal, but some of them are not going to be so minimal.

What we have is potentially traumatic family surveillance and separation that’s intrinsically linked to child welfare, because no matter how helpful or well-meaning a child welfare worker might be, ultimately child welfare has the authority to take your child away, possibly forever. Even if they do that rarely, it can still be something that is laden with fear and anxiety for families.

Adding to that, lower standards of evidence are applied in child welfare proceedings, so that makes it particularly problematic to have child welfare involved in cases of substance-exposed infants, especially because (at least based on the limited data we have, for example, for California), a significant percentage of these infants are taken away from their mothers. Taking a newborn away from their mother is not necessarily an evidence-based approach to dealing with substance use issues. But the paradigm of child welfare is not necessarily to approach the best interests of the family as a whole. The paradigm of child welfare is to reduce and mitigate risk of future child abuse and neglect.

There have been significant shifts in child welfare over time. My research largely ends in about 2000. In the first couple of decades of the 21st century, there has been a concerted effort by child welfare agencies on every level to try to counter some of the intense racialization and income inequality that is reproduced by the child welfare system. We’ve seen a dramatic decline in child removals. For example, in New York City in 1995, there were 50,000 children in foster care. In 2018, there were 8,000 children in foster care. That is a dramatic decline. However, even with only 8,000 children in foster care, an enormous number of children have been investigated, and in New York City in 2019, 45,000 cases were in preventive services. So you still have a lot of child welfare involvement. What that means for families is not really clear yet.

The second major shift is that there’s been an intensification of the focus on risk assessment. We have seen the development of quite sophisticated risk assessment tools, not just the consensus tools, but actuarial tools and algorithmic tools that use computational methods to assess risk. And there have been a lot of critiques of some of these tools. The main issue is, do these tools funnel multiple problems, many of them poverty-related, into child welfare? And even if racial disproportionality in some states has declined, we still have a lot of racial disproportionality in child welfare, and income inequality continues. We don’t have enough data on that to fully assess it. And so we’ve continued to have significant issues with child welfare today, even as it has changed in this new century.

What are the approaches that different states take to the issue of infants who have been exposed to substance use during pregnancy?

In the 1980s, you have an increasing number of reports coming into child welfare of substance use during pregnancy, and a lot of this was highly racialized, in terms of how it was conceptualized. During the 1980s, this problem received a lot of media coverage. And what that means is that state legislators felt they had to do something; they had to respond in some way. And their options were basically to say, well, we can mandate medical intervention in such cases, we can criminalize these women for harming their children and mandate essentially law enforcement interventions, or we can mandate civil interventions through child welfare. The current scholarship on this period — and really on this issue — tends to focus a lot on criminalization, on how pregnant women are jailed or prosecuted for these kinds of uses. And then there’s also a lot of conflation of child welfare interventions and medical interventions, all part of this larger criminalization and policing of pregnant women. And there’s a lot to be said for that framework. But I think it’s actually really important to distinguish between those things, because criminalization is actually relatively rare compared to the thousands of women who are reported in each state to child welfare every year. By far the predominant response is child welfare reporting.

So how do we essentially manage and mitigate this risk of substance-exposed infants? Child welfare has this risk prevention framing, and also, it’s supposed to be dedicated to protecting children. So they are the perfect response. And what’s interesting about this is that child welfare increasingly across states becomes the primary authority for intervening in such cases, even as simultaneously, the professional consensus increasingly converges on the idea that we need a multidisciplinary response to the issue of substance-exposed infants. If you’ve read reports that are put out on this issue of substance-exposed infants, including from the federal government, the consensus is that we need doctors and social workers and financial aid, and perhaps even law enforcement. Everyone needs to work together to deal with this issue of substance-exposed infants. But in practice, the state laws overwhelmingly favor child welfare interventions, and child welfare is mandated to mitigate risk of child abuse and neglect. They’re not there to provide a multidisciplinary approach. They can and sometimes they do; it varies greatly by state. But that’s not their primary mandate. And there are very concrete consequences to having a child welfare response to this issue.

Listen to the full podcast above, or listen and subscribe on Google Podcasts or Apple Podcasts. For more Matrix Podcasts, including interviews and recordings of past events, visit this page.

Article

How CRISPR Became Routine

A visual interview with Santiago Molina, a recent UC Berkeley PhD, on the normalization of CRISPR technologies and the new era of gene editing.

Santiago Molina

Santiago J. Molina (he/they) is a Postdoctoral Fellow at Northwestern University, with a joint appointment in the Department of Sociology and the Science in Human Culture program. They received a PhD in Sociology from the University of California, Berkeley in 2021 and a BA from the University of Chicago. Their work sits at the intersections of science and technology studies, political sociology, sociology of racial and ethnic relations, and bioethics. On a theoretical level, Santiago’s work concerns the deeply entangled relationship between the production of knowledge and the production of social order. Their research included fieldwork at conferences and in labs around the Bay Area.

In this visual interview, Julia Sizek, Matrix Content Curator and a recent PhD graduate in Anthropology from UC Berkeley, interviewed Molina about their research on CRISPR, the genetic engineering technology that has reshaped biological research by making gene editing easier. This new tool has excited biologists at the same time that it has worried ethicists, but Molina’s research shows how CRISPR has become institutionalized — that is, how CRISPR has become an everyday part of scientific practice.

This image depicts a model of the CRISPR-Cas9 system. How did you come to encounter this model of CRISPR, and how does CRISPR work? 

3D-printed interactive model of Cas9.

This model was passed around the audience at a bioethics conference in Davis, California back in 2014 when I started my fieldwork. I remember the speaker holding it high above his head and pronouncing, “This! This is what everyone is so excited about!” While he meant it as a way to demystify the new genome-editing technology, a 3D-printed model of a molecule doesn’t tell us a lot about the process behind the technology. 

What is a bit disorienting is that technically, this isn’t a model of CRISPR at all, but a model of Cas9 (CRISPR-associated protein 9, a kind of enzyme called a nuclease) in white, an orange guide RNA, and a blue DNA molecule. To put it really simply, CRISPR (clustered regularly interspaced short palindromic repeats) describes a region of DNA in bacteria where the molecular “signatures” of viruses are stored so that the bacteria can defend themselves. This bacterial immune system was repurposed by scientists into a biotechnology. At its core, CRISPR-Cas9 technology is just the white and orange parts. The Cas9 does the heavy lifting of cutting DNA, and the guide RNA, or gRNA, acts as the set of instructions that the Cas9 uses to find the specific sequence of DNA where it should cut.

While people use CRISPR as a shorthand for the entire CRISPR-Cas9 system, you won’t actually find a single Eppendorf tube in a lab marked “CRISPR.” As a process, the way scientists get this to work is by adding Cas9 and the “programmed” gRNA to cells via one of several delivery techniques, such as a plasmid or viral vector, so that the Cas9 will make a specific DNA cut. In the years since then, scientists have developed a whole toolbox of different Cas proteins, and each can make many different kinds of modifications. 

What is interesting about this sociologically is that CRISPR has a wide scope of potential application, and early in its development, every possible use was on the table, from bringing back the wooly mammoth to ending world hunger. This meant that exactly what it would be, ontologically, was really open. Scientists would describe the technology as a pair of scissors, as a scalpel, as a find-and-replace function for DNA, a guided missile, a sledgehammer, etc. I became obsessed with these metaphors because they were traces of the active construction of CRISPR as a technology. 

My research takes this focus on the development of genome editing technology and reframes it as a problem of institutionalization, which sociologists generally understand as the process by which a practice acquires permanence and reproducibility in society. I look at how the ideas around what the technology is, how it should be used, and what it should be used for come to be settled, legitimized, and eventually taken for granted.

CRISPR has recently been in the news, not only because of Emmanuelle Charpentier and Jennifer A. Doudna’s 2020 Nobel Prize, but because of the 2018 announcement that a Chinese researcher had used CRISPR to gene-edit babies. How has the media covered CRISPR and the ethics of the technology? 

A crowd of photographers and reporters gearing up for He Jiankui’s presentation in Hong Kong.

Most media articles go something like this: “The idea that scientists can modify your DNA at will sounds like science fiction. But now it’s reality!”

This framing does important work to normalize futures that are in active construction. When newspapers and magazines cover CRISPR, they are bridging the social worlds of science and civil society and making concrete a very fluid social process of knowledge production and technological development. In doing so, some media coverage amplifies the hype around CRISPR and genome editing.

That said, it’s more complicated than saying they sensationalize it, because most coverage draws directly from interviews with actual genome-editing scientists, and they do their best to represent the science accurately. Instead, I think about media coverage as part of the cultural side of institutionalization. News articles offer interpretive scripts through framing that audiences can use to make sense of what CRISPR is, how it is used, and what the ethical issues are. This “making sense” is part of how genome editing is coming to be seen as a normal practice in biomedicine.

The distinction between investigative reporting and general media is important to keep in mind. Take, for example, the controversy surrounding the birth of genetically modified twins in Shenzhen, China in November 2018. If it wasn’t for keen investigative reporting by Antonio Regalado of the MIT Technology Review ahead of the Hong Kong Summit, it is likely that the controversy would have unfolded differently.

The image above is a photo of a group of reporters during the summit taking pictures of He Jiankui, the scientist behind the clinical trial in Shenzhen that aimed to use CRISPR-Cas9 to confer genetic immunity to HIV in embryos. Subsequent media coverage of the controversy drew from interviews with high-profile, U.S.-based scientists in the field. These scientists argued that He Jiankui was an outsider on the fringe of the field. The resulting articles framed him as a “rogue,” “a mad scientist,” and a “Chinese Frankenstein.” This “bad actor” framing tells us that on the whole, the field is responsible and CRISPR itself is good, essentially repairing the crisis.

However, in alignment with more recent investigative reporting, my ethnographic research found that a handful of U.S.-based scientists had helped He Jiankui with his project. He had earned his PhD at Rice and was a postdoctoral fellow at Stanford. Scientists at UC Berkeley had given him technical advice on the project, as well. To me, this suggested that the “bad actor” framing — and the Orientalism surrounding how he was talked about — obfuscated the broader moral order of genome editing.

CRISPR is a relatively contemporary invention, but the idea of genome editing has a much longer history. How does this history appear in your research, and what does Charles Davenport have to do with it?

Photograph of Charles Davenport hanging in the common area of one of the buildings at Cold Spring Harbor Laboratory.

It’s interesting how little history appeared in my research. There is a sort of presentism that comes with “cutting-edge science.” CRISPR technology is part of a lineage of genetic engineering tools, going back to the 1970s, when recombinant DNA (rDNA) was invented. This biotechnology, rDNA, allowed scientists to mix the DNA of different organisms. It gave rise to a whole industry of using engineered bacteria to produce biologics and small molecules like insulin. The history of rDNA is important because the debates around its use in the 1970s came to be the dominant model of decision-making surrounding new technologies in the United States. Indeed, a handful of the top scientists from these debates have held top positions on committees that have been tasked with debating the ethics of genome editing over the past five years. 

Charles Davenport predated these debates, and has been largely an invisible figure for modern genome-editing scientists. Davenport was a prominent scientist in the early 20th century. He was a eugenicist and racist scientist who served as the director of Cold Spring Harbor Laboratory, a private, non-profit research institution, from 1898 to 1924. While at CSHL, Davenport founded the Eugenics Record Office, which published research to support the eugenics movement. I found this photo of Davenport in Blackford Bar, the pub at Cold Spring Harbor Laboratory, where I went to the first meeting, titled “Genome Engineering: The CRISPR/Cas Revolution,” in 2015. While the scientific community eventually came to reject Davenport, and the eugenics movement fell out of fashion after World War II, this history is important to recognize as we usher in a new technology aimed at eliminating genetic diseases and improving human health. At the conference in 2015, I thought, if Davenport’s ghost had been hanging out at the pub, he would have been thrilled.

The scientists I worked with vehemently rejected the idea that what they were doing could be considered eugenics, or what one scientist called the “E-word.” But people often forget that the eugenics movement in the United States was both mainstream and progressive at the time. Eugenics laws were drafted and passed by Democratic legislators who aimed to address poverty by drawing on the most up-to-date science, medical knowledge, and expert opinion. When this history was brought up at modern conferences and meetings, it was either subtly discredited as fear-mongering or tucked into a panel at the end of the conference to entertain philosophical discussion.

Your research also contends with the way research is conducted between different laboratories, even when many of the plasmids (a kind of DNA molecule commonly used in CRISPR applications) and techniques that they use are proprietary. The shipping area in this image is how Addgene, which has been called “the Amazon of CRISPR,” sends reagents and plasmids used in scientific research to laboratories around the world, and manages many intellectual property issues. What is Addgene’s role in the scientific process?

Hundreds of plasmids await daily FedEx pickup in Addgene’s shipping room.

While I was doing my research, there was a raging patent dispute between the University of California, Berkeley and the Broad Institute, where each institution claimed to have invented the technique for modifying mammalian cells with CRISPR. So the proprietary aspects of CRISPR were always in the background. But I think if it wasn’t for Addgene, these concerns would have really slowed down the spread of genome editing.

Addgene is a non-profit organization that mediates the exchange of practices and biological materials between labs. It manages a plasmid repository, a sort of technique library, and fulfills requests for plasmids, shipping them to the labs that need them. Because plasmids are central to many biological experiments, and are key for CRISPR-based techniques, scientists rely on the availability of these circular pieces of DNA as a key reagent. Since receiving its first CRISPR plasmid in 2012, Addgene now has over 8,000 different CRISPR plasmids in the repository, and has shared them over 140,000 times with laboratories across 75 different countries. They essentially took over the logistics of CRISPR distribution, moving biological materials from place to place. By doing this at a really low cost, they effectively contributed to what scientists described as the “democratization” of genome editing.

They also keep patent lawyers at universities happy with detailed record-keeping and by electronically managing material transfer agreements (MTAs), which sort out the proprietary issues, through a Universal Biological Material Transfer Agreement (UBMTA). This UBMTA relaxes the institutional constraints on the transfer of biological materials. Scientists love this because it reduces a lot of paperwork.

Last but not least, Addgene contributes to the institutionalization of CRISPR-Cas9 by producing guidelines and protocols that support the use of some of the plasmids. For example, Addgene was the first to develop a textbook for CRISPR. Their CRISPR 101 eBook has been downloaded more than 30,000 times, and their informative CRISPR blog posts had been visited over 500,000 times as of 2019. In these materials, detailed definitions of new genome editing techniques and terms of art are spelled out for curious adopters. Additionally, the scientific team at Addgene works with the scientists who are depositing plasmids to coproduce useful documentation to accompany the plasmids. Addgene does not share plasmids with for-profit organizations, but acts as an up-to-date clearing house and tracker of CRISPR innovations in academic and non-profit laboratories.

As part of your research, you spent time at different labs around the Bay Area to understand how CRISPR research has become an ordinary part of scientific research. Can you walk us through some of these images of lab life and what they show us about how CRISPR has become institutionalized? 

Sculpture of a ribosome in an atrium.

Rows of packed lab benches.

The first image is of the atrium in one of the buildings I often found myself in for fieldwork. The huge sculpture of ribosomes on the side looks so abstract to me. A lot of these spaces required keycard entry, and for me, the emptiness of some of the spaces made them all the more isolating. I would have to get lost sometimes just to find the right room, where a small group of scientists were discussing the next big breakthrough or the next application of CRISPR-Cas9. The public-facing image of the field was really different from the behind-the-scenes shop-talk environments where I took notes. It was different because it wasn’t open to anybody, and you would need a lot of intellectual and cultural capital to enter those places.

The second picture, to me, represents the ordinary that is behind those barriers of access. Lab benches are workshops. They are shared spaces that are a lot like kitchens in a restaurant. Everything has its place, every tool is in its nook, you might find some remnants of an experiment in the fridge, or old reagents in the freezer. But you can tell there is some fun in the mix. The folks who are working at those benches are doing it because they love it. For these graduate students and postdocs, CRISPR-Cas9 was an exciting opportunity, something that would help them finish their PhD, or, if they were undergrad volunteers, a key skill for moving forward. Lab life often felt banal: scientists moving through their careers, with lots of failed experiments, meetings that could have been emails, day-to-day conflict with coworkers, late hours, etc. I wish people could see the contrast between the hype surrounding something like CRISPR-Cas9 and the on-the-ground struggles of scientists in the lab.

In these pictures below, you show a humorously decorated doorway that tells us a lot about how scientific work happens at a university. What does this tell us about who conducts science, and about equity issues within the lab?

Threshold of the lab as an angry doorway with a top hat and mustache, hungry for the labor of postdoctoral fellows, undergraduates, and graduate students.

This personification of the lab was interesting to me because it draws attention to those struggles I just mentioned. Of course the decoration is a lovely piece of satire, but scientific discoveries and breakthroughs are the products of years of labor. A lot of this work is done by unpaid undergraduate volunteers, graduate students who are often in precarious financial situations, and some paid research associates, and it is coordinated by postdoctoral fellows. Sometimes, because of the demands of experimental work, lab workers would have to come in in the middle of the night to feed cells, check on experiments, or manage instruments. In the lab I worked in, one research associate worked as a Lyft driver on the side because their salary wouldn’t cover their cost of living. While the hierarchies of labor are still very strong, some universities and labs, like the Innovative Genomics Institute at UC Berkeley, are now requiring that all undergraduate workers be paid. I think this is a step in the right direction, but there are still equity issues both between and within ranks of the lab. 

This disparity is even more extreme when you consider how senior scientists and universities benefit from scientific labor. Social capital in the form of reputation and financial capital both accumulate as a result of this work. Partnerships between university laboratories and the biotech and pharma industries in particular have become commonplace in 21st-century biomedicine. Research examining these partnerships describes this as academic capitalism or neoliberal science. My research adds to this line of social scientific research that has traced this institutional shift, where academic organizations are increasingly adopting the practices and bureaucratic frameworks of for-profit organizations in industry. Those patent disputes I mentioned previously are a good example of this. 

With CRISPR research, as with much other biological research, the institutionalization of scientific norms is essential to conducting scientific research. What does Michael Jackson have to do with that? 

DIY biohazard safety sign posted on the lab doors.

There are three proximate institutions of social control surrounding scientific work, in my view: biosafety, bioethics, and the regulation of research misconduct. This poster is an example of a biosafety rule being operationalized in the lab. It is posted on the doors so that you see it as you exit the lab space into the common area and kitchen. Biosafety essentially aims to contain the materials, reagents, and products of scientific experiments to the lab. Lab managers and principal investigators must fill out detailed forms describing the experiments being done and submit these to the biosafety office at their university. These are then reviewed and evaluated by biosafety experts, who make recommendations about infrastructure requirements for the spaces where the experiments are conducted and prescribe mandatory training for any personnel conducting those experiments.

Biosafety is a really interesting social institution because it must constantly keep up with new techniques and develop risk frameworks for assessing them. For innovations like CRISPR-Cas9 that are revolutionary, this sometimes requires some finesse. When you consider the modifications being made to bacteria, plants, non-human animals, and human cells, you can bet there is considerable work going into making sure those biologics don’t end up where they aren’t supposed to. Consequently, scientists must follow strict protocols for waste disposal and use the appropriate personal protective equipment (PPE).

But then consider who is doing those experiments. There can sometimes be a disconnect between the official protocols and how they are enacted. This poster captures that disconnect and suggests that more immediate forms of social control might work better in some cases than extensive bureaucratic procedure. Plus, Michael is iconic.

As with any social process, there are bound to be accidents. In the lab I observed, for example, a graduate student accidentally cut himself through his gloves on some broken glass while conducting genome-editing experiments with lentiviral-packaged Cas9. This lentivirus could, in principle, infect any mammalian cell. While he was working under the fume hood, which creates negative pressure to suck up the air where the experiment is being done, there was still a risk that Cas9, which would edit the DNA, could enter his bloodstream. He then went to the postdoc he was working under and the lab manager, who advised him to report it to the Office of Environment, Health & Safety (EH&S). EH&S then told him to go to the student health center. Once at the health center, the grad student with his bandaged hand informed the nurse that his lab was categorized as BSL-3 (biosafety level 3), to which the nurse responded, “What is BSL-3?” He was ultimately fine, as far as we know, but the example shows a further disconnect between the different offices tasked with managing the risks of scientific work.

As genome editing continues to develop as a broader institution in biomedicine, there are going to be accidents, and there is going to be misuse. No number of guidelines or codified norms can prevent that. This is why it is crucial that we continue having debates about the norms governing the use of the CRISPR-Cas9 system, both as a promising clinical technique and as a sociocultural institution. My hope is that these debates will lead to concrete regulatory and legal changes that can more directly shape this technology’s use. 

Article

The Terracene: An Interview with Salar Mameni

Salar Mameni

At the intersection of the War on Terror and the Anthropocene lies Salar Mameni’s concept of the Terracene, which describes the co-emergence of these two terms as a means to understand our contemporary social and ecological crises. Mameni, an Assistant Professor in the department of Ethnic Studies at University of California, Berkeley, is an art historian specializing in contemporary transnational art and visual culture in the Arab/Muslim world, with an interdisciplinary research focus on racial discourse, transnational gender politics, militarism, oil cultures, and extractive economies in West Asia. They have published articles in Signs, Women & Performance, Resilience, and Al-Raida Journal, among others.

In this visual interview, Julia Sizek, Matrix Content Curator and a PhD candidate in the UC Berkeley Department of Anthropology, talked with Professor Mameni about their research, working with select images of art discussed in their forthcoming book, Terracene: A Crude Aesthetics.

The concept that you propose in your book, the Terracene, foregrounds the War on Terror as necessary for understanding not only our contemporary political crises, but also our contemporary ecological crisis. Describe your concept, and what it adds to our understanding of the links between terrorism and environmental issues.

My book coins the term “Terracene” in order to bring attention to the role of militarism in enacting the ongoing ecological crises we currently face. I insist that contemporary forms of warfare – such as the infamous War on Terror – are concurrent with and continuations of settler colonial land grabs and habitat destruction that have created wastelands across the globe. In their initial timeline for the Anthropocene, scientists traced the origins of this new epoch to technological innovations in early 19th-century Europe that brought about industrialization. In my view, this is an inadequate historiography that does not take into account longer histories of European settler colonialisms, as well as the ongoing role of militarism in maintaining wastelands. The term “Terracene” is a way of highlighting the terror that is tied to the current geological timeline.

Terror, however, is not the only idea I intend to highlight with the notion of the Terracene. I also take advantage of the sonic resonance of “terr” (meaning earth/land) in the word “terror” in order to direct our attention to the significance of thinking with the materiality of the earth itself. In my work, I consider this through the toxicity of militarism and extractive economies, which turn the earth itself into a weapon that continues to poison even after the troops and the industries have receded. Scholars of environmental racism often highlight the dumping of toxic waste on lands inhabited by racialized, poor, and devalued communities. My book emphasizes the production of “terror” out of “terra,” which can mean the weaponization of the earth itself. Yet, I believe that the very shift of attention to the earth’s many potentialities can also allow for conceptualizing futures out of toxic wastelands. For me, new theories are only useful if they do not simply mount a critique of systems of oppression but also offer new imaginaries as foundations for future directions. Much of my book is attentive to materialities and thought systems that do not align with scientific conceptualizations of ecological thinking, as a way of opening up new modes of thought.

Part of the reason you relate the Anthropocene and the War on Terror is their coeval histories. Aside from emerging during the same era, how are the histories of these two concepts — terrorism and the Anthropocene — related?

Yes, the so-called War on Terror, as well as the scientific notion of the Anthropocene, were both popularized in 2001, each proposing a new way of conceptualizing the globe. What is fascinating to me is how each of these ideas revolves around an antagonist: the terrorist in one case, and the Human (Anthropos) who caused climate change in the other.

The question I raise in the book is this: why is it that the term “terrorist” cannot be applied to the Human who has caused deforestations, temperature rise, and oil spills, making the globe uninhabitable for endangered species, as well as threatening the livelihood of multi-species communities globally? Why is the notion of the “terrorist” instead reserved for those who protest the building of oil pipelines on Indigenous lands, or those who resist settler colonialism in places such as Palestine? This tension brought me to see that the idea of the Human (Anthropos) continues to be limited to those engaged in settler colonial ventures, those who are protected against the “terrorist” through the security state.

What do you think the study of art history can bring to the Anthropocene, which is often described through science?

Great question! The book argues that “science” is a provincial worldview that has displaced a plethora of diverse thought systems that are in turn called “art” (or “myth” or “superstition” or “religion”). So my first approach in the book is to question the very art/science divide that disallows those deemed non-scientists to participate in knowledge production. Non-scientists have of course included very large groups, such as women, non-Western knowledge producers, and non-human intelligent beings. This vast array of intelligence left out of “science” says much about the limits and hubris of scientific thought. My book opens up space for artists who think beyond the reaches of scientific ecologies. A part of the book, for example, is dedicated to ecologies of ancient deities. For instance, I consider Huma, the Mesopotamian deity who has been conjured and resurrected by the contemporary Iranian artist Morehshin Allahyari (Fig. 1).

Figure 1: Morehshin Allahyari, “She Who Sees the Unknown: Huma” (2016). Image courtesy of the artist.

As the artist explains, this is the deity of temperatures. Huma’s body is multi-layered and mutative. It has three horned heads, a torso hung with large breasts, and two snake-like tails. Huma is multi-species and multi-gendered and is the deity that rules temperatures. In a time of temperature rise, wildfire, and fevers brought about by the COVID-19 pandemic, Huma is the deity to conjure. Indeed, Allahyari conjures her as a protector, but also builds her out of petrochemicals, the plastic used in 3D printers.

I also take seriously the intelligence of non-human phenomena such as oil. In the book, I consider images of explosions at a Southern Iranian oil field, as documented by the Iranian filmmaker Ebrahim Golestan in a film called A Fire! (1961) (Fig. 2).

Fig. 2: Still from “A Fire!” (Dir. Ebrahim Golestan, 1961)

Rather than thinking about the human triumph of putting out the explosive fire, which took 70 days to extinguish, I consider the intelligence of petroleum that refuses to be extracted from bedrock. I call this human/oil relationality “petrorefusal” in order to call attention to the unidirectional master narrative of extraction. What would it mean, for instance, if we understood explosions as petroleum’s refusal to leave the ground? Would engaging such a refusal mean an end to extractive practices at the current industrial-capitalist scale?

Though you are an art historian, you are attentive to the limits of the visual as a mode of sensing the world. How do you bring other modes of sensing into your work, and how does this shape your approach to art history, which is often imagined as a visual discipline?

Yes, the dominance of the visual within traditions of art history cannot tackle the rich sensorial relations that ecological thinking needs. In the examples of the artworks I cite above, for instance, my theories do not arise from the visual aspects of the works alone. In the case of Huma, a visual reading would miss the spiritual and ethical significance of the deity’s conjuring. Instead, my reading of Huma engages with the object’s deep time, a time that dissolves its plastic materiality into the microbial temporality of oil’s production. In this sense, the sculpture is not simply and statically visual or coeval with our present moment. If we focus on the time of oil and plastic, the sculpture moves into a performative, mutative flux of multi-species organisms across temporalities that are beyond our own. The book as a whole treats the visual as embedded within (and inseparable from) multiple sensorial experiences.

How does art add to our understanding of the Terracene?

I coined the term Terracene as a critique of the notion of the Anthropocene. It is meant to question the centering of a destructive Human (Anthropos) at the core of a planetary story. In this sense, I probe the narrative structure of this scientific story of the Anthropocene — a story that is proposed to be a fact. Usually, storytelling is understood to belong to the domain of arts and humanities. By definition, stories are not checked for factual accuracy, but engaged with at the level of the creative imagination. This is precisely what gives stories their power. Stories can build alternate worlds and offer alternatives to how we perceive reality to be. So if the Anthropocene is a story, then surely other stories can be told. The Anthropocene story is a story of the destructive human, which is why I propose that it is better called the Terracene.

What if we began to tell creation stories at the moment of planetary destruction? Indigenous cultures across the world have creation stories that have been vehemently suppressed by destructive (settler) colonial knowledge productions and worldviews. In the book, I make a case for ethical engagements with subjugated forms of knowledge that offer alternatives to thought systems that have brought the Terracene into being. One such story I relate in the book comes from my own vernacular Islamic culture that imagines the world as a sacred mountain balancing on the horns of a bull, the bull standing on the back of a fish, and the fish, in turn, being held up by the wings of an angel.

Fig. 3: Salar Mameni, “Creation Story” (2022)

I argue that such a creation story emphasizes the inter-relatedness and inter-reliance of all things. The world hangs together in a fine balance, with every creature mattering to its overall existence. Art, in this sense, is not an alien other to science, but an equal participant in the creation of worlds we inhabit.

Podcast

What Happened to the Week? An Interview with David Henkin

David Henkin

We take the seven-day week for granted, rarely asking what anchors it or what it does to us. Yet weeks are not dictated by the natural order. They are, in fact, an artificial construction of the modern world.

For this episode of the Matrix podcast, Julia Sizek interviewed David M. Henkin, the Margaret Byrne Professor of History, about his book, The Week: A History of the Unnatural Rhythms that Make Us Who We Are. With meticulous archival research that draws on a wide array of sources — including newspapers, restaurant menus, theater schedules, marriage records, school curricula, folklore, housekeeping guides, courtroom testimony, and diaries — Henkin reveals how our current devotion to weekly rhythms emerged in the United States during the first half of the 19th century.

Reconstructing how weekly patterns insinuated themselves into the social practices and mental habits of Americans, Henkin argues that the week is not just a regimen of rest days or breaks from work, but a dominant organizational principle of modern society. Ultimately, the seven-day week shapes our understanding and experience of time.

Excerpts from the interview are included below (with questions and responses edited).

Listen to this interview as a podcast below, or listen and subscribe on Google Podcasts or Apple Podcasts.

What are the different ways people have thought about the week?

The seven-day week does many things for us in the modern world, but we tend to focus exclusively on one of them, and that’s the idea that we have a unit of time that divides weekdays and weekends, work from leisure, profane time from sacred time. The week creates two kinds of days. But by its very structure, the week also divides time into seven distinct, heterogeneous units. Every day is fundamentally different from the day that precedes or follows it. The names we use for the days of the week suggest no numerical relationship between days. The week also lumps time together for us in interesting ways. We talk about what we did this week, what we hope to get done next week. What the week does most conspicuously and powerfully for us in the modern world is coordinate our schedules. It sequesters or regulates the timing of certain activities, especially activities that we try to do in conjunction with strangers.

How did people begin to use the week for stranger sociality?

The best example might be a market day, where you want to only have a public market every so often, and you want to make sure everyone can be there. And everyone remembers when it is and it doesn’t conflict with other things. That’s one model for it. But I argue in the book that it was really only in the early 19th century that large numbers of people began to have schedules that were different from one day of the week to another.

The institutions that helped produce that are varied. They included things like mail schedules, newspaper schedules, school schedules, voluntary associations (like fraternal orders or lodges), and commercial entertainment, like theater or baseball games. The more people lived in large towns and cities, the more they were bound to patterns of mail delivery or periodical publication, and the more likely they were to have regular activities that took place every seven days, or on one day of the week or another. Once they had that, it was a self-perpetuating cycle, because then you’ll begin to schedule other activities so as not to conflict with them, or to be memorable and convenient. The weekly calendar began to be used to organize these regularly recurring activities, which typically involved strangers and were open to the public.

Today, we often think about having the work week, and then the weekend, if we are so lucky. What are the ways that historians think about this division of either week and weekend, based on work or leisure?

Historians haven’t really thought too much about the weekly calendar at all, but to the extent that they have, they have focused exclusively on this question of the work week. Most commonly, they’ve studied the ways in which organized labor or capital have sought to control or regulate the length, pace, and even the timing of the work week.

The Industrial Revolution brought about a hardening of the boundaries between work and leisure, rather than having leisure bleed into Monday, or having work bleed into Saturday or Sunday. But the week has been doing something industrial for centuries, even millennia, going back to its biblical origins. The concept of a Sabbath is essentially an industrial one, which says there’s a time for work, and a time for rest or “not work.” That’s how historians have written about it.

Historians have not paid much attention to the role of leisure in organizing weekdays. They have paid attention to the role of leisure in giving special meaning to Sunday, and the great debates over how one should spend one’s Sunday — whether it should be in church, or going to the theater, or whether it must not involve alcohol, or whether it can involve sex, or whether the mail can be delivered. That all features prominently in the historiography of 19th-century America. But few have noticed that people’s lives have these other weekly rhythms, too.

What were the sources you drew upon to come to your conclusions about how the week is shifting and changing?

There were two kinds of sources. The first is a bit boring, but phenomenally important, which is that if you look at any newspaper or city directory, or anyone’s account of their lives, you suddenly realize how many activities they engage in that are pegged to the week, whether it’s going to musical societies or temperance lectures or anti-slavery organizations. You notice that they’re organizing by the week. It’s glaring at you and in plain view, but if you don’t ask the question, then you won’t actually see it. We know that newspapers typically came out once a week, but on which day of the week did they come out? Was it the same? Did it vary? Things like that don’t require a huge amount of digging. It just requires asking the question. You can basically ask that question to almost every public document from the first half of the 19th century in the United States, and those documents that register life in an urban or semi-urban society create a thick catalogue of weekly activities, obligations, and habits.

You also look at diaries. What are some of the insights you can get from diaries, and how did the practice of diary-making change during the period of time you’re looking at?

Diaries tell us whether people went to French class on a Wednesday or not, but the cool thing that they do, along with correspondence and other kinds of recollections, is allow people to narrate their own experiences. Those are fascinating because you can not only see what they did, but how they remembered — or sometimes failed to remember — what day of the week it was. One of the things I came to be especially impressed by during the course of my research for this book was the link between the week and memory. We can use diaries as the main example, because that’s probably the single source type that I immersed myself in most deeply. Diaries are not hard to find. They are everywhere. The challenge there was to spend years looking at as many of them as I could, then thinking about the various kinds of archival biases I needed to overcome to make sure I was looking at a broad range of diaries.

Diary-keeping is a very old activity. I would say it became a mass practice in the United States in the early 19th century. In New England, it was somewhat widely practiced even in the 18th century, but became much more so in the 19th century, which also saw the rise of the pre-formatted diary book. It had been introduced as a consumer good in the United States in the 1770s, but totally bombed. No one really wanted such a thing. Instead, people used almanacs, with their standard calendar format, as a material artifact. Almanacs are organized around the month, and they tend to focus on naturally observable things, like the weather. People didn’t really see any need for a pocket diary that you could write stuff in. But by the 1820s, these were suddenly quite popular. The most common format was six days to a spread, sometimes seven. It conditioned people to think about their lives in chunks of time that were much smaller than a month, but bigger than a day.

You mentioned that a lot of historians of industrial capitalism have focused on the work day. How do your insights about the week bear on this focus on the hour?

The hour is by far the time unit that has been of greatest interest not only to historians of labor, but also to historians of time, who have been far more interested in the clock than the calendar, in part because the clock is a mechanical device, and we tend to look for technologies to explain fundamental changes in temporal consciousness, whereas calendars don’t seem to be that kind of technology. The week is not measured any more precisely today than it was 100 years ago, or even 500 or 1,000 years ago. The hour is very much associated with punctuality, and with discipline. The 19th century is really also when large parts of the world began calculating hours the way we do today, which is to conceive of the hour as 60 minutes, or 1/24 of a full daily cycle — rather than how most societies used to define it, which was as 1/12 of the variable amount of daylight.

When you read about the week, you realize that you’re looking at a unit of time that doesn’t fit into any of the big paradigms that have drawn our interest to the hour. We’re interested in the hour because we think that pre-modern time was natural and observable. Modern time is homogeneous. It’s arithmetically calculable, and fundamentally alienated from nature. But the week is equally artificial. It’s not actually rooted in natural rhythms, and it’s not confirmed or correctable by observable natural phenomena. It’s very rigid and artificial, but it’s also very, very old. So once you stop assuming that clock time is the way to look for the hallmarks of modernity, I think it opens up new ways of being interested in the week. The week wasn’t even a universal system in large parts of the world, including East Asia, which did just fine without thinking of the seven-day cycle as a timekeeping register of any kind. My research into the week makes me think of the hour as a less apt symbol for the difference between modern and pre-modern timekeeping. The week is a heterogeneous timekeeping system. The homogeneity of time is a powerful feature of modern timekeeping, but the seven-day week says that no two days are alike. We speak about daily life, everyday life, but the week resists that whole notion. It insists that no two consecutive days are substitutable. It would seem to correspond with the pre-modern notions of time’s movement and heterogeneity that used to interest anthropologists studying timekeeping in so-called primitive societies, and yet it is fundamentally modern and has only in the last 100 years become a global timekeeping system.

The week is more about the calendar that you keep than about the town square, which doesn’t raise a different flag on Mondays or Tuesdays. This raises the question of how the week has been seen as subpar, or irrational. There have been different projects to try to remake the week into something more like a clock tower. What have some of those projects been?

There have been three big ones. They’re all big, because they all represent an attack on the seven-day week from very powerful, and in many other respects, successful revolutionary movements.

The first was the French Revolution, which sought to rationalize and standardize measurements of all kinds, and succeeded. Many of the ways in which we measure things, especially outside the United States, are a product of the French Revolution and its belief in enlightened rationality. The French Revolution also had another gripe with the week, apart from the fact that it’s awkward and irrational, which is that it seemed to be the fundamental anchor of the power of the Catholic Church in old regime France. So the French revolutionaries created a new calendar. They not only renamed months and years, but also, more radically, introduced a 10-day week, called a decade. It was fundamentally different from the seven-day week, and it was a failed experiment.

The next big one was the Soviet attack on the week. The Soviets were mostly interested in continuous production in factories, but they also wanted to undermine the power of the Russian Orthodox Church. They first went to a five-day week, then a six-day week, and then to non-coordinated weeks. That last part had to do with continuous production, as in a hospital or any other operation that never shuts down: I have one day off, but my best friend or wife might have another one. That failed, in part because of resistance to having a non-coordinated week.

The third attack is less well known, but it represents American and European corporate capitalism and the rational reforms favored by big businesses, which they largely succeeded in enacting by World War One. It was a universal system of timekeeping that gave us things like time zones, where you divide the world into 24 zones, and a line marking where the day officially ends and begins, somewhere in the Pacific Ocean antipodal to Greenwich, England. Or daylight saving time, the idea that you can manipulate the clock for various social or economic benefits. All these things are products of what my colleague Vanessa Ogle calls the global transformation of time between 1880 and 1920.

The one thing that many of those same reformers wanted to do — and failed to do — was to tame the week by making it an even subdivision of months, and especially of years. And that’s not a very big change, right? They’re not making the week longer or shorter. They’re not making it non-coordinated. All they’re doing is saying that at the end of every year, there will be one day, or two if it’s a leap year, that are blank. Most proposals to tame the week, as I would call it, or to reform the week, simply asked for one or two blank days that would have no weekly value. The purpose was to make the cycle of weeks 364 days, not 365, and therefore divisible by seven, and therefore every January 28 would be a Monday. The League of Nations took it up and considered it, but rejected it. Many people assumed that this was the wave of the future, but instead it suffered the fate of Esperanto, not the fate of time zones.
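The arithmetic behind these blank-day proposals is simple enough to sketch (an illustrative calculation of my own, not from the interview): setting aside one blank day in a common year and two in a leap year leaves a counted cycle of exactly 364 days, which is 52 × 7, so weekdays never drift from year to year.

```python
# Illustrative sketch of the "blank day" calendar-reform arithmetic.
# Assumption (mine, for illustration): blank days carry no weekday,
# so the counted weekly cycle is always 364 days = 52 full weeks.

def weekday_drift(counted_days: int) -> int:
    """How many positions a date's weekday shifts after one year."""
    return counted_days % 7

# Ordinary Gregorian years: every date drifts by one or two weekdays.
assert weekday_drift(365) == 1  # common year
assert weekday_drift(366) == 2  # leap year

# Reformed calendar: 364 counted days, so no drift at all.
# Every date falls on the same weekday, every year, perpetually.
assert weekday_drift(364) == 0
print("364-day cycle: weekdays are fixed forever")
```

This is why the reform required only one or two days "outside" the week: the rest of the calendar snaps into a perpetual grid on its own.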

Meanwhile, the week was entering, without much resistance, all these societies that never had one. If I were a historian of Japan, I would really want to study the cognitive, cultural, and political processes by which a society that had never counted continuous seven-day cycles suddenly began organizing not only its work life, but life more generally, around this complete innovation. It’s not flashy like the internet. But it is a technology, and it was completely new in Japan. It’s a different story in the United States, where the technology was quite old, and was doing new things for people without anyone really commenting on it.

Article

The Labor Market and the Opioid Epidemic: A Visual Interview with Nathan Seltzer

Nathan Seltzer is a postdoctoral scholar in the UC Berkeley Department of Demography. He received his PhD in sociology from the University of Wisconsin-Madison, where he also trained in demography at the Center for Demography and Ecology. His research explores the relationship between economic change and population trends. In published and ongoing work, he investigates how the decline of the American manufacturing sector has impacted fertility rates, mortality rates, and economic mobility. 

Social Science Matrix content curator Julia Sizek interviewed Seltzer about his recent research, using images from his article, “The economic underpinnings of the drug epidemic,” which was published in Social Science and Medicine – Population Health in December 2020. (Please note that captions have been revised for this article.)

 

Fig. 1: Annual number of total drug overdoses, specified opioid overdoses, and corrected estimates of opioid overdoses, which include specified opioid overdoses and predicted opioid overdoses for death records that had an unspecified contributing cause in the United States.

 

During the last 20 years, the number of opioid-related deaths has been dramatically increasing, as Figure 1 shows. How have scholars typically understood the causes of the opioid epidemic?  

There are a number of reasons why drug and opioid overdose deaths have increased over the past two decades. To begin, pharmaceutical companies ramped up the manufacturing and distribution of prescription opioids in the 1990s. Purdue Pharma is best known for its role in pushing OxyContin, but the widespread adoption of prescription opioids for pain ailments extends to the broader pharmaceutical industry, which promoted the idea that opioids were non-addictive and safe to use with minimal risks.

The deliberate distribution of prescription opioids by pharmaceutical companies is a supply-side explanation for what propelled the opioid epidemic. At the same time, we know that supply cannot exist without demand. Recent academic literature, including my study, has found that the success of the pharmaceutical companies in distributing prescription opioids was driven in part by deteriorating social and economic conditions. In particular, economists Anne Case and Angus Deaton have emphasized in their research how declining quality of life and economic “despair” have proliferated in recent decades. Indeed, there is a strong correlation between measures of economic precarity and opioid prescribing patterns.

While the drug epidemic was initially spurred by the over-prescription of opioid medications, two additional developments kept it going. First, heroin supply rose at the beginning of the 2010s. Second, synthetic opioids, such as fentanyl, emerged shortly after the rise of heroin. Yet the drug epidemic is wider than just opioid use – there has also been an increase in deaths involving psychostimulants and cocaine. In my research, I focus on the broader drug epidemic, rather than just the opioid epidemic, to call attention to this wider development.

Fig. 2: Total number of workers employed in the manufacturing sector in the United States, 1980–2019. (Data Source: U.S. Bureau of Labor Statistics, All Employees, Manufacturing [MANEMP], retrieved from FRED, Federal Reserve Bank of St. Louis; January 28, 2020.)

Your research examines the link between the labor market and opioid overdose mortality. In the graph above (Figure 2), we can see the general decline in the number of workers employed in manufacturing. How do scholars normally explain the link between this decline in jobs in the manufacturing sector and opioid deaths, and what is important about manufacturing-sector jobs compared to other declining industries? 

The decline of U.S. manufacturing is one of the most important labor market events of the past fifty years. Between 1970 and today, manufacturing jobs went from representing a quarter to less than a tenth of all jobs in the U.S. The issue with this decline is that manufacturing jobs have traditionally functioned as a ladder for upward economic mobility, especially for those without a college degree. As manufacturing employment has decreased, no other industry has taken manufacturing’s place to provide a similar ladder for upward economic mobility. Instead, most employment growth has been in the “low-skill” service sector, which provides wages that are not comparable to those commanded by manufacturing workers.

Scholars have recently begun to examine how these sorts of labor market changes are impacting different facets of society, including trends in drug overdose mortality rates. My research builds on this new literature by examining how the loss of manufacturing jobs predicted the rise of the drug epidemic. The mechanism behind this association is that manufacturing decline heightens economic uncertainty for both workers who are directly laid off, as well as the broader community that experiences reduced employment opportunities. This economic uncertainty fosters a risk environment that increases the likelihood of substance use.

Fig. 3. Change in the share of employees in the manufacturing sector by state, 1998–2016. (Data: U.S. Census Bureau, County Business Patterns Program.)

 

As we can see in Figure 3 (above), there is significant variation across states in the extent to which manufacturing declined. What does examining the opioid epidemic at the state scale show us that’s less visible at other scales, and what did you find when you examined smaller scales at the county level?

I chose the state scale for the primary analysis because there is substantial variation in both drug overdose deaths and manufacturing employment across states. This state-level variation is not just random noise, but the result of different social, economic, and health policies that have been implemented by states over the course of decades. These policies range from labor deregulation to Naloxone access laws (Naloxone is a drug that immediately reverses an opioid overdose) and the creation of prescription drug monitoring programs. Accordingly, population health outcomes are now increasingly determined by state-level policies and regulations, and it is important to take into account these broader socio-political policy regimes when conducting a statistical analysis.

The results of the state-level analysis indicate that states with higher levels of manufacturing employment had lower rates of drug overdose deaths. Specifically, for every one percentage point increase in manufacturing employment, there is a 3.2% reduction in drug overdose mortality rates for women and a 4.7% reduction for men. Between 1999 and 2017 (the study period), the overall decline in manufacturing employment experienced across states accounted for up to 92,000 overdose deaths among men and up to 44,000 among women.
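To get a feel for how a per-percentage-point effect of this size compounds, here is a back-of-the-envelope sketch. It is my own illustration, assuming a multiplicative (log-linear) effect, which is a common way such percentage reductions are reported; the study's actual model specification and estimates are in the published article.

```python
# Back-of-the-envelope sketch (my own, not the article's model):
# compound a per-percentage-point effect over a multi-point decline
# in manufacturing employment, assuming a multiplicative effect.
# Effect sizes (3.2% women, 4.7% men per point) come from the interview.

def rate_ratio(effect_per_point: float, points: float) -> float:
    """Mortality rate ratio after `points` percentage points of change."""
    return (1.0 - effect_per_point) ** points

# Hypothetical state losing 5 percentage points of manufacturing
# employment: overdose rates rise relative to the no-decline case.
rise_men = 1.0 / rate_ratio(0.047, 5) - 1.0
rise_women = 1.0 / rate_ratio(0.032, 5) - 1.0
print(f"men: +{rise_men:.0%}, women: +{rise_women:.0%}")
```

Under these assumptions, a hypothetical five-point decline implies roughly a quarter-higher male overdose rate and nearly a fifth-higher female rate; the article's attributable-death totals come from its fitted models, not from this shortcut.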

In addition to the state scale, I examined whether the association between manufacturing employment and drug overdose deaths held at smaller geographic levels, including the commuting zone level (slightly larger than a metropolitan statistical area) and the county level. The results demonstrate that the statistically significant association remains, although the effect size attenuates slightly. This attenuation can be explained by the shift to a smaller level of geography: studying a commuting zone or county on its own ignores spillover effects, like work commuting patterns across counties.

 

Fig. 4. Percentage of drug deaths between 1999 and 2017 predicted by manufacturing decline.

 

 

These maps in Figure 4 show the percentage of drug deaths you were able to predict using your model that factored in manufacturing decline. How were you able to use the data from a decline in manufacturing jobs to predict opioid deaths? What were some of the challenges of trying to put together this predictive model, and what were you able to find in terms of the predictive power of manufacturing decline on opioid-related deaths?  

The findings of my study indicate that up to 92,000 overdose deaths for men and up to 44,000 overdose deaths for women are attributable to the decline in manufacturing jobs between 1999 and 2017. These totals represent the percentage of all drug deaths that are predicted by manufacturing employment levels in each state. As you can see in the maps, the share of drug deaths predicted by manufacturing decline varies considerably across states, as well as by sex. I derived these figures using data on the overall percentage-point decline in manufacturing employment for each state, together with the estimated statistical models.

The biggest challenge of this project was assembling a dataset that combined data on drug overdose mortality rates with data on manufacturing employment, as well as other social, economic, and policy variables. Assembling this unique dataset allowed me to statistically adjust the models for important alternate explanations other than manufacturing decline that might better explain the rise of drug overdose deaths. To generate mortality rates, I combined data on state-level populations with restricted-use death certificate records from the National Center for Health Statistics at the CDC. For manufacturing employment levels, I worked with data from the Census Bureau’s County Business Patterns program. I then accessed data from various other sources, including the Current Population Survey, the Census Bureau’s Local Area Unemployment Statistics program, and a database on prescription drug policies. Including variables in the model from all of these individual datasets improved the theoretical and methodological rigor of the research.

What racial and gender differences did you find in your model?

Much of the previous literature on the opioid and drug epidemic has focused on middle-aged white males, because they initially had the highest levels of drug and alcohol use in the 2000s in comparison to other race and sex groups. In my research, I sought to examine whether the effect of manufacturing decline on drug overdose deaths generalizes to other population subgroups. Generally, the effect remains largest for middle-aged white males between the ages of 45 and 54, but the effect is also large for adult white males of other ages, as well as for adult white females of all ages. For Black males and females, the effect is generally not statistically significant, but there are important exceptions: manufacturing decline was associated with drug overdose deaths for Black females ages 45-54 and Black males ages 35-44 and 55-64. These findings go against the widespread but unfounded notion that manufacturing decline has primarily impacted white male workers. In fact, as evidenced by William Julius Wilson’s research, Black workers experienced substantial losses in manufacturing employment over the final two decades of the 20th century.

What are some of the implications of your research for policymakers and institutions? 

This paper speaks to a growing literature that finds a relationship between social conditions and the rise of the opioid and drug epidemic. The implications of the results – that higher manufacturing employment is associated with lower rates of drug overdose deaths – signal the importance of policy interventions that aim to reduce the persistent economic precarity experienced by individuals and communities, especially the economic strain placed upon the middle class. Major growth in the U.S. manufacturing industry is unlikely; the focus should instead be on improving jobs in the service sector. Improvements in wages, benefits, and job stability in the low-wage service sector might decrease economic uncertainty and thereby provide a pathway toward reducing drug and opioid overdose mortality.

 

 

Article

A Visual Interview with Eric Stanley on “Atmospheres of Violence”


How should we understand violence against trans/queer people in relation to the promise of modern democracies? In their new book, Atmospheres of Violence: Structuring Antagonisms and the Trans/Queer Ungovernable (Duke 2021), Eric A. Stanley, Associate Professor in the Department of Gender and Women’s Studies, argues that anti-trans/queer violence is foundational to, and not an aberration of, western modernity.

Their other projects have included the anthology Trap Door: Trans Cultural Production and the Politics of Visibility, co-edited with Tourmaline and Johanna Burton; and the films Criminal Queers (2019) and Homotopia (2008), in collaboration with Chris Vargas.

For this visual interview, Julia Sizek, Matrix Content Curator and a PhD candidate in the UC Berkeley Department of Anthropology, asked Professor Stanley about their research, drawing upon images and videos referenced in the book.

 

Your book begins at the site of the death of Marsha P. Johnson, a pioneering transgender activist, and trans/queer death is generally the subject of the book. In what ways has death become central to understanding both LGBT history and trans/queer people today? 

Marsha P. Johnson pickets Bellevue Hospital to protest treatment of street people and gays, ca. 1968–75. Photo by Diana Davies, Manuscripts and Archives Division, New York Public Library

The book does dwell in the space of death, and the first pages include a note on “reading with care” so people will be aware of its content. However, my attention to the work of violence is not because I believe it to be the limit of trans/queerness but because, under the order of the settler colonial state, harm is any and everywhere. What this means is that we must work to understand the various ways violence delineates trans/queerness if we want to end it. To this end, I investigate how racialized anti-trans/queer violence is foundational to and not an aberration of the social world.

That said, rather than simply argue that we are “against violence,” I reposition the demand by way of a question: what constitutes the time of violence for those living in the crucible of total war? In other words, saying that we want to respond to specific instances of violence is not enough if we have not rendered unworkable the structures that do not simply allow it, but mandate its continuance.

This is one of the many lessons that I continue to learn from theorists like Marsha P. Johnson. She was a marginally housed radical organizer whose Black trans politics were fashioned from living in and against the anti-blackness of a transmisogynist world. Her death, which was deemed a suicide by the NYPD, remains under speculation by her friends, who believe it was perhaps a violent trick or even a police officer who murdered her. While the case of her death has become a focus for organizing, Marsha’s commitments — her life in struggle — instruct us to organize against the conditions that stole her from the world.

While direct attacks against trans/queer people are one focus of the book, I also theorize that the state perpetuates violence against trans/queer people through paradigmatic neglect. We can look at trans/queer houselessness, incarceration, and the ongoing HIV/AIDS pandemic to see the ways inaction is, perhaps counterintuitively, an active process. It is, I believe, in these spaces of seeming contradiction where power becomes most visible.

In this video, Sylvia Rivera, a contemporary of Marsha P. Johnson, is met with resistance by the crowd when she takes the stage at the 1973 Christopher Street Liberation Day celebration. Today, she is considered to be a trans icon. What does Rivera’s acceptance today reveal about how we consider LGBT history?

This video depicts transgender activist Sylvia Rivera’s monologue at a demonstration in 1973. 

The introduction of my book, River of Sorrow, attempts to think about this antagonism. The amazing documentation of Sylvia fearlessly climbing the stage at this celebration gathers up so much of what the book theorizes. Sylvia was a Puerto Rican trans organizer, sex worker, anti-imperialist and one of Marsha’s closest friends. She was not given space to speak because cis lesbians and gays diagnosed her, and all trans women, as perpetrators of a misogynist culture by way of their identities. The transmisogyny of the event organizers who attempted to force her physically and ideologically off the stage tragically still lives in the ongoing harassment of trans people in general, and trans women specifically, by Trans Exclusionary Radical Feminists (TERFs). Not unlike anti-trans “feminists” of 1973, today we see trans people attacked much more than the patriarchal order they blame us for reproducing. Luckily, Sylvia was able to eventually take the microphone that day, and as you can see, she then delivered a devastatingly beautiful speech about the importance of not leaving behind those hidden by calls of “gay respectability,” namely trans/queer people of color in jails, shelters, and other “street queens” like her and Marsha.

The mainstream LGBT movement that Sylvia declared war against continues its legacy of assimilation in our current moment. Yet what is different, and perhaps even more dangerous, is that it now primarily terrorizes through incorporation. What this means is that, rather than working through exclusion and exile as it did in 1973, we now see the inclusion of those historically forced out not toward the end of reorganizing normative power, but to maintain it. The goal of inclusion is not to challenge the political order, as we are often told, but to extinguish radical critique and our dreams of freedom.

This dispossession through incorporation was again clarified after I finished the book and I noticed that the “all power to the people” photo of Marsha was being sold on a shirt at Target during their rainbow-washed June. The brutal irony is that they were selling the image of radical Black anti-capitalist action while underpaying their workers and racially profiling Black people in their stores. They want Marsha’s image, but they don’t want her. It’s this knot that I’m trying to apprehend in the book, so that we might find a way out.

This photograph was taken in 1992 at a political action by ACT UP, in which activists flung ashes of loved ones on George H.W. Bush’s White House lawn and transformed an act of grief into a political act. How does this act combat what you call “necrocapitalism”?

ACT UP Ashes Action, 1992. © Meg Handler

The ashes action leaves me undone. While political funerals were often organized by ACT UP and many other groups, this one harnessed the brutal eloquence of those forms of protest with the material act of “returning the dead” to the house of their executioner, specifically Bush’s White House. Here, friends, lovers, and families marched with boxes of ashes toward the White House under threat of the swinging clubs of mounted DC police, and then once they arrived at the gates, they tossed the remains onto the green of the lawn.

One of the practices developed by ACT UP was to name governmental inaction as a method of active killing. The disappearance of their loved ones was the unfolding of what Ruth Wilson Gilmore might call “organized abandonment,” instigated by a straight state that understood HIV/AIDS as the wish fulfillment of those already damned to hell. This idea that HIV is the materialization of God’s wrath might circulate less openly today, but the logical structure of this belief — that a virus is the punishment for wrongdoing — maintains the crushing stigma many still endure. 

The desperation in the videos and photos of the action overwhelms. Revenge and mourning meet in the act of exhuming bodies. While the open secret of mass deaths from AIDS-related illnesses was spoken in quiet whispers and hidden under homophobic silence, here ACT UP materialized their loss in the form of ground bones, the remains of trans/queer life, scattered to the winds so that their pain might become all of ours.

Through thinking with this action, along with the murders at the Pulse nightclub in Orlando, Florida, and the longer colonial history of HIV and current practices of blood banking, I develop a theory of necrocapital. Here I work with, and sometimes against, materialist feminists and others who have helped us understand the centrality of reproductive labor. With necrocapital, I’m paying attention to how speculation is not tied exclusively to the category of “life”; indeed, financialization has opened the entirety of the worker, even in death, to increased profits. One of the reasons ACT UP’s direct action is so powerful is that it materializes the symbolics of trans/queer blood — the feared yet valued substance that is, at least under the logic of a phobic social, a vector of death. Here it is returned as a bio-strike, a labor stoppage, and a refusal to privatize our grief.

In this short film produced by the Barnard Center for Research on Women, Miss Major Griffin-Gracy, a trans activist, discusses how her personal activism has taken a new form. She says that, “on a personal level, what I did was change all of my identification back to male” as a way to highlight her transgender identity and “strike back.” How do you read this “striking back,” and what does it show us about the relationship between trans people and the state? 

Major’s irreverence for a world that demands respect but delivers none shows us that what is offered is not all that is available. Through a reading of her words and Tourmaline’s film, I suggest that her ungovernability — her life in refusal — is a pedagogy of Black trans sociality, an escape hatch out of the dreadful pragmatism of the current order. Importantly, as with Major, Marsha, Sylvia, and many others who appear in the text, I’m emphatic that they are theorists of trans life and not simply examples of it. This is necessary if we are to build a trans study that at least attempts to disorganize the organization of cis knowledge production.

Among the ways Major offers us this gift is through the story of her IDs. At one point, she switched her IDs from “male” to “female,” as many trans women do in hopes of decreasing harassment by those who demand papers. But then the short film repositions the narrative of transition, as she “switched them all back” to “male” because she is a transgender woman and she wanted to be known as such. She is clear, and I also underscore this in the book, that she is not making a prescription, but this “personal act” was, as you noted, one of her ways of “striking back.” 

I’m dedicated to charting these otherwise minor acts, moments of rebellion and striking back that might slip past the telling of revolutionary social change. This is important not only because it connects to larger movement histories, but because, as Major makes clear, it’s where the force necessary to continue the struggle is often found. For her, community care and sedition fall into each other and build out an underground of laughter and beautiful negation.

Your book concerns questions of death and violence against trans/queer people and asks readers to confront scenes of death and violence. What were some of the challenges in representing anti-trans/queer violence in this book, and what alternatives do you imagine to trans/queer death today?

“ANOTHER END OF THE WORLD IS POSSIBLE” Notes on a Burning Kmart, Minneapolis uprising, 2020. Photo by Aren Aizura.

This is a central concern of the book and an excellent question. However, throughout the text, I am unable to reconcile the fact that representing violence and allowing it to disappear are both, in different but related ways, among the technologies that ensure harm continues. Instead of assuming I might know the answer, I hold this contradiction with as much love and precision as I can to move through it under the banner of collective liberation. Methodologically, I don’t represent, at least in image, the violence I theorize. I do, however, at times narrate the scenes, as I believe we must work to understand its world-shattering force if we are to stop it. The answer then cannot simply be to look away, although we all must do that at times to preserve enough of us. 

Yet what I believe the project must be, if we want to “end violence,” is the destruction of the racist anti-trans/queer social that has taken so many and continues to threaten the very possibility of anything else. If, rather than an aberration of settler modernity, these woven forms of terror constitute the world, then I ask, with Frantz Fanon, “is another end of the world possible?” I’m not sure. I do know that we must continue to think, which is also to continue to learn that, as Major reminds us, there is abundance here and now. Following the ungovernable, among our tasks is life’s radical redistribution and the abolition of the world as it is. Rather than defeat, we must also know that there is a long and unfolding tradition of trans/queer action that builds a world beyond this one, where we might all feel the safety and joy of ease.

Article

Innovation Matters: Competition Policy for the High-Tech Economy

An interview with Professor Richard Gilbert

What’s wrong with antitrust policy for regulating the tech sector? In his new book, Innovation Matters: Competition Policy for the High-Technology Economy, Richard Gilbert, Distinguished Professor Emeritus of Economics at UC Berkeley, argues that regulators should be considering the effects of mergers and monopolies on innovation, rather than price.

From 1993 to 1995, Gilbert served as Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. He also served as Chair of the Berkeley Economics Department from 2002 to 2005, as President of the Industrial Organization Society from 1994 to 1995, and as the non-lawyer representative to the Council of the Antitrust Section of the American Bar Association from 2011 to 2014.

Julia Sizek, Matrix Content Curator, interviewed Professor Gilbert about the arguments in his new book. (Please note that responses have been edited, and links were added for reference.)

Q: As large technology companies have increasingly come under fire for their monopoly-like powers, many have been asking about how antitrust policy needs to change to address this industry. What motivated you to investigate the changing landscape of antitrust policy?

Traditionally, antitrust policy has been about prices, and antitrust officials have focused on stopping mergers that would increase prices or limiting conduct that would cause prices to rise or prevent them from falling. But we know that innovation — new or improved products or production methods — is more important for the economy and consumer welfare than a reduction in prices. We need to change antitrust policy from price-centric to innovation-centric.

Antitrust authorities appreciate the importance of innovation, but until recently they have not had the tools to analyze how mergers or the conduct of dominant firms might suppress innovation. Many antitrust enforcers and academics endorsed views associated with the writings of Joseph Schumpeter in the 1940s. He wrote that progress proceeds through a process of creative destruction, with new technologies replacing old products and methods, and that large firms were often better suited than small firms to create these new technologies. This Schumpeterian perspective suggested a defense for mergers and monopolization, rather than a basis to challenge them. Indeed, the Merger Guidelines published by the Department of Justice and Federal Trade Commission barely mentioned innovation as a merger concern until they were revised in 2010.

More recent economic research challenges the Schumpeterian perspective and shows how the lack of competition can suppress innovation incentives. Having fewer firms engaged in research and development lowers the probability of discovery. A firm that has monopoly power has little incentive to invest in costly R&D if a successful discovery would merely replace the profits it earns from its existing products. It is no surprise that many major discoveries have been made by firms that do not have existing products that would be threatened by the discovery. Electric vehicles, the smartphone, digital photography, ride-hailing services, digital mapping, photolithography, and mRNA vaccines are some examples of innovations that emerged out of non-dominant firms.

So, the motivation for my book was to collect in one place what we now know about the relationship between competition and innovation. That includes the Schumpeterian perspective, but also more recent scholarship that shows how monopoly is a threat to innovation. My objective was to describe the central principles that support an innovation-centric antitrust policy.

Q: As you note in the book, current antitrust policy in the U.S. asks how consumers would suffer if a merger or acquisition were to be completed, and that this harm to consumers is measured through looking at prices of products. What are the limits of using prices to measure competition (or lack thereof)?

A merger that causes a small reduction in the pace of innovation is likely to harm consumers more than one that causes a small increase in price. That is why we need an innovation-centric antitrust policy when mergers or conduct are likely to affect the pace of innovation.

Sometimes we can account for innovation effects by incorporating quality into product prices. That is, we can measure the consumer benefit from an improvement in the quality of a product by an equivalent reduction in its price, or the consumer cost from a reduction in quality by an increase in price. This is straightforward for some products. If Hershey sells a smaller candy bar at the same price, it is equivalent to an increase in the price of the bar. If a new car gets lower gas mileage, it is equivalent to an increase in the price of the car.
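The quality-adjusted price idea can be made concrete with a short sketch. The function name and the candy-bar numbers below are hypothetical, chosen only for illustration:

```python
def quality_adjusted_price(price, quantity, reference_quantity):
    # Price the product as if it still delivered the reference quantity.
    return price * reference_quantity / quantity

# Hypothetical numbers: a bar shrinks from 1.55 oz to 1.40 oz at the same $1.00 price.
old_qap = quality_adjusted_price(1.00, 1.55, 1.55)   # unchanged: 1.00
new_qap = quality_adjusted_price(1.00, 1.40, 1.55)   # about 1.11
implicit_increase = (new_qap - old_qap) / old_qap    # roughly a 10.7% implicit price increase
```

The nominal price is constant, but the quality-adjusted price rises, which is the equivalence the example describes.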

This quality-adjusted price approach has limitations. It is difficult to apply to complex changes in the dimensions of a product. Moreover, in today’s digital economy, many services are provided without a monetary price. It doesn’t make much sense to ask whether the price could be lower, but instead we should ask whether companies are creating new services that benefit consumers or interfering with the ability of other firms to compete with new services.

Digital platforms such as Facebook and Google complicate the analysis because they provide services to consumers (e.g., social networks and search) without a price while generating revenues from advertising. The services that the platforms offer at a zero price and the advertising services that the platform sells at positive prices are interdependent. However, they raise different issues for antitrust analysis. For example, the Federal Trade Commission has filed an antitrust complaint related to Facebook’s acquisitions, including Instagram and WhatsApp. The complaint alleges that Facebook maintained its personal social networking monopoly by systematically tracking potential rivals and acquiring companies that it viewed as serious competitive threats.

A price-centric analysis might be appropriate for the advertising service, but an innovation-centric analysis is more appropriate for the effects of such acquisitions on the quality of Facebook’s social networking services.

Q: Your book offers innovation as a metric to understand antitrust policy. What is innovation, and how does one measure it?

Innovation is a new or improved product or process that differs significantly from previous products or processes. Innovation is more than invention, which is the act of discovering a new product or process, because innovation requires that an invention be put into active use or be made available for use by others.

Innovations can be measured in different ways. These include direct measures, such as a technical or economic assessment of the value of the innovation. For pharmaceuticals, a new drug application approved by the Food and Drug Administration is a measure of innovation, although drugs differ greatly in their therapeutic and economic value. Indirect measures of innovation include the number of patents that cover the innovation. Because patents can differ greatly in significance, economic studies often use citation counts to gauge the significance of the patents. Patent counts are generally better indicators of the value of innovations when they are adjusted by citations to measure quality, but there is still a gap between citation-weighted patent counts and the value of innovations. The gap depends on the industry. For example, patent counts tend to be aligned with the values of pharmaceutical and chemical innovations. However, in other industries, patents provide a measure of protection from competition that is not necessarily related to the value of the innovation that is disclosed by the patent. This disconnect is particularly problematic for industries in which many patents cover the same product, such as electronics, software, and communications technologies. In that case, a patented technology can represent a small fraction of the value of a product, yet the patent owner might be able to demand a high royalty because the product cannot be produced without the right to use the patent.
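As a rough illustration of the citation-weighting idea (the firms, counts, and the 1 + citations weighting convention below are assumptions for the sketch, not data from the book):

```python
from collections import defaultdict

# Hypothetical patent records: (firm, forward citations received).
patents = [
    ("FirmA", 2), ("FirmA", 1), ("FirmA", 0),
    ("FirmB", 30),
]

raw_counts = defaultdict(int)
citation_weighted = defaultdict(int)
for firm, citations in patents:
    raw_counts[firm] += 1
    # One common convention weights each patent by 1 + its forward citations.
    citation_weighted[firm] += 1 + citations

# FirmA files more patents (3 vs. 1), but FirmB's single, highly cited
# patent carries far more citation weight (31 vs. 6).
```

The gap between the two rankings is exactly why studies prefer citation-weighted counts to raw counts, while still treating both as imperfect proxies.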

Economic studies of competition and innovation often use research and development (R&D) expenditure to measure innovative effort. R&D, however, is an input to the activity of innovation; it does not measure innovation’s output. R&D expenditures can increase with no effect on the output of innovation, or R&D can become more efficient and decrease with the same or greater output of innovation. Nonetheless, because R&D expenditures are often more accessible than measures of actual innovation, many empirical studies have used R&D expenditure as an indirect measure of innovation.

Q: In the book, you note that the number of complaints about innovation loss increased from the 1990s through the 2010s. What do you think accounted for the new focus on innovation, rather than other kinds of complaints? (In other words, how did innovation emerge as a means of thinking about anti-trust law?)

Courts generally follow economic developments in their evaluations of antitrust law, but usually with a substantial lag. Economic analysis plays a central role in almost every merger case, but economic analysis was almost always absent in merger evaluations that took place before 1980. Economic analysis became important in merger analysis after courts recognized that economics has something to say about whether a merger is likely to result in a “substantial lessening of competition,” which is the standard for review under the antitrust laws.

Economics did not have much to say about the relationship between competition and innovation until the latter part of the 20th century. As I mentioned, the prevailing sentiment was a Schumpeterian view that some monopoly power is conducive to innovation. When innovation appeared in antitrust cases, it was mostly as a defense to otherwise anticompetitive conduct. Indeed, in the monopolization case against Microsoft brought by the Department of Justice and several states, the Federal Court of Appeals quoted Schumpeter in the introduction to its opinion.

Innovation became more of a concern for antitrust enforcement by the Department of Justice and the Federal Trade Commission in the 1990s. This coincided, perhaps incidentally, with the publication by the agencies of the Antitrust Guidelines for the Licensing of Intellectual Property. (I led the effort that resulted in these guidelines when I was Deputy Assistant Attorney General at the Department of Justice.) The Guidelines brought innovation to the forefront.

The DOJ and FTC publish and update guidelines that describe their enforcement intentions for mergers. The first edition was published in 1968, and neither it nor any subsequent edition mentioned innovation as a competitive concern until the guidelines were revised in 2010. (UC Berkeley professors Carl Shapiro and Joe Farrell led the 2010 revision.) The 2010 guidelines describe several ways in which mergers might suppress innovation. This discussion paralleled economic developments beginning in the late 20th century that showed why mergers and monopoly power can harm incentives to innovate.

Q: Large technology companies like Alphabet (Google) and Meta (Facebook) are known for acquiring companies in their start-up phases, and this has become widely accepted for small companies in the technology sector. How do you think this model has shifted possibilities for innovation in technology, and how might regulators change their approach to regulating these acquisitions?

Google and Facebook (and Amazon, Apple, and Microsoft) have acquired hundreds of start-ups. Few of these acquisitions were even reviewed by the antitrust agencies, and none was blocked. The reasons for the lack of enforcement are complex. The companies operate in fast-moving technologies, so it is often difficult to know whether a start-up represented a competitive threat to the acquiring firm.

The US and European authorities reviewed, but did not challenge, Facebook’s acquisitions of WhatsApp and Instagram. The European Commission noted that WhatsApp and Facebook Messenger were but two of many messaging services, and that WhatsApp did not compete with Facebook for online advertising. Both agencies should have paid greater attention to the possibility that WhatsApp could have become a rival social network, much as the multi-purpose messaging service WeChat has done in China (albeit censored by the authorities). Indeed, the $19 billion that Facebook paid for WhatsApp, despite its limited US usage at the time, should have been an indicator of its potential as an industry disruptor.

Some acquisitions escape review by the antitrust agencies because they fall below the required reporting thresholds. Many of these acquisitions are “acqui-hires”: purchases of small teams of talented individuals that bear little resemblance to the corporate acquisitions that are the usual targets of antitrust enforcement.

In my opinion, the most significant reason why antitrust enforcers have not been able to restrain the growth of the dominant digital platforms through acquisition is their inability to deal with potential competition. The antitrust agencies are quick to challenge a merger of X and Y when both have large shares of a concentrated market. But what about a dominant company X that acquires a startup Y that has no product, but might develop a product that competes with X? Y is not an actual competitor of X; Y is a potential competitor. Antitrust legal precedents impose a high bar to challenge an acquisition that eliminates a potential competitor.

Congress is currently considering several proposed bills that would strengthen antitrust enforcement, particularly for dominant platforms. While some of these bills are not, in my opinion, a step in the right direction, those that make it easier to challenge acquisitions of potential competitors could, if properly crafted, be a positive change to antitrust enforcement.

Q: How do these approaches to innovation need to change in the context of platform markets, like Google Shopping or Amazon? How do platform economies change how we should think about antitrust issues?

Platforms are challenging for antitrust enforcement. First, for platforms such as Google or Facebook, one side is supplied without a monetary price, although consumers “pay” by supplying valuable data. Second, many platforms have powerful network effects and scale economies from the accumulation of data. Network effects imply that users of the platforms benefit from the participation of other users. Scale economies imply that rivals would have to incur large and irreversible costs to duplicate the value that the platforms obtain from their data. Together, network effects and scale economies imply large barriers to entry for new platform competitors. For both of these reasons, new competitors can’t gain a toehold in the usual way by providing the same service at a lower price; they have to compete with a differentiated product. Third, there are competitive interactions between the “free” and paid sides of the platforms. Platforms have incentives to maintain service quality on the free side if it is useful for attracting paying advertisers, although such incentives are limited. Fourth, innovation concerns are particularly relevant for many platforms because the pace of technology development is rapid and some platform services are provided without charge, which makes a price-centric analysis less useful.

Of course, antitrust is relevant for platforms, and the challenges they present are not entirely new. But enforcement has to be mindful of platforms’ unique characteristics. Designing workable remedies for antitrust abuses is challenging for platforms. Consumers do not benefit from breaking up a platform if network effects imply that only one firm will survive. And behavioral remedies can be difficult to enforce or may have little effect. The Google search remedy imposed by the European Commission is still being criticized as too weak, years after it was first implemented. The European Commission’s requirement to offer choice screens for default browsers and search engines (i.e., screens that allow users to choose their preferred search engine) has not had a significant effect on utilization. An alternative approach to remedies might involve a regulator that supervises platform conduct or that can impose fines large enough to affect platform behavior.

Q: What is the future of antitrust policy in the United States, especially now, when prominent antitrust lawyers Lina Khan and Jonathan Kanter have been confirmed as Chair of the Federal Trade Commission and Assistant Attorney General, respectively?

Interesting question. Lina Khan is a self-professed member of the New Brandeis (NB) movement. The NBs believe that monopoly is a corrosive force in the economy and an obstacle to social justice. They want to break up monopolies without having to demonstrate a pattern of abusive behavior. That will be a tough sell in the courts: established precedent requires a showing of anticompetitive conduct for a finding of unlawful monopolization under the Sherman Act.

Nonetheless, as Chair of the FTC, Khan might be able to make some significant changes in antitrust enforcement. The FTC Act empowers the Commission to challenge “unfair” competition. Courts have ruled that the standard for unfair competition is the same as the standard for a violation of the Sherman Act. But the Commission might have wiggle room to bring cases that would be difficult to prove under the Sherman Act. That would be an important development. Furthermore, the FTC has an administrative structure that gives it enforcement leverage that is absent at the Department of Justice. Specifically, the FTC can send cases to an administrative law judge (ALJ) before they go to a traditional court of law. The ALJ process takes time, and some defendants are willing to make concessions to avoid the extra delays.

I don’t expect to see the same movement of the antitrust needle at the DOJ, because the DOJ can’t avoid or delay judgments in the courts. Both agencies can be tougher on merger cases. There is some evidence that this is happening, and I expect it to continue. (But again, they have to deal with the courts if merging parties contest a challenge.) The DOJ also can have an impact through a process called the business review letter, in which it can state an intention not to challenge a practice. For example, Democrats tend to be softer on enforcement of intellectual property rights, and the DOJ can signal this intent through a business review letter.


Podcast

Individual Trauma, Social Outcomes: A Matrix Podcast Interview with Biz Herman

Biz Herman

In this episode of the Matrix Podcast, Julia Sizek, PhD Candidate in Anthropology at UC Berkeley, interviews Biz Herman, a PhD candidate in the UC Berkeley Department of Political Science, a Visiting Scholar at The New School for Social Research’s Trauma and Global Mental Health Lab, and a Predoctoral Research Fellow with the Human Trafficking Vulnerability Lab. Herman’s dissertation, Individual Trauma, Collective Security: The Consequences of Conflict and Forced Migration on Social Stability, investigates the psychological effects of living through conflict and forced displacement, and how these individual traumas shape social life. 

Herman’s research has been supported by the Fulbright U.S. Student Program, the University of California Institute on Global Conflict & Cooperation (IGCC) Dissertation Fellowship, the Simpson Memorial Research Fellowship in International & Comparative Studies, the Malini Chowdhury Fellowship on Bangladesh Studies, and the Georg Eckert Institute Research Fellowship. Along with collaborators Justine M. Davis & Cecilia H. Mo, she received the IGCC Academic Conference Grant to convene the inaugural Human Security, Violence, and Trauma Conference in May 2021. This multidisciplinary meeting brought together over 170 policymakers, practitioners, and researchers from political science, behavioral economics, psychology, and public health for a two-day seminar on the implications of conflict and forced migration. She has served as an Innovation Fellow at Beyond Conflict’s Innovation Lab, which applies research findings from cognitive and behavioral science to the study of social conflict and belief formation.

In addition to her academic work, Biz is an Emmy-nominated photojournalist and a regular contributor to The New York Times. In 2019, she pitched and co-photographed The Women of the 116th Congress, which included portraits of 130 out of 131 women members of Congress, shot in the style of historical portrait paintings. The story ran as a special section featuring 27 different covers, and was subsequently published as a book, with a foreword by Roxane Gay.

The Matrix Podcast interview focuses primarily on Herman’s research on mental health and social stability at the Za’atari Refugee Camp in Jordan, as well as her broader research on the psychological implications of living through trauma and the impacts of individual trauma on community coherence.

The research in the Za’atari Refugee Camp, Herman explains, was part of a project developed by Mike Niconchuk, Program Director for Trauma & Violent Conflict at Beyond Conflict, who created a psycho-educational intervention called the Field Guide for Barefoot Psychology. “The goal of The Field Guide is to provide peer-to-peer mental health and psychosocial support and education,” Herman explains. “It’s a low-cost intervention, and it can be scaled. The idea was that in Za’atari Camp, where mental health care is very stigmatized, there are a lot of barriers to entry. And there are a lot of needs — physical security needs and community needs — and mental health is often de-prioritized. [The Field Guide provides] one way to address the lingering psychological implications of living through conflict and forced migration in a way that is accessible, and that can be provided without attracting attention or producing any kind of stigma, and that’s really connected to the context.”

The Field Guide uses narrative storytelling and scientific education, paired with self-care exercises, Herman explains. “Each chapter starts with a narrative of a brother and sister and their lives in Syria before conflict, during conflict, during migration, and in resettlement,” she says. “Through the story, different themes and ideas and issues come up, with different physiological and psychological responses. As these different responses come up, the next part of the chapter talks about the science behind that in a way that allows for some psychoeducation on what’s happening, but allows people to engage with it through someone else’s story.”

Listen to the interview below, or on Apple Podcasts or Google Podcasts.


Article

Online Extremism and Political Advertising: A Visual Interview With Laura Jakli

Laura Jakli

How can we track online extremism through political advertisements? Using data from online advertising, Laura Jakli, a 2020 PhD graduate from UC Berkeley’s Department of Political Science, studies political extremism, destigmatization, and radicalization, focusing on the role of popularity cues in online media. She is currently working on her book project, Engineering Extremism.

She is currently a Junior Fellow at the Harvard Society of Fellows. Starting in 2023, she will be an Assistant Professor at Harvard Business School’s Business, Government and the International Economy (BGIE) unit. From 2018 to 2020, she was a predoctoral research fellow at Stanford University’s Center on Democracy, Development and the Rule of Law, and at the Program on Democracy and the Internet.

Social Science Matrix content curator Julia Sizek interviewed Jakli about her work, with questions based on political advertisements and graphics from Jakli’s research.

Your research uses the Facebook Ad Library to understand far-right political parties. What insights do advertisements provide for understanding far-right parties? 

Since 2018, the Facebook Ad Library (also known as the Ad Archive) has publicly documented the political advertisements hosted on the platform, as well as some limited metadata for each ad (for example, the name of the ad buyer, the number of ad impressions, total ad expenditure, geographic target, and audience gender and age demographics). Initially, the Ad Library exclusively featured ads run in the United States, but it expanded to dozens of other countries within a year. Since I study European politics, this expansion of the Ad Library opened up a new way to explore party messaging at scale.

Much of my research considers the gap between the publicly stated and privately held beliefs and preferences of far-right voters (and party elites themselves). In line with this, I was interested in examining party ads because the far right may be incentivized to present a more mainstream right-wing ideological profile in formal documents and in mass media campaigns to appeal to a broad audience. Meanwhile, when the far right is targeting a narrow, custom audience through online media, the party may use more extreme campaign content. This is because, with digital micro-campaigns, they do not have the same political incentive to appeal to the masses or signal ideological common ground with center-right parties.

With my current political ads research, the objective is to better understand far-right party strategy and political positions. The main advantage of ads in this regard is that most parties field hundreds of unique online ads in the months leading up to an election. The sheer volume of political ad text available means that it is quite feasible to construct reliable ideological profiles for small parties, and to draw valid inferences about party strategy. Moreover, since online ads are time-stamped and geographically targeted, they can be used to trace how positions change over time, both sub- and cross-nationally.

How do political ads work on Facebook? Who buys them, and how are political ad purchases split between groups? In other words, who is posting these ads, and how do they find their audiences? 

Many party ads are purchased by the national party itself, meaning that they are sponsored by the party’s main Facebook page, even if the ad content is focused on a specific regional or candidate campaign. But it can be a more decentralized process, and each political party can choose to run its political campaign through a combination of national and local advertising. In some European countries, I see party candidates and local party organizations paying for and running their own ads. 

Facebook allows advertisers to target not just by age, gender, and geographic location, but also by political interests and hobbies. Email lists gathered through rallies, fundraisers, and other events can be used to target customized political audiences. Moreover, these inputs can be used to find “Lookalike Audiences” that share interests, traits, and demographics with the established email list. These advertising parameters allow campaigns to target political ads quite narrowly and precisely.

One weakness of the Ad Archive is that it doesn’t actually reveal how the campaigns found their audiences. All you have available as metadata is basic demographic information, including a breakdown of the audience by gender, age, and geographic location. You can make some inferences about whom parties targeted based on this information, but the ad algorithm may also be impacting that audience.

For example, you can’t distinguish between a case where the party directed ads to be delivered to males between the ages of 18 and 24 and one where the ad algorithm picked up on the fact that men in that age range interacted with the ad at higher rates, and therefore “learned” to deliver more ads to this segment over time. In other words, the audience is curated both through the parameters ad buyers specify (e.g., “target XYZ demographics”) and through the algorithm’s independent determination of who would be an efficient target for the ad.

This advertisement (Figure 1) from Vlaams Belang, a far-right party in Belgium, is fascinating because of the way that it is designed to track viewer reactions. How are advertisements on social media different from ordinary advertisements, and are you able to track how people interact with these advertisements?

sample facebook ad
Figure 1: Translation: “They have gone completely mad and want to actively participate in the return of ISIS terrorists! Vlaams Belang resolutely says NO. We must protect our people from these time bombs. We must take their nationality and try them in the countries where they committed their crimes. What do you think? Return possible for terrorists? [Indicate Yes (with a smiley) or No (with a like).]”

The ability to rapidly field and test the performance of different political ads is one aspect of online advertising that distinguishes it from older forms of campaigning. Parties don’t have to commit to one message or thematic policy focus through a campaign season. This flexible, feedback-based approach is precisely demonstrated in this ad from Vlaams Belang. It asks ad viewers to signal using the laugh emoji if they agree with the return of foreign terrorist fighters (known as “returnees”) and “like” if they disagree with the policy. Presumably, the idea is to quickly and cheaply test how salient this issue is for potential voters.

Researchers are not easily able to track how people interact with these advertisements unless the advertisement links to a post on a public Facebook page. But in the case of Vlaams Belang and most parties that do these quick polls through ads, the poll takes you to a party webpage so they can get more information about their audience (and possibly elicit donations). One other way to get a sense of how people interact is simply through the number of impressions the ad gets. Impressions count the total number of times the ad is displayed on viewers’ screens. This is broadly informative, but doesn’t mean that audiences are actually clicking on the ad or interacting with its content in any way, so the inferences researchers can draw are quite limited.

One of the benefits of online advertisements, in contrast to traditional advertising, is the ability to target certain groups, as in this ad (Figure 2), which targeted audiences specifically in Austria. How did you find that targeted advertising worked for far-right groups, and how did advertisements differ at the local and national scales?

Facebook ad
Figure 2: Translation: “There is a huge boiling point at Europe’s borders, because masses of illegal migrants want to return to certain European target countries, including our Austria. While patriotic politicians like Matteo Salvini are doing everything possible to stop illegal migration, completely different signals are coming from Berlin. Angela Merkel even wants to have the refugees picked up from Africa….”

Broadly speaking, the demographic metadata suggests that the far right has a much higher ratio of male ad audiences than do other parties, which makes sense, given the male skew of their voter base. But there is such limited metadata provided by the Facebook Ad Library that I have not been able to establish any other notable demographic trends. I am currently working on understanding the geospatial trends of far-right advertising but cannot say anything definitive yet.

I will say that the more localized advertisements — typically fielded by regional party organizations or local candidates — differ substantially in content from national ads. The more localized campaign material is crafted to resonate with local news events and community issues. Far-right political ads that target a narrow geography appeal to voters less on abstract political platforms or ideological principles and more on tangible and immediate localized concerns. In effect, this represents a shift to digital “home style” politics, by which the far right frames their platforms such that constituents of each district are led to believe party representatives are “one of them” and have their immediate interests in mind when crafting policy. 

In my qualitative analysis, I found that regional far-right party branches often stylize themselves as accessible, populist, and anti-political, presenting their party as concerned with what is “happening on the ground” and what the “people” really want. Relatedly, these online campaigns are crafted and fielded rapidly, in a manner that is less professional, less polished, and more casual than offline campaigns. Knife crime is one example of a localized thematic focus common in far-right ads (see Figure 3).

Sample Facebook ad
Figure 3: Translation: “Migrants at the forefront of knife crimes. ‘Dangerous people have no place in the middle of our liberal society and therefore have to be deported.’ Those were the words of #CDU Interior Minister Roland Wöller after the horrific knife murder in Dresden in October 2020 by an Islamist Syrian. This should give the impression that the CDU-led government is finally taking action against serious criminal foreigners….”

Working from a large dataset of far-right political ads, you translated the advertisements into English, and then used the NRC Word-Emotion Association Lexicon to identify how the ads evoke emotions like fear, disgust, and anger. These images (Figure 4) show word clouds based on advertisements from the German AfD (Alternative for Germany) party. What do these word clouds show?

Disgust and anger word clouds
Figure 4: Disgust and anger word clouds for Alternative for Germany ads, using the NRC Word-Emotion Association Lexicon (aka EmoLex).

First, I want to note that the share of negative emotive ad content is typically much higher in far-right ads than in the ads of other parties. Their negative ad campaigns focus on — and often exaggerate — social and economic problems, while identifying other people, parties, and institutions as responsible for them. Consistent with much of the literature, I also found that the far right is associated with specific emotive appeals, most prominently with fear and disgust, but also with a higher share of anger emotion words, on average.

Using the NRC Word-Emotion Association Lexicon, the disgust word cloud visualizes the terms in the far-right AfD’s (Alternative for Germany) ads that the lexicon tags as disgust-associated; the size of each term reflects its relative frequency across the ads. The anger word cloud does the same for anger-associated terms. These figures show that illegality, criminality, and violence are among the most prevalent disgust-associated themes in German far-right ads, and there is considerable overlap with the most frequent anger-associated words. Themes of criminality, violence, and terror attacks appear frequently in AfD ads, presumably with the intent of evoking anger toward the political status quo.
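A minimal sketch of this kind of lexicon-based emotion tagging might look like the following; the handful of word-emotion pairs here is a made-up stand-in for the real NRC lexicon, which covers many thousands of English words:

```python
import re
from collections import Counter

# Tiny illustrative lexicon. These word-emotion pairs are simplified
# assumptions for the sketch, not entries copied from the real EmoLex.
lexicon = {
    "crime": {"disgust", "anger", "fear"},
    "violence": {"anger", "fear"},
    "illegal": {"disgust", "anger"},
    "terror": {"fear", "anger", "sadness"},
}

def emotion_counts(text, lexicon):
    # Tally how many emotion-associated tokens appear in the (translated) ad text.
    counts = Counter()
    for token in re.findall(r"[a-z]+", text.lower()):
        for emotion in lexicon.get(token, ()):
            counts[emotion] += 1
    return counts

ad_text = "Illegal migration brings crime and violence."
counts = emotion_counts(ad_text, lexicon)
# Word-cloud term sizes would then be made proportional to each
# tagged term's frequency across the full set of ads.
```

Aggregating these counts over a party’s full ad corpus, and comparing shares across parties, is the kind of computation that underlies the emotive-content comparisons described above.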

One of your findings is that far-right groups in Europe tend to claim ownership over the topic of immigration, as is reflected in this advertisement (Figure 5). How did you measure the focus on immigration among far-right parties in comparison to their more moderate counterparts? 

sample facebook ad
Figure 5: Translation: “Swept under the rug: the huge refugee costs. The AfD has been talking about it for a long time, but the other parties and the associations and companies of the so-called ‘asylum industry’ that benefit from them consistently avoid talking about this topic….”

I use a method called structural topic modeling to determine whether the far right maintains issue ownership on immigration. In topic modeling, each document (in this case, each party's ad corpus) is modeled as a mixture of multiple topics, and topical prevalence measures how much each topic contributes to a document. Put simply, I use metadata on which party fielded each ad text to examine differences in topical prevalence across the ad texts, sorting topical prevalence by party family. I then estimated the mean difference in topic proportions between far-right parties and all other parties to determine which topics are more prevalent in far-right ads.
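The comparison of topic proportions by party family can be sketched as follows. This is a rough analogue, not the interview's actual pipeline: structural topic modeling (e.g., in R's stm package) incorporates metadata covariates directly into estimation, whereas this sketch fits a plain LDA with scikit-learn and compares mean topic proportions by party family afterward. All ad texts and party labels below are invented.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus: two hypothetical far-right ads, two from other parties.
ads = [
    "stop illegal immigration protect borders",        # far right
    "immigration crisis threatens national identity",  # far right
    "invest in schools and teachers",                  # other party
    "green energy jobs and climate action",            # other party
]
far_right = np.array([True, True, False, False])

X = CountVectorizer().fit_transform(ads)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # document-topic proportions; each row sums to 1

# Mean difference in topic prevalence: far-right ads vs. all other ads.
diff = theta[far_right].mean(axis=0) - theta[~far_right].mean(axis=0)
print(diff)  # positive entries = topics more prevalent in far-right ads
```

With real data, a consistently positive difference on the immigration topic across countries is the pattern that would indicate issue ownership.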

I use this to gauge whether there is disproportionate emphasis on immigration in far-right campaign ads, or whether immigration topics are prevalent across different types of parties. In a large majority of sampled EU countries, I found a disproportionate emphasis on immigration issues on the far right, which is consistent with issue ownership. There are three notable patterns in how the far right discusses the immigration issue across Europe. First, many parties specifically emphasize Muslim migration and frame Islam as a unique threat to national values and cultural identity. Second, immigration is often tied to criminality as well as to issues of women's safety. Third, it is linked to general Euroscepticism and to opposition to the EU's multiculturalism.

While your analysis focused on the text of far-right political advertisements, the images would seem to be an essential part of ads’ effectiveness, as we can see in this image (Figure 6). What do you think are the limits of a text-based analysis, and what are avenues for investigating visual complements to your text-based research? 

Figure 6: Translation: “A sobering word for Sunday: The persecution of Christians in many countries around the world is increasing. But the Christian churches in Germany have paid too little attention to it for years. They prefer to curse the AfD, although the protection of Christians abroad is an important issue for this party….”

It is definitely an important limitation. Many ads also have videos embedded, not just images. By reducing the current study to text analysis, I may miss the fundamental features that lead viewers to interact with the ad, click on related content, or mobilize for the party. 

More broadly, there seems to be a trend in recent years of decreasing emphasis on text and increasing emphasis on visuals and videos in political ads. These trends mirror other social media trends (e.g., the rise of TikTok and YouTube). I think the political parties that acknowledge this trend and craft their online ads accordingly have a leg up over those that do not.

Based on a small qualitative assessment of these ad visuals, what I can say is that the inflammatory, emotive content I try to capture through text comes through much more explicitly in images and video. My sense is that the visuals associated with far-right ads are quite striking and substantively different from the ad visuals of other parties, although I have not tried to quantify these differences systematically. As our tools for image and video analysis improve in social science, I hope to study these features more rigorously.


Podcast

Science and Socialism in Cuba

A Matrix Podcast interview with Clare Ibarra and Naomi Schoenfeld

Clare Ibarra and Naomi Schoenfeld

In this episode of the Matrix podcast, Julia Sizek interviews Clare Ibarra, a PhD candidate in history, and Naomi Schoenfeld, a public health nurse practitioner and recent PhD from the joint UC San Francisco/UC Berkeley medical anthropology program. Both Ibarra and Schoenfeld study the history and present of socialist science and medicine in Cuba.

Ibarra’s dissertation examines scientific exchange between Cuba and the Soviet Union during the Cold War. Her research seeks to answer how socialist ideology affected each country’s approach to development, resource extraction, and decolonization.

Schoenfeld’s areas of expertise include medical anthropology, STS, pharmaceuticals, vaccines, postsocialism, social medicine, and critical public health. She has conducted ethnographic research examining (post)socialist technoscientific formations through Cuban cancer vaccines. Her new research examines a novel program providing thousands of rooms in tourist hotels to persons experiencing homelessness during the COVID-19 pandemic. (See her recent paper, “Vivir En Cronicidad: Terminal Living through Cuban Cancer Vaccines.”)

On the podcast, Sizek, Ibarra and Schoenfeld discuss the history of science and medicine in Cuba and its relationship to the socialist project, as well as how Cuba has developed vaccines during the current pandemic.

Produced by the University of California, Berkeley’s Social Science Matrix in collaboration with the Ethnic Studies Changemaker Studio, the Matrix Podcast features interviews with scholars from across the UC Berkeley campus. Listen to other episodes here. You can also listen on Apple Podcasts and Google Podcasts. Excerpts from the interview are included below.

Q: What makes Cuban science socialist science?

Clare Ibarra: In my research on the 1950s, as Cuban scientists begin to make contact with Soviet scientists, a lot of Soviet literature reaches them in which the Soviets are making a distinction between socialist science and capitalist science. Capitalist science has the privilege of engaging in loftier projects that don’t necessarily have a social impact. By contrast, both Soviet scientists and, definitely, Cuban scientists after the 1959 revolution approach every single project from the standpoint of, how is this going to serve society? If they can’t demonstrate that, their projects are ineligible for support. They really emphasize this social impact, more so than in the US or in the West.

Naomi Schoenfeld: In my research, I argue that Cuban science is socialist science for a variety of reasons, but fundamentally, it is completely sponsored and controlled by the state. That has a couple different sides. Scholar Loren Graham has said that continuous state funding explains why the Soviet Union could continue to produce so many Nobel Prize-winning scientists, even while many were in labor camps.

This turned out to be really important in my research. I did ethnographic research with a variety of interlocutors, including clinical and bench scientists. Having continuous research funding from the state means you’re not on a grant cycle. As we academics know, chasing money, or chasing the next project, is always a challenge. It’s a closed system, where your ultimate goal is not to develop a patent or a novel agent to sell to a company or to make a profit. There’s fundamental state support.

But because the Communist Party ultimately controls everything that happens, the decisions are ultimately going to go to the Party, at least in the biological sciences, the biotech sector. And the leaders of that research are very well connected. They’re familial. They are the children and the grandchildren of the Revolutionary generation, and so they’re very well connected to the Party.

Q: One theme that seems to make science socialist is its relationship to intellectual property. How did that come up in your research?

Schoenfeld: Cuba is coming up with all sorts of patented biotech innovations. And ultimately, what I argue is really special and interesting about Cuban science — and what is most radically distinct from what happens in Europe, North America, and some parts of Asia — is that the fundamental science is not driven by the idea that somebody might make money off of something, or that something has to make money. Ultimately, they do need to figure out how to make some money to go back into the state to fund more research, and to fund the healthcare system. And there are cynics and critics who will say, well, where is this money actually going? Look at our deteriorating hospitals, and at how some people are living. But nevertheless, the idea of a patent doesn’t carry the same meaning as it does at, say, GlaxoSmithKline, which is going to make huge profits. It’s really quite different, and the fact that they’ve been able to do so much innovation really flies in the face of the notion that capitalism is necessary for creativity and innovation.

Ibarra: From the historical perspective, it’s interesting that you say there’s a clear vision of property. In my research, especially in the late 1960s and 70s, the greatest issue over which the Soviets and Cubans butt heads is who has the right to access and use these natural resources. Ownership is contested, mainly because in this period there’s such a strong adherence to socialist internationalism and socialist brotherhood, where all socialist republics, including Cuba, are supposed to exchange to produce one socialist market. Cuban scientists really push against this, because they understand that their environment and their natural resources are their own, and they have to protect them. That comes from a legacy of the US, especially, using and abusing the natural resources there, and of foreign scientists using Cuba to increase their status within their own scientific communities. That is also true of the Soviet scientists who go to Cuba. They know that, because they’re exploring this new area, it’s going to give them greater standing once they go back home.

Q: One of the interesting points of tension you’re highlighting is the question of whether natural resources are owned by the state of Cuba or by an international group of socialist nations. What were these natural resources, and how were they involved in scientific research?

Ibarra: It’s both minerals, in geology, and agriculture. When it comes to agriculture, there was great interest from the Soviets in understanding a tropical environment. There was also great interest from the German Democratic Republic. Both were guilty of using or creating technology that they would test in Cuba to see if it would withstand the climate, so they could export it to other places in the Global South.

In geology, there is a lot of nickel laterite throughout the eastern portion of Cuba. They needed that to extract materials that would make it easier for Cubans to convert Soviet crude oil, one of the biggest exports from the Soviet Union, into usable oil. The Cubans justified this because oil is necessary for energy, and part of the program of the revolution was to provide energy to even the most rural places and to create more equitable access to energy. At the same time, the Soviets were interested in having access to this nickel because their own supplies had been diminishing, and they also needed the iron ore to convert their own oil.

Q: You note that Cuba was localizing Soviet technologies, especially in the instance of tropical medicine. Tell us more about the emergence of tropical medicine as a category.

Ibarra: One of the martyrs of Cuban science is Carlos Finlay. He developed the idea that yellow fever was transmitted through a vector. He did this research along with a few US scientists, most prominently Walter Reed, and afterward, as the idea circulated, Walter Reed became the main scientist associated with the work that Carlos Finlay had done.

This story became one of the greatest narratives that the Cuban Revolutionary government used to remind the Cuban people that, since the early 1900s, the US had been coming into our country, and not only using our natural resources to their benefit, but even stealing the intellectual property created by Cuban scientists.

Schoenfeld: Tropical medicine goes beyond Cuba. Countries in the tropical zone became labs for white scientists to come and learn, and then take that knowledge away. The Finlay-Reed story is classic. Finlay was the grandfather of Cuban science, and he was central to staking a claim to an intellectual history independent from the Soviet Union. Cubans will acknowledge how much training they got from the Soviet Union, but they really wanted to make it evident that there was a precursor, a very strong tradition of research that predates the revolution.

As part of my research, I looked at the history of vaccinations and why vaccines are so important to Cuba. The centerpiece of my research was the Cuban cancer vaccine. It has everything to do with Carlos Finlay and with tropical medicine, because this pioneering research and the strengths that Cuba has built on all come from infectious disease.

Infectious disease completely dominated the landscape of life and death for people in tropical countries, and Cuba was no exception. The major investment in science and medicine after the revolution really transformed the health statistics in Cuba, bringing levels of illness and death down on par with developed countries. What they achieved with biotech in the late 70s and early 80s built on everything they knew from infectious disease. The vaccines were explained to me as antigen-antibody “keys and locks.” This conceptualization has helped them continue to think of novel ways to use vaccines, including the therapeutic cancer vaccine that I’ve studied.


Listen to the full podcast above, or on Apple Podcasts or Google Podcasts.


Grad Student Profile

Addressing Latinx Social Inequality in Later Life

grandmother and granddaughter holding hands

Americans are aging, but the experience of retiring is far from equitable. The dream of an adequate pension or retirement fund, and of residing in an age-friendly community, seems increasingly inaccessible for many historically marginalized older Americans. What does aging look like across the spectrum of older Americans, and what does it specifically look like for Latinx older adults? 

For this Q&A, Julia Sizek, Matrix Content Curator and a PhD Candidate in the UC Berkeley Department of Anthropology, spoke with two graduate students from UC Berkeley — Isabel García Valdivia and Melanie Z. Plasencia — whose research examines what aging looks like for the Latinx communities in the United States, particularly in California, Mexico, and New Jersey.

Isabel García Valdivia

Isabel García Valdivia is a PhD candidate in the UC Berkeley Department of Sociology whose research focuses on the life course of Latinx immigrants and their families in the United States, particularly in California and Mexico. Her 2020 paper, “Legal Power in Action: How Latinx Adult Children Mitigate the Effects of Parents’ Legal Status through Brokering,” published in Social Problems, received student paper awards from the American Sociological Association’s Latino/a Sociology Section and the Society for the Study of Social Problems’ Youth, Aging, and the Life Course division. The paper discusses how children of immigrant parents broker, or liaise, on behalf of their parents, and how citizenship status affects their success in navigating legal and financial institutions.

Melanie Z. Plasencia

Melanie Z. Plasencia is a PhD candidate in the UC Berkeley Department of Ethnic Studies who examines the role of social support and place in shaping the health and well-being of older Latinx people, in order to improve older immigrants’ social, economic, and health conditions. Her research on how older Latinxs envision an age-friendly environment was published in The Gerontologist: “Age-friendly as Tranquilo Ambiente: How Socio-Cultural Perspectives Shape the Lived Environment of Latinx Older Adults” argues that social and cultural elements must be considered when constructing “age-friendly” communities for older Latinxs. A second publication, “‘I don’t have much money, but I have a lot of friends’: How Poor Older Latinxs Find Tangible Support in Peer Friendship Networks,” will be published in Social Problems and was awarded Second Place in the Emerging Scholars Poster Competition at the International Conference on Aging in the Americas (ICAA) in 2017. This article demonstrates how older Latinx peer networks are an important source of support, as they take into consideration the limitations of the aging body and affirm their members’ intersectional identities as Latinxs, as immigrants, and as older adults in the United States. Her research has been supported by the Ford Foundation and Dartmouth College, where she is in residence as the 2021-2022 César Chávez Predoctoral Fellow. She is also presently the student representative for the ASA section on Aging and the Life Course.

Q: How did you decide that aging is an important part of the immigrant experience, and that you wanted to study this particular life stage?


Melanie: I have both a personal and academic investment in the fields of race, ethnicity, and aging. I was raised by my grandmother and her group of friends, who were immigrants from Latin America and the Hispanic Caribbean. Witnessing my grandmother’s and her friends’ experiences as older persons really clued me into the realities that older adults face, and some of the ways that we can better support them.

However, it wasn’t until college that my mentor, Professor Ulla Berg, recommended that I turn my personal interest in aging into academic study. Since then, I have been focused on understanding the experiences of older foreign-born Latinxs and how they adapt to growing old in the U.S., especially under extreme duress and hardship. From my research, I’ve uncovered significant needs within the older Latinx population that researchers should pay closer attention to. A majority of older Latinxs worked in jobs that did not afford them an adequate pension or retirement. If they do qualify for federal aid, it is often a very small amount that does not meet their basic standard of living, especially in areas where gentrification is taking place and the costs of housing and other basic necessities are continually rising. It has been reported that 42% of married couples and 59% of unmarried older Latinxs relied on federal aid for more than 90% of their income (Social Security Administration 2017), but this does not capture the experiences of those who do not qualify for institutional support. The number of older undocumented Latinxs is expected to increase in the next 20 years, and this population will also need health insurance and other modes of support for their survival (Ro, Hook and Walsermann 2021). (The Elder Index is a helpful tool for learning what older adults need to cover housing, healthcare, transportation, food, and other items in different geographical areas across the U.S.)

Isabel: Similar to Melanie, I have an academic and personal interest in learning more about the aging process of Latinx communities. In particular, I am interested in learning how the racialized United States immigration regime impacts immigrants’ lives across the life course. My academic interest in studying Latinx aging experiences stems from interviews with past research participants. In some of my research, I work with mixed-status families, whose members include a combination of legal statuses; specifically, I refer to the subset of families with at least one undocumented member. In my study, parents I interviewed expressed their concerns about accessing critical support like healthcare and financial security as they aged without an immigration status. Individuals need an immigration status to access federally funded safety nets. Similarly, their adult children highlighted how they expect to take care of their undocumented aging immigrant parents with little support from traditional safety nets, such as Medicare and Social Security. 

A little-known fact is that immigrants, including undocumented immigrants, were not always barred from safety nets due to their immigration status. Undocumented immigrants began to be excluded from federally funded safety programs starting in the 1970s with the Old Age Assistance program, now known as Supplemental Security Income (Fox 2016). In recent years, access to safety nets has diminished for legal immigrants, too.

At a personal level, I am the daughter of immigrants and I feel responsible for reducing barriers for my parents and improving their aging experiences. They are still young, but they have worked physically and mentally straining jobs that impact their aging experience. This influences my interest in learning more about aging and the factors that impact immigrants’ experiences.

Q: Both of you spent significant time in the communities where you worked, conducting interviews and ethnographic research. Discuss what your research methods were, and how this kind of ethnographic attention differs from how aging is typically studied.


Melanie: Typically, research on older Latinxs has been quantitative, using rich datasets from population-based surveys that measure the health and well-being of older Hispanics/Latinxs, such as the Hispanic EPESE and the Health and Retirement Study (HRS). However, as gerontologist Professor Deborah Carr noted at a recent virtual talk, immigrants on average make up only about 15% of the samples in these analyses. Isabel and I are offering a qualitative examination that focuses specifically on older Latinx immigrants in the U.S. (in my case) and binationally, in Mexico and the U.S. (in the case of Isabel’s work).

Isabel: The EPESE and HRS are well known as exceptional for understanding U.S. aging experiences, but they also have limits. For example, the HRS is nationally representative and oversamples the Latinx population, yet given the migration histories and patterns of different Latinx ethnic immigrant (or foreign-born) groups (e.g., Mexicans, Central Americans, Cubans, South Americans), the samples are still small. Many Latinx immigrants migrated to the United States after 1965, and these datasets are just starting to sample these cohorts. These older adult cohorts have also come to the U.S. in an era when greater restrictions have been imposed on immigrants’ access to safety nets.

Because the people I interview include undocumented older adults who may feel more vulnerable if they self-identify, trust is key. I conducted semi-structured, in-depth interviews with participants in California and with migrants who have returned to Mexico. To learn more about these communities, I built trust by volunteering at church organizations and by making one-on-one connections with respondents. This meant showing up at their coffee meet-ups, where older adults gathered to drink coffee, socialize, and play games at a local store that provided outdoor tables, chairs, and benches, and helping when asked to. It also gave me a lot of insights beyond what the research participants shared in the interviews. For example, I heard more stories about their day-to-day interactions with family members and struggles with bureaucracy. Since the initial interviews in spring and summer 2019, I have reinterviewed some participants, am still interviewing others, and some have reached out to check in. It has been a very insightful experience.

Melanie: My work differs from how aging typically has been studied in a few ways. I use my interdisciplinary training to conduct research with an intersectional approach. For example, key considerations for my research are the roles of race, ethnicity, gender, class, immigration status, and disability in the lives of older Latinxs. I see my work as building critical connections across the fields of Latinx Studies, gerontology, and sociology. While Latinx Studies has called attention to the importance of community formation and pan-ethnicity among Latin American immigrants in the U.S., we know little about what community means for older Latinxs. In sociology, there is an emphasis on older adults and inequality, but the work on older Latinxs has been largely limited, and in gerontology there is increasing demand for more research on the needs and experiences of older immigrants broadly. I see myself as an interlocutor between fields and between the academic institution and the community. My training in Ethnic Studies grounds much of my work in being among the community and learning from participants as collaborators. Our collective goals are to create better living environments for poor, historically marginalized older adults on their own terms (for example, based on tranquilo ambiente, a concept my participants developed to describe what an age-friendly community should encompass), and to push for better local and federal solutions that improve their livelihoods as they grow older.

Q: How does immigrants’ legal status affect their ability to access social services and benefits as they age?


Isabel: It is complicated. In short, lacking an immigration status (i.e., being undocumented) bars immigrants from accessing most federally funded safety net programs. This is purposeful and is embedded in current immigration law. Exclusions also apply to legal immigration. For example, at the time of applying for permanent legal residency, individuals (regardless of age) must provide immigration officials with an affidavit of financial support from a U.S. citizen or legal permanent resident (usually a family member or friend), and they are barred from using federally funded safety nets for at least five years following their arrival in the United States.

However, given the distinction between federal, state, and local governments, some jurisdictions allow immigrants to access some forms of care and safety nets. For example, through the Health Program of Alameda County (HealthPAC), California’s Alameda County provides affordable healthcare to low-income, uninsured residents, regardless of immigration status. This helps immigrants of all ages, including older adults.

The main institutions of support for older adults are Social Security and Medicare, which are federally managed and are essential sources of support for aging low-income workers. Undocumented immigrants are ineligible for most federal public benefits, including Medicare and Social Security.

Melanie: Isabel’s description is spot on. It is complicated both federally and in each state. In New Jersey, undocumented immigrants use Charity Care, which is also for low-income and uninsured people. However, Charity Care might not cover all of their medical services and needs. For example, one undocumented woman I interviewed had become deaf in one ear and needed $850 to complete the purchase of a hearing device, which was not entirely covered by Charity Care. The undocumented older adults I interviewed or spent time with during my ethnographic work had to rely on family and friends for economic support for their daily survival. Undocumented older adults often used their networks to find employment, to ask for a loan, or even to ask for a collection on their behalf when times were extremely difficult. It was also common to hear older undocumented immigrants say that they were having a hard time finding work because of their status and their age. Many faced age discrimination, which added a layer of precarity that they had to navigate on a daily basis.

Q: As both of you note in your research, older Latinxs have among the highest rates of poverty in the United States, and their economic status shapes how they and their families can access social services. How do you account for these economic determinants in your research in comparison to other factors, like citizenship status? What do they explain, and what can they not explain?


Isabel: Economic and citizenship status are interrelated in my work. The current immigration system stratifies the lifelong economic opportunities of immigrants. It starts with questions like: did immigrants arrive with or without visas? From where did they arrive, and did they arrive with family, into immigrant-friendly communities, or neither? What opportunities were available to them? What types of employment? How have they been integrated (or not) into U.S. society? Do they have language barriers? I try to understand immigrants’ experiences through a life-course perspective because context really matters.

We must be cautious about looking solely at economic factors because they do not take historical context into account. For Mexicans, there is a long history of being used as cheap, disposable labor, for example through the Bracero Program, a federal temporary guest worker program that allowed Mexican men to come to the U.S. to work, mostly in low-paying agricultural jobs. When the Bracero Program ended, many men continued to return to the U.S., and sometimes their offspring did, too. In 1986, many agricultural workers adjusted their immigration status through the amnesty provisions of the Immigration Reform and Control Act, or IRCA, which remains the only large-scale legalization program. In the life trajectories of many of the older adults I interview, I see how these historical events and programs shaped their lives.

Melanie: I agree with Isabel that context matters, and several social, economic, and political factors influence the life-course trajectories of older Latinx immigrants. In my own work, I was inspired by my upbringing to consider how older Latinxs survive with limited economic means, and that became the impetus for my larger dissertation study on how they adapt and adjust to growing old in the U.S., by observing the role of the family, community, and place. I would say that economic factors are interrelated with other factors, including the social determinants of health. With limited means, they have poorer chances of surviving with a high quality of life.  

Another example of how historical context, migration, economics, and health converge can be seen in the case of older Puerto Ricans. Because Puerto Rico is a colony of the U.S., Puerto Ricans who remain on the island have been relegated to second-class citizenship, which does not grant them the same rights as other U.S. citizens, such as the right to vote for the U.S. president in general elections. On the island itself, there is mounting financial debt, governmental mismanagement, and the historical legacy of several policies that have stripped the island of its independence. These conditions have led to a large out-migration of both young and middle-aged Puerto Ricans and have increased population aging on the island (Abel and Deitz 2014).

Presently, the island is facing a “parallel pandemic” as a result of poverty and inequality, the COVID-19 pandemic, and recent hurricanes (García et al. 2021). These compounding disasters (see Garriga‐López) have furthered health disparities among Puerto Ricans, such as the older adult population, who were already suffering at high rates from several chronic conditions, including diabetes and hypertension (García et al. 2021). Island-born Puerto Ricans residing in the U.S. have also been shown to have worse health compared to other Latinxs and non-Latinx whites (Pérez and Ailshire 2017).

Q: Isabel, your research emphasizes the intergenerational aspects of aging. How are families affected by the aging of the older generations?


Isabel: Broadly speaking, their lives are linked. I see how immigration status stratifies the experiences of both older adults and their adult children. For example, I see how adult children take on more of the care and costs of their aging parents, who do not have access to safety nets. This contrasts with research on poor, aging citizens who still have access to basic safety nets and whose children may provide support after they exhaust all other sources. Providing support to aging parents is especially difficult for low- and middle-income families in areas with high cost of living, like the San Francisco Bay Area. High costs are also forcing their children to move farther away from them, as Melanie’s work also shows. The lack of support not only affects older adults, but also their adult children and grandchildren. Their lives are intricately linked.

Q: Melanie, your forthcoming piece in Social Problems discusses the evolving role of family and how we must also consider the role of friends in the lives of older Latinx people. What support do friends provide that might make them as important as that offered by family members?


Melanie: We know from the literature on Latinx populations and immigration that family can be tenuous due to one’s social conditions, expectations, and needs. In my research, I found that older Latinxs spent considerable time with friends, especially when family was not available due to a variety of factors, such as their children being incarcerated or sick or having children of their own to care for. They may also turn to friends when their children move to the suburbs and attain some level of upward mobility, as they may not want to live with them or be seen as a burden. There were several dynamics at play that made friends an easier avenue for support. 

I wanted to shift the conversation by offering another lens to view older Latinx care. I argue in the piece that friends or peer networks have the ability to support older Latinxs by providing them with social, economic, and emotional support that other networks may not be able to provide, since they are removed from some of the conditions affecting them as older Latinxs. For example, peers have the unique ability to understand each other’s migration experiences, the challenges that they have faced while adapting to the U.S., and also the challenges they face in the present as they grow old, such as understanding medical and health services and institutions and planning for later life. I wrote this paper on peer networks as a way to consider another avenue that can offer support to older Latinxs, and to consider how we could collectively infuse these networks at the community-level. 

Q: While you both work on similar topics, your research takes place across different coasts. How do the histories of the places where you work — the East Bay of San Francisco and New Jersey — shape the possibilities for the Latinx people who are aging there? What are some of the consequential differences between your fieldsites?


Melanie: My fieldsite is Hudson County, New Jersey, relatively close to New York City, near the Lincoln Tunnel that leads into Times Square. One of the reasons I was drawn to this fieldsite is that, in Latinx Studies, we often focus on three dominant geographies for Latinx life — New York, Los Angeles, and Texas — but now more than ever, Latinxs are moving beyond these epicenters into new areas and creating communities and enclaves that support their survival.

In the case of Hudson County, people forget about its close proximity to Manhattan and how that has played a role in the creation of the community as a Latinx enclave. Many of the older Latinxs from my study actually arrived in New York City to work as domestic or factory workers, but found their way across the Hudson due to informal networks and a lower cost of living. 

My dissertation, Con Sueños Que Ya Son Viejos [With Dreams That Are Already Old], focuses on the older Latinxs who arrived in the U.S. to make money and planned to one day return home, but due to a series of circumstances have lived out their remaining years here in the U.S. Many of them are what Brian Hofland and Fernando M. Torres-Gil describe as “stuck in place,” a gerontological term for how some historically marginalized older adults do not have the choice to grow old where they want. I would argue that some also decide to stay in place because they value the social and cultural aspects of the enclave. Here they can more readily find support, such as emergency assistance and food and furniture referrals, and it is easier to find facilities that accept Charity Care for uninsured patients. However, more research is needed to examine how state and federal governments provide limited assistance for care, which can make obtaining support to navigate older adulthood precarious, stressful, and expensive.

Isabel: Melanie is correct that Latinx populations are moving beyond the traditional locations. My dissertation work, Becoming Invisible: Aging and Stratification for Older Immigrants in the United States and Mexico, takes place in California, a traditional immigrant receiving state, and the East Bay. Many of the older Mexican immigrants that I interviewed moved to the East Bay due to family connections. Some had parents or other family members who were part of the Bracero Program, while others used established social networks from their hometowns or rural areas to move to the area. Return migrants I interviewed in Mexico were from Jalisco, a traditional sending Mexican state that has a long history of migration to California. I selected these sites because there are many studies with the Mexican-origin population that assume that older adults return to Mexico to retire. Thus, my binational approach sought to compare the experiences of Mexican immigrants who are aging in the U.S. and those who opted out.

California and the East Bay are immigrant-friendly geographic locations, and this has deeply shaped how low-income older adults access low-cost health care and housing subsidies. California continues to shape immigrant-friendly progressive policies. For example, the state has approved the expansion of Medi-Cal health coverage to all low-income adults over 50, regardless of immigration status, starting in May 2022. Meanwhile, the returnees I spoke to in Mexico often have a legal immigration status that allowed them to accumulate wealth and social benefits (e.g., Social Security). They are also able to return to the U.S. if they desire, and their retirement income goes a long way in Mexico, where the cost of living is often lower.

Q: How has the pandemic reshaped your research?


Melanie: The pandemic has raised questions about structural discrimination and its effects on older minority populations, and has brought to the fore the highly unequal environments that Latinx older adults face. For example, a recent study has shown that Black and Latinx older adults face an accumulation of disadvantages across the life course that makes them more susceptible to contracting and having health complications from illnesses like COVID-19 (Garcia et al. 2021). I am working on an epilogue to my dissertation that looks at how the social distancing mandated by COVID-19 mitigation measures has impacted and possibly undermined the rich networks that existed for older Latinxs in my fieldsite. My sense is that older Latinxs in the community I study felt supported during the quarantine, based on the ease of partaking in alternative forms of their daily activities and the care and concern provided by the community. For example, critical cultural resources evolved: churches shifted to online platforms, many older adults used phone chains to pray and stay in touch, and the city brought food and masks to older adults’ doors and provided rental assistance and socially distanced transportation to medical appointments. Together, communal care and daily activity helped to limit feelings of social isolation. However, I should also say that the use of this landscape differed remarkably by gender, immigration, and marital status, which I plan to write about in an upcoming piece.

Isabel: I have maintained communication with the older adults in my research and have been reinterviewing them since the start of the COVID-19 pandemic. The experience of the pandemic for older adults was significantly shaped by their immigration status. Undocumented older adults were not able to access much of the federal stimulus. Grassroots funds emerged, as well as private and state support, that only some were able to access. It was difficult for many older adults to apply for these opportunities, however, because of their lack of technological knowledge or uncertainty about whether they qualified. There was a lot of misinformation about the pandemic, and many feared being punished for accepting help. Prior to the pandemic, the Trump administration had also sought to make it more difficult for undocumented immigrants to adjust their immigration status by amending the Department of Homeland Security’s (DHS) “public charge” regulations, which govern how DHS determines whether an immigrant applying for admission is likely to become dependent on certain government benefits in the future. The revised regulations added Medicaid, the Supplemental Nutrition Assistance Program (SNAP), Medicare Part D, and housing assistance to the benefits that must be considered for immigrant admission. This influenced older adult immigrants’ willingness to actively seek pandemic-related support, including healthcare.

Q: What has previous research by gerontologists and policymakers helped us understand about how aging is changing in the United States? How does this differ from what you’ve been finding in your research?


Isabel: Work by gerontologists and social scientists has begun to show demographically who the aging population is and how it will change. The next cohorts of older adults will be less white and more diverse, and they will have grown old under different policy conditions, which forecasts different needs. For example, immigration changed dramatically after the Immigration and Nationality Act of 1965, which imposed limits on immigration from Western Hemisphere nations for the first time, and after the exclusion of immigrants from social safety nets began in the 1970s.

Life course and aging theory show us how inequalities across an individual’s life shape their aging experience, and how policies further shape these inequalities. For example, laws that exclude immigrants from participating in formal employment also bar them from accessing important programs like Social Security and Medicare in late adulthood, even when they have contributed to those funds. Federal funding often omits safety nets for undocumented individuals, even though many pay income taxes (as required by federal law). The intention of these restrictive immigration laws is to punish individuals, but we are really hurting their entire families (often citizens), who must carry the costs. It is critical to understand that every policy we enact has long-term consequences — everything from immigration to housing to social safety nets. We need look no further than the Social Security Act to see how it helped decrease poverty among the elderly (even though it is not sufficient on its own).

Melanie: As stated by Isabel, the general trend among gerontologists and policymakers has been to shift the conversation to “diverse aging,” which includes older Latinx immigrants, as well as other racial/ethnic populations, disabled older adults, and other historically marginalized populations, such as older LGBTQIA adults. My work is a continuation of the work of several scholars who have shifted the tides of gerontological work with important theories on race, ethnicity, gender, and class. Before I began my work, there were scholars who developed concepts like double jeopardy (Beal 1970, 2008), which became the impetus for studying the compounding effects of race, gender, and age. That was followed by triple jeopardy theory, which additionally considered sexuality as well as other categories of social stratification among older adults, and cumulative advantage/disadvantage theory, which focuses on the ways in which one’s early life, including advantages and privileges, or lack thereof, comes to shape one’s life course into old age.

My work is different because I am working with a strong foundation from people before me, and because times have evolved, so much so that someone like myself who is a Latina, raised in a single-headed household and in a low-income area, can now complete a PhD at a place like UC Berkeley. I also benefit from having had the opportunity to work in a field that has afforded me an interdisciplinary range of ethnic and cultural studies courses, and has allowed me to take classes in public health, sociology, and social welfare. This combination informs my focus on ethnic aging and community, and also affects the policy interventions I am interested in highlighting in my work. As a result, I believe that aging equitably is part of meeting the challenge to live in a just world. This includes the incorporation of immigrants, as mentioned by Isabel, and also policies that focus on addressing affordable housing, food insecurity, healthcare needs, and policy interventions that address earlier determinants of one’s life chances, such as better working conditions, fair wages, and better education, to name a few. Once we address these and other social inequalities, we can see those fruitful changes at play at the end of the life course for everyone. Aging equitably can be our measure of success as a society.


References

Abel, J. R., & Deitz, R. (2014). The Causes and Consequences of Puerto Rico’s Declining Population (SSRN Scholarly Paper ID 2477891). Social Science Research Network. https://papers.ssrn.com/abstract=2477891

Beal, F. M. (2008). Double Jeopardy: To Be Black and Female. Meridians, 8(2), 166–176. https://doi.org/10.2979/MER.2008.8.2.166

Fox, C. (2016). Unauthorized Welfare: The Origins of Immigrant Status Restrictions in American Social Policy. Journal of American History, 102(4), 1051–1074. https://doi.org/10.1093/jahist/jav758

García, C., Rivera, F. I., Garcia, M. A., Burgos, G., & Aranda, M. P. (2021). Contextualizing the COVID-19 Era in Puerto Rico: Compounding Disasters and Parallel Pandemics. The Journals of Gerontology: Series B, 76(7), e263–e267. https://doi.org/10.1093/geronb/gbaa186

Garcia, M. A., Homan, P. A., García, C., & Brown, T. H. (2021). The Color of COVID-19: Structural Racism and the Disproportionate Impact of the Pandemic on Older Black and Latinx Adults. The Journals of Gerontology: Series B, 76(3), e75–e80. https://doi.org/10.1093/geronb/gbaa114

Garriga‐López, A. M. (2020). Compounded disasters: Puerto Rico confronts COVID-19 under US colonialism. Social Anthropology / Anthropologie Sociale, 28(2), 269–270. https://doi.org/10.1111/1469-8676.12821

Pérez, C., & Ailshire, J. A. (2017). Aging in Puerto Rico: A Comparison of Health Status Among Island Puerto Rican and Mainland U.S. Older Adults. Journal of Aging and Health, 29(6), 1056–1078. https://doi.org/10.1177/0898264317714144

Plasencia, M. Z. (2021). Age-friendly as Tranquilo Ambiente: How Socio-Cultural Perspectives Shape the Lived Environment of Latinx Older Adults. The Gerontologist, gnab137. https://doi.org/10.1093/geront/gnab137

Ro, A., Van Hook, J., & Walsemann, K. M. (2021). Undocumented Older Latino Immigrants in the United States: Population Projections and Share of Older Undocumented Latinos by Health Insurance Coverage and Chronic Health Conditions, 2018–2038. The Journals of Gerontology: Series B, gbab189. https://doi.org/10.1093/geronb/gbab189

Social Security Administration. (n.d.). Hispanics’ Understanding of Social Security and the Implications for Retirement Security: A Qualitative Study. Social Security Administration Research, Statistics, and Policy Analysis. Retrieved November 24, 2021, from https://www.ssa.gov/policy/docs/ssb/v77n3/v77n3p1.html#mt9

Torres-Gil, F., & Hofland, B. (2012). Vulnerable Populations. In H. Cisneros, M. Dyer-Chamberlain, & J. Hickie (Eds.), Independent for Life: Homes and Neighborhoods for an Aging America. University of Texas Press.


Grad Student Profile

The History of Astronomical Illustration: Q&A with Lois Rosson

Lois Rosson

How do we imagine and illustrate outer space? Lois Rosson, a PhD candidate in the UC Berkeley Department of History, focuses on the history of astronomical illustration as a lens into the history of science and technology. She worked at NASA for two years before starting graduate school, and recently completed a research internship at Lawrence Livermore National Laboratory. She has held fellowships at the Smithsonian’s National Air and Space Museum and the Huntington Library. Her background is in studio art, and she is primarily interested in the ways both artists and scientists construct visual truth claims.

Matrix content curator Julia Sizek interviewed Rosson about her dissertation research, drawing on astronomical illustrations that Rosson features in her work.

Pioneer Venus Multiprobe artwork by Paul Hudson, depicting a probe heading toward a planet.

How did you become interested in the role of artists producing astronomical images?

I came to this project via a somewhat zigzagged trajectory. My undergraduate training is actually in studio art; I was very interested in portraiture. I didn’t have the vocabulary for this at the time, but what I was really interested in was mimesis, or how someone could paint an image of another person that could be recognized as “realistic.” In portraiture, there are thousands of ways to produce a painting that resembles an individual, but we don’t use the rubric of accuracy or objectivity to make sense of this relationship.

When I was an art student at Santa Cruz, there was an interesting split between painters who used photography as reference material and those who didn’t. The paintings people colloquially described as “realistic” typically used photography as a tool in the visualization process. What I noticed was, most of the time when people used the term realistic, what they actually meant was photographic.

When I graduated, I got a job doing graphic design at NASA’s Ames Research Center. NASA as an organization is great at preserving its own institutional history, and each of the ten NASA centers has its own history office that maintains a local archive.

The history office at Ames has an incredible collection of illustrations NASA commissioned to visualize the unmanned satellite and probe missions of the late 1970s. These images were circulated in print, so their final form was fairly small, but in person they are really large and stately-looking art objects. I couldn’t resist the urge to describe the type of realism they deployed as a photographic one. But what photographs do you use as reference material when you’re painting a largely unobserved topography? No one had ever seen the surface of these planets from these vantage points with the naked eye. I wanted to know more about how these artists were trained, and what kinds of reference material they were using.

From there, I got very interested in the conceptual differences between fine art and scientific illustration. Astronomical illustration was especially fascinating, because space has historically been such a difficult subject to visualize. I discovered that these illustrators were often trained as artists, but that the illustrations they produced were circulated as a sort of neutral scientific image. A lot of times, the illustrations were simply attributed as anonymous “artist’s depictions,” in ways that downplayed an individual illustrator’s interpretive lens.

I left Ames after two years to start a doctorate in history, and work with Berkeley’s resident historians of science. The history of science is really a study of how groups of people produce truth, and since I was interested in why certain images are read as more “real” than others, it felt like a great fit intellectually.

These images come from the Mariner 9 mission, which was the first mission to orbit another planet. Can you describe the process and techniques that they used to sharpen their images?

The Mariner 9 images are really fascinating because we used artists to literally help us “see” Mars better. Artists at the USGS took fuzzy Mariner 6 and 7 images and redrew them into a smoother, more coherent landscape. Then, by the time Mariner 9 sent back slightly clearer images, we used those earlier drawings to help us make sense of what we were seeing. In this case, you have human observers folded into a larger, institutional “seeing” apparatus, which is why their identity as artists is collapsed into a process that sounds almost mechanical: these artists make visual observations, and then transcribe what they see. In my dissertation, I argue that astronomical illustrators exerted much more autonomy over their images than we typically account for, and that they actually held quite a bit of purchase over the “look” of outer space that emerged over the course of the twentieth century.

While the black-and-white image is a composite from Mariner 9’s cameras, the color image comes from an artist’s rendering from the same mission. What did the production process look like for artists rendering images of Mars, and what does this show us about the relationship between science and art at this time, which you call “astrorealism”? How do the processes of astronomical illustration compare to scientific illustration in other fields?

The Mars mapping efforts of the early 1970s — the maps we made with Mariner images — were actually made with a set of techniques developed in the 1960s for mapping the lunar surface in preparation for Project Apollo. In the early 1960s, photographing the Moon with high enough resolution for effective mapping was fairly difficult. The solution was to hire artists to come in and bolster the resolution of fuzzy photographs by hand with an airbrush. Patricia Bridges refined the technique of airbrush editing at Lowell Observatory, and trained a whole roster of illustrators in the process. She used a lot of the same techniques again in the 1970s to make clearly legible drawings of the Martian surface.

This is largely a story about artists being deployed to “see” in situations where cameras can’t, and I argue that the maps produced during this period were heavily contingent on replicating images that could be read as sufficiently photographic. There’s a growing literature in the history of science about scientific photography and the ways in which it displaced human illustrators in botany and medicine, not because the images were necessarily “more objective,” but because viewers were anxious about human fallibility and trusted mechanical reproductions to be more neutral. The epistemological anxieties baked into the way we read hand-drawn images are part of the reason the artists in my story were cast as passive transcribers of astronomical information. In reality, they were teasing out photographic-looking clarity from ambiguous scientific images.

“Realism” is the most confounding word in the entire dissertation project. In art, you have French Realism, photorealism, Socialist Realism, hyperrealism, etc. They all mean slightly different things, and usually refer to a specific historical movement as opposed to “naturalism” or mimesis, which typically refer to attempts at visually inscribing reality in some way.

I coined “astrorealism” after De Witt Douglas Kilgore’s “astrofuturism,” which treats a lot of space advocacy work in the 1970s and 80s as a body of fictional literature. That’s not to say what these advocates were doing was fake, but by framing it as a form of literary futurology, you can tease apart the cultural meaning baked into descriptions of humanity’s place in the cosmos. For me, astrorealism refers to the artwork that accompanied much of this writing. The astrorealist impulse is one that depicts space accurately in an attempt to make space futures seem more tangible. There’s a long history of these landscapes being framed as a form of scientific illustration in order to differentiate them from science fiction art. This is typically done to make the views seem more scientifically plausible, which is useful if you’re trying to convince a wide audience of the feasibility of an orbital space colony.

In this image, we can see artist Donald Davis producing what became “The Two Former Faces of the Moon,” renderings of what the Moon used to look like. How did these illustrations circulate, and how did they shape American understandings of outer space?

Don Davis’s career is an incredible through-line through most of the dissertation project. He was hired by the U.S. Geological Survey’s Branch of Astrogeologic Studies in the late 1960s, while he was still a high school student in Menlo Park. Because large-format color printers did not yet exist, the agency hired high school students to hand-color maps with a numerical coding system, much like a large color-by-number picture. Davis’s artistic dexterity was quickly noticed, and in 1971 he was sent to Flagstaff to help support the Mars mapping project that was newly underway. While in Flagstaff, Davis came under the tutelage of none other than Patricia Bridges, who by this time had a decade of experience using airbrushes on astronomical images.

Around the same time Davis relocated to Flagstaff, Donald Wilhelms, one of the USGS’s planetary geologists, had an idea for a project that would deploy Davis’s talents as a transcriber of visual astronomical information. In a gesture to the Moon maps produced at Lowell in the early 1960s, Wilhelms wanted to produce a series of images that illustrated earlier periods of the Moon’s history. They collaborated on a paper published in Icarus titled “Two Former Faces of the Moon.” In it, Wilhelms described what the lunar surface looked like at various points in its history and paired his analysis with Davis’s detailed airbrush drawings of the Moon. Just as Bridges had helped fill the gaps cameras couldn’t, Davis used her techniques to make visible a view of the Moon humans could not photograph or observe through a telescope. In this case, the views he was clarifying were unphotographable because they existed only in the past.

The paper was a pivotal moment for Davis’s career. Just after the publication of “Two Former Faces of the Moon,” Davis attended a party at a commune owned by Joan Baez. Carl Sagan, also in attendance, was the editor of Icarus and remembered the drawings of the Moon Davis had produced. Sagan was impressed by the series, and the meeting kicked off what would become a long and fruitful set of collaborations. Davis produced several illustrations for Sagan’s books over the course of the 1970s — including the cover of The Dragons of Eden — and joined the Cosmos Art Department in 1979 when production for the television series began.

Davis went on to enjoy a highly visible career in the field of astronomical illustration. In addition to his many collaborations with Carl Sagan and JPL, Davis helped produce one of the 1970s’ most iconic visions of space. In 1974, Davis spotted a newspaper article titled “Princeton Plan for a New Frontier: A Space Colony by the Eighties,” written by Gerard K. O’Neill. The article outlined a plan for developing a space colony as early as the 1980s, and for no more money than the Apollo Program. In Davis’s view, O’Neill, a Princeton physicist, had the credentials necessary for this to be a reasonable claim. Davis was intrigued and reached out to O’Neill to advertise his services as an astronomical artist. In response, O’Neill sent Davis a newsletter with drawings and early ideas on the subject. Davis used these to produce a painting of one of the cylindrical space colonies O’Neill described in his plan.

Davis’s collaborations with O’Neill are a prime example of how the brand of scientific realism cultivated at Lowell Observatory helped develop the look of outer space in the public imagination. Davis was trained to make his images appear as plausible as possible by rooting his art in photographic reference material. O’Neill, actively trying to sell Congress on the viability of a 10,000-person space station, wanted images that appeared as realistic as possible. O’Neill and Davis’ 1975 Space Station Design collaborations are some of the most visually iconic artifacts of the post-Apollo period. Their production was contingent on Davis’ training as an astronomical illustrator, and the belief that artists can be deployed in the absence of cameras to document scientific information.

In other instances, art became a means to express what were seen as the limits of human expression, and a response to earlier modes of nationalist space exploration. What does the art included on the Golden Record show us about how the politics of space exploration and ideas about space are changing in the 1970s and 1980s?

Outer space is the perfect cultural Rorschach test. What’s fascinating about the Golden Record is that it attempted to produce a snapshot of life on Earth and make it legible to an imagined alien species.

As an artifact, it’s also a great representation of how space advocacy changed over the course of the twentieth century. Carl Sagan’s approach to drumming up support for new space ventures was a dramatic departure from the space boosterism of the 1950s. Rather than celebrating space exploration as an activity that would cement American hegemony in space, he framed the cosmos as an intellectual antidote to aggressive political impulses. The hardware that allowed space science to cohere into a bounded discipline in the mid-twentieth century was not a function of the same defense-minded spending habits that gave us nuclear weapons, but rather part of a much older tradition of astronomical observation. He framed space science as part of an ancient practice of human star-gazing, rather than a set of technologies similarly borne out of Cold War conflict. This rhetorical move allowed him to use scientific practice as a vehicle to critique the military-industrial-academic crucible that created a U.S. missile program, in spite of muddled historical boundaries. In Sagan’s view, which was reiterated in his popular writing and television appearances, the cosmos offered the type of humbling perspective the political squabbles of the 1970s so desperately needed.

The Golden Record’s emphasis on a single human race inhabiting a single planet is a great encapsulation of this philosophy. At its conceptual core, the project’s goal was to produce an object that represented the non-technical dimensions of Earth and to communicate them to an imagined alien species. Jon Lomberg, one of Sagan’s long-time artist-collaborators, was tasked with collecting a range of images that described human life in a coherent way. Of course, no totalizing narrative of Earth could be communicated in 116 images; the selection included says much more about the compilers of the record’s contents than anything essential about life on Earth in 1976. There were diagrammatic images meant to describe concepts believed to be universal: lists of mathematical equations, a diagram of a DNA helix, as well as a chart describing the distances between all the planets in our solar system. There were also photographs of various human activities: one image showed three people eating and drinking, while another showed a baby breastfeeding. Olympic athletes, a teacher, and a woman at the grocery store were shown, as well as several cityscapes, cars, and a Titan Centaur rocket.

Lomberg’s participation in the project is largely unknown, and it is my favorite part of the Golden Record’s creation. Lomberg was concerned that, even if an alien civilization intercepted the record and was able to decode the disc, it might not be able to read photographs as containing any intelligible information. In his view, “not even all people can necessarily read photographs” unambiguously; most people take for granted the extent to which the ability to decipher visual information is a learned skill. His solution, which I think is fascinating, was to include eleven drawings on the record that broke down visual information into black and white shapes. He reasoned that if an alien organism were ever to encounter the record, its physical ability to decipher visual information encoded by humans would imply proximity to some sort of star system. In other words, if an alien being were to “see” visual information the way humans do, it would likely be the result of evolutionary sensitivity to a centralized light source. Thus, Lomberg depicted certain forms as shadows. If the hypothetical alien observer had any familiarity with light emanating from a single source, then it was likely familiar with the concept of a shadow. If it was familiar with shadows, it might also understand that they represent complex physical objects in two dimensions. And if it got that far, it might also realize that the rest of the images on the record were two-dimensional representations of three-dimensional beings and structures.

While many images of space sharpened actual observations, others imagined new frontiers in space. In these images, Donald Davis depicted the donut-shaped spinning space colonies that physicist Gerard O’Neill envisioned in a 1974 Physics Today article. How did these designs reflect the relationship between astrofuturism and astrorealism, and in what ways were these visions inspired by life on Earth?

I’m always fascinated to learn where the artists in my story source their reference material. When you’re depicting an unobservable topography, or, as in this case, an imagined one, it’s easier to deploy an existing reference as a proxy. Gerard O’Neill had worked out the basic architecture of the structures in his plan, but the look of the interiors still needed to be filled in. I had the pleasure of interviewing Don Davis about his working process in 2020, and when I asked him about the space station designs, he emphasized that the landscapes around him played a significant role in the visual formulation of O’Neill’s space colonies. He had recently moved back to Northern California from Flagstaff, and used the rolling green hills of the San Francisco Bay Area to inform his work. He considered trees and other natural features integral to a pleasant living experience, but also an important nod to the types of ecosystem design that would be necessary on a self-sustaining colony. Davis felt that such environmental engineering would be as big a challenge as anything else, and wanted to make sure it was an implied task in the illustrations he produced.

Don Davis’ illustrations provide a clear material link between Gerard O’Neill’s space station designs and Douglas Dewitt Kilgore’s analysis of them as literary objects that cast outer space as a type of suburban frontier. According to Kilgore, O’Neill’s design was the astrofuturist’s answer to the economic and political ills of the 1970s. The energy crisis on Earth wouldn’t be a problem for orbiting space colonists, who could tap into the limitless supply of solar energy offered by the sun. Resources scarce on Earth could be mined from the surface of the Moon, and the absence of gravity meant the construction and expansion of space station structures could continue in perpetuity. O’Neill’s answer to the resource limitations of Earth was not to re-examine consumption, but rather to extend the possibilities of capitalist growth indefinitely into a new and endless frontier.

This would presumably also help ameliorate the political problems of the 1970s. By giving different groups of people the resources they needed to live independently, governments wouldn’t need to mediate their peaceful coexistence. Groups could self-organize however they pleased and splinter off to form new colonies, should the need arise.

When he began collaborating with Gerard O’Neill, Davis was living in Atherton, a suburb south of San Francisco, close to his first job at the USGS. Atherton’s surrounding natural landscapes greatly influenced the look of the interiors of the space colonies Davis produced, but so did the layout of the city itself. Davis’s designs deliberately included a lot of greenery, foregoing the dense “shopping mall” aesthetic he often saw applied to space colonies. The beauty of O’Neill’s design was the prospect of infinite expansion, which eliminated the need for cramped space stations and the miserly economization of resources in an extreme environment. Atherton was a wealthy suburb, and a perfect example of the low-density housing Davis thought represented ideal living conditions.

I think this is a great example of the ways in which these images function as robust historical artifacts. Kilgore observed that the embrace of the suburban pastoral in space station design mirrored the same impulses that drove white flight out of urban environments in the same period. As with the new suburban neighborhoods sprouting up across the United States, Davis’ colonies implied the existence of life on the idyllic periphery of an industrial center. I think this is especially evident in illustrations of O’Neill’s toroidal designs, spinning rings that simulated gravity using centrifugal force. In a visual sense, the colonies are a suburban halo around a city that has ceased to exist. The problems of city life have been literally absented, leaving only a verdant and harmonious mode of existence.

How did astronomical illustrations change after the 1970s, and what does this tell us about the relationship between art and science today?

I think the visualization efforts of the late 1970s, when we had to rely largely on unmanned satellite views of distant cosmic neighbors, really reached a critical mass. By the early 1980s, space art and astronomical illustration actually professionalized into a formal guild with its own in-house journal. I have an entire dissertation chapter about a trip they took to the Soviet Union in 1987 to meet with a parallel guild of Russian space artists, and the differences between their respective approaches. As you might imagine, both groups cultivated very different philosophical approaches to representing the cosmos.

The International Association of Astronomical Artists, or the IAAA, is actually still around today. There’s definitely less of a market for handmade illustrations than there was closer to the mid-twentieth century, but outer space is still extremely hard to make visible.

A few years ago I had the opportunity to interview Dana Berry, who worked on Hubble imaging at the Space Telescope Science Institute at Johns Hopkins. The question of how to represent space subjects in an intelligible way was still a pressing one, even with the use of digital image processing tools and the help of a space telescope.

In Berry’s view, science visualization is a competition between believability, pedagogy, and accuracy. For instance, if you’re trying to show the solar system on a computer screen, you have to be able to show the planets. In reality, they’d be so small they’d fall between pixels. In order to scale the planets up in a way that viewers can recognize, accuracy has to take a hit. So in this way, pedagogy and believability win out.

Berry emphasized that this is especially true with representations of the Big Bang. To keep with the computer screen analogy, the Big Bang is usually shown as an empty screen; then a pixel emerges and expands to fill the entire frame. But the problem with this visualization is that the Big Bang created space as well as time, so the computer screen technically didn’t exist yet. We’re trained to think of the universe as emerging from a single dot suspended in space, but how do you show something expanding into a realm that doesn’t exist yet? In these views, accuracy takes a backseat to believability.

To answer your question, while we don’t see many handmade illustrations of space these days, we’re still very much grappling with the same kinds of questions that astronomical artists were trying to figure out over the course of the twentieth century. Space is hard to conceptualize, which makes it hard to see. I think our attempts to picture it will always inevitably function as cultural products.