Podcast

Matrix Podcast: The Past and Present of Teletherapy

In this episode of the Social Science Matrix podcast, Julia Sizek, a PhD candidate in the UC Berkeley Department of Anthropology, interviews UC Berkeley scholars Hannah Zeavin and Valerie Black about the history and present of teletherapy, a term that describes all forms of remote therapy, from letter-writing to chatbots. Both researchers study the history and experience of these tools of therapy, which are often assumed to be more impersonal than, and inferior to, in-person therapy. They discuss the past and present of teletherapy, how the ongoing pandemic has affected mental health care, and the business of artificial intelligence-based therapy.

Valerie Black is a PhD candidate in anthropology at Berkeley completing her dissertation, “Dehumanizing Care: An Ethnography of Mental Health Artificial Intelligence.” Her multi-sited dissertation research has been conducted in Silicon Valley at a mental health chatbot company and in Japan at a mental health videogame company. Her research concerns how chatbots and other AI health technologies might reshape our understanding of care and labor. She was recently awarded the Charlotte W. Newcombe Doctoral Dissertation Fellowship to complete her dissertation.

Hannah Zeavin is a Lecturer in the Departments of English and History at Berkeley, and sits on the Executive Committee of the Berkeley Center for New Media. She received her PhD from NYU’s Department of Media, Culture, and Communication in 2018. Her research considers the role of technology in American life. Her book, The Distance Cure: A History of Teletherapy (MIT Press, 2021), is a transnational history of mediated and distance therapy, starting with Freud himself. Her second book, Mother’s Little Helpers: Technology in the American Family (MIT Press, 2023), considers the history of techno-parenting in the 20th and 21st centuries.

Produced by the University of California, Berkeley’s Social Science Matrix, the Matrix Podcast features interviews with scholars from across the UC Berkeley campus. Stream the episode above, or listen and subscribe on Apple Podcasts or Google Podcasts.

Podcast Transcript

[MUSIC PLAYING]

Woman’s Voice: The Matrix Podcast is a production of Social Science Matrix, an interdisciplinary research center at the University of California, Berkeley.

Julia Sizek: Hello, everyone, and welcome to the Social Science Matrix Podcast. I’m Julia Sizek, and I’ll be your host for this episode. Today, we’re excited to have Hannah Zeavin and Valerie Black to discuss the history and present of teletherapy.

Teletherapy describes all forms of remote therapy, from letter writing to contemporary chatbots. Both of them study the history and experience of these different forms of therapy, which are often assumed to be inferior to forms of in-person therapy. At the same time, teletherapy has seen an enormous surge in popularity during the pandemic.

Hannah, a lecturer in English and history at UC Berkeley, recently published her first book, The Distance Cure: A History of Teletherapy, with MIT Press.

Valerie is a PhD candidate in anthropology, and she was recently awarded the Charlotte W. Newcombe Fellowship to complete her dissertation on the use of artificial intelligence in therapeutic mental health care.

Thank you to both of you for coming. We want to talk about the new rise of teletherapy. During the pandemic, there’s been a lot of news coverage about the rise of teletherapy, or therapy that’s conducted over the phone or online.

But both of you are really interested in the much longer history of teletherapy. How would you define teletherapy, and what seems to be so new about teletherapy today?

Hannah Zeavin: Thanks so much for that question, Julia, and for having me on. In my book, The Distance Cure: A History of Teletherapy, I use both the more narrow definition of teletherapy that’s mediated by true telecommunication technology.

But also this more expansive definition that you’re pointing to, so that I can include activist care, self-care mediated by machine, but also, as you point out, the post in fin-de-siècle Vienna, and radio broadcasts, and so on.

What’s new about contemporary therapy, I think the first thing that comes to mind is scale and also a diversity of delivery. We haven’t lost our usage of, say, the suicide hotline. People still call in and write to radio shows and advice columns, no doubt.

But now people do have a whole ecosystem of teletherapy startups vying for their care and their dollars. And in the global pandemic, much of private practice is still remote.

Although, of course, I think there has been this very intense focus on teletherapy, which obfuscates the fact that not all therapeutic care happened in an office anyway and not all therapeutic care has gone online. And we can talk a little bit about that too.

Valerie Black: What more is there for me to possibly add to that? I would just say that– so my research, my field sites were two different startups that are providing care that they themselves would not define as a form of teletherapy, but that I think it’s absolutely fair to understand it that way.

In forming my project, I was drawn to the seemingly improper, weird, in-between spaces filled with non-experts, places where entertainment meets care: everything from ham radio, amateur radio, to early chat rooms.

And teletherapy is on the fringes of that. And Hannah is going to talk about this, but her own work really beautifully upends much of what seems properly versus improperly therapeutic.

For me, I’d say, instead of teletherapy being too unofficial or sort of an outlier, I almost have the opposite problem. It’s a bit too official in some ways to define the work being done at my field sites. And I initially saw teletherapy as this sort of adjacent kindred spirit or predecessor.

I remember back in, I think it was 2018, I went to a panel on teletherapy at the American Psychological Association’s big annual conference, and I was just blown away by it being a field filled with red tape.

And I think one of the forms that I’m going to be talking about, and that you’ve already touched upon, Julia, therapeutic chatbots, is very much free of that. They have a certain plasticity that creates a business opportunity.

So whereas in conventional teletherapy leading up to the pandemic, there’s been so many questions about if I’m licensed to practice in this state, but my client is traveling in this other state for a certain period of time, can I still work with them while they’re at that location, even though both of us are remote from one another, anyway?

And I think, yeah, chatbots being so much less formal and not officially therapy end up circumventing a lot of the concerns that teletherapy has traditionally brought.

Zeavin: And Valerie, I think that’s an awesome point because one thing that’s very tied up in this idea of scale and diversity, too, is exactly that.

That in the pandemic, the lack of red tape that had been reserved for non-therapeutic interventions, even if they’re marketed as such, like the AI chatbot, like the sort of, quote unquote, “serious game” or the gamification of mental health care, we’ve seen all of that red tape disappear virtually overnight for private practice as well.

So it used to be that you could only use HIPAA-compliant, medical-grade Skype, say, or official teletherapy and telemedicine channels. And right as the pandemic started, those systems actually started to fail.

And so all of that compliance was waived for any habitual media. So whether you preferred FaceTime or Zoom, which is now so ubiquitous, that was OK. And in parallel, licensure problems were also diminished.

And lastly, and I think this has been a really underreported story of the pandemic, four out of five big insurance companies waived the copay for teletherapy that had previously existed.

And now, of course, we’re seeing all of these special emergency loosenings of the red tape that Valerie was so perfectly pointing to go right back into place, which is going to have pretty deleterious effects for those who still need remote care, or needed it all along and could only access it because of this lack of structure.

Black: So well-described. Yeah, I think the changes, and the sort of boomerang of those changes, have been and will continue to be so sweeping and so problematic for so many.

Sizek: Yeah, I think it’s so interesting to hear about the way that therapy has always been tied up with certain forms of bureaucracy, red tape, insurance, all this compliance that we don’t actually think about entering into the therapeutic relationship.

When people imagine going to their therapist, having their therapist be their friend, sitting on a couch, that is not at all what comes to mind when you’re pointing to these forms of mediation that are at the center, I think, of both of your work.

And so maybe, Hannah, you can tell us a little bit about why people think teletherapy is inferior to or different from in-person therapy, or about the challenges that people have put to teletherapy historically.

Zeavin: Great. Thank you so much. I mean, I think that that’s like the most loaded and overarching question here, right? And I think there are a lot of different answers that I try and deal with in my book.

As you point out, teletherapy up until the pandemic was most frequently talked about as lesser, almost amateur, as definitely having a dampening or kind of metallic quality.

And it was true up through the pandemic that teletherapy was therapy’s shadow form. It was not the dominant form of care on offer. And so there wasn’t a huge sociological drive to make it the dominant form or readily available.

And then I think what Valerie and I are both really interested in is that didn’t really stop anyone from experimenting in all different kinds of ways. On the one hand, of course, the history of AI therapy is some 60, 70 years old at this point, or the first efforts to script therapy. The suicide hotline starts in the 1950s.

And going further and further back: therapists have remarked in the pandemic, well, but I can’t imagine Freud using Zoom, or something to this effect. I’ve heard that joke a lot.

And of course, I’m the killjoy. And I say, well, not Zoom, but in fact, the founder of psychoanalysis was very invested in media and thinking with media, not just metaphorically, but also quite materially in using written cures to both treat himself and others.

So I think that this idea that it’s lesser has to do with this question of, what does it mean to gather two people together? And that is a huge human question that we’ve been looking at and examining both at the personal and interpersonal level this past year and a half and for millennia.

What does it do to put two bodies or more than two bodies in a room together? So I could walk through all the various critiques. They often center on a reduction in empathy and feeling, a reduction in information.

Sometimes the critiques are upsetting, and they center on losing a power differential that’s endemic and important, these people argue, to the therapeutic scenario. And then we also see that actually there are all kinds of new ways of coming together in teletherapy that I call distanced intimacy.

The last thing I’ll say about this is that part of it, I think, has to do with the idea that there’s an intruding third factor. If it’s me and you and a medium, that’s, quote unquote, “less pure” than just me and you.

And so in my book, I start with upending that assumption and saying, actually all therapy is always a triad, not a dyad. It’s always comprised of patient, therapist, and medium.

And I think to pass this to Valerie, it’s one reason why AI is so fascinating as a test case because it actually returns us to the notion of a dyad, but this time it’s just the patient and the medium, no therapist.

Black: Such a great point. To think in terms of the dyad and triad, which is so compelling. I think a big question that people have for me oftentimes is, who is the caregiver? Is it the human who writes this script?

And I don’t mean script as in programming, although that too. I actually mean the dialogue that, in the case of AI, the AI therapist is putting forth. Is it the human that’s creating that, or is the AI itself the caregiver?

And so in a way the triad is still sort of there a bit depending on how you approach it. Yeah, so I love that framing that you have, Hannah.

Sizek: Yeah. And to maybe just dig into how this actually works, like, what does an AI chatbot interaction look like? What is a circumstance under which someone would use it? What are the mechanics of that relationship?

Black: So it’s really going to depend quite a bit on what service, what platform you’re using, and what kind of device you’re using as a user to connect to that. Most generally, I’ll just give an example, rather than say this is the way.

But if you’re using your phone, you might be using an app, you might just be using your regular SMS text messaging to a phone number, but then it’s the bot that’s replying to you.

And so it might pop up with a question once you’ve signed up and joined and you click through that, you understand the standard user policies, disclaimers that this isn’t a real person, that this isn’t an emergency service or a substitute for that.

Once you’ve gotten there and you’re signed up, then you might get pinged right away and also on a regular basis, maybe daily, maybe weekly, perhaps the same timeframe.

It really depends on the service. They have different theories as to how often people want to be contacted and how they detect that from people’s patterns of corresponding with it.

But say, you’re getting like a weekly text message on Friday afternoons and it might just be a question like– it could be something like, how’s your week going? Or how are you dealing with stress these days? Some sort of a hook that makes you want to open up your phone and click something.

Sometimes you’ll get buttons to answer, sometimes it’s all text-based. It really just depends. And it sounds so basic when I put it that way, and, surprise, it is pretty basic.

Does that help at all? Imagine trying to describe how email works to someone: it’s going to sound a lot more painful than what it actually is. So I hope I did it justice.
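
To make the exchange Black sketches here concrete, below is a minimal illustration in Python of a scheduled check-in ping: a hook question sent on a cadence, with optional quick-reply buttons and a free-text fallback. Every class name, prompt, and cadence in the sketch is a hypothetical assumption for illustration, not drawn from any actual product.

```python
from dataclasses import dataclass, field

# A minimal sketch of the scheduled "ping" described above: a hook question
# sent on a cadence, with optional quick-reply buttons and a free-text
# fallback. All names, prompts, and the weekly cadence are hypothetical.

@dataclass
class CheckIn:
    prompt: str                                        # the hook question
    buttons: list[str] = field(default_factory=list)   # optional quick replies

WEEKLY_CHECKINS = [
    CheckIn("How's your week going?", buttons=["Pretty good", "Stressful", "Not sure"]),
    CheckIn("How are you dealing with stress these days?"),  # free-text only
]

def render(checkin: CheckIn) -> str:
    """Format the outgoing message the way an SMS or in-app ping might look."""
    lines = [checkin.prompt]
    lines += [f"[{i + 1}] {label}" for i, label in enumerate(checkin.buttons)]
    return "\n".join(lines)

if __name__ == "__main__":
    for checkin in WEEKLY_CHECKINS:
        print(render(checkin))
        print("---")
```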

Sizek: Yeah, well, I think that’s really helpful because it helps demystify this process for us, right? When we think about AI chatbots caring for people, some people find this to be a very strange phenomenon.

But when you put it this way, it sounds just like calling into a suicide hotline, where both of you have conducted research, both archivally and ethnographically.

So maybe you can help us understand: how do you think about something like an AI chatbot today, which seems very different or new or exciting or strange, compared to something that seems somewhat mundane, like a suicide hotline?

Zeavin: I’ll let Valerie deal with that direct comparison. It’s such an incredible feature of Valerie’s work that those two sites are brought into conversation together. But maybe, Valerie, if it’s OK with you, I can back up and start with just a brief gloss and history of the suicide hotline.

Black: Absolutely, that would be perfect.

Zeavin: So, the suicide hotline and AI, if I had to foreshadow where Valerie might go: the suicide hotline, when it starts, is far from mundane. It’s a kind of radical idea for a number of reasons.

The first is that it really comes out of Protestant clergy, first in England and then the United States, which surprised me. When I put together the proposal for what eventually became the book, I assumed, of course, that the suicide hotline had to be secular. But in fact, it’s psycho-religious in origin.

And the idea of removing care from the expert, which we also see in the AI scenario, was completely radical. But instead of it being given only– and only there does include, of course, many, many, many humans, to a sort of machine, the idea was to yoke people anonymously via telephone wire, via another sort of common household appliance, the phone.

And the first suicide hotline in the United States was, in fact, in the Bay Area, run by Bernard Mayes, who went on to found KQED and become chairman of NPR.

He ran it as a queer priest in the Tenderloin, precisely to care for the suicidal, of course, but also for folks who were working in the Tenderloin, and especially for LGBT users of the hotline who did not want to interface, for all kinds of maybe obvious reasons, with a deeply homophobic standard psychological apparatus at that time, and also one that really put suicidality within the context not only of that psychological framework, but a carceral one as well, right? Suicide was illegal.

And so Mayes trained volunteers; it wasn’t just him. He did one in every four nights. And among the people he trained, he would not accept anyone with any classical training in psychology or social work. He wanted what he called an exquisite ear.

But that meant that a whole host of media came together to train those volunteers: tapes of callers, role plays, and of course, scripts. And it’s true if you’ve worked on a hotline: you have scripts, you learn scripts. And then, of course, in the moment, you’re working between a script and response, human response, to the person you’re talking to.

And the hotline grew rapidly. It started out as just a couple of calls a day, then slowly reached 200 and exploded, and then became adopted in every possible state, precisely because it was this radical form of free care, which is something we haven’t yet underscored: teletherapy, in its longer history, is almost always free or low-fee, free peer-to-peer care that deletes the expert but keeps a human.

Black: So well put. And I would love to note that at hotlines today, many of the volunteers who go through training come there from some type of professional career or training background in some sort of mental health field, maybe psychology, maybe social work, psychiatry, and so forth. And a big component of volunteer training is having to be reminded to not bring that training into the call.

Yes, that squares perfectly with my experience as well.

So for me personally, the connection between crisis hotline and AI chatbot, this all sort of came together for me as like my way of thinking through how to propose and get funding for my dissertation research.

I think, in general, it’s very difficult to just put something forward at face value and say, this is new, this is unlike anything else. I mean, that’s absolutely bait for historians to say, oh, hell no.

But I think for most scholars, there’s a pause on that kind of claim. And that claim is all over in Silicon Valley. So it’s hard not to be a bit reactive to that as a scholar.

So to me, I was trying to just think through logically: this is what an AI chatbot does, so what are other forms that are similar? And to me, the first thing that came to mind, well, maybe the first two things, would be a Catholic priest’s confessional booth, and then also, yeah, crisis hotlines.

So I don’t know if it’s because I grew up in the South or what, but to me, I personally wasn’t surprised to learn that history of crisis hotlines. Maybe just, again, yeah, growing up immersed in a culture where church is such a huge component of life for so many different people.

But back to what Julia said about demystifying AI a bit earlier, or just AI-based or AI-delivered therapy: yeah, I think that’s absolutely important to do here, too, because my biggest takeaway over time was that the AI is surprisingly not that advanced in these kinds of offerings.

It’s deliberately limited, because the people making these do not want the AI to be crafting dialogue on its own, going outside of the confines that the experts, who themselves are often trained psychologists, are writing for it.

But this took me a lot of time to realize, precisely because of how hard it was trying to get in to one of these sites to do fieldwork. I don’t know if either of you, or anyone listening, has ever seen– oh, gosh, I’m going to sound so nerdy– Star Trek IV, the one where they time travel back to San Francisco, actually, the one with the whales.

Zeavin: No, but I need to look it up immediately.

Black: Well, basically, they have– oh, man, now I’m losing my nerd card because I’m forgetting the details that I ought to know as a nerd, but I think it’s set in the ’80s. And one of the members of the Star Trek crew is Russian. And he’s there to solve the problem.

And he’s saying to like, take me to your nuclear weapons, your nuclear vessels so I can come– and everyone’s like, Holy crap, there’s this Russian guy saying the nuclear word. Like, that’s a big– call all the security kind of issue.

And I started to feel like that as a researcher– you’re wondering, where the hell is she going with this?– when I went: knock, knock. Hello. I’m a graduate student. May I please come observe your chatbot startup workplace?

I felt like I was basically demanding to be taken to the nuclear stuff; people were very scared that I was going to spy on their AI. And I don’t have the training to really be able to do that, nor the inclination, but I ended up getting a lot of no’s, and not even direct no’s, but just non-responses.

And I’d keep trying and someone at the company would on the down low reach out to me and be like, yeah, sorry, we’re not going to be answering that. But just so you know, it’s not happening. And I started to panic that I was not going to have a project.

And so I approached a suicide prevention hotline and asked if I could maybe do some research there. They were incredibly gracious, welcomed me right away. Literally, I think, 24 hours after I emailed, I was there.

And I sort of saw it as a warm-up to get to think about a chatbot startup and understand, just as Hannah and I have both alluded to, these pre-scripted forms of care, where someone calls in or texts in and says something and you respond in one of two ways.

Or you ask this question, and then based on their response, you move to this next question, or else you respond with the answer, all in this sort of conditional “if-then” tree of conversation.

So that was my chance to either warm up with that or else maybe end up there if everything else fell through. So that is what brought me to think about these two things together in the same frame.
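
The conditional “if-then” tree Black describes can be sketched concretely. The minimal Python illustration below walks a hypothetical scripted tree, branching on keywords in the reply; every node, prompt, and branch here is an illustrative assumption, and real services script far larger trees with more sophisticated matching over free text.

```python
# A minimal sketch of the pre-scripted, conditional "if-then" conversation
# tree described above. Every node, prompt, and branch keyword is
# hypothetical; real scripted-care trees are far larger.

TREE = {
    "start": {
        "prompt": "How's your week going?",
        "branches": {"good": "celebrate", "bad": "explore", "stress": "explore"},
    },
    "celebrate": {
        "prompt": "Glad to hear it! Want to note one thing that went well?",
        "branches": {},  # leaf: this exchange ends here
    },
    "explore": {
        "prompt": "Sorry to hear that. Is it mostly work, or something else?",
        "branches": {"work": "work_tip", "else": "generic_tip"},
    },
    "work_tip": {
        "prompt": "One idea: try a five-minute break before your next task.",
        "branches": {},
    },
    "generic_tip": {
        "prompt": "One idea: write down what's on your mind for two minutes.",
        "branches": {},
    },
}

def run(tree: dict, node: str = "start") -> None:
    """Ask each node's prompt and branch on keywords found in the reply."""
    while True:
        step = tree[node]
        print(step["prompt"])
        if not step["branches"]:
            return  # leaf node reached: the scripted exchange is over
        reply = input("> ").strip().lower()
        # Move to the first branch whose keyword appears in the reply;
        # if nothing matches, stay on the same node and re-ask.
        node = next(
            (nxt for key, nxt in step["branches"].items() if key in reply),
            node,
        )

if __name__ == "__main__":
    run(TREE)
```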

Sizek: Yeah, I think that’s such an interesting way of also pointing out all the bureaucracy and red tape that not only surrounds the medical side of this practice, but the proprietary ways that chatbots are coming to fill in a certain kind of gap in the medical system.

So can you just tell us a little bit about how these companies think about their role in the mental health ecosystem or sort of within the scope of the very bizarre world of US health insurance?

Black: So again, with the positioning of these, the people making them would never suggest that they are a substitute for conventional therapy. And so the way that they are being put forth is as a sort of additional layer.

It’s for people who would not be accessing conventional therapy for whatever reason, whether because they’re not interested, they have concerns about it, it takes too long to get started.

And many of the makers of these would also position them as sort of a transition into thinking about pursuing therapy for people that might be hesitant to do so. So it’s definitely like an added tool and not an instead of thing.

So I feel pretty strongly about the biggest difference between the 1950s AI chatbot iterations that Hannah mentioned and today’s services– I mean, I can name this one because it’s not my field site, but a very well-known, recognizable service, for example, is Woebot.

The difference between ELIZA and Woebot, the biggest difference, is not the technological capacity. A little bit of the difference is accessibility, in that so many more people now have a smartphone or similar device. But I think the biggest difference is the business-to-business model, the way these startups are able to position this to a buyer.

And in this case, a huge percentage of the sales are business to business. And by business-to-business, I mean as opposed to a direct business-to-consumer, B2C, relationship, where your end consumer, your end user, is paying for it, maybe through a subscription service that you pay for.

But business to business would be something kind of like here at UC Berkeley, like our health benefits. If there’s something that you’re allowed to tap into by virtue of being a Berkeley student or faculty or staff member, like that kind of access that the university has bought on our behalf. So we’re the end user, but the university is the client in that relationship.

And in fact, actually, a lot of campuses are clients of AI chatbots, of therapeutic chatbots. So I think it’s the ability for a client to add this to their portfolio of services. And the main clients in question would be programs like employee assistance programs, or EAPs, sort of privatized entities that provide the care packages that we increasingly come to think of as characterizing a great company to work for.

But as I mentioned, schools and other nonprofits will have these as well. So I think the opportunity to add that service at a relatively low cost, to have one more layer, one more sort of safety net, a we-have-this-offering-just-in-case, becomes the thinking behind that.

And a lot of these EAPs will actually build on and customize the offerings to tie in a human counseling service to their chatbots. So you could start off in the chat and end up on a phone call, or still in the chat, with an actual human.

Now, that’s not a feature of all of the chatbot companies; some do have that service at the front end, where anyone can access it, but I believe only a few do that, and most don’t. But then it becomes available that way through your own workplace or school or whatnot.

And Hannah, I don’t know if you want to add. I think there’s something very interesting about this way that the business relationship comes to interrupt how we think about therapy as maybe not being a business relationship, despite the fact that the money is always being exchanged outside of the room. I don’t know if that’s come up in your research.

Zeavin: Yeah, I mean, thank you for that. I look at all of the ways, from jump, that any therapeutic encounter might be mediated. And I say might be because, of course, one thing that is true of teletherapy across its longer history, much, much less so now in the present, is that, again, teletherapy is often free or low-fee.

So it is a place where no money might be exchanged. But of course, there are other media, overt, obvious media. I recast money as another medium that’s part of what’s called the therapeutic frame.

And so whether money is being exchanged in the form of a check, hand to hand, or cash, hand to hand, or on Venmo (as is now the case, where you might see therapists being paid in your Venmo feed, by friends or by strangers, if you use Venmo), all of that is deeply part of the frame, even when it isn’t part of one of these systems that Valerie just perfectly described.

And the system that Valerie is describing is not limited only to AI therapy, of course, because there is a huge ecosystem now: AI therapy startups, like Woebot, have just gotten massive rounds of funding, a Series B round in the last few weeks.

So the sort of, quote unquote, “slowdown of the pandemic,” which, of course, is not at all slowing down, certainly not in the United States, certainly not elsewhere, is also not slowing down this kind of quest to make an AI therapist.

But all of this is also happening in other kinds of teletherapy startups, both in the Valley, in Silicon Valley, and in New York and elsewhere. And now, I think, when you say teletherapy, the first thing that’s largely going to come to mind is, oh, you mean BetterHelp? Oh, you mean Talkspace?

And as Valerie said, these are often bought by employers and they are the mental health care that is then offered back to employees or students and for good and bad reasons.

This is not a defense of it, but I remember once talking, as part of my research, with the CEO of a teletherapy startup that has since gone bankrupt. I cut this interview from my book; I kept other elements in, but not this part.

And she said to me, yeah, but if you don’t have teletherapy, the waiting time at a California university (she was talking about a non-UC university in California) is 16 weeks for one intake session. And of course, we know mental health care needs to happen much more rapidly than that.

It’s very hard to say something back to that, except that my answer wouldn’t necessarily be to endanger data privacy, confidentiality, and, of course, even just the basic therapeutic relationship in this way.

But that is part of the kind of evangelist statements that Valerie is pointing to, or the kind of democratizing marketing campaigns around this ecosystem, both on the AI chatbot side and in the wider teletherapeutic startup landscape.

Sizek: Yeah, so maybe now that we’ve heard about this evangelizing side, this liberating framing of teletherapy and these chatbots, maybe we can understand some of the drawbacks, the potential concerns around privacy and technology.

Earlier, we discussed how at the beginning of the pandemic everyone was allowed to go on Zoom or other non-HIPAA-compliant forms of mediation for their teletherapy. What are some of the drawbacks, either in terms of this privacy concern or other concerns about teletherapy, in terms of the specific media that we use today?

Zeavin: I mean, one thing I would say to that, just to complicate our conversation, is that, of course, everything you just named, Julia, is a drawback of corporate teletherapy, and sometimes of private practice too.

And there are others; they are legion, even though in general the charge gets laid at the question of relationality, which studies have shown works just as well remotely, sometimes better, depending on the kind of therapy. CBT, for instance, has been shown to be more effective mediated by computer than in person.

But leaving all of that aside, I think one other area that I’m really interested in and invested in that we haven’t yet spoken about is what it does to the therapist. So, of course, following Roy Porter’s call to look from below and to be invested in patients, in my book, that is absolutely the driving factor.

And also therapeutic labor is deeply intertwined with the question of media, and the feminization of therapy is deeply intertwined with the question of media and the kind of rescinding of the expert, which is good and bad.

So one thing we see now is that the therapist is supposed to be always on. Even in private practice, there’s been a slide into what used to be known as therapeutic contact: texting and emailing are much more prevalent, the therapists I’ve spoken to say.

But then, especially if you work for one of these startups, there have been a whole host of complaints. And there are two key figures who are really leading the charge in this debate. One, in the UK, is Dr. Elizabeth Cotton, whose project Surviving Work is phenomenal; everyone should go check out her work on the uberization of therapy and that demand to be always on.

And then researcher Brianna Last in Pennsylvania is also looking at what the current status of therapeutic labor is. As just one statistic from Dr. Cotton’s work, 10% of mental health workers in the UK in this past year received zero payment.

So the other thing to say is, we hear this logic circulate a lot, right? Oh, there are too few therapists. And that’s true. But it’s also more complicated than that, because we have a lot of therapists who aren’t being compensated, and certainly not compensated fairly, for their labor, and who are then sort of at the behest of moving onto these corporate teletherapy platforms, which are gigifying therapeutic labor, labor where, in the United States, something like 70-some percent of therapists, counselors, and social workers are women.

And so that’s just one other additional question to ask. Is this the future of care we want? And all of this massively intersects with all kinds of other questions.

So there’s the labor question, and there’s the patient experience question, where, yes, it’s been in Forbes, in The New York Times, in The Guardian, right? And there have been massive confidentiality and privacy leaks from both AI chatbots and their corporations and more general teletherapy platforms.

There might be diminished connection, again, not because it’s mediated, but because it’s a corporatized medium. And then there’s the question of insurance, the question of choice and not choice, of false choice.

The questions here are legion because the mental health care system in the United States, as you pointed out, Julia, is deeply broken, and it has been for a century at least.

Black: Yeah, I’m so glad, Hannah, that you took that in the direction of thinking about caregivers. That’s exactly the focus of my research, for a couple of reasons.

So while I receive a lot of questions around privacy concerns, and that’s definitely something I’m interested in talking about, it’s not really what I’m most immediately concerned with in my work.

Like, as an ethnographer, you are most expert in the people or entities that you are spending time with and observing. And for me, end users were a small fraction of that compared to workers.

And a big part of my project is considering the AI as a caregiver, as an entity working alongside human caregiver laborers, as a colleague. Because, as I’ve mentioned, at these startups, a lot of the–

There’s more than a bit of a status divide, oftentimes, in the way many of the startups are composed, where you have people doing what you would think of as the conventional tech side of things: the engineering, the building and maintenance of what we think of as being the real AI, the technology.

And then you have psychologists or social workers being hired to craft the language, the therapeutic dialogue that is then carried out by AI. And I think at the startups themselves, there’s a lot of status difference in terms of pay and just relative importance.

And that was something that really interested me, because I didn’t exactly not expect that. But to me, I sort of saw the startup as this whole entity. And precisely because it was so hard to get access to one, it was easy to slip into thinking of it as this singular entity, the startup, instead of just a regular workplace with all these hierarchies and dynamics.

But what really interested me, too, was that many of the mental health experts working at these startups deliberately wanted to be replaced, or, in the better term from scholar Lilly Irani, displaced, by AI. They wanted their job to be taken over by AI so that their job as a mental health caregiver could be something else.

Many of the workers that I worked alongside and got to know had experienced really poor working conditions. A few of them had complex PTSD following– OK, so that’s a whole thing, talking about how you have PTSD; it’s much more complicated than that.

But just as shorthand: through the conditions of their work as caregivers, they experienced trauma that led to them needing to get care for themselves in order to keep working and functioning and being OK.

And they had terrible pay, they’d had tremendous job insecurity, they’d had terrible working hours. And so for them, AI was an opportunity to do better and have a job with more stability, better pay, benefits, safety. So I found that to really upend a lot of conversations around automation in a very interesting way.

And in thinking about AI and human caregivers, this sort of working relationship between them, I am suggesting that AI is a caregiver. I’m not necessarily arguing that it’s a better caregiver or an identical caregiver, or that care means the same thing regardless of whether the caregiver is human or an AI.

I’m also not trying to just be provocatively fun and say, like, AI– I don’t know how to put it. I’m definitely saying that AI in this industry is very limited and not what our sci-fi imaginings might suggest. And I’m simultaneously saying, yes, it’s a caregiver, because it’s doing this work.

And I feel like looking at human and AI caregivers together in the same frame makes it possible to understand what is expected of caregivers, what an ideal caregiver is becoming, what sort of attributes it’s expected to have.

And just as Hannah perfectly pointed out, the poor working conditions that human caregivers experience are not because of AI, but AI ends up becoming a solution to these problems because of the logics created about what care work should be and entail.

And I also spend a bit of time dwelling on caregiving labor as this really fascinating sort of paradox in plain sight, where you would think we have a lot of expectations around work and labor and compensation, and you would think that–

And I’ve even seen some scholars suggest– in anthropology, I’m thinking of Arthur Kleinman’s work talking about caregiving as this sort of ultimately human experience.

That there’s this deep-rooted humanness, almost tautologically: we are human because we care, and we care because we’re human. And he writes about it in such a beautiful and personal way.

But I want to push back against that. And I think there’s a tendency to say, well, there are two forms of value at work here. There’s this humanist value, and then there’s income, money value.

And the idea is that it’s because you’ve got these two forms that being a caregiver is often very exploitative, particularly along the lines of gender, race, and nationality: it’s exploited because it’s so valuable in other ways.

But in actuality, I think it’s one big system of value, where one form becomes the justification precisely for the other. That duality is precisely what enables the exploitation; it’s not really a duality, because one condition is what enables the other. And I do think that looking at AI makes it possible to glimpse that in a different way.

Sizek: I think that’s so fascinating because it gets us back to the questions that we were thinking about at the beginning about mediation, about what makes teletherapy or chatbots seem so strange to us when, in fact, they’re often doing things that are replacing or displacing therapists in a way that they would like to be displaced.

That they would like to not have this complex PTSD as a result of doing their work, or that they would like to be adequately compensated for the labor that they’re actually performing.

And so I think this might be a good opportunity for us to just pull back a little bit and reflect on what we think that we can learn from this teletherapy and how to actually make our very broken mental health system a little better, or what this offers us in terms of thinking about our preconceptions when we come to therapy.

Zeavin: Well, I think one answer here is just to start by saying that I don’t think either Valerie or I would suggest that technology in a vacuum is going to fix anything, right? These are system-wide, deeply complicated, and very entrenched social problems.

I think it’s a long-term, very American, techno-optimistic fantasy, right? That would be great. It’s just not going to happen. And there are a number of reasons why.

I mean, one thing we can learn from teletherapy in the pandemic, and telemedicine also: my colleague at Johns Hopkins, Jeremy Greene, will have a book out in the next year from the University of Chicago Press, called The Medium of Care, which really deals with this in the telemedicine sector.

But one thing that we’ve each seen is that telemedicine or teletherapy used to often be a small or medium-sized project: it would build its own infrastructure, the technology would work, and then other factors would intervene.

The problem is not, as Valerie is saying, that the technology doesn’t work. It’s that maybe the infrastructures aren’t there. That trust might not be there, for all kinds of reasons. That medical redlining is going to exist alongside digital redlining.

So if you were less likely to see a therapist or have telemedicine before the pandemic, you’d actually be even less likely during the pandemic, along the lines of race and class and gender.

So these are the kinds of complicated takeaways that can scale for something like policy, as long as we don’t techno-evangelize and instead think about what role technology can play.

So Valerie’s very convincing anecdote, please take this element of my job, can be read, and we can listen to that. But it can also be read against other concerns on the patient side: what would it mean to call into a call center that uses paralinguistic vocal monitoring to make diagnoses instead? What would that do for the patient?

Beth Semel’s recent work, “Listening Like a Computer,” deals in a fine-grained way with why that might be a problem and not a solution, and indeed is an extant, long-standing problem.

So I think that what I’m trying to suggest here is that there is no easy fix. It would have to be systemic. And technology is also– and of course, we shouldn’t even have to say this: nothing under capitalism is neutral. Technology is certainly not neutral.

But also, we can learn from and listen to patients and workers as we try to move toward scaling up, both ethically and holistically. Whether teletherapy is the sum total of that, I very much doubt it. But it’s always been a part; it’s always been in the shadows. And right now, of course, it’s the dominant, if not the only, form on offer.

Black: Yeah, I’m just very much in agreement with everything Hannah said. Back to this issue of workplaces and institutions: there’s a desire to show that mental health care matters to us, so now we offer this.

And I think it’s so important to understand what these supposed pathways or solutions actually look like when you’re there navigating them, what they can and can’t do.

And I don’t think that needs to boil down to a pro or anti-technology debate any more than it would in any other component of life. I think it’s not that simple or straightforward.

In my work, I’m really trying to take AI seriously as a caregiver, which isn’t to say that I’m committed to that as a solution. And I’m very skeptical about why these services, why startups like the ones where I’ve done fieldwork, are becoming more viable.

What kinds of needs and demands are leading to that? And I just think it’s very important to remain mindful of, yeah, exactly what kind of systems of care are available to whom and why.

And a lot of things that we take for granted as being publicly available, or freely or easily available to people, we should just really rethink. And technology can intervene in some of those problems and amplify them at the same time.

Zeavin: Yeah, I think that that’s exactly right. One of the worries about an AI or non-AI therapy startup is exactly that. It’s not as if therapeutic care is perfectly neutral, or perfectly good, and therefore we add technology and then the problems start.

But instead, there is the profit motive in Silicon Valley and elsewhere, or in what Silicon Valley has come to stand for; the Silicon Valley model, like, trademark, TM, is all about using profit motive and scale, right?

And therefore, the thing we’re in danger of is remediating pre-existing problems, both in the technology and in the forms of care at scale, and to people who are more vulnerable precisely because those are the kinds of care on offer, whether they’re our students or our colleagues, right?

And I don’t just mean at our school; I mean in the broader sense, where college students are increasingly receiving this kind of care instead of other forms. And these are the things to pay attention to.

So I think, of course, on the one hand, it’s always a sensational story when there’s a massive leak. And those are the ones that we get to learn about. A mindfulness app today had a confidentiality breach, and that’s just an interaction and a script and your data.

There are also breaches and leaks where a person has, extraordinarily, confessed their most intimate knowledge, behaviors, thoughts, and feelings, whether to a bot or a person, and that transcript gets leaked. These are some of the deep-seated worries on the patient side.

And then, of course, algorithmic bias, what’s called bias, which is not a strong enough word, is also something to pay attention to, alongside thinking about all of the questions of labor, all of the questions of systems.

Black: Yeah, absolutely.

Sizek: Yes, well, thanks so much. This has been really illuminating, and I feel like we’ve all learned a lot about the world of not only teletherapy, but also the contemporary world of conducting therapy during a pandemic. And so I just want to thank you so much for coming onto our podcast today.

Black: Thank you so much. It’s been an absolute privilege to be able to speak with you.

Zeavin: Yes, thank you so much. This was lovely.

Woman’s Voice: Thank you for listening. To learn more about Social Science Matrix, please visit matrix.berkeley.edu.

[MUSIC PLAYING]

 
