Crisis Text Line stops sharing conversation data with AI company

Crisis Text Line has decided to stop sharing conversation data with spun-off AI company Loris.ai after facing scrutiny from data privacy experts. “During these past days, we have listened closely to our community’s concerns,” the 24/7 hotline service writes in a statement on its website. “We hear you. Crisis Text Line has had an open and public relationship with Loris AI. We understand that you don’t want Crisis Text Line to share any data with Loris, even though the data is handled securely, anonymized and scrubbed of personally identifiable information.” Loris.ai will delete any data it has received from Crisis Text Line.

Politico recently reported that Crisis Text Line (which is not affiliated with the National Suicide Prevention Lifeline) was sharing data from conversations with Loris.ai, which builds AI systems designed to help customer service reps hold more empathetic conversations. Crisis Text Line is a not-for-profit service that, according to its VP and general counsel Shawn Rodriguez, provides “mental health crisis intervention services.” It is also a shareholder in Loris.ai and, according to Politico, at one point shared a CEO with the company.

Before hotline users seeking assistance speak with volunteer counselors, they consent to data collection and can read the company’s data-sharing practices. Those volunteer counselors, whom CTL calls “Empathy MVPs,” are expected to commit to “volunteering 4 hours per week until 200 hours are reached.” Politico quoted one volunteer who claimed that the people who contact the line “have an expectation that the conversation is between just the two people that are talking” and said he was terminated in August after raising concerns about CTL’s handling of data. That same volunteer, Tim Reierson, has started a Change.org petition pushing CTL “to reform its data ethics.”

Politico noted the role Crisis Text Line says data use and AI play in how it operates:

“Data science and AI are at the heart of the organization — ensuring, it says, that those in the highest-stakes situations wait no more than 30 seconds before they start messaging with one of its thousands of volunteer counselors. It says it combs the data it collects for insights that can help identify the neediest cases or zero in on people’s troubles, in much the same way that Amazon, Facebook and Google mine trends from likes and searches.”

Following the report, Crisis Text Line released a statement on its website and via a Twitter thread. In it, the organization said it does not “sell or share personally identifiable data with any organization or company.” It went on to claim that “[t]he only for-profit partner that we have shared fully scrubbed and anonymized data with is Loris.ai. We founded Loris.ai to leverage the lessons learned from operating our service to make customer support more human and empathetic. Loris.ai is a for-profit company that helps other for-profit companies employ de-escalation techniques in some of their most notoriously stressful and painful moments between customer service representatives and customers.”

In its defense, Crisis Text Line said over the weekend that “Our data scrubbing process has been substantiated by independent privacy watchdogs such as the Electronic Privacy Information Center, which called Crisis Text Line ‘a model steward of personal data.’” It was citing a 2018 letter to the FCC. However, that defense is shakier now that the Electronic Privacy Information Center (EPIC) has responded with its own statement saying the quote was used outside of its original context:

“Our statements in that letter were based on a discussion with CTL about their data anonymization and scrubbing policies for academic research sharing, not a technical review of their data practices. Our review was not related to, and we did not discuss with CTL, the commercial data transfer arrangement between CTL and Loris.ai. If we had, we could have raised the ethical concerns with the commercial use of intimate message data directly with the organization and their advisors. But we were not, and the reference to our letter now, out of context, is wrong.”

The Loris.ai website claims that “safeguarding personal data is at the heart of everything we do” and that “we draw our insights from anonymized, aggregated data that have been scrubbed of Personally Identifiable Information (PII).” That’s not enough for EPIC, which makes the point that Loris and CTL are seeking to “extract commercial value out of the most sensitive, intimate, and vulnerable moments in the lives [of] those individuals seeking mental health assistance and of the hard-working volunteer responders… No data scrubbing technique or statement in a terms of service can resolve that ethical violation.”

Update, 10:15PM ET: This story has been updated to reflect Crisis Text Line’s decision to stop sharing data with Loris.ai.

Correction February 1st, 10:54AM ET: An earlier version of this story identified Tim Reierson as both a volunteer and an employee who was fired. He was a volunteer on the hotline who was terminated. We regret the error.
