July 25, 2018

Leveraging the power of bots for civil society

Allison Fine and Beth Kanter

Artificial Intelligence is everywhere—including the nonprofit sector. In this post, originally published in the Stanford Social Innovation Review, our friends Beth Kanter and Allison Fine go deep on bot technology. They share examples of how social services organizations are using chatbots to communicate with audiences and offer guiding questions to help nonprofits that are considering integrating bots into their programs do so ethically.

At a JFK airport café one recent afternoon, we ordered our food from a screen. Last year, at the same café, it was a person who took our order. The new screen system was efficient, but it represented a lost opportunity for human interaction, a loss for our common humanity, and lost employment for a human.

Like it or not, the age of automation has arrived. “Bots,” in the form of robots, chatbots, artificial intelligence, machine learning, conversational interfaces, cyborgs, and other smart devices, are increasingly becoming the interface between organizations and humans. So much so that, according to Gartner research, an estimated 80 percent of our daily interactions will be with bots by 2020.

The ramifications of these technological developments reach far beyond cafés and customer service interactions to the heart of civil society. As we see in the JFK café example, bots are beginning to be deployed for actual service provision, and this is where the age of automation comes into conflict with our humanity. It may be fine to interact with a bot rather than a human to make airline reservations—in fact, it may make for a better overall experience—but what about a bot instead of a therapist, a healthcare worker, or a social worker? Are these interactions we are willing to delegate to a bot?

And then there is the question of implicit bias in the algorithms that dictate who gets preferential treatment. Nancy J. Smyth, Dean and Professor at the University at Buffalo School of Social Work, says that at a very basic level, there should always be a way for a person to reach beyond the bots and speak with a human being. “We need processes that ensure when…people are flagged for specific actions by an algorithm, the decisions are subject to careful review by a human being who has the training to understand the complexities of people’s lives,” Smyth says. “In my view this is a basic human right.”

Our work in technology has always centered on making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an overstatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early-adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social-good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chatbots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: the homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
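To make this concrete, here is a minimal sketch of the kind of rule-based intake chatbot Horvath describes. It is not Invisible People's or any agency's actual system; the questions, field names, and console hookup are illustrative assumptions only.

```python
# Hypothetical sketch of a rule-based intake chatbot; the questions and fields
# are illustrative, not drawn from Invisible People or any real agency.

INTAKE_QUESTIONS = [
    ("name", "What name would you like us to use?"),
    ("location", "What city or neighborhood are you in right now?"),
    ("needs", "What do you need help with first: shelter, food, or medical care?"),
]

def run_intake(ask):
    """Walk through the scripted questions and return a completed 'paperwork' record.

    `ask` is any function that poses a question and returns the person's reply,
    for example a hook into an SMS or web-chat channel.
    """
    record = {}
    for field, question in INTAKE_QUESTIONS:
        record[field] = ask(question).strip()
    # The finished record is handed to a human case manager for review;
    # the person then confirms their identity in person at the agency.
    return record

if __name__ == "__main__":
    # Console stand-in for a chat channel, for demonstration only.
    completed = run_intake(input)
    print("Intake record ready for case-manager review:", completed)
```

The point of the design is that the bot only gathers routine information; a human case manager still reviews the record and delivers the service.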

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity.

Also fresh in our minds is the lack of regulation and adequate public policy that allows social media platforms to focus on their own enrichment over our privacy and well-being. Right now, our sector has no agreed-upon ethical guidelines for where and how to use the new technology. Nor is there a significant advocacy effort to ensure that public policy is keeping up with the use of the bots. And we know from recent history that time is not on our side.

These two very different approaches to the use of bots in mental-health support highlight what happens in an ethical and regulatory void:

Crisis Text Line is a hotline for young people experiencing mental-health crises. A person in crisis texts an automated interface, which texts them back, asking what the crisis is. Crisis Text Line uses machine learning and natural language processing to identify which texters are at imminent risk, and those people are pushed to the top of the queue, ahead of people who are at less risk. This textual data is stored in a consistent and passive way, says Bob Filbin, Crisis Text Line’s chief data scientist, so that “we don’t have to do anything to generate data collection, which is very different from surveys and other high-effort data-collection efforts.” This frees up volunteer responders to focus on their interactions with people in need, rather than spending time coding, tagging, or storing data. Crisis Text Line uses automation and big-data analysis to better understand the needs of its clients and how to respond to them—but the bots do not replace humans in its system. Every interaction after the initial text is with a human being.
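Crisis Text Line has not published its models, but the triage idea itself (score each incoming text for risk, then order the waiting queue by that score) can be sketched roughly. The keyword cues and scoring below are purely illustrative assumptions; the real system relies on trained machine-learning and natural-language-processing models, not a word list.

```python
import heapq
import itertools

# Purely illustrative risk cues; a real triage system uses trained
# machine-learning and natural-language-processing models, not a word list.
HIGH_RISK_TERMS = {"pills", "bridge", "goodbye", "tonight"}

def risk_score(message: str) -> float:
    """Return a crude 0-1 risk estimate based on how many high-risk cues appear."""
    words = set(message.lower().split())
    return min(1.0, len(words & HIGH_RISK_TERMS) / 2)

class TriageQueue:
    """Order waiting texters so the highest-risk conversations reach counselors first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves arrival order within a risk level

    def add(self, texter_id: str, message: str) -> None:
        # heapq is a min-heap, so negate the score to pop the highest risk first.
        heapq.heappush(self._heap, (-risk_score(message), next(self._counter), texter_id))

    def next_texter(self) -> str:
        _, _, texter_id = heapq.heappop(self._heap)
        return texter_id

queue = TriageQueue()
queue.add("texter-a", "I had a rough day at school")
queue.add("texter-b", "I have pills and I am saying goodbye tonight")
print(queue.next_texter())  # -> "texter-b", the higher-risk conversation
```

Everything after this ordering step is handled by a human counselor, which is the distinction the article draws.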

A very different model is Woebot, an online chatbot that users access through Facebook’s Messenger application. It also provides assistance to people in need of counseling, although the company makes it clear that it is not focused on people in crisis. According to its website, Woebot is “…ready to listen, 24/7. No couches, no meds, no childhood stuff. Just strategies to improve your mood. And the occasional dorky joke.”

Woebot has no human interaction as part of its model. The company believes that short chat conversations, sentiment and tone analysis, and word games are sufficient to help people who are looking for inexpensive therapy.
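Woebot’s internals are not public, so the following is only a rough illustration of what lexicon-based sentiment and tone analysis can look like; the word lists and reply templates are assumptions for the sketch, not Woebot’s actual content or models.

```python
# Illustrative lexicon-based mood check; Woebot's actual models and scripts are not public.
NEGATIVE = {"sad", "anxious", "hopeless", "tired", "lonely"}
POSITIVE = {"happy", "calm", "hopeful", "rested", "grateful"}

def mood_score(message: str) -> int:
    """Positive score = upbeat tone, negative = low mood, zero = neutral or unknown."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def reply(message: str) -> str:
    """Pick a canned prompt based on the detected mood."""
    score = mood_score(message)
    if score < 0:
        return "That sounds hard. Want to try a short exercise to reframe one of those thoughts?"
    if score > 0:
        return "Glad to hear it! What's one thing that went well today?"
    return "Tell me a bit more about how today has felt."

print(reply("I feel anxious and tired"))   # low-mood branch
print(reply("I feel calm and grateful"))   # upbeat branch
```

Simple scoring like this is cheap and always available, which is the appeal; whether it is sufficient for people seeking therapy is exactly the question the article raises.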

Woebot is free and your data is safe with it—for now. According to its website: “Our mission is to make the best psychological tools radically accessible to those who need them. As such we are lucky enough to not have to charge users for Woebot’s services at this time. However, this may change at some point as we move toward establishing a sustainable business.” Of course, when you use Woebot, your data is also being analyzed, stored, and used by Facebook Messenger, a notoriously opaque and company-centered data platform.

So what happens if Woebot or another bot therapist begins to sell your data to pharmaceutical companies? How is patient confidentiality going to be preserved? Who is going to take the lead in ensuring that humans come first in delivering social services to people using bots?

Incorporating Bots into Your Work

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, and involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?

We must not look at the age of automation as a smackdown between flesh and code, but as a partnership between people and technology, with social services agencies adopting these tools to better serve their clients. To reap the benefits for civil society, design and implementation must have a humans-first orientation and maintain the highest ethical standards to avoid devastating unintended consequences.