Transparency, AI Use Policies Make the Difference

When the National Eating Disorders Association (NEDA) implemented a wellness chatbot named Tessa as a replacement for its human-staffed national helpline, the generative AI program began dispensing diet-culture advice harmful to the disordered-eating community.

The technology was quickly pulled, but the damage was already done: negative media attention and lasting reputational harm to the nonprofit. While AI offers useful and much-needed solutions to short-staffed nonprofits, this AI-gone-awry episode serves as a cautionary tale.

Executives sitting on nonprofit boards should urge their organizations to study the capabilities of AI first and to be very clear on the specific problems they want the technology to address. Adopting AI for the sake of saying “Our nonprofit embraces AI” is counterproductive and often leads to a bloated administrative load and workflow – the opposite of the efficiencies it promises. Nonprofits should be able to answer, for instance, who will administer and assess the AI tools’ effectiveness, who will monitor compliance with data-protection and privacy regulations, and how the IT department’s responsibilities will change.

AI toe-dips recommended
“The key is starting small,” says Judy Nagai, San Jose State University’s vice president for university advancement and CEO of the university’s Tower Foundation. In 2023, the Silicon Valley university became the first to offer a bachelor’s degree in Artificial Intelligence.

“Explore simple tools that might help you write a better solicitation letter or donor impact report.” Even then, care should be taken to ensure a human voice is present in the communications. If the language is not authentic to the nonprofit or customized to the recipient, customers and prospective donors are sophisticated enough to notice.

AI as supplement, not replacement
AI is a tool, not a replacement for human interaction with stakeholders – something Nagai understands well. In June, her team unveiled a virtual engagement officer, Samantha, named after the university’s mascot, Sammy the Spartan. The avatar communicates through video, text and email, and supplements the university’s annual giving and major gifts programs run by staff. “We have more than 30,000 alumni and only 14 staff members,” says Nagai. Samantha is filling a gap by engaging a population that was previously unreachable – those who didn’t fall into fundraising’s annual giving or major gifts categories, and alumni who had never engaged.

“We also have a human in the loop,” says Nagai. A person on the back end reviews every interaction, often within minutes and usually within an hour. All responses from alumni and friends – ranging from requests to unsubscribe or not engage, to queries about specific schools and alumni events – are reviewed. “This virtual engagement officer is not just operating and thinking on its own.”

Nagai is quick to point out that Samantha’s primary messages are also not fundraising-focused. “Samantha shares that we rose in the rankings, the football schedule. She asks ‘What would you like to know about San Jose State? Can I send you the football schedule? Would you like to get involved as a mentor?’”

Donor transparency, policy key
Due to careful planning and research, San Jose State’s virtual engagement officer received a much warmer welcome than NEDA’s chatbot did. People are responding favorably and interacting, thanks in part to transparency. “It’s very clear from the very first message that says ‘Hi, I’m a virtual engagement officer, and I’m using AI to reach out to you’ that we are not trying to trick anybody,” says Nagai.

When adopting AI tools, it’s also important for boards to insist upon clear AI use policies, an area often overlooked in the rush to deploy. Nonprofits should consider:

  • Ethical guidelines for AI use
  • Staff roles and responsibilities to ensure accountability
  • Proactive plans for risk mitigation and security
  • Continual oversight and review of AI policy
  • Compliance with privacy, data protection, intellectual property, and copyright laws

“We need to be ethical stewards of how AI is adopted and governed,” says Nagai. San Jose State developed an AI vision statement and also offers “AI Literacy Essentials” training, because trained faculty and staff better understand AI vulnerabilities and pitfalls. San Jose State’s AI policies are continually reviewed alongside the university’s existing ethics and responsible-use policies.

The moral of the story: Nonprofits should not avoid AI. “It’s like avoiding email back in the ’90s. Or text messages.” And the reality, Nagai says, is that nonprofits are likely already using AI when they hire third parties for prospect management or data analysis. The big question for board members is: “Can AI help staff become more efficient and effective in their work?”

Questions to ask before committing to AI

Before diving into AI solutions, nonprofits must ask:

  • What purpose will AI serve?
  • Will AI be used for student/customer/community support; virtual-assistant chatbots for fundraising; institutional research for data aggregation or predictive analytics; or staff/faculty support?
  • What specific problems might AI address – operationally or programmatically?
  • How do the nonprofit’s AI goals align with state and national data protection and privacy laws?
  • What roles and responsibilities will be assigned within the nonprofit for managing and overseeing AI programs?
  • Will the nonprofit appoint a governance committee, and what role will the IT department or data-science team play?

September 2025