A 360° Approach to AI in Philanthropy
As tech advisor to many a philanthropic organization, I understand your challenges.
Every department wants to adopt a new AI tool while some staff fear AI "will cause us to lose our humanity."
Fear, tension, and unrealistic expectations make a heady brew, one that rarely produces strategic, ethical decisions. How do you move from AI exploration to AI implementation?
I recommend pragmatic effort in four areas of organizational change: strategic, structural, technical, and cultural.
4 Considerations for AI & Organizational Change
A four-part series published on LinkedIn in May 2024 designed to complement the Responsible AI Framework for Philanthropy
Part 1: Strategic Considerations
Your Vision for AI: In concert with community, develop a conceptual vision of AI’s potential for your mission attainment and sector-wide advancement. Cross-reference with your values and guiding principles. Align AI initiatives and grants with vision.
Community Engagement: Build ongoing engagement strategies with communities impacted by or engaged in AI technologies to understand their perspectives, concerns, and needs. Incorporate these insights into your vision and guiding principles. Ensure accountability.
Risk Management: Identify, assess, and develop plans to mitigate the risks associated with AI adoption, including technical, ethical, legal, and reputational considerations for your org.
Innovation Strategy: Commit to building a culture of innovation and experimentation that encourages exploration of new AI technologies that can streamline and enhance your work, in alignment with your vision, ethical guidelines, and risk tolerance. Establish mechanisms for continuous monitoring, feedback, and evaluation to iterate and improve AI strategy, adoption, and outcomes.
Sector Leadership & Collaboration: Explore opportunities to shape the future of AI. Consider partnerships with academia, business, government, and peers in civil society such as the Partnership on AI and the American Association for the Advancement of Science to leverage expertise and share resources to advance responsible AI adoption. Advocate for policies that promote responsible AI development and adoption, such as open-source foundation models, algorithmic transparency, data privacy, and global AI ethics standards.
Part 2: Structural Considerations
Cross-functional Teams: Consider adopting cross-functional teams to make 360° decisions about AI usage and grantmaking, build organizational capacity, mitigate risks, and ensure a holistic approach to AI that aligns with your mission and values.
Advisory Committees: Establish an advisory committee including nonprofit partners, community members, and technology advisors with expertise in AI usage and ethics to provide guidance on responsible AI adoption and grant funding, where appropriate.
Collaboration Networks: Consider partnerships with other foundations, nonprofits, government agencies, and the private sector to leverage resources and expertise in advancing responsible AI adoption. Facilitate knowledge sharing and collaboration among grantees working on AI-related projects to foster learning and best practices.
Governance: Establish AI governance mechanisms leveraging your advisory committee, collaboration networks, and key stakeholders. Include decision-making processes regarding AI adoption and funding as well as policies addressing responsible usage. Include ongoing monitoring, evaluation, and reporting.
Adequate Resourcing: Ensure sufficient resources (people, $$, and tech) are allocated to support AI initiatives in a fashion that includes ethics, accountability, transparency, and experimentation.
Ongoing Engagement: As noted in Part 1: Strategic Considerations, ongoing communication, transparency, and authentic stakeholder engagement are key to responsible AI adoption. Build both passive and active feedback mechanisms including ideation with staff, grantees, beneficiaries, and external partners. Publish AI usage disclosures. Commit to periodic reporting on AI experimentation, adoption, and grant funding.
Part 3: Technical Considerations
Capacity: Provide training, resources, and support to foundation staff and grantees to enhance their understanding of AI technologies, ethics, and responsible adoption.
External Expertise: Invest in bringing external expertise into the org through partnerships with research institutions, think tanks, or specialized AI orgs such as the Partnership on AI.
Infrastructure: Develop a strategy for infrastructure that includes data management systems and domain-specific platforms that integrate with existing tools in your ecosystem. Partner with Programs teams to invest both internally and externally in AI infrastructure projects.
Evaluation: Develop tools or frameworks to evaluate the ethical and societal implications of AI projects funded by the foundation, ensuring alignment with your mission, values, and privacy commitments.
Data Governance & Privacy: Establish protocols for data collection, storage, usage, and sharing to ensure privacy, accuracy, ethics, and regulatory compliance.
Security: Update your security strategy to protect AI systems and data against unauthorized access, breaches, and malicious activities. Enhance insurance coverage.
Part 4: Cultural Considerations
Refresh Your Values: Start by revisiting your org’s values and guiding principles in this era of accelerated change. How does your current context call for new forms of resiliency?
Embrace Continuous Learning & Experimentation: Work to build a culture of curiosity, openness, and continuous learning to overcome fear and encourage experimentation (within guidelines). At the same time, create an environment where staff feel safe to voice concerns and ask questions related to AI adoption without fear of retribution.
Change Management: Recognize that change is the “new normal” and that responsive grantmakers will provide ongoing skill-building, clear communication, adaptive leadership, engagement and feedback mechanisms.
Inclusion: Ensure that diverse voices and perspectives are a core component throughout your change journey, building genuine ownership as noted below.
Build Ownership: Empower staff and stakeholders to build experimentation communities and co-lead AI initiatives in accordance with your policies. Foster accountability through transparent reporting to your Advisory Committee.
Prioritize Ethics: At all levels of the organization, ensure that ethics, transparency, and responsibility are a constant consideration in AI-related activities and decisions. Build ethics into AI decision-making and accountability frameworks.