RAISEimpact
Raising AI Safety Effectiveness and Impact
RAISEimpact is a program designed to help organizations working to reduce AI risks strengthen their management and leadership practices, amplifying their effectiveness, retaining and attracting top talent, and building high-performing teams. We support organizations in addressing the bottlenecks that keep them from reaching their full impact potential.
What is the opportunity and why is it important?
We conducted research into the challenges at AI safety organizations, including interviews with 25 individuals working in AI safety. Clear patterns emerged around opportunities to improve management, leadership, and work culture, including clarity of roles and responsibilities, feedback, hiring, retention, conflict, decision making, burnout and motivation, career growth, performance management, and delegation and autonomy. Many of these challenges are downstream of a relative lack of management experience among an organization's leaders and people managers.
Even high-performing organizations with exceptional technical talent hit the same leadership bottlenecks that limit their impact. We believe there is a significant amount of low-hanging fruit and hidden impact potential that can be unlocked by surfacing key blind spots.
Rethink Priorities is one research organization worth highlighting: investing in people management was a significant part of how it was able to scale rapidly and increase its impact.
What is our approach?
Mission-aligned: We tie our interventions to tangible positive impact on organizations’ goals, mission, effectiveness, and impact
Fast and focused: We deliver visible ROI within weeks, while exploring and setting the foundation for longer engagements
Customized: We deeply engage with organizations, learn their unique context, and establish trust
Collaborative: We co-create solutions grounded in evidence and best practices, with clear success metrics
AI safety aligned: We tailor our approach to the unique culture of organizations in the AI safety community
Expert-driven: We partner with experienced coaches, organizational experts, and consultants. We provide a necessary bridge between them and the AI safety domain.
Culture engineering: We help teams intentionally design cultures that support truth-seeking, retention, and high performance
What is our theory of change?
The following is a condensed version of our theory of change.
Example Interventions (not exhaustive)
Team retreats focused on trust and team dynamics
Consulting on role clarity, governance, and policy
Team and individual performance system design
Conflict management training and facilitation
Hiring process consulting
Leadership coaching for directors and managers
360 feedback process implementation
Outputs
Stronger manager skillsets and confidence
Clearer roles and responsibilities
Stronger governance systems
New or improved systems for performance, feedback, and hiring
Surfaced and addressed team conflicts
Improved staff feedback and employee survey results
Employees have documented goals and development plans
Outcomes
More effective leadership and decision-making
Healthier, more effective work culture
Increased organizational performance and resilience
Increased talent retention and attraction
Higher trust and motivation
Work better aligned with strategic priorities
Leaders' time optimized for high-leverage work
Impact
Widespread adoption of robust AI safety measures by governments and industry
Resilient safety ecosystem capable of adapting to emerging AI risks
Reduced risk of internal dysfunction
Sustained funding and support for AI safety
Consistently high and growing impact
Reduced catastrophic risks from AI
Who is leading the project?
RAISEimpact is led by Adam Tury in collaboration with Patrick Gruban at Successif. Adam is an experienced EA leadership coach and former Amazon tech leader with years of experience building high-performance teams; he bridges organizational development with the AI safety context. Patrick is a business leader, entrepreneur, and COO of Successif, and serves on the boards of EV UK and Talos Network.
What does it cost to participate?
We received a grant from Open Philanthropy in May 2025. As such, the costs of participating in the program will be covered for the first round of five participants.
What are the next steps?
We’re currently identifying 8–10 organizations for initial workshops. We are looking for AI safety organizations with five or more paid staff. During the workshops we will:
Identify hidden challenges and opportunities
Co-create a model of the organization
Highlight immediate low-hanging fruit
Build trust and establish the foundation for ongoing collaboration
Each participating org will receive a tailored report with findings and recommendations designed to directly enhance impact. This process is both a diagnostic and a proving ground—if the workshop creates value, we can explore deeper support.