Interviews and workshops
What did we do?
We conducted a total of 13 semi-structured interviews with people in various roles, including:
Care workers
Care owners
Volunteers
Circle members
These conversations took place either online or in person depending on accessibility needs, and lasted between 45 and 90 minutes. Each participant was asked a shared set of core questions, with additional tailored questions depending on their role in the project.
We used Otter.ai, a free transcription tool, to transcribe both online and in-person interviews. We then reviewed the transcripts to identify recurring themes across participants’ experiences. These themes were later colour-coded and cross-referenced with the three main Outcome Domains of our Theory of Change:
🌱 Growth
🤝 Co-production
💞 Well-being, Relationships & Belonging
We also conducted two in-depth interviews with project leads from the Clapton Care Circle:
Luke, Team Starter
Aga, Commons Organiser
These took place over roughly six hours, spread across multiple days. For each of the 43 Outputs in our Theory of Change (across Platform, Circles, Teams, and Commons) we asked:
What did you do?
What did you learn?
What are your recommendations?
In addition to one-to-one interviews, we facilitated four reflective group activities. These sessions were designed to surface shared experiences and perspectives, allowing participants to reflect together on challenges, learnings, and aspirations. They helped to validate or expand on the themes identified in interviews and connected more directly to our collective values around co-production.
Why did we use this evaluation tool?
Semi-structured interviews are particularly effective for:
Gathering qualitative, first-hand insights from diverse participants.
Exploring how people feel about their experience of care and involvement in the project.
Testing the assumptions in our Theory of Change by inviting participants to narrate changes in their lives or work.
Giving participants a voice in the evaluation and reinforcing a culture of listening.
They also support relationship-building, helping people feel seen and heard in a way that’s hard to replicate in other formats.
Challenges: Being aware of confirmation bias
As with any method grounded in open-ended dialogue, there are risks of confirmation bias, where results might unintentionally reflect our assumptions rather than what participants truly express. We identified a few key risks:
Predefined questions might nudge participants toward expected themes or familiar narratives.
Selective analysis could focus too heavily on content that aligns with our Outcome Domains, missing insights that don’t fit neatly into those categories.
In-depth staff interviews may be influenced by internal perspectives that lean toward success stories.
Interpretation bias may lead us to hear what we want to hear, especially in ambiguous or nuanced statements.
To address the challenges raised above, we took a reflective and iterative approach to analysis. We revisited transcripts multiple times, actively looked for contradictions or unexpected findings, and discussed emerging themes in a small evaluation working group. We acknowledge that no method is neutral, but through this process we tried to remain open, curious, and honest in our interpretation.