Evaluation Challenges
Why is evaluating this model so challenging? Because we’re not trying to measure a single intervention or a top-down service. We’re trying to understand the ripple effects of a model that weaves together care, community, co-production, and power-sharing. A model that aims to do a lot at once: build relationships, shift culture, improve lives, and distribute responsibility more fairly.
This complexity makes the work more rewarding, and the evaluation more challenging. Here are some of the key hurdles we’ve encountered, grouped into three areas: methodological, ethical, and logistical.
Methodological challenges

We’re not just looking at whether someone received a service - we’re interested in outcomes around belonging, well-being, shared decision-making, and community resilience. These aren’t easy to capture with a simple metric or a survey tick-box.
In a real-world setting, it’s tricky to isolate which changes are the direct result of our model. Social outcomes are influenced by countless variables, and it takes time and careful design to understand how our work fits into the bigger picture.
How do you measure trust? Or community connection? Or the feeling of having a say in your care? These outcomes matter deeply to us, but they don’t always fit into conventional evaluation frameworks.
We gather insights from many sources: team members, care receivers, circles, volunteers, and community partners. Keeping the data consistent without losing context is an ongoing balancing act.
Some of the biggest changes - like reducing social isolation or shifting power - may only emerge over time. That means we need long-term tools, not just short-term snapshots.
Ethical challenges

We work with people’s lives, not just numbers. Protecting personal information and ensuring consent aren’t just box-ticking exercises - they’re part of respecting each person’s autonomy.
Evaluation should never feel extractive. That means being transparent about what we’re asking, why, and how it will be used - and designing accessible, culturally sensitive tools.
What matters to a care worker might differ from what matters to a commissioner or a family member. Our evaluation approach must hold space for these different priorities without flattening or ignoring them.
Logistical challenges

How do you keep something deeply local and relationship-driven while expanding to other places? Evaluating this requires us to ask not just “does it work?” but “how and why does it work - and could it work elsewhere?”
Evaluation takes time, people, and money. That means building it into how the service is run - not bolting it on after the fact.
People delivering and participating in care need the tools and confidence to contribute to evaluation too. That means training in data collection, in reflection, and in using findings to shape practice.
When your model values relationships, shared responsibility, and lived experience, your data reflects that. It’s rich, but messy. We need systems and methods that can do justice to the complexity.
Evaluation shouldn’t be something done to the community. It should be something we do together, building on the same principles of trust, autonomy, and co-creation that underpin our care model.
Putting learning into action
Lastly, once we’ve used the evaluation tools and analysed the data, putting any learning and insights into action requires resources and commitment. It’s not enough to generate knowledge - we need the capacity, time, and organisational willingness to reflect, adapt, and make meaningful changes.
If we’re serious about creating care that’s co-owned, deeply rooted, and built on trust, then we need evaluation frameworks that match. That means embracing complexity, staying open to learning, and holding ourselves accountable to the people we serve.