How did Local Trust support Big Local partnerships to evaluate their work?
Key points
- Local Trust set up Big Local with minimal reporting requirements, to ensure it would remain a resident-led programme. Financial accountability was the role of the Locally Trusted Organisation, enabling volunteer residents to focus on decision-making and delivery.
- Over time, Big Local partnerships began to show an interest in building skills around evaluation, so Local Trust developed the Measuring Change support offer. Thirty-three Big Local partnerships opted to take part in the support.
- The support was fully funded by Local Trust and built to be flexible to the changing needs of Big Local areas. Skilled community evaluators worked with Big Local partnerships to develop evaluation plans or gather and analyse data to create evaluation reports.
- Measuring Change worked best when individual partnership members or Big Local workers took on a champion role to progress work, while workshops and outputs left other partnership members more confident to speak about the impacts of Big Local in their area.
- Interest in evaluation was not universal across Big Local partnerships, indicating a role for funders in signalling the value of this work in resident-led settings.
Reporting expectations
The values of the Big Local programme (resident-led, flexible, patient, and non-judgemental) meant that Big Local partnerships set their own priorities. While programme outcomes were set from the start, these were intentionally broad and focused on partnerships being better able to identify local needs, take action, and make a difference to whatever needs they prioritised. Local Trust asked partnerships to report on their plans and activities, setting minimal requirements for those reports.
These minimal requirements addressed the challenges of evaluation in community development work, where lack of resource, skill, time, and budget act as barriers to evidencing impact (Dunkley and Franklin, 2017). Most importantly, Local Trust decided to release funding based on assessments of partnership plans – what Big Local partnerships hoped to achieve for their community and how – rather than the impact they had achieved. Releasing funding in this way differed from typical funder-grantee relationships, where outcomes, monitoring forms, and evaluations are required for the funder to understand whether the initiative was successful (Mc Ardle and Murray, 2021). As funding was allocated to communities non-competitively, with an ethos of putting communities in control, Local Trust did not impose typical reporting requirements (like evidence of outcomes or key performance indicators being met). Financial accountability remained important; however, rather than expecting this of communities, it was one of the main responsibilities of the Locally Trusted Organisations (LTOs), which regularly reported financial expenditure to Local Trust.
Local Trust has explored the role of LTOs and the values of Big Local in other articles.
Local Trust’s flexible approach to reporting and releasing funding enabled partnerships to take more risks in supporting and making change in their communities – trying new things, funding work that otherwise may not be funded, and emphasising quality over quantity. However, Local Trust recognised that there is value to evaluating at a community level. Evaluation can provide opportunities for reflection and inform ongoing delivery; demonstrate to the wider community what has been achieved; and provide evidence to support future funding opportunities.
A Big Local partnership was a group made up of at least eight people that guided the overall direction of delivery in a Big Local area.
A Big Local Plan set out what changes the partnership planned to make, how they planned to deliver on this and how funds were to be allocated. It was written for themselves, their community and Local Trust, as a guide and action plan.
A locally trusted organisation (LTO) was the organisation chosen by people in a Big Local area or the partnership to administer and account for funding, and/or deliver activities or services on behalf of a partnership. Areas might have worked with more than one locally trusted organisation depending on the plan and the skills and resources required.
Early approaches to reflection and evaluation in Big Local areas
In the early years of Big Local, Local Trust focused on getting partnerships in place so they could develop their first plans. As delivery progressed and partnerships started looking to produce their second plans, Local Trust asked them to review their first plans. The review took place after each plan was delivered and included nine open questions (later reduced to seven) about what partnerships had achieved and learned while delivering their previous Big Local plan.
In line with the programme’s values, the review aimed to enable resident-led decision-making, by supporting partnerships to reflect on achievements and challenges, and what to do next. The review was not aimed at providing Local Trust with evidence for monitoring and evaluation. Partnerships could decide for themselves what the reflective process looked like. Ensuring the review was completed was part of the Big Local reps’ role. Other individuals (like partnership chairs, workers, or Locally Trusted Organisation representatives) with experience or interest in evaluation may have acted as champions, encouraging certain approaches or the involvement of the whole partnership. While plan reviews provided some insight on what was happening in Big Local areas, the information varied in quality (Street, 2020).
Local Trust has explored the role of Big Local reps in another article.
While champions could encourage reflective practice or evaluative approaches, the extent to which Big Local partnerships engaged in this work was dependent on the capacity of those involved or employed to support them. Across the first half of the Big Local programme (when partnerships were focused on setting up new projects, engaging residents, and building relationships), reflection and evaluation were generally not prioritised. This aligns with broader thinking on the role of evaluation in community development, that it is often perceived as an additional ‘chore’ (Maloney et al., 2019), and the volunteer-led nature of the programme likely exacerbated this.
Where partnerships had scope to understand the changes they’d made in their communities, this usually involved counting people attending activities or gathering informal feedback (Street, 2020). In some cases, due to a lack of experience in monitoring and evaluation, partnerships expected more reporting from others than was reasonable for the amount of money awarded, and the requested information was not received. In other cases, partnerships successfully gathered data (from grant recipients or their own projects), but limited capacity or skills (like knowing how to link data back to aims) meant the data was not analysed. These challenges reflect broader trends in the voluntary sector (Liket et al., 2014).
As the early phase of Big Local ended, Local Trust started to see an increase in partnerships interested in building evaluation skills, to help them reflect on their work and understand their impact. This prompted Local Trust to commission experienced community evaluators to develop introductory training for partnership members and their support staff. It aimed to build an understanding of why evaluation could be useful, using participatory approaches and activities that provided a basic understanding of key concepts. Feedback from these sessions showed that knowledge increased, but more intensive, longer-term, hands-on support would be needed to build partnership members’ confidence to reflect on and evaluate their work.
Reps were individuals appointed by Local Trust to offer tailored support to Big Local areas, and share successes, challenges and news with the organisation. These roles ended in 2022, replaced by Big Local Area Advisors. Advisors were a specialist pool of people contracted to Local Trust, who delivered specialist and technical assignments to support the partnerships.
Local Trust’s main evaluation support offer — Measuring Change
Responding to the need for more significant evaluation support, Local Trust developed Measuring Change. Starting in 2019, this offer provided bespoke support to Big Local partnerships. This support aimed to build partnership members’ skills, confidence, and knowledge (rather than contracting experts to plan or deliver evaluations on their behalf) and was designed to be resident-led. Partnerships were matched with specialist community evaluators (known as support partners). Partnerships and evaluators would work together to understand what the partnership wanted to evaluate, and to deliver data collection, analysis, or reporting. As part of this, the community evaluators hosted relevant workshops or ad hoc training throughout the support period. This bespoke support was delivered to a timeframe that suited the partnership and was funded by Local Trust to overcome resourcing challenges (the most immediate barrier to evaluation).
Overall, 33 (of 150) Big Local partnerships took part in Measuring Change between 2019 and 2025. This included an initial pilot phase, where partnerships were approached based on their previous interest in Local Trust’s introductory training on evaluation. Beyond the pilot, Local Trust promoted the support offer to areas, primarily relying on Big Local reps to identify and encourage partnerships to take up the offer.
Partnerships who took part in Measuring Change received long-term support to accommodate a full evaluation or an evaluation planning process, which usually took around nine to 12 months.
Local Trust explored the findings from the pilot phase in a report.
Flexibility was key
A key feature of the Measuring Change support was its flexibility. This was integral to the support being resident-led, as it could adapt to changing needs, pause to accommodate capacity barriers, or add more time for data collection. Support providers were also expected to consider community context and existing skills in partnerships, rather than offering a standard, ‘off the shelf’ evaluation.
Throughout the pilot and main support period, Local Trust’s approach to contracting support providers enabled flexibility. Local Trust pulled together a pool of specialist community evaluators to draw on when a partnership reached out for support. Workplans were developed between providers and partnerships, which formed the basis of agreements between providers and Local Trust, using indicative but flexible milestones and budgets. By using a specialist pool, Local Trust could also add new evaluators based on partnership interest, such as those with a focus on peer research.
The evaluation of the pilot phase found that the flexibility of support was sometimes overwhelming for partnerships. Where there was a lack of confidence and knowledge around evaluation, partnership members were unsure how to make best use of the support, including what they could measure and what the final output could be. It often took many months for partnerships and support providers to agree a workplan. Some support providers reflected on the challenges of maintaining partnership buy-in when the early phases were drawn out over a prolonged period.
Based on learning from the pilot, the support was adapted. It remained flexible but offered a more focused ‘menu of support’, drawing on the skills of support providers. From this point onwards, the support typically included workshops to develop a document similar to a theory of change (a document, often used in evaluations, explaining why activities are expected to lead to outcomes). Support providers then used this to develop an accompanying evaluation framework for the partnership to implement, or a report summarising the changes partnerships had made in their communities to date. Data informing these reports may have already been gathered by partnerships, or they may have been supported to collect it as part of Measuring Change, such as through a facilitated community event.
Building skills and capacity
When support providers initially worked with Big Local partnerships to understand their specific needs and context, it often became apparent that residents did not share Local Trust’s ambition for the support – to build residents’ evaluation skills. Partnerships often saw the support as an opportunity to gain an additional resource, rather than a chance to work jointly with an evaluator to develop tools or frameworks.
It was therefore important for Local Trust to ensure the support included key touchpoints for residents to remain in the lead throughout the process. This generally involved support providers identifying champions. The champion approach took the form of designated working groups within partnerships, more informal approaches based on interest, or paid staff – like Big Local workers or reps – taking on the role. These champions acted as a key point of contact for the support provider, feeding into draft outputs and providing updates to the rest of the partnership. In doing so, they were key to maintaining momentum, while support providers usually took the lead on data collection, data analysis, and developing final outputs, before presenting back to the partnership. Where a champion was involved in data collection, analysis, or outputs, it was typically a Big Local worker. In some cases, this may have limited the long-term value of the support, as when those key individuals moved on from Big Local, the evaluation tools and frameworks were not used.
Local Trust has explored the role of Big Local workers in another article.
Another approach to building skills within partnerships involved support providers facilitating workshops open to the whole partnership. The purpose of these workshops was usually to generate a document similar to a theory of change, which would inform the rest of the support. Feedback from partnership members about these workshops was usually positive, with people valuing the opportunity to think strategically about future plans or reflect on achievements to date.
After Measuring Change, support providers reflected that using both champions and workshops improved partnerships’ capacity for long-term engagement in evaluation support, but that it was important to be clear about who needed to be involved and when. This helped keep the support on track, maintain progress, and generate outputs useful to the partnership. While support providers felt that building evaluation knowledge and skills was less of a priority for partnership members, this did not diminish the value of Measuring Change, as the support improved partnerships’ ability to speak more confidently about the changes they made in their areas.
Many Big Local partnerships funded workers to support the delivery of Big Local. They were paid individuals, as opposed to those who volunteered their time. They were different from Big Local reps and advisors, who were appointed and paid by Local Trust.
The role of skilled community evaluators
Local Trust’s introductory training had received positive feedback from attendees, so the experienced community evaluators who led it became a key feature of the Measuring Change support. Local Trust aimed to include providers who understood the resident-led context of Big Local, were skilled in community-level facilitation, and could adapt technical evaluation knowledge to meet varying skill levels within a Big Local partnership.
Ultimately, it was skilled evaluators who enabled Measuring Change to maintain its flexibility and meet the capacity challenges of Big Local partnerships. The providers encouraged the use of evaluation best practice (particularly for partnerships seeking additional funding from other sources), and co-created approaches that were less resource-intensive for the partnership or extractive of the community.
Towards the end of Big Local, partnerships often wanted to engage in Measuring Change, so they could be accountable to their community. Recognising the need for engaging outputs that could be easily understood, some support providers brought in graphic recorders (people who illustrate content). The recorders provided large, visual outputs documenting the local changes that partnerships and residents had identified, making evaluation findings more accessible to the community.
While outputs were considered useful, partnerships felt the facilitated strategic conversations provided the strongest benefit. They felt they better understood their ambitions for Big Local, and when the support involved analysing community input, they found this motivating. Partnership members reported feeling a greater sense of confidence in their plans and more clarity about what their next steps should be. Outputs (like evaluation reports, case studies, and graphic recordings) were helpful in demonstrating to residents the changes Big Local achieved, and for attracting additional funding in cases where partnerships planned to continue delivery beyond Big Local.
How else did partnerships evaluate or reflect on their work?
Big Local partnerships reflected that while their involvement in other Local Trust support may have had other aims (like business planning or fundraising), the support and time with experts enabled them to reflect on their goals, achievements, and challenges. As part of the broader support offer, Local Trust developed Reflect / Recharge. This was delivered by experienced facilitators and aimed to identify strengths, achievements, barriers, and potential solutions. While this was not evaluation, it drew on evaluative principles to support partnerships to progress their delivery. This support was provided to partnerships facing challenges, where Measuring Change was less appropriate.
As Big Local came to an end, many partnerships took part in workshops or interviews about their achievements and learning, as part of Learning from Big Local. Again, this was not intended to act as an evaluation but provided an opportunity for partnerships to reflect on their experiences in the final years of the programme.
Some partnerships chose to understand their impact by drawing on the skills of their Big Local worker or Locally Trusted Organisation (LTO), or by allocating funds to local organisations for evaluations, rather than using Local Trust’s nationally available offers. In these instances, the aim was typically to evidence change in their community, to provide accountability locally or to seek additional funding. Building partnership members’ skills was generally not a focus. By outsourcing this work, partnerships were able to bring in additional capacity, particularly in the few cases where complex evaluation approaches (like social return on investment) were used. In many areas, this work came towards the end of the Big Local programme, with paid staff or commissioned organisations analysing data collected over the years to produce an impact report documenting the overall achievements of the partnership.
Reflections and learning
Local Trust’s minimal expectations around monitoring and evaluation meant accepting less insight and evidence on what was achieved in communities and the impact of the Big Local programme. However, by ensuring the programme’s outcomes were broad and focusing on financial accountability through the Locally Trusted Organisation (LTO), Local Trust was successful in encouraging partnerships to prioritise and identify their communities’ needs, maintaining the resident-led ethos.
Measuring Change was largely successful in enabling participating partnerships to understand their achievements and, where relevant, showcase these to their community or other funders. Skill-building for resident partnership members was limited by their capacity to be involved in the support, prompting paid workers to take the lead alongside the support provider.
Local Trust staff closest to Big Local partnerships reflected that it was those partnerships with the most to gain from Measuring Change that were least likely to engage in the support. Despite the additional capacity and funding available via Local Trust, many partnerships felt that the support would lead to a greater bureaucratic burden by adding more processes to their decision-making and delivery. Some were also sceptical of evaluation – they felt they did not need to measure their impact when they knew what they had achieved. Local Trust staff reflected that partnerships who struggled with making decisions or strategically allocating funds were often the most reluctant to reflect on the broader impacts of their work, and how well they met residents’ needs. Facilitated support (like Measuring Change) could have helped, had it been mandatory – though mandatory support may have caused disengagement by adding further burden, and would not have aligned with Local Trust’s core values.
As the Big Local programme came to an end, many partnerships became more interested in evaluating their work, usually prompted by a need to access additional funding or a desire to be accountable to their communities. However, in some cases this motivation may have come too late. Partnerships were sometimes unable to evidence their impact as access to key resources (like Big Local funding or paid worker support) came to an end, or because they didn’t have the necessary information to demonstrate their achievements. When trying to fill these gaps, those working closest with partnerships reflected that members were sometimes unable to see their work in terms of outputs and outcomes, or to articulate changes in their community beyond their personal experiences.
Despite available evaluation offers, the resident-led ethos and minimal reporting requirements meant that some partnerships chose not to prioritise understanding and evidencing their impacts, when it may have been beneficial to do so. This potentially adds another layer to the view that skill, time, and resources are necessary to evaluate work at a community level (Dunkley and Franklin, 2017).
Local Trust’s Measuring Change support took a careful approach to enabling the skill, time, and resources needed for community-led evaluation, by incorporating flexibility, providing skilled evaluation support, and committing additional funding. Where it worked, individuals in a champion role were able to increase their confidence and knowledge in evaluation, with partnerships contributing to outputs that could provide important strategic benefits. However, the limited take-up of the Measuring Change support (or of any evaluation by a Big Local partnership) without those champions indicates that funders play an important role in signalling the value of evaluation as part of resident-led work.
References
Dunkley, R. A., and Franklin, A. (2017) ‘Failing better: The stochastic art of evaluating community-led environmental programs’ (Evaluation and Program Planning, vol. 60). Available at: sciencedirect.com/science/article/pii/S0149718916302762 (Accessed 1 June 2025)
Fisher, L. (2021) ‘Measuring change support pilot evaluation’ (Local Trust). Available on Learning from Big Local. (Accessed 17 December 2025)
Liket, K. C., Rey-Garcia, M., and Maas, K. E. H. (2014) ‘Why aren’t evaluations working and what to do about it: A framework for negotiating meaningful evaluation in nonprofits’ (American Journal of Evaluation, vol. 35, issue 2). Available at: researchgate.net/publication/265467719_Why_Aren’t_Evaluations_Working_and_What_to_Do_About_It_A_Framework_for_Negotiating_Meaningful_Evaluation_in_Nonprofits (Accessed 23 October 2025)
Maloney, J., Leahy Gatfield, R., and Vickers, C. (2019) ‘Evaluating community development’ (Social Justice, Practice and Theory, vol. 2, issue 1). Available at: openjournals.library.sydney.edu.au/SWPS/article/view/13249/12039 (Accessed 23 October 2025)
Mc Ardle, O., and Murray, U. (2021) ‘Fit for measure? Evaluation in community development’ (Community Development Journal, vol. 56, issue 3). Available at: academic.oup.com/cdj/article-abstract/56/3/432/5818514 (Accessed 27 October 2025)
Street, L. (2020) ‘Power in our hands: An inquiry into positive and lasting change in the Big Local programme’ (Local Trust). Available on Learning from Big Local. (Accessed 8 October 2025)