Supporting the Evaluation Action Plan
This section sets out questions and considerations that should guide when, how and at what scale to evaluate. It also sets out the quality assurance measures that are in place to ensure evaluations are robust and fit for purpose, how FSA staff’s evaluation capability and skills can be supported and how the profile of evaluation can be raised across the FSA.
Delivering on the FSA’s vision for this Action Plan depends on colleagues knowing when and how to evaluate. This in turn relies on there being appropriate governance structures and colleagues having the skills necessary to design and commission evaluations.
Our evaluation approach
Evaluation is important. The FSA already conducts a range of evaluation activities pre- and post-policy implementation, in line with guidance provided in the Green Book; all of the FSA’s discretionary spending in Northern Ireland is required to undergo proportionate post-project evaluation. Examples of approaches taken across the FSA include the benefits measurement approach within our business case process and the use of establishing project impact (EPI) forms pre- and post-award to capture the intended and realised impact of our work.
It is critical, however, that evaluation activities are proportionate and meet the needs of decision makers and those scrutinising our activities. While good quality evaluation evidence supports the delivery of our mission, it is not an end in itself: evaluation should facilitate the FSA’s work, integrate into our existing processes and be timely to support effective organisational delivery.
When deciding what form of evaluation to conduct, FSA colleagues should consider the potential value of the evidence generated (for example, the filling of knowledge gaps), alongside reporting requirements (for example, the need to demonstrate accountability and transparency, or the scale of investment and use of public funds) and the practicalities of delivering an evaluation (for example, whether it is feasible to evaluate in a timely manner). This should reflect how best to evidence the anticipated benefits described in business cases. Priority should be given to areas that make a greater contribution to the evidence base, where evaluation is a requirement or is feasible, and where activities sit within the FSA’s priority programmes.
These considerations also have a bearing on the scale and approach of evaluation. Activities can be evaluated in different ways, with the most appropriate form of evaluation depending on the questions being addressed, the profile and cost of the activities being evaluated, and the risk and uncertainty surrounding what can be learnt through evaluation.
There is no one-size-fits-all way of choosing an evaluation approach, and robust evaluation methods can take many forms. While commissioning third parties to conduct evaluations may be desirable, it is not always necessary, feasible or proportionate; internally designed and delivered evaluation activities can provide the required insight in a timely way and more efficiently, but can sometimes be perceived as less independent. Likewise, although experimental methods allow the impact of activities to be clearly demonstrated, and are considered by default at the FSA, they cannot always be operationalised. That said, gathering quantitative insights through our evaluations is generally desirable.
Colleagues leading FSA activities should engage broadly and consult with Science, Evidence and Research Division (SERD) colleagues early in the policy formation and business case development process to decide what type of evaluation is required, whether evaluation activities can be conducted in-house and whether independently conducted evaluation is appropriate. SERD colleagues will also be able to advise on evaluation approach, timing and any practical or ethical issues that may support or prevent a particular approach.
Further detail on how we identify and prioritise areas for evaluation, and on the factors which inform our choice of evaluation type and approach, is provided in Annex A.
Quality assurance
Evaluations can be resource intensive; doing them well requires active and early engagement with subject matter specialists and an understanding of the value evaluation evidence can offer.
Strengthening evaluation capability and skills, and thereby fostering an evaluation mindset, will encourage the early consideration of evaluation and ensure research questions can be addressed with appropriate research methods. This will help ensure colleagues are able to identify when evaluation is needed and what its implications are for policy implementation and/or rollout, and that appropriate colleagues within the FSA (for example, SERD, operations and policy) and in delivery partners (for example, local authorities) are engaged to support data collection and evaluation delivery.
In conjunction with ensuring colleagues have the right capabilities and that conducting evaluation is appropriately incentivised, use of our existing quality assurance mechanisms will ensure FSA evaluations are robust and their findings credible. Within the FSA, we have the following mechanisms to support quality assurance and the delivery of robust evaluations, which are drawn on where applicable:
- our existing business case process, which captures the anticipated benefits of business activities and prompts consideration of how these benefits will be evidenced and, in time, realised.
- reference to the Analytical Quality Assurance (Aqua) Book and the Magenta Book and its annexes to ensure our processes align with best practice guidance.
- use of the Advisory Committee for Social Science (ACSS) to act as a critical friend, provide input to our evaluation plans and approach, and support peer review.
- use of external peer reviewers to advise on evaluation approaches and to quality assure evaluation outputs.
- involvement of a suitably experienced project officer and/or project manager to ensure that evaluations run to time and budget, that risks are managed and that key milestones are met.
We will also explore the feasibility of introducing the following additional quality assurance measures:
- publication of evaluation plans, publication plans, trial protocols, results and datasets, as appropriate to the evaluation method, before or at the start of evaluations where possible and where doing so will not compromise the efficacy of the evaluation or the policy development process.
- publication of supporting documentation (for example, Logic Models/Theories of Change and project plans) alongside final outputs.
- use of the Assurance working group to support the impartial commissioning and delivery of evaluations, including a review of the types of data collected and the research questions, through critique of evaluation methods, Logic Models and similar materials.
Building evaluation skills and capability across the FSA
Ensuring FSA colleagues have the skills they need to commission, design, deliver and quality assure evaluations is critical to the delivery of this Action Plan; it also underpins an evaluation mindset.
Delivering the following activities will support the development of evaluation skills across the FSA and the design and delivery of timely, robust evaluation:
- measurement of existing levels of awareness and understanding of evaluation, and of evaluation skills, at the FSA to identify key gaps, create a baseline against which to measure changes in awareness and understanding, and appropriately target learning activities.
While this would be a cross-FSA activity, conducting focused work within SERD to baseline existing evaluation experience and expertise would support targeted learning and development activities to fill any unmet needs.
- development of bespoke training programmes for FSA colleagues to increase evaluation skills and awareness. This could include introductory training for policymakers on the value of evaluation, how it can be integrated into the policy development process, and the strengths, limitations and requirements of different types of evaluation. It could also include specialist training on the design and delivery of evaluation to increase the capacity for conducting evaluations, with SERD colleagues prioritised.
- review of existing FSA resources and tools for evaluation to ensure consistency and the sharing of good practice and, where gaps are identified, the development of toolkits to support the conduct of discrete stages of evaluation (for example, scoping or the development of a Theory of Change).
- the creation of evaluation drop-in surgeries where colleagues can engage with an evaluation expert, discuss options for evaluation and troubleshoot potential evaluation challenges.
In addition to the above, the following tools could be developed:
- a checklist of key evaluation considerations for colleagues to use during the business case process, helping them to identify appropriate evaluation approaches and the implications of these choices for the implementation/rollout of business activities.
Further raising the profile of evaluation
The FSA already supports the evaluation of its work and incorporates monitoring and benefits measurement into its business practices (see Annex C). Our Strategy includes the guiding principles of being ‘science and evidence led’ and ‘open and transparent’. These align with key precepts of evaluation: that it supports learning, informs action and supports accountability.
To support wider awareness of our evaluation activities, we already publish and promote our evaluation findings, conduct lessons learned sessions and share best practice across the FSA. We propose doing the following to raise the profile of evaluation and support an evaluation mindset across the FSA:
- seek evaluation champions at a senior level to support the use of evaluations, showcase evaluation activities and the benefits they have delivered, and advocate for staff training on the benefits evaluation evidence delivers. This could include promotion of the FSA’s evaluation criteria (Annex A), wider government resources designed to promote evaluation (for example, the Magenta Book) and the importance of benefits measurement to our activities.
- create an evaluation community of practice to support effective evaluation and the sharing of best practice across the FSA.
- include a prompt for colleagues to confirm they have considered how projects are to be evaluated when producing a business case. Tools used by the FSA in Northern Ireland, where proportionate post-project evaluation is a requirement for all discretionary spend, could be used as a template for activities across the FSA. Changes could include a prompt to engage with SERD on the development of an evaluation plan and a requirement to describe which evaluation approach is recommended, which approaches were considered and which were discarded. Changes to the business case process will need to align with the BMAP.
- ensure alignment between the benefits measurement process and evaluation activities so that evaluation is proportionate and efficient.
- create an annual ‘Evaluation Week’ to promote understanding and awareness of the value and benefits of evaluation.
Collectively, the above actions will remind colleagues to plan evaluation, demonstrate the value of evaluation and support the use of evaluation evidence across the FSA.