Creative & Credible supports arts and health organisations and practitioners to:

– engage with evaluation creatively
– improve their practice
– make well-informed spending decisions
– strengthen the evidence base around the benefits and impacts of arts and health projects

Use this website to understand why you might need to evaluate, what approaches might be appropriate, and to plan and implement evaluation for your project. You will also find downloadable handouts, examples and links to other evaluation resources here.

For news of Creative & Credible Training Programmes, like us on Facebook and follow us on Twitter.

Preparing to Evaluate

There is a growing interest in the role that the arts can play in addressing health and social care priorities, and a need to demonstrate their impact effectively. A variety of strategic programmes and policy initiatives are helping both to highlight the contribution that the arts can make to health and wellbeing improvements and to develop the evidence base. Arts organisations and practitioners will want to consider how they fit into this strategic landscape, how they can forge partnerships with health professionals and policy makers, and how then to prepare their own plans for evaluation.

Evaluation is the process we use to find out whether a project can meet, or has met, its aims. It will be led by the core values of the organisation or project, and it should be informed by a ‘theory of change’: an understanding of how an activity is linked with its outputs and outcomes, or why what you do has the effect you observe.

It may be useful to think about what evaluation is not. It involves more than just collecting and reporting monitoring information about a project. Although evaluation findings can be powerful when advocating for a project, it is important not to confuse the two, because evaluation asks and answers questions and may reveal impacts that are both positive and negative. Evaluation and research are also different: research usually seeks to generate new knowledge, while evaluation generally judges an existing service.

We evaluate for various reasons. Chief amongst them is a desire to find out whether what we are doing has an effect, and whether that effect is the one we intended. Without this knowledge we cannot develop or refine a service, or respond effectively to the needs of those who hold a stake in the work, such as participants, project funders or commissioners.

Do any of the following apply to you?

– You want to know how many people took part in a project and whether they were the people you had hoped to reach.
– You want to know how the artistic outputs of a project have been displayed or received.
– You want to know whether participants enjoyed the experience of taking part.
– You want to establish what practical challenges were involved in delivering the project and how they were overcome, what did not go well, and what you could improve on next time.
– You are interested in unexpected outcomes, and whether the project worked in ways you had not anticipated.
– You want to understand whether the project represented a worthwhile financial investment.
– You are interested in the benefits the project has delivered for participants in terms of their health, wellbeing or quality of life.

Evaluation can help in all of these areas.

Before you begin an evaluation, it is important to ask yourself who it is for, and to answer honestly. Is the evaluation intended for your own reflective use? Are you fulfilling the requirements, or responding to the needs, of a funder or commissioner? Are you primarily interested in evidence that will help you advocate for your work, or make your service more attractive to service users? Would you like to contribute to the wider evidence base for the arts and health sector? A clear understanding of who the evaluation is for will help you decide what to evaluate, what resources to allocate to the job, whether you might want to work with someone else to do it, and how you will disseminate your findings.

Evaluation can naturally inform a cycle of reflection and development. It can help arts and health practitioners establish what is good about what they do, improve upon it, and ensure that they are doing nothing that is harmful to participants. In addition, it can help organisations to develop and refine services based on an informed understanding of the impact they have on participants and the needs of commissioners and funders. It can also help to advocate for arts and health projects by showing positive benefits for health and wellbeing.

It is worth looking carefully at the way you currently reflect on or measure the work you do. You may be able simply to tweak and build on what you do now; some evaluation questions, for example, can be answered very effectively through simple, accurate monitoring. But you may find that you are spending a lot of valuable time collecting data that you cannot, or do not, use. It is better to collect less data and make sure that what you do collect will answer the questions that are most important to you.

In arts and health evaluation practice, ethical best practice relates to three main areas: respect for individuals; social responsibility; and the maximising of benefit and minimising of harm. We meet our responsibilities in these areas through mechanisms such as obtaining informed consent when collecting data, respecting people’s need for confidentiality and anonymity, considering potential risks to participants in an evaluation, and ensuring that no individual or group is stigmatised or misrepresented through it. Unless you are conducting research, you are unlikely to need to go through a formal ethical approval process, so there is no single set of formal rules to adhere to, but specific ethical considerations will apply to particular data collection methods.

Demonstrating your understanding of ethical good practice will, in itself, go a long way towards delivering a trustworthy evaluation. You also demonstrate credibility through a well-designed evaluation: one that uses appropriate methods and tools, does not gather information that will not be used, and does not place unnecessary burdens on your participants. In addition, it is important to show that you have considered and attempted to avoid bias wherever possible.

Evaluation will take time at almost every stage of a project: from identifying appropriate aims at the outset, to collecting data while the project is underway, to analysing data, reporting and disseminating findings, and then ensuring that learning is fed back into the next phase at the end. It will help if you appoint an ‘Evaluation Lead’ to take responsibility for the process. The good news is that if you have very little time and few resources, you can shape your evaluation accordingly by focusing your evaluation aims and questions ruthlessly and minimising data collection and analysis. Being realistic about what you can achieve will help you deliver stronger results and make it more likely that you answer the questions that are most important to you.

Arts and health organisations often look outwards when evaluating because they feel they lack the skills or the capacity to evaluate internally. External evaluation can appear more credible, because it avoids some of the areas where bias may creep in. An external evaluator provides specialist skills and knowledge and may be able to disseminate findings more widely. However, these benefits may come at significant financial cost, or mean that evaluation is a ‘one-off’ and that learning is not embedded within the project team. In reality, many successful projects develop through an iterative process involving both internal and external evaluation. It may be useful to think carefully about where you need to be on the continuum between simple monitoring and academic research; the organisation AESOP has developed a number of tools to help you think this through. And sometimes it is useful to accept that your evaluation just needs to be ‘good enough’ for the context in which you are working.

Arts organisations and practitioners may be contracted by local hospitals, care homes, GP surgeries and community and third sector organisations to support a range of health and wellbeing outcomes for participants. In this context, ‘co-production’ of both projects and evaluation with commissioners, partners and service users becomes particularly important. Taking a collaborative approach to evaluation with your stakeholders can help ensure that the outcomes you want to evidence are the right ones. Working with independent evaluators or academics can also help to make sure that the language and evaluation methods you use are appropriate.

Approaches to Evaluation

The approach you take to an evaluation will depend on what you want to find out. You will choose the approach that suits your project, your stakeholders, and the resources and skills available to you. You may feel that you need to evidence your work using the language and methods of those who are commissioning or funding you. However, it may be that the outcomes you want to demonstrate cannot usefully be addressed in this way. Co-producing an evaluation with commissioners and funders, and involving them in identifying aims and suitable outcomes, will help to make sure that you understand each other from the outset.

Evaluation frequently relies on the collection and analysis of data in number form. Approaches which involve this are termed ‘quantitative’. They range in complexity from simple monitoring to the randomised controlled trial. Most arts and health project evaluations will benefit from using a mix of simple quantitative and qualitative methods.

Even the smallest-scale project evaluation will involve some kind of monitoring of attendance or the number of activities delivered. Quantitative data are also often collected at the end of a project or activity using surveys or questionnaires. Reporting on these can be a good way to describe impacts on participants, but it cannot tell us much about whether a project has actually had a measurable effect. To find out whether a change might have taken place, you will need to measure at both the beginning and end of a project.

Using a measure developed by researchers and ‘validated’ (tested on similar participants in a similar context), for example the Warwick-Edinburgh Mental Well-being Scale, can give more reliable data. Validated scales also allow the findings from your project to be compared with those of similar projects elsewhere.
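To make before-and-after measurement concrete, here is a minimal Python sketch that summarises paired scores on a validated scale. It assumes WEMWBS-style totals (the scale has 14 items, each scored 1 to 5, giving totals between 14 and 70), and all the figures are invented. Note that describing a mean change like this is not the same as showing that the change is statistically meaningful; for anything beyond description, it is best to seek advice.

```python
# A minimal sketch, assuming WEMWBS-style totals (14 items scored
# 1-5, so totals range from 14 to 70). All scores are invented.

from statistics import mean, stdev

# Hypothetical paired totals for the same eight participants,
# measured before and after the project.
pre_scores = [38, 42, 35, 47, 40, 44, 39, 36]
post_scores = [45, 44, 41, 49, 46, 50, 42, 40]

# Change score for each participant (post minus pre).
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Mean score before: {mean(pre_scores):.1f}")
print(f"Mean score after:  {mean(post_scores):.1f}")
print(f"Mean change: {mean(changes):.1f} (SD {stdev(changes):.1f})")
```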

It is always good for those delivering arts and health projects to clearly understand what the project costs and how this might compare to the cost of delivering similar work.  However, as well as demonstrating outcomes, project providers sometimes feel they need to show that they can deliver specific economic benefits for commissioners and other stakeholders. If this is the case for your project, it will be important to develop a collaborative relationship with the commissioner to ensure that you both understand what is required.

The ‘value’ of an arts and health project may be measured in various ways. Formal approaches such as ‘cost-benefit analysis’ value the outcomes of a project in monetary terms, but may not easily capture many of the important, more subjective aspects of an arts project. An economic evaluation better suited to the context might assess the benefits of an intervention in terms of quality-adjusted life years (QALYs). Another commonly-used tool in arts and health is ‘Social Return on Investment’ (SROI), which assigns monetary values to a project’s social outcomes and compares them with the investment made, taking account of what would have happened anyway had the intervention not taken place.
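As an illustration only, the Python sketch below shows the arithmetic behind an SROI-style ratio: monetised outcomes are totted up, reduced by ‘deadweight’ (change that would have happened anyway), and compared with the investment. Every figure, outcome and financial proxy is invented; a real SROI analysis requires evidenced proxies, stakeholder involvement and expert judgement.

```python
# Illustrative SROI-style ratio. All figures are invented and the
# financial proxies are hypothetical; real SROI work needs evidenced
# proxies, stakeholder involvement and expert judgement.

investment = 10_000.0  # total cost of delivering the project (GBP)

# Monetised outcomes: (outcome, financial proxy per person, people).
outcomes = [
    ("Fewer GP visits", 250.0, 40),
    ("Improved wellbeing", 450.0, 30),
]

gross_value = sum(proxy * people for _, proxy, people in outcomes)

# Deadweight: the share of the change that would have happened anyway.
deadweight = 0.25
net_value = gross_value * (1 - deadweight)

print(f"Net social value: GBP {net_value:,.0f}")
print(f"SROI ratio: {net_value / investment:.2f} : 1")
```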

Economic evaluation of any kind requires significant skills and resources to conduct. Results will only be reliable if a project has first established a clear understanding of why the work it does has the effect it does – a theory of change. Further information about a variety of approaches to measuring value can be found by downloading Economic Evaluation.

Qualitative evaluation approaches focus on exploring the experiences, perspectives and opinions of those involved. They can help us understand what an arts activity or process ‘means’ to participants, and can illuminate other subjective elements of a project, such as how its delivery was experienced. These approaches frequently rely on interviews and focus groups to collect data, which are then analysed to identify themes and patterns within participants’ words.

It is important to distinguish between ‘anecdote’ and qualitative evaluation. Personal testimonials and case studies are often used in advocacy by arts and health organisations, but these ‘anecdotes’ are usually chosen because they tell a good story, and usually a positive one. This process lacks credibility and is unlikely to help develop a project in the longer term. In contrast, balanced reporting of qualitative data, carefully collected from a sample of participants, can produce rich, detailed evidence and stories. This can inform advocacy and give genuinely meaningful information to support the improvement of a project.

Arts and health evaluations may benefit from using a mix of qualitative and quantitative methods to both identify specific outcomes for participants and explore their subjective experience.

An evaluator demonstrates ‘creativity’ every time they view an evaluation problem from a fresh perspective, or devise an evaluation approach tailored to a particular context. While creativity is very definitely not just the province of artists or the arts, there is growing interest in evaluation methods that use film and the visual arts, poetry and creative writing, music, drama and the performing arts.

Arts-based methods can be particularly powerful in uncovering hidden perspectives and empowering participants. They may also be less intrusive than more clinically-based evaluation tools, as they can be inspired by, and modelled on, the intervention itself. However, they also involve a number of challenges. The results they produce (pictures, performances or poems, for example) are by nature difficult to interpret, and may require technical skills that are not part of standard evaluation.

It is particularly important to make sure that any creative evaluation approach you use ‘fits’ the project and the people who are taking part in the evaluation.

There is power in a good story. You may have used case study stories for many reasons, including when advocating for a project. Personal testimonial from participants who have had a positive experience can be very persuasive. However, this is not what is meant by ‘case study’ in evaluation. For an evaluator, the quality of a case study hinges on the methods used to collect and analyse the data, and on the writing or presentation of the story. The evaluator will select case studies that are relevant to the outcomes or issues the evaluation seeks to explore. A case study can focus on a single individual, a group of individuals, an object, or even a place or situation. A range of different kinds of data may be pieced together in a case study to tell a story relevant to the questions that inform the evaluation.

It is important to remember that a case study story is particular to an individual case; because one participant experienced something during a project, this does not necessarily mean that all participants did, or that any future participants will.  This is why it is vital to select your cases carefully, so that they can genuinely help to illuminate the particular issues you are investigating.

When assessing the impacts of arts and health projects on those who take part, some conventional evaluation approaches may be lacking if they do not attempt to understand the meaning of arts participation for those involved. Participatory Action Research (PAR) is an approach that places the participant at the centre, as the person who knows best what has happened and what it has meant for their own life. It can be very effective in empowering and engaging participants in an evaluation. It is undertaken in a reflective cycle that can be deeply embedded within the process of delivering a project itself.

Evaluation Cycle

Any evaluation you undertake should be part of an iterative process, with your learning being used to help improve and develop the next phase or project. If you look at it in this way, rather than as a costly and time-consuming ‘add-on’, it becomes obvious that evaluation is an essential part of the process of delivering a project itself. The Evaluation Cycle offers an overarching framework in which to understand evaluation as this iterative process. It was developed by Willis Newson with researchers at the University of the West of England and will fit any kind of evaluation approach you adopt.

This cycle has three main phases: project planning; implementation; and reporting and dissemination. It is worth noting that the implementation phase (including data collection and analysis) is only one part of the cycle and that project planning and reporting and dissemination demand just as much, and possibly more, time and thought.

A published version of this cycle was developed by Norma Daykin, Meg Attwood and Jane Willis and its use should be credited to them wherever it is used.

What can you learn from the work of others? In any evaluation, the first step should be to find out what other evaluation and research can tell you about the likely impacts of a project like yours.

An evidence review summarises the available evidence, revealing what is known about the impacts, outcomes and delivery of similar projects. It should also tell you what is still not known, or difficult to establish. This is all critical information for evaluation planning: it will help you decide what to evaluate and what approach might be appropriate, and it may also be useful in planning the project itself. Crucially for an arts and health evaluator with limited resources and time, an evidence review may identify research which, if relevant, can then be used to support or illuminate limited but promising findings from a smaller-scale evaluation.

Consulting the people involved in or affected by your project when planning an evaluation will tell you how they perceive the evaluation and what information they might need from it. Consultation is particularly important at an early stage, both for informing project aims and for identifying appropriate evaluation aims. It can also provide an opportunity to test out ideas and methods and to discuss whether a particular kind of evaluation activity might be appropriate.

Setting aims is a crucial part of the evaluation cycle.  A clear set of evaluation aims should inform the questions that the evaluation seeks to answer.  Many arts and health projects have a wide range of project aims, not all of which can be easily measured, particularly given limited evaluation resources.  Evaluators will need to decide which aims to focus on, based on stakeholder priorities, an understanding of what can actually be measured, and the funds and resources available.

When you have decided on your evaluation aims and questions, you can plan how you will go about answering them.  An evaluation framework or protocol describes what you are interested in evaluating and how you will go about doing it.  At this point you will also consider what resources you might need, how you will manage the data, what the ethical issues are, and how you are going to report on and disseminate your findings.

An outcomes framework may underpin all this planning. In arts and health terms, we generally understand ‘outcomes’ to mean the changes, benefits, learning or other effects that can be attributed to a particular service or activity. These are what you are seeking to measure. An outcomes framework maps each outcome to a set of indicators, which establish whether or not it has happened, and to a method of measurement. This is a clear and manageable way to understand how and what you will be evaluating.
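As a minimal sketch of what an outcomes framework might look like when written down systematically, the Python snippet below maps each outcome to its indicators and a measurement method. The outcomes, indicators and methods shown are invented examples, not a template.

```python
# A minimal sketch of an outcomes framework as a data structure:
# each outcome mapped to indicators and a measurement method.
# The entries below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str        # the change you hope to see
    indicators: list[str]   # observable signs that it has happened
    method: str             # how you will measure it

framework = [
    Outcome(
        description="Participants feel less socially isolated",
        indicators=["self-reported connectedness", "repeat attendance"],
        method="short survey in weeks 1 and 10; attendance register",
    ),
    Outcome(
        description="Participants' wellbeing improves",
        indicators=["change in wellbeing scale score"],
        method="validated scale completed before and after the project",
    ),
]

for outcome in framework:
    print(f"{outcome.description} -> measured by: {outcome.method}")
```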

The data you collect for an evaluation may come in a number of forms. It may be quantitative, including numbers from monitoring or collected through closed questions on questionnaires; this information might be collected at the end of a project or activity, or throughout. Data can also come in the form of transcriptions of interviews or focus group discussions, open feedback, meeting minutes, photographic documentation of activities, even artworks created by participants. Whatever kind of data you collect, you will need to consider how you will protect the confidentiality, and perhaps also the anonymity, of respondents and participants, and how you are going to avoid bias. The way in which you design and administer forms and questionnaires, or run interviews and focus groups, can introduce bias and will need careful thought. In addition, you will want to think about how to use sampling to make your evaluation encompass as wide a range of participants’ experience as possible.
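The mechanics of sampling can be straightforward. As one hedged example, the Python snippet below draws a simple random sample of participants to invite for interview, so that the evaluation does not hear only from the most enthusiastic voices. The participant codes are invented.

```python
# A minimal sketch: drawing a simple random sample of participants
# to invite for interview. Participant codes are invented; in real
# use they would come from your (anonymised) monitoring records.

import random

participants = ["P01", "P02", "P03", "P04", "P05",
                "P06", "P07", "P08", "P09", "P10"]

random.seed(42)  # fixed seed so the selection can be reproduced
interviewees = random.sample(participants, k=4)

print("Invite to interview:", sorted(interviewees))
```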

The techniques used to analyse quantitative and qualitative data are quite different. For quantitative data you are likely to want to describe the patterns you observe in the numbers you have collected. If your analysis needs to go beyond the descriptive, and you are seeking to infer meaning from the data, it is best to seek advice. Qualitative data will generally be analysed using content or ‘thematic’ analysis, a method which identifies patterns within the data, usually (but not always) participants’ words collected in interviews or focus group discussions.
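As a small illustration of the bookkeeping side of thematic analysis, the snippet below tallies how often each coded theme appears across interview excerpts. The codes are invented, and the snippet only summarises the results of coding; assigning codes to participants’ words is a careful, interpretive manual task that no snippet replaces.

```python
# A minimal sketch: tallying coded themes after thematic analysis.
# The codes below are invented; assigning them to excerpts is a
# careful manual task that this snippet does not replace.

from collections import Counter

# Hypothetical theme codes assigned to excerpts from three interviews.
coded_excerpts = [
    "confidence", "social connection", "enjoyment",
    "confidence", "new skills", "social connection",
    "enjoyment", "confidence",
]

theme_counts = Counter(coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} excerpt(s)")
```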

You may want to explore how to collect and analyse data using arts-based methods such as film or video.  Some examples of ways in which researchers have gone about doing this can be browsed from this section.

Evaluation findings need to be recorded so that they can be shared. This is usually achieved through a written report describing the project and the evaluation, and highlighting what has been learned. A report is useful however brief, and sometimes brief is best. It should present a balanced account of the project, commenting on its impact, the strengths and weaknesses of its delivery, and the learning that has been captured that might inform future projects.

Reporting does not have to be confined to a written document. You could explore using film, animation or infographics to bring your findings to life. The important thing to consider is who you are addressing with the report and how this audience will want to access the findings.

Resources

The resources in this section are those that came to light during the Creative & Credible project, either during the literature review phase or through discussion and interviews with members of the Stakeholder Reference Group. This is not intended to be an exhaustive list of resources relating to arts and health evaluation; instead, it provides background and references for all the elements discussed on the website. Weblinks (URLs) change, so please contact Willis Newson if you find a broken link and we will fix it.

Willis Newson, in collaboration with Professor Norma Daykin, Tampere University, Finland and other partners including the Royal Society of Public Health and the University of Winchester Centre for the Arts as Wellbeing, is offering a number of training and professional development programmes based on Creative & Credible.

For news of Creative & Credible Training Programmes follow the links below.

Helpful links for planning and implementing data collection and analysis.

The following links may provide information and inspiration if you are considering using a creative or arts-based method within your evaluation.