The Challenge of Measuring Effectiveness in Social Work: A Case Study of an Evaluation of a Drug and Alcohol Referral Service in Scotland

This article explores the challenge of measuring effectiveness in social work, building from an evaluation of a service for those with drug- and alcohol-related problems in Scotland conducted in 2012. Drug and alcohol misuse has long been recognised as a major issue for the Scottish economy, health service and population, affecting large numbers of individuals, families and communities in Scotland. For this reason, it is vital that the services provided give 'best value' for service users and the country as a whole. But what is an effective service and how might this be measured? We will argue that demonstrating social work effectiveness is always difficult, but that the complex and interconnected nature of drug and alcohol problems makes it even more difficult to isolate the effectiveness of one intervention from another and from the context in which it is located. This suggests that, as we move forward, we need an approach to evaluation that acknowledges the systems within which individuals and services are operating (see Forrester et al., 2013), as well as the inevitably political nature of all evaluation (Gray et al., 2009).


Introduction
Measuring effectiveness has been a major challenge for social work since its early days. There have been, one might argue, consistent messages from research in recent years that attest to the helpfulness of social work practice across a range of fields (Shaw and Gould, 2001; Shaw and Norton, 2007; Unrau et al., 2007). And yet long-standing questions remain. How does social work, as a profession, prove its usefulness, given that it is social work's role to intervene in difficult situations, often with people who would prefer us to leave them well alone? And how can we, as professional helpers in a casework relationship, ever be sure that it is our intervention that has really made a difference in the complexities and turmoil that may characterise service users' lives?
This article introduces an evaluation of a drug and alcohol referral service in Scotland, and uses this to interrogate these and other questions about the nature of evaluation in social work. We begin by describing briefly the service that was evaluated, and the aim of the evaluation. The methodology for the evaluation is then explored and the findings presented (again in précis). The main body of the paper is taken up with an exploration of the methodological issues at the heart of this and, we would argue, all evaluation of effectiveness in social work. This is not to suggest that it is possible to generalise from such a particular example. Instead, it is to argue that a case study such as this one highlights issues that are likely to emerge whatever the evaluation; by looking in depth at one case study, these issues are brought into sharp relief (Yin, 2009; Flyvbjerg, 2011).

The setting and aim of the evaluation
Responsibility for providing drug and alcohol services in Scotland is devolved to thirty Alcohol and Drug Partnerships (ADPs), which fund organisations to provide services. The service provider in this instance was the local authority Social Care and Health Department; it commissioned the evaluation, which was focused on the work of two referral teams, one that targeted adults with alcohol problems and the other adults with drug problems. The teams offered a sixteen-week, intensive, community-based, case management service to clients and will be referred to collectively in this article as 'the service'; those using the agency's service will be called 'clients' or 'service users' interchangeably. Referrals were received from a range of professionals in health and social work (GPs, hospital doctors and nurses, social workers, drug workers, housing officers, etc.). Once allocated, qualified social workers worked intensively with service users, first to assess their needs and situation, and then to refer them on to services that might meet those needs. For most clients, this meant helping them to get into a position where they might be able to make use of other services; in other words, a sense of order needed to be established and many practical obstacles overcome (including debt, homelessness and family breakdown) before this was likely to be achieved (Galvani, 2012).
The service was underpinned by a number of key beliefs. The first concerned the need for a professional social work service. It was strongly felt that the lives of those who were referred were so complex and chaotic that a qualified social worker was needed first to undertake a full assessment and then to take the case forward. The approach adopted was thereafter a kind of case management. Social workers worked with service users and other professionals to provide a 'targeted, community-based and pro-active approach to care', involving what has been called 'case-finding, assessment, care planning and care coordination', as outlined by Ross et al. (2011, p. 1). Within this, the relationship between the social worker and the service user was held to be crucial; creating a non-judgmental, therapeutic relationship was seen as necessary, even though this was not, on the surface, a therapeutic service. This approach was chosen because of the belief that, for this client group, a short-term, focused and intensive programme of intervention was likely to be more successful than one that was open-ended and lacked clear objectives (DiClemente, 2006; Galvani and Forrester, 2011; Reid and Shyne, 1968).
The underlying purpose of the evaluation was to study effectiveness. Moreover, the funder wanted to find out what, if anything, the social workers, clients and referrers saw as the 'added value' of the service continuing to operate as a social work service, employing qualified social work staff. The findings of the evaluation were presented to the local Alcohol and Drug Partnership as part of a wider review of service provision for this client group.

The evaluation
The evaluation was conducted between February and November 2012. It had two broad objectives. First, it sought to evaluate outputs, asking what was achieved by the service: that is, what the outcomes of the service were, from the point of view of the staff, the service users and the referrers. Second, it sought to evaluate effectiveness: that is, how well these outputs/outcomes were achieved, again from the point of view of the staff, the service users and the referrers. The research team members brought considerable knowledge and expertise to the evaluation. The principal investigator (Viv) and co-investigator (Sumeet) had both carried out evaluations in the past, and had experience of conducting research on sensitive areas including HIV and mental health. The research assistant (Peter) had recently worked for a drug and alcohol project as a caseworker, and was now undertaking a Ph.D. on recovery from drug use.
The evaluation conformed to Social Research Association ethical guidelines (http://the-sra.org.uk/sra_resources/research-ethics/ethics-guidelines). Research ethics permission was sought and received from both the University of Edinburgh School of Social and Political Science Research Ethics Committee and the local authority (LA) Social Care and Health Ethics Review process. In addition, the research assistant (Peter) was given clearance to access client records on the basis of an enhanced disclosure report from Disclosure Scotland. All informants gave written consent to take part in the evaluation.

Methodology
The evaluation used a mixed-methods approach (Nutley et al., 2003; Rossi et al., 2004), including both qualitative and quantitative methods and allowing us to achieve 'triangulation' within the study (Johnson et al., 2007):
• a targeted literature review of relevant national and local policy documents and research evidence;
• analysis of agency reports and case records;
• participant observation;
• focus groups with social work staff;
• interviews with different stakeholders.
We will now discuss the methods in more detail.

Literature review
We began with a targeted literature review of relevant national and local policy documents and research evidence. This helped to locate the service and the evaluation within their wider contexts, as well as to reach a deeper understanding of the findings (Boote and Beile, 2005; Hart, 1998).

Analysis of agency reports and case records
The literature review was followed by the examination, coding and analysis of agency reports and case records, seeking to identify, first, how initial problems were recorded by staff and, second, changes in behaviour or outcomes for service users over the course of the sixteen-week intervention programme. We hoped that, by doing so, we might be able to replicate what might be regarded as a 'pre' and 'post' test in evaluation (Rossi et al., 2004). We began by reviewing basic demographic data, referral source, reasons for referral and reasons for closure in relation to all 490 referrals made to the drug and alcohol teams in 2011. We then looked in more detail at fifty cases from each team, selecting every fourth case and analysing referral forms, care/action plans, case notes and agency post-closure evaluations relating to these cases. It was soon apparent that there was a significant discrepancy in the case files between what was recorded as 'support needed' and 'support provided'. We recategorised the results, focusing instead on those who had completed the sixteen-week programme or had met their goals in less time (this amounted to twenty-four drug team clients and twenty-one alcohol team clients); the resulting picture reflected greater parity between 'support needs' and 'support provided'. There was still, however, a significant gap between the two.
The discrepancy between 'support needed' and 'support provided' might reasonably be assumed to provide an indication that the service was failing. However, further research suggested the picture was more complex than this. Service users dropped off the programme for a number of reasons (some chose not to engage after the first meeting, some moved away and some even died during the sixteen-week programme), and the recording systems failed to capture this adequately. Moreover, evidence from other sources (interviews and observation) demonstrated that those who used the service felt very positively about it, including those who only attended for a short time. This highlighted not only the need for better recording systems within social work agencies, but also the inherent difficulty of using case files for evaluation purposes such as this (see Hayes and Devaney, 2004). (This will be developed further in the 'Discussion' section of this article.)

Participant observation
The qualitative research began with a period of participant observation, during which the research assistant who was to interview the clients (Peter) spent time with social workers in their office setting and also accompanied them on three routine visits to service users. The purpose of the observation was not to conduct a full 'ethnography' (see Baszanger and Dodier, 2004; Ferguson, 2010). Instead, it was felt that spending time with social workers would give us a better 'feel' for what it was they were actually doing with clients. This would contribute to our understanding, and hence to the evaluation as a whole. Visits were conducted with three different social workers; in each case, the social worker explained the purpose of the visit in advance and sought consent from the client. During one planned observation, there was no response from the client when the social worker knocked on the door. This gave the research assistant insight into the everyday experience (and frustration) of a social worker working with clients with drug and alcohol problems. On the other two visits, the research assistant observed the open and trusting professional relationship that existed between the clients and their social workers; clients expressed their needs and feelings openly, and workers 'heard' this and negotiated their role well. The research assistant was particularly struck by the amount of 'emotion work' (Zapf et al., 2001) in the interviews with clients; this might have been difficult to appreciate unless observed first hand.

Focus groups
Focus groups were conducted with social work staff in each team by two of the researchers (Peter and Sumeet). The purpose of the focus groups was to facilitate the evaluation as a whole (allowing the researchers to get to know the social workers and vice versa) and also to allow for an open exploration of the social workers' views on recovery and social work. Focus groups are different from interviews in that they allow for more free-flowing conversation and debate between participants (Wilkinson, 2004). This was certainly the case in our evaluation. Social workers shared both joys and frustrations about their work; interestingly, it was not the service users that social workers complained most about, but rather the systems (particularly IT systems) with which they were forced to work. This mirrors findings in child protection, including Munro (2011).

Interviews
The largest amount of time on qualitative data collection was spent in interviews with a range of stakeholders:
• structured, face-to-face interviews were carried out by Peter with the service manager and two team leaders;
• interviews were also conducted with twenty current and former service users (ten from each team) by Peter, using standardised interview schedules and open-ended questions;
• interviews were conducted with twenty referrers (ten for each team) by Viv and Sumeet, using standardised questionnaires.
The interviews with the team leaders took place towards the beginning of the evaluation, allowing Peter to explore some of the issues that might come up in the study as well as to find out the team leaders' views about how their service was operating. The interview with the service manager took place towards the end of the evaluation, thus giving scope to discuss provisional findings.
The interviews with the service users were, inevitably, time-consuming to set up and complex to carry out. Service-user informants were selected at random from the 2011 referral lists: every tenth service user was contacted by the agency, given information about the evaluation and invited to take part. This produced some, but not enough, informants, and so another random selection took place, again using the same method. Finally, agency staff put forward a small number of additional names (five) in order to make up our target of twenty clients. The result was that the sample was 'purposive' (Oliver, 2006), not randomly selected; the agency played a key role in facilitating the data collection, by contacting clients and helping to set up interviews. (The implications of this will be examined further in the 'Discussion' section.) This was a largely male, almost all white (all but one of the informants were white), relatively young informant group, as Table 1 demonstrates. Interviews were conducted in service users' own homes or in other venues such as cafés, if desired by the service user, or felt advisable by staff because of researcher-safety concerns.
Interviews with referrers took place on the telephone and, in two instances, by e-mail correspondence. Referrers were selected at random from a list of all referrers from 2011; care was taken to ensure that different agencies and professional groups (including health, housing and social work; hospital, GP and community-based) were represented in the selection. All the interviews were recorded and transcribed, and analysis was managed thematically, looking for dominant themes, common threads, contrasts and contradictions in the varied data gathered (Attride-Stirling, 2001; Benner, 1985). Significantly, we did not seek to verify or corroborate views expressed by service users by, for example, checking their case files for further evidence of progress made. Instead, it was our belief that their accounts should be regarded as their stories, valuable in their own terms as worthwhile feedback on the service as well as wider narratives of their lived experience (Plummer, 2000; Riessman, 1993).

Findings
The findings of the evaluation were largely positive. As outlined in the end-of-evaluation summary and report (Cree et al., 2012), the referral teams were found to provide a range of useful services to what was recognised to be an unpredictable, hard-to-reach client group. While specific questions were raised, for example, about the length of the programme, there was evidence that social workers were making a significant impact on service users' lives, by providing extensive practical support and guidance in life skills, as well as emotional and social support, within the parameters of a programme in which their key task was to help service users to regain control of their lives and then refer them on to other services (health, housing, social work, education, etc.). What was valued most highly by service users and referrers across the board was the flexibility and openness of the service's approach to recovery. Social workers sought to 'hang in there' with service users whatever their current situation, giving them real, practical help to negotiate across a range of systems (health, social work, criminal justice, family, community, etc.) and with a myriad of personal, financial, housing and interpersonal difficulties, as well as those directly concerned with drug or alcohol use.

Discussion
As already stated, our purpose in writing this article is to open up methodological issues in evaluation, not simply to report results. We will therefore turn to an interrogation of the evaluation, first drawing attention to the importance of context, and then exploring the methodology in more detail. We then offer a number of observations that, we believe, may be useful for future evaluations in social work.

The context in which the evaluation took place
Drug and alcohol misuse is central to Scotland's social problems. Scotland has higher rates of drug and alcohol problems than other parts of the UK and many other countries in Europe (Scottish Government, 2010, 2011; UNODC, 2010). Because of this, the Scottish government has identified the need to confront drug and alcohol abuse as a priority for policy and practice (Scottish Executive, 2001; Scottish Government, 2008a, 2008b, 2008c). Within this, it has heralded a move away from a 'harm reduction'-focused policy to a 'recovery-oriented' approach as the 'new way forward' in tackling drug and substance misuse problems. Its recent strategy report, Road to Recovery, explains that recovery is '... a process through which an individual is enabled to move on from their problem drug use, towards a drug-free life as an active and contributing member of society ... recovery is most effective when service user needs and aspirations are placed at the centre of their care and treatment. In short, an inspirational, person centred approach' (Scottish Government, 2008c, p. 23).
Alongside the shift to a 'recovery' model, there has been a move towards what has been identified as an 'outcomes-focused' approach to the delivery and management of health care across the UK. Andrew Lansley, the UK Health Secretary, announced a shift from 'process driven' targets (such as waiting time targets) to 'improving health outcomes' in the first NHS Outcomes Framework published in December 2010: 'Our ambition is to achieve health outcomes at least as good as any in the world. To achieve this, we need to focus on outcomes and their robust, continuing measurement' (webarchive.nationalarchives.gov.uk/+/www.dh.../DH_122995).
This, then, is the context within which this evaluation must be placed. Its aim was to evaluate an addictions service provided by a Scottish council; its findings would be used to inform service provision and determine priorities in the future. This was not, therefore, a neutral exercise. The evaluation was conducted at a time of considerable organisational stress, both external and internal. The worldwide economic recession (Garrett, 2013) had brought cuts in national and LA budgets, with inevitable pressure on agencies across all sectors to justify their services. Just as crucially, a Scottish government consultation paper on the introduction of integrated budgets for health and social care was published during the evaluation (Scottish Government, 2012). Of course, the idea of integration of services was not itself new; the concept of 'joint working' between health and social care has a long history in the UK. Nevertheless, the new agenda of integration, already well underway in other parts of the UK, brought increased anxiety to social work and social workers, not least because it came at a time of financial restraint (Williams, 2012). All those who took part in the evaluation (including the researchers) were aware that a 'bad' evaluation could have highly negative consequences, not only for the social workers, but also for those who relied on their service for support. In that sense, the evaluation was inevitably political. Pawson and Tilley assert that 'the very act of engaging in evaluation constitutes a political statement' (1997, pp. 11-12). In our example, the politics of the evaluation were not only located in the question of how to manage budgets and improve service delivery.
Instead, the evaluation both demonstrated and anticipated social work's uncertain status in the new world of integrated services, rehearsing debates about whether drug and alcohol problems should be viewed as predominantly social or medical problems, and highlighting who is 'in charge' of the new health and social care agenda. (Interestingly, a decision was taken after the evaluation to merge the two services; it is difficult to judge what part the evaluation played in this decision, let alone whether or not this restructuring was a 'good' decision for service users or social workers.)

Methodological issues
All research throws up methodological issues, whichever approach is adopted (Bryman, 2012). It is important, therefore, to interrogate research reflexively, exploring the impact that such issues may have had on the process and the results and, hence, on any conclusions that may be drawn (Finlay and Gough, 2003).

An internal evaluation
The first factor that must be considered is that this was largely an internal evaluation: it was limited to agency case files and the views of agency managers, social workers and service users; the only external voices were those of the twenty referrers. There has been a growing tendency in recent years to embrace randomised controlled trials (RCTs) as the 'gold standard' in studies of effectiveness across an increasing range of service-user settings (Wilcox et al., 2005). Advocates argue that RCTs reduce bias and increase reliability in evaluation research; it is asserted that an RCT is the only way to be certain that intervention 'A' is better or worse than intervention 'B' or 'C' (Chiappelli et al., 2010). It was not possible to operate with a 'control' group in our own evaluation, for two different, but connected, reasons. The first relates to the complex nature of the lives of those who abuse drugs and alcohol (Galvani, 2012). Most of the service users in our study were working with more than one clinician at any one time (including a doctor or doctors, nurse, social worker, drug worker, etc.); some, meanwhile, made use of 'recovery capital' such as extended family members and/or friends (Cloud and Granfield, 2009). Because of this, it was impossible to isolate the impact of one service for investigation. The second is that the findings indicated that what worked best for most service users was the range of individuals and services that they relied on in their, at times, highly troubled lives. Recent public health research suggests that RCTs may not offer the best way forward for the evaluation of such 'complex interventions' (see Craig et al., 2008; Mackenzie et al., 2010).
There is another complication here, however. Not only was this an internal evaluation; we were only able to access records and individuals through the agency itself. This might suggest that our findings would be biased towards positive views of the service, rather than a balance of positive, negative and neutral views. In practice, by sampling as widely as possible, we mitigated this risk to some extent, hearing very different views about the service, including a small number of highly critical perspectives. We also heard a great deal, as already stated, about drug and alcohol problems, about recovery and about social work in general. Gray argues that evaluations that 'concentrate narrowly on inputs and outputs of programmes are in danger of missing vital, often illuminating information on processes' (2009, p. 284, emphasis in original). There is little doubt that this evaluation gave important information about the process of recovery, and the role of social work within this, as we will return to in the final section of this article.

Agency records as a source of evidence
There was substantial evidence from the evaluation that service users' lives improved over the time of their contact with the service: some had reduced their drug or alcohol use; some had been re-housed; a small number had re-established contact with a family member.
It was, however, the interviews, not the administrative records, that gave the fullest picture of these 'success stories'. Our examination of the case files showed that the agency used different electronic recording systems simultaneously and individuals recorded things differently. There were also important gaps in the case files, as we have outlined already. Many referrals were, in practice, re-referrals, as service users returned for another programme of help; some service users were supported for longer than the sixteen-week period, as in cases where they spent time in hospital or in 'rehab' before being returned to the community.
All of this made it extraordinarily difficult (for both the agency and the evaluation) to identify a clear 'before' and a clear 'after' stage, highlighting again the difficulties of assessing effectiveness in a complex setting such as drug and alcohol use and recovery. Many of those who used this service had battled with drugs and/or alcohol for twenty years or more. Their lives, as is usual for those with addiction problems, went through cyclical patterns of getting worse, seeking help, getting better and then deteriorating; research shows that those with drug and alcohol problems will experience periods of stability and health, interspersed with periods of loss of momentum, before (hopefully) giving up their addiction for good, or becoming ill (Barber, 1995; Granfield and Cloud, 2001). The intervention by the service was effectively a 'moment' in their recovery journey: a journey that had probably begun before they were referred to the service and would end a considerable time afterwards. As one social worker said, 'recovery is a spectrum and not an end result'.
Of course, this is not the first evaluation to uncover the difficulties in measuring effectiveness in social work. Criminal justice researchers have already identified that it may be some time after an intervention or programme that a person may actually change their offending behaviour (Friendship et al., 2002; Kirkwood, 2008). What this suggests is that it is necessary to keep track of the shorter-term, so-called 'soft' outcomes that have been shown empirically to be linked with reductions in drug or alcohol use over a sustained period of time. Helping someone to shift from a 'pre-contemplative' to a 'contemplative' stage in terms of the cycle of change might be such an example; there might be no obvious changes in drug or alcohol use, yet the intervention may have played a crucial part in assisting someone to begin the path towards a reduction in drug or alcohol use (Prochaska et al., 1992).

'Insider' research
A final area that merits discussion is the question of the researchers themselves. The research team was made up of two social work academics and one student undertaking a Ph.D. in social work. This might reasonably suggest an implicit bias in our methodology and findings; as social workers, we were likely to be sympathetic to the agency and its staff and, more than this, supportive of the idea that a 'social work' service might be a valuable one. Our considered view is that our membership of the social work body inevitably had an impact on the evaluation's conclusions; it could not have been otherwise. But we do not necessarily see this as a negative influence. On the contrary, our status as 'insider' researchers meant that we had familiarity with the culture, jargon and everyday practice of the service; our networks also gave us ease of access to the social workers and referrers in the study, as well as to the service users themselves. Brannick and Coghlan, in a review of insider status in organisational research, conclude that 'there is no inherent reason why being native is an issue' and that 'the value of insider research is worth reaffirming' (2007, p. 59). Furthermore, we wish to argue that it is important to open up all knowledge for scrutiny, not just that related to being an 'insider'. A critically reflexive approach to research and evaluation highlights that everything (age, class, educational background, ethnicity, sexuality, etc.) has an impact on research from the beginning through to the end of the process (Finlay and Gough, 2003). In this regard, it is undoubtedly the case that the quality of the data from interviews with service users was, in part, reflective of our research assistant Peter's position as a young Irish man, as well as his warm, engaging personality. At the same time, the 'Professor' title held by the principal investigator may have eased the research through its various ethical and practical hurdles.

Implications for future evaluations
We have asserted that the evaluation gave detailed insight into the impact of drug and alcohol problems and the process of recovery for service users and, within this, the importance of relationships with social workers in the recovery journey. It also, however, shed light on the importance of having monitoring and recording systems that support, not hamper, social work activity. We will now discuss each of these points in more detail.
Perhaps the single most important finding from the evaluation was that everyone's recovery journey is unique. This was demonstrated throughout the case files and in all the interviews. Each person had their own story and their own slant on their problems and, because of this, 'recovery' itself meant different things to different people. For some, recovery meant reducing their alcohol use or sticking only to their prescribed medication, while, for others, it meant total abstinence. For others again, the important contribution the service had made was not recovery as such, but rather a small step in the 'right' direction; what the service had done was to get them into a position where it might be possible, for the first time in a long time, to contemplate reducing their use of harmful substances, getting a new tenancy or sorting out a debt problem. This reinforces the idea that recovery, and hence its evaluation, is difficult to pin down in any absolute sense.
The evaluation also showed that recovery happened in the context of a trusting relationship, something that, again, is likely to be too individual to be measured in terms of outcome targets, checklists and effectiveness scales. Building and sustaining relationships with service users with drug and alcohol problems is extraordinarily challenging. These are, after all, the people with whom most other professionals want to have as little to do as possible, as demonstrated many years ago in Jeffery's (1979) ground-breaking ethnographic study of an Accident and Emergency Department. Drug and alcohol users are often unpredictable, unreliable, unhappy people; they may be violent and are often depressed (Galvani, 2012). The value of relationship has long been known, and yet measuring it is always going to be problematic because it is both highly personal and, at the same time, socially and culturally constructed (see Beresford et al., 2008; Cree and Davis, 2007; Ruch et al., 2010).
It has been stated that the evaluation was hampered by inadequate agency monitoring and incomplete recording systems, about which the social workers also complained bitterly. This service, like many others, was undergoing internal restructuring: old 'social work' ways of doing things now existed alongside 'health' ways, and social workers found themselves duplicating effort and caught up in IT systems that they felt were not 'fit for purpose'. This made it impossible for them either to monitor progress or to count their successes with any real confidence, thus raising an important practical problem for integrated services in the future. There is, however, a more fundamental point at issue here. As we move into a world that sees social work increasingly absorbed into health departments, it will be essential that social workers find ways of foregrounding the value of social work practices, including social work approaches to evaluation. We have argued that evaluation in social work must take account of the complex nature of people's lives, and the different systems within which they have to operate (e.g. social work and health; family and community). This means that evaluation also needs to be multilayered and iterative, giving value to the 'small steps' that individuals make, as well as to the relationships with practitioners that make these small steps possible.

Conclusions
This article has contended that measuring effectiveness in social work is complex. By scrutinising one example of evaluation research, we have shown that evaluation is always context-specific and so is inevitably political, in one way or another. Moreover, we have argued that the activity of social work, at least in the context of drug and alcohol services, may be too complex to be measured in a positivistic, scientific kind of way. This does not suggest that we abandon evaluation altogether! On the contrary, it suggests that we need to look towards a critical, pragmatic approach to evaluation (Webb, 2001; Gray et al., 2009), one that foregrounds the importance of methodological rigour as a way of achieving dependability and trustworthiness in our findings and analysis (Lincoln and Guba, 1985; Mishler, 1990; MacDonald and Popay, 2010). In our evaluation, we sought to achieve this by adopting a mixed-methods approach, and by foregrounding the impact of context on our results. By bringing a critical, reflexive and contextual understanding to evaluation, we may yet be able to say something positive about the contribution of social work services to individuals' lives and to society as a whole.