Evidence Focused Social Care: on Target or Off-Side?
As contentions continue to engulf the evidence-based practice (EBP) debate within social care, consensus seems to be gravitating towards the softer term of ‘evidence-aware’ practice, although there is as yet no definitive concept on the horizon. Set against a backdrop of competing ideologies, heavily influenced by the natural sciences, what is at stake is the essence of social work: there is a real danger that social work practice will be reduced to mere base elements which seek to eliminate all notions of uncertainty, so essential and endemic to our social world; a characteristic of our very being. Underlying the surface arguments that an evidence-focused approach can realise better practice are questions of power and control. In exploring the current position of EBP within UK social care, this paper argues that, as an interpersonal profession operating within the complexities of the social world, social work would be better suited to pursuing an inclusive ‘knowledge-aware’ approach to practice, which may allow the profession to move beyond the restrictive and limiting polemics that surround it. Delivering the future the profession deserves will require a brave and frank acceptance of our own fallibility, together with an openness which allows the profession to harness its key values.
The term EBP has become commonplace within the UK health sector where, not only has it prompted much debate, it has been, and continues to be, a mode of practice actively promoted by the government, best illustrated by initiatives like the establishment of the National Institute for Health and Clinical Excellence (NICE); an appraisal body for new and existing treatments, part of its role being to ‘disseminate consistent advice on what works and what doesn’t’ (Department of Health 1998, 12). This emphasis on ‘what works’ coincides with the neo-liberal economy within which many of the UK’s public services operate, where government publications increasingly talk of performance measures and performance management (Department of Health 2006), with the emphasis on ‘cost effective prevention and evidence based practice’ (Department of Health 2005, 48). Encouraged by developments in the health sector, the government has since turned its attention to social care, where the continued interest in promoting and developing knowledge to improve practice led to the creation of the Social Care Institute for Excellence (SCIE). Since its inception in 2001 SCIE appears to have become a vocal advocate for the expansion of an evidence base in social care (see SCIE, 2005) and of the role which systematic reviews can play in service improvement (see SCIE, 2003); a research methodology commonly associated with EBP in health care.
Underlying the appeal of empirical research and EBP are assumptions of methodological objectivity and moral neutrality (see Firth et al. 2004); however, it is doubtful whether any position is value free: arguably the formal attempts of EBP to eliminate subjective elements, inherent at some level within all social exchanges, demonstrate an implicit value base of its own (see Webb 2001). Indeed, one of the main reasons EBP has spawned such, frequently contentious, debate lies in the fact that the adoption of new theories is initially epistemically uncertain (Laudan 1996): as a concept EBP has yet to be ‘black boxed’, where the chances of exploration and discussion diminish as, when confronted with a black box, an open acceptance often prevails; thus serving to further cement the notion (see Latour 1987). Still at the stage of conceptual development, EBP has endeavoured to become a good theory; good theories are those which, it is claimed, should be general, explanatory and predictive (see Ghilarov 2001). This endeavour is visible in the move from initial desires, influenced by positivist ideology, to identify universal predictive laws (for example see Sheldon 2001), to more recent conceptions concerned with the identification of mechanisms which have explanatory capabilities (for example see Pawson 2001b); but, given social work’s heterogeneous nature, it is questionable whether anything more than ‘limited truths’ (Ghilarov 2001, 361) can be sought. Thus the idea of EBP in social work remains a concept awaiting implementation (Mullen, Shlonsky, Bledsoe and Bellamy 2005), largely because there is no general consensus on the exact definition or conceptual make-up of what can generally be called an evidence-focused approach.
1 The emergence of EBP in the health sector: contextual considerations
A political market
Since 1979, when Thatcher’s government came to power, the public sector within the UK has seen an increase in the now omnipresent management agenda, exemplified by a focus on programme evaluation, performance indicators and increased powers for inspection and audit regimes (see Nutley and Webb 2000 and Department of Health 2006). Changes to welfare policy made by successive Conservative governments culminated in the implementation of the NHS and Community Care Act 1990, legislation which ensured local authorities no longer had the monopoly on providing services (see Brammer 2003), thus allowing a variety of stakeholders to be introduced to the social care sector. Following New Labour’s 1997 election victory, political backing has continued to be given to the community care reforms introduced by the previous administration, serving to reproduce a mixed economy of welfare (Orme 2001) and further cement the purchaser-provider split: ideologically, reflecting a belief in the market as the vehicle best suited to secure client choice and high quality provision (Macdonald 2000).
Within this relatively new market economy, comprised of a multitude of organisations effectively open for business, the need for increased regulation soon became apparent, not least to ensure the attainment of threshold standards, the desired outcomes of which were, and continue to be, centrally driven (Orme 2001). To achieve these ends the government machine swung into action and unequivocal backing was given to a ‘what works’ approach (Department of Health 1998 and Department for Education and Skills 2005). With the rise of inspection regimes enforcing set ‘minimum standards’, the social care profession began to see the emergence of formulaic working methods, designed to provide a minimum level of safety for individuals coming into contact with services. This, combined with an emphasis on ‘what works’, culminated in a trend for normative practice imposed from research agendas (see Orme 2001), where interactions became governed, not by personal traits or characteristics, but by formal, often written, agreements (see Smith 2001). Doubling as an implicit classification system, these changes have, perhaps somewhat simplistically, made measurable outcomes possible, obtainable through a collation and analysis of the black and white; typifying, as Webb observes, the current socio-economic climate: ‘neo-liberalism…monitors social welfare as a fundamental datum from which to enforce public policy and systems of management on the basis of economic market rationality’ (Webb 2006a, 38). It is certainly true that there is an ongoing concern within government that services perform (see Department of Health 2006), and that performance is, in part, dependent upon the delivery of cost effective services (see Department of Health 2005); services which are supported by their own evidence base (see SCIE 2005). This naturally raises the question: from a political perspective, has a means been created which acts to reinforce a managerialist culture?
Health care: the first hurdle
It could rightly be said that the UK government has an enthusiastic thirst for performance indicators, objectives and measurable outcomes (Smith 2001). Within health care this form of managerial control is epitomized by performance-focused clinical governance regimes, which, having been established during the late 1990s, made it possible for managers to be held accountable for the quality of care (see Davies and Nutley 2000). In making this link, it becomes ever more apparent why an evidence-based approach moved up the political agenda to become the hot topic that it now is. Clearly, the growth of EBP in health care is not purely a party political act: the medical profession itself has seen potential benefits in adopting the approach, not least ‘to maintain its position as exclusive safe-keeper and wielder of medical knowledge’ (Timmermans and Berg 2003, 16). By tying medicine to scientific evidence the profession has arguably achieved this, and in doing so has added weight to the proposition, perhaps more apt in health care than any other public sector service, that practice based on scientific evidence leads to better and more efficient care and improved outcomes (see Timmermans and Berg 2003), where the prospect of patient harm is reduced (for example see Chalmers 2003).
With New Labour sponsorship given to robust evidence as the key commodity able to transform welfare services, the administration’s early policy documents set out its stall for national standards and guidelines through systems such as NICE, the national appraisal body charged with promoting clinical and cost effectiveness (Department of Health 1998), and evidence-based National Service Frameworks (NSFs), designed to ensure consistent access to services and quality of care throughout the country (Department of Health 1997). As a body charged with the identification and promotion of ‘what works’ in terms of existing health interventions and new treatment options (see Department of Health 1998), NICE has, in many ways, been crafted in the mould of the slightly more mature Cochrane Collaboration. Founded six years earlier, in 1993, the Cochrane Collaboration is, by its own admission, dedicated to making up-to-date, accurate information about the effects of health care readily available worldwide; as such it produces and disseminates systematic reviews of health care interventions and promotes the search for evidence (see ). The rise of these organisations illustrates the powerful force EBP has already become within the UK health sector; however, they represent only one side of the coin.
In endeavouring to disseminate evidence to those professionals who should be using it, both the Cochrane Collaboration and NICE exemplify a top-down approach to EBP, aimed at closing the gap between evidence of effectiveness and practice; part of what some have termed the guidelines movement (Davies and Nutley 2000). Conversely, evidence-based medicine (EBM) represents a bottom-up approach designed to achieve the same ends. EBM outlines a specific approach to practice, where professionals, having sought evidence relevant to a specific clinical question, employ problem solving skills which allow them to determine the ‘best course of action for any given patient’ (Davies and Nutley 2000, 54). Sackett et al. (1996), who developed the concept, define EBM as ‘the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients’ (Sackett et al. 1996, 71), coupled with the stipulation that the practice of EBM means ‘integrating individual clinical expertise with the best available external evidence’ (ibid, 71). By stressing the importance of the clinician’s use of judgement, forming part of his clinical expertise, Sackett et al illustrate the mediating and regulating nature of professional judgement as a positive force in determining the delivery of care, acknowledging the need for the appropriate application of evidence, as ‘even excellent evidence may be inapplicable to or inappropriate for an individual patient’ (ibid, 72). In response to fears expressed about the standardization of care, this conception is made explicit by Timmermans and Berg’s idea that professionals actually work with procedural standards, and are not in danger of blithely following set work practices, as critics suggest.
In this sense, procedural standards, referring to practice policies or protocols, take on a coordinating role and should be viewed as tools which professionals utilize in combination with an expression of their embodied expertise; in doing so it is acknowledged that the practitioner will, as in any other social situation, have to surrender a position of full control or self-determination and let themselves be acted upon (Timmermans and Berg 2003). Like any conceptual definition, Sackett et al’s proposition is not necessarily definitive (for example see Cohen et al. 2004), and the term EBM will have different meanings and embody slightly different elements depending on perspective. It is significant, however, as Sackett has been singled out as a leading figure in this area, largely credited for his instrumental role in coining and promoting the phrase (see Timmermans and Berg 2003), and it is his, and his co-workers’, definition which encompasses the three components posited by some to be the definitive elements of EBM: ‘best evidence, client values and expectations, and the practitioner’s individual expertise’ (see Mullen et al. 2005, 65). For these reasons, and the fact that our concern rests with social care, where the picture itself is far from conclusive, it is this definition that serves as our comparator.
The scientific prod
EBP is essentially a product of modernist ideology, containing, as it does, a desire to construct a ‘technology of care for purportedly making states of affairs more manageable, ordering and predictable’ (Webb 2006a, 161). Such ideology pervades the policy process, evident in the government’s adoption of policy making models based on rational decision-making (see Nutley and Webb, J. 2000), a textbook approach to policy analysis which has been described as ‘promoting the idea of a ‘scientifically guided society’’ (Nutley and Webb, J. 2000, 33), through to the construction of quality measures, where the best efforts are made to iron out all notions of complexity, subjectivity and partiality (Orme 2001); the implicit belief being that a truth exists, ‘to be learnt and applied in a technical rational way’ (Orme 2001, 621).
Since the Enlightenment period, where narratives promised progress through increased rationality and control (Timmermans and Berg 2003), science has become ubiquitous in virtually all areas of our everyday lives. Indeed, in a society still heavily influenced by modernism, where, at least over the last two centuries, technical reason and scientific knowledge have been perceived to be a means to make life more fulfilling (Webb 2006a), it is hardly surprising that the development of EBP has followed a path of scientific rationality. Part of modernity’s authority revolves around the production of technical standards, the attainment of which has been fuelled by both the need for greater efficiency and the realisation that future development could be controlled by the individual, rather than divine intention. Certainly, as scientific knowledge continues in its apparently successful ability to produce universal statements, statements which explain the physical and natural world, its appeal is self-evident: it has, in the main, risen to the challenge of its own self-fulfilling prophecy. Successive scientific advances combined with the modernist quest for progress have created a desire within the social sciences to replicate that which had been achieved in the natural sciences: the production of generalisations with strong predictive power (see Smith, D. 2002). Thus, EBP, with its attention to outcomes (Smith, D. 2002), can be seen to fit with this strong positivist history, a means proven to be successful in its approach to knowledge generation. 
However, in stemming from this tradition of scientific development, EBP follows the path of progress built on the back of cumulative retention, a position which Laudan sees as short-sighted because, as he puts it, ‘it does not allow for the possibility of progress even when there are explanatory losses as well as gains’ (Laudan 1996, 81): in other words, a linear approach ensues; the obvious drawback being that attention is possibly diverted from other, potentially more promising, conceptual frameworks.
Are we sure?
Although cautionary criticism levelled at EBP, for example the assertion that it represents a means to increase the standardisation of practice (see Clark 2006, James 2004, McBeath and Webb 2002 and Smith 2001), is an unavoidable and recurring theme, not without merit, it is necessary to take a step back in order to consider the concept’s epistemological credentials. Specifically, it is necessary to ask whether the application of such a positivist approach to the social sciences is misplaced, with the formulation of universal laws being unattainable, partly due to the unpredictable nature of the social world (Smith, 2002). Given that unpredictability spawns uncertainty, positivist social scientists, like their bureaucratic counterparts, have, in wanting to remove all sources of uncertainty, been charged with ‘yearning for God-like omniscience’ (Smith, D. 2002, 29). Piercing the bubble of divine intention may well have freed the populace from their restrictive shackles, but what has been left is a vacuum that scientists now yearn to fill: the trouble is, it’s proving rather hard to fill and nobody’s quite sure of all the answers. For social work, is it not better to admit that there are limits to our knowledge? Perhaps the social sciences are not suited to foundations built on somewhat wanting universal laws? Instead, is it not more suitable that generalisations about human life form the basis of our knowledge (see Smith, D. 2002), rather than attempting to construct an all-powerful, all-encompassing, though ultimately fallible, one-size-fits-all theory? Should we learn to live with the ‘principle of uncertainty’ (Firth et al. 2004, 146) and, in doing so, be willing to accept at least some degree of epistemological relativism?
Uncertainty is by no means a bad state of affairs: in acknowledging that social work is an interpersonal profession, Carole Smith (2001) vocalises the need to embrace it in order to aid the attainment of positive outcomes. Moreover, in paying homage to postmodern ideology she states that ‘if the basis for probability calculations is fatally flawed, then a dependence on risk assessment and risk management can no longer be tenable’ (Smith 2001, 287): social workers should therefore ‘abandon the spurious expectation that they can predict conditions and outcomes of risk’ (ibid, 290). Indeed, in exhibiting a raft of behaviours guided by multiple motivational forces, individuals further compound the subjective and therefore unpredictable nature of the social world; the implication for social work being that an objective stance or perspective, as demanded by such a rigid scientific approach, is both unrealistic and inappropriate. To illustrate: Clark (2006) vehemently states that values are an inescapable facet of social work, one consequence being that ‘Value neutrality over many pressing contemporary social issues is thus neither feasible nor desirable for human service professionals’ (Clark 2006, 75); essentially the assertion is that the approach described is both limited and limiting, assuming a deterministic view of the world (Webb 2001). Some of the criticisms of evidence-based medicine concur with this premise, and are similar to those levelled at EBP within social care; for example, there is concern that the narrow definition of evidence fails to integrate other, non-statistical, forms of information such as professional experience and patient-specific factors (see Cohen et al. 2004).
This is not an uncommon claim, and extends further to become a fear that the emphasis on hard scientific data will devalue clinical expertise, largely derived from experience and the detailed study of individual patients (Williams and Garner 2002), resulting in the development of a bland uniformity where, amongst other things, autonomy, innovation and creativity diminish (see Timmermans and Berg 2003). However, in some areas of health care, such as medicine, where outcomes of procedures are more clear-cut, the adoption of an evidence-based approach set within the paradigm of scientific determinism may arguably have found its niche; this is highlighted by the fact that there is more evidence-based work in curative medical work than in care work (see Garretsen et al. 2005).
2 EBP in social care
Call for proposals
Having set the stage it is easy to see why EBP has attracted many suitors: spurred on by advances in medical science and dovetailing political ideology, its popularity is bolstered by our ‘technocratic culture’ (Webb 2001, 58); a culture increasingly devoid of its organic essence, ever more reliant upon notions of acceptability, where the visibility and vocalisation of some kind of tangible facet is almost deemed necessary to secure societal approval. Arguably the growth of EBP has created an expectation that professionals within the human service professions need to be capable of proving their worth (see McCrae et al. 2005), together with a visible means of comparison, however crude. It is certainly true that as a profession, social work has frequently faced questions surrounding its effectiveness (see Macdonald et al. 1992 and Sheldon and Chilvers 2000), with criticisms extending to its near inability to produce scientific evidence of its worth and the perception that the profession has, for too long, been guided by practices based on personal and professional belief systems (Garretsen et al. 2005). Deemed an unacceptable state of affairs, these observations have led to claims that there is a knowledge gap in the social services which must be addressed (Sheldon and Chilvers 2000); not least to rein in ‘habits of mind’, which would otherwise exert inappropriate influence upon proceedings (Sheldon 2001, 805). Like Sheldon, Chalmers acknowledges that a knowledge gap exists, stating that evidence derived from evaluative research should be sought and, where found, should guide practice (see Chalmers 2003); the proposition being that evidence derived from research evaluations can protect individuals from harm, by limiting professional practice based purely on the best of intentions (Chalmers 2003, 2005).
For Chalmers, evidence should ideally be provided by systematic reviews, a methodology credited for its scientific robustness and its perceived ability to minimise the effects of bias and chance. Once made available to practitioners, it is argued that this evidence can fulfil an adjudicatory function among conflicting opinions, thereby increasing the chances of an optimal outcome. Tied to this is an acceptance that professional judgement will always be needed when utilizing research evidence, as account will need to be given to ‘needs, resources, priorities and preferences, and other factors’ (Chalmers 2003, 36); although it is consigned to a somewhat backseat role, there to limit inappropriate actions, no less.
Not wanting to get ahead of proceedings, such as by opening the debate about what counts as evidence and its various spin-offs, it is necessary to note the existence of more reserved perspectives, such as that offered by Hammersley (2005). Hammersley does not completely disagree with Chalmers’ position, but the nuances in opinion offered are important for their clarifying quality; illuminating some of the contentions encountered when attempting to apply the concept to the social care arena. In exploring the proposition that research evidence can pave the way to doing more good than harm, Hammersley, besides pondering whether professionals can ever be prevented from doing more harm than good, situates judgement at the heart of practice, combined with the assertion that in critically assessing research evidence the individual doing so will draw upon their own experience and background knowledge. He makes this point in relation to the fact that, in contrast to the medical profession, interventions performed in other personal service professions do not necessarily lend themselves well to standardisation, nor are their outcomes reliably measurable; hence ‘the role of practical judgement may be greater in some fields than in others simply because of the nature of the problems professionals have to deal with and the circumstances in which they must act’ (Hammersley 2005, 89).
Essentially, what is going on here is a debate concerned with a desire to construct a concrete foundation for social work; for want of a better term, a validated authority; integral to which, annoyingly for some, is the fact that value judgements are inextricable from the profession. In this respect EBP represents an attempt, not only to improve outcomes, but to deliver social work its validated authority. Whether a flawed attempt or not is a matter for later debate, but it is one which feeds into the view that in a context of changes to the profession, and the threat to professional identity contained therein, an evidence-based approach to policy and practice needs to be considered (see Macdonald 2000); not least for the perceived protection it affords professional jurisdiction (see Abbott 1988). This pattern of legitimisation is already well advanced in health care, where the power of specialist knowledge is widely recognised (for example see Davis and Nutley 2000), creating something of a domino effect: ‘By showing that they [other health professionals] have a solid knowledge base, just like the medical profession, and that they are self-critical and scientific in their approach, these professionals hope to obtain high professional status’ (Timmermans and Berg 2003, 17).
Currently at the forefront of academic discourse (Mullen et al. 2005), EBP has yet to be fully adopted by the social work profession within the UK, despite historical flirtations and affiliations with behaviourist ideology (see Gray and McDonald 2006 and Webb 2006a). Described by some as a ‘laudable aspiration’ (Sheldon and Chilvers 2000, 8), it is, at present, a concept gradually moving towards implementation. Following the lead set by the health sector (see Sheldon and Chilvers 2000) there has been a steady growth of research and development schemes within social care. Already these have begun to make their mark on practice, with social work increasingly characterised by technical competence and system confidence (Smith 2001); epitomised by the belief that ‘a formal rationality of practice based on scientific calculation will result in a more efficient and economically accountable social work’ (Webb 2006a, 159). One of the most significant of these early schemes was the establishment of the Centre for Evidence Based Social Services (CEBSS), a collaboration between the Department of Health (the scheme’s funder), sixteen social service departments in the south and south-west of England, and the University of Exeter.
As a pioneering centre for EBP, operational under the directorship of Brian Sheldon between 1997 and 2004, CEBSS was primarily concerned with translating the results of research into practice and service delivery, improving the dissemination of research findings and commissioning new research where gaps in knowledge were identified (see Sheldon and Chilvers 2000). Essentially CEBSS was the forerunner to SCIE, the national body launched as part of the government’s drive to improve social care. SCIE’s central role is to develop and promote knowledge about good practice through evaluations of social care, the ultimate objective being to improve the evidence base of UK social care (for example see SCIE 2005). The visible influence of the health sector is not only apparent in the development of SCIE: at an international level the establishment of the Campbell Collaboration, sister organisation to the Cochrane Collaboration, clearly illustrates the mirror image forming between health and social care. Again, the concern is with the provision of evidence of ‘what works’ and the dissemination of information on the effectiveness of social care interventions ( ). Similar too, is the Campbell Collaboration’s standard currency: systematic reviews. Largely credited with scientific prowess, one obvious drawback of this methodology is that it is never possible to tell how many studies have been conducted but never reported (Williams and Garner 2002 and Gould 2006), meaning that potentially pivotal information remains unknown, the implication being that inaccurate conclusions may ensue. Given that EBP progresses along rather linear lines, it is to be expected that SCIE’s stance on evidence will follow the path laid out before it.
In many ways it does, and growing emphasis has been placed on the role that systematic reviews can play in the development of service and practice guidelines (see SCIE 2003); however, the organisation’s website states that ‘What counts as knowledge is a key issue for SCIE. The knowledge underpinning our practice guidance must be well-researched, evaluated, relevant, and take account of what people want from the services they are using’ ( ). It is this latter factor which is important, as it indicates an inclusivity sometimes lacking, although SCIE sees its inclusion as something of a challenge in terms of ‘finding reliable ways of synthesising data from non-experimental studies’ (SCIE 2003, 17).
In preparation for the later exploration of the many evidence-focused terms that have been mooted, it is important to mention the distinction between two different approaches to EBP, pragmatist and experimentalist, to which Webb, citing Trinder, draws our attention: ‘Pragmatists claim that EBP involves integrating professional expertise with the best available external evidence from good quality research as well as considering values and expectations of clients and service users’, whereas ‘for adherents of experimental EBP randomised control trials are the gold standard’ (Webb 2006a, 157). This is significant, as the future shape of an evidence-focused approach is likely to be heavily influenced by at least one of these perspectives. Sackett et al’s proposition perhaps best emulates the pragmatist stance, where a degree of provision is given to the complexities of the social world and the mediating mechanisms of the practitioner. Conversely, in assuming that the social sciences should follow the model of the natural sciences, the crisper the reflection the better (see Smith, D. 2002), Sheldon’s feet are firmly in the experimentalist camp, the pursuit of instilling confidence in the profession evident throughout: ‘We cannot, through its [science’s] procedures ever be sure, but we can be surer for now’ (Sheldon 2001, 806, original emphasis). However, the translation of scientific methodology, relying as it does on the ability to control and isolate as many variables as possible, to the social world is less than straightforward: in this environment the subject in question cannot be broken down into simple input, intervention and output categories, as the elements of study are ‘complex, conscious, sentient actors’ (Nutley and Tilley 2000, 254).
Observations like these have not gone completely unnoticed by policy makers: there is some recognition that the social work role is complex and typically involves ‘working in the context of ambiguity, uncertainty and risk’ (Department of Health 2006, 188). Whether hindering the wholesale implementation of EBP or not, the government’s strategic commitment continues, visible in the proposal for a new National Reference Group for Health and Wellbeing, which, once operational, will oversee the accessibility and use of the evidence base for interventions that support health and wellbeing, working in joint partnership with SCIE and NICE (see Department of Health 2006). Initiatives such as these, while creating an infrastructure of some description, do not necessarily sit well with the desire that, rather than becoming slaves to authoritative opinion, social workers become autonomous, active evidence-based practitioners who are ‘informed, critical consumers of the practice literature’ (Mullen et al. 2005, 69): an area for subsequent consideration.
The changing nature of a profession
In many respects, social work has had a tumultuous history, being described as a ‘developing profession’ (Macdonald et al. 1992, 626) even as late as 1992; at times a thorn in the side of successive UK governments, most pointedly when child abuse scandals rocked the profession in the 1980s, the findings from inquiry reports often critical, consistently highlighting the need for better training and improvements in inter-agency work and co-operation (see Brammer 2003). However, the change of government in 1997 appeared to signal a shift in political appreciation; as Orme, quoting the Department of Health, points out, the government ‘after years of trying to either ignore or eradicate social work, had come to a realisation that ‘social services can be a powerful force for good in our society’’ (Orme 2001, 614). Although could it be that this ‘can’t beat them, join them’ attitude actually resembled something of a Trojan horse? It is, after all, easier to realise change if you are the one(s) in the driving seat; especially if the social conditions are right. Indeed, social work has tended to make the headlines for all the wrong reasons; when it has come under the spotlight, it has often done so through what Orme identifies as ‘knee-jerk responses to public enquiries, that assume quality will be achieved by proceduralizing and bureaucratizing practice’ (Orme 2001, 618), where resources are channelled into regulatory or protective functions in order to address risks that may not necessarily have such a high priority (see Webb 2006a); often at the expense of preventative work (see Sheldon and Chilvers 2000). The public’s role has, perhaps inadvertently, been instrumental here.
In becoming ever more questioning, with better access to information, and having seen professional authority called into question by media exposés attuned to both fulfilling the nation’s desire for information and fuelling its anxieties, individuals are no longer so content to be the passive recipients of expert knowledge (see Davies et al. 2000; Phillips 2006), nor, for that matter, of professional decision making. What has developed is a societal preoccupation with risk, or more accurately risk-aversion, which has arguably given rise to a blame culture, especially in public service institutions like social services (see Moseley and Tierney 2005), whose remit includes that of public protection: cue new political initiatives.
In a climate increasingly influenced by government led value talk of citizenship and partnership, the nature of service user and social worker interactions is changing, in that they resemble a ‘mode of exchange akin to contractual obligations, rather than the kind of engagement that focuses on trusting relationships’ (Smith, 2001, 288). By the government’s own admission there is already a plethora of guidelines available to health and social care professionals (Department of Health 2006), and it seems intent on creating many more through various risk management initiatives (Department of Health 2006). Faced with increased scrutiny, expert communities have, to varying degrees, been willing to concur with the implementation of explicit procedures, as the adoption of independently verifiable rules and procedures can help to maintain their legitimacy (see Timmermans and Berg 2003). In this context then, EBP presents itself as a vehicle for restoring trust in public services, the crucial difference being that this trust is now generated by a system, as opposed to an expert (see Webb 2002).
Development of the relatively new institutional framework (the establishment of the General Social Care Council (GSCC), SCIE and the Commission for Social Care Inspection (CSCI)) both complements and drives these changes, but while boosting the professional status of social workers and offering them protection of title, these organisations have not escaped criticism for their role in increasing the standardisation of the profession. With reference to the latter organisations, Webb claims that ‘these forms of re-regulation of the public sector are increasingly outcomes-based, performance and results-driven, rather than concerned with reflexive processes of practice’ (Webb 2006a, 68). For James, this McDonaldization of social work resembles a ‘one size fits all’ (James 2004, 39) approach to service provision, inherently restrictive in character, integral to which are quests for target identification and measurement of service effectiveness (Smith 2001). With the proposed merger of CSCI and the Healthcare Commission (the body charged with improving and inspecting the UK health sector) due to take place by 2008, creating one body concerned with the attainment of outcomes set by central government (see Department of Health 2006), this trend shows little sign of abating, and continues to raise concerns that professional autonomy will be overly compromised; at worst reducing social work to nothing more than a set of automated responses, based on ‘rationally calculable maximizing solutions’ (McBeath and Webb 2002, 1031).
Efforts to reduce margins of error through the imposition of, and adherence to, clearly defined procedures have also led to the charge that social work has retreated to a position of defensive practice, but this is not necessarily the case; it seems that EBP, in the right pair of hands, is capable of much more, as Macdonald notes: ‘Defensive practice (one consequence of recent changes in legislation and ‘political climate’) does not necessarily sit easily with EBP, which might justify taking more risks than some would care to choose’ (Macdonald 2000, 134). Furthermore, one of the aims of the CEBSS project was to educate the participating groups at every level, the apparent consequence being that management frequently took away ‘what they would have liked not to hear’ (Sheldon 2001, 807). Emerging here are issues pertinent to social work’s future position, a subject for further exploration; for now Webb provides a fitting opener, stating that EBP ‘presents itself in a way that is likely to secure its much needed institutional endorsement and legitimation’ (Webb 2001, 60).
3 What to include? Evidence focused social care and semantics
I’ll huff and I’ll puff and I’ll blow your house down
A lack of consensus permeates every aspect of the EBP debate: fundamentally, there is no agreement as to what counts as evidence. Primarily centring on the use of research evidence as the platform for EBP, the polemics continue in earnest (for example see Chalmers 2003, 2005; Hammersley 2005), the main objective being to produce evaluative research which is capable of distinguishing any specific effects of an intervention from any other factors (Nutley and Tilley 2000). The core issue is one of formulating a definition with an appropriate level of inclusion: if SCIE’s proposition that evidence equals ‘research findings and interpretations of the findings’ (SCIE 2005, 3) becomes definitive, then a wealth of potentially useful information is omitted, including other forms of knowledge.
In terms of evidence generation in health care, again an area more advanced than that in social care, a favoured method is the use of randomised controlled trials (RCTs). In allowing chance to determine group allocation, thereby reducing the possibility of bias, this methodology allegedly provides a benchmark by which all other forms of evidence should be judged; as such it has, predictably, found favour with proponents of evidence-based social care, who take the view that it is the experimental tradition which provides social work with the securest results (for example see Macdonald et al. 1992). RCTs employ a rigorous method, capable of demonstrating whether or not an intervention is effective in producing beneficial outcomes; however, they fail to explain why the intervention worked (see Davies and Nutley 2000), and, regardless of their scientific credibility, questions of transferability and generalizability remain (see Gould 2006). In the social world, the thrust of these criticisms concentrates on the fact that it is virtually impossible to control the research context (for example see Macdonald et al. 1992), not least due to our inherent human idiosyncrasies (see Hammersley 2005, 92), making even the attainment of internal validity difficult (see Macdonald 2000); furthermore, in offering measures of probability across populations, RCTs fail to provide individually tailored estimates of intervention effectiveness (Gould 2006). Enter systematic reviews, an equally scientific methodology designed to deal with the often problematic nature of individual studies. Chalmers champions their use, claiming that empirical evaluation in controlled experiments can assess whether the effects predicted by theory hold (see Chalmers 2003, 2005); assuming, of course, that isolation of the numerous factors in play is possible.
For Chalmers, the merits of this methodology stem from its ability to reduce bias in research and therefore to produce valid results; in part because a number of studies are considered within the context of the review and, in the quest for transparency, a protocol is outlined prior to the start. Perceptions that some methodologies are capable of producing evidence that is more valid than that produced by others have led to suggestions that a hierarchy of evidence exists; further demonstrating the continued influence exerted by scientific ideology. For example, the following is outlined by the Department of Health in the National Service Framework for Mental Health:
Type of Evidence
1. At least one good systematic review, including at least one randomized control trial.
2. At least one good randomized control trial.
3. At least one well designed intervention study without randomization.
4. At least one well designed observational study.
5. Expert opinion, including the views of service users.
(Glasby and Beresford 2006, 271)
In their criticisms of this rather positivistic premise, Glasby and Beresford not only dismiss the idea of a hierarchy of evidence, on the grounds that ‘a research method is only helpful and appropriate if it fully answers the question that is being asked’ (Glasby and Beresford 2006, 275), they also reject the notion that objectivity is a prerequisite for the production of valid evidence, claiming that both the desire and efforts to be distanced from the subject may actually result in the production of inferior research, due to the unequal power relationships (now clearly visible) and lack of awareness between the researcher and the researched. Clearly, such views contrast sharply with those seeking scientific robustness: a position, perhaps reassuringly, supported by the prevailing view in health care, which, being seated in an experimentalist school of thought, recognises the need for fair and rigorous comparisons in assessing effectiveness (see Davies and Nutley 2000). However, this is not to say that the current order, stemming from a strong positivist tradition (viewed by some as wanting; for example see Smith, D 2002), will remain as is; as Laudan observes: ‘the views of the scientific community about how to test theories and about what counts as evidence have changed dramatically throughout history’ (Laudan 1996, 80).
Challenges to the current hegemony are already evident. Noted earlier was the idea that value neutrality is something of an illusion; in accepting this position, doubt is cast on the entire notion of objectivity, primarily as the researcher’s unbiased, value-free position, the central tenet of objective research (Glasby and Beresford 2006), is dispelled. This has led to the assertion that ‘all research is inherently subjective’ (Glasby and Beresford 2006, 279), a supposition supported by the fact that, either by design or subconscious predisposition, only a few of the ‘infinite number of events, actions and thoughts that make up a programme...get set down in research reports and rather fewer get recorded in research reviews’ (Pawson 2001a, 17). Problems with measurability are well documented, raising the question of whether social work results are in fact quantifiable (see Garretsen et al. 2005). While the current agenda of performance management requires quantifiable outcomes, centring on quantitative indicators (see Webb 2006a), should the focus not be on definitions of quality that are meaningful and not just measurable (Orme 2005)? Once again, the ongoing interaction between evidence and public policy (Nutley and Webb, J. 2000) is visible, as is the question of who decides, i.e. ‘within whose frame of reference are proofs of efficacy and effectiveness being sought?’ (Firth et al. 2004, 148); more important perhaps is the question of who actually decides what constitutes valid evidence (Glasby and Beresford 2006).
Differing views on what to measure and how best to measure it relate to the dichotomy between the use of quantitative and qualitative data, with utilization of the latter causing much contention, even though it is credited with providing a depth of understanding lost to quantitative methodologies. Significantly, qualitative research has been claimed to place greater emphasis on examining why a particular intervention did or did not work (Williams and Garner 2002), partly by being able to explore the experiences of service users and staff (see Glasby and Beresford 2006). While Chalmers himself welcomes the inclusion of a variety of study designs, he is more cautious in his views of qualitative data, stating that its inclusion in systematic reviews poses a ‘methodological challenge’ (Chalmers 2003, 35). Like Chalmers, Sheldon sees the need for research data to be typically robust in nature, capable of providing information at ‘a secure level’ (Sheldon 2001, 804), and thus pursues a quantitative approach, typified by quests for predictive and explanatory power, founded on logic and replicability of method (see Gould 2006). However, if this latter approach is followed, are we likely to miss something? Will the individual actually be omitted from the equation, their personal accounts, capable of providing the immediate working material for direct practice (Firth et al. 2004), merely cast aside? More alarming perhaps is the danger that, by instilling not only faith, but confidence and credibility, in an approach which is predominantly influenced by the natural and medical sciences, a false view of the world will come to pass (see Glasby and Beresford 2006). Fortunately for social work, rather more sensible attitudes are emerging which, rather than arguing the toss over which is the better approach, are turning their attention to work that seeks to integrate the two (see Gould 2006).
Opening the box
Given the largely ideological differences noted above, it follows that there are numerous conceptions of an evidence-focused approach: from Sackett et al’s (1996) proposition that EBM is ‘the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients’ (Sackett et al. 1996, 71), which, exemplifying its pragmatic guise, states that ‘External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision’ (ibid, 72), to Chalmers’ (2005) idea of evidence-informed practice, where the best efforts are made to use valid research, or conduct valid research, to shape policy and practice. Within and beyond these parameters, there is a plethora of literature concerned with constructing terminology deemed fit for purpose, demonstrating the difficulties in moving an evidence-focused concept towards implementation. For instance, in their study of a mental health social work team, Firth et al. (2004) claim that the social work evidence base should ideally be derived directly from practice. They go on to state that evidence of what works could include case descriptions co-written with service users and the development of suitable outcome measures that take into account a broader, and perhaps more appropriate, range of factors, such as those which focus on quality of life. This somewhat reflective approach affords the opportunity to develop outcome and quality goals pitched at a more local and individual level, thereby surmounting the difficulties associated with a one size fits all approach, often visible in evaluative research methodologies; most obviously through claims of predictability and probability.
Subscription to means-ends rationality may well advance certain notions of knowing, but in relation to the social world such reductionism clearly has its limitations: in what at first appears to be a promising conception of evidence-based social work, centred on individual assessment, Mullen et al stipulate that ‘The assessment should be used as a basis for a well-formulated question that must be not only answerable, but also phrased in a way that a search of existing research literature can be conducted to answer the question’ (Mullen et al. 2005, 67). Granted, in the next sentence they indicate the need for discretion in deciding upon the applicability of said evidence; nonetheless, the restrictive premise is set.
Building on the idea of reflective practice, Ferguson introduces the idea of ‘critical-best practice’ (Ferguson 2003, 1005), which encourages the professional to question all aspects of their work, even that which is perceived to be ‘best-practice’, a term not limited to outcome, but inclusive of social action and processes of practice; those intangible facets which give social work its unique edge. These ideas are not out of kilter with SCIE’s recent proposition that there is a need for ‘social work research that is practice-based and that will deliver practice change’ (SCIE 2005, 10), and together they represent more of a bottom-up approach to evidence-focused social care, one which is both pragmatic and considerate of social realities, where the need for context-sensitive practice is recognised (Nutley and Davies 2000). Following similar lines is the idea of ‘knowledge-based practice’ (Glasby and Beresford 2006), where knowledge not only includes the contribution made by harder scientifically based research, including systematic reviews, but also includes the tacit knowledge of practitioners and, most importantly, the experiences of service users. Again, there are links with the work of SCIE: SCIE sees evidence as one element comprising knowledge, along with practice wisdom and service user and carer experiences and wishes, with no hierarchical delineation separating the three (see SCIE 2005). This leads us to muse whether the development of a social work knowledge base, as opposed to a potentially narrow evidence base (see McCrae et al. 2005), would be a more worthwhile undertaking, depending, naturally, on how knowledge is to be defined.
4 Overcoming contextual confines: is it enough?
Making it happen
As an approach waiting in the wings, EBP may have cultivated support from senior managers (see McCrae et al. 2005), yet its coronation remains uncertain. At a practical level, constraints such as lack of protected time to access research resources (for example see Moseley and Tierney 2005) and associated financial pressures impede its implementation. However, such constraints are secondary to more fundamental questions, concerned not only with definitional and inclusivity contentions, but with issues such as: how is it possible to prove that EBP can produce better results, and where is the evidence? (see Mullen et al. 2005). Regardless of how such difficulties are countered, under the ‘fluid conditions of a complex social system’ (McBeath and Webb 2002, 1016), uncertainties are inescapable: the problematic implication being that not all is predictable. For social work a significant source of unpredictability is the context, ‘changing with time and space, in which it is practiced’ (Smith, D. 2002, 30). Even in the more sterile health care environment, the importance of context has been stressed; for instance Davies and Nutley (2000) state ‘Effectiveness may be quite context-dependent…Therefore assessing the value of evidence, and making decisions on implementation, requires thorough consideration of local circumstances’ (Davies and Nutley 2000, 50). The premise that social work cannot be decontextualized (for example see Webb 2001) not only reflects a concern that an evidence-focused approach, if confined to an experimentalist camp, is something of a blunt tool, but also raises the possibility that subsequent recommendations may be negated by specific contextual conditions, either inherent in the research context or in the subsequent intervention context (see Pawson 2001a).
Thus the issue of decontextualization poses a significant obstacle to moving an evidence-focused approach from rhetoric to reality; as Garretsen et al succinctly note, ‘If a work method has proved to be effective in a particular place at a particular time, this does not automatically mean it is effective in any situation at all times’ (Garretsen et al. 2005, 656). In effect, evidence-focused approaches, particularly those drawing on rigid scientific ideology, are in danger of being hoist by their own petard: positivism’s ‘decontextualized preoccupation with outcomes invariably means that most of the results of positivist research are non-significant and inconclusive, because the theories that it is supposed to be testing depend critically on the specific context in which they are implemented’ (Smith, D. 2002, 30). Fundamentally, the sticking point lies in the preoccupation with results rather than why (for example see Macdonald et al. 1992); an emphasis which closes the door to the suggestion that interventions may reach the same ends by different means (see Pawson 2001a). This premise has led to calls for a greater depth of understanding to be sought between the various factors in play in a particular social context; for instance, Nutley and Tilley state that understanding should be sought between ‘a desired set of outcomes, and the interactions between the context(s), intervention(s) and target group(s) that may lead to such outcomes’ (Nutley and Tilley 2000, 265), thereby maximising the scope of what can be learnt and then built upon.
The essence of this theory-driven approach, based on realistic evaluation (see Pawson and Tilley 1997), lies in its ambition to overcome the problematic issue of context: it is argued that in seeking to understand the ‘mechanisms of action’ (ibid, 265) operating within an unpredictable context, potential is given to the possibility of obtaining aggregate effects; mainly due to the fact that thinking has moved from an outside position to one concerned with the inner processes of change.
Why, but how?
In further developing a realist approach, Pawson (2001b) proposes the method of realist synthesis, claiming that it is not simply an evaluation technique, but a ‘framework for the whole enterprise’ (Pawson 2001b, 3), the central tenet being that it is not the programmes that work, rather the resources they offer to enable individuals to make them work. Thus, resources are seen as a mechanism which may be activated, and therefore potentially successful, depending upon the nature of the individuals and their wider circumstances. Understanding of the outcome, whether successful or not, is provided in the form of a context-mechanism-outcome (CMO) configuration (see Pawson and Tilley 1997), with an emphasis on obtaining information on ‘what works for whom in what circumstances’ (Pawson 2001b, 4); building as it does on a generative theory of causation, which sees causation as acting internally as well as externally (see Pawson and Tilley 1997). Within this framework a variety of contexts form the comparative bed for a given intervention, culminating in the production of explanations which outline why a programme theory works better in one context than another (see Pawson 2001b, 2006). Should a review of a particular mechanism indicate an unsuccessful outcome, the insight gained from the specific CMO configuration can be utilized to explore the reasons for failure and help identify what changes could be made to help attain future successes. This continual quest for explanation represents a process of theory-building (see Pawson 2001b) which attempts to overcome the problems associated with the transferability of research findings derived from more established methodologies, the aspiration being the production of ‘middle range theory’ (Pawson 2001b, 11).
Making no assumptions that an initiative will always replicate a given CMO configuration, realist synthesis appears to be more forgiving of the social world and its human agents, reflecting as it does the open nature of social systems (for example see Collier 1994). Indeed, by striving to understand the internal mechanisms of causality (the why), realist synthesis represents an attempt to overcome the limitations of traditional experimental methodologies, deaf as they are to the transformative power of context-specific subtleties (see Pawson and Tilley 1997). Perhaps unsurprisingly, this perspective has attracted encouraging support from those disillusioned with the rather reductionist alternatives: in echoing realist sentiment, David Smith argues that the development of middle range theories affords an opportunity to interpret differences and similarities among groups of programmes, and in doing so, the difficulties encountered when trying to formulate ‘a universal one best way of responding’ (Smith 2002, 31) do not come into question; instead, an exploration of options is permitted, thus making the attainment of appropriate solutions more likely. Part of the appeal of a realist approach lies in its efforts to seek a deeper ontological understanding, an understanding which is inextricably linked to a notion of causality that recognises the transformative potential of the human agent, an idea initially developed by Bhaskar, the proposition being that ‘people make societies and societies make people’ (Collier 1994, 143). Attempts to understand the processes of human change in relation to a given intervention, whilst acknowledging this dynamic relationship between the individual and society, demonstrate an effort to move understanding forward, by essentially seeking explanations to an otherwise unseen reality (see Morén and Blom 2003).
Translated to social work practice, a glimpse of the previously unknown, while certainly not definitive (see Pawson 2006), offers the practitioner a greater room for manoeuvre in having access to ‘plausible patterns linking interventions and results’ (Morén and Blom 2003, 60).
While realist ideology endeavours to get under the skin of the social world, to create an evidence base for social work which is more ‘real’, this effort is still a scientific enterprise, the aim being to identify theories which ‘sustain law-like regularities’ (Pawson and Tilley 1997, 63) via the observation of empirical events (see Morén and Blom 2003, 43); this, tied to a caveat of fallibility, where the possibility is left open for future ideas that might prove more efficacious (see Pawson 2006). Certainly, this approach sits rather more comfortably with concepts of unpredictability and uncertainty, and the depth sought should be welcomed, but has the development of a suitable knowledge base for social work been advanced? Surely Sheldon’s premise that ‘We cannot…ever be sure, but we can be surer for now’ (Sheldon 2001, 806: original emphasis) still stands, perhaps more fitting even, descriptive of an endemic status quo? Clearly, there are limits to what can be known, and any form of evidence must be seen as only one component within a larger body of knowledge; in turn, knowledge, however defined, forms only one element of practice, which arguably ‘exists to inform their [social workers] understanding, not to dominate it’ (England 1986, 35).
It is therefore necessary to consider a shift in emphasis: accepting the premise that professionalism is ‘an art that involves experience, intuition and creativity because the problems presented by each client are unique…’ (Phillips 2006, 32), attention turns to the qualities and skills of the practitioner. The reasoning behind this realignment stems from the fact that without the necessary skills and abilities, any level of understanding is likely to remain elusive; thus the ability to understand, central as it is to social work (see England 1986), leads to the more pertinent question of how understanding is achieved. Consequently, as social work is an interpersonal profession, in which abstract knowledge plays a supporting role, consideration of this question moves the centre of attention from a preoccupation with understanding the intervention to understanding the individual. Here attention turns to England (1986), who outlines a framework for understanding, primarily dependent upon the social worker’s selection and synthesis of the information necessary to build a picture of presenting circumstances and situations, crucial to which are the choices taken by the practitioner. England asserts that although these choices are guided by formal learning of relevant knowledge, ideology and philosophy, ‘the specific process will be one which is intuitive…’ (England 1986, 29); in this light, the practitioner’s understanding and actions stem from their consciousness: termed the ‘intuitive use of self’ (ibid, 32), this process is seen as central to social work. In giving the human element of social work its deserved status, England ties this admittedly value-laden concept to the notion of social worker as artist, where practitioners are required ‘to be astute about their own experience, perception and understanding and to be fluent in their expression, as well as to subject their ability to the critical scrutiny of others’ (ibid, 135).
While values are seen as essential to the attainment of understanding, it is through comparison and critical exchange that they permit some kind of relative evaluation: England asserts that via critical inquiry, values move from the individual to become those of the culture and in doing so create a shared understanding, i.e. the basis for appraisal (see England 1986). In attempting to conceptualise a process of knowing at the level of the individual, whilst acknowledging the influence of the cultural context, this perspective offers a more advanced theory of the inner workings of human nature and brings the process of interpretation, so integral to social work, to the fore. Accepting that knowing, in terms of understanding as identified here, is invaluable for good practice, it is not just theories that should be developed, but the skills of the worker, and, although greatly important, not just those limited to critical appraisal; hence the focus turns to the quality of the actor (McBeath and Webb 2002), rather than the quality of their tools.
5 What’s in it for social work: are we on target or off-side?
The story so far
In exploring the current position of evidence-focused social work within the UK, it becomes clear why such an approach has yet to be fully adopted by the profession: faced with conceptual ambiguity and scepticism about its fitness for purpose, it currently resembles an ideological position in the making, with no real consensus as to what its ultimate composition should look like. Certainly some of the individual perspectives contained under this umbrella term, such as experimentalist EBP, are well established, although in the face of epistemological criticisms their self-assured nature all but melts away. While a pragmatic approach seems better suited to realising a suitable concept, incorporating much needed inclusivity and flexibility, there remains the question of how the realisation of an evidence-focused approach can be attained in a way which secures the delivery of sustained good practice. The idea that an evidence-focused approach can prevent professionals doing more harm than good (Chalmers 2003, 2005) speaks directly to this concern, and it can be argued that the narrower the scope of the concept, the easier it is to prescribe some kind of normative service standards, thereby providing a basic safety net for practice, founded on transparent, and therefore easily identifiable, evidence (see Gambrill 2003). Conversely, it is clear that the margins of error are not necessarily reduced; the rigidity imposed prevents the execution of sound decisions based on good professional judgement, as the professional’s appropriate skills have not been given their due concern, their inner guiding mechanisms inadequately fostered.
Although there is some recognition that the skills of social workers need to be enhanced in a way which provides them with the support necessary to work within the parameters of an evidence-focused approach (see Macdonald 2000 and Sheldon 2001), affording them the skills and abilities to ‘locate, appraise and judge the relevance and applicability of research’ (Moseley and Tierney 2005, 115), attention to the practitioner’s interpersonal skills and human qualities is often second to their technical competence.
As acknowledged, criticism specific to the term EBP has frequently taken the line that it is inherently restrictive, leading to claims that it is both conceptually confining and theoretically impoverished, primarily as a result of the ‘conceptual and practical rigidities’ (Gray and McDonald 2006, 8) brought to bear by adherence to inadequate epistemology, and the resultant reliance upon reductionist methodology that deals ‘best with those aspects which can be rendered ‘visible’ to and hence measurable by the research tools of positivism’ (ibid, 14). Amongst its critics, this view has given rise to the fear that the continued trend towards formal rationality, increasingly exhibited in modern practice (for example see Smith 2001, Nutley and Webb, J. 2000), will culminate in actuarialist social work, a mode of practice based on a preoccupation with ‘calculating probability, constructing controlled environments which remove chance and measuring potential through evidence-based outcomes’ (Webb 2006, 140). At worst this almost cybernetic approach to practice will result in a kind of circuit-board social work, where interventions are reduced to sets of ‘mechanised operations’ (Pawson and Tilley 1997, 48), based in part on the inappropriate application of standards ‘in a misplaced equation of standardization with quality’ (Timmermans and Berg 2003, 80).
Claims that until social work can ‘assert the value of its unique contribution, its impact on policy and practice will remain weak’ (McCrae et al. 2005, 70) have driven efforts to seat an evidence-focused social work within a scientific paradigm, on the basis that it is this approach which is capable of providing social work with the secure footing it is perceived to lack. This position is not only untenable due to the nature of the subject matter, but, as Laudan demonstrates, it may also represent an epistemological flaw: ‘The quest for a specifically scientific form of knowledge, or for a demarcation criterion between science and nonscience, has been an unqualified failure. There is apparently no epistemic feature or set of features which all and only the ‘sciences’ exhibit’ (Laudan 1996, 86). Furthermore, adoption of a scientific stance has often resulted in a narrow reliance on research evidence, which, due to its consequential nature, ‘cannot tell us what is the best thing to do, either in general terms or particular cases’ (Hammersley 2005, 95). Accepting that the interactions faced by social workers are too complex for us to predict (see Webb 2006a), it becomes apparent that EBP does not cover all the bases: black holes remain. Faced with varying degrees of separation, consensus is now gravitating towards the softer term of ‘evidence-aware’ (see Phillips 2005), which appears to be both more acceptable, and more appropriate, to social work, in that it highlights the fact that, as Phillips recognises, an evidence-focused approach ‘should be regarded as a support for professional judgement and autonomy’ (Phillips 2005, 2); the role of defined knowledge being to inform understanding (England 1986). However, could this be taken further? Would the term ‘knowledge-aware’ not be yet more befitting?
Such a term would place the emphasis on a knowledge base inclusive not just of a broad range of evidence, but also of professional judgement based on experience, intuition and knowledge of the individual client (see Phillips 2006); although the topic of knowledge, as shall become clear, is shrouded in its own polemical debates.
Changing roles: towards professionalism?
Should the increased focus on evidence and empirical practice continue, there is no question that the future role of social workers, especially in relation to research, will change, with the role of social worker as practitioner and researcher (see Gray and McDonald 2006) likely to become commonplace. Already there have been calls for increased collaboration between academics, practitioners and service users, including the need for workers to be critically evaluative of research studies and to be ‘willing to participate in practice research in order to widen the knowledge about the situations in which they are intervening’ (Orme 2001, 620). This sentiment is echoed in ideas like the one proposed by Nutley and Webb, J. (2000), who claim that wider participation in initiatives like policy action teams may lead to a more practice-based view of policy options, in turn directing researchers to answer questions of not only what works, but why and how; again, the role of social work education is crucial here.
While the drive to increase professionalisation has been aided by initiatives such as the system of registration, largely welcomed for the part it has played in raising the professional status of social workers, as well as service standards (see Orme 2005), it is the change in professional role, dependent upon a specific body of knowledge, which, at present, attracts the most prestige. It follows that the desire to create an evidence-focused knowledge base can be viewed as another means to increase the professionalisation of social work (see Garretsen et al. 2005), thereby bolstering ‘the idea that social work has value as a modern profession’ (Gray and McDonald 2006, 11). In health care, the value of abstract knowledge has not gone unnoticed, as, in addition to medicine, other health professionals now ‘see the creation of guidelines as a crucial strategy in their own professionalisation process’ (Timmermans and Berg 2003, 17); thus, the standardisation of care, whether good or bad, becomes a forgivable consequence. In not wanting to be left behind in the professional stakes, it almost goes without saying that social work should follow the same route in striving for enhanced jurisdiction, through the control of knowledge and skills supposedly specific to the profession (see Abbott 1988). However, not all are accepting of what is, to some extent, the sterilisation of practice: some voices from the medical profession are wary, concerned that too great an emphasis on technical and scientific aspects risks losing the humanity of practice (Williams and Garner 2002). Furthermore, within those professions dealing with the more complex and opaque aspects of the human condition, concern has been expressed that rigid adherence to certain forms of knowledge may be to the detriment of both patient and professional.
Notably, from the field of psychology, it has been suggested that the inclusion of subjective evidence overcomes the problems encountered when objective evidence falls short (see Corrie and Callanan 2001).
Part of the process of professionalisation, then, involves the increased demarcation of explicit knowledge, skills, tasks, roles and so on, but these tangible elements overlook the interpersonal nature of the profession, where values are an essential part of practice: the uniqueness of the qualities endemic to social work is evident when considering this very issue of professionalisation, as in the face of a culture of self-interest, it has been observed that social work ‘holds on to values of compassion, justice and caring’ (Webb 2006a, 200). Values should arguably play a central role in shaping the profession’s future, especially when consideration is given to the wider social context, ever more dominated by political-professional power struggles. So far, government drives for efficiency and measurability have exerted much influence over the social work role, perhaps resulting in an increased standardisation of practice: EBP fits with these political imperatives, and it is widely recognised that research findings coexist with a variety of factors including political ideology, which is, at times, able to dominate evidence (Davies et al. 2000); however, professionals are not without their own political agenda.
An exclusive club
A common theme throughout this paper has been the role of knowledge, the categorisation and ownership of which is arguably the essential component within the professionalisation process. As such, it has been posited that the emergence of an evidence-focused approach may represent the formalisation of knowledge, designed to achieve stability and constant regularities. In keeping with the critical views expressed towards experimentalist conceptions of EBP, Webb states that ‘Formalization requires that one define every object and its limited context in terms of a finite number of strictly identifiable phenomena’ (Webb 2006b, 6), ultimately constrained to ‘a fixed and determined method’ (ibid, 9), intent on determinant calculus. Compounding the claim that EBP acts as a restrictive device, it is possible to see that any formalisation process leads to the exclusion of certain groups or individuals. Indeed, as much of the EBP debate takes place in academic literature, it could be argued that social work practitioners operating on the shop-floor are prevented from engaging in the debate; the use of scientific jargon in research papers has been criticised for this very reason (see Moseley and Tierney 2005). Furthermore, should knowledge generation take place within an experimental paradigm, the researcher may come to dominate the process, thereby ostracising service users (see Ferguson 2003), the individuals who should actually be at the centre of social work; experts in their own right.
In charting the current UK government’s modernising agenda, Orme points out how it focused the debate not only on what the social work task is, who should perform it and what qualifications are needed, but, crucial to it all, on ‘who will decide?’ (Orme 2001, 611). It is here that questions of power take centre stage, implicit, but integral, to which are issues of professional ownership and authority, and efforts to determine what constitutes the nature of social work (Gray and McDonald 2006). Ultimately what is at stake is how the profession is defined and who defines it: whether it is from the inside out or the outside in. Hence, formal practice guidelines have the capacity to act as a double-edged sword: not only do they provide a licence to act, supporting professional jurisdiction and autonomy, they can be, and increasingly are, used to hold professionals accountable by outside agencies. Sheldon and Chilvers note that one of the aims of the CEBSS project was to influence elected members, a third and frequently meddling party, by seeking their backing for a mode of practice, EBP, that would allow social workers to take ‘properly considered, but not entirely foreseeable risks’ (Sheldon and Chilvers 2000, 83). In anchoring social work practice to a suitably robust evidence-focused approach, as a way to protect the profession from being reduced to defensible practice, Sheldon seems to be striving for some kind of emancipatory social work. Credit where credit’s due, but is this not just a shifting of the goal posts? By using evidence as the defence, does this new-found explicitness not then form the basis of defensible practice, or, at the very least, become just another form of justification?
Endeavours to control knowledge via EBP may fit with the process of professionalisation, but has the professional status of social workers been secured? Has social work been saved from ‘outsiders who attack control’ (Abbott 1988, 2)? It seems doubtful at best, the balance always precarious. Returning to the well-worn path of the health sector, Timmermans and Berg claim that when autonomy gives way to accountability ‘What is at stake is who decides how medical work should be done. Are the people trained to do the work or those who pay in charge?’ (Timmermans and Berg 2003, 114). Should it fall to the latter, efficiency drives may well become ever more ruthless, leading to ‘rationing and the non-purchasing of care that clinical judgement says is useful’ (Williams and Garner 2002, 10). Like health care, social work is already faced with similar predicaments, with practitioners ‘increasingly required to demonstrate ‘effectiveness’ to third-party purchasers of their services’ (Gray and McDonald 2006, 13).
Ultimately social work would benefit from a knowledge base which is accessible to all, driven by an ethos of inclusion and supported by a value base which explicitly democratises knowledge (Gould 2006): any limiting agent should be greeted with caution. The usefulness of such an enterprise will clearly depend on how knowledge is defined, and while some of what we know, like much of what we don’t, cannot be explicitly expressed, this ‘unconscious dimension of knowledge’ (Nutley and Davies 2000, 335) pervades social work and cannot be overstated, nor should it be ignored; it is, after all, this tacit knowledge which makes social work specific, individual, personal, and ultimately workable: as Eraut suggests, it is this implicit body of knowledge which is brought to bear in practice, the consequence being that explicit knowledge is consigned to a somewhat subordinate position (see Phillips 2005). While crucial to social work, these facets remain elusive to positivism’s knowledge claims, which sit at a rather rudimentary level; as Rickman notes, positivism ‘has led to researchers choosing the easily accessible and avoiding the complex topics’ (quoted in England 1986, 79). If Latour’s claim that the uptake of theories depends on others’ behaviour and that ‘the status of a statement depends on later statements’ (Latour 1987, 25) is true, then the role of active discussion in theory development should be encouraged: participation is the only way to minimise the inaccurate black boxing of ideas. Although, were evidence to be viewed, perhaps more accurately, as a ‘state-of-becoming’ or ‘always being worked on by difference’ (Webb 2006b, 24), the process of black boxing becomes redundant, inappropriate even; as Abbott suggests, ‘we must forget beginnings and ends, and make history the endless succession of middles that it is’ (Abbott 1988, 322).
In social work there is only so much that can be known by any one individual, the interpretations made invariably containing some level of fallibility, the predictability of events never guaranteed.
Findings that clients really value the caring relationship of social work (see Webb 2006a) highlight the ever-present tug-of-war between interpersonal skills and technical competence which permeates the knowledge debate in social work, at times paying scant regard to social work values and ethics. The emphasis on knowledge, which serves to guide and support practitioners in their work, does have a valid place in contemporary and future social work, although too tight a focus on knowledge is misguided: can neat box-like descriptors of any number of social problems, produced by innumerable social actors, in innumerable situations, ever be attained? Any knowledge claims should be accompanied by a level of humility: there are limits to what can be known, which will vary according to the specific context, relative to a specific time and space; though efforts to seek a deeper ontological understanding should be welcomed and encouraged. Knowledge, in the broadest sense of the term, is not the only influence on professional practice; as we have seen, it is the skills of the practitioner and, perhaps more significantly, the execution of sound judgements that makes social work work. In uncertain circumstances, it is conceivable that good judgements on the part of the practitioner are able to foster the conditions needed to attain successful outcomes, for example the realisation of trusting relationships between practitioner and service user (see Smith 2001). Achieving good judgement is complex, involving an interplay of factors, as Webb suggests: ‘Good judgement in social work entails a process of reflective equilibrium, of striking a balance between concept-driven and practice driven patterns of thinking and moral intuitions primed by factual and counter-factual framing of the situation’ (Webb 2006a, 219).
As such, judgement can, and must, be viewed as akin to a skill which, along with other skills, should be fostered to allow social workers to make deft and knowledgeable assessments of what is likely to be true (Hammersley 2005). Accepting Clark’s sentiment that social workers, like other professionals in human services, ‘cannot pretend to value-neutrality in their professional relationships’ (Clark 2006, 83), and the premise that ‘It is only by the assertion of value that perception can select and synthesize, that it can bring manageable coherence to complexity’ (England 1986, 124), it becomes possible to see that it is through an emphasis on key social work values, such as those of social justice and human well-being (see Bisman 2004), that, perhaps, good judgement can be fostered.
What is needed then is an open recognition of the multi-faceted nature of practice and the interdependence of knowledge, skills, values and so on, which, if permitted, can enable social work to proceed with a well-formed sense of direction (see Firth et al. 2004). Part of this must include an awareness of our own limitations, for example as Sheldon notes ‘…we fail to see something which, in retrospect we realize was staring us in the face’ (Sheldon 2001, 806), along with an acceptance that the possibility of full transparency is an ‘illusionary idea’ (Timmermans and Berg 2003, 206). Consequently, the application of standards should be realistic, ‘not being more detailed than required, not more stringent than necessary, not more imperative than usable’ (ibid, 211); where they do exist, they must not come to be seen as the definitive aspect of social work practice: it is in situations affording adaptability that the true essence of good social work can be found; here, amongst the ‘limited truths’ (Ghilarov 2001, 361), values and ethics are essential.
References
Abbott, A. 1988: The System of Professions: An Essay on the Division of Expert Labor. Chicago: The University of Chicago Press.
Brammer, A. 2003: Social Work Law. Harlow: Pearson Education.
Bisman, C. 2004: Social Work Values: The Moral Core of the Profession, in: British Journal of Social Work, pp. 109-123.
Chalmers, I. 2003: Trying to Do More Good than Harm in Policy and Practice: The Role of Rigorous, Transparent, Up-to-Date Evaluations, in: Annals of the American Academy of Political and Social Science, pp. 22-40.
Chalmers, I. 2005: If evidence-informed policy works in practice, does it matter if it doesn’t work in theory?, in: Evidence and Policy, 2, pp. 227-242.
Clark, C. 2006: Moral Character in Social Work, in: British Journal of Social Work, pp. 75-89.
Cohen, A. M., Stavri, P. Z. and Hersh, W.R. 2004: A categorisation and analysis of the criticisms of Evidence-Based Medicine, in: International Journal of Medical Informatics, 1, pp. 35-43.
Collier, A. 1994: Critical Realism: An Introduction to Roy Bhaskar’s Philosophy. London: Verso.
Corrie, S. and Callanan, M. 2001: Therapists’ beliefs about research and the scientist-practitioner model in an evidence-based health care climate: A qualitative study, in: British Journal of Medical Psychology, pp. 135-149.
Davies, H. and Nutley, S. 2000: Healthcare: evidence to the fore, in Davies, H., Nutley, S. and Smith P. (eds.): What Works? Evidence-based policy and practice in public services. Bristol: The Policy Press, pp.43-67.
Davies, H., Nutley, S. and Smith, P. 2000: What Works? Evidence-based policy and practice in public services. Bristol: The Policy Press.
Department of Health 2005: Independence, Well-being and Choice: Our vision for the future of social care for adults in England. Norwich: The Stationery Office.
England, H. 1986: Social Work as Art. London: Allen and Unwin Ltd.
Firth, M. T., Dyer, M., Marsden, H., Savage, D. and Mohamad, H. 2004: Non-Statutory Mental Health Social Work in Primary Care: A Chance for Renewal?, in: British Journal of Social Work, pp. 145-163.
Ferguson, H. 2003: Outline of a Critical Best Practice Perspective on Social Work and Social Care, in: British Journal of Social Work, pp. 1005-1024.
Frost, N. 2002: A problematic relationship? Evidence and practice in the workplace, in: Social Work and Social Sciences Review, 1, pp. 38-50.
Gambrill, E. 2003: Evidence-Based Practice: Sea Change or the Emperor’s New Clothes?, in: Journal of Social Work Education, 1, pp. 2-23.
Garretsen, H., Bongers, I. and Rodenburg, G. 2005: Evidence-Based Work in the Dutch Welfare Sector, in: British Journal of Social Work, pp. 655-665.
Ghilarov, A. 2001: The changing place of theory in 20th century ecology: from universal laws to array of methodologies, in: Oikos, 2, pp. 357-362.
Glasby, J. and Beresford, P. 2006: Who knows best? Evidence-based practice and the service user contribution, in: Critical Social Policy, 1, pp. 268-284.
Gould, N. 2006: An Inclusive Approach to Knowledge for Mental Health Social Work Practice and Policy, in: British Journal of Social Work, pp. 109-125.
Gray, M. and McDonald, C. 2006: Pursuing Good Practice? The Limits of Evidence-based Practice, in: Journal of Social Work, 1, pp. 7-20.
Hammersley, M. 2005: Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers’ case for research based policy making and practice, in: Evidence and Policy, 1, pp. 85-100.
James, A. 2004: The McDonaldization of Social Work – or Come back Florence Hollis, All is (or should be) Forgiven, in: Lovelock, R., Lyons, K. and Powell, J. (eds.): Reflecting on Social Work: Discipline and Profession. Aldershot: Ashgate.
Laudan, L. 1996: Beyond Positivism and Relativism. Oxford: Westview Press.
Latour, B. 1987: Science in Action. Massachusetts: Harvard University Press.
McBeath, G. and Webb, S. A. 2002: Virtue Ethics and Social Work: Being Lucky, Realistic, and not Doing One’s Duty, in: British Journal of Social Work, pp. 1015-1036.
Macdonald, G. 2000: Social care: rhetoric and reality, in: Davies, H., Nutley, S. and Smith, P. (eds.): What Works? Evidence-based policy and practice in public services. Bristol: The Policy Press.
Macdonald, G., Sheldon, B. and Gillespie, J. 1992: Contemporary Studies of the Effectiveness of Social Work, in: British Journal of Social Work, 6, pp. 615-643.
McCrae, N., Murray, J., Huxley, P. and Evans, S. 2005: The Research Potential of Mental-Health Social Workers: A Qualitative Study of the Views of Senior Mental-Health Service Managers, in: British Journal of Social Work, pp. 55-71.
Morén, S. and Blom, B. 2003: Explaining Human Change: On Generative Mechanisms in Social Work Practice, in: Journal of Critical Realism, 1, pp. 37-60.
Moseley, A. and Tierney, S. 2005: Evidence-based practice in the real world, in: Evidence and Policy, 1, pp. 113-119.
Mullen, E., Shlonsky, A., Bledsoe, S. and Bellamy, J. 2005: From concept to implementation: challenges facing evidence-based social work, in: Evidence and Policy, 1, pp. 61-84.
Nutley, S. and Davies, H. 2000: Making a reality of evidence-based practice, in: Davies, H., Nutley, S. and Smith P. (eds.): What Works? Evidence-based policy and practice in public services. Bristol: The Policy Press.
Nutley, S. and Tilley, N. 2000: Debates on the role of experimentation, in: Davies, H., Nutley, S. and Smith P. (eds.): What Works? Evidence-based policy and practice in public services. Bristol: The Policy Press.
Nutley, S. and Webb, J. 2000: Evidence and the policy process, in: Davies, H., Nutley, S. and Smith P. (eds.): What Works? Evidence-based policy and practice in public services. Bristol: The Policy Press.
Orme, J. 2001: Regulation or fragmentation? Directions for Social Work under New Labour, in: British Journal of Social Work, pp. 611-624.
Pawson, R. 2006: Evidence-based Policy: a Realist Perspective. London: Sage Publications.
Pawson, R. 2001a: Evidence Based Policy I: In search of a Method. ESRC UK Centre for Evidence Based Policy and Practice: Working Paper 3. Accessed 02/06/06
Pawson, R. 2001b: Evidence Based Policy II: The Promise of ‘Realist Synthesis’. ESRC UK Centre for Evidence Based Policy and Practice: Working Paper 4. Accessed 02/06/06
Pawson, R. and Tilley, N. 1997: Realistic Evaluation. London: Sage Publications.
Phillips, M. 2006: Putting evidence into practice: an evidence-based approach to continuing professional development, in: Evidence and Policy, 1, pp. 27-45.
Phillips, M. 2005: Developing Evidence-Based Practice in the Professions. University of Bristol: Working Paper Series Vol. 1. Accessed 30/05/06
Sackett, D., Rosenberg, W., Gray, M., Haynes, B. and Richardson, S. 1996: Evidence based medicine: what it is and what it isn’t, in: British Medical Journal, pp. 71-72.
Sheldon, B. 2001: The Validity of Evidence-Based Practice in Social Work: A Reply to Stephen Webb, in: British Journal of Social Work, pp. 801-809.
Sheldon, B. and Chilvers, R. 2000: Evidence-Based Social Care: A study of prospects and problems. Lyme Regis: Russell House Publishing.
Smith, C. 2001: Trust and Confidence: Possibilities for Social Work in ‘High Modernity’, in: British Journal of Social Work, pp. 287-305.
Smith, D. 2002: The limits of positivism revisited, in: Social Work and Social Sciences Review, 1, pp. 27-37.
Social Care Institute for Excellence 2005: Developing the evidence base for social work and social care practice. Accessed 07/06/06
Timmermans, S. and Berg, M. 2003: The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care. Philadelphia: Temple University Press.
Webb, S. A. 2006a: Social Work in a Risk Society: Social and Political Perspectives. London: Palgrave Macmillan.
Webb, S. A. 2006b: If evidence informed policy works in theory, does it matter if it doesn’t work in practice? University of Sussex, paper in progress.
Webb, S. A. 2002: Evidence-based Practice and Decision Analysis in Social Work: An Implementation Model, in: Journal of Social Work, 1, pp. 45-63.
Webb, S. A. 2001: Some Considerations on the Validity of Evidence-based Practice in Social Work, in: British Journal of Social Work, pp. 57-79.
Williams, D. and Garner, J. 2002: The case against ‘the evidence’: a different perspective on evidence-based medicine, in: British Journal of Psychiatry, pp. 8-12.
Ian J. Dore
University of Sussex
School of Social Work and Social Care, Sussex Institute
Falmer, BN1 9QQ
Tel: +44 1237 87 7888
Email: email@example.com