Ng and de Colombani: Framework for Selecting Best Practices in Public Health: A Systematic Literature Review

Framework for Selecting Best Practices in Public Health: A Systematic Literature Review

Abstract

Evidence-based public health has commonly relied on findings from empirical studies, or research-based evidence. However, this paper advocates that practice-based evidence derived from programmes implemented in real-life settings is likely to be a more suitable source of evidence for inspiring and guiding public health programmes. Selection of best practices from the array of implemented programmes is one way of generating such practice-based evidence. Yet the lack of consensus on the definition and criteria for practice-based evidence and best practices has limited their application in public health so far. To address this gap in the literature, this paper proposes measures of success for public health interventions by developing an evaluation framework for the selection of best practices. The proposed framework was synthesised from a systematic literature review of peer-reviewed and grey literature on existing evaluation frameworks for public health programmes, as well as on processes employed by health-related organisations when selecting best practices. A best practice is first defined as an intervention that has shown evidence of effectiveness in a particular setting and is likely to be replicable in other settings. Regardless of the area of public health, interventions should be evaluated by their context, process and outcomes. A best practice should hence meet most, if not all, of eight identified evaluation criteria: relevance, community participation, stakeholder collaboration, ethical soundness, replicability, effectiveness, efficiency and sustainability. Ultimately, a standardised framework for the selection of best practices will improve the usefulness and credibility of practice-based evidence in informing evidence-based public health interventions.

Significance for public health

Best practices are a valuable source of practice-based evidence on effective public health interventions implemented in real-life settings. Yet, despite the frequent branding of interventions as best practices or good practices, there is no consensus on the definition and desirable characteristics of such best practices; this is hence likely to be the first systematic review on the topic of best practices in public health. Having a single widely accepted framework for selecting best practices will ensure that the selection processes used by different agencies are fair and comparable, and will enable public health workers to better appreciate and adopt best practices in different settings. Ultimately, standardisation will raise the credibility and usefulness of practice-based evidence towards that of research-based evidence.




Introduction

Practice-based evidence in public health

According to the World Health Organization (WHO), public health is defined as all organised measures (whether public or private) to prevent disease, promote health, and prolong life among the population as a whole.1 Public health activities can be generally categorised into five areas, namely monitoring and evaluation, health promotion and protection, healthcare service delivery, health system as well as research (Appendix 1).2-4

Evidence may be defined as the available body of facts or information indicating whether a belief or proposition is true or valid.5 Similar to clinical medicine,6,7 evidence-based public health emphasises proof of efficacy,8,9 so that scarce resources are efficiently utilised on interventions which have been shown to bring about desired outcomes,10 and that benefits outweigh harm for both individuals and society.11 Furthermore, it encourages accountability by decision-makers through the use of objective judging metrics to evaluate evidence.12

Many sources of evidence may be considered in decision-making. However, the quality of evidence is often judged only by the internal validity of study designs,13 with randomised controlled trials held as the gold standard for minimising bias in study results. This then limits the data considered to research-based evidence in many cases. For example, the WHO Guidelines Review Committee evaluates evidence using the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach,14 where one of the assessment criteria for high confidence in the body of evidence is low methodological bias.15 While efficacy under controlled conditions supports causality between the intervention and outcomes, there is growing awareness of the importance of demonstrating effectiveness in actual programme settings, on top of efficacy, for public health interventions.16-18 Evaluating a programme in the real world not only considers interactions between contextual factors and the intervention to ensure feasibility, or external validity, but also broadens the scope of interventions that can be assessed beyond the simple individual-based interventions favoured by randomised controlled trials.19,20 Contextual factors such as social determinants of health,10 the ability of public health workers to deliver the interventions to target groups,19 and accessible resources21 all alter the effectiveness of interventions in reality.
While pragmatic trials may be an alternative in that they are carried out under real-life conditions while preserving internal validity,22 population-wide upstream interventions, for instance policies to promote healthy diets in Finland, remain incompatible with trial settings23 and require more qualitative methods of assessment.24 Therefore, practice-based evidence, as opposed to research-based evidence, has been proposed as a more relevant source of evidence for public health decision-making due to the focus on populations as a unit and complexity of multi-disciplinary interventions.25 Furthermore, use of practice-based evidence may enhance the translation of interventions from research to practice, a problem often cited in literature,26 by considering drivers and barriers in implementation. In this review, practice-based evidence is obtained from field-based assessment of an intervention in a specific real-life setting.27,28 In contrast, research-based evidence refers to empirical data derived from testing hypotheses about the efficacy of the intervention. This may be done through observational or experimental studies, with biases and confounders minimised to elucidate the relationship between the intervention and observed results.29 This review proposes that both sources of evidence are complementary in informing evidence-based practice in public health and greater attention should be given to helping decision-makers tap available practice-based evidence.

Evaluation of existing public health interventions is one valuable source of practice-based evidence.11 While frameworks have been developed to direct the process of evaluation,30 relatively little work has been done to outline criteria for reviewing practices for evidence.31 In addition, programme evaluations are often geared towards ensuring accountability towards funders or improving the programme itself, rather than for sharing of lessons learnt.32

Best practice approach

In particular, selection of best practices is one way via which implemented interventions may be evaluated to generate practice-based evidence.33 The concept of systematically identifying best practices first arose in the private sector,34 where a best practice refers to a model of excellence against which counterparts in the industry can benchmark their own operations and improve performance.35,36

In the public sector, the notion of best practices also underlies policy transfers between countries37 and is increasingly utilised in various sectors including education,38 immigration39 and public health, the focus of this review. A practice may be broadly taken to mean a policy, activity, intervention, approach, programme and so on.40,41 Similar to industries, the objective of identifying best practices in public health is to avoid wasting resources on reinventing the wheel by learning from others under comparable circumstances.34,42 Such exchange of knowledge not only facilitates improvement of current practices, but also helps those starting new interventions to avoid common mistakes43 and accelerate programme development.44 The increased collaboration and learning between organisations are also in line with the global movement to promote knowledge management as a means to improve outcomes.45-47 Nonetheless, the best practice approach requires dedicated resources for programme evaluation and proper documentation.48 The lack of consensus on the definition and criteria for best practices42,49 also impedes the use of such practice-based evidence.33 In addition, the reliability and credibility of practice-based evidence depend on a flexible and transparent evaluation process,50 which has thus far gone unexamined. Hence, there is an urgent need for this review to address the gaps in the current literature and facilitate optimal utilisation of valuable practice-based evidence in the form of best practices.

Aim of study

This systematic literature review aims to develop a scientifically sound and feasible framework for the selection of best practices in public health. This seeks to address the research question: what are suitable measures of success of a public health intervention to generate practice-based evidence? Although a review by Baker similarly sought to identify criteria for evaluating research-based and practice-based evidence, the consolidated practice-based criteria were solely derived from 12 expert interviews.31 Hence the comprehensiveness of identified criteria was highly dependent on the knowledge and experience of the 12 public health experts, unlike in a literature review which can cover more extensive and varied sources to incorporate a wider range of opinions. This paper is thus likely to be the first systematic literature review that attempts to synthesise criteria for producing practice-based evidence in public health to guide organisations and decision-makers in identifying and learning from best practices. Furthermore, the interviews were conducted in 2004 and this systematic review will provide an update to the criteria identified in Baker’s paper.

Methodology

Search strategy

The literature search (April to June 2014) was carried out via three main strategies. Literature found through preliminary searches was consulted to develop appropriate search terms.31,51

Firstly, PubMed (1966-June 2014) and the Global Health Library (2005-June 2014) databases were searched for evaluation frameworks relevant to public health. Secondly, websites and publications of major international health-related organisations were searched to identify criteria and methodologies that had previously been used to select best practices in public health. The WHO library database (WHOLIS, 1948-June 2014) and Intergovernmental Organisation search engine (IGO) were further utilised to ensure the comprehensiveness of the literature search. Lastly, the reference lists of identified articles were hand-searched to select appropriate sources. Details of the search strategy are summarised by database in Appendix 2.

Inclusion and exclusion criteria

In all cases, the search was limited to English records with full text available online (including library searches with available subscriptions at Imperial College London, University of Oxford and World Health Organization) because of practical considerations. Articles looking at best practices or evaluation outside the scope of public health were excluded. In addition, articles listing case studies as best practices without accompanying definition, criteria or selection methodology were also eliminated. However, there was no restriction on the type of literature and grey literature, such as websites and meeting reports, was included. The inclusion and exclusion criteria are summarised in Appendix 3.

Data extraction and synthesis

Once selected, data extracted included basic information like authors and year of publication, definitions of best practice if given, as well as methodology (e.g. expert panels, scoring system) and criteria used to evaluate public health interventions or select best practices.

Unlike standard systematic reviews for public health interventions,52 commonly used data quality assessment tools could not be applied due to the unconventional types of articles included and the focus of the study. Nonetheless, in line with the emphasis on using theories to enhance interventional effectiveness,52,53 data were extracted on whether the theory supporting the evaluation frameworks or best practice selection methods was reported. This was then used to identify high-quality papers. It also ensured that the criteria included in the final framework are aligned with public health principles and theories, and not skewed towards any organisation's interests projected onto their criteria. In addition, evaluation or selection frameworks that were published in peer-reviewed articles were considered to be of higher quality than other types of documentation, as the former are likely to be more robust studies, having been through the peer-review process.

Finally, qualitative data synthesis was performed. Various public health evaluation frameworks were first compared to identify common categories and criteria under each category. Frameworks which have been applied in diverse settings were considered more likely to be acceptable to public health workers. The categories derived from these evaluation frameworks then provided a structure for the proposed framework to ensure that all important aspects of a practice are assessed when generating practice-based evidence. Subsequently, criteria previously applied in the selection of best practices were compiled and classified according to the categories outlined. Once again, a criterion which was consistently used by different organisations is possibly widely accepted as a significant indicator of a successful public health intervention and feasible for application, and hence more likely to be included in the final framework.

Results

A total of 6889 records were obtained, of which 176 were eventually included in this literature review (Figure 1). The complete list of included records is provided in Appendix 4, sorted by database and order of extraction.

Definition of best practice

To develop a framework for selecting best practices, a working definition of best practice is first necessary. One book, 10 peer-reviewed articles and 21 organisational sources included in this review provided varying interpretations. Alternative terms include good practice, effective solution, promising practice and innovative practice.31,40,47,54-58 Good and effective practice are often used to avoid debate about whether a single perfect intervention exists.47 Nonetheless, the process of selecting best practices is comparable to that for good practices and the former is preferred in this review as it provides greater incentive for countries and organisations to improve their practices.59 However, best practice should not be understood in its superlative form,43 but instead seen to encompass interventions that meet a set of pre-defined criteria to varying degrees and reflect the society’s or organisation’s priorities over time.33 In addition, best and good practices were occasionally differentiated from promising or innovative practices by the level of data supporting the success of the intervention. Best practices are well-established programmes proven to be effective through rigorous evaluations whereas promising or innovative practices are still in their infancy but show some signs of potential effectiveness in the long run.40,55 While having robust evidence to back the success of the practice is ideal, it is perhaps more important to document the level of evidence available to guide decision-makers who are trying to learn from these practices instead of creating separate labels with distinct criteria. Differentiating practices by the level of evidence may also limit the settings and types of interventions that may be included. Therefore, best practices also encompass promising practices with varying levels of supporting evidence in this review.

Kahan, Goodstadt and Rajkumar summed up six frequently used versions of the best practice approach in health promotion, which is useful for clarifying the definition of best practice in this review.50,60 In accordance with the aim of generating practice-based evidence, this review is limited to best practices that are selected against previously established criteria to illustrate what works in reality. This is preferred over using best practices as a compilation of enabling elements, standards or steps that are detached from any cultural or social context and hence difficult to apply in the real world.59

The importance for best practices to demonstrate effectiveness or positive outcomes with regard to programme objectives in a specific real-life context is supported by 25 reviewed sources.17,41-43,47,48,54-56,58,61-75 Additionally, in order to fulfil their purpose as a learning tool across countries and organisations, best practices should be defined by their potential for adaptation to other settings through consideration of implementation and contextual factors, as agreed by 17 sources.17,41-43,47,49,55-57,64,68,69,71-75

In short, a proposed working definition for best practices is: practices that have shown evidence of effectiveness in improving population health when implemented in a specific real-life setting and are likely to be replicable in other settings. Consequently, the emphasis on real-life implementation also requires evaluation with a focus on contextual and implementation factors as compared to experimental settings.68

Existing frameworks for public health evaluation

With reference to the five areas of public health activities mentioned in the introduction (and elaborated in Appendix 1), only one peer-reviewed article was focused on evaluating public health surveillance while 39 peer-reviewed articles and nine organisational sources offered diverse frameworks for evaluation of health protection and promotion practices. In addition, three peer-reviewed and two organisational sources focused on healthcare services delivery, and five peer-reviewed and seven organisational sources explored health system evaluation. There were no records found for the evaluation of public health research. This is expected due to the search terms used in the literature search to elucidate criteria for the evaluation of practice-based evidence specifically, as opposed to research-based evidence which is beyond the focus of this review.

Regardless of the area of public health activities, the review found a general consensus on the importance of assessing the implementation process as well as short-term and long-term outcomes, as recommended by 36 sources.30,48,63,72,76-107 Monitoring the implementation process strengthens the causal relationship between the intervention and observed outcomes by providing information on the facilitating intermediary links.89 In cases where outcome indicators are unavailable, process evaluation may also act as a provisional indicator of effectiveness in view of its expected impact on outcomes.91 For instance, monitoring the coverage of a target group for a screening programme may be used to gauge the success of the programme on top of outcome data on decreased morbidity and mortality, which may only improve after a long period and are difficult to attribute to a single intervention. On the other hand, monitoring outcomes of public health practices ensures that desired objectives are met and guides resource allocation in favour of effective practices.76

Alternatively, the RE-AIM framework first proposed by Glasgow108 attempts to condense process and outcome evaluation into five dimensions, namely reach, efficacy, adoption, implementation and maintenance. Outcome evaluation involves the measurement of the size of change in a specific desirable outcome (efficacy), while the implementation process is assessed through the percentage of the target population receiving the intervention (reach), the proportion and representativeness of settings taking up the intervention (adoption), the degree to which the intervention is carried out as planned (implementation), as well as the sustainability of the intervention and its effects (maintenance). Each of these domains then contributes to the public health impact of the intervention.
This framework is an extension of Abrams' proposal that the impact of a programme can be equated to the product of its reach and efficacy.109 Since its introduction in 1999, the RE-AIM framework has been widely used to assess various health promotion programmes in peer-reviewed studies.110-120 Manipulating the five dimensions further produces additional criteria such as efficiency, given by (reach × efficacy)/cost of the intervention.110
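The RE-AIM arithmetic above can be made concrete with a short sketch; the reach, efficacy and cost figures below are purely illustrative and not drawn from any reviewed programme.

```python
# Illustrative RE-AIM calculations (hypothetical numbers).
# Impact = reach x efficacy (Abrams); efficiency = (reach x efficacy) / cost.

def impact(reach: float, efficacy: float) -> float:
    """Programme impact as the product of reach and efficacy."""
    return reach * efficacy

def efficiency(reach: float, efficacy: float, cost: float) -> float:
    """Impact achieved per unit cost of the intervention."""
    return impact(reach, efficacy) / cost

# Example: a programme reaching 60% of its target population, with a
# 25% relative improvement in the outcome, at a cost of 10,000 units.
print(impact(0.60, 0.25))              # 0.15
print(efficiency(0.60, 0.25, 10_000))  # impact per unit cost
```

Expressing the dimensions as explicit quantities in this way makes the trade-off visible: a highly efficacious intervention with poor reach can score lower on impact than a modestly efficacious one delivered at scale.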

Another category typically included in public health evaluations is the context. Contextual factors include programme inputs as well as characteristics of the health issue and target community. Assessment of contextual factors ensures that the intervention is relevant to the target group’s needs and circumstances.48,121,122 Furthermore, it also pinpoints factors that may influence the ability of the programme to achieve its desired outcome,85,87,89,92-95,123 such as political environment and available resources. Since a best practice needs to be transferable between settings, consideration of the context during evaluation is useful in facilitating replication of the practice.123 Hence, inclusion of context, process and outcome evaluation criteria is likely to be fundamental for a framework generating practice-based evidence in public health, which is in line with the theory of change and realistic approach to evaluations where analysis of background factors and process descriptions are necessary to explain outcomes in the complex system of public health.123-125

With regard to the programme content, programmes may sometimes be appraised by their objectives, theoretical underpinnings and scope of interventions. Interventions based on theories, such as the health belief model for behavioural interventions, are likely to facilitate positive outcomes.30,63,78,84,86,121,122 A comprehensive intervention tackling both individual and wider health determinants is also aligned with health promotion goals to build an environment that supports healthy lifestyles.83,121

In addition, the quality of evidence illustrating the effectiveness of an intervention is occasionally included as part of the evaluation framework.58,88,126,127 Alternatively, programmes may simply be assessed by the presence of formal formative, process and/or outcome evaluation studies.48,87

It is interesting to note that health system assessments also focus on sustainable and equitable financing of the health system,128,129 an element rarely found in health promotion or healthcare service delivery evaluation. In some cases, fair financing may be subsumed under equity.101,103,104,128-131

Lastly, some of the evaluations found do not allow for assessment of the programme against standards or for comparisons between programmes132-134 and are hence not useful for the selection of best practices. Some reviewed sources also provided individual points of evaluation, highlighting attributes of an ideal public health activity.30,48,58,80,86,87,93,94,126,127,135-139 Appendix 5 summarises the various points of evaluation cited according to the five main categories described earlier, namely context, content, process, outcomes and evaluation.

Previously used methodology for selecting best practices

Seven peer-reviewed articles and 32 organisational sources detailed the process by which best practices were selected or public health practices were evaluated. Understanding the methodology is valuable for the development of a best practice framework, as it affects how the framework will be applied and hence how it should be designed to enhance usability.

Subjectivity at various stages of selection or evaluation is a universal feature across all reviewed sources. Firstly, best practices were identified from submitted cases,41,43,47,54-56,59,66,68,75,88,140-152 or from literature review and experts’ opinion.44,60,61,64,153-156 For the former, programme managers were commonly required to submit a completed template, which was usually narrative and qualitative to accommodate the uniqueness of the practice.59 The range of interventions identified and quality of information used for assessment are hence reliant on the initiative of the programme managers. In the case of evaluations by experts, the programme managers were instead contacted or reviewers visited intervention sites,17,137 and the selection of interventions to be evaluated was limited by the biases and knowledge of the expert reviewers. In one exception, practices were shortlisted from projects related to United Nations agencies.157 As each approach has its pros and cons, it is important for reviewers to report the limitations of their method of choice and any potential bias on the type of practices selected. Subsequently, reviewers appraise the information collected against pre-set criteria to determine if the intervention can be considered a best practice. Hence, the composition of the panel of reviewers and the selection process are also crucial in ensuring valid and reliable selection of best practices. While reviewers are typically branded as experts in the relevant fields, there is often little information on the background of the reviewers. 
Besides professional academics or organisational staff, it has also been suggested that best practices should be assessed by the beneficiaries of the intervention.33,59 This adheres to the recommendations for participatory evaluation in public health and allows direct assessment of the acceptability of the intervention.30 However, it may be difficult to engage the most economically disadvantaged beneficiaries of the intervention in reality and to ensure an equitable representation.33 Regardless, careful consideration of the composition of the panel of reviewers is important to avoid biases due to vested interests and details of the composition should be made transparent.

Next, the selection of best practices or evaluation of public health interventions is typically done via consensus after independent assessment by each reviewer.48,60,88 Alternatively, reviewers may be asked to score a programme on each criterion, with the criterion scores then contributing to the overall score of the programme to facilitate comparisons with other programmes.55,142,145 Either way, the subjective views of each reviewer greatly influence the selection and evaluation outcomes, and a reliable selection framework should thus account for such subjectivity through appropriate criteria and indicators.
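The criterion-scoring approach described above can be sketched as follows. The eight criteria are those identified in this review, but the 0-5 scale, the equal weighting of criteria and the reviewers' scores are hypothetical choices made purely for illustration.

```python
# Minimal sketch of panel-based criterion scoring (hypothetical scale
# and weights; criteria names follow this review's framework).

CRITERIA = ["relevance", "community participation",
            "stakeholder collaboration", "ethical soundness",
            "replicability", "effectiveness", "efficiency",
            "sustainability"]

def panel_score(reviewer_scores: dict[str, list[int]]) -> float:
    """Average each criterion across reviewers, then average the
    criterion means (equal weights assumed) into one overall score."""
    per_criterion = {c: sum(s) / len(s) for c, s in reviewer_scores.items()}
    return sum(per_criterion.values()) / len(per_criterion)

# Three reviewers independently score a candidate practice 4, 3 and 5
# on every criterion (0-5 scale assumed):
scores = {c: [4, 3, 5] for c in CRITERIA}
print(panel_score(scores))  # 4.0
```

A scheme like this makes the panel's subjectivity explicit and auditable: the individual scores, the weighting and the threshold for labelling a practice "best" can all be reported alongside the result, rather than hidden inside an unstructured consensus discussion.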

A few studies attempted to provide more objective evaluation methods. Some users of the RE-AIM model have translated each criterion into a mathematical formula; for instance, the impact of an intervention can be calculated as (reach × efficacy).117 Similarly, Reed proposed a scoring system given by (impact of the intervention × number of people implementing the intervention other than programme staff, or percentage of target population involved).158 While these methods may facilitate comparison between different interventions based on their scores, they may also oversimplify or ignore important qualitative information. Furthermore, some degree of subjective judgement is unavoidable in any evaluation, for instance in weighing the importance of the various criteria used.117 Therefore, it is more important to acknowledge the subjectivity in the evaluation process and incorporate it into the proposed framework for evaluating best practices rather than trying (and failing) to eliminate it fully.

Two sources found by this review also recommended validating best practices through rigorous empirical studies of the causality between intervention and outcomes.40,159 Once again, best practices should be sourced not only from settings conducive to research but also from other real-life settings, where process evaluation may then be used to support any observed correlation between the programme and outcomes in place of empirical causality studies.

Nine reviewed sources further provided considerations for choosing indicators in public health evaluation.76,97,99-101,106,139,160,161 Criteria set the benchmark for evaluating an intervention, while indicators are measurements used to assess achievement of these standards.76 Ideal indicators are valid (accurately reflecting what is being measured), relevant (measuring an important phenomenon), practical and cost-effective given accessible information, sensitive to changes arising from programme implementation, reliable in producing consistent results independent of reviewer or time, comparable across different contexts, easy to interpret and useful for the purpose of selecting best practices.

Criteria used to select best practices

One book, eight peer-reviewed articles and 39 organisational sources presented various criteria for selecting best practices. To ensure that the criteria provide a comprehensive assessment of public health interventions, the criteria were grouped according to the five categories typically used in public health evaluations mentioned earlier, namely context, content, process, outcomes and evaluation. Appendix 6a-e lists the criteria that are cited by each source, with sources arranged by types and databases. Appendix 6a cites peer-reviewed articles and the book, while Appendix 6b cites organisational sources from the database of WHOLIS and the WHO Regional Offices of Africa and the Americas. Similarly, Appendix 6c includes sources from the WHO Regional Office for Europe, UNDP and the World Bank; Appendix 6d cites sources from US CDC, the IGO search engine and hand-searches of reference lists; and Appendix 6e focuses on sources derived from hand-searching reference lists. Appendix 7 then summarises the number of sources citing each criterion.

Almost all sources (44 of 48) agreed that a best practice must be effective and show measurable positive results in achieving pre-defined objectives. This also supports the working definition of a best practice stated earlier, where a best practice must demonstrate what works in reality. Assessing effectiveness is likely to be crucial regardless of the area of public health, as earlier demonstrated. On top of the 48 sources, three additional peer-reviewed sources offered detailed analyses of different ways of evaluating effectiveness. Lengeler assessed individual and community effectiveness,162 while McDonnell proposed that effectiveness can be measured as (programme efficacy × probability that the programme can deliver its intended outcomes), where the latter is dependent on available human resources, infrastructure and the community's access to the programme.163 Lastly, Macdonald argued for inclusion of qualitative process indicators in addition to the final outcome evaluation.164 While Macdonald's claim is supported by the evaluation frameworks described earlier,30,48,63,72,76-107 the work of Lengeler and McDonnell is less relevant to this review due to its emphasis on scientific studies to establish individual effectiveness and programme efficacy respectively, which may not always be feasible. In short, all positive and negative outcomes of an intervention across time should be taken into account when assessing effectiveness. The objectives of the intervention may also suggest potential targets, for instance achievement of more than 90% of the objectives.145
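McDonnell's decomposition of effectiveness can be expressed as a one-line calculation; the efficacy and delivery-probability values below are hypothetical and serve only to illustrate how delivery constraints discount efficacy.

```python
# Sketch of McDonnell's decomposition: effectiveness = programme
# efficacy x probability of delivering intended outcomes.
# All numbers are illustrative, not taken from the cited study.

def effectiveness(efficacy: float, delivery_probability: float) -> float:
    """Real-world effectiveness as efficacy discounted by the chance the
    programme can actually deliver (staffing, infrastructure, access)."""
    return efficacy * delivery_probability

# An intervention with 80% efficacy under ideal conditions, but only a
# 50% chance of full delivery in the field:
print(effectiveness(0.8, 0.5))  # 0.4
```

The multiplicative form captures the review's point that efficacy alone overstates what a programme achieves once implementation constraints in a real-life setting are taken into account.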

Another commonly-cited criterion is programme sustainability (32 of 48 sources). On top of the 48 sources, eleven additional peer-reviewed articles provided definitions of sustainability in public health practices. Sustainability may be seen as a long-term continuation of i) activities through local ownership or incorporation into standard practices, otherwise known as institutionalisation;165 ii) benefits as outlined in the objectives of the intervention, including health improvements or heightened attention on the issue; iii) community or organisational capacity to deliver the intervention; or iv) a combination of all three dimensions.5,166-170 In particular, long-term availability of necessary resources, financial or otherwise, to run the intervention is an important point to consider as it greatly affects the maintenance of the activities and their benefits.170 This is especially crucial for externally supported programmes, as termination of the programme with the end of funding may limit the potential benefits that can be reaped from initial investments and erode the trust that the community placed in public health workers.5 Contextual and programme elements which may enhance sustainability include alignment of the intervention with national goals, political commitment, community participation, stakeholder partnerships and programme evaluation.171-174 In addition, the timing of appraisal is also crucial, with some sources suggesting that a sustainable intervention should continue for a minimum of two54,141,149 or five175 years after its start, or at least one year after the external funding stops.170 Although Stephenson attempted to provide indices to measure sustainability,5 there is no agreed threshold for categorising a practice as sustainable or otherwise.170 Hence, when selecting best practices, it is also important to state the duration of implementation prior to evaluation to inform decision-makers, as well as recognise that sustainability is a continuum where programme elements, benefits and community capacity are maintained to different extents.

Efficiency (24 of 48 sources), or cost-effectiveness, is important in ensuring that scarce resources are used in a prudent and accountable manner, and is commonly defined as the ability to produce optimal results with minimum resources.43,54,154 On top of the 48 sources, four additional peer-reviewed articles gave further examples of assessing efficiency in practice. Where the cost of the intervention is known, cost-effectiveness analysis is commonly performed176,177 and a threshold can then be applied to determine whether the intervention is cost-effective,178 for example by relating the cost per Disability-Adjusted Life-Year (DALY) averted to the per capita Gross Domestic Product (GDP) of the country.179 However, in the absence of cost-effectiveness calculations, judging the efficiency of practices may rely on evidence of wastage avoidance, cost minimisation56,62 or optimal use of locally accessible resources.180
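The GDP-based threshold mentioned above follows the widely cited WHO-CHOICE convention; the worked figures below are hypothetical:

```latex
% Incremental cost-effectiveness ratio (ICER) per DALY averted
\[
  \text{ICER} = \frac{\Delta \text{Cost}}{\Delta \text{DALYs averted}}
\]
% WHO-CHOICE convention for interpreting the ratio:
%   ICER < 1 x GDP per capita  -> highly cost-effective
%   ICER < 3 x GDP per capita  -> cost-effective
% Hypothetical example: an intervention averting one DALY per $200 spent,
% in a country with GDP per capita of $1500, is highly cost-effective.
```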

Potential for replication, or replicability, should also be a main criterion for defining best practices in line with the working definition (24 of 48 sources). Replicability may be defined as the ability of the intervention to continue achieving desired outcomes when adapted to various cultures and settings.54-56,70 Thus, best practices should have key success factors that are independent of the context and available resources.

Next, relevance of the intervention in addressing an important public health issue in the community (21 of 48 sources) depends on the priorities and perceptions of the target community.43,154,155,175 This requires analysis of the disease burden and community profile as well as a needs assessment involving the target community before designing the intervention. Contextual factors such as integration of the intervention into existing structures and its cultural appropriateness may also be considered under relevance.70,155 Thus, awareness of the community and settings enhances relevance of the intervention. Furthermore, such descriptions of the community and setting may contribute to the replicability of the intervention by providing information on contextual factors which may have led to the outcomes observed.

Stakeholder collaboration (17 of 48 sources) and community participation (12 of 48 sources) are two frequently used criteria for selecting best practices and in process evaluation in public health. Both elements are recommended to enhance local ownership,47,54,62 increase the reach of the intervention,148 capitalise on various competencies and incorporate perspectives of the target beneficiaries. Achieving these elements is thus believed to augment effectiveness and sustainability.61,78,181 Furthermore, they are aligned with the principles of public health of being participatory and multi-sectoral, in recognition of the fact that improvement in population health requires the collaborative efforts of more than any single actor.4,42,121 On top of the 48 sources, five peer-reviewed articles on community engagement182-186 and one on stakeholder collaboration187 further contribute to this discussion. Markers of ideal community participation include an appropriate membership and participation process, empowerment of the individuals involved, improved community ties, a synergistic and viable coalition in which new ideas emerge, as well as effective leadership and management. Thus, capacity building of the community (eight of 48 sources) may also be subsumed under community participation as a means through which the latter promotes effectiveness and sustainability of the intervention. On top of the 48 sources, six peer-reviewed articles on capacity building outlined the areas in which empowerment may be observed.188-193 Development of the community's knowledge and skills on health issues and ways to tackle them, community networks, leadership, resource mobilisation, investments, and organisational structures to support delivery of public health activities will facilitate the maintenance of the intervention and its benefits over time and even aid the community in managing other public health concerns in the future.
On the other hand, stakeholder collaboration can be gauged by the synergy achieved, where comprehensiveness and innovation are enhanced by the pooling together of complementary resources and competencies.187 Therefore, community participation and stakeholder collaboration are criteria that both support the attainment of positive outcomes and are themselves important public health goals to be achieved.80 They are also assessed through proxy outcomes such as community empowerment and synergistic cooperation.

Ethical soundness of an intervention (14 of 48 sources) includes respect for an individual’s rights and dignity as well as professionalism by public health workers.41,43,56,154 Reflecting fundamental public health principles, ethical considerations should underpin all activities involving human participants and be made explicit as a criterion for best practices. Furthermore, ethical interventions that do not infringe on an individual’s rights and a community’s norms are more likely to be accepted and utilised by the target group,54 thus enhancing impact through greater reach and adoption.126 Ethical frameworks proposed by eight peer-reviewed articles on top of the 48 sources are also considered. Suggested elements include prevention of harm while ensuring benefits at both individual and population levels, consideration of equity in distribution of benefits and burdens (two of 48 sources), respect for an individual’s autonomy and privacy, informed consent, consciousness of local norms, accountability as well as awareness of vulnerable groups51,194-200 (eight of 48 sources). Specifically, equity may be assessed by the distribution of access, financing and effects201-203 across place of residence, race, occupation, gender, religion, education, socioeconomic status and social capital.204 Davies further suggested eight elements that promote equity, including activities addressing social determinants of health.205 In short, ethical soundness is a basic requirement for any public health intervention and is taken to include equity and social inclusion of vulnerable groups in this review.

Having a theoretical basis underlying public health programmes (11 of 48 sources) and the need for strong evidence of effectiveness (12 of 48 sources) have been mentioned earlier under the chapter on existing frameworks for public health evaluation. While having a theoretical basis is ideal for explaining the logic behind public health interventions, an intervention should not be discounted as ineffective solely because it lacks an underlying theory, as inclusion of contextual and process criteria may already provide information about the mechanisms leading to the observed outcomes.82 Similarly, requiring high-quality evidence may restrict the selection of best practices to only those interventions amenable to empirical studies and with ample resources to conduct such studies. This may again reproduce the drawbacks of using only research-based evidence. Therefore, in order to maximise learning from successful initiatives without a theoretical basis or rigorous evaluation, such initiatives should still be included in the selection of best practices, though with the level of evidence explicitly stated to inform decision-making.

The next criterion to be discussed is the innovative nature of interventions (11 of 48 sources), defined as those implemented in the context for the first time.54,141,148,150 However, an effective intervention should not be ruled out as a best practice simply because it has been applied previously and thus innovation will not be a significant criterion for selecting best practices. While outcome-related criteria such as effectiveness and sustainability are cited at similar frequency in peer-reviewed and organisational sources, it should be noted that the two types of sources differ in their emphasis on other categories of criteria.

In addition, while most peer-reviewed articles included the need for robust supporting empirical evidence and theoretical backing, criteria such as innovation and ethical soundness are rarely mentioned in these articles but frequently used by organisational sources. This supports the need to include views of both public health researchers and field workers to ensure a comprehensive review through consideration of both peer-reviewed and organisational sources.

Lastly, whether a programme is implemented as planned or well-executed (11 of 48 sources) depends on the required resources and reflects the feasibility of the programme in the context of the target group.206 Other less commonly used criteria include extensive reach or scale of intervention (nine of 48 sources), which may penalise small-scale targeted initiatives; mandatory formal evaluation studies (seven of 48 sources), which may be restrictive given the availability of resources in different settings; support from leaders (seven of 48 sources), which can be conceptualised as expressed commitment in public statements, building of relevant infrastructure or budget allocation;207 having clearly defined objectives (six of 48 sources); comprehensiveness (six of 48 sources), such as targeting both determinants of health and environmental factors; integration into local context (five of 48 sources); acceptability (two of 48 sources) and visibility (one of 48 sources). As these criteria were less commonly cited, used without accompanying justification, or act as intermediaries towards effectiveness and sustainability of an intervention, they are not considered essential features of a best practice in this review. Nonetheless, meeting these less commonly cited criteria may further enhance the implementation and effectiveness of the interventions, for instance gaining buy-in and support from community leaders to ensure the intervention is sustained over time.

Proposed evaluation framework for selection of best practices

Based on the literature review of past evaluations of public health interventions and selections of best practices, this paper proposes a framework for selection of best practices in public health as illustrated in Table 1. Contextual and process elements should be considered together with the outcomes of a practice, as they can further direct adaptation to other settings. Eight criteria across context, process and outcomes were chosen based on their widespread application in evaluation frameworks for various areas of public health as well as best practice selection processes: relevance, community participation, stakeholder collaboration, ethical soundness, replicability, effectiveness, efficiency and sustainability. While the three criteria of having a theoretical basis underlying the intervention, showing strong evidence for effectiveness and being innovative were cited by a number of sources, they were excluded from the final framework as they are not critical for identifying a best practice worthy of emulation as discussed earlier.

Keeping in mind the subjective nature of the selection process, the background of reviewers and the process of selection (whether the practice is identified from submitted case studies or by reviewers themselves) should be made transparent to inform decision-makers of potential biases. Sub-points and examples are also developed for each of the eight criteria in the proposed framework to guide reviewers. Indicators specific to the health issue of interest may also be chosen to provide additional measurements to facilitate selection. Structuring the framework as a checklist not only guides the selection process but also facilitates reporting and dissemination by identifying strengths and weaknesses of the practice of interest. It is important to note that besides effectiveness and replicability, best practices may not necessarily exhibit all the listed criteria.71 However, they should not go against any of the listed points. For instance, while best practices may not have demonstrated a causal link with a decrease in mortality, they should not show an increase in mortality instead. To ensure replicability of best practices in other settings, descriptions of the community, context and resources employed, as well as supporting evidence, should ideally be included to guide decision-makers in adapting these best practices to their own contexts. Ultimately, in order for the best practice approach to achieve its purpose as a learning tool, the context-specific nature of programmes must be stressed and elements should be adapted before replication in other settings.33

Lastly, the proposed framework is deliberately kept general to be applicable to any public health action at all levels of implementation, be it individual-, community- or population-based interventions. Depending on the public health issue and types of interventions, the framework may be further fine-tuned to emphasise specific criteria.86 For instance, when applying the framework to health systems, equity and sustainability of the health financing mechanisms may be given greater weight due to their importance.129 The proposed framework should also be updated over time to reflect new priorities and focus of the society and organisation of interest.33

Conclusions

While metrics for assessing empirical studies for the quality of evidence have been extensively examined, the lack of consensus on what constitutes practice-based evidence impedes utilisation of valuable real-world experiences in informing public health interventions. Best practices in public health may be defined as interventions that have been shown to produce desirable outcomes in improving health in real-life settings and are suitable for adaptation by other communities. By consolidating previous best practice selection criteria in secondary literature and comparing them with established public health evaluation frameworks, a framework covering eight criteria (relevance, community participation, stakeholder collaboration, ethical soundness, replicability, effectiveness, efficiency and sustainability) across programme context, process and outcome is proposed in this review to guide the selection of best practices.

Strengths and limitations of study

This review addresses a notable gap in current literature on standards for practice-based evidence in public health by proposing the first framework (Table 1) for the selection of best practices based on an extensive systematic review. Despite the increasing awareness about the benefits of practice-based evidence, there is a lack of consensus on the criteria for assessing such evidence, hence hindering its usability.31 Therefore, the framework suggested in this review provides a much-needed tool for public health workers to tap experiences in the field and evaluate interventions in a logical manner to select noteworthy practices which can then be adapted to other settings as evidence-based practices. Promotion of practice-based evidence widens the scope of evidence beyond research with regard to complex and multi-disciplinary public health work and this novel review is hence a crucial first step in setting standards for generating such evidence.

All literature included in the review was found using the methodology recommended by the Cochrane Collaboration for systematic reviews,208 from formulation of a research question to systematic search and selection of records. As the first systematic review to focus on elucidating criteria for practice-based evidence, this review is therefore a novel attempt to define and conceptualise elements of a best practice in public health in a scientifically sound manner. Furthermore, the use of broad search terms and the search through 17 databases with different focuses and settings aimed to minimise omission of crucial material and increase comprehensiveness and representativeness of included records.

However, the proposed framework was produced from a literature search of the stated databases and is by no means exhaustive or conclusive. Due to time and resource constraints, the search was limited to records in English and articles with full text available online. Therefore, inclusion of more databases and removal of search restrictions may improve the comprehensiveness of this review. In addition, data search, extraction and synthesis were conducted by a single reviewer only. In order to minimise errors, the search of and data extraction from the first database, PubMed, were conducted twice and the results compared to identify any oversight. Nonetheless, having a second independent reviewer would be ideal to prevent individual bias in the literature search and extraction. Due to the heterogeneous nature of the literature included, no quality assessment tool is available to determine the bias of included records, and this may affect the quality of this review. While extracting the theoretical basis for evaluation or selection and distinguishing between peer-reviewed and organisational sources may hint at the quality of the records, it may be useful to update this review when an appropriate quality assessment tool becomes available. Furthermore, as this is the first literature review that attempts to elucidate criteria for evaluating practice-based evidence, details of the methodology, including search terms and databases used, and the results obtained could not be corroborated with any existing study. Lastly, pilot testing of this framework on existing programmes, for instance in evaluating health-related policies across different contexts in the recent movement to promote Health in All Policies in Europe,209 or gathering expert opinion, will also be necessary in the future to determine its usability.

Future research

Future studies on improving best practice reporting, dissemination and adoption will be complementary in maximising the potential of the best practice approach.48 Appropriate dissemination is likely to improve the efficiency of programme development as resources may then be devoted to adoption rather than innovation.73 Suitable research may hence go beyond the boundaries of public health and involve knowledge management theories.46,210 This includes building suitable knowledge sharing platforms and helping decision-makers identify elements that can be replicated to their own settings.

Ultimately, improving the selection, dissemination and transfer processes of best practices can facilitate and promote appropriate use of the best practice approach to generate reliable practice-based evidence which can complement research findings in public health. Inclusion of credible practice-based evidence in informing evidence-based practice is consequently likely to enhance the feasibility of derived interventions and widen the scope of recommended practices. Furthermore, it taps a previously underrated wealth of field-based knowledge and experience that should be valued as highly as, if not more than, physical or financial resources by practitioners attempting to develop effective public health interventions.

Acknowledgments

The author wishes to thank Dr Peter Scarborough and Dr Emma Plugge from the University of Oxford for their invaluable comments on earlier drafts, as well as the World Health Organization Regional Office for Europe for its support.

References

1. 

World Health Organization. Public health. Available from: http://www.who.int/trade/glossary/story076/en/

2. 

Public Health Functions Steering Committee. Public health in America. Available from: http://www.health.gov/phfunctions/public.htm

3. 

C Ramagem, J. Ruales The essential public health functions as a strategy for improving overall health systems performance: trends and challenges since the public health in the Americas Initiative, 2000-2007. World Health Organization. 2008. Available from: http://www.paho.org/PAHO-USAID/index.php?option=com_docman&task=doc_download&gid=10413&Itemid=99999999

4. 

WHO Regional Committee for Europe. Strengthening public health services across the European Region – a summary of background documents for the European Action Plan. 2012. Available from: http://www.euro.who.int/_data/assets/pdf_file/0017/172016/RC62-id05-final-Eng.pdf?ua=1

5. 

A Stevenson, ed. Oxford dictionary of English. 3rd ed. Oxford: Oxford University Press; 2010.

6. 

Evidence-Based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. J Am Med Assoc 1992;268:2420-5.

7. 

DL Sackett, WM Rosenberg, JA Gray. Evidence based medicine: what it is and what it isn’t. BMJ 1996; 312:71-2.

8. 

M Jenicek. Epidemiology, evidenced-based medicine, and evidence-based public health. J Epidemiol 1997;7:187-97.

9. 

R Brownson, J Gurney, G. Land Evidence-based decision making in public health. J Publ Health Manag Pract 1999;5:86-97.

10. 

S Birch. As a matter of fact: evidence-based decision-making unplugged. Health Econ 1997;6:547-59.

11. 

R Brownson, J Fielding, C. Maylahn Evidence-based public health: a fundamental concept for public health practice. Ann Rev Publ Health 2009;30:175-201.

12. 

B Flay, A Biglan, R Boruch. Standards of evidence: criteria for efficacy, effectiveness and dissemination. Prevent Sci 2005;6:151-75.

13. 

N Hill, L Frappier-Davignon, B. Morrison The periodic health examination. Can Med Assoc J 1979;121:1193-254.

14. 

World Health Organization. WHO handbook for guideline development. World Health Organization. 2011. Available from: http://apps.who.int/iris/bitstream/10665/75146/1/9789241548441_eng.pdf

15. 

H Balshem, M Helfand, H Schünemann. GRADE guidelines: 3. rating the quality of evidence. J Clin Epidemiol 2011;64:401-6.

16. 

RE Glasgow, E Lichtenstein, AC Marcus. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health 2003;93:1261-7.

17. 

R Farris, D Haney, D. Dunet Expanding the evidence for health promotion: developing best practices for WISEWOMAN. J Womens Health (Larchmt) 2004;13:634-43.

18. 

LW Green, JM Ottoson. From efficacy to effectiveness to community and back: evidence-based practice vs practice-based evidence. In: RE Glasgow, KMV Narayan, D Meltze, eds. From clinical trials to community: the science of translating diabetes and obesity research, 12-13 Jan. 2004, Bethesda: National Institutes of Health; 2004. pp 15-18.

19. 

L Rychetnik, M Frommer, P Hawe, A. Shiell Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health 2002;56:119-27.

20. 

J Kemm. The limitations of evidence-based public health. J Eval Clin Pract 2006;12:319-24.

21. 

J Walker, E. Bruns Building on practice-based evidence: using expert perspectives to define the wraparound process. Psychiatr Serv 2006;57:1579-85.

22. 

K Thorpe, M Zwarenstein, A Oxman. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. Can Med Assoc J 2009;180:E47-57.

23. 

T Lobstein, B. Swinburn Health promotion to prevent obesity. In: DV McQueen, CM Jones, eds. Global perspectives on health promotion effectiveness. New York: Springer; 2007. pp 125-150.

24. 

J. McKinlay Paradigmatic obstacles to improving the health of populations: implications for health policy. Salud Pública de México 1998;40:369-79.

25. 

LW Green. Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence? Am J Public Health 2006;96:406-9.

26. 

R Glasgow, K. Emmons How can we increase translation of research into practice? Types of evidence needed. Ann Rev Public Health 2007;28:413-33.

27. 

J Leeman, M. Sandelowski Practice-based evidence and qualitative inquiry. J Nurs Scholarsh 2012;44:171-9.

28. 

KM Wilson, TJ Brady, C Lesesne. An organising framework for translation in public health: the knowledge to action framework. Prevent Chronic Dis 2011;8:A46-52.

29. 

H Banta. Considerations in defining evidence for public health. Int J Technol Assess Health Care 2003;19:559-72.

30. 

I Rootman, M Goodstadt, B Hyndman. Evaluation in health promotion. World Health Organization. Report number: 92, 2001.

31. 

EA Baker, Ramirez LK Brennan, JM Claus, G. Land Translating and disseminating research- and practice-based criteria to support evidence-based intervention planning. J Public Health Manag Pract 2008;14:124-30.

32. 

LV Kerkhoff, N. Szlezák Linking local knowledge with global action: examining the Global Fund to Fight AIDS, Tuberculosis and Malaria through a knowledge system lens. Bull World Health Organ 2006;84:629-35.

33. 

E Øyen. A methodological approach to best practices. In: E Øyen, A Cimadamore eds. Best practices in poverty reduction: an analytical framework. London: Zed Books; 2002. pp 1-28.

34. 

T Brannan, C Durose, P John, H. Wolman Assessing best practice as a means of innovation. Local Governm Stud 2008;34:23-38.

35. 

R Cross, A. Iqbal The Rank Xerox experience: benchmarking ten years on. In: A Rolstadas, ed. Benchmarking: theory and practice. New York: Springer; 1995. pp 3-10.

36. 

D Elmuti, Y. Kathawala An overview of benchmarking process: a tool for continuous improvement and competitive advantage. Benchmark Qual Manag Technol 1997;4:229-43.

37. 

A Newmark. An integrated approach to policy transfer and diffusion. Rev Policy Res 2002;19:151-78.

38. 

M Peters, T. Heron When the best is not good enough: an examination of best practice. J Spec Educ 1993;26:371-85.

39. 

S Bendixsen, P. de Guchteneire Best practices in immigration services planning. J Policy Anal Manag 2003;22:677-82.

40. 

Compassion Capital Fund National Resource Centre. Identifying and promoting effective practices. Compassion Capital Fund National Resource Centre. 2010. Available from: http://www.strengtheningnonprofits.org/resources/guidebooks/Identifying%20and%20Promoting%20Effective%20Practices.pdf

41. 

WHO Regional Office for Europe. Best practices in prevention, control and care for drug-resistant tuberculosis. WHO Regional Office for Europe. 2013. Available from: http://www.euro.who.int/_data/assets/pdf_file/0020/216650/Best-practices-in-prevention,control-and-care-for-drugresistant-tuberculosis-Eng.pdf

42. 

N Jetha, K Robinson, T Wilkerson. Supporting knowledge into action: the Canadian best practices initiative for health promotion and chronic disease prevention. Can J Public Health 2008;99:I1-8.

43. 

WHO Regional Office for Africa. Guide for documenting and sharing best practices in health programmes. WHO Regional Office for Africa. 2008. Available from: http://www.afro.who.int/index.php?option=com_docman&task=doc_download&gid=1981

44. 

Joint United Nations Programme on HIV/AIDS. HIV and sexually transmitted infection prevention among sex workers in Eastern Europe and Central Asia. Joint United Nations Programme on HIV/AIDS. 2006. Available from: http://data.unaids.org/pub/Report/2006/jc1212-hivpreveasterneurcentrasia_en.pdf

45. 

Knowledge and Learning Group Africa Region. Innovations in knowledge sharing and learning in the Africa region: retrospective and prospective. The World Bank. 2002. Available from: http://siteresources.worldbank.org/AFRICAEXT/Resources/km2Retrospective.pdf

46. 

J. Van Beveren Does health care for knowledge management? J Knowledge Manag 2003;7:90-5.

47. 

Food and Agriculture Organisation of the United Nations. FAO good practices. Available from: http://www.fao.org/capacitydevelopment/goodpractices/gphome/en/. Accessed May 2014.

48. 

D Albert, R Fortin, A Lessio. Strengthening chronic disease prevention programming: the toward evidence-informed practice TEIP. program assessment tool. Prevent Chron Dis 2013;10:1-11.

49. 

B Kahan, M Goodstadt, E. Rajkumar Best practices in health promotion: a scan of needs and capacities in Ontario. The Centre for Health Promotion, University of Toronto. 1999. Available from: http://www.idmbestpractices.ca/pdf/BP_scan.pdf

50. 

B Kahan, M. Goodstadt An exploration of best practices in health promotion. The Centre for Health Promotion, University of Toronto. 1998. Available from: http://www.idmbestpractices.ca/pdf/Hpincan4.pdf

51. 

M ten Have, ID de Beaufort, J P Mackenbach, A van der Heide. An overview of ethical frameworks in public health: can they be supportive in the evaluation of programs to prevent overweight? BMC Public Health 2010;10:638-48.

52. 

N Jackson, E Waters for the Guidelines for Systematic Reviews in Health Promotion and Public Health Taskforce. Criteria for the systematic review of health promotion and public health interventions. Health Promot Int 2005;20:367-74.

53. 

D Nutbeam, E Harris, W. Wise Theory in a nutshell: a practical guide to health promotion theories. New York: McGraw-Hill; 2010.

54. 

WHO Regional Office for the Americas. Knowledge sharing for health: scaling up effective solutions for improved health outcomes. WHO Regional Office for the Americas. 2012. Available from: http://www.paho.org/sscoop/wp-content/uploads/2012/11/Solutions-and-good-practices-guidelinesENG.pdf

55. 

Advance Africa. Advance Africa’s approach to best practices. Available from: http://advanceafrica.msh.org/tools_and_approaches/Best_Practices/index.html.

56. 

B Oxlund. Manual on best practices HIV/AIDS programming with children and young people. AIDSNET. 2005. Available from: http://www.safaids.net/files/Manual%20on%20Best%20Practices%20with%20Children%20and%20Young%20People_AIDSnet.pdf

57. 

United Nations Children’s Fund. Innovations, lessons learned and good practices. Available from: http://www.unicef.org/innovations/index_49082.html. Accessed May 2014.

58. 

LM Spencer, MW Schooley, LA Anderson. Seeking best practices: a conceptual framework for planning and improving evidence-based practices. Prevent Chron Dis 2013;10:E207-15.

59. 

United Nations Educational Scientific and Cultural Organisation. UNESCO MOST Clearing House. Available from: http://www.unesco.org/most/bpindi.htm. Accessed May 2014.

60. 

R Cameron, M Jolin, R Walker. Linking science and practice: toward a system for enabling communities to adopt best practices for chronic disease prevention. Health Promot Pract 2001;2:35-42.

61. 

F Bull. Review of best practice and recommendations for interventions on physical activity. A report for the Premier’s Physical Activity Taskforce on behalf of the Evaluation and Monitoring Working Group. Western Australian Government. 2003. Available from: https://secure.ausport.gov.au/__data/assets/pdf_file/0011/557696/Review_of_best_practice_and_recommendations_for_interventions_on_physical_activity.pdf

62. 

Association of State & Territorial Dental Directors. Best practice approach reports. Available from: http://www.astdd.org/school-based-dental-sealant-programs-introduction/. Accessed June 2014.

63. 

W Thurston, A Vollmann, D Wilson. Development and testing of a framework for assessing the effectiveness of health promotion. Sozial- und Präventivmedizin 2003;48:301-16.

64. 

L Lloyd. Best practices for dengue prevention and control in the Americas. United States Agency for International Development. 2003. Available from: http://www.ehproject.org/PDF/strategic_papers/SR7-BestPractice.pdf

65. 

WHO Regional Office for the Americas. Working to achieve health equity with an ethnic perspective: what has been done and best practices. 2004. Available from: http://www1.paho.org/English/AD/ethnicity-0ct-04.pdf

66. 

WHO Regional Office for the Eastern Mediterranean. Success stories for community based initiatives. 2005. Available from: http://www.emro.who.int/images/stories/cbi/documents/publications/success-stories/cbi_successtories.pdf

67. 

D McNeil, M Flynn. Methods of defining best practice for population health approaches with obesity prevention as an example. Proc Nutr Soc 2006;65:403-11.

68. 

World Health Organization. Review of best practice in interventions to promote physical activity in developing countries. 2008. Available from: http://www.who.int/dietphysicalactivity/bestpracticePA2008.pdf

69. 

World Health Organization. Implementing best practices in reproductive health: our first 10 years. 2010. Available from: http://www.ibpinitiative.org/images/OurFirstTenYears2010.pdf

70. 

CDC Office for State Tribal Local and Territorial Support. CDC Best Practices Workgroup: definitions, criteria, and associated terms. 2010. Available from: http://www.cdc.gov/niosh/z-draftunder-review-do-not-cite/draftwrt/pdfs/Draft-Best-Practice-Definitions-adnd-Criteria-for-review-9-21-10.docx

71. 

L King, T Gill, S Allender, B Swinburn. Best practice principles for community based obesity prevention: development, content and application. Obes Rev 2011;12:329-38.

72. 

D Hercot, B Meessen, V Ridde, L Gilson. Removing user fees for health services in low-income countries: a multi-country review framework for assessing the process of policy change. Health Policy Plann 2011;26:ii5-15.

73. 

European Monitoring Centre for Drugs and Drug Addiction. Best practice portal. Available from: http://www.emcdda.europa.eu/best-practice. Accessed May 2014.

74. 

Public Health Agency of Canada. Best practice portal. Available from: http://cbpp-pcpe.phac-aspc.gc.ca/interventions/about-best-practices/. Accessed May 2014.

75. 

WHO Regional Office for Europe. Good practices in nursing and midwifery: from expert to expert. 2013. Available from: http://www.euro.who.int/__data/assets/pdf_file/0007/234952/Good-practices-in-nursing-and-midwifery.pdf

76. 

J Bryce, JB Roungou, P Nguyen-Dinh. Evaluation of national malaria control programmes in Africa. Bull World Health Organ 1994;72:371-81.

77. 

C Costongs, J Springett. Towards a framework for the evaluation of health-related policies in cities. Evaluation 1997;3:345-62.

78. 

WHO European Working Group on Health Promotion Evaluation. Health promotion evaluation: recommendations to policy-makers. 1998. Available from: http://apps.who.int/iris/bitstream/10665/108116/1/E60706.pdf

79. 

I de Zoysa, JP Habicht, G Pelto, J Martines. Research steps in the development and evaluation of public health interventions. Bull World Health Organ 1998;76:127-33.

80. 

D Nutbeam. Evaluating health promotion: progress, problems and solutions. Health Promot Int 1998;13:27-44.

81. 

A Ogborne, C Birchmore-Timney. A framework for the evaluation of activities and programs with harm-reduction objectives. Subst Use Misuse 1999;34:69-82.

82. 

E Wimbush, J Watson. An evaluation framework for health promotion: theory, quality and effectiveness. Evaluation 2000;6:301-21.

83. 

D Nutbeam. Health promotion effectiveness. The questions to be answered. In: G Macdonald, ed. The evidence of health promotion effectiveness. Shaping public health in a new Europe. Paris: Jouve Composition & Impression; 2000. pp 233-235.

84. 

G Bauer, J Davies, J Pelikan. Advancing a theoretical model for public health and health promotion indicator development: proposal from the EUHPID consortium. Eur J Public Health 2003;13:107-13.

85. 

A Lee, FF Cheng, L St Leger. Evaluating health-promoting schools in Hong Kong: development of a framework. Health Promot Int 2005;20:177-86.

86. 

C Bollars, H Kok, S Van den Broucke, G Mölleman. European quality instrument for health promotion. European project getting evidence into practice. 2005. Available from: http://ec.europa.eu/health/ph_projects/2003/action1/docs/2003_1_15_a10_en.pdf

87. 

GR Molleman, LW Peters, CM Hosman. Project quality rating by experts and practitioners: experience with Preffi 2.0 as a quality assessment instrument. Health Educ Res 2006;21:219-29.

88. 

J Brug, D van Dale, L Lanting. Towards evidence-based, quality-controlled health promotion: the Dutch recognition system for health promotion interventions. Health Educ Res 2010;25:1100-6.

89. 

F Douglas, D Gray, E van Teijlingen. Using a realist approach to evaluate smoking cessation interventions targeting pregnant women and young people. BMC Health Serv Res 2010;10:49-55.

90. 

Victorian Government Department of Health. Evaluation framework for health promotion and disease prevention programs. 2010. Available from: http://docs2.health.vic.gov.au/docs/doc/AE7E5D59ADE57556CA2578650020BBDE/$FILE/Evaluation%20framework%20for%20health%20promotion.pdf

91. 

A DeGroff, M Schooley, T Chapel, T Poister. Challenges and strategies in applying performance measurement to federal public health programs. Eval Program Plann 2010;33:365-72.

92. 

J Davies, N Sherriff. The gradient in health inequalities among families and children: a review of evaluation frameworks. Health Pol 2011;101:1-10.

93. 

G Payne, D Thompson, C Heiser, R Farris. An evaluation framework for obesity prevention policy interventions. Prevent Chron Dis 2012;9:1-9.

94. 

JE Cohen, EA Donaldson. A framework to evaluate the development and implementation of a comprehensive public health strategy. Publ Health 2013;127:791-3.

95. 

RL Milstein, SF Wetterhall. Framework for program evaluation in public health. CDC Morbidity and Mortality Weekly Report 1999;48:1-40.

96. 

T Cornford, G Doukidis, D Forster. Experience with a structure, process and outcome framework for evaluating an information system. Omega 1994;22:491-504.

97. 

United States Department of Health and Human Services. National healthcare quality report 2005. Available from: http://archive.ahrq.gov/qual/nhqr05/nhqr05.pdf

98. 

A Donabedian. The quality of care: how can it be assessed? J Am Med Assoc 1988;260:1743-8.

99. 

E Kelley, J Hurst. Healthcare quality indicators project conceptual framework paper. Organisation for Economic Co-operation and Development. 2006. Available from: http://www.oecd-ilibrary.org/docserver/download/5l9t19m240hc.pdf?expires=1438495946&id=id&accname=guest&checksum=F575196588ECB6B18DFD56ED42CDF781

100. 

OA Arah, GP Westert, J Hurst, NS Klazinga. A conceptual framework for the OECD health care quality indicators project. Int J Qual Health Care 2006;18:5-13.

101. 

JC Knowles, C Leighton, W Stinson. Measuring results of health sector reform for system performance: a handbook of indicators. Partnerships for Health Reform. 1997. Available from: http://info.worldbank.org/etools/docs/library/122031/bangkokCD/BangkokMarch05/Week2/2Tuesday/S3HealthSysPerformance/MeasuringResultsofHSReform.pdf

102. 

A Handler, M Issel, B Turnock. A conceptual framework to measure performance of the public health system. Am J Public Health 2001;91:1235-9.

103. 

National Health Performance Committee. National health performance framework report. Queensland Health. 2001. Available from: http://www.pc.gov.au/research/completed/health-performance-framework-2001/nphfr2001.pdf

104. 

L Aday, C Begley, D Lairson, R Balkrishnan. Introduction to health services research and policy analysis. In: L Aday, C Begley, D Lairson, R Balkrishnan, eds. Evaluating the healthcare system: effectiveness, efficiency, and equity. Chicago: Health Administration Press; 2004. pp 1-26.

105. 

Canadian Institute for Health Information. Health indicators 2013. 2014. Available from: https://secure.cihi.ca/free_products/HI2013_Jan30_EN.pdf

106. 

M Jee, Z Or. Health outcomes in OECD countries: a framework for health indicators for outcome-oriented policymaking. Organisation for Economic Co-operation and Development. 1999. Available from: http://www.oecd-ilibrary.org/docserver/download/5lgsjhvj7s8r.pdf?expires=1438496104&id=id&accname=guest&checksum=0C0092F72A229E50FE0BBF38CFC9B46E

107. 

D Sosin. Draft framework for evaluating syndromic surveillance systems. J Urban Health 2003;80:i8-i13.

108. 

RE Glasgow, TM Vogt, SM Boles. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 1999;89:1322-7.

109. 

D Abrams, C Orleans, R Niaura. Integrating individual and public health perspectives for treatment of tobacco dependence under managed health care: a combined stepped-care and matching model. Ann Behav Med 1996;18:290-304.

110. 

D King, R Glasgow, B Leeman-Castillo. Reaiming RE-AIM: using the model to plan, implement, and evaluate the effects of environmental change approaches to enhancing population health. Am J Public Health 2010;100:2076-84.

111. 

V Dubuy, K De Cocker, I De Bourdeaudhuij. Evaluation of a workplace intervention to promote commuter cycling: a RE-AIM analysis. BMC Public Health 2013;13:587-97.

112. 

SA Smith, DS Blumenthal. Efficacy to effectiveness transition of an Educational Program to Increase Colorectal Cancer Screening (EPICS): study protocol of a cluster randomized controlled trial. Implement Sci 2013;8:1-11.

113. 

M Allicock, LS Johnson, L Leone. Promoting fruit and vegetable consumption among members of black churches, Michigan and North Carolina, 2008-2010. Prevent Chron Dis 2013;10:E33-41.

114. 

CG Abildso, SJ Zizzi, B Reger-Nash. Evaluating an insurance-sponsored weight management program with the RE-AIM Model, West Virginia, 2004-2008. Prevent Chron Dis 2010;7:A46-57.

115. 

DA Peels, MM van Stralen, C Bolman. Development of web-based computer-tailored advice to promote physical activity among people older than 50 years. J Med Internet Res 2012;14:e39.

116. 

G Dunton, R Lagloire, T Robertson. Using the RE-AIM framework to evaluate the statewide dissemination of a school-based physical activity and nutrition curriculum: Exercise Your Options. Am J Health Promot 2009;23:229-32.

117. 

RE Glasgow, LM Klesges, DA Dzewaltowski. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Educ Res 2006;21:688-94.

118. 

RP Farris, JC Will, O Khavjou, EA Finkelstein. Beyond effectiveness: evaluating the public health impact of the WISEWOMAN program. Am J Public Health 2007;97:641-7.

119. 

N Gyurcsik, D Brittain. Partial examination of the public health impact of the People with Arthritis Can Exercise (PACE®) program: reach, adoption, and maintenance. Publ Health Nurs 2006;23:516-22.

120. 

Virginia Polytechnic Institute and State University. RE-AIM. Available from: http://www.re-aim.hnfe.vt.edu/. Accessed May 2014.

121. 

B Kahan, M Goodstadt. The interactive domain model of best practices in health promotion: developing and implementing a best practices approach to health promotion. Health Promot Pract 2001;2:43-67.

122. 

B Kahan. Welcome to IDM best practices. Available from: http://www.idmbestpractices.ca/idm.php?content=basics-overview. Accessed May 2014.

123. 

T Svoronos, K Mate. Evaluating large-scale health programmes at a district level in resource-limited countries. Bull World Health Organ 2011;89:831-7.

124. 

C Weiss. Nothing as practical as good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families. In: JP Connell, AC Kubisch, LB Schorr, CH Weiss, eds. New approaches to evaluating community initiatives: concepts, methods, and contexts. New York: The Aspen Institute; 1995. pp 65-92.

125. 

J Connell, A Kubisch. Applying a theory of change approach to the evaluation of comprehensive community initiatives: progress, prospects, and problems. The Aspen Institute. 1998. Available from: http://www.dmeforpeace.org/sites/default/files/080713%20Applying%2BTheory%2Bof%2BChange%2BApproach.pdf

126. 

JP Habicht, CG Victora, JP Vaughan. Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-8.

127. 

RJ Stoltzfus, G Pillai. Measuring performance: a strategy to improve programs. J Nutr 2002;132:845S-8S.

128. 

C Murray, J Frenk. A framework for assessing the performance of health systems. Bull World Health Organ 2000;78:717-31.

129. 

World Health Organization. World Health Report 2000. How well do health systems perform? 2000. Available from: http://www.who.int/whr/2000/en/whr00_en.pdf?ua=1

130. 

J Hurst, M Jee-Hughes. Performance measurement and performance management in OECD health systems. Organisation for Economic Co-operation and Development. 2001. Available from: http://www.oecd-ilibrary.org/performance-measurement-and-performance-management-in-oecd-health-systems_5lgsjhvj7rq0.pdf?contentType=%2fns%2fWorkingPaper&itemId=%2fcontent%2fworkingpaper%2f788224073713&mimeType=application%2fpdf&containerItemId=%2fcontent%2fworkingpaperseries%2f18151981&accessItemIds=

131. 

Institute of Medicine Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. United States of America: National Academies Press; 2001.

132. 

HA Truong, CR Taylor, NA DiPietro. The Assessment, Development, Assurance Pharmacist’s Tool (ADAPT) for ensuring quality implementation of health promotion programs. Am J Pharm Educ 2012;76:12-21.

133. 

D Impoinvil, S Ahmad, A Troyo. Comparison of mosquito control programs in seven urban sites in Africa, the Middle East, and the Americas. Health Policy 2007;83:196-212.

134. 

J Øvretveit. Quality in health promotion. Health Promot Int 1996;11:55-62.

135. 

World Health Organization. Managerial process for national health development: guiding principles for use in support of strategies for health for all by the year 2000. World Health Organ 1981. Available from: http://whqlibdoc.who.int/publications/9241800054.pdf

136. 

L González Zapata, R Ortiz Moncada, C Alvarez Dardet. Mapping public policy options responding to obesity: the case of Spain. Obes Rev 2007;8:99-108.

137. 

DO Dunet, PB Sparling, J Hersey. A new evaluation tool to obtain practice-based evidence of worksite health promotion programs. Prevent Chron Dis 2008;5:A118-30.

138. 

OA Arah, NS Klazinga, DMJ Delnoij. Conceptual frameworks for health systems performance: a quest for effectiveness, quality, and improvement. Int J Qual Health Care 2003;15:377-98.

139. 

Commonwealth Fund. First report and recommendations of the Commonwealth Fund’s International Working Group on Quality Indicators: a report to the Health Ministers of Australia, Canada, New Zealand, the United Kingdom, and the United States. 2004. Available from: http://www.commonwealthfund.org/~/media/files/publications/fund-report/2004/jun/first-report-and-recommendations-of-the-commonwealth-funds-international-working-group-on-quality-in/ministers_complete2004report_752-pdf.pdf

140. 

United Nations Development Programme. MDG good practices. 2008. Available from: http://www.undp.org/content/dam/aplaws/publication/en/publications/poverty-reduction/poverty-website/mdg-good-practices/MDGGoodPractices.pdf

141. 

WHO Regional Office for the Americas. First Hispano-American and Inter-american contest of good practices in urbanism and health. Available from: http://www.paho.org/hq/index.php?option=com_content&view=article&id=2442&Itemid=259&lang=en. Accessed May 2014.

142. 

WHO Regional Office for Europe. Report of the meeting on community initiatives to improve nutrition and physical activity Berlin, Germany, 21-22 February 2008. Available from: http://www.euro.who.int/__data/assets/pdf_file/0005/87422/E93702.pdf

143. 

WHO Regional Office for Europe. Health in prisons programme best practice awards. Available from: http://www.uclan.ac.uk/research/environment/projects/who_collaborating_centre.php. Accessed May 2014.

144. 

United Nations Office for South-South Cooperation. What is a southern development solution? Available from: http://academy.ssc.undp.org/GSSDAcademy/SIE/whatis.aspx. Accessed May 2014.

145. 

WHO Regional Office for Europe. Good practice appraisal tool for obesity prevention programmes, projects, initiatives and interventions. 2011. Available from: http://www.euro.who.int/__data/assets/pdf_file/0007/149740/e95686.pdf

146. 

WHO Regional Office for the Americas. Safe motherhood initiative: best practices contest. Available from: http://www.paho.org/ims/index.php?option=com_content&view=article&id=37&Itemid=48&lang=en. Accessed May 2014.

147. 

WHO Regional Office for the Western Pacific. Healthy cities recognition and awards. Available from: http://www.alliance-healthycities.com/htmls/awards/index_awards.html. Accessed May 2014.

148. 

WHO Regional Office for the Americas. Malaria champions of the Americas. Available from: http://www.paho.org/hq/index.php?option=com_content&view=article&id=8531&Itemid=39966&lang=en. Accessed May 2014.

149. 

WHO Regional Office for the Americas. Competition on best practices that integrate equality and equity in gender and interculturalism in health. Available from: http://www.paho.org/hq/index.php?option=com_content&view=article&id=457&Itemid=4262&lang=en. Accessed May 2014.

150. 

European Commission. The European Commission database of good practices, policies and tools in mental health and well-being. Available from: http://ec.europa.eu/health/mental_health/eu_compass/add/index_en.htm. Accessed May 2014.

151. 

United Nations Habitat. Dubai international award for best practices to improve the living environment. Available from: http://unhabitat.org/dubai-international-award-for-best-practices-to-improve-the-living-environment/. Accessed May 2014.

152. 

The independent Expert Review Group. Call for submission of documentation for good practice and obstacles. Available from: http://www.who.int/woman_child_accountability/ierg/reports/call_evidence/en/. Accessed May 2014.

153. 

Joint United Nations Programme on HIV/AIDS. Comfort and hope: 6 case studies on mobilising family and community care for and by people with HIV/AIDS. 1999. Available from: http://data.unaids.org/publications/irc-pub01/jc099-comfort_hope_en.pdf

154. 

Joint United Nations Programme on HIV/AIDS. Collaboration with traditional healers in HIV/AIDS prevention and care in Sub-Saharan Africa. 2000. Available from: http://data.unaids.org/Publications/IRC-pub01/JC299-TradHeal_en.pdf

155. 

Joint United Nations Programme on HIV/AIDS. Innovative approaches to HIV prevention: selected case studies. 2000. Available from: http://www.unaids.org/sites/default/files/en/media/unaids/contentassets/dataimport/publications/ircpub05/jc414-innovappr_en.pdf

156. 

P Gottret, G Schieber, H Waters. Lessons from reforms in low- and middle-income countries: good practices in health financing. The World Bank. 2008. Available from: http://siteresources.worldbank.org/INTHSD/Resources/376278-1202320704235/GoodPracticesHealthFinancing.pdf

157. 

United Nations Development Programme. UNDP Good practices in gender mainstreaming and implementing the Beijing platform for action. Available from: http://www.un.org/womenwatch/resources/goodpractices/guideline.html. Accessed June 2014.

158. 

K Reed, A Cheadle, B Thompson. Evaluating prevention programs with the Results Mapping evaluation tool: a case study of a youth substance abuse prevention program. Health Educ Res 2000;15:73-84.

159. 

SDC Learning and Networking. SDC knowledge management toolkit: good practices. Available from: http://www.sdclearningandnetworking.ch/en/Home/SDC_KM_Tools/Good_Practice. Accessed June 2014.

160. 

T Hambling, P Weinstein, D. Slaney A review of frameworks for developing environmental health indicators for climate change and health. Int J Environ Res Public Health 2011;8:2854-75.

161. 

P Kramers. The ECHI project: health indicators for the European community. Eur J Public Health 2003;13:101-6.

162. 

C Lengeler, RW Snow. From efficacy to effectiveness: insecticide-treated bednets in Africa. Bull World Health Organ 1996;74:325-32.

163. 

S McDonnell, A Yassin, W Brown. Measuring health program effectiveness in the field: an assessment tool. Prehospital Disaster Med 2007;22:396-405.

164. 

G Macdonald, C Veen, K Tones. Evidence for success in health promotion: suggestions for improvement. Health Educ Res 1996;11:367-76.

165. 

RP Saunders, RR Pate, M Dowda. Assessing sustainability of lifestyle education for activity program (LEAP). Health Educ Res 2012;27:319-30.

166. 

MC Shediac-Rizkallah, LR Bone. Planning for the sustainability of community-based health programs: conceptual frameworks and future directions for research, practice and policy. Health Educ Res 1998;13:87-108.

167. 

M Scheirer. Is sustainability possible? A review and commentary on empirical studies of program sustainability. Am J Eval 2005;26:320-47.

168. 

B Jacobs, N Price, S Sam. A sustainability assessment of a health equity fund initiative in Cambodia. Int J Health Plann Manag 2007;22:183-203.

169. 

M Jansen, J Harting, N Ebben. The concept of sustainability and the use of outcome indicators. A case study to continue a successful health counselling intervention. Fam Pract 2008;25:i32-7.

170. 

M Scheirer, J Dearing. An agenda for research on the sustainability of public health programs. Am J Public Health 2011;101:2059-67.

171. 

I Olsen. Sustainability of health care: a framework for analysis. Health Pol Plann 1998;13:287-95.

172. 

E Murray, S Treweek, C Pope. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med 2010;8:63-73.

173. 

DA Forster, M Newton, HL McLachlan, K Willis. Exploring implementation and sustainability of models of care: can theory help? BMC Public Health 2011;11:S8-17.

174. 

S Schell, D Luke, M Schooley. Public health program capacity for sustainability: a new framework. Implement Sci 2013;8:15-23.

175. 

R Levine. Millions saved: proven successes in global health. Center for Global Development. 2004. Available from: http://www.cgdev.org/doc/millions/Millions_Saved_07.pdf

176. 

C Sijbesma, T Christoffers. The value of hygiene promotion: cost-effectiveness analysis of interventions in developing countries. Health Pol Plann 2009;24:418-27.

177. 

V Phillips, B Teweldemedhin, S Ahmedov. Evaluation of program performance and expenditures in a report of performance measures (RPM) via a case study of two Florida county tuberculosis programs. Eval Program Plann 2010;33:373-8.

178. 

L Owen, A Morgan, A Fischer. The cost-effectiveness of public health interventions. J Public Health 2012;34:37-45.

179. 

P Khlangwiset, F Wu. Costs and efficacy of public health interventions to reduce aflatoxin-induced human disease. Food Addit Contam Part A Chem Anal Control Expo Risk Assess 2010;27:998-1014.

180. 

Joint United Nations Programme on HIV/AIDS. The faces, voices and skills behind the GIPA Workplace model in South Africa. 2002. Available from: http://data.unaids.org/Publications/IRC-pub02/JC770-GIPA-SA_en.pdf

181. 

Open Health Institute. Assessment of the best practices in HIV/AIDS harm reduction programs among civilian population and prisoners in the Russian Federation. 2006. Available from: http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2006/09/07/000160016_20060907102635/Rendered/PDF/372080RU0Prisoners0HR0200601PUBLIC1.pdf

182. 

ML Granner, PA Sharpe. Evaluating community coalition characteristics and functioning: a summary of measurement tools. Health Educ Res 2004;19:514-32.

183. 

LT Smith, DB Johnson, E Lamson, M Sitaker. A framework for developing evaluation tools used in Washington State’s Healthy Communities projects. Prevent Chron Dis 2006;3:A64-72.

184. 

J Falisse, B Meessen, J Ndayishimiye, M Bossuyt. Community participation and voice mechanisms under performance based financing schemes in Burundi. Trop Med Int Health 2012;17:674-82.

185. 

PG Szilagyi, LP Shone, AM Dozier. Evaluating community engagement in an academic medical center. Acad Med 2014;89:585-95.

186. 

R Lasker, E Weiss. Broadening participation in community problem solving: a multidisciplinary model to support collaborative practice and research. J Urban Health 2003;80:14-47.

187. 

R Lasker, E Weiss, R Miller. Partnership synergy: a practical framework for studying and strengthening the collaborative advantage. Milbank Quart 2001;79:179-205.

188. 

S Fawcett, A Paine-Andrews, V Francisco. Using empowerment theory in collaborative partnerships for community health and development. Am J Community Psychol 1995;23:677-97.

189. 

P Hawe, M Noort, L King, C Jordens. Multiplying health gains: the critical role of capacity-building within health promotion programs. Health Policy 1997;39:29-42.

190. 

RM Goodman, MA Speers, K McLeroy. Identifying and defining the dimensions of community capacity to provide a basis for measurement. Health Educ Behav 1998;25:258-78.

191. 

HR Yeatman, T Nove. Reorienting health services with capacity building: a case study of the core skills in health promotion project. Health Promot Int 2002;17:341-50.

192. 

F de Groot, N Robertson, B Swinburn, A de Silva-Sanigorski. Increasing community capacity to prevent childhood obesity: challenges, lessons learned and results from the Romp & Chomp intervention. BMC Public Health 2010;10:522-9.

193. 

G Gavriilidis, P Östergren. Evaluating a traditional medicine policy in South Africa: phase 1 development of a policy assessment tool. Global Health Action 2012;5:17271-81.

194. 

PB Berger. AIDS and ethics: an analytic framework. Can Fam Physician 1988;34:1787-92.

195. 

NE Kass. An ethics framework for public health. Am J Public Health 2001;91:1776-82.

196. 

N Baum, S Gollust, S Goold, P Jacobson. Looking ahead: addressing ethical challenges in public health practice. J Law Med Ethics 2007;35:657-67.

197. 

R Kersh, DF Stroup, WC Taylor. Childhood obesity: a framework for policy approaches and ethical considerations. Prevent Chron Dis 2011;8:A93-7.

198. 

M Rae, I Kerridge. Vaccines - but not as we know them: an ethical evaluation of HPV vaccination policy in Australia. Aust N Z J Public Health 2011;35:176-9.

199. 

MT Have, A van der Heide, JP Mackenbach, ID de Beaufort. An ethical framework for the prevention of overweight and obesity: a tool for thinking through a programme’s ethical aspects. Eur J Public Health 2013;23:299-305.

200. 

AH Antommaria. An ethical analysis of mandatory influenza vaccination of health care personnel: implementing fairly and balancing benefits and burdens. Am J Bioeth 2013;13:30-7.

201. 

N Daniels, J Bryant, R Castano. Benchmarks of fairness for health care reform: a policy tool for developing countries. Bull World Health Organ 2000;78:740-50.

202. 

N Daniels, W Flores, S Pannarunothai. An evidence-based approach to benchmarking the fairness of health-sector reform in developing countries. Bull World Health Organ 2005;83:534-40.

203. 

R Davis, D Cook, L. Cohen A community resilience approach to reducing ethnic and racial disparities in health. Am J Public Health 2005;95:2168-73.

204. 

V Welch, P Tugwell, E Morris. The equity-effectiveness loop as a tool for evaluating population health interventions. Revista de Salud Pública 2008;10:83-96.

205. 

J Davies, N Sherriff. Assessing public health policy approaches to level-up the gradient in health inequalities: the gradient evaluation framework. Public Health 2014;128:246-53.

206. 

F Wu, P Khlangwiset. Evaluating the technical feasibility of aflatoxin risk reduction strategies in Africa. Food Addit Contam Part A Chem Anal Control Expo Risk Assess 2010;27:658-76.

207. 

A Fox, A Goldberg, R Gore, T Bärnighausen. Conceptual and methodological challenges to measuring political commitment to respond to HIV. J Int AIDS Soc 2011;14:1-13.

208. 

JPT Higgins, S Green, eds. Cochrane handbook for systematic reviews of interventions. The Cochrane Collaboration. 2011. Available from: http://handbook.cochrane.org/

209. 

F Bert, G Scaioli, MR Gualano, R Siliquini. How can we bring public health in all policies? Strategies for healthy societies. J Public Health Res 2015;4:43-6.

210. 

R Landry, N Amara, A Pablos-Mendes. The knowledge-value chain: a conceptual framework for knowledge translation in health. Bull World Health Organ 2006;84:597-602.

Figure 1.

Results of literature search.

Table 1.

Proposed framework for selection of best practices in public health, with examples for each criterion.

Context

1. Relevant
  • Relevant to the needs of the community (conduct problem analysis and needs assessment of the community prior to programme development; consider perspectives of the target group and stakeholders)
  • Relevant to the setting of the community (describe characteristics of the community and context)
Examples: Evaluate disease burden in the community. Involve target groups and stakeholders in needs assessment. Describe existing programmes and social and cultural perceptions of the disease.

Process

2. Engage the community (community participation)
  • Describe who and how members of the community are involved
  • Empower the community
  • Achieve synergy through community participation in programme development and implementation
Examples: Ensure appropriate representation of target groups, including vulnerable groups. Improve knowledge about the disease in the local community. Give rise to new approaches through inclusion of the community.

3. Involve the right stakeholders (stakeholder collaboration)
  • Ensure appropriate representation of relevant stakeholders
  • Describe who and how stakeholders are involved
  • Achieve synergy through stakeholder collaboration
Example: Give rise to new approaches by pooling the resources and competencies of non-governmental organisations, donors and international organisations.

4. Ethically sound
  • Ensure benefits outweigh harm to individuals and community
  • Distribute access, financing, benefits and harms equitably
  • Demonstrate respect for individual autonomy and privacy
  • Consider vulnerable groups
  • Ensure accountability to community
  • Demonstrate respect for local norms and cultures
Examples: Promote fair distribution of benefits across ethnicities, socioeconomic status and gender. Involve voluntary participants. Target the poor and females. Benefit the local community and do not deplete local resources.

5. Replicable*
  • Require expertise and resources that are generalisable to other settings

Outcomes

6. Effective*
  • Achieve desirable outcomes and improve public health
  • Describe types of supporting evidence available
Examples: Reduce morbidity and mortality, achieve universal access to healthcare, and enhance community awareness. Conduct case-control studies, patient surveys and routine monitoring.

7. Efficient
  • Describe physical, financial and technical resources used
  • Use locally accessible resources
  • Demonstrate minimisation of resource use and wastage
  • Describe types of supporting evidence available
Examples: Describe the expertise of public health workers required and the costs of the intervention. Conduct cost-benefit and cost-effectiveness analyses.

8. Sustainable
  • Demonstrate (potential of) continuation of programme activities through local ownership or institutionalisation
  • Demonstrate (potential of) continuation of benefits of programme
  • Demonstrate (potential of) continuation of community and organisational capacity to deliver programme, including source of funding in the long run
  • State duration of programme since start of implementation
Examples: Train local public health workers to administer the intervention. Ensure community awareness continues after the programme ends. Ensure self-financing of the intervention by the community.

*Replicability and effectiveness are fundamental criteria, in line with the working definition of best practices.



Copyright (c) 2015 Eileen Ng, Pierpaolo de Colombani

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
 