Student Study Site for Agency-Based Program Evaluation
Lessons From Practice
Stephen A. Kapp and Gary R. Anderson

Learning from Journal Articles


Chapter 1: Making the Case for Program Evaluation

Melanie Britt Jephson
The Purposes, Importance, and Feasibility of Program Evaluation in Community-Based Early Intervention Programs
Journal of Early Intervention, Jan 1992; vol. 16: pp. 252 - 261.
http://jei.sagepub.com/cgi/reprint/16/3/252?ijkey=lxX93QnL74UBQ&keytype=ref&siteid=spjei

Abstract:
The purpose of this study was to gather information on service providers' perceptions related to program evaluation in community-based early intervention programs. The directors of 61 programs in Texas responded to a 22-item questionnaire. Respondents ranked eight program evaluation purposes, rated the importance and feasibility of five types of program evaluation on a Likert-type scale, and indicated on a checklist factors potentially hindering each type. Results indicated that program improvement should be the main reason for conducting program evaluation. Evaluating program goals and child progress were considered the most feasible strategies; evaluating program quality and evaluating family progress were perceived as most difficult. Although numerous factors hinder program evaluation efforts, a lack of appropriate methods and measurement instruments was perceived as the greatest constraint. Implications of these results for training and practice are discussed, and recommendations are made for future research.

***

Michael J. Holosko
What Types of Designs are We Using in Social Work Research and Evaluation?
Research on Social Work Practice, published online ahead of print, Jun 2009.
http://rsw.sagepub.com/cgi/rapidpdf/1049731509339586v1?ijkey=y48v1JvUT54Fo&keytype=ref&siteid=sprsw

Abstract:
This article addresses a void in the literature about social work research and evaluation (R&E) designs, in particular related to the quality of its published work. Data were collected by reviewing three empirically oriented journals, Research on Social Work Practice, Journal of Social Service Research, and Social Work Research, over three publication years: 2005, 2006, and 2007. A total of N = 329 articles were content analyzed according to: research versus nonresearch, designs used, design objectives, sample sizes, primary statistics used, and outcomes. Main findings were (a) social work’s R&E is uniquely characterized by a cohort of nonresearch studies, which assist in understanding our empirically published work; (b) the most frequently used designs were preexperimental (82.2%) and the least frequently used were experimental (2.3%); (c) design objectives were equally dispersed across exploration, variable relationships, instrument development, and program/evaluation; (d) primary statistics used were parametric (82.2%); and (e) 96.7% of the studies specified outcomes within them. Implications are directed toward better understanding the context in which social work R&E is conducted, not apologizing for the designs one uses, and how one can and should strengthen study designs to offset these concerns.

***

Michael Quinn Patton
The Challenges of Diversity in Evaluation: Narrow Versus Expansive Perspectives
Science Communication, Sep 1998; vol. 20: pp. 148 - 164.
http://scx.sagepub.com/cgi/reprint/20/1/148?ijkey=mJnhquSsUznRs&keytype=ref&siteid=spscx

Abstract:
As program evaluation has developed over the last twenty years into a viable profession, its central challenge has become to define what it means to be an evaluator. This debate is fueled less by traditional divisions between academic and service-related professionals, or disagreements over methodologies, than by sharply different visions of the evaluator's moral and political role, derived from the profession's historic roots in principles of social justice and concern for individual rights.

Chapter 2: Steps in Program Evaluation

Michael Quinn Patton
A World Larger than Formative and Summative
American Journal of Evaluation, Jun 1996; vol. 17: pp. 131 - 144.
http://aje.sagepub.com/cgi/reprint/17/2/131?ijkey=gkHVfx5GqzG2o&keytype=ref&siteid=spaje

Abstract:
Patton continues the debate by identifying three arenas of evaluation practice in which the formative/summative dichotomy appears limited: knowledge-generating evaluations aimed at conceptual rather than instrumental use; developmental evaluation; and use of evaluation processes to support interventions or empower participants. In so doing, the essence of "evaluation" is more broadly defined, and the impact of harsh criticism on the listener is demonstrated through personal example.

***

Robert L. Johnson, Marjorie J. Willeke, and Deila J. Steiner
Stakeholder Collaboration in the Design and Implementation of a Family Literacy Portfolio Assessment
American Journal of Evaluation, Sep 1998; vol. 19: pp. 339 - 353.
http://aje.sagepub.com/cgi/reprint/19/3/339?ijkey=fAhtWigpTH.rA&keytype=ref&siteid=spaje

Abstract:
Collaborative, participatory, and empowerment forms of evaluation advocate the inclusion of stakeholders in decision-making roles in the evaluation process; however, little in the literature describes the involvement of stakeholders in the design and implementation of evaluative tools for data collection. This case study describes the collaborative process and the lessons learned when the staff of a family literacy program and an evaluator collaborated to design and implement a portfolio assessment that was used to collect program evaluation information over a two-year period. The program, Even Start, provides integrated education and human services to meet the literacy needs of participants. The evaluation offered opportunities to collaborate with stakeholders (the program coordinator and family educators) in each step of the creation and implementation of the portfolio system.

***

Sheryl A. Scott and Scott Proescholdbell
Informing Best Practice With Community Practice: The Community Change Chronicle Method for Program Documentation and Evaluation
Health Promotion Practice, Jan 2009; vol. 10: pp. 102 - 110.
http://hpp.sagepub.com/cgi/reprint/10/1/102?ijkey=RD5luo/okmbK6&keytype=ref&siteid=sphpp

Abstract:
Health promotion professionals are increasingly encouraged to implement evidence-based programs in health departments, communities, and schools. Yet translating evidence-based research into practice is challenging, especially for complex initiatives that emphasize environmental strategies to create community change. The purpose of this article is to provide health promotion practitioners with a method to evaluate the community change process and document successful applications of environmental strategies. The community change chronicle method uses a five-step process: first, develop a logic model; second, select outcomes of interest; third, review programmatic data for these outcomes; fourth, collect and analyze relevant materials; and, fifth, disseminate stories. From 2001 to 2003, the authors validated the use of a youth empowerment model and developed eight community change chronicles that documented the creation of tobacco-free schools policies (n = 2), voluntary policies to reduce secondhand smoke in youth hangouts (n = 3), and policy and program changes in diverse communities (n = 3).

Chapter 3: Ethics and Program Evaluation: Applying a Code of Ethics to Field-Based Research

Jean L. Pettifor
Ethics and Social Justice in Program Evaluation: Are Evaluators Value-free?
Canadian Journal of School Psychology, Jan 1995; vol. 10: pp. 138 - 146.
http://cjs.sagepub.com/cgi/reprint/10/2/138?ijkey=IA0R7WmoracLw&keytype=ref&siteid=spcjs

Abstract:
Program evaluations can be methodologically correct, comply with stated ethical principles, and yet be morally wrong because they violate concepts of social justice. Literature is reviewed on values in evaluation and relevant professional codes of ethics and standards. Issues of changing attitudes in society, who is the primary client, power differentials, vested interests, vulnerabilities, and ethical dilemmas are discussed. A Canadian Code of Ethics for Psychologists and A Code of Ethics for the Canadian Association of School Psychologists are seen as strong supports for psychologist evaluators in making ethical decisions that respect the rights and welfare of vulnerable populations as well as contribute to a better society. It is important for values affecting program evaluators and program evaluation to be openly recognized and for evaluators to have guidelines for negotiating the ethical dilemmas that arise.

***

Sherri N. Sheinfeld and Gary L. Lord
The Ethics of Evaluation Researchers: An Exploration of Value Choices
Evaluation Review, Jun 1981; vol. 5: pp. 377 - 391.
http://erx.sagepub.com/cgi/reprint/5/3/377?ijkey=kKREjheqvLYLo&keytype=ref&siteid=sperx

Abstract:
This exploratory study examines (1) the values of evaluation research practitioners in response to ethical statements demanding a value choice and (2) the relationship between evaluator value choices and five hypothetical value dimensions relevant to more mature groups of professionals. Preliminary findings indicate that scaling, comparison, and analysis of evaluator value choices are feasible and may identify whatever consensus exists. Value conflicts exist among evaluation practitioners, particularly around more esoteric concepts such as sharing and client loyalty versus distributive justice. The implications of multiple value conflicts for the further growth of evaluation as a profession are discussed.

***

Einat Peled and Ronit Leichtentritt
The Ethics of Qualitative Social Work Research
Qualitative Social Work, Jun 2002; vol. 1: pp. 145 - 169.
http://qsw.sagepub.com/cgi/reprint/1/2/145?ijkey=PnG.cOIvlcai2&keytype=ref&siteid=spqsw

Abstract:
A study cannot be a good study unless proper ethical standards have been maintained. This article examines ethical thinking and practice in qualitative social work research. A review of a randomly selected sample of articles published in social work journals in the past decade was conducted, centered around four main issues: (a) prevention of harm; (b) empowerment-related aspects of the research process; (c) research-related benefits for participants and others; and (d) researchers’ technical competence. Our findings suggest that, as a general trend, ethical considerations are marginal in most phases of the studies that are reported in our journals. This raises questions as to the meaning of ‘proper ethical standards’ in qualitative social work research and as to the extent research ethics are regarded as important by researchers and journal editors in our field.

Chapter 4: Ethical Challenges for Evaluators in an Agency Setting: Making Good Choices

Ian F. Shaw
Ethics in Qualitative Research and Evaluation
Journal of Social Work, Apr 2003; vol. 3: pp. 9 - 29.
http://jsw.sagepub.com/cgi/reprint/3/1/9?ijkey=5u2Cw/l1z3oaU&keytype=ref&siteid=spjsw

Abstract:
• Summary: The article approaches questions of research ethics with three emphases: first, the process of research; second, ethical questions raised by qualitative research; and third, precedent and stimulation from the work of writers outside the usual boundaries of social work.

• Findings: The ethics of qualitative research design pose distinctive demands on principles of informed consent, confidentiality and privacy, social justice, and practitioner research. Fieldwork ethics raise special considerations regarding power, reciprocity and contextual relevance. Ethical issues raised by the analysis and dissemination of qualitative enquiry emphasize questions concerning narrative research, outcomes and justice, and the utilization of research.

• Applications: Social work needs a culture of ethical awareness, a review of ethical approval, an awareness of the ethical issues posed by practitioners' involvement in evaluative research, and an understanding of the ethical dimensions of different parts of the research process.

***

Donna M. Mertens and Pauline E. Ginsberg
Deep in Ethical Waters: Transformative Perspectives for Qualitative Social Work Research
Qualitative Social Work, Dec 2008; vol. 7: pp. 484 - 503.
http://qsw.sagepub.com/cgi/reprint/7/4/484?ijkey=qT7INOM7Xx40M&keytype=ref&siteid=spqsw

Abstract:
Given the commitment expressed by the social work community towards the furtherance of social justice as reflected not only in their ethical codes, but also in their historical legacy and current statements of purpose, the question of how research can contribute to the enhancement of human rights and social change is particularly relevant for the ethical conduct of qualitative social work research. Based on literature from ethics and social work research and the Handbook of Social Research Ethics (Mertens and Ginsberg, forthcoming), the intersection of advocacy and research is examined from a transformative stance, revealing that strict adherence to the codes and/or regulations as defined by governments, professional associations, and ethics boards is fraught with tensions with regard to such issues as informed consent, confidentiality, and beneficence. In order to investigate topics that are controversial (e.g. pedophilia, drug use) and involve participants who may be stigmatized, the researcher's role may need to be reframed as a member of a team, with differential responsibilities assumed by each team member. This article examines potential ethical alternatives in which the researcher can partner with communities for collection, analysis, and interpretation of data.

***

R.Kevin Grigsby and Heidi L. Roof
Federal Policy for the Protection of Human Subjects: Applications to Research on Social Work PracticeResearch on Social Work Practice, Oct 1993; vol. 3: pp. 448 - 461.
http://rsw.sagepub.com/cgi/reprint/3/4/448?ijkey=vMPVMS2oexYsU&keytype=ref&siteid=sprsw

Abstract:
Federal policy regarding the protection of human subjects in research has led to the creation of institutional review boards (IRBs) at every institution that receives federal funds for research. The function of the IRB is to review research that involves human subjects to ensure that this research is completed in an ethical manner. Social work research undertaken by researchers at federally funded institutions using human subjects and aiming to build knowledge that is generalizable is subject to IRB review, as is any research that is not specifically exempted from IRB oversight by the law. Social work practitioners and researchers who use research designs to evaluate practice effectiveness should comply with the ethical standards of the profession and may be subject to the standards specified in federal policy. The relationships among social work research, the IRB, and the evaluation of social work practice are examined in light of the federal policy for protecting human subjects. Guidelines are given as to the types of research and evaluation that fall under the purview of the IRB.

Chapter 5: Agencies and Academics: The Social and Political Context of Program Evaluation

Laurie Stevahn, Jean A. King, Gail Ghere, and Jane Minnema
Establishing Essential Competencies for Program Evaluators
American Journal of Evaluation, Mar 2005; vol. 26: pp. 43 - 59.
http://aje.sagepub.com/cgi/reprint/26/1/43?ijkey=hZ/rAfn9ocNkg&keytype=ref&siteid=spaje

Abstract:
This article presents a comprehensive taxonomy of essential competencies for program evaluators. First, the authors provide a rationale for developing evaluator competencies, along with a brief history of the initial development and validation of the taxonomy of essential evaluator competencies in King, Stevahn, Ghere, and Minnema (2001). Second, they present a revised version of that taxonomy and describe the revision process. Third, a crosswalk accompanying the taxonomy indicates which competencies address standards, principles, and skills endorsed by major evaluation associations in North America. Finally, the authors identify future needs related to the taxonomy, including the need for validation research, a shared understanding of terms, and the construction of descriptive rubrics for assessing competence.

***

Michael Morris
A Nightmare in Elm City: When Evaluation Field Experiences Meet Organizational Politics
Evaluation Review, Feb 1990; vol. 14: pp. 91 - 99.
http://erx.sagepub.com/cgi/reprint/14/1/91?ijkey=sz45WFPZRGQ9o&keytype=ref&siteid=sperx

Abstract:
Despite the growing attention that has been directed in recent years to the politics of evaluation, there has been little detailed analysis of the implications of this discussion for the design and supervision of students' field experiences in program evaluation. This article uses a case study of one such experience to examine some of the political issues and decisions that instructors face in this area. It is concluded that a more fully developed analytical framework is needed to address the political dynamics involved in evaluations that serve as training experiences for students.

***

Sue A. Kaplan, Neil S. Calman, Maxine Golub, Charmaine Ruddock, and John Billings
Fostering Organizational Change Through a Community-Based Initiative
Health Promotion Practice, Jul 2006; vol. 7: pp. 181S - 190S.
http://hpp.sagepub.com/cgi/reprint/7/3_suppl/181S?ijkey=8NxInk0S.pxEw&keytype=ref&siteid=sphpp

Abstract:
Program funders and managers are increasingly interested in fostering changes in the policies, practices, and procedures of organizations participating in community-based initiatives. But little is known about what factors contribute to the institutionalization of change. In this study, the authors assess whether the organizational members of the Bronx Health REACH Coalition have begun to change their functioning and role with regard to their clients and their staff and in the broader community, apart from their implementation of the funded programs for which they are responsible. The study identifies factors that seemed to contribute to or hinder such institutional change and suggests several strategies for coalitions and funders that are seeking to promote and sustain organizational change.

***

Emil J. Posavac
Inhouse Health-Care Program Evaluators: Their Role and Training
Personality and Social Psychology Bulletin, Mar 1982; vol. 8: pp. 159 - 167.
http://psp.sagepub.com/cgi/reprint/8/1/159?ijkey=XIedicSwh/2Yk&keytype=ref&siteid=sppsp

Abstract:
A survey of 58 in-house program evaluators in health-care settings revealed that evaluators are largely satisfied with their work and their roles in these facilities. This finding suggests that the faculty of graduate programs can in good faith continue to encourage students to prepare to become program evaluators, since program evaluation seems to be a desirable field to enter. Survey respondents suggested that research methods courses be retained; however, the type of evaluations done by the evaluators surveyed suggests that methods of large-scale data management ought to become an important part of the evaluator's training.

Chapter 6: Cultural Competency and Program Evaluation

Luba Botcheva, Johanna Shih, and Lynne C. Huffman
Emphasizing Cultural Competence in Evaluation: A Process-Oriented Approach
American Journal of Evaluation, Jun 2009; vol. 30: pp. 176 - 188.
http://aje.sagepub.com/cgi/reprint/30/2/176?ijkey=nYKJO4JNn.dag&keytype=ref&siteid=spaje

Abstract:
This paper describes a process-oriented approach to culturally competent evaluation, focusing on a case study of an evaluation of an HIV/AIDS educational program in Bulawayo, Zimbabwe. We suggest that cultural competency in evaluation is not a function of a static set of prescribed steps but is achieved via ongoing reflection, correction, and adaptation. The aim of these processes is to attain the "best fit" possible between evaluation goals, methods, and cultural context. Three main ingredients in a process-oriented approach to culturally competent evaluation are discussed: collaboration, reflective adaptation, and contextual analysis. In addition, since evaluators face constraints set by funders and other stakeholders, we suggest that cultural competence is best viewed as a continuum. An evaluator's goal should be to "move across the continuum" in order to achieve the highest level of cultural competency possible given the unique parameters of every evaluation.

***

Daniel J. Kruger, Susan Morrel-Samuels, Loretta Davis-Satterla, Barbara J. Harris-Ellis, and Amy Slonim
Developing a Cross-Site Evaluation Tool for Diverse Health Interventions
Health Promotion Practice, published online ahead of print, Dec 2008.
http://hpp.sagepub.com/cgi/rapidpdf/1524839908324784v1?ijkey=CcjYjl.8JR5Tg&keytype=ref&siteid=sphpp

Abstract:
The Prevention Research Center of Michigan provided technical assistance for the evaluation of 10 projects funded by the Michigan Department of Community Health’s (MDCH) Health Disparities Reduction Program. These projects varied considerably in focus, methodology, geographical coverage, and populations served. The authors developed a cross-site evaluation tool to complement the internal evaluations of the projects. The tool contains four sections based on priorities identified by MDCH: evidence-based practice, research-based learning/evaluation (including process, impact, and outcomes indicators), cultural competence, and sustainability. Recognizing the diversity of programmatic efforts and organizational evaluation capacity, the authors sought to enable each project to create the best evaluation possible given the resources and data available. Each section contains a range of components from basic questions to more advanced evaluation techniques. The instrument attempts to use the highest quality of information available for each project. This evaluation tool can be used by programs with diverse goals and methodology.

***

Richard H. Dana, Joan Dayger Behn, and Terry Gonwa
A Checklist for the Examination of Cultural Competence in Social Service Agencies
Research on Social Work Practice, Apr 1992; vol. 2: pp. 220 - 233.
http://rsw.sagepub.com/cgi/reprint/2/2/220?ijkey=dJUVudcIGkQ6c&keytype=ref&siteid=sprsw

Abstract:
Multicultural services are being provided by social service agencies in the absence of any clearly identified criteria for culturally competent practice. This article describes the development of a checklist of agency characteristics that are believed to represent cultural competence. The checklist content was derived by sampling articles from a compilation of relevant literature. This literature described existing services for minority groups and provided case examples of specific programs. Systematic procedures were used to select articles, abstract characteristics from these articles, and cluster these characteristics. A preliminary form of the checklist contains items related to agency practices, available services, relationship to ethnic community, training, and evaluation. Pilot applications in social service programs provided evidence for observer reliability and concurrent validity. A discussion suggested some needed revisions in this checklist and provided a context for the implied checklist definition of cultural competency.

Chapter 7: Program Definition: Using Program Logic Models to Develop a Common Vision

Ralph Renger and Allison Titcomb
A Three-Step Approach to Teaching Logic Models
American Journal of Evaluation, Dec 2002; vol. 23: pp. 493 - 503.
http://aje.sagepub.com/cgi/reprint/23/4/493?ijkey=JV1DhR3T8DYmA&keytype=ref&siteid=spaje

Abstract:
Developing a logic model is an essential first step in program evaluation. Our experience has been that there is little guidance to teach students how to develop a logic model that is true to its intended purpose of providing a clear visual representation of the underlying rationale that is not shrouded by including the elements of evaluation. We have developed a three-step approach that begins with developing the visual representation of the underlying rationale, central to which is the identification of Antecedent conditions. Step 2 ensures that program activities Target antecedent conditions, while Step 3 focuses on Measurement issues, depicting indicators and objectives for outcomes being included in the evaluation plan. We have coined this method of teaching the ATM approach. We hope that teachers of evaluation will find the ATM approach useful in the form presented here, or that it will at least stimulate thought about how to adapt the approach to meet individual teaching needs.

***

Wilfreda E. Thurston, Jennifer Graham, and Jennifer Hatfield
Evaluability Assessment: A Catalyst for Program Change and Improvement
Evaluation & the Health Professions, Jun 2003; vol. 26: pp. 206 - 221.
http://ehp.sagepub.com/cgi/reprint/26/2/206?ijkey=p27v.XrmIWjVc&keytype=ref&siteid=spehp

Abstract:
Using a local cross-cultural health service program as a framework, the authors describe the process of an evaluability assessment (EA) and illustrate how it can be a catalyst for program change. An EA is a process that improves evaluation. The key product was a logic model, which traces the links between objectives, activities, and outcomes. Four key insights emerged. First, the distinction of who was included and excluded in the target population, originally ambiguous, was clearly defined. Second, through the development of the logic model, staff members were able to analyze their goals and assumptions and critically explore possible gaps between expected outcomes and activities. Third, the EA enabled reflection on and clarification of both process and outcome measures. Finally, global goals were pared down to better match the project capacity. Developing an evaluability assessment was a cost-effective way to collaborate with staff to develop a clearer, more evaluable project.

***

Janene D. Fluhr, Roy F. Oman, James R. Allen, Marilyn G. Lanphier, and Kenneth R. McLeroy
A Collaborative Approach to Program Evaluation of Community-Based Teen Pregnancy Prevention Projects
Health Promotion Practice, Apr 2004; vol. 5: pp. 127 - 137.
http://hpp.sagepub.com/cgi/reprint/5/2/127?ijkey=gDgZikENwy24o&keytype=ref&siteid=sphpp

Abstract:
The purpose of this article is to demonstrate a model for collaboration between program providers and program evaluators. The article describes how university-based evaluators, a state health department, and local program providers collaborated to evaluate 12 projects implementing commercially developed teenage pregnancy prevention (TPP) programs in school settings. Approximately 2,200 students participate annually in the programs. Program evaluation staff and local program providers worked together to construct logic models that helped guide the intervention and evaluation design. The local providers also participated in training sessions, conducted by the evaluation team, to increase their understanding and skills related to program evaluation methods. Student-level outcomes related to knowledge, attitudes, skills, and behaviors, as well as an assessment of curriculum fidelity, were included in the evaluation. The result of this collaborative model has been a quality program evaluation for the projects while maintaining community input regarding program improvements that reflect local population needs.

Chapter 8: Program Description: Evaluation Designs Using Available Information

Robert C. Saunders and Craig Anne Heflinger
Integrating Data from Multiple Public Sources: Opportunities and Challenges for Evaluators
Evaluation, Jul 2004; vol. 10: pp. 349 - 365.
http://evi.sagepub.com/cgi/reprint/10/3/349?ijkey=HXCn1lDWW9QpQ&keytype=ref&siteid=spevi

Abstract:
This article aims to inform evaluators about issues involved in using and integrating administrative databases from public agencies. With the growing focus on monitoring and oversight of public programs for health, mental health, and substance abuse problems, existing data sets in public agencies have become an important source of evaluation and planning information. The focus of this article is on the methods used to find, integrate, and analyze multiple existing databases. Primary challenges that confront the evaluator in identifying and accessing data sources and in addressing the technical issues involved are discussed.

***

Robin Lin Miller, Barbara J. Bedney, and Carolyn Guenther-Grey
Assessing Organizational Capacity to Deliver HIV Prevention Services Collaboratively: Tales from the Field
Health Education & Behavior, Oct 2003; vol. 30: pp. 582 - 600.
http://heb.sagepub.com/cgi/reprint/30/5/582?ijkey=w70OYrGZUrK.g&keytype=ref&siteid=spheb

Abstract:
Collaborative efforts between university researchers and community entities such as citizen coalitions and community-based organizations to provide health prevention programs are widespread. The authors describe their attempt to develop and implement a method for assessing whether community organizations had the organizational capacity to collaborate in a national study to prevent HIV infection among young men who have sex with men and what, if any, needs these institutions had for organizational capacity development assistance. The Feasibility, Evaluation Ability, and Sustainability Assessment (FEASA) combines qualitative methods for collecting data (interviews, organizational records, observations) from multiple sources to document an organization's capacity to provide HIV prevention services and its capacity-development needs. The authors describe experiences piloting FEASA in 13 communities and the benefits of using a systematic approach to partnership development.

***

Christine L. Salisbury, Wayne Crawford, Deborah Marlowe, and Patricia Husband
Integrating Education and Human Service Plans: The Interagency Planning and Support Project
Journal of Early Intervention, Oct 2003; vol. 26: pp. 59 - 75.
http://jei.sagepub.com/cgi/reprint/26/1/59?ijkey=q0uU9PUOEzVzs&keytype=ref&siteid=spjei

Abstract:
Individual service/support plans from several different agencies were integrated to improve coordination and service delivery, and address difficulties encountered by families whose children are served by multiple agencies. Components of the Interagency Planning and Support model are described, and data on the development, adoption, and utilization of the Collaborative Support Plan (CSP) are presented from a sample of 34 families and 49 providers. Analysis of project level, parent, provider, and administrative data revealed the approach was successful in producing an integrated service plan that was valued by participants and was determined by parents to be better than previous service planning experiences. Factors affecting the success of development, adoption, and utilization of this innovative approach are described.

Chapter 9: Evaluation Design: Options for Supporting the Use of Information

Jenifer Cartland, Holly S. Ruch-Ross, Maryann Mason, and William Donohue
Role Sharing Between Evaluators and Stakeholders in Practice
American Journal of Evaluation, Dec 2008; vol. 29: pp. 460 - 477.
http://aje.sagepub.com/cgi/reprint/29/4/460?ijkey=sTdA8U8CBoWCY&keytype=ref&siteid=spaje

Abstract:
In the past three decades, program evaluation has sought to more fully engage stakeholders in the evaluative process. But little information has been gathered from stakeholders about how they share in evaluation tasks and whether role sharing leads to confusion or tensions between the evaluator and the stakeholders. This article reports findings from surveys and interviews with 20 evaluator and project director (lead stakeholder) pairs to explore how they share each other's roles in practice. In this study, sharing roles between evaluators and project directors generally was the norm among study participants but varied by the orientation of the evaluator (academic, program, or client). For some, there was tension and confusion in the role sharing of evaluators and stakeholders, but it was typically resolved early on in cases where evaluators brought strong communication skills to the project. Where these skills were not present, the tensions did not resolve consistently.

***

Gary J. Skolits, Jennifer Ann Morrow, and Erin Mehalic Burr
Reconceptualizing Evaluator Roles
American Journal of Evaluation, Sep 2009; vol. 30: pp. 275 - 295.
http://aje.sagepub.com/cgi/reprint/30/3/275?ijkey=hBAN.Gvu/b1gE&keytype=ref&siteid=spaje

Abstract:
The current evaluation literature tends to conceptualize evaluator roles as a single, overarching orientation toward an evaluation, an orientation largely driven by evaluation methods, models, or stakeholder orientations. Roles identified range from a social transformer or a neutral social scientist to that of an educator or even a power merchant. We argue that these single, broadly construed role orientations do not reflect the multiple roles evaluators actually assume as they complete the activities encompassing an external evaluation. In contrast to the current literature, this article suggests that typical evaluation activities create functional demands on evaluators, and that evaluators respond to these demands through a limited number of specified evaluator roles. This depiction of a set of specific multiple evaluator roles, generated in response to particular evaluation activities and their associated demands, has implications regarding how evaluation is conceptualized, practiced, and studied. This article concludes with a discussion of these implications.

***

Diane Hart, Gabi Diercks-O'Brien, and Adrian Powell
Exploring Stakeholder Engagement in Impact Evaluation Planning in Educational Development Work
Evaluation, Jul 2009; vol. 15: pp. 285 - 306.
http://evi.sagepub.com/cgi/reprint/15/3/285?ijkey=VTLbSovXrztFg&keytype=ref&siteid=spevi

Abstract:
This article presents a case study of engaging stakeholders in the early stages of an impact evaluation of educational development work in a UK university. The rationale for undertaking participative impact evaluation is outlined in relation to the national and local context. The aim is to contribute to wider knowledge about appropriate methodology to lead to a better understanding of change processes in learning and teaching. We outline how stakeholder engagement in evaluation in this context has been influenced by the Aspen Institute's 'Theories of Change' approach, and how we interpreted and applied it in the context of a grant scheme for educational development work. Our experience is discussed in relation to previous learning about the application of the approach, in particular in the health sector. This has highlighted implications for future work, not least that it should be informed by a more appropriate theoretical framework for exploring the complexity of evaluation in this context.

***

Lynne Huffman, Cheryl Koopman, Christine Blasey, Luba Botcheva, Kirsten E. Hill, Amy S. K. Marks, Irene Mcnee, Mary Nichols, and Jennifer Dyer-Friedman
A Program Evaluation Strategy in a Community-Based Behavioral Health and Education Services Agency for Children and Families
Journal of Applied Behavioral Science, Jun 2002; vol. 38: pp. 191 - 215.
http://jab.sagepub.com/cgi/reprint/38/2/191?ijkey=cDeXHp.2d9wdY&keytype=ref&siteid=spjab

Abstract:
Evaluation research and outcomes measurement in the arena of behavioral health services for children must be adapted for the community agency setting. Through evaluation research, it is possible to address service goals as well as more traditional academic research goals. This article examines a variety of activities that have been implemented to evaluate children’s behavioral and educational services in a Northern California non-profit community agency. It is noted that there are multiple formats for collecting information from and providing comments to children’s parents, their clinicians, and program administration staff, all of which can be used to effectively address service-focused evaluation research goals. Challenges to doing scientifically rigorous research in a community setting require additional considerations regarding organizational culture and structure. Based on the experiences of the authors and the experiences of others, the article describes general principles that can guide evaluation research and outcomes measurement with children and their families in the community health agency setting.

Chapter 10: Group Designs and Methods

Luba Botcheva, Catherine Roller White, and Lynne C. Huffman
Learning Culture and Outcomes Measurement Practices in Community Agencies
American Journal of Evaluation, Dec 2002; vol. 23: pp. 421 - 434.
http://aje.sagepub.com/cgi/reprint/23/4/421?ijkey=jLCL3JzTGd/ng&keytype=ref&siteid=spaje

Abstract:
The present study is a first step in examining learning culture and outcomes measurement practices as indicators of community agencies’ readiness for the implementation of research-based evaluation. Representatives from Northern California community agencies serving children and youth (n = 25) completed surveys, which included questions about agency demographics, outcomes measurement practices, and learning culture. Results indicate that, although there is an awareness of the importance of outcomes evaluation, most agencies lack the resources for its systematic implementation. They express interest in learning more about outcomes measurement techniques and program evaluation, and they report needing help in building internal capacity for evaluation. The newly developed Assessing Learning Culture Scale revealed that an underlying set of beliefs, norms, and behaviors characterizes the learning culture of community agencies. These attitudes and beliefs are positively related to systematic data collection efforts and external funding, while attitudes and beliefs that indicate absence of a learning culture are correlated with sporadic data collection efforts and less external funding. Findings indicate that learning culture is an important factor for both the implementation of systematic evaluation efforts and the successful procurement of external funding. They also suggest that systematic evaluation efforts can serve as a change agent for creating a culture that values learning within the organization.

***

Elicia J. Herz, Janet S. Reis, and Linda Barbera-Stein
Family Life Education for Young Teens: An Assessment of Three Interventions
Health Education & Behavior, Jan 1986; vol. 13: pp. 201 - 221.
http://heb.sagepub.com/cgi/reprint/13/3/201?ijkey=r.CTk4i5w/8Ho&keytype=ref&siteid=spheb

Abstract:
The impact of three variations of a family life education (FLE) program for 172 inner-city, junior-high-level students was investigated. Variations in exposure time, instructional methods, and teacher quality led to the classification of each intervention on a general intensity dimension. Separate pretest-posttest nonequivalent comparison group designs were utilized to assess program impact along seven knowledge and attitudinal dimensions. Survey results revealed that, in comparison to no-treatment groups, the more intensive the program (a) the greater the gains in knowledge about reproductive physiology, contraception, and the consequences of teen pregnancy and parenthood (especially among experimental group females); and (b) the more birth control methods participants became familiar with over time. Changes in personal acceptance of premarital intercourse and perceived responsibility for contraception were observed only in the study examining the most intensive treatment. The results of the evaluations point to the combined importance of instructional methods, teacher quality, and in-class exposure time for producing change in young adolescents' knowledge of and attitudes toward sexuality. Further potential for the impact of school-based sex education programs on knowledge and attitudes is discussed within the broader context of the young adolescent's social environment.

***

Mark S. Umbreit
Crime Victims Confront Their Offenders: The Impact of a Minneapolis Mediation Program
Research on Social Work Practice, Oct 1994; vol. 4: pp. 436 - 447.
http://rsw.sagepub.com/cgi/reprint/4/4/436?ijkey=t8F5if3aZjr3k&keytype=ref&siteid=sprsw

Abstract:
Social workers are playing an increasingly active role in the emerging field of mediation. This article presents the findings of an evaluation of a victim offender mediation program. The study is based on a quasi-experimental design involving 516 interviews with 441 crime victims and juvenile offenders, including pre- and postmediation interviews and involving two comparison groups. The vast majority of victims and offenders experienced the mediation process and outcome (restitution agreement) as fair and were satisfied with the program. Significantly greater victim satisfaction and perception of fairness, as well as higher restitution completion rates by offenders, were found.

Chapter 11: Evaluation Design: Qualitative Designs and Applications

Patricia Flynn Weitzman and Sue E. Levkoff
Combining Qualitative and Quantitative Methods in Health Research with Minority Elders: Lessons from a Study of Dementia Caregiving
Field Methods, Aug 2000; vol. 12: pp. 195 - 208.
http://fmx.sagepub.com/cgi/reprint/12/3/195?ijkey=LiEAZo7FONH7Y&keytype=ref&siteid=spfmx

Abstract:
The merits of combining qualitative and quantitative methods are well known. While used often in evaluation research, a combined methods approach is rarely used in health studies with minority elders. This approach can be particularly useful for overcoming theoretical and recruitment problems specific to health research with minority elders. A cross-cultural study of family caregiving for dementia-affected elders is presented to show how issues of rigor, theory building, language, and cultural adaptation of diagnostic tools can be effectively dealt with using combined methods. Specifically, the authors found qualitative data valuable in ensuring the cultural appropriateness of quantitative measures and in confirming causal relationships to which quantitative data pointed. They also found applying quantitative data collection techniques to qualitative data collection to be useful in theory building and in overcoming some of the reliability problems associated with qualitative data.

***

Ian Shaw
Qualitative research and outcomes in health, social work and education
Qualitative Research, Apr 2003; vol. 3: pp. 57 - 77.
http://qrj.sagepub.com/cgi/reprint/3/1/57?ijkey=4CYd.9eG9ffRw&keytype=ref&siteid=spqrj

Abstract:
The purpose of this article is to outline ways in which qualitative research has a contribution to make to research on outcomes in health, social work and education. The main questions are contextualized through a general consideration of the relationship between quantitative and qualitative methods, especially in relation to evaluative research. In the main part of the article, I draw some conclusions regarding the contribution that qualitative methodology can make to outcomes research. I illustrate how qualitative research can contribute indispensably to outcomes research in four ways: design solutions; sensitivity to the micro-processes of practice and programmes; applications of symbolic interactionism and ethnomethodology; and qualitative data analysis.

***

Michael Quinn Patton
Two Decades of Developments in Qualitative Inquiry: A Personal, Experiential Perspective
Qualitative Social Work, Sep 2002; vol. 1: pp. 261 - 283.
http://qsw.sagepub.com/cgi/reprint/1/3/261?ijkey=f2GetkDnDFhIA&keytype=ref&siteid=spqsw

Abstract:
The publication of the third edition of Qualitative Research and Evaluation Methods offers the author an opportunity to reflect back over two decades of developments in qualitative inquiry. Major developments include: the end of the qualitative-quantitative debate; the flowering of diverse and competing approaches within qualitative inquiry; the increased importance of mixed methods; the elaboration of purposeful sampling approaches; increasing recognition of the creativity at the center of qualitative analysis; the emergence of ever more sophisticated software to facilitate qualitative analysis; and new ethical challenges in the face of the potential impacts of qualitative inquiry on both those studied and those engaged in the inquiry.

Chapter 12: Consumer Satisfaction

Andrea M. Carroll, Karen Vetor, Sara Holmes, and Katherine P. Supiano
Ask the consumer: An innovative approach to dementia-related adult day service evaluation
American Journal of Alzheimer's Disease and Other Dementias, Sep 2005; vol. 20: pp. 290 - 294.
http://aja.sagepub.com/cgi/reprint/20/5/290?ijkey=jROwbax0obO2M&keytype=ref&siteid=spaja

Abstract:
Historically, family caregivers have been considered the "consumers" when evaluating respite programs for persons with dementia offered by adult day service (ADS) centers. The purpose of this article is to describe a unique evaluation of ADS conducted directly with persons with dementia. Seventeen persons who regularly attended the Silver Club, an ADS program associated with the University of Michigan Turner Geriatric Clinic, were interviewed by an independent, trained interviewer using a single group, one-time, cross-sectional administration of a consumer satisfaction survey. Fifteen persons were able to complete the interview successfully. The implication of this evaluation is that when specially designed procedures are used, persons with dementia are capable of contributing usable data to consumer satisfaction surveys. Including the voice of this vulnerable population improves the quality of an agency's overall evaluation process and supports the basic philosophy of ADS to preserve the self-worth, independence, and dignity of cognitively impaired individuals.

***

David J. Kolko
Individual Cognitive Behavioral Treatment and Family Therapy for Physically Abused Children and their Offending Parents: A Comparison of Clinical Outcomes
Child Maltreatment, Nov 1996; vol. 1: pp. 322 - 342.
http://cmx.sagepub.com/cgi/reprint/1/4/322?ijkey=FNnAUkFZaV.pw&keytype=ref&siteid=spcmx

Abstract:
Few studies have evaluated short-term psychosocial treatments with physically abused school-aged children and their offending parents or families. This study compares the treatment outcomes of 55 cases that were randomly assigned to individual child and parent cognitive behavioral therapy (CBT) or family therapy (FT) with those who received routine community services (RCS). Measures of child, parent, and family dysfunction and adjustment were collected from both participants and supplemented with official social service records to evaluate the efficacy of treatment through 1-year follow-up. Compared with RCS, CBT and FT were associated with improvements in child-to-parent violence and child externalizing behavior, parental distress and abuse risk, and family conflict and cohesion. All three conditions reported several improvements across time. One parent participant each in CBT and FT and three in RCS were found to have engaged in another incident of physical maltreatment after treatment had begun. No differences between CBT and FT were observed on consumer satisfaction or maltreatment risk ratings at termination. The findings of this evaluation provide additional support for the continued development and evaluation of individual and family treatments involving child victims of physical abuse.

***

Jay L. Lebow
Client Satisfaction With Mental Health Treatment: Methodological Considerations in Assessment
Evaluation Review, Dec 1983; vol. 7: pp. 729 - 752.
http://erx.sagepub.com/cgi/reprint/7/6/729?ijkey=TtufseWpElbCI&keytype=ref&siteid=sperx

Abstract:
This article critically assesses the evaluation of consumer satisfaction in mental health treatment settings. Methodological problems addressed include uniformity myths, inclusion of items not measuring satisfaction, ambiguity in response alternatives, lack of precision in the use of terminology, failure to distinguish dissatisfaction and lack of satisfaction, failure to sufficiently probe, poor psychometric practice, the absence of accepted measures, failure to identify norms for satisfaction, lack of control over procedure, sampling bias, biasing responses, the lack of variability in responses, and primitive design, analyses, and reporting. Consumer satisfaction emerges as an important indicator of the quality of care, but one that must be interpreted with caution.

Chapter 13: Dissemination: Spreading the News

Rosalie T. Torres, Hallie S. Preskill, and Mary E. Piontek
Communicating and Reporting: Practices and Concerns of Internal and External Evaluators
American Journal of Evaluation, Feb 1997; vol. 18: pp. 105 - 125.
http://aje.sagepub.com/cgi/reprint/18/1/105?ijkey=QjhaNA17oAD4I&keytype=ref&siteid=spaje

Abstract:
This study investigated internal and external evaluators' practices and concerns about communicating and reporting evaluation findings. Approximately three-quarters (72%) of a random sample of American Evaluation Association members responded to a survey on this topic. Most of those responding: (1) adhere to traditional reporting formats; (2) are only moderately satisfied with their communicating and reporting efforts; (3) found that insufficient time and political/organizational complexity impede success in communicating and reporting; and (4) describe effective practice as typically entailing high stakeholder involvement. Internal evaluation was found to be not only equally as prevalent as external evaluation, but different in relation to certain communication and reporting practices.

***

Joanne G. Carman
Nonprofits, Funders, and Evaluation: Accountability in Action
The American Review of Public Administration, Jul 2009; vol. 39: pp. 374 - 390.
http://arp.sagepub.com/cgi/reprint/39/4/374?ijkey=Cm4OqyJCKb2TM&keytype=ref&siteid=sparp

Abstract:
This article examines the extent to which different types of funders are asking nonprofit organizations for evaluation and performance measurement data, and describes the many ways in which nonprofit organizations are responding to these requests. The picture that emerges is decidedly mixed, illustrating a range of behaviors that challenges the current perception that most, if not all, funders are asking nonprofit organizations for more evaluation and performance measurement data. The data collected during this study show that nonprofit organizations receiving considerable funding from the federal government and the United Way engage in program evaluation and performance measurement to a greater extent than nonprofit organizations that receive more of their funding from state and local governments, foundations, and other sources. Furthermore, the extent to which nonprofit organizations are subjected to external monitoring and descriptive reporting requirements also varies according to the type and amount of funding.

***

Brenda M. Joly
Writing Community-Centered Evaluation Reports
Health Promotion Practice, Apr 2003; vol. 4: pp. 93 - 97.
http://hpp.sagepub.com/cgi/reprint/4/2/93?ijkey=izIFoJmad0a0c&keytype=ref&siteid=sphpp

Abstract:
Documenting the process, results, impact, and effectiveness of community-based health promotion programs is an important part of any evaluation. This article provides information on how to write community-centered evaluation reports for program stakeholders. Specific prerequisites and principles are provided. In addition, several tips for increasing the use of the results are highlighted.

***

Russell E. Glasgow
Critical Measurement Issues in Translational Research
Research on Social Work Practice, Sep 2009; vol. 19: pp. 560 - 568.
http://rsw.sagepub.com/cgi/reprint/19/5/560?ijkey=UFhEew9SWkPfo&keytype=ref&siteid=sprsw

Abstract:
This article summarizes critical evaluation needs, challenges, and lessons learned in translational research. Evaluation can play a key role in enhancing successful application of research-based programs and tools as well as informing program refinement and future research. Discussion centers on what is unique about evaluating programs and policies for implementation impact (or potential for dissemination). Central issues reviewed include the importance of context and local issues, robustness and external validity issues, multiple levels of evaluation, implementation fidelity versus customization, choosing evaluation designs to fit questions, and who participates and the characteristics of success at each stage of program recruitment, delivery, and outcome. The use of mixed quantitative and qualitative methods is especially important, and the primary redirection needed is a focus on the questions of decision makers and potential adoptees rather than those of research colleagues.