

Hawai'i-Pacific Evaluation Association



H-PEA 2017 Conference Workshops

Register Now!


Developmental Evaluation

Michael Quinn Patton

Thursday, September 21st, 9am-12pm


Developmental evaluation provides evaluative information and feedback to social innovators, and to their funders and supporters, to inform the adaptive development of change initiatives in complex dynamic environments. Developmental evaluation brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data, providing timely feedback to inform and support the development of innovative projects, programs, initiatives, products, organizations, and systems change efforts.

Participants will learn:

  • The five types of Developmental Evaluation (DE)
  • When DE is appropriate
  • The eight DE principles
  • Strengths and weaknesses of the DE approach
  • The niche of DE

Principles-focused Evaluation

Michael Quinn Patton

Thursday, September 21st, 1pm-4pm


Evidence about program effectiveness involves systematically gathering and carefully analyzing data about the extent to which observed outcomes can be attributed to a program’s interventions. It is useful to distinguish three types of evidence-based conclusions:

  1. Single evidence-based program. Rigorous and credible summative evaluation of a single program provides evidence for the effectiveness of that program and only that program.
  2. Evidence-based model. Systematic meta-analysis (statistical aggregation) of the results of several programs, all implementing the same model in a high-fidelity, standardized, and replicable manner and each evaluated (ideally) with randomized controlled trials, determines the overall effectiveness of the model. This is the basis for claims that a model is a “best practice.”
  3. Evidence-based principles. Synthesis of case studies, including both processes and outcomes, of a group of diverse programs or interventions, all adhering to the same principles but each adapting those principles to its own target population within its own context. If the findings show that the principles have been implemented systematically, and analysis connects implementation of the principles with desired outcomes through detailed and in-depth contribution analysis, the conclusion can be drawn that the practitioners are following effective evidence-based principles.

Principles-focused evaluation treats principles as the intervention and the unit of analysis, and designs an evaluation to assess both the implementation and the consequences of principles. Principles-focused evaluation is a specific application of developmental evaluation because principles are the appropriate way to take action in complex dynamic systems. This workshop will be the worldwide premiere of principles-focused evaluation training. Specific examples and methods will be part of the training.

Participants will learn:

  • What constitutes a principle that can be evaluated
  • How and why principles should be evaluated
  • Different kinds of principles-focused evaluation
  • The relationship between complexity and principles
  • The particular challenges, strengths, and weaknesses of principles-focused evaluation


Michael Quinn Patton

Former President of the American Evaluation Association; recipient of both the Alva and Gunnar Myrdal Award from the Evaluation Research Society for "outstanding contributions to evaluation use and practice" and the Paul F. Lazarsfeld Award from the American Evaluation Association for lifetime contributions to evaluation theory. Author of eight evaluation books, including the fourth editions of Utilization-Focused Evaluation (2008) and Qualitative Research and Evaluation Methods (2015); his books have been used in over 500 universities worldwide.

Questions about this workshop may be addressed to mqpatton@prodigy.net.

 

 

Quasi-experimental Designs: When Experimental Designs Are Not Good Enough 


John P. Barile

Thursday, September 21st, 9am-12pm


Evaluators often pursue experimental designs because of their perceived superiority, but these designs are often inappropriate and can lead to misleading conclusions. This workshop will provide an overview of quasi-experimental techniques, give examples of when each technique is most appropriate, and offer a tutorial on how to conduct propensity score matching for the purposes of program evaluation.


Propensity score matching is a specific approach to conducting a quasi-experimental evaluation. It is used to pair individuals, classrooms, or communities that received an intervention with individuals, classrooms, or communities that did not. Propensity score matching techniques use data about why individuals self-select into a program in order to match them to statistically similar individuals who did not receive the program. This workshop will show how one-to-one matching and full matching (matching each case to more than one comparison case) can be conducted using the SPSS and R statistical packages.
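The workshop will demonstrate these steps in SPSS and R; purely as an illustration of the underlying idea, the sketch below shows one-to-one nearest-neighbor matching on an estimated propensity score in Python. The function name, column names, and covariates are hypothetical examples, not part of the workshop materials.

    # Minimal sketch of one-to-one propensity score matching (illustrative only;
    # the workshop itself uses SPSS and R). Names below are hypothetical.
    from sklearn.linear_model import LogisticRegression

    def match_one_to_one(df, treat_col, covariates):
        """Pair each treated case in a pandas DataFrame with the untreated case
        whose estimated propensity score (probability of receiving the program)
        is closest."""
        # 1. Model selection into the program from the observed covariates.
        model = LogisticRegression(max_iter=1000)
        model.fit(df[covariates], df[treat_col])
        df = df.copy()
        df["pscore"] = model.predict_proba(df[covariates])[:, 1]

        treated = df[df[treat_col] == 1]
        control = df[df[treat_col] == 0].copy()

        # 2. Greedy nearest-neighbor matching on the propensity score,
        #    without replacement: each comparison case is used at most once.
        pairs = []
        for idx, row in treated.iterrows():
            nearest = (control["pscore"] - row["pscore"]).abs().idxmin()
            pairs.append((idx, nearest))
            control = control.drop(nearest)
            if control.empty:
                break
        return df, pairs

    # Hypothetical usage: 'received_program' is coded 1/0, and the listed
    # covariates are assumed to drive self-selection into the program.
    # scored, pairs = match_one_to_one(data, "received_program",
    #                                  ["age", "income", "baseline_score"])

Outcomes for the matched pairs can then be compared to estimate the program's impact (see the learning objectives below).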


Participants will:

  • Learn about different quasi-experimental approaches to conducting program evaluations
  • Learn when each approach is most appropriate given available data and program characteristics
  • Learn how to conduct propensity score matching with data in SPSS and R
  • Learn how matched data can be used to determine the impact of a program

John (Jack) P. Barile

Jack Barile is an assistant professor in the Department of Psychology at the University of Hawai‘i at Mānoa. Jack's research concerns ecological determinants of health-related quality of life and program evaluation. This line of research includes the study of individual and neighborhood-level factors associated with social disadvantage and well-being. Prior to coming to UH Mānoa, Jack served as a research fellow at the US Centers for Disease Control and Prevention.

 

Social Action Evaluation Using Photography: From Needs Assessment and Program Development to Implementation and Evaluation 

Anna Smith Pruitt and Joy Agner

Thursday, September 21st, 1pm-4pm


“A picture is worth a thousand words,” so goes the old adage. Able to convey complex notions in a succinct form, photos can be powerful and persuasive tools. Photovoice, a participatory research methodology, uses photography, critical analysis, and group discussion to capture the perspectives and experiences of marginalized people in an effort to give voice to underserved communities and populations. Participants become active researchers at each stage of the research process, providing insight through their analysis of their photos and assisting in the dissemination of findings. This method is especially useful when evaluating programs that work with marginalized groups, youth, and indigenous populations. By giving program participants and stakeholders a voice in the evaluation, Photovoice can lead to more accurate accounts of people’s experiences with the program, resulting in richer and more valuable data.

Although particularly beneficial when conducting evaluations for programs with marginalized groups, Photovoice also can be useful when engaging various stakeholders at different stages of evaluation – from engagement, needs assessment, and program implementation to process and outcome evaluations. This workshop will provide hands-on instruction for using Photovoice in various evaluation projects, with an emphasis on the ultimate goal of Photovoice: to achieve social action through the dissemination of the results.

Participants will:

  • Learn how Photovoice has been used in evaluations and research that work with different underserved communities and populations;
  • Learn how different populations can be engaged through community-based participatory action research (CB/PAR) to achieve project and evaluation goals;
  • Practice analyzing photos in a Photovoice format; and
  • Explore how social action may be facilitated through Photovoice projects.

Participants are encouraged to bring a picture answering the question: 

“What does evaluation look like for me/my organization?”


Format: Interactive with hands-on training in Photovoice

Target audience: Beginners to Intermediate



Anna Smith Pruitt

Anna Smith Pruitt is a doctoral candidate in the Community & Cultural Psychology program at the University of Hawai‘i at Mānoa. Her research focuses on the ways in which individuals and communities impact and are impacted by environmental and contextual factors (including historical, cultural, and social factors). By taking an ecological and feminist intersectional approach, her research attempts to quantify and explain the ways in which individual and contextual factors interact to impact community and individual health and quality of life. In particular, her research interests include university-community partnerships, developing methodologies that capture context, innovative qualitative methods (e.g., Photovoice), and the interaction between research, evaluation, and social policy. Her research has practical applications, with the ultimate goal of social justice and the equitable distribution of resources.

 




Contact Us

Hawai'i-Pacific Evaluation Association

P.O. Box 283232, Honolulu, HI 96828

info@h-pea.org

H-PEA is a tax-exempt charitable organization under Section 501(c)(3) of the Internal Revenue Code and is eligible to receive tax-deductible contributions.


