September 10, 2010
Waikiki Prince Kuhio Hotel
Powerful Teaching and Learning: To What Extent Is Instruction Aligned with Educational Reform Efforts?
Duane Baker - The BERC Group
Over the last eight years, members of The BERC Group, Inc. have conducted more than 18,000 classroom observations to determine the extent to which the teaching and learning that take place in classrooms are aligned with the standards-based educational reform movement, which requires all students to meet criterion-referenced standards. During the presentation, Dr. Baker will provide background on data collection procedures, the historical development of the STAR Classroom Observation Protocol, and the findings from the almost decade-long study. Data collected in Hawaii's schools will also be discussed. Outcomes, commentary, and suggested next steps will be provided.
In 2005, Project Ho`omohala (PH) was funded by the Comprehensive Community Mental Health Services for Children and their Families Program to implement a system of care in the Kalihi-Palama (KP) community for transition-aged youth with serious emotional and behavioral challenges. The core components of PH are direct services and evaluation. Evaluation is a critical component of the system of care, tracking the effectiveness of direct services to youth participants and their families. Maintaining retention rates in youth transition programs within such a diverse community both challenges and enriches the work. PH will present on the challenges experienced in maintaining retention and the efforts made to address them, ending with a discussion of the successes and challenges local youth programs experience.
An Evaluation of a Training Program for Korean Teachers of English as a Foreign Language
Curt Hutchison, Hanbyul Jung - University of Hawai'i
This presentation focuses on the evaluation of a Honolulu-based training program for Korean teachers of English as a foreign language. During month-long sessions, visiting Korean middle and high school English instructors attend a series of classes, workshops, and observations of local language instructors. In order to provide intended users with various perspectives on the program’s
general effectiveness, focus group, interview, and survey data were collected from several stakeholder groups, including administrative staff, teachers, workshop leaders, and students. The presentation will address data collection methods, key findings, and the co-development of an action plan with the primary intended users.
Evaluation of a Hoshuukoo, a Community-Based Japanese Weekend Supplementary School: Application of a Practical Participatory Evaluation Approach
Ritsuko Iyoda - University of Hawai'i
This paper reports on an evaluation of a Hoshuukoo, a Japanese weekend supplementary school in the central U.S. A practical participatory evaluation approach was used for part of the evaluation, which was designed in collaboration with various key stakeholders in the program. Data were gathered using multiple methods, including test score analysis, surveys, and teacher interviews. Due to contextual constraints, the focus of the evaluation was to gain a greater understanding of the program rather than to assess participant learning or immediate change. Nevertheless, the findings had local significance, and some of the recommendations resulted in actions to improve program practices.
Only the United States and Canada remain opposed to the United Nations Declaration on the Rights of Indigenous Peoples. Yet the declaration has substantial implications for evaluators in those two countries. Key implications include (a) there should be no privileging of “mainstream” non-indigenous evaluation approaches (e.g., advocacy of the Program Evaluation Standards and of the Guiding Principles for Evaluators, emphasis on written reports, and regarding randomized control trials as a gold standard) over indigenous approaches, and (b) indigenous peoples have the right to determine how to evaluate and to promote professional indigenous evaluators, including those in “mainstream” academies.
Cutting-Edge Techniques to Evaluate an Organization’s Online Presence
Marco Morawec - Kamehameha Schools
Many organizations have a website that represents their public face and supports their mission by providing specific content and tools to their audience. In simple terms, the online presence of any organization is an integral part of its overall strategy. However, very few organizations have defined goals for their online presence or even know what they should achieve online. Evaluating a website through web analytics enables the organization to define those objectives, analyze the outcomes, and develop actionable insights for consistent improvement of the online experience and the services the organization offers to the community.
This presentation uses the H-PEA website as a self-reflecting and real-time online evaluation example, introducing participants to the beauty of web analytics.
The American Evaluation Association is in the process of soliciting member response to the AEA Public Statement on Cultural Competence in Evaluation. The statement is the culmination of four years of work by a dedicated task force and is informed by a wide range of evaluation and cultural experts.
In this presentation, participants will learn about the history of the statement, highlights from its contents, and the process for review, and will be provided with a copy for a more thorough reading. Comments shared by session participants will be collected and forwarded to the Task Force and the AEA Board for consideration in making any edits to the final statement.
Addressing Issues in Evaluating Large Research Initiatives and Centers: Example of the C-MORE Evaluation
Ryan Tolman, Judith K. Inazu - University of Hawai'i
The Center for Microbial Oceanography: Research and Education (C-MORE), a 10-year, $40 million NSF-funded Science and Technology Center, is a six-partner, large research initiative requiring an evaluation. However, the evaluation field has not made much progress in addressing the specific issues involved in evaluating large research initiatives. This paper seeks to address some of these issues by reviewing evaluation methods and models that can be applied to these evaluation efforts. For example, evaluations of large research initiatives like C-MORE should be guided by asking what value is added by the research center structure.
Accreditation agencies advocate that outcomes assessment should be part and parcel of organizational operation, yet a culture of assessment use and learning is not ingrained in academic programs (Banta, 2002). To inform ways of going about assessment, the current multiple-case study examines factors that may facilitate or hinder initiating, planning, implementing, using, and learning from outcomes assessment across five college academic programs. The presentation reports on the first phase of the study, which investigated the organizational readiness factors that affected initial faculty engagement in assessment.
Stakeholder evaluation of the Balanced Transportation Coordinator position in Hawaii County
Lehua Choy, Vickie Ramirez, Nalani Aki, Katie Heinrich - University of Hawai'i
The Healthy Hawaii Initiative, Hawaii State Department of Health, funded an innovative Balanced Transportation Coordinator (BTC) position in the Hawaii County Department of Planning for one year to increase non-motorized transportation. The evaluation of the BTC position sought to determine what value stakeholders found in it and what lessons were learned. Five stakeholders were interviewed, and their responses were analyzed to identify major themes. All stakeholders found value in the position, although a range of contextual factors impeded the BTC's ability to fulfill the major project objectives. The stakeholders also provided recommendations for other agencies considering their own BTC position.
Statistics for Planning School-Randomized Experiments in Hawaii
George Harrison, Paul R. Brandon - Curriculum Research & Development Group, UH Manoa
Group-randomized experiments, such as when schools are randomly assigned to treatment and control groups, are increasingly common. The power analyses conducted when planning these experiments require a sound basis for estimating minimum detectable effect sizes, intraclass correlation coefficients, covariates (e.g., pretest R-squares), and other statistics. We present these statistics for the Hawaii State Assessment reading and mathematics tests for six grades over six years. The results provide an empirical basis for randomized experiments that use student achievement as the dependent variable and can help guide other research or evaluation studies comparing school achievement.
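To illustrate the kind of planning calculation these statistics support, here is a minimal sketch of a minimum detectable effect size (MDES) computation for a two-level school-randomized design, following Bloom's widely cited approximation. The function name and all numeric inputs (number of schools, ICC, R-squared values) are hypothetical placeholders for illustration, not figures from the Hawaii State Assessment study.

```python
import math

def mdes_cluster_randomized(n_schools, students_per_school, icc,
                            r2_between=0.0, r2_within=0.0,
                            p_treatment=0.5, multiplier=2.8):
    """Approximate MDES for a two-level school-randomized trial.

    multiplier ~ 2.8 corresponds to a two-tailed test at alpha = .05
    with 80% power and ample degrees of freedom.
    """
    j, n, p = n_schools, students_per_school, p_treatment
    # Variance of the treatment-effect estimate in effect-size units:
    # the between-school term shrinks with school-level covariates (r2_between),
    # the within-school term with student-level covariates (r2_within).
    variance = (icc * (1 - r2_between) / (p * (1 - p) * j)
                + (1 - icc) * (1 - r2_within) / (p * (1 - p) * j * n))
    return multiplier * math.sqrt(variance)

# Hypothetical scenario: 40 schools, 60 tested students per school, ICC = 0.15,
# and a school-level pretest explaining 70% of between-school variance.
without_pretest = mdes_cluster_randomized(40, 60, icc=0.15)
with_pretest = mdes_cluster_randomized(40, 60, icc=0.15, r2_between=0.7)
print(round(without_pretest, 3), round(with_pretest, 3))  # → 0.359 0.215
```

The example shows why empirically grounded ICCs and pretest R-squares matter: with these placeholder values, adding a strong school-level covariate cuts the MDES from about 0.36 to about 0.22 standard deviations.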
Project Ho'omohala : A three-year evaluation of a community-based youth development project
Ranilo Laygo, Angela T. Hoppe-Cruz, Melodi Wynne, Ziwen Wang - Center on Disability Studies
Project Ho'omohala is an urban Honolulu program serving young people who need support in the transition to adulthood. The evaluation component included baseline interviews, completed within 30 days of enrollment, and follow-up interviews. Comparing baseline data with 6-month and most recent interviews from the same clients, the evaluation team found positive effects both immediately after the first meeting and in a three-year follow-up survey. At follow-up, there was consistent evidence of possible positive effects among those clients. Maslow's Hierarchy of Needs will be used in the poster to define the categories of needs among youth and as a frame of reference for the project evaluation.
Tasting the Soup: Using Evaluation Results to Improve STEM Programs for Native Hawaiian Students
Vinita Ling, Judith Inazu - University of Hawai'i
The Native Hawaiian Science and Engineering Mentorship Program within the College of Engineering at UH Manoa recruits and retains Native Hawaiian and other minority undergraduate students in science and technology. Ongoing refinement of program delivery was achieved by observing selected sponsored activities, administering pre- and post-surveys, holding focus groups, and visiting sites to meet with affiliates. Evaluation results were used by program administrators to make changes to the program. Two key ingredients in this process of "tasting the soup" are the openness and receptivity of program administrators to evaluator recommendations and flexibility in program delivery as a means of attaining objectives.
Cross Site Evaluation: Friend, Not Foe
Nancy Marker - University of Hawai'i
Typically, evaluators groan over the onerous addition of a national cross-site evaluation to their workload: more protocols, surveys, datasets, and work! And what does it really accomplish for our own evaluation and for adapting and improving the program?
Hawaii's Gatekeeper Training Initiative evaluation is an example of how to take a national cross-site evaluation that added six protocols and instruments and "make it work." When Hawaii's youth suicide prevention project team embraced the cross-site evaluation protocol and used its results for our own evaluation needs, the dreaded workload diminished. This poster outlines the steps used to meet evaluation requirements without overburdening the evaluators or program participants.
An Assessment Plan: A Longitudinal Study of Academic Performance at the University of Hawaii at Manoa
Monica Stitt-Bergh, Marlene P. Lowe - University of Hawai'i
This poster will describe the research design of a longitudinal study of 250 University of Hawai‘i at Mānoa freshmen, starting in August 2010. The study will assess student academic proficiency and explore factors that contribute to students’ academic development. The study improves upon UHM’s current assessment practices by focusing on student development over time and investigating students’ perceptions of how they learn, in addition to assessing their levels of achievement. The poster will give examples of survey questions, focus group protocols, and rubrics for coursework evaluation. It will also explain how the results will be used to update and improve UHM’s core curriculum and pedagogy.
Beyond the obvious: mining survey data for underlying factors and correlations
Katherine Tibbetts - Kamehameha Schools
Four years of participant satisfaction data were mined to identify relationships between satisfaction with a summer school program and self-reported impact on school engagement and cultural connectedness.
Evidence-Based Program and Fiscal Evaluation - the Desk Review Process, Department of Education
Jerry Wang, Clyde Igarashi, Donna Fujimoto-Saka - Hawai'i State Department of Education
The desk review model (including a screening tool) was developed and employed to review 94 general-funded programs in the Office of Curriculum, Instruction and Student Support (OCISS), Department of Education. The following five evidence-based criteria were used to rate the program documentation:
1. Data Performance
2. Budget Efficiency
3. Internal and External Monitoring
4. Continuous Improvement
5. Program Effectiveness
Evaluators and Labor Market Information
Carolyn Weygan-Hildebrand - Gawis (Ga-wis’)
This is a cursory look at information revealed in job ads during a period of economic downsizing and increased accountability requirements from program funders. It presents observed labor market information trends such as specialization areas, competencies, education, work experience requirements, and other “hidden” requirements. The “quick and dirty” assessment is anchored in two major sources of job ads, the American Evaluation Association and HireNetHawaii, plus several others that were “accidentally” found during the six-month process. The presentation is a good starting point for discussing how we can help employers communicate their needs, as well as help job and career seekers navigate their paths toward their desired niches.