



For Google Staff
About the Evaluation
CEMSE Evaluation Principles
CEMSE takes on a wide range of evaluation roles using different designs and methodologies. CEMSE's approach to evaluation is based on the following set of principles:
- Evaluation must be grounded in a commitment to conducting rigorous studies of meaningful questions.
Rigorous studies, whether they use qualitative or quantitative approaches to data collection, rely on valid, reliable, and systematic data collection methods. Those methods are of little use, however, if the questions they seek to answer are of little or no consequence to the stakeholder(s) or the field. Meaningful evaluation questions are therefore informed by client and stakeholder needs and interests, and thus contribute to an understanding of a program within its local context. Where appropriate to the program and design, evaluation questions also respond to broader concerns in the field.
- Evaluation designs must provide useful and practical information to support improved practice.
Although evaluation can contribute to the development of theory, unless it provides information that is applicable and important to the clients, it has failed in its major purpose. Historically, evaluators have lamented that their findings, whether part of formative or summative evaluation efforts, often go unused by the immediate audience of stakeholders or any extended one. CEMSE is committed to evaluation designs that result in practical, useful findings and to developing and maintaining relationships with stakeholders that increase the likelihood that those findings will be used.
- Evaluation designs must appropriately match the contexts and conditions of the project and be sensitive to the needs and interests of the stakeholder(s).
The value of an evaluation design comes from the extent to which it is well matched to the circumstances of the project and the needs of project leaders. Only when those needs are defined and those circumstances are accounted for can evaluators generate a useful, appropriate evaluation design.
CEMSE'S Role
CEMSE evaluators take on a range of roles during the evaluation process, from objective observer to project historian. One of CEMSE's primary roles is that of "critical friend." In this role, CEMSE evaluators become part of the project team while maintaining an outsider's point of view. Through regular communication about project activities and findings, CEMSE evaluators embrace a collaborative approach to program improvement.
CEMSE Evaluation of Google CAPE Summer
CEMSE's evaluation of Google's CAPE Summer Program had two strands. The first focused on working with program leaders to clearly articulate the CAPE program model and theory of action. The second focused on evaluating the preparation for, and implementation of, CAPE 2012. The evaluation had both formative and summative components: it provided timely feedback to inform the design and implementation process, determined CAPE's impact, and made recommendations for future changes to the program or its implementation. The evaluation ran from March 1, 2012, through October 31, 2012.
The evaluation focused on three main goals of CAPE Summer: 1) building the capacity to spread the CAPE Summer model; 2) developing CAPE faculty capacity to facilitate CAPE experiences and teach computer science (CS); and 3) developing concrete knowledge and attitudinal outcomes for youth.
With these goals in mind, the evaluation plan targeted seven evaluation questions:
- What is the CAPE program model (for both faculty preparation and development and the youth experience)?
- What is the status of CAPE program implementation?
- To what extent did CAPE faculty develop the capacity to facilitate the CAPE youth experience as intended and develop their own capacities as CS teachers?
- To what extent does the CAPE experience have an impact on students' CS knowledge and skills?
- To what extent does the CAPE experience change youth's attitudes about CS and their expectations for themselves regarding CS?
- What parts of the CAPE program model appear to be essential and which parts can or should be modified?
- To what extent is CAPE positioned to spread its model to other sites?
This evaluation employed a combination of qualitative and quantitative data collection methods to answer the evaluation questions. Data sources included youth questionnaires, interviews, focus groups, and product/artifact reviews; faculty interviews; observations of the weekend summit and CAPE Summer activities; and meetings and interviews with CAPE leadership. The table below illustrates the data sources used to answer each evaluation question.
Evaluation Questions and Data Collection

Evaluation Findings and this Report
CEMSE provided CAPE leadership with periodic summaries of the data collected and with formative recommendations throughout the implementation of CAPE. Summative findings about CAPE and recommendations are presented in this report. The report has been created with password-protected access for different report audiences.