16: Program Evaluation
Standard of Practice: The mentoring program creates and implements a formal evaluation and data collection plan that addresses tracking of implementation fidelity, mentoring relationship quality, relevant participant outcomes, and program costs.
Practices Supporting this Standard
The program has a written evaluation plan.
This plan will look different across a variety of program models and settings, but it is important that every program create a written plan that outlines the efforts it will take to better understand its adherence to policies and procedures, the quality and consistency with which mentoring services are delivered, the costs of those services, and the relationships and outcomes experienced as part of the program. Effective evaluation plans include information on:
- What important questions the program is trying to answer, such as testing key aspects of the theory of change (see Element 2) and measuring short-, intermediate-, and long-term outcomes, implementation fidelity, mentoring relationship quality, and program costs.
- Who will provide information to address those questions.
- When each piece of information will be collected.
- How information will be collected, both in terms of method (e.g., survey, focus group) and process (e.g., which staff members are responsible and how they will implement data collection processes, such as preparing participants and generating buy-in for data collection, obtaining consent from respondents, and scheduling survey administration).
- How the program plans to analyze and secure the data, for example, who will be responsible, what types of resources will be needed, and how the evaluation will address each question.
- How the program will share findings with stakeholders (e.g., funders, community partners, board members) and participants.
- How staff will obtain feedback on those findings and integrate it into program operations.
- Information about any data sharing agreements needed for accessing data from external sources (e.g., academic records, juvenile justice data).
The program engages in consistent, ongoing data collection and analysis to address the questions outlined in its evaluation plan.
The program should use the evaluation plan to guide its collection, analysis, and sharing of data on an established timeline. To accomplish this, programs will need to dedicate resources to data collection and analysis, including training and supporting staff as needed.
The program shares evaluation findings with stakeholders.
This includes program participants, staff, board members, funders, and other community partners. When sharing findings, programs should consider:
- generating a formal report that is accessible to all stakeholders;
- creating other summaries of the results or infographics that can be shared broadly; and
- creating a process for program participants and staff to reflect on the findings and offer suggestions for program improvements.
The program uses findings to make improvements in its services on a regular basis.
Programs should determine how findings and feedback from key stakeholders will be used to improve the program and more effectively meet client expectations and needs. Ideally, these research-to-practice improvement efforts will be led by the program’s advisory committee or another ad hoc group with authority to recommend program changes to leadership. Programs should review newly collected data regularly and consistently so that improvement efforts remain responsive.
Because there is tremendous diversity in how and where mentoring is delivered to young people, here we offer additional practices and recommendations related to this Element for some common mentoring contexts. Readers should note that the categories below may overlap (e.g., a peer mentoring program in a school, or a Boys & Girls Club offering a group mentoring program on-site) and should read all that may be relevant to their work. The following recommendations can support program evaluation and continuous improvement efforts for some typical mentoring models and settings.

GROUP MENTORING MODELS
Group mentoring programs should largely follow the previously noted evaluation practices, but there are a few nuances they may wish to consider when building their evaluation strategy:
PEER MENTORING MODELS
As noted above for group models, peer-to-peer mentoring models will likely want to track the completion of program activities or curriculum as part of their evaluations. Additionally, peer programs are encouraged to:
E-MENTORING MODELS
E-mentoring models are another example of programs in which participants’ progression through a set of prescribed activities can be an important marker of engagement and a likely driver of program outcomes. Online mentoring programs should emphasize several markers of participation in their data collection and analysis: data points such as the number of logins or messages sent, the average word count of messages, the frequency of interactions between mentors and youth, the average response time between participants, and the total time spent on the program platform may all be important predictors of the program’s impact on participants (a sketch of computing such metrics appears below). Additionally, e-mentoring programs may wish to:
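To make the participation markers above concrete, here is a minimal sketch, in Python, of how a program might compute a few of them from an exported message log. The log format, field names, and the engagement_summary function are all hypothetical assumptions for illustration; actual platforms will export data in their own formats.

```python
from datetime import datetime
from statistics import mean

# Hypothetical export from an e-mentoring platform: one record per message.
# Field names are assumptions for illustration, not any real platform's schema.
messages = [
    {"match_id": "M01", "sender": "mentor", "sent_at": "2024-01-08T16:02:00", "word_count": 85},
    {"match_id": "M01", "sender": "youth",  "sent_at": "2024-01-09T17:30:00", "word_count": 40},
    {"match_id": "M01", "sender": "mentor", "sent_at": "2024-01-11T16:10:00", "word_count": 120},
]

def engagement_summary(records):
    """Summarize participation markers for one match: message volume,
    average message length, and average response time between partners."""
    records = sorted(records, key=lambda r: r["sent_at"])  # ISO strings sort chronologically
    gaps_in_hours = []
    for prev, curr in zip(records, records[1:]):
        # Count a response only when the sender changes (mentor -> youth or vice versa).
        if prev["sender"] != curr["sender"]:
            delta = datetime.fromisoformat(curr["sent_at"]) - datetime.fromisoformat(prev["sent_at"])
            gaps_in_hours.append(delta.total_seconds() / 3600)
    return {
        "messages_sent": len(records),
        "avg_word_count": round(mean(r["word_count"] for r in records), 1),
        "avg_response_hours": round(mean(gaps_in_hours), 1) if gaps_in_hours else None,
    }

print(engagement_summary(messages))
# e.g., {'messages_sent': 3, 'avg_word_count': 81.7, 'avg_response_hours': 36.1}
```

Aggregating such summaries across matches can help staff flag low-engagement matches for additional support, in line with the ongoing data collection and analysis practices described earlier.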
SCHOOL- AND OTHER FORMAL SITE-BASED MODELS
As noted above for group, peer, and e-mentoring models, school- and other site-based models are also encouraged to track participant completion of key activities related to the program’s theory of change. And as noted for peer programs, teachers, counselors, caregivers, and other faculty or site staff may be important sources of information about program impact.
When set in broader youth-serving organizations and institutions, formal school- and site-based mentoring programs are additionally encouraged to:
INFORMAL MENTORING MODELS
Organizations that offer informal mentoring via staff may choose not to evaluate mentoring separately, given that mentoring relationships may not be experienced by all youth and the connection to other organizational services or outcomes may be less clear. When possible, however, youth and caregivers should be asked about their mentoring experiences (if any) as part of the overall evaluation of the organization’s services: the nature of those relationships, the benefits received, and how the presence of mentoring may have bolstered or enhanced the young person’s overall experience in the organization.
Programs may want to set benchmarks and track progress around metrics such as:
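Whatever metrics a program chooses, a simple way to track progress is to compare observed values against the benchmarks set in the evaluation plan. Below is a minimal sketch; the metric names, targets, and observed values are all hypothetical placeholders.

```python
# Hypothetical benchmarks from an evaluation plan, paired with observed values
# pulled from collected data; real programs would substitute their own metrics.
benchmarks = {
    "youth_reporting_a_staff_mentor_pct": 60.0,  # % of surveyed youth naming a staff mentor
    "caregiver_survey_response_rate_pct": 75.0,  # % of caregivers completing the survey
}
observed = {
    "youth_reporting_a_staff_mentor_pct": 54.2,
    "caregiver_survey_response_rate_pct": 81.0,
}

for metric, target in benchmarks.items():
    value = observed[metric]
    status = "met" if value >= target else "below target"
    print(f"{metric}: {value:.1f}% (target {target:.1f}%) -- {status}")
```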
Recommended Resources
A Brief Primer on Youth Participatory Action Research for Mentoring Programs. National Mentoring Resource Center.
This brief primer provides an overview of youth participatory action research (YPAR). YPAR is a promising approach for elevating youth voices in mentoring programs to create positive change.
The Community Builder’s Approach to Theory of Change: A Practical Guide to Theory Development. Anderson, A., Aspen Institute.
This guide provides a basic overview of the major concepts that define theories of change along with guidance and a resource toolbox to support development of theories of change.
Measurement Guidance Toolkit. National Mentoring Resource Center.
A mentoring-focused collection of measurement tools for examining participant outcomes. Includes a Selected Reading and Resources page that contains several valuable resources on general evaluation, survey design, and data sharing.
From Soft Skills to Hard Data: Measuring Youth Program Outcomes. Wilson-Ahlstrom, A., Yohalem, N., DuBois, D., & Ji, P., The Forum for Youth Investment.
This compendium describes scales in a wide range of areas used to measure youth outcomes.

