The Importance of Assessment
Program assessment has gained considerable importance among funding agencies in recent years. Ultimately, funders want a cohesive and comprehensive way to measure the success of a program, and assessment has therefore become an important criterion for reviewers and funding decision-makers.
To be competitive, proposals must include well-thought-out, well-integrated assessment plans. These are no longer a box to check; they can be the difference between a competitive and a non-competitive proposal. Most importantly, the proposed assessment plans should be allocated adequate funding, including:
Senior personnel (e.g., a co-PI or a director of assessment) should be engaged in the grant-writing process rather than listed as TBA; otherwise, the absence of their expertise will show in the proposal. In addition to several months of salary, they should also be allocated resources to conduct their assessment (travel, survey tools, a student assistant, etc.).
External reviewer(s) should be identified prior to submission and named in the proposal. Funding for their travel should also be allocated (this varies, but plan for at least one site visit per year). The external evaluator assesses the extent to which the project’s goals are being met and provides formative feedback to the project leadership team; the objective of that feedback is to course-correct problems early. Hence, the external evaluator is expected to be an independent, objective, and unbiased assessor of the project. This independence can only be achieved by someone who is not attached to the project as PI or senior personnel and who does not report to the PI team in any direct way (Source: NSF 2023).
Assessment Guidance and Resources:
While there are many good examples on the web, most cater to assessments of student knowledge. Programmatic assessments are more complex, and expertise is hard to find when seeking “the evaluator” for your team. Thus, when writing these assessments, plan to evaluate a wide range of outcomes beyond traditional student learning. The following are basic guidelines for structuring your assessments. For more detailed information, please reach out to our Huck C contacts. Workshops focused on overall assessment or portions of assessment will be available throughout the year; please check the Huck C Events and Training page for what is currently planned.
Assessment starts with the Goals and Objectives of your project. What you propose to do should be reflected in your assessment; the two cannot be disconnected. Your goals need to be specific, measurable, achievable, relevant, and time-bound (the SMART framework).
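As an illustrative sketch only (none of these field names or objectives come from any agency's guidance), one lightweight way to keep each objective honest against the SMART criteria is to record it with its measurable metric and deadline, so gaps are easy to spot:

```python
# Illustrative only: record each project objective with the fields the SMART
# framework requires. All names and example text here are hypothetical.
from dataclasses import dataclass

@dataclass
class Objective:
    description: str
    measurable_metric: str   # how success will be measured
    deadline: str            # the time-bound target, e.g. "Year 2, Q4"

    def is_smart_ready(self) -> bool:
        """Crude check: the objective names both a metric and a deadline."""
        return bool(self.measurable_metric) and bool(self.deadline)

obj = Objective(
    description="Train 20 graduate students in team science methods",
    measurable_metric="number of trainees completing the workshop series",
    deadline="Year 2, Q4",
)
print(obj.is_smart_ready())  # → True
```

A plain spreadsheet with the same columns serves equally well; the point is that every objective carries its own metric and deadline from the start.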
Plan for both formative and summative assessment. In brief, formative assessment occurs throughout the project and requires ongoing monitoring and feedback loops, whereas summative assessment occurs at the end of the project and evaluates its overall success in achieving its goals and objectives.
What do you assess? Identify the large categories to assess; we’ll call these Key Performance Indicators (KPIs). KPIs group the specific, measurable metrics used to evaluate different aspects of the project. For example, KPI categories could include Research Impact, Training and Capacity Building, Collaboration and Team Science, Broader Impacts, Community Engagement, and Long-Term Sustainability. Under each category, list the measurable metrics, such as the number of co-authored papers with interdisciplinary collaborators, the number of newly formed partnerships, stakeholder satisfaction with communication and leadership, the number of trainees who secure related jobs, fellowships, or academic positions post-project, participant satisfaction with training, and the diversity of participant backgrounds.
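The category-to-metrics structure above can be sketched as a simple mapping. This is a hypothetical illustration, not a prescribed format; the category and metric names are examples drawn from the list above:

```python
# Illustrative sketch: KPI categories mapped to their measurable metrics.
# Names are example placeholders, not an agency-mandated taxonomy.
kpis = {
    "Collaboration and Team Science": [
        "co-authored papers with interdisciplinary collaborators",
        "newly formed partnerships",
    ],
    "Training and Capacity Building": [
        "trainees securing related jobs, fellowships, or academic positions",
        "participant satisfaction with training",
    ],
}

def summarize(kpis: dict) -> list:
    """Return (category, metric count) pairs for a quick plan overview."""
    return [(category, len(metrics)) for category, metrics in kpis.items()]

print(summarize(kpis))
```

Keeping the plan in a structured form like this makes it easy to confirm every category has at least one measurable metric before the proposal is submitted.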
Reporting and Feedback Mechanism
Provide a clear timeline for how often assessments will be conducted and reported, and who will be responsible for their execution. Set up regular check-ins and feedback loops, then show how you will incorporate the feedback you receive in an agile manner.
Assessment Budget Allocation
Depending on the scope of the project, the evaluators’ time commitment per year will vary. Plan to provide a clear justification for the time commitment and subsequent budget allocation for the evaluation component.
External evaluators need to establish their hourly rate and the number of hours they will dedicate to the project. They will be accounted for as independent consultants.
Depending on whether the evaluator is a member of the institution’s staff or an independent consultant, budget requirements can vary. A rule of thumb we have observed in many successful projects is that the evaluator is allocated around 2-5% of the total budget. Certain projects go up to 10% and hire an external consulting firm to manage all the assessments (about $30-50K/year). The range varies with the project’s scope, budget, and complexity.
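For a rough sense of what the 2-5% rule of thumb implies in dollars, here is the arithmetic on a hypothetical total budget (the figures are examples, not agency requirements):

```python
# Illustrative arithmetic for the 2-5% evaluation rule of thumb noted above.
def evaluation_budget(total_budget: float, low: float = 0.02, high: float = 0.05):
    """Return the (low, high) dollar range for the evaluation line item."""
    return total_budget * low, total_budget * high

# A hypothetical $2M project as an example.
low, high = evaluation_budget(2_000_000)
print(f"Evaluation budget range: ${low:,.0f} - ${high:,.0f}")
# Under the 2-5% rule, a $2M project budgets $40,000 - $100,000 for evaluation.
```

Projects at the 10% end (a fully outsourced assessment) would scale the same calculation with `high=0.10`.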
Tips for a Successful Assessment
- Identify, engage, and name your evaluators prior to submission
- Allocate a minimum of approximately five percent of your project budget for assessment
Finding Assessment Experts
Finding good assessment experts is not easy. Most focus on educational assessment: they are familiar with the methodologies and tools of student assessment but unfamiliar with more complex, programmatic assessments. Student-focused assessment is of course important, as it addresses a critical program component, usually falling under Education and Training and potentially under Broader Impacts, but it is only a fraction of the assessment you need.
When seeking your assessment expert, make sure you ask about their experience leading programmatic assessments. A few suggestions are listed below; it is up to the PI to decide the best fit for their team and project (the list will be expanded as we identify more experts).
Internal to Penn State
The Penn State Office of Planning, Assessment, and Institutional Research (OPAIR)
Contact: Jessica Myers, Associate Director of Assessment
OPAIR Assessment Workshops: One example is their annual Workshop on Fundamentals of Program Assessment Planning.
Disclaimer: Assessment expertise may be budgeted on a case-by-case basis, depending on the availability and time commitments of the assessment staff.
Penn State College of Education Center for Evaluation and Education Policy Analysis (CEEPA). The mission of the CEEPA is to provide unbiased, high-quality evaluation and policy analysis services to education and other organizations in the Commonwealth of Pennsylvania and across the nation.
CEEPA researchers have much experience conducting program evaluation, policy analysis, concept mapping, cost analyses, feasibility studies, impact assessments, needs assessments, outcome evaluations, process evaluations, and proposal evaluations in the following sectors:
- Public and private education (PK-12, higher, informal and non-formal, non-traditional)
- Regional economies and workforce
- Social services and public welfare
- Health and human development services
- Engineering services
Areas of expertise will expand as the Center brings on additional evaluation specialists.
Penn State Student Affairs Research and Assessment (SARA) seeks to evaluate and improve the student experience and the educational outcomes of students by providing data relevant to Student Affairs, the academic colleges, and the Commonwealth Campuses. To accomplish this, SARA coordinates and assists in the development of a broad range of assessment and evaluation efforts.
Schreyer Consulting Instructors regularly consult with the Schreyer Institute about informal and formal feedback from students (SEEQs & SRTEs). They can help identify strategies to gather student feedback before the end of the semester through Classroom Assessment Techniques (CATs), non-evaluative class observations, and mid-semester class interviews. They are available to assist with interpretation of and responses to student feedback.
Please contact us directly for recommendations of additional University experts.
Examples of Assessment
National Science Foundation: Evaluation and Assessment Capability (EAC) supports NSF decision-makers by studying programs and activities and their impact on the people who participate in and benefit from NSF investments. EAC also provides centralized resources and guidance for data collection, analysis, evaluation design, surveys, and enterprise analytics. EAC engages stakeholders in identifying learning priorities which drive a broad portfolio of projects that use qualitative and quantitative methods, from simple descriptive analysis to advances in modeling, network analysis, and machine learning. (Source: NSF 2024)
United States Department of Agriculture (USDA) provides an excellent Evaluation Framework (see pages 38-41) and examples of analysis. The framework could be adapted to other evaluations as it applies to a wider range of projects.
The National Institutes of Health (NIH) will use the Simplified Peer Review Framework and the Guidance: Rigor and Reproducibility in Grant Applications as key evaluation criteria.
A good example and guidance can be found at Project Evaluation Plan Samples for a Rural Health Network Development Program.
An example of a Logic Model may be found here (requires Penn State SSO credentials).
We are happy to provide access to internal examples of assessment plans for large, complex grant proposals that received excellent reviewer feedback. Please contact us directly until we can provide these behind a Penn State access wall.
Disclaimer: Use provided examples only as additional guidance. Each funding agency will have its own guidelines for assessment that will take priority. Your assessment lead should start from that information and follow it closely.