Getting Real About Real-Time Evaluation

Every once in a while, a field creeps closer to actually being helpful to nonprofits. Thanks to evaluators Clare Nolan and Fontane Lo for explaining a type of program evaluation -- and how to make it work for you.

Have you ever had an evaluation conducted for a program, and then waited months--or even years--for the findings to come out? When you finally got the evaluation report, were you annoyed because you had already changed the program significantly?

You're not alone! Traditional evaluations emphasize proving whether or not a program has worked. This requires a rigorous study design (with elements like intervention and control groups), and findings are typically issued only after all data are collected and thoroughly analyzed. That approach can mean a long time frame that makes the results useful for proving something to a funder, but less useful to a nonprofit earnestly trying to improve its services.

Real-time evaluation (RTE)

Fortunately, the field of evaluation has evolved and other forms of evaluation have emerged that emphasize improving programs. Such evaluations can go by these names:

  • Formative
  • Utilization-focused
  • Participatory
  • Empowerment
  • Developmental

These terms differ in meaning, but they all describe evaluation approaches with one interest in common--producing information that can be used to help make programs work better.

Real-time evaluation (RTE) is the latest form of this evaluation type, and one that is growing in popularity among funders. RTE emphasizes timely evaluation feedback to strengthen program design and implementation. However, RTE isn't always "real-time" in the sense that information is available immediately. Some types of data -- like interviews and focus groups -- take more time to process and analyze. To account for this, real-time evaluation reporting can be timed to occur after major program milestones or at key decision points. And in contrast to traditional narrative reports, RTE findings are commonly presented more informally through memos, slide decks, and even conversations.

How is RTE different from traditional evaluation?

Here's a real-life example that illustrates how RTE differs from traditional formative and outcome evaluations. Juma Ventures (Juma) is a Bay Area nonprofit that serves youth through employment in social enterprises, college preparation, and financial asset building. Last year, Juma started a new program called CollegeSet that combines financial capability training, match-savings for college, and coaching to apply for student financial aid.

A traditional outcome evaluation might look at the success of individual students in the program: did CollegeSet students achieve greater academic and career success than they would have without the program? A traditional formative evaluation, on the other hand, might examine the effectiveness of the different service components, with the overall goal of providing information that Juma could use to strengthen the next iteration of the program.

But Juma didn't want to wait a year or even six months to learn what worked and what didn't--they wanted to make program adaptations in real time. So we designed a real-time evaluation approach to help Juma do this.

For example, CollegeSet was being implemented by seven different partner agencies. Juma wanted to use evaluation to learn how to enhance these collaborative partnerships and how to ensure that the partners were receiving the support they needed to implement the program. Early in the evaluation, we interviewed Juma staff to hear their perspectives on strengths and challenges of this collaborative approach, and to identify areas where they would like more feedback and input. We used these perspectives to guide our interviews with partner organizations. As a result, the evaluation was able to capture valuable data to help staff craft stronger partnerships and inform the selection process for the next round of partners. Following each data collection milestone, we talked with Juma staff about what we were learning and how this information could be used to further refine the program.

Is a real-time evaluation right for you?

The comparison below outlines some of the key questions to ask in deciding which type of evaluation is right for you.

What do you want out of evaluation?

  • Real-time evaluation: In-the-moment feedback at critical decision points.
  • Traditional evaluation: In-depth analysis in a detailed report, with the clarity of hindsight.

What types of deliverables do you prefer?

  • Real-time evaluation: Frequent in-person meetings and data summaries.
  • Traditional evaluation: A full report at a defined end point, and potentially at mid-point.

What is the end goal?

  • Real-time evaluation: Getting the program to work as efficiently as possible, as soon as possible.
  • Traditional evaluation: Learning what worked and what didn't, and using that information to inform the next iteration of the program.

How much does it cost?

  • Real-time evaluation: May be more costly due to multiple rounds of data analysis and meetings. Because evaluation activities may evolve to meet changing information needs, costs are not always as predictable.
  • Traditional evaluation: Costs are generally more predictable because you know at the outset what activities will be conducted.

What are the trade-offs?

  • Real-time evaluation: The analysis will not be as rigorous, because in-the-moment feedback cannot achieve the same clarity as hindsight.
  • Traditional evaluation: The analysis will not be available until midway through or after a program's end. However, with the additional time available, a higher degree of rigor is possible.

If you're contemplating a real-time evaluation, two other important considerations are whether your organization has the capacity to respond to real-time feedback, and whether you can find the right evaluator. Evaluators who understand organizational development issues and have experience working in the community may be a better fit than academic experts. The American Evaluation Association maintains a searchable directory of practicing evaluators, many of whom have experience evaluating community programs.

What if my funder wants a traditional evaluation?

Funders interested in a "proof" approach will find a summative or outcome evaluation more satisfying. But some funders are beginning to recognize the value of RTE for nonprofits working on advocacy and systems change efforts -- see, for example, the Packard Foundation's emphasis on RTE. If this might align with the interests of one of your funders, consider passing this article along.

Finally, RTE is still a relatively new form of evaluation, and the literature on best practices and lessons learned is still emerging. We're interested in hearing from readers who have participated in real-time evaluations: to what extent did the RTE realize its goal of providing useful information for your work?

Clare Nolan and Fontane Lo are evaluators at Harder+Company Community Research, a California consulting and research firm working with the nonprofit, philanthropic, and public sectors. Clare is a rock climbing enthusiast (which is about outcrops, not outcomes), and Fontane crews with Absolute Dragons, a Bay Area dragon boat team.

Comments (1)

  • Anonymous

    It's great to see Clare still doing wonderful work in this field. Some service areas are inherently difficult to evaluate, such as mental health or substance abuse services. Clients have deep-seated, long-term issues that can't be quickly resolved. What kinds of real-time evaluation outcome data would you suggest?

    Apr 17, 2012
