In systems change processes, evaluation is complex. Traditional evaluation methodologies and approaches do not provide the answers we need, because these processes are not linear and there are no ready-made solutions at the outset; solutions emerge over the course of the project.
What are complex challenges?
These are the global challenges we face as a society, which no single, one-off solution can solve. The socio-ecological transition, the reinvention of the care system, the future of work, the impact of Artificial Intelligence on democratic practices, and structural inequality are some of the most relevant examples.
How can we address complex challenges from a Social Innovation perspective?
Traditional evaluation approaches are designed for programs and projects that follow linear processes in stable contexts and that aim to solve well-defined problems using established practices.
In these cases, evaluation assesses results after the solutions have been implemented, focusing on accountability and on questions such as: Did we do what we said we would do? Did it have the impact we expected?
But how do you assess complexity? How do you assess the value of an innovation process that is not linear, that addresses a complex problem, that operates in a context characterized by the unknown, and in which no solutions are defined at the outset?
Developmental Evaluation offers a monitoring approach that supports social innovation processes through feedback loops that return information in real time, helping to capture emerging impact.
A mindset change in evaluation
In traditional evaluation approaches, the evaluator takes part at the end of the project, capturing impact in order to present results. This evaluation cycle makes learning difficult, as information is collected at the end, when it is too late to inform strategic decisions.
Developmental Evaluation is continuous, accompanying the project from start to finish. Its aim is to situate assessment within the complexity of the challenge and to support decision-making from the earliest stages of an intervention. However, this brings a major challenge: how do we evaluate the added value of a process when we don't know which direction it will take or where it might end up?
Adopting Developmental Evaluation implies a change of mindset in how the evaluation of Social Innovation processes is perceived, where the usual reaction is: "We are not really sure we have results to share, and we feel it is too early to evaluate."
Developmental Evaluation is a continuous learning process, and the value it brings becomes clear through exploration, experience and experimentation.
Developmental Evaluation does not divide roles in the same way as the traditional approach; instead, evaluation is a team effort in which each member plays a part in the process. The team combines the internal knowledge of the organizations involved with external expertise.
Data collection and use
Developmental Evaluation can work with data collected through other processes, and it can also establish specific data collection processes to support decision-making.
Characteristics of Developmental Evaluation
Exploring hypotheses: Is what we are doing helping us achieve our goals?
Learning: What is working and what is not? What do we need to change, and what should we continue doing?
Internal and external communication: What is going on? What impact are we generating?
Management: What are our hypotheses for activating the transformation? Which activities will we carry out? How are we testing the activities we are doing?
Traditional Evaluation vs Developmental Evaluation
Traditional Evaluation vs Developmental Evaluation, by Michael Quinn Patton (2006):
| | Traditional Evaluation | Developmental Evaluation |
| --- | --- | --- |
| Goals | Supports incremental improvement and measurement | Supports innovation processes and adaptation to dynamic environments |
| Roles and responsibilities | Evaluators are external to the program to ensure independence and objectivity | The evaluators work as an internal group, integrated into the implementation process and testing new solutions in real time |
| Measurement | Focuses on explicit, pre-established criteria | Focuses on program values and a commitment to long-term impact |
| Results | Formal reports and good-practice case studies | Real-time, learning-focused feedback and reporting |
| Complexity | The evaluator tries to control the evaluation process | Responds immediately, without full control over the process |
| Fundamental criteria | Rigour, independence, credibility with external agents, critical analysis | Adaptability, a complex-systems mindset, tolerance of ambiguity, openness and agility, teamwork |
(Patton, 2006)