This is the second post in a series about design thinking in evaluation. The goal of this series is to share insights from the world of design that may help you think differently about how you work and, hopefully, start a conversation about what the world of social sciences can learn from the world of design. If you missed Part 1 about radical collaboration, check it out here. This time around we’re focusing on another key idea in the design thinking world: human values. As evaluators, we may deal largely in numbers and spend a lot of time in front of spreadsheets. Yet we can’t forget the real reason we do what we do – helping people. And to help people, we have to understand where they’re coming from. The Institute of Design at Stanford describes it this way: “Empathy for the people you are designing for and feedback from these users is fundamental to good design.”
What does empathy mean for designers, and why should evaluators care? Designers have certainly been known to create things that are beautiful but a little bit esoteric. The white couch you can’t sit on without wrinkling its perfect linen upholstery. The beautiful teapot that doesn’t have a handle. The elegant solution to a public health issue that ignores the values of the people in the community. As evaluators, we work with people who are passionate about people. People helping people. People who are embroiled in intense emotional and even life-threatening situations. But when we design evaluations, we sometimes forget to think about those people, their values, and what they are experiencing.
A focus on human values should be obvious, but it’s often overshadowed by concerns about rigor, replication, and publication. We worry that if we customize our research too much, we won’t be able to generalize and share it. But if we don’t, we risk creating evaluation tools and designs that don’t work for our clients or, even worse, that ignore their everyday realities altogether.
I once encountered an evaluator who was designing an outcome survey for a group of adolescent boys involved with the juvenile justice system. These adolescents participated in a male mentoring program. The program was fairly limited, with only eight two-hour sessions, and many of the boys struggled greatly in school, especially with reading. The evaluator developed a lovely survey that was totally reliable and valid. The problem was that it was eight pages long. If this evaluator had thought about the actual people who would be filling this survey out, she would have realized that the survey was bound to provoke test anxiety and fear of failure among the youth. Not only that, but from a program administrator’s point of view, the survey would take up far too much of the short, valuable time they had to spend with the boys. Methodological considerations aside (and there were many), the approach was far from human-centered.
In another instance, we worked with an out-of-school-time program that was struggling to track daily attendance for its students. At first, we were very frustrated and couldn’t understand why they were having so much trouble checking off names on a list. So we went to the school and observed the beginning of the program and the check-in process. Pretty quickly we let go of OUR frustration and understood THEIR frustration. They had a list with names, but kids were running in and out of the program, the way kids do after school and before they get settled into their next activity. A teacher would check a name off, but then the kid would run out and back in, making it hard to be sure whether that kid had already been checked off, or whether a different kid had been marked by mistake. With 100 children in the room, it was hard to keep track. So we created a simple system for them that involved cards and hand-held scanners. The teachers scanned the students’ cards when they came in, and the attendance was recorded in an Excel spreadsheet. This saved time and ensured that if a kid wasn’t enrolled, there would be no card, and if the kid had already been recorded, the system wouldn’t record it again. Overall, it saved tons of time and energy and… frustration.
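The heart of that scanner system is a small piece of logic, and it’s worth seeing how little is actually needed. Here is a minimal sketch in Python, with hypothetical function and variable names of my own (the real system recorded to an Excel spreadsheet; a CSV file stands in for it here): an unenrolled kid has no card in the system, and a kid who runs out and back in can’t be double-counted.

```python
import csv
from datetime import date

def record_scan(card_id, enrolled_ids, recorded_today):
    """Handle one card scan at check-in.

    Returns "recorded", "already recorded", or "not enrolled".
    Repeat scans are ignored, so kids running in and out
    don't get counted twice.
    """
    if card_id not in enrolled_ids:
        return "not enrolled"       # no card on file for this kid
    if card_id in recorded_today:
        return "already recorded"   # re-scan after running out and back in
    recorded_today.add(card_id)
    return "recorded"

def write_attendance(path, recorded_today):
    """Append today's attendance to a CSV file (stand-in for the Excel sheet)."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        today = date.today().isoformat()
        for card_id in sorted(recorded_today):
            writer.writerow([today, card_id])
```

Each scan becomes one call to `record_scan`, and at the end of check-in `write_attendance` dumps the day’s roster. The point isn’t the code; it’s that the tool absorbs the chaos of the room instead of asking teachers to.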
In creating solutions for both of the above situations, we also incorporated feedback from users. We listened (probably the most important skill we have as consultants) to the concerns of our clients, talked to them about how the boys were reacting to the surveys, and observed the classrooms where the attendance system was used. There is always room to ask clients how our tools are working for them and to find out how things look in the real world; whenever possible, we love to observe our work in action. It often looks different from what we expected.
Our scientific selves and our training are often at odds with our relationships with clients. It’s important to provide clients with the best data possible, so they can make the best decisions possible and use that data to tell their stories. It’s just as important to remember that real people with real constraints, frustrations, and often chaos all around them are the ones who will be filling out our forms and using our information.
Really, we do better work when we think about the people we’re doing the work for.
Check back soon for Part 3 in this series!