Harried multi-taskers in small community anchors such as libraries and community centers are doing the bulk of the work to close our digital divide. But who has time to reflect on the impact of your programs when the toilet is blocked, a teary child is standing at your desk, and every phone call is for you? Project evaluation can seem a luxury of time and resources when it gets heaped onto the to-do lists of the busy people who staff community-based organizations.
Recently, I interviewed two people who are trying to bridge the evaluation divide for community technology projects. From the University of Washington Information School, Mike Crandall and Samantha Becker are approaching this divide from many angles. Under the moniker the U.S. Impact Study, they are creating national surveys that independent libraries can use to evaluate technology-related services. They have an ongoing relationship with a network of digital divide groups active throughout Washington State, where focus groups and interviews help them understand the impact of these programs on the constituents they serve. For the Knight Foundation, Becker and Crandall are also examining the role of community technology projects in the context of the broader communities they serve. Consummate multi-taskers themselves, they are collaborating with other national experts convened by the Gates Foundation to identify benchmarks for community technology projects.
Evaluation helps your project stay on target.
Becker: “[Evaluation] serves performance because it provides the grantees with information about their users, helping to shape programmatic decisions.” For example, quarterly reporting based on goals set in the planning phase of a project (often the grant-writing phase) creates a feedback loop that gives project leaders time to make adjustments to steward their project to success.
Evaluation uncovers new ways to define success.
The U.S. Impact Team helps the groups they work with identify unintended benefits of their community technology projects. Crandall: “One of the CTOP grantees was really focused on youth issues but also discovered they were helping with employment preparation and employment job skills training. That wasn’t a specific goal of their grant, but it was a valuable outcome. We’re trying to capture that in this grant by sharing the goals between the different providers.”
Evaluation communicates success.
A vital outcome of evaluation is the coherent way it can be used to communicate the impact a project has on its intended beneficiaries. Crandall: “If you do it well then you get results that are quite impressive and you can take those results and use them as leverage for future efforts.” The Impact Team plays a role in helping the BTOP coalition communicate its impact to stakeholders from the local to the federal level.
Collaborative evaluation is raising the status of the field.
One interesting aspect of the Impact Team’s work is how it supports digital divide work as a field. For example, when they helped a state coalition of groups identify the aggregate outcomes of their individual community technology projects, they got the attention of state government, which led to the creation of a state grant program called the Community Technology Opportunities Program (CTOP).
Trust is not an issue.
Federal BTOP applications were ranked higher when they demonstrated collaboration. In my travels, I have found that because private and public funders push groups to collaborate for funding, the benefits of collaboration get more attention than the drawbacks. My hope is that both funders and people in the field will become ever more skillful at understanding the mechanics of healthy collaboration. The way the Impact Team works with the Washington BTOP grantees may point toward some of those vital mechanics.
I imagined that one of the challenges of the kind of group evaluation the Impact Team does might be that people perceive it as risky: one organization might not compare favorably to a peer organization, harming its chances for future funding. When I asked Becker and Crandall whether grantees saw evaluation as risky, they denied that this was the case for the groups they work with. Becker: "I haven’t heard it from any of the groups that we’ve worked with so far in terms of feeling anxious about that."

I suspect that the role Becker and Crandall play with BTOP grantees in Washington State may alleviate some of the challenges that can arise in umbrella funding. They are at once both insiders and outsiders on that team. They are trusted insiders within the BTOP grant who speak of the BTOP project as "we" and "us," which suggests they are more collaborative enablers within the BTOP team than indifferent outside evaluators. I can only imagine that this encourages a degree of trust among the BTOP grantees that enables them to fully participate in the evaluation process.

Yet Becker and Crandall are also outsiders in the BTOP project. Their team is an independent entity within the BTOP coalition. Their role as evaluators is expansive in that evaluation is threaded through planning, operations, and communications. But it is also confined: evaluation is the only role they play within the coalition. They are helping individual groups define success for themselves, and they are also brokering the aggregate of groups to define success in a way that seems not to play into a winner/loser mentality within the coalition. This same process helps create a coherent, methodical way to communicate the coalition’s successes to important stakeholders, especially state and national funders.
This dual insider/outsider relationship between umbrella-funded groups and evaluation teams deserves consideration as "a right way" to take on this work. We can’t ask the harried multi-taskers on the front lines of the digital divide to take on the task of communicating digital inclusion work in a way that big foundations and beltway insiders can understand. But we can’t expect the work to move forward if front-line knowledge and experience are not reflected at all scales. Becker and Crandall demonstrate how evaluators can facilitate this critical knowledge exchange.
For more BTOP tools and resources, visit: Broadband Stories from the Field on TechSoup.org