Early Impressions from Edge Initiative Beta Test Data Analysis

Although we’re still in the early stages of digesting the rich data provided to us by the 160 libraries that took the time to participate in the Edge Initiative beta test, the University of Washington team has gained some initial impressions and insights from the responses.

The first is the participation itself: 160 libraries completed a beta test that took real time and effort. The high response rate matches the interest and engagement we have seen in other venues where the Edge Initiative has been discussed, and it suggests this effort is getting at something libraries care about and consider a worthwhile investment of time and energy.

As part of the research, we interviewed 40 of the responding libraries to explore the non-quantifiable dimensions of the benchmarks, including the effort required to complete the survey, general impressions of the instrument, and other details of its design. One key finding is that, even at this early stage of development, most libraries spent less than three hours filling out the survey, which they felt was a reasonable amount of time for the returns gained from the effort.

We also saw libraries take characteristically creative approaches, using an activity like this to share knowledge and gain insights that might not surface if they worked alone. Several libraries teamed up to complete the survey as a group and found it a good learning experience as well as a way to generate new ideas. This quote is a good example:

“Our Internet librarian got excited about a few items like digital literacy goals. Lots of times I was filling out the survey and thought, ‘We’re not doing that? Why aren’t we doing that?’”

Another early impression from the interviews echoes a finding from the U.S. Impact Study: the importance of staff training and one-on-one help. Many libraries lack the space or terminals to offer formal classes at the levels called out in the indicators for these areas. We will be investigating whether the instrument can adjust for differing capacities, so that libraries unable to offer formal classes still get credit for the individual assistance they provide as part of their services.

Overall, we continue to see confirmation that the benchmarks and indicators in the Edge Initiative are “roughly right,” and that with some tuning based on the survey and interview data we will have a solid first version in place for the rollout. We are continuing a deeper statistical analysis of the rich data the libraries provided to refine the weighting, scoring, and wording of the indicators, so that the final product is a useful, generally applicable instrument libraries can use to assess and improve the technology services they offer their communities.

We are thankful to be working in a community that understands the value of good data and is willing to contribute time and energy to initiatives like Edge that help show the impact libraries are having in their communities. And we are excited to see how well libraries are doing in this important area; many don’t realize how well until they get their data back.

Watch the Edge Initiative website for updates and to track our progress toward full rollout: http://www.libraryedge.org/. Thanks to all of you who have contributed your time and knowledge to getting us one step further down the road.

Mike Crandall

Senior Lecturer and Chair, Master of Science in Information Management

University of Washington iSchool



This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.