As TechSoup and the other agencies working on the public access technology benchmark initiative get feedback and refine the benchmarks, the ALA Office for Information Technology Policy (OITP) is doing a lot of work to document the process.
I thought I'd share some of what is being written about the benchmarks, both to shine a light on our current work and to start a conversation about the value of using benchmarks in libraries. What you'll read below is a draft, shared with you to offer some insight into the benchmark project. We'd love to hear what you think about it. Please weigh in and share your thoughts.
Why use benchmarks?
1. Assess where the library stands in terms of public access technology
By using the benchmarks, a library can gain an understanding of where its public access technology services stand in relation to national best practices. The benchmarks will be updated over time, making it easier for libraries to keep up with the latest changes in technology and shifting practices, and saving them the valuable time they might otherwise spend searching for this information individually. In addition, the benchmarks, as a universal “measuring stick,” have the potential to help the library community — nationally, state by state, and by library type and size — determine its common strengths and challenges in providing public access technology. The benchmarks also encourage libraries to share experiences and insights with one another, promoting communication within the field.
2. Pinpoint specific areas for improvement and investment
The benchmarks are divided into three general categories, and success on each benchmark, in turn, is determined by a set of indicators. By working through the framework, a library can see, on a number of different levels, where it is doing well and where it could improve. For instance, it may be revealed that the library (or libraries across a state, or the nation) is particularly strong in two categories but weaker in the third. Within that category, the library might do well on some benchmarks and indicators but not others. This can give the library ideas about where to develop and which issues are most critical to focus on.
3. Generate data useful for advocacy and fundraising
Libraries that use the benchmarks will collect a set of data useful for advocacy and fundraising efforts, and the framework will also help them organize this data in a consistent way. The benchmarks also require libraries to think through their activities strategically in relation to public access technology — e.g., how do library policies and staff competencies support it? How does public access technology address the community’s needs? Overall, the benchmarks will help libraries gain clarity about the state of their public access technology, and help them identify opportunities for and challenges to improving it. This clarity is useful when communicating with decision makers and funders. That is, libraries using the framework can say: “We’ve done the background work to determine where we stand, and this is precisely what we need to do (and need funds for) to make this service better for the community.”
4. Obtain insight on how public access technology can be used to meet community goals
To achieve a high score, libraries must demonstrate that they actively take community needs and goals into account when planning for public access technology. By working with library staff members who have direct contact with library visitors, as well as with external community partners, a library can understand the challenges the community faces and how public access technology helps to address them. Using the benchmarks can help spark conversations on this topic and point toward activities (such as connecting with e-government services) that public access technology may support in the community.
5. Create clarity about the value of the library and public access technology
The benchmarks encourage libraries to engage stakeholders, including both decision makers and patrons, through regular communication. In doing so, libraries will gain a clearer understanding of stakeholder points of view, including what matters most to them. Libraries are also encouraged to develop strategies for letting the community know about the availability of public access technology services, and for communicating to decision makers how public access technology addresses specific community needs. This may involve pointing out how library public access technology already makes a big difference in supporting education, e-government, health, and workforce initiatives, and how it could do an even better job with more resources.
Both libraries that score well on the benchmarks and those that receive lower scores can benefit from using them. Libraries that score highly can show decision makers their results in the context of what has allowed the library to do so well — highlighting things like good management practices, community partnerships, or adequate funding. They can then point out how, and in what critical areas, budget cuts would impact public access technology service. On the other hand, libraries with lower scores can use their results to make the case to decision makers for the specific interventions that are necessary and could significantly improve public access technology in the community.
Is this what you expected the benchmarks to offer your library? Does this make sense? Do you think there is value in using benchmarks? Please poke holes in it, get riled up, and share your thoughts. The benchmarks are only as useful as you find them, and we can make them a better tool with your input. Thank you.