Statement of Competency
Introduction
For information professionals to offer the most accurate and useful services, institutional goals must stay connected to what users actually need. Collecting both quantitative and qualitative data that reflect those priorities guides thoughtful, evidence-based decisions and supports stronger planning. By using clear frameworks, measurable indicators, and data-driven analysis, libraries can demonstrate their value, make more informed choices, and continue building their role as responsive, user-centered institutions.
Evaluative Frameworks
Evaluative frameworks help connect the resources a library invests with the results it achieves. They offer a structured way to understand how funding, staffing, and collections support the services that matter most, such as instruction sessions, circulation, and community programs (Bakkalbasi, 2017, p. 212). As Bakkalbasi (2017, p. 214) notes, any good assessment starts by considering the specific environment of the library. Each institution has its own mission, community, and constraints, so an effective evaluation must be realistic, actionable, and aligned with those unique factors and with stakeholder needs. Bakkalbasi (2017, p. 215) outlines a five-stage assessment process that begins with defining clear questions or goals. From there, the next steps are choosing methods for collecting data, often a mix of quantitative and qualitative; analyzing and interpreting that data; integrating the findings into planning; and sharing results with stakeholders. Because the process is cyclical, it keeps assessment tied to everyday practice rather than treating it as a one-time task. The logic model (Yim et al., 2020, p. 2) offers another useful way to visualize assessment. It begins with inputs (resources), moves through processes (service capabilities), outputs (use of services), and outcomes (changes for patrons), and ends with long-term impacts. Both frameworks emphasize connection, linking the work libraries do to meaningful results. Using models like these encourages reflection and helps libraries demonstrate their value in clear, measurable ways. In sum, assessment becomes not just a matter of collecting numbers but of understanding how libraries make a difference in the communities they serve.
Use of Measurable Criteria
The use of measurable criteria is essential to meaningful library assessment and evidence-based decision-making. Rubin (2004) emphasizes the importance of identifying clear indicators and targets of success so that outcomes become measurable and actionable. Outcome statements, as Rubin explains, should specify what is expected to happen, to whom, and within what time frame, providing a concrete basis for evaluation. Rubin suggests a process that begins with defining indicators for each outcome, recognizing that indicators differ from outcomes in that indicators can be measured in concrete ways while outcomes are often more conceptual. Establishing realistic targets for each outcome is also key, and Rubin notes that these should be set by individuals familiar with the organization and its broader community context. Once indicators and targets are in place, composing outcome statements allows assessors to link measurable change directly to program goals. Building on this approach, Hiller and Self (2004) call for a balance between quantitative and qualitative data in assessment. They argue that the shift toward a user-centered library model in the 1990s expanded evaluation beyond simple usage statistics to include more nuanced, experience-based data. By integrating user feedback and qualitative research with traditional quantitative measures, libraries can develop a more holistic understanding of their impact. Together, these perspectives highlight how numerical and experiential criteria combine to create a more complete and actionable picture of library performance and of the value the library's constituents perceive.
Evidence
INFO 210: RUSA Reference Interview
My first piece of evidence is a reference interview that I conducted for INFO 210: Reference Services with Dr. Johanna Tunon (Fall 2024). For this assignment, we were instructed to conduct and evaluate a reference interview with a reference librarian. I made an appointment with a subject librarian named Michelle, who was welcoming and helpful. Using the behavioral guidelines from the Reference & User Services Association (RUSA), I evaluated our exchange and presented my findings and reflections in this assignment.
This evidence supports Competency N by demonstrating my ability to evaluate library services, specifically reference interactions, using an established professional framework. Through qualitative evaluation, I analyzed observational data to interpret the effectiveness of the service encounter and the librarian’s application of RUSA behavioral criteria. My perspective as a user provided valuable insight into service quality and could serve as the foundation for broader, user-centered assessment. The experience also reinforced the importance of empathy, communication, and responsiveness as measurable indicators of successful reference service. By connecting these interpersonal behaviors to institutional standards, I gained a clearer understanding of how user experience data can guide professional reflection, staff development, and continuous service improvement.
INFO 282: Project Management Software Review
My second piece of evidence is a review of project management software completed for INFO 282: Project Management with Dr. Sean Gaffney (Spring 2025). The assignment required me to choose a project management program and evaluate its capabilities in handling workflows, its user interface and overall user experience, its interoperability with other commonly used workplace applications, its pricing structure, and the availability of technical support. To do this, I downloaded a free copy of Trello, a project management application that organizes projects primarily through Kanban boards in a visually appealing and understandable way. I spent several days exploring its features by entering my weekly goals, deliverables, and deadlines.
This project supports Competency N by demonstrating my ability to evaluate tools and services using measurable, evidence-based criteria. In my Project Management Software Review, I assessed Trello through both quantitative and qualitative analysis. Quantitative criteria included pricing tiers, feature limits, scalability, and user ratings drawn from comparative data sources, which allowed me to gauge the software’s value for the user. Qualitative observations came from my hands-on use of the interface and its workflows. Together, this analysis offered insight into how users interact with the platform and how those experiences align with institutional goals. By combining measurable data with my own interpretation, I applied a balanced approach that uses both quantitative and qualitative evidence to support informed decision-making.
INFO 246: Website Critique Report
My third piece of evidence is a critique report of the website thriftbooks.com completed for INFO 246: Information Architecture with Dr. Virginia Tucker (Spring 2025). This critique report is the final submission from a weeklong group forum in which we evaluated a website by completing a “heuristic report card” that used Abby Covert’s Information Architecture Heuristics as our measurable criteria. After completing the report card, we posted our findings in a Canvas discussion forum with our small group members and exchanged responses to one another’s observations. For my own edification, I created a spreadsheet that tracked all of our findings. I then synthesized this work into a critique report that summarizes and reflects on the site through the lens of the evaluative design heuristics.
This item of evidence supports Competency N by showing my ability to apply an established evaluation framework (Covert’s Information Architecture Heuristics) to assess a digital information system, in this case a website. I merged quantitative and qualitative analysis by assigning a numerical grade to each observable heuristic and then supporting those scores with written interpretations. In the report, I discussed the strengths and weaknesses of the website and offered actionable recommendations for improving it. Throughout these steps, my approach was data-driven and reflective, demonstrating the competency in practice.
Conclusion
Competency N centers on using data and assessment not just to measure but to improve. Meaningful evaluation is a foundation for informed decision-making in libraries. Using measurable criteria allows institutions to connect their goals with user needs and demonstrate their impact in clear, evidence-based ways. In my professional future, I will take a nuanced and balanced approach to evaluation, looking for new ways to improve user experience while aligning those improvements with institutional goals. I believe attending industry-specific conferences will show me how other professionals in my field are working toward the same ends.
References
Bakkalbasi, N. (2017). Assessment and evaluation, promotion, and marketing of academic library services. In T. Gilman (Ed.), Academic librarianship today (pp. 211–220). Rowman & Littlefield Publishers.
Hiller, S., & Self, J. (2004). From measurement to management: Using data wisely for planning and decision-making. Library Trends, 53(1), 129–155.
Rubin, R. J. (2004). Demonstrating results: Using outcome measurement in your library. ALA Editions.
Yim, M., Fellows, M., & Coward, C. (2020). Mixed-methods library evaluation integrating the patron, library, and external perspectives: The case of Namibia regional libraries. Evaluation and Program Planning, 79, 101782. https://doi.org/10.1016/j.evalprogplan.2020.101782