Thursday, April 12, 2012

Quality vs. Quantity

Evaluation and assessment. Essential to any fully functioning program. But how do we evaluate the school library program? And what happens if we don't assess it? Are we going to get cut? This is what Wools tries to tackle in chapter 13.

Assessment of our programs is hard, for several reasons. First, we as humans hate to admit when we're wrong. It's painful to admit that you're not doing the best you can. So, we tend to avoid assessment, especially in the world of education.* Second, it's incredibly difficult (nay, impossible) to quantitatively assess a library. Actually, let me restate that: we shouldn't be quantitatively evaluating a library, because, as Wools said, "collections may meet an arbitrary numerical count, but be out of date, in poor condition, or of no value to the current curriculum." This sort of counting of services and systems can be helpful, but it's very one-sided, and doesn't take into account the real humans and the preferences they hold.

So, we need to qualitatively assess our library programs, systems, and services to get an accurate picture of where we are succeeding and failing. This is really hard, because it can be very subjective and it is difficult to accurately identify the problems that lead to failure. As Wools suggests, we should compare "what it is" to "what it should be," using accepted standards and practices as our guide.

Then we can delve into staff assessment, which is a touchy subject. It's touchy because administrators typically only assess teachers, which means they're not necessarily equipped with the right understanding to evaluate librarians. The requirements of running a school library are very different from the requirements for running a classroom, although they do overlap. I think the best way to assess whether you're doing a good job (or your assistant is doing a good job) is to compare your actions against what is required in your job description. In fact, we could apply the same logic used in qualitative analysis: what are we doing vs. what should we be doing. Hopefully, everything aligns.

Let's talk for a minute about collection evaluation. I love that Wools is so open about the fact that, although a collection should be evaluated every year, it probably only happens every five years. Even then, I don't think it happens as often as that. Especially in my current experience, I don't think the libraries have been weeded in decades, if the books from the '40s and '50s are any indication. Of course, just because a book is old doesn't necessarily mean it's useless. However, we have a responsibility to our educational community to provide them with accurate and useful materials, so where does a book from 1973 on the future of lasers fit into that?

So, we've evaluated our systems and services; now what do we do? As Terrence E. Young Jr. says in Better Data, Better Decisions, we should "identify goals for improving the library media center program," "inform principals...and other stakeholders in order to gain support/advocacy for your library media center program," "secure additional funding," and "develop an action plan." In short, we collect data so we can implement the changes it suggests. Why would we ask the question if we just ignore the answer? I also see this as a huge part of advocacy. Document, document, document (or, as my undergrad history teacher said, cover your a**). If you want people to support you, then you need to show them why they should support you. If you have a program that works, flaunt it! If you have a program that's broken, show how you can fix it!

This brings me to the most important, and most difficult, part of assessment: evaluating student learning. How do you assess whether students have learned when all you might get with them is a one-off lesson? And how do you take information literacy skills from rote knowledge to something that is demonstrable in a test or assignment? Jan Mueller, in Authentic Assessment in the Classroom...and the Library Media Center, suggests that we can help students demonstrate these skills. We need to start by collaborating with teachers, evaluating ourselves and our assessment tasks, and then designing our student assessments to meet these needs. In order to effectively assess our students' learning, we need to have more than just a standardized multiple choice test. It may mean we give them a test; it may also mean we evaluate a final product according to a rubric or checklist. Either way, it's going to be individualized to our students' experiences and needs.

*I personally think this is why there is so much backlash against teacher evaluations. If we're doing a good job, then we shouldn't be worried about being observed. Although, there are other political situations that most likely incite fear as well.


  1. I think the part that is difficult for me is knowing that when we talk about "our" students, we mean...all of the students. How do we evaluate all of them? Can we?

  2. The student evaluation part also seemed tricky to me. There's no standardized testing for library/information literacy skills (and even if there were, that wouldn't necessarily help the problem), and librarians aren't like classroom teachers, who can evaluate student progress in other ways.


Thanks for commenting!