Now that hindsight offers a bit of perspective, there has been a lot of discussion about the net effect of the Library 2.0 movement (and I don’t think it’s hyperbolic to call it a movement), focusing on the perceived platitudes of “twopointopians” as well as on whether the new technologies being implemented across so many areas of librarianship are living up to their potential. Two camps seem to emerge: those who think that no blog is better than a dead blog, and those who believe a dead blog is better than no blog at all.
I’ve noticed that, whether in support or criticism of 2.0 implementation in libraries, many take an all-or-nothing stance on the matter, suggesting that if new library technologies are not quickly and thoroughly adopted by users, they are not effective. Furthermore, the logic runs, if such technologies are implemented without regard to user need they are ultimately counterproductive, leeching energy from proven and traditional modes of service.
In “Library 2.0 Debased,” John Blyberg makes the excellent point that “when we use technology, it should be transparent, intuitive, and a natural extension of the patron experience.” In a thorough and thoughtful response to this post, Meredith Farkas attempts to define the ideal Library 2.0, citing a “culture of assessment,” “believing in our users,” technology awareness, and looking beyond libraries for technology inspiration as among its key features. Of her points, I find the following to be most critical to the potential success of emerging tech/2.0 services in libraries:
“Getting rid of the culture of perfect – being able and willing to experiment, learning from failure, being agile as an organization, continuously improving services based on feedback rather than working behind the scenes for ages to create the “perfect” product or service.”
I would add that due to the nature of library patronage (task-oriented and point-of-need based), users simply may not often want or need to use the user-focused technologies we develop to the “social” depth for which they were intended. While a lack of user-generated tagging in a social catalog or an absence of comments on a library blog may prove disappointing, it does not mean that users are not benefiting from these services in other ways. For example, I would argue that the simple act of creating updated subject guide interfaces via a wiki might make them more appealing and usable to the typical user, even if that user would never dream of editing one of these guides. Similarly, posting materials acquisitions in blog format is a much more lasting method than the alternative of updating a static web page, and should therefore be seen as a useful means of content management even if faculty would rather receive such information by email. Most importantly, updating our sites and services using 2.0 tools makes our resources incrementally resemble the rest of the web, which helps stem the tide of library alienation that many of our users experience (and that I observe on a daily basis).
The wide range of benefits that accompany 2.0 platforms can make up for the tepid reception their feedback mechanisms may have received among our users: it’s all in the practical application of the tools we adopt and create, and whether they can be justified as having multiple layers of functionality. While many library blogs have faltered, the WYSIWYG editors that power their content creation (and that of most other 2.0 platforms) have made the idea and practice of web editing that much more accessible to the typical librarian. WordPress and other systems can be modified to provide much more than a simple blog, and dynamic content of any kind can help enliven an otherwise stagnant library web interface.
I have a perfect example of what I would describe as an ideal Library 2.0 success/failure story: the OU Libraries FAQ page, powered by the open source KnowledgebasePublisher software. I co-developed this bright idea of Chad Boeninger’s, which was conceived as a means of providing a dynamic, lasting conversation between librarian and user that would outmode our static approach to frequently asked questions (which every library accumulates, given how involved and jargon-laden library sites tend to be). The software allows the creation of public question/answer pages generated from a submission form and managed through a collaborative WYSIWYG admin interface, allowing multiple librarians to turn patron-initiated inquiries into lasting, visual, interactive FAQ entries.
In addition, users may search, rate, and comment on questions. We embedded a Meebo widget under all search results, which shows prominently if users receive no hits on their query. Each question publicly displays the number of page views, the date created and last updated, the number of comments submitted, and so forth. The management interface lets us sort questions by page views and other criteria, allowing us to see which items are viewed most (and least) frequently – an excellent window into the needs and habits of our web users.
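To make the management view concrete, here is a minimal sketch of how FAQ entries and the “sort by page views” report might be modeled. This is purely a hypothetical illustration in Python, not KnowledgebasePublisher’s actual schema or code; the class and function names are my own.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FaqEntry:
    """One public question/answer page in the knowledge base (hypothetical model)."""
    question: str
    answer_html: str            # WYSIWYG-authored answer content
    created: date
    updated: date
    page_views: int = 0
    ratings: list = field(default_factory=list)   # e.g. 1-5 stars from patrons
    comments: list = field(default_factory=list)  # patron-submitted comments

def most_viewed(entries, n=5):
    """Surface the questions patrons consult most often."""
    return sorted(entries, key=lambda e: e.page_views, reverse=True)[:n]

def least_viewed(entries, n=5):
    """Surface the questions patrons rarely consult (candidates for revision or removal)."""
    return sorted(entries, key=lambda e: e.page_views)[:n]
```

Run against a handful of entries, `most_viewed` and `least_viewed` give exactly the kind of window into web-user habits described above, without requiring patrons to rate or comment at all.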
What we’ve found is that while FAQ entries are viewed often and the question submission form is used regularly, patrons virtually never rate or comment on questions. Out of many thousands of total page views, we have about four ratings and four comments, which, depending on your perspective, translates into either a resounding success or a total failure. Despite the fact that only half of the site’s functionality is used by those it was intended to benefit, I choose to take the resounding-success angle: the FAQs are easy to modify, arguably transparent, and patrons are using the site for what they actually need.
I believe there are many such examples out there – tools developed with one idea in mind only to be used in the way they are actually needed. We cannot create enough surveys or focus groups to ever really predict the success of library services deployed using new platforms and software, but we can attempt to design transparent and flexible tools that can benefit users on multiple levels. This way, if they reject one aspect, perhaps they will find utility in another. And if they reject all aspects, we can fix or ditch them as needed.