Previous Evaluations




There are few documented evaluations of websites produced either by or for CSOs (note 5). Of those published, even fewer have looked at the usage of development-focused websites from the perspective of their users. This is despite the wide availability of writings on the topic (note 6).

Evaluators of CSO websites have tended to conduct quantitative assessments of site statistics, counting hits or the number of pages requested.

Whilst any estimate of site activity must refer to log-file statistics, such techniques, taken alone, are now widely recognised as a misleading measure of website performance (note 7).
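By way of illustration, the sketch below shows the kind of 'hit' and page-request counting that such quantitative evaluations rely on. It is a minimal example only, assuming an Apache-style Common Log Format access log; the file name ('access.log') and the crude definition of a 'page' are hypothetical, and, as argued above, raw counts of this sort reveal nothing about who the users were or what they gained from the site.

import re
from collections import Counter

# Matches Apache-style Common Log Format request lines (assumption: this log layout).
LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+) [^"]*" (\d{3}) \S+')
PAGE_SUFFIXES = ('.html', '.htm', '/')  # crude, illustrative definition of a 'page' request

hits = 0
pages = Counter()

with open('access.log') as log:             # hypothetical log file name
    for line in log:
        match = LOG_LINE.match(line)
        if not match:
            continue
        path, status = match.group(1), match.group(2)
        if status != '200':
            continue
        hits += 1                            # every successful request counts as a 'hit'
        if path.split('?')[0].endswith(PAGE_SUFFIXES):
            pages[path] += 1                 # only HTML documents count as 'pages'

print(f"Total hits: {hits}")
print("Most requested pages:")
for path, count in pages.most_common(10):
    print(f"  {count:6d}  {path}")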

In isolation, quantitative evaluations ignore qualitative factors such as user experience, content quality and other organisational concerns, which are the subject of this research.

Scott Anderson et al. (note 8) provide a detailed critique of quantitative and qualitative research methods for the evaluation of websites. Topics covered include the 'pros and cons' of log-file analysis and the limitations of user surveys.

Anderson's paper does not, however, advocate any specific methodological approach. Its purpose is simply to describe the range of research tools and techniques available to website evaluators.

For the purpose of this research, Anderson's paper makes it clear that whilst impact in its strictest sense is elusive, an accurate assessment of website effectiveness can be undertaken by utilising a range of well-defined research methods.

To evaluate website outputs appropriately, this research adopts a range of both quantitative and qualitative research techniques to examine several aspects of website design and usage: from the technology used through to the opinions and experiences of users.

Other website evaluators, such as Victor Sandoval at the École Centrale, Paris (note 9), have also adopted an eclectic methodology to examine both quantitative and qualitative indicators of website performance.

Yet Sandoval's methodology cannot be taken as a model for other evaluators to follow, since the quantitative aggregation of ill-defined criteria such as 'user friendliness' and 'first impression' leaves us guessing as to the appropriateness of the questions being asked.

The same could be said of another well-intentioned study, namely the evaluation methodology proposed by Batsirai Chivhanga at the Internet Studies Research Group, London (note 10). Chivhanga claims that users of the ISRG web-resource can evaluate the impact of their website using a predefined checklist. But evaluators attracted by the simplicity of the approach are left wondering how, using this checklist, one can actually measure and assess site 'usability' or 'stability'.

Simply asking questions, such as 'Is content thoughtfully collected?', however reasonable, is not good enough. Of whom are we asking the questions? By what method? And what tests have been designed to ensure that meaningful data has been collected?
