
COMMUNITY DEVELOPMENT: Journal of the Community Development Society, Vol. 36, No. 1, 2005
Participatory Action Research for Electronic
Community Networking Projects
Larry Stillman

Larry Stillman is at the Centre for Community Networking Research, Monash University, Victoria, Australia. E-mail: [email protected].

© 2005, The Community Development Society
The paper encourages the adoption of participatory action research methodologies for the evaluation of community technology, given the complex and emergent mix of community development and information technology that these projects represent. Much of the richness of the processes that take place can best be captured through collaborative participatory research that is valued by communities, rather than through less engaged approaches. The use of participatory methodologies will also give communities a better understanding of research processes, leading to more effective uses of new technologies. A participatory action research toolkit for electronic community networking is introduced, and its use is described in several countries. For an emergent field such as community technology or community networking, an adaptation of action research that can provide a “thick description” (the range of meanings, interpretations, and effects of human and technical interactions that come to constitute community networking for community development) therefore appears timely.
Keywords: participatory action research, participatory evaluation, community-based research,
community technology, electronic community networking, community informatics, effective use
Participatory action research, now also known as community-based research (Stoecker, 2005c), addresses a number of key issues facing communities utilizing expanded information
and communications technologies (ICTs). For the practitioner or researcher, participatory
action methods are particularly useful when looking at new fields of endeavor such as electronic
community networking since they can draw on a full range of qualitative means of discovering
community knowledge. Participatory action research is particularly relevant to communities
because of its real-time orientation towards knowledge discovery and utilization and the ability
to utilize the technologies it is examining in documenting activity. Participatory action research
has the potential to empower participants and engender collaboration as social learning
(Wadsworth, 1991; 1998).
A recent international study of the sustainability of information and communications technologies highlighted a number of issues related to the potential effects of community-based ICT on community development. The study identified the need for several key elements:
• Practical community-level research methods
• Community and agency awareness and training
• Business planning skills, and
• Collaboration among agencies, including neighboring ICT and telecenter projects (Roman & Colle, 2002).
These findings reflect a lack of exposure to community-based research techniques. While
there are no studies of the background of those conducting community ICT projects in, for
example, Australia, North America, or the UK, an explanation may lie in the lack of a good
understanding of “social” or “human factors” by many practitioners or researchers engaged in
community networking projects, a phenomenon familiar from information systems practice (Rose,
2000). Practitioners have probably come to engage with communities based on their skill with
new technology, information systems, librarianship, or management, rather than their skills in
community work or community development.
Furthermore, the commitment of large amounts of money to ICTs by governments and foundations over the past decade (at least in developed countries) has probably attracted numbers of people more interested in potential business and technical opportunities (and new careers) than in long-term social and community development.
A recent (June 2005) workshop on Qualitative Research and Community Technology at the
Open University in the UK—in which the author played a key role—highlighted the limitations
of such approaches to community interactions with technology. The workshop suggested a new
orientation of community informatics to qualitative and humanistic forms of research that have a
more rounded understanding of technological innovation in local communities. The workshop
was a significant event because it resulted in the formation of a qualitative issues special interest
group of the Community Informatics Research Network, an international network of researchers
and practitioners concerned with community technology issues.1
USING ICT FOR COMMUNITY DEVELOPMENT: A BRIEF REVIEW
Community technology (also known as community networking or electronic community
networking) is an emerging field of community development that sees new technologies as key
tools for community development. New systems of information storage, creation, and transfer
are powerful resources and tools for community life that intersect with visions of community
development in new and unexpected ways. Community Informatics (CI) is a term used by those coming from an information systems or management systems approach.2 Gurstein defined the field
in the following way:
Community Informatics pays attention to physical communities and the design and
implementation of technologies and applications, which enhance and promote their objectives.
CI begins with ICT, as providing resources and tools that communities and their members
can use for local economic, cultural and civic development, and community health and
environmental initiatives among others (Gurstein, 2000, p. 2).
The Centre for Community Networking Research at Monash University found the following taxonomy useful in its own work in developing research questions. Community network types can include
several categories:
• Individual organizations
Networks based around individual organizations can be large or small, ranging from a small
all-volunteer club or community group Website to a large organization’s Intranet and Internet
services with a mix of volunteer and paid staff, such as the Country Fire Authority in Victoria,
Australia (http://www.cfa.vic.gov.au). While nominally a government site, the password-protected
members’ section contains contact information for local brigades, accreditation and training
information, incident reports, and the myriad of information necessary for emergency services
work. Thus, it complements the face-to-face work (outside of fighting fires) performed by
thousands of volunteer firefighters in a country prone to destructive summer fires.
• Clusters of like organizations
These ICT projects are composed of organizations brought together through a common
interest in using ICT within a specific activity area. An example is a community health or adult
education network in which there is a shared interest in common standardized interoperable
technological platforms for sharing information and communication. Some of these networks
are administered by state or national chapters of organizations on behalf of their members. In
the community development area, see the Association of Neighbourhood Houses and Learning
Centres (http://www.anhlc.asn.au) in Victoria, Australia. Funders may have high expectations
for such networks. For example, Neighbourhood Houses can be viewed as a means by which social and community capital are bonded through technology, as an outcome of activity undertaken in the Houses. Such an aspiration was made explicit by the Victorian Community Services
Minister in 2001:
Places like Neighbourhood Houses are the glue that helps hold communities together. The
funds to get them online, upgraded and get staff internet-trained will help give access to the
information age to people who otherwise might not have access. . . . The bottom line is
stronger communities. We know strong communities mean fewer social problems and less
isolation, crime and homelessness.3
• Cross-sectoral collectivities of geographically-based stakeholders
The following statement by Amy Borgstrom in the United States exemplifies the community
development ideals behind this type of network:
Community networking to me comes out of a sense of place. Community networking to me
is what happens when a group of people in a physical geographical community gets together
to solve problems and respond to opportunities. This can happen in a church basement, a
local council office, or a meeting like this. Community networks are the electronic public
spaces, the communication and information tools that we can use to facilitate the work we
do to make our communities a better place to live (Borgstrom, 1999).
This collectivist approach is often regarded as the ideal type of community network, which
attempts to include and reach out to everyone in a particular geographical area. There are high
expectations of positive effect on social capital (Putnam, 1995).
Well-known models include the Freenets, the Seattle Community Network (http://www.scn.org), the Blacksburg Electronic Village (http://www.bev.net) (Cohill & Kavanaugh, 2000), and the Missouri Express (Pigg, 2001), all in the United States of America; “Netville” in Toronto, Canada (subject to longitudinal study by Wellman and others);4 and more recently, the Range Intranet in a new housing development in Williamstown, Australia (Arnold, Gibbs et al., 2003). Another example, targeting disadvantaged families, is the Computers in Homes project (http://www.computersinhomes.org.nz) in Wellington, New Zealand.
• Civic networks
Such networks are usually linked to government or government instrumentalities, and they
act as a portal for a full range of community information, with different degrees of community
participation, governance, and opportunity for self-publishing and content creation. Examples
include VICNET, part of the State Library of Victoria, Australia (http://www.vicnet.net.au), or
the Civic Network of Milan, Italy (http://www.retecivica.milano.it).
• Service and application provider organizations
Provider organizations are based around volunteers and paid staff. They offer technical support for the use of hardware and software applications to community-based organizations wishing to go online. Examples include computer support and recycling organizations such as Computerbank (http://www.computerbank.org.au), the Intranet and database specialist network Infoxchange (http://www.infoxchange.net.au) in Australia, Techsoup (http://www.techsoup.org) in the United States, and international community development activity supporting the development of ICT infrastructure in East Timor.
While the above taxonomy is useful in developing general descriptions of different types
of community networks, the impact of ICTs on communities is subject to continuing study and
dispute within the emerging community networking research literature (Gurstein, 2000; Pigg,
2001; Wellman, 2001; Malina, 2002), and there is no agreement on the core research questions
for the field (Stoecker, 2005b).
Consequently, governments, funders, and communities do not always have the same expectations of the purpose and outcomes of projects that are meant to use ICTs in communities.
By extension then, evaluation questions are frequently unclear. The notion of “effective use”
has gained some currency as a problematizing phrase in community technology listservs and at
different conferences, in addition to the equally complex idea of “network sustainability,” another
concern of many funders’ project managers (Gurstein, 2003). This paper will focus on the first
phrase as a starting point for discussions about evaluation, though the term “effective use” itself
needs clarification during the research process: what a community considers “effective” (and by
implication, “sustainable”) may not be thought so by the funders. With this consideration in
mind, how can participatory action research contribute to discovering “effective use” in community
networking projects?
However, not all community networks (as the taxonomy shows) are the same, and this natural variability explains the great difficulty of applying templates to plans or models for evaluating community networking endeavors. This more flexible approach, in turn, challenges the more formal and prescriptive orientation of many bureaucracies in funding and administering community technology programs. They are risk averse, and they do not always appreciate failure or the unexpected. Participatory action research may be a way of educating them about a new approach to technology projects, given the valuable and grounded information that such research generates.
THE CONTRIBUTIONS OF PARTICIPATORY ACTION RESEARCH FOR EVALUATION
Participatory Evaluation is a term covering a range of methods, from a non-ideological pragmatism for limited briefs to highly collaborative techniques with the aim of social transformation and empowerment (Whitmore, 1998). My particular interpretation of it, as Participatory Action Research for Evaluation, is an ongoing engagement between the researchers and the researched to develop, implement, and evaluate project or program processes and outcomes. This form of engagement is particularly relevant to community technology, in which the interaction between people and technology in community development is nascent, and more prescriptive (particularly quantitative and technically-oriented) information systems methodologies are unlikely to capture the multidimensional complexity of technology use and its effect in communities.
The methodology firstly assumes that “the researched” (“the community”) has particular
and often tacit knowledge about how technology is being used. This knowledge can only be
discovered through active engagement with the community. Secondly, it assumes that the process
of such action research (in this case, about “effective use”) is one that also provides for
transformative effects, including enlightenment and improvement of the community.
With its insistence that people’s views, knowledge, and skills are valuable and valid, Participatory Action Research is thus part of the family of interpretive traditions
of research that accept and value the existence of multiple and even contrasting “definitions
of the situation” (Berger & Luckmann, 1966), in natural and real world settings, as opposed
to the positivist family of research with its assumption that only clearly demonstrable,
calculable, and verifiable facts have empirical value. Participatory Action Research is a
form of research that makes no claim that its outcome is ultimately “objective” or true, but it
accepts that we can only know the world imperfectly and makes its best effort to describe
that imperfection well. What we can hope for is to improve on that knowledge constantly
through cycles of action and research.
In contrast, positivist research, including much quantitative social research, is very hard to
conduct in community development settings, because it is impossible to control for real world
social variables and unanticipated outcomes. Prepared survey instruments, often used to generate
numbers, can exclude much that is valuable, particularly if it cannot be put into a check box.
Participatory Action Research, because it is embedded in a reflective understanding of human
process, has no such problems: it has no need to “control” variables or the unanticipated, because
that is what it is looking for, in developing a multi-layered, rich, and demonstrable evaluation of
what is happening in a community.
This form of research is also clearly linked to values around increased democratization
and capacity in communities, familiar from the work of Freire, Illich, and others (Fetterman,
1994). Indeed, the very notion of “researcher and researched” is challenged as reflecting
power inequities, when ideally, more equal, pragmatic, and respectful relationships should be
the norm, resulting in a contribution to the particular community as well as the enlightenment
of the practitioner or researcher.
EVALUATION USING PARTICIPATORY ACTION RESEARCH FOR INFORMATION AND COMMUNICATION TECHNOLOGIES
As part of a new concept of community technology practice, community technologists need
to understand the purpose of evaluation. Why evaluate in the first place? While this question
seems obvious, it is challenging to answer, particularly if there is a concern for the development
of authentic and meaningful data that are consequential to participants and funders.
The following outline explains some of the motives for evaluating a project. More often
than not, a report on a project needs to be prepared, providing information about why the project
is justified, what happened, and how funds were expended. Such a report is highly desirable for
a number of core reasons (Owen & Rogers, 1999):
• to provide accountability and decision-making capacity for funders, as well as bureaucratic
and political masters, about the future direction (to continue, alter, or terminate) of projects
and programs;
• to provide accountability and a means of community development for participants in programs;
• to satisfy genuine intellectual curiosity about the processes and outcomes of the uses of ICT;
and
• to enable the development of new methodologies, theories, and concepts that can be applied
with a degree of reliability (and risk reduction) in new projects.
Program evaluation can occur at three core stages of a program or project:
• Before: as a structured “up front” planning method for determining the feasibility of further
stages of a project.
• During: during the life of a project or program as a means of ongoing accountability, clarification, and enlightenment about process, program, or project alteration, and direction-setting. This is sometimes referred to as a formative evaluation.
• After: post-program or project, for determining final impacts and recommendation. This is
sometimes referred to as summative evaluation.
The third category is probably the most familiar to practitioners; yet, it is the least effective means for looking at ongoing processes during the life of a project. This type of evaluation takes place after the event, when projects are completed and people and documentation are often irretrievably dispersed. As a consequence, the particular knowledge and human capital that exist in formative stages and during the life of a project can be the most difficult to capture
and easiest to lose. Given the complexity of human-technology interactions in community
technology projects, including unanticipated outcomes, and given the difficulty of conceiving of,
or implementing pre-designed and prescriptive evaluation methods to capture the complex and
thick form that emerges (Geertz, 1973), some of the answers about how to, and what to capture
in the evaluation will only come from using participatory evaluation before, during, and after a
project has taken place.
I suggest that participatory techniques used during all stages of an evaluation have the potential
to establish a cycle and culture of evaluation that is useful for all stages of community networking
projects. The information discovered will move beyond a technical focus or number-crunching
(e.g., numbers of participants) to one that reflects the process and outcome of community change
(How did participants’ lives change through using ICTs?). Another similar question is to ask
what forms of multimedia (video, digital recordings, or Websites) capture “community spirit”
and the fine-grained discourse about community memory that emerges.
I am reminded here of a story told by one of the participants at the Open University workshop.
He said that water is “socially-constructed” in many villages in India. At first, we did not understand
the “social construction of water.” He explained that this was a way of describing how the water
was understood in village life. It was not just something used for cooking and washing, but it was
also the important responsibility of women to manage the water resources. Better management
of such a valuable resource thus involved engaging with women’s lives and their capacity to
learn about the significance of water in their lives.
An apparently simple “water + Internet” project is in fact a complex project about women’s
community development, the effect it has on their families, literacy, and use of technology (for
example, the ability to send email). Getting information about how these women used radio and the
Internet to “understand” water in new ways and manage it in different ways (“effective use”) can
only be captured through careful participatory ethnographic work. The emphasis of research moves
from a technical discussion of the outcomes of measures promoting health (such as the reduction of disease) to a much more complex, but valuable, discussion with and among the women themselves about changes to their lives through new understandings of water gained through interaction with
new technology (since they can also exchange messages about water through the technology).
Getting them to document the changes that occur is a challenge both to the researcher and to people
with generally low levels of education.
However, the literature I have reviewed (such as field reports about ICT projects and more traditional academic writing) shows that so far, there has only been a limited application of participatory action research techniques, not to mention recognized evaluation methods, to evaluate the different stages and purposes of community networking. In his review of the applications of
evaluation in the early days of community networking, Frank Odasz first focused upon classic
questions of qualitative and quantitative methodology, such as:
How can a community network be defined, sustained, and evaluated for effectiveness and
efficiency …[A] numerical analysis stops short of presenting the whole picture…. Community
networking will forever be a very human phenomena (sic) with more variables than can be
accounted in numerical surveys. We must find ways of measuring how people benefit and
how people can be taught more efficient ways of achieving yet greater benefits (Odasz,
1994).
In another study, Gygi argued for the incorporation of evaluation procedures into the planning
and implementation of online community networks, based upon what she called a “strategic
planning and development indicator” approach used by community economic development
practitioners. She argued that there was the need for an enumeration of the “chain of events” that
link project goals to specific activities and outcomes, and to account for unanticipated, possibly
negative, outcomes. Gygi’s discussion touched upon important issues of causation and elucidation
of program theory or assumptions behind programs and projects, issues that are dealt with
extensively in evaluation practice (McClintock, 1987; Gygi, 1995; The Aspen Institute, 1995;
1999; 2002).
The challenge raised by Odasz’s and Gygi’s queries has still not been adequately addressed
in community networking literature, despite the more recent study of O’Neil (2002), which
analyzed numerous evaluation reports. O’Neil provided important information about key
dimensions that indicate the success of a community networking project. These dimensions
include strengthened sense of democracy, social capital, individual empowerment, sense of
community, and economic development opportunities. However, this research did not critically
discuss the application of particular forms of evaluation, nor did it reference evaluation literature.
A further step was taken by Gurstein in his 2003 paper. He identified “effective use” as a neglected concept in the conceptualization (and evaluation) of the uses of ICT. Prior to his reframing of the (at least governmental) role of supporting community technology, the discourse had focused around “access” and “sustainability,” usually meaning hardware and bandwidth access issues, but it had neglected the social dimensions of technology.
In his important attempt to reframe the debate, he defined effective use as “the capacity and
opportunity to integrate ICT successfully into the accomplishment of self or collaboratively
identified goals,” though the dimensions of his discussion are still framed as a technical and
technocratic “social facilitation” issue, rather than a “bottom-up” community development issue
(Gurstein, 2003, p. 7).
Despite this conceptual difference, such a definition of “effective use” appears ripe for
elaboration through action research, and indeed, the CRACIN project in Canada sees participatory
action research underlying many of the components of a large, nationally-focused study of ICT
use. CRACIN is a large-scale project investigating a full range of government-supported ICT
programs across Canada involving researchers nationally and internationally. However, CRACIN’s interest in action evaluation appears to be oriented towards developing recommendations about macro-level program or project development for government and others, rather than locally-based and participatory action research for community development.5
THE BENEFITS OF PARTICIPATORY ACTION RESEARCH IN EVALUATING COMMUNITY NETWORKING
Community networkers and their participant communities can learn and employ methodologies
of participatory action research. Participatory action research is ideally collaborative, conscious of
its assumptions, and iterative—it reflects upon its methods, questions, and multiple answers, and
uses these to move onto the next stage of action. It places a community of interest at its core and
regards the evidence brought forth by participants as critical data. It is intellectually rigorous,
looking for contestation of ideas and discovery of answers through the interaction of participants
(Wadsworth, 1991; Fetterman, 1994; Fetterman, 1997; Wadsworth, 1998).
A well-conceived participatory and collaborative evaluation will benefit all stakeholders—
funders, paid and volunteer staff, clients, and the community at large. In fact, many participants
become enthusiasts for evaluation, once hesitations about self-examination have been removed
(Cooper, 1997). At least one project report of an ICT initiative has recorded the usefulness of
action research, using storytelling in a methodologically rigorous way (Harris, 2000).
The cycle of activity in participatory action research—as a form of action research—revolves
around identifying problems or issues, studying them, reflecting upon them, and then implementing
or changing activity in a new project phase. With this perspective in mind, I suggest that if we
can harness participatory action research to the study of the process and effects of technology on
community development, then a major hurdle in understanding and demonstrating what electronic
community networks do and mean to people will have been surmounted.
Thus, the use of participatory action research as an “up front” planning tool prior to initiating
a community technology project should contribute to its improved conceptualization and
implementation. It then follows that formative and summative answers of interest to funders
(most often about ongoing and final outcomes and accountability) will be discovered, alongside
answers of more interest to those interested in process (how change was achieved, what change
meant to people, etc.).
Participatory action research has the capacity to incorporate change, and change is endemic
in people’s adaptation of technology in community settings. Participatory action research
recognizes that time is an indispensable dimension in human activity, and that action is never
stable throughout a research cycle. Because new technologies are often about the structuring of
activity and communication across time and space, this aspect of participatory action research is
attractive since it has an inbuilt sensitivity to the time dimension in human action.
An understanding of the emergent nature of the evaluation process has some correspondence
with socially-based interpretations of the use of ICTs. I have discussed the example of the
relationship between water and technology in development projects. Rather than seeing technology
as a “black box” into which people push information and something comes out the other end,
ICTs are considered as “technology-in-use,” a changing and frequently unpredictable inter-relationship between user and machine, affected by time and space, two key dimensions that
affect the structuring of contemporary forms of social organization (Giddens, 1981; Giddens,
1991; Orlikowski, 1992; Orlikowski, 1999).
Consequently, despite the attempts of technology designers to “inscribe” particular patterns
of use, at the practice level, technology is interpreted in personal ways, depending on such factors
as gender, skills, attitudes, or place in an organization, or more broadly, the general community
(Bijker, 1989; Bijker & Law, 1994). A now-familiar example is the highly variant ways in which
people set up their PC “desktops,” much to the chagrin of someone borrowing a friend’s PC. The
same general effect is familiar from the different ways in which telecenters are used. Telecenters
can be expensive propositions to set up and maintain, but despite the best intentions of technical
designers and program managers, they are sometimes rejected in particular communities for
reasons that are social rather than technical. The implications of such differences should not be underestimated because they are commercially significant and wide-ranging. Although Intel Corporation, one of the giants of computer hardware manufacturing, has not made its research publicly available, the author has seen presentations about its own anthropological research on telecenters; from a commercial perspective, Intel is interested in knowing how cultures in developing countries use technology.
The obvious observation that technology use is socially situated in particular time and space settings—despite the best wishes of technological determinists or bureaucrats to predict
patterns of use and their outcomes—helps explain and justify the inherent differences we
find in how people use similar pieces of technology. Technology is multidimensional and
emergent in its effects, particularly when it is meant to work with people separated across
time and space in developing new forms of community. Its evaluation has to be reflexive
and socially constituted.
THE CHALLENGES OF PARTICIPATORY ACTION RESEARCH
We cannot find out everything that goes on in a project, and choices need to be made about
which social or technological effects to study and how to assess them. In the Open University
workshop, a problem that arose was the overload of data from ICT projects. It might be suggested
that this problem could have been solved in many projects by prior thought about the processes
of data management. Program evaluation brings an experiential familiarity with the need to choose what questions to ask, which methods will find answers, and how to manage the data.
The priority of any evaluation process is to narrow down the number of questions to a
set that is acceptable to all who have a stake in the evaluation, through a process of negotiation
and evaluation project clarification with stakeholders. Introducing participatory action research assumes a more intensive and open process for this (Smith, 1989;
Wholey, 1994; Owen & Rogers, 1999). The General Accounting Office of the United States,
responsible for innumerable government audits and evaluations (including studies of the use
of ICT), puts it this way:
The first and surely the most fundamental aspect of every design effort is to ensure that
questions posed for the evaluation are the correct ones. Posing a study incorrectly is an
excellent way to lead a study in the wrong direction. In fact, reaching agreement with the
sponsors, users, program operators, and others on the contents and implications of a question
can be difficult and challenging. …How a problem is stated has implications for the kinds of
data to be collected, the sources of data, the analyses that will be necessary in trying to
answer the question, and the conclusions that will be drawn (United States General Accounting
Office, 1991).
From a perspective of community development, project participants need to be engaged
in the initial determination of questions to be researched and methods to be used (for example,
in defining what “effective use” means). Practitioners must anticipate this engagement for
several reasons. First, in emphasizing community development, participation is educative,
democratic, and empowering. Second, in dealing with questions of community development
and its interaction with technology, many questions and answers about human behavior are
inherently ambiguous and changing. Third, because ICTs are complex and people respond
to them in very different ways, such questions cannot be answered through formulaic or unidimensional survey techniques at the end of a particular phase of a project. We need to be
able to account for unexpected and unanticipated factors throughout the life of a project.
Furthermore, because we are working with networks of people, we are seeking a holistic
understanding of participation with a particular technology and the means by which it contributes
to community development, not just to patterns of individual use or understanding.
If an agency commissioning an evaluation does not want surprises, or cannot accept non-quantitative data, this is not the way to go. It is not a form of evaluation to be bolted on from the outside with a high degree of control of process and outcomes. The following axioms should therefore be carefully considered if such a form of evaluation is to be undertaken.
First, as a participatory and collaborative process, research questions can be agreed upon to
accommodate high expectations, to ensure that the process will be ethical and collaborative, to
confirm that expectations are realistic, and to verify that interpersonal relations—including the
effects of “groupthink” and interpersonal politics—can be well-managed. Of course, given the
culture of power, gender, and race relations in different communities, none of these factors can
be assumed or taken for granted. It assumes that an evaluation plan and its questions can be
assembled to satisfy all stakeholders.
Second, it is resource-intensive: high-energy work requires process, diplomatic, and documentation skills, and sensitive leadership. The researcher needs to be in it for the long haul.
Third, it expects discipline and rigor of participants, and this cannot be assumed to be a
strength with all people. Some people are likely to be better contributors than others. This
characteristic is likely to bring into sharp focus the tendency to depend on the expertise of certain
participants (particularly the facilitator) to provide solutions.
Fourth, it is not a form of social research to be undertaken when an evaluation contract is formulaic or politically motivated, with little room for innovation, risk, or change of direction on the fly. It is necessary to expect the unexpected, notwithstanding contractual demands for answers to particular (and sometimes absurd) questions.
Finally, the scalability of the proposed method of action research will be greatly affected by
the resources (people, time, organizational support) available to the researcher. It is probably
best-suited to small-scale projects whose results could be clustered for comparative purposes.
Consequently, the components outlined below are best suited for group process on one or several
occasions, in which participants bring along their documentation for working and recording in
groups. With increased use and familiarity, the methodology could be adapted for larger project
settings, utilizing Internet technology. For example, online blogs and wikis (types of instant
diaries, using free, open-source software) could be used for recording and sharing collaborative
data across dispersed sites.
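As an illustration of that last point, the following minimal sketch (in Python; all field names and the sample entry are hypothetical, since the paper does not prescribe any particular software) shows how dispersed sites might record observations in a common structured form for sharing through a blog, wiki, or similar channel.

    import json
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class Observation:
        """One participant-recorded observation from a project site."""
        site: str        # e.g., a telecenter or Neighbourhood House
        author: str      # the participant or facilitator recording the entry
        criterion: str   # a community-agreed evaluation criterion
        note: str        # the observation, in the participant's own words
        recorded_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_shared_record(obs: Observation) -> str:
        """Serialize an observation for posting to a shared blog or wiki."""
        return json.dumps(asdict(obs), indent=2)

    entry = Observation(
        site="Community house, outer Melbourne",
        author="Volunteer coordinator",
        criterion="capacity building",
        note="Two members ran the email workshop this week without outside help.")
    print(to_shared_record(entry))

Entries recorded in this way remain in participants’ own words while still being structured enough to be collated and cross-referenced later in the evaluation.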
A “TOOLKIT” OF PARTICIPATORY ACTION RESEARCH IN ELECTRONIC COMMUNITY NETWORKING
The Centre for Community Networking Research (CCNR) at Monash University, Victoria, Australia, was initially funded through the support of the Victorian State Government in 2000 to develop specifications for an action research “toolkit” as part of a project to evaluate a government-funded community networking project.6 Significantly, the Centre is part of a faculty of information technology, which means that it has also had the role of providing awareness of the social dimensions of technology to information management and information systems specialists, as well as to government clients. The Australian Centre has been instrumental in supporting conferences on community technology at the Monash Centre in Prato, Italy, bringing together researchers and practitioners internationally, as well as events such as the workshop at the Open University.
Recently, the Centre was challenged to develop methods demonstrating to Australian
governments the significance of effective community collaboration in designing and evaluating
projects using community technology. The government has taken a direct approach, drawing
upon more hierarchical and traditional methods of evaluation and policy development.
It should be noted that the toolkit remains a conceptual work-in-progress, having only been
used in part, but its principles have guided and inspired the development of research projects for
the Centre, including an Australia-wide survey of technology use in community-based
organizations, consultations with government and community on community-technology policy,
and research into networks of particular non-profit groupings in the State of Victoria.
Through their own work, a number of individuals provided inspiration for the idea of a toolkit. David Wilcox, Drew Mackie, and others in the UK, and Terry Grunwald in the United States, are members of a consultancy called Making the Net Work (www.makingthenetwork.org). The author collaborated with them occasionally and found their skills, resources, and techniques particularly useful in group facilitation around issues involving community technology.
Furthermore, Randy Stoecker of the University of Wisconsin has worked with the author
and other members of the Centre for Community Networking Research on several occasions.
His advocacy of community-based research techniques for a wide range of community-based
projects, although developed independently of the community technology focus of the toolkit,
supports its emphasis on community engagement for creating and recording rich and meaningful
data, as well as for encouraging the process orientation towards social change (Stoecker, 2005c).
Each of the elements below is a building block to create effective engagement with the
community by capturing what goes on in technology projects, and at the same time, providing
these projects with self-evaluation skills. Through these elements, communities will become
empowered and engaged in their use of ICTs.
Element One: Facilitation
A better understanding of the social dimensions of community networking depends upon
developing good relationships with the community, particularly on the part of information technology
specialists. In developing evaluations, practitioners need to develop facilitative skills to work with
communities, to build confidence and the capacity to participate effectively in participatory evaluative
processes. The function of a facilitator is to help preserve project memory and impetus, which can
easily go astray in community-based initiatives. This is a familiar story to all those engaged in
community organization and development, in which the facilitator has many roles—enabler, broker,
advocate, bureaucrat, administrator, researcher—and in ICT settings, a technological “Mr. or Ms.
Fixit.” (Grosser, 1973; Greenwood & Levin, 1998; Schafft & Greenwood, 2002).
Element Two: Workshop Processes
Educational workshops can help participants learn to contend with various issues:
• Boundaries: Workshops can help participants learn how to set limits in what is expected
and not expected of community, government, or other associates in an evaluation.
• Process: By participating in a workshop, participants can be encouraged to acquire
skills in building trust, team spirit, and leadership.
• Data Framing and Capture: Through the use of matrices (described in more detail below),
processes and outcomes for projects can be realistically defined, including expectations
for performance indicators, benchmarks, and milestones. These elements can help
participants learn how to manage technical and non-technical issues. At reporting stages,
participants can fill in these “matrices” as the evaluation proceeds.
One set of tools is particularly recommended for instructional workshops. The partners of Making
the Net Work designed extensive strategies in planning “games” involving formatted templates (in
fact, a type of knowledge-structuring matrix). Game players may practice their skills in planning for
community development with ICTs. The “games” employ typical community network situations that
can be downloaded and adapted for players to practice making decisions: the “how, what, where, why,
and by whom” related to a community network. During workshop sessions, players receive sets of
training cards with likely key technical and community issues—the cards become icebreakers,
brainteasers, and prioritizing tools, as players document their ideas on matrix sheets.
In the current model of the “game,” evaluation is seen as an optional, rather than integral, part of the training process in learning exercises, reflecting the “summative” rather than formative or process-oriented capacities of evaluation. The partners of Making the Net Work could easily incorporate complementary materials to teach a participatory “layer” of decision-making, thus allowing players to practice evaluating issues and processes at various stages of a project (the “Before, During, After” facets). Thus, the game could include valuable educational measures to help players improve their skills in data collection, management, and reporting.
The partners of Making the Net Work suggest seven essential criteria to help participants
and players learn how to evaluate community networking projects, and players could use these as
needed during the life of, and after the completion of the project. These criteria are characteristic
of qualities desirable in individual planners:
• Creativity
• Connectivity and connectedness
• Confidence
• Competencies
• Capacity building—self-reliance and “ownership”
• Choice
• Content.7
Similar to the cards in the planning game, players should regard these seven criteria only as
starting points, and project participants must engage their planning skills to select methodologies
most relevant to their project. Engaging members of a community network in finding out how
such criteria would be evaluated—and then setting about recording that story (using the framework
of matrices outlined below)—are of themselves important exercises and methodologies in
community development.
The “Melbourne Outer Fringe Project” illustrates the potential for employing these
methodologies to resolve disputes. In this case, a government funding agency engaged the Centre
for Community Networking Research to help develop a plan for implementing technology for
certain community groups because a year-long impasse between vital community players and
government officials obstructed the project. No one could decide who should take the first step
or what that step should be. Government employees needed to reset the “circuit breaker,” and
CCNR was asked to provide the “spark” to generate action to move the project forward.
CCNR set in place a process to engage the community in self-evaluation during the planning
stage. After telephone calls, emails, and two face-to-face meetings with key community
organizations and stakeholders, CCNR held two participatory workshops to identify community
needs and wants, to engage new participants, to locate additional resources, and in particular, to
develop a detailed business plan for the project. To the surprise of well-established community
members, new players enthusiastically came onto the scene, attracted by the opportunity to own
a new project and build new community links with technology.
The Making the Net Work process was modified to meet local needs for the workshop, and
the Making the Net Work planning game was used to generate ideas about the shape of the new
community project, its priorities, and its timelines. The outcome was the generation of new
ideas, priorities for action, and a community committee of both “the usual suspects” and “newbies”
in a new coalition. Although the CCNR contract did not ask for follow-up, over a year later, the project is thriving in the community, with a well-used Website and an active committee of management.
Element Three: Matrices
What is a matrix? Dictionaries define matrices as structures within which matter is contained,
and mathematically, as “a rectangular array of numerical or algebraic quantities treated as an algebraic
entity” (American Heritage Dictionary). Obviously, the mathematical analogy is not strictly
applicable to data that will be mostly qualitative (which can be mixed with more quantitative data),
but the notion of an array—a grid-like pattern of data matched against particular qualities—is first,
a useful way of categorizing qualitative and quantitative data in participatory action research, when the process of documentation can easily get messy and unstructured. For example, a set of important evaluation criteria developed by the community could be matched against short-, medium-, and long-term time frames in a sort of spreadsheet, with relevant data filled in or referenced to particular boxes. Second, because a matrix mimics a numerical or logical structure, it displays structured
meaning—important when demonstrating the rigor of particular observations or conclusions.
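To make the notion of an array concrete, here is a minimal sketch in Python (the criteria, horizons, and evidence entries are hypothetical; a real matrix would be negotiated with the community) in which community-defined evaluation criteria form the rows, time horizons form the columns, and each cell accumulates references to qualitative evidence.

    from collections import defaultdict

    # Community-negotiated evaluation criteria (rows) and time horizons (columns).
    CRITERIA = ["confidence", "connectedness", "capacity building"]
    HORIZONS = ["short-term", "medium-term", "long-term"]

    # Each cell accumulates pointers to qualitative evidence: interview notes,
    # workshop cards, stories, or references to other records.
    matrix = defaultdict(list)

    def record(criterion, horizon, evidence):
        """File a piece of evidence against a criterion and time horizon."""
        assert criterion in CRITERIA and horizon in HORIZONS
        matrix[(criterion, horizon)].append(evidence)

    record("confidence", "short-term",
           "Workshop card: 'I set up the group's email list myself.'")
    record("capacity building", "medium-term",
           "Committee minutes: the Website is now maintained without outside help.")

    # A simple textual rendering of the grid for reporting back to participants.
    for criterion in CRITERIA:
        for horizon in HORIZONS:
            for item in matrix[(criterion, horizon)]:
                print(f"[{criterion} / {horizon}] {item}")

Even this crude structure preserves the two qualities just noted: qualitative material stays attached to the categories it evidences, and the grid itself displays structured meaning.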
The use of knowledge matrices for focusing evaluation questions, organizing data, and presenting results is well-documented in the research literature, albeit from an academic rather than a participatory approach (Miles & Huberman, 1988; 1994). The staff of Making the Net Work developed the technique of transferring the matrix structure from individual research to the group level
through their templates for workshops. The matrix structure suggests certain ideas about
technology, but group work generates local priorities and ownership of particular issues, as well
as the generation and documentation of new ideas and processes.
Matrices provide frameworks to develop methodologies promoting action research. Individuals may be familiar with creating lists and classifications, and the process of creating a visual continuum of relationships between data and ideas in a grid fashion may be a familiar activity to some. In a group setting, the matrix gives community development practitioners the opportunity, through the generation of ideas and the manipulation of text and other data or records, to sort and prioritize information under emerging categories. In the Melbourne Outer Fringe
Project, Making the Net Work templates were transferred onto butcher’s paper. At times,
participants struggled to keep their hands off the paper when simultaneously they all wanted to
reposition the small cards with different technical and social ideas—such was the energy generated
in developing the matrix! The processes generated through the matrix technique are akin to
“grounded” or “from-the-ground-up” theories. As the term suggests, theories generated from the
ground up, through generated information, have proved to be some of the most powerful designs
in qualitative research (Glaser & Strauss, 1967).
As already noted, although the Making the Net Work materials were focused on learning
skills in planning, the game can be developed to focus on evaluation questions. Empowered as
agents, participants can pose strategic questions (or ideas for them can be generated), suggest
data collection methods, and collect post-event data. Data collected can then be sorted and
cross-referenced to particular categories in the large-scale matrix.
The matrix can be seen as an instrument not only for one group, but also for many. For
example, in a workshop in New Zealand in 2001, practitioners suggested to the Maori that they
could use matrices to signify hierarchy in iwi, hapu, and whanau status (familial and tribal
organization). The matrices could record multiple “snapshots” and case studies using ICT, in
deference to other methods that local groups use in collecting and processing information.
Localized and cluster workshops could develop questions on evaluation to be filled in on paper
or online. Members of the Maori community recorded multiple sources of evidence they gathered
through various methods, which demonstrated the rigor, depth, and richness of the research method
as the process itself educated and empowered the community (Patton, 1990; Miles & Huberman,
1994; Hurworth, 1996; Patton, 1999).
Element Four: Outcomes
In at least three areas, practitioners anticipate improved outcomes through use of the Toolkit,
though, of course, other outcomes could be expected or planned for in an ICT project.
1. Improved organizational capacity. Participatory evaluation could be used to gauge effective
use including, for example, abilities at self-help with technical issues rather than dependency
on outside help. An evaluation using participatory action research could help an organization
improve its internal processes in planning and implementing ICT projects, which include
changes in patterns of knowledge transfer between staff and its constituent community, as
well as integration of ICT planning into overall strategic planning activity. For example, in
a project conducted with Randy Stoecker with Neighbourhood Houses, a type of community
support center network in Melbourne, Australia, we helped community workers find solutions
to problems using technical support for their computer networks. The workers came to
realize that many solutions for their problems lay in better collaborative training and support
networks that met the particular culture of part-time and volunteer work they engaged in.
While this project did not use a data matrix, it did emphasize the valuing and collecting of
information in the workers’ own words, and this has had an impact on policymakers as well,
leading to changes in funding (Stillman & Stoecker, 2005).
2. Improved levels of community and social capital. This area is of great interest to many
funders. If practitioners assist an electronic community in undertaking evaluative
planning to use technology to build social capital for the community, practitioners can
develop realistic expectations for using technology to create social change in the community.
The local community can select meaningful categories of data that realistically reflect
what changes in social capital actually mean to the community. If practitioners teach
members of a community skills to evaluate a project during the life of that project, the
community can use those skills in self-assessment for other purposes.
3. Improved documentation of outcomes. Practitioners can anticipate improved documentation
of outcomes by developing and implementing realistic data-gathering and analysis methodologies, as well as creating realistic performance indicators and benchmarks.
Practitioners intend to create documentation that stakeholders regard as useful, valid, and
reliable (the audit or evidence-trail). Stakeholders are far more likely to adopt and use well-documented evidence, recommendations, and conclusions. This has certainly been the case in
the work with Neighbourhood Houses.
CONCLUSIONS
Community technology requires evaluation methodologies in participatory action research
that engage members of communities. The evaluation of technology use needs to be understood
not just as a “technical” or quantitative measurement problem, but as a process that recognizes
the dynamics of community engagement in all its complexity. Such evaluations need to be engaged
with and understood by communities. “Effective use,” a term that has gained currency in
community technology circles, should be linked to a process that respects and engages
communities in harvesting their own, particularly tacit, knowledge.
To evaluate processes focused on community development, practitioners are advised that
participatory action research is ideally suited as a means whereby participants can discover
knowledge and educate themselves, given that a primary goal of community networking is to
engage and to empower the community.
Although many individuals may be challenged in implementing or managing community
technology initiatives, participatory action research offers the advantage of allowing individuals
to evaluate a project during its planning, formative, and summative stages. It can assist in the
discovery of skills and knowledge that would otherwise elude traditional methodologies based
on non-engaged evaluation.
Furthermore, although elements in the “toolkit” used to evaluate projects may represent a
challenge to the thinking and practice of those engaged in community technology, participants
can learn to use resources and techniques successfully. Participatory action research requires
practitioners to be prepared to change direction and to cope with the unanticipated, characteristics
of “good” community development. It requires practitioners and participants to accept well-structured, largely qualitative data collection and management methodologies, in preference to
more traditional and prescriptive means. Indeed, given the ever-changing nature of ICT, it is
likely that many answers to how technology is being used in community development will only
be discovered by intensive research activity at different points in the life of a project through
engagement with key stakeholders—the participants.
NOTES
1 Roundtable Workshop: Supporting Community Through ICT, Open University, Milton Keynes (UK), June 23, 2005. Electronic Proceedings, including edited online video of sessions, are available via http://kmi.open.ac.uk/events/ci2005. A special issue of the Journal of Community Informatics (http://ci-journal.net/) is also anticipated.
2 See http://en.wikipedia.org/wiki/Community_informatics for an entry substantially developed by the author.
3 http://www.dhs.vic.gov.au/humanservicesnews/apr01/wwn.htm, April 2001.
4 See publications at Barry Wellman’s website, http://www.chass.utoronto.ca/~wellman.
5 See http://www.fis.utoronto.ca/research/iprp/cracin/ for links to the project proposal.
6 Elements of the project can be found at http://www.ccnr.net.
7 http://www.makingthenetwork.org/tools/cin99.htm.
REFERENCES
Arnold, M., M. R. Gibbs, et al. 2003. Intranets and Local Community: ‘Yes, an intranet is all very well, but do we still get free beer and a barbeque?’ In M. Huysman, E. Wenger, & V. Wulf (eds.), Communities and technologies: proceedings of the First International Conference on Communities and Technologies, C&T 2003. Dordrecht: Kluwer Academic Publishers: 185-204.
Berger, P. L., & T. Luckmann. 1966. The social construction of reality; a treatise in the sociology of
knowledge. Garden City, NY: Doubleday.
Bijker, W. 1989. The Social Construction of Bakelite: Toward a Theory of Invention. In T. P. Hughes, T.
J. Pinch, & W. E. Bijker (eds), The Social construction of technological systems: new directions
in the sociology and history of technology. Cambridge, MA, MIT Press: 159-187.
Bijker, W. E., & J. Law. 1994. Shaping technology/building society: studies in sociotechnical change.
Cambridge, MA, MIT Press.
Borgstrom, A. 1999. Keynote: Community Networking at the Crossroads: Moving to a Sustainable
Model in the US. http://communityconference.vicnet.net.au/99/confpapers.htm. Retrieved on
December 1, 2003.
Cohill, A. M., & A. L. Kavanaugh (eds.). 2000. Community Networks: Lessons from Blacksburg, Virginia. Boston: Artech House.
Denison, T., G. Hardy, et al. 2002. Community Networks: Identities, Taxonomies and Evaluations. Electronic Networking 2002 - Building Community: Conference Proceedings (CD-ROM). Monash University, Centre for Community Networking Research.
Fetterman, D. 1994. Empowerment Evaluation. Evaluation Practice 15(1): 1-15.
Fetterman, D. 1997. Empowerment Evaluation: A Response to Patton and Scriven. Evaluation Practice
18(3): 253-266.
Geertz, C. 1973. The Interpretation of Cultures: Selected Essays. New York: Basic Books.
Giddens, A. 1981. Agency, institution, and time-space analysis. In K. D. Knorr-Cetina & A. V. Cicourel (eds.), Advances in social theory and methodology: toward an integration of micro- and macro-sociologies. Boston: Routledge & Kegan Paul: 161-174.
Giddens, A. 1984. The Constitution of Society: Outline of the Theory of Structuration. Berkeley: University
of California Press.
Giddens, A. 1991. Modernity and self-identity: self and society in the late modern age. Stanford, CA:
Stanford University Press.
Glaser, B. G., & A. L. Strauss. 1967. The discovery of grounded theory: strategies for qualitative research.
New York: Aldine.
Greenwood, D. J., & M. Levin. 1998. Introduction to Action Research: Social Research for Social Change. Thousand Oaks, CA: Sage.
Grosser, C. F. 1973. New Directions in Community Organization: From Enabling to Advocacy. New York: Praeger.
Gurstein, M. 2000. Community informatics: enabling communities with information and communications technologies. In M. Gurstein (ed.), Community informatics: enabling communities with information and communications technologies. Hershey, PA: Idea Group Publishing.
Gurstein, M. 2003. A community informatics strategy beyond the digital divide. Many Voices, Many Places - Electronically Enabling Communities for an Information Society: A Colloquium. Prato, Italy: Centre for Community Networking Research, Monash University.
Gygi, K. 1995. Developing an Evaluation Framework for Community Computer Networks. University of New Mexico, Community and Regional Planning Program.
Harris, R. 2000. Revealing the Soul of a Project. Stories as Evaluation: Towards a Methodology. http://rogharris.org/stories.pdf. Retrieved November 2002.
Hurworth, R. 1996. Qualitative Methodology. Some questions and answers about analysis of qualitative
data in evaluation. Evaluation News and Comment 5(2): 63-64.
Law, J. 2001. Networks, Relations, Cyborgs: on the Social Study of Technology (draft). Centre for Science Studies and the Department of Sociology, Lancaster University. http://www.comp.lancs.ac.uk/sociology/soc042jl.html. Retrieved on June 6, 2003.
Malina, A. 2002. Community networking and perceptions of civic value. Communications 27: 211-234.
McClintock, C. 1987. Conceptual and Action Heuristics: Tools for the Evaluator. In L. Bickman (ed.), Using Program Theory in Evaluation. San Francisco: Jossey-Bass: 43-57.
McConney, A., A. Rudd, et al. 2002. Getting to the Bottom Line: A Method for Synthesizing Findings
Within Mixed Method Program Evaluations. American Journal of Evaluation 23(2): 121-140.
Miles, M. B., & A. M. Huberman. 1988. Drawing Valid Meaning from Qualitative Data: Toward a Shared Craft. In D. M. Fetterman (ed.), Qualitative Approaches to Evaluation in Education. New York: Praeger.
Miles, M. B., & A. M. Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook. Newbury Park, CA: Sage.
Odasz, F. 1994. The need for rigorous evaluation of community networks. In S. Cisler (ed.), Ties that Bind: Converging Communities. Cupertino, CA: Apple Computer Corp. Library.
O’Neil, D. 2002. Assessing community informatics: A review of methodological approaches for evaluating
community networks and community technology centers. Internet Research 12(1): 76-102.
Orlikowski, W. J. 1992. The Duality of Technology: Rethinking the Concept of Technology in Organizations.
Organization Science 3(3): 398-427.
Orlikowski, W. J. 1999. Technologies-in-practice: an enacted lens for studying technology in organizations. Cambridge, MA: Sloan School of Management, Massachusetts Institute of Technology.
Owen, J. M., & P. J. Rogers. 1999. Program Evaluation: Forms and Approaches. St. Leonards: Allen & Unwin.
Patton, M. Q. 1990. Qualitative Evaluation and Research Methods. Newbury Park, CA: Sage.
Patton, M. Q. 1999. Evaluation and the non-profit sector. Evaluation Journal of Australasia 11(1): 72-78.
Pigg, K. 2001. Applications of Community Informatics for Building Community and Enhancing Civic
Society. Information, Communication and Society 4(4): 505-527.
Putnam, R. 1995. Bowling Alone: America’s Declining Social Capital. Journal of Democracy 6(1): 65-78.
Roman, R., & R. D. Colle. 2002. Themes and Issues in Telecentre Sustainability. http://idpm.man.ac.uk/idpm/diwpf10.htm. Retrieved on July 22, 2002.
Rose, J. 2000. Information Systems Development as Action Research: Soft Systems Methodology and Structuration. Ph.D. thesis, School of Management, University of Lancaster, UK.
Schafft, K. A., & D. J. Greenwood. 2002. The Promises and Dilemmas of Participation: Action Research, Search Conference Methodology and Community Development. http://www.cardi.cornell.edu/canal/Greenwood_Schafft.pdf. Retrieved on June 1, 2003.
Smith, M. F. 1989. Evaluability assessment: a practical approach. Norwell, MA: Kluwer Academic.
Stillman, L., & R. Stoecker. 2005. Structuration, ICTs, and Community Work. Journal of Community Informatics 1(3). http://www.ci-journal.net/. Retrieved on June 15, 2005.
Stoecker, R. 2005a. The Foundation of Community Information Technology: Community-Based Research. Community Technology Review, Spring-Summer. http://www.comtechreview.org/spring-summer2005/000311.html. Retrieved on June 15, 2005.
Stoecker, R. 2005b. Is Community Informatics good for communities? Questions confronting an emerging field. The Journal of Community Informatics 1(3). http://www.ci-journal.net/. Retrieved on June 15, 2005.
Stoecker, R. 2005c. Research methods for community change: a project-based approach. Thousand Oaks, CA: Sage Publications.
The Aspen Institute. 1995. New Approaches to Evaluating Community Initiatives. Washington, D.C.: The Aspen Institute.
The Aspen Institute. 1999. Voices from the Field. Learning from the Early Work of Comprehensive
Community Initiatives. http://www.aspenroundtable.org/voices/index.htm. Retrieved on July 24,
2001.
The Aspen Institute. 2002. Voices from the Field II. Reflections on Comprehensive Community Change. Washington, D.C.: The Aspen Institute.
United States General Accounting Office. 1991. Designing Evaluations. Washington, D.C.: General Accounting Office.
Wadsworth, Y. 1991. Everyday Evaluation on the Run. Melbourne: Action Research Issues Association.
Wadsworth, Y. 1998. What is Participatory Action Research? http://www.scu.edu.au/schools/gcm/ar/ari/p-ywadsworth98.html. Retrieved on June 17, 2001.
Wellman, B. 2001. Computer networks as social networks. Science 293(5537): 2031-2034.
Whitmore, E. 1998. Understanding and Practicing Participatory Evaluation. San Francisco: Jossey-Bass.
Wholey, J. S. 1994. Evaluability Assessment. Evaluation News and Comment (2): 2-13.