
DEBATING ACCOUNTING RESEARCH JOURNAL RANKINGS:
EMPIRICAL ISSUES FROM A CITATION-BASED ANALYSIS AND
THEORETICAL DILEMMAS FROM ECONOMICS
Markus J. Milne
Department of Accountancy and Business Law
University of Otago
New Zealand
Fax: ++64-3-479-8450
E-mail: [email protected]
In helping prepare this paper the author would like to expressly thank Carolyn Edwards, Hayley O’Cain, and
Kelly Kahi for their invaluable research assistance. The author is also extremely grateful to Alan MacGregor
for several long conversations over the issues raised in this paper, and for comments from Chris Guilding
and other participants at a seminar at Griffith University, Gold Coast, Australia. The paper has also benefited
from comments by participants at the 2001 BAA Annual Conference, University of Nottingham.
Abstract
This paper reports the results of a citation-based analysis of accounting research journals. From a
database developed from the citations to 27 accounting research journals within those same 27
research journals over the ten-year period 1990-1999, this paper reports the relative use (in terms of
citation) of the 27 journals. From both the entire citation set, and also from subsets drawn by the
geographical base of the journal (e.g. Non-US, US), the geographical base of the authors (e.g. US,
Non-US), and whether the journals appear in the Social Science Citation Index (SSCI), various
measures of journal impact are developed. These relative impact measures are then used to examine
journal rankings derived from peer-based surveys conducted in the UK (Brinn et al., 1996) and the
US (Hull and Wright, 1990; Brown and Huefner, 1994). This study also draws on Arrow’s (1951)
work on social preference functions to question the theoretical validity of journal ranking studies.
The findings from this study suggest that other than for a very few journals, accounting journals
receive largely indifferent use, and, individually, appear to have very little general relevance to
accounting academics as a whole. Also reported is the diversity of citation behaviour, both between
US-based authors and non-US based authors, and across different journals. Such diversity, when
combined with Arrow’s work, seems to throw into serious doubt the theoretical validity of some
attempts to generate universal journal rankings even within a single country.
INTRODUCTION
In the 1996 edition of Accounting and Business Research (ABR), Brinn et al. presented a ranking
of 44 accounting and finance journals based on the peer rankings of 88 UK-based accounting
academics. They noted at the time the limitations of such a subjective-based analysis, and also the
difficulties associated with establishing an objective assessment of relative journal quality. Most
notable among their concerns regarding their peer-based method were possible sample limitations
—in terms of accurately reflecting the population of UK-based active researchers, the possibility
of bias through a lack of familiarity with the journal(s) being rated, and general attitude
measurement problems such as “conditioned” responses. In terms of objectively assessing journal
quality, they noted that citation-based approaches also have limitations, not least of which is that
only a very few (five, in fact, mostly US-based) accounting journals appear on the Social Science
Citation Index (SSCI).
The purpose of this paper is to critique and continue the debate into journal ranking studies. In
doing so, it reexamines the results of Brinn et al.’s study in both the light of the empirical data on
relative journal usage (citation-based data) and on the basis of Arrow’s (1951) theoretical work on
aggregating individual preferences into aggregate preference functions. While Arrow’s work casts
serious doubts on the validity of constructing aggregate journal rankings at all, the citation data
shows that Brinn et al.’s peer-based results seem to have more in common with previous peer-based rankings and the citation behaviour of US-based scholars than with the citation behaviour of
non-US scholars. If citation behaviour indicates a close degree of familiarity with journals by
known active researchers, and indicates relative value “in-use” to scholars, then such an analysis
seems to raise several questions about peer-based journal rankings that require further research.
One such question, for example, is whether it is appropriate to try to develop universal rankings
when the citation-data clearly shows several quite distinct research groupings. Such empirical
evidence, furthermore, when coupled with Arrow’s theoretical work, seems to reinforce Arrow’s
conclusions that it may not be valid to construct group rankings at all. These issues are revisited at
the end of this paper, but first the relevant literature on accounting (and finance) journal rankings,
and the relevant literature on citation-based methodology is examined. The paper then presents
the methods for deriving and analysing the citation data, before presenting the results. Finally, the
results are discussed, with implications drawn for journal ranking studies, along with several
issues being identified that require further debate.
PRIOR LITERATURE
There appears to be growing interest in aspects of publishing, publishing productivity, citation
behaviour and ranking journals in accounting research. Although several studies were conducted
in the 1970s and 80s, the number of such studies during the 1990s seems to have grown
exponentially.1 Such interest no doubt reflects the expansion of the research literature over the
past 30 years, thereby providing a database to be researched, but it also reflects growing interest,
almost to the point of obsession, in measuring and quantifying the research outputs of accounting
academics. This section reviews some of this literature, but mostly it is concerned with the
literature that is relevant to tracking information flows between different accounting journals:
namely, citation studies of the accounting literature.
1 During the 1970s, for example, two studies interested in the accounting literature were found (McRae, 1974;
Benjamin and Brenner, 1974). During the 1990s, however, the following were found: Brinn, et al. (1996), Brown
(1996), Brown & Huefner (1994), Brownell & Godfrey (1993), Cottingham & Hussey (2000), Durden et al. (1999),
Englebrecht et al. (1994), Hasselback & Reinstein (1995), Heck et al. (1990), Hull & Wright (1990), Jones &
Roberts (2000), Lukka & Kasanen (1996), Prather-Kinsey & Rueschhoff (1999), Reinstein & Hasselback (1997),
Whittington (1993, 1997), Wilkinson & Durden (1998), Zeff (1996), and Zivney et al. (1995). Further, there is now
also an ‘anti-measurement’ literature that has developed. See, for example, Gray & Hellier, (1994), Guthrie et al.
(2000), Humphrey et al. (1995), Milne et al. (1999), Parker et al. (1998), Puxty et al. (1994), Puxty & Tinker (1995),
and Willmott (1995).
Citation Studies
Citation studies in the accounting literature have typically focused on only a small sample of
journals, most notably those appearing on the Social Science Citation Index (SSCI). They have
also usually been concerned with measuring or assessing the impact of influential articles,
authors, business schools or doctoral programs on accounting research. Dyckman and Zeff (1984),
for example, used citation analysis, among other measures, to assess the impact of Journal of
Accounting Research (JAR) over its first 20 years. Smith and Krogstad (1984, 1988, 1991) use
citation analysis to examine the impact of and the impacts on Auditing: A Journal of Practice and
Theory (AJPT) over its first ten years. Likewise, Brown et al. (1987) examine the impact of
Accounting, Organisations & Society (AOS) from 1976-1984 using citation analysis. Brown and
Gardner (1985a, 1985b) also examine the impact of journal articles on publications appearing
between 1976 and 1982 in The Accounting Review (AR), JAR, AOS, and/or Journal of Accounting
and Economics (JAE). Their interest was to identify graduates, faculty, their US schools, and
individual articles that had the greatest impact on the publications in those four journals. Using a “cites
per year” metric, Brown (1996) extended this analysis to identify the 100 articles most cited in
JAR, AR, AOS, JAE, and Contemporary Accounting Research (CAR) over the period 1976-1992.2
The authors of these articles were then examined for their institutional affiliations, and doctorate
programs.
More concerned with the structure of accounting knowledge, McRae (1974) and Bricker (1988,
1989) examine the inter-citations between accounting journals. McRae’s analysis, however, was
limited to only three academic journals in a pool of 17 (AR, JAR, and Abacus)3 for the years 1968-1969. Underlining the importance of academic journals to academic journals, McRae observes
that 45% of all citations within the academic network were to the three accounting academic
journals.4
To discover the possible sub-structures within accounting knowledge and whether they were
integrated or fragmented, Bricker (1989) uses cocitation clustering to examine the pooled sample
of 11,000 citations appearing in the six SSCI accounting journals (AR, JAR, AOS, JAE, Abacus,
and Journal of Accounting and Public Policy (JAPP))5 during 1983-1986. Bricker identifies 12
research clusters (e.g. positive accounting, statistical auditing etc.) and notes that none of these are
strongly associated with publications in Abacus or JAPP, and that none of these transcend all of
the remaining 4 journals. Furthermore, AOS and JAE were associated with only two and three
clusters respectively and these were entirely unrelated between the two journals. Bricker also
reports that of the 11,000 citations, 2,400 were within the six journals themselves, and from these
inter-journal citations we can note: (1) JAR articles account for 39% of the cited articles, AR for
33%, JAE and AOS each approximately 12-13%, and Abacus and JAPP less than 2% each. (2)
Abacus and JAPP are virtually uncited in the remaining 4 journals, with AOS faring little better in
2 The citations in these journals over this period are ‘pooled’, and the cited articles needed to have been published in
one of those same five journals or in either Auditing: A Journal of Practice and Theory or Journal of Accounting,
Auditing and Finance. In the event, the ‘top 100’ articles were published in one of JAR, AR, JAE or AOS.
3 The remaining journals included mostly professional body journals (e.g. Accountancy, Canadian CA), and two
professional management accounting journals (e.g. Management Accounting (UK and US)). Of course, at that time
only 4 academic accounting journals were in existence, the other being the International Journal of Accounting
Education and Research.
4 McRae (1974), however, only includes journal citations in his study, and excludes books, newspapers, case law and
doctoral theses. Consequently, his proportions do overstate the relative importance of accounting academic journals
to accounting academic journals.
5 Abacus and Journal of Accounting & Public Policy were dropped from the SSCI after 1992.
JAE and JAR. (3) JAE, JAR and AR have strong inter-citation relationships. Similar findings are
reported in Brown and Gardner (1985b, pp. 89-96), and seem to confirm McRae’s (1974, p.87)
much earlier observation that “American authors seem to pay little attention to material published
outside the United States” — a point returned to in the results and discussion sections.
To date, then, very little is known about the inter-relationships between accounting journals
as measured by citation analysis, and all of this understanding has been gained from analyses of
journals appearing on the Social Science Citation Index. Consequently, nothing is known about
the use of non-US journals other than the patchy analysis of Abacus and AOS, and most of this
understanding is about how these journals are used or not used in a selected few US accounting
publications.
(Peer-based or Questionnaire-based) Ranking Studies
Since Benjamin and Brenner’s (1974) survey of US faculty perceptions of accounting journal
quality, a number of follow up studies have occurred in the US (e.g. Howard and Nikolai, 1983;
Hull and Wright, 1990; Brown and Huefner, 1994; Hasselback and Reinstein, 1995), the UK (e.g.
Nobes, 1985, 1986; Brinn et al., 1996) and Australia (Brownell and Godfrey, 1993; Casser and
Holmes, 1999). Such studies typically ask respondents to rank journals relative to one another,
before seeking to combine them into either an overall aggregate ranking and/or several rankings
by sub-discipline (e.g. financial, management, auditing). The stated aim of such studies is to
provide some collective perception of the relative merits of journals to help guide others in
making tenure, promotion and appointment decisions.
Guthrie et al. (2000), however, question the underlying motives for journal preferences and
remark that, despite ‘quality’ being a broad and elusive concept (Tinker & Puxty, 1995), all the
available evidence suggests that journal ranking studies treat it as a single-dimensional concept.
Furthermore, they believe if journal-ranking studies are reflecting only a single dimension of a
complex ‘quality process’, then it makes sense to be clear about what it is that is in fact being
measured.6 They note, “assessment of quality might reflect, inter alia, the personal influence of
the journal, one's admiration for the journal, and/or one's perception of its reputation.” Some
studies have sought personal attitudes, others have sought “views on others’ views”, and yet
others either mix these up or are even more vague. Casser and Holmes (1999), for example, ask
respondents to “rank journals from an Australian perspective”, while Hull and Wright (1990) ask
respondents to “rate the value to them or to their institution of a single-authored article appearing
in the main section of each publication”. For yet other approaches, Brinn et al. (1996) ask for a
“personal assessment of each particular journal’s academic quality…” and Brown and Huefner
(1994) ask respondents to classify journals as “Most prestigious”, “Significant” and so on.
The manner in which studies have sought to elicit responses, then, raises questions not only about
what might have been measured in some cases, but also whether the studies are comparable.
Guthrie et al. (2000) also question whether such measured characteristics are related to the
intrinsic quality of the journals, which they suggest might include the rigour of its refereeing
system; its citation in the discipline (or a relevant section of the discipline); or its readership. They
6 Guthrie et al. (2000) identify potential influences on assessments of journal quality as inter alia: (1) frequency with
which the journal is read; (2) personal and library subscription rates; (3) personal submission and acceptance/rejection
rates; (4) frequency with which you, and your academic heroes publish in the journal; (5) the journal’s actual
rejection rate; (6) your views of how others perceive the journal; (7) your knowledge of previous ranking/reputation
studies; (8) your subject/method/ideological orientation; (9) the purpose of the journal’s research; (10) the purpose of
your publications; (11) your experience as a referee/editor; (12) how often you cite the journal, and; (13) how often it
is recommended reading to students.
note, for example, that US ‘professional journals’ are often placed ahead of double-blind refereed
non-US accounting journals in some studies, and that outside of a ‘top 4’ the rankings in many
studies are highly inconsistent. To overcome these difficulties, Guthrie et al. propose a move
away from uni-dimensional attempts to construct a ladder of journals towards a matrix structure
that incorporates a concern for content as well as recognising objective aspects of the ‘quality
process’. To date, however, no study has sought to compare the results of journal perception
studies with citation measures of journals.
Strengths and Weaknesses of Citation-based and Peer-based Analyses
While the relative strengths and weaknesses of citation-based and peer-based or questionnaire-based analyses of journals have been covered in the literature7, some of these are briefly covered
here as they relate to the current study. Claims in favour of citation analysis are that the procedure
is objective and independent of perceptions, and that it includes articles published prior to the
publication count period (Brown and Gardner, 1985a). The procedure also confines itself to a
sample of known, active, successful and peer-reviewed researchers, who, for the majority of the
citations they make, will be known to be familiar with the journals they draw the citations from.
Doyle et al. (1996) refer to this concept as “effortful voting”, where researchers through reading
and writing express (reveal) their preferences for different journals. With citation analyses,
authors get to vote for the journals by citing them, and only the journals they cite get counted. Of
course, researchers do not cite everything they read, nor do they always read the articles they cite,
and while the motives for citing published work are complex (see Jones et al., 1996b), citation
analyses will indicate at least some notion of relevance to the author. Whether a citation reflects
the cited article’s (and hence, journal’s) quality, however, is more debatable.
Many citations, for example, are negative rather than positive. Citation analysis can also be biased
in favour of popular authors, and authors can cite the popular authors to “legitimate” their own
papers. Authors also cite themselves, cite review articles more often, and “hot or fashionable
topics” can give rise to many short-run citations (Brown and Gardner, 1985b, pp. 86-87). While
Brown and Gardner attempt to dispel many of these limitations —for example, they cite evidence
that 90% of citations are in fact positive—many of these limitations are likely to be most
problematic for attempts to evaluate individuals’ research. At the level of journal analysis, many
of these limitations will be washed out in the aggregation process. The total of any journal’s
citation count might overstate its underlying positive impact, but there is no reason to believe it
will overstate it relative to other journals, especially if a reasonably long period of publications is
considered.8
Peer-based approaches, of course, suffer all the usual limitations of survey-based work: namely,
sample representation, non-response (often greater than 50%), question-order, and question-framing or leading-question problems. The issue of journal “familiarity” also plagues journal
ranking studies as does concern over the ‘referent’ (see note 6) respondents might be using to
undertake the task and whether a consistent referent is used by all respondents. Questionnaire-based approaches, then, while potentially capable of capturing a wider sample of ‘voters’ (i.e.
both published and non-published individuals), are potentially more likely to mix samples of
motives and familiarity, unless a very careful attempt is made to instruct and classify responses.
The greatest advantage to date of survey based approaches is their ability to capture a more
7 See, for example, Brown and Gardner (1985b), Brown and Huefner (1994), Hull and Wright (1990) and the not
inconsequential debate that occurred in the pages of Omega between Doyle and Arthurs (1995), Jones et al. (1996a),
Doyle et al. (1996), and Jones et al. (1996b).
8 Perhaps the obvious exception here is the Journal of Accounting Literature, which exclusively publishes review
articles.
comprehensive set of accounting journals beyond those on the SSCI. Peer-based studies have
offered some understanding of the relative merits of 40 or more other accounting journals.
Economists, Utility, Preferences and Arrow’s (1951) Impossibility Theorem
Aggregating individual preference rankings into a collective preference function has long been of
concern to economists. Arrow (1951) demonstrated that, subject to certain conditions of
‘reasonableness’, in most cases it is logically impossible to construct collective preference
functions from individual preference functions involving 3 or more individuals making choices
from 3 or more alternatives.9 As Luce and Raiffa (1957, p. 328) explain:
Arrow sought to define “fair” methods for aggregating a set of individual rankings into a single
ranking for society, given the preference rankings of m alternatives by the members of a society
of n individuals. Arrow has shown that five seemingly innocuous requirements of “fairness” for
social welfare functions are inconsistent (i.e. no welfare function exists which satisfies them
all)...
They later (p. 369) conclude:
One might have expected, a priori, that simple majority rule would satisfy Arrow's conditions.
Indeed it does except when the individual rankings are very dissimilar,10 in which case it gives
rise to intransitives [i.e. illogical preference orderings]. It is natural, therefore, to search for
reasonable restrictions to be placed on the profiles of individual rankings such that majority rule
always leads to a consistent social ordering.
The following examples illustrate the problems of intransitives and their relevance to journal
ranking studies. Suppose eleven individuals rank three alternatives (x, y, and z) as follows:
x > y > z (4 individuals prefer this order)
z > x > y (3 individuals prefer this order)
y > z > x (4 individuals prefer this order)
Where: x > y indicates x is preferred to y, and so on.
Then by making pair-wise comparisons we get by simple majority rule:
x > y 7 to 4; y > z 8 to 3; z > x 7 to 4.
Therefore, for the group of 11 individuals we have the intransitive (i.e. illogical or circular)
ordering:
x>y>z>x
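These pair-wise tallies are easy to verify mechanically. The following short Python sketch (using exactly the eleven-individual profile above) computes the majority in each pair and exposes the cycle:

```python
from itertools import combinations

# The profile from the text: each ranking is a tuple (best to worst)
# paired with the number of individuals who hold it.
profile = [
    (("x", "y", "z"), 4),
    (("z", "x", "y"), 3),
    (("y", "z", "x"), 4),
]

def pairwise_majorities(profile):
    """For every pair (a, b), count how many individuals rank a above b."""
    wins = {}
    total = sum(n for _, n in profile)
    for a, b in combinations("xyz", 2):
        a_over_b = sum(n for order, n in profile
                       if order.index(a) < order.index(b))
        wins[(a, b)] = (a_over_b, total - a_over_b)
    return wins

for (a, b), (na, nb) in pairwise_majorities(profile).items():
    winner, loser = (a, b) if na > nb else (b, a)
    print(f"{winner} > {loser} by {max(na, nb)} to {min(na, nb)}")
```

Running it reports x over y (7 to 4), z over x (7 to 4) and y over z (8 to 3): the intransitive ordering x > y > z > x.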
If we were to score these preferences to calculate an aggregate function (e.g. awarding the highest
ranking a “3”, second place a “2”, and so on11), then we would calculate the following:
9 Luce and Raiffa (1957), and Dasgupta and Pearce (1972) both provide readily accessible interpretations of Arrow’s
impossibility theorem, and evaluate various attempts to escape it.
10 Luce and Raiffa had earlier (p. 334) noted that “Some groups, possibly because of self-selection or because of a
common ethic, do not often exhibit a wide divergence of opinions, and for such societies majority rule probably never
will be embarrassed by an intransitive set of orders”.
11 Note, it does not matter what arbitrary scores are used to convert these ordinal preferences into numerical values as
long as we maintain the preference ordering for each individual and assume a similar intensity of preference for all
individuals (e.g. 3,2,1 will generate the same results as 30,10,2).
x = 4*3 + 3*2 + 4*1 = 22
y = 4*2 + 3*1 + 4*3 = 23
z = 4*1 + 3*3 + 4*2 = 21
And conclude the following preference ordering for the group:
y>x>z
From the above information, however, we know y is preferred to x (y > x) by only 4 of 11
individuals, and x is preferred to z (x > z) by only 4 of 11 individuals. Consequently, the clear
preference function generated for the group violates simple majority rule.
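The same profile can be scored programmatically. A minimal Borda-style sketch, using the 3, 2, 1 points from the text, reproduces the aggregate ordering y > x > z:

```python
# The eleven-individual profile from the text: (ranking, count) pairs.
profile = [
    (("x", "y", "z"), 4),
    (("z", "x", "y"), 3),
    (("y", "z", "x"), 4),
]

def borda_scores(profile, points=(3, 2, 1)):
    """Award points[i] to the alternative in position i of each ranking,
    weighted by how many individuals hold that ranking."""
    scores = {alt: 0 for alt in "xyz"}
    for order, n in profile:
        for pos, alt in enumerate(order):
            scores[alt] += points[pos] * n
    return scores

print(borda_scores(profile))  # x = 22, y = 23, z = 21, as in the text
```

The scores agree with the hand calculation above, so the scored ‘group’ ordering y > x > z indeed contradicts the pair-wise majorities.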
Whether simple majority rule is violated is a function of the diversity of opinion about the
alternatives being ranked. Consequently, fair questions to ask are how likely we are to have
diverse opinions about the relative merits of journals, and how much diversity is needed before
simple majority rule is violated. The citation data that follows suggests there is quite a lot of
diversity of opinion regarding accounting journals. Moreover, the following example illustrates
just how little variation is required before the preferences of the majority can be reversed.
Suppose 40 individuals rank 20 journals, and we score them Journal 1 = 20, Journal 2 = 19, and so
on. Further assume that a lot of these individuals are clones and they entirely agree on the
rankings. If we start with all 40 in total agreement on all twenty journals then the journals would
be scored as follows:
Journal 1 = 800; Journal 2 = 760; Journal 3 = 720; and so on.
Now, how many individuals need to be different before we can reverse these preference
orderings? The answer is 3 individuals of the 40 need to rank the journals differently. If two
individuals rank Journal 2 first, and Journal 1 last, and the third individual ranks Journal 2 first,
and Journal 1 second, then we get the following:
Journal 1 = 761; Journal 2 = 763; and Journal 3 = 720.
So, despite the fact that 37 out of 40 people prefer Journal 1 to Journal 2, because of two
individuals who really dislike Journal 1 and one who merely prefers it less than Journal 2, we have a
‘group’ preference ordering that reverses the (vast) majority view!12 Unless fair concepts such as
simple majority rule are discarded, such a result seems unacceptable.
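This reversal is straightforward to check by computation. The sketch below constructs the rankings exactly as described in the text (37 clones, two dissenters who place Journal 1 last, and one who places it second) and reproduces the 761-versus-763 result:

```python
# 40 individuals rank 20 journals; a journal in position i (0-based)
# scores 20 - i points, so Journal 1 in first place earns 20.
def total_scores(rankings, n_journals=20):
    scores = [0] * (n_journals + 1)      # index by journal number, 1-based
    for ranking in rankings:             # ranking[i] = journal at position i
        for pos, journal in enumerate(ranking):
            scores[journal] += n_journals - pos
    return scores

consensus = list(range(1, 21))                 # Journal 1 first ... Journal 20 last
dissent_a = [2] + list(range(3, 21)) + [1]     # Journal 2 first, Journal 1 last
dissent_b = [2, 1] + list(range(3, 21))        # Journal 2 first, Journal 1 second

rankings = [consensus] * 37 + [dissent_a] * 2 + [dissent_b]
scores = total_scores(rankings)
print(scores[1], scores[2])  # 761 763
```

Journal 1 scores 37 × 20 + 2 × 1 + 19 = 761 while Journal 2 scores 37 × 19 + 3 × 20 = 763, so three dissenters out of forty overturn the majority ordering.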
Of course, many readers may object to this approach to constructing group preference orderings
from individuals’ rankings because it is the analyst who is in fact ascribing to each individual’s
ordinal rankings their own cardinal scale of values. The analyst under such approaches is, in
effect, assuming each individual’s intensity of preferences. Furthermore, by giving each rank a
score of 20, 19, 18 etc, the analyst is also assuming that all individuals prefer their first journal
choice over their second by as much as they prefer their second over their third and so on. They
further assume that these functions are comparable and additive. For these reasons inter alia
Milne (2000) criticised Casser and Holmes’ (1999) attempt to construct rankings of journals for
Australian accounting academics.
12 Of course, for so few individuals to reverse some of the preference orderings of the vast majority, those individuals
need to hold extremely different opinions. However, it is easy to show that moderate amounts of diversity, but still
held by a minority of the group, can easily upset the majority held preferences.
However, even if individuals derive their own cardinal preference functions (which show their
own intensities of preference), economists have generally abandoned the belief that you can make
‘interpersonal comparisons of utility’, and can, therefore, simply add up the values individuals
ascribe to their preferences. The issue for economists is the apparent non-existence of a definitive
unit of measurement. Again, to quote Luce and Raiffa (1957, p. 33):
Suppose two people are isolated from each other and each is given a measuring stick marked off
in certain and possibly different arbitrary units. The one subject is given full-scale plans for an
object to be constructed to the same size by the other, and he is permitted to send only messages
stating angles and lengths (in his units) of the object. With such limited communication it is
clearly possible for the second man to construct a scale model of the object, but it will only be of
the correct size if it happens that both measuring rods are marked in the same units.
The same happens with attempts to measure journals using methods such as those used by Howard and
Nikolai (1983), Nobes (1985), Hull and Wright (1990) and Brinn et al. (1996), for example.
While they might anchor a specific journal with a value (e.g. ABR = 100) and ask others to
express intensities of preference relative to that journal, we cannot know that all individuals are
expressing those preferences in like units. For example, individual A might rank AOS with a score
of 120, while individual B might use a score of 130 to express the exact same intensity of
preference for AOS relative to ABR (i.e. Individual B’s units of preference are 3 to Individual A’s
2). Simply assuming Individual B’s 10 extra units of preference means that they prefer AOS more
than Individual A is false according to most economists.
Brown and Huefner’s (1994) approach, for the most part, seems to overcome many of Arrow’s
problems. For a start they ask respondents not to rank journals against each other but to categorise
each journal into one of five categories including a ‘not familiar’ category. They then report the
proportion of respondents that classified each journal as “category 1”, “category 2” and so on.
Given the category descriptions also imply orders of difference from 1 through 5 they can also
legitimately report the proportion of respondents that classified a journal as “at least category 2”
(i.e. categories 1 and 2 combined). They also tend to focus on the views of the simple majority,
reporting for example that >50% classified AR, JAR and JAE as category 1 journals. Greater than
50% also classified CAR, AOS, AJPT, Journal of the American Tax Association, National Tax
Journal, Journal of Accounting, Auditing & Finance, and JAPP as at least category 2 journals.
Where they fall foul of economists’ concerns, however, is in their attempt to distinguish CAR and
AOS from this latter group by calculating weighted average “composite” scores, and showing
these scores were less than 2.00. Such a procedure, as discussed above, assumes that the intensity
of preference between adjacent categories is a constant value of one, that it is the same for all
individuals, and that individuals’ scores are comparable and additive.
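A small numerical illustration (with invented category proportions, not Brown and Huefner’s data) shows how sensitive such composite scores are to the assumed spacing between categories: two weighting schemes that preserve the same category ordering can reverse which journal appears ‘better’:

```python
# Hypothetical proportions of respondents placing each journal in
# categories 1-5 (1 = most prestigious). Illustrative numbers only.
journal_a = [0.30, 0.40, 0.20, 0.10, 0.00]
journal_b = [0.45, 0.10, 0.30, 0.15, 0.00]

def composite(proportions, weights):
    """Weighted-average composite score: lower means a 'better' rating."""
    return sum(p * w for p, w in zip(proportions, weights))

equal_spacing = [1, 2, 3, 4, 5]   # the integer spacing the procedure assumes
other_spacing = [1, 3, 4, 5, 6]   # same category ordering, different intensities

print(composite(journal_a, equal_spacing), composite(journal_b, equal_spacing))
print(composite(journal_a, other_spacing), composite(journal_b, other_spacing))
```

Under equal spacing journal A scores 2.10 against B’s 2.15 and so ranks ahead; under the second spacing A scores 2.80 against B’s 2.70 and the ordering reverses, even though both weightings respect the same ordinal categories.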
If economists are right, and interpersonal comparisons of utility are not possible, then we are
forced to confront ordinal preferences. However, in dealing with ordinal preferences, Arrow has
shown that except for situations where individuals have largely similar preference functions, we
cannot aggregate individuals’ ordinal preference functions into a collective choice outcome
without violating several reasonable conditions including simple majority rule. An essential step
in validating journal rankings, then, is ensuring little variation exists in the ranking of alternative
accounting journals. Placing reasonable restrictions on the profiles of individual rankings such
that majority rule always leads to a consistent social ordering (Luce and Raiffa, 1957) is one way
to achieve valid rankings.13 Examining the relative use of journals (as measured by citations) by
13 This is in effect what Brown and Huefner (1994) and Guthrie et al. (2000) do by restricting their categories to three
or four journal ‘bands’. Many ranking studies (e.g. Hull & Wright) also construct several ‘sub discipline’ rankings.
The studies, however, have different ways of describing or determining their bands that inevitably lead to quite
different journal classifications.
authors both within a given journal and across different journals can provide insights into the
extent to which such restrictions obtain for accounting academics.
METHOD
The Citation Database
The initial focus for the database was those academic accounting journals identified by Zeff
(1996). The database was designed primarily to provide data for an examination of the relative
inter-citation behaviour between accounting academics from different traditions, most notably
between US- and non-US academics. Nonetheless, a sufficient spread of journals has been used to
gain insights into the relative rankings of accounting journals by non-US academics. Zeff
identifies 77 possible accounting research journals in existence at 1996. Zeff also collected data
on the relative journal holdings of 12 university libraries at universities with accounting
departments recognised for research (5 US, 5 UK, and 2 Australian).
Zeff’s list was further reduced using two additional criteria: (1) the journal needed to have been in
existence at least from 1990, to provide a sufficient time period over which to collect citations; and
(2) the journal needed to be held by more than half of the libraries that Zeff sampled, since more
widely held journals are at least available to be read and cited. To obtain a balance between
US-based and non-US based journals some further minor adjustments were made.14 This
produced 27 accounting journals (listed below in Table 1), all included in Brinn et al.’s (1996)
study.
Data collection involved two methods. First, for the five (SSCI) journals, the Web of Science
(http://webofscience.com) was used. Reference data were extracted from each article to Excel™ spreadsheets, where each citation was coded according to the geographical base of its authors as non-US or US.15 Sorting by journal title then permitted citation counts to all 27 journals for each year from 1990 to 1999 for each of the two author groups. The year of each citation was also recorded to permit average-citation-age calculations.
For the remaining 22 journals the citation data was collected by a physical inspection of each
issue of the journal over the 10 year period, again collecting citation counts and age statistics for
each author group for each of the ten years. For all the 27 journals, then, calculations of the
number and age of the citations made to each journal for any combination of the years 1990-1999,
and for either the combined or separate bases of US or non-US authors are possible. Individual
author details of either publications or citations are not collected.
Table 1 (below) illustrates the data collected for ABR. Columns 1, 3 & 5 show journals such as
Journal of Business Finance & Accounting (JBFA), JAR, AR, AOS and JAE are more widely cited
in ABR over the combined period 1990-1999 compared to other journals. However, this could be because those journals are more relevant, because they are older and more established with a larger body of literature to cite, or both. The newer journals like
Accounting, Auditing & Accountability Journal (AAAJ), for example, might be cited relatively
14 Applying the age and holdings criteria generated 19 US-based journals and 10 non-US-based journals. We
eliminated Issues in Accounting Education, Journal of International Financial Management and Advances in
Accounting from the US list and added Financial Accountability & Management and Pacific Accounting Review to
the non-US list. Both these latter journals met the age criteria although not the library holding criteria.
15 In the case of mixed nationality joint authorships (e.g. US-based and non-US based), the article was treated as US-based. While this certainly creates some bias between the categories, two points should be noted. First, the frequency of mixed US-based and non-US based authorships is small compared to all authorships. Second, to the extent there is bias, it will likely overstate the use of non-US literature by 'US-based authors' in this study.
less simply because they are newer rather than because they are less relevant. To control for the effects of a journal's age and history, therefore, a second citation measure was also developed. This was constructed on the publications in the 27 journals over the years 1995-1999, and counts only citations to the 1990-1999 literature. This second measure permits an assessment of which journals are currently relevant to one another. For ABR, this measure is shown in columns 2, 4 & 6 in Table 1.
―Table 1 About Here―
Citation Analysis
After constructing data sets for all 27 journals, these were then combined in various combinations
(of journals and by author groups) to examine relative citation behaviour. To generate the
combinations, two methods are used. First, the raw citation counts for each journal from within
the various journal combinations are simply added together and compared. For example, in
calculating an all authors-all journals citation pattern, the 516 citations to The Accounting Review
appearing in ABR would be added together with all the other citations to The Accounting Review
appearing in the other journals to generate a total count for The Accounting Review. This total
count could then be compared against the like totals for the other journals. Whether ‘self citations’—to The Accounting Review within The Accounting Review—are included depends on whether one is interested in the relative impact of a journal beyond itself, or both on itself and more widely. Both measures are used in this study, and it is made explicit where self-citations are excluded.
Table 2 provides a breakdown of the total citation data set (including ‘self-citations’) by groups of
journals (e.g. SSCI, other US, other non-US) and groups of authors (e.g. US, non-US) for both the
1990-99 publication period and the 1995-1999 publication period.
―Table 2 About Here―
Based on all literature available for citing and on 10 years of publications in the 27 journals, the
total citation data set generates 55,750 citations. Of these, US-based authors account for 32,448
and non-US authors 23,302. US-based authors clearly prefer to use US-based journals, with the eleven non-US journals (other than AOS, an SSCI journal) accounting for only 8.5% (2,746/32,448) of all
US-based authors’ citations. Citations in SSCI journals account for 43% of all US-based authors’
citations from this data set, while citations to SSCI journals account for 77% of all US-based
authors’ citations. In contrast, for non-US authors, citations in SSCI journals account for only
13.5% of their citations, while citations to SSCI journals account for 57% of all their citations.
The eleven non-US journals account for 29% of all non-US authors’ citations. When examined on
the basis of the most recent 10 years’ literature and most recent 5 years’ publications (lower
panels in Table 2) these patterns repeat, except for non-US authors, where non-US journals
become relatively more important (41%), and citations to SSCI journals decline overall to 40%.16
16 More dramatic differences can be seen between US and non-US based authors when the classifications are kept to strictly US/non-US journals, as follows (columns give the journals in which the citations appear; rows give the journals cited):

                                US-based authors                 Non-US based authors
Citations to:            In US journals  In non-US journals  In US journals  In non-US journals
1990-99 publications
  US journals                21,885           5,067              3,039           7,765
  Non-US journals             2,018           3,478                945          11,553
1995-99 publications
  US journals                 6,629             964                877           1,957
  Non-US journals               586           1,030                361           4,634
Combining raw citation counts may create bias by giving more weight to those journals that have more issues in the ten-year period and consequently publish more papers, especially when ‘self-citations’ are not equal amongst journals. As far as is known, however, there are no limitations
placed by any journal on the number of articles that might be read and cited within any particular
article. Also some journals have established a tradition in which the papers published seem to
have much longer lists of references than others. Nonetheless, to control for potential variations, a
proportional based measure can be used that “normalises” each journal’s contribution to the total
citation count for all journals at one unit (1.0). In other words, when combining several journals’
citation patterns, each journal carries equal weight regardless of the volume of material it has
published. For example, from Table 1 The Accounting Review would count as a proportion 0.185
(516/2795) for all authors, 0.177 (416/2351) for non-US authors, and 0.225 (100/444) for US
authors. Like proportions calculated for The Accounting Review in each of the other journals
could then be added (and averaged) to generate a total proportion score for each journal/author set
combination. Again, the option exists to include or exclude a journal’s own proportion within
such an aggregate.
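The normalisation step described above can be sketched as follows; the counts are the ABR figures quoted in the text, and the dictionary layout is simply an illustrative assumption about how the spreadsheet data might be held.

```python
# Citation counts to The Accounting Review (AR) appearing in ABR,
# 1990-1999, with each author group's total accounting journal
# citations in ABR (figures as quoted in the text from Table 1).
abr_counts = {
    "all authors":    {"AR": 516, "total": 2795},
    "non-US authors": {"AR": 416, "total": 2351},
    "US authors":     {"AR": 100, "total": 444},
}

def normalised_proportion(counts, cited_journal):
    """Normalise one citing journal's counts so they sum to 1.0:
    each citing journal then carries equal weight regardless of
    how much material it publishes."""
    return counts[cited_journal] / counts["total"]

for group, counts in abr_counts.items():
    print(group, round(normalised_proportion(counts, "AR"), 3))
# all authors 0.185, non-US authors 0.177, US authors 0.225
```

Like proportions from the other 26 journals would then be averaged to give AR's total proportion score for a given journal/author set.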
Both the raw count and the normalised methods were applied to the two different data sets. In the
event, however, both the raw count and normalised methods generated very highly correlated
measures for given data set/journal set/author combinations.17 Therefore, only the results for the
normalised (proportional) method are reported in the next section.
ANALYSIS AND RESULTS
Relative Citation Behaviour
Note that the citations collected to the 27 accounting journals are likely collectively to account for much less than half of the total citations made in any one accounting journal, yet they may collectively account for almost 100% of the accounting journal citations made in any one journal.18 Tables 3
to 6 below show the relative use of the 27 accounting journals for each journal in groupings of:
(1) ‘self-citation’, (2) ‘top 4’ citations—it is evident that AR, JAR, JAE and AOS individually often account for a significant proportion of the citations in each journal, although not all four are always important for all journals, (3) non-US journals, (4) US journals. Tables 3 to 6 also indicate the five most important individual journals for each journal. Tables 3 and 5 report for non-US authors the 1990-99 publication/all cited literature and the 1995-99 publication/1990-1999 cited literature measures respectively. Tables 4 and 6 report like measures for US-based authors.
17 All the correlation coefficients between the two methods for the different author sets, database sets and journal sets fell in the range 0.92 to 0.99.
18 This can be evidenced with reference to the five journals on the SSCI. For example, over the period 1990-1999, a
total of 10392 citations appeared in The Accounting Review. Of these, 4034 (39%) were associated with the 27
journals, and very few other citations associated with other ‘accounting’ journals were observed. Likewise with JAR,
38% of the total citations over the ten years were extracted. For JAE and AJPT the proportions extracted were 39%,
while for AOS, it equaled 22%. Clearly these US journals indicate a remarkable evenness in their relative reliance on
accounting journals, while AOS is relatively far less reliant. There will, however, be a small amount of
understatement in these results due to ‘in-press’ or ‘forthcoming’ articles that were not classified. While not all
accounting journal citations have been collected due to accounting journals excluded, these exclusions are considered
unlikely to materially change the results. For example, the Pacific Accounting Review was included in the sample.
Across the 27 journals over the ten years, this journal generated 36 citations in both itself and the other journals. The
total citation pool created from the 27 journals is in excess of 55000. Even some well-known journals created
relatively small proportions of citations across the entire set. Behavioral Research in Accounting, and Accounting and
Finance, for example, generate less than ½ % each of the citation set.
―Table 3 About Here―
Table 3 groups the journals into non-US and US based on editorial residence. For the most part
this seems to coincide with the relative magnitude of overall citations shown in column 1. CPA and CAR, however, are obvious exceptions. Clearly there are few citations shown for AR, JAR and
JAE consistent with so few non-US authors publishing in these journals over the past 10 years.
Table 3 shows that for non-US AOS authors, about 75% of their accounting journal citations are
made up of AOS itself, AR and JAR, with the remaining 25% split roughly evenly between eleven
non-US journals and twelve US journals. Of these, however, no one journal stands out as
particularly important. Like analyses could be done for the remaining journals individually, but
several overall observations can also be made.
First is the dominance of four main journals (JAR, JAE, AR, and AOS) in the authors’ citation behaviour. AR accounts for at least 5% of the accounting journal citations in all 27 journals, AOS does so in 22 journals, JAR in 20, and JAE in 16. Collectively, these four
journals typically account for about 50% of the citations in the remaining journals, and in some
cases (e.g. CAR, A&F, and many US-based journals) much more. Second, the non-US journals
collectively account for about 20% of the citations within each of the non-US journals (the
exceptions being CAR and A&F), or, on average, about 2% per journal. ABR, however, is more
important (accounting for at least 5% of the citations in 7 of the 10 non-US journals). ABR is also
important for non-US authors publishing in US journals. Several other individual journals (e.g.
JBFA, AAAJ) are also important to some journals, but generally most lesser non-US journals (i.e.
outside the top 4) do not show up as being used much by non-US authors. JBFA is important for non-US authors publishing in JAE, but these are relatively few compared with JAE’s total publications and citations. Third, of the lesser US journals, very few individually stand out, and on average these 12 journals each account for a little over 1% of each journal’s citations.
Table 4 repeats the analysis for US-based authors. In this case it is clear that within the top 4, it is
the ‘top 3’ that are even more dominant with about 82% of the accounting journal citations being
either to themselves or the other three journals. AOS, however, accounts for less than 5% of the
citations found in each of the top three American-based journals. Lesser non-US journals account
for less than ½ % each within this group, and lesser US journals do little better at about 1% each.
―Table 4 About Here―
Again, within the lesser journals (both US and non-US), the top 4 journals collectively account for about 60% of the total accounting journal citations. All four are not always equally relevant to all journals, however, and it is notable that AOS is less relevant to the more market orientated journals
(e.g. CAR, JAAF), while JAE is less relevant to the more behavioural orientated journals (e.g.
BRIA, AJPT). Of the lesser US journals, only Accounting Horizons (AH) and AJPT show as
individually important journals with any regularity, while JAPP, AAAJ and CPA show some
importance to one or two journals. Most US journals, however, on average account for about 1-2% of the citations each. The lesser non-US journals typically account on average for less than
1% each of the citations within either US or non-US journals, but especially for US authors
publishing in US journals. Even when publishing outside the US, then, US-based authors on
average cite their own lesser journals (about 20% collectively) more than lesser non-US journals
(about 10%).
To examine the relative use of the current literature, Tables 5 and 6 repeat the analysis shown in
Tables 3 and 4 using the last five years’ publications and ten years’ literature. Having dropped the
pre-1990 literature, Table 5 shows a somewhat different picture. In the case of non-US AOS
authors, for example, AAAJ and CPA publications now become relatively more important than AR
and JAR publications. Likewise, for most other non-US journals, the top 4 have become less
relevant, collectively now accounting for about a third of all accounting journal citations. In
several cases, individual members of the American ‘top 3’ have dropped below 5% of the citations.
A wider range of other non-US journals than ABR, including for example AAAJ, JBFA, and CAR
(especially for US publications), are also now appearing as generally important sources of
literature for non-US authors. Table 5 also shows a much wider range of journals that are now
individually important to at least one other journal. Nonetheless, there are still a number of lesser
journals that individually, and collectively, appear to be little cited. Very few of the lesser US
journals appear, for example, and collectively on average these journals still account for less than
2% of the citations each.
―Table 5 About Here―
Comparisons of Tables 4 and 6 also reveal significant changes in journal usage for US-based authors when the pre-1990 literature is dropped. Within the top 4, for example, AJPT and CAR have now shown up as important journals. Nonetheless, within the American top 3, those same
top 3 still account for between 70 and 80% of the accounting journal citations. The top 4, or in
some cases, top 3, still account for about 50% of all the citations in many non-US journals and
lesser US journals. The obvious changes are the rise of CAR and AJPT more generally, and the
relative decline of AOS within the lesser journals, and especially those now more concerned with
traditional financial accounting matters (e.g. IJA, JAPP). AH continues to be cited by many US authors throughout a number of journals, while BRIA, JAL and AAAJ now also appear of some importance within some of the lesser US journals. Again, it is clear that far more journals show up as important to at least one other journal for US authors once the older literature is dropped.
However, it is also clear that on average non-US journals individually account for less than 1%
each of the citations by US authors publishing in many US journals and some non-US journals
(e.g. CAR and JBFA).
―Table 6 About Here―
Citations and Peer-Based Rankings
From Tables 3 through 6, apart from the top 4 journals and a few others, it is clear that many
journals appear on average to receive relatively little general use by both US and non-US authors.
Of course, for some journals this is not surprising. Some journals (e.g. MAR, JMAR) are quite
specialised and their general use would not be expected. Nonetheless, other journals (e.g. JAPP,
JAAF), which often receive very high rankings in peer-based assessments in both the US and UK,
appear to receive little use by authors. Such observations not only raise the question of why such journals might be so highly regarded yet so little used, but also invite a more systematic
exploration of journal usage and peer-based rankings. Tables 7 and 8 below provide two such
explorations.
Table 7 reports the average proportions (in %) of citations each journal accounts for within a
given set of journal publications. Column 1 in Table 7, for example, reports that The Accounting Review accounts for an average proportion of 18.13% of the citations made by all authors publishing in the remaining 26 journals (‘self citations’ within The Accounting Review being
excluded) over the period 1990 to 1999. Column 2 provides the ranks of like calculations for this
and the other journals. Column 3 shows average citation proportions for all authors and all
journals, but for the restricted publication (1995-99) and citation (1990-99) periods. Columns 5
and 7 report average proportions for non-US authors publishing in the 11 non-US journals
(excluding self citations but including AOS), while columns 9 and 11 report for US-based authors
publishing in 14 US-based journals (including the top 3, but excluding self citations). The last
three columns report the results of three peer-based studies conducted during the last ten years in
the UK and US.
―Table 7 About Here―
Several things stand out in Table 7. Most notable is further confirmation of the ‘top 4’: AR, JAR,
AOS and JAE are consistently the most used journals, and consistently peer ranked as the top 4.19
Moreover, they clearly dominate the other journals in relative usage, and especially on the longer
publication/all literature measures. Outside these four journals, the remaining journals generally
each account for relatively low proportions of the citations, and while it is possible to rank them
the differences between many of them are trivial. Also of note is the widespread variance between
the rankings of these lesser journals both within the citation data, within the peer-based ranks, and
across both approaches. ABR, for example, ranges from 5th to 15th, AH from 5th to 24th, AAAJ from
6th to 20th, AJPT from 6th to 16th, and JAPP from 6th to 23rd, to name a few. In some cases the peer-based rankings are much higher, and in other cases much lower. There are also variations across
US and non-US authors/studies. Despite the widespread variation in the lesser journal rankings,
the rank correlation coefficients between all the six different citation measures are significant at
less than 1% and all exceed 0.65. Significant and high correlation coefficients, then, do
not necessarily provide much assurance about agreement over relative ranks.
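That high rank correlations can coexist with substantial disagreement over individual journals is easy to demonstrate. In the hypothetical sketch below, two rankings of 20 journals agree on the top 4 and bottom 8 but completely reverse the middle order; Spearman's rho is still about 0.87 even though a middle-order journal can move seven places.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two untied rankings:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical rankings of 20 journals: identical except that the
# middle order (ranks 5-12) is completely reversed in the second.
rank_a = list(range(1, 21))
rank_b = rank_a[:4] + list(range(12, 4, -1)) + rank_a[12:]

print(round(spearman_rho(rank_a, rank_b), 3))  # 0.874
```

A coefficient well above 0.65 and significant at conventional levels thus says little about agreement over the ranks of the middle-order journals.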
How should we make sense of Table 7, especially in regard to the ‘middle order’ journals?
Clearly there is a top 4, and clearly there are some journals that are consistently less used and less
thought of by the respondents to the surveys, but how do we decide, for example, which of CAR,
ABR, AJPT or AAAJ is higher ranked than the others? How should we make sense of the
conflicting results? While the separation between US and non-US authors and US and non-US
journals, and the separation between the current literature and all literature partly explain the
results, Table 8 below shows the issues are more complex.
―Table 8 About Here―
Table 8 shows the likelihood of mixing up relatively different sets of journal usage even when the
literature and author bases are controlled. It is not simply a case of US authors using US journals
more than non-US journals, or non-US authors using non-US journals more. As one might expect,
journal usage clusters around research interests, which themselves cluster around journals. Table 8
shows four such possible clusters: two largely US-based and two largely non-US based. The four
clusters, labeled ‘Traditional’, ‘Social/Organisational’, ‘Markets’ and ‘Behavioural’, are
constructed around the 1995-99 publications/1990-99 citations in four groups of journals. ‘Self
citations’ are excluded, and consequently the inclusion of a journal in a group is not a factor
explaining its relatively higher citation within that group. Table 8 shows the specific importance
(8-10% of the citations) of each of ABR, AAAJ, CAR, and AJPT to one group, and also illustrates
that the ‘top 4’ journals are no longer universally dominant. While relative journal use does
separate along lines of author base and journal base, it is evident this separation is most blurred by
research cluster for non-US authors. Within the two US-based research clusters, however, only
two non-US journals appear inside the first 10 journals for both clusters: AOS and CAR. Why
these two journals have relatively high levels of usage from US-based authors can be found in the
relatively high levels of US-authors they each publish, and the fact that about 80% of the citations
made to these two journals by US-based authors are of US-based authors.20
19 While AOS is shown as fifth in Brown and Huefner’s ranks here, they also treat it in one of their tables as equal
with CAR suggesting little separates the two in their minds.
20 An analysis of three SSCI journals (JAR, AR, and AJPT) reveals that non-US authors publishing in AOS or CAR
were cited by US-based authors publishing in JAR, AR and AJPT over the period 1990-99 as follows. (1) JAR—AOS
12%, CAR 21% (2) AR—AOS 24%, CAR 14% (3) AJPT—AOS 19%.
The relationships between these four clusters, the previous citation patterns, and the peer-based
rankings are shown in Table 9.21 Table 9 shows that Brinn et al.’s UK peer-based rankings have
more in common with previous US peer-based rankings, and US authors’ citation patterns than
they do with non-US authors’ relative citation behaviour. On current literature, Brinn et al.’s
results have more in common with both the US-based research clusters than they do with either of
the non-US research clusters. In contrast, Brown and Huefner’s and Hull and Wright’s US peer-based rankings are closely associated with each other, and with the use of the journal literature by
US-based authors. Brown and Huefner restricted their peer-based ranking to senior faculty at 40
leading US business schools, so it is perhaps not surprising that the correlation between the ranks
of journals for those respondents, and the citation behaviour of the US ‘Markets’ authors is as
high as 0.940.
―Table 9 About Here―
US-based authors are more homogeneous in their relative use of journals as evidenced by the
0.775 correlation coefficient between the two US-based research clusters. The non-US based
research clusters, however, are less related to each other (rho=0.249, p=0.210), and Brinn et al.’s
rankings are much less related to the ‘Social/Organisational’ cluster than the ‘Traditional’ cluster.
Both these clusters, however, are quite substantial: each accounts for about 40% of the citations
(about 3000 each) in the non-US author pool for 1995-99 publications (see Table 5).
DISCUSSION AND CONCLUSIONS
If citation behaviour indicates a close degree of familiarity with journals by known active
researchers, then the citation analysis raises several questions about Brinn et al.’s peer-based
results. Why, for example, do UK accounting scholars rate more highly journals that they do not
“seriously” read and consider relevant to their own work than the journals that they more
regularly use?22 Was Brinn et al.’s sample non-representative or is there a real inconsistency
between the stated preferences for journals and the use of journals by UK scholars? If it is not
familiarity and relevance, then upon what basis are the peer-based rankings being made? Given
the sizable and quite distinct research clusters, should one even attempt to develop universal
journal rankings for UK academics? And, consequently, to what extent are Brinn et al.’s rankings
relevant for rating UK-academics’ publications?
Most economists have dismissed the possibility of interpersonal comparisons of utility, and
therefore would find Brinn et al.’s scoring and aggregation process unacceptable. Such a position requires that collective preference functions fall back on aggregating ordinal preferences, and suggests at the very least that Brinn et al. would need to reexamine their data as ordinal. The problem with ordinal preferences, however, as Arrow has clearly indicated, is that
where diversity exists among individual preference orderings, any group or collective preference
function is likely to violate the views of the majority. For Brinn et al.’s results to have validity,
then, requires a demonstration that the ordinal preferences of their respondents were not
21 A point of clarification is necessary here. If, in fact, the peer-based rankings of Hull & Wright (1990) and Brinn et al. (1996) fall foul of Arrow’s concerns, then those rankings are invalid and cannot be compared to the rankings
based on usage as measured by citation. Table 9 proceeds on the assumption that the peer-based rankings are valid
and seeks to compare relative use (i.e. behaviour) with statements (i.e. attitudes) about journals.
22 The reader will no doubt note that the citation data measures the behaviour of non-US based authors rather than UK-based authors, and perhaps differences between the citation measures and Brinn et al.’s measures are due to these measurement differences. While this remains a possibility, it must be a remote possibility, for it would require that the non-UK non-US based authors cite the US literature much less than their UK counterparts, and/or that they dominate the non-US citation data pool. Comparisons of the citation behaviour of non-US authors in Australian-based, UK-based and European-based journals (see Tables 3 and 5), however, do not indicate widely different uses of US
journals.
sufficiently dissimilar to violate majority rule. As counts of individual preference orderings are
not provided for each journal in Brinn et al.’s published results it is not possible to comment on
the diversity of opinion in their study. However, the relative use of journals, as measured by the
citation patterns across a wide range of accounting journals, and within two quite distinct research
clusters, suggests likely divergence of opinion regarding accounting journals. Consequently, the
limitations associated with ranking studies like Brinn et al.’s seem to be more deep rooted than
the sampling, familiarity and other empirical problems that were outlined earlier in this paper.
Whether Brinn et al.’s sample is non-representative is difficult to judge, but there is a significant
overlap between the journals included in the two non-US research clusters in this study and the
journals their respondents identify as having been published in. Given such overlap why are their
ranking results so unrelated to the relative journal use shown in the citation data for non-US
authors? Even for the restricted ‘Traditional’ group there are quite distinct differences in the ranks
of several journals (e.g. JAPP, JAAF). Brinn et al. report that the respondents were mostly
ranking journals on the basis of ‘general quality of the articles published…’, ‘subject area of the
journal’, ‘difficulty of getting published in the journal’, and ‘the research methodology generally
followed in the journal’. Of course, as Guthrie et al. (2000) point out, journal appreciation is a
complex process, and citing journals is not the same as rating their quality. Nonetheless,
comparisons of Brinn et al.’s results and the citation data do raise the question: if the quality of certain journals’ articles is perceived to be so much better than that of other journals, why are they cited so little?
Other than the four main journals, very few are generally cited, and on average each accounts for less than 1-2% of the citations unless one restricts the examination to one or a few journals (as in the case
of the research clusters). Even with such restrictions, however, it is generally only a small number
of journals that are relevant. Even if little divergence exists, does it make sense to develop
universal journal rankings (and particularly refined ones that rank from say 1 to 20) when so
many of the journals are generally so little used? Are the lesser journals really that different?
Furthermore, it is worth remembering that of the 77 accounting research journals identified by Zeff (1996), this study includes 27 of the older, more established and more widely distributed journals. Consequently, would it not make more sense to develop broad bands within different
research clusters as Guthrie et al. (2000) suggest, and as Brown and Huefner (1994) have
effectively done by restricting their sample and the ‘ranking’ task?
Finally, some points of clarification about the citation analysis are needed. What the citation data
show is the relative use of journals by published members of the accounting community. In fact,
to the extent that authors focus their publishing efforts on a few journals, it appears there is more
than one accounting research community. The relative use of journals not only varies by the
residential base of the authors, particularly US-based v. non-US based, but also, not surprisingly,
by the research orientation of particular journals. Furthermore, as McRae (1974) had observed
back in 1970, and Brown and Gardner (1985b) later confirmed, American-based authors make
very little use of research findings from journals outside of the U.S. Thirty years on, and across a
much wider set of journals, this finding still holds. Even with the apparent exceptions of AOS and
CAR, it turns out that 80% of the citations to AOS and CAR in leading US journals are to US-based authors.
What the citation analysis does not show is the relative usefulness or relative quality of accounting
journals. Despite Doyle et al.’s claims that citation data represent “effortful voting”, any attempt
to aggregate one individual’s citations with another’s to determine an aggregate preference
function for journals would also fall foul of economists’ concerns. In the same way that it is
unreasonable for Brinn et al. and others to assume that if an individual’s score for a journal is X
units greater than another’s they must prefer the journal more, it is unreasonable to assume that a
‘citation’ as a unit of measurement means the same to all authors in terms of journal preference or
usefulness. Extra citations (at best) measure extra reading or journal use, not necessarily extra
usefulness or extra quality (whatever that means). Nonetheless, despite these differences, one
would expect the use and stated usefulness of journals to coincide, in the same way one expects
behaviour and attitude to coincide. The fact that they appear to do so for US-based accounting
scholars, but not for non-US based accounting scholars raises issues worthy of further research.
REFERENCES
Arrow, K.J. (1951) Social Choice and Individual Values, Cowles Commission Monograph #12, John
Wiley & Sons: New York.
Bricker, R. (1988), ‘Knowledge Preservation in Accounting: A Citational Study’, Abacus, Vol. 24, No. 2,
pp. 120-131.
Bricker, R. (1989), ‘An Empirical Investigation of the Structure of Accounting Research’, Journal of
Accounting Research, Vol. 27, No. 2, pp. 246-262.
Brinn, T., Jones, M.J. and Pendlebury, M. (1996) “UK Accountants’ Perceptions of Research Journal
Quality”, Accounting and Business Research, Vol. 26, No. 3, pp. 265-278.
Brown, L.D. (1996), ‘Influential Accounting Articles, Individuals, Ph.D. Granting Institutions and
Faculties: A Citational Analysis’, Accounting, Organisations & Society, Vol. 21, No. 7/8, pp. 723-754.
Brown, L.D. and Gardner, J.C. (1985a), ‘Applying Citation Analysis to Evaluate the Research
Contributions of Accounting Faculty and Doctoral Programs’, The Accounting Review, April, pp. 262-277.
Brown, L.D. and Gardner, J.C. (1985b), ‘Using Citation Analysis to Assess the Impact of Journals and
Articles on Contemporary Accounting Research’, Journal of Accounting Research, Spring, pp. 84-109.
Brown, L.D. and Huefner, R.J. (1994), “Familiarity with and Perceived Quality of Accounting Journals:
Views of Senior Accounting Faculty in Leading US MBA Programs”, Contemporary Accounting
Research, Vol.11, No.1, Summer, pp. 223-250.
Brown, L.D., Gardner, J.C. and Vasarhelyi, M.A. (1987), ‘An Analysis of the Research Contributions of
Accounting, Organisations & Society, 1976-1984’, Accounting, Organisations & Society, Vol. 12, No. 2,
pp. 193-204.
Brownell, P. and Godfrey, J. (1993), “Professorial Rankings of Australian Accounting Departments”, The
Australian Economic Review, 4th Quarter, pp. 77-87.
Cassar, G. and Holmes, S. (1999), “The Refereed Accounting Journals Valued by Australian Gatekeepers”,
Accounting, Accountability & Performance, Vol. 5, No. 3, pp. 1-18.
Cottingham, J. and Hussey, R. (2000), “Publishing in Professional Accounting Journals: Academic
Institutional Performance 1987-96”, British Accounting Review, Vol. 32, No. 1, pp. 101-115.
Dasgupta, A.K. and Pearce, D.W. (1972) Cost-Benefit Analysis: Theory and Practice, MacMillan: London.
Doyle, J.R. and Arthurs, A.J. (1995), ‘Judging the Quality of Research in Business Schools: The UK as a
Case Study’, Omega, Vol. 23, No. 3, pp. 257-270.
Doyle, J.R. Arthurs, A.J., Mcaulay, L. and Osborne, P.G. (1996), ‘Citation as Effortful Voting: a Reply to
Jones, Brinn and Pendlebury’, Omega, Vol. 24, No. 5, pp. 603-606.
Durden, C.H., Wilkinson, B.R. and Wilkinson, K.J. (1999), “Publishing Productivity of Australian “Units”
Based on Current Faculty Composition”, Pacific Accounting Review, Vol. 11, No. 1, pp. 1-28.
Dyckman, T.R. and Zeff, S.A. (1984), ‘Two Decades of the Journal of Accounting Research’, Journal of
Accounting Research, Spring, pp. 225-297.
Englebrecht, T.D., Govind, S.I. and Patterson, D.M. (1994), “An Empirical Investigation of the Publication
Productivity of Promoted Accounting Faculty”, Accounting Horizons, Vol.8, No.1, March, pp. 45-68.
Gee, K.P., and Gray, R.H., (1989) “Consistency and Stability of UK Academic Publication Output Criteria
in Accounting”, British Accounting Review, 21, pp. 43-54.
Gray, R.H., and Hellier, (1994) “UK Accounting Academics and Publication: An Exploration of
Observable Variables Associated With Publication Output”, British Accounting Review, 26, pp. 235-254.
Guthrie, J., Parker, L., and Gray, R.H., (2000) “Exploring The Changing Nature Of HES In Australia And
The UK: Commodification Of Quality In Accounting And Management Journals”, Working Paper,
Macquarie University.
Hasselback, J.R., and Reinstein, A., (1995) “A Proposal for Measuring Scholarly Productivity of
Accounting Faculty”, Issues in Accounting Education, 10, pp. 269-306.
Heck, J., Jensen, R. and Cooley, P. (1990), “An Analysis of Contributors to Accounting Journals: Part 1:
the Aggregated Performance”, The International Journal of Accounting, Vol.25 No.3, pp. 202-217.
Hull, R.P. and Wright, F.B. (1990), “Faculty Perceptions of Journal Quality: An Update”, Accounting
Horizons, Vol.4 No.1, March, pp. 77-98.
Humphrey, C., Moizer, P., and Owen, D.L., (1995) “Questioning the Value of the Research Selectivity
Process in British University Accounting”, Accounting, Auditing & Accountability Journal, 8, 3, pp. 141-164.
Jones, M.J., Brinn, T., and Pendlebury, M. (1996a) “Judging the Quality of Research in Business Schools:
A Comment from Accounting”, Omega, Vol. 24, No. 5, pp. 597-602.
Jones, M.J., Brinn, T., and Pendlebury, M. (1996b) “Journal Evaluation Methodologies: a Balanced
Response”, Omega, Vol. 24, No. 5, pp. 607-612.
Jones, M.J. and Roberts, R. (2000), “International Publishing Patterns: An investigation of Leading UK
and US Journals”, 23rd Annual Congress of European Accounting Association, Munich, March.
Luce, R.D. and Raiffa, H. (1957) Games and Decisions: Introduction and Critical Survey, Dover
Publications: New York.
Lukka, K. and Kasanen, E. (1996), “Is Accounting Global or a Local Discipline? Evidence from Major
Research Journals”, Accounting Organisations & Society, Vol. 21, No. 7/8, pp. 755-773.
McRae, T.W. (1974), ‘A Citation Analysis of the Accounting Information Network’, Journal of
Accounting Research, Spring, pp. 80-92.
Milne, M.J., Adler, R.W. and MacGregor, A.C. (1999), “A Critical Commentary on Wilkinson and
Durden’s (1988) Measures of Accounting Publishing Productivity”, Pacific Accounting Review, Vol. 11,
No. 1, pp. 29-44.
Parker, L., Guthrie, J., and Gray, R.H., (1998) “Accounting and Management Research: Passwords From
the Gatekeepers”, Accounting, Auditing & Accountability Journal, Vol. 11, No. 4, pp. 371-402.
Prather-Kinsey, J. and Rueschhoff, N., (1999) “An Analysis of the Authorship of International Accounting
Research in U.S. Journals and AOS: 1980 through 1996”, The International Journal of Accounting, Vol.
34, No. 2, pp. 261-282.
Puxty, A.G., Sikka, P., and Willmott, H.C., (1994) “Systems of Surveillance and the Silencing of UK
Academic Accounting Labour”, British Accounting Review, 26, pp. 137-171.
Reinstein, A. and Hasselback, J.R., (1997), ‘A Literature Review of Articles Assessing the Productivity of
Accounting Faculty Members’, Journal of Accounting Education, Vol. 12, pp. 425-455.
Smith, G. and Krogstad, J.L. (1984), ‘Impact of Sources and Authors on Auditing: A Journal of Practice &
Theory—a citation analysis’, Auditing: A Journal of Practice & Theory, Vol. 4, No. 2, pp. 107-117.
Smith, G. and Krogstad, J.L. (1988), ‘A taxonomy of content and citations in Auditing: A Journal of
Practice & Theory’, Auditing: A Journal of Practice & Theory, Vol. 8, No. 2, pp. 108-117.
Smith, G. and Krogstad, J.L. (1991), ‘Sources and Uses of Auditing: A Journal of Practice & Theory’s
Literature: The First Decade’, Auditing: A Journal of Practice & Theory, Vol. 10, No. 2, pp. 84-97.
Tinker, T. and Puxty, T. (1995), Policing Accounting Knowledge, London, Paul Chapman.
Whittington, G., (1993) “The 1992 Research Assessment Exercise”, British Accounting Review, 25, pp.
383-395.
Whittington, G., (1997) “The 1996 Research Assessment Exercise”, British Accounting Review, 29, pp.
181-197.
Wilkinson, B.R., and Durden, C.H., (1998) “A Study of Accounting Faculty Publishing Productivity in
New Zealand”, Pacific Accounting Review, 10, 2, pp. 75-95.
Willmott, H.C., (1995) “Managing the Academics: Commodification and Control in the Development of
University Education in the UK”, Human Relations, 48, 9, pp. 993-1027.
Zeff, S.A., (1996) “A Study of Academic Research Journals in Accounting”, Accounting Horizons, 10, 3,
pp. 158-177.
Zivney, T.L., Bertin, W.J. and Gavin, T. (1995), “A Comprehensive Examination of Accounting Faculty
Publishing”, Issues in Accounting Education, Vol.10, No.1, pp. 1-25.
TABLE 1:
ACCOUNTING JOURNAL CITATIONS MADE IN Accounting and Business Research

Cited Journal                                     All Authors        Non-US Authors     US Authors
                                                  1990-99  1995-99   1990-99  1995-99   1990-99  1995-99
Accounting, Auditing & Accountability Journal          52       33        47       29         5        4
Abacus                                                 73       23        65       20         8        3
Accounting & Business Research                        473      116       428      106        45       10
Accounting & Finance                                   26        5        22        4         4        1
Accounting Horizons                                    67       35        57       30        10        5
Accounting Historians Journal                          22        2        17        2         5        0
Auditing: A Journal of Practice & Theory               88       33        53       18        35       15
Accounting, Organisations & Society                   210       41       174       39        36        2
Advances in Public Interest Accounting                 14        2        13        1         1        1
Accounting Review                                     516       89       416       67       100       22
British Accounting Review                              48       20        48       19         0        1
Behavioural Research in Accounting                      8        8         4        5         4        3
Contemporary Accounting Research                       53       37        44       33         9        4
Critical Perspectives on Accounting                    24       16        23       16         1        0
European Accounting Review                             33       28        31       26         2        2
Financial Accountability & Management                  18        6        18        6         0        0
International Journal of Accounting                    56       14        46       12        10        2
Journal of Accounting, Auditing & Finance              42       14        36       10         6        4
Journal of Accounting & Economics                     300       83       260       72        40       11
Journal of Accounting Literature                       41       10        33        9         8        1
Journal of Accounting and Public Policy                35       12        31       10         4        2
Journal of Accounting Research                        372       64       278       49        94       15
Journal of Business Finance and Accounting            192       63       176       55        16        8
Journal of Cost Management                              5        5         5        5         0        0
Journal of Management Accounting Research               7        6         6        6         1        0
Management Accounting Research                         18       20        18       20         0        0
Pacific Accounting Review                               2        0         2        0         0        0
Total                                                2795      786      2351      670       444      116
TABLE 2:
TOTAL CITATION DATA SET—BREAKDOWN BY AUTHORS AND JOURNAL SETS

                               Citations by US-based Authors     Citations by Non-US-based Authors
Citations from:                SSCI      US-based   Non-US       SSCI      US-based   Non-US
                               Journals  Journals   Journals     Journals  Journals   Journals
1990-99 publication period/All literature cited
SSCI Journals                  12045     8160       4783         2478      2702       8177
Other US-based Journals         1154     2817        743          307       988       1833
Other Non-US-based Journals      752     1014        980          369       842       5606

1995-1999 publication period/1990-99 literature cited
SSCI Journals                   3022     1899       1062          583       648       1908
Other US-based Journals          469     1311        279          161       448        841
Other Non-US-based Journals      339      444        384          204       380       2656
TABLE 3:
ACCOUNTING RESEARCH JOURNAL CITATION PATTERNS
1990-1999 PUBLICATIONS, ALL LITERATURE CITED
NON-US-BASED AUTHORS

                  Percentages of a Journal’s Accounting Journal Citations
          N       Self     Top 4    Non-US   US       Top 5 Journals*
ABR       2408    0.181    0.478    0.205    0.136    ABR, AR, JAR, JAE, JBFA
AOS       2387    0.539    0.215    0.130    0.116    AOS, AR, JAR, CPA, AAAJ
JBFA      2145    0.220    0.564    0.153    0.064    JAR, JBFA, AR, JAE, ABR
AAAJ      2036    0.168    0.478    0.199    0.155    AOS, AAAJ, AR, ABR, CPA
EAR       1775    0.127    0.477    0.238    0.157    AOS, AR, EAR, JAR, ABR
BAR       1696    0.085    0.485    0.294    0.136    AOS, AR, JAR, ABR, BAR
MAR       1368    0.136    0.491    0.195    0.178    AOS, MAR, AR, JCM, JMAR
A&F        930    0.099    0.657    0.142    0.102    AR, JAR, JAE, AF, AOS
FAM        868    0.354    0.341    0.196    0.109    FAM, AOS, AAAJ, AR, ABR
Abacus     864    0.146    0.469    0.216    0.169    AR, ABACUS, JAR, ABR, AOS
PAR        688    0.019    0.563    0.243    0.176    AR, AOS, JAR, JAE, ABR
CAR        580    0.107    0.733    0.052    0.109    AR, JAR, JAE, CAR, AOS

CPA       1396    0.107    0.646    0.152    0.095    AOS, CPA, AR, AAAJ, JAR
IJA        888    0.170    0.419    0.280    0.131    AR, IJA, JAR, ABR, AOS
JAAF       438    0.055    0.785    0.068    0.091    JAR, AR, JAE, JAAF, AJPT
JAL        345    0.035    0.617    0.157    0.191    AR, AOS, JAR, JAE, IJA
JAPP       319    0.125    0.527    0.241    0.107    AR, JAR, JAPP, JAE, ABR
AJPT       317    0.303    0.584    0.069    0.044    AJPT, JAR, AR, AOS, JAE
AHJ        269    0.178    0.390    0.323    0.108    AOS, AHJ, ABR, AR, AAAJ
JMAR       257    0.109    0.568    0.148    0.175    AOS, AR, JMAR, MAR, JCM
BRIA       247    0.081    0.729    0.089    0.101    AOS, AR, JAR, BRIA, AJPT
APIA       228    0.057    0.645    0.211    0.088    AOS, AR, AAAJ, APIA, CPA
AR         189    0.296    0.571    0.058    0.074    JAR, AR, JAE, AOS, CAR
JAR        158    0.430    0.380    0.063    0.127    JAR, AR, JAE, AOS, AJPT
AH         117    0.256    0.376    0.214    0.154    AH, JAR, AR, JAE, AOS
JAE        109    0.440    0.339    0.138    0.083    JAE, AR, JAR, JBFA, AJPT
JCM         39    0.590    0.205    0.000    0.205    JCM, AR, AH, JMAR, AOS

* These journals are in rank order from most cited.
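The “Self”, “Top 4” and residual non-US/US proportions reported in Tables 3 to 6 can be sketched as follows. The citing journal, its citation counts, and the assumed composition of the “Top 4” and non-US sets below are hypothetical stand-ins for illustration, not the study’s data.

```python
# A minimal sketch of the Table 3-6 measures: for one citing journal, split its
# accounting-journal citations into self-citations, citations to a 'Top 4' set,
# and citations to the remaining non-US and US journals. All names and counts
# here are hypothetical, not the study's data.

counts = {"ABR": 100, "AR": 120, "JAR": 90, "AOS": 40, "BAR": 30}  # cited -> n
CITING = "ABR"                              # the journal whose articles cite
TOP4 = {"AR", "JAR", "JAE", "AOS"}          # assumed 'Top 4' composition
NON_US = {"ABR", "BAR", "AAAJ", "EAR"}      # assumed non-US journal set

total = sum(counts.values())
self_p = counts.get(CITING, 0) / total
top4_p = sum(n for j, n in counts.items() if j in TOP4 and j != CITING) / total
nonus_p = sum(n for j, n in counts.items()
              if j in NON_US and j not in TOP4 and j != CITING) / total
us_p = 1.0 - self_p - top4_p - nonus_p      # remainder: other US journals

# The four proportions partition the journal's citations, as in Tables 3-6.
print(round(self_p, 3), round(top4_p, 3), round(nonus_p, 3), round(us_p, 3))
```

With this partition the four columns of each row necessarily sum to one, which is why the printed rows in Tables 3 to 6 add to 1.000 up to rounding.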
TABLE 4:
ACCOUNTING RESEARCH JOURNAL CITATION PATTERNS
1990-1999 PUBLICATIONS, ALL LITERATURE CITED
US-BASED AUTHORS

                  Percentages of a Journal’s Accounting Journal Citations
          N       Self     Top 4    Non-US   US       Top 5 Journals*
AOS       2738    0.380    0.405    0.051    0.163    AOS, AR, JAR, AJPT, JAE
CAR       2659    0.097    0.749    0.020    0.133    JAR, AR, JAE, CAR, AJPT
JBFA      1355    0.124    0.737    0.053    0.086    JAR, AR, JAE, JBFA, CAR
AAAJ       869    0.089    0.626    0.069    0.216    AOS, AR, AAAJ, CPA, JAR
ABR        476    0.099    0.609    0.103    0.189    AR, JAR, JAE, ABR, AJPT
Abacus     349    0.072    0.648    0.117    0.163    AR, JAR, JAE, AOS, ABACUS
BAR        234    0.026    0.752    0.103    0.120    AR, JAE, JAR, AOS, ABR
MAR        201    0.075    0.592    0.040    0.294    AOS, JAR, AR, JCM, JMAR
EAR        122    0.148    0.516    0.123    0.213    JAR, JAE, EAR, AR, AH
A&F         92    0.043    0.609    0.087    0.261    JAR, AR, JAE, AJPT, AOS
FAM         92    0.065    0.554    0.152    0.228    JAE, JAR, AR, JAPP, FAM
PAR         78    0.013    0.526    0.115    0.346    AR, JAE, JAR, AH, AJPT

AR        3845    0.296    0.522    0.050    0.132    JAR, AR, JAE, AJPT, CAR
JAE       2580    0.405    0.502    0.048    0.046    JAE, JAR, AR, CAR, AH
AJPT      2494    0.265    0.578    0.055    0.102    AJPT, AR, JAR, AOS, JAE
JAR       2361    0.401    0.396    0.069    0.135    JAR, AR, JAE, AJPT, CAR
JAAF      1893    0.054    0.803    0.063    0.080    JAR, AR, JAE, JAAF, CAR
AH        1736    0.157    0.600    0.065    0.154    AR, JAR, AH, JAE, AOS
JAL       1629    0.026    0.621    0.119    0.234    JAR, AR, AOS, AJPT, JAE
JAPP      1475    0.097    0.661    0.065    0.177    AR, JAR, JAE, JAPP, AOS
BRIA      1395    0.059    0.665    0.057    0.219    AR, JAR, AOS, AJPT, BRIA
IJA        995    0.268    0.415    0.156    0.161    IJA, AR, JAR, AOS, AH
CPA        957    0.085    0.608    0.114    0.193    AOS, AR, CPA, AH, AAAJ
JMAR       776    0.106    0.679    0.044    0.171    AR, AOS, JAR, JMAR, JCM
AHJ        634    0.121    0.604    0.132    0.142    AR, AOS, AHJ, JAR, AH
APIA       363    0.154    0.598    0.069    0.171    AOS, AR, APIA, JAR, AH
JCM        252    0.738    0.095    0.020    0.147    JCM, AH, JMAR, AR, AOS

*These journals are in rank order from most cited.
TABLE 5:
ACCOUNTING RESEARCH JOURNAL CITATION PATTERNS
1995-1999 PUBLICATIONS, 1990-1999 LITERATURE
NON-US-BASED AUTHORS

                  Percentages of a Journal’s Accounting Journal Citations
          N       Self     Top 4    Non-US   US       Top 5 Journals*
AAAJ       896    0.300    0.279    0.229    0.192    AAAJ, AOS, CPA, BAR, FAM
EAR        875    0.239    0.331    0.256    0.174    EAR, AOS, ABR, JAR, JAE
AOS        742    0.484    0.066    0.241    0.209    AOS, CPA, AAAJ, MAR, JMAR
ABR        670    0.158    0.339    0.316    0.187    ABR, JAE, AR, JBFA, JAR
JBFA       657    0.336    0.361    0.225    0.078    JBFA, JAE, AR, JAR, ABR
MAR        553    0.269    0.289    0.199    0.242    MAR, AOS, JMAR, JCM, AAAJ
BAR        524    0.126    0.334    0.370    0.170    BAR, AOS, ABR, JBFA, AR
FAM        318    0.431    0.204    0.270    0.094    FAM, AOS, AAAJ, AH, MAR
A&F        269    0.138    0.509    0.193    0.160    AR, JAR, AF, JAE, AOS
PAR        256    0.027    0.395    0.301    0.277    AOS, AR, JAE, AAAJ, CPA
Abacus     221    0.199    0.326    0.294    0.181    ABACUS, ABR, AOS, AR, JAR
CAR        170    0.147    0.694    0.076    0.082    JAR, JAE, AR, CAR, AOS

CPA        449    0.236    0.477    0.209    0.078    AOS, CPA, AAAJ, APIA, BAR
IJA        248    0.177    0.218    0.379    0.226    IJA, AOS, ABR, EAR, AH
JAAF       160    0.081    0.638    0.131    0.150    AR, JAR, JAE, JAAF, CAR
JAPP       114    0.096    0.333    0.333    0.237    AR, JAE, IJA, ABR, JAPP
AHJ        108    0.157    0.287    0.444    0.111    AOS, AAAJ, AHJ, ABR, CPA
JMAR       105    0.190    0.448    0.248    0.114    AOS, JMAR, MAR, AR, AH
AJPT        88    0.409    0.409    0.136    0.045    AJPT, AR, JAE, JAR, AOS
JAL         87    0.046    0.540    0.195    0.218    AOS, AR, JAE, JAR, IJA
BRIA        76    0.145    0.526    0.145    0.184    AOS, AR, BRIA, JAR, AJPT
JAR         58    0.431    0.397    0.086    0.086    JAR, AOS, AR, CAR, JAE
APIA        57    0.035    0.474    0.281    0.211    AOS, CPA, AAAJ, MAR, JMAR
AH          56    0.286    0.268    0.268    0.179    AH, IJA, JAE, JAR, AAAJ
JAE         45    0.311    0.400    0.178    0.111    Too few publications
JCM         19    0.526    0.211    0.000    0.263    Too few publications
AR          17    0.176    0.588    0.000    0.235    Too few publications

* These journals are in rank order from most cited.
TABLE 6:
ACCOUNTING RESEARCH JOURNAL CITATION PATTERNS
1995-1999 PUBLICATIONS, 1990-1999 LITERATURE
US-BASED AUTHORS

                  Percentages of a Journal’s Accounting Journal Citations
          N       Self     Top 4    Non-US   US       Top 5 Journals*
CAR        765    0.169    0.671    0.004    0.157    AR, JAR, JAE, CAR, AJPT
AOS        615    0.395    0.249    0.091    0.265    AOS, AR, AJPT, JAR, CPA
JBFA       305    0.180    0.584    0.098    0.138    AR, JAR, JBFA, JAE, CAR
AAAJ       195    0.190    0.400    0.056    0.354    AOS, CPA, AAAJ, AH, AR
ABR        116    0.086    0.431    0.198    0.284    AR, AJPT, JAR, JAE, ABR
Abacus      95    0.042    0.505    0.168    0.284    AR, AH, JAE, JAR, AOS
MAR         90    0.122    0.533    0.067    0.278    AOS, JMAR, MAR, JAE, AR
EAR         64    0.234    0.422    0.156    0.188    EAR, JAE, JAR, AH, ABR
PAR         30    0.033    0.267    0.200    0.500    Too few publications
BAR         24    0.042    0.458    0.208    0.292    Too few publications
A&F         23    0.130    0.652    0.087    0.130    Too few publications
FAM         20    0.050    0.350    0.250    0.350    Too few publications

AR         985    0.320    0.432    0.080    0.168    AR, JAR, JAE, CAR, AJPT
JAE        895    0.373    0.470    0.094    0.063    JAE, JAR, AR, CAR, AH
JAR        766    0.321    0.394    0.095    0.189    JAR, AR, JAE, CAR, AJPT
AH         666    0.231    0.477    0.080    0.168    AH, JAE, JAR, AR, CAR
AJPT       592    0.340    0.402    0.079    0.179    AJPT, AR, JAR, AH, BRIA
BRIA       510    0.125    0.512    0.076    0.286    AR, AOS, AJPT, JAR, BRIA
JAL        508    0.024    0.463    0.171    0.343    JAR, AR, AOS, AJPT, CAR
JAAF       503    0.107    0.654    0.123    0.115    AR, JAE, JAR, JAAF, CAR
JAPP       472    0.138    0.398    0.091    0.373    AR, JAPP, JAR, AJPT, JAE
IJA        279    0.226    0.258    0.190    0.326    IJA, AH, JAR, AR, CAR
JMAR       247    0.243    0.494    0.073    0.190    JMAR, AOS, AR, JAE, JCM
CPA        208    0.216    0.375    0.207    0.202    AOS, CPA, AAAJ, AH, AR
AHJ        133    0.188    0.263    0.256    0.293    AOS, AHJ, CPA, AAAJ, AH
JCM        103    0.650    0.087    0.019    0.243    JCM, AH, JMAR, JAE, AR
APIA        79    0.304    0.278    0.127    0.278    APIA, AOS, AH, AAAJ, BRIA

* These journals are in rank order from most cited.
TABLE 7: AVERAGE CITATION PROPORTIONS BY JOURNAL AND PEER-BASED RANKINGS ♣

          Average Proportion of Citations Accounted For By:                                     Peer-Based Journal Rankings*
          All Authors/           Non-US Authors/          US Authors/                  Brinn et al.   Brown &        Hull &
          All Journals           Non-US Journals          US Journals                  1996           Huefner ♠      Wright
          1990-99    1995-99     1990-99     1995-99      1990-99     1995-99                         1994           1990
AR        18.13 (1)  11.67 (2)   15.27 (2)    8.24 (2)    21.34 (1)   13.74 (1)         3              1              2
JAR       16.06 (2)   8.86 (3)   12.59 (3)    6.97 (4)    18.36 (2)   10.58 (2)         2              2              1
AOS       13.64 (3)  11.78 (1)   15.73 (1)   12.54 (1)     9.93 (3)    9.11 (3)         4              5              4
JAE        8.24 (4)   8.05 (4)    7.31 (4)    7.69 (3)     8.57 (4)    7.84 (4)         1              3              3
ABR        3.34 (5)   3.16 (9)    6.28 (5)    5.60 (5)     1.04 (12)   1.22 (15)        8             15             11
AH         2.72 (6)   4.97 (5)    1.98 (8)    3.48 (7)     3.61 (5)    7.05 (5)        24             11             13
AJPT       2.66 (7)   3.21 (8)    1.62 (10)   1.57 (16)    3.46 (6)    4.21 (7)        12              6              8
JBFA       2.14 (8)   1.79 (11)   3.57 (6)    3.07 (9)     1.13 (11)   0.75 (18)        7             10              7
CAR        1.96 (9)   3.81 (7)    1.20 (15)   2.57 (10)    2.38 (7)    4.67 (6)         5              4              –
AAAJ       1.83 (10)  4.04 (6)    2.76 (7)    5.43 (6)     0.89 (13)   2.30 (8)        20              –              –
JAL        1.53 (11)  1.43 (13)   1.26 (13)   1.08 (21)    1.79 (8)    1.86 (11)       10              9             10
JAPP       1.36 (12)  1.12 (18)   1.11 (16)   0.75 (23)    1.52 (9)    1.47 (13)        6              8              9
JAAF       1.17 (13)  1.34 (14)   0.93 (19)   1.10 (18)    1.46 (10)   1.52 (12)        9              7              6
Abacus     1.16 (14)  1.08 (19)   1.47 (12)   1.64 (15)    0.83 (15)   0.65 (19)       17             12             12
CPA        1.06 (15)  2.66 (10)   1.53 (11)   3.36 (8)     0.55 (18)   1.92 (10)       13             20              –
JCM        0.91 (16)  0.95 (21)   1.10 (17)   1.14 (17)    0.68 (16)   0.83 (16)       19             19              –
JMAR       0.89 (17)  1.58 (12)   0.86 (21)   1.72 (14)    0.84 (14)   1.44 (14)       11             13              –
IJA        0.88 (18)  0.90 (22)   1.25 (14)   1.09 (19)    0.36 (20)   0.49 (21)       23             16             14
BAR        0.85 (19)  1.15 (16)   1.71 (9)    2.43 (11)    0.16 (24)   0.19 (23)       22              –              –
MAR        0.52 (20)  1.30 (15)   0.81 (22)   2.06 (12)    0.22 (21)   0.45 (22)       15              –              –
AF         0.48 (21)  0.53 (24)   1.06 (18)   1.08 (22)    0.16 (23)   0.16 (24)       27              –              –
APIA       0.47 (22)  0.70 (23)   0.60 (25)   0.69 (24)    0.40 (19)   0.79 (17)       18             18              –
BRIA       0.45 (23)  1.12 (17)   0.29 (26)   0.54 (25)    0.63 (17)   1.82 (9)        16             14              –
AHJ        0.42 (24)  0.30 (26)   0.80 (23)   0.47 (26)    0.12 (25)   0.14 (25)       21             17             15
FAM        0.37 (25)  0.49 (25)   0.90 (20)   1.08 (20)    0.03 (26)   0.07 (26)       14              –              –
EAR        0.37 (26)  1.04 (20)   0.69 (24)   1.92 (13)    0.18 (22)   0.50 (20)       25              –              –
PAR        0.06 (27)  0.07 (27)   0.14 (27)   0.14 (27)    0.02 (27)   0.04 (27)       26              –              –

♣ These citation results exclude ‘self-citations’ within a journal, and represent average proportions (in %) of citations across the particular journal set. Ranks are shown in parentheses.
* All of the peer-based rankings have been restated to the 27 journals used in this study. Of course, the original orderings have been maintained. A dash (–) indicates that the journal was not ranked in that study.
♠ These rankings are based on Brown and Huefner’s Table 5 results reporting % of respondents claiming at least ‘category 2’ standing for a journal.
TABLE 8: PROPORTIONS OF CURRENT JOURNAL USE
WITHIN “RESEARCH CLUSTERS”

Proportions of Citations Accounted for by:

            1995-99 Non-US Authors                          1995-99 US Authors
            “Traditional”         “Social/Organisational”   “Markets”             “Behavioural”
            Abacus, ABR, A&F,     AAAJ, AOS, CPA,           AR, CAR, JAAF,        AJPT, AOS, BRIA,
            BAR, EAR, JBFA        FAM, MAR                  JAE, JBFA             JAPP, JMAR
            %       rank          %       rank              %       rank          %       rank
AR          9.94     1            2.19     8                23.79    1            18.34    1
JAR         8.89     3            1.23    14                20.18    2             9.37    3
JAE         9.76     2            0.72    18                18.65    3             5.30    5
CAR         3.70     8            0.83    16                 8.23    4             4.04    7
AJPT        1.90    11            0.38    21                 4.14    5             7.68    4
AH          4.21     7            1.69    13                 3.56    6             4.16    6
JAAF        1.63    16            0.02    26                 2.91    7             1.21   13
AOS         8.07     5           27.71     1                 1.24    8            10.11    2
JAL         1.53    17            0.61    19                 1.15    9             3.24    8
JAPP        1.04    20            0.38    22                 0.77   10             1.19   14
JBFA        5.30     6            0.75    17                 0.75   11             0.68   18
JMAR        0.48    24            3.17     5                 0.59   12             1.70   11
ABR         8.85     4            2.82     7                 0.38   13             0.65   19
BRIA        0.68    22            0.33    23                 0.37   14             2.91    9
IJA         1.68    14            0.25    25                 0.23   15             0.37   20
Abacus      2.51    10            0.83    15                 0.19   16             0.21   22
AF          1.20    19            0.27    24                 0.09   17             0.08   25
CPA         1.70    13            6.17     3                 0.08   18             1.69   12
BAR         1.77    12            3.25     4                 0.07   19             0.11   24
JCM         0.14    27            2.12     9                 0.02   22             2.31   10
MAR         1.33    18            2.89     6                 0.02   20             1.19   15
EAR         1.64    15            1.90    11                 0.02   21             0.27   21
APIA        0.20    26            1.93    10                 0.00   27             1.13   16
AAAJ        3.64     9           10.65     2                 0.00   23             0.99   17
AHJ         0.52    23            0.49    20                 0.00   25             0.13   23
FAM         0.92    21            1.86    12                 0.00   24             0.00   26
PAR         0.26    25            0.00    27                 0.00   26             0.00   27

The proportions of citations shown in this table exclude ‘self citations’ within a journal.
TABLE 9: CORRELATION MATRIX FOR PEER-BASED RANKINGS
AND RANKS OF JOURNAL USAGE

                                          Brinn et al.  B&H       H&W       Non-US    US         US
                                                                            ‘Trad’    ‘Markets’  ‘Behav’al’
                                          N=27          N=20      N=14      N=27      N=27       N=27
Peer-based studies
Brown & Huefner                           0.794**
Hull & Wright                             0.890**       0.943**
Citation Measures
All Authors & All Journals 1990-99        0.728**       0.780**   0.758**
All Authors & All Journals 1995-99        0.678**       0.728**   0.754**
Non-US Authors & Journals 1990-99         0.551**       0.573**   0.662**
Non-US Authors & Journals 1995-99         0.516**       0.519*    0.657*
US Authors & Journals 1990-99             0.758**       0.908**   0.798**
US Authors & Journals 1995-99             0.663**       0.729**   0.793**
Non-US Authors & ‘Social’ Journals 95     0.141        -0.117     0.266     0.249    -0.126      0.178
Non-US Authors & ‘Trad’ Journals 95       0.534**       0.674**   0.648*              0.690**    0.454*
US Authors & ‘Markets’ Journals 95        0.719**       0.940**   0.793**                        0.775**
US Authors & ‘Behavioural’ Journals 95    0.666**       0.692**   0.793**

Spearman Rank correlation coefficients shown. ** significant @ < 1%, * significant @ < 5%.
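The coefficients in Table 9 are Spearman rank correlations between pairs of journal orderings. A minimal sketch of the calculation, using invented tie-free rankings rather than the study’s data, is:

```python
# A minimal sketch of the Table 9 method: Spearman's rank correlation between
# two orderings of the same journals. The rank vectors are invented for
# illustration; the no-ties formula rho = 1 - 6*sum(d^2)/(n(n^2-1)) is used.

def spearman_rho(ranks_a, ranks_b):
    """Spearman's rho for two equal-length lists of tie-free ranks."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

citation_rank = [1, 2, 3, 4, 5]   # hypothetical citation-based ordering
peer_rank = [2, 1, 3, 5, 4]       # hypothetical peer-survey ordering
print(spearman_rho(citation_rank, peer_rank))  # 0.8: similar, not identical
```

A rank correlation of this kind compares orderings only; as the discussion above stresses, a high coefficient says nothing about whether the underlying “units” (citations or survey points) mean the same thing to different individuals.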