
Journal of Banking and Finance 89 (2018) 26–38


Evaluation of academic finance conferences

Alexander Kerl∗, Enrico Miersch, Andreas Walter
Department of Financial Services, University of Giessen, Licher Str. 74, Giessen 35394, Germany

ARTICLE INFO

Article history:
Received 26 August 2015
Accepted 27 January 2018
Available online 31 January 2018

JEL classification:
G00

Keywords:
Finance conference
Assessment of research quality
Journal ranking
Citation analysis

ABSTRACT

We develop a novel framework to evaluate the quality of academic finance conferences. In particular, we analyze the publication and citation outcomes of 18,535 conference papers presented at 15 academic finance conferences between 2007 and 2012. In addition, we provide evidence of the perceived quality of these academic finance conferences by surveying a large number of participants of the 15 conferences. The use of three different quality proxies enables us to contrast levels of conference quality based on the quality of presented papers to levels of conference quality perceived by conference participants. Our results reveal two quality clusters of academic finance conferences and show that the use of the three quality dimensions, while generally robust, is associated with a number of peculiarities.

© 2018 Elsevier B.V. All rights reserved.

1. Introduction

Many if not most finance researchers present their research at academic conferences.1 As conference quality has not traditionally been evaluated by scholars or institutions, researchers have relied on anecdotal evidence to determine which conferences to attend. Only recently, a pioneering study by Reinartz and Urban (2017) aimed to replace anecdotal evidence on the quality of academic finance conferences with empirical evidence. In this vein, we extend their work by introducing an advanced method of evaluating academic finance conferences. Unlike Reinartz and Urban (2017), we do not evaluate the quality of finance conferences based solely on the publication outcomes of presented papers. Rather, we extend their analyses by providing survey-based evidence of perceived conference quality levels and of the citation outcomes of presented articles. We thus provide a holistic evaluation framework for rating conference quality levels that encompasses "hard facts" on the quality of presented papers as well as "soft facts" on the perceived quality of a conference. With our conference evaluation, we provide researchers with additional guidance on which conferences to participate in, thus allowing them to better allocate their scarce time and travel budgets.

Researchers attend conferences for several reasons. Conference participation not only provides opportunities to improve a paper's quality through feedback and discussion, but it also serves as an indicator of quality, increasing both the visibility of one's paper and the reputation of the presenting author and of her co-authors. Conference attendance further serves as a platform for cultivating contacts and for expanding one's personal network, and it presents opportunities to remain informed of recent trends in research. Thus, conference participation seems to be an integral facet of the value chain of the publication process.2 As publication in top-tier (finance) journals has recently become more difficult to achieve, the above reasons to present at academic finance conferences have likely become more important.3

When determining which conferences to attend, finance researchers traditionally face a dilemma. With the exception of a handful of very prestigious conferences such as the annual meetings of the American Finance Association and the Western Finance

∗ Corresponding author.
E-mail address: alexander.kerl@wirtschaft.uni-giessen.de (A. Kerl).

1 Reinartz and Urban (2017) report that 65% of all articles published in the top 9 finance journals between 2010 and 2013 were presented at conferences prior to publication. The average research article included in their dataset was presented at a minimum of two conferences prior to publication.
2 Cf. Reinartz and Urban, 2017.
3 Ellison (2002) documents a dramatic slowdown in the publication process at top economics journals due to the more extensive revision requirements applied by these journals. Additionally, the results of a survey that the authors of this paper conducted in 2016 show that more than 85% (75%) of all survey respondents believe that financial research has become more difficult to publish in the top 3 (top 10) finance journals over the last five years. The full results for this analysis are available upon request.


Association, for which anecdotal evidence of high quality seems indisputable, assessments of the quality of remaining finance conferences are less obvious.4 Our study, therefore, aims to measure conference quality levels to rank 15 finance conferences hosted around the globe by extending the approach introduced by Reinartz and Urban (2017) to other dimensions of conference quality.

Our evaluative approach is based on a combination of three factors that measure different aspects of conference quality. The first and second quality proxies are based on the quality of presented papers. The rationale for the first two quality proxies is as follows: the higher the average quality of accepted conference papers, the higher the quality of the academic finance conference considered. For the first quality proxy, we refer to the quality of the journal in which a conference paper is subsequently published. In particular, we employ the Journal Impact Factor (JIF) for 2015 as a widely used measure of journal quality. The second quality proxy measures the scientific impact of an individual article by calculating the number of normalized, i.e., age-adjusted, citations that an article receives. To obtain data for these two quality proxies, we analyze the publication success of 18,535 conference papers presented at 15 finance conferences between 2007 and 2012.

In addition to our analyses of the quality of the average paper presented at a finance conference, our third quality proxy is based on quality perceptions of surveyed conference participants. In particular, 882 researchers participated in our survey and provided us with their (subjective) quality assessments of the 15 finance conferences. This perception study allows us to account for other quality dimensions besides the quality of presented papers, e.g., networking possibilities, and thus allows for a more comprehensive assessment of conference quality. For each of the three quality dimensions we construct a separate conference rating. We use the term rating when we refer to the exact evaluation result concerning each quality dimension. The ranking of the conferences is then derived from the rating, and we order the conferences according to the respective quality dimension from one to 15. Finally, to form an aggregate conference ranking, we combine the three different quality dimensions.

In applying a hierarchical cluster analysis, we group the conferences into two quality clusters: First-tier conferences, i.e., AFA, WFA, and EFA; and Second-tier conferences, i.e., EFMA, FFA, FMA, FMAE, GFA, NFA, SSFMR, AFBC, Eastern FA, MFA, SFA, and SWFA.5 Our conference rankings are robust to various modifications of the quality proxies employed. With respect to paper quality, we employ alternative journal rankings and citation measurements and find our rankings to be rather stable. As conference quality can vary over time, we also calculate conference ratings through single-year analyses. As we document significant rating changes for some of the analyzed conferences in the single-year rankings, we argue that it is essential to develop conference ratings based on paper quality through a multiple-year analysis.

Concerning perceived conference quality levels, we use direct and indirect measures to assess the perceived quality of a conference. While for the direct measure of perceived conference quality (Top-down ranking) we directly ask survey participants about the overall quality of a conference, for the indirect measure survey participants assess sub-categories of conference quality, providing us with the information needed to construct a Bottom-up ranking of conference quality. In accordance with conference rankings based on paper quality, we find that the two approaches to assessing perceived conference quality yield similar results.

The remainder of the paper is structured as follows. Section 2 summarizes the related literature on the evaluation of research output, discusses the conference ranking system proposed by Reinartz and Urban (2017) and thereby provides a starting point for our evaluation framework. In Section 3 we introduce the methodology of our rating approach and describe our data. Section 4 provides descriptive statistics. Results of our conference ranking are presented in Section 5. The robustness of our findings and further analyses are examined in Section 6. Section 7 concludes.

2. Related literature

In the field of finance, hiring, tenure, and merit decisions for academics are regularly based on quality assessments of research output (Chan et al., 2013). Two approaches are commonly used to evaluate the quality of research articles. One is based on characteristics of the journal in which an article appears, and the other is based on individual citations, i.e., the scholarly impact an article triggers (Currie and Pandher, 2011).

For the first approach, the quality of an article is measured indirectly by referring to the quality of the journal in which an article is published. A widely used proxy for a journal's quality is the Journal Impact Factor (JIF) published by Thomson Reuters (formerly Institute of Scientific Information, ISI). The JIF is a citation-based journal quality assessment, as the JIF of a particular journal corresponds to the number of citations the average publication in that journal generates in a predetermined year after its publication.6 In addition to this citation-based approach, several other journal rankings have emerged (Harzing, 2016). These journal rankings are often based on peer-reviewed (or survey-based) perception studies, as they rate journals based on the opinions of a predetermined group of experts. In the field of finance, such approaches are "increasingly used as a method for ranking journal importance […]" (Currie and Pandher, 2011, p. 8). However, survey-based perception studies can suffer from perception, non-responding, and sampling biases (Chan and Liano, 2009; Moosa, 2011). Therefore, some studies extend their analyses to control for these potential biases (Oltheten et al., 2005; Currie and Pandher, 2011) or develop alternative ranking approaches; e.g., Beattie and Goodacre (2006) use submissions to the U.K. Research Assessment Exercise (RAE) as ranking input.

For the second approach, paper quality is measured directly based on the citation frequency of an individual article. The rationale behind this approach is that citations represent the impact of a scholarly paper and thus its intellectual value (Chan et al., 2002). Citation-based analyses centered on articles are commonly based on citation indices such as the Social Science Citation Index (SSCI) or Scopus (Borokhovich et al., 2000, 2011; Chung et al., 2009). Citation analyses at the paper level have been generally criticized for self-citation (Chan et al., 2002) and journal bias issues (Pons-Novell and Tirado-Fabregat, 2010). Some studies have avoided these limitations by controlling for self-citations (Colquitt, 2003) or by focusing on other databases such as the Social Science Research Network

4 Only one other recent study by Reinartz and Urban (2017) empirically evaluates the quality of finance conferences. For a discussion of commonalities with and distinctions from our rating approach, see Section 2.
5 We use abbreviations for the conferences; the full names of the analyzed conferences are as follows: First-tier Conferences: Annual meetings of the American Finance Association (AFA), the Western Finance Association (WFA), and the European Finance Association (EFA); and Second-tier Conferences: Annual meetings of the European Financial Management Association (EFMA), the French Finance Association (FFA), the Financial Management Association (FMA), the Financial Management Association Europe (FMAE), the German Finance Association (GFA), the Northern Finance Association (NFA), the Swiss Society for Financial Market Research (SSFMR), the Australasian Finance and Banking Conference (AFBC), the Eastern Finance Association (Eastern FA), the Midwest Finance Association (MFA), the Southern Finance Association (SFA), and the Southwestern Finance Association (SWFA).
6 For the exact calculation used for the JIF, see http://ipscience-help.thomsonreuters.com/inCites2Live/indicatorsGroup/aboutHandbook/usingCitationIndicatorsWisely/jif.html.
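The two-year JIF calculation referenced above divides the citations a journal receives in a given year by the number of citable items it published in the two preceding years. A minimal sketch of that arithmetic follows; the function name and the figures are illustrative, not data from this study:

```python
def two_year_jif(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """JIF for year Y: citations received in year Y by items the journal
    published in years Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# Illustrative figures: 360 citations in 2015 to 120 items published in 2013-2014.
print(two_year_jif(360, 120))  # -> 3.0
```

A hypothetical journal whose 2013–2014 items drew 360 citations during 2015 would thus carry a 2015 JIF of 3.0.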

(SSRN) (Brown, 2003) or Google Scholar to overcome the journal bias (Keloharju, 2008).

Journal rankings and citation frequencies have been employed as input parameters to address different research questions. For example, both measures have been used as quality proxies to explore potential factors that determine the quality of scholarly work (Chung et al., 2009) or to investigate the impact of author ordering on research quality (Laband and Tollison, 2006; Van Praag and Van Praag, 2008; Maciejovsky et al., 2009; Brown et al., 2011). Other studies have applied journal rankings to create rankings for individual researchers (Borokhovich et al., 1995; Chung et al., 2001) and institutions (Kalaitzidakis et al., 2003; Kim et al., 2009; Trevino et al., 2010).

As described above, journal rankings are commonly used to rank individual financial researchers or institutions, but until Reinartz and Urban's (2017) study was published, they were not used to rate finance conferences. In their study, the authors focus on 47 finance conferences that are mentioned in the acknowledgement footnotes of 3319 articles published in nine top finance journals between 2010 and 2013.7 To define the universe of conference papers used to rank the selected conferences, the authors track the conference programs of 2008 for larger conferences in the sample, i.e., conferences presenting more than 30 conference papers, and the conference programs of 2006 to 2010 for smaller conferences in the sample, i.e., conferences presenting fewer than 31 conference papers. Thereby, Reinartz and Urban use 8946 research articles to rate 47 finance conferences.

The conferences are ranked according to the conference-specific proportions of research articles subsequently published in the top 3 finance or economics journals. The authors document strong quality differences between the analyzed conferences and divide the conferences into "High quality conferences" and "Other conferences". To validate their ranking results, Reinartz and Urban (2017) also use additional ranking criteria, i.e., publication ratios in the top 9 finance journals, 5-Year Journal Impact Factors, and the proportion of research articles published in the highest rating category of the 2015 Association of Business Schools (ABS) ranking. They document a strong correlation between the different ranking criteria.

3. Evaluation approach

3.1. General evaluation approach

Our evaluation approach differs from that proposed by Reinartz and Urban (2017). First, while Reinartz and Urban (2017) analyze numerous finance conferences, we restrict ourselves to those 15 finance conferences that we identified by applying a four-stage selection process (see Section 3.2 Conference universe for more details on the underlying selection process). Second, Reinartz and Urban (2017) use several fairly similar input parameters, i.e., top 3 and top 9 journal publication ratios, 5-Year JIFs, and ABS journal rankings. Thus, the quality assessment of a conference paper and of the respective finance conference is restricted to one dimension: the quality of the publication outlet. In contrast, we use three different proxies to evaluate the quality of finance conferences: journal rankings, normalized citations of individual articles, and conference quality as perceived by conference participants. By providing survey-based evidence on conference quality, we account not only for the quality of presented papers but also for a range of other aspects related to conference quality such as feedback quality or networking possibilities. Third, we employ information on the three different proxies for conference quality to cluster finance conferences into groups of similar quality using a cluster algorithm. Finally, as the stability of the conference rating provided by Reinartz and Urban (2017) may be sensitive to the single-year observation period for larger conferences, we investigate a six-year period running from 2007 to 2012. In turn, we control for single-year effects and potential quality variations over time.

3.2. Conference universe

Finance conferences differ in various respects such as size, i.e., the number of conference papers presented at a conference, and/or thematic orientation. For example, some conferences focus on certain subfields such as corporate finance or risk management. Others, and especially smaller conferences, i.e., conferences with fewer than 30 paper presentations, are targeted at an exclusive audience.8 To avoid potential biases and to guarantee a broad range of different quality levels, we apply a selection process based on four criteria. First, we focus on general finance conferences, i.e., topics of conference papers must not be restricted to a specific subfield. Second, the conferences must not be restricted to an exclusive audience, i.e., there must be a public Call for Papers that enables all interested researchers to submit their papers. Third, the conferences must be held on a regular basis, e.g., annually. Thus, we do not study conferences initiated for a special and unique incident only. Fourth, conference programs must be available to allow us to collect and document presented conference papers.

These selection criteria result in the identification of the annual meetings of 15 finance associations (see Table 1 for the entire list of all conferences).9

3.3. Quality proxies

As conference quality is multifaceted, we base our ranking on the three different quality proxies discussed above to account for different aspects of conference quality, i.e., journal rankings, normalized citations, and perceived conference quality.

To measure the research quality of conference papers, we use the JIF as the journal-based proxy and normalized citations as the citation-based proxy of the scholarly quality of a conference paper. The JIF is a commonly used metric of journal quality (Borokhovich et al., 2011; Chung et al., 2009), and it offers a number of advantages relative to other journal rankings owing to its: (1) broad thematic scope; (2) global reach; (3) broad journal coverage; and (4) yearly updates. However, as the JIF measures a journal's quality, employing the JIF to proxy for the quality of a conference paper masks the fact that not every paper published in a journal has the same scientific impact. In particular, employing the JIF assigns the average quality of a publication in a specific journal to the conference paper under consideration. Thus, we also use individual citations of an article as a standard proxy for research quality (Chan et al., 2009). We define normalized citations as the age-adjusted number of citations that an individual article receives.10

7 Reinartz and Urban (2017) define the following journals as the top 9 finance journals: Review of Financial Studies (RFS), Journal of Finance (JOF), Journal of Financial Economics (JFE), Review of Finance (ROF), Journal of Financial and Quantitative Analysis (JFQA), Journal of Corporate Finance (JCF), Financial Management (FM), Journal of Banking and Finance (JBF), and Journal of Empirical Finance (JEF).
8 In this context, a conference with an "exclusive audience" refers to a conference that either sends no public Call for Papers or that only sends invitations to a small number of universities. For example, 64% of all conference papers presented at the Jackson Hole Finance Conference in 2008 originate from only three universities (Reinartz and Urban, 2017, p. 156).
9 Conferences do not necessarily need to be the annual meetings of finance associations to qualify for our selection process. For instance, the Australasian Finance and Banking Conference (AFBC) is not hosted by an association. However, the remaining 14 conferences identified are hosted by an association.
10 For a robustness check, we also use an article's total number of citations, i.e., the age-adjusted sum of citations that an article receives as a working paper and as a published journal article, and we obtain almost identical results.
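The age adjustment behind normalized citations divides an article's citation count by its age in years. A minimal sketch (the function name is ours, not the paper's):

```python
def normalized_citations(citations: float, age_in_years: float) -> float:
    """Age-adjusted citation count: total citations divided by the
    number of years since the article was published."""
    if age_in_years <= 0:
        raise ValueError("age_in_years must be positive")
    return citations / age_in_years

# e.g., an article cited 50 times that was published five years ago:
print(normalized_citations(50, 5))  # -> 10.0
```

Unpublished conference papers receive zero normalized citations by construction, so no age adjustment is needed for them.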

Table 1
Overview of analyzed academic finance conferences.

(1) Conference name | (2) Abbrev. | (3) First meeting | (4) Corresponding journal | (5) Ø conf. size p.a. | (6) Ø publication ratio | (7) Ø time lag until publ. | (8) No. of authors per conf. paper

American Finance Association | AFA | 1942 | Journal of Finance | 165.83 | 57.09% | 2.68 | 2.41
Australasian Finance and Banking Conference | AFBC | 1987 | n.a. | 138.83 | 32.05% | 2.09 | 2.21
Eastern Finance Association | Eastern FA | 1964 | Financial Review | 264.33 | 42.24% | 2.90 | 2.35
European Finance Association | EFA | 1974 | Review of Finance | 208.83 | 44.37% | 2.50 | 2.24
European Financial Management Association | EFMA | 1991 | European Financial Management | 298.00 | 37.53% | 2.51 | 2.08
French Finance Association | FFA | 1983 | Finance | 49.33 | 32.80% | 2.68 | 2.11
Financial Management Association | FMA | 1970 | Financial Management/Journal of Applied Finance | 776.33 | 37.51% | 2.34 | 2.26
Financial Management Association Europe | FMAE | 1996 | Financial Management/Journal of Applied Finance | 213.67 | 36.12% | 2.51 | 2.28
German Finance Association | GFA | 1993 | n.a. | 81.67 | 38.37% | 2.40 | 2.32
Midwest Finance Association | MFA | 1951 | Quarterly Journal of Finance | 323.00 | 36.22% | 2.30 | 2.06
Northern Finance Association | NFA | 1988 | n.a. | 110.33 | 32.93% | 2.71 | 2.23
Southern Finance Association | SFA | 1960 | Journal of Financial Research | 197.17 | 33.05% | 2.17 | 2.16
Swiss Society for Financial Market Research | SSFMR | 1997 | Financial Markets and Portfolio Management | 73.00 | 37.44% | 2.39 | 2.14
Southwestern Finance Association | SWFA | 1961 | Journal of Financial Research | 152.00 | 27.63% | 2.20 | 1.88
Western Finance Association | WFA | 1965 | n.a. | 144.50 | 48.10% | 2.71 | 2.23

Average of analyzed conferences | | | | 213.12 | 38.51% | 2.47 | 2.21

This table provides an overview of the 15 finance conferences analyzed in this study. Column (1) lists the associations organizing the conferences. Column (2) contains the conference abbreviations used in this study. Column (3) provides the year of the first meeting, and column (4) lists the names of the corresponding journal(s) of each conference, if existent. The information in columns (5)–(8) is based on our own calculations from our publication analysis of 18,535 conference papers presented at the analyzed conferences between 2007 and 2012. Column (5) shows the average number of conference papers p.a. presented at each conference. Average publication ratios (column 6) are obtained by dividing the number of successfully published conference papers by the total number of conference papers presented at each conference. Time lag until publication is defined as the number of years (measured on a monthly basis) between conference presentation and successful journal publication of research articles. The average time lag until publication displayed in column (7) is the average time lag until publication of all journal articles that have been presented as conference papers at the respective conference. Column (8) provides the average number of authors per conference paper for each conference.

Using normalized (age-adjusted) citations, we mitigate the impact of the age of an article. For instance, an article that receives 50 citations and that was published five years ago has a normalized citation number of 10 (for a similar approach, see Chan et al., 2013). To obtain data for these two quality proxies, we collect the programs of all 15 conferences held between 2007 and 2012 and then manually track the publication success of all conference papers via Google Scholar.11 We track the publication outlets of the conference papers and the number of citations triggered as of the end of February 2017.12 The observation period running from 2007 to 2012 is chosen based on two restrictions. On the one hand, we do not analyze conferences held prior to 2007 due to data availability constraints. On the other hand, to avoid potential biases resulting from the typical time lag to publication, conferences occurring after 2012 are not considered, as the period until publication may be insufficient to allow for publication.13

The third quality proxy is based on the results of a survey of conference quality. Through this survey, we asked conference participants about their quality perceptions of the 15 conferences. The population of this expert-based perception study, which we conducted between January and April 2016, consists of all scholars presenting at least one conference paper at the analyzed conferences between 2007 and 2009.14 Based on the quality perceptions of the survey participants, we construct a Top-down ranking, i.e., survey participants were asked to directly rate the 15 conferences with respect to overall conference quality (on a scale of 0 to 100 points).15 As discussed above, we use the perceived quality of a conference as an additional quality proxy in the evaluation framework because this proxy, in contrast to the JIF and normalized citations, accounts not only for research quality but also for other quality dimensions that may impact the quality of a conference.16

3.4. Rating methodology

All three quality proxies – JIF, normalized citations, and quality perceptions of conference participants – contribute to an overall conference ranking.17 For the first and second quality proxies, we obtain the rating scores for each conference by calculating the mean paper quality for all of the conferences considered based on the JIF and normalized citations. For instance, the average rating score based on normalized citations for a given conference is calculated as the average number of normalized citations of articles presented at that conference. Not (yet) published conference papers receive no rating points, i.e., zero normalized citations or a JIF value of zero. The same practice applies for conference papers published in journals without an assigned JIF. For the third quality proxy, we use the average conference evaluation obtained from the survey as a rating score. The overall conference ranking is obtained by calculating the equally weighted average of the three different sub-rankings.18 Thus, the three quality proxies are weighted equally.

4. Descriptive statistics

4.1. Characteristics of finance conferences

Our dataset includes 18,535 conference papers presented at the 15 finance conferences examined between 2007 and 2012. Based on the publication outcomes of these conference papers, Table 1 provides descriptive information for each conference.

The geographical universe of finance conferences is global, as conferences held in North America, Europe, Australia, and Asia are included in our sample. However, most of the conferences are held in North America. Column (4) indicates whether the hosting association for a conference also publishes a journal. Interestingly, almost three-quarters of all hosting associations publish their own journals, with the AFBC,19 GFA, NFA, and WFA as exceptions. Columns (5) to (8) provide additional information on the conferences, showing the average conference size as measured by the mean number of papers presented at each conference, mean publication ratios of conference papers,20 the average time lag to publication,21 and the average number of authors per conference paper.22

The number of presented conference papers varies strongly between conferences, with the FMA being by far the largest conference, with on average approximately 776 papers presented per year. At the other end of the scale, the conference program of the FFA only includes approximately 49 articles per year on average. There is also considerable heterogeneity with respect to average publication ratios. While more than 57% of all conference papers presented at the AFA have been subsequently published, the proportion is less than 28% for conference papers presented at the SWFA. In contrast, average time lags to publication (measured in years from conference presentation to journal publication) and the average number of authors per conference paper do not differ materially across conferences.

4.2. Overlap between finance conferences

It is common practice to present articles at different conferences in the same or subsequent years. In our sample, approximately one-third of all articles were presented at two or more conferences.23 Given this, it is interesting to investigate which conferences are rather similar with respect to presented papers and which conferences differ substantially in this dimension. Thus, we analyze how many conference papers of a specific conference, the basis conference, have also been presented at each of the 14 other finance conferences. Then, we divide the number of conference papers presented at both conferences by the total number of conference papers presented at the basis conference to obtain the respective percentage of common papers presented at the two conferences. We apply this procedure to each conference pair. The lower the percentage of shared conference papers, the lower the degree of similarity between a given pair of conferences. Finally, for each conference, we also calculate the total percentage share of conference papers that have also been presented at any other conference included in our sample.

Table 2 reports the fraction of common papers for each pair of conferences. For example, 12.13% of all conference papers presented at the EFA have also been presented at the AFA, and 15.28% of all conference papers presented at the AFA have also been presented at the EFA. In general, we document large differences between pairs of conferences, with a pairwise overlap of less than 5%. The NFA and FMA show the highest degree of intersection in our sample, as roughly 19% of the conference papers presented at the NFA have also been presented at the FMA. Due to the submission policies of the AFA and WFA, it is not surprising that the intersection between these conferences is less than 3%. With respect to the uniqueness of papers presented, the AFBC comes in first, with only 21.73% of all papers presented at the AFBC also being presented at another conference (see the last column of Table 2). With 69.86%, more than two in three papers presented at the SSFMR have also been presented at one of the other 14 conferences.

4.3. Publication outcome

To evaluate conference quality, we track publication outcomes of the 18,535 conference papers included in our sample. For the

11 We focus on Google Scholar because it provides better search results than other literature databases such as Business Source Premier or EconLit.
12 For each paper, we document the following information: (1) the conference name; (2) the year of presentation; (3) the title of the conference paper; (4) the authors of the conference paper; (5) the publication date; (6) the publication title; (7) the publishing journal; (8) the authors of the journal article; (9) the citation frequency of the working paper; and (10) the citation frequency of the journal article.
13 Azar (2007) and Ellison (2002) document evidence of increased review times for economic journals.
14 Please note that in a former version of this paper, we only analyzed finance conferences occurring from 2007 to 2009. Based on papers presented in these years, we surveyed individuals who participated in conferences in January of 2016. After completing this survey, we collected data for 2010 to 2012 in the autumn of 2016. Given this chronology, we only consider participants surveyed from the early years of our sample period.
15 The following question was asked through the survey: "Based on your personal judgment, please evaluate the overall quality of each of the conferences listed below. 'Overall quality' of a conference refers to your personal view of the quality of the conference, i.e., your personal impression of all of the aspects that, in your view, impact the quality of a conference. To rank the conferences, please move the slider for each conference within the quality range".
16 As described in more detail in Section 6, we also construct another ranking approach, i.e., the Bottom-up ranking, and analyze the impact of this alternative ranking approach on conference ratings to test the robustness of the Top-down ranking approach. Furthermore, for both ranking approaches, we construct various other rankings to control for the research experience (publication track record; referee and editor experience with top 3 and top 10 finance journals) and regional backgrounds of conference participants. Interestingly, all of the different ranking approaches generate similar results, as indicated by ranking correlation coefficients consistently greater than 0.88.
entire sample, we document a publication ratio of more than
17 38% and an average time lag to publication of 2.47 years (see
Please note that we use the term ranking when we put the conferences in or-
der. We use the term rating when we assign a score to a conference. For example, Table 1). The journals with the most frequent publication out-
AFA receives a rating score of 92.38 in our survey. This translates into rank num- comes are major finance journals such as Journal of Banking and
ber 1 in this dimension. For our final ranking for the conferences, we refer to the Finance (542 publications), Journal of Financial Economics (384 pub-
ranking and not the rating for each quality dimension, as rating scores are not com-
lications), Review of Financial Studies (347 publications), and Journal
parable across the categories.
18 of Finance (266 publications). Table 3 lists all journals with at least
To derive the overall conference ranking position for each of the conferences,
we rank the equally weighted average of the three different sub-rankings in as- 30 publications of prior conference papers.
cending order.
Please note that with the exception of the Australasian Finance and Banking 4.4. Conference survey
Conference (AFBC), all remaining conferences are hosted by an association.
Conference publication ratios are obtained by dividing the number of success-
fully published conference papers by the total number of conference papers pre-
We derive the survey population by collecting email addresses
sented at each conference. from the authors of all conference papers presented at the 15
The time lag to publication is defined as the number of years between the analyzed finance conferences held between 2007 and 2009. Our
month of conference presentation and the month of publication in a journal. For
example, if a paper was presented in January 2010 and was published in July 2013,
the time lag to publication is equal to 3.5 years. When conference papers are ac- 23
To identify multiple conference presentations, we compare the titles of all con-
cepted for publication but have not appeared in an issue as of September 2017, we ference papers. Only those conference papers with identical titles are counted as
assume that these accepted papers will be published in December 2017. multiple presentations. As articles occasionally change their names during the pub-
For example, Chung et al. (2009) and Walter (2011) document a positive rela- lication process (Walter, 2011), our estimate of multiple conference presentations
tionship between the number of (co-) authors of an article and its research quality. denotes the lower bound.
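The time-lag definition in footnote 21 can be sketched in a few lines; the function name is illustrative and not from the paper.

```python
# Time lag to publication (footnote 21): number of years between the month
# of conference presentation and the month of journal publication.
def time_lag_years(pres_year, pres_month, pub_year, pub_month):
    months = (pub_year - pres_year) * 12 + (pub_month - pres_month)
    return months / 12.0
```

A paper presented in January 2010 and published in July 2013 yields 3.5 years, matching the footnote's example.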

Table 2
Overlap of academic finance conferences.

Basis conf | AFA | AFBC | Eastern FA | EFA | EFMA | FFA | FMA | FMAE | GFA | MFA | NFA | SFA | SSFMR | SWFA | WFA | Any other conf
AFA | – | 0.80% | 1.11% | 15.28% | 2.01% | 1.31% | 5.73% | 1.51% | 0.90% | 1.01% | 2.11% | 0.60% | 0.60% | 0.30% | 2.41% | 35.68%
AFBC | 0.96% | – | 2.04% | 1.32% | 3.72% | 0.60% | 4.44% | 2.04% | 0.60% | 2.40% | 1.32% | 0.60% | 1.08% | 0.36% | 0.24% | 21.73%
Eastern FA | 0.69% | 1.07% | – | 1.70% | 4.10% | 0.88% | 15.76% | 4.73% | 1.07% | 11.03% | 2.84% | 5.93% | 1.95% | 8.95% | 0.25% | 60.97%
EFA | 12.13% | 0.88% | 2.15% | – | 4.79% | 1.52% | 9.42% | 3.59% | 3.11% | 1.52% | 4.23% | 1.44% | 1.92% | 0.64% | 9.90% | 57.22%
EFMA | 1.12% | 1.73% | 3.64% | 3.36% | – | 1.06% | 14.32% | 7.33% | 1.96% | 2.96% | 2.07% | 2.57% | 2.40% | 1.01% | 0.45% | 45.97%
FFA | 4.39% | 1.69% | 4.73% | 6.42% | 6.42% | – | 8.11% | 5.74% | 5.41% | 3.38% | 4.05% | 2.03% | 2.70% | 2.03% | 2.03% | 59.12%
FMA | 1.22% | 0.79% | 5.37% | 2.53% | 5.50% | 0.52% | – | 0.54% | 0.90% | 0.62% | 0.67% | 0.92% | 0.43% | 0.41% | 1.42% | 21.83%
FMAE | 1.17% | 1.33% | 5.85% | 3.51% | 10.22% | 1.33% | 17.00% | – | 1.87% | 3.20% | 2.26% | 2.42% | 3.35% | 1.56% | 1.48% | 56.55%
GFA | 1.84% | 1.02% | 3.47% | 7.96% | 7.14% | 3.27% | 6.94% | 5.10% | – | 2.86% | 1.43% | 1.84% | 10.00% | 2.65% | 1.22% | 56.73%
MFA | 0.77% | 1.55% | 13.56% | 1.47% | 4.11% | 0.77% | 8.99% | 3.25% | 1.08% | – | 2.87% | 2.87% | 1.94% | 6.74% | 0.85% | 50.81%
NFA | 3.17% | 1.66% | 6.80% | 8.01% | 5.59% | 1.81% | 18.88% | 4.38% | 1.06% | 5.59% | – | 4.23% | 0.45% | 2.42% | 3.17% | 67.22%
SFA | 0.51% | 0.42% | 7.94% | 1.52% | 3.89% | 0.51% | 18.07% | 2.62% | 0.76% | 3.13% | 2.36% | – | 1.10% | 2.62% | 0.34% | 45.78%
SSFMR | 1.37% | 2.05% | 7.08% | 5.48% | 9.82% | 1.83% | 8.22% | 9.82% | 11.19% | 5.71% | 0.68% | 2.97% | – | 2.51% | 1.14% | 69.86%
SWFA | 0.33% | 0.33% | 15.57% | 0.88% | 1.97% | 0.66% | 10.09% | 2.19% | 1.43% | 9.54% | 1.75% | 3.40% | 1.21% | – | 0.00% | 49.34%
WFA | 2.77% | 0.23% | 0.46% | 14.30% | 0.92% | 0.69% | 7.61% | 2.19% | 0.69% | 1.27% | 2.42% | 0.46% | 0.58% | 0.00% | – | 34.60%

This table shows the fraction of identical conference papers at a pair of two conferences. In particular, it shows the fraction of identical conference papers presented both at the basis conference – displayed in the rows – and at another conference – displayed in the columns, listed in the same order as the rows – as a fraction of all conference papers presented at the basis conference. The last column reports the share of papers of the basis conference that have also been presented at any other conference in the sample. The lower the fraction for a specific conference pair, the lower the overlap between these conferences. Since articles occasionally change their names during the publication process (Walter, 2011), our estimate of multiple conference presentation indicates the lower bound.
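The overlap measure behind Table 2 (shared titles divided by the basis conference's paper count, plus the share appearing at any other conference) can be sketched as follows. The data structure and function name are illustrative, not from the paper, and real inputs would need the exact title-matching convention of footnote 23.

```python
# Pairwise conference overlap as in Section 4.2: for each basis conference,
# the percentage of its papers also presented at each other conference,
# plus the percentage presented at any other conference (last column).
def overlap_table(programs):
    """programs: dict mapping conference name -> set of paper titles."""
    table = {}
    for base, papers in programs.items():
        row = {}
        for other, other_papers in programs.items():
            if other != base:
                row[other] = 100.0 * len(papers & other_papers) / len(papers)
        pooled = set().union(*(p for c, p in programs.items() if c != base))
        row["Any"] = 100.0 * len(papers & pooled) / len(papers)
        table[base] = row
    return table
```

Note that the "Any" column is not the sum of the pairwise entries, since a paper presented at three conferences would otherwise be counted twice.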

unique dataset contains the author names and email addresses of 7559 conference participants. We conduct our survey using Exavo, a professional provider of survey software, and we contacted survey participants via e-mail in January 2016. After a survey period of nearly two and a half months, we closed the survey in April 2016.24 Of the 7559 e-mails originally sent out, 2408 were returned as non-deliverable. Of the 5151 researchers successfully contacted, 882 participated in the survey.25

Table 4 shows selected descriptive statistics for the survey. Panels A to D report the main characteristics of the survey participants, and Panel E shows whether survey participants only know of or have participated in each of the 15 finance conferences. The average survey participant is 51.07 years of age, is male (80.75% of the participants are men) and attends on average 2.56 finance conferences per annum (Panel A). Of the participants, 43.37% currently hold full professorships. Nearly one in two respondents are either associate or assistant professors (Panel B). The top 3 regions of participants' current affiliations are North America (41.86%), Europe (36.38%), and Australia and Asia (12.61%) (Panel C). The regional distribution of survey participants largely mirrors the conference universe. In addition, Panel D reports information on the research experience of the survey participants. On average, conference participants have published 1.16 (3.01) articles in the top 3 (top 10) finance journals.26 Additionally, 35.04% (73.99%) of the respondents currently serve or have served as referees for one of the top 3 (top 10) finance journals, and 3.25% (9.15%) have served as editors for one of the top 3 (top 10) finance journals. In summary, our survey participants seem to be well suited to provide a valid assessment of the perceived quality of the finance conferences under consideration.

To measure conference quality levels, we asked survey participants to rate finance conferences based on their perceptions of overall conference quality (Top-down ranking27) and on different

Table 3
Journals with at least 30 publications.

Journal | No. of publications
Journal of Banking and Finance | 542
Journal of Financial Economics | 384
Review of Financial Studies | 347
Journal of Finance | 266
Journal of Corporate Finance | 191
Journal of Financial and Quantitative Analysis | 155
Financial Management | 120
Financial Review | 104
Review of Finance | 102
Journal of Futures Markets | 90
European Financial Management | 86
Journal of Empirical Finance | 85
Management Science | 84
Managerial Finance | 79
Review of Quantitative Finance and Accounting | 70
Journal of Financial Research | 67
Journal of Financial Markets | 66
Journal of International Financial Markets, Institutions and Money | 66
Applied Financial Economics | 63
Journal of Financial Intermediation | 63
European Journal of Finance | 58
Journal of Economics and Finance | 52
Journal of International Money and Finance | 52
Journal of Business Finance and Accounting | 49
Quarterly Review of Economics and Finance | 49
Pacific-Basin Finance Journal | 46
Quantitative Finance | 46
Journal of Financial Services Research | 45
International Review of Financial Analysis | 42
Journal of Multinational Financial Management | 36
Journal of Real Estate Finance and Economics | 36
International Review of Economics and Finance | 32
Accounting and Finance | 30
Financial Analysts Journal | 30
Journal of Economics and Business | 30
Quarterly Journal of Finance | 30

This table displays the most frequent publication outlets of the conference papers analyzed. In particular, the table presents all journals with at least 30 publications in descending order. To derive the number of publications for each journal, we control for multiple conference paper presentation. To identify multiple conference presentations, we compare the titles of all conference papers. Only those conference papers having identical titles are counted as multiple presentations and consequently as one identical working paper. Then, we analyze the publication success of all identified working papers by tracking the publication status as of February 2017.

24. The questionnaire is available upon request.
25. The resulting participation rate of 17.12% is similar to those of other surveys of finance research (Schinski et al., 1998; Holder et al., 2000). However, the survey participants did not respond to every question. Therefore, the implicit participation rate for many questions is less than 17.12%.
26. For the purposes of the questionnaire, to define the top 3 and top 10 finance journals, we refer to the ranking scores of the 5-Year Journal Impact Factors (JIF) obtained from the Journal Citation Reports (JCR) as of September 2015.
27. For the exact question posed in the survey, please refer to footnote 17.
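The survey response figures reported in Section 4.4 imply the participation rate given in footnote 25; a quick arithmetic check (variable names are illustrative):

```python
# Survey response arithmetic (Section 4.4 / footnote 25).
emails_sent = 7559
non_deliverable = 2408
responses = 882

contacted = emails_sent - non_deliverable           # researchers reached
participation_rate = 100.0 * responses / contacted  # in percent
```

Here `contacted` equals 5151 and the rate rounds to 17.12%, matching the text.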

Table 4
Conference survey–descriptive statistics.

Panel A: General information

No. of responses Mean

Total survey participants 882

No. of Conf part p.a. 324 2.56

Age (years) 529 51.07
Sex (Proportion female) 530 19.25%

Panel B: Current position

No. of responses Responses (%)

Total respondents 558

Full professor 242 43.37%

Associate professor 181 32.44%
Assistant professor 85 15.23%
Other (e.g., practitioner) 50 8.96%

Panel C: Location of current affiliation–Top 3 regions

No. of responses Responses (%)

Total respondents 547

North America 229 41.86%

Europe 199 36.38%
Australia and Asia 69 12.61%

Panel D: Research experience

Top 3 finance journals Top 10 finance journals

Experience Yes No No. of responses Yes No No. of responses

Editor 3.25% 96.75% 431 9.15% 90.85% 437

Referee 35.04% 64.96% 468 73.99% 26.01% 496
No. of publications (Mean) 1.16 474 3.01 503

Panel E: Conference evaluation

No. of Responses–Conf participants No. of Responses–Know conf

Total respondents 508 556

AFA 219 514

AFBC 78 212
EFA 170 456
Eastern FA 132 349
EFMA 179 412
FMA 246 435
FMAE 73 244
FFA 59 186
GFA 45 140
MFA 164 377
NFA 77 289
SFA 99 308
SSFMR 45 175
SWFA 61 233
WFA 135 433

This table presents descriptive statistics of the conference survey we conduct to analyze quality perceptions of confer-
ence participants. We obtain information about conference participants from analyzing the conference papers presented
at 15 finance conferences between 2007 and 2009. The conference survey was conducted between January and April
2016. Panel A reports some general information on conference participants. For each characteristic listed in the first
column, the number of responses and the mean value of all responses are shown in the second and third column,
respectively. Number of Conf Part p.a. is the number of conference participations p.a., i.e. the number of conferences a
survey participant participates in per annum. Panels B and C present further information on the current position and
location of current affiliations of survey participants. We report the percentage share for each of the categories based
on the number of total respondents (column 2) in the third column. Panel D documents information about the research
experience of survey respondents. We measure research experience by editor, referee, and publication experience in top
3 and top 10 finance journals, respectively. The definitions of the top 3 and top 10 finance journals refer to the ranking
scores of the 5-Year Journal Impact Factors (JIF) obtained from the Journal Citation Reports (JCR) as of September 2015.
Number of Publications (Mean) measures the mean number of publications in top 3 and top 10 finance journals per
respondent, respectively, based on information provided by conference participants. Panel E documents the number of
survey participants whose quality perceptions are used to rank each conference. Number of Responses – Conf participants
sums the responses used to conduct the conference ranking based on quality perceptions of researchers having partic-
ipated in the conference for which a ranking is provided. We use these responses to conduct the sub-ranking based on
quality perceptions of conference participants. Number of Responses–Know conf lists the responses used to conduct the
conference ranking based on quality perceptions of researchers knowing the underlying conference for which a ranking
is provided, i.e., researchers have not necessarily participated in the respective conference but at least know the confer-
ence. These responses are used to construct an alternative conference rating that is used to test the stability of ranking

Table 5
Conference ranking results.

(1) Group | (2) Conference | (3) Rank | (4) Ø Rating score JIF | (5) Rank JIF | (6) Ø Rating score norm journal citations | (7) Rank norm journal citations | (8) Ø Rating score conference survey | (9) Rank conference survey

(1) First-tier Conferences AFA 1 1.87 1 27.67 1 92.38 1

WFA 2 1.64 2 23.90 2 91.30 2
EFA 3 1.20 3 21.02 3 83.22 3
(2) Second-tier Conferences NFA 4 0.55 5 9.86 5 68.40 5
FMAE 5 0.46 7 7.79 6 65.58 6
FMA 6 0.43 8 6.82 8 73.86 4
GFA 6 0.52 6 7.06 7 62.49 7
FFA 8 0.61 4 11.22 4 53.54 14
SSFMR 9 0.40 9 6.75 9 58.33 9
EFMA 10 0.39 10 5.76 10 61.15 8
AFBC 11 0.32 12 4.96 11 57.53 10
Eastern FA 12 0.33 11 3.73 13 57.22 11
MFA 13 0.28 13 4.01 12 55.69 12
SFA 14 0.21 14 3.39 14 54.31 13
SWFA 15 0.14 15 2.99 15 49.48 15

This table presents the results of our conference evaluation. Overall ranking positions for each conference are shown in column (3). Conferences (column 2) are sorted by
their overall ranking positions. The overall conference ranking is derived from the equally weighted average of the three different sub-rankings based on JIF (column 5),
normalized citations (column 7), and quality perceptions from the conference survey (column 9). The three different sub-rankings are based on the average rating scores for
each quality proxy, listed in column (4), column (6), and column (8), respectively. Column (1) shows the quality clusters the analyzed conferences are tiered into based on
the results of a Ward cluster analysis. The cluster analysis uses the standardized average rating scores from each of the three sub-rankings as input factors.

aspects of conference quality (Bottom-up ranking28). As previous participation in a conference should be central in judging the quality of a given conference, we asked survey participants to indicate whether they have already participated in a specific conference or whether they just knew of the conference. With respect to participation rates, the FMA is at the top of the list, with 246 former conference attendees participating in the survey. The most well-known finance conference is the AFA, with 514 survey participants being aware of this conference.

5. Results

Table 5 presents our conference rankings. For each conference, we report its aggregated ranking derived from the average ranking for the three quality proxies (column 3). In addition, the average rating scores and the corresponding ranks based on each quality proxy are shown in columns (4) to (9).

The results for the three quality proxies show considerable heterogeneity between the 15 finance conferences. With respect to the first quality proxy, conference papers presented at the AFA are on average published in a journal with a JIF-value of 1.87, whereas conference papers presented at the SWFA are published in journals with an average JIF-value of 0.14 (column 4).29 Conference papers presented at the WFA and EFA are on average also published in high-quality journals, with JIF-values of 1.64 and 1.20, respectively. Accordingly, the AFA, WFA and EFA are positioned as the leading conferences by far. In particular, the average JIF-value across the AFA, WFA and EFA is 1.57, whereas we observe a respective value of 0.39 for the remaining 12 finance conferences.

The sub-rating based on our second quality proxy, normalized citations (column 6), provides further evidence of high heterogeneity between finance conferences, as conference papers presented at the AFA, WFA, and EFA are associated with much higher citation frequencies than conference papers presented at the other conferences. In aggregate, papers presented at the AFA, WFA and EFA trigger an average of roughly 24 normalized citations, while conference papers presented at the remaining 12 finance conferences only trigger roughly six normalized citations. It is noteworthy that the rating results based on the JIF and normalized citations are quite similar and provide identical ranking positions for ten of the 15 conferences. Accordingly, the correlation coefficient for these two sub-rankings is as high as 0.99.

Column (8) provides average rating scores for the third quality proxy, derived from surveying conference participants. The lowest (highest) possible score for a conference is 0 (100). Average rating scores for each conference are calculated based on the responses of survey participants who have participated in the conference considered. For instance, the average rating score for the FMA, 73.86, is the average rating score provided by the 246 respondents who have participated in the FMA (see Table 4). The third quality proxy, based on the quality perceptions of conference participants, provides similar ranking results not only for the leading conferences but also for most of the other conferences. The AFA scores highest with 92.38 points, and the SWFA scores lowest with 49.48 points. However, we observe a lower degree of heterogeneity in the perceived rather than the paper-based quality proxies when calculating mean rating scores of the top conferences (the AFA, WFA and EFA) compared to the other conferences; the mean rating score for the top three conferences is 89.97, while the respective mean rating score for the other conferences is 59.80.

Although the third quality proxy largely mirrors the ranking positions derived from the paper-based quality assessment, two conferences stand out: the FMA and the FFA. For these conferences, ranking positions deviate by plus four and minus ten ranks, respectively, compared to the results for the other sub-rankings. As a consequence, the FMA has a rather strong reputation, as it comes in fourth in our survey as opposed to ranking in position eight in the paper-based rankings. In contrast, the FFA, which comes in fourth in our paper-based quality assessment, is rated rather at the bottom of the list by the participants, i.e., position 14. These findings reveal the existence of differences between survey-based perceptions of conference quality on the one hand and paper-based conference evaluation results on the other.

28. The Bottom-up ranking is based on survey participants' perceptions of different quality measures of finance conferences, such as networking opportunities or the quality of presented papers. For more information on the construction of the Bottom-up ranking system, see Section 6.2.
29. As all conference papers are used for the calculation, articles that have not (yet) been published or that have been published in a journal not covered by the JIF are given a JIF-value of 0.
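The cluster analysis described in the notes to Table 5 uses standardized average rating scores as input factors. A minimal z-score sketch follows; using the population standard deviation is an assumption, as the paper does not spell out the variant.

```python
# Z-score standardization of a rating-score series, making the three
# quality dimensions comparable before clustering.
def standardize(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]
```

Standardizing each of the three rating-score columns separately puts them on a common scale before feeding them to a hierarchical clustering routine.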

The overall ranking of conference quality (column 3) is derived by ranking equally weighted averages of the three different sub-rankings in ascending order. To group all of the analyzed conferences into categories of comparable quality, we perform a cluster analysis. We use the Ward (1963) hierarchical cluster method based on the standardized average rating scores of the three quality proxies as input factors, and we obtain two quality clusters30: First-tier Conferences, i.e., the AFA, WFA, and EFA; and Second-tier Conferences, i.e., all remaining conferences. In turn, the outstanding positions of the three leading conferences are confirmed by the cluster analysis. This finding corresponds well with anecdotal wisdom held in the finance community and with the recent findings of Reinartz and Urban (2017). Although the cluster analysis tiers all of the remaining conferences into one quality category, some differences regarding quality levels exist among these conferences. For instance, conference papers presented at the FFA are cited on average three times more often than their counterparts presented at the SWFA.31

6. Robustness checks and further analyses

The conference ranking results are rather stable across the different quality proxies. To verify the validity of our evaluation approach, we apply robustness checks and analyze their impact on our rating results.32

6.1. Further analysis of ratings based on paper quality levels

First, we focus on additional journal rankings as alternatives to the JIF. To this end, we use ten additional journal rankings obtained from Harzing (2016) to measure the quality of publication outcomes.33 By examining a variety of journal rankings, we can avoid potential regional or methodological biases. In unreported results, we find conference ratings based on the ten journal rankings to be very similar to ratings obtained from the JIF, as the mean correlation with the JIF is 94.4% for conference rankings and 97.5% for conference ratings.34 Consequently, conference ratings based on the publication outcomes of conference papers are robust to different journal rankings.

In a second analysis, we modify the population of the sample by conducting a new sub-ranking for the JIF based exclusively on published conference papers. In doing so, the analyzed sample size decreases from 18,535 to 7137 published conference papers. In unreported analyses, we find our initial results to be generally confirmed in the restricted sample, as most of the conferences obtain similar ranking positions. In particular, the correlation between the two approaches is equal to 97.7% for journal ratings.

Third, rather than focusing on the citation frequencies of published articles alone, we use the total number of citations that conference papers receive to measure the impact of a research paper, as the former can suffer from journal bias (Pons-Novell and Tirado-Fabregat, 2010). The normalized number of total citations is defined as the age-adjusted sum of citations that an article receives as a working paper and as a published journal article. Unreported conference rankings based on the normalized number of total citations are very similar to rankings based on normalized citations. In particular, the correlation between both approaches is 98.8% for conference ratings.

Finally, for our paper-based quality proxies, we investigate whether conference rankings are stable for each year of our observation period. Therefore, we calculate the respective rating scores for each year separately. Panel A (B) of Table 6 reports the results for the journal-based (citation-based) approach. We find conference ratings to vary in single-year analyses. For instance, EFMA's ranking positions based on normalized citations (Panel B) vary between the sixth (2011) and the twelfth rank (2010). SSFMR's ranking positions based on the JIF (Panel A) even range from the fourth (2012) to the thirteenth rank (2007). Interestingly, there is no monotonic upward or downward trend, i.e., steady quality improvement or deterioration, for any of the analyzed conferences over the six years of observation. In addition, the First-tier conferences confirm their leading positions in each single year. While the overall conference rating results are mainly confirmed by the single-year analysis, it documents significant variations in conferences' exact ranking positions. Thus, it seems to be important to base the conference rating system on multiple years.

6.2. Further analysis of the survey-based rating system

We further investigate whether modifications in the ways that we infer the quality perceptions of conference participants affect our main findings. Using our baseline approach, we equally weight the individual ratings of every survey participant who had participated in each respective conference. Now we apply various alternative evaluation approaches to rank the conferences based on survey responses. The corresponding results are shown in Table 7.

First, unlike in our baseline analyses, in which only past participants are obliged to rate a conference, we expand the Top-down ranking by allowing for responses from survey participants who at least know of certain conferences.35 The results shown in column (4) prove to be very robust, as rankings largely remain consistent or change by a maximum of one position. The only exception is the GFA, which has a rather low reputation in general (ranking position 12) but which is rather highly ranked by past participants (ranking position 7).

Second, we account for different levels of expertise of our survey participants. In constructing rankings, columns (5) and (6) ascribe double the weight to survey participants who have published in one of the top 3 or top 10 journals, respectively. In columns (7) and (8), conference ratings provided by participants with editorial or referee experience at the top 3 or top 10 journals are given double the weight in constructing the conference ranking. As can be inferred from Table 7, the rankings remain largely unchanged.

Third, we consider potential regional biases of the survey participants by constructing separate rankings for survey participants from North America and Europe (columns 9 and 10), respectively. The results imply that European (North American) researchers

30. To group the conferences into a reasonable number of clusters, we use the resulting dendrogram to determine the number of clusters. The cluster formation is robust to changes in the applied cluster method; i.e., we obtain the same clusters when using alternative cluster algorithms (e.g., Average Linkage, k-Means, and Complete Linkage).
31. The difference between both citation frequencies is statistically significant at the 1% level.
32. To preserve space, we present only the most important tables. However, the other tables are readily available upon request.
33. These journal rankings stem from international – in the majority of cases, academic – institutions and differ with respect to underlying ranking approaches or regional focuses. These additional journal rankings are the HEC (Hautes Études Commerciales de Paris Ranking List, July 2011), ABDC (Australian Business Deans Council Journal Rankings List, September 2016), ABS (Association of Business Schools Academic Journal Quality Guide, February 2015), AST (Aston, March 2008), BJM (British Journal of Management 2001 Business & Management RAE rankings, 2001), HKB (Hong Kong Baptist University School of Business, 2005), Theo (Theoharakis et al., 2005), VHB (Association of Professors of Business in German-speaking countries, 2015), CNRS (Centre National de la Recherche Scientifique, 2015), and ESS (ESSEC Business School Paris, January 2016).
34. Interestingly, whereas the journal ratings themselves are positively but far from perfectly correlated, the correlation coefficients of the resulting conference ratings are much higher.
35. Panel E of Table 4 documents the responses used to construct this rating.
Table 6
Conference rating based on single year analyses.

Panel A: Single year analysis–JIF

Group Conf 2007 2008 2009 2010 2011 2012 Total (2007–2012)

(for each year and for the total, the two columns report the Ø Rating score JIF and the Rank JIF)

(1) First-tier Conferences AFA 2.15 1 1.91 1 1.62 2 1.98 1 2.18 1 1.51 1 1.87 1
WFA 1.91 2 1.78 2 1.87 1 1.72 2 1.25 3 1.30 2 1.64 2
EFA 1.09 3 1.01 3 1.29 3 1.43 3 1.48 2 0.93 3 1.20 3
(2) Second-tier Conferences NFA 0.60 5 0.71 4 0.74 5 0.35 11 0.59 5 0.35 10 0.55 5
FMAE 0.55 6 0.43 9 0.48 7 0.44 8 0.48 7 0.37 8 0.46 7
FMA 0.48 7 0.48 8 0.45 8 0.43 9 0.38 9 0.36 9 0.43 8


GFA 0.41 8 0.53 6 0.80 4 0.46 6 0.51 6 0.45 5 0.52 6
FFA 0.82 4 0.60 5 0.62 6 0.57 4 0.66 4 0.35 10 0.61 4
SSFMR 0.23 13 0.52 7 0.35 11 0.33 12 0.36 10 0.62 4 0.40 9
EFMA 0.39 9 0.35 10 0.41 9 0.42 10 0.38 8 0.43 6 0.39 10
AFBC 0.28 11 0.32 11 0.28 13 0.48 5 0.31 11 0.31 12 0.32 12
Eastern FA 0.32 10 0.27 14 0.27 14 0.45 7 0.25 12 0.39 7 0.33 11
MFA 0.26 12 0.29 13 0.38 10 0.21 13 0.28 13
SFA 0.21 14 0.31 12 0.29 12 0.11 13 0.20 13 0.16 15 0.21 14
SWFA 0.21 15 0.14 15 0.17 15 0.06 14 0.08 14 0.18 14 0.14 15

Panel B: Single year analysis–normalized citations

Group Conf 2007 2008 2009 2010 2011 2012 Total (2007–2012)

(for each year and for the total, the two columns report the Ø Rating score norm cit and the Rank norm cit)

(1) First-tier Conferences AFA 23.57 1 21.64 2 22.68 2 29.86 1 44.10 1 22.16 2 27.67 1
WFA 20.80 2 25.24 1 23.26 1 26.70 2 22.25 3 25.77 1 23.90 2
EFA 18.61 3 12.60 3 21.23 3 22.50 3 31.00 2 21.03 3 21.02 3
(2) Second-tier Conferences NFA 7.69 7 9.42 5 8.89 4 11.24 5 13.86 4 9.57 5 9.86 5
FMAE 10.36 5 6.33 7 6.93 8 10.57 7 6.22 10 6.21 9 7.79 6
FMA 7.05 8 6.61 6 7.52 6 7.49 8 4.93 11 7.38 7 6.82 8
GFA 8.12 6 5.06 10 8.49 5 7.22 9 6.40 9 6.70 8 7.06 7
FFA 14.00 4 12.30 4 7.23 7 11.25 4 7.09 7 16.93 4 11.22 4
SSFMR 4.15 13 6.11 8 4.66 11 10.95 6 7.90 5 7.43 6 6.75 9
EFMA 6.95 9 5.88 9 5.33 9 4.58 12 7.29 6 4.54 11 5.76 10
AFBC 5.57 10 4.92 11 3.73 12 5.01 10 6.61 8 4.73 10 4.96 11
Eastern FA 4.89 12 3.47 14 2.95 13 4.60 11 2.75 14 3.65 12 3.73 13
MFA 5.27 11 3.13 15 5.25 10 3.20 14 4.01 12
SFA 2.87 15 4.05 12 2.92 14 2.32 13 4.32 12 3.58 13 3.39 14
SWFA 3.15 14 4.00 13 2.08 15 2.25 14 4.22 13 2.72 15 2.99 15

This table presents the results of the conference evaluation based on the single year analysis for the JIF (Panel A) and normalized citations (Panel B). The conferences are sorted with respect to their ranking positions in the overall conference ranking presented in Table 5. In the single year analysis, we construct the sub-rankings based on JIF and normalized citations separately for each year, using the conference papers presented at the analyzed conferences in the respective year. The table lists the corresponding average rating score as well as the respective rank for all conferences and each year for both quality proxies. The last columns of Panels A and B contain the average rating score and corresponding conference rank for the entire observation period from 2007 to 2012. Missing values for the MFA in 2010 and 2011 are due to unavailable conference programs for both years.
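The per-year sub-ranking reduces to averaging the JIF-based rating scores of the papers a conference presented in that year and ranking conferences by this average. A minimal sketch with hypothetical scores (the paper's actual scores are derived from the 5-Year JIFs of the publishing journals):

```python
from statistics import mean

# Hypothetical JIF-based rating scores of papers presented at three conferences
# in a single year; conference names reuse the paper's abbreviations.
papers = {
    "AFA": [2.5, 1.8, 2.1],
    "WFA": [1.9, 1.7],
    "SWFA": [0.2, 0.1, 0.0],
}

# Average rating score per conference, then rank in descending order of the average.
avg_score = {conf: mean(scores) for conf, scores in papers.items()}
ordered = sorted(avg_score, key=avg_score.get, reverse=True)
ranks = {conf: position + 1 for position, conf in enumerate(ordered)}
print(ranks)  # {'AFA': 1, 'WFA': 2, 'SWFA': 3}
```

Repeating this per year, and once for the pooled 2007–2012 sample, yields the columns of Table 6.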


Table 7
Survey-based conference rating–alternative rating approaches.

(1)        (2)           (3)       (4)       (5)        (6)         (7)          (8)        (9)       (10)
Group      Conference    Top-down  Top-down  Top-down   Top-down    Top-down     Top-down   Top-down  Top-down
                         Part      Know      Know       Know        Know         Know       Know      Know
                                             Top3 Publ  Top10 Publ  Top3 Editor  Top10 Ref  North Am  Europe

(1) First-tier Conferences AFA 1 1 1 1 1 1 1 1
WFA 2 2 2 2 2 2 2 2
EFA 3 3 3 3 3 3 3 3
(2) Second-tier Conferences NFA 5 5 5 5 5 5 5 6
FMAE 6 6 6 6 6 6 6 5
FMA 4 4 4 4 4 4 4 4
GFA 7 12 12 12 12 12 14 10
FFA 14 13 13 13 13 13 15 13
SSFMR 9 8 7 7 8 7 9 8
EFMA 8 7 8 8 7 8 7 9
AFBC 10 9 9 9 9 9 8 7
Eastern FA 11 10 11 10 10 10 10 11
MFA 12 11 10 11 11 11 12 12
SFA 13 14 14 14 14 14 11 14
SWFA 15 15 15 15 15 15 13 15

This table presents all Top-Down survey-based conference rankings. The conferences (column 2) are sorted with respect to their ranking positions in the overall conference
ranking presented in Table 5. Columns (3) to (10) list the results of the different survey-based conference ranking approaches. Column (3) shows the survey-based conference
rating used as one of the three sub-rankings for the overall conference ranking presented in Table 5. For this rating, only respondents who participated in the respective
conference were asked to rank the “overall conference quality” on a scale from 0 (low) to 100 (high). The exact question used to construct the Top-Down Ranking in the survey
was: “Based on your personal judgment, please evaluate the overall quality of each of the conferences listed below. “Overall quality” of a conference refers to your personal
view on the quality of the conference, i.e., your personal impression of all aspects that in your view impact the quality of a conference. To rank the conferences, please move
the slider for each conference within the quality range [0–100].” The Top-Down Ranking is then constructed by ranking the average quality scores obtained from participants’
quality perceptions of the analyzed conferences. Whereas conference participation is needed for the rating in column (3), column (4) contains the Top-Down Ranking based
on respondents who at least know the respective conference, i.e., participation is not required to rate a conference. Columns (5) to (8) also show rating results of the
Top-Down Ranking based on the respondents who at least know the respective conference. However, in contrast to the rating in column (4) where all responses are equally
weighted, we control for respondents’ research experience by assigning double weights to the responses of survey participants who have at least one top 3 publication
(column 5), have at least one top 10 publication (column 6), have experience as editor of a top 3 journal (column 7), or have experience as referee of a top 10 journal
(column 8), respectively. For the purpose of the questionnaire, to define top 3 and top 10 finance journals, we refer to the ranking scores of the 5-Year Journal Impact Factors
(JIF) obtained from the Journal Citation Reports (JCR) as of September 2015. The next two columns present conference ratings based on the Top-Down Ranking approach, restricted to
respondents who know the respective conference and whose current affiliation is located in North America (column 9) or Europe (column 10), respectively. By focusing on
survey participants from these two regions, we control for potentially existing regional biases.
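The experience-weighted ratings in columns (5) to (8) are plain weighted means: qualifying respondents simply count twice. A minimal sketch with hypothetical responses:

```python
# Hypothetical survey responses: (quality score on the 0-100 slider, has a top 3 publication).
responses = [(80, True), (60, False), (70, False)]

# Respondents with a top 3 publication receive double weight; all others receive weight one.
weights = [2 if top3 else 1 for _, top3 in responses]
weighted_avg = sum(w * score for w, (score, _) in zip(weights, responses)) / sum(weights)
print(weighted_avg)  # 72.5
```

The same pattern applies to the editor- and referee-experience weightings; only the qualifying condition changes.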

evaluate European (North American) conferences more favorably. In general, however, our ranking results are only mildly affected, as the correlation between the rankings of columns (9) and (10) is valued at 92.5%.

In addition to the Top-down ranking, we apply a Bottom-up ranking approach. The Bottom-up ranking is constructed in two steps. In a first step, we identify seven aspects of conference quality, i.e., (1) scholarly quality, (2) possibilities to expand the personal network, (3) keeping oneself informed about recent trends in research, (4) getting inspiration for new research ideas, (5) the possibility to increase one's own reputation, (6) feedback quality, and (7) increasing the likelihood of publication. We then ask survey participants to evaluate the relevance of each quality factor to overall conference quality on a scale of one (unimportant) to seven (very important). The seven aspects are considered to be of similar importance to conference quality (Panel A of Table 8). In particular, aspect (3) (keeping oneself informed about recent trends in research) is rated highest with a mean of 5.71, and aspect (7) (increasing the likelihood of publication) is rated least important with a respective mean of 5.14. As a consequence, we weight each quality dimension equally for our Bottom-up ranking.

In a second step, survey participants who have participated in the respective conferences evaluate each of the seven aspects for each conference on a scale of one (low) to seven (high). Overall, we find the aspect of remaining informed about recent research trends to be the most positively viewed across the conferences (Panel C); in particular, the mean score for this aspect is 5.05. In contrast, the aspect related to increasing the likelihood of publication is on average rated least favorably, with a respective mean value of 3.90 for the considered conferences. With respect to heterogeneity in quality assessment across the conferences, we also find the highest dispersion value for this aspect (see Panel B of Table 8): the AFA receives a mean score of 5.98, whereas the FFA is assessed with a mean value of 2.91. Dispersion levels are considerably lower for the aspect related to keeping oneself informed about recent research trends, where the WFA scores highest with 6.04 while the FFA scores lowest with 4.59. With respect to conference-specific heterogeneity across the seven quality aspects, as can be inferred from column (13) of Panel B, the EFA shows the most balanced performance with a standard deviation of 0.27. In contrast, conference participants view the quality of the seven dimensions as least balanced for the GFA, with a respective standard deviation of 0.65.

Rating scores for the (equally weighted) Bottom-up rating are shown in column (10) of Panel B. The Bottom-up ranking position of a conference is shown in column (11); respective rankings for the Top-down approach are shown in column (12). We find the ranking positions to be largely unaffected regardless of which ranking approach is used. In particular, the correlation coefficient between the Top-down ranking and the Bottom-up ranking is as high as 0.95. Thus, we conclude that to assess the perceived quality of a conference, the use of the direct Top-down approach appears to be sufficient.

7. Conclusion

Due to a lack of empirical evidence, researchers have traditionally needed to rely on anecdotal evidence to assess a conference's quality. Following Reinartz and Urban (2017), we develop a novel and comprehensive framework for ranking 15 finance conferences. Our evaluation approach is based on the combination of three factors that measure different aspects of conference quality, i.e., the quality of publication outcomes of conference papers, an article's impact measured by citations, and quality perceptions of conference participants. To execute the conference ranking, we col-

Table 8
Survey-based conference rating–Bottom-Up ranking.

Panel A: Relevance of each quality aspect on overall conference quality

Quality aspect No. of respondents Mean St Dev

Scholarly quality 622 5.61 1.26
Network 620 5.52 1.39
Recent research trends 622 5.71 1.28
New research ideas 624 5.67 1.38
Reputation 627 5.36 1.51
Feedback 625 5.74 1.40
Publication probability 628 5.14 1.65

Panel B: Evaluation of quality of each quality aspect and ranking results

(1)             (2)    (3)        (4)      (5)       (6)       (7)         (8)       (9)          (10)     (11)       (12)      (13)
Group           Conf   Scholarly  Network  Recent    New       Reputation  Feedback  Publication  Mean     Bottom-up  Top-down  St Dev of
                       quality             research  research                        probability  rating   ranking    ranking   all QA per
                                           trends    ideas                                        score                         Conf

(1) First-tier AFA 6.55 5.97 5.59 6.18 6.44 6.63 5.98 6.19 1 1 0.37
Conferences WFA 6.54 6.15 6.04 5.92 6.28 6.48 5.80 6.17 2 2 0.28
EFA 5.93 5.54 5.67 5.56 5.81 5.74 5.09 5.62 3 3 0.27
(2) Second-tier NFA 5.07 4.82 5.16 4.72 4.93 4.45 3.97 4.73 5 5 0.41
Conferences FMAE 4.68 4.65 5.04 4.58 4.76 4.17 3.76 4.52 6 6 0.42
FMA 5.07 4.85 5.38 5.07 5.27 4.88 4.24 4.97 4 4 0.37
GFA 4.70 4.75 5.19 4.49 4.50 4.33 3.11 4.44 7 7 0.65
FFA 4.30 4.22 4.59 4.13 4.26 3.78 2.91 4.03 15 14 0.55
SSFMR 4.46 4.52 4.68 4.30 4.23 3.77 3.03 4.14 13 9 0.57
EFMA 4.53 4.27 4.63 4.50 4.63 4.18 3.61 4.34 9 8 0.36
AFBC 4.43 4.33 4.75 4.51 4.60 4.20 3.72 4.36 8 10 0.34
Eastern FA 4.35 4.37 4.71 4.45 4.50 4.09 3.38 4.27 10 11 0.43
MFA 4.35 4.25 4.61 4.34 4.37 3.91 3.21 4.15 12 12 0.46
SFA 4.33 4.28 4.89 4.37 4.34 4.02 3.54 4.25 11 13 0.41
SWFA 4.09 4.09 4.75 4.32 4.19 3.90 3.14 4.07 14 15 0.49

Panel C: Mean and Standard Deviation of all conferences for each quality aspect

Scholarly quality Network Recent research trends New research ideas Reputation Feedback Publication probability

Mean of all Conf per QA 4.89 4.74 5.05 4.76 4.87 4.57 3.90
St Dev of all Conf per QA 0.81 0.65 0.45 0.63 0.74 0.95 0.98

The Bottom-Up Ranking is constructed in two steps. First, we analyze the relevance of each of the quality dimensions: survey participants are asked to evaluate the
importance of each of the seven quality dimensions with respect to its contribution to overall conference quality on a scale ranging from one (unimportant) to seven
(very important). Panel A reports the number of respondents as well as the mean relevance and the standard deviation for each quality aspect. As
the quality dimensions are evaluated as almost equally important, we use equal weights for all seven quality dimensions in the Bottom-Up Ranking, displayed
in Panel B. Second, survey participants are asked to evaluate each conference based on the seven different quality dimensions on a scale ranging from one (poor/low) to
seven (very good/high) (Panel B). Columns (3) to (9) of Panel B list the average rating scores for each quality dimension and for each conference used to construct the
Bottom-Up Ranking. We derive the mean Bottom-Up rating score (column 10) by equally weighting each quality dimension. This rating score is then used to construct the
Bottom-Up conference ranking (column 11). Column 12 reports the conference ranking results derived from the Top-Down Ranking used in the main analyses for comparative
purposes. Column 13 documents the standard deviation of all quality aspects for each conference. Finally, Panel C shows the mean and standard deviation of all conferences
for each quality aspect.

lect two unique datasets. First, we analyze the publication and citation outcomes of 18,535 conference papers presented at 15 finance conferences between 2007 and 2012. Second, we conduct a survey-based perception study of finance conference quality. These datasets are used to form sub-rankings based on each of the three quality proxies. The overall conference ranking results are obtained from the equally weighted average of the sub-rankings.

Applying a hierarchical cluster analysis, we find the analyzed conferences to be tiered into two quality clusters: First-tier conferences, i.e., the AFA, WFA, and EFA; and Second-tier conferences, i.e., the EFMA, FFA, FMA, FMAE, GFA, NFA, SSFMR, AFBC, Eastern FA, MFA, SFA, and SWFA. Thus, our conference evaluation supports the outstanding quality of the AFA, WFA, and EFA, a finding that corresponds well to anecdotal wisdom held in the finance community and to the recent findings of Reinartz and Urban (2017). The remaining twelve conferences are rather different from the three top-rated conferences in terms of the quality of presented papers. Interestingly, most conferences assume nearly the same ranking positions in all three sub-rankings, i.e., most conferences vary by only one ranking position across the different sub-rankings.

To test the stability of our conference ranking, we conduct various robustness checks by applying variations in the proxies for paper quality, and we find our results to be very stable to the variations employed. Furthermore, in applying conference rankings based on single-year analyses, we show that conference rankings change (unsystematically) over the years; we therefore conclude that a multiple-year analysis is important to take account of potential outliers. To analyze the stability of the third quality proxy, we not only apply an alternative survey ranking approach, i.e., the Bottom-up ranking, but also control for the research experience and other characteristics of our survey participants. We find the survey-based conference rating to be robust to several modifications.

Our evaluation approach to ranking conferences is designed to support finance researchers in determining which conferences to attend. In conducting a survey that evaluates finance conference quality, we also shed light on the perceptions of conference participants regarding conference quality levels. Interestingly, the survey participants view each of the different quality factors to be of similar importance to overall conference quality. Furthermore, opportunities to remain informed of recent research trends are evaluated most favorably by the survey participants. Additionally, as our evaluation approach suggests the existence of significant quality differences among the conferences analyzed, we believe that a paper's acceptance at one of the conferences studied could indirectly serve as an indication of a paper's quality. Finally, our evaluation approach lends itself to related disciplines such as economics or management, wherein the quality assessment of conferences is also a nontrivial task. Because surveying participants yields results similar to those derived from the approach based on conference paper quality, future studies may apply either of the methods presented in this paper.

References

Azar, O.H., 2007. The slowdown in first-response times of economics journals: can it be beneficial? Econ. Inq. 45 (1), 179–187.
Beattie, V., Goodacre, A., 2006. A new method for ranking academic journals in accounting and finance. Accounting Bus. Res. 36 (2), 65–91.
Borokhovich, K.A., Lee, A.A., Simkins, B.J., 2011. A framework for journal assessment: the case of the Journal of Banking and Finance. J. Banking Finance 35 (1), 1–6.
Borokhovich, K.A., Bricker, R.J., Simkins, B.J., 2000. An analysis of finance journal impact factors. J. Finance 55, 1457–1469.
Borokhovich, K.A., Bricker, R.J., Brunarski, K.R., Simkins, B.J., 1995. Finance research productivity and influence. J. Finance 50, 1691–1717.
Brown, C.L., 2003. Ranking journals using social science research network downloads. Rev. Quant. Finance Accounting 20, 291–307.
Brown, C.L., Chan, K.C., Chen, C.R., 2011. First-author conditions: evidence from finance journal coauthorship. Appl. Econ. 43 (25), 3687–3697.
Chan, K.C., Chang, C.H., Chang, Y., 2013. Ranking of finance journals: some Google Scholar citation perspectives. J. Empir. Finance 21, 241–250.
Chan, K.C., Chang, C.H., Lo, Y.L., 2009. A retrospective evaluation of European Financial Management (1995–2008). Eur. Financial Manage. 15, 676–691.
Chan, K.C., Chen, C.R., Steiner, T.L., 2002. Production in the finance literature, institutional reputation and labor mobility in academia: a global perspective. Financial Manage. 31, 131–156.
Chan, K.C., Liano, K., 2009. A threshold citation analysis of influential articles, journals, institutions, and researchers in accounting. Accounting Finance 49, 59–74.
Chung, K.H., Cox, R.A.K., Kim, K.A., 2009. On the relation between intellectual collaboration and intellectual output: evidence from the finance academe. Q. Rev. Econ. Finance 49 (3), 893–916.
Chung, K.H., Cox, R.A.K., Mitchell, J.B., 2001. Citation patterns in the finance literature. Financial Manag. 30, 99–118.
Colquitt, L.L., 2003. An analysis of risk, insurance, and actuarial research: citations from 1996 to 2000. J. Risk Insur. 70 (2), 315–338.
Currie, R.R., Pandher, G.S., 2011. Finance journal rankings and tiers: an active scholar assessment methodology. J. Banking Finance 35 (1), 7–20.
Ellison, G., 2002. The slowdown of the economics publishing process. J. Polit. Econ. 110 (5), 947–993.
Harzing, A.W., 2016. Journal Quality List, fifty-seventh ed., April 2016. Available at: www.harzing.com.
Holder, M.E., Langrehr, F.W., Schroeder, D.M., 2000. Finance journal coauthorship: how do coauthors in very select journals evaluate the experience? Financial Pract. Educ. 10, 142–152.
Kalaitzidakis, P., Mamuneas, T.P., Stengos, T., 2003. Rankings of academic journals and institutions in economics. J. Eur. Econ. Assoc. 1 (6), 1346–1366.
Keloharju, M., 2008. What's new in finance? Eur. Financial Manage. 14 (3), 564–608.
Kim, E.H., Morse, A., Zingales, L., 2009. Are elite universities losing their competitive edge? J. Financial Econ. 93 (3), 353–381.
Laband, D., Tollison, R., 2006. Alphabetized coauthorship. Appl. Econ. 38 (14).
Maciejovsky, B., Budescu, D.V., Ariely, D., 2009. The researcher as a consumer of scientific publications: how do name-ordering conventions affect inferences about contribution credits? Marketing Sci. 28 (3), 589–598.
Moosa, I., 2011. The demise of the ARC journal ranking scheme: an ex post analysis of the accounting and finance journals. Accounting Finance 51, 809–836.
Oltheten, E., Theoharakis, V., Travlos, N.G., 2005. Faculty perceptions and readership patterns of finance journals: a global view. J. Financial Quant. Anal. 40 (1).
Pons-Novell, J., Tirado-Fabregat, D.A., 2010. Is there life beyond the ISI journal lists? The international impact of Spanish, Italian, French and German economics journals. Appl. Econ. 42 (6), 689–699.
Reinartz, S., Urban, D., 2017. Finance conference quality and publication success: a conference ranking. J. Empir. Finance 42, 155–174.
Schinski, M., Kugler, A., Wick, W., 1998. Perceptions of the academic finance profession regarding publishing and the allocation of credit in coauthorship situations. Financial Pract. Educ. 8, 60–68.
Trevino, L., Mixon, F.G., Funk, C.A., Inkpen, A.C., 2010. A perspective on the state of the field – international business publications in the elite as a measure of institutional and faculty productivity. Int. Bus. Rev. 19 (4), 378–387.
Van Praag, C.M., Van Praag, B.M.S., 2008. The benefits of being economics professor A (rather than Z). Economica 75, 782–796.
Walter, A., 2011. The effects of coauthorship on the quality of financial research papers. J. Bus. Econ. 81 (2), 205–234.
Ward, J.H., 1963. Hierarchical grouping to optimize an objective function. J. Am. Statist. Assoc. 58, 236–244.
