Journal hierarchies are flawed and should never be the only judge of quality. On that we can all agree.
But they did not emerge because universities became evil or “neoliberal”. They emerged because academia needed scalable mechanisms to move beyond the personal (old boys’) networks, institutional prestige and patronage on which advancement used to depend. That older system rested on vague reputational judgements accountable to no one; journal lists are at least partially meritocratic and externally legible.
In our field, business and management, we believe that journal lists have significantly advanced both the rigour and relevance of academic research, helping to distinguish serious scholarship from weak, ideological, anecdotal or self-promotional work.
Yet the Financial Times’ recent updating of its own FT50 list of journals, which feeds into its MBA rankings, prompted Carl Rhodes and Alison Pullen to complain that such “elitist” lists undermine the “social purpose” of business schools. This is because they exclude the journals that typically publish “work that interrogates power, inequality, corporate responsibility and the broader social consequence of business”, making such work “riskier to pursue, less valuable to publish and easier to marginalise”.
The article’s core weakness, however, is that it mistakes an uncomfortable reality of knowledge production for a moral failure unique to business schools. Its argument rests on the romantic assumption that if incentives associated with publishing in elite journals no longer constrained them, many more academics would naturally produce the socially valuable, practical, ethically engaged research the authors have in mind. No evidence is offered for this.
Moreover, Rhodes and Pullen indulge in a false dichotomy between rigour and relevance. Much supposedly “irrelevant” research later becomes foundational. Entire fields, such as behavioural economics and organisational psychology, were initially criticised as abstract or detached, before becoming enormously influential in policy and industry. The demand that scholarship demonstrate immediate social utility misunderstands how intellectual progress works.
Turning to their specific gripe – the replacement of three journals in the updated FT50 list – we suggest that the new journals increase, rather than decrease, societal relevance (they also have higher impact scores, one possible indicator of rigour). And rather than resting that claim on vague, moral notions of “social purpose”, we suggest it can be measured.
An Overton search reveals that 3,801 policy documents have cited the outgoing journal Human Relations, 2,014 have cited Organization Studies and 3,741 have cited the Journal of Business Ethics. That is much lower than the number citing the incoming American Sociological Review (17,132) and Psychological Science (6,346). No data are available for the third incoming journal, Academy of Management Annals, owing to its newness, but its sister journals have many policy cites, too.
More anecdotally, we might also forgive the FT and its readership of managers and prospective MBA students for not seeing the immediate relevance of articles such as “”, published in Organization Studies in March. In short, it seems that if business school faculty want to have a real-world impact, publishing in the new FT50 list is a better place to start than the old one.
But “impact” is a loaded term, of course – and most current claims of “social impact” are themselves status games. Universities increasingly market “impact”, “purpose”, “sustainability” and “stakeholder engagement” because these are fashionable institutional brands. Yet much of this language is vague, performative and politically selective. A paper advising governments on carbon regulation may be celebrated as socially meaningful, while research improving hedge-fund pricing efficiency is dismissed as elitist – despite both having real-world consequences. The distinction often reflects ideological preferences rather than objective social value.
Either way, there is something deeply anti-intellectual about the idea that scholarship must justify itself through immediate accessibility or practical visibility. Business schools are not consultancies. Their role is not simply to generate directly usable managerial advice. Highly specialised, technical and difficult work is often necessary precisely because important questions are complex. Implicitly condemning complexity as elitism risks flattening academia into a market for digestible opinion pieces and corporate workshops.
Rhodes and Pullen further ignore the incentives of the critics themselves. Many attacks on journal hierarchies are also attempts to redefine excellence around criteria more favourable to one’s own methodological style, ideological orientation or professional strengths (though we note that our own FT50 count is lower after the revision). That does not automatically invalidate the criticism, but it is an important factor to keep in mind.
Most importantly, there is the issue of accountability. If elite journals and their system of peer review by world-leading experts were to lose authority, who would decide what counts as valuable scholarship? Presumably no one would tolerate a return to the old boys’ club, but some of the alternatives that spring to mind – governments, public opinion, activist pressure, media visibility – would be far more politicised and intellectually corrosive.
Some critics argue that papers should be judged on their own merits, rather than on where they were published, not least because quality is not consistent within journals. But it is hard to sustain the argument that time-pressed, non-specialist administrators who put assessment criteria in place based on their own priorities would be better arbiters than the specialist peer reviewers used by journals.
We repeat: journal rankings and impact scores are far from flawless quality measures of individual and institutional output, and we would therefore never advocate using them as the only means for assessment. Nor does the FT.
Yet the real problem is not elite journals or rankings thereof. It is monoculture.
When publication in a handful of journals becomes the only currency, intellectual diversity does indeed narrow. But if we think other kinds of outputs are important, these need to be rewarded on their own terms – not by replacing scholarly rigour in journal lists with vague moral rhetoric.
is a professor of strategy at Copenhagen Business School and is also affiliated with the Norwegian School of Economics and Hong Kong Polytechnic University. is a professor of strategic and international management at Copenhagen Business School and, part-time, at the University of Birmingham.