The politics of knowledge

This is an extended version of an article published in The Conversation on 9th January 2024.


In January 1948, Life magazine published a feature showcasing the ‘102 Great Ideas’ of Western civilization, arrayed in index boxes covering topics from #1: Angel to #102: World. The project was the brainchild of Robert M. Hutchins, then chancellor of the University of Chicago and director of the Encyclopedia Britannica. Hutchins and his team had identified what they believed were the 432 ‘basic great books’, which the Encyclopedia planned to publish in a 54-volume set. To accompany this collection, Hutchins commissioned a team of researchers to prepare an index so that readers could navigate such a complex body of work. The result was displayed as part of an extended article in Life magazine, featuring a large double-page spread in which more than a dozen tired-looking indexers posed alongside the output of five years’ work and nearly a million dollars of investment.

While the index was certainly an impressive achievement at a time before computers were widely available, the results raise more questions than they answer. Who exactly decides what counts as knowledge? Who decides which books should be included and which left out? In this case, all 432 of the ‘great books’ were written by men. Indeed, the subject of ‘Man’ was even given its own chapter in the index, while ‘Woman’ featured only as a sub-category of ‘Family’, ‘Man’ and ‘Love’.

Photo showing researchers alongside numerous boxes filled with index cards, showcasing the ‘102 Great Ideas’ described by the Encyclopedia Britannica.
Figure 1–Researchers standing alongside their index of the ‘102 Great Ideas’. Source: Life (1948).

Measuring the unmeasurable

If the 102 Great Ideas teach us anything, it is that knowledge can never be separated from politics and the power structures that govern our everyday lives. While our social context may have changed, we still face the same challenges as the indexers of 1948: what is ‘knowledge’, and who decides what counts?

As an academic working at a well-respected Management School, I am required to conduct research and publish my work in high-quality academic journals. At my own institution, we rely on a list produced by the Chartered Association of Business Schools (CABS) to tell us which journals we should be publishing in. The CABS list ranks journals in certain business-related fields from 4* to 1, with 4* being the very best journals you can publish in, and 1 being the lowest. To remain in good standing in my job, I am required to publish a certain amount of research in journals rated 3 and above.

However, there is a politics to journal rankings. Just because a journal is ranked 4* doesn’t necessarily mean that the research published in it is the ‘best’ or the most useful. The higher-ranked journals tend to receive the most submissions and therefore have very high rejection rates (often well above 80%). Being published in a 4* journal is therefore often as much about luck and ‘playing the game’ – citing the ‘right people’ and mentioning the ‘right theories’ – as it is about the quality of the research being carried out.

There are also issues with the metrics we use to measure quality. The controversial ‘Impact Factor’ (IF), for example, is based on the number of citations that articles in a journal receive over a given period. This system is very easy to manipulate, for academics and journals alike. Many journals require that authors cite other papers from the same journal in order to inflate the journal’s own IF. Meanwhile, some unscrupulous reviewers request that citations to their own work be added to ‘improve the quality’ of the work they review.
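To see how easily the metric can be gamed, it helps to look at how it is calculated. The formula below is a simplified sketch of the standard two-year Journal Impact Factor (the exact rules for what counts as a ‘citable item’ vary between indexing services):

\[
\mathrm{IF}_{y} \;=\; \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

On this basis, a journal that published 200 citable items across the two previous years, and whose articles attracted 600 citations this year, would score an IF of 600 / 200 = 3.0. Because only the numerator needs to rise, every coerced self-citation – whether requested by the journal or by a reviewer – inflates the score directly.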

The marketization of knowledge

Clearly, there is a problem with the way that we create and manage knowledge. Knowledge today is not so much something to be ‘read’ and ‘disseminated’ as a commodity, churned through an endless cycle of production and consumption. This is why journal websites tend to look more like e-commerce sites than repositories of knowledge, full of ‘related articles’ and ‘recommended readings’ in much the same way as we might find when shopping on Amazon.

Of course, these ‘recommendations’ are not made in the reader’s best interest, but rather in the interests of the publisher. Elsevier journals, for example, only recommend articles published in other Elsevier journals. The same goes for Sage, Wiley, or any other publisher you care to mention. They do this because they want readers to stay within their own ecosystem – either reading more papers within their journals (paid for by library subscriptions or individual download fees) or citing more of their papers. The more of a publisher’s papers you read, the more likely you are to cite them; and in so doing, you boost the journal’s Impact Factor and the money it can bring in.

AI and the automation of knowledge

In a bid to attract more readers, some publishers are now turning to the world of machine learning and AI-generated content. Elsevier, for example, has recently launched a series of ‘Topic’ pages that summarize key areas in a particular field of study, ranging from Agricultural Science to Engineering and Veterinary Medicine. At the time of writing, there are some 376,328 pages of new AI-generated content. According to Science Direct’s own website, these pages have been created to make research more ‘accessible’ to the Elsevier audience – much like the work done by the human indexers back in 1948. However, these new pages don’t just repeat the old problems; they amplify them on a much greater scale. They also create a whole new set of problems that the Encyclopedia indexers never had to face.

Money talks

With some 376,328 new pages of content on its website, Elsevier (Science Direct) gains a huge advantage in terms of search engine visibility. This comes at the expense of other publishers, who are pushed down the search engine pecking order. There is a certain economic Darwinism at play here. It’s not that the work in other journals is of any lesser quality, but rather that Elsevier has the money and the power to flood search engines with content to the point where other journals struggle to compete.

The long-term implications of this strategy seem clear. Over time, the smaller journals will become less visible and so will be read and cited less often than their Elsevier competitors. Academics will therefore be forced to favour Elsevier journals for their publications and, before long, Elsevier journals will rise to the top of the journal rankings, while others will face demotion or even going out of business altogether.

Selection bias

Another emerging issue is that Elsevier’s AI-generated pages have been built from pre-existing content – i.e. the research that someone has already decided to publish. As we know, the academic publishing model is highly political, and what gets published in the top journals is often based on what is perceived to be popular or important at the time. And this doesn’t just apply to the arts and humanities. Even the natural sciences are subject to trends where researchers ‘follow the money’ to study a particular subject based on what is deemed the ‘most important’ area to study at the time.[1]

This can lead to a form of confirmation bias, replicated on a huge scale. By programmatically deciding which topics are worthy of summary and which are not, the algorithms are making selections based on human decisions that were already deeply political. It’s not so much that the machine programming may be biased (though it may be), but rather that the data it draws on is laden with human politics right from the very start.

The politicization of hypertext

To make matters worse, Elsevier has recently started adding its own hyperlinks directly to academic papers published in its journals [see Figure 2]. While this practice is presented as ‘helpful’ to the reader, in reality, it is anything but. In the world of the internet, hyperlinks exist for a reason: to link readers to related content. Meanwhile, in the world of academic research, citations exist to create verified and traceable connections between content. This is a subtle yet important difference.

While academic citations can themselves be quite political in the way they are used, the decision to include them at least rests with academic authors, editors and reviewers. With Elsevier’s new hyperlinks, however, the content is controlled by the publisher, and the links are presented in such a way as to disincentivise the reader from looking elsewhere for information.

AI-generated hyperlink inserted by Elsevier into an article published on Science Direct
Figure 2–Hyperlink to AI-generated content added directly to a research paper published by Elsevier (Science Direct). Source: Ryder and Downs (2022).

A broken system

It would be a mistake to think that all of the problems I describe are the result of AI, or even of one particularly profit-driven publisher. The problem isn’t so much AI, or Elsevier, but the academic publishing model itself, and the way that knowledge has become just another commodity within the neoliberal system. As universities come to look ever more like businesses and less like places of learning, these issues are only going to become more pronounced. Add to this the many problems associated with the internet and web culture more broadly, and we find ourselves in a very tricky situation indeed – far worse than anything the Encyclopedia Britannica researchers might have envisaged in 1948.

Unfortunately, there are no easy answers. And the problem isn’t as simple as making research open to everyone. The publishers have already tried that with Open Access – a move that created more problems than it solved.[2] If AI is to genuinely add value to the process of research and dissemination, then we really need a single, centrally funded system for publishing research, not the market-driven system we have right now. We also need professional reviewers who are paid for the work they do and, ideally, an end to the outmoded ‘ranking’ of journals based on arbitrary metrics that are a product of longevity as much as anything else. As an academy, we are also going to need to come to terms with the potential impact of AI as a generator of content. Given the pressures of the academic ‘publish or perish’ culture, the temptation for researchers to use AI to help them write papers simply seems too great.

Either we are going to have to embrace AI or ban it completely. What we can’t have is a situation where knowledge production becomes even more polarized between the haves and have-nots; where university tenure is the only way to gain access to expensive research journals that exist only to boost the profits of the publishers. In which case, AI may well be the least of our problems…

References

Life (1948) ‘The 102 Great Ideas: Scholars Complete a Monumental Catalog’, Life, January 26, pp. 92–102. https://books.google.co.uk/books?id=p0gEAAAAMBAJ&lpg=PP1&pg=PA92#v=onepage&q&f=false

Ryder, M. and Downs, C. (2022) ‘Rethinking reflective practice: John Boyd’s OODA loop as an alternative to Kolb’, The International Journal of Management Education, 20(3), p. 100703. doi:10.1016/j.ijme.2022.100703.


[1] See, for example, string theory in physics – something Lee Smolin comments on in his book The Trouble with Physics.

[2] Making a paper Open Access often requires an ‘Article Processing Charge’ (APC) to be paid to the publisher. Even for supposedly ‘Open Access’ journals, this fee is typically in the thousands.
