This article discusses the knowledge market actors, which I argue are institutions, instruments, and individuals. Knowledge-based hierarchy, or inequality, in a neoliberal context creates the conditions for gaming, where competition, productivity, and efficiency are the norms for climbing the hierarchy. I contend that what accounts for current knowledge management strategies is academic platforms as a new meta-medium that holds a large portion of the data while enabling faster, better, cheaper knowledge transactions. Platforms promise a larger share of the market to non-experts while championing automation and open science.
Have you prepared your CV with your previous education and published articles? How many accounts have you created so far to stay visible? Have you been accepted to an academic institution? Yes? Good! Now you are in the game, along with the other 10.8 million graduate students and 6.16 million faculty members in the world, as stated in the blog of the founder of Academia.edu. Richard Price launched the platform in 2008 because of ‘the time-lag between submitting his first paper to a journal and the journal publishing it’, in his own words in an interview (Shema 2012). He continued:
‘In the past, the journal would sit in between the scientist and his/her audience and mediate that relationship. We are moving toward a world where the personal brands of scientists are starting to eclipse those of journals. This is reflective of a broader trend occurring on the web, where sites like Twitter, Facebook, YouTube, Github, and others have enabled content creators to have direct relationships with their audiences.
We are moving toward a world where the key node in the network of scientific communication is the individual rather than the journal. The individual is increasingly going to be the person who drives the distribution of their own work and also the work of other people they admire.’
Richard Price was, consciously or unconsciously, being naïve in asserting that social media-like platforms provide direct contact with the readers of papers; however, he remarkably emphasized two things here: first, the requirement for fast publication; second, the shift in the academic standing of journals. But the questions of why we need to be fast in publishing and why journals have become servants of individuals require deeper institutional and instrumental analysis. My primary focus is on exploring these interrelated elements as market transactions and on seeking an answer to this question in the first place: Is it reasonable to treat academia and its surrounding components as a market? To that end, the conditions for a market, the candidate actors in play, and the medium enabling market practices will be discussed in parallel with the conceptual framework of Biagioli. Then the following question arises: if today’s publications have become a marketplace for ideas, can we correlate the mechanisms of a market managing knowledge with production management efforts that aim to reduce cost by making the business more efficient and responsive to market needs? From the 1980s to today, maximizing efficiency in the manufacturing of products and stimulating new technologies has become a target for companies and national institutions. Not only the automobile industry but also NASA borrowed the notions of efficiency in time and cost in the early 1990s by promoting the policy of faster better cheaper (FBC). I argue that current efforts in knowledge management in a market-like setting, aiming for faster, better, cheaper publications, mirror practices in production management, especially in putting platforms into operation.
While the mounting pressure to publish as fast as possible has been captured by the radars of many scholars, the victims of that same pressure either see the hands of the actors in the game and join them, or they furiously claim that they need to publish this way because this is how scientific progress works. The game implied here covers everyone with a seat in academia, offering some the director’s chair while others sit on uncomfortable lab stools. To be more specific, the advantages and disadvantages generated around the citation game are the driving force behind publication pressure, as brought to scholars’ attention by Mario Biagioli (Biagioli, 2016). On the other hand, there are masses, especially in the natural sciences, who are unaware of the sources of this massive pressure, which promotes new opportunities for academic misconduct in the form of academic impact. They believe, appealingly, that the only way to assure the quality of scientific work is to publish peer-reviewed articles in respectable journals. First of all, this idea is outdated. During the 19th century, printed articles were regarded as the only means of effective scientific communication, where judgments were made by humans who questioned the methods implemented and focused on process over facts (Biagioli and Lippman, 2020; Rudolph, 2019). Secondly, as amplified by Biagioli, we are now in the age of ‘impact or perish’, not ‘publish or perish’. Papers are evaluated through markers that connect scientific texts to a journal’s own fame (e.g. the journal impact factor) and an individual’s success story (e.g. the h-index). The fundamental change from early publications to today is weighing papers with metrics even before reading them. There is an epistemic distinction between the production of facts and the individuals involved in this production (Latour and Woolgar, 1986).
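To make one of these markers concrete: the h-index mentioned above is pure arithmetic over an author's citation counts, which is precisely what makes it gameable. A minimal sketch (the citation counts are invented for illustration):

```python
def h_index(citations):
    """An author has index h if h of their papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the rank-th paper still has at least `rank` citations
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3 (three papers with at least 3 citations)
```

Note that a single extra citation to a mid-ranked paper can raise the index, which is exactly the lever that citation-gaming strategies pull.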
As Latour and Woolgar noticed in 1979, ‘scientists talked about data, policy and their careers almost in the same breath’, and they introduced the term ‘credit’, as discoursed in sociology or economics, to define the currency of distinction for individual careers. Since then, the metrics have become so complicated that gamer individuals constantly look for cheat codes to secure and promote their seats. These cheating strategies are worth elaborating to reveal whether gamers are playing as if in a market or not.
The baseline for market-like implications rests upon ‘free speech as based on a free trade of ideas, and the exchange of citations as a market-like transaction’ (Delfanti 2020). Ostensibly, the notion of freedom is somewhat related to academic production. Freedom in academia lies more in the region of ‘positive freedom’, where ‘incapable researchers’ are excluded, allowing interference to keep up the ‘quality’. The American economist George Stigler profoundly supported the idea that the standards of a profession’s elite should be imposed to protect ‘the freedom of inquiry’ from compulsion by the student, the state, and the faculty (Nik-Khah 2017). Nik-Khah summarized the marketplace for ideas in the neoliberal scheme in three strategies, in his critique of academic science targeting neoliberal intellectuals from the Mont Pèlerin Society and specifically Stigler: eliminating regulatory agencies, forcing cost-benefit procedures on them, and subjecting science to the judgments of the marketplace. These strategies pave the way for subtle academic transformations, from automated mechanisms reducing labor to radically collaborative research attracting more funding (Huebner, 2017). The growing publication pressure on individuals is magnified by the financial pressure on institutions to secure grants and satisfy benefactors. Eventually, it is almost impossible to think of researchers without ‘money and productivity issues’, which imply a market-like structure. While paper publications in the neoliberal era are driven by expected efficiency in productivity, this notion of efficiency is almost identical to that of the manufacturing industry after the 1980s, which transforms goods and materials into new products: to create quality goods that consumers can afford. The historical overlap between knowledge management strategies and production management practices becomes more evident with the assumption that both are embedded in the branches of ‘The Market’.
Updating previously vocalized stands on academic metrics as a playground for institutions and individuals, and casting a production management map onto knowledge transactions, I abstract platforms as the medium of a market aiming for faster, better, cheaper productions. Here, I propose a two-step validation of my arguments. The first check is the exploration of the conditions that create the possibility of a market. As Biagioli suggests, ‘any metric will create the possibility of its gaming (and gaming-related misconduct), which will eventually crowd that market, thus creating an incentive to modify the metrics, which in turn will usher in the next generation of innovative gaming and manipulations’ (Biagioli, 2020). For this part, new and old school metrics and their associated frauds are investigated in the sense of a market generating inequalities between researchers. The second check is a discussion of the mechanisms of the knowledge market and the production market, where ‘capital’ can flow freely and the ideal actors are entrepreneurs of themselves (Foucault, 1979). As Foucault pointed out, ‘there is formalization of society on the model of the enterprise’, and the traces of this bigger socio-economic scheme are observable in market systems that produce facts along with individual careers. Publishers and academic social media platforms allocate a market share to entrepreneurial non-experts in the evolving knowledge market.
Check Point 1: Does academia look like a market?
The section ‘A Bird’s Eye View of the World’s Changing S&T Picture’ in the Science and Engineering Indicators 2010 report starts with this sentence: ‘Since the 1990s, a global wave of market liberalization has produced an interconnected world economy that has brought unprecedented levels of activity and growth, along with structural changes whose consequences are not yet fully understood.’ The report then highlights governments’ knowledge-intensive economies, which boost research capabilities and their commercial exploitation of intellectual work. To be competitive, many countries have allocated more resources and funding to science and technology (S&T) studies as a means of economic growth. The structural changes specified in the report, shifts in international high-technology markets, trade, and relative trade positions, have been implemented through the combined efforts of industry and government. Their roles were identified as:
“Multinational companies (MNCs) operating in this changing environment are seeking access to developing markets, whose governments provide incentives. Modern communications and management tools support the development of globally oriented corporations that draw on far-flung, specialized global supplier networks. In turn, host governments are attaching conditions to market access and operations that, along with technology spillovers, produce new and greater indigenous S&T capabilities. Western- and Japan based MNCs are increasingly joined in world S&T markets by newcomers headquartered in developing nations.”
While this report clearly points out the economic aspects of recent studies empowered by industry and government, it is almost blind to the effects of an interconnected economy forcing every actor to generate more and grow big. As an indicator of becoming a considerable actor, most developing countries (e.g. the Asia-8 countries: India, Indonesia, Malaysia, Philippines, Singapore, South Korea, Taiwan, and Thailand) have reorganized their expenditures for science and technology enterprises to promote knowledge- and technology-intensive economies. In addition, to become competitive, science and technology studies underlining high-value intellectual output have been published in domestic and international journals. According to the report, global article production and citation patterns have shifted: 23% of US citations in 2007 went to the EU, an increase of 5% since 1992, whereas the Asia-8 group received 2% of all citations, ‘probably because of language, cultural barriers, and research quality’. The conclusion of the 2010 report overview was even more progressive: ‘Science and technology are no longer the province of developed nations; they have, in a sense, become democratized.’ (The quotation marks around “democratized” appear in the original document.)
There are multiple red flags in this report. Starting from the very end: if the increasing operation of science and technology concepts in many countries is treated as analogous to procedural participation in democracy, then the report implies that scientific practices are regularized with indicators (citations here) to conform to a global economy. The report might appear to be cheering on the cognitive engagement of developing countries, yet it clearly labeled their publications and related research as low-quality products. Epistemic deliberation, however, can be hierarchical and can eventually generate inequalities and, what a coincidence, align with neoliberal ideology, since Hayek advocated that the unequal distribution of knowledge is an essential condition of the market. Therefore, the ending of the overview part of the report is alarming, with its confusing usage of ‘democratizing’.
Secondly, the report emphasized the Asia-8 countries’ cultural incompatibility with US-based success metrics (citations); however, it missed the point of regional dynamics. The Asia-8 countries cited increasingly more papers from their own region between 1992 and 2007 (from 2% to 12%), while their citations from the US decreased gradually (from 36% to 26%) (see Figure O-19 in the report). So why did US researchers cite fewer papers from the Asia-8 countries (as a subcategory of developing countries)? Is there some sort of trust issue between the US and developing countries? What is the source of these trust issues? Are citations a fair marker of quality? If so, since when? How did developing countries react to this shift in academic citations?
A very brief answer to the above questions falls into two categories: first, a global and connected economy forced most countries to publish in a very structured way, one deep-rooted in the Science Indicators first published in the US in 1973; second, developing countries sometimes went astray from ‘regularized publication methods’ in order to be perceived as outstanding global actors. The historical progress toward the Science Indicators has its own merit in a frame that includes various actors.
The progression of science indicators
Before the National Science Foundation (NSF) published the first edition of Science Indicators (SI-72), Eugene Garfield, an American linguist and businessman, founded the Institute for Scientific Information (ISI) in 1956, which generated the Science Citation Index (SCI). During the 1960s, he worked meticulously to market the SCI to scientists ‘as an aid to information retrieval’ (Wouters, 1999). The destiny of the institute was entangled with the indexing of law cases. In 1873, Frank Shepard had set up a commercial business, Shepard’s Citations, Inc., in Chicago ‘to know whether a legal case was still valid’. The vice president of the company, William C. Adair, explained in 1955:
“The lawyer, however, must make sure that his authorities are still good law, that is, that the case has not been overruled, reversed, limited or distinguished in some way that makes it no longer useful as a valid authority. Here is where the use of Shepard’s Citations comes in.”
Based on this hierarchical indexing, Adair offered his expertise for science indexing, and the young Garfield, who had no intention of building a citation system, was thrilled by the idea because he was already working as a consultant in computer-based automation.
He was then concerned with bringing his indexing technique to sociologists and historians. As a result, Robert King Merton, an American sociologist, took the bait and focused his studies on the SCI in 1962. By 1965, Merton’s students had industriously excavated citation indexing efforts to gain insights into scientific consensus and reward systems, as well as the development of new fields. Garfield extended his indexing from articles to journals, ranking them by how frequently they appeared in citations. In 1972, he started to advertise the Journal Citation Reports, which later transformed into the ‘Impact Factor’ (Csiszar, 2020).
The national institutions in the US started to recognize ‘the need for comprehensive and standardized set of indicators of the health of science’. The NSF published SI-72 with the motto of being ‘the first effort to develop indicators of the state of the science enterprise in the United States.’ The report aimed to identify comparative indices of the level of R&D in areas dependent upon science and technology, such as technical knowledge, productivity, and international trade (NSF National Science Board, 1973). The NSF then reached out to the Social Science Research Council for help with the construction of social indicators. The council organized two conferences, in 1974 and 1976, to improve the quality of science indicators. They defined indicators as ‘statistical time series that measure changes in significant aspects of society’ (Godin, 2003). The acceleration of science indicators had now started. Sociologists, specifically Merton, were interested in taking the role the NSF was looking to fill. In 1974, a conference on science indicators was organized at the Center for Advanced Studies in the Behavioral Sciences. Following the conference, a semi-official committee gathered to discuss the evaluation of papers subjected to SI-72 standards. The only non-academic member was Eugene Garfield, but he lent tremendous support to the discussions: he provided the citation data he had obtained through his company, at a price. The SI-72 critics offered their expertise in various areas: statistics (William Kruskal), economics (Zvi Griliches), history and philosophy of science (Gerald Holton), and political philosophy (Yaron Ezrahi). Garfield’s approach to citation analysis was questioned and warned against by the scientific community. One of the most dominant views in the community held that generating a hierarchy by citations is against the spirit of creativity and equality in science (Csiszar, 2020). An even bolder warning came from Merton to Garfield:
“Watch out for goal displacement: Whenever an indicator comes to be used in the reward system of an organization or institutional domain, there develop tendencies to manipulate the indicator so that it no longer indicates what it once did. Now that more and more scientists and scholars are becoming acutely conscious of citations as a form of behavior, some will begin, in semi-organized fashion, to work out citation-arrangements in their own little specialties.”
Merton, in his work Bureaucratic Structure and Personality, identified goal displacement as the process by which an instrumental value becomes a terminal value (Merton, 1940). According to him, the product of goal displacement would be the bureaucratic virtuoso who is stuck at one terminal and kept from helping many other clients. The implementation of this abstract definition of displacement has been observed in citation analysis, where gaming strategies have been developed to fit the standardized metrics. Merton thought the scientific community would see the danger and adapt itself to avoid this trap. But he was unduly optimistic here, and he failed to recognize the political and public representations of scientific knowledge (Csiszar, 2020).
The role of indicators
Indicators, with a narrowed-down definition, found a place in the scientific community from the 1970s onward, although they had emerged in economics in the 1930s with the concepts of growth, productivity, employment, and inflation (Godin, 2003). With the 1990s as a backdrop, knowledge-powered economies all over the world came forward to take a role on the science stage. From developing countries to the developed world, the scientific community, entrepreneurs, and policy makers eagerly promoted the use of indicators as a tool assigning prosperity in science. It is no surprise that early indicators based in economics leaked into scientific evaluations, since the symbiotic relation between science and economics was restored to ensure the growth of countries. I argue that indicators are placeholders or symbols for the “scientificity” of countries, as Godin implied for models in STS (Godin, 2017). Because scholars and funding agencies can agree upon them as a basis, indicators act as a rhetorical transporter that can propagate through institutions and individuals.
While indicators (sometimes called metrics) were originally a literature-indexing and search technique, they transformed into the infrastructure that measures the value of research and evaluates researchers (Biagioli, 2020). Being competitive in the world was associated with the impact of science and technology studies, which needed to be solidified by quantitative indicators authorized by the qualitative checks of peer scholars. This quantitative assurance of individuals, who are affiliated with institutions and accredited with citation counts and the journal impact factor (JIF), shaped researchers’ careers as they climbed the academic hierarchy and took up management positions in universities. Moreover, those ‘impactful’ individuals became reviewers for journals to make sure their area of expertise was protected from improperly conducted research (e.g. poorly designed experimental conditions), unproductive studies (e.g. no future work or potential specified), and incompetent researchers (e.g. newcomers to the area or researchers from low-ranking universities). The same individuals contributed to their institutions’ fame, in that donations and funding are granted to universities for the sake of valuable researchers. As a result, journal publications with indicators have enabled multifaceted dealings between researchers, universities, funding agencies, and policy makers. Indicators have been used in exchange for faculty or overall department productivity, the value of professors’ publications, student admission rates based on university rankings, the university’s expenditure per student, alumni donations, the number of books in the library, grant funding secured by the faculty, students’ post-graduation employment rate, faculty salary, etc. (Biagioli, 2020).
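The JIF invoked above is likewise a one-line calculation, which helps explain how easily it circulates as a token of value: citations received in a year to items published in the two preceding years, divided by the number of citable items in those years. A hedged sketch with invented numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y: citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations received in 2009 to its 2007-2008 articles,
# which together comprised 120 citable items.
print(impact_factor(480, 120))  # → 4.0
```

Because both the numerator and the denominator are under partial editorial control (what counts as 'citable', which papers are pushed as must-cite references), the formula itself invites the gaming strategies discussed throughout this article.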
While indicators have become the certificate of excellence in academia, they have surged into government agencies (allocating funding and arranging policies, such as in the DoD: Department of Defense, AFOSR: Air Force Office of Scientific Research, NIH: National Institutes of Health), public discussions (for example, Anthony Fauci vs. John Ioannidis about lockdowns, in Freedman, 2020), and industry decisions (between 2014 and 2016, approximately 17% of U.S. firms introduced an innovation, that is, a new or improved product or process, as explained in Science and Engineering Indicators, 2020). To that extent, Biagioli was on point in describing the current state of academia as a market and impact factors as tokens in trade; however, he did not go far in exploring market relations within and outside academia. Here, I tackle the conditions for a market first and then dig into the co- and multi-lateral relations of knowledge management practices.
The conditions for a market
As introduced by Latour and Woolgar, reward and credibility have become driving forces for scientists who desire to stay in academia in return for setting up devices, writing papers, and occupying different positions (Latour and Woolgar, 1986). The notion of credit, as imported from economics, means ‘an agreement to purchase a good or service with the express promise to pay for it later’ (Kenton, 2020). A similar concept applies to the academic market. If researchers have a higher h-index, for example, their credibility on their research subject can be uplifted, since their service to science secures more credit loans. Furthermore, these researchers can use their credibility to fill leadership roles, since ‘how to be a leader’ recipes are endorsed in many recent best-seller books (Kouzes and Posner 2010). Undoubtedly, goal displacement has occurred in the current state of academia. Individuals, institutions, and platforms have become actors in a market-like construction. Recent publication strategies have established the conditions for a market. The stakeholders of this market are universities, national laboratories, funding agencies, policy makers, journal publishers, science platforms, and researchers. The medium enabling transactions is journal publications. The transferred value is assigned by credits, with indicators as the unit of value.
The question was: is it reasonable to treat academia and its surrounding components as a market? My answer is yes! However, this is a market where actors have learned to game the system, because this market enforces a knowledge-based hierarchy between researchers from all over the world, between countries’ scientific research institutions, and between academic social media platforms. As discussed earlier, any asymmetry among suppliers makes the market more vibrant, in a parallel plot that shouts, ‘the market decides the best!’. I claim that the knowledge-based hierarchy contributes to the enlarging gap between ‘poor researchers’ and ‘rich researchers’. ‘Poor’ means academically invisible to one’s own research community. To be honest, poor researchers are predestined to stay poor unless they cheat, and there are multiple reasons for cheating. First of all, they receive comparably fewer citations around the world (as an example, see the NSF indicators reports). Researchers from many developing countries must publish their work in high-impact journals to become visible. However, ongoing bias and trust issues between countries limit those people’s prominence, as they receive fewer citations from the US due to language and cultural barriers, as implied in the Science and Engineering Indicators 2010 (SEI 2010) report. Secondly, poor researchers might be affiliated with low-ranking or even unknown universities in distrusted countries. To get on competitors’ radars, ‘poor institutions’ are also in play, jumping to higher positions in international university rankings. In 2011, two Saudi Arabian universities recruited more than 60 distinguished scientists in order to use those scientists’ reputations to increase their own (Kehm, 2020). The reason is obvious to a certain degree:
“Rankings have become a symbol of economic status because it is argued that the more universities in a given country or region are ranked among the top ten, fifty, one hundred, or five hundred, the higher is the economic reputation and innovative capacity of that country or region.” (Kehm, 2020)
While gamer researchers from developing countries try to find a seat in academia, they have intensively utilized traditional tactics such as fabrication, falsification, and plagiarism. In the automated, internet-based publishing era, it is relatively easy to catch old-school academic misconduct. In the case of developed countries, the schemes are a bit more byzantine. As the moving parts of the structure (rankings, promotions, funding) have become more specialized, gaming strategies have grown complicated. To give an example from the developed countries, the US, having 25% of all researchers, operated the most human capital in various positions, with science indicators perceived as symbolic capital helping the accumulation of credit (Bourdieu, 1986). ‘Rich researchers’ enthusiastically looked for ways of inflating their contributions to new fields to become celebrated pioneers. They dominated their major area of research with review papers to get more citations. Especially during Covid-19 times, writing collaborative review papers appeared as low-hanging fruit for most groups of researchers. By means of strategic collaborations with many famous co-authors, rich researchers accelerated their interdisciplinary appearance. Leading experts promoted special issues and journals in their areas of expertise, in sister journals where they became editors.
According to the indicators report (SEI 2010), ‘even as global production and citation patterns have shifted, the relative quality distribution of worldwide articles, as measured by citations, has changed little’. The report claims that even though Asian and European countries have received more citations in recent years, the quality indicators developed since SI-72 ensure US dominance. As a quality indicator, the peer review process has been cheered, by scholars conscious of it or not, for its role of gate-keeping entry into the scientific field; however, the process itself has been gamed by many developed countries too, for instance by recommending their own papers as must-add references (creating citation rings) and by suggesting themselves, friends, or family members as reviewers. Moreover, peer review may generate black-boxed discrimination against genders and ethnic intellectual minorities (Wouters, 2020). Robert Lindsay, one of the editors of the Springer-published journal Osteoporosis International, confessed that editors in the United States and Europe usually know the scientific community in those regions well enough to catch potential conflicts of interest between authors and reviewers, but that Western editors can find this harder with authors from Asia, ‘where often none of us knows the suggested reviewers’ (Ferguson, 2014).
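The citation rings mentioned above can be pictured as cycles in a citation graph between authors. A toy sketch (the author names and edges are invented for illustration) flags reciprocal citation pairs, the simplest form of a ring:

```python
def reciprocal_pairs(citations):
    """citations: a set of (citing_author, cited_author) pairs.
    Returns the unordered pairs of authors who cite each other (2-cycles)."""
    return {frozenset((a, b)) for (a, b) in citations
            if a != b and (b, a) in citations}

# Invented example: A and B cite each other; C cites A only one way.
edges = {("A", "B"), ("B", "A"), ("C", "A")}
print(reciprocal_pairs(edges))  # prints {frozenset({'A', 'B'})}
```

Real detection is of course harder, since rings spanning three or more authors require searching for longer cycles and mutual citation alone is not proof of misconduct; the sketch only illustrates why such patterns are mechanically visible in citation data.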
Therefore, relying on quality indicators as a fair organizer cannot be entirely valid, especially once we realize that the managers of knowledge in the marketplace of ideas are the supreme advocates of ranking, indexing, and labeling to place actors in a hierarchy. Standards and metrics regulating the broader economy have close relations to the expectations of productivity and efficiency in practicing and publishing science, because countries’ science and technology infrastructures have been seen as an indicator of adaptation to the global economy after the 1990s. The methods and mechanisms of production management, which target producing more and reducing cost to satisfy customers, have been implemented in the science market. One actor I have not mentioned so far is the ‘customer’. I believe the changing roles of institutions, instruments, and individuals in a free market give everyone the opportunity to occasionally be a customer. This flexibility is one of the characteristics of neoliberal mediators who embrace the motto ‘the market decides the best’. To defend my thesis treating the scientific community and its relations as a market, I will introduce ‘platforms’, which are craving to make transactions faster, better, cheaper. These platforms can be divided into two categories: academic social media (Academia, ResearchGate, LinkedIn, etc.) and journal publishers (Elsevier, Nature Publishing Group, etc.). The role of platforms in a market is discussed in the following section.
Check Point 2: Can knowledge management be inspired by production management?
The production mechanisms developed in industry were reshaped after the 1980s to yield cost- and time-efficient production lines. The motivation behind the new production techniques was to provide skill development for workers, increase employee participation, and enhance the quality of work life (Huxley, 2015). Japanese production management strategies (the Toyota Production System, or TPS, dating back to the 1930s), aiming for improved overall quality and productivity, became a mainstream method of production. In the late ’80s, the US automobile industry was amazed by this fast and efficient way of production. General Motors and Toyota came to an agreement in 1984 to explore Japanese methods in the US. A quality and manufacturing engineer, John Krafcik, served in this joint company, New United Motor Manufacturing, Inc., for two years and then moved to MIT, where he worked in the International Motor Vehicle Program with the auto industry analyst Jim Womack (Hamilton, 2012). After visits to manufacturing plants in 20 countries, the data collected on productivity and the final quality of products were published (in the book The Machine that Changed the World). In the following years, he worked for Ford and Hyundai, and eventually ended up at Google leading its self-driving car unit, which was later spun off as Waymo, a company intended to commercialize the autonomous technology, with Krafcik as its CEO. To recapitulate his efforts, he Americanized the Japanese production management methods under the term ‘lean manufacturing’. He carried over the Japanese term jidoka, ‘automation with a human touch’, as a vision for companies.
What inspired me to reiterate the story of production management here are the subtle or apparent similarities in the timelines of promoting faster, better, cheaper products and publications. Both the manufactured-product market and the knowledge-based market aim at valuing an end product, permitting a value stream of products, including interchangeable customers who make decisions based on predefined values, and looking for quality. In the production industry, the medium of value and money has increasingly become the private banks. Advanced banking systems holding consumer deposits have contributed to preserving capital and assigning credits and loans, while enabling fast and secure transactions. My argument is that academic platforms accumulating publications are the equivalent of advanced banking systems: a meta-medium for the knowledge market.
Facebook-like platforms – faster
Commodifying the digital exchange of scientific knowledge via platforms has appeared as a form of communication (Delfanti, 2020). Social-media-style internet platforms are welcomed in the community for the transparency they offer (Mirowski, 2018). While Academia.edu and ResearchGate accept printed or draft versions of publications, LinkedIn connects researchers to industry through their academic and job backgrounds. Eventually, a kind of big data has become available to people outside academia and to companies that base their future direction on ‘consumer’ behavior. As discussed in the concept of platform capitalism, ‘if data is a central resource, then capitalist competition places a high premium on getting that data’ (Srnicek, 2017). However, academic social platforms are easy to game. Gaming them is almost identical to getting likes on your photo from your relatives on Facebook. Therefore, although these platforms claim to be fast and innovative in keeping publication records, they contribute to creating modified indicators – or altmetrics – which are just another face of the same coin in the market. Fast publication within a restricted time window has become important for occupying tenure positions, academic jobs, or industry positions. As Biagioli emphasized, “impact takes a long time to build, creating a bottleneck in a publication economy that has become all about speed. Electronic publishing has significantly cut down on the length of review, production, and dissemination processes” (Biagioli, 2020). Faster indicators seemed like an opportunity for funding and promotion decisions. The altmetrics manifesto defended blogs and preprint servers that shrink the communication cycle from years to days (Priem et al., 2010). One of the contributors to the manifesto, Dario Taraborelli, was affiliated with the Wikimedia Foundation and recently moved to the Chan Zuckerberg Initiative as the Science Officer for Open Science, according to his LinkedIn account.
Clearly, there is a transition among Facebook-like platforms: first they wanted to do things fast, and now they want to do them openly.
Large publishing companies – better
The other branch of platforms in the knowledge market is the publishers. The top ones have become famous for hosting journals with high impact factors. Nature Publishing Group (NPG) has published its own Nature since 1869, and the group has promoted the Nature Reviews journals since 2000. While researchers constantly try to enter the realm of ‘the protector of better science’, journal platforms are neither immune to gaming of the peer review process nor entirely innocent of contributing to the knowledge hierarchy of poor and rich scholars. As mentioned before, rich researchers are offered more sister journals in their areas of expertise, while the platforms market their own brand. There are 32 Nature-branded research journals along with 20 Nature Reviews titles. A higher impact factor of a Nature journal means a greater number of citations per paper (Nature eds, 2013). Consequently, the knowledge market system is empowered by large publishers. Another example of a publishing platform is Elsevier, which is preparing itself for an automated peer review process in which a rejected paper is redirected for submission to the next journal in the row (Mirowski, 2018). It is remarkable that the very pioneer of academic indicators, Garfield, the production management guru Krafcik, and one of the largest publishing groups, Elsevier, are all captivated by the ideology of automation. The likely reason is the promotion of efficient production at reduced cost, in conformity with the larger economy.
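For clarity, the impact factor invoked here can be written out explicitly. This is the standard two-year definition of the journal impact factor, which the cited sources assume but do not spell out:

```latex
\mathrm{JIF}_{y} \;=\; \frac{C_{y}\!\left(P_{y-1}\right) + C_{y}\!\left(P_{y-2}\right)}{\left|P_{y-1}\right| + \left|P_{y-2}\right|}
```

where \(P_{y-k}\) is the set of citable items a journal published in year \(y-k\), \(\left|P_{y-k}\right|\) is their count, and \(C_{y}(P)\) is the number of citations those items receive in year \(y\). A higher impact factor thus directly encodes ‘more citations per paper’, which is why it functions so readily as a token of value in the market described above.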
Open science – cheaper
Lastly, platforms offer cheaper methods of publication, following the script of neoliberalism in which expertise in science should be diminished to cut costs. The most prominent trace of this trend is the growing call for open science. Although the understanding of ‘open’ has multiple dimensions, openness in the commonly perceived sense is introduced as a key to increasing the productivity and cost-effectiveness of academic research (Leonelli and Prainsack, 2015). The transparency and openness narratives give rise to Facebook-like platforms where the ‘likes’ of members can decide what the truth is. This mechanism resembles a free market with ‘democratized’ actors. Not only researchers around the world but also the public would be exposed to raw, unfiltered, or untested data in the market, and could eventually join the knowledge market as a new actor with a role in knowledge transactions. Although it seems that platforms open to the public could dismantle the knowledge-based hierarchy, this massive, uneducated, non-expert new actor could come to dominate the market and alter what we know entirely. If knowledge itself is skewed, the management of knowledge cannot be guaranteed by indicators, indexing, and credits. The token of knowledge transactions could become the likes.
To summarize, my second question concerned the parallels between knowledge management and production management. In the 1990s, both areas experienced shifts in market mechanisms. Even NASA adopted the faster better cheaper policy for space exploration to decrease the time and cost of each mission (Conway, 2015). Things went well for the first attempt, Mars Pathfinder; however, the following attempts were disastrous: Mars Climate Orbiter, Mars Polar Lander, and Deep Space Two. As a result, NASA engineers coined the phrase “Faster, Better, Cheaper – Pick two” (Hobbs, 2017). Now the same policy is being adapted to paper publications. Considering these two parallel developments in the production and knowledge industries, I firmly assert that faster better cheaper publications are heading for comparable malfunctions.
Conclusion
Science indicators were transformed into tokens of value or impact from the 1970s to the 1990s under the pressure of financial shifts. The historical background of the 1990s fostered competitive actors who come up with productivity and efficiency solutions. Recalling the founder of Academia.edu, researchers are in need of a fast publication tool since their individual careers are correlated with publication indicators. They use indicators in exchange for academic positions, promotions, grants, rankings, and so on. Institutions (universities, agencies), instruments (platforms), and individuals (scholars) take their places in a market-like structure. This knowledge market supports the perpetual promotion of rich researchers while poor researchers stay invisible. The knowledge-based hierarchy produces more inequality, which is a necessary condition for a neoliberal market, and the market is not protected against gaming because, eventually, ‘the market decides best’.
References
Biagioli M (2016) Watch out for Cheats in Citation Game, Nature 535 (7611): 201–201. https://doi.org/10.1038/535201a.
Biagioli M and Lippman A (eds) (2020) Gaming the Metrics: Misconduct and Manipulation in Academic Research. Cambridge: MIT Press.
Biagioli M (2020) Fraud by Numbers: Metrics and the New Academic Misconduct, Available at: https://lareviewofbooks.org/article/fraud-by-numbers-metrics-and-the-new-academic-misconduct/ (Accessed September 12, 2020).
Bourdieu P (1986) The Forms of Capital. In: Richardson J (ed) Handbook of Theory and Research for the Sociology of Education. Westport, CT: Greenwood, pp. 241–258.
Conway E M (2015) Exploration and engineering: The jet propulsion laboratory and the quest for Mars. JHU Press.
Csiszar A (2020) Gaming metrics before the game: Citation and the bureaucratic virtuoso. In: Biagioli M and Lippman A (eds) Gaming the Metrics: Misconduct and Manipulation in Academic Research. Cambridge: MIT Press, 31–42.
Delfanti A (2020) The financial market of ideas: A theory of academic social media. Social Studies of Science, 0306312720966649. https://doi.org/10.1177/0306312720966649
Ferguson C, et al. (2014) The peer-review scam. Nature 515: 480–482.
Foucault M and Senellart M (ed) (2010) The birth of biopolitics: lectures at the Collège de France, 1978-1979 (1st Picador ed.). Picador.
Freedman D H (2020) A Prophet of Scientific Rigor—and a Covid Contrarian, Available at: https://www.wired.com/story/prophet-of-scientific-rigor-and-a-covid-contrarian/ (Accessed November 8, 2020).
Godin B (2003) The emergence of S&T indicators: why did governments supplement statistics with indicators? Research Policy, 32(4), 679-691.
Godin B (2017) Models of innovation: the history of an idea. MIT Press
Hamilton S A (2012) Destined to Drive, Available at https://www.technologyreview.com/2012/04/25/186543/destined-to-drive/ (Accessed November 10, 2020)
Hobbs D (2017) Faster, Better, Cheaper!, Available at http://blog.duffry.com/posts/faster-better-cheaper/ (Accessed October 25, 2020)
Huebner B Kukla R and Winsberg E (2017) Making an Author in Radically Collaborative Research (Vol. 1). Oxford University Press. https://doi.org/10.1093/oso/9780190680534.003.0005
Huxley C (2015) Three Decades of Lean Production: Practice, Ideology, and Resistance. International Journal of Sociology, 45(2), 133–151. https://doi.org/10.1080/00207659.2015.1061859
Kehm B M (2020) Gaming metrics before the game: Citation and the bureaucratic virtuoso. In: Biagioli M and Lippman A (eds) Gaming the Metrics: Misconduct and Manipulation in Academic Research. Cambridge: MIT Press, 93–100.
Kenton W (2020) Credit, Available at: https://www.investopedia.com/terms/c/credit.asp#:~:text=In%20the%20first%20and%20most,known%20as%20buying%20on%20credit.&text=The%20amount%20of%20money%20a,creditworthiness%E2%80%94is%20also%20called%20credit. (Accessed November 9, 2020).
Kouzes J M and Posner B Z (2010) The truth about leadership: The no-fads, heart-of-the-matter facts you need to know. John Wiley & Sons.
Leonelli S and Prainsack B (2015) To what are we opening science? Reform of the publishing system is only a step in a much broader re-evaluation. Impact of Social Sciences Blog.
Latour B and Woolgar S (1986) Laboratory life: The construction of scientific facts. Princeton University Press.
Merton R K (1940) Bureaucratic Structure and Personality. Social Forces, 18(4), 560–568. https://doi.org/10.2307/2570634
Mirowski P (2018) The future(s) of open science. Social Studies of Science, 48(2), 171–203. https://doi.org/10.1177/0306312718772086
Nature eds. (2013) Beware the impact factor Nature Materials, 12(2), 89–89. https://doi.org/10.1038/nmat3566
Nik-Khah E (2017) The “Marketplace of Ideas” and the Centrality of Science to Neoliberalism. In D. Tyfield, R. Lave, S. Randalls, & C. Thorpe (Eds.), The Routledge Handbook of the Political Economy of Science (1st ed., pp. 32–42). Routledge. https://doi.org/10.4324/9781315685397-4
NSF.gov (2010) Science and Engineering Indicators, Available at: https://wayback.archive-it.org/5902/20150817193548/http://www.nsf.gov/statistics/seind10/pdfstart.htm (Accessed November 5, 2020).
NSF.gov (2020) Science and Engineering Indicators, Available at: https://ncses.nsf.gov/pubs/nsb20201 (Accessed November 8, 2020).
NSF National Science Board (1973) Science Indicators, 1972. Superintendent of Documents, Washington, DC.
Price R (2011) The Number of Academics and Graduate Students in the World, Available at: https://www.richardprice.io/post/12855561694/the-number-of-academics-and-graduate-students-in#:~:text=This%20means%20that%20there%20are,graduate%20students%20in%20the%20world.&text=According%20to%20the%20Bureau%20of,comes%20down%20to%201.54%20million (Accessed November 5, 2020).
Priem J D et al (2010) Altmetrics: A manifesto, Available at: http://altmetrics.org/manifesto (Accessed October 10, 2020)
Rudolph J L (2019) How We Teach Science: What’s Changed, and Why It Matters. Harvard University Press.
Shema H (2012) Interview with Richard Price, Academia.Edu CEO, Scientific American Blog Network, Available at: https://blogs.scientificamerican.com/information-culture/interview-with-richard-price-academia-edu-ceo/ (Accessed November 5, 2020).
Srnicek N (2017) The challenges of platform capitalism: Understanding the logic of a new business model. Juncture, 23(4), 254–257. https://doi.org/10.1111/newe.12023
Wouters P (1999) “The Creation of the Science Citation Index.” In: Bowden, M. E., Hahn, T. B. (Eds.), Proceedings of the 1998 Conference on the History and Heritage of Science Information Systems. Information Today, Medford, NJ, pp. 127–136.
Wouters P (2020) Gaming metrics before the game: Citation and the bureaucratic virtuoso. In: Biagioli M and Lippman A (eds) Gaming the Metrics: Misconduct and Manipulation in Academic Research. Cambridge: MIT Press, 67–75.