Zoran Skoda citations

Citations and impact factors

Joint Committee on Quantitative Assessment of Research, Citation Statistics: a report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS)

http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf

This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using “simple and objective” methods is increasingly prevalent today. The “simple and objective” methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.

Arnold–Fowler also prompted

The question of how to balance the weight given in academic evaluation to traditional publishing against less conventional forms of the academic record (e.g. work exposed in blogs) is studied in

  • Brian Lavoie, Eric Childress, Ricky Erway, Ixchel Faniel, Constance Malpas, Jennifer Schaffner, The evolving scholarly record, A4 pdf, letter size pdf

Other

  • David Crotty, Driving altmetrics performance through marketing — a new differentiator for scholarly journals?, the Scholarly Kitchen blog, Oct 2013, link
  • Thorsten Gruber, Academic sell-out: how an obsession with metrics and rankings is damaging academia, Journal of marketing for higher education 24:2, 165–177, doi
  • Donald Geman, Stuart Geman, Opinion: science in the age of selfies, PNAS 113 no. 34, 9384–9387, doi html
  • Elsevier Factsheet: salami slicing pdf
  • Fast-growing open-access journals stripped of coveted impact factors

Refereeing

  • Shannon Palus, Is double-blind review better?, APS News, July 2015 (Volume 24, Number 7) web

Wrong incentives

  • wikipedia: incremental research, least publishable unit
  • Julia Belluz, Brad Plumer, Brian Resnick, The 7 biggest problems facing science, according to 270 scientists, Vox Media, July 2016
  • E. D. Sverdlov, Incremental science: papers and grants, yes; discoveries, no, Mol. Genet. Microbiol. Virol. 33, 207–216 (2018) doi
  • Paul E. Smaldino, Richard McElreath, The natural selection of bad science, Royal Society Open Science, Sep 2016 doi
  • Jacob G. Foster, Andrey Rzhetsky, James A. Evans, Tradition and innovation in scientists’ research strategies, American Sociological Review 80:5 (2015) 875-908, doi
  • Nina Paley, Copyright is brain damage, TEDxMaastricht, youtube
  • John Deere just told the copyright office that only corporations can own property, humans can only license it, blog post
  • Julia Reda, State of the Cyber: 10 proposals for improving IT security in the EU blog
Software initiatives for academic publishing

Plagiarism, bad science authors/editors, unavailable data etc.

Historical remarks

While in court it is easier to win if one holds a prior registration of copyright with a copyright office, in principle most copyright and patent laws give advantage, in provable cases, to the factual priority of the work, even if it is not registered. That is, every author’s work is a priori protected from the moment of creation; registration at a copyright office merely makes it easier to prove priority in disputes.

According to some historians and anti-copyright activists, copyright in the 19th and early 20th centuries worked mainly for the benefit of authors, while today it is structured in a way that protects mainly the publishers and less the authors. In particular, authors often lose battles with their own publishers when attempting to make parts of their work freely available or to publish it in a form they prefer.

Last revised on October 15, 2024 at 16:22:58.