Critical thinking is a set of complex cognitive processes; see, e.g., the textbook by Moore and Parker for definitions, theory, and analysis [1].

Here we provide a basic list of checkpoints for evaluating academic products, with a specific focus on their external inputs. The need for systematic critical thinking is amplified by the advent of generative AI, which allows the mass production of academic-like products.

Potential use cases include the critical (self-)evaluation of student reports, scientific papers, or peer review reports. We thank the authors of the Calling Bullshit curriculum [2] for inspiration.

Imagine an academic product such as a student report or a peer review report. Consider these points of evaluation:

  • Product design / outline

    • Is the product’s design appropriate: are the aims and research questions clearly articulated?

    • Is the conclusion concise, relevant to the aims and research questions, and backed by the claimed results?

    • Does the product refer to external knowledge sources when relevant?

  • External source examination

    • Does a given source exist? Check references to papers, blogs, news outlets, etc. (see the sketch below).

    • Status of a source: Is the source primary, secondary, etc.? If secondary, has the primary source been checked?

    • Is the given source trustworthy? Consider, e.g., peer review status, impact measures, possible controversies or predatory behavior, retractions, etc.

    • Does the source contain the claim it is cited for?

    • Can there be a publication bias in relation to the claim?
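
    For references that carry a DOI, existence can be checked programmatically. The Python sketch below is a minimal illustration, assuming network access and the requests package (the DOI shown is a placeholder); it queries the public Crossref REST API, which returns HTTP 404 for DOIs it has no record of:

        import requests

        def doi_exists(doi: str, timeout: float = 10.0) -> bool:
            """Return True if Crossref holds a record for the given DOI."""
            url = f"https://api.crossref.org/works/{doi}"
            response = requests.get(url, timeout=timeout)
            return response.status_code == 200

        # Hypothetical usage; replace with a DOI from the reference list under review.
        print(doi_exists("10.1000/example-doi"))

    A negative result is not proof of fabrication: the DOI may be registered with another agency (e.g., DataCite) or simply mistyped, so a miss should trigger a manual check rather than automatic rejection.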

  • Checking claims, reproducibility

    • Does the product contain theoretical or empirical claims, or does it make reference to such claims?

    • If theoretical: has a proof been provided, and has the proof been examined?

    • If empirical: has the evidence been examined? Consider causality, experimental design, effect size, sample size, the use of unbiased estimators for test quantities, etc. (see the sketch below).

    • If empirical: Can experiments be reproduced based on the methods description?

    • If the result is graphical (a figure or a table): is the primary data available?

    • If the result is a figure: do the inferences and conclusions find support in the figure’s content and caption?

    • Are visuals and their assumptions examined (e.g., choice of axes, log scales, etc.)?
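
    As an illustration of the effect-size and unbiased-estimator checks above, the Python sketch below (synthetic data; assumes numpy is available) computes Cohen's d between two groups using Bessel-corrected variance estimates, so that a statistically detectable difference can also be judged for practical size:

        import numpy as np

        def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
            """Cohen's d with a pooled, Bessel-corrected (unbiased) variance estimate."""
            n_a, n_b = len(group_a), len(group_b)
            var_a = np.var(group_a, ddof=1)  # ddof=1 yields the unbiased variance estimator
            var_b = np.var(group_b, ddof=1)
            pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
            return (np.mean(group_a) - np.mean(group_b)) / pooled_sd

        # Synthetic example: with large samples a small mean shift is easily
        # "significant" while the effect size remains modest.
        rng = np.random.default_rng(0)
        a = rng.normal(loc=0.2, scale=1.0, size=5000)
        b = rng.normal(loc=0.0, scale=1.0, size=5000)
        print(f"Cohen's d = {cohens_d(a, b):.2f}")  # roughly 0.2, conventionally a small effect

    The conventional benchmarks (0.2 small, 0.5 medium, 0.8 large) are only heuristics; the checkpoint is that the product reports and interprets effect sizes rather than p-values alone.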

  • Ethical dimensions

    • Does the product contain normative statements, such as expressions of value?

    • Value alignment: Are the values expressed consistent with the values of the assignment? For example, is a given peer review aligned with the peer review guidelines?

    • Has the product been examined for biases, e.g., related to ethnicity, political views, gender, age, etc.?

    • If sensitive data has been used, has consent been given, and is the product within the scope of that consent?

    • Is the list of authors complete?

    • Have potential conflicts of interest been declared?

    • General motives: Who benefits from the product?

    • Are broader impact concerns accounted for, e.g., sustainability and contributions to power imbalances?

This is work in progress; we welcome comments and questions. – Lars Kai Hansen (lkai@dtu.dk)

[1] Moore, B.N., Parker, R., 2012. Critical thinking. New York: McGraw-Hill.

[2] Bergstrom, C.T., West, J.D. Calling Bullshit syllabus. https://callingbullshit.org/syllabus.html