Reading philosophy as a STEM student

This resource was prepared for an introductory bioethics course held in a philosophy department. The vast majority of students registered were from immunology, human biology, neuroscience, and other STEM backgrounds. In their entry surveys, they expressed concerns about how to read philosophy. The following gives one possible lens for reading philosophy as a STEM student, but it is not the only lens, and it will certainly not apply to all readings.

Overview

This page is an optional resource for thinking about reading philosophy and bioethics papers (I will focus on philosophy papers in particular, since these will reflect many of the bioethics papers we see in the first half of the course). This is not assigned reading for this course.

In session one, we discussed how philosophical argumentation is not as different from the scientific method as some people presuppose. Granted, there can be meaningful differences between them, but our main goal is to provide a heuristic for thinking about philosophical methods. This page briefly expands on this heuristic to discuss an approach to reading philosophy. This is a heuristic and not a rule, and some philosophers themselves will disagree with the methods outlined here. You should read this page as one possible resource: a companion to the other reading resources posted on Quercus, and just one tool among others for getting into philosophy.

Note 1: This page is designed to be useful even if you’re already used to reading philosophy. It’s written in part to show possible connections between philosophy and other STEM disciplines, and to align with our talk of “moral methodology” in class. The main goal is to help students used to reading STEM literature apply some of those skills to reading philosophy, but this also means you can think about applying existing skills in reading philosophy to reading STEM literature.

Note 2: This page is designed to be read in order and in full, but you can also just skim using bold text, focusing on what stands out most.

Background: recalling the “IMRAD” model in scientific publications

Most scientific publications follow an organizational structure known as IMRAD, which stands for: Introduction, Methods, Results, and Discussion.

The introduction (ideally) explains the problem or question being addressed, summarizes the relevant background knowledge from the field of study, identifies possible limitations and gaps in that background, and motivates the paper and study’s own theoretical and methodological orientation. For example, why is a study on patient perceptions of privacy in a clinic using a symbolic interactionist approach to analyzing transcripts?

The methods section (ideally) describes the strategies used to generate an answer to the research question, in line with the chosen theoretical orientation. For example, how exactly did we conduct the interviews in a study on patient perceptions of privacy, rather than a survey with multiple-choice options? Were the interviews open, closed, or mixed? How were the data coded and triangulated? How were codes chosen and coding disagreements navigated? What validated instruments, if any, were used in that process? How did we manage recruitment, and how representative was the study population? Overall, a methods section reports the steps taken to generate the results, in enough detail to replicate the study and to establish validity.

The results section (ideally) reports the outputs of the study: what the primary and secondary findings were, what relevant associations were found, and an initial appraisal of what those findings entail for the research question.

Finally, the discussion section (ideally) critically interprets and appraises the methods, results, and overall interpretation of the study. This often includes reporting study limitations, responding to potential alternative interpretations of the results, acknowledging possible biases, clarifying the scope of application, and noting impacts on other areas of research and future directions.

Comments on how we (should) read IMRAD research

Many if not most undergraduate students who are assigned IMRAD readings start by reading the abstract, then jump to the results and conclusion. At least, that’s true of the students I have informally surveyed. They look at what was found rather than how we found it, or how convincing those findings are. They might skim the methods and discussion if the course asks for it or tests on it, but many students report focusing just on the results. And I get it: I’ve done that too! Sometimes we just want to know what was found.

Ideally, however, we should read IMRAD research more critically by spending more time on the methods and discussion sections. We want to know not just what was found, but whether what was found is convincing. This requires critically appraising the methods, considering alternative interpretations of the results, and thinking about matters of scope (external and not only internal validity, for example). While we often trust the validity of the methods used, the peer-review process, etc., more critical reading will reveal gaps and issues in plenty of scientific research. In upper-year STEM courses in particular, you’ll want to be familiar not only with the common methods and instruments that have been validated for certain uses, but also with how apparently valid methods can be misapplied, misreported, and misused for certain study goals and designs. Meanwhile, a lot of research conducted in one context is not repeatable in other contexts, so we should be able to closely read methods to determine how applicable they are to our own contexts (what you might hear called “external validity”). See the critical appraisal tools page on Quercus for some relevant links.

This is even more important today, given enduring and emerging issues in our research climate. You may have heard about widespread replication crises in medicine and other STEM fields, about a rise in AI-generated publications and summaries of other publications, and about other threats to scientific confidence and validity. Meanwhile, increasing competition for academic jobs means people are pressed more than ever to “publish or perish” and churn out lower-quality work. If you want an accessible introduction to just some of these issues, consider browsing the Retraction Watch website. In medicine, a lay-friendly book might be Rigor Mortis by Richard Harris, though it’s more focused on past research than present issues.

Even setting this aside, STEM research sometimes gives conflicting results. This does not necessarily mean there is “no answer,” or that we should just pick whichever study we like best. Rather, it means we have to look more closely at how we came to those results, to determine which result is most convincing. More on this below.

Philosophical argumentation as method

As covered in class, argumentation is one type of moral methodology in philosophical investigation, similar to the different methods used in scientific investigation. In short, we identify a problem or question, develop an argument that will translate a set of prior givens into a set of results, and then critically interpret and appraise those results. Perhaps instead of using validated methods for cleaning out lab glassware to avoid cross-contamination of related biomaterials, we use systematic definitions and methods of inference that avoid cross-contamination of related concepts. Where scientists might examine how lab conditions may limit external validity, philosophers might present counter-examples and thought experiments that indicate limitations on argument scope. And so on.

Many of the issues and approaches that apply to scientific methods and publications above also apply to philosophical methods and writing.

For example, consider conflicting results in science and the “no right answer” attitude toward philosophy. Just as STEM can give conflicting results from the same general methods, philosophers very often arrive at different results in their arguments. This does not necessarily mean there is “no right answer” to be had in either science or philosophy (though some people do argue this!), but rather that we do not yet have a convincingly established right answer. So, as in STEM, we need to be able to critically appraise how convincing conflicting results are, to examine the steps, inferences, and supports in authors’ arguments, and to revise and update our own arguments to account for conflicting data and generate more convincing results. Perhaps there is no right answer at the end of it all, but we can still get better and more convincing answers. Similarly, leaning too much on the attitude that philosophy is “subjective” can miss the ways in which there are established methods for evaluating arguments, just as we might evaluate students in wet-lab courses on how well they follow protocols. (If you want to get deeper into these types of methods and appraisals in philosophy, check out our logic courses like PHL245 and PHL246!)

Meanwhile, we have methods for appraising the “validity” of philosophical arguments. Just as we can appraise scientific studies for different types of validity (internal, external, content, method, etc.), philosophy has its own concepts of validity, and its own means of appraising how convincing arguments are. We will cover a few of these as the course goes on (be on the lookout for terms like deductive validity [often just called “validity”], soundness, coherence, strength, and support). You have already seen examples of invalid reasoning in our very first session, when we talked about the “is-ought fallacy,” for example.

A suggestion: Reading philosophy papers like IMRAD papers

Philosophy papers often take a structure similar to IMRAD papers. They start by explaining relevant background, scholarship, theoretical commitments, goals, etc. Then they present their argument (method!) in detail, breaking down the premises, the supports for the premises, and the steps between them. They then offer the conclusion (results!) of that argument: what the premises ultimately led us toward. Finally, most papers will critically appraise and discuss their arguments and conclusions. For example, we will often consider counter-examples, objections, and different possible conclusions that might seem to undermine our findings, in order to show why and to what degree we should find the conclusions convincing, and the scope or limitations of those conclusions. Sometimes this discussion is interwoven with the argument itself, as some STEM papers do when defending their choice of methodology against possible limitations in the methods section itself. But often, the structure of a philosophy paper is not that different from an IMRAD paper. If we wanted to give it a shorthand, we might call it IACORD: Introduction, Argument, Conclusion, Objections and Replies, Discussion.

For a surface level understanding, we might just start with a skim of what they ultimately have to say. Like in IMRAD papers, we might look at the abstract for an overview, the results and conclusion sections for the main conclusion and summary, and then glance over the arguments to see how they are trying to generate that conclusion. For our course in particular, we ask that you read papers through at least once before lectures and tutorials, and suggest you read them again after in more detail. It is okay if you start with a more surface-level understanding and then build toward more comprehensive understanding.

For a more detailed and critical understanding, we can then critically appraise the arguments and discussions in more detail. Here is where we look into how convincing the argument actually is, beyond what the argument says. We can look at its structure, support, possible counter-examples, objections, etc. We can consider whether the conclusions apply appropriately for the contexts we’re considering (does this paper on paternalism in women’s reproductive health meaningfully work in other contexts like informed consent for pediatric surgery?). This latter approach is the type of critical reading we want to (gradually!) foster in this course.

In our course, it’s perfectly fine to do just the first surface reading before lecture, and then do a more detailed and critical reading after tutorials. You might also wish to return to earlier assigned readings later in the course, once you’ve developed more skills for appraisal.

Keep in mind: Much of what we read isn’t meant for you

As mentioned in class, a lot of academic writing is part of a conversation between experts. Authors will assume things you may not know, use shorthand and concepts you may not have encountered, make brief references to scholars you have not read, etc.

In general, it will take time to get used to different disciplinary methods and approaches, whatever those disciplines are. We will give you some tools for this along the way, and there are other courses dedicated to these methods, as discussed earlier on this page. In lecture, we will model ways of pulling arguments out of readings and appraising them, and introduce concepts in bioethics and healthcare that will help guide your reading. Tutorials will give you more direct practice in arguing and appraising arguments, help address specific questions about readings, etc.

If you are feeling frustrated, there are of course many resources available, including those linked on our own Quercus pages. But remember that part of that frustration is due to trying out something new: jumping into unfamiliar conversations, learning how people talk to each other, and learning to talk back in turn. This is not easy, but it is something we will learn along the way.

Lastly: This heuristic works both ways!

If you are used to reading philosophy, these similarities also apply back to STEM literature. Scientific research can (and should!) be read critically, and you can heuristically gloss it as making particular types of arguments, following established valid methods and inferences, to generate and discuss results. Granted, you might not be used to the specific statistical methods in use, and might not be familiar with the concepts or ideas being explored (but neither are many STEM students! There are many different fields and approaches across STEM and disciplines generally). We can learn to read these over time.

We’ll provide some more tools for critically reading STEM literature later in the course. Some of these are already listed on the readings resources page (look at Trisha Greenhalgh’s edited series of short articles, for example). In the second half of the course in particular, we’ll be looking at emerging issues and reading some papers that can get a bit more technical (just like parts of the Finucane and Steele reading in session two). We’ll get gradual practice over time.