Fun, fun, fun! Biological research is presented in roughly the following format: title, abstract (summary), introduction, methods, results, discussion (+ conclusion, references, other notes, appendices if applicable). Citations should be present throughout as necessary.
It is useful to actually be involved in the experimentation, data analysis and writing of these papers in order to understand how, why and for whom they are written.
In critiquing these often technical and information-dense reports, a stepwise approach is essential. Often, there is a specific reason each paper is consulted. Sometimes it is a single line in the methods section giving the recipe for a buffer. Sometimes it is to get a broad understanding of an area, in which case the abstract and introduction are useful. Other times, it is to get to the point and see whether the experiment produced the expected results, in which case one’s eyes would hover over the results, discussion and conclusion sections.
As such, each section has different writing and formatting rules. The methods must be objective, quantitative and thorough, so as to allow a separate researcher to replicate them. The introduction must be exhaustive, discursive and even narrative. The discussion must tread the fine line between referring to the results conservatively and speculating over their implications, given the existing knowledge in that area.
The experimental design must be adequate for testing the stated hypothesis and aims. Appropriate controls and treatments must be included, and steps taken to deal with confounding variables.
If a sample was taken, whether it was representative of the population and how sampling bias might have affected it must be considered.
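As a hypothetical illustration (all names and numbers here are invented), a quick simulation shows how a biased sample can distort an estimate even when a random sample of the same size does not:

```python
# Sketch: how sampling bias distorts an estimate (invented numbers).
import random
from statistics import mean

random.seed(1)

# A hypothetical "population" of 10,000 leaf lengths (mm).
population = [random.gauss(50, 10) for _ in range(10_000)]

# Representative sample: 100 individuals drawn at random.
random_sample = random.sample(population, 100)

# Biased sample: the 100 largest leaves (say, the easiest ones to spot).
biased_sample = sorted(population)[-100:]

print(f"population mean:    {mean(population):.1f}")
print(f"random sample mean: {mean(random_sample):.1f}")
print(f"biased sample mean: {mean(biased_sample):.1f}")
```

The random sample lands near the true mean, while the biased one overshoots badly, which is exactly the distortion a representativeness check is meant to catch.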
In the results section, the suitability of the graphs used must be assessed, as well as the choice of statistical tests. A small sample might not have sufficient statistical power to warrant the use of a statistical test. Statistical significance refers to the outcome of a statistical test indicating that the result in question is unlikely to have occurred merely by chance. It is often used as a litmus test for “proving” that something is happening in the data presented.
This is to be considered with caution. The significance threshold is by convention set at 5% (p = 0.05): if a statistical test gives a probability of less than 5% that a result at least as extreme would arise by chance alone, the result is declared statistically significant. This leaves a loophole, since by that convention roughly one in twenty purely chance fluctuations will still be flagged as significant. Additionally, there are fields and types of data that do not reliably meet the assumptions of certain statistical tests.
Statistical tests are just mathematical tools that deal with probability, and as such cannot guarantee or create knowledge in their own right.
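To make the idea concrete, here is a minimal sketch of what a p-value estimates, using a two-sample permutation test on invented measurements (pure standard-library Python; the data and variable names are hypothetical):

```python
# Sketch of a p-value via a two-sample permutation test (invented data).
import random

def permutation_test(a, b, n_permutations=10_000, seed=0):
    """Estimate how often chance alone (random group labels) produces a
    difference in means at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                      # randomly reassign labels
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Invented control vs treatment measurements:
control   = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.1]
treatment = [5.6, 5.9, 5.7, 6.0, 5.8, 5.5, 5.9, 5.7]
p = permutation_test(control, treatment)
print(f"p = {p:.4f}")  # below 0.05 -> "statistically significant" by convention
```

The fraction of label shuffles that match or exceed the observed difference is the estimated p-value: a probability statement about chance, not an explanation of the biology.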
Means plotted in the results section of a paper carry error bars or confidence intervals around them. Depending on what is plotted, these indicate either how widely the data are dispersed around the mean (standard deviation) or how precisely the mean itself has been estimated (standard error or confidence interval). They show data variability and are also used to compare sets of data side by side. If the error bars around two means overlap, the underlying groups may well be similar; clearly non-overlapping confidence intervals, especially alongside a significant statistical test (as previously mentioned), imply a difference between the data sets.
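As a rough sketch of the overlap heuristic, assuming the common normal approximation for a 95% confidence interval (mean ± 1.96 standard errors) and invented data:

```python
# Sketch: 95% confidence intervals for two means, normal approximation
# (mean +/- 1.96 standard errors); all measurements are invented.
from math import sqrt
from statistics import mean, stdev

def ci95(xs):
    se = stdev(xs) / sqrt(len(xs))   # standard error of the mean
    m = mean(xs)
    return m - 1.96 * se, m + 1.96 * se

group_a = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
group_b = [11.6, 11.2, 11.9, 11.4, 11.7, 11.3, 11.8, 11.5]

lo_a, hi_a = ci95(group_a)
lo_b, hi_b = ci95(group_b)
print(f"A: [{lo_a:.2f}, {hi_a:.2f}]")
print(f"B: [{lo_b:.2f}, {hi_b:.2f}]")
```

Here the two intervals do not overlap, which is consistent with a genuine difference; note that overlapping intervals do not by themselves prove two groups are the same.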
As for conclusions, they should refer to the original aim and hypothesis. In assessing the conclusion, the validity (does the data reflect what was intended to be measured?) and reliability (would this data occur again if the experiment were repeated?) of the experimental design must be accounted for.
Finally, thought must be given to whether the connections suggested by the results are correlation or causation. Statistical significance gives confidence that two sets of data diverge, but cannot explain what produced that divergence. The balance between the data and its analysis, and the wider narrative of the field, must be merged artfully to understand reality and contribute to the advancement of new knowledge.