Science provides models for harnessing knowledge in a relatively objective, material and quantified environment. It involves high levels of reproducibility and peer-to-peer confirmation of findings, and as such can produce very widely applicable and powerful knowledge. The downside is that, because science rests on formulating testable hypotheses, many types of knowledge remain outside its working field.
Starting out with an idea based on previous knowledge or a new observation, a testable hypothesis is established as the foundation for experimentation. Hypotheses are statements to be tested. For example, “cats don’t have a food preference” is a testable hypothesis for an animal shelter in the UK, but would not be testable somewhere with no cats. A hypothesis might not be testable because it is practically out of reach, e.g. “people are not happier on the Moon than on Earth”, or because of the limitations of human existence at a given time, e.g. time, money, politics, priorities, taboos, etc.
The default hypothesis in any case is the null hypothesis, which states that no change will be observed. For example, “cats don’t have a food preference for wet or dry food” is the null hypothesis. “Cats prefer wet food to dry food” is its counterpart, the alternative hypothesis. A statistical test on data obtained from experiments then indicates whether the null hypothesis should be rejected or retained (strictly speaking, we “fail to reject” it rather than accept it).
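As a sketch, the cat-food example above could be settled with a chi-square goodness-of-fit test. The counts below are made up purely for illustration, and the critical value is the standard 5% threshold for one degree of freedom:

```python
# Hypothetical data: of 100 shelter cats offered both foods,
# 60 chose wet food and 40 chose dry food (invented numbers).
def chi_square_goodness_of_fit(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [60, 40]   # wet, dry
expected = [50, 50]   # null hypothesis: no preference, so a 50/50 split
stat = chi_square_goodness_of_fit(observed, expected)

# Critical value for 1 degree of freedom at the 5% significance level.
CRITICAL_5PCT_DF1 = 3.841
if stat > CRITICAL_5PCT_DF1:
    print(f"chi2 = {stat:.2f}: reject the null hypothesis")
else:
    print(f"chi2 = {stat:.2f}: fail to reject the null hypothesis")
```

With these invented counts the statistic is 4.0, which exceeds 3.841, so the fictional data would support rejecting the null hypothesis of no preference.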
Once within the space of a testable hypothesis, experimental design follows. This is a preparatory exercise ahead of experimentation that ensures the experiments and outcomes are what they need to be. Experiments must adhere to guidelines such as risk assessment, reproducibility and validity of results, time and cost effectiveness, etc. For example, in a clinical trial where clinicians administer drugs and placebos randomly to patients, a double-blind experimental design is required, where neither the clinicians nor the patients know who has been assigned the drug and who the placebo.
Experimental design covers the equipment and reagents needed and whether these are safe and cost-effective enough to justify their use; what experiments will be carried out, when, in what order, how and how many times; what data will need to be recorded; what biases might arise and how to counteract them, e.g. labelling tests using codes rather than content names; how to organise experiments to fit with experimenters’ schedules or equipment booking schedules; and how to collect the right amount of data, since some statistical tests require a minimum sample size to apply.
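One of the bias countermeasures above, labelling tests with codes rather than content names, can be sketched like this (the sample names, code format and fixed seed are invented for illustration):

```python
import random

# Hypothetical sketch: give each sample a random code so the person
# scoring results cannot tell drug from placebo (counteracting bias).
samples = ["drug"] * 5 + ["placebo"] * 5

rng = random.Random(42)  # fixed seed only to make this example reproducible
codes = [f"S{n:03d}" for n in range(1, len(samples) + 1)]
rng.shuffle(codes)

# The key (code -> contents) stays with a third party until unblinding.
blinding_key = dict(zip(codes, samples))
blinded_labels = sorted(blinding_key)  # all the experimenter ever sees
print(blinded_labels)
```

The experimenter works only from the anonymous labels; the mapping back to contents is revealed only after the data are recorded.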
Obtaining results is the next step. This involves first collecting the data which, as previously mentioned, might need planning right from the experimental design step, because sometimes data is missed unless someone is specifically waiting to collect it. Sometimes an experimenter has only a split second to collect the data, else the experiment is wasted. This must be planned for in advance. Sometimes equipment collects data automatically, in which case one can take a nap.
Either way, once collected, data is kept in a store (whether physical or digital) as raw data. It is then examined and analysed using various methods such as computing, graphing software, image processors, etc.
Evaluation of results involves fitting the new data into existing knowledge. Sometimes this involves discarding outlier data, running additional statistical tests to fine-tune results, dealing with unexpected results, or outright finding out that the experiment didn’t run as intended.
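Deciding what counts as an outlier should follow a stated rule rather than gut feeling. A common first-pass rule flags points lying more than 1.5 interquartile ranges beyond the quartiles; the readings below are made up, and the quartile estimate is deliberately simplistic:

```python
# Minimal sketch of the 1.5 * IQR outlier rule on invented readings.
def iqr_outliers(data):
    s = sorted(data)
    n = len(s)
    q1 = s[n // 4]          # crude lower-quartile estimate
    q3 = s[(3 * n) // 4]    # crude upper-quartile estimate
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo or x > hi]

readings = [4.1, 4.3, 4.0, 4.2, 9.8, 4.4, 4.1, 4.2]  # made-up readings
print(iqr_outliers(readings))  # the 9.8 reading is flagged
```

Flagging a point is only the start: whether it is discarded, investigated or kept is itself a judgement that should be reported.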
This feeds into the last step of drawing conclusions and using them to inform the start of a new cycle with a new testable hypothesis. It may be that the failed experiment will be carried out again; a slight variation of the experiment will be carried out again; a different experiment will be carried out; the results support rejecting or accepting the null hypothesis, and a new area of the field can be created with new experiments; the hypothesis is settled and the area is abandoned or paused in the pursuit of a different area of the field; or indeed, the findings break new ground, spawn new directions of research, and inspire innovation, business and citizen interest and application of the new knowledge.
As you can see, data regarding something like climate change comes in many different forms and from many different sources obtained in many different ways, so curating it all together in a specific field or to answer one question is a big task.
Humans are still happy to attempt this task, even though it is arguably one for machines to tackle. Soon enough that may be the case; in the meantime, for background, or at least a history lesson, here is how things run with humans involved.
Scientists strive to publish their work in scientific journals, which are ranked by how often newer papers refer back to the work they have published (sometimes scientists citing their own previous work!). Journals also run the peer review process, which attempts to act as quality control on work submitted for publishing. All of this builds into the scientific literature, which has grown past roughly 50 million papers.
Peer review means that scientists in a field relevant to the submitted work comment on the submission. This informs the journal editor’s decision to accept or reject it, sometimes subject to new work being added to the submission, or changes being made.
[This process is deeply flawed and has fallen victim to many fundamental issues: personal issues between scientists, peer reviewers and editors, who can often be working together or competing; personal-political issues between scientists, institutions and private companies, as vast funding, reputations and relationships can depend on certain work being published in a certain journal at a certain time; and human error, as the process often relies on as few as one or two peers judging one submission.
The publishing process itself can take months or even years to complete, and the top journal, Nature, is a for-profit organisation which fuels this broken system and routinely rejects most of its submissions, even though they are often perfectly valid and end up being published anyway in a “lower” journal. Hence an artificial hierarchy is established, in which amazing and groundbreaking work simply does not fit into the limited space of the overly glorified, undeservedly attained top spot held by just one journal out of hundreds.
Quite literally, scientists are forced to make it their career goal to publish in Nature or Science, and one can see the level of corruption this kind of mania can entail. There are cases of withdrawn submissions, outright data fabrication and exaggeration, and other outrageous outcomes that do nothing but hinder science and scientists.]
Despite this, when it comes to climate change, the body of research is huge: a great effort spread over decades and thousands of scientists in different disciplines, with very robust data and almost unanimous agreement amongst peers that the evidence is valid.
Another sphere of scientist communication (what is one to do with months between publishing and not being allowed to disclose anything prior to publishing to prevent someone else “stealing” the idea?) is conferences. These are meetings, some generic and some very specialised, where scientists in the field deliver presentations and exhibit posters of their latest work, and network with others to catch up with what everyone is up to. This is where one might find out what someone is working on before their work is actually published.
Needless to say, scientific experimentation and publication bring with them endless opportunities for ethical debate. Firstly, let’s look at publishing. Presenting results must be done in an unbiased way, e.g. showing all results, not cherry-picking, and not tweaking data, graphs, images or statistical analyses to show data in a light that isn’t objective.
Presenting new research must credit any previous relevant work with adequate citations and references. Citations are quick, in-text tags to each statement that uses previous work e.g. “This gene showed a marked response in cancerous rats (Name and Other Name, 2001)“.
References are alphabetically-sorted, full-detail lists of the mentioned work, added at the end of the paper e.g.:
Crenshaw, A., Jr., 2012. Surgical techniques and approaches, in: Campbell’s Operative Orthopaedics. Mosby Elsevier.
Domingos, M., Intranuovo, F., Gloria, A., Gristina, R., Ambrosio, L., Bártolo, P.J., Favia, P., 2013. Improved osteoblast cell affinity on plasma-modified 3-D extruded PCL scaffolds. Acta Biomaterialia 9, 5997–6005. doi:10.1016/j.actbio.2012.12.031
Gentile, P., Ghione, C., Tonda-Turo, C., Kalaskar, D.M., 2015. Peptide functionalisation of nanocomposite polymer for bone tissue engineering using plasma surface polymerisation. RSC Adv. 5, 80039–80047. doi:10.1039/C5RA15579G
References contain the author name(s), year of publication, paper or book title, page numbers, and a unique identifier (such as a DOI) by which the work can be looked up. There are many citation and reference managers (software such as Zotero, https://www.zotero.org/) that can collect and generate references automatically.
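Under the hood, a reference manager stores each entry as structured fields and renders them into a chosen citation style. A minimal sketch, with an invented entry and field names (not Zotero’s actual data model):

```python
# Hypothetical sketch of rendering a structured entry into a reference
# string; the entry and field names below are made up for illustration.
def format_reference(entry):
    authors = ", ".join(entry["authors"])
    ref = f"{authors}, {entry['year']}. {entry['title']}. {entry['source']}"
    if entry.get("doi"):
        ref += f". doi:{entry['doi']}"
    return ref

example = {
    "authors": ["Name, A.", "Other Name, B."],
    "year": 2001,
    "title": "A marked gene response in cancerous rats",
    "source": "Journal of Examples 1, 1-10",
    "doi": "10.0000/example",
}
print(format_reference(example))
```

Because the fields are stored separately, the same entry can be re-rendered in any journal’s house style without retyping it.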
In writing new paper content, plagiarism must be avoided as part of working with honesty and integrity. This is standard “how to be a good human” stuff, but unfortunately there are pressures created inadvertently by some parts of the system of publication that have led a number of scientists to plagiarise, fabricate data or not credit others.
One of these pressures is publication-based funding that scientists rely on for their jobs. Financial insecurity can also lead people to unethically accept funding from big institutions, such as food corporations, for conducting research designed exclusively to create evidence that serves the funder’s cause. A big example of this is the funding of research to show a link between fat consumption and heart disease, which has since been uncovered as industry-driven. It diverted scientists from exploring the sugar-disease link which, as it turns out, is much stronger, more compelling and more relevant to public health, diet and disease.
When it comes to experimentation on animals, obvious ethical concerns arise in terms of using animals at all, using animals ethically, and using animals when the benefit outweighs the ethical cost.
A system known as the 3Rs (replacement, reduction and refinement) aims to manage these dilemmas. Before conducting animal experimentation, researchers must justify an experimental design involving animals by going through these steps. Replacement asks whether animals can be replaced altogether, e.g. with by-product tissues or cells, or with other approaches such as in vitro studies, to obtain the same data.
Reduction follows if animal use is the only option, and asks whether there is a way to minimise the use of animals. For example, multiple experiments could be carried out at the same time, and more data could be obtained from the same animals.
Refinement goes further to determine which steps can be taken to minimise any harm that may come to animals.
Of course, in many cases it can relatively easily be argued that animals must be used in ways that are indeed harmful, and commonly lethal, as part of the experiment. Moreover, animals are deliberately bred to be ill: there are many variants of lab animals, such as mice, bred to develop the very diseases to be studied, e.g. diabetes, heart disease, Alzheimer’s, muscular dystrophy, etc.
Sometimes animal data doesn’t apply to humans. The benefit of these experiments must be analysed against the cost of using animals in this way. Doing science creatively and compassionately is a powerful quality.
When involving humans in experiments, informed consent is critical, although not always possible. For example, treating highly vulnerable children with experimental drugs may not allow for obtaining informed consent, and may create a conundrum of potentially saving them versus obtaining consent.
Additionally, the right to withdraw from a study is essential alongside confidentiality. Participants should not feel pressured to go through with anything. Any personal data obtained through the study must not be stored or used in such a way that it renders the participant vulnerable to being identified without their consent.
As previously covered in the Higher content, a risk assessment must be undertaken for any intended experiments. This lays out the severity of the risks against their likelihood, painting an overall picture of how dangerous an experiment could be, and determining whether it falls within acceptable limits. If not, it is not allowed to take place.
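Severity against likelihood is often scored by simple multiplication. The scales, hazards and acceptability threshold below are invented for illustration, not taken from any real lab’s policy:

```python
# Hypothetical risk matrix: severity and likelihood each on a 1-5 scale,
# risk score = severity * likelihood, with a made-up acceptability limit.
def risk_score(severity, likelihood):
    return severity * likelihood

ACCEPTABLE_LIMIT = 8  # hypothetical threshold for this imaginary lab

hazards = {
    "chemical spill": (4, 2),  # severe but unlikely -> score 8
    "paper cut": (1, 5),       # trivial but likely  -> score 5
    "untested reagent": (5, 3),  # severe and plausible -> score 15
}
for name, (sev, lik) in hazards.items():
    score = risk_score(sev, lik)
    verdict = "acceptable" if score <= ACCEPTABLE_LIMIT else "not allowed"
    print(f"{name}: {score} ({verdict})")
```

Under this invented scheme the first two hazards fall within the limit, while the third would bar the experiment unless the risk is reduced.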
A bigger regulatory framework outside individual labs is set out through legislation. Together with funding sources, policy and regulation, it creates the environment for scientific activity. Some practices are banned in some countries but not others. Human cloning up to a certain embryo age (a set number of days) is legal in some places. Genetic modification of human embryos is similarly regulated; for example, China has looser regulation of human genetic modification than the UK, which has resulted in more such research being carried out there. Pets are even cloned there for owners who wish to bring back a pet very similar to one that has died.
Funding determines what science happens, and how, why and for whom. Publicly funded bodies such as the BBSRC in the UK follow public interests such as food production and research into common diseases. Private funding from companies might prioritise military research or food and drink research. This direction of funding shapes which areas of knowledge get focused on, and even in what light they emerge. It is not difficult to guide science into directions that are not necessarily objective or useful.
For example, if reality were a party table with 10 types of wine and a single soft drink can, it would be perfectly true/real/factual that the party is serving soft drinks. By obscuring the other fact that there are far more alcoholic drinks on the table, and just one can of soft drink, it paints a true fraction of an untrue reality. It encourages the subconscious and often irrational human thought process that a piece of the truth should be extrapolated to the bigger picture. In this case, this piece of truth would wrongly represent the whole picture.