Science provides models for harnessing knowledge in a relatively objective, material and quantified environment. It involves high levels of reproducibility and peer-to-peer confirmation of findings, and as such can produce very widely applicable and powerful knowledge. The downside is that, because science rests on formulating testable hypotheses, many types of knowledge remain outside its working field.
Starting from an idea based on previous knowledge or a new observation, a testable hypothesis is established as the foundation for experimentation. Hypotheses are statements to be tested. For example, “cats don’t have a food preference” is a testable hypothesis for an animal shelter in the UK, but would not be testable somewhere with no cats. A hypothesis might be untestable because of abstract constraints, e.g. “people are not happier on the Moon than on Earth”, or because of the practical limitations of human existence at a given time: time, money, politics, priorities, taboos, etc.
The default hypothesis in any case is the null hypothesis, which states that no change will be observed. For example, “cats don’t have a food preference for wet or dry food” is the null hypothesis; “cats prefer wet food to dry food” is its counterpart, the alternative hypothesis. A statistical test on the experimental data then indicates whether the null hypothesis should be rejected or retained (strictly speaking, one fails to reject it rather than accepting it).
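As a sketch of what such a test looks like in practice, the cat-food null hypothesis could be checked with an exact binomial test. The counts below are hypothetical, and the helper function is an illustration rather than a production statistics routine:

```python
from math import comb

def two_sided_binomial_test(successes, n, p=0.5):
    """Exact two-sided binomial test: the probability, under the null
    hypothesis, of an outcome at least as unlikely as the one observed."""
    probs = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    observed = probs[successes]
    # Sum probabilities of all outcomes no more likely than the observed one.
    return sum(pr for pr in probs if pr <= observed * (1 + 1e-12))

# Hypothetical shelter data: 38 of 50 cats chose wet food first.
p_value = two_sided_binomial_test(38, 50)
print(f"p = {p_value:.4f}")  # a small p-value is evidence against "no preference"
```

A p-value below a pre-chosen threshold (conventionally 0.05) would lead the shelter to reject the null hypothesis of no preference.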
Once within the space of a testable hypothesis, experimental design follows. This is a preparatory exercise, ahead of experimentation, that ensures the experiments and their outcomes are what they need to be. Experiments must adhere to guidelines covering risk assessment, reproducibility and validity of results, time and cost effectiveness, etc. For example, in a clinical trial where clinicians administer drugs and placebos randomly to patients, a double-blind design is required, in which neither the clinicians nor the patients know who has been assigned the drug and who the placebo.
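The mechanics of such blinding can be sketched in a few lines: patients are randomised to arms, but everyone except a third party (often the trial statistician) sees only opaque kit codes. The patient IDs and code format here are invented for illustration:

```python
import random

def blind_assignments(patient_ids, seed=None):
    """Randomly assign patients to 'drug' or 'placebo', returning:
    - a schedule (patient -> kit code) that clinicians see, and
    - the unblinding key (kit code -> arm) held by a third party."""
    rng = random.Random(seed)
    # Balanced allocation to the two arms, then shuffled.
    arms = ["drug", "placebo"] * ((len(patient_ids) + 1) // 2)
    rng.shuffle(arms)
    # Unique opaque kit codes, so labels reveal nothing about the arm.
    codes = [f"KIT-{c:06d}" for c in rng.sample(range(10**6), len(patient_ids))]
    schedule = dict(zip(patient_ids, codes))  # given to clinicians
    key = dict(zip(codes, arms))              # kept sealed until analysis
    return schedule, key

schedule, key = blind_assignments(["P01", "P02", "P03", "P04"], seed=42)
```

Because neither the schedule nor the kit labels encode the arm, clinicians and patients alike remain blind until the key is opened.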
Experimental design covers the equipment and reagents needed, and whether these are safe and cost-effective enough to justify their use; which experiments will be carried out, when, in what order, how, and how many times; what data will need to be recorded; what biases might arise and how to counteract them, e.g. labelling tests with codes rather than content names; how to organise experiments around experimenters’ schedules or equipment bookings; and how to collect enough data to support the statistical tests planned for afterwards, since some tests require a minimum sample size to apply.
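That last point, deciding in advance how much data a test will need, is usually a power calculation. A rough sketch for comparing two proportions, using the standard normal-approximation formula (the 40%-vs-60% figures are hypothetical):

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size needed to detect a difference
    between two proportions (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p1 - p2) ** 2)
    return ceil(n)

# Hypothetical: how many subjects per group to detect a 40% vs 60% split?
n_per_group = sample_size_two_proportions(0.40, 0.60)
print(n_per_group)
```

Running experiments with fewer subjects than this risks a result the planned test cannot distinguish from chance.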
Obtaining results is the next step. It begins with collecting the data, which, as previously mentioned, must be planned from the experimental design step onwards, because some data will be missed unless it is specifically waited on. Sometimes an experimenter has a split second to capture a reading, else the experiment is wasted; this must be planned for in advance. Sometimes equipment collects data automatically, in which case one can take a nap.
Either way, once collected, the data is kept in a store (physical or digital) as raw data. It is then examined and analysed using various methods: statistical computing, graphing software, image processors, etc.
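A minimal sketch of that analysis step, reading raw data from a store and computing summary statistics; the CSV layout and measurement names are invented for illustration:

```python
import csv
import io
import statistics

# Hypothetical raw data: one CSV row per trial, as it might sit in a digital store.
raw = io.StringIO("trial,mass_g\n1,10.1\n2,9.9\n3,10.0\n4,10.2\n")

masses = [float(row["mass_g"]) for row in csv.DictReader(raw)]
summary = {
    "n": len(masses),
    "mean": statistics.mean(masses),
    "stdev": statistics.stdev(masses),
}
print(summary)
```

Keeping the raw file untouched and deriving summaries from it separately means the analysis can always be redone or audited later.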
Evaluation of results involves fitting the new data into existing knowledge. Sometimes this means discarding outlier data, running additional statistical tests to fine-tune results, dealing with unexpected results, or outright discovering that the experiment didn’t run as intended.
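Outlier screening is often done with a simple rule such as Tukey's interquartile-range criterion; a sketch, with hypothetical replicate readings (any exclusion should still be justified and documented, not applied blindly):

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule),
    a common, simple criterion for candidate outliers."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical replicate measurements with one suspect reading.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 23.5]
print(iqr_outliers(readings))  # -> [23.5]
```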
This feeds into the last step: drawing conclusions and using them to start a new cycle with a new testable hypothesis. It may be that the failed experiment is repeated; that a slight variation of it is run; that a different experiment is carried out; that the results support rejecting or retaining the null hypothesis, opening a new area of the field with new experiments; that the hypothesis is settled and the area is abandoned or paused in favour of a different area of the field; or indeed that the findings break new ground, spawn new directions of research, and inspire innovation, business, and citizen interest in applying the new knowledge.
As you can see, data regarding something like climate change comes in many different forms and from many different sources obtained in many different ways, so curating it all together in a specific field or to answer one question is…