B.2.6. Tips for evaluating scientific evidence

How, then, do we evaluate the onslaught of products that claim to be backed by scientific evidence? To address this very problem, an article published in the journal Nature (a highly reputable scientific journal) offered 20 tips for judging the validity of scientific claims. The tips are listed below; in some cases they have been shortened from the original article by Sutherland et al., 2013.

1. Differences and chance cause variation.

The real world varies unpredictably. Science is mostly about discovering what causes the patterns we see. There are many explanations for such trends, so the main challenge of research is trying to identify which factor or process is causing the main effect. This challenge is complicated by the fact that most trends are affected by countless other factors.

2. No measurement is exact.

Practically all measurements have some error. If the measurement process were repeated, one might record a different result. In some cases, the measurement error might be large compared with real differences. Thus, if you are told that the economy grew by 0.13% last month, there is a moderate chance that it may have shrunk.

3. Bias is rife.

Experimental design or measuring devices may produce atypical results consistently in one direction. For example, determining voting behavior by asking people on the street, at home or through the Internet will sample different proportions of the population, and all may give different results. Because studies that report 'statistically significant' results are more likely to be written up and published, the scientific literature tends to give an exaggerated picture of the magnitude of problems or the effectiveness of solutions. An experiment might be biased by expectations: participants provided with a treatment might assume that they will experience a difference and so might behave differently or report an effect. Researchers collecting the results can be influenced by knowing who received treatment. The ideal experiment is double-blind: neither the participants nor those collecting the data know who received what.

4. Bigger is usually better for sample size.

The average taken from many observations will usually be more informative than the average taken from a smaller number of observations. For example, the effectiveness of a drug treatment will vary naturally between subjects. Its average efficacy can be more reliably and accurately estimated from a trial with tens of thousands of participants than from one with hundreds.
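The shrinking uncertainty that this tip describes can be simulated. The sketch below (illustrative numbers only, not from the text: a true effect of 1.0 with noise of 2.0) repeats a hypothetical drug trial many times at two sample sizes and measures how much the estimated average scatters around the truth:

```python
import random

random.seed(0)

def trial_mean(n_subjects, true_effect=1.0, noise_sd=2.0):
    """Average response in one simulated trial (true effect + random noise)."""
    return sum(random.gauss(true_effect, noise_sd) for _ in range(n_subjects)) / n_subjects

def spread(n_subjects, n_trials=200):
    """Scatter (standard deviation) of the estimated average across repeated trials."""
    means = [trial_mean(n_subjects) for _ in range(n_trials)]
    grand = sum(means) / n_trials
    return (sum((m - grand) ** 2 for m in means) / n_trials) ** 0.5

small_trial_spread = spread(100)     # a trial with hundreds of participants
large_trial_spread = spread(10_000)  # a trial with tens of thousands
print(small_trial_spread, large_trial_spread)
```

With 100 times the sample, the scatter of the estimate is roughly 10 times smaller, since the standard error of an average shrinks in proportion to the square root of the sample size.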

5. Correlation does not imply causation.

It is tempting to assume that one pattern causes another. However, the correlation might be coincidental, or it might be a result of both patterns being caused by a third factor — a 'confounding' or 'lurking' variable. For example, ecologists at one time believed that poisonous algae were killing fish in estuaries; it turned out that the algae grew where fish died. The algae did not cause the deaths.

6. Regression to the mean can mislead.

Extreme patterns in data are likely to be, at least in part, anomalies attributable to chance or error. The next count is likely to be less extreme. For example, if speed cameras are placed where there has been a spate of accidents, any reduction in the accident rate cannot be attributed to the camera; a reduction would probably have happened anyway.

7. Extrapolating beyond the data is risky.

Patterns found within a given range do not necessarily apply outside that range. Thus, it is very difficult to predict the response of ecological systems to climate change, when the rate of change is faster than has been experienced in the evolutionary history of existing species, and when the weather extremes may be entirely new.

8. Beware the base-rate fallacy.

The ability of an imperfect test to identify a condition depends upon the likelihood of that condition occurring (the base rate). For example, a person might have a blood test that is '99% accurate' for a rare disease and test positive, yet they might be unlikely to have the disease. If 10,001 people have the test, of whom just one has the disease, that person will almost certainly have a positive test, but so too will a further 100 people (1%) even though they do not have the disease. This type of calculation is valuable when considering any screening procedure, say for terrorists at airports.
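The arithmetic in this tip is worth working through explicitly. A minimal sketch, using the tip's own numbers and assuming '99% accurate' means a 1% false-positive rate and 99% sensitivity:

```python
population = 10_001          # people tested
true_cases = 1               # actually have the rare disease
sensitivity = 0.99           # a sick person tests positive 99% of the time
false_positive_rate = 0.01   # '99% accurate' read as a 1% false-positive rate

healthy = population - true_cases
expected_true_positives = true_cases * sensitivity        # about 1 person
expected_false_positives = healthy * false_positive_rate  # about 100 people

# Chance that a positive result really means disease (positive predictive value):
ppv = expected_true_positives / (expected_true_positives + expected_false_positives)
print(f"Positive predictive value: {ppv:.3f}")  # about 0.010, i.e. roughly 1%
```

Even with a '99% accurate' test, a positive result indicates disease only about 1% of the time here, because the 100 false positives from the healthy majority swamp the single true positive.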

9. Controls are important.

A control group is dealt with in the same way as the experimental group, except that the treatment is not applied. Without a control, it is difficult to determine whether a given treatment really had an effect. The control helps researchers to be reasonably sure that there are no confounding variables affecting the results. Sometimes people in trials report positive outcomes because of the context or the person providing the treatment, or even the color of a tablet. This underscores the importance of comparing outcomes with a control, such as a tablet without the active ingredient (a placebo).

10. Randomization avoids bias.

Experiments should, wherever possible, allocate individuals or groups to interventions randomly. Comparing the educational achievement of children whose parents adopt a health program with that of children of parents who do not is likely to suffer from bias (for example, better-educated families might be more likely to join the program). A well-designed experiment would randomly select some parents to receive the program while others do not.

11. Seek replication, not pseudoreplication.

Results consistent across many studies, replicated on independent populations, are more likely to be solid. The results of several such experiments may be combined in a systematic review or a meta-analysis to provide an overarching view of the topic with potentially much greater statistical power than any of the individual studies. Applying an intervention to several individuals in a group, say to a class of children, might be misleading because the children will have many features in common other than the intervention. The researchers might make the mistake of pseudoreplication if they generalize from these children to a wider population that does not share the same commonalities. Pseudoreplication leads to unwarranted faith in the results. Pseudoreplication of studies on the abundance of cod in the Grand Banks in Newfoundland, Canada, for example, contributed to the collapse of what was once the largest cod fishery in the world.

12. Scientists are human.

Scientists have a vested interest in promoting their work, often for status and further research funding, although sometimes for direct financial gain. This can lead to selective reporting of results and, occasionally, exaggeration. Peer review is not infallible: journal editors might favor positive findings and newsworthiness. Multiple, independent sources of evidence and replication are much more convincing.

13. Significance is significant.

Expressed as P, statistical significance is a measure of how likely a result is to occur by chance. Thus P = 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly, and in truth there was no effect at all. Typically, scientists report results as significant when the P-value of the test is less than 0.05 (1 in 20).
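The 0.05 threshold can be demonstrated by simulation. In the sketch below (a simple two-group z-test on made-up data with a known standard deviation of 1), both groups are drawn from the same distribution, so there is truly no effect; by construction, about 1 in 20 such experiments still comes out 'significant' at P < 0.05:

```python
import math
import random

random.seed(2)

def null_experiment_p_value(n=50):
    """Two groups drawn from the SAME distribution: any 'effect' is pure chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)  # z statistic, known sd = 1
    return math.erfc(abs(z) / math.sqrt(2))           # two-sided normal p-value

p_values = [null_experiment_p_value() for _ in range(2000)]
false_positive_rate = sum(p < 0.05 for p in p_values) / len(p_values)
print(false_positive_rate)  # close to 0.05: the threshold sets the chance rate
```

The observed false-positive rate hovers near 0.05, which is exactly what the significance threshold promises when no real effect exists.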

14. Separate no effect from non-significance.

The lack of a statistically significant result (say a P-value > 0.05) does not mean that there was no underlying effect: it means that no effect was detected. A small study may not have the power to detect a real difference. For example, tests of cotton and potato crops that were genetically modified to produce a toxin to protect them from damaging insects suggested that there were no adverse effects on beneficial insects such as pollinators. Yet none of the experiments had large enough sample sizes to detect impacts on beneficial species had there been any.
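Statistical power can be made concrete with a small simulation. In the sketch below (illustrative numbers, not from the crop studies: a real effect of 0.3 standard deviations tested with the same z-test idea), a real effect is present in every experiment, yet small studies usually fail to detect it:

```python
import math
import random

random.seed(3)

def detects_effect(n, true_effect=0.3):
    """One z-test where a REAL (but small) effect exists; True if p < 0.05."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    z = (sum(b) / n - sum(a) / n) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2)) < 0.05

def power(n, trials=500):
    """Fraction of simulated studies that detect the effect at this sample size."""
    return sum(detects_effect(n) for _ in range(trials)) / trials

small_study_power = power(20)    # the real effect is usually missed
large_study_power = power(500)   # the same effect is almost always found
print(small_study_power, large_study_power)
```

The small study's non-significant results say nothing about whether the effect exists; it simply lacked the power to see it.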

15. Effect size matters.

Small responses are less likely to be detected. A study with many replicates might result in a statistically significant result but have a small effect size (and so, perhaps, be unimportant). The importance of an effect size is a biological, physical or social question, and not a statistical one. In the 1990s, the editor of the US journal Epidemiology asked authors to stop using statistical significance in submitted manuscripts because authors were routinely misinterpreting the meaning of significance tests, resulting in ineffective or misguided recommendations for public-health policy.

16. Study relevance limits generalizations.

The relevance of a study depends on how much the conditions under which it is done resemble the conditions of the issue under consideration. For example, there are limits to the generalizations that one can make from animal or laboratory experiments to humans.

17. Feelings influence risk perception.

Broadly, risk can be thought of as the likelihood of an event occurring in some time frame, multiplied by the consequences should the event occur. People's risk perception is influenced disproportionately by many things, including the rarity of the event, how much control they believe they have, the adverseness of the outcomes, and whether the risk is taken voluntarily or not. For example, people in the United States underestimate the risks associated with having a handgun at home by 100-fold and overestimate the risks of living close to a nuclear reactor by 10-fold.

18. Dependencies change the risks.

It is possible to calculate the consequences of individual events, such as an extreme tide, heavy rainfall and key workers being absent. However, if the events are interrelated (for example, a storm causes a high tide, or heavy rain prevents workers from accessing the site), then the probability of their co-occurrence is much higher than might be expected. The assurance by credit-rating agencies that groups of subprime mortgages had an exceedingly low risk of defaulting together was a major element in the 2008 collapse of the credit markets.
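The effect of dependence on joint probability is simple arithmetic. A minimal sketch, using assumed probabilities purely for illustration:

```python
# Illustrative (assumed) probabilities, not taken from the text:
p_storm = 0.01      # chance of a storm on a given day
p_high_tide = 0.01  # chance of an extreme tide on a given day

# If the two events were independent, co-occurrence would be very rare:
p_joint_independent = p_storm * p_high_tide  # about 1 day in 10,000

# But if a storm CAUSES a high tide half the time it occurs:
p_tide_given_storm = 0.5                          # assumed dependency
p_joint_dependent = p_storm * p_tide_given_storm  # about 1 day in 200

print(p_joint_independent, p_joint_dependent)
```

Multiplying the individual probabilities, as if the events were independent, understates the joint risk here by a factor of 50; this is exactly the error behind treating bundled subprime mortgages as unlikely to default together.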

19. Data can be dredged, or cherry picked.

Evidence can be arranged to support one point of view. To interpret an apparent association between consumption of yoghurt during pregnancy and subsequent asthma in offspring, one would need to know whether the authors set out to test this sole hypothesis or happened across this finding in a huge data set. By contrast, the evidence for the Higgs boson specifically accounted for how hard researchers had to look for it — the 'look-elsewhere effect'. The question to ask is: 'What am I not being told?'

20. Extreme measurements may mislead.

Any collation of measures (the effectiveness of a given school, say) will show variability owing to differences in innate ability (teacher competence), plus sampling (children might by chance be an atypical sample with complications), plus bias (the school might be in an area where people are unusually unhealthy), plus measurement error (outcomes might be measured in different ways for different schools). However, the resulting variation is typically interpreted only as differences in innate ability, ignoring the other sources. This becomes problematic with statements describing an extreme outcome ('the pass rate doubled') or comparing the magnitude of the extreme with the mean ('the pass rate in school x is three times the national average') or the range ('there is an x-fold difference between the highest- and lowest-performing schools'). League tables are rarely reliable summaries of performance.
