Research, Monitoring, and Science: What’s the difference?

A few blogs ago I discussed the words “collaboration,” “sustainability,” and “planning” in the context of HFF’s work. This week I’ll take on three more words: “research,” “monitoring,” and “science.”

Summary

For those of you who, understandably, don’t have time or desire to read the whole blog, here are the take-home messages.

  • Science requires 1) identification of a problem, 2) collection of data, and 3) formulation and testing of hypotheses.
  • Research and monitoring activities related to natural resources frequently fail to constitute science because they lack testing of hypotheses.
  • Key steps in hypothesis-testing are review of scientific literature, formulation of testable hypotheses, selection of statistical methods, design of data collection based on those methods, and use of mathematical and computational tools to conduct the hypothesis tests.
  • The peer-reviewed scientific literature is the basis for advancement of scientific knowledge and for the application of science in decision making.
  • HFF strives to conduct science and to publish that science in peer-reviewed journals.

Read on if you want the details!

Definitions

Webster offers the following definitions of research: 1) “careful or diligent search,” 2) “studious inquiry or examination, especially investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws,” and 3) “collection of information about a particular subject.”

Monitoring is more straightforward and has a single definition: “to watch, keep track of, or check, usually for a special purpose.”

Among several definitions of science, the one I will use is “system of knowledge covering general truths or the operation of general laws especially as obtained through the scientific method.” In turn, the “scientific method” is “principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses.”

Some Examples

To discuss commonalities, relationships, and distinctions among research, monitoring, and science, let’s consider four scenarios involving one or more of these activities. These examples come directly from HFF’s work.

Scenario 1: Literature review

In preparing a report to accompany a water-rights protest, my collaborators and I first found and read all of the reports and papers that had been written about hydrology and water management in the geographic area under consideration. Then we consulted more general references that describe the current state of knowledge regarding relationships among streamflow, groundwater, and fisheries in ecological systems that are similar in climate, geology, hydrology, vegetation, and species composition to our study area. Our literature review led us to identify potential effects of water diversion under the proposed rights on streamflows, groundwater levels, and fish populations in the study area.

Scenario 2: Dissolved oxygen concentration at Island Park Dam

HFF installed a continuous-recording water-quality sonde below Island Park Reservoir last spring. Every few months, we have downloaded and graphed the data. We noticed that dissolved oxygen levels declined throughout the summer and fell below the minimum required by trout during late summer.

Scenario 3: Trout use of the Buffalo River fish ladder

Every spring and fall since construction of the fish ladder at the Buffalo River dam in 2006, HFF has counted and recorded the length of every rainbow trout that has ascended the fish ladder. Our objectives for this activity include describing numbers and sizes of fish that migrate during each season of the year, tracking numbers of fish across years, and relating timing of fish migration to streamflow in the Henry’s Fork.

Scenario 4: Habitat preferences of adult rainbow trout

HFF’s fisheries investigations to date have focused largely on habitat requirements of juvenile trout. We know relatively little about habitat requirements of adult trout in the Henry’s Fork or about what management or restoration actions, if any, might increase habitat for adult trout. However, a review of the scientific literature indicated that aquatic macrophytes (vegetation rooted in the stream bottom) provide physical habitat for fish in streams that have flow regimes dominated by groundwater. Thus, we decided to investigate the possibility that adult rainbow trout use macrophytes for cover in the Harriman reach. We hypothesized that adult trout in the Harriman State Park reach are more likely to be found where macrophyte cover is high than where it is low. To test this hypothesis, we observed individual fish (tagged so that we can identify individuals and account statistically for repeated observations of the same fish) and recorded a number of habitat variables, including macrophyte cover, at each fish’s location. Immediately thereafter, we recorded those same variables at a randomly selected point in the stream near the fish’s location. We will use a statistical test to compare macrophyte cover at fish locations with cover at the paired random locations; this paired design allows the test to provide quantitative evidence for or against our hypothesis.
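To make the paired design concrete, here is a minimal sketch in Python using invented cover values; it is not our actual analysis, which must also account for repeated observations of the same tagged fish.

```python
# Minimal sketch of a paired comparison: percent macrophyte cover at
# each fish's location versus cover at a nearby random point.
# All numbers are hypothetical.
from scipy.stats import wilcoxon

cover_at_fish   = [70, 55, 80, 65, 90, 40, 75, 60]  # fish locations
cover_at_random = [30, 45, 50, 35, 60, 45, 40, 50]  # paired random points

# One-sided Wilcoxon signed-rank test of the hypothesis that cover is
# higher at fish locations than at the paired random locations
stat, p_value = wilcoxon(cover_at_fish, cover_at_random, alternative="greater")
print(f"signed-rank statistic = {stat}, p = {p_value:.3f}")
```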

So, which of these scenarios involve research, monitoring, or science, or a combination of these?

Scenario 1 qualifies as research, even though no original data were collected. The process of systematically reviewing scientific and technical literature related to a particular subject is research. In this case, we applied widely accepted principles of hydrogeology and stream ecology to a site-specific situation and identified possible consequences for the fishery.

Scenario 2 is obviously monitoring, but is it research? The way I’ve presented it here, it doesn’t qualify as research because there was no “careful or diligent search” and no “collection of information about a particular subject.” We simply put an instrument in the water and watched the dissolved oxygen concentration change through time.

Scenario 3 is both research and monitoring. It includes studious inquiry aimed at discovery of facts but also constitutes “keeping track of” fish use of the ladder.

Only scenario 4 qualifies as science; none of the other examples, as presented, include all three key elements of science: 1) problem identification, 2) collection of data, and most critically, 3) formulation and testing of hypotheses. Notice that this scenario qualifies as research, and in fact, research is necessary for application of the scientific method. But, as scenarios 1 and 3 illustrate, it is possible to conduct research without also conducting science.

These four examples illustrate a number of relationships among research, monitoring, and science. Research doesn’t necessarily involve collection of original data; a thorough literature review is a research undertaking. Monitoring can constitute research or not; conversely, research can include monitoring or not. Most importantly, research—regardless of whether it contains a monitoring component—is not science if it lacks any of the three key elements of the scientific method.

Science in natural-resources management

Over decades of experience with natural-resources issues, I have observed that most research and monitoring activities are motivated by a particular problem. Usually, the problem is reasonably well identified, at least at a general level. For example, “fish abundance has declined” is a typical problem statement. Assuming that this statement is based on good monitoring data, it qualifies as a sound, albeit general, problem statement. Although I have certainly encountered many situations in which data are lacking, it is also the case that most research and monitoring activities involve collection of relatively large amounts of data. In fact, the amount of data being collected has increased exponentially in recent decades, with advances in technology ranging from satellite-based remote sensing to DNA sequencing. So, the first two components of the scientific method are usually present in natural-resources research and monitoring. When research and/or monitoring fail to constitute science, the missing element is hypothesis testing.

What is hypothesis testing?

The following scenario repeated itself every few weeks when I was a full-time statistics professor. A graduate student would come to my office after two field seasons of data collection, show me an enormous and poorly organized spreadsheet full of data, tell me that this was his/her last semester in school, and ask if I could help analyze the data. (Actually, this still happens to me, just not as often.) My first question was always: “What are your hypotheses?” This question was almost always met with a blank stare. After an hour or two of questions and discussion, we would usually end up back at the problem statement, which was something general like “fish abundance has declined.”

So, how do you get from “fish abundance has declined” to “we hypothesize that adult rainbow trout use macrophytes for cover in the Harriman State Park reach of the Henry’s Fork”? The first statement is a problem; the second is a testable scientific hypothesis. The key to turning a problem statement into a testable scientific hypothesis is review of the scientific literature. In this particular case, 20 years of work on trout populations in the Henry’s Fork led to a logical sequence of papers and reports that contain the results of testing previous hypotheses about the Henry’s Fork trout population. This sequence of literature indicated solid understanding of juvenile trout ecology and even quantitative evidence that management actions based on this understanding had increased the number of juvenile trout entering the population each year. What was missing from the literature was an understanding of habitat requirements for adult trout. Further review revealed that macrophytes can provide cover for fish in spring-fed streams, which eventually led us to formulate a testable hypothesis about trout use of macrophytes in a particular reach of the Henry’s Fork.

Once a testable hypothesis has been developed, the next step is to identify specific statistical methods that are appropriate for testing that hypothesis. Statisticians and scientists have been debating the mathematical and philosophical aspects of scientific hypothesis testing for a century. There are several general classes of methods that are currently accepted as valid ways to evaluate scientific hypotheses. What all of these methods have in common is that they are based on fundamental properties of probability and require mathematics and computing to implement.
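To make that concrete, here is a minimal sketch of one such method, a permutation test, with invented numbers. It asks how often a difference as large as the observed one would arise by chance alone if the group labels were arbitrary:

```python
# Minimal sketch of a permutation test on hypothetical data: how often
# would a mean difference this large arise if the labels were random?
import random

group_a = [70, 55, 80, 65, 90]  # e.g., cover where fish were observed
group_b = [30, 45, 50, 35, 60]  # e.g., cover at random points
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

pooled = group_a + group_b
n_a = len(group_a)
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)  # randomly reassign labels
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if diff >= observed:
        extreme += 1
print(f"approximate p-value = {extreme / n_perm:.4f}")
```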

Data collection procedures should be determined only after the hypotheses and statistical analysis methods have been identified. Too often, the data are collected first, which usually results in collection of large amounts of irrelevant or redundant data and not enough of the data needed to test the hypothesis with a reasonable level of statistical power.
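As a sketch of what designing data collection around the statistical method looks like, a power calculation can tell you roughly how many observations you need before a single data point is collected. The effect size below is an assumption for illustration, not an estimate from our work:

```python
# Minimal sketch of sample-size planning for a paired design: how many
# paired observations are needed to detect an assumed standardized
# effect size of 0.5 with 80% power at alpha = 0.05 (one-sided)?
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5, alpha=0.05,
                             power=0.8, alternative="larger")
print(f"paired observations needed: about {n:.0f}")  # ~27 under these assumptions
```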

The role of peer-reviewed publication

Review of scientific literature is a key step in the hypothesis-testing process and hence is a necessary component of the scientific method. Above, I lumped both “reports” and “papers” into “literature.” By my definition, a report is a technical description of a research, monitoring, or scientific undertaking, but it is not peer-reviewed. Technical reports are an important source of information and data, particularly in site-specific situations. For example, hundreds of fisheries-management reports written and published online by the Idaho Department of Fish and Game provide a wealth of information on fisheries around the state. I routinely use information in these reports in my work and have for 20 years. Similarly, HFF has produced around 70 technical research reports over the past 30 years, and I also frequently consult these reports.

However, reports are not “papers,” a term I and most others in our business reserve for written documentation of science that has been reviewed by external, independent experts in the scientific discipline. These reviewers are usually anonymous to the authors of the paper, and often the authors are anonymous to the reviewers as well, to optimize objectivity in the review. Only after review by several experts, and subsequent revision of the manuscript to address the reviewers’ critiques, is a paper published in a legitimate scientific journal. In fact, the standards of most top-quality journals have become so high relative to the number of manuscripts submitted that a large fraction of manuscripts are deemed unsuitable for publication and are rejected.

There are three important reasons why peer-reviewed publication is critical to the development of science. The first is that it provides a control on quality. Only work that demonstrates thorough understanding of the existing knowledge on the subject, uses sound methodology, and draws appropriate conclusions is acceptable. I have reviewed somewhere around 40 manuscripts over the past 20 years, and by far the two most common flaws I find are 1) inadequate knowledge of the published literature on the subject, and 2) incorrect or inappropriate statistical analysis.

The second reason why peer-reviewed publication is important is that it provides objective documentation of the development of the general laws and principles of science. Each paper adds more evidence for or against specific hypotheses, providing the basis for formulation of future testable hypotheses and ultimately leading to establishment and acceptance of general laws and principles. The scientific principles that appeared in your high-school and college science textbooks were the result of decades of peer-reviewed science, reviewed and interpreted by the textbook authors.

The third reason why peer-reviewed publication is important, particularly in management of natural resources, is that managers and decision-makers are more likely to incorporate scientific information into their actions, and more likely to use the appropriate scientific information, if that information has been peer reviewed. Notice the phrase “more likely”; we all know that decisions supposedly based on science do not always use the science appropriately. But peer-reviewed science does carry more weight than non-reviewed work.

Research, Monitoring, and Science at HFF

Obviously, HFF does all three of these things, but we strive to do science. This means that our data-collection activities are motivated by a clearly defined problem, which has been broken down into testable hypotheses through knowledge of the literature. Furthermore, data are collected based on these hypotheses and on the particular set of statistical methods selected to test them.

To illustrate our approach to science at HFF, let’s return to scenario 2 above, the monitoring of dissolved oxygen below Island Park Dam. As I presented this scenario, it constituted monitoring but not research or science. In reality, dissolved oxygen is only one of nine parameters we are measuring, and the Island Park Dam site is only one of a dozen sites that will make up a network of monitoring stations. The network was designed to address identified problems, for example, phosphorus concentrations previously measured in the Henry’s Fork that exceeded acceptable levels for streams. Specific parameters and sites were selected based on review of site-specific and general literature and on specific hypotheses. An example of one such hypothesis is “delivery of suspended sediment into Island Park Reservoir exceeds export from the reservoir.” Finally, the choice of (very expensive) continuous-recording devices was made only after we understood and selected statistical methods appropriate for testing hypotheses about variation in water-quality parameters across space and time. In other words, our water-quality monitoring program really does constitute science.
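As a sketch of the arithmetic behind that sediment hypothesis (with invented numbers; the real analysis attaches statistical uncertainty to these estimates), sediment load is concentration multiplied by discharge, summed over paired measurements above and below the reservoir:

```python
# Minimal sketch of the quantity behind the sediment hypothesis:
# load = concentration x discharge, summed over sampling occasions.
# All values are invented for illustration; units are relative.
inflow_conc  = [12.0, 15.0, 9.0, 20.0]  # mg/L entering the reservoir
outflow_conc = [8.0, 9.0, 7.0, 11.0]    # mg/L leaving the reservoir
inflow_q     = [520, 640, 410, 790]     # cfs at the inflow gage
outflow_q    = [500, 650, 400, 800]     # cfs at the outflow gage

inflow_load = sum(c * q for c, q in zip(inflow_conc, inflow_q))
outflow_load = sum(c * q for c, q in zip(outflow_conc, outflow_q))

# The hypothesis predicts that delivery exceeds export
print(f"inflow load = {inflow_load:.0f}, outflow load = {outflow_load:.0f}")
```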

Lastly, we strive not just to do science but also to report it. The first step is to carefully document our methods and results in technical reports. These reports will guide us in formulation of future hypotheses and in designing efficient data-collection activities that balance statistical power to test these hypotheses with cost.

However, no matter how good our science is, it will not contribute to the work of the broader scientific community if it is not published in peer-reviewed outlets. The late Dr. Jack Longwell, co-founder of HFF’s research program and one of my most influential mentors, said at an HFF Board meeting back in the 1990s, “Science isn’t science unless it is published,” and he meant published in a peer-reviewed journal. I remember that to this day and try to live up to Jack’s example and vision. Jack published several hundred peer-reviewed papers in his career. Neither I nor HFF will ever come close to that, but HFF has contributed directly to 29 peer-reviewed papers in its 30-year existence, and we have three more in peer review right now. That’s not bad for a small non-profit organization, especially given the large amount of time and expertise required to produce papers that can stand up to peer review.

We are committed to expanding HFF’s capacity to publish peer-reviewed science, as evidenced by our current search for a temporary post-graduate research associate in statistical modeling (http://henrysfork.org/jobs-open). But one reason we are searching for someone with specific expertise in statistics (i.e., the hypothesis-testing aspect of science) to help us catch up on scientific publication is so that the rest of us can remain focused on the day-to-day challenges of maintaining wild trout in a working river. We scientists need to remember that meeting these challenges involves building and maintaining relationships with other watershed stakeholders as much as it involves science. HFF will continue to be a leader in both the science and art of watershed conservation.