Telemetry Techniques: A User Guide for Fisheries Research

Section 9.3: Developing a Quality Assurance Plan for Telemetry Studies: a Necessary Management Tool for an Effective Study

Jill M. Hardiman, Chris E. Walker, and Tim D. Counihan

doi: https://doi.org/10.47886/9781934874264.ch20

Telemetry has been used to answer various questions associated with research, management, and monitoring programs and to monitor animal behavior and population dynamics throughout the world. Many telemetry projects have been developed to study the passage, behavior, and survival of migrating adult and juvenile salmonids at hydroelectric projects on the mainstem Columbia and Snake rivers (Skalski et al. 2001a, 2001b; Skalski et al. 2002; Keefer et al. 2004; Goniea et al. 2006; Plumb et al. 2006). Telemetry-based field evaluations of salmon survival through hydroelectric projects are costly because of the technology (tags, telemetry systems, infrastructure, etc.) and personnel required to conduct them. Given the cost of implementing these projects, and the financial and conservation implications of the decisions made from the research results (e.g., forgone electricity production and conservation of threatened and endangered animals), it is paramount to ensure that quality data are collected by documenting all procedures, training, and data checks and by having sound protocols and quality assurance and control procedures in place.

Telemetry studies can pose unique data collection, processing, and analysis challenges. For instance, inferences about entire populations of animals are made from study animals that are captured, held, and tagged at disparate locations. Consequently, great care must be taken to ensure that any potential biases arising from field procedures are minimized (Peven et al. 2005). Released study animals are interrogated remotely by telemetry systems throughout the study area. The continuous recording of telemetry systems can result in large numbers of detections over a short time frame and the potential for false-positive detections from records that are weak or erroneous. Thus, there is the potential to generate large data sets (many thousands of lines) that require significant postprocessing. Data reduction can be done with software, with programming code within a software package, or manually to discern noise from valid data and extract the pertinent information for analysis. In either case, consistent, well-documented procedures need to be in place to ensure quality results and allow for repeatability of study methods.
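To illustrate the kind of data-reduction step described above, the following is a minimal sketch in Python. The record fields (tag ID, receiver, timestamp, signal strength) and the threshold and corroboration values are hypothetical; an actual study would tune such criteria to its equipment and validate them against known-tag tests, following its documented protocols.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single telemetry detection record (hypothetical fields)."""
    tag_id: str            # transmitter code
    receiver: str          # detection site
    timestamp: float       # seconds since study start
    signal_strength: float # receiver-reported signal level

def reduce_detections(records, min_signal=40.0, min_hits=2, window=60.0):
    """Discard weak records, then keep only detections corroborated by
    at least `min_hits` detections of the same tag at the same receiver
    within `window` seconds -- one common heuristic for rejecting
    false positives in continuously recorded telemetry data."""
    # Step 1: drop records below the signal-strength threshold.
    strong = [r for r in records if r.signal_strength >= min_signal]
    # Step 2: require corroborating detections nearby in time.
    accepted = []
    for r in strong:
        hits = [s for s in strong
                if s.tag_id == r.tag_id
                and s.receiver == r.receiver
                and abs(s.timestamp - r.timestamp) <= window]
        if len(hits) >= min_hits:
            accepted.append(r)
    return accepted

records = [
    Detection("A", "R1", 0.0, 50.0),   # corroborated by the next record
    Detection("A", "R1", 30.0, 55.0),
    Detection("B", "R1", 10.0, 50.0),  # single hit: rejected as suspect
    Detection("C", "R1", 5.0, 10.0),   # weak signal: rejected outright
]
valid = reduce_detections(records)     # keeps only the two "A" records
```

Whatever criteria are chosen, the key quality assurance point is that they be written down and applied consistently, so that the reduced data set can be reproduced from the raw detections.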