What is a stock assessment? Part I

Introduction

Assessing in style

Stock assessments are an important part of the way we manage fisheries in the U.S. A lot goes into a single stock assessment, and they can be quite daunting to navigate. However, when you break them down into their component parts, they really aren't so bad! In this post we will begin to explore stock assessments by introducing the concept and talking a bit about the process involved in Federal waters (*in Florida, marine waters beyond 3 miles offshore in the Atlantic and beyond 9 miles offshore in the Gulf of Mexico are managed by the Federal government through NOAA Fisheries, while waters inshore of that are managed by the State through the Florida Fish and Wildlife Conservation Commission; the State also uses stock assessments, and we will tackle its management process in a future post).

Background: what is a stock assessment?

At its heart, a stock assessment is exactly what it sounds like: a project aimed at assessing a stock of fish. The term "stock" refers to a unit of fish that is managed together; the unit might be distinguished based on biology or on fishing practices. For example, in the state of Florida our fish species are often divided into Atlantic and Gulf of Mexico stocks because few fish travel between the two bodies of water, so we consider them separately. A stock assessment pulls together all of the available information on that stock, including its biology and information about fishing, to try to figure out both what is going on with the stock at present (Is it overfished? Is it doing just fine?) and to predict what will happen in the future (What about 10 years from now? Can we keep fishing the same way?).
(Image caption: Grouper are one example of species assessed using the SEDAR process.)


Stock Assessments in the South: the SEDAR process

SEDAR (short for "Southeast Data, Assessment, and Review") refers to the way Federal stock assessments are conducted in the Southeastern U.S. The process consists of three stages: the Data Workshop, the Assessment Process, and the Review Workshop. During the Data Workshop, fisheries scientists pull together all of the data that will be needed for the stock assessment. Next, researchers use this information to create the stock assessment models during the Assessment Process (*we will talk more about models soon). Finally, a panel of independent experts reviews everything during the Review Workshop. The completed assessment (including the reports from all three stages) is then sent to the appropriate Fishery Management Council's Scientific and Statistical Committee to be accepted as appropriate for management. The Committee then uses the information in the assessment to make management recommendations, which go to the Fishery Management Council (in Florida, this would be either the Gulf of Mexico Fishery Management Council or the South Atlantic Fishery Management Council). The SEDAR process is certainly complex and involved, but it helps ensure that the stock assessments are of the highest quality, and therefore that the management recommendations we get out of them are the best possible.

Stock Rebuilding Targets: Biological Reference Points

If a stock has been assessed as overfished (meaning that past fishing has pushed the population below a sustainable level), the Sustainable Fisheries Act (*a national law passed by Congress) mandates that managers create a "rebuilding plan" to get the stock back to sustainable levels. To do this, managers aim for a target, or "biological reference point", that tells them the stock has returned to sustainable levels. There are many different types of reference points, and we will explore them in detail in another post.
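To make the idea of a reference point a little more concrete, here is a minimal Python sketch assuming the classic Schaefer surplus-production model, one of the simplest models in fisheries science. Real SEDAR assessments use far more detailed, age-structured models, and the numbers below are made up purely for illustration.

```python
# A minimal sketch of classic biological reference points, assuming the
# simple Schaefer surplus-production model dB/dt = r*B*(1 - B/K).
# Under this model:
#   MSY   = r * K / 4   (maximum sustainable yield)
#   B_MSY = K / 2       (biomass that produces MSY; a rebuilding target)
#   F_MSY = r / 2       (fishing mortality rate that yields MSY)

def schaefer_reference_points(r: float, K: float) -> dict:
    """Compute MSY-based reference points for a Schaefer model.

    r: intrinsic population growth rate (per year)
    K: carrying capacity, i.e., unfished biomass (e.g., in tonnes)
    """
    return {
        "B_MSY": K / 2.0,    # target biomass
        "MSY": r * K / 4.0,  # maximum sustainable yield
        "F_MSY": r / 2.0,    # fishing mortality rate producing MSY
    }

# Illustrative (made-up) numbers: r = 0.4 per year, K = 10,000 tonnes
print(schaefer_reference_points(r=0.4, K=10_000))
# {'B_MSY': 5000.0, 'MSY': 1000.0, 'F_MSY': 0.2}
```

In this toy world, managers rebuilding an overfished stock would aim to get biomass back up toward B_MSY; real reference points are estimated from data rather than plugged in.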
Next time: what all goes into a stock assessment? 


*Want to learn more? Check out these handy resources:

NOAA Assessment 101

So, is this “Fishing App” Worth it or not? (Author: Ryan Jiorle)

Fisheries science is a field whose very foundation ("counting the fish in the ocean") creates doubt in many anglers' minds.  Using smartphones to have recreational anglers upload their fishing information creates doubts in just about everyone's minds—fisheries scientists included.  However, that has not stopped a few groups from steaming forward under the belief that something created by and for anglers will cause them to report honestly and faithfully.  The most extensive program to date is the Snook and Gamefish Foundation's (SGF) iAngler app, the flagship app under its Angler Action Program [1].  Originally started as a way to provide state scientists with more data on snook fishing in Florida, it has expanded to include fresh- and saltwater fish across the country, inevitably turning some heads around the fisheries community.

Stock assessment scientists at the Florida Fish and Wildlife Conservation Commission's Fish and Wildlife Research Institute (FWRI) were interested in getting as much data as possible to help with snook assessments, but were also concerned about the reliability of this information.  Likewise, on the federal level, fisheries scientists with the National Oceanic and Atmospheric Administration (NOAA) were skeptical about the validity of data that are self-reported in a non-random manner [2].  To give the best chance of getting information that is representative of the whole angling population, there should be a fully randomized sample of anglers.  This is not what an app like iAngler does; rather, it is used by whoever is interested in downloading it.  What if only the most talented anglers use it (the anglers most fisheries and social scientists would expect to use such an app)?  Then the experts are left thinking that all the fishers out there have such success when they drop their lines in the water.  For reasons like this, an analysis of these volunteer fishing apps is necessary to begin solidifying or revising our assumptions.
NOAA's Marine Recreational Information Program (MRIP) survey is a randomized, rigorously designed sampling initiative that has interviewers intercepting anglers at boat ramps and beaches for catch-per-unit-effort (CPUE) information and calling them on the phone for effort data.  Because, as Professor John Shepherd once said, "Managing fisheries is hard: it's like managing a forest, in which the trees are invisible and keep moving around," we have no way of knowing the real values of the variety of fisheries metrics.  However, something like the MRIP provides data about as close to the "truth" as any other program.  So when we sought to gauge the validity of data from the iAngler app, we decided the best path would be to compare its information to that of the MRIP.
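For readers new to the term, catch-per-unit-effort is simply total catch divided by total fishing effort. The toy Python sketch below shows the arithmetic using made-up trip records; the real MRIP estimates come from a carefully weighted survey design, not a raw tally like this.

```python
# A toy illustration of catch-per-unit-effort (CPUE): total catch divided
# by total fishing effort. The trip records are hypothetical.

trips = [
    {"angler_hours": 4.0, "snook_caught": 2},
    {"angler_hours": 6.0, "snook_caught": 1},
    {"angler_hours": 3.0, "snook_caught": 0},
]

total_catch = sum(t["snook_caught"] for t in trips)
total_effort = sum(t["angler_hours"] for t in trips)  # effort in angler-hours

cpue = total_catch / total_effort  # fish caught per angler-hour
print(f"CPUE: {cpue:.2f} snook per angler-hour")  # CPUE: 0.23 snook per angler-hour
```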
For specific comparisons, we chose the "Three Wise Men" of fisheries metrics (or "Three Stooges," depending on your perception of fisheries): effort, catch, and catch-per-unit-effort (or catch rate).  The results were expected in some ways and surprising in others.  First, the only place that had a reasonable number of trips reported under iAngler was south Florida, the Atlantic side especially (where the app was created).  Because of this, a lot of the fishing that goes on in other parts of Florida is not being captured by the app, so using it on a statewide scale would be risky.  Also, the scale of the two programs was not comparable; the number of MRIP boat-ramp interviews dwarfed the number of iAngler reported trips.  The app only began in 2012 and has spread only by word of mouth, which likely explains its small size relative to NOAA's 35-year-old nationwide sampling program.  Finally, the focus of the anglers using iAngler was directed toward Florida's popular inshore species: common snook, spotted seatrout, and red drum.  Even though Floridians as a whole also like to fish offshore for snappers, groupers, billfish, etc., the app is adequately capturing only those three species.
While this seems to be two strikes against the citizen-driven app, there was one big question left: how do the catch rates compare?  The spatial bias toward southeast Florida might not persist if anglers in other areas start using the app.  And even though it has sufficient information for only a handful of species, scientists assess stocks individually anyway.  We moved forward by looking at catch rates for the three inshore fish, but narrowed our focus to iAngler's "hotspots," in other words, south Florida specifically.  This allowed the comparison to the MRIP's catch rates to be more representative than a statewide comparison.  When we added this specification, the iAngler catch rates were very similar to those of the MRIP for each of the three fish we considered.  This is surprising from a statistical standpoint, given that these anglers were not randomly chosen to participate; reporting was voluntary, and thus non-random.
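The exact data handling behind the study was certainly more involved, but the sketch below illustrates the general kind of filtered comparison described above: restrict both datasets to the same region and species, then compare mean catch rates. All field names and records here are hypothetical.

```python
# A generic sketch of a region- and species-filtered catch-rate comparison.
# Records and field names are hypothetical, for illustration only.

def mean_cpue(trips, region, species):
    """Mean catch rate (fish per angler-hour) for one region and species."""
    subset = [t for t in trips if t["region"] == region and t["species"] == species]
    catch = sum(t["caught"] for t in subset)
    effort = sum(t["angler_hours"] for t in subset)
    return catch / effort if effort > 0 else float("nan")

iangler_trips = [
    {"region": "south_fl", "species": "snook", "caught": 2, "angler_hours": 5.0},
    {"region": "south_fl", "species": "snook", "caught": 1, "angler_hours": 4.0},
]
mrip_trips = [
    {"region": "south_fl", "species": "snook", "caught": 3, "angler_hours": 9.0},
]

# Comparing apples to apples: same region, same species.
print(mean_cpue(iangler_trips, "south_fl", "snook"))  # 0.333...
print(mean_cpue(mrip_trips, "south_fl", "snook"))     # 0.333...
```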
To sum it up, SGF's iAngler app provides recreational fisheries information that is spatially biased toward south Florida and consists mostly of information on snook, seatrout, and red drum.  However, when appropriate comparisons are made, the catch rates given by anglers are very similar to those estimated by the MRIP survey.  If participation were to increase and become more balanced throughout the state, a program like iAngler could provide valuable data to fisheries scientists, especially for relatively rare and perhaps poorly sampled fisheries like snook.  It even has some advantages over traditional survey methods like the one used by the MRIP.  Because boat-ramp interviews take place after a trip is completed, they miss a lot of detailed information about the fish that were thrown back, which is a lot of fish in Florida's fisheries.  Users of an app like iAngler can submit size, weight, and other information about every fish they caught, not just the ones they brought back to land.  Self-reporting programs will always carry an undeniable statistical risk, but being aware of and accounting for potential biases could give programs like iAngler a place in future recreational fisheries management.
[1] Information about the Angler Action Program:
[2] The link below provides a good summary of the risks of using non-random, self-reported data for fisheries science: