
The Common Oceanographer: Crowdsourcing the Collection of Oceanographic Data

Title: The Common Oceanographer: Crowdsourcing the Collection of Oceanographic Data
Publication Type: Journal Article
Year of Publication: 2014
Authors: Lauro, FM, Senstius, SJ, Cullen, J, Neches, R, Jensen, RM, Brown, MV, Darling, AE, Givskov, M, McDougald, D, Hoeke, R, Ostrowski, M, Philip, GK, Paulsen, IT, Grzymski, JJ
Journal: PLoS Biology
Keywords: ArcGIS, citizen science, GIS and oceanography

We live on a vast, underexplored planet that is largely ocean. Despite modern technology, Global Positioning System (GPS) navigation, and advanced engineering of ocean vessels, the ocean is unforgiving, especially in rough weather. Coastal ocean navigation, with risks of running aground and inconsistent weather and sea patterns, can also be challenging and hazardous. In 2012, more than 100 international incidents of ships sinking, foundering, grounding, or being lost at sea were reported (wiki/List_of_shipwrecks_in_2012). Even a modern jetliner can disappear in the ocean with little or no trace [1], and the current costs and uncertainty associated with search and rescue make the prospects of finding an object in the middle of the ocean daunting [2].

Notwithstanding satellite constellations, autonomous vehicles, and more than 300 research vessels worldwide (www.vessels_by_country), we lack fundamental data relating to our oceans. These missing data hamper our ability to make basic predictions about ocean weather, narrow the trajectories of floating objects, or estimate the impact of ocean acidification and other physical, biological, and chemical characteristics of the world's oceans. To cope with this problem, scientists make probabilistic inferences by synthesizing models with incomplete data. Probabilistic modeling works well for certain questions of interest to the scientific community, but it is difficult to extract unambiguous policy recommendations from this approach. The models can answer important questions about trends and tendencies among large numbers of events but often cannot offer much insight into specific events. For example, probabilistic models can tell us with some precision the extent to which storm activity will be intensified by global climate change but cannot yet attribute the severity of a particular storm to climate change. Probabilistic modeling can provide important insights into the global traffic patterns of floating debris but is not of much help to search-and-rescue personnel struggling to learn the likely trajectory of a particular piece of debris left by a wreck.
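The gap between ensemble statistics and single-event prediction can be illustrated with a minimal drift model. The Python sketch below is purely illustrative and not from the article: the current vector, noise level, and the `simulate_drift` helper are all invented assumptions. It advects an ensemble of particles with a mean current plus random wind-driven noise; the ensemble mean and spread are well constrained, but no single particle's path predicts where one specific piece of debris went.

```python
import random
import math

def simulate_drift(start, hours, current=(0.3, 0.1), wind_sigma=0.2, seed=None):
    """Advect one particle: mean current plus Gaussian wind noise.

    Positions are km on a flat local grid; `current` is km/h.
    All parameter values are invented for illustration.
    """
    rng = random.Random(seed)
    x, y = start
    for _ in range(hours):
        x += current[0] + rng.gauss(0, wind_sigma)
        y += current[1] + rng.gauss(0, wind_sigma)
    return x, y

# An ensemble of particles turns one unknown trajectory into a distribution:
# useful for traffic patterns, much less so for finding a particular object.
endpoints = [simulate_drift((0.0, 0.0), hours=72, seed=i) for i in range(1000)]
mean_x = sum(p[0] for p in endpoints) / len(endpoints)
mean_y = sum(p[1] for p in endpoints) / len(endpoints)
spread = max(math.hypot(x - mean_x, y - mean_y) for x, y in endpoints)
print(f"mean endpoint: ({mean_x:.1f}, {mean_y:.1f}) km, search radius ~{spread:.0f} km")
```

The ensemble mean converges tightly, yet the search radius spanning the individual endpoints stays large, which is exactly the trends-versus-specific-events distinction the paragraph draws.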

Oceanographic data are incomplete because it is financially and logistically impractical to sample everywhere. Scientists typically sample over time, floating with the currents and observing their temporal evolution (the Lagrangian approach), or they sample across space to cover a gradient of conditions, such as temperature or nutrients (the Eulerian approach). These observational paradigms have various strengths and weaknesses, but their fundamental weakness is cost. A modern ocean research vessel typically costs more than US$30,000 per day to operate, excluding the full cost of scientists, engineers, and the research itself. Even an aggressive expansion of oceanographic research budgets would not do much to improve the precision of our probabilistic models, let alone to quickly and more accurately locate missing objects in the huge, moving, three-dimensional seascape. Emerging autonomous technologies such as underwater gliders and in situ biological samplers (e.g., environmental sample processors) help fill gaps but are cost prohibitive to scale up. Similarly, drifters (e.g., the highly successful Argo float program) have proven very useful for better defining currents, but unless retrieved after their operational lifetime, they become floating trash, adding to a growing problem.
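The two sampling paradigms above can be made concrete with a toy temperature field. In this Python sketch, the field, its coefficients, and the drift speed are all invented for illustration: an Eulerian-style survey occupies fixed stations along a spatial transect at one moment, while a Lagrangian survey rides the current and records how the same water evolves over time.

```python
def temperature(x, t):
    """Toy temperature field (deg C): a spatial gradient plus slow warming.

    x is position in km, t is time in hours; coefficients are invented.
    """
    return 15.0 + 0.02 * x + 0.1 * t

# Eulerian-style transect: fixed stations across space at a single time,
# resolving the spatial gradient.
eulerian = [temperature(x, t=0) for x in (0, 100, 200, 300)]

# Lagrangian survey: drift with the current (u = 2 km/h, an assumed speed)
# and sample the same water parcel as it evolves in time.
u = 2.0
lagrangian = [temperature(u * t, t) for t in (0, 24, 48, 72)]

print("Eulerian transect:", eulerian)
print("Lagrangian drift:", lagrangian)
```

The transect captures the gradient at a snapshot; the drifting series conflates spatial and temporal change along the parcel's path, which is why the two approaches answer different questions and why both are expensive to run from ships.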
