Data Quality Monitoring with Witness
Chris Murphy
Bluefin Robotics Corporation
553 South Street
Quincy, Massachusetts 02169 USA
[email protected]
Abstract—Seafloor surveys can be spoiled by misconfigured
payload sensors, operator error, and other events that prevent
survey data from being successfully acquired. Operators are limited in their ability to perform quality assurance mid-mission by
the amount of information available in real time from underwater
vehicles. Witness provides an end-to-end solution for monitoring
payload data quality, allowing vehicle operators to view clips of
payload imagery from a single subsea vehicle while underway.
I. INTRODUCTION
Witness is an end-to-end solution for transferring clips of
imagery to operators via an acoustic or Iridium SBD modem
while an underwater vehicle is underway. Sonar pings and
camera images are monitored by Witness, compressed, and
transmitted throughout a dive to give a real-time feed from the
AUV. Witness has been developed to support four use cases:
• verify payload data quality,
• tune payload parameters mid-mission,
• adapt missions based on operator feedback, and
• explore extreme environments.
The first two use cases, verifying payload data quality and
tuning payload parameters, were the primary motivations for
developing Witness. AUV operators are limited in their ability
to perform quality assurance mid-mission by the amount of
information available in real time from the AUV. This is one of
the fundamental limitations of untethered underwater vehicles
for subsea survey work – one largely imposed by constraints of
the acoustic communications channel [1][2]. Witness seeks to
eliminate the risk of a survey being spoiled by misconfigured
or disabled payload sensors (a costly and frustrating event) by
allowing the operator to perform spot checks of the acquired
data before mission completion and vehicle recovery.
The third use case, adapting missions based on operator
feedback, seeks to limit vehicle down time by reducing the
number of vehicle recoveries. By allowing operators to review
data without requiring a vehicle recovery, a single long mission
could fulfill what would previously have required multiple
sequential missions and vehicle recoveries. One direct application
would be in mine countermeasures,
when coupled with onboard Automatic Target Recognition
(ATR). Witness could allow a human operator to confirm or
reject specific targets prior to performing more detailed examination, such as a reacquire-identify survey, or prosecution of
a given target.
Fig. 1. Waterfall views of sonar pings transmitted to surface operators using Witness. This data was captured in February 2014 during field trials in the Gulf of Mexico. Each clip is approximately 10 kilobytes in size, but represents several minutes of sonar data. (a) Synthetic Aperture Sonar (SAS) sidescan clip showing a shipwreck (near nadir). (b) Sub-Bottom Profiler (SBP) clip; the water surface is to the left in this image.
The final use case, exploring extreme environments, is
motivated by the transition of AUVs from relatively safe
near-shore environments to increasingly hostile ones such as
under-ice [3][4][5], the deepest reaches of the ocean [6], or operations
in the midst of conflict. When operating in these environments,
recovery of a vehicle can range from challenging to impossible. With the increasing availability of low-cost task-specific
AUVs, it is also increasingly feasible to plan a mission without
any intention of recovering the AUV being used. In any of
these scenarios, the ability to communicate data from the AUV
without recovery could enable a mission to be successful, even
with the loss of a vehicle.
II. ARCHITECTURE
Witness is built as a module of TOPICS [7], a software architecture that manages communication for underwater vehicles over
high-latency channels. TOPICS provides an abstract interface
to acoustic and Iridium SBD modems, a basic network model,
and Medium Access Control (MAC). Witness is responsible
for generating and compressing clips, but not for managing
the underlying communication channel. This has the important
implication that Witness can easily be ported to many different
types of modem.
Fig. 2. The Witness software architecture consists of a set of sensor drivers, at bottom, coupled with a plugin to the TOPICS software architecture and an HTML5-based topside interface.
Witness comprises an HTML5 user interface, a TOPICS
plugin, and a set of software drivers (one per sensor) which
are responsible for converting sensor data into small images,
or “clips”. This overall architecture is depicted in Fig. 2. This
conversion process may be as simple as reading a photo off of
camera storage, or as complex as aggregating multiple sonar
pings with significant pre-processing. The resulting images are
heavily compressed and forwarded to TOPICS, which manages
their transmission to the surface.
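To make this division of labor concrete, the following sketch outlines the driver-to-TOPICS hand-off in Python. The names used here (Clip, ClipDriver, compress, and topics_enqueue) are illustrative placeholders, not the actual Witness or TOPICS interfaces.

```python
# Illustrative sketch of the per-sensor driver pattern; these names are
# placeholders, not the real Witness/TOPICS API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class Clip:
    pixels: np.ndarray   # 2-D image holding one clip's worth of sensor data
    metadata: dict       # georeferencing and sensor-specific fields

class ClipDriver(ABC):
    """One driver per sensor: raw sensor stream in, common clip format out."""

    @abstractmethod
    def ingest(self, raw: bytes) -> Optional[Clip]:
        """Accumulate raw data; return a Clip once enough has arrived."""

def compress(pixels: np.ndarray) -> bytes:
    """Placeholder for the wavelet compression stage (Section II-B)."""
    raise NotImplementedError

def topics_enqueue(payload: bytes, metadata: dict) -> None:
    """Placeholder: TOPICS owns queueing, MAC, and the modem itself."""
    raise NotImplementedError

def on_sensor_data(driver: ClipDriver, raw: bytes) -> None:
    clip = driver.ingest(raw)
    if clip is not None:  # a full clip has been assembled
        topics_enqueue(compress(clip.pixels), clip.metadata)
```

Because a driver only ever produces clips in this common format, supporting a new modem requires changes within TOPICS alone.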
Once clips are received on the surface, Witness saves the
received imagery and metadata to a shared network server,
which can then be accessed by multiple users. Data access
is provided via an HTML5 web interface, which allows any
network-connected individual to review data as it arrives from
the AUV without installing additional software.
A. Sensor Drivers
Witness relies on a single driver for each sensor. The
primary role of this sensor driver is to translate the raw stream
of data coming from a sonar, or the images produced by an
optical camera, into a common imagery format that can be
compressed by Witness. As there remains little commonality
amongst sonar data formats, this process is very specific to
each sensor and manufacturer. Thus far, we have developed
software interfaces to the four main imaging sensors used
on a survey-class AUV: a Synthetic Aperture Sidescan Sonar
(SAS), a Sub-Bottom Profiler (SBP), a Multibeam Echo-Sounder (MBES), and a monochrome still camera.
The clips for two of these sensors, the SAS and SBP,
mimic a traditional ‘waterfall’ view of sequential ping results.
Multiple pings are averaged to yield a single row, and multiple
rows are aggregated to form a complete clip as in Fig. 1(a), or
Fig. 1(b). If all clips are successfully transmitted to the surface,
the result is a continuous waterfall view of the data acquired
by those sensors. For each SAS ping, a time-varying gain
(TVG) is applied to compensate for the significant attenuation
at longer ranges.
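As an illustration of this waterfall assembly, the sketch below averages groups of pings into rows and applies a generic spreading-plus-absorption gain. The gain law, its coefficients, and the averaging factor are assumptions made for the example; the exact values used by Witness are not specified here.

```python
import numpy as np

def apply_tvg(ping: np.ndarray, ranges_m: np.ndarray,
              spreading_db: float = 40.0,
              alpha_db_per_m: float = 0.03) -> np.ndarray:
    """Time-varying gain: generic two-way spreading plus absorption.

    `spreading_db` and `alpha_db_per_m` are illustrative values only.
    """
    gain_db = (spreading_db * np.log10(np.maximum(ranges_m, 1.0))
               + 2.0 * alpha_db_per_m * ranges_m)
    return ping * 10.0 ** (gain_db / 20.0)

def waterfall(pings: list, pings_per_row: int = 8) -> np.ndarray:
    """Average consecutive pings into rows; stack rows into one clip."""
    rows = [np.mean(pings[i:i + pings_per_row], axis=0)
            for i in range(0, len(pings) - pings_per_row + 1, pings_per_row)]
    return np.vstack(rows)
```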
The other two sensors currently supported, the MBES and
camera, generate data that is best viewed as individual thumbnails.
Fig. 3. In addition to sonar waterfalls, Witness also handles imagery which is best presented as individual thumbnails. Each of these clips is also approximately 10 kilobytes in size. (a) Multibeam Echo-Sounder (MBES) clips are transmitted as a rectangular image, and warped on the surface. (b) A clip from the monochrome camera.
Fig. 4. SAS false color clips of a single scene at two compression settings. (a) 10312 Bytes – ‘Quality’ clip. (b) 5128 Bytes – ‘Reliable’ clip.
In the case of the multibeam echo-sounder, sub-sampled water column data for a single ping is periodically
converted to an image. This data is sent to the surface as
a rectangular image, and warped prior to display into a
traditional “wedge” format as shown in Fig. 3(a). Photos
captured by the monochrome camera are the easiest to handle,
as Witness is notified of each image that the camera captures.
An image is simply read from disk, and resized to decrease
the resolution. An example clip is shown in Fig. 3(b).
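The rectangular-to-wedge warp applied on the surface can be expressed as an inverse polar mapping from each display pixel back into the beams-by-samples rectangle. The sketch below uses nearest-neighbor sampling for brevity; it is a plausible rendering of the warp described above, not the exact resampling Witness performs.

```python
import numpy as np

def warp_to_wedge(rect: np.ndarray, fov_deg: float) -> np.ndarray:
    """Warp a beams-by-samples MBES rectangle into a display 'wedge'.

    Rows of `rect` are beams across the swath; columns are range samples.
    Nearest-neighbor inverse mapping keeps this sketch short.
    """
    n_beams, n_samples = rect.shape
    half = np.radians(fov_deg) / 2.0
    out = np.zeros((n_samples, 2 * n_samples), dtype=rect.dtype)
    ys, xs = np.mgrid[0:n_samples, 0:2 * n_samples]
    dx, dy = xs - n_samples, ys              # sonar apex at top center
    r = np.hypot(dx, dy)                     # range of each output pixel
    theta = np.arctan2(dx, dy)               # angle from straight down
    beam = (theta + half) / (2.0 * half) * (n_beams - 1)
    valid = (r < n_samples) & (beam >= 0) & (beam <= n_beams - 1)
    out[valid] = rect[np.round(beam[valid]).astype(int),
                      r[valid].astype(int)]
    return out
```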
In addition to the images, each clip is accompanied by
a small amount of metadata when it is transmitted. That
metadata is encoded using the Dynamic Compact Control
Language (DCCL) [8], and contains the data necessary to
georeference the clip. It may contain additional sensor-specific
fields, such as the field of view of the MBES.
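DCCL obtains its compactness by declaring explicit bounds and precision for every field, so each value occupies only the bits its range requires. Real DCCL messages are defined through Protocol Buffers extensions; the Python sketch below reproduces only the underlying quantization idea, and its field names and bounds are invented for illustration.

```python
import math

def pack_bounded(value: float, lo: float, hi: float, precision: int):
    """Quantize a bounded value; [lo, hi] and decimal precision fix the
    bit width, as in DCCL-style minimal encodings."""
    steps = int(round((hi - lo) * 10 ** precision))
    bits = max(1, math.ceil(math.log2(steps + 1)))
    q = int(round((min(max(value, lo), hi) - lo) * 10 ** precision))
    return q, bits

# Invented georeferencing fields with plausible bounds:
fields = [
    (42.25190, -90.0, 90.0, 5),     # latitude, 1e-5 degree resolution
    (-70.99510, -180.0, 180.0, 5),  # longitude
    (1520.4, 0.0, 6000.0, 1),       # depth in meters
]
packed, total_bits = 0, 0
for value, lo, hi, prec in fields:
    q, bits = pack_bounded(value, lo, hi, prec)
    packed = (packed << bits) | q
    total_bits += bits
print(total_bits, "bits, i.e.", math.ceil(total_bits / 8), "bytes")
# ~9 bytes here, versus 24 bytes for the same three values as doubles.
```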
B. Transmission
After a clip has been generated by a driver, it is compressed
using a wavelet-based compression algorithm. A fully embedded approach to image compression is used, motivated by the
design of CAPTURE [9]. The “Embedded Zerotree
of Wavelets” [10] algorithm is one noted early example of
this class of algorithm, which led to the more efficient set
partitioning in hierarchical trees (SPIHT) [11] coding method
and others [12][13]. This approach allows the clip size to be
easily tailored to the constraints of the acoustic or Iridium
channel by simply truncating the compressed data at an
appropriate size. While this allows images to be compressed to
fit any specific size, the choice is currently exposed to the AUV
operator directly as a simple binary choice of “high quality”
versus “high reliability”. Operators can adjust this parameter
over the course of a mission based on whether they would
rather see a high quality single clip for quality assurance,
or a more reliable but lower quality sequence of clips for
ongoing monitoring. Fig. 4 compares the high-quality and
reliable versions of a single SAS clip. Overall, the differences
are subtle, though there is more detail visible in the sand for
the higher quality image.
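To see why an embedded code lets operators trade quality for reliability by truncation alone, consider the toy sketch below: it transforms a clip with PyWavelets and keeps only the largest-magnitude coefficients that fit a byte budget. This is a stand-in for SPIHT rather than an implementation of it, and the six-bytes-per-coefficient cost model is a rough assumption; a true embedded coder emits a single bitstream whose prefix at any length decodes to a valid lower-quality image.

```python
import numpy as np
import pywt  # PyWavelets

def compress_to_budget(image: np.ndarray, budget_bytes: int,
                       wavelet: str = "bior4.4",
                       levels: int = 4) -> np.ndarray:
    """Toy stand-in for embedded coding (EZW/SPIHT-style truncation)."""
    coeffs = pywt.wavedec2(image.astype(np.float32), wavelet, level=levels)
    flat, slices = pywt.coeffs_to_array(coeffs)
    # Assume ~6 bytes per kept coefficient (position + quantized value).
    keep = min(flat.size, max(1, budget_bytes // 6))
    threshold = np.partition(np.abs(flat).ravel(), -keep)[-keep]
    flat[np.abs(flat) < threshold] = 0.0   # drop everything over budget
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

# 'Quality' vs. 'Reliable' clips differ only in where the stream is cut:
# hq = compress_to_budget(clip, 10_000); rel = compress_to_budget(clip, 5_000)
```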
III. USER INTERFACE
When clips are received on the surface ship, they are
decompressed, and saved to a server as TIFF images. That
server is accessible to both operators and any scientific staff,
and can be browsed using a web-based interface. Allowing
data to be accessed in this fashion means that many users,
from scientific staff to vehicle operators, can review the data
simultaneously. It is not possible to alter the data or mission
in any way through the interface, so inexperienced users can
explore without risk.
The Witness user interface has been developed in HTML as
a web application, requiring only a web browser and network
connection to the surface Witness server. The interface, shown
in Fig. 5, organizes clips by dive and sensor. When viewing the
clips received for a given sensor and dive, the user is presented
with a small number of controls. First, timestamp labels
can be shown or hidden. While generally useful for putting
data in context, the timestamps may obstruct a portion of the
clip. Second, the layout of the clips can be toggled between
the waterfall view shown in Fig. 5 and a grid view where each
clip is shown separately. By default, the SAS and SBP show
up in the waterfall view, whereas the camera and MBES clips
are shown in a grid view.
Fig. 5. Witness User Interface running in a web browser, displaying data captured from a SAS and transmitted to the surface via acoustic modem.
Finally, since the dynamic range of many AUV sensors
exceeds that of a typical computer monitor, Witness allows
users to apply a color mapping to the original data and adjust
the minimum and maximum colormapped values. Based on
the selected colormap and data range, the web server will
colormap the TIFF files on disk and serve them to the browser
as PNG images. While this colormapping process would pose a
significant burden for any large number of users, the number of
simultaneous users is expected to be relatively small (less than
20) at any given time. In addition, colormapped images are
only generated on-demand for those images that are on a user’s
screen, not every image, as the user adjusts the colormap.
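A minimal version of this on-demand rendering step is sketched below, assuming NumPy, Pillow, and Matplotlib are available; the function name and parameters are illustrative rather than the Witness server's actual interface.

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def render_clip(tiff_path: str, cmap_name: str,
                vmin: float, vmax: float) -> Image.Image:
    """Colormap one stored TIFF clip into an 8-bit RGBA image for the
    browser, using the user's chosen colormap and display range."""
    data = np.asarray(Image.open(tiff_path), dtype=np.float32)
    norm = np.clip((data - vmin) / max(vmax - vmin, 1e-9), 0.0, 1.0)
    rgba = (plt.get_cmap(cmap_name)(norm) * 255).astype(np.uint8)
    return Image.fromarray(rgba)   # e.g. .save(buf, format="PNG")
```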
IV. CONCLUSIONS
Bluefin Witness represents a new capability for mid-mission
monitoring. Rather than requiring a vehicle recovery, sonar
and camera data can now be available to operators before
the mission is complete. A Bluefin-21 delivered to an oil
and gas customer for subsea survey work, equipped with the
four sensors discussed here, has been the first candidate for
Witness testing. Thus far, Witness has successfully enabled
surface operators to review data from each of the sensors while
the vehicle is actively performing surveys. To date, hundreds
of clips have successfully been transmitted to the surface for
review.
REFERENCES
[1] M. Chitre, S. Shahabudeen, and M. Stojanovic, “Underwater acoustic
communications and networking: Recent advances and future challenges,” Marine Technology Society Journal, vol. 42, no. 1, pp. 103–116,
2008.
[2] I. F. Akyildiz, D. Pompili, and T. Melodia, “Underwater Acoustic Sensor
Networks: Research Challenges,” in Ad Hoc Networks. Elsevier, Mar.
2005, vol. 3, no. 3, pp. 257–279.
[3] K. W. Nicholls, E. P. Abrahamsen, J. Buck, P. A. Dodd, C. Goldblatt,
G. Griffiths, K. J. Heywood, N. E. Hughes, A. Kaletzky, G. F. Lane-Serff,
S. D. McPhail, N. W. Millard, K. I. C. Oliver, J. Perrett, M. R.
Price, C. J. Pudsey, K. Saw, K. Stansfield, M. J. Stott, P. Wadhams,
A. T. Webb, and J. P. Wilkinson, “Measurements beneath an Antarctic ice
shelf using an autonomous underwater vehicle,” Geophysical Research
Letters, vol. 33, 2006, L08612, doi:10.1029/2006GL025998.
[4] C. Kaminski, T. Crees, J. Ferguson, A. Forrest, J. Williams, D. Hopkin,
and G. Heard, “12 days under ice - an historic AUV deployment in the
Canadian High Arctic,” in Autonomous Underwater Vehicles (AUV), 2010
IEEE/OES, Sept. 2010, pp. 1–11.
[5] C. Kunz, C. Murphy, H. Singh, C. Pontbriand, R. Sohn, S. Singh,
T. Sato, C. Roman, K. Nakamura, M. Jakuba, R. Eustice, R. Camilli, and
J. Bailey, “Toward extraplanetary under-ice exploration: Robotic steps
in the Arctic,” Journal of Field Robotics, vol. 26, no. 4, 2009.
[6] A. Bowen, D. Yoerger, C. Taylor, R. McCabe, J. Howland, D. Gomez-Ibanez,
J. Kinsey, M. Heintz, G. McDonald, D. Peters, J. Bailey, E. Bors,
T. Shank, L. Whitcomb, S. Martin, S. Webster, M. Jakuba, B. Fletcher,
C. Young, J. Buescher, P. Fryer, and S. Hulme, “Field trials of the Nereus
hybrid underwater robotic vehicle in the Challenger Deep of the Mariana
Trench,” in OCEANS 2009, MTS/IEEE Biloxi, Oct 2009, pp. 1–10.
[7] C. Murphy, “TOPICS: A modular software architecture for high-latency
communication channels,” in IEEE Oceans - San Diego, 2013, Sept
2013, pp. 1–6.
[8] T. Schneider and H. Schmidt, “The Dynamic Compact Control Language: A compact marshalling scheme for acoustic communications,”
in Proceedings of MTS/IEEE OCEANS 2010, Sydney, Australia, 2010,
pp. 1–10.
[9] C. Murphy, J. Walls, T. Schneider, R. Eustice, M. Stojanovic, and
H. Singh, “CAPTURE: A communications architecture for progressive
transmission via underwater relays with eavesdropping,” Oceanic Engineering, IEEE Journal of, in press, pp. 1–11, 2013.
[10] J. M. Shapiro, “Embedded image coding using zerotrees of wavelet
coefficients,” Signal Processing, IEEE Transactions on, vol. 41, no. 12,
pp. 3445–3462, Dec 1993.
[11] A. Said and W. A. Pearlman, “A new, fast, and efficient image codec
based on set partitioning in hierarchical trees,” Circuits and Systems for
Video Technology, IEEE Transactions on, vol. 6, no. 3, pp. 243–250,
Jun 1996. [Online]. Available: http://www.cipr.rpi.edu/research/SPIHT/
[12] J. Tian and R. Wells, “Embedded image coding using wavelet difference
reduction,” in Wavelet Image and Video Compression, ser. The Kluwer
International Series in Engineering and Computer Science, P. Topiwala,
Ed. Springer Netherlands, 2002, vol. 450, pp. 289–301.
[13] J. S. Walker, “Lossy image codec based on adaptively scanned wavelet
difference reduction,” Optical Engineering, vol. 39, no. 7, pp. 1891–
1897, 2000. [Online]. Available: http://dx.doi.org/10.1117/1.602573