Buoys in the Bay

Ongoing work as part of my fellowship with the Rhode Island Consortium for Coastal Ecology Assessment Innovation and Modeling. RI C‑AIM is a collaboration of engineers, scientists, designers and communicators from eight higher education institutions developing new approaches to assess, predict and respond to the effects of climate change on coastal ecosystems.

“Bringing the Bay Observatory to 3D Life” - an article from the URI website, December 2018 (PDF)

One of the many initiatives of RI C-AIM was the creation of the Integrated Bay Observatory, which gathers real-time ecological data on Narragansett Bay through a network of buoys. I became interested in how the information extracted from an environment could be represented in a way that acknowledges the physicality and spatial reality of a sensing device in an ecosystem. I started by building 3D assets I thought might be useful in future prototyping of an experimental web interface.

Data Platforms and WebVR: Representations of the Buoy Experience

Presented during the RI C-AIM Annual Research Symposium


WebVR is an experimental JavaScript application programming interface that allows internet browsers to render a three-dimensional scene without the use of any additional plug-ins. Visualizing data from an oceanographic research buoy in WebVR allows us to include the buoy in the representation. The goal is to create an online portal for information that reveals the sensing apparatus - in this case, an RI C-AIM buoy in Narragansett Bay - that is typically hidden or ignored in traditional visualizations of data. The final project will be a WebVR representation of the buoy in its environment. Users will be able to manipulate 3D models of the buoys and learn more about the various technical components and their scientific applications. Users can also visualize what the buoy is sensing by querying the most recent data from the buoy hosted on the RI Data Discovery Center website. The data that is retrieved will be used to generate animations of the buoy “experience.”


The Camera Takes a Selfie

Few works in the history of art have inspired as much criticism, controversy and conjecture as Diego Velázquez’s masterpiece, Las Meninas. Painted in 1656 in the Royal Alcázar of Madrid, Las Meninas (Spanish for “The Ladies-in-waiting”) is simultaneously a self-portrait of Velázquez, a peek into the artist’s studio and practice, a mind-boggling optical puzzle, a defense of painting as a liberal art rather than a mechanical process, and a challenge to the royal power hierarchy and the concept of “majesty.”


On the left-hand side of the painting we see Diego Velázquez holding his brushes; he appears to have just stepped back from his canvas, of which we can only see the back. To the lower right of Velázquez are the titular meninas, flanking the five-year-old Infanta Margaret Theresa. Two dwarfs and a dog occupy the bottom right corner of the painting. Above them, in the shadows, the princess’s chaperone speaks to a bodyguard. In the background of the painting, paused in the illuminated doorway, is Don José Nieto Velázquez, head of the royal tapestry works. Directly to the left of the doorway a mirror hangs on the wall, and in its reflection we can see the likenesses of King Philip IV and Queen Mariana.

For years, it was believed that the vantage point of the viewer was the same as that of the king and queen - that to view the painting was to see through the eyes of royalty, with everyone in the painting a royal subject. The reflection of the royal couple in the mirror ostensibly affirmed this interpretation. However, in 1985 the art historian Joel Snyder pointed out that the true perspective vanishing point of the painting is the hand of Don José in the doorway. Snyder claims that the mirror is reflecting Velázquez’s canvas and the in-progress portrait of King Philip IV and Queen Mariana he is painting. This leads to a radically different interpretation of the painting: Diego Velázquez is showing us how the monarchy relies on him to create the illusion of majesty. He is revealing the power he has as a painter - that a picture is a model of reality, and everything in it is informed and organized through his experience. In his book The Order of Things, Michel Foucault spends the first chapter extensively analyzing Las Meninas, writing, “Now he (the painter) can be seen, caught in a moment of stillness, at the neutral centre of his oscillation.”

A buoy senses and records environmental data. We could approach this data as an objective numeric quantification of an environment. We could use the information to construct a representation of a location. But what if we took the Diego Velázquez approach to representing biological, chemical and spatial information? What would a Las Meninas inspired visualization of scientific data look like? Would the inclusion of a buoy in a representation of the data alter how we think about data being depicted?

Making a 3D Model

Initially, photogrammetry was used to create the 3D models of the buoys. Photogrammetry is the art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring and interpreting photographic images. The flotation collar, central column, and battery array were borrowed from Dr. Harold “Bud” Vincent and transported to RISD to be photographed with a Canon 5D Mark III. The objects were placed on a rotating base and turned ten degrees after every picture. Thirty-six photographs were taken per orbit, and after every full rotation the height of the camera was increased along the z-axis. Each object received four full photographic orbits, for a total of 144 images captured per component.
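The capture protocol above can be sketched as a simple enumeration of camera positions. This is an illustrative sketch only - the function and field names are hypothetical, not part of any capture tooling that was actually used:

```javascript
// Sketch of the photogrammetry capture schedule described above:
// 10-degree turntable steps, 36 shots per orbit, 4 orbits per component.
const DEG_PER_STEP = 10;                     // turntable rotation between shots
const STEPS_PER_ORBIT = 360 / DEG_PER_STEP;  // 36 photographs per full rotation
const ORBITS = 4;                            // camera raised along the z-axis after each orbit

function captureSchedule() {
  const shots = [];
  for (let orbit = 0; orbit < ORBITS; orbit++) {
    for (let step = 0; step < STEPS_PER_ORBIT; step++) {
      shots.push({ orbit, angleDeg: step * DEG_PER_STEP });
    }
  }
  return shots;
}

console.log(captureSchedule().length); // 144 images per component
```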

Two different photogrammetry programs were tested: Agisoft PhotoScanⓇ (now known as Agisoft MetashapeⓇ) and Autodesk ReCapⓇ. Although both programs operate in essentially the same way, Autodesk ReCapⓇ yielded better results. Even so, the final models were not good enough to use. Because of the geometric simplicity of the components and the reflectivity of their PVC surfaces, the polygonal mesh models never aligned properly. Although the textures produced by photogrammetry were highly detailed, without a solid mesh to UV map onto, they were of little use.

Instead of using photogrammetry, each component was drawn in a 3D modeling program called Rhinoceros 3DⓇ (commonly referred to as Rhino). Rhino geometry is based on the NURBS mathematical model, which focuses on producing mathematically precise representations of curves and freeform surfaces in computer graphics (as opposed to polygon mesh-based applications). Rhino allowed for highly accurate 3D models of the buoy components that could be updated as Bud and his team continued their construction. From Rhino, technical drawings could be rendered in Adobe IllustratorⓇ, or models could be exported to MayaⓇ for more controlled mesh editing. MayaⓇ is an AutodeskⓇ application used to create interactive 3D content, including video games, animated films, TV series, and visual effects.


The Ocean Online

A-Frame was chosen as the platform for building the WebVR scene. A-Frame is an open-source web framework for building virtual reality experiences; at its core is a powerful entity-component framework that provides a declarative, extensible, and composable structure on top of three.js. A more robust interface was needed for the user to affect the A-Frame scene from the DOM, so the interactive menu was built with Vue.js, an open-source JavaScript framework that features a reactivity system and optimized re-rendering.
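To illustrate A-Frame's declarative style, here is a minimal scene of the kind described - a buoy model floating on a flat plane of water. This is a hedged sketch: the model path, ids, and colors are placeholders, not the project's actual assets.

```html
<!-- Minimal A-Frame scene sketch; model path and ids are placeholders -->
<script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
<a-scene>
  <a-assets>
    <!-- glTF model of the buoy, e.g. exported from Maya -->
    <a-asset-item id="buoy" src="models/buoy.gltf"></a-asset-item>
  </a-assets>
  <!-- the buoy entity, positioned in front of the default camera -->
  <a-entity gltf-model="#buoy" position="0 1 -3"></a-entity>
  <!-- a flat plane standing in for the surface of the bay -->
  <a-plane rotation="-90 0 0" width="20" height="20" color="#1a4a6b"></a-plane>
  <a-sky color="#bcd8e8"></a-sky>
</a-scene>
```

Because the scene is ordinary HTML, a Vue-driven menu can modify it through standard DOM operations - setting attributes on these entities to swap models or trigger animations.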

Once the website is completed, users will be able to manipulate 3D models of the buoys and learn more about the various technical components and their scientific applications. Users will also be able to visualize what the buoy is sensing. This is achieved by the website querying the most recent data from the deployed buoys, hosted on the RI Data Discovery Center website. The retrieved data will be used to generate interactive 3D animations of what the buoy is sensing.
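The query-then-animate step might look something like the sketch below. Everything here is an assumption for illustration: the endpoint URL, the field names, and the mapping from sensor readings to animation parameters are placeholders, since the actual RI Data Discovery Center schema is not described in this text.

```javascript
// Sketch: map a buoy reading to animation parameters for the WebVR scene.
// Field names (waveHeightM, waterTempC) and the endpoint are hypothetical.
function toAnimationParams(reading) {
  return {
    // wave height (m) could drive the amplitude of the buoy's bobbing,
    // clamped so extreme readings don't break the animation
    bobAmplitude: Math.min(reading.waveHeightM ?? 0, 2),
    // water temperature (°C) normalized to 0–1 for a color ramp
    tempNorm: Math.max(0, Math.min(reading.waterTempC / 30, 1)),
  };
}

// Fetch the most recent reading from a (hypothetical) JSON endpoint.
async function latestReading(url) {
  const res = await fetch(url);
  return res.json();
}

console.log(toAnimationParams({ waveHeightM: 0.8, waterTempC: 15 }));
// → { bobAmplitude: 0.8, tempNorm: 0.5 }
```

The resulting parameters could then be written onto A-Frame entities as attributes, letting the scene re-render in response to each new query.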


This material is based upon work supported in part by the National Science Foundation under EPSCoR Cooperative Agreement #OIA-165221.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.