Katrina Aleksa successfully defended her PhD dissertation on Wednesday, making her the newest PhD in the USM Division of Marine Science. Her project examined the ecology and behavior of leatherback sea turtles in the Gulf of Mexico. Among the exciting findings: these large predators of gelatinous organisms tended to forage near sea-surface lows, which are generally sites of upwelling and anomalously high biological productivity (determined from a combination of satellite tags on turtles and remote sensing data). Turtles also foraged near the shelf break, especially around the Florida panhandle. These findings have direct applications to conservation of these large charismatic animals that make massive reproductive and foraging migrations. Congratulations to Katrina! Be on the lookout for more publications from her dissertation coming soon.
We just got a new paper published in Fisheries Oceanography, which you can read here (or contact me for a pdf). In short, we were able to provide a quantitative description of the association of lobster phyllosoma with jellies (also known as gelatinous zooplankton). This is a fascinating relationship in which the lobster larvae attach to gelatinous zooplankton of varying sizes (sometimes one individual phyllosoma attaches to several jellies simultaneously) and likely use them as both a floating shelter and a food resource. During the fall in the Gulf of Mexico, phyllosoma were abundant, and ~30% of them were attached to gelatinous zooplankton, with a higher probability of attachment farther from shore (toward the south). This kind of species interaction can only be revealed through in situ imaging and likely confers some evolutionary benefit for the life history of lobsters.
I finished reading this book about a month ago, but it has taken me some time to organize my thoughts and decide how to summarize such a thorough and entertaining piece of non-fiction. If you don't read any more of this post, my take-home message is: go read the book! Even if you have never been to the Gulf of Mexico, there are so many important lessons throughout its history.
It is rare that you get to read a book that covers so many different and often dense subjects in such an effortless and entertaining manner, but that is exactly what Jack E. Davis has accomplished in “The Gulf: The Making of an American Sea.” The book brings to life historical figures that shaped the Gulf and weaves in fascinating information about the ecology and geology that make the Gulf coast such a biologically rich area.
The first chapters summarize the various European expeditions made to map and explore the Gulf coast. In many cases, the first navigators of the Gulf had no idea what they were doing – errors were made mapping the location of the Mississippi River, and Spanish settlers did not know how to live off the productive estuaries as the Native Americans did. This lack of Gulf survival skills came from the settlers’ inability to observe and adapt to a landscape that was very different from their homeland. The chapters often mention an ethnologist named Cushing, and Davis uses a unique writing style to summarize information through the lens of a sleuth uncovering clues about how the native cultures lived.
A character who comes up periodically is the artist Walter Anderson, who lived in Ocean Springs, Mississippi, and whom the author regards as one of the first Gulf coast naturalists. Anderson loved the barrier islands and would make the 12-mile paddle out to them regularly, often staying for several days. He was particularly fascinated with Horn Island, and even survived a hurricane there. While residing on the island, he would paint the natural beauty, and he became acutely aware of changes caused by pollution. I can’t wait to take a trip out to Horn Island (the largest Mississippi barrier island) at some point.
In many ways, the history of the Gulf is a series of tragedies. I found myself wanting to go back in time to see the expansive, pristine pine forests that once thrived along the Gulf coast before shipbuilding led to their demise. During the bird feather fashion craze of the late 19th century, large birds, such as herons and egrets, were mercilessly shot along the Gulf coast. At the sound of gunfire, the parents would instinctively guard their nests, making them easy targets for slaughter. Mangroves were unwisely destroyed to make room for coastal development – people were not aware of their critical ecological role at the time. But every time the Gulf ecosystem seemed on the verge of destruction, a hero emerged to stand up for the thing that originally drew people to the Gulf coast: its natural beauty.
The most concerning and relevant material (when it comes to environmental management) comes in the final chapters, where we learn about various instances when long-term environmental sustainability was sacrificed for short-term economic gain. This is a common theme along the Gulf coast (Louisiana, in particular) as well as all over the country. I enjoyed the scientific history of the discovery of the Gulf of Mexico “dead zone” (an area of low dissolved oxygen in bottom waters) and its mechanisms of formation. It is a bit sad because we have known about the dead zone for decades now (as well as the processes that influence it), yet it continues to expand in size. The summer of 2017 produced the largest dead zone on record. The book overall is an extremely valuable, holistic treatment of environmental history, and I hope it will be read by many, so we can learn from the mistakes of the past and preserve the Gulf for future generations.
It is pretty common knowledge that young people today are reading and writing more than ever, but it is often in an unstructured way – the kind of writing used in text messages or on Twitter. Teachers have taken note of this change in style, and they have documented a decline in writing skill (75% of 8th and 12th graders are not proficient in writing). There is a push toward developing new methods to help kids learn to write well, which seems like a daunting task. In this New York Times article, Dana Goldstein focuses on the stories of students and teachers, and what it takes to develop writing skills.
Speaking from my own experience, I remember in high school being told to free-write or draw on something from my life to inspire the written word. To me, this was not particularly helpful to develop writing skills, and the teachers in Dana Goldstein’s article agree that free-writing has not improved kids’ abilities. Writing for most of my early life was a difficult process, and I really did not come to enjoy it until college. The difference was that the writing became more goal-oriented. There was a purpose or an argument that I was striving to make, and that process – crafting the right way to organize and present evidence to build an argument – became fun, and it drove me to hone my writing skills.
So what makes a good writer? Certainly some people have innate writing ability, but the most important thing anyone needs to understand is that good writing takes hard work. Some of my most valuable writing experiences came from having a professor read a draft of a manuscript and completely rip it to shreds, metaphorically speaking, which required me to take a step back and look at the big-picture goals of the manuscript at hand. Even though I probably thought I had decent writing ability at the time, my writing was not clear or well organized. Once I had some paragraphs written, I felt a sort of attachment to those words, almost like a sunk-cost fallacy: a desire to hold onto something that has taken a substantial amount of time and effort. A significant hurdle involved just generating the will to, in some cases, completely remove paragraphs and start the text fresh. Sometimes this is what it takes to create high-quality prose, and any good writer also has to have a thick skin and be open to criticism.
For people interested in writing about science and other non-fiction, I highly recommend Steven Pinker’s book “The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century.” If you read this or his other books, you will see that he is able to write clearly about complex subjects, which is often a struggle for me and other scientists. I especially enjoyed the parts of the book where he presents a few paragraphs from another author and goes through, in detail, how the author fails or succeeds in making his or her point. We can certainly learn a lot just from hearing how good writers critique others.
Computer vision, a form of Artificial Intelligence (AI) that involves computers extracting information from images, has tons of potential applications to all sorts of business and scientific needs. It is not surprising that various groups are investing in development of these techniques, but, as Gary Marcus points out in a New York Times op-ed, most of these approaches are bottom-up, crunching huge amounts of data on pixel color and pattern (i.e., AI as a “passive vessel”) to discern content or classify the image. At the same time, the approaches are confined to small groups in labs or companies that have little incentive to share their breakthroughs with the outside world. Another technical problem is that these approaches can produce incorrect results for reasons that are hard for a human to identify because they often come from multiple processing steps that are difficult to trace.
“To get computers to think like humans, we need a new A.I. paradigm, one that places “top down” and “bottom up” knowledge on equal footing. Bottom-up knowledge is the kind of raw information we get directly from our senses, like patterns of light falling on our retina. Top-down knowledge comprises cognitive models of the world and how it works.”
Marcus calls for approaches to AI that utilize more top-down approaches – that is, incorporate the strengths of human intelligence into the AI framework. More data (bottom-up) do not necessarily lead to a better decision, especially if that decision involves complex thought, such as considering image context or future actions of objects.
I wholeheartedly agree with Gary Marcus’s position based on my own experiences in the computer vision world. I am not a computer scientist, but I have worked for years on plankton imaging and the automated analysis of the resulting images, so I have a general familiarity with the approaches used to analyze image data. Currently, there is a push within the image processing world toward “deep learning” techniques, a form of AI that appears similar in spirit to previous approaches to recognizing plankton images: extract as much data as possible and categorize based on a training set that creates a model for the algorithm to follow. Working with the results of various computer classification techniques over the years, I have developed a profound respect for the human brain. We truly are image processing wizards – we can look at a 2D image and effortlessly interpret it in 3D, and we have incredible skill for understanding an image’s context. Both abilities are difficult to communicate to a computer, which works in mathematical terms. How do you describe mathematically that an object can have multiple orientations toward the camera, and that all of those orientations represent the same type of object? This is not trivial to implement in a computer program, yet our brains accomplish it without effort.
Our ability to construct “cognitive models of the world” leads to numerous mental shortcuts that are both accurate and computationally inexpensive. For example, within the study of predator-prey interactions and Batesian mimicry, there is an idea called “feature saltation,” which essentially means that a predator uses one or two visual traits to assess whether a potential prey item is palatable or threatening. This is exactly what humans do to recognize objects. We assess the overall shape, which computers do quite well, but then we key in on particular features of the image (e.g., lighting, the positioning of eyes, stems, etc.). Once we notice one or two relatively subtle traits, we can typically make a positive and accurate identification, and even say something about what may be happening in the image. From a computer’s perspective, it is difficult to key in on specific features in this way, which is why deep learning algorithms can occasionally be “tricked” in non-intuitive ways. Marcus mentions an example of a deep learning algorithm mistaking a pattern of yellow and black stripes for a school bus.
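To make the contrast concrete, here is a toy sketch of the purely bottom-up approach: a classifier built from nothing but raw pixel statistics. This is entirely illustrative (synthetic 8×8 "images," a nearest-centroid rule, invented class names), not the actual plankton pipeline or a deep network. It separates the two classes it was trained on, yet it also hands a confident label to an image belonging to neither class, a miniature version of the school-bus mistake.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blob():
    # class 0: bright central blob on a dim, noisy background
    img = rng.normal(0.1, 0.05, (8, 8))
    img[2:6, 2:6] += 0.8
    return img

def make_stripes():
    # class 1: bright vertical stripes on a dim, noisy background
    img = rng.normal(0.1, 0.05, (8, 8))
    img[:, ::2] += 0.8
    return img

# "training set": flattened pixel vectors with labels
train_X = np.array([make_blob().ravel() for _ in range(20)]
                   + [make_stripes().ravel() for _ in range(20)])
train_y = np.array([0] * 20 + [1] * 20)

# the "model" is just the mean pixel vector (centroid) of each class
centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(img):
    # nearest centroid in raw pixel space; always returns an answer
    return int(np.argmin(np.linalg.norm(centroids - img.ravel(), axis=1)))

# it cleanly separates the two classes it has seen...
test_imgs = [make_blob() for _ in range(10)] + [make_stripes() for _ in range(10)]
truth = [0] * 10 + [1] * 10
accuracy = np.mean([classify(im) == t for im, t in zip(test_imgs, truth)])
print("accuracy:", accuracy)

# ...but a uniformly bright image, which is neither class, still gets a
# label, with no notion that it has never seen such a thing before
print("uniform image classified as:", classify(np.full((8, 8), 0.9)))
```

Pixel statistics alone carry the model here; nothing in it encodes what a "blob" or a "stripe" is, which is exactly the missing top-down knowledge Marcus is pointing at.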
I hope the AI community takes some of these suggestions to heart because this is an exciting field that potentially could progress faster if we change a few approaches. Although he doesn’t say this explicitly, I believe Marcus would agree that we need more research conducted on how human (and other animal) brains construct these "cognitive models", which will help computer scientists more accurately incorporate this top-down knowledge into AI.
About a week ago, the University of Southern Mississippi held an "open ship", allowing members of the public to come aboard the RV Point Sur, meet some scientists, and learn about the research being done in their backyard at the School of Ocean Sciences and Technology. Lucho Chiaverano and I represented the plankton team and showed some ISIIS images and plankton samples preserved in ethanol. Most of the younger visitors were naturally drawn to the shark jaws and giant whale vertebra that scientists from GCRL brought on board, but I think people walked away with some appreciation for the little guys in the sea. I was super happy with the crowd turnout (pretty good for midday on a Monday), and the local news did a short story about the ship and visitors. It was great to meet people interested in marine science, and hopefully we can have another open ship soon!
Tomorrow I will be giving a 30-minute presentation about CONCORDE research for the U.S. Coast Guard Sector Mobile, Mississippi Area Committee Meeting. The talk will take place at the Grand Bay National Estuarine Research Reserve and will cover the main objectives of our research consortium, along with various applications of the findings toward oil spill mitigation. I am in the final stages of completing an overview paper summarizing some results from CONCORDE, so I have tried to adapt the content of that paper into a talk for a more general audience. I am not totally sure how it will go over, but I am excited for the opportunity to relate our work to some real-world applications in the community.
I am now reading The Gulf: The Making of an American Sea by Jack E. Davis. As an environmental history professor at the University of Florida, Dr. Davis provides an overview of the stories and people that had the most impact on the Gulf of Mexico, and, conversely, how the Gulf affected native cultures and settlers alike. I will post a full review once I am done with the book.
Congratulations to Dr. Brian Dzwonkowski of the University of South Alabama and the Dauphin Island Sea Lab for publishing a new paper in Continental Shelf Research! This paper describes the biogeochemical response of shelf waters to a meteorological flushing event that occurred just after the remnants of Hurricane Patricia passed over the northern Gulf of Mexico. At the edge of a freshwater-influenced region, we documented a dense aggregation of Trichodesmium (nitrogen-fixing cyanobacteria) that had apparently exploited an ecological niche in this relatively small area, as indicated by low N:P ratios and relatively strong salinity stratification. This is one of the first studies to be published as part of the CONCORDE consortium and provides insight into biological and physical coupling in this region. Stay tuned for more publications to come from our group!
My new website is live. Updates on research and news will happen here.