Subgroups
For the computation side: refer to Issue #14 for additional details. In summary: compare object detection results with additional features obtained from a continuous latent space method (i.e., CTL).
8/19
Computation Updates
- Gain understanding from ecology team about data/goals
- Converted AWIR (Meilun's data) to YOLO format and ran YOLOv11 on the dataset in two versions (see the training sketch after this list).
First version: cropped to one animal per scene (easy problem)
Second version: larger tiles with multiple animals per scene (harder problem)
- Set up YOLO training for the micro- and macro-scale plant ecology teams. Micro scale: seedling leaf detection from camera images.
Macro scale: tree crown detection from aerial imagery.
- Established baseline results from YOLO
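A minimal sketch of what the YOLOv11 baseline training above could look like with the ultralytics package; the dataset YAML name and hyperparameters here are placeholders, not the group's actual configuration.

```python
# Sketch of a YOLOv11 baseline run on data already converted to YOLO format.
# The dataset config name below is hypothetical.
from ultralytics import YOLO

# Start from a pretrained YOLOv11 nano checkpoint.
model = YOLO("yolo11n.pt")

# Train on a YOLO-format dataset (images/ and labels/ referenced by the YAML).
results = model.train(
    data="awir_single_animal.yaml",  # hypothetical dataset config
    epochs=100,
    imgsz=640,
)

# Evaluate on the validation split to record baseline metrics (mAP50, mAP50-95).
metrics = model.val()
print(metrics.box.map50, metrics.box.map)
```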
TO DO
- Train a continuous latent space model from the available features at micro and macro scales
- Modify YOLOe to incorporate additional visual embeddings in training (see the fusion sketch below)
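A hypothetical sketch of how extra visual embeddings could be fused into a detection backbone. YOLOe's internals are not described here, so this is a generic PyTorch illustration with made-up module names and dimensions, not the planned modification.

```python
import torch
import torch.nn as nn

class EmbeddingFusion(nn.Module):
    """Hypothetical fusion block: broadcast an external per-image embedding
    (e.g., from a continuous latent space model) across a backbone feature map
    and mix it in with a 1x1 convolution."""

    def __init__(self, feat_channels: int, embed_dim: int):
        super().__init__()
        self.mix = nn.Conv2d(feat_channels + embed_dim, feat_channels, kernel_size=1)

    def forward(self, feats: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone features; embedding: (B, D) external vector.
        b, _, h, w = feats.shape
        tiled = embedding[:, :, None, None].expand(b, embedding.shape[1], h, w)
        return self.mix(torch.cat([feats, tiled], dim=1))

# Example: fuse a 128-d latent embedding into a 256-channel feature map.
fusion = EmbeddingFusion(feat_channels=256, embed_dim=128)
feats = torch.randn(2, 256, 40, 40)
latent = torch.randn(2, 128)
out = fusion(feats, latent)  # (2, 256, 40, 40), ready for a detection head
```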
Ecology Updates
- Plant/Leaf Project:
Two levels of representation: image/plant level and object/leaf level.
Action item: annotate leaf-level bounding boxes and traits.
- UAV Forest Canopy Project:
Action item: segment individual tree crowns and test prediction of tree crown locations with YOLO; train a model on tree crown traits and genera.
Goals:
For the computation side: obtain better performance from object detection methods augmented with continuous latent space features and understand areas of improvement/difference.
Members:
- Scott LaRocca
- Nohemi Huanca-Nunez
- Matt Thompson
- Will Weaver
- Meilun Zhou
- Natalia Rogova
- Braedon Lineman
Outcomes: Document and/or link outcomes or results here when accomplished.
Short description of the rationale and objective for your subgroup. You can include links to supplementary documentation that already exists.
Goal(s): Biological goals: characterize functional diversity from videos/images across rivers; understand which traits contribute most to community-level variation and how much intraspecific variation is present.
Computational goals: extract multiple trait descriptions from images, compare methods for functional characterization (morphometrics vs. embeddings), and build a generalizable pipeline that is easy for others to use.
Both: Have a good time!
Members:
- Angel Estruche
- Juan Garcia
- Braden DeMattei
- Maria Napolitani
- Net Zhang
- Luke Meyers
- Ankit Upadhyay
- Hilmar Lapp
Project repository: https://github.com/Imageomics/Funcapalooza-Cicli2
Outcomes:
(steps, to be replaced with products)
- Process fish videos and images to gather fish annotations
- Extract frames from videos (FPS is a hyperparameter; see the sketch after this list)
- Perform detection
- Explore segmentation: full body and specific body parts
- Prompt an LLM to generate textual trait descriptions from images
- Work with an ontology and look at distances between traits.
- Compare Trait Descriptions, Morphometrics, Embeddings
- Generate trait space comparisons across biological variables
- Explore other biological questions that could be answered.
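A minimal sketch of the frame-extraction step, assuming OpenCV; the target FPS is the hyperparameter noted above, and the file paths are placeholders.

```python
# Save frames from a video at roughly `target_fps` frames per second.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, target_fps: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, round(native_fps / target_fps))  # keep every `step`-th frame
    Path(out_dir).mkdir(parents=True, exist_ok=True)

    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g. extract_frames("river_site1.mp4", "frames/site1", target_fps=2.0)
```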
Our group aims to better understand mimicry between different species. In particular, we have two datasets with different questions:
- Wasp-Moth:
- Moths mimic wasps to deter predators
- Which species of wasp is the moth mimicking? Given that, which visual traits are shared between the model and the mimic, and which differ?
- Snakes:
- Non-venomous or less venomous snakes mimic highly venomous snakes to deter bird predators.
- How well do the mimics do at representing the model (venomous) snakes?
- Does the strength of this similarity correlate with other variables (such as geographic proximity and population abundance)?
- How does the distribution of phenotypic space differ between models and mimics?
Goal: Extract visual traits from model and mimic species for quantitative analysis.
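One possible (hypothetical) way to make the model-mimic comparison quantitative: cosine similarity between pretrained image embeddings. The backbone choice and image paths below are illustrative assumptions, not the group's chosen method.

```python
# Compare a model and a mimic image via cosine similarity of ResNet-50 features.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()   # drop the classifier; keep 2048-d features
backbone.eval()
preprocess = weights.transforms()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(img).squeeze(0)

# Hypothetical image paths for one model-mimic pair.
sim = F.cosine_similarity(embed("wasp_model.jpg"), embed("moth_mimic.jpg"), dim=0)
print(f"model-mimic embedding similarity: {sim.item():.3f}")
```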
Members:
- Andressa Viol
- Sol Carolina Parra Santos
- Elizabeth Campolongo
- Michael Silagy
- Jacob Beattie
- David Carlyn
- Ziheng Zhang
- Xinyue Ma
Project Repository: https://github.com/Imageomics/mimicry-madness
Outcomes:
- Presentation: https://docs.google.com/presentation/d/12vn8RgfFCn3Jn3l7b2vKKbU8M9FXDDDYPA3AWH2qWz8/edit?usp=sharing
Using citizen science images of bird eggs, we aim to create a global database of egg traits. We plan to use this database to answer questions related to trait biogeography and function. With this database we can investigate the connections between egg traits, environmental factors, and function, such as the relationship between color and pattern on egg predation.
Goal: A dataset of egg traits from around the world, including shape, pattern, and color.
🥚 Egg photos will be masked and segmented using Grounding DINO and SAM2 ✅
🥚 Using traditional CV methods we will determine color and pattern
🥚 Using environmental, altitude, latitude, and predator range data, we will determine if color and pattern are related to predation risk
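A minimal sketch of the detect-then-segment step above, assuming the Hugging Face transformers port of Grounding DINO and the sam2 package; the checkpoint names, text prompt, thresholds, and image path are illustrative only.

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection
from sam2.sam2_image_predictor import SAM2ImagePredictor

image = Image.open("egg_photo.jpg").convert("RGB")

# 1) Text-prompted detection of the egg with Grounding DINO.
processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
detector = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
)
inputs = processor(images=image, text="an egg.", return_tensors="pt")
with torch.no_grad():
    outputs = detector(**inputs)
detections = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids,
    box_threshold=0.4, text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)[0]
box = detections["boxes"][0].numpy()  # take the first detected box

# 2) Box-prompted segmentation of the egg with SAM2.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
predictor.set_image(np.array(image))
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
egg_mask = masks[0].astype(bool)  # binary mask for downstream color/pattern work
```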
Members:
Yu Tsai-Chen
Paul Metzler
Marta Jarzyna
Net Zhang
(Honorary) Luke Meyers
Outcomes:
We merged iNaturalist citizen science photos of eggs with photos from GBIF (Global Biodiversity Information Facility), resulting in a database of 34,901 unique photos across 4,978 species. We used Grounding DINO and SAM2 to mask and segment the eggs, producing a clean dataset of egg images. We also collected environmental data linked to the geospatial information associated with each photo.
Next Steps:
- Address false positive predictions from Grounding DINO detection
- Improve cropping
- Modify code to support batch inference
- Extract trait data such as RGB color, shape, and pattern (a color-extraction sketch follows this list)
- Link them to global environmental data such as weather, elevation, predation risk
- Analyze global patterns of egg traits
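A small sketch of what the RGB color extraction could look like once a per-egg mask exists; the pattern proxy (intensity standard deviation) is an assumption for illustration, not the planned metric.

```python
# Mean RGB inside the egg mask plus a crude pattern proxy (intensity variance).
# The image path and mask are assumed to come from the segmentation step above.
import numpy as np
from PIL import Image

def egg_color_traits(image_path: str, mask: np.ndarray) -> dict:
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    pixels = rgb[mask]                  # (N, 3) pixels inside the egg mask
    mean_rgb = pixels.mean(axis=0)      # average egg color
    gray = pixels.mean(axis=1)          # per-pixel intensity
    return {
        "mean_r": float(mean_rgb[0]),
        "mean_g": float(mean_rgb[1]),
        "mean_b": float(mean_rgb[2]),
        "pattern_std": float(gray.std()),  # higher std ~ more patterning/speckling
    }
```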
Project Repo: https://github.com/Imageomics/global-egg-trait
Ground beetles have long been heralded as bioindicators of ecosystem health. However, their utility as bioindicators has not been tested at broad spatial scales. In this project we aim to build AI tools that let us use images of pinned ground beetles collected at NEON sites across the United States to test whether the land-use/disturbance history of NEON sites can be predicted from the trait composition of ground beetles at those sites.
Goal: We hope to begin to answer the question: Are ground beetles really bioindicators? To achieve this goal, we plan to 1) segment individual beetle parts and assess whether they can be used as measurable traits; 2) train a Sparse Autoencoder (SAE) on images of individual ground beetles to find novel traits; and 3) predict land-use/disturbance history as a function of trait space.
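A minimal sparse autoencoder sketch in PyTorch operating on image embeddings; the dimensions, L1 penalty, and training step are illustrative assumptions rather than the group's actual SAE setup.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int = 768, d_hidden: int = 8192):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))   # sparse latent "trait" activations
        x_hat = self.decoder(z)
        return x_hat, z

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3                            # strength of the sparsity penalty

embeddings = torch.randn(64, 768)          # stand-in for beetle image embeddings
x_hat, z = sae(embeddings)
loss = ((x_hat - embeddings) ** 2).mean() + l1_coeff * z.abs().mean()
loss.backward()
opt.step()
```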
Members:
- Mike Belitz
- Aly East
- Sydne Record
- Sam Stevens
- SM Rayeed
- Fangxun Liu
- Eric Sokol
- Hilmar Lapp
Updates:
- 8/19: Part of the group is setting up the system to run an SAE model on segmented ground beetle images. Other members are compiling NEON site land-use history data, along with a list of functional traits that have been collected in past ground beetle research. We came up with an approach for quantifying SAE outputs and overlapping them with segmented beetle body parts. Toward the end of the day, the group is annotating images to run the segmentation.
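One possible way to quantify the overlap between SAE outputs and segmented body parts: for a given SAE feature, compute the fraction of its spatial activation mass that falls inside each part mask. All names and shapes here are illustrative assumptions, not the group's finalized approach.

```python
import numpy as np

def activation_overlap(activation: np.ndarray, part_masks: dict) -> dict:
    """activation: (H, W) non-negative map for one SAE feature;
    part_masks: {"elytra": bool (H, W) mask, "pronotum": ..., ...}.
    Assumes the activation map and masks share the same resolution."""
    total = activation.sum() + 1e-8
    return {part: float(activation[mask].sum() / total)
            for part, mask in part_masks.items()}

# e.g. scores = activation_overlap(feat_map, {"elytra": elytra_mask, "head": head_mask})
```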
Outcomes: Document and/or link outcomes or results here when accomplished.
Project Repo: https://github.com/Imageomics/Beyond-Beetle-Body-Size
This event is sponsored by the Imageomics Institute and supported by the National Science Foundation under Awards No. OAC-2118240 and AWD-111317. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.