A Python tool to import observations from NatureMapr CSV exports into iNaturalist.
The codebase is organized into modular components:
- `main.py` - Main entry point and CLI interface
- `config.py` - Configuration, environment variables, and constants
- `observation_builder.py` - Builds observation parameters from CSV rows
- `file_downloader.py` - Downloads images and audio files with retry logic
- `inat_api.py` - iNaturalist API interactions (create, delete, retries)
- `utils.py` - Utility functions (file cleanup, etc.)
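The module and function names in the sketch below come from the components described in this README; the call signatures and overall flow are assumptions, shown only to illustrate how the pieces fit together.

```python
# Rough sketch of the import flow; signatures are assumptions, not the actual code.
import csv

from config import INAT_TOKEN
from inat_api import create_observation_with_retry
from observation_builder import make_observation_params
from utils import cleanup_temp_files

def run_import(csv_path: str, dry_run: bool = False) -> None:
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            params = make_observation_params(row)          # build observation + media params
            if dry_run:
                print(f"[dry-run] row {i}: {params}")
                continue
            create_observation_with_retry(params, token=INAT_TOKEN)
            cleanup_temp_files()                           # remove downloaded media for this row
```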
- Create a virtual environment:

  ```bash
  python3 -m venv venv
  ```

- Activate it:

  ```bash
  source venv/bin/activate   # macOS/Linux
  # or
  venv\Scripts\activate      # Windows
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Get your iNaturalist API token from: https://www.inaturalist.org/users/api_token
You can set the token using either method:
- Create a `.env` file in the project root directory
- Add your token: `INAT_TOKEN=your_token_here`
- The `.env` file is automatically loaded by the script (via `python-dotenv`)
- Important: make sure `.env` is in your `.gitignore` to keep your token secure
Alternatively, set it as an environment variable:

- macOS/Linux (bash/zsh): `export INAT_TOKEN=your_token_here`
- Windows (PowerShell): `$Env:INAT_TOKEN = "your_token_here"`
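For reference, a minimal sketch of how `config.py` can pick up the token with `python-dotenv`; the real `config.py` may differ in detail:

```python
# Minimal token-loading sketch using python-dotenv; the real config.py may differ.
import os

from dotenv import load_dotenv

load_dotenv()                          # reads .env from the project root, if present
INAT_TOKEN = os.getenv("INAT_TOKEN")   # otherwise falls back to the environment variable
```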
- Downloads of images/audio are saved to `tmp_attachments/` and cleaned up after each upload
- Default timezone is `Australia/Sydney` (configurable in `config.py`)
- Default tags: `NatureMapr`, `claire_import` (configurable in `config.py`)
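A hedged sketch of what those defaults could look like in `config.py` (the constant names here are assumptions; only the values are quoted from above):

```python
# Illustrative defaults; constant names are assumptions, values are from this README.
TMP_DIR = "tmp_attachments"              # staging directory for downloaded media
DEFAULT_TIMEZONE = "Australia/Sydney"    # timezone applied to observation timestamps
DEFAULT_TAGS = ["NatureMapr", "claire_import"]
```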
Import observations from a CSV file:

```bash
python main.py --csv path/to/NatureMapr-obs.csv
```

Options:

- `--csv` (required) - Path to the NatureMapr CSV export file
- `-n`, `--dry-run` - Test mode: don't upload anything, just print what would happen
- `--delete` - Delete observations by ID (comma-separated list)
- `--limit N` - Only process the first N rows
- `--resume N` - Start processing at row N (0-based index)
Examples:

```bash
# Dry run to test
python main.py --csv data.csv --dry-run

# Import first 10 rows
python main.py --csv data.csv --limit 10

# Resume from row 50
python main.py --csv data.csv --resume 50

# Delete specific observations
python main.py --delete "12345,67890,11111"
```
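For orientation, a minimal `argparse` sketch matching the options above; the actual argument parsing in `main.py` may differ:

```python
# Illustrative argparse setup for the options listed above; main.py may differ.
import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Import NatureMapr CSV exports into iNaturalist")
    parser.add_argument("--csv", help="Path to the NatureMapr CSV export file")
    parser.add_argument("-n", "--dry-run", action="store_true",
                        help="Don't upload anything, just print what would happen")
    parser.add_argument("--delete", help="Comma-separated observation IDs to delete")
    parser.add_argument("--limit", type=int, help="Only process the first N rows")
    parser.add_argument("--resume", type=int, default=0,
                        help="Start processing at row N (0-based index)")
    return parser.parse_args()
```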
The script expects a CSV file with these columns (minimum required):

- `Scientific Name` - Scientific name of the species
- `Recorded Date Utc` - Date in a format like `4/28/20`
- `Recorded Time Utc` - Time in 12-hour (`1:51 AM`) or 24-hour (`14:00`) format
- `Place` - Location name
- `Lat` - Latitude (decimal degrees)
- `Long` - Longitude (decimal degrees)
Optional columns:

- `Image1` through `Image5` - URLs to image files (missing values are skipped)
- `Audio` - URL to audio file
- `Description Public` - Public description text
- `Abundance` - Abundance value (observation field 647)
- `Animal health` - Animal health status (observation field 443)
- `Plant health` - Plant health status (observation field 443)
- `Animal size` - Animal size (observation field 779)
- `Circumference of trunk` - Trunk circumference (observation field 779)
- `Plant height` - Plant height (observation field 779)
- `Flower size` - Flower size (observation field 779)
- `Gender` - Gender information (observation field 41)
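A sketch of how `build_observation_fields()` might map these columns onto the iNaturalist observation field IDs quoted above; the real mapping in `observation_builder.py` may be structured differently:

```python
# Column-to-field-ID mapping taken from the IDs quoted above; the real
# build_observation_fields() in observation_builder.py may differ.
OBSERVATION_FIELD_IDS = {
    "Abundance": 647,
    "Animal health": 443,
    "Plant health": 443,
    "Animal size": 779,
    "Circumference of trunk": 779,
    "Plant height": 779,
    "Flower size": 779,
    "Gender": 41,
}

def build_observation_fields(row: dict) -> list[dict]:
    """Return one entry per non-empty mapped column in a CSV row."""
    return [
        {"observation_field_id": field_id, "value": row[column]}
        for column, field_id in OBSERVATION_FIELD_IDS.items()
        if row.get(column)
    ]
```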
- Timezone is automatically converted to `Australia/Sydney` before sending to iNaturalist (see the sketch after this list)
- Image and audio URLs are automatically downloaded to temporary files before upload
- All temporary files are cleaned up after each observation is processed
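A minimal sketch of that date/time handling, assuming the standard-library `zoneinfo` module; the actual `parse_datetime()` in `observation_builder.py` may use a different approach:

```python
# Parses "M/D/YY" dates with 12- or 24-hour times and converts UTC to Australia/Sydney;
# a sketch only, the real parse_datetime() may differ.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

SYDNEY = ZoneInfo("Australia/Sydney")

def parse_datetime(date_str: str, time_str: str) -> datetime:
    for time_format in ("%I:%M %p", "%H:%M"):        # 12-hour ("1:51 AM") or 24-hour ("14:00")
        try:
            parsed = datetime.strptime(f"{date_str} {time_str}", f"%m/%d/%y {time_format}")
        except ValueError:
            continue
        return parsed.replace(tzinfo=timezone.utc).astimezone(SYDNEY)
    raise ValueError(f"Unrecognised date/time: {date_str!r} {time_str!r}")
```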
The script provides:
- Progress bar showing import status
- Per-row status messages
- Summary statistics at the end:
  - Total rows considered
  - Successfully created
  - Dry-run skipped
  - Failed
Example output:

```
Dry run: False
Rows to process: 100
Starting at row: 0
Importing observations: 100%|████████████| 100/100 [02:30<00:00]
Created iNat observation 123456 (CSV row 0)
Created iNat observation 123457 (CSV row 1)
...
=== IMPORT SUMMARY ===
Total rows considered: 100
Successfully created: 98
Dry-run skipped: 0
Failed: 2
======================
```
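The progress bar in that output looks like `tqdm`; assuming that is what the script uses, the loop and summary counters could be wired together roughly like this (a sketch, not the actual reporting code):

```python
# Hedged sketch of the progress bar and summary counters, assuming tqdm;
# the actual code in main.py may differ.
from tqdm import tqdm

def import_rows(rows: list[dict], dry_run: bool = False) -> None:
    created = skipped = failed = 0
    for i, row in enumerate(tqdm(rows, desc="Importing observations")):
        try:
            if dry_run:
                skipped += 1
                continue
            # ... build params, download media, upload ...
            created += 1
        except Exception as exc:
            failed += 1
            tqdm.write(f"Row {i} failed: {exc}")

    print("=== IMPORT SUMMARY ===")
    print(f"Total rows considered: {len(rows)}")
    print(f"Successfully created: {created}")
    print(f"Dry-run skipped: {skipped}")
    print(f"Failed: {failed}")
    print("======================")
```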
- The script automatically retries failed requests with exponential backoff (see the sketch after this list)
- To increase the timeout, modify the `timeout` parameter in `file_downloader.py` and `inat_api.py`
- Bad or inaccessible URLs will show warnings and be skipped
- Check that image/audio URLs are publicly accessible
- Ensure dates are in the format `M/D/YY` (e.g., `4/28/20`)
- Times can be in 12-hour (`1:51 AM`) or 24-hour (`14:00`) format
- Ensure `INAT_TOKEN` is set in the `.env` file or as an environment variable
- Check that the `.env` file is in the project root directory
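A minimal sketch of the retry-with-backoff pattern described above, assuming plain `requests`; the actual helpers in `file_downloader.py` and `inat_api.py` may be implemented differently:

```python
# Minimal exponential-backoff sketch; attempt counts and timeout are illustrative,
# and the real file_downloader.py / inat_api.py code may differ.
import time

import requests

def get_with_retry(url: str, max_attempts: int = 3, timeout: float = 30.0) -> requests.Response:
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response
        except requests.RequestException as exc:
            if attempt == max_attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
            delay *= 2                               # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")                # loop always returns or raises
```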
- `config.py` - All configuration constants and environment setup
- `observation_builder.py` - Core logic for building observation parameters
  - `parse_datetime()` - Date/time parsing
  - `download_media()` - Media file handling
  - `build_observation_fields()` - Observation field mapping
  - `make_observation_params()` - Main builder function
- `file_downloader.py` - File downloading with retry logic
- `inat_api.py` - API interaction functions
  - `create_observation_with_retry()` - Create with retry logic
  - `delete_observations()` - Batch delete function
- `utils.py` - Utility functions
  - `cleanup_temp_files()` - Cleanup helper
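As a rough illustration of how the builder helpers compose (only the imported names come from the list above; the signatures and the returned dictionary keys are assumptions):

```python
# Hypothetical composition of the builder helpers; signatures and dictionary keys
# are assumptions, only the imported names come from this README.
from observation_builder import build_observation_fields, download_media, parse_datetime

def make_observation_params_sketch(row: dict) -> dict:
    observed_on = parse_datetime(row["Recorded Date Utc"], row["Recorded Time Utc"])
    media_paths = download_media(row)        # Image1..Image5 and Audio URLs, where present
    return {
        "species_guess": row["Scientific Name"],
        "observed_on_string": observed_on.isoformat(),
        "latitude": row["Lat"],
        "longitude": row["Long"],
        "place_guess": row["Place"],
        "description": row.get("Description Public", ""),
        "observation_fields": build_observation_fields(row),
        "media_paths": media_paths,
    }
```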
- Dependencies are pinned in `requirements.txt`
- To refresh from your environment: `pip freeze > requirements.txt`