This repo contains code used to extract speech transcripts from the BabyView videos, which capture egocentric naturalistic data of infants' home environments. Our current pipeline uses the Whisper model and can be found in the whisper_pipeline subfolder. To extract transcripts, we first extract mp3 audio from the current pull of unzipped raw MP4 video files (currently stored at /ccn2/dataset/babyview/unzip_2025/babyview_main_storage on the CCN2 cluster) and then run either whisper-large-v3-turbo or distil-large-v3 on each audio file. Both models output one CSV file per video containing utterance-level transcripts and timestamps (current output files are at /ccn2/dataset/babyview/outputs_20250312/transcripts on CCN2). More information about our current pipeline is in the README within whisper_pipeline.
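
For orientation, here is a minimal sketch of the extraction and transcription steps in Python. The actual implementation lives in whisper_pipeline; the function names, ffmpeg settings, and CSV column names below are illustrative assumptions, not the repo's exact schema.

```python
"""Minimal sketch: MP4 -> mp3 -> utterance-level transcript CSV.

Assumes ffmpeg is on PATH and `transformers` + `torch` are installed.
Column names and audio settings are illustrative, not the repo's schema.
"""
import csv
import subprocess
from pathlib import Path

from transformers import pipeline


def extract_audio(mp4_path: Path, mp3_path: Path) -> None:
    """Pull a mono 16 kHz mp3 track out of a raw MP4 with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(mp4_path),
         "-vn", "-ac", "1", "-ar", "16000", str(mp3_path)],
        check=True,
    )


def transcribe_to_csv(mp3_path: Path, csv_path: Path) -> None:
    """Run Whisper and write one CSV row per timestamped chunk."""
    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3-turbo",  # or distil-whisper/distil-large-v3
        chunk_length_s=30,       # long-form audio is processed in 30 s windows
        return_timestamps=True,  # yields (start, end) per chunk
    )
    result = asr(str(mp3_path))
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["start", "end", "text"])  # hypothetical column names
        for chunk in result["chunks"]:
            start, end = chunk["timestamp"]
            writer.writerow([start, end, chunk["text"].strip()])
```
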
We also use the Voice Type Classifier from Lavechin et al. (2020) for speaker type identification. We then merge the transcripts and speaker labels using alignment.R, so that each utterance has an associated speaker; a sketch of this merge follows.
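
Below is a hedged Python illustration of that merge. The real merge happens in alignment.R; this sketch assumes (without verifying against the repo) that the Voice Type Classifier emits standard RTTM rows and that each utterance receives the speaker label whose segment overlaps it the most.

```python
"""Illustrative version of the utterance/speaker merge done by alignment.R.

Assumes a transcript CSV with start/end/text columns (as in the sketch
above) and an RTTM file from the Voice Type Classifier. The
maximal-overlap rule is an assumption, not the repo's documented logic.
"""
import csv


def read_rttm(rttm_path: str) -> list[tuple[float, float, str]]:
    """Parse RTTM rows into (onset, offset, label) tuples."""
    segments = []
    with open(rttm_path) as f:
        for line in f:
            fields = line.split()
            # RTTM layout: SPEAKER file 1 onset duration <NA> <NA> label ...
            onset, dur, label = float(fields[3]), float(fields[4]), fields[7]
            segments.append((onset, onset + dur, label))
    return segments


def overlap(a_start: float, a_end: float, b_start: float, b_end: float) -> float:
    """Length of the temporal intersection of two intervals, >= 0."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))


def label_utterances(transcript_csv: str, rttm_path: str, out_csv: str) -> None:
    """Attach the maximally overlapping speaker label to each utterance."""
    segments = read_rttm(rttm_path)
    with open(transcript_csv) as f_in, open(out_csv, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames + ["speaker"])
        writer.writeheader()
        for row in reader:
            start, end = float(row["start"]), float(row["end"])
            best = max(
                segments,
                key=lambda s: overlap(start, end, s[0], s[1]),
                default=None,
            )
            has_overlap = best is not None and overlap(start, end, best[0], best[1]) > 0
            row["speaker"] = best[2] if has_overlap else "NA"
            writer.writerow(row)
```
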