Add ACHAP task / evaluation metrics #37

Open
retkowski wants to merge 17 commits into hlt-mt:main from retkowski:achap

Conversation

@retkowski

This PR adds the ACHAP task and its metrics to MCIF. Selected metrics include collar F1 for segmentation quality and BERTScore for title quality. Optionally, we can also get the WER of the transcript. It handles forced alignment automatically using torchaudio. Important: Markdown format is assumed for the structured transcript (i.e., "\n# Title\n" as separator between chapters).
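The `"\n# Title\n"` separator convention described above can be illustrated with a small sketch. This is a hypothetical helper for illustration only, not the chunkseg implementation:

```python
import re

def split_chapters(transcript: str) -> list[tuple[str, str]]:
    """Split a Markdown-structured transcript into (title, body) chapters.

    Chapters are assumed to be delimited by level-1 headings on their
    own line, as described in the PR.
    """
    chapters = []
    # Split on level-1 headings; the capture group keeps the heading text,
    # so parts = [preamble, title1, body1, title2, body2, ...].
    parts = re.split(r"^# (.+)$", transcript, flags=re.MULTILINE)
    for i in range(1, len(parts) - 1, 2):
        chapters.append((parts[i].strip(), parts[i + 1].strip()))
    return chapters
```

A transcript with no level-1 headings yields an empty chapter list, which is one reason the Markdown-format assumption matters for the hypothesis files.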

Collaborator

@sarapapi left a comment

Looks good, just minor changes. Additionally, have you manually tested it?

@retkowski
Author

Updated the docstring and yes, I also tested the code using a small sample XML.

@retkowski
Author

@sarapapi Passes the checks now.

@sarapapi
Collaborator

> @sarapapi Passes the checks now.

Thanks, but there are still pending comments.

@sarapapi
Collaborator

@retkowski could you please address the remaining requested changes by @mgaido91?

@retkowski
Author

@sarapapi I already added a new commit that incorporates the requested changes. However, I replied to two of his comments because I think further confirmation/clarification would be good.

@mgaido91
Contributor

@retkowski I do not see your comments, could you check, please?

Contributor

@mgaido91 left a comment

LGTM, thanks!

Contributor

@mgaido91 left a comment

@retkowski I tried this but I am having a hard time. I suggested a fix to solve a first problem, but then a second problem arises:

Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/bin/mcif_eval", line 7, in <module>
    sys.exit(cli_script())
  File "/Users/mgaido/github/mcif/src/mcif/evaluation.py", line 446, in cli_script
    scores = main(
  File "/Users/mgaido/github/mcif/src/mcif/evaluation.py", line 410, in main
    scores.update(score_achap(base_ref_path, hypo, ref, lang))
  File "/Users/mgaido/github/mcif/src/mcif/evaluation.py", line 350, in score_achap
    results = evaluate_batch(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/evaluate.py", line 210, in evaluate_batch
    result = evaluate(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/evaluate.py", line 86, in evaluate
    hyp_timestamps = _timestamps_from_transcript(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/evaluate.py", line 311, in _timestamps_from_transcript
    result = parse_transcript(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/parsers.py", line 406, in parse_transcript
    titles, sections = parse_markdown(text)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/parsers.py", line 228, in parse_markdown
    sections.append(_tokenize_section(section_text))
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/parsers.py", line 121, in _tokenize_section
    return sent_tokenize(text)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/__init__.py", line 119, in sent_tokenize
    tokenizer = _get_punkt_tokenizer(language)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/__init__.py", line 105, in _get_punkt_tokenizer
    return PunktTokenizer(language)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/punkt.py", line 1744, in __init__
    self.load_lang(lang)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/punkt.py", line 1749, in load_lang
    lang_dir = find(f"tokenizers/punkt_tab/{lang}/")
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/data.py", line 696, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource 'punkt_tab' not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('punkt_tab')

assert "TRANS" in ref.keys()
scores["TRANS-COMET"] = score_st(hypo, ref, lang)
if "ACHAP" in ref.keys():
    scores.update(score_achap(hypo, ref, lang))
Contributor

Suggested change
scores.update(score_achap(hypo, ref, lang))
scores.update(score_achap(base_ref_path, hypo, ref, lang))

@retkowski
Author

> @retkowski I tried this but I am having a hard time. I suggested a fix to solve a first problem, but then a second problem arises:
>
> import nltk
> nltk.download('punkt_tab')

I fixed the issue within chunkseg by automatically downloading the punkt_tab sentence tokenizer if not available. Updated the dependency in the requirements.

Not entirely sure about the path issue. In my test, I provided an absolute path directly.
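The auto-download fix described above follows a common pattern: probe for the resource and download it only on a `LookupError`. A minimal sketch of that pattern, with `find` and `download` injected as stand-ins for `nltk.data.find` and `nltk.download` (the actual fix lives in chunkseg):

```python
def ensure_resource(resource_path: str, resource_name: str, find, download) -> bool:
    """Fetch a resource only if it is missing.

    `find` should raise LookupError when the resource is absent,
    mirroring nltk.data.find; `download` fetches it by name.
    Returns True if a download was triggered, False if already present.
    """
    try:
        find(resource_path)
        return False
    except LookupError:
        download(resource_name)
        return True
```

With NLTK itself, this would be called as `ensure_resource("tokenizers/punkt_tab", "punkt_tab", nltk.data.find, nltk.download)`.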

@mgaido91
Contributor

> I fixed the issue within chunkseg by automatically downloading the punkt_tab sentence tokenizer if not available. Updated the dependency in the requirements.

Cool, thanks!

> Not entirely sure about the path issue. In my test, I provided an absolute path directly.

I provided suggestions on how to fix it. If you just apply them, it will work. We cannot put absolute paths in the XML; it has to run everywhere, on any cluster or laptop.

@sarapapi
Collaborator

@retkowski can you please review Marco's suggestions?

@retkowski
Author

I incorporated Marco's suggestions and also tested the code again with relative paths.

retkowski and others added 3 commits March 20, 2026 13:57
…h reference transcript and using the latter for forced alignment
samples,
format="markdown",
src_lang="eng",
tgt_lang=lang,
Collaborator

Here, the language is a two-letter code, but it seems the tool handles three-letter language codes. How does it work?

Author

They are simply passed through to the tools that require them: src_lang to forced alignment (which expects three-letter language codes) and tgt_lang to BERTScore (which expects two-letter language codes).
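For context, the two conventions in play are ISO 639-1 (two-letter, as BERTScore expects) and ISO 639-3 (three-letter, as the forced aligner expects). A small illustrative mapping between them; the table and helper are illustrative only, not part of the PR:

```python
# ISO 639-1 (two-letter) to ISO 639-3 (three-letter) for a few languages.
ISO1_TO_ISO3 = {
    "en": "eng",
    "de": "deu",
    "it": "ita",
    "zh": "zho",
}

def to_iso3(lang: str) -> str:
    """Map a two-letter code to its three-letter equivalent.

    Three-letter codes pass through unchanged; unknown two-letter
    codes raise KeyError rather than guessing.
    """
    if len(lang) == 3:
        return lang
    return ISO1_TO_ISO3[lang]
```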

return titles, section_to_line_map


def _replace_translation_with_transcript(
Collaborator

It seems to me more of a workaround to make the chunkseg tool work in the crosslingual case. Shouldn't this instead be handled natively by the tool, with the translation or the transcript passed here along with the related source and target language information? This is also related to the comment about WER being computed for ASR while no quality score exists for ST.

Author

I thought about it, too. I refrained from doing so (for now) because in MCIF we already have dependencies such as mwerSegmenter and COMET, and it would again be a bigger change to both code bases, so I consider it out of scope for the very short term.
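The WER discussed in this thread is the standard word-level edit distance normalized by the reference length. A minimal self-contained sketch, not the implementation used in the PR:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

In practice MCIF relies on existing tooling for this; the sketch is only meant to make the metric in the discussion concrete.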

@mgaido91
Contributor

Not sure why, but this is currently breaking installation. In my environment (GPU), installing this caused a new torchaudio version to be installed, which is no longer compatible with my torch installation, breaking the env. Not sure where this comes from; maybe the chunkseg library?

@mgaido91
Contributor

Also, after fixing the install, it was failing for me with:

    raise RuntimeError(f"Couldn't find appropriate backend to handle uri {uri} and format {format}.")

I had to fix this with:

pip install soundfile sox

@mgaido91
Contributor

Note that this has to install smoothly in a fresh env on SPEECHM.
