Conversation
sarapapi
left a comment
Looks good, just minor changes. Additionally, have you manually tested it?
Updated the docstring and yes, I also tested the code using a small sample XML.
@sarapapi Passes the checks now.
Thanks, there are still comments pending |
@retkowski could you please address the remaining requested changes by @mgaido91?
@sarapapi I already added a new commit that incorporates the requested changes. However, I replied to two of his comments because I think further confirmation/clarification would be good.
@retkowski I do not see your comments, could you check please?
mgaido91
left a comment
@retkowski I tried this but I am having a hard time. I suggested a fix for a first problem, but then a second problem arises:
```
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/bin/mcif_eval", line 7, in <module>
    sys.exit(cli_script())
  File "/Users/mgaido/github/mcif/src/mcif/evaluation.py", line 446, in cli_script
    scores = main(
  File "/Users/mgaido/github/mcif/src/mcif/evaluation.py", line 410, in main
    scores.update(score_achap(base_ref_path, hypo, ref, lang))
  File "/Users/mgaido/github/mcif/src/mcif/evaluation.py", line 350, in score_achap
    results = evaluate_batch(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/evaluate.py", line 210, in evaluate_batch
    result = evaluate(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/evaluate.py", line 86, in evaluate
    hyp_timestamps = _timestamps_from_transcript(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/evaluate.py", line 311, in _timestamps_from_transcript
    result = parse_transcript(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/parsers.py", line 406, in parse_transcript
    titles, sections = parse_markdown(text)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/parsers.py", line 228, in parse_markdown
    sections.append(_tokenize_section(section_text))
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/chunkseg/parsers.py", line 121, in _tokenize_section
    return sent_tokenize(text)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/__init__.py", line 119, in sent_tokenize
    tokenizer = _get_punkt_tokenizer(language)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/__init__.py", line 105, in _get_punkt_tokenizer
    return PunktTokenizer(language)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/punkt.py", line 1744, in __init__
    self.load_lang(lang)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/tokenize/punkt.py", line 1749, in load_lang
    lang_dir = find(f"tokenizers/punkt_tab/{lang}/")
  File "/opt/homebrew/Caskroom/miniforge/base/envs/mcif/lib/python3.10/site-packages/nltk/data.py", line 696, in find
    raise LookupError(resource_not_found)
LookupError:
**********************************************************************
  Resource 'punkt_tab' not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('punkt_tab')
```
src/mcif/evaluation.py (outdated)

```python
assert "TRANS" in ref.keys()
scores["TRANS-COMET"] = score_st(hypo, ref, lang)
if "ACHAP" in ref.keys():
    scores.update(score_achap(hypo, ref, lang))
```

Suggested change:

```diff
-    scores.update(score_achap(hypo, ref, lang))
+    scores.update(score_achap(base_ref_path, hypo, ref, lang))
```
I fixed the issue within chunkseg by automatically downloading the punkt_tab sentence tokenizer if it is not available, and updated the dependency in the requirements. I am not entirely sure about the path issue; in my test, I provided an absolute path directly.
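A minimal sketch of the "download on first lookup failure" pattern described above. `ensure_resource` and its parameters are hypothetical names, not chunkseg's actual code; the NLTK-specific calls are shown only in the docstring:

```python
def ensure_resource(find, download, resource_path, package):
    """Make sure `resource_path` is resolvable via `find`, triggering
    `download(package)` once if the first lookup raises LookupError.

    With NLTK this would be invoked roughly as (not executed here):
        ensure_resource(nltk.data.find, nltk.download,
                        "tokenizers/punkt_tab/english/", "punkt_tab")
    """
    try:
        find(resource_path)
    except LookupError:
        download(package)
        find(resource_path)  # re-raises if the download did not help
```

Keeping the lookup/download callables as parameters makes the retry logic testable without touching the network.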
Cool, thanks!
I provided the suggestions on how to fix it; if you just apply them, it will work. We cannot put absolute paths in the XML: those have to run everywhere, on any cluster and laptop.
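One common way to make such references portable, as this comment demands, is to resolve paths stored in the XML against the XML file's own location. The helper below is illustrative only (its name and signature are not from the PR):

```python
from pathlib import Path

def resolve_ref_path(xml_path, stored_path):
    """Hypothetical helper: resolve a path stored in the reference XML
    relative to the XML file's directory, so the same XML works on any
    cluster or laptop. Absolute paths are left untouched."""
    p = Path(stored_path)
    return p if p.is_absolute() else Path(xml_path).parent / p
```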
@retkowski can you please review Marco's suggestions?
I incorporated Marco's suggestions and also tested the code again with relative path usage.
…h reference transcript and using the latter for forced alignment
Co-authored-by: sarapapi <57095209+sarapapi@users.noreply.github.com>
```python
samples,
format="markdown",
src_lang="eng",
tgt_lang=lang,
```
Here, the language is a two-letter code, but it seems the tool handles three-letter language codes. How does it work?
They are simply passed through to the tools that require them: src_lang to forced alignment (which expects three-letter language codes) and tgt_lang to BERTScore (which expects two-letter language codes).
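The two conventions mentioned in this exchange can be bridged with a small conversion table. This is purely illustrative (the mapping entries are examples, not MCIF's actual language list):

```python
# Illustrative ISO 639-1 (two-letter) to ISO 639-3 (three-letter) mapping;
# only a few example languages are listed here.
ISO1_TO_ISO3 = {"en": "eng", "de": "deu", "it": "ita", "zh": "zho"}

def to_iso3(lang: str) -> str:
    """Return a three-letter code suitable for forced alignment, accepting
    either convention; codes not in the table pass through unchanged."""
    return ISO1_TO_ISO3.get(lang, lang)
```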
```python
    return titles, section_to_line_map


def _replace_translation_with_transcript(
```
It seems to me more of a workaround for making the chunkseg tool work in the crosslingual case... Shouldn't this instead be handled natively by the tool, so that we just pass the translation or the transcript here, along with the related source and target language information? This is also related to the comment about WER being computed for ASR while there is no quality score for ST.
I thought about it too. I refrained from doing so (for now) because in MCIF we already have dependencies such as mwerSegmenter and COMET, and it would also be a bigger change to both code bases, so I consider it out of scope for the very short term.
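The workaround under discussion (also reflected in the commit message about "replacing translation with reference transcript and using the latter for forced alignment") might look roughly like this. The function name mirrors `_replace_translation_with_transcript` from the diff, but the body is a guess, not the PR's actual implementation:

```python
import re

def replace_translation_with_transcript(hyp_markdown, transcript_sections):
    """Sketch: keep the hypothesis's chapter titles and boundaries, but
    substitute each chapter body with the corresponding source-language
    transcript section, since forced alignment needs text that matches
    the audio. Assumes one transcript section per hypothesis chapter."""
    titles = re.findall(r"^# (.+)$", hyp_markdown, flags=re.MULTILINE)
    return "\n".join(
        f"# {title}\n{body}"
        for title, body in zip(titles, transcript_sections)
    )
```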
Not sure why, but this is currently breaking installation. In my environment (GPU), installing this caused a new torchaudio version to be installed, which is no longer compatible with my torch installation, breaking the env. Not sure where this comes from; maybe the chunkseg library?
Also, after fixing the install, it was failing for me with: I had to fix this with:
Note that this has to install smoothly in a fresh env on SPEECHM.
This PR adds the ACHAP task and its metrics to MCIF. The selected metrics are collar F1 for segmentation quality and BERTScore for title quality; optionally, we can also get the WER of the transcript. Forced alignment is handled automatically using torchaudio. Important: Markdown format is assumed for the structured transcript (i.e., "\n# Title\n" as the separator between chapters).
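The Markdown assumption stated in the description can be made concrete with a small parsing sketch. This is an assumed helper for illustration, not the PR's actual parser (which lives in chunkseg's `parse_markdown`):

```python
import re

def split_chapters(md_text):
    """Split a structured transcript into (title, text) pairs, assuming
    '# Title' lines separate chapters as stated in the PR description."""
    parts = re.split(r"^# (.+)$", md_text.strip(), flags=re.MULTILINE)
    pairs = iter(parts[1:])  # drop any preamble before the first title
    return [(t.strip(), body.strip()) for t, body in zip(pairs, pairs)]
```

Splitting on the captured heading keeps each title alongside the body that follows it, which is the structure the collar F1 and BERTScore metrics consume.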