Here are the steps I followed:
Step 0: Usage Crawling
I followed the Introduction in the source and executed one of the example bash commands from rubick/source/crawl-batch.sh:
bash crawl_workflow.sh 'org.apache.commons.compress.archivers.tar' 'org\\.apache\\.commons\\.compress\\.archivers\\.tar\\.' '^org\\.apache\\.commons\\.compress\\.archivers\\.tar\\..*$'
The above process downloaded src.json and the jars, and generated release_info.json. Then I ran the bash command from source/analyze_batch.sh:
bash analyze_workflow.sh 'org.apache.commons.compress.archivers.tar' 'org\.apache\.commons\.compress\.archivers\.tar\.' 'org/apache/commons/compress/archivers/tar/.*' 'org/apache/commons/compress/archivers/tar/.*' '^org\.apache\.commons\.compress\.archivers\.tar\..*$'
The above process unzipped the jars and generated possible_entry and run_all.sh.
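For reference, this is the quick sanity check I would use for the artifacts named above (I am assuming they all end up in the working directory, so the paths may need adjusting):

# check that the crawl/analyze outputs exist
ls -lh src.json release_info.json
ls -ld possible_entry run_all.sh
find . -name '*.jar' | wc -l   # number of jars that were downloaded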
By the way, in crawl_workflow.sh, running crawl_gen_runall/2.crawl_repo_info.py generates release_info.json, but in analyze_workflow.sh, crawl_gen_runall/4.generate_p_info.py reads github_info.json. Do these two names refer to the same file?
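A quick way to check whether they hold the same content (the file locations below are a guess; adjust the paths to wherever the workflow actually writes them) would be:

# do both files exist, and is their content identical?
ls -l release_info.json github_info.json
diff <(python3 -m json.tool release_info.json) <(python3 -m json.tool github_info.json) \
  && echo "same content" || echo "different (or one of them is missing)"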
Step 1: Usage Extraction
The run_all.sh script generated in the previous step contains several commands of the form ./run.sh -0 ex ..., which look like the commands described in Step 1: Usage Extraction in the introduction. So I ran bash run_all.sh, which invokes run.sh ex to extract the raw usage automaton from the learning examples and generate the extracted directory. This is where the issue occurred; the console output is as follows:

Also, each project folder under "extracted" is empty, like this:

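In case it helps with diagnosis, here is a minimal way to confirm which project folders came out empty and to capture the full output of a single extraction command (assuming run_all.sh keeps the ./run.sh -0 ex ... format shown above; the log file name is just an example):

# list project directories under extracted/ that contain nothing
find extracted -mindepth 1 -maxdepth 1 -type d -empty
# re-run one line from run_all.sh and keep its complete output for debugging
# (replace "..." with the arguments exactly as they appear in run_all.sh)
./run.sh -0 ex ... 2>&1 | tee extract_debug.log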
Do you know what the cause is and how to solve it?