In your documentation page, you have:

```
metabuli classifiedRefiner <i:read-by-read classification> <i:DBDIR> [options]
- read-by-read classification : The JobID_classifications.tsv file generated by the `classify` step.
- DBDIR : The same DBDIR used in the `classify` step.
```
However, I think you meant the second input to be `<i:TAXDUMP>`, i.e. the directory of taxonomy dump files that was used to create the database.
When I run `metabuli classifiedRefiner` with the taxdump files for my custom database, it appears to finish, but it always reports that a file with the output name already exists (when there isn't one):
```
Metabuli Version (commit): 1.1.1
Remove unclassified reads false
Exclude taxId as well as its children
Select taxId as well as its children
Select columns with number, (7:full lineage, generated if absent)
Make report of refined classification file false
Adjust classification to the specified rank species
0: without higher rank, 1: with higher rank, 2: separate file for higher rank classification 0
Threads 24
Min. sequence similarity score 0
Loading nodes file ... Done, got 2729221 nodes
Loading merged file ... Done, added 96668 merged nodes.
Loading names file ... Done
Init computeSparseTable ...Done
Write refined classification result to:
results/btk_all_classifications_refined.tsv
results/btk_all_classifications_refined.tsv is already exists.
Time for processing: 0h 0m 3s 287ms
```
This blocks any refined output data from saving.
Note: I'm using a conda install on WSL-2
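For what it's worth, the symptom looks consistent with an inverted output-existence check: the refiner refuses to write as if the file were present, even though the path is fresh. A minimal Python sketch of the check I would *expect* (purely hypothetical, not Metabuli's actual code; `safe_write` is a made-up helper name):

```python
import os

def safe_write(path: str, text: str) -> bool:
    """Write `text` to `path` unless the file already exists.

    Returns True on a successful write, False if the write was skipped.
    """
    if os.path.exists(path):  # expected: refuse only when the file truly exists
        print(f"{path} already exists.")
        return False
    with open(path, "w") as fh:
        fh.write(text)
    return True

# The behavior I'm seeing is as if this condition were inverted
# (i.e. `not os.path.exists(path)`): a nonexistent output path triggers
# the "already exists" message and nothing is ever written.
```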