
New subsampling algorithm ("maximum overlap") #2

@averagehat

Description

As stated in the TODO section, the current sampling method picks additional alignments even when the minimum depth has already been reached at a given reference index. For example, if your average read depth is 150 and you are subsampling to a depth of 10, the first base will have depth 10, but position 2 will be covered by those 10 reads plus potentially 10 more. Position 3 will then be covered by 10 + 10 + 10 reads, and so on.

These reads can stack for the entire length of the reads selected at position 1, so the upper depth bound of the random-subsample solution is min_depth * len(read).
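A minimal sketch of why per-position selection stacks depth up to min_depth * len(read). The function and its parameters are hypothetical stand-ins (uniform read length, a fixed number of reads starting at every index), not the tool's actual code:

```python
def naive_depth(min_depth, read_len, ref_len, reads_per_start):
    """Simulate the current behaviour: at every reference index, select
    min_depth reads starting there, ignoring coverage already contributed
    by reads selected at earlier indices."""
    depth = [0] * ref_len
    for start in range(ref_len):
        # picks min_depth new reads even if the position is already covered
        picked = min(min_depth, reads_per_start)
        for j in range(start, min(start + read_len, ref_len)):
            depth[j] += picked
    return depth

depth = naive_depth(min_depth=10, read_len=150, ref_len=600, reads_per_start=150)
print(max(depth))  # reaches min_depth * read_len = 1500
```

With average depth 150 and a target of 10, every interior position ends up covered by 10 reads from each of the 150 preceding start positions, hitting the min_depth * len(read) bound.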

An iterative solution which only picks a read when needed (and backtracks if necessary) has an upper bound of 2 * min_depth (some overlap may be unavoidable in a case like: insert diagram).
We avoid higher depth by always picking the read with the greatest overlap index (POS + len(sequence)). This allows the user to define the desired depth more precisely.
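The iterative selection could be sketched as a left-to-right sweep (backtracking omitted). Reads are represented here as hypothetical (pos, length) tuples rather than real alignments; whenever depth at the current index drops below min_depth, the available read with the greatest end index (POS + len(sequence)) is chosen:

```python
import heapq

def subsample_max_overlap(reads, min_depth, ref_len):
    """Greedy sweep: reads is a list of (pos, length) tuples. At each
    reference index with coverage below min_depth, pick the not-yet-chosen
    read covering that index with the greatest end (pos + length)."""
    by_start = sorted(reads)
    chosen = []
    depth = [0] * ref_len
    heap = []  # max-heap on end index, stored as (-end, pos, length)
    i = 0
    for pos in range(ref_len):
        # admit reads that start at or before this position
        while i < len(by_start) and by_start[i][0] <= pos:
            start, length = by_start[i]
            heapq.heappush(heap, (-(start + length), start, length))
            i += 1
        while depth[pos] < min_depth and heap:
            neg_end, start, length = heapq.heappop(heap)
            end = -neg_end
            if end <= pos:   # even the longest remaining read has ended,
                break        # so nothing left covers this position
            chosen.append((start, length))
            for j in range(start, min(end, ref_len)):
                depth[j] += 1
    return chosen, depth

chosen, depth = subsample_max_overlap(
    [(0, 6), (0, 4), (3, 7), (2, 2)], min_depth=2, ref_len=10)
```

In this toy case the sweep picks the longest reads first and never exceeds 2 * min_depth, though (as noted below) always preferring the longest read may skew which reads survive subsampling.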

NOTES:

  • Always picking the read with the greatest overlap (the longest read), and almost never picking short reads, may affect variant calling.
  • The iterative algorithm (maybe?) lends itself to brief (tall) spikes, because it always picks the greatest overlap. This minimizes the sum of all overflow (?). However, it may be possible and desirable to use an algorithm that spreads the overflow more evenly, producing "hills" rather than "spikes."
