[DataCap Application] Illumina - Phase 3 Reanalysis with DRAGEN 3.5, 3.7, 4.0, and 4.2 #58

@zzslkj

Description

Data Owner Name

1000 Genomes

Data Owner Country/Region

United States

Data Owner Industry

Life Science / Healthcare

Website

https://support.illumina.com/sequencing/sequencing_software/dragen-bio-it-platform.html

Social Media Handle

support@illumina.com

Social Media Type

Other

What is your role related to the dataset

Data Preparer

Total amount of DataCap being requested

9.6 PiB (1.2 PiB per copy × 8 replicas)

Expected size of single dataset (one copy)

1.2 PiB

Number of replicas to store

8

Weekly allocation of DataCap requested

1 PiB

On-chain address for first allocation

f1i7liy2zu7nfi5crqhpm65qlaz7o35rkw6tfnxxq

Data Type of Application

Public, Open Dataset (Research/Non-Profit)

Custom multisig

  • Use Custom Multisig

Identifier

No response

Share a brief history of your project and organization

All DRAGEN analyses were performed in the cloud using the Illumina Connected Analytics bioinformatics platform powered by Amazon Web Services (see 'Data solution empowering population genomics' for more information). The v3.7.6 and v4.2.7 datasets include results from trio small variant, de novo structural variant, and de novo copy number variant calls on 602 trio families comprising members from the 1000 Genomes Project Phase 3 dataset. Trio repeat expansion calling was included in the v3.7.6 dataset only. Joint cohort analysis was also performed on the entire 1KGP sample dataset (n=3202) for the v3.7.6, v4.0.3, and v4.2.7 reanalyses using DRAGEN GVCF Genotyper v3.8.3, v4.2.0, and v4.2.7, respectively (see 'Genotyping variants at population scale using DRAGEN gVCF Genotyper').

Is this project associated with other projects/ecosystem stakeholders?

No

If answered yes, what are the other projects/ecosystem stakeholders


Describe the data being stored onto Filecoin

BAM, SNV-vcf, SNV-gvcf, STR-vcf, STR-bam, SV-vcf, ROH-vcf, CNV-vcf, CNV-bw, metrics and other supporting files from DRAGEN v3.5.6b analyses in a public S3 bucket.

BAM, SNV-vcf, SNV-gvcf, STR-vcf, STR-bam, SV-vcf, ROH-vcf, CNV-vcf, CNV-bw, cyp2d6-tsv, metrics and other supporting files from DRAGEN v3.7.6 analyses in a public S3 bucket.

CRAM, SNV-vcf, SNV-gvcf, STR-vcf, STR-bam, SV-vcf, ROH-vcf, CNV-vcf, CNV-bw, cyp2b6-tsv, cyp2d6-tsv, gba-tsv, smn-tsv, star-allele-tsv, metrics and other supporting files from DRAGEN v4.0.3 analyses and Nirvana Annotation in a public S3 bucket.

CRAM, SNV-vcf, SNV-gvcf, STR-vcf, STR-bam, SV-vcf, ROH-vcf, CNV-vcf, CNV-bw, cyp2b6-tsv, cyp2d6-tsv, gba-tsv, smn-tsv, star-allele-tsv, hla-tsv, gvcf, json, metrics and other supporting files from DRAGEN v4.2.7 analyses and Nirvana Annotation in a public S3 bucket.

Where was the data currently stored in this dataset sourced from

AWS Cloud

If you answered "Other" in the previous question, enter the details here


If you are a data preparer, what is your location (Country/Region)

Hong Kong

If you are a data preparer, how will the data be prepared? Please include tooling used and technical details.

We developed a set of Go tools that automatically download, split, and package the data and place deals. The tooling downloads the dataset files from AWS, packages them into CAR files sized to the target sector size, and then distributes the CAR files to each SP over high-bandwidth links or by shipping hard disks. We then use Boost to make offline deals so that each SP can import and seal the CAR files.
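To illustrate the splitting step, below is a minimal sketch, not our actual internal tool, that uses only the Go standard library to greedily group downloaded files into batches small enough to fit one CAR file per 32 GiB sector. The 30 GiB payload budget is an assumed figure that leaves headroom for CAR framing and Fr32 padding; actual CAR packing and CommP calculation happen in separate tooling.

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// maxPayload is a hypothetical per-CAR payload budget, kept below the
// 32 GiB sector size to leave room for CAR framing and Fr32 padding.
const maxPayload int64 = 30 << 30 // ~30 GiB

// batchFiles walks root and greedily groups files into batches whose
// combined size stays under maxPayload; each batch becomes one CAR file.
func batchFiles(root string) ([][]string, error) {
	var batches [][]string
	var current []string
	var size int64
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		// Close out the current batch before it would overflow the budget.
		if size+info.Size() > maxPayload && len(current) > 0 {
			batches = append(batches, current)
			current, size = nil, 0
		}
		current = append(current, path)
		size += info.Size()
		return nil
	})
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches, err
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: batcher <dataset-dir>")
		os.Exit(1)
	}
	batches, err := batchFiles(os.Args[1])
	if err != nil {
		panic(err)
	}
	for i, b := range batches {
		fmt.Printf("car-%04d: %d files\n", i, len(b))
	}
}
```

Each batch is then packed into a CAR file (e.g. with go-car) and the resulting pieces are registered with Boost for offline deal-making.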

If you are not preparing the data, who will prepare the data? (Provide name and business)


Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.

No.

Please share a sample of the data

aws s3 ls --no-sign-request s3://1000genomes-dragen/         # 354.3269 TiB
aws s3 ls --no-sign-request s3://1000genomes-dragen-3.7.6/   # 342.6459 TiB
aws s3 ls --no-sign-request s3://1000genomes-dragen-v3.7.6/  # 342.6459 TiB (duplicate of s3://1000genomes-dragen-3.7.6/)
aws s3 ls --no-sign-request s3://1000genomes-dragen-v4.0.3/  # 114.2916 TiB
aws s3 ls --no-sign-request s3://1000genomes-dragen-v4-2-7/  # 69.5511 TiB
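These five totals sum to roughly 1,223 TiB ≈ 1.2 PiB, consistent with the single-copy size stated above. For reference, a bucket total can also be computed programmatically with anonymous access; the sketch below is illustrative rather than part of our tooling, and assumes a recent aws-sdk-go-v2 (where types.Object.Size is a *int64) and that the buckets are served from us-east-1.

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// Public bucket, so use anonymous credentials (the SDK equivalent of
	// the CLI's --no-sign-request). us-east-1 is an assumed region.
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithRegion("us-east-1"),
		config.WithCredentialsProvider(aws.AnonymousCredentials{}),
	)
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg)

	// Page through every object in the bucket and sum the sizes.
	paginator := s3.NewListObjectsV2Paginator(client, &s3.ListObjectsV2Input{
		Bucket: aws.String("1000genomes-dragen"),
	})
	var total int64
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(context.TODO())
		if err != nil {
			panic(err)
		}
		for _, obj := range page.Contents {
			total += aws.ToInt64(obj.Size) // Size is *int64 in recent SDK versions
		}
	}
	fmt.Printf("s3://1000genomes-dragen/ total: %.4f TiB\n", float64(total)/float64(1<<40))
}
```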

Confirm that this is a public dataset that can be retrieved by anyone on the Network

  • I confirm

If you chose not to confirm, what was the reason


What is the expected retrieval frequency for this data

Yearly

For how long do you plan to keep this dataset stored on Filecoin

2 to 3 years

In which geographies do you plan on making storage deals

Asia other than Greater China, North America, South America, Europe

How will you be distributing your data to storage providers

HTTP or FTP server

How did you find your storage providers

Partners

If you answered "Others" in the previous question, what is the tool or platform you used


Please list the provider IDs and location of the storage providers you will be working with.

f03637821 Germany
f03637813 Brazil
f03649204 Hong Kong
f03649212 Hong Kong
f03649217 Singapore
f03649227 Singapore
f03641974 Vietnam
f03641955 Germany
f03099888 Germany
f03100088 United States
f03099287 Brazil
f03098965 Singapore
f03099101 Singapore
f03099008 Vietnam

Newly added:
f03100014 Hong Kong (ACTIVE)
f03099777 Germany (ACTIVE)

How do you plan to make deals to your storage providers

Boost client

If you answered "Others/custom tool" in the previous question, enter the details here


Can you confirm that you will follow the Fil+ guideline

Yes
