
Use DataFolder for handling spilled data #358

@CGodiksen

Description

Currently, we use read_record_batch_from_apache_parquet_file() and write_record_batch_to_apache_parquet_file() to handle spilled data in the uncompressed data manager. Since we now have the DataFolder type, which is responsible for I/O, we should use it for spilled data as well.

One option is to create a separate database schema for uncompressed data and, within that schema, a Delta Lake table for each time series table that can hold its spilled uncompressed data.
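The option above could be sketched as follows. This is a minimal illustration, not ModelarDB's actual API: the `DataFolder` methods, the `uncompressed` schema name, and the path layout are all assumptions, and an in-memory map stands in for the Delta Lake tables that a real implementation would write to.

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the DataFolder type. A real implementation
/// would read and write Delta Lake tables instead of an in-memory map.
struct DataFolder {
    tables: HashMap<String, Vec<Vec<u8>>>,
}

impl DataFolder {
    fn new() -> Self {
        Self { tables: HashMap::new() }
    }

    /// Assumed layout: one table per time series table, placed in a
    /// separate schema reserved for uncompressed (spilled) data.
    fn spill_table_path(time_series_table: &str) -> String {
        format!("uncompressed.{time_series_table}")
    }

    /// Spill a serialized record batch to the time series table's
    /// table in the uncompressed schema.
    fn write_spilled_batch(&mut self, time_series_table: &str, batch: Vec<u8>) {
        let path = Self::spill_table_path(time_series_table);
        self.tables.entry(path).or_default().push(batch);
    }

    /// Read back all spilled record batches for a time series table.
    fn read_spilled_batches(&self, time_series_table: &str) -> &[Vec<u8>] {
        let path = Self::spill_table_path(time_series_table);
        self.tables.get(&path).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let mut folder = DataFolder::new();
    folder.write_spilled_batch("wind_turbine", vec![1, 2, 3]);
    let batches = folder.read_spilled_batches("wind_turbine");
    println!(
        "{} batch(es) under {}",
        batches.len(),
        DataFolder::spill_table_path("wind_turbine")
    );
}
```

Routing all spill I/O through one type would also keep the on-disk layout in a single place, instead of being split between the uncompressed data manager and the Apache Parquet helper functions.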

Metadata

    Labels

    enhancement (New feature or request)
