# Atio

🛡️ Safe Atomic File Writing Library for Python


## 🎯 Overview

Atio is a Python library for safe, atomic file writing that prevents data loss. By writing to a temporary file and only replacing the target once the write succeeds, it preserves existing data even when an error occurs mid-write, and it supports a wide range of data formats and database connections.

## ✨ Why Atio?

- 🔒 **Zero Data Loss**: Atomic operations guarantee file integrity
- ⚡ **High Performance**: Minimal overhead with maximum safety
- 🔄 **Auto Rollback**: Automatic recovery when errors occur
- 📊 **Universal Support**: Works with Pandas, Polars, NumPy, and more
- 🎯 **Simple API**: Drop-in replacement for existing code

## 🚀 30-Second Quick Start

```bash
pip install atio
```

```python
import atio
import pandas as pd

# Create sample data
df = pd.DataFrame({
    "name": ["Alice", "Bob", "Charlie"],
    "age": [25, 30, 35],
    "city": ["Seoul", "Busan", "Incheon"]
})

# Safe atomic writing
atio.write(df, "users.parquet", format="parquet")
# ✅ File saved safely with atomic operation!
```

## 📊 Supported Formats & Libraries

| Format   | Pandas | Polars | NumPy | Description                |
|----------|--------|--------|-------|----------------------------|
| CSV      | ✅     | ✅     | ✅    | Comma-separated values     |
| Parquet  | ✅     | ✅     | ❌    | Columnar storage format    |
| Excel    | ✅     | ✅     | ❌    | Microsoft Excel files      |
| JSON     | ✅     | ✅     | ❌    | JavaScript Object Notation |
| SQL      | ✅     | ❌     | ❌    | SQL database storage       |
| Database | ❌     | ✅     | ❌    | Direct database connection |
| NPY/NPZ  | ❌     | ❌     | ✅    | NumPy binary formats       |
| Pickle   | ✅     | ❌     | ❌    | Python serialization       |
| HTML     | ✅     | ❌     | ❌    | HTML table format          |

πŸ—οΈ Architecture

Atomic Writing Process

graph LR
    A[Data Object] --> B[Temp File]
    B --> C[Validation]
    C --> D[Atomic Replace]
    D --> E[Success Flag]
    
    C -->|Error| F[Rollback]
    F --> G[Original File Preserved]
    
    style A fill:#e1f5fe
    style E fill:#c8e6c9
    style F fill:#ffcdd2
    style G fill:#c8e6c9
Loading

### Key Components

- 🛡️ **Atomic Operations**: Temporary file → Validation → Atomic replacement
- 🔄 **Rollback Mechanism**: Automatic recovery on failure
- 📈 **Progress Monitoring**: Real-time progress for large files
- 📋 **Version Management**: Snapshot-based data versioning
- 🧹 **Auto Cleanup**: Automatic cleanup of temporary files
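The temp file → validation → atomic replacement sequence is a standard POSIX pattern. A minimal stdlib sketch of that pattern, assuming a same-filesystem temp file and `os.replace` for the swap (illustrative only, not Atio's actual internals):

```python
import os
import tempfile

def atomic_write_text(path, data):
    """Write text to `path` via a temp file and an atomic rename."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the SAME directory as the target:
    # os.replace is only atomic within one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit the disk
        os.replace(tmp_path, path)  # atomic swap; readers never see a partial file
    except BaseException:
        # Rollback: discard the temp file; the original target is untouched.
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise
```

Because the destination file is never modified before `os.replace`, a failure at any earlier step leaves the original intact, which is what makes rollback essentially free.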

## 💡 Real-World Use Cases

### 🔥 Data Pipeline Protection

```python
# ETL pipeline with automatic rollback
try:
    atio.write(processed_data, "final_results.parquet", format="parquet")
    print("✅ Pipeline completed successfully")
except Exception as e:
    print("❌ Pipeline failed, but original data is safe")
    # Original file remains untouched
```

### 🧪 Machine Learning Experiments

```python
# Version-controlled experiment results
atio.write_snapshot(model_results, "experiment_v1", mode="overwrite")
atio.write_snapshot(improved_results, "experiment_v1", mode="append")

# Rollback to previous version if needed
atio.rollback("experiment_v1", version_id=1)
```

### 📊 Large Data Processing

```python
# Progress monitoring for large datasets
atio.write(large_df, "big_data.parquet",
           format="parquet",
           show_progress=True)
# Shows: ⠋ Writing big_data.parquet... [ 45.2 MB | 12.3 MB/s | 00:15 ]
```

## 🎯 Core Features

### 1. Atomic File Writing

```python
# Safe writing with automatic rollback
atio.write(df, "data.parquet", format="parquet")
# Creates: data.parquet + .data.parquet._SUCCESS
```
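The `._SUCCESS` marker makes completed writes easy to detect downstream. A hedged sketch of such a check (the helper name is an assumption; only the marker naming convention comes from the example above):

```python
import os

def is_write_complete(path):
    """True if `path` exists alongside its `.<name>._SUCCESS` marker."""
    directory, name = os.path.split(os.path.abspath(path))
    marker = os.path.join(directory, f".{name}._SUCCESS")
    return os.path.exists(path) and os.path.exists(marker)
```

A downstream job can skip or retry any file whose marker is absent, rather than reading a possibly half-written artifact.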

### 2. Database Integration

```python
# Direct database storage
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:pass@localhost/db')
atio.write(df, format="sql", name="users", con=engine, if_exists="replace")
```

### 3. Version Management

```python
# Snapshot-based versioning
atio.write_snapshot(df, "my_table", mode="overwrite")   # v1
atio.write_snapshot(new_df, "my_table", mode="append")  # v2

# Read specific version
df_v1 = atio.read_table("my_table", version=1)
```

### 4. Progress Monitoring

```python
# Real-time progress for large files
atio.write(large_df, "data.parquet",
           format="parquet",
           show_progress=True,
           verbose=True)
```

## 🔧 Advanced Usage

### Multi-Format Support

```python
import polars as pl
import numpy as np

# Polars DataFrame
pl_df = pl.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
atio.write(pl_df, "data.parquet", format="parquet")

# NumPy arrays
arr = np.random.randn(1000, 100)
atio.write(arr, "array.npy", format="npy")

# Multiple arrays
atio.write({'arr1': arr, 'arr2': arr * 2}, "arrays.npz", format="npz")
```

### Error Handling & Recovery

```python
# Automatic rollback on failure
try:
    atio.write(df, "data.parquet", format="parquet")
except Exception as e:
    print(f"Write failed: {e}")
    # Original file is automatically preserved
```

### Performance Monitoring

```python
# Detailed performance analysis
atio.write(df, "data.parquet", format="parquet", verbose=True)
# Output:
# [INFO] Temporary directory created: /tmp/tmp12345
# [INFO] Writer to use: to_parquet (format: parquet)
# [INFO] ✅ File writing completed (total time: 0.1234s)
```
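The `Writer to use: to_parquet` log line suggests dispatch by method name on the data object. A rough sketch of that idea (the table and function names here are assumptions for illustration, not Atio's internals):

```python
# Map each format to the method name a pandas/polars-style object exposes.
FORMAT_WRITERS = {
    "parquet": "to_parquet",
    "csv": "to_csv",
    "json": "to_json",
}

def resolve_writer(obj, fmt):
    """Return the bound writer method on `obj` for `fmt`, or raise ValueError."""
    method_name = FORMAT_WRITERS.get(fmt)
    if method_name is None or not hasattr(obj, method_name):
        raise ValueError(f"No writer for format: {fmt!r}")
    return getattr(obj, method_name)
```

Dispatching on the object's own methods is what lets one `write()` call serve Pandas, Polars, and NumPy alike, as long as each exposes a writer for the requested format.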

πŸ› οΈ Installation

Basic Installation

pip install atio

With Optional Dependencies

# For Excel support
pip install atio[excel]

# For database support
pip install atio[database]

# For all features
pip install atio[all]

Development Installation

git clone https://github.com/seojaeohcode/atio.git
cd atio
pip install -e .

## 📚 Documentation & Examples

### 🎯 Examples

#### 📝 Basic Usage - Simple file operations

```python
import atio
import pandas as pd

# Create sample data
df = pd.DataFrame({
    "name": ["Alice", "Bob", "Charlie"],
    "age": [25, 30, 35],
    "city": ["Seoul", "Busan", "Incheon"]
})

# Safe atomic writing
atio.write(df, "users.parquet", format="parquet")
print("✅ File saved safely!")

# Read back to verify
df_read = pd.read_parquet("users.parquet")
print(df_read)
```

#### 📊 Progress Monitoring - Large file handling

```python
import atio
import pandas as pd
import numpy as np

# Create large dataset
large_df = pd.DataFrame(np.random.randn(200000, 5), columns=list("ABCDE"))

# Save with progress monitoring
atio.write(large_df, "large_data.parquet",
           format="parquet",
           show_progress=True)
# Shows: ⠋ Writing large_data.parquet... [ 45.2 MB | 12.3 MB/s | 00:15 ]
```

#### 📋 Snapshot Management - Version control

```python
import atio
import pandas as pd

# Version 1: Initial data
df_v1 = pd.DataFrame({"id": [1, 2, 3], "value": ["A", "B", "C"]})
atio.write_snapshot(df_v1, "my_table", mode="overwrite")

# Version 2: Append new data
df_v2 = pd.DataFrame({"score": [95, 87, 92]})
atio.write_snapshot(df_v2, "my_table", mode="append")

# Read specific version
df_latest = atio.read_table("my_table")         # Latest version
df_v1 = atio.read_table("my_table", version=1)  # Version 1
```
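Conceptually, snapshot versioning like the above can be pictured as numbered files under a table directory, with "latest" resolved as the highest number. A toy stdlib sketch under that assumption (the layout and JSON encoding are illustrative, not Atio's on-disk format):

```python
import json
import os

def toy_write_snapshot(rows, table, base="."):
    """Store `rows` (a list of dicts) as the next numbered version."""
    table_dir = os.path.join(base, table)
    os.makedirs(table_dir, exist_ok=True)
    versions = [int(f[1:-5]) for f in os.listdir(table_dir)
                if f.startswith("v") and f.endswith(".json")]
    version = max(versions, default=0) + 1
    with open(os.path.join(table_dir, f"v{version}.json"), "w") as f:
        json.dump(rows, f)
    return version

def toy_read_table(table, version=None, base="."):
    """Read a specific version, or the latest when `version` is None."""
    table_dir = os.path.join(base, table)
    if version is None:
        version = max(int(f[1:-5]) for f in os.listdir(table_dir)
                      if f.startswith("v") and f.endswith(".json"))
    with open(os.path.join(table_dir, f"v{version}.json")) as f:
        return json.load(f)
```

Because every version is its own immutable file, rolling back is just reading (or re-pointing to) an earlier number; no existing data is ever rewritten.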

#### ⚡ Performance Testing - Benchmarking

```python
import atio
import numpy as np
import pandas as pd
import time

# Performance comparison
df = pd.DataFrame(np.random.randn(100000, 10))

# Standard pandas
start = time.time()
df.to_parquet("standard.parquet")
pandas_time = time.time() - start

# Atio with safety
start = time.time()
atio.write(df, "safe.parquet", format="parquet", verbose=True)
atio_time = time.time() - start

print(f"Pandas: {pandas_time:.3f}s")
print(f"Atio: {atio_time:.3f}s")
print(f"Safety overhead: {((atio_time / pandas_time - 1) * 100):.1f}%")
```

## 🧪 Test Scenarios

### ⌨️ Keyboard Interrupt - Ctrl+C safety

```python
# test_interrupt.py
import atio
import pandas as pd
import numpy as np

print("Creating large dataset...")
df = pd.DataFrame(np.random.randn(1000000, 10))

print("Starting write operation...")
print("Press Ctrl+C to test interrupt safety!")

try:
    atio.write(df, "test_interrupt.parquet",
               format="parquet",
               show_progress=True)
    print("✅ Write completed successfully!")
except KeyboardInterrupt:
    print("❌ Interrupted by user!")
    print("🔍 Checking file safety...")
    import os
    if os.path.exists("test_interrupt.parquet"):
        print("⚠️ File exists but may be corrupted")
    else:
        print("✅ No corrupted file left behind!")
```

### 💾 Out of Memory - Memory failure handling

```python
# test_oom.py
import atio
import pandas as pd
import numpy as np

def simulate_oom():
    print("Creating extremely large dataset...")
    # This will likely cause OOM
    huge_df = pd.DataFrame(np.random.randn(10000000, 100))

    print("Attempting to save...")
    try:
        atio.write(huge_df, "huge_data.parquet", format="parquet")
        print("✅ Successfully saved!")
    except MemoryError:
        print("❌ Out of Memory error!")
        print("✅ But original file is safe!")
    except Exception as e:
        print(f"❌ Error: {e}")
        print("✅ Atio protected your data!")

# Run the test
simulate_oom()
```

### 🚀 CI/CD Pipeline - Automated deployment safety

```python
# ci_pipeline.py
import atio
import pandas as pd
import os

def deploy_artifacts():
    """Simulate CI/CD pipeline deployment."""

    # Generate deployment artifacts
    config = pd.DataFrame({
        "service": ["api", "web", "db"],
        "version": ["v1.2.3", "v1.2.3", "v1.2.3"],
        "status": ["ready", "ready", "ready"]
    })

    metrics = pd.DataFrame({
        "metric": ["cpu", "memory", "disk"],
        "value": [75.5, 68.2, 45.1],
        "unit": ["%", "%", "%"]
    })

    print("🚀 Starting deployment...")

    try:
        # Atomic deployment - either all succeed or all fail
        atio.write(config, "deployment_config.json", format="json")
        atio.write(metrics, "deployment_metrics.parquet", format="parquet")

        # Create success marker
        atio.write(pd.DataFrame({"status": ["deployed"]}),
                   "deployment_success.parquet", format="parquet")

        print("✅ Deployment completed successfully!")
        return True

    except Exception as e:
        print(f"❌ Deployment failed: {e}")
        print("🔄 Rolling back...")

        # Clean up any partial files
        for file in ["deployment_config.json", "deployment_metrics.parquet"]:
            if os.path.exists(file):
                os.remove(file)

        print("✅ Rollback completed - system is clean!")
        return False

# Test the pipeline
deploy_artifacts()
```

πŸ† Why Choose Atio?

βœ… Data Safety First

  • Zero data loss even during system failures
  • Automatic rollback on any error
  • File integrity guaranteed by atomic operations

⚑ Performance Optimized

  • Minimal overhead (1.1-1.2x vs native libraries)
  • Progress monitoring for large files
  • Memory efficient processing

πŸ”§ Developer Friendly

  • Drop-in replacement for existing code
  • Simple API with powerful features
  • Comprehensive documentation and examples

🌐 Universal Compatibility

  • Multiple data formats (CSV, Parquet, Excel, JSON, etc.)
  • Multiple libraries (Pandas, Polars, NumPy)
  • Database integration (SQL, NoSQL)

## 📄 License

This project is distributed under the Apache 2.0 License. See the LICENSE file for details.

🛡️ **Atio** - Because your data deserves to be safe
