This bachelor thesis presents a secure, tamper-evident, performant, and modular logging system for GDPR compliance. Its impact lies in:
- Enabling verifiable audit trails with minimal integration effort
- Supporting GDPR accountability with high performance
- Laying the groundwork for future improvements (e.g. key management, export)
For an in-depth explanation of the design, implementation, and evaluation, please refer to the full bachelor thesis or the accompanying presentation slides.
Key features include:
- Asynchronous batch logging to minimize client-side latency.
- Lock-free, multi-threaded architecture for high concurrent throughput.
- Compression before encryption to reduce I/O overhead and storage costs.
- Authenticated encryption (AES-GCM) to ensure confidentiality and integrity.
- Immutable, append-only storage for compliance and auditability.
- Future-proof design prepared for secure export and verification support.
To build the system, you will need:
- C++17 compatible compiler (GCC 9+ or Clang 10+)
- CMake 3.15 or higher
- Git (for submodule management)
Make sure the following libraries are available on your system:
- OpenSSL - For cryptographic operations (AES-GCM encryption)
- ZLIB - For compression functionality
- Google Test (GTest) - For running unit and integration tests
- Clone the repository with submodules:

  ```sh
  git clone --recursive <repository-url>
  ```

- If not cloned with `--recursive`, initialize submodules manually:

  ```sh
  git submodule update --init --recursive
  ```

- Configure and build:

  ```sh
  mkdir build
  cd build
  cmake ..
  make -j$(nproc)
  ```
A simple usage example is provided in `/examples/main.cpp` that demonstrates how to integrate and use the logging system:

```sh
# Run the usage example
./logging_example
```

```sh
# Run all tests
ctest

# Or run specific tests
./test_<component>
```

A reproducible environment is provided using Nix:

```sh
nix-shell
```

- Log Entry Submission: When a database proxy intercepts a request to the underlying database involving personal data, it generates a structured log entry containing metadata such as operation type, key identifier, and timestamp. This entry is submitted to the logging API.
- Enqueuing: Log entries are immediately enqueued into a thread-safe buffer, allowing the calling process to proceed without blocking on disk I/O or encryption tasks.
- Batch Processing: Dedicated writer threads continuously monitor the queue, dequeueing entries in bulk for optimized batch processing. Batched entries undergo serialization, compression and authenticated encryption (AES-GCM) for both confidentiality and integrity.
- Persistent Storage: Encrypted batches are concurrently written to append-only segment files. When a segment reaches its configured size limit, a new segment is automatically created.
- Export and Verification (planned): Closed segments can be exported for audit purposes. The export process involves decryption, decompression, and verification of batch-level integrity using authentication tags and cryptographic chaining.
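As a rough sketch of the batch-processing stage, the following shows how a dequeued batch might be framed into a single buffer before compression and encryption. The `LogEntry` fields and the length-prefixed layout are illustrative assumptions, not the thesis' actual wire format; the zlib and AES-GCM stages are only indicated in comments, since they depend on external libraries.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical log entry; field names are illustrative, not the real schema.
struct LogEntry {
    std::string operation;   // e.g. "UPDATE"
    std::string key_id;      // identifier of the personal-data record
    uint64_t    timestamp;   // epoch milliseconds
};

// Serialize a batch into one length-prefixed byte buffer.
// In the real pipeline this buffer would then be compressed (zlib)
// and sealed with AES-GCM before being written to a segment file.
std::vector<uint8_t> serialize_batch(const std::vector<LogEntry>& batch) {
    std::vector<uint8_t> out;
    auto put_u32 = [&out](uint32_t v) {
        for (int i = 0; i < 4; ++i) out.push_back((v >> (8 * i)) & 0xFF);
    };
    auto put_str = [&](const std::string& s) {
        put_u32(static_cast<uint32_t>(s.size()));
        out.insert(out.end(), s.begin(), s.end());
    };
    put_u32(static_cast<uint32_t>(batch.size()));   // entry count header
    for (const auto& e : batch) {
        put_str(e.operation);
        put_str(e.key_id);
        for (int i = 0; i < 8; ++i) out.push_back((e.timestamp >> (8 * i)) & 0xFF);
    }
    return out;
}
```

In the real pipeline, the returned buffer is compressed and then encrypted; the AES-GCM authentication tag over the batch is what later enables the batch-level integrity verification described in the export step.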
The buffer queue is a lock-free, high-throughput structure composed of multiple single-producer, multi-consumer (SPMC) sub-queues. Each producer thread is assigned its own sub-queue, eliminating contention and maximizing cache locality. Writer threads use round-robin scanning with consumer tokens to fairly and efficiently drain entries. The queue supports both blocking and batch-based enqueue/dequeue operations, enabling smooth operation under load and predictable performance in concurrent environments. This component is built upon moodycamel's ConcurrentQueue, a well-known C++ library designed for high-performance multi-threaded scenarios, adapted to meet this system's blocking-enqueue requirements.
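The blocking, batch-oriented contract described above can be illustrated with a deliberately simplified stand-in. The real system uses moodycamel's ConcurrentQueue with per-producer sub-queues and consumer tokens; this mutex-and-condition-variable version only mirrors the enqueue/dequeue interface, not the lock-free internals.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

// Mutex-based stand-in for the lock-free buffer: same blocking,
// batch-oriented interface, far simpler internals.
template <typename T>
class BlockingBatchQueue {
public:
    explicit BlockingBatchQueue(std::size_t capacity) : capacity_(capacity) {}

    // Block until there is room for the whole batch, then enqueue it.
    void enqueue_batch(std::vector<T> batch) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() + batch.size() <= capacity_; });
        for (auto& e : batch) q_.push_back(std::move(e));
        not_empty_.notify_all();
    }

    // Block until at least one entry is available, then drain up to max_batch.
    std::vector<T> dequeue_batch(std::size_t max_batch) {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        std::vector<T> out;
        while (!q_.empty() && out.size() < max_batch) {
            out.push_back(std::move(q_.front()));
            q_.pop_front();
        }
        not_full_.notify_all();
        return out;
    }

private:
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
    std::deque<T> q_;
    std::size_t capacity_;
};
```

The batch operations are what keep per-entry overhead low: producers pay the synchronization cost once per batch rather than once per entry, which is the same reason the real system exposes bulk enqueue/dequeue.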
Writer threads asynchronously consume entries from the buffer, group them by destination, and apply a multi-stage processing pipeline: serialization, compression, authenticated encryption (AES-GCM), and persistent write. Each writer operates independently and coordinates concurrent access to log files using atomic file offset reservations, thus minimizing synchronization overhead.
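The atomic file offset reservation mentioned above can be sketched in a few lines: each writer claims an exclusive byte range with a single `fetch_add` and can then write into it (e.g. via `pwrite(2)`) without holding any lock. The struct and method names here are illustrative, not the thesis' actual API.

```cpp
#include <atomic>
#include <cstdint>

// Each open segment file carries an atomic write cursor. A writer
// reserves a byte range with one fetch_add; the range [offset,
// offset + len) then belongs exclusively to that writer.
struct SegmentCursor {
    std::atomic<uint64_t> next_offset{0};

    // Returns the start offset of an exclusively owned byte range.
    uint64_t reserve(uint64_t len) {
        return next_offset.fetch_add(len, std::memory_order_relaxed);
    }
};
```

Because `fetch_add` is a single atomic read-modify-write, two writers can never receive overlapping ranges, which is what allows concurrent appends to the same file without a mutex.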
The segmented storage component provides append-only, immutable log files with support for concurrent writers. Files are rotated once a configurable size threshold is reached, and access is optimized via an LRU-based file descriptor cache. Threads reserve byte ranges atomically before writing, ensuring data consistency without locking. This design supports scalable audit logging while balancing durability, performance, and resource usage.
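A minimal sketch of the rotation rule, under the assumption that a batch which would push a segment past the threshold triggers rotation; the actual policy and file-naming scheme may differ.

```cpp
#include <cstdint>
#include <string>

// Hypothetical rotation rule: a batch that would cross the size
// threshold closes the current segment and opens the next one.
struct SegmentedLog {
    uint64_t threshold;          // configured segment size limit, in bytes
    uint64_t segment_index = 0;
    uint64_t used = 0;           // bytes placed in the current segment

    // Returns the file the batch should go to, rotating if needed.
    std::string place(uint64_t batch_bytes) {
        if (used > 0 && used + batch_bytes > threshold) {
            ++segment_index;     // rotate: previous file becomes immutable
            used = 0;
        }
        used += batch_bytes;
        return "segment-" + std::to_string(segment_index) + ".log";
    }
};
```

In the real component this check has to be combined with the atomic offset reservation (e.g. via compare-and-swap on the cursor) so that concurrent writers agree on when a segment closes; the sketch is single-threaded for clarity.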
To evaluate performance under realistic heavy-load conditions, the system was benchmarked on the following hardware:
- CPU: 2× Intel Xeon Gold 6236 (32 cores, 64 threads)
- Memory: 320 GiB DDR4-3200 ECC
- Storage: Intel S4510 SSD (960 GB)
- OS: NixOS 24.11, ZFS filesystem
- Execution Model: NUMA-optimized (pinned to a single node)
To evaluate parallel scalability, a proportional-load benchmark was conducted where the number of producer and writer threads was scaled together from 1 to 16, resulting in an input data volume that grew linearly with thread count.
- Thread scaling: 1–16 producers and 1–16 writers
- Entries per Producer: 2,000,000
- Entry size: ~4 KiB
- Total data at 16× scale: ~125 GiB
- Writer Batch Size: 2048 entries
- Producer Batch Size: 4096 entries
- Queue Capacity: 2,000,000 entries
- Encryption: Enabled
- Compression: Level 4 (balanced-fast)
The system was executed on a single NUMA node with 16 physical cores. At 16 total threads (8 producers + 8 writers), the system achieved 95% scaling efficiency, demonstrating near-ideal parallelism. Even beyond this point—up to 32 total threads—the system continued to deliver strong throughput gains by leveraging hyperthreading, reaching approximately 80% scaling efficiency at full utilization. This shows the system maintains solid performance even under increased CPU contention.
- Producers: 16 asynchronous log producers
- Entries per Producer: 2,000,000
- Entry Size: ~4 KiB
- Total Input Volume: ~125 GiB
- Writer Batch Size: 2048 entries
- Producer Batch Size: 4096 entries
- Queue Capacity: 2,000,000 entries
- Encryption: Enabled
- Compression: Level 1 (fast)
| Metric | Value |
|---|---|
| Execution Time | 59.95 seconds |
| Throughput (Entries) | 533,711 entries/sec |
| Throughput (Data) | 2.08 GiB/sec |
| Latency | Median: 54.7 ms, Avg: 55.9 ms, Max: 182.7 ms |
| Write Amplification | 0.109 |
| Final Storage Footprint | 13.62 GiB for 124.6 GiB input |
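The derived figures in the table can be cross-checked from the raw parameters (16 producers × 2,000,000 entries in 59.95 s; 13.62 GiB written for 124.6 GiB of input):

```cpp
#include <cmath>

// Derived benchmark metrics, recomputed from the raw figures.
double entries_per_sec(double total_entries, double seconds) {
    return total_entries / seconds;   // ~533,778 for 32 M entries in 59.95 s
}
double data_gib_per_sec(double input_gib, double seconds) {
    return input_gib / seconds;       // ~2.08 GiB/s for 124.6 GiB
}
double write_amplification(double stored_gib, double input_gib) {
    return stored_gib / input_gib;    // ~0.109 for 13.62 GiB / 124.6 GiB
}
```

The small gap between the recomputed ~533,778 entries/sec and the reported 533,711 reflects measured wall-clock time; the data-throughput and write-amplification figures match the table directly.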
These results demonstrate the system’s ability to sustain high-throughput logging with low latency and low storage overhead, even under encryption and compression.



