logging_mp is a Python library specifically designed for multiprocessing support in logging.
It solves the issues of log disorder, loss, and deadlock that arise with the standard logging module in multiprocessing environments. In spawn mode, logging_mp automatically handles inter-process queue transmission and monitoring via a Monkey Patch.
- ⚡ Zero-Config Multiprocessing: Child processes automatically send logs to the main process. No need to pass `Queue` objects manually.
- 💻 Cross-Platform Support: Works seamlessly with both `fork` (Linux) and `spawn` (Windows/macOS) start methods.
- 🎨 Rich Integration: Beautiful, colorized console output powered by Rich.
- 📂 File Logging: Aggregates logs from all processes and threads into a single, rotated log file.
- 🔒 Thread Safe: Fully compatible with the `threading` module.
Install from PyPI:

```shell
pip install logging-mp
```

Or install from source:

```shell
git clone https://github.com/silencht/logging-mp
cd logging_mp
pip install -e .
```

Using logging_mp is nearly identical to using the standard logging module, but with multiprocessing superpowers.
In your entry point script (e.g., main.py), initialize the system before creating any processes.
```python
import multiprocessing
import time

import logging_mp

# basicConfig must be called before creating any processes or submodules.
# In spawn mode, this automatically starts the log listening process and injects Monkey Patches.
logging_mp.basicConfig(
    level=logging_mp.INFO,
    console=True,
    file=True,
    file_path="logs"
)

# Get a logger
logger_mp = logging_mp.getLogger(__name__)

def worker_task(name):
    # In the child process, just get a logger and log!
    # No queues to configure, no listeners to start.
    worker_logger_mp = logging_mp.getLogger("worker")
    worker_logger_mp.info(f"👋 Hello from {name} (PID: {multiprocessing.current_process().pid})")
    time.sleep(0.5)

if __name__ == "__main__":
    logger_mp.info("🚀 Starting processes...")
    processes = []
    for i in range(3):
        p = multiprocessing.Process(target=worker_task, args=(f"Worker-{i}",))
        p.start()
        processes.append(p)

    for p in processes:
        p.join()
    logger_mp.info("✅ All tasks finished.")
```

The `basicConfig` method accepts the following arguments:
| Argument | Type | Default | Description |
|---|---|---|---|
| `level` | `int` | `logging_mp.WARNING` | The global logging threshold (e.g., `INFO`, `DEBUG`). |
| `console` | `bool` | `True` | Enable/disable Rich console output. |
| `file` | `bool` | `False` | Enable/disable writing to a log file. |
| `file_path` | `str` | `"logs"` | Directory to store log files. |
| `backupCount` | `int` | `10` | Number of previous session logs to keep. |
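Putting the table together, a configuration that enables both outputs and tightens rotation might look like the following sketch (argument names and defaults are taken from the table above; the chosen values are illustrative only):

```python
import logging_mp  # assumes the logging-mp package is installed

# Illustrative configuration: DEBUG threshold, Rich console output,
# file output under "logs/", keeping only the last 5 session logs.
logging_mp.basicConfig(
    level=logging_mp.DEBUG,
    console=True,
    file=True,
    file_path="logs",
    backupCount=5,
)
```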
For details, please refer to the example directory.
```
.
├── example
│   ├── example.py           # Complete usage demonstration
│   ├── module_a
│   │   ├── module_b
│   │   └── worker_ta.py     # Example worker module
│   └── module_c
│       └── worker_tc.py     # Example worker module
├── src
│   └── logging_mp
│       └── __init__.py      # Core library implementation
├── LICENSE
├── pyproject.toml
└── README
```
The native Python logging library, while thread-safe, is not safe across multiple processes. logging_mp employs an asynchronous communication mechanism that maintains multi-threading compatibility while thoroughly resolving the conflicts caused by concurrent writes in multiprocessing environments:
- Centralized Listening (Aggregation): Upon main process startup, the system automatically creates a separate background process, `_logging_mp_queue_listener`. This globally unique "consumer" extracts logs from the queue and uniformly performs Rich console rendering or file writing operations.
- Transparent Injection (Monkey Patch): To achieve "zero-perception" user integration, the library patches `multiprocessing.Process` upon import. In `spawn` mode, when `Process.start()` is executed, the system automatically injects the log queue object into the child process's bootstrapping phase (`_bootstrap`), ensuring the child process gains log-return capability instantly upon startup.
- Full-Scenario Support (Threads & Processes):
  - Multi-threading: Directly inherits the thread-safety features of native `logging`. Logs between threads do not require cross-process communication, resulting in minimal overhead.
  - Multiprocessing: Within each child process, `logger.info()` acts as a "producer". Log entries are pushed into a cross-process queue and return immediately. Since time-consuming disk I/O is performed asynchronously in the listener process, your business logic is hardly blocked by logging operations.
- Linear Order Guarantee (Ordering): Logs from all processes and threads ultimately converge into a single in-memory queue. The listener processes them in the order they are received, ensuring linear consistency in the output timeline and completely eliminating issues such as interleaved characters or file deadlocks caused by simultaneous writes from multiple processes or threads.
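The producer/consumer pattern described above can be sketched with the standard library's `QueueHandler`/`QueueListener`, which logging_mp wires up automatically. This is a conceptual sketch, not the library's actual implementation; the `worker` function and logger names are illustrative:

```python
import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # Producer side: each process attaches a QueueHandler and pushes
    # records into the shared queue; the call returns immediately,
    # without touching the console or disk.
    logger = logging.getLogger("sketch")
    logger.addHandler(logging.handlers.QueueHandler(queue))
    logger.setLevel(logging.INFO)
    logger.info("hello from a child process")

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    # Consumer side: a single listener drains the queue and performs the
    # actual (slow) I/O, so writes from many processes never interleave.
    listener = logging.handlers.QueueListener(queue, logging.StreamHandler())
    listener.start()
    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()
    p.join()
    listener.stop()
```

Because only the listener touches the handlers, ordering is determined by arrival in the queue, which is exactly the linear-order guarantee described above.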
- Import Order: In multiprocessing environments using `spawn` mode, ensure that you import `logging_mp` and call `basicConfig` before creating any `Process` objects.
- Windows/macOS Users: Due to the use of the `spawn` start method, always place the startup code inside an `if __name__ == "__main__":` block. Otherwise, it may cause recursive startup errors.
- Process Subclassing: If you create processes by subclassing `multiprocessing.Process` and override the `__init__` method, be sure to call `super().__init__()`. Otherwise, the logging queue may not be properly injected.
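The subclassing note can be illustrated with a minimal sketch (the class and attribute names here are hypothetical, and the logging_mp call is shown only as a comment so the sketch stays standard-library-only):

```python
import multiprocessing

class WorkerProcess(multiprocessing.Process):
    def __init__(self, task_name):
        # Required: without super().__init__(), the Process internals
        # (and, in logging_mp's case, the injected log queue) are never
        # initialized, and start() will fail or logs will be lost.
        super().__init__()
        self.task_name = task_name

    def run(self):
        # In a real project you would call logging_mp.getLogger(...) here
        # and log as usual; print() stands in for that in this sketch.
        print(f"running {self.task_name}")

if __name__ == "__main__":
    p = WorkerProcess("demo")
    p.start()
    p.join()
```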
This project is licensed under the MIT License - see the LICENSE file for details.
