diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..f8cc8e3 --- /dev/null +++ b/.gitignore @@ -0,0 +1,6 @@ +.vscode/ +.idea/ +.venv/ +venv/ +__pycache__/ +.ropeproject/ \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000..d3869bd --- /dev/null +++ b/README.md @@ -0,0 +1,11 @@ +# Torrent Inno +--- +## Installation guide +1) Clone the repository +2) Ensure `poetry` version 2.1.2 is installed on your system +3) Change the directory to `client` +4) In the `client` directory, run `poetry install` + +Now you have two options to use the project +- Enter `poetry run cli` to run the CLI +- Enter `poetry run python3 gui/app.py` to run the GUI application diff --git a/Report.md b/Report.md new file mode 100644 index 0000000..97cb763 --- /dev/null +++ b/Report.md @@ -0,0 +1,235 @@ + +# TorrentInno Project Report + +## Introduction + +**Peer-to-peer (P2P)** file sharing protocols allow users to distribute files directly between themselves without relying on a central server for the file data itself. Systems like **BitTorrent** are widely used for distributing large files efficiently by breaking them into smaller pieces that peers download from each other and upload to others simultaneously. A central component, the "***tracker***" helps peers discover each other for a specific file (identified by an "***info-hash***"). + + **TorrentInno** is a custom implementation inspired by these principles, providing a framework for P2P file exchange coordinated by a tracker. Such systems are relevant for decentralized file distribution, reducing server load, and potentially increasing download speeds through parallel sourcing. + +## Methods + +### Overall Architecture + +The TorrentInno system consists of two main components: + +1. **Tracker:** A server application (implemented in Go) responsible for registering peers interested in a specific resource (file) and providing lists of peers to each other. +2. 
**Client:** A client application (implemented in Python) that interacts with the tracker to find peers and then communicates directly with those peers to download or upload file pieces. It includes a core P2P engine and a GUI layer (see [gui](client/gui)).
+
+Schema of the interaction:
+
+![Schema](specs/schema.jpg)
+
+### Peer Discovery (Tracker Interaction)
+
+As described in the specifications ([README.md](specs/README.md)), the process begins with a peer announcing itself to the tracker:
+
+1. **Compute Info-Hash:** The client calculates a unique identifier (`info-hash`) for the resource (e.g., the `.torrentinno` file). The implementation is in the `client.core.common.Resource` class (method `get_info_hash`).
+2. **Announce:** The client sends an HTTP request to the tracker's `/peers` endpoint. The request body contains the peer's ID (`peerId`), the `infoHash`, and the peer's public IP and port ([peer-announce.json](specs/peer-announce.json)). From [torentinno.py](client/torentinno.py):
+    * Peer ID generation:
+    ```python
+    def generate_random_bits(size) -> bytes:
+        '''
+        Generate random bits using randint
+        '''
+        return bytes(random.randint(0, 255) for _ in range(size))
+
+    def generate_random_peer_id() -> str:
+        '''
+        Generate a random peer id
+        '''
+        return generate_random_bits(32).hex()
+    ```
+3. **Receive Peer List:** The tracker responds with a list of other peers currently sharing the same `info-hash` (see [tracker-response.json](specs/tracker-response.json)).
+4. **Periodic Updates:** The client periodically re-announces itself (approx. every 30 seconds) to stay listed by the tracker. The tracker removes peers that haven't announced recently.
+
+### Peer-to-Peer Communication
+
+Once a client has a list of peers, it attempts to establish direct connections ([peer-message-exchange.md](specs/peer-message-exchange.md)).
+
+1. 
**Handshake:** The initiating peer (determined by comparing peer IDs; the one with the smaller ID initiates) sends a 75-byte handshake message: + ``` + TorrentInno[peer-id (32 bytes)][info-hash (32 bytes)] + ``` + The receiving peer verifies the `info-hash`. If it matches a resource it manages, it replies with its own handshake message containing its `peer-id`. A successful handshake establishes a persistent, bidirectional connection for that specific resource. + +2. **Length-Prefixed Messages:** All subsequent communication uses length-prefixed messages: + ``` + [body-length (4 bytes)][message-body] + ``` + +3. **Message Body:** The `message-body` contains the message type and data: + ``` + [message-type (1 byte)][message-data] + ``` + Supported message types: + * **Request (`0x01`):** Sent by a peer to request a block of data. + * `[message-data]`: `[piece-index (4 bytes)][piece-inner-offset (4 bytes)][block-length (4 bytes)]` + * **Piece (`0x02` - *assumed type based on common practice, not explicitly numbered in docs*):** Sent in response to a Request, containing the actual file data. + * `[message-data]`: `[piece-index (4 bytes)][piece-inner-offset (4 bytes)][block-length (4 bytes)][data]` + * **Bitfield (`0x03` - *assumed type*):** Sent by a peer to inform others which pieces it possesses. + * `[message-data]`: `[bitfield]` (A byte array where each bit represents a piece, 0=missing, 1=present). + +### Client Core Logic ([resource_manager.py](client/core/p2p/resource_manager.py)) + +The [ResourceManager](client/core/p2p/resource_manager.py) class manages the lifecycle of downloading or seeding a specific resource. + +* **Initialization:** Takes the host peer ID, destination path, resource metadata, and whether the peer initially has the file. It sets up internal state, including piece status tracking. 
From [resource_manager.py](client/core/p2p/resource_manager.py): + ```python + class ResourceManager: + class PieceStatus(Enum): + FREE = 1 # The piece is not in work + IN_PROGRESS = 2 # Waiting for reply from some peer + RECEIVED = 3 # The data has been fetched from network and now is saving on disk + SAVED = 4 # Piece is successfully saved on disk + + def __init__( + self, + host_peer_id: str, + destination: Path, + resource: Resource, + has_file + ): + self.host_peer_id = host_peer_id + self.destination = destination + self.resource = resource + self.has_file = has_file + # ... + self.piece_status: list[ResourceManager.PieceStatus] = [] + if has_file: + self.resource_file = ResourceFile( + destination, + resource, + fresh_install=False, + initial_state=ResourceFile.State.DOWNLOADED + ) + self.piece_status = [ResourceManager.PieceStatus.SAVED] * len(self.resource.pieces) + else: + self.resource_file = ResourceFile( + destination, + resource, + fresh_install=True, # TODO: add normal restoring procedure + initial_state=ResourceFile.State.DOWNLOADING + ) + self.piece_status = [ResourceManager.PieceStatus.FREE] * len(self.resource.pieces) + # ... + ``` +* **File Handling:** Uses the [ResourceFile](client/core/p2p/resource_file.py) class to manage reading/writing pieces to a temporary file during download (`.torrentinno-filename`) and renaming it upon completion. From [resource_file.py](client/core/p2p/resource_file.py): + ```python + class ResourceFile: + # ... + class State(Enum): + DOWNLOADING = 1 + DOWNLOADED = 2 + # ... + async def save_block(self, piece_index: int, piece_inner_offset: int, data: bytes): + if self.state == ResourceFile.State.DOWNLOADED: + raise RuntimeError("Cannot perform write operation in DOWNLOADED state") + # ... + await self._ensure_downloading_destination() + async with aiofiles.open(self.downloading_destination, mode='r+b') as f: + await f.seek(offset) + await f.write(data) + # ... 
+    async def accept_download(self):
+        await aiofiles.os.rename(self.downloading_destination, self.destination)
+        self.state = ResourceFile.State.DOWNLOADED
+    ```
+* **Download Loop:** Runs as an `asyncio` task, continuously finding `FREE` pieces and assigning them to available peers (`_free_peers`) that have the piece (checked via `_bitfields`). From [resource_manager.py](client/core/p2p/resource_manager.py):
+    ```python
+    async def _download_loop(self):
+        # ...
+        while True:
+            # Find free pieces
+            free_pieces: list[int] = []
+            for i, status in enumerate(self.piece_status):
+                if status == ResourceManager.PieceStatus.FREE:
+                    free_pieces.append(i)
+            # ...
+            # Try to find a piece and a peer that has this piece
+            found_work = False
+            for piece_index in free_pieces:
+                for peer_id in self._free_peers:
+                    if self._peer_has_piece(peer_id, piece_index):  # Peer has this piece -> run the work
+                        # Update the status and related peer
+                        self.piece_status[piece_index] = ResourceManager.PieceStatus.IN_PROGRESS
+                        self._peer_in_charge[piece_index] = peer_id
+                        # ...
+                        task = asyncio.create_task(self._download_work(peer_id, piece_index))
+                        # ...
+                        found_work = True
+                        break
+                if found_work:
+                    break
+            await asyncio.sleep(0.2)
+    ```
+* **Connection Handling:** Listens for incoming connections and initiates outgoing ones, using a `ConnectionListener` to handle peer messages (see [resource_manager.py](client/core/p2p/resource_manager.py)).
+* **Message Processing:**
+    * `on_request`: Reads the requested block using [ResourceFile.get_block](client/core/p2p/resource_file.py) and sends a `Piece` message back.
+    * `on_piece`: Validates the received piece data against the expected hash from the resource metadata. If valid, saves it using [ResourceFile.save_block](client/core/p2p/resource_file.py), updates `piece_status` to `SAVED`, and broadcasts an updated `Bitfield` to all connected peers.
+    * `on_bitfield`: Updates the internal record (`_bitfields`) of which pieces the sending peer possesses.
+    * `on_close`: Cleans up connection state when a peer disconnects.
+
+### GUI Layer (see [torrent_manager_docs.md](client/gui/torrent_manager_docs.md))
+
+A GUI torrent manager provides the user-facing interface. Key functions include:
+
+* `initialize()`: Loads saved state or uses test data.
+* `shutdown()`: Saves the current state.
+* `get_files()`: Returns a list of active torrents with details (name, size, speed, blocks).
+* `update_file(file_name)`: Refreshes information (speed, blocks) for a specific torrent.
+* `update_files()`: Refreshes information for all torrents.
+* `get_file_info(url)`: Fetches metadata for a new torrent URL/magnet link.
+* `add_torrent(file_info)`: Adds a new torrent to the manager.
+* `remove_torrent(file_name)`: Removes a torrent.
+* `get_mock_content(source)`: Provides mock file list data (likely for UI development/testing).
+
+## Results
+
+> [!NOTE] TODO
+> * Screenshots of the GUI showing active downloads/uploads.
+> * Performance metrics (e.g., download/upload speed charts, time to download a specific file with varying numbers of peers).
+> * Tested [resource_manager_test.py](client/core/tests/resource_manager_test.py)
+
+**Logs from console**
+![logs](https://github.com/user-attachments/assets/35d14a34-116e-4676-8a1f-c653fcab5775)
+
+## Discussion
+
+The TorrentInno project successfully implements the fundamental components of a P2P file-sharing system, including a tracker for peer discovery and a client capable of exchanging file pieces according to a defined protocol. The use of `asyncio` in the Python client allows for efficient handling of concurrent network operations (multiple peer connections, downloads, uploads). The separation into core logic ([core](client/core)) and GUI ([gui](client/gui)) follows the Single Responsibility Principle, enhancing maintainability and scalability.
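As a concrete illustration of this concurrency model, the scheduler-plus-worker-tasks pattern described in the Methods section can be sketched in a few lines of `asyncio`. Everything below is a simplified, self-contained model: the function names, the in-memory "network," and the piece/peer bookkeeping are illustrative and are not taken from the TorrentInno codebase.

```python
import asyncio
import random

# Piece lifecycle, loosely mirroring the FREE -> IN_PROGRESS -> SAVED states
# described in the report (simplified: no RECEIVED state, no disk I/O).
FREE, IN_PROGRESS, SAVED = range(3)

async def fetch_piece(peer_id: str, piece_index: int) -> bytes:
    # Stand-in for a network round-trip to a peer.
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return bytes([piece_index % 256]) * 4

async def download_all(peers: list[str], n_pieces: int) -> list[bytes]:
    status = [FREE] * n_pieces
    result: list[bytes] = [b''] * n_pieces
    free_peers = set(peers)
    tasks: list[asyncio.Task] = []

    async def work(peer_id: str, piece_index: int):
        result[piece_index] = await fetch_piece(peer_id, piece_index)
        status[piece_index] = SAVED
        free_peers.add(peer_id)  # the peer becomes available for the next piece

    # Scheduler loop: hand every FREE piece to a free peer as its own task.
    while any(s != SAVED for s in status):
        for piece_index, s in enumerate(status):
            if s == FREE and free_peers:
                status[piece_index] = IN_PROGRESS
                tasks.append(asyncio.create_task(work(free_peers.pop(), piece_index)))
        await asyncio.sleep(0.001)  # yield control so worker tasks can run
    await asyncio.gather(*tasks)
    return result

pieces = asyncio.run(download_all(['peer-a', 'peer-b'], 8))
print(len(pieces))  # 8
```

The real `ResourceManager` layers more bookkeeping on the same idea: it records which peer is in charge of each piece, applies a timeout per piece, and validates the downloaded data against the piece hash before marking it `SAVED`.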
+**Challenges:**
+
+* **Concurrency:** Managing the state of multiple pieces across numerous peer connections concurrently (assigning pieces, handling timeouts, validating data) is inherently complex. The current implementation uses `asyncio` tasks and status tracking ([PieceStatus](client/core/p2p/resource_manager.py)) to manage this.
+* **Network Reliability:** P2P networks involve unreliable peers and network conditions. The system needs robust error handling for connection drops, timeouts (a simple 1-minute timeout is implemented in the [_download_work function](client/core/p2p/resource_manager.py)), and potentially corrupt data (hash checking is implemented in the [on_piece function](client/core/p2p/resource_manager.py)).
+* **State Management:** Ensuring consistent state, especially when resuming downloads, can be challenging. The current implementation lacks this feature, defaulting to a fresh install.
+
+**Potential Improvements/Optimizations:**
+
+* **State Restoration:** Implementing robust saving and loading of download progress to allow resuming interrupted downloads.
+* **Piece Selection Strategy:** The current download loop shuffles free pieces randomly ([resource_manager.py](client/core/p2p/resource_manager.py)). More advanced strategies (e.g., "rarest first") could improve download performance in swarms with uneven piece distribution.
+* **Error Handling:** Enhancing error handling for network issues and peer misbehavior.
+* **Throttling/Rate Limiting:** Implementing upload/download rate limiting.
+* **Security:** Adding encryption to peer communication.
+* **Tracker Reliability:** Implementing features like compact peer lists or UDP tracker protocol support for scalability.
+* **GUI Enhancements:** Adding more detailed statistics, configuration options, and potentially integrating magnet link handling directly if not already present.
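The "rarest first" idea mentioned above can be sketched against the per-peer bitfield bookkeeping the client already keeps. The helper below is hypothetical (it is not part of the codebase): it counts, for each still-needed piece, how many peers advertise it in their bitfield, and orders candidates from scarcest to most common with a random tie-break.

```python
import random
from collections import Counter

def rarest_first_order(bitfields: dict[str, list[bool]], free_pieces: list[int]) -> list[int]:
    # Count how many peers hold each candidate piece.
    availability: Counter[int] = Counter()
    for bitfield in bitfields.values():
        for index in free_pieces:
            if bitfield[index]:
                availability[index] += 1
    # Keep only pieces that at least one peer can serve.
    candidates = [i for i in free_pieces if availability[i] > 0]
    # Random tie-break first; sorted() is stable, so equal counts stay shuffled.
    random.shuffle(candidates)
    return sorted(candidates, key=lambda i: availability[i])

# Three simulated peers: piece 0 is common, piece 1 is rare, piece 2 is missing.
bitfields = {
    'peer-a': [True, True, False, True],
    'peer-b': [True, False, False, True],
    'peer-c': [True, False, False, False],
}
order = rarest_first_order(bitfields, [0, 1, 2, 3])
print(order)  # [1, 3, 0] — rarest piece first, unavailable piece 2 excluded
```

Plugging such an ordering into the download loop in place of a plain shuffle would bias the swarm toward replicating scarce pieces first, which is the standard BitTorrent heuristic for keeping all pieces available.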
+
+## References
+
+* BitTorrent Protocol Specification (BEP_0003)
+  https://www.bittorrent.org/beps/bep_0003.html
+- Links to relevant academic papers
+  - P. Sharma, A. Bhakuni and R. Kaushal, "Performance analysis of BitTorrent protocol," 2013 National Conference on Communications (NCC), New Delhi, India, 2013, pp. 1-5, doi: 10.1109/NCC.2013.6488040.
+    https://ieeexplore.ieee.org/abstract/document/6488040
+  - E. Costa-Montenegro, J.C. Burguillo-Rial, F. Gil-Castiñeira, F.J. González-Castaño, "Implementation and analysis of the BitTorrent protocol with a multi-agent model," Journal of Network and Computer Applications, Volume 34, Issue 1, 2011, pp. 368-383, ISSN 1084-8045, doi: 10.1016/j.jnca.2010.06.010.
+    https://www.sciencedirect.com/science/article/pii/S1084804510001086
+  - Arnaud Legout, Guillaume Urvoy-Keller, Pietro Michiardi, "Understanding BitTorrent: An Experimental Perspective," Technical Report, 2005, 16 pp. 
⟨inria-00000156v3⟩ + https://inria.hal.science/inria-00000156/ + +Links to documentation for key libraries: +- Python + - `asyncio` https://docs.python.org/3/library/asyncio.html + - `aiofiles` https://pypi.org/project/aiofiles/ +- Go + - `Gin` https://github.com/gin-gonic/gin diff --git a/client/.gitignore b/client/.gitignore new file mode 100644 index 0000000..ea58291 --- /dev/null +++ b/client/.gitignore @@ -0,0 +1,2 @@ +last_used.json +*.log \ No newline at end of file diff --git a/client/cli/__init__.py b/client/cli/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/client/cli/cli.py b/client/cli/cli.py new file mode 100644 index 0000000..dfb49fa --- /dev/null +++ b/client/cli/cli.py @@ -0,0 +1,230 @@ +import asyncio +import logging +import sys +import time +from pathlib import Path +import json +import shlex +import threading +import os + +import torrentInno +from core.common.resource import Resource +from torrentInno import TorrentInno, create_resource_json, create_resource_from_json + + +def get_help_message() -> str: + return ( + 'TorrentInno CLI reference:\n\n' + + '"help" - display this message\n' + + '"quit" - quit the CLI and terminate the torrent session\n' + + '"download " - ' + 'start downloading the file associated with into the \n' + + '"share " - ' + 'start sharing the existing with other peers. 
The ' + 'is the metadata of the file\n' + + '"show all" - show the status of all files\n' + + '"show " - show the status of file at the path \n' + + '"generate resource " - generate the resource json of the ' + 'and save the result into ' + ) + + +def create_resource_from_file(file: Path) -> Resource: + with open(file, mode='r') as f: + resource_json = json.load(f) + return create_resource_from_json(resource_json) + + +def _run_event_loop(loop): + asyncio.set_event_loop(loop) + loop.run_forever() + + +def setup_logging(): + log_file = Path(__file__).parent.joinpath('cli_logs.log') + logging.basicConfig( + level=logging.DEBUG, + filename=log_file, + force=True + ) + + +class Client: + def __init__(self): + setup_logging() + self.torrent_inno: TorrentInno = TorrentInno() + self.loop = asyncio.new_event_loop() + self.background_thread = threading.Thread(target=_run_event_loop, args=(self.loop,)) + self.background_thread.start() + + def start(self): + print( + "Welcome to TorrentInno CLI\n" + "Type \"help\" to display the help message\n" + "To exit type \"quit\"" + ) + try: + self.infinite_loop() + except KeyboardInterrupt: + print("Quitting") + + def infinite_loop(self): + while True: + print("\033[92mTorrentInno>\033[0m", end=' ') + line = input() + if line == "help": + print(get_help_message()) + continue + + tokens = shlex.split(line) + + if len(tokens) == 0: + continue + + if tokens[0] == "download": + try: + destination = Path(tokens[1]).expanduser() + resource_file = Path(tokens[2]).expanduser() + + if destination.exists(): + print(f"The {destination} already exists. Abort download") + continue + if not destination.parent.exists(): + print(f"The parent folder {destination.parent} does not exist. 
Abort download") + continue + + resource = create_resource_from_file(resource_file) + asyncio.run_coroutine_threadsafe( + self.torrent_inno.start_download_file(destination.resolve(), resource), + self.loop + ) + print(f"Start downloading a file into {destination.resolve()}") + except Exception as e: + print(f"Download failed: {e}") + + elif len(tokens) == 2 and tokens[0] == "show" and tokens[1] == "all": + try: + states: list[TorrentInno.State] = asyncio.run_coroutine_threadsafe( + self.torrent_inno.get_all_files_state(), + self.loop + ).result() + + for key, state in states: + all_pieces = len(state.piece_status) + saved_pieces = sum(state.piece_status) + print( + f'Destination: {state.destination}\n' + f'Upload speed {state.upload_speed_bytes_per_sec / 10 ** 6:.2f} mb/sec\n' + f'Download speed {state.download_speed_bytes_per_sec / 10 ** 6:.2f} mb/sec\n' + f'Downloaded {saved_pieces}/{all_pieces} pieces\n' + ) + except Exception as e: + print(f"Something went wrong: {e}") + + elif tokens[0] == "share": + try: + destination = Path(tokens[1]).expanduser() + resource_file = Path(tokens[2]).expanduser() + if not destination.exists(): + print(f"The {destination} does not exist. 
Abort share") + continue + + resource = create_resource_from_file(resource_file) + asyncio.run_coroutine_threadsafe( + self.torrent_inno.start_share_file(destination.resolve(), resource), + self.loop + ) + print(f"Start sharing file at {destination}") + except Exception as e: + print(f"Share failed: {e}") + + elif tokens[0] == "show": + try: + destination = Path(tokens[1]).expanduser() + + while True: + # Get the state asynchronously + state: TorrentInno.State = asyncio.run_coroutine_threadsafe( + self.torrent_inno.get_state(destination.resolve()), + self.loop + ).result() + + # Convert speed to MB (1 MB = 10^6 bytes) + upload_speed_mb = state.upload_speed_bytes_per_sec / 10 ** 6 + download_speed_mb = state.download_speed_bytes_per_sec / 10 ** 6 + + # Create the saved_chunks string + saved_pieces = ''.join('#' if piece else '.' for piece in state.piece_status) + + # Clear the terminal + os.system('cls' if os.name == 'nt' else 'clear') + + print( + f"Destination: {state.destination}" + " " * 20 + ) # Add padding to clear longer previous lines + print( + f"Upload speed: {upload_speed_mb:.2f} mb/sec" + " " * 20 + ) + print( + f"Download speed: {download_speed_mb:.2f} mb/sec" + " " * 20 + ) + print( + f"Saved pieces: {saved_pieces}" + " " * 20 + ) + + # Wait before updating again + time.sleep(0.5) + except KeyboardInterrupt as e: + pass # Ignore keyboard interrupt and simply continue + print() # Print the new line + except Exception as e: + print(f"Fail when fetching the status of file: {e}") + + elif len(tokens) == 4 and tokens[0] == "generate" and tokens[1] == "resource": + try: + file = Path(tokens[2]).expanduser() + resource_file = Path(tokens[3]).expanduser() + + print(file.resolve()) + + if not file.exists(): + print(f"File {file.resolve()} does not exist. 
Abort generation") + continue + + # Start the interactive session + print("Enter the comment: ") + comment = input() + + print("Enter the name of the resource file: ") + name = input() + + resource_json = create_resource_json( + name=name, + comment=comment, + file_path=file, + min_piece_size=1000 * 1000, + max_pieces=10000 + ) + with open(resource_file, mode='w') as f: + json.dump(resource_json, f, indent=4, ensure_ascii=False) + print("Successfully created") + except Exception as e: + print(f"Failed when generating the resource file: {e}") + else: + print("Unknown command") + + +def main(): + try: + client = Client() + client.start() + except Exception as e: + print("Quitting") diff --git a/client/core/__init__.py b/client/core/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/client/core/common/__init__.py b/client/core/common/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/client/core/common/peer_info.py b/client/core/common/peer_info.py new file mode 100644 index 0000000..4c0839c --- /dev/null +++ b/client/core/common/peer_info.py @@ -0,0 +1,8 @@ +from dataclasses import dataclass + + +@dataclass +class PeerInfo: + public_ip: str + public_port: int + peer_id: str \ No newline at end of file diff --git a/client/core/common/resource.py b/client/core/common/resource.py new file mode 100644 index 0000000..261746e --- /dev/null +++ b/client/core/common/resource.py @@ -0,0 +1,26 @@ +import datetime +from dataclasses import dataclass +import hashlib + + +@dataclass +class Resource: + @dataclass + class Piece: + sha256: str + size_bytes: int + + tracker_ip: str + tracker_port: int + comment: str + creation_date: datetime.datetime + name: str + pieces: list[Piece] + + def get_info_hash(self) -> str: + resource_repr = f"{self.tracker_ip};{self.tracker_port};{self.comment};{self.creation_date.isoformat()};{self.name};" + path_part = ','.join(f'Path(sha256={piece.sha256},size_bytes={piece.size_bytes})' for piece in self.pieces) + 
resource_repr += path_part + + info_hash = hashlib.sha256(resource_repr.encode(encoding='utf-8')).hexdigest() + return info_hash diff --git a/client/core/p2p/__init__.py b/client/core/p2p/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/client/core/p2p/connection.py b/client/core/p2p/connection.py new file mode 100644 index 0000000..27a79c7 --- /dev/null +++ b/client/core/p2p/connection.py @@ -0,0 +1,143 @@ +import asyncio + +from core.common.peer_info import PeerInfo +from core.p2p.connection_listener import ConnectionListener +from core.p2p.message import Request, Piece, Handshake, Message, Bitfield +from core.common.resource import Resource + + +class Connection: + """ + Represents a resource-related connection between two peers. + The class works with asyncio, therefore its methods must be called on a thread with running event loop + """ + + def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter, resource: Resource): + self.reader = reader + self.writer = writer + self.listeners: list[ConnectionListener] = [] + self.resource = resource + + self._listen_on_reader_task: asyncio.Task | None = None + + def add_listener(self, listener: ConnectionListener): + self.listeners.append(listener) + + def remove_listener(self, listener: ConnectionListener): + self.listeners.remove(listener) + + # Read big int from the saved reader + async def _read_int_big_endian(self, length: int) -> int: + return int.from_bytes(await self.reader.readexactly(length)) + + # Launch infinite loop to fetch messages from the reader and notify the listeners + async def _listen_on_reader(self): + try: + while True: + message_length = await self._read_int_big_endian(4) + message_type = await self._read_int_big_endian(1) + + if message_type == 1: + # Request message + + # Parse the message + piece_index = await self._read_int_big_endian(4) + piece_inner_offset = await self._read_int_big_endian(4) + block_length = await self._read_int_big_endian(4) + + 
request = Request(piece_index, piece_inner_offset, block_length) + + # Notify listeners + await asyncio.gather( + *(listener.on_request(request) for listener in self.listeners), + return_exceptions=True + ) + elif message_type == 2: + # Piece message + + # Parse the message + piece_index = await self._read_int_big_endian(4) + piece_inner_offset = await self._read_int_big_endian(4) + block_length = await self._read_int_big_endian(4) + + if block_length > 10 ** 6: + raise RuntimeError("The length of data exceeded 1 MB") + + # Retrieve the data block + data = await self.reader.readexactly(block_length) + + piece = Piece(piece_index, piece_inner_offset, block_length, data) + + # Notify the listeners + await asyncio.gather( + *(listener.on_piece(piece) for listener in self.listeners), + return_exceptions=True + ) + elif message_type == 3: + # Bitfield message + + # Parse the message + bytes_parsed = await self.reader.readexactly( + len(self.resource.pieces) // 8 + bool(len(self.resource.pieces) % 8) + ) + + bitfield = Bitfield(bitfield=[False] * len(self.resource.pieces)) + for i in range(0, len(self.resource.pieces)): + has_piece_i = bool((bytes_parsed[i // 8] >> (7 - i % 8)) & 1) + bitfield.bitfield[i] = has_piece_i + + # Notify the listeners + await asyncio.gather( + *(listener.on_bitfield(bitfield) for listener in self.listeners), + return_exceptions=True + ) + + except Exception as e: + # For now close the connection in case of any exception + await asyncio.gather( + *(listener.on_close(e) for listener in self.listeners), + return_exceptions=True + ) + finally: + self.writer.close() + await self.writer.wait_closed() + + async def send_message(self, message: Message): + self.writer.write(message.to_bytes()) + await self.writer.drain() + + async def listen(self): + if self._listen_on_reader_task is None: + loop = asyncio.get_running_loop() + self._listen_on_reader_task = loop.create_task(self._listen_on_reader()) + + async def close(self): + if 
self._listen_on_reader_task is not None: + self._listen_on_reader_task.cancel() + self._listen_on_reader_task = None + self.writer.close() + await self.writer.wait_closed() + + +# Create the connection with some peer +async def establish_connection( + host_peer_id: str, + destination_peer: PeerInfo, + resource: Resource +) -> Connection: + reader, writer = await asyncio.open_connection(destination_peer.public_ip, destination_peer.public_port) + + info_hash = resource.get_info_hash() + handshake = Handshake(host_peer_id, info_hash) + try: + writer.write(handshake.to_bytes()) + response = await reader.readexactly(75) + assert response[0:11].decode() == 'TorrentInno' + assert response[11:43].hex() == destination_peer.peer_id + assert response[43:75].hex() == info_hash + except Exception: + writer.close() + await writer.wait_closed() + raise # re-raise the error + + return Connection(reader, writer, resource) \ No newline at end of file diff --git a/client/core/p2p/connection_listener.py b/client/core/p2p/connection_listener.py new file mode 100644 index 0000000..584b192 --- /dev/null +++ b/client/core/p2p/connection_listener.py @@ -0,0 +1,15 @@ +from core.p2p.message import Request, Piece, Bitfield + + +class ConnectionListener: + async def on_request(self, request: Request): + pass + + async def on_piece(self, piece: Piece): + pass + + async def on_bitfield(self, bitfield: Bitfield): + pass + + def on_close(self, cause): + pass \ No newline at end of file diff --git a/client/core/p2p/message.py b/client/core/p2p/message.py new file mode 100644 index 0000000..e97b1fe --- /dev/null +++ b/client/core/p2p/message.py @@ -0,0 +1,86 @@ +import asyncio +from dataclasses import dataclass + + +class Message: + """ + A base class for messages that peers exchanges between each other + """ + + # Convert the message object into bytes according to the specs + def to_bytes(self) -> bytes: + pass + + +@dataclass +class Handshake(Message): + """ + A dataclass for Handshake message + 
""" + peer_id: str + info_hash: str + + def to_bytes(self) -> bytes: + return "TorrentInno".encode() + bytes.fromhex(self.peer_id) + bytes.fromhex(self.info_hash) + + +@dataclass +class Request(Message): + """ + A dataclass for Request (type 1) message + """ + piece_index: int + piece_inner_offset: int + block_length: int + + def to_bytes(self) -> bytes: + return ( + (13).to_bytes(length=4, byteorder='big') + + (1).to_bytes(length=1, byteorder='big') + + self.piece_index.to_bytes(4, byteorder='big') + + self.piece_inner_offset.to_bytes(4, byteorder='big') + + self.block_length.to_bytes(4, byteorder='big') + ) + + +@dataclass +class Piece(Message): + """ + A dataclass for Piece (type 2) message + """ + piece_index: int + piece_inner_offset: int + block_length: int + data: bytes + + def __post_init__(self): + assert len(self.data) == self.block_length + + def to_bytes(self) -> bytes: + return ( + (13 + len(self.data)).to_bytes(length=4, byteorder='big') + + (2).to_bytes(length=1, byteorder='big') + + self.piece_index.to_bytes(4, byteorder='big') + + self.piece_inner_offset.to_bytes(4, byteorder='big') + + self.block_length.to_bytes(4, byteorder='big') + + self.data + ) + + +@dataclass +class Bitfield(Message): + """ + A dataclass for Bitfield (type 3) message + """ + bitfield: list[bool] + + def to_bytes(self) -> bytes: + result: list[int] = [] + for i in range(0, len(self.bitfield), 8): + current_byte = 0 + for j in range(i, min(len(self.bitfield), i + 8)): + if self.bitfield[j]: + current_byte += 1 << (i + 7 - j) + result.append(current_byte) + + return (1 + len(result)).to_bytes(length=4) + (3).to_bytes(length=1) + bytes(result) diff --git a/client/core/p2p/resource_file.py b/client/core/p2p/resource_file.py new file mode 100644 index 0000000..8f0dabb --- /dev/null +++ b/client/core/p2p/resource_file.py @@ -0,0 +1,110 @@ +import asyncio +from itertools import accumulate +from pathlib import Path + +import aiofiles +import aiofiles.os + +from core.common.resource 
import Resource +from enum import Enum + + +class ResourceFile: + """ + Class that represents the destination of resource. The main purpose of this class is to hide + the complexities of managing the "hidden file" and the aiofiles-related operations. Everything else + (checking and validating pieces, for example) is the caller's responsibility. + + The class has two states + 1) Downloading state. In this state the temporary file is created and uses to write/read operations + 2) Downloaded state. In this state the destination itself is used to read operations. Write operations + raise exception. + """ + + class State(Enum): + DOWNLOADING = 1 + DOWNLOADED = 2 + + def __init__( + self, + destination: Path, + resource: Resource, + fresh_install=True, + initial_state=State.DOWNLOADING + ): + self.destination = destination + self.resource = resource + self.downloading_destination = self.get_downloading_destination() + self.lock = asyncio.Lock() + self.state = initial_state + + if fresh_install: + assert initial_state == ResourceFile.State.DOWNLOADING + + destination.unlink(missing_ok=True) + self.downloading_destination.unlink(missing_ok=True) + + self.offsets: list[int] = [0] + list(accumulate(piece.size_bytes for piece in resource.pieces)) + + def get_downloading_destination(self) -> Path: + return self.destination.parent.joinpath(f'.torrentinno-{self.destination.name}') + + def _calculate_offset(self, piece_index: int, piece_inner_offset: int) -> int: + return self.offsets[piece_index] + piece_inner_offset + + async def _create_downloading_destination(self): + self.downloading_destination.unlink(missing_ok=True) + async with aiofiles.open(self.downloading_destination, mode='wb') as f: + await f.seek(self.offsets[-1] - 1) + await f.write(b'\0') + + async def _ensure_downloading_destination(self): + async with self.lock: + if ( + not self.downloading_destination.exists() or + self.downloading_destination.stat().st_size != self.offsets[-1] + ): + await 
self._create_downloading_destination() + + + async def get_piece(self, index: int) -> bytes: + return await self.get_block(index, 0, self.resource.pieces[index].size_bytes) + + async def get_block(self, piece_index: int, piece_inner_offset: int, block_length: int) -> bytes: + offset = self._calculate_offset(piece_index, piece_inner_offset) + if offset + block_length > self.offsets[-1]: + raise RuntimeError("The requested read portion does not fit the file") + + take_from: Path + if self.state == ResourceFile.State.DOWNLOADED: + take_from = self.destination + else: + await self._ensure_downloading_destination() + take_from = self.downloading_destination + + async with aiofiles.open(take_from, mode='rb') as f: + await f.seek(offset) + return await f.read(block_length) + + async def save_block(self, piece_index: int, piece_inner_offset: int, data: bytes): + if self.state == ResourceFile.State.DOWNLOADED: + raise RuntimeError("Cannot perform write operation in DOWNLOADED state") + + offset = self._calculate_offset(piece_index, piece_inner_offset) + if offset + len(data) > self.offsets[-1]: + raise RuntimeError("The write portion overflows the file") + + await self._ensure_downloading_destination() + + async with aiofiles.open(self.downloading_destination, mode='r+b') as f: + await f.seek(offset) + await f.write(data) + + async def save_validated_piece(self, piece_index: int, data: bytes): + await self.save_block(piece_index, 0, data) + + async def accept_download(self): + if self.destination.exists(): + self.destination.unlink() + await aiofiles.os.rename(self.downloading_destination, self.destination) + self.state = ResourceFile.State.DOWNLOADED diff --git a/client/core/p2p/resource_manager.py b/client/core/p2p/resource_manager.py new file mode 100644 index 0000000..17f85fd --- /dev/null +++ b/client/core/p2p/resource_manager.py @@ -0,0 +1,595 @@ +import asyncio +import hashlib +import time +from pathlib import Path +from dataclasses import dataclass +import random + 
+from core.common.peer_info import PeerInfo
+from core.p2p.connection import Connection, establish_connection
+from core.p2p.message import Handshake, Request, Bitfield, Piece
+from core.p2p.resource_file import ResourceFile
+from core.common.resource import Resource
+from core.p2p.connection_listener import ConnectionListener
+from enum import Enum
+import logging
+
+from core.p2p.resource_save import ResourceSave
+
+
+class ResourceManager:
+    class PieceStatus(Enum):
+        FREE = 1         # The piece is not in work
+        IN_PROGRESS = 2  # Waiting for a reply from some peer
+        RECEIVED = 3     # The data has been fetched from the network and is now being saved on disk
+        SAVED = 4        # The piece is successfully saved on disk
+
+    @dataclass
+    class State:
+        piece_status: list[bool]
+        upload_speed_bytes_per_sec: int
+        download_speed_bytes_per_sec: int
+
+    @dataclass
+    class NetworkStats:
+        # The time of the last stats drop (used to calculate the approximate instantaneous upload/download speed)
+        last_drop_timestamp_seconds: float
+        bytes_downloaded_since_last_drop: int = 0
+        bytes_uploaded_since_last_drop: int = 0
+        prev_download_bytes_per_sec: int = 0
+        prev_upload_bytes_per_sec: int = 0
+
+    def _log_prefix(self, msg: str) -> str:
+        return f"[ResourceManager peer_id={self.host_peer_id[:6]} info_hash={self.info_hash[:6]}] {msg}"
+
+    # Some new peer wants to connect to this peer
+    async def _handle_incoming_connection(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
+        host, port = writer.get_extra_info('peername')
+        logging.info(self._log_prefix(f"{host}:{port} is trying to connect"))
+        try:
+            # Handshake layout: 11 bytes "TorrentInno" + 32 bytes peer_id + 32 bytes info_hash = 75 bytes
+            response = await reader.readexactly(75)
+            assert response[0:11].decode() == 'TorrentInno'
+            info_hash = response[43:75].hex()
+            assert info_hash == self.info_hash
+
+            peer_id: str = response[11:43].hex()
+
+            if self.host_peer_id < peer_id:
+                raise RuntimeError(f"Peer {peer_id} has a greater id and is trying to establish a connection")
+
+            # If we already have a connection with this peer id -> abort the incoming connection
+            if peer_id in self._connections:
+                return
+
+            # If everything is correct, then send the response handshake message
+            writer.write(Handshake(peer_id=self.host_peer_id, info_hash=self.info_hash).to_bytes())
+            await writer.drain()
+
+            # Create the connection object
+            connection = Connection(reader, writer, self.resource)
+            await self._add_peer(peer_id, connection)
+
+            logging.info(self._log_prefix(f"Established connection with {peer_id[:6]}"))
+        except Exception:
+            logging.exception(self._log_prefix(f"Failed to handle incoming connection with {host}"))
+            writer.close()
+            await writer.wait_closed()
+
+    def _create_connection_listener(self, peer_id: str) -> ConnectionListener:
+        return ConnectionListenerImpl(peer_id, self)
+
+    async def _add_peer(self, peer_id: str, connection: Connection):
+        self._connections[peer_id] = connection
+        self._bitfields[peer_id] = [False] * len(self.resource.pieces)
+        self._free_peers.add(peer_id)
+
+        connection.add_listener(self._create_connection_listener(peer_id))
+        await connection.listen()
+        # Send the message about the stored pieces
+        await self._send_bitfield(peer_id)
+
+    async def _remove_peer(self, peer_id: str):
+        try:
+            await self._connections[peer_id].close()
+        except Exception:
+            # Ignore any exception on close (the peer_id is probably not in the list,
+            # or the connection is already closed)
+            pass
+        self._connections.pop(peer_id, None)
+        self._bitfields.pop(peer_id, None)
+        self._free_peers.discard(peer_id)
+
+    # ----- MAIN DOWNLOAD LOGIC BEGINS HERE -----
+
+    async def _download_work(self, peer_id: str, piece_index: int):
+        logging.info(self._log_prefix(f"Download work: download piece {piece_index} from peer {peer_id[:6]}"))
+        connection = self._connections[peer_id]
+        await connection.send_message(
+            Request(
+                piece_index,
+                0,
+                self.resource.pieces[piece_index].size_bytes
+            )
+        )
+        await asyncio.sleep(60)  # Sleep 1 minute
+        if self.piece_status[piece_index] == ResourceManager.PieceStatus.IN_PROGRESS:
+            # If after one minute the piece is still in progress,
+            # then something is wrong with the peer (slow download speed, for example)
+            self._peer_in_charge[piece_index] = ''  # This peer is no longer responsible for this piece
+            self.piece_status[piece_index] = ResourceManager.PieceStatus.FREE
+
+    async def _download_loop(self):
+        logging.info(self._log_prefix("Start download loop"))
+        works = set()
+        while True:
+            # Find free pieces
+            free_pieces: list[int] = []
+            saved_pieces = 0
+            for i, status in enumerate(self.piece_status):
+                if status == ResourceManager.PieceStatus.FREE:
+                    free_pieces.append(i)
+                if status == ResourceManager.PieceStatus.SAVED:
+                    saved_pieces += 1
+
+            if saved_pieces == len(self.resource.pieces):
+                break
+
+            # Shuffle the pieces
+            random.shuffle(free_pieces)
+
+            # Try to find a piece and a peer that has this piece
+            found_work = False
+            for piece_index in free_pieces:
+                for peer_id in self._free_peers:
+                    if self._peer_has_piece(peer_id, piece_index):  # Peer has this piece -> run the work
+                        # Update the status and the related peer
+                        self.piece_status[piece_index] = ResourceManager.PieceStatus.IN_PROGRESS
+                        self._peer_in_charge[piece_index] = peer_id
+
+                        task = asyncio.create_task(self._download_work(peer_id, piece_index))
+                        task.add_done_callback(lambda t: works.discard(t))
+                        works.add(task)
+
+                        found_work = True
+                        break
+                if found_work:
+                    break
+            await asyncio.sleep(0.2)
+
+    async def _confirm_download_complete(self):
+        saved_pieces = sum(
+            piece_status == ResourceManager.PieceStatus.SAVED
+            for piece_status in self.piece_status
+        )
+        assert saved_pieces == len(self.resource.pieces)
+
+        await self.resource_file.accept_download()
+        await self.stop_download()
+        await self.resource_save.remove_save()
+        logging.info(self._log_prefix("Download is completed"))
+
+    # ----- END OF DOWNLOAD LOGIC -----
+
+    def _peer_has_piece(self, peer_id: str, piece_index: int) -> bool:
+        return self._bitfields[peer_id][piece_index]
+
+    def _get_bitfield(self) -> list[bool]:
+        return list(
+            piece_status == ResourceManager.PieceStatus.SAVED for piece_status in self.piece_status
+        )
+
+    async def _save_loading_state(self):
+        try:
+            await self.resource_save.write_bitfield(self._get_bitfield())
+        except Exception:
+            logging.exception(self._log_prefix("Can't save bitfield"))
+
+    async def _send_bitfield(self, peer_id: str):
+        connection = self._connections[peer_id]
+        bitfield = Bitfield(self._get_bitfield())
+        await connection.send_message(bitfield)
+
+    async def _send_bitfield_to_all_peers(self):
+        bitfield = Bitfield(self._get_bitfield())
+        await asyncio.gather(
+            *(connection.send_message(bitfield)
+              for connection in self._connections.values()),
+            return_exceptions=True
+        )
+
+    # Periodic broadcast of the bitfield to compensate for possible exceptions in the Piece message
+    async def _periodic_broadcast(self):
+        while True:
+            await self._send_bitfield_to_all_peers()
+            await asyncio.sleep(30)  # Sleep for 30 seconds
+
+    async def _serve_forever(self, server: asyncio.Server):
+        async with server:
+            await server.serve_forever()
+
+    async def _calc_network_stats(self):
+        while True:
+            delta = time.time() - self._network_stats.last_drop_timestamp_seconds
+            self._network_stats.prev_download_bytes_per_sec = \
+                int(self._network_stats.bytes_downloaded_since_last_drop / delta)
+            self._network_stats.prev_upload_bytes_per_sec = \
+                int(self._network_stats.bytes_uploaded_since_last_drop / delta)
+
+            self._network_stats.last_drop_timestamp_seconds = time.time()
+            self._network_stats.bytes_downloaded_since_last_drop = 0
+            self._network_stats.bytes_uploaded_since_last_drop = 0
+            await asyncio.sleep(2)
+
+    def __init__(
+        self,
+        host_peer_id: str,
+        destination: Path,
+        resource: Resource,
+    ):
+        """
+        Create a new ResourceManager instance.
+        IMPORTANT: For a pair (destination, resource) there MUST be only one (running) instance of `ResourceManager`.
+        Otherwise, the whole behaviour is undefined.
+
+        :param host_peer_id: the peer_id that will host the resource
+        :param destination: The destination of the file on the filesystem. Important: if the destination exists
+            at the moment the class is instantiated, then it's assumed that the caller has the `destination` file
+            and therefore the file will only be shared (and not downloaded)
+        :param resource: the resource class representing the file to be uploaded/downloaded
+        """
+        self.host_peer_id = host_peer_id
+        self.destination = destination
+        self.resource = resource
+
+        self.info_hash = resource.get_info_hash()
+
+        # Save state for the resource
+        self.resource_save = ResourceSave(destination, resource)
+
+        # Whether this peer may serve file pieces
+        self.share_file = True
+
+        # Peer dictionaries
+        self._connections: dict[str, Connection] = dict()  # peer_id <-> Connection
+        self._bitfields: dict[str, list[bool]] = dict()    # peer_id <-> bitfield (owned pieces)
+        self._free_peers: set[str] = set()                 # set of peer ids that are not involved in any work
+
+        self.piece_status: list[ResourceManager.PieceStatus] = []
+
+        has_file = destination.exists()
+
+        if has_file:  # The caller claims to already have the file
+            self.resource_file = ResourceFile(
+                destination,
+                resource,
+                fresh_install=False,
+                initial_state=ResourceFile.State.DOWNLOADED
+            )
+            self.piece_status = [ResourceManager.PieceStatus.SAVED] * len(self.resource.pieces)
+        else:  # The caller does not have the complete downloaded file
+            self.resource_file = ResourceFile(
+                destination,
+                resource,
+                fresh_install=False,
+                initial_state=ResourceFile.State.DOWNLOADING
+            )
+            self.piece_status = [ResourceManager.PieceStatus.FREE] * len(self.resource.pieces)
+
+        # Current peer id that handles the piece (empty string = no peer)
+        self._peer_in_charge: list[str] = [''] * len(self.resource.pieces)
+
+        # Download state
+        self._network_stats = ResourceManager.NetworkStats(time.time())
+
+        # Various asyncio background tasks
+        self._download_task: asyncio.Task | None = None
+        self._server_task: asyncio.Task | None = None
+        self._broadcast_task: asyncio.Task | None = None
+
+        self._calc_network_stats_task = asyncio.create_task(self._calc_network_stats())
+
+    # PUBLIC METHODS:
+    async def open_public_port(self) -> int:
+        """
+        Opens a new socket that will be used to accept incoming connections from other peers
+
+        :return: port on which the new socket is opened
+        """
+        if self._server_task is not None:
+            raise RuntimeError("Peer is already accepting connections")
+
+        # Start accepting peer connections on some random port
+        public_server = await asyncio.start_server(
+            self._handle_incoming_connection,
+            host='0.0.0.0',
+            port=0
+        )
+        host, port = public_server.sockets[0].getsockname()
+        self._server_task = asyncio.create_task(self._serve_forever(public_server))
+
+        # Also run the bitfield broadcast task
+        if self._broadcast_task is None:
+            self._broadcast_task = asyncio.create_task(self._periodic_broadcast())
+
+        logging.info(self._log_prefix(f'Open public port {port}'))
+        # Return the port on which the connection has been opened
+        return port
+
+    async def close_public_port(self):
+        """
+        Closes the socket that accepts the new connections (NOTE: after calling this method, the old port received in
+        `open_public_port` cannot be reused anymore as this port is received randomly from the OS)
+        """
+        # Close the server_task connection
+        if self._server_task is not None:
+            self._server_task.cancel()
+            self._server_task = None
+
+        if self._broadcast_task is not None:
+            self._broadcast_task.cancel()
+            self._broadcast_task = None
+
+    async def restore_previous(self):
+        """
+        Attempts to restore the saved state and start the download with this state (for example, to learn which pieces
+        are already downloaded in order to not repeat the download work).
+
+        NOTE: If the destination file exists, then this method
+        assumes that the file is already completely downloaded
+        (and so sets the status of all pieces to SAVED (i.e. downloaded))
+        """
+        if self.destination.exists():
+            self.piece_status = [ResourceManager.PieceStatus.SAVED] * len(self.resource.pieces)
+            self._peer_in_charge = [''] * len(self.resource.pieces)
+            return
+
+        try:
+            bitfield = await self.resource_save.read_bitfield()
+            for i in range(len(self.piece_status)):
+                if bitfield[i]:
+                    self.piece_status[i] = ResourceManager.PieceStatus.SAVED
+                    self._peer_in_charge[i] = ''
+            logging.info(self._log_prefix(f"Restored bitfield: {bitfield}"))
+        except Exception as e:
+            logging.info(self._log_prefix(f"Failed to read bitfield: {e}"))
+
+    async def start_download(self):
+        """
+        Start downloading the file. If the destination file already exists, then this method does nothing. The file
+        will usually be downloaded into a specifically named temporary file located in the same folder as `destination`.
+        Once the ResourceManager detects that the file is completely downloaded,
+        it terminates the download and renames the temporary file into the `destination`.
+        """
+        if self.destination.exists():
+            return
+
+        if self._download_task is None:
+            self._download_task = asyncio.create_task(self._download_loop())
+
+    async def stop_download(self):
+        """
+        Stop downloading the file.
+        """
+        logging.info(self._log_prefix("Stop download"))
+
+        # For each piece, mark that nobody is responsible for it
+        self._peer_in_charge = [''] * len(self.resource.pieces)
+
+        if self._download_task is not None:
+            # Stop downloading the resource
+            self._download_task.cancel()
+            self._download_task = None
+
+    async def start_sharing_file(self):
+        """
+        Sets the flag that allows the ResourceManager to share the file (file pieces) with other peers.
+        This is the default behaviour.
+        """
+        self.share_file = True
+
+    async def stop_sharing_file(self):
+        """
+        Forbid the ResourceManager to share the file (file pieces) with other peers.
+        """
+        self.share_file = False
+
+    async def full_start(
+        self,
+        restore_previous=True,
+        start_sharing_file=True,
+        start_download=True,
+        open_public_port=True
+    ) -> int | None:
+        """
+        A method to start the `ResourceManager`. Clients MUST call this method in order to fully start the `ResourceManager`.
+        The method has a bunch of flags that allow clients to adjust the parameters of the `ResourceManager`
+        at the beginning. Any disabled flag can be enabled later by calling the appropriate public method.
+
+        The method is not guaranteed to be idempotent (i.e. repeated calls of `full_start()` on a
+        running ResourceManager may cause exceptions/various errors).
+
+        :return: if `open_public_port` is True then the return value is the same as `open_public_port()`, otherwise None
+        """
+        if restore_previous: await self.restore_previous()
+        if start_sharing_file: await self.start_sharing_file()
+        if start_download: await self.start_download()
+        listen_port = None
+        if open_public_port: listen_port = await self.open_public_port()
+
+        if self._calc_network_stats_task is None:
+            self._calc_network_stats_task = asyncio.create_task(self._calc_network_stats())
+        return listen_port
+
+    async def shutdown(self):
+        """
+        A method to stop the running `ResourceManager`. Clients MUST call this method in order to fully stop the
+        running `ResourceManager` and release all associated resources.
+
+        The method is idempotent (repeated calls do not cause any errors/exceptions)
+        """
+        await self.close_public_port()
+        await asyncio.gather(
+            *(connection.close() for connection in self._connections.values()),
+            return_exceptions=True
+        )
+        await self.stop_download()
+        await self.stop_sharing_file()
+        if self._calc_network_stats_task is not None:
+            self._calc_network_stats_task.cancel()
+
+    async def submit_peers(self, peers: list[PeerInfo]):
+        """
+        The only way to tell the ResourceManager about peers related to the resource. Usually these will be the peers
+        fetched from the tracker response to an announce request with the *same* info hash as the ResourceManager
+        was created with.
+
+        :param peers: The list of peers known to be related to the resource
+        """
+        for peer in peers:
+            if peer.peer_id == self.host_peer_id:
+                logging.warning(self._log_prefix("host_peer_id is passed in submit_peers"))
+                continue
+
+            # IMPORTANT RULE: The initiator of a connection is always the peer with the smaller id
+            if self.host_peer_id < peer.peer_id:
+                try:
+                    if peer.peer_id not in self._connections:  # We do not want repeated connections
+                        connection = await establish_connection(self.host_peer_id, peer, self.resource)
+                        await self._add_peer(peer.peer_id, connection)
+                        logging.info(self._log_prefix(f"Established connection with {peer.peer_id[:6]}"))
+                except Exception:
+                    logging.exception(
+                        self._log_prefix(f"Exception while establishing connection with {peer.peer_id[:6]}"))
+
+    async def get_state(self) -> 'ResourceManager.State':
+        """
+        Get the current state of the resource (i.e. downloaded pieces, upload/download speed etc.)
+        :return: The current state of the resource (file)
+        """
+        return ResourceManager.State(
+            self._get_bitfield(),
+            upload_speed_bytes_per_sec=self._network_stats.prev_upload_bytes_per_sec,
+            download_speed_bytes_per_sec=self._network_stats.prev_download_bytes_per_sec
+        )
+
+
+class ConnectionListenerImpl(ConnectionListener):
+    def __init__(self, connected_peer_id: str, resource_manager: ResourceManager):
+        self.resource_manager = resource_manager
+        self.connected_peer_id = connected_peer_id
+
+    def _log(self, level, msg: str):
+        logging.log(level, self.resource_manager._log_prefix(msg))
+
+    async def on_request(self, request: Request):
+        if not self.resource_manager.share_file:
+            self._log(logging.DEBUG,
+                      f"Ignore Request message from peer {self.connected_peer_id[:6]} as sharing is disabled")
+            return
+
+        try:
+            data = await self.resource_manager.resource_file.get_block(
+                request.piece_index,
+                request.piece_inner_offset,
+                request.block_length
+            )
+            connection = self.resource_manager._connections[self.connected_peer_id]
+            await connection.send_message(
+                Piece(
+                    request.piece_index,
+                    request.piece_inner_offset,
+                    request.block_length,
+                    data
+                )
+            )
+
+            # Update the network stats
+            self.resource_manager._network_stats.bytes_uploaded_since_last_drop += request.block_length
+
+            self._log(logging.DEBUG,
+                      f"Send piece {request.piece_index} on Request message to peer {self.connected_peer_id[:6]}")
+        except Exception:
+            logging.exception(
+                self.resource_manager._log_prefix(
                    f"Exception on request message from peer {self.connected_peer_id[:6]}"
+                )
+            )
+
+    async def on_piece(self, piece: Piece):
+        # This peer is not in charge of this piece
+        if self.resource_manager._peer_in_charge[piece.piece_index] != self.connected_peer_id:
+            self._log(
+                logging.DEBUG,
+                f"Discard piece {piece.piece_index} from {self.connected_peer_id[:6]} as not in charge"
+            )
+            return
+        self.resource_manager.piece_status[piece.piece_index] = ResourceManager.PieceStatus.RECEIVED
+        try:
+            # Check that the received piece matches the expected hash from the resource metadata
+            actual_hash = hashlib.sha256(piece.data).hexdigest()
+            expected_hash = self.resource_manager.resource.pieces[piece.piece_index].sha256
+
+            if actual_hash != expected_hash:
+                self._log(
+                    logging.WARNING,
+                    f"Incorrect hash of piece {piece.piece_index} on Piece message from peer {self.connected_peer_id}\n"
+                    f"Expected: {expected_hash}\n"
+                    f"Received: {actual_hash}"
+                )
+                # Free the piece so it can be re-requested (possibly from another peer)
+                self.resource_manager.piece_status[piece.piece_index] = ResourceManager.PieceStatus.FREE
+                return
+
+            await self.resource_manager.resource_file.save_block(
+                piece.piece_index,
+                piece.piece_inner_offset,
+                piece.data
+            )
+
+            # Update the network stats
+            self.resource_manager._network_stats.bytes_downloaded_since_last_drop += len(piece.data)
+
+            # If the piece is saved, then broadcast the bitfield to all connections and change the status
+            self.resource_manager.piece_status[piece.piece_index] = ResourceManager.PieceStatus.SAVED
+
+            saved_pieces = sum(
+                piece_status == ResourceManager.PieceStatus.SAVED
+                for piece_status in self.resource_manager.piece_status
+            )
+            self._log(
+                logging.INFO,
+                f"Save piece {piece.piece_index} from {self.connected_peer_id[:6]}. "
+                f"Now has {saved_pieces}/{len(self.resource_manager.resource.pieces)} pieces"
+            )
+
+            # Also update the information about the saved piece in the save file:
+            await self.resource_manager._save_loading_state()
+
+            if saved_pieces == len(self.resource_manager.resource.pieces):
+                # The file is successfully downloaded!
+                try:
+                    await self.resource_manager._confirm_download_complete()
+                except Exception:
+                    logging.exception(self.resource_manager._log_prefix("Cannot complete download"))
+
+            await self.resource_manager._send_bitfield_to_all_peers()
+        except Exception:
+            logging.exception(
+                self.resource_manager._log_prefix(
+                    f"Exception on piece message from peer {self.connected_peer_id[:6]}"
+                )
+            )
+            self.resource_manager.piece_status[piece.piece_index] = ResourceManager.PieceStatus.FREE
+
+    async def on_bitfield(self, bitfield: Bitfield):
+        self.resource_manager._bitfields[self.connected_peer_id] = bitfield.bitfield
+        owned_pieces = sum(bitfield.bitfield)
+        self._log(
+            logging.DEBUG,
+            f"Bitfield from {self.connected_peer_id[:6]}."
+            f" The peer claims to have {owned_pieces}/{len(bitfield.bitfield)} pieces"
+        )
+
+    async def on_close(self, cause):
+        # The connection with the peer is closed for some reason
+        self._log(logging.INFO, f"The connection with {self.connected_peer_id[:6]} is closed")
+        await self.resource_manager._remove_peer(self.connected_peer_id)
diff --git a/client/core/p2p/resource_manager_api.md b/client/core/p2p/resource_manager_api.md
new file mode 100644
index 0000000..2400890
--- /dev/null
+++ b/client/core/p2p/resource_manager_api.md
@@ -0,0 +1,135 @@
+# Resource Manager API
+
+## Constructor
+```python
+def __init__(
+    self,
+    host_peer_id: str,
+    destination: Path,
+    resource: Resource,
+):
+    """
+    Create a new ResourceManager instance.
+    IMPORTANT: For a pair (destination, resource) there MUST be only one (running) instance of `ResourceManager`.
+    Otherwise, the whole behaviour is undefined.
+
+    :param host_peer_id: the peer_id that will host the resource
+    :param destination: The destination of the file on the filesystem. Important: if the destination exists
+        at the moment the class is instantiated, then it's assumed that the caller has the `destination` file
+        and therefore the file will only be shared (and not downloaded)
+    :param resource: the resource class representing the file to be uploaded/downloaded
+    """
+    ...
+```
+## Public methods:
+### The most useful ones:
+```python
+async def full_start(
+    self,
+    restore_previous=True,
+    start_sharing_file=True,
+    start_download=True,
+    open_public_port=True
+) -> int | None:
+    """
+    A method to start the `ResourceManager`. Clients MUST call this method in order to fully start the `ResourceManager`.
+    The method has a bunch of flags that allow clients to adjust the parameters of the `ResourceManager`
+    at the beginning. Any disabled flag can be enabled later by calling the appropriate public method.
+
+    The method is not guaranteed to be idempotent (i.e. repeated calls of `full_start()` on a
+    running ResourceManager may cause exceptions/various errors).
+
+    :return: if `open_public_port` is True then the return value is the same as `open_public_port()`, otherwise None
+    """
+```
+```python
+async def shutdown(self):
+    """
+    A method to stop the running `ResourceManager`. Clients MUST call this method in order to fully stop the
+    running `ResourceManager` and release all associated resources.
+
+    The method is idempotent (repeated calls do not cause any errors/exceptions)
+    """
+    ...
+```
+```python
+async def submit_peers(self, peers: list[PeerInfo]):
+    """
+    The only way to tell the ResourceManager about peers related to the resource. Usually these will be the peers
+    fetched from the tracker response to an announce request with the *same* info hash as the ResourceManager
+    was created with.
+
+    :param peers: The list of peers known to be related to the resource
+    """
+    ...
+```
+```python
+async def get_state(self) -> 'ResourceManager.State':
+    """
+    Get the current state of the resource (i.e. downloaded pieces, upload/download speed etc.)
+    :return: The current state of the resource (file)
+    """
+    ...
+```
+
+### The remaining public methods:
+```python
+async def open_public_port(self) -> int:
+    """
+    Opens a new socket that will be used to accept incoming connections from other peers
+
+    :return: port on which the new socket is opened
+    """
+    ...
+```
+```python
+async def close_public_port(self):
+    """
+    Closes the socket that accepts the new connections (NOTE: after calling this method, the old port received in
+    `open_public_port` cannot be reused anymore as this port is received randomly from the OS)
+    """
+    ...
+```
+```python
+async def restore_previous(self):
+    """
+    Attempts to restore the saved state and start the download with this state (for example, to learn which pieces
+    are already downloaded in order to not repeat the download work).
+
+    NOTE: If the destination file exists, then this method
+    assumes that the file is already completely downloaded
+    (and so sets the status of all pieces to SAVED (i.e. downloaded))
+    """
+    ...
+```
+```python
+async def start_download(self):
+    """
+    Start downloading the file. If the destination file already exists, then this method does nothing. The file
+    will usually be downloaded into a specifically named temporary file located in the same folder as `destination`.
+    Once the ResourceManager detects that the file is completely downloaded,
+    it terminates the download and renames the temporary file into the `destination`.
+    """
+    ...
+```
+```python
+async def stop_download(self):
+    """
+    Stop downloading the file.
+    """
+    ...
+```
+```python
+async def start_sharing_file(self):
+    """
+    Sets the flag that allows the ResourceManager to share the file (file pieces) with other peers.
+    This is the default behaviour.
+    """
+    ...
+```
+```python
+async def stop_sharing_file(self):
+    """
+    Forbid the ResourceManager to share the file (file pieces) with other peers.
+    """
+    ...
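+# Example of typical usage (a hedged sketch, not part of the API above; `my_peer_id`,
+# `resource`, `dest` and `peers_from_tracker` are assumed to be prepared by the caller):
+#
+#     manager = ResourceManager(host_peer_id=my_peer_id, destination=dest, resource=resource)
+#     port = await manager.full_start()          # announce this port to the tracker
+#     await manager.submit_peers(peers_from_tracker)
+#     ...                                        # wait until get_state() reports all pieces saved
+#     await manager.shutdown()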
+```
\ No newline at end of file
diff --git a/client/core/p2p/resource_save.py b/client/core/p2p/resource_save.py
new file mode 100644
index 0000000..dbb0eea
--- /dev/null
+++ b/client/core/p2p/resource_save.py
@@ -0,0 +1,24 @@
+from pathlib import Path
+import json
+
+import aiofiles
+
+from core.common.resource import Resource
+
+
+class ResourceSave:
+    def __init__(self, destination: Path, resource: Resource):
+        self.save_file = \
+            destination.parent.joinpath(f".torrentinno_save-file_{destination.name}_{resource.get_info_hash()}")
+
+    async def remove_save(self):
+        self.save_file.unlink(missing_ok=True)
+
+    async def read_bitfield(self) -> list[bool]:
+        async with aiofiles.open(self.save_file, mode='r') as f:
+            result = json.loads(await f.read())
+            return result
+
+    async def write_bitfield(self, bitfield: list[bool]):
+        async with aiofiles.open(self.save_file, mode='w') as f:
+            await f.write(json.dumps(bitfield))
diff --git a/client/core/s2p/__init__.py b/client/core/s2p/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/client/core/s2p/server_manager.py b/client/core/s2p/server_manager.py
new file mode 100644
index 0000000..1d58872
--- /dev/null
+++ b/client/core/s2p/server_manager.py
@@ -0,0 +1,27 @@
+import requests
+import asyncio
+import logging
+
+
+def update_peer(server_url, peer) -> str:
+    '''
+    Create or update peer information on the tracker server via a POST request
+    '''
+    logging.info(server_url)
+    try:
+        # NOTE: requests is blocking; when called from async code, consider asyncio.to_thread
+        response = requests.post(server_url, json=peer, timeout=5)
+        if response.status_code == 200:
+            logging.info("Peer updated successfully.")
+        else:
+            logging.warning(f"Failed to update peer. Status code: {response.status_code}")
+            logging.warning(f"Response content: {response.text}")  # Log the response content for debugging
+        return response.text
+    except requests.exceptions.RequestException as e:
+        logging.error(f"Error updating peer: {e}")
+        return ''
+
+
+async def heart_beat(server_url, peer, on_tracker_response) -> None:
+    '''
+    Periodically re-announce the peer to the tracker (every 30 seconds)
+    '''
+    while True:
+        response_text = update_peer(server_url, peer)
+        await on_tracker_response(response_text)
+        await asyncio.sleep(30)
diff --git a/client/core/tests/__init__.py b/client/core/tests/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/client/core/tests/advanced_resource_manager_test.py b/client/core/tests/advanced_resource_manager_test.py
new file mode 100644
index 0000000..4dacdd6
--- /dev/null
+++ b/client/core/tests/advanced_resource_manager_test.py
@@ -0,0 +1,164 @@
+import asyncio
+import datetime
+import hashlib
+from pathlib import Path
+import random
+import shutil
+import logging
+
+from core.common.peer_info import PeerInfo
+from core.common.resource import Resource
+from core.p2p.resource_file import ResourceFile
+from core.p2p.resource_manager import ResourceManager
+from core.p2p.resource_save import ResourceSave
+
+random.seed(0)
+
+
+def random_bits(size) -> bytes:
+    return bytes(random.randint(0, 255) for _ in range(size))
+
+
+def random_peer_id() -> str:
+    return random_bits(32).hex()
+
+
+async def simulate_ownership(bitfield: list[bool], data: list[bytes], destination: Path, resource: Resource):
+    resource_file = ResourceFile(destination, resource)
+    resource_save = ResourceSave(destination, resource)
+    await resource_save.write_bitfield(bitfield)
+    for i, piece_status in enumerate(bitfield):
+        if piece_status:
+            await resource_file.save_validated_piece(i, data[i])
+
+
+test_run = 1
+
+
+async def main():
+    logging.basicConfig(level=logging.DEBUG)
+
+    # Temporary directory
+    tmp = Path(__file__).parent.joinpath('tmp')
+    # shutil.rmtree(tmp, ignore_errors=True)
+    tmp.mkdir(parents=True, exist_ok=True)
+
+    # Generate stub data
+    data: list[bytes] = [random_bits(random.randint(100, 1000)) for _ in range(10)]
+    pieces: list[Resource.Piece] = [
+        Resource.Piece(
+            sha256=hashlib.sha256(piece_data).hexdigest(),
+            size_bytes=len(piece_data)
+        )
+        for piece_data in data
+    ]
+    resource = Resource(
+        tracker_ip='0.0.0.0',
+        tracker_port=8080,
+        comment='Test file',
+        creation_date=datetime.datetime(year=2000, month=1, day=1, hour=1, minute=1, second=1),
+        name='Random testing file',
+        pieces=pieces
+    )
+
+    source_peer_id = random_peer_id()  # peer_id is unique PER PEER (not per ResourceManager)
+    source_destination = tmp.joinpath('source', resource.name)
+    source_destination.parent.mkdir(parents=True, exist_ok=True)
+
+    # Example where the initial peer has only some parts
+    source_bitfield = [False] * len(resource.pieces)
+    source_bitfield[0] = True
+    source_bitfield[1] = True
+    source_bitfield[-1] = True
+    await simulate_ownership(source_bitfield, data, source_destination, resource)
+
+    # Set up source peer. This can be used as an example of working with ResourceManager
+    source_resource_manager = ResourceManager(source_peer_id, source_destination, resource)
+    source_port = await source_resource_manager.full_start()
+    source_peer_info = PeerInfo('127.0.0.1', source_port, source_peer_id)
+
+    consumer_peer_ids = [random_peer_id() for _ in range(5)]
+    consumer_destinations = [tmp.joinpath(peer_id, resource.name) for peer_id in consumer_peer_ids]
+    for consumer_destination in consumer_destinations:
+        consumer_destination.parent.mkdir(parents=True, exist_ok=True)
+
+    # For the first consumer, simulate ownership of some other parts
+    consumer_0_bitfield = [False] * len(resource.pieces)
+    consumer_0_bitfield[1] = True
+    consumer_0_bitfield[2] = True
+    consumer_0_bitfield[3] = True
+    consumer_0_bitfield[-2] = True
+    await simulate_ownership(consumer_0_bitfield, data, consumer_destinations[0], resource)
+
+    consumer_resource_managers = [
+        ResourceManager(consumer_peer_id, consumer_destination, resource)
+        for consumer_peer_id, consumer_destination in zip(consumer_peer_ids, consumer_destinations)
+    ]
+    consumer_ports = [
+        await resource_manager.full_start()
+        for resource_manager in consumer_resource_managers
+    ]
+    consumer_peer_infos = [
+        PeerInfo('127.0.0.1', port, peer_id)
+        for port, peer_id in zip(consumer_ports, consumer_peer_ids)
+    ]
+
+    all_peer_infos = consumer_peer_infos + [source_peer_info]
+
+    for resource_manager in consumer_resource_managers:
+        await resource_manager.start_download()
+
+    # For all the peers, submit the PeerInfo list
+    await asyncio.gather(
+        source_resource_manager.submit_peers(all_peer_infos),
+        *(
+            consumer_resource_manager.submit_peers(all_peer_infos)
+            for consumer_resource_manager in consumer_resource_managers
+        )
+    )
+
+    await asyncio.sleep(5)
+
+    print("Source peer disconnected!")
+    await source_resource_manager.shutdown()  # Sudden shutdown of source resource manager
+
+    await asyncio.sleep(2)
+    print("Adding new sudden peer")
+
+    sudden_peer_id = random_peer_id()
+    sudden_peer_destination = tmp.joinpath(sudden_peer_id, resource.name)
+    sudden_peer_destination.parent.mkdir(parents=True)
+    sudden_peer_bitfield = [False] * len(resource.pieces)
+    sudden_peer_bitfield[3:-2] = [True] * len(sudden_peer_bitfield[3:-2])
+    await simulate_ownership(sudden_peer_bitfield, data, sudden_peer_destination, resource)
+    sudden_peer_resource_manager = ResourceManager(sudden_peer_id, sudden_peer_destination, resource)
+    sudden_peer_port = await sudden_peer_resource_manager.full_start()
+    sudden_peer_info = PeerInfo('127.0.0.1', sudden_peer_port, sudden_peer_id)
+
+    all_peer_infos = all_peer_infos + [sudden_peer_info]
+
+    # For all the peers, submit the PeerInfo list (yes, again)
+    await asyncio.gather(
+        source_resource_manager.submit_peers(all_peer_infos),
+        *(
+            consumer_resource_manager.submit_peers(all_peer_infos)
+            for consumer_resource_manager in consumer_resource_managers
+        )
+    )
+
+
+if __name__ == "__main__":
+    try:
+        loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(loop)
+        loop.create_task(main())
+        loop.run_forever()
+        # The lines below run only after the loop has been stopped elsewhere
+        print("\nSTOPPING LOOP\n")
+        loop.stop()
+        print("\nSTARTING AGAIN, NOW WITH SAVED DATA\n")
+        loop.create_task(main())
+    finally:
+        tmp = Path(__file__).parent.joinpath('tmp')
+        shutil.rmtree(tmp, ignore_errors=True)
diff --git a/client/core/tests/mocks.py b/client/core/tests/mocks.py
new file mode 100644
index 0000000..f4eeabf
--- /dev/null
+++ b/client/core/tests/mocks.py
@@ -0,0 +1,34 @@
+import datetime
+
+from core.p2p.message import Request, Piece, Bitfield
+from core.common.resource import Resource
+
+mock_resource = Resource(
+    tracker_ip="127.0.0.1",
+    tracker_port=8080,
+    comment="Test torrent for unit testing",
+    creation_date=datetime.datetime(2025, 4, 26, 15, 30, 0),
+    name="sample_file.txt",
+    pieces=[
+        Resource.Piece(sha256="a" * 64, size_bytes=512),
+        Resource.Piece(sha256="b" * 64, size_bytes=1500),
+        Resource.Piece(sha256="c" * 64, size_bytes=768)
+    ]
+)
+
+mock_request = Request(
+    piece_index=1,
+    piece_inner_offset=100 * 200,
+    block_length=1337
+)
+
+mock_piece = Piece(
+    piece_index=0,
+    piece_inner_offset=400,
+    block_length=64,
+    data=b'a' * 64
+)
+
+mock_bitfield = Bitfield(
+    bitfield=[False, True]
+)
diff --git a/client/core/tests/resource_manager_test.py b/client/core/tests/resource_manager_test.py
new file mode 100644
index 0000000..949744f
--- /dev/null
+++ b/client/core/tests/resource_manager_test.py
@@ -0,0 +1,132 @@
+import asyncio
+import hashlib
+import datetime
+from itertools import accumulate
+from pathlib import Path
+import random
+import shutil
+import logging
+import time
+import aiofiles
+
+from core.common.peer_info import PeerInfo
+from core.common.resource import Resource
+from core.p2p.resource_manager import ResourceManager
+
+
+def random_bytes(size) -> bytes:
+    return bytes(random.randint(0, 255) for _ in range(size))
+
+
+def random_peer_id() -> str:
+    return random_bytes(32).hex()
+
+
+async def create_peer(peer_id: str, destination: Path, resource) -> tuple[PeerInfo, ResourceManager]:
+    destination.parent.mkdir(parents=True, exist_ok=True)
+    resource_manager = ResourceManager(peer_id, destination, resource)
+    port = await resource_manager.full_start()
+    peer_info = PeerInfo('127.0.0.1', port, peer_id)
+    return peer_info, resource_manager
+
+
+def setup_logging():
+    log_file = Path(__file__).parent.joinpath("resource_manager_test.log")
+    logging.basicConfig(
+        filename=log_file,
+        level=logging.DEBUG
+    )
+
+
+async def main():
+    setup_logging()
+
+    # Temporary directory and necessary file tree manipulations
+    tmp = Path(__file__).parent.joinpath('tmp')
+    shutil.rmtree(tmp, ignore_errors=True)
+    tmp.mkdir(parents=True)
+
+    # Make the randomizer deterministic
+    random.seed(0)
+
+    # Generate the initial file
+    source_peer_id = random_peer_id()
+    source_peer_destination = tmp.joinpath('source', 'data')
+    source_peer_destination.parent.mkdir(parents=True)
+    piece_sizes = [random.randint(5 * 10 ** 5, 10 ** 6) for _ in range(10)]
+    offset = [0] + list(accumulate(piece_sizes))
+
+    with open(source_peer_destination, mode='wb') as file:
+        file.seek(offset[-1] - 1)
+        file.write(b'\0')
+
+    print("Start generating file...")
+    pieces = []
+    async with aiofiles.open(source_peer_destination, mode='r+b') as file:
+        for i in range(len(piece_sizes)):
+            await file.seek(offset[i])
+            arr = random_bytes(min(100, piece_sizes[i]))
+            data = arr * (piece_sizes[i] // len(arr)) + b'\0' * (piece_sizes[i] % len(arr))
+            assert len(data) == piece_sizes[i]
+            piece = Resource.Piece(
+                sha256=hashlib.sha256(data).hexdigest(),
+                size_bytes=len(data)
+            )
+            pieces.append(piece)
+            await file.write(data)
+
+    assert source_peer_destination.stat().st_size == offset[-1]
+
+    resource = Resource(
+        tracker_ip='0.0.0.0',
+        tracker_port=8080,
+        comment='Test file',
+        creation_date=datetime.datetime(year=2000, month=1, day=1, hour=1, minute=1, second=1),
+        name='Random testing file',
+        pieces=pieces
+    )
+
+    source_peer_info, source_resource_manager = await create_peer(
+        source_peer_id,
+        source_peer_destination,
+        resource
+    )
+
+    # Now create consumer peers
+    consumer_peer_ids = [random_peer_id() for _ in range(5)]
+    consumer_destinations = [
+        tmp.joinpath(consumer_peer_id, resource.name)
+        for consumer_peer_id in consumer_peer_ids
+    ]
+    consumer_tuples = [
+        await create_peer(
+            consumer_peer_id,
+            consumer_destination,
+            resource
+        ) for consumer_peer_id, consumer_destination in zip(consumer_peer_ids, consumer_destinations)
+    ]
+    consumer_peer_infos = [t[0] for t in consumer_tuples]
+    consumer_resource_managers = [t[1] for t in consumer_tuples]
+
+    all_peer_infos = [source_peer_info] + consumer_peer_infos
+
+    # Submit the info about peers to all peers
+    await asyncio.gather(
+        source_resource_manager.submit_peers(all_peer_infos),
+        *(
+            consumer_resource_manager.submit_peers(all_peer_infos)
+            for consumer_resource_manager in consumer_resource_managers
+        )
+    )
+
+
+if __name__ == "__main__":
+    try:
+        loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(loop)
+        loop.create_task(main())
+        loop.run_forever()
+    finally:
+        tmp = Path(__file__).parent.joinpath('tmp')
+        shutil.rmtree(tmp, ignore_errors=True)
diff --git a/client/core/tests/test_connection.py b/client/core/tests/test_connection.py
new file mode 100644
index 0000000..d8635f0
--- /dev/null
+++ b/client/core/tests/test_connection.py
@@ -0,0 +1,71 @@
+import asyncio
+
+import pytest
+
+from core.p2p.connection import Connection
+from core.p2p.connection_listener import ConnectionListener
+from core.p2p.message import Piece, Request, Bitfield
+from core.tests.mocks import mock_resource, mock_request, mock_piece, mock_bitfield
+from core.common.resource import Resource
+
+
+async def get_connections(resource: Resource) -> tuple[Connection, Connection]:
+    queue = asyncio.Queue()
+
+    async def handle_client(reader, writer):
+        await queue.put((reader, writer))
+
+    server = await asyncio.start_server(handle_client, '127.0.0.1', 0)
+    host, port = server.sockets[0].getsockname()
+    client_reader, client_writer = await asyncio.open_connection(host, port)
+    server_reader, server_writer = await queue.get()
+
+    client_connection = Connection(client_reader, client_writer, resource)
+    server_connection = Connection(server_reader, server_writer, resource)
+    return client_connection, server_connection
+
+
+@pytest.mark.asyncio
+async def test_connection():
+    print()
+
+    connections = await get_connections(mock_resource)
+    sender: Connection = connections[0]
+    receiver: Connection = connections[1]
+
+    class SenderListener(ConnectionListener):
+        async def on_piece(self, piece: Piece):
+            print(piece)
+            assert piece == mock_piece
+
+        async def on_bitfield(self, bitfield: Bitfield):
+            print(bitfield)
+            assert bitfield == mock_bitfield
+
+        async def on_close(self, cause):
+            print(f"SenderListener; onClose: {cause}")
+
+    class ReceiverListener(ConnectionListener):
+        async def on_request(self, request: Request) -> bytes:
+            print(request)
+            assert request == mock_request
+            return b''
+
+        async def on_close(self, cause):
+            print(f"ReceiverListener; onClose: {cause}")
+
+    sender_listener = SenderListener()
+    receiver_listener = ReceiverListener()
+    sender.add_listener(sender_listener)
+    receiver.add_listener(receiver_listener)
+
+    await receiver.listen()
+    await sender.listen()
+
+    await sender.send_message(mock_request)
+    await receiver.send_message(mock_piece)
+    await receiver.send_message(mock_bitfield)
+
+    await sender.close()
+    await receiver.close()
+
diff --git a/client/core/tests/test_message.py b/client/core/tests/test_message.py
new file mode 100644
index 0000000..c14cdc3
--- /dev/null
+++ b/client/core/tests/test_message.py
@@ -0,0 +1,49 @@
+from core.p2p.message import Handshake, Request, Piece, Bitfield
+
+
+def test_handshake_to_bytes():
+    peer_id = 'aabbccddeeff00112233445566778899'
+    info_hash = '99887766554433221100ffeeddccbbaa'
+    handshake = Handshake(peer_id, info_hash)
+    expected = (
+        'TorrentInno'.encode()
+        + bytes.fromhex(peer_id)
+        + bytes.fromhex(info_hash)
+    )
+    assert handshake.to_bytes() == expected
+
+
+def test_request_to_bytes():
+    request = Request(10, 1024, 500)
+    expected = (
+        (13).to_bytes(4, 'big')
+        + (1).to_bytes(1, 'big')
+        + request.piece_index.to_bytes(4, 'big')
+        + request.piece_inner_offset.to_bytes(4, 'big')
+        + request.block_length.to_bytes(4, 'big')
+    )
+    assert request.to_bytes() == expected
+
+
+def test_piece_to_bytes():
+    piece = Piece(10, 1024, 4, b'102b')
+    expected = (
+        (17).to_bytes(4, 'big')
+        + (2).to_bytes(1, 'big')
+        + piece.piece_index.to_bytes(4, 'big')
+        + piece.piece_inner_offset.to_bytes(4, 'big')
+        + piece.block_length.to_bytes(4, 'big')
+        + piece.data
+    )
+    assert piece.to_bytes() == expected
+
+
+def test_bitfield_to_bytes():
+    bitfield = Bitfield([True, True, False, False, True, True, True, False, True, False])
+    expected = (
+        (3).to_bytes(4, 'big')
+        + (3).to_bytes(1, 'big')
+        + bytes([206, 128])
+    )
+
+    assert bitfield.to_bytes() == expected
diff --git a/client/core/tests/test_resource_file.py b/client/core/tests/test_resource_file.py
new file mode 100644
index 0000000..fececb8
--- /dev/null
+++ b/client/core/tests/test_resource_file.py
@@ -0,0 +1,43 @@
+import aiofiles
+import pytest
+import random
+import asyncio
+from pathlib import Path
+from core.p2p.resource_file import ResourceFile
+from core.tests.mocks import mock_resource
+
+
+@pytest.mark.asyncio
+async def test_resource_file(tmp_path):
+    data: list[bytes] = []
+    for piece in mock_resource.pieces:
+        data.append(bytes(random.randint(0, 255) for _ in range(piece.size_bytes)))
+
+    destination = tmp_path / 'test_file'
+
+    resource_file = ResourceFile(destination, mock_resource)
+
+    async def write_piece(piece_index: int, piece_data: bytes):
+        await resource_file.save_validated_piece(piece_index, piece_data)
+
+    # Write all the pieces concurrently
+    await asyncio.gather(
+        *(write_piece(i, piece_data) for i, piece_data in enumerate(data))
+    )
+
+    # Accept the download (simulate renaming)
+    await resource_file.accept_download()
+
+    # Ensure the file content is correctly saved and pieces are correctly fetched by resource_file
+    async with aiofiles.open(destination, mode='rb') as f:
+        for piece_index, piece_data in enumerate(data):
+            on_file = await f.read(len(piece_data))
+            from_resource_file = await resource_file.get_piece(piece_index)
+            assert on_file == piece_data
+            assert piece_data == from_resource_file
+
+    # Ensure the downloading resource is moved and no longer exists
+    assert not resource_file.get_downloading_destination().exists()
+
+    with pytest.raises(Exception):
+        await resource_file.save_validated_piece(2, "\x02\xa0".encode())
diff --git a/client/core/tests/torrentInno_test.py b/client/core/tests/torrentInno_test.py
new file mode 100644
index 0000000..a6d127d
--- /dev/null
+++ b/client/core/tests/torrentInno_test.py
@@ -0,0 +1,25 @@
+import asyncio
+from torrentInno import TorrentInno, create_resource_json, create_resource_from_json
+from core.common.resource import Resource
+import logging
+
+logging.basicConfig(level=logging.DEBUG)
+
+async def main():
+    client1 = TorrentInno()
+    print(client1.peer_id)
+    client2 = TorrentInno()
+    print(client2.peer_id)
+
+    # NOTE: hard-coded local paths; adjust before running
+    resource_json = create_resource_json('Lab_8_Docker.html', 'presentation for 5', '/home/setterwars/Downloads/Lab_8_Docker.html')
+    resource: Resource = create_resource_from_json(resource_json)
+
+    await client1.start_share_file('/home/setterwars/Downloads/Lab_8_Docker.html', resource)
+    await asyncio.sleep(2)
+    await client2.start_download_file('/home/setterwars/Documents/Lab_8_Docker.html', resource)
+
+if __name__ == "__main__":
+    loop = asyncio.new_event_loop()
+    asyncio.set_event_loop(loop)
+    loop.create_task(main())
+    loop.run_forever()
\ No newline at end of file
diff --git a/client/gui/app.py b/client/gui/app.py
new file mode 100644
index 0000000..d944998
--- /dev/null
+++ b/client/gui/app.py
@@ -0,0 +1,689 @@
+from kivymd.app import MDApp
+from kivy.clock import Clock
+from kivy.uix.screenmanager import ScreenManager, Screen
+from kivy.properties import ListProperty, StringProperty, NumericProperty
+from kivy.metrics import dp
+from kivymd.uix.boxlayout import MDBoxLayout
+import torrent_manager
+from os.path import expanduser
+
+class TorrentFileItem(MDBoxLayout):
+    """Class representing a single torrent file in the list"""
+    file_name = StringProperty('')
+    file_size = StringProperty('')
+    file_type = StringProperty('')
+    download_speed = StringProperty('')
+    upload_speed = StringProperty('')
+    blocks = ListProperty([])
+    index = NumericProperty(-1)  # Store the index of this item in the files list
+
+    def __init__(self, **kwargs):
+        # Initialize blocks properly to avoid the shared-list issue
+        blocks_data = kwargs.pop('blocks', [0])
+        self.index = kwargs.pop('index', -1)  # Get the index from kwargs
+        super(TorrentFileItem, self).__init__(**kwargs)
+        self.blocks = blocks_data
+        # Variables for double-click detection
+        self.last_click_time = 0
+        self.double_click_timeout = 0.3  # 300 ms window for double-click detection
+
+    def on_touch_down(self, touch):
+        """Handle touch down event"""
+        if self.collide_point(*touch.pos):
+            # Start a clock to detect long press
+            touch.ud['long_press'] = Clock.schedule_once(lambda dt: self.on_long_press(touch), 0.5)  # 500 ms for long press
+
+            # Double-click detection
+            current_time = Clock.get_time()
+            if current_time - self.last_click_time < self.double_click_timeout:
+                # This is a double click
+                self.on_double_click()
+                # Cancel the long press detection since we've detected a double click
+                if 'long_press' in touch.ud:
+                    Clock.unschedule(touch.ud['long_press'])
+            self.last_click_time = current_time
+
+        return super(TorrentFileItem, self).on_touch_down(touch)
+
+    def on_touch_up(self, touch):
+        """Handle touch up event"""
+        if 'long_press' in touch.ud:
+            # Cancel the long press clock if touch is released
+            Clock.unschedule(touch.ud['long_press'])
+        return super(TorrentFileItem, self).on_touch_up(touch)
+
+    def on_double_click(self):
+        """Handle double click event"""
+        # Show the same options dialog as for long press
+        self.show_options_dialog()
+
+    def on_long_press(self, touch):
+        """Handle long press event"""
+        if self.collide_point(*touch.pos):
+            self.show_options_dialog()
+
+    def show_options_dialog(self):
+        """Show options dialog for the torrent file"""
+        from kivymd.uix.dialog import MDDialog
+        from kivymd.uix.button import MDFlatButton
+
+        self.dialog = MDDialog(
+            title=f"Действия с {self.file_name}",
+            text="Выберите действие:",
+            buttons=[
+                MDFlatButton(
+                    text="Удалить",
+                    on_release=lambda x: self.delete_item()
+                ),
+                MDFlatButton(
+                    text="Назад",
+                    on_release=lambda x: self.dialog.dismiss()
+                ),
+            ],
+        )
+        self.dialog.open()
+
+    def delete_item(self):
+        """Delete this item and remove the physical file if it exists"""
+        # Close the dialog
+        self.dialog.dismiss()
+
+        # Get the app instance and main screen
+        app = MDApp.get_running_app()
+        main_screen = app.root.get_screen('main')
+
+        # Get the file name from the current item
+        file_name = self.file_name
+
+        # Try to delete the physical file if it exists
+        try:
+            # Assume the file is in the Downloads folder
+            from os.path import expanduser, join
+            import os
+
+            # Get the default download path
+            download_path = expanduser("~/Downloads")
+            file_path = join(download_path, file_name)
+
+            # Check if file exists and delete it
+            if os.path.exists(file_path):
+                os.remove(file_path)
+                from kivymd.toast import toast
+                toast(f"Файл {file_name} успешно удален")
+        except PermissionError:
+            # Handle permission error
+            from kivymd.toast import toast
+            toast(f"Недостаточно прав для удаления файла {file_name}")
+        except Exception as e:
+            # Handle other errors
+            from kivymd.toast import toast
+            toast(f"Ошибка при удалении файла: {str(e)}")
+
+        # Call the remove_torrent method of MainScreen to update the UI and torrent_state.json
+        main_screen.remove_torrent(self.index)
+
+class MainScreen(Screen):
+    """Main screen of the application showing the list of torrent files"""
+    files = ListProperty([])
+
+    def __init__(self, **kwargs):
+        super(MainScreen, self).__init__(**kwargs)
+        # Initialize the torrent manager
+        torrent_manager.initialize()
+        # Get the file list from the torrent manager
+        self.files = torrent_manager.get_files()
+        # Start the clock to update download progress every second
+        Clock.schedule_interval(self.update_progress, 1)
+
+    def on_kv_post(self, base_widget):
+        """Called after the kv file is loaded"""
+        self.update_file_list()
+
+    def update_progress(self, dt):
+        """Update the download progress of files"""
+        self.files = torrent_manager.update_files()
+        self.update_file_list()
+
+    def update_file_list(self):
+        """Update the file list with current data"""
+        file_list = self.ids.file_list
+        file_list.clear_widgets()
+
+        for i, file in enumerate(self.files):
+            item = TorrentFileItem(
+                file_name=file['name'],
+                file_size=file['size'],
+                file_type=file['type'],
+                download_speed=file['download_speed'],
upload_speed=file['upload_speed'], + blocks=file['blocks'].copy(), # Use a copy to prevent reference issues + index=i # Pass the index to the item + ) + + # Create progress blocks + progress_container = item.ids.progress_container + progress_container.clear_widgets() + + # Use item.blocks instead of file['blocks'] to avoid duplication + for i, block in enumerate(item.blocks): + block_widget = MDBoxLayout(size_hint_x=1/len(item.blocks)) + if block == 1: # Downloaded block + block_widget.md_bg_color = (0.3, 0.8, 0.3, 1) # Green + else: # Not downloaded block + block_widget.md_bg_color = (1, 1, 1, 1) # White + + progress_container.add_widget(block_widget) + + file_list.add_widget(item) + + def on_back_pressed(self): + """Handle back button press - exit the application""" + # Получаем экземпляр приложения и завершаем его работу + app = MDApp.get_running_app() + app.stop() + + def show_info(self): + """Show app information - open GitHub repository""" + import webbrowser + webbrowser.open('https://github.com/Timofeq1/TorrentInno') + + def show_menu(self): + """Show app menu""" + pass # this would show a menu + + def add_torrent(self): + """Add a new torrent""" + from kivymd.uix.dialog import MDDialog + from kivymd.uix.button import MDFlatButton + from kivymd.uix.boxlayout import MDBoxLayout + from kivymd.uix.label import MDLabel + from kivy.metrics import dp + + # Create a custom content for the dialog + self.dialog_content = MDBoxLayout( + orientation='vertical', + spacing=dp(10), + padding=dp(10), + adaptive_height=True + ) + + # Add a label with instructions + self.dialog_content.add_widget(MDLabel( + text="Выберите действие:", + size_hint_y=None, + height=dp(48) + )) + + # Create the dialog + self.add_dialog = MDDialog( + title="Добавить торрент", + type="custom", + content_cls=self.dialog_content, + buttons=[ + MDFlatButton( + text="ВЫГРУЗИТЬ", + on_release=self.open_upload_file_manager + ), + MDFlatButton( + text="ЗАГРУЗИТЬ", + on_release=self.open_download_dialog 
+ ), + MDFlatButton( + text="ОТМЕНА", + on_release=lambda x: self.add_dialog.dismiss() + ), + ], + ) + + # Show the dialog + self.add_dialog.open() + + def open_upload_file_manager(self, *args): + """Open file manager to select a file to share""" + from kivymd.uix.filemanager import MDFileManager + + # Initialize file manager + self.file_manager = MDFileManager( + exit_manager=self.exit_file_manager, + select_path=self.select_file_to_share, + ) + + # Set the starting path to user's home directory + home_dir = expanduser("~") + self.add_dialog.dismiss() + self.file_manager.show(home_dir) + + def open_download_dialog(self, *args): + """Open dialog to download a torrent""" + from kivymd.uix.dialog import MDDialog + from kivymd.uix.button import MDFlatButton + from kivymd.uix.textfield import MDTextField + from kivymd.uix.boxlayout import MDBoxLayout + from kivymd.uix.filemanager import MDFileManager + + # Dismiss the previous dialog + self.add_dialog.dismiss() + + # Create a custom content for the dialog + self.download_content = MDBoxLayout( + orientation='vertical', + spacing=dp(10), + padding=dp(10), + adaptive_height=True + ) + + # Add a text field for JSON input + self.json_field = MDTextField( + hint_text="Вставьте JSON метаданные", + multiline=True, + size_hint_y=None, + height=dp(100) + ) + self.download_content.add_widget(self.json_field) + + # Create the dialog + self.download_dialog = MDDialog( + title="Загрузить торрент", + type="custom", + content_cls=self.download_content, + buttons=[ + MDFlatButton( + text="ВЫБРАТЬ JSON ФАЙЛ", + on_release=self.open_json_file_manager + ), + MDFlatButton( + text="ОТМЕНА", + on_release=lambda x: self.download_dialog.dismiss() + ), + MDFlatButton( + text="ЗАГРУЗИТЬ", + on_release=self.process_json_input + ), + ], + ) + + # Initialize file manager for JSON selection + self.json_file_manager = MDFileManager( + exit_manager=self.exit_json_file_manager, + select_path=self.select_json_file, + ) + + # Show the dialog + 
self.download_dialog.open() + + def exit_file_manager(self, *args): + """Close the file manager""" + self.file_manager.close() + + def exit_json_file_manager(self, *args): + """Close the JSON file manager""" + self.json_file_manager.close() + + def select_file_to_share(self, path): + """Handle file selection for sharing""" + self.file_manager.close() + + # Save the selected file path + self.selected_file_path = path + + # Show dialog to enter comment and name + self.show_resource_creation_dialog(path) + + def show_resource_creation_dialog(self, file_path): + """Show dialog to enter comment and name for resource creation""" + import os + from kivymd.uix.dialog import MDDialog + from kivymd.uix.button import MDFlatButton + from kivymd.uix.textfield import MDTextField + from kivymd.uix.boxlayout import MDBoxLayout + from kivymd.uix.label import MDLabel + + # Get the file name + file_name = os.path.basename(file_path) + + # Create content for the dialog + content = MDBoxLayout( + orientation='vertical', + spacing=dp(10), + padding=dp(10), + size_hint_y=None, + height=dp(200) + ) + + # Add fields for comment and name + content.add_widget(MDLabel( + text="Введите комментарий к файлу:", + size_hint_y=None, + height=dp(30) + )) + + self.comment_field = MDTextField( + hint_text="Комментарий", + text="", + size_hint_y=None, + height=dp(48) + ) + content.add_widget(self.comment_field) + + content.add_widget(MDLabel( + text="Введите имя файла (оставьте пустым для использования исходного имени):", + size_hint_y=None, + height=dp(30) + )) + + self.name_field = MDTextField( + hint_text="Имя файла", + text=file_name, + size_hint_y=None, + height=dp(48) + ) + content.add_widget(self.name_field) + + # Create the dialog + self.resource_dialog = MDDialog( + title="Создание ресурса", + type="custom", + content_cls=content, + buttons=[ + MDFlatButton( + text="ОТМЕНА", + on_release=lambda x: self.resource_dialog.dismiss() + ), + MDFlatButton( + text="СОЗДАТЬ", + 
on_release=self.create_and_share_resource + ), + ], + ) + + self.resource_dialog.open() + + def open_json_file_manager(self, *args): + """Open file manager to select a JSON file""" + home_dir = expanduser("~") + # Установка фильтров для отображения JSON файлов + self.json_file_manager.ext = [".json"] + self.json_file_manager.show(home_dir) + + def select_json_file(self, path): + """Handle JSON file selection""" + self.json_file_manager.close() + + # Check if the file is a JSON file + if path.endswith('.json'): + try: + import json + with open(path, 'r') as f: + json_data = json.load(f) + self.json_field.text = json.dumps(json_data, indent=2) + except Exception as e: + from kivymd.toast import toast + toast(f"Ошибка при чтении JSON файла: {str(e)}") + else: + from kivymd.toast import toast + toast("Выбранный файл не является JSON файлом") + + def process_json_input(self, *args): + """Process the JSON input and start download""" + import json + from kivymd.toast import toast + + json_text = self.json_field.text.strip() + if not json_text: + toast("Введите JSON метаданные или выберите файл") + return + + try: + # Parse the JSON + resource_json = json.loads(json_text) + + # Show dialog to select save path + self.show_save_path_dialog(resource_json) + except json.JSONDecodeError: + toast("Некорректный формат JSON") + + def show_save_path_dialog(self, resource_json): + """Show dialog to select save path""" + from kivymd.uix.dialog import MDDialog + from kivymd.uix.button import MDFlatButton + from kivymd.uix.textfield import MDTextField + from kivymd.uix.boxlayout import MDBoxLayout + from kivymd.uix.label import MDLabel + + # Dismiss the previous dialog + self.download_dialog.dismiss() + + # Create content for the dialog + content = MDBoxLayout( + orientation='vertical', + spacing=dp(10), + padding=dp(10), + size_hint_y=None, + height=dp(100) + ) + + # Add a label for save path + content.add_widget(MDLabel( + text="Путь для сохранения файла:", + size_hint_y=None, + 
height=dp(30) + )) + + self.save_path_field = MDTextField( + text=expanduser("~/Downloads/") + resource_json.get("name", "downloaded_file"), + size_hint_y=None, + height=dp(48) + ) + content.add_widget(self.save_path_field) + + # Create the dialog + self.save_path_dialog = MDDialog( + title="Выберите путь сохранения", + type="custom", + content_cls=content, + buttons=[ + MDFlatButton( + text="ОТМЕНА", + on_release=lambda x: self.save_path_dialog.dismiss() + ), + MDFlatButton( + text="ЗАГРУЗИТЬ", + on_release=lambda x: self.start_download_with_resource(resource_json) + ), + ], + ) + + self.save_path_dialog.open() + + def create_and_share_resource(self, *args): + """Create resource from file and start sharing""" + import os + import json + from kivymd.toast import toast + + # Get the values from fields + comment = self.comment_field.text + name = self.name_field.text if self.name_field.text else None + file_path = self.selected_file_path + + try: + # Create resource JSON + resource_json = torrent_manager.create_resource_from_file(file_path, comment, name) + + # Start sharing the file + file_info = torrent_manager.start_sharing_file(file_path, resource_json) + + # Close the dialog + self.resource_dialog.dismiss() + + # Show dialog to save or copy the resource JSON + self.show_resource_save_dialog(resource_json, file_path) + + # Update the file list + self.files = torrent_manager.get_files() + self.update_file_list() + except Exception as e: + toast(f"Ошибка при создании ресурса: {str(e)}") + self.resource_dialog.dismiss() + + def show_resource_save_dialog(self, resource_json, file_path): + """Show dialog to save or copy the resource JSON""" + import os + import json + from kivymd.uix.dialog import MDDialog + from kivymd.uix.button import MDFlatButton + from kivymd.uix.textfield import MDTextField + from kivymd.uix.boxlayout import MDBoxLayout + from kivymd.uix.label import MDLabel + from kivy.uix.scrollview import ScrollView + + # Create content for the dialog + 
content = MDBoxLayout( + orientation='vertical', + spacing=dp(10), + padding=dp(10), + size_hint_y=None, + height=dp(300) + ) + + # Add a label with instructions + content.add_widget(MDLabel( + text="Ресурс успешно создан. Вы можете скопировать JSON или сохранить его в файл:", + size_hint_y=None, + height=dp(40) + )) + + # Add a text field with the JSON + json_text = json.dumps(resource_json, indent=2) + json_field = MDTextField( + text=json_text, + multiline=True, + readonly=True, + size_hint_y=None, + height=dp(200) + ) + + # Add the text field to a scroll view + scroll = ScrollView() + scroll.add_widget(json_field) + content.add_widget(scroll) + + # Generate default save path + file_name = os.path.basename(file_path) + default_save_path = os.path.splitext(file_path)[0] + "_meta.json" + + # Add a text field for save path + self.json_save_path = MDTextField( + hint_text="Путь для сохранения JSON", + text=default_save_path, + size_hint_y=None, + height=dp(48) + ) + content.add_widget(self.json_save_path) + + # Create the dialog + self.resource_save_dialog = MDDialog( + title="Сохранение ресурса", + type="custom", + content_cls=content, + buttons=[ + MDFlatButton( + text="КОПИРОВАТЬ", + on_release=lambda x: self.copy_to_clipboard(json_text) + ), + MDFlatButton( + text="СОХРАНИТЬ", + on_release=lambda x: self.save_resource_json(resource_json) + ), + MDFlatButton( + text="ЗАКРЫТЬ", + on_release=lambda x: self.resource_save_dialog.dismiss() + ), + ], + ) + + self.resource_save_dialog.open() + + def copy_to_clipboard(self, text): + """Copy text to clipboard""" + from kivy.core.clipboard import Clipboard + from kivymd.toast import toast + + Clipboard.copy(text) + toast("JSON скопирован в буфер обмена") + + def save_resource_json(self, resource_json): + """Save resource JSON to file""" + import json + from kivymd.toast import toast + + save_path = self.json_save_path.text + try: + with open(save_path, 'w') as f: + json.dump(resource_json, f, indent=4) + toast(f"JSON 
сохранен в {save_path}") + self.resource_save_dialog.dismiss() + except Exception as e: + toast(f"Ошибка при сохранении JSON: {str(e)}") + + def start_download_with_resource(self, resource_json): + """Start downloading file with resource JSON""" + from kivymd.toast import toast + + # Get the save path + save_path = self.save_path_field.text + + try: + # Start downloading the file + file_info = torrent_manager.start_download_file(save_path, resource_json) + + # Close the dialog + self.save_path_dialog.dismiss() + + # Update the file list + self.files = torrent_manager.get_files() + self.update_file_list() + + toast(f"Начата загрузка файла {resource_json.get('name', 'unknown')}") + except Exception as e: + toast(f"Ошибка при загрузке файла: {str(e)}") + self.save_path_dialog.dismiss() + + def remove_torrent(self, index=None): + """Remove a torrent and save changes to torrent_state.json""" + if index is not None and 0 <= index < len(self.files): + # Получаем имя файла для удаления + file_name = self.files[index]['name'] + # Удаляем торрент через менеджер + torrent_manager.remove_torrent(file_name) + # Обновляем локальный список файлов + self.files = torrent_manager.get_files() + # Обновляем отображение + self.update_file_list() + # Сохраняем изменения в torrent_state.json + torrent_manager.shutdown() + +class TorrentInnoApp(MDApp): + def build(self): + self.theme_cls.primary_palette = "BlueGray" + self.theme_cls.accent_palette = "Teal" + self.theme_cls.theme_style = "Light" + + return super().build() + + def on_stop(self): + """Вызывается при закрытии приложения""" + # Сохраняем состояние торрентов + torrent_manager.shutdown() + + """ [DEPRECATED] + This part of code cause doublicationg of blocks + as in https://stackoverflow.com/questions/62752997/all-elements-rendering-twice-in-kivy-kivymd + """ + # root = Builder.load_file('torrentinno.kv') + # return root + + # screen manager and add it to the root widget + +if __name__ == '__main__': + TorrentInnoApp().run() \ No 
newline at end of file diff --git a/client/gui/data/file_icons/mp3.png b/client/gui/data/file_icons/mp3.png new file mode 100644 index 0000000..a887480 --- /dev/null +++ b/client/gui/data/file_icons/mp3.png @@ -0,0 +1,11 @@ + + + + + + + + + + MP3 + \ No newline at end of file diff --git a/client/gui/data/file_icons/mp4.png b/client/gui/data/file_icons/mp4.png new file mode 100644 index 0000000..75febf3 --- /dev/null +++ b/client/gui/data/file_icons/mp4.png @@ -0,0 +1,12 @@ + + + + + + + + + + + MP4 + \ No newline at end of file diff --git a/client/gui/data/file_icons/png.png b/client/gui/data/file_icons/png.png new file mode 100644 index 0000000..ca7000e --- /dev/null +++ b/client/gui/data/file_icons/png.png @@ -0,0 +1,11 @@ + + + + + + png + + + + UNKNOWN + \ No newline at end of file diff --git a/client/gui/data/file_icons/txt.png b/client/gui/data/file_icons/txt.png new file mode 100644 index 0000000..700ea63 --- /dev/null +++ b/client/gui/data/file_icons/txt.png @@ -0,0 +1,15 @@ + + + + + + + + + + + + + + TXT + \ No newline at end of file diff --git a/client/gui/data/file_icons/unknown.png b/client/gui/data/file_icons/unknown.png new file mode 100644 index 0000000..15f3a58 --- /dev/null +++ b/client/gui/data/file_icons/unknown.png @@ -0,0 +1,11 @@ + + + + + + ? 
+ + + + UNKNOWN + \ No newline at end of file diff --git a/client/gui/torrent_manager.py new file mode 100644 index 0000000..8c7629f --- /dev/null +++ b/client/gui/torrent_manager.py @@ -0,0 +1,380 @@ +import asyncio +import json +import os +import threading +import time +import logging +from pathlib import Path + +# Import the real functionality from torrentInno +import sys +sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +from torrentInno import TorrentInno, create_resource_json, create_resource_from_json +from core.common.resource import Resource + +# Path to the file that stores torrent state +TORRENT_STATE_FILE = 'torrent_state.json' + +# Logging configuration +logging.basicConfig(level=logging.DEBUG) + +# Global variables +_active_torrents = [] +_torrent_inno: TorrentInno | None = None +_loop = None +_background_thread = None + +# Dictionary that stores file paths +_file_paths = {} + +def _run_event_loop(loop): + """Run the asyncio event loop in a separate thread""" + asyncio.set_event_loop(loop) + loop.run_forever() + +def _load_torrent_state(): + """Load torrent state from the state file""" + if os.path.exists(TORRENT_STATE_FILE): + try: + with open(TORRENT_STATE_FILE, 'r') as f: + return json.load(f) + except Exception as e: + print(f"Error loading torrent state: {e}") + return [] + +def _save_torrent_state(): + """Save torrent state to the state file""" + try: + with open(TORRENT_STATE_FILE, 'w') as f: + json.dump(_active_torrents, f) + except Exception as e: + print(f"Error saving torrent state: {e}") + +def initialize(): + """Initialize the torrent manager, loading any saved state""" + global _active_torrents, _torrent_inno, _loop, _background_thread + + # Create a TorrentInno instance + _torrent_inno = TorrentInno() + + # Create and start an event loop in a separate thread + _loop = asyncio.new_event_loop() + _background_thread =
threading.Thread(target=_run_event_loop, args=(_loop,)) + _background_thread.daemon = True # The thread exits when the main thread exits + _background_thread.start() + + # Load the saved state + saved_state = _load_torrent_state() + if saved_state: + _active_torrents = saved_state + else: + _active_torrents = [] + +def shutdown(): + """Shut down the torrent manager, saving the current state""" + _save_torrent_state() + + # Stop the event loop + if _loop and _loop.is_running(): + _loop.call_soon_threadsafe(_loop.stop) + + # Wait for the thread to finish + if _background_thread and _background_thread.is_alive(): + _background_thread.join(timeout=1.0) + +def get_files(): + """Return the list of all torrent files + + Returns: + list: A list of dicts with torrent file information + """ + return [file.copy() for file in _active_torrents] + +def _convert_state_to_file_info(state, file_path): + """Convert a TorrentInno.State into the file format used by the GUI + + Args: + state (TorrentInno.State): File state + file_path (str): Path to the file + + Returns: + dict: A dict with file information for the GUI + """ + # Get the file name from the path + file_name = os.path.basename(file_path) + + # Determine the file type from the extension + file_ext = file_name.split('.')[-1] if '.'
in file_name else 'unknown' + + # Compute the total file size + resource: Resource = _torrent_inno.resource_manager_dict[str(Path(file_path).resolve())].resource + total_size: int = sum(piece.size_bytes for piece in resource.pieces) + + # Format the file size + if total_size < 1024: + size_str = f"{total_size} B" + elif total_size < 1024 * 1024: + size_str = f"{total_size / 1024:.2f} KB" + elif total_size < 1024 * 1024 * 1024: + size_str = f"{total_size / (1024 * 1024):.2f} MB" + else: + size_str = f"{total_size / (1024 * 1024 * 1024):.2f} GB" + + # Convert speeds to a human-readable format + download_speed = state.download_speed_bytes_per_sec + upload_speed = state.upload_speed_bytes_per_sec + + if download_speed < 1024: + download_speed_str = f"{download_speed}B/s" + elif download_speed < 1024 * 1024: + download_speed_str = f"{download_speed / 1024:.1f}KB/s" + else: + download_speed_str = f"{download_speed / (1024 * 1024):.1f}MB/s" + + if upload_speed < 1024: + upload_speed_str = f"{upload_speed}B/s" + elif upload_speed < 1024 * 1024: + upload_speed_str = f"{upload_speed / 1024:.1f}KB/s" + else: + upload_speed_str = f"{upload_speed / (1024 * 1024):.1f}MB/s" + + # Build blocks for the progress display + blocks = [1 if piece else 0 for piece in state.piece_status] + + # If there are too many blocks, reduce them to 20 + if len(blocks) > 20: + # Group the blocks + group_size = len(blocks) // 20 + grouped_blocks = [] + for i in range(0, len(blocks), group_size): + group = blocks[i:i+group_size] + # If at least half the blocks in a group are downloaded, count the group as downloaded + grouped_blocks.append(1 if sum(group) >= len(group) / 2 else 0) + blocks = grouped_blocks[:20] # Take only the first 20 groups + + # If there are fewer than 20 blocks, pad up to 20 + while len(blocks) < 20: + blocks.append(0) + + return { + 'name': file_name, + 'size': size_str, + 'type': file_ext, + 'download_speed': download_speed_str, + 'upload_speed': upload_speed_str, + 'blocks': blocks +
} + +async def _get_all_states(): + """Get the state of every file + + Returns: + list: A list of file states + """ + if not _torrent_inno: + return [] + + try: + states = await _torrent_inno.get_all_files_state() + return states + except Exception as e: + logging.error(f"Error getting file states: {e}") + return [] + +def update_files(): + """Refresh the information for all torrent files + + Returns: + list: The list of updated torrent files + """ + global _active_torrents + + try: + + # Get the state of every file + states_future = asyncio.run_coroutine_threadsafe(_get_all_states(), _loop) + states = states_future.result(timeout=5.0) # Wait at most 5 seconds for the result + + # Update the file information + updated_files = [] + for file_path, state in states: + file_info = _convert_state_to_file_info(state, file_path) + updated_files.append(file_info) + # Remember the file path + _file_paths[file_info['name']] = file_path + + _active_torrents = updated_files + _save_torrent_state() + + except Exception as e: + logging.error(f"Error updating files: {e}") + + return get_files() + +def create_resource_from_file(file_path, comment="", name=None): + """Create a resource JSON from a file + + Args: + file_path (str): Path to the file + comment (str): Comment for the file + name (str): File name (if omitted, the source file name is used) + + Returns: + dict: JSON with the torrent metadata + """ + # If no name is given, use the source file name + if name is None: + name = os.path.basename(file_path) + + # Use the real function from torrentInno + return create_resource_json( + name=name, + comment=comment, + file_path=Path(file_path), + min_piece_size=1000 * 1000, # 1MB + max_pieces=10000 + ) + +def start_sharing_file(file_path: str, resource_json): + """Start sharing a file + + Args: + file_path (str): Path to the file + resource_json (dict): JSON with the torrent metadata + + Returns: + dict: Information about the added torrent + """ + # Create a resource from the JSON + resource = create_resource_from_json(resource_json) + + # Start sharing the file + asyncio.run_coroutine_threadsafe( + _torrent_inno.start_share_file(str(Path(file_path).resolve()), resource), + _loop + ) + + # Build torrent information for the GUI + file_name = os.path.basename(file_path) + file_size = os.path.getsize(file_path) + file_ext = file_name.split('.')[-1] if '.' in file_name else 'unknown' + file_ext = file_ext if file_ext in {'mp3', 'mp4', 'png', 'txt'} else 'unknown' + + # Format the file size + if file_size < 1024: + size_str = f"{file_size} B" + elif file_size < 1024 * 1024: + size_str = f"{file_size / 1024:.2f} KB" + elif file_size < 1024 * 1024 * 1024: + size_str = f"{file_size / (1024 * 1024):.2f} MB" + else: + size_str = f"{file_size / (1024 * 1024 * 1024):.2f} GB" + + # Create a new torrent with all blocks downloaded + new_file = { + 'name': file_name, + 'size': size_str, + 'type': file_ext, + 'download_speed': '0MB/s', + 'upload_speed': '0.5MB/s', # Initial upload speed + 'blocks': [1] * 20 # All blocks downloaded + } + + # Add the file to the list of active torrents + _active_torrents.append(new_file) + _file_paths[file_name] = file_path + _save_torrent_state() + + return new_file.copy() + +def start_download_file(destination_path, resource_json): + """Start downloading a file + + Args: + destination_path (str): Path where the file will be saved + resource_json (dict): JSON with the torrent metadata + + Returns: + dict: Information about the added torrent + """ + # Create a resource from the JSON + resource = create_resource_from_json(resource_json) + + # Start downloading the file + asyncio.run_coroutine_threadsafe( + _torrent_inno.start_download_file(str(Path(destination_path).resolve()), resource), + _loop + ) + + # Get the file name from the destination path + file_name = os.path.basename(destination_path) + + # Determine the file type from the extension + file_ext = file_name.split('.')[-1] if '.'
in file_name else 'unknown' + + # Compute the total file size from the resource + total_size = sum(piece['size'] for piece in resource_json['pieces']) + + # Format the file size + if total_size < 1024: + size_str = f"{total_size} B" + elif total_size < 1024 * 1024: + size_str = f"{total_size / 1024:.2f} KB" + elif total_size < 1024 * 1024 * 1024: + size_str = f"{total_size / (1024 * 1024):.2f} MB" + else: + size_str = f"{total_size / (1024 * 1024 * 1024):.2f} GB" + + # Create a new torrent with no blocks downloaded + new_file = { + 'name': file_name, + 'size': size_str, + 'type': file_ext, + 'download_speed': '1MB/s', # Initial download speed + 'upload_speed': '0MB/s', + 'blocks': [0] * 20 # No blocks downloaded yet + } + + # Add the file to the list of active torrents + _active_torrents.append(new_file) + _file_paths[file_name] = destination_path + _save_torrent_state() + + return new_file.copy() + +def remove_torrent(index): + """Remove a torrent from the active list by index + + Args: + index (int): Index of the file in the list + + Returns: + bool: True if the torrent was removed successfully, otherwise False + """ + try: + if 0 <= index < len(_active_torrents): + file_name = _active_torrents[index]['name'] + file_path = _file_paths.get(file_name) + + # If we have the file path, stop sharing/downloading it + if file_path and _torrent_inno: + try: + # Try to stop sharing the file + asyncio.run_coroutine_threadsafe( + _torrent_inno.stop_share_file(str(Path(file_path).resolve())), + _loop + ).result(timeout=5.0) + except Exception as e: + logging.error(f"Error stopping file sharing: {e}") + + # Remove the file from the list of active torrents + del _active_torrents[index] + if file_name in _file_paths: + del _file_paths[file_name] + + _save_torrent_state() + return True + except Exception as e: + logging.error(f"Error removing torrent: {e}") + + return False \ No newline at end of file diff --git a/client/gui/torrent_manager_docs.md new
file mode 100644 index 0000000..461e1fe --- /dev/null +++ b/client/gui/torrent_manager_docs.md @@ -0,0 +1,142 @@ +## Functions + +### initialize() + +```python +def initialize() +``` + +Initializes the torrent manager, loading saved state from the file. If no saved state exists, test data is used. + +**Return value:** None + +### shutdown() + +```python +def shutdown() +``` + +Shuts down the torrent manager, saving the current state to the file. + +**Return value:** None + +### get_files() + +```python +def get_files() +``` + +Returns the list of all active torrent files. + +**Return value:** A list of dicts with torrent file information. Each dict contains the following keys: +- `name`: File name +- `size`: File size (string with units) +- `type`: File type +- `download_speed`: Download speed (string with units) +- `upload_speed`: Upload speed (string with units) +- `blocks`: List of file blocks (0 - not downloaded, 1 - downloaded) + +### update_file(file_name) + +```python +def update_file(file_name) +``` + +Updates the information for a specific torrent file. + +**Parameters:** +- `file_name`: Name of the file to update + +**Return value:** A dict with updated speed and block information, or None if the file is not found. The dict contains the following keys: +- `download_speed`: Download speed +- `upload_speed`: Upload speed +- `blocks`: List of file blocks + +### update_files() + +```python +def update_files() +``` + +Updates the information for all torrent files. + +**Return value:** The list of updated torrent files (same format as `get_files()`) + +### get_file_info(url) + +```python +def get_file_info(url) +``` + +Gets information about a torrent file by URL. + +**Parameters:** +- `url`: URL of the torrent file or a magnet link + +**Return value:** A dict with torrent file information containing the following keys: +- `name`: File name +- `size`: File size +- `type`: File type +- `download_speed`: Download speed (initial value) +- `upload_speed`: Upload speed (initial value) +- `blocks`: List of file blocks (all blocks start as not downloaded) + +### add_torrent(file_info) + +```python +def add_torrent(file_info) +``` + +Adds a new torrent to the active list. + +**Parameters:** +- `file_info`: A dict with torrent file information + +**Return value:** `True` if the torrent was added successfully, otherwise `False` + +### remove_torrent(file_name) + +```python +def remove_torrent(file_name) +``` + +Removes a torrent from the active list. + +**Parameters:** +- `file_name`: Name of the file to remove + +**Return value:** `True` if the torrent was removed successfully, otherwise `False` + +### get_mock_content(source) + +```python +def get_mock_content(source) +``` + +Returns a mock list of the files in a torrent. + +**Parameters:** +- `source`: URL or path to the torrent file + +**Return value:** A list of dicts with information about the files in the torrent. Each dict contains the following keys: +- `name`: File name +- `size`: File size +- `selected`: Flag marking the file for download + +## Internal functions + +### _load_torrent_state() + +Loads torrent state from the file. + +### _save_torrent_state() + +Saves torrent state to the file. + +### _update_file_speeds(file) + +Updates the download and upload speeds for a file. + +### _update_file_blocks(file) + +Updates the download blocks for a file.
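
## Example: how `blocks` and `size` are produced

The `blocks` list returned by `get_files()` is derived from the engine's per-piece status, and `size` from the raw byte count. Below is a self-contained sketch of both derivations, mirroring the grouping and formatting logic in `torrent_manager.py`; `format_size` and `group_blocks` are illustrative helper names, not part of the module's public API.

```python
def format_size(total_size: int) -> str:
    """Format a byte count into the B/KB/MB/GB string used in file info."""
    if total_size < 1024:
        return f"{total_size} B"
    if total_size < 1024 ** 2:
        return f"{total_size / 1024:.2f} KB"
    if total_size < 1024 ** 3:
        return f"{total_size / 1024 ** 2:.2f} MB"
    return f"{total_size / 1024 ** 3:.2f} GB"


def group_blocks(piece_status: list[bool], target: int = 20) -> list[int]:
    """Reduce per-piece download flags to `target` display blocks.

    A group counts as downloaded (1) when at least half of its pieces are
    downloaded; short lists are padded with 0 up to `target`.
    """
    blocks = [1 if piece else 0 for piece in piece_status]
    if len(blocks) > target:
        group_size = len(blocks) // target
        grouped = []
        for i in range(0, len(blocks), group_size):
            group = blocks[i:i + group_size]
            grouped.append(1 if sum(group) >= len(group) / 2 else 0)
        blocks = grouped[:target]
    while len(blocks) < target:
        blocks.append(0)
    return blocks


print(format_size(5 * 1024 * 1024))                 # 5.00 MB
print(group_blocks([True] * 30 + [False] * 30))     # [1]*10 + [0]*10
```

For example, a file with 60 pieces of which the first 30 are downloaded maps to ten filled blocks followed by ten empty ones in the GUI progress bar.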
\ No newline at end of file diff --git a/client/gui/torrentinno.kv b/client/gui/torrentinno.kv new file mode 100644 index 0000000..6e66c0e --- /dev/null +++ b/client/gui/torrentinno.kv @@ -0,0 +1,146 @@ +#:kivy 2.0.0 +#:import get_color_from_hex kivy.utils.get_color_from_hex + +: + orientation: 'vertical' + size_hint_y: None + height: dp(100) + padding: dp(5) + spacing: dp(2) + file_name: '' + file_size: '' + file_type: '' + download_speed: '' + upload_speed: '' + blocks: [] + + canvas.before: + Color: + rgba: 0.95, 0.95, 0.95, 1 + Rectangle: + pos: self.pos + size: self.size + Color: + rgba: 0.9, 0.9, 0.9, 1 + Line: + points: [self.x, self.y, self.x + self.width, self.y] + width: 1 + + MDBoxLayout: + orientation: 'horizontal' + size_hint_y: None + height: dp(70) + spacing: dp(10) + padding: dp(5) + + # File icon + MDBoxLayout: + size_hint_x: None + width: dp(60) + + Image: + source: 'data/file_icons/' + root.file_type + '.png' if root.file_type else 'data/file_icons/unknown.png' + size_hint: None, None + size: dp(50), dp(50) + pos_hint: {'center_x': 0.5, 'center_y': 0.5} + + #MDLabel: + # text: root.file_type + # halign: 'center' + # font_size: dp(14) + # size_hint_y: None + # height: dp(20) + # pos_hint: {'center_x': 0.5, 'bottom': 0} + + # File info + MDBoxLayout: + orientation: 'vertical' + spacing: dp(5) + + MDBoxLayout: + orientation: 'horizontal' + + MDLabel: + text: root.file_name + font_size: dp(18) + halign: 'left' + size_hint_x: 0.6 + + MDLabel: + text: root.download_speed + font_size: dp(16) + halign: 'right' + color: get_color_from_hex('#4CAF50') # Green color for download + size_hint_x: 0.2 + + MDLabel: + text: root.upload_speed + font_size: dp(16) + halign: 'right' + color: get_color_from_hex('#F44336') # Red color for upload + size_hint_x: 0.2 + + MDLabel: + text: root.file_size + font_size: dp(16) + halign: 'left' + + # Progress bar with blocks + MDBoxLayout: + size_hint_y: None + height: dp(20) + padding: [dp(70), 0, dp(10), 0] + + MDBoxLayout: + 
id: progress_container + orientation: 'horizontal' + spacing: dp(1) + + # This will be filled with block widgets in Python code + canvas: + Color: + rgba: 0.9, 0.9, 0.9, 1 + Rectangle: + pos: self.pos + size: self.size + + # Generate progress blocks + Widget: + canvas: + Color: + rgba: 0, 0, 0, 1 + Line: + rectangle: [self.x, self.y, self.width, self.height] + width: 1 + +: + name: 'main' + + MDBoxLayout: + orientation: 'vertical' + + # App bar + MDTopAppBar: + title: "TorrentInno" + elevation: 4 + left_action_items: [['arrow-left', lambda x: app.root.current_screen.on_back_pressed()]] + right_action_items: [['information', lambda x: app.root.current_screen.show_info()], ['dots-vertical', lambda x: app.root.current_screen.show_menu()]] + + # Scrollable list of torrent files + ScrollView: + do_scroll_x: False + + MDList: + id: file_list + spacing: dp(2) + padding: dp(5) + + # Add button (floating action button) + MDFloatingActionButton: + icon: 'plus' + pos_hint: {'right': 0.95, 'y': 0.01} + on_release: app.root.current_screen.add_torrent() + +ScreenManager: + MainScreen: + name: 'main' \ No newline at end of file diff --git a/client/poetry.lock b/client/poetry.lock new file mode 100644 index 0000000..53b63ba --- /dev/null +++ b/client/poetry.lock @@ -0,0 +1,618 @@ +# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand. + +[[package]] +name = "aiofiles" +version = "24.1.0" +description = "File support for asyncio." +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "aiofiles-24.1.0-py3-none-any.whl", hash = "sha256:b4ec55f4195e3eb5d7abd1bf7e061763e864dd4954231fb8539a0ef8bb8260e5"}, + {file = "aiofiles-24.1.0.tar.gz", hash = "sha256:22a075c9e5a3810f0c2e48f3008c94d68c65d763b9b03857924c99e57355166c"}, +] + +[[package]] +name = "certifi" +version = "2025.4.26" +description = "Python package for providing Mozilla's CA Bundle." 
+optional = false +python-versions = ">=3.6" +groups = ["main"] +files = [ + {file = "certifi-2025.4.26-py3-none-any.whl", hash = "sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3"}, + {file = "certifi-2025.4.26.tar.gz", hash = "sha256:0a816057ea3cdefcef70270d2c515e4506bbc954f417fa5ade2021213bb8f0c6"}, +] + +[[package]] +name = "charset-normalizer" +version = "3.4.1" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." +optional = false +python-versions = ">=3.7" +groups = ["main"] +files = [ + {file = "charset_normalizer-3.4.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:91b36a978b5ae0ee86c394f5a54d6ef44db1de0815eb43de826d41d21e4af3de"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7461baadb4dc00fd9e0acbe254e3d7d2112e7f92ced2adc96e54ef6501c5f176"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e218488cd232553829be0664c2292d3af2eeeb94b32bea483cf79ac6a694e037"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:80ed5e856eb7f30115aaf94e4a08114ccc8813e6ed1b5efa74f9f82e8509858f"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b010a7a4fd316c3c484d482922d13044979e78d1861f0e0650423144c616a46a"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4532bff1b8421fd0a320463030c7520f56a79c9024a4e88f01c537316019005a"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d973f03c0cb71c5ed99037b870f2be986c3c05e63622c017ea9816881d2dd247"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:3a3bd0dcd373514dcec91c411ddb9632c0d7d92aed7093b8c3bbb6d69ca74408"}, + 
{file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:d9c3cdf5390dcd29aa8056d13e8e99526cda0305acc038b96b30352aff5ff2bb"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:2bdfe3ac2e1bbe5b59a1a63721eb3b95fc9b6817ae4a46debbb4e11f6232428d"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:eab677309cdb30d047996b36d34caeda1dc91149e4fdca0b1a039b3f79d9a807"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-win32.whl", hash = "sha256:c0429126cf75e16c4f0ad00ee0eae4242dc652290f940152ca8c75c3a4b6ee8f"}, + {file = "charset_normalizer-3.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:9f0b8b1c6d84c8034a44893aba5e767bf9c7a211e313a9605d9c617d7083829f"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:8bfa33f4f2672964266e940dd22a195989ba31669bd84629f05fab3ef4e2d125"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28bf57629c75e810b6ae989f03c0828d64d6b26a5e205535585f96093e405ed1"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f08ff5e948271dc7e18a35641d2f11a4cd8dfd5634f55228b691e62b37125eb3"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:234ac59ea147c59ee4da87a0c0f098e9c8d169f4dc2a159ef720f1a61bbe27cd"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd4ec41f914fa74ad1b8304bbc634b3de73d2a0889bd32076342a573e0779e00"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eea6ee1db730b3483adf394ea72f808b6e18cf3cb6454b4d86e04fa8c4327a12"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = 
"sha256:c96836c97b1238e9c9e3fe90844c947d5afbf4f4c92762679acfe19927d81d77"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4d86f7aff21ee58f26dcf5ae81a9addbd914115cdebcbb2217e4f0ed8982e146"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:09b5e6733cbd160dcc09589227187e242a30a49ca5cefa5a7edd3f9d19ed53fd"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:5777ee0881f9499ed0f71cc82cf873d9a0ca8af166dfa0af8ec4e675b7df48e6"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:237bdbe6159cff53b4f24f397d43c6336c6b0b42affbe857970cefbb620911c8"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-win32.whl", hash = "sha256:8417cb1f36cc0bc7eaba8ccb0e04d55f0ee52df06df3ad55259b9a323555fc8b"}, + {file = "charset_normalizer-3.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:d7f50a1f8c450f3925cb367d011448c39239bb3eb4117c36a6d354794de4ce76"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:73d94b58ec7fecbc7366247d3b0b10a21681004153238750bb67bd9012414545"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dad3e487649f498dd991eeb901125411559b22e8d7ab25d3aeb1af367df5efd7"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c30197aa96e8eed02200a83fba2657b4c3acd0f0aa4bdc9f6c1af8e8962e0757"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2369eea1ee4a7610a860d88f268eb39b95cb588acd7235e02fd5a5601773d4fa"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc2722592d8998c870fa4e290c2eec2c1569b87fe58618e67d38b4665dfa680d"}, + {file = 
"charset_normalizer-3.4.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ffc9202a29ab3920fa812879e95a9e78b2465fd10be7fcbd042899695d75e616"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:804a4d582ba6e5b747c625bf1255e6b1507465494a40a2130978bda7b932c90b"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:0f55e69f030f7163dffe9fd0752b32f070566451afe180f99dbeeb81f511ad8d"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:c4c3e6da02df6fa1410a7680bd3f63d4f710232d3139089536310d027950696a"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:5df196eb874dae23dcfb968c83d4f8fdccb333330fe1fc278ac5ceeb101003a9"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e358e64305fe12299a08e08978f51fc21fac060dcfcddd95453eabe5b93ed0e1"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-win32.whl", hash = "sha256:9b23ca7ef998bc739bf6ffc077c2116917eabcc901f88da1b9856b210ef63f35"}, + {file = "charset_normalizer-3.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:6ff8a4a60c227ad87030d76e99cd1698345d4491638dfa6673027c48b3cd395f"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:aabfa34badd18f1da5ec1bc2715cadc8dca465868a4e73a0173466b688f29dda"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22e14b5d70560b8dd51ec22863f370d1e595ac3d024cb8ad7d308b4cd95f8313"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8436c508b408b82d87dc5f62496973a1805cd46727c34440b0d29d8a2f50a6c9"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2d074908e1aecee37a7635990b2c6d504cd4766c7bc9fc86d63f9c09af3fa11b"}, + {file = 
"charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:955f8851919303c92343d2f66165294848d57e9bba6cf6e3625485a70a038d11"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:44ecbf16649486d4aebafeaa7ec4c9fed8b88101f4dd612dcaf65d5e815f837f"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:0924e81d3d5e70f8126529951dac65c1010cdf117bb75eb02dd12339b57749dd"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2967f74ad52c3b98de4c3b32e1a44e32975e008a9cd2a8cc8966d6a5218c5cb2"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:c75cb2a3e389853835e84a2d8fb2b81a10645b503eca9bcb98df6b5a43eb8886"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:09b26ae6b1abf0d27570633b2b078a2a20419c99d66fb2823173d73f188ce601"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fa88b843d6e211393a37219e6a1c1df99d35e8fd90446f1118f4216e307e48cd"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-win32.whl", hash = "sha256:eb8178fe3dba6450a3e024e95ac49ed3400e506fd4e9e5c32d30adda88cbd407"}, + {file = "charset_normalizer-3.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:b1ac5992a838106edb89654e0aebfc24f5848ae2547d22c2c3f66454daa11971"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f30bf9fd9be89ecb2360c7d94a711f00c09b976258846efe40db3d05828e8089"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:97f68b8d6831127e4787ad15e6757232e14e12060bec17091b85eb1486b91d8d"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7974a0b5ecd505609e3b19742b60cee7aa2aa2fb3151bc917e6e2646d7667dcf"}, 
+ {file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc54db6c8593ef7d4b2a331b58653356cf04f67c960f584edb7c3d8c97e8f39e"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:311f30128d7d333eebd7896965bfcfbd0065f1716ec92bd5638d7748eb6f936a"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:7d053096f67cd1241601111b698f5cad775f97ab25d81567d3f59219b5f1adbd"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_i686.whl", hash = "sha256:807f52c1f798eef6cf26beb819eeb8819b1622ddfeef9d0977a8502d4db6d534"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_ppc64le.whl", hash = "sha256:dccbe65bd2f7f7ec22c4ff99ed56faa1e9f785482b9bbd7c717e26fd723a1d1e"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_s390x.whl", hash = "sha256:2fb9bd477fdea8684f78791a6de97a953c51831ee2981f8e4f583ff3b9d9687e"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:01732659ba9b5b873fc117534143e4feefecf3b2078b0a6a2e925271bb6f4cfa"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-win32.whl", hash = "sha256:7a4f97a081603d2050bfaffdefa5b02a9ec823f8348a572e39032caa8404a487"}, + {file = "charset_normalizer-3.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7b1bef6280950ee6c177b326508f86cad7ad4dff12454483b51d8b7d673a2c5d"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:ecddf25bee22fe4fe3737a399d0d177d72bc22be6913acfab364b40bce1ba83c"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8c60ca7339acd497a55b0ea5d506b2a2612afb2826560416f6894e8b5770d4a9"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b7b2d86dd06bfc2ade3312a83a5c364c7ec2e3498f8734282c6c3d4b07b346b8"}, + {file = 
"charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd78cfcda14a1ef52584dbb008f7ac81c1328c0f58184bf9a84c49c605002da6"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e27f48bcd0957c6d4cb9d6fa6b61d192d0b13d5ef563e5f2ae35feafc0d179c"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:01ad647cdd609225c5350561d084b42ddf732f4eeefe6e678765636791e78b9a"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:619a609aa74ae43d90ed2e89bdd784765de0a25ca761b93e196d938b8fd1dbbd"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:89149166622f4db9b4b6a449256291dc87a99ee53151c74cbd82a53c8c2f6ccd"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:7709f51f5f7c853f0fb938bcd3bc59cdfdc5203635ffd18bf354f6967ea0f824"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:345b0426edd4e18138d6528aed636de7a9ed169b4aaf9d61a8c19e39d26838ca"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:0907f11d019260cdc3f94fbdb23ff9125f6b5d1039b76003b5b0ac9d6a6c9d5b"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-win32.whl", hash = "sha256:ea0d8d539afa5eb2728aa1932a988a9a7af94f18582ffae4bc10b3fbdad0626e"}, + {file = "charset_normalizer-3.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:329ce159e82018d646c7ac45b01a430369d526569ec08516081727a20e9e4af4"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b97e690a2118911e39b4042088092771b4ae3fc3aa86518f84b8cf6888dbdb41"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78baa6d91634dfb69ec52a463534bc0df05dbd546209b79a3880a34487f4b84f"}, + {file = 
"charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1a2bc9f351a75ef49d664206d51f8e5ede9da246602dc2d2726837620ea034b2"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:75832c08354f595c760a804588b9357d34ec00ba1c940c15e31e96d902093770"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0af291f4fe114be0280cdd29d533696a77b5b49cfde5467176ecab32353395c4"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0167ddc8ab6508fe81860a57dd472b2ef4060e8d378f0cc555707126830f2537"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:2a75d49014d118e4198bcee5ee0a6f25856b29b12dbf7cd012791f8a6cc5c496"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:363e2f92b0f0174b2f8238240a1a30142e3db7b957a5dd5689b0e75fb717cc78"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:ab36c8eb7e454e34e60eb55ca5d241a5d18b2c6244f6827a30e451c42410b5f7"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:4c0907b1928a36d5a998d72d64d8eaa7244989f7aaaf947500d3a800c83a3fd6"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:04432ad9479fa40ec0f387795ddad4437a2b50417c69fa275e212933519ff294"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-win32.whl", hash = "sha256:3bed14e9c89dcb10e8f3a29f9ccac4955aebe93c71ae803af79265c9ca5644c5"}, + {file = "charset_normalizer-3.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:49402233c892a461407c512a19435d1ce275543138294f7ef013f0b63d5d3765"}, + {file = "charset_normalizer-3.4.1-py3-none-any.whl", hash = "sha256:d98b1668f06378c6dbefec3b92299716b931cd4e6061f3c875a71ced1780ab85"}, + {file = "charset_normalizer-3.4.1.tar.gz", hash = 
"sha256:44251f18cd68a75b56585dd00dae26183e102cd5e0f9f1466e6df5da2ed64ea3"}, +] + +[[package]] +name = "colorama" +version = "0.4.6" +description = "Cross-platform colored terminal text." +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" +groups = ["main", "dev"] +markers = "sys_platform == \"win32\"" +files = [ + {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, + {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, +] + +[[package]] +name = "docutils" +version = "0.21.2" +description = "Docutils -- Python Documentation Utilities" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "docutils-0.21.2-py3-none-any.whl", hash = "sha256:dafca5b9e384f0e419294eb4d2ff9fa826435bf15f15b7bd45723e8ad76811b2"}, + {file = "docutils-0.21.2.tar.gz", hash = "sha256:3a6b18732edf182daa3cd12775bbb338cf5691468f91eeeb109deff6ebfa986f"}, +] + +[[package]] +name = "filetype" +version = "1.2.0" +description = "Infer file type and MIME type of any file/buffer. No external dependencies." 
+optional = false +python-versions = "*" +groups = ["main"] +files = [ + {file = "filetype-1.2.0-py2.py3-none-any.whl", hash = "sha256:7ce71b6880181241cf7ac8697a2f1eb6a8bd9b429f7ad6d27b8db9ba5f1c2d25"}, + {file = "filetype-1.2.0.tar.gz", hash = "sha256:66b56cd6474bf41d8c54660347d37afcc3f7d1970648de365c102ef77548aadb"}, +] + +[[package]] +name = "idna" +version = "3.10" +description = "Internationalized Domain Names in Applications (IDNA)" +optional = false +python-versions = ">=3.6" +groups = ["main"] +files = [ + {file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"}, + {file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"}, +] + +[package.extras] +all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2)"] + +[[package]] +name = "iniconfig" +version = "2.1.0" +description = "brain-dead simple config-ini parsing" +optional = false +python-versions = ">=3.8" +groups = ["main", "dev"] +files = [ + {file = "iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"}, + {file = "iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7"}, +] + +[[package]] +name = "kivy" +version = "2.3.1" +description = "An open-source Python framework for developing GUI apps that work cross-platform, including desktop, mobile and embedded platforms." 
+optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "Kivy-2.3.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:ace93c166c9400f9435cfd3bd179b5ef9fdd40d69ee8171a6b8beba08c402d09"}, + {file = "Kivy-2.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d6215762510b463b0461d173f8a0b22e449beb12ba79cf151e18aa1d3d12a40"}, + {file = "Kivy-2.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba83dd8266fc2b1247de18c5e8114fa47ea20eb33eb7c3a9e2eb6202b9778088"}, + {file = "Kivy-2.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:d28ad14162554abd0324ae8f66ce2f374c05456d2656d60cfa80814f715d62c0"}, + {file = "Kivy-2.3.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:acb58843763075818de919989a73657307f4d833a7cc5547c1b16c226e260e5d"}, + {file = "Kivy-2.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b7a1799b19f6ab3bcfcef1e729a0229cee646167a1633e067c2add6978f928bb"}, + {file = "Kivy-2.3.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f180280df46a8c2f9988159938aa1a3e5a0094060d9586ea79df4b4ead9cad98"}, + {file = "Kivy-2.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:002de19fef53955c48108758beea3092cf281326642d2e71eca1c443f4227cce"}, + {file = "Kivy-2.3.1-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:3f74679ef305f0ed0d8bb3599a2dddc80ffc81157bdc07947498dd689fc9a5d9"}, + {file = "Kivy-2.3.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:663e9b2fe5002f53371b3ad3712dccdaaa96905bbeaa83d7c7e64f3c44fec94e"}, + {file = "Kivy-2.3.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be79fe1494b6e60cb5aa5f124c37961530417cf27a53171b5a72c9e4c7d41cf"}, + {file = "Kivy-2.3.1-cp312-cp312-win_amd64.whl", hash = "sha256:2046f6608d17b6c1a0530ac9aa127307fa25f6f75764f1d60428a1c0f6c0af88"}, + {file = "Kivy-2.3.1-cp313-cp313-macosx_10_15_universal2.whl", hash = 
"sha256:d8d9e57501961c5d45e5a2c5af0caef24e48f43a0cd88f607eb3b517198cfec4"}, + {file = "Kivy-2.3.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bfe25296e9612cbfa2b68cfb0ccd3c80db1441c11261a9e131d5f8fed7618c2c"}, + {file = "Kivy-2.3.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:950d17e275f817ca34cc7c9d55f9d229067e2f7fbd0fad985a74c94893f7e739"}, + {file = "Kivy-2.3.1-cp313-cp313-win_amd64.whl", hash = "sha256:b5127af11c2fc1299f2331402fe4f6edb0985711c2841fbfdf509830c058c78e"}, + {file = "Kivy-2.3.1-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:d9e92c4894f99685d822ab7d059a3912bbff17d812e64a12ed3cf0acd37924cb"}, + {file = "Kivy-2.3.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:445b6054afcd08fd271b75e5552a72a5ffb122b05e8511e46bf69e3b5e344d31"}, + {file = "Kivy-2.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e473b10e9b9a49a6475760fd1f7d674873852f9561505ff6b4d8e5f1691d4f9"}, + {file = "Kivy-2.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:ee628e5dbe5e397ceeeda7b49cf4c800b79a695c6345fee1a8f1b71d3fc530bb"}, + {file = "Kivy-2.3.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:38a265ff95120694ab7dfc29ed2ccdec40a8a47344387b886f498449b0c3c66c"}, + {file = "Kivy-2.3.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae8168549c822a7122044965715d9f953a1862fdef132ea7725df8c1d2f19e5c"}, + {file = "Kivy-2.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:748163206ce95aab5aaad1ada772a79e422a80b6308623510e74a1b7baf80f0a"}, + {file = "Kivy-2.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:91c836b7c2b4958fb4b3839f63b1724435bd617548baace0602c122d39756746"}, + {file = "Kivy-2.3.1.tar.gz", hash = "sha256:0833949e3502cdb4abcf9c1da4384674045ad7d85644313aa1ee7573f3b4f9d9"}, +] + +[package.dependencies] +docutils = "*" +filetype = "*" +"kivy-deps.angle" = {version = ">=0.4.0,<0.5.0", markers = 
"sys_platform == \"win32\""} +"kivy-deps.glew" = {version = ">=0.3.1,<0.4.0", markers = "sys_platform == \"win32\""} +"kivy-deps.sdl2" = {version = ">=0.8.0,<0.9.0", markers = "sys_platform == \"win32\""} +Kivy-Garden = ">=0.1.4" +pygments = "*" +pypiwin32 = {version = "*", markers = "sys_platform == \"win32\""} +requests = "*" + +[package.extras] +angle = ["kivy-deps.angle (>=0.4.0,<0.5.0) ; sys_platform == \"win32\""] +base = ["pillow (>=9.5.0,<11)"] +dev = ["flake8", "kivy-deps.glew-dev (>=0.3.1,<0.4.0) ; sys_platform == \"win32\"", "kivy-deps.gstreamer-dev (>=0.3.3,<0.4.0) ; sys_platform == \"win32\"", "kivy-deps.sdl2-dev (>=0.8.0,<0.9.0) ; sys_platform == \"win32\"", "pre-commit", "pyinstaller", "pytest (>=3.6)", "pytest-asyncio (!=0.11.0)", "pytest-benchmark", "pytest-cov", "pytest-timeout", "responses", "sphinx (>=6.2.1,<6.3.0)", "sphinxcontrib-jquery (>=4.1,<5.0)"] +full = ["ffpyplayer ; sys_platform == \"linux\" or sys_platform == \"darwin\"", "kivy-deps.gstreamer (>=0.3.3,<0.4.0) ; sys_platform == \"win32\"", "pillow (>=9.5.0,<11)"] +glew = ["kivy-deps.glew (>=0.3.1,<0.4.0) ; sys_platform == \"win32\""] +gstreamer = ["kivy-deps.gstreamer (>=0.3.3,<0.4.0) ; sys_platform == \"win32\""] +media = ["ffpyplayer ; sys_platform == \"linux\" or sys_platform == \"darwin\"", "kivy-deps.gstreamer (>=0.3.3,<0.4.0) ; sys_platform == \"win32\""] +sdl2 = ["kivy-deps.sdl2 (>=0.8.0,<0.9.0) ; sys_platform == \"win32\""] +tuio = ["oscpy"] + +[[package]] +name = "kivy-deps-angle" +version = "0.4.0" +description = "Repackaged binary dependency of Kivy." 
+optional = false +python-versions = "*" +groups = ["main"] +markers = "sys_platform == \"win32\"" +files = [ + {file = "kivy_deps.angle-0.4.0-cp310-cp310-win32.whl", hash = "sha256:7873a551e488afa5044c4949a4aa42c4a4c4290469f0a6dd861e6b95283c9638"}, + {file = "kivy_deps.angle-0.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:71f2f01a3a7bbe1d4790e2a64e64a0ea8ae154418462ea407799ed66898b2c1f"}, + {file = "kivy_deps.angle-0.4.0-cp311-cp311-win32.whl", hash = "sha256:c3899ff1f3886b80b155955bad07bfa33bbebd97718cdf46dfd788dc467124bc"}, + {file = "kivy_deps.angle-0.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:574381d4e66f3198bc48aa10f238e7a3816ad56b80ec939f5d56fb33a378d0b1"}, + {file = "kivy_deps.angle-0.4.0-cp312-cp312-win32.whl", hash = "sha256:4fa7a6366899fba13f7624baf4645787165f45731db08d14557da29c12ee48f0"}, + {file = "kivy_deps.angle-0.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:668e670d4afd2551af0af2c627ceb0feac884bd799fb6a3dff78fdbfa2ea0451"}, + {file = "kivy_deps.angle-0.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:9afbf702f8bb9a993c48f39c018ca3b4d2ec381a5d3f82fe65bdaa6af0bba29b"}, + {file = "kivy_deps.angle-0.4.0-cp37-cp37m-win32.whl", hash = "sha256:24cfc0076d558080a00c443c7117311b4a977c1916fe297232eff1fd6f62651e"}, + {file = "kivy_deps.angle-0.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:48592ac6f7c183c5cd10d9ebe43d4148d0b2b9e400a2b0bcb5d21014cc929ce2"}, + {file = "kivy_deps.angle-0.4.0-cp38-cp38-win32.whl", hash = "sha256:1bbacf20bf6bd6ee965388f95d937c8fba2c54916fb44faa166c2ba58276753c"}, + {file = "kivy_deps.angle-0.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:e2ba4e390b02ad5bcb57b43a9227fa27ff55e69cd715a87217b324195eb267c3"}, + {file = "kivy_deps.angle-0.4.0-cp39-cp39-win32.whl", hash = "sha256:6546a62aba2b7e18a800b3df79daa757af3a980c297646c986896522395794e2"}, + {file = "kivy_deps.angle-0.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:bfaf9b37f2ecc3e4e7736657eed507716477af35cdd3118903e999d9d567ae8c"}, +] + +[[package]] +name = "kivy-deps-glew" +version 
= "0.3.1" +description = "Repackaged binary dependency of Kivy." +optional = false +python-versions = "*" +groups = ["main"] +markers = "sys_platform == \"win32\"" +files = [ + {file = "kivy_deps.glew-0.3.1-cp310-cp310-win32.whl", hash = "sha256:8f4b3ed15acb62474909b6d41661ffb4da9eb502bb5684301fb2da668f288a58"}, + {file = "kivy_deps.glew-0.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:aef2d2a93f129d8425c75234e7f6cc0a34b59a4aee67f6d2cd7a5fdfa9915b53"}, + {file = "kivy_deps.glew-0.3.1-cp311-cp311-win32.whl", hash = "sha256:ee2f80ef7ac70f4b61c50da8101b024308a8c59a57f7f25a6e09762b6c48f942"}, + {file = "kivy_deps.glew-0.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:22e155ec59ce717387f5d8804811206d200a023ba3d0bc9bbf1393ee28d0053e"}, + {file = "kivy_deps.glew-0.3.1-cp312-cp312-win32.whl", hash = "sha256:b64ee4e445a04bc7c848c0261a6045fc2f0944cc05d7f953e3860b49f2703424"}, + {file = "kivy_deps.glew-0.3.1-cp312-cp312-win_amd64.whl", hash = "sha256:3acbbd30da05fc10c185b5d4bb75fbbc882a6ef2192963050c1c94d60a6e795a"}, + {file = "kivy_deps.glew-0.3.1-cp313-cp313-win_amd64.whl", hash = "sha256:f4aa8322078359862ccd9e16e5cea61976d75fb43125d87922e20c916fa31a11"}, + {file = "kivy_deps.glew-0.3.1-cp37-cp37m-win32.whl", hash = "sha256:5bf6a63fe9cc4fe7bbf280ec267ec8c47914020a1175fb22152525ff1837b436"}, + {file = "kivy_deps.glew-0.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:d64a8625799fab7a7efeb3661ef8779a7f9c6d80da53eed87a956320f55530fa"}, + {file = "kivy_deps.glew-0.3.1-cp38-cp38-win32.whl", hash = "sha256:00f4ae0a4682d951266458ddb639451edb24baa54a35215dce889209daf19a06"}, + {file = "kivy_deps.glew-0.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:3f8b89dcf1846032d7a9c5ef88b0ee9cbd13366e9b4c85ada61e01549a910677"}, + {file = "kivy_deps.glew-0.3.1-cp39-cp39-win32.whl", hash = "sha256:4e377ed97670dfda619a1b63a82345a8589be90e7c616a458fba2810708810b1"}, + {file = "kivy_deps.glew-0.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:081a09b92f7e7817f489f8b6b31c9c9623661378de1dce1d6b097af5e7d42b45"}, 
+] + +[[package]] +name = "kivy-deps-sdl2" +version = "0.8.0" +description = "Repackaged binary dependency of Kivy." +optional = false +python-versions = "*" +groups = ["main"] +markers = "sys_platform == \"win32\"" +files = [ + {file = "kivy_deps.sdl2-0.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:5af0a3b318a6ec9e0f0c1d476a4af4b2d0cbcce4dbfd89bc4681c33bcd6b3bcd"}, + {file = "kivy_deps.sdl2-0.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:ae3735480841ec9a57c0fb26e8647adee474a3d746147e3d75a1fc177c0fbc01"}, + {file = "kivy_deps.sdl2-0.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:bfe0cfca77883dde7e297b3b6039fa9cd7ee8df6b0d12516b38addb0551a574c"}, + {file = "kivy_deps.sdl2-0.8.0-cp313-cp313-win_amd64.whl", hash = "sha256:56b1c44565b5e8cfc510585db13396edfc605965254f49ed8931189c546d481f"}, + {file = "kivy_deps.sdl2-0.8.0-cp38-cp38-win_amd64.whl", hash = "sha256:5e9f8c0c1e76eb43f0bad8f36c5b92a46fb5696f733ec441db45e6864b1d4065"}, + {file = "kivy_deps.sdl2-0.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:dbaa6718e66e8cd4967c2d4021e05114c558342e2468a86c0bce917bea10003f"}, +] + +[[package]] +name = "kivy-garden" +version = "0.1.5" +description = "" +optional = false +python-versions = "*" +groups = ["main"] +files = [ + {file = "Kivy Garden-0.1.5.tar.gz", hash = "sha256:2b8377378e87501d5d271f33d94f0e44c089884572c64f89c9d609b1f86a2748"}, + {file = "Kivy_Garden-0.1.5-py3-none-any.whl", hash = "sha256:ef50f44b96358cf10ac5665f27a4751bb34ef54051c54b93af891f80afe42929"}, +] + +[package.dependencies] +requests = "*" + +[[package]] +name = "kivymd" +version = "1.2.0" +description = "Set of widgets for Kivy inspired by Google's Material Design" +optional = false +python-versions = ">=3.7" +groups = ["main"] +files = [ + {file = "kivymd-1.2.0.tar.gz", hash = "sha256:2d33e2c59259998e93aee55acde647a4a20e5a0f962469db24ee4c9ec586962e"}, +] + +[package.dependencies] +kivy = ">=2.2.0" +pillow = "*" + +[package.extras] +dev = ["black", "coveralls", "flake8", "isort[pyproject]", 
"pre-commit", "pyinstaller[hook-testing]", "pytest", "pytest-cov", "pytest-timeout", "pytest_asyncio"] +docs = ["furo", "sphinx", "sphinx-autoapi (==1.4.0)", "sphinx-copybutton", "sphinx-notfound-page", "sphinx-tabs"] + +[[package]] +name = "packaging" +version = "25.0" +description = "Core utilities for Python packages" +optional = false +python-versions = ">=3.8" +groups = ["main", "dev"] +files = [ + {file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"}, + {file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"}, +] + +[[package]] +name = "pillow" +version = "11.2.1" +description = "Python Imaging Library (Fork)" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "pillow-11.2.1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:d57a75d53922fc20c165016a20d9c44f73305e67c351bbc60d1adaf662e74047"}, + {file = "pillow-11.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:127bf6ac4a5b58b3d32fc8289656f77f80567d65660bc46f72c0d77e6600cc95"}, + {file = "pillow-11.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4ba4be812c7a40280629e55ae0b14a0aafa150dd6451297562e1764808bbe61"}, + {file = "pillow-11.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8bd62331e5032bc396a93609982a9ab6b411c05078a52f5fe3cc59234a3abd1"}, + {file = "pillow-11.2.1-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:562d11134c97a62fe3af29581f083033179f7ff435f78392565a1ad2d1c2c45c"}, + {file = "pillow-11.2.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:c97209e85b5be259994eb5b69ff50c5d20cca0f458ef9abd835e262d9d88b39d"}, + {file = "pillow-11.2.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0c3e6d0f59171dfa2e25d7116217543310908dfa2770aa64b8f87605f8cacc97"}, + {file = "pillow-11.2.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = 
"sha256:cc1c3bc53befb6096b84165956e886b1729634a799e9d6329a0c512ab651e579"}, + {file = "pillow-11.2.1-cp310-cp310-win32.whl", hash = "sha256:312c77b7f07ab2139924d2639860e084ec2a13e72af54d4f08ac843a5fc9c79d"}, + {file = "pillow-11.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:9bc7ae48b8057a611e5fe9f853baa88093b9a76303937449397899385da06fad"}, + {file = "pillow-11.2.1-cp310-cp310-win_arm64.whl", hash = "sha256:2728567e249cdd939f6cc3d1f049595c66e4187f3c34078cbc0a7d21c47482d2"}, + {file = "pillow-11.2.1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:35ca289f712ccfc699508c4658a1d14652e8033e9b69839edf83cbdd0ba39e70"}, + {file = "pillow-11.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e0409af9f829f87a2dfb7e259f78f317a5351f2045158be321fd135973fff7bf"}, + {file = "pillow-11.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4e5c5edee874dce4f653dbe59db7c73a600119fbea8d31f53423586ee2aafd7"}, + {file = "pillow-11.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b93a07e76d13bff9444f1a029e0af2964e654bfc2e2c2d46bfd080df5ad5f3d8"}, + {file = "pillow-11.2.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:e6def7eed9e7fa90fde255afaf08060dc4b343bbe524a8f69bdd2a2f0018f600"}, + {file = "pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8f4f3724c068be008c08257207210c138d5f3731af6c155a81c2b09a9eb3a788"}, + {file = "pillow-11.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a0a6709b47019dff32e678bc12c63008311b82b9327613f534e496dacaefb71e"}, + {file = "pillow-11.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f6b0c664ccb879109ee3ca702a9272d877f4fcd21e5eb63c26422fd6e415365e"}, + {file = "pillow-11.2.1-cp311-cp311-win32.whl", hash = "sha256:cc5d875d56e49f112b6def6813c4e3d3036d269c008bf8aef72cd08d20ca6df6"}, + {file = "pillow-11.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:0f5c7eda47bf8e3c8a283762cab94e496ba977a420868cb819159980b6709193"}, + {file = 
"pillow-11.2.1-cp311-cp311-win_arm64.whl", hash = "sha256:4d375eb838755f2528ac8cbc926c3e31cc49ca4ad0cf79cff48b20e30634a4a7"}, + {file = "pillow-11.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:78afba22027b4accef10dbd5eed84425930ba41b3ea0a86fa8d20baaf19d807f"}, + {file = "pillow-11.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:78092232a4ab376a35d68c4e6d5e00dfd73454bd12b230420025fbe178ee3b0b"}, + {file = "pillow-11.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25a5f306095c6780c52e6bbb6109624b95c5b18e40aab1c3041da3e9e0cd3e2d"}, + {file = "pillow-11.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c7b29dbd4281923a2bfe562acb734cee96bbb129e96e6972d315ed9f232bef4"}, + {file = "pillow-11.2.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:3e645b020f3209a0181a418bffe7b4a93171eef6c4ef6cc20980b30bebf17b7d"}, + {file = "pillow-11.2.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:b2dbea1012ccb784a65349f57bbc93730b96e85b42e9bf7b01ef40443db720b4"}, + {file = "pillow-11.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:da3104c57bbd72948d75f6a9389e6727d2ab6333c3617f0a89d72d4940aa0443"}, + {file = "pillow-11.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:598174aef4589af795f66f9caab87ba4ff860ce08cd5bb447c6fc553ffee603c"}, + {file = "pillow-11.2.1-cp312-cp312-win32.whl", hash = "sha256:1d535df14716e7f8776b9e7fee118576d65572b4aad3ed639be9e4fa88a1cad3"}, + {file = "pillow-11.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:14e33b28bf17c7a38eede290f77db7c664e4eb01f7869e37fa98a5aa95978941"}, + {file = "pillow-11.2.1-cp312-cp312-win_arm64.whl", hash = "sha256:21e1470ac9e5739ff880c211fc3af01e3ae505859392bf65458c224d0bf283eb"}, + {file = "pillow-11.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:fdec757fea0b793056419bca3e9932eb2b0ceec90ef4813ea4c1e072c389eb28"}, + {file = "pillow-11.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = 
"sha256:b0e130705d568e2f43a17bcbe74d90958e8a16263868a12c3e0d9c8162690830"}, + {file = "pillow-11.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7bdb5e09068332578214cadd9c05e3d64d99e0e87591be22a324bdbc18925be0"}, + {file = "pillow-11.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d189ba1bebfbc0c0e529159631ec72bb9e9bc041f01ec6d3233d6d82eb823bc1"}, + {file = "pillow-11.2.1-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:191955c55d8a712fab8934a42bfefbf99dd0b5875078240943f913bb66d46d9f"}, + {file = "pillow-11.2.1-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:ad275964d52e2243430472fc5d2c2334b4fc3ff9c16cb0a19254e25efa03a155"}, + {file = "pillow-11.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:750f96efe0597382660d8b53e90dd1dd44568a8edb51cb7f9d5d918b80d4de14"}, + {file = "pillow-11.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fe15238d3798788d00716637b3d4e7bb6bde18b26e5d08335a96e88564a36b6b"}, + {file = "pillow-11.2.1-cp313-cp313-win32.whl", hash = "sha256:3fe735ced9a607fee4f481423a9c36701a39719252a9bb251679635f99d0f7d2"}, + {file = "pillow-11.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:74ee3d7ecb3f3c05459ba95eed5efa28d6092d751ce9bf20e3e253a4e497e691"}, + {file = "pillow-11.2.1-cp313-cp313-win_arm64.whl", hash = "sha256:5119225c622403afb4b44bad4c1ca6c1f98eed79db8d3bc6e4e160fc6339d66c"}, + {file = "pillow-11.2.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:8ce2e8411c7aaef53e6bb29fe98f28cd4fbd9a1d9be2eeea434331aac0536b22"}, + {file = "pillow-11.2.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:9ee66787e095127116d91dea2143db65c7bb1e232f617aa5957c0d9d2a3f23a7"}, + {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9622e3b6c1d8b551b6e6f21873bdcc55762b4b2126633014cea1803368a9aa16"}, + {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:63b5dff3a68f371ea06025a1a6966c9a1e1ee452fc8020c2cd0ea41b83e9037b"}, + {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:31df6e2d3d8fc99f993fd253e97fae451a8db2e7207acf97859732273e108406"}, + {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:062b7a42d672c45a70fa1f8b43d1d38ff76b63421cbbe7f88146b39e8a558d91"}, + {file = "pillow-11.2.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4eb92eca2711ef8be42fd3f67533765d9fd043b8c80db204f16c8ea62ee1a751"}, + {file = "pillow-11.2.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:f91ebf30830a48c825590aede79376cb40f110b387c17ee9bd59932c961044f9"}, + {file = "pillow-11.2.1-cp313-cp313t-win32.whl", hash = "sha256:e0b55f27f584ed623221cfe995c912c61606be8513bfa0e07d2c674b4516d9dd"}, + {file = "pillow-11.2.1-cp313-cp313t-win_amd64.whl", hash = "sha256:36d6b82164c39ce5482f649b437382c0fb2395eabc1e2b1702a6deb8ad647d6e"}, + {file = "pillow-11.2.1-cp313-cp313t-win_arm64.whl", hash = "sha256:225c832a13326e34f212d2072982bb1adb210e0cc0b153e688743018c94a2681"}, + {file = "pillow-11.2.1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:7491cf8a79b8eb867d419648fff2f83cb0b3891c8b36da92cc7f1931d46108c8"}, + {file = "pillow-11.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8b02d8f9cb83c52578a0b4beadba92e37d83a4ef11570a8688bbf43f4ca50909"}, + {file = "pillow-11.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:014ca0050c85003620526b0ac1ac53f56fc93af128f7546623cc8e31875ab928"}, + {file = "pillow-11.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3692b68c87096ac6308296d96354eddd25f98740c9d2ab54e1549d6c8aea9d79"}, + {file = "pillow-11.2.1-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:f781dcb0bc9929adc77bad571b8621ecb1e4cdef86e940fe2e5b5ee24fd33b35"}, + {file = "pillow-11.2.1-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:2b490402c96f907a166615e9a5afacf2519e28295f157ec3a2bb9bd57de638cb"}, + 
{file = "pillow-11.2.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:dd6b20b93b3ccc9c1b597999209e4bc5cf2853f9ee66e3fc9a400a78733ffc9a"}, + {file = "pillow-11.2.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:4b835d89c08a6c2ee7781b8dd0a30209a8012b5f09c0a665b65b0eb3560b6f36"}, + {file = "pillow-11.2.1-cp39-cp39-win32.whl", hash = "sha256:b10428b3416d4f9c61f94b494681280be7686bda15898a3a9e08eb66a6d92d67"}, + {file = "pillow-11.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:6ebce70c3f486acf7591a3d73431fa504a4e18a9b97ff27f5f47b7368e4b9dd1"}, + {file = "pillow-11.2.1-cp39-cp39-win_arm64.whl", hash = "sha256:c27476257b2fdcd7872d54cfd119b3a9ce4610fb85c8e32b70b42e3680a29a1e"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:9b7b0d4fd2635f54ad82785d56bc0d94f147096493a79985d0ab57aedd563156"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:aa442755e31c64037aa7c1cb186e0b369f8416c567381852c63444dd666fb772"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0d3348c95b766f54b76116d53d4cb171b52992a1027e7ca50c81b43b9d9e363"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85d27ea4c889342f7e35f6d56e7e1cb345632ad592e8c51b693d7b7556043ce0"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:bf2c33d6791c598142f00c9c4c7d47f6476731c31081331664eb26d6ab583e01"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e616e7154c37669fc1dfc14584f11e284e05d1c650e1c0f972f281c4ccc53193"}, + {file = "pillow-11.2.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:39ad2e0f424394e3aebc40168845fee52df1394a4673a6ee512d840d14ab3013"}, + {file = "pillow-11.2.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:80f1df8dbe9572b4b7abdfa17eb5d78dd620b1d55d9e25f834efdbee872d3aed"}, + {file = 
"pillow-11.2.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:ea926cfbc3957090becbcbbb65ad177161a2ff2ad578b5a6ec9bb1e1cd78753c"}, + {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:738db0e0941ca0376804d4de6a782c005245264edaa253ffce24e5a15cbdc7bd"}, + {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9db98ab6565c69082ec9b0d4e40dd9f6181dab0dd236d26f7a50b8b9bfbd5076"}, + {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:036e53f4170e270ddb8797d4c590e6dd14d28e15c7da375c18978045f7e6c37b"}, + {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:14f73f7c291279bd65fda51ee87affd7c1e097709f7fdd0188957a16c264601f"}, + {file = "pillow-11.2.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:208653868d5c9ecc2b327f9b9ef34e0e42a4cdd172c2988fd81d62d2bc9bc044"}, + {file = "pillow-11.2.1.tar.gz", hash = "sha256:a64dd61998416367b7ef979b73d3a85853ba9bec4c2925f74e588879a58716b6"}, +] + +[package.extras] +docs = ["furo", "olefile", "sphinx (>=8.2)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinxext-opengraph"] +fpx = ["olefile"] +mic = ["olefile"] +test-arrow = ["pyarrow"] +tests = ["check-manifest", "coverage (>=7.4.2)", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout", "trove-classifiers (>=2024.10.12)"] +typing = ["typing-extensions ; python_version < \"3.10\""] +xmp = ["defusedxml"] + +[[package]] +name = "pluggy" +version = "1.5.0" +description = "plugin and hook calling mechanisms for python" +optional = false +python-versions = ">=3.8" +groups = ["main", "dev"] +files = [ + {file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"}, + {file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"}, +] + +[package.extras] 
+dev = ["pre-commit", "tox"] +testing = ["pytest", "pytest-benchmark"] + +[[package]] +name = "pygments" +version = "2.19.1" +description = "Pygments is a syntax highlighting package written in Python." +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "pygments-2.19.1-py3-none-any.whl", hash = "sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c"}, + {file = "pygments-2.19.1.tar.gz", hash = "sha256:61c16d2a8576dc0649d9f39e089b5f02bcd27fba10d8fb4dcc28173f7a45151f"}, +] + +[package.extras] +windows-terminal = ["colorama (>=0.4.6)"] + +[[package]] +name = "pypiwin32" +version = "223" +description = "" +optional = false +python-versions = "*" +groups = ["main"] +markers = "sys_platform == \"win32\"" +files = [ + {file = "pypiwin32-223-py3-none-any.whl", hash = "sha256:67adf399debc1d5d14dffc1ab5acacb800da569754fafdc576b2a039485aa775"}, + {file = "pypiwin32-223.tar.gz", hash = "sha256:71be40c1fbd28594214ecaecb58e7aa8b708eabfa0125c8a109ebd51edbd776a"}, +] + +[package.dependencies] +pywin32 = ">=223" + +[[package]] +name = "pytest" +version = "8.3.5" +description = "pytest: simple powerful testing with Python" +optional = false +python-versions = ">=3.8" +groups = ["main", "dev"] +files = [ + {file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"}, + {file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "sys_platform == \"win32\""} +iniconfig = "*" +packaging = "*" +pluggy = ">=1.5,<2" + +[package.extras] +dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] + +[[package]] +name = "pytest-asyncio" +version = "0.26.0" +description = "Pytest support for asyncio" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = 
"pytest_asyncio-0.26.0-py3-none-any.whl", hash = "sha256:7b51ed894f4fbea1340262bdae5135797ebbe21d8638978e35d31c6d19f72fb0"}, + {file = "pytest_asyncio-0.26.0.tar.gz", hash = "sha256:c4df2a697648241ff39e7f0e4a73050b03f123f760673956cf0d72a4990e312f"}, +] + +[package.dependencies] +pytest = ">=8.2,<9" + +[package.extras] +docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1)"] +testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"] + +[[package]] +name = "pywin32" +version = "310" +description = "Python for Window Extensions" +optional = false +python-versions = "*" +groups = ["main"] +markers = "sys_platform == \"win32\"" +files = [ + {file = "pywin32-310-cp310-cp310-win32.whl", hash = "sha256:6dd97011efc8bf51d6793a82292419eba2c71cf8e7250cfac03bba284454abc1"}, + {file = "pywin32-310-cp310-cp310-win_amd64.whl", hash = "sha256:c3e78706e4229b915a0821941a84e7ef420bf2b77e08c9dae3c76fd03fd2ae3d"}, + {file = "pywin32-310-cp310-cp310-win_arm64.whl", hash = "sha256:33babed0cf0c92a6f94cc6cc13546ab24ee13e3e800e61ed87609ab91e4c8213"}, + {file = "pywin32-310-cp311-cp311-win32.whl", hash = "sha256:1e765f9564e83011a63321bb9d27ec456a0ed90d3732c4b2e312b855365ed8bd"}, + {file = "pywin32-310-cp311-cp311-win_amd64.whl", hash = "sha256:126298077a9d7c95c53823934f000599f66ec9296b09167810eb24875f32689c"}, + {file = "pywin32-310-cp311-cp311-win_arm64.whl", hash = "sha256:19ec5fc9b1d51c4350be7bb00760ffce46e6c95eaf2f0b2f1150657b1a43c582"}, + {file = "pywin32-310-cp312-cp312-win32.whl", hash = "sha256:8a75a5cc3893e83a108c05d82198880704c44bbaee4d06e442e471d3c9ea4f3d"}, + {file = "pywin32-310-cp312-cp312-win_amd64.whl", hash = "sha256:bf5c397c9a9a19a6f62f3fb821fbf36cac08f03770056711f765ec1503972060"}, + {file = "pywin32-310-cp312-cp312-win_arm64.whl", hash = "sha256:2349cc906eae872d0663d4d6290d13b90621eaf78964bb1578632ff20e152966"}, + {file = "pywin32-310-cp313-cp313-win32.whl", hash = "sha256:5d241a659c496ada3253cd01cfaa779b048e90ce4b2b38cd44168ad555ce74ab"}, + {file = 
"pywin32-310-cp313-cp313-win_amd64.whl", hash = "sha256:667827eb3a90208ddbdcc9e860c81bde63a135710e21e4cb3348968e4bd5249e"}, + {file = "pywin32-310-cp313-cp313-win_arm64.whl", hash = "sha256:e308f831de771482b7cf692a1f308f8fca701b2d8f9dde6cc440c7da17e47b33"}, + {file = "pywin32-310-cp38-cp38-win32.whl", hash = "sha256:0867beb8addefa2e3979d4084352e4ac6e991ca45373390775f7084cc0209b9c"}, + {file = "pywin32-310-cp38-cp38-win_amd64.whl", hash = "sha256:30f0a9b3138fb5e07eb4973b7077e1883f558e40c578c6925acc7a94c34eaa36"}, + {file = "pywin32-310-cp39-cp39-win32.whl", hash = "sha256:851c8d927af0d879221e616ae1f66145253537bbdd321a77e8ef701b443a9a1a"}, + {file = "pywin32-310-cp39-cp39-win_amd64.whl", hash = "sha256:96867217335559ac619f00ad70e513c0fcf84b8a3af9fc2bba3b59b97da70475"}, +] + +[[package]] +name = "requests" +version = "2.32.3" +description = "Python HTTP for Humans." +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"}, + {file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"}, +] + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = ">=2,<4" +idna = ">=2.5,<4" +urllib3 = ">=1.21.1,<3" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)"] +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] + +[[package]] +name = "urllib3" +version = "2.4.0" +description = "HTTP library with thread-safe connection pooling, file post, and more." 
+optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "urllib3-2.4.0-py3-none-any.whl", hash = "sha256:4e16665048960a0900c702d4a66415956a584919c03361cac9f1df5c5dd7e813"}, + {file = "urllib3-2.4.0.tar.gz", hash = "sha256:414bc6535b787febd7567804cc015fee39daab8ad86268f1310a9250697de466"}, +] + +[package.extras] +brotli = ["brotli (>=1.0.9) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\""] +h2 = ["h2 (>=4,<5)"] +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] +zstd = ["zstandard (>=0.18.0)"] + +[metadata] +lock-version = "2.1" +python-versions = ">=3.12" +content-hash = "2cd18e94b7002ea62ce6d0dcc4f5ced633f544dd216f76a6ff23f4de6dfb5927" diff --git a/client/pyproject.toml b/client/pyproject.toml new file mode 100644 index 0000000..5e724cf --- /dev/null +++ b/client/pyproject.toml @@ -0,0 +1,28 @@ +[project] +name = "client" +version = "0.0.1" +description = "" +requires-python = ">=3.12" +dependencies = [ + "pytest-asyncio (>=0.26.0,<0.27.0)", + "aiofiles (>=24.1.0,<25.0.0)", + "requests (>=2.32.3,<3.0.0)", + "kivy (>=2.3.1,<3.0.0)", + "kivymd (>=1.2.0,<2.0.0)" +] + +[tool.poetry.scripts] +cli = "cli.cli:main" + +[tool.poetry] +packages = [{ include = "cli" }] + +[tool.poetry.group.dev.dependencies] +pytest = "^8.3.5" + +[build-system] +requires = ["poetry-core>=2.0.0,<3.0.0"] +build-backend = "poetry.core.masonry.api" + +[tool.pytest.ini_options] +asyncio_default_fixture_loop_scope = "function" diff --git a/client/torrentInno.py b/client/torrentInno.py new file mode 100644 index 0000000..37b16a2 --- /dev/null +++ b/client/torrentInno.py @@ -0,0 +1,259 @@ +import asyncio +import random +import json +import socket +import datetime +import os +import hashlib +import math +import logging + +from typing import Dict +from pathlib import Path +from dataclasses import dataclass + +from core.p2p.resource_manager import ResourceManager +from core.s2p.server_manager import 
update_peer, heart_beat
+from core.common.peer_info import PeerInfo
+from core.common.resource import Resource
+
+# --- constants ---
+TRACKER_IP = '80.71.232.39'
+TRACKER_PORT = 8080
+
+logging.basicConfig(level=logging.INFO)
+
+# --- utility functions ---
+
+def generate_random_bits(size) -> bytes:
+    '''
+    Generate `size` random bytes using randint
+    '''
+    return bytes(random.randint(0, 255) for _ in range(size))
+
+def generate_peer_id() -> str:
+    '''
+    Generate a random 32-byte peer id, hex-encoded
+    '''
+    return generate_random_bits(32).hex()
+
+def get_peer_public_ip():
+    '''
+    Return the IP address of this host, resolved from its hostname.
+    Note: behind NAT this may differ from the actual public IP.
+    '''
+    hostname = socket.gethostname()
+    ip = socket.gethostbyname(hostname)
+    return ip
+
+def create_resource_json(name: str, comment: str, file_path, max_pieces: int = 1000, min_piece_size: int = 64 * 1024):
+    '''
+    Create a resource by splitting the file into an adaptive number of pieces.
+    '''
+    size_bytes = os.path.getsize(file_path)
+    # Calculate adaptive piece size
+    piece_size = max(min_piece_size, math.ceil(size_bytes / max_pieces))
+    pieces = []
+    total_read = 0
+
+    with open(file_path, 'rb') as f:
+        while total_read < size_bytes:
+            file_bytes = f.read(piece_size)
+            if not file_bytes:
+                break
+            sha256 = hashlib.sha256(file_bytes).hexdigest()
+            pieces.append({
+                'sha256': sha256,
+                'size': len(file_bytes)
+            })
+            total_read += len(file_bytes)
+
+    assert total_read == size_bytes, f"Read {total_read} bytes, expected {size_bytes}"
+    assert sum(p['size'] for p in pieces) == size_bytes, "Piece sizes do not sum to file size"
+
+    resource_json = {
+        'trackerIp': TRACKER_IP,
+        'trackerPort': TRACKER_PORT,
+        'comment': comment,
+        'creationDate': datetime.datetime.now().isoformat(),
+        'name': name,
+        'pieces': pieces
+    }
+
+    logging.info(f"Adaptive split: {len(pieces)} pieces, piece size: {piece_size} bytes")
+    return resource_json
+
+
+def create_resource_from_json(resource_json):
+    '''
+    Create a resource from the given JSON data.
+    '''
+    pieces = [Resource.Piece(sha256=piece['sha256'], size_bytes=piece['size']) for piece in resource_json['pieces']]
+    resource = Resource(
+        tracker_ip=resource_json['trackerIp'],
+        tracker_port=resource_json['trackerPort'],
+        comment=resource_json['comment'],
+        creation_date=datetime.datetime.fromisoformat(resource_json['creationDate']),
+        name=resource_json['name'],
+        pieces=pieces
+    )
+    return resource
+
+# --- Torrent logic ---
+class TorrentInno:
+    @dataclass
+    class State:
+        piece_status: list[bool]
+        upload_speed_bytes_per_sec: int
+        download_speed_bytes_per_sec: int
+        destination: str
+
+    def __init__(self):
+        self.peer_id = generate_peer_id()
+        self.resource_manager_dict: Dict[str, ResourceManager] = {}
+
+    async def start_share_file(self, destination: str, resource: Resource):
+        '''
+        Start sharing the file and keep this peer's information
+        on the tracker updated
+        '''
+        peer_public_ip = get_peer_public_ip()
+        local_resource_manager = ResourceManager(self.peer_id, Path(destination), resource)
+        self.resource_manager_dict[destination] = local_resource_manager
+        peer_public_port = await self.resource_manager_dict.get(destination).full_start()
+        resource_info_hash = resource.get_info_hash()
+        peer = {
+            "peerId": str(self.peer_id),
+            "infoHash": str(resource_info_hash),
+            "publicIp": str(peer_public_ip),
+            "publicPort": str(peer_public_port)
+        }
+        tracker_url = 'http://' + TRACKER_IP + f':{TRACKER_PORT}/peers'
+
+        async def parse_peer_list(json_text):
+            '''
+            Parse JSON text and return a list of PeerInfo elements.
+            '''
+            peer_list = []
+
+            if not json_text.strip():
+                logging.info("Error parsing peer list: Response is empty")
+                return
+
+            try:
+                data = json.loads(json_text)
+                resource_info_hash = resource.get_info_hash()
+                for peer in data.get("peers", []):
+                    if peer.get("infoHash") == resource_info_hash:
+                        peer_info = PeerInfo(
+                            public_ip=peer["publicIp"],
+                            public_port=int(peer["publicPort"]),
+                            peer_id=peer["peerId"]
+                        )
+                        peer_list.append(peer_info)
+            except (json.JSONDecodeError, KeyError, ValueError) as e:
+                logging.info(f"Error parsing peer list: {e}")
+            logging.info("Share peer list:")
+            logging.info(peer_list)
+            await self.resource_manager_dict.get(destination).submit_peers(peer_list)
+
+        # start periodic tracker announcements; storing the reference also
+        # guards the task against garbage collection
+        task = asyncio.create_task(heart_beat(tracker_url, peer, parse_peer_list))
+
+    async def stop_share_file(self, destination: str):
+        '''
+        Stop sharing the file and shut down its resource manager
+        '''
+        await self.resource_manager_dict.get(destination).stop_sharing_file()
+        await self.resource_manager_dict.get(destination).shutdown()
+        del self.resource_manager_dict[destination]
+
+
+    async def start_download_file(self, destination: str, resource: Resource):
+        '''
+        Start downloading the file and keep this peer's information
+        on the tracker updated
+        '''
+        peer_public_ip = get_peer_public_ip()
+        local_resource_manager = ResourceManager(self.peer_id, Path(destination), resource)
+        self.resource_manager_dict[destination] = local_resource_manager
+        peer_public_port = await self.resource_manager_dict.get(destination).full_start()
+        resource_info_hash = resource.get_info_hash()
+        peer = {
+            "peerId": str(self.peer_id),
+            "infoHash": str(resource_info_hash),
+            "publicIp": str(peer_public_ip),
+            "publicPort": str(peer_public_port)
+        }
+
+        tracker_url = 'http://' + TRACKER_IP + f':{TRACKER_PORT}/peers'
+
+        async def parse_peer_list(json_text):
+            peer_list = []
+            if not json_text.strip():
+                logging.info("Error parsing peer list: Response is empty")
+                return
+
+            try:
+                data = json.loads(json_text)
+                resource_info_hash = resource.get_info_hash()
+                for peer in data.get("peers", []):
+                    if peer.get("infoHash") == resource_info_hash:
+                        peer_info = PeerInfo(
+                            public_ip=peer["publicIp"],
+                            public_port=int(peer["publicPort"]),
+                            peer_id=peer["peerId"]
+                        )
+                        peer_list.append(peer_info)
+            except (json.JSONDecodeError, KeyError, ValueError) as e:
+                logging.info(f"Error parsing peer list: {e}")
+
+            filtered_peers = [p for p in peer_list if p.peer_id != self.peer_id]
+            await self.resource_manager_dict.get(destination).submit_peers(filtered_peers)
+            logging.info('download peer list:')
+            logging.info(filtered_peers)
+
+        task = asyncio.create_task(heart_beat(tracker_url, peer, parse_peer_list))
+        await self.resource_manager_dict.get(destination).start_download()
+
+    async def stop_download_file(self, destination: str):
+        '''
+        Stop downloading the file and shut down its resource manager
+        '''
+        await self.resource_manager_dict.get(destination).stop_download()
+        await self.resource_manager_dict.get(destination).shutdown()
+        del self.resource_manager_dict[destination]
+
+    async def get_state(self, destination):
+        '''
+        Return the current transfer state for the given destination
+        '''
+        states: ResourceManager.State = await self.resource_manager_dict.get(destination).get_state()
+
+        return self.State(states.piece_status,
+                          states.upload_speed_bytes_per_sec,
+                          states.download_speed_bytes_per_sec,
+                          destination)
+
+    async def get_all_files_state(self):
+        '''
+        Return the state of every file currently managed
+        '''
+        return_list = []
+
+        for key in self.resource_manager_dict.keys():
+            state = await self.resource_manager_dict.get(key).get_state()
+            return_list.append((key, self.State(
+                state.piece_status,
+                state.upload_speed_bytes_per_sec,
+                state.download_speed_bytes_per_sec,
+                key
+            )))
+
+        return return_list
+
+    async def remove_from_torrent(self, destination):
+        '''
+        Remove the file from the torrent and shut down its resource manager
+        '''
+        await self.resource_manager_dict.get(destination).shutdown()
+        del self.resource_manager_dict[destination]
\ No newline at end of file
diff --git a/specs/README.md b/specs/README.md
new file mode 100644
index 0000000..daedd98
--- /dev/null
+++ b/specs/README.md
@@ -0,0 +1,11 @@
+# The overall flow
+
+1) The peer announces itself to the tracker. To do that, the peer
+a) computes the *info-hash* of the `torrentinno` file. The details of the computation can be found in the client `core.common.Resource` class (method `get_info_hash`);
+b) sends an HTTP request with `peer-announce.json` as the request body.
+
+2) The tracker accepts the peer's request and returns the list of all currently online peers that have announced themselves to the tracker with the same *info-hash*. The response body is formatted according to `tracker-response.json`.
+
+3) After that, the peer maintains its connection with the tracker and periodically (around every 30 seconds) repeats the announcement. If the tracker detects that a peer hasn't announced itself with an `info-hash` for a certain time, it stops including that peer in responses to other peers' announcements.
+
+4) Once the peer receives the list of peers, it begins communicating with them. The details of that communication are in `peer-message-exchange.md`.
\ No newline at end of file
diff --git a/specs/peer-announce.json b/specs/peer-announce.json
new file mode 100644
index 0000000..945c425
--- /dev/null
+++ b/specs/peer-announce.json
@@ -0,0 +1,6 @@
+{
+    "peerId": "6b8a2d7f30c9e8d9b3d5b7c61e0be67d48280f58ffefae3c8b8423fe7108ecb",
+    "infoHash": "9a3f5d4c1e85a216afba9c9aeb62fa7d9a75b57c52cd945b7f58de8a0b9dcb4f",
+    "publicIp": "10.98.23.123",
+    "publicPort": "9083"
+}
\ No newline at end of file
diff --git a/specs/peer-message-exchange.md b/specs/peer-message-exchange.md
new file mode 100644
index 0000000..6d6bf9c
--- /dev/null
+++ b/specs/peer-message-exchange.md
@@ -0,0 +1,55 @@
+# Peer message exchange format
+
+## General notes:
+
+- All messages between peers are transmitted as raw bytes.
+
+- All indexes are zero-based, unless stated otherwise.
+
+- Where applicable, the byte order is big-endian.
+
+- Where applicable, byte sequences are written in hexadecimal encoding.
+
+## Handshake
+```
+TorrentInno[peer-id (32 bytes)][info-hash (32 bytes)]
+```
+*Example*:
+```
+TorrentInnoba1bd0139c99070e7ed25fd599e603f2b915acf8eb96fca8565a7ed34c4d705441bbcd3f70fbbbac8b5171869c75e274f0b1a6c54f784dc0f763cebb446eee5d
+```
+
+**Description**:
+The handshake message is sent by a peer trying to establish a connection with another peer. Note that the length of the handshake message is always 75 bytes: the 11-byte `TorrentInno` prefix plus two 32-byte fields.
+
+The `info-hash` is the hash of the `torrentinno` file.
+
+The `peer-id` is a sequence of 32 bytes generated by the peer itself. It uniquely identifies a peer and is used in all connections.
+
+The peer that receives this message must check whether it knows the resource in the `info-hash` field. If it does, it replies with the same message, substituting `peer-id` with its own id.
+
+If the handshake is successful, then both peers create and maintain a separate connection tied to that specific resource.
The connection is a bidirectional, continuous channel over which peers exchange length-prefixed messages.
+
+## Peer to peer communication
+
+Each message has the following format: `[body-length (4 bytes)][message-body]`, where `body-length` is the length of `[message-body]` in bytes. Only `[message-body]` is discussed below.
+
+Each `[message-body]` has the following format: `[message-type (1 byte)][message-data]`, where `[message-type]` is a single-byte number (`0x01`, for example).
+Currently, the following message types are supported:
+1) 'Request': The `[message-data]` has the format `[piece-index (4 bytes)][piece-inner-offset (4 bytes)][block-length (4 bytes)]`.
+This message indicates that the peer wants to fetch `[block-length]` bytes from the piece with index `[piece-index]`, starting at offset `[piece-inner-offset]` bytes within that piece.
+
+2) 'Piece': The `[message-data]` has the format `[piece-index (4 bytes)][piece-inner-offset (4 bytes)][block-length (4 bytes)][data]`. The first three fields have the same meaning as in the 'Request' message. The `[data]` field contains the requested part of the file and must be exactly `block-length` bytes long.
+
+3) 'Bitfield': The `[message-data]` has the format `[bitfield]`. The first byte indicates which of pieces 0-7 the sender has, from high bit to low bit; the next byte covers pieces 8-15, and so on. Spare bits at the end are set to zero. Peers exchange `Bitfield` messages to announce updates in piece ownership.
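As a sanity check of the framing described above, the 'Request' message can be sketched in Python. This is an illustrative sketch, not code from the client; `encode_request` and `decode_request` are hypothetical names.

```python
import struct

REQUEST = 0x01  # 'Request' message-type byte

def encode_request(piece_index: int, inner_offset: int, block_length: int) -> bytes:
    # [message-body] = [message-type (1 byte)] followed by three big-endian 4-byte fields
    body = struct.pack('>BIII', REQUEST, piece_index, inner_offset, block_length)
    # Prefix with [body-length (4 bytes)], also big-endian
    return struct.pack('>I', len(body)) + body

def decode_request(message: bytes) -> tuple:
    (body_length,) = struct.unpack_from('>I', message, 0)
    assert body_length == len(message) - 4, "length prefix mismatch"
    # Returns (message-type, piece-index, piece-inner-offset, block-length)
    return struct.unpack_from('>BIII', message, 4)
```

Encoding piece 19, offset 384, block length 1024 with this helper yields the bytes `0000000d01000000130000018000000400`, matching the worked example in this spec.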
+*Example:*
+The full message requesting 1024 bytes at offset 384 within piece 19 looks like this:
+
+- `0x0000000d` - body length (13 bytes)
+- `0x01` - message type (Request)
+- `0x00000013` - piece index (19)
+- `0x00000180` - piece inner offset (384)
+- `0x00000400` - block length (1024)
+
+References:
+[https://www.bittorrent.org/beps/bep_0003.html](https://www.bittorrent.org/beps/bep_0003.html)
diff --git a/specs/resource.json b/specs/resource.json
new file mode 100644
index 0000000..2c572a0
--- /dev/null
+++ b/specs/resource.json
@@ -0,0 +1,21 @@
+{
+    "trackerIp": "10.97.123.20",
+    "trackerPort": "3434",
+    "comment": "Video with Tralelelo Tralala",
+    "creationDate": "2000-10-31T01:30:00.000-05:00",
+    "name": "FunnyVideo.mp4",
+    "pieces": [
+        {
+            "sha256": "6b8a2d7f30c9e8d9b3d5b7c61e0be67d48280f58ffefae3c8b8423fe7108ecb",
+            "size": 1048576
+        },
+        {
+            "sha256": "aabf67d072b5124a8c618a125054ea92c79a0cc489fdbb2d8f9acb68e9d6fd6",
+            "size": 1048576
+        },
+        {
+            "sha256": "d432e67ffb96a5e742ab6d396bf52b41c4b960a5609871d4a477ad879d8a3c0",
+            "size": 26071
+        }
+    ]
+}
\ No newline at end of file
diff --git a/specs/schema.jpg b/specs/schema.jpg
new file mode 100644
index 0000000..3e8b0bf
Binary files /dev/null and b/specs/schema.jpg differ
diff --git a/specs/tracker-response.json b/specs/tracker-response.json
new file mode 100644
index 0000000..23ebb15
--- /dev/null
+++ b/specs/tracker-response.json
@@ -0,0 +1,15 @@
+{
+    "infoHash": "6b8a2d7f30c9e8d9b3d5b7c61e0be67d48280f58ffefae3c8b8423fe7108ecb",
+    "peers": [
+        {
+            "peerId": "526b73151eaa0987c084c2fa85a8be0a4b913fbd4b28b165f5b2e62b1b075d3",
+            "publicIp": "90.56.34.123",
+            "publicPort": "12312"
+        },
+        {
+            "peerId": "a2f4c84fdb870c96ff7b93d4dcb8d91575d0598b6b9f8ecf5cc15f857c5a51b",
+            "publicIp": "23.32.23.123",
+            "publicPort": "8086"
+        }
+    ]
+}
\ No newline at end of file
diff --git a/tracker/Dockerfile b/tracker/Dockerfile
new file mode 100644
index 0000000..a3a5f90
--- /dev/null
+++
b/tracker/Dockerfile @@ -0,0 +1,16 @@ +FROM golang:1.23-bookworm AS base + +WORKDIR /build + +COPY go.mod go.sum ./ + +RUN go mod download + +COPY . . + +RUN go build -o peers-tracker + +EXPOSE 8080 + +# Start the application +CMD ["/build/peers-tracker"] diff --git a/tracker/go.mod b/tracker/go.mod new file mode 100644 index 0000000..e2a79ae --- /dev/null +++ b/tracker/go.mod @@ -0,0 +1,34 @@ +module tracker + +go 1.23.0 + +require github.com/gin-gonic/gin v1.10.0 + +require ( + github.com/bytedance/sonic v1.11.6 // indirect + github.com/bytedance/sonic/loader v0.1.1 // indirect + github.com/cloudwego/base64x v0.1.4 // indirect + github.com/cloudwego/iasm v0.2.0 // indirect + github.com/gabriel-vasile/mimetype v1.4.3 // indirect + github.com/gin-contrib/sse v0.1.0 // indirect + github.com/go-playground/locales v0.14.1 // indirect + github.com/go-playground/universal-translator v0.18.1 // indirect + github.com/go-playground/validator/v10 v10.20.0 // indirect + github.com/goccy/go-json v0.10.2 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/klauspost/cpuid/v2 v2.2.7 // indirect + github.com/leodido/go-urn v1.4.0 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/pelletier/go-toml/v2 v2.2.2 // indirect + github.com/twitchyliquid64/golang-asm v0.15.1 // indirect + github.com/ugorji/go/codec v1.2.12 // indirect + golang.org/x/arch v0.8.0 // indirect + golang.org/x/crypto v0.23.0 // indirect + golang.org/x/net v0.25.0 // indirect + golang.org/x/sys v0.20.0 // indirect + golang.org/x/text v0.15.0 // indirect + google.golang.org/protobuf v1.34.1 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect +) diff --git a/tracker/go.sum b/tracker/go.sum new file mode 100644 index 0000000..7f08abb --- /dev/null +++ b/tracker/go.sum @@ -0,0 +1,89 @@ +github.com/bytedance/sonic v1.11.6 
h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0= +github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4= +github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM= +github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU= +github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y= +github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w= +github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg= +github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0= +github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk= +github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE= +github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI= +github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU= +github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y= +github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= +github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= +github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= +github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= +github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= +github.com/go-playground/universal-translator 
v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= +github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8= +github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM= +github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU= +github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I= +github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU= +github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg= +github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM= +github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws= +github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M= +github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= +github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2 
h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM= +github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI= +github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08= +github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE= +github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg= +golang.org/x/arch 
v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8= +golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc= +golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys= +golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI= +golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8= +golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac= +golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y= +golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk= +golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg= +google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50= +rsc.io/pdf 
v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
diff --git a/tracker/main.go b/tracker/main.go
new file mode 100644
index 0000000..825cb89
--- /dev/null
+++ b/tracker/main.go
@@ -0,0 +1,105 @@
+package main
+
+import (
+	"fmt"
+	"github.com/gin-gonic/gin"
+	"net/http"
+	"sync"
+	"time"
+)
+
+type Peer struct {
+	PeerId     string `json:"peerId"`
+	InfoHash   string `json:"infoHash"`
+	PublicIp   string `json:"publicIp"`
+	PublicPort string `json:"publicPort"`
+	UpdatedAt  int64  `json:"-"`
+}
+
+// PeerLifespan is the number of seconds after which a silent peer is dropped
+const PeerLifespan = 35
+
+type Peers struct {
+	mu sync.Mutex
+	v  map[string]map[string]Peer
+}
+
+var peers = Peers{v: make(map[string]map[string]Peer)}
+
+func main() {
+
+	// start the background task that removes stale peers
+	go tick(1)
+
+	router := gin.New()
+	router.Use(
+		gin.Recovery(),
+	)
+	router.GET("/peers", getPeers)
+	router.POST("/peers", updatePeer)
+	s := &http.Server{
+		Addr:           ":8080",
+		Handler:        router,
+		ReadTimeout:    10 * time.Second,
+		WriteTimeout:   10 * time.Second,
+		MaxHeaderBytes: 1 << 20,
+	}
+	err := s.ListenAndServe()
+
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+
+}
+
+func updatePeer(context *gin.Context) {
+	peers.mu.Lock()
+	defer peers.mu.Unlock()
+	var updatedPeer Peer
+	if err := context.BindJSON(&updatedPeer); err != nil {
+		fmt.Println(err)
+		_ = context.AbortWithError(http.StatusBadRequest, err)
+		return
+	}
+
+	updatedPeer.UpdatedAt = time.Now().Unix()
+	if _, ok := peers.v[updatedPeer.InfoHash]; !ok {
+		peers.v[updatedPeer.InfoHash] = make(map[string]Peer)
+	}
+	peers.v[updatedPeer.InfoHash][updatedPeer.PeerId] = updatedPeer
+
+	type Response struct {
+		InfoHash string `json:"infoHash"`
+		Peers    []Peer `json:"peers"`
+	}
+
+	var response Response
+	response.InfoHash = updatedPeer.InfoHash
+	for _, peer := range peers.v[updatedPeer.InfoHash] {
+		response.Peers = append(response.Peers, peer)
+	}
+
+	context.JSON(http.StatusOK, response)
+}
+
+func getPeers(context *gin.Context) {
+	peers.mu.Lock()
+	defer peers.mu.Unlock()
+	context.JSON(http.StatusOK, peers.v)
+}
+
+func tick(n time.Duration) {
+	for range time.Tick(n * time.Second) {
+		peers.mu.Lock()
+		currentTime := time.Now().Unix()
+		for hash, friends := range peers.v {
+			for peerId, peer := range friends {
+				if peer.UpdatedAt+PeerLifespan < currentTime {
+					fmt.Printf("Peer %s seems to be dead. Removing...\n", peerId)
+					delete(peers.v[hash], peerId)
+				}
+			}
+		}
+		peers.mu.Unlock()
+	}
+}
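For reference, the client's side of the announce exchange handled by `updatePeer` above can be sketched in Python, following the field names in `specs/peer-announce.json` and `specs/tracker-response.json`. This is an illustrative sketch; `build_announce` and `peers_for` are hypothetical helper names, not part of the client code.

```python
def build_announce(peer_id: str, info_hash: str, public_ip: str, public_port: int) -> dict:
    # Request body per specs/peer-announce.json: every field is a string
    return {
        "peerId": peer_id,
        "infoHash": info_hash,
        "publicIp": public_ip,
        "publicPort": str(public_port),
    }

def peers_for(info_hash: str, tracker_response: dict, own_peer_id: str) -> list:
    # Extract other peers from a specs/tracker-response.json-style body,
    # skipping our own entry, which the tracker echoes back
    if tracker_response.get("infoHash") != info_hash:
        return []
    return [p for p in tracker_response.get("peers", [])
            if p.get("peerId") != own_peer_id]
```

A real client would POST the `build_announce(...)` body to `http://<tracker>:8080/peers` roughly every 30 seconds and submit the filtered peer list to its resource manager, as `torrentInno.py` does in `parse_peer_list`.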