diff --git a/README.md b/README.md
index 6f6b235..23f39a6 100644
--- a/README.md
+++ b/README.md
@@ -1,35 +1,55 @@
 # NPLinker Web Application

-This is the [NPLinker](https://nplinker.github.io/nplinker/latest/) web application (webapp), developed with [Plotly Dash](https://dash.plotly.com/). It enables interactive visualization of NPLinker predictions and is designed to be run locally or in a containerized environment. [A lightweight online demo](https://nplinker-webapp.onrender.com) is available for quick exploration.
+👉 **[Webapp Live Demo](https://nplinker-webapp.onrender.com)**

-<br>
-<img src="..." alt="Dashboard Screenshot 1">
-<br>
+This is the [NPLinker](https://nplinker.github.io/nplinker/latest/) web application (webapp), developed with [Plotly Dash](https://dash.plotly.com/), which enables you to visualize NPLinker predictions interactively.

-<br>
-<img src="..." alt="Dashboard Screenshot 2">
-<br>
+NPLinker is a Python framework for data mining microbial natural products by integrating genomics and metabolomics data. For a deeper understanding of NPLinker, please refer to the [original paper](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008920).

-<br>
-<img src="..." alt="Dashboard Screenshot 3">
-<br>
+## Online Demo
+
+A live demo of the NPLinker webapp is automatically deployed to [Render](https://render.com/) from the `main` branch.
+You can try out the webapp directly in your browser [here](https://nplinker-webapp.onrender.com).
+
+### Getting Started with Demo Data
+
+The webapp includes a convenient **"Load Demo Data"** button that loads a small sample dataset for you to try. Simply:
+1. Open the [live demo link](https://nplinker-webapp.onrender.com/)
+2. Click the **"Load Demo Data"** button below the file uploader
+3. The app will automatically download and process the sample dataset from [`tests/data/mock_obj_data.pkl`](https://github.com/NPLinker/nplinker-webapp/blob/main/tests/data/mock_obj_data.pkl)
+4. Start exploring natural product linking features!

-NPLinker is a Python framework for data mining microbial natural products by integrating genomics and metabolomics data.

+This demo server is intended for lightweight demonstration purposes only. For full functionality, including large-scale data processing and persistent storage, please install the application locally or via Docker as described below.

-For a deep understanding of NPLinker, please refer to the [original paper](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008920).
+
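+If you want to inspect the same sample file outside the webapp, you can fetch and unpickle it with a few lines of Python. This is only a quick sanity check mirroring what the demo button does internally (the local file name here is arbitrary):
+
+```python
+import pickle
+
+import requests
+
+# Same raw-file URL the demo button downloads from
+URL = "https://github.com/NPLinker/nplinker-webapp/blob/main/tests/data/mock_obj_data.pkl?raw=true"
+
+resp = requests.get(URL, timeout=30)
+resp.raise_for_status()
+with open("mock_obj_data.pkl", "wb") as f:
+    f.write(resp.content)
+with open("mock_obj_data.pkl", "rb") as f:
+    # NOTE: the file pickles NPLinker objects, so unpickling may require
+    # the webapp's own dependencies to be importable
+    data = pickle.load(f)
+print(type(data))
+```
+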
+<details>
+<summary>⚠️ Demo Server Limitations</summary>
+
+Please note the following limitations of the hosted demo:
+
+* **Cold start delay**: Free-tier apps on Render sleep after 15 minutes of inactivity and may take 20–30 seconds to wake up.
+* **Performance**: This is a minimal deployment on a free tier and is not optimized for large datasets or concurrent users.
+* **File size limits**: The demo data button loads a small sample dataset suitable for testing. Uploading large datasets via the file uploader may lead to errors or timeouts.
+* **No persistent storage**: Uploaded files are not saved between sessions.
+</details>
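+Because of the cold start behaviour, you can wake the demo up before opening it in the browser — any plain HTTP request works (`curl` shown here as one option):
+
+```bash
+# The first request after idling may take ~20-30 s while the app wakes up
+curl -s -o /dev/null -w "%{http_code}\n" https://nplinker-webapp.onrender.com
+```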
+ +## Using the webapp + +### Input Data + +The webapp accepts data generated by NPLinker and saved as described in the [NPLinker quickstart section](https://nplinker.github.io/nplinker/latest/quickstart/). For testing purposes, a small sample dataset is provided in [`tests/data/mock_obj_data.pkl`](https://github.com/NPLinker/nplinker-webapp/blob/main/tests/data/mock_obj_data.pkl) that can be used to try out the webapp. + +Please note that links between genomic and metabolomic data must currently be computed using the NPLinker API separately, as this functionality is not yet implemented in the webapp (see [issue #19](https://github.com/NPLinker/nplinker-webapp/issues/19)). If no links are present in your data, the scoring table will be disabled. -## Prerequisites +## Installation Before installing NPLinker webapp, ensure you have: -- [Python 3.10](https://www.python.org/downloads/) +- [Python ≥3.10](https://www.python.org/downloads/) - [Git](https://git-scm.com/downloads) +- [Conda](https://docs.conda.io/en/latest/miniconda.html) -## Installation Options +You can install and run the NPLinker webapp in two ways: directly on your local machine or using Docker. -You can install and run the NPLinker dashboard in two ways: directly on your local machine or using Docker. - -### Option 1: Local Installation +### Option 1: Local Installation (using Conda) Follow these steps to install the application directly on your system: @@ -39,16 +59,13 @@ Follow these steps to install the application directly on your system: cd nplinker-webapp ``` -2. **Set up a virtual environment** +2. **Set up a conda environment** ```bash - # Create a virtual environment - python3.10 -m venv venv - - # Activate the virtual environment - # For Windows: - venv\Scripts\activate - # For macOS/Linux: - source venv/bin/activate + # Create a new conda environment with Python 3.10 + conda create -n nplinker-webapp python=3.10 + + # Activate the environment + conda activate nplinker-webapp ``` 3. **Install dependencies** @@ -61,19 +78,22 @@ Follow these steps to install the application directly on your system: python app/main.py ``` -5. **Access the dashboard** +5. **Access the webapp** Open your web browser and navigate to `http://0.0.0.0:8050/` -#### Troubleshooting Local Installation +
+<details>
+<summary>Troubleshooting Local Installation</summary>
+
-Common issues and solutions:
+#### Common issues and solutions
 
 - **Port already in use**: If port 8050 is already in use, change the port number in the `app.run_server(debug=True, port=8050)` call in `app/main.py`
 - **Package installation errors**: Make sure you're using Python 3.10 and that your pip is up-to-date
 
 If you encounter other problems, please check the [Issues](https://github.com/NPLinker/nplinker-webapp/issues) page or create a new issue.
+
+</details>
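+For example, to serve on a different port, the last lines of `app/main.py` can be changed to something like the following (a sketch, assuming the usual `if __name__ == "__main__":` guard and a Dash app object named `app` in that module):
+
+```python
+if __name__ == "__main__":
+    app.run_server(debug=True, port=8051)  # any free port works
+```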
+ ### Option 2: Docker Installation Using Docker is the quickest way to get started with NPLinker webapp. Make sure you have [Docker](https://www.docker.com/) installed on your system before proceeding: @@ -88,11 +108,12 @@ Using Docker is the quickest way to get started with NPLinker webapp. Make sure docker run -p 8050:8050 ghcr.io/nplinker/nplinker-webapp:latest ``` -3. **Access the dashboard** +3. **Access the webapp** Open your web browser and navigate to `http://0.0.0.0:8050/` -#### Docker Image Information +
+<details>
+<summary>Docker Image Information</summary>
 
 - **Available Tags**:
   - `latest`: The most recent build
 
@@ -102,32 +123,24 @@ Using Docker is the quickest way to get started with NPLinker webapp. Make sure
 
 - **More Details**: For additional information about the Docker image, see its [GitHub Container Registry page](https://github.com/NPLinker/nplinker-webapp/pkgs/container/nplinker-webapp).
 
-## Online Demo
+</details>
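+For example, to run a pinned image version instead of `latest`, substitute whichever tag is listed on the registry page (`<tag>` below is a placeholder):
+
+```bash
+docker pull ghcr.io/nplinker/nplinker-webapp:<tag>
+docker run -p 8050:8050 ghcr.io/nplinker/nplinker-webapp:<tag>
+```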
-A live demo of the NPLinker webapp is automatically deployed to [Render](https://render.com/) and updated every time the `main` branch is updated.
+### Filtering Table Data
 
-You can try out the dashboard directly in your browser here:
+The "Candidate Links" tables support data filtering to help you focus on relevant results. Hover over a column's filter cell and type your filter criteria directly into it.
 
-👉 **[Live Demo](https://nplinker-webapp.onrender.com)**
+For numeric columns like "Average Score" or "# Links":
+- `34.6` or `= 34.6` (exact match)
+- `> 30` (greater than)
+- `<= 50` (less than or equal to)
 
-### ⚠️ Demo Limitations
+For text columns like "BGC Classes" or "MiBIG IDs":
+- `Polyketide` or `contains Polyketide` (contains text)
+- `= Polyketide` (exact match)
 
+Multiple filters can be applied simultaneously across different columns to narrow down results.
 
-Please note the following limitations of the hosted demo:
 
-* **Cold start delay**: Free-tier apps on Render sleep after 15 minutes of inactivity and may take 20–30 seconds to wake up.
-* **Performance**: This is a minimal deployment on a free tier and is not optimized for large datasets or concurrent users.
-* **File size limits**: Use the small sample data provided in [`tests/data/mock_obj_data.pkl`](https://github.com/NPLinker/nplinker-webapp/blob/main/tests/data/mock_obj_data.pkl) for testing. Uploading large datasets may lead to errors or timeouts.
-* **No persistent storage**: Uploaded files are not saved between sessions.
-
-This hosted version is intended only for lightweight demo purposes. For full functionality, including large-scale data processing and persistent storage, please install the application locally or via Docker as described above.
-
-## Using the Dashboard
-
-### Input Data
-
-The dashboard accepts data generated by NPLinker and saved as described in the [NPLinker quickstart section](https://nplinker.github.io/nplinker/latest/quickstart/). For testing purposes, a small sample dataset is provided in [`tests/data/mock_obj_data.pkl`](https://github.com/NPLinker/nplinker-webapp/blob/main/tests/data/mock_obj_data.pkl) that can be used to try out the webapp.
-
-Please note that links between genomic and metabolomic data must currently be computed using the NPLinker API separately, as this functionality is not yet implemented in the webapp (see [issue #19](https://github.com/NPLinker/nplinker-webapp/issues/19)). If no links are present in your data, the scoring table will be disabled.
+For a full list of supported filter operators, see the [official Plotly documentation](https://dash.plotly.com/datatable/filtering#filtering-operators).
 
 ## Contributing
diff --git a/app/callbacks.py b/app/callbacks.py
index 9686c57..d955d67 100644
--- a/app/callbacks.py
+++ b/app/callbacks.py
@@ -12,6 +12,7 @@
 import dash_uploader as du
 import pandas as pd
 import plotly.graph_objects as go
+import requests
 from dash import ALL
 from dash import MATCH
 from dash import Dash
@@ -19,7 +20,6 @@
 from dash import Output
 from dash import State
 from dash import callback_context as ctx
-from dash import clientside_callback
 from dash import dcc
 from dash import html
 from app.config import GM_FILTER_DROPDOWN_BGC_CLASS_OPTIONS_PRE_V4
@@ -47,15 +47,8 @@ TEMP_DIR = tempfile.mkdtemp()
 
 du.configure_upload(app, TEMP_DIR)
 
-clientside_callback(
-    """
-    (switchOn) => {
-       document.documentElement.setAttribute('data-bs-theme', switchOn ? 'light' : 'dark');
-       return window.dash_clientside.no_update
-    }
-    """,
-    Output("color-mode-switch", "id"),
-    Input("color-mode-switch", "value"),
+DEMO_DATA_URL = (
+    "https://github.com/NPLinker/nplinker-webapp/blob/main/tests/data/mock_obj_data.pkl?raw=true"
 )
 
 
@@ -95,6 +88,55 @@ def upload_data(status: du.UploadStatus) -> tuple[str, str | None, None]:
     return "No file uploaded", None, None
 
 
+@app.callback(
+    Output("dash-uploader-output", "children", allow_duplicate=True),
+    Output("file-store", "data", allow_duplicate=True),
+    Output("loading-spinner-container", "children", allow_duplicate=True),
+    Input("demo-data-button", "n_clicks"),
+    prevent_initial_call=True,
+)
+def load_demo_data(n_clicks):
+    """Load demo data from the GitHub repository.
+
+    Args:
+        n_clicks: Number of times the demo data button has been clicked.
+
+    Returns:
+        A tuple of (status message, file path or None, spinner placeholder).
+    """
+    if n_clicks is None:
+        raise dash.exceptions.PreventUpdate
+
+    try:
+        # Download the demo data
+        response = requests.get(DEMO_DATA_URL, timeout=30)
+        response.raise_for_status()
+
+        # Save to temporary file
+        demo_file_path = os.path.join(TEMP_DIR, f"demo_data_{uuid.uuid4()}.pkl")
+        with open(demo_file_path, "wb") as f:
+            f.write(response.content)
+
+        # Validate the pickle file
+        with open(demo_file_path, "rb") as f:
+            pickle.load(f)
+
+        file_size_mb = len(response.content) / (1024 * 1024)
+
+        return (
+            f"Successfully loaded demo data: demo_data.pkl [{round(file_size_mb, 2)} MB]",
+            str(demo_file_path),
+            None,
+        )
+
+    except requests.exceptions.RequestException as e:
+        return f"Error downloading demo data: Network error - {str(e)}", None, None
+    except (pickle.UnpicklingError, EOFError, AttributeError):
+        return "Error: Downloaded file is not a valid pickle file.", None, None
+    except Exception as e:
+        return f"Error loading demo data: {str(e)}", None, None
+
+
 @app.callback(
     Output("processed-data-store", "data"),
     Output("processed-links-store", "data"),
@@ -103,12 +145,13 @@ def upload_data(status: du.UploadStatus) -> tuple[str, str | None, None]:
     prevent_initial_call=True,
 )
 def process_uploaded_data(
-    file_path: Path | str | None,
+    file_path: Path | str | None, cleanup: bool = True
 ) -> tuple[str | None, str | None, str | None]:
     """Process the uploaded pickle file and store the processed data.
 
     Args:
         file_path: Path to the uploaded pickle file.
+        cleanup: Flag to indicate whether to clean up the file after processing.
 
     Returns:
         JSON string of processed data or None if processing fails.
@@ -128,7 +171,12 @@ def process_bgc_class(bgc_class: tuple[str, ...] | None) -> list[str]:
             return ["Unknown"]
         return list(bgc_class)  # Convert tuple to list
 
-    processed_data: dict[str, Any] = {"n_bgcs": {}, "gcf_data": [], "mf_data": []}
+    processed_data: dict[str, Any] = {
+        "gcf_data": [],
+        "n_bgcs": {},
+        "class_bgcs": {},
+        "mf_data": [],
+    }
 
     for gcf in gcfs:
         sorted_bgcs = sorted(gcf.bgcs, key=lambda bgc: bgc.id)
@@ -149,6 +197,12 @@ def process_bgc_class(bgc_class: tuple[str, ...] | None) -> list[str]:
             processed_data["n_bgcs"][len(gcf.bgcs)] = []
         processed_data["n_bgcs"][len(gcf.bgcs)].append(gcf.id)
 
+        for bgc_class_list in bgc_classes:
+            for bgc_class in bgc_class_list:
+                if bgc_class not in processed_data["class_bgcs"]:
+                    processed_data["class_bgcs"][bgc_class] = []
+                processed_data["class_bgcs"][bgc_class].append(gcf.id)
+
     for mf in mfs:
         sorted_spectra = sorted(mf.spectra, key=lambda spectrum: spectrum.id)
         processed_data["mf_data"].append(
@@ -246,6 +300,12 @@ def process_mg_link(mf, gcf, methods_data):
     except Exception as e:
         print(f"Error processing file: {str(e)}")
         return None, None, None
+    finally:
+        try:
+            if cleanup and file_path and os.path.exists(file_path):
+                os.remove(file_path)
+        except Exception as e:
+            print(f"Cleanup failed for {file_path}: {e}")
 
 
 @app.callback(
@@ -266,7 +326,7 @@ def process_mg_link(mf, gcf, methods_data):
     Output("gm-filter-accordion-component", "value", allow_duplicate=True),
     Output("gm-scoring-accordion-component", "value", allow_duplicate=True),
     Output("gm-results-table-column-toggle", "value", allow_duplicate=True),
-    # MG tab outputs
+    Output("gm-graph-x-axis-selector", "value"),
     Output("mg-tab", "disabled"),
     Output("mg-filter-accordion-control", "disabled"),
     Output("mg-filter-blocks-id", "data", allow_duplicate=True),
@@ -326,6 +386,7 @@ def disable_tabs_and_reset_blocks(
             [],
             [],
             default_gm_column_value,
+            "n_bgcs",
             # MG tab - disabled
             True,
             True,
@@ -374,6 +435,7 @@ def disable_tabs_and_reset_blocks(
         [],
         [],
         default_gm_column_value,
+        "n_bgcs",
         # MG tab - enabled with initial blocks
         False,
         False,
@@ -397,50 +459,116 @@ def disable_tabs_and_reset_blocks(
 @app.callback(
     Output("gm-graph", "figure"),
     Output("gm-graph", "style"),
-    [Input("processed-data-store", "data")],
+    Output("gm-graph-selector-container", "style"),
+    [Input("processed-data-store", "data"), Input("gm-graph-x-axis-selector", "value")],
 )
-def gm_plot(stored_data: str | None) -> tuple[dict | go.Figure, dict]:
+def gm_plot(stored_data: str | None, x_axis_selection: str) -> tuple[dict | go.Figure, dict, dict]:
     """Create a bar plot based on the processed data.
 
     Args:
         stored_data: JSON string of processed data or None.
+        x_axis_selection: Selected x-axis type ('n_bgcs' or 'class_bgcs').
 
     Returns:
-        Tuple containing the plot figure, style, and a status message.
+        Tuple containing the plot figure, style for graph, and style for selector.
""" if stored_data is None: - return {}, {"display": "none"} - data = json.loads(stored_data) - n_bgcs = data["n_bgcs"] + return {}, {"display": "none"}, {"display": "none"} - x_values = sorted(map(int, n_bgcs.keys())) - y_values = [len(n_bgcs[str(x)]) for x in x_values] - hover_texts = [ - f"GCF IDs: {', '.join(str(gcf_id) for gcf_id in n_bgcs[str(x)])}" for x in x_values - ] + data = json.loads(stored_data) - # Adjust bar width based on number of data points - bar_width = 0.4 if len(x_values) <= 5 else None - # Create the bar plot - fig = go.Figure( - data=[ - go.Bar( - x=x_values, - y=y_values, - text=hover_texts, - hoverinfo="text", - textposition="none", - width=bar_width, - ) + if x_axis_selection == "n_bgcs": + n_bgcs = data["n_bgcs"] + x_values = sorted(map(int, n_bgcs.keys())) + y_values = [len(n_bgcs[str(x)]) for x in x_values] + hover_texts = [ + f"GCF IDs: {', '.join(str(gcf_id) for gcf_id in n_bgcs[str(x)])}" for x in x_values ] - ) - # Update layout - fig.update_layout( - xaxis_title="# BGCs", - yaxis_title="# GCFs", - xaxis=dict(type="category"), - ) - return fig, {"display": "block"} + + # Adjust bar width based on number of data points + bar_width = 0.4 if len(x_values) <= 5 else None + # Create the bar plot + fig = go.Figure( + data=[ + go.Bar( + x=x_values, + y=y_values, + text=hover_texts, + hoverinfo="text", + textposition="none", + width=bar_width, + ) + ] + ) + # Update layout + fig.update_layout( + xaxis_title="# BGCs", + yaxis_title="# GCFs", + xaxis=dict(type="category"), + ) + + else: # x_axis_selection == "class_bgcs" + class_bgcs = data["class_bgcs"] + + # Count unique GCF IDs for each class + class_gcf_counts = {} + for bgc_class, gcf_ids in class_bgcs.items(): + # Count unique GCF IDs + class_gcf_counts[bgc_class] = len(set(gcf_ids)) + + # Sort classes by count for better visualization + sorted_classes = sorted(class_gcf_counts.items(), key=lambda x: x[1], reverse=True) + x_values = [item[0] for item in sorted_classes] + y_values = [item[1] for item in sorted_classes] + + # Generate hover texts with line breaks for better readability + hover_texts = [] + for bgc_class in x_values: + # Get unique GCF IDs for this class + unique_gcf_ids = sorted(list(set(class_bgcs[bgc_class]))) + + # Format GCF IDs with line breaks every 10 items + formatted_gcf_ids = "" + for i, gcf_id in enumerate(unique_gcf_ids): + formatted_gcf_ids += gcf_id + # Add comma if not the last item + if i < len(unique_gcf_ids) - 1: + formatted_gcf_ids += ", " + # Add line break after every 10 items (but not for the last group) + if (i + 1) % 10 == 0 and i < len(unique_gcf_ids) - 1: + formatted_gcf_ids += "
" + + hover_text = f"Class: {bgc_class}
GCF IDs: {formatted_gcf_ids}" + hover_texts.append(hover_text) + + # Adjust bar width based on number of data points + bar_width = 0.4 if len(x_values) <= 5 else None + # Create the bar plot + fig = go.Figure( + data=[ + go.Bar( + x=x_values, + y=y_values, + text=hover_texts, + hoverinfo="text", + textposition="none", + width=bar_width, + ) + ] + ) + + # Update layout + fig.update_layout( + xaxis_title="BGC Classes", + yaxis_title="# GCFs", + xaxis=dict( + type="category", + # Add more space for longer class names + tickangle=-45 if len(x_values) > 5 else 0, + ), + ) + + return fig, {"display": "block"}, {"display": "block"} # ------------------ Common Filter and Table Functions ------------------ # @@ -1451,9 +1579,9 @@ def scoring_create_initial_block(block_id: str, tab_prefix: str = "gm") -> dmc.G "type": f"{tab_prefix}-scoring-dropdown-ids-cutoff-met", "index": block_id, }, - label="Cutoff", - placeholder="Insert cutoff value as a number", - value="0.05", + label="Scoring method's cutoff (>=)", + placeholder="Insert the minimum cutoff value to be considered", + value="0", className="custom-textinput", ) ], @@ -1571,9 +1699,9 @@ def scoring_display_blocks( "type": f"{tab_prefix}-scoring-dropdown-ids-cutoff-met", "index": new_block_id, }, - label="Cutoff", - placeholder="Insert cutoff value as a number", - value="0.05", + label="Scoring method's cutoff (>=)", + placeholder="Insert the minimum cutoff value to be considered", + value="0", className="custom-textinput", ), ], @@ -1615,7 +1743,7 @@ def scoring_update_placeholder( # Callback was not triggered by user interaction, don't change anything raise dash.exceptions.PreventUpdate if selected_value == "METCALF": - return ({"display": "block"}, "Cutoff", "0.05") + return ({"display": "block"}, "Cutoff", "0") else: # This case should never occur due to the Literal type, but it satisfies mypy return ({"display": "none"}, "", "") diff --git a/app/layouts.py b/app/layouts.py index 290d47c..280f338 100644 --- a/app/layouts.py +++ b/app/layouts.py @@ -103,10 +103,14 @@ def create_results_table(table_id, no_sort_columns): columns=[], data=[], editable=False, - filter_action="none", + filter_action="native", + filter_options={"placeholder_text": " filter data..."}, + style_filter={ + "backgroundColor": "#f8f9fa", + }, sort_action="native", virtualization=True, - fixed_rows={"headers": True}, # Keep headers visible when scrolling + fixed_rows={"headers": True}, sort_mode="single", sort_as_null=["None", ""], sort_by=[], @@ -158,7 +162,25 @@ def create_results_table(table_id, no_sort_columns): border: 1px solid #FF6E42; box-shadow: 2px 2px 5px rgba(0, 0, 0, 0.1); """, - } + }, + { + "selector": ".dash-filter input::placeholder", + "rule": "opacity: 1 !important; text-align: left !important;", + }, + { + "selector": ".dash-filter input", + "rule": "text-align: left !important; width: 100% !important;", + }, + # Hide the filter type indicators completely + { + "selector": ".dash-filter--case", + "rule": "display: none !important;", + }, + # Adjust padding to fill the space where indicators were + { + "selector": ".dash-filter", + "rule": "padding-left: 0 !important;", + }, ] + [ { @@ -553,10 +575,41 @@ def create_tab_content(prefix, filter_title, checkl_options, no_sort_columns): # Add graph component only for GM tab components = [] if prefix == "gm": - graph = dcc.Graph(id="gm-graph", className="mt-5 mb-3", style={"display": "none"}) + # Add x-axis selector dropdown above the graph + graph_with_selector = html.Div( + [ + dbc.Row( + [ + dbc.Col( + 
html.Div( + [ + html.Label("Select X-axis: ", className="me-2"), + dcc.Dropdown( + id="gm-graph-x-axis-selector", + options=[ + {"label": "# BGCs", "value": "n_bgcs"}, + {"label": "BGC Classes", "value": "class_bgcs"}, + ], + value="n_bgcs", # Default value + clearable=False, + style={"width": "200px"}, + ), + ], + className="d-flex align-items-center", + ), + width=12, + ) + ], + id="gm-graph-selector-container", + ), + dcc.Graph(id="gm-graph"), + ], + className="mt-5 mb-3", + ) + components = [ dbc.Col(filter_accordion, width=10, className="mx-auto dbc"), - dbc.Col(graph, width=10, className="mx-auto"), + dbc.Col(graph_with_selector, width=10, className="mx-auto dbc"), dbc.Col(data_table, width=10, className="mx-auto"), dbc.Col(scoring_accordion, width=10, className="mx-auto dbc"), ] @@ -575,19 +628,6 @@ def create_tab_content(prefix, filter_title, checkl_options, no_sort_columns): # ------------------ Nav Bar ------------------ # -color_mode_switch = html.Span( - [ - dbc.Label(className="fa fa-moon", html_for="color-mode-switch"), - dbc.Switch( - id="color-mode-switch", - value=False, - className="d-inline-block ms-1", - persistence=True, - ), - dbc.Label(className="fa fa-sun", html_for="color-mode-switch"), - ], - className="p-2", -) navbar = dbc.Row( dbc.Col( dbc.NavbarSimple( @@ -596,10 +636,6 @@ def create_tab_content(prefix, filter_title, checkl_options, no_sort_columns): dbc.NavItem( dbc.NavLink("About", href="https://github.com/NPLinker/nplinker-webapp"), ), - dbc.NavItem( - color_mode_switch, - className="mt-1 p-1", - ), ], brand="NPLinker Webapp", color="primary", @@ -632,6 +668,21 @@ def create_tab_content(prefix, filter_title, checkl_options, no_sort_columns): className="d-flex justify-content-center", ) ), + # Demo data button + dbc.Row( + dbc.Col( + html.Div( + dbc.Button( + "Load Demo Data", + id="demo-data-button", + color="primary", + className="mt-3", + ), + className="d-flex justify-content-center", + ), + className="d-flex justify-content-center", + ) + ), dcc.Store(id="file-store"), # Store to keep the file contents dcc.Store(id="processed-data-store"), # Store to keep the processed data dcc.Store(id="processed-links-store"), # Store to keep the processed links diff --git a/mypy.ini b/mypy.ini index 202ac4e..227993a 100644 --- a/mypy.ini +++ b/mypy.ini @@ -2,4 +2,5 @@ python_version = 3.10 warn_return_any = true warn_unused_configs = true -ignore_missing_imports = true \ No newline at end of file +ignore_missing_imports = true +disable_error_code = import-untyped \ No newline at end of file diff --git a/tests/test_callbacks.py b/tests/test_callbacks.py index 595fc9c..536d57c 100644 --- a/tests/test_callbacks.py +++ b/tests/test_callbacks.py @@ -1,4 +1,5 @@ import json +import pickle import uuid from pathlib import Path from unittest.mock import patch @@ -15,6 +16,7 @@ from app.callbacks import gm_table_toggle_selection from app.callbacks import gm_table_update_datatable from app.callbacks import gm_toggle_download_button +from app.callbacks import load_demo_data from app.callbacks import mg_filter_add_block from app.callbacks import mg_filter_apply from app.callbacks import mg_generate_excel @@ -44,7 +46,7 @@ def mock_uuid4(): @pytest.fixture def processed_data(): # Use the actual process_uploaded_data function to get the processed data - return process_uploaded_data(MOCK_FILE_PATH) + return process_uploaded_data(MOCK_FILE_PATH, cleanup=False) @pytest.fixture @@ -104,17 +106,46 @@ def test_upload_data(): assert path_string == str(MOCK_FILE_PATH) +def 
test_load_demo_data(): + """Test the load_demo_data callback function.""" + + # Test with no clicks - should prevent update + with pytest.raises(dash.exceptions.PreventUpdate): + load_demo_data(None) + + # Test with actual click - should load demo data + result = load_demo_data(1) + message, file_path, spinner = result + + # Check that the function returns expected format + assert isinstance(message, str) + assert isinstance(file_path, (str, type(None))) + assert spinner is None + + # If successful, should contain success message and valid file path + if file_path is not None: + assert "Successfully loaded demo data" in message + assert "demo_data_" in file_path + # Verify the file actually exists and is valid + with open(file_path, "rb") as f: + data = pickle.load(f) + assert data is not None + else: + # If failed, should contain error message + assert "Error" in message + + @pytest.mark.parametrize("input_path", [None, Path("non_existent_file.pkl")]) def test_process_uploaded_data_invalid_input(input_path): - processed_data, processed_links, _ = process_uploaded_data(input_path) + processed_data, processed_links, _ = process_uploaded_data(input_path, cleanup=False) assert processed_data is None assert processed_links is None def test_process_uploaded_data_structure(): - processed_data, processed_links, _ = process_uploaded_data(MOCK_FILE_PATH) + processed_data, processed_links, _ = process_uploaded_data(MOCK_FILE_PATH, cleanup=False) processed_data_no_links, processed_links_no_links, _ = process_uploaded_data( - MOCK_FILE_PATH_NO_LINKS + MOCK_FILE_PATH_NO_LINKS, cleanup=False ) assert processed_data is not None @@ -242,6 +273,25 @@ def test_process_uploaded_data_structure(): assert isinstance(gcf["BGC Classes"], list) +def test_process_uploaded_data_cleanup(tmp_path): + """Ensure that temporary file is deleted when cleanup=True.""" + temp_file = tmp_path / "temp_data.pkl" + + dummy_data = (None, [], None, [], None, None) + with open(temp_file, "wb") as f: + pickle.dump(dummy_data, f) + + # Confirm file exists + assert temp_file.exists() + + # Call the function with cleanup=True (default) + processed_data, _, _ = process_uploaded_data(temp_file, cleanup=True) + + # File should be deleted after processing + assert not temp_file.exists() + assert processed_data is not None # Sanity check: function still processed the file + + def test_disable_tabs(mock_uuid): default_gm_column_value = ( [GM_RESULTS_TABLE_CHECKL_OPTIONAL_COLUMNS[0]] @@ -273,6 +323,7 @@ def test_disable_tabs(mock_uuid): [], [], default_gm_column_value, + "n_bgcs", # MG tab - disabled True, True, @@ -312,6 +363,7 @@ def test_disable_tabs(mock_uuid): gm_filter_accordion_value, gm_scoring_accordion_value, gm_results_table_column_toggle, + gm_graph_dropdown, # MG tab outputs mg_tab_disabled, mg_filter_accordion_disabled, @@ -348,6 +400,7 @@ def test_disable_tabs(mock_uuid): assert gm_filter_accordion_value == [] assert gm_scoring_accordion_value == [] assert gm_results_table_column_toggle == default_gm_column_value + assert gm_graph_dropdown == "n_bgcs" # Assert MG tab outputs assert mg_tab_disabled is False