Task5 #93 (pull request, base: main)
Changes from all commits: e8b56b1, 4fb8381, 92d5c48, 2138f72, 76fc6c8, bb7235a
New file (+13 lines):

```text
# Report Bundle

Hi!
This report includes everything you need to review the image generation results.

What’s inside:
- `results.csv` — a table with image scores and file paths
- `config.json` — the exact settings used during generation (models, prompts, etc.)
- Grid images — labeled previews of the outputs

Feel free to explore or reuse any part of it. Let me know if anything’s missing!

— Lial
```
New file (+8 lines):

```json
{
    "task_id": "123",
    "report_type": "Csv",
    "data": {
        "title": "My Report",
        "content": "This is the content of the report."
    }
}
```
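As a quick illustration of what a consumer of this file sees, here is a minimal stdlib round-trip of the same payload, serialized with the same `indent=4, ensure_ascii=False` options that `generate_report.py` uses when writing `config.json`:

```python
import json

# Sample payload mirroring the config.json added in this PR
config = {
    "task_id": "123",
    "report_type": "Csv",
    "data": {
        "title": "My Report",
        "content": "This is the content of the report."
    }
}

# Serialize the way generate_config_from_frontend does, then parse it back
text = json.dumps(config, indent=4, ensure_ascii=False)
parsed = json.loads(text)
print(parsed["report_type"])  # Csv
```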
New file (+139 lines), `tasks/report_bundle/generate_report.py`:

```python
import os
import json
import csv
import zipfile
from PIL import Image
import torch
import clip
import lpips


RESULTS_FILE = 'results.csv'
CONFIG_FILE = 'config.json'
GRIDS_DIR = 'grids'
README_FILE = 'README.txt'
OUTPUT_ZIP = 'report.zip'


# Load models (once)
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net='alex').to(device)
```

> **Reviewer comment on lines +16 to +20:** Set models to eval mode; optionally lazy-load to avoid import-time cost. Switch the learned models to `eval()` to disable dropout/batch-norm training behavior, and consider lazy-loading on first use to reduce import-time latency.
>
> ```diff
>  device = "cuda" if torch.cuda.is_available() else "cpu"
>  clip_model, preprocess = clip.load("ViT-B/32", device=device)
>  lpips_model = lpips.LPIPS(net='alex').to(device)
> +clip_model.eval()
> +lpips_model.eval()
> ```
>
> If startup latency is a concern, the loads can be refactored to happen lazily the first time `compute_scores` is called.

```python
def compute_scores(prompt, image_path, reference_image_path=None):
    """
    compute clip_score between prompt and image_path,
    compute lpips between image_path and reference_image_path if provided,
    otherwise lpips = 0 (placeholder).
    """
    # open image
    img = Image.open(image_path).convert("RGB")
    image_tensor = preprocess(img).unsqueeze(0).to(device)
    text = clip.tokenize([prompt]).to(device)

    with torch.no_grad():
        image_features = clip_model.encode_image(image_tensor)
        text_features = clip_model.encode_text(text)
        clip_score = torch.cosine_similarity(image_features, text_features).item()

    lpips_score = None
    if reference_image_path and os.path.exists(reference_image_path):
        ref_img = Image.open(reference_image_path).convert("RGB")
        # lpips expects tensors in range [-1,1] as float with shape [1,3,H,W]
        # use lpips helper if available or convert
        from torchvision import transforms
        to_tensor = transforms.ToTensor()
        a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)  # map [0,1] -> [-1,1]
        b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)
        with torch.no_grad():
            lpips_score = float(lpips_model(a, b).item())
    else:
        lpips_score = 0.0  # placeholder if no reference provided

    return round(clip_score, 6), round(lpips_score, 6)
```

> **Reviewer comment on lines +39 to +47 (code-quality):** Extract the LPIPS branch into its own helper function.

```python
def add_scores_to_csv(results_file):
    """
    Read results.csv, expect columns at least: id, prompt, image_path
    Optional column: reference_image_path (for lpips)
    Writes back clip_score, lpips_score, and adds run_id column.
    """
    rows = []
    with open(results_file, newline='', encoding='utf-8') as csvfile:
        reader = csv.DictReader(csvfile)

        fieldnames = list(reader.fieldnames)
        if 'clip_score' not in fieldnames:
            fieldnames.append('clip_score')
        if 'lpips_score' not in fieldnames:
            fieldnames.append('lpips_score')
        if 'run_id' not in fieldnames:
            fieldnames.append('run_id')

        for row in reader:
            img_path = row.get('image_path')
            prompt = row.get('prompt', '')  # if present
            ref_path = row.get('reference_image_path')  # if present
            if not img_path or not os.path.exists(img_path):
                raise FileNotFoundError(f"Image file not found: {img_path}")

            clip_s, lpips_s = compute_scores(prompt, img_path, reference_image_path=ref_path)
            row['clip_score'] = clip_s
            row['lpips_score'] = lpips_s
            # build the run_id from id (e.g. run_1, run_2, ...)
            row['run_id'] = f"run_{row.get('id', '')}"
            rows.append(row)

    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

> **Reviewer comments on lines +85 to +95:** Duplicate block for writing the CSV file. The file is written twice back-to-back with identical content; the second write is idempotent but unnecessary, and it opens a small race window. Removing the duplicate will improve efficiency:
>
> ```diff
>      with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
>          writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
>          writer.writeheader()
>          writer.writerows(rows)
> -
> -    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
> -        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
> -        writer.writeheader()
> -        writer.writerows(rows)
> ```

```python
def generate_config_from_frontend(config_file, settings_dict):
    """
    settings_dict expected format:
    { "run_id1": { "prompt": "...", "seed": 123, "sampler": "ddim", ... }, ... }
    This function writes config.json from given settings (coming from frontend).
    """
    with open(config_file, 'w', encoding='utf-8') as f:
        json.dump(settings_dict, f, indent=4, ensure_ascii=False)


def create_report_zip(output_zip=OUTPUT_ZIP):
    with zipfile.ZipFile(output_zip, 'w') as zf:
        zf.write(RESULTS_FILE)
        zf.write(CONFIG_FILE)
        zf.write(README_FILE)
        for image_file in os.listdir(GRIDS_DIR):
            full_path = os.path.join(GRIDS_DIR, image_file)
            if os.path.isfile(full_path):
                zf.write(full_path, arcname=os.path.join('grids', image_file))


def run_report(settings_from_frontend):
    """
    Main entrypoint used by the API.
    settings_from_frontend: dict (see generate_config_from_frontend)
    """
    # validations
    if not os.path.exists(RESULTS_FILE):
        raise FileNotFoundError(f"{RESULTS_FILE} not found")
    if not os.path.isdir(GRIDS_DIR):
        raise FileNotFoundError(f"{GRIDS_DIR} not found")
    if not os.path.exists(README_FILE):
        raise FileNotFoundError(f"{README_FILE} not found")

    # 1. compute scores and update CSV
    add_scores_to_csv(RESULTS_FILE)

    # 2. write config.json from frontend settings
    generate_config_from_frontend(CONFIG_FILE, settings_from_frontend)

    # 3. create zip
    create_report_zip()
    return os.path.abspath(OUTPUT_ZIP)
```
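The column-augmentation logic at the heart of `add_scores_to_csv` can be exercised without the CLIP/LPIPS model dependencies. The sketch below is an illustrative reduction, not the PR's code: it works on in-memory CSV text, and the `fake_scores` stub stands in for the real `compute_scores` call.

```python
import csv
import io

def add_columns(csv_text, score_fn):
    """Append clip_score, lpips_score and run_id columns to CSV text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    fieldnames = list(reader.fieldnames)
    for col in ('clip_score', 'lpips_score', 'run_id'):
        if col not in fieldnames:
            fieldnames.append(col)

    rows = []
    for row in reader:
        # score_fn stands in for compute_scores(prompt, image_path, ...)
        row['clip_score'], row['lpips_score'] = score_fn(row)
        row['run_id'] = f"run_{row.get('id', '')}"
        rows.append(row)

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Stub scorer returning fixed (clip_score, lpips_score) values
def fake_scores(row):
    return (0.25, 0.0)

augmented = add_columns("id,image_path\n1,grids/grid1.png\n", fake_scores)
print(augmented.splitlines()[0])  # id,image_path,clip_score,lpips_score,run_id
```

Writing the output once (rather than twice, as the reviewers note above the diff does) is all that is needed; the single-pass read-then-rewrite shape is preserved here.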
New file (+3 lines), `results.csv`:

```csv
id,image_path,score,clip_score,lpips_score,run_id
1,grids/grid1.png,0.95,0.214652,0.0,run_1
2,grids/grid2.png,0.88,0.237358,0.0,run_2
```
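For context on the `clip_score` column: it is the cosine similarity between CLIP's image and text embeddings, as computed in `compute_scores` above. A plain-Python illustration of the metric itself, using hypothetical 3-dimensional vectors rather than real 512-dimensional CLIP features:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, for illustration only
image_features = [0.2, 0.1, 0.9]
text_features = [0.3, 0.0, 0.8]
score = cosine_similarity(image_features, text_features)
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is why well-matched prompt/image pairs yield higher `clip_score` values.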
New file (+40 lines), `tasks/report_bundle/server.py`:

```python
from fastapi import FastAPI, Body, HTTPException
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
import os
from generate_report import run_report


app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.get("/", response_class=HTMLResponse)
def read_root():
    html_content = """
    <html>
        <head>
            <title>DreamLayer Report API</title>
        </head>
        <body>
            <h1>Welcome to DreamLayer Report API</h1>
            <p>Use the <code>/generate_report</code> POST endpoint to generate reports.</p>
        </body>
    </html>
    """
    return html_content


@app.post("/generate_report")
async def generate_report_endpoint(frontend_settings: dict = Body(...)):
    try:
        zip_path = run_report(frontend_settings)
        if not os.path.exists(zip_path):
            raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
        return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```

> **Reviewer comment on lines +1 to +3 (refactor, also applies to lines 33-40):** Avoid blocking the event loop, address Ruff warnings (B008/B904), and sanitize error details:
>
> ```diff
>  from fastapi import FastAPI, Body, HTTPException
> +from fastapi.concurrency import run_in_threadpool
>  from fastapi.responses import FileResponse, HTMLResponse
>  from fastapi.middleware.cors import CORSMiddleware
> +from typing import Annotated
> +import logging
>  import os
> -from generate_report import run_report
> +from .generate_report import run_report
> @@
>  app = FastAPI()
> +logger = logging.getLogger(__name__)
> @@
>  @app.post("/generate_report")
> -async def generate_report_endpoint(frontend_settings: dict = Body(...)):
> +async def generate_report_endpoint(frontend_settings: Annotated[dict, Body(...)]):
>      try:
> -        zip_path = run_report(frontend_settings)
> +        # Offload synchronous heavy work to a threadpool to prevent event loop blocking
> +        zip_path = await run_in_threadpool(run_report, frontend_settings)
>          if not os.path.exists(zip_path):
>              raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
>          return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
>      except Exception as e:
> -        raise HTTPException(status_code=500, detail=str(e))
> +        # Don't leak internal paths/details; keep cause chained for observability
> +        logger.exception("Report generation failed")
> +        raise HTTPException(status_code=500, detail="Report generation failed") from e
> ```

> **Reviewer comment on the import (verification):** Fix the import to work when the app runs as a package (`uvicorn tasks.report_bundle.server:app`); the absolute import fails in that mode. A repository check found that `tasks/__init__.py` and `tasks/report_bundle/__init__.py` are missing. Create both as empty package markers and switch to a relative import:
>
> ```diff
> -from generate_report import run_report
> +from .generate_report import run_report
> ```

> **Reviewer comment on lines +9 to +15 (CORS):** Don't combine `allow_credentials=True` with wildcard origins. Per the CORS spec and Starlette behavior, credentialed requests cannot be served with `Access-Control-Allow-Origin: *`. Minimal safe change:
>
> ```diff
>  app.add_middleware(
>      CORSMiddleware,
>      allow_origins=["*"],
> -    allow_credentials=True,
> +    allow_credentials=False,
>      allow_methods=["*"],
>      allow_headers=["*"],
>  )
> ```
>
> If credentials are needed, replace the wildcard with an explicit list of trusted origins.

> **Reviewer comment on lines +19 to +29 (code-quality):** Inline the `html_content` variable that is immediately returned.

> **Reviewer comment on the exception handler (code-quality):** Explicitly raise from the previous error (B904): `raise HTTPException(status_code=500, detail=str(e)) from e`.
New file (+76 lines), `src/components/GenerateReportButton.jsx`:

```jsx
// src/components/GenerateReportButton.jsx
import React, { useState } from "react";

/**
 * props:
 *  - frontendSettings: object like { run_001: { prompt, seed, sampler }, ... }
 *  - className: optional CSS classes (reuse your "Generate Images" classes)
 *  - apiUrl: optional backend url (default http://localhost:8000/generate_report)
 */
export default function GenerateReportButton({
  frontendSettings = null,
  getFrontendSettings = null,
  className = "",
  apiUrl = "http://localhost:8000/generate_report",
}) {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  // helper to get settings either from prop or by calling a function passed from parent
  const resolveSettings = () => {
    if (typeof getFrontendSettings === "function") return getFrontendSettings();
    return frontendSettings;
  };

  const handleClick = async () => {
    setLoading(true);
    setError(null);

    try {
      const settings = resolveSettings();
      if (!settings || Object.keys(settings).length === 0) {
        throw new Error("No frontend settings provided. Pass frontendSettings or getFrontendSettings.");
      }

      const res = await fetch(apiUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(settings),
      });

      if (!res.ok) {
        const txt = await res.text();
        throw new Error(txt || `Server responded ${res.status}`);
      }

      // response is a zip file — download it
      const blob = await res.blob();
      const url = window.URL.createObjectURL(blob);
      const a = document.createElement("a");
      a.href = url;
      a.download = "report.zip";
      document.body.appendChild(a);
      a.click();
      a.remove();
      window.URL.revokeObjectURL(url);
    } catch (err) {
      console.error("GenerateReport error:", err);
      setError(err.message || "Failed to generate report");
    } finally {
      setLoading(false);
    }
  };

  return (
    <div>
      <button
        className={`generate-images-like-button ${className}`}
        onClick={handleClick}
        disabled={loading}
      >
        {loading ? "Generating..." : "Generate Report"}
      </button>
      {error && <div style={{ color: "crimson", marginTop: 8 }}>{error}</div>}
    </div>
  );
}
```

> **Reviewer comment on lines +20 to +23 (refactor, also applies to lines 25-33):** Support async settings providers (`await getFrontendSettings`). Allow the provider to return a Promise:
>
> ```diff
> -  const resolveSettings = () => {
> -    if (typeof getFrontendSettings === "function") return getFrontendSettings();
> -    return frontendSettings;
> -  };
> +  const resolveSettings = async () => {
> +    if (typeof getFrontendSettings === "function") {
> +      return await Promise.resolve(getFrontendSettings());
> +    }
> +    return frontendSettings;
> +  };
> @@
> -      const settings = resolveSettings();
> +      const settings = await resolveSettings();
>        if (!settings || Object.keys(settings).length === 0) {
>          throw new Error("No frontend settings provided. Pass frontendSettings or getFrontendSettings.");
>        }
> ```
New file (+14 lines):

```python
import csv

def test_csv_schema(filename):
    required_columns = {'id', 'image_path', 'score'}
    with open(filename, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        headers = set(reader.fieldnames)
        missing = required_columns - headers
        if missing:
            raise ValueError(f"CSV is missing required columns: {missing}")
    print("✅ CSV schema test passed!")

if __name__ == "__main__":
    test_csv_schema('results.csv')
```

> **Reviewer comment on lines +8 to +9 (code-quality):** Use a named expression to simplify the assignment and conditional: `if missing := required_columns - headers:`.

> **Reviewer comment on lines +9 to +10 (code-quality):** Avoid conditionals in tests. Avoid complex code, like conditionals, in test functions; per *Software Engineering at Google* ("Don't Put Logic in Tests"), logic in tests is itself prone to bugs, so prefer straight-line assertions.

> **Reviewer comment on lines +3 to +14 (refactor):** The schema is out of sync with the new CLIP/LPIPS fields, and the hard requirement on `score` will fail. The pipeline now writes `clip_score` and `lpips_score`, not a single `score`. Also add basic robustness (explicit encoding, empty-header guard):
>
> ```diff
> -def test_csv_schema(filename):
> -    required_columns = {'id', 'image_path', 'score'}
> -    with open(filename, newline='') as csvfile:
> -        reader = csv.DictReader(csvfile)
> -        headers = set(reader.fieldnames)
> -        missing = required_columns - headers
> -        if missing:
> -            raise ValueError(f"CSV is missing required columns: {missing}")
> -    print("✅ CSV schema test passed!")
> +def test_csv_schema(filename):
> +    # Minimum columns required by add_scores_to_csv
> +    required_columns = {'id', 'image_path', 'prompt'}
> +    # Accept either the new pair of scores or the legacy single "score" for backward compatibility
> +    acceptable_score_sets = [
> +        {'clip_score', 'lpips_score'},
> +        {'score'},
> +    ]
> +    with open(filename, newline='', encoding='utf-8') as csvfile:
> +        reader = csv.DictReader(csvfile)
> +        if reader.fieldnames is None:
> +            raise ValueError("CSV is empty or missing a header row")
> +        headers = set(reader.fieldnames)
> +        missing = required_columns - headers
> +        if missing:
> +            raise ValueError(f"CSV is missing required columns: {missing}")
> +        if not any(scores.issubset(headers) for scores in acceptable_score_sets):
> +            raise ValueError("CSV must contain either {'clip_score','lpips_score'} or legacy {'score'}")
> +    print("✅ CSV schema test passed!")
> ```
> **Reviewer comment (verification agent):** Endpoint mismatch and missing payload; also revoke the Blob URL. The server exposes POST `/generate_report` (underscore), but the frontend calls `/api/generate-report` (hyphen) with no body, which likely 404s or fails JSON parsing. A repository search confirmed the route names differ. Update the report-generation handler in `dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx` to match the backend route, send the required settings in the request body (aligned with `run_report`), and clean up the created Blob URL to avoid leaks:
>
> ```diff
>  const handleGenerateReport = async () => {
>    try {
> -    const response = await fetch("/api/generate-report", { method: "POST" });
> +    // Send current generation settings to the backend
> +    const requestData = {
> +      ...coreSettings,
> +      ...(controlNetConfig && { controlnet: controlNetConfig }),
> +      ...(loraConfig?.enabled && { lora: loraConfig }),
> +      custom_workflow: customWorkflow || null,
> +    };
> +    const response = await fetch("/api/generate_report", {
> +      method: "POST",
> +      headers: { "Content-Type": "application/json" },
> +      body: JSON.stringify(requestData),
> +    });
>      if (!response.ok) throw new Error("Failed to generate report");
>      const blob = await response.blob();
>      const url = window.URL.createObjectURL(blob);
>      const link = document.createElement("a");
>      link.href = url;
>      link.download = "report.zip";
> -    link.click();
> +    // Append to DOM, trigger download, then clean up
> +    document.body.appendChild(link);
> +    link.click();
> +    link.remove();
> +    setTimeout(() => window.URL.revokeObjectURL(url), 1000);
>    } catch (error) {
>      console.error("Error generating report:", error);
>    }
>  };
> ```