23 changes: 23 additions & 0 deletions dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx
@@ -58,6 +58,21 @@ const Txt2ImgPage: React.FC<Txt2ImgPageProps> = ({ selectedModel, onTabChange })
const handleBatchSettingsChange = (batchSize: number, batchCount: number) => {
updateCoreSettings({ batch_size: batchSize, batch_count: batchCount });
};
const handleGenerateReport = async () => {
try {
const response = await fetch("/api/generate-report", { method: "POST" });
if (!response.ok) throw new Error("Failed to generate report");
const blob = await response.blob();
const url = window.URL.createObjectURL(blob);
const link = document.createElement("a");
link.href = url;
link.download = "report.zip";
link.click();
} catch (error) {
console.error("Error generating report:", error);
}
};

Comment on lines +61 to +75

Endpoint mismatch and missing payload; also revoke Blob URL

The server exposes POST /generate_report (underscore), but the frontend calls /api/generate-report (hyphen) with no body, so the request will 404 or fail JSON parsing on the backend. Send the settings payload (aligned with run_report), use the underscore path, and revoke the Blob URL after the download.

-  const handleGenerateReport = async () => {
-  try {
-    const response = await fetch("/api/generate-report", { method: "POST" });
+  const handleGenerateReport = async () => {
+  try {
+    // Reuse the same settings shape used for generation
+    const requestData = {
+      ...coreSettings,
+      ...(controlNetConfig && { controlnet: controlNetConfig }),
+      ...(loraConfig?.enabled && { lora: loraConfig }),
+      custom_workflow: customWorkflow || null,
+    };
+    const response = await fetch("/api/generate_report", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify(requestData),
+    });
     if (!response.ok) throw new Error("Failed to generate report");
     const blob = await response.blob();
     const url = window.URL.createObjectURL(blob);
     const link = document.createElement("a");
     link.href = url;
     link.download = "report.zip"; 
-    link.click();
+    document.body.appendChild(link);
+    link.click();
+    link.remove();
+    setTimeout(() => window.URL.revokeObjectURL(url), 1000);
   } catch (error) {
     console.error("Error generating report:", error);
   }
 };

To confirm the route name across the repo, run:


🏁 Script executed:

#!/bin/bash
rg -n -A2 -i $'/generate[_-]report'



Fix endpoint path, include JSON payload, and revoke Blob URL

Please update the report‐generation handler in Txt2ImgPage to match the backend’s /generate_report route, send the required settings in the request body, and clean up the created Blob URL to avoid leaks.

dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx

   const handleGenerateReport = async () => {
-    try {
-      const response = await fetch("/api/generate-report", { method: "POST" });
+    try {
+      // Send current generation settings to the backend
+      const requestData = {
+        ...coreSettings,
+        ...(controlNetConfig && { controlnet: controlNetConfig }),
+        ...(loraConfig?.enabled && { lora: loraConfig }),
+        custom_workflow: customWorkflow || null,
+      };
+      const response = await fetch("/api/generate_report", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify(requestData),
+      });
       if (!response.ok) throw new Error("Failed to generate report");
       const blob = await response.blob();
       const url = window.URL.createObjectURL(blob);
       const link = document.createElement("a");
       link.href = url;
       link.download = "report.zip";
-      link.click();
+      // Append to DOM, trigger download, then clean up
+      document.body.appendChild(link);
+      link.click();
+      link.remove();
+      setTimeout(() => window.URL.revokeObjectURL(url), 1000);
     } catch (error) {
       console.error("Error generating report:", error);
     }
   };
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines 61 to
75, the handler posts to the wrong endpoint, omits the required JSON payload,
and never revokes the created Blob URL; change the fetch URL to
"/generate_report", send the required settings object in the request body as
JSON with headers "Content-Type: application/json" (use
JSON.stringify(settings)), keep the existing response/blob handling but after
programmatically clicking the link call window.URL.revokeObjectURL(url) and
remove the link element to avoid leaks, and preserve proper error handling and
response.ok checks.


const handleSamplingSettingsChange = (sampler: string, scheduler: string, steps: number, cfg: number) => {
updateCoreSettings({
@@ -403,13 +418,21 @@ const Txt2ImgPage: React.FC<Txt2ImgPageProps> = ({ selectedModel, onTabChange })
};

return (

<div className={`mb-4 ${isMobile ? 'grid grid-cols-1' : 'grid gap-6 md:grid-cols-[1.8fr_1fr]'}`}>
{/* Left Column - Controls */}
<div className="space-y-4">
<div className="flex flex-col">
<div className="mb-[18px] flex flex-col space-y-2 sm:flex-row sm:items-center sm:justify-between sm:space-y-0">
<h3 className="text-base font-medium">Generation Settings</h3>
<ActionButtons />
<Button
onClick={handleGenerateReport}
variant="secondary"
disabled={isGenerating}
>
Generate Report
</Button>
</div>

{isMobile && <MobileImagePreview />}
Binary file added tasks/report_bundle.zip
Binary file not shown.
13 changes: 13 additions & 0 deletions tasks/report_bundle/README.txt
@@ -0,0 +1,13 @@
# Report Bundle

Hi!
This report includes everything you need to review the image generation results.

What’s inside:
- `results.csv` — a table with image scores and file paths
- `config.json` — the exact settings used during generation (models, prompts, etc.)
- Grid images — labeled previews of the outputs

Feel free to explore or reuse any part of it. Let me know if anything’s missing!

— Lial
8 changes: 8 additions & 0 deletions tasks/report_bundle/config.json
@@ -0,0 +1,8 @@
{
"task_id": "123",
"report_type": "Csv",
"data": {
"title": "My Report",
"content": "This is the content of the report."
}
}
139 changes: 139 additions & 0 deletions tasks/report_bundle/generate_report.py
@@ -0,0 +1,139 @@
import os
import json
import csv
import zipfile
from PIL import Image
import torch
import clip
import lpips

RESULTS_FILE = 'results.csv'
CONFIG_FILE = 'config.json'
GRIDS_DIR = 'grids'
README_FILE = 'README.txt'
OUTPUT_ZIP = 'report.zip'

# Load models (once)
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net='alex').to(device)

Comment on lines +16 to +20
Set models to eval mode; optionally lazy-load to avoid import-time cost

Switch learned models to eval() to disable dropout/batch-norm training behavior. Consider lazy-loading on first use to reduce import-time latency.

 device = "cuda" if torch.cuda.is_available() else "cpu"
 clip_model, preprocess = clip.load("ViT-B/32", device=device)
 lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()

If startup latency is a concern, I can refactor to lazy-load these handles the first time compute_scores is called.


Set models to inference mode and consider lazy‐loading

After loading the CLIP and LPIPS models in tasks/report_bundle/generate_report.py (around lines 16–20), switch them to evaluation mode to disable dropout/BatchNorm training behavior. If startup latency becomes an issue, you can move the load + eval calls into your compute_scores function and only execute them on first use.

• File: tasks/report_bundle/generate_report.py
• Lines: 16–20

 device = "cuda" if torch.cuda.is_available() else "cpu"
 clip_model, preprocess = clip.load("ViT-B/32", device=device)
 lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()
🤖 Prompt for AI Agents
In tasks/report_bundle/generate_report.py around lines 16 to 20, after loading
the CLIP and LPIPS models call .eval() on both models to put them into inference
mode (e.g., clip_model.eval(); lpips_model.eval()) so dropout/BatchNorm are
disabled; additionally, to reduce startup latency optionally move the load and
.eval() calls into compute_scores and implement lazy initialization (initialize
globals on first use, load models, .to(device) and .eval() there) so the models
are only loaded and set to eval when compute_scores is first invoked.
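
The lazy-initialization idea above can be sketched generically with the standard library only (the loader and names here are illustrative stand-ins for the clip/lpips calls, not code from this PR):

```python
# Module-level cache starts empty; nothing heavy runs at import time.
_models = None

def _load_models():
    # Stand-in for the expensive clip.load(...) / lpips.LPIPS(...) calls;
    # a real version would also call .to(device) and .eval() here.
    return {"clip": "ViT-B/32", "lpips": "alex"}

def get_models():
    """Load models on first call, then reuse the cached handles."""
    global _models
    if _models is None:
        _models = _load_models()
    return _models
```

compute_scores would then call get_models() instead of touching module globals directly, paying the load cost only on the first invocation.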

def compute_scores(prompt, image_path, reference_image_path=None):
"""
compute clip_score between prompt and image_path,
compute lpips between image_path and reference_image_path if provided,
otherwise lpips = 0 (placeholder).
"""
# open image
img = Image.open(image_path).convert("RGB")
image_tensor = preprocess(img).unsqueeze(0).to(device)
text = clip.tokenize([prompt]).to(device)

with torch.no_grad():
image_features = clip_model.encode_image(image_tensor)
text_features = clip_model.encode_text(text)
clip_score = torch.cosine_similarity(image_features, text_features).item()

lpips_score = None
if reference_image_path and os.path.exists(reference_image_path):
ref_img = Image.open(reference_image_path).convert("RGB")
# lpips expects tensors in range [-1,1] as float with shape [1,3,H,W]
# use lpips helper if available or convert
from torchvision import transforms
to_tensor = transforms.ToTensor()
a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device) # map [0,1] -> [-1,1]
b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)
with torch.no_grad():
lpips_score = float(lpips_model(a, b).item())
Comment on lines +39 to +47
issue (code-quality): Extract code out into function (extract-method)
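
As one concrete extraction, sketched without the torch dependency (the helper name is hypothetical): the inline [0,1] → [-1,1] mapping applied before LPIPS can become a named function.

```python
def to_lpips_range(value: float) -> float:
    """Map a pixel value from [0, 1] to the [-1, 1] range LPIPS expects."""
    return value * 2.0 - 1.0
```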

else:
lpips_score = 0.0 # placeholder if no reference provided

return round(clip_score, 6), round(lpips_score, 6)


def add_scores_to_csv(results_file):
"""
Read results.csv, expect columns at least: id, prompt, image_path
Optional column: reference_image_path (for lpips)
Writes back clip_score, lpips_score, and adds run_id column.
"""
rows = []
with open(results_file, newline='', encoding='utf-8') as csvfile:
reader = csv.DictReader(csvfile)
issue (code-quality): We've found these issues:

fieldnames = list(reader.fieldnames)
if 'clip_score' not in fieldnames:
fieldnames.append('clip_score')
if 'lpips_score' not in fieldnames:
fieldnames.append('lpips_score')
if 'run_id' not in fieldnames:
fieldnames.append('run_id')

for row in reader:
img_path = row.get('image_path')
prompt = row.get('prompt', '')  # if present
ref_path = row.get('reference_image_path')  # if present
if not img_path or not os.path.exists(img_path):
raise FileNotFoundError(f"Image file not found: {img_path}")

clip_s, lpips_s = compute_scores(prompt, img_path, reference_image_path=ref_path)
row['clip_score'] = clip_s
row['lpips_score'] = lpips_s
# build run_id from id (e.g., run_1, run_2, ...)
row['run_id'] = f"run_{row.get('id', '')}"
rows.append(row)

with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)


with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
Comment on lines +85 to +94
suggestion: Duplicate block for writing CSV file.

The file is being written twice; removing the duplicate write will improve efficiency.



Comment on lines +85 to +95
⚠️ Potential issue

Duplicate write to results.csv (idempotent but unnecessary, potential race window)

The file is written twice back-to-back with identical content. Remove the second block.

     with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
         writer.writeheader()
         writer.writerows(rows)
 
-
-    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
-        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
-        writer.writeheader()
-        writer.writerows(rows)
🤖 Prompt for AI Agents
In tasks/report_bundle/generate_report.py around lines 85 to 95 there is a
duplicated block that opens results_file and writes the CSV (writeheader and
writerows) twice; remove the second identical with-open/write block so the file
is written only once (keep the first context manager), leaving a single
csv.DictWriter writeheader() + writerows(rows) call and no extra duplicate
write.
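
The intended single-pass flow — read, augment rows, write exactly once — can be sketched with the standard library (the column names mirror the PR, but this is an illustration, not a drop-in patch):

```python
import csv
import io

def add_run_ids(csv_text: str) -> str:
    """Read CSV text, append a run_id column, and write the result once."""
    reader = csv.DictReader(io.StringIO(csv_text))
    fieldnames = list(reader.fieldnames) + ["run_id"]
    rows = []
    for row in reader:
        row["run_id"] = f"run_{row['id']}"
        rows.append(row)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)  # the only write: no duplicated block
    return out.getvalue()
```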


def generate_config_from_frontend(config_file, settings_dict):
"""
settings_dict expected format:
{ "run_id1": { "prompt": "...", "seed": 123, "sampler": "ddim", ... }, ... }
This function writes config.json from given settings (coming from frontend).
"""
with open(config_file, 'w', encoding='utf-8') as f:
json.dump(settings_dict, f, indent=4, ensure_ascii=False)


def create_report_zip(output_zip=OUTPUT_ZIP):
with zipfile.ZipFile(output_zip, 'w') as zf:
zf.write(RESULTS_FILE)
zf.write(CONFIG_FILE)
zf.write(README_FILE)
for image_file in os.listdir(GRIDS_DIR):
full_path = os.path.join(GRIDS_DIR, image_file)
if os.path.isfile(full_path):
zf.write(full_path, arcname=os.path.join('grids', image_file))


def run_report(settings_from_frontend):
"""
Main entrypoint used by the API.
settings_from_frontend: dict (see generate_config_from_frontend)
"""
# validations
if not os.path.exists(RESULTS_FILE):
raise FileNotFoundError(f"{RESULTS_FILE} not found")
if not os.path.isdir(GRIDS_DIR):
raise FileNotFoundError(f"{GRIDS_DIR} not found")
if not os.path.exists(README_FILE):
raise FileNotFoundError(f"{README_FILE} not found")

# 1. compute scores and update CSV
add_scores_to_csv(RESULTS_FILE)

# 2. write config.json from frontend settings
generate_config_from_frontend(CONFIG_FILE, settings_from_frontend)

# 3. create zip
create_report_zip()
return os.path.abspath(OUTPUT_ZIP)
Binary file added tasks/report_bundle/grids/grid1.png
Binary file added tasks/report_bundle/grids/grid2.png
Binary file added tasks/report_bundle/report.zip
Binary file not shown.
3 changes: 3 additions & 0 deletions tasks/report_bundle/results.csv
@@ -0,0 +1,3 @@
id,image_path,score,clip_score,lpips_score,run_id
1,grids/grid1.png,0.95,0.214652,0.0,run_1
2,grids/grid2.png,0.88,0.237358,0.0,run_2
40 changes: 40 additions & 0 deletions tasks/report_bundle/server.py
@@ -0,0 +1,40 @@
from fastapi import FastAPI, Body, HTTPException
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
Comment on lines +1 to +3
🛠️ Refactor suggestion

Avoid blocking the event loop, address Ruff warnings (B008/B904), and sanitize error details.

  • Offload CPU/IO-heavy run_report to a thread pool to keep the async server responsive.
  • Use Annotated[dict, Body(...)] to avoid function calls in defaults (Ruff B008).
  • Log and return a generic 500 to avoid leaking internals; chain the original exception (Ruff B904).
-from fastapi import FastAPI, Body, HTTPException
+from fastapi import FastAPI, Body, HTTPException
+from fastapi.concurrency import run_in_threadpool
 from fastapi.responses import FileResponse, HTMLResponse
 from fastapi.middleware.cors import CORSMiddleware
+from typing import Annotated
+import logging
 import os
-from generate_report import run_report
+from .generate_report import run_report
@@
 app = FastAPI()
+logger = logging.getLogger(__name__)
@@
-@app.post("/generate_report")
-async def generate_report_endpoint(frontend_settings: dict = Body(...)):
+@app.post("/generate_report")
+async def generate_report_endpoint(frontend_settings: Annotated[dict, Body(...)]):
     try:
-        zip_path = run_report(frontend_settings)
+        # Offload synchronous heavy work to a threadpool to prevent event loop blocking
+        zip_path = await run_in_threadpool(run_report, frontend_settings)
         if not os.path.exists(zip_path):
             raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
         return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
     except Exception as e:
-        raise HTTPException(status_code=500, detail=str(e))
+        # Don't leak internal paths/details; keep cause chained for observability
+        logger.exception("Report generation failed")
+        raise HTTPException(status_code=500, detail="Report generation failed") from e

Also applies to: 33-40

🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around lines 1-3 (and apply same changes to
lines 33-40), the review asks to avoid blocking the event loop, fix Ruff
warnings B008/B904, and sanitize error details: change endpoint parameters to
use Annotated[dict, Body(...)] instead of calling Body in a default, offload the
CPU/IO-heavy run_report call to a thread pool (e.g., FastAPI's run_in_threadpool
or loop.run_in_executor) so the async server stays responsive, catch exceptions,
log the full exception internally but return a generic HTTP 500 without internal
details, and re-raise or raise HTTPException(...) from the original exception to
preserve chaining.

import os
from generate_report import run_report
Fix import to work when running as a package (uvicorn tasks.report_bundle.server:app).

Absolute import from generate_report import run_report will fail under package execution. Use a relative import.

-from generate_report import run_report
+from .generate_report import run_report

If you don’t already have them, ensure tasks/__init__.py and tasks/report_bundle/__init__.py exist so relative imports resolve.


🏁 Script executed:

#!/bin/bash
# Verify package markers exist for relative imports
set -euo pipefail
for d in tasks tasks/report_bundle; do
  if [ -d "$d" ]; then
    if [ ! -f "$d/__init__.py" ]; then
      echo "Missing $d/__init__.py"
    else
      echo "Found $d/__init__.py"
    fi
  fi
done

# Show the current import line
rg -n $'^from .*generate_report import run_report' -A 0 tasks/report_bundle/server.py



Add missing package markers and update import in server.py

The tasks and tasks/report_bundle directories are missing __init__.py, so Python won’t treat them as packages when invoking via uvicorn tasks.report_bundle.server:app. Additionally, the import in tasks/report_bundle/server.py must be relative.

• Create empty __init__.py in:
  • tasks/__init__.py
  • tasks/report_bundle/__init__.py
• In tasks/report_bundle/server.py, change:
-from generate_report import run_report
+from .generate_report import run_report
🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around line 5, the project lacks package
markers and uses an absolute import that will fail when running via uvicorn; add
empty files tasks/__init__.py and tasks/report_bundle/__init__.py to make those
directories packages, and update the import in tasks/report_bundle/server.py to
use a relative import (import the run_report from the local module) so Python
can resolve it when invoked as tasks.report_bundle.server:app.


app = FastAPI()

app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
Comment on lines +9 to +15
🛠️ Refactor suggestion

CORS: Don’t combine allow_credentials=True with wildcard origins.

Per CORS spec and Starlette behavior, "*" with credentials is invalid. Either:

  • Set allow_credentials=False (simple fix), or
  • Replace "*" with an explicit list of origins.

Minimal safe change:

 app.add_middleware(
     CORSMiddleware,
     allow_origins=["*"],
-    allow_credentials=True,
+    allow_credentials=False,
     allow_methods=["*"],
     allow_headers=["*"],
 )

If you need credentials, replace ["*"] with explicit origins (e.g., from env) and keep allow_credentials=True.

🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around lines 9 to 15, the CORS middleware is
configured with allow_origins=["*"] and allow_credentials=True which is invalid
per the CORS spec and Starlette; either set allow_credentials=False to keep the
wildcard origin, or replace the wildcard with an explicit list of allowed
origins (e.g., load a comma-separated ALLOWED_ORIGINS from environment/config
and split into a list) and keep allow_credentials=True; update the code to use
one of these two safe options and ensure allow_origins is a concrete list when
allow_credentials is True.
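
Loading explicit origins from the environment, as suggested, might look like this (the ALLOWED_ORIGINS variable name and the default value are assumptions, not part of this PR):

```python
import os

def allowed_origins(default: str = "http://localhost:5173") -> list[str]:
    """Parse a comma-separated ALLOWED_ORIGINS env var into an origin list."""
    raw = os.environ.get("ALLOWED_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```

The resulting list can then feed allow_origins=allowed_origins() while keeping allow_credentials=True.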


@app.get("/", response_class=HTMLResponse)
def read_root():
html_content = """
<html>
<head>
<title>DreamLayer Report API</title>
</head>
<body>
<h1>Welcome to DreamLayer Report API</h1>
<p>Use the <code>/generate_report</code> POST endpoint to generate reports.</p>
</body>
</html>
"""
Comment on lines +19 to +29
issue (code-quality): Inline variable that is immediately returned (inline-immediately-returned-variable)

return html_content

@app.post("/generate_report")
async def generate_report_endpoint(frontend_settings: dict = Body(...)):
try:
zip_path = run_report(frontend_settings)
if not os.path.exists(zip_path):
raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
suggestion (code-quality): Explicitly raise from a previous error (raise-from-previous-error)

Suggested change
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(status_code=500, detail=str(e)) from e

76 changes: 76 additions & 0 deletions tasks/report_bundle/src/components/GenerateReportButton.jsx
Original file line number Diff line number Diff line change
@@ -0,0 +1,76 @@
// src/components/GenerateReportButton.jsx
import React, { useState } from "react";

/**
* props:
* - frontendSettings: object like { run_001: { prompt, seed, sampler }, ... }
* - className: optional CSS classes (reuse your "Generate Images" classes)
* - apiUrl: optional backend url (default http://localhost:8000/generate_report)
*/
export default function GenerateReportButton({
frontendSettings = null,
getFrontendSettings = null,
className = "",
apiUrl = "http://localhost:8000/generate_report",
}) {
const [loading, setLoading] = useState(false);
const [error, setError] = useState(null);

// helper to get settings either from prop or by calling a function passed from parent
const resolveSettings = () => {
if (typeof getFrontendSettings === "function") return getFrontendSettings();
return frontendSettings;
};
Comment on lines +20 to +23
🛠️ Refactor suggestion

Support async settings providers (await getFrontendSettings).

Allow getFrontendSettings to return a Promise for flexibility, then await it in the click handler.

-  const resolveSettings = () => {
-    if (typeof getFrontendSettings === "function") return getFrontendSettings();
-    return frontendSettings;
-  };
+  const resolveSettings = async () => {
+    if (typeof getFrontendSettings === "function") {
+      return await Promise.resolve(getFrontendSettings());
+    }
+    return frontendSettings;
+  };
@@
-      const settings = resolveSettings();
+      const settings = await resolveSettings();
       if (!settings || Object.keys(settings).length === 0) {
         throw new Error("No frontend settings provided. Pass frontendSettings or getFrontendSettings.");
       }

Also applies to: 25-33

🤖 Prompt for AI Agents
In tasks/report_bundle/src/components/GenerateReportButton.jsx around lines
20-23 (and similarly 25-33), the current resolveSettings assumes
getFrontendSettings is synchronous; change resolveSettings to be async and await
getFrontendSettings() when it's a function (i.e. if (typeof getFrontendSettings
=== "function") return await getFrontendSettings();), then update the click
handler to await resolveSettings() (and propagate or handle the returned
Promise) — wrap the await call in a try/catch to handle rejections and
surface/log errors appropriately so async settings providers are supported.
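
The sync-or-async provider pattern is language-agnostic; for illustration, a Python analog (names hypothetical, not from this PR) awaits the provider's result only when it is awaitable:

```python
import asyncio
import inspect

async def resolve_settings(provider, fallback=None):
    """Call provider if callable; await the result only when it is awaitable."""
    if not callable(provider):
        return fallback
    result = provider()
    if inspect.isawaitable(result):
        return await result
    return result
```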


const handleClick = async () => {
  setLoading(true);
  setError(null);

  try {
    const settings = resolveSettings();
    if (!settings || Object.keys(settings).length === 0) {
      throw new Error("No frontend settings provided. Pass frontendSettings or getFrontendSettings.");
    }

    const res = await fetch(apiUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(settings),
    });

    if (!res.ok) {
      const txt = await res.text();
      throw new Error(txt || `Server responded ${res.status}`);
    }

    // response is a zip file; download it
    const blob = await res.blob();
    const url = window.URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.href = url;
    a.download = "report.zip";
    document.body.appendChild(a);
    a.click();
    a.remove();
    window.URL.revokeObjectURL(url);
  } catch (err) {
    console.error("GenerateReport error:", err);
    setError(err.message || "Failed to generate report");
  } finally {
    setLoading(false);
  }
};

return (
  <div>
    <button
      className={`generate-images-like-button ${className}`}
      onClick={handleClick}
      disabled={loading}
    >
      {loading ? "Generating..." : "Generate Report"}
    </button>
    {error && <div style={{ color: "crimson", marginTop: 8 }}>{error}</div>}
  </div>
);
}
14 changes: 14 additions & 0 deletions tasks/report_bundle/test_schema.py
@@ -0,0 +1,14 @@
import csv

def test_csv_schema(filename):
    required_columns = {'id', 'image_path', 'score'}
    with open(filename, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        headers = set(reader.fieldnames)
        missing = required_columns - headers
        if missing:
Comment on lines +8 to +9

suggestion (code-quality): Use named expression to simplify assignment and conditional (use-named-expression)

Suggested change
        missing = required_columns - headers
        if missing:
        if missing := required_columns - headers:

            raise ValueError(f"CSV is missing required columns: {missing}")
Comment on lines +9 to +10

issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)

Explanation: Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines say:
"Clear tests are trivially correct upon inspection."
To reach that, avoid complex code in tests:

  • loops
  • conditionals

Some ways to fix this:

  • Use parametrized tests to get rid of the loop.
  • Move the complex logic into helpers.
  • Move the complex part into pytest fixtures.

Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / Don't Put Logic in Tests
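A minimal sketch of the parametrized-test approach the guideline recommends, assuming pytest is available; the helper name `check_csv_schema` and the inline CSV fixtures are illustrative, not part of the PR:

```python
import csv
import io

import pytest

def check_csv_schema(text, required_columns):
    # Helper holds the comparison logic so the test body stays branch-free
    reader = csv.DictReader(io.StringIO(text))
    headers = set(reader.fieldnames or [])
    return required_columns - headers  # empty set means the schema is valid

@pytest.mark.parametrize(
    "text, expected_missing",
    [
        ("id,image_path,score\n1,a.png,0.9\n", set()),
        ("id,image_path\n1,a.png\n", {"score"}),
    ],
)
def test_csv_schema(text, expected_missing):
    # One bare assertion per case; no loops or conditionals in the test
    assert check_csv_schema(text, {"id", "image_path", "score"}) == expected_missing
```

Each parametrized case becomes its own test, so a failure pinpoints the exact input instead of stopping at the first branch.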

            print("✅ CSV schema test passed!")

if __name__ == "__main__":
    test_csv_schema('results.csv')
Comment on lines +3 to +14

🛠️ Refactor suggestion

⚠️ Potential issue

Schema is out of sync with new CLIP/LPIPS fields; hard requirement on “score” will fail

The pipeline now writes clip_score and lpips_score (not a single score). Also add basic robustness (encoding, empty-header guard).

-def test_csv_schema(filename):
-    required_columns = {'id', 'image_path', 'score'}
-    with open(filename, newline='') as csvfile:
-        reader = csv.DictReader(csvfile)
-        headers = set(reader.fieldnames)
-        missing = required_columns - headers
-        if missing:
-            raise ValueError(f"CSV is missing required columns: {missing}")
-        print("✅ CSV schema test passed!")
+def test_csv_schema(filename):
+    # Minimum columns required by add_scores_to_csv
+    required_columns = {'id', 'image_path', 'prompt'}
+    # Accept either the new pair of scores or the legacy single "score" for backward compatibility
+    acceptable_score_sets = [
+        {'clip_score', 'lpips_score'},
+        {'score'},
+    ]
+    with open(filename, newline='', encoding='utf-8') as csvfile:
+        reader = csv.DictReader(csvfile)
+        if reader.fieldnames is None:
+            raise ValueError("CSV is empty or missing a header row")
+        headers = set(reader.fieldnames)
+        missing = required_columns - headers
+        if missing:
+            raise ValueError(f"CSV is missing required columns: {missing}")
+        if not any(scores.issubset(headers) for scores in acceptable_score_sets):
+            raise ValueError("CSV must contain either {'clip_score','lpips_score'} or legacy {'score'}")
+    print("✅ CSV schema test passed!")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def test_csv_schema(filename):
    required_columns = {'id', 'image_path', 'score'}
    with open(filename, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        headers = set(reader.fieldnames)
        missing = required_columns - headers
        if missing:
            raise ValueError(f"CSV is missing required columns: {missing}")
        print("✅ CSV schema test passed!")

if __name__ == "__main__":
    test_csv_schema('results.csv')

def test_csv_schema(filename):
    # Minimum columns required by add_scores_to_csv
    required_columns = {'id', 'image_path', 'prompt'}
    # Accept either the new pair of scores or the legacy single "score" for backward compatibility
    acceptable_score_sets = [
        {'clip_score', 'lpips_score'},
        {'score'},
    ]
    with open(filename, newline='', encoding='utf-8') as csvfile:
        reader = csv.DictReader(csvfile)
        if reader.fieldnames is None:
            raise ValueError("CSV is empty or missing a header row")
        headers = set(reader.fieldnames)
        missing = required_columns - headers
        if missing:
            raise ValueError(f"CSV is missing required columns: {missing}")
        if not any(scores.issubset(headers) for scores in acceptable_score_sets):
            raise ValueError("CSV must contain either {'clip_score','lpips_score'} or legacy {'score'}")
    print("✅ CSV schema test passed!")

if __name__ == "__main__":
    test_csv_schema('results.csv')
🤖 Prompt for AI Agents
tasks/report_bundle/test_schema.py around lines 3 to 14: the test currently
requires a single "score" column and opens the CSV without explicit encoding or
a guard for empty headers; update the required columns to require "clip_score"
and "lpips_score" (or both if your pipeline writes both), open the file with an
explicit encoding (e.g., encoding='utf-8', errors='replace') and check
reader.fieldnames is not None before using it (raise a clear ValueError if
headers are missing), then compute missing = required_columns -
set(reader.fieldnames) and raise the existing error message if any are missing.
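To exercise the suggested validator end to end, one can generate small fixture files and check both the passing and failing paths. This sketch mirrors the suggested schema logic as a boolean helper (`csv_has_schema` is an illustrative name, not part of the PR), with temp-file paths assumed writable:

```python
import csv
import os
import tempfile

def write_csv(path, fieldnames, rows):
    # Small helper to produce fixture files for the schema check
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

def csv_has_schema(path, required, score_sets):
    # Mirrors the suggested test_csv_schema logic, but returns a bool
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames is None:
            return False
        headers = set(reader.fieldnames)
        if required - headers:
            return False
        return any(s.issubset(headers) for s in score_sets)

required = {"id", "image_path", "prompt"}
score_sets = [{"clip_score", "lpips_score"}, {"score"}]

tmpdir = tempfile.mkdtemp()
good = os.path.join(tmpdir, "good.csv")
bad = os.path.join(tmpdir, "bad.csv")
write_csv(good, ["id", "image_path", "prompt", "clip_score", "lpips_score"],
          [{"id": "1", "image_path": "a.png", "prompt": "cat",
            "clip_score": "0.31", "lpips_score": "0.12"}])
write_csv(bad, ["id", "image_path"], [{"id": "1", "image_path": "a.png"}])

print(csv_has_schema(good, required, score_sets))  # True
print(csv_has_schema(bad, required, score_sets))   # False
```

Returning a bool rather than raising keeps the fixture sweep simple; the real test can still raise `ValueError` as in the suggestion.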