
made algos and metrics compatible with eval_model.py files#11

Closed
mkeeler43 wants to merge 1 commit into baseline_models from eval_metrics

Conversation

@mkeeler43
Contributor

Changes the 2D algorithms (CGANs and GANs) to be compatible with all 2D problems and eval metrics.

Eval metrics can be run in respective evaluate_model.py files.

@mkeeler43 mkeeler43 requested a review from ffelten March 27, 2025 16:50
Collaborator

@ffelten ffelten left a comment


Done quickly :)

Overall looks good. I just have a "meh" about duplicating the eval file. Please fix pre-commit too :)

@ffelten
Collaborator

ffelten commented Mar 28, 2025

@njhoffman11 you might want to see this since it targets your branch

@mkeeler43 mkeeler43 requested a review from ffelten March 31, 2025 16:46
Collaborator

@ffelten ffelten left a comment


Looks good, almost there. Be careful to avoid deleting files 😅. Also, pre-commit is still failing.

Collaborator


Do we want to push this?

Contributor Author


I thought it would be nice, but I guess it's not necessary.

@mkeeler43 mkeeler43 requested a review from ffelten April 3, 2025 16:12
Collaborator

@ffelten ffelten left a comment


Very minor changes, good one Mister

.DS_Store (outdated)
Collaborator


You can remove this boy

```python
        selected_indices: The indices of the sampled conditions and designs.
    """
    ### Set up testing conditions ###
    np.random.seed(seed)
```
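The seeded setup above is what makes the evaluation subset reproducible: fixing NumPy's RNG before drawing `selected_indices` means every run of `evaluate_model.py` scores the same conditions and designs. A minimal sketch of that pattern (the helper name `sample_test_conditions` and its arguments are hypothetical, not from this PR):

```python
import numpy as np

def sample_test_conditions(conditions, designs, n_samples, seed=0):
    """Draw a reproducible subset of (condition, design) pairs.

    Hypothetical helper illustrating the seeded-sampling pattern:
    the same seed always yields the same selected_indices.
    """
    np.random.seed(seed)  # fix the RNG so the evaluation subset is repeatable
    selected_indices = np.random.choice(len(conditions), size=n_samples, replace=False)
    return conditions[selected_indices], designs[selected_indices], selected_indices
```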
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

```diff
-for i, designs in enumerate(dataloader):
+for i, data in enumerate(dataloader):
+    designs = data[0]
     print(designs.shape)
```
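The unpacking change above is needed because a DataLoader built over a (designs, conditions) dataset yields a tuple per batch rather than a bare design tensor, so `data[0]` extracts the design batch. A toy sketch of that behavior, using plain Python lists in place of tensors (`make_loader` is an illustrative stand-in, not the project's loader):

```python
def make_loader(designs, conditions, batch_size=2):
    """Toy stand-in for a DataLoader over a (designs, conditions) dataset:
    each iteration yields a (design_batch, condition_batch) tuple."""
    for start in range(0, len(designs), batch_size):
        yield designs[start:start + batch_size], conditions[start:start + batch_size]

designs = [[1, 2], [3, 4], [5, 6], [7, 8]]
conditions = [0, 1, 0, 1]

for i, data in enumerate(make_loader(designs, conditions)):
    batch_designs = data[0]  # unpack the design batch from the yielded tuple
```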

"""Compute the Maximum Mean Discrepancy (MMD) between two sets of samples.

Args:
x (np.ndarray): Array of shape (n, l, w) for generative model designs.
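For reference, a minimal sketch of what an MMD metric over (n, l, w) design arrays typically computes: squared MMD with a Gaussian (RBF) kernel, after flattening each design. This is an illustrative implementation under that assumption, not the code from this PR:

```python
import numpy as np

def mmd(x, y, sigma=1.0):
    """Squared MMD between two sample sets with a Gaussian (RBF) kernel.

    x, y: arrays of shape (n, l, w); each design is flattened to a vector.
    Illustrative sketch; kernel choice and bandwidth sigma are assumptions.
    """
    x = x.reshape(len(x), -1).astype(float)
    y = y.reshape(len(y), -1).astype(float)

    def rbf(a, b):
        # pairwise squared distances via broadcasting, then Gaussian kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()
```

Identical sample sets give an MMD of zero; the score grows as the two distributions diverge.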

@mkeeler43 mkeeler43 requested a review from ffelten April 7, 2025 14:16
@ffelten ffelten closed this Apr 7, 2025
@ffelten ffelten deleted the eval_metrics branch April 7, 2025 14:43