> param.requires_grad = True
> model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes (real, fake)

You unfreeze the old fc layer, then immediately overwrite it with a new nn.Linear layer. That means the requires_grad settings you applied are lost.
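A minimal sketch of one possible fix: freeze the backbone first, then replace the head, so the requires_grad settings apply to the layers that actually exist afterwards. A tiny stand-in module is used here instead of the real ResNet18 to keep the example self-contained.

```python
import torch.nn as nn

# Stand-in for ResNet18: a frozen backbone plus a classification head "fc".
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.fc = nn.Linear(8, 1000)

model = TinyNet()

# Freeze everything first.
for param in model.parameters():
    param.requires_grad = False

# Replace the head *after* freezing; a fresh nn.Linear defaults to
# requires_grad=True, so only the new fc will train.
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes (real, fake)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

With this ordering, `trainable` contains only the new head's parameters, which is usually the intent when fine-tuning.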
> optimizer.step()
> train_loss += loss.item()
> correct += (outputs.argmax(1) == labels).sum().item()

`train_loss += loss.item()` sums the loss per batch. When you print it, the scale depends on the number of batches. Usually, you report the average loss per batch or per sample.
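A minimal sketch of the per-batch average, using plain floats as stand-ins for the `loss.item()` values; the variable names mirror the reviewed code but are illustrative.

```python
# Stand-ins for the loss.item() value returned on each batch.
batch_losses = [0.9, 0.7, 0.5, 0.4]

train_loss = 0.0
for loss_value in batch_losses:
    train_loss += loss_value  # summed loss: scale grows with batch count

# Divide by the number of batches so the reported number is comparable
# across runs with different dataset or batch sizes.
avg_loss_per_batch = train_loss / len(batch_losses)
```

Dividing by the dataset size instead would give a per-sample average, which is also common.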
> # save model
> torch.save(model, 'resnet18_full_model.pth')

This saves the entire model object, which can break if you load it in a different PyTorch version. The safer approach is to save only state_dict().
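A sketch of the safer pattern: persist only the state_dict, then rebuild the architecture and load the weights back. A small `nn.Linear` stands in for the real ResNet18, and the file name is illustrative.

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the fine-tuned ResNet18

# Save only the parameter tensors, not the pickled model object.
path = os.path.join(tempfile.mkdtemp(), "resnet18_state_dict.pth")
torch.save(model.state_dict(), path)

# To load: construct the same architecture first, then load the weights.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(path))
```

Because only tensors are stored, loading does not depend on the class definition being picklable across PyTorch versions.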
> if img.mode == 'P' or img.mode == 'LA':
>     img = img.convert('RGBA')
> return img

You convert some images to RGBA, but ResNet18 expects 3 channels (RGB). After .convert("RGBA"), your ToTensor() will return 4 channels → mismatch with ResNet.
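A minimal sketch of a fix: convert everything to `"RGB"` so `ToTensor()` always yields 3 channels. The helper name is illustrative, and note that palette images with transparency may still want an intermediate RGBA step to composite the alpha channel before dropping it.

```python
from PIL import Image

def ensure_rgb(img: Image.Image) -> Image.Image:
    # Whatever the source mode (P, LA, RGBA, ...), "RGB" guarantees
    # 3 channels, matching ResNet18's expected input.
    if img.mode != "RGB":
        img = img.convert("RGB")
    return img

rgb_img = ensure_rgb(Image.new("P", (8, 8)))
```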
> optimizer.step()
> train_loss += loss.item()
> correct += (outputs.argmax(1) == labels).sum().item()

You are only tracking train accuracy. Add a test-loop accuracy to monitor overfitting.
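A sketch of such a test loop, run after each training epoch. The model and loader here are small stand-ins; in the reviewed code they would be the fine-tuned ResNet18 and the test DataLoader.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the trained model
# Stand-in for a DataLoader: three batches of (images, labels).
test_loader = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(3)]

model.eval()  # disable dropout/batchnorm updates
correct, total = 0, 0
with torch.no_grad():  # no gradients needed for evaluation
    for images, labels in test_loader:
        outputs = model(images)
        correct += (outputs.argmax(1) == labels).sum().item()
        total += labels.size(0)

test_acc = correct / total
```

Comparing `test_acc` against the train accuracy each epoch makes a widening gap (overfitting) visible early.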
Adding trained ResNet model weights