
training_method seems not to work #30

@Con6924

Thanks for your impressive work and clear code!

I find that setting training_method to 'selfattn' or 'xattn' leads to the following failure:

create LoRA for U-Net: 0 modules.
Traceback (most recent call last):
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 343, in <module>
    main(args)
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 330, in main
    train(config, prompts)
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 89, in train
    optimizer = optimizer_module(network.prepare_optimizer_params(), lr=config.train.lr, **optimizer_kwargs)
  File "/home/notebook/code/personal/S9049723/Anaconda3/envs/leco/lib/python3.10/site-packages/torch/optim/adamw.py", line 50, in __init__
    super().__init__(params, defaults)
  File "/home/notebook/code/personal/S9049723/Anaconda3/envs/leco/lib/python3.10/site-packages/torch/optim/optimizer.py", line 187, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
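
The ValueError itself is only the downstream symptom: "create LoRA for U-Net: 0 modules." means no LoRA parameters were created, so network.prepare_optimizer_params() hands AdamW an empty list, which PyTorch rejects. This can be confirmed in isolation with plain PyTorch (not the repository's code):

import torch

# AdamW refuses an empty parameter list, producing the same error as in the traceback.
torch.optim.AdamW([], lr=1e-4)
# ValueError: optimizer got an empty parameter list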

Looking at the LoRA implementation, LoRA modules are only attached to Conv and Linear modules, so the attention blocks selected by these options end up with no LoRA associated with them. Maybe the related code can be removed or refined in a later update.
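
As a minimal sketch of a possible fix (hypothetical helper names, not the repository's actual implementation), the selection for 'selfattn'/'xattn' could pick the attention blocks by name and then attach LoRA to the Linear/Conv2d projections inside them (to_q/to_k/to_v/to_out), instead of requiring the selected module itself to be Linear or Conv:

import torch.nn as nn

def collect_lora_targets(unet: nn.Module, training_method: str):
    # Map the training method to the diffusers attention-block suffix:
    # 'attn1' = self-attention, 'attn2' = cross-attention.
    key = {"selfattn": "attn1", "xattn": "attn2"}[training_method]
    targets = []
    for block_name, block in unet.named_modules():
        if not block_name.endswith(key):
            continue
        # Descend into the attention block and collect its Linear/Conv2d
        # children; these are the modules a LoRA adapter can actually wrap.
        for child_name, child in block.named_modules():
            if isinstance(child, (nn.Linear, nn.Conv2d)):
                targets.append((f"{block_name}.{child_name}", child))
    return targets

With a selection like this, collect_lora_targets(unet, "xattn") returns a non-empty list for a standard Stable Diffusion U-Net, so the optimizer no longer receives an empty parameter list.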
