Hi, thanks for your valuable contribution.
I wanted to ask whether you have tried using a larger collection of LoRAs, rather than the 28 LoRA adapters, to see the impact of scale. In particular, have you tried using 256 LoRAs trained on Flan v2, as was done in Arrow routing?