INSPNet Input Configuration:
The experiments folder indicates a default input dimension of 3*23 for INSPNet, which is later explicitly set to 65. Could you clarify the reasoning behind this choice?
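For concreteness, the two values do not coincide, which is what makes the choice worth explaining. A minimal arithmetic check (variable names are illustrative, not taken from the INSPNet code):

```python
# Hypothetical sketch of the dimension discrepancy being asked about:
# the experiment-folder default 3*23 versus the explicitly set value 65.
default_dim = 3 * 23    # default found in the experiments folder
explicit_dim = 65       # value later set explicitly

print(default_dim)                   # 69
print(default_dim == explicit_dim)   # False: the two settings disagree
```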
Domain Transfer with ViT and INSPNet:
What are your thoughts on using Vision Transformer (ViT) embeddings alongside implicit representations in INSPNet for tasks such as style transfer and reconstructing images in a target domain?