This would be a path towards stitching: several source images could be mounted with additional orientation information (currently, the mount is conceptually head-on, and any reorientation is done via the virtual camera's y/p/r orientation). The next step would be the 'voronoi thing' (as used for animated sequences of PTOs in lux), which would already be good enough to apply patches. Eventually, an implementation of the Burt & Adelson image-splining algorithm might be slotted in for 'proper' stitching - or some other method to blend the images seamlessly.
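As a rough illustration of the 'voronoi thing', here is a minimal numpy sketch, assuming the source images have already been warped into a common output frame: each output pixel is taken from the image whose center is nearest, which yields hard Voronoi seams rather than blended ones. The function name, arguments, and center representation are hypothetical and not lux's actual interface.

```python
import numpy as np

def voronoi_stitch(images, centers):
    """Naive Voronoi-style stitch: each pixel comes from the nearest source.

    images:  list of HxWxC arrays, pre-warped to a shared output frame
             (hypothetical setup; real stitchers also track validity masks)
    centers: list of (y, x) coordinates of each image's center in that frame
    """
    h, w = images[0].shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # squared distance from every output pixel to every image center
    d = np.stack([(yy - cy) ** 2 + (xx - cx) ** 2 for cy, cx in centers])
    owner = np.argmin(d, axis=0)  # index of the nearest center per pixel
    out = np.zeros_like(images[0])
    for i, img in enumerate(images):
        out[owner == i] = img[owner == i]
    return out
```

The hard seams this produces are exactly what a subsequent Burt & Adelson spline (multiresolution blending across the seam) would smooth over.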