"multiscales": [
{
"name": "example",
"axes": [
{ "name": "t", "type": "time", "unit": "millisecond" },
{ "name": "c", "type": "channel" },
{ "name": "z", "type": "space", "unit": "micrometer" },
{ "name": "y", "type": "space", "unit": "micrometer" },
{ "name": "x", "type": "space", "unit": "micrometer" }
],
"datasets": [
{
"path": "0",
"coordinateTransformations": [
{
// the voxel size for the first scale level (0.5 micrometer)
"type": "scale",
"scale": [1.0, 1.0, 0.5, 0.5, 0.5]
}
]
},
{
"path": "1",
"coordinateTransformations": [
{
// the voxel size for the second scale level (downscaled by a factor of 2 -> 1 micrometer)
"type": "scale",
"scale": [1.0, 1.0, 1.0, 1.0, 1.0]
}
]
}
]
}
]
We could add proper scaling metadata to `.zattrs` so that tools like neuroglancer can show scale bars correctly. As the spec example above shows, you can set the unit to a metric unit and then set the voxel scale to a fraction. We would have to pass the volume scale as an argument (or infer it from the volume information).
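As a sketch of what passing the volume scale as an argument could look like, the snippet below builds the `multiscales` metadata dictionary that would be written into `.zattrs`. The function name `build_multiscales` and the `voxel_size` parameter (z, y, x in micrometers) are illustrative assumptions, not part of any existing API; each downscaled level multiplies the spatial scale factors by 2, matching the spec example above.

```python
import json

def build_multiscales(voxel_size, n_levels=2, name="example"):
    """Build OME-NGFF-style multiscales metadata for a (t, c, z, y, x) volume.

    voxel_size: (z, y, x) voxel size of the full-resolution level, in micrometers.
    n_levels: number of scale levels; each level downscales spatial axes by 2.
    """
    axes = [
        {"name": "t", "type": "time", "unit": "millisecond"},
        {"name": "c", "type": "channel"},
        {"name": "z", "type": "space", "unit": "micrometer"},
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"},
    ]
    datasets = []
    for level in range(n_levels):
        factor = 2 ** level  # spatial downscaling factor for this level
        datasets.append({
            "path": str(level),
            "coordinateTransformations": [
                # t and c are never rescaled; spatial scales grow with the factor
                {"type": "scale",
                 "scale": [1.0, 1.0] + [s * factor for s in voxel_size]},
            ],
        })
    return {"multiscales": [{"name": name, "axes": axes, "datasets": datasets}]}

# With 0.5 um voxels, level 0 gets scale 0.5 and level 1 gets 1.0,
# reproducing the spec example above.
meta = build_multiscales(voxel_size=(0.5, 0.5, 0.5))
print(json.dumps(meta, indent=2))
```

In practice this dictionary would be assigned to the group attributes (e.g. `group.attrs["multiscales"] = meta["multiscales"]` with zarr-python), which serializes it into `.zattrs`.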