x320 writes bad raw files? and can't write hdf5 files... #151

Open
klowrey opened this issue Apr 9, 2025 · 1 comment

Comments

klowrey commented Apr 9, 2025

With an x320 ES (which doesn't support the EVT3 format), a patched OpenEB (to support v4l2 devices) can only write raw files of collected data -- HDF5 files are opened and then almost instantly closed, without any errors.

If I convert the raw files to HDF5 with metavision_file_to_hdf5, only a partial chunk of the total data gets converted before the process, again, exits without any errors (files attached):

klowrey@smack:/tmp$ metavision_file_info -i bad.raw 
====================================================================================================

Name                bad.raw
Path                /tmp/bad.raw
Duration            975ms 342us 
Integrator          rp1-cfe
Plugin name         hal_plugin_prophesee
Data encoding       EVT21
Camera generation   0.0
Camera systemID     -1
Camera serial       rp1-cfe

====================================================================================================

Type of event       Number of events    First timestamp     Last timestamp      Average event rate  
----------------------------------------------------------------------------------------------------
CD                  569624              3                   975342              584.0 Kev/s         
klowrey@smack:/tmp$ metavision_file_info -i bad.hdf5
====================================================================================================

Name                bad.hdf5
Path                /tmp/bad.hdf5
Duration            229ms 108us 
Data encoding       ECF
Camera generation   320.0
Camera serial       rp1-cfe

====================================================================================================

Type of event       Number of events    First timestamp     Last timestamp      Average event rate  
----------------------------------------------------------------------------------------------------
CD                  67253               3                   229108              293.5 Kev/s 

OpenEB 4.6.2's metavision_file_to_hdf5 can convert the whole file, but the event timestamps are not always in sorted order (for non-trivially small datasets). In other words, the one property we should be able to rely on from an event camera -- monotonically increasing timestamps -- is broken.

This could be:
a) the data read off the sensor is not in order, but OpenEB usually flags this when it happens, and I have not seen any such warning
b) the data written to the raw file is not in order; digging into the encoded raw formats and the concurrent code that writes them does not seem like a good time, so I have not figured out a way to check for this. Opening the file again in metavision_viewer seems to play the data back fine (without triggering any events-out-of-order warnings)
c) the HDF5 conversion is broken. There is obviously something buggy when the process stops writing data, but I'm unable to isolate exactly why it 1) stops writing and 2) throws no error.

It's difficult to do anything else if we can't collect reliable data in a format suitable for analysis (i.e. HDF5), and I'm not sure how to begin debugging this since it touches on a number of things. If there were a way to better examine the HDF5 conversion process or to test a raw file for correctness, that would be a helpful place to start.
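
As a starting point, here is a minimal sketch of the kind of raw-file correctness check I have in mind. It is written against the OpenEB 4.x driver API (Metavision::Camera::from_file plus the CD callback); header paths and the availability of HDF5 reading may differ between builds, so treat it as an illustration rather than a finished tool.

#include <metavision/sdk/base/events/event_cd.h>
#include <metavision/sdk/driver/camera.h>

#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

int main(int argc, char **argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <file.raw|file.hdf5>\n";
        return 1;
    }

    // Decode the recorded file through the same plugin stack used for live capture.
    auto camera = Metavision::Camera::from_file(argv[1]);

    std::int64_t max_t = -1;          // largest timestamp seen so far
    std::uint64_t total = 0, bad = 0; // total events / out-of-order events

    camera.cd().add_callback([&](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        for (auto ev = begin; ev != end; ++ev) {
            ++total;
            if (ev->t < max_t)
                ++bad;                // timestamp went backwards
            else
                max_t = ev->t;
        }
    });

    camera.start();
    while (camera.is_running())
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    camera.stop();

    std::cout << total << " events, " << bad
              << " with non-monotonic timestamps, last t = " << max_t << " us\n";
    return 0;
}

Running the same check over both bad.raw and bad.hdf5 would at least separate "the raw file itself contains out-of-order events" from "the HDF5 conversion drops or reorders data".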

klowrey commented Apr 10, 2025

I've done some more testing, and the issue starts before anything gets written to disk.

I have seen OpenEB throw warnings for non-monotonic timestamps before, but apparently that's not happening here.

In a callback that processes events from the x320, we can check each event's timestamp against the largest timestamp seen so far. At around 400 kev/s, a few (i.e. 1-10) events out of the batch of 320 delivered to the event handler callback have timestamps smaller than the largest previously seen timestamp. If we increase the event rate, more events are out of order, until the whole batch can be bad.
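
Roughly, the check looks like this (a sketch assuming the patched OpenEB still exposes the x320 through the standard Metavision::Camera CD callback; it is not the exact code used for the numbers above):

#include <metavision/sdk/base/events/event_cd.h>
#include <metavision/sdk/driver/camera.h>

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <thread>

int main() {
    // Assumes the patched v4l2 plugin enumerates the x320 as the first device.
    auto camera = Metavision::Camera::from_first_available();

    std::int64_t max_t = -1; // largest timestamp seen across all batches

    camera.cd().add_callback([&](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        const std::ptrdiff_t batch_size = end - begin;
        std::int64_t worst_jump = 0;
        int out_of_order = 0;

        for (auto ev = begin; ev != end; ++ev) {
            if (ev->t < max_t) {
                ++out_of_order;
                worst_jump = std::max(worst_jump, max_t - ev->t);
            } else {
                max_t = ev->t;
            }
        }

        if (out_of_order > 0)
            std::cout << out_of_order << "/" << batch_size
                      << " events out of order in this batch (worst jump "
                      << worst_jump << " us)\n";
    });

    camera.start();
    std::this_thread::sleep_for(std::chrono::seconds(10)); // record for ~10 s
    camera.stop();
}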

The critical question is whether the sensor is producing bad events at these modest rates, or whether it is something in how the data is decoded. I can't imagine how else one or two events in contiguous memory could have their timestamps changed, since the difference in timestamp is typically something like 20 to 50 microseconds...
