Conversation
Overall it is looking good. Let's coordinate with the physics team to get an example CSV and also resolve the bbox questions. Let's stage the code with an example CSV file so we can write unit tests. Let's also coordinate the desired format for the output. Getting closer. This is great.
… boundary files from stormsim
Can I do a quick review? I see several issues you might consider.
trietmnj
left a comment
Let's start off with this. I'm also trying to see if there is a potential memory leak that could become a problem.
next step: use gdal to handle events and reaches files
@sebastianrowan @HenryGeorgist do we have any idea what the output might look like yet?
I think we should write the results to two separate files (or separate tables if using the PSQL writer). One would save the basic building characteristics copied from the NSI along with summary results from the full analysis (e.g. times flooded, times raised, final value, etc.), and the second file/table would hold the results for individual storm events. That file would include just the building fd_id, for joining back to the previous table, plus the event-specific impacts to the structure (val_before, val_after, reconstruction_time, etc.).
I like the idea. We will not be using PSQL. I propose the auxiliary file for detailed results on the hazard and structure state. When a result gets passed to a results writer, the writer can write to more than one file, so instead of serializing the hazard into a JSON blob, you could parse and separate the components much as you suggest. The simplified file will be very useful for displaying mapping and summary results, while the more detailed one could be used to diagnose and/or display detailed views.
@sebastianrowan @HenryGeorgist I'm also trying to catalog all the inputs needed to operationalize a modeling run. You don't happen to have a list already somewhere, do you? Or an entry point where all the input configs for this lifecycle modeling flow through? e.g., maybe something like the test_ functions in go-coastal?
that should be the compute coastal lifecycle action and overall compute manifest json file |
I'm reading this as 3 distinct outputs:
Is that right? Or is (3) just referencing (2)? (3) sounds like an awful lot of data if you're trying to build a balanced panel.
…nsesFile() Default attempt to parse field as a DateTime. Fallback to parsing as a string for csv. Will still fail if csv is not using 'YYYY-MM-DD HH:MM:SS' format
I think it is just 2 files. File 1: structure lifecycle summary results.
File 2: event-level impacts to structures.
For tracking the value of the structures over time, we are not reporting value in fixed time increments between events, which is what I think you are imagining as File 3. Rather, when a structure is impacted by an event, we calculate what its value is at the start of the event based on its original value, considering any unrepaired damage from previous events.
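A minimal sketch of that value-at-event-start calculation, assuming a simple linear rule (value reduced in proportion to the unrepaired damage fraction); the actual model's depreciation rule may differ, and the function name is hypothetical.

```go
package main

import "fmt"

// valueAtEventStart computes a structure's value when a new event arrives:
// the original value, reduced by the fraction of damage from previous
// events that has not yet been repaired. Linear reduction is an assumption
// for illustration, not necessarily the model's actual rule.
func valueAtEventStart(originalValue, unrepairedDamageFrac float64) float64 {
	// Clamp the damage fraction to [0, 1] so value never goes negative
	// or exceeds the original.
	if unrepairedDamageFrac < 0 {
		unrepairedDamageFrac = 0
	}
	if unrepairedDamageFrac > 1 {
		unrepairedDamageFrac = 1
	}
	return originalValue * (1 - unrepairedDamageFrac)
}

func main() {
	// A structure originally worth $200k, with 30% of damage from a prior
	// storm event still unrepaired when the next event hits.
	fmt.Println(valueAtEventStart(200000, 0.3))
}
```

This keeps the output event-driven: value only needs to be recomputed when a storm event actually impacts the structure, which is why no fixed-increment File 3 is needed.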
LGTM. One note on the second table, though: instead of storm_id, you would need to track stormevent_id. storm_id is a reference to the physical storm archetype used for the baseline hazard simulation; stormevent_id is the sampled storm event that gets tracked across time. Multiple storm events could use the same storm archetype.