
Lifecycle sbr#9

Open
sebastianrowan wants to merge 9 commits into USACE-Cloud-Compute:lifecycle from sebastianrowan:lifecycle_sbr

Conversation

@sebastianrowan

No description provided.

Comment thread actions/compute-payload.go Outdated
@HenryGeorgist
Collaborator

overall it is looking good. let's coordinate with the physics team to get an example csv and also resolve the bbox questions. let's stage the code with an example csv file so we can write unit tests. let's also coordinate the desired format for output.

getting closer. this is great.

@trietmnj

trietmnj commented Apr 1, 2026

Can I do a quick review?

I see several issues you might consider


@trietmnj trietmnj left a comment


Let's start off with this - I'm also trying to see if there is a potential memory leak that could become a problem

Comment thread hazardproviders/csv_multihazard_provider.go
Comment thread hazardproviders/stormSim_lifecycle_hazardprovider.go Outdated
Comment thread hazardproviders/stormSim_lifecycle_hazardprovider.go Outdated
next step: use gdal to handle events and reaches files
Comment thread hazardproviders/stormSim_lifecycle_hazardprovider.go
@trietmnj

@sebastianrowan @HenryGeorgist do we have any idea what the output might look like yet?
@martyheynbah

@sebastianrowan
Author

I think we should write to two separate files for the results (or separate tables if using the PSQL writer). One to save the basic building characteristics copied from the NSI along with summary results from the full analysis (e.g. times flooded, times raised, final value, etc.), and the second file/table should hold the results for individual storm events. This file would include just the building fd_id for joining back to the previous table and results for the event-specific impacts to the structure (val_before, val_after, reconstruction_time, etc.)

@HenryGeorgist
Copy link
Copy Markdown
Collaborator

i like the idea. we will not be using psql. propose the auxiliary file for detailed results on the hazard and structure state. when a result gets passed to a results writer, the results writer can write to more than one file. so instead of serializing hazard into a json blob, you could parse and separate the components much like you suggest. The simplified file will be very useful for display of mapping and summary results, but the more detailed could be used to diagnose and/or display detailed views

@trietmnj

trietmnj commented Apr 15, 2026

@sebastianrowan @HenryGeorgist I'm also trying to catalog all the inputs needed to operationalize a modeling run. You don't happen to have a list already somewhere? Or an entry point where all the input configs for this lifecycle modeling flow through?

e.g., maybe something like the test_ functions in go-coastal?

https://github.com/HydrologicEngineeringCenter/go-coastal/blob/afedfa6d3c9c063175ba8cd738ced5ae70bb335f/compute/compute_test.go#L40

@Cheng-Kevin2

@HenryGeorgist
Collaborator

that should be the compute coastal lifecycle action and overall compute manifest json file

@trietmnj

trietmnj commented Apr 15, 2026

> I think we should write to two separate files for the results (or separate tables if using the PSQL writer). One to save the basic building characteristics copied from the NSI along with summary results from the full analysis (e.g. times flooded, times raised, final value, etc.), and the second file/table should hold the results for individual storm events. This file would include just the building fd_id for joining back to the previous table and results for the event-specific impacts to the structure (val_before, val_after, reconstruction_time, etc.)

> i like the idea. we will not be using psql. propose the auxiliary file for detailed results on the hazard and structure state. when a result gets passed to a results writer, the results writer can write to more than one file. so instead of serializing hazard into a json blob, you could parse and separate the components much like you suggest. The simplified file will be very useful for display of mapping and summary results, but the more detailed could be used to diagnose and/or display detailed views

I'm reading this as 3 distinct outputs:

  1. spatial structure summary
  2. tabulated storm-event impacts
  3. timeseries hazard+structure state

Is that right? Or is 3 just referencing 2? (3) sounds like an awful lot of data if you're trying to build a balanced panel.

…nsesFile()

Default attempt to parse field as a DateTime. Fallback to parsing as a string for csv. Will still fail if csv is not using 'YYYY-MM-DD HH:MM:SS' format
@sebastianrowan
Author

I think it is just 2 files.

File 1: Structure lifecycle summary results:

| fd_id | sqft | occtype | etc. | initial_val_struct | final_val_struct | cumulative_losses_struct | times_flooded | times_raised | etc. | geom |
|---|---|---|---|---|---|---|---|---|---|---|
| 001 | 2000 | res1-1s | more attrs. | 100000 | 100000 | 12345 | 2 | 0 | other stats | 0xabcdef... |
| 002 | 2100 | res1-2s | blah | 100000 | 110000 | 54321 | 2 | 1 | does raising increase val? | 0x12345... |

File 2: Event-level impacts to structures

| fd_id | storm_id | depth_ffe | damage_struct | raise_structure | val_before | val_after | rebuild_time_days | etc. |
|---|---|---|---|---|---|---|---|---|
| 001 | 1111 | 1.0 | 6345 | false | 100000 | 100000 | 21 | etc. |
| 001 | 2222 | 0.9 | 6000 | false | 100000 | 100000 | 15 | etc. |
| 002 | 1111 | 3.0 | 20300 | false | 100000 | 100000 | 90 | etc. |
| 002 | 3333 | 4.2 | 34021 | true | 100000 | 110000 | 120 | etc. |

For tracking the value of the structures over time, we are not reporting value in fixed time increments between events, which is what I think you are imagining as File 3. Rather, when a structure is impacted by an event, we calculate what its value is at the start of the event based on its original value, accounting for any unrepaired damage from previous events.

@trietmnj

> I think it is just 2 files.
>
> File 1: Structure lifecycle summary results:
>
> | fd_id | sqft | occtype | etc. | initial_val_struct | final_val_struct | cumulative_losses_struct | times_flooded | times_raised | etc. | geom |
> |---|---|---|---|---|---|---|---|---|---|---|
> | 001 | 2000 | res1-1s | more attrs. | 100000 | 100000 | 12345 | 2 | 0 | other stats | 0xabcdef... |
> | 002 | 2100 | res1-2s | blah | 100000 | 110000 | 54321 | 2 | 1 | does raising increase val? | 0x12345... |
>
> File 2: Event-level impacts to structures
>
> | fd_id | storm_id | depth_ffe | damage_struct | raise_structure | val_before | val_after | rebuild_time_days | etc. |
> |---|---|---|---|---|---|---|---|---|
> | 001 | 1111 | 1.0 | 6345 | false | 100000 | 100000 | 21 | etc. |
> | 001 | 2222 | 0.9 | 6000 | false | 100000 | 100000 | 15 | etc. |
> | 002 | 1111 | 3.0 | 20300 | false | 100000 | 100000 | 90 | etc. |
> | 002 | 3333 | 4.2 | 34021 | true | 100000 | 110000 | 120 | etc. |
>
> For tracking the value of the structures over time, we are not reporting value in fixed time increments between events, which is what I think you are imagining as File 3. Rather, when a structure is impacted by an event, we calculate what its value is at the start of the event based on its original value, accounting for any unrepaired damage from previous events.

LGTM-

One note on the second table though. Instead of storm_id, you would need to instead track the stormevent_id. storm_id is a reference to the physical storm archetype that is used for the baseline hazard simulation. stormevent_id is the sampled storm-events that get tracked across time. Multiple storm-events could be using the same storm archetype.
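To make that distinction concrete, here is a hedged sketch of the event-level row with both identifiers (field names are illustrative, not the repo's actual types):

```go
package main

import "fmt"

// EventImpactRecord is an illustrative row for the event-level output
// file. StormEventID identifies the sampled storm event tracked across
// the lifecycle timeline; StormID identifies the physical storm
// archetype from the baseline hazard simulation. Several sampled
// events can share one archetype.
type EventImpactRecord struct {
	FdID         string
	StormEventID string  // sampled event, unique within the lifecycle
	StormID      string  // physical storm archetype
	DepthFFE     float64
	DamageStruct float64
	ValBefore    float64
	ValAfter     float64
}

func main() {
	// Two sampled events reusing the same archetype 1111.
	rows := []EventImpactRecord{
		{FdID: "001", StormEventID: "e-01", StormID: "1111", DepthFFE: 1.0},
		{FdID: "002", StormEventID: "e-07", StormID: "1111", DepthFFE: 3.0},
	}
	for _, r := range rows {
		fmt.Println(r.FdID, r.StormEventID, r.StormID)
	}
}
```

Keeping both columns lets the detail file join back to the summary file on `fd_id` while still distinguishing repeated draws of the same physical storm.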

