Humongous amounts of memory needed to parse large models #46

@pranavashok

Description

We are trying to develop some new algorithms for CTMDP reachability in PRISM and are facing some major issues with the parsing of large models. As discussed in issue #27, I made just a few modifications so that CTMDPs can be input in the same format as MDPs.

PRISM needs more than 25 GB of memory to parse a 160 MB model. I was wondering why this happens, and why parsing is so slow (it took around 6-7 minutes with 52 threads, though not all of them were running simultaneously). Would there be a relatively simple way to get around this limitation? For comparison, IMCA is able to solve this CTMDP in around the same time.

My first thought was to use the explicit format (.tra). I assumed that the ModulesFile object produced by parsing the explicit format could be used to instantiate a ModelGenerator in the same way as before, but that does not seem to be the case: the ModelGenerator does not populate the transition list at all (when getTransitionList is called). I wonder if I am doing something wrong here.
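
For reference, here is a minimal sketch of the kind of thing I am attempting. The file names are placeholders and the calls are written from memory, so the exact signatures may differ between PRISM versions; the commented part at the end is the explicit-files route I could not get to work.

```java
import java.io.File;

import parser.State;
import parser.ast.ModulesFile;
import prism.Prism;
import prism.PrismFileLog;
import simulator.ModulesFileModelGenerator;

public class ParseSketch
{
    public static void main(String[] args) throws Exception
    {
        // Some PRISM versions take a second (tech) log argument here
        Prism prism = new Prism(new PrismFileLog("stdout"));
        prism.initialise();

        // PRISM-language path: works, but parsing the 160 MB model
        // needs more than 25 GB of memory ("model.nm" is a placeholder)
        ModulesFile mf = prism.parseModelFile(new File("model.nm"));
        ModulesFileModelGenerator gen = new ModulesFileModelGenerator(mf, prism);

        // Exploring a state yields the expected choices/transitions here
        State init = gen.getInitialState();
        gen.exploreState(init);
        System.out.println(gen.getNumChoices());

        // What I hoped would also work: build the ModulesFile from the
        // explicit (.tra/.sta) files instead and feed it to the same
        // constructor. With that route, the generator's transition list
        // stays empty.
    }
}
```
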

Any hints or thoughts on how to proceed would be appreciated.
