Replies: 3 comments 1 reply
-
Hello Marc. As much as I appreciate the concern about large models, I think you're looking at the wrong part of the system here. There are several inaccuracies in this analysis.

First assumption: the XML is causing load speed issues. How are you establishing that the XML is the issue? Note that I recently checked in a Tools/ExternalLoad directory in the NORMA project. This allows the VS2017/VS2019/VS2022 assemblies to be located outside Visual Studio and loaded. If you have a vsix install instead of a dev build, then you do not need to uninstall or run the dev build to try this (open a VS20xx developer command prompt, set the NORMAOfficial=1 environment variable, run the VS20xx.bat file in the NORMA root, and run NORMAAssembliesFromSetup.bat in the same directory). This will give other instructions as needed.

Anyway, the reason I'm saying this is that there is a flag on the loader to load only non-generative models into the system. This limits the core and extension pieces to assemblies that contribute to generated code (not diagrams, for example). You can easily time your file load with and without this switch. You might be surprised. I've found that large models spend well over half the load time in pre-rendering diagrams. There is an internal flag in the DSL framework (not assemblies I can control) that indicates connection lines should not do line jumps. This line-jumping feature is very expensive, but the flag is unfortunately ignored deep within the framework, so jumps are calculated even when not used.

Also note that the online ORM Model Viewer (https://ormsolutions.com/tools/orm.aspx) will happily load and display your XML file much faster than NORMA, and it does this using XPath statements to read the file, which is much slower than the NORMA XML reader that only touches each element one time.

Second assumption: NORMA is constantly saving to XML. This simply is not the case. All of the post-load NORMA work is in memory. XML is saved when you save, not otherwise.
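The single-pass reader versus XPath contrast above can be sketched outside of NORMA. The following Python snippet is purely illustrative (NORMA itself is a .NET application, and the element names here are invented): it times a streaming parse that touches each element exactly once against repeated XPath-style lookups over the same synthetic file.

```python
# Illustrative only: single-pass streaming read (the style described for
# the NORMA reader) versus repeated XPath-style queries. The "ormModel"
# and "fact" element names are made up for this sketch.
import io
import time
import xml.etree.ElementTree as ET

# Build a synthetic "model" file with many fact elements.
root = ET.Element("ormModel")
for i in range(5000):
    fact = ET.SubElement(root, "fact", id=f"F{i}")
    ET.SubElement(fact, "reading").text = f"reading {i}"
xml_bytes = ET.tostring(root)

# Single pass: every element is touched exactly once.
start = time.perf_counter()
names = []
for _, elem in ET.iterparse(io.BytesIO(xml_bytes)):
    if elem.tag == "fact":
        names.append(elem.get("id"))
        elem.clear()  # release children to keep memory flat
single_pass = time.perf_counter() - start

# XPath style: each lookup walks the tree again from the root.
tree = ET.fromstring(xml_bytes)
start = time.perf_counter()
hits = [tree.find(f".//fact[@id='F{i}']") for i in range(0, 5000, 50)]
xpath_style = time.perf_counter() - start

print(len(names), all(h is not None for h in hits))
```

The streaming loop does one pass regardless of how many facts you need, while the XPath loop rescans the document for every query, which is why a query-driven viewer can still be acceptable on a small file but falls behind a one-touch reader as the model grows.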
The only other time XML is saved is when you activate a different editor and have generators active; in that case the latest XML is pulled from the in-memory model into a temporary file. Still, it is very unlikely that the NORMA XML save or load is your problem.

The most likely performance issue in a large model is the steady regeneration of the relational model. This does a full regeneration on any significant change on the ORM side. If you try editing a copy of the model with this extension off (turn off Relational View, Map to Relational Model, and Map To Abstraction Model), you'll likely see a much faster system. You can then pull large changes into a file where this is enabled to apply the changes to the relational model (without losing customizations). Similarly, if you think the generators are too slow, then edit the file outside of the C# project system that triggers the generator.

The relational model generation speed issue is also worth discussing, but there are no easy answers. Basically, the whole stack (absorption and relational bridges) needs to be replaced with an incremental absorption system that does not slow down as model size increases.
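To make the full-versus-incremental regeneration point concrete, here is a generic sketch in Python. This is not NORMA's actual absorption code; the model and derived map are hypothetical stand-ins that only show why recomputing just the invalidated entries stays fast as the model grows.

```python
# Generic sketch (not NORMA code): full regeneration revisits every
# element on each change, while an incremental scheme recomputes only
# what a change invalidated.
model = {f"type{i}": i for i in range(10_000)}

def full_regenerate(model):
    # Cost grows with model size on every edit.
    return {name: value * 2 for name, value in model.items()}

derived = full_regenerate(model)
dirty = set()

def edit(name, value):
    model[name] = value
    dirty.add(name)          # track only what changed

def incremental_update():
    # Cost grows with the number of edits, not the model size.
    for name in dirty:
        derived[name] = model[name] * 2
    dirty.clear()

edit("type42", 999)
incremental_update()
assert derived == full_regenerate(model)
print(derived["type42"])  # → 1998
```

In this toy form the incremental path touches one entry per edit instead of all 10,000, which is the shape of the scaling fix being described, though real absorption has dependencies between elements that make the invalidation tracking far harder than a dirty set.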
-
Hi Matt,
I initially sent this after adding the Barker ER model extension and then trying to add or delete IDs to or from each entity using the object browser. After removing the extension, it's much faster.
However, I still see an automatic save going on that is time-consuming on larger models. It saves every 5 minutes or so, and each save seems to take 2 or 3 minutes. I wish it didn't have to save at all, though since I'm making many changes, I can see the need for periodic saves in case the power goes out; I don't want to lose my work. But when it saves, it locks up the machine and does not allow any work to proceed until the automatic save finishes.
This is a lighter concern, as adding foreign key relations and doing that work requires analyzing the data anyway. Removing the extensions does help with the speed, but since I'm making hundreds of changes, any delay gets magnified on larger datasets. I'm looking for ways to get around this in any way. The tool has no real competition that I'm aware of, except maybe regex replacements when those are feasible.
Also, I don't need a viewer, as it won't help me in my use case of constantly making changes to a database model.
________________________________
From: Matthew Curland
Sent: Sunday, March 27, 2022 2:12 AM
Subject: Re: [ormsolutions/NORMA] Speed up large models by using a binary model (Discussion #42)
-
Hi Matt,
Yes, it's 49,366 KB, with ~790 tables. No extensions are enabled now. However, I did have an extension enabled (Barker ER) but removed it, unchecking all of its boxes, after making several changes.
The original file was 34,123 KB; I saved a copy before starting the changes. And I'm only about 75% done making changes by adding a valid identifier.
Not all of the tables were originally in the primary database. The database was based on about 15 separate files, and I moved them all to the primary by copying them into one database, with just the schema changes and any relationships and constraints included in the scripts generated by SQL Server 2019.
________________________________
From: Matthew Curland
Sent: Tuesday, March 29, 2022 4:15 AM
Subject: Re: [ormsolutions/NORMA] Speed up large models by using a binary model (Discussion #42)
Hi Marc,
The automatic save is a Visual Studio feature; NORMA does not trigger it or manage it. See AutoRecover in the Tools/Options dialog under Environment/AutoRecover. However, there is no way this should take anywhere near that long. Is the time similar for a manual save of the file? It is hard to tell what is going on here without knowing the file size. Can you fill me in on how large the file is (size on disk) and which extensions are enabled? I'm wondering if you're hitting some memory limits on the machine or in VS. Do you see similar behavior in VS2022?
The biggest expected performance hit for a large number of repeated changes is the relational mapping regeneration. This is a separate discussion.
I wasn't suggesting the viewer as an integral part of your daily routine, just as a second app that shares zero code with NORMA and loads the XML from large .orm files very quickly.
-
I have grown to like this project over the years, but I have always had trouble with large database models loading very slowly. I was hoping that this project and product could offer a much faster binary model in addition to the XML model. The XML model is perfect for renaming certain items or adding schema names by prepending them to terms, so it has many other benefits, but a binary model would also be helpful in terms of sheer speed.
A binary model would be able to update items much more quickly and allow for faster editing and manipulation in the GUI than the XML model, since the NORMA system constantly updates the data by file save. Modern algorithms could also make this product much speedier by finding a node, changing just that node in memory or on the filesystem, and even saving just the revised section rather than the entire model.
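As a rough illustration of the tradeoff being proposed here (NORMA is a .NET application, so these are Python stdlib serializers, not anything NORMA actually uses), the following compares an XML round trip with a binary one for a toy schema:

```python
# Illustrative binary-vs-XML comparison with stdlib tools; the toy
# "table"/"column" schema is invented for this sketch.
import pickle
import xml.etree.ElementTree as ET

# A toy model: table name -> list of column names.
model = {f"table{i}": [f"col{j}" for j in range(10)] for i in range(1000)}

# XML round trip: text is diff-friendly and editable (e.g. renames via
# search and replace), but verbose to write and parse.
root = ET.Element("schema")
for table, cols in model.items():
    t = ET.SubElement(root, "table", name=table)
    for col in cols:
        ET.SubElement(t, "column", name=col)
xml_bytes = ET.tostring(root)

# Binary round trip: compact and fast to load, but opaque to text tools.
bin_bytes = pickle.dumps(model)
restored = pickle.loads(bin_bytes)

print(restored == model, len(bin_bytes) < len(xml_bytes))
```

The binary blob is smaller and cheaper to reload, which is the speed argument, but it gives up exactly the text-editing workflow praised above, which is why keeping both formats (XML as the canonical file, binary as a cache) is the usual compromise.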
With NORMA, I have to break up any model into smaller sub-models and work within a single sub-model because of these limitations. The product seems to work perfectly otherwise; I hope that optimizations can start to be considered more of a priority.