I tried importing a fairly large dataset with your approach and found it less performant than the existing approach (disclaimer: I optimized the current approach and introduced async). This may be due to my dataset. Do you have any statistics on the size of the dataset you used?
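For context, the "introduced async" optimization mentioned above could look roughly like the sketch below: the objects are chunked into batches and uploaded with a bounded number of concurrent in-flight requests. `upload_batch` is a hypothetical stand-in for whatever async call actually sends one batch to the database; the batch size and concurrency values are illustrative, not from this PR.

```python
import asyncio

# Hypothetical placeholder for the real async upload of one batch
# of computer objects (e.g. an HTTP request or driver call).
async def upload_batch(batch):
    await asyncio.sleep(0)  # stands in for network I/O
    return len(batch)

async def import_async(objects, batch_size=100, concurrency=4):
    # Cap the number of batches in flight so the server is not flooded.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(batch):
        async with sem:
            return await upload_batch(batch)

    # Split the dataset into fixed-size batches.
    batches = [objects[i:i + batch_size]
               for i in range(0, len(objects), batch_size)]
    counts = await asyncio.gather(*(bounded(b) for b in batches))
    return sum(counts)  # total objects uploaded
```

A dataset of 750 computer objects would be sent as eight batches here, with at most four uploads running at once.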
Hey @enj5oy ,
My experience is that performance depends heavily on how many nodes are connected to the computer objects. Do you have more statistics on the file size? (750 computer objects is not that many, but the Neo4j entries they create may be far more numerous.)
@arvchristos the upload of 750 computer objects completes within 100 seconds. The file with all 7500 computer objects is 68 MB.
I have implemented a multithreaded version; for me it was at least 10x faster.
To do that I split the queries into read-only and write queries, to reduce deadlocks.
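The read/write split described above could be sketched as two phases: the read-only queries run fully in parallel (they take no locks), and the writes are then funneled through a lock so that concurrent writes touching the same nodes cannot deadlock each other. `read_query` and `write_query` are hypothetical stand-ins for the real Neo4j queries, so this is only an illustration of the threading structure, not the actual importer code.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a read-only Neo4j query: safe to run
# concurrently because it acquires no write locks.
def read_query(obj):
    return {"name": obj}

write_lock = threading.Lock()
results = []

# Hypothetical stand-in for a write query; serialized through a lock
# so overlapping writes cannot deadlock each other.
def write_query(record):
    with write_lock:
        results.append(record["name"])

def import_objects(objects, workers=8):
    # Phase 1: run all read-only queries in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        records = list(pool.map(read_query, objects))
    # Phase 2: apply the writes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(write_query, records))
    return results
```

In a real importer, finer-grained locking (e.g. per node partition) or single-writer batching would keep more of the 10x speedup while still avoiding deadlocks.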