Replies: 2 comments
-
er, not to butt in, but if the data can't be processed before it reaches 1.5 GB and you're concerned about it, then what I would personally do is insert the dataset into any of the excellent JSON DB solutions out there (e.g. Cosmos DB, Postgres) and operate against the database, so the fetch activity's only concern is inserting the data into the JSON DB store.
reasoning: this is a classic big-ish data scenario that calls for either provisioning enough RAM for an in-memory operation or applying the computer-science techniques appropriate to large datasets (say https://en.wikipedia.org/wiki/External_sorting). that said, if it were costing me money to solve this, I would quietly go back to the business concern owner and ask them to rephrase the question, because answering hard questions costs more (or is that less?) of any two of time, cost and goodness of the fitted solution.
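to make it concrete, here is roughly what I mean for the fetch activity. this is just a sketch against a Postgres jsonb table; the table name, connection setting and fetch helper are all made up:

```python
# Sketch: the fetch activity only inserts rows into a jsonb table and returns
# a small dataset id; downstream activities query the database by that id.
import json
import os
import uuid

import psycopg2  # assuming Postgres as the JSON DB


def main(params: dict) -> str:
    dataset_id = str(uuid.uuid4())
    rows = fetch_rows_from_source(params)  # hypothetical fetch step
    conn = psycopg2.connect(os.environ["PG_CONNECTION"])  # hypothetical app setting
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO raw_rows (dataset_id, payload) VALUES (%s, %s)",
            [(dataset_id, json.dumps(row)) for row in rows],
        )
    conn.close()
    return dataset_id  # the orchestrator only ever sees this small string
```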
-
Hi, thanks for the discussion. Yes, if large payloads are passed around carelessly, the orchestration History can easily crash. We would like to keep the inputs and outputs recorded in History in the MB range rather than GB. What I would recommend is to use external storage, such as Azure Storage, for this intermediate data. Instead of passing the data itself, passing a storage id for the data is much lighter and still satisfies the requirement without endangering History.
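As a minimal sketch of that pattern (the container name, activity shape, and fetch helper below are assumptions for illustration), the fetch activity could upload the payload to Blob Storage and return only the blob name, so that History records a short string instead of the data:

```python
# Activity function sketch: write the fetched dataset to Blob Storage and
# return only the blob name; downstream activities download it by that name.
import json
import os
import uuid

from azure.storage.blob import BlobServiceClient


def main(params: dict) -> str:
    service = BlobServiceClient.from_connection_string(os.environ["AzureWebJobsStorage"])
    container = service.get_container_client("intermediate-data")  # placeholder container
    data = fetch_large_dataset(params)  # hypothetical fetch step
    blob_name = f"dataset-{uuid.uuid4()}.json"
    container.upload_blob(blob_name, json.dumps(data))
    return blob_name  # this small id is all the orchestrator passes around
```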
-
Hi,
I'd like to implement an orchestrator capable of:
I thought of using fan out/in to solve the problem:
With this approach I am concerned about memory management, especially while passing the large array to the activity responsible for merging the data.
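Roughly this shape, in Python Durable Functions (the activity names here are just placeholders I made up for the example):

```python
import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    # Fan out: one activity call per chunk of the dataset.
    chunks = yield context.call_activity("SplitDataset", None)
    tasks = [context.call_activity("ProcessChunk", chunk) for chunk in chunks]

    # Fan in: wait for all chunks, then hand the whole result set to the merge
    # activity -- this is the step where the large array worries me.
    results = yield context.task_all(tasks)
    merged = yield context.call_activity("MergeResults", results)
    return merged


main = df.Orchestrator.create(orchestrator_function)
```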
Is the approach described above a valid one? Are there any best practices to follow in this case?
Thank you!
Fabio