Speed up read/write collection #4677
-
Are there any dependencies between those records? If not, you could parallelize the computations to shorten the time.
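For illustration, a minimal sketch of what that could look like with `Parallel.ForEach`, assuming the records really are independent. `FeeItem` and `ComputeFeesAndTaxes` are hypothetical stand-ins for the real collection items and calculations; note that CSLA business objects are generally not thread-safe, so this applies to the raw computation rather than running rules through the CSLA rules engine:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class FeeItem
{
    public decimal Amount { get; set; }
    public decimal Tax { get; private set; }

    // Stand-in for the real per-record fee/tax calculation.
    public void ComputeFeesAndTaxes() => Tax = Amount * 0.07m;
}

public static class FeeCalculations
{
    public static void RunAll(IReadOnlyList<FeeItem> items)
    {
        // Safe only because each item is mutated by exactly one thread
        // and the items share no mutable state.
        Parallel.ForEach(items, item => item.ComputeFeesAndTaxes());
    }
}
```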
-
@rockfordlhotka the read/write list is not bound to the UI. After we are done processing, we save the list. The user could optionally open another form and look at the 'completed' list.
-
Can you do the calculations using a batch technique? For external validations, we sometimes write the validation logic in a separate class that supports a bulk operation by accepting a list of criteria. A business rule in the business class is configured to use that validation class, but the rule is configured not to run in check rules; that rule class only adds one criteria instance to the list. In the data portal operations, we use the validation class directly, passing in a full list of criteria constructed from what was fetched from the database.
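A minimal sketch of that shape, with hypothetical names (`FeeTaxCriteria`, `FeeTaxValidator`, `FeeTaxRule`) and assuming a CSLA 5-style rule API:

```csharp
using System.Collections.Generic;

// Hypothetical criteria describing one record's fee/tax validation input.
public record FeeTaxCriteria(int RecordId, decimal Amount);

public static class FeeTaxValidator
{
    // Bulk entry point: one pass over all criteria, so any external
    // lookups can be batched instead of performed once per record.
    public static IDictionary<int, string?> Validate(
        IReadOnlyList<FeeTaxCriteria> items)
    {
        var results = new Dictionary<int, string?>();
        foreach (var item in items)
            results[item.RecordId] =
                item.Amount < 0 ? "Amount cannot be negative" : null;
        return results;
    }
}

// Per-object rule that wraps the validator for a single criteria instance.
public class FeeTaxRule : Csla.Rules.BusinessRule
{
    public FeeTaxRule(Csla.Core.IPropertyInfo primaryProperty)
        : base(primaryProperty)
    {
        InputProperties.Add(primaryProperty);
        // Keep this rule out of normal CheckRules processing; the data
        // portal calls FeeTaxValidator directly with the full list.
        CanRunInCheckRules = false;
    }

    protected override void Execute(Csla.Rules.IRuleContext context)
    {
        var amount = (decimal)context.InputPropertyValues[PrimaryProperty];
        var failures = FeeTaxValidator.Validate(
            new[] { new FeeTaxCriteria(0, amount) });
        foreach (var message in failures.Values)
            if (message is not null)
                context.AddErrorResult(message);
    }
}
```

In the data portal update you would then call `FeeTaxValidator.Validate(...)` once with criteria for all fetched records, instead of once per record.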
-
Is it the validation rules consuming the time, or the database inserts? Do the validation rules do any round-trips to the database? If you're doing one round-trip for each insert, this would take a very long time for 20k records. You can generate a batch script in SQL for thousands of inserts and updates and fire it off in one round-trip, or maybe two if you need to retrieve the new concurrency stamp / rowversion / last changed date, etc.
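As an illustration of the one-round-trip idea, a sketch that batches many parameterized UPDATE statements into a single command, assuming SQL Server and `Microsoft.Data.SqlClient`; the `dbo.Fees` table and its columns are made-up names:

```csharp
using System.Collections.Generic;
using System.Text;
using Microsoft.Data.SqlClient;

public static class BatchedWrites
{
    public static void UpdateAll(
        string connectionString, IReadOnlyList<(int Id, decimal Amount)> rows)
    {
        var sql = new StringBuilder();
        using var command = new SqlCommand();
        for (int i = 0; i < rows.Count; i++)
        {
            // One statement per row, but all sent in a single command text.
            sql.AppendLine($"UPDATE dbo.Fees SET Amount = @a{i} WHERE Id = @id{i};");
            command.Parameters.AddWithValue($"@a{i}", rows[i].Amount);
            command.Parameters.AddWithValue($"@id{i}", rows[i].Id);
        }

        using var connection = new SqlConnection(connectionString);
        command.Connection = connection;
        command.CommandText = sql.ToString();
        connection.Open();
        command.ExecuteNonQuery(); // all updates execute in one round-trip
    }
}
```

SQL Server caps a command at 2,100 parameters, so in practice you would chunk this into batches of roughly a thousand rows, which is still a handful of round-trips for 20k records rather than 20k of them.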
-
@rockfordlhotka hinted at what sounds like the "right" solution. You are bulk-updating thousands of records. An efficient way to do that is not to fetch those records into thousands of objects, run ten calculation methods against each of those objects, and then save them in what I suspect are thousands of individual SQL UPDATE statements. I suspect that's part of where Rocky was going with the command object suggestion. From a design perspective, this should be treated like a bulk update operation, because that's what it is. I know you want to preserve the logic you already wrote, and sure, you can do some tricks with multithreading and similar to make it better. But you will never fully get around the reality that the base design pattern will never be an efficient one for handling bulk update operations. I hate to say it, but the "performant" answer may be a command object running SQL statements that do bulk updating of data, assuming you are using a database server.
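For what that command object might do internally, here is a minimal sketch under some loud assumptions: SQL Server, `Microsoft.Data.SqlClient`, and a staging table `dbo.FeesStaging` shaped like the real `dbo.Fees` table (all names illustrative). It bulk-copies the computed rows, then applies them with one set-based statement:

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

public static class BulkFeeUpdate
{
    public static void Run(string connectionString, DataTable computedRows)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        // 1. Stream every computed row into a staging table in one bulk operation.
        using (var bulk = new SqlBulkCopy(connection)
               { DestinationTableName = "dbo.FeesStaging" })
        {
            bulk.WriteToServer(computedRows);
        }

        // 2. Apply the whole batch as a single set-based UPDATE, then clear staging.
        const string apply = @"
            UPDATE f SET f.Amount = s.Amount, f.Tax = s.Tax
            FROM dbo.Fees AS f
            JOIN dbo.FeesStaging AS s ON s.Id = f.Id;
            TRUNCATE TABLE dbo.FeesStaging;";
        using var command = new SqlCommand(apply, connection);
        command.ExecuteNonQuery();
    }
}
```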
-
So we ran a few tests; if we move the updates to 'bulk' SQL, we get improvements. Now we will try a separate non-CSLA class for the fees/taxes business rules, or move everything to T-SQL.
-
We have a use case where we retrieve thousands of records from a database using a stored procedure, with the total number ranging from 2,000 to 20,000.
We have a read/write collection that corresponds to those records. The Fetch operation is fine, but before we send those records back to the DB, we have to run almost ten business rules. Some of the business rules have to compute fees/taxes, etc.
This pushes the processing time into minutes, anywhere from three or four minutes to 15 minutes. I think we are doing something wrong here :).
Any recommendations, please.
Regards