I found a simple but imperfect solution to this problem: include a second
BulkheadPolicy positioned before the
WaitAndRetryPolicy (in an "outer" position). This extra
Bulkhead serves only to reprioritize the workload (by acting as an outer queue), and should have a substantially larger capacity (10x or more) than the inner
Bulkhead that controls the parallelization. Otherwise the outer
Bulkhead could also affect (reduce) the parallelization in an unpredictable way, which we don't want. This is why I consider the solution imperfect: neither is the prioritization optimal, nor is it guaranteed that the parallelization will be unaffected.
Here is the combined policy of the original example, enhanced with an outer
BulkheadPolicy. Its capacity is only 2.5 times larger than the inner one's, which is suitable for this contrived example but too small for the general case:
var policy = Policy.WrapAsync
(
    Policy.BulkheadAsync( // For improving prioritization
        maxParallelization: 5, maxQueuingActions: Int32.MaxValue),
    Policy
        .Handle&lt;HttpRequestException&gt;()
        .WaitAndRetryAsync(retryCount: 1, _ => TimeSpan.FromSeconds(1)),
    Policy.BulkheadAsync( // For controlling parallelization
        maxParallelization: 2, maxQueuingActions: Int32.MaxValue)
);
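For completeness, here is a minimal sketch of a driver that could exercise this policy and produce output in the "#item/attempt" format shown below. Everything beyond the policy itself is my assumption, not part of the original example: the item count, the simulated work duration, the forced failure on the first attempt (so that every item is retried once), and the names ProcessItemAsync and attemptsPerItem are all illustrative.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Polly; // requires the Polly NuGet package

class Program
{
    // Tracks how many times each item has been attempted, so the log
    // can print "#item/attempt" like the output shown in the answer.
    static readonly ConcurrentDictionary<int, int> attemptsPerItem = new();

    static async Task Main()
    {
        var policy = Policy.WrapAsync
        (
            Policy.BulkheadAsync( // outer: for improving prioritization
                maxParallelization: 5, maxQueuingActions: Int32.MaxValue),
            Policy
                .Handle<HttpRequestException>()
                .WaitAndRetryAsync(retryCount: 1, _ => TimeSpan.FromSeconds(1)),
            Policy.BulkheadAsync( // inner: for controlling parallelization
                maxParallelization: 2, maxQueuingActions: Int32.MaxValue)
        );

        // Launch all items at once; the bulkheads queue and throttle them.
        var tasks = Enumerable.Range(1, 10)
            .Select(i => policy.ExecuteAsync(() => ProcessItemAsync(i)))
            .ToArray();
        await Task.WhenAll(tasks);
    }

    static async Task ProcessItemAsync(int item)
    {
        int attempt = attemptsPerItem.AddOrUpdate(item, 1, (_, a) => a + 1);
        Console.WriteLine($"{DateTime.Now:HH:mm:ss} Starting #{item}/{attempt}");
        await Task.Delay(500); // simulated work
        if (attempt == 1)
            throw new HttpRequestException(); // force one retry per item
    }
}
```

With a layout like this, each first attempt fails, the retry re-enters through the WaitAndRetryPolicy, and the two bulkheads determine how retries interleave with fresh items.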
And here is the output of the execution:
12:36:02 Starting #1/1
12:36:02 Starting #2/1
12:36:03 Starting #3/1
12:36:03 Starting #4/1
12:36:04 Starting #2/2
12:36:04 Starting #5/1
12:36:05 Starting #1/2
12:36:05 Starting #3/2
12:36:06 Starting #6/1
12:36:06 Starting #4/2
12:36:07 Starting #8/1
12:36:07 Starting #5/2
12:36:08 Starting #9/1
12:36:08 Starting #7/1
12:36:09 Starting #10/1
12:36:09 Starting #6/2
12:36:10 Starting #7/2
12:36:10 Starting #8/2
12:36:11 Starting #9/2
12:36:11 Starting #10/2
Although this solution is not perfect, I believe it should do more good than harm in the general case, and should result in better overall performance.