A Method to Prevent the Flooding of the Consumer in a Multiple Producers / Single Consumer pattern

IP.com Disclosure Number: IPCOM000236877D
Publication Date: 2014-May-20
Document File: 2 page(s) / 29K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method to prevent a permanent overload in the producer/consumer scenario. The computational overhead to control the throughput is minimized by allocating a quota to the producers. The capability of the buffer between producers and consumer to cushion a temporary overload of the system is preserved.

Producers in a producer-consumer pattern generate objects that are handed over to the consumer for further processing. The exchange of those objects is managed by a kind of bucket: the producers add objects to the bucket and the consumer takes objects out of it. If the producers add more objects to the bucket per time unit than the consumer can take out, the number of objects in the bucket increases. The bucket can be used as a buffer to handle such an overload for a limited time. If the overload persists, the bucket grows to an unwanted size, and the lingering time of an object in the bucket increases as well.
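As an illustration only, the following is a minimal sketch of the pattern in Java, assuming a shared queue as the bucket; the class name and the produceItem/process placeholders are not part of the disclosure.

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Minimal sketch: several producer threads add objects to a shared
    // bucket, a single consumer thread takes them out. If the producers
    // outpace the consumer, the bucket grows without bound.
    public class ProducerConsumerSketch {
        private final Queue<Object> bucket = new ConcurrentLinkedQueue<>();

        Runnable producer() {
            return () -> {
                while (true) {
                    bucket.add(produceItem());   // hand the object over to the consumer
                }
            };
        }

        Runnable consumer() {
            return () -> {
                while (true) {
                    Object item = bucket.poll(); // take an object out of the bucket
                    if (item != null) {
                        process(item);
                    }
                }
            };
        }

        private Object produceItem() { return new Object(); }  // placeholder work
        private void process(Object item) { }                  // placeholder processing
    }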

Producers and consumer run in different threads. Every access to shared data, such as the number of objects in the bucket, is unfavourable because it generates overhead for memory synchronisation between the threads and obstructs performance optimization by thread-local caching. Therefore a straightforward approach in which the producers query the number of objects in the bucket and wait until it falls below a defined limit is not favourable. Such a polling approach additionally bears the risk that a thread never sees the number of objects drop below the limit because other threads are faster. Avoiding this requires the capability to prioritize the threads, which in turn requires additional communication between the threads and is again unfavourable.
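For contrast, the straightforward polling approach described above could look roughly as follows; the limit value and the spin-wait are illustrative assumptions, not part of the disclosure.

    import java.util.Queue;

    // Unfavourable straightforward approach: every producer polls the
    // shared bucket size before adding an object.
    class PollingProducerSketch {
        static final int LIMIT = 10_000;   // illustrative threshold

        static void addWithPolling(Queue<Object> bucket, Object item) {
            // Every size() call touches shared state and forces memory
            // synchronisation; a producer may also never observe the size
            // dropping below the limit, because faster producers keep
            // refilling the bucket.
            while (bucket.size() >= LIMIT) {
                Thread.onSpinWait();       // busy-wait until the consumer catches up
            }
            bucket.add(item);
        }
    }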

Frequent determination of the number of objects in the bucket can be time-consuming, depending on the implementation details of the bucket: for example, if the bucket is implemented as a linked list, the time to determine the number of elements in the list is linearly related to the number of elements. Determining the number of objects in the bucket by steadily counting the objects that are added and removed requires access by all threads to shared data, which is again unfavourable.
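The counting variant can be sketched as follows, assuming an atomic counter kept next to the bucket; the class and method names are illustrative.

    import java.util.Queue;
    import java.util.concurrent.atomic.AtomicInteger;

    // Keeping a running count avoids walking a linked list to determine
    // its size, but the counter itself becomes shared data that every
    // producer and the consumer must update on each add and remove.
    class CountedBucketSketch {
        private final Queue<Object> bucket;
        private final AtomicInteger count = new AtomicInteger();

        CountedBucketSketch(Queue<Object> bucket) { this.bucket = bucket; }

        void add(Object item) {
            bucket.add(item);
            count.incrementAndGet();       // shared write on every add
        }

        Object take() {
            Object item = bucket.poll();
            if (item != null) {
                count.decrementAndGet();   // shared write on every remove
            }
            return item;
        }

        int size() { return count.get(); } // cheap read, but still shared state
    }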

Assuming that the number of objects generated by the producers per time unit and/or processed by the consumer per time unit can fluctuate strongly, approaches that try to predict the number of objects in the bucket bear a significant risk of failure.

The presented method prevents the unlimited growth of the bucket in an efficient way. The bucket can still act as a buffer for temporary overload scenarios without blocking the producers. This increases the throughput in situations where the load fluctuates between high and low activity. The number of accesses to shared data is strongly reduced compared to the straightforward approach. The frequency with which the number of objects in the bucket is determined is kept low. There is no risk that a producer is prevented from adding objects by random race-like effects. The producers operate independently from each other and do not have to know the other producers. The consumer does not have to know the producers explicitly; it o...
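The remaining description is not reproduced here. As one purely illustrative reading of the quota idea from the abstract, and not the disclosed implementation, each producer could hold a producer-local quota of additions and touch shared state only when that quota is exhausted; the quota size, the bucket limit, and the back-off below are assumptions.

    import java.util.Queue;

    // Illustrative quota sketch: a producer may add QUOTA objects without
    // consulting shared state; only when the quota is used up does it look
    // at the bucket size once, back off if the bucket is too full, and then
    // take a new quota. The expensive shared access thus happens once per
    // quota instead of once per object, and the bucket keeps cushioning
    // temporary overload.
    class QuotaProducerSketch {
        static final int QUOTA = 1_000;           // assumed quota size
        static final int BUCKET_LIMIT = 50_000;   // assumed upper bound for the bucket

        private final Queue<Object> bucket;
        private int remainingQuota = QUOTA;       // producer-local, no synchronisation

        QuotaProducerSketch(Queue<Object> bucket) { this.bucket = bucket; }

        void add(Object item) throws InterruptedException {
            if (remainingQuota == 0) {
                // Shared state is touched only here, once per QUOTA objects.
                while (bucket.size() >= BUCKET_LIMIT) {
                    Thread.sleep(10);             // back off while the consumer catches up
                }
                remainingQuota = QUOTA;
            }
            bucket.add(item);
            remainingQuota--;
        }
    }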