Imposing CPU constraints on users of a shared JVM

IP.com Disclosure Number: IPCOM000016251D
Original Publication Date: 2002-Sep-16
Included in the Prior Art Database: 2003-Jun-21
Document File: 2 page(s) / 41K

Publishing Venue

IBM

Abstract

In an environment which allows the running of untrusted code it is important to ensure that no code can, by accident or design, enter an infinite loop which consumes all the CPU of the target machine and thus denies service to other users of that machine. The current Java* Virtual Machine (JVM) specification defines no mechanism for imposing any such limits, so it is necessary to find a mechanism that will suffice. Ideally this mechanism will be implemented in such a way that the resulting modifications work on any compliant J2SE (or J2ME) implementation and thus require no modification to either the existing JVM or class libraries. It is, however, likely that some of the burden of applying the constraints could usefully be transferred to the underlying OS layer, which would probably result in greater accuracy and better performance.

Two techniques are proposed to solve the problem: one non-invasive (requiring no J2SE modifications), the other more invasive, although still not necessarily requiring J2SE modifications but certainly requiring some access to the underlying OS layer through one or more native methods.

The non-invasive technique is implemented using the deprecated (although still supported) suspend and resume methods of java/lang/Thread. First, all the threads associated with a given Principal are identified and suspended. The threads are suspended for some time T1 and then resumed for some time T2, so the maximum fraction of the available CPU that they can consume is T2 / (T1 + T2). By stopping and starting all the user's threads together, the possibility of them making no forward progress due to lock contention is minimized. Note that the process of suspension itself guarantees that no JVM locks are held by the threads. In addition it may prove necessary to put locks around some I/O operations to ensure that no underlying OS locks are held which would prevent the threads of other users making progress in their turn. This technique, although crude, is simple and generally effective as long as users are scheduled in a strict round-robin order, which ensures that any underlying locks are eventually released. It also implies that any attempt to set a thread's priority must be intercepted and 'capped' so that our 'scheduler' thread always runs at a priority higher than the threads to be suspended; failure to do so may result in a user thread gaining high priority and then shutting out the scheduler. Using classfile modification, all such calls to modify thread priority can be tracked and capped at some limit below the scheduler thread's priority.

Identification of the set of threads associated with a given Principal is not a trivial process. It requires that any 'worker' threads currently driving work on behalf of that Principal be explicitly registered before the work (method call) starts and then deregistered when the work completes. It is then necessary to ensure that any threads created by such work are also registered to the Principal. This latter step can be accomplished either by using InheritableThreadLocal storage or by performing classfile modification on the user code so that allocations of Thread objects are explicitly tracked and associated with the current Principal.
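As a concrete illustration of the suspend/resume cycle, the sketch below shows one possible shape for a per-Principal 'throttle' thread. It is a minimal sketch only: the class name PrincipalThrottle, its register/deregister methods and the choice of one throttle thread per Principal are assumptions made for the example and are not part of any existing JVM or class-library API. A worker would typically be registered on entry to a method call made on behalf of the Principal and deregistered on exit, with InheritableThreadLocal used to propagate the association to any threads the work creates.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    /*
     * Illustrative sketch of the non-invasive technique: all threads registered
     * against one Principal are suspended for T1 ms and then resumed for T2 ms,
     * so over each cycle the Principal can use at most T2 / (T1 + T2) of the CPU.
     */
    public class PrincipalThrottle extends Thread {

        private final Set<Thread> workers = ConcurrentHashMap.newKeySet();
        private final long suspendMillis; // T1
        private final long runMillis;     // T2

        public PrincipalThrottle(long suspendMillis, long runMillis) {
            this.suspendMillis = suspendMillis;
            this.runMillis = runMillis;
            setDaemon(true);
            // The throttle must outrank every worker; user calls to setPriority()
            // are assumed to be capped below MAX_PRIORITY by classfile modification.
            setPriority(Thread.MAX_PRIORITY);
        }

        // Called when a worker thread starts work on behalf of the Principal.
        public void register(Thread worker)   { workers.add(worker); }

        // Called when the worker's unit of work for the Principal completes.
        public void deregister(Thread worker) { workers.remove(worker); }

        @Override
        @SuppressWarnings("deprecation") // suspend()/resume() are deprecated but still supported
        public void run() {
            try {
                while (!isInterrupted()) {
                    // Stop all of the Principal's threads together, minimizing the
                    // chance that a suspended thread blocks another user on a lock.
                    for (Thread t : workers) t.suspend();
                    Thread.sleep(suspendMillis);           // held for T1
                    for (Thread t : workers) t.resume();
                    Thread.sleep(runMillis);               // allowed to run for T2
                }
            } catch (InterruptedException e) {
                // Make sure nothing is left suspended if the throttle is stopped.
                for (Thread t : workers) t.resume();
            }
        }
    }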
The more invasive technique will involve the definition of some new native methods, either on the Thread class or on some other new class. One such method will effectively make a temporary linkage between a Thread and a Principal identifier and simultaneously set a constraint on the proportion of the CPU allowed. Additionally there will be a static method which allows the total CPU consumption for a given Principal ID to be queried. The implementation of these methods will be platform dependent and will rely entirely on the monitoring facilities available in the OS layer, such as quota setting. Such monitoring may well be independent of the JVM, although it is very likely that some JVM-internal information will be required in order to locate the underlying thread ID. A monitoring function can then be written at the Java level which periodically examines the amount of CPU being consumed by a given Principal and can either operate some policy to limit that Principal's consumption or account for the usage being made. Note that by passing the CPU constraint down to the 'native' layer it is possible to ensure that the OS prevents any particular thread from consuming too much resource and thus 'shutting out' the monitoring thread.
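A possible Java-side shape for these native methods, together with a simple Java-level monitoring loop, is sketched below. Every name here (CpuConstraints, bind, unbind, getCpuNanosConsumed, monitor) is an assumption made for illustration; the native implementations would be written per platform against whatever quota-setting and CPU-accounting facilities the OS layer provides.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    /*
     * Hypothetical Java-side interface for the more invasive technique. The
     * native methods would be implemented per platform on top of OS monitoring
     * and quota facilities; all names and signatures are illustrative only.
     */
    public final class CpuConstraints {

        private CpuConstraints() { }

        // Temporarily link a thread to a Principal identifier and, at the same
        // time, ask the OS layer to cap the fraction of CPU it may consume
        // (e.g. 0.25 for 25%). Undone with unbind() when the work completes.
        public static native void bind(Thread worker, String principalId, double maxCpuFraction);

        // Remove the temporary linkage established by bind().
        public static native void unbind(Thread worker);

        // Total CPU time (nanoseconds) consumed so far by threads linked to this Principal.
        public static native long getCpuNanosConsumed(String principalId);

        // Java-level monitor: periodically samples each Principal's CPU use and
        // hands the per-interval figure to some policy (throttling, accounting, etc.).
        public static void monitor(Iterable<String> principalIds,
                                   long intervalMillis,
                                   BiConsumer<String, Long> policy) throws InterruptedException {
            Map<String, Long> lastSample = new HashMap<>();
            while (!Thread.currentThread().isInterrupted()) {
                for (String id : principalIds) {
                    long total = getCpuNanosConsumed(id);
                    long delta = total - lastSample.getOrDefault(id, 0L);
                    lastSample.put(id, total);
                    policy.accept(id, delta); // CPU used by this Principal in the last interval
                }
                Thread.sleep(intervalMillis);
            }
        }
    }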
