
Dynamic API Provider Protection

IP.com Disclosure Number: IPCOM000246351D
Publication Date: 2016-Jun-02
Document File: 2 page(s) / 32K

Publishing Venue

The IP.com Prior Art Database


Providing dynamic API or Infrastructure protection

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.


Infrastructure protection is one of the critical parameters on which a cloud-hosted ecosystem is analyzed and compared. A secure system has preventive techniques in place that ensure maximum availability and uniform performance for all clients, whereas an insecure system is prone to attacks such as denial of service and to unequal performance across clients.

IBM Connections Cloud is an on-premises application that has been retrofitted for infrastructure-based multi-tenancy and software-based cross-organization collaboration. IBM has brought a small set of the fourteen Connections components to the cloud to deliver a secure and performant infrastructure for thousands of concurrent browser-based consumers.

IBM Connections Cloud is in the process of onboarding very large tenants as API consumers. For instance, one customer plans to onboard 100K API consumers, another up to 20M API consumers over three years. API consumers request URL endpoints at a far greater rate than browser-based consumers (Request-Response-Request versus Request-Response-Wait).

Current API throttling techniques, such as IBM API Management and Mashery, rely on a clear understanding of each API's cost and on a person developing a plan that enables bursting.

A general problem therefore arises when onboarding API consumers without a clear API throttling framework.

The proposal protects API providers by establishing a load-balancing proxy, monitoring requests and responses through the proxy for success/failure codes, generating an online model of success and failure, and applying that model to decide whether the current request is most likely over capacity.
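The monitoring-and-decision loop described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the class name, window size, and failure threshold are assumptions chosen for the example, and "failure" is approximated here as any HTTP 5xx response observed at the proxy.

```python
from collections import deque

class OnlineCapacityModel:
    """Illustrative sketch: track recent success/failure outcomes seen
    at the proxy and estimate whether the backend is over capacity."""

    def __init__(self, window=1000, failure_threshold=0.2):
        self.outcomes = deque(maxlen=window)   # 1 = success, 0 = failure
        self.failure_threshold = failure_threshold

    def record(self, status_code):
        # Treat 5xx responses as capacity failures (an assumption).
        self.outcomes.append(0 if status_code >= 500 else 1)

    def likely_over_capacity(self):
        if not self.outcomes:
            return False
        failure_rate = 1 - sum(self.outcomes) / len(self.outcomes)
        return failure_rate > self.failure_threshold

model = OnlineCapacityModel()
for code in [200] * 90 + [503] * 30:   # 25% failures in the window
    model.record(code)
print(model.likely_over_capacity())    # → True (0.25 > 0.2)
```

A real proxy would consult `likely_over_capacity()` before forwarding each request and shed load (e.g., return 429) when it answers true.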

Capacity may be measured as bandwidth, CPU cost, or available server threads.

The proposal may bootstrap the API request prediction by smoothing on available threads, so it works with applications that have no prior capacity data.
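One plausible reading of "smoothing based on available threads" is to exponentially smooth periodic samples of the server's free thread count and use that as a cold-start capacity signal. The function name, the smoothing factor, and the safety floor below are all illustrative assumptions, not details from the disclosure.

```python
def smoothed_available_threads(samples, alpha=0.3):
    """Exponentially smooth periodic samples of the free thread count
    to bootstrap a capacity estimate when no historical API-cost data
    exists (cold start). `alpha` weights the newest sample."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

# Admit a request only while the smoothed free-thread estimate stays
# above a (hypothetical) safety floor of 10 threads.
samples = [40, 35, 30, 8, 5]
print(smoothed_available_threads(samples) > 10)   # → True
```

Smoothing keeps a single sharp dip (the 8 and 5 above) from immediately tripping the throttle while the model is still learning.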

The proposal may implement random sampling for non-critical APIs, and complete sampling for critical APIs.
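The split between complete and random sampling can be sketched as a simple per-request predicate. The endpoint paths and the 10% sample rate are hypothetical values for illustration; the disclosure does not specify them.

```python
import random

CRITICAL_APIS = {"/files/upload", "/auth/token"}   # illustrative paths
SAMPLE_RATE = 0.1   # record 10% of non-critical traffic (assumption)

def should_record(path):
    """Complete sampling for critical APIs, random sampling otherwise,
    so the model stays cheap on high-volume, low-risk endpoints."""
    if path in CRITICAL_APIS:
        return True
    return random.random() < SAMPLE_RATE

print(should_record("/auth/token"))   # → True (critical: always sampled)
```

Only the sampled requests would feed the online success/failure model; unsampled requests are forwarded without bookkeeping.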

The model may be scoped to a set time period.
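Scoping the model to a set time period can be read as evicting outcomes older than a sliding window, so stale failures do not keep the proxy throttled after the backend recovers. The class below is a sketch under that assumption; the 60-second period is illustrative.

```python
import time
from collections import deque

class TimeScopedModel:
    """Keep only outcomes observed within the last `period` seconds."""

    def __init__(self, period=60.0):
        self.period = period
        self.events = deque()   # (timestamp, success) pairs

    def record(self, success, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, success))
        self._expire(now)

    def failure_rate(self, now=None):
        now = time.monotonic() if now is None else now
        self._expire(now)
        if not self.events:
            return 0.0
        failures = sum(1 for _, ok in self.events if not ok)
        return failures / len(self.events)

    def _expire(self, now):
        while self.events and now - self.events[0][0] > self.period:
            self.events.popleft()

m = TimeScopedModel(period=60.0)
m.record(False, now=0.0)      # a failure, observed at t=0
m.record(True, now=100.0)     # 100s later the failure has aged out
print(m.failure_rate(now=100.0))   # → 0.0
```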

The proposal may apply weights to each of the request types, backend application, o...