
Cloud Integration Challenges: Dealing with API Governance Limits

Blog Post created by ndorairaj on May 16, 2016

There has been a perceptible change in the way companies now look at storing data in the cloud. More and more large enterprises have started migrating from their legacy on-premises applications to SaaS applications. These SaaS applications are built on fundamentally different architectures designed to leverage and scale with cloud computing. But new architectures bring new challenges.

 

In this blog post I want to share some thoughts on one of the most common challenges faced by cloud newcomers: dealing with API governance limits.

 

Why do these limits exist?

SaaS providers are essentially renting you their infrastructure to run your business. However, just as you are one of the renters, there are others, all trying to access the same shared pool of resources. As we all know, whenever limited resources are shared, certain rules and limits need to be in place so that no one individual monopolizes the pool. In this case, SaaS providers must ensure that the activity and usage of one customer does not adversely affect the others. One of the ways to do that is to enforce limits on API interactions to prevent users from overloading the system. I should add that sometimes these limits can be flexible... for a price.

 

What types of limits are there?

So what are some of these limits? The ones we typically come across when dealing with web service APIs include:

  • Number of requests per given time frame (e.g. per second, per day)
  • Number of concurrent requests for an account
  • Limit on size or number of transactions per physical request/response
  • Maximum duration/timeout per call
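
To make these limits concrete, here is a minimal sketch of what a client might do when a requests-per-time-frame limit is exceeded and the API answers with HTTP 429. It is written in Python with the requests library against a hypothetical endpoint, and it honors the Retry-After header when the server provides a numeric one:

```python
import time
import requests

API_URL = "https://api.example.com/records"  # hypothetical endpoint

def post_with_backoff(payload, max_retries=5):
    """POST to a rate-limited API, backing off when HTTP 429 is returned."""
    for attempt in range(max_retries):
        response = requests.post(API_URL, json=payload, timeout=30)
        if response.status_code != 429:
            return response
        # Honor a numeric Retry-After hint; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        try:
            delay = float(retry_after)
        except (TypeError, ValueError):
            delay = 2 ** attempt
        time.sleep(delay)
    raise RuntimeError("Rate limit still exceeded after retries")
```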

 

How do we work within those limits?

So what does an integration architect or designer do when faced with the challenges above? The following are some approaches and techniques we recommend to address them.

 

Limit: Number of requests
Strategies:
  • Minimize connector calls. Whenever possible, try to combine records into a single request as opposed to making multiple separate calls.
  • Leverage caching. When performing lookups in a map function, take advantage of function caching to avoid looking up the same value multiple times. If you need to retrieve a number of reference values, consider making a single call to retrieve all the values and store them in a Document Cache for quick retrieval within the process (see the sketch after this list).
  • For large data sets, utilize batching whenever possible when sending data. In other words, send 100 records in a single request rather than one at a time. Fortunately, many connectors perform this batching automatically behind the scenes.
  • Ensure that priority is accorded to real-time transactions over batch-mode ones. Batch-mode transactions can always be staged in an intermediate location and the API called during the "non-peak" usage times of the real-time ones.
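
As a rough illustration of the caching point above, outside of any particular integration platform, the sketch below memoizes individual reference lookups and also pre-loads all reference values with a single bulk call into an in-memory dictionary, loosely analogous to a Document Cache. The endpoint and field names are assumptions:

```python
from functools import lru_cache
import requests

API_BASE = "https://api.example.com"  # hypothetical endpoint

@lru_cache(maxsize=1024)
def lookup_reference(code):
    """Look up a single reference value; repeated lookups are served from cache."""
    response = requests.get(f"{API_BASE}/reference/{code}", timeout=30)
    response.raise_for_status()
    return response.json()["value"]  # assumed response shape

def preload_references():
    """One call retrieves every reference value for local, in-process lookups."""
    response = requests.get(f"{API_BASE}/reference", timeout=30)
    response.raise_for_status()
    return {item["code"]: item["value"] for item in response.json()}
```
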
Limit: Concurrent requests
Strategies:
  • Schedule multiple integrations to the same endpoint in a staggered manner to avoid hitting those limits. In other words, don't schedule every process to run on the hour.
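
Within an integration platform this usually just means offsetting process schedules, but as a generic, code-level sketch of staying under an assumed account-wide cap of five concurrent requests, the snippet below gates calls with a semaphore (the endpoint and the limit of five are assumptions; check your provider's documentation):

```python
import threading
from concurrent.futures import ThreadPoolExecutor
import requests

# Assumed account-level cap of five concurrent requests.
_slots = threading.Semaphore(5)

def fetch(url):
    """Block until a slot is free, so at most five requests are in flight at once."""
    with _slots:
        return requests.get(url, timeout=30)

urls = [f"https://api.example.com/orders/{i}" for i in range(100)]  # hypothetical
with ThreadPoolExecutor(max_workers=20) as pool:
    responses = list(pool.map(fetch, urls))
```
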
Limit: Request size
Strategies:
  • Split large data sets into request sizes optimal for the target API. Again, many branded connectors will do this batching automatically for you, but you will need to do it yourself when working with generic technology connectors such as the HTTP Client and SOAP Client.
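
A minimal sketch of that splitting, assuming the target API accepts at most 200 records per request (both the endpoint and the limit are placeholders to verify against the provider's documentation):

```python
import requests

API_URL = "https://api.example.com/records/batch"  # hypothetical endpoint
MAX_BATCH = 200  # assumed per-request record limit

def send_in_batches(records, batch_size=MAX_BATCH):
    """Send a large record set in chunks the target API will accept."""
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        response = requests.post(API_URL, json=batch, timeout=60)
        response.raise_for_status()
```
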
Limit: Timeouts
Strategies:
  • Increase the timeout settings on the atom. Be mindful that setting a global timeout will impact all connections using that protocol (e.g. HTTP).
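
When calling the API directly through a generic HTTP client, a narrower alternative to raising a global setting is a per-call timeout. Here is a small sketch using the Python requests library with placeholder values:

```python
import requests

try:
    # Separate connect and read timeouts for this one long-running call only,
    # leaving other connections' settings untouched.
    response = requests.get("https://api.example.com/slow-report",  # hypothetical
                            timeout=(5, 120))
except requests.exceptions.Timeout:
    # Decide whether to retry, defer to off-peak hours, or raise an alert.
    raise
```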

 

And if all of the above are still not sufficient to deliver the requisite SLA, talk with your SaaS provider to see which limits can be adjusted. Do note, however, that this typically comes at a cost!

 

I would be interested to hear about the challenges other architects and developers have faced and the scenarios you have come across. Please feel free to share them! The more we share, the more it helps us all in solving these problems. Have a great day!
