Across industries, more and more companies are moving to the cloud. According to one study, approximately 93% of businesses are using cloud technologies. But what does “moving to the cloud” mean? For the purposes of this paper, the term refers to the adoption of SaaS technologies (Concur, Dropbox, etc.) and/or public infrastructure (Amazon Web Services, Azure, etc.). It can also refer to the development of new cloud-native applications.
Moving to the cloud provides organizations with a number of benefits:
Improves operational efficiency and reduces costs: According to McKinsey, moving to the cloud can reduce IT overhead costs by as much as 30-40% by eliminating the overprovisioning of on-premise infrastructure and reducing application downtime.
Increases agility: IT teams can quickly develop applications by eliminating the need to maintain and configure on-premise infrastructure.
Accelerates innovation: Modern cloud platforms, such as Amazon and Google, provide rich big data and machine learning capabilities that allow organizations to drive more value with their data.
Today, integration is a strategic component of every digital transformation initiative that involves moving to the cloud. When moving to the cloud, organizations focus on a set of key integration use cases, including:
1. Integrating SaaS applications with on-premises data and applications.
2. Migrating existing data and applications from on-premises to cloud infrastructure.
3. Connecting cloud-native applications across on-premises and cloud environments.
In the process of implementing the above use cases, organizations rely on three approaches to moving to the cloud. Traditional approaches involve migrating applications and data as-is by “lifting and shifting” applications and data sources, or connecting on-premises and cloud systems by building ground-to-cloud point-to-point interfaces. Additionally, some organizations find the aforementioned approaches challenging and, instead, resort to starting from scratch by re-building applications to make them more cloud-friendly.
These approaches, however, can lead to various challenges related to scale, efficiency, and application uptime. In this whitepaper, we will discuss these three integration use cases, with a focus on why traditional approaches do not suffice. We will then show how companies across industries — from HSBC to the Federal Communications Commission — have successfully addressed these challenges through API-led connectivity.
Key use cases for integration in the cloud
When moving to the cloud, organizations focus on three integration use cases:
Migrating existing applications and data from on-premises to cloud infrastructure
One of the main use cases organizations come across when moving to the cloud is migrating existing applications and data from on-premises to cloud infrastructure. This involves rewriting integrations to be cloud-compatible and migrating integration applications across different environments.
Current approaches and challenges
To migrate existing applications and data from on-premises to cloud infrastructure, organizations turn to a variety of approaches. In this case, organizations can lift and shift applications and data “as is” from on-premises to the cloud. However, given the complexity of existing integrations between applications and systems, this approach can result in significant service disruption to the applications that are being migrated. This is especially true if those migrating the systems are not aware of the dependencies involved.
As a second approach, organizations can also skip migration altogether and, instead, rewrite on-premises applications in the cloud. This approach largely wastes the investment put into the original applications’ data, application logic, and so on.
Finally, some organizations turn to using point-to-point custom code between on-premises systems, cloud applications, and infrastructure. This approach may seem viable in the short term; however, it tightly couples on-premises systems to the cloud infrastructure that you are deploying to. This hinders visibility into existing integrations, which makes it difficult to adhere to security and compliance requirements because teams are unable to understand how various data sources are connected to each other. It also creates vendor lock-in challenges and complicates the process of decommissioning these integrations over time.
How API-led connectivity addresses the use case
The above traditional approaches have clear drawbacks. This is why companies across industries are turning to a new approach: API-led connectivity.
API-led connectivity is a methodical way to connect data to applications through reusable and purposeful APIs. These APIs are developed to play a specific role—unlocking data from systems, composing data into processes, or delivering an experience.
System APIs: These APIs access the core systems of record and provide a means of insulating the user from the complexity of, or any changes to, the underlying systems. Once built, many users can access data without any need to learn the complex underlying system. They can also reuse these APIs in multiple projects. In terms of this specific use case, organizations can build System APIs to expose access to their existing on-premises systems of record (e.g. a customer database), and the cloud infrastructure they want to migrate the data to.
Process APIs: These APIs interact with and shape data within a single system or across systems. These APIs are created without a dependence on the source systems from which that data originates, as well as the target channels through which that data is delivered. For this specific use case, organizations can create a Process API that calls the on-premises system and cloud infrastructure.
Experience APIs: These APIs are the means by which data can be reconfigured so that it is most easily consumed by its intended audience—all from a common data source, rather than setting up separate point-to-point integrations for each channel. An Experience API is usually created with API-first design principles where the API is designed for the specific user experience in mind. In terms of this use case, organizations can build Experience APIs, which serve as the common interface for all applications consuming data from the original system.
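To make the layering concrete, the sketch below models the three API tiers as plain Python functions for a hypothetical customer record. All names here (system_api_get_customer, CUST_NM, the mobile channel, and so on) are illustrative assumptions, not part of any specific platform or product.

```python
# --- System API layer: insulates callers from the system of record ---
# Hypothetical on-premises customer database with legacy column names.
ON_PREM_DB = {"42": {"CUST_NM": "Ada Lovelace", "CUST_EMAIL": "ada@example.com"}}

def system_api_get_customer(customer_id: str) -> dict:
    """Expose the customer system of record behind a stable contract."""
    raw = ON_PREM_DB[customer_id]
    # Hide legacy column names from every consumer of this API.
    return {"name": raw["CUST_NM"], "email": raw["CUST_EMAIL"]}

# --- Process API layer: shapes and orchestrates data across systems ---
def process_api_customer_profile(customer_id: str) -> dict:
    """Compose customer data without depending on source-system details."""
    customer = system_api_get_customer(customer_id)
    # A real Process API might also call a cloud System API here and
    # merge the results; that second call is omitted in this sketch.
    return {"id": customer_id, **customer}

# --- Experience API layer: tailors data for a specific channel ---
def experience_api_mobile(customer_id: str) -> dict:
    """Reshape the common profile for a hypothetical mobile channel."""
    profile = process_api_customer_profile(customer_id)
    # The mobile channel only needs a display name.
    return {"displayName": profile["name"]}
```

Because each layer only knows the contract of the layer beneath it, the legacy column names never leak past the System API, and the mobile channel can be changed without touching the systems of record.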
To see the above API layers in action, refer to the image below, which serves as an example of an API-led approach to migrating data and services to the cloud. Note that the “Customer API” serves as a loosely coupled abstraction layer that provides a common interface for end-channels to maintain uninterrupted access to source system data and services during the migration.
By using API-led connectivity to migrate applications and data from on-premises to cloud infrastructure, organizations can easily expose access to systems of record through system APIs—hiding the complexity of underlying systems. In addition, they can quickly connect these endpoints through an API abstraction layer between on-premises and cloud systems of record.
This abstraction layer is key because, without it, applications and systems are tightly coupled. As a result, organizations risk application downtime when migrating data and business logic. With the abstraction layer, organizations can decouple the systems from the experience channels, making it easy to allow access to underlying system data through the API, without risking downtime.
The benefits of an API-led approach to connectivity when it comes to the above use case are clear. In the next section, we will explore the second use case — integrating SaaS applications with legacy on-premises applications and data — and the benefits of API-led connectivity.
Integrating SaaS applications with on-premises applications and data
The second use case organizations come across when moving to the cloud is integrating SaaS applications with legacy on-premises applications and data. In the process, organizations face various integration pain points, including bridging between modern data formats and transport protocols, as well as enabling cloud applications in spite of the limitations that on-premises systems may create.
Current approaches and challenges
Similar to the first use case, organizations turn to traditional approaches to implement this use case. In this case, organizations may simply want to mitigate the limitations of working with on-premises systems. To do this, they may lift and migrate on-premises endpoints to Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) environments.
This approach, however, can introduce the risk of service disruption, especially because the data source that was moved to the cloud may have downstream integrations with other applications—some of which may have been compromised in the migration. Also, migrating a monolithic application as-is may increase cloud infrastructure costs: large, monolithic systems require more computing resources, which leads to higher costs. Organizations that decouple these systems into smaller pieces, by contrast, require fewer computing resources and, in turn, incur lower costs.
Some organizations acknowledge the challenges of migrating whole monolithic systems, which is why they may resort to migrating system data to the cloud, then re-building application logic and decommissioning on-premises applications. However, this approach can discard decades of investment, especially for organizations that have invested both time and resources in building specific application logic. As a result, it can also increase costs by creating a need to rewrite these applications.
Finally, another approach organizations adopt is writing point-to-point custom code integrations between on-premises and SaaS applications. The challenge with this approach is that organizations need to write code for each incremental SaaS application that needs to be integrated; as a result, no work from the previous integration can be reused. This one-off approach is time-consuming and does not scale, especially given the rapid proliferation of SaaS applications.