It changed the core mission of SIs (systems integrators), who, to remain relevant in the new hybrid cloud marketplace, have by default become competent at brokering cloud services as added value alongside their integration offerings. But integrating data housed within legacy infrastructure with various cloud solutions will continue to present SIs with challenges.
Systems integrators emerged as powerful actors after 2006, as enterprises grew and added more complexity to their existing arrays of IT systems such as storage, computing, and networking. Enterprise customers typically have a flagship application that drives their specific cloud requirements. When the “application-centric” economy became more prevalent after 2007, applications needed to integrate with different subsystems, such as the servers and hardware on which they ran. SIs stepped in to capture the opportunity for the integration services needed to connect applications, data, and email.
Early ideas behind systems integration
Systems integrators focused on four approaches to connect different applications and software across enterprise networks:
- Point-to-point (or app-to-app) integration required hand-coding a dedicated connection between each pair of applications.
- Middleware (or data translation) enabled various applications to connect, receive data, translate it, and then share that data through the “middle layer” that connected them all (a minimal sketch of this translation pattern follows this list). The EAI (Enterprise Application Integration) server allowed automated business processes and applications to be integrated via backend APIs.
- BPM (Business Process Management) integrates the business and IS sides, with the eventual goal of automating cross-functional business processes spanning suppliers, customers, employees, and partners.
- Web services were touted as the seamless connection model for backend business processes, utilities and CRM, but authentication, security and bandwidth later surfaced as major concerns.
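To make the middleware (data translation) idea concrete, here is a minimal Python sketch of the pattern. The application names, field names, and mappings are all hypothetical; a real EAI server would add queuing, transactions, and backend API connectors.

```python
# A minimal sketch of the middleware (data translation) pattern.
# The CRM and billing schemas below are hypothetical examples.

def from_crm(record: dict) -> dict:
    """Translate a (hypothetical) CRM record into a shared canonical format."""
    return {"customer_id": record["CustID"], "email": record["EmailAddr"]}

def to_billing(canonical: dict) -> dict:
    """Translate the canonical format into a (hypothetical) billing schema."""
    return {"account": canonical["customer_id"], "contact": canonical["email"]}

def middleware_route(crm_record: dict) -> dict:
    """The 'middle layer': receive, translate, and hand off between apps,
    so the CRM and billing systems never need point-to-point knowledge
    of each other's formats."""
    return to_billing(from_crm(crm_record))

print(middleware_route({"CustID": "C-1001", "EmailAddr": "jo@example.com"}))
```

The point of the middle layer is that adding a new application means writing one translator to the canonical format, rather than hand-coding a new connection to every existing application.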
Three-Minute History of Data Centers
The early data centers of the 1960s were “siloed data centers” filled with mainframes from IBM and the BUNCH (Burroughs, Univac, NCR, Control Data, and Honeywell), which offered powerful computing and workload management but remained expensive. Physical expansion was limited to the server room, and adding more mainframes introduced hardware cooling problems.
During the 1970s and 1980s, minicomputers (minis), designed for engineering and scientific workloads by DEC, HP, Data General, and Prime, appeared as more economical alternatives. Proprietary operating systems and a lack of program portability contributed to their decline.
After the rise of the IBM model 5150 PC (introduced in 1981), companies saved on space, operations, and cooling. In the early 1990s, desktop PCs began replacing mainframes in the old server rooms, leading to the client-server model, colocation, and the external data center. The expansion of the internet in the late 1990s drove the adoption of enterprise colocation and data-center-as-a-service.
The Quick Story of Data Integration
Methods of data integration for the enterprise started changing around 2007, as connecting on-premises data centers to the promised land of cloud infrastructure presented SIs with both an opportunity and a challenge.
Cloud-as-a-service added another layer of complexity that needed to be managed, in terms of how data was transferred, stored, and processed. Some data centers began the move to a “converged infrastructure” that bundled servers and storage together in one hardware block, with hypervisor (virtualization) technology added for networking functionality. Management software came with the mix, and this combination became the first iteration of what is now known as the “converged data center.”
Hybrid cloud architectures gave SIs the opportunity to learn a new model of data integration for the growing enterprise IoT, and the focus of systems integrators shifted to this different data center model. But in legacy infrastructures, enterprise data was still housed on physical server arrays: on-premises server architectures, whether built on traditional rackmount servers or blade servers, still rely on physical blocks of storage.
The multitude of device fleets being added to the enterprise IoT adds to the complexity of the systems integrator's current task. How do you manage systems integration from legacy data centers connected to hybrid cloud installations?
Challenges with legacy + cloud data center architecture
Here are some challenges faced by systems integrators dealing with legacy + cloud hybrid architectures, according to recent Gartner reports.
- The movement to the hybrid cloud is revolutionizing enterprise IT, as data centers become smaller, more scalable, agile deployments.
- Enterprise systems that interface with the cloud struggle with the realities of real-time data integration: recoding APIs to support IoT applications and operational technologies, and redistributing workloads across devices, gateways, and the backend.
- Integrating a new IoT data model that doesn’t use traditional data storage: devices and gateways produce continuous data throughput, and data passes through event processing, APIs, streaming, or message-oriented middleware (see the streaming sketch after this list).
- There may be no central data repository.
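As a sketch of what “continuous throughput without traditional storage” can look like, here is a minimal Python streaming pipeline. The device, window size, and simulated readings are hypothetical; the point is that data flows through and only a transient aggregate is kept in memory.

```python
# A minimal sketch of event-style processing for IoT readings:
# values stream through a generator pipeline and are discarded once
# they leave the window, so nothing lands in a central repository.

from collections import deque
from typing import Iterable, Iterator

def rolling_average(readings: Iterable[float], window: int = 3) -> Iterator[float]:
    """Emit a moving average as each reading arrives; only the last
    `window` values are held in memory at any time."""
    buf: deque = deque(maxlen=window)
    for value in readings:
        buf.append(value)
        yield sum(buf) / len(buf)

# Simulated gateway stream from a hypothetical temperature sensor.
stream = (20.0 + 0.1 * i for i in range(10))
for avg in rolling_average(stream):
    print(f"rolling avg: {avg:.2f}")
```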
Future Points to Consider
Here are some focus areas to help manage this complex data landscape, according to a recent Gartner report. Start thinking about how you will integrate data and how you will deal with customers who are moving toward the hybrid cloud.
- Data federation: using an iPaaS (Integration Platform as a Service) or APIs as event processors that manage access to integrated data in memory, without the use of a central data repository.
- Using message-oriented middleware that captures data in messages readable by IoT applications and synced with application integration. The message-oriented middleware facilitates business flows and interactions through a sequence of message request/reply, publish/subscribe, and routing (a minimal publish/subscribe sketch follows).
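To illustrate the publish/subscribe and routing flow named above, here is a minimal in-memory Python sketch. The topic names, handlers, and message shape are hypothetical; a production deployment would use broker middleware (for example, an MQTT or AMQP broker) rather than a dictionary of callbacks.

```python
# A minimal in-memory publish/subscribe bus with topic routing.
# State is transient, consistent with federating data in memory
# rather than in a central data repository.

from collections import defaultdict
from typing import Callable, Dict, List

Handler = Callable[[dict], None]

class MessageBus:
    """Routes each published message to every subscriber of its topic."""

    def __init__(self) -> None:
        # topic -> list of subscriber callbacks (the routing table)
        self._subscribers: Dict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
bus.subscribe("sensors/temp", lambda msg: print("dashboard got:", msg))
bus.subscribe("sensors/temp", lambda msg: print("alerting got:", msg))
bus.publish("sensors/temp", {"device": "gw-01", "celsius": 21.5})
```

Because publishers and subscribers only share topic names, IoT applications can consume the same message stream without knowing anything about the devices or gateways that produced it.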