For the moment, at least, the Cloud has won the philosophical debate about whether software applications should reside in a centrally managed source that people connect to, or on zillions of devices scattered around the organization and around the world.
Conceptually, the original IBM mainframes have won: they were the epitome of a brain in the center with "light clients" (it is no longer acceptable to call a terminal "dumb"!) connecting to it. A lot has changed over the last 40 years, though, in what the brain, the clients and the network look like.
In retrospect the decision seems obvious. Of course you would want the control and security of central management, and freedom from the hassle of figuring out how to install, update and manage the user experience across tens or tens of thousands of endpoints.
But this model brings with it two giant potential failure points:
- The Network
- The Cloud
Networks have become dramatically faster, more reliable and easier to manage over the years: self-healing, error-correcting, intrusion-detecting, and highly available. The question when migrating to a cloud becomes: do you have sufficient bandwidth in the network to allow the number of users you want to use the applications you want?
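That bandwidth question lends itself to a back-of-the-envelope calculation. The sketch below is illustrative only: the per-stream bitrate, overhead factor and utilization cap are assumptions, not vendor or product figures.

```python
# Back-of-the-envelope capacity check for a video rollout.
# All figures here are illustrative assumptions, not vendor specs.

def required_mbps(users: int, per_stream_mbps: float, overhead: float = 0.2) -> float:
    """Aggregate bandwidth demand, padded for protocol/signaling overhead."""
    return users * per_stream_mbps * (1 + overhead)

def max_concurrent_users(link_mbps: float, per_stream_mbps: float,
                         overhead: float = 0.2, utilization_cap: float = 0.75) -> int:
    """How many simultaneous streams a link supports, leaving headroom
    so real-time traffic never saturates the pipe."""
    usable = link_mbps * utilization_cap
    return int(usable // (per_stream_mbps * (1 + overhead)))

# Example: a 100 Mbps uplink and an assumed 1.5 Mbps HD video stream
print(max_concurrent_users(100, 1.5))  # → 41
```

Running the numbers this way before a rollout is exactly how you avoid the "everyone used video at once" scenario described below.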
One client told us about a video deployment they had tried with another vendor that had gone out of control. Everyone had access to video, and used it, and brought the network to a standstill. VeaMea includes management controls that allow you to set levels of privileges by user, user group and domain, so you can plan for, and control, the rollout and utilization of network capacity.
The other option is to let the network "throttle" traffic of different types. For many applications the resulting delays would go unnoticed, but for real-time communication applications like video collaboration, even small delays are not only noticed, they significantly detract from the user's perception of quality. Our eyes and ears are remarkably adept at discerning changes, flickers, crackles, and drops.
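To see why "small" delays matter so much for real-time media, it helps to think in terms of a one-way latency budget; a commonly cited target is roughly 150 ms one-way (ITU-T G.114). The component delays in this sketch are illustrative assumptions:

```python
# A one-way latency budget check for real-time media, against the
# commonly cited ~150 ms target (ITU-T G.114).
# The component delays below are illustrative assumptions.

BUDGET_MS = 150

def one_way_latency_ms(components: dict) -> float:
    """Total one-way delay as the sum of each path component."""
    return sum(components.values())

path = {
    "encode": 30,
    "network_propagation": 40,
    "jitter_buffer": 40,
    "decode": 20,
}
print(one_way_latency_ms(path))  # 130 ms: within budget

# Traffic shaping ("throttling") adds queueing delay to the same path:
path["qos_queueing"] = 50
print(one_way_latency_ms(path))  # 180 ms: over budget, noticeably degraded
```

A file transfer absorbs that extra 50 ms invisibly; a video call does not, which is why prioritization by policy tends to work better than indiscriminate throttling.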
There has been plenty written about the recent cloud outages from Amazon and other service providers. A similar danger is present for private clouds, intranets, extranets, eCommerce-nets, EDI and other integrated systems. Any time we place our faith in a system to be there when we need it, we need to plan for what happens when it isn't there, and how to minimize the amount of time it is unavailable.
Solid planning and execution of strategies for redundancy, failover and load balancing should mitigate the risk that a Cloud will simply evaporate, but having a centralized brain does present a level of risk that is greater than a distributed processing model.
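One piece of that failover planning happens on the client side: when the primary service is unreachable, fall back to a redundant endpoint rather than simply failing. This is a minimal sketch; the endpoint names and the `connect` callable are hypothetical placeholders, not any particular product's API.

```python
# Minimal sketch of client-side failover across redundant endpoints.
# Endpoint names and the `connect` callable are hypothetical placeholders.

from typing import Callable, Sequence

def connect_with_failover(endpoints: Sequence[str],
                          connect: Callable[[str], object],
                          retries_per_endpoint: int = 2):
    """Try each endpoint in priority order; move to the next one only
    after the current one has failed `retries_per_endpoint` times."""
    last_error = None
    for endpoint in endpoints:
        for _ in range(retries_per_endpoint):
            try:
                return connect(endpoint)
            except ConnectionError as err:
                last_error = err  # remember why we moved on
    raise ConnectionError(f"all endpoints unavailable: {last_error}")
```

The same priority-ordered pattern applies whether the "endpoints" are data centers, load balancers, or cloud regions; the point is that the fallback path is decided before the outage, not during it.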
As technology marches forward, there will undoubtedly be continuing tension along a number of battle lines such as autonomy vs. control, flexibility vs. standardization and "consumerized" vs. "business grade."
The right answer will differ based on the needs, skills and capabilities of each organization.
My instinct is that the trend will be toward consumer-grade products that are more "enterprise ready" (perhaps I should trademark that!), so that when you bring your iPad or Dell laptop to the office, it will be able to quickly and easily "assimilate" into the corporate network.
The network, security and data transport layers will be driven by enterprise standards, while the user presentation layers will be driven by consumer-grade standards rather than the clunky stuff enterprise users are often forced to wrestle with today.