Sometimes the best way to advance the state of technology is to return to past architectures. I call this approach "back to the future" innovation.
VMware is a great example of "back to the future" innovation. In the 1970s, most organizations used minicomputers running time-sharing operating systems that allowed hundreds of applications to share a single machine. In the 1990s, technology "advanced" toward many individual, low-cost x86-based servers, each running a single application on Linux or Windows NT. The rationale was that the systems were so cheap that businesses would simply buy a server for each application. In reality, inefficiencies and system management costs spiraled. VMware then touted a huge "revolution" in computing by letting us run multiple application/OS instances on a single server. In effect, VMware simply reintroduced us to the computing model of the 1970s. (Full disclosure: I have worked with VMware in the past.)
Here are my top four "Back to the Future" innovations for 2019.
1. The Return Of The SAN
For the past 10 years, the rise of "new" hyper-converged systems has led many to believe that the era of independent, networked storage was on life support. Let me be the one to point out the irony: hyper-converged systems (fancy name aside) are pretty much akin to the direct-attached storage (DAS) systems popular 30 years ago. Networking and storage were separated as IT disciplines late last century because the management and cost efficiencies more than made up for the marginal increase in connectivity costs.
As time passes, we often lose track of why architectural decisions were made in the first place. One of the primary reasons storage was originally separated from servers was that each application has different storage and compute needs, and those requirements will likely change over time as well. Buying any all-in-one system means you will be buying too much of something and not enough of something else. And, as needs change, these inefficiencies multiply.
2. The Return Of 'Branch Office' Computing
Oh yeah, we can’t use that term because it would sound old. How about we call it “edge computing?” That sounds new, hip and modern.
Twenty years ago, when users at branch offices could not get the performance they needed from a central data center, we built the branch office computing model to solve the problem. Centralization, and the demise of branch office computing, came when technologies such as improved bandwidth and WAN optimization allowed centralized data centers to meet users' performance needs.
Branch office computing for end-user applications is unlikely to return, but the performance requirements of new machine-to-machine applications (the internet of things and artificial intelligence) will drive growth in -- a return to -- edge computing. What is different this time around is that recentralization isn't on the horizon. The technology hurdle blocking a move back to centralized servers is a tough one -- it's the speed of light.
I predict that, until we figure out how to send data faster than the speed of light, edge computing is back. This time to stay.
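To see why the speed of light is such a hard ceiling, a back-of-the-envelope calculation helps. The sketch below is illustrative only: the figure of roughly 200 km per millisecond for light in optical fiber and the example distances are my assumptions, not numbers from this article.

```python
# Back-of-the-envelope: the speed of light puts a floor on round-trip
# latency to a distant data center, no matter how fast its servers are.
# Assumption: light in optical fiber covers roughly 200 km per millisecond.

C_FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber.

    Ignores queuing, routing and processing time -- physics only.
    """
    return 2 * distance_km / C_FIBER_KM_PER_MS

# A sensor talking to a cloud region 2,000 km away can never see less
# than ~20 ms round trip; a nearby edge node 10 km away is ~0.1 ms.
print(min_round_trip_ms(2000))  # 20.0
print(min_round_trip_ms(10))    # 0.1
```

For machine-to-machine applications that need sub-millisecond responses, that floor is what makes processing at the edge the only workable option.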
3. Long Live Content
The past 15 years have been the era of online content. We moved from CDs and DVDs to storing all of our content online, and social media has exploded the amount of content created and consumed. While none of that will change, I believe the information growth pendulum is about to swing back toward data.
All of the innovations on the horizon -- the internet of things (IoT), autonomous vehicles and artificial intelligence (AI)-based marketing -- create and/or consume vast amounts of data. The data from all of these new sensors needs to be processed and stored if it is going to add value in the future. To compound the problem, the data from each sensor is unique. It's not like a movie, where one copy is served to millions of consumers; for every million sensors, you have a million unique sets of data to manage. Managing this data will be nothing less than a Herculean challenge.
It is my prediction that IoT and AI will drive a second data explosion, creating a need for new technologies to efficiently store and manage all of this data.
4. The Return Of The Data Center
OK, I know you might argue that the on-premises data center (private cloud) hasn't died, but the fact is the public cloud has appeared to be an unstoppable juggernaut consuming everything in its path. It reminds me of the old Star Trek episode "The Doomsday Machine," with the killer planet eater (the one that looked like a giant Bugles snack) on a rampage. Many IT pundits even believe that the era of businesses having their own IT is over. Not so fast.
Like many new technologies, the public cloud and, by extension, SaaS have grown rapidly and made an enormous impact. Because many applications are ideally suited to this delivery model, that is unlikely to change. However, there are also many other types of applications, especially business-critical ones, where IT needs to put in place the best possible solution -- competitive differentiation will depend on it.
My main point is that this is not a game-over situation with Amazon Web Services becoming the de facto IT provider for the world. Many applications will remain too critical to a given business to simply drop into the cloud. Others will have competitive performance requirements that cannot be met cost-effectively with generic configurations. For some, security (including physical security) will be critical, with no room for compromise.
With any good innovation comes mass market adoption, and in some cases the existing approach disappears. But in this case, I predict the private data center is here to stay and thrive -- at least for now.
From the Forbes Technology Council, an invitation-only community for world-class CIOs, CTOs and technology executives.