
This syndicated post by Zeus Kerravala originally appeared at Network World.

The cloud has been a core component of almost every organization’s IT strategy for the past five years. However, I believe we are reaching a cloud “tipping point” where it will be used for dramatically different things than it has in the past.

The first wave of cloud growth was fueled by organizations looking for a cheaper alternative to running servers on premises. The next wave of cloud growth will be driven by organizations looking to fundamentally change their businesses through the use of advanced technologies like machine learning and artificial intelligence (AI).

Over the past year, we have seen a veritable cornucopia of AI use cases, including playing poker and Go, writing news stories, filing insurance claims, driving cars and writing code. This current phase moves the cloud from being a "nice to have" to an absolute, slam-dunk need to have, as it's almost impossible for a business to build the scale and elasticity required to power an AI platform on its own.

The rise of AI will transform the world and change the way we live and work. Businesses that want to harness the power of AI in the cloud must ensure their cloud provider is architected to keep up with the new demands this kind of computing creates.

This week Microsoft and NVIDIA introduced plans for the first-ever hyperscale GPU accelerator to enable a scalable AI cloud. The new HGX-1 is an open-source design, released in concert with Microsoft's Project Olympus, that provides cloud providers with the fastest and most flexible route to AI.

Decades ago, Intel and PC manufacturers designed ATX (Advanced Technology eXtended) to standardize core PC components like the motherboard form factor, mounting points and power supply. The HGX-1 is attempting to play a similar role for the cloud, where an industry standard, or even a de facto standard, can help meet what is expected to be an explosion in demand from the rise of AI.

While there have been early use cases of AI, I believe we're still in the first inning of this trend. In fact, more accurately, the pitchers are still warming up, and we have not even scratched the surface of what's possible with AI.

In the next few years, expect to see AI impact almost everything in our lives, including education, healthcare, customer service, research and development, and almost anything else you can think of. Fueling the innovation are thousands of startups thinking up new ways of using AI to change the world.

As I have pointed out previously, GPUs, not traditional CPUs, are what's needed to handle the massive processing demands of a workload like AI, and no one does GPUs like NVIDIA.

Each HGX-1 chassis comprises eight NVIDIA Tesla P100 GPUs, and these platforms can be connected for horizontal scaling through NVIDIA's NVLink interconnect. Cloud providers can use this to let their customers buy GPU and CPU cycles to fuel their AI workloads, and the mix can be tuned and tweaked as the process matures and requirements change.
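To make that concrete, here is a minimal sketch of how a training job sees and uses the GPUs in a single chassis. It uses PyTorch purely as a stand-in for whatever framework a customer might run; it is not NVIDIA's or Microsoft's reference code.

```python
# Minimal sketch: spread a training step across all GPUs visible on one node.
# Assumes PyTorch built with CUDA; inter-GPU traffic rides NVLink where present.
import torch
import torch.nn as nn

def build_trainer():
    num_gpus = torch.cuda.device_count()  # e.g. 8 on an HGX-1-class chassis
    print(f"Visible GPUs: {num_gpus}")

    # A toy model standing in for a real deep-learning workload.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

    if num_gpus > 1:
        # DataParallel replicates the model and splits each batch across GPUs,
        # so a single job consumes GPU cycles on the whole chassis.
        model = nn.DataParallel(model)

    device = torch.device("cuda" if num_gpus > 0 else "cpu")
    return model.to(device), device
```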

For example, consider an AI that is enabling a healthcare application. Early in the cycle it is consuming and analyzing a massive amount of data for its learning phase and may require a significant amount of GPU capacity. Once the process matures, it shifts from learning to inferring and the GPU requirements decline. Customers can customize the mix of CPUs and GPUs to meet any type of workload.
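As a rough illustration of that shift, here is a hedged sketch, again assuming a PyTorch workload rather than any particular provider's API: the model trains on whatever GPUs are available, then is unwrapped and run for inference on CPU once the learning phase is done.

```python
# Minimal sketch of the training-vs-inference split described above.
import torch

def train(model, loader, device, epochs=1):
    # Learning phase: heavy, GPU-hungry passes over a large data set.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

def infer(model, x):
    # Inference phase: far lighter, so the GPU allocation can shrink;
    # here the matured model simply runs on CPU.
    if isinstance(model, torch.nn.DataParallel):
        model = model.module  # unwrap the multi-GPU wrapper
    model = model.to("cpu").eval()
    with torch.no_grad():
        return model(x.to("cpu"))
```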

The HGX-1 is designed to be somewhat "plug and play," so hyperscale data centers can quickly add the capability and begin offering "GPU as a service" today, then scale as more businesses make AI a key part of their digital transformation strategies.

In conjunction with the HGX-1, NVIDIA announced it was joining the Open Compute Project and will work with Microsoft and other members to move AI from being something only a handful of companies can leverage to being a mainstream technology that can be used by companies of all sizes.

As organizations look at advancing their cloud strategies to include AI, they need to ask whether their cloud provider offers the most advanced GPUs as a service. If not, they're buying what will now be known as a legacy cloud.

Zeus Kerravala

Zeus Kerravala is the founder and principal analyst with ZK Research. He provides a mix of tactical advice to help his clients navigate the current business climate and long-term strategic advice.
