


Apstra’s intent-based AOS 2.0 delivers agility across physical/virtual networks so they look like one.

Intent-based systems have been all the rage since Cisco announced its “Network Intuitive” solution earlier this year. For Cisco customers, its solution is certainly interesting. But what about businesses that want an alternative to Cisco? Or companies that want to run a multi-vendor environment?

Over a year before Cisco’s launch, a start-up called Apstra shipped a closed-loop, intent-based solution. It was designed to be multi-vendor, with support not only for Cisco but also Arista, Juniper, HP and others, including white-box switches. Apstra operates as an overlay to networks built on any of the leading vendors to deliver intent-based networking in heterogeneous environments.

This week, Apstra announced the next release of its software, AOS 2.0, which addresses the gap that exists between physical underlay and virtual overlay networks, including VXLAN. I’ve discussed this topic with many network professionals, and there is a high degree of interest in using network virtualization, but the lack of visibility between the underlay and overlay is a huge deterrent. Without an understanding of the relationship between the two, network managers are faced with managing two separate networks — the physical network and virtual overlay.

Also, with this model, troubleshooting becomes extremely difficult because the virtual network is one big blind spot. Any application problem that occurs in the overlay is, for all intents and purposes, invisible to the engineers running the physical network. The lack of visibility also creates security problems, because malware or other malicious traffic could spread like wildfire across the overlay while staying hidden from the security tools attached to the physical network. There’s an expression that you can’t secure or manage what you can’t see, and that’s certainly true for overlay networks today.

Bringing the two environments together using traditional management models like the CLI would be like trying to compute all the algorithms in an autonomous vehicle manually. People can’t process huge volumes of data, analyze them and act on the insights fast enough for that to be practical, which is why the task is turned over to machine learning systems. Similarly, maintaining the intent of a single physical network is hard enough. Bring in the virtual overlay and all its dependencies, and the task becomes so monumentally difficult that it’s practically impossible, even for the largest network teams.

Apstra’s AOS 2.0 facilitates management of physical and virtual networks

Apstra’s intent-based operations work off a closed-loop model in which the intent is continuously validated. Virtual overlays introduce VXLAN segments that are used in conjunction with VLANs to segment virtual machines and containers in data centers at a more granular level. When these resources are put in motion and spun up and down dynamically, it becomes very difficult to maintain specific policies, such as “all workloads in VLAN1 are to be assigned to a specific VXLAN segment.” Intent-based solutions continually gather data and automate the reconfiguration.
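
To make that closed loop concrete, here is a minimal sketch of what continuous intent validation might look like for a VLAN-to-VXLAN policy like the one above. The telemetry source, policy format and remediation call are hypothetical illustrations, not Apstra’s actual AOS interfaces.

    import time

    # Hypothetical declared intent: every workload in VLAN 1 belongs to VXLAN segment (VNI) 10001.
    INTENT = {"vlan": 1, "vni": 10001}


    def observe_workloads():
        """Stand-in for telemetry collection from switches and hypervisors.

        Returns the observed state as (workload, vlan, vni) tuples.
        """
        return [
            ("vm-web-01", 1, 10001),   # compliant with the intent
            ("vm-web-02", 1, 10042),   # drifted onto the wrong VXLAN segment
        ]


    def remediate(workload, vni):
        """Stand-in for pushing a corrective configuration change."""
        print(f"re-mapping {workload} to VNI {vni}")


    def validate_once():
        """One pass of the closed loop: compare observed state with intent and fix drift."""
        for workload, vlan, vni in observe_workloads():
            if vlan == INTENT["vlan"] and vni != INTENT["vni"]:
                remediate(workload, INTENT["vni"])


    if __name__ == "__main__":
        while True:                    # the loop never ends; intent is validated continuously
            validate_once()
            time.sleep(30)

The point of the loop is that it runs forever: configuration drift is detected and corrected as it happens, rather than being discovered in the next manual audit.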

Also, Apstra’s AOS self-documents, repairs itself, and can maintain security. The term “intent-based security” is often bandied about, but that’s more the effect of being able to understand, create and maintain policies in highly dynamic environments.

This latest release of AOS automates the full lifecycle of VXLAN-based, layer two network operations within and across racks, which is crucial today because east-west traffic flows dominate data centers. The growth in east-west traffic is driving the need to migrate from legacy, multi-tier layer two networks to more dynamic and scalable layer three leaf-spine architectures with an agile layer two overlay. Doing this with legacy configuration methods, such as scripting or manual CLI changes, would require extensive application testing and possibly modification to account for the changes. Apstra’s closed loop increases agility, so the transition to leaf-spine can be made without any modifications at the application layer.
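
For readers less familiar with VXLAN, the mechanism behind that agile layer two overlay is simple: each Ethernet frame is wrapped in a UDP packet carrying a 24-bit VXLAN Network Identifier (VNI), which is how layer two segments get stretched across a layer three leaf-spine fabric. The sketch below builds the 8-byte VXLAN header defined in RFC 7348; it is a generic illustration, not anything specific to AOS.

    import struct

    VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN (RFC 7348)


    def vxlan_header(vni: int) -> bytes:
        """Build the 8-byte VXLAN header that precedes the encapsulated Ethernet frame.

        Layout per RFC 7348: 8 flag bits (only the I bit, 0x08, is set to mark a
        valid VNI), 24 reserved bits, the 24-bit VNI, then 8 more reserved bits.
        """
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        flags = 0x08 << 24                     # I flag in the top byte, reserved bits zero
        return struct.pack("!II", flags, vni << 8)


    # Example: the header for VXLAN segment 10001.
    print(vxlan_header(10001).hex())           # -> 0800000000271100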

In a world where digital transformation is running amok, the infrastructure teams, including network operations, must find a way to respond to line-of-business requests faster. Intent-based networks reduce the amount of downtime caused by human error (still the largest cause) and cut operational expenses. They also increase network agility.

Digital businesses need to move with speed, but they are only as agile as their least-agile IT component, and today that is the network. Apstra’s AOS 2.0 delivers agility across the physical/virtual boundary, so it now looks like a single network instead of two distinct ones.

Cyber security remains a hot topic with nearly every IT and business leader that I speak with. In particular, there seems to be an intensified focus on network security. Security is typically deployed in layers (network, compute and application), and I expect that model to continue in the short-term, but given the fact that many of the building blocks of digitization, such as IoT and the cloud, are network-centric, there should be a stronger focus on leveraging the network and network-based security to protect the organization.

[keep reading…]

The rise of cloud applications has been well documented on this site and others. The cloud era kicked off with a handful of SaaS applications, such as ERP, CRM and HR systems. Today, businesses are buying almost everything from the cloud: compute services, contact center software, unified communications and anything else you can think of. These apps and services may look somewhat unrelated, but they all have one thing in common: they are highly dependent on the network to perform properly.

[keep reading…]

Star Trek is filled with advice that applies to today’s tech professionals. Here’s a look at seven from the Enterprise’s most logical crew member, Spock.

It’s no surprise that many network engineers are also fans of Star Trek. Personally, I have been a Trekkie for as long as I can remember. One of the appealing things about Star Trek is that it pushed the limits of what’s possible. In fact, many technologies we take for granted today were previewed on Star Trek over 50 years ago. Things such as wireless communications, immersive videoconferencing and tablet computers were all used regularly on the Starship Enterprise long before we used them down on Earth.

[keep reading…]

Unveils a skills assessment and development suite, giving IT leaders a quantitative way of assessing the strengths of their human resources.

The topic of the technology skills gap and re-skilling has become a hot one over the past few years. The shift to digital has fundamentally changed IT forever, and it’s only going to get harder to stay current with trends. Hardware vendors have adopted software models, application developers have embraced DevOps, security has never been more difficult or critical, and decision-making is now based on data science. These changes are driving the need for different skill sets across the entire technology stack, from the network up to the application layer.

[keep reading…]

Nvidia’s TensorRT 3 optimizes and compiles complex networks to get the best possible performance for AI inferencing.

It’s safe to say the Internet of Things (IoT) era has arrived, as we live in a world where things are being connected at a pace never seen before. Cars, video cameras, parking meters, building facilities and anything else one can think of are being connected to the internet, generating massive quantities of data.

The question is how to interpret all of that data and understand what it means. Clearly, trying to process this much data manually doesn’t work, which is why most of the web-scale companies have embraced artificial intelligence (AI) as a way to create new services that can leverage the data. This includes speech recognition, natural language processing, real-time translation, predictive services and contextual recommendations. Every major cloud provider and many large enterprises have AI initiatives underway.

However, many data centers aren’t outfitted with enough processing power for AI inferencing. For those not familiar with the different phases of AI, training is teaching the AI new capabilities from an existing set of data. Inferencing is applying that learning to new data sets. Facebook’s image recognition and Amazon’s recommendation engine are both good examples of inferencing.
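
To make the distinction concrete, here is a minimal sketch using a tiny logistic-regression model in plain NumPy (standing in for the deep neural networks that actually run on GPUs): training adjusts the model’s weights from an existing, labeled data set, while inferencing applies the frozen weights to data the model has never seen.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Training: learn weights from an existing, labeled data set ---------
    X_train = rng.normal(size=(200, 2))                           # existing data
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(float)   # known labels

    w, b = np.zeros(2), 0.0
    for _ in range(500):                                          # simple gradient descent
        p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))              # predicted probabilities
        w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
        b -= 0.5 * np.mean(p - y_train)

    # --- Inferencing: apply the learned weights to new, unseen data ---------
    X_new = rng.normal(size=(5, 2))                               # new data, no labels
    scores = 1.0 / (1.0 + np.exp(-(X_new @ w + b)))
    print((scores > 0.5).astype(int))                             # the model's predictions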

This week at its GPU Technology Conference (GTC) in China, Nvidia announced TensorRT 3, which promises to improve the performance and cut the cost of inferencing. TensorRT 3 takes very complex networks and optimizes and compiles them to get the best possible performance for AI inferencing. The graphic below shows that it acts as AI “middleware,” so data can be run through any framework and sent to any GPU. Recall this post, where I explained why GPUs are much better for AI applications than CPUs. Nvidia has a wide range of GPUs, depending on the type of application and processing power required.
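
As a rough illustration of that optimize-and-compile flow, here is a minimal sketch using the current TensorRT Python API. Treat it as an assumption-laden example: the API has changed considerably since the TensorRT 3 release discussed here, and the model file name is hypothetical.

    # Sketch only: uses the modern TensorRT Python API, not the TensorRT 3 API
    # discussed in this post, and assumes a hypothetical "model.onnx" export.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # Any framework (TensorFlow, PyTorch, etc.) can export to ONNX, which is
    # what makes the optimizer effectively framework-agnostic.
    with open("model.onnx", "rb") as f:
        parser.parse(f.read())

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)      # reduced precision for faster inferencing

    # Compile the network into a GPU-optimized, deployable engine.
    engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine)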

Unlike other GPU vendors, Nvidia doesn’t rely on great silicon alone. Instead, it takes an architectural approach, combining software, development tools and hardware into an end-to-end solution.

During his keynote, CEO Jensen Huang showed some stats where TensorRT 3 running on Nvidia GPUs offered performance that was 150x better than CPU-based systems for translation and 40x better for images, which will save its customers huge amounts of money and offer a better quality of service. I have no way of proving or disproving those numbers, but I suspect they’re accurate because no other vendor has the combination of a high-performance compiler, run-time engine and GPU optimized to work together.

Other Nvidia announcements

  • DeepStream SDK introduced. It delivers low-latency video analytics in real time. Video inferencing has become a key part of smart cities but is being used in entertainment, retail and other industries as well.
  • An upgrade to CUDA, Nvidia’s accelerated computing software platform. Version 9 is now optimized for the new Tesla V100 GPU accelerator, Nvidia’s highest-end GPU and one that is ideal for AI, HPC and graphically intensive applications such as virtual reality.
  • Huawei, Inspur and Lenovo using Nvidia’s HGX reference architecture to offer Volta-based systems. The server manufacturers will be granted early access to the HGX data center architecture and its design guidelines. HGX is the same architecture used by Microsoft and Facebook today, meaning Asia-Pac-based organizations can have access to the same GPU-based servers as the leading web-scale cloud providers.

The world is changing quickly, and it’s my belief that the market leaders will be the organizations that have the most data and the technologies to interpret that data. Core to that is GPU-based machine learning and AI, as these systems can do things far faster than people can.
