
Archive for September 2017

Star Trek is filled with advice that applies to today’s tech professionals. Here’s a look at seven pieces of wisdom from the Enterprise’s most logical crew member, Spock.

It’s no surprise that many network engineers are also fans of Star Trek. Personally, I have been a Trekkie for as long as I can remember. One of the appealing things about Star Trek is that it pushed the limits of what’s possible. In fact, many technologies we take for granted today were previewed on Star Trek over 50 years ago. Things such as wireless communications, immersive videoconferencing and tablet computers were all used regularly on the Starship Enterprise long before we used them down on Earth.

[keep reading…]

A newly unveiled skills assessment and development suite gives IT leaders a quantitative way of assessing the strengths of their human resources.

The topic of the technology skills gap and re-skilling has become a hot one over the past few years. The shift to digital has fundamentally changed IT forever, and it’s only going to get harder to stay current with trends. Hardware vendors have adopted software models, application developers have embraced DevOps, security has never been more difficult or critical, and decision-making is now based on data science. These changes are driving the need for different skill sets across the entire technology stack — from the network up to the application layer.

[keep reading…]

Nvidia’s TensorRT 3 optimizes and compiles complex networks
to get the best possible performance for AI inferencing.

It’s safe to say the Internet of Things (IoT) era has arrived, as we live in a world where things are being connected at a pace never seen before. Cars, video cameras, parking meters, building facilities and anything else one can think of are being connected to the internet, generating massive quantities of data.

The question is how to interpret all of that data and understand what it means. Clearly, trying to process this much data manually doesn’t work, which is why most of the web-scale companies have embraced artificial intelligence (AI) as a way to create new services that leverage the data. These include speech recognition, natural language processing, real-time translation, predictive services and contextual recommendations. Every major cloud provider and many large enterprises have AI initiatives underway.

However, many data centers aren’t outfitted with enough processing power for AI inferencing. For those not familiar with the different phases of AI, training is teaching the AI new capabilities from an existing set of data. Inferencing is applying that learning to new data sets. Facebook’s image recognition and Amazon’s recommendation engine are both good examples of inferencing.
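To make that distinction concrete, here is a minimal sketch in Python, using scikit-learn purely as a stand-in (nothing Nvidia-specific, and the data is synthetic). Training is the compute-heavy step of learning from existing labeled data; inferencing is the comparatively cheap step of applying the trained model to data it has never seen.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Training: learn from an existing, labeled data set (done once or periodically)
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 4))                       # historical feature data
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # known outcomes

    model = LogisticRegression()
    model.fit(X_train, y_train)                                # the compute-heavy phase

    # Inferencing: apply what was learned to brand-new data (runs constantly in production)
    X_new = rng.normal(size=(5, 4))                            # data the model has never seen
    print(model.predict(X_new))                                # predicted labels for the new data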

This week at its GPU Technology Conference (GTC) in China, Nvidia announced TensorRT 3, which promises to improve the performance and cut the cost of inferencing. TensorRT 3 takes very complex networks and optimizes and compiles them to get the best possible performance for AI inferencing. In effect, it acts as AI “middleware”: data can be run through any framework and sent to any GPU. Recall this post where I explained why GPUs were much better for AI applications than CPUs. Nvidia has a wide range of GPUs, depending on the type of application and processing power required.
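As a rough sketch of that optimize-and-compile workflow, the Python below follows the shape of a later TensorRT Python API (the TensorRT 3-era API centered on Caffe and UFF parsers instead, so treat the exact calls as illustrative, and “model.onnx”/“model.plan” as placeholder file names). The idea is the same: a trained network is parsed out of a framework, optimized, and compiled into a runtime engine before deployment.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

    # Parse a trained model exported from any framework (placeholder path)
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        parser.parse(f.read())

    # Optimize: allow reduced FP16 precision for faster, cheaper inferencing
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)

    # Compile into a serialized runtime engine, ready to deploy on the target GPU
    engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine)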

Unlike other GPU vendors, Nvidia’s approach isn’t just great silicon. Instead it takes an architectural approach where it combines software, development tools and hardware as an end-to-end solution.

During his keynote, CEO Jensen Huang shared stats showing TensorRT 3 running on Nvidia GPUs delivering performance 150x better than CPU-based systems for translation and 40x better for images, which will save its customers huge amounts of money and offer a better quality of service. I have no way of proving or disproving those numbers, but I suspect they’re accurate because no other vendor has the combination of a high-performance compiler, run-time engine and GPU optimized to work together.

Other Nvidia announcements

  • DeepStream SDK was introduced. It delivers low-latency video analytics in real time. Video inferencing has become a key part of smart cities but is being used in entertainment, retail and other industries as well.
  • An upgrade to CUDA, Nvidia’s accelerated computing software platform. Version 9 is now optimized for the new Tesla V100 GPU accelerator, Nvidia’s highest-end GPU and ideal for AI, HPC and graphically intense applications such as virtual reality.
  • Huawei, Inspur and Lenovo are using Nvidia’s HGX reference architecture to offer Volta-based systems. The server manufacturers will be granted early access to HGX architectures for data centers and design guidelines. The HGX architecture is the same one used by Microsoft and Facebook today, meaning Asia-Pac-based organizations can have access to the same GPU-based servers as the leading web-scale cloud providers.

The world is changing quickly, and it’s my belief that the market leaders will be the organizations that have the most data and the technologies to interpret that data. Core to that is GPU-based machine learning and AI, as these systems can do things far faster than people can.

Sandboxing will be the core product for FireEye into the foreseeable future, but Helix will be an important adjacent market for the company and its customers. Here’s why.

Earlier this month I saw a post on Investor’s Business Daily outlining why Helix was important to FireEye’s shareholders. The article got me thinking about the low awareness that Helix has with security buyers. In my opinion, it’s one of the more underrated security tools.

For better or worse, FireEye has a strong association with the sandboxing market. Sandboxing has been a critical security tool for almost all businesses, but many companies, even FireEye customers, don’t look to the vendor for other security functions. Sandboxing will be the core product for FireEye into the foreseeable future, but Helix will be an important adjacent market for the company and its customers.

[keep reading…]


As someone who has been following enterprise WAN architectures for decades, I find their evolution fascinating, especially the number of new technologies that have been deployed in isolation. For example, WAN optimization and SD-WANs are often discussed as separate solutions. I can’t fathom why a business would deploy an SD-WAN and not implement WAN optimization as part of it. If you’re going to go through the work of modernizing your WAN architecture, why wouldn’t you integrate optimization technologies into your deployment right from the start?

[keep reading…]

The future of data centers will rely on cloud, hyperconverged infrastructure and more powerful components.

A data center is a physical facility that enterprises use to house their business-critical applications and information, so as data centers evolve, it’s important to think long-term about how to maintain their reliability and security.

Data center components

Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements, such as routers, switches, security devices, storage systems, servers, application delivery controllers and more. These are the components IT needs to store and manage the systems most vital to a company’s continuous operations. Because of this, the reliability, efficiency, security and constant evolution of a data center are typically a top priority.

[keep reading…]
