FWS-8600 - Powered by 2nd Generation Intel® Xeon® Scalable Processors
With the capabilities for AI-based IoT workloads, 2nd Generation Intel Xeon Scalable processors are enabling fast remote digital security response throughout a city; rapid identification and classification of medical imaging data critical to healthcare specialists; personalized retail shopping experiences and frictionless checkout; and defect detection for manufacturing.
Powered by 2nd Generation Intel® Xeon® Scalable Processors
2nd Generation Intel® Xeon® Scalable processors with built-in Intel® Deep Learning Boost deliver advanced AI capabilities for deploying the next era of IoT data-driven compute at the edge and advancing business transformation. Deliver high-performance deep learning inference and vision for AI workloads, consolidate diverse IoT workloads on the same hardware, and handle massive data sets and near-real-time transactions. Now you can get even better built-in deep learning capabilities, speed deployment, and lower TCO—simply with one integrated platform and with AI workloads optimized by the Intel® Distribution of OpenVINO™ toolkit.
-Intel® Xeon® Scalable processor (Skylake-SP/Cascade Lake-SP)
-600 W to 800 W redundant power via redundant PSUs
Intel® Deep Learning Boost
New Intel® Deep Learning Boost speeds up the dense computations characteristic of neural networks. Available on all 2nd Generation Intel® Xeon® Scalable processors (code-named Cascade Lake), Intel Deep Learning Boost provides low-precision integer operations to significantly accelerate performance for deep learning inference applications, including image recognition, object detection, data analytics, and more. This dramatic improvement in performance and efficiency is delivered by using a single instruction for int8 convolutions in deep learning inference applications, an operation that required three separate instructions on previous-generation processors. No hardware changes are required to support Intel Deep Learning Boost on 2nd Generation Intel Xeon Scalable processors.
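The int8 multiply-accumulate that Intel Deep Learning Boost fuses into one instruction (the AVX-512 VNNI VPDPBUSD instruction) can be sketched in plain numpy. This is an illustrative model of the arithmetic only, not vectorized code: unsigned 8-bit activations are multiplied by signed 8-bit weights and accumulated into 32-bit integers, the result previous-generation AVX-512 produced with a three-instruction sequence.

```python
import numpy as np

def vnni_style_dot(a_u8: np.ndarray, b_s8: np.ndarray) -> np.int32:
    """Model of the DL Boost int8 fused multiply-accumulate:
    u8 activations x s8 weights, accumulated into s32.

    VNNI performs this in a single VPDPBUSD instruction; earlier
    AVX-512 needed three instructions for the same result.
    """
    # Widen to int32 before multiplying so products cannot overflow.
    return np.dot(a_u8.astype(np.int32), b_s8.astype(np.int32))

# One int8 inner product with 32-bit accumulation:
activations = np.array([100, 200, 50, 255], dtype=np.uint8)
weights = np.array([3, -2, 7, -1], dtype=np.int8)
acc = vnni_style_dot(activations, weights)
print(acc)  # 100*3 - 200*2 + 50*7 - 255*1 = -5
```

The int32 accumulator is the key detail: int8 products would overflow an 8-bit result almost immediately, so the hardware widens as it accumulates.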
Developers can access the new Intel Deep Learning Boost capabilities to boost performance of int8 data types in deep learning workloads, using the Intel® Distribution of OpenVINO™ toolkit to streamline deployment of deep learning inference.
TARGET USE CASES
-Medical anomaly detection
Process more data and improve responsiveness at the edge through higher processing power.
New Intel® Deep Learning Boost accelerates AI/deep learning/vision workloads
Expect up to 14x the inference throughput of previous-generation processors.
Aggregate diverse workloads on a single system without degrading performance, helping to ensure deterministic behavior with Intel® Resource Director Technology (Intel® RDT).
Hardware mitigation for side-channel exploits helps protect systems and data by hardening the platform against malicious attacks.
Greater memory capacity and bandwidth
Speed workloads and time to insight with Intel® Optane™ DC persistent memory—a new, revolutionary memory product for affordable, persistent, and large memory.
Enable embedded use cases
Offers SKUs with 10-year reliability to meet the stringent conditions of embedded use cases.
Integrated accelerators and optimized libraries make virtualization of networking, storage, and security functions fast, easy, and more efficient.
Rich set of I/Os supports PCIe*, USB, and SATA* on a single platform to connect varied peripherals such as networking (Ethernet), storage (SSDs), and accelerators (FPGAs).
Maximize platform investments
Upgrade existing Intel® Xeon® Scalable processors with higher performance and new capabilities, without the need for additional infrastructure.
2nd Generation Intel® Xeon® Scalable processors with built-in Intel® Deep Learning Boost bring unprecedented performance to the wide spectrum of vision and AI applications at the edge, such as smart city traffic pattern monitoring, defect detection in industrial settings, and gender and age classification for targeted retail advertising.
The powerful capabilities enable near-real-time video analytics at the edge via on-premise servers, as well as deeper or historical analysis at the back end or cloud. Edge capabilities save on costly video data transmission and networking while offering fast time to response.
Deep learning inference powered by Intel Xeon Scalable processors with Intel Deep Learning Boost is enabling a breadth of AI use cases, including object identification and classification, facial recognition, and data analytics.
TARGET USE CASES
-Video analytics servers
-NVR (storage) servers
-Management services servers
KEY TECHNOLOGY BENEFITS
-Intel Deep Learning Boost enables convolution layers (which are important in vision applications) to be run at int8 precision, offering additional performance
-Memory and I/O
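The convolution-layer benefit above can be illustrated with a naive int8 2D convolution: uint8 activations, int8 weights, int32 accumulation, which is the precision scheme Intel Deep Learning Boost accelerates. This is a didactic sketch of the arithmetic, not an optimized or vectorized implementation.

```python
import numpy as np

def conv2d_int8(x_u8: np.ndarray, k_s8: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D convolution at int8 precision.

    uint8 activations x int8 kernel, accumulated into int32 --
    the layout vision workloads use on the int8 hardware path.
    """
    h, w = x_u8.shape
    kh, kw = k_s8.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.int32)
    x32 = x_u8.astype(np.int32)  # widen so products cannot overflow
    k32 = k_s8.astype(np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x32[i:i + kh, j:j + kw] * k32)
    return out

# A 3x3 image of ones convolved with a 2x2 kernel summing to 2:
image = np.ones((3, 3), dtype=np.uint8)
kernel = np.array([[1, -1], [2, 0]], dtype=np.int8)
print(conv2d_int8(image, kernel))  # 2x2 output, every entry 2
```

Each output element is exactly the int8 multiply-accumulate pattern shown earlier, applied to one kernel-sized window of the image.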