Later this month, Hewlett Packard Enterprise (HPE) will ship what appears to be the first server aimed specifically at machine-learning inference.
Machine learning is a two-part process: training and inference. Training uses powerful processors, such as GPUs from Nvidia and AMD, to "teach" the AI system what to look for, such as in image recognition.
Inference answers whether a new input matches the trained model. A GPU is overkill for that task, and a much lower-power processor can be used.
Enter Qualcomm's Cloud AI 100 chip, which is designed for artificial intelligence at the edge. It has up to 16 "AI cores" and supports the FP16, INT8, INT16, and FP32 data formats, all of which are used in inference. These are not custom Arm processors; they are entirely new SoCs designed for inference.
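To see why those lower-precision formats matter for inference, here is a minimal sketch (using NumPy, not Qualcomm's toolchain; the weight values are invented for illustration) of how the same weights shrink when cast from FP32 down to FP16 and INT8:

```python
import numpy as np

# Hypothetical FP32 weights, as a model might hold after training.
weights_fp32 = np.array([0.12, -0.5, 0.97, 0.33], dtype=np.float32)

# Casting to a lower-precision format halves memory and bandwidth
# needs -- the main win for inference hardware.
weights_fp16 = weights_fp32.astype(np.float16)

# INT8 needs an explicit scale: map the FP32 range onto [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

print(weights_fp32.nbytes, weights_fp16.nbytes, weights_int8.nbytes)
# FP32 uses 4 bytes per value, FP16 uses 2, INT8 uses 1.
```

Quartering the size of every weight is what lets an inference accelerator get by with a fraction of the power and memory bandwidth a training GPU needs.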
The Cloud AI 100 is part of the HPE Edgeline EL8000 edge gateway system, which integrates compute, storage, and management in a single edge device. Inference workloads are often large in scale and require low latency and high throughput to enable real-time results.
The HPE Edgeline EL8000 is a 5U system that supports up to four independent server blades clustered using dual-redundant chassis-integrated switches. Its little brother, the HPE Edgeline EL8000t, is a 2U design that supports two independent server blades.
In addition to performance, the Cloud AI 100 has a low power draw. It comes in two form factors: a PCI Express card and dual M.2 chips mounted on the motherboard. The PCIe card has a 75-watt power envelope, while the two M.2 units draw either 15 watts or 25 watts. A typical CPU draws more than 200 watts, and a GPU over 400 watts.
Qualcomm says the Cloud AI 100 supports all key industry-standard model formats, including ONNX, TensorFlow, PyTorch, and Caffe; pre-trained models can be imported, then compiled and optimized for deployment. Qualcomm has a set of tools for model porting and preparation, including support for custom operations.
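Qualcomm's actual compiler is proprietary, but a toy sketch can show the kind of optimization such a deployment toolchain performs. Constant folding precomputes any part of the model graph whose inputs are all known at compile time, so the accelerator never executes it at runtime. The graph format, operator names, and values below are invented for illustration; they are not Qualcomm's or ONNX's real formats:

```python
# Toy constant-folding pass over an invented graph representation:
# each node is (name, (op, input_names)).

def constant_fold(graph, constants):
    """Replace nodes whose inputs are all constants with their value."""
    folded = dict(constants)
    remaining = []
    for name, (op, inputs) in graph:
        if all(i in folded for i in inputs):
            vals = [folded[i] for i in inputs]
            folded[name] = {"add": sum, "mul": lambda v: v[0] * v[1]}[op](vals)
        else:
            remaining.append((name, (op, inputs)))
    return remaining, folded

# "scale_bias" depends only on constants, so it folds away at compile
# time; "out" depends on the runtime input "x", so it must remain.
graph = [
    ("scale_bias", ("mul", ["scale", "bias"])),
    ("out", ("add", ["x", "scale_bias"])),
]
remaining, folded = constant_fold(graph, {"scale": 2.0, "bias": 3.0})
print(remaining)             # only the "out" node survives
print(folded["scale_bias"])  # 6.0
```

Real toolchains layer many such passes (fusion, layout changes, quantization) on top of this idea before emitting code for the target chip.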
Qualcomm says the Cloud AI 100 is targeting manufacturing/industrial customers, as well as those with edge AI requirements. Use cases for AI inference computing at the edge include computer vision and natural language processing (NLP) workloads.
For computer vision, this could include quality control and quality assurance in manufacturing, object detection and video surveillance, and loss prevention and detection. For NLP, it includes programming-code generation, smart assistant operations, and language translation.
Edgeline servers will be available for purchase or lease through HPE GreenLake later this month.