Raptor Gen AI Accelerator Powers Breakthrough LLM Performance.

Neuchips, a leading AI application-specific integrated circuit (ASIC) solutions provider, will demo its revolutionary Raptor Gen AI accelerator chip (previously named N3000) and Evo PCIe accelerator card LLM solutions at CES 2024.

"We are thrilled to unveil our Raptor chip and Evo card to the industry at CES 2024," said Ken Lau, CEO of Neuchips. 

"Neuchips' solutions represent a massive leap in price to performance for natural language processing. With Neuchips, any organisation can now access the power of LLMs for a wide range of AI applications."

Raptor and Evo provide an optimised stack that makes market-leading LLMs readily accessible for enterprises. 

Neuchips' AI solutions significantly reduce hardware costs compared to existing solutions. The high energy efficiency also minimizes electricity usage, further lowering the total cost of ownership.

At CES 2024, Neuchips will demo Raptor and Evo accelerating the Whisper and Llama models in a Personal AI Assistant application. This solution highlights the power of LLM inferencing for real business needs.

Enterprises interested in test-driving Neuchips' breakthrough performance can visit booth 62700 to enrol in a free trial program. Additional technical sessions will showcase how Raptor and Evo can slash deployment costs for speech-to-text applications.

Evo Gen 5 PCIe Card Sets New Standard for Acceleration and Low Power Consumption.

The Raptor chip delivers up to 200 TOPS. Its outstanding performance on core AI inferencing operations such as matrix multiplication, vector processing, and embedding table lookups suits generative AI and transformer-based models.
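For readers unfamiliar with these operation classes, the following NumPy sketch illustrates what each one computes in a transformer inference pass. This is purely illustrative; it is not Neuchips' API, and the shapes and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Matrix multiply: the dominant cost in attention and MLP layers.
activations = rng.standard_normal((4, 64))   # (batch, hidden)
weights = rng.standard_normal((64, 64))      # (hidden, hidden)
projected = activations @ weights            # (4, 64)

# 2. Vector (elementwise) ops: activation functions, residual adds, scaling.
gated = projected * (1.0 / (1.0 + np.exp(-projected)))  # SiLU activation

# 3. Embedding table lookup: gather rows of a large table by token id.
vocab = rng.standard_normal((1000, 64))      # (vocab_size, hidden)
token_ids = np.array([3, 17, 256, 999])
embedded = vocab[token_ids]                  # (4, 64), one row per token
```

Matrix multiplies are compute-bound while embedding lookups are memory-bound, which is why an inference accelerator needs strong performance on both.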

This groundbreaking throughput is achieved via Neuchips' patented compression and efficiency optimisations tailored to neural networks.

Complementing Raptor is Neuchips' ultra-low-power Evo accelerator card. Evo combines an eight-lane PCIe Gen 5 interface with 32GB of LPDDR5 to achieve 64GB/s of host I/O bandwidth and 1.6Tbps of memory bandwidth at just 55 watts per card.
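As a back-of-the-envelope check (our arithmetic, not vendor data), the quoted host I/O figure follows from the PCIe Gen 5 per-lane rate and lane count:

```python
# PCIe Gen 5 signals at 32 GT/s per lane with 128b/130b line encoding.
LANE_GT_S = 32
ENCODING = 128 / 130
LANES = 8

per_lane_gbs = LANE_GT_S * ENCODING / 8     # ~3.94 GB/s, one direction
host_io_gbs = per_lane_gbs * LANES * 2      # both directions combined
print(f"host I/O: ~{host_io_gbs:.0f} GB/s") # ~63 GB/s, i.e. the quoted 64 GB/s

# The memory figure is a unit conversion: 1.6 Tbps = 200 GB/s.
mem_gbs = 1.6 * 1000 / 8
print(f"memory: {mem_gbs:.0f} GB/s")
```

The 64GB/s figure therefore counts both transfer directions of the x8 link, the usual convention in accelerator datasheets.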

As demonstrated with DLRM, Evo also features 100% scalability, allowing customers to linearly increase performance by adding more chips. This modular design ensures investment protection for future AI workloads.
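The "100% scalability" claim can be read as aggregate throughput growing linearly with card count at constant per-card power. A minimal sketch of that assumed model, using only the figures quoted above (not measured multi-card data):

```python
TOPS_PER_CHIP = 200   # quoted Raptor peak throughput
WATTS_PER_CARD = 55   # quoted Evo board power

# Linear scaling: n cards deliver n times the throughput and draw
# n times the power, so TOPS-per-watt stays constant.
for cards in (1, 2, 4, 8):
    total_tops = cards * TOPS_PER_CHIP
    total_watts = cards * WATTS_PER_CARD
    print(f"{cards} card(s): {total_tops} TOPS at {total_watts} W")
```

Real deployments rarely achieve perfectly linear scaling once interconnect and host overheads are counted, so treat this as the upper bound the vendor is advertising.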

An upcoming half-height, half-length (HHHL) form-factor product, Viper, set to launch in the second half of 2024, will provide even greater deployment flexibility. The new series brings data-centre-class AI acceleration in a compact design.

Copyright © ITBizNews. Unauthorized reproduction and redistribution prohibited.