VeriSilicon announced the latest advancements in its high-performance and scalable GPGPU-AI computing IPs, which are now empowering next-generation automotive electronics and edge server applications.
Combining programmable parallel computing with a dedicated Artificial Intelligence (AI) accelerator, these IPs offer exceptional computing density for demanding AI workloads such as Large Language Model (LLM) inference, multimodal perception, and real-time decision-making in thermally and power-constrained environments.
VeriSilicon’s GPGPU-AI computing IPs are based on a high-performance General Purpose Graphics Processing Unit (GPGPU) architecture with an integrated dedicated AI accelerator, delivering outstanding computing capabilities for AI applications.
The programmable AI accelerator and sparsity-aware computing engine accelerate transformer-based and matrix-intensive models through advanced scheduling techniques.
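The announcement does not specify which sparsity scheme the engine exploits. Purely as an illustration of the kind of structured sparsity such engines typically target, the sketch below prunes a weight matrix to a 2:4 pattern using plain PyTorch tensor operations; the 2:4 choice and all names here are assumptions, not VeriSilicon APIs.

```python
import torch

def prune_2_to_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude values in every group of 4 along the
    input dimension, producing the 2:4 structured-sparsity pattern that
    sparsity-aware matrix engines commonly accelerate (illustrative only)."""
    out_features, in_features = weight.shape
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Keep the top-2 magnitudes per group of 4, zero the rest.
    _, keep_idx = groups.abs().topk(2, dim=-1)
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, keep_idx, True)
    return (groups * mask).reshape(out_features, in_features)

# Example: a transformer-style projection weight pruned to 2:4 sparsity.
w = torch.randn(4096, 4096)
w_sparse = prune_2_to_4(w)
assert (w_sparse.reshape(-1, 4) != 0).sum(dim=-1).max() <= 2
```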
These IPs also support a broad range of data formats for mixed-precision computing, including INT4/8, FP4/8, BF16, FP16/32/64, and TF32, and are designed with high-bandwidth interfaces for 3D-stacked memory, LPDDR5X, and HBM, as well as PCIe Gen5/Gen6 and CXL.
They are also capable of multi-chip and multi-card scale-out expansion, offering system-level scalability for large-scale AI application deployments.
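On the software side, mixed-precision execution with formats such as those listed above typically looks like the framework-level autocast shown below. This is a minimal sketch using stock PyTorch on CPU; nothing in it is a VeriSilicon-specific API.

```python
import torch
import torch.nn as nn

# A small transformer-style block run with BF16 mixed precision.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

x = torch.randn(8, 1024)

# autocast keeps matrix multiplies in a low-precision format (BF16 here)
# while higher-precision accumulation is handled underneath, which is the
# usual mixed-precision split.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```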
VeriSilicon’s GPGPU-AI computing IPs provide native support for popular AI frameworks for both training and inference, such as PyTorch, TensorFlow, ONNX, and TVM. These IPs also support a General Purpose Computing Language (GPCL) that is compatible with mainstream GPGPU programming languages and widely used compilers.
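A typical path from one of these frameworks onto such an IP is a standard graph export followed by a vendor compilation step. The sketch below shows only the generic PyTorch-to-ONNX portion; the target-specific compile and deploy step is assumed and not shown.

```python
import torch
import torch.nn as nn

# Illustrative model; a real workload would be an LLM or perception network.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()).eval()
example_input = torch.randn(1, 1024)

# Standard ONNX export; the resulting graph would then be compiled by the
# target's own toolchain (not shown, vendor-specific).
torch.onnx.export(
    model,
    example_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```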
“The demand for AI computing on edge servers, both for inference and incremental training, is growing exponentially. This surge requires not only high efficiency but also strong programmability,” said Weijin Dai, Chief Strategy Officer, Executive Vice President, and General Manager of the IP Division at VeriSilicon.
“VeriSilicon’s GPGPU-AI computing processors are architected to tightly integrate GPGPU computing with the AI accelerator at a fine-grained level. The advantages of this architecture have already been validated in multiple high-performance AI computing systems.”