Synaptics Showcases Range of Edge AI Vision Solutions at Embedded Vision Summit
SAN JOSE, Calif. – May 25, 2021 – Synaptics Incorporated (SYNA) will showcase a range of its AI-enabled vision solutions at this week’s Embedded Vision Summit, with a particular focus on how product developers can implement efficient, high-performance computer vision and multimedia processing at the edge. The virtual event runs May 25-28, and Synaptics will offer demonstrations of its latest Smart Edge AI solutions in real-world applications, as well as a special presentation by Synaptics Fellow Patrick Worfolk.
Synaptics’ solutions are ideally suited for enabling smart displays, smart cameras, video soundbars, set-top boxes, voice-enabled devices, and computer-vision IoT products. At the event, the company will feature its VideoSmart VS600 family of edge-computing video SoCs and its Katana ultra-low-power solution to demonstrate how enhancements in AI processing capabilities on edge devices are enabling richer and more efficient vision and video experiences. Demonstrations include:
- A video conferencing application that leverages machine learning for uses ranging from biometric face and voice identification, which enables automatic access to calendar and video conferencing applications, to smarter framing of the scene in front of the camera and language understanding for automatic subtitling, all processed on the edge.
- Real-time video post-processing using machine learning that delivers significant improvements in video scaling and post-processing compared with traditional scalers integrated in SoCs. The demonstration will show a side-by-side comparison of a traditional hardware scaler and the Synaptics-enabled solution.
- Ultra-low-power vision on the edge, featuring the Katana Edge AI processor and the Tensai CC compiler developed by Eta Compute. The demonstration will showcase how Tensai CC compiles a neural network and generates power-, cycle- and memory-optimized code that takes advantage of the Katana architecture, enabling a wide range of battery-powered vision use cases.
Patrick Worfolk will present his talk “Visual AI at the Edge: From Surveillance Cameras to People Counters” at 10:30 AM on Thursday, May 27 (all sessions are also available on demand to registered attendees). The presentation will detail how new AI-at-the-edge processors with improved efficiency and flexibility are unleashing an opportunity to democratize computer vision across all markets, enabling edge AI devices with small, low-cost, low-power cameras. These applications range from enhancing the image quality of a high-resolution camera’s output on higher-compute edge SoCs to performing TinyML-based computer vision at lower resolutions in battery-powered devices. Applications that will be discussed include:
- Deep night vision: exceptional full-color video in very low-light conditions
- Imaging through display: de-noising and distortion correction of both 2D and 3D imagery from a time-of-flight depth camera that images through a smartphone OLED display
- Super-resolution: enhancement of high-resolution video imagery
- Object recognition: lower-resolution sensors running on battery-powered embedded devices
To register for the Embedded Vision Summit, visit the virtual event registration page.