Getting into AI can seem daunting, but many tools are available to ease the way. In this second part of a two-part piece on AI, we’ll look at tools for engineers and talk about the ecosystems and players you need to know about. Check out part one here, where we discussed how AI is an application enabler, not an application in itself.
AI is still an emerging technology, and as it matures and grows more sophisticated, the ecosystem around it must evolve with it. The AI space is in flux as companies gauge how much revenue it can generate and how important it is to the embedded space. Despite the rapid changes, many tools are well-developed and reliable enough for engineers entering the space. Keep in mind that those tools are evolving as quickly as AI itself, so you’ll want to stay nimble.
All of this growth is not happening at the same pace, however. IP is moving faster than tooling and infrastructure, and the slower portions of the market are getting fragmented as a result. Synaptics is an excellent example of the fast growth happening in IP and the work being done to even out the innovation imbalances.
“The components needed for the hardware side are beginning to settle out,” says Vikram Gupta, the Chief Product Officer at Synaptics. “Over the last year or two, a slew of companies have come up with their own hardware and/or IP solutions. Some have survived, some have not. Here at Synaptics, we try to take a more holistic view of the AI journey, emphasizing IP, hardware, and tool support.”
Developing workable IP for AI-enabled applications is a heavy lift, so companies with deeper resources are taking the lead. Traditional IP providers like Synopsys, Cadence, and Arm have become ecosystem leaders as a result. In tooling, it’s much less clear who is leading, and even harder to see where development is headed. That said, the players in this game are dedicating resources to make life easier for developers.
Says Gupta, “It's fairly confusing for people in the space. So, for people on the outside looking for solutions, it appears even more fragmented.”
So, who’s in charge?
Because of all these factors, the growing AI ecosystem is being carried by AI processor vendors, mainly because that basic building block must be in place before developers can do anything. It’s the foundation of an AI solution. As a result, those vendors control the market for now, which lets them focus on their key customers’ needs. What that also means, unfortunately, is that those vendors sometimes have to bear much of the development burden beyond the processors themselves or risk artificially limiting adoption.
Aware of this, processor vendors sometimes pursue solutions through acquisition. Following their lead and investing in AI providers—or looking for partners who have already done so—can set you up just right. Of course, a lousy bet will undoubtedly set you back.
AI doesn’t mean All Inclusive
AI can be a useful tool for efficiency and data management, to be sure, but it’s important to remember that it isn’t the all-singing, all-dancing CPU of the world. It’s very exciting right now, but it can’t solve every problem in every design, nor is it the best solution to every challenge.
As with any development process, it’s critical to begin with the end goal: what problem are you trying to solve? If you’re looking to add AI to a hardware engine, you can do that with a fairly low-impact algorithm or model, but if you don’t design toward that end goal, your product won’t be as efficient as it could have been. Think about how to optimally partition functions across a heterogeneous processing environment that includes NPUs, GPUs, and DSPs. Or maybe you’d prefer that thinking be done by the chip vendor’s tools. To that end, processor vendors have to keep offering optimal, inclusive paths from ideation to production.
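To make the partitioning idea concrete, here is a minimal sketch of the kind of decision a vendor toolchain (or an engineer doing it by hand) performs: mapping each workload to the first compute unit that can accelerate it, with the CPU as a general-purpose fallback. The capability table, unit names, and layer types are purely illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical capability table: which operation types each compute
# unit can accelerate. Entirely illustrative.
CAPABILITIES = {
    "npu": {"conv2d", "matmul", "relu"},     # fixed-function NN acceleration
    "dsp": {"fft", "fir", "relu"},           # signal-processing kernels
    "gpu": {"conv2d", "matmul", "softmax"},  # flexible parallel compute
}

def assign_unit(op_type, preference=("npu", "dsp", "gpu")):
    """Pick the first unit (in preference order) that supports the op;
    fall back to the general-purpose CPU otherwise."""
    for unit in preference:
        if op_type in CAPABILITIES[unit]:
            return unit
    return "cpu"

def partition(model):
    """Map each (name, op_type) pair in a model to a compute unit."""
    return {name: assign_unit(op_type) for name, op_type in model}

# A toy pipeline mixing vision, signal, and classifier stages.
model = [("features", "conv2d"), ("act", "relu"),
         ("spectrum", "fft"), ("classifier", "softmax")]
print(partition(model))
```

Real toolchains weigh far more than a lookup table, including memory bandwidth, data-movement cost between units, quantization support, and power budgets, but the shape of the problem is the same: match each piece of work to the silicon best suited to it.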
If the processor vendors can do that, engineers can leverage those products for the hardware and focus instead on developing the software solutions, algorithms, and other code.
As we see again and again, the most essential part of design is to begin by identifying the specific problem you’re trying to solve and where the end market will be. Then you can (and should) decide what models you need, what you’re trying to visualize, what the critical data sets are, and what the rest of your software dependencies look like.
This is the significant change in embedded engineering strategy that’s being driven by the desire to incorporate AI. During development, you worry less about hardware metrics and needs and more about the software that will help you achieve your end goal.
It’s important to remember that AI for consumer-facing applications differs significantly from what industry needs from AI. Because AI makes it simpler to demonstrate better outcomes and efficiencies, industry is adopting it more rapidly in some areas, whether by replacing humans with robots in repetitive or dangerous work, or by applying machine vision to inventory tracking and other data-collection and analysis tasks.
If there’s one thing to take away from this two-parter, it’s that AI is a tool in the toolbox for engineers and developers, not the end goal. New products and solutions can’t be developed by asking, “How do we add AI?” Instead, focus on the end application and decide if AI tools can help get there more efficiently, just like with any other tool in the box.
Spend your time looking for reasons to use your new AI hammer, and you’ll end up breaking a window. Instead, focus on solving the problem, and you’ll end up with a lovely flowerbox.