
AMD fires back at Nvidia and details how to run native AI chatbots on Radeon and Ryzen

Published: 2024-03-11

Semiconductor World News: With Nvidia and Intel recently unveiling their locally running AI chatbots, AMD doesn't want to be left out and has released its own solution for owners of Ryzen processors and Radeon GPUs. In just five or six steps, users can start interacting with an AI chatbot that runs on local hardware rather than in the cloud, with no coding experience required.


AMD's guide requires users to have either a Ryzen AI-equipped chip or an RX 7000-series GPU. Today, Ryzen AI is only available on high-end Ryzen APUs based on Phoenix and Hawk Point silicon with Radeon 780M or 760M integrated graphics. That suggests the Ryzen 5 8600G is supported while the Ryzen 5 8500G may not be, except that the application itself lists only an AVX2-capable CPU as a requirement, meaning it should work (possibly very slowly) on a wide variety of processors.
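Since AVX2 support is the one hard requirement the application lists, it can be worth checking before downloading anything. A minimal, Linux-only sketch that reads the CPU flags from `/proc/cpuinfo` (Windows users would need a tool like CPU-Z instead):

```python
# Check whether the CPU advertises the AVX2 instruction set,
# the only hard requirement LM Studio lists for its standard build.
def has_avx2(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo output lists avx2."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return "avx2" in line.split(":", 1)[1].split()
    return False

try:
    with open("/proc/cpuinfo") as f:
        print("AVX2 supported:", has_avx2(f.read()))
except FileNotFoundError:
    print("Not Linux; check CPU flags with a tool such as CPU-Z instead.")
```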

Users will need to download and install LM Studio, which offers a ROCm build for RX 7000-series users; note that the standard package also works with Intel CPUs and Nvidia GPUs. Once LM Studio is installed and launched, simply search for the LLM you want, such as the chat-optimized Llama 2 7B. AMD recommends models labeled "Q4 K M", which refers to a specific 4-bit quantization level and other characteristics. Ryzen CPU users can start chatting with the bot at this point (it's unclear whether the NPU is being used, but we're guessing it isn't), while RX 7000-series GPU users will need to open the right-hand panel, manually enable GPU offload, and drag the offload slider to "Max."
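Beyond the chat GUI, LM Studio can also serve the loaded model through a local, OpenAI-compatible HTTP server (by default at `http://localhost:1234/v1`), which lets you script conversations. A minimal sketch of building a chat request for that endpoint, assuming the server is running and the default port is unchanged:

```python
import json

# Chat request in the OpenAI-compatible format LM Studio's local server accepts.
# URL and port are LM Studio defaults at the time of writing; check the app's
# Local Server tab if yours differ.
url = "http://localhost:1234/v1/chat/completions"
payload = {
    "model": "local-model",  # LM Studio serves whichever model is currently loaded
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Q4 K M quantization means."},
    ],
    "temperature": 0.7,
}
body = json.dumps(payload)

# To actually send the request (only works while LM Studio's server is running):
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# reply = json.loads(urllib.request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```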

AMD's tutorial means there's at least one official, easy-to-use way to run AI chatbots on consumer hardware from all of AMD, Intel, and Nvidia. As expected, Nvidia was the first to launch its Chat with RTX app, which naturally only runs on Nvidia GPUs. Chat with RTX is arguably the most fully featured solution, as it can analyze documents, videos, and other files. Support for Nvidia's chatbot extends back to the RTX 30-series, and RTX 20-series support may be on the table.

Meanwhile, Intel's AI solutions for its CPUs/NPUs and GPUs are more scattered. Instead of using an app to demonstrate a locally running AI chatbot, Intel showed how to write the code yourself in Python. While the code a user has to write isn't long, involving any coding at all is a hurdle for many potential users. Also, chat responses are displayed on the command line, which isn't exactly "cutting-edge AI." You can try LM Studio instead, though it doesn't appear to have Intel GPU or NPU support yet, so it will only use your CPU.

While AMD doesn't have its own AI chatbot app the way Nvidia does, it seems further along than Intel in terms of features, as there is at least ROCm GPU hardware support. AMD's next step could be to build its own version of Chat with RTX, or at least work with the developers of LM Studio to enable more features for AMD hardware. Perhaps we'll even see AI features integrated into the Radeon Adrenalin driver suite; AMD already does driver-level AI optimizations, and the suite regularly gains new features such as Fluid Motion Frames.