OpenAI is working with Broadcom and TSMC to develop a custom AI inference chip

OpenAI is reportedly working with Broadcom Inc. and Taiwan Semiconductor Manufacturing Co. Ltd. on creating a new artificial intelligence chip specifically designed to run AI models after they’ve been trained.

OpenAI’s decision comes after the company explored a number of options to diversify its chip supply and reduce costs. According to Reuters, OpenAI considered building everything in-house and raising capital for an expensive plan to build its own foundries, but has since abandoned that approach given the enormous cost and time required, focusing instead on internal chip design efforts.

Bloomberg reported that the in-house chip effort is not aimed at replacing GPUs like those provided by Nvidia Corp. Instead, the company is aiming to develop a specialized chip that will perform inference, the process of applying trained models to make predictions or decisions based on new data in real-time applications. The same report noted that investors and analysts expect the need for inference chips to only grow as more tech companies use AI models to solve increasingly complex problems.

Sources told Bloomberg that OpenAI’s decision to work with partners on custom chips was made because it was a faster and more achievable path for now. However, in the future the company may continue research into creating its own network of foundries.

OpenAI’s move to design its own chips is part of a broader trend among major tech and artificial intelligence firms to develop specialized hardware that can more efficiently handle the unique demands of artificial intelligence workloads than general-purpose GPUs.

Currently, OpenAI relies heavily on Nvidia GPUs to train its models, a process that requires enormous computing power to refine algorithms using huge data sets. Inference, by contrast, calls for chips optimized for speed and power efficiency rather than raw processing power.
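To illustrate the distinction, the minimal PyTorch sketch below contrasts a training step, which computes gradients and updates weights, with an inference call, which is a single forward pass with gradients disabled. The tiny model and random data are purely illustrative assumptions and do not reflect OpenAI's actual models or infrastructure.

```python
# Illustrative sketch only: contrasts training and inference workloads.
import torch
import torch.nn as nn

# A toy model standing in for a much larger neural network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Training step: forward pass, loss, backward pass and weight update.
# This is the compute-heavy phase typically run on large GPU clusters.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

# Inference: a single forward pass with gradients disabled.
# This is the phase a dedicated inference chip would be built to serve
# at low latency and low power.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
print(prediction)
```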

With chip costs continuing to rise and demand for AI-based services growing, as illustrated by Elon Musk’s xAI seeking to double its Colossus data center to 200,000 Nvidia H100 graphics processing units, creating custom chips will allow OpenAI to adapt its infrastructure to meet both its technical needs and budget constraints.

By partnering with Broadcom and TSMC, OpenAI can leverage their existing chip design and manufacturing expertise while moving faster than a completely in-house approach would allow.

Image: SiliconANGLE/Ideogram
