AMD to use Google Cloud to enhance chip design workloads

AMD will run electronic design automation (EDA) for its chip design workloads on Google Cloud, which is expected to further extend the on-premises capabilities of AMD data centers.
23 May 2022

AMD will run electronic design automation (EDA) for its chip design workloads on Google Cloud. (Photo by JUSTIN SULLIVAN / GETTY IMAGES NORTH AMERICA / Getty Images via AFP)

As one of the world’s biggest chipmakers, AMD continues to innovate its chip designs and capabilities to cater to more workloads and use cases. Over the last few months, the company has announced several milestones in the industry.

These include processor advancements that deliver better results for organizations using its products. AMD also recently announced a collaboration with Qualcomm to improve Wi-Fi connectivity on laptops by leveraging the 6 GHz band for next-gen laptop users.

When it comes to AMD data centers, the company has now announced a technology partnership with Google Cloud. AMD will run electronic design automation (EDA) for its chip-design workloads on Google Cloud, which is expected to further extend the on-premises capabilities of AMD data centers.

AMD will also leverage Google Cloud’s global networking, storage, and artificial intelligence and machine learning capabilities to further improve its hybrid and multi-cloud strategy for these EDA workloads.

Scale, elasticity, and efficient utilization of resources play critical roles in modern chip design, particularly as the demand for compute processing grows with each node advancement. To remain flexible and scale easily, AMD will add Google Cloud’s newest compute-optimized C2D VM instances, powered by 3rd Gen AMD EPYC processors, to its suite of resources for EDA workloads.
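The article does not detail how AMD actually provisions or orchestrates these machines. Purely as an illustrative sketch, a compute-optimized C2D instance (for example, the c2d-standard-112 shape) can be created with Google Cloud’s Python client library, google-cloud-compute, along these lines; the project ID, zone, image, and instance name below are placeholder assumptions:

```python
from google.cloud import compute_v1


def create_c2d_instance(project_id: str, zone: str, instance_name: str) -> None:
    """Create a compute-optimized C2D VM (3rd Gen AMD EPYC) in the given zone."""
    instance_client = compute_v1.InstancesClient()

    # Boot disk from a public image family (Debian here, purely illustrative).
    boot_disk = compute_v1.AttachedDisk()
    init_params = compute_v1.AttachedDiskInitializeParams()
    init_params.source_image = "projects/debian-cloud/global/images/family/debian-12"
    init_params.disk_size_gb = 100
    boot_disk.initialize_params = init_params
    boot_disk.auto_delete = True
    boot_disk.boot = True

    # Attach to the default VPC network.
    network_interface = compute_v1.NetworkInterface()
    network_interface.network = "global/networks/default"

    # Select the C2D machine type mentioned in the announcement.
    instance = compute_v1.Instance()
    instance.name = instance_name
    instance.machine_type = f"zones/{zone}/machineTypes/c2d-standard-112"
    instance.disks = [boot_disk]
    instance.network_interfaces = [network_interface]

    operation = instance_client.insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # Block until the create operation completes.
    print(f"Created {instance_name} in {zone}")


if __name__ == "__main__":
    # Placeholder values for illustration only.
    create_c2d_instance("my-eda-project", "us-central1-a", "eda-node-01")
```

In practice, large EDA farms would typically sit behind a job scheduler or managed instance groups rather than one-off VM creation; the snippet only shows how the C2D instance type referenced in the announcement is selected.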

By leveraging Google Cloud, AMD anticipates being able to run more designs in parallel, giving the team more flexibility to manage short-term compute demands without reducing allocation for long-term projects.

According to Sachin Gupta, the GM and VP of Infrastructure at Google Cloud, the speed, scale, and security of the cloud unlock much-needed flexibility in today’s semiconductor environment.

“We are pleased to provide the infrastructure required to meet AMD’s compute performance needs and equip the company with our AI solutions to continue designing innovative chips,” commented Gupta.

For Mydung Pham, corporate vice president of Silicon Design Engineering at AMD, leveraging Google Cloud C2D instances powered by 3rd Gen EPYC processors for AMD’s complex EDA workloads has helped the company’s engineering and IT teams tremendously.

“C2D has allowed us to be more flexible and provided a new avenue of high-performance resources that allows us to mix and match the right compute solution for our complex EDA workflows. We’re happy to work with Google Cloud to take advantage of their wealth of cloud features and the capabilities of 3rd Gen EPYC,” added Pham.

Through this multi-year technology partnership, Google Cloud and AMD will continue to explore new capabilities and innovations, while AMD will enjoy benefits such as:

  • Increased flexibility and choice to run applications in the most efficient manner possible
  • Improved design and operations from applied Google Cloud artificial intelligence and machine learning tools and frameworks
  • More transparency with costs and resource consumption
  • Greater agility and less vendor lock-in

For semiconductor companies like AMD, leveraging the cloud for chip design is an advantage, enabling them to research and develop chips at a much more agile pace.