Solutions to speed time to market for AI acceleration chips

7th February 2019
Source: Mentor
Posted By : Alex Lynn

Mentor, a Siemens business, has announced that artificial intelligence (AI) semiconductor innovator Graphcore successfully met its silicon test requirements and achieved rapid test bring-up on its Colossus Intelligence Processing Unit (IPU) using Mentor’s Tessent product family.

Graphcore’s recently announced Colossus IPU targets machine intelligence training and inference in datacenters. The device lowers the cost of accelerating AI applications in cloud and enterprise datacenters, while reportedly increasing the performance of both training and inference by up to one hundred times compared with today’s fastest systems.

Graphcore required a DFT solution that could reduce the cost and time challenges associated with testing the Colossus IPU’s novel architecture and exceptionally large design. Integrating 23.6 billion transistors and more than a thousand IPU cores, Colossus is one of the largest processors ever fabricated.

Mentor’s Tessent is a market-leading DFT solution that helps companies achieve higher test quality, lower test cost and faster yield ramps. Its register-transfer level (RTL)-based hierarchical DFT foundation features an array of technologies specifically suited to the implementation and pattern-generation challenges of AI chip architectures.

Graphcore used these capabilities, together with the Tessent SiliconInsight integrated silicon bring-up environment, to meet the Colossus IPU’s test requirements while minimising cycle time for DFT implementation, pattern generation, verification and silicon validation.

“We used Mentor’s fully automated Tessent platform for our series of initial silicon parts, together with an all-Mentor DFT flow, allowing us to ship fully tested and validated parts within the first week,” said Phil Horsfield, Vice President of Silicon at Graphcore. “We were able to have Logic BIST, ATPG and Memory BIST up and running in under three days. This was way ahead of schedule.”

Research firm IBS has estimated that AI-related applications consumed $65bn of processing technology last year, growing at an 11.5% annual rate and significantly outpacing other segments. This processing demand has until now been supplied by microprocessors not fully optimised for heavy AI workloads. To meet this growing demand while significantly lowering computational cost, more than 70 companies have announced plans to create new processing architectures based on massive parallelism and specialised for AI workloads.

Brady Benware, Senior Marketing Director for the Tessent product family at Mentor, said: “Hardware acceleration for AI is now a very competitive and rapidly evolving market. As a result, fast time to market is a leading concern for this segment. Companies participating in this market are choosing Tessent because its RTL-based hierarchical DFT approach provides extremely efficient test implementation for massively parallel architectures, and Tessent’s SiliconInsight debug and characterisation capabilities eliminate costly delays during silicon bring-up.”


