Addressing the Ever-evolving Test Challenges with System Level Test

By Ron Leckie (INFRASTRUCTURE Advisors) and Mark Roos (Roos Instruments)

2017 CAST Workshop, “Testing Smarter,” is May 4-5

Ever since Moore’s law enabled the creation of complex systems on a chip (SOCs), guaranteeing quality with traditional test methods has been a problem. Parts that tested good were found not to work once installed in their target systems. To address this, System-Level Test emerged as a way to test parts in the target environment. While effective, the lack of standards, the differences between target systems and the need for custom hardware have made it one of the least-liked test methods.

Various forms of System-Level Test (SLT) have been used by major chipmakers over the years to ensure that components will be fully operational when they are ultimately inserted into the final system. SLT was pioneered in the 1990s by processor manufacturers such as Intel and AMD, who found that not all of the good parts passing their production ATE platforms would “boot up” and operate when inserted into a system motherboard. As a result, rows of motherboards were installed in factories to further screen outgoing shipments with a “boot-up” system test, applied to 100% of parts initially and then reduced to a sampling plan as production matured. SLT has since evolved and has been adopted by other chipmakers and system makers whose high-volume production parts can support these custom test solutions. However, there are several questions to consider…

  • How much does it really cost to develop and support, in-house, a custom, focused system-level tester for each new product? Is it worth the cost, and is it necessary?
  • Off-the-shelf ATE platforms are available for checking functional, at-speed and structural fault coverage, and they come with reconfigurable features and support that let them address a range of product types. Are there SLT capabilities or features that could be built into commercial ATE platforms to expand their ability to trap SLT-level defects?
  • What data needs to be extracted from SLT-level defects in order to improve the design and manufacturing processes?
  • Where is the break-even volume that justifies SLT development? Are there SLT solutions for lower-volume complex parts?
  • Will SLT eliminate the need for ATE, or can ATE be enhanced to eliminate the need for SLT?
  • How will OSATs manage SLT systems from the perspective of utilization, support and service?

SOCs are not getting any simpler, and SLT is becoming more of a necessity, so SEMI’s Collaborative Alliance for Semiconductor Test (CAST) Group feels that it is time to look into the feasibility of creating standards to improve the efficiency of SLT.

The 2017 CAST Workshop, “Testing Smarter,” to be held on May 4-5 in Silicon Valley, will explore the topics of SLT and enabling Big Data at Test. Participants will hear from early adopters of SLT, including AMD, Astronics, Cisco, Nvidia and PDF Solutions, about their experiences with SLT and the challenges they see ahead for its implementation. These presentations will be followed by an active panel discussion with all workshop attendees. On the related topic of Big Data at Test, we will hear updates from CAST’s two active working groups, which are defining new and more capable methods and standards to improve data transfer, control and management from traditional functional and structural testers as well as from SLT systems. Case studies on both RITdb and TEMS will be shared. Visit the CAST workshop webpage to learn more and to register.

Global Update
SEMI
www.semi.org
April 25, 2017