Chip Design


A community for the discussion of all things related to the creation (not the usage!) of integrated circuits, at both the circuit and process level.


1

cross-posted from: https://lemmy.world/post/16224208

Intel has said a lot about Lion Cove lately. I don't want to read a long article about it; I just want a comparison between it and Redwood Cove. So I'll just share that it's about 10%–18% better. Let's wait for Lunar Lake 💻 and see how it performs in real programs. If what Intel says is true, props to them for continuously improving x86-architecture chips.

9

As formal verification becomes more common in the industry, design complexity continues to be a challenge. The article argues that this is a byproduct of a design-centric approach (optimizing area, power, and speed) that does not consider verifiability. A verification-centric approach, driven by polynomial formal verification analysis, can produce verifiable designs.

Abstract: Recently, a lot of effort has been put into developing formal verification approaches by both academic and industrial research. In practice, these techniques often give satisfying results for some types of circuits, while they fail for others. A major challenge in this domain is that the verification techniques suffer from unpredictability in their performance. The only way to overcome this challenge is the calculation of bounds for the space and time complexities. If a verification method has polynomial space and time complexities, scalability can be guaranteed. In this tutorial paper, we review recent developments in formal verification techniques and give a comprehensive overview of Polynomial Formal Verification (PFV). In PFV, polynomial upper bounds for the run-time and memory needed during the entire verification task hold. Thus, correctness under resource constraints can be ensured. We discuss the importance and advantages of PFV in the design flow. Formal methods on the bit-level and the word-level, and their complexities when used to verify different types of circuits, like adders, multipliers, or ALUs are presented. The current status of this new research field and directions for future work are discussed.
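In symbols (my paraphrase, not a quote from the paper): a method achieves PFV for a class of circuits if there is a fixed exponent k such that, for an instance of size n,

    T_verify(n) = O(n^k)   and   M_verify(n) = O(n^k),

where T is run time and M is memory. The classic counterexample is a BDD-based check of an integer multiplier, whose memory requirement grows exponentially with the operand width, which is exactly the kind of unpredictability the paper wants to rule out.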

10

One way to help alleviate the effects of the talent shortage is changing how semiconductors are designed so that organizations can achieve more with their existing workforce. This requires moving away from project-centric design and transitioning to an IP-centric design methodology.

Over the past few years, teams have moved from building relatively self-contained, isolated designs to creating complex platforms across dispersed and integrated design centers. Larger design footprints, a more comprehensive array of products, and quicker time to market are other factors contributing to the move away from a project-based design methodology.

11

For battery-operated devices, the energy consumption for chip production far exceeds the lifetime energy consumption of the chips themselves. So, if we want to save energy, we’d better focus on the manufacturing process, argues Bram Nauta.

12

So how can universities train students for a continuously and rapidly changing technology? This is especially difficult because it involves both software and hardware, as well as more domain-specific and increasingly heterogeneous architectures. And regardless of whether these devices run on a battery or are plugged into a socket, they need to be much more energy-efficient. Given the slowdown in Moore's Law and the shrinking power, performance, and area/cost benefits of scaling, that often requires a mix of computer science, electrical engineering, and, in packaging, an increasing amount of mechanical engineering.

“Mechanical engineers, electrical engineers, those disciplinary trainings through those curriculums, they’re accredited and we have a very vigorous process that will continue. But these smaller, bite-sized chunks of curriculum will allow a student to broaden. So as a mechanical engineer, I may not necessarily have either capacity in my studies, or the depth of interest, to take an entire course on heterogeneous integration. But I might be very open to a smaller, bite-sized piece that’s looking at the thermal properties of packaging and new effects occurring because of things like heterogeneous integration. And that is going to be very important for us to be more nimble, to get these things done more quickly.

“You could hire somebody who has a background in electrical engineering or computer engineering, where they understand the low-level hardware and how to build embedded systems and how to develop them, but they don’t usually have a background in securing them,” said Dan Walters, principal embedded security engineer and lead for microelectronics solutions at MITRE. “Or you could look at students with more of a focus in security and cybersecurity. Those typically are computer science degrees. And some universities have computer or cybersecurity degrees, but that’s really software-heavy. Those students don’t understand embedded systems and the unique things that come along with that. What we essentially did was hire from one of those two groups and say, ‘Okay, we’re going to do on-the-job training for the other 50% that you’re missing.'”

14

Engineers at Princeton extended AutoSVA with GPT4 to generate SVA (SystemVerilog Assertions) from buggy RTL and a description of the intended functionality. SVA is widely used to verify digital designs for ASICs and FPGAs. AutoSVA2, which extends the open-source AutoSVA framework, improves the flow for generating SVA from English descriptions. The GPT4 prompts were refined over multiple iterations so that it produces SVA with correct syntax, something GPT fails to do out of the box. The authors argue that GPT's "creativity" lets it write correct assertions even from buggy RTL. They later used the tool to write RTL from scratch as well: RTL written by GPT was tested against the SVA generated by the tool, SVA corrected by an engineer was fed back to the LLM, and after a few iterations this produced a functionally correct FIFO queue.

Abstract—Formal property verification (FPV) has existed for decades and has been shown to be effective at finding intricate RTL bugs. However, formal properties, such as those written as SystemVerilog Assertions (SVA), are time-consuming and error-prone to write, even for experienced users. Prior work has attempted to lighten this burden by raising the abstraction level so that SVA is generated from high-level specifications. However, this does not eliminate the manual effort of reasoning and writing about the detailed hardware behavior. Motivated by the increased need for FPV in the era of heterogeneous hardware and the advances in large language models (LLMs), we set out to explore whether LLMs can capture RTL behavior and generate correct SVA properties. First, we design an FPV-based evaluation framework that measures the correctness and completeness of SVA. Then, we evaluate GPT4 iteratively to craft the set of syntax and semantic rules needed to prompt it toward creating better SVA. We extend the open-source AutoSVA framework by integrating our improved GPT4-based flow to generate safety properties, in addition to facilitating their existing flow for liveness properties. Lastly, our use cases evaluate (1) the FPV coverage of GPT4-generated SVA on complex open-source RTL and (2) using generated SVA to prompt GPT4 to create RTL from scratch. Through these experiments, we find that GPT4 can generate correct SVA even for flawed RTL—without mirroring design errors. Particularly, it generated SVA that exposed a bug in the RISC-V CVA6 core that eluded the prior work’s evaluation.
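For readers unfamiliar with SVA, below is a minimal hand-written sketch of the kind of safety properties such a flow emits for a FIFO; the interface names (clk, rst_n, push, pop, full, empty, count) are hypothetical and not taken from the paper. In practice a checker module like this is attached to the design with a bind statement and handed to an FPV tool.

    module fifo_sva #(parameter DEPTH = 8) (
      input logic                   clk,
      input logic                   rst_n,
      input logic                   push,
      input logic                   pop,
      input logic                   full,
      input logic                   empty,
      input logic [$clog2(DEPTH):0] count
    );
      // Safety: a push while full is only legal if a pop frees an entry.
      as_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
        (full && push) |-> pop);

      // Safety: the occupancy counter increments on a push-only cycle.
      as_count_inc: assert property (@(posedge clk) disable iff (!rst_n)
        (push && !pop && !full) |=> (count == $past(count) + 1));

      // Safety: the FIFO is not empty in the cycle after a push.
      as_push_not_empty: assert property (@(posedge clk) disable iff (!rst_n)
        push |=> !empty);
    endmodule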

15

One of the biggest shortcomings of silicon is that it can only be made so thin because its material properties are fundamentally limited to three dimensions [3D]. For this reason, two-dimensional [2D] semiconductors—so thin as to have almost no height—have become an object of interest to scientists, engineers and microelectronics manufacturers.

Thinner chip components would provide greater control and precision over the flow of electricity in a device, while lowering the amount of energy required to power it. A 2D semiconductor would also contribute to keeping the surface area of a chip to a minimum, lying in a thin film atop a supporting silicon device.

But until recently, attempts to create such a material have been unsuccessful.

Now, researchers at the University of Pennsylvania School of Engineering and Applied Science have grown a high-performing 2D semiconductor to a full-size, industrial-scale wafer. In addition, the semiconductor material, indium selenide (InSe), can be deposited at temperatures low enough to integrate with a silicon chip.

"For the purposes of an advanced computing technology, the chemical structure of 2D InSe needs to be exactly 50:50 between the two elements. The resulting material needs a uniform chemical structure over a large area to work," says Song.

The team achieved this groundbreaking purity using a growth technique called "vertical metal-organic chemical vapor deposition" (MOCVD). Previous research had attempted to introduce the indium and selenium in equal quantities and at the same time. Song demonstrated, however, that this method was the source of undesirable chemical structures in the material, producing molecules with varying ratios of each element. MOCVD, by contrast, works by sending the indium in a continuous stream while introducing the selenium in pulses.

16

Compared with traditional monolithic devices, the design and manufacturing process for chiplets is significantly different. The scrap costs associated with manufacturing traditional monolithic semiconductor devices are basically linear, comprising the single-chip cost plus packaging and assembly costs.

Manufacturing processes for 2.5D/3D designs differ significantly in terms of the accumulation of scrap costs. Specifically, these costs increase geometrically from fabrication to assembly driven by scrap costs for multiple dies, multi-chip partial assemblies, and/or full 2.5D/3D packages.
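A hedged back-of-the-envelope view of that geometric accumulation (the notation is mine, not the article's): if a 2.5D/3D package assembles m dies, and die i is known good with probability y_i, then the assembled package is scrapped with probability 1 − (y_1 · y_2 · … · y_m), and what gets written off is not one die but the sum of all m die costs plus the interposer and assembly cost. Expected scrap cost therefore grows with both the number of dies and the value already accumulated in the partial assembly, which is why known-good-die screening matters far more than in the monolithic case.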

Shifting tests left or right in the test process is a strategy to achieve these quality and cost goals and to minimize the overall manufacturing cost of 2.5D/3D components. Shift left means increasing test coverage earlier in the manufacturing process (e.g., during wafer inspection and partial packaging) to maximize known good die (KGD) while reducing future packaging costs. Additional tests can also be added to the process to identify new failure types or failure modes.

However, the benefits of shift left need to be weighed. For example, increasing test intensity early in the manufacturing process can positively impact known good devices but it can also lead to an increase in test costs that is not sufficiently offset by the optimizations, even after accounting for the resulting reduction in scrap costs.
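That trade-off can be written as a simple break-even condition (an illustration, not a formula from the article): adding a wafer-level test is worthwhile only if its extra cost per die, ΔC_test, is less than p · (C_die + C_assembly + C_downstream_test), where p is the probability that the new test catches a defect which would otherwise survive into assembly. When p or the downstream value at risk is small, the added wafer test simply inflates cost.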

Shift right means increasing test coverage later in the manufacturing process, expanding the ability to detect defects, and maintaining quality levels with the goal of reducing costs with higher parallelism testing.

Typically, a test item with a higher yield on wafer or mission-pattern tests, or a high-yield test that requires a longer scan test time, is an ideal candidate for shifting right. These tests can be moved to final or system-level test, or flexibly managed in between.

The goal of shifting tests to the left or right is to achieve the optimal combination of quality and yield throughout the entire manufacturing process, ultimately optimizing the overall cost of quality.

17

Many volume applications use FPGAs because they need in-field reconfigurability (changing standards, changing algorithms, etc.), but they want to improve their system's competitiveness (power, size, cost). FPGAs are bulky, expensive, and power-hungry. Integrating eFPGA can greatly improve the economics while maintaining full reconfigurability and performance.

We've found with customers that a significant portion of the LUTs in their designs don't change across reconfigurations: they are fixed buses that bring data to and from the reconfigurable core. This logic can be hardwired, so the number of LUTs needed in the SoC is typically half of what's in the standalone FPGA. There is also a lot of cost in the voltage regulators an FPGA requires, which disappears with integration.
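As a rough illustration of that partition (all module and signal names here are hypothetical), the fixed data-movement glue can stay in standard cells while only the logic that actually changes in the field is mapped into the eFPGA fabric:

    // Hardwired SoC glue: this staging logic never changes across
    // reconfigurations, so it is synthesized to standard cells instead
    // of consuming eFPGA LUTs.
    module accel_wrapper (
      input  logic        clk,
      input  logic        rst_n,
      input  logic        req_valid,
      input  logic [63:0] req_data,
      output logic        rsp_valid,
      output logic [63:0] rsp_data
    );
      logic        core_in_valid, core_out_valid;
      logic [63:0] core_in_data,  core_out_data;

      // Fixed input/output registers (hardwired bus interface).
      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          core_in_valid <= 1'b0;
          rsp_valid     <= 1'b0;
        end else begin
          core_in_valid <= req_valid;
          core_in_data  <= req_data;
          rsp_valid     <= core_out_valid;
          rsp_data      <= core_out_data;
        end
      end

      // Only this instance sits in the reconfigurable eFPGA fabric.
      efpga_core u_efpga (
        .clk(clk), .rst_n(rst_n),
        .in_valid(core_in_valid), .in_data(core_in_data),
        .out_valid(core_out_valid), .out_data(core_out_data)
      );
    endmodule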

Typically, the cost of eFPGA is 1/10th that of the FPGA it replaces, with the same speed and programmability. Power can also be cut to 1/10th, because most of the power in an FPGA is consumed by power-hungry PHYs that are largely unnecessary when the eFPGA sits inside the SoC.

18

https://semiengineering.com/challenges-in-ramping-new-manufacturing-processes/

Despite the slowdown of Moore's Law, new manufacturing processes are rolling out faster than ever before. The challenge now is to decrease time to yield, which involves everything from TCAD and design technology co-optimization to refinement of power, performance, and area/cost, as well as process control and analytics. Srinivas Raghvendra, vice president of engineering at Synopsys, talks about the various steps involved in determining what can be printed on a wafer, how to reduce defect density, and what other concerns need to be addressed to ramp a new process.

19

The digital RAKs provide Arm Neoverse V2 designers with several key benefits. For example, the Cadence Cerebrus AI capabilities automate and scale digital chip design, delivering better PPA and improving designer productivity. Cadence iSpatial technology provides an integrated and predictable implementation flow for faster design closure. The RAKs also include a smart hierarchy flow that delivers optimal turnaround times on large, high-performance CPUs. The Tempus ECO technology offers signoff-accurate final design closure based on path-based analysis. Finally, the RAKs incorporate the GigaOpt activity-aware power optimization engine to significantly reduce dynamic power consumption.

21

Are you an engineer working on designing complex modern chips or systems-on-chip (SoCs) at the Register Transfer Level (RTL)? Have you ever been in one of the following frustrating situations?

• Your RTL designs suffered a major (and expensive) bug escape due to insufficient coverage of corner cases during simulation testing.

• You created a new RTL module and want to see its real flows in simulation, but realize this will take another few weeks of testbench development work.

• You tweaked a piece of RTL to aid synthesis or timing and need to spend weeks simulating to make sure you did not actually change its functionality.

• You are in the late stages of validating a design, and the continuing stream of new bugs makes it clear that your randomized simulations are just not providing proper coverage.

• You modified the control register specification for your design and need to spend lots of time simulating to make sure your changes to the RTL correctly implement these registers.

If so, congratulations: you have picked up the right book! Each of these situations can be addressed using formal verification (FV) to significantly increase both your overall productivity and your confidence in your results. You will achieve this by using formal mathematical tools to create orders-of-magnitude increases in efficiency and productivity, as well as introducing mathematical near-certainty into areas previously dependent on informal testing.

Design verification has always been essential to chip design. However, as chip complexity has grown over the years, the state space and the required verification effort have exploded exponentially. With powerful and commercially accessible tools now available, formal verification has become more viable, and even unavoidable, for reliable sign-off and for catching bugs early in the process. I found this book a very helpful introduction to formal verification. It explains how formal can be utilized; covers different methods like formal property verification (FPV) and sequential equivalence checking (SEC) and where they are useful; and discusses limitations, complexity problems, and how to mitigate the issues that come with formal. It explains how formal and functional verification can complement each other for a combined sign-off. It explains theoretical concepts with clear examples and diagrams. It also covers formal algorithms for anyone interested, but the focus is more on how to utilize formal in your projects. And if you are a total beginner, don't worry: there is a section that explains the essentials of SystemVerilog Assertions (SVA), which you can skip entirely if you already know them.
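To make the SEC use case from the review concrete, here is a minimal sketch of an equivalence miter; the module names accumulator_ref and accumulator_opt are hypothetical stand-ins for the original RTL and a timing-tweaked rewrite. Both see identical stimulus, and a single assertion requires their outputs to match every cycle, which a formal tool can then prove or refute exhaustively instead of relying on weeks of simulation.

    // SEC miter: drive both versions identically; any divergence is a bug.
    module sec_miter (
      input logic       clk,
      input logic       rst_n,
      input logic       in_valid,
      input logic [7:0] in_data
    );
      logic [7:0] out_ref, out_opt;

      accumulator_ref u_ref (.clk(clk), .rst_n(rst_n), .in_valid(in_valid),
                             .in_data(in_data), .out_data(out_ref));
      accumulator_opt u_opt (.clk(clk), .rst_n(rst_n), .in_valid(in_valid),
                             .in_data(in_data), .out_data(out_opt));

      // Outputs must agree in every cycle after reset.
      as_equivalent: assert property (@(posedge clk) disable iff (!rst_n)
        out_ref == out_opt);
    endmodule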

22

cross-posted from: https://lemm.ee/post/4443753

In the past 10 years or so, tech specialists have repeatedly voiced concerns that the progress of computing power will soon hit the wall. Miniaturisation has physical limits, and then what? Have we reached these limits? Is Moore’s law dead? That’s what we’ll talk about today.

  • 00:00 Intro
  • 00:53 Moore’s Law And Its Demise
  • 06:23 Current Strategies
  • 13:14 New Materials
  • 15:50 New Hardware
  • 18:58 Summary

24

The German Federal Ministry of Education and Research has announced that it will fund the development of an open-source chip design ecosystem. This also includes design software.

25

Various talks about chip design with open-source CAD tools and open-source hardware (FPGA, ASIC).
