hardware26

joined 1 year ago
[–] [email protected] 2 points 1 month ago

One-time programmable (memory)

[–] [email protected] 1 points 4 months ago (1 children)
[–] [email protected] 6 points 4 months ago

If you are working in a decent workplace, you will receive lots of feedback on your code and what you do. Don't take it personally; learn from it. Sometimes there are multiple correct answers and yours can be one of them, but each workplace, project, and senior colleague has their own concerns and priorities. Sometimes feedback seems to be about a trivial, mundane detail, and sometimes it really is. If you think it is valuable feedback, learn. If you disagree, discuss. Enjoy!

 

As formal verification becomes more common in the industry, design complexity continues to be a challenge. The article argues that this is a byproduct of a design-centric approach (optimizing for area, power, and speed) without considering verifiability, and that a verification-centric approach, driven by polynomial formal verification analysis, can produce verifiable designs.

Abstract: Recently, a lot of effort has been put into developing formal verification approaches by both academic and industrial research. In practice, these techniques often give satisfying results for some types of circuits, while they fail for others. A major challenge in this domain is that the verification techniques suffer from unpredictability in their performance. The only way to overcome this challenge is the calculation of bounds for the space and time complexities. If a verification method has polynomial space and time complexities, scalability can be guaranteed. In this tutorial paper, we review recent developments in formal verification techniques and give a comprehensive overview of Polynomial Formal Verification (PFV). In PFV, polynomial upper bounds for the run-time and memory needed during the entire verification task hold. Thus, correctness under resource constraints can be ensured. We discuss the importance and advantages of PFV in the design flow. Formal methods on the bit-level and the word-level, and their complexities when used to verify different types of circuits, like adders, multipliers, or ALUs are presented. The current status of this new research field and directions for future work are discussed.
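As a toy illustration of the idea (mine, not the paper's method): a ripple-carry adder can be verified with effort linear in the bit-width by exhaustively checking one full-adder cell and composing by induction, whereas black-box simulation of the whole adder needs exponentially many input vectors. A sketch in Python:

```python
from itertools import product

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# Base case: exhaustively check the single cell. Only 8 input
# combinations, independent of the adder width.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert s + 2 * cout == a + b + cin

# Inductive step (informal): an n-bit ripple-carry adder is n verified
# cells chained through the carry, so total verification effort grows
# linearly in n -- a polynomial bound. Exhaustive simulation of the whole
# adder would instead need 2**(2*n) input pairs.
print("cell verified exhaustively; adder correct by induction over width")
```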

 

cross-posted from: https://discuss.tchncs.de/post/8824219

One way to help alleviate the effects of the talent shortage is changing how semiconductors are designed so that organizations can achieve more with their existing workforce. This requires moving away from project-centric design and transitioning to an IP-centric design methodology.

Over the past few years, teams have moved from building relatively self-contained, isolated designs to creating complex platforms across dispersed and integrated design centers. Larger design footprints, a more comprehensive array of products and quicker time to market are other contributing factors to walking away from a project-based design methodology.

 


For battery-operated devices, the energy consumption for chip production far exceeds the lifetime energy consumption of the chips themselves. So, if we want to save energy, we’d better focus on the manufacturing process, argues Bram Nauta.

[–] [email protected] 3 points 6 months ago

As you said, before power-on the capacitor is discharged. Right after power-on the capacitor is still discharged, so the voltage across it is zero and the reset pin sits at Vcc. Over time the capacitor charges, the voltage across it increases, and the reset voltage moves closer and closer to ground, until it reaches ground. But it is important to consider what happens at power-down too. At power-down the capacitor is charged. If the power source becomes high-impedance at power-down, the reset pin will probably fall to zero eventually, though how long that takes depends on what the source actually does. But if the power source is connected to zero at power-down, the reset pin will see minus Vcc and slowly rise back to 0. If the reset pin is sensitive, it may be a good idea to protect it with a diode.
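A minimal sketch of the behavior described above, assuming the usual high-active reset wiring (capacitor from Vcc to the reset pin, resistor from the reset pin to ground); the component values are made up for illustration:

```python
import math

VCC = 5.0    # supply voltage (assumed)
R = 10e3     # pulldown resistor in ohms (assumed)
C = 1e-6     # reset capacitor in farads (assumed)
TAU = R * C  # time constant: 10 ms with these values

def v_reset_power_on(t):
    # Capacitor starts discharged, so the pin starts at Vcc and decays to 0.
    return VCC * math.exp(-t / TAU)

def v_reset_power_down(t):
    # Capacitor starts charged; if Vcc is pulled hard to 0, the pin starts
    # at -Vcc and recovers toward 0. This negative swing is what a
    # protection diode would clamp.
    return -VCC * math.exp(-t / TAU)

for t_ms in (0, 5, 10, 20, 50):
    t = t_ms / 1000.0
    print(f"{t_ms:3d} ms: power-on {v_reset_power_on(t):+.3f} V, "
          f"power-down {v_reset_power_down(t):+.3f} V")
```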

[–] [email protected] 17 points 6 months ago (2 children)

If you knew about the birds and the bees, you would know that this wasn't random.

[–] [email protected] 4 points 6 months ago

Immerse yourself into technology. Become the screen.

[–] [email protected] 2 points 6 months ago (1 children)

I don't know what it is. It just reminded me of ATMEL8051 and I wanted to share.

[–] [email protected] 6 points 6 months ago* (last edited 6 months ago) (4 children)

I used the Atmel 8051 in college. It fits nicely on a breadboard and teaches you how to use assembly and work wonders with 512 bytes (yes, bytes) of RAM, if I remember the number correctly. I think half of that RAM was even reserved.

[–] [email protected] 5 points 7 months ago

To be fair, 10^(0.000000000000000000001x) is also exponential growth. And if the status quo is x=0 and removing the entire management means x=10, then even the maximum we can get is very little improvement. It can be "exponential" and still not amount to much.
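A quick numeric check of this point (the numbers are the illustrative ones above, not measurements):

```python
import math

rate, x_max = 1e-21, 10  # the exponent coefficient and "max progress" above

# 10**(rate * x_max) - 1, computed without losing the tiny excess to rounding.
excess = math.expm1(math.log(10) * rate * x_max)
print(excess)  # ~2.3e-20: the total "exponential" gain over the status quo
```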

[–] [email protected] 21 points 7 months ago (2 children)

"Exponentially" is not synonymous to "a lot". Exponent is a mathematical term and exponential growth requires at least two variables exponentially related to each other. For this to be possibly exponential growth a) progress should be quantifiable (removing management and treating workers well should be quantized somehow) b) performance should be quantifiable and measured at a bunch of progress points (if you have only two measurements it can as well be linear) c) performance should be or can be modeled as a an exponential function of progress in removing management and treating workers well.

[–] [email protected] 5 points 7 months ago (1 children)

I wish we had an active aoe2 community.

 

I've gotten this question multiple times while introducing myself. It sounds a bit odd, and I don't think they are really interested in the origin of my name. Is this a politically correct way of asking about my ethnic origin? I guess "Where are you from?" wouldn't work for everyone, since there are many born-and-raised British people with foreign names and ethnic origins.

 

cross-posted from: https://discuss.tchncs.de/post/4827653

So how can universities train students for a continuous and rapidly changing technology? This is especially difficult because it involves both software and hardware, and more domain-specific and increasingly heterogeneous architectures. And regardless of whether these devices are tethered to a battery or plugged into a socket, they need to be much more energy-efficient. Given the slowdown in Moore’s Law and the shrinking power, performance and area/cost benefits of scaling, that often requires a mix of computer science, electrical engineering, and in packages, an increasing amount of mechanical engineering.

“Mechanical engineers, electrical engineers, those disciplinary trainings through those curriculums, they’re accredited and we have a very vigorous process that will continue. But these smaller, bite-sized chunks of curriculum will allow a student to broaden. So as a mechanical engineer, I may not necessarily have either capacity in my studies, or the depth of interest, to take an entire course on heterogeneous integration. But I might be very open to a smaller, bite-sized piece that’s looking at the thermal properties of packaging and new effects occurring because of things like heterogeneous integration. And that is going to be very important for us to be more nimble, to get these things done more quickly.

“You could hire somebody who has a background in electrical engineering or computer engineering, where they understand the low-level hardware and how to build embedded systems and how to develop them, but they don’t usually have a background in securing them,” said Dan Walters, principal embedded security engineer and lead for microelectronics solutions at MITRE. “Or you could look at students with more of a focus in security and cybersecurity. Those typically are computer science degrees. And some universities have computer or cybersecurity degrees, but that’s really software-heavy. Those students don’t understand embedded systems and the unique things that come along with that. What we essentially did was hire from one of those two groups and say, ‘Okay, we’re going to do on-the-job training for the other 50% that you’re missing.'”

 


Exascale is the next frontier in computing power, where systems are built to carry out extremely complex functions with increased speed and precision. This in turn enables researchers to accelerate their work into some of the most pressing challenges we face, including the development of new drugs, and advances in nuclear fusion to produce potentially limitless clean low-carbon energy.

The exascale system hosted at the University of Edinburgh will be able to carry out these complicated workloads while also supporting critical research into AI safety and development, as the UK seeks to safely harness its potential to improve lives across the country.

 

I sleep with my wife and (when she graces us with her presence) our cat. Last night I caught myself syncing my breath to theirs while sleeping, or half-sleeping, considering I was aware of what was happening. Eventually their breathing went out of sync with each other, my breathing got confused, and after a very brief moment of suffocation I realized that I have no obligation to sync my breath, took control of my breathing, and started breathing normally. It felt strange to me, but I googled it, and it looks like syncing your breath happens to people. Does it happen to you as well?

PS: I realized while typing that I don't know whether I should be able to hear my 3 kg cat's breathing. I should check on that.

 

cross-posted from: https://discuss.tchncs.de/post/3979328

Engineers at Princeton extended AutoSVA and iteratively prompted GPT4 to generate SVA (SystemVerilog Assertions) from buggy RTL and a description of the intended functionality. SVA is widely used to verify digital designs for ASICs and FPGAs. AutoSVA2, which extends the open-source AutoSVA, improves the flow that generates SVA from an English description. The prompt was refined over multiple iterations until GPT4 produced SVA with correct syntax, which is something it fails to do out of the box. The authors argue that GPT4's "creativity" allows it to write correct assertions even from buggy RTL. The authors later used this tool to write RTL from scratch as well: RTL written by GPT4 was tested against the SVA generated by the tool, SVA corrected by an engineer was fed back to the LLM, and after a few iterations it produced a functionally correct FIFO queue.

Abstract—Formal property verification (FPV) has existed for decades and has been shown to be effective at finding intricate RTL bugs. However, formal properties, such as those written as SystemVerilog Assertions (SVA), are time-consuming and error-prone to write, even for experienced users. Prior work has attempted to lighten this burden by raising the abstraction level so that SVA is generated from high-level specifications. However, this does not eliminate the manual effort of reasoning and writing about the detailed hardware behavior. Motivated by the increased need for FPV in the era of heterogeneous hardware and the advances in large language models (LLMs), we set out to explore whether LLMs can capture RTL behavior and generate correct SVA properties. First, we design an FPV-based evaluation framework that measures the correctness and completeness of SVA. Then, we evaluate GPT4 iteratively to craft the set of syntax and semantic rules needed to prompt it toward creating better SVA. We extend the open-source AutoSVA framework by integrating our improved GPT4-based flow to generate safety properties, in addition to facilitating their existing flow for liveness properties. Lastly, our use cases evaluate (1) the FPV coverage of GPT4-generated SVA on complex open-source RTL and (2) using generated SVA to prompt GPT4 to create RTL from scratch. Through these experiments, we find that GPT4 can generate correct SVA even for flawed RTL—without mirroring design errors. Particularly, it generated SVA that exposed a bug in the RISC-V CVA6 core that eluded the prior work’s evaluation.
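For a sense of what such a refinement loop could look like, here is a hypothetical Python sketch; the helpers (llm, fpv_run, and the result fields) are stand-ins for the actual GPT4 API call and formal-tool invocation, not AutoSVA2's real API:

```python
def refine_sva(rtl, spec, llm, fpv_run, max_iters=5):
    """Ask an LLM for SVA, check it with an FPV tool, iterate on failures.

    llm and fpv_run are placeholders for the GPT4 call and the formal tool
    run; the result fields used below are invented for illustration.
    """
    feedback = ""
    for _ in range(max_iters):
        prompt = (
            "Write SystemVerilog Assertions for this RTL and spec.\n"
            f"RTL:\n{rtl}\nSpec:\n{spec}\n"
            f"Previous tool feedback:\n{feedback}"
        )
        sva = llm(prompt)            # generate candidate assertions
        result = fpv_run(rtl, sva)   # compile and model-check them
        if result.syntax_ok and result.all_proven:
            return sva               # converged on correct SVA
        feedback = result.log        # syntax errors or counterexamples
    raise RuntimeError("no correct SVA after max_iters refinements")
```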

 
