Grofit

joined 1 year ago
[–] Grofit 2 points 2 days ago

There have been some decent results historically with checkerboard and other separated reconstruction techniques. Nvidia was working on some new checkerboard approaches before they killed off SLI.
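As a rough illustration of the idea only (not how any real driver implements it), a checkerboard split just assigns alternating tiles of each frame to one of the two GPUs, so every tile's neighbours are rendered by the other card:

```python
# Toy sketch of a checkerboard split: the frame is divided into tiles and
# alternating tiles go to GPU 0 or GPU 1. Illustration only.

def checkerboard_assignment(tiles_x, tiles_y):
    """Return a dict mapping (tx, ty) tile coords to a GPU index (0 or 1)."""
    return {
        (tx, ty): (tx + ty) % 2
        for ty in range(tiles_y)
        for tx in range(tiles_x)
    }

assignment = checkerboard_assignment(4, 4)
# Each GPU renders half the tiles; adjacent tiles always belong to the
# other GPU, which is what makes reconstructing the full frame feasible.
gpu0_tiles = [t for t, gpu in assignment.items() if gpu == 0]
gpu1_tiles = [t for t, gpu in assignment.items() if gpu == 1]
```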

A decade or two ago most people I knew had dual GPUs; it was quite common for gamers, and while you were not getting full utilisation out of both cards it was enough to be noticeable, and the prices were not mega bucks back then.

On your point of buying 1 card vs many, I get that, but it seems like we are reaching some limitations with monolithic dies. Shrinking chips seems to be getting far harder and costly, so to keep performance moving forward we are now jacking up the power throughput etc.

Anyway, the point I'm trying to make is that it's going to become very costly to keep making these more powerful monolithic GPUs, and their power requirements will keep going up. So if the choice is two mid-range GPUs for $500 each or one high-end GPU for $1.5k with possibly higher power usage, I'm not sure it will be as much of a shoo-in as you say.

Also if multi chiplet designs are already having to solve the problem of multiple gpu cores communicating and acting like one big one, maybe some of that R&D could benefit high level multi gpu setups.

[–] Grofit 1 points 3 days ago

It was some on-board GPU with my super amazing AMD K6-2; it couldn't even run Mega Man X without chugging. Then a friend gave me an S3 ViRGE with a glorious 4MB of VRAM.

 

Given the information around how AMD are currently focusing on the mid tier, it got me thinking about their focus on multi-chiplet approaches for RDNA5+. They will be having to do a lot of work to manage high-speed interconnects and some form of internal scheduler/balancer for the chiplets to split out the work.

So with this in mind, if they could leverage that work on interconnects and schedulers at a higher level, as a more cohesive form of Crossfire/SLI, they wouldn't even need to release any high-end cards: they could just sell you multiple mid-tier cards and you daisy-chain them together (within reason). It would let them sell multiple cards to individuals, increasing sales numbers, and also let them focus on fewer models, so simpler/cheaper production.

Historically I think the issue with Crossfire/SLI was that to make best use of it the developers had to do a lot of legwork to spread out the load. But if that could somehow be handled at a lower level, like they do with chiplets, then maybe it could be abstracted away from the developers somewhat, i.e. you designate master/slave GPUs and the OS just treats the main one as a bigger one or something.
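As a toy sketch of that "treat several GPUs as one" idea (all names here are made up for illustration), a combined device could split each frame's workload across the physical cards in proportion to their relative performance, so callers only ever talk to the one logical GPU:

```python
# Hypothetical sketch: a scheduler divides work units proportionally across
# devices, hiding the split from the caller. Not based on any real API.

class CombinedGpu:
    def __init__(self, device_weights):
        # e.g. {"gpu0": 1.0, "gpu1": 1.0} for two identical mid-tier cards
        self.device_weights = device_weights

    def split_workload(self, total_units):
        """Divide total_units of work proportionally across devices."""
        total_weight = sum(self.device_weights.values())
        shares = {}
        assigned = 0
        for name, weight in self.device_weights.items():
            share = int(total_units * weight / total_weight)
            shares[name] = share
            assigned += share
        # Hand any rounding remainder to the first ("master") device.
        first = next(iter(shares))
        shares[first] += total_units - assigned
        return shares

combined = CombinedGpu({"gpu0": 1.0, "gpu1": 1.0})
shares = combined.split_workload(1000)  # even split between identical cards
```

The hard part in reality is of course the synchronisation and data movement between cards, not the arithmetic, which is exactly where the chiplet interconnect R&D would have to come in.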

I doubt this is on the cards, but it felt like something that was plausible and worth discussing.

[–] Grofit 1 points 1 month ago

I'm sure there is a simple answer and I'm an idiot, but given it's in a place that gets lots of sun, can they not just install solar panels with batteries at consumer/grid level?

Or is the problem not with generating the power but with transmitting it to properties? I don't know the cost of solar installation, but given the amount it's costing them when it all fails, I'm sure they could at least incentivise individuals to install solar or something.

[–] Grofit 3 points 1 month ago

Really enjoying it so far.

I was initially saddened to hear it was going to follow in the steps of 15 and be an action-based RPG, and I thought 15 was a brain-dead "warp strike simulator" with horrible story pacing and poor characters (until the last 5% of the game).

This game, though, has simple but effective action combat with enough variety to be fun, and the characters and pacing are a joy.

I still wish we could get some FF games like 7 or 9 where there is depth to equipment, magic and turn-based combat, but JRPGs have been iterating away from complex battle systems and still sell well, so I can't see them going back.

I still think FF7 was the pinnacle, as mixing and matching materia with equipment was really simple and super fun.

Anyway, rant over: FF16 is good, recommend it.

[–] Grofit 22 points 1 month ago

One point that stands out to me is that when you ask it for code it will give you an isolated block of code to do what you want.

In most real world use cases though you are plugging code into larger code bases with design patterns and paradigms throughout that need to be followed.

An experienced dev can take an isolated code block that does X and refactor it into something that fits in with the current code base; we already do this daily with Stack Overflow.

An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it's testable, etc.

So anyway, I don't see a problem with the tool; it's just like using Stack Overflow. But as we have seen, businesses and inexperienced devs seem to think it's more than this and can do their job for them.

[–] Grofit 1 points 1 month ago (1 children)

Are you talking specifically about LLMs or neural-network-style AI in general? Supercomputers have been doing this sort of stuff for decades without much problem, and tbh the main cost for LLMs is in training; inference is pretty computationally cheap.

[–] Grofit 1 points 2 months ago (3 children)

I disagree, there are loads of white papers detailing applications of AI in various industries, here's an example, cba googling more links for you.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577280/

[–] Grofit 2 points 2 months ago (5 children)

I don't mean it's like the dotcom bubble in terms of context, I mean in terms of feel. Dotcom had loads of investors scrambling to "get in on it", many not really understanding why or what it was worth, but just wanting quick wins.

This has the same feel; a bit like crypto, as you say, but I would say crypto is very niche in real-world applications at the moment, whereas AI does have real-world uses.

The real uses are not the ones we are being fed in the mainstream, like it replacing coders or artists; it can help in those areas, but that's just them trying to keep the hype going. Realistically it can be used very well for some medical research and diagnosis scenarios, as it can correlate patterns very easily, showing the likelihood of genetic issues.

The game and media industries are very much trialling voice and image synthesis for improving environmental design (texture synthesis) and providing dynamic voice synthesis based off actors' likenesses. We have had people's likenesses in movies for decades via CGI, but it's only really now that we can do the same for voices. And that's not even getting into logistics or finance, where it is also seeing a lot of application.

It's not going to do much for the end consumer outside of the guff you currently use Siri or Alexa for, but inside those industries AI is very useful.

[–] Grofit 49 points 2 months ago (14 children)

A lot of the AI boom is like the DotCom boom of the Web era. The bubble burst and a lot of companies lost money but the technology is still very much important and relevant to us all.

AI feels a lot like that: it's here to stay, maybe not in the ways investors are touting, but for voice, image and video synthesis/processing it's an amazing tool. It also has lots of applications in biotech, targeting systems, logistics, etc.

So I can see the bubble bursting and a lot of money being lost, but that is the point when actually useful applications of the technology will start becoming mainstream.

[–] Grofit 1 points 2 months ago (1 children)

I love SteamOS for gaming and I think going forward it may get more and more adoption, but a lot of the day-to-day apps and dev tools I use either don't have Linux releases (or can't be run via Wine/Proton). I would love to jump over on my host machine rather than dabbling with it via VMs/Steam Deck, but it's just not productive enough.

One especially painful thing is when certain libs I'm developing with need different versions of glibc or GTK to the ones installed by default on the OS, and then I die inside.

[–] Grofit 13 points 2 months ago (6 children)

I just wish we could have less ways to do things in Linux.

I get that's one of the main benefits of the ecosystem, but it adds too much of a burden on developers and users. A developer can release something for Windows easily, same for Mac, but for Linux is it a Flatpak, a deb, a snap, etc.?

Also, given how many shells and how much pluggable infrastructure there is, it's not like troubleshooting on Windows or Mac, where you can Google something and others will have the exact same problem. On Linux some may have the same problem, but most of the time it's a slight variation, and there are fewer users in the pool to begin with.

So a lot of stuff is stacked against you. I would love for it to become more mainstream, but to do so I feel it needs to be a bit more like Android, where there is a single way to build/install packages, and to get more people onto a common shell/infrastructure so there are more people with the same setup to help each other. Even if it's not technically the best possible setup, if it's consistent and easy to build for, it's going to speed up adoption.

I don't think it's realistically possible but it would greatly help adoption from consumers and developers imo.

[–] Grofit 8 points 2 months ago

Most companies can't even give decent requirements for humans to understand and implement. An AI will just write any old stuff it thinks they want, and they won't have any way to really know if it's right.

They would have more luck trying to create an AI that takes whimsical ideas and turns them into quantified requirements with acceptance criteria. Once they can do that they may stand a chance of replacing developers, but it's gonna take far more than the simpleton code generators they have at the moment, which at best are like bad SO answers you copy, paste and then refactor.

This isn't even factoring in automation testers (who are programmers), build engineers, devops, etc. Can't wait for companies to cry even more about cloud costs when some AI is just lobbing everything into lambdas 😂

 

Keyboards have been around for over 40 years, and in that time not much has really changed in terms of standard keyboard functionality at the driver/OS level.

In the past decade we have seen quite a few keyboards coming out with analogue keys, which is great, but they are really sketchy to actually use for anything, as it's not something an OS expects a keyboard to be doing. You need special 3rd-party drivers/software, and even then the keys often don't get used in a truly analogue way.

For example, in a lot of games analogue directional sticks are the norm, so altering movement speed or sneaking based off the analogue amount is pretty normal. On PC, however, you just get keydown/keyup events, so you can't process keys in an analogue way.
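To illustrate the contrast, here's a toy sketch in Python; the 0.0 to 1.0 key-depression value is hypothetical, since real OS keyboard APIs only deliver keydown/keyup:

```python
# Binary key events vs a (hypothetical) analogue key value for movement.

def speed_from_binary(key_down, max_speed=5.0):
    # keydown/keyup: all or nothing, which is all the OS gives you today
    return max_speed if key_down else 0.0

def speed_from_analogue(depression, max_speed=5.0):
    # hypothetical analogue event: scale speed by how far the key is pressed
    depression = min(max(depression, 0.0), 1.0)  # clamp to valid range
    return max_speed * depression
```

With the binary version the game can only walk or stand still; the analogue version would let a half-pressed W key mean half speed, just like easing a stick forward.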

So given we are seeing more keyboards coming out with this functionality at a lower price point, is there any company/person/body trying to put together a standard that would allow for analogue key events at the OS level, or even in DirectX (DirectInput) / OpenGL?

I imagine the answer is no, but wanted to ask in case anyone in the know had more info.
