TauZero

joined 1 year ago
[–] [email protected] 8 points 7 months ago

But then you have to watch Picard 😒

[–] [email protected] 0 points 7 months ago

The credit card companies do not insure against fraud; they simply take the money out of the merchant's account and put it back into yours. Now it's the merchant who has no recourse if they have already shipped the product. So the only difference between CC and crypto is who is typically left holding the bag in case of theft - the payer or the payee. Certainly not the banks!

I'd argue that, in terms of assigning responsibility, it seems fairer to expect you the customer to keep your digital wallet secure from thieves than to expect the merchant to guess every time whether the visitor to their online store happens to be using a stolen credit card.

[–] [email protected] 2 points 7 months ago

Question: do the Japanese actually care about privacy? I know I do, but if you were to ask a Japanese person why their country uses cash, would they say "We have considered a system of payment cards and decided against it for privacy reasons", or would they just shrug and say "I dunno, I'm not in charge of payment systems, I use what I have"?

[–] [email protected] 1 points 7 months ago

Yeah, for getting kidnapped 🀣

[–] [email protected] 1 points 7 months ago

A funny culprit I found during my own investigation was the GFCI bathroom outlet, which draws an impressive 4W. The status light plus whatever trickle current it uses to do its job thus dwarfs the standby power of any other electronic device.

[–] [email protected] 3 points 7 months ago

That's how I found out that my desktop speakers consume power even with the physical button off and the status light dark. The power brick stays warm indefinitely - it feels like a good 20W! I have to unplug that thing now when not in use. Any normal power brick will be <1W, of course.
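
For a rough sense of scale, here's a back-of-the-envelope sketch of what a 20W always-on draw adds up to over a year (the electricity price is an assumed figure for illustration):

```python
# Back-of-the-envelope: annual energy and cost of a 20 W always-on power brick.
# The $/kWh price below is an assumption, not a measured figure.
watts = 20
hours_per_year = 24 * 365
kwh_per_year = watts * hours_per_year / 1000           # about 175 kWh
price_per_kwh = 0.15                                    # assumed price in $/kWh
print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * price_per_kwh:.0f}/year")
```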

[–] [email protected] 2 points 7 months ago (1 children)

FYI, the magic about:config key that you need to set to false is "keyword.enabled". After that, Firefox will finally stop using any non-URL string as a search query and will instead say "Hmm. That address doesn’t look right. Please check that the URL is correct and try again."

[–] [email protected] 2 points 7 months ago

I am still sad Hitachi was too embarrassed to carry on the legacy of its name and sold off the Magic Wand brand to its subsidiary manufacturer. Hitachi, the brand name was a compliment to you, not a liability! You lost out.

[–] [email protected] 2 points 7 months ago

Some notes for my use. As I understand it, there are 3 layers of "AI" involved:

The 1st is a "transformer", a type of neural network invented in 2017, which led to the hugely successful "generative pre-trained transformers" of recent years like GPT-4 and ChatGPT. The one used here is a toy model, with only a single hidden layer ("MLP" = "multilayer perceptron") of 512 nodes (also referred to as "neurons" or "dimensionality"). The model is trained on a dataset called "The Pile", a collection of 886GB of text from all kinds of sources. The dataset is "tokenized" (pre-processed) into 100 billion tokens by converting words or word fragments into numbers for easier calculation. You can see an example of what the text data looks like here. The transformer learns from this data.
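
To make the shape of that toy model concrete, here is a minimal sketch of a one-layer transformer in PyTorch - my own illustration, not the paper's code. The class name and most dimensions are assumptions, and causal masking and layer norms are omitted for brevity:

```python
# Minimal sketch of a one-layer transformer with a single 512-unit MLP layer,
# the kind of toy model described above. Names and sizes are illustrative
# assumptions; causal masking and layer norms are omitted for brevity.
import torch
import torch.nn as nn

class OneLayerTransformer(nn.Module):
    def __init__(self, vocab_size=50000, d_model=128, n_heads=4, d_mlp=512, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)      # token id -> vector
        self.pos = nn.Embedding(max_len, d_model)           # position -> vector
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp_in = nn.Linear(d_model, d_mlp)             # the 512 "neurons"
        self.mlp_out = nn.Linear(d_mlp, d_model)
        self.unembed = nn.Linear(d_model, vocab_size)       # back to token logits

    def forward(self, tokens):                              # tokens: (batch, seq) of ids
        x = self.embed(tokens) + self.pos(torch.arange(tokens.shape[1]))
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = x + attn_out
        mlp_act = torch.relu(self.mlp_in(x))                # 512-dim activations per token
        x = x + self.mlp_out(mlp_act)
        return self.unembed(x), mlp_act                     # next-token logits + activations
```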

In the paper, the researchers do cajole the transformer into generating text to help understand its workings. I am not quite sure yet whether every transformer is automatically a generator, like ChatGPT, or whether it needs something extra done to it. I would have enjoyed seeing more of the sample text that the toy model can generate! It looks surprisingly capable despite only having 512 nodes in the hidden layer. There is probably a way to download the model and execute it locally. Would it have been possible to add the generative model as a javascript toy to supplement the visualizer?
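
As far as I understand, any transformer trained on next-token prediction can be turned into a generator just by sampling in a loop - something like this sketch (it reuses the hypothetical OneLayerTransformer above; the prompt token ids are made up):

```python
# Turn a next-token model into a generator: sample a token from the output
# distribution, append it, and repeat. Uses the sketch model from above.
import torch

def generate(model, prompt_tokens, n_new=20, temperature=1.0):
    tokens = prompt_tokens.clone()                       # shape (1, seq)
    for _ in range(n_new):
        logits, _ = model(tokens)                        # (1, seq, vocab)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        next_tok = torch.multinomial(probs, 1)           # sample one token id
        tokens = torch.cat([tokens, next_tok.view(1, 1)], dim=1)
    return tokens

# Usage (untrained weights, so the output is gibberish):
# model = OneLayerTransformer()
# print(generate(model, torch.tensor([[1, 2, 3]])))
```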

The main transformer they use is "model A", and they also trained a twin transformer, "model B", on the same text but with a different random initialization seed, to see whether the two would develop equivalent semantic features (they did).

The 2nd AI is an "autoencoder", a different type of neural network which is good at converting the data fed to it into a "more efficient representation", like a lossy compressor/zip archiver - or maybe in this case a "decompressor" would be the more apt term. Encoding is also called "changing the dimensionality" of the data. The researchers trained/tuned the 2nd AI to decompose the 1st AI's internal activations into a number of semantic features, in a way which both captures a good chunk of the model's information content and also keeps the features sensible to humans. The target number of features is tunable anywhere from 512 (1-to-1) to 131072 (1-to-256). The number they found most useful in this case was 4096.
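
My mental model of that 2nd AI is a sparse autoencoder along these lines - again just a sketch under my own assumptions about the architecture and loss, not the paper's exact setup:

```python
# Sketch of a sparse autoencoder that expands 512-dim MLP activations into
# 4096 feature coefficients and reconstructs the original activations.
# The L1 penalty pushes most features to zero, so each activation is explained
# by a handful of "semantic features". Sizes and loss weight are assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=512, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_features)   # activations -> feature coefficients
        self.decoder = nn.Linear(d_features, d_act)   # features -> reconstructed activations

    def forward(self, acts):
        features = torch.relu(self.encoder(acts))     # non-negative feature activations
        recon = self.decoder(features)
        return recon, features

def sae_loss(acts, recon, features, l1_coeff=1e-3):
    # reconstruction error + sparsity penalty on the feature activations
    return ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
```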

The 3rd AI is a "large language model" nicknamed Claude, similar to GPT-4, which they have developed for their own use at the Anthropic company. They told it to annotate and interpret the features found by the 2nd AI. To compare, they had one researcher slowly annotate 412 features manually. Claude did as well as or better than the human, so they let it finish all the rest on its own. These are the descriptions the visualization in the OP link shows.

Pretty cool how they use one AI to disassemble another AI and then use a 3rd AI to describe it in human terms!

[–] [email protected] 1 points 7 months ago

Can't access the article, but wasn't China the one most vulnerable to the Malacca Strait being a chokepoint? As in, their trade towards Europe and fuel from the Middle East being potentially threatened? How does Thailand pitching this to the US make sense then? How would a Thai bypass even increase security, since both routes are in the same area and can be equally blockaded? There aren't any problems with throughput capacity at Malacca, unlike, say, at the Panama Canal. Maybe it would make the travel distance slightly shorter, but is there really any way it could ever be cost-effective to offload and reload ships for a few hundred kilometers of savings?

[–] [email protected] 1 points 7 months ago

Thank you for your detailed input!

It’s not even a platonic ideal - it’s drawing a supply/demand curve and thinking you understand how prices work in a market economy.

You got me 😁. I love drawing supply-and-demand curves. Seems pretty hopeless, then, if to even begin to understand how to vote "correctly" you need five years of a game theory PhD. Hearing someone say "just trust me bro, the optimal strategy is that one" is not good enough. Voting was supposed to be for the masses...

drop everything to just start suing states and protesting for voting rights

I could get on board with ranked-choice voting. My city used IRV for our latest mayoral primary election, and even though none of my ranked candidates won, I felt extremely satisfied that at least my voice was finally being heard. When a literal police-mayor got elected (winning the primary by only 7000 votes), I had the comfort of knowing that this was not due to any spoiler effect on my part, but simply due to more people voting for him. If we campaigned for ranked-choice voting in federal elections - presidential primaries and the general - we could eliminate all the above hand-wringing. The Democratic party should be totally on board with this, since they could finally get the Green protest vote.
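
The reason there is no spoiler effect is the way IRV transfers ballots: if your first choice gets eliminated, your vote simply moves to your next choice. A toy tally, with made-up ballots, just to illustrate the mechanics:

```python
# Toy instant-runoff (ranked-choice) tally: eliminate the last-place candidate
# each round and transfer those ballots to their next remaining choice.
# Ballots here are made-up examples, not real election data.
from collections import Counter

def irv_winner(ballots):
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots if b)   # current first choices
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total:                          # majority reached
            return leader
        loser = min(counts, key=counts.get)            # eliminate last place
        ballots = [[c for c in b if c != loser] for b in ballots]

# A Green first-choice vote is not "wasted": after Green is eliminated,
# that ballot transfers to the voter's second choice.
print(irv_winner([["green", "dem", "rep"], ["dem", "green"],
                  ["rep"], ["rep", "dem"], ["dem"]]))   # -> dem
```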

[–] [email protected] 1 points 7 months ago (2 children)

So I am proposing that the Democratic party is acting irrationally and suboptimally, but you claim that the Democrats are acting optimally, and that it is the fringe left that is acting irrationally instead, by refusing to accept an unfair split against all game theory guidance and causing all of us to eat shit (despite making up only low single digits). Yet if the Democrats are so rational, how come they keep losing? Shouldn't they have found an optimal strategy to get around the irrational ultimatum of the left? Yet here we are.
