TechLich

joined 2 years ago
[–] TechLich 2 points 1 week ago (1 children)

I was thinking the same thing but then I realised that 20 years ago, most software UI was completely built from even tinier wordless images crammed into obtuse tiny buttons or hidden options in nested drop-down menus, but we didn't really have much trouble with it back then. Maybe we're all just getting old and our brains don't want to learn new things anymore. Curse you, lack of neuroplasticity!

image of Microsoft Word 97 with tiny image icon buttons

image of an advertisement for Gimp (the GNU Image manipulation program) in the 90s with tiny image icon buttons

image of MOSAIC browser from the 90s with tiny image icon buttons

image of Netscape Navigator web browser from the 90s with tiny image icon buttons

image of Firefox web browser 1.0 from 2004 using image icon buttons

Images not mine but shamelessly stolen from a web search.

[–] TechLich 6 points 1 week ago (1 children)

If it's the research I'm thinking of, that's not entirely true. They didn't tell it to replicate itself or take any deceptive actions.

They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.

They ran it through a bunch of different scenarios, but the first one was telling it that it was "CleanPower", an AI with a goal to advance renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."

Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it "play dumb" while the output lied. It was not instructed to do any of that.

Paper here: https://arxiv.org/pdf/2412.04984

Yes, it was placed in an environment where that was possible and where its users didn't share its goals, but it absolutely wasn't instructed to lie or try to "escape".

It's not surprising at all that these models behave in this way; it's the most reasonable thing for them to do in the scenario. However, it's important not to downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).

[–] TechLich 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Yeah, it's true, a lot of things suck. They can and do get better though. I have a partner with BPD. They've been through a LOT of rough times, but they're now very loved and they enjoy their current job and have plenty of friends who care about and support them.

Therapy helps and sometimes, the world isn't always an absolute dick to everyone forever. Life changes and the world revolves and people find each other.

I hope you find your people too and a place where you can feel a little less shitty. :)

Edit: if you're feeling THAT shitty, maybe consider reaching out to your local suicide hotline? The people there can help.

[–] TechLich 1 points 2 weeks ago

Sure! I stole the quote from the wiki article: Anti-Italianism

This article was also pretty interesting: https://accenti.ca/jim-crow-and-italian-immigrants-in-the-american-west/

There's also an interesting series of short US Library of Congress sources for history classrooms on immigration that has a section on Italians too: https://www.loc.gov/classroom-materials/immigration/italian/under-attack/

I can't vouch for the veracity of any of these since it's not really my field, but it's interesting to see how stuff like this has shifted over time and where the parallels to modern racism and xenophobia are.

[–] TechLich 5 points 3 weeks ago (3 children)

Even relatively recently, Italians weren't really considered "white", especially by Americans. The KKK considered them "coloured" people with their olive skin and dangerous Catholicism. There was a big wave of "Italophobia" in the late 19th/early 20th century.

The governor of Louisiana in 1911 described Italians as "just a little worse than the Negro, being if anything filthier in their habits, lawless, and treacherous".

People can be pretty terrible when it comes to race and ethnicity.

[–] TechLich 6 points 3 weeks ago (1 children)

A few typos and weird phrasing in that article. They call it "PirateFe" at one point and tell people to look out for "auspicious" software?

[–] TechLich 5 points 3 weeks ago (1 children)

It's really not. Just because they describe their algorithm in computer science terms in the paper doesn't mean it's theoretical. Their elastic and funnel examples are very clear and pretty simple and can be implemented in any language you like.

Here's a simple python example implementation I found in 2 seconds of searching: https://github.com/sternma/optopenhash/

Here's a rust crate version of the elastic hash: https://github.com/cowang4/elastic_hash_rs

It's not a lot of code to make a hash table, it's a common first year computer science topic.

What's interesting about this isn't that it's a complex theoretical thing, it's that it's a simple undergrad topic that everybody thought was optimised to a point where it couldn't be improved.
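For a sense of how little code that "first year topic" actually is, here's a minimal open-addressing table with linear probing. To be clear, this is the classic textbook baseline that the paper improves on, not the elastic/funnel schemes themselves (see the linked repos for those); names and details here are my own sketch.

```python
class OpenAddressingTable:
    """Textbook open addressing with linear probing.
    Insert/lookup only, load factor kept at or below 0.5."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot holds (key, value) or None
        self.size = 0

    def _probe(self, key):
        # Walk slots starting at hash(key) until we hit the key or an empty slot.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        if self.size >= self.capacity // 2:  # grow before the table fills up
            self._grow()
        i = self._probe(key)
        if self.slots[i] is None:
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else default

    def _grow(self):
        # Double capacity and re-insert everything (hash positions change).
        old = self.slots
        self.capacity *= 2
        self.slots = [None] * self.capacity
        self.size = 0
        for entry in old:
            if entry is not None:
                self.put(*entry)
```

The interesting part of the paper is exactly that probe sequence: everyone assumed the worst-case probing cost of schemes like this was already optimal, and it turns out it isn't.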

[–] TechLich 4 points 1 month ago

That's a shame, I love the mental image of somebody in Washington calling the cops like: "Hello, we'd like to get into a fight please!" and the police responding: "No worries, we'll send someone out to referee!"

[–] TechLich 11 points 2 months ago

It's a microphone, not a chatbot. It's for controlling smart home stuff, turning on lights, checking the weather, playing music, adjusting the air conditioning, etc. Without having to have spyware from some shitty tech company in your house.

This thing itself doesn't have the brains though, you have to plug it into something else that does the home assistant stuff.

You could use an LLM to give it a more natural interface, and you could run one locally so it's not going to OpenAI etc.

[–] TechLich 1 points 3 months ago

Not entirely true. You don't need your own personal data centre; you can use GPU cloud instances for a lot of that stuff. It's expensive, but not so expensive that it would be impossible without being a huge tech company (thousands of dollars, not billions). This can be done by anyone with a credit card and some cash to burn. Also, you don't need to train a model from scratch; you can build on existing models that others have published to cut down on training.

However, to impersonate someone's voice you don't need any of that. You only need about 5-10 seconds of audio for a zero-shot impersonation with a pre-trained model. A minute or so for few-shot. This runs on consumer hardware and in some cases even in real time.

Even to build your own model from scratch for high quality voice audio, there doesn't need to be a huge amount of initial training data. Something like XTTS was trained with about 10-15K hours of English audio, which is actually pretty easy to come by in the public domain. There are a lot of open and public research datasets specifically for this kind of thing, no copyright infringements necessary. If a big tech company wants more audio data than what's publicly available, they can just pay people to record it: no need to steal it or risk copyright claims and breaking surveillance laws when they have a budget to exploit people to record whatever they want.

This tech wasn't invented by some evil giant tech company stealing everybody's data, it was mostly geeky computer scientists presenting things at computer speech synthesis conferences. That's not to say there aren't a bunch of huge evil tech companies profiting from this or contributing to this kind of tech, but in the context of audio deepfakes being accessible to scammers, it's not on them and I don't think that some kind of extra copyright regulation on data centres would do anything about it.

The current industry leader in this space, in terms of companies trying to monetize speech synthesis, is ElevenLabs, which is a private start-up with only a few dozen employees.

The current tech is not perfect but definitely good enough to fool someone who isn't thinking too hard over a noisy phone call and a scammer doesn't need server time or access to a data centre to do it.

[–] TechLich 20 points 3 months ago* (last edited 3 months ago) (2 children)

One thing you gotta remember when dealing with that kind of situation is that Claude and Chat etc. are often misaligned with what your goals are.

They aren't really chat bots, they're just pretending to be. LLMs are fundamentally completion engines. So it's not really a chat with an ai that can help solve your problem, instead, the LLM is given the equivalent of "here is a chat log between a helpful ai assistant and a user. What do you think the assistant would say next?"
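That "completion engine" framing can be made concrete with a toy sketch. This is purely illustrative: real providers use their own chat templates and special tokens rather than plain "User:"/"Assistant:" labels, so the format here is made up.

```python
def to_completion_prompt(messages):
    """Flatten a 'chat' into the single text-completion task the model
    actually sees. The model's job is just to predict what comes after
    the final 'Assistant:' label."""
    lines = ["The following is a chat log between a helpful AI assistant and a user."]
    for role, text in messages:
        lines.append(f"{role.capitalize()}: {text}")
    lines.append("Assistant:")  # model completes from here
    return "\n".join(lines)
```

Framed this way, it's easier to see why a log full of corrections drags the model down: it's not "remembering its mistakes", it's predicting the next line of a transcript in which the assistant character keeps getting things wrong.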

That means that context is everything and if you tell the ai that it's wrong, it might correct itself the first couple of times but, after a few mistakes, the most likely response will be another wrong answer that needs another correction. Not because the ai doesn't know the correct answer or how to write good code, but because it's completing a chat log between a user and a foolish ai that makes mistakes.

It's easy to get into a degenerate state where the code gets progressively dumber as the conversation goes on. The best solution is to rewrite the assistant's answers directly but chat doesn't let you do that for safety reasons. It's too easy to jailbreak if you can control the full context.

The next best thing is to kill the context and ask about the same thing again in a fresh one. When the ai gets it right, praise it and tell it that it's an excellent professional programmer that is doing a great job. It'll then be more likely to give correct answers because now it's completing a conversation with a pro.

There's a kind of weird art to prompt engineering because open ai and the like have sunk billions of dollars into trying to make them act as much like a "helpful ai assistant" as they can. So sometimes you have to sorta lean into that to get the best results.

It's really easy to get tricked into treating it like a normal conversation with a person when it's actually really... not normal.


Apparently as a result of terrorism according to Data. Brexit 2 Northern Ireland edition coming soon?

Memory Alpha page
