Artificial Intelligence


Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow these guidelines for a vibrant and respectful community:

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!


It’s also starting to publicly test an “agentic” coding tool called Claude Code.

Ant mill (en.wikipedia.org)
submitted 5 days ago by A_A to c/ai_
 
 

Collective behavior leads to both collective intelligence and collective stupidity. In my opinion, this applies to artificial neurons just as much as to biological ones.
cross-posted from: https://lemmy.world/post/26059204

An ant mill is an observed phenomenon in which a group of army ants, separated from the main foraging party, lose the pheromone track and begin to follow one another, forming a continuously rotating circle.
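For anyone curious what that feedback loop looks like mechanically, here is a minimal follow-the-leader sketch (my own illustration, not from the article; every constant in it is made up): each simulated ant just steers toward the one ahead of it, and the colony as a whole goes nowhere.

```python
import math

# Minimal sketch: each "ant" blindly steers toward the ant in front of it,
# the same local rule that traps real army ants once the pheromone trail
# back to the nest is lost. All constants are arbitrary.
NUM_ANTS, SPEED, STEPS = 20, 0.01, 500

# Start evenly spaced on a circle of radius 1.
ants = [(math.cos(2 * math.pi * i / NUM_ANTS),
         math.sin(2 * math.pi * i / NUM_ANTS)) for i in range(NUM_ANTS)]

travelled = 0.0
for _ in range(STEPS):
    new_positions = []
    for i, (x, y) in enumerate(ants):
        tx, ty = ants[(i + 1) % NUM_ANTS]       # the ant this one follows
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy) or 1e-9
        new_positions.append((x + SPEED * dx / dist, y + SPEED * dy / dist))
    travelled += SPEED
    ants = new_positions

cx = sum(x for x, _ in ants) / NUM_ANTS
cy = sum(y for _, y in ants) / NUM_ANTS
# Every ant has walked the same distance, yet the colony's centre of mass has
# gone essentially nowhere: lots of individual effort, zero collective progress.
print(f"distance walked per ant: {travelled:.2f}, colony centre: ({cx:.4f}, {cy:.4f})")
```

Each ant keeps moving the whole time while the group's centre stays put, which is the "collective stupidity" failure mode in miniature.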

How I Use AI in 2025 (benjamincongdon.me)
submitted 4 weeks ago by ikidd to c/ai_
 
 

Not the author, but I'm interested in discussion about how you're using AI.

I'm not as attached to Claude as a model as the author is. I find its rate limits far too low for it to be an effective coding partner; I spend more time waiting for my quota to reset than actually getting anything useful done, and GPT-4 does better with less hanging. o1 is better still, and I haven't figured out how to use DeepSeek in Cline yet. I'm not going to code in a different editor than VSCode, so Cursor isn't really interesting.

I don't use AI for much other than that, as I find the search results on things like Perplexity kinda worthless compared to what I can come up with without trying very hard. And chatting with an AI isn't something I've found useful either.
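On the DeepSeek-in-Cline point above: DeepSeek's API is OpenAI-compatible, so one way to sanity-check a key and endpoint before wiring it into any editor plugin is a plain script like the sketch below. The base URL and model name are DeepSeek's documented values at the time of writing; everything else is just an illustrative assumption.

```python
# Quick sanity check of a DeepSeek API key via the OpenAI-compatible endpoint.
# Assumes the official `openai` Python package and a DEEPSEEK_API_KEY env var.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # or "deepseek-reasoner" for the reasoning model
    messages=[{"role": "user", "content": "Write a Python one-liner that reverses a string."}],
)
print(response.choices[0].message.content)
```

If that works, whatever the plugin needs is usually just the same base URL, key, and model name entered in its provider settings.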


Good quote at the end IMO:

The greatest inventions have no owners. Ben Franklin’s heirs do not own electricity. Turing’s estate does not own all computers. AI is undoubtedly one of humanity’s greatest inventions; we believe its future will be — and should be — multi-model.


The author argues that "by encouraging the use of GenAI, we are directly undermining the principles we have been trying to instill in our students."


OpenAI saved its biggest announcement for the last day of its 12-day "shipmas" event. On Friday, the company unveiled o3, the successor to the o1 "reasoning" model it released earlier in the year. To be more precise, o3 is a model family, as was the case with o1: there's o3 and o3-mini, a smaller, distilled model fine-tuned for particular tasks. OpenAI makes the remarkable claim that o3, at least in certain conditions, approaches AGI, with significant caveats. More on that below.


It seems that when you train an AI on a historical summary of human behavior, it's going to pick up some human-like traits. I wonder if this means we should be training a "good guy" AI with only ethical, virtuous material?
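If anyone wanted to poke at that idea, the mechanical part is just filtering the training corpus before fine-tuning. A toy sketch of the filtering step is below; the word list, scoring heuristic, and threshold are all placeholders I made up, and a real attempt would use a proper classifier rather than keyword counting.

```python
# Toy sketch of the "curate the corpus first" idea: score each document and
# keep only what clears a threshold. The word list and threshold are
# placeholders, not a real taxonomy of virtue.
DISALLOWED = {"betray", "deceive", "exploit"}

def virtue_score(text: str) -> float:
    """Return a crude 0-1 score: the fraction of words not on the disallowed list."""
    words = text.lower().split()
    if not words:
        return 0.0
    bad = sum(w.strip(".,!?") in DISALLOWED for w in words)
    return 1.0 - bad / len(words)

def curate(corpus: list[str], threshold: float = 0.9) -> list[str]:
    """Keep only documents the scorer rates above the threshold."""
    return [doc for doc in corpus if virtue_score(doc) >= threshold]

if __name__ == "__main__":
    docs = ["Help your neighbour and share what you have.",
            "Deceive and exploit whoever trusts you."]
    print(curate(docs))  # keeps only the first document
```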


Hello, I have some letters handwritten by my great-grandfather from the Mauthausen concentration camp in 1943/1944. A few of them have been transcribed by hand. There are quite a lot of them, and they are really not easy to read (you can understand the situation), even though the pen strokes are clear and well preserved.

I am wondering if some of these new AI tools can help me transcribe them. I don't expect an automatic transcription, but any help would be welcome 😊
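One low-effort thing worth trying is sending a scan to a multimodal model and asking for a cautious, line-by-line transcription with illegible words marked rather than guessed. A rough sketch using the OpenAI Python client is below; the file name, model choice, and prompt wording are my assumptions, not a recommendation specific to these letters.

```python
# Rough sketch: ask a multimodal model for a cautious, line-by-line
# transcription of one scanned letter. File name, model, and prompt are
# placeholders to adapt.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("letter_page_01.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

prompt = (
    "Transcribe this handwritten letter line by line in its original language. "
    "Keep the original spelling, mark illegible words as [?], and do not guess."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Treat the output as a first draft to check against the originals, not a finished transcription; dedicated handwritten-text-recognition tools such as Transkribus are also worth a look for this kind of archive material.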

submitted 3 months ago by [email protected] to c/ai_
 
 

The economics of how tech jobs get created and how layoffs happen is worth understanding if you're not sure how AI might fit into it.

00:00 Previously on @InternetOfBugs vs AI
01:50 Caveats: US Only, not Gaming
02:33 Building vs Maintaining
03:46 How Projects get funded
05:08 Hiring process
07:42 How the last decade or so has been unusual
10:26 How AI might change that
11:19 Enter the Stock Market
12:47 How Layoffs get decided on
15:58 How to ride out the apparent downturn
21:37 Bad advice from people who have never experienced a downturn
24:38 Resources on How to look for a job

#internetOfBugs

submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/ai_
 
 

Media Bias/Fact Check.

Executive Summary:

  • Researchers in the People’s Republic of China (PRC) have optimized Meta’s Llama model for specialized military and security purposes.
  • ChatBIT, an adapted Llama model, appears to be successful in demonstrations in which it was used in military contexts such as intelligence, situational analysis, and mission support, outperforming other comparable models.
  • Open-source models like Llama are valuable for innovation, but their deployment to enhance the capabilities of foreign militaries raises concerns about dual-use applications. The customization of Llama by defense researchers in the PRC highlights gaps in enforcement for open-source usage restrictions, underscoring the need for stronger oversight to prevent strategic misuse.