Artificial Intelligence

1337 readers
11 users here now

Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow these guidelines for a vibrant and respectful community:

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

founded 1 year ago

Imagine using artificial intelligence to compare two seemingly unrelated creations — biological tissue and Beethoven’s “Symphony No. 9.” At first glance, a living system and a musical masterpiece might appear to have no connection. However, a novel AI method developed by Markus J. Buehler, the McAfee Professor of Engineering and professor of civil and environmental engineering and mechanical engineering at MIT, bridges this gap, uncovering shared patterns of complexity and order.

submitted 5 hours ago* (last edited 5 hours ago) by [email protected] to c/ai_

[PDF] Report.

This is kind of interesting to read.


Abstract

Large language models have recently attracted a great deal of attention in the research community, especially with the introduction of practical tools such as ChatGPT and GitHub Copilot. Their ability to solve complex programming tasks has also been demonstrated in several studies and commercial solutions, increasing interest in using them for software development in different fields. High-performance computing is one such field, where parallel programming techniques have been used extensively to exploit the raw computing power of contemporary multicore and manycore processors. In this paper, we evaluate the ChatGPT and GitHub Copilot tools for OpenMP-based code parallelization using a proposed methodology. We used nine benchmark applications representing typical parallel programming workloads and compared their OpenMP-based parallel solutions, produced manually and with ChatGPT and GitHub Copilot, in terms of obtained speedup, applied optimizations, and quality of the solution. ChatGPT 3.5 and GitHub Copilot installed with Visual Studio Code 1.88 were used. We conclude that both tools can produce correct parallel code in most cases. Performance-wise, however, ChatGPT can match manually produced and optimized parallel code only in simpler cases, as it lacks a deeper understanding of the code and its context. The results are much better with GitHub Copilot, where much less effort is needed to obtain a correct and performant parallel solution.
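For context, here is a minimal sketch of the kind of transformation the paper evaluates: taking a serial loop and parallelizing it with an OpenMP pragma. The dot-product kernel below is purely illustrative and is not one of the paper's nine benchmarks.

```cpp
// Minimal illustration of OpenMP loop parallelization: the kind of
// transformation the paper asks ChatGPT and GitHub Copilot to produce.
// Compile with: g++ -fopenmp dot.cpp -o dot
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 24;
    std::vector<double> a(n, 1.0), b(n, 2.0);

    double sum = 0.0;
    // The serial version simply omits the pragma. With it, each thread
    // accumulates a private partial sum that OpenMP combines at the end.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        sum += a[i] * b[i];
    }

    std::printf("dot product = %f\n", sum);
    return 0;
}
```

Choosing the right clauses (here, `reduction` to avoid a data race on `sum`) is exactly the kind of detail that separates merely correct parallel code from performant parallel code in an evaluation like this one.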

submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/ai_

Media Bias/Fact Check.

Executive Summary:

  • Researchers in the People’s Republic of China (PRC) have optimized Meta’s Llama model for specialized military and security purposes.
  • ChatBIT, an adapted Llama model, appears to be successful in demonstrations in which it was used in military contexts such as intelligence, situational analysis, and mission support, outperforming other comparable models.
  • Open-source models like Llama are valuable for innovation, but their deployment to enhance the capabilities of foreign militaries raises concerns about dual-use applications. The customization of Llama by defense researchers in the PRC highlights gaps in enforcement for open-source usage restrictions, underscoring the need for stronger oversight to prevent strategic misuse.
submitted 2 weeks ago by [email protected] to c/ai_

First of all, let me explain what "hapax legomena" are: the term refers to words (and, by extension, concepts) that occur just once throughout an entire corpus of text. An example is the word "hebenon", which occurs just once in Shakespeare's Hamlet; "hebenon" is therefore a hapax legomenon. The "hapax legomenon" concept itself is a kind of hapax legomenon, IMO.

According to Wikipedia, hapax legomena are generally discarded in NLP work, as they hold "little value for computational techniques". By extension, I guess the same applies to LLMs.

While "hapax legomenon" originally refers to words/tokens, I'm extending it to entire concepts described by these extremely obscure words.

I am a curious mind, actively seeking knowledge, and I'm constantly trying to learn about a myriad of "random" topics across the many fields of human knowledge, especially rare/unknown concepts (that's how I learnt about "hapax legomena", for example). I use three LLMs on a daily basis (GPT-3, Llama and Gemini), hoping to learn about words, historical/mythological figures and concepts unknown to me, lost in the vastness of human knowledge, but I now know, according to Wikipedia, that general LLMs won't point me to anything "obscure" enough.

This leads me to wonder: are there LLMs and/or NLP models/datasets that do not discard hapax legomena? Are there LLMs that favor less frequent data over more frequent data?
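To make the question concrete, here is a minimal sketch of the frequency-cutoff step that discards hapax legomena in classic NLP pipelines: count token frequencies, then drop everything that occurs only once (here, we print only the hapaxes instead). The tokenization is deliberately naive and everything in the sketch is illustrative.

```cpp
// Minimal sketch: find hapax legomena (tokens occurring exactly once)
// in text read from stdin. Tokenization is naive letter-run splitting;
// real NLP pipelines are more careful.
#include <cctype>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, long> freq;
    std::string token;
    char c;
    while (std::cin.get(c)) {
        if (std::isalpha(static_cast<unsigned char>(c))) {
            token += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        } else if (!token.empty()) {
            ++freq[token];
            token.clear();
        }
    }
    if (!token.empty()) ++freq[token];  // flush the last token

    // A cutoff like "if (count < 2) continue;" is exactly the filter
    // that throws hapax legomena away in many pipelines; here we keep
    // and print only them instead.
    for (const auto& [word, count] : freq) {
        if (count == 1) std::cout << word << '\n';
    }
    return 0;
}
```

Worth noting: modern LLM tokenizers (BPE and similar) split rare words into frequent subword pieces rather than dropping whole words, so the classic minimum-frequency cutoff doesn't carry over to LLMs in quite the same way.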


You heard me: I'm curious. I know there are all those dumbass deepnude programs, but has anyone actually tried to make a model that takes images of nude humans and puts clothing on them? I guess they don't have to be nude, but that does remove a lot of variables in the generation.

I think it would be an interesting little tool for trying out new looks you'd never really mess with otherwise.

submitted 3 months ago by bonsaiferroni to c/ai_

cross-posted from: https://programming.dev/post/14979173

  • neural network is trained with deep Q-learning in its own training environment
  • controls the game with twinject

demonstration video of the neural network playing Touhou (Imperishable Night):

It actually makes progress up to the stage boss, which is fairly impressive. It performs okay in its own training environment, but it performs poorly in an existing bullet hell game and makes a lot of mistakes.
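For anyone unfamiliar with Q-learning, here is a minimal tabular sketch of the update rule at the core of this kind of training loop. The state/action sizes and the reward are made-up toy values; the actual agent approximates Q with a neural network instead of a table and samples transitions from the game through twinject.

```cpp
// Minimal tabular Q-learning sketch. The post's agent uses a neural
// network to approximate Q rather than a table; all sizes and values
// here are hypothetical toy numbers.
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int kStates = 16;  // hypothetical discretized game states
constexpr int kActions = 4;  // e.g. the four movement directions

int main() {
    std::array<std::array<double, kActions>, kStates> q{};  // Q(s, a), zero-initialized
    const double alpha = 0.1;   // learning rate
    const double gamma = 0.99;  // discount factor

    // One illustrative transition (s, a, r, s'); a real training loop
    // would sample these from the game environment every frame.
    const int s = 3, a = 1, s_next = 7;
    const double reward = 1.0;  // e.g. survived the frame without getting hit

    // Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    const double best_next = *std::max_element(q[s_next].begin(), q[s_next].end());
    q[s][a] += alpha * (reward + gamma * best_next - q[s][a]);

    std::printf("Q(%d,%d) = %f\n", s, a, q[s][a]);
    return 0;
}
```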

let me know your thoughts and any questions you have!

submitted 5 months ago by Wilshire to c/ai_

Well, looks like we’re done here. Lemmy, Reddit, Facebook and Twitter can pack it in, because now we can have AI handle social media for us.

Guess it’s finally time to go outside, then.
