Artificial Intelligence

Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow these guidelines for a vibrant and respectful community:

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

The artificial intelligence (AI) industry is facing a critical diversity crisis, with women severely underrepresented across all seniority levels. This data brief quantifies the multifaceted underrepresentation of women in the global and European Union (EU) AI talent pool, highlighting the pressing need for targeted interventions to increase female participation in this rapidly expanding field.

Our analysis of data on nearly 1.6 million AI professionals worldwide reveals stark gender imbalances. Women comprise only 22% of AI talent globally, with even lower representation at senior levels: they occupy less than 14% of senior executive roles in AI. Within the EU, the disparity is equally concerning. Europe has closed 75% of its overall gender gap, with Sweden and Germany among the top five European economies in closing the gap. However, our data reveal a stark contrast in the AI sector: Germany and Sweden have some of the lowest female representation in their AI workforces in the EU, at 20.3% and 22.4% respectively. This discrepancy raises serious questions about the unique barriers faced by women in the AI field.

Imagine using artificial intelligence to compare two seemingly unrelated creations — biological tissue and Beethoven’s “Symphony No. 9.” At first glance, a living system and a musical masterpiece might appear to have no connection. However, a novel AI method developed by Markus J. Buehler, the McAfee Professor of Engineering and professor of civil and environmental engineering and mechanical engineering at MIT, bridges this gap, uncovering shared patterns of complexity and order.

[PDF] Report.

This is kind of interesting to read.

Abstract

Large language models have recently attracted considerable attention in the research community, especially with the introduction of practical tools such as ChatGPT and GitHub Copilot. Several studies and commercial solutions have also demonstrated their ability to solve complex programming tasks, increasing interest in using them for software development in different fields. High performance computing is one such field, where parallel programming techniques have long been used to exploit the raw computing power of contemporary multicore and manycore processors. In this paper, we evaluate the ChatGPT and GitHub Copilot tools for OpenMP-based code parallelization using a proposed methodology. We used nine benchmark applications representing typical parallel programming workloads and compared their OpenMP-based parallel solutions, produced manually and with ChatGPT and GitHub Copilot, in terms of obtained speedup, applied optimizations, and quality of the solution. ChatGPT 3.5 and GitHub Copilot installed with Visual Studio Code 1.88 were used. We conclude that both tools can produce correct parallel code in most cases. Performance-wise, however, ChatGPT can match manually produced and optimized parallel code only in simpler cases, as it lacks a deeper understanding of the code and its context. The results are much better with GitHub Copilot, which requires far less effort to obtain a correct and performant parallel solution.
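
For context, this is the shape of the task the tools are given: take a serial loop and annotate it with OpenMP pragmas. A minimal sketch (not from the paper; the kernel, file name, and sizes are illustrative):

```cpp
#include <cstdio>
#include <vector>

// minimal sketch, not from the paper: a SAXPY-style kernel and a sum,
// parallelized with OpenMP. compile with: g++ -O2 -fopenmp saxpy.cpp
int main() {
    const long n = 1 << 24;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 3.0;

    // iterations are independent, so they can be split across threads
    #pragma omp parallel for
    for (long i = 0; i < n; ++i)
        y[i] += a * x[i];

    // each thread accumulates a private partial sum that OpenMP combines
    // at the end, avoiding a data race on "sum"
    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < n; ++i)
        sum += y[i];

    std::printf("sum = %f\n", sum);
}
```

The paper's benchmarks are of course more involved; this just shows the kind of pragma and reduction-clause choices the tools are asked to make.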

Media Bias/Fact Check.

Executive Summary:

  • Researchers in the People’s Republic of China (PRC) have optimized Meta’s Llama model for specialized military and security purposes.
  • ChatBIT, an adapted Llama model, appears to be successful in demonstrations in which it was used in military contexts such as intelligence, situational analysis, and mission support, outperforming other comparable models.
  • Open-source models like Llama are valuable for innovation, but their deployment to enhance the capabilities of foreign militaries raises concerns about dual-use applications. The customization of Llama by defense researchers in the PRC highlights gaps in enforcement for open-source usage restrictions, underscoring the need for stronger oversight to prevent strategic misuse.

First of all, let me explain what "hapax legomena" are: words (and, by extension, concepts) that occur just once throughout an entire corpus of text. An example is the word "hebenon", which occurs just once in Shakespeare's Hamlet; "hebenon" is therefore a hapax legomenon. The "hapax legomenon" concept itself is a kind of hapax legomenon, IMO.
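
To make that concrete, here's a minimal sketch of how you'd find the hapax legomena in a corpus (naive letters-only tokenization, just to illustrate the idea):

```cpp
#include <cctype>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // count lowercase word frequencies over stdin; tokenization is
    // deliberately naive (runs of letters only)
    std::unordered_map<std::string, long> freq;
    std::string word;
    char c;
    while (std::cin.get(c)) {
        if (std::isalpha(static_cast<unsigned char>(c))) {
            word.push_back(static_cast<char>(
                std::tolower(static_cast<unsigned char>(c))));
        } else if (!word.empty()) {
            ++freq[word];
            word.clear();
        }
    }
    if (!word.empty()) ++freq[word];

    // a hapax legomenon is any word whose corpus frequency is exactly 1
    for (const auto& [w, n] : freq)
        if (n == 1) std::cout << w << '\n';
}
```

Run it as e.g. ./hapax < hamlet.txt and "hebenon" should show up in the output.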

According to Wikipedia, hapax legomena are generally discarded in NLP pipelines, as they hold "little value for computational techniques". By extension, the same applies to LLMs, I guess.

While "hapax legomena" originally refers to words/tokens, I'm extending it to entire concepts, described by these extremely unknown words.

I am a curious mind, actively seeking knowledge, and I'm constantly trying to learn a myriad of "random" topics across the many fields of human knowledge, especially rare/unknown concepts (that's how I learnt about "hapax legomena", for example). I use three LLMs on a daily basis (GPT-3, Llama and Gemini), expecting to discover words, historical/mythological figures and concepts unknown to me, lost in the vastness of human knowledge, but I now know, according to Wikipedia, that general LLMs won't point me to anything "obscure" enough.

This leads me to wonder: are there LLMs and/or NLP models/datasets that do not discard hapax legomena? Are there LLMs that favor less frequent data over more frequent data?

You heard me. I'm curious: I know there are all those dumbass deepnude programs, but has anyone actually tried to make a model that takes images of nude humans and puts clothing on them? I guess they don't have to be nude, but that does remove a lot of variables in the generation.

I think it would be an interesting little tool for trying out new looks you'd never really mess with otherwise.

cross-posted from: https://programming.dev/post/14979173

  • neural network is trained with deep Q-learning in its own training environment
  • controls the game with twinject

demonstration video of the neural network playing Touhou (Imperishable Night):

it actually makes progress up to the stage boss, which is fairly impressive. it performs okay in its training environment but poorly in the existing bullet hell game, where it makes a lot of mistakes.
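
for anyone unfamiliar with deep Q-learning: the network estimates the value of each possible action in the current state and is trained toward the one-step Bellman target. here's a tabular toy version of that same update rule (the actual project uses a neural network over game state instead of a table; this line-world example is made up purely to show the update):

```cpp
#include <algorithm>
#include <cstdio>
#include <random>

// toy line world: states 0..4, actions 0=left / 1=right, reward 1 for
// reaching state 4. the core of Q-learning is the update inside the loop;
// a deep Q-network replaces the table with a neural net trained toward
// the same target.
int main() {
    constexpr int kStates = 5, kActions = 2;
    const double alpha = 0.1;    // learning rate
    const double gamma = 0.9;    // discount factor
    const double epsilon = 0.1;  // exploration rate
    double q[kStates][kActions] = {};

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> rand_action(0, kActions - 1);

    for (int episode = 0; episode < 500; ++episode) {
        int s = 0;
        while (s != kStates - 1) {
            // epsilon-greedy: mostly exploit, occasionally explore
            int a = coin(rng) < epsilon
                        ? rand_action(rng)
                        : (q[s][1] >= q[s][0] ? 1 : 0);
            int s2 = std::clamp(s + (a == 1 ? 1 : -1), 0, kStates - 1);
            double r = (s2 == kStates - 1) ? 1.0 : 0.0;

            // Bellman update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
            double target = r + gamma * std::max(q[s2][0], q[s2][1]);
            q[s][a] += alpha * (target - q[s][a]);
            s = s2;
        }
    }

    for (int s = 0; s < kStates; ++s)
        std::printf("state %d: Q(left)=%.3f  Q(right)=%.3f\n",
                    s, q[s][0], q[s][1]);
}
```

after training, Q(right) dominates in every non-terminal state, so the greedy policy walks straight to the goal. dodging bullets is the same idea with a vastly bigger state space, which is why a network is needed instead of a table.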

let me know your thoughts and any questions you have!
