this post was submitted on 02 Mar 2024
203 points (83.9% liked)


In the whirlwind of technological advancements, artificial intelligence (AI) often becomes the scapegoat for broader societal issues. It’s an easy target, a non-human entity that we can blame for job displacement, privacy concerns, and even ethical dilemmas. However, this perspective is not only simplistic but also misdirected.

The crux of the matter isn't AI itself, but the economic system under which it operates: capitalism. It's capitalism that dictates the motives behind AI development and deployment. Under this system, AI is primarily used to maximize profits, often at the expense of the workforce and of ethical considerations. This profit-driven motive can lead to job losses as companies seek to cut costs, and it can prioritize corporate interests over privacy and fairness.

So, why should we shift our anger from AI to capitalism? Because AI, as a tool, has immense potential to improve lives, solve complex problems, and create new opportunities. It’s the framework of capitalism, with its inherent drive for profit over people, that often warps these potentials into societal challenges.

By focusing our frustrations on capitalism, we advocate for a change in the system that governs AI’s application. We open up a dialogue about how we can harness AI ethically and equitably, ensuring that its benefits are widely distributed rather than concentrated in the hands of a few. We can push for regulations that protect workers, maintain privacy, and ensure AI is used for the public good.

In conclusion, AI is not the enemy; unchecked capitalism is. It’s time we recognize that our anger should not be at the technology that could pave the way for a better future, but at the economic system that shapes how this technology is used.

[–] kromem 6 points 9 months ago

Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state.

So there is already research showing that GPT LLMs are capable of modeling aspects of their training data at much deeper levels of abstraction than mere surface statistics of words, and research showing that the most advanced models already generate novel outputs distinct from anything in the training data, by virtue of the complexity with which they combine abstract concepts learned during training.

Like, have you actually read any of the ongoing research in the field at all? Or just articles written by embittered people who generally misunderstand the technology? (For example, if you ever see someone refer to these models as Markov chains, that person has no idea what they are talking about: the defining feature of the transformer architecture is the self-attention mechanism, which conditions on the entire context and thereby negates the Markov property that characterizes Markov chains in the first place.)
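To make the Markov-chain point concrete, here's a minimal toy sketch (my own illustration, not from any of the cited papers): a Markov chain's next-state distribution depends only on the current state, while a single self-attention layer mixes information from every position in the context, so the output at the last position changes when you perturb the first token. All names and dimensions here are made up for illustration.

```python
import numpy as np

def markov_next(transition, state):
    # A Markov chain: the next-state distribution depends ONLY on
    # the single current state (one row of the transition matrix).
    return transition[state]

def self_attention(X):
    # A bare-bones (single-head, no learned projections) self-attention:
    # each position's output is a softmax-weighted sum over ALL positions,
    # so every output depends on the entire context, not just the last token.
    scores = X @ X.T / np.sqrt(X.shape[1])              # pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))       # 5 tokens, 4-dim embeddings (toy values)
out = self_attention(X)

# Perturb only the FIRST token; the LAST position's output still changes,
# i.e. the prediction has long-range dependence on the whole context.
X2 = X.copy()
X2[0] += 1.0
out2 = self_attention(X2)
print(np.allclose(out[-1], out2[-1]))  # False
```

A Markov chain of any fixed order k can only see the last k tokens; self-attention has no such cutoff, which is exactly why the label doesn't fit.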