Machine Learning | Artificial Intelligence

Welcome to Machine Learning – a versatile digital hub where Artificial Intelligence enthusiasts unite. From news flashes and coding tutorials to ML-themed humor, our community covers the gamut of machine learning topics. Regardless of whether you're an AI expert, a budding programmer, or simply curious about the field, this is your space to share, learn, and connect over all things machine learning. Let's weave algorithms and spark innovation together.

cross-posted from: https://eventfrontier.com/post/177049

I keep getting the error ValueError: perm should have the same length as rank(x): 3 != 2 when trying to convert my model using coremltools.

From my understanding, the most common cause of this error is that the input shape you pass into coremltools doesn't match your model's input shape. However, as far as I can tell, in my code it does match. I also added an input layer, and that didn't help either.
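
If it helps to see what I mean by the shapes: since the Keras input layer is shape=(max_len,), the model actually consumes batched rank-2 tensors of shape (batch, max_len). One variation I have been wondering about (an assumption on my part; I haven't confirmed this is the right call for my case) is declaring the batch dimension explicitly in the conversion:

import numpy as np
import coremltools as ct

# Hypothetical variation, reusing `my_saved_model` and `max_len` from the
# full code below: include the batch dimension in the Core ML input shape.
coreml_model = ct.convert(
    my_saved_model,
    inputs=[ct.TensorType(shape=(1, max_len), name="embedding_input", dtype=np.int32)],
    source="tensorflow"
)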

I have put a lot of effort into reducing my code as much as possible while still keeping it a minimal, complete, verifiable example. However, I'm aware that it is still a lot of code. The model is created and trained starting at line 60 of my code.

I'm running this on Ubuntu, with an NVIDIA GPU set up through Docker.

Any ideas what I'm doing wrong?


from typing import TypedDict, Optional, List
import tensorflow as tf
import json
from tensorflow.keras.optimizers import Adam
import numpy as np
from sklearn.utils import resample
import keras
import coremltools as ct

# Simple tokenizer function
word_index = {}
index = 1
def tokenize(text: str) -> list:
    global word_index
    global index
    words = text.lower().split()
    sequences = []
    for word in words:
        if word not in word_index:
            word_index[word] = index
            index += 1
        sequences.append(word_index[word])
    return sequences

def detokenize(sequence: list) -> str:
    global word_index
    # Filter sequence to remove all 0s
    sequence = [int(index) for index in sequence if index != 0.0]
    words = [word for word, index in word_index.items() if index in sequence]
    return ' '.join(words)

# Pad sequences to the same length
def pad_sequences(sequences: list, max_len: int) -> list:
    padded_sequences = []
    for seq in sequences:
        if len(seq) > max_len:
            padded_sequences.append(seq[:max_len])
        else:
            padded_sequences.append(seq + [0] * (max_len - len(seq)))
    return padded_sequences

class PreprocessDataResult(TypedDict):
    inputs: tf.Tensor
    labels: tf.Tensor
    max_len: int

def preprocess_data(texts: List[str], labels: List[int], max_len: Optional[int] = None) -> PreprocessDataResult:
    tokenized_texts = [tokenize(text) for text in texts]
    if max_len is None:
        max_len = max(len(seq) for seq in tokenized_texts)
    padded_texts = pad_sequences(tokenized_texts, max_len)

    return PreprocessDataResult({
        'inputs': tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32)),
        'labels': tf.convert_to_tensor(np.array(labels, dtype=np.int32)),
        'max_len': max_len
    })

# Define your model architecture
def create_model(input_shape: int) -> keras.models.Sequential:
    model = keras.models.Sequential()

    model.add(keras.layers.Input(shape=(input_shape,), dtype='int32', name='embedding_input'))
    model.add(keras.layers.Embedding(input_dim=10000, output_dim=128)) # `input_dim` represents the size of the vocabulary (i.e. the number of unique words in the dataset).
    model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=64, return_sequences=True)))
    model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=32)))
    model.add(keras.layers.Dense(units=64, activation='relu'))
    model.add(keras.layers.Dropout(rate=0.5))
    model.add(keras.layers.Dense(units=1, activation='sigmoid')) # Output layer, binary classification (meaning it outputs a 0 or 1, false or true). The sigmoid function outputs a value between 0 and 1, which can be interpreted as a probability.

    model.compile(
        optimizer=Adam(),
        loss='binary_crossentropy',
        metrics=['accuracy']
    )

    return model

# Train the model
def train_model(
    model: tf.keras.models.Sequential,
    train_data: tf.Tensor,
    train_labels: tf.Tensor,
    epochs: int,
    batch_size: int
) -> tf.keras.callbacks.History:
    return model.fit(
        train_data,
        train_labels,
        epochs=epochs,
        batch_size=batch_size,
        callbacks=[
            keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5),
            keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1),
            # When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
            keras.callbacks.ModelCheckpoint(filepath='./best_model.tf', monitor='val_accuracy', save_best_only=True)
        ]
    )

# Example usage
if __name__ == "__main__":
    # Check available devices
    print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

    with tf.device('/GPU:0'):
        print("Loading data...")
        data = (["I love this!", "I hate this!"], [0, 1])
        rawTexts = data[0]
        rawLabels = data[1]

        # Preprocess data
        processedData = preprocess_data(rawTexts, rawLabels)
        inputs = processedData['inputs']
        labels = processedData['labels']
        max_len = processedData['max_len']

        print("Data loaded. Max length: ", max_len)

        # Save word_index to a file
        with open('./word_index.json', 'w') as file:
            json.dump(word_index, file)

        model = create_model(max_len)

        print('Training model...')
        train_model(model, inputs, labels, epochs=1, batch_size=32)
        print('Model trained.')

        # When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
        model.load_weights('./best_model.tf')
        print('Best model weights loaded.')

        # Save model
        # I think that .h5 extension allows for converting to CoreML, whereas .keras file extension does not
        model.save('./toxic_comment_analysis_model.h5')
        print('Model saved.')

        my_saved_model = tf.keras.models.load_model('./toxic_comment_analysis_model.h5')
        print('Model loaded.')

        print("Making prediction...")
        test_string = "Thank you. I really appreciate it."
        tokenized_string = tokenize(test_string)
        padded_texts = pad_sequences([tokenized_string], max_len)
        tensor = tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32))
        predictions = my_saved_model.predict(tensor)
        print(predictions)
        print("Prediction made.")


        # Convert the Keras model to Core ML
        coreml_model = ct.convert(
            my_saved_model,
            inputs=[ct.TensorType(shape=(max_len,), name="embedding_input", dtype=np.int32)],
            source="tensorflow"
        )

        # Save the Core ML model
        coreml_model.save('toxic_comment_analysis_model.mlmodel')
        print("Model successfully converted to Core ML format.")

Code including Dockerfile & start script as GitHub Gist: https://gist.github.com/fishcharlie/af74d767a3ba1ffbf18cbc6d6a131089

I created a Lemmy community specifically for TensorFlow! Check it out and subscribe if you're interested.

Declaration

We, the undersigned members of the Open Source community, assert that Open Source is defined solely by the Open Source Definition (OSD) version 1.9.

Any amendments or new definitions shall only be recognized if declared by clear community consensus through a transparent process to be determined.

Unsplash has a ton of free content and a great API, so I used the hugchat API to generate search queries based on some user input text and fetched images from there.

Buggy test site (using all free APIs, so it will break frequently; the free Unsplash* API is limited to 50 pics/h) here: http://aisitegeneration.devsoft.co.za *sorry Unsplash, I haven't added the attribution for photos yet, I will soon, ok?
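
In case anyone wants to poke at the same idea, the fetching side is roughly this (a sketch; the endpoint and response fields are the standard public Unsplash search API, but check the current docs, and the query here stands in for whatever hugchat generates):

import requests

UNSPLASH_ACCESS_KEY = "YOUR_ACCESS_KEY"  # placeholder

def fetch_images(query: str, per_page: int = 5) -> list:
    # Standard Unsplash photo-search endpoint (the demo tier is rate-limited)
    resp = requests.get(
        "https://api.unsplash.com/search/photos",
        params={"query": query, "per_page": per_page},
        headers={"Authorization": f"Client-ID {UNSPLASH_ACCESS_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Keep the image URL plus the attribution info Unsplash requires
    return [
        {
            "url": photo["urls"]["regular"],
            "photographer": photo["user"]["name"],
            "link": photo["links"]["html"],
        }
        for photo in resp.json()["results"]
    ]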

Thoughts on this approach vs generative AI?

submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/machinelearning

I am currently trying to get a bit more into ML; for me that means playing with it in some context I already know, or applying it to something interesting. Either way, I am aware this whole endeavor is a bit of a stretch, and that a good grasp of machine learning requires solid mathematical knowledge.

In my off time I have been recreating a digital copy of a tabletop card game called Scout: For The Show, and I had the great idea of trying to make an autonomous agent for it based on machine learning (definitely the best starting idea /s, but there is still a lot to learn from failures).

First, I did the naive thing and imagined the inputs and outputs from a player's perspective: current hand, number of turns taken, count of cards in other players' hands, and so on. But my intuition tells me this is in some way very wrong (?): the "shapes" of these inputs/outputs are weird, and I don't think the model would respond with a valid move any time soon during training like this, if ever.

Second, I searched far and wide for card games and machine learning and found some resources; they usually reduce the problem space as much as possible and apply the model only to a subset of the information (often represented in completely different formats/dimensions, e.g. as a Markov Decision Process).

Obviously I am not asking for a mathematical analysis of the game in question. In a broad sense I am looking for any kind of pointers that might apply here; I am aware this is a very brute-force approach to something that should be carefully analyzed mathematically, with a model derived from that analysis.

Thanks for any pointers, wisdoms or ideas!


Notes:
I am coming from a software development background (Python mainly), so programming-wise it's not that far a leap, and I have already played with YOLO models, though only as a user.

The Scout card game has 45 cards with a number (1-10) on the top and bottom; the main objective is to capture points by playing stronger card combinations, either sets of a single number (1-1-1, 9-9, ...) or sequences/straights (2-3, 5-6-7-8, ...).
The twist is that cards in hand can't be moved or flipped around; only the top-side number is important for most of the game (and each variation of the top/bottom numbers is contained only once: 1/10 and 10/1 are the same card, only flipped).
Players take turns either playing a new hand on the table (Show - capturing the remaining hand, scoring) or taking one card from the table (Scout) and putting it anywhere in their hand, even flipping it top/bottom.
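
To make the "weird shapes" concern concrete, here is roughly the naive encoding I have in mind (hypothetical and simplified, with made-up size limits): pad everything to fixed lengths, and add an action mask so the policy can only select legal moves instead of having to learn the rules from scratch:

import numpy as np

MAX_HAND = 15  # assumed upper bound on hand size

def encode_cards(cards: list, max_len: int) -> np.ndarray:
    # Each card is a (top, bottom) pair scaled to [0, 1], zero-padded to max_len.
    arr = np.zeros((max_len, 2), dtype=np.float32)
    for i, (top, bottom) in enumerate(cards[:max_len]):
        arr[i] = (top / 10.0, bottom / 10.0)
    return arr.ravel()

def encode_state(hand, table, opponent_counts, turns_taken) -> np.ndarray:
    # Fixed-size observation: own hand, current table combination,
    # opponents' hand sizes, and a normalized turn counter.
    return np.concatenate([
        encode_cards(hand, MAX_HAND),
        encode_cards(table, MAX_HAND),
        np.asarray(opponent_counts, dtype=np.float32) / MAX_HAND,
        np.asarray([turns_taken / 100.0], dtype=np.float32),
    ])

def show_action_mask(hand: list, is_valid_show) -> np.ndarray:
    # One action per contiguous hand slice (start, length); illegal entries are
    # masked out of the policy's softmax. is_valid_show would implement the
    # actual set/straight rules.
    mask = np.zeros((MAX_HAND, MAX_HAND), dtype=bool)
    n = min(len(hand), MAX_HAND)
    for start in range(n):
        for length in range(1, n - start + 1):
            mask[start, length - 1] = is_valid_show(hand[start:start + length])
    return mask.ravel()

The masking trick seems to be what the card-game resources rely on: the network always outputs the full fixed-size action space, and illegal moves are simply never sampled, so "valid moves" stop being something the model has to learn.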

Resources I have found:
https://www.youtube.com/watch?v=IQLkPgkLMNg (Great explanation of the problems with solved/unsolved games, minimax, MCTS etc)
https://www.youtube.com/watch?v=vXtfdGphr3c (Reinforcement Learning)

cross-posted from: https://slrpnk.net/post/5501378

For folks who aren't sure how to interpret this, what we're looking at here is early work establishing an upper bound on the complexity of a problem that a model can handle based on its size. Research like this is absolutely essential for determining whether these absurdly large models are actually going to achieve the results people have already ascribed to them on any sort of consistent basis. Previous work on monosemanticity and superposition is relevant here, particularly with regard to unpacking where and when these errors will occur.

I've been thinking about this a lot with regard to how poorly defined the output space they're trying to achieve is. Currently we're trying to encode one or more human languages, logical/spatial reasoning (particularly for multimodal models), a variety of writing styles, and some set of arbitrary facts (to say nothing of the nuance associated with these facts). Just by making an informal order-of-magnitude argument, I think we can quickly determine that a lot of the supposed capabilities of these massive models have strict theoretical limitations on their correctness.

This should, however, give one hope for more specialized models. Nearly every one of the above-mentioned "skills" is small enough to fit into our largest models with absolute correctness. Where things get tough is when you fail to clearly define your output space and to focus training so as to maximize the encoding efficiency for a given number of parameters.

GPU Recommendation (self.machinelearning)
submitted 1 year ago by filister to c/machinelearning

I am looking to build a PC where I can run some LLMs and also PyTorch, and maybe do some video encoding, and I was wondering what the best-value GPU with at least 16GB of VRAM is that I can buy right now.

For the record, I really hate NVIDIA and I would prefer not to give them my money, but if it is the only viable option I would probably wait till they release the 4070 Ti Super. But what about the AMD GPUs (7800 XT) or the Arc A770? Are they going to work? In the past I had an AMD GPU, and making ROCm work with it was a pain in the ***.

Ahoy there, matey! Welcome aboard Big Top Entertainment, the finest entertainment company on the seven seas!

cross-posted from: https://slrpnk.net/post/3892266

Institution: Cambridge
Lecturer: Petar Velickovic
University Course Code: seminar
Subject: #math #machinelearning #neuralnetworks
Description: Deriving graph neural networks (GNNs) from first principles, motivating their use, and explaining how they have emerged along several related research lines.
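
Not from the lecture itself, but as a quick orientation for anyone new to the topic, here is a minimal version of the kind of layer such a derivation arrives at (a sketch following the common GCN formulation):

import numpy as np

def gcn_layer(adjacency: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # One message-passing step: X' = ReLU(D^(-1/2) (A + I) D^(-1/2) X W)
    a_hat = adjacency + np.eye(adjacency.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # degree normalization
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0.0)

# Toy usage: a 3-node path graph, 2 input features, 4 output features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, X, W).shape)  # (3, 4)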

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/machinelearning

cross-posted from: https://slrpnk.net/post/3863486

Institution: MIT
Lecturer: Prof. Manolis Kellis
University Course Code: MIT 6.047
Subject: #biology #computationalbiology #machinelearning

More at [email protected]

Hello, ML enthusiasts! 🚀🤖 We analyzed rotational equilibria in our latest work, Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks.

💡 Our Findings: Balanced average rotational updates (effective learning rate) across all network components may play a key role in the effectiveness of AdamW.

🔗 Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks
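
For anyone who wants a feel for the quantity involved (my paraphrase, not the authors' code): the rotational update of a layer can be measured as the angle between its flattened weight vector before and after an optimizer step:

import numpy as np

def angular_update(w_before: np.ndarray, w_after: np.ndarray) -> float:
    # Angle (in radians) between successive weight vectors of one layer.
    a, b = w_before.ravel(), w_after.ravel()
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

Tracking this per layer over training shows whether the average rotation settles to a shared equilibrium across the network, which is the balance described above.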

Looking forward to hearing your thoughts! Let's discuss this fascinating topic together!

Pretty cool thinking and promising early results.

This is about Benjamin Grimmer's paper https://arxiv.org/abs/2307.06324, where he proves that, under certain conditions, large steps lead to faster convergence.
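
As a toy illustration of the mechanism (the step pattern below is made up; the certified step sequences are in the paper): gradient descent where most steps use the classical 1/L size, but one step per cycle is deliberately long:

import numpy as np

def grad_descent(grad, x0, lipschitz, pattern, iters):
    # Gradient descent with a repeating step-size pattern, in units of 1/L.
    x = np.asarray(x0, dtype=float)
    for t in range(iters):
        step = pattern[t % len(pattern)] / lipschitz
        x = x - step * grad(x)
    return x

# Toy problem: f(x) = 0.5 * x^T Q x, with L = largest eigenvalue of Q
Q = np.diag([1.0, 10.0])
L = 10.0
x_final = grad_descent(lambda x: Q @ x, [1.0, 1.0], L,
                       pattern=[1.0, 1.0, 3.0], iters=30)
print(x_final)  # converges despite the periodic 3/L "long" step

The surprising part of the result is that step sizes which are individually too large to be stable can still speed up convergence when embedded in the right sequence.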

I've got a bot running/in development to detect and flag toxic content on Lemmy, but I'd like to improve on it, as I'm getting quite a few false positives. I think part of the reason is that what constitutes toxic content often depends on the parent comment or post.

During a recent postgrad assignment I was taught (and saw for myself) that a bag-of-words model usually outperforms LSTM or transformer models for toxic text classification, so I've run with that, but I'm wondering if it was the right choice.

Does anyone have any ideas on what kind of model would be most suitable to include a parent as context, but to not explicitly consider whether the parent is toxic? I'm guessing some sort of transformer model, but I'm not quite sure how it might look/work.
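
One cheap way to fold the parent in without explicitly judging it (a sketch; the data below is made up for illustration): vectorize the comment and its parent with separate vocabularies, so the classifier can learn that the same word carries different weight in each position:

import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy (comment, parent, label) data, purely illustrative
comments = ["you would know", "you would know", "thanks, that helps"]
parents = ["what do experts say?", "you are an idiot", "here is the answer"]
labels = [0, 1, 0]

comment_vec = TfidfVectorizer().fit(comments)
parent_vec = TfidfVectorizer().fit(parents)

# Separate feature columns for comment text vs parent text
X = sp.hstack([comment_vec.transform(comments), parent_vec.transform(parents)])
clf = LogisticRegression().fit(X, labels)

The toy rows are chosen to show the point: the same comment text can flip label depending on its parent, which a shared bag-of-words cannot express unless the features distinguish the two roles. A transformer fine-tuned on parent/comment pairs joined with a separator token is the heavier version of the same idea, but this variant keeps the existing bag-of-words pipeline intact.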

Hi everyone, sorry if this is not the right community, just let me know if so.

I'm wondering if anyone has any recommendations for which job agencies to register with for ML jobs. I have experience with Python, mainly using PyTorch, and a bit of TensorFlow a few years ago.

cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

I am an ML engineer/researcher but have never looked into music before. Some quick googling turns up plenty of websites doing automatic music generation, but I'm not sure what methods/architectures are being used. I'm sure I could find papers with more searching, but I'm hoping someone can give me a summary of the current SOTA and maybe some links to code/models to get started with.

Where can I go to learn about and discuss Facebook's Llama 2 source code? There aren't many comments in the code.

The MLOps community is flooded with tools, especially pipeline orchestration tools. What does your stack look like?
