this post was submitted on 11 Aug 2023
Machine Learning | Artificial Intelligence
You aren't the author of Detoxify by any chance, are you? It uses the same classifications. I was originally using it but switched to my own model, as I really only needed binary classification and felt a new dataset better suited to Lemmy was needed anyway. I have two outputs (toxic and not-toxic).
I've been building my own dataset, as the existing ones on Huggingface seemed to contain a lot of content you might see on Twitter and were a poor match for Lemmy. Having said that, I've generally avoided putting that sort of content into the dataset, as I figured that if I can't easily decide whether it's toxic, then how could a model?
Here are a few where I've had to go back to the parent comment or post to try and work out whether it was toxic or not:
I originally thought that, and I'm actively tuning my model to try and get the best results on the comment alone, but I don't think I'll ever get better than about 80% accuracy. I've come to the conclusion that those cases in the grey zone where toxic ~= not-toxic can only be resolved by looking upstream.
Oof, pop-culture references are hard and I had not considered that at all.
Thanks for the examples, I'll have a think on how to deal with those.
My only insight is one you already had.
Test at least the comment before, and then use the output to dampen or amplify the final result.
Sorry for being no help at all.
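For what it's worth, here's a rough sketch of that dampen/amplify idea; score() stands in for whatever your model returns, and the blend weight is a pure guess:

```python
def contextual_score(comment, parent, score, parent_weight=0.3):
    """Blend a comment's own toxicity score with its parent's.

    `score` is any callable that returns a probability in [0, 1].
    A toxic parent nudges the result up, a clean parent nudges it down.
    """
    own = score(comment)
    if parent is None:
        return own
    upstream = score(parent)
    # Shift the comment's score toward the parent's by parent_weight.
    return (1.0 - parent_weight) * own + parent_weight * upstream

# e.g. is_toxic = contextual_score(comment_text, parent_text, my_model_score) >= 0.5
```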
--
My project is very basic but I'll post it here for any insight you might get out of it.
I teach Python in a variety of settings and this is part of a class.
The data used is from Kaggle: https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/
The original data came from the Wikipedia toxic comments dataset.
There is also code there from several users, which was very helpful for getting some insight into the problem.
The data is dirty and needs cleaning up, so I've done that and posted the result on HF here:
https://huggingface.co/datasets/vluz/Tox
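If anyone wants to poke at it, something like this should pull it down; the split and column names below are assumptions, so check the dataset card for the real layout:

```python
from datasets import load_dataset

# Load the cleaned-up Jigsaw data from the Hugging Face Hub.
# Split and column names are assumptions, not guaranteed by the repo.
ds = load_dataset("vluz/Tox", split="train")
print(ds[0])  # expect something like {"comment_text": ..., "toxic": 0, ...}
```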
The model is a very basic TensorFlow implementation intended for teaching TF basics.
https://github.com/vluz/ToxTest
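The gist of it, heavily simplified (this is not the exact code from the repo; vocabulary size, sequence length, and layer sizes are placeholders):

```python
import tensorflow as tf

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Vectorize raw text, embed, pool, then score each label with its own sigmoid
# (multi-label, same six classes as the original Jigsaw challenge).
vectorize = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=300)
# vectorize.adapt(train_texts)  # build the vocabulary from the training comments first

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(20000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(LABELS), activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
# model.fit(train_texts, train_labels, epochs=30)
```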
Some of the helper scripts are very wonky and need fixing before I present this in class.
Here are my weights after 30 epochs:
https://huggingface.co/vluz/toxmodel30
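To load those locally, something along these lines should work; I haven't spelled out the checkpoint filename here, so treat it as a placeholder:

```python
from huggingface_hub import snapshot_download

# Grab the whole weights repo; returns the local directory it was downloaded to.
local_dir = snapshot_download(repo_id="vluz/toxmodel30")
# model.load_weights(f"{local_dir}/<checkpoint file>")  # substitute the real filename
```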
And here it is running on an HF space:
https://huggingface.co/spaces/vluz/Tox