this post was submitted on 29 Aug 2023
63 points (95.7% liked)


It's a bit of a weird shower thought, but basically I was wondering, hypothetically, if it would be possible to take data from a social media site like Reddit, map the most commonly used words to numbers starting at 1, and use a separate application to translate the text back and forth.

So if the word "because" were number 100, it would be stored as three characters instead of seven.

There could also be additions for suffixes, so "gardening" could be 5000+1, or a word like "hoped" could be 2000-2 because the "e" is already present.

Would this result in any kind of space savings when applied to larger amounts of text, like a book series?
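
Below is a minimal sketch of this scheme in Python. The tiny corpus and all the names are invented for illustration; a real codebook would be built once from a large corpus (something like a full Reddit comment dump) and shared by both the encoder and the decoder.

```python
from collections import Counter

# Build a frequency-ranked codebook from a sample corpus. In a real
# system this would come from a large shared corpus, and both the
# encoder and decoder would need the exact same codebook.
corpus = "the cat sat on the mat because the mat was warm".split()
rank_of = {w: i for i, (w, _) in enumerate(Counter(corpus).most_common(), start=1)}
word_of = {i: w for w, i in rank_of.items()}

def encode(text: str) -> list[int]:
    """Replace each word with its frequency rank (assumes every word is in the codebook)."""
    return [rank_of[w] for w in text.split()]

def decode(codes: list[int]) -> str:
    """Invert the mapping to recover the original text."""
    return " ".join(word_of[c] for c in codes)

msg = "the cat sat on the mat"
codes = encode(msg)
print(codes)  # e.g. [1, 3, 4, 5, 1, 2] (exact ranks depend on tie-breaking)
assert decode(codes) == msg
```

Stored as decimal digits plus separators, rank 100 would take three characters where "because" takes seven, as in the example above.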

[–] [email protected] 3 points 1 year ago (10 children)

There's a website (I can't remember the link) where any text you search for can already be found, written on one of its pages.

It uses an algorithm to generate every possible page of text in the English alphabet, including gobbledygook and random keyboard smashing.

You can then share a link to a full page of text, and the link is far smaller than the page itself. Given infinite computational resources, it would be possible to parse any program or any piece of software into its text equivalent and then generate the URL that this algorithm assigns to that entire page, reducing a thousand characters to 16.

It would then be possible to make a page consisting of these 16-character codes and then compress that page in the same way, turning 70,000 or so words of text into 16 characters.

Once you had a page full of those, you could compress again by finding the page that contains the links to all of the previous pages.

Given that this is perfect compression, you could literally reduce any file to a single URL (plus some information about how many pages deep you have to go, and whatever is needed about the final file's structure and format at the end of the reconstruction). That URL regenerates the page you need to regenerate the next page, which regenerates the page after that, and so on until you have fully reconstructed the file.

And given that these pages can be summoned by a URL and do not actually exist until the algorithm generates them, it is entirely feasible for the algorithm itself to be stored on your local computer, meaning that with a somewhat complicated but still reasonable system you could send a file of any size to any person in a text message.

The problem is the computational resources needed to run this algorithm: it would have to crunch through the numbers to rebuild every single file sent its way, and even on a fairly fast system that's going to take some time.

However, if anyone wants to take this idea and run with it, I'm pretty sure the world would appreciate your efforts. Since we have the ability to create a page of text using an algorithm, we probably should use it, and once the approach becomes commonplace I'm sure people will build chiplet accelerators for the process.
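
For a concrete picture of the core trick such a site relies on, here is a minimal Python sketch (alphabet, sample page, and function names all invented for illustration): enumerate every possible page, and a page's index in that enumeration plays the role of its URL.

```python
# A sketch of the "every possible page" idea, reduced to its core:
# enumerating all strings over an alphabet gives every page a unique
# index, and that index acts as the page's "address".
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."
BASE = len(ALPHABET)  # 29 symbols

def page_to_index(page: str) -> int:
    """Treat the page as a base-29 number: its position in the enumeration."""
    n = 0
    for ch in page:
        n = n * BASE + ALPHABET.index(ch)
    return n

def index_to_page(n: int, length: int) -> str:
    """Invert the mapping for a page of known length."""
    chars = []
    for _ in range(length):
        n, r = divmod(n, BASE)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

page = "to be or not to be"
idx = page_to_index(page)
assert index_to_page(idx, len(page)) == page

# The catch: the index is just the page written in another base, so
# writing it down takes roughly as many characters as the page itself.
print(len(page), len(str(idx)))
```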

[–] [email protected] 1 points 1 year ago

You cannot represent everything using English text, and text in known languages can be compressed so well precisely because we know details about its structure. (And even then, it cannot be compressed that extremely; information theory explains this very well, and computational power isn't the only limit here.)

If you cannot represent everything using valid English text, you cannot compress at high rates without losing information. A big part of digital data is actually noise, and noise cannot be compressed, by definition.
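
To put rough numbers on that, here is a quick pigeonhole count in Python (page size and alphabet sizes picked purely for illustration), comparing how many distinct pages exist with how many distinct 16-character URLs exist:

```python
# Pigeonhole sketch: count possible pages versus possible 16-character
# URLs. The page size and character sets are illustrative assumptions.
PAGE_CHARS = 3200        # characters on one hypothetical "page"
PAGE_ALPHABET = 29       # 26 letters plus space, comma, period
URL_CHARS = 16
URL_ALPHABET = 62        # a-z, A-Z, 0-9

pages = PAGE_ALPHABET ** PAGE_CHARS  # distinct possible pages
urls = URL_ALPHABET ** URL_CHARS     # distinct possible URLs

# Far more pages than URLs, so no scheme can give every page its own
# 16-character address without collisions (pigeonhole principle).
print(f"pages ~ 10^{len(str(pages)) - 1}")  # ~ 10^4679
print(f"urls  ~ 10^{len(str(urls)) - 1}")   # ~ 10^28
```

With vastly more pages than URLs, most pages can never get a unique 16-character address, so lossless "compression" of arbitrary pages down to 16 characters is ruled out before computational cost even enters the picture.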
