this post was submitted on 02 Oct 2023
266 points (95.9% liked)

[–] [email protected] 47 points 1 year ago* (last edited 1 year ago) (5 children)

Now this is UX. Wonderful stuff.

[Screenshot of the page showing 20 mouse cursors moving across the page]

[–] [email protected] 29 points 1 year ago (4 children)

And the site's dark mode is fantastic...

[–] [email protected] 11 points 1 year ago (1 children)
[–] [email protected] 3 points 1 year ago

Lol, who turned the lights out?

[–] [email protected] 4 points 1 year ago

This one really got a laugh out of me

[–] [email protected] 15 points 1 year ago

Thank god for reader view because this makes me feel physically sick to look at.

[–] [email protected] 3 points 1 year ago

I love it. People should be having more fun with their own personal sites.

[–] abhibeckert 24 points 1 year ago* (last edited 1 year ago) (1 children)

I love the comparison of string length of the same UTF-8 string in four programming languages (only the last one is correct, by the way):

Python 3:

len("🤦🏼‍♂️")

5

JavaScript / Java / C#:

"🤦🏼‍♂️".length

7

Rust:

println!("{}", "🤦🏼‍♂️".len());

17

Swift:

print("🤦🏼‍♂️".count)

1

[–] [email protected] 43 points 1 year ago* (last edited 1 year ago) (2 children)

That depends on your definition of correct lmao. Rust explicitly counts the UTF-8 bytes, because that's the length of the raw data contained in the string. There are many times when that value is more useful than the grapheme count.
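
For what it's worth, a minimal sketch (mine, not from the article) of a case where the byte count is exactly the number you want; the 17 assumes the same facepalm-with-modifiers emoji as above:

```rust
fn main() {
    let s = "🤦🏼‍♂️";

    // str::len() is the size of the UTF-8 encoding in bytes - the number you
    // need for sizing buffers, checking storage limits, or a Content-Length.
    let bytes = s.len();

    let mut buf: Vec<u8> = Vec::with_capacity(bytes);
    buf.extend_from_slice(s.as_bytes());

    assert_eq!(buf.len(), bytes); // 17 for this emoji sequence
    println!("{} bytes on the wire", bytes);
}
```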

[–] [email protected] 16 points 1 year ago (2 children)

And Rust also has "🤦".chars().count(), which returns 1.

I would rather argue that Rust should not have a simple len function for strings, but since str is just a byte slice, it works that way.

Also also the len function clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.

[–] [email protected] 11 points 1 year ago

None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they're there for legacy reasons.
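
Purely as a hypothetical sketch of that idea (the names byte_len and char_count are invented here; a grapheme_count would still need an external segmentation crate like the one linked further down):

```rust
// Hypothetical explicit-length API: no bare len(), you say which count you mean.
trait ExplicitLen {
    fn byte_len(&self) -> usize;   // size of the UTF-8 encoding
    fn char_count(&self) -> usize; // Unicode scalar values (code points)
}

impl ExplicitLen for str {
    fn byte_len(&self) -> usize {
        self.len()
    }
    fn char_count(&self) -> usize {
        self.chars().count()
    }
}

fn main() {
    let s = "🤦🏼‍♂️";
    println!("{} bytes, {} code points", s.byte_len(), s.char_count());
    // prints "17 bytes, 5 code points"; a grapheme count would be 1
}
```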

[–] [email protected] 10 points 1 year ago (1 children)

That Rust function returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely useful. You need a facepalm emoji with skin tone modifiers to see the difference.

The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation
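
For the curious, a minimal example of what that looks like (assuming unicode-segmentation as a dependency; the exact version is up to you):

```rust
// Cargo.toml: unicode-segmentation = "1"
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let s = "🤦🏼‍♂️";

    // Extended grapheme clusters: what a reader would call "characters".
    println!("graphemes: {}", s.graphemes(true).count()); // 1

    // The same iterator splits mixed text into user-perceived characters.
    let pieces: Vec<&str> = "héllo 🤦🏼‍♂️".graphemes(true).collect();
    println!("{:?}", pieces);
}
```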

[–] Djehngo 9 points 1 year ago (1 children)

Makes sense. The code-point split is stable, meaning it's fine to put in the standard library; the grapheme split changes every year, so that volatility is probably better off in a crate.

[–] [email protected] 7 points 1 year ago (2 children)

Yeah, although having now seen two commenters with relatively high confidence claiming that counting codepoints ought to be enough...

...and me almost having been the third such commenter, had I not decided to read the article first...

...I'm starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.

Like, I've worked with decoding strings quite a bit in the past, I felt like I had an above average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.

[–] [email protected] 7 points 1 year ago (2 children)

Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it's also kind of mad to put something like this into a stdlib.

Its behaviour will break with each new Unicode standard. And you'd have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

~~The way UTF-8 works is fixed though, isn't it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.~~

~~Plus in Rust, you can instead use .chars().count() as Rust's char type is UTF-8 Unicode encoded, thus strings are as well.~~

turns out one should read the article before commenting

[–] [email protected] 6 points 1 year ago (1 children)

No offense, but did you read the article?

You should at least read the section "Wouldn’t UTF-32 be easier for everything?" and the following two sections for the context here.

So, everything you've said is correct, but it's irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.

[–] [email protected] 3 points 1 year ago (1 children)

yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

[–] [email protected] 3 points 1 year ago

No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

[–] [email protected] 2 points 1 year ago

Nope, the article says that what is and is not a grapheme cluster changes between unicode versions each year :)

[–] [email protected] 4 points 1 year ago

It might make more sense to expose a standard library API for unicode data provided by (and updated with) the operating system. Something like the time zone database.

[–] [email protected] 16 points 1 year ago (2 children)

Unicode is thoroughly underrated.

UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset by taking advantage of the extra bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend than if you rewrite).
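
A small sketch of that backwards compatibility (Rust here, but the property lives in the encoding itself): plain ASCII encodes to the exact same bytes under UTF-8, and anything beyond ASCII sets the high bit, so the two can never be confused.

```rust
fn main() {
    // Pure ASCII text is byte-for-byte identical under UTF-8.
    let ascii = "hello";
    assert_eq!(ascii.as_bytes(), &[0x68, 0x65, 0x6C, 0x6C, 0x6F]);
    assert!(ascii.as_bytes().iter().all(|b| b & 0x80 == 0)); // high bit clear

    // Non-ASCII code points only use bytes with the high bit set, so old
    // ASCII-only software never mistakes them for ASCII.
    let euro = "€"; // U+20AC
    assert_eq!(euro.as_bytes(), &[0xE2, 0x82, 0xAC]);
    assert!(euro.as_bytes().iter().all(|b| b & 0x80 != 0));

    println!("ASCII stays ASCII under UTF-8");
}
```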

On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.

[–] [email protected] 5 points 1 year ago (1 children)

I've recently come to appreciate the "refactor the code while you write it" and "keep possible future changes in mind" ideas more and more. I think it really increases the probability that the system can live on instead of becoming obsolete.

[–] [email protected] 1 points 1 year ago

Yes, but once code becomes so spaghettified that a "refactor while you write it" becomes too time-intensive and error-prone, it's already too late.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Unrelated, but what do you think (if anything) might end up being used by the last remaining reserved bit in IP packet header flags?

https://en.wikipedia.org/wiki/Evil_bit

https://en.wikipedia.org/wiki/Internet_Protocol_version_4#Header

[–] [email protected] 14 points 1 year ago

They believed 65,536 characters would be enough for all human languages.

Gotta love these kinds of misjudgements. Obviously, they were pushing against pretty hard size restrictions back then, but at the same time they did have the explicit goal of fitting in all languages, and if you just look at the Asian languages, it should be pretty clear that 65,536 is not a lot at all...

[–] [email protected] 13 points 1 year ago (1 children)

Man, Unicode is one of those things that is both brilliant and absolutely absurd. There is so much complexity to language and making one system to rule them all ends up involving so many compromises. Unicode has metadata for each character and algorithms dealing with normalization and capitalization and sorting. With human language being as varied as it is, these algorithms can have really wacky results. Another good article on it is https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/

And if you want to RENDER text, oh boy. Look at this: https://faultlore.com/blah/text-hates-you/
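
To give a taste of the normalization part: the same on-screen "é" can be one precomposed code point or a letter plus a combining accent, and they only compare equal after normalizing. A minimal sketch, assuming the unicode-normalization crate (not part of the standard library):

```rust
// Cargo.toml: unicode-normalization = "0.1"
use unicode_normalization::UnicodeNormalization;

fn main() {
    let precomposed = "\u{00E9}"; // é as a single code point
    let decomposed = "e\u{0301}"; // e + combining acute accent

    // Byte-wise these are different strings...
    assert_ne!(precomposed, decomposed);

    // ...but they are canonically equivalent, and NFC maps both to one form.
    let a: String = precomposed.nfc().collect();
    let b: String = decomposed.nfc().collect();
    assert_eq!(a, b);

    println!("equal after NFC normalization");
}
```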

[–] [email protected] 5 points 1 year ago

Oh no, we've been hacked! There's a Chinese character in the event log! Or was it just Unicode?

The entire video is worth watching: the history of "plain text" from the beginning of computing.

[–] [email protected] 11 points 1 year ago (1 children)

Holy Jesus, what a color scheme.

[–] [email protected] 4 points 1 year ago

I prefer it to black on white. Inferior to dark mode though.

[–] [email protected] 11 points 1 year ago (2 children)

The mouse pointer background is kind of a dick move. Good article, but the background is annoying for tired old eyes - which I assume are a target demographic for that article.

[–] [email protected] 5 points 1 year ago

Wow this is awful on mobile lol

[–] [email protected] 4 points 1 year ago

JS console: document.querySelector('.pointers').hidden = true

[–] [email protected] 11 points 1 year ago

Was actually a great read. I didn't realize there were so many ways to encode the same character. TIL.

[–] [email protected] 10 points 1 year ago

Because strings are such a huge problem nowadays, every single software developer needs to know their internals. I can't stress it enough: strings are such a burden nowadays that if you don't know how to encode and decode one, you're beyond fucked. It'll make programming so difficult - no, even worse, nigh impossible! Only those who know about Unicode will be able to write any meaningful code.

[–] [email protected] 9 points 1 year ago (6 children)

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

Who wants to tell the author that not everything was invented in the US? (And computers certainly weren't)

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

The stupid thing is, all the author had to do was write "kind of tells you who invented ASCII" and he'd have been 100% right in his logic and history.

[–] [email protected] 7 points 1 year ago

If you go to the page without the trailing slash, the images don't load

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

I do understand why old Unicode versions re-used "i" and "I" for the Turkish lowercase dotted i and the Turkish uppercase dotless I, but I don't understand why more recent versions have not introduced two new characters that look exactly the same but don't require locale-dependent knowledge to do something as basic as "to lowercase".
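
A small illustration of why that lowercasing is painful (Rust here; its to_lowercase deliberately applies the default, locale-independent Unicode mapping):

```rust
fn main() {
    // Default Unicode case mapping: "I" lowercases to "i".
    assert_eq!("I".to_lowercase(), "i");

    // Turkish expects "I" to lowercase to dotless "ı" (U+0131), and dotted
    // capital "İ" (U+0130) to lowercase to "i" - but without locale
    // information the standard library has no way to know you want that.
    let dotless_i = '\u{0131}';        // ı
    let dotted_capital_i = '\u{0130}'; // İ
    println!("Turkish pair: {} / {}", dotted_capital_i, dotless_i);
}
```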

[–] [email protected] 1 points 1 year ago

Probably for the same reason Spanish used to consider ch, ll, and rr single characters.

[–] [email protected] 2 points 1 year ago

Just give me plain UTF-32 with ~4 billion code points; that really should be enough for any symbol we can come up with. Give everything its own code point, no bullshit with combined glyphs that make text processing a nightmare. I need to be able to do a strlen either on byte length or on the number of characters without the CPU spending minutes counting each individual character.

I think Unicode started as a great idea and then kind of blundered into aimless "everybody kinda does what everyone wants" territory. Unicode is for humans, sure, but we shouldn't forget that computers actually have to do the work.
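
For context, a tiny sketch (mine) of why fixed-width code units alone don't get you an O(1) "character" count: even with one code point per 32-bit unit, a single user-perceived character can still span several of them, as the facepalm example further up shows.

```rust
fn main() {
    // Rust's char is a 32-bit Unicode scalar value, so a Vec<char> is
    // effectively a UTF-32 buffer with O(1) indexing and length.
    let utf32: Vec<char> = "🤦🏼‍♂️".chars().collect();

    // Still 5 code units for what a reader sees as one character.
    println!("code units: {}", utf32.len()); // 5, not 1
}
```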

[–] [email protected] 2 points 1 year ago

The article sure mentions 💩a lot.

[–] Espi 1 points 1 year ago

I'm personally waiting for UTF-64 and for Unicode to go back to a fixed-width encoding, forgetting about merging code points into complex characters. Just keep a zeptillion code points for absolutely everything.
