arendjr

joined 7 months ago
[–] [email protected] 6 points 1 day ago

Try browsing the list of somewhat recent #CVEs rated critical, as I just did to verify. A majority of them are not related to any memory errors. Will you tell all of them “just use a different programming language”?

I'm sorry, but this has been repeatedly refuted:

And yes, they are telling their engineers to use a different programming language. In fact, even the NSA is saying exactly that: https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues/

It doesn’t come out today, it’s been there for a long time, and it’s standardized, proven and stable.

This seems like an extremely short-sighted red herring. C has so many gaps in its specification (it has no problem declaring things “undefined behavior” or “implementation defined”) that the standard is essentially useless for kernel-level programming. The Linux kernel is written in C and used to build only with GCC. Now it builds with both GCC and LLVM, and it relies on many non-standard compiler extensions for each. The effort to add LLVM support took them 10 years. That's 10 years for a migration from C to C. Ask yourself: how is that possible if the language is so well standardized?

[–] [email protected] 1 points 1 day ago

Great suggestions! One nitpick:

But in principle I find this quite workable, as you get to write your CI code in Rust.

Having used xtask in the past, I’d say this is a downside. CI code is tedious enough to debug as it is, and slowing down the cycle by adding Rust compilation into the mix was a horrible experience. On top of that, CI is one place where Rust’s focus on correctness isn’t worth that much, since it’s an isolated, project-specific environment anyway.

I’d rather use Deno, or indeed just (the command runner), for that.

[–] [email protected] 12 points 1 day ago* (last edited 1 day ago) (1 children)

No, OP asked for a black and white winner. I was elaborating because I don’t think it’s that black and white, but if you want a singular answer I think it should be clear: Rust.

[–] [email protected] 14 points 1 day ago* (last edited 1 day ago) (3 children)

I would say at this point it’s clearly decided that Rust will be part of the future. Maybe there’s a meaningful place for Zig too, but that’s the part that’s still too early to tell.

If you think Zig still has a chance at overtaking Rust though, that’s very much wishful thinking. Zig isn’t memory safe, so any areas where security is paramount are out of reach for it. The industry isn’t going back in that direction.

I actually think Zig might still have a chance in game development, and maybe in specialized areas where Rust’s borrow checker cannot really help anyway, such as JIT compilers.

[–] [email protected] 2 points 6 days ago

Ah yes, exactly.

[–] [email protected] 1 points 6 days ago (2 children)

Runtime performance is entirely unaffected by the use of macros. It can have a negative impact on compile-time performance though, if you overdo it.
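To illustrate (a made-up macro, nothing from any real crate): a declarative macro is expanded entirely at compile time, so the code that ends up in the binary is exactly what you would have written by hand:

macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    // After expansion this line is literally `3 * 3`; the macro itself
    // leaves no trace at runtime.
    let n = square!(3);
    assert_eq!(n, 9);
}

The cost is all paid by the compiler, which has to expand and then type-check the generated code; that's where overdoing it hurts.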

[–] [email protected] 1 points 1 week ago

Might be relevant to mention that Rust has formal verification methods available as well, similar to SPARK, but also optional. One that looks pretty appealing is this one: https://verus-lang.github.io/verus/guide/overview.html
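For a taste of what that looks like (a rough sketch based on the Verus guide; I may be off on syntax details): you annotate ordinary Rust functions with pre- and postconditions, and the verifier checks them statically:

use vstd::prelude::*;

verus! {

// Postconditions on the named return value are checked at compile time.
fn max(a: u64, b: u64) -> (result: u64)
    ensures
        result >= a,
        result >= b,
{
    if a >= b { a } else { b }
}

} // verus!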

[–] [email protected] 19 points 1 week ago (1 children)

While I can get behind most of the advice here, I don’t actually like the conditions array. Each condition function now needs additional checks to make sure it doesn’t overlap with the other condition functions. The else clauses handled this much more elegantly: adding another condition to the array has become a puzzle of verifying that all the conditions remain non-overlapping.
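A hypothetical example of what I mean (the rules are made up; only the structure matters): in the if/else version each branch is mutually exclusive by construction, whereas in the array version every predicate has to restate the bounds of its neighbours to stay non-overlapping:

// if/else: the `else` already guarantees age >= 12 in the second branch.
fn discount_if_else(age: u32) -> f64 {
    if age < 12 {
        0.5
    } else if age < 18 {
        0.25
    } else {
        0.0
    }
}

// Array of conditions: each predicate must repeat the other bounds;
// forget one and two rules silently match the same input.
fn discount_table(age: u32) -> f64 {
    let rules: &[(fn(u32) -> bool, f64)] = &[
        (|age| age < 12, 0.5),
        (|age| age >= 12 && age < 18, 0.25),
        (|age| age >= 18, 0.0),
    ];
    rules
        .iter()
        .find(|(condition, _)| condition(age))
        .map(|(_, discount)| *discount)
        .unwrap_or(0.0)
}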

[–] [email protected] 5 points 2 weeks ago (1 children)

I find Linear to be reasonably pleasant.

[–] [email protected] 43 points 2 weeks ago (1 children)

Issue resolved

 
DirectX Adopting SPIR-V (devblogs.microsoft.com) | 43 points
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

SPIR-V is the intermediate shader target used by Vulkan as well, so it sounds like this may indirectly make DirectX on Linux smoother.

[–] [email protected] 2 points 2 weeks ago

I mentioned it in the first comment:

the reason I tend to recommend B-Tree maps over hash maps for ordinary programming is consistent iteration order. It is simply too easy to run into a situation where you think iteration order doesn’t matter, but then it turns out it does in some subtle unforeseen way.

I’m not talking about bugs in the implementation of the map itself; I’m talking about unforeseen consequences in the user’s code, since they may not properly account for the randomness of the iteration order.
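A minimal sketch of the kind of thing I mean (made-up data, nothing from a real codebase):

use std::collections::{BTreeMap, HashMap};

fn main() {
    let pairs = [("b", 2), ("a", 1), ("c", 3)];

    // A BTreeMap always iterates in key order, so anything derived from the
    // iteration (serialized output, test snapshots, reports) stays stable.
    let btree: BTreeMap<_, _> = pairs.into_iter().collect();
    let keys: Vec<_> = btree.keys().copied().collect();
    assert_eq!(keys, ["a", "b", "c"]);

    // A HashMap's iteration order is unspecified and changes between runs,
    // because the default hasher is randomly seeded. Code that accidentally
    // depends on the order works today and breaks tomorrow.
    let hash: HashMap<_, _> = pairs.into_iter().collect();
    for (key, value) in &hash {
        println!("{key}: {value}");
    }
}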

[–] [email protected] 3 points 2 weeks ago (2 children)

Oh, I agree, they both have their use cases. But that doesn’t mean there aren’t plenty of situations where performance is effectively irrelevant, yet people still default to a hash map because they heard it’s faster (lookups are indeed O(1)). In those cases I’d say: as long as performance doesn’t matter, default to a B-Tree map rather than a hash map, because avoiding a class of bugs is worth more than a performance benefit you can’t even measure. (Not to mention that for smaller data sets B-Tree maps can often outperform hash maps thanks to better cache locality, but again, that hardly matters when the data set is small anyway.)

 

Biome v1.9 is out!

Today we celebrate both the first anniversary of Biome 🎊 and the release of Biome v1.9! Read our blog post for a look back at the first year and the new features of Biome v1.9.

In a nutshell:

  • Stable CSS formatting and linting. Enabled by default!
  • Stable GraphQL formatting and linting. Enabled by default!
  • .editorconfig support. Opt-in
  • biome search command to search for patterns in your source code.
  • New lint rules for JavaScript and its dialects.
 

With this post I've taken a somewhat more practical turn compared to previous Post-Architecture posts: it's aimed more at providing guidance on keeping (early) architecture as simple as possible. Let me know what you think!

 

After my previous post introducing Post-Architecture, I received a bunch of positive feedback, as well as enquiries from people wanting to know more. So I figured a follow-up was in order. Feel free to ask questions here as well as on Mastodon!

 

This post highlights my experience working with software architecture in startup environments. I think the approach is different enough from the traditional notion of software architecture that it may warrant its own term: post-architecture.

 

cross-posted from: https://programming.dev/post/12807878

This new version provides an easy path to migrate from ESLint and Prettier. It also introduces machine-readable reports for the formatter and the linter, new linter rules, and many fixes.

 


 

I just had a random thought: a common pattern in Rust is to do things such as:

let vec_a: Vec<String> = /* ... */;
let vec_b: Vec<String> = vec_a.into_iter().filter(some_filter).collect();

Usually, we need to be aware of the fact that Iterator::collect() allocates for the container we are collecting into. But in the snippet above, we've consumed a container of the same type. And since Rust has full ownership of the vector, in theory the memory allocated by vec_a could be reused to store the collected results of vec_b, meaning everything could be done in-place and no additional allocation is necessary.

It's a highly specific optimization though, so I wonder if such a thing has been implemented in the Rust compiler. Does anybody have an idea about this?
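For what it's worth, one way to poke at this empirically (just a sketch; even if the pointers match, the reuse is an internal optimization, not something guaranteed by the language or the standard library):

fn main() {
    let vec_a: Vec<String> = (0..1000).map(|i| i.to_string()).collect();
    let ptr_before = vec_a.as_ptr();

    // Same element type, a consuming iterator chain, collected back into a Vec.
    let vec_b: Vec<String> = vec_a.into_iter().filter(|s| s.len() > 1).collect();
    let ptr_after = vec_b.as_ptr();

    // If the standard library's in-place collect specialization applies,
    // the buffer pointer is unchanged.
    println!("allocation reused: {}", ptr_before == ptr_after);
}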

 

Just a progress update on a fun open-source project I'm involved with. Biome.js is a web toolchain written in Rust, and it provides a great excuse to play around with parsing technologies and other fun challenges :)

235 points
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”

In small print it says the data was collected over 2022 and 2023.
