this post was submitted on 07 Sep 2023
60 points (96.9% liked)

Programming


There was a time when this debate was bigger. The world seems to have shifted towards architectures and tooling that either do not allow dynamic linking or make it harder. This compromise makes things easier for the maintainers of the tools and languages, but it does take away choice from the user and developer. But maybe that's not important? What are your thoughts?

all 50 comments
[–] LeberechtReinhold 22 points 1 year ago (2 children)

I have yet to find a memory-hungry program whose appetite is caused by its dependencies rather than its data. And frankly, the disk space taken by all its libraries is minuscule compared to graphical assets.

You know what really hurts? When the program doesn't work because of a dependency. And this happens often across all OSes; threads about such failures are a dime a dozen in forums. "Package managers should just fix all the issues." Until they don't: wrong versions get uploaded, packages fail to compile, environment problems, etc.

So to me, the efficiency argument for dynamic linking doesn't really cut it. A bloated program is more useful than a program that doesn't work.

This is not to say that dynamic linking shouldn't be used. For programs doing any kind of elevation or administration, it's almost always better from a security perspective. But for general user programs? Static all the way.

[–] thirdBreakfast 3 points 1 year ago (2 children)

I read an interesting post by Ben Hoyt this morning called The small web is a beautiful thing - it touches a lot on this idea. (warning, long read).

I also always feel a bit uncomfortable having any dependencies at all (left-pad, never forget), but runtime dependencies? Those I really like to avoid.

I have Clipper-compiled executables written for clients 25 years ago that I can still run in a DOS VM in an emergency. They used a couple of libraries written in C for fast indexing and so on, but all statically linked.

But the Visual Basic/Access apps from 20 years ago with their dependencies on a large number of DLLs? Creating the environment would be an overwhelming challenge.

[–] LeberechtReinhold 6 points 1 year ago (2 children)

I kind of agree with your points, but I think there has to be a distinction of libs. Most deps should be static IMHO. But something like OpenSSL I can understand if you go with dynamic linking, especially if it's a security critical program.

But for "string parsing library #124" or random "gui lib #35".. Yeah, go with static.

[–] [email protected] 1 points 1 year ago

string parsing library #124

this could also become a major security problem, tho.

[–] thirdBreakfast 1 points 1 year ago

Great point. Sometimes the benefit of an external dependency being changeable is a great feature.

[–] [email protected] 2 points 1 year ago (1 children)

I can't not upvote someone who brings Clipper to the table :)

[–] thirdBreakfast 1 points 1 year ago

Us looking at developers still on dBase III ~inserts Judgmental Volturi meme~

[–] uis 1 points 1 year ago (1 children)

But for general user programs? Static all the way.

Does it include browsers?

[–] [email protected] 17 points 1 year ago (2 children)

The user never had much choice to begin with. If I write a program using version 1.2.3 of a library, then my application is going to need version 1.2.3 installed. But how the user gets 1.2.3 depends on their system, and in some cases they might be entirely unable to unless they grab a Flatpak or AppImage. I suppose it limits the ability to write shims over those libraries if you want to customize something at that level, but that's a niche use case that most people aren't going to need.

With a statically linked application, you can largely just ship your application and it will just work. You don't need to fuss about the user installing all the dependencies at the system level, and your application is prone to fewer user problems as a result.

[–] [email protected] 3 points 1 year ago

Only if the library is completely shitty and breaks between minor versions.

If the library is that bad, it's a strong sign you should avoid it entirely since it can't be relied on to do its job.

[–] uis 1 points 1 year ago

Not to disappoint you, but when I installed an HL1 build from 2007, I had a lot of library versions that did not exist back in 2007, and yet it works just fine.

[–] [email protected] 15 points 1 year ago* (last edited 1 year ago) (3 children)

Shared libraries save RAM.

Dynamic linking allows working around problematic libraries, or even adding functionality, if the app developer can't or won't.

Static linking makes sense sometimes, but not all the time.

[–] [email protected] 3 points 1 year ago (1 children)

Shared libraries save RAM.

Citation needed :) I was surprised, but I read (sorry, I can't find the source again) that in most cases dynamically linked libraries are loaded once, and usually by very few processes. This makes the RAM gain much less obvious. In addition, static linking allows inlining, which in turn allows aggressive constant propagation and dead code elimination, on top of LTO. All of this decreases binary size, sometimes in non-negligible ways.

[–] [email protected] 2 points 1 year ago (1 children)

I was surprised, but I read (sorry, I can't find the source again) that in most cases dynamically linked libraries are loaded once, and usually by very few processes.

That is easily disproved on my system by cat /proc/*/maps .

[–] [email protected] 2 points 1 year ago (1 children)

Someone found the link to the article I was thinking about.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

Ah, yes, I think I read Drew's post a few years ago. The message I take away from it is not that dynamic linking is without benefits, but merely that static linking isn't the end of the world (on systems like his).

[–] [email protected] 1 points 1 year ago (1 children)

if the app developer can't or won't

Does this apply if the app is open source?

[–] colonial 14 points 1 year ago (1 children)

Personally, I prefer static linking. There's just something appealing about an all-in-one binary.

It's also important to note that applications are rarely 100% one or the other. Full static linking is really only possible in the Linux (and BSD?) worlds thanks to syscall stability - on macOS and Windows, dynamically linking the local libc is the only good way to talk to the kernel.

(There have been some attempts made to avoid this. Most famously, Go attempted to bypass linking libc on macOS in favor of raw syscalls... only to discover that when the kernel devs say "unstable," they mean it.)

[–] [email protected] 9 points 1 year ago (3 children)

Disk is cheap, and it's easier to test exact versions of dependencies. As a user, I'd rather not have all my non-OS stuff mixed up.

[–] [email protected] 10 points 1 year ago (1 children)

From my understanding, unless a shared library is used only by one process at a time, static linking can increase memory usage by multiplying the memory footprint of that library's code segment. So it is not only about disk space.

But I suppose for an increasing number of modern applications, data and heap is much larger than that (though I am not particularly a fan ...)

[–] [email protected] 1 points 1 year ago

The gains in RAM are not even guaranteed. See my other comment.

[–] [email protected] 4 points 1 year ago

Dynamically linked all the way; you only have to update one thing (mostly) to fix a vulnerability in a dependency, not rebuild every package.

[–] [email protected] 4 points 1 year ago (2 children)

Disk space and RAM availability have increased a lot in the last decade, which has allowed the rise of the lazy programmer, who'll code not caring (or, increasingly, not knowing) about these things. Bloat is king now.

Dynamic linking allows you to save disk space and memory by ensuring all programs use the single version of a library lying around, so there's less testing. You're delegating the version tracking to distro package maintainers.

You can use the dl* family to better control what you use and if the dependency is FLOSS, the world's your oyster.

Static linking can make sense if you're developing portable code for a wide variety of OSs and/or architectures, or if your dependencies are small and/or not that common or whatever.

This, of course, is my take on the matter. YMMV.

[–] [email protected] 5 points 1 year ago (3 children)

Except with dynamic linking there is essentially an infinite amount of integration testing to do. Libraries change behaviour even when they shouldn't and cause bugs all the time, so testing everything packaged together once is overall much less work.

[–] Synthead 4 points 1 year ago (2 children)

It seems the world has shifted towards architectures and tooling that does not allow dynamic linking or makes it harder.

In what context? In Linux, dynamic links have always been a steady thing.

[–] [email protected] 4 points 1 year ago (2 children)

Some languages don't even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least the whole binary; looking at you, Rust!) at once. Do note that "static linking" has shades of meaning: it applies to "link multiple objects into a binary", but often that is excluded from the discussion in favor of just "use a .a instead of a .so".

Dynamic linking supports a much faster development cycle than static linking (which is in turn faster than whole-binary-at-once translation), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don't care about security, so I'm talking about annoyance instead. Some realistic numbers here: dynamic linking might be "rebuild in 0.3 seconds" vs static linking "rebuild in 3 seconds" vs no linking "rebuild in 30 seconds".

Dynamic linking is generally more reliable against long-term system changes. For example, it is impossible to run old statically-linked versions of bash 3.2 anymore on a modern distro (something about an incompatible locale format?), whereas the dynamically linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there's nothing wrong with RPATH if you're not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).

Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of a "single source of truth". If you actually read the man pages for the tools you're using, this is very easy to get right, but a lack of such basic abilities is common among proponents of static linking.

Again, keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

The big question these days should not be "static or dynamic linking" but "dynamic linking with or without semantic interposition?" Apple's broken "two level namespaces" is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.
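To illustrate the `-fvisibility=hidden` point: with GCC or Clang, a library can hide everything by default and explicitly mark only its public entry points, so only those symbols are subject to interposition. A hedged sketch (the `API` macro and the function names are made up for illustration):

```c
/* Illustrative library code; build as e.g.
 *   gcc -fvisibility=hidden -fPIC -shared lib.c -o lib.so
 * so that only symbols marked "default" end up exported. */
#define API __attribute__((visibility("default")))

/* Internal helper: 'static' keeps it out of the dynamic symbol table
 * entirely, so no other binary can call or interpose it. */
static int internal_helper(void) { return 41; }

/* Exported entry point: the only symbol visible to other binaries. */
API int lib_public(void) { return internal_helper() + 1; }
```

With everything hidden by default, the dynamic linker has far fewer symbols to resolve and interpose, which addresses much of what two-level namespaces try to solve.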

[–] colonial 1 points 1 year ago

NEVER statically link to libc, and probably not to libstdc++ either.

This is really only true for glibc (because its design doesn't play nice with static linking) and whatever macOS/Windows have (no stable kernel interface, which Go famously found out the hard way.)

Granted, most of the time those are what you're using, but there are plenty of cases where statically linking to musl libc makes your life a lot easier (Alpine containers, distributing cross-distro binaries).

[–] [email protected] 2 points 1 year ago (1 children)

Depending on which is more convenient and whether your dependencies are security-critical, you can do both on the same program. :D

[–] [email protected] 4 points 1 year ago (1 children)

The main issue I was targeting was how modern languages do not support dynamic linking, or at least do not support it well, hence sorta taking away the choice. The choice is still there in C from my understanding, but it is very difficult in Rust for example.

[–] [email protected] 1 points 1 year ago (1 children)

Yeah, you can dynamically link in Rust, but it's a pain because you have to use the C ABI, since Rust's ABI isn't stable, and you miss out on exporting fancier types.

[–] [email protected] 2 points 1 year ago

Just a remark: C++ has exactly the same issues. In practice, both clang and gcc have good ABI stability, but not perfect, and not between each other. And in any case, templates (and global mutable statics, for most use cases) don't work through FFI.

[–] uis 2 points 1 year ago

You can statically link half a gig of Qt5 into every single application (half a gig for the calendar, half a gig for the file manager, etc.) or keep them a normal size. Also, if there's a new bug in OpenSSL, it's not your headache to monitor vuln announcements.

This compromise makes it easier for the maintainers of the tools / languages

What do you mean? Also, how would you implement plug-ins in a language that explicitly forbids dynamic loading, assuming such a language exists?