
More specifically, I'm thinking about two different modes of development for a library (private to the company) that's already relied upon by other libraries and applications:

  1. Rapidly develop the library "in isolation" without being slowed down by keeping all of the users in sync. This causes more divergence and merge effort the longer you wait to upgrade users.
  2. Make all changes in lock-step with users, keeping everyone in sync for every change that is made. This will be slower and might result in wasted work if experimental changes are not successful.

As a side note: I believe these approaches are similar in spirit to the continuum of microservices vs monoliths.

Speaking from recent experience, I feel like I'm repeatedly finding that users of my library have built towers upon obsolete APIs, because there have been multiple phases of experimentation that necessitated large changes. So with each change, large amounts of code need to be rewritten.

I still think that approach #1 was justified during the early stages of the project, since I wanted to identify all of the design problems as quickly as possible through iteration. But as the API is getting closer to stabilization, I think I need to switch to mode #2.

How do you know when it's the right time to switch? Are there any good strategies for avoiding painful upgrades?

[–] [email protected] 1 points 1 year ago (2 children)

I'm not suggesting that my library is unversioned. It's totally version controlled, and users can upgrade to whatever revision they want to pull in the changes. The changes I make upstream don't affect anyone downstream until they decide to upgrade. It could even adhere to SemVer, but my problem remains: how to minimize rewriting user code? Is it better to have more small upgrades or fewer large upgrades? When is one strategy preferable to another?
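
One mitigation I've been considering is to keep old entry points alive as deprecated shims over the new API, so downstream code keeps running while it migrates on its own schedule. A minimal sketch of the idea (the function names below are made up for illustration, not the real API):

```python
import warnings

def load_index(path, *, schema="v2"):
    """New API: takes an explicit schema argument."""
    # Real implementation elided; a placeholder return keeps the sketch runnable.
    return {"path": path, "schema": schema}

def load_legacy_index(path):
    """Old API, kept as a deprecated shim over the new one.

    Callers built against the old signature keep working and get a warning
    pointing at the replacement, instead of a hard break on upgrade.
    """
    warnings.warn(
        "load_legacy_index() is deprecated; call load_index(path, schema='v1') instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return load_index(path, schema="v1")
```

That way a change can ship as a minor release, and the hard removals can be batched into the next major one.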

[–] [email protected] 6 points 1 year ago

SemVer seems logical. If most of your changes are breaking, though, I don't think it really matters whether you release often or occasionally. If it's often, users get fatigued by the constant upgrades; if it's occasional, they get overwhelmed by the size of each one and push it off.

If most of your changes are breaking, you should just disclose that the software is in an alpha/beta state and that it can't be depended on to remain consistent until you have a defined policy about what gets released and when. It will be up to the users to decide if they are comfortable with those terms.
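
One concrete way to signal that state is with pre-release identifiers, which version tooling sorts before the final release so nobody picks them up by accident. A quick illustration using Python's packaging library (illustrative only; it implements PEP 440 rather than SemVer proper, and your library may not be Python at all):

```python
from packaging.version import Version

# Pre-release identifiers (alpha/beta/rc) sort *before* the final release,
# so resolvers won't select them unless a user opts in explicitly.
assert Version("1.0.0a1") < Version("1.0.0b2") < Version("1.0.0rc1") < Version("1.0.0")

# SemVer makes a similar promise-free zone explicit: any 0.y.z version
# means "anything may change at any time".
assert Version("0.4.0") < Version("1.0.0")
```

Publishing 0.x or beta versions makes the "this will break" expectation explicit instead of implicit.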

[–] echo64 2 points 1 year ago* (last edited 1 year ago) (1 children)

For what it's worth, using git is not the same as having a versioned library. Users can't pick up the latest fixes by moving to a newer commit without also building against whatever changes you've made to your library's APIs in the meantime. It sounds like your library is indeed entirely unversioned.

[–] [email protected] 1 points 1 year ago (1 children)

I also do SemVer-compliant releases. Backporting fixes is possible.

It doesn't change the fact that there are large breaking changes between versions. The only users of my library are within the same company, and we have all of the convenience of planning out the changes together.

The challenge arises from developers doing large amounts of R&D on separate but coupled libraries. My library happens to be a very central dependency.

[–] [email protected] 2 points 1 year ago (1 children)

If your library is a core dependency and it is constantly having breaking updates, then something is deeply unwell in the environment. That is not sustainable, and it sounds like your library was created without a clear idea of what it should do.

I've been around long enough to know these things happen, but you're not going to find a good way forward because there isn't one. This is going to be a pain point until either the library is stable or the project fails.

[–] [email protected] 1 points 1 year ago

I think it's close to stability, and the scope of the library hasn't changed. It's just solving a complex problem that requires several very large data structures, and I've needed to address a couple of important issues along the way.