snowe

joined 1 year ago
[–] [email protected] 5 points 4 weeks ago

No joke, this happened to me and my sisters and their families when we took a vacation in Durango. A window in the basement was left slightly cracked; we left and came back to a bear in the house. The bear had eaten only one thing: the Oreos. I never thought that was weird until this post.

[–] [email protected] 3 points 1 month ago

I completely agree.

[–] [email protected] 14 points 1 month ago

These posts showed up right after each other in my feed so I got to see them in order without clicking in 😂

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (2 children)

According to a video I watched yesterday, it’s not random, it’s because they’re bored teenagers

Hilariously I left this post then scrolled down just a few posts and found this. https://www.usatoday.com/story/news/nation/2024/05/24/killer-whales-attacking-sinking-boats-are-bored-scientists-say/73558157007/

[–] [email protected] 1 point 2 months ago

I do not. I'm sorry.

[–] [email protected] 1 point 2 months ago (1 child)

> Does type inference provide a practical benefit to you beyond saving you some keystrokes?

It's more readable! Like, that's literally the whole point. It's more readable, and you don't have to care about a type unless you want or need to.

> What tools do you use for code review? Do you do them in GitHub/GitLab/Bitbucket, or are you pulling every code review directly into your IDE? How frequently do you do code reviews?

I use GitHub and IntelliJ. I do code reviews daily; I'm one of two staff software engineers on my team. I rarely ever need to know the type, and when I do, GitHub is perfect for 90% of use cases; for the other 10% I literally click the PR button in IntelliJ and open up the pull request that way. It's dead simple.

[–] [email protected] 4 points 2 months ago (9 children)

My response to the article is that you're sacrificing gains in the language because some people use outdated tools. Code has more context than what is just written: many times you can't see things in the code unless you dig in, for example responses from a database or key-value store, or literally any external API. Type inference in languages with bad IDE support leads to a bad experience, hence the author's views on OCaml. But in a language like Kotlin it's absolutely wonderful. If needed you can provide context, but otherwise the types are always there, you can view them easily if you're using a decent IDE, and type inference makes the code much more readable in the long run. I would say that a majority of the time you do not care about the types in an application. You care about the data flow, so having a type system that protects you from mismatched types is much more important than requiring types to be specified.
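
To make that concrete, here's a minimal Kotlin sketch (the `User` type and the sample data are made up for illustration). Every value still has a static, compiler-checked type, and a decent IDE shows it inline; inference just removes the repetition:

```kotlin
// Hypothetical example: every value here has a static type the compiler
// knows and enforces; it's just not spelled out twice in the source.
data class User(val name: String, val country: String)

fun main() {
    val users = listOf(User("Ada", "UK"), User("Linus", "FI")) // inferred: List<User>
    val byCountry = users.groupBy { it.country }               // inferred: Map<String, List<User>>

    // Mismatched types are still a compile error:
    // val names: List<Int> = users.map { it.name }  // does not compile

    // And you can always spell a type out where it adds useful context:
    val explicit: Map<String, List<User>> = byCountry
    println(explicit) // {UK=[User(name=Ada, country=UK)], FI=[User(name=Linus, country=FI)]}
}
```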

[–] [email protected] 5 points 3 months ago

> Rewriting projects from scratch by definition represents a big step backwards, because you're wasting resources to deliver barely working projects that have a fraction of the features the legacy production code already delivered in a stable state.

Joel's point was about commercial products, not programming languages. I'm not the one misunderstanding here. When people talk about using Rust, they're not talking about rewriting every single thing ever written in C/C++; it's about leaving C/C++ behind and moving on to something that doesn't have the issues of the past. This is not about large-scale commercial rewrites. It's about C's inability to deal with these problems.

> You are just showing the world you failed to read the article.

sure thing bud.

> Also, it's telling that you opt to frame the problem as "a project is written in C" instead of actually securing and hardening existing projects.

I didn't say that and you know it. Also it's quite telling (ooh, I can say the same things you can) that you think "better language" means "pet language". Actually laughable.

[–] [email protected] 6 points 3 months ago

> I can't speak for C, as I don't follow it that much, but for C++, this is just not fair. It has been proven repeatedly that it can be done better, and much better. Each iteration has made so many things simpler, more productive, and also safer. Now, there are two problems with what I just said:

That comment was not talking about programming languages; it was talking about humans' inability to write perfect code. Humans are unable to solve problems correctly 100% of the time. So if the language doesn't do it for them, then it will not happen. See Java for a great example of this. Java has NullPointerExceptions absolutely everywhere. So a bunch of different groups created annotations that would give you warnings, and even fail compilation, if something was mismatched or a null check was missed. But if you miss a single @NotNull annotation anywhere in the code, then suddenly you can get null errors again. It's not enforced by the type system, and as a result humans can forget. Kotlin came along and 'solved it' at the type level, where types are nullable or non-nullable. But, hilariously enough, you can still get NPEs in Kotlin because it's commonly used to interop with Java.
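
For anyone who hasn't seen Kotlin's version, here's a small sketch of what 'solved at the type level' means, and of the interop loophole (the values are hypothetical):

```kotlin
fun main() {
    var maybe: String? = null    // String? is a distinct type from String
    // println(maybe.length)     // does not compile: maybe might be null
    println(maybe?.length)       // safe call: prints "null" instead of throwing

    maybe = "hello"
    println(maybe.length)        // compiles: the compiler smart-casts maybe to String here

    // The loophole: a value returned from un-annotated Java has a "platform
    // type" (shown as String!). Kotlin lets you treat it as non-null without
    // a check, so a null coming across the boundary throws an NPE at runtime
    // -- which is exactly how NPEs sneak back into Kotlin codebases.
}
```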

My point is that C/C++ can't solve this at a fundamental level, the same way Kotlin and Java cannot solve it. Programmers are the problem, so you have to have a system that was built from the ground up to solve the problem. That's what we are getting in modern languages. You can't just tack the system on after the fact, unless it completely removes any need for the programmer to do anything, because the programmer is the problem.

> Surely not for everything. Of course I see great value if I can stop depending on OpenSSL and move to a better library written in a better language. Seriously looking forward to the day when I see dynamic libraries written in Rust in my package manager. But I'd like to see what the plan is for moving a large stack of C and C++ code, like a Linux distribution, to some "better language". I work every day on such a stack (e.g. KDE Neon in my case, but applicable to any other typical distro with KDE or GNOME), and deploy to customers on such a stack (on embedded Linux like Yocto). Will the D-Bus daemon be written in Rust? Perhaps. Systemd? Maybe. NetworkManager, Udisks, etc.? Who knows. All the plethora of C and C++ applications that we use every day? Doubtful.

I'm not talking about wholesale rewrites. I'm talking about what Linux is already doing: writing new code, or small portions of performance-critical code, in a memory-safe language. I'm not talking about doing what Fish Shell did and rewriting the whole codebase in one go, because that's not realistic. But slowly converting an entire codebase over? That's incredibly realistic. I've done so with several 250k+ line Java codebases, converting them to Kotlin. When languages are built to be easy to move to (Rust, Kotlin, etc.), migrating to them slowly, where it matters, is easily attainable.
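
A sketch of why this works mechanically, with hypothetical names: Kotlin and Java compile together in the same module, so a codebase can be converted one file at a time. `LegacyBilling` below stands in for an as-yet-unconverted Java class (written in Kotlin here only so the snippet is self-contained):

```kotlin
// Stand-in for an existing Java class that hasn't been migrated yet.
class LegacyBilling {
    fun totalCents(lineItems: IntArray): Int = lineItems.sum()
}

// Newly written (or freshly converted) Kotlin calls it directly: no FFI
// boundary, wrapper layer, or separate build to maintain during the migration.
class InvoiceService(private val billing: LegacyBilling) {
    fun formatTotal(lineItems: IntArray): String =
        "total: ${billing.totalCents(lineItems)} cents"
}

fun main() {
    println(InvoiceService(LegacyBilling()).formatTotal(intArrayOf(100, 250)))
    // prints: total: 350 cents
}
```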

[–] [email protected] 9 points 3 months ago (6 children)

> Forking is a foolish idea. The core principle of computer science is that we need to live with legacy, not abandon it.

What a crazy thing to say. The core principle of computer science is to keep moving forward with tech, and to leave behind the stuff that doesn't work. You don't see people still using Fortran by choice; you see them living with it because they're completely unable to move off of it. If you're able to abandon bad tech, then the proper decision is to do so. OP keeps linking Joel, but Joel doesn't say never to rewrite stuff; he says not to rewrite large-scale commercial applications that currently work. C clearly isn't working for a lot of applications that need memory safety, so the logic doesn't apply there. It also clearly doesn't apply when you can write stuff in a memory-safe language alongside existing C code without rewriting any C code at all.

> And there's no need. Modern C compilers already have the ability to be memory-safe; we just need to make minor -- and compatible -- changes to turn it on. Instead of a hard fork that abandons legacy systems, this would be a soft fork that enables memory safety for new systems.

This has nothing to do with the compiler; this has to do with writing 'better' code, which has proved impossible over and over again. The problem is the programmers, and that's never going to change. Using a language that doesn't require this knowledge is the better choice 100% of the time.

C devs have been claiming 'the language can do this, we just need to implement it' for decades now. At this point it's literally easier to slowly port to a better language than it is to try and 'fix' C/C++.

[–] [email protected] 0 points 3 months ago

It does if the other ones have edible seeds, seeds without arsenic, or fewer seeds... your analogy makes no sense.

 

I will no longer be able to assist with development or with debugging actual issues with the software... Quite juvenile behavior from the devs. It stemmed from this issue, where the devs continuously argued in public by opening and closing an issue. Anyway, I thought I would keep y'all apprised of the situation, since these are the people maintaining the software you are currently using.

 

Over the weekend we had a large intermittent outage, followed by unplanned maintenance that I had put off for way too long.

Lemmy runs with several different services.

  • lemmy-ui (the React-esque frontend)
  • lemmy (the rust backend)
  • postgres (the data store for operations, comments, posts, etc)
  • pictrs (the image data store)

The outage concerns the last of these. We always knew we'd eventually need to migrate to an object-based store, but Lemmy defaults to file-based picture storage, and that's what we stuck with up until now. This eventually caused the VPS that programming.dev runs on to seize up, which resulted in the outage over the weekend.

Saturday night I spent several hours testing the object migration on the beta.programming.dev site in order to validate that it worked. During this time I struggled with some very obtuse Ansible errors that I hadn't encountered before, so I was not able to start the migration that night. I delayed until the next morning (thank goodness).

I began work Sunday morning at 10:00 America/Denver time. Initially the migration started off quite well, but it was moving incredibly slowly. Looking back on it now, the migration would have taken over 144 hours if I had left it to do its thing. I let it run for about an hour before messaging the pict-rs dev to understand why logs weren't showing up for the migration (even though objects were showing up in the store). Apparently lemmy-ansible is pinned to pict-rs 0.4.0, which is not only quite old but also lacks the ability to run migrations concurrently. There was the issue. I asked the dev if it was possible to stop a migration mid-run, upgrade, and continue. They told me what changes I'd need to make; I made them, did the upgrade, and restarted the migration. It immediately failed. This was the start of my issues.

The server was now too full of data to do anything, including running apt update or apt install to install tools to assist me. I was able to attach more block storage, but I'm not enough of a Linux guru to figure out how to mount it where the current pictrs filesystem would be able to take advantage of it. I had to resort to copying the entire pictrs filesystem to a fresh ~500GB mount, fixing permissions, and then rerunning the migration from there. By the time I got to this point, it was about 12:30 PM. The migration from then on took several hours.

After the migration completed, I needed to deploy the new stack with the correct settings. The Ansible script needed to run apt, though, and, well, that wouldn't work while the server was still full. At this point I was not confident in the migration, and I also hadn't realized that you could do the migration while the site was running (a big oversight on my part). I therefore wanted to keep the entire pictrs file store around until I had proven the object store was working. I created another block storage volume, copied the entire pictrs directory over to it again (another 20 minutes or so), and then deleted the original directory. I was now able to run the Ansible script and deploy the new settings for pict-rs, confident that I had a backup available in case something went wrong. (This is not the main backup method; the server is backed up externally as well, but I didn't want to have to resort to those backups during the migration.)

That completed the migration, some 5 hours after it originally started.

Several things exacerbated the issue and made it take hours longer than I wanted:

  1. I let it go so long before doing the migration to object storage that the server was too full to even perform an apt update. This meant I couldn't install the tools I needed, and caused a host of other issues, as mentioned above.
  2. pict-rs was at a very suboptimal version. If it had been just two minor versions newer, it would have migrated perfectly fine in a few hours.
  3. My limited knowledge of Ansible led me on wild goose chases several times.

Things I would change if I had to do it again:

  1. Dig a bit deeper into the concurrency flag in the pict-rs docs. It wasn't mentioned in the original guide I followed (from a Lemmy post on another instance), and thus I didn't realize that the old version wouldn't run with concurrency at all.
  2. Don't wait so long that the server fills up.
  3. Migrate while the server is running. That would have been dumb in this case, since the server wouldn't stay up anyway and it could have caused other issues. But there was no reason to take the server down if it had been stable, and other instances have done so with no problems.
 
 

It seems like the password limit is set to 60 characters, so I'm unable to log in to my instance. There should probably be no limit in the app, because each server could have different limits set.

914
A thousand miles (programming.dev)
 
 

There are gods for everything, but of course computers didn't exist in ancient Roman and Greek times. What god or goddess, in your opinion, would personify Testing?

And yes these answers matter. 😬

 

Start by reading these two articles:

Ok, now that you've done that (hopefully in the order I posted them), I can begin.

I have always been a strong supporter of Open Source Software (OSS), so much so that all of my projects (yes all) are OSS and fully open for anyone to use. And with that, I knew that things could be used for good... and bad. I took that risk. But I also made sure to build stuff that wasn't, in itself, inherently bad. I didn't build anything unethical to my eyes (I understand the nuance here).

But I've seen what unethical devs can do.

Just take a look at those implementing the ModFascismBot for Reddit (that's not its name, but that's what it is). That is an incredibly unethical thing to build. Not because it's a private company controlling what they want their site to do; no, that's fine by me, Reddit can do whatever they want. But because it's an attempt to lie about reality, to force users to do something through manipulation rather than honesty. Even subreddits that voted overwhelmingly to shut down still got messaged by the bot, telling them that the users (who voted for it) didn't want it, and that they had to open back up or the mods would be removed from their positions. This is not ethical. This is not right. This is not what the internet is about.

Or the unethical devs at Twitter, who:

It's one thing for an organization to have a political lean... that is just a part of life, and it will never end. It's another to actually sow disinformation in order to accomplish nefarious things to further your profits. That is what has caused massive addiction to tobacco, the continuation of climate change, death and disfiguration from forever chemicals, ovarian cancer and mesothelioma from undisclosed exposure to asbestos, and the sale of 'health products' that claim to cure everything under the sun but can "interfere with clinical lab tests, such as those used to diagnose heart attacks".

Please do not confuse this for saying that companies shouldn't be able to sell things and make a profit. If you want to sell someone something that kills them if they misuse it, and you market it as such, go for it. That's literally how every product in the cleaning aisle of your grocery store works. That's how guns work, that's how fertilizers work, that's why we have labels. But manipulation for profit is unethical, and that's why companies hide it: it hurts their bottom line. They know that their products will not be used if they reveal the truth. Instead of doing something good for humanity, they choose subterfuge. Profits over people. Profits over Earth, honestly. Profits over continuing the human race. Absolutely nothing matters to companies like this. And unethical developers enable this.


Facebook (ok, fine, Meta, still going to refer to them as FB though) is trying to join the Fediverse. We as a community, but honestly each of you as individuals, have a decision to make. Do they stay or do they go? Let's put some information on the table.

Facebook...

  • lies about the amount of misinformation it removes ^1
  • increased censorship of 'anti-state' posts ^1 ^2 ^3
  • lied to Congress about social networks polarizing people, while FB's own researchers found that they do ^2
  • attempted to attract preteens to the platform (huh, wonder where all that "you must be 13" stuff went) ^4
  • rewards outrage and discord ^3

Facebook also...

  • Allows for checking on friends and family in disasters ^6
  • Created and maintained some of the most popular open source software on the planet (including the software that runs the interface you're looking at right now) [^7][^8]

From my perspective... there's not much good about FB. It has single-handedly caused the deaths of tens of thousands of people across the planet, if not hundreds of thousands. It continually makes people angrier and angrier. It's a launching pad for scammers, thieves, manipulators, and dictators to push their conquests onto the world through manipulation, lies, tricks, and deceit. Its algorithms foster an echo-chamber effect, exacerbating division and animosity, making civil discourse and mutual understanding all but impossible. Instead of being a platform for connection, it often serves as a catalyst for discord and misinformation. FB's propensity for prioritizing user engagement over factual accuracy has resulted in a global maelstrom of confusion and mistrust. Innocent minds are drawn into this vortex, manipulated by fear and falsehoods, consequently promoting harmful actions and beliefs. Despite its potential to be a tool for good, it is more frequently wielded as a weapon, sharpened by unscrupulous entities exploiting its vast reach and influence. The promise of a globally connected community is overshadowed by its darker realities.


As a person, I believe that we need to choose things as a community. I do not believe in the 'BDFL'...the Benevolent Dictator For Life. Graydon Hoare, creator of Rust, wrote an article just recently about how things would have been different if they had stayed BDFL of Rust. From my position the BDFLs we currently have on this planet really suck. Not just politically, but even in tech. I don't think that path is good for society. It might work in specific circumstances, but it usually fails, and when it does, people get hurt. Badly.

So, with that in mind, I've been working on a polling feature for Lemmy. I seriously doubt I'll be done with it soon, but hopefully FB takes a while longer to implement federation. I understand there's a desire for me or the other admins to just make a decision, but I really don't like doing that. If it comes down to it, I will implement defederation to start with, but I will still hold a vote as soon as I can get this damn feature done.


[^8]: The website actually uses Inferno, but from what I can tell it was forked directly from React, judging from the documentation and references in the repo.

49
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I will be updating the instance to v18 at ~~20:00~~ 22:00 UTC.

See https://programming.dev/post/181191 for the changes

edit: Lemmy.ml updated and seems to have gone down. We're going to wait and see what caused the outage and then proceed from there.

edit 2: Lemmy.ml was down due to a DDoS attack. We will upgrade at 22:00 UTC.

edit 3: We had issues with the email setup getting overridden again. If you tried to sign up in the past 8 hours, please try to just log in. If you can't, please message me (Discord, Matrix, or Mastodon).

62
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I'm going to be working on getting the instance upgraded to 17.4 today. I had tried in the past but ran into some issues with Cloudflare and DNS resolution.

First attempt will be at 17:00 UTC.

I'll update this post if that doesn't work and provide a second time.

 

We're seeing increased rates of memory errors in Postgres due to a default Docker setting.

I will be performing a fix at 23:45 UTC. This should not update the server, so it should be a very fast fix. I expect the downtime to be less than a minute.

 

Shopify wrote a new hand-written recursive descent parser for Ruby. This looks like it will be a great improvement to the Ruby ecosystem!
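
For anyone curious what 'hand-written recursive descent' means, here's a toy sketch in Kotlin. To be clear, this is not Shopify's parser and nothing like Ruby's grammar; it just shows the shape of the technique: one function per grammar rule, each consuming input and calling the rules below it.

```kotlin
// Toy recursive descent parser for arithmetic expressions (no whitespace):
//   expr   -> term (('+' | '-') term)*
//   term   -> factor (('*' | '/') factor)*
//   factor -> NUMBER | '(' expr ')'
class Parser(private val input: String) {
    private var pos = 0

    fun parse(): Int {
        val result = expr()
        check(pos == input.length) { "unexpected character at $pos" }
        return result
    }

    // Lowest precedence: + and -
    private fun expr(): Int {
        var value = term()
        while (true) when (peek()) {
            '+' -> { pos++; value += term() }
            '-' -> { pos++; value -= term() }
            else -> return value
        }
    }

    // Binds tighter: * and /
    private fun term(): Int {
        var value = factor()
        while (true) when (peek()) {
            '*' -> { pos++; value *= factor() }
            '/' -> { pos++; value /= factor() }
            else -> return value
        }
    }

    // A number, or a parenthesized sub-expression (this is where the recursion happens)
    private fun factor(): Int {
        if (peek() == '(') {
            pos++ // consume '('
            val value = expr()
            check(peek() == ')') { "expected ')' at $pos" }
            pos++ // consume ')'
            return value
        }
        val start = pos
        while (peek()?.isDigit() == true) pos++
        check(pos > start) { "expected a number at $pos" }
        return input.substring(start, pos).toInt()
    }

    private fun peek(): Char? = input.getOrNull(pos)
}

fun main() {
    println(Parser("1+2*(3+4)").parse()) // prints 15
}
```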
