jon

joined 1 year ago
[–] [email protected] 1 points 6 months ago

Apple poached engineers from the company that owns the patent for the blood oxygen sensor rather than bothering to license their tech. The company sued, Apple lost, and now their products are under an import ban.

[–] [email protected] 2 points 6 months ago

I've proactively blocked automatic updates on my watch in anticipation of them doing a rug pull on the feature.

[–] [email protected] 1 points 6 months ago

Get uBlock Origin and then YouTube will stop serving all ads. Or quit using YouTube entirely, since Google is doing everything in their power to run the platform into the ground.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

It's still fairly rough, although they have pushed several patches that significantly improved on the absolute trash state the game was in on launch day.

Still no modding support, which was originally supposed to be a day-one feature. DLC release also got delayed. Maybe it'll be a good game by mid to late 2024.

[–] [email protected] 4 points 6 months ago

The vast majority of the popular accounts are not run by the women on the profile. Most of them pay friends or agencies to manage the page for them; they simply show up to photo shoots every now and then and enjoy the easy money.

[–] [email protected] 1 points 6 months ago

UI doesn't come up until database migrations fully complete. Can take half an hour or more depending on how much content is indexed in your instance.

[–] [email protected] 2 points 6 months ago

There are bills to reschedule or deschedule it every year, and every one so far has failed. As of today, marijuana is federally illegal, so any federal criminal charges stand unless otherwise pardoned/commuted.

Federal agencies have been asked to reduce the number of arrests for simple, nonviolent possession, but there are still plenty of people getting freshly charged at both the federal and state level today.

[–] [email protected] 6 points 6 months ago (2 children)

Marijuana is still classified as a schedule 1 drug and remains federally illegal.

2
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

Now that Lemmy 0.19.0 has been out for a few days, we will be proceeding with the update here on Lemmy.tf. I am tentatively planning to kick this off at 4pm EST today (3.5 hrs from the time of this post).

All instance data will be backed up prior to the update. This one will include a handful of major changes, the most impactful being that any existing 2FA configurations will be reset. Lemmy.ca has a post with some great change info - https://lemmy.ca/post/11378137

[–] [email protected] 7 points 6 months ago* (last edited 6 months ago) (1 children)

You absolutely can refuse to hire someone (in the US) for something they have no control over, assuming it's not one of the few protected classes. I could refuse to hire you over height, inability to grow facial hair, etc. with zero repercussions.

[–] [email protected] 22 points 6 months ago (5 children)

That counts as unauthorized access in the eyes of the law. It's a private system and they did not have any agreements permitting them to use it as they wanted.

[–] [email protected] 8 points 6 months ago (3 children)

Why would they need to look into Apple's conduct here? Investigate Beeper for CFAA violations since they cracked into Apple's internal APIs and ignored large chunks of their ToS in the process.

Of course Apple is going to shut down unauthorized access to their messaging system. They'd lose all customer trust instantly if they didn't.

[–] [email protected] 2 points 6 months ago (1 children)

I run the self-hosted version; aside from having to deploy a couple of Docker containers, it's pretty much the same as the SaaS product.

 

I noticed some timeouts and DB lag when I logged in early this afternoon, so I have gone ahead and updated the instance to 0.18.4 to hopefully help clear this up.

We also have a status page available at https://overwatch.nulltheinter.net/status-page/946fd7fd-3ae3-4214-bbbf-dd7206566104 and will soon have this working on status.lemmy.tf.

11
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

As I'm sure everyone noticed, the server died hard last night. Apparently, even though OVH advised me to disable proactive interventions, I learned this morning that "the feature is not yet implemented" and that they have been pressing the reset button on the machine every time their shitty monitoring detects the tiniest bit of ping loss. Last night, this finally made the server mad enough not to come back up.

Luckily, I did happen to have a backup from about 2 hours before the final outage. After a slow migration to the new DC, we are up and running on the new hardware. I'm still finalizing some configuration changes and need to do performance tuning, but once that's done our outage issue will be fully resolved.


Issues-

[Fixed] Pict-rs missing some images. This was caused by an incomplete OVA export; all older images were recovered from a slightly older backup.

[Fixed?] DB or federation issues- seeing some slowness and occasional errors/crashes due to the DB timing out. This appears to have resolved itself overnight; we were about 16 hours out of sync with the rest of the federation when I posted this.


Improvements-

  • VM migrated to new location in Dallas, far away from OVH. CPU cores allocated were doubled during the move.

  • We are now in a VMware cluster with the ability to hot migrate to other nodes in the event of any actual hardware issues.

  • Basic monitoring deployed; we are still working to stand up full-stack monitoring.

 

So after a few days of back and forth with support, I may have finally received some insight as to why the server keeps randomly rebooting. Apparently, their crappy datacenter monitoring keeps triggering ping loss alerts, so they send an engineer over to physically reboot the server every time. I was not aware that this was the default monitoring option on their current server lines, and have now disabled it, which should prevent forced reboots going forward.

I am standing up a basic ping monitor to alert me via email and SMS if the server actually goes down, so I can quickly reboot it myself if ever needed (may even write some script to reboot via API if x consecutive pings fail or something; a rough sketch of that idea is below). The full monitoring stack is still in progress but not truly necessary to ensure stability at the moment.
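
For anyone curious, here is roughly what that reboot script could look like. This is not the actual watchdog running for lemmy.tf, just a sketch: it assumes the official `ovh` Python client with API credentials already configured, a placeholder service name and alert addresses, and a local SMTP relay for the email/SMS-gateway alert.

#!/usr/bin/env python3
"""Ping watchdog sketch: alert and hard-reboot via the OVH API after N
consecutive ping failures. Service name, addresses, and thresholds are
placeholders, not production values."""
import smtplib
import subprocess
import time
from email.message import EmailMessage

TARGET = "lemmy.tf"                     # host to ping
FAIL_THRESHOLD = 5                      # consecutive failures before acting
INTERVAL = 30                           # seconds between pings
SERVICE_NAME = "ns1234567.example.net"  # placeholder OVH dedicated server name
ALERT_FROM = "watchdog@example.com"     # placeholder sender
ALERT_TO = "admin@example.com"          # placeholder recipient (or SMS gateway address)


def ping_ok(host: str) -> bool:
    """Return True if a single ICMP ping succeeds within 5 seconds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "5", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def send_alert(subject: str, body: str) -> None:
    """Send a plain email through a local SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


def reboot_via_api() -> None:
    """Ask OVH to hard-reboot the dedicated server (assumes ~/.ovh.conf credentials)."""
    import ovh  # pip install ovh
    client = ovh.Client()
    client.post(f"/dedicated/server/{SERVICE_NAME}/reboot")


def main() -> None:
    failures = 0
    while True:
        if ping_ok(TARGET):
            failures = 0
        else:
            failures += 1
            if failures == FAIL_THRESHOLD:
                # Act exactly once per outage; the counter resets on recovery.
                send_alert(
                    f"{TARGET} appears down",
                    f"{failures} consecutive pings failed; requesting an API reboot.",
                )
                reboot_via_api()
        time.sleep(INTERVAL)


if __name__ == "__main__":
    main()

Running something like this from a box outside OVH (under systemd or cron) keeps the watchdog independent of the server it's watching.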

4
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

OVH has scheduled a maintenance window for 5:00 EST this evening; hopefully they will be able to pinpoint the fault and get parts replaced at the same time. This will likely be an extended outage since they have more diagnostics available than I was able to run, so I would expect somewhere around an hour or two of downtime.

I am mildly tempted to go ahead and migrate Lemmy.tf off to my new environment but it would incur even more downtime if I rush things, so it'll have to be sometime later.

Update 7:30PM:

I just received a response on my support case: they did not replace any hardware and claim their own diagnostics tool is buggy. We may be having a rushed VM migration over to a new server in the next few days... which would incur a few hours of hard downtime to migrate to the new server (and datacenter) and switch DNS. Ideally I'd prefer to have time to plan it out and prep for a seamless cutover, but I think a few hours of downtime over the weekend is worth ending the random restarts. I'm open to suggestions on ideal times for this to happen.

Previous post: https://lemmy.tf/post/393063

8
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

UPDATE 07/25 10:00AM:

Support is getting a window scheduled for their maintenance. I've asked for late afternoon/early evening today, with a couple hours' advance notice so I can post an outage notice.

===========

UPDATE 12:00AM:

Diagnostics did in fact return with a CPU fault. I've requested that they schedule the downtime with me, but technically they can proceed whenever they want to, so there's a good chance there will be an hour or so of downtime whenever they get to my server. I'll post some advance notice if I'm able to.

===========

As I mentioned in the previous post, we appear to have a hardware fault on the server running Lemmy.tf. My provider needs full hardware diagnostics before they can take any action, and this will require the machine to be powered down and rebooted into diagnostics mode. This should be fairly quick (~15-20 mins ideally), and since it is required to determine the issue, it needs to be done ASAP.

I will be taking everything down at 11:00PM EST tonight to run diagnostics and will reboot into normal mode as soon as I've got a support pack. If the diagnostics pinpoint a hardware fault, follow-up maintenance will need to be scheduled immediately, ideally overnight, but the exact time is up to their engineers.

I'm also prioritizing prep work to get the instance migrated over to a better server. This has been in the works for a few weeks, but first I'll need to migrate the DB over to a new Postgres cluster and kick frontend traffic through a load balancer to prevent outages from DNS propagation whenever I finally cut over to the new server. I'd also like to get Pict-rs moved up to S3, but this will likely be a separate change down the road.

4
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

EDIT 07/24: This is an ongoing issue and may be a hardware fault with the machine the instance is running on. I've opened a support case with OVH to have them run diagnostics and investigate. In the meantime I am getting a SolarWinds server spun up to alert me anytime we have issues so I can jump on and restore service. I am also looking into migrating Lemmy.tf over to another server, but this will require some prep work to avoid hard downtime or DB conflicts during the DNS cutover.

==========

OP from 07/22:

Woke up this morning to notice that everything was hard down. Something tanked my bare-metal server at OVH overnight, and apparently the Lemmy VM was not set to autostart. This has been corrected and I am digging into what caused the outage in the first place.

I know there is some malicious activity going on with some of the larger instances, but as of this time I am not seeing any evidence of intrusion attempts or a DDoS or anything.

 

Lemmy 0.18.1 dropped yesterday and seems to bring a lot of performance improvements. I have already updated the sandbox instance to it and am noticing that things are indeed loading quicker.

I'm planning to upgrade this instance sometime tomorrow evening (8/9 around 6-7pm EST). Based on the update in sandbox, I expect a couple minutes of downtime while the database migrations run.

2
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

I'm running the Lemmy Community Seeder script on our instance to prepopulate some additional communities. This is causing some sporadic JSON errors on the account I'm using with the script, but hopefully isn't impacting anyone else. Let me know if it is and I'll halt it and schedule it for late-night runs only or something.

Right now I have it watching the following instances, grabbing the top 30 communities of the day on each scan.

REMOTE_INSTANCES: '[
        "lemmy.world",
        "lemmy.ml",
        "sh.itjust.works",
        "lemmy.one",
        "lemmynsfw.com",
        "lemmy.fmhy.ml",
        "lemm.ee",
        "lemmy.dbzer0.com",
        "programming.dev",
        "vlemmy.net",
        "mander.xyz",
        "reddthat.com",
        "iusearchlinux.fyi",
        "discuss.online",
        "startrek.website",
        "lemmy.ca",
        "dormi.zone"]'

I may increase this beyond 30 communities per instance, and can add any other domains y'all want. This will hopefully make /All a bit more active for us. We've got plenty of storage available so this seems like a good way to make it a tad easier for everyone to discover new communities.

Also, just a reminder that I do have defed.lemmy.tf up and running to mirror some subreddits. Feel free to sign up and post on defed.lemmy.tf/c/requests2 with a post title of r/SUBREDDITNAME to have it automatically mirror new posts in a particular sub. Eventually I will federate that instance to lemmy.tf, but only after I'm done with the big historical imports from the reddit_archive user.

 

cross-posted from: https://sh.itjust.works/post/433151

Just an FYI post for folks who are new or recently returning to Lemmy: I have updated the linked grease/tamper/violentmonkey script for Lemmy v0.18.

These two scripts (a compact version and a large thumbnail version) substantially rearrange the default Lemmy format.

These are (finally) relatively stable for desktop/widescreen. Future versions will focus a little more on the mobile/handheld experience.

These are theme agnostic and should work with darkly and litely (and variants) themes.

Screenshots of the "Compact" version: main page and comments page.

As always, feedback is appreciated!

19
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I've been stalling on this, but I need to get some form of community rules out given the added growth from the Reddit shutdown. These will likely be tweaked a bit going forward, but this is a start.

Rules
  1. Be respectful of everyone's opinions. If you disagree with something, don't resort to inflammatory comments.

  2. No abusive language/imagery. Just expanding on #1.

  3. No racism or discrimination of any kind.

  4. No advertising.

  5. Don't upload NSFW content directly to the instance; use some third-party image host and link to that in your posts/comments.

  6. Mark any NSFW/erotic/sensitive/etc posts with the NSFW tag. Any local posts violating this rule are subject to removal (but you can repost correctly if this happens).

  7. Hold the admins/mods accountable. If we start making changes that you disagree with, please feel free to post a thread or DM us to discuss! We want this instance to be a good home for everyone and welcome feedback and discussion.

NSFW Content Policy

As stated above, please upload any NSFW images to an external site and link them. All NSFW content must be properly tagged, and cannot contain material illegal in the United States.

Additional rules around NSFW content may be added in the future, if necessary. We would prefer everyone use common sense with their posts so we don't have to crack down on this category.

Defederation Policy

Many large instances have started to defederate "problem" instances. We want to avoid doing that unless an instance is causing illegal content to get indexed directly onto our server.

If we encounter the need to block some other Lemmy server, we will engage the community here before taking action.

Bot Policy

Bots are currently allowed on this instance, but we reserve the right to add restrictions if they start getting abused. You're more than welcome to use moderation bots for any communities you run or moderate, and content import/mirroring bots are okay. If you have a bot that is actively creating new posts/comments here, please make sure to use some reasonable rate limits.

Bots are subject to all instance rules.
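
For bot authors wondering what "reasonable rate limits" means in practice, here is a minimal sketch of a throttled posting loop. The endpoint and payload follow the public Lemmy v3 HTTP API as it worked around 0.18 (JWT passed in the request body); the token, the queue format, and the one-post-per-minute figure are illustrative placeholders, not instance policy.

import time

import requests

INSTANCE = "https://lemmy.tf"
JWT = "bot-account-jwt"          # placeholder credential
SECONDS_BETWEEN_POSTS = 60       # illustrative: at most one post per minute


def create_post(community_id: int, title: str, body: str) -> None:
    # Lemmy 0.18.x: POST /api/v3/post, auth token included in the JSON body
    resp = requests.post(
        f"{INSTANCE}/api/v3/post",
        json={"community_id": community_id, "name": title, "body": body, "auth": JWT},
        timeout=30,
    )
    resp.raise_for_status()


def mirror(queue: list[dict]) -> None:
    # Fixed-interval throttle so the bot never bursts the instance
    for item in queue:
        create_post(item["community_id"], item["title"], item["body"])
        time.sleep(SECONDS_BETWEEN_POSTS)

A token-bucket approach would be smoother, but even a fixed sleep like this keeps mirror bots from hammering the database.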

 

So Lemmy 0.18.0 dropped today and I immediately jumped on the bandwagon and updated. That was a mistake. I did the update during my lunch hour, quickly checked to make sure everything was up (it was, at the time), and came back a few hours later to find everything imploding.

As far as I can tell, things broke after the DB migrations occurred. Pict-rs was suddenly dumping stack traces on any attempt to load an image, and then at some point the DB itself fell over and started spewing duplicate key errors in an endless loop.

I wound up fiddling with container versions in docker-compose.yml until I found a combination that restored the instance. We are downgraded back to the previous pict-rs release (0.3.1), while Lemmy and Lemmy-UI are both at 0.18.0. I'm still trying to figure out what exactly went wrong so I can submit a bug report on GitHub.

Going forward, I will plan updates more carefully. We will have planned maintenance windows posted at least a few days in advance, and I may look into migrating the instance to my Kubernetes cluster so we can do a rolling deployment and leave the existing pods up until everything passes checks. In the meantime, I'm spinning up a sandbox Lemmy instance and will use that to validate upgrades (something like the smoke check sketched below) before touching this instance.
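
As a rough idea of what "validate upgrades" could look like, here is a minimal smoke-check sketch to run against the sandbox right after an update. The hostname is a placeholder, and the checks only cover the public Lemmy v3 HTTP API (site info and post listing) plus the UI; image uploads and login flows would need their own checks.

import sys

import requests

SANDBOX = "https://sandbox.lemmy.tf"   # placeholder sandbox hostname


def smoke_check(base_url: str) -> bool:
    ok = True

    # 1. Site metadata should load and report the expected backend version.
    site = requests.get(f"{base_url}/api/v3/site", timeout=15)
    ok &= site.status_code == 200
    version = site.json().get("version", "unknown") if site.ok else "n/a"
    print("site:", site.status_code, version)

    # 2. The post listing should return without DB errors.
    posts = requests.get(f"{base_url}/api/v3/post/list", params={"limit": 5}, timeout=15)
    ok &= posts.status_code == 200
    print("post list:", posts.status_code)

    # 3. The UI itself should come up (it only does once migrations finish).
    ui = requests.get(base_url, timeout=15)
    ok &= ui.status_code == 200
    print("ui:", ui.status_code)

    return bool(ok)


if __name__ == "__main__":
    sys.exit(0 if smoke_check(SANDBOX) else 1)

If this passes on the sandbox after an upgrade, the same script can be pointed at lemmy.tf once the real maintenance window finishes.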
