
Just thought I'd share this since it's working for me at my home instance of federate.cc, even though it's not documented in the Lemmy hosting guide.

The image server used by Lemmy, pict-rs, recently added support for object storage (like Amazon S3) as an alternative to serving images directly off the disk. This is potentially interesting to you because object storage is orders of magnitude cheaper than the disk storage attached to a VM.

By way of example, I'm hosting my setup on Vultr, but this applies to, say, DigitalOcean or AWS as well. Going from a 50GB to a 100GB VM instance on Vultr takes you from $12 to $24/month; up to 180GB, $48/month. Of course these include CPU and RAM step-ups too, but I'm focusing only on disk space for now.

Vultr's object storage by comparison is $5/month for 1TB of storage and includes a separate 1TB of bandwidth that doesn't count against your main VM, plus this content is served off of Vultr's CDN instead of your instance, meaning even less CPU load for you.

This is pretty easy to do. We'll diverge slightly from the official Lemmy ansible setup to add a few extra environment variables to pict-rs.

After step 5, before running the ansible playbook, we're going to modify the ansible template slightly:

cd templates/

cp docker-compose.yml docker-compose.yml.original

Now we're going to edit docker-compose.yml with your favourite text editor. Personally I like micro, but vim, emacs, nano or whatever will do.

favourite-editor docker-compose.yml

Down around line 67 begins the section for pictrs. You'll notice the environment section already has a bunch of things that the Lemmy guys predefined; we're going to add some more to take advantage of the new support for object storage in pict-rs 0.4+.

At the bottom of the environment section we'll add these new vars:

  - PICTRS__STORE__TYPE=object_storage
  - PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
  - PICTRS__STORE__BUCKET_NAME=Your Bucket Name
  - PICTRS__STORE__REGION=Your Bucket Region
  - PICTRS__STORE__USE_PATH_STYLE=false
  - PICTRS__STORE__ACCESS_KEY=Your Access Key
  - PICTRS__STORE__SECRET_KEY=Your Secret Key

So your whole pictrs section looks something like this: https://pastebin.com/X1dP1jew
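If that pastebin ever disappears, here's a rough sketch of the shape of that section. The image tag, user and volume path are just whatever your generated template already contains (don't change those); the only additions are the PICTRS__STORE__* lines:

      pictrs:
        image: asonix/pictrs:0.4.0-rc.14     # whatever tag the template already pins
        user: 991:991
        environment:
          # ...keep the variables the Lemmy template already defines here...
          - PICTRS__STORE__TYPE=object_storage
          - PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
          - PICTRS__STORE__BUCKET_NAME=Your Bucket Name
          - PICTRS__STORE__REGION=Your Bucket Region
          - PICTRS__STORE__USE_PATH_STYLE=false
          - PICTRS__STORE__ACCESS_KEY=Your Access Key
          - PICTRS__STORE__SECRET_KEY=Your Secret Key
        volumes:
          - ./volumes/pictrs:/mnt
        restart: always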

The actual bucket name, region, access key and secret key will come from your provider. If you're using Vultr like me, they're shown in the details after you've created your object store, under Overview -> S3 Credentials. On Vultr your endpoint will be something like sjc1.vultrobjects.com, and your region is the domain prefix, so in this case sjc1.
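So with that example, the two location-related variables would end up looking something like this (whether pict-rs wants the https:// scheme on the endpoint is worth double-checking against its docs):

      - PICTRS__STORE__ENDPOINT=https://sjc1.vultrobjects.com
      - PICTRS__STORE__REGION=sjc1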

Now you can install as usual. If you have an existing instance already deployed, there is an additional migration command you have to run to move your on-disk images into the object storage.
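I won't walk through that migration here, but very roughly it looks something like the sketch below. Treat the exact flags as version-dependent and check the pict-rs documentation (there's also a worked example in the comments on this post):

    # sketch only: stop pict-rs, then run its migrate-store command against the same volumes/config
    docker-compose stop pictrs
    docker-compose run --rm pictrs \
      pict-rs migrate-store filesystem object-storage \
        -e https://your-object-store-endpoint \
        -b your-bucket-name \
        -r your-region \
        -a your-access-key \
        -s your-secret-key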

You're now good to go, and things should behave pretty much like before, except pict-rs will save images to your designated cloud/object store, and when serving images it will redirect clients to pull directly from the object store, saving you a lot of storage, CPU and bandwidth, and therefore money.

Hope this helps someone. I'm not an expert in either Lemmy administration or Linux sysadmin stuff, but I can say I've done this on my own instance at federate.cc and so far I can't see any ill effects.

Happy Lemmy-ing!

top 42 comments
[–] [email protected] 21 points 1 year ago (1 children)

Great write-up! This might help some of the bigger instances as well.

[–] [email protected] 11 points 1 year ago (2 children)

I assume the larger instances would probably know this already, since they likely have more skilled sysadmin teams than we one-man-show types, but very true - if anything it would save them money to a much greater degree than small instances!

[–] timespace 13 points 1 year ago* (last edited 1 year ago) (1 children)

Pretty sure lemmy.world is just a dude, and it’s one of the largest instances.

[–] PriorProject 7 points 1 year ago

It's a team: https://lemmy.world/post/28012

I don't know much about how they share the load, and Ruud does seem more visibly active than the others... but he's not a one-man show. In the early weeks he WAS a one-man show, but has a team he works with at mastodon.world and has since brought some of them over to help here.

[–] [email protected] 3 points 1 year ago

Yes this was discussed a few times in the instance admins Matrix chat in the last couple of weeks. But if you pay for a large server it usually also comes with sufficient storage.

[–] [email protected] 10 points 1 year ago (1 children)

plus this content is served off of Vultr’s CDN instead of your instance, meaning even less CPU load for you.

Currently the Lemmy backend proxies all image requests, so this isn't true (for now).

[–] [email protected] 7 points 1 year ago (2 children)

Ah! Noted. That said, it is definitely storing the bytes on the object store. I imagine someone clever with nginx or such could set up some rewrite rules to bypass pictrs entirely for GET requests, but unfortunately that's beyond my pay grade here.

[–] [email protected] 3 points 1 year ago (2 children)

Not an expert either, but if you do it through nginx I think it will still depend on your single VPS. There probably needs to be a change in the Lemmy-ui to tell the browser to download directly from the object storage CDN.

[–] [email protected] 2 points 1 year ago (1 children)

Thinking about this a little more: I think yeah, the HTTP requests will always hit your VPS, but if what you're saying is that pictrs is loading images from the object store and then re-serving them off your VPS, then an NGINX rule might be able to redirect the GET directly to the object store, so that instead of transferring the actual image bytes, it just redirects (e.g. 302's) the browser through to the object store. I don't know how feasible this is, but I may play around with it to see.

[–] [email protected] 2 points 1 year ago (2 children)

Pict-rs uses a database to match the URI hash to the file name on disk or in the object store. This allows for deduplication. It always needs to sit between storage and requests. I have my instance set up to use a separate CDN domain and caching servers to reduce load on my instance. One day soon I hope to get a write-up done on how to do it.
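Until that write-up exists, here's a very rough sketch of the general idea: an nginx cache on a separate media hostname sitting in front of the /pictrs/ routes. Every name, path and size below is a placeholder for illustration, not my actual config:

    # in the http {} block:
    proxy_cache_path /var/cache/nginx/pictrs levels=1:2 keys_zone=pictrs_cache:10m
                     max_size=10g inactive=7d use_temp_path=off;

    server {
        listen 443 ssl;
        server_name media.example.com;              # hypothetical separate media/CDN domain
        # ...ssl_certificate / ssl_certificate_key as usual...

        location /pictrs/ {
            proxy_pass https://lemmy.example.com;   # your actual Lemmy instance
            proxy_set_header Host lemmy.example.com;
            proxy_cache pictrs_cache;
            proxy_cache_valid 200 7d;               # keep successful image responses for a week
            proxy_ignore_headers Cache-Control Expires;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }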

[–] [email protected] 1 points 1 year ago

I would love to see how you did that, that would really add value to the whole object store thing.

[–] [email protected] 1 points 1 year ago (1 children)

This allows for deduplication

Really? I've found uploading the same image to pict-rs multiple times gives a different hash. It does not seem to dedupe at all.

[–] [email protected] 2 points 1 year ago (1 children)

It allows for different hashes on the front end so individual users can still delete their upload. The sled database maps front end to back end hashes. At least this is what I read from the developer in their matrix chat room.

[–] [email protected] 1 points 1 year ago

Oh ok, that makes sense. Thanks for the info.

[–] [email protected] 1 points 1 year ago

One would imagine it's a logical feature, especially for larger images, so maybe it'll come. For now though, this is still better than not doing it, IMO.

[–] MigratingtoLemmy 0 points 1 year ago

I would love to know how this could be done

[–] [email protected] 8 points 1 year ago

I'm using S3FS to achieve the same thing, but without modifying the ansible config or using native object storage within pict-rs.
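For anyone unfamiliar with that approach: s3fs-fuse mounts the bucket as a normal directory, so pict-rs keeps writing to what it thinks is local disk while the bytes land in object storage. A rough sketch, with the bucket, paths and endpoint as placeholders rather than my real values:

    # mount the bucket over the directory pict-rs already uses
    echo 'ACCESSKEY:SECRETKEY' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs
    s3fs my-pictrs-bucket /srv/lemmy/volumes/pictrs \
        -o passwd_file=${HOME}/.passwd-s3fs \
        -o url=https://sjc1.vultrobjects.com \
        -o allow_other -o uid=991 -o gid=991    # pict-rs runs as 991:991 inside the container
    # add -o use_path_request_style if your provider needs path-style requests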

[–] [email protected] 6 points 1 year ago

If using Vultr, I'd recommend Backblaze for the S3 backend, as data transfer will be unlimited. With Vultr's own object storage, data moving between the object store and your VPS counts against your data cap. Backblaze and Vultr are both part of the Bandwidth Alliance, which means none of that traffic is counted, and if you put your Vultr instance behind Cloudflare (also in the alliance) you won't use any data on the instance for web traffic either. That said, we've seen how Cloudflare seems to have caused issues with Kbin, so I'm not sure that's the best thing to do.

[–] [email protected] 4 points 1 year ago

I'd put Wasabi here as well, they're pretty competitive with their pricing.

Atm I moved to a Hetzner server with 500GB SSDs (and consolidated my Lemmy instance with other stuff I host), and that should be more than enough.

I imagine for larger instances, block storage might come in handy, even with something like running Minio on a storage-type server.

Great writeup!

[–] [email protected] 4 points 1 year ago

Thank you so much for this. It is much needed!

[–] [email protected] 3 points 1 year ago

Oh this is great, thanks!

[–] [email protected] 2 points 1 year ago (1 children)

Hello again! I just completed the object storage migration. Here's what I learned, if you want to do it with an instance that's already set up:

  1. Download the binary file for pict-rs from the project's git repository.
  2. Stop the pict-rs container.
  3. Perform the migration as indicated in the pict-rs documentation. If it hangs at some point due to a missing file, re-run with --skip-missing-files.
  4. Verify that files have been migrated to object storage.
  5. Change docker-compose settings.
  6. And here's the most important part: changes won't be applied unless you run docker-compose up -d. Simply running docker-compose restart will NOT apply the new config (see the two commands below). This might be obvious to Docker users, but I didn't know about it and had to roll back the first time, because the instance wouldn't fetch images from object storage even though they had already been migrated there.
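In other words (generic docker-compose behaviour, nothing Lemmy-specific here):

    docker-compose restart pictrs   # reuses the existing container - compose file changes are NOT picked up
    docker-compose up -d pictrs     # recreates the container, so the new environment variables take effect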
[–] [email protected] 2 points 1 year ago (1 children)

Fantastic! Thanks for sharing your experience!!

[–] [email protected] 2 points 1 year ago

I just posted this. Since you helped me with object storage maybe you'll find this feature also useful. https://yiffit.net/post/232759

[–] entropicshart 2 points 1 year ago

Can anyone share what bucket permissions they used for pict-rs? I am using minio and used the below policy for an access key, but am still getting unauthorized responses

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "*"
                ]
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::pict-rs"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "*"
                ]
            },
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::pict-rs/*"
            ]
        }
    ]
}
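For context, my understanding is that pict-rs also needs to upload and delete objects, so read-only actions presumably aren't enough on their own. Something shaped like this is what I'd expect a user/key policy to need (the exact MinIO semantics are a guess on my part):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:ListBucket",
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::pict-rs",
                    "arn:aws:s3:::pict-rs/*"
                ]
            }
        ]
    }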
[–] [email protected] 2 points 1 year ago (1 children)

Thank you for sharing this. I'm going to try to go through this migration shortly.

Right now I'm running my instance on a fairly lean VPS so being able to lighten the CPU load and not have to pre-allocate disk space is super useful.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Replying to confirm that this works and went very smoothly! If you can see my profile picture, it's on S3 instead of disk now.

I'm using pure ansible to deploy my containers (instead of docker compose) so I had to figure out how to start the pictrs container without actually starting pictrs so that I could run the migration. I ended up stopping the container and then running this to perform the migration:

docker run --name pictrs-migration \
  --user 991:991 \
  -v /my-pictrs-path/:/mnt \
  --rm \
  asonix/pictrs:0.4.0-rc.14 \
  pict-rs \
    migrate-store \
    filesystem \
    object-storage \
        -e https://my-s3-endpoint \
        -b my-s3-bucket-name \
        -r my-region \
        -a my-key-id \
        -s my-key-secret

Then I used ansible to redeploy the container with volume mount removed and the new s3 environment variables.
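For the curious, the redeploy task looks roughly like this. The module (community.docker.docker_container) is real, but the names, tag, network and values here are placeholders rather than a copy of my playbook:

    - name: Run pict-rs against object storage
      community.docker.docker_container:
        name: pictrs
        image: asonix/pictrs:0.4.0-rc.14
        user: "991:991"
        restart_policy: always
        networks:
          - name: lemmy_network
        env:
          PICTRS__STORE__TYPE: "object_storage"
          PICTRS__STORE__ENDPOINT: "https://my-s3-endpoint"
          PICTRS__STORE__BUCKET_NAME: "my-s3-bucket-name"
          PICTRS__STORE__REGION: "my-region"
          PICTRS__STORE__ACCESS_KEY: "my-key-id"
          PICTRS__STORE__SECRET_KEY: "my-key-secret"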

Super easy!

[–] [email protected] 2 points 11 months ago (1 children)

Thank you for this write-up. Your post is the only place I can find on the internet on making the transition to object storage specifically with Lemmy.

[–] [email protected] 2 points 11 months ago

Glad you found it helpful!

[–] MigratingtoLemmy 2 points 1 year ago

Hi, thanks for lemmy.federate.cc. I will subscribe to any communities there as they come up. Thank you for your service to the fediverse

[–] [email protected] 2 points 1 year ago

Awesome!

My current host is much more expensive. They (Contabo) sell 250GB for €3. Storj.io sells 1TB for €4 for storage, €7 for bandwidth. But it is still more viable than upgrading my VPS.

[–] Shadesto 1 points 10 months ago* (last edited 10 months ago)

I'm attempting this migration on an instance that has been running for about a month, is federated with the top 10+ instances and has synced a lot of data.

The steps I'm using are as follows:

stop the pictrs container: sudo docker stop domainname_pictrs_1

run docker-compose to open a session in the stopped container: sudo docker-compose run pictrs sh

run the migration command per the pict-rs docs: https://git.asonix.dog/asonix/pict-rs/#filesystem-to-object-storage-migration

When this runs, it appears to be trying to sync like... all of the lemmy fediverse... to my object storage:

2023-08-13T17:55:44.426301Z WARN pict_rs: Running checks

2023-08-13T17:55:45.188984Z WARN pict_rs: Checks complete, migrating store

2023-08-13T17:55:45.275403Z WARN pict_rs: 56963 hashes will be migrated

Most of these fail, and I'm trying to run it again with --skip-missing-files, but based on what I'm seeing I don't know if this is really something that can be done once an instance has federated with a lot of other instances.

Am I missing something?

Edit: with --skip-missing-files it's telling me that it's going to take 23403 seconds (6.5 hours) to complete this migration.

When I look into the bucket, I see all kinds of random images being migrated over, so it's definitely storing pretty much every image that my instance has ever synced. Is there a way to just migrate content that originated on my instance?

[–] [email protected] 1 points 1 year ago

Thanks for the tip! I’ll be doing the same ASAP on my private instance as well.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

You could also consider LPP for purging old posts that your community hasn't interacted with: https://lemmy.world/post/559690

[–] [email protected] 1 points 1 year ago

Thank you! This will save me some $$$

[–] entropicshart 1 points 1 year ago (1 children)

Has anyone run into errors with pict-rs throwing

   0: Error in store
   1: Error in object store
   2: Error making request: Failed to connect to host: Failed resolving hostname: failed to lookup address information: Name does not resolve

when trying to upload to object storage?

I am running a minio instance and have other applications creating/getting objects without issue, and I've confirmed that the credentials are valid for the pict-rs bucket. I'm hitting a wall on what might be causing this.

[–] [email protected] 1 points 1 year ago (1 children)

Looks like your instance can't find the object store. Maybe double-check the endpoint URL, and also make sure pict-rs has outbound network access.

[–] entropicshart 1 points 1 year ago

Turns out it was the PICTRS__STORE__USE_PATH_STYLE variable that had to be set to true, since minio was running on a subdomain.

Appreciate the help!

[–] [email protected] 1 points 1 year ago (1 children)

Hello! Could I please ask you to confirm which of the two migration commands you used for your instance? The one with the path to the sled repo or the one without?

Thank you!

[–] [email protected] 2 points 1 year ago

Actually neither, I brought up my instance new with this configuration, sorry!

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

I tried this with a brand new lemmy ansible setup using Vultr object storage, and my media upload requests respond with a timeout

Request error: error sending request for url (http://pictrs:8080/image): error trying to connect: dns error: failed to lookup address information: Try again
