admin

joined 1 year ago
[–] [email protected] 1 points 1 year ago

Will try this, thanks for the tip

[–] [email protected] 3 points 1 year ago

Thanks, good point. Didn't know about that risk

 

I'm using my own instance. Is there a way to block posts coming from other instances that are below a certain threshold, for example -3 votes? I don't need to see them; I trust the moderators of other instances to handle their own posts responsibly. As it is today, half my home screen is full of posts at -40 that might have already been deleted on their home instance, but they get downloaded to me regardless.

[–] [email protected] 1 points 1 year ago

This already has a brim, but it is quite ugly. The thing is, I had the same failure while printing a bigger object, a Stormtrooper helmet. It randomly popped off the bed after about 2.5 hours.

[–] [email protected] 1 points 1 year ago (1 children)

I have already cleaned the bed with alcohol and a microfiber cloth. Yeah, the brim is supposed to be cylindrical, but it obviously isn't :D Is that a modification I should make in PrusaSlicer, for the first layer height?

 

Hi, I'm new to printing. I got myself a Kobra 2, and it was printing fine when I got it, but recently it started knocking prints off the bed at random times. I have done several recalibrations and adjusted the Z offset up and down. What I have noticed is that all of these models seem to have a small overflow of material, which I'm guessing the nozzle is bumping into, but I have no idea what causes it. I'll post the latest print that failed, where the extra material is more pronounced than usual. Any suggestions on what I should adjust? (Btw, I'm using OctoPrint.)

[–] [email protected] 3 points 1 year ago (1 children)

It's pretty much a "develop from zero" situation. You can import assets, but you will probably have to at least fix them up. If you are lucky, the two engines use the same language, but probably not; for example, Unity uses C# while UE5 uses C++. And that's before you even get to the parts where you actually use the engine. Everything that touches the capabilities of the specific game engine needs to be rewritten. Off the top of my head: interaction, physics engine usage, collision engine usage, AI stuff, etc.

[–] [email protected] 1 points 1 year ago (1 children)

Here is what I'm using atm. Is there a better way to do this? I'm still learning K8S :)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 250Mi
---
[....]
volumes:
  - name: config
    persistentVolumeClaim:
      claimName: sonarr-pvc
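
For completeness, the claim then just gets mounted into the container via volumeMounts, roughly like this (a sketch, not my exact spec; the image name and the /config path are assumptions):

containers:
  - name: sonarr
    image: lscr.io/linuxserver/sonarr  # assumed image, use whatever you actually run
    volumeMounts:
      - name: config
        mountPath: /config  # assumed config path of the image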
[–] [email protected] 1 points 1 year ago (3 children)

Yeah, I'm using Longhorn. It might be that I have set it up wrong, but it didn't seem to help with the DB corruption issue.

[–] [email protected] 9 points 1 year ago (5 children)

Basically this. I have my home stuff running in a K3S cluster, and I have had to restore my Sonarr volume several times because the SQLite DB got corrupted. Transitioning to Postgres should solve this issue, and I already have quite a few other things in it, for example Radarr and Prowlarr.
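
For anyone wondering what running Postgres in-cluster looks like, a minimal sketch is below (namespace, credentials, image tag and sizes are placeholders, not my actual setup; Longhorn is assumed as the storage class since that's what I use):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15  # any recent version should do
        env:
        # standard variables of the official postgres image
        - name: POSTGRES_USER
          value: sonarr
        - name: POSTGRES_PASSWORD
          value: changeme  # placeholder, use a Secret in practice
        - name: POSTGRES_DB
          value: sonarr-main
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432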

 

Thought I would let you all know in case you have missed it: a few days ago, Postgres support was finally merged into the Sonarr dev branch (meaning the 4.x version). I have already transitioned to it, and so far it runs without issue.

You can mostly follow the same instructions as for Radarr from here: https://wiki.servarr.com/radarr/postgres-setup

I used the following temporary docker container to do the conversion (obviously replace stuff you need to):

docker run --rm -v Route\to\sonarr.db:/sonarr.db --network=host dimitri/pgloader pgloader --debug --verbose --with "quote identifiers" --with "data only" "sqlite://sonarr.db" "postgresql://user:pwd@DB-IP/sonarr-main"

When the run completes, it outputs a kind of table that shows whether there were any errors. In my case there were 2 tables (can't remember which ones anymore) that couldn't be inserted, so I edited those manually afterwards to match the ones in the original DB.
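
If you run everything in Kubernetes anyway, the same one-off conversion can also be written as a Job instead of a plain docker run. A rough sketch (hostPath, credentials and DB-IP are placeholders, just like in the command above):

apiVersion: batch/v1
kind: Job
metadata:
  name: sonarr-pgloader
spec:
  backoffLimit: 0              # don't retry a half-finished migration
  template:
    spec:
      restartPolicy: Never
      hostNetwork: true        # equivalent of --network=host
      containers:
      - name: pgloader
        image: dimitri/pgloader
        args:                  # essentially the same arguments as the docker run above
        - pgloader
        - --debug
        - --verbose
        - --with
        - quote identifiers
        - --with
        - data only
        - sqlite:///sonarr.db  # absolute path of the mounted DB file
        - postgresql://user:pwd@DB-IP/sonarr-main
        volumeMounts:
        - name: sonarr-db
          mountPath: /sonarr.db
      volumes:
      - name: sonarr-db
        hostPath:
          path: /path/to/sonarr.db  # placeholder: node-local path of the SQLite DB
          type: File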

[–] [email protected] 4 points 1 year ago (1 children)

I've got the same Obsidian + Syncthing setup at the moment, I just haven't really tried to use it for writing yet. Wanted to see what else others use that might trump it :)

 

For those of you who write novels, books, etc.: what software do you use? What format? FOSS or proprietary?

[–] [email protected] 1 points 1 year ago

No error logs; based on the logs, though, it just ignores the config and uses the filesystem. So predictably, once my small config mount fills up (this was before emptyDir), it starts erroring out saying there is no more space on disk. Seemingly this didn't cause any errors for Lemmy, but it still doesn't feel right :)

 

Hi all,

I'm having an issue with the Lemmy instance I self-host on K8S. No matter what I do, pict-rs doesn't want to use my MinIO instance. I even dumped the env variables inside the pod, and they look like what the documentation describes. Any ideas?

apiVersion: v1
kind: ConfigMap
metadata:
  name: pictrs-config
  namespace: lemmy
data:
  PICTRS__STORE__TYPE: object_storage
  PICTRS__STORE__ENDPOINT: http://192.168.1.51:9000
  PICTRS__STORE__USE_PATH_STYLE: "true"
  PICTRS__STORE__BUCKET_NAME: pict-rs
  PICTRS__STORE__REGION: minio
  PICTRS__MEDIA__VIDEO_CODEC: vp9
  PICTRS__MEDIA__GIF__MAX_WIDTH: "256"
  PICTRS__MEDIA__GIF__MAX_HEIGHT: "256"
  PICTRS__MEDIA__GIF__MAX_AREA: "65536"
  PICTRS__MEDIA__GIF__MAX_FRAME_COUNT: "400"
---
apiVersion: v1
kind: Secret
metadata:
  name: pictrs-secret
  namespace: lemmy
type: Opaque
stringData: 
  PICTRS__STORE__ACCESS_KEY: SOMEUSERNAME
  PICTRS__STORE__SECRET_KEY: SOMEKEY
  PICTRS__API_KEY: SOMESECRETAPIKEY
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pictrs
  namespace: lemmy
spec:
  selector:
    matchLabels:
      app: pictrs
  template:
    metadata:
      labels:
        app: pictrs
    spec:
      containers:
      - name: pictrs
        image: asonix/pictrs
        envFrom:
        - configMapRef:
            name: pictrs-config
        - secretRef:
            name: pictrs-secret
        volumeMounts:
        - name: root
          mountPath: "/mnt"
      volumes:
        - name: root
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: pictrs-service
  namespace: lemmy
spec:
  selector:
    app: pictrs
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
[–] [email protected] 3 points 1 year ago

Yeah, just after posting this I saw a different comment in another post linking to those tickets :D Of course I would find my question answered 5 minutes after posting it. Thanks anyway :)

 

Hi, usual disclaimer: not sure where to post this, or whether this is already a thing.

But as a new Lemmier (?), I would love a feature that lets me create a meta community, or a view, where I can shove all the communities that share the same topic.

So I would have, for example, a meta community named "Tech" to which I could add all the tech communities from different instances, giving me a unified view of a topic across multiple instances.

Is this already a thing, or is it planned?

[–] [email protected] 2 points 1 year ago

Yeah, this is one of those competent people I was talking about. When I started creating my version, the chart didn't exist yet, but it is great to see that it is done-ish now.

 

So here are the files I have cobbled together in order to deploy Lemmy on my own cluster at home. I know there are Helm charts in the works, but this might help someone else who, just like me, cannot wait :)
