Pete90

joined 1 year ago
[–] [email protected] 1 points 1 week ago (1 children)

I thought I had read it here. I'm moving instances anyway, but thanks for the tip.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (1 children)

EDIT: I found the following and it works. Thanks to everyone, and thanks to the author!


For anyone who no longer has an open browser session, here is a small but functional Bash script that creates a myFedditUserData.json in the directory it is run from, which can then be imported on other instances.

Requirements:

  • A Linux/macOS installation
  • jq installed (on Ubuntu/Debian/Mint e.g. via sudo apt install -y jq)

Instructions:

  • Save the script below under any name with a .sh extension, e.g. getMyFedditUserData.sh
  • Open the script in any text editor and fill in your username/email and password (optionally change the instance)
  • Open a terminal in the script's folder and run chmod +x getMyFedditUserData.sh (adjust the name if necessary)
  • Run ./getMyFedditUserData.sh
  • A fresh myFedditUserData.json now sits in the folder next to the script

Note: The script is quite simple: it requests a JWT bearer token and sends it as a header with the GET call to https://feddit.de/api/v3/user/export_settings. Anyone without Linux/macOS at hand can reproduce the flow with other tools.

The script:

#!/bin/bash

# Basic login script for Lemmy API

# CHANGE THESE VALUES
my_instance="https://feddit.de"			# e.g. https://feddit.nl
my_username=""			# e.g. freamon
my_password=""			# e.g. hunter2

########################################################

# Lemmy API version
API="api/v3"

########################################################

# Turn off history substitution (avoid errors with ! usage)
set +H

########################################################

# Login
login() {
	end_point="user/login"
	json_data="{\"username_or_email\":\"$my_username\",\"password\":\"$my_password\"}"

	url="$my_instance/$API/$end_point"

	curl -H "Content-Type: application/json" -d "$json_data" "$url"
}

# Get userdata as JSON
getUserData() {
	end_point="user/export_settings"

	url="$my_instance/$API/$end_point"

	curl -H "Authorization: Bearer ${JWT}" "$url"
}

JWT=$(login | jq -r '.jwt')

printf 'JWT Token: %s\n' "$JWT"

getUserData | jq > myFedditUserData.json
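
The reverse direction should work the same way. I haven't tested this, but current Lemmy versions also expose a matching user/import_settings endpoint, so a companion sketch for uploading the JSON to the new instance could look roughly like this (target instance, username and password are placeholders):

#!/bin/bash

# Untested companion sketch: push myFedditUserData.json to the new instance.
# Assumes the target runs a Lemmy version that offers user/import_settings.

new_instance="https://example-instance.tld"	# placeholder, change me
new_username=""
new_password=""

set +H

# Log in to the target instance and grab a JWT
jwt=$(curl -s -H "Content-Type: application/json" \
	-d "{\"username_or_email\":\"$new_username\",\"password\":\"$new_password\"}" \
	"$new_instance/api/v3/user/login" | jq -r '.jwt')

# Upload the previously exported settings
curl -H "Content-Type: application/json" \
	-H "Authorization: Bearer ${jwt}" \
	-d @myFedditUserData.json \
	"$new_instance/api/v3/user/import_settings"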
 

Hi there. A while back I read something about a way to export your data from feddit.de for the purpose of moving instances. I can't find the post anymore, though, because auto-hide won't hand it back to me. Can someone save me from having to subscribe to 100 communities again? Thanks!

[–] [email protected] 8 points 1 week ago

I always find this approach extremely creative. Okay, people are supposed to work, great. Right now they get (if anything at all) just enough to survive on. So we have a fairly large group of people who could use a bit of money and who often would also have the time to work a bit.

Now let's say they get minimum wage and want to work one day a week. Cool, 12.41 x 8 x 4 comes to about 400 euros. Sounds great, you could actually do something with that. It would genuinely be worth it. Then all these people could do the jobs nobody else wants to do. One day a week is perfectly bearable.

But wait, what's that? The job center only lets you keep 100 euros of it? Strange, I really can't imagine why people have no desire to work. 100 euros, that's a good 15 kebabs, and I'd only have to work 32 hours for it, so one kebab every two hours.

If you want people to work, you have to offer them something. Many of them simply CANNOT work full time, so it isn't worth it for them. It should be the other way around: the job center gets 100 euros and the Bürgergeld recipient keeps the rest.

[–] [email protected] 6 points 1 week ago (1 children)

And the extra fun part is that once they've expired, they can cause skin cancer.

[–] [email protected] 5 points 1 week ago (4 children)

I thought so too, but I was sure it HAD to be something else. That's how you end up being wrong.

[–] [email protected] 5 points 1 week ago (6 children)

What the fuck is Tzabatta?

[–] [email protected] 2 points 2 weeks ago (1 children)

How would that work? I couldn't find it in that post.

[–] [email protected] 38 points 2 weeks ago (2 children)

I agree, but most games also have a better value-to-cost ratio. If I buy a game for 50 bucks, I'll play it for many hours, let's say 50. That works out to 1 per hour, pretty good. If I buy a new movie that isn't available on subscription streaming, that ratio is easily double. If I have a subscription and now need another one, that also lowers its value. It also comes with less comfort and ease of consumption, as you mentioned.

Another great example is YouTube Premium. I'll gladly pay 5 or 7 bucks for ad-free content, but not 14. I don't need YouTube Music. So I block ads where I can and donate to creators if I can afford it. They could have had my money, but they are, simply, greedy.

I also hate it when deals are altered without my consent. It makes me feel like a sucker and therefore makes it less likely that I'll keep investing.

[–] [email protected] 30 points 2 weeks ago (16 children)

Because often enough, scientific results contradict religious belief. The heliocentric model, for example.

[–] [email protected] 9 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

You most likely won't utilize these speeds in a home lab, but I understand why you want them. I do too. I settled for 2.5 Gbit because that was the sweet spot in terms of speed, cost and power draw. In total, I idle at about 60 W for the following systems:

  • Lenovo M90q (i7 10700, 32GB, 3 x 1TB SSD) running Proxmox, 15 W idle
  • Custom NAS (Ryzen 2400G, 16GB, 4 x 12TB HDD) running TrueNAS, 30 W idle
  • Firewall (N5105, 8GB) running OPNsense, 8 W idle
  • FritzBox 6660 Cable, which functions as a glorified access point, 10 W idle

[–] [email protected] 5 points 4 weeks ago (1 children)

Weird, isn't it? A lot of those successful services have cute little mascots. It influences me more than it should.

[–] [email protected] 1 points 1 month ago

I know exactly what you mean. I'd also prefer Debian, Mint or Fedora. Each has its weaknesses, but you've got to start somewhere. Go for it, then decide for yourself. It's not that hard to switch again.

 

I'm in the market for a used 4TB drive for my offsite backup. As I've recently acquired four 12TB drives (about 10,000 hours and one to two years old) for 130€ each, I was optimistic. 30 to 40€, I thought. Easy.

WRONG! Used drive, failing SMART stats, 40€. Here's a new drive, no hours on it. Oh wait, it was cold storage and it's almost 8 years old. Price? 90€ (mind you, a brand-new drive costs about 110€). Another drive has already failed, but someone wants 25€ for e-waste. No sir, it worked fine when I ran Check-Disk, please buy. Most of the decent ones go for 70 to 80€, way too close to the new price. I PAID 130 FOR 12TB. Those drives were almost new and under warranty. WHY DOES THIS NUMBNUT WANT 80 EUROS FOR A USED 4TB DRIVE? And what sane person doesn't put SMART data in their listings??? I have to ask for it at least 50 percent of the time. Don't even get me started on those external hard drives, they were trash to begin with. I'm SO CLOSE to buying a high-capacity drive, because in that segment people actually know what they are doing and understand what they have.

Rant over.

What gives? Did these people buy them when they were much more expensive? Does anyone know a good site that ships refurbished drives to Germany? Most of those I found are also rip-offs...
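
For anyone checking a drive themselves before (or right after) buying, this is roughly what I run, assuming smartmontools is installed (the device name is just an example):

# Full SMART report: look at Power_On_Hours, Reallocated_Sector_Ct,
# Current_Pending_Sector and the overall health assessment
sudo smartctl -a /dev/sdX

# Long self-test, then read the result once it's done (can take hours)
sudo smartctl -t long /dev/sdX
sudo smartctl -l selftest /dev/sdX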

 

Hej everyone. My Traefik setup has been up and running for a few months now. I love it; it was a bit scary to switch at first, but I encourage you to look at it if you haven't. Middlewares are amazing: I mostly use them for CrowdSec and authentication. There are two things I could use some feedback on, though.


  1. I mostly use Docker labels to set up routers in Traefik. Some people define only one router (HTTP), some define both (HTTP + HTTPS); I did the latter.
labels:
  - traefik.enable=true
  - traefik.http.routers.jellyfin.entrypoints=web
  - traefik.http.routers.jellyfin.rule=Host(`jellyfin.local.domain.de`)
  - traefik.http.middlewares.jellyfin-https-redirect.redirectscheme.scheme=https
  - traefik.http.routers.jellyfin.middlewares=jellyfin-https-redirect
  - traefik.http.routers.jellyfin-secure.entrypoints=websecure
  - traefik.http.routers.jellyfin-secure.rule=Host(`jellyfin.local.domain.de`)
  - traefik.http.routers.jellyfin-secure.middlewares=local-whitelist@file,default-headers@file
  - traefik.http.routers.jellyfin-secure.tls=true
  - traefik.http.routers.jellyfin-secure.service=jellyfin
  - traefik.http.services.jellyfin.loadbalancer.server.port=8096
  - traefik.docker.network=media

So I don't want to serve HTTP at all; everything gets redirected to HTTPS anyway. What I don't know is whether I can skip the HTTP part. Must I define the web entrypoint on each router for the redirect to work, or can I define it once in traefik.yml as I did below?

entryPoints:
  ping:
    address: ':88'
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

  2. I use Homepage (from benphelps) as my dashboard and noticed that when I refresh the page, all those widgets take a long time to load. They didn't do that when I connected Homepage to those services directly via IP:PORT. Now I use the URLs provided by Traefik, and it's slow. It's not really a problem, but I wonder if I made a mistake somewhere. I'm still a beginner when it comes to this, so any pointers in the right direction are appreciated. Thank you =)

15
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/selfhosted
 

EDIT: I found something looking through the source code on GitHub. I couldn't find anything at first, but then I searched for "periodic" and found something in middlewared/main.py.

These tasks (see below) are executed at system start and re-run after method._periodic.interval seconds. Looking at the log in /var/log/middlewared.log, I saw that the interval was 86400 seconds, exactly one day. So I'm assuming that the daily execution time is set at the last system start.
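
In case anyone wants to retrace this, these are roughly the commands I used (paths assumed for TrueNAS SCALE, they may differ on CORE):

# Where the periodic decorators live in the middleware source
grep -rn "periodic" /usr/lib/python3/dist-packages/middlewared/ | head

# What the log recorded about the re-run interval (86400 s in my case)
grep "86400" /var/log/middlewared.log | tail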

I've rebooted and will report back in a day. Maybe somebody can find the file to set it manually, rather than in the source code. That is waaaay too advanced for me.

EDIT 2:

I was correct: the tasks were executed 24 hours later. This at least gives a crude way to change their execution time: restart the machine.


Hej everyone, in the past few weeks I've been digging my hands into TrueNAS and have set up a nice little NAS for all my backup needs. The drives spin down when not in use, as the instance only receives/sends backup data once a day. However, there are a few periodic tasks which wake my drives. Namely:

catalog.sync                    Success   26796   12/03/2024 18:06:54   12/03/2024 18:06:54
catalog.sync_all                Success   26795   12/03/2024 18:06:54   12/03/2024 18:06:54
zfs.dataset.bulk_process        Success   26792   12/03/2024 18:06:43   12/03/2024 18:06:43
pool.dataset.sync_db_keys       Success   26791   12/03/2024 18:06:42   12/03/2024 18:06:43
certificate.renew_certs         Success   26790   12/03/2024 18:06:42   12/03/2024 18:06:43

dscache.refresh                 Success   24991   12/03/2024 03:30:01   12/03/2024 03:30:01
update.download                 Success   25027   12/03/2024 03:46:01   12/03/2024 03:46:02

I spent the last hour searching online, digging through files and checking cron. I found dscache.refresh and update.download, but I can't find the first five. At least one of them wakes my drives. Does anyone have an idea? There used to be a periodic.conf, but I can't find it on my system. Thanks!
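
In case it helps anyone doing the same hunt: watching pool I/O live at least pins down the exact moment a task touches the disks (the pool name is just an example):

# Prints per-device I/O once per second; a sudden burst on the
# otherwise idle pool marks the moment one of those tasks wakes it
zpool iostat -v tank 1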

 

Network design. I started my homelab / selfhosting journey about a year ago. Network design was the topic that scared me most. To challenge myself, and to learn about it, I bought a decent firewall box with 4 x 2.5G NICs. I installed OPNsense on it, following various guides. I set up my 3 LAN ports as a network bridge to connect my PC, NAS and server. I set filtering to be applied between these different NICs, so as to learn more about the behavior of the different services. If I want to access anything on my server from my PC, there needs to be a rule allowing it. All other traffic is blocked. This setup has worked great so far and I'm really happy with it.

Here is where I ran into problems. I installed Proxmox on my server and am in the process of migrating all my services over there from my NAS. I thought that all traffic from a VM in Proxmox would take this route: first VM --> OPNsense --> other VM. Then I could apply the appropriate firewall rules. This, however, doesn't seem to be the case. From what I've learned, VMs in Proxmox can communicate freely with each other by default. I don't want this.

From my research, I found different ideas and opposing solutions. This is where I could use some guidance.

  1. Use VLANs to segregate the VMs from each other. Each VLAN gets a different subnet.
  2. Use the Proxmox firewall to prevent communication between VMs. I'd rather avoid this, so I don't have to apply firewall rules twice. I could also install another OPNsense VM and use that, but same thing.
  3. Give up on filtering traffic between my PC, NAS and server. I trust all those devices, so it wouldn't be the end of the world. I just wanted the most secure setup I could do with my current knowledge.

Is there any way to just force the VM traffic through my OPNsense firewall? I thought this would be easy, but I couldn't find anything, or only very confusing ideas.
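
From what I've gathered so far, option 1 would mean making the Proxmox bridge VLAN-aware, giving each VM a VLAN tag and letting OPNsense own the gateway for every VLAN, so inter-VM traffic has to be routed (and filtered) by the firewall. A sketch of what that might look like in /etc/network/interfaces on the Proxmox host (interface name and VLAN range are assumptions):

auto vmbr0
iface vmbr0 inet manual
	bridge-ports enp1s0
	bridge-stp off
	bridge-fd 0
	bridge-vlan-aware yes
	bridge-vids 2-4094
# Each VM NIC then gets its VLAN tag in the Proxmox hardware settings,
# and OPNsense carries one interface (or sub-interface) per VLAN.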

I also have a second question. I followed TechnoTim to set up Traefik and use my local DNS and wildcard certificates. Now I can reach my services using service.local.example.com, which I think is neat. However, in order to do this, it was suggested to use one Docker network called proxy. Each service is assigned this network, and Traefik uses labels to set up the routes. Wouldn't this allow all those services to communicate freely? Normally, each container has its own network and Docker uses iptables to isolate them from each other. Is this still the way to go? I'm a bit overwhelmed by all those options.
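
For reference, the compromise I've seen suggested is to put only the reverse-proxied frontends on the shared proxy network and keep everything else (databases etc.) on a per-stack network; containers on proxy can still reach each other, but at least the backends stay isolated. A sketch (service names and images are just examples):

services:
  nextcloud:
    image: nextcloud
    networks:
      - proxy      # reachable by Traefik
      - backend    # reachable by its own database
  db:
    image: mariadb
    networks:
      - backend    # not on "proxy", so other stacks can't reach it

networks:
  proxy:
    external: true
  backend: {}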

Is my setup overkill? I'd love to hear what you guys think! Thank you so much!

25
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/selfhosted
 

EDIT: SOLUTION:

Nevermind, I am an idiot. As @ClickyMcTicker pointed out, it's the client side that is causing the trouble. His comment made me think, so I checked my testing procedure again. It turns out that, completely by accident, every time I copied files to the LVM-based NAS I used the SSD in my PC as the source, and every time I copied to the ZFS-based NAS I used my hard drive as the source. I did that about 10 times. Everything is fine now. Maybe this can help some other dumbass like me in the future. Thanks everyone!

Hello there.

I'm trying to set up a NAS on Proxmox. For storage, I'm using a single 2TB Samsung Evo 870 (backups will be done anyway, no need for RAID). To do this, I set up a Debian 12 container, installed Cockpit and the tools needed to share via SMB. I set everything up and transferred some files: about 150 MB/s with huge fluctuations. Not great, not terrible. iperf reaches around 2.25 Gbit/s, so something is off. Let's do some testing. I started with the filesystem; this whole setup is for testing anyway.

  1. Storage via creating a directory with EXT4, then adding a mount point to the container. This is what gave me the speeds mentioned above. Okay, not good. --> 150 MB/s, fluctuating speed
  2. Let's do ZFS, which I want to use anyway. I created a ZFS pool with ashift=12, atime=off, compression=lz4, xattr=sa and a 1MB record size. I did "some" research and this is what I came up with, please correct me. Mount it into the container, and go. --> 170 MB/s, stable speed
  3. Tried OpenMediaVault and used EXT4 with ZFS as the base for the VM drive. --> around 200 MB/s
  4. LVM-Thin using the Proxmox GUI, then mounted into the container. --> 270 MB/s, which is pretty much what I'm reaching with iperf.

So where is my mistake when using ZFS? Disable compression? A different record size? Any help would be appreciated.
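
For reference, the pool from step 2 was created more or less like this (pool and device names are placeholders):

zpool create -o ashift=12 \
	-O atime=off -O compression=lz4 -O xattr=sa -O recordsize=1M \
	tank /dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB_XXXXXXXX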

 

Black Friday is almost upon us and I'm itching to get some good deals on the missing hardware for my setup.

My boot drive will also be VM storage and reside on two 1TB NVMe drives in a ZFS mirror. I plan on adding another SATA SSD for data storage. I can't add more storage right now, as my M90q can't be expanded easily.

Now, how would I best set up my storage? I have two ideas and could use some guidance. I want some NAS storage for documents, files, videos, backups etc. I also need storage for my VMs, namely Nextcloud and Jellyfin. I don't want to waste NVMe space, so this would go on the SATA SSD as well.

  1. Pass the SSD to a VM running some NAS OS (OpenMediaVault, TrueNAS, plain Samba). I'd then set up different NFS/Samba shares for my needs. Jellyfin and Nextcloud would rely on the NFS share for their storage needs. Is that even possible, and if so, is it a good idea? I could easily access all files if needed. I don't know if there would be problems with permissions or diminished read/write speeds, especially since there are a lot of small files on my Nextcloud.

  2. I split the SSD, pass one partition to my NAS and the other will be used by Proxmox to store virtual disks for my VMs. This is probably the cleanest, but I can't easily resize the partitions later.

What do you think? I'd love to hear your thoughts on this!
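
For what it's worth, passing the whole SATA SSD through to the NAS VM in option 1 should just be a matter of something like this on the Proxmox host (VM ID and disk ID are placeholders):

# Attach the physical disk to VM 101 as an additional SCSI device
qm set 101 -scsi1 /dev/disk/by-id/ata-Samsung_SSD_870_EVO_4TB_XXXXXXXX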

 

I posted a few days ago asking how to set up my storage for Proxmox on my Lenovo M90q, which I have since settled. Or so I thought. The Lenovo has space for two NVMe drives and one SATA SSD.

There seems to be a general consensus that you shouldn't use consumer SSDs (even NAS SSDs like WD Red) for ZFS, since there will be lots of writes which in turn will wear out the SSDs fast.

Some conflicting information is out there, with some saying it's fine and a few GB of writes per day are okay, and others warning of several TB of writes per day.

I plan on using Proxmox as a hypervisor for homelab use, with one or two VMs running Docker, Nextcloud, Jellyfin, an Arr stack, TubeArchivist, PiHole and such. All static data (files, videos, music) will not be stored on ZFS, just the VM images themselves.

I did some research and found a few SSDs with good write endurance (see the table below) and settled on two WD Red SN700 2TB in a ZFS mirror. Those drives are rated for 2500 TBW. For file storage, I'll just use a Samsung 870 EVO with 4TB and 2400 TBW.

SSD       Capacity   TBW    Price (€)
980 PRO   1TB        600    68
          2TB        1200   128
SN 700    500GB      1000   48
          1TB        2000   70
          2TB        2500   141
870 EVO   2TB        1200   117
          4TB        2400   216
SA 500    2TB        1300   137
          4TB        2500   325
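
To put those TBW ratings into perspective, here's a rough back-of-the-envelope lifetime estimate; the daily write volume is a pure assumption and the one number worth adjusting:

# Years until the rated TBW is exhausted at a constant daily write volume
tbw_tb=2500           # rated endurance of the SN700 2TB
writes_gb_per_day=50  # assumed daily writes from Proxmox + VMs
echo "scale=1; $tbw_tb * 1024 / $writes_gb_per_day / 365" | bc
# ~140 years at 50 GB/day; even 500 GB/day would still leave ~14 years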

Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which M.2 NVMe ones would you recommend? Or should I just stick with ext4 as the filesystem, losing data security and the ability to take snapshots?

I'd love to hear your thoughts about this, thanks!

12
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/selfhosted
 

Hej everyone! I’m planning on getting acquainted with Proxmox, but I’m a total noob, so please keep that in mind.

For this experiment, I’ve purchased a Lenovo M90q (Gen 1) to use as an efficient hardware basis. This system will later replace my current one. On it, I want to set up a small number of virtual machines, mainly one for Docker and one for NAS (or set up a NAS with Proxmox itself).

My main concern right now is storage. I’d like to have some redundancy built into my setup, but I am somewhat limited with the M90q. I have space for two M.2 2280 NVMe drives as well as one SATA port. There are also several options to extend this setup, using either a Wi-Fi M.2 to SATA adapter or a PCIe x8 to SATA/NVMe card. For now, I’d like to avoid adding complexity and stick with the onboard options, but I'm open to suggestions. I'd buy some new or refurbished WD Red NAS SSDs.

Given the storage options that I have, what would be a sensible setup to have some level of redundancy? I can think of three options:

  1. ZFS Mirror using two NVMe as well as a SATA-SSD for non-critical storage. I would set up Proxmox and VMs on the same disk and mirror it to have redundancy. I could store ISOs and “ISOs” on the SATA-SSD, where no redundancy is needed, as it would be backed up to a different system anyway.

  2. Proxmox and the VMs each get their own NVMe drive, with non-critical storage on the SATA SSD. Here, “redundancy” would be achieved by backing up the host and the VMs to my NAS (see the vzdump sketch after this list). This process is somewhat tedious and would cause downtime if something happens.

  3. Add a Wi-Fi M.2 to SATA adapter and power two SSDs with an external power supply (possibly internal?) and install Proxmox on these.
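
The backup part of option 2 would presumably be a scheduled vzdump per VM to an NFS/SMB storage added in Proxmox, roughly like this (VM ID and storage name are placeholders):

# One-off backup of VM 100 to a storage called "nas-backup"
vzdump 100 --storage nas-backup --mode snapshot --compress zstd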

I’d love to hear your thoughts on this. Am I being too paranoid about redundancy? I’m hosting nothing critical, but downtime would cause some inconvenience (e.g., no Jellyfin, Nextcloud, Pi-hole, or Vaultwarden) until I fix it. The data of these services will always be backed up using the 3-2-1 system, and I'll move to an HA setup in the future when funds allow it.

EDIT: Are there any disadvantages to Proxmox and the VMs being on the same disk?

 

Hei there. I've read that it's best practice to use Docker volumes to store persistent container data (such as configs and files) instead of using bind mounts. So far, I've only used the latter and would like to change this.

From what I've read, all volumes are stored in /var/lib/docker/volumes. I also understood that a volume is basically a subdirectory in that path.

I'd like to keep things organized and would like the volumes of my containers to be stored in subdirectories for each stack in Docker Compose, e.g.:

volumes/arr/qbit
volumes/arr/gluetun
volumes/nextcloud/nextcloud
volumes/nextcloud/database

Is this possible using compose?
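
From what I understand, plain named volumes always end up under /var/lib/docker/volumes/<project>_<name>, so the per-stack grouping mostly comes from the Compose project name. If the goal is to choose the path yourself, the local driver's bind options seem to be the usual trick; a sketch (paths, names and the image are just examples):

services:
  qbit:
    image: lscr.io/linuxserver/qbittorrent
    volumes:
      - qbit-config:/config

volumes:
  qbit-config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/volumes/arr/qbit   # must exist before "docker compose up"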

Another noob question: is there any disadvantage to using the default network docker creates for each stack/container?

 

I've not been as excited about a piece of tech for a long time. I'm trying to save a few bucks on this somewhat expensive machine and have some questions, which I hope you might be able to answer. Thanks!

I'm planning on bringing my own RAM and storage. Framework lists DDR5 SO-DIMM 5600 as compatible, but they do recommend avoiding XMP. Most modules I found do have XMP. Do they just mean to disable XMP? Any ideas or recommendations?

I'm also hesitant to buy the 180W charger, even though it seems reasonably priced. I'm not buying a graphics module just yet, but I might in the future, and 180W might not be enough power by then. I only want to buy once and couldn't find any higher-powered ones on Amazon (EU). I only found one 140W charger from UGREEN, which is a brand I've never heard of. Another option would be to buy a 100W charger now and another one later, but I want to reduce cost and e-waste.

One last question concerns the input modules. Do I understand correctly that both the numpad and the macro pad can be used for custom key functions and macros?

 

Hey guys. I've been spending the last few months setting up my home server. Lots of troubleshooting was needed, since I am somewhat of a beginner.

Now fail2ban works really well. In fact, it works too well. I've banned myself on some occasions. Here is how I set it up:

I have a filter/jail that looks for forceful browsing using the Nginx Proxy Manager access logs. I've used the following filter:

[INCLUDES]

[Definition]

failregex = ^.* (405|404|403|401|\-) (405|404|403|401) - .* \[Client <HOST>\] \[Length .*\] .* \[Sent-to <F-CONTAINER>.*</F-CONTAINER>\] <F-USERAGENT>".*"</F-USERAGENT> .*$

ignoreregex = ^.* (404|\-) (404) - .*".*(\.png|\.txt|\.jpg|\.ico|\.js|\.css|\.ttf|\.woff|\.woff2)(/)*?" \[Client <HOST>\] \[Length .*\] ".*" .*$

This fishes out all those errors; so far, so good. The problem is that, for some reason, my Nextcloud install throws a lot of these errors every now and then. I have no clue why. Everything works: file transfers, browsing the web UI, settings, no trouble. Still, those errors show up in the NPM log, for example:

[22/Jun/2023:18:44:24 +0200] - 404 404 - GET https ###SERVERURL### "/remote.php/dav/files/Pete90/Upload/Scan/Z/2023-06-22%2011-27%201.pdf" [Client ###IP### [Length 218] [Gzip -] [Sent-to ###SERVERLANIP###] "Mozilla/5.0 (Android) Nextcloud-android/3.25.0" "-"

This must have been the Android Nextcloud app, as it was automatically uploading some files.

Now here is where I need help. I've started adding things to the ignoreregex, and that works as a workaround. But new error types that I haven't added an ignoreregex for show up every now and then. This seems inefficient:

|.*PROPFIND.*files/Pete90.*Gzip.*|/ocs/v2.php/apps/text/workspace\?path=.2F|.*(?:/index.php/.well-known/nodeinfo|/index.php/.well-known/webfinger)|.*/core/preview.*$    ADD MORE LIKE THIS |.*REGEXYOUWANTTOIGNORE.*$
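
The best I've come up with so far is collapsing those fragments into a single alternation on the request path; untested, but it's just a merge of the entries above:

# Ignore the known-good Nextcloud endpoints in one go
ignoreregex = ^.*(?:/remote\.php/dav/|/ocs/v2\.php/|/index\.php/\.well-known/|/core/preview).*$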

What would you do to prevent this? Is there something wrong with my Nextcloud setup? Can I find a more general regex than the ones I used? Or should I simply exclude Nextcloud from the forceful browsing filter (I've set up a different filter/jail for Nextcloud itself)? Any input is appreciated!
