animist

joined 1 year ago
[–] [email protected] 0 points 1 year ago

ITT: people subscribing to a community for a free app they no longer use so they can keep bashing it for removing the one feature that kept memaw using it

People are EXTREMELY entitled

[–] [email protected] 3 points 1 year ago (2 children)

That's obviously not what they're saying. Stop being so negative towards strangers on the Internet.

[–] [email protected] 1 points 1 year ago (3 children)

How is it lock-in if they obviously did something that 1) many people disliked enough to leave Signal over, and 2) as you pointed out, they have plenty of identical competitors? That's not convincing at all given the other parts of your argument.

[–] [email protected] 14 points 1 year ago (12 children)

It was not foolish. It was a security decision, and the right one. The goal of Signal isn't to have billions of users; the goal is to be a privacy- and security-centered app. If a feature gets in the way of that, it should be removed immediately.

[–] [email protected] 37 points 1 year ago (13 children)

"This is the only one I use. If you need to reach me either do it on that or email me." Worked for my family.

[–] [email protected] 3 points 1 year ago (1 children)

Eh, this is one of those times that a broken tankie clock is correct.

[–] [email protected] 5 points 1 year ago (1 children)

Look at this graph

[–] [email protected] 6 points 1 year ago

what a piece of shit country

[–] [email protected] 6 points 1 year ago (3 children)

Wait til you hear what Arnold Schwarzenegger's last name means

[–] [email protected] -2 points 1 year ago (2 children)

because it's spelled exorcise

[–] [email protected] 1 points 1 year ago

RPMs were a pain for me when I transitioned as well, but I've learned to love them.

[–] [email protected] 0 points 1 year ago

Only once you start thinking outside the box

 

I have a torrent box on which I have openvpn running, using the .ovpn files from my VPN provider (renamed to .conf).

I would like to set up a kill switch so that if the VPN fails, my torrenting will not be exposed to the wider world. I am still able to SSH in because I used iptables to exclude the SSH port from the VPN connection.

I was looking at the instructions here to set up the killswitch: https://www.comparitech.com/blog/vpn-privacy/how-to-make-a-vpn-kill-switch-in-linux-with-ufw/

However, there are two issues:

  1. It says to check the conf file for the public IP address of the VPN server. In the author's example there is only one IP address listed, but in my .conf file there are two addresses, each listing several ports. The addresses themselves are nearly identical; only the final number is different. Which one do I pick? Is this just so that there are backups available if one fails? (Rough sketch of my current reading of the rules after this list.)

  2. It is a little strange, but the IP addresses listed in the .conf file for my current connection do not match the IP address I currently have through the VPN (I ran `curl https://ipinfo.io/ip` to check). Is this normal?
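For reference, my current reading of the guide's rules looks roughly like this. The addresses, port, interface, and subnet are placeholders, and I don't yet know whether to allow one remote or both:

```bash
# Rough sketch of the ufw kill switch from the guide (placeholder values)
sudo ufw default deny outgoing
sudo ufw default deny incoming

# Keep LAN SSH working so I don't lock myself out (assumed home subnet)
sudo ufw allow in on eth0 from 192.168.1.0/24 to any port 22 proto tcp

# Allow outbound traffic only to the VPN server(s) listed as "remote" in the .conf
sudo ufw allow out to 203.0.113.10 port 1194 proto udp
sudo ufw allow out to 203.0.113.11 port 1194 proto udp

# Once the tunnel is up, allow everything over it
sudo ufw allow out on tun0 from any to any
sudo ufw enable
```

My guess on question 1 is that the second address is just a fallback server, so allowing both shouldn't hurt, but I'd appreciate confirmation.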

Thank you in advance for any help you can provide.

 

Since /etc/openvpn gets scanned on boot for .conf files for openvpn to connect with, can I just rename all of my .ovpn files to .conf so that it picks a random one each time (or at least goes through them in alphabetical order until it finds one that works)?
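My rough understanding (which may be off) is that on Debian the systemd template unit openvpn@<name>.service maps to /etc/openvpn/<name>.conf, so rather than renaming everything and hoping one gets picked, I'd rename just the profile I want and enable that unit. Something like this, with a made-up filename:

```bash
# Hypothetical example: pick one provider profile and run only that one
sudo cp provider-us-1.ovpn /etc/openvpn/provider-us-1.conf
sudo systemctl enable --now openvpn@provider-us-1
systemctl status openvpn@provider-us-1
```

If the legacy behaviour with AUTOSTART="all" in /etc/default/openvpn still applies, renaming every .ovpn would presumably try to start all of them at boot rather than picking one at random, which isn't what I want.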

 

Just wanted to say thank you to everyone in this community for being awesome. This is not a help request, just me being super happy that I have finally overcome one of the biggest challenges I set for myself with self-hosting: a media server that I can add media to at any time, from anywhere in the world, so that my family and I, located on different continents, can enjoy it immediately!

My Raspberry Pi started out as just a simple Nextcloud box that I could access outside the home to escape from the Dropboxes and Google Drives of the world.

I ended up discovering everything it could do, became more and more enthralled, and kept challenging myself. I learned so much about NFS, config files, iptables, and Linux/networking in general that the knowledge itself was worth the struggle.

While I have more than a few programs running on it, the most challenging part has been the following setup (which I just put the finishing touches on):

  1. Have a Jellyfin server on the Pi which can be accessed from anywhere in the world.

  2. Be able to add media to Jellyfin via torrent.

  3. Use a separate (very old, Windows XP-era) 32-bit computer solely as a torrent box (running the latest Debian). I connect to my VPN provider via openvpn on the command line, with transmission-daemon running behind it. However, I want to be able to add a torrent from anywhere in the world at any time, and I can't do that if transmission-daemon is hiding behind a VPN. That means I need an SSH tunnel, but I can't create one if the entire server is behind the VPN! So I had to learn to mess with iptables and ip rules, and I was able to make SSH use the default network while everything else uses the VPN; now I can SSH tunnel in from outside the home network and open Transmission in a browser that way (rough sketch after this list).

  4. Since I am using two separate machines (the torrent box for downloading torrents and the Raspberry Pi for hosting the media server), I created an NFS share on the Raspberry Pi where the media sits and mounted it on the torrent box, so all finished downloads land directly in it.

  5. I set up Jellyfin to refresh every 6 hours to update the media that I now have.
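In case it helps anyone attempting point 3, this is roughly what the SSH exception looks like. The interface names, gateway address, and mark value here are placeholders from memory, so double-check them against your own setup:

```bash
# Reply packets from sshd (source port 22) get marked...
sudo iptables -t mangle -A OUTPUT -p tcp --sport 22 -j MARK --set-mark 0x22

# ...and marked packets use a routing table whose default route is the real NIC,
# not the VPN's tun0 default route
sudo ip rule add fwmark 0x22 table 22
sudo ip route add default via 192.168.1.1 dev eth0 table 22

# Loosen reverse-path filtering so the kernel doesn't drop the asymmetric replies
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
```

From outside the house I then tunnel the Transmission web UI (default port 9091) over SSH with `ssh -L 9091:localhost:9091 user@my-home-address` and open http://localhost:9091 in a browser.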

If anybody here is trying to do this and is having issues, I'm happy to answer any questions!

 

I have a Raspberry Pi with a 2TB SSD on which I store all of my media. That media sits in a directory that is capable of being mounted on other computers via NFS.

I have that directory mounted on another computer via NFS in /mnt. I am able to create directories, create files, move files there, and they show up instantaneously on the Raspberry Pi (I do this without sudo because I gave my user write permissions via chown).

However, when I attempt to download a torrent via Transmission and have it automatically save to the NFS-mounted share, it does so for a few seconds, then gives me one of the two following errors:

Error:  Permission denied (/mnt/....)

or

Error: Read-only filesystem (/mnt/....)

My Transmission Daemon user is set up to be my normal user.
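In case it's relevant, here is what I've been poking at so far; the paths and hostnames are made up:

```bash
# Is transmission-daemon really running as my user, or as debian-transmission?
ps -o user= -C transmission-daemon

# NFSv3 maps permissions by numeric uid/gid, not by username, so the IDs need
# to match on both machines
id                          # on the torrent box
ssh pi@raspberrypi id       # on the Pi

# Is the export read-write, and is the mount still rw (not remounted ro)?
cat /etc/exports            # on the Pi, e.g. /srv/media 192.168.1.0/24(rw,sync,no_subtree_check)
mount | grep /mnt           # on the torrent box; look for rw vs ro flags
```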

Anybody have any ideas? I followed these three tutorials to set it all up:

Thank you in advance for any help you can provide.

 

ETA: This is being done on a Raspberry Pi 4 running 64-bit Raspbian using an external SSD as storage.

I am following the instructions here: https://github.com/LemmyNet/lemmy-docs/blob/4249465e9960cad97245aa03b3ad4c758ff945c7/src/en/administration/install_docker.md

Please note that I have only used Docker a few times in the past and have always failed, so that could be a contributing factor.

My goal for the moment is just to have the instance on localhost so I can play around with it before deciding if running my own instance is something I have the time for.

Here are the steps I have taken so far and the result:

```
WARNING: The Qa variable is not set. Defaulting to a blank string.
WARNING: The k variable is not set. Defaulting to a blank string.
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
```

I imagine I'm doing a lot of things wrong. I would be extremely grateful for any help anybody can provide.
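The "Couldn't connect to Docker daemon" part makes me suspect the daemon simply isn't running, or that my user can't reach its socket, so this is what I plan to check first (standard Docker commands, nothing Lemmy-specific):

```bash
# Is the daemon running at all?
sudo systemctl status docker
sudo systemctl enable --now docker

# Let my user talk to the Docker socket without sudo (takes effect after re-login)
sudo usermod -aG docker $USER

# Sanity check
docker info
```

The "Qa"/"k" variable warnings look like docker-compose tripping over a stray `$` somewhere in the yaml or .env file (for example in a password), but that's just a guess.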

 

I have a Raspberry Pi that I want to be able to run Transmission on for torrenting purposes. I have Transmission installed.

I want to have openvpn running but only for Transmission and not touching the rest of the services. I have to access many of the other services on the Pi from the web and therefore cannot have the VPN interfering with that.

I have a ProtonVPN account and downloaded all of the openvpn UDP config files.

I would like to have the VPN running but split-tunneled so that only Transmission is covered by the VPN.

I have searched for guides that explain how to do this but so far none of them are adequate or go into enough detail.

Does anybody have a guide that can explain it all in detail, or know what files to edit and what to put in them?
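The closest thing I've pieced together myself is the mark-and-route approach below, but I haven't been able to verify it, so corrections are very welcome. It assumes the client config is told not to pull the redirect-gateway route (e.g. with route-nopull or pull-filter ignore "redirect-gateway") and that transmission-daemon runs as its own user, e.g. debian-transmission:

```bash
# Mark packets generated by the transmission user
sudo iptables -t mangle -A OUTPUT -m owner --uid-owner debian-transmission -j MARK --set-mark 0x1

# Marked packets get a routing table whose default route is the tunnel
sudo ip rule add fwmark 0x1 table 100
sudo ip route add default dev tun0 table 100

# Rewrite their source address to the tunnel's address
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

# Loose reverse-path filtering so replies arriving on tun0 aren't dropped
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
```

Everything else would keep using the normal default route, so the web-facing services shouldn't notice the VPN at all; whether this is actually the right way to do it is exactly what I'm hoping someone can confirm.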

Thank you in advance for any help you can provide.

None of this is using Docker.

 

I have Jellyfin on my Raspberry Pi. I usually access it over my local network, or via SSH tunneling when I'm away, but I want to be able to reach it directly over HTTPS from outside my local network.

I am following the instructions on Jellyfin's Networking page here: https://jellyfin.org/docs/general/networking/

At the step where I run this command

openssl pkcs12 -export -out jellyfin.pfx -inkey privkey.pem -in /usr/local/etc/letsencrypt/live/domain.org/cert.pem -passout pass:

I get this error

Can't open /usr/local/etc/letsencrypt/live/domain.org/cert.pem for reading, No such file or directory

Any idea what I'm doing wrong?
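My current guess (not yet confirmed) is that the path in the guide is a FreeBSD-style prefix; on Raspbian, certbot normally puts the files under /etc/letsencrypt/live/<domain>/ instead, so I was going to try something like this (domain.org is a placeholder):

```bash
# Check where the certificates actually live on this system
sudo ls -l /etc/letsencrypt/live/domain.org/

# Point both the key and the cert at the real paths
sudo openssl pkcs12 -export -out jellyfin.pfx \
  -inkey /etc/letsencrypt/live/domain.org/privkey.pem \
  -in /etc/letsencrypt/live/domain.org/cert.pem \
  -passout pass:
```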

Got it solved! For future people reading this, the solution is here: https://github.com/jellyfin/jellyfin/issues/6697#issuecomment-1086973795

Jellyfin's Networking guide is all wrong.

 

About 25% of my apps are from F-Droid. The rest are from Play Store. I want to use Aurora Store to install apps rather than Play Store.

I have installed Aurora Store. If I simply remove the Play Store from GrapheneOS, will that remove all of the Play Store apps at once, or do I need to remove them one by one first, then remove the Play Store, and then reinstall each one via Aurora Store?

 

for legal reasons this is a joke

 

I already use 2FA to SSH into Fedora with libpam/google-authenticator.

I also tried setting it up for GNOME desktop login. However, after logging out and returning to the login screen, I type in the 2FA code (which works fine via SSH) and it says it is incorrect. I have a feeling SELinux is interfering. Luckily I could SSH back in and revert the change.
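This is roughly what I set up and what I plan to check next; the paths are from memory, so treat them as approximate:

```bash
# /etc/pam.d/gdm-password got the same line that already works for sshd:
#   auth required pam_google_authenticator.so

# To test the SELinux theory: look for recent denials, then try permissive mode
sudo ausearch -m AVC -ts recent
sudo setenforce 0        # temporarily permissive; setenforce 1 to re-enable

# If a denial shows up against ~/.google_authenticator, resetting its context seems worth a try
restorecon -v ~/.google_authenticator
```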

Anybody have any experience with this?

 

title
