lal309

joined 2 years ago
[–] lal309 2 points 7 months ago

I don’t have an answer for you, but I have a question instead. When I attempted Docker Swarm, my biggest challenge was shared storage. I was attempting to run a swarm with shared storage on a NAS. I literally could not run apps and ran into a ton of problems running stacks (I tried both SMB and NFS for the NAS share). How did you get around this problem?
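For reference, one common way to point a Swarm stack at NAS storage is to declare an NFS-backed named volume in the stack file itself. This is just a sketch: the NAS address, export path, and NFS options below are placeholders and will vary by NAS.

```yaml
# Hypothetical stack fragment: an NFS export mounted as a named volume.
version: "3.8"
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/data
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"   # NAS IP and options are placeholders
      device: ":/export/appdata"            # export path is a placeholder
```

Because the volume definition travels with the stack file, every node that schedules the service mounts the same export, which sidesteps the "volume only exists on one node" problem.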

[–] lal309 2 points 7 months ago (1 children)

My apologies, I meant to say bacterial wilt, not blight.

[–] lal309 2 points 7 months ago* (last edited 7 months ago) (3 children)

So I’ve been just searching on Google and some of the articles I’m finding suggest that my plant has blight. Is this way off base?

[–] lal309 2 points 7 months ago (5 children)

So I looked around the plant and under leaves and I don’t see any green or white bugs. I see some spider webs here and there but I don’t think that’s harmful.

[–] lal309 5 points 7 months ago

I will check this tonight but I believe they remain the same even after the sun has gone down.

[–] lal309 8 points 7 months ago (3 children)

For the last few days, 100 degrees. Full sun (8am - 6pm). As they were getting established (it’s a new raised bed), I watered daily. Lately I’ve been trying to water every other day depending on how the soil on top looks. I added some compost about a month or so ago. As far as the little bugs, I don’t think I have seen them but I also haven’t been looking.

[–] lal309 6 points 7 months ago

Here’s a picture of the cherry tomatoes. https://lemmy.world/pictrs/image/e8270aa8-3589-45d1-9966-8e07fef6b3f3.jpeg

[–] lal309 2 points 7 months ago

Oh I didn’t know it saves settings to the headset itself. That would come in handy.

[–] lal309 1 points 7 months ago (1 children)

Are there wireless versions of the ModMic? I only saw wired ones, which would kind of defeat my purpose of cutting the cord with my current setup.

[–] lal309 1 points 7 months ago (3 children)

I was looking into these, but the reviews mention software a lot, as in you must use the software to ensure good audio quality. Is this true? How long have you had them?

I was also looking into the Logitech Astro A30.

[–] lal309 4 points 7 months ago (1 children)

I don’t think it’s necessarily true anymore. Perhaps at one point in time but generally speaking, this isn’t the case anymore.

[–] lal309 6 points 7 months ago

I’ve never heard of modmic. Will look into this. Thanks!

40
submitted 1 year ago* (last edited 1 year ago) by lal309 to c/3dprinting
 

Typically I would just buy whatever brand had the cheapest white PLA (I like to paint my prints), and quality wasn’t always top of mind. Now I have several prints that I want to do in all kinds of different colors, and quality matters. Given the new color and quality requirements, it no longer makes sense to just get the cheapest.

What brands are of good to excellent quality and also offer a decent range of colors?

I mostly run my prints through Ender 3 Pros.

Edit: Thank you for everyone’s suggestions! Certain brands are being recommended often so I’m going to start experimenting with those! Keep being awesome!

81
Alternative to ClamAV? (self.selfhosted)
submitted 1 year ago by lal309 to c/selfhosted
 

TL;DR - What are you running as a means of “antivirus” on Linux servers?

I have a few small Debian 12 servers running my services and would like to enhance my security posture. Some services are exposed to the internet, and I’ve done quite a few things to protect both the services and the hosts. When it comes to “antivirus”, I was looking at ClamAV, as it seemed to be the most recommended. However, when I read the documentation, it stated that the recommended RAM was at least 2-4 gigs. Some of my servers have more power than others, but some do not meet this requirement. The lower-powered hosts are rpi3s and some Lenovo tinys.

When I searched for alternatives, I came across rkhunter and chkrootkit, but they seem to no longer be maintained, as their latest releases were several years ago.

If possible, I’d like to run the same software across all my servers for simplicity and uniformity.

If you have a similar setup, what are you running? Any other recommendations?

P.S. if you are of the mindset that Linux doesn’t need this kind of protection then fine, that’s your belief, not mine. So please just skip this post.

27
Photoprism rebuild issues (self.selfhosted)
submitted 1 year ago by lal309 to c/selfhosted
 

TL;DR - had to rebuild my PhotoPrism database and now my originals count is off by ~4,000. Can I do a full sync of my devices and have it only upload what is missing?

Hello gurus,

I’ve been running Photoprism for quite some time and I’m happy with it.

I ran into an unrelated issue with my database (MariaDB) and had to rebuild it. PhotoPrism uses this instance of MariaDB, so naturally the metadata was gone.

The original pictures (originals) were stored in a separate array, so at a minimum I still have all my pictures. I rebuilt the database and PhotoPrism (docker container) and pointed it at the array for the originals. Once that was done, I logged in to the PhotoPrism UI and performed a complete rescan and index of my originals. When it finished, I noticed that my originals count was 27,000, but I should have 31,000 objects (according to a picture of the PhotoPrism UI I took the night before rebuilding the database). So I started digging a bit.

  • The array itself (where my originals are stored) is showing 27,000 objects.

  • The pictures I took the night before rebuilding the database and PhotoPrism containers said that the count of originals was ~31,000.

  • The two main devices backing up media to PhotoPrism are my phone and my wife’s phone. My phone shows ~4,500 and my wife’s shows ~26,500.

  • Since these two phones were fully backed up a few weeks before the rebuild, I should have ~31,000 objects in the originals.

My question is: can I redo a full backup sync of both phones (through PhotoSync) and have it only copy the objects that are not in the originals?

Since the database had to be rebuilt, I fear that if I do another full sync it will just copy everything again, and I’ll end up with ~60,000 objects rather than the ~31,000 I should have.

What can I do to see which objects are missing between my devices and PhotoPrism and how can I only copy those over to PhotoPrism?
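One rough way to see the gap, assuming PhotoSync preserves filenames, is to diff sorted filename lists between a phone export and the originals directory. The paths below are placeholders, and the demo data only exists to make the snippet runnable as-is; point the two variables at your real mounts.

```shell
# Placeholder paths -- replace with the real PhotoSync export and
# PhotoPrism originals mounts.
ORIGINALS=/tmp/demo/originals
PHONE=/tmp/demo/phone

# Demo data so this sketch runs standalone (delete for real use).
mkdir -p "$ORIGINALS" "$PHONE"
touch "$ORIGINALS/a.jpg" "$PHONE/a.jpg" "$PHONE/b.jpg"

# Build sorted filename lists for each side.
find "$ORIGINALS" -type f -printf '%f\n' | sort > /tmp/server.txt
find "$PHONE"     -type f -printf '%f\n' | sort > /tmp/phone.txt

# comm -23 keeps lines only in the first file: objects on the phone
# that are missing from the server.
comm -23 /tmp/phone.txt /tmp/server.txt > /tmp/missing.txt
cat /tmp/missing.txt
```

Files listed in `missing.txt` are the ones to copy over; everything else is already in the originals, so a full re-sync of those would only create duplicates.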

33
Unraid to Backblaze (self.selfhosted)
submitted 1 year ago by lal309 to c/selfhosted
 

For those of you running Unraid and backing up your data to Backblaze, how are you doing it?

I’ve been playing a bit with KopiaUI, but what is the easiest and most straightforward way?

Bonus points if I can have the same “client/software/utility” back up data from non-servers (Windows, macOS, and Linux) on my network to said Unraid server.

I don’t want to complicate the setup with a bunch of dependencies and things that would make the recovery process long and tedious or require in-depth knowledge to use it for the recovery process.

To recap:

Workstations/laptop > back up data with what? > Unraid

Unraid > back up data with what? > Backblaze

 

Well, I’m hopping around… again. I thought I had a good stable setup going, but then something happened upstream that goes against what I want/believe in (looking at you, Red Hat) and I’m back on the hunt again.

I thought about trying out a Debian based distro but then I thought “why don’t I just use Debian itself (Sid, not stable/Bookworm)”.

Most if not all gaming software has a way to be installed on Debian, so I don’t think that should be an issue.

Is anyone else using Sid? Am I missing something by not going with a gaming focused distro??

 

I’m wondering about your experience with it. Good, bad and ugly.

 

Basically, I was able to play Cyberpunk on my Nobara setup (N37, KDE, Nvidia 1080 Ti, Intel i5 CPU), but everything went sideways when I upgraded from kernel version 6.2.14-300 to 6.3.5-201 (and I still have issues despite several upgrades since then; I’m now on 6.3.12, I think).

If I switched back to 6.2.14-300 I was able to play, but since there have been a few upgrades since then, I no longer have 6.2.14-300 as an option at startup.

The game launches but stays on a black screen: no audio, no visuals, just the cursor. I’ve reinstalled the game and nothing. Switched from X11 to Wayland and nothing.

I can’t remember which Nvidia driver version I have right now, but it’s the latest in the Nobara repos.

Game was installed through Lutris and worked great prior to the upgrade.

 

Basically the title. I’m excited and grateful to everyone that contributed to this new iteration. Can’t wait to see the results. What do you think?

1
Are we stuck in the stoneage? (self.securityarchitecture)
 

As I work to get templates created (documents, models, visuals, etc.) in Word, Excel, Visio, and SharePoint, I’m thinking to myself, “Why can’t we have something a bit more modern to do our daily work?”

Technology has advanced so much, but it seems like architecture is ages behind, with no clear path to modernize away from Word documents, spreadsheets, Visio, and manual data analysis. I understand that it could be worse (physical paper), but I’m wondering why we continue to work this way. Is there something better out there? Some web application for form-like data capture, models, reports, data mining, etc.?

15
Defeated by NGINX (self.selfhosted)
submitted 2 years ago* (last edited 2 years ago) by lal309 to c/selfhosted
 

Heads up! Long post and lots of head bashing against the wall.

Context:

I have written a Python app (Django). I have dockerized the deployment, and the compose file has three containers: app, nginx, and postgres. I'm currently trying to deploy a demo of it on a VPS running Debian 11. Information below has been redacted (IPs, domain name, etc.).

Problem:

I keep running into 502 errors. Locally things work very well, even with nginx (but running on port 80). As I deploy this, I'm trying to configure nginx the best I can, redirecting HTTP traffic to HTTPS and setting up SSL certs. The nginx logs simply say "connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: demo.example.com, request: "GET / HTTP/1.1", upstream: "http://192.168.0.2:8020/", host: "demo.example.com"". I have tried just about everything.

What I've tried:

  • Adding my server block configs to /etc/nginx/conf.d/default.conf
  • Adding my server block configs to a new file in /etc/nginx/conf.d/app.conf and leaving default at out of box config.
  • Tried putting the above config (default.conf and app.conf) in sites-available (/etc/nginx/sites-available/* not at the same time tho).
  • Recreated /etc/nginx/nginx.conf by copy/pasting out of box nginx.conf and then adding server blocks directly in nginx.conf
  • Running nginx -t inside of the nginx container (Syntax and config were "successful")
  • Running nginx -T when recreated /etc/nginx/nginx.conf
    • nginx -T when the server blocks were in /etc/nginx/conf.d/* led me to think that, since there were two listen 80 server blocks, I should ensure only one listen 80 block was being read by the container, hence the recreated /etc/nginx/nginx.conf from above
  • Restarted container each time a change was made.
  • Changed the user block from nginx (no dice when using nginx as user) to www-data, root and nobody
  • Deleted my entire docker data and redeployed everything a few times.
  • Double checked the upstream block 1,000 times
  • Confirmed the upstream block's container is running and on the right exposed port
  • Checked access.log and error.log, but they were both empty (not sure why; tried cat and tail)
  • Probably forgetting more stuff (6 hours deep in the same error loop by now)

How can you help:

Please take a look at the nginx.conf config below and see if you guys can spot a problem, PLEASE! This is my current /etc/nginx/nginx.conf

```nginx
user www-data;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    upstream djangoapp {
        server app:8020;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name demo.example.com;

        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name demo.example.com;

        ssl_certificate /etc/letsencrypt/live/demo.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/demo.example.com/privkey.pem;
        #ssl_protocols TLSv1.2 TLSv1.3;
        #ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://djangoapp;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection keep-alive;
            proxy_redirect off;
        }

        location /static/ {
            autoindex on;
            alias /static/;
        }
    }
}
```

  • EDIT: I have also confirmed that both containers are connected to the same docker network (docker network inspect frontend)

  • EDIT 2: Solved my problem. See my comments to @chaospatterns. TL;DR: there was an uncaught exception in the app, but it didn’t crash the container. Had to dig deep into the logs to find it.
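For anyone else hitting the same "111: Connection refused": the core question is whether anything is actually listening on the upstream host:port, since nginx's connect() fails before any HTTP happens and surfaces as a 502. A bash-only probe sketch; the hostname `app` and port 8020 come from my compose file, and you would run this from inside the nginx container so Docker's DNS resolves the service name.

```shell
# Probe a TCP endpoint the way nginx's connect() would.
# "down" means nginx would return 502 for requests proxied there.
check() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null \
    && echo up || echo down
}

# Service name and port are from the compose file; prints "up" or "down".
check app 8020
```

If this prints "down" while the container is running, the app process itself is the problem (crashed worker, bound to 127.0.0.1 instead of 0.0.0.0, or, as in my case, an uncaught exception), and the container logs are the next place to look.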

2
Threat Modeling (self.securityarchitecture)
 

Is anyone using threat modeling as a means of continuous architecture? Meaning, you have a threat model for the entire organization and you periodically review it to ensure your current architecture can handle emerging and changing threats.

2
Happy 4th of July! (self.securityarchitecture)
 

I hope everyone has an amazing 4th of July celebration and that everyone keeps their 10 fingers intact!
