talkingpumpkin

joined 1 year ago
[–] talkingpumpkin 1 points 2 weeks ago

For those kind of issues I'd recommend snapshots instead of backups

[–] talkingpumpkin 1 points 4 weeks ago

Syncthing or unison might be what you want

 

Prometheus-alertmanager and Grafana (especially Grafana!) seem a bit too involved for monitoring my homelab (Prometheus itself is fine: it does collect a lot of statistics I don't care about, but it doesn't require configuration, so it doesn't bother me).

Do you know of simpler alternatives?

My goals are relatively simple:

  1. get a notification when any systemd service fails
  2. get a notification if there is not much space left on a disk
  3. get a notification if one of the above can't be determined (eg. server down, config error, ...)

Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.

I am a dev, so writing a script that checks for whatever I need is way simpler than learning/writing/testing YAML configuration (in fact, I was about to write a script to send heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution).

[–] talkingpumpkin 0 points 1 month ago* (last edited 1 month ago)

I've not looked into Zig yet (just like you must not have looked very closely into Rust, since most of the stuff you mention as a Zig highlight is present in Rust too), so I'm not gonna argue which one may be "better" (if I had to bet, I'd say both are great in their own way - OP's question is IMHO kinda dumb to begin with).

I want, however, to point out a misconception: "unsafe" Rust code is not code that is bugged or in any way inferior to "regular" code; it's just code (usually small pieces) whose safety is checked and guaranteed by the developer rather than the compiler. Yeah, they really should have picked a different name for it... possibly one that isn't literally the opposite of "safe".

[–] talkingpumpkin 9 points 1 month ago (1 children)

Your system will appeal to the intersection between people who like gambling and people who like donating to charities.

Even among them, I don't see why anyone would prefer putting $100 in your web3 thingie instead of just donating $50, gambling with $45, and buying a beer with the $5 they would lose to you... well, there are a lot of ~~stupid~~ peculiar people (especially among crypto bros), so you might actually be ok.

About the implementation, the 50% to charities should be transferred automatically... what's the point of a smart contract if people must trust you to "check the total donations and create a donation on The Giving Block"?

PS:

IDK about the US, but where I live gambling is regulated very strictly: make sure to double check with a lawyer before getting into trouble.

[–] talkingpumpkin 3 points 2 months ago

2 more cents :)

I've been using syncthing for a while now, on different devices, and the only unreliability I've run into is with Android killing syncthing to save battery life, which is kinda hilarious, considering all the vendor- and Google-provided crap they happily waste battery on (I don't use it, but from what I've heard iOS is even worse in this regard).

Specifically, I have a Samsung tablet where, no matter how much I tinkered with system settings, syncthing would only run if I manually launched the app or while the tablet was charging (BTW I still use that same tablet, but it now runs LineageOS and syncthing works flawlessly).

All this is to say, you should probably look into system settings and research ways to convince your OS to do what it's supposed to rather than tinkering with syncthing itself.

[–] talkingpumpkin 6 points 2 months ago* (last edited 2 months ago) (2 children)

I don't see the ethical implications of sharing that. What would happen if you did disclose your discoveries/techniques?

I don't know much about LLMs, but doesn't removing these safeguards just make the model as a whole less useful?

[–] talkingpumpkin 1 points 2 months ago

I fear it was nothing that entertaining: it was just my "normal" dark panel at the top of the screen and a second "default" white one at the bottom (this last one partially covered the windows I had open). I didn't try triggering notifications or otherwise causing some kind of mayhem.

[–] talkingpumpkin 2 points 2 months ago (1 children)

I'm just messing around with testing/configuring different desktop environments/window managers and I'm looking for a quick way to preview them (running the new session as my user would be fine too - I just thought it would be simpler as a different user)

[–] talkingpumpkin 6 points 2 months ago (3 children)

Wow, that's so neat!

On my machine it opens a fullscreen Plasma splash and then it shows the new session intermixed/overlayed with my current one instead of in a new window... basically, it's a mess :D

If I may abuse your patience:

  • what distro/plasma version are you running? (here it's opensuse slowroll w/ plasma 6.1.4)
  • what happens if you just run startplasma-wayland from a terminal as your user? (I see the plasma splash screen and then I'm back to my old session)
 

I'm not very hopeful, but... just in case :)

I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).

Do you know of some software that lets me do it?

Can I somehow run a KVM using my host disk as the disk for the guest VM (and without breaking stuff)?
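On the first question: one known approach for X11 sessions is to run a nested X server with Xephyr and launch the second session inside its window (a sketch, not a tested recipe: `otheruser` and the geometry are placeholders, and `-ac` disables access control so the other user can connect - fine for local testing, not for shared machines). A Wayland session would need a nested compositor instead.

```shell
# Nested X11 display in a window of the current session (assumes Xephyr installed):
Xephyr -br -ac -screen 1280x800 :2 &

# Start a full graphical session as another user inside display :2
# (startplasma-x11 here; other sessions' startup commands should work the same way)
sudo -u otheruser env DISPLAY=:2 dbus-run-session startplasma-x11
```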

[–] talkingpumpkin 4 points 2 months ago

Read this, delete this post and try again.

[–] talkingpumpkin 1 points 2 months ago

Yes, XML is different from JSON and YAML, but it's not particularly easier or harder to manually read/edit than JSON or YAML are (IMO they are all a pain, each in its own way).

If you want to look at it from the programmer's side (which is not what OP was talking about)... marshalling/unmarshalling has been a solved issue for at least 20 years now :) just have a library do it for you (do you map JSON/YAML properties to your objects manually?).

You don't need to worry about attributes/child elements: <person name="jack" /> and <person><name>jack</name></person> will work the same (ok, this may depend on what language/library you pick - the lib I used back in the day worked either way).

If anything, the issue with XML is all the unnecessarily complicated stuff they added to its "core" (eg. CDATA, namespaces, non-standalone documents, ...) and all the unnecessarily complicated technologies/standards they developed around XML (from XInclude to SOAP and many others)... but just ignore that BS (like the rest of the world does) and you'll mostly be fine :)

[–] talkingpumpkin 0 points 2 months ago (3 children)

YAML is fundamentally the same as the JSON and XML it has mostly replaced (and the TOML that didn't manage to replace YAML)... it's a data serialization format and just doesn't have any facility for making abstractions, which are the main tool we humans use to deal with complexity.

32
submitted 3 months ago* (last edited 3 months ago) by talkingpumpkin to c/selfhosted
 

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

PC:                        192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24

From my PC I can connect to .11.102, but not to .10.102:

ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an ip on the .10 subnet, the previously failing ping works fine.

PC:                        192.168.11.101/24
Server: 192.168.10.102/24

From my PC:

ping -c 10 192.168.10.102 # now works fine

This is baffling to me... any idea why it might be?


Here's some additional information:

  • The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).

  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).

  • The router is connected to the aforementioned switch and set to forward packets between the two subnets (I'm pretty sure I've configured it so, plus IIUC the scenario 2 ping wouldn't work without forwarding).

  • The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.

  • In case it may matter, machine 1 has the following routes, setup by networkmanager from dhcp:

default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.101 metric 410
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  • In case it may matter, Machine 2 uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, of course the relevant routes disappear):
default via 192.168.10.1 dev eth0 proto dhcp              src 192.168.10.102 metric 100
192.168.10.0/24          dev eth0 proto kernel scope link src 192.168.10.102 metric 100
192.168.10.1             dev eth0 proto dhcp   scope link src 192.168.10.102 metric 100
default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.102 metric 101
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.102 metric 101
192.168.11.1             dev eth1 proto dhcp   scope link src 192.168.11.102 metric 101

solution

(please do comment if something here is wrong or needs clarifications - hopefully someone will find this discussion in the future and find it useful)

In scenario 1, packets from the PC to the server are routed through .11.1.

Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.

Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.

The solution to this (if one still thinks the whole thing is a good idea) is to route traffic originating from the server and directed to .11/24 via the router.

This could be accomplished with ip route del 192.168.11.0/24, which would however break connectivity with .11/24 addresses (for a similar reason as above: incoming traffic would not be routed but replies would)...

The more general solution (which, IDK, may still have drawbacks?) is to setup a secondary routing table:

echo 50 mytable >> /etc/iproute2/rt_tables # this defines the routing table
                                           # (see "ip rule" and "ip route show table <table>")
ip rule add from 192.168.10/24 iif lo table mytable priority 1 # "iif lo" selects only 
                                                               # packets originating
                                                               # from the machine itself
ip route add default via 192.168.10.1 dev eth0 table mytable # "dev eth0" is the interface
                                                             # with the .10/24 address,
                                                             # and might be superfluous

Now, in my mind, that should break connectivity with .10/24 addresses just like ip route del above, but in practice it doesn't seem to (if I remember, I'll come back and explain why after studying some more).
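For anyone reproducing this, a few hypothetical sanity checks (addresses as in this post) to confirm the rule and table are actually being consulted:

```shell
# The new rule should show up at priority 1, ahead of the main table:
ip rule show

# The secondary table should contain only the default route added above:
ip route show table mytable

# Ask the kernel which route it would pick for locally-originated traffic
# from the .10 address to the PC - it should go via 192.168.10.1:
ip route get 192.168.11.101 from 192.168.10.102
```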

 

I want to have a local mirror/proxy for some repos I'm using.

The idea is having something I can point my reads to, so that I'm free to migrate my upstream repositories whenever I want and also so that my stuff doesn't stop working if some of the jankier third-party repos I use disappear.

I know the various Forgejo/Gitea/GitLab/... (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update, and which then allows anonymous read access over the network.

Does anything come to mind?
