netvor

joined 2 years ago
[–] netvor 1 points 1 week ago (1 children)

In this scenario, alcohol is less bad than soda

If by "scenario" you mean you only want to observe single parameter, then fine, but that's not really useful.

Alcohol is much worse than soda.

[–] netvor 1 points 1 week ago

the risk of data loss at that point is high. you will miss things

that's what makes it exciting 😓 😓 😓

[–] netvor 1 points 1 week ago

Why would it be "practical" to do it during the conversion?

They could just go to the toilet like normal people (before or after).

I mean, I don't plan to eat anymore until tomorrow, therefore it's practical that I shit myself now?

[–] netvor 2 points 1 week ago

I used to love Sailfish OS.

I guess I still do, but the problem is that while they recently expanded the number of devices they support, for some of them the "support" is just not what you'd think. Eg. I got an Xperia 10 V just for SFOS, but even though the device is listed as supported on their main list, it turns out that the camera, Android support and the fingerprint sensor don't work. To be fair, this info was possible to find on their forums, and I did not have to pay for SFOS (they offer a 6-month trial), so it's not like they have anything to gain from communicating this badly, but it is what it is.

So in case you want to try it, just really make sure you know to what extent your device is supported.

[–] netvor 2 points 1 week ago

OpenTTD player

It's nice when people guess which AI I used to generate my avatar.

[–] netvor 1 points 1 week ago (2 children)

...well, technically, yes.

If you are well-versed in the guts of the distro (grub, /etc/fstab, /etc/crypttab...), and have extra space, you could spend part of your weekend shifting partitions around and moving everything to the encrypted side, and eventually re-configuring your install and removing the old part. (Oh and don't forget to chown your /home data if you have multiple users.) I've been there, it's not fun. It's fun[tm]. It's just far easier and less error prone to re-install if you can.

(Yeah, I'm stretching the definition of "enabling it" reeeally thin here... 🙃 )
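If someone really wants to go down that path anyway, the rough shape of it is something like this -- just a sketch, the device names are made up, and the copying is much safer done from a live USB:

cryptsetup luksFormat /dev/sdb1        # format the spare partition as a LUKS container
cryptsetup open /dev/sdb1 cryptroot    # unlock it as /dev/mapper/cryptroot
mkfs.ext4 /dev/mapper/cryptroot
mount /dev/mapper/cryptroot /mnt
rsync -aHAXx / /mnt/                   # copy the system; -x skips /proc, /sys, /dev and /mnt itself
# then: add the mapping to /etc/crypttab, point /etc/fstab at the new UUIDs,
# rebuild the initramfs (update-initramfs -u) and run update-grub before rebooting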

[–] netvor 6 points 1 week ago

It's much worse: They can re-use the same wrench.

(Disgusting, I know... 😝 )

[–] netvor 2 points 1 week ago (1 children)

Great point.

I provided reasons why I encrypted my drives but this one is even better.

(Another one could be if you need to get your computer to a repair shop, and for some reason you can't just remove the drive.)

[–] netvor 1 points 1 week ago

TBH even the way you phrased your question kind of proves it's orthogonal. Yes, you can have the full matrix:

encrypted | backed up
----------|----------
       no |        no
       no |       yes
      yes |        no
      yes |       yes

In each case, you have a different set of problems.

  • Encrypting a particular medium only means that it's going to be harder to gain access to the data on that medium (harder for everyone, but trillions of times less hard for someone who knows the password).
    • That's regardless of whether you also have a backup.
  • Backing up just means that a copy of the data exists somewhere else.
    • That's regardless of whether this or the other copy is encrypted.

Sure, eventually, the nature of your data's safety will be affected by both.

Disclaimer: I'm by no means a security expert, don't take what I write here as advice!

Eg. I encrypt my disks. When I do, I basically encrypt everything, ie. all partitions (except /boot). Then on those partitions, most of the data is not worth backing up since it's either temporary or can be easily obtained anyway (system files). Well, some of the data is backed up, and some of that even ends up on disks that are not encrypted (scary, I know!) :)

To be fair, just encrypting the disks does not solve everything. If someone broke into my house, they would almost certainly find my computer on, which means the disks are effectively not encrypted (technically they still are, it's just that LUKS exposes the decrypted view as well while the system is running). So the barrier they would have to face would basically be just the desktop lock.
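If you want to see this on a running box, a couple of commands are enough (output omitted here, and "cryptroot" is just an example mapper name):

lsblk -f                          # shows the LUKS container and the dm-crypt mapper stacked on top of it
sudo cryptsetup status cryptroot  # reports the mapping as active while it's unlocked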

For that reason I don't encrypt the hard drives on my remote server either: the server is always running in a virtual environment, so by definition anyone who's maintaining the hardware can already read files from the decrypted drives, ie. I think it would be pointless.

[–] netvor 2 points 1 week ago

mary beach rodeo

thank you for sharing your password 😜

[–] netvor 2 points 1 week ago (2 children)

Don't you mean LUKS with LVM on top? (That's what I use, I'm not sure LVM alone even supports encryption..)
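To be explicit about what I mean by "on top": this is roughly the stack that e.g. the Debian installer's guided "encrypted LVM" option creates (sketched as commands, device names are just examples):

cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptlvm     # the unlocked container shows up as /dev/mapper/cryptlvm
pvcreate /dev/mapper/cryptlvm          # LVM lives entirely inside the encrypted container
vgcreate vg0 /dev/mapper/cryptlvm
lvcreate -n root -L 50G vg0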

 

Look, I've been a Debian user for 15 years, I've worked in F/OSS for a long time, I can take care of myself.

But I'm always on the lookout for distros that might be a good fit for other people in my non-tech vicinity, like siblings, nieces, nephews... I'm imagining some distro which is easy for gaming but can also be used for normal school, work, etc. related stuff. And yeah, also not too painful to maintain.

(Well, less painful than Windows, which honestly is not a high bar nowadays... but don't listen to me, all I tried in past years was to install Minecraft from the MS store... The wound is still healing.)

I have a Steam Deck and I like how it works: gaming first, desktop easily accessible. But I only really use it for gaming.

So I learned about Bazzite, but the description on their main site doesn't make me much wiser:

The next generation of Linux gaming [Powered by Fedora and Universal Blue] Bazzite is a cloud native image built upon Fedora Atomic Desktops that brings the best of Linux gaming to all of your devices - including your favorite handheld.

Filtering out the buzzwords, "cloud native image" stands out to me, but that's weird; doesn't it mean that I'll be running my system on someone else's computer?

Funnily enough, I scrolled a bit and there's a news section with a perfectly titled article: "WTF is Cloud Native and what is all this".

But that just leads to an announcement by someone (apparently important in the community) talking about some superb community milestone and being funny about his dog. To be fair, despite the title, the announcement is not directed towards people like me; it's aimed more at the community, who obviously already know.

Amongst the cruft, the most "relevant" part seems to be this:

This is the simplest definition of cloud native: One common way to linux, based around container technology. Server on any cloud provider, bare metal, a desktop, an HTPC, a handheld, and your gaming rig. It's all the same thing, Linux.

But wait, all I want to run is a "normal" PC with a Linux distro. I don't necessarily need it to be a "traditional" distro, but what I don't want is to have it running in, or heavily integrated with, some proprietary-ish cloud.

So how does this work? Am I missing something?

(Or are my red flags real: that all of this is just to make a lot of promises and get some VC-funding?)

50
submitted 3 weeks ago* (last edited 3 weeks ago) by netvor to c/nostupidquestions
 

These things are nothing new. The first time I saw them was on Medium.com, if I remember correctly.

Honestly, I never understood why they were useful in the first place. Why would it even matter how long I spend reading things? And how would such a guess even make sense in the first place? I mean, define "reading" -- is it just skimming the text with your eyes and not even thinking about it? Or somehow thinking at the exact same rate & speed for all parts of the article, from the intro to any novel ideas to unclear parts to the conclusion?
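As far as I can tell, the estimate is usually nothing more than a word count divided by some assumed reading speed -- something like this (the numbers are my guess, not any platform's actual formula):

words=$(wc -w < article.txt)
echo "$(( (words + 199) / 200 )) min read"   # assume ~200 words per minute, round up

...which already bakes in the assumption that everyone reads every part of the text at the same constant speed.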

Also, doesn't putting a "minute price tag" on a body of text kind of devalue it?

Disclaimer: I'm probably heavily biased here; all I can think of is some sort of pseudo book nerd who wants to be as "efficient" as possible at reading as many things as possible, with no pauses for thinking. But surely there has to be a real, serious reason why these guesstimates are ever actually useful?

(A more honest disclaimer: I actually find them distracting, to say the least. I am prone to problems with managing focus, as well as expectations, so sometimes when I open an article with curiosity, having this thing whisper in my ear "you must spend about 14 minutes here and then go away" is not helping. On bad days it sort of hurts, even if I know it's BS.)

Again, this is not anything new, but I've been wondering about it recently, since I feel I've been seeing them pop up more and more, even in places where they make no sense (like programmers' guides or API references). This suggests to me that they are getting built into publishing platforms, so it's now more a matter of turning them off than of deliberately including them.

What's the deal?

4
submitted 4 months ago* (last edited 4 months ago) by netvor to c/debian
 

I'm trying to "build" (see below) a package for another architecture. I made it through most of the steps (frankly, by disabling them).

Long story short, I end up running something like this:

debuild -us -uc --host-type riscv64-linux-gnu -d -C/dev/null

but debuild keeps failing on this line:

[...]
make[1]: Leaving directory '/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0'
   dh_shlibdeps -a
   dh_installdeb
   dh_gencontrol
   dh_md5sums
   dh_builddeb
dpkg-deb: building package 'zigdev' in '../zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_riscv64.deb'.
 dpkg-genbuildinfo -O../zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_riscv64.buildinfo
 dpkg-genchanges -C/dev/null -O../zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_riscv64.changes
dpkg-genchanges: info: including full source code in upload
 dpkg-source --after-build .
dpkg-buildpackage: info: full upload (original source is included)
debuild: fatal error at line 1062:
can't open zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_amd64.changes for reading: No such file or directory

So the *_amd64.changes file does not exist, but *_riscv64.changes does:

zigdev-0.0.0+t20240906145412.egg.gbc271d0
zigdev_0.0.0+t20240906145412.egg.gbc271d0-1.debian.tar.xz
zigdev_0.0.0+t20240906145412.egg.gbc271d0-1.dsc
zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_amd64.build
zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_riscv64.buildinfo
zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_riscv64.changes
zigdev_0.0.0+t20240906145412.egg.gbc271d0-1_riscv64.deb
zigdev_0.0.0+t20240906145412.egg.gbc271d0.orig.tar.gz

Building for the amd64 architecture finishes correctly: *_amd64.changes exists and is used.

First, do I really need this .changes file? (I'm not planning to upload this to the Debian archive.) And if so, how do I make debuild use the correct file?
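One thing I still want to try, as a pure guess: if I read the debuild man page right, it derives the .changes filename from dpkg-buildpackage-style options such as -a, so maybe passing the host architecture in that short form gets picked up where --host-type apparently isn't:

debuild -us -uc -ariscv64 -d -C/dev/null   # untested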

The environment (when calling env inside rules file) looks like this:

ASFLAGS=
CFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong -Wformat -Werror=format-security
CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2
CXXFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong -Wformat -Werror=format-security
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/11111/bus
DEBEMAIL=Me <[email protected]>
DEB_BUILD_ARCH=amd64
DEB_BUILD_ARCH_ABI=base
DEB_BUILD_ARCH_BITS=64
DEB_BUILD_ARCH_CPU=amd64
DEB_BUILD_ARCH_ENDIAN=little
DEB_BUILD_ARCH_LIBC=gnu
DEB_BUILD_ARCH_OS=linux
DEB_BUILD_GNU_CPU=x86_64
DEB_BUILD_GNU_SYSTEM=linux-gnu
DEB_BUILD_GNU_TYPE=x86_64-linux-gnu
DEB_BUILD_MULTIARCH=x86_64-linux-gnu
DEB_BUILD_OPTIONS=notest parallel=8
DEB_HOST_ARCH=riscv64
DEB_HOST_ARCH_ABI=base
DEB_HOST_ARCH_BITS=64
DEB_HOST_ARCH_CPU=riscv64
DEB_HOST_ARCH_ENDIAN=little
DEB_HOST_ARCH_LIBC=gnu
DEB_HOST_ARCH_OS=linux
DEB_HOST_GNU_CPU=riscv64
DEB_HOST_GNU_SYSTEM=linux-gnu
DEB_HOST_GNU_TYPE=riscv64-linux-gnu
DEB_HOST_MULTIARCH=riscv64-linux-gnu
DEB_RULES_REQUIRES_ROOT=binary-targets
DEB_TARGET_ARCH=riscv64
DEB_TARGET_ARCH_ABI=base
DEB_TARGET_ARCH_BITS=64
DEB_TARGET_ARCH_CPU=riscv64
DEB_TARGET_ARCH_ENDIAN=little
DEB_TARGET_ARCH_LIBC=gnu
DEB_TARGET_ARCH_OS=linux
DEB_TARGET_GNU_CPU=riscv64
DEB_TARGET_GNU_SYSTEM=linux-gnu
DEB_TARGET_GNU_TYPE=riscv64-linux-gnu
DEB_TARGET_MULTIARCH=riscv64-linux-gnu
DFLAGS=-frelease
DH_INTERNAL_BUILDFLAGS=1
DH_INTERNAL_OPTIONS=
DH_INTERNAL_OVERRIDE=dh_auto_install
FAKED_MODE=unknown-is-root
FAKEROOTKEY=2071757222
FCFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong
FFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong
GCJFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong
GPG_AGENT_INFO=/run/user/11111/gnupg/S.gpg-agent:0:1
HOME=/home/netvor
LANG=en_US.UTF-8
LC_COLLATE=C
LDFLAGS=-Wl,-z,relro
LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/libfakeroot:/usr/lib64/libfakeroot:/usr/lib32/libfakeroot
LD_PRELOAD=libfakeroot-sysv.so
LOGNAME=netvor
MAKEFLAGS=w
MAKELEVEL=2
MFLAGS=-w
OBJCFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong -Wformat -Werror=format-security
OBJCXXFLAGS=-g -O2 -ffile-prefix-map=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0=. -fstack-protector-strong -Wformat -Werror=format-security
PATH=/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11
PREFIX=/usr
PWD=/home/netvor/.cache/mkittool/debstuff/build/zigdev-0.0.0+t20240906145412.egg.gbc271d0
SOURCE_DATE_EPOCH=1456533483
TERM=rxvt-unicode
ZIGDEV_ZIG_VERSION=0.13.0
ZIGSITE=/opt/zigdev
ZIGSITE_PREP=debian/tmp/opt/zigdev

Open "spoiler" below to read more about my goals. Since the fact I don't actually want to build Zig properly here might confuse and annoy people, I wrote my reasoning below.

Project overview

First and foremost, I want to learn more and become more familiar with the Debian build system, as well as with Zig and system-level programming.

How I want to do it is to start creating zig-based binary packages for personal/experimental use. Now, I already have a pipeline and tooling ecosystem which I use for Python and Bash packages: my system is DEB-centric and handles the package lifecycle from git repo to APT (or DNF, really) repository, and I prefer it when any new project can be immediately built and deployed as a .deb.

So now I want to add Zig support. But that means my Zig-based projects will need something to put in Build-Depends, and since Zig does not officially provide an APT repo, I want to create my own -- this is what I'm focusing on right now.

So I'm creating this hacky package called zigdev whose only purpose will be to exist in my internal APT repos and deploy /opt/zigdev/zig to my test machines. One day this package can easily be replaced by an official zig package, so for now (while building this particular zigdev package), I'm trying to cut every corner I can:

  • I don't actually build Zig, I just download a tarball using curl.

  • I'm trying to disable every truly arch-specific step, since these would typically need an arch-specific chroot or similar setup.

    For example, I don't care about dynamic linking, stripping or reproducibility.

Once I get this zigdev package running, I can start building my hello_world.zig's and similar. At that point I will start slowly moving towards creating more proper binary packages by refining a rules template for my zig projects (using zig tooling, though). (All this while also learning Zig itself and system-level programming in general, which I have almost no experience with, so it will move at a glacial speed.)
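For the curious, in plain shell terms the whole "build" boils down to roughly this (the exact tarball URL is intentionally elided; the paths match the ZIGSITE_PREP value from the environment dump above):

curl -fsSL -o zig.tar.xz "https://ziglang.org/download/..."         # fetch a prebuilt tarball, URL elided
mkdir -p debian/tmp/opt/zigdev
tar -xJf zig.tar.xz --strip-components=1 -C debian/tmp/opt/zigdev   # stage it under debian/tmp for packaging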

15
submitted 4 months ago* (last edited 4 months ago) by netvor to c/philosophy
 

I'm not sure if this is the right type of question for this community.

The context is not essential, but in a recent video Alex O'Connor quoted "The Apologist's Evening Prayer" by C.S. Lewis. As a non-native English speaker, I failed to understand it just from hearing it, so I looked it up, but I still struggle with interpreting it.

Can someone here help me out with "translating" to a bit simpler English?

So here's the poem, as taken from cslewis.com:

From all my lame defeats and oh! much more
From all the victories that I seemed to score;
From cleverness shot forth on Thy behalf
At which, while angels weep, the audience laugh;
From all my proofs of Thy divinity,
Thou, who wouldst give no sign, deliver me.

Thoughts are but coins. Let me not trust, instead
Of Thee, their thin-worn image of Thy head.
From all my thoughts, even from my thoughts of Thee,
O thou fair Silence, fall, and set me free.
Lord of the narrow gate and the needle's eye,
Take from me all my trumpery lest I die.

Disclaimer: I'm aware that with poetry, interpretation can be problematic, but here's my thought process: when I tried to look for an "explanation" I didn't find any, which hints to me that the text is not particularly ambiguous once you can see through the poetry part. (In other words, people who quote this don't feel the need to add an explanation, since the meaning is rather clear to an educated native reader.)

78
submitted 6 months ago* (last edited 6 months ago) by netvor to c/nostupidquestions
 

I mean, everyone knows that in January it's hot in Australia, and in July it's cold there.

But do Australians call it "winter" in January and "summer" in July? Or does "winter" just imply hot weather and beaches, and "summer" imply ~~winter,~~ eh, I mean, snow sports and wool socks?

And given that most of the population lives in the northern hemisphere, is there a body of dad jokes and culture tropes related to the fact that "we're different", or is it just too cringe and boring? (I realize both could be true on this one.)

 

I initially wrote this as a response to this joke post, but I think it deserves a separate post.

As a software engineer, I am deeply familiar with the concept of rubber duck debugging. It's fascinating how "just" (re-)phrasing a problem can open up a path to a solution or shed light on one's own misconceptions or confusions. (As an aside, I find that other things that have a similar effect include writing commit messages and re-reading my own code under a different "lighting": for instance, after I finish a branch and push it to GitLab, I will sometimes immediately go and review the code (or just the diff) in GitLab, as opposed to my terminal or editor, and sometimes realize new things.)

But another thing I've been realizing for some time is that these "a-ha" moments always come with mixed feelings. Sure, it's great that I've been able to find the solution, but it also feels like a bit of a downer. I suspect that while crafting the question, I've subconsciously also been looking forward to the social interaction that would come from asking it: suddenly belonging to a group of engineers having a crack at the problem.

The thing is: I don't get that with ChatGPT. I don't get it, since there was never going to be any social interaction to begin with.

With ChatGPT, I can do the rubber duck debugging thing without the sad part.

If no rubber duck debugging happens and ChatGPT answers my question, then that's the obvious case: I can move on.

If no rubber duck debugging happens and ChatGPT fails to answer my question, then at least by that point I have gained some clarity about the problem, which I can re-use to phrase my question to an actual community of peers, be it an IRC channel, a Discord server or our team Slack channel.


So I'm wondering: do other people tend to use LLMs as this sort of interactive rubber duck?

And as a bit of a stretch of this idea: could an LLM be thought of as a tool for practicing asking questions, prior to actually asking real people?


PS: I should mention that I'm also not a native English speaker (which I guess is probably obvious by now from my writing), so part of my "learning to ask questions" is also learning it specifically in English.

 

I started writing this as an answer to someone on some Discord, but it would not fit the channel topic; still, I'd love to see people's views on this.

So I'll quote the comment but just as a primer:

The safest pattern to use is to not use any pattern at all and write the most straight forward code. Apply patterns only when the simplest code is actually causing real problems.

First and foremost: many paths to hell are paved with design patterns applied willy-nilly. (A funny aside: the OO community seems to be more active and organized in describing them (while often not warning strongly enough about the dangers of inheritance, the true lord of the pattern rings), which leads to the lower-level, simpler patterns being underrepresented.)

But, the other extreme is not without issues, by far.

I've seen too many FastAPI endpoints talking to the DB like there's no tomorrow. That is definitely the "straight forward" approach, but the first problem is already there: it's pretty much untestable, and soon enough everyone is coupling to random DB columns (and making random assumptions about their content, usually based on "well, let's see who writes what there" analysis), which makes it hard to change anything without playing whack-a-bug.

And what? Our initial DB design was not future-proof? Tough luck changing it now. So new endpoints will actually be trying to make up for the obsolete schema, using pandas everywhere to do what SQL or some storage layer (perhaps with some unit-of-work pattern) should be doing -- and further cementing in the obsolete design. Eventually it's close to impossible to know who writes/expects what, so now everyone had better be defensive, adding even more cruft (and more room for bugs).

My point is, I guess, that by the time there are identifiable "real problems" to be solved by a pattern, it's far too late.

Look, in general, postponing a decision to have more information can be a great strategy. But that depends on the quality of information you get by postponing. If that extra information is going to be just new features you added in the meantime, that is going to be heavily biased by the amount of defensive / making-up-for-bad-db junk that you forced yourself to keep adding. It's not necessarily going to be easier to see the right pattern.

So the tricky part is: which patterns are actually strong enough, yet not too obtrusive, that you can start applying them early on? That's a million-dollar question.

I don't think "straight forward" gets you towards answering that question. (Well, to be fair, I'm sure people have made $1M with "straight forward code", so that's that, but is that a good bet?)

(By the way, real world actually has a nice pattern specifically for getting out of that hole, and it's called "your competitor moving faster & being cheaper than you" so in a healthy market the problem should solve itself eventually...)


So what are your ideas? Do you have design patterns / disciplines that you tend to apply generally, with new projects?

I'm not looking for actual patterns (although it's fine to suggest your favorites, or link to resources); I'm mainly interested in what people think about patterns in general, and in how to apply them during the lifetime of a project.

 

When I speak, unless I'm sharing the screen I always keep looking at myself. It's kind of strange -- it clearly does not match a real-world conversation, but somehow I can't help it.

Edit: More context -- I'm wondering if others have it, if this is something that can be explained by some "brain" thing, and also how it affects the conversation.

 

Every time I try to understand how the forces which hold atoms and molecules together work, I find myself wanting to ask this question: why not the other way around? Could there be an atom which has electrons and neutrons inside, and protons outside?

It feels like a silly question, but is there something we know about the universe we live in that implies that this is not possible?

 

This is not strictly self-hosting, but it's another approach which is similar in philosophy, and which I actually prefer in many cases: hosted services.

--

So about 5 years ago I got fed up with having to update nextcloud (or was it owncloud? I don't recall), so I was looking for a hosting service.

Initially I expected this to be a bit of a burden on my budget (especially if the price scales with users), but to my surprise I found OwnCube (owncube.de), where the price was about EUR 18 per year. Great deal. So I went ahead, set it up, tested it for a while and eventually ended up configuring my parents' phones to use it for storing contacts & photos instead of Google.

To be clear, I did not use Nextcloud myself directly. I had already been paying for fastmail, and it's perfect except that it's single-user, so for myself I kept using fastmail, just synchronizing it (using vdirsyncer) with the OwnCube Nextcloud to have a backup and also an alternate interface.

This was working perfectly until one day it broke. It just stopped working (throwing some errors on sync). When I opened the web interface, there was just a message saying the Nextcloud interface is not compatible with PHP 8.0+.

Seemed understandable: they updated the underlying server to PHP 8.0 but not the Nextcloud instance. Not superb, but fine, I'll just open a support ticket.

However, the ticket went nowhere. The support engineer kept repeating something that amounted to,

  • they needed to update PHP for security reasons,
  • the plan I subscribed to does not "come with auto-updates",
  • so I am responsible for updating the Nextcloud instance, not them.

That does not make sense. I have access neither to the instance nor to the updater. All I can do now is stare at the message. Their admin UI did not provide anything either (no "magic" button, URL or SSH access).

I pointed that out, but they kept repeating themselves and eventually explained that I could either cancel the service and start it again (pay again!) -- which would give me an updated NC but erase my data -- or "book the auto-updater", which meant paying a one-time fee of about 70 EUR (more than double my yearly plan).

That does not make sense. I understand that I chose the basic mini plan, and I can't expect anyone to jump through hoops. I also perfectly understand that any software can break because of a version mismatch (after all, I'm a software engineer myself). But nobody knows how many times per year that can happen, so if I have to pay extra every time, then the cost of my plan is unpredictable.

Sadly the ticket went nowhere; the support sounded like a broken record with a "pay X EUR here" link. It seems like the very definition of holding my data hostage.

Eventually I decided to cancel the service.

--

So the moral, I guess..?

  • Be careful to whom you entrust your data

Don't get too tempted by great prices. Make sure you understand what is (NOT) included.

  • DO keep your backups.

    • For me, vdirsyncer worked great; it is a bit of a pain to configure and troubleshoot, but the architecture is great and it gives you the opportunity to sync between independent accounts and even plain text files, which can be a life-saver. (Even syncing with Google worked.)
  • Consider having more instances.

    • Eg. you could pay one and self-host one, use the paid one as a primary access point (public internet, usually much easier), and the self-hosted one as a backup.
    • Alternatively, one could even share a pool of instances with friends, split the bill and sync both ways.
    • (You will still need an almost-always-running cronjob somewhere to sync the data, if you're going with the vdirsyncer approach.)
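For completeness, the cronjob itself is tiny -- something like the line below on whatever box is usually on, after the pairs have been set up once with vdirsyncer discover:

*/15 * * * *  vdirsyncer sync >> ~/.cache/vdirsyncer-cron.log 2>&1   # sync every 15 minutes, keep a log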
 

Is there some mature and usable application or tool that would enable tracking desktop activities to aid in time tracking?

Over 10 years ago (back when I used Windows at work), I recall using an app on Windows -- I forgot what it was, definitely closed source, although very well made -- that would sit somewhere in the tray and just track my activities (mostly just the active window title and app), and later it would let me look back at the data, analyze it and categorize the time.

I recall that for my rather ADD-ish brain, this was a life-saver.

I don't recall the name of the app, but it looked kind of similar to timeBro (judging just from a brief look at their web page and their demo).

I haven't seen anything like that for Linux -- I admit I haven't really tried to search very hard. Given the vast diversity of desktops (from GNOME to KDE to i3), technologies (Xorg to Wayland...) and work environments (native apps, web browsers, flatpaks, command lines, IDEs, Vims, even SSH servers), I wonder if it would even be feasible to have something like this that would work reliably everywhere-ish and provide really useful data.
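(On Xorg at least, the raw data part looks trivial -- e.g. something like the line below run on a timer already gives you the active window title -- but as far as I know there is no single equivalent that works across Wayland compositors, which is a big part of why I wonder about feasibility.)

echo "$(date -Is) $(xdotool getactivewindow getwindowname)" >> ~/window-log.txt   # Xorg only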

23
Why do we want to know why? (self.nostupidquestions)
submitted 2 years ago by netvor to c/nostupidquestions
 

With any question, why is it always so helpful to know why the answer is what it is? In other words, which principle of thinking and learning is most closely tied to the question "why"? Or is it purely a social act of expressing deeper interest? Is asking for reasons mandatory?

I feel I know the answer to this question intuitively, but I find it hard to put it into words without it sounding stereotypical and lazy.

This seems bizarre, because it's children who are most "famous" for asking "why" all the time, but: how would you, say, explain to a child why we need to know the reasons behind things?
