this post was submitted on 19 Sep 2024
730 points (97.8% liked)

[–] [email protected] 75 points 3 months ago* (last edited 3 months ago) (3 children)

You should have rolling log files of limited size and limited quantity. The issue isn't that it's a text file, it's that they're not following pretty standard logging procedures to prevent this kind of thing and make logs more useful.

Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, deleting the oldest if there are more log files than your configured limit.

This prevents runaway logging like this, and it also lets you store more logging info than you could easily open and search in one document. If you want to keep 20 GB of logs, having all of it in one file makes it hard to dig through; ten 2 GB log files are much easier. That's not so much a consumer issue, but that's the gist of it.
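The rolling scheme described above maps directly onto Python's standard library; a minimal sketch, with the logger name, file path, and sizes purely illustrative:

```python
import logging
from logging.handlers import RotatingFileHandler

# When app.log reaches maxBytes it is renamed to app.log.1 (older
# backups shift up) and a fresh app.log is started. With backupCount=9
# you keep at most 10 files, so 2 GB per file caps usage at ~20 GB.
logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)
handler = RotatingFileHandler(
    "app.log",
    maxBytes=2 * 1024**3,  # rotate at 2 GB
    backupCount=9,         # keep app.log plus app.log.1 .. app.log.9
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("disk usage stays bounded no matter how much we log")
```

The rotation happens inside the handler, so application code just logs normally and never has to think about file sizes.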

[–] [email protected] 14 points 3 months ago (1 children)

Fully agree, but the way it's worded makes it seem like the log being a text file is the issue. Maybe I'm just misinterpreting the intent, though.

[–] [email protected] 24 points 3 months ago (2 children)

200 GB of a text log file IS weird. It's one thing if you had a core dump or some other huge info dump, which, granted, shouldn't be generated on its own, but at least those have a reason to be big. 200 GB of plain text logs is just silly.

[–] xantoxis 8 points 3 months ago (1 children)

No, 200 GB of plain text logs is clearly a bug. I run a homelab with 20+ apps in it, and all the logs together wouldn't add up to that in years, even without log rotation. I don't understand the poster's decision to blame this on "western game devs" when it's just a bug by whoever wrote the engine.

[–] [email protected] 5 points 3 months ago

Agreed, and there's a good chance that log is full of one thing spamming over and over, and the devs would love to know what it is.
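Finding that spamming message is a one-liner's worth of work; a hedged sketch that counts duplicate lines (the file path and log contents here are made up for the demo, not taken from the game):

```python
from collections import Counter

# Demo: write a tiny sample log (stand-in for the real 200 GB file)...
with open("app.log", "w") as f:
    f.write("WARN texture not found: foo.dds\n" * 4)
    f.write("INFO game started\n")

# ...then count duplicate lines; the spamming message tops the list.
with open("app.log", encoding="utf-8", errors="replace") as f:
    top = Counter(line.rstrip("\n") for line in f).most_common(3)

for line, count in top:
    print(f"{count:>8}  {line}")
```

On a real multi-gigabyte log the same `Counter` loop still works, since it streams the file line by line instead of loading it all into memory.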

[–] [email protected] 1 points 3 months ago

It could be a matter of storing non-text information in an uncompressed text format. Kind of like how all files are ultimately just 0s and 1s, binary data could be "logged" as a massive text dump instead of kept in its original compressed format.

[–] [email protected] 12 points 3 months ago (1 children)

As a sysadmin there are few things that give me more problems than unbounded growth and timezones.

[–] [email protected] 1 points 3 months ago

Printers. Desk phones. The WMI service crashing at full core lock under the guise of svchost.

[–] teejay 5 points 3 months ago (1 children)

Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, ~~deleting~~ archiving the oldest

FTFY
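In production this rotate-and-archive behavior is often handled outside the app entirely; on Unix hosts a logrotate rule along these lines does both (the path and sizes are illustrative, not from the thread):

```
/var/log/myapp/*.log {
    # rotate once a file reaches 2 GB
    size 2G
    # keep 9 rotated files alongside the live one
    rotate 9
    # gzip rotated files instead of deleting them outright
    compress
    missingok
    notifempty
}
```

Compressed plain-text logs shrink dramatically, so archiving this way costs far less disk than the raw sizes suggest.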

[–] [email protected] 4 points 3 months ago

Sure! Best practices vary with your application. I'm a dev, so I'm used to configuring things for local env use. In prod, archiving is definitely nice so you can trace back even through heavy logging. Though, tbh, if your application is getting used by that many people, a DB-backed logging system is probably just straight better.