Honest question: do you avoid alcohol if you'll be a passenger in a car? To me, that would seem similar to the plane situation you're describing, but I'm sure you'll agree the majority of people wouldn't do that.
yggdar
Well, there is this thing called a speed limit, which is a very clear hard limit. If you go over it, it is at the very least financially unsafe.
You couldn't really do that with beer, because beer is typically carbonated, so you'd need a very strong bag inside the box. So strong that you'd end up with a can or bottle anyway.
It would also be very hard to compete with products that are this mature. Linux, Windows, and macOS have been under development for a long time, by a lot of people. If you create a new OS, people will inevitably compare your new, immature product with those mature ones. If you had the same resources and time, then maybe your new OS would beat them, but you don't. So at launch you will have fewer optimizations, features, and security audits, less compatibility, etc., and few people would actually consider using your OS.
I find it strange that they specifically report on that one statistic. Of course most data loss events will be caused by very few people, because data loss events themselves are quite uncommon. They don't say it in the article, but I suspect most of these people caused only one data loss event. If the same people caused many data loss events each, that would be worth publishing.
You could get lactase tablets. Those allow you to (temporarily) digest milk, so you could continue to have your coffee with milk.
The function should be cubic, so you should be able to write it in the form "f(x) = ax^3 + bx^2 + cx + d". You could work out the entire thing to put it in that form, but you don't need to.
Since there are no weird operations, roots, divisions by x, or anything like that, you can just count how many times x might get multiplied by itself. In the numerator of each fraction, there are 3 factors containing x, so you can quite easily see that the maximum will be x^3.
It's useful to know what the values x_i and y_i are, though. They describe the 3 points through which the function should go: (x_1, y_1) to (x_3, y_3).
That also makes the second part of the statement easy to check. Take (x_1, y_1) for example. You want to be sure that f(x_1) = y_1. If you replace every "x" in the formula with x_1, you'll see that things start cancelling out. Eventually you'll get "1 * y_1 + 0 * y_2 + 0 * y_3", so f(x_1) is indeed y_1.
They could have explained this a bit better in the book, it also took me a little while to figure it out.
LLMs don't have logic; they are just statistical language models.
That is true, but from a human perspective it can still seem non-deterministic! The behaviour of the program as a whole will be deterministic if all inputs are always the same, arrive in the same order, and there is no multithreading. On the other hand, a specific function call that is executed multiple times with the same input may occasionally give a different result.
Most programs also have input that changes between executions. Hence you may get the same input record, but at a different place in the execution, and thus a different result for the same record as well.
That exact version will end up making "true" false any time it appears on a line whose number is divisible by 10.
During preprocessing, "true" would be replaced by that expression, and within the expression, "__LINE__" would be replaced by the number of the current line. So at runtime, you end up with the line number modulo 10 (%10). In C, something is true if its value is not 0. So for e.g. lines 4, 17, 39, or 116, it ends up being true. For line numbers that can be divided by 10, the result is zero, and thus false.
In reality the compiler would optimise that modulo operation away and pre-calculate the result during compilation.
The original version constantly behaves differently at runtime, while this version would always give the same result... unless you change any line and recompile.
The original version is also super likely to be actually true. This version would be false very often. You could reduce the likelihood by increasing the 10, but you can't make it too high or it will never be triggered.
One downside compared to the original version is that the value of "true" can be 10 different things (anything between 0 and 9), so you would get a lot more weird behaviour since "1 == true" would not always be true.
A slightly more consistent version would be
((__LINE__ % 10) > 0)
Bing is pulling off some hilarious malicious compliance!
That's a good tip, but I assume he meant he drinks juice of burned beans, rather than burned juice of beans. After all, coffee beans do need to be roasted (burned) before you use them!