akash_rawal

joined 2 years ago
[–] akash_rawal 1 points 1 month ago (1 children)
4. Struggle to come to a conclusion on what to do with the EOL OS because of internal political factors and the reality of how enterprise works.

This is the involuntary choice. If you cannot choose from the first three, you end up implicitly choosing the fourth.

[–] akash_rawal 2 points 1 month ago

I understand how it feels.

[–] akash_rawal 1 points 1 month ago

The only way that would work is to somehow quit and rejoin as a much more highly paid consultant who enables them to upgrade the EOL software in prod. I am actually considering this.

[–] akash_rawal 3 points 1 month ago (3 children)

There is something you need to know about collective wisdom: the larger the org, the lower it gets. Yes, the application works on Alma 8 and 9, but management says 'no'.

[–] akash_rawal 3 points 1 month ago (3 children)

> Any large enterprise still running RHEL 5 in Prod (or even, yes, older RHEL versions) has fully accepted the risks

It is more like 'involuntarily end up riding the risks of using unsupported old software'. RHEL 7 and RHEL 5 are in the right order.

RHEL sells an unrealistic expectation that you don't need to worry about the OS for another 10 years, so the enterprise gets designed around it and becomes unable to handle an OS upgrade, ever.

[–] akash_rawal 3 points 1 month ago (5 children)

I am not. I worked hard to make our application support RHEL 8 and then RHEL 9. And then politics takes over: the bigwigs start extended bickering over who should pay for the OS upgrade... which never happens. Sometimes hardware partners don't support the upgrades, which means OS upgrades also end up requiring new hardware.

I blame Red Hat.

[–] akash_rawal 15 points 1 month ago (7 children)
288
Enterprise misery (lemmy.world)
submitted 1 month ago by akash_rawal to c/linuxmemes
 
[–] akash_rawal 4 points 1 month ago

There is no way legacy projects are going to switch to Deno. Even if Deno were 100% compatible, the only advantage Deno provides is slightly higher performance. Node's complexity problem? All those configs need to be supported for compatibility anyway. TypeScript? The project already has tsconfig.json set up, so they might as well continue to use tsx. Security? I bet users will just get tired and use -A all the time.

To benefit from Deno, Node's legacy needs to be shed.

Wine is a different case. The reason Wine makes sense is that Windows is so much worse than Linux that even with patchy game compatibility, Linux offers a better experience. For Linux users, the alternative to Wine is not switching to Windows, it is not being able to play games. On the other hand, legacy Node projects have a very easy alternative... just continue to use Node.

And btw Bun is making the same mistake.

[–] akash_rawal 4 points 1 month ago (3 children)

Check if there are any large files in /tmp and /run/user/*?

[–] akash_rawal 4 points 1 month ago (6 children)

> Through compatibility, Deno established an upgrade path.

Sure, but Node compatibility needs to work, and it needs to work reliably, which means every last detail of Node needs to be supported.

This is what I am trying to convey... the engineering effort needed to make an objectively better JS runtime while staying Node-compatible is likely too much. Many popular Node projects already have issues with Deno. Now imagine how the compatibility story will look with every single proprietary Node project out there.

So instead of trying to replace Node.js or offering an upgrade path for existing Node projects, incentivize the formation of an ecosystem around Deno.

[–] akash_rawal 9 points 1 month ago

Statistically speaking, you are right.

 

I think Deno made a huge mistake.

Deno was intended to be a redo of 'JavaScript outside the browser', making it simpler while getting rid of the legacy.

When Deno 1.0 was released in 2020, Deno was its own thing. Deno bet hard on ESM, re-used web APIs and conventions wherever possible, pushed for URL imports instead of node_modules, supported executing TypeScript files without tsx or tsconfig.json, and so on.
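For illustration, original-style Deno code looked something like this (a sketch from memory; the std version pin is an arbitrary example). Note the URL import instead of node_modules, the web-standard Request/Response types, and that it runs as TypeScript directly, no tsx or tsconfig.json needed.

// Run with: deno run --allow-net server.ts
import { serve } from "https://deno.land/std@0.140.0/http/server.ts";

serve((_req: Request): Response => new Response("Hello from Deno"), { port: 8000 });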

However, since 2022 Deno has been trying to imitate Node more and more, and this is destroying Deno's ecosystem.

Users' Perspective

"If Deno implemented Node APIs and tried to imitate Node and NPM ways of doing things, existing libraries and frameworks written using Node will automatically work in Deno and thus adopting Deno will be easier." I don't know who said this, someone must have said this.

What has happened instead is that Deno trying to imitate Node has disincentivized the formation of any practical ecosystem for Deno, while the existing libraries and frameworks are unreliable when used with Deno.

I tried using Next.js via Deno some time back, and the Next.js dev server crashed when Turbopack was enabled. There is a workaround, so for the time being that issue is solved. But today there is another issue: type checking (and LSP) for JSX is broken.

This is my experience with using Node libraries with Deno. Every hour of work is accompanied by another hour (sometimes more) of troubleshooting the libraries themselves.

I think this is the consequence of trying to imitate something you are not. Deno is trying to be compatible with Node, but there are gaps in said compatibility. I think achieving compatibility with Node is hard, and the gaps will stay for a long time.

For example, at the time of writing, FileHandle.readLines is not implemented in Deno.

import fs from 'node:fs/promises';

// Open the file named on the command line and iterate over it line by line.
// FileHandle.readLines exists in Node (since v18.11.0) but not in Deno.
const hd = await fs.open(Deno.args[0]);
for await (const line of hd.readLines()) {
	console.log("Line: ", line);
}

The above script crashes at runtime despite passing the TypeScript check.

$ deno check test.ts
Check file://path/to/test.ts
$ deno run -R test.ts input.txt
error: Uncaught (in promise) TypeError: hd.readLines(...) is not a function or its return value is not async iterable
for await (const line of hd.readLines()) {
                            ^
    at file://path/to/test.ts:4:29
$
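For the time being, the workaround is to drop down to Deno's own APIs. A sketch (assuming jsr:@std/streams; untested against the current version):

// Same line-by-line read, but via Deno.open and web streams instead of
// the unimplemented FileHandle.readLines.
import { TextLineStream } from "jsr:@std/streams/text-line-stream";

const file = await Deno.open(Deno.args[0]);
const lines = file.readable
	.pipeThrough(new TextDecoderStream())
	.pipeThrough(new TextLineStream());
for await (const line of lines) {
	console.log("Line: ", line);
}

But of course, needing runtime-specific fallbacks like this is exactly the compatibility gap problem.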

Using NPM libraries also typically comes with a complete disregard for Deno's security features. You just end up running deno with -A all the time.

Library devs' Perspective

Deno 1.0 is released, and library devs are excited to join the ecosystem. Projects like drollup, denodb, and drizzle-deno are started.

But then Deno announces Node and NPM compatibility and all that momentum is gone.

Now, it seems like Deno's practical ecosystem is limited to first-party libraries like @std and Fresh, libraries on JSR, and a small subset of libraries on NPM that work on Deno.

If you look at the situation from a library or framework dev's perspective, it all seems reasonable. Most of them are not new to JavaScript; they are much more familiar with Node than with Deno.

When Deno is announced, some of them might want to contribute to Deno's ecosystem. But then Deno announces Node and NPM compatibility, and now there is not enough incentive to develop software for Deno. It doesn't matter that Node compatibility is spotty, because they'd rather just go back to using Node like they're used to. Supporting multiple runtimes is painful; if you want to understand the pain, ask anyone who has tried to ship a cross-platform application written in C or C++.

Deno should have promoted its own API

If the competition is trying to be more like Node, Node is the winner.

There is a lesson to be learned here. If you are trying to replace a legacy system, don't re-implement the same legacy system. Instead, put the burden of backwards-compatibility on the legacy system.

Deno aimed to uncomplicate JavaScript. (Deno's homepage literally says that.) By trying to mimic Node, Deno has unintentionally put Node's complexity problem at center stage. And now it cannot be removed. Instead of being a brand new thing, Deno ended up being a less reliable variant of Node.

Deno should have supported its own API on top of Node instead. Since Deno controls its API, supporting its own API on Node would be simpler than supporting Node APIs. For library and framework developers, libraries made for Deno would work on Node and there would be no need to support multiple runtimes.
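To make that concrete, a hypothetical shim (the name and coverage here are illustrative, not a real package) could implement Deno's API over Node primitives and ship on NPM:

// deno-shim.ts (hypothetical): Deno's API implemented on top of Node.
import { readFile, writeFile } from "node:fs/promises";

export const Deno = {
	args: process.argv.slice(2),
	readTextFile: (path: string) => readFile(path, "utf8"),
	writeTextFile: (path: string, data: string) => writeFile(path, data, "utf8"),
	// ...and so on for the rest of Deno's (much smaller) API surface
};

Libraries written against Deno's API would then import the shim on Node and use the built-in globals on Deno.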

This would have resulted in a much larger ecosystem of software made for Deno, one that is more reliable and free of Node's legacy.

 

Testcontainers is a library that starts your test dependencies in containers and stops them after you are done using them. Testcontainers needs to mount the Docker socket into its reaper container, so I made a (for now minimal) alternative library that does not need Docker socket access. It also works with daemonless Podman.
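For context, typical Testcontainers usage looks roughly like this (a sketch of the testcontainers NPM package; the image and port are just examples):

import { GenericContainer } from "testcontainers";

// Start a throwaway Redis for the test run.
const redis = await new GenericContainer("redis:7")
	.withExposedPorts(6379)
	.start();

const url = `redis://${redis.getHost()}:${redis.getMappedPort(6379)}`;
// ...point the tests at `url`, then clean up.
await redis.stop();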

 

I took each rating for games on the Wine Application Database, mapped the ratings to numbers (Garbage -> 1, Bronze -> 2, Silver -> 3, Gold -> 4, Platinum -> 5), and plotted a monthly average.
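In code, the aggregation is roughly this (a sketch; the report shape is made up, not the actual AppDB schema):

type Rating = "Garbage" | "Bronze" | "Silver" | "Gold" | "Platinum";

const score: Record<Rating, number> =
	{ Garbage: 1, Bronze: 2, Silver: 3, Gold: 4, Platinum: 5 };

// One entry per AppDB test report; month is "YYYY-MM".
function monthlyAverage(reports: { month: string; rating: Rating }[]) {
	const buckets = new Map<string, number[]>();
	for (const r of reports) {
		const xs = buckets.get(r.month) ?? [];
		xs.push(score[r.rating]);
		buckets.set(r.month, xs);
	}
	return [...buckets].map(([month, xs]) =>
		({ month, avg: xs.reduce((a, b) => a + b, 0) / xs.length }));
}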

 

I was exploring direct links between machines, and basically failed to break anything.

I assigned IP address 192.168.0.1/24 to eth0 in two ways.

A. Adding 192.168.0.1/24 as usual

# ip addr add 192.168.0.1/24 dev eth0
# ping -c 1 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.051 ms

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
#

B. Adding 192.168.0.1/32 and adding a /24 route

# ip addr add 192.168.0.1/32 dev eth0
# # 192.168.0.2 should not be reachable.
# ping -c 1 192.168.0.2
ping: connect: Network is unreachable
# # But after adding a route, it is.
# ip route add 192.168.0.0/24 dev eth0
# ping -c 1 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.053 ms

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
#

Does this mean that adding an IP address with a prefix is just a shorthand for adding the IP address with a /32 prefix and adding a route afterwards? That is, does the prefix length have no meaning of its own, with the real work being done by the route entries?

Or is there any functional difference between the two methods?

Here is another case: these two nodes can reach each other via a direct connection (no router in between) but don't share a subnet.

Node 1:

# ip addr add 192.168.0.1/24 dev eth0
# ip route add 192.168.1.0/24 dev eth0
# # Finish the config on Node 2
# nc 192.168.1.1 8080 <<< "Message from 192.168.0.1"
Response from 192.168.1.1

Node 2:

# ip addr add 192.168.1.1/24 dev eth0
# ip route add 192.168.0.0/24 dev eth0
# # Finish the config on Node 1
# nc -l 0.0.0.0 8080 <<< "Response from 192.168.1.1"
Message from 192.168.0.1
 

I am building my personal private cloud. I am considering using second-hand Dell OptiPlexes as worker nodes, but they only have 1 NIC and I'd need a contraption like this for my redundant network.

Then this wish came to my mind. Theoretically, such a one-box solution could be faster than gigabit too.

 

Let alone including yourself in the picture. I know what you look like.

Let alone including your loved ones in the picture.

Even when their disappointment at having to face away from the monument is clearly visible in the photo.

And then you make them do stuff like 'hold the sun in your hands' or whatever.

 
8
submitted 2 years ago* (last edited 2 years ago) by akash_rawal to c/programminghorror
 