While I mostly agree with the brunt of the thesis - 80% of the job is reading bad code and unfucking it, and ChatGPT sucks in all the ways - I disagree with the conclusions.
First, genAI shifting us towards analysing more bad code to unfuck is not a good thing. It's quite specifically bad. We really don't need more bad code generators. What we need are good docs; slapping genAI on badly documented libraries as a band-aid will do more harm than good. The absolute last thing I want is genAI feeding me more bullshit to deal with.
Second, this all comes across as an industrialist view on education. I'm sure Big Tech would very much like people to just be good at fixing and maintaining their legacy software, or at shipping new bland products as quickly as possible, but that's not why we should be giving people a CS education. You already need investigation skills to debug your own code. That 90% of industry work is not the creative building of new amazing software doesn't at all mean education should lean that way. 90% of industry jobs don't require novel applications of algebra or analytical geometry either, and people have been complaining that "school teaches you useless things like algebra or trigonometry" for ages.
This infiltration of industry into academia is always a deleterious influence, and genAI is a great illustration of that. We now have Big Tech weirdos giving keynotes at CS conferences about how everyone should work in AI because it's The Future™. Because education is perpetually underfunded, it heavily depends on industry money. But the tech industry is an infinite growth machine; it doesn't care about any philosophical considerations with regard to education, and it doesn't care about science in any way other than as a product to be packaged and shipped ASAP to grow revenue - no matter whether it's actually good, useful, sustainable, or anything like that. They invested billions into growing a specialised sector of CS, with novel hardware and all (see TPUs), to be able to multiply matrices really fast, and the chief uses of that are Facebook's ad recommendation system and now ChatGPT.
This central conclusion just sucks from my perspective:
It’s how human programmers, increasingly, add value.
“Figure out why the code we already have isn’t doing the thing, or is doing the weird thing, and how to bring the code more into line with the things we want it to do.”
While yes, this is why even a "run-of-the-mill" job as a programmer is not likely to be outsourced to an ML model, that's definitely not what we should aspire for the added value to be. People add value because they are creative builders! You don't need a higher education to be able to patch up garbage codebases all week, the same way you don't need any algebra or trigonometry to work at a random paper-pushing job. What you do need it for is to become the person who writes the existing code in the first place. There's a reason these are Computer Science programmes and not "Programming @ Big Tech" programmes.