People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren't Asimov's robots we're dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing about self-driving cars, working the trolley problem: "Should a car veer into oncoming traffic to avoid hitting a child crossing the road?" The people writing these pieces imagine that the creators of these machines hand-code every scenario, like a long series of if statements.
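If you took that framing literally, the decision logic would look something like the caricature below. To be clear, this is purely an illustration of the misconception; no real system is written this way, and the scenario flags are invented for the example.

```python
# The imagined "hand-coded ethics" version of a self-driving car.
# Every flag here is a made-up stand-in; this is a caricature, not real code.
def decide(child_in_crosswalk: bool, oncoming_traffic: bool) -> str:
    if child_in_crosswalk and oncoming_traffic:
        return "swerve"    # the hard-coded moral choice journalists imagine
    if child_in_crosswalk:
        return "brake"
    return "continue"
```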
But that's just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, the system is fed a huge pile of video footage and radar records, along with the driver inputs recorded under those same conditions. The model is then trained to map the camera and radar inputs to whatever the human drivers did.
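Here's a minimal sketch of what that training loop looks like: behavior cloning, where a small network learns to map sensor frames to the steering and throttle a human driver produced. The network architecture, data shapes, and names here are my own illustrative assumptions, not any company's actual pipeline.

```python
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Toy stand-in for a perception-and-control network."""
    def __init__(self):
        super().__init__()
        # Tiny placeholder for a real perception backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Output: [steering, throttle] predicted from a camera frame.
        self.head = nn.Linear(32, 2)

    def forward(self, frames):
        return self.head(self.encoder(frames))

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Fake stand-ins for logged data: camera frames and what the human did.
frames = torch.randn(8, 3, 96, 96)    # batch of camera images
human_actions = torch.randn(8, 2)     # recorded steering/throttle

for step in range(100):
    predicted = policy(frames)
    # The only training signal: "do what the human driver did here."
    loss = loss_fn(predicted, human_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

There is no rulebook anywhere in that loop. The only thing the model is ever told is "match the human."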
This behavior isn't at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained off us, off our responses, and they exhibit all of the mistakes we make. Waymo cars don't stop at crosswalks because human drivers don't stop at crosswalks. The machine is simply copying us.