The Human Resources team.
Puttaneska
Yes. I find it surprising.
It’s also good that there is analysis of the effect of the charge.
I would like to know if the UK tax on sugared drinks has had any beneficial effect. I believe that sales (and manufacture) have fallen, but that’s pretty irrelevant; e.g., has it improved obesity or dental health?
I do my best to avoid both of them…it’s clearly not helping all that much.
Yes, it would be clearer if the % was after each category in the legend.
I’d remove the machine head/tuners on the bad side too. There will probably be a nut on the other side that you can undo and the tuners will just pull through. That will give you more working room and allow you to get the glue along the whole crack.
I’m less sure about this one, but you could very carefully slide in something like a knife to open the crack a little. This might allow the glue to flow in better. If your glue is runny, it might not make that much of an improvement, so it’s perhaps not worth risking cracking the piece off completely. (Although the glue will be plenty strong enough, if you take off the broken part it might not be easy to line it back up properly. If it’s still attached, you’ll be right.)
FWIW, I once cycled 3 miles home with a new guitar too. It was in a heavy flight case and I thought it would be fine to carry it under one arm and wobble back one-handed.
My student accommodation had cockchafers. The university didn’t believe us until one of my friends presented them with one in a matchbox.
Fantastic race, the Romans.
It would be more efficient, for researchers and for funding agencies, if the dice-rolling occurred first.
And titles (e.g., Miss, Ms, Mr, Mrs, Dr, Prof.) aren’t used with only the first name.
(Though the BBC likes to do this with their ‘celebrity’ doctors.)
It seems that ChatGPT sometimes knows that what it has offered is wrong, and actually knows a better answer when challenged.
I’ve often asked it for code help, which hasn’t worked. Then I’ve gone to other sources and found that ChatGPT was wrong about something and that there’s an alternative way. When this is put back to ChatGPT, it says that I’m correct (x can’t do y) and offers a perfect solution.
So it looks like it sometimes knows what it appears not to know, but inexplicably doesn’t give the correct information immediately.
I imagine that’s quite a common situation.
That’s an answer to a different question. Mine was: are there any improvements in public health?