It’d be interesting to see how much this changes if you restricted the training dataset to books written in the last twenty years; I suspect the model would be a lot less negative. Older books tend to include material that doesn’t fit modern ideals, and it’d be a real struggle to avoid this if such texts are used for training.
For example, I was recently reading a couple of the sequels to The Thirty-Nine Steps (written during WW1), and they include multiple passages that really date them to an earlier era, with the main character casually throwing out jarringly racist remarks about black South Africans, Germans, the Irish, and basically anyone else who wasn’t properly English. Train an AI on that and you’re introducing the chance of problematic output - and chances are most LLMs have been trained on this series, since the books are now public domain and easily available.
I’ve occasionally found a plunger useful for a sink - a bit of back-and-forth plunging can loosen a hairball or break up a layer of fat or soap scum. On the other hand, I’ve never needed to use a plunger on a toilet. I don’t know how much of this is exaggeration on the internet, but Australian toilets don’t seem to have anywhere near as many issues as the American designs do.