I’m skeptical that an LLM could answer questions as effectively from documentation alone. A big part of the value of Stack Overflow and similar sites is that the answers come from people who have experience with a given technology and understand its pain points. Often you can ask the wrong question and still get a useful answer, because the context is enough for others to figure out what you’re actually confused by.
I’m not sure an LLM could do the same just given the docs, but it would be interesting to see how close it could get.
Not really; if you read the paper, what they’re doing is creating an image that looks like a dog and is labeled as a dog, but lies very close to the model’s representation of a cat in feature space. That means manual review of the training set won’t help, since every example looks correctly labeled to a human.
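For anyone curious what that looks like mechanically, here’s a rough sketch of the optimization (this is my illustration of the feature-collision idea, not the paper’s actual code; the feature extractor, images, and hyperparameters are all stand-ins):

```python
import torch
import torchvision.models as models

# Frozen feature extractor: a pretrained net with its classifier head removed.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feat = torch.nn.Sequential(*list(net.children())[:-1]).eval()
for p in feat.parameters():
    p.requires_grad_(False)

base = torch.rand(1, 3, 224, 224)    # stand-in for the "dog" photo (keeps its dog label)
target = torch.rand(1, 3, 224, 224)  # stand-in for the "cat" the attacker wants misclassified

poison = base.clone().requires_grad_(True)
opt = torch.optim.Adam([poison], lr=0.01)
beta = 0.25  # trades feature collision against staying visually close to the base

target_feat = feat(target).flatten(1)
for step in range(500):
    opt.zero_grad()
    # Pull the poison toward the cat in feature space...
    collision = (feat(poison).flatten(1) - target_feat).pow(2).sum()
    # ...while penalizing visible drift from the dog in pixel space.
    fidelity = beta * (poison - base).pow(2).sum()
    (collision + fidelity).backward()
    opt.step()
    poison.data.clamp_(0.0, 1.0)  # keep it a valid image
```

The result still looks like the dog and keeps its dog label, so a reviewer sees nothing wrong, but its features sit near the cat, which is what corrupts the model’s decision boundary once it’s trained on.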