An interesting concept, but it would require that we run it through a previously untrained neural network to keep the concept of consciousness from leaking in, which means the test is useless on already-deployed models.
But which AIs are we actually worried about going sapient? The ones we are already using. I suppose the test could still be applied usefully if you intended to probe a network and its hardware for the capacity for consciousness before deploying it to a live environment and training it, though.