I was wondering whether tuning the random seed with cross-validation, in order to maximize the performance of an algorithm that relies heavily on randomness, is a good idea or not. I built an Extra Trees classifier that performs very badly with basically every seed except the one I found via grid search. My thinking is that this is not a problem: I don't really care how the random conditions were set as long as the model classifies correctly, so I should be free to rerun the algorithm with different seeds until it works, in order to find the best set of random conditions for each split. Note also that the evaluation is done with leave-one-out cross-validation.
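For reference, this is roughly what I'm doing, as a minimal sketch assuming scikit-learn's ExtraTreesClassifier (the data `X`, `y` and the seed range are placeholders, not my actual setup):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, LeaveOneOut

# Placeholder data; in my case X and y come from my own (small) dataset
X = np.random.rand(40, 10)
y = np.random.randint(0, 2, size=40)

# Treat the seed like any other hyperparameter and grid-search over it,
# scoring each candidate seed with leave-one-out cross-validation
param_grid = {"random_state": list(range(100))}
search = GridSearchCV(
    estimator=ExtraTreesClassifier(n_estimators=100),
    param_grid=param_grid,
    cv=LeaveOneOut(),
    scoring="accuracy",
)
search.fit(X, y)

print("Best seed:", search.best_params_["random_state"])
print("Best LOOCV accuracy:", search.best_score_)
```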
Am I right?