Hi guys,
since our add-ins have gotten quite large and the impact of changes cannot always be foreseen anymore, I'd like to introduce automated testing for the add-ins. I have already given this some thought, but there are several things that I find hard to solve.
Example 1: Let's say there is functionality that simply creates a port on a component programmatically. Manually, I would create a model containing the component; in the test case I would then select that component, run the functionality, and afterwards check with API methods 1) whether the port was created at all and 2) whether it has the right properties.
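For illustration, a JUnit sketch of that flow against EA's Java API (org.sparx). The fixture path, the component GUID, the expected port name and the PortFunctionality.createPort entry point are all assumptions standing in for the real add-in code:

    import org.junit.Test;
    import static org.junit.Assert.*;
    import org.sparx.Collection;
    import org.sparx.Element;
    import org.sparx.Repository;

    public class PortCreationTest {

        // GUID of the component prepared in the fixture model (placeholder)
        private static final String COMPONENT_GUID = "{00000000-0000-0000-0000-000000000000}";

        @Test
        public void portIsCreatedWithRightProperties() {
            Repository repo = new Repository();
            assertTrue(repo.OpenFile("fixtures/port-test.eap")); // assumed fixture path
            try {
                Element component = repo.GetElementByGuid(COMPONENT_GUID);

                // run the functionality under test (hypothetical add-in entry point)
                PortFunctionality.createPort(repo, component);

                // 1) was the port created at all? EA keeps ports as embedded elements
                Collection<Element> embedded = component.GetEmbeddedElements();
                embedded.Refresh();
                Element port = null;
                for (short i = 0; i < embedded.GetCount(); i++) {
                    Element e = embedded.GetAt(i);
                    if ("Port".equals(e.GetType())) {
                        port = e;
                    }
                }
                assertNotNull("port was not created", port);

                // 2) does it have the right properties?
                assertEquals("expectedPortName", port.GetName());
            } finally {
                repo.CloseFile();
                repo.Exit();
            }
        }
    }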
Example 2: A port can also be created on the component by the user from within the toolbox. The difference is that here the add-in reacts to EA's events, and the test should check whether the reaction is correct, i.e. that the creation was not blocked by another add-in and that the port again ends up with the right properties.
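The events themselves cannot be raised from a test, but if the reaction logic is extracted from the COM event entry point (e.g. EA_OnPostNewElement) into a plain method, at least that method becomes testable in isolation. A sketch, where PortCreationReaction, onPostNewElement and the "flowPort" stereotype are assumed names:

    import org.sparx.Element;
    import org.sparx.Repository;

    public final class PortCreationReaction {

        // Reaction logic extracted from the add-in's EA_OnPostNewElement
        // handler; per EA's convention the return value tells EA whether
        // the element was updated during the notification.
        public boolean onPostNewElement(Repository repo, int elementId) {
            Element created = repo.GetElementByID(elementId);
            if (!"Port".equals(created.GetType())) {
                return false; // not a port, nothing changed
            }
            created.SetStereotype("flowPort"); // assumed property the add-in enforces
            return created.Update();
        }
    }

A test would then create a port through the API (mimicking the toolbox action) and call onPostNewElement directly with the new element's ID; what this cannot cover is the interplay with other add-ins listening to the same event.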
Problems:
1. In both cases I have to create the right base model manually, which in my opinion is too much work to do for every test case (see the fixture sketch after this list).
2. Rolling a test model back to its initial state is hard: the functionality modifies the model, but EA has no means (at least none that I know of) of undoing modeling actions. The sketch below works around this by copying a template file per test.
3. Errors that occur cannot reliably be attributed to the add-in itself, since I also use the API for checking the results.
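One way to ease points 1 and 2: prepare the base model once (by hand or via the API) as a template .eap file and copy it to a temporary location before each test, so every test starts from a pristine state and "rollback" is just discarding the copy. A minimal JUnit base class, with an assumed template path:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import org.junit.After;
    import org.junit.Before;
    import org.sparx.Repository;

    public abstract class EaModelTestBase {
        // hand-made (or once API-generated) template model; path is an assumption
        private static final Path TEMPLATE = Paths.get("fixtures/base-model.eap");

        protected Repository repo;
        private Path workingCopy;

        @Before
        public void openFreshModel() throws Exception {
            // each test works on a throwaway copy, so the template stays pristine
            workingCopy = Files.createTempFile("ea-test", ".eap");
            Files.copy(TEMPLATE, workingCopy, StandardCopyOption.REPLACE_EXISTING);
            repo = new Repository();
            repo.OpenFile(workingCopy.toString());
        }

        @After
        public void discardModel() throws Exception {
            // "rollback" is simply throwing the copy away
            repo.CloseFile();
            repo.Exit();
            Files.deleteIfExists(workingCopy);
        }
    }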
After identifying those problems, I thought about creating a mock implementation of the repository that fits the needs of example 1, but EA-triggered events cannot be simulated that way. In addition, if multiple add-ins react to the same events, testing gets even harder, because their order of execution is not always the same.
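For the parts that don't depend on events, a thin abstraction makes mocking feasible without faking the whole Repository class. A sketch with hypothetical names (ModelGateway, createPort, hasPort); the production implementation would delegate to org.sparx, while tests use the in-memory double:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.UUID;

    // The add-in codes against this interface instead of org.sparx.Repository
    // directly; interface and methods are hypothetical.
    public interface ModelGateway {
        String createPort(String componentGuid, String portName); // returns the new port's GUID
        boolean hasPort(String componentGuid, String portName);
    }

    // Test double (separate file in practice): never touches EA at all.
    final class InMemoryModelGateway implements ModelGateway {
        private final Map<String, Set<String>> ports = new HashMap<>();

        @Override
        public String createPort(String componentGuid, String portName) {
            ports.computeIfAbsent(componentGuid, k -> new HashSet<>()).add(portName);
            return UUID.randomUUID().toString();
        }

        @Override
        public boolean hasPort(String componentGuid, String portName) {
            return ports.getOrDefault(componentGuid, Collections.<String>emptySet()).contains(portName);
        }
    }

This also helps with problem 3: a unit test against the fake fails only because of the add-in's own logic, while a smaller set of integration tests against the real API covers the wiring.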
Question: Do you guys have any experience with this, or have you perhaps already used automated tests in your add-in projects? I'd really appreciate some other opinions on this topic.
Jan