In my previous post I mentioned my worries about missing data in current publications. But as with everything, there is another side to the story.
As I mentioned, I am currently simulating dc-dc converters, and a characteristic of these models is that they are full of switching moments. Stability in these systems often depends on the exact moment of switching. For example, in peak current mode control a dc-dc converter becomes unstable once the duty cycle goes above 0.5 (D > 0.5). This can be countered with a compensation ramp (slope compensation), but that too is timing dependent.
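To make the D > 0.5 issue a bit more concrete, here is a minimal sketch of the standard sampled-data result for peak current mode control, using assumed buck converter values rather than my actual model: a perturbation in the inductor current gets multiplied each switching cycle by -(S2 - Se)/(S1 + Se), where S1 is the current up-slope, S2 the down-slope and Se the slope of the compensation ramp.

```matlab
% Minimal sketch with assumed example values (buck stage, D = 2/3 > 0.5).
% A current perturbation is multiplied each cycle by -(S2 - Se)/(S1 + Se).
Vin = 12; Vout = 8; L = 10e-6;        % assumed values, not my actual converter
S1 = (Vin - Vout)/L;                  % on-time (up) slope  [A/s]
S2 = Vout/L;                          % off-time (down) slope [A/s]

Se = 0;                               % no compensation ramp
gain_noComp = abs((S2 - Se)/(S1 + Se))   % = 2 > 1 -> subharmonic oscillation

Se = 0.5*S2;                          % classic rule of thumb: Se >= S2/2
gain_comp = abs((S2 - Se)/(S1 + Se))     % = 0.5 < 1 -> perturbation dies out
```

With no ramp the gain is above 1 for any D > 0.5, so the perturbation grows every cycle; with the ramp it decays, which is exactly why the timing of that ramp relative to the switching instant matters so much.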
MATLAB (or rather Simulink) has something called a variable-step solver: basically, it allows the program to change the step size during the simulation to speed it up, which is especially useful when you have complex systems simulated over a long time period. But there is a chance that a switching moment is missed, and even though the system should not become unstable, in the simulation it does become unstable simply because that switching moment was missed.
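One way to keep the convenience of the variable-step solver while reducing this risk is to cap its maximum step size. A minimal sketch of how that could look from a script is below; the model name 'buck_pcm' and the switching frequency are placeholders, not my actual setup.

```matlab
% Minimal sketch: cap the variable-step solver's maximum step size so it
% cannot step over a whole switching interval. Model name and switching
% frequency are placeholders.
model = 'buck_pcm';                  % hypothetical Simulink model
fsw   = 100e3;                       % assumed switching frequency [Hz]

load_system(model);
set_param(model, 'Solver', 'ode23tb', ...           % stiff solver, common for converters
                 'MaxStep', num2str(1/(50*fsw)));    % roughly 50 steps per switching period
out = sim(model);
```

Of course this trades away some of the speed-up that the variable step size was supposed to give you in the first place.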
An example circuit and its outcomes are shown in the pictures below. The idea behind the circuit is that the edge of the pulse is detected and only the positive-going (rising) edge is allowed through. From theory it is known that the derivative of a pulse edge is a very steep (Dirac-like) spike, which is what is needed in the system. But the graph shows that the result is not a very thin spike: there is a delay before the signal returns to zero, and this is caused by the variable step size. MATLAB has decided that it only needs a simulation time step of x around the edge, and that step size is the delay. In the graph below it can be seen that the pulse is not really a pulse, and in other circuits it can be shown that this effect causes instability.
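To show what I mean without the full Simulink model, here is a small plain-MATLAB sketch: the discrete derivative of an edge is exactly one sample wide, so its width (and the delay before it returns to zero) is whatever step size the solver happened to pick around the edge.

```matlab
% Minimal sketch (plain MATLAB, not the actual Simulink model): the
% numerical derivative of a step is one sample wide, so a coarse step
% around the edge turns the "Dirac" spike into a low, wide pulse.
t_fine   = 0:1e-7:1e-4;               % fine, fixed time base
t_coarse = 0:5e-6:1e-4;               % coarse steps, as a variable-step solver might take
pulse = @(t) double(t >= 50e-6);      % rising edge at 50 us

d_fine   = [0 diff(pulse(t_fine))]   ./ [1 diff(t_fine)];
d_coarse = [0 diff(pulse(t_coarse))] ./ [1 diff(t_coarse)];

plot(t_fine, d_fine, t_coarse, d_coarse, '-o');
legend('fine step', 'coarse step');
xlabel('time [s]'); ylabel('d(pulse)/dt');
```

The fine-step derivative is a tall, thin spike, while the coarse-step one is lower and takes a whole step to come back to zero: the same kind of delayed tail as in the pulse graph below.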
The pulse graph:
There are various solutions to this issue, but then the question arises: which is the best one? To find out, I posted the question to the MathWorks forum: http://mathworks.com/matlabcentral/answers/36809-maximum-step-size-or-automatic-timing#answer_46023 and got a reply that discusses the various options.
Back to my original point: where do you stop reporting? I mean, should these simulation settings also be reported in a paper? And the reasons why you chose them? Part of me says a clear yes, but it instantly becomes clear that this starts to eat up valuable space in your paper. On the other hand, without this information your simulation cannot be reproduced. So where would, or should, you stop?
my2p