In design and development of software, I want to know the worst-case, longest execution time of an interrupt service routine (ISR). Not what we specified or intended in the design; rather, what the compiler actually created. This is important because we want to be sure we have plenty of time to spare: we don't want to miss other interrupts, or important hardware events that are signalled, and so forth. And we don't want the ISR to turn out to be an infinite loop.
I've written several analysis tools over the years that can do this on processors ranging from ones as complicated as the Intel x86 down to ones as simple as the PIC10. The basic idea is to count instruction cycles down the longest execution path. (Okay, I cheated on the x86, since I didn't estimate cycle times for all possible instructions, or even accurately for the ones I did handle; I just assumed the worst case for the instructions that were there.) Depending on the environment, I chose to feed the tool either the assembly listing from the compiler or the object binary.
The routine works as follows:

1. Trace every execution path from the routine's entry point, following both outcomes of each conditional branch.
2. Sum the cycle count of each instruction along each path.
3. Report the maximum total; that is the worst-case execution time.
This is almost trivial on the PIC10 platform, since it has constant-time instructions, and its conditional branch instructions always have a fixed offset. It gets incrementally more complex with the PIC16, PIC18, ARM, etc.
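On such a platform the walk can be sketched in a few lines. This is a minimal illustration, not any of my actual tools: it assumes a pre-decoded instruction list with made-up tuples, and it uses the baseline-PIC convention that an instruction costs 1 cycle unless it changes the program counter, in which case it costs 2.

```python
# Hypothetical, pre-decoded instruction stream: each entry is
# (mnemonic, operand). A real tool would decode the binary or parse
# the compiler's assembly listing to produce this.
def worst_case_cycles(program, pc=0):
    """Return the worst-case cycle count from instruction `pc` to the return."""
    cycles = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "RETLW":                 # return: 2 cycles (PC changes)
            return cycles + 2
        if op == "GOTO":                  # unconditional branch: 2 cycles
            cycles += 2
            pc = arg
        elif op in ("BTFSC", "BTFSS"):    # conditional skip of next instruction
            # Explore both outcomes and keep the slower one:
            # skip taken costs 2 cycles, skip not taken costs 1.
            taken = 2 + worst_case_cycles(program, pc + 2)
            not_taken = 1 + worst_case_cycles(program, pc + 1)
            return cycles + max(taken, not_taken)
        else:                             # ordinary single-cycle instruction
            cycles += 1
            pc += 1
    return cycles

isr = [
    ("BTFSC", None),   # test a flag bit; skip next instruction if clear
    ("GOTO", 3),       # bit set: jump straight to the return
    ("MOVWF", 0x10),   # bit clear: one extra instruction first
    ("RETLW", None),
]
print(worst_case_cycles(isr))  # both paths cost 5 cycles here
```

Note that this sketch has exactly the flaw discussed next: a backward GOTO makes it run forever, because nothing detects loops.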
The algorithm I outlined above has one flaw: the first step can't tell whether the code contains a loop or recursion, so a backward branch makes the path trace run forever. Recursion is typically considered a bad idea in microcontrollers, embedded systems, and interrupt service routines, so that is not much of a problem. But short for loops are common. To fix this, you could examine the comparison that controls the loop and do some analysis to decide whether its iteration count is bounded.
In practice, I found the simplest place to start is not with analysis of looping constructs. I found it better to grab the address of each backward branch, map it to a source code line, and report each one separately. That is good enough to check manually and see whether there is a real problem.
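That reporting pass can be sketched as follows, under assumed data shapes: neither the tuple layout nor `line_table` comes from a real toolchain, but a compiler listing or map file would supply the address-to-source-line mapping.

```python
# Hypothetical shapes: `disassembly` is a list of (address, mnemonic,
# branch_target) tuples, with branch_target None for non-branches;
# `line_table` maps instruction addresses to source locations.
def report_backward_branches(disassembly, line_table):
    """Return (address, source_line) for every branch that jumps backward."""
    findings = []
    for address, mnemonic, target in disassembly:
        if mnemonic == "GOTO" and target is not None and target <= address:
            findings.append((address, line_table.get(address, "?")))
    return findings

disasm = [
    (0x10, "MOVF", None),
    (0x11, "BTFSS", None),
    (0x12, "GOTO", 0x10),   # backward branch: a potential loop
    (0x13, "RETLW", None),
]
lines = {0x12: "isr.c:42"}  # invented source location for illustration
for address, source in report_backward_branches(disasm, lines):
    print(f"backward branch at 0x{address:02X} -> {source}")
```

Each reported pair is one candidate loop for a human to eyeball.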
Then, as a second pass, I added the ability to flag a region of addresses - mapping the source code lines to addresses, usually with another subroutine - as okay for looping, with an assumed maximum iteration count. Step 1 is modified to check for this, and to treat the loop's branch instruction as a place to stop examining instructions (so the analysis terminates). The scan also multiplies the cycle count of the flagged region by that maximum iteration count.
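The bookkeeping for that second pass can be sketched like this. The data shapes are again invented for illustration, and the example assumes straight-line code in which each flagged region's instructions simply repeat up to the stated maximum.

```python
# `cycles` maps each instruction address to its cycle cost; `loop_regions`
# holds (first_addr, branch_addr, max_iterations) entries supplied by the
# programmer for loops known to be bounded.
def bounded_worst_case(cycles, loop_regions):
    """Sum cycle costs, weighting flagged loop bodies by their iteration cap."""
    total = 0
    for addr, cost in sorted(cycles.items()):
        weight = 1
        for first, branch, max_iter in loop_regions:
            if first <= addr <= branch:
                weight = max_iter   # instruction repeats up to max_iter times
        total += cost * weight
    return total

# Four instructions; the middle two form a loop body whose closing
# branch at 0x02 is flagged to run at most 8 times.
cycle_map = {0x00: 1, 0x01: 1, 0x02: 2, 0x03: 2}
print(bounded_worst_case(cycle_map, [(0x01, 0x02, 8)]))  # 1 + 8 + 16 + 2 = 27
```

Unflagged backward branches still get reported as in the first pass, so nothing slips through silently.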
So there you have it: how to automatically estimate the worst-case duration of an interrupt service routine (or any other routine).