Timing an Event in DataFlex
by Curtis Krauskopf
Why Is This So Inconsistent?
The program's result changes depending on exactly when the program started running. Even if you manage to stop the program after exactly 0.5 seconds every time, you will still get one of two different results.
This diagram explains what is happening when 0 seconds are reported.
In this diagram, the program starts and the user gets bored all within the same second on the system's clock. The time reported by the program is 0 seconds elapsed even though 0.75 seconds really elapsed while the program ran.
The next diagram shows how a short delay can be reported as 1 elapsed second.
In this diagram, the program started running at :06 seconds (according to the system clock) and stopped running at :07 seconds. Even though only 0.25 seconds elapsed between the start and stop times, the computer reports that one second elapsed.
Knowing this, I can show you an example of a third problem. Here's a diagram of an event that lasted almost 2 seconds, yet the computer will report that it lasted only 1 second:
In this diagram, the program started at :06 seconds and stopped right before :08 seconds. The elapsed time is about 1.95 seconds, but the computer reports that only one second elapsed.
It's All Relative
This entire exercise brings up some good points (rules of thumb) to consider when timing events on a computer:
- Repeat the test to verify that the results are reproducible.
- Consider the accuracy of your measuring device.
- Events that take a while don't need an accurate clock.
Timed tests must be repeated to verify their validity. As we will see in a later example, the operating system can steal time slices that make a program appear to run longer than it actually did. And as the diagrams above demonstrate, the same test can produce two different results depending on exactly when the program was started. One way to detect this is to run the test multiple times. For short tests, running them as many as 10 times is a good rule of thumb. Longer tests should be run at least four times.
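As a sketch of what that repetition might look like (in Python rather than DataFlex, with hypothetical names), the harness below runs the same code several times and reports the spread, so a single outlier run stands out instead of being trusted:

    import time

    def time_repeatedly(func, runs=10):
        # Collect the elapsed wall-clock time of each run so that an
        # outlier (e.g., the OS stealing a time slice) stands out.
        results = []
        for _ in range(runs):
            start = time.perf_counter()
            func()
            results.append(time.perf_counter() - start)
        return min(results), sorted(results)[len(results) // 2], max(results)

    def work():
        sum(range(100_000))  # stand-in for the code being timed

    fastest, median, slowest = time_repeatedly(work)
    print(f"min {fastest:.4f} s  median {median:.4f} s  max {slowest:.4f} s")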
The reason we got inaccurate results is that the SYSDATE and SYSDATE4 commands have a resolution of one second. This means the values they return are accurate to, at best, one second. But the same concept applies no matter what resolution they provide.
Pretend that SYSDATE and SYSDATE4 were able to report the system's clock to within 0.10 (1/10th) of a second. We would have the same problem (albeit harder to reproduce or notice) for events that lasted 0.05 (1/20th) of a second. In that example, some events would be reported as having run in 0 seconds and others would be reported as having run in 0.10 seconds.
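The truncation sketch from earlier generalizes to any resolution. Assuming a hypothetical clock that ticks every 0.10 seconds, the same 0.05-second event is reported as either 0.0 or 0.1 seconds depending on where it falls relative to a tick:

    import math

    def reported_elapsed(start, stop, resolution):
        # Quantize both readings to the clock's resolution before subtracting.
        ticks = math.floor(stop / resolution) - math.floor(start / resolution)
        return ticks * resolution

    # A 0.05 s event measured with a 0.10 s clock, at two different start phases:
    print(reported_elapsed(3.21, 3.26, 0.10))  # both reads in the same tick -> 0.0
    print(reported_elapsed(3.27, 3.32, 0.10))  # straddles the 3.30 tick     -> 0.1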
And that example highlights the last concept: events that take a while don't need an accurate clock. Does it really matter to us if an event that takes 120.75 seconds is reported as having run in 120 seconds or 121 seconds? In most cases, no, it doesn't: the one-second resolution error is less than 1% of the total time. If two algorithms that I'm comparing are both within 1% of each other's time, then it really doesn't make any difference (from a time perspective) which algorithm I use.
Sometimes it is important to be able to fairly and accurately time two events. Even though the SYSDATE and SYSDATE4 commands have a resolution of 1 second, there is a way to at least get consistent results.
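The article hasn't described that technique yet, but one common approach (an assumption on my part, not necessarily the author's method) is to busy-wait until the clock ticks over before starting the timed event, so every run begins at the same phase of the second. A Python sketch of the idea:

    import time

    def wait_for_tick():
        # Busy-wait until the whole-second reading changes, so the timed
        # event always starts just after a tick (a consistent phase).
        start_second = int(time.time())
        while int(time.time()) == start_second:
            pass

    wait_for_tick()            # align to the next tick of the 1-second clock
    begin = int(time.time())   # whole-second reading, like SYSDATE
    sum(range(1_000_000))      # stand-in for the event being timed
    print(int(time.time()) - begin, "second(s) reported")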
Copyright 2003-2010 The Database Managers, Inc.