A delay routine that I am writing for use on a PIC32

I have an issue with a delay routine that I am writing for use on a PIC32, the PIC32MX664F128H (http://www.kynix.com/uploadfiles/pdf8798/PIC32MX664F128H-I2fMR.pdf) in particular.

I have two separate delay functions, which use the same lower-level delay routine: one for a millisecond delay and another for a microsecond delay. Below follows the code for each function:

void TIMER_DelayMillisec(unsigned int duration)
{
    unsigned int timerCount;

    /* Timer counter for 1 millisecond delay */
    timerCount = (MILLISECOND * systemClock);

    /* Run the timer routine */
    timerRoutine(duration, timerCount);
} /* TIMER_DelayMillisec() */

and...

void TIMER_DelayMicrosec(unsigned int duration)
{
    unsigned int timerCount;

    /* Timer counter for 1 microsecond delay */
    timerCount = (MICROSECOND * systemClock);

    /* Run the timer routine */
    timerRoutine(duration, timerCount);
} /* TIMER_DelayMicrosec() */

In the above two functions, the values for the defines are MILLISECOND = 0.001 and MICROSECOND = 0.000001. The variable systemClock is initialized at startup; in this particular case, its value is 80 000 000 (for 80 MHz).
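For reference, the relevant definitions look roughly like this (the declaration style and the startup assignment are my own paraphrase of the description above, not copied verbatim from the project):

#define MILLISECOND 0.001     /* scale factor: seconds per millisecond */
#define MICROSECOND 0.000001  /* scale factor: seconds per microsecond */

/* Set during startup; 80 000 000 for the 80 MHz clock used here. */
static unsigned int systemClock;

Note that an expression such as (MICROSECOND * systemClock) is evaluated in double-precision floating point and truncated toward zero when stored into an unsigned int, so it is worth verifying that it really yields 80 and not 79.

Below follows the code for the lower-level delay routine: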

static void timerRoutine(unsigned int duration,
                         unsigned int timerCount)
{
    unsigned int divCount = 2u;

    /* If the timer counter is less than 1, round it up to the nearest
     * integer, i.e. 1 */
    if (timerCount < 1)
    {
        timerCount = 1;
    } /* if */

    /* If the timerCount value is larger than 0xFFFF,
     * divide the value by 2 until it is within this range. */
    if (timerCount > 0xFFFF)
    {
        timerCount = timerCount / divCount;
        duration = duration * divCount;
    }

    /* Set the PR register to the max value */
    PR1 = 0xFFFF;

    /* Enable the timer */
    T1CONbits.ON = 1;
    TMR1 = 0u;

    /* Loop the amount of times required for the delay */
    while (duration > 0)
    {
        /* Wait until timer reaches the required value */
        while (TMR1 < timerCount);
        duration--;

        /* Reset timer */
        TMR1 = 0u;
    } /* while */

    /* Disable the timer */
    T1CONbits.ON = 0;
} /* timerRoutine() */

Some explanation on the above routine: The delay routine uses the PIC32's Timer1 (which is a 16-bit timer). The TMR1 register gets incremented with each tick of the peripheral bus clock (PBCLK), which is set to 80 MHz. Therefore, the period of one clock tick is:

    t_tick = 1 / 80 MHz = 12.5 ns

For example, for a 1 millisecond delay, the equivalent amount of clock ticks would then be:

    1 ms / 12.5 ns = 80 000 ticks

thus,

    timerCount = MILLISECOND * systemClock = 0.001 * 80 000 000 = 80 000
Therefore, timerCount is the number of clock ticks for the "unit" time delay (either a millisecond or a microsecond). The timerRoutine() function uses this value to perform the delay by comparing the TMR1 register against it: when TMR1 reaches the value of timerCount, the correct amount of time for one "unit" delay has passed, and TMR1 is re-initialized. The total delay, denoted by the variable duration, is obtained by repeating this loop duration times. The part where the values of timerCount and duration are adjusted (by division and multiplication, respectively) handles long delays, where the value of timerCount would exceed the maximum 16-bit value of 0xFFFF.
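To make the scaling step concrete, here is a standalone sketch of the same arithmetic (plain C, no PIC32 hardware involved); the numbers are the 1 millisecond case from above:

#include <stdio.h>

int main(void)
{
    /* 1 ms unit delay at 80 MHz PBCLK: 80 000 ticks, too large for
     * the 16-bit TMR1 comparison value. */
    unsigned int timerCount = 80000u;
    unsigned int duration = 1u;   /* 1 ms requested */
    unsigned int divCount = 2u;

    /* Same adjustment as in timerRoutine(): halve the compare value
     * and double the iteration count, keeping the total tick count
     * (duration * timerCount) unchanged. */
    if (timerCount > 0xFFFF)
    {
        timerCount = timerCount / divCount;
        duration = duration * divCount;
    }

    /* Prints: timerCount = 40000, duration = 2 */
    printf("timerCount = %u, duration = %u\n", timerCount, duration);
    return 0;
}

Note that a single halving is only sufficient while the unit tick count is at most 2 x 0xFFFF = 131 070; at 80 MHz that covers the 1 millisecond case of 80 000 ticks.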

These routines are tested by toggling one of the PIC32's I/O pins at a specified rate: the delay function under test is called between the changes of the I/O pin state, and the actual time between the state changes is then measured:

unsigned int delay = 1u;
while (TRUE)
{
    IOPin = HIGH;
    TIMER_DelayMillisec(delay);
    IOPin = LOW;
    TIMER_DelayMillisec(delay);
}

The same concept is used to test the microsecond delay routine, by calling TIMER_DelayMicrosec() instead of TIMER_DelayMillisec() with the required delay value.
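For completeness, the microsecond test loop then looks like this (identical structure; the 1000 microsecond value matches the measurement discussed below):

unsigned int delay = 1000u;  /* 1000 microseconds, which should equal 1 ms */
while (TRUE)
{
    IOPin = HIGH;
    TIMER_DelayMicrosec(delay);
    IOPin = LOW;
    TIMER_DelayMicrosec(delay);
}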

Herewith follow some test results. First, the millisecond delay routine is tested. For 1 millisecond:

[scope capture: pin toggling with 1 ms HIGH and 1 ms LOW pulses]

For 5 milliseconds:

[scope capture: pin toggling with 5 ms HIGH and 5 ms LOW pulses]
It can be seen from the above two measurements that the millisecond delay routine is essentially spot-on, with equal delay between the HIGH and LOW pulses.

However, something totally different happens for the microsecond delay function. For 1000 microseconds (which should equal 1 millisecond as above):

[scope capture: pin toggling with the HIGH time visibly longer than the LOW time]

As can be seen from the above measurement for the microsecond delay function, there is a difference between the delay times of the HIGH and LOW pulses: the ON-to-OFF delay is significantly longer than the OFF-to-ON delay. Furthermore, this effect becomes more prominent as the delay time decreases.

Consider that, for a delay of 1 millisecond, the millisecond delay routine counts exactly the same total number of timer ticks as the microsecond delay routine does for a delay of 1000 microseconds. Therefore, the actual delay times should be equal.
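Concretely, using the values the code computes:

    1 ms:     timerCount = 80 000, halved to 40 000 with duration = 2:  2 * 40 000 = 80 000 ticks
    1000 us:  timerCount = 80 with duration = 1000:                  1000 * 80     = 80 000 ticks

The total tick count is identical, although it is reached in 2 loop iterations in the millisecond case and in 1000 iterations in the microsecond case.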

The only thing that comes to mind that could cause this variance in actual delay times is the overhead incurred over the whole of the delay routine. Consequently, I would have expected the "drifting" effect to be worse for the millisecond delay routine than for the microsecond one, since a longer delay involves more iterations of the delay loop over which overhead can accumulate. However, rather counter-intuitively, the observed results point to the exact opposite, which indicates that something else is probably causing the "drifting" effect.
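To make the overhead argument concrete, here is the inner loop again with the regions that TMR1 does not account for marked (the annotations are my own reading of the code, not measured figures):

    while (duration > 0)
    {
        /* Counted: this spin lasts timerCount PBCLK ticks. */
        while (TMR1 < timerCount);

        /* Not counted: TMR1 keeps running during the decrement and is
         * then reset, so any ticks past timerCount are discarded and
         * simply lengthen the real elapsed time of this iteration. */
        duration--;
        TMR1 = 0u;
    } /* while */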

What could be happening here and how can this be fixed?
