Revert the serial_recv() timeout handling to how it used to be before
yesterday's changes (i.e. before rev. 1.10 of ser_posix.c), that is,
exit(1) in case of a timeout.  Previously, the upper layers didn't see
the timeout at all.
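
For context, a minimal sketch of the behaviour being restored (names,
structure and the timeout value below are illustrative assumptions, not
the actual ser_posix.c code): on a read timeout the routine terminates
the program instead of returning an error to its caller.

/* Hedged sketch only: approximate shape of the pre-rev-1.10 timeout
 * handling being restored in ser_posix.c.  Names and the 5 s timeout
 * are assumptions for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/select.h>

static const char *progname = "avrdude";   /* stand-in for avrdude's global */

static int sketch_serial_recv(int fd, unsigned char *buf, size_t buflen)
{
  while (buflen > 0) {
    fd_set rfds;
    struct timeval to = { 5, 0 };           /* assumed 5 second timeout */

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    int nfds = select(fd + 1, &rfds, NULL, NULL, &to);
    if (nfds == 0) {
      /* Restored behaviour: a timeout terminates the whole program,
       * so callers such as stk500v2_recv() never get to see it. */
      fprintf(stderr, "%s: serial_recv(): programmer is not responding\n",
              progname);
      exit(1);
    }
    if (nfds < 0)
      return -1;                             /* genuine select() error */

    ssize_t rc = read(fd, buf, buflen);
    if (rc <= 0)
      return -1;                             /* read error or EOF */
    buf += rc;
    buflen -= rc;
  }
  return 0;
}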

It's quite possible that some of these drivers could handle a timeout
more intelligently, though.  At least for the rather sophisticated
STK500v2 protocol, I think it should be possible to retry the request.
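
Roughly, the retry idea would look something like this (hypothetical
sketch only, not part of this commit; it assumes the surrounding
stk500v2.c context and that stk500v2_recv() reports a timeout as -1
rather than exiting):

/* Hypothetical sketch, assuming the stk500v2.c context (PROGRAMMER,
 * progname, stk500v2_send(), stk500v2_recv()).  RETRY_LIMIT and the
 * wrapper itself are illustrative only. */
#define RETRY_LIMIT 3

static int stk500v2_command_retry(PROGRAMMER * pgm, unsigned char * buf,
                                  size_t len, size_t maxlen)
{
  int attempt;

  for (attempt = 1; attempt <= RETRY_LIMIT; attempt++) {
    stk500v2_send(pgm, buf, len);            /* (re)send the request */
    if (stk500v2_recv(pgm, buf, maxlen) >= 0)
      return 0;                              /* got a complete answer */

    fprintf(stderr, "%s: timeout, retrying command (%d of %d)\n",
            progname, attempt, RETRY_LIMIT);
  }
  return -1;                                 /* give up after RETRY_LIMIT */
}

Whether re-sending is actually safe depends on the command being
idempotent, so a real implementation would probably have to be more
selective about which requests it retries.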


git-svn-id: svn://svn.savannah.nongnu.org/avrdude/trunk/avrdude@460 81a1dc3b-b13d-400b-aceb-764788c761c2
Author: joerg_wunsch
Date:   2005-05-11 17:09:22 +00:00
Commit: d2825e00fd (parent e44b866716)

4 changed files with 33 additions and 4 deletions

stk500v2.c

@@ -128,7 +128,8 @@ static int stk500v2_recv(PROGRAMMER * pgm, unsigned char msg[], size_t maxsize)
   tstart = tv.tv_sec;
 
   while ( (state != sDONE ) && (!timeout) ) {
-    serial_recv(pgm->fd, &c, 1);
+    if (serial_recv(pgm->fd, &c, 1) < 0)
+      goto timedout;
     DEBUG("0x%02x ",c);
     checksum ^= c;
 
@@ -203,6 +204,7 @@ static int stk500v2_recv(PROGRAMMER * pgm, unsigned char msg[], size_t maxsize)
     gettimeofday(&tv, NULL);
     tnow = tv.tv_sec;
     if (tnow-tstart > timeoutval) {			// wuff - signed/unsigned/overflow
+    timedout:
       fprintf(stderr, "%s: stk500_2_ReceiveMessage(): timeout\n",
               progname);
       return -1;