On Thu, Oct 23, 2008 at 7:49 PM, Roger Brobst <rogerb@cadence.com> wrote:
> > So if error_diag_len is supposed to mean the size of the string
> > buffer, why isn't sizeof(buf) used? Shouldn't it be called
> > error_diag_size to be less confusing?
> Yes, *IF* the intent was to accept the buffer size, then
> error_diag_size would be the better name. Since that's not the
> intent, it wasn't chosen.
I was just wondering why the buffer-size concept was not chosen; I
think it is simpler and less error-prone. But OK, I understand that
the intent was to accept the maximum string length. Now let's get
back to the example:

    char buf[DRMAA_ERROR_STRING_BUFFER];
    drmaa_init(NULL, buf, sizeof(buf) - 1);

How would you implement the string copying in drmaa_init()? One
would probably use the following code:

    int drmaa_init(..., char *err_diag, size_t err_diag_len)
    {
        strncpy(err_diag, SRC, err_diag_len);
    }

Now what happens if strlen(SRC) >= sizeof(buf) - 1? Who is
responsible for adding the terminating '\0'?
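For illustration, here is one way the callee could guarantee
termination under that convention. This is only a sketch under my
assumptions, not the actual DRMAA implementation; copy_diag and src
are made-up names, not part of the DRMAA API:

    #include <string.h>

    /* Sketch: assumes err_diag_len is the maximum string length,
     * i.e. the caller reserved err_diag_len + 1 bytes, as in the
     * sizeof(buf) - 1 call above. */
    static void copy_diag(char *err_diag, size_t err_diag_len,
                          const char *src)
    {
        /* strncpy() leaves the destination unterminated whenever
         * strlen(src) >= err_diag_len, so terminate explicitly. */
        strncpy(err_diag, src, err_diag_len);

        /* Writing to index err_diag_len is in bounds only because
         * the buffer is err_diag_len + 1 bytes long. */
        err_diag[err_diag_len] = '\0';
    }

-- 
Piotr Domagalski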