Instead of retrieving every single record individually, a batch of records can be retrieved with one FETCH statement. The host structure must be defined carefully: it must be an array, either a multiple-occurrence data structure (OCCURS) or a data structure defined with DIM. The structure must be a data structure in either case.
The number of rows actually retrieved can be read from the SQL variable SQLERRD(3).
This can be optimized by taking the size of the data structure into account when specifying the dimension of the array.
D anzahl          S             10I 0
D stm             S            500A
D accountDS       DS                  dim(99) qualified
D accountId                      9P 0

 /free
   stm = 'SELECT DECIMAL(account, 9, 0) ' +
         'FROM accounts ' +
         'WHERE type = ''accountType6''';

   exec sql PREPARE stm FROM :stm;
   exec sql DECLARE c CURSOR FOR stm;
   exec sql OPEN c;

   if (sqlcod < 0);
     return -1;
   endif;

   exec sql FETCH FROM c FOR 99 ROWS INTO :accountDS;

   if (sqlcod >= 0);        // a little bit imprecise here
     anzahl = sqlerrd(3);   // number of retrieved records
     // do more processing
   else;
     anzahl = -1;
   endif;

   exec sql CLOSE c;
 /end-free
The SQL date and time format must match the RPG date and time format.
exec sql SET OPTION DATFMT = *ISO;
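For example, if a fetched column has the DATE type, the RPG default date format and the SQL option should agree, otherwise the fetch fails with a conversion error. A minimal free-format sketch, assuming a hypothetical table orders with a DATE column ordered_on:

```rpgle
ctl-opt datfmt(*iso);                 // RPG default date format

exec sql SET OPTION DATFMT = *ISO;    // matching SQL date format

dcl-ds orderDS qualified dim(99);
  orderedOn date(*iso);               // same format on the RPG side
end-ds;

exec sql DECLARE cOrders CURSOR FOR
           SELECT ordered_on FROM orders;
exec sql OPEN cOrders;
exec sql FETCH FROM cOrders FOR 99 ROWS INTO :orderDS;
```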
The data structure must be defined with a fixed number of elements: the SQL precompiler rejects a variable as the dimension.
Don't do this:
dcl-s size int(10) inz(100);

dcl-ds data qualified dim(size);   // precompiler error: dim() needs a constant
  name char(10);
end-ds;
Code it the other way around:
dcl-ds data qualified dim(100);
  name char(10);
end-ds;

dcl-s size int(10) inz(%elem(data));
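With the constant dimension in place, the FETCH statement can be kept in sync with the data structure by passing the derived size as a host variable. A sketch, assuming a cursor c has already been declared and opened:

```rpgle
dcl-ds data qualified dim(100);
  name char(10);
end-ds;

dcl-s size int(10) inz(%elem(data));

// %elem guarantees the requested row count matches the array dimension
exec sql FETCH FROM c FOR :size ROWS INTO :data;
// sqlerrd(3) then holds the number of rows actually returned
```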
Because the system makes one program call for every fetch, one should try to minimize the number of calls needed to process all of the data. This can be done by fetching multiple rows per call. But the question is: how many?
It seems that the amount of data that can be fetched with one SQL call is limited to 32 KB. So if your data structure is 100 bytes in size, you could specify dim(320) to get a near-optimal result.
320 * 100 bytes = 32000 bytes
One could argue that you could squeeze even more out of it by sizing the array to exactly 32768 bytes. But I would not do that, because the system needs some bytes for each table field returned by the call and some additional bytes for null values. That leaves 32000 bytes as a good limit (and one that is much easier to calculate with).
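Putting the rule of thumb together: choose the dimension as 32000 divided by the row size. A sketch with a hypothetical 100-byte row layout (field names are assumptions, not from the original example):

```rpgle
// hypothetical row layout, 100 bytes in total
dcl-ds accounts qualified dim(320);   // 320 * 100 bytes = 32000 bytes
  account packed(9:0);                //  5 bytes
  name char(95);                      // 95 bytes
end-ds;

dcl-s rowCount int(10) inz(%elem(accounts));

exec sql FETCH FROM c FOR :rowCount ROWS INTO :accounts;
```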