Two tools are presented in this installment. The first provides a base for understanding many of the methods I use when solving design issues. The second, hopefully, shows how tables can eliminate the computation of complex operations.

Mathematics is a fun subject. Many people shun the topic because it seems too complicated, or doesn’t seem to have a practical use. I am speaking of higher order mathematics, not just the summation processes used to get people through a checkout, or to balance the books of a dealership. Mathematics is an art of attempting to explain a physical state using combinational logic to predict behavior. The real fun part is when something doesn’t fit in with existing systems, and a new methodology is developed. I suggest, if you get a chance, reading Flatland: A Romance of Many Dimensions, and Flatterland: Like Flatland, Only More So. They are books that provide a good foundation of mathematical theory in the form of a saga. Remembering that computer science was once part of the world of mathematics and not engineering is where we step off into the void.

**Computers are stupid!**

There, I said it! They only serve to display the inadequacies of the programmer. Also, they are malicious in their adherence to a task. With that said, computers don’t care about what people see as important. When programming a computer, if efficiency and speed are the goals, the programmer must think like a computer. The computer program must speak at the level of the computer, not the programmer.

**Higher order math**

A mechanism of higher math is the use of different numbering systems. Some are familiar to programmers because they get used often: hexadecimal and decimal numbering systems are commonly used, and degrees, radians, and grads are units available on most calculators. Centigrade to Fahrenheit conversions are also quite common. Differing measurement units and number systems are familiar to most people, even if they are unaware of it. Back in my early days of video game development, I had my "ah ha" moment, when the light came on. I was attempting to do coordinate geometry operations using sin and cos functions, and having trouble with angle measurements. I was using degree measurements to access a table of values, which became clunky and seemed to require too many conversions to get a result. The table was also enormous, which was not a good idea, since the whole thing (program and data) had to fit on an 8K EPROM.

**The enlightenment**

My enlightenment came in two steps. First, the table only had to be a quarter wave; the other sections are mirrored instances of the first (remember, watch for patterns). The second step is greater because it triggered a new line of thought for me. It started when I wanted to better fit the table into a bounded block, and was trying to use radians instead of degrees. The realization that day was: computers don’t care about units. So how about, instead of degrees, radians, or grads, a new unit where the familiar 360 degrees (or 2π radians) is equal to 32768 (2^15), so a full circle fits in the low 15 bits of a signed 16 bit integer. The advantage was that all angular calculations become just integer math, and obtaining the current angle only requires ANDing the number with 0x7fff. The other advantage was that the top two bits of the angle determine which quadrant to use for the sin and cos lookup. The values in the table were another piece of the puzzle. I didn’t want to use floating point operations, because they require two conversions, and back in the day I didn’t have the luxury of a floating point processor, so floating point would be compute intensive. The table would be fixed point and, to keep it simple, just a left shift of the value, then a right shift after the operation. Last, I wanted smooth movement, not jumpy, and I also didn’t want a table with 8192 entries. The solution was to use a modest number of points, then interpolate between them. As in calculus, the approximation is close, and gets closer as more points are used; experimentation would provide the proper threshold for accuracy. The functions sin and cos are actually identical except that cos is 90 degrees ahead of sin, so cos(t) = sin(t + 90) if I were using regular cardinal degrees. That gives everything needed to build the code.

**The code…**

```c
/************************************************************************************************
  math.c
  History: [JAC] Original Creation when the earth was cooling
  Copyright(c) 2016 Chaney Firmware
************************************************************************************************/
#include "types.h"

/* quarter wave sin table with 33 points (1 extra for max case), times 32768 */
SWord sTbl[] = {
        0,  1607,  3211,  4808,  6392,  7961,  9512, 11039,
    12539, 14010, 15446, 16846, 18204, 19519, 20787, 22005,
    23170, 24279, 25330, 26319, 27245, 28106, 28898, 29621,
    30273, 30852, 31357, 31785, 32138, 32413, 32610, 32728,
    32767
};

SWord iSin(SWord r)
{
    bool  s;
    SWord n[2], d;

    s = (r & 16384) == 0;                   // quadrant 1,2 or 3,4 (sign, taken before mirroring)
    r = r & 8192 ? ~r : r;                  // quadrant 1,3 or 2,4 (mirror the quarter wave)
    d = r & 255;
    r = (r >> 8) & 31;                      // 32 entries in the table
    n[0] = sTbl[r];
    n[1] = sTbl[r + 1];                     // get the boundary values
    d = n[0] + (((n[1] - n[0]) * d) >> 8);  // interpolate the intermediate value
    return (s ? d : (~d + 1));              // change sign for proper quadrant
}

SWord iCos(SWord r)
{
    return iSin((r + 8192) & 32767);
}
/* end of file */
```

**Interpolation**

Interpolation is a powerful tool, because it reduces the amount of storage needed while retaining a high degree of accuracy. The tradeoff is the need for the interpolation calculation, which has a penalty of a single multiply operation. This could be a good time to discuss cost of operations. Program design is a continuous compromise. Each improvement in one area often has a penalty in another. Interpolating a table reduces the data storage requirement, but increases the amount of processing needed.

**The cost of an instruction**

Because each element has a cost, it is always good practice to establish “prices” for features and changes. Understanding how computers do what they do is a valuable part of this decision process, and being conscious of that cost while programming will generally improve efficiency. At the code level, instructions fall into three categories: fast, moderate, and slow. Fast instructions are the simplest tasks the computer hardware has built in, needing only a single pass through the logic. Move, shift, AND/OR/NOT, increment, and add fall into this category. Next are the moderate instructions, which take more than a single hardware operation to complete. Subtract takes a NOT, an increment, and an add (invert and add one to get the negative, then add). The more work needed to perform an operation, the slower the instruction. Multiply and divide require an add and a shift for each bit, so they become slow operations. This cost is at the root of my avoidance of floating point operations whenever possible. The expense is a conversion to floating point, then the operation itself; additionally, operations like add are made more complex because there is the extra requirement of aligning the radix points (exponents) first. These penalties are often overlooked because, at the high level, A = B + C looks nearly identical in complexity to A = B * C, and without knowing that B * C costs significantly more processing time than B + C, there wouldn’t seem to be any opportunity for improvement.

The solution is to do the math up front. Many operations that require complex calculation genuinely depend on a single variable, and can be performed with a lookup table. The table can be built offline, using a spreadsheet to calculate the values; combined with an interpolated table lookup, the result is a very fast and reasonably accurate device.

**Tables**

The simple linear interpolation used in a table with fixed intervals is good for performing conversions, particularly in the realm of sensors that behave linearly or as a second or third order polynomial. But there are often cases where two independent variables influence an equation, and a linear table is no longer useful without performing a multiplication of two results. In these cases it is possible to use a two dimensional array with one variable on the X axis and the other on the Y axis; the point of intersection contains the calculated value. This is performed with a planar interpolation. For reference, the formulas I use for this are available in the CRC Standard Math Tables, which should be available at larger libraries (if you don’t already own a copy). The two equations are similar in form: a linear interpolation between two points, and a planar interpolation centered within four points.

A linear interpolation between two points, where num is the known variable and val is the result:

(Y1 − Y0) / (X1 − X0) = (val − Y0) / (num − X0)  ⇒  val = Y0 + ((num − X0) × (Y1 − Y0)) / (X1 − X0)

It cleans up nicely in code as will be shown.

The planar interpolation of four points defined as (X0, Y0), (X1, Y0), (X0, Y1), and (X1, Y1), where numX and numY are the known variables and val is the result (the Z values are the content at each point):

val = (Z0 × (X1 − numX) × (Y1 − numY) + Z1 × (numX − X0) × (Y1 − numY) + Z2 × (X1 − numX) × (numY − Y0) + Z3 × (numX − X0) × (numY − Y0)) / ((X1 − X0) × (Y1 − Y0))

Seems like a lot of multiplies, but it reduces down nicely in practice, and the expense is offset considerably by the effective result.

**Units of Measure**

Before I get to the code for table lookup, which is the basis of the calibration operation, I want to offer what I find to be a useful set of measurement units. Remember, computers don’t care what units are used, as long as they can be used efficiently in operations. The requirements I used in developing my set of units were:

- keep everything in a 16 bit value. This way the table operations can be generic and always return a 16 bit value.
- provide sufficient accuracy for a majority of operations (significant digits).
- maintain consistent fixed point to reduce the number of corrections at the end of an operation to adjust the decimal place.

The units I use are:

- Angle is in binary radians as described earlier, where 32768 equates to 360 degrees
- Temperature is in degrees K * 100 (unsigned, because K doesn’t go negative), where 27315 equates to 0 C
- Pressure is in kPa (kilopascals) * 100 (again, absolute pressure), where 65535 equates to 6.47 atmospheres
- Ratios are * 1024 (a shift), where 100% = 1024
- Voltage is maintained as mV (V * 1000)

Error for the units is as follows:

- angles (error +/- 0.0109 degrees)
- ratios < 6399.9% for unsigned or 3199.9% for signed (error +/- 0.098%)
- temperature < 655.35 degrees K or 382.20 degrees C (error +/- 0.01 degree C)
- pressure < 655.35 kPa or 6.47 atm (error +/- 10Pa)
- voltage +/- 32.767v (usually 5v reference)

**At last the code…**

```c
/************************************************************************************************
  tables.c
  History: [JAC] Original Creation when the earth was cooling
  Copyright(c) 2016 Chaney Firmware
************************************************************************************************/
#include "types.h"

#define ONE 1024

/* input parameters are the address of the table, the number of cells, and the locater value x */
SWord getTableLin(SWord tbl[], UWord wid, UWord x)
{
    UWord dX, iX;
    SWord z[2], rVal;
    SLong base;

    dX = ((wid != 0) && (wid < ONE)) ? (ONE / wid) : 1; // dX is the step from cell to cell
    iX = x % dX;
    x /= dX;                    // x is the left cell of the pair, iX is the offset between cells
    z[0] = tbl[x];              // get left ref
    z[1] = (x + 1) < wid ? tbl[x + 1]
                         : z[0] + z[0] - tbl[x - 1];    // create a phantom cell at the far end
    base = ((SLong)z[1] - (SLong)z[0]) * (SLong)iX / (SLong)dX; // perform interpolation
    rVal = z[0] + (SWord)base;  // add to left ref
    return rVal;
}

/* input parms are the address of the table, the width of the table, the height of the table,
   and the locaters x and y */
SWord getTablePln(SWord tbl[], UWord wid, UWord hgt, UWord x, UWord y)
{
    UWord dX, iX, dY, iY;
    SWord z[4], rVal;
    SLong offs, base;
    SLong K[4];

    dX = ((wid != 0) && (wid < ONE)) ? (ONE / wid) : 1; // dX is the step from cell to cell
    iX = x % dX;
    x /= dX;                    // x is the left cell of the pair, iX is the offset between cells
    dY = ((hgt != 0) && (hgt < ONE)) ? (ONE / hgt) : 1; // repeat for y
    iY = y % dY;
    y /= dY;
    offs = wid * y + x;         // get the table offset for the upper left corner of the set
    z[0] = tbl[offs];           // get upper left ref
    z[1] = (x + 1) < wid ? tbl[offs + 1]
                         : z[0] + z[0] - tbl[offs - 1];   // create a phantom cell at the far end
    z[2] = (y + 1) < hgt ? tbl[offs + wid]
                         : z[0] + z[0] - tbl[offs - wid]; // create a phantom cell below the bottom
    z[3] = ((y + 1) < hgt) && ((x + 1) < wid) ? tbl[offs + wid + 1]
                                              : z[2] + z[0] - z[1]; // sneaky way
    /**************************************************************************************
     * The equation for a planar surface interpolation for NxM points is
     *   Z0(x2-X)(y2-Y) + Z1(X-x1)(y2-Y) + Z2(x2-X)(Y-y1) + Z3(X-x1)(Y-y1)
     *   -----------------------------------------------------------------
     *                          (x2-x1)(y2-y1)
     * X and Y are offsets.  Z is the value of the corner
     **************************************************************************************/
    K[0] = (SLong)(dX - iX) * (SLong)(dY - iY) * (SLong)z[0];
    K[1] = (SLong)iX * (SLong)(dY - iY) * (SLong)z[1];
    K[2] = (SLong)(dX - iX) * (SLong)iY * (SLong)z[2];
    K[3] = (SLong)iX * (SLong)iY * (SLong)z[3];
    base = (SLong)dX * (SLong)dY;
    rVal = (SWord)((K[0] + K[1] + K[2] + K[3]) / base);
    return rVal;
}
/* end of file */
```

Some things to note: the use of the constant ONE. Remember, I said ratios were 100% = 1024. The constant ONE is based on that 100% value, so tables always run from 0 to 100%, and the width or height is independent of the number of cells in either direction.

…but wait. I said earlier to watch for patterns, and I see one here. The two dimensional version looks like a doubling of the linear version, including the offsets and extensions, so how can that be used to advantage? Maybe I can make a single function that handles either linear or two dimensional tables: make all the tables two dimensional, and let linear tables use y as the index of a numbered row. Also, not all tables should use interpolation between cells; often the values should remain discrete elements. The other thing to note is that the table pointer links to a physical memory address. I like to keep my code as uncoupled as possible, so how about using an offset value, and letting a routine determine where the physical calibration data is located with a “getCal” kind of function. Height and width of the tables don’t change, so a structure can define each table with its “offset”, “width”, and “height”. To invoke the right type of conversion (discrete, linear, or planar), a table type is also needed. After some juggling, the single routine becomes only slightly different.

**Updated code**

```c
/************************************************************************************************
  tables.c
  History: [JAC] Original Creation when the earth was cooling
  Copyright(c) 2016 Chaney Firmware
************************************************************************************************/
#include "types.h"

#define ONE 1024

struct tblTbl {
    UByte type;     // 0 = discrete, 1 = linear, 2 = planar
    UByte xLen;     // x size - width of table
    UByte yLen;     // y size - height of table (linear is numbered row)
    UWord offs;     // memory location of the table
};

extern struct tblTbl calTbl[];  // calibration index; MAX_TBL entries, defined with the data

SWord getCal(ULong n)
{
    return 0;   /* to be defined */
}

SWord getTableVal(UWord n, SWord x, SWord y)
{
    SWord z[4], rVal = 0;
    SWord dX, dY, xI, yI;
    ULong off;
    SLong base, K[4];

    if (n < MAX_TBL) {
        off = calTbl[n].offs;
        switch (calTbl[n].type) {
        case 1:     /* Linear */
            y = y < calTbl[n].yLen ? y : (calTbl[n].yLen - 1);
            dX = ONE / calTbl[n].xLen;
            xI = x % dX;
            x /= dX;
            if (x >= calTbl[n].xLen) {
                xI = 0;
                x = calTbl[n].xLen - 1;
            }
            off += x + (y * calTbl[n].xLen);
            z[0] = getCal(off);
            z[1] = (x + 1) < calTbl[n].xLen ? getCal(off + 1)
                                            : z[0] + z[0] - getCal(off - 1);
            base = ((SLong)z[1] - (SLong)z[0]) * (SLong)xI / (SLong)dX;
            rVal = z[0] + (SWord)base;
            break;
        case 2:     /* Planar */
            dY = ONE / calTbl[n].yLen;
            yI = y % dY;
            y /= dY;
            if (y >= calTbl[n].yLen) {
                yI = 0;
                y = calTbl[n].yLen - 1;
            }
            dX = ONE / calTbl[n].xLen;
            xI = x % dX;
            x /= dX;
            if (x >= calTbl[n].xLen) {
                xI = 0;
                x = calTbl[n].xLen - 1;
            }
            off += x + (y * calTbl[n].xLen);
            z[0] = getCal(off);
            z[1] = (x + 1) < calTbl[n].xLen ? getCal(off + 1)
                                            : z[0] + z[0] - getCal(off - 1);
            z[2] = (y + 1) < calTbl[n].yLen ? getCal(off + calTbl[n].xLen)
                                            : z[0] + z[0] - getCal(off - calTbl[n].xLen);
            z[3] = ((y + 1) < calTbl[n].yLen) && ((x + 1) < calTbl[n].xLen)
                       ? getCal(off + calTbl[n].xLen + 1)
                       : z[2] + z[0] - z[1];
            /**************************************************************************************
             * The equation for a planar surface interpolation for NxM points is
             *   Z0(x2-X)(y2-Y) + Z1(X-x1)(y2-Y) + Z2(x2-X)(Y-y1) + Z3(X-x1)(Y-y1)
             *   -----------------------------------------------------------------
             *                          (x2-x1)(y2-y1)
             * X and Y are offsets.  Z is the value of the corner
             **************************************************************************************/
            K[0] = (SLong)(dX - xI) * (SLong)(dY - yI) * (SLong)z[0];
            K[1] = (SLong)xI * (SLong)(dY - yI) * (SLong)z[1];
            K[2] = (SLong)(dX - xI) * (SLong)yI * (SLong)z[2];
            K[3] = (SLong)xI * (SLong)yI * (SLong)z[3];
            base = (SLong)dX * (SLong)dY;
            rVal = (SWord)((K[0] + K[1] + K[2] + K[3]) / base);
            break;
        default:    /* Discrete */
            if ((x < calTbl[n].xLen) && (y < calTbl[n].yLen)) {
                off += x + (y * calTbl[n].xLen);
                rVal = getCal(off);
            }
            break;
        }
    }
    return rVal;
}
/* end of file */
```

Lots on this one. Next time it will be time to start stitching things together and actually have a main().
