The Render-Engine   (new version of this article, Feb. 21st, 2016)
Haptic gauges

The render engine is a microcomputer that renders data, in this case mainly telemetry data for aeromodellers.
I think such devices will gain increasing importance for RC equipment. Here I describe my development, an extra box (not really a box but a shrunk microcontroller device) for installation in a Multiplex® ROYALpro RC transmitter, to increase its functionality and to try out new ideas. A first version with still limited functionality is ready now and shall be tested during the coming season. It can read all standard and one format of custom telemetry data from the MLink® module, can execute a few evaluations and application code, and can generate output data. It renders data using speech, vario and Shepard tones and, of course, my haptic output.

The state of the art in this field includes audible output (speech and sounds), displays mounted in RC-transmitter cases, and vibration alarms in a stick.

Earlier I had implemented a haptic output by mounting small servos in the transmitter case, driving levers which can be scanned with the ring fingers. I use one of them mainly for the variometer information and do not want to miss it any more. This machinery works, but it is inflexible, so I am reimplementing it and improving it substantially in the process.

The idea is to design a device that fits into the RC-transmitter case, reads the telemetry data stream, analyses it, performs some processing and renders the results in several appropriate ways. There might be three types of persons enjoying the render engine: those who just use it, those who write applications using the "platform render engine", and of course those who bring it on its way and extend it - perhaps I'm only the first of these.

Basic requirements

Requirements are to be formulated for the model pilot, the experimenter (application programmer) and the others, including myself, who might come and adapt or extend the engine substantially. The following is a bit unordered:

The applications - the code to evaluate, analyse and process the data and to generate data to be rendered - may range from "very simple" to "somehow complex". Their usefulness may range from low (e.g. a flight log) to indispensable for one or the other model pilot, and the computational load may vary widely. Most applications are not really time critical, others are. It should be possible to use real time operating systems on powerful hardware or just a main loop on small devices. Rendering devices may be very different, spanning from traditional audible signalisation and speech output over haptic and visible output of different kinds to graphic displays in upcoming data glasses.

Besides reading the telemetry data stream the engine shall be able to acquire data by itself: partially reference data (e.g. air pressure) or data from the user (e.g. switch settings or the "heading" of the transmitter case). Other data are semi-constant and shall be defined by the user as operational parameters, for instance the capacity of the battery in a specific aeromodel.

"Independence" and "..catch the telemetry data stream" means that there must be a unique driver API so that different drivers for different RC-transmitter types and brands can be used and consequently that there must be a unified representation of telemetry data such that evaluation, analysis and rendering can be done independently of RC brands.
"Unified representation" includes a definition of the semantics of aeromodelling telemetry data, I call it "vocabulary", and usage of appropriate data formats, rules, templates and services or auxiliary programs. I designed some or these things: a simple driver API, a vocabulary and a data structure for telemetry data items, a fixed point number format for those data which can even be used efficiently with today's low end processors (Cortex-M0) and a flexible mechanism (version 0.0.1 :-) for dealing with operational parameters. I shall describe them later.

As an application should not depend on specific render methods, some abstraction and classification of render methods is needed.

To summarize all this: the render engine consists of the components shown in the following block diagram.

Render Engine - Block Diagram

This is one of the usual block diagrams, showing the render engine and the most important things around it. Yes, currently it does not contain a network link: for the next few years, possibly very few years, render engines will be alone and offline (and I do not consider a serial Bluetooth connection to a smart phone or tablet a network). To the left there is the input section and the external storage; the centre is dominated by the application. The three shades of grey stand for hardware abstraction, platform stuff and, last but not least, the application - the most interesting code, which will change very often and where most experiments will take place. To the right you can see some icons representing current or imaginable render devices and others which still must be invented. All this is straightforward. I leave the question "RTOS or main loop?" open here; currently I use a main loop.

The input driver pushes telemetry data: it controls the main loop insofar as every incoming telemetry data item causes one run of the loop. Under an RTOS it would have to run in its own thread and push the telemetry data items into a queue. The drivers of the switches, controls or own sensors (e.g. a compass) just read and store their values regularly, and getter functions are available. The (future) file system is a standard one.
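To make the push model concrete, here is a minimal sketch of such a main loop. All names (TdItem, tdPush, tdPop etc.) are made up for this illustration and are not the actual platform API:

// Minimal sketch of a push-driven main loop (all names are hypothetical).
#include <cstdint>
#include <queue>

struct TdItem {              // simplified telemetry data item
    uint16_t type;           // e.g. TDNAV_VEHICLE | TD_SPEED_CLIMB
    int32_t  value;          // raw q1516 value
    uint32_t timestamp_ms;
};

static std::queue<TdItem> tdQueue;   // filled by the input driver (e.g. from an ISR)

// The input driver would push an item whenever a telemetry frame arrives.
void tdPush(const TdItem &item) { tdQueue.push(item); }

bool tdPop(TdItem &item) {
    if (tdQueue.empty()) return false;
    item = tdQueue.front();
    tdQueue.pop();
    return true;
}

int  readSwitches()                       { return 0; }  // polled getter (stub)
void runApplication(const TdItem&, int)   {}              // evaluate / analyse (stub)
void serviceRenderDevices()               {}              // keep audio, servos etc. running

int main() {
    tdPush({0x42, 1 << 16, 0});          // simulate one incoming item
    for (int i = 0; i < 3; ++i) {        // bounded here; an endless loop on the target
        TdItem item;
        if (tdPop(item))                 // every incoming item causes one run of the loop
            runApplication(item, readSwitches());
        serviceRenderDevices();
    }
}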

The output drivers have physical and logical layers. Most physical devices can be used for different logical devices - a complex example is (low level) audio output, which can be controlled by different logical level drivers: a gauge can be implemented as a variometer tone (up to 2 octaves represent an interval of values), a clock display (see below) can be implemented as a Shepard tone, and a text display is, of course, speech output. Another example: a standard RC servo can be used to implement a gauge in 2 different ways, as a haptic output or as a quite large, easy to read gauge display.
Which value is to be output on which display is controlled by the application code, but platform code is made available to do this in a unified way and can be controlled by operational parameters, see "plumbing" below.
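A sketch of how the logical/physical split can look in code: one logical "gauge" interface and two physical back ends (servo lever, vario tone). Class names and the frequency mapping are illustrative assumptions, not the real driver code:

// Sketch: one logical device class ("gauge") with two physical back ends.
#include <cmath>
#include <cstdio>

class Gauge {                       // logical device: shows one value of an interval
public:
    virtual ~Gauge() {}
    virtual void show(float value) = 0;   // value already scaled to -1.0 .. +1.0
};

class ServoLeverGauge : public Gauge {    // haptic output via a small RC servo
public:
    void show(float value) override {
        int pulse_us = 1500 + static_cast<int>(value * 500);   // 1000..2000 us
        std::printf("servo pulse: %d us\n", pulse_us);         // real code: set PWM
    }
};

class VarioToneGauge : public Gauge {     // audible output over 2 octaves
public:
    void show(float value) override {
        float freq = 440.0f * std::pow(2.0f, value);           // -1..+1 -> 220..880 Hz
        std::printf("vario tone: %.0f Hz\n", freq);            // real code: set tone step
    }
};

int main() {
    ServoLeverGauge lever;
    VarioToneGauge  vario;
    Gauge *out = &lever;            // chosen by the "plumbing" at run time
    out->show(0.3f);
    out = &vario;
    out->show(0.3f);
}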

Currently the main logical classes for render devices are gauge, clock and text; more will follow (see the configuration example further below).

The plumbing: this is simply a table containing the information which data item is to be rendered by which (logical or physical) device. I will explain it below in the parameters section.

mbed and the OM11043 board

Of course at first I thought about an embedded Linux system - it sounds "modern". There are some tiny designs (e.g. →Arietta-G25, the →40-Pin-DIL-GNUBLIN) and other "gumstick sized", quite powerful systems. If I don't choose embedded Linux I will be asked why, because there are some convincing pros.

But why not embedded Linux? Embedded Linux is currently out of focus, but the RendEng2.0 might run under such a system. μCLinux is simply not an option. To be honest: I would have been interested in Linux. But I will not learn to configure and use embedded Linux with only books and forums to help; if I could get real help from real persons (in the vicinity west of Munich), all this would look quite different.

32-Bit-Arduino? mbed? Bare metal?

Simple 8-Bit programming with standard boards is the domain of Arduino, and 32-Bit Arduinos are also available; all 32-Bit Arduinos are based on an ARM Cortex-Mx MPU, but I don't know if they are compatible with each other (I'm afraid they aren't). The →Teensy is in my eyes one of the most attractive designs. It is very easy to write <very simple> Arduino applications; it is much more cumbersome to write <not very simple> Arduino applications. The main problem with Arduino is, this is not new, the absence of a debugger. Not only is no debugger foreseen, it is essentially not possible to attach a debugger without changing the hardware, and afterwards it is no longer an Arduino. See what has to be done with the Teensy to use a debugger in →Prof. Erich Styger's blog. I think developing hardware abstraction or platform software for such an Arduino is not possible.

mbed is more than Arduino but shares some things: it is also possible to develop <very simple> application software for an mbed system, as long as "serial-out debugging" will do for you: let the online compiler do the work, store the resulting binary file on the hardware, which appears as a USB storage device, then let it run and watch the printfs on a terminal application. Good luck!
Writing more complex or complicated software this way is not really possible, but - and this is the point - if you want more, any mbed compliant piece of hardware can be attached to a debugger, and you can use a locally installed development system from the vendor of your hardware.

The offering of mbed compatible hardware is increasing, but nearly all boards are a combination of a debugger and a discovery/experimental/prototyping board; the debugger may be separable from the prototype board, but these boards never fulfill even modest expectations of any form factor - simply unusable as a render engine, but of course they were never made for that. You can get such boards for very low prices and they are a good starting point.
You may also design your own hardware, which will, together with the mbed debugger/downloader, be mbed compatible.
Directly usable mbed hardware is available in many sizes and flavours, see the →Platform list on the mbed pages. I decided to use the →OM11043 board, which has a broadened 40-pin DIL form factor and will fit into my transmitter case. Its heart is an LPC1768, a 72-MHz-clocked Cortex-M3 with 512 kB flash, 32 kB SRAM, a 10-bit DAC etc.; the board has a bit of external storage and a few blinking lights (oh, I like that :-). It shall be extended with an audio amplifier and a battery for the clock, and I'll use its I²C bus later for some "ground station" reference sensors (atmospheric pressure, compass) and for more advanced render devices.

The plan was: use the mbed platform as far as possible and, when real problems arise, continue with the LPCXpresso tools for debugging. After installation this didn't work, I hate that, and I did not have enough time and skill to get it running...
But the other approach worked surprisingly well: develop as much as ever possible on the PC, using Eclipse Juno and the MinGW (32 bit) compiler, and port it onto the ARM. This also forces me to use proper test frames for my modules. In fact even the audio output was developed nearly fully on the PC; only the generated samples went into a .wav file instead of the DAC. Of course the timing (interrupt servicing for the samples) cannot be simulated on a PC, but the corresponding problems were not really hard.

This encourages me to set up a simple platform on the PC which will allow application code for the render engine to be developed under Windows or (PC) Linux in the future.

Current developments

The vocabulary: independence of brands means, among other things, setting up a common language so that we know what we are talking about. Telemetry data types must be defined and a telemetry data item must be tagged with its type definition. So I defined a list of types of data items in aeromodel telemetry. It is written down as an XML file; currently I can generate C #defines from it, more is planned. I call this list the "vocabulary" and it maps technical data items to numerical codes. The mapping is organized in 3 steps: groups, items and subitems or indexes.

More examples:
Groups such as TDEL_SUPPLY or TDEL_DRIVE can contain items:
TD_VOLTAGE, TD_CURRENT, TD_TEMPERATURE, TD_CONSUMED and others. For items it is clearly defined what they are and in which physical units they are measured (let's use SI units instead of feet, PSI etc. as far as possible).
The items may be indexed using the lowest 4 bits; some of the possible values are used for special instances, for example TD_MAXCURRENT is the maximum current measured since reset, as some sensors deliver it.
The group, item and index or subitem codes are combined simply by or-ing them, e.g. TDEL_DRIVE | TD_TEMPERATURE; a small sketch follows below.
TDNAV_VEHICLE groups navigational items concerning the aeromodel, TDNAV_REFERENCE the same items for a reference station, usually the RC transmitter. Navigation items include:
TD_ALTUTUDE_MSL or TD_ALTUTUDE_REL. "..._REL" means "relative" and is defined as sharply as possible: normally it refers to the point where the last reset of the sensor was performed, and it must be guaranteed that the value 0 for TDNAV_VEHICLE | TD_ALTUTUDE_REL and for TDNAV_REFERENCE | TD_ALTUTUDE_REL refers to the same point. If TDNAV_REFERENCE | TD_ALTUTUDE_REL is not available it is assumed to be 0, and TDNAV_VEHICLE | TD_ALTUTUDE_REL minus TDNAV_REFERENCE | TD_ALTUTUDE_REL can always be taken as "altitude above ground" (AGL), as precise as possible. TDNAV_REFERENCE | TD_ALTUTUDE_REL is currently not acquired, but I plan to include a pressure sensor in a later version of my render engine implementation. There is also a type TDENV_GROUND | TD_PRESSURE_PSTAT, the static pressure at the RC transmitter; it will come out of the same sensor as TDNAV_REFERENCE | TD_ALTUTUDE_REL and may be used for further corrections of other telemetry data (e.g. IAS→TAS), for watching the weather during model flying, for measuring the temperature gradient or such stuff.
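Here is the promised sketch of the code combination. The numeric values and the exact bit layout (group in the high byte, item above the index, index in the lowest 4 bits) are assumptions made up for this example; the real values come out of the generated header:

// Sketch of the vocabulary encoding with made-up numeric values.
// Assumed layout: group in the high byte, item in bits 4..7, index in bits 0..3.
#include <cstdint>
#include <cstdio>

#define TDEL_DRIVE        0x0200u   // group (hypothetical value)
#define TDNAV_VEHICLE     0x0400u   // group (hypothetical value)
#define TD_TEMPERATURE    0x0030u   // item  (hypothetical value)
#define TD_SPEED_CLIMB    0x0050u   // item  (hypothetical value)
#define TD_INDEX(n)       ((n) & 0x0Fu)   // lowest 4 bits select an instance

int main() {
    uint16_t motorTemp = TDEL_DRIVE | TD_TEMPERATURE | TD_INDEX(1); // 2nd temperature sensor
    uint16_t climb     = TDNAV_VEHICLE | TD_SPEED_CLIMB;
    std::printf("motorTemp type = 0x%04X, climb type = 0x%04X\n",
                (unsigned)motorTemp, (unsigned)climb);
}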

The fixed point data type q1516: C++ allows classes to look very much like primitive data types, and I could not resist writing a class for a very usable data type. It was a good finger-warming exercise after such a long, long pause in C++ coding. I'm sure it is the (n+2)nd invention of this usable number format. OK, it's not only fun:
Render engines may be implemented on really low end MPUs such as Cortex-M0 kernels, and a common, unified way to treat numerics is important. Although it is additional work to convert telemetry data from brand specific formats into the unified format, it is a major benefit for application programmers.
q1516 is a fixed point number with a sign bit, 15 bits before and 16 bits after the binary point; this allows "at least" 4 decimal digits before and "4½" decimal digits after the decimal point. It fits into one 32-bit word, is quite easy to use and does not bring even the smallest Cortex-M0 into trouble. Such numbers are sufficient to represent nearly every value which might occur in the field of aeromodel telemetry - an exception might be an absolute WGS84 coordinate, which should be represented by 2 float numbers; in this case usually more complex number crunching, including trigonometry, is needed and this should be done on M4 kernels. Besides overloading the usual operators and conversion methods from/to signed integer and float, there is a method toString() to make such a number readable.
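A minimal sketch of what such a class can look like. This is not the real q1516 source, only the underlying idea of 32-bit fixed point with 16 fractional bits, a few overloaded operators and a toString():

// Minimal fixed-point sketch: 1 sign bit, 15 integer bits, 16 fractional bits.
#include <cstdint>
#include <cstdio>

class Q1516 {
    int32_t raw;                                   // stored value * 65536
    Q1516(int32_t r, bool) : raw(r) {}             // internal: construct from raw bits
public:
    Q1516() : raw(0) {}
    Q1516(int i)   : raw(i * 65536) {}
    Q1516(float f) : raw(static_cast<int32_t>(f * 65536.0f)) {}

    Q1516 operator+(Q1516 o) const { return Q1516(raw + o.raw, true); }
    Q1516 operator-(Q1516 o) const { return Q1516(raw - o.raw, true); }
    Q1516 operator*(Q1516 o) const {               // widen to 64 bit, then shift back
        return Q1516(static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16), true);
    }
    float toFloat() const { return raw / 65536.0f; }
    void  toString(char *buf, unsigned len) const {
        int32_t v = raw < 0 ? -raw : raw;          // print the sign separately
        std::snprintf(buf, len, "%s%ld.%04ld", raw < 0 ? "-" : "",
                      static_cast<long>(v >> 16),
                      static_cast<long>(((v & 0xFFFF) * 10000LL) >> 16));
    }
};

int main() {
    Q1516 climb(1.8f);
    Q1516 gain = climb * Q1516(2);
    char buf[16];
    gain.toString(buf, sizeof buf);
    std::printf("2 * 1.8 m/s = %s m/s\n", buf);    // prints 3.5999 (rounding of the sketch)
}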

The parametry: the next piece is a first version of storing, looking up and managing operational parameters. Such a parameter consists of a key and its value; it is not to be altered by the render engine's software. Sets of parameters are combined into tables, and tables are organized in a parametry tree.
Managing such a tree, keeping the information together, storing the tree at a specific address (e.g. a fixed address in the flash memory) so that tables can be loaded separately, and maintaining all this outside the render engine hardware is a bit tedious; for this there are some class definitions, routines and an auxiliary program running on the PC.
A parametry tree is stored on a PC as an .xml file; its structure can be enforced using a corresponding .xsd schema definition, so that, for example, XML Notepad can be used to maintain parameter sets. A Windows program translates the xml-structured parameters and the vocabulary into several formats. The most important format is an unstructured "byte soup" which can be loaded into the mbed render engine. Inside the render engine the byte soup is interpreted as a tree of parameter tables; all this is not really beautiful C++ code, but it works, and the parameters are always present and can even be used during construction of static objects. Currently tables may contain pairs of 32-bit-key/32-bit-value, 32-bit-key/q1516-value or 16-bit-key/16-bit-value.
Here is an excerpt of a parametry file:
<input_driver id="1" name="MLink" version="0.1.0">
    <MLinkTbl id="9" name="default">
        <KeyValue key="0x01" group="TDEL_DRIVE" code="TD_MINCELLVOLTAGE"></KeyValue>
        <KeyValue key="0x31" group="TDEL_DRIVE" code="TD_VOLTAGE"></KeyValue>
        ...etc...
        <KeyValue key="0x28" group="TDNAV_VEHICLE" code="TD_ALTUTUDE_REL"></KeyValue>
    </MLinkTbl>
    <MLinkTbl id="1" name="MiniMach">
        <KeyValue key="0x11" group="TDEL_SUPPLY" code="TD_VOLTAGE"></KeyValue>
        <KeyValue key="0x73" group="TDNAV_VEHICLE" code="TD_SPEED_CLIMB"></KeyValue>
    </MLinkTbl>
</input_driver>
<Alarms id="3" name="globalAlarms">
    <AlarmsTable id="9" name="default-Alarme">
        <TDID2Q1516 group="TDNAV_VEHICLE" item="TD_ALTUTUDE_REL" q1516value="470"/>
        ...etc...
    </AlarmsTable>
    <AlarmsTable id="1" name="MiniMach-Alarme">
        <TDID2Q1516 group="TDNAV_VEHICLE" item="TD_ALTUTUDE_REL" q1516value="250"/>
    </AlarmsTable>
    ...etc...
</Alarms>
<render-control id="9" name="stdrender">
    <gauge group="TDNAV_VEHICLE" item="TD_SPEED_CLIMB_COMP" device="left_lever" />
    ...etc...
    <text group="TDNAV_VEHICLE" item="TD_ALTUTUDE_REL" device="speech" />
</render-control>
Operational parameters control the input driver: in the case of an MLink® driver they map address/format codes, which Multiplex RC customers are familiar with, to standard telemetry items. An example: when I fly my MiniMach (:-), the data on channel 7, formatted as "class" 3, are translated into TDNAV_VEHICLE | TD_SPEED_CLIMB.
Application code can be controlled by operational parameters, for instance alarms to be emitted may be specified: my MiniMach will be close to its visibility limit when flown higher than 250 meters.
The "plumbing" is an essential part of the parameters: these tables map telemetry data types to the available logical and physical output devices. An example: after reception, evaluation, correction etc. the application code has computed a variable of the type TDNAV_VEHICLE | TD_SPEED_CLIMB_COMP and decides to render it. The plumbing mechanism allows such a data type to be rendered on a logical gauge object (among others), and the parametry contains the information to render it on the device "left_lever" (haptic output on the left side). The physical devices available in a given render engine implementation are made available as names for the device specification. This is, of course, to be extended in the near future to control the behaviour of the render classes in more detail, e.g. when to speak a text or how to specify non-linear characteristics of gauges.
My first implementation of the render engine, for example, has the physical rendering devices left_lever, shepardTone, speech and varioTone, which belong to the logical classes gauge, clock, text and gauge respectively. It is an absolutely minimal configuration, but it will grow - work is in progress.
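At run time the plumbing boils down to a simple lookup. A sketch, with a hard-coded table and made-up type codes; in the real engine the table comes out of the <render-control> part of the parametry byte soup:

// Sketch of a plumbing lookup: telemetry type -> render device.
#include <cstdint>
#include <cstdio>

// Hypothetical type codes and device ids, stand-ins for the generated ones.
enum : uint16_t { TDNAV_VEHICLE = 0x0400, TD_SPEED_CLIMB_COMP = 0x0060, TD_ALTUTUDE_REL = 0x0020 };
enum DeviceId   { DEV_NONE, DEV_LEFT_LEVER, DEV_VARIO_TONE, DEV_SPEECH };

struct PlumbEntry { uint16_t tdType; DeviceId device; };

// In the real engine this table is built from the render-control parameters.
static const PlumbEntry plumbing[] = {
    { TDNAV_VEHICLE | TD_SPEED_CLIMB_COMP, DEV_LEFT_LEVER },
    { TDNAV_VEHICLE | TD_ALTUTUDE_REL,     DEV_SPEECH     },
};

DeviceId lookupDevice(uint16_t tdType) {
    for (const PlumbEntry &e : plumbing)
        if (e.tdType == tdType) return e.device;
    return DEV_NONE;                       // not plumbed: silently dropped
}

int main() {
    std::printf("climb    -> device %d\n", lookupDevice(TDNAV_VEHICLE | TD_SPEED_CLIMB_COMP));
    std::printf("altitude -> device %d\n", lookupDevice(TDNAV_VEHICLE | TD_ALTUTUDE_REL));
}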

Audio output: as already mentioned, I could develop the audio output - sound generator and speech output - almost entirely on the PC. There are some sound generator classes for a vario tone, a Shepard tone, simple alarm beepers and, last but not least, silence. Tones are generated by scanning a sine table in steps according to the required frequency. The vario tone generator delivers tones spanning 2 octaves, the upper one for climbing and the lower one for descending. There are tables controlling the tone generation for 64 different tones (such a fine division leads to tones where 2 adjacent ones are hardly distinguishable). Assuming a resolution of 0.1 m/s for the climb signal this allows climb and descent to be signalled up to ±6.3 m/s, which is regarded as sufficient. Alarm beeps are chosen out of one octave (the upper one of the vario tone octaves), and the beat of 2 narrowly separated frequencies adds some shrillness.
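The table scan itself is the classic phase-accumulator technique. A sketch, assuming a sample rate of 11025 SpS, a 256-entry sine table and a base frequency of 440 Hz (all of which are assumptions for this illustration):

// Sketch of tone generation by scanning a sine table with a frequency-dependent step.
#include <cstdint>
#include <cmath>
#include <cstdio>

static const int   TABLE_LEN   = 256;
static const float SAMPLE_RATE = 11025.0f;
static int16_t sineTable[TABLE_LEN];

void initSineTable() {
    for (int i = 0; i < TABLE_LEN; ++i)
        sineTable[i] = static_cast<int16_t>(32767.0f * std::sin(2.0f * 3.14159265f * i / TABLE_LEN));
}

// Phase accumulator in 16.16 fixed point; the step determines the output frequency.
struct ToneGen {
    uint32_t phase = 0, step = 0;
    void setFrequency(float hz) {                   // step = hz * TABLE_LEN / SAMPLE_RATE
        step = static_cast<uint32_t>(hz * TABLE_LEN / SAMPLE_RATE * 65536.0f);
    }
    int16_t nextSample() {                          // would be called from the DAC interrupt
        phase += step;
        return sineTable[(phase >> 16) % TABLE_LEN];
    }
};

int main() {
    initSineTable();
    ToneGen vario;
    vario.setFrequency(440.0f * std::pow(2.0f, 1.5f / 6.3f));  // e.g. +1.5 m/s climb, upper octave
    for (int i = 0; i < 5; ++i)
        std::printf("%d\n", vario.nextSample());
}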
Shepard tones are a bit more complicated. They are used to build an audible clock... A clock is an interval of values which can be perpetuated at both ends - like the hours of a (half) day. Altitudes are subdivided into bands, and a clock can represent any of these bands. This allows altitudes to be rendered in high resolution, to replace the vario tones during tiny or irregular climbs and descents. The price is obvious: such a "clock" is ambiguous, so it is useless for quick climbs or descents. An acoustic "clock" is defined by one octave: when the ascending tone reaches the upper end of the octave it re-enters the octave at its lower end as a similar sounding tone (half the frequency of the upper end). To remove the remaining acoustic discontinuity, 2 tones from 2 adjacent octaves are mixed in such a way that every tone sounds practically identical to its counterpart one octave lower, so the wrap-around cannot be heard.
→Roger Shepard mixed many sine components, not only 2, into complex tones forming virtually infinitely ascending or descending tone scales. Such a tone is ideal to trace low or irregular climb or descent in weak thermals, better than classic vario tones. The variometer/altimeter device in the aeromodel which I use can deliver the altitude with a resolution of 0.1 m or 0.125 m, and 64 different Shepard tones within one octave can thus cover 6.4 m or 8 m, which is considered an ideal width of an altitude band for the mentioned purpose.
Here is a .wav file containing a tone sweep over 2½ octaves up and down: ⇒Shep32Tones.wav (about 200 kB).
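A sketch of the two-component mixing described above: as the position inside the band rises, the upper component is faded out and the one an octave lower is faded in, so the wrap-around point is inaudible. The raised-cosine cross-fade law and the base frequency are assumptions for this example only:

// Sketch: two sine components one octave apart, cross-faded over the band position.
#include <cmath>
#include <cstdio>

// pos runs from 0.0 (bottom of the altitude band) to 1.0 (top, where it wraps to 0).
// Returns one output sample at time t (seconds); baseHz is the start of the lower octave.
float shepardSample(float pos, float t, float baseHz = 220.0f) {
    const float PI = 3.14159265f;
    float fLow  = baseHz * std::pow(2.0f, pos);              // component in the lower octave
    float fHigh = 2.0f * fLow;                               // same tone, one octave higher
    float wHigh = 0.5f * (1.0f + std::cos(PI * pos));        // fades 1 -> 0 across the band
    float wLow  = 1.0f - wHigh;                              // fades 0 -> 1 across the band
    return wLow  * std::sin(2.0f * PI * fLow  * t)
         + wHigh * std::sin(2.0f * PI * fHigh * t);
}

int main() {
    // At pos=0 the audible mix is the pure tone 2*baseHz (upper component only);
    // at pos=1 it is again 2*baseHz (lower component only) - that is why the wrap
    // from the top of the band back to the bottom cannot be heard.
    const float positions[] = { 0.0f, 0.5f, 0.99f };
    for (float pos : positions)
        std::printf("pos=%.2f  sample=%.3f\n", pos, shepardSample(pos, 0.001f));
}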
Speech output is governed by the following requirements/restrictions: the LPC1768 MPU has a DAC with a resolution of 10 bits and a flash ROM of ½ MB (100 kB of which are considered enough for the program code). This should suffice for the needed speech clips. The OM11043 board has a very limited 2 MB external store which I do not want to use for speech clips. About 25 different words, including the numbers 0..12, shall be used. I intentionally do not create clips like "twenty-four" (German "vier-und-zwanzig", even longer) for number output; I prefer the easier to understand "two four". At least 11025 samples per second must be used to achieve a minimum of intelligibility. I chose a compression method very similar to A-law (or μ-law) compression, adapted to the 16→8→10-bit case. Compression is done on the PC; decompression is a simple table lookup. Currently the compressed clips are part of the source code (:-), I will change this later. Speech quality is less than ideal, I still have to learn a lot. The low pass / audio amplifier is of the low quality, high noise type, based on an LM386. I'm not proud of it.
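The decompression really is just a table lookup. A sketch of how such a 256-entry expansion table could be built with a plain μ-law curve scaled to the 10-bit DAC; the actual curve used in the engine differs, this only shows the principle:

// Sketch: build a 256-entry expansion table (8-bit compressed code -> 10-bit DAC value).
#include <cmath>
#include <cstdint>
#include <cstdio>

static uint16_t expandTable[256];     // 8-bit compressed code -> 10-bit DAC value

void buildExpandTable(float mu = 255.0f) {
    for (int code = 0; code < 256; ++code) {
        float y    = (code - 128) / 128.0f;                                   // -1.0 .. +1.0
        float sign = (y < 0.0f) ? -1.0f : 1.0f;
        float x    = sign * (std::pow(1.0f + mu, std::fabs(y)) - 1.0f) / mu;  // mu-law expansion
        expandTable[code] = static_cast<uint16_t>(512.0f + 511.0f * x);       // centre at DAC mid-scale
    }
}

int main() {
    buildExpandTable();
    // On the target the sample interrupt would only look up expandTable[clip[i]]
    // and write the result to the 10-bit DAC.
    std::printf("code 0 -> %u, code 128 -> %u, code 255 -> %u\n",
                (unsigned)expandTable[0], (unsigned)expandTable[128], (unsigned)expandTable[255]);
}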

The telemetry data item class and the input driver for the MLink system: a first draft of a universal data structure for a complete telemetry data item has been designed. It contains the data item's type (e.g. TDNAV_VEHICLE | TD_SPEED_CLIMB), its value (e.g. the q1516 value 1.8 - the unit m/s is defined by the type of the item), some flags (e.g. whether the MLink sensor attached an alarm bit to the value) and a time stamp.
The MLink input driver consists of 2 layers: the low level driver reads the data stream from the RC equipment and the high level driver converts the data into the above mentioned telemetry data items. It is ready and works reliably.
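A sketch of the fields described above; the structure name, the flag assignment and the numeric type code are made up for the example, only the field list follows the description:

// Sketch of a universal telemetry data item, following the description above.
#include <cstdint>
#include <cstdio>

struct TelemetryItem {
    uint16_t type;        // e.g. TDNAV_VEHICLE | TD_SPEED_CLIMB
    uint16_t flags;       // e.g. bit 0: the sensor attached an alarm bit to the value
    int32_t  value;       // q1516; the physical unit (m/s, V, ...) is implied by the type
    uint32_t timestampMs; // when the item was received
};

int main() {
    TelemetryItem climb = { 0x0450, 0, (18 << 16) / 10, 123456 };   // about 1.8 m/s as q1516
    std::printf("type 0x%04X  value %.2f\n", (unsigned)climb.type, climb.value / 65536.0);
}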

Surprise: the mbed platform has not implemented C++ exception handling.
No, you can't switch it off if you don't need it, you can't even switch it on if you need it, you simply can't have it. It may sound naive, but I didn't search for such traps when I looked for the right platform, and then it was too late.
The mbed guys are surely excellent software engineers and when they say that C++ exception handling is too expensive I have to accept this. But I don't like it, and I especially do not like such surprises - this information should be clearly visible in easy-to-find feature lists and not in a compiler diagnostic while porting the q1516 class. I have to use a very limited setjmp/longjmp substitute. The fact that no unwinding and no destructor calling is done is not really wonderful, but the solution is better than nothing.
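A sketch of the kind of setjmp/longjmp substitute meant here, with the caveat from above visible in the code: nothing is unwound and no destructors run between the longjmp and the setjmp. Function names are illustrative:

// Sketch of a minimal setjmp/longjmp "exception" substitute.
// Caveat (as said above): no stack unwinding, no destructor calls.
#include <csetjmp>
#include <cstdio>

static std::jmp_buf errJmp;
static int          errCode = 0;

// "throw": jump back to the most recent setjmp point with a non-zero error code.
void raiseError(int code) {
    errCode = code;
    std::longjmp(errJmp, 1);
}

void divide(int a, int b) {
    if (b == 0) raiseError(42);           // would have been: throw DivideByZero();
    std::printf("%d / %d = %d\n", a, b, a / b);
}

int main() {
    if (setjmp(errJmp) == 0) {            // "try"
        divide(10, 2);
        divide(1, 0);                     // raises; control returns to the setjmp above
        std::printf("never reached\n");
    } else {                              // "catch"
        std::printf("caught error %d\n", errCode);
    }
}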
The RTTI (run time type information) is also not implemented for mbed, but this is a minor problem and can, as far as I need it, easily be replaced.