Saturday, December 28, 2013

My First Job, or

"How I Became Afraid of Transistor Circuit Design"

The summer of 1981 found me in a bit of a bind. After a tragic and comedic series of events, I concluded the military life wasn’t for me. I cancelled my full-time scholarship with the Air Force and found myself suddenly in need of money. Some friends had recommended I try to get a job at school, so I applied and was accepted. I was to work in the lab of Dr. Roger P. Webb, a professor in the School of Electrical Engineering (1). Not just any professor - Dr. Webb occupied the “Georgia Power Chair”. As I understand it, this meant that Georgia Power, the state-wide electric company, funded his position at the school. Furthermore, he specialized in electric power and was the senior professor in that field within the school (2).

On the other hand, power electronics couldn’t have been further from my interests. In my view, power engineering and electric distribution were decidedly old-school and boring. I was learning to master the intricacies of analog and high frequency radio circuits, a thrilling esoteric art in which I was becoming quite accomplished. So on learning that my assignment was to work for Dr. Webb, I was none too excited. However, I needed the job, and despite my inclinations, any work involving electronics was bound to be more interesting than the other kinds of jobs available to young college students. So I accepted, and thus began my first job.

My first day on the job, I was introduced to his lab and my assignment. Until that day, my exposure to electronics had been things I could hold in my hand: small circuit boards the size of a slice of bread, holding parts like resistors and capacitors, things the size of buttons. Or at most, I had worked with old vacuum tube radios, which were about the size of a modern microwave oven and needed two hands to lift. Upon entering Dr. Webb’s lab, I got the shock of my life. There were “capacitors” the size of large footlockers - so big, they were mounted on wheels! This was clearly electronics on a whole different scale than I had ever imagined!



Then we rounded the corner into another section of this damp basement. We entered a computer, on which I was to work for the next year or so. Yes, we actually entered the computer. This thing was a monstrosity, a 1947 Westinghouse A-C Network Analyzer, or Network Calculator as it was sometimes called. There were probably a dozen or more huge racks of electronic gear, arranged in a “C” shape enclosing an area about 500 square feet in size. In the middle of the “room” was a special desk and drafting table, which I later learned was the main operating console for the whole computer.




My task was to refurbish the programming unit of this computer - that is, the means by which operators input a problem to the computer for solution. This was 1981, and I had some familiarity with computers of the day. Programming at best involved using a video terminal if you were lucky, or perhaps a paper teletype machine. In our freshman computer classes we had to use punched cards. But even punched cards were too modern for the Westinghouse Calculator - no, this bad boy used patch cords and a plug panel, not unlike an old telephone switchboard you see in the movies. These cords would be pulled out of a panel and plugged into various sockets according to the circuit at hand. Each wire would automatically retract back into the machine when you finished using it (imagine the power cord on a modern day vacuum cleaner). This was accomplished by a series of pulleys and lead weights. Over the years, these wires had become dried and brittle, and were no longer flexible. They no longer worked mechanically, and couldn’t be trusted electrically for fear of short circuits.



My job was to replace each and every wire in the whole machine. There were hundreds of them - one such panel alone held 60 wires, and judging from my recollection and photos, there were probably between 6 and 10 of these panels. I was given a huge spool of new wire and the biggest soldering iron I’d ever seen. They showed me how to replace a wire, which was actually pretty simple in theory, and left me to it. Boy, was it dirty work. I had to climb on top of the racks to access the section holding the wires, lying on my belly as I removed and reinstalled each wire. Each new wire had to be cut to the exact length of its predecessor, the plug moved from the old wire to the new, and then carefully re-threaded through the pulleys and lead weight, and finally re-attached to the computer’s circuits. Did I mention dirt? There were decades of dust inside this machine, and I was hacking and coughing all summer and fall of that year.



I gradually learned how this machine worked, beyond how to repair it. It was originally designed to make calculations for power distribution systems. To do this, the programmer built a scale model of the system, using the patch cords, and then studied how it performed. The central console had a bank of meters, and a keyboard which looked like the old adding machine my grandfather used. The operator could type in the ID code for any element of the scale model, say “L14”, and then read the voltage, current, phase angle and power readings on the meters corresponding to element L14. Next to the console was a specialized drafting table, with hundreds of small light bulbs which could be positioned anywhere underneath the table surface. A drawing of the power system was taped to the table top, and as the operator selected an element, the corresponding element on the drawing was illuminated from underneath.

One might wonder, as did I, why such an antique “computer” was still being maintained in 1981, when much more advanced digital computers were readily available, especially to the likes of companies like Georgia Power. I was told that despite all the advantages of digital computers, there were still some special classes of power distribution problems that were difficult to solve digitally, but were a piece of cake on this 1947 calculator. Georgia Tech, and Georgia Power, kept this machine operational for just those occasions, which were apparently rare but important.

While this machine was intended to solve problems in power distribution systems, I quickly realized that it could also operate as a general purpose analog computer. On more than one occasion, I would take a few random homework problems from class and plug them into the computer and confirm my answers using the meters on the console. But one meter on the console didn’t work correctly, and I learned that the small vacuum tube amplifier, the only active electronics in the whole calculator, was partially broken. So my next assignment was to replace this old amplifier with a modern transistor circuit. I was happy for this task, but was also a bit worried, as this type of amplifier was a bit outside my training and comfort zone. My supervisor encouraged me to proceed anyway, offering help when needed.

I spent quite a bit of effort building this new amplifier. It was constructed on a metal chassis about the size of a small briefcase. Because of the function of the amplifier, I made it using several circuit boards which plugged into sockets on the chassis. It was a huge undertaking for me, and I quickly had to become handy at metal work, printed circuit board layout and fabrication, not to mention the physical assembly and testing. After all that effort, my new transistor amplifier was a big disappointment. For reasons I don’t remember, it wasn’t very stable, and it was difficult to adjust. I could make it work for short periods of time, but it would eventually drift into oscillation or otherwise just quit working. Around this time, things were winding down on the project. And furthermore, I moved into the co-op program and started another job with another division. My unreliable transistor amplifier sat inside the guts of the calculator, presumably to be improved upon if and when it was next needed. The whole experience, while very educational and interesting, was somewhat of a disappointment. And since then, I have always been a bit gun-shy when it comes to designing large transistor circuits.



During all this time, I had been poking around the drawers and shelves, and discovered an old newspaper photo of someone named “Herbert P. Peters”. According to the text of the article, he was the original operator of this computer when it was installed. He looked pretty cool to me and my engineer friends. We imagined him chain smoking cigarettes while solving important problems, huddled within the walls of the machine. Well, it turned out that Mr. Peters was still alive. And although I never met him, I heard a story a couple of years later - much to my embarrassment. It seems Georgia Power had run into one of those special problems, and needed to use the Westinghouse calculator. They sent over a crew of engineers, including Mr. Peters (who I presume was in retirement by then). As I heard the story, Mr. Peters tried my new transistor amplifier for 5 minutes, then threw it in the trash. He inspected the old, broken vacuum tube amplifier and identified the problem in another 5 minutes. They were up and running the next day, when a replacement transformer arrived by express courier.

So, if there was any lesson to learn here, I think it was that we were too eager to toss out the old and bring in the new. Had I just spent a little more time examining the vacuum tube amplifier, we could have fixed it with far less cost and effort. But I had lured myself into thinking that surely this old vacuum tube technology should be thrown away and replaced by something modern. And I do wish I had had a chance to meet Mr. Peters.


(1) After I graduated, I understand Dr. Webb became head of the School of Electrical Engineering.

(2) I’m not sure whether this came with a formal title - within the school of electrical engineering, there are many different specialties and sub-specialties. We all knew which professors specialized in which area, but I don’t remember this being a formal distinction.

Picture credits from Cornell, Museum Victoria, and Ga Tech.

Sunday, December 08, 2013

Abusing the Preprocessor, Almost

I keep running into a mildly annoying problem in my C-language embedded software applications. It involves the declaration, definition and initialization of constant data tables. On a couple of recent projects, it became such a headache that I decided to do something about it, and to document it on my blog. Let’s get started with a simple example.

MAKING A LIST OF THINGS

It’s fairly common to use defines or an enumeration to declare the states within a state machine. Consider the following states from a fictitious breathalyzer:
#define STATE_INIT              (0)
#define STATE_IDLE              (1)
#define STATE_TAKING_SAMPLE     (2)
#define STATE_STONE_COLD_SOBER  (3)
#define STATE_TIPSY             (4)
#define STATE_DRUNK             (5)
#define STATE_SHIT_FACED        (6)
#define STATE_PASSED_OUT        (7)
#define STATE_NUM_ITEMS         (8)

Or alternatively, we can use an enumeration, which will automatically assign the states sequentially.
enum BREATHALYZER_STATES {
  STATE_INIT=0,
  STATE_IDLE,
  STATE_TAKING_SAMPLE,
  STATE_STONE_COLD_SOBER,
  STATE_TIPSY,
  STATE_DRUNK,
  STATE_SHIT_FACED,
  STATE_PASSED_OUT,
  STATE_NUM_ITEMS,
};

Often we don’t care about the actual value assigned to such states, but there can be advantages to having them in sequence. One reason would be to easily obtain a text name representing the state. For example:
const char *state_names[STATE_NUM_ITEMS]={
  "initializing",
  "idle",
  "taking a sample",
  "stone cold sober",
  "tipsy",
  "drunk",
  "shit-faced",
  "passed out",
};

Another advantage to using the enumeration to define the states is that the code becomes easier to maintain if you need to insert a state later on. You don’t have to manually change all the numbers in the list of defines, as the enum statement automatically assigns them in sequence.

So, why am I not happy with this? Because the combined information about the states, state numbers and text strings, most often must exist in two different files. We properly put the enumeration in the header file, but the definition and initialization of the name strings has to be in the .c file. For this trivial case, this is just a nuisance. But it can get out of hand easily with more and larger pairs of integers / strings.

CONSTANT DATA TABLES

Here is another, similar, application of the same concept. Consider a lookup table containing information about different units of measure (in this case, lengths):
enum UNITS {
  UNIT_METER=0, 
  UNIT_KILOMETER, 
  UNIT_CENTIMETER, 
  UNIT_MILLIMETER,
  UNIT_MILE,
  UNIT_YARD,
  UNIT_FEET,
  UNIT_INCH,
  UNIT_MIL,
  UNIT_NUM_ITEMS,
};

typedef struct tagUNIT_DEFN {
  const char * const name;
  const char * const abbr;
  const double fact;
  const double off;
} UNIT_DEFN;

const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS]={
  { "meter", "m", 1.0L, 0.0L },
  { "kilometer", "km",  1000.0L,   0.0L },
  { "centimeter", "cm",  0.01L, 0.0L },
  { "millimeter", "mm", 0.001L, 0.0L }, 
  { "mile",  "mi", 1609.344L, 0.0L },
  { "yard",  "yd", 0.9144L, 0.0L },
  { "foot", "ft", 0.3048L, 0.0L },
  { "inch", "in", 0.0254L, 0.0L },
  { "mil",  "mil", 2.54E-05L, 0.0L },
};

Here again, it’s important to keep the indices and the constant table synchronized as changes are made, perhaps to add other length-type units such as furlongs, angstroms, and light-years.

There is one solution to the synchronization problem, which I found searching around the web. The downside is that it’s only available on compilers which support C99. But if you have C99, you can specify the position of each element of an array being initialized, using what are called “designated initializers”. With them, the above array would look like this:
const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS]={
  [UNIT_METER]={ "meter", "m", 1.0L, 0.0L },
  [UNIT_KILOMETER]={ "kilometer", "km", 1000.0L, 0.0L },
  [UNIT_CENTIMETER]={ "centimeter", "cm", 0.01L, 0.0L },
  [UNIT_MILLIMETER]={ "millimeter", "mm", 0.001L, 0.0L },
  [UNIT_MILE]={ "mile", "mi", 1609.344L, 0.0L },
  [UNIT_YARD]={ "yard", "yd", 0.9144L, 0.0L },
  [UNIT_FEET]={ "foot", "ft", 0.3048L, 0.0L },
  [UNIT_INCH]={ "inch", "in", 0.0254L, 0.0L },
  [UNIT_MIL]={ "mil", "mil", 2.54E-05L, 0.0L },
};

That solves the concern about getting the data table out of sync with the enumeration. But, we still have the issue of maintaining the table in two different files. Now is the time to abuse the preprocessor (1).

LET THE PREPROCESSOR DO THE WORK

I found an obscure solution to my problem, using the preprocessor in a most unusual manner (2). After a bit of head scratching, I hit on the following approach. First of all, make a “table” in the following format, which is actually one huge preprocessor macro:
#define UNIT_TABLE(F) \
F(UNIT_METER,      "meter",      "m",   1.0L,      0.0L)\
F(UNIT_KILOMETER,  "kilometer",  "km",  1000.0L,   0.0L)\
F(UNIT_CENTIMETER, "centimeter", "cm",  0.01L,     0.0L)\
F(UNIT_MILLIMETER, "millimeter", "mm",  0.001L,    0.0L)\
F(UNIT_MILE,       "mile",       "mi",  1609.344L, 0.0L)\
F(UNIT_YARD,       "yard",       "yd",  0.9144L,   0.0L)\
F(UNIT_FEET,       "foot",       "ft",  0.3048L,   0.0L)\
F(UNIT_INCH,       "inch",       "in",  0.0254L,   0.0L)\
F(UNIT_MIL,        "mil",        "mil", 2.54E-05L, 0.0L)\
/**/

With this “table” defined, and in just ONE place (the header file), it’s possible to enumerate the indices,
// Make the enumeration of the unit types
#define EXTRACT_ENUM( ID, NAME, ABBR, FACT, OFF ) ID,
enum UNIT_TYPES {
  UNIT_TABLE(EXTRACT_ENUM)
  UNIT_NUM_ITEMS,
};
#undef EXTRACT_ENUM

and define/initialize the table automatically:
// Makes the constant table of units
#define EXTRACT_DEFN( ID, NAME, ABBR, FACT, OFF ) \
  { NAME, ABBR, FACT, OFF },
const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS]={
  UNIT_TABLE(EXTRACT_DEFN)
};
#undef EXTRACT_DEFN

When expanded by the preprocessor, we get exactly what we want:
enum UNIT_TYPES {
  UNIT_METER, UNIT_KILOMETER, UNIT_CENTIMETER, UNIT_MILLIMETER, UNIT_MILE, UNIT_YARD, UNIT_FEET, UNIT_INCH, UNIT_MIL,
  UNIT_NUM_ITEMS,
};

const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS]={
  { "meter", "m", 1.0L, 0.0L }, { "kilometer", "km", 1000.0L, 0.0L }, { "centimeter", "cm", 0.01L, 0.0L }, { "millimeter", "mm", 0.001L, 0.0L }, { "mile", "mi", 1609.344L, 0.0L }, { "yard", "yd", 0.9144L, 0.0L }, { "foot", "ft", 0.3048L, 0.0L }, { "inch", "in", 0.0254L, 0.0L }, { "mil", "mil", 2.54E-05L, 0.0L },
};

Well, not EXACTLY, that’s almost unreadable - the macro expanded into one monster (wrapped) line of code! Where are the newlines? It is a limitation of the preprocessor that you can’t force a newline, and therefore that’s one pitfall of this method. If you are tracking down a typo, and need to examine the preprocessor output, you have to contend with this monster-long-line, wrapped format. I wrote a script to “un-wrap” the file, but for occasional debugging, it’s probably sufficient to do it manually in your editor. In vim (I’m old-school), these ex commands do it:
:s/, /,^M  /g      (for the enum)
:s/}, /},^M  /g    (for the table)
(the ^M is a literal carriage return - in vim, type Ctrl-V then Enter to produce it)
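If you’d rather script the un-wrap than do it in the editor, a sed pipeline can do the same job. This is a sketch assuming GNU sed (which accepts \n in the replacement text); with a real project you would feed it preprocessor output, e.g. `gcc -E unit.c`, instead of the echo shown here:

```shell
# Split a wrapped initializer list onto separate lines (GNU sed assumed).
# Real input would be preprocessor output:  gcc -E unit.c | sed ...
echo '{ "meter", "m", 1.0L, 0.0L }, { "mile", "mi", 1609.344L, 0.0L },' \
  | sed -e 's/}, /},\n  /g'
```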

Reformatting the output thusly yields the expected result:
enum UNIT_TYPES {
  UNIT_METER,
  UNIT_KILOMETER,
  UNIT_CENTIMETER,
  UNIT_MILLIMETER,
  UNIT_MILE,
  UNIT_YARD,
  UNIT_FEET,
  UNIT_INCH,
  UNIT_MIL,
  UNIT_NUM_ITEMS,
};

const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS]={
  { "meter", "m", 1.0L, 0.0L },
  { "kilometer", "km", 1000.0L, 0.0L },
  { "centimeter", "cm", 0.01L, 0.0L },
  { "millimeter", "mm", 0.001L, 0.0L },
  { "mile", "mi", 1609.344L, 0.0L },
  { "yard", "yd", 0.9144L, 0.0L },
  { "foot", "ft", 0.3048L, 0.0L },
  { "inch", "in", 0.0254L, 0.0L },
  { "mil", "mil", 2.54E-05L, 0.0L },
};

This is exactly what we started with, but because it is automatically generated from the same “table” in the header file, there is no chance for the table to get out of sync, and furthermore we only have to edit one file to change the data table.

WRAP IT UP

Therefore, we have the following code snippets to place in the header and source files:
                   unit.h:

typedef struct tagUNIT_DEFN {
  const char * const name;
  const char * const abbr;
  const double fact;
  const double off;
} UNIT_DEFN;

#define UNIT_TABLE(F) \
F(UNIT_METER,      "meter",      "m",   1.0L,      0.0L)\
F(UNIT_KILOMETER,  "kilometer",  "km",  1000.0L,   0.0L)\
F(UNIT_CENTIMETER, "centimeter", "cm",  0.01L,     0.0L)\
F(UNIT_MILLIMETER, "millimeter", "mm",  0.001L,    0.0L)\
F(UNIT_MILE,       "mile",       "mi",  1609.344L, 0.0L)\
F(UNIT_YARD,       "yard",       "yd",  0.9144L,   0.0L)\
F(UNIT_FEET,       "foot",       "ft",  0.3048L,   0.0L)\
F(UNIT_INCH,       "inch",       "in",  0.0254L,   0.0L)\
F(UNIT_MIL,        "mil",        "mil", 2.54E-05L, 0.0L)\
/**/

// Make the enumeration of the unit types
#define EXTRACT_ENUM( ID, NAME, ABBR, FACT, OFF ) ID,
enum UNIT_TYPES {
  UNIT_TABLE(EXTRACT_ENUM)
  UNIT_NUM_ITEMS,
};
#undef EXTRACT_ENUM

extern const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS];


                   unit.c:
#include "unit.h"

// Makes the constant table of units
#define EXTRACT_DEFN( ID, NAME, ABBR, FACT, OFF ) \
  { NAME, ABBR, FACT, OFF },
const UNIT_DEFN unit_defn[UNIT_NUM_ITEMS]={
  UNIT_TABLE(EXTRACT_DEFN)
};
#undef EXTRACT_DEFN

It isn’t pretty. In fact, it’s too ugly for this simple example. But, it shows a “standard” method for defining, declaring and initializing constant data, which can be maintained in just one place. The real reason I explored this solution was because I have some much larger applications where this “ugly” solution is actually pretty, and solves some serious maintenance headaches.

In the next article, I will expand on this approach, making a more general-purpose module.


NOTES:

(1) This phrase is shamelessly stolen from Mr. Michael Tedder’s blog post.

(2) This method is inspired by a posting on StackOverflow by user Eyal. His method involves initializing a table in RAM, while I’m trying to initialize constant data. Each type of initialization using this method presents its own challenges.