The New Band Members

In 1993, I was asked to perform a concert of electronic music at Life on the Water in San Francisco, a venue that featured a very high quality, multichannel Meyer sound system. Wanting to do something new that would take advantage of the opportunity, I had the idea of moving sounds around the room in response to the motion of my hands in a “field” of some sort. I eventually hit on the idea of using a light sensor, illuminated by a small lamp, whose beam could be interrupted by my hands making shadows on the sensor.

What emerged was my first sensor-based MIDI instrument, employing four photovoltaic (“solar”) cells from my neighborhood Radio Shack, connected to a rather crude electronic circuit, including an analog-to-digital converter, some TTL logic, and a multiplexer chip, wire-wrap connected to the parallel printer port of an early laptop computer. The sensors’ voltage readings were converted to MIDI control messages by custom software on the laptop, and sent out to my musical equipment through a MIDI interface. 

The First STM Sensors

4 Photovoltaic Cells from Radio Shack


A Standard Platform

Although I used the solar cell sensors for a few more performances up through 1997 (with a final performance at Yerba Buena Center for the Arts), it was apparent that a more flexible and general purpose framework (not to mention a physical form that was robust and reproducible) was needed: an ecosystem of various sensors, connected to a common interface, using readily available components, generating MIDI messages that would be processed by custom software on a laptop computer. And all of which would be low cost, portable and easily transportable for live performance.

What emerged was a series of controllers and sensor instruments that grew steadily in scope and power, while maintaining an adherence to the same basic design guidelines:

  • Controller components mounted on a standard, dual-sided PCB
  • Analog sensor inputs, 0-5V, in groups of 4, on 6-conductor flat cable (4 signals, common Vcc and ground) via RJ11 connectors
  • Switch inputs (open/closed) on 8-conductor flat cable (a 4x4 array of 16 switches) via RJ45 connector
  • Housing constructed of acrylic plates and aluminum spacers
  • Thru-hole components with a minimum pitch of .1 inch
  • Constructed using simple tools: drill, screwdriver, soldering iron, etc.
  • MIDI outputs (and optional inputs)

The housings of STM controllers are constructed from inexpensive, readily available components such as acrylic plates, aluminum spacers, rubber feet, and machine screws.

In order to ease the soldering of components to STM PCBs, only thru-hole components with a minimum pitch of .1″ (separation between solder points) are used.

STM sensors connect to STM controllers using standard flat cable (6 and 8-conductors) to carry power and signals. Standard tools provide for cutting and attaching connectors.

Here, an STM-2 controller is attached to 4 shielded photoresistors and a 16-key keypad. The flat cable connections are extended from an adapter to individual sensors using twisted pairs.

Making Useful Things

The process can be a bit messy, but eventually a clean and portable design emerges.

STM Controllers

Consistent Design, Increasing Performance

The factor most responsible for the advance of the STM concept was the appearance, in the late 1990s and early 2000s, of online services for the production of printed circuit boards (PCBs) in low volume at accessible prices. This made it possible to leave behind the messy and error prone process of connecting and powering electronic components using point-to-point soldering or wire-wrapping. Now one could use free software to lay out the circuit, place an online production order, and receive the PCBs by shipment within a few days. Over several years, iterations of this process produced several generations of STM Controllers.

The controller is responsible for collecting and conditioning data from connected sensors, and forwarding the data to a host computer (or other MIDI device) coded as MIDI messages that indicate the source and value of the transmitted data. All the STM controllers conform to the basic STM guidelines listed above, enabling any sensor to be used with any of the STM controllers.
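The encoding the controller performs can be sketched in a few lines. This is a hypothetical illustration, not the STM firmware: the 10-bit ADC resolution and the choice of CC numbers 16-19 for the four sensor inputs are assumptions; only the Control Change byte layout comes from the MIDI standard.

```python
# Hypothetical sketch: encode a 0-5V sensor reading as a MIDI Control
# Change message, one CC number per sensor input. The 10-bit ADC
# resolution and CC numbering are assumptions, not STM specifications.

def adc_to_midi_cc(channel: int, sensor_index: int, adc_value: int) -> bytes:
    """Scale a 10-bit ADC reading (0-1023, i.e. 0-5V) to a 7-bit MIDI CC."""
    if not 0 <= adc_value <= 1023:
        raise ValueError("ADC reading out of range")
    cc_number = 16 + sensor_index     # assume CC 16-19 for sensors 0-3
    cc_value = adc_value >> 3         # 10 bits -> 7 bits (0-127)
    status = 0xB0 | (channel & 0x0F)  # Control Change status byte
    return bytes([status, cc_number, cc_value])

# A full-scale 5V reading on sensor 2, MIDI channel 0:
msg = adc_to_midi_cc(0, 2, 1023)
```

The three-byte result identifies both the source (via channel and CC number) and the value, which is what lets host software route each sensor stream independently.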

The first two models, STM-2 and STM-2P, took advantage of the ease of use of the Parallax Basic Stamp devices, designed primarily for educational and “maker” applications. Though not overly powerful, they nevertheless provided a usable and user-friendly platform for experimentation (this was well before the appearance of Arduino) that freed one to focus more on the application than the hardware. (Indeed, at a time when screaming 32-bit and 64-bit processors with GB of RAM are the norm, one continues to be impressed by the amount of work that can be coaxed from an 8-bit microcontroller running at 8 MHz).





The STM-3 and STM-4 models replace the Basic Stamp with an 8-bit, 50 MHz RISC microcontroller, achieving maximum performance for a MIDI device (at the time, the sampling was often too fast for a laptop or other MIDI devices to handle). Programmed entirely in the device’s native assembly language, these STM controllers continue to function well in modern high-performance applications. 

STM-4 is the final controller in the original, general purpose series, and the first to include the once expensive features of solder mask (the green layer), and white silkscreen lettering. All STM controllers and sensor instruments now include these features.

STM Sensors

Functionality Repurposed

There are literally hundreds of sensor types that are of potential use with the STM platform. Basically, any sensor, passive or active, that can operate from a connection to 5V (or less) and ground, and produce an output signal in the 0-5V range, can be connected to an STM controller via its 3-point connection scheme. The long list includes such types as inductive proximity switches, magnetic closure sensors, position sensors, Hall effect sensors, sonar range finders, passive infrared sensors, pressure sensors, altimeters, flex sensors, photocells and phototransistors, among many others.
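For a resistive sensor such as a photocell, one common way to produce the required 0-5V signal is a voltage divider against a fixed resistor. The arithmetic below is standard divider math; the resistor values are illustrative, not STM specifications.

```python
# Sketch: output voltage of a photocell wired as a voltage divider,
# one common way to put a resistive sensor on the STM 0-5V scheme.
# Resistor values are illustrative only.

VCC = 5.0  # supply voltage

def divider_output(r_fixed: float, r_sensor: float) -> float:
    """Voltage at the junction of a fixed resistor (to ground) and a
    sensor resistance (to Vcc): Vout = Vcc * Rf / (Rf + Rs)."""
    return VCC * r_fixed / (r_fixed + r_sensor)

# Bright light: photocell resistance drops, so output voltage rises.
bright = divider_output(10_000, 1_000)     # about 4.5 V
dark = divider_output(10_000, 100_000)     # about 0.45 V
```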

The bend sensor pod includes 4 sensors that respond to bending and also have a spring-like action when bent and released.

Light sensors (photocells and phototransistors) designed for installation in a room at some distance from the controller. Foam mountings provide for attachment of tubular hoods. 

Light sensor pod housed on a wide acrylic base. Played with hand shadows, or simply placed in a location to respond to ambient light.

STM is friendly to a wide variety of off-the-shelf sensor types, including the passive infrared motion sensor shown here.

Once the mechanism by which a sensor (or pod of sensors) operates is understood, and a suitable STM interface is designed, the possible applications of the new instrument in live electronic music and other media (including exhibits and installations) are limited only by one’s imagination (which is not to say that there won’t be serious effort involved). The standalone MaxMSP application SensorPlay, described in the next section, provides a supportive working environment for directly applying the STM concept to live performance and media control.

Fader pods monitor the position of each fader knob. A single STM controller can accommodate as many as eight faders.

Foot pedals are a convenient means of changing settings and device switches. The pedal pod can connect 8 pedals, and can be chained to a second unit for a total of 16 pedals.

You can add different types of sensors to a sense pod. In this example, 4 photocells are used.

A button pod can be quite handy for changing presets or other settings. STM switch inputs can accommodate up to 16 buttons, and the button pod can be chained with a pedal pod for parallel operation.


MIDI to Media

SensorPlay is a large-scale MaxMSP application designed to serve as a host architecture for a diverse collection of devices which conform to SensorPlay’s host interface requirements. Originally developed in Max4 and previous releases, SensorPlay has existed in its present form since about 2005.

The specific set of devices loaded into SensorPlay at a given time is referred to as the configuration and can vary widely depending on the project or activity that is being addressed. Available modules encompass a wide range of capabilities including advanced controller mappings and triggering, virtual synthesizers, effects units, pitch-to-MIDI, VST hosting, and Audio IO, among many others. SensorPlay can accommodate up to 48 devices in a given configuration, 16 each of 3 types: input/output, control mapping, and sound generation/modification.

As shown in the figure at right, SensorPlay is designed to interface with other system components:

  • External control devices and STM modules via standard interfaces (MIDI, HID, etc.)
  • Audio IO via MacOS CoreAudio
  • Media Applications via Virtual MIDI channels

In order to avoid restricting SensorPlay to a fixed collection of modules, the application is built with knowledge of all modules that might potentially be employed in a configuration. Then, at runtime, the specific configuration is defined by the contents of an external configuration file, which indicates the collection of modules to be loaded.
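The configuration-file idea can be sketched as follows. The file format, type labels, and parsing logic here are invented for illustration (SensorPlay's actual file format is not described above); only the 16-per-type limit comes from the text.

```python
# Hypothetical sketch of a runtime configuration loader: a text file
# names the modules to load, and the host enforces the limit of 16
# devices in each of the three columns. Format and labels are invented.

from collections import defaultdict

LIMIT = 16
TYPES = {"io", "map", "audio"}  # input/output, control mapping, audio

def load_config(text: str) -> dict:
    """Parse lines of the form '<type> <module-name>' into per-type lists."""
    config = defaultdict(list)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        kind, name = line.split(maxsplit=1)
        if kind not in TYPES:
            raise ValueError(f"unknown device type: {kind}")
        if len(config[kind]) >= LIMIT:
            raise ValueError(f"too many {kind} devices (max {LIMIT})")
        config[kind].append(name)
    return dict(config)

example = """
# one device per line: <type> <module>
io    BusOut~
map   NoteTxPose
map   CtrlRanger
audio SamplePlayer
"""
cfg = load_config(example)
```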

SensorPlay processes messages from various devices to create control signals for media applications and devices. 

Input and Output

Aux and Support Devices

The main window displays and manages the states of all the devices present in the current user configuration.

The BusOut~ device routes 4 stereo pairs of audio tracks to MacOS CoreAudio IO channels.

Audio and other signals enter and exit SensorPlay via the Aux and Support Devices located in the upper left column of the main window. Devices in this column represent both physical and virtual instruments (STM devices, MIDI keyboards, Control Surfaces, etc.) that are connected to the host computer’s IO ports (USB, Firewire, Ethernet, WiFi) either directly, or via an attached digital interface.

Devices in this column may also represent other software entities running on the host computer in addition to SensorPlay (Ableton Live, for example), accessed via an interprocess communication mechanism such as a virtual MIDI port.

In the BusOut~ device shown here, four separate stereo audio signals are received on named busses (labeled “recv”) from devices in the Audio Processing column on the lower left (or, in some cases, from other devices in the IO column). Audio bus names are user-defined and selectable by clicking on the recv name field (sP_Dry, sP_Efx1, sP_Efx2, sP_Efx3 in the example). Signals sent on the same named bus will be received by listeners whose selection matches that of the sender.
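The sender/listener matching can be sketched as a simple name-keyed router. This is a simplified stand-in for the MaxMSP send~/receive~ pairing that SensorPlay builds on, reusing the bus names from the example above.

```python
# Sketch of the named-bus idea: a sender publishes on a bus name, and
# any listener whose selected name matches receives the signal.

from collections import defaultdict

class BusRouter:
    def __init__(self):
        self._listeners = defaultdict(list)  # bus name -> callbacks

    def listen(self, bus: str, callback):
        """Register a listener on a named bus (like a 'recv' field)."""
        self._listeners[bus].append(callback)

    def send(self, bus: str, signal):
        """Deliver a signal to every listener matching the bus name."""
        for callback in self._listeners[bus]:
            callback(signal)

router = BusRouter()
received = []
router.listen("sP_Dry", received.append)
router.send("sP_Dry", "audio-frame")   # name matches: delivered
router.send("sP_Efx1", "other-frame")  # no listener selected: dropped
```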

BusOut~ also exhibits another key SensorPlay feature: the use of presets in individual modules, able to retain and recall multiple settings for all of the device’s parameters (presets are indicated by red field backgrounds).

Mapping Devices

Control Maps

At the heart of SensorPlay’s actions are the Control Map Devices in the upper right column. In general, the control map devices receive MIDI and other media control messages, interpreting, modifying and otherwise processing the information to generate output signals that can be used for media control in a real-time environment. In addition to the resources present in MaxMSP (the overall host environment for the app), the actions performed by mapping devices generally involve software components written in the Java and JavaScript languages.

Because the early STM module designs featured analog sensor inputs in groups of 4, many of the SensorPlay mapping devices are designed to handle 4 MIDI streams originating from a single sensor-based device. The two devices shown here exhibit this feature.

NoteTxPose receives MIDI note messages from a MIDI keyboard input device (selected in the yellow device menu at the top) and transforms the notes into up to 4 separate streams according to 4 individual procedures. The modified note streams are then directed to IO or audio processing devices by selection of named control busses (selected from the yellow menus at the bottom).

The CtrlRanger device receives selected MIDI control messages from a MIDI device (for example, a SensePod or BendSensorPod as shown above), applies a scaling and offset to the data, and then maps the output data to a specific MIDI control message and channel. The scaled and remapped control stream is then directed to an IO or audio processing device via selection of a destination control bus from the menu at the bottom.
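The scale-offset-remap step can be sketched in a few lines. This is a hypothetical reconstruction of the idea, not CtrlRanger's actual code; the parameter names are illustrative, and only the Control Change byte layout comes from the MIDI standard.

```python
# Hypothetical sketch of the CtrlRanger idea: apply a scale and offset
# to an incoming 7-bit controller value, clamp to the MIDI range, and
# re-emit it as a new CC number on a new channel.

def ctrl_ranger(value: int, scale: float, offset: int,
                out_cc: int, out_channel: int) -> bytes:
    """Scale/offset a CC value and re-emit it as a new Control Change."""
    scaled = int(value * scale) + offset
    clamped = max(0, min(127, scaled))     # keep within 7-bit MIDI
    status = 0xB0 | (out_channel & 0x0F)   # Control Change status byte
    return bytes([status, out_cc, clamped])

# Halve an incoming value's range and lift it by 32, re-emitting
# on CC 74, channel 1 (100 -> 50 -> 82):
msg = ctrl_ranger(100, 0.5, 32, 74, 1)
```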

The NoteTxPose device maps MIDI notes from a source in up to four different transformations, with output to four independent destinations.

The CtrlRanger device transforms and maps MIDI control messages from a source to up to four independent destinations.

Audio Processing

Virtual Synth and FX Devices

The PitchToMidi device analyzes an audio input channel, abstracting pitch and timing information to create MIDI notes and controls signals, which are routed to a MIDI processing device.

The SamplePlayer device receives MIDI notes and control messages from a mapping device and plays the notes using a collection of audio samples, with stereo audio output to up to three independent destinations.

Audio processing in SensorPlay is applied both to audio streams received from external physical and virtual devices, and to audio streams generated internally by SensorPlay’s Virtual Synth and Effects devices.

The SamplePlayer device shown here is designed to load a collection of audio samples from a specified folder on the host computer, and to make the individual samples playable over an assigned range of MIDI notes. Note messages are received from control maps whose output bus assignments are set to send to the sample player. Various properties of the sample player (volume, bend, vibrato, etc.) are controllable both by preset settings, and by MIDI control messages received from other devices. Audio output is assignable to up to 3 separate audio busses.

The PitchToMidi device analyzes an audio input channel, abstracting pitch and timing information to create MIDI notes and controls signals, which are routed either to a MIDI IO device, or to a control map device for further processing and output assignment. Audio input to the device can originate in an external device, or internally from a device like the SamplePlayer.
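The note-number side of pitch-to-MIDI can be sketched with the standard equal-temperament mapping (A4 = 440 Hz = MIDI note 69); the pitch-detection stage itself, which PitchToMidi performs on the audio signal, is omitted here.

```python
# Standard equal-temperament mapping from a detected frequency to the
# nearest MIDI note number. The detection stage is not shown.

import math

def freq_to_midi_note(freq_hz: float) -> int:
    """Convert a frequency in Hz to the nearest MIDI note number."""
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    return round(69 + 12 * math.log2(freq_hz / 440.0))
```

For example, 440 Hz maps to note 69 (A4), and roughly 261.63 Hz maps to note 60 (middle C).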
