Find the Robot – Automagically Connect to an Arduino from SuperCollider

Here's some code that checks whether you have an Arduino on any of your serial ports and connects to it if you do.

// scan the serial ports and try to open the board on each one;
// if nothing opens, the error handler reports it
~fnIsRobot = {
	var ports;
	ports = SerialPort.devices;
	try {
		ports.do { arg port;
			["connecting to board", port].postln;
			~board = SerialPort(port,
				baudrate: 115200,
				crtscts: true,
				xonxoff: false);
		};
	} { arg error;
		[error, "board not found, try other usb port?"].postln;
	};
};

First Face Animations

So here's the first animated face.

Face Animations

There were a number of choices open to us. First, make the animations in some proper, designed-for-the-job software and render a video to project onto the face. But, as a programmer, that didn't ring true: I wanted control!

I ended up using Illustrator to make a set of SVGs for each of the eyebrows, each of the eyelids and the mouth, meaning they can all be controlled separately. Each set contains a few frames of movement, for example the mouth moving from smile to neutral, or the eyes closing and opening.

These are then loaded into Processing, and all that does is wait for OSC commands from SuperCollider telling it which parts to display. All the behaviour and timing is therefore kept within SuperCollider, and Processing just acts as a means to display the shapes, which is a nice abstraction. SuperCollider is also much better at doing actions over time ({}.fork that shit), and it's where the head movements will be triggered from, so it's nice to keep them all together.
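For example, the SuperCollider side of a mouth animation might look something like this sketch. The port number (12000), the /face/mouth address pattern and the frame indices are placeholder assumptions, not necessarily what Mortimer actually uses:

// Processing sketch assumed to be listening for OSC on port 12000
~faceApp = NetAddr("127.0.0.1", 12000);

// play a three-frame smile-to-neutral mouth sequence, one frame every 50 ms
{
	3.do { arg frame;
		~faceApp.sendMsg("/face/mouth", frame);
		0.05.wait;
	};
}.fork;

Processing then only has to map each address/frame pair to the corresponding SVG and draw it; all the sequencing stays on the SuperCollider side.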

At the moment I'm just cycling through the sequences and it all looks a bit insane; more thought-out sequences and expressions to come.

When to move your head? And Why?

So I'm making my robot move his head to the music in response to social and musical events, and in about two months I'm going to run a study to research the effects of this. To find out when it should move its head, and why, I've done some reading and written it up for you lovely folks.

First the social.

Fong et al. cite realistic facial expression as a key design factor in social robots, especially in the demonstration of affective behaviour [2]. They report that this is often not done well, describing mechanical approaches as clunky and abrupt.

Although 90% of gestures happen during conversation and are redundant [2], Breazeal and Fitzpatrick suggest that all bodily movements are perceived as semantically rich, even if this is not their intention. Head movements can also provide attentional cues that make up our sense of engagement with another [5]. Further, MacDorman and Cowley demonstrated that attentive head movements are sufficient to elicit the perception of what they call personhood, a concept that we have shown to have large overlaps with social presence and believability [4]. Head movements have also been used by Weinberg et al. in their musical robot Shimon in order to increase its social presence within an ensemble [9].

Now onto music.

In almost all acoustic music performance, the body, and in some cases the head and face, are to some extent coupled to the generation of sound [3, 6–8]. However, they are also used as cues, sometimes intentionally, sometimes not, to augment the performance and to anticipate or accentuate important events. For example, in an analysis of an improvising jazz guitarist, Gratier demonstrates that musicians may use their bodily movements to convey the structure and meaning of the music [3]. Similarly, Vines et al. discovered that the perceived tension of a performance is most affected by visual cues, rather than auditory ones [7], and that it is a combination of sound and visual stimulus that affects an audience's perception of phrasing in a musical performance. This is supported by their observation that the contours of the performer's body movement tended to align with the phrasing of the music. Likewise, Thompson et al. suggest that facial expressions are used to convey timing events, thus increasing musical intelligibility [6].

They also find that facial expressions can be used to make music sound more or less dissonant or to make musical intervals sound further apart or closer together.

Gratier suggests that facial displays of affect may serve the purpose of grounding between improvisers. For example, a musician may smile at a mistake or a particularly satisfying lick [3]. Further, whilst drawing comparisons between improvised music and conversation, she reports that mutual gaze is much less constant in the former. Though less frequent, it tends to occur during moments of structural change or importance in the music.

In a study of a performance by blues guitarist BB King, Thompson et al. find he often used facial expressions to display affect. For example, in moments of tension he takes on an introspective demeanour, looking down and shaking his head; a musicologist interprets this as him signalling that he feels the emotion but will not submit to it. Alternatively, in moments of release he opens his mouth towards the audience as if in wonder. Similarly, Vines et al. found that more expressive performances create more tension [7]. Again using nonverbal cues in relation to rhythmic timing, King's head movements tend to react to individual notes and licks, and he tends to reflect only his own performance, rather than that of his band. Conversely, a study by the same authors of a Judy Garland performance shows that she uses hand gestures in a more illustrative fashion, literally reflecting the lyrics of the song.

[1] Brennand Pierce and Gordon Cheng. Automatic Face Replacement for Humanoid Robot with 3D Face Shaped Display. In ICSR'12, pages 469–474, 2012.
[2] Terrence Fong, Illah Nourbakhsh, and Kerstin Dautenhahn. A survey of socially interactive robots. Robotics and Autonomous Systems, 42:143–166, 2003.
[3] Maya Gratier. Grounding in musical interaction: Evidence from jazz performances. Musicae Scientiae, 12(1 suppl):71–110, 2008.
[4] K.F. MacDorman and S.J. Cowley. Long-term relationships as a benchmark for robot personhood. ROMAN '06, pages 378–383, 2006.
[5] M.P. Michalowski, S. Sabanovic, and R. Simmons. A spatial model of engagement for a social robot. In Proc. 2006 Advanced Motion Control Workshop, pages 762–767, Istanbul, 2006.
[6] WF Thompson and P Graham. Seeing music performance: Visual influences on perception and experience. Semiotica, (156):203–227, 2005.
[7] Bradley W Vines, Carol L Krumhansl, Marcelo M Wanderley, and Daniel J Levitin. Cross-modal interactions in the perception of musical performance. Cognition, 101(1):80–113, 2006.
[8] Marcelo M Wanderley. Quantitative analysis of non-obvious performer gestures. pages 241–253, 2002.
[9] G. Weinberg, A. Raman, and T. Mallikarjuna. Interactive jamming with Shimon: a social robotic musician. In Proc. 2009 Human Robot Interaction Conf., pages 233–234, San Diego, CA, 2009.

Robots are Cool, Hear About Mine. Starting Now


Robots are cool. Fact. But can they keep you interested for more than a short time? How about many short times? Weeks, months, years even?

Roboticists (that's what we're called) have often found it hard to maintain engagement between humans and robots beyond a novelty period. They're either too simple, or they promise too much and are disappointing. So, here at Queen Mary University of London we've built a robot called Mortimer that can not only play the drums, but also listen to humans play the piano and jam along. He can also talk (a bit) and smile. We hope people will build long-term relationships with him through the power of music.

Expect weekly updates