WORKSHOP 2 SCHEDULE

Day 1

1030 : Tea/coffee (foyer)
1045 : Welcome & introduction (MMR) Paul Stapleton & Alice Eldridge
1100 : Presentations (MMR)
  Chris Kiefer & Alice Eldridge - Resonator Feedback Cellos: Where and when is listening?
  Trevor Agus - The temporal fine structure of sounds
  John Bowers - Musicians Inside Algorithms
  Nicholas Ward - Clocks, ageing, zinc and speakers: reflections on HAL workshop 1
  Felix Gerloff - Utilising the sonic channel for human-machine collaboration
  Franziska Schroeder - ‘Distributed Listening’

1300 : Lunch (foyer)

1400 : Afternoon performance (sonic lab)
  John Bowers - Stookie John Comes To Belfast

1430 : Presentations & demos (sonic lab)
  David Kant - Composing through machine listening
  Tom Davis - Feral Cello: temporal and fluid agency in improvised music performance
  Owen Green - Play and Mischief as Serious Work: Hearing and Realisation
  Conor Barry - Mogees: Music-HCI industry perspective on machine learning

1600 : Tea/coffee break (foyer)
1630 : Group discussion (MMR)
1830 : Dinner at Barking Dog
2030 : Evening performances (sonic lab)
  Chris Kiefer & Alice Eldridge - Feedback Cell
  Theo Burt/The Automatics Group - Remixes

2130 : End day 1

Day 2

1100 : Tea/coffee (foyer)
1115 : Reflections on day 1 (MMR)
1200 : Errant machine listening workshop
1300 : Lunch (foyer)
1400 : Errant machine listening workshop continues
1600 : Sketch performances and closing discussion
1730 : End day 2



SPEAKERS & PERFORMERS



Morning Presentations


image of Alice&Chris

Chris Kiefer & Alice Eldridge

Resonator Feedback Cellos: Where and when is listening?

The Resonator Feedback Cellos is an ongoing project in which we are modifying classical cellos to create new digitally-controllable, electroacoustic, actuated instruments. Signals from electromagnetic pickups sitting under each string are sent to a speaker built into the back of the instrument body and to transducers clamped in varying places around the edges. Pickup signals can be monitored within the feedback loop by analogue or digital processes and can be modified as some function of the signal (amplitude, frequency content etc.). This adaptive feedback process allows for the introduction of varying degrees and types of uncontrol into the system. In installation, the cellos respond to their surrounding acoustic environment; in performance, playing the instrument is a matter of negotiation and emergent conversation rather than control. We outline the forms of listening (implicit and explicit) that we have explored to date and consider the implications of algorithmic listening in feedback instruments: What role do the listening processes play in engendering musical agency in the instrument? How does this affect the way we interact with it and the resultant nature of the collaboration? What are the consequences of the listening and sound generating mechanisms being so tightly coupled? Can we even pinpoint the when and the where of listening in these situations? Could this feedback-listening model be valuable in other machine listening scenarios?
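
As an illustration of the kind of adaptive feedback process described above, here is a minimal sketch (not the authors' implementation) in which the gain of a delay-line feedback loop is modulated as a function of the signal's amplitude envelope; the parameter values and the gain rule itself are assumptions:

```python
# Minimal sketch of an adaptive feedback loop: the gain applied to the
# signal fed back into the "resonator" is itself a function of the
# signal's recent amplitude, so loud passages damp themselves and quiet
# ones bloom. Illustrative only; all constants are assumptions.
import numpy as np

SR = 44100          # sample rate
TARGET_RMS = 0.2    # amplitude the loop settles towards (assumed)
SMOOTHING = 0.999   # one-pole envelope follower coefficient

def adaptive_feedback(excitation, delay_samples=200):
    """Run a delay-line feedback loop whose gain tracks signal amplitude."""
    out = np.zeros(len(excitation))
    delay = np.zeros(delay_samples)   # crude stand-in for body resonance
    env, gain, idx = 0.0, 1.0, 0
    for n, x in enumerate(excitation):
        y = x + gain * delay[idx]     # pickup = input + fed-back signal
        env = SMOOTHING * env + (1 - SMOOTHING) * abs(y)  # envelope follower
        gain = min(1.2, TARGET_RMS / (env + 1e-9))  # louder -> lower gain
        delay[idx] = y                # write back into the "resonator"
        idx = (idx + 1) % delay_samples
        out[n] = y
    return out

# Excite the loop with a short noise burst and let it self-regulate.
burst = np.zeros(SR)
burst[:500] = np.random.randn(500) * 0.5
signal = adaptive_feedback(burst)
```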


image of Trevor Agus

Trevor Agus

The temporal fine structure of sounds

Spectral analyses of sound largely ignore a treasure trove of precise timing cues, information that our ears preserve. Reduced access to this timing information may account for some audiological deficits, particularly those that emerge in complex soundscapes. Analogously, if machine listening algorithms are expected to be robust in complex soundscapes, they may need access to this finer temporal information. In my presentation, I will illustrate the wealth of audio features that can be extracted from temporal information even when spectral information is discarded.
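
By way of illustration, here is a minimal sketch of extracting features from timing information alone, with spectral analysis discarded entirely; the specific features (zero-crossing intervals and a rectified amplitude envelope) are assumptions chosen for the example, not the speaker's methods:

```python
# Minimal sketch of purely temporal feature extraction: zero-crossing
# intervals and the amplitude envelope, with no spectral analysis.
import numpy as np

def temporal_features(x, sr=44100):
    # Zero-crossing intervals: fine timing cues a spectrogram coarsens away.
    crossings = np.where(np.diff(np.signbit(x)))[0]
    intervals = np.diff(crossings) / sr          # seconds between crossings

    # Amplitude envelope via rectification + moving average (a crude
    # stand-in for the smoothing done by the auditory periphery).
    win = int(0.005 * sr)
    envelope = np.convolve(np.abs(x), np.ones(win) / win, mode="same")

    return {
        "mean_crossing_interval": intervals.mean(),
        "crossing_interval_jitter": intervals.std(),
        "envelope_peak": envelope.max(),
    }

# Example: a 1 kHz tone with slow amplitude modulation.
t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
print(temporal_features(tone))
```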


image of John Bowers

John Bowers

Musicians Inside Algorithms: Live Coding, Performance Ecologies and Messed Materialities in ‘Turing Tape Music’

I will describe a series of performance pieces that I have been developing with Tom Schofield which take a media archaeological turn on the interplay between music and computing. In these, contemporary interest in live coding is explored by programming a device which follows the specification of an ‘a-machine’, as given by Alan Turing in 1936, where symbols are read, written and erased by a head moving over a tape, subject to various rules. Turing’s device is typically taken as an abstract, de-materialised thought experiment, but we materialise it as a performable technology which can be programmed live. Our device, rather than playing a role in simulating any kind of musical competence, as might be traditional in various cognitive science formulations, inhabits a performance ecology alongside other materials and improvised action. For example, in ‘The Sea Is Ground’ (Norway, 2016), we programmed the machine by measuring conductivity changes in samples of local sea water. The machine’s actions are sonified using multiple simultaneous strategies and the tape is regarded as a performable symbolic score for a modular synthesiser. Consistent with the TOPLAP live coding manifesto, we project our ‘code’ in various visualisations. Against this background, I will raise questions about the very idea of ‘humanising algorithmic listening’ and explore each of those terms and their conjunctures. My hope is that the kind of performance ecology instantiated in Turing Tape Music, with its varied relations between humans and machines, listening and playing, code and other materials, can serve as a design image that can inform work in other contexts.
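
For readers unfamiliar with the construction, here is a minimal sketch of an a-machine as Turing specified it (symbols read, written and erased by a head moving over a tape under a rule table); the rule table below is purely illustrative and is not the program used in the performances:

```python
# Minimal sketch of Turing's 1936 'a-machine': a head reads, writes and
# erases symbols on an unbounded tape according to a rule table.

def run_a_machine(rules, tape=None, state="A", steps=20):
    """rules: (state, symbol) -> (write_symbol, move, next_state)."""
    tape = tape if tape is not None else {}
    head = 0
    for _ in range(steps):
        symbol = tape.get(head, " ")          # blank cells read as ' '
        if (state, symbol) not in rules:
            break                             # halt on a missing rule
        write, move, state = rules[(state, symbol)]
        tape[head] = write                    # write (or erase) the symbol
        head += 1 if move == "R" else -1      # move the head
    return tape

# Two-state example: alternate writing '0' and '1' while moving right.
rules = {
    ("A", " "): ("0", "R", "B"),
    ("B", " "): ("1", "R", "A"),
}
tape = run_a_machine(rules)
print("".join(tape[i] for i in sorted(tape)))  # -> 01010101...
```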


image of Nick Ward

Nicholas Ward

Clocks, ageing, zinc and speakers: reflections on HAL workshop 1

My understanding of the abilities of the listener affects how I start to speak, and on an ongoing basis I continuously modulate what I’m saying and how I’m saying it according to my comprehension of my listeners’ comprehension of what I’m saying. I once experienced a nurse speaking very slowly and quite loudly at an elderly person, assuming that they were deaf and senile. We adjust ourselves to the listener. How might these listening machines adjust me?


image of felix

Felix Gerloff

Utilising the sonic channel for human-machine collaboration

In this presentation I want to pose a few questions and consider some possible vectors for the analysis and development of sonic media of collaboration between humans and machines. In my research context this includes settings of programmers and their computers, as well as academics and presentational media. The concept of algorhythmics (Shintaro Miyazaki) serves as a valuable tool to grasp the interplay between the temporality of algorithms and the rhythmicity of sound events that are audible to humans. Fostering sonic means of collaboration in epistemic and generative contexts might lead to a clearer understanding and further development of what we might call sonic thinking. Humanising algorithmic listening might mean developing transparent forms of sonically thinking together and collaborating with our machines.


image of Franziska Schroeder

Franziska Schroeder

Distributed Listening - An AHRC-funded project led by Franziska Schroeder and Pedro Rebelo

We will introduce a newly developed listening app, called LiveShout, which explores notions of distributed listening. The talk will give a brief insight into the background research behind LiveShout and elaborate on two practice-based theatre projects which developed out of notions of ‘distributed listening’ and used the app as the basis for their narrative structures. The two theatre companies involved in the project were The Lyric Theatre Belfast and 42nd Street Manchester.

You can find out about ongoing and past work here: http://www.socasites.qub.ac.uk/distributedlistening/index.php/sciencefestival/



Lunchtime performance


image of John Bowers

John Bowers

Stookie John Comes To Belfast

Improvised performance with machine listening, live coding, digital and analogue modular synthesis, image projection. Approximately 20 minutes.

In a number of recent performances (notably in collaboration with Tom Schofield), I have explored the possibility of connecting contemporary interest in live coding, where musicians write the code that generates the music as a performable act, with the history of technology and experiments in the relationships between different kinds of materiality (silicon, water, earth, airborne vibration, light). Performances are an improvised affair bringing together different materials, creating and executing programs, juxtaposing sound and image, and so forth. In Stookie John Comes To Belfast, I will add in a concern for machine listening to this brew. Stookie John is a beheaded ventriloquist’s doll who does his own kind of binaural listening to the performance (named in honour of Stookie Bill, the doll who provided the first televisually transmitted face in John Logie Baird’s 1926 demonstrations). The results of his listening are used to live code in an esoteric programming language I am developing. Various demons, myself included, transform and read from this code to shape the behaviour of a modular synthesiser and live room-sound sampling algorithms - all of which is, in turn, listened to by Stookie John. Esoteric code windows will be projected to secure conformance with the TOPLAP live coding manifesto. Machine listening, live coding, modular synthesisers, projected image, macabre doll, all in multiple feedback loops. What more could you want?


Presentations & demos


image of Kant

David Kant

Composing through machine listening

I suspect that machine listening is an opportunity to hear the world differently. For the past six years, I have explored this suspicion by translating machine listening analyses of pop songs into music notation and asking human musicians to re-perform the results. The transcriptions are so impossibly over-specific that a dedicated ensemble, the Happy Valley Band, has evolved and crafted a unique performance practice for this idiosyncratic machine-made music. In this talk I will discuss the process of composing Happy Valley Band music, the machine listening techniques employed, my experience working with musicians, and audiences’ polarized reactions.
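
As an illustration of one machine-listening step such a transcription pipeline might involve, here is a minimal sketch that estimates the pitch of a single audio frame by autocorrelation and maps it to the nearest notated pitch; the method choice is an assumption for the example, and the Happy Valley Band pipeline is considerably richer:

```python
# Minimal sketch of frame-level pitch transcription: autocorrelation
# pitch estimate, then quantisation to the nearest equal-tempered note.
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frame_to_note(frame, sr=44100, fmin=60.0, fmax=1000.0):
    """Return the nearest equal-tempered note name for one audio frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])              # strongest periodicity
    f0 = sr / lag
    midi = int(round(69 + 12 * np.log2(f0 / 440.0)))  # Hz -> MIDI number
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# A 440 Hz test frame should transcribe as A4.
t = np.arange(2048) / 44100
print(frame_to_note(np.sin(2 * np.pi * 440 * t)))    # -> A4
```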



image of Davis

Tom Davis

Feral Cello: temporal and fluid agency in improvised music performance

As an instrument builder and improvising musician I am interested in how “things” can be characterised as ‘temporally emergent in practice’. In this presentation I will be looking at notions of performativity, in particular how this term has been characterised by Andrew Pickering and Karen Barad, in relation to formulations of agency that are temporal and fluid. Taking this relational ontology as a starting point, I will describe a performance system called the Feral Cello which seeks to explore and exploit these ideas through action. This system has machine listening at its heart and is an attempt to investigate conceptions of machine agency and to make explicit the shifting boundaries of humans, technologies and associated notions of intention within a musical context.


image of Green

Owen Green

Play and Mischief as Serious Work: Hearing and Realisation

Among the range of pertinent and pressing questions formed around the HAL agenda at the first meeting were clusters around two focal points that I will explore. First, there are questions that probe the nature of the basic categories we are invoking: humans, machines, listening. Being alert to how these categories are contingent and multi-stable is, in my view, a critical prerequisite for constructive cross-disciplinary work, and serious thought ought to be given to the various ways in which we can prod and construct these distinctions from different disciplinary perspectives. The second cluster of questions revolved around the place of artistic practice research in the HAL endeavour: what kinds of cross-disciplinarity could emerge, and with what effects? I shall relate this second cluster to the first by looking at how my own research musicking bounces off some of these categorical considerations by cultivating a playful/errant relationship with machine listening techniques. Perhaps appositely, perhaps foolishly, as much of this presentation as possible will be demonstrative, so that we can explore the territory by listening, as humans and machines, whatever those things may be.


image of barry

Conor Barry

Mogees: Music-HCI industry perspective on machine learning

Mogees Ltd. is a multi-award-winning technology company based in Shoreditch, East London. Mogees’ research focuses on new methods for gestural interaction, making ordinary, everyday objects extraordinary with smart technology. Mogees’ original product, Mogees Pro, combines revolutionary software and a vibration sensor to transform any object into a musical instrument; it was funded by successful Kickstarter campaigns.



Evening Performances



image of FBC

Feedback Cell

Feedback Cell is the duo formed by cellist Alice Eldridge and computer-musician Chris Kiefer (Luuma) to explore their ever-evolving feedback cello project. Two butchered cellos, electromagnetic pickups, code, bows and lots of soldering. Emits dulcet drones and brutal yelps. In each performance we actively seek to come to stage with a new feature, configuration or development to explore. For this performance we will be collaborating with Craig Jackson to experiment with the effects of (frequency dependent) spatial diffusion on algorithmically and intuitively controlled feedback processes.


image of theo

Theo Burt/The Automatics Group

Remixes

Continuing the work started with his 2011 release Summer Mix, an album of automatically transformed club anthems (Entr’acte, Death of Rave), Theo Burt’s ongoing remixes project utilises various processes to restructure and synthesise new material from existing music and music video. Working only with offline processes, he subjects batches of music to individual transforms, from simple restructuring to analysis and reordering via machine listening algorithms. The tracks presented are his curation of this output.
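
As an illustration of one such offline transform, here is a minimal sketch that slices a track into fixed-length segments, analyses each with a simple listening feature, and reorders the segments by that feature; the choice of feature (RMS loudness) and segment length are assumptions for the example, not Burt's actual processes:

```python
# Minimal sketch of an offline machine-listening transform: segment a
# track, compute one feature per segment, and reorder by that feature.
import numpy as np

def reorder_by_loudness(audio, sr=44100, seg_seconds=0.5):
    """Restructure a track by sorting fixed-length segments quiet-to-loud."""
    seg_len = int(seg_seconds * sr)
    n_segs = len(audio) // seg_len
    segments = audio[: n_segs * seg_len].reshape(n_segs, seg_len)
    rms = np.sqrt((segments ** 2).mean(axis=1))  # one feature per segment
    order = np.argsort(rms)                      # quietest first
    return segments[order].reshape(-1)

# Example: noise with a rising loudness ramp, reversed, should have its
# ramp recovered (approximately) by the quiet-to-loud reordering.
noise = np.random.randn(44100 * 4) * np.linspace(0, 1, 44100 * 4)
restructured = reorder_by_loudness(noise[::-1].copy())
```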


Dialogue & Discussion