• Projects

    Dig that lick:
    Analysing large-scale data for melodic patterns in jazz performances

    Principal Investigators: Dr Simon Dixon (Queen Mary University of London, UK), Dr Tillman Weyde (City, University of London, UK), Prof Krin Gabbard (Columbia University, New York, USA), Prof Gabriel Solis (University of Illinois, USA), Dr Hélène Papadopoulos (National Center for Scientific Research, Paris, France)
    Funding Source: Digging into Data Challenge
    Project value: £67,135.80 (fEC)
    Duration: October 2017 – September 2019
    Overview:
    The recorded legacy of jazz spans a century and provides a vast corpus of data documenting its development. Recent advances in digital signal processing and data analysis technologies enable automatic recognition of musical structures and their linkage through metadata to historical and social context. Automatic metadata extraction and aggregation give unprecedented access to large collections, fostering new interdisciplinary research opportunities.
    This project aims to develop innovative technological and music-analytical methods to gain fresh insight into jazz history by bringing together renowned scholars and results from several high-profile projects. Musicologists and computer scientists will work together to create a deeper and more comprehensive understanding of jazz in its social and cultural context. We exemplify our methods via a full cycle of analysis of melodic patterns, or “licks”, from audio recordings to an aesthetically contextualised and historically situated understanding.
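
    As a minimal sketch of the kind of pattern analysis involved, the Python snippet below counts transposition-invariant interval n-grams across monophonic pitch sequences; the data and function names are invented for illustration and are not the project’s actual pipeline.

        from collections import Counter

        def interval_ngrams(midi_pitches, n=4):
            # Use pitch intervals rather than absolute pitches so the same
            # lick is matched regardless of the key it is played in.
            intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
            return [tuple(intervals[i:i + n]) for i in range(len(intervals) - n + 1)]

        # Hypothetical corpus: each entry is one transcribed solo as MIDI pitches.
        solos = [
            [60, 62, 64, 65, 67, 65, 64, 62],
            [67, 69, 71, 72, 74, 72, 71, 69],  # same contour, transposed up
        ]

        counts = Counter(ng for solo in solos for ng in interval_ngrams(solo))
        for pattern, freq in counts.most_common(3):
            print(pattern, freq)

    Running this reports the shared interval patterns with a count of 2, even though the two solos never share an absolute pitch.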

    Automatic segmentation of audio recordings to speech and music

    Principal Investigator: Dr Daniel Wolff
    Co-Investigators: Dr Tillman Weyde, Dr Emmanouil Benetos, Dr Dan Tidhar
    Named researcher: Dr Dan Tidhar
    Funding Source: City University London Research Pump Priming Fund
    Funding: £5,940.90
    Duration: April 2015 – May 2015
    Overview:
    The main aim of this pump-priming project is to improve state-of-the-art automatic speech-music segmentation technology and its applicability to the British Library’s World and traditional music collections.

    Semantic Linking of BBC Radio (SLoBR)

    Principal Investigator: Dr Kevin Page (University of Oxford)
    Advisers: Dr Ian Knopke (BBC), Prof Tim Crawford (Goldsmiths, University of London), Dr Tillman Weyde (City University London), Reinier de Valk (City University London)
    Named researchers: David Lewis (Goldsmiths, University of London)
    Funding Source: Semantic Media Network (EPSRC)
    Funding: £24,955.98 (total)
    Duration: September 2014 – February 2015
    Overview:
    Semantic Linking of BBC Radio (SLoBR) addresses a further crucial step in applying Linked Data (LD) as an end-to-end solution for the music domain. Previous efforts, including the successful SLICKMEM project, primarily generated data and links from and between academic and commercial sources; SLoBR focuses on the use and consumption of such LD and the development of tooling to support these applications. SLoBR does so in two ways: (i) developing a specific proof of concept with the BBC; and (ii) formulating generic approaches, lessons and, where appropriate, re-usable units of software applicable to other uses.

    The wider approach advocated is not a universal environment for all applications, but an investigation into the appropriate balance between specific and generic software elements and their combination in user-driven scenarios. While LD can be viewed as the ultimate generic information substrate, many successful user applications are extremely focussed (characterised by “apps”). The challenge is identifying best practice and tooling (APIs, libraries, UI toolkits) to bridge this gap, minimising development overheads for the remaining elements which are unavoidably unique to each scenario. Through the implementation and evaluation of the BBC demonstrator, SLoBR paves the way for future investigation of this methodology.
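
    As a toy illustration of what consuming music Linked Data looks like in practice, the sketch below loads a small hypothetical RDF graph using Music Ontology and FOAF terms and queries it with SPARQL via the rdflib Python library; the data and URIs are invented and do not represent SLoBR’s actual sources or tooling.

        from rdflib import Graph

        # Hypothetical Turtle data using Music Ontology (mo:) and FOAF terms;
        # the real SLoBR data sources and vocabularies may differ.
        data = """
        @prefix mo: <http://purl.org/ontology/mo/> .
        @prefix dc: <http://purl.org/dc/elements/1.1/> .
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        @prefix ex: <http://example.org/> .

        ex:item1 a mo:Track ;
            dc:title "Example broadcast item" ;
            foaf:maker ex:artist1 .
        ex:artist1 a mo:MusicArtist ;
            foaf:name "Example Artist" .
        """

        g = Graph()
        g.parse(data=data, format="turtle")

        # SPARQL query: which artist made which track?
        q = """
        PREFIX mo: <http://purl.org/ontology/mo/>
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?title ?name WHERE {
            ?t a mo:Track ; dc:title ?title ; foaf:maker ?a .
            ?a foaf:name ?name .
        }
        """
        for row in g.query(q):
            print(row.title, "-", row.name)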


    An Integrated Audio-Symbolic Model of Music Similarity

    Principal Investigator: Dr Tillman Weyde (City University London)
    Co-Investigators: Dr Alan Marsden (Lancaster University), Dr Nicolas Gold (University College London), Dr Emmanouil Benetos (City University London), Dr Daniel Wolff (City University London)
    Named researcher: Dr Samer Abdallah (University College London)
    Funding Source: Arts & Humanities Research Council (AHRC)
    Funding (100% fEC): £45,500 (City University London), £76,963 (total)
    Duration: September 2014 – July 2015
    Overview:
    This project aims to apply the technological infrastructure for large-scale music research being developed in the Digital Music Lab project to answer the musicological question of what constitutes and contributes to similarity in music. This question is key to many aspects of music; it is typically addressed in musicology by studying musical structures on the symbolic level, i.e. in a score, while the standard approach in music information retrieval is based on distance metrics on feature vectors extracted from audio data. This proposal brings together both approaches, making use of the Digital Transformations projects “Digital Music Lab” (DML), “Optical Music Recognition from Multiple Sources”, and “Transforming Musicology”. Using recent technological advances and the infrastructure developed in the DML project, we will develop and evaluate a joint model of the structural and audio-signal aspects and their interaction in music similarity. In this project we will focus on data-driven models, in particular probabilistic models for musical structure and distance-learning models for audio. We will use data on classical, traditional and non-Western music made available by the British Library.
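
    To make the audio side of this concrete: the standard MIR baseline mentioned above reduces each recording to a feature vector and compares pieces via a distance metric. The sketch below computes a cosine distance between two hypothetical 12-bin chroma-style vectors; the feature values are invented, and the project’s actual joint audio-symbolic model is considerably richer.

        import numpy as np

        def cosine_distance(u, v):
            # A standard MIR-style distance between two feature vectors:
            # 0.0 for identical directions, larger values for dissimilar ones.
            return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

        # Hypothetical per-piece features, e.g. 12-bin chroma vectors averaged
        # over a whole recording (the project's actual features differ).
        rng = np.random.default_rng(0)
        piece_a = rng.random(12)
        piece_b = rng.random(12)

        print(f"audio distance: {cosine_distance(piece_a, piece_b):.3f}")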


    A deep learning framework for automatic music transcription

    Principal Investigator: Dr Emmanouil Benetos (City University London)
    Co-Investigators: Dr Tillman Weyde (City University London), Dr Artur d’Avila Garcez (City University London), Dr Simon Dixon (Queen Mary University of London)
    Named researcher: Mr Siddharth Sigtia (Queen Mary University of London)
    Funding Source: City University London Research Pump Priming Fund
    Funding: £5,768
    Duration: April 2014 – July 2014
    Overview:
    The aim of this pump-priming project is to perform a feasibility study on the use of deep learning technology for addressing the automatic music transcription problem, in collaboration with researchers from the Centre for Digital Music of Queen Mary University of London.


    Digital Music Lab – Analysing Big Music Data (DML)

    Principal Investigator: Dr Tillman Weyde (City University London)
    Co-Investigators: Prof Stephen Cottrell (City University London), Prof Jason Dykes (City University London), Dr Emmanouil Benetos (City University London), Prof Mark Plumbley (Queen Mary University of London), Dr Simon Dixon (Queen Mary University of London), Dr Nicolas Gold (University College London), Mr Mahendra Mahey (British Library)
    Project Partners: City University London, Queen Mary University of London, University College London, British Library
    Funding Source: Arts & Humanities Research Council (AHRC)
    Funding (100% fEC): £302,708 (City University London), £564,689 (total)
    Duration: January 2014 – March 2015
    Overview:
    Music research, particularly in fields like systematic musicology, ethnomusicology, or music psychology, has developed as “data-oriented empirical research”, which benefits from computing methods. However, this music research has so far been limited to relatively small datasets because of technological and legal limitations. On the other hand, researchers in Music Information Retrieval (MIR) have started to explore large datasets, particularly in commercial recommendation and playlisting systems, but there are differences in terminology, methods, and goals between MIR and musicology, as well as technological and legal barriers. The proposed Digital Music Lab will support music research by bridging the gap to MIR and enabling access to large music collections and powerful analysis and visualisation tools.

    The Digital Music Lab project will develop research methods and software infrastructure for exploring and analysing large-scale music collections. A major output of the project will be a service infrastructure with two prototype installations. One installation will enable researchers, musicians and general users to explore, analyse and extract information from music recordings stored in the British Library (BL). Another installation will be hosted at Queen Mary University of London and provide facilities to analyse audio collections such as the I Like Music, CHARM and Isophonics datasets, creating a data collection of significant size (over 1 million pieces). We will provide researchers with the tools to analyse music audio, scores and metadata. The combination of state-of-the-art music analysis on the audio and symbolic levels with intelligent collection-level analysis methods will allow exploration and quantitative research on music at a scale that has not been possible before. The use of the proposed framework will be demonstrated in musicological research on classical music, as well as on folk, world and popular music. The results of these analyses will be presented through highly interactive visual interfaces and released as open data and open-source software.
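
    As a toy illustration of collection-level analysis, the sketch below aggregates hypothetical per-piece analysis results (here, estimated tempi) into summary statistics; sharing such aggregates rather than the recordings themselves is one way around the legal limitations noted above. The piece identifiers and values are invented.

        import statistics

        # Hypothetical per-piece analysis output (piece id -> estimated tempo
        # in BPM); in DML such results would come from audio analysis run over
        # the whole collection, not from hand-entered values like these.
        tempi = {"piece_001": 92.0, "piece_002": 118.5, "piece_003": 104.2}

        # Collection-level view: summary statistics over all pieces, which can
        # be shared even where the recordings themselves cannot.
        print("mean tempo:", round(statistics.mean(tempi.values()), 1))
        print("tempo stdev:", round(statistics.stdev(tempi.values()), 1))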


    Semantic Linking and Integration of Content, Knowledge and Metadata in Early Music (SLICKMEM)

    Principal Investigator: Prof Tim Crawford (Goldsmiths, University of London)
    Co-Investigator: Dr Tillman Weyde (City University London)
    Project Partners: Goldsmiths, University of London, City University London, University of Oxford, BBC
    Funding Source: Semantic Media Network (EPSRC)
    Funding: £10,544.00 (City University London), £43,459.74 (total)
    Overview: Linking data from various sources via metadata and/or content is a vital task in musicology and library cataloguing, where semantic annotations play an essential role. This innovative pilot project will work with data in ECOLM of two types: (a) encoded scores OCR’d from 16th-century printed music; (b) expert metadata from British Library cataloguers. We will build on existing ontologies such as the Music Ontology, introducing key concepts embedded in our historical text and music images (e.g. place and person names, dates, music titles and lyrics) and preparing the ground for a new ontology for melodic, harmonic and rhythmic sequences. Sixteenth-century printed texts vary widely in quality, spelling, language, font and layout, so support for approximate matching, e.g. using the Similarity Ontology, is vital for human control and interaction in the cataloguing and retrieval of historical music documents. The project will produce an online demonstrator to show the principles in action, serving as a multidisciplinary pilot application of Linked Data in the study of early music that will be widely applicable to scholarship in other musical and historical repertories.
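
    To illustrate why approximate matching matters here, the short sketch below scores variant sixteenth-century spellings of a name against a query using Python’s standard-library difflib; the catalogue entries are invented examples, and the project itself points to the Similarity Ontology rather than to any particular matching algorithm.

        from difflib import SequenceMatcher

        def similarity(a, b):
            # Ratio in [0, 1]; 1.0 means the two spellings are identical.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        # Invented variant spellings of one name as they might appear in
        # sixteenth-century sources and catalogue records.
        catalogue = ["Iohannes de Fossa", "Johannes de Fossa", "Ioannes a Fossa"]
        query = "Johannes Fossa"

        for entry in catalogue:
            print(f"{entry!r}: {similarity(query, entry):.2f}")

    Exact string lookup would miss every one of these entries; a ranked similarity score lets a cataloguer confirm or reject candidate matches.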


    I-MAESTRO: Interactive Multimedia Environment for technology-enhanced music education and creative collaborative composition and performance

    Principal investigator: Dr Tillman Weyde
    Funding: €214,767.80 (City University London) – €2,350,000 (Total)
    Funding source: EU (FP6 Integrated Project)
    Duration: 2005 – 2008
    Overview: The I-MAESTRO project researches and develops a comprehensive set of methods and tools for musical e-learning. City’s role is the development of algorithms and tools for the generation of music exercises based on musical and pedagogical models.