TECH OFFER

Method for Modeling Style of Musical Data

TECHNOLOGY OVERVIEW

A novel method for modeling musical data using deep learning is described. Musical data such as MIDI files are used to train the model. A series of vectors x is extracted from the musical data, each comprising a pitch P, a time difference T, and a duration D, where T is the time lapse in beats between the onset of the preceding event and the current event. Each vector x is encoded into a latent space with prior p(z) by means of a Variational Autoencoder (VAE): an approximate posterior distribution q(z|x) is constructed and trained with the VAE loss function L_V. The model combines a content latent space with a style embedding, encoding the content and the style of the music separately, so that it can model and reconstruct two different musical styles.
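For illustration only, the sketch below (not the patented implementation; it assumes PyTorch, and all module names, layer sizes, and the toy data are assumptions) shows how a VAE with a separate content latent space q(z|x) and a learned style embedding could be wired for (P, T, D) note-event vectors and trained with the usual VAE loss L_V, i.e. reconstruction error plus a KL term against the prior p(z).

```python
# Illustrative sketch only, assuming PyTorch; not the patented implementation.
# Each note event is a vector x = (P, T, D): pitch, time difference in beats
# from the previous event, and duration (pitch is treated as a continuous
# value here purely for simplicity).  The encoder q(z|x) maps x to a content
# latent space with prior p(z) = N(0, I); a learned style embedding encodes
# the style label separately; the decoder reconstructs x from (z, style).

import torch
import torch.nn as nn
import torch.nn.functional as F


class StyleContentVAE(nn.Module):
    def __init__(self, x_dim=3, z_dim=16, style_count=2, style_dim=8, hidden=64):
        super().__init__()
        # Encoder q(z|x): produces mean and log-variance of the content latent.
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        # Style embedding: one learned vector per musical style.
        self.style = nn.Embedding(style_count, style_dim)
        # Decoder: reconstructs x from the content latent plus the style code.
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + style_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim)
        )

    def forward(self, x, style_id):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample z ~ q(z|x).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.decoder(torch.cat([z, self.style(style_id)], dim=-1))
        return x_hat, mu, logvar


def vae_loss(x, x_hat, mu, logvar):
    # L_V = reconstruction error + KL(q(z|x) || p(z)) with p(z) = N(0, I).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


# Toy usage: a batch of four (P, T, D) note-event vectors from two styles.
if __name__ == "__main__":
    model = StyleContentVAE()
    x = torch.tensor([[60.0, 0.0, 1.0], [62.0, 1.0, 0.5],
                      [64.0, 0.5, 0.5], [65.0, 0.5, 1.0]])
    style_id = torch.tensor([0, 0, 1, 1])
    x_hat, mu, logvar = model(x, style_id)
    print(vae_loss(x, x_hat, mu, logvar))
```

Because the style code is supplied separately from the content latent, a decoder of this kind can in principle reconstruct the same content under either style label, which is the separation of content and style that the offer describes.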

Mega-Trends

Learning Innovation for spiritual knowledge, Music

Technology Readiness Level (TRL)

TRL 4

Patent Number

PI 2020004745


Contact person for this offer:

ChM Dr. Lee Ching Shya, PhD (Dual), RTTP

Technology Transfer Manager

Email: leecs@um.edu.my

Tel: +603-7967-7351 / 013-2250151

MORE INFORMATION

Do you have a question about our technologies or cooperation opportunities?
Please contact us:

+603-7967-7351 / 013-2250151
