dc.contributor.author |
Atlı, Hasan Sercan |
dc.contributor.author |
Uyar, Burak |
dc.contributor.author |
Şentürk, Sertan |
dc.contributor.author |
Bozkurt, Barış |
dc.contributor.author |
Serra, Xavier |
dc.date |
2014 |
dc.identifier.citation |
Atlı HS, Uyar B, Şentürk S, Bozkurt B, Serra X. Audio feature extraction for exploring Turkish makam music. Paper presented at: ATMM 2014 The 3rd International Conference on Audio Technologies for Music and Media; 2014 12-14 Nov; Ankara, Istanbul, Turkey. 12 p. |
dc.identifier.uri |
http://hdl.handle.net/10230/35018 |
dc.format |
application/pdf |
dc.language.iso |
eng |
dc.relation |
info:eu-repo/grantAgreement/EC/FP7/267583 |
dc.rights |
info:eu-repo/semantics/openAccess |
dc.title |
Audio feature extraction for exploring Turkish makam music |
dc.type |
info:eu-repo/semantics/conferenceObject |
dc.type |
info:eu-repo/semantics/acceptedVersion |
dc.description.abstract |
Communication presented at: ATMM 2014 The 3rd International Conference on Audio Technologies for Music and Media, held 12-14 November 2014 in Ankara and Istanbul, Turkey. |
dc.description.abstract |
For Turkish makam music, several analysis tools exist that generally use only the audio signal
as input to extract features. This study aims to extend this approach by using additional sources
of information such as scores, editorial metadata and culture-specific knowledge about the music.
In this paper, we describe existing algorithms from similar research, the improvements we apply
to existing audio feature extraction tools, and some potential topics for audio feature extraction
of Turkish makam music. For the improvements, we make use of the Turkish makam music corpus and
culture-specific knowledge. We also present a web-based platform, Dunya, where the outputs of our
research, such as pitch histograms, melodic progressions and segmentation information, will be
used to explore a collection of audio recordings of Turkish makam music. |
dc.description.abstract |
This work is partly supported by the European Research Council under the European Union’s
Seventh Framework Programme, as part of the CompMusic project (ERC grant agreement 267583). |