Maninpasta minutes: MusicAnnotation#1 #12

Open
Amleth opened this issue Apr 14, 2021 · 2 comments

Amleth commented Apr 14, 2021

Everything that has been discussed is encoded in this Turtle file:

https://github.com/polifonia-project/stories/blob/main/Sethus:%20Music%20Theorist/SethusVideHomo.ttl

Ontologies involved (a sketch of how they might fit together follows the list):

  • FaBiO, to model the scientific papers that the musicologists refer to
  • CRMinf, to model scholarly readings of these papers and the claims they discuss (approved, relativised, …)
  • LRMoo, to model the score corpus
  • CRM, for everything else (generic relations)

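For reference, here is a minimal, hypothetical sketch of how these four ontologies could interlock around one paper, one reading and one score. All prefixes, class/property URIs and instance names are assumptions made for the example; the actual encoding lives in SethusVideHomo.ttl.

```turtle
# Hypothetical sketch only: prefixes, URIs and instance names are illustrative.
@prefix fabio:  <http://purl.org/spar/fabio/> .
@prefix crm:    <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix crminf: <http://www.ics.forth.gr/isl/CRMinf/> .
@prefix lrmoo:  <http://iflastandards.info/ns/lrm/lrmoo/> .
@prefix ex:     <http://example.org/sethus/> .

# A scientific paper that the musicologists refer to (FaBiO)
ex:paper_1 a fabio:JournalArticle .

# A scholarly reading: the reader holds a set of claims stated in the paper to be true (CRMinf)
ex:belief_1 a crminf:I2_Belief ;
    crminf:J4_that        ex:claims_1 ;          # an I4 Proposition Set
    crminf:J5_holds_to_be ex:belief_value_true . # an I6 Belief Value
ex:claims_1 a crminf:I4_Proposition_Set ;
    crm:P67_refers_to ex:vide_homo_expression .

# The score corpus (LRMoo): the work and one of its expressions
ex:vide_homo_work a lrmoo:F1_Work ;
    lrmoo:R3_is_realised_in ex:vide_homo_expression .
ex:vide_homo_expression a lrmoo:F2_Expression .

# Generic relation (CRM): the paper itself also refers to the expression
ex:paper_1 crm:P67_refers_to ex:vide_homo_expression .
```
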
This session highlighted the need to work with a member of the ontology design group. Indeed, when we "annotate" a score, we do three intertwined things:

  1. We make a physical gesture on the score to select an anchor for the annotation (could EMA be used here?)
  2. We create or select a piece of knowledge to be linked to the anchor. This is the gloss, and it could be modelled with an ontology if it denotes some kind of theoretical knowledge. We are currently working on an analytical ontology for studying modality and tonality (modal-tonal-ontology). We should discuss this with the other Polifonia ontology designers.
  3. We define how the anchor and the gloss are connected (think oa:motivatedBy, but we need stronger semantics for music analysis). These properties should be provided by music analysis ontologies. A sketch of the three steps follows this list.

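To make the three steps concrete, here is a hypothetical encoding based on the Web Annotation vocabulary. The score URI, the EMA-style expression, the mto: (modal-tonal-ontology) namespace and the custom motivation are placeholders for discussion, not an agreed model:

```turtle
@prefix oa:  <http://www.w3.org/ns/oa#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix ex:  <http://example.org/sethus/> .
@prefix mto: <http://example.org/modal-tonal-ontology/> .   # placeholder namespace

ex:annotation_1 a oa:Annotation ;
    # (1) the anchor: a region of the score, addressed here with an EMA-like expression
    oa:hasTarget [
        a oa:SpecificResource ;
        oa:hasSource <https://example.org/scores/vide_homo.mei> ;
        oa:hasSelector [
            a oa:FragmentSelector ;
            dct:conformsTo <https://example.org/ema/spec> ;   # would point to the EMA specification
            rdf:value "1-4/1-3/@all"                          # measures/staves/beats, illustrative only
        ]
    ] ;
    # (2) the gloss: a piece of analytical knowledge, ideally an individual of the
    #     modal-tonal-ontology rather than free text
    oa:hasBody ex:observation_1 ;
    # (3) the connection: oa:motivatedBy is too generic for music analysis, so a
    #     dedicated motivation/property should come from an analysis ontology
    oa:motivatedBy mto:modalInterpretation .

ex:observation_1 a mto:ModalObservation .   # hypothetical class
```
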
Some of these aspects are also discussed in the MEI Linked Data Interest Group (MEI IG LD).


Amleth commented Apr 14, 2021

@guillotel-nothmann @margur78 @albertmeronyo => we should connect and discuss together

#7

@albertmeronyo

Session 14-05-2021: Marco and Christophe have produced a modal analysis of the Vide Homo, with segment annotations and a preliminary automation implemented with music21.

AP: Discuss the multidimensional modelling of "fragments": combinations of XML IDs and 'vertical' offset pointers (a sketch of one possible encoding follows below)
AP: Discuss the possibility of presenting this fragment model at the MEI working group meeting on 28 May
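
A rough sketch of what such a multidimensional fragment could look like, with the horizontal extent given as MEI xml:id references and the vertical slice as staff/offset pointers. The frg: vocabulary and all values below are hypothetical, meant only to anchor the discussion:

```turtle
@prefix oa:  <http://www.w3.org/ns/oa#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/sethus/> .
@prefix frg: <http://example.org/fragment-model/> .   # hypothetical vocabulary

ex:fragment_1 a oa:SpecificResource ;
    oa:hasSource <https://example.org/scores/vide_homo.mei> ;
    # horizontal dimension: the MEI elements spanned by the fragment
    frg:hasElementSelector [
        a frg:XmlIdListSelector ;
        rdf:value ( "#m1_n3" "#m1_n4" "#m2_n1" )
    ] ;
    # vertical dimension: which staff the fragment cuts through, and between
    # which beat offsets (music21-style quarter-note offsets)
    frg:hasVerticalSelector [
        a frg:OffsetSelector ;
        frg:staff 2 ;
        frg:startOffset 1.0 ;
        frg:endOffset 3.5
    ] .
```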
