April 27, 2020


This week, I worked on finalizing my project and creating a presentation, which can be viewed at the link below:


Here is the final Pd patch that I ended up with at the end of this project.


April 20, 2020

Here is my current Pd patch. It now performs the conversion from frequency number to pitch note correctly and outputs the result to a txt file, which is then automatically sent over to Lilypond for transcription. Two more things that I worked on this past week still need to be fixed. The counter on the right side of the patch works; however, starting and stopping the counter is what still needs work. The last thing that needs to be improved is keeping track of the current octave, since Lilypond works in relative octaves. For example, if the first note is a c3, the second a c4, and the third a c4, right now the patch will output “c c’ c’”; but because each octave mark is interpreted relative to the previous note, the second c’ gets raised again to a c5 instead of staying at c4.

At the bottom, I tried to start keeping track of the current octave, but I think it just duplicates what is already in the patch.
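To think through the relative-octave bookkeeping outside of Pd, here is a small Python sketch (my own, not part of the patch): given absolute (letter, octave) pairs, it works out how many ' or , marks each note needs in Lilypond's relative mode. The convention that octave 3 is an unmarked c, and the c3 starting reference, are assumptions on my part.

```python
# Sketch: absolute (letter, octave) pairs -> Lilypond \relative note strings.
# Assumes octave 3 = unmarked c (so middle C is octave 4) and a c3 reference.
LETTERS = ['c', 'd', 'e', 'f', 'g', 'a', 'b']

def to_relative(notes, ref=('c', 3)):
    """notes: list of (letter, octave) pairs -> relative-mode strings."""
    out = []
    prev = ref[1] * 7 + LETTERS.index(ref[0])  # diatonic number of reference
    for letter, octave in notes:
        idx = LETTERS.index(letter)
        target = octave * 7 + idx
        # An unmarked note lands within a fourth (3 diatonic steps) of prev:
        default = idx
        while prev - default > 3:
            default += 7
        while default - prev > 3:
            default -= 7
        shift = (target - default) // 7        # extra octave marks needed
        out.append(letter + ("'" * shift if shift > 0 else "," * -shift))
        prev = target
    return out

print(to_relative([('c', 3), ('c', 4), ('c', 4)]))  # → ['c', "c'", 'c']
```

For the c3, c4, c4 example above this yields c c' c, which is what relative mode needs so the last note stays a c4.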

Week of April 13, 2020

I started off by trying to refine the Pd patch, beginning with the sigmund~ parameters. I added the notes parameter so that it records a frequency when it detects a new note, rather than continuously. I also added a second sigmund~ object to gather the amplitude data and print that out as well. This is all shown in the patch below.

This sends the pitch frequency to the txt file; however, it does not send the amplitude values, because if I sent them there as well, the file would just show pairs of numbers with nothing denoting which is which.

I also looked into the other parameters, such as stabletime, minpower, and growth.

Stabletime won't help here because the notes output already handles that, reporting the pitch at the beginning of each note. What I am still struggling with is the duration of each note.

Minpower sets the minimum power (in dB) that the audio input must reach before a pitch is detected at all.

Growth sets how much the signal's power must grow (in dB) for sigmund~ to report a new note at the same pitch.

After hitting somewhat of a dead end there, I decided to take a look at the automation side and how to make the process flow better. I started looking into AppleScript: how to read the txt file and input it into the Excel sheet, and also how to get the result converted and input into Lilypond to be transcribed. Below are a few of the scripts I have written.

This script doesn't work fully yet… it prints out a blank Excel sheet.

This script doesn't fully work either: it prints all of the numbers into a single cell, and then I have to run another script inside Excel to separate the numbers into their own cells. But it is a better handle on the problem, and it then prints out the correct note descriptions.
I am now in the process of exporting the Excel sheet to Lilypond. It works; however, it spits out the notes as one word instead of spaced out, as shown below.

This writes directly from the text file to Lilypond correctly; however, I need the output as note letters rather than the numbers that the text file spits out.
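For that number-to-letter step, here is a hedged sketch in Python (not the AppleScript I am using). It assumes the text file holds frequencies in Hz, an A4 = 440 Hz reference, and Lilypond's default Dutch-style sharp names; all of those are my assumptions.

```python
import math

# Sketch: turn a frequency in Hz (as the text file spits out) into a
# Lilypond-style note letter plus octave. A4 = 440 Hz and the Dutch sharp
# names (cis, dis, ...) are assumptions on my part.
NAMES = ['c', 'cis', 'd', 'dis', 'e', 'f', 'fis', 'g', 'gis', 'a', 'ais', 'b']

def freq_to_note(freq_hz):
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # nearest semitone
    return NAMES[midi % 12], midi // 12 - 1             # MIDI 60 -> ('c', 4)

print(freq_to_note(261.63))  # → ('c', 4), middle C
```

Rounding to the nearest semitone also absorbs small frequency fluctuations, much like the range-based lookup in the spreadsheet.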

Current output to Lilypond

What I currently have is this: after the txt file is created from Pd, the values are sent to the Excel sheet. However, right now all of the values land in one cell, and it is very difficult to separate them into their own rows. The code below is in place on the Excel sheet to split them into individual cells.
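One way around the one-cell problem would be to split the values before they ever reach Excel. A minimal Python sketch (my own alternative, not the script running on the sheet):

```python
import csv
import io

# Sketch: split Pd's space-separated dump into one value per CSV row, so
# each number lands in its own cell when imported into Excel.
def split_pd_dump(text):
    buf = io.StringIO()
    writer = csv.writer(buf)
    for line in text.splitlines():
        for value in line.split():
            writer.writerow([value.rstrip(";")])  # Pd messages end in ";"
    return buf.getvalue()

print(split_pd_dump("261.6 329.6 392.0;"))
```

Importing the resulting CSV gives one frequency per row, with no second script needed inside Excel.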

Things to work on:

  • Work on the code to input into Excel correctly
  • Work on the code to export from Excel and input into Lilypond
  • Refine the Pd patch more and figure out how to get the duration of notes

Week of 4/6/2020

Here’s an update on my project:

My original idea has changed a little since the beginning. The plan was to start with Pd, take the audio input, and convert it to frequency values. Once I had the frequency values, I wanted them to be automatically translated into note values and then transcribed onto sheet music. The end goal is for all of this to happen in real time as the audio input comes in.

Last week I was still struggling to get the frequency values written into a text file. In addition, I took a look at the transcription end, to understand Lilypond before reaching that point in the project. I now have an understanding of how to create a musical score with this software.

This week I worked on solidifying my Pd patch to be able to collect the data that I want, and to put it in a format that is easy to manipulate. Below is my current Pd patch.

This patch enables the frequency values to be sent to the text file “music-notes” to be analyzed. Below is an example of the data that is sent to this text file.

Below is part of the spreadsheet that I use to reference the frequency values to translate them to the proper notation.

This is the code for the numerical-to-notation conversion in the Excel spreadsheet.

Here is an example of the transcription. In the code, I represent each note value as a range, to accommodate frequency fluctuations as well as the fact that I am currently using Apple headphones as my microphone.

Once I am able to copy/paste the notation values, I place them in the Lilypond code below to be transcribed by that software, also shown below.

A few road blocks I am currently facing/next steps I am going to look into are:

  • How to export the text file from Pd without copy/paste
  • Automatic translation from numerical values to alphabetical values, potentially without using Excel
  • Getting the duration of notes in Lilypond rather than just quarter notes
  • Automating the final product

Week of 3/30/2020

So this week has been interesting, with everything switching over to online classes. I have been chugging away at this project and hitting a bunch of roadblocks. I have figured out that this project is really more a matter of inputting text into Pd, manipulating that text, and outputting it in a different format. Right now I am able to collect the input data in real time, but I am having trouble outputting it into a text file that I can then send to Lilypond for musical transcription. Below is the current patch I have in Pd after going through a lot of help files and online forums, but it seems not to be sending the microphone data to the text file.

On the other end of this project, I began looking into what I will do once I have this numerical data, and how to turn it into sheet music. Looking into Lilypond, the input data is alphabetical instead of numerical, so once I get the numerical data from Pd, I will have to figure out how to convert it to alphabetical note names in order to run it through the Lilypond software. An easier platform I have discovered is Frescobaldi, which basically takes Lilypond text files and lets you edit and create them while watching the sheet music render in real time, rather than having to send everything to the Lilypond command line.

Below is a little image of the script I was practicing with to get a better understanding of how Lilypond and Frescobaldi work. The next step in the second half of my project is figuring out how to manipulate the alphabetical data that I can hopefully get from Pd into the format this software expects.
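As a sketch of that manipulation step (in Python, with the \version header, the file name, and the quarter-note-only durations as my assumptions): take the converted note letters and wrap them in a minimal file that Frescobaldi or the lilypond command line can compile.

```python
# Sketch: wrap converted note names in a minimal Lilypond source file that
# Frescobaldi (or `lilypond music-notes.ly`) can compile. The \version
# number and file name are placeholders of my choosing.
def write_ly(notes, path="music-notes.ly"):
    source = '\\version "2.20.0"\n{ ' + " ".join(notes) + " }\n"
    with open(path, "w") as f:
        f.write(source)
    return source

print(write_ly(["c'", "d'", "e'", "c'"]))
```

Joining with spaces here also avoids the notes coming out as one word, since each note name is separated before the file is written.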

Week of 3/2/2020

Notes on presentation and where to go from here:

This project is more of a text-manipulation problem than I originally thought.

Next steps to work on through Spring break:

  • Read the first few chapters of the Pd book to get a better understanding of Pd
  • Keep all of the data instead of trying to take data over a certain period of time
    • I want to capture the exact duration of the notes when they change pitch
  • Look into using the “text” object to export data into a text file that can be analyzed easily

Week of 2/17/2020

To start, I was insanely sick this past week and unable to do much work over the weekend and the beginning of the week. I started off with the simple fiddle~ patch, then decided to try sigmund~ instead, since it is supposed to be easier to use. Below is the basic sigmund~ patch I have been working with to collect audio input data. The video shows how I captured the recorded piano through the headset microphone.

With the data printed to Pd's console, I was able to copy/paste it into an Excel document for analysis. Below are two screenshots showing the data translated into their respective note values. I wrote a set of Excel formulas to categorize each frequency and assign its corresponding note. This is the first step toward seeing the tabular movement of the notes.

Zoomed in view of transcription
Transcription overview

Things to continue to work on:

  • How to get these values inputted straight into a table/file
    • In real time?
  • How to get the duration of notes
  • More sigmund~ parameters
    • Round to the nearest frequency value for easier note transcription

Problems I ran into:

  • Am I going in the right direction/what should my next steps be?

Week of 2/10/2020


Using this patch, I was able to generate frequencies in the log. I then extracted the numerical values and placed them in an Excel spreadsheet.

I first took a video of recording this data with a headset microphone, to document the data.

Things to continue to work on:

  • Why are the numbers between ~40 and 100?
  • How to get these values input into a table or printed to a file?
    • How to then input them in real time
  • How to convert these numerical values into note letters
  • How to get the duration of notes, not just when a note changes
  • Next, figure out how to vary the parameters in the fiddle~ patch to get the output I want

Problems I ran into:

  • My computer crashed when I tried to run the starter fiddle~ patch