Vibe Coding
by Matt Cholick

The app that I used for learning key signatures disappeared from the Play Store between phones. Android does a decent job of transferring things to a new device, but if an app is gone from the store it won't come over. The app had a few other features, but the one I used it for was a section that presented key signatures for drills, flash-card style. I had fully learned the major key signatures, but hadn't yet completely learned the minor ones.
This seemed like a good chance to really experiment with LLM coding and see where things are. My day-to-day workflow is writing code myself and only reaching out to LLM chat for a syntax assist or rubber ducking (with the editor's inline ghost-text completions turned off, since I find them too distracting). For this, I changed that up quite a bit and used chat via Copilot Edits to generate ~95% of the code. (Copilot Edits works by attaching files to the prompt, giving instructions, and then accepting or rejecting the diffs it generates.)
I used Claude's 3.7 Sonnet model. It took about 40 prompts and a couple of hours. Here's the final repo, github.com/cholick/key-signature-flashcard-drills-pwa, and the app is deployed at https://key-signature.cholick.com.
In most of the prompts I described the behavior I wanted, but a few were code-specific. One example, where it was easier to just tell it what was wrong: "The accidentalInfo and the way we're counting isn't working; currentKeySignature.key.length isn't giving what's needed."
It worked. It got me to the app I was after in far less time than it would have taken me to build it myself. My front-end skills are super rusty, and this is the sort of JavaScript and DOM-manipulation work that I really don't enjoy, so this app wouldn't exist if AI hadn't coded it. I would have found some other tool, used physical cards, or just been lazy for a while and kept counting down three half-steps to get to the relative minor. The only part where I had to really reorient and tell it what to do was displaying the key signatures. I knew VexFlow would work, since it's a library I've used before, but I didn't start with it because I was curious what the LLM would suggest. I didn't go full Vibe Coding, but for sections of it I did just skim the diff, test in the browser, and call it good.
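As an aside, that three-half-steps shortcut is easy to sketch in code. This little helper is my own illustration, not code from the app; it walks a chromatic scale (spelling every accidental as a sharp for simplicity) to find the relative minor of a major key:

```javascript
// The relative minor shares its key signature with the major key and sits
// three half-steps (semitones) below the major tonic.
const NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function relativeMinor(majorTonic) {
  const i = NOTES.indexOf(majorTonic);
  if (i === -1) throw new Error(`Unknown note: ${majorTonic}`);
  // Subtract 3 semitones, wrapping around the octave.
  return NOTES[(i + 12 - 3) % 12];
}

console.log(relativeMinor('C')); // A  (C major and A minor share an empty signature)
console.log(relativeMinor('G')); // E  (G major and E minor share one sharp)
```

The flash-card app exists precisely so I can stop doing this arithmetic in my head.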
Toward the end, the AI started to make more mistakes. The "here are the files, please implement a thing" style definitely has limits as complexity increases. The mistakes weren't anything big; it just wasn't getting things right on the first try every time like it had at the start. Each fix, though, only needed describing, and the AI then managed it in a single cycle. This seems like the sort of thing where general good software practices would keep it on the rails (nothing too big, loose coupling, high cohesion). While cycling, it also didn't always fully clean up after itself, so I occasionally needed to prod it with a prompt along the lines of "Do we have any dead code, styling, or comments?"
One takeaway is that I should commit much more often when using these Edit/Agent workflows. I had a few cases where things went wrong, and VS Code's undo for large, multi-file edits isn't great. A local multi-file rollback would have been really useful, something like JetBrains' robust Local History.
I think some sort of custom instructions would have improved things. "Comments should reflect the current state of the code only", for example, is a directive I think would have helped: several times comments encoded information about the editing cycle itself rather than the code, like "Hide the controls container instead of just the start button". Some directives around styling consistency could be useful too, like "Always use lowercase letters in hex colors."
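If I repeat this, one option (assuming VS Code's support for repository-level custom instructions, which Copilot includes with chat requests) is a .github/copilot-instructions.md file along these lines; the exact wording here is mine:

```markdown
# Copilot instructions for this repo

- Comments should reflect the current state of the code only; never describe
  what changed relative to a previous version.
- Always use lowercase letters in hex colors.
- Keep to plain JavaScript and DOM APIs; this project does not use React.
```

Putting the directives in the repo means they apply to every prompt without having to restate them in each session.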
It was an interesting experiment. I learned that I'm not taking advantage of these tools as much as I could. Working with LLMs and code over the past couple of years, it's been tricky to find the line where reaching for an LLM is productive versus a waste of time. I had that line well calibrated a year or so ago, but this exercise has shown me I need to move it and keep experimenting, because these tools have improved at an impressive pace.
For reference, here's the first two-thirds or so of my prompt history. Unfortunately I lost the last third: I reset my session, and it seems that history is gone, but what remains is representative.
- I'd like to create an app for doing flash drills of key signatures, to practice music. I want everything client-side, so the server just serves static files. An exercise will present a key signature and then have choices for all the options. It tracks success or failure, and runs for 5 minutes. I don't want to use react. Please lay out this project structure
- "Identify the key signature" is just displaying a string. It should display a treble clef. What are some options for that?
- How would I do this with a music font?
- I don't see how the music font is rendering a key signature in what you've suggested
- The font isn't rendering correctly, as you can see in this image
- No, I want to stick with a font. I don't want to generate all those svg images.
- It does display the correct number of sharps, but it doesn't display staff lines, and the sharps are not correctly placed on the staff lines
- This is getting a bit much to manage and encode, and will have lots of styling fixes. Let's pivot to using VexFlow to render the key signature
- The choices aren't quite presented correctly. I'd like all options available for every exercise, and all key signatures as an option. So there's no need to create an array of objects like that, with a signature mapped to choices. I don't think your approach is going to work. You're just tossing up the treble clef, and not rendering staff lines with the correctly placed sharps.
- When the page is first loaded, it should only have the start button
- I'd like to display using ♭, not b, for flats
- Let's put the buttons in a grid
- The alignment of the flat symbols seems off, they're too high. Are they vertically aligned somehow?
- Can we remove some of the empty spacing at the end of the stave?
- And can we make the clef and stave bigger? Increase the size of what's rendered, not just adding padding
- The timer that ticks each second is distracting. How could we display that more subtly?
- No, I'm thinking of something else. Maybe just a subtle bar that shrinks down
- This code is removing and re-adding all the choices every question. It only needs to do that once
- Let's change start from disabled to hidden during the exercise
- Rather than the alert, is there a more CSS (and not too complicated) way to display a modal or overlay?
- The modal is displayed before the timer bar fully depletes, something is off by a little bit
- When the exercise restarts, the timer bar fills up in an animated way. How can it fill instantly?
- When incorrect is answered, right now it's just going to the next value. I'd like to somehow tell the user the correct value. I don't want to add an additional click, and I don't want the UI to jump around. I'm thinking something like displaying a "Correct" or "Incorrect: 3 # is A Major" that fades at the same time as the next exercise.
- Let's remove the fade part after the correct/incorrect. Let's leave the text, change it to, for example, "Correct! A is 3 sharps" or "Incorrect: 2 sharps is C#. You answered D"
- The accidentalInfo and the way we're counting isn't working, currentKeySignature.key.length isn't giving what's needed.
- The explicit count makes sense, but that if/else if block is pretty ugly. Let's add that "3 sharps", for example, bit to the keySignatures array of objects and pull it out when we need it
- Do we have any dead or unused code or styling?
- I don't like the way the signature moves down after the first answer. Let's leave space for the correct/incorrect text even before it first shows up
- That broke things. The spacing is there, but the feedback never shows up
- Let's put the feedbackDiv in the HTML, it can just exist on the page and doesn't need to be inserted by JS