
Bobo Ballot

First Published: MAR 09, 2026

The Problem

At FORM, our flagship event is a 24-hour music and art hackathon called All Nighter. We curate 40–45 tracks from submitted works and release them as a compilation album, with all proceeds going to charity.

When submission volume was still manageable (~100 submissions), collecting entries through Google Forms worked fine. Participants would upload their audio to SoundCloud and share the links for us to review. This kept us from having to self-host files, but had several drawbacks, the most notable being the inability to listen to lossless audio.

We eventually made the switch to self-hosting, which gave us a lot more control over how submissions were processed. Most importantly, it enabled lossless audio playback, which is critical when evaluating production quality. Even with these tech upgrades, we still defaulted to spreadsheet-based voting, which left us with a ton of problems. All data was fully mutable by anyone on the team, and we regularly ran into situations where someone accidentally overwrote another person's votes or deleted entire rows. Thankfully, spreadsheets support version history, but it still caused way more headaches than I'd like to admit.

As the event started getting more submissions, the voting experience kept getting worse. It was just death by a thousand cuts: juggling tons of tabs, manually selecting cells, opening external links, and waiting for things to load. Every one of those frictions is small on its own, but stacked together across hundreds of submissions, they really add up to a ton of time. Something needed to change.

the old spreadsheet-based voting flow


The Solution

I made Bobo Ballot with the goal of designing around all of these tiny inefficiencies. It was a very interesting challenge in interaction design and low-latency performance. Our first round of voting acts mainly as a filter. With so many submissions to sift through, the goal in this stage of voting isn't a deep critical listen. Instead, we want to narrow down what interests us and determine what is worth a second look. Reviewers will skip around a track, jump to key sections (intro, drop, outro) and make a gut call. The whole interface is built around making that process as fast as possible.

the new voting flow with Bobo Ballot is WAY faster :)

The waveform is the centerpiece of the UI for exactly this reason. A song's structure is often "readable" just by looking at it, which is why every major DJ application displays waveforms. That visual map lets reviewers jump straight to the parts that matter instead of scrubbing blindly. It's large and covers most of the screen width, so you're not trying to hit a tiny target when seeking around.
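The core of that interaction is simple: translate a click's horizontal position on the waveform into a playback time. A minimal sketch of that mapping, with illustrative names (not the app's actual code):

```typescript
// Map a click on the waveform element to a playback position.
// `clickX` is the pointer's offset within the element, `width` is the
// element's rendered width, and `duration` is the track length in seconds.
function seekTimeFromClick(clickX: number, width: number, duration: number): number {
  // Clamp so clicks on the very edge can't seek out of bounds.
  const ratio = Math.min(Math.max(clickX / width, 0), 1);
  return ratio * duration;
}
```

Wired up, this would look roughly like `audio.currentTime = seekTimeFromClick(e.offsetX, el.clientWidth, audio.duration)`. Making the element wide makes the same pixel error translate to a smaller time error, which is part of why a big waveform feels so precise.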

waveforms used in popular DJ software Rekordbox

The vote buttons are intentionally oversized for the same reason: less time spent aiming a mouse. It may seem like a tiny optimization, but when you're spending hours clicking, it gets fatiguing fast. Every button is also mapped to a keyboard shortcut, so you can get through tracks without touching the mouse at all.
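The usual pattern here is a single lookup table so the buttons and the keyboard dispatch the same actions. A sketch of what that might look like, with hypothetical keys and action names:

```typescript
// One table maps keys to vote actions; the on-screen buttons call the
// same actions, so both input paths stay in sync. Keys are illustrative.
type VoteAction = "yes" | "maybe" | "no" | "skip";

const keyMap: Record<string, VoteAction> = {
  "1": "yes",
  "2": "maybe",
  "3": "no",
  " ": "skip",
};

function actionForKey(key: string): VoteAction | null {
  return keyMap[key] ?? null;
}

// In the app this would be wired up roughly like:
// window.addEventListener("keydown", (e) => {
//   const action = actionForKey(e.key);
//   if (action) castVote(currentTrack, action);
// });
```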

Volume is another thing that can kill the experience if you don't account for it. User-submitted files are not normalized, so levels can vary wildly. A track that's too quiet or too loud affects how it gets judged regardless of its actual quality. We are prone to ear fatigue during these long listening sessions, and songs that are mixed loud can be perceived as harsh simply because they sit above the average volume of the submission pool. There's a manual volume control on every track so reviewers can compensate on the fly without changing their system volume.
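The post doesn't describe how the slider is implemented, but a plausible minimal version expresses the adjustment in decibels and converts it to the linear gain that an `<audio>` element expects:

```typescript
// Convert a per-track dB offset (e.g. from a volume slider) into the
// 0..1 linear gain used by HTMLMediaElement.volume. Hypothetical sketch,
// not the app's actual implementation.
function sliderDbToGain(db: number): number {
  // Standard amplitude conversion: gain = 10^(dB / 20).
  const gain = Math.pow(10, db / 20);
  // HTMLMediaElement.volume must stay within [0, 1]; boosting above
  // unity would require something like Web Audio's GainNode instead.
  return Math.min(gain, 1);
}
```

Working in dB keeps the slider perceptually even: every -6 dB step roughly halves the amplitude, which matches how loudness differences actually feel.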

For cases where you need to find a specific track, there's also a table view. You can sort by various fields, search by name, filter by flags, and search through your private notes on each submission.

Performance is another big focus, since waiting for tracks to load between votes would break the flow entirely. The app pre-loads the next few tracks in the queue while you're still on the current one, so navigation stays snappy. Lossless files are large, so pre-fetching them like this helps a lot with perceived loading times. Waveform data, which is essentially just an array of floating-point numbers representing the volume at each point in time, is pre-computed on upload, so by the time you request a track the only work left is rendering the waveform.
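The waveform pre-computation boils down to reducing millions of PCM samples to a few hundred peak values, one per pixel column. A sketch of that reduction, assuming nothing about the app's actual pipeline beyond what's described above:

```typescript
// Reduce raw PCM samples to `buckets` peak values for waveform display.
// Computing this once at upload time means the client downloads a tiny
// array of floats instead of decoding the full lossless file to draw.
function computePeaks(samples: Float32Array, buckets: number): number[] {
  const peaks: number[] = [];
  const bucketSize = Math.ceil(samples.length / buckets);
  for (let b = 0; b < buckets; b++) {
    let max = 0;
    const start = b * bucketSize;
    const end = Math.min(start + bucketSize, samples.length);
    // Track the largest absolute amplitude in this slice of samples.
    for (let i = start; i < end; i++) {
      max = Math.max(max, Math.abs(samples[i]));
    }
    peaks.push(max);
  }
  return peaks;
}
```

A few hundred buckets for a five-minute track is a few kilobytes, versus tens of megabytes for the lossless audio itself, which is why shipping pre-computed peaks makes the waveform appear instantly.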

The Results

My team has loved using it so far. The whole experience is just more comfortable. You're not juggling windows, hunting for the right cell, or fighting the tooling. Everything you need is in one place and the interactions feel natural. It's a night and day difference compared to our old methods.

It saves us a significant amount of time too. At a conservative estimate of 30 seconds saved per submission, across ~700 submissions that's nearly 6 hours per reviewer. Multiply that across a team of 8 people and you're looking at 40-60 hours of labor saved per event.
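Spelling out that back-of-envelope math:

```typescript
// The estimate from the post, written out. All inputs come from the
// text above; nothing here is measured data.
const secondsSavedPerSubmission = 30;
const submissions = 700;
const reviewers = 8;

const hoursPerReviewer = (secondsSavedPerSubmission * submissions) / 3600; // ≈ 5.8
const totalHours = hoursPerReviewer * reviewers; // ≈ 46.7
// That lands comfortably inside the 40-60 hour range, with slack for
// reviewers who vote on fewer tracks or save more than 30 seconds each.
```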

If you’re interested, I would highly recommend giving one of our compilations a listen. Using tools like this, we’ve been able to catch a lot of gems in a sea of submissions and construct albums that highlight the creativity and diversity of our community.

I'm still actively building out features, with one of the biggest being support for anonymous public voting, which opens the door to letting our wider community weigh in on submissions for the first time. Serving lossless audio to a large audience simultaneously is a different beast, but it's a challenge worth solving.