Tech Notes: the BitLadle Project

Written Saturday, August 3, 2024

There's a lot going on behind the scenes at SpoonStack, most of which I haven't talked about anywhere - but that's exactly why I wanted to start this Dev Blog! This entry will be a bit on the tech-heavy side, so feel free to skip it if programming nerdity isn't your thing. There'll be more accessible updates about our work over time, too.

 

Today I want to talk a little bit about a project I'm very excited about, called BitLadle. BitLadle is designed to be a toolkit for creating highly efficient server software, with a Disability-Driven Development twist.

 

Right now, there is no shortage of tech out there that is supposed to make it "easy" to create and operate online services. Unfortunately, in practice, the existing options leave a lot to be desired. Things that are actually accessible to beginners or light dabblers are usually gated behind paywalls, subscriptions, advertising, or other extractive and unpleasant manifestations of surveillance capitalism. Meanwhile, there's been a resurgence of "DIY" and "self-hosting" tech in recent years, which unfortunately tends to be tainted by severely ableist communities and harsh skill floors.

 

In other words, a lot of people are getting left out of making online services - either because it's too expensive, too ethically icky, too difficult, too hostile, or some combination of all of the above.

 

That's what gave me the idea to start writing BitLadle. I've had the privilege of over a decade of very direct, hands-on, deep-dive experience working on, improving, and innovating in some of the most powerful server software in the world. I wanted to take some of that expertise, combine it with my interests in accessibility and more equitable tech-creation cultures, and produce something that could address the gap in available tools.

 

The result is BitLadle - a cross-platform, extremely efficient, highly extensible system for writing server software.

 

There are a few key aspects of this project's design that I'm excited to talk about, but for today, I want to focus on cross-platform efficiency.

 

 

Serving Sockets Swiftly

One of the classic problems of high-scale network software development is the raw amount of work that goes into handling hundreds (or thousands) of simultaneous connections. This is a rare area where the Windows server ecosystem has actually historically had some major advantages over Linux and BSD. In particular, the I/O Completion Port (IOCP) technology, which first appeared in Windows NT 3.5 back in the mid-1990s, is incredibly efficient.

 

The reasons for this mostly boil down to kernel/user-space context switching, which is intensely costly, and even more so in the post-Spectre/Meltdown world. In a nutshell, IOCP lets a program harvest batches of completed operations per wakeup, rather than paying per-socket overhead, which makes it possible to eliminate huge amounts of processing cost when handling large numbers of active network sockets.

 

Thankfully, as of 2019 or so, Linux has been rapidly catching up, with a very similar mechanism called io_uring. There's plenty of material out there now about the system, so I won't repeat the background context; suffice it to say, it's pretty darn comparable to IOCP, and the benchmarks have got folx pretty excited - it's significantly more efficient than the older alternatives in the Linux world, like epoll.

 

However, all of this stuff is highly platform-specific. There's not a lot out there that implements both IOCP and io_uring without adding a lot of other baggage and expectations on top. Since I have extensive experience with IOCP, I decided to go ahead and create my own wrapper layer to cover both.

 

This afternoon, I successfully ran the first tests of this layer, with both Windows and Linux test machines hosting a small demonstration service and proving out basic traffic processing. The best part is that the vast majority of the code is portable C++; there's no platform-specific logic anywhere in the server or client themselves - it's all hidden in the wrapper layer.

 

From here, I can start playing around with the other major facets of BitLadle - accessibility and extensibility. But those are for another time.

 

 

Scaling Down

Before I wrap this up, I want to explore the other extreme, and another part of why I wanted to do BitLadle in the first place. After all, scaling "up" is not always a realistic thing - many services simply don't need to handle thousands of active connections at once, and it can cost a lot of time, energy, and literal electricity to build "high scale" tech that gets used by... a dozen people, a few times a week.

 

And that wastefulness is tricky. It's actually quite difficult to be both energy-efficient and scalable; to scale up requires a certain degree of complexity, and to scale down often requires a lot of very, very careful simplicity.

 

There are aspects of this that can be helped with careful design, but ultimately, it really comes down to a kind of consistent mindset. Writing extremely efficient software is almost a lost ethos. Back when I first started programming, in the early 1990s, it was much more important to be careful with things like allocating memory or using up processor cycles - resources were limited. But in the decades since, hardware has gotten more and more powerful, and software has gotten less and less concerned with efficiency.

 

In some ways, that's a good thing; more powerful hardware means it's easier to do a lot of things, and it's ok to take quite a lot of stuff for granted in today's programming world. But it has a tradeoff, which is that very few programmers these days have much interest in (let alone experience with) the extremely fiddly parts of making things efficient, compact, and fast. There's also an unfortunate cultural tendency to conflate efficiency with a kind of extreme minimalism, which often ends up having very exclusionary results in practice - not something I'm keen to recreate.

 

Scaling down - that is to say, creating tech that works fine with very limited resources - is a matter of access. Today, I can spend the price of a few solid meals on a single-board computer that's literally hundreds of thousands of times more powerful and capable than the first computer I programmed on. I know what can be done with extremely limited hardware - I've done a fair bit of it, over the years!

 

And yes, it's possible to rent virtual-machine-based servers incredibly cheaply nowadays, too; but that ultimately means renting other people's computer hardware. For any number of reasons, this is not always a great option, either. Not everyone has the access (or the desire) to pay "cloud providers" rent to put stuff online, especially when we consider the realities outside of white-majority countries.

 

Which means there's a huge unmet need for server software tech that can do amazing things with very, very cheap and minimalistic hardware. It is entirely technically possible to run a viable web site off an old, recycled cell phone; it shouldn't be unthinkable for people with limited resources to have meaningful, fulfilling opportunities to play with Internet stuff whenever they want.

 

And that's the other unique thing that I want to do with BitLadle: support a much wider range of possibilities than existing tech considers relevant - both big and small.

 

This especially gets powerful when we start talking about how I want to combine the SwitchBoard tech with BitLadle - creating opportunities for people who don't want to "do programming" to actually create their own online tech, and even pretty darn effective and reliable and low-cost tech, at that. Stay tuned - there's a big future out there!