Continuing the series I started a few weeks ago, here are links to things I’ve found interesting recently.
I finally finished reading Sabrina Jewson’s Async destructors, async genericity and completion futures.
The post uses the motivating example of making sure we send the close_notify alert on a TLS stream before closing it, as is required by the protocol¹.
Async destructors are proposed as the language-level way to make this easy, which has a nice symmetry with the fact that we’d probably use destructors to do this in synchronous code as well.
The post then proceeds to work through a lot of the issues that come up with adding this feature and arrives at a comprehensive design that seems workable, although it requires a lot of new language features.
It was this post that sparked the Zulip discussion described in the next section, which in turn got me thinking about transactions.
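To make the synchronous analogue concrete, here's a minimal sketch of a stream wrapper that sends the alert from its destructor. All the names here (`TlsStream`, `send_close_notify`, the byte payload) are hypothetical stand-ins, not a real TLS API; the point is only the shape of the guarantee `Drop` gives us:

```rust
use std::io::Write;

// Hypothetical TLS stream wrapper; the transport is anything writable.
struct TlsStream<W: Write> {
    transport: W,
}

impl<W: Write> TlsStream<W> {
    fn new(transport: W) -> Self {
        TlsStream { transport }
    }

    // Stand-in for encoding and sending the real close_notify alert.
    fn send_close_notify(&mut self) {
        let _ = self.transport.write_all(b"close_notify");
    }
}

impl<W: Write> Drop for TlsStream<W> {
    // The destructor runs whenever the stream goes out of scope, so the
    // alert is attempted even on early return or panic unwind.
    fn drop(&mut self) {
        self.send_close_notify();
    }
}
```

In synchronous code this "just works"; the whole problem the post tackles is that `drop` can't `.await`, so there's no direct async equivalent of this pattern today.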
On the Rust Zulip this week there’s been an interesting discussion on
defer blocks, and other ways to try to improve async cancellation.
There are a lot of interesting issues here, and it feels to me like none of the solutions so far satisfies all of our design constraints particularly well (I should write a post diving into that).
So I’m looking for ways to reframe the problem.
What are the problems we need to solve?
What are the primitives we should provide to let people solve these problems?
Many of the issues seem to boil down to needing to ensure some version of “if X executes, then Y will also execute,” and related, “Y will not execute until X is finished.”
The first guarantee feels a lot like what you want in a transaction, so I’ve started looking at what it takes to implement transactions, whether in the form of software transactional memory, databases, file systems, or some other version of a transaction.
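In synchronous Rust, both guarantees fall out of a drop guard: put Y in a destructor, run X while the guard is live. Here's a minimal sketch using a hypothetical `RunOnDrop` helper (my own name, not a std type); this is exactly the pattern that async cancellation undermines, since a future can be dropped at an `.await` point before X finishes:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Runs its closure when dropped.
struct RunOnDrop<F: FnMut()>(F);

impl<F: FnMut()> Drop for RunOnDrop<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

// "X" is logged while the guard is live, so "Y" (in the guard's
// destructor) runs afterward no matter how this function exits.
fn x_then_y(log: Rc<RefCell<Vec<&'static str>>>) {
    let cleanup = {
        let log = Rc::clone(&log);
        RunOnDrop(move || log.borrow_mut().push("Y"))
    };
    log.borrow_mut().push("X"); // X executes first...
    drop(cleanup); // ...and Y cannot run until X is finished.
}
```

The transactional framing asks whether we can get this pairing as a first-class guarantee rather than an idiom.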
One recommendation I got was for Chapter 24 of Beautiful Code $. That chapter, contributed by Simon Peyton Jones and titled Beautiful Concurrency, introduces software transactional memory (STM) in Haskell, with excellent motivating examples of why locks are insufficient (they aren’t modular) and how STM provides composable building blocks that can solve concurrency problems more robustly than locks. The explanation is excellent and gives a good overview of the STM programming model. It touches briefly on how to implement STM, although I really wanted to see more detail there (I understand why it’s not included, though). I think it’d be interesting to try to implement an STM library in Rust. I’m sure one exists, but implementing one as a learning exercise would still be valuable.
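To get a feel for the interface (though not the implementation), here's a toy Rust rendering of the classic bank-transfer example. This is not real STM: `atomically` below is just a global lock standing in for an optimistic commit, and all the names are my own invention. The point it illustrates is the composability the chapter argues for, since callers combine the two balance updates into one atomic action without knowing anything about the synchronization underneath:

```rust
use std::sync::Mutex;

// Toy "transactional" store: all balances behind one lock.
struct Accounts {
    balances: Mutex<Vec<i64>>,
}

impl Accounts {
    // Runs `f` atomically with respect to all other transactions.
    // A real STM would record reads/writes and retry on conflict;
    // here a global lock fakes the same interface.
    fn atomically<R>(&self, f: impl FnOnce(&mut Vec<i64>) -> R) -> R {
        let mut guard = self.balances.lock().unwrap();
        f(&mut guard)
    }

    fn transfer(&self, from: usize, to: usize, amount: i64) {
        // The composed action is atomic as a whole. With one Mutex per
        // account we'd instead have to lock both in a globally agreed
        // order to avoid deadlock -- the non-modularity the chapter
        // uses to motivate STM.
        self.atomically(|b| {
            b[from] -= amount;
            b[to] += amount;
        });
    }
}
```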
I tweeted asking for recommendations and got a few. I haven’t had a chance to read these yet, but I’ll post them here in case you have more time than I do.
- Designing Data-Intensive Applications, by Martin Kleppmann (h/t @nick_r_cameron)
- Python’s STM Documentation (h/t @davidblewett)
- Transaction Processing: Concepts and Techniques, by Jim Gray (h/t @cartazio)
- Transactional Memory, by Tim Harris, James Larus, Ravi Rajwar (h/t @fryguybob)
Game Development and the History of Computing
Lex Fridman hosted John Carmack on his podcast recently. It took me a while to get through because it was a five-hour conversation and I tend to listen to podcasts in 20-minute chunks, but pretty much everything in it was interesting. I always enjoy listening to John Carmack, since it usually ends up being a chance for me to relive my history with computing. I remember playing or watching others play many of the games he was involved with, like Commander Keen, Wolfenstein 3D, Doom, and Quake. It was amazing seeing what became possible as computers became more capable, and seeing the lengths game programmers would go to in order to make the most of the hardware available.

The discussion about artificial general intelligence (AGI) was also interesting. One framing that stood out to me was when Carmack said AI is at a point where the leverage a single individual can apply is perhaps higher than at any other time in history (I’m sure I’m butchering the paraphrase). I’d like to adopt this framing myself in deciding what to work on: where can I, as an individual with the resources available to me, apply the most leverage? On AGI itself, Carmack talked a lot about training an AI the way we teach students. The idea isn’t that we build an AI that can do anything, but that we build something that can learn, and then we send it to school. For example, one approach to solving self-driving cars is to build a computer that you can send to driver’s ed.
Since I’m feeling nostalgic at the moment, I’ll link to a couple of other books that were formative to my early computer education. These are quite old and probably not relevant now, but I enjoyed reading them as a kid. The first was Teach Yourself Game Programming in 21 Days $ by Andre Lamothe. The second, by the same author, was Black Art of 3D Game Programming, which covered much of the same material but continued on into how to write a 90s-era 3D game engine. These books made the math classes I was taking a lot more interesting, because they gave fun uses for things like vectors, matrices, and trigonometry.
Apparently the genre of books about game programming with “Black” in the title is rather large. I’d be remiss if I didn’t also plug Michael Abrash’s Graphics Programming Black Book.
Anyway, I’ve been sitting on this post for long enough as it is so I’d better go ahead and post it and start working on my next set of links.
¹ As an aside, I’m starting to think it’s better not to design protocols that require things like this. Given all the things that could prevent that alert from being sent (network outage, power outage, asteroid impact, etc.), your implementation has to handle a missing alert anyway, so why have the extra complexity? Granted, in this case the opinion doesn’t matter, because the protocol already exists and we have to implement whatever is specified.