Back when I was deciding what to do about grad school, I found hearing what others did and why to be incredibly helpful. I hope I’ve been able to return the favor to others along the way. Having officially finished grad school not too long ago, I thought I’d take a moment to tell my story in the hopes that it will help others. Most of the people I talked to said something that boiled down to “do what I did.” In this post, I’ll continue the tradition.
So it’s been a long time coming, but as of Thursday, June 30th¹, I’m officially a doctor! If you want, you can read my final dissertation here. It seems like the cool thing to do is to share the graph of commits over time, so here it is:
It should be obvious from the giant peak around October of last year that I defended in November of 2015. In a perfect world I would have immediately made all the changes requested by my committee and finished up no later than December. Instead, I moved to California. Finally, around May or so, I felt settled enough in my new station in life that I could go finish up my nagging dissertation. You can of course see this in the flurry of activity around May or June on the graph.
I’m glad to be done!
It’s been a long road, and I’ve been fortunate to have tons of supportive people along the way to help out. A Ph.D. is really too big of a task for just one person, so even though I’m the one who gets to put the extra letters after my name if I want, I’m truly grateful for everyone who has helped along the way!
I mentioned earlier that I moved to California. Back in January I started working at Google, doing some really fun stuff with WebAssembly. I’m hoping that in my new post-grad-school life I’ll also get the chance to revisit some personal projects I put on the back burner in order to graduate. One of those projects is blogging more. It’s been over two years since my last post, and I honestly find that a little bit embarrassing. Of course, the reality is that it always seems like I’ll have more time in the future than I actually end up having, so we’ll have to see.
¹ I realize this post is pretty late as well. So it goes.
Lately I’ve been toying around with learning Morse Code, and like any good computer programmer, I decided to write a program to do it for me. The whole program was under 100 lines of code. This was my first time using the new Web Audio API and I must say I’m impressed with how easy it is to use.
If you’re using Chrome, feel free to try it out:
Unfortunately, it seems that Firefox doesn’t yet support the APIs I used, although rumor has it that this should work in the nightlies. I haven’t tested Safari, but I wouldn’t be surprised if it works there because it also uses WebKit.
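To give a flavor of the kind of logic involved, here’s a sketch of the timing half of the problem. This isn’t the code from my program, and the names are my own invention for illustration: it turns text into a list of on/off durations measured in “dot units,” which you could then use to key the gain on a Web Audio `OscillatorNode`.

```javascript
// Hypothetical sketch: translate text into Morse on/off durations, measured
// in dot units (dash = 3 dots, letter gap = 3, word gap = 7). A Web Audio
// player would walk this list, toggling an OscillatorNode's gain on and off.
const MORSE = { a: '.-', e: '.', o: '---', s: '...', t: '-' }; // partial table

function toTimings(text) {
  const timings = []; // entries look like { on: boolean, units: number }
  for (const word of text.toLowerCase().split(' ')) {
    for (const ch of word) {
      for (const symbol of MORSE[ch]) {
        timings.push({ on: true, units: symbol === '.' ? 1 : 3 });
        timings.push({ on: false, units: 1 }); // gap between symbols
      }
      timings[timings.length - 1].units = 3;  // widen to a letter gap
    }
    timings[timings.length - 1].units = 7;    // widen to a word gap
  }
  return timings;
}
```

For example, `toTimings('et')` produces a one-unit tone, a three-unit gap, a three-unit tone, and a trailing seven-unit gap. The nice part is that this pure function is completely separate from the Web Audio plumbing.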
Recently I’ve been helping out with the local library’s Arduino workshop, which is one of the events for their summer Maker Days program. It’s been a lot of fun watching kids and adults learn to build circuits and program microcontrollers. Doing so has also inspired me to spend some more time playing with Arduinos on my own. I recently purchased a small OLED display from Adafruit, and while I was getting the hang of working with it, I decided to put together a simple Pong game. Here it is in action. If you’re interested in seeing how to build your own, read on!
Lately I’ve found monads to be more and more useful in several programming projects. For example, Harlan’s type inferencer uses a monad to keep track of what variables have been unified with each other, among other things. It took me a while to really grok monads. One reason is that many of the tutorials I’ve seen start out with category theory and the monad laws. These things don’t strike me as all that useful when I’m trying to make my code better in some way.
What I have found useful is to think of monads as a style of programming (like continuation passing style), or even a design pattern. I’ve found monads are really handy when you need to thread some object through a sequence of function calls. To see how this works, we’re going to start with a store-passing interpreter for a small language and show how to use monads to hide the store in the cases where we don’t need it.
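The post itself works in Scheme, but the core idea fits in a few lines of any language with closures. Here’s a sketch in JavaScript, with names of my own choosing: a store-passing computation is just a function from a store to a value and a new store, and `bind` does the threading so that code like `increment` never has to mention the store at all.

```javascript
// Hypothetical sketch of the store-passing monad in JavaScript.
// A computation is a function: store -> [value, nextStore].
const unit = (value) => (store) => [value, store];

const bind = (m, f) => (store) => {
  const [value, nextStore] = m(store); // run the first computation
  return f(value)(nextStore);          // thread the store into the next one
};

// Only the primitive operations actually touch the store.
const get = (addr) => (store) => [store[addr], store];
const put = (addr, value) => (store) => [undefined, { ...store, [addr]: value }];

// A computation written in monadic style: the store never appears.
const increment = (addr) => bind(get(addr), (n) => put(addr, n + 1));

// Running it means supplying an initial store.
const [, finalStore] = bind(increment('x'), () => increment('x'))({ x: 0 });
// finalStore is { x: 2 }
```

This is the “style of programming” framing: `bind` is just the plumbing that a store-passing interpreter would otherwise repeat at every recursive call.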
I had the pleasure of serving as a committee member for this year’s PLDI Artifact Evaluation Process. After reading Lindsey Kuper’s post from the author’s point of view, I thought I’d say a little about my perspective from the other side. I had a lot of fun doing this, and it’s exciting to think about the implications for our field as artifact evaluation becomes a more common thing at conferences.
Happy New Year!
With the recent release of Rust 0.9, I’ve decided to start a tradition of tagging releases for my Rust projects to coincide with releases of the Rust language. Although things like Rust CI have helped keep Rust code running as the language matures, there’s still some frustration in using Rust projects if you’d rather not always run the latest master build. Sometimes, it may even be impossible to make two projects work together if their maintainers do not update at the same pace. By tagging releases with official Rust releases, it will become much easier to always find a version of my code that works with the latest release of Rust.
Without further ado, here are the projects:
- rust-opencl - OpenCL bindings for Rust. Thanks to Colin Sheratt and Ian Daniher for their contributions and help keeping this code running.
- rust-papi - Bindings to the PAPI performance counters library for Linux.
- SciRust - Some linear algebra routines.
- Boot2Rust - A small UEFI program that lets you boot and run nothing but Rust (and all the UEFI firmware stuff). See more in my previous post.
Besides shamelessly plugging my own software, I hope that this post will encourage others who maintain Rust projects to do the same. As the Rust community grows, more people will want to stick with official releases, and these are much more valuable when most of the Rust projects have easy-to-find versions that work with these releases.
In my post on Scheme debuggers, I introduced a simple Scheme interpreter. Unfortunately, the interpreter was structured such that it was hard to make a terribly sophisticated debugger. While we won’t improve upon our debugger much today, I do want to look at a different style of interpreter that should enable more advanced debugging features.
This new style is called continuation passing style. You can think of a continuation as what to do next in a program, and so continuation passing style means we make the continuation an explicit argument of a function. This means we can do a lot more with the control flow of the program, which is important in a debugger, for example, since we want to be able to pause and inspect the execution at arbitrary points.
We’ll continue with a quick introduction to continuations and continuation passing style, then look at how to apply this to our interpreter. Once that is done, we will see how to implement call/cc in our new interpreter.
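The post develops this in Scheme, but the essence of the style is easy to show in a few lines of JavaScript (a sketch with names I made up, not the post’s code): every function takes an extra argument, the continuation, and “returns” by calling it. Once continuations are explicit, call/cc falls out almost for free.

```javascript
// Hypothetical sketch of continuation passing style in JavaScript.
// Instead of returning, factCps hands its result to the continuation k.
const factCps = (n, k) =>
  n === 0 ? k(1) : factCps(n - 1, (acc) => k(n * acc));

// With continuations explicit, call/cc just passes the current
// continuation into the body as an ordinary escape function. Calling the
// escape function ignores whatever continuation is current at that point
// and jumps straight back to k.
const callCc = (body, k) => body((value, _ignoredK) => k(value), k);

// Example: escape from the middle of a computation.
const escaped = callCc(
  (exit, k) => factCps(5, (v) => exit(v > 100 ? 'big' : 'small', k)),
  (result) => result
);
// escaped is 'big', since 5! = 120 > 100
```

This is exactly why the style matters for a debugger: because control flow is reified as these `k` functions, the interpreter can grab one, stash it, and resume it later.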
A couple nights ago I was looking over the UEFI spec, and I realized it shouldn’t be too hard to write UEFI applications in Rust. It turns out, you can, and here I will tell you how.
The thing that surprises me most about UEFI is that it now appears possible to boot your machine without ever writing a single line of assembly language. Booting used to require this tedious process of starting out in 16-bit real mode, then transitioning into 32-bit protected mode and then doing it all over again to get into 64-bit mode. A UEFI firmware, on the other hand, will happily load an executable file that you give it and run your code in 64-bit mode from the start. Your startup function receives a pointer to some functions that give you basic console support, as well as an API to access richer features. From a productivity standpoint, this seems like a win, but I also miss the sorcery you used to have to do when you were programming at this level.
Booting to Rust is a lot like writing bindings to any C library, except that the linking process is a bit more involved.
My last post showed that it’s now possible to call code written in Harlan from C++ programs. Sadly, the performance numbers I posted were pretty embarrassing. On the bright side, when you have a 20-30x slowdown like we saw before, it’s usually pretty easy to get most of that back. In this post, we’ll see how. The performance still isn’t where I’d like to be, but when we’re done today, we’ll only be seeing about a 4x slowdown relative to CUBLAS.