Other kinds of talks

Published 2025-02-24

Almost all of the talks I see at any conference fall into one of two archetypes:

  1. "What I did on my holidays". In which the speaker narrates whatever pull request they most recently merged. This talk can make sense in a specialized community where everyone actually does care what everyone else has been working on since the last time they met up, but at a bigger, more general conference it's usually tedious. Making a half-hearted effort on the last slide to draw some over-generalized lessons isn't going to save it.
  2. "Lend me your wallets". It's an advert for some software product. This talk can be totally appropriate at the kind of conference where your CIO goes to figure out which cloud database to buy. But it also likes to sneak into every other conference, usually by murdering "what I did on my holidays" and wearing it's still-warm skin for just long enough to trick everyone into watching.

I have some alternative suggestions.

distillation

Our institutions reward producing new knowledge, but don't reward organizing or communicating existing knowledge. This creates research debt:

Research debt is the accumulation of missing interpretive labor. It's extremely natural for young ideas to go through a stage of debt, like early prototypes in engineering. The problem is that we often stop at that point. Young ideas aren't ending points for us to put in a paper and abandon. When we let things stop there the debt piles up. It becomes harder to understand and build on each other's work and the field fragments.

If you do serious work in any field you have to spend years wading through and digesting a mass of tiny tidbits of discovery before you're ready to do new work yourself. But once you've done that, you're in a position to save others from having to do the same thing.

Examples:

What makes these talks work is not just that the speakers have a lot of expertise in the subject, but that they took the time to systematize their knowledge. If Mike Acton's talk had instead been "10 lessons learned from my latest game engine" it probably would have been much easier to write but no one would remember it. To give a really good distillation talk you have to think about a problem long enough to figure out how to carve it at the joints.

adversarial collaboration

Most arguments just entrench each side further into their original position. The idea of an adversarial collaboration is to make progress by:

  1. Taking two people who disagree.
  2. Figuring out how much they agree on.
  3. Making sure they agree on what they disagree about (e.g. that they're both using the same words to mean the same things).
  4. Finding a concrete, measurable prediction whose outcome they disagree about.
  5. Designing and conducting an experiment together to test that prediction.
  6. Reporting and discussing the outcome.

There are some famous examples of adversarial collaboration in psychology. In a recent example, Killingsworth failed to replicate a famous Kahneman paper relating happiness to income, but rather than spending the next decade writing each other diss papers, as is traditional, they worked together and discovered a subtle design flaw in the original experiment.

I can't think of any similar examples in software (the closest that comes to mind is this collaborative scylladb / memcache benchmark). Maybe you could be the first!

replication

The lessons of the replication crisis apply to software too. Almost every time I have tried to implement a data structure or algorithm from a paper, I have not been able to reproduce their benchmark results.

Often just reading a paper or blog post closely enough to even attempt replication reveals issues. Is there even enough detail to repeat the benchmark? Does the benchmark design make any sense for the problem supposedly being solved? Did they carefully measure only one side of the tradeoff seesaw? (E.g. improving CPU time but not measuring memory usage.) Did they claim to improve on some other paper, but not use the same benchmark design?
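
To make that last point concrete, here is a minimal sketch (in Python) of measuring both sides of the tradeoff at once: wall-clock time and peak memory for the same workload, so an improvement on one axis can't quietly hide a regression on the other. The `build_index` function is a hypothetical stand-in for whatever data structure is actually being benchmarked.

```python
import time
import tracemalloc

def build_index(items):
    # Hypothetical stand-in for the data structure under test.
    return {item: i for i, item in enumerate(items)}

def benchmark(items):
    # Record wall-clock time *and* peak memory for the same run.
    tracemalloc.start()
    start = time.perf_counter()
    index = build_index(items)  # keep a reference so its memory counts towards the peak
    elapsed = time.perf_counter() - start
    _current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

elapsed, peak = benchmark(list(range(1_000_000)))
print(f"build: {elapsed:.3f}s, peak memory: {peak / 1e6:.1f} MB")
```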

The same applies to commercial and open-source projects, which all claim to be faster and more reliable than each other. Or even to common wisdom, which often turns out to be based entirely on a mis-recollection of a single poorly-designed experiment run 20 years ago on a potato.

Examples:

This kind of work can have surprising leverage because hardly anyone has both the incentives and the time to actually verify any claims, and so without replication we build our foundations on lies, mistakes, and misconceptions. You can turn a field around by raising the bar.

failure

If an idea is tried 20 times and fails 20 times, that doesn't mean it's a bad idea. But it does mean that the 21st person to try it would benefit from knowing about the first 20 attempts so they don't fail the same way.

Similarly, suppose the 21st person succeeds and gives a talk about their amazing success. If you only see that one talk it's easy to conclude that the idea is great. But maybe it's a terrible idea and the 21st person just got lucky. Or maybe they did something different that was key to their success, but it's only noticeable by contrast against the failures.

Fields that are serious about safety and reliability talk as much about failures as successes. I can't think of any software talks about failures, but here are some written examples:

The failures don't need to be that big or detailed though. Here are some talks I would watch:


I'll concede that all of these ideas require more work than just delivering the conference equivalent of a listicle. But I think that's inherent. If you haven't done any work then you don't have anything worth sharing.

On the other hand, as a speaker the potential upside for putting in the work is huge. Most talks are terrible, so it's not hard to stand out. You can easily become "the" expert in some niche just by giving a reasonably well-prepared talk about it.


As a conference organizer, you could dedicate a track to any one of these categories and probably end up with much more interesting talks as a result. FailCon? RepliCon? CheckIfItActuallyWorksCon?