As a conclusion to ARB’s series on the 2021 Hugos, I wanted to share a quick analysis on the results. That “quick analysis” turned into an extensive infographic-making project. My hope is that there are others who share my Venn diagram of “election/statistics/SF nerdery”. If you do, please enjoy the infographics below.
* * *
First, a thumbnail sketch of this year’s WorldCon and related issues:
- For a list of winners and all nominees, I suggest the SFADB.
- Full ballot and nominating data are now available from the official Hugo site.
- I have yet to see a concise take on the controversy over DisCon’s use of Raytheon as a sponsor; the decision process, financial amounts, and timeline have yet to be made public. You can read my open letter about the issue, and an apology from the con chair.
- The Con has also received criticism for failing to provide good accessibility, as detailed by Mari Ness.
- The 2023 WorldCon bid was won, somewhat unexpectedly, by Chengdu, the capital of the Sichuan province in China. There has been a lot of discussion about this, including valid concerns about human rights and who can safely attend. There was also a shady attempt to change the rules for site selection to prevent China from winning. For some further reading, I’d suggest Andrew Liptak on “soft power” and the Unofficial Hugo Book Club Blog on “Hugos Unlike Any Previous”. As noted in this series’ introduction, WorldCons to date have been extremely Anglo-centric and US-centric; Chengdu will likely be the first WorldCon where American attendees are not the majority.
* * *
As a refresher, the Hugos are nominated and decided by WorldCon members. The Hugos use a ranked-choice, instant-runoff vote (IRV). On the ballot, voters rank the 6 nominees from 1 to 6; they can also leave choices unranked and put “No Award” at any rank. (“No Award” indicates that the voter doesn’t feel that any nominees below that deserve an award; they may still rank their choices below No Award.)
The ballots are then sorted in a series of runoffs: in each round, the nominee with the fewest votes is eliminated. Ballots that ranked the eliminated nominee highest then have their votes re-allocated to their next remaining choice for the following round. Essentially, after each round, IRV says to those ballots: “okay, your choice didn’t win; who’s your next top pick?” This process continues until one nominee has greater than 50% of the available votes, at which point they’re declared the winner. The Hugo winner is not necessarily whoever gets the most 1st-rank votes in the first round.
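For the programmatically inclined, the runoff process described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Hugo counting procedure: the ballot format is a made-up assumption (a list of nominees, most-preferred first), ties are broken arbitrarily, and the real WSFS rules add specific tiebreakers and No Award handling that are omitted here.

```python
from collections import Counter

def irv_winner(ballots):
    """Run a simplified instant-runoff count over ranked ballots.

    Each ballot is a list of nominees, most-preferred first.
    Ballots with no remaining choices simply stop counting.
    """
    ballots = [list(b) for b in ballots]
    while True:
        # Tally each ballot's current top choice.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader  # majority of still-active ballots
        # Eliminate the nominee with the fewest votes;
        # their ballots transfer to the next remaining pick.
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# A leads the first round 4-3-2, but once C is eliminated,
# C's ballots transfer to B, who wins with a 5/9 majority.
ballots = (4 * [["A", "C", "B"]]
           + 3 * [["B", "C", "A"]]
           + 2 * [["C", "B", "A"]])
```

Note that in this toy example the first-round leader (A) loses: exactly the kind of rank switch the bump charts below visualize.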
The Hugo administrators provide a full list of second through sixth places, determined using the same IRV system; however, it’s worth noting (and probably for the best) that these “placed” awards are not officially promoted anywhere.
For a quick primer on IRV, contrasted with simple majority and Condorcet systems, I suggest this video using ice cream flavors from the Exploratorium. Note that the main advantages of ranked-choice voting are preventing the spoiler effect (very important when there are more than two choices) and producing winners with the broadest support. Ranked-choice voting also tends to discourage extremist, factional campaigning: rather than getting the overwhelming support of one group, it’s better to have at least some support from most groups. How that plays out for the Hugos is an exercise for the imagination.
If you’re not familiar with IRV elections, the Hugo results can look a little counterintuitive: a nominee’s initial rank (how many people picked them as their top choice) often doesn’t correspond to their final ranking. I would argue that these kinds of switches (as indicated in the bump charts below) are, in general, a good thing. They indicate both that the nominees are close enough in quality (in the voters’ estimation) to prevent early-round wins, and that the Hugo winners have high community consensus: the winner might not be the choice the majority had for their top pick, but a strong majority will be happier with it than with the other options.
* * *
Nitpicking through the Hugo voting data for a few days has given me a lot of vague thoughts about the awards as a whole, which I won’t go into at too much depth here. But a few things jumped out:
- It is worth noting that there aren’t a vast number of Hugo voters to begin with, and some of these categories are decided by quite small numbers of those voters. Nicholas Whyte covers some of this in his “Hugos in Detail” post.
- That said, some of the categories with relatively fewer voters strike me as more important Hugos than “larger” ones. It’s not surprising to me that more people have opinions about the high-profile Dramatic Presentation nominees than the nominees for Fancast, for example. I’m not sure what the point of giving a Hugo to a major film is, whereas many of the other categories strike me as important: internally to the SFF community, and in terms of offering an endorsement and a boost to otherwise obscure works and workers.
- I always feel a little bad for the Editor categories; editors are hugely important to the field, but are not very prominent unless you’re in the publishing world somehow. Most fans need to do some significant homework to have an opinion here. It makes me hope that more publishers start using a “book credits” page; it seems like an easy step to take.
- Finally, and this is still pretty vague in my head, but I’ve been wondering about the difference between categories where a voter might reasonably have an opinion on most or all nominees, and categories where, realistically, most voters won’t have an opinion on more than one or two nominees, due to the time requirements. You can read all of the Short Story entries in an afternoon; to appreciate the Series or Video Game slates in a serious way will take hundreds of hours (arguably, Short Form Dramatic Presentation as well, since the emotional impact of a single episode relies on the larger series). There seems to me to be an important difference between “the members of this community who considered all these works picked this one as the best” and merely “fans of this work carried the day”.
But! Enough of my musing, here are the promised infographics:
* * *
Full set of Sankey diagrams for all categories, showing how the ranked ballots were allocated to produce the winners:
* * *
Full set of bump charts, showing the difference between the first round votes and the final results:
* * *
Barring any unexpected developments, this should bring ARB’s 2021 Hugo coverage to a close. Hope you’ve enjoyed, and hoping to provide some analysis and reporting on next year’s WorldCon, held here in Chicago.
* * * * * *
ARB Guide to the 2021 Hugos:
Intro | Novel | Novella | Novelette | Short Story | An Open Letter | Results & Diagrams
Originally from the Pennsylvania Appalachians, Jake Casella Brookins (he/him) now lives in Chicago. He is an SF reviewer and independent scholar, and runs the Positron site for speculative fiction book clubs and other literary events in the Chicagoland area. When not making coffee (professionally), he is probably riding his bike (amateurishly). Book ramblings and occasional bread experiments can be found on his blog.
This series was commissioned by an internal pitch among the ARB editors. Review copies were not arranged by ARB; access to some titles was provided by the Hugo Voter’s Packet, which the author had access to through their personal Worldcon membership. Hugo ballot information was procured from the Official Hugo site. Hugo Sankey diagrams were inspired by Martin Pyne’s set created for the 2020 Hugos. The above diagrams were created via Sankeymatic and Google Workspace, and touched up and labelled with an open source photo editor.
2 thoughts on “ARB Guide to the ’21 Hugos: Results & Diagrams”
Thank you for your work. The graphics are informative.
Please look a little more closely at Worldcon site selection history. American fans have voted for Worldcon to go far away at almost every opportunity. I don’t know about London in ’57 or ’65 or Heidelberg in ’70, but at Aussiecon I in 1975 there were 100 Americans out of a total attendance of 600.