# The Most Important Problem in the World — Solved!

Toilet Seat Problem

Should the toilet seat be left up or down after use?   I decided to undertake a rigorous investigation of this critical issue.   Every woman I asked knew the answer: leave it down.  Every married man also knew the answer:  do what she says.   The truly domesticated men did one better:  leave the cover down too.  I stood in awe of their sheer servility.

Sadly, nobody seemed to feel any need to investigate further. But, like Galileo, Einstein, Anthony Bourdain, and Stalin, I was not deterred by the disapproval of my peers. In fact, it invigorated me. Anything worth doing is worth doing wrong, I say.

Given a man and woman living together, the goal was simple:  introduce an unwanted technical innovation that was sure to drive the marriage apart.   Specifically, I sought a strategy which minimizes the average number of toilet seat flips per use of the bathroom, subject to the constraint that everyone sees the same average.

As with any straightforward problem I choose to examine, the universe immediately rewired its laws to require a lot of math.   Hence this important paper.  Too important for Nature, Science, or The Journal of Advanced Computational Potty Talk.   The only venue worthy of its publication is this very blog which, entirely by coincidence, I own.

Surprisingly, the result turns out to be neither trivial nor obvious.   It seems  the universe actually has something interesting to say about toilet seats and fairness, even if I do not.

Click on the link above or below to view the PDF paper.

Toilet Seat Problem

# Two Countries

There once were two countries, A and B, and two kinds of people, purple people and green people. Each country had both purple people and green people.

In country A, the purple people were in charge. A small group of purple people were the gatekeepers of all things, the decision makers, the managers of life.

In country B, the green people were in charge. A small group of green people were the gatekeepers of all things, the decision makers, the managers of life.

The two countries shared a large border and a free one. By ancient treaty, no visas were required and no checkpoints marred the landscape. But almost nobody ever crossed the border. A nearly insurmountable range of peaks obstructed much of the length, and strong rapids made the remainder treacherous.

Though fundamentally different in nature and culture, the majority of the purple and green people did not mind one another. Many even cherished the differences, and friendly relations were by far the norm in both countries.

The two governments were exceptions.

The purple leaders of country A portrayed green people as primitive, dangerous, and unable to restrain their impulses, creatures to be feared and controlled. The green people sought to dominate and oppress them, they warned. Only through constant vigilance and zeal could such a dire threat be averted. Whether they believed these words or simply found them politically expedient is unclear.

The green leaders of country B portrayed purple people as arrogant, irrational, and immoral, individuals of loose character and dishonest nature. Such people sought to lead good folk astray and never should be allowed influence, never should be listened to, they warned. Only through constant vigilance and zeal could such a dire threat be averted. Whether they believed these words or simply found them politically expedient is unclear.

Most green and purple people in both countries meant well, or at least did not intend ill. But a few did as a few will do, and this was exacerbated by the rhetoric of each government.

Every time a purple person in country B was attacked, the leaders of country A pointed and exclaimed “See, we are right. We must protect purple people from the inexcusable barbarity of the green people.” But they held no power in country B and compensated with an excess of zeal in their own country. Small crimes were made big, a growing range of behavior was criminalized, penalties grew, initiatives to advance purple people in the face of obvious oppression were advanced, and the public was freshly informed of the omnipresent danger posed by green people.

Every time a green person in country A was persecuted, the leaders of country B pointed and exclaimed “See, we are right. We must protect green people from the hysterical lunacy of the purple people.” But they held no power in country A and compensated with an excess of zeal in their own country. Small crimes were made big, a growing range of behavior was criminalized, penalties grew, initiatives to suppress the influence of purple people in the face of their obvious irresponsibility were advanced, and the public was freshly informed of the omnipresent evil posed by purple people.

The green people in country A cringed whenever something happened in country B. The inevitable furor surely would land on their heads. An inquisition would follow, jobs would be lost, lives would be ruined, and the slightest misstep would destroy them.

The purple people in country B cringed whenever something happened in country A. The inevitable furor surely would land on their heads. Vilification would follow, new restrictions would be imposed, rights would be lost, lives would be ruined, and the hope of improvement would grow ever more distant.

The majority of purple people in country A were not particularly swayed by their government’s propaganda, but they did not repudiate it. Most did not understand the plight of their green fellow citizens. They dismissed green complaints as hyperbolic, arguing that their government meant well and any real impact on green people was minimal. Those who believed the truth dared not speak up, and the purple leaders grew ever more powerful. Soon the green people sat hands in laps, eyes down, afraid that the slightest gesture or word could be seen as a threat by those purples who made a business of seeing threats everywhere. A few green sycophants found some small degree of success, but even they were not safe.

The majority of green people in country B were not particularly swayed by their government’s propaganda, but they did not repudiate it. Most did not understand the plight of their purple fellow citizens. They dismissed purple complaints as hysterical, arguing that their government meant well and any real impact on purple people was minimal. Those who believed the truth dared not speak up, and the green leaders grew ever more powerful. Soon the purple people sat hands in laps, eyes down, afraid that the slightest gesture or word could be seen as a sin by those greens who made a business of seeing sins everywhere. A few purple collaborators found some small degree of success, but even they were not safe.

Through the vagaries of geopolitics, some families happened to span both countries. On the rare occasions when they spoke, neither side believed the other.

The purple people in country A did not believe the tales told by their relatives in country B. These were exaggerations spread by politicians, they declared. After all, they experienced no such thing. If anything, their lives were easier than before. A few, seeing the oppression of green people in their own country (but unwilling to speak up about it) even rebuked their relatives. If anything, green people were the oppressed not the oppressors. It was one thing not to help them, but quite another to blame them.

The green people in country B did not believe the tales told by their relatives in country A. These were exaggerations spread by politicians, they declared. After all, they experienced no such thing. If anything, their lives were easier than before. A few, seeing the oppression of purple people in their own country (but unwilling to speak up about it) even rebuked their relatives. If anything, purple people were the oppressed not the oppressors. It was one thing not to help them, but quite another to blame them.

In this way, country A raced toward a dystopia for its green citizens and country B raced toward a dystopia for its purple citizens, yet nobody else recognized this.

Each government was the other’s best friend, and both were the people’s worst enemy.

This is how half the population did not realize the sky was falling, while the other half saw it happening with their own eyes.

But I apologize. I misspoke. The border has no mountains or rapids. It is not physical or legal, but one of social milieu, profession, and education. Yet it is no less real for this lack of topography. Despite the apparent freedom to do so, most people lack the wherewithal to cross the border.

The two countries are our country, today.

This is where we are, this is where we are going, and this is why you will not be believed if you say so.

# Be Careful Interpreting Covid-19 Rapid Home Test Results

Now that Covid-19 rapid home tests are widely available, it is important to consider how to interpret their results. In particular, I’m going to address two common misconceptions.

To keep things grounded, let’s use some actual data. We’ll assume a false positive rate of 1% and a false negative rate of 35%. These numbers are consistent with a March 2021 metastudy [1]. We’ll denote the false positive rate $E_p=0.01$ and the false negative rate $E_n=0.35$.

It may be tempting to assume from these numbers that a positive rapid covid test result means you’re 99% likely to be infected, and a negative result means you’re 65% likely not to be. Neither need be the case. In particular,

1. A positive result does increase the probability you have Covid, but by how much depends on your previous prior. This in turn depends on how you are using the test. Are you just randomly testing yourself, or do you have some strong reason to believe you may be infected?

2. A negative result has little practical impact on the probability you have Covid.

These may seem counterintuitive or downright contradictory. Nonetheless, both are true. They follow from Bayes’ theorem.

Note that when I say that the test increases or decreases “the probability you have Covid,” I refer to knowledge not fact. You either have or do not have Covid, and taking the test obviously does not change this fact. The test simply changes your knowledge of it.

Also note that the limitations on inference I will describe do not detract from the general utility of such tests. Used correctly, they can be extremely valuable. Moreover, from a behavioral standpoint, even a modest non-infinitesimal probability of being infected may be enough to motivate medical review, further testing, or self-quarantine.

Let’s denote by $C$ the event of having Covid, and by $T$ the event of testing positive for it. $P(C)$ is the prior probability of having covid. It is your pre-test estimate based on everything you know. For convenience, we’ll often use $\mu$ to denote $P(C)$.

If you have no information and are conducting a random test, then it may be reasonable to use the general local infection rate as $P(C)$. If you have reason to believe yourself infected, a higher rate (such as the fraction of symptomatic people who test positive in your area) may be more suitable. $P(C)$ should reflect the best information you have prior to taking the test.

The test adds information to your prior $P(C)$, updating it to a posterior probability of infection $P(C|O)$, where $O$ denotes the outcome of the test: either $T$ or $\neg T$.

In our notation, $P(\neg T|C)= E_n$ and $P(T|\neg C)= E_p$. These numbers are properties of the test, independent of the individuals being tested. For example, the manufacturer could test 1000 swabs known to be infected with covid from a petri dish, and $E_n$ would be the number which tested negative divided by 1000. Similarly, they could test 1000 clean swabs, and $E_p$ would be the number which tested positive divided by 1000.

What we care about are the posterior probabilities: (1) the probability $P(C|T)$ that you are infected given that you tested positive, and (2) the probability that you are not infected given that you tested negative $P(\neg C|\neg T)$. I.e. the probabilities that the test correctly reflects your infection status.

Bayes’ theorem tells us that $P(A|B)= \frac{P(B|A)P(A)}{P(B)}$, a direct consequence of the fact that $P(A|B)P(B)= P(B|A)P(A)= P(A\cap B)$.

If you test positive, what is the probability you have Covid? $P(C|T)= \frac{P(T|C)P(C)}{P(T|C)P(C)+P(T|\neg C)P(\neg C)}$, which is $\frac{(1-E_n)\mu}{(1-E_n)\mu+E_p(1-\mu)}$. The prior of infection was $\mu$, so you have improved your knowledge by a factor of $\frac{(1-E_n)}{(1-E_n)\mu+E_p(1-\mu)}$. For $\mu$ small relative to $E_p$, this is approximately $\frac{1-E_n}{E_p}$.
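
This update is easy to sandbox. Here is a minimal sketch in Python, using the rates from the text; the function name is mine, not standard notation:

```python
def posterior_positive(mu, Ep=0.01, En=0.35):
    """P(C|T): probability of infection given a positive test result,
    from a prior mu, false positive rate Ep, and false negative rate En."""
    true_pos = (1 - En) * mu       # P(T|C) P(C)
    false_pos = Ep * (1 - mu)      # P(T|~C) P(~C)
    return true_pos / (true_pos + false_pos)

# With the Massachusetts prior used later in this post:
mu = 0.00685
post = posterior_positive(mu)
print(post)        # about 0.31
print(post / mu)   # knowledge improved by a factor of about 45
```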

Suppose you randomly tested yourself in MA. According to data from Johns Hopkins [2], at the time of this writing there have been around 48,000 new cases reported in MA over the last 28 days. MA has a population of around 7,000,000. It is reasonable to assume that the actual case rate is twice that reported (in the early days of Covid, the unreported factor was much higher, but let’s assume there presently is just one unreported case per reported one).

Let’s also assume that any given case tests positive for 14 days. I.e., 24,000 of those cases would test positive at any given time in the 4 week period (of course, not all fit neatly into the 28 day window, but if we assume similar rates before and after, this approach is fine). Including the unreported cases, we then have 48,000 active cases at any given time. We thus have a state-wide infection rate of $\frac{48000}{7000000}\approx 0.00685$, or about 0.7%. We will define $\mu_{MA}\equiv 0.00685$.
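
The arithmetic behind $\mu_{MA}$ can be laid out step by step. This is just a sketch of the rough figures quoted above, not fresh data:

```python
reported_28d = 48_000     # new reported MA cases over the last 28 days
population = 7_000_000

# A case is assumed to test positive for 14 of those 28 days, so
# roughly half the period's reported cases are active at any moment.
active_reported = reported_28d / 2

# Assume one unreported case for every reported one (the 2x factor).
active_total = 2 * active_reported

mu_MA = active_total / population
print(mu_MA)   # about 0.00685
```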

Using this prior, a positive test means you are $45\times$ more likely to be infected post-test than pre-test. This seems significant! Unfortunately, the actual probability is $P(C|T)= 0.31$.

This may seem terribly counterintuitive. After all, the test had a 1% false positive rate. Shouldn’t you be 99% certain you have Covid if you test positive? Well, suppose a million people take the test. With a 0.00685 unconditional probability of infection, we expect 6850 of those people to be infected. $E_n=0.35$, so only 4453 of those will test positive.

However, even with a tiny false positive rate of $E_p=0.01$, 9932 people who are not infected also will test positive. The problem is that there are so many more uninfected people being tested that $E_p=0.01$ still generates lots of false positives. If you test positive, you could be in the 9932 people or the 4453 people. Your probability of being infected is $\frac{4453}{9932+4453}= 0.31$.
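
The counting argument can be checked directly. Same rates as above; the counts are expectations, so fractional values are left unrounded:

```python
N = 1_000_000
mu, Ep, En = 0.00685, 0.01, 0.35

infected = N * mu                  # 6850 expected infections
true_pos = infected * (1 - En)     # infected testers who test positive
false_pos = (N - infected) * Ep    # uninfected testers who test positive

p = true_pos / (true_pos + false_pos)
print(true_pos, false_pos)   # roughly 4450 and 9930
print(p)                     # about 0.31
```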

Returning to the general case, suppose you test negative. What is the probability you do not have Covid? $P(\neg C|\neg T)= \frac{P(\neg T|\neg C)P(\neg C)}{P(\neg T|\neg C)P(\neg C)+P(\neg T|C)P(C)}= \frac{(1-E_p)(1-\mu)}{(1-E_p)(1-\mu)+E_n\mu}$. For small $\mu$ this is approximately $1$ unless $E_p$ is very close to $1$. Specifically, it expands to $1-\frac{E_n}{(1-E_p)}\mu+O(\mu^2)$.
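
The exact posterior and its first-order approximation can be compared numerically. A sketch using the formulas above:

```python
Ep, En = 0.01, 0.35
mu = 0.00685   # the MA prior from earlier

def posterior_not_infected(mu):
    """P(~C|~T): probability of no infection given a negative test."""
    true_neg = (1 - Ep) * (1 - mu)   # P(~T|~C) P(~C)
    false_neg = En * mu              # P(~T|C) P(C)
    return true_neg / (true_neg + false_neg)

exact = posterior_not_infected(mu)
approx = 1 - (En / (1 - Ep)) * mu   # first-order expansion in mu

print(exact)    # about 0.99757, vs a prior of 0.99315
print(approx)   # agrees with the exact value to about 1e-5
```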

Under $\mu_{MA}$ as the prior, the probability of being uninfected post-test is 0.99757 vs 0.9932 pre-test. For all practical purposes, our knowledge has not improved.

This too may seem counterintuitive. As an analogy, suppose in some fictional land earthquakes are very rare. Half of them are preceded by a strong tremor the day before (and such a tremor always heralds a coming earthquake), but the other half are unheralded.

If you feel a strong tremor, then you know with certainty that an earthquake is coming the next day. Suppose you don’t feel a strong tremor. Does that mean you should be more confident that an earthquake won’t hit the next day? Not really. Your chance of an earthquake has not decreased by a factor of two. Earthquakes were very rare to begin with, so the default prediction that there wouldn’t be one is only marginally changed by the absence of a tremor the day before.

Of course, $\mu_{MA}$ generally is not the correct prior to use. If you take the test randomly or for no particular reason, then your local version of $\mu_{MA}$ may be suitable. However, if you have a reason to take the test then your $\mu$ is likely to be much higher.

Graphs 1 and 2 below illustrate the information introduced by a positive or negative test result as a function of the choice of prior. In each, the difference in probability is the distance between the posterior and prior graphs. The prior obviously is a straight line since we are plotting it against itself (as the $x$-axis). Note that graph 1 has an abbreviated $x$-axis because $P(C|T)$ plateaus quickly.

From graph 1, it is clear that except for small priors (such as the general infection rate in an area with very low incidence), a positive result adds a lot of information. For $\mu>0.05$, it provides near certainty of infection.

From graph 2, we see that a negative result never adds terribly much information. When the prior is 1 or 0, we already know the answer, and the Bayesian update does nothing. The largest gain is a little over 0.2, but that’s only attained when the prior is quite high. In fact, there’s not much improvement at all until the prior is over 0.1. If you’re 10% sure you already have Covid, a home test will help, but you probably should see a doctor anyway.

Note that these considerations are less applicable to PCR tests, which can have sufficiently small $E_p$ and $E_n$ to result in near-perfect information for any realistic prior.

One last point should be addressed. How can tests with manufacturer-specific false positive and false negative rates depend on your initial guess at your infection probability? If you pick an unconditional local infection rate as your prior, how could they depend on the choice of locale (such as MA in our example)? That seems to make no sense. What if we use a smaller locale or a bigger one?

The answer is that the outcome of the test does not depend on such things. It is a chemical test being performed on a particular sample from a particular person. Like any other experiment, it yields a piece of data. The difference arises in what use we make of that data. Bayesian probability tells us how to incorporate the information into our previous knowledge, converting a prior to a posterior. This depends on that knowledge — i.e. the prior. How we interpret the result depends on our assumptions.

References:

[1] Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection — Dinnes, et al. Note that “specificity” refers to $1-E_p$ and “sensitivity” refers to $1-E_n$. See Wikipedia for further details.

# Fun with Voting in Cambridge, MA

My city of Cambridge, MA is one of a few municipalities which employs ranked choice voting for City Council elections. Unlike most cities, the Mayor is chosen by the City Council and is largely a ceremonial position. Most real power resides with the City Manager, who is appointed for an indefinite term by the City Council. This means that City Councils which get to appoint a new City Manager exert an inordinate influence over the future course of the city. One such point is fast approaching. Unfortunately, given the present and probable near-term composition of the City Council, the decision likely will be based on considerations other than aptitude. However, putting aside my city’s somber prognosis, the upcoming City Council election is a good opportunity to discuss an unusual method of voting and some of its shortcomings.

Ordinary winner-takes-all elections dominate the popular consciousness. National elections are of this nature. It would not be inaccurate to observe that such an approach reflects the general weltanschauung of our culture. However, there are many other voting methods. In fact, voting theory is a vibrant field of research. Together with its sibling, auction theory, it forms part of the subject commonly known as “social choice theory”.

As an aside, I recently published a paper, Social Choice using Moral Metrics, in that field. It focuses on measuring distances between behaviors, rather than on voting systems per se. Back in 2008, I also wrote a voting theory piece about swing votes and block voting. What I termed “influence” in it is more commonly referred to as “voting power”. Neither is related to what I discuss in this post, but I encourage the interested reader to peruse them.

It may be argued that certain voting methods are fairer than others, by one or another definition of fairness. Particular flavors sometimes are advocated by those disenchanted with an existing method, or by those with an agenda to see some particular group gain influence. Calls for change sometimes arise in response to highly-visible anomalies: election outcomes which appear egregiously unfair even to disinterested eyes.

In elections with a large field of candidates or those in which a number of positions are simultaneously filled (such as the Cambridge City Council election), winner-takes-all voting may not be suitable or may give rise to such anomalies.

California’s recall system is an example. The ballot in that case has 2 questions: (1) whether to recall the governor and (2) who should replace him. The first question is winner-takes-all for the governor alone. If he loses, the 2nd question is winner-takes-all for the other candidates. It is quite possible for a candidate to be chosen who easily would have lost to the recalled governor one-on-one. In 2003, 44.6% of voters voted not to recall Governor Davis. He thus was recalled, and Schwarzenegger then won with 48.58% of the votes for replacement. It is highly unlikely that in a head-to-head gubernatorial election, Republican Schwarzenegger would have beaten Democrat Davis in the heavily blue state. However, Davis was excluded from this 2nd contest and Schwarzenegger was deemed preferable to the alternatives by most voters.

## Arrow’s Theorem

It is natural to ask whether any voting system is unimpeachably fair, indicting the use of other systems as anachronistic or disingenuous. Arrow famously proved that, under even a small set of fairness constraints and for a broad class of voting systems, it is impossible to find one. Loosely speaking, when more than 2 candidates are present, no method of aggregating the rankings of candidates by voters into a single outcome ranking can simultaneously satisfy three conditions: (1) if every voter prefers candidate x to candidate y, then x outranks y in the outcome, (2) no single voter’s preference determines the outcome (i.e. no dictator), and (3) if each voter ranks x relative to y (i.e. above or below it) the same way in elections A and B (though the order can differ between voters, of course), then the outcomes of A and B do too. I.e., if voters change their overall ranking of x and y or the relative placement of other candidates, but don’t change whether x is preferred to y or vice versa, then whether x outranks y or vice versa in the outcome is unchanged.

It is quite plausible to add more fairness conditions, but most plausible definitions of fairness would require at least these three conditions to hold. Arrow showed that there is no ranked voting system (including “preponderance of the votes”) in which unfair anomalies cannot arise.

As an aside, if one were to relax a condition, the most palatable clearly would be (3). It is conceivable that a “fair” aggregation method may allow the overall ranking of candidates to affect a pairwise order in the outcome. However, this generally is deemed undesirable.

As with complexity results in computer science (CS) or Gödel’s incompleteness theorem in logic, the theoretical existence of hard or problematic cases does not necessarily pose a practical obstacle. In CS, an algorithm with worst-case exponential complexity may be far more useful than one with linear complexity in real-world applications. For example, the latter could have a huge constant cost (often referred to as a “galactic algorithm”) and the former could be exponential only in an infinitesimal fraction of cases or under circumstances which never arise in practice. Gödel’s theorem does have real-world examples (i.e. non-meta-theorems), but (at this point) they remain rare.

Though nowhere near as profound,  Arrow’s theorem invites similar skepticism.  The impossibility of a preference system which excludes all anomalies does not mean such anomalies arise in practice, or that a system which excludes all realistic anomalies cannot be found.   Unfortunately (or fortunately, depending on one’s perspective), such anomalies do arise in practice.  Worse,  the systems in question often are of significant social import and subject to intense scrutiny.  The anomalies which do arise can be quite visible and politically troublesome.

Social choice theory exhibits another critical difference from CS and logic, one which merits additional caution.  The goal of logic, mathematics, and theoretical computer science generally is to understand which problems are solvable and how best to solve them.  Anomalies are viewed as pathological and undesirable.  They sometimes serve as useful counterexamples, guiding researchers to better understanding and helping them improve their tools.   However, they are to be avoided in real-world applications.   If a pathological case arises in such a context, alternate machinery must be employed or the framework modified to exclude it.

This need not be the case in social choice theory.   Not everyone’s goal is aligned, or social choice would be unnecessary.    With elections, there could be adverse incentives. It may be possible to game an election by identifying and exploiting anomalies endemic to the specific system involved.  There also may be groups who strongly prefer that anomalies arise, either for purposes of fomenting discord or if those anomalies serve them well.  For this reason, dismissing anomalies as almost impossible under some assumed prior may be naive. The prior must incorporate human behavior, and this very well could concentrate probability around the anomalies.  Put another way, if we naively model the probability of anomalies arising using an assumption of ideal behavior we risk ignoring the very real possibility that participants will engineer or utilize anomalies.

This issue is related to Gibbard’s theorem, which loosely states that under even weaker conditions than Arrow’s theorem (at least 3 candidates and no dictator), there is no ideal ballot which reflects a voter’s preferences. Put another way, the voting system can be gamed. In fact, a voter may need to game it (perhaps in response to polls or other information) in order to best reflect their individual preferences. The optimal ballot ranking to enact a voter’s preferences may not be their actual preference ranking of candidates.

## The Rules in Cambridge

What does all this have to do with the Cambridge elections? Cambridge employs a particular system of ranked choice voting, which they refer to as “Proportional Representation”. This often is portrayed as fairer, more democratic, and so on. I am going to offer an example of an egregious anomaly which can result. I do this not in the expectation that it will arise or be exploited. Nor do I hope to change a voting method that is, all things considered, quite reasonable. Rather, the anomaly serves as an illustrative example of the inherent problem with claiming that one voting system is “fairer” than another.

First, I’ll describe the precise rules of the Cambridge election, as best I understand them. See MA Election Laws, section 9 for details.  State law governs the general rules for proportional representation voting in any Massachusetts municipalities which choose to employ it.  Only certain parameters and details of execution are left to local discretion.

The City Council consists of 9 individuals, and the entire body is elected once every 2 years. Voters are presented with a list of candidates and may select a 1st choice, a 2nd choice, and so on.  I do not recall the maximum number of choices which can be made, but let us suppose it is not limited. The anomaly arises whether or not this is the case. Note that a given voter is not required to rank all the candidates. They could select only their top 3 choices, for example. Whether or not a full ranking by each voter is required does not affect the anomaly.

First some definitions. N will denote the total number of ballots (i.e. the number of voters who participate in the election).  At the time of writing, the minimum number of signatures to get on the ballot is 50.  We’ll call this ‘M’, because State law gives it a role in the algorithm. Q=(N/10)+1 will be the “quota”, the minimum number of ballots a candidate needs to win.

Why not choose Q=N/9? The type of voting system we’re describing is sometimes referred to as “single-transferable-vote” (STV) because of the use of spillovers (described below). There are two common quota methods for determining STV winners: (1) “Hare” corresponds to Q=N/9, and (2) “Droop” corresponds to Q=(N/10)+1. In each case, we round up if needed. The two methods generally result in the same outcome or differ only in how the last winner is chosen. Each has benefits and drawbacks vis-a-vis what is deemed fair in terms of proportional representation. Among other things, the Droop quota tends to favor small parties over large ones. It also is the smallest quota which guarantees no more than 9 winners. As we will see, neither method guarantees a full complement of 9 winners. Regardless, the Droop quota is the one used by Cambridge.
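
For concreteness, the two quotas might be computed as follows. This is a sketch: rounding conventions for STV quotas vary, and I use the common floor-plus-one form of the Droop quota here:

```python
import math

def hare_quota(n_ballots, seats=9):
    # Hare: ballots divided by seats, rounded up if fractional.
    return math.ceil(n_ballots / seats)

def droop_quota(n_ballots, seats=9):
    # Droop: one more than ballots divided by (seats + 1); the
    # smallest quota guaranteeing at most `seats` winners.
    return n_ballots // (seats + 1) + 1

n = 25_000
print(hare_quota(n), droop_quota(n))   # 2778 2501
```

Note that the Droop quota is always the smaller of the two, which is why it tends to spread seats more broadly.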

Once the ballots have been collected, a sequence of steps is performed by computer. An order of polling places is determined randomly by the city beforehand. Within each polling place, ballots are sorted by the choice of 1st place candidate (and then presumably randomly within each such cohort).  The ballots then go through a series of stages.  The first stage is special.

Stage 1: Any candidate who reaches Q votes is declared a winner. Subsequent 1st place votes for them are passed to the next ranked candidate on the ballot who has not already been declared a winner. E.g., if a ballot is reached with x, y, and z as the 1st, 2nd, and 3rd candidates, and both x and y already have been declared winners, it would go to z. If no non-winner choice remains on the ballot, it is swapped with a ballot that already was consumed by the winner and has non-winner choices on it. This minimizes the number of discarded ballots. Note that it always pays for a voter to rank a lot of choices, because otherwise some other voter may have their preference registered instead. It’s not clear from the law in what order the 1st place candidates’ ballots should be sorted, but we’ll assume randomly. It does not matter for the anomaly we will discuss. As the sorting proceeds, any candidate with Q votes (by spillover from other candidates or by being 1st on their own) is declared a winner, and any remaining votes for them spill over as described.

Once this process has been completed, almost every ballot has been assigned to some candidate (i.e. either consumed by a winner or spilled over to a remaining candidate). Because of the ballot-swapping mechanism described, it is unlikely (but still possible) for ballots to have been discarded due to lack of non-winner alternatives. Each winner has consumed precisely Q ballots, and each remaining candidate has fewer than Q ballots. In what follows we use “higher-ranked” to refer to the preferred candidates on a ballot. In practice, this means they have been assigned a lower number. I.e., the 1st place candidate on a ballot is “higher-ranked” than the 2nd place candidate.

At this point, any candidate with fewer than M ballots (in our case 50) is declared to have lost. Their ballots are transferred in the same manner as before to the remaining candidates. Note that this form of elimination only takes place in this first round, since the number of ballots assigned to a candidate cannot decrease in subsequent rounds.

Stages 2+: If 9 candidates have been declared winners, the process ends. Otherwise, the trailing candidate is declared to have lost, and their votes are transferred (one by one) to the remaining candidates in the same manner as before, but with one important change. Unlike in the first round, if no remaining non-winner candidates are listed on a ballot, it is discarded rather than swapped with another. As before, any candidate who reaches Q votes is declared a winner and can accrue no more votes. There are some tie-breaker rules associated with determining who is the trailing candidate at the end of a given round, but we won’t go into those. If at any time the number of winners plus remaining candidates is 9, all remaining candidates are declared winners. The round ends when every ballot in play either has been spilled over (once) or discarded. Those ballots not discarded or consumed by winners and those candidates not eliminated then proceed to the next round.
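The ballot-transfer rule common to both stages can be sketched as a small helper (a hypothetical simplification of ours, not the statutory language; ballots are assumed to list candidates best-first):

```python
# Hypothetical helper: hand a ballot to its highest-ranked candidate still
# in play (i.e. not yet declared a winner and not yet eliminated).
def transfer(ballot, winners, eliminated):
    for cand in ballot:  # candidates appear in order of preference
        if cand not in winners and cand not in eliminated:
            return cand
    return None  # no non-winner choice remains: the ballot is discarded

# The example from the text: x and y already won, so the ballot goes to z
assert transfer(("x", "y", "z"), winners={"x", "y"}, eliminated=set()) == "z"
# A ballot with no remaining choices is discarded (None)
assert transfer(("x", "y"), winners={"x"}, eliminated={"y"}) is None
```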

Note that a spillover never can result in a ballot being assigned to a higher-ranked candidate. For example, suppose a ballot already has been assigned to the 3rd listed candidate on it. This only could happen if there was a reason to skip the top 2. This means they either already were declared winners or already were eliminated. Nor do any swaps (possible only in the 1st round) affect this. Any subsequent spillovers must go to lower-ranked candidates, or the ballot would have been handed to a higher-ranked candidate already.

Note that unless every voter ranks every candidate, it is possible for some ballots to be discarded. This is highly unlikely in the first round, because swapping is allowed. However, in subsequent rounds ballots may be discarded if they list no candidates which remain in play (i.e. that have not already been declared winners or eliminated). Though there is a theoretical bound on the number of possible discarded ballots, it can be high.

It is quite possible for an insufficient number of winners to be declared. This is no surprise. If every voter lists the same three candidates, but no others, then only three candidates will win. Insufficient ranking by voters can lead to inadequate outcomes.

Unless the field of candidates is reduced below 9 in the first round (i.e. too few candidates meet the 50 vote threshold), there ultimately will be 9 winners. However, some may not get many votes. If every voter ranks every candidate, then all winners will meet quota. If not, some candidates may win without meeting quota by dint of being the last ones uneliminated.

A number of obvious anomalies come to mind. For example, if everyone votes for x, y, and z as the top 3 candidates but there is a huge field of candidates for 4th place — so that each gets 51 spillover votes — then the remaining candidates won’t be eliminated in the first round. The remaining 6 winners then will be selected by the tie-breaker procedure (which we didn’t elaborate on).  Fair yes, desirable no. However, such anomalies can be chalked up to voter failure. If each voter ranks the whole field of candidates, they won’t arise.

One important thing to note is that the election method described does not obey the conditions of Arrow’s theorem. The procedure is not even deterministic, and certainly does not satisfy the 3rd fairness condition (independence of irrelevant alternatives). It is quite possible for a change in the ranking of candidate z on individual ballots to affect the order of x relative to y in the outcome even if the order of x relative to y is unchanged on those individual ballots. As an extreme example, suppose x is 1st and y is 2nd on 50 ballots and y is 1st and x is 2nd on 50 ballots, and suppose z is 3rd on all of these.  If one of the 1st 50 ballots moves z to the top, x will be eliminated in the 1st round.  If one of the 2nd 50 ballots moves z to the top, y will be eliminated in the 1st round.  In neither case did the ranking of x relative to y change on any ballots.  Some anomalies arise for similar reasons to those involved in Arrow’s theorem, but others arise for different reasons.

## The Anomaly

Let us now consider the specific anomaly we set out to discuss. Suppose there are 10000 ballots and 9 positions to be filled. We require 1001 votes for a candidate to win, but we’ll call it 1000 to simplify calculation. Suppose that candidate x is ranked 1st on all 10000 ballots, candidate y is ranked 3rd on all 10000 ballots, and 100 other candidates (which we’ll call z1-z100) are ranked 2nd on 100 ballots each.

Everyone agrees that candidates x and y should be on the City Council. They both rank in the top 3 choices for everyone. However, candidate y is eliminated in the first round. All the spillover votes from candidate x go to candidates z1-z100. The number could vary for each, depending on the order in which ballots are processed.  For example, it is possible that each of z1-z100 is assigned 90 spillover votes from candidate x.  It also is possible that z1-z90 would accrue 100 spillover votes each, and the rest would get 0 and be eliminated.

At the end of round 1, x is declared a winner and consumes 1000 votes, y has 0 votes, and z1-z100 each have between 0 and 100 votes.  At least 90 of them have enough to survive the 50 vote test.  However, y is eliminated.  The remaining z’s then proceed through a series of elimination and spillover rounds (with possible tie-breakers for the trailing candidate if needed) until only 8 of the z’s remain. These then are declared winners.
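A quick first-round simulation of this scenario confirms the effect (a minimal sketch of ours: quota treated as 1000, ballots processed in one fixed order, swapping ignored):

```python
Q = 1000
# 10000 ballots: x ranked 1st and y ranked 3rd on all of them; each of
# z1..z100 is ranked 2nd on 100 ballots.
ballots = []
for z in range(1, 101):
    ballots += [("x", f"z{z}", "y")] * 100

tally = {}
winners = set()
for ballot in ballots:
    if "x" not in winners:
        tally["x"] = tally.get("x", 0) + 1
        if tally["x"] == Q:
            winners.add("x")  # x consumes Q ballots, then spillover begins
    else:
        nxt = ballot[1]  # the z ranked 2nd; y (ranked 3rd) is never reached
        tally[nxt] = tally.get(nxt, 0) + 1

print(tally.get("y", 0))  # 0 — y receives no votes and is eliminated
```

Every one of the 9000 spillover votes lands on some z before the count ever reaches y.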

The result is 1 winner everyone wants, 8 winners few people agree on, and the conspicuous loss of the 2nd candidate everyone wants.

This is just one fun example of how well-intentioned voting systems can result in highly-undesirable outcomes.

# CCSearch State Space Algo

While toying with automated Fantasy Sports trading systems, I ended up designing a rapid state search algorithm that was suitable for a variety of constrained knapsack-like problems.

A reference implementation can be found on github: https://github.com/kensmosis/ccsearch.

Below is a discussion of the algorithm itself. For more details, see the source code in the github repo. Also, please let me know if you come across any bugs! This is a quick and dirty implementation.

Here is a description of the algorithm:

— Constrained Collection Search Algorithm —

Here, we discuss a very efficient state-space search algorithm which originated with a Fantasy Sports project but is applicable to a broad range of applications. We dub it the Constrained Collection Search Algorithm for want of a better term. A C++ implementation, along with a Python front-end, is included as well.

In the Fantasy Sports context, our code solves the following problem: We’re given a tournament with a certain set of rules and requirements, a roster of players for that tournament (along with positions, salaries and other info supplied by the tournament’s host), and a user-provided performance measure for each player. We then search for those teams which satisfy all the constraints while maximizing team performance (based on the player performances provided). We allow a great deal of user customization and flexibility, and currently can accommodate (to our knowledge) all major tournaments on Draftkings and FanDuel. Through aggressive pruning, execution time is minimized.

As an example, on data gleaned from some past Fantasy Baseball tournaments, our relatively simple implementation managed to search a state space of size approximately ${10^{21}}$ unconstrained fantasy teams, ultimately evaluating under ${2}$ million plausible teams and executing in under ${4}$ seconds on a relatively modest desktop computer.

Although originating in a Fantasy Sport context, the CCSearch algorithm and code is quite general.

— Motivating Example —

We’ll begin with the motivating example, and then consider the more abstract case. We’ll also discuss some benchmarks and address certain performance considerations.

In Fantasy Baseball, we are asked to construct a fantasy “team” from real players. While the details vary by platform and tournament, such games share certain common elements:

• A “roster”. This is a set of distinct players for the tournament.
• A means of scoring performance of a fantasy team in the tournament. This is based on actual performance by real players in the corresponding games. Typically, each player is scored based on certain actions (perhaps specific to their position), and these player scores then are added to get the team score.
• For each player in the roster, the following information (all provided by the tournament host except for the predicted fantasy-points, which generally is based on the user’s own model of player performance):
• A “salary”, representing the cost of inclusion in our team.
• A “position” representing their role in the game.
• One or more categories they belong to (ex. pitcher vs non-pitcher, real team they play on).
• A prediction of the fantasy-points the player is likely to score.
• A number ${N}$ of players which constitute a fantasy team. A fantasy team must have precisely this number of players.
• A salary cap. This is the maximum sum of player salaries we may “spend” in forming a fantasy team. Most, but not all, tournaments have one.
• A set of positions, and the number of players in each. The players on our team must adhere to these. For example, we may have ${3}$ players from one position and ${2}$ from another and ${1}$ each from ${4}$ other positions. Sometimes there are “flex” positions, and we’ll discuss how to accommodate those as well. The total players in all the positions must sum to ${N}$.
• Various other constraints on team formation. These come in many forms and we’ll discuss them shortly. They keep us from having too many players from the same real-life team, etc.

To give a clearer flavor, let’s consider a simple example: Draftkings Fantasy Baseball. There are at least 7 Tournament types listed (the number and types change with time, so this list may be out of date). Here are some current game types. For each, there are rules for scoring the performance of players (depending on whether hitter or pitcher, and sometimes whether relief or starting pitcher — all of which info the tournament host provides):

• Classic: ${N=10}$ players on a team, with specified positions P,P,C,1B,2B,3B,SS,OF,OF,OF. Salary cap is \$50K. Constraints: (1) ${\le 5}$ hitters (non-P players) from a given real team, and (2) players from ${\ge 2}$ different real games must be present.
• Tiers: ${N}$ may vary. A set of performance “tiers” is provided by the host, and we pick one player from each tier. There is no salary cap, and the constraint is that players from ${\ge 2}$ different real games must be present.
• Showdown: ${N=6}$ players, with no position requirements. Salary cap is \$50K. Constraints: (1) players from ${\ge 2}$ different real teams, and (2) ${\le 4}$ hitters from any one team.
• Arcade: ${N=6}$ players, with 1 pitcher, 5 hitters. Salary cap is \$50K. Constraints are: (1) ${\le 3}$ hitters (non-P players) from a given real team, and (2) players from ${\ge 2}$ different real games must be present.
• Playoff Arcade: ${N=7}$ players, with 2 pitchers and 5 hitters. Salary cap is \$50K. Constraints are: (1) ${\le 3}$ hitters (non-P players) from a given real team, and (2) players from ${\ge 2}$ different real games must be present.
• Final Series (involves 2 games): ${N=8}$ players, with 2 pitchers and 6 hitters. \$50K salary cap. Constraints are: (1) ${1}$ pitcher from each of the two games, (2) ${3}$ hitters from each of the ${2}$ games, (3) can’t have the same player twice (even if they appear in both games), and (4) Must have hitters from both teams in each game.

• Lowball: Same as Tiers, but the lowest score wins.

Although the constraints above may seem quite varied, we will see they fall into two easily-codified classes.

In the Classic tournament, we are handed a table prior to the competition. This contains a roster of available players. In theory there would be 270 (9 for each of the 30 teams), but not every team plays every day and there may be injuries so it can be fewer in practice. For each player we are given a field position (P, C, 1B, 2B, 3B, SS, or OF), a Fantasy Salary, their real team, and which games they will play in that day. For our purposes, we’ll assume they play in a single game on a given day, though it’s easy to accommodate more than one.

Let us suppose that we have a model for predicting player performance, and are thus also provided with a mean and standard deviation performance. This performance is in terms of “points”, which is Draftkings’ scoring mechanism for the player. I.e., we have a prediction for the score which Draftkings will assign the player using their (publicly available) formula for that tournament and position. We won’t discuss this aspect of the process, and simply take the predictive model as given.

Our goal is to locate the fantasy teams which provide the highest combined predicted player scores while satisfying all the requirements (position, salary, constraints) for the tournament. We may wish to locate the top ${L}$ such teams (for some ${L}$) or all those teams within some performance distance of the best.

Note that we are not simply seeking a single, best solution. We may wish to bet on a set of 20 teams which diversify our risk as much as possible. Or we may wish to avoid certain teams in post-processing, for reasons unrelated to the constraints.

It is easy to see that in many cases the state space is enormous. We could attempt to treat this as a knapsack problem, but the desire for multiple solutions and the variety of constraints make it difficult to do so. As we will see, an aggressively pruned direct search can be quite efficient.

— The General Framework —

There are several good reasons to abstract this problem. First, it is the sensible mathematical thing to do. It also offers a convenient separation from a coding standpoint. Languages such as Python are very good at munging data when efficiency isn’t a constraint. However, for a massive state space search they are the wrong choice. By providing a general wrapper, we can isolate the state-space search component, code it in C++, and call out to execute this as needed. That is precisely what we do.

From the Fantasy Baseball example discussed (as well as the variety of alternate tournaments), we see that the following are the salient components of the problem:

• A cost constraint (salary sum)
• The number of players we must pick for each position
• The selection of collections (teams) which maximize the sum of player performances
• The adherence to certain constraints involving player features (hitter/pitcher, team, game)

Our generalized tournament has the following components:

• A number of items ${N}$ we must choose. We will term a given choice of ${N}$ items a “collection.”
• A total cost cap for the ${N}$ items.
• A set of items, along with the following for each:
• A cost
• A mean value
• Optionally, a standard deviation value
• A set of features. Each feature has a set of values it may take, called “groups” here. For each feature, a table (or function) tells us which group(s), if any, each item is a member of. If every item is a member of one and only one group, then that feature is termed a “partition” for obvious reasons.
• A choice of “primary” feature, whose role will be discussed shortly. The primary feature need not be a partition. Associated with the primary feature is a count for each group. This represents the number of items which must be selected for that group. The sum of these counts must be ${N}$. An item may be chosen for any primary group in which it is a member, but may not be chosen twice for a given collection.
• A set of constraint functions. Each takes a collection and, based on the information above, accepts or rejects it. We will refer to these as “ancillary constraints”, as opposed to the overall cost constraint, the primary feature group allocation constraints, and the number of items per collection constraint. When we speak of “constraints” we almost always mean ancillary constraints.

To clarify the connection to our example, the fantasy team is a collection, the players are items, the cost is the salary, the value is the performance prediction, the primary feature is “position” (and its groups are the various player positions), other features are “team” (whose groups are the 30 real teams), “game” (whose groups are the real games being played that day), and possibly one or two more which we’ll discuss below.

Note that each item may appear only once in a given collection even if they theoretically can fill multiple positions (ex. they play in two games of a double-header or they are allowed for a “flex” position as well as their actual one in tournaments which have such things).

Our goal at this point will be to produce the top ${L}$ admissible collections by value (or a good approximation thereof). Bear in mind that an admissible collection is a set of items which satisfy all the criteria: cost cap, primary feature group counts, and constraint functions. The basic idea is that we will perform a tree search, iterating over groups in the primary feature. This is why that group plays a special role. However, its choice generally is a structural one dictated by the problem itself (as in Fantasy Baseball) rather than a control lever. We’ll aggressively prune where we can based on value and cost as we do so. We then use the other features to filter the unpruned teams via the constraint functions.

It is important to note that features need not be partitions. This is true even of the primary feature. In some tournaments, for example, there are “utility” or “flex” positions. Players from any other position (or some subset of positions) are allowed for these. A given player thus could be a member of one or more position groups. Similarly, doubleheaders may be allowed, in which case a player may appear in either of 2 games. This can be accommodated via a redefinition of the features.

In most cases, we’ll want the non-primary features to be partitions if possible. We may need some creativity in defining them, however. For example, consider the two constraints in the Classic tournament described above. Hitter vs pitcher isn’t a natural feature. Moreover, the constraint seems to rely on two distinct features. There is no rule against this, of course. But we can make it a more efficient single-feature constraint by defining a new feature with 31 groups: one containing all the pitchers from all teams, and the other 30 containing hitters from each of the 30 real teams. We then simply require that there be no more than 5 items in any group of this new feature. Because only 2 pitchers are picked anyway, the 31st group never would be affected.
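As a sketch, that combined feature is a one-line lookup (names here are illustrative; teams are assumed to be numbered 1..30):

```python
# Combined feature: group 0 holds every pitcher, regardless of team;
# groups 1..30 hold the hitters of each real team.
def hitter_team_group(position, team):
    return 0 if position == "P" else team

players = [("P", 7), ("C", 7), ("OF", 7), ("P", 22)]  # toy (position, team)
groups = [hitter_team_group(pos, team) for pos, team in players]
assert groups == [0, 7, 7, 0]  # both pitchers land in group 0
```

The "≤5 per team" rule then becomes a single "at most 5 items per group" constraint on this one feature.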

Our reference implementation allows for general user-defined constraints via a functionoid, but we also provide two concrete constraint classes. With a little cleverness, these two cover all the cases which arise in Fantasy Sports. Both concern themselves with a single feature, which must be a partition:

• Require items from at least ${n}$ groups. It is easy to see that the ${\ge 2}$ games and ${\ge 2}$ teams constraints fit this mold.
• Allow at most ${n}$ items from a given group. The ${\le 3,4,5}$ hitter per team constraints fit this mold.

When designing custom constraints, it is important to seek an efficient implementation. Every collection which passes the primary pruning will be tested against every constraint. Pre-computing a specialized feature is a good way to accomplish this.
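A minimal sketch of the two concrete constraint classes (function names are ours, not the repo’s; `group_of[i]` gives item ${i}$’s group in the partition feature):

```python
from collections import Counter

def at_least_n_groups(collection, group_of, n):
    """Require items from at least n distinct groups (e.g. >= 2 games)."""
    return len({group_of[i] for i in collection}) >= n

def at_most_n_per_group(collection, group_of, n):
    """Allow at most n items from any one group (e.g. <= 5 hitters/team)."""
    return max(Counter(group_of[i] for i in collection).values()) <= n

game_of = {0: "A", 1: "A", 2: "B"}  # toy feature: each item's game
assert at_least_n_groups([0, 2], game_of, 2)       # items span 2 games
assert not at_least_n_groups([0, 1], game_of, 2)   # both from game A
assert at_most_n_per_group([0, 1, 2], game_of, 2)  # no game exceeds 2
```

Both run in time linear in the collection size, which matters since they are applied to every unpruned collection.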

— Sample Setup for DraftKings Classic Fantasy Baseball Tournament —

How would we configure our system for a real application? Consider the Classic Fantasy Baseball Tournament described above.

The player information may be provided in many forms, but for purposes of exposition we will assume we are handed vectors, each of the correct length and with no null or bad values. We are given the following:

• A roster of players available in the given tournament. This would include players from all teams playing that day. Each team would include hitters from the starting lineup, as well as the starting pitcher and one or more relief pitchers. We’ll say there are ${M}$ players, listed in some fixed order for our purposes. ${R_i}$ denotes player ${i}$ in our listing.
• A set ${G}$ of games represented in the given tournament. This would be all the games played on a given day. Almost every team plays each day of the season, so this is around 15 games. We’ll ignore the 2nd game of doubleheaders for our purposes (so a given team and player plays at most once on a given day).
• A set ${T}$ of teams represented in the given tournament. This would be all 30 teams.
• A vector ${p}$ of length ${M}$, identifying the allowed positions of each player. These are P (pitcher), C (catcher), 1B (1st base), 2B (2nd base), 3B (3rd base), SS (shortstop), OF (outfield).
• A vector ${t}$ of length ${M}$, identifying the team of each player. This takes values in ${T}$.
• A vector ${g}$ of length ${M}$, identifying the game each player participates in that day. This takes value in ${G}$.
• A vector ${s}$ of length ${M}$, providing the fantasy salary assigned by DraftKings to each player (always positive).
• A vector ${v}$ of length ${M}$, providing our model’s predictions of player performance. Each such value is the mean predicted fantasy score for the player under DraftKing’s scoring system for that tournament and player position. As an aside, DK never scores pitchers as hitters even if they bat.

Note that DraftKings provides all this info (though it may have to be munged into some useable form), except the model prediction.

We now define a new vector ${h}$ of length ${M}$ as follows: ${h_i=t_i}$ if player ${i}$ is a hitter (i.e. not a pitcher), and ${h_i=P}$ if a pitcher, where ${P}$ designates some new value not in ${T}$.

Next, we map the values of ${G}$, ${T}$, and the positions into nonnegative consecutive integers (i.e. we number them). So the games run from ${1\dots |G|}$, the teams from ${1\dots |T|}$, and the positions from ${1\dots 7}$. We’ll assign ${0}$ to the pitcher category in the ${h}$ vector. The players already run from ${1\dots M}$. The vectors ${p}$, ${t}$, ${g}$, and ${h}$ now take nonnegative integer values, while ${s}$ and ${v}$ take real ones (actually ${s}$ is an integer too, but we don’t care here).
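This renumbering is ordinary label encoding; a sketch:

```python
# Number the distinct values of a label vector consecutively from 1.
def encode(values):
    index = {v: k for k, v in enumerate(sorted(set(values)), start=1)}
    return [index[v] for v in values]

t = encode(["NYY", "BOS", "NYY", "LAD"])  # toy team vector
assert t == [3, 1, 3, 2]  # BOS -> 1, LAD -> 2, NYY -> 3 (alphabetical)
```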

From this, we pass the following to our algorithm:

• Number of items: ${M}$
• Size of a collection: ${10}$
• Feature 1: ${7}$ groups (the positions), and marked as a partition.
• Feature 2: ${|T|}$ groups (the teams), and marked as a partition.
• Feature 3: ${|G|}$ groups (the games), and marked as a partition.
• Feature 4: ${|T|+1}$ groups (the teams for hitters plus a single group of all pitchers), and marked as a partition.
• Primary Feature: Feature 1
• Primary Feature Group Counts: ${(2,1,1,1,1,1,3)}$ (i.e. P,P,C,1B,2B,3B,SS,OF,OF,OF)
• Item costs: ${s}$
• Item values: ${v}$
• Item Feature 1 Map: ${f(i,j)= \delta_{p_i,j}}$ (i.e. ${1}$ if player ${i}$ is in position ${j}$)
• Item Feature 2 Map: ${f(i,j)= \delta_{t_i,j}}$ (i.e. ${1}$ if player ${i}$ is on team ${j}$)
• Item Feature 3 Map: ${f(i,j)= \delta_{g_i,j}}$ (i.e. ${1}$ if player ${i}$ is in game ${j}$)
• Item Feature 4 Map: ${f(i,j)= \delta_{h_i,j}}$ (i.e. ${1}$ if player ${i}$ is a hitter on team ${j}$ or a pitcher and ${j=0}$)
• Cost Cap: ${50,000}$
• Constraint 1: No more than ${5}$ items in any one group of Feature 4. (i.e. ${\le 5}$ hitters from a given team)
• Constraint 2: Items from at least ${2}$ groups of Feature 3. (i.e. items from ${\ge 2}$ games)

Strictly speaking, we could have dispensed with Feature 2 in this case (we really only need the team through Feature 4), but we left it in for clarity.

Note that we also would pass certain tolerance parameters to the algorithm. These tune its aggressiveness as well as the number of teams potentially returned.
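The Kronecker-delta feature maps in the setup above reduce to simple membership tests; for a partition feature, a sketch might be:

```python
# For a partition feature, labels[i] is the single group item i belongs to,
# so the map f(i, j) is just a Kronecker delta.
def feature_map(labels):
    return lambda i, j: 1 if labels[i] == j else 0

p = [1, 3, 7, 1]          # toy position labels for 4 players
f1 = feature_map(p)
assert f1(0, 1) == 1      # player 0 is in position group 1
assert f1(2, 7) == 1      # player 2 is in position group 7
assert f1(1, 1) == 0      # player 1 is not in group 1
```

Non-partition features (flex positions, doubleheaders) would instead map an item to a set of groups, but the interface is the same.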

— Algorithm —

— Culling of Individual Items —

First, we consider each group of the primary feature and eliminate strictly inferior items. These are items we never would consider picking because there always are better choices. For this purpose we use a tolerance parameter, ${\epsilon}$. For a given group, we do this as follows. Assume that we are required to select ${n}$ items from this group:

• Restrict ourselves only to items which are unique to that group. I.e., if an item appears in multiple groups it won’t be culled.
• Scan the remaining items in descending order of value. For item ${i}$ with cost ${c}$ and value ${v}$,
• Scan over all items ${j}$ with ${v_j>v_i(1+\epsilon)}$
• If there are ${n}$ such items that have ${c_j\le c_i}$ then we cull item ${i}$.

So basically, it’s simple comparison shopping. We check if there are enough better items at the same or lower cost. If so, we never would want to select the item. We usually don’t consider “strictly” better, but allow a buffer. The other items must be sufficiently better. There is a rationale behind this which will be explained shortly. It has to do with the fact that the cull stage has no foreknowledge of the delicate balance between ancillary constraints and player choice. It is a coarse dismissal of certain players from consideration, and the tolerance allows us to be more or less conservative in this as circumstance dictates.
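The cull might look like this in code (a sketch of ours for items unique to their group; the `ntol` buffer discussed further below is included as a parameter):

```python
# Per-group cull: drop item i if at least n + ntol items are sufficiently
# better (value > v_i * (1 + eps)) at the same or lower cost.
def cull(items, n, eps, ntol=0):
    keep = []
    for v_i, c_i in items:
        better = sum(1 for v_j, c_j in items
                     if v_j > v_i * (1 + eps) and c_j <= c_i)
        if better < n + ntol:
            keep.append((v_i, c_i))
    return keep

pool = [(10.0, 5), (9.0, 5), (8.0, 5), (2.0, 5), (3.0, 9)]  # (value, cost)
assert cull(pool, n=2, eps=0.1) == [(10.0, 5), (9.0, 5)]
# ntol=1 demands 3 strictly-better items before culling, so (8.0, 5) survives
assert cull(pool, n=2, eps=0.1, ntol=1) == [(10.0, 5), (9.0, 5), (8.0, 5)]
```

Note how ${\epsilon}$ works here: with eps=0.1, item (9.0, 5) is kept because only (10.0, 5) beats it by more than 10%.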

If a large number of items appear in multiple groups, we also can perform a merged pass — in which those groups are combined and we perform a constrained cull. Because we generally only have to do this with pairs of groups (ex. a “flex” group and each regular one), the combinatorial complexity remains low. Our reference implementation doesn’t include an option for this.

To see the importance of the initial cull, consider our baseball example but with an extra 2 players per team assigned to a “flex” position (which can take any player from any position). We have ${8}$ groups with (${60}$,${30}$,${30}$,${30}$,${30}$,${30}$,${90}$,${270}$) allowed items. We need to select ${(2,1,1,1,1,1,3,2)}$ items from amongst these. In reality, fantasy baseball tournaments with flex groups have fewer other groups — so the size isn’t quite this big. But for other Fantasy Sports it can be.

The size of the overall state space is around ${5\times10^{21}}$. Suppose we can prune just 1/3 of the players (evenly, so 30 becomes 20, 60 becomes 40, and 90 becomes 60). This reduces the state space by ${130\times}$ to around ${4\times10^{19}}$. If we can prune 1/2 the players, we reduce it by ${4096\times}$ to around ${10^{18}}$. And if we can prune it by 2/3 (which actually is not as uncommon as one would imagine, especially if many items have ${0}$ or very low values), we reduce it by ${531441\times}$ to a somewhat less unmanageable starting point of ${O(10^{16})}$.
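We can sanity-check the 130x figure directly (treating the state space as the plain product of per-group binomial coefficients, and ignoring overlap between the flex group and the others):

```python
from math import comb

sizes  = (60, 30, 30, 30, 30, 30, 90, 270)   # items per group, as above
counts = (2, 1, 1, 1, 1, 1, 3, 2)            # selections per group

def space(ms):
    prod = 1
    for m, n in zip(ms, counts):
        prod *= comb(m, n)
    return prod

# Cull 1/3 of each group evenly and compare
reduction = space(sizes) // space(tuple(2 * m // 3 for m in sizes))
print(reduction)  # on the order of 130
```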

Thus we see the importance of this initial cull. Even if we have to perform a pairwise analysis for a flex group — and each paired cull costs ${n^2m^2}$ operations (it doesn’t), where ${m}$ is the non-flex group size and ${n}$ is the flex-group size, we’d at worst get ${(\sum_i m_i)^2\sum_i m_i^2}$ which is ${O(10^9)}$ operations. In reality it would be far lower. So a careful cull is well worth it!

One important word about this cull, however. It is performed at the level of individual primary-feature groups. While it accommodates the overall cost cap and the primary feature group allocations, it has no knowledge of the ancillary constraints. It is perfectly possible that we cull an item which could be used to form the highest value admissible collection once the ancillary constraints are taken into account. This is part of why we use the tolerance ${\epsilon}$. If it is set too high, we will cull too few items and waste time down the road. If it is too low, we may run into problems meeting the ancillary constraints.

We note that in fantasy sports, the ancillary constraints are weak in the sense that they affect a small set of collections and these collections are randomly distributed. I.e., we would have to conspire for them to have a meaningful statistical effect. We also note that there tend to be many collections within the same tiny range of overall value. Since the item value model itself inherently is statistical, the net effect is small. We may miss a few collections but they won’t matter. We’ll have plenty of others which are just as good and are as statistically diverse as if we included the omitted ones.

In general use, we may need to be more careful. If the ancillary constraints are strong or statistically impactful, the initial cull may need to be conducted with care. Its effect must be measured and, in the worst case, it may need to be restricted or omitted altogether. In most cases, a well-chosen ${\epsilon}$ will achieve the right compromise.

In practice, ${\epsilon}$ serves two purposes: (1) it allows us to tune our culling so that the danger of an impactful omission due to the effect of the ancillary constraints remains low, yet we still gain some benefit from this step, and (2) it allows us to accommodate “flex” groups or other non-partition primary features without a more complicated pairwise cull. This is not perfect, but often can achieve the desired effect with far less effort.

Another approach to accommodating flex groups or avoiding suboptimal results due to the constraints is to require more than the selection count when culling in a given group. Suppose we need to select ${2}$ items from a given group. Ordinarily, we would require that there be at least ${2}$ items with value ${(1+\epsilon)v}$ and cost ${\le c}$ in order to cull an item with value ${v}$ and cost ${c}$. We could buffer this by requiring ${3}$ or even ${4}$ such better items. This would reduce the probability of discarding useful items, but at the cost of culling far fewer. In our code, we use a parameter ${ntol}$ to reflect this. If ${n_i}$ is the number of selected items for group ${i}$ (and the number we ordinarily would require to be strictly better in order to cull others), we now require ${n_i+ntol}$ strictly better items. Note that ${ntol}$ solely is used for the individual cull stage.

One final note. If a purely lossless search is required then the individual cull must be omitted altogether. In the code this is accomplished by choosing either ${ntol}$ or ${\epsilon}$ very high. If we truly require the top collection (as opposed to collections within a thick band near the top), we have the standard knapsack problem and there are far better algorithms than CCSearch.

— Prepare for Search —

We can think of our collection as a selection of ${n_i}$ items from each primary-feature group ${i}$ (we’ll just refer to it as “group” for short). Let’s say that ${m_i}$ is the total number of items in the ${i^{th}}$ group. Some of the same items may be available to multiple groups, but our collection must consist of distinct items. So there are ${K}$ bins, the number of primary feature groups. For the ${i^{th}}$ such group, we select ${n_i}$ items from amongst the available ${m_i}$ post-cull items.

For the search itself we iterate by group, then within each group. Conceptually, this could be thought of as a bunch of nested loops from left group to right group. In practice, it is best implemented recursively.

We can precompute certain important information:

• Each group has ${C_i= {m_i\choose n_i}}$ possible selections. We can precompute this easily enough.
• We also can compute ${RC_i= \Pi_{j\ge i} C_j}$. I.e. the product of the total combinations of this group and those that come after.
• ${BV_i}$ is the sum of the top ${n_i}$ values in the group. This is the best we can do for that group, if cost is no concern.
• ${RBV_i}$ is ${\sum_{j>i} BV_j}$. I.e., the best total value we can get from all subsequent groups.
• ${LC_i}$ is the sum of the bottom ${n_i}$ costs in the group. This is the cheapest we can do for that group, if value is no concern.
• ${RLC_i}$ is ${\sum_{j>i} LC_j}$. I.e., the cheapest we can do for all subsequent groups, if value is no concern.
• Sorted lists of the items by value and by cost.
• Sorted lists of ${n_i}$-tuples of distinct items by overall value and by overall cost. I.e., for each group, sorted lists of all combos of ${n_i}$ choices. These generally are few enough to keep in memory.
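
The precomputation above can be sketched as follows. This is a simplified illustration assuming items are (value, cost) pairs; the function name is invented for the example, and only the value-sorted tuple lists are built (the cost-sorted lists are analogous):

```python
from itertools import combinations
from math import comb

# Sketch of the per-group precomputation. `groups` is a list of lists of
# (value, cost) items; `n` is the list of selection counts n_i.

def precompute(groups, n):
    K = len(groups)
    C  = [comb(len(g), n[i]) for i, g in enumerate(groups)]     # C_i
    BV = [sum(sorted((v for v, _ in g), reverse=True)[:n[i]])   # BV_i: top n_i values
          for i, g in enumerate(groups)]
    LC = [sum(sorted(c for _, c in g)[:n[i]])                   # LC_i: bottom n_i costs
          for i, g in enumerate(groups)]
    # Suffix quantities over the groups at or after i:
    RC  = [0] * K   # product of C_j for j >= i
    RBV = [0] * K   # sum of BV_j for j > i
    RLC = [0] * K   # sum of LC_j for j > i
    rc, rbv, rlc = 1, 0, 0
    for i in range(K - 1, -1, -1):
        rc *= C[i]
        RC[i] = rc
        RBV[i], RLC[i] = rbv, rlc
        rbv += BV[i]
        rlc += LC[i]
    # All n_i-tuples of distinct items, sorted by decreasing total value:
    tuples_by_value = [sorted(combinations(g, n[i]),
                              key=lambda t: -sum(v for v, _ in t))
                       for i, g in enumerate(groups)]
    return C, RC, BV, RBV, LC, RLC, tuples_by_value
```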

The search itself depends on two key iteration decisions. We discuss their effects on efficiency below.

• Overall, do we scan the groups from fewest to most combinations (low to high ${C_i}$) or from most to fewest (high to low ${C_i}$)?
• Within each group, do we scan the items from lowest to highest cost or from highest to lowest value? (Of the four possible sort orders, the other two – highest-to-lowest cost and lowest-to-highest value – make no sense for pruning. It must be one of these.)

Based on our choice, we sort our groups, initialize our counters, and begin.

— Search —

We’ll describe the search recursively.

Suppose we find ourselves in group ${i}$, and are given the cost ${c}$ and value ${v}$ so far (from the selections for groups ${1\dots i-1}$). We also are given ${vmin}$, the lowest collection value we will consider. We’ll discuss how this is obtained shortly.

We need to cycle over all ${C_i}$ choices for group ${i}$. We use the pre-sorted list of ${n_i}$-tuples, sorted by value or by cost depending on our 2nd choice above. I.e., we iterate over the possible selections of ${n_i}$ items in decreasing order of overall value or increasing order of overall cost.

We now discuss the individual iteration. For each step we compute the following:

• ${mc}$ is the minimum cost of all remaining groups (${i+1}$ onward). This is the lowest cost we possibly could achieve for subsequent groups. It is the pre-computed ${RLC_i}$ from above.
• ${mv}$ is the maximum value of all remaining groups (${i+1}$ onward). This is the highest value we possibly could achieve for subsequent groups. It is the pre-computed ${RBV_i}$ from above.
• ${c_i}$ is the cost of our current selection for group ${i}$
• ${v_i}$ is the value of our current selection for group ${i}$

Next we prune if necessary. There are 2 prunings, the details of which depend on the type of iteration.

If we’re looping in increasing order of cost:

• If ${c+c_i+mc>S}$ then there is no way to select from the remaining groups and meet the cost cap. Worse, all remaining iterations within group ${i}$ will be of equal or higher cost and face the same issue. So we prune both the current selection and all remaining ones. Practically, this means we terminate the iteration over combinations in group ${i}$ (for this combo of prior groups).
• If ${v+v_i+mv<vmin}$ then there is no way to select a high enough value collection from the remaining groups. However, it is possible that other iterations may do so (since we’re iterating by cost, not value). We prune just the current selection, and move on to the next combo in group ${i}$ by cost.

If on the other hand we’re looping in decreasing order of value, we do the opposite:

• If ${v+v_i+mv<vmin}$ then there is no way to select a high enough value collection from the remaining groups. Worse, all remaining iterations within group ${i}$ will be of equal or lower value and face the same issue. So we prune both the current selection and all remaining ones. Practically, this means we terminate the iteration over combinations in group ${i}$ (for this combo of prior groups).
• If ${c+c_i+mc>S}$ then there is no way to select from the remaining groups and meet the cost cap. However, it is possible that other iterations may do so (since we’re iterating by value, not cost). We prune just the current selection, and move on to the next combo in group ${i}$ by value.

If we get past this, our combo has survived pruning. If ${i}$ isn’t the last group, we recursively call ourselves, but now with cost ${c+c_i}$ and value ${v+v_i}$ and group ${i+1}$.
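
Under the same (value, cost) item representation as before, the recursion with both prunings might look like the following sketch. It iterates each group’s tuples in decreasing order of value and, for simplicity, treats ${vmin}$ as fixed; the real search updates ${vmin}$ as better collections are found:

```python
# Sketch of the recursive search, iterating each group's n_i-tuples in
# decreasing order of total value. RBV[i]/RLC[i] are the precomputed
# best-value / lowest-cost sums for groups after i; S is the cost cap.

def search(i, c, v, chosen, tuples_by_value, RBV, RLC, S, vmin, results):
    if i == len(tuples_by_value):              # past the last group: complete
        results.append((v, c, list(chosen)))
        return
    for combo in tuples_by_value[i]:           # decreasing total value
        vi = sum(val for val, _ in combo)
        ci = sum(cost for _, cost in combo)
        if v + vi + RBV[i] < vmin:
            break                              # later combos are worth even less
        if c + ci + RLC[i] > S:
            continue                           # too costly; a later combo may fit
        chosen.append(combo)
        search(i + 1, c + ci, v + vi, chosen,
               tuples_by_value, RBV, RLC, S, vmin, results)
        chosen.pop()
```

The `break` implements the "terminate the iteration" pruning; the `continue` prunes just the current selection.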

If on the other hand, we are the last group, then we have a completed collection. Now we must test it.

If we haven’t put any protections against the same item appearing in different slots (if it is available to multiple groups), we must test for this and discard the collection if a duplicate appears. Finally, we must test the collection against our ancillary constraints; if it violates any, it must be discarded. What do we do with collections that pass muster? That depends. Generally, we want to limit the number of collections returned to some number ${NC}$, so we maintain a value-sorted list of our top collections in a queue-like structure.

If our new collection exceeds all others in value, we update ${vmax}$, the best value realized. This also resets ${vmin= vmax(1-\delta)}$ for some user-defined tolerance ${\delta}$. We then must drop any already-accumulated collections whose values fall below the new ${vmin}$.

I.e., we keep at most ${NC}$ collections, and each must have value within a fraction ${\delta}$ of the best.
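
A minimal sketch of such a queue-like structure, using a min-heap (the class and method names are illustrative, not the actual implementation):

```python
import heapq

# Keep at most NC collections, each with value within a fraction delta of the
# best seen. New best values raise vmin = vmax * (1 - delta), which may evict
# previously accumulated collections.

class TopCollections:
    def __init__(self, NC, delta):
        self.NC, self.delta = NC, delta
        self.vmax = float('-inf')
        self.vmin = float('-inf')
        self.heap = []                      # min-heap of (value, collection)

    def offer(self, value, collection):
        if value < self.vmin:
            return                          # below the value band; discard
        if value > self.vmax:
            self.vmax = value
            self.vmin = self.vmax * (1 - self.delta)
            while self.heap and self.heap[0][0] < self.vmin:
                heapq.heappop(self.heap)    # evict collections below new vmin
        heapq.heappush(self.heap, (value, collection))
        if len(self.heap) > self.NC:
            heapq.heappop(self.heap)        # keep only the NC best
```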

And that’s it.

— Tuning —

Let’s list all the user-defined tunable parameters and choices in our algorithm:

• What is the individual cull tolerance ${\epsilon\in [0,\infty]}$?
• What is ${ntol}$, the number of extra strictly-better items we require in a group during the individual cull?
• Do we scan the groups from fewest to most combinations or the other way?
• Within each group, do we scan the items from lowest to highest cost or from highest to lowest value?
• What is the maximum number of collections ${NC>0}$ we report back (or do we keep them all)?
• What is the collection value tolerance ${\delta\in [0,1]}$?

Clearly, ${NC}$ and ${\delta}$ guide how many results are kept and returned. High ${NC}$ and high ${\delta}$ are burdensome in terms of storage. If we want just the best result, either ${NC=1}$ or ${\delta=0}$ will do. As mentioned, ${\epsilon}$ and ${ntol}$ have specific uses related to the behavior of the individual cull. What about the sort orders?

The details of post-cull search performance depend heavily on the primary partition structure and cost distribution, as well as on our two search-order choices. The following is a simple benchmark comparison (using the same data and the ${10}$-player-collection Classic Fantasy Baseball tournament structure mentioned above).

| Group Order | Combos Order | Time | Analyzed |
|---|---|---|---|
| Value high-to-low | high-to-low | 12.1s | 7.9MM |
| Value high-to-low | low-to-high | 3.4s | 1.5MM |
| Cost low-to-high | high-to-low | 69.7s | 47.5MM |
| Cost low-to-high | low-to-high | 45.7s | 18.5MM |

Here, “Analyzed” refers to the number of collections which survived pruning and were tested against the ancillary constraints. The total number of combinations pruned was far greater.

Of course, these numbers mean nothing in an absolute sense. They were run with particular test data on a particular computer. But the relative values are telling. For these particular conditions, the difference between the best and worst choice of search directions was over ${20x}$. There is good reason to believe that, for any common tournament structure, the relative performance of the four choices would be consistent, and would likely resemble the results here. Why? The fastest option allows the most aggressive pruning early in the process, which is why so few collections needed to be analyzed.

# How 22% of the Population can Rewrite the Constitution

This is a scary piece in which I analyze precisely how many voters would be required to trigger a Constitutional Convention and ratify any amendments it proposes. Because the 2/3 and 3/4 requirements in the Constitution refer to the number of States involved, the smaller States have a disproportionate effect. In Congress, the House counterbalances this – but for a Constitutional Convention, there is no such check.

# A Travel-Time Metric

Especially in urban areas, two locations may be quite close geographically but difficult to travel between. I wondered whether one could create a map where, instead of physical distances, points are arranged according to some sort of travel-time between them. This would be useful for many purposes.

Unfortunately, such a mapping is mathematically impossible in general (for topological reasons). But so is a true map of the Earth, hence the need for Mercator or other projections. The first step in constructing a useful visualization is to define an appropriate Travel-Time metric function. Navigation systems frequently compute point-to-point values, but they are not bound by the need to maintain a consistent set of Travel Times between all points. That is our challenge – to construct a Travel-Time metric.

# Inflation, Up Close and Personal

It often seems like the inflation figures touted by officials and economists have little connection with the real world. There are a number of reasons for this, some technical and some political. But there is a deeper problem than the means and motives for calculating any specific index. The issue is that any aggregate number is likely to deviate significantly from one’s personal experience. Each of us saves for different reasons and spends in different ways. Without taking these specific choices into account, we cannot accurately represent or protect against the inflation that we individually encounter. This paper elaborates on this idea and explains how each of us can identify the relevant components of inflation, and best hedge our savings.

# A Proposal for Tax Transparency

Taxes necessarily are unpopular. They represent an economic burden and do not yield obvious benefits. Though some make a show of embracing their civic duty, few voluntarily would undertake to do so if given a choice. The criminal penalties attached to evasion and the substantial efforts at enforcement are evidence of this. Nonetheless, there is a tie between one’s sense of social responsibility and the palatability of taxes. A perception that our sacrifice benefits ourselves, our loved ones, and society as a whole can mitigate the pain it causes. Conversely, if our hard earned money vanishes into an opaque hole of possible waste and corruption, resentment is engendered.

The taxes paid by an individual represent a substantial sum to him, but a mere pittance to the government. If there is no accounting for this money, then it appears to have been squandered. This assumption is natural, as the government is known to be a notorious spendthrift. Nor does the publication of a voluminous, incomprehensible, and largely euphemistic budget lend transparency. Even if it were perfectly accurate, and every taxpayer troubled to read it, the human mind isn’t wired to accurately grasp the relationships between large numbers. Thirty thousand dollars in taxes is minuscule compared to a billion or ten billion or a hundred billion, and it makes little difference which of those quantities is involved. Therefore an effort to elicit confidence through a full disclosure of expenditures would be ill fated even if well intentioned. However it would serve to enforce accountability, and should be required in addition to any other measures employed. If nothing else, this would allow watchdog organizations to analyze government behavior and identify waste.

So how could we restore individual faith in the system of government expenditure? There is in fact a way to do so and encourage fiscal responsibility at the same time. Individuals like to know where their money went. A successful tactic of certain charities is to attach each donation to a specific child or benefit. A person feels more involved, is more likely to contribute, and is better satisfied with their contribution if it makes a tangible difference. We need to know that we aren’t wasting our money.

The pain of an involuntary contribution may be assuaged through a similar approach. It may even transform into pride. There will be individuals who remain resentful, just as there are those who do not donate to charity. And some people simply don’t like being forced to do anything. However the majority of taxpayers likely will feel better if they know precisely where their money went.

We propose that an exact disposition of each individual’s taxes be reported to him. At first glance, this may seem infeasible. Funds are drawn from pooled resources rather than attached to such specific revenue streams. However, what we suggest can be accomplished without any change in the way the government does business, and our reporting requirement would not prove onerous. The federal, state, and most local governments already meticulously account for expenses – even if they do not exhibit particular restraint in incurring them. They must do so for a variety of legal and regulatory reasons, and records generally exist even if not publicly available.

Individual tax contributions need only be linked to expenditures at the time of reporting, but this must be done consistently. To that end, expenses could be randomly matched with the taxes that paid for them. This could be done each February or March for the prior year. We simply require that each dollar of taxes collected be assigned to one and only one dollar spent and vice versa. If there is a surplus, then some taxpayers would receive an assignment of “surplus” and if there is a deficit then certain expenses will be assigned a non-tax source – such as borrowed money or a prior year’s surplus. If a taxpayer’s contribution has been marked as surplus, then his true assignment is deferred until such time as the surplus is spent (again using a lottery system for matching). If it covers a prior year’s deficit then it is matched against that year’s excess expenses. The point is that every dollar of taxpayer money eventually is matched against a real expense.
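
As a toy illustration of the lottery matching (the names here are entirely hypothetical, and a real system would operate on aggregated records rather than individual dollars):

```python
import random

# Toy sketch of the proposed matching: each tax dollar is randomly paired with
# exactly one expense dollar. If taxes exceed expenses, unmatched tax dollars
# are marked "surplus"; if expenses exceed taxes, unmatched expenses are
# assigned a non-tax source (borrowing or a prior year's surplus).

def match_taxes_to_expenses(tax_dollars, expense_dollars, seed=0):
    rng = random.Random(seed)                 # fixed seed for reproducibility
    taxes = tax_dollars[:]
    rng.shuffle(taxes)                        # the lottery
    n = min(len(taxes), len(expense_dollars))
    assignment = list(zip(taxes[:n], expense_dollars[:n]))
    surplus = taxes[n:]                       # taxpayer dollars deferred to surplus
    unfunded = expense_dollars[n:]            # expenses funded from non-tax sources
    return assignment, surplus, unfunded
```

The one-to-one pairing guarantees that every tax dollar is eventually matched against a real expense, as the proposal requires.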

For example, one taxpayer’s report could read “10K toward the construction of 121 example plaza, New York,” or better still “3K used for the purchase of air conditioning units, 5K for ductwork, and 2K for electrical routing for work done at XXX and billed to YYY contracting on ZZZ date. Work completed on AAA date.” An individual receiving such a report would feel a sense of participation, accountability, and meaningful sacrifice.

It may seem that few people would feel pride in defraying the cost of mundane items, but such an objection is misguided. These are real expenses and represent a more comprehensible and personal form of involvement than does a tiny fraction of an abstract budget. If an expense would appear wasteful, pointless, or excessive, then it is appropriate to question it.

What of the pacifist whose money goes toward weapons or the religious individual whose taxes pay for programs that contravene his beliefs? It may seem unfair to potentially violate a taxpayer’s conscience by assigning him an unpalatable expense. But no exceptions should be made. Their money is being spent in the manner described. Whether their contribution is diluted or dedicated, they live in a society that violates their ideals and they should vote accordingly.

It is our belief that a feeling of involvement in the operation of government, along with the requisite increase in transparency, would alleviate much of the general perception of irresponsibility, excess, and unaccountability. An individual may object to his relative contribution, but the means of its use would no longer be inscrutable. This could go a long way toward restoring faith in our government.

# Probabilistic Sentencing

In most real situations, we must make decisions based on partial information. We should neither allow this uncertainty to prevent action nor pretend to perfect certainty in taking action. Yet in one area with a great impact on an individual’s freedom and well-being, we do just that. Judges and juries are required to return an all-or-nothing verdict of guilt. They may not use their experience, intelligence, and judgment to render a level of confidence rather than a mere binary choice.

I propose adopting a sentencing mechanism based on a probabilistic assessment of guilt or innocence. This allows jurists to better express their certainty or lack thereof than does our traditional all-or-nothing verdict. The natural place to reflect such an imputed degree of guilt is in the sentencing phase. I discuss the implications of such a system as well as certain issues with implementation.

# The Requirements of Effective Democracy

The current popular notion of democracy is something to the effect of “the will of the people is effected through voting.” Though this is a far cry from the original meaning of the word or its various incarnations through history, let’s take it as our working definition. It certainly reflects the basic approach taken in the United States. Though often confounded in the public mind with a vague cultural notion of freedom, it only conforms to this when taken together with certain other principles – such as explicit protections of individual liberties.

This aside, let us consider the components necessary for democracy. To do so, we must make some supposition regarding the ability of an individual voter to render a decision. We assume that every voting individual, regardless of aptitude, is capable of determining their purpose in voting. We say “purpose” rather than “criterion” because we refer to a moral choice, what they hope to achieve by voting. This is far more basic and reliable than any specific set of issues or criteria. A person knows their value system, even if they can not or do not have the means of accurately expressing it. The desires to improve the country, foster religious tenets, create a certain type of society, support the weak, advance one’s own interest, protect a specific right, or promote cultural development cannot easily be manipulated or instilled. While it is possible to create a sense of urgency or attach specific issues or criteria to these values, one’s purpose itself is a reflection of that individual’s view of society and their relationship with it. To meaningfully participate in the democratic process, an individual must translate this purpose into particular votes in particular elections. Note that a purpose may embody a plurality of ideals rather than any specific one (such as in the examples above).

It is the function of democracy to proportionately reflect in our governance and society the individual purposes of the citizenry. A number of components are involved, any of whose absence undermines its ability to do so. While the consequent process may retain all the trappings of a democracy, it would not truly function as one. Though it could be argued that such imperfection is natural and speaks to the shortcomings of the participants rather than a failing of the institution itself, such a claim is misguided. Regardless of cause, if the people’s will is not accurately reflected then the society does not conform to our popular notion of a democracy. Whether another system would perform better is beyond our present consideration. We simply list certain key requirements for a democracy to function as we believe it should, and allow the reader to decide the extent to which our present society satisfies them.

Note that a particular government need not directly represent the interest of every citizen, but its formation and maintenance must meaningfully do so. In some loose sense this means that (1) the effect of a citizen is independent of who that citizen is, and (2) the opinion of a majority of citizens is reflected in the actions of the government. These are neither precise requirements nor ones satisfied in practice, particularly in representative democracies. However, they reflect our vague cultural concept of democracy.

The following are the major components necessary for a democracy to function as we believe it should.

# Choice

Once a voter has decided upon a set of positions that reflect their purpose, they must have a means of voting accordingly. There must be sufficient choice to allow an individual to embody those positions in their vote. Furthermore, the choice must be real. Marginal candidates with no chance of winning may be useful for registering an opinion, but they do not offer true participation in the election. If there are only two major candidates then the voter’s entire purpose must be reduced to a binary decision. Only if it happens to be reflected in one of the choices at hand would their view be expressible.

If there are two major candidates and they differ only on a few issues that are of no consequence to a particular individual, then that person cannot express his purpose by voting. For example, if a voter feels very strongly about issue X, and both major candidates have the same opposing position on that issue, then he cannot make his will known in that election. It may be argued that the presence of small candidates serves exactly this purpose and that, if sentiment is strong enough, one could prevail. This is not borne out by history. In a two-party system, a voter is reduced to a binary choice between two bundled sets of positions. As a more extreme example, suppose there are several major issues and the candidates agree on one of them. Even if every single person in the country holds the opposite position on that issue, their will still cannot be effected through that election. If there were no other important issues, then one or the other candidate surely would take the popular position – or a third-party candidate would do so and prevail. However, in the presence of other issues, this need not be the case.

Finally, there must be some reason to believe that the actions of a candidate once elected will reflect their proclaimed positions. Otherwise, it will be years before the voter can penalize them. Without such an assurance – and history certainly does not offer it – a nominal choice may not be a real one. The people then play the part of a general who cannot move his troops, however much he may threaten or cajole them.

# Information

A well-intentioned individual must have a way of locating and obtaining information whose accuracy is not in question or, if uncertain in nature, is suitably qualified. Voters must have access to accurate and sufficient information. In order to translate their purpose into a vote, an individual must be able to determine the choices available and what they actually entail. Moreover, he must be able to determine the relative importance of different issues in effecting his purpose. Fear mongering, inaccurate statistics, and general misinformation could lead him to believe that a particular issue ‘X’ is of greater import than it truly is. Instead of focusing on other issues ‘Y’ and ‘Z’ which are more germane to his purpose, he may believe that dealing with issue ‘X’ is the most important step toward it. Similarly, if the views of candidates are obfuscated or misrepresented, or the significance of events is disproportionately represented, a person may be denied an accurate translation of his purpose into a vote. Even a perfectly rational and capable voter cannot make a suitable decision in the absence of information or in the presence of inaccurate information. This said, not every vehicle should be expected to provide such information. If a person prefers to listen to a news station that reports with a particular bias, that is not the fault of the information provider – unless it does so subtly and pretends otherwise.

# Aptitude

A voter must have the intelligence, critical reasoning, motivation, and general wherewithal to seek out accurate information, detect propaganda or advertising, and make an informed decision. Their perceived interest must coincide with their true interest, and their purpose must be accurately represented in the choice they make. It may seem that we are advocating the disenfranchisement of a segment of the population – individuals who, while failing to meet some high standard, have valid purposes of their own which they too have the right to express. This is not the case, nor is our standard artificial. We are merely identifying a necessary ingredient, not endorsing a particular path of action. Moreover, the argument that they would be deprived of a right is a specious one. Such individuals are disenfranchised whether or not they physically vote. They lack the ability to accurately express their purpose, and easily are misled, confused, or manipulated. At best they introduce noise; at worst their votes may systematically be exploited. A blind person may have a valid destination, but they cannot drive there.

# Access

Voters must be willing and able to participate. They cannot be blocked by bureaucratic, economic, legal, or practical obstacles – especially in a way that introduces a selection bias. Their votes must be accurately tallied and their decision implemented.

# Structure

Not only must the structure of the democratic process treat all voters equally, their de facto influence must be equal. Depending on the nature of the voting system, certain participants may have no real influence even if the system as a whole treats them symmetrically. A simple example would be a nation consisting of four states with blocks of 3, 3, 2, and 1 votes, where each block must vote as a unit. Regardless of the pattern of voting, citizens in the state with a single vote can never affect the outcome. If that vote is flipped, the majority always remains unchanged. This particular topic is addressed in another paper.
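
The four-state example is easy to verify by brute force. The following sketch (written for this post, not from the paper mentioned) enumerates all ${2^4}$ patterns of block votes and checks whether flipping a given block ever changes the majority winner:

```python
from itertools import product

# Blocks of 3, 3, 2, and 1 votes, each voting as a unit. With 9 total votes,
# 5 or more wins and no ties are possible.

blocks = [3, 3, 2, 1]

def winner(votes):                 # votes: tuple of +1/-1, one per block
    total = sum(w * v for w, v in zip(blocks, votes))
    return 1 if total > 0 else -1

def ever_pivotal(i):
    """Does flipping block i's vote ever change the outcome?"""
    for votes in product((1, -1), repeat=len(blocks)):
        flipped = list(votes)
        flipped[i] = -flipped[i]
        if winner(votes) != winner(tuple(flipped)):
            return True
    return False
```

Running this confirms the claim: the weight-1 block is never pivotal, while every other block is pivotal in at least one voting pattern.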

There certainly are many other technical and procedural requirements. However those listed above are critical components that directly determine a voter’s ability to express their will through the democratic process. In their absence, voters could be thwarted, manipulated, misled, or confused. The purpose of democracy isn’t to tally votes, but to register the will of the people. Without the choice and tools to express this will, the people can have nothing meaningful to register.

# A System for Fairness in Sentencing

We often hear of cases that offend our sense of fairness – excessive sentences, minor crimes that are punished more severely than serious crimes, or two equivalent crimes that are punished very differently. Rather than attempt to solve a politically and legally intractable problem, we ask a more theoretical question: whether an individual can assign sentences in a way that seems reasonable and consistent to him.  Our system is a means of doing so.  We offer a simple algorithmic method that could be used by an individual or review board to ensure that sentences meet a common-sense standard of consistency and proportionality.

We intend to offer a less mathematical and more legally-oriented version of this article in the near future.

# Why Voting Twice is a Good Thing

We should require that every bill be ratified by a second vote, one year after its original passage. It goes into effect as normal, but automatically expires if not ratified at the appropriate time.

Sometimes foolish legislation is passed in the heat of the moment or due to short term pressures. Perhaps there is an approaching election, or the media has flamed popular hysteria over some issue, or there is a demand for immediate action with no time for proper deliberation, or an important bill is held hostage to factional concerns, or legislators are falling all over one another to respond with a knee jerk reaction to some event. There are many reasons why thoughtful consideration may succumb to the influences of the moment. The consequences of such legislation can be real and long lasting. Law enforcement resources may be diverted or rights suppressed or onerous demands made on businesses. It is true that legislation may be repealed, but this requires an active effort. The same forces that induced the original legislation, though weakened by time, may threaten to damage anyone who takes the initiative to rectify it.

Here is a simple proposal that could address this problem: every piece of legislation should be voted on a second time, one year after its original passage. This vote would serve to ratify it. By making this mandatory, the burden of attempted repeal is not placed on any individual. Rather, legislators need simply change their vote. This is less likely to create a fresh political tempest, the issue’s emotional fury long spent. When an act is passed, it goes into effect as normal. However, one year from that date, it must be ratified or it will expire. Obviously this should only apply to bills for which such ratification is meaningful; there would be no point in revoting on the prior year’s budget after the money has been spent. By requiring a ratification vote, legislators are given time to breathe, sit back, and consider the ramifications of a particular piece of legislation. The intervening year also may provide some flavor of its real effect. A similar approach could be used at all levels of government.

# The Optics of Camera Lens Stacks (Program)

In another post, I discussed the mathematical calculation of optical parameters for a configuration of stacked lenses and camera components. As is evident from the example worked out there, the procedure is somewhat tedious. Instead, it is better to spend twice the time writing a program to do it. Fortunately I already did this and offer it to you, gentle reader, to use and criticize. I expect no less than one rabid rant about some aspect that doesn’t pedantically conform to the IEEE standard. This is working code (and has been checked over and tested to some extent). I use it. However, it is not commercial grade and was not designed with either efficiency or robustness in mind. It is quick and dirty – but graciously so.

Think of this as a mine-shaft. You enter at your own risk and by grace of the owner. And if you fall, there won’t be non-stop human interest coverage on 20 TV channels as rescue workers try to extract you. That’s because you’re not a telegenic little kid and this is a metaphor. Rather, you will end up covered in numeric slime of dubious origin. But I still won’t care.

All this said, I do appreciate constructive criticism and suggestions. Please let me know about any bugs. I don’t plan to extensively maintain this program, but I will issue fixes for significant bugs.

The program I provide is a command-line unix (including MacOS) utility. It should be quite portable, as no funky libraries are involved. The program can analyze a single user-specified configuration or scan over all possible configurations from an inventory file. In the latter case, it may restrict itself to configurations achievable with the adapters on hand, or consider all configurations regardless of adapters. It also may apply a filter to limit the output to “interesting” cases such as very high magnification, very wide angle, or high telephoto.

The number of configurations can be quite large, particularly when many components are available, there are no constraints, and we account for the large number of focal/zoom choices for each given stack. For this reason, it is best to constrain scans to a few components in an inventory (by commenting out the components you don’t need). For example, if one has both 10 and 25mm extension tubes then try with only one. If this looks promising, restrict yourself to the components involved and uncomment the 25mm as well.

Either through the summary option or the use of a script to select out desirable configurations, the output may be analyzed and used for practical decisions. For example, if a 10x macro lens is needed and light isn’t an issue, then a 1.4X telextender followed by a 200mm zoom followed by a reversed 28mm will do the trick. It will have a high f-stop, but if those components are already owned and a low f-stop isn’t needed, it may be a far more cost-effective option than a dedicated ultra-macro lens (there aren’t any at 10X, but a 5X one is available).

For simple viewing of the results, I recommend the use of my “tless” utility. This isn’t a shameless plug. I wrote tless for myself, and I use it extensively.

Go to Google Code Archive for Project

# The Optics of Camera Lens Stacks (Analysis)

This first appeared on my tech blog. I like to play around with various configurations of camera lenses.  This is partly because I prefer to save money by using existing lenses where possible, and partly because I have a neurological condition (no doubt with some fancy name in the DSM-IV) that compels me to try to figure things out. I spent 5 years at an institute because of this problem and eventually got dumped on the street with nothing but a PhD in my pocket.  So let this be a warning: keep your problem secret and don’t seek help.

A typical DSLR (or SLR) owner has a variety of lenses.  Stacking these in various ways can achieve interesting effects, simulate expensive lenses (which may internally be similar to such a stack), or obtain very high magnifications.  Using 3 or 4 lenses, a telextender, a closeup lens, and maybe some extension rings (along with whatever inexpensive adapter rings are needed), a wide variety of combinations can be constructed.  In another entry, I’ll offer a companion piece of freeware that enumerates the possible configurations and computes their optical properties.

In the present piece, I examine the theory behind the determination of those properties for any particular setup.  Given a set of components (possibly reversed) and some readily available information about them and the camera, we deduce appropriate optical matrices, construct an effective matrix for the system, and extract the overall optical properties – such as focal length, nearest object distance, and maximum magnification.  We account for focal play and zoom ranges as needed.
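The core of the method is easy to illustrate. Here is a minimal sketch in the thin-lens idealization (the paper handles real, thick camera lenses; the focal lengths and 5mm gap below are invented for the example): multiply the ray-transfer (ABCD) matrices of the elements and read the effective focal length off the system matrix.

```python
import numpy as np

def thin_lens(f):
    # ABCD ray-transfer matrix of a thin lens of focal length f (mm)
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def gap(d):
    # ABCD matrix for free propagation over a distance d (mm)
    return np.array([[1.0, d], [0.0, 1.0]])

def effective_focal_length(*elements):
    """Pass elements in the order light encounters them; matrices
    compose right-to-left, and f_eff = -1/C of the system matrix."""
    M = np.eye(2)
    for el in elements:
        M = el @ M
    C = M[1, 0]
    return -1.0 / C

# a reversed 28mm stacked 5mm in front of a 200mm prime
f = effective_focal_length(thin_lens(28.0), gap(5.0), thin_lens(200.0))
print(f"effective focal length: {f:.2f} mm")  # about 25.1 mm
```

This reproduces the familiar two-lens formula P = P1 + P2 - d·P1·P2 for the combined power; the paper’s contribution is deducing realistic matrices for actual camera lenses from their published specifications, which the thin-lens toy above sidesteps.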

The exposition is self-contained, although this is not a course on optics and I simply list basic results.  Rather, I focus on the application of matrix optics to real camera lenses.  I also include a detailed example of a calculation.

As far as I am aware, this is the only treatment of its kind.  Many articles discuss matrix methods or the practical aspects of reversing lenses for macro photography.  However, I have yet to come across a discussion of how to deduce the optical matrix for a real camera lens from its readily available specifications, or conversely how to recover those specifications from a matrix.

After reading the piece, you may wonder whether it is worth the effort to perform such a calculation.  Wouldn’t it be easier to simply try the configurations?  To modify the common adage, a month on the computer can often save an hour in the lab.  The short answer is yes and no.  No, I’m not an economist; why do you ask?

If you have a specific configuration in mind, then trying it is easier.  However, if you have a set of components and want to determine which of the hundreds of possible configurations are candidates for a given use (just because the calculation works doesn’t mean the optical quality is decent), or which additional components one could buy to make best use of each dollar, or which adapter rings are needed, or which end of the focal ranges to use, then the calculation is helpful.  Do I recommend doing it by hand?  No.  I even used a Perl script to generate the results for the example.  As mentioned, a freeware program to accomplish this task in a more robust manner will be forthcoming.  Think of the present piece as the technical manual for it.

# Tless Table Viewer

Over the years, I’ve found delimited text files to be an easy way to store or output small amounts of data. Unlike SQL databases, XML, or a variety of other formats, they are human readable. Many of my applications and scripts generate these text tables, as do countless other applications.

Often there is a header row and a couple of columns that would best be kept fixed while scrolling. One way to view such files is to pull them into a spreadsheet, parse them, and then split the screen. This is slow and clumsy, and updates are inconvenient to process. Instead, I wanted an application like the unix utility ‘less’ but with an awareness of table columns. The main requirements were that it be lightweight (i.e. keep minimal content in memory and start quickly), parse a variety of text file formats, provide easy synchronized scrolling of columns and rows, and allow horizontal motion by columns.

Strangely, no such utility existed. Even Emacs and vi don’t provide an easy solution. So I wrote my own unix terminal application. I tried to keep the key mappings as true to “less” (and hence vi) as possible. The code is based on ncurses and fairly portable. The project is hosted on Google Code and is open source.
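The parse-and-align half of the problem is simple enough to sketch. The toy below (Python for brevity; tless itself is an ncurses C program, and this sketch does none of the scrolling or column freezing) shows the basic idea of padding each column to its widest cell, with an invented sample table:

```python
import csv
import io

# hypothetical tab-delimited table, header row first
sample = "name\tfocal\tmax_ap\nnifty fifty\t50\t1.8\nbig zoom\t200\t4.0\n"

def columnize(text, delimiter="\t"):
    """Parse a delimited table and pad every column to its widest cell,
    so rows stay vertically aligned when printed in a terminal."""
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    return [" | ".join(cell.ljust(w) for cell, w in zip(row, widths))
            for row in rows]

for line in columnize(sample):
    print(line)
```

A real viewer must also handle ragged rows, multiple delimiter conventions, and files too large to hold in memory, which is where the “lightweight” requirement starts to bite.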

Go to Google Code Archive for this Project

# Influence in Voting

Have you ever wondered what really is meant by a “deciding vote” on the Supreme Court or a “swing State” in a presidential election? These terms are bandied about by the media, but their meaning isn’t obvious. After all, every vote is equal, isn’t it? I decided to explore this question back in 2004 during the election year media bombardment. What started as a simple inquiry quickly grew into a substantial project.

The result was an article on the subject, which I feel codifies the desired understanding. The paper contains a rigorous mathematical framework for block voting systems (such as the electoral college), a definition of “influence”, and a statistical analysis of the majority of elections through 2004.

The work is original, but not necessarily novel. Most if not all has probably been accomplished in the existing literature on voting theory. This said, it may be of interest to a technical individual interested in the subject. It is self-contained, complete, and written from the standpoint of a non-expert in the field. For those who wish to go further, my definition of “influence” is related to the concept of “voting power” in the literature (though I am unaware of any analogue to my statistical definition).
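To give a flavor of what “voting power” means in that literature, here is a small sketch computing the Banzhaf index, one standard measure (not necessarily the definition of “influence” used in my article): a voter’s power is their share of the coalitions in which their vote is critical.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Banzhaf power index for a weighted voting game. A voter is
    'critical' in a winning coalition if removing them makes it lose;
    power is each voter's share of all such critical appearances."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings]

# toy example: weights 50, 49, 1 with a majority quota of 51;
# the 1-vote bloc turns out as powerful as the 49-vote bloc
print(banzhaf([50, 49, 1], 51))  # [0.6, 0.2, 0.2]
```

The toy example makes the “every vote is equal” intuition fail vividly: raw vote weight and actual power can come apart, which is exactly the distinction terms like “deciding vote” are groping toward.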

# Ye Olde Physics Papers

Once upon a time there was a physicist. He was productive and happy and dwelt in a land filled with improbably proportioned and overly cheerful forest creatures. Then a great famine of funding occurred and the dark forces of string theory took power and he was cast forth into the wild as a heretic. There he fought megalomaniacs and bureaucracies and had many grand adventures that appear strangely inconsistent on close inspection. The hero that emerged has the substance of legend.

But back to me. I experienced a similar situation as a young physicist, but in modern English and without the hero bit.   However, once upon a time I DID write physics papers. This is their story…

My research was in an area called Renormalization Group theory (for those familiar with the subject, that’s the “momentum-space” RG of Quantum Field Theory, rather than the position-space version commonly employed in Statistical Mechanics – although the two are closely related).

In simple terms, one could describe the state of modern physics (then and now) as centering around two major theories: the Standard Model of particle physics, which describes the microscopic behavior of the electromagnetic, weak, and strong forces, and General Relativity, which describes the large scale behavior of gravity. These theories explain all applicable evidence to date, and no prediction they make has been excluded by observation (though almost all our effort has focused on a particular class of experiment, so this may not be as impressive as it seems). In this sense, they are complete and correct. However, they are unsatisfactory.

Their shortcomings are embodied in two of the major problems of modern physics (then and now): the origin of the Standard Model and a unification of Quantum Field Theory with General Relativity (Quantum Field Theory itself is the unification of Quantum Mechanics with Special Relativity). My focus was on the former problem.

The Standard Model is not philosophically satisfying. Besides the Higgs particle, which is a critical component but has yet to be discovered, there is a deeper issue. The Standard Model involves a large number of empirical inputs (about 21, depending on how you count them), such as the masses of leptons and quarks, various coupling constants, and so on. It also involves a specific non-trivial set of gauge groups, and doesn’t really unify the strong force and electro-weak force (which is a proper unification of the electromagnetic and weak forces). Instead, they’re just kind of slapped together. In this sense, it’s too arbitrary. We’d like to derive the entire thing from simple assumptions about the universe and maybe one energy scale.

There have been various attempts at this. Our approach was to look for a “fixed point”. By studying which theories are consistent as we include higher and higher energies, we hoped to narrow the field from really really big to less really really big – where “less really really big” is 1. My thesis and papers were a first shot at this, using a simple version of Quantum Field Theory called scalar field theory (which coincidentally is useful in its own right, as the Higgs particle is a scalar particle). We came up with some interesting results before the aforementioned cataclysms led to my exile into finance.
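For flavor, the textbook one-loop beta function for the quartic coupling in scalar field theory (a standard result, not the actual computation in my papers) can be integrated in closed form. It shows the coupling creeping upward as the energy scale rises, the kind of ultraviolet behavior that makes the hunt for a fixed point interesting:

```python
import math

# one-loop coefficient in d(lambda)/d(ln mu) = B * lambda**2
B = 3.0 / (16.0 * math.pi ** 2)

def run_coupling(lam0, log_ratio, b=B):
    """Closed-form solution of the one-loop flow:
    1/lambda(mu) = 1/lambda0 - b * ln(mu/mu0)."""
    return 1.0 / (1.0 / lam0 - b * log_ratio)

lam0 = 0.1
for decades in (0, 5, 10, 15):
    lam = run_coupling(lam0, decades * math.log(10.0))
    print(f"{decades:2d} decades up in energy: lambda = {lam:.4f}")
```

At one loop the coupling simply grows without finding a nontrivial fixed point (eventually hitting the so-called Landau pole), which is part of why one must go beyond the simplest approximation to say anything about which theories survive at all energies.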

Unfortunately, because of the vagaries of copyright law I’m not allowed to include my actual papers. But I can include links. The papers were published in Physical Review D and Physical Review Letters. When you choose to build upon this Earth Shattering work, be sure to cite those. They also appeared on the LANL preprint server, which provides free access to their contents. Finally, my thesis itself is available. Anyone can view it, but only MIT community members can download or print it. Naturally, signed editions are worth well into 12 figures. So print and sign one right away.