### Dispersity in the classroom

Here’s an amusing way dispersity, a measure of how different the sizes of objects in a collection are, crops up in classroom management.

Imagine that I’m a teacher who wants more participation in class. More specifically, during class discussions a few of my students are regular contributors, but I want everyone to join in on a more equal basis. How can I track my progress at encouraging more participation?

One method might be to count the number of distinct participants in each discussion, or equivalently, count the number of people who aren’t participating. But this isn’t sensitive to my bigger problem of a few students dominating each discussion, although I might hope that getting the more reticent students over the initial threshold of joining in might make them more talkative. But how can I measure the equality of my students’ involvement more directly?

Why, assuming each has a reasonably accurate sense of how much they’re participating themselves, I can just ask them—in the form of an in-class discussion! But the counterintuitive upshot is that the higher the average reported participation, the worse the problem of participation inequality!

Here’s why: if certain students don’t participate in class discussions, I won’t hear from them when I’m asking how much time they spend participating. Instead, I’ll hear more from the students who are already participating more. If in my class of twenty students I have one student who’s speaking half the time, five students who speak 10% of the time, and fourteen who never speak, then if the discussion continues with those proportions I’ll come away thinking the average participation fraction is more like 30% than the true average of 5%:

*“How much of the discussion do you contribute?” The ratio of my fake average (30%) to the real average (5%) is the dispersity of the collection of participation fractions; in this case, the dispersity is 6. In general, this ratio is larger than 1, and it only equals 1 if everyone is participating equally.*
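The two averages are easy to check with a quick sketch in plain Python (the numbers are the ones from the post; the variable names are mine):

```python
# Participation fractions in a class of 20: one student speaks 50% of the
# time, five speak 10% each, and fourteen never speak.
p = [0.50] + [0.10] * 5 + [0.00] * 14

# Plain average: every student counted equally, 1/20 = 5%.
true_average = sum(p) / len(p)

# Size-biased average: a student who speaks a fraction x of the time is
# heard reporting "x" for a fraction x of the discussion, so each x is
# weighted by itself.
fake_average = sum(x * x for x in p)  # 0.5^2 + 5 * 0.1^2 ≈ 30%

dispersity = fake_average / true_average  # 0.30 / 0.05 = 6
```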

Anyway, this is mostly a joke: if I tried to poll how much my students are participating, I wouldn’t expect it to take five times as long to say “50%” as it would to say “10%”, nor would I expect myself to be fooled into forgetting that it’s still just one student saying so. I’m just amused that the result brings in this concept of dispersity.

But I can easily imagine a realistic and qualitatively similar scenario, where I try to poll how my students are feeling about the class so far, and how good a student feels is proportional to how likely they are to speak up:

*“How is class going for you?” The danger of being misled by a few enthusiastic responses is all too real.*

## One thought on “Dispersity in the classroom”

1. Owen Biesel says:

If you’re interested, here’s the math of how these averages are computed in general:

In general, if the fractions of discussion time occupied by each of my $N$ students are $p_1$ up to $p_N$, then the average fraction I’ll hear, when I spend a fraction $p_n$ of the time hearing “$p_n$”, is $p_1^2 + p_2^2 + \dots + p_N^2$. This quantity is called the Simpson index of the distribution $(p_1,\dots,p_N)$: it’s a weighted average of $p_1$ up to $p_N$ (whose true average is always $1/N$, since the fractions sum to $1$), and because the weights are themselves the $p_n$, the average is weighted in favor of the larger $p_n$. Thus the Simpson index is always at least $1/N$, and it equals $1/N$ only when all of $p_1$ up to $p_N$ are equal.

In our case with one student taking up half the discussion time, five students with $10\%$ of the time each, and fourteen students who don’t talk at all, the Simpson index works out to $1\cdot(0.50)^2 + 5\cdot (0.10)^2 + 14\cdot (0.00)^2 = 0.30 = 30\%$.
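As a sanity check, here is the same computation in Python (the function name `simpson_index` is mine); it also illustrates the claim that the index reaches its minimum of $1/N$ exactly when all the fractions are equal:

```python
def simpson_index(p):
    """Size-biased average sum(p_n^2) of fractions p_n that sum to 1."""
    return sum(x * x for x in p)

# The post's distribution: one 50% speaker, five 10% speakers, fourteen silent.
uneven = [0.50] + [0.10] * 5 + [0.00] * 14

# For comparison: all twenty students participating equally.
even = [1 / 20] * 20

simpson_index(uneven)  # ≈ 0.30, well above the true average of 1/20 = 0.05
simpson_index(even)    # ≈ 1/20, the minimum possible for 20 students
```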

We can think of the Simpson index as $1$ divided by the effective number of participants: if the discussion were carried by four students participating equally, then the Simpson index is $1/4$, so if we calculated a Simpson index of about $25\%$ for our discussion distribution, it would be as if the number of participants were “effectively” four. As it is, our effective number of participating students is less than that, closer to three.
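The "effective number" reading can be sketched the same way (again, the helper name is mine): inverting the Simpson index recovers the head count exactly when participation is equal, and gives a smaller number otherwise.

```python
def effective_participants(p):
    """1 / Simpson index: how many *equal* participants would produce
    the same index as the distribution p."""
    return 1 / sum(x * x for x in p)

# Four students carrying the discussion equally: Simpson index 1/4,
# so effectively four participants.
effective_participants([0.25] * 4)  # 4.0

# The post's distribution: effectively about 1/0.30 ≈ 3.33 participants,
# despite six students speaking and twenty being present.
effective_participants([0.50] + [0.10] * 5 + [0.00] * 14)
```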
