I love optimization: both the general principle of trying to make things the best they can be and the specific techniques that go into making that happen. In an alliterative-adjective icebreaker game, I even introduced myself as “Optimizing Owen,” which somehow did not surprise my Intro Calculus students.
But optimization has a cost—it takes effort! My calculus students would agree, but when they’re trying to maximize or minimize simple, friendly functions, they have it comparatively easy: in real life, optimization can be a long process of trial and error with no clear finish line.
Let’s say you’re trying to decide how many hours per day you want to spend online. The internet is a helpful resource for a lot of life’s challenges, so the right answer is probably not 0. (Since you’re here reading this, I assume you agree.) On the other hand, spending too much time on the internet will start to get in the way of other things, so your graph of life quality versus internet usage probably looks something like this:
In calculus, we would represent this curve by a formula and then use that formula to find the maximizing value exactly, but life doesn’t hand us a formula. In reality, you’d probably have to experiment with a bunch of different internet usage levels and see what the effects are, and as a picture emerges you can zero in on what’s best for you:
The problem is that this process doesn’t naturally end, even though pretty soon into the optimization you’ll be getting such tiny gains from your improved precision that it almost surely won’t be worth the effort you’re putting in. (In general, the closer in value your alternatives are, the more effort it takes to choose between them, and the less it matters!)
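The guess-and-check process above, plus a rule for quitting, can be sketched in a few lines of code. Everything here is illustrative: the `quality` curve is a made-up stand-in for a life-quality function nobody actually has a formula for, and `worth_it` is an arbitrary threshold for “further precision isn’t worth the effort.” The narrowing strategy is golden-section search, one of the simplest entries in that library of optimization algorithms.

```python
import math

def quality(hours):
    # Hypothetical hump-shaped curve: some internet helps, too much hurts.
    # Peaks at hours = 3 (purely an assumption for the sketch).
    return hours * math.exp(-hours / 3)

def optimize(f, lo, hi, worth_it=0.25):
    # Golden-section search: repeatedly narrow the interval around the peak,
    # but quit once the remaining uncertainty is below the effort threshold.
    phi = (math.sqrt(5) - 1) / 2
    while hi - lo > worth_it:
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if f(a) < f(b):
            lo = a  # the peak can't be in [lo, a]; discard that piece
        else:
            hi = b  # the peak can't be in [b, hi]; discard that piece
    return (lo + hi) / 2

best = optimize(quality, 0.0, 12.0)
```

The `worth_it` parameter is the whole point: without it, the loop would keep shrinking the interval forever, gaining precision that stopped mattering long ago.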
So when should you quit? If you spend no time optimizing, you’re missing out on some easy ways to improve your life, but if you keep optimizing forever, you’re pouring time and effort into things that aren’t really helping anymore. Hmm… sound familiar?
“How much should I optimize?” is yet another optimization problem! As is the more general question of how to go about optimizing: the guess-and-check method I described above is one of the most basic in a large library of optimization algorithms. You can optimize your optimization method too, gradually tweaking it over time to be more and more efficient. But when does that also stop being worth the effort?
For me, as much as I enjoy optimization, my answer generally is to do very little of it before moving on. People change, ideal values change, and it’s no use trying to map a shifting landscape too precisely. By all means, I’ll try doing a simple internet time experiment (like installing the News Feed Eradicator for Facebook and seeing how I like it), but then I’ll experiment with a different part of my life next (like sleep, diet, workflow, etc.). There’s a lot to optimize, and I can always come back to something again if I feel like it later.
Let me leave you with an analogy from Beverly Cleary’s Beezus and Ramona. Beezus is looking all over for her little sister Ramona when she finally finds her in the basement, having taken a single bite out of each of several apples without finishing any. When Beezus asks why, Ramona explains that “the first bite tastes best.” It’s true that the more bites of an apple you eat, the less satisfying they taste, but the solution isn’t to bite lots of different apples, it’s to include a wide variety of foods in your diet. That way, the next apple you bite into will have a chance to taste as good as the first.