Algorithms to Live By: The Computer Science of Human Decisions

This is an "Orphan" page. Its core content has not been shared: what you see below is a loose collection of pages and page snippets that mention this page, as well as snippets of this page that were quoted elsewhere.

Second, knowing that you are using an optimal algorithm should be a relief even if you don’t get the results you were looking for. The 37% Rule fails 63% of the time. Maintaining your cache with LRU doesn’t guarantee that you will always find what you’re looking for; in fact, neither would clairvoyance. Using the Upper Confidence Bound algorithm approach to the explore/exploit tradeoff doesn’t mean that you will have no regrets, just that those regrets will accumulate ever more slowly as you go through life. Even the best strategy sometimes yields bad results — which is why computer scientists take care to distinguish between “process” and “outcome.” If you followed the best possible process, then you’ve done all you can, and you shouldn’t blame yourself if things didn’t go your way.
Outcomes make news headlines—indeed, they make the world we live in—so it’s easy to become fixated on them. But processes are what we have control over.
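
As a concrete illustration of that last point, here is a minimal sketch of the Upper Confidence Bound idea (the UCB1 rule, with made-up payout rates). The exploration bonus shrinks as an option is tried more often, so regret keeps accruing, just ever more slowly; nothing here guarantees a good outcome on any single pull.

```python
import math
import random

# Sketch of UCB1 over three options with hypothetical payout rates
# (the rates are illustrative, not from the book). Each round we pick
# the option whose average reward plus exploration bonus is highest;
# the bonus shrinks logarithmically as an option is tried more often.

true_rates = [0.3, 0.5, 0.7]       # hidden payout probabilities
counts = [0] * len(true_rates)      # how often each option was tried
totals = [0.0] * len(true_rates)    # total reward per option

for t in range(1, 1001):
    untried = [a for a in range(len(counts)) if counts[a] == 0]
    if untried:
        arm = untried[0]            # try every option once first
    else:
        arm = max(
            range(len(counts)),
            key=lambda a: totals[a] / counts[a]
            + math.sqrt(2 * math.log(t) / counts[a]),
        )
    counts[arm] += 1
    totals[arm] += 1.0 if random.random() < true_rates[arm] else 0.0

print(counts)  # pulls concentrate on the best option, yet exploration never fully stops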

Alternatively, you can try to reduce, rather than maximize, the number of options that you give other people — say, offering a choice between two or three restaurants rather than ten. If each person in the group eliminates their least preferred option, that makes the task easier for everyone. And if you’re inviting somebody out to lunch, or scheduling a meeting, offering one or two concrete proposals that they can accept or decline is a good starting point.

One of the implicit principles of computer science, as odd as it may sound, is that computation is bad: the underlying directive of any good algorithm is to minimize the labor of thought. When we interact with other people, we present them with computational problems — not just explicit requests and demands, but implicit challenges such as interpreting our intentions, our beliefs, and our preferences. It stands to reason, therefore, that a computational understanding of such problems casts light on the nature of human interaction. We can be “computationally kind” to others by framing issues in terms that make the underlying computational problem easier. This matters because many problems — especially social ones, as we’ve seen — are intrinsically and inextricably hard.

Politely withholding your preferences puts the computational problem of inferring them on the rest of the group. In contrast, politely asserting your preferences (“Personally, I’m inclined toward x. What do you think?”) helps shoulder the cognitive load of moving the group toward resolution.

In almost every domain we’ve considered, we have seen how the more real-world factors we include — whether it’s having incomplete information when interviewing job applicants, dealing with a changing world when trying to resolve the explore/exploit dilemma, or having certain tasks depend on others when we’re trying to get things done — the more likely we are to end up in a situation where finding the perfect solution takes unreasonably long. And indeed, people are almost always confronting what computer science regards as the hard cases. Up against such hard cases, effective algorithms make assumptions, show a bias toward simpler solutions, trade off the costs of error against the costs of delay, and take chances.
These aren’t the concessions we make when we can’t be rational. They’re what being rational means.

The deeper point is that subtle changes in design can radically shift the kind of cognitive problem posed to human users. Architects and urban planners, for instance, have choices about how they construct our environment — which means they have choices about how they will structure the computational problems we have to solve.

A friend of ours recently mused about a childhood companion who had a disconcerting habit of flaking on social plans. What to do? Deciding once and for all that she’d finally had enough and giving up entirely on the relationship seemed arbitrary and severe, but continuing to persist in perpetual rescheduling seemed naïve, liable to lead to an endless amount of disappointment and wasted time. Solution: exponential backoff on the invitation rate. Try to reschedule in a week, then two, then four, then eight. The rate of “retransmission” goes toward zero — yet you never have to completely give up.
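
A minimal sketch of that schedule, assuming a one-week base interval and an arbitrary start date (both illustrative):

```python
from datetime import date, timedelta

# Exponential backoff applied to rescheduling: the interval doubles
# after each flaked plan, so the invitation rate heads toward zero
# without ever reaching it.

def reschedule_dates(start: date, attempts: int, base_days: int = 7):
    """Yield the date of each successive invitation."""
    interval = base_days
    when = start
    for _ in range(attempts):
        when += timedelta(days=interval)
        yield when
        interval *= 2               # wait twice as long next time

for d in reschedule_dates(date(2024, 1, 1), attempts=5):
    print(d)  # one week out, then two, four, eight, sixteen
```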

The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit. Decide how responsive you need to be — and then, if you want to get things done, be no more responsive than that.
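
One way to act on that advice is interrupt coalescing: batch incoming interruptions and handle them on a fixed cadence, no more often than your minimum acceptable responsiveness requires. A minimal sketch, with an illustrative 25-minute interval and a stand-in print handler:

```python
import queue
import time

# Sketch of batching interruptions at a fixed cadence. The 25-minute
# interval and the print handler are illustrative assumptions; the point
# is to decide your minimum acceptable responsiveness, then check no
# more often than that.

inbox: queue.Queue = queue.Queue()     # interruptions land here

def coalesced_loop(check_every_s: float = 25 * 60) -> None:
    while True:
        time.sleep(check_every_s)      # uninterrupted stretch of work
        while not inbox.empty():       # then drain the whole batch
            message = inbox.get()
            print(f"handling: {message}")
```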

Likewise, seemingly innocuous language like “Oh, I’m flexible” or “What do you want to do tonight?” has a dark computational underbelly that should make you think twice. It has the veneer of kindness about it, but it does two deeply alarming things.

First, it passes the cognitive buck: “Here’s a problem, you handle it.”

Second, by not stating your preferences, it invites the others to simulate or imagine them. And as we have seen, the simulation of the minds of others is one of the biggest computational challenges a mind (or machine) can ever face.

This!! I always hate questions like that, or "tell me something" requests.

Humans clearly have context-switching costs too. We feel them when we move papers on and off our desk, close and open documents on our computer, walk into a room without remembering what had sent us there, or simply say out loud, “Now, where was I?” or “What was I saying?” Psychologists have shown that for us, the cost of a context switch is measured at the scale of minutes rather than microseconds. To put that figure in perspective, anyone you interrupt more than a few times an hour is in danger of doing no work at all.