Objecting to Decision Theory

Two years ago, my friend Tuomas and I took a seminar on decision theory taught by Caspar Hare. I was hoping to learn how to make better decisions, or else see what all the rationalist hype was about1. Tuomas was interested in its applications to AI safety.

Zealous intellectuals that we were, Tuomas and I were lured in by the taste of abstract thought experiments that grew more bizarre and dizzying as we read on. An offer leading to a series of infinite coin flips. An oracle presenting two boxes.

At the end of the class, we weren’t sure what to make of the intellectual mess of decision theory, and came away with a sense that the project as a whole was rather doomed.

Over the next two years, we consolidated our reasons into a slightly more coherent objection to the project of decision theory. The conversation below is our attempt to describe our current stance on the unworkability of decision theory.

Our recorded dialogue can be found here

Note:

I expect that the people who will find this conversation most interesting will have some prior context on decision theory.

Our objections are not particularly novel, and this conversation was not intended to sway a seasoned decision theorist. Instead, it is a modest attempt to articulate the stance of two people who earnestly thought about decision theory for about half a year and changed their minds.

Background reading

Sampled quotes

“I don’t think it’s that we’re not smart enough. And I don’t think this means decision theory is useless. How I think about it now is that decision theory, like probability theory, is a tool that is really useful in certain domains. And what we’ve done by bringing it into these normative contexts is we’ve taken the tool out of its useful domain, and it’s breaking down.”

“One thing that this does make me think of is that decision theory is a subset of thinking more generally, of thinking and cognition, and your thinking isn’t good for everything in your life. In fact, it’s good for a small subset of things. It’s good for when you have explicit problems. And, you know, problems aren’t things that exist out there. They’re more of a way of seeing things.”

Table of contents

Transcript

Note: this transcript was automatically generated but not thoroughly proofread. There will be typos and syntactical errors. However, the timestamp plus context should make the message discernible.

What we wanted from decision theory

Max 0:01

Okay, this is the proper start. Wow, it’s kind of windy. Okay, conversation on decision theory, take one, with Tuomas and Max. Okay, the structure: we want to talk about decision theory, having taken a class a year ago together, taught by one of the semi-notable decision theorists. We thought about it a good amount during that semester, and we’ve had about a year to reflect on it. So I think we should go about this as: we can talk about our experience coming to decision theory, what we wanted from this class and what we wanted to learn from it, the things we encountered within decision theory, both the frameworks and also the problems, and then we can have a broader reflection on how we feel about decision theory now. So yeah, Tuomas, what made you want to study decision theory in the first place?

Tuomas 1:18

I think you kind of convinced me to do it.

Max 1:20

Oh, it was a philosophy capstone.

Tuomas 1:22

Oh, yeah. It was required for the major. Yeah. And it seemed interesting, a more mathematical branch of philosophy. That feels more familiar to a technical mind.

Max 1:40

I convinced you to take it, I think. Yeah. You were taking another class first.

Tuomas 1:44

And then I switched from my other class into this one.

Max 1:48

Because you could have taken another one that wasn’t this one. Right.

Tuomas 1:52

I think I kind of had to take this if I wanted to get the philosophy major

Max 1:55

Also, but you weren’t convinced? Yeah, you were going to decide in the last week. Yeah.

Tuomas 1:58

I wasn’t sure if I would do that. Yeah.

Max 2:01

And what did I say? Do you remember?

Tuomas 2:03

I think you were saying that you think it could be important for AI safety. Yeah. You were pretty convinced of the importance of decision theory.

Max 2:20

Yeah. Or at least that people who were pretty heavily influential in AI safety thought that decision theory was important. I think I personally was convinced too, I just didn’t know very much about it. And this is the stuff on LessWrong, the rationalist writing about decision theory and AI safety.

Tuomas 2:44

So what’s the summary of that position? It’s like, decision theory helps us...

Max 2:51

Well, you need the correct decision theory if you want your AI to not act in a bad way against the desires of people.

Tuomas 3:05

Yeah, like your capable AI should sort of follow decision theory rules. You need to figure out how to make the decision theory work, so that the AI will also work.

Max 3:20

Yeah. And I guess I should say now that I don’t feel like I understand UDT, which is the alternative. So I can’t dismiss it in technical detail, or attack it or support it. But yeah. So you came to it because you had to get a philosophy capstone for the major, and also you were somewhat convinced it’s important for AI safety. I think I took the class because it was Caspar Hare. And Caspar Hare was the one who introduced me to philosophy when I was 14. I took his class, MIT 24.00, Introduction to Philosophy. And I was really enamored, and it maybe really got me to start thinking about these things a lot.

Tuomas 4:15

Yeah.

Max 4:17

And I thought it’d be fun. Also, I was very nearly not going to do it. It was a semester that I had off while I was working at a think tank, and this was the one class I was considering doing. And I ended up deciding to do it just so that I could avoid paying health insurance, because I wouldn’t be at MIT if I didn’t take this class.

Tuomas 4:40

Yeah. I think I was actually less convinced by the AI safety argument, but I thought it was plausible. But yeah, I think you were saying that it’s a fun, good class, and that you were actually interested...

Max 4:57

and that it would be fun for us

Tuomas 4:58

to take together. Yeah. Which it was, it was a good time.

Max 5:03

Yeah. I also think there’s something about decision theory. The thing that drew me to decision theory is in common with the thing that drew me to other philosophy classes and strands of philosophical thought, which was that I wanted to know the right way to decide or to act. I wanted a system that could tell me how to be, and decision theory is kind of the ultimate promise of that. And so I thought, maybe if I understood decisions, if I was really good at academic decision theory, then I could make the best decisions in my life. Which is an interesting stance.

Tuomas 5:51

Yeah. So it seems like almost the grand promise of decision theory is that it can really tell you how to live your life. And not just you, it’s a system you can apply to an AI too.

Max 6:13

Well, it doesn’t tell you precisely how to live your life. But almost: given that you have some inclinations, or are able to establish utilities for certain options, then it can tell you the best process for going about it.

Tuomas 6:27

Yeah. Like there’s room for what you’re optimizing for, but given that, it’s supposed to tell you how to do it.

The origins of decision theory

Max 6:40

Yeah. So, if we go back to the beginnings of decision theory, I think it comes about when von Neumann and Morgenstern start modeling an agent. And they propose that how you could make a rational decision is pretty straightforward: you have a number of options, A, B, and C, and you assign values to each of the options. Then you multiply those by the probabilities of the outcomes, and you choose the highest probability-weighted utility. This is what expected value is. And at first it’s most obvious in casino-like contexts, when you’re playing probability games with coin flips, cards...

Tuomas 7:53

You have clear probabilities and clear prizes, usually money.
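
As a quick reference, here is the expected-value rule being described, in our own notation (not a quote from the class): for an option $A$ with possible outcomes $o_i$,

$$\mathrm{EU}(A) = \sum_i P(o_i \mid A)\, u(o_i),$$

and the recommendation is simply to pick whichever option has the highest $\mathrm{EU}$.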

Max 8:02

And I think what they were doing was, I actually don’t know from the outset if it was both normative and descriptive. This is one distinction you can make between the different types of decision theory: one makes claims about how you should act, and one tries to describe how people actually act. Actually, I think it’s much more normative from the beginning.

Tuomas 8:26

I think so, yeah. I haven’t really seen many people claim that people actually maximize expected utility.

Max 8:40

Yeah, so it’s like the ideal rational agent. This is how they’re supposed to behave.

Tuomas 8:43

Yeah. Like this is how you should be.

Max 8:47

And so it begins with just probabilities. Okay, so we’ve talked about why we wanted to do decision theory; we can talk a little bit about the substance of it. It begins with probability-weighted, just expected utility. But then this runs into some problems that are pointed out by people like Lara Buchak. I mean, actually, there are even paradoxes early on, before von Neumann and Morgenstern, like the St. Petersburg paradox. Remember that? It was proposed in the 18th century by some mathematician, and it was about flipping coins where you keep doubling the reward. It was one of the infinity paradoxes, if you keep doubling. What is the St. Petersburg paradox exactly? Can you look it up?

Tuomas 10:01

It’s like, you end up choosing a near-zero chance of infinite gain over the guaranteed chance of any amount of money.

Max 10:28

but so this is one of many different

Tuomas 10:33

Oh no, St. Petersburg, it’s like playing a lottery, where...

Max 10:43

oh, it’s like you keep betting?

Tuomas 10:45

No, I think it’s like, you have this game where you get paid based on when the first heads comes up. So you keep flipping a coin until you get heads, and then the amount you get paid is two to the power of the number of flips it took to get there. So there’s a one-half chance you just get paid 2, a one-quarter chance you get paid 4, a one-eighth chance you get paid 8, and so on. And the expected value of this coin flip game is infinite. So you choose the option to play this game over any finite amount of money that’s guaranteed for you. Even though most of the time you end up with a payoff of 2 or something.
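
Written out, the calculation Tuomas is describing: the game pays $2^n$ if the first heads comes up on flip $n$, which happens with probability $2^{-n}$, so

$$\mathbb{E}[\text{payoff}] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty,$$

which is why a pure expected-value maximizer prefers playing the game to any guaranteed finite amount.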

Max 11:43

So yeah, and that also applies if you’ve already played the game, if you’ve flipped the coin and you have $2, and then you have the option of keeping what you’ve gotten or continuing to play the game. So like, say you got super lucky and heads came up the fifth time. So you...

Tuomas 12:09

would get two to the fifth. Or let’s say, you know, it came up the 100th time.

Max 12:12

Yes, the 100th time. So you get two to the 100. Which is like, how big is that?

Tuomas 12:21

Two to the ten is about a thousand, so that would be like a thousand to the tenth power.

Max 12:27

A thousand to the tenth power, which is...

Tuomas 12:30

I don’t think we have a name for it, it’s some weird word.

Max 12:33

Anyways, it’s like 10 to the something. Well, yeah, so it’s 10 to the 3, times 10...

Tuomas 12:42

10 to the 15? Oh no, I think it’s 10 to the 30. Yeah,

Max 12:45

So you have 10 to the 30 dollars. And then you’re given the option: you can have that, or you can play this St. Petersburg game again. And according to expected utility, you keep playing it, right?

Tuomas 12:56

Or at least according to expected value? Yeah. Because there’s a differentiation between the two, right.

Max 13:04

That’s very important. Yeah. Utility is something that gets latched on later; you have your subjective utility that gets assigned to each option. But for value, for the money game, it’s clear, at least if you’re willing to talk about maximizing money.

Tuomas 13:20

Yeah. So the start of decision theory is kind of like: before that, there was the tradition of probability estimation and gambling theory, of what you should do to maximize your payoffs or whatever. And that’s where expected value came from.

Max 13:45

Yes. And then the moment you start talking about utility, that is the beginning of decision theory.

Tuomas 13:51

Yeah. So you replace the abstract number, the amount of dollars, with a utility function that describes how good this is for you. And then again, in the frame of decision theory...

Max 14:07

Yeah, how you can deal with... So, I mean, would you say that decision theory is a form of utilitarianism? Basically everything we encountered was.

Tuomas 14:23

Yeah, I think at least the popular approaches that we saw heavily rely on utilitarianism. But I don’t know what the actual definition is; maybe there are other competing styles of decision theory that are quite different.

Max 14:38

I think that’s it. Actually, I can come up with a counterexample now already. We were talking about other decision-theoretic things, like voting rules. There’s the Borda count. — I got a phone call. Okay.

Social choice theory and decision theory

Max 15:00

Okay, this is working now.

Okay, back on track. Decision theory: we were talking about the history of it and utility being assigned. What made it decision theory was that there was a utility that got added, and then people thought you can weight the decisions according to the utility that’s given to you. And then I came up with a counterexample, the Borda count. There are other decision-theoretic frameworks that don’t use utility, that are more like voting schemes. And so there’s some overlap between social choice theory, which is the theory of votes, and decision theory. So with the Borda count, for example, you just rank all of your choices, you give each a number of votes according to its rank, and then you sum them. And that doesn’t require any conception of utility. But...

Tuomas 16:30

How do you rank them? Based on different attributes, or on what is most desirable?

Max 16:37

Yeah. So for example, if we’re trying to decide whether to eat a hamburger, a salad, or soup, you rank them one, two, and three. So if in general you’re going for healthiest, maybe you put salad, then soup, then hamburger. And oh, wait, there’s...

Tuomas

no voting here, though. Yeah.

Max

If you already rank them, then that’s kind of the decision.

Tuomas 17:13

Yeah, the choice. That’s more like a moral uncertainty thing, right? Yeah.

Contemporary approaches to decision theory; normative uncertainty, risk-weighted expected utility

Max 17:25

Yeah. So there is some overlap, but that’s only when people start... So that was Will MacAskill. Actually, yeah, Will MacAskill’s thing is normative uncertainty, where he introduces the idea that we can draw parallels between voting theory and decision theory. So we use things like the Borda rule or Condorcet winners, that type of language, to talk about what you should do under normative uncertainty, across different moral theories. But yeah, actually, maybe explicitly choosing usually does depend on utility. I don’t know. Let’s see what else is on this. Okay, so one of the first things we get taught in this class is that this whole problem of the St. Petersburg paradox can potentially be resolved by what Lara Buchak, at UC Berkeley, calls risk-weighted expected utility. She says you can have another function that you put in, a risk function, that’s greater than one or less than one depending on whether you’re more or less risk-seeking, and that effectively lets you get rid of this St. Petersburg paradox. Which, like, the assumption of the St. Petersburg paradox is that you value the money, that money is equivalent to utility basically.
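
For reference, here is our rough sketch of Buchak-style risk-weighted expected utility (our paraphrase, not her exact formulation): order an option’s outcomes from worst to best, $u_1 \le \dots \le u_n$, with probabilities $p_1, \dots, p_n$, and let $r$ be the agent’s risk function on probabilities. Then

$$\mathrm{REU} = u_1 + \sum_{i=2}^{n} r\!\Big(\textstyle\sum_{j \ge i} p_j\Big)\,(u_i - u_{i-1}).$$

With $r(p) = p$ this reduces to ordinary expected utility; a convex $r$ such as $r(p) = p^2$ discounts the chances of the better outcomes, which is one way of modeling risk avoidance.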

Tuomas 19:12

Yeah. I mean, there are other, more standard approaches to get rid of the paradox, right? Like...

Max 19:19

Yeah, like in economics.

Tuomas

Yeah, the usual way is to have a nonlinear utility function. Maybe it’s the square root of the amount of money you have or something.

Max

Right. But the reason that she introduces this, I think, is because of problems with those solutions themselves. There are still some problems. In fact, didn’t she say that for any utility function, you can find some example that...

Tuomas

Yeah, something similar, probably.

Max

Yeah. Anyways, so that is one attempt. Maybe before we go into the problems, do you remember any of the other interesting approaches?

Okay, I don’t really remember other decision-theoretic proposals besides Lara Buchak’s. The rest of the readings were just papers about why these are...

Tuomas

problems? Yeah. And that was also a pretty problematic proposal in itself, the Buchak proposal. Yeah.

Max 20:43

So yeah, you came up with the idea, but then we ended up writing the paper together, with a concrete example of why Buchak’s risk-weighted expected utility doesn’t work. And it was to do with time.

Tuomas

Yeah, it’s a time thing, because the way she weights probabilities depends on how you calculate, on what you consider to be the different events. If you bet on the outcome of two coin flips as a single gamble, then you will make a different choice than if you first bet on the outcome of the first coin flip and then bet on the outcome of the second coin flip, even if all the payoffs are the same and the probabilities are the same in the end.

Max

Yeah. So that’s a weird thing, which is, if you give someone a bundled bet, they act very differently than if you had unbundled it and given it to them, even though the bundling is really just: they do one decision, then the next.

Tuomas

Yeah, the bundling doesn’t change the real-world situation at all. It’s just how we model it.

Max

Right. Which...

Tuomas

seems inherently flawed to me?

Max

Yeah. So that’s a problem with consistency in time. And I mean, Lara Buchak talks about it later in her book, but still, this is kind of the beginning of, you know, maybe this is not as clear as we thought, maybe you can’t have a decision theory that works for all these things, for general decisions.

Tuomas

What are the other flaws or problems...

Other flaws in decision theory

Max 22:41

with decision theory? The other one that’s cool, I like the two papers that we read. One is Professor Hare’s, Caspar Hare’s own paper, and the other is Ruth Chang’s. Caspar Hare’s paper is about this: your house is burning down, and the firefighters who come in have to save one item. You know that there’s your wedding album, which contains all these really precious memories with you and your wife, and that’s really important to you. But you also know that there’s this Fabergé egg, a really rare, very-rich-person thing given by Tsar Alexander or Nicholas the Second or something, and it’s just this family heirloom that’s worth a lot of money. And he says at this point, he cannot say which one you should save. And the whole thing is, even if you sweeten the deal by saying there’s $100 next to the wedding album, that doesn’t make a difference. He still can’t make an all-things-considered choice.

Tuomas

Yeah. The basic point is that there are things where you can’t say which one is better. And it doesn’t mean that they have the same value, because if you add $100 to one, you still can’t say which one is better. Instead, there’s something else going on there.
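
Semi-formally (our notation), with $W$ the wedding album, $E$ the egg, and $W^{+}$ the album plus \$100, the situation Hare describes is:

$$W \not\succ E,\qquad E \not\succ W,\qquad W^{+} \succ W,\qquad W^{+} \not\succ E.$$

If $W$ and $E$ were exactly equal in value, the strict improvement $W^{+} \succ W$ would force $W^{+} \succ E$; since it doesn’t, none of the three standard relations (better, worse, equal) holds between $W$ and $E$.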

Max

Yeah. Because you can imagine another silly example: you have your son Tim and your daughter Lila, and you can only feed one of them, or take one of them with you in the apocalypse, and you can’t make this decision. And then you say, well, Tim has $100 in his pocket, therefore you should take him. And it’s like, this is ridiculous. So I don’t remember the exact conclusion that Professor Hare draws, but it’s something like: a good decision theory has to somehow take that into account. And maybe it’s just, in general, really hard to make precise distinctions about choices.

Tuomas

Yeah, I think he was kind of in support of the ‘on a par’...

Ruth Chang’s On the Possibility of Parity

Max

theory. Right. And so that naturally leads us to the next one. This is a proposed solution by Ruth Chang, which is that in this case you cannot use just the three relations which underlie decision theory: greater than, less than, or equal to.

Tuomas 25:50

Yeah, I guess the deeper thing behind this is the assumption, going from probability theory to decision theory, that you have some sort of utility function that, given any state, gives you a number that describes how good it is. And normal numbers just have these three relations: one number is either greater than, smaller than, or equal to the other. But it seems like these values don’t follow these three rules.

Max

Yes. And so what Ruth Chang says is that we need another relation, which is called ‘on a par.’ The thing is, the wedding album and the Fabergé egg are not equal, they’re actually on a par. She is careful to say that it’s not that they can’t be compared, and it’s also not that they’re equal; it’s that there’s this new thing called being on a par. And she describes some of the properties of that relation, which I don’t fully remember.

Tuomas

Yeah, yeah.

Max

But yeah, was it a satisfying answer to you?

Tuomas

I think it seems like they’re on to something, but if we accept it, it almost breaks down decision theory itself, I feel like. Because now you’re operating with things that are on a par, and all of these rules just kind of stop making sense.

Max

right, and how are you supposed to decide when things are on a par?

Tuomas

Yeah, it’s like, so you know,

Max

This, I think, starts to get at one of the bigger problems of decision theory, which is that there are metarational considerations. How do you decide things like how much utility to assign to something, or whether something is on a par or actually equal? You’d need another decision theory for that. And that just infinitely builds on itself.

Tuomas

Yeah. When you say things are on a par, you also run into some paradoxical situations. You know, the Fabergé egg and the wedding album are on a par. And if the wedding album plus $100 is on a par, and with $200 it’s still on a par... but at some point, if you keep adding $100 every time, you’re probably not on a par anymore. Maybe with the wedding album and a billion dollars, then you’re just going to choose that.

Max

Yeah. It’s like a wedding album. And like the embryo that will become your future son.

Tuomas

Yeah.

Max

Or like the embryo at 10 weeks? Yeah. Well, I mean, so…

Tuomas

Yeah, I don’t know, that’s a different kind of thing. I was just thinking, if the Fabergé egg is mostly monetary value, and you add money to the wedding album up to the value of the Fabergé egg, then it kind of becomes clear, because you’re winning on all fronts: it’s more money, and it’s more emotional value. But yeah, I think precisely defining this on-a-par notion is very hard.

Max

Well, the ‘on a par’ notion itself is an attempt to move out of precision.

Tuomas

Yeah, maybe. But if I remember correctly, I think the author was trying to frame it as: you don’t have to change the framework too much, you just include this extra relation and things still work. Which I think doesn’t really work, actually.

Max

Yeah. So my feeling about it is that she’s trying to build on a conceptual system that is fundamentally limited. It’s like a boat that is sinking, and she’s trying to bail the water out. The solution is not to just patch on another relation; it’s to realize that you cannot use decision theory for...

Tuomas 30:13

Yeah, these types of things. You have to give up on the boat.

Bostrom’s problems with infinity

Max

Yeah. But that’s jumping ahead to our current stance on decision theory. The other problems we were introduced to (this is only the second of the problems) were problems of infinities. I mean, we talked about the St. Petersburg paradox, but there’s also another.

There’s a crazy paper by Bostrom. Do you remember any of the details about that?

Tuomas 30:46

I remember the broad idea. It kind of shows that these issues we have when we approach infinity are actually much, much broader than you would think; it’s more than just the St. Petersburg paradox. Essentially, if we think there’s any chance of infinite moral value, or infinite utility, in some possible world, which we probably should assign some nonzero probability to, then essentially every decision has the same value. We can’t make any decisions. It’s like, should I pay $50 or should I get $50? It actually doesn’t matter, because your expected value is $50 plus 0.0001 times infinity, or negative $50 plus 0.0001 times infinity, and those are the same. So if there’s any chance of infinite payoff, then nothing else matters. We’re basically paralyzed; decision theory doesn’t tell us how to act in any situation.
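
Spelled out in the expected-value terms Tuomas is using: if some infinitely good outcome gets any credence $\varepsilon > 0$ no matter what you do, then for every act $a$,

$$\mathrm{EU}(a) = \varepsilon \cdot \infty + (1 - \varepsilon) \cdot (\text{finite terms}) = \infty,$$

so paying \$50 and receiving \$50 get the same expected value, and the ranking of acts collapses.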

Max

Right. And this is also, this is kind of parallel with the problem of moral fanaticism.

Tuomas

Oh, yeah, I think that was part of the paper, maybe.

Max

Because this is something that reappears in normative uncertainty: there might be really weird beliefs that assign infinite values, or incredibly high values, to certain things, and those just dominate everything else.

Tuomas 32:31

Which is really dangerous, because that could easily justify doing very horrific acts in the name of some much greater good, even if there’s just a small probability of it.

Newcomb’s paradox and the smoking lesion

Max

Yeah. Before we go to the meta decision theory, the normative decision theory stuff, we’re still at the level of basic decision theory; we’re not done with the problems with it. Another problem is determinism. And that really comes up when we talk about Newcomb’s paradox, and the smoking lesion, the two-box problem.

Tuomas

right. Yeah. Yeah.

Max

And the whole thing is that if you start to introduce notions about predicting how the agent will act into the decision, things get really weird quickly. Like the two-box, or Newcomb paradox, where there is a room, and inside one box, box A, there is $1,000, or some notable prize, and then inside box B, it doesn’t matter if it’s larger...

Tuomas

or smaller. It has to be larger. It’s like,

Max

So it’s like $5,000 or something. And then you can either choose box B, the bigger prize, or both of them. But the thing is, there’s a really good predictor who observes your past behavior and will choose to put nothing in both boxes if they predict that you’ll take...

Tuomas 34:22

Nothing in the bigger box, if they think you’re gonna take both. Right. But the smaller box will always have the $1,000.

Max

Yeah, so it’s like a million dollars in the big box. And then in the small box it’s $1,000.

Tuomas

Yeah. So either you can choose to take just the million-dollar box, which also might have $0 in it, or you can choose to take the million-dollars-or-$0 box plus the $1,000.

Max

Yeah. And so the two approaches are: one is causal decision theory, one is evidential decision theory. And evidential decision theory is when you two-box, right? One of them is like...

Tuomas 35:08

the other way, maybe? I’m not sure.

Max

I thought causal decision theory was one-boxing, where you pay attention to the causes and you think about what the predictor is likely to do. And then the other is like, you pay more attention to the past, I think...

Tuomas

Causal is like... actually, yeah. Causal, it’s like: okay, the key is the predictor puts the money in there before you make your decision. So if the money was put into the box, it’s better for you to choose both boxes, because you will get a million dollars and a thousand. But if the money wasn’t put into the box, it’s still better for you to choose both boxes, because you get $1,000 instead of zero. So in either state of the world, it’s better to choose both boxes. That’s causal, I think. I’m not 100% sure. Okay.

Max 36:06

I did not think it was. In either case, the names don’t really matter. But yeah, you can go down this big rabbit hole. And then there are versions of this problem, like the smoking lesion problem. You know that there’s some link between having a brain lesion and wanting to smoke, and given that your friend smokes, should you be happy for them about that? I don’t think we have to go into the specific details, but suffice to say there’s something genuinely hard here. It’s not even simple trade-offs; the whole framework just gets kind of perplexed.
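
For the Newcomb setup above, here is a small sketch in Python of the two ways of reasoning. The $1,000 and $1,000,000 figures come from the conversation; the 0.99 predictor accuracy is an assumption we picked purely for illustration.

```python
# A toy model of the Newcomb problem described above. The $1,000 / $1,000,000
# figures come from the conversation; the 0.99 predictor accuracy is assumed.
SMALL, BIG = 1_000, 1_000_000
ACCURACY = 0.99  # assumed reliability of the predictor

# Evidential-style reasoning: treat your own choice as evidence about what
# the predictor already did, and compare conditional expected payoffs.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
ev_two_box = (1 - ACCURACY) * (BIG + SMALL) + ACCURACY * SMALL

# Causal / dominance reasoning: the boxes are already filled, so in either
# fixed state of the world, taking both boxes is worth $1,000 more.
for big_box_full in (True, False):
    big = BIG if big_box_full else 0
    assert big + SMALL > big  # two-boxing dominates state by state

print(f"one-box expectation: ${ev_one_box:,.0f}")   # $990,000
print(f"two-box expectation: ${ev_two_box:,.0f}")   # $11,000
# The conditional-expectation calculation favors one-boxing, while the
# dominance argument (standardly associated with causal decision theory)
# favors two-boxing, which is where the conversation lands.
```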

Tuomas

I think decisions get very confusing once you get into a situation where there is some factor that’s correlated with your decisions, where your future decisions are correlated with some previous factor, like the prediction. If the predictor is better than chance, then the prediction is correlated with the decision you are going to make, which perhaps implies some sort of lack of free will, or at least a problem in this framework, where you assume that, independent of other things, you can just freely choose.

Max

Yeah. So it’s like decision theory is implicitly built on this notion of decoupled causes, or free will. But if you take a deterministic lens and you start to set up these questions in a way that assumes a more realistic way of acting, then it violates some of the core assumptions.

Tuomas

I think it violates the theory, yeah. And I haven’t really seen a good way to reconcile it with a deterministic view.

Max

Well, maybe UDT, Yudkowsky’s thing, people talk about that. Or FDT, there’s also functional decision theory. But I mean, it’s unclear in either case. Those are maybe promising; we haven’t looked at them. But in general, I think even if they did somehow save this problem of determinism, there are all these other problems. There’s the problem of infinities, there’s the problem of precision, like how precise...

Tuomas 38:32

I’d say the problem of precision is more like the problem of comparability.

Max

Right. Well, okay, sure. It doesn’t really matter what we call it, but you know what I mean.

Tuomas

Yeah, but they’re connected. It’s like, maybe you can’t compare with precision, or you can’t compare at all.

Max

And there’s another problem too, actually. Remember the options paper?

Tuomas 39:03

Oh, yeah. Like, how do you know what your options are?

Max

So Caspar Hare’s grad student has this paper called something like ‘What are your accessible options?’, where he tries to work out, when you’re in a given situation, how you even decide what the options are. Because most of these toy problems give you the options. But in reality, if you’re walking down a trail, or you’re about to get in your car, there are so many, there’s like infinite different options.

Tuomas

Yeah.

Max 39:40

And how do you bracket those into different things? I can’t remember his exact solution; it was something like, your options are the things that you can reasonably do, or that appear in your mind or something. It’s not quite that dumb. But basically, I think this is a really serious problem, because you can put two different people in the same situation and they can respond in incredibly different ways. They have different ways of seeing that are not enumerable.

Tuomas 40:14

Yeah. And it’s like, huh, why is this a problem? It’s a...

Decision theory fails on its own terms

Max

It’s a problem because, okay, here we can go back a little bit to what a decision theory is supposed to do. So this is one of my favorite things from Lara Buchak: she says a good decision theory should be action-guiding, evaluable, and predictive/explanatory. So it should be able to tell you what you should do in a situation, it should allow you to evaluate whether you made the right decision and assign blame if you didn’t, and it should be able to explain the decisions of other actors, to see if they’re rational or not. Actually, the explanatory part seems like it overlaps with evaluating, but the two criteria, action-guiding and evaluable, I think are solid. That’s what I would expect from a good decision theory. So the reason this matters, the reason options matter, is: if it is to be at all action-guiding, when you’re put in a situation you should know what options you have to go through.

Tuomas

Yeah. The way the basic framework works is, you enumerate all the options, calculate their value, and see which is the best one. But I guess that’s kind of part of the meta problem. I feel like it’s similar to: how do you assign the utility function? How do you know what the options are?

Max

Yeah. And so there’s been the problem of infinities. A second is this problem of what you call comparability, or precision. And then the third is the problem of determinism. And then there are problems with consistency in time too, which were the original problems with Lara Buchak.

Meta

Tuomas 42:27

Yeah, I think that’s specific to that particular proposal though; most decision theories don’t really have that problem. But yeah, and then there’s the meta-level...

Max

Yeah, and then there are meta-level problems, which are pretty serious.

Tuomas

I think that might really be the worse problem: how do you assign utilities? How do you assign a number to a real-world thing?

MacAskill’s Normative Decision Theory

Max

Yeah. How do you assign utilities? How do you even decide when to use your decision theory? When should you apply it? And what are your options? And then, I guess, briefly to go through the attempted solutions: Ruth Chang proposes the ‘on a par’ thing. And then Will MacAskill has an attempt to make a meta decision theory that brings in ideas from social choice. He does this in the case of moral theories: what do we do when we’re not sure how we should act, and we think there’s some possibility that utilitarianism is right, or deontology is right? And he says what you can do is you allow them to basically vote, or have their own utility functions that assign the different options values.

Tuomas 43:55

That doesn’t really work with decision theory, though, right?

Max

What do you mean it doesn’t work? This is like, oh yeah, this

Tuomas

is how you make the decision.

Max

This is a decision theory for choosing among moral theories, no?

Tuomas

No. But this is how you choose which option is better, right? Yes. But what is actually the probability involved? Oh, I guess the probability is part of the outcome: you have an outcome, like this action leads to probabilistic futures, I mean...

Max

That might be an input to the moral function, you know. For example, a deontologist might think that any probability of this thing is really bad, whereas a utilitarian might be like, oh, it depends on the probability.

Tuomas

I think you put the probabilities in, and then you have this probabilistic state and you calculate for a specific theory, and then you have them vote between

Max

each other, yeah. And so he has this whole machinery that’s trying to be a decision theory for moral decision making. There’s a credence function, there’s a state space, and it’s kind of like an RL setup almost.

Tuomas

This framework suffers especially badly from the problem with options though, right? Because you’re ranking the different options, and how fine-grained your option space is, or which options you consider, can affect a lot, even though perhaps your best option was already included. How many other options you consider may change the result.

Max 45:43

Yeah. So you kind of jumped the gun a bit. But what Will MacAskill proposes, in his thesis at least, is that you can use this thing called the Borda count, where you rank things according to each theory. So, you know, if it’s eating a salad versus eating a steak versus eating soup, maybe one utilitarian view thinks it’s not really bad to eat the steak but fine to eat the soup and salad, and a hedonic utilitarian thinks you should eat the steak because it’s the tastiest or something. The way you adjudicate is, you get the first utilitarian to rank the options one, two, and three, and then the hedonic utilitarian to rank the options, and then you sum the votes. But what you were just saying is that that is really dependent on how many options you have; just by adding additional, kind of inconsequential options, you can change the outcome.

Tuomas

Yeah, exactly. Yeah.
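
To make the option-dependence worry concrete, here is a toy sketch. The theories, credences, menu items, and rankings are all invented for illustration; this is not MacAskill’s own example.

```python
# A toy version of the Borda-count aggregation across moral theories
# described above: each theory ranks the options, the best of n options
# gets n-1 points, and points are weighted by your credence in the theory.
def borda(rankings: dict[str, list[str]], credences: dict[str, float]) -> dict[str, float]:
    options = next(iter(rankings.values()))
    n = len(options)
    scores = {opt: 0.0 for opt in options}
    for theory, ranking in rankings.items():
        for place, option in enumerate(ranking):
            scores[option] += credences[theory] * (n - 1 - place)
    return scores

credences = {"health view": 0.6, "hedonic view": 0.4}

# With two options, salad wins.
two_options = {
    "health view":  ["salad", "steak"],
    "hedonic view": ["steak", "salad"],
}
print(borda(two_options, credences))    # {'salad': 0.6, 'steak': 0.4}

# Add soup, keeping each theory's salad-vs-steak ordering unchanged,
# and the winner flips to steak.
three_options = {
    "health view":  ["salad", "steak", "soup"],
    "hedonic view": ["steak", "soup", "salad"],
}
print(borda(three_options, credences))  # {'salad': 1.2, 'steak': 1.4, 'soup': 0.4}
```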

Max

So all of these proposed solutions we looked at were kind of deeply flawed. And it doesn’t really seem like there’s hope for a decision theory that could, even in principle, do what we want, like telling us what to do in any given situation.

Tuomas

Yeah. The overall feeling is that many quite severe problems exist for decision theory, and none of the proposed solutions really seem satisfactory; they raise their own issues. So I guess that kind of brings up the question: is this style of decision theory just doomed? Like maybe the first principles are off? Or is it just that we’re not smart enough, and there’s a better way to formulate these things that gets rid of most of the problems?

Metarational considerations in decision theory

Max

Yeah. I mean, I don’t think it’s that we’re not smart enough. And I don’t think this means decision theory is useless. How I think about it now is that decision theory, like probability theory, is a tool that is really useful in certain domains. And what we’ve done by bringing it into these normative contexts is we’ve taken the tool out of its useful domain, and it’s breaking down.

Tuomas

Yeah, we’re trying to use a tool on a much broader set of issues than it was really designed for. Even though decision theory, sort of from the start, was kind of ambitious.

Max

Right. But I guess what I’m saying is that maybe the tool itself was more like probability theory.

Tuomas

Yeah, like probability theory is like, you know, yeah,

Max

Let’s use that for, you know, statistical mechanics, or making decisions in a casino. These are things probability was great for. But once you start to talk about human preferences...

Tuomas

yeah,

Max

That, I think, is the mistake.

Tuomas

Yeah, you’re overextending the use of the tool. Yeah.

Max

And so the problem of what you should actually do in a given situation is what Chapman would call a metarational problem. And for that, there is not, in principle, a theoretical system you can build that can tell you what to do. You have to start to move into improvisation, things like relying on reasonableness over explicit rationality, which is the domain decision theory lives in. Oh yeah, there’s also the discussion of rationality we had in the class, which was really cool, because I hadn’t thought that explicitly about what it means to be rational. What is rational? I was using it basically just to mean ‘good,’ right?

Tuomas

Yeah. Like you, you,

Max

Tuomas, you won’t drink this water that had a cockroach in it, even if it’s been filtered. That’s irrational.

Tuomas

Yeah. And that implies it’s something wrong to do.

Max

Yeah, it’s just something bad. And I think that’s colloquially how it’s...

Tuomas

Yeah, I think that’s how it’s often used. But I think people are very confused about the concept.

Max

Yeah. And so there’s a really good paper by this Russian dude. I mean, it’s actually not a good paper at all, it’s a horrible paper, where he spends 100 pages of excruciating, dense writing trying to figure out what it could mean to be rational on its own grounds, free of depending on some kind of given utility function. Without a utility function, could it make sense to say something is rational or isn’t rational? And he kind of goes back and forth, and he’s like, actually, there’s nothing that you can really rely on. So it doesn’t make sense to say something is rational or not, unless you have a very specific function or aim you’re evaluating against.

Tuomas

Yeah, it has to be like with respect to something.

Max

Yeah. And that’s my current stance on it: I will only say ‘irrational’ if you have something close in precision to a pretty clear goal, and a clear benchmark by which you’re evaluating it. So I would say, if you care about having as much money as possible in your life, then it’s not rational to buy lottery tickets. If that is the main aim, that is not rational. But I would never just say it’s irrational to buy lottery tickets. Which is how behavioral economists talk about it all the time: people are irrational, all this stuff.

Tuomas

I think it’s really, like, so...

Max

Annoying, now that we’ve read these papers. Yeah, it’s...

Tuomas

Like, people assign this power to this simple function they came up with, you know, your utility is the amount of money you have or whatever. And then they assume that’s the truth, and whatever doesn’t follow this truth is irrational. There’s a lot of implied wrong reasoning there, I think.

Max

The implications of decision theory being flawed like this are actually really big, I think. Mainly I’m thinking of behavioral economics. A lot of, at least Kahneman and the 90s-2000s behavioral economists, and I think even today, are still trying to bring in utility functions and talk about people as agents. They’re not claiming that people are rational agents anymore, if they ever did, but they are really taking seriously this notion that decisions should be guided by a decision theory.

Tuomas

Yeah. And money is like an approximation of utility, or...

Max

Yeah, I mean, I don’t know that behavioral economists subscribe to that so much. I think in practice there’s really big variation within behavioral economics papers. Some people don’t take this very seriously, some people seem to, and I probably shouldn’t be lumping them all together like this. But decision theory has affected a huge amount of stuff, like computer science too. Like, all of RL is basically an extension of decision theory.

Tuomas

I think it’s sort of the decision-theoretic framework that RL is built on. Yeah.

Max

I think it’s like 1,000%. I just...

Tuomas

Yeah, I think the thing is, I don’t think it’s wrong in that framework, right.

Max

It’s not wrong in that framework, because we are operating in game-like scenarios.

Tuomas

Yeah. Like, decision theory does have its use cases. If you’re trying to figure out the best strategy to maximize your score in a video game, then I think decision theory is probably the right tool to use: you have a limited number of options, it’s clear what they are, it’s clear what the target function is, and outcomes are comparable to each other. But it is kind of interesting: as reinforcement learning tries to move from clearer, game-like things into more realistic settings, is this kind of basic framework going to cause problems?

Max

I think so. Because, in principle, I think it has to. Because if you’re going to ask, what is a reinforcement learning agent going to do as Tuomas, in Tuomas’s life, to maximize his utility, or to maximize Tuomas’s joy or whatever?

Tuomas

Yeah, I mean, I feel like we’re going a little too far here. Replacing a person with a reinforcement learning agent is, I think, pretty far out. But if you think of some intermediary things, like...

Max

I was just trying to do the most extreme example, the most subjective thing, to point out the type of problem that...

Tuomas

Yeah. I mean, I guess that comes to the same thing: there are tools for things, and there’s the right scope for using those tools. Reinforcement learning is a tool. But if the goal is to figure out what brings me the most joy, I don’t think reinforcement learning is the tool for that, because there’s no reward function.

Tuomas

Yeah, the reward function is not clear. And the other thing is, for reinforcement learning to work, you have to be able to try things many times. You explore an action, you see what the reward is, and then you try a different action. But in real life, things only go forward; you can’t say, okay, let’s restart and play this again. That’s not an option. So it will maybe require some fundamentally different approaches. You’re starting to see it in robotics and self-driving cars: they don’t really use classic reinforcement learning that much, it’s more like imitation learning. There are hardcoded pipelines of how you act, and you only use neural networks for a specific part of the job, but then you have a rule-based system that tells you when to take actions.

Max

You mean, you only use rule-based systems or explicit RL for a specific part, but use neural nets on other parts, because neural nets don’t suffer from the problem?

Tuomas

Yeah, neural nets... no. But if you’re talking specifically about self-driving cars: a big, grand ambition of reinforcement learning would be, you know, end-to-end RL, where given inputs from the environment, it figures out the best action to take to drive safely. But that doesn’t seem like the best approach to take, perhaps partially because we don’t really know the reward function, and there are also other problems with the learning. Instead, what people kind of use to get around that is, they have a neural network to do vision, and then you have this whole beautiful map of your environment, but then you just have a rule-based system that tells you, given this, this is how you should drive.

But yeah, let’s not get too far into this tangent.

Max

Yeah. So what else do we have to say about decision theory?

Tuomas

I’d like to get back to that point, because I think it’s a tool that’s applied too far, this whole probabilistic mathematical analysis for making decisions. I guess the limitations are when it’s hard to specify what your reward is. It seems like there are cases where a function just doesn’t exist, where it’s impossible to make a function, and that kind of seems like a case where it shouldn’t be applied.

But suppose we agree that the tools of this sort of utilitarian probabilistic analysis don’t really work well for decisions. Is there something else that could achieve the aims of decision theory? How should we think about decisions? Is there any systematic way that is useful?

Max

I don’t think so. I think the mistake is to try to make something universalized like that. I think decisions are heavily contingent on the nature of the activity you’re engaged in. Like if you’re improvising a jazz piano solo, versus if you’re trying to make a career decision, versus if you’re trying to choose whether or not to break up with your girlfriend.

Tuomas

Yeah, you just should not make the same type of calculation. Or maybe it’s not even the same thing.

What constitutes a decision?

Max

What do you mean, it’s not...

Tuomas

Maybe the whole phrase ‘decision’ is somehow...

Max

Yeah, this is something that I have begun to feel more and more. The emotional disillusionment with decision theory started when I tried to explicitly apply it to my life, to decisions about very emotional things, relationship decisions, and seeing that it just was not adequate, or I couldn’t make it work. Very unsatisfying. And now, me going into a much more intuitive mode, it’s like I’m not even making decisions; things just arise, and then they unfold, and then I go and do them. It doesn’t feel like I’m making a bunch of decisions. Maybe a decision theorist would come to me and say, you are making, you know, thousands of decisions a day, many many decisions. But I would say that under that definition of decision, there are infinite decisions being made all the time. Like, what about the decision to blink your eye or something? There’s an impulse that arises, and then you can just follow it, like going on a road trip. And it doesn’t have to be a cognition, a reflective decision thing.

Tuomas

Yeah. You just do things, and that’s a more free way of being. Because, in a way, this whole frame of ‘there are certain decisions’ sort of splits life into: now’s the time to make a decision, now is the time to follow through with the decision, now you stop and make another decision. Whereas, you know, there is no break. It’s like you’re always making decisions, or you’re never making decisions. And if we believe the world is deterministic, then a decision is not any different from a rock falling down a hill.

Max

Yeah, I mean, I kind of try not to consider the problems of determinism too closely. I think it makes things very tricky. But...

Yeah, one thing that this does make me think of is that decision theory is a subset of thinking more generally, of thinking and cognition, and your thinking isn’t good for everything in your life. In fact, it’s good for a small subset of things. It’s good for when you have explicit problems. And, you know, problems aren’t things that exist out there; they’re more of a way of seeing things. So insofar as you’re willing to see this thing as a problem, and it’s useful to see it as a problem, then you should think about it. And if you’re thinking about it, then a good tool within thinking could be decision theory. But your life is not, at least I wouldn’t want my life to be considered, a series of problems to solve.

Tuomas

Yeah. This comes back to the critique of utilitarianism and stuff. This sort of ‘series of problems’ is just a very optimizing framework for life. You’re just thinking, oh, I’m trying to find the optimal strategy to get the best utility out of my life, and that’s how I should make all my choices. And maybe it just feels too rigid, too mathematical. And maybe, if we set it up that way, just by thinking about it that way, you sort of lose something. Like maybe life could be something more magical and mysterious and free, and this sort of mathematical analysis doesn’t really allow that.

Max

It doesn’t allow you to live in that way. And it can also destroy that. Yeah, that’s

Tuomas

what I was thinking: it kind of directly takes that away.

Max 51:00

Yeah, this is how I feel. It’s not just that it leads you to behave in certain ways that might not make you happy. It can actually lead you to construct your environment in ways that conform with your decision theory, and perpetuate this rigid type of worldview. This is hard to talk about precisely, but I’m imagining someone who thinks they’ve got their utility function figured out. So they have a stable house in a nice suburb of Boston or something. They set up all these things because they know it gives them this amount of utility. They’re very much like: this is the way my life has to be, because this is the way that is optimal. And then the world inevitably changes, and they go: I am not at an optimum, I am not in equilibrium, I am suffering. And they suffer.

Tuomas

Yeah. This pursuit of the optimum just causes you pain.

Max 52:16

I think there’s a double-edged sword here. One edge is that you don’t know what the optimum is. The other is that the very strong pursuit of it leads you to hurt yourself, or to cause pain.

Tuomas

Yeah. And even if it’s not a very strong pursuit, I think this sort of framing might just take away some of the joy and excitement.

Max

But I mean, thinking about problems can be fun, too. Thinking about the problems of decision theory, for instance.

Tuomas

Yeah, if you’re just doing it for fun. But I am, in a way, afraid of having my life all figured out. If I had a fully defined, true utility function, it just kind of feels boring to be optimizing for that target.

Max 1:08:27

I mean, it is boring. And it’s also, I think, incoherent to think that there’s a true utility function. A true utility function for whom? For you now, or for you in ten years?

Our ‘utility function’ changes over time

Tuomas

Yeah. And also, a lot of decision theory assumes that the utility function is fixed for an agent. I don’t think that’s true at all. And the problem is, it doesn’t just vary slowly and in predictable ways. The actions you take change your future utility function, right? I think that actually breaks the whole thing. How do you account for a varying utility function? Should you take the actions that give you a utility function that is the easiest to satisfy?
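
To make that worry slightly more precise, here is a rough sketch in standard expected-utility notation; the notation is textbook, not something from the recording. Standard decision theory ranks an action $a$ by

$$EU(a) = \sum_s P(s \mid a)\, U\big(o(a, s)\big),$$

with a single fixed utility function $U$ shared across every option. If choosing $a$ changes who you are, each action plausibly comes with its own $U_a$, and the natural repair,

$$\sum_s P(s \mid a)\, U_a\big(o(a, s)\big) \quad\text{vs.}\quad \sum_s P(s \mid b)\, U_b\big(o(b, s)\big),$$

compares numbers on scales that need not be commensurable: a utility function is only defined up to positive affine transformation, so $U_a$ and $U_b$ share no common zero point or unit, and the inequality $EU(a) > EU(b)$ no longer picks out anything meaningful. This is one formal reading of why "how do you account for a varying utility function?" has no obvious answer.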

Max

Yeah, I mean, people talk about this as the vampire problem. It’s a silly thought experiment, but there’s an everyday version: the problem of having kids. Should you have kids is a typical quandary people face, and the whole problem is that from the outside, people who have kids look like they’re miserable, but you ask them and they’re like, actually, this is so meaningful.

Tuomas

Yeah, exactly. And

Max

basically, in both directions, you think the other person is worse off. If you don’t have kids, you’re like, wow, I’m glad I’m not suffering like that, because your quote-unquote utility function is still similar to your current one. But when you do have kids, your quote-unquote utility function changes, and you start to really enjoy what looks like

Tuomas

suffering. Yeah, you value different things because you have kids. Right.

Max

You’re deeply changed as a result. So how are you supposed to evaluate that, when it’s a different being in the future?

Tuomas

Yeah, it’s a super interesting question.

Max

A lot of problems.

Tuomas

Yeah, it’s falling apart here. So what do you say? How should you live your life? Should you even think of things in terms of decisions? I think there are certain situations where there clearly are decisions, where it makes sense: you have options and the choice determines some part of your life, like where you should go to school. Some things are more clearly decision-like, and maybe you should analyze those a little more rigorously.

Max

Yeah. So I think systems are definitely helpful, and decision theory is a potentially helpful system. I don’t think I would ever try to put explicit utilities on things, but I would try to write out different scenarios. When I was trying to think about what school I should go to, I was deciding between Columbia and MIT. I never wrote it down explicitly, but I was thinking it through in my head: what it would mean to go to one over the other, what that would imply about my future, what skills I would gain. And I did kind of have a pro and con list. I think that is, in general, a very useful thing to do. So my feeling is that it’s very helpful to have skill and training in technical, formal tools. Learn them deeply, be able to apply them, and then kind of forget about them. Don’t try to use them as a framework for your life, but allow them to come up when appropriate to the situation, because your mind and your body are very, very good at understanding context and drawing up the useful thing to do there.

Tuomas

Yeah, yeah.

Max

Yeah, I guess the more game-like the scenario, the more clearly helpful it is to use decision theory.

Tuomas

Yeah. I think the main issue is really not so much decision theory itself as the idea of utility functions. If you don’t have a utility function, then decision theory can’t really tell you how to do anything.

Max

Yeah, too bad we don’t have utility functions. But it is useful to act as though we do in certain situations, like financial accounting. Trying to calculate the net present value of two projects is a kind of decision theory that I think is really helpful, because you don’t particularly care whether you’re building a power plant in Kenya or Zimbabwe or something; it doesn’t materially matter. Maybe you might care for other reasons, or maybe you should care. But in a situation like that, it works.
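
For instance, a minimal net-present-value comparison might look like the sketch below; the cash flows and the 8% discount rate are made-up numbers for illustration, not anything from the recording.

```python
# Minimal net-present-value comparison between two hypothetical projects.
# Cash flows and the discount rate are illustrative assumptions.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount a series of yearly cash flows (year 0 first) back to the present."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

discount_rate = 0.08  # assumed cost of capital

# Year-0 outlay followed by yearly returns, all in the same currency.
project_a = [-1000.0, 300.0, 300.0, 300.0, 300.0, 300.0]
project_b = [-1000.0, 100.0, 200.0, 300.0, 400.0, 600.0]

npv_a = npv(discount_rate, project_a)
npv_b = npv(discount_rate, project_b)

print(f"NPV of A: {npv_a:,.0f}")
print(f"NPV of B: {npv_b:,.0f}")
print("Prefer A" if npv_a > npv_b else "Prefer B")
```

The reason this kind of calculation is unproblematic is that everything already comes denominated in a single unit, money at a given date, so the "utility function" is handed to you; the personal cases discussed above are precisely the ones where no such common unit exists.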

Tuomas

Yeah, I think the more removed it is from personal things and feelings, and the less detailed knowledge you have about it, the more useful it maybe is. If you’re thinking about money, it’s a pretty good framework. Or if you’re a leader making decisions over a large population, you would just estimate that certain things are good, that other people’s lives have equal value or something, and that could be a rough utility function that can guide your decisions. But when it comes to personal decisions, or decisions where there are many factors that can’t be compared with each other, I think it’s sort of a hopeless pursuit to try to use decision theory. And there, maybe there just isn’t a right way to make those choices.

Max

Yeah, I mean, a right way would imply

Tuomas

yeah, a rational one. So maybe there’s no wrong or right choice to be made.

Max

I mean, sometimes

Tuomas

you can make wrong choices, right? You can make

Max

wrong choices, but

Tuomas

I don’t know. Yeah, maybe a lot of the choices are on par, but some are not.

Max

You’re still thinking like a decision theorist there, trying to patch it.

Tuomas

Right, I don’t know. Maybe how I would personally live my life is something like this: you list out the expected consequences of different actions, and then you get a gut feeling about how you feel about the sum of all of it, good and bad. A lot of the time you can say, this sum seems better to me than that one. But sometimes you’re like, well, these are just different, I don’t really know.

Max

Okay, I think this concludes our discussion on decision theory.

Tuomas

On the top of the mountain,

Max

On the top of the mountain, in San Diego, yes. Good conversation.



  1. I now see that decision theory is far more influential than I had initially realized. It embeds core assumptions that run through large swathes of economics, cognitive science, and analytic philosophy; really, anything that refers to a ‘rational agent’ relies on some form of decision theory. I now believe that there can be no fundamentally coherent decision theory, and by implication, that anything built on the problematic ‘rational agent’ is ‘built on shifting sands’, so to speak. ↩︎