0:00:00 Sean Carroll: Hello everyone, and welcome to The Mindscape Podcast. I’m your host, Sean Carroll. Before jumping into today’s episode, let me just talk about some meta commentary, the state of the podcast. As we all know, this is a new project for me. We’re all learning what’s going on. I’m having fun, I hope that you’re having fun. But we’re still propagating the podcast out there in different ways. I certainly appreciate any efforts that you folks make to get the word out through Twitter, or recommending it to your friends; iTunes reviews make me just as happy as any other podcaster.
0:00:34 SC: Some of you might know that Libsyn, the host that I use for the actual podcast episodes, has an automatic thing where they will put the podcast on YouTube. It’s just a still image, it’s not really a video. It’s just the Mindscape banner logo there. But the audio is the same on YouTube as you would get on the podcast. So for those of you listening to it, if for whatever reason you would like to watch it on YouTube, go ahead. There’s no video there, there probably never will be any video. I know people say video would be better, Joe Rogan does video. Joe Rogan also makes millions of dollars off of his podcast. And so far I am putting money into it and not getting any out. Maybe some day I would get some out. I would like to at least pay for the microphones and so forth.
0:01:21 SC: I don’t know what that would eventually mean. Maybe just some kind of Patreon page or something like that. That would be ideal. If it comes to ads, then that’s fine because you can get money, and I like money. But we’ll have to see. It’s not gonna be something that happens right away. For those of you who are watching on YouTube already, just so you know, it’s also available on audio in the usual ways: On iTunes, Stitcher. It finally got onto Google Play. Google Play was having a problem with Libsyn feeds. So you don’t need to have your YouTube open there. You can just click on the links that are in the description of each video on YouTube. It will take you to the podcast website where you can just listen to the audio or download it onto your apps, onto your phone, whatever you want.
0:02:06 SC: There is even a subreddit, a Sean Carroll subreddit, that you can check out if you wanna discuss further. The YouTube comments have been awesome, but a more centralized place to go is on Reddit where people are talking about these things. So I think that’s all that I wanted to say for the commentary stuff. So let’s get into today’s episode, the theme of which is that we don’t know everything as we go through life but we still have to make choices. We still have to act in the world somehow. But we live in a situation of incomplete information. We don’t know what’s going to happen in the future, we also don’t even know everything about the world right now.
0:02:42 SC: So what we do is we attach probabilities to things. We say there’s a certain chance that something’s going to happen. A credence is how much weight we put on one possibility versus another, and then we act accordingly. Formalizing this idea of placing probabilities on things and then acting accordingly is called betting. And nowhere is the idea of betting made more explicit and formalized than in the game of poker. Poker is one of my favorite games. It’s a game of skill that has luck involved because you don’t know what the other players have, but you can make choices. And with some of those choices you can win. There are people who make money over the long term, very regularly, playing poker. Those are called professional poker players. They really exist. There are no professional slot machine players, ’cause there you’re just losing money overall.
0:03:31 SC: But poker is a game of skill. And today’s guest, Liv Boeree, is a perfect person to talk to about this stuff. Liv was a physics major as an undergraduate at the University of Manchester. And she was quite good at that, but she decided instead to add a little spice of excitement to her life by becoming a professional poker player. And she’s done pretty well. I can’t say it was a bad choice. She’s made almost $4 million in winnings at the poker table. And so we’re gonna talk about what you learn from being a poker player. What that helps you do in terms of your everyday life, making choices about one thing to do or another in conditions of incomplete information where probabilities are central. Liv also still remains interested in science, and is developing TV projects to help popularize science. She also has become interested in altruism and charitable giving within the umbrella of effective altruism, trying to use empirical data to maximize our impact as charitable givers. She’s the co-founder of an organization called Raising for Effective Giving, which funnels money to organizations that have shown that they really have gotten good bang for their buck in terms of the charitable dollars.
0:04:42 SC: So we’re gonna talk about poker, we’re gonna talk about altruism. We’re also gonna talk about how to think about the existence of aliens. Whether or not aliens exist out there in the world turns out to be something that’s at the intersection of science, and also probability. How do you reason about something like that when you have so little data? So let’s go.
0:05:19 SC: Liv Boeree, welcome to the Mindscape Podcast.
0:05:21 Liv Boeree: Thank you very much.
0:05:22 SC: So now, you were trained at least undergraduate education-wise, as a physicist… Astrophysicist.
0:05:28 LB: Yes.
0:05:29 SC: Now you’re a professional poker player.
0:05:32 LB: Correct.
0:05:32 SC: So just to set the stage, I think for a lot of people, if you say “professional poker player” they know exactly what that means. I bet a lot of people in the audience are unclear that there is such a thing as a professional poker player. Is that even legal? Are you in downstairs? Is it like Rounders? Are you getting in trouble? What is the situation? What is your lifestyle like? How does this work, being a professional poker player?
0:05:54 LB: So it depends, obviously, from person to person. There are different types of professional poker players. But I, for the most part, play a type of poker called tournament poker where everyone puts up an equal amount of money, and in exchange for that you will get a certain amount of chips. And basically, everyone keeps playing until someone… ’til everyone else is knocked out and one person has all the chips, and wins either all the money or certain sort of distribution of the prize pot.
0:06:24 SC: It’s like capitalism.
0:06:25 LB: Exactly, yes. A very aggressive zero-sum form of capitalism. So I got into it in a very random way. As you said, I studied physics. I fully intended to carry on in physics. I loved it. But at the same time, I had been in education… I was 21, I also had ambitions of being a rock star. I was very into heavy metal and disturbing my neighbors late at night playing guitar. And so I was like, “You know what, I’m gonna take the summer off after my graduation and start… I need to make some money somehow, so I’m gonna apply for TV game shows,” ’cause I felt like that made a…
0:07:03 SC: As a contestant?
0:07:04 LB: As a contestant. I always liked playing games. Long story short, I learned to play poker on one of these game shows, and I loved the game. And I was like, “You know what? I think I want to take a few years and throw myself into this game. I think I have an aptitude for it, it seems like a lot of fun, I want to travel the world and get some life experience.” Still in the back of my mind I’m saying, “I’ll come back to physics.”
0:07:24 SC: Right. Once we’ve made our pile we can go think about the universe.
0:07:26 LB: Yeah. “Let’s make some money, pay back my student debt, etcetera.” And now 10 years later, still here.
0:07:35 SC: The debt has long since been repaid.
0:07:37 LB: Yeah, exactly. Well, things just turned out pretty well in the poker sense. And yeah, so in terms of the sort of lifestyle, it’s a lot of, for me at least, travelling to different tournaments. So there’s a big stop in Barcelona every August, or there’s a big stop in Vegas, which I’ve just come from, the World Series. And you also travel around to these… It’s not the pro tour because anyone can play them, but the professionals tend to… Like, that’s their lifestyle; going from event to event and playing in these tournaments.
0:08:06 SC: So literally anyone listening right now, as long as they have the right amount of money, can show up day of the tournament and say, “Here I am. It’s… ”
0:08:12 LB: “I wanna play.”
0:08:13 SC: It’s not like a golf tournament or basketball. Anyone is welcome, and it’s a meritocracy once you’re there.
0:08:18 LB: Exactly. Yeah, so anyone can play. It doesn’t mean you necessarily should play, but if you want to and you have the money and the desire then you absolutely can. And I mean, I think that’s what obviously separates poker a little bit from something like golf, because golf is… The results of a golfer are very closely correlated to their skill level. Whereas with poker, particularly over a small sample size of maybe 10, even 100 tournaments, the worst players could be vastly outperforming the best players. And so to say, “Oh, well we’re only gonna let in the people who’ve had the best results over the last six months,” that’s stupid. And people would just get very annoyed with it because A: First of all, everyone thinks they’re the best. That’s why poker exists. But B: It wouldn’t actually be correlated to necessarily being the best players getting in.
0:09:06 SC: The ones who are good would like the money from the people who are not good to be in the tournament pool, right?
0:09:12 LB: Exactly, yeah. Yeah, it’s this constant battle between, “Well I won’t be playing against the best, I’m gonna say I’m the best. But at the same time, I don’t wanna scare people off.” And so yeah, you need the weaker players you want to be playing against, in an ideal world.
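Liv’s point about small sample sizes is easy to check with a toy simulation. All the numbers below (buy-ins, cash rates, payouts) are invented for illustration, not real tournament statistics; the sketch just shows how often a weaker player can finish ahead of a stronger one over 100 tournaments:

```python
import random

def simulate_run(cash_prob, n_tournaments=100, buyin=1.0, payout=6.0, rng=None):
    """Net profit over a series of tournaments.

    Toy model: each tournament costs `buyin`; with probability
    `cash_prob` the player 'cashes' for `payout`. All numbers are
    invented for illustration, not real poker statistics.
    """
    rng = rng or random
    profit = 0.0
    for _ in range(n_tournaments):
        profit -= buyin
        if rng.random() < cash_prob:
            profit += payout
    return profit

random.seed(0)
trials = 10_000
# The 'good' player has positive expected value (0.18 * 6 - 1 = +0.08
# per tournament); the 'weak' player has negative EV (0.15 * 6 - 1 = -0.10).
weak_ahead = sum(
    simulate_run(0.15) > simulate_run(0.18) for _ in range(trials)
)
print(f"Weak player finished ahead in {weak_ahead / trials:.0%} of 100-tournament runs")
```

Even with a meaningful skill gap in expected value, the weaker player comes out ahead in a sizeable fraction of 100-tournament runs, which is why recent results would be such a poor admission criterion.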
0:09:26 SC: But it is a game of… It’s a combination of luck and skill, right? Like everything in life. But it’s not just luck. There’s a certain element of randomness. You’re dealt the cards, but then you play them in a certain way. So it’s not like playing blackjack or craps, where there’s really no way to win without some kind of cheating.
0:09:42 LB: Yeah, exactly. I mean, with those games… Yeah. The casino, it’s built its big golden halls and amazing rooms and spaces because they have this small edge that over time they make money from. And the longer you play, unless you know how to count cards, which again, the casinos will kick you out for doing, you can’t make money playing blackjack and roulette. Whereas poker, because you’re playing against other people and the casino hosts it, basically, because… I think casinos don’t really wanna host poker. They’re not really making great money from it. Basically just a small percentage of each pot. But yeah, with poker you’re playing against other people. And so if you are consistently making better decisions than your opponents, then you are going to be making a profit in the long run.
0:10:29 SC: I do get the feeling, though, that the casinos enjoy having poker players around. Some of them are very good at playing poker, but then are perfectly willing to go blow it at the baccarat table.
0:10:39 LB: That’s a very, very correct observation. Yeah. I mean, I’ve known many, many poker players who were some of the best in the world, and yet had no money to their name because they were the worst in the world at walking past the roulette table. It’s just another game, just walking past it. You’re just, “Who’s the best at walking past it?”
0:10:56 SC: And they know that they should lose, on average.
0:10:58 LB: Yeah. But they just love it.
0:11:00 SC: Yeah, they’re paying.
0:11:03 LB: Yeah. I think to be a good poker player you obviously have to have a willingness to… You cannot be a risk-averse human. You have to be willing to gamble, to take risks, to have those hot-in-the-throat moments. But at the same time, you need to be able to do that in an intelligent, moderated way. Certainly the best players you see around now, most of them actually don’t have what we call the degenerate gambler streak in them as much, but there are still some who do struggle with it. And it’s quite a tragedy to see someone who’s worked so hard at being so good at one particular game, but then have this sort of life… We call it a life leak, a leak in the bottom of their pocket, which is some other form of gambling. But it makes for great stories though.
0:11:48 SC: It does. And it also contributes to this aura that the game has of being slightly disreputable, right? No matter how much money and TV exposure and everything, and you still, to this day, do as you allude to, hear stories of some of the world’s best players just disappearing. Like, they’re hiding…
0:12:06 LB: Right.
0:12:07 SC: But they come back. “Oh, they just lost $5 million over the weekend.” It therefore must be a very different kind of lifestyle than maybe you thought you were signing up for when you went to university as an astrophysics major.
0:12:19 LB: Yeah, yeah. I mean, I’ve seen this sort of… I had some insights, I have to have some insights into the world of… The top chess players, for example. And it is, for the most part, a very different sort of social dynamic. Because the best chess players, they really have to… They just study and study and study and study. And I mean, they’re brilliant, obviously, at what they do. But I don’t know… I think poker players have a little bit more of the wild… The sort of, not free spirit, but you know what I mean. Like, this wildness to them.
0:12:53 SC: Right.
0:12:55 LB: These urges, I guess, that they want to fulfill. They get thrills from many other things as well, whereas chess players tend to… They have to be so laser-focused on what they do. Sorry, where am I?
0:13:06 SC: But just to continue exactly in that line, the thing that you’re doing when you’re playing poker is fundamentally different than the thing that you’re doing when you’re playing chess. When you’re playing chess, both players see the board 100%.
0:13:19 LB: Right.
0:13:20 SC: When you’re playing poker you don’t know what the other person’s cards are, and so they put out a big bet. They go, “Alright, here it is. This is all your stack. This is for your tournament life. You have a pretty good hand, but not the metaphysically perfect hand.” And you need to ideally say, “Well, there’s a certain chance that they’re bluffing, there’s a certain chance that they’re not. And I’m just gonna risk it.”
0:13:45 LB: Yeah, exactly. Like with chess you have all the information out there. There is no hidden information. I mean, sure, you can’t directly see into the other person’s brain to see what specific strategies they’re using, but you usually have a pretty good idea. And there aren’t very definite bits of information that you don’t have. Whereas poker, yeah, it’s incomplete. And so that’s why you have to think probabilistically. And so then it becomes largely a science of being able to quantify these uncertainties that you have. Like, “Okay, well, what is the range of hands that they could conceivably be playing here? Okay, it’s from pocket twos to ace-king,” and like that. But then I have the piece of information that they bet this particular card but not this one, and they bet this size. And then on top of that there are even harder to quantify things like, “Oh, the pulse in their neck seems to be going really, really strongly here when I wouldn’t expect it to.”
0:14:42 LB: “This is inconsistent with the story they’re telling with their chips.” So what do I do with that? And so it’s this… There’s all these little bits of variables of information, of some of which you’re extremely confident about and others you’re really not. And you’ve got to sort of evaluate all of these somehow, and put them in together to come out to one answer that is most likely to be the correct decision that you make.
0:15:11 SC: You know Ed Witten, the physicist?
0:15:12 LB: Who?
0:15:12 SC: Ed Witten, the famous string theorist?
0:15:14 LB: I’ve heard of… Yes.
0:15:15 SC: Yeah, he would probably win the poll if you asked most theoretical physicists in the world today, “Who’s the smartest theoretical physicist around?”
0:15:23 LB: Oh really?
0:15:23 SC: Yeah.
0:15:23 LB: Okay.
0:15:23 SC: He doesn’t like poker because he says, “I don’t know what the other person has. In chess I could out-think everybody, but in poker you have to make up these… ”
0:15:33 LB: Like heuristics, basically. You have to…
0:15:34 SC: Heuristics, models, right…
0:15:35 LB: Yes.
0:15:36 SC: Of the opponent. And of course, the counter to that would be that John von Neumann helped invent game theory ’cause he wanted to become a better poker player, literally. He was playing poker.
0:15:44 LB: Yeah, that’s the thing. I’m surprised Ed Witten doesn’t like it so much, ’cause it’s still ultimately a mathematical game.
0:15:51 SC: It is. But there’s also psychology. So I’m sure one of the classic questions you’re asked is, what is the balance of…
0:15:57 LB: Yes.
0:15:58 SC: Doing math versus doing psychology when you’re literally at the table?
0:16:02 LB: So the answer to that is, it entirely depends. I hate throwing probabilities out there when there’s… It really does wildly depend. I would say like 90%. And so say, for example, I wanted to make you into the very best poker player in the world, I would want to sit with you and make sure that you have put 90% of our effort into making sure that you understand the game theory.
0:16:28 SC: The math, right. Yeah.
0:16:29 LB: The math. The building blocks of the game so that your strategies are built upon that, ’cause it’s ultimately a mathematical game. Then on top of that, if I can teach you some of the psychology and the harder to quantify parts, these sort of, I guess, build your intuitions up so that you are able to intuit from people’s behaviors; what they’re thinking. But that’s just more really just a matter of experience, I think ultimately. Some people are more inclined… Are just better at reading people than others naturally, but I still think it’s ultimately a learnable skill. And again, some of the best poker players still find ways to almost quantify their uncertainty about these very psychological human characteristics.
0:17:19 SC: Yes.
0:17:20 LB: You’ll know what the correct… You know, I should be calling 40% of the time according to game theory. And then I have a piece of evidence that suggests that actually the opponent’s bluffing far more rarely than that. Well, how confident am I in that? And then I might move the needle down to, say, somewhere around 35% that I’ll call instead. But it’s still something that you can quantify.
0:17:40 SC: Yeah, I once went to a movie premiere with a bunch of people. And one of the people was Phil Laak, the poker player, famous poker player.
0:17:48 LB: He calls himself the Time Scientist.
0:17:50 SC: A time scientist? I did not know that.
0:17:52 LB: Yes.
0:17:52 SC: We didn’t talk about that. But we’re chit-chatting, a whole bunch of us, in the lobby before the movie starts. And Phil suddenly says, “There’s an 83% chance that I’m gonna have to pee before the movie ends, so I should go to the bathroom right now.” And I didn’t know whether or not that sort of silly precision of 83% was a joke, or whether he’s just that good at estimating things like that.
0:18:12 LB: Knowing Phil, he is pretty good. He does like to think in terms of probabilities. But that degree of granularity, I would estimate…
0:18:19 SC: He’s just having fun.
0:18:20 LB: He’s just having fun. Yeah.
0:18:22 SC: So obviously, there’s math in the sense of, “Okay, I need a heart to come down and I only have one card. There’s some elementary math: What is the percentage…
0:18:29 LB: Yes. Pot odds…
0:18:30 SC: Chance a heart will come down? Right.
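The elementary math Sean mentions here, counting “outs”, is simple enough to sketch, assuming the standard flush-draw setup: two hearts in hand, two on the board, one card to come:

```python
from fractions import Fraction

# Flush draw on the turn: you hold two hearts, the board shows two more,
# so 9 of the 13 hearts are still unseen. You can see 6 cards (2 hole
# cards + 4 board cards), leaving 46 unknown cards for the river.
outs = 13 - 4          # hearts you have not seen
unseen = 52 - 6        # cards that could come on the river
p_hit = Fraction(outs, unseen)

print(f"{outs} outs / {unseen} cards = {float(p_hit):.1%} to hit the flush")
# A common table shortcut ("rule of 2"): outs * 2 ≈ 18%, close to the true 19.6%
```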
0:18:31 SC: But there’s also higher level math, in particular the game theory situation with sort of optimal strategies and dominant strategies and exploitative strategies. Do you study that as part of your education, training as a professional poker player?
0:18:45 LB: Yeah, ideally you wanna familiarize yourself with… Well, you’re lucky in this day and age, there are these solvers out there that will basically run Monte Carlo simulations in different… You can go, “Okay, what should I do with jack-nine suited on a 10-7-4 one-suit board of the same suit?” Or similar scenarios like that. Put them into the solver, let it run for a few hours, and it will show you what the game theory optimal strategies would be with the range of hands that you conceivably have in that situation.
0:19:25 LB: So there would usually be a percentage of those hands that you would want to check-raise because they’re really good, for example, because they have a lot of equity; there’s a percentage that you would want to check-call because they’re kind of middle of the road; and then there’s a percentage that you would want to bluff, because you need to balance out your range. You can’t just check-raise with the strongest hands, you need to have some bluffs in there so that you can get called. You need to have the appearance of having a balanced range of hands, value hands and bluffs, when you make an aggressive action.
0:20:00 LB: And so anyway, you can literally find out what is the, as mathematically close to the perfect game theory optimal solution is now, in these different scenarios. And so, the very best players will sit and work with these solvers. You can’t use them in real time, but you can go away and just study for hours and sort of try to basically build them into a model that emulates these game theory optimal solutions. And then once you have these, this mental model of game theory optimal, then you can go out there and play using the style as a sort of, as a base. But then, you would then deviate from it as in when you come across an opponent who is playing not in a game theory optimal style. Because a good way to think about it is, if you and I were to play rock-paper-scissors and you don’t know anything about me, what would be the strategy, the best strategy that you would employ?
0:20:49 SC: I believe the best strategy is randomly do one third, one third, one third in rock-paper-scissors.
0:20:54 LB: Exactly, Yeah. Exactly. You wanna perfectly randomize because you don’t have any information about me. So in response to that, the best case that I can do is to also randomize perfectly.
0:21:04 SC: Right.
0:21:04 LB: ‘Cause if I don’t do anything on that you’re gonna be exploiting… You’ll be taking advantage of me. So that’s like a Nash equilibrium basically.
0:21:12 SC: Sorry, explain what that is for the people out there in podcast land.
0:21:16 LB: So, a Nash equilibrium is basically a state, a strategy where it’s unprofitable for either competitor or any competitor, it can be multiple competitors as well, like more than two, to deviate from that strategy. If you deviate from it, you’re only gonna be losing on average, in the long run.
0:21:36 SC: So, it’s the equilibrium where everyone is doing as best they can.
0:21:40 LB: As best they can and breaking even. In this situation you’re breaking even against each other. Like if we’re both playing…
0:21:45 SC: Zero-sum game.
0:21:45 LB: Exactly, yes. However, if you don’t notice that I start throwing rock every time…
0:21:52 SC: Yeah.
0:21:53 LB: Should you continue playing this…
0:21:54 SC: I could beat you there. I could have a better strategy, I guess.
0:21:56 LB: Exactly, you wouldn’t wanna… You’d be stupid to carry on just playing this perfect 33% randomized thing. No, you’d wanna start playing paper, far more often or like 100%. And so that is what you’d call then an exploit. You exploit my bad play by deviating from the previous game theory optimal.
0:22:12 SC: Right.
0:22:12 LB: And so that same kind of thing applies in poker. You’ll be playing… You’ll start off maybe ideally playing this game theory optimal strategy, and if your opponent’s equally good, they’ll hopefully try and match that and you’ll be, in the long run, breaking even against each other. But because basically, no human in the world can play perfectly game theory optimal, there are these room… You wanna see… Over the course of time, you’ll observe what these mistakes other people are playing and so you’ll deviate and start exploiting those.
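The rock-paper-scissors discussion above can be sketched directly: a uniform mix only breaks even against anything (the Nash equilibrium property), while an exploit crushes a predictable deviation. A minimal simulation; the strategies are the standard textbook ones, nothing here comes from the conversation beyond the example itself:

```python
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def play(strategy_a, strategy_b, rounds=100_000, rng=random):
    """Average per-round payoff to strategy_a over many rounds."""
    total = 0
    for _ in range(rounds):
        total += payoff(strategy_a(rng), strategy_b(rng))
    return total / rounds

uniform = lambda rng: rng.choice(MOVES)   # the Nash equilibrium mix
always_rock = lambda rng: "rock"          # an exploitable deviation
always_paper = lambda rng: "paper"        # the exploit: paper beats rock

random.seed(1)
print(f"uniform vs always-rock: {play(uniform, always_rock):+.3f}")       # near 0: equilibrium only breaks even
print(f"paper   vs always-rock: {play(always_paper, always_rock):+.3f}")  # +1.000: the exploit wins every round
```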
0:22:39 SC: I was highly amused to learn that there were competitive rock-paper-scissors leagues.
0:22:44 LB: I didn’t know that they exist.
0:22:46 SC: They do, because people are not good at randomizing. Right?
0:22:47 LB: How? Right.
0:22:50 SC: And in fact, this is a very educational moment for me also: the New York Times had an app where you could go and play rock-paper-scissors against the app, and basically it had been trained, as an AI sort of thing, on millions of real-life human rock-paper-scissors players and realized what the tendencies were.
0:23:10 LB: Wow.
0:23:10 SC: What the deviations from randomness were and it would beat you. And I went in there…
0:23:15 LB: We both kind of edge over…
0:23:17 SC: It was small, it was not large, but noticeable…
0:23:20 LB: Like sub 5%?
0:23:21 SC: Over 20 hands, you could definitely tell that you were losing.
0:23:25 LB: Really?
0:23:25 SC: Yeah, that was my experience.
0:23:26 LB: Jeez.
0:23:27 SC: But then what I realized was… So, I went in there cocky. I’m like, “I know how the hell this is working, I’m gonna beat it.” And it destroyed me. So then I realized… There was something that I felt I should be doing. Like, I played rock two times in a row, I should play scissors or something like that, right? And I would tend to do that and I was losing…
0:23:45 LB: You don’t do these same thing in a row many times.
0:23:47 SC: Yeah, that’s right. That’s hard to do ’cause you don’t realize how many times those sequences come up randomly.
0:23:52 LB: Right, yeah.
0:23:53 SC: But what I realized was, if I figured out what my impulse was, and figured that the New York Times app would probably play a dominant strategy against my tendency…
0:24:06 LB: Natural impulses. Yeah.
0:24:07 SC: Then I should play the one that would beat that. And suddenly I kicked its ass. I really did.
0:24:11 LB: Wow.
0:24:12 SC: It was just a matter of…
0:24:14 LB: Over how many… How many iterations?
0:24:16 SC: It’s not the most exciting game in the world, right? So, dozens, but not hundreds of iterations. Let’s put it that way.
0:24:22 LB: That’s awesome. I wanna do that.
0:24:23 SC: Well, we can find that. And another thing along these lines: I remember the first time I learned, in the realm of Nash equilibrium and game theory, the fact that in games like this, mixed strategies will generally be the dominant or the equilibrium optimal strategies. In other words, there’s no one action you should take every time in a given situation. It’s perfectly obvious in rock-paper-scissors, but it’s also true in poker: if you have pocket aces in some position, the best possible strategy is not to do the same thing every time, and that’s almost always true no matter what you have. And then, if the strategy is to do one thing 90% of the time and something else 10% of the time, it’s really hard to ever do that 10% thing. You know you’re not doing the best thing. Right?
0:25:09 LB: Right. What some of the top players have started doing now, because everyone knows that you need to be able to be basically a perfect randomizer machine, not only to remember what ratios you need to have in certain situations, where you want to be bluffing with, say, 10% of your hands and betting for value with 90%. How do you randomize that? Well, people have started looking at watches like yours. You’ve taken it off, but old analog watches with a second hand have become popular again. Why? Because it’s a really good randomizer. You just use where the second hand is on the clock face. If you know you wanna check-raise 25% of the time: is it between zero and 15 seconds? Oh yes, it is, okay, this is a check-raise situation. If it’s not, okay, I check. Cool.
0:26:00 SC: Do you do that?
0:26:01 LB: Not as much as I should. I’ve done it a few times, but, A: I always forget my watch, and other players do it by actually just shuffling their chips all the time, ’cause usually the chip will have the word written on it, and just where it lands…
0:26:15 SC: Interesting.
0:26:16 LB: But again… Yeah. That’s a way, and even if you’re… This is the thing, people think, oh, you need to then keep this secret that you’re doing this. No.
0:26:21 SC: It doesn’t matter. Yeah. It’s the optimal strategy. Yeah.
0:26:22 LB: Even if your opponent knows, they just go, “Oh, shit they’re playing GTO against me”. Okay. I need to… It’s just an unsettling thing to know basically.
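The watch trick Liv describes is just an external source of randomness for executing a mixed strategy. A minimal sketch of the same idea in code; the 25% check-raise frequency is the hypothetical number from the conversation, not solver output:

```python
import random

def mixed_action(freqs, rng=random):
    """Pick an action according to a mixed strategy.

    `freqs` maps action name -> probability; this plays the role of the
    second hand on the watch: an outside randomizer so you actually
    bluff (say) 25% of the time instead of when it 'feels' right.
    """
    r = rng.random()
    cumulative = 0.0
    for action, p in freqs.items():
        cumulative += p
        if r < cumulative:
            return action
    return action  # guard against floating-point round-off

# Illustrative spot: check-raise 25% of the time, otherwise check-call.
strategy = {"check-raise": 0.25, "check-call": 0.75}

random.seed(7)
counts = {"check-raise": 0, "check-call": 0}
for _ in range(10_000):
    counts[mixed_action(strategy)] += 1
print(counts)  # roughly 2,500 / 7,500
```

As Liv notes, the method loses nothing by being known: the whole point of an equilibrium mix is that it cannot be exploited even by an opponent who knows exactly how you are randomizing.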
0:26:31 SC: That’s right. Yeah. But part of me thinks that it’s less important to actually play the game theoretical optimal strategy than to make your opponent think you’re playing it. Right? Because they’re not perfect optimizers or randomizers either. So if you’re supposed to do something two thirds of the time and something else one third of the time, and your randomizer makes you do the one-third-of-the-time thing three times in a row, then I would definitely do the other thing the next time, no matter what the randomizer told me, because my opponent is not building the correct model of me.
0:27:01 LB: That is true to an extent, yes. But at the same time, if your opponent is also playing this perfectly-randomized-in-the-right-way style, their style will not be losing out, because they are playing this un-exploitable style. Then arguably you’re still going to be playing less perfectly than they are. And so in the long run, and it always comes down to the long run, you’ll be playing a less profitable style than them. Or sorry, a more exploitable style than them.
0:27:35 SC: But when it comes to poker at a table with six or 10 people, we don’t know the game theoretical optimal style.
0:27:40 LB: No.
0:27:41 SC: Poker has not been solved. And in fact, as far as I know, artificial intelligence cannot win at a 10 hand, 10 player table against a top player.
0:27:51 LB: Yeah. No there’s been the AI that successfully beat humans heads up, one on one.
0:27:55 SC: One on one. Yeah.
0:27:56 LB: But I don’t even know if anyone’s trying to do the three-person and higher game, because the game tree, the decision tree, just gets ridiculous.
0:28:03 SC: Yeah. And yet we poor humans do it.
0:28:04 LB: Exactly. Yeah. We’re really, really good at building these sort of heuristics. These… I don’t know why we’re so good at it, but we’re able to sort of condense this down into the rules of thumb that work out really well.
0:28:22 SC: I mean, I think it kind of makes sense. That is what we’re good at, right? The thinking fast and slow. Daniel Kahneman would explain, we don’t think in terms of rational, logical choices generally. We think in terms of heuristics, we think in terms of rules of thumb and we’re really good at coming up with models of the world that are pretty good. Right? That are not great.
0:28:39 LB: Yes. Good approximations that get us by…
0:28:41 SC: Yeah, yeah, yeah. I mean do you use Bayesian analysis, or something like that, very much? Why don’t we explain what Bayesian analysis is.
0:28:49 LB: Sure. Bayesian analysis is basically updating our model of the world based upon new information, incorporating new evidence according to the weight, the strength, of that evidence, whether it’s supporting or undermining our beliefs. And then, when we see that thing happen, we’ll go, okay, well, this means I should change my mind by this amount. It shifts the needle. If you think all leaves are green, you’ll go around assuming that. But then if you see a tree with purple leaves, well, now that’s evidence to suggest that’s not so true. So you now shift towards, okay, maybe not all leaves are green, etcetera. And there’s a way you can actually model that mathematically using Bayes’ theorem. Do I do it mathematically in game? Absolutely not. I don’t have likelihood ratios and so on in my head.
0:29:46 SC: Too much.
0:29:46 LB: But, ultimately our brains are Bayesian machines, right? That’s what we’re doing naturally, without really realizing it. And at least to me, it seems like it’s a sort of fundamental law of, not a physical law, but it is, it’s a, what do you call it, a psychological law? It is some kinda law of the world we live in, right?
0:30:04 SC: Bayes’ theorem. It’s a math theorem. It’s the correct way to update your model of the world, I think…
0:30:11 LB: Yeah, it seems like it.
0:30:12 SC: I think, it’s a rational way to behave, right?
0:30:15 LB: Yeah.
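[The needle-shifting Liv describes is exactly Bayes’ theorem. As a minimal sketch, with the green-leaves numbers invented purely for illustration:]

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical example: prior belief that "all leaves are green",
# updated after observing a purple-leafed tree.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after one observation."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Prior: 90% confident that all leaves are green.
# Seeing purple leaves is nearly impossible if the hypothesis is true,
# but quite possible if it is false.
posterior = bayes_update(prior=0.9, likelihood_if_true=0.01, likelihood_if_false=0.5)
print(posterior)  # the needle shifts sharply toward "not all leaves are green"
```

[With these made-up numbers the 90% belief drops to roughly 15% after a single observation; strong evidence moves the needle a lot, weak evidence only a little.]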
0:30:17 SC: And so, my impression is, this is my amateur poker player’s way of thinking about it, you can tell me whether it makes sense. You sit down at the table and you more or less assume everyone is the same, in some way, and then as you see how they play, your model of each individual player becomes more sophisticated.
0:30:32 LB: Yes.
0:30:32 SC: So, we have ways in poker of sort of describing that. There are tight players and loose players, and aggressive players and so forth. And maybe then you say, “Well, oh, this person is tight before the flop and then they become loose afterward.” And is that more or less what’s going through your mind?
0:30:47 LB: Yeah, I mean, not that I would ever advocate stereotyping people at all, but when you sit down at a poker table… Like, I just played the World Series of Poker main event, for example, which is generally against a lot of amateur players who pony up their $10,000 once a year, and it’s the best moment of the year, it’s really good.
0:31:04 SC: It was profitable for you, right?
0:31:05 LB: Yeah, worked out pretty well this time. And so, when you first sit down, I will immediately look around the table and create literally first impressions of how someone looks. Literally their gender, their age, the way they’re sitting in their chair, the clothes they’re wearing, the way they look at me, their appearance of confidence, or not confident. And so, I’ll immediately create some kind of stereotype.
0:31:35 SC: A prior, we call it.
0:31:36 LB: A prior, exactly. A prior. But then in an ideal world, I’ll still just try and not let that influence my play too much, just a little bit. But, as soon as you start getting real information after 20-30 hands, well now, I’ve seen, okay, this person hasn’t played a single hand over 30 hands. Did I think they were tight beforehand? Actually, yes. And does the evidence continue to confirm that? Yes, it does. Okay, so I can shift the needle a little bit more to strengthen that belief. But if I notice actually the person who, like the old guy who seemed conservative and timid has played every single hand and is raising and re-raising everyone, okay…
0:32:15 SC: Better update.
0:32:16 LB: Better… It’s time to update. This is very strong evidence. Let’s shift the needle more towards, he’s an aggressive player. And so, yeah, you sort of start building these models of people, and then, by the end of the day, if you’ve been at the same table, and now you’ve got probably multiple hundreds of hands where you’ve seen a lot of information of people’s tendencies with increasing degrees of nuances, so you’ve built up a very accurate mental model of players over a given period of time.
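[The read-updating described above can be sketched, for illustration only, as a Beta-Binomial update, a standard Bayesian model for an unknown fold-or-play rate; all pseudo-counts and hand counts here are hypothetical:]

```python
# A hedged sketch of updating a read on an opponent with a Beta prior.
# (Illustrative numbers only; real player modeling is far more nuanced.)

def update_tightness(prior_folds, prior_plays, folds_seen, plays_seen):
    """Beta-Binomial update: posterior mean estimate of the player's fold rate."""
    a = prior_folds + folds_seen   # pseudo-counts in favor of folding
    b = prior_plays + plays_seen   # pseudo-counts in favor of playing
    return a / (a + b)             # posterior mean of the fold rate

# First impression ("prior"): roughly average, say 7 folds to 3 plays.
estimate = update_tightness(7, 3, folds_seen=0, plays_seen=0)
# After 30 hands in which the "conservative" old guy played nearly every one:
updated = update_tightness(7, 3, folds_seen=2, plays_seen=28)
print(estimate, updated)  # the estimate shifts from 0.7 toward "loose/aggressive"
```

[The prior pseudo-counts encode the first impression; thirty real hands quickly swamp them, which is the “better update” moment in the conversation.]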
0:32:44 SC: So do you take this way of thinking, mixed strategies, game theory optimal, etcetera, Bayes’ theorem and updating your priors. Do you extend it from poker to the rest of your life? Has being a professional poker player changed how you go about buying a house or getting a boyfriend or choosing where to go for dinner?
0:33:05 LB: Yeah, I would say so. In terms of just thinking about things in terms of probabilities and uncertainties. Beforehand I was very black and white in my thinking, it’ll either happen or it won’t. I certainly wasn’t comfortable with any kind of granularity. And now, I’ll sort of look at something and be like, “Okay, do I think… Am I gonna make it in time for this podcast judging by the LA traffic?” Eh, doesn’t seem…
0:33:34 SC: What’s the percentage… Yeah.
0:33:34 LB: Yeah, the probability’s gone down, and I’ll estimate at least mentally and hopefully, I’ll try and communicate it to people as well, what I think the likelihood is in terms of a percentage. I’m 30% to be there on time. That kinda thing. And in terms of picking boyfriends, well, yeah, Igor…
0:33:53 SC: You picked a poker player, so what does that tell you?
0:33:54 LB: I know. It’s hard when you’re playing on the poker circuit not to also have a significant other who’s sharing the same lifestyle. But yeah, funny story, Igor and I actually… Well, Igor asked me about half a year ago now, randomly he said, “What do you think the likelihood is that we’ll still be together in three years’ time?” ‘Cause we were talking about moving cities, and should we buy a house? And he said, “Well, what do you think the likelihood is we’ll actually be together in three years’ time?” I said, “Good question.”
0:34:22 SC: That is… Sorry, that’s a question that would only be asked by number one, a poker player, and number two, someone who is pretty secure in the relationship already.
0:34:31 LB: Yeah, I… Well, yes, that’s a reasonable thing to assume. And luckily, our numbers came out fairly similarly, I think it was like 92% or 94%, or something like that.
0:34:43 SC: Excellent.
0:34:44 LB: Yeah.
0:34:45 SC: ‘Cause there’s a nonlinearity there, like, getting the wrong answer could change the probability.
0:34:46 LB: Oh, absolutely, yeah, there can be… But at the same time, it’s really useful to know, even if you don’t end up being honest with each other, the fact that it asks you to reflect on that. And you come away and think, “Oh, actually the number’s 60%, but I found the word 90% coming out of my mouth.” Well, if nothing else that’s information for yourself.
0:35:06 SC: That’s right.
0:35:08 LB: Right? Like, oh, there’s something not quite right here. Fortunately, I did genuinely… I was like, “No, I think it’s definitely over 90%. It’s definitely under 100%, stuff can still happen, therefore…” And luckily we were pretty much aligned. Yeah, so it definitely… I found it has bled into my thinking in all sorts of ways. And in general, you just become more strategic. I don’t know. I don’t think that’s a bad thing. I know that sometimes when people ask, “Oh, what do you do?” And particularly, if I’m in some kind of negotiation, they’re like, “Oh, you’re a poker player. Well, I don’t believe anything you say.” And I’m like, “Oh, come on.”
0:35:44 SC: ‘Cause they think poker is all about massive, crazy bluffs, not about strategic thinking.
0:35:47 LB: Exactly. Yeah, it’s like… No, it’s about choosing when to bluff. You’re actually minimizing your risk when you bluff. I’ve found… Bluffing is stressful. It can be very fun when you get away with it but it’s really not fun when you get caught.
0:36:01 SC: Yeah. Oh, yeah.
0:36:02 LB: And having… Poker players know what it’s like to get your bluffs caught, to be caught in a lie. A, it’s embarrassing, and B, it actually just financially hurts, and it can knock your confidence and so on. And so, away from the table… Lying isn’t really very fun. I get to scratch that itch at the table, and it just creates complications, and unless there’s a really, really, really good reason for it, I tend to try and avoid it.
0:36:31 SC: And it is embarrassing, even at the poker table, but it shouldn’t be, right? I mean, you’re just playing the game theory optimal strategy, why…
0:36:37 LB: Yes. Yes.
0:36:38 SC: Why should I be sad about having my bluff called?
0:36:40 LB: Yeah, no, that’s definitely a thing I try to… Anyone I’m teaching poker, I say, the first time you get caught in a bluff, turn those cards over and slam ’em down and have a big smile on your face and be proud of yourself because…
0:36:53 SC: That’s very good advice.
0:36:53 LB: Because it’s a big part of the game and you have to get used to that sort of getting your fingers caught in the cookie jar. Because if you don’t ever get caught, then you’re not… Probably not doing it often enough.
0:37:03 SC: Right.
0:37:03 LB: So, yeah, there’s so many different facets of the game that bleed over into life. I mean, just the emotional control part of it. In poker, you learn to get used to bad luck very fast, because you play 10,000 hands in a week. You’ll have one-percenters, where you should win the hand 99% of the time, but you don’t. You’ll have multiple of those happen and they really sting. But probabilities compound over time, and if there are many, many instances of bad-luck things that could happen, then eventually one of those will happen to you. And so, if you remember that, I find you get less emotionally attached and it’s less of a shock.
0:37:44 SC: And I think… Yeah, I mean, what you said about the granularity of probabilities, I think is very true. I had this idea that almost everyone thinks there are only three probabilities for anything, it’s either 0%, 100%, or 50%, right?
0:37:55 LB: 50-50, yeah.
0:37:58 SC: You know, nothing. The idea of something being 70-30 is just really hard for human beings to latch on to. And maybe this is those heuristics that we grew up with, in the wild, in the veld of Africa. You better make a decision right away under conditions of stress and you don’t get to do things over and over again. But… So you would really say that your experience as a poker player has given you a better ability to correctly handle those 70-30 situations?
0:38:26 LB: Yeah, I think so, I think you… I mean, you’re familiar with the term ‘scope insensitivity’ in terms of you… Humans, we evolved in tribes of probably up to 150 or so people, and so beyond that, numbers weren’t that useful, in terms of… We didn’t have to count much beyond that. And the classic example of, oh, there’s 1000 birds dying in oil… Dying from an oil spill somewhere, you’ll feel some degree of sadness about that. If I say, there’s a million birds dying from oil, you don’t feel a thousand times sadder. You feel probably a little bit sadder, but not a thousand times.
0:39:07 LB: And so our brains aren’t intuitively equipped to think accurately about really large numbers or really, really small numbers. We just work at this little macroscopic level of our everyday lives. And so, in the realm of probabilities, we can sort out the 30-percenters from the 70-percenters, but beyond that, things just… They feel like, “Oh, it happens 10% of the time? Oh, that’s a never.” That’s a never. And, “Oh, 85% of the time? Pretty much always.”
0:39:41 LB: And I don’t know, I think for the most part, we don’t have an intuitive understanding of what a 5% actually feels like, or a 7% or a 10%. But poker, you play enough, you will… Like, I find my emotions genuinely correlate with… Like, when I turn the cards up on the flop, you know, we get a big all-in and I’m a 12% underdog, so I’ve only a 12% chance to win, well, I’m just not gonna get excited. I’m like, “Okay, I’ve lost. I…”
0:40:05 SC: Yeah, you’re not gonna win.
0:40:07 LB: Yeah. Or if I have a 1% chance of winning, then my emotions are even less, I just don’t feel anything, I’m just annoyed that I got it in so bad. But if it’s a coin flip, my heart will genuinely be racing, ’cause I’m like, “Wow, I will win this 50% of the time and who knows what’s gonna happen.” And so, yeah, you become well calibrated to what these probabilities actually mean, because we’ve just seen these situations so many times over. Does that, therefore, apply to the everyday world? I mean, yeah, if someone says, “Oh, you’re, I don’t know, 10% to make a flight,” I think… If I can figure out that I’m roughly 10% to make a flight, then I would be extremely pleasantly surprised if I did make it. [laughter] But at the same time, I don’t think I’m as well calibrated there as I am in poker.
0:41:01 SC: It’s hard, yeah, it’s hard. So just to change gears a little bit, I was pleased to see that you had an article in Vox recently. You went back to your astrophysics training, and it was a story about the application of probabilities to interesting questions about the nature of the universe. So why don’t you fill us in on what that was?
0:41:18 LB: Yeah. So recently, some of the researchers at the Future of Humanity Institute in Oxford published a paper called ‘Dissolving the Fermi Paradox,’ which I imagine most viewers know what the Fermi Paradox is?
0:41:35 SC: Let’s not imagine that. Let’s tell them what it is.
0:41:36 LB: Okay. Fermi Paradox is basically, Enrico Fermi back in, when was it? 1960-ish, ’53?
0:41:44 SC: ’50s. He didn’t live into the ’60s, he was very radioactive, yeah.
0:41:50 LB: Bad. But yeah, basically he looked up and he’s like, “Wait, there seem to be billions, if not trillions, of stars out there, and we know that the universe is somewhere around 14 billion years old. If there are so many possible sites for life, why are we not at least… We’ve been releasing radio waves now for a few decades, relatively easily. Surely we should be at least hearing or picking up trace signals from other worlds, or else seeing aliens all the time.”
0:42:19 LB: “The universe is so old, there must be at least a few thousand other civilizations out there that have got a big head start on us. So where is everybody?” So that’s the famous Fermi Paradox, the contradiction between the size and age of the universe and the lack of observed alien life. And this paradox has seemingly only gotten stronger with time, as we’ve become more spacefaring and discovered more and more exoplanets. It seems like the universe is even more capable of hosting life than we imagined.
0:42:55 SC: Certainly a lot of places out there where it could be. Yeah.
0:42:57 LB: There are so many places, right. And yet we’ve still seen nothing, and we’ve been looking really hard. Anyway, this paper they published uses the Drake Equation, which is sort of the best way we have to try and estimate the number of intelligent civilizations within the Milky Way. And it’s a novel form of analysis on it that hadn’t really been done before. And the answers that came out the back of it are that we are somewhere around 75% to be the only intelligent civilization within the galaxy, and somewhere around a coin flip, about 50-50, to be the only intelligent civilization in the entire observable universe, which is pretty, pretty astonishing stuff.
0:43:45 SC: Right. Counterintuitive to many people, yes.
0:43:47 LB: Extremely counterintuitive, agreed. Also quite disappointing ’cause I’d love there to be some aliens out there. I mean, at least from a curiosity standpoint. But yeah, nonetheless, pretty groundbreaking claim. And so the article was discussing how they came to that claim, or the actual processes that they did, the methods of analysis.
0:44:10 LB: But then I wanted to make a larger point from that, which is, okay, well, we don’t know for certain that this is actually the case, but let’s assume that it is, based upon our current best state of knowledge. It seems like, if nothing else, intelligent life is extremely rare in the universe. What does that mean, philosophically speaking, for us? If we really are the only intelligent civilization out there, and may well be for the rest of the universe’s future, what does that mean? Should we do anything with that information? Should we change any of the behaviors that we’re currently exhibiting and the trajectories that we seem to be putting ourselves on? And I think, yes, I think it speaks to a greater responsibility to not blow ourselves up.
0:44:54 SC: Right. I do wanna get into that. But first I wanna dig into this probability stuff, because the Drake Equation is something that has been batted around ad nauseam, right? People love talking about the Drake Equation. And so for the two of you out there in podcast land who have not heard about it, the Drake Equation is a way of estimating the number of intelligent civilizations by saying, well, how many planets are there, times what’s the fraction of planets on which life develops, times what’s the fraction of which the life becomes intelligent, etcetera. Different ways of breaking it down.
0:45:23 SC: But as far as I can tell, there’s two things in the new paper that are kind of interesting. One is they take seriously the idea of, rather than finding just our best fit number for the fraction of times life comes into existence and then becomes multicellular, they say, “What’s the uncertainty or what is the probability distribution?” So taking those probabilities a bit more seriously.
0:45:45 SC: And secondly, it is a matter of being good Bayesians and updating our priors, right? I mean they’re not saying just based on the laws of physics we’re probably the only intelligent civilization in the galaxy. They’re saying based on the fact that we haven’t seen it yet, that in some sense Fermi was right. It would have been very, very easy to run into another civilization if they were out there. We haven’t seen them. They don’t seem to be out there and therefore, from that, we draw the conclusion that probably the simplest explanation is they’re just not there. Maybe it’s just way more unlikely that intelligent civilizations exist and continue to exist than we thought it might have been.
0:46:22 LB: Right, exactly. The fact that we haven’t found any evidence of aliens is evidence unto itself, and should be incorporated into the Bayesian model that we have of the situation.
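[The paper’s key move, propagating full uncertainty distributions through the Drake Equation instead of multiplying point estimates, can be sketched with a toy Monte Carlo. All the ranges below are illustrative assumptions, not the actual inputs used by Sandberg, Drexler and Ord:]

```python
import random

# Toy Monte Carlo in the spirit of "Dissolving the Fermi Paradox": sample each
# uncertain Drake factor from a wide (log-uniform) distribution and look at the
# resulting distribution of N, rather than a single best-guess product.

def log_uniform(lo_exp, hi_exp):
    """Sample 10**x with x uniform in [lo_exp, hi_exp]."""
    return 10 ** random.uniform(lo_exp, hi_exp)

def sample_n_civilizations():
    rate_star_formation = random.uniform(1, 10)   # stars formed per year (well measured)
    f_planets = random.uniform(0.1, 1)            # fraction of stars with planets
    n_habitable = random.uniform(0.1, 1)          # habitable planets per such star
    f_life = log_uniform(-30, 0)                  # fraction developing life: huge uncertainty
    f_intelligence = log_uniform(-3, 0)           # life -> intelligence
    f_communicative = log_uniform(-2, 0)          # intelligence -> detectable technology
    lifetime = log_uniform(2, 8)                  # years a civilization stays detectable
    return (rate_star_formation * f_planets * n_habitable *
            f_life * f_intelligence * f_communicative * lifetime)

random.seed(0)
samples = [sample_n_civilizations() for _ in range(100_000)]
p_alone = sum(n < 1 for n in samples) / len(samples)
print(f"P(we are alone in the galaxy) ≈ {p_alone:.2f}")
```

[Because the abiogenesis factor spans so many orders of magnitude, a large fraction of the samples give fewer than one civilization per galaxy even when the median-parameter product looks large, which is the paper’s central point.]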
0:46:34 SC: Were they able to pinpoint one of the factors and say, this is probably the tiny one?
0:46:39 LB: In terms of which one has the…
0:46:42 SC: The smallest probability is it that life doesn’t usually form, or intelligent life doesn’t form?
0:46:45 LB: Yeah. So the one that has the biggest range of uncertainty on it is the fraction of planets that develop life.
0:46:53 SC: Okay. Which makes perfect sense. What do we know about that, right?
0:46:55 LB: Yes, exactly. We still haven’t really figured out how life started on Earth; the process of abiogenesis is just not well understood yet. And so the difficulty that we’ve had with the Drake Equation is that we’re plugging in numbers like the rate of star formation, the number of stars that form per year. That’s an astronomical number we have a reasonably accurate value for.
0:47:24 SC: We have data on that stuff. Yeah.
0:47:24 LB: Exactly. We have data for that. But then something like the fraction of planets that develop life, it could literally be… The uncertainties span 200 orders of magnitude.
0:47:34 SC: Oh my goodness. Okay.
0:47:35 LB: Yeah. It’s that, it’s that much.
0:47:38 SC: Yeah.
0:47:38 LB: And the thing is, any number in that range is actually completely plausible. Saying, “Oh well, let’s ignore the fringe parts of it,” is making a claim of knowledge that we can’t make. Saying, “Oh well, it’s probably somewhere in the middle”… No, the probability distribution is fat-tailed all the way down. And so, to cut those tails off is just doing bad math, basically.
0:48:05 SC: Yeah, I’m glad the paper came out, ’cause I’ve run into great resistance. I’ve always felt basically this: We don’t know how likely it is that life is going to exist. The other obvious bottleneck is multicellular life, which again we don’t know how it arises. I tend to think that once we get multicellular life, intelligence is probably not far behind, but both the existence of life itself and multicellularity seem very mysterious. Maybe the chances are just 10 to the minus 100. And people say, “There are so many planets out there,” [laughter] and there is no number N such that I cannot find another number that multiplies it and gives us a small number.
0:48:43 LB: Tiny, tiny, tiny number, exactly. I mean, there’s the idea of this great filter, where there’s some kind of insurmountable barrier that exists somewhere in the chain of evolution that, for whatever reason, most planets can’t get past, and for some reason Earth managed to get past it. I hate to even hazard a guess at where it is, but intuitively, not that intuition really applies to this, it feels to me that it’s somewhere in the step from prokaryotic to eukaryotic life, somewhere around there. But again, that’s just the…
0:49:25 SC: That’s right. So not just multicellularity, but just the idea of having a nucleus in your cells.
0:49:28 LB: Exactly, having a nucleus within a cell…
0:49:30 SC: That’s fair, that’s probably more important.
0:49:31 LB: That seems to be… Somewhere around that. There’s some other work the Future of Humanity Institute is doing right now on a similar analysis, but looking at the transition times, how long it takes to go from each step to the next. And it just seems quite likely that the reason why life isn’t everywhere is because it just takes such a long time for it to actually get to the stage where intelligent civilizations spring up. We only made it on Earth about 5/6 of the way through Earth’s lifespan, with about 750 million years to spare.
0:50:09 SC: Life came into being relatively quickly, but then it sat around in this boring monocellular form. Yeah.
0:50:14 LB: In this nothing state for ages and ages, and then suddenly, boof, there was this explosion where evolution kickstarted faster, and interesting things started appearing. But that wasn’t until basically 5/6 of the way through Earth’s lifespan. Had it been just a little bit longer, we would not have existed, ’cause Earth would’ve been gobbled up by the Sun. [laughter] And so, that’s a reasonably plausible explanation as to why life is incredibly rare.
0:50:42 SC: I suspect that’s the right explanation. My other favorite one is that there are plenty of civilizations, they all become highly advanced, they upload their consciousness into computers, and then they become bored, and they don’t go space traveling anymore.
0:50:53 LB: Yes, that’s a good one. Also the aestivation hypothesis. You know that one?
0:50:56 SC: I don’t know that one.
0:50:58 LB: You’ll be better placed to explain the exact reason why, but because computation is more expensive at higher temperatures… You would expect a rational civilization to want to maximize the number of computations it can do over time. So it makes sense for them to hibernate, or aestivate, until far, far in the future, when the ambient temperature of the universe is much lower and computation is therefore much cheaper. I think it’s something like 10 to the 30 times more computations you could achieve, or something like that. I can’t remember exactly… Anders Sandberg has a very fun paper on it.
0:51:39 SC: I’ve never heard of that one. I like the audacity of it, I don’t believe it. So I’ll tell you why I don’t believe it, because, one, the temperature of the universe goes down, that’s fair. But also the free energy of the universe goes down, the energy we have available to run our computation goes down. So I presume someone smarter than me has actually done the calculation and said it’s still better to wait, but there’s clearly a…
0:52:01 LB: Yeah, but I’m pretty sure he factors that in, but I’m not versed enough to explain why, nonetheless again it was one that felt intuit… It was very fun, [laughter] but intuitively, I’m like, “Nah.”
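[One way to see where a huge multiplier could come from, assuming the cost of computation is bounded by Landauer’s limit (erasing one bit costs at least kT ln 2 of energy): a fixed energy budget buys proportionally more bit operations at lower temperature. A rough sketch; the far-future temperature below is an invented placeholder, not a number from Sandberg’s paper:]

```python
import math

# Landauer's principle: erasing one bit costs at least k*T*ln(2) joules,
# so the same energy budget buys ~T_now/T_later times more bit erasures
# at a lower ambient temperature.

K_BOLTZMANN = 1.380649e-23  # J/K

def landauer_cost_per_bit(temperature_kelvin):
    """Minimum energy (joules) to erase one bit at a given temperature."""
    return K_BOLTZMANN * temperature_kelvin * math.log(2)

t_now = 2.7        # current CMB temperature, kelvin
t_future = 1e-29   # hypothetical far-future ambient temperature, kelvin

gain = landauer_cost_per_bit(t_now) / landauer_cost_per_bit(t_future)
print(f"Same energy buys ~{gain:.1e}x more irreversible computations")  # ~2.7e29
```

[Sean’s objection still stands in this sketch: the gain assumes the stored free energy survives to be spent later, which is exactly the accounting the aestivation paper has to defend.]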
0:52:12 SC: Probably, ’cause it only takes one plucky counterexample. Only one civilization has to not buy into that. But it does remind me of John Wheeler used to say when he mixed cream into coffee, he always felt sad, because he was increasing the entropy of the universe irretrievably. But you do bring up this question of what does it mean? What implication? So what if we are the only civilization, only intelligent lifeforms, it makes you think, for better or for worse, but making you think is good, “Why are we here? What is the point?” And you actually have been thinking about things like this, right?
0:52:44 LB: Yeah. I was just having a conversation with my partner, Igor, about moral realism, I know that you’re very much a non-moral realist in terms of…
0:52:55 SC: I’m a happy Humean constructivist, yes.
0:52:58 LB: Yes, and I’m of the same mindset. But that said, if you want to, if nothing else, maintain optionality, us going extinct is definitely going against that. [laughter] Now the options have dropped to zero. And it definitely appears like there is a very real risk that we could go extinct even by 2100. And by real risk, the estimates are anywhere between 2% and 20%.
0:53:30 SC: I’m a moral constructivist, but the morals I can construct do say that human extinction would be bad.
0:53:34 LB: Will be bad. Yeah.
0:53:36 SC: That’s okay. I don’t need an objective story to tell myself about.
0:53:40 LB: Exactly. Yeah, it just seems like it would be a terribly big shame for all these little pockets of low entropy that have managed to appear, us basically going against the second law of thermodynamics to a degree… Are we actually going against the second law of thermodynamics?
0:53:57 SC: No. Nothing goes against the second law of thermodynamics.
0:53:57 LB: We’re not? So how, this is what I’ve never wrapped my head around, how do pockets of low entropy so successfully exist then?
0:54:06 SC: Well, it’s actually an outgrowth of the second law of thermodynamics. Because if we think about it, what would it mean to not have the second law of thermodynamics? What it would mean is that we were already in thermal equilibrium, already at our maximal entropy state. Once you say the early universe started with low entropy, that’s all you need to say; then entropy increases, and that’s the second law. The only way out of that is to say we were already at high entropy, and then there would never be any life, there would never be any complex structures, there would never be anything like that. We are not creating pockets of low entropy, we are leeching off of the fact that the early universe had an extraordinarily low entropy. We’re increasing the entropy of the universe willy-nilly. There’s a separate question, related but separate, which is: Why are we complicated? Why are there complex structures in the universe?
0:54:53 SC: And that’s something that it makes sense that complexity develops along the way from low entropy to high entropy, but the details of how it actually happens is an ongoing, fun research problem. And life is certainly just an example of exactly that.
0:55:05 LB: Right. And so yeah, I think it would arguably be a tragedy on the universal scale, in terms of just lost utility, and lost potential, and lost optionality, even just looking from an expected value standpoint, if we were to go extinct. Even if it’s a 0.01% chance, there are still going to be 10 to the… I don’t know how many possible lives, and possibly very happy lives, in the future.
0:55:38 SC: I have met people who argue that the human race should go extinct, that it would be a moral good, that the rest of the planet would be better off without us.
0:55:46 LB: Right. I assume they’re extreme negative utilitarians, in terms of they think…
0:55:49 SC: No, no, no. It’s just that the utility goes to the plants and the animals, not to us. We are a net negative, they claim. I don’t believe them, I’m just saying it is out there.
0:55:56 LB: What’s their main argument as to why we are a net negative, just because we create suffering upon…
0:56:01 SC: Yeah, we’re highly distorting the ecosystem, and so much of the biomass is devoted to keeping us happy, and it’s not a natural state of being.
0:56:08 LB: Right. But define natural. That’s the thing, now I get into the semantics of that.
0:56:14 SC: Well yes, that’s right. I don’t wanna defend this, I’m just laying it out there as…
0:56:15 LB: No, I mean, it’s definitely an interesting viewpoint to discuss, and I have to say, intuitively… I grew up in nature, I love nature more than anything, and nothing upsets me more than seeing images of the rainforest, this millions-and-millions-of-years-old, beautiful, unbelievably complex rainforest, a tree being cut down into planks of wood. Something that’s so unbelievably complex, that sustains this intricate framework of an ecosystem around it, all the insects, and animals, and fungi, and everything that grows off it and relies upon it, is reduced to mere planks of wood that people then put… We put our coffee mugs on.
0:56:55 SC: Coffee table, yeah.
0:56:56 LB: It’s taken out of this complex ecological system and put into this complicated economic system, and I think they’re two very different things. So, intuitively, it seems like a huge tragedy that we are doing that. Nonetheless, I also think, intuitively, it’s a huge tragedy for this human consciousness, which does seem, I don’t know, not inherently special, but there’s definitely something inherently unique that we’re not seeing elsewhere in the natural world as much, to go out. And the thing is, if we go extinct, well, chances are most of the other big animals will go extinct too.
0:57:35 SC: I just want it on the record that we are having this conversation at a concrete table, [laughter] not a rain forest wood table, so…
0:58:01 LB: With metal legs. Yes, very good.
0:58:02 SC: Yes. We’re very morally correct in our living room here. We probably don’t really think that it affects that calculation, whether or not we’re the only civilization. Maybe it makes it more poignant, right? But we still don’t want human beings to go extinct, even if there’s 100 million others out there. There are plenty of people who think that there are 100 million others out there, and they’re waiting for us to grow up a little bit. I have my suspicions that that’s crazy also, but it is at least one of the options on the table.
0:58:14 LB: Yeah. You can’t rule anything out, that’s the thing when we’re dealing with something of such uncertainty like this. And certainly, humanity has a lot of growing up to do. But that said, there’s a lot of beauty and goodness in the things that we are doing. And it would be nice if we were given the opportunity to try and grow up.
0:58:40 SC: And you have put your money where your mouth is, quite literally, becoming interested not only in these philosophical questions, but the applied questions of how to be better people, how to make the world a better place, and the Effective Altruism movement in particular. And you don’t just talk it, you’ve started an organization, is that right?
0:59:00 LB: Yeah, so…
0:59:01 SC: So tell people what Effective Altruism even is.
0:59:03 LB: Yeah, so, Effective Altruism is basically combining the head and the heart when it comes to philanthropy, to doing good. The vast majority of us, we wanna do good in the world. If we see someone suffering, we’ll try and help them. You see someone passed out in the street, the natural human instinct is usually to go and check on them, or a dog yelping, or whatever it is, someone screaming, “Fire.” We’re all, for the most part, naturally altruistic, and that’s a great thing. But at the same time, some ways of alleviating a given problem will be more effective than others. And so Effective Altruism is basically applying science to philanthropy, to altruism: doing good as effectively and efficiently as possible. I remember hearing about this… I wanted to give money to, for example, environmental charities, but I often just felt lost. I was like, “Well, where do I start?” I’m concerned about climate change, I’m concerned about biodiversity loss, etcetera, etcetera, all these different things. How do I weigh these off against each other? When I give my money to one charity, that means I’m actively not giving my money to another. We only ever have limited resources to give.
1:00:22 LB: And so it’s imperative that we do as much research as we can, or at least consult people who have done the research, to figure out where we can have the biggest positive impact with whatever we give, and not just money but time as well. If you choose to work for a charity, you wanna make sure that the actions you take in your line of work are achieving the most good, because you won’t get those hours back; time is also a scarce resource. And so after hearing all these arguments I was like, “Well, this makes a lot of sense.” And a team of full-time effective altruists suggested, “Look, we think poker players will get this.” They’re very used to thinking in numbers about uncomfortable things. They’re comfortable with the idea of expected value and uncertainty.
1:01:06 SC: Variance, yeah.
1:01:08 LB: Yeah, variance, quantifying things. And they’re also… Poker players, I think, are willing to, despite the impression that we’re all like dark, degenerate money-grabbing…
1:01:22 SC: Degenerate gamblers, yes.
1:01:22 LB: Yeah, gamblers, actually. I think a lot of poker players are aware of the unique position they’re in: we have time, we can make good money, and it would be nice to contribute to society in some way. And so we decided to start this organization called Raising for Effective Giving, reg-charity.org.
1:01:43 SC: Raising is a pun?
1:01:44 LB: Raising, exactly. A pile-on-the-chips thing.
1:01:45 SC: Okay, good.
1:01:47 LB: Even now there are many arguments about who came up with the name.
1:01:50 LB: But yeah. It’s me, but anyway…
1:01:51 SC: That’s why there is a 4% chance that it’s not gonna happen, yeah.
1:01:53 LB: Yeah, no one will ever know. And yeah, so four of us poker players started it alongside these Swiss effective altruists. And originally, we were encouraging poker players to give 2% of their profits. Sorry, 2% of their net… no, gross winnings every quarter. And some people continue on that model. Others don’t, because poker is a very all-or-nothing kind of game: often you’ll have a big win followed by months of losing.
1:02:21 SC: Months and months, yes.
1:02:22 LB: Yeah. And so generally, people just tend to donate when they have a big win, as a percentage. And yeah, it’s been extraordinarily well received. I had no idea that poker players would like…
1:02:34 SC: That’s interesting.
1:02:35 LB: Get it as much as they did, and it’s raised over $6 million now. For the charities, we specifically use GiveWell, who you might know, givewell.org, which is a very…
1:02:46 SC: GiveWell is usually where I go when I have $2 to throw in.
1:02:48 LB: Yeah, they’re fantastic in terms of human suffering reduction. They are the go-to if you wanna find out how you can best help human lives right now. So we use their top three recommended charities, which are almost always the Against Malaria Foundation, something deworming, and direct poverty alleviation with GiveDirectly. And that’s because the cost to save a human life, like it or not, is just far cheaper, around 100 times cheaper, in the developing world than it is here, say in the US or the UK. In the US, the government will spend up to around $1 million to save a life, if you look across healthcare and that kind of thing. And yet you can demonstrably save a life in some parts of Sub-Saharan Africa for around $7000. So that’s a huge, huge difference.
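The cost-effectiveness comparison Boeree describes is simple division. A minimal sketch, using only the rough figures cited in the conversation (which are illustrative orders of magnitude, not precise estimates):

```python
# Rough figures from the conversation -- illustrative, not precise.
cost_per_life_us = 1_000_000   # ~$1M spent per life saved in the US
cost_per_life_ssa = 7_000      # ~$7,000 per life saved in parts of Sub-Saharan Africa

# How much further one charitable dollar goes.
ratio = cost_per_life_us / cost_per_life_ssa
print(f"A dollar goes roughly {ratio:.0f}x further")
# Roughly consistent with the "around 100 times cheaper" claim in the conversation.
```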
1:03:51 SC: Right.
1:03:51 LB: And even if you think that an American life is worth more than the life of someone from, say, Malawi, do you really think it’s worth 100 times as much? Would you press a button that says, “I will kill 100 Malawians to save one random American”? And it’s not an American you know; it’s someone random, from a part of America you’ve never been to. Would you really say that 100 mothers there are worth one mother here? No. And so, yeah, it’s about this idea. Unfortunately, you do have to put a price on things, in effect. It’s not putting a price on a human life; it’s saying, “With my actions, I can help someone with the money that I give, and so I want to help as many people as possible with it.”
1:04:33 LB: And so yes, we’ve got the best charities in human suffering alleviation and the best ones in animal suffering alleviation; by best, I mean most cost-effective. And the animal ones almost always have to do with opposing factory farming, just ’cause it’s pretty tragic what’s going on out there, and it’s very, very cheap to save sentient animals from a life of misery. And then we also have neglected research areas, specifically ones looking into things like global catastrophic risk.
1:05:04 SC: Okay, cool.
1:05:06 LB: So yeah, like people looking into risk from bioterrorism.
1:05:10 SC: Because even if it’s a tiny chance…
1:05:12 LB: Exactly, yes. Even if it’s only a 0.1% chance, when you factor that over the 100 billion or so lives that are going to exist over the next century, that’s roughly the number, that’s a lot of people.
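The expected-value arithmetic behind that remark can be written out explicitly. The 0.1% risk and the 100-billion-lives estimate are the illustrative numbers from the conversation, not established values:

```python
# Expected-value sketch for a low-probability, high-stakes risk,
# using the illustrative figures from the conversation.
p_catastrophe = 0.001        # a 0.1% chance of a global catastrophe
lives_at_stake = 100e9       # ~100 billion lives over the next century

expected_lives_lost = p_catastrophe * lives_at_stake
print(f"Expected lives lost: {expected_lives_lost:,.0f}")
# 0.001 * 100 billion = 100 million in expectation -- "that's a lot of people."
```

This is why even tiny probabilities can dominate a cost-effectiveness calculation once the stakes are large enough.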
1:05:25 SC: It seems kind of obvious that we should try to be efficient and spend our charitable giving as effectively as possible. But there has been pushback, right? Some people don’t like it. Personally, I’ll confess I’m not a utilitarian; I think it’s hard to be a consistent utilitarian. But still, I take Effective Altruism to be a good nudge in the right direction, right?
1:05:48 LB: Right.
1:05:48 SC: We certainly value the things we see right in front of our eyes more than the things we know are going on out there in the world. And this is an excellent reminder that there are ways to make a huge impact elsewhere in the world that might not be quite as obvious to us.
1:06:04 LB: Exactly. It’s not saying that reacting to something happening right in front of you is a bad way of doing good. Of course it’s good, it’s there. And a big part of doing good, of trying to do something altruistic and helpful, is that it’s fine to also feel good about it. People say, “Oh, you shouldn’t have any positive emotions come out of it,” or “You shouldn’t be doing it that way.” I think that’s silly. No, you wanna do whatever continues to motivate you.
1:06:34 SC: Yeah, get that dopamine from giving a good donation.
1:06:36 LB: Absolutely. And so, like I said, it’s about balancing not just the head but also the heart. The heart is a big part of any kind of charitable giving, and I think it’s a mistake to try and remove it. But at the same time, we have to be aware that we have big biases when we act purely emotionally: we’ll arbitrarily value people who happen to live in our own town over someone who lives in a town 100 miles away, even though we don’t know either of them, just because that’s how human intuitions are built. And that can be a mistake, given the scarce resources we have to allocate.
1:07:20 SC: Maybe from what we said earlier, there’d be a good charity that could give everyone free poker lessons, ’cause it would help them think in probabilistic ways about the world.
1:07:31 LB: I wouldn’t specifically say that, but yeah, there is at least one 501(c)(3) that trains young people in applied rationality, in quantifying things, which is basically applied poker. It doesn’t talk about poker at all, but it trains promising young people who are looking to get into research, politics, you name it, in the art of rationality, which is incredibly important, because these are going to be the future leaders, and I think we want leaders who are aware of human biases and aren’t afraid to think in scientific terms; leaders who have the tools to understand their own emotions and are therefore better-equipped decision-makers. I think that’s incredibly valuable. These kinds of organizations, looking into teaching rationality classes in schools, are an incredibly promising area to donate to, because there’s definitely a huge gap in our current education system. How many kids are taught to think through their feelings, for example? Or taught, “It’s okay to be uncertain; let’s see if we can estimate our probabilities”?
1:08:49 LB: These basic rationality tools that you and I take for granted, we only know them because we learned them as adults, either through poker or through other means. But imagine if you were just taught these basic things, or learned to think through counterfactuals. I wish I had learned that when I was 10.
1:09:05 LB: It would’ve saved me so much time. Or just looking up probabilities: I used to be a huge hypochondriac. The amount of time I’ve spent worrying, “Oh, is my headache a brain tumor?” If I’d just learned to do a bit of Bayesian updating, that would’ve saved me a lot of misery in my late teens. So yeah, I think it’s a big shame that these areas are neglected.
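The Bayesian update Boeree wishes she had known can be sketched in a few lines. All the probabilities below are invented purely for illustration; the point is the shape of the calculation, not the medical numbers:

```python
# Toy Bayesian update for "is my headache a brain tumor?"
# All probabilities here are invented for illustration only.
prior = 0.0001                   # P(tumor): a low base rate in the population
p_headache_given_tumor = 0.9     # P(headache | tumor)
p_headache_given_no_tumor = 0.5  # P(headache | no tumor): headaches are common

# Bayes' theorem: P(tumor | headache) = P(headache | tumor) * P(tumor) / P(headache)
evidence = (p_headache_given_tumor * prior
            + p_headache_given_no_tumor * (1 - prior))
posterior = p_headache_given_tumor * prior / evidence
print(f"P(tumor | headache) = {posterior:.6f}")  # still tiny, ~0.00018
```

Because headaches are common in people without tumors, the evidence barely moves the prior: the posterior stays close to the base rate, which is exactly the reassurance the hypochondriac needs.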
1:09:29 SC: Well, I think that’s a perfect place to end, because you’ve basically encapsulated the mission statement of this podcast: trying to get people to think a little bit more rationally, a little more cognizant of their biases, and a little bit differently than they usually do. So, Liv Boeree, thanks so much for coming on the podcast.
1:09:44 LB: Thank you, Sean.