Tuesday, February 23, 2016

My gut was the one that told me that hey, maybe after your first girlfriend broke up with you, the right way to win her back is to throw yourself into your work at MIT and call her up in tears every night for a month. My gut was the one that said: that extra slice of cake is what you want right now, and it will have no bearing on your fitness goals. It was the one that said: I want to appear logical and deep, because somehow having other people treat me as logical and deep makes me feel like I actually am.

My gut is the one that keeps telling me: the computer screen and Facebook are more important than starting the homework assignment due in two days. My verbal brain ("I'll do it tomorrow!") simply supplies rationalizations for this pre-written bottom line.

I don't think my gut actually follows logic -- nor does anyone's. Its motivation comes from hard-won experience; it tells me in the blink of an eye what I think will happen at any point in time. It has incredible power, and it does not use an ounce of logical reasoning.

So why do I think my gut is telling me something so reasoned and noble as: "I want to work hard at work because I know that doing so will help me with my friends and family?"

Saturday, February 6, 2016

The Title of this Blog

Knotgrass.
This is a reference to a translated version of a poem penned by Xue Baochai, one of the two heroines of Dream of the Red Chamber -- a Chinese classic.

Remembering the Chrysanthemums

The autumn wind that through the knotgrass blows,
Blurs the sad gazer’s eye with unshed tears;

But autumn’s guest, who last year graced this plot, 
Only, as yet, in dreams of night appears. 
The wild geese from the North are now returning; 
The dhobi’s thump at evening fills my ears. 
Those golden flowers for which you see me pine 
I’ll meet once more at this year’s Double Nine.

There's something about that book that had a lot of grip. It showed the world as it could be -- the idyllic titular dream of a life of leisure, with quality poetry and games, and of course diligence and work, in the gardens of Imperial Qing-dynasty Jinling.

Of course, the whole point is that this dream crashes down and buries half the characters with it. Life isn't that easy.

But the good parts were so good.
*
Competence.

To build a very simplistic model, I have what's called a "competence problem". Its core features are:
(1) an inability to pay attention to detail, which prevents execution;
(2) a lack of discernment of context -- I will react the same way when encountering a problem I'm very familiar with (outsource it to my intuition, or System 1) as I will when encountering a task I've never done before. The latter takes much longer, even if it's easy.

One example I remember quite clearly is my interview with Hudson River Trading. The CEO asked me to do a simple task: sort cards by suit as they come up, with an extra rule attached -- shift the sevens one suit to the right (Clubs, Diamonds, Hearts, Spades). (Note: this was not the actual task, since I am not supposed to reveal it -- but it was close enough for all intents and purposes.) Easy enough, I thought, and I gave an estimate of how long it would take me. Sorting a deck of cards by suit was a 30-second task -- two cards per second seemed reasonable. Sorting by suit with this extra layer of complexity would probably take triple that time, so one and a half minutes.
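For concreteness, the extra rule is trivial to state as code. Here's a hypothetical sketch -- I'm assuming sevens wrap around from Spades back to Clubs, which wasn't specified:

```python
SUITS = ["Clubs", "Diamonds", "Hearts", "Spades"]

def pile_for(rank, suit):
    """Which pile a card goes on: its own suit, unless it's a seven,
    which shifts one suit to the right."""
    i = SUITS.index(suit)
    if rank == 7:
        i = (i + 1) % len(SUITS)  # wrap-around is my assumption
    return SUITS[i]
```

Trivial as code -- which is exactly why it was so easy to underestimate as a human task.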

Nope. It took a long-ass time -- probably more on the order of three minutes.

Then -- the second time I did the same task, it took a minute and a half. And the third time, even faster.

Essentially, even though the task was easy (it's just sorting cards), the first time took a while -- and my System 1 intuition kept misfiring when I tried to do it as fast as I would sort regular cards. The context and task had changed, but my intuition was not picking up on the difference. The signals it was taking in -- I had a deck of cards, I was doing a sort I'd done before, I was in front of an interviewer where speed was at a premium -- told it to try to move fast.

But the key difference -- that this was in fact a different, if trivial, task, one I had never executed before -- made it take three times as long, and called for an entirely different algorithm: the one in which I move slowly and chart out a new space, rather than relying on my intuition and gut to work automatically.

*
This small insight -- about not being able to tell the difference between a familiar task and a seemingly-familiar-but-actually-quite-different task -- seems prevalent in the way that I think. Someone who is trained to grab falling objects will also grab a falling knife by instinct, and injure themselves badly.

For me personally, this comes up all the time -- in coding, in solving math problems, in learning, in playing Smash Brothers -- but it's especially noticeable in coding, where I sit and churn for a while when I lack a model of how all the code works. One interesting exercise is picking up on the signals that this is in fact a new task, something I've never done before: asking myself to imagine what the code base could be like (it could be anything for all I know), asking myself to sketch out the API and noticing I'm coming up blank, or even just noticing that I opened a file I have never touched before.

Sunday, November 29, 2015

So I went on a date last Saturday with a girl named Jennifer.
She was smart, thoughtful, and cool -- or so it seemed. I brought my A-game in conversation. We had a good chat.

Then I came home and realized there was a giant booger hanging down my face.

Friday, July 17, 2015

New Worlds

Yesterday night, I asked out a girl for the first time in my life.
Online.

She said maybe. :)

Tuesday, July 14, 2015

Random thoughts from an Unqualified Individual: AI, Rationality, Emotions, and MIRI

So I think CFAR has made incredibly significant progress on the topic of motivation. It's really somewhat incredible, given how much money goes into the self-help business. I suppose having good epistemics goes a long way.

Here are some random thoughts, which are mostly "cute hypotheses" with one or two anecdotal pieces of evidence surrounding them.

Here's a thought experiment. Look at Timothy Chu. He's sitting there on his computer, typing, and he's trying to do things consistent with his idea of a good worker. This includes working hard at work, trying to submit more lines of code, wanting to write a good system test, and all sorts of other things that seem pretty reasonable.

Cool. Now let's ask him: "What are you doing?"

Timothy Chu replies: "I'm being a good worker!"

Now you might ask, if you were slightly dickish: "why do you want that?"

And Timothy Chu replies: "Because it's the right thing to do, and because I want to contribute."

Now you ask: "why?"

Timothy Chu replies: "because .... it's just the right thing to do. I don't understand what's hard about this. I want to be a good worker because I know it's the right thing to do. I know that being a contributor will help me become someone who can help my friends and family."

****
Pause.

Let's step back from this a bit. Something interesting is going on here.

Timothy Chu is writing a decent system test and learning vi because he thinks it's good, but he's also doing this thing where he's optimizing for the amount of code submitted, which is a curious thing to optimize for given his stated motivation.

Moreover, Timothy Chu gave you a noble sounding, verbal reason for why he was doing what he was doing. He even believes it.

But a fact about Timothy Chu is that the core of his motivation is driven by his emotions. The verbal part of his brain, the calm and rational part, can only talk to the emotions and gently whisper: "you know, I have some knowledge that might help you get what you want."

Emotions are based on experience, and are fast and powerful, but they never follow such neat and tidy logical patterns.

If so, then... isn't it pretty fucking weird that Timothy Chu thinks that his gut motivations are guided by ideas that are logically consistent, noble, and socially acceptable?

Saturday, April 18, 2015

Update

Applying Bayes' rule to get to accurate real-life beliefs.

From askamathematician (and LessWrong).

"Worried that someone doesn’t like you because he hasn’t returned your phone call in two days? Ask, 'how much more likely would this be to occur if he liked me than if he didn’t like me?'"

"Think that the stock you bought that went up 30% is strong evidence for you having skill as an investor? Ask, 'how much more likely is it that my highest-returning stock pick would go up 30% if I was an excellent investor compared to if I was just picking stocks at random?'"
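Both quotes are asking for a likelihood ratio, which plugs straight into Bayes' rule. A minimal sketch of the update, with made-up numbers for the phone-call example:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    joint_true = p_evidence_if_true * prior
    joint_false = p_evidence_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# Hypothetical numbers: you start 90% sure he likes you. A two-day
# silence is somewhat more likely if he doesn't (40%) than if he
# does (20%) -- a likelihood ratio of only 2 against you.
print(update(0.90, 0.20, 0.40))  # ~0.82: the belief barely moves
```

The gut treats the silence as damning; the arithmetic says weak evidence barely budges a strong prior.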

Turns out many of my beliefs violated the laws of math.
*
I feel much calmer now, with these beliefs safely proven fallacious. I'm still figuring out how reasoning like this got my gut to calm down, but it did.