Tuesday, December 17, 2013 :::
If you have a kid or, as Yglesias notes, even if you don't, you might be interested in this discount program from Amazon. What most interests me right now is that Yglesias ends with
"Even if you have a real baby, a fake one might be better if you don't want too many targeted deals."

This seems odd to me. I can imagine privacy reasons one might want to generate a fake baby, but if I put those aside, it seems to me that what you're likely to do by falsifying data this way is not to reduce the number of ads or sales pitches you receive, but simply to make them less relevant to you. Indeed, this seems to be what he's saying. Except he seems to be saying that this would be bad. In fact, he seems to take for granted that somebody who likes oranges and not grapefruit (for example) would prefer to receive ads for grapefruit rather than for oranges. I'm having trouble getting my head around this.
Again, this is putting aside privacy reasons, and I think perhaps that's key here; maybe he isn't worried about handing over information to Amazon, but doesn't "feel" as though his privacy has been violated until Amazon seems to be acting on it. (To emphasize, he didn't end the article with "if you don't want Amazon to have your information" — he ended with "if you don't want too many [well-] targeted [, instead of mistargeted,] deals." I've possibly created an emphasis he didn't intend, though I'm not sure how he could have missed it.) Or maybe he's taking for granted that "targeted deals" involve price discrimination of some sort — that you would actually be getting better "deals" if Amazon didn't know quite so well what your interests were.
I think a lot of the potential computer-driven gains in economic efficiency over the course of the next generation lie in what I think of as "Netflix Prize sort of stuff" — measuring heterogeneous preferences and resources and figuring out how to better match goods and services to their highest-value users. I think whether we reap these gains will depend partly on how willing people are to take advice that may occasionally feel like a surrender of free will — Tyler Cowen has discussed this, but I'm not finding a good link quickly — and partly on the extent to which user interface designers can mitigate this, tricking people into thinking that the things that they want to do were their ideas after all. I'm pretty sure that understanding Yglesias's final sentence would improve the chances that this project succeeds.
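To make the "Netflix Prize sort of stuff" concrete, here is a toy sketch of the kind of latent-factor model that did well in that competition: it learns hidden taste dimensions from observed ratings and then guesses the unobserved cells — say, whether an oranges-lover would want the grapefruit pitch. The ratings matrix, the two-factor dimension, and all the hyperparameters here are invented purely for illustration, not anyone's actual method.

```python
# Toy latent-factor matrix factorization, trained by stochastic gradient
# descent. Rows are users, columns are products; None marks a preference
# the seller hasn't observed and would like to predict. All numbers are
# made up for illustration.
import random

random.seed(0)

ratings = [
    [5, 1, None, 4],   # user 0: likes items 0 and 3, dislikes item 1
    [4, None, 1, 5],
    [1, 5, 4, None],   # user 2: roughly the opposite tastes
]
n_users, n_items, k = len(ratings), len(ratings[0]), 2

# Small random initial latent factors for users (P) and items (Q).
P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Predicted affinity of user u for item i: dot product of factors."""
    return sum(P[u][f] * Q[i][f] for f in range(k))

lr, reg = 0.05, 0.02
for _ in range(2000):
    for u in range(n_users):
        for i in range(n_items):
            if ratings[u][i] is None:
                continue                      # train only on observed cells
            err = ratings[u][i] - predict(u, i)
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)

# Fill in guesses for the unobserved preferences.
for u in range(n_users):
    for i in range(n_items):
        if ratings[u][i] is None:
            print(f"user {u}, item {i}: predicted {predict(u, i):.1f}")
```

The point of the sketch is the feedback loop in the post: accurate data in, well-targeted pitches out — and a "fake baby" row of falsified ratings degrades only the relevance of the predictions, not their quantity.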
::: posted by dWj at 12:07 PM