Thursday, February 16, 2017

Machine Learning in Finance and Economics with Python

I recently caught an episode of the Chat with Traders podcast, one of several related to quantitative finance, and this one emphasized some basics of machine learning. It is a very good discussion of fundamental concepts in machine learning, regardless of your interest in finance or algorithmic trading.

You can find this episode via iTunes. But here is a link with some summary information.

Q5: Good (and Not So Good) Uses of Machine Learning in Finance w/ Max Margenot & Delaney Mackenzie

https://chatwithtraders.com/quantopian-podcast-episode-5-max-margenot/

Some of the topics covered include (swiping from the link above):

What is machine learning and how is it used in everyday life?

Supervised vs unsupervised machine learning, and when to use each class.    

Does machine learning offer anything more than traditional statistical methods?

Good (and not so good) uses of machine learning in trading and finance.

The balance between simplicity and complexity.

I believe the guests on the show were Quantopian data scientists; Quantopian is a platform for algorithmic trading and machine learning applied to finance. They do this stuff for real.

There was also some discussion of Python. Following up on that, there was a tweet from @chatwithtraders linking to a nice blog, Python for Finance, that covers some applications using Python. Very good stuff all around. I wish I still taught financial data modeling!


See also: Modeling Dependence with Copulas and Quantmod in R

Sunday, February 12, 2017

Molecular Genetics and Economics

A really interesting article in JEP:

A slice:

"In fact, the costs of comprehensively genotyping human subjects have fallen to the point where major funding bodies, even in the social sciences, are beginning to incorporate genetic and biological markers into major social surveys. The National Longitudinal Study of Adolescent Health, the Wisconsin Longitudinal Study, and the Health and Retirement Survey have launched, or are in the process of launching, datasets with comprehensively genotyped subjects…These samples contain, or will soon contain, data on hundreds of thousands of genetic markers for each individual in the sample as well as, in most cases, basic economic variables. How, if at all, should economists use and combine molecular genetic and economic data? What challenges arise when analyzing genetically informative data?"


Link:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3306008/


Reference:
Beauchamp JP, Cesarini D, Johannesson M, et al. Molecular Genetics and Economics. Journal of Economic Perspectives. 2011;25(4):57-82.

Saturday, February 11, 2017

Program Evaluation and Causal Inference with High Dimensional Data

Brand new from Econometrica:

Abstract: "In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. … We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model." Read more...

Saturday, January 14, 2017

Identification Through Copulas

Recently I attended a talk (see Zimmer and Trivedi below) where a paper referenced work by Han and Vytlacil that used copulas to estimate probit models with dummy endogenous regressors. The seminar offered an extension to other types of models. However, here I wanted to summarize the approach more generally. You can find the referenced working paper below for more details, which I am told is forthcoming in the Journal of Econometrics.

Copula functions can be used to model a dependence structure separately from the marginal distributions.

Based on Sklar's theorem the multivariate distribution F can be represented by copula C as follows:

F(x1,…,xp) = C{F1(x1),…, Fp(xp); θ}

The parameter θ represents the dependence among the marginal distributions F1,…,Fp. Now let's set up the framework for what we are trying to model.
Suppose we want to predict some outcome Y. Let

Y = f(x,D)

where x is a vector of controls and D is a treatment indicator. We are interested in estimating the coefficient on D as our measure of the treatment effect. However, suppose there is selection bias, such that those who choose to engage in the program indicated by D are more likely to have higher levels of Y regardless of treatment. (That is, unobserved heterogeneity drives selection, making D endogenous.)

We can model selection as follows:

D = g(x,z)

where x is a vector of controls and z is an instrument, correlated with the probability of D but excluded from the outcome equation (uncorrelated with the unobservables affecting Y). We can jointly model the outcome and selection functions using copulas where:

P(Y, D | x, z) = C{F(·), G(·); θ}

As it turns out, the term θ captures the dependence between outcome and selection allowing for unbiased estimation of treatment effects associated with D. Han and Vytlacil extend the results to cases without instruments.
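This can be illustrated with a short simulation. The sketch below is not from either referenced paper; the Gaussian copula, the marginals, and all parameter values are my own choices for illustration. It draws dependent uniforms from a Gaussian copula with dependence parameter θ and pushes them through arbitrary marginals, so an outcome Y and a "selection" indicator D end up dependent even though each margin was chosen freely:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50_000

# Gaussian copula: all of the dependence comes from the correlation theta
theta = 0.7
cov = [[1.0, theta], [theta, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Push each margin through its own CDF to get dependent Uniform(0,1) draws
u = stats.norm.cdf(z)

# Attach arbitrary marginals (Sklar's theorem): an exponential outcome and
# a binary "selection" indicator, dependent through the copula alone
y = stats.expon.ppf(u[:, 0], scale=2.0)
d = (u[:, 1] > 0.5).astype(int)

# The dependence survives the marginal transformations: those who "select in"
# (d = 1) have systematically higher outcomes, mimicking selection bias
rho, _ = stats.spearmanr(u[:, 0], u[:, 1])
```

With θ = 0 the margins would be independent and D would carry no selection; here the rank correlation rho comes out near 0.68 and the treated group's mean outcome exceeds the untreated group's, even though "treatment" does nothing to Y by construction.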

References:

Han, S. and E. Vytlacil (2015). Identification in a generalization of bivariate probit models with dummy endogenous regressors.Working paper, University of Texas at Austin.

Trivedi, P.K. and D.M. Zimmer (2016). A Note on Identification of Discrete Bivariate Copulas. Working paper, August 5, 2016.

Tuesday, January 10, 2017

Mediators, Moderators, and Mechanisms

Recently Marc Bellemare shared a post highlighting an article in the American Political Science Review, Explaining Causal Findings Without Bias: Detecting and Assessing Direct Effects. He does an awesome job giving an overview of the article. If you read his post, you will see that the paper emphasizes causal mechanisms and introduces this through controlled direct effects:

"their method not only tells you whether M is a mechanism through which D causes y, it can also tell you whether there is any significant amount of statistical variation left in the causal relationship flowing from D to y after M is accounted for"

I had previously been working on a post related to mediators and moderators, and his post motivated me to wrap it up today.

In the article Mediators and Mechanisms of Change in Psychotherapy Research, Kazdin provides some clarity about the differences and relationships between mediators, moderators, and mechanisms:

Mediator: an intervening variable that may account (statistically) for the relationship between the independent and dependent variable. Something that mediates change may not necessarily explain the processes of how change came about. Also, the mediator could be a proxy for one or more other variables or be a general construct that is not necessarily intended to explain the mechanisms of change. A mediator may be a guide that points to possible mechanisms but is not necessarily a mechanism.

Mechanism: the basis for the effect, i.e., the processes or events that are responsible for the change; the reasons why change occurred or how change came about.

Moderator: a characteristic that influences the direction or magnitude of the relationship between an independent and dependent variable. If the relationship between variables x and y differs for males and females, sex is a moderator of the relation. Moderators are related to mediators and mechanisms because they suggest that different processes might be involved (e.g., for males or females).

Reference:

Mediators and Mechanisms of Change in Psychotherapy Research
Alan E. Kazdin. Annu Rev Clin Psychol. 2007;3:1-27.

Mediators and Moderators

Moderators: With moderation, a third variable impacts or interacts with the relationship between two other variables. We would say the relationship between two variables is ‘moderated.’ This can be thought of as an interaction in a standard regression:

Y = b0 + b1*X1 + b2*X2 + b3*X1*X2

b3 = the moderating effect, i.e., the relationship between Y and X1 changes with levels of X2.
b1 = the impact of X1 on Y when X2 = 0.

So in the context of the relationship between Y and X1, X2 is a moderator.
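As a quick sketch of the regression above (the simulated data and coefficient values are my own, purely for illustration), ordinary least squares on an intercept, X1, X2, and X1*X2 recovers b3, the moderating effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulate moderation: the effect of X1 on Y depends on the level of X2
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
b0, b1, b2, b3 = 1.0, 2.0, -1.0, 0.5   # true coefficients (chosen arbitrarily)
y = b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2 + rng.normal(size=n)

# OLS on an intercept, X1, X2, and the interaction term X1*X2
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def effect_of_x1(x2_val):
    # marginal effect of X1 on Y at a given level of the moderator X2
    return coef[1] + coef[3] * x2_val
```

The marginal effect of X1 is not a single number: at X2 = 0 it is b1 (about 2.0 here), while at X2 = 2 it is b1 + 2*b3 (about 3.0), which is exactly the sense in which X2 "moderates" the relationship between Y and X1.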

Mediators: With mediation, a third variable intervenes in the relationship between two other variables. For example, suppose we are interested in the relationship between x and y. This relationship may be 'mediated' by a third variable m.

Consider a model where y = grade in course (our outcome of interest), x = IQ, and m = study skills. We might hypothesize that study skills 'mediate' the effect of IQ on course grade. A perfectly brilliant person might do OK on an exam through educated guesses, but we all might know of cases where brilliant students have done quite poorly due to lax study skills. So while there may be a direct effect of IQ on grades, IQ -> grades or x -> y, there is an indirect effect as well, IQ -> study skills -> grades or x -> m -> y.

This implies that mediation can take a number of forms and can be formally tested. Let c denote the direct path from x to y, a the path from x to m, and b the path from m to y. In the case of full mediation, the relationship between x and y becomes insignificant after the mediator m is included in the model; that is, our estimate of c is not significantly different from 0. Partial mediation occurs if the estimate of c is reduced, but remains significant, after m is entered into the model. In this case we could say that x has both a direct effect on y (through the path c) and an indirect effect (through the mediator m, i.e., paths a and b).

These relationships can be formally tested as laid out in Hair et al:

1) Test for significant relationships between x and y (estimate c), between x and m (estimate a), and between m and y (estimate b).
2) If c is significant after m is included, and the magnitude of c does not change, then m is not a mediator.
3) If the magnitude of c is reduced after including m, and c remains significant, then m is a mediator and this is a case of partial mediation.
4) If including m in the model reduces the magnitude of c such that it is not significantly different from 0, then m is a mediator and this is considered a case of full mediation.
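The steps above can be sketched with simulated data (the data-generating values below are my own, and this is an illustration rather than a full test with standard errors). Here x affects y only through m, so we should see full mediation; a useful fact is that in linear OLS the decomposition total effect = direct effect + a*b holds exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Full mediation by construction: x -> m (path a), m -> y (path b),
# and no direct x -> y path (c = 0)
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)   # path a = 0.8
y = 1.5 * m + rng.normal(size=n)   # path b = 1.5

def ols(dep, *cols):
    """OLS coefficients of dep on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(dep))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta

c_total = ols(y, x)[1]             # total effect of x on y, about a*b = 1.2
a_hat = ols(m, x)[1]               # path a: effect of x on m
_, c_direct, b_hat = ols(y, x, m)  # direct effect of x and path b, with m included

# Once m enters the model the direct effect collapses toward zero (full
# mediation), and the indirect effect a*b accounts for the total effect
indirect = a_hat * b_hat
```

Had the simulation included a nonzero direct path from x to y, c_direct would have shrunk relative to c_total but remained significant, i.e., partial mediation (step 3 above).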

Reference: Multivariate Data Analysis. 6th Edition. Hair, Black, Babin, Anderson and Tatham. Pearson-Prentice Hall. 2006.