Tuesday, February 21, 2017

Basic Data Manipulation and Statistics in R and Python

Below are links to a couple of gists with R and Python code for some very basic data manipulation and statistics. I have been using R and SAS for almost a decade, but the R code originates from some very basic scripts I wrote when I was a beginning programmer. The Python script is just a translation from R to Python. This does not represent the best way to solve these problems, but it provides enough code for a beginner to get a feel for coding in one of these environments. This is 'starter' code in the crudest sense, intended to let one begin learning R or Python with the simplest possible syntax and as little intimidation as possible. Once started, one can google other sources or enroll in courses to expand their programming skillset.

Basic Data Manipulation in R

Basic Data Manipulation in Python

Basic Statistics in R

Basic Statistics in Python 
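The linked gists are the real starting point, but as a taste of what they cover, here is a minimal sketch of basic data manipulation and summary statistics in Python. It uses only the standard library, and the small dataset is made up purely for illustration:

```python
# Minimal sketch of basic data manipulation and summary statistics.
# Standard library only; the data below is made up for illustration.
from statistics import mean, median, stdev

# A small 'dataset' as a list of records (dictionaries)
data = [
    {"id": 1, "group": "A", "value": 10.0},
    {"id": 2, "group": "A", "value": 12.0},
    {"id": 3, "group": "B", "value": 9.0},
    {"id": 4, "group": "B", "value": 15.0},
    {"id": 5, "group": "A", "value": 11.0},
]

# Subset rows (like subset() in R)
group_a = [row for row in data if row["group"] == "A"]

# Create a new variable (like transform() in R)
for row in data:
    row["value_sq"] = row["value"] ** 2

# Basic summary statistics
values = [row["value"] for row in data]
print("mean:", mean(values))
print("median:", median(values))
print("sd:", round(stdev(values), 3))

# Grouped means (like aggregate() in R)
by_group = {}
for row in data:
    by_group.setdefault(row["group"], []).append(row["value"])
group_means = {g: mean(v) for g, v in by_group.items()}
print("group means:", group_means)
```

A beginner would more typically reach for pandas (or data frames in R) for this kind of work, but the plain-Python version keeps the logic of subsetting, transforming, and aggregating visible.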

For more advanced applications in R posted to this blog see all posts with the tag R Code.

Thursday, February 16, 2017

Machine Learning in Finance and Economics with Python

I recently caught an episode of the Chat with Traders podcast, one of several related to quantitative finance, and this one emphasized some basics of machine learning. It is a very good discussion of fundamental concepts in machine learning regardless of your interest in finance or algorithmic trading.

You can find this episode via iTunes. But here is a link with some summary information.

Q5: Good (and Not So Good) Uses of Machine Learning in Finance w/ Max Margenot & Delaney Mackenzie


Some of the topics covered include (swiping from the link above):

What is machine learning and how is it used in everyday life?

Supervised vs unsupervised machine learning, and when to use each class.    

Does machine learning offer anything more than traditional statistical methods?

Good (and not so good) uses of machine learning in trading and finance.

The balance between simplicity and complexity.
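As a toy illustration of the supervised vs. unsupervised distinction discussed on the show, here is a minimal standard-library sketch (the data and cluster centers below are made up for illustration):

```python
# Supervised: we have labeled pairs (x, y) and fit a line y = a + b*x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]  # roughly y = 2x, with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
print(f"fitted line: y = {a:.2f} + {b:.2f}x")

# Unsupervised: no labels; assign each point to the nearest of two
# centers (one assignment step of k-means, with centers given)
points = [0.5, 0.9, 5.1, 5.4]
centers = [1.0, 5.0]
clusters = [min(range(2), key=lambda k: abs(p - centers[k])) for p in points]
print("cluster assignments:", clusters)
```

The point of the contrast: the regression learns from outcomes we already know (the `ys`), while the clustering step has to discover structure in the `points` alone.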

I believe the guests on the show were Quantopian data scientists; Quantopian is a platform for algorithmic trading and machine learning applied to finance. They do this stuff for real.

There was also some discussion of Python. Following up on that, a tweet from @chatwithtraders linked to a nice blog, python for finance, that covers some applications using Python. Very good stuff all around. I wish I still taught financial data modeling!

See also: Modeling Dependence with Copulas and Quantmod in R

Sunday, February 12, 2017

Molecular Genetics and Economics

A really interesting article in JEP:

A slice:

"In fact, the costs of comprehensively genotyping human subjects have fallen to the point where major funding bodies, even in the social sciences, are beginning to incorporate genetic and biological markers into major social surveys. The National Longitudinal Study of Adolescent Health, the Wisconsin Longitudinal Study, and the Health and Retirement Survey have launched, or are in the process of launching, datasets with comprehensively genotyped subjects…These samples contain, or will soon contain, data on hundreds of thousands of genetic markers for each individual in the sample as well as, in most cases, basic economic variables. How, if at all, should economists use and combine molecular genetic and economic data? What challenges arise when analyzing genetically informative data?"



Beauchamp JP, Cesarini D, Johannesson M, et al. Molecular Genetics and Economics. Journal of Economic Perspectives. 2011;25(4):57-82.

Saturday, February 11, 2017

Program Evaluation and Causal Inference with High Dimensional Data

Brand new from Econometrica-

Abstract: "In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. … We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model." Read more...
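For readers new to the jargon, the LATE mentioned in the abstract is, in the textbook case of a binary instrument $Z$, binary treatment $D$, and outcome $Y$ (this is the standard Imbens-Angrist setup, not a formula taken from the paper itself), the Wald estimand:

```latex
% Textbook Wald estimand for the local average treatment effect (LATE)
% with binary instrument Z, binary treatment D, and outcome Y:
\mathrm{LATE}
  = \frac{E[Y \mid Z = 1] - E[Y \mid Z = 0]}
         {E[D \mid Z = 1] - E[D \mid Z = 0]}
```

Under the usual instrument validity and monotonicity assumptions, this identifies the average treatment effect for compliers; the paper's contribution is doing inference on such parameters when the conditioning covariates are high dimensional and learned by machine learning methods.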