UMN College of Biological Sciences – Conservatory


(Photo gallery: IMG_3221 through IMG_3337.)

More information can be found at https://cbs.umn.edu/conservatory

Pattern Recognition as Applied to Peer-to-Peer Lending

The purpose of this article is two-fold: 1) to give an overview of what pattern recognition and machine learning are; and 2) to describe how machine learning can be applied to peer-to-peer lending.  This article starts out broad and meanders a bit before it gets into the topic referenced in the title.

Pattern recognition algorithms, as their name suggests, recognize patterns in data.  That is, given a set of inputs (often called features), can an output (sometimes called a label) be inferred?  If that seems a bit foreign, well, it should not be.  Humans use our own pattern recognition all the time.  When you see a physical book, how do you know it's a book?  Maybe you recognize features of the object: it is rectangular, appears to be made of many sheets of paper, and there are words on at least two of the larger flat surfaces.  Maybe it is a book!

It is likely that someone told you, at one point in your life, that the object in front of you (or maybe in your hands) is a book.  This may have happened a few times, and then, all of a sudden, you were able to recognize books.  It is very likely that this happened at such a young age that you do not even remember this first recognition happening.  Before going any further, let's take a detour to explain the other side of this post: peer-to-peer lending.  If you are a bit lost and want to know more about lending, read the post on the Geography of Social Lending first, then come back to this piece.

Social Lending. When a potential borrower uses one of these systems and says, "I would like to obtain a loan," the person enters some identifying information, and information is pulled into the respective platform from a credit bureau or two.  A lot of information is used to determine 1) whether or not a person has a solid enough credit history to take out a loan; and 2) assuming they qualify for a loan, what interest rate they should receive on the loan.  All of the peer-to-peer lending platforms have their own proprietary methods for arriving at the interest rate, and usually a proprietary resulting risk metric.  In the case of Prosper Lending, the source of the data that this article will be using, this risk metric is a letter score of AA to E, in addition to HR (which I usually read as high risk).

Many of the features that go into determining this risk metric are actually available.  There are roughly 400 different features available via Prosper Lending's platform: everything from the percent of available credit on an individual's credit cards, to how long the individual has had credit cards, whether the individual has a mortgage, whether the individual has any wage garnishments, and so on.  Similar to the features that allow a person to identify that a book is a book and not a baseball cap, using the features of an individual's credit profile, it is possible to get a sense of whether the person will fail to repay the loan.  With the availability of loan data from peer-to-peer platforms, there have been a number of competitions to see who can get the most accurate predictions on whether loans will go bad.

If you have been able to follow along so far, great!  I fear, however, that I may lose some of you with the following nerd-stuff.

Mathematicians, computer scientists, statisticians, cognitive experts, and many in between have developed a fascinating array of algorithms that fall into the category of machine learning.  They have fancy names like Support Vector Machines, Ridge and Lasso Regression, Decision Trees, Extra Trees, Random Forests, Gradient Boosting, and many others.  There are a couple of broad groups of problem spaces in machine learning: classification and regression.  Classification is pretty much in line with the non-machine-learning definition of the word: which bucket does this thing belong in?  With regression, the objective is generally to identify a quantity on a continuous spectrum.  An example could be: given a person's gender, height, weight, and maybe a geographic categorization, how much alcohol is this person likely to consume in a given year?

Let's take a look at regression first, before looking into classification.

Length in Inches (Y), Age in Months (X)

The basic idea behind any of these regression algorithms is to effectively fit a line to a series of points.  When you have three or more dimensions in your data, you end up with planes and hyperplanes, but let's keep it in simple 2D space for the moment.  In a simple example, you have a plot of points on a graph: let's say that the Y axis represents children's length, the X axis represents age in months, and you have measurements until the children were 3 years old.  For the sake of example data, let's include the children's gender (as assigned at birth).

Looking at the graph to the right, there are a few things you might be able to say.  Boys, on average, will be slightly longer than girls of the same age.

What if you had some other data that contained just the ages of boys, but you did not have length information for this same set of children?

If you have had a bit of algebra, you might think that there is an equation of a line somewhere in these data.  You might think back to y-intercepts and slopes of lines.  If you thought of this, you just stumbled upon what amounts to 2-dimensional linear regression.  As an aside, knowing that the data are of boys simply means filtering out the girls from the data; that variable, categorical in this case, is not necessary for the following.  Remember that the equation of a line, in slope-intercept form, is y = mx + b.

Using statistical software, like Python's scikit-learn, you can use the existing dataset of ages and lengths to get a linear equation that approximates these data.


import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import numpy as np

def gender(x):
    if x == 1:
        return "Boy"
    elif x == 2:
        return "Girl"

# CDC length-for-age growth chart data for infants (z-scores by sex and age in months)
df = pd.read_excel("https://www.cdc.gov/growthcharts/data/zscore/zlenageinf.xls")
df.columns = ['sex', 'age', 'z_-2', 'z_-1.5', 'z_-1', 'z_-0.5', 'mean', 'z_0.5', 'z_1', 'z_1.5', 'z_2']
boys_df = df[df['sex'] == 1].copy()
girls_df = df[df['sex'] == 2].copy()
# this isn't necessary beyond helping me keep things straight in my head
boys_girls_df = pd.concat([boys_df, girls_df])

boys_girls_df['gender'] = boys_girls_df['sex'].map(gender)
# demetricify these data (centimeters to inches)
boys_df['length'] = boys_df['mean'] * 0.393701
girls_df['length'] = girls_df['mean'] * 0.393701
boys_girls_df['length'] = boys_girls_df['mean'] * 0.393701

x = boys_df['age'].values.ravel()
y = boys_df['length'].values.ravel()

# hold out a portion of the data to later test the fitted model on unseen ages
x_training, x_testing, y_training, y_test = train_test_split(x.reshape(-1, 1), y)

clf = LinearRegression()
clf.fit(x_training, y_training)
predicts = clf.predict(x_testing)
plt.scatter(x_training, y_training, color='lightblue')
plt.xlabel("age")
plt.ylabel("length")
plt.plot(x_testing, predicts, color='red')

# slope (m) and intercept (b) of the fitted line
print(clf.coef_[0], clf.intercept_)

plt.show()

Age in months (X) plotted against Length in inches, with Linear Regression Plot

The above bit of Python code downloads an Excel file from the Centers for Disease Control, and puts the columns and rows from that spreadsheet into a table format that is more easily programmatically manipulated.  Picking the columns necessary for our X and Y, subsetting into boys and girls, and putting some labeling on the columns, we end up with something that can be pushed into the linear regression machinery.  As an aside, you might have noticed the train_test_split method.  Basically, the idea behind this method is to subset your available data into something you show an algorithm, and something you withhold from the algorithm until later to see how well the algorithm can predict unseen values.  This training set and testing set split is important, and will be brought up again further into this post.

The above code will also print two values, one is the m in the previously mentioned equation, y = mx + b, and the other is the b in that same equation.

So, our red line has an equation of roughly:

y = 0.438x + 23.399

And if you are thinking that the red line is really not a very great fit for our light blue dots, you would be correct.  If you had new data from only 5 month to 30 month old boys, you could approximate the lengths, but if you had data for children outside of these bounds, you would end up with approximations that were too long in the case of the very young, and approximations that were too short for those older.

The question now arises: how does one get a more closely fitting line to the underlying data?  The answer ends up being, use fancier machines.  For the sake of simplicity, we will skip over whole groupings of other algorithms and look at one called Gradient Boosting.  If you have had more advanced calculus, and the word gradient rings a bell, good.  That is the same gradient.  Perhaps in a future posting, I'll jump into explaining gradient boosting, but not now.

Age in months versus Length in inches (Gradient Boost Algorithm)

If you swap out the LinearRegression in the above code snippet with GradientBoostingRegressor, rerun the code with the print statement commented out, and replace plt.plot(x_testing, predicts, color='red') with plt.scatter(x_testing, predicts, color='red'), you end up with the graph to the right.  As you can see, the red dots, which consist of the withheld-from-training ages paired with predicted lengths, plot neatly within the blue dots of actual data points.
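To make the swap concrete, here is a minimal sketch of the modified tail end of the earlier snippet; it assumes the variables from that code (x_training, x_testing, and so on) are still in scope.

from sklearn.ensemble import GradientBoostingRegressor

# Same data, fancier machine: a gradient boosting regressor in place of the straight-line fit.
clf = GradientBoostingRegressor()
clf.fit(x_training, y_training)
predicts = clf.predict(x_testing)

plt.scatter(x_training, y_training, color='lightblue')
plt.scatter(x_testing, predicts, color='red')  # scatter instead of plot
plt.xlabel("age")
plt.ylabel("length")
plt.show()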

That’s a brief introduction into applying regression to a problem set.  Now, what about classifiers?

I tend to think of classifiers as still having dots of data on a graph, and still finding a line that fits these data.  However, instead of trying to get the dots directly on the line, you try to figure out which side of the line a dot belongs to, and then you can say it belongs to category 1, or it belongs to category 2.  In the case of our example infant length and gender data, you could use age and length as input features to try to categorize the birth gender of a particular child.  On the peer-to-peer lending front, this equates to using those 400+ features that are included with a loan's listing to try to predict an outcome of success or failure.  There is also a probability metric associated with the classification that could be thought of as: how close to the dividing line is this prediction?
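To make that concrete with the infant data from the earlier snippet, here is a minimal sketch (not part of the original analysis) that uses a random forest classifier to guess birth sex from age and length and reports class probabilities; it assumes the boys_girls_df DataFrame built above.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Features: age in months and length in inches; label: sex (1 = boy, 2 = girl).
# These are CDC mean values per age rather than individual children, so this
# is purely illustrative.
X = boys_girls_df[['age', 'length']].values
y = boys_girls_df['sex'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))      # fraction of held-out rows classified correctly
print(clf.predict_proba(X_test[:5]))  # "how close to the dividing line?" per class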

There are a handful of academic works that look at applying machine learning to the peer-to-peer lending space.  The one that seemed to come up the most, as I researched the topic myself for projects during my graduate studies, was a journal article from 2015; Risk Assessment in Social Lending via Random Forests, by Malekipirbazari & Aksakalli.  The abstract for that article is as follows:

With the advance of electronic commerce and social platforms, social lending (also known as peer-to-peer lending) has emerged as a viable platform where lenders and borrowers can do business without the help of institutional intermediaries such as banks. Social lending has gained significant momentum recently, with some platforms reaching multi-billion dollar loan circulation in a short amount of time. On the other hand, sustainability and possible widespread adoption of such platforms depend heavily on reliable risk attribution to individual borrowers. For this purpose, we propose a random forest (RF) based classification method for predicting borrower status. Our results on data from the popular social lending platform Lending Club (LC) indicate the RF-based method outperforms the FICO credit scores as well as LC grades in identification of good borrowers1.

This paper and a few others I have come across use Random Forests, which seems to be the go-to algorithm.  Much of the data from peer-to-peer lending platforms are categorical, but some are continuous.  Random Forests handle these heterogeneous data quite well2.

The general classification idea, when applying machine learning to peer-to-peer lending data, is a binary problem:  Do loans succeed in being repaid or do they fail to be repaid?

Risk Assessment in Social Lending via Random Forests is really a great paper.  Even though it focuses on the more readily available data from Lending Club, it does a great job of briefly reviewing prior papers that looked into peer-to-peer lending, as well as looking at three different machine learning approaches to the problem.  The authors also highlight that, within Lending Club's data, simply relying upon the proprietary grade metric is not necessarily indicative of a good borrower.  The concept of the good borrower and their identification is, however, the one point that perplexes me.  I tend to couch the problem in a different way: I want to successfully identify bad borrowers, so as to avoid lending to them.  Thinking of the problem space with this notion in mind also highlights another characteristic of these data: imbalanced classes, or categories of final loan state (success or fail).  Many classification techniques work best when the classes being evaluated are more or less even in size3.  Many canonical classification algorithms assume that the outcome classes of a dataset are balanced4.  After the data from Prosper Lending have been mapped into a binary categorization of fail and success, the counts of each outcome show an imbalance across the whole dataset: 75% of the loans were successful, and 25% of loans failed.  This imbalance of classes can lead to classifier bias.  In my mind, this means that it is easier and more likely that a new, never-before-seen loan listing will be classified as a success by your trained algorithm than it is to identify loan listings that should best be avoided.

This imbalance in classes leads us into the general area of preprocessing.  If you think of running data through your machine learning algorithm as processing, this is the step before that.  The algorithms are pretty sophisticated, but if the data have non-numerical values, for example the Prosper Rating, the algorithm will not know what to do with these values; they need to be turned into something numerical.  Likewise, something should be done about the imbalance of outcome classes.

Let’s talk more specifically about what preprocessing was done on our Prosper Lending dataset.

If you have an investor account with Prosper, you can freely download two different sets of data: listing data and loan data.  Listing datasets contain the borrower's credit profile, the state they reside in, their occupation (a value originating from a drop-down menu or a set of pick-one radio buttons in the user interface), as well as a whole host of other bits of information.  Loan datasets contain a sort of snapshot in time of the current status of a loan, if the loan is still actively being paid off.  In the case of loans that have "run their course", this means that they were either successfully repaid, or they entered into a status of "default" or "charge-off"; both of these statuses, for our purposes, are considered fails.

We will only be concerned with loans that have either successfully been repaid, or have defaulted or been charged off.  We are not concerned with active, in-good-standing loans.  As an aside, we might be interested in active, in-good-standing loans if there were a secondary market to sell loans that are in the process of being repaid.  Sadly, Prosper shut down its secondary market a few years ago, and as such, once a loan is originated and the lenders receive notes, those notes are effectively an illiquid asset.

The key piece of information that is missing from these two sets of data is a linkage between the listing from the borrower and a loan's final outcome.  The field listing_number is noticeably absent from the loan datasets.  A linkage between these two datasets is possible by matching the borrower's state, the borrower's interest rate on the loan, the date the listing was turned into a loan, and the amount of money of the loan.  This will get you nearly all the way to having a correctly linked set of datasets.  But Prosper does make this linkage data available; more data are available if you inquire and sign a data sharing agreement.  The linkage data, however, are in a terribly inconvenient format: a 16GB (2GB compressed) comma-separated file that represents every payment and every payment attempt on the Prosper platform from nearly the start of the platform up to roughly the previous calendar quarter.  This enormous file is data rich, and would allow for finer-grained analysis of Prosper's lending and borrowers' repayment efforts, but the bit that we are interested in is the listing_number <-> loan_number pairing that is found in this file.  As a side note, I tend to use listing_number (lower case with an underscore), while Prosper tends to use ListingNumber (CamelCase words); I will go back and forth between the two flavors, but they mean the same thing.  I wrote a bit of wrapper code that can read in all the zip files in a directory for the listing data and the loans data, and make Prosper's CamelCase column names into snake_case.
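I won't reproduce that wrapper verbatim here, but the column renaming boils down to something like the following sketch (a regular expression approach; the function name is mine, not Prosper's):

import re

def camel_to_snake(name):
    # "ListingNumber" -> "listing_number"
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s).lower()

# applied to a freshly read DataFrame:
# df.columns = [camel_to_snake(c) for c in df.columns]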

Take this enormous CSV file, read it into a Pandas DataFrame, select the two columns we are interested in, ListingNumber and LoanID, then group by those two columns:

import pandas as pd

df = pd.read_csv('./LoanLevelMonthly.zip')
loans_listings_df = df.groupby(['LoanID', 'ListingNumber']).size().reset_index()
loans_listings_df.columns = ['loan_number', 'listing_number', 'count']

So, where does one get access these data? You will need to contact Prosper via their help desk and ask for the “Prosper Data License Agreement”. Once you sign and return this agreement to them, within a day or two, you will be sent information on how to download the latest data from the platform.

Then, you will need historical listings and loans data, which you will use with the linkage above to tie the listings data to loan outcomes.  Assuming you have a Prosper account and you are already logged in, as I mentioned above, you can just download all the historical data.

At this stage, we are only interested in data on loans that are done.  That is, loans that have either been successfully and fully repaid, or loans that have defaulted or have been declared a charge-off.  In these data, that means a loan_status greater than 1 and less than 6.

Reading in the listings zip files, you will want to simply read the first (and oldest) file into a Pandas DataFrame, and then read the next oldest, appending that file’s contents to the previously created DataFrame.  You will do this with all of the listings zip files.  Similarly, read all the loans data into a separate DataFrame, appending the newer loans to the older loans.
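A rough sketch of that oldest-to-newest accumulation, assuming the downloaded zip files sit in ./listings and ./loans directories (my naming, not Prosper's) and sort chronologically by filename:

import glob

import pandas as pd

# pandas can read a zip directly as long as each archive holds a single CSV
listing_frames = [pd.read_csv(path, low_memory=False)
                  for path in sorted(glob.glob('./listings/*.zip'))]
listing_df = pd.concat(listing_frames, ignore_index=True)

loan_frames = [pd.read_csv(path, low_memory=False)
               for path in sorted(glob.glob('./loans/*.zip'))]
loan_df = pd.concat(loan_frames, ignore_index=True)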

After reading in both the listing data and the loan data, we select only the loans with the statuses we want.

loan_df = loan_df[loan_df['loan_status'] > 1]
loan_df = loan_df[loan_df['loan_status'] < 6]
print(loan_df.groupby(['loan_status']).size())

The print statement shows the counts for each remaining status:

loan_status
2     25244
3     90481
4    361687
dtype: int64

You should have three DataFrames at this point.  One DataFrame with the linkings of loan_number to listing_number, one DataFrame containing loans data, and one DataFrame containing listing data.

Since the only purpose of linking loans to listings is to get each loan's final status, we will want to select out just two columns: loan_number and loan_status.

loan_status_df = loan_df[['loan_number', 'loan_status']]

At this point, if we wanted to do something like figuring out the actual, real rate of return on a loan, that is, the amount of money that the lender ultimately received, we could calculate that value at this stage.  It would look something like:

loan_df['actual_return_rate'] = (
    (loan_df['principal_paid'] + loan_df['interest_paid']
     + loan_df['service_fees_paid'] - loan_df['amount_borrowed'])
    / loan_df['amount_borrowed']
    / (loan_df['term'] / 12)
)

Note, service_fees_paid is negative, so we just add it to the total to effectively subtract that amount.

If we did want to include actual_return_rate, we would include it in the columns we select out (above), but we would need to remember to remove actual_return_rate later, when we are attempting to classify listings, as it would effectively spike our results.  Likewise, if we were using a regressor to try to predict the actual_return_rate, we would want to remove loan_status from the mix.

But for now, we will only be concerned with predicting a loan's outcome based solely on what was in the listing for that loan.  Back to our three DataFrames.

Start by merging loan_status_df to loans_listings_df.

loans_with_listing_numbers_df = loan_status_df.merge(loans_listings_df,on=['loan_number'])

This will do an inner join on the two DataFrames, and truncate off loans that are too new compared to the loans <-> listings linkage data.

Similarly, for the listing data, we merge listing_df with the newly created loans_with_listing_numbers_df:

complete_df = loans_with_listing_numbers_df.merge(listing_df,on=['listing_number'])

At this point, you will have a DataFrame that contains hundreds of thousands of rows of heterogeneous data.  Some columns are dates, some are categorical (e.g. Prosper scores, credit score range bins, and so forth), and some hold continuous values (usually something dealing with a percent or a dollar amount).

There are also columns that you want to exclude or remove completely because they are present in these historic data but not present in the listing data obtained via Prosper's API for active listings.  We will filter, remove, or otherwise transform the DataFrame into something slightly more usable.

import time

from dateutil import parser

def remap_bool(x):
    # normalize booleans that show up as bools, strings, or missing values
    if x == False:
        return -1
    elif x == 'False':
        return -1
    elif x is None:
        return -1
    elif x == True:
        return 1
    elif x == 'True':
        return 1
    elif x == '0':
        return -1

def to_unixtime(d):
    return time.mktime(d.timetuple())

def to_unixtime_str(d):
    # missing dates get an arbitrary early sentinel date
    if str(d) == 'nan':
        return to_unixtime(parser.parse('2006-09-01'))

    return to_unixtime(parser.parse(d))

def remap_str_nan(x):
    # force categorical values (including NaN) into strings
    return str(x)

def remap_loan_status(x):
    # 4 (successfully repaid) -> 1; defaults and charge-offs -> -1
    if x == 4:
        return 1
    else:
        return -1

df = complete_df.copy()
df['loan_status'] = df['loan_status'].map(remap_loan_status)
df['has_mortgage'] = df['is_homeowner'].map(remap_bool)
df['first_recorded_credit_line'] = df['first_recorded_credit_line'].map(to_unixtime_str)
df['scorex'] = df['scorex'].map(remap_str_nan)
df['partial_funding_indicator'] = df['partial_funding_indicator'].map(remap_bool)
df['income_verifiable'] = df['income_verifiable'].map(remap_bool)
df['scorex_change'] = df['scorex_change'].map(remap_str_nan)
df['occupation'] = df['occupation'].map(remap_str_nan)
df['fico_score'] = df['fico_score'].map(remap_str_nan)

# drop columns not available for active listings via the API, identifiers,
# and anything that would leak the outcome
df = df.drop(['channel_code', 'group_indicator', 'orig_date', 'borrower_city',
              'loan_number', 'loan_origination_date', 'listing_creation_date',
              'tu_fico_range', 'tu_fico_date', 'oldest_trade_open_date',
              'borrower_metropolitan_area', 'credit_pull_date', 'last_updated_date',
              'listing_end_date', 'listing_start_date', 'whole_loan_end_date',
              'whole_loan_start_date', 'prior_prosper_loans61dpd', 'member_key',
              'group_name', 'listing_status_reason', 'Unnamed: 0', 'actual_return_rate',
              'investment_type_description', 'is_homeowner', 'listing_status',
              'listing_number', 'listing_uid'], axis=1)

We end up with a slightly cleaner, slightly more meaningful (from an algorithm's perspective) table of data.  However, there is still more that can be done to these data.  It would be up to you to determine if there were more steps in a pipeline that could be used to make these data more meaningful.  Other steps could include removal of outliers via an Isolation Forest5.  Dealing with the class imbalance is also something to consider.  For the final model that we developed, we used SMOTE6 for oversampling during our training phase.  That is, we used SMOTE to take training data and produce synthetic data with balanced classes from a 60% to 75% sampling of the whole data.  This leaves you legitimate, actual historic data to verify (test) your model with, to see how well it predicts what you are interested in predicting.
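As a rough sketch of that oversampling step, using the imbalanced-learn package (the split ratio and variable names here are stand-ins, not our exact pipeline):

from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

# X holds the preprocessed features, y the -1 (fail) / 1 (success) outcomes.
# Only the training portion is oversampled; the held-out portion stays as
# legitimate, untouched history for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=0)

smote = SMOTE(random_state=0)
X_train_balanced, y_train_balanced = smote.fit_resample(X_train, y_train)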

It should also be mentioned that we one-hot encoded our data.  We split off the outcome column, and one-hot encoded our features.  This has the effect of taking the remaining categorical columns, such as borrower_state, and making individual boolean columns for each category in that column (in the case of state, this resulted in something like 52 new columns).
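In pandas, that split-then-encode step can be as short as the following sketch (assuming the cleaned DataFrame df from the preprocessing code above):

import pandas as pd

# Pull the outcome off to the side, then expand every remaining categorical
# column (borrower_state, occupation, scorex, ...) into one boolean column
# per category.
y = df['loan_status']
X = pd.get_dummies(df.drop(['loan_status'], axis=1))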

The model we have actually deployed into a small, real-world experiment, where actual listings are automatically invested in based off the scorings produced with the model, has a pipeline something like the following:

Linking Dataset -> Cleaning & Translating -> One Hot Encoding -> Anomaly Detection & Removal -> Rescaling data between 0 and 1 -> Oversampling -> Training -> Verification -> Deployment 

Getting to the point of having a tuned, deployable model should really be the topic of a followup article.  The gist of the tuning involves a lot of brute force: grid searching over parameters.
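A rough sketch of what that brute-force search can look like with scikit-learn's GridSearchCV; the parameter grid and scoring choice below are illustrative stand-ins rather than the grid we actually ran, and the balanced training arrays come from the SMOTE sketch above.

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {
    'max_depth': [3, 6, 9],
    'min_child_weight': [1, 5, 10],
    'n_estimators': [200, 400, 600],
}

search = GridSearchCV(
    XGBClassifier(learning_rate=0.1, subsample=0.8, colsample_bytree=0.8,
                  objective='binary:logistic'),
    param_grid, scoring='f1', cv=3, n_jobs=-1)
search.fit(X_train_balanced, y_train_balanced)
print(search.best_params_)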

Our final classifier is something like this:

clf_gb = XGBClassifier(n_estimators=639, n_jobs=-1, learning_rate=0.1, gamma=0,
                       subsample=0.8, colsample_bytree=0.8, objective='binary:logistic',
                       scale_pos_weight=1, max_depth=9, min_child_weight=10,
                       silent=False, verbose=50, verbose_eval=True)

We take the intermediate objects, like the min-max scaler, in addition to the classifier itself, and pickle these.  Before you get your nerd panties in a twist: we do realize that there are dangers to using pickled objects in Python.  We'll assume the risks for now.
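A minimal sketch of that serialization step, assuming a fitted MinMaxScaler named scaler (my name for it) and the clf_gb classifier from above:

import pickle

# Persist the fitted preprocessing objects and the classifier together so the
# deployed wrapper can reproduce the exact same transformations.
with open('prosper_model.pkl', 'wb') as fh:
    pickle.dump({'scaler': scaler, 'classifier': clf_gb}, fh)

# ...and on the deployment side:
with open('prosper_model.pkl', 'rb') as fh:
    artifacts = pickle.load(fh)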

These pickled objects then get deployed with a thin wrapper that takes output from Prosper's API endpoint for listings, does the translation, and blows things out to the required columns (remember, we one-hot encoded things, so there are a lot of columns to fill out).  This then gets run through the model to produce a scoring of success and failure probability.

If you recall from above, the one thing that we were ultimately concerned with was identifying loans that will most likely fail.  Likewise, in that same statement, there is an implicit desire to minimize the number of loans that failed but were classified as successful during our testing phase.  Here is the confusion matrix from our testing phase:

array([[ 8022,  6080],
       [24031, 90805]])

That 6,080 is the smallest number in the bunch; we interpret this as a good sign.  To further address the likelihood of investing in a falsely labeled listing, we also filter on the success probability, something like: exclude listings with a score under 0.85.  This still won't catch truly out-of-whack listings, but we hope it helps.
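For reference, a confusion matrix like the one above and the probability filter can be produced along these lines; this is a sketch that assumes the held-out X_test/y_test from the training phase and that the success class lands in the second predict_proba column.

from sklearn.metrics import confusion_matrix

y_pred = clf_gb.predict(X_test)
print(confusion_matrix(y_test, y_pred))

# predict_proba returns one column per class; thresholding the success column
# implements the "exclude listings with a score under 0.85" filter.
success_prob = clf_gb.predict_proba(X_test)[:, 1]
investable = (y_pred == 1) & (success_prob >= 0.85)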


Useful Links & Things

Geography of Social Lending

Over the last few years, the idea of applying computer-assisted pattern recognition, more commonly known as machine learning, to social lending has sort of stuck with me.  Somewhere in 2015, a colleague and I first looked at this problem space.  I may write about the machine learning aspect in a future blog post, but that is not the focus of this piece.  It was not until recently that I began to think of lending in the context of geography.  Could visual patterns be teased out from the available data?  There is an existing article on the topic, but the granularity of the analysis is at the state level.  Similar to that geographical analysis of Prosper, there is also a look at Lending Club at the ZIP3 level.  I wanted to get to a smaller unit of political geography.  Before I get into this, let's give some context to what exactly social lending is all about.

The basic idea with social lending is that a person wants or needs a bit of money.  The person makes a listing for a loan using an online platform like Lending Club or Prosper, instead of going directly to a bank.  Social lending, more commonly known as peer-to-peer lending, sells the idea that it offers opportunities for both borrowers and lenders to reach their own objectives outside of direct interaction with banks.  Lenders, big and small, have a potential opportunity to put their money to work, while borrowers are able to access money through an alternative to traditional bank loans and credit cards.  As with many transactional things in the era of the Internet, both lenders and borrowers fill out forms via web pages on the respective platforms.  To give a sense of the size of the peer-to-peer lending industry, by early 2015, the largest peer-to-peer lending platform, Lending Club, had facilitated over six billion dollars worth of loans1.  With an active listing on one of the platforms, potential investors in the loan that may result from the listing review the listing's information and decide whether to commit some amount of money to the final loan.

A bit of the appeal of peer-to-peer lending, along with being an alternative source of money for borrowers who might have difficulties accessing credit through other channels, is how these loans are securitized into notes and presented to investors.  Let’s take a quick detour into securitization.

The basic idea of securitization is to take many financial obligations, e.g. loans or debts, pool them together into an even larger thing, and then chop that larger thing into small pieces.  The small pieces are then sold to investors, who expect an eventual return of their initial investment with interest payments along the way.  Securitization has been around for a long time.  In the 1850s, there were offerings of farm mortgage bonds by the Racine & Mississippi railroad.  These farm mortgage bonds had three components: 1) the note, which stated the financial obligation of the farmer to repay the stated mortgage amount; 2) the mortgage, which offered the farm as collateral; and 3) the bond of the railroad, which offered its reputation for repayment in addition to its other assets2.  In the 1970s, the Department of Housing and Urban Development created the first modern residential mortgage-backed security when the Government National Mortgage Association (Ginnie Mae or GNMA) sold securities backed by bundled mortgage loans3.  There is also a fascinating look back in time at a moment in securitization history in a Federal Reserve Bank of San Francisco Weekly Note from July 4, 1986.

The peer-to-peer lending industry, with its focus on everyday people who want to invest in these loans (as opposed to large banks and private equity investors), is slightly different in how loans are securitized.  Instead of bundling many multi-thousand-dollar loans into a pool, and then dividing the pool into notes, a single loan, for example in the amount of $10,000, is divided into notes in denominations ranging from $25 up to thousands of dollars.  An investor could buy a single $25 note, or she could buy a larger percentage of a given loan.  As an aside, a widely held objective in investing is to maximize return on investment and reduce risk.  A diversification of the risk is supposed to be achieved by buying a slice of many different loans4.

Let's get back to the topic at hand: the geography of social lending.  First, the data.  I will be using data from Prosper.  There is a tremendous amount of work behind getting these data into a shape and structure that lends itself to looking at things geographically, as well as simply getting historical data that matches the listing a borrower made with the loan that followed from that listing.  This process involved first having an investment account with Prosper, and then applying for an additional level of access for finer-grained data.  Without the finer-grained level of access, the problem becomes an issue of record linkage: tying listing data to loan data based on the interest rate of the loan, the date of the loan's origination, the amount of money the loan was for, the state of the borrower, and a couple of other characteristics.  That approach is fairly accurate, but if one is able to get true listing-to-loan matches, just use those.

Location. Location. Location.

Contrary to what was said in the Orchard Platform's article on geography and Prosper, locations at a finer resolution than state are available.  There are, however, a couple of caveats.  The first is that the text in this field (borrower_city) is freeform and entered by the borrower.  There is no standardization.  You might get a chcgo, a chicgo, or the actual proper noun spelling, Chicago, for the city's name.  It also appears that entering a city name might be optional, as there are some listings with an empty city.  The other caveat for borrower_city is that it is available only in the historic data downloads, and not via Prosper's API.  Why is a finer-grained location interesting?  Because, if you were an investor, you might want to include a prospective borrower's city in your judgement on whether or not to invest in a loan.  I won't trust those Minneapolis borrowers.  In my mind, this is actually the reason this information is suppressed at the time of an active listing.  There are laws and regulations in the US that state lenders are not allowed to discriminate based on age, sex, and race.  Fair lending laws have been on the books since the 1960s and 1970s5, and so lenders have been keen to avoid perceptions of discrimination based on these characteristics.  Even so, both Prosper and Lending Club, in their early days, had pieces of information shared by the prospective borrowers.  Things like a photo of the borrower along with a message from the borrower were posted in the listing.  Photos could leave an impression of age and race, while the notes often included references to the person's spouse with associated pronouns6.  Both Prosper and Lending Club have the exact addresses for successful borrowers; there are know-your-customer rules and regulations, after all.  By not exposing this sliver of information at the time of an active listing, the lending platforms are potentially covering themselves from both actual discriminatory liability and perceived public relations issues (that doesn't mean that one of these platforms does not periodically have both; likely a paywall on that link, by the way).

At the start of the last paragraph, I mentioned the messiness of these free-form city names.  How does one clean up these data into normalized, relatively accurate locations?  Google.  Google, through its cloud services business, offers relatively good name standardization and geocoding services.  So, putting chcgo or chicgo into their system results in Chicago, IL, with a bunch of other information, like the county it is located in, as well as latitude and longitude information for both a bounding box around the entity and a centroid.

The Google geocoding service, I should add, is not free after a point.  Up to 2,500 uses, there is no charge; for each additional 1,000, it is $0.50.  With a total of 477,546 loans with associated listing data, this seemed potentially expensive.  Instead, I collapsed the borrower's city and state down into unique values, and fed those into the geocoding service.  Getting a unique set of city and state combinations significantly reduced the number of things that I would need to geocode: from nearly 478,000 individual loans down to about 22,000 combinations.  These standardized city/state/coordinate combinations are then reattached to the original data.  Not every user-entered city was able to be identified.  Entries like chstnt+hl+cv, md and fpo were not identified.  FPO and APO (also found in these data) are military installations, Fleet Post Office and Army Post Office, respectively.  The loan/listing entries with locations that could not be identified via Google's Geocoding Service were removed from these data, resulting in less than 10,000 listings, or 1.9% of the total, dropping off.
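A rough sketch of that dedupe-then-geocode approach is below; the API key is a placeholder, the borrower_state column name is my assumption, and the error handling is pared down to the bare minimum.

import pandas as pd
import requests

GEOCODE_URL = 'https://maps.googleapis.com/maps/api/geocode/json'
API_KEY = 'YOUR_KEY_HERE'  # placeholder

def geocode(city, state):
    resp = requests.get(GEOCODE_URL,
                        params={'address': '{}, {}'.format(city, state),
                                'key': API_KEY})
    results = resp.json().get('results', [])
    if not results:
        return None, None  # e.g. "chstnt+hl+cv, md" comes back empty
    location = results[0]['geometry']['location']
    return location['lat'], location['lng']

# Collapse the free-form city/state pairs to unique combinations first, so
# each distinct string is only geocoded once, then reattach the coordinates.
unique_places = (df[['borrower_city', 'borrower_state']]
                 .drop_duplicates()
                 .reset_index(drop=True))
unique_places[['lat', 'lng']] = unique_places.apply(
    lambda row: pd.Series(geocode(row['borrower_city'], row['borrower_state'])),
    axis=1)
df = df.merge(unique_places, on=['borrower_city', 'borrower_state'], how='left')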

I should also give some temporal context to these data; the data range in dates from November 15, 2005 to January 31, 2018.

With a collection of finer grained locations (of unknown quality, I should add), what questions can be visualized with these data?

Orchard Platform’s article on geography of peer to peer lending, as you recall, looks at state level aggregations of data.  The piece looks at choropleth maps of loan originations by volume, loan originations per capita, loans with 30 days or more past due, and finally a map of normalized unemployment rates.

The two maps, above, are originations by place at a city level.  They effectively show nothing more than where people live.  It's a population map.  It is what someone should expect.  You will see more loans originating from the Los Angeles or New York City areas than from the Fargo, ND/Moorhead, MN area.  There are just more people (much higher population densities) in the first two metropolitan areas than in the latter; each of those two higher-population metropolitan areas is also spatially larger.  The New York metropolitan area, for example, is 13,318 square miles, while the Fargo/Moorhead area is only 2,821 square miles.

Even looking at just failed loans, which one of the above maps does, is still only identifying where populations live.

What if you wanted to look at loan originations and whether there appears to be a concentration within US counties where a significant proportion of the county's population is African American?

First you would need data on race at the county level in the United States.  The US Census Bureau's American Community Survey is a great source for this type of information.  In addition to data on race, you need this information tied back to counties, census tracts, or states.  There is a product made by the Institute for Social Research and Data Innovation, called the National Historical Geographic Information System, or just NHGIS7.  Along with the census and survey based data, NHGIS has ESRI shapefiles available that tie the data to place spatially.  These are the two things needed.

The above map, with its blue Prosper loan locations and its red choropleth representing the percent of a county's population that is African American, is interesting looking on the surface, but it is really only showing where a segment of the greater population lives.

I posed the question of race and lending to a colleague of mine; he thought on it for a short time, and then suggested looking at a choropleth of the number of loans in a county divided by the percent of minorities in that county.

First, define what is meant by minority.  In the case of the following, I simply defined this as not white.  The 2010 US Census found that White alone made up 72.4% of the US population8.  Whether or not combining all non-white populations into a single number is the correct thing to do is another story.

In the map to the right, the scatter plot of borrower locations is gone, and instead the choropleth shows the loan count divided by the ratio of non-whites in a given county.  It is another way to slice the data.  However, it also seems to just be identifying more diverse population centers: Los Angeles, Seattle, Chicago, Boston, Las Vegas, and Albuquerque, for example.

Another way to spin the question is to assume, for a moment, that the loans are evenly distributed throughout a county's population.  If a county were 80% white, 15% African American, and 5% Native American or Alaskan Native, we could assume that 80% of the loans were taken out by white individuals, 15% by African Americans, and 5% by Native Americans or Alaskan Natives.  I highly doubt this is the case.  It would be possible to get a closer idea by looking at county subdivisions and where the geocoded cities are located within those.

So, taking the idea that things are evenly distributed, you allocate a portion of the loans to non-whites (or one could even look at the individual race groups in the American Community Survey).  This proportioned loan count is then divided by the total number of non-whites in the county.  This should have the effect of dampening counties with high loan counts but low non-white populations.

In the map to the left, some larger, more diverse population centers are still picked up: Los Angeles, San Francisco and the Bay Area counties, Las Vegas, Atlanta, Chicago, and Houston.

In addition to these larger population areas, places like Arapahoe County, Colorado, which is directly east of Denver, show up.  Mahnomen County in Minnesota's northwest also shows up.  There is also the curious ring around the Washington D.C. area.

One final map.  Let's take the same map as the previous one, but narrow the focus to loans that ultimately were not repaid; that is, the number of failed loans, weighted by the ratio of non-whites in a given county, divided by the total county population.

I could keep slicing and dicing things and coming up with more choropleths, but I won't.  For a broader look at race and money, ProPublica has a fascinating look at bankruptcy and race: Data Analysis: Bankruptcy and Race in America.  This report states that Memphis, Tennessee, and Shelby County, where Memphis is located, have had the highest bankruptcy rate per capita in the nation.  It is curious to see that Shelby County, Tennessee; Desoto and Tunica counties in Mississippi; and Crittenden and Saint Francis counties in Arkansas all show up in the above map.  These are all counties that are part of the greater Memphis area.

That’s it for now.

Other ideas I have had with regard to Prosper data include looking at whether, given a borrower's credit profile and state, the county they reside in can be sussed out via pattern recognition (e.g. machine learning).  I will write, at some point, about a simpler application of machine learning: attempting to predict loan failure or success.

Semester Done, School-year Done


The fall semester is in the books.  I am now about two-thirds of the way through the required credits for a master's degree in computer science.  Chipping away at the degree in a part-time manner, when complete, it will have taken me around four and a half years to finish an otherwise two-year degree.

This year, the courses I took involved HCI – Human-Computer Interaction.  For those who did not click on that Wikipedia link, HCI is an area of computer science that looks at (observes) how people use computers and associated technologies, as well as the creation (design) of technologies that let people use computers in interesting ways.

For the spring semester, the course I took was titled Collaborative & Social Computing.  It explored, from a fairly high level, the many aspects of HCI.  We looked at Wikipedia, Zooniverse, Mechanical Turk, a host of dating sites, as well as Chris McKinlay's gaming of OkCupid (there is also a book on this).  The class ended with a two-person project: my partner and I implemented a very crude Wikipedia-of-CompSci-coursework.  Think of this as a free and open platform to obtain questions and their answers for computer science coursework.  This was to be an instructor-centric platform where instructors could share their courses' questions with other instructors.  We called it AssignmentHub.

This course was a bit of a gateway drug for HCI.  Little samples get you hooked, get you interested, and convince you that the next level, HCI & UI Technology, should be a great course.

The name, HCI & UI Technology, is a bit of a catch-all and does not clearly state what my fall semester's primary course was about: research methods within the context of human-computer interaction.  What's that I just said?  The gist of the course was to look at research papers from the HCI world and examine the research methods used: grounded theory method, surveying of individuals, whether a statistical process was applied, and so on.  We read a lot.  Thirty-eight papers or chapters, likely many hundreds of pages.

Many of the papers and chapters have blended together in my mind.  Which ones were on Facebook?  Which ones used mturk?  Which ones were about designing technologies and which ones were about evaluating perceptions of technologies?

There is one paper that stands out a bit in my mind.  It likely stands out because I had to co-present it to the class.  The paper was Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance.

It’s a 74-page paper, published in 1993, that chronicles the scientific effort to compare work-times of telephone operators using two different workstations at NYNEX.  Those with a keen mind for remembering late 1960s television might realize that Ernestine was the name of one of Lily Tomlin’s characters on Rowan & Martin’s Laugh-In.  Ernestine was a sarcastic telephone operator.

The paper also draws on work from Frank and Lillian Gilbreth and their efforts to measure worker performance and study motion in a more scientific manner.

After all the readings, the term was rounded out with co-authoring a research paper.  The paper was modeled after Understanding User Beliefs About Algorithmic Curation in the Facebook Newsfeed. Instead of Facebook, we looked at Reddit and people’s perceptions of Reddit’s Best algorithm.

And, that's it.  Lots of reading, three research projects (including the research paper).  Social and Collaborative Computing was certainly a gateway drug to the more hardcore HCI & UI Technology (HCI Research Methods in disguise).  It was interesting to learn more about the inner workings of a sub-area of computer science, but I have definitely had my fill for a while.

Below is a table of nearly all that we read this term.  Enjoy.

Paper or Chapter Name | Author(s)
Curiosity, Creativity, and Surprise as Analytic Tools: Grounded Theory Method | Michael Muller
An older version of the Wikipedia talk page for Edina, MN | Wikipedia.org
Excerpts from Old Edina, MN Wikipedia Talk Page | Wikipedia.org
Is it really about me?: message content in social awareness streams. | Naaman, Mor, Jeffrey Boase, and Chih-Hui Lai.
Hustling online: understanding consolidated facebook use in an informal settlement in Nairobi. | Susan P. Wyche, Andrea Forte, and Sarita Yardi Schoenebeck
Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed. | Rader, Emilee, and Rebecca Gray
Mediated parent-child contact in work-separated families. | Yarosh, Svetlana, and Gregory D. Abowd
Experimental Research in HCI | Gergle and Tan
Understanding User Behavior Through Log Data and Analysis | Susan Dumais, Robin Jeffries, Daniel M. Russell, Diane Tang, Jaime Teevan
Research Ethics and HCI | Amy Bruckman
Effectiveness of shared leadership in online communities. | Zhu, Haiyi, Robert Kraut, and Aniket Kittur
To stay or leave?: the relationship of emotional and informational support to commitment in online health support groups | Yi-Chia Wang, Robert Kraut, and John M. Levine
Practical Statistics for Human-Computer Interaction | Jacob O. Wobbrock
Experimental evidence of massive-scale emotional contagion through social networks. | Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock
Predicting tie strength with social media | Eric Gilbert and Karrie Karahalios
Survey Research in HCI | Muller et al.
Concepts, Values, and Methods for Technical Human-Computer Interaction Research | Hudson and Mankoff
Research Through Design in HCI | Zimmerman and Forlizzi
Skinput: appropriating the body as an input surface | Chris Harrison, Desney Tan, and Dan Morris
The bubble cursor: enhancing target acquisition by dynamic resizing of the cursor's activation area | Tovi Grossman and Ravin Balakrishnan
Field trial of Tiramisu: crowd-sourcing bus arrival times to spur co-design | John Zimmerman, Anthony Tomasic, Charles Garrod, Daisy Yoo, Chaya Hiruncharoenvate, Rafae Aziz, Nikhil Ravi Thiruvengadam, Yun Huang, and Aaron Steinfeld
Performance and User Experience of Touchscreen and Gesture Keyboards in a Lab Setting and in the Wild | Shyam Reyal, Shumin Zhai, and Per Ola Kristensson
The drift table: designing for ludic engagement | William W. Gaver, John Bowers, Andrew Boucher, Hans Gellerson, Sarah Pennington, Albrecht Schmidt, Anthony Steed, Nicholas Villars, and Brendan Walker
Predicting human interruptibility with sensors: a Wizard of Oz feasibility study | Scott Hudson, James Fogarty, Christopher Atkeson, Daniel Avrahami, Jodi Forlizzi, Sara Kiesler, Johnny Lee, and Jie Yang
Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds. | Laput, Gierad, et al.
Beyond the Belmont Principles: Ethical challenges, practices, and beliefs in the online data research community | Vitak, J., Shilton, K., & Ashktorab
Unequal Representation and Gender Stereotypes in Image Search Results for Occupations | Kay, Matthew, Cynthia Matuszek, and Sean A. Munson
y do tngrs luv 2 txt msg? | Grinter, Rebecca E., and Margery A. Eldridge
Becoming Wikipedian: transformation of participation in a collaborative online encyclopedia. | Bryant, Susan L., Andrea Forte, and Amy Bruckman
Wikipedians are born, not made: a study of power editors on Wikipedia. | Panciera, Katherine, Aaron Halfaker, and Loren Terveen
Agent-based Modeling to Inform the Design of Multi-User Systems | Ren and Kraut
Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. | Gray, Wayne D., Bonnie E. John, and Michael E. Atwood
Cooperative inquiry: developing new technologies for children with children. | Druin, Allison
Sabbath day home automation: it's like mixing technology and religion. | Woodruff, Allison, Sally Augustin, and Brooke Foucault
SpeechSkimmer: interactively skimming recorded speech. | Arons, Barry
Sensing techniques for mobile interaction. | Hinckley, Ken, et al.
A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment. | Feiner, Steven, et al.
"I regretted the minute I pressed share": A Qualitative Study of Regrets on Facebook | Wang, Yang, et al.

Wind and Weather

5-in-1 Weather Instrument

It was around mid-February of this year (2015) when the Acurite weather station and associated things arrived.

I had waffled on whether to purchase one.  It was not inexpensive, but it is also not the cost of an entry-level professional unit.  I also really had only one initial use for it.  I wanted to answer a question that had been ruminating in the back of my head for a bit over a month.  The question had come up after the solar panels went up on the chicken coop.  The solar panels only generate electricity when there is enough sunlight: often poorly during the day in winter, and never during the night any time of year.  The only other obvious alternative energy generation method was wind.  But it is not as simple as buying a turbine system.  A turbine is useless without wind.  A turbine is also useless even with a small amount of wind.

Was there enough wind at the house to generate electricity?

We mounted the outdoor part of the weather system on a fence post near the chicken yard; the indoor receiver (with its fancy colorized screen) sits in the kitchen and the Internet bridge lives in the basement.  The Internet bridge is a device that connects via a network cable into a network switch; the indoor receiver wirelessly sends weather readings to this device, and, subsequently, forwards those readings to Acurite’s My Backyard Weather service.

Acurite’s service has limited analytical capabilities.  You can produce simple line graphs of individual readings – wind speed, temperature, barometric pressure, and so on.  But, you cannot produce fancier things like a wind rose, or pull apart temperature readings into night time lows plotted against daytime highs.

Through a bit of a virtual Rube Goldberg setup, I started collecting the readings in a database of my own.  I now have readings, on average, every 20.36 minutes, from February 21, 2015 to the present1.

Using some statistical and graphing tools2,3,4,5,6, I came up with some answers to the original question.

The short answer is it’s unlikely that from six to twelve feet above the ground, there is enough wind to generate electricity.

Let me explain a bit more.

I narrowed the focus of the question to the end of winter.  Since I only started collecting data at the end of February, that left March as the closest month to a true winter month.

The wind turbines that I had been looking at have a wind cut-in speed of 4.2 to 6.7 MPH.  Below that speed but above 0.0 MPH, the turbine blades and head may slowly rotate, but it is not enough rotation to generate electricity.  The wind rose, above, was quite helpful in coming to an answer.  It shows that we get our dominant wind from the west, which seems obvious in retrospect, as there is an enormous bluff/hill to the east.  But having the direction of the wind is likely not necessary.  Plotting the March data as a histogram, you get a very simple yet informative picture: the majority of the wind is under four miles per hour.  That's well under what is necessary to make a turbine useful.

Variable | Value
Wind > 4 MPH | 22.32%
Wind <= 4 MPH | 77.68%
Average Wind (MPH) | 2.49
Max Wind (MPH) | 10.90
Average Temperature (F) | 36.07
High Temperature (F) | 71.59
Low Temperature (F) | -7.20
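For the curious, summary numbers like these fall out of a few lines of pandas and matplotlib once the readings are in a DataFrame.  This is a sketch, and the readings_df, timestamp, and wind_mph names are stand-ins for whatever your logging schema uses.

import matplotlib.pyplot as plt
import pandas as pd

# readings_df: one row per reading, with a timestamp and wind speed in MPH
march_df = readings_df[(readings_df['timestamp'] >= '2015-03-01')
                       & (readings_df['timestamp'] < '2015-04-01')]

above_cut_in = (march_df['wind_mph'] > 4).mean() * 100  # percent of readings
print("Wind > 4 MPH: {:.2f}%".format(above_cut_in))
print("Average wind: {:.2f} MPH".format(march_df['wind_mph'].mean()))
print("Max wind: {:.2f} MPH".format(march_df['wind_mph'].max()))

# the histogram that makes the answer obvious at a glance
march_df['wind_mph'].hist(bins=20)
plt.xlabel("Wind speed (MPH)")
plt.ylabel("Number of readings")
plt.show()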

A wind turbine is out of the question.  There are other locations in the yard that could have more wind, but it would be inconvenient to move the generated power from those locations to the battery bank at the chicken coop.  A more plausible scenario is to add both more batteries and more solar panels.  We would be able to capture more energy when it is light out, and have more storage capacity to draw from when it is needed.

  1. data sample
  2. Jupyter Notebook is a web application for interactive data science and scientific computing.
  3. matplotlib is a python 2D plotting library.
  4. windrose (license)
  5. anaconda implementation of python3
  6. jupyter notebook with sample graphs and calculations