
Behavioral Credit Scoring
NATE CULLERTON*
TABLE OF CONTENTS
INTRODUCTION ....................................................... 808
I. BEHAVIORAL CREDIT PROFILING ..................................... 809
   A. THE FICO SCORE ............................................... 810
   B. WEB 1.0 ...................................................... 812
   C. WEB 2.0 ...................................................... 814
II. HARMS .......................................................... 817
   A. THE PUBLIC/PRIVATE DISTINCTION ............................... 817
   B. TRANSPARENCY ................................................. 819
   C. DISCRIMINATION ............................................... 820
   D. CONTEXT ...................................................... 824
III. EXISTING REGULATIONS .......................................... 827
   A. FCRA ......................................................... 827
   B. ANTIDISCRIMINATION LAW ....................................... 828
   C. CONSUMER-PROTECTION REGULATIONS ............................. 829
IV. THE WHITE HOUSE APPROACH ....................................... 831
   A. THE CONSUMER PRIVACY BILL OF RIGHTS ......................... 831
   B. ENFORCEMENT .................................................. 834
   C. REFORMS ...................................................... 836
CONCLUSION ......................................................... 837
* Georgetown University Law Center, J.D. expected 2013. © 2013, Nate Cullerton. I would like to
thank Professor Julie Cohen for first getting me excited about the subject of this Note and for helpful
comments throughout the writing process. Thanks also to the editors and staff of The Georgetown Law
Journal.
INTRODUCTION
In 2011, Kevin Rose, the influential founder of the website Digg, caused a stir
when he took to his video blog to share a “random idea.”1 “This might be
potentially the dumbest . . . least vetted idea that I’ve ever thrown out there,”
Rose said, “But . . . what if we could make credit cards a little bit more
social?”2 Rose suggested a system in which trusted friends would invite each
other into an extended credit network sponsored by a bank.3 The more friends
you could invite and the better credit those friends had, the lower your own
interest rate would be and the higher your available credit. Contrary to delinquency to a large, faceless bank, where the only real consequence of missed
payments would be a slight credit-score reduction, Rose argued that your
trusted friends in the newly formed credit circle would be loath to miss
payments because they would be letting you and the network down. Peer
pressure and tight social bonds would provide far greater enforcement than the
coercive power of a large institution ever could.4 It would be, in a sense, a
return to a preindustrial small-town ideal where credit was extended on a
handshake, based on one’s standing in a close-knit community.
As our commercial and social lives are increasingly mediated through digital
technologies, companies large and small have begun to make credit “social,”
and not only in the seemingly benign way proposed by Rose. By collecting and
mining the enormous wealth of personal data generated by performing the
mundane tasks of daily existence—shopping, reading, socializing—credit card
companies, large banks, and a host of start-up companies in the data-collection
and lending fields are at the cusp of a revolution in the way they determine and
price risk in credit markets. In the coming years, who you know, where you
shop, and what you read may dramatically affect your access to credit. Although
much scholarly attention has been paid to the privacy implications of online
data mining and aggregation, or “dataveillance,” for use in targeted behavioral
advertising,5 relatively little attention has been focused on the adoption of these
techniques by lenders. And although the efficiency and accuracy justifications
for total access to consumer information may be at their highest when determining credit risk, these practices also raise unique concerns regarding our privacy
expectations in digital space. The heightened potential for discrimination facilitated by online tracking deserves closer attention. This Note seeks to address
some of these concerns.
1. Kevin Rose, Random Idea—Social Credit Cards?, DODEBT! (Aug. 14, 2011), http://www.dodebt.com/credit-cards/random-idea-social-credit-cards.
2. Id.
3. Id.
4. Id.
5. Roger A. Clarke, Information Technology and Dataveillance, 31 COMM. ACM 498, 498 (1988).
Part I will provide an overview of current and proposed uses of data mining
and tracking technologies to control and determine access to credit. This Part
begins by examining the traditional marker of creditworthiness, the Fair Isaac
Corporation (FICO) score, and criticisms of the score that have led to the search
for more predictive alternatives. Some of these alternatives are already in use.
Web tracking companies and banks have partnered to create profiles derived
from online habits that then funnel advertisements for certain pricing and rates
on credit cards to certain profile groups. Credit issuers track purchases by users
and use them to create detailed algorithms that can lower credit limits even in
the absence of delinquency. Banks and other financial institutions are beginning
to use Facebook, Twitter, and other social media for risk monitoring of customers. A handful of start-ups have begun issuing credit based on “social credit”
scores, which assess the creditworthiness of friends and associates online. Other
solutions are being developed or proposed, including mining data on the “social
graph”—the web of connections generated by social-media activities.
Part II examines these policies from the perspective of privacy and discrimination. I argue that behavioral credit scoring violates established contextual norms
of social-media users and has a disproportionately negative impact on the poor
and minorities, entrenching disadvantage rather than empowering upward mobility. Even absent claims of disparate impact, the socioeconomic discrimination
facilitated by behavioral credit scoring, and the monetization of behavior that
accompanies it, should push us to question the morality of these practices.
Part III examines the existing regulatory framework applicable to the credit
industry, concluding that current laws provide almost no protection against data
profiling by lenders. The Fair Credit Reporting Act regulates consumer reports,
but the Act is of questionable applicability to information gathered by monitoring and tracking online activity. Moreover, even in situations where the FCRA
would apply, its remedial focus on reporting and accuracy is ill-suited to prevent
the gathering of information in the first instance.
Part IV will consider newly released proposals by the Obama Administration
that call for a Consumer Privacy Bill of Rights (CPBR), arguing that although
the proposal is an encouraging step forward, it does not go far enough and
should include a categorical ban on behavioral credit scoring.
I. BEHAVIORAL CREDIT PROFILING
The modern system of credit, using predictive statistical models of creditworthiness, began in the 1950s with the development of the FICO score.6 As the
6. Dean Foust & Aaron Pressman, Credit Scores: Not-So-Magic Numbers, BLOOMBERG BUSINESSWEEK, Feb. 6, 2008, http://www.businessweek.com/stories/2008-02-06/credit-scores-not-so-magic-numbers.
credit score has come to touch nearly every aspect of the American consumer
economy, banks and other lenders have aggressively sought ways to increase
the accuracy of lending models and to more finely segment the population based
on the ability to take on and repay debt.
This Part briefly looks at the development, use, and growing criticism of
traditional FICO scores before describing two broad sources of alternative data
being integrated into credit determinations: consumer shopping and browsing
habits online, and social-media use.
A. THE FICO SCORE
The term “credit score” is a generic reference to the use of statistical models
(derived from group probabilities) to estimate the likelihood that individuals
will repay a loan.7 FICO, the dominant provider of credit scores in the United
States, began marketing a formula for computing creditworthiness in 1956 to
department stores and banks.8 The FICO score debuted in the 1980s.9 Since
then, the FICO score has come to dominate consumer financial markets and is
used as a benchmark in accessing consumer credit of all kinds, from bank loans
to cell phone contracts to credit cards and auto loans.10
FICO and similar scoring models no longer simply price loans; they are now routinely used by employers and landlords to conduct background checks and by insurance and utility companies to set rates.11 Indeed, FICO
pervades nearly every aspect of our lives: hospitals have been reported to check
FICO before making admission decisions, retailers examine FICO scores to
determine where to open and how to design new stores, casino operators
examine FICO to find the most profitable customers, and health insurers consult
FICO models to make assessments about who is likely to take prescribed
medication.12
The basic approach to determining a credit score, apart from becoming more
automated, has not changed greatly since FICO’s launch in the 1950s.13 Consumers are assigned a number between 300 and 850 based on factors like total debt
burden, payment history, types of loans, and the number of times they have
7. BD. OF GOVERNORS OF THE FED. RESERVE SYS., REPORT TO THE CONGRESS ON CREDIT SCORING AND ITS
EFFECTS ON THE AVAILABILITY AND AFFORDABILITY OF CREDIT 12 (2007).
8. Foust & Pressman, supra note 6.
9. David S. Evans & Joshua D. Wright, The Effect of the Consumer Financial Protection Agency Act
of 2009 on Consumer Credit, 22 LOY. CONSUMER L. REV. 277, 289 (2010).
10. Toddi Gutner, Anatomy of a Credit Score, BLOOMBERG BUSINESSWEEK, Nov. 27, 2005, http://www.
businessweek.com/stories/2005-11-27/anatomy-of-a-credit-score.
11. Foust & Pressman, supra note 6.
12. Id.
13. Id.
applied for credit.14 This basic formula has led to criticism on two fronts, both
related to accuracy.
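To make the mechanics concrete, the following is a minimal, purely illustrative sketch of a weighted-factor score of this kind; the factor names, weights, and scaling are my own assumptions and do not reflect FICO's proprietary formula.

```python
# Illustrative weighted-factor credit score (hypothetical weights;
# not FICO's actual, proprietary formula).

FACTOR_WEIGHTS = {
    "payment_history": 0.35,   # share of on-time payments, in [0, 1]
    "debt_burden": 0.30,       # 1.0 = little debt relative to limits
    "credit_age": 0.15,        # account age, normalized to [0, 1]
    "loan_mix": 0.10,          # diversity of loan types, in [0, 1]
    "recent_inquiries": 0.10,  # 1.0 = no recent credit applications
}

def credit_score(factors: dict) -> int:
    """Map factor values in [0, 1] onto the familiar 300-850 range."""
    weighted = sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
                   for name in FACTOR_WEIGHTS)
    return round(300 + weighted * 550)

print(credit_score({
    "payment_history": 0.95, "debt_burden": 0.60,
    "credit_age": 0.50, "loan_mix": 0.70, "recent_inquiries": 0.80,
}))  # a number between 300 and 850
```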
First, as the formula has become well-known, FICO has been accused of
being relatively easy to “game.”15 Critics contend that FICO may overestimate
an individual’s creditworthiness. An entire industry of credit “fixers”—ranging from
formal debt repair companies to less formal word-of-mouth networks—has
widely disseminated techniques that can be used to quickly and, lenders argue,
artificially augment one’s credit score.16 Examples of these practices include
contesting delinquencies with the reporting agencies, “piggybacking” onto a
stranger’s credit card, adding a name to a recently paid loan, and receiving pay
stubs from a fake employer.17 Banks have complained that inaccuracies in the
FICO score contributed to the housing crisis by hiding the true financial risks of
the loans being made.18
Second, consumer groups contend that the FICO score is riddled with inaccuracies that underestimate creditworthiness.19 Studies have shown that credit
records are not properly secured, leading to widespread identity theft that is
difficult to detect and correct.20 Moreover, the records on which the score is
based have been found to contain persistent and widespread administrative
errors that lower scores.21 More fundamentally, some critics argue that the score
is not a true reflection of a consumer’s ability to repay because the metrics used
to calculate the score are biased against borrowers with nontraditional credit
experiences.22
These concerns about accuracy have pushed the drive for innovation in credit
scoring, and although FICO defends its methods, it has continued to face
growing competition from alternative scoring methods.23 Most significantly,
major banks and credit-reporting agencies, as well as a variety of start-ups, have
begun to develop methods of mining consumer behavior online for insight into
creditworthiness. These developments are described below.
14. Id.; see also BD. OF GOVERNORS, supra note 7, at 24–25; Gutner, supra note 10.
15. See Foust & Pressman, supra note 6.
16. Id.
17. See Janet Morrissey, What’s Behind Those Offers to Raise Credit Scores, N.Y. TIMES, Jan. 19,
2008, http://www.nytimes.com/2008/01/19/business/yourmoney/19money.html.
18. See Foust & Pressman, supra note 6.
19. See, e.g., NAT’L ASS’N OF STATE PIRGS, MISTAKES DO HAPPEN: A LOOK AT ERRORS IN CONSUMER
CREDIT REPORTS (2004).
20. Id. at 6.
21. Id. at 7; see also CONSUMER FED’N OF AM., CREDIT SCORE ACCURACY AND IMPLICATIONS FOR
CONSUMERS 6 (2002).
22. See BD. OF GOVERNORS, supra note 7, at 4.
23. See Equifax Credit Score vs. FICO Score, EQUIFAX, https://help.equifax.com/app/answers/detail/
a_id/244//equifax-credit-score (last visited Sept. 19, 2012).
B. WEB 1.024
A recent analysis found that over 100 companies are tracking an average
internet user’s activities from log on to log off.25 The broad outlines of the
processes used to collect data on consumers online are well-known and have
been widely discussed,26 but a brief overview here is useful.
At its most basic, data collection about an internet user’s activities is facilitated by the use of a variety of “cookies,” text files placed on a user’s browser
that can collect and record data on that user’s activities.27 The development and
spread of cookies have fueled the explosion of online-targeted, behavioral
advertising over the past decade, which relies on data generated by users as they
browse the web to build detailed user profiles and predict which products and
services to advertise to which users.28 While the data collected is nominally
“anonymized” (that is, each profile is not based on an individual’s name but
rather on a number assigned to a user through the cookie associated with her
computer), this barrier is quite thin because anonymized data can be readily
matched to individual users.29 Information is collected from the beginning to
the end of any internet activity.
First, at the point of interaction with any given site, basic registration
information, including name and address, is widely collected.30 Next, cookies
collect detailed information on a user’s web activities, tracking what sites are
visited, what is purchased, how long a user spends browsing certain sites, how
long the mouse hovers over particular items, what a user reads, what a user
writes on message boards, and the terms entered into search engines. In short,
“[w]eb surfing behavior is tracked down to the millisecond.”31
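As a rough illustration of how such clickstream data can accumulate into a profile, consider the sketch below; the event fields, categories, and dwell-time weighting are hypothetical and not drawn from any particular tracking firm.

```python
# Hypothetical sketch: aggregating cookie-keyed browsing events into
# interest profiles (fields and categories are invented).
from collections import Counter, defaultdict

events = [
    # (cookie_id, site_category, seconds_on_page)
    ("abc123", "travel", 45),
    ("abc123", "travel", 120),
    ("abc123", "payday_loans", 15),
    ("def456", "luxury_autos", 90),
]

profiles = defaultdict(Counter)
for cookie_id, category, seconds in events:
    # Weight each visit by dwell time, a common proxy for interest.
    profiles[cookie_id][category] += seconds

for cookie_id, interests in profiles.items():
    print(cookie_id, interests.most_common(2))
```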
24. I will use the terms “Web 1.0” and “Web 2.0” as convenient shorthand for distinguishing
between the “old” internet experience and what most users experience today. While precise definitions
of these terms are elusive, for my purposes it is sufficient to note that Web 1.0 was generally
characterized by concentration of content creation in the hands of a relatively small group of website
creators, with the majority of users passively consuming content, whereas Web 2.0 allows all internet
users to create content and interact socially across the web. For a more detailed discussion of these
distinctions, see Graham Cormode & Balachander Krishnamurthy, Key Differences Between Web 1.0
and Web 2.0, FIRST MONDAY, June 2, 2008, http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/
article/view/2125/1972.
25. Alexis Madrigal, I’m Being Followed: How Google—and 104 Other Companies—Are Tracking
Me on the Web, ATLANTIC, Feb. 29, 2012, http://www.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web.
26. For good recent overviews, see LORI ANDREWS, I KNOW WHO YOU ARE AND I SAW WHAT YOU DID:
SOCIAL MEDIA AND THE DEATH OF PRIVACY (2011); JOSEPH TUROW, THE DAILY YOU (2011).
27. TUROW, supra note 26, at 48.
28. Id. at 78–81.
29. See Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. REV. 1701, 1716–26 (2010) (discussing at length the ability of computer scientists to
match data to individual users with three readily available pieces of information: zip code, sex, and
birth date); see also Madrigal, supra note 25.
30. TUROW, supra note 26, at 79.
31. Id.
This information can be, and increasingly is, connected to users’ “offline” data
that is maintained by yet other data brokers. For example, the online marketing
firm eXelate recently began offering advertisers the ability to combine data
gathered from cookies with information from Nielsen that includes census data
and research from other consumer research firms.32 EXelate claims to gather
information on 200 million unique individuals monthly, determining a consumer’s sex, age, ethnicity, marital status, and profession from registration information and combining this data with browsing history to create detailed profiles of
potential interests.33 Similarly, the web firm [x+1] combines information gathered from browsing history and recorded Facebook, Twitter, and other website
posts, with demographic and geographic information provided by third parties.34
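A minimal sketch of how a nominally anonymous cookie profile might be joined to an offline record, using the three quasi-identifiers mentioned in note 29 (zip code, sex, and birth date) as the match key, is below; the records and field names are invented for illustration.

```python
# Hypothetical sketch: linking an "anonymized" cookie profile to an
# offline record on quasi-identifiers (zip code, sex, birth date).

cookie_profiles = [
    {"cookie_id": "abc123", "zip": "30301", "sex": "F",
     "birth_date": "1980-05-17", "interests": ["travel", "retail"]},
]

offline_records = [
    {"name": "Jane Doe", "zip": "30301", "sex": "F",
     "birth_date": "1980-05-17", "income": 50000},
]

def quasi_id(record):
    return (record["zip"], record["sex"], record["birth_date"])

offline_by_key = {quasi_id(r): r for r in offline_records}

for profile in cookie_profiles:
    match = offline_by_key.get(quasi_id(profile))
    if match:
        # The nominally anonymous profile now carries a name and income.
        print(match["name"], profile["interests"], match["income"])
```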
Although targeted advertising based on online behavioral profiling is part of
an ongoing and contentious debate in legal and policy circles,35 relatively little
attention has been paid to efforts to use the data collected on consumers’ online
activities to aid in credit determinations.36 Credit card companies already make
use of data on purchase history to set credit limits. For example, in 2008 it was
discovered that American Express adjusted credit limits based on the type and
location of stores in which its customers shopped.37 Atlanta resident Kevin
Johnson was alerted to this reality when he returned from a honeymoon to
discover that his credit line had been reduced after he made purchases at a Walmart whose shoppers were correlated with higher default rates, even
though Johnson had no negative history of his own.38
Although this example involves a “brick and mortar” store, similar practices
are being deployed online. Capital One, one of the nation’s largest issuers of
credit cards, uses the data-mining firm [x+1] precisely for this purpose.39 Visitors to the Capital One website are analyzed by [x+1] and assigned a demographic,
geographic, and behavioral profile based on their online browsing and purchase
32. Id. at 80.
33. Id.
34. Id.
35. See Bennet Kelley, Privacy and Online Behavioral Advertising, 11 J. INTERNET L. 24 (2007)
(describing debates over behavioral advertising at FTC sponsored Town Hall); Peter Eckersley, What
Does the “Track” in “Do Not Track” Mean?, ELEC. FRONTIER FOUND., Feb. 19, 2011, http://commcns.org/
sMA4DK (arguing for header based DNT mechanism to prevent tracking by marketing firms);
J. Thomas Rosch, The Dissent: Why One FTC Commissioner Thinks Do Not Track is Off-Track,
ADVERTISINGAGE (Mar. 24, 2011), http://commcns.org/vjrWGg (arguing against rush to implement
Do-Not-Track).
36. Apart from passing references in larger works, my research has revealed no scholarly work
dedicated to the subject.
37. Chris Cuomo et al., GMA Gets Answers: Some Credit Card Companies Financially Profiling
Customers, GOOD MORNING AMERICA, Jan. 28, 2009, http://abcnews.go.com/GMA/TheLaw/gma-answers-credit-card-companies-financially-profiling-customers/story?id=6747461#.UIQM4MXA8_c.
38. Id.
39. Emily Steel & Julia Angwin, On the Web’s Cutting Edge: Anonymity in Name Only, WALL ST. J.,
Aug. 3, 2010, http://online.wsj.com/article/; see also ANDREWS, supra note 26, at 36–37.
history combined with offline demographic and geographic information.40 This
profile then determines the types of offers for credit cards they receive. Someone with an interest in travel will be shown a mileage rewards card, for
example, while a mother who shops at Walmart in Colorado Springs and makes
$50,000 will be shown offers for lower limit cards with higher rates.41 Building
on the approach taken by Capital One, large banks and other major credit
reporting agencies, like Experian, have expressed interest in developing algorithms to predict creditworthiness based on purchase history and internet activity.42
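A toy version of this kind of rule-based offer targeting might look like the sketch below; the rules, thresholds, and offer names are my own assumptions, not the actual criteria used by Capital One or [x+1].

```python
# Toy sketch of profile-driven credit card offer selection
# (rules, thresholds, and offer names are invented).

def select_offer(profile: dict) -> str:
    if "travel" in profile.get("interests", []):
        return "mileage rewards card, low APR"
    if profile.get("income", 0) < 60000:
        return "low-limit card, higher APR"
    return "standard cash-back card"

print(select_offer({"interests": ["travel"], "income": 85000}))
print(select_offer({"interests": ["discount_retail"], "income": 50000}))
```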
C. WEB 2.0
The credit industry has also begun to tap the power of the social web, or Web
2.0, to determine creditworthiness. These efforts range from straightforward and
intuitive approaches, such as “friending” customers on Facebook or Twitter, to
new systems of lending that seek to revolutionize the way credit is offered. This
section addresses these efforts in turn.
Most banks and lenders have developed or are developing a social-media
presence on Facebook, Twitter, and other sites that allows them to engage with
customers online, track customer activity and, most importantly, gain access to
customers’ friends and other connections.43 Although some banks’ initial forays
into social media have been crude (particularly those of the larger, traditional
lenders) and even counterproductive from their perspective,44 the desire to
develop an active and effective social-media presence has spawned a small
industry of social-media technology consultants.45 Social-media presence and
40. Steel & Angwin, supra note 39.
41. See Steel & Angwin, supra note 39. Notably, these offers are made before any application for
credit is made, which Capital One claims keeps it on the right side of existing financial regulations. Id.
42. Asked recently to comment on the use of social-media and online behavioral data, an Equifax
spokesperson responded: “Our corporate development professionals are very aware of the opportunities
to enhance our proprietary data and partner with companies who add value to the accuracy of our
reporting, which helps our customers make better decisions prior to lending.” Adrianne Jeffries, As
Banks Start Nosing Around Facebook and Twitter, the Wrong Friends Might Just Sink Your Credit,
BETABEAT.COM (Dec. 13, 2011), http://www.betabeat.com/2011/12/13/as-banks-start-nosing-around-facebook-and-twitter-the-wrong-friends-might-just-sink-your-credit.
43. See id.; see also Tim Grant, Going by the Book: Lenders Using Social Networking Sites Like
Facebook, Twitter To Gather Borrower Information, PITTSBURGH POST-GAZETTE, May 28, 2010, at C1;
Jeremy Quittner, Banks To Use Social Media Data for Loans and Pricing, AMERICAN BANKER (Jan. 26,
2012), http://www.americanbanker.com/issues/177_18/movenbank-social-media-lending-decisions-brett-king-1046083-1.html.
44. See Steven E.F. Brown, Social Media Users Really Hate Bank of America, S.F. BUS. TIMES, Nov.
1, 2011, http://www.bizjournals.com/sanfrancisco/news/2011/11/01/report-social-media-users-hate-bofa.html (reporting on overwhelming negative response to Bank of America and other large banks on social
media).
45. See, e.g., ACCENTURE, SOCIAL BANKING: THE SOCIAL NETWORKING IMPERATIVE FOR RETAIL BANKS
(2011), available at http://www.accenture.com/us-en/Pages/insight-social-banking-social-network-imperative-summary.aspx; Mark Evans, What’s a Social Media Consultant?, MARKEVANSTECH.COM (June 28, 2010), http://www.markevanstech.com/2010/06/28/whats-a-social-media-consultant/; SOCIAL MEDIA AND BANKING, http://socialmediabanking.blogspot.com (last visited Oct. 12, 2012).
partnerships with third-party data companies provide lenders a wealth of personal data with which to profile users.
At the most basic level, banks, lenders, and credit collection agencies report
that they check social-media sites to verify borrowers’ identity and detect
fraud.46 Individual loan and collection officers at community banks and other
smaller institutions report scanning social-media profiles of borrowers and
collection targets to discover contact information and ensure that borrowers are
who they say they are.47
On a more sophisticated level, third-party providers are mining social-media
data—the “social graph”—to improve risk assessment.48 These social-media
monitors (SMM) can examine a customer’s posts, searching for key terms that
might alert the lender to future problems.49 For example, posts indicating that a
borrower was recently fired or searching for a new job could lead to a preemptive credit downgrade or heightened monitoring.50 The CGI group recently
launched such a monitoring service to collect and analyze social-media data for
use in collections.51 On the front end, companies such as the short-term lender
BillFloat, Inc. see great value in data mining everything from Monster.com
resume postings to LinkedIn accounts and Twitter posts to verify loan eligibility, income, and employment status.52
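The keyword-scanning approach can be illustrated with a short sketch; the risk terms, weights, and flagging threshold are hypothetical and are not drawn from CGI's or any other vendor's product.

```python
# Hypothetical sketch of keyword-based risk flagging of social-media
# posts (terms, weights, and threshold are invented).

RISK_TERMS = {"laid off": 3, "fired": 3, "job hunting": 2, "broke": 1}
FLAG_THRESHOLD = 3

def risk_score(posts):
    score = 0
    for post in posts:
        text = post.lower()
        for term, weight in RISK_TERMS.items():
            if term in text:
                score += weight
    return score

posts = ["Got laid off today, the job hunting starts tomorrow"]
if risk_score(posts) >= FLAG_THRESHOLD:
    print("flag account for heightened monitoring")
```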
Having an active social-media presence also means that banks and lenders
can extend offers to the friends of their most high-value clients. The idea here is
that “birds of a feather flock together,” and that the friends of high-value users
are likely also to be higher income and lower risk.53 Capitalizing on this idea, in
2009 Facebook began allowing banks and other companies to make offers to
“friends of connections.”54 Thus, if a customer connects to a lender via Facebook, that lender now has access to the user’s entire network of friends and can
make targeted offers based on profile data of those friends’ activities and
interests provided by Facebook.55
Similarly, lenders are seeking ways to increase the accuracy of credit scoring
by integrating analysis of a potential borrower’s friends into the lending
46. See Grant, supra note 43.
47. Id.
48. See Lucas Conley, How Rapleaf Is Data-Mining Your Friend Lists To Predict Your Credit Risk,
FAST CO., Nov. 16, 2009, http://www.fastcompany.com/blog/lucas-conley/advertising-branding-and-marketing/company-we-keep; see also Quittner, supra note 43.
49. See Quittner, supra note 43.
50. Id.
51. Id.
52. Id.
53. See Jeffries, supra note 42. As the founder of San Francisco-based CreditKarma.com explains:
“If you are a profitable customer for a bank, it suggests that a lot of your friends are going to be the
same credit profile. So they’ll look through the social network and see if they can identify your friends
online and then maybe they send more marketing to them. That definitely exists today.” Id.
54. How Does Friends of Connections Targeting Work?, FACEBOOK, https://www.facebook.com/help/
?faq=124969517582983 (last visited May 2, 2012).
55. See id.
process.56 The principle is the same: a borrower’s friends are believed to be a good
predictor of the borrower’s own likelihood of default. The SMM company
Rapleaf, for example, claims that it can predict your credit score by analyzing
your list of social-media friends.57 This principle also drives the start-up
Lenddo.com. Based in Hong Kong with planned expansion into other emerging
markets, Lenddo offers credit based on the creditworthiness of one’s social-media friends.58 Users must give the company access to their full social-media
profile, and if a borrower is delinquent or defaults, Lenddo will contact the
borrower’s circle of friends to inform them of the delinquency.59 In this way,
Lenddo is the embodiment of the “social credit” discussed as a hypothetical by
Digg’s Kevin Rose.60
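The underlying intuition can be reduced to a bare-bones sketch that blends a borrower's own score with the average score of her social-media friends; the blend weight and data are illustrative assumptions, not Lenddo's or Rapleaf's actual method.

```python
# Illustrative "social" credit score: blend a borrower's own score
# with the mean score of her friends (weights and data are invented).
from statistics import mean

scores = {"alice": 700, "bob": 640, "carol": 720, "dave": 580}
friends = {"alice": ["bob", "carol"], "dave": ["bob"]}

def social_score(user, blend=0.3):
    own = scores[user]
    friend_scores = [scores[f] for f in friends.get(user, []) if f in scores]
    if not friend_scores:
        return own
    return (1 - blend) * own + blend * mean(friend_scores)

print(round(social_score("alice")))  # own score pulled toward friends' average
```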
While Lenddo has not launched in the United States, American firms are
pursuing similar avenues. Moven (formerly Movenbank), for example, launched
in early 2011 with the express purpose of integrating social-media data into the
entire banking process.61 Moven requests that its customers provide access to
social-media profiles and friend lists.62 Thus, loan pricing and savings rates will
be based not only on traditional credit markers, but also on online relationships,
Facebook posts, Twitter feeds, the creditworthiness of friends, and the ability to
convince those friends to join. This approach builds on similar models being
used by peer-to-peer lending services such as Weemba, Inc. and SoMoLend.63
Ultimately, these lenders are interested in adopting ratings technologies
currently used to assess a user’s online reputation and influence. The company
Klout, for example, measures “social credit” by tracking the influence of
social-media users with metrics such as the number of followers a user has on
Twitter, the level of “re-tweeting,” and the user’s blog and Facebook links.64
Klout has been contacted by numerous lenders interested in integrating Klout
scores into the lending process.65 Thus, it is clear that both traditional lenders
and smaller firms are rapidly moving to exploit opportunities provided by Web
2.0 to enable more precise loan targeting, tier pricing, and risk mitigation. As
FTC commissioner Julie Brill put it recently:
56. See Conley, supra note 48.
57. Id.
58. See LENDDO.COM, https://www.lenddo.com/ (last visited Oct. 21, 2012).
59. Jeffries, supra note 42.
60. See supra notes 1–3 and accompanying text.
61. Jeffries, supra note 42; see also MOVEN, https://www.moven.com (last visited Feb. 18, 2013).
62. John Adams, Bank Critic Brett King Takes Heat over Facebook-Based Account Opening, AMERICAN BANKER, Dec. 1, 2011, http://www.americanbanker.com/issues/176_232/Movenbank-Facebook-Privacy-Concerns-1044527-1.html.
63. Quittner, supra note 43.
64. KLOUT, http://klout.com/home (last visited Oct. 21, 2012).
65. See id.; see also Jeffries, supra note 42. As Moven founder Brett King explains, in marginal
cases a borrower’s active Twitter account may help push her toward a loan. See id.; see also Adams,
supra note 62.
Analysts are undoubtedly working right now to identify certain Facebook or
Twitter habits or activities as predictive of behaviors relevant to whether a
person is a good or trustworthy employee, or is likely to pay back a
loan. . . . [M]ight there not be a day very soon when these analysts offer to sell
information scraped from social networks to current and potential employers
to be used to determine whether you’ll get a job or promotion? Or to the bank,
where you’ve applied for a loan, to help it determine whether to give you the
loan and on what terms?66
If that day has not arrived yet, it is fast approaching. Industry insiders are
confident that refined predictive models based on social-media and other web
activity can be deployed to determine credit in three to five years.67
II. HARMS
Before discussing attempts at regulation, it is vital to consider what, if any,
harms flow from the processes just described. Why is tighter regulation appropriate? Those in the vanguard of making credit more “social” often promote their
approach in nearly utopian terms, promising to deliver credit more efficiently to
customers that have traditionally been underserved by large banks.68
There are reasons, however, to question the emergence of behavioral credit
scoring, particularly as these practices are incorporated and implemented by
major financial institutions without transparency or meaningful consumer consent. Broadly speaking, harms can be described as transparency- and consentbased, discriminatory, or contextual. The development of Web 2.0 applications
and the shift to the social web presents particular problems because there are
cumulative harms involving both discrimination and context violations. First, I
briefly discuss the failure of the traditional public/private dichotomy as a
rational foundation for theorizing privacy harms before addressing each of these
harms in turn.
A. THE PUBLIC/PRIVATE DISTINCTION
Much traditional privacy discourse has focused on the dichotomy between
the public and the private as a means of delineating acceptable spaces for
regulation. Helen Nissenbaum has comprehensively addressed the failure of this
dichotomy in digital space.69 Her analysis is useful in rebutting the common
contention that data-mining practices, particularly on Web 2.0, are justified
66. Kenneth Corbin, FTC Commissioner Talks Online Privacy, Puts Data Brokers on Notice,
CIO.COM, Jan. 26, 2012, http://www.cio.com/article/698875/FTC_Commissioner_Talks_Online_Privacy_
Puts_Data_Brokers_on_Notice.
67. Jeffries, supra note 42.
68. Lenddo describes its service as “the world’s first online platform that helps the emerging middle
class use their social connections to build their creditworthiness and access local financial services.”
What is Lenddo?, LENDDO, https://www.lenddo.com/pages/what_is_lenddo (last visited May 12, 2012).
69. HELEN NISSENBAUM, PRIVACY IN CONTEXT 113–26 (2010).
because information posted on social networks is “public.” Nissenbaum helps
debunk this idea by revealing that the three conceptions of the public/private
distinction—actors, realms, and information—fail to provide a comprehensive
framework for analyzing contemporary data practices.70
The first public/private dichotomy relates to actors: in the United States, we
have traditionally placed relatively robust categorical restrictions on intrusions
by government actors while tending to take a more diffuse and market-based
approach to conduct by private parties.71 This distinction is hardly tenable in
contemporary society because much of the surveillance considered objectionable online is carried out by private parties. Indeed, privacy restrictions on
private actors have developed, albeit slowly and generally in a piecemeal,
industry-specific fashion, and “few experts now deny that constraints need to be
imposed to limit violations by private actors of one another’s privacy.”72
Next, we might conceive of spaces or realms as public or private. Thus,
whereas the home is granted the highest level of privacy protection, activities in
the public square typically are not. Again, however, this distinction fails to
adequately capture the realities of modern surveillance. At one time it would
have been reasonable to assume that our activities in a crowded public space, if
noticed at all, would only reveal fragments of personal information.73 With the
advent of tracking technologies such as GPS and the use of advanced data-aggregation systems online, however, the revealing information that can be
gathered on individuals in public now rivals or exceeds what one might capture
by rummaging through someone’s home.74
Finally, we might label certain information as either public or private.
Beyond the fact that such a distinction appears hopelessly circular—privacy
protection is given to private information—Nissenbaum points out that not only
are the definitions of public and private information relative across cultures, but
“the dividing line within societies is virtually impossible to draw.”75 As such,
Nissenbaum and other scholars have urged alternate ways of conceptualizing
and defining privacy harms. Leaving aside the widespread concerns regarding
70. Id.
71. Id. at 114. For example, the restrictions in the Privacy Act discussed above apply only to
government agencies, and even allow circumvention if government agents receive the data from
third-party data firms rather than collecting it themselves. See, e.g., Paul M. Schwartz, Privacy and
Democracy in Cyberspace, 52 VAND. L. REV. 1609, 1633–34 (1999) (“From the earliest days of the
Republic, American law has viewed the government as the entity whose data use raises the greatest
threat to individual liberty.”).
72. NISSENBAUM, supra note 69, at 114.
73. Id. at 117.
74. Indeed, the Supreme Court recently struggled with the meaning of the public/private distinction
in the context of the Fourth Amendment in United States v. Jones, 132 S. Ct. 945 (2012), with five Justices
appearing concerned that the old maxim “there is no reasonable expectation of privacy in public” is not
a useful basis for privacy protection in the digital world. See id. at 956 (Sotomayor, J., concurring); id.
at 963–64 (Alito, J., concurring).
75. NISSENBAUM, supra note 69, at 122.
the accuracy of behavioral profiling, classification is problematic for several
reasons. Three approaches are described below.
B. TRANSPARENCY
Online surveillance is conducted in the dark, the subjects of the surveillance
are unaware that they are being watched, and it is unclear what information is
relevant and how it is being used to create categories of consumers. The Dutch
theorist Jeroen van den Hoven groups these transparency concerns under the
heading “informational inequality”—the power imbalances between consumers
and data collectors that are facilitated by secrecy and automation.76
For example, the inputs that determined the FICO score were generally
unknown until revisions to the FCRA mandated that credit-reporting bureaus
release credit reports that included credit scores, finally revealing some information regarding the scoring process.
Or, recall the credit downgrade that Mr. Johnson suffered as a result of
shopping at certain stores considered by American Express to indicate a lower
credit customer.77 In that case, American Express refused to disclose what
stores had triggered the downgrade.78 This particularly baffled Johnson as his
statement included only major retailers such as Ruby Tuesday and Amazon.com. In the end, Johnson determined that the only purchase that might be
considered aberrant was from a Walmart in southeast Atlanta, an area where he
had never shopped before.79 However, this is at best an educated guess that
leaves little room for the customer to contest the determination or take affirmative steps to rectify it. Transparency concerns are amplified online where both
the amount and scope of data collection and the number of companies engaged
in tracking increase exponentially. There are also strong indications that consumers are only vaguely aware of the nature of the tracking being conducted online,
which raises doubts as to the extent consumers are consenting to surveillance.80
In addition, the collection, sorting, and determination processes are fully
automated, further limiting the ability of the subjects of surveillance to contest
the determinations that have been made or the inputs used to make them. As
those who have tried to challenge fraudulent purchases following identity theft
can readily attest, it can take years to correct persistent errors in one’s digital
profile.81 Moreover, courts have thus far been wary of policing the errors that
76. Jeroen van den Hoven, Privacy and the Varieties of Informational Wrongdoing, 27 COMPUTERS &
SOC’Y, no. 3, Sept. 1997, at 33, reprinted in READINGS IN CYBER ETHICS 493 (Richard A. Spinello &
Herman T. Tavani eds., 2004).
77. See supra section I.B.
78. See Cuomo, supra note 37.
79. Id.
80. See, e.g., CHRIS HOOFNAGLE ET AL., HOW DIFFERENT ARE YOUNG ADULTS FROM OLDER ADULTS WHEN
IT COMES TO INFORMATION PRIVACY ATTITUDES AND POLICIES? (2010), available at http://www.ftc.gov/os/
comments/privacyroundtable/544506-00125.pdf (concluding based on survey data that “[t]he entire
population of adult Americans exhibits a high level of online-privacy illiteracy”).
81. DAVID LYON, SURVEILLANCE STUDIES 88 (2007).
inevitably accompany large data-aggregation systems,82 or seem to misunderstand the nature of the modern credit economy entirely.83
C. DISCRIMINATION
Digital profiles have profound real-world effects, determining access to and
pricing for credit, insurance, and consumer products in ways that entrench
existing disadvantages of class and race. To begin with, the categories used to
segment potential borrowers themselves may be infected with politics. As David
Lyon writes: “All too often, it is already-existing categories of ‘race,’ nationality, gender, socio-economic status or deviance that inform and are amplified by
surveillance, which then enable differential treatment to be given to the ‘different’ groups.”84 Or, as Julie Cohen puts it, “the line between useful heuristics and
invidious stereotypes is vanishingly thin.”85
This process may take the form of outright classification along racial lines.
The Wall Street Journal recently examined the results of classification done by
[x+1] for Capital One.86 While some of the participants in the study found that the behavioral profile matched them particularly well, the most galling result for one participant was that [x+1] had signaled her as a person interested in hip-hop music and Vibe magazine, apparently because her husband was African-American.87 While such a classification may appear benign when considering
the assignment of potential musical or literary interests, it can have profound
effects when used to make consequential credit determinations.
Of course, basing lending decisions on overt racial classifications is illegal
under U.S. law,88 but there is evidence that such practices persist. A comprehensive study conducted in 2001 by the National Community Reinvestment Coalition (NCRC) of mortgage-lending patterns in ten major metropolitan areas
found that the percentage of subprime and predatory loans increased dramatically as the percentage of minority or elderly applicants in a given neighborhood increased, even after controlling for creditworthiness, housing stock, and
income in the neighborhood.89 Thus, minority and elderly borrowers are more
82. See, e.g., Sarver v. Experian Info. Solutions, 390 F.3d 969, 972 (7th Cir. 2004) (holding that
given “the complexity of the system and the volume of information involved, a mistake [by the credit
reporting agency] does not render the procedures unreasonable” under the FCRA).
83. For example, the Ninth Circuit recently described the credit economy as follows: “Credit comes
into existence through confidence—confidence that one human being may rely on the representations of
another human being. On this utterly unmechanical, uniquely human understanding, a credit economy
is formed and wealth is created.” Tijani v. Holder, 628 F.3d 1071, 1073 (9th Cir. 2010).
84. LYON, supra note 81, at 183.
85. JULIE E. COHEN, CONFIGURING THE NETWORKED SELF 117–18 (2012).
86. See Steel & Angwin, supra note 39.
87. Id.
88. See Equal Credit Opportunity Act, 15 U.S.C. § 1691 (2006).
89. NAT’L CMTY. REINVESTMENT COAL., THE BROKEN CREDIT SYSTEM: DISCRIMINATION AND UNEQUAL
ACCESS TO AFFORDABLE LOANS BY RACE AND AGE (2003). Although some of the worst abuses in the
housing market have been subject to further regulatory scrutiny following passage of the Dodd–Frank
Act, I use the example of mortgage finance because it has been widely documented and the subject of
likely to receive financially harmful loans than white borrowers with similar
credit scores, income, and housing stock.90 These results confirmed results of
several other government and academic studies.91
As the NCRC report notes, however, the automated nature of the credit
scoring process is cited by the lending industry as the primary defense to any
claims of discrimination; all decisions, it is claimed, are colorblind, as they are
based simply on the neutral factors that make up one’s credit score and
processed by computer programs that eliminate human discretion.92 The results
of the NCRC and other studies belie these claims and lead to one of two
conclusions: either explicit and overt prejudice is being reintroduced into the
lending process by loan officers at the back end, after the data has been
aggregated and analyzed, or other data sources are influencing determinations.
Given the long history of mortgage “redlining”93 and the widespread evidence
of fraud and abuse in subprime housing markets, the first possibility cannot be
fully discounted.94
Even if direct discrimination is not a factor, however, the very real possibility
remains that alternative data sources not included in FICO—the types of
geographic, demographic, and behavioral data collected through online tracking—
are having substantial discriminatory effects on minorities and the elderly. To
see how this works, consider how the typical internet experience is now
configured and personalized based on information gathered on each user’s
online and offline life. As Joseph Turow describes in great detail, opportunities
numerous empirical studies, both by government agencies and independent organizations. My contention is not that problems of discrimination exist exclusively in these markets, only that these have been
the best documented. Indeed, I am confident similar issues pervade other industries. See, e.g., Press
Release, U.S. Equal Emp’t Opportunity Comm’n, EEOC Files Nationwide Hiring Discrimination
Lawsuit Against Kaplan Higher Education Corp. (Dec. 21, 2010), available at http://www.eeoc.gov/eeoc/
newsroom/release/12-21-10a.cfm (announcing government lawsuit against Kaplan for use of credit
scores as a proxy for race in employment screenings).
90. See NAT’L CMTY. REINVESTMENT COAL., supra note 89, at 6–8.
91. The Justice Department has also reached settlements with financial institutions for disparate
treatment of minorities in lending in the run-up to the subprime crisis. For example, the DOJ reached a
twenty-one-million-dollar settlement with SunTrust for imposing a racial surtax on black borrowers
seeking home loans. See Press Release, Dept. of Justice, Justice Department Reaches $21 Million
Settlement To Resolve Allegations of Lending Discrimination by SunTrust Mortgage (May 21, 2012),
available at http://www.justice.gov/opa/pr/2012/May/12-crt-695.html.
92. NAT’L CMTY. REINVESTMENT COAL., supra note 89, at 6 (“The single most utilized defense of lenders and their
trade associations concerning bias is that credit scoring systems allow lenders to be colorblind in their
loan decisions.”).
93. See Charles L. Nier, III, Perpetuation of Segregation: Toward a New Historical and Legal
Interpretation of Redlining Under the Fair Housing Act, 32 J. MARSHALL L. REV. 617, 628–30 (1999)
(describing history of housing segregation and resulting unrest that encouraged the Johnson administration to push for adoption of Fair Housing Act, which sought to expand minority access to housing and
decrease segregation).
94. See Seeta Peña Gangadharan, Digital Inclusion and Data Profiling, 17 FIRST MONDAY, no. 5–7,
(May 2012), http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3821/3199 (detailing use by subprime mortgage companies of predictive-profiling technologies to target low-income and
minority borrowers for predatory loans).
to engage in the consumer economy online are increasingly constrained by what
marketers think they know about you, and more importantly, your marketing
value.95 Thus, upper-middle-class white families living in the suburbs will see
very different ads online than minority families living in the inner city. The
former might receive targeted ads for luxury automobiles or clothing, discounted New York Times subscriptions, or Whole Foods coupons, while the
latter might receive advertisements for pay-day lenders and fast-food chains.96
Even the news we see is being customized according to behavioral and geodemographic data.97 Turow refers to the resulting segmentation as “social
discrimination” and describes a future in which “marketers and media firms
may find it useful to place us into personalized ‘reputation silos’ that surround
us with worldviews and rewards based on labels marketers have created reflecting our value to them.”98
Even if purportedly colorblind, the effects of being categorized as a “target”
or as “waste”99—to use the current marketing terminology—have real consequences, particularly when that data is used to make credit and lending decisions. Following the links and engaging in commerce based on these
recommendations in turn will further generate data that can, as discussed,
influence determinations of creditworthiness. This process creates a sort of
feedback mechanism that calls into doubt the notion of individual consumer
choice and objective lender decision making. Recall further that the characteristics of the neighborhood one is born into or lives in are often determined by the
same sorts of data collection processes, as retailers, banks, and grocery chains
select locations based on aggregate demographic data such as credit and income. Thus, shopping in one’s neighborhood will indicate credit unworthiness
irrespective of one’s actual income or credit score.
Here again we can see the feedback mechanism at play: minority and elderly
borrowers are more likely to receive harmful loans, which in turn are more
likely to fail and damage credit, which is likely to affect future lending and
retail-siting decisions, which further constrains consumer options both on and
offline, thereby beginning the cycle again.100
This process is a perfect example of what Oscar Gandy has referred to as
“cumulative disadvantage”: “the ways in which historical disadvantages
95. See TUROW, supra note 26.
96. These examples are based on examples provided by Turow. Id. at 3–6.
97. Id.
98. Id. at 8.
99. Id.
100. Indeed, recent reports suggest that the fallout from the subprime crisis is having a lasting and
disproportionate effect on minority communities, with civil rights groups warning that “the country is
headed toward a kind of financial segregation.” Ylan Q. Mui, For Black Americans, Financial Damage
from Subprime Implosion Is Likely To Last, WASH. POST, July 08, 2012, http://www.washingtonpost.com/
business/economy/for-black-americans-financial-damage-from-subprime-implosion-is-likely-to-last/
2012/07/08/gJQAwNmzWW_print.html.
cumulate over time, and across categories of experience.”101 Data analysis and
tracking technologies, and the classifications they permit when integrated into
credit markets, magnify and entrench disparities of starting position for minorities while simultaneously scrubbing the decisions of any overt racial animus.
Cumulative disadvantage “helps to explain how a racial effect can be produced
within a society that may have in fact experienced a decline in the level of
animus or negative racial intent as the motivation behind critical choices that
have been made.”102
Thus far we have been discussing the effect of behavioral credit on marginalized groups, particularly minorities. Even absent a racially disparate impact,
however, we should not be shy about making moral judgments about the
socioeconomic discrimination enabled by behavioral sorting. In a comprehensive treatment of the use of surveillance technologies in the networked world,
Julie Cohen draws on a variety of postmodern and critical theorists to question
the constraints placed on the development of individual subjectivity by pervasive surveillance.103 Cohen describes panoptic surveillance as follows:
It does not simply render personal information accessible . . . but rather seeks
to render individual behaviors and preferences transparent by conforming
them to preexisting categories. Panoptic surveillance simultaneously illuminates individual attributes and constitutes the framework within which those
attributes are located and rendered intelligible. For this reason, the logics of
transparency and discrimination are inseparable. Surveillance functions precisely to create distinctions and hierarchies among surveilled populations.104
This sorting, Cohen contends, can have harmful limiting effects on the “play”
of everyday experience—the creative experimentation and interaction with
culture that informs our “evolving subjectivity” and which, in turn, is crucial to
furthering cultural development.105 Cohen’s insights are particularly useful
when critically assessing behavioral credit scoring. On the one hand, it is argued that discrimination based on wealth is the whole point of credit scoring and a beneficial method of risk management; viewed another way, however, categorizing every behavior based on its credit effect constructs a particularly pernicious “reputation silo”—to return to Turow’s apt term. It is
one thing to be surrounded by customized ads and offers based on a behavioral
profile, but quite another to have one’s access to employment or a home loan
restricted for the same reasons. If the first is sufficient to curtail the horizon of
101. OSCAR H. GANDY, JR., COMING TO TERMS WITH CHANCE 12 (2009).
102. Id.
103. See COHEN, supra note 85.
104. Id. at 136–37.
105. Id. at 151 (“The play of culture and the play of subjectivity are inextricably intertwined; each
feeds into the other. Creativity and cultural play foster the ongoing development of subjectivity. . . .
Evolving subjectivity, meanwhile, fuels the ongoing production of artistic and intellectual culture, and
the interactions among multiple, competing self-conceptions create cultural dynamism.”).
cultural play, the latter threatens to transform that horizon into nothing more
than the predictable result of a series of instrumentally driven transactions. This
creeping monetization of the basic elements of identity formation—our interests, associations, mistakes—should be resisted. If, however, we conclude that
discriminatory harms of this sort are the inevitable and acceptable results of a
properly functioning market, there are other reasons to question the use of
behavioral credit scoring, discussed below.
D. CONTEXT
Privacy scholars have struggled to identify a set of privacy harms that would
justify restrictions on tracking and collection in the digital world.106 The
discriminatory harms discussed above might provide one way to support enhanced privacy protections online. Additionally, long-standing notions of contextual integrity offer an avenue for conceptualizing the practices discussed in Part
I so as to make normative judgments on the value of data mining the social
graph.
Contextual integrity refers broadly to the idea that information gathering and
processing practices should respect the contextual norms that the users of a
particular service or technology have come to expect in their daily interactions
with that service.107 A version of this idea has been at the core of efforts to
delineate a set of universal governing principles of information privacy.108
Jeroen van den Hoven, writing in 1997, refers to violations of these contextual
expectations as “informational injustice.”109 Van den Hoven explains that “[t]he
meaning and value of information is local, and allocative schemes . . . that distribute access to information should accommodate local meaning and should
therefore be associated with specific spheres.”110 An example is our approach to
medical records. While most of us would have no problem providing doctors
and other healthcare providers with access to our medical data, many of us
would feel strongly that such data should not be used to make decisions
regarding workplace promotion or access to a mortgage.111 Van den Hoven
refers to the segregated domains of information as “spheres of access” and
concludes that “what is often seen as a violation of privacy is oftentimes more
adequately construed as the morally inappropriate transfer of data across the
106. See generally DANIEL J. SOLOVE & PAUL M. SCHWARTZ, INFORMATION PRIVACY LAW 760–61 (2011)
(summarizing various approaches to defining the harms caused by personal data collection and tracking
by commercial entities).
107. See NISSENBAUM, supra note 69.
108. See, e.g., THE WHITE HOUSE, CONSUMER DATA PRIVACY IN A NETWORKED WORLD: A FRAMEWORK
FOR PROTECTING PRIVACY AND PROMOTING INNOVATION IN THE GLOBAL DIGITAL ECONOMY 15 (2012),
available at http://www.whitehouse.gov/sites/default/files/privacy-final.pdf [hereinafter FRAMEWORK]
(adopting principle of contextual integrity).
109. Van den Hoven, supra note 76, at 493–95.
110. Id. at 494.
111. Id. at 494–95.
boundaries of what we intuitively think of as separate . . . ‘spheres of access.’”112
A version of this idea also forms the basis of many data protection policies
for governments around the world. An influential report on information privacy
released in 1973 by the U.S. Department of Health, Education, and Welfare
listed as one of its five key recommendations to government agencies handling
an increasing amount of sensitive data on citizens that “[t]here must be a way
for an individual to prevent information about him obtained for one purpose
from being used or made available for other purposes without his consent.”113
This has come to be termed the “integrity” principle and has remained a basic
principle of Fair Information Practice Principles (FIPPs) adopted in both Europe
(as part of the European Union Directive on data protection) and the U.S. (as
part of the Privacy Act of 1974).
Building on these basic foundations, Helen Nissenbaum has articulated a
detailed schema for addressing privacy in the digital age through the lens of
what she terms “contextual integrity,” whereby data practices are evaluated
according to their level of respect for the context-specific norms of information
flow that exist in a given space or relationship.114 Although evaluating the
norms that predominate in an emerging environment like social media is
difficult, and some argue that no such fixed norms exist (and also that different
social-media services have different norms), empirical work on social-media
norms and etiquette can help inform the inquiry, as can consideration of the
purposes of the service being studied (what Nissenbaum refers to as its values
or goals).115
Rather than look at social media as creating new norms out of whole cloth, it
makes more sense to consider preexisting and long-standing norms governing
friendships and social relations, whether they exist online or off. For example, if
we consider Facebook to operate under norms of friendship and socializing,116
it becomes quite clear that third-party collection, analysis, and transmission of
data posted to a friend’s wall for a commercial purpose (for instance, offering
credit) unrelated to the reason for sharing in the first place, would violate the
expectations we would have of that relationship. Friendship is grounded on
information-transmission principles that require consent and knowledge, and the
112. Id. at 495.
113. U.S. DEPT. OF HEALTH, EDUC., & WELFARE, RECORDS, COMPUTERS AND THE RIGHTS OF CITIZENS
(1973).
114. See NISSENBAUM, supra note 69.
115. A version of this approach has been explicitly adopted by recent privacy proposals made by the
Obama Administration, see infra Part IV.
116. Recent scholarship suggests that Facebook users, both young and old, still predominantly use
the service to keep up with friends. See, e.g., Memorandum from Amanda Lenhart on Adults and Social
Network Websites to the PEW Internet Project (Jan. 14, 2009), available at http://www.pewinternet.org//
media/Files/Reports/2009/PIP_Adult_social_networking_data_memo_FINAL.pdf; Jasmine McNealy, The
Privacy Implications of Digital Preservation: Social Media Archives and the Social Networks Theory of
Privacy, 3 ELON L. REV. 133, 141 (2012).
collection of data posted to a circle of friends by a service such as Rapleaf
without consent or knowledge alters these norms in ways that implicate contextual integrity. Put simply, information shared with friends is not necessarily
information one would share with the bank.117
Moreover, sociologists have studied the destabilizing effects that follow from
the commodification of intimate relationships.118 For example, Michael Sandel
articulates an anticorruption rationale against commodification and argues that
“the degrading effect of market valuation and exchange on certain goods and
practices” should be resisted even if the goods at issue are provided in a
noncoercive manner.119 Sandel points to markets for body parts, military service, babies, and surrogacy as goods, services, or relationships that should be
free of the corrupting force of market influence.120 This “argument from corruption” provides additional support for finding a contextual violation when friendship is commodified through credit profiling.121
In addition to questioning the relational norms at play, we can also examine
the norms governing the information being implicated—in this case, financial
information. While it is true that some may have a general idea of the financial
health of their friends and acquaintances, it is also true that, as a culture, we
have traditionally considered detailed financial information to be private. This is
reflected in our laws, which restrict the types of disclosures of financial information that can be made to unauthorized third parties, as well as by social practice,
which has long considered discussion of personal finance to be taboo.122 It
would therefore seem odd to require inquiry into the personal finances of
friends (for fear that such friendships might jeopardize one’s credit score)
before accepting them as “friends” on a social network. A friend’s creditworthiness is simply not a criterion most people use for friendship, nor is it one that is
necessarily immediately obvious; therefore, credit inquiries based on the credit
scores of one’s friends would conflict with the prevailing norms surrounding
disclosure of financial data, even to a close circle of friends.
117. NISSENBAUM, supra note 69.
118. See, e.g., ARLIE RUSSELL HOCHSCHILD, THE COMMERCIALIZATION OF INTIMATE LIFE: NOTES FROM
HOME AND WORK (2003); ARLIE RUSSELL HOCHSCHILD, THE OUTSOURCED SELF: INTIMATE LIFE IN MARKET
TIMES (2012); RETHINKING COMMODIFICATION: CASES AND READINGS IN LAW AND CULTURE (Martha M.
Ertman & Joan C. Williams eds., 2005).
119. Michael J. Sandel, What Money Can’t Buy: The Moral Limits of Markets, in RETHINKING
COMMODIFICATION: CASES AND READINGS IN LAW AND CULTURE 122, 122 (Martha M. Ertman & Joan C.
Williams eds., 2005).
120. See id. at 123–25.
121. Id. at 122. Some commentators have already begun referring to Facebook as the “commercialization of friendship.” See Mattathias Schwartz, Pre-Occupied, THE NEW YORKER, Nov. 28, 2011, http://www.
newyorker.com/reporting/2011/11/28/111128fa_fact_schwartz.
122. An interesting recent expression of this social norm was made by John Hope Bryant, who served as Vice-Chairman of President George W. Bush’s Advisory Council on Financial Literacy. Perhaps self-servingly, he
considered the difficulty in discussing personal finances as one of the drivers of the subprime crisis:
“Everybody wants it. Nobody understands it. Money is the great taboo. People just won’t talk about it.
And that is what leads you to subprime.” REFRAMING FINANCIAL LITERACY: EXPLORING THE VALUE OF
SOCIAL CURRENCY vii (Thomas A. Lucey & James D. Laney eds., 2012).
III. EXISTING REGULATIONS
Existing regulation of the practices described in Part I is incomplete and
ineffective. This Part will briefly describe three avenues of regulation and their
shortcomings: the Fair Credit Reporting Act and its amendments, antidiscrimination law, and consumer protection regulations.
A. FCRA
The Fair Credit Reporting Act (FCRA) is the chief piece of legislation
governing the creation and use of what the Act terms “consumer reports,” such
as credit reports and criminal background checks. The core provisions of the
FCRA, as amended by the Fair and Accurate Credit Transactions (FACT) Act in
2003, provide consumers access to their credit reports,123 notice of adverse
decisions made based on information in a credit report,124 and the opportunity
to dispute inaccuracies in a credit report.125 Additionally, both the consumer
reporting agency and the institutions (such as banks) providing information to
the credit reporting agency must have reasonable procedures in place to verify
the accuracy of the information contained in a report or reported to the
agency.126 Although some of these requirements have arguably provided consumers with enhanced ability to combat identity theft, they are wholly inapplicable
to the data-mining processes described in Part I.
First, the FCRA appears not to apply at all to credit determinations made “in
house” by credit issuers if they are not based on a credit report. Thus, for
example, if American Express chooses to lower credit limits based on purchase
history or other behavioral data it obtains itself or from a third party, that
determination is not governed by the FCRA. Similarly, the FCRA does not
apply when a company like Capital One simply suggests an offer based on the
behavioral data it acquires from [x+1], so long as that data is not used to make
the actual lending decision.127 This is a fine distinction because a consumer may
not know to apply for any other credit card if, upon arriving at Capital One’s
site, she is only shown a certain type of offer.
Second, and more fundamentally, even if the FCRA’s reporting requirements
apply, it is difficult to imagine what sort of accuracy challenge a consumer
could make to behavioral-tracking data. Unlike an account that is erroneously
listed as in collections, a consumer’s web browsing and social-media activity
would be factually accurate, while the predictive model that determines credit
risk would remain secret. Daniel Solove has addressed the problem of challenging the accuracy of data mining in the national security context, but the analysis
applies with equal force here:
123. 15 U.S.C. § 1681g (2006).
124. 15 U.S.C. § 1681m(a) (2006).
125. 15 U.S.C. § 1681i(a)(1) (2006).
126. 15 U.S.C. § 1681e(b) (2006).
127. See Steel & Angwin, supra note 39.
[W]hat kind of meaningful challenge can people make if they are not told
about the profile that they supposedly matched? How can we evaluate the
profiling systems if we are kept in the dark?
Predictive determinations about one’s future behavior are much more difficult to contest than investigative determinations about one’s past behavior.
Wrongful investigative determinations can be addressed in adjudication. But
wrongful predictions about whether a person might engage in terrorism at
some point in the future are often not ripe for litigation and review.128
The FCRA, through its dispute and reinvestigation provisions, provides
adjudication for erroneous investigative decisions made in the past. It provides
no relief for determinations of future creditworthiness based on statistical
analysis of “accurate” information regarding a consumer’s behavioral profile.
B. ANTIDISCRIMINATION LAW
Discrimination in issuing credit on the basis of race, color, religion, national
origin, sex, marital status, age, or because the borrower receives public assistance, is prohibited under the Equal Credit Opportunity Act (ECOA).129 Soon after the Act was amended in 1976, some commentators argued for an expansive reading of the
“effects test” that would have allowed courts to aggressively police claims of
disparate impact resulting from credit-scoring metrics.130 However, as the law
has developed, the burden for plaintiffs in establishing a prima facie violation
using an effects test, or disparate impact claim, has proved very high.
Courts use a three-part inquiry to evaluate claims under the effects test:
(1) the plaintiff must demonstrate that the use of a certain factor has a disproportionately negative impact on a protected group; (2) if that threshold burden is
met, the creditor must show that the criterion makes the credit evaluation
system “more predictive than it would be otherwise” or that it is justified by a
“legitimate business need”; and (3) the plaintiff would then have the opportunity to show that the creditor’s legitimate business needs could be met by a less
discriminatory alternative.131
The most significant stumbling block to determining discriminatory negative
impact at the first step of the inquiry has been that under Regulation B issued by
the Federal Reserve in implementing the ECOA, creditors were barred from
gathering information on the race, sex, or marital status of their applicants.132
128. Daniel J. Solove, Data Mining and the Security–Liberty Debate, 75 U. CHI. L. REV. 343, 359
(2008).
129. Equal Credit Opportunity Act, 15 U.S.C. § 1691(a) (2006).
130. See, e.g., Note, Credit Scoring and the ECOA: Applying the Effects Test, 88 YALE L.J. 1450
(1979).
131. See DEE PRIDGEN & RICHARD M. ALDERMAN, CONSUMER CREDIT AND THE LAW § 3:15 (2012). See
generally Jamie Duitz, Note, Battling Discriminatory Lending: Taking a Multidimensional Approach
Through Litigation, Mediation, and Legislation, 20 J. AFFORDABLE HOUS. & CMTY. DEV. L. 101, 114–15
(2010).
132. See PRIDGEN & ALDERMAN, supra note 131, at § 3:7.
Thus, when a plaintiff sought to show that a disproportionate number of minority applicants were being affected by a seemingly neutral credit factor, such as zip code, the creditor had no race data on file, and the statistical analysis was rendered impossible. Regulation B was amended to allow, but not require,
such information to be collected. Records thus remain incomplete.133
A second problem is that plaintiffs must point to a specific policy that is
leading to the disparate effect. In discriminatory lending cases, this often means
that the plaintiff must show that lending-officer discretion was reintroduced into
the process after the models had determined creditworthiness, and that this
discretion is having a disproportionate impact on a protected class.134 When
dealing with data-mining policies that incorporate myriad disconnected data
points from across a borrower’s online and offline life, it may be quite difficult
to point to a specific policy or factor that is causing the disparate impact.
Indeed, the whole notion of cumulative disadvantage posits that a series of
choices and policies, applied over time by various actors, entrenches disparities
in starting position.135
Even if plaintiffs can overcome these hurdles, creditors have significant
leeway to counter any finding of disparate impact by showing an increase in the
predictive accuracy of the model, or by showing a legitimate business interest.136 The legitimate business interest test, in particular, is less stringent than
the “business need” defense under Title VII.137
Finally, the ECOA and other antidiscrimination laws apply only to the protected classes they enumerate. Even if plaintiffs succeed in overcoming the significant hurdles to using these laws, they would target behavioral profiling in credit markets only indirectly: they would reach the specific factors in a model shown to have negative effects on a protected class, while leaving the broader practices in place and the contextual harms unaddressed.
C. CONSUMER-PROTECTION REGULATIONS
In addition to being granted primary enforcement authority under the FCRA
and ECOA, the Federal Trade Commission (FTC), the government’s primary
consumer watchdog, has taken on a more aggressive role in monitoring privacy
violations by private parties. The FTC is granted general jurisdiction under
section 5(a) of the FTC Act, 15 U.S.C. § 45, to investigate and enforce, through
litigation, the Act’s prohibition against “unfair or deceptive acts or practices.”
This jurisdiction extends to companies whose operations affect commerce, but
excludes banks, savings and loan institutions, federal credit unions, and
133. Id.
134. See Duitz, supra note 131, at 113, 118.
135. See supra section II.C.
136. See Cassandra Jones Havard, “On the Take”: The Black Box of Credit Scoring and Mortgage Discrimination, 20 B.U. PUB. INT. L.J. 241, 257 (2011).
137. See id.
common carriers.138 An initial problem in using the FTC to monitor banks, then, is
that its jurisdiction is limited to enforcement of the FCRA and ECOA, rather
than the broader unfair or deceptive provisions. This means that FTC enforcement is essentially split. At the point of data collection, FTC enforcement for
unfair and deceptive trade practices can be targeted at nonbank data firms, but
once the data is used to make credit determinations, FTC enforcement is limited
to the enumerated violations under the FCRA and ECOA.
FTC privacy enforcement against data companies to date has generally
focused on either violations of the terms of privacy policies or on data security
breaches or leaks.139 These approaches are referred to as “notice and choice”
and “harm based,” and both have come under criticism, which the FTC itself
recognizes.140 Many commentators, for example, remain highly critical of an
approach to privacy protection that focuses heavily on privacy policies, noting
that privacy policies are an insufficient source of consumer consent because
they are rarely read and are written in confusing legalese.141 Others note that the harm-based model has not recognized harms beyond data breaches and identity theft, such as the harms that flow from monitoring and tracking.142
Some argue that these problems stem from broader structural problems: the
FTC entered privacy regulation reluctantly, even opposing early efforts at
federal information privacy legislation, and is ill-suited to the task because it
lacks explicit grants of privacy-regulating authority.143 Finally, the agency may
lack strong enforcement power because civil penalties under section 45(a)(2)
138. See 15 U.S.C. § 46(a) (2006).
139. See, e.g., In re Reed Elsevier, Inc., No. 052-3094, 2008 WL 903806, at *2 (F.T.C. Mar. 27,
2008) (FTC action against data aggregator and consumer reporting company for failing to maintain
“reasonable and appropriate security” measures designed to prevent unauthorized access); In re Vision I
Props., L.L.C., No. 042-3068, 2005 WL 1274741 (F.T.C. Apr. 19, 2005) (FTC action against online
“shopping cart” software provider for violating terms of vendees privacy policies by selling collected
data to third parties); In re Gateway Learning Corp., No. 042-3047, 2004 WL 2618647 (F.T.C. Sept. 10,
2004) (FTC action against company for retroactively changing privacy policy without consumer notice
or consent to allow the selling of personal information collected).
140. See Stephanie Clifford, Fresh Views at Agency Overseeing Online Ads, N.Y. TIMES, Aug. 4,
2009, http://www.nytimes.com/2009/08/05/business/media/05ftc.html (quoting David Vladeck, the head
of the FTC’s Bureau of Consumer Protection, as saying: “[t]he frameworks that we’ve been using
historically for privacy are no longer sufficient”).
141. See Steven Hetcher, The FTC as Internet Privacy Norm Entrepreneur, 53 VAND. L. REV. 2041
(2000); see also Aleecia M. McDonald & Lorrie Faith Cranor, The Cost of Reading Privacy Policies, 4
J.L. & POL’Y FOR INFO. SOC’Y 540, 561 (2008) (estimating national cost of reading privacy policies
would be $781 billion per year). Additionally, the term “privacy policy” has taken on a normative
meaning such that “[w]hen consumers see the term ‘privacy policy,’ they believe that their personal
information will be protected in specific ways; in particular, they assume that a website that advertises a
privacy policy will not share their personal information.” See Joseph Turow et al., The Federal Trade
Commission and Consumer Privacy in the Coming Decade, 3 J.L. & POL’Y FOR INFO. SOC’Y 723, 724,
730–37 (2008).
142. See FED. TRADE COMM’N, PROTECTING CONSUMER PRIVACY IN AN ERA OF RAPID CHANGE: A
PROPOSED FRAMEWORK FOR BUSINESSES AND POLICYMAKERS iii (2010), available at www.ftc.gov/os/2010/12/
101201privacyreport.pdf.
143. See Joel R. Reidenberg, Privacy Wrongs in Search of Remedies, 54 HASTINGS L.J. 877, 888
(2003).
are limited to $10,000 for a knowing violation of the “unfair or deceptive”
provisions, though the agency can and does seek injunctive remedies.
Despite these flaws, the FTC is currently the agency of choice for regulating
information privacy online, and in recent years the agency has moved forward
with efforts to develop a more comprehensive set of governing principles
applicable to online data-mining and tracking firms. The proposed framework,
developed in consultation with industry and consumer groups as well as policymakers, culminated in the release of a final report on information privacy that
informed the White House approach discussed below.144
IV. THE WHITE HOUSE APPROACH
Given the incomplete protections under existing statutory and regulatory
schemes, calls for a comprehensive federal approach to online data privacy have
increased in recent years. Efforts at a comprehensive approach have involved
the FTC, the Department of Commerce (DOC), industry and other nongovernmental actors, and most recently, the Obama Administration, which released a
privacy “white paper” in February 2012, outlining its preferred approach to the
problem and calling for Congress to adopt a Consumer Privacy Bill of Rights
(CPBR).145 Although the legislative future of the Administration’s proposal
remains unclear, to date, the White House’s proposal represents the clearest
indication of the future direction of the policy debate.146 As such, this Part will
examine the framework in detail to determine its applicability to the range of
current and proposed tracking and monitoring practices in credit markets, and
will suggest methods to strengthen the proposal.
A. THE CONSUMER PRIVACY BILL OF RIGHTS
The Administration’s proposal builds on a multiyear effort by the DOC,
industry groups, scholars, and nonprofits to develop a comprehensive regulatory
environment governing information privacy.147 At its core is the Consumer
Privacy Bill of Rights (CPBR), which serves as a unifying center around which
government enforcement and private self-regulation might form.
The CPBR resembles Fair Information Practice Principles (FIPPs) developed
in the United States in the 1970s and widely accepted around the world,148 and
represents the first concerted effort by any U.S. administration to apply FIPPs to
144. See FED. TRADE COMM’N, supra note 142, at 2.
145. See FRAMEWORK, supra note 108.
146. Indeed, Marc Rotenberg, executive director of the Electronic Privacy Information Center,
called the Administration’s white paper “the clearest articulation of the right to privacy by a U.S.
president in history.” Jasmin Melvin, White House Internet Privacy Bill of Rights Met with Skepticism,
INS. J., Feb. 27, 2012, http://www.insurancejournal.com/news/national/2012/02/27/237051.htm.
147. See FRAMEWORK, supra note 108, at 7.
148. FIPPs are applicable to government actors through the Privacy Act of 1974, 5 U.S.C. § 552a
(2006 & Supp. V 2011). FIPPs also form the basis of data-privacy protections in Europe. See ORG. FOR
ECON. COOPERATION & DEV., GUIDELINES ON THE PROTECTION OF PRIVACY AND TRANSBORDER FLOWS OF
the private sector. Under the CPBR, consumers are granted the right to individual control, transparency, respect for context, security, access and accuracy,
focused collection, and accountability.149 Of particular relevance to the extension of credit are the control, transparency, context, and accountability principles.
The Administration’s proposals focus a great deal of attention on consumer
control and company transparency. This is not surprising as it reflects one of the
pillars of FTC privacy policy: notice and consent. As noted, there are deep
concerns with the notice-and-consent model that are only partially addressed by
the control-and-transparency provisions of the CPBR.150 The control principle
provides that “[c]onsumers have a right to exercise control over what personal
data companies collect from them and how they use it,” while the transparency
principle states that “[c]onsumers have a right to easily understandable and
accessible information about privacy and security practices.”151 Despite laudable efforts to improve on notice and consent, the control-and-transparency model remains flawed because it fails to address harms and to categorically bar certain practices.
First, though the policy seeks to address the incoherence of privacy policies
by requiring easily understandable statements from both first- and third-party
companies, the network of firms involved in collection and trading of information is “so complex that it defies meaningful notice to general users on matters
that are crucial to privacy.”152 Consumer-facing companies, for example, are
often not even aware of the activities of third parties or how those parties use
the information at issue.153
Second, although the control principle endorses some version of “Do-Not-Track” technologies that allow consumers to opt out of specific targeting at the
point of data collection or first interaction with the consumer-facing site, or,
more broadly, to opt out of all targeting using cookies embedded in the browser,
there is a significant and unresolved dispute over what Do-Not-Track technology should accomplish. Current browser-based incarnations of the technology
register a user’s preference not to be tracked by third-party data companies and
signal that preference to third parties, who are free to ignore the request.
PERSONAL DATA, available at http://www.oecd.org/document/18/0,2340,en_2649_34255_1815186_1_1_1_1,00.html.
149. See FRAMEWORK, supra note 108, at 10.
150. See supra section III.C.
151. See FRAMEWORK, supra note 108, at 1.
152. See Comment from Helen Nissenbaum et al. on the Dep’t of Commerce Rep., Commercial
Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework (Jan. 28, 2011),
available at www.ntia.gov/files/ntia/comments/101214614-01/attachments/NissenbaumIPTFComments.pdf (emphasis omitted).
153. See Solon Barocas & Helen Nissenbaum, On Notice: The Trouble with Notice and Consent,
(Oct. 2009) (unpublished manuscript), available at http://www.nyu.edu/projects/nissenbaum/papers/
ED_SII_On_Notice.pdf.
Even if all third parties complied with those requests, industry groups currently advocate a model of Do-Not-Track that would only prohibit delivery of
targeted ads based on the collected data, rather than banning the collection and
storage itself.154 This would leave companies free to pass information on to
lenders and creditors for use in predictive models.
Additionally, this raises the problem of notice of retroactive changes to a
privacy policy that at a later date would again put the collected data in play for a
range of uses. This is “an egregious loophole” that “places discretion in the
hands of website owners, while the onus is on users to stay abreast”155—a
loophole the White House policy does little to address.
A final problem with the notice-and-consent model embodied in the control-and-transparency principles is that, by eschewing categorical prohibitions on
certain data or practices, it allows major players to exploit market dominance in
a way that minimizes true choice. For example, when Google recently changed
its privacy policy to allow sharing of personal information across all of its
platforms, it provided clear and understandable notice to its users, and requested
consent. Given that Google is a ubiquitous presence for even the most cursory
internet user, the consent request did not provide a real choice even for users
concerned about the privacy implications of the policy change; the costs of
moving away from all Google products are simply too high.
Similarly, notice and consent allows “privacy holdouts” to be bought off by
the market. As soon as a company can entice a majority to waive a claim to
privacy, those left trying to claim the right will find it increasingly hard to do so.
For example, consider background and credit checks for job seekers. Once
limited to sensitive positions, the practice is now the norm across all industries.
Although the FCRA requires notice to and consent from job applicants before
an employer can run a background or credit check, it is understood that this is
not a real choice. Similarly, one could easily see banks exploiting market share
to a similar end, requiring consent to access behavioral data as a condition of
granting credit. Absent near unanimous opposition at the outset, this policy
would gradually become the norm, making an option to opt out nearly meaningless.
In theory, a robust context principle, as advocated by Nissenbaum, would
provide baseline, enforceable measures that could preclude certain data collection and uses, and the CPBR hints that at least some in the Administration
support such an approach. The context principle provides that “[c]onsumers
have a right to expect that companies will collect, use, and disclose personal
data in ways that are consistent with the context in which consumers provide the
data.”156 Additionally, the comments to the principle state:
154. Id.
155. See Nissenbaum, supra note 152, at 2.
156. See FRAMEWORK, supra note 108, at 1.
The Administration also encourages companies engaged in online advertising
to refrain from collecting, using, or disclosing personal data that may be used
to make decisions regarding employment, credit, and insurance eligibility or
similar matters that may have significant adverse consequences to consumers.
Collecting data for such sensitive uses is at odds with the contextually
well-defined purposes of generating revenue and providing consumers with
ads that they are more likely to find relevant. Such practices also may be at
odds with the norm of responsible data stewardship that the Respect for
Context principle encourages.157
Notwithstanding the above statement, which remains a suggestion (albeit an
encouraging one),158 a significant problem with the context principle as articulated is that it is tied to the control-and-transparency principles. That is, rather
than eliminating certain practices or categories of information based on their
violations of context, the principle calls for heightened notice and control,
particularly when the context of information gathering is changed at some later
date.159 Thus, it imports the notice-and-consent problems discussed above,
rendering the principle relatively toothless. Even more troubling, the principle’s
reference to “personal” data is vague and undefined, and appears nearly analogous to “private” data. Thus, in adopting the context principle, the Administration introduces, in its own definition, the very problem context was supposed to
solve—the false dichotomy of the public/private distinction.
B. ENFORCEMENT
The CPBR remains, in its current form, largely a self-regulatory proposal.
The Administration calls for Congress to grant the FTC the ability to enforce the
CPBR but simultaneously maintains that the FTC has the power to enforce the
principles under Section 5 absent congressional implementation if companies
choose voluntarily to adopt the principles.160 The proposal appears targeted at
industry lobbyists who have resisted comprehensive federal legislation and, as
such, contains carrots and sticks that encourage adoption of enforceable codes
of conduct by promising an enforcement-safe harbor from any future legislation.161
The hope here seems to be to get big industry players on board early; they
will then have an incentive to push for adoption of the codes by other companies to maintain a level regulatory playing field. To this end, the proposal calls
for a new round of multistakeholder roundtable discussions to finalize develop-
157. Id. at 18.
158. Note, too, that the Administration’s statement on credit appears to enshrine the problematic
third-party/first-party distinction, calling only on companies engaged in “online behavioral advertising”
(third parties) to refrain from transferring information for credit purposes. Id. at 17–18.
159. Id. at 15.
160. Id. at 29–30, 35.
161. Id. at 37.
ment of voluntary, enforceable codes of conduct.162 Some important players
have already begun the process of implementing self-regulatory proposals
modeled on the CPBR. For example, the Digital Advertising Alliance (DAA), an
industry group representing many of the data firms engaged in behavioral
analysis, both online and offline, has adopted “Self-Regulatory Principles for
Online Behavioral Advertising,”163 and supplemented these with principles
governing “Multi-Site Data” collected or used for purposes other than advertising.164
The Multi-Site Data principles explicitly provide that a “Third Party or
Service Provider should not collect, use, or transfer Multi-Site Data” for the
purpose of “[d]etermining adverse terms and conditions of or ineligibility of an
individual for credit.”165 Multi-Site Data is defined as “data collected from a
particular computer or device regarding [w]eb viewing over time and across
non-[a]ffiliate [websites],” and thus appears to cover all collection that is not
done by the owner of a particular site on that particular site.166 Questions
remain, however, regarding the ultimate ability of self-regulation to stem the
flow of objectionable behavioral monitoring.
First and most fundamentally, the principles only apply to web-viewing data,
and thus again import the troubling public/private distinction.167 For example,
Rapleaf, an SMM company discussed in Part I, would be almost entirely
excluded from the principles because the company collects only data that have
been made public by users. Thus, for instance, all posts to a friend’s Facebook
wall, if that friend has a public profile, can still be collected, as can all public
tweets, comments on message boards, blog posts, customer reviews, and a host
of other online activities not explicitly made private by an individual user.
Recent studies also reveal that even vigilant web users can easily have information about themselves revealed because of the looser privacy standards of their
friends.168
162. Id. at 33.
163. DIGITAL ADVER. ALLIANCE, SELF-REGULATORY PRINCIPLES FOR ONLINE BEHAVIORAL ADVERTISING
(2009) [hereinafter ONLINE BEHAVIORAL ADVER.].
164. DIGITAL ADVER. ALLIANCE, SELF-REGULATORY PRINCIPLES FOR MULTI-SITE DATA (2011) [hereinafter
MULTI-SITE DATA].
165. Id. at 4.
166. Id. at 11.
167. Id. at 1.
168. Indeed, large amounts of personal and lifestyle information can be gleaned using algorithms
that process tagged photos from friends and acquaintances. Thus, even a privacy-conscious user of
social media can have that privacy undermined by the lax privacy practices of her network. See Megan
Garber, On Facebook, Your Privacy Is Your Friends’ Privacy, ATLANTIC, Apr. 26, 2012, http://www.
theatlantic.com/technology/archive/ 2012/04/on-facebook-your-privacy-is-your-friends-privacy/256407/.
Additionally, there are serious doubts as to the respect paid by companies to the privacy settings of
consumers. Twitter, for example, recently settled with the FTC for improperly making tweets available
to third parties even though they had been set by users as private. See In re Twitter, Inc., 151 F.T.C. 162,
179–80 (2011).
Second, the principles only apply to “[d]etermining adverse terms and conditions” or “ineligibility” and could thus be read to allow the use by Capital One
of behavioral data in making credit offers, because at that stage, the terms and
conditions have not been determined, only offered.169
Third, the principles only cover third parties and service providers, leaving
untouched the activities of first parties (for instance, the owner of a website).
Thus, for example, Facebook collects and keeps exhaustive data on all of its
members’ activities, including interactions with other sites that have enabled
Facebook sharing. None of these activities are covered.
Finally, the Accountability principle is the least specific of the principles and
leaves considerable doubt as to the ultimate force of the standards or the ability
to monitor noncompliance. Currently, the Administration is leaving monitoring
to those companies that adopt the proposals, apparently betting that the desire
for a level, competitive playing field will spur companies to monitor themselves. While some DAA affiliates, such as the Interactive Advertising Bureau
(IAB), have conditioned renewal of membership in their organization on adoption of the principles,170 there is nothing currently forcing companies to adopt
any of the principles. To date, the Direct Marketing Association has expressed
interest in incorporating the DAA principles into its “longstanding effective
self-regulatory program,” under which instances of noncompliance, particularly those that are “uncorrected,” are publicly reported.171
Of course, public reporting can take many forms, and notice in the back
pages of an obscure trade publication would not have the same effect as
prominent notice on a company’s website. Even if notice were to be made
prominently, because many of the companies involved in behavioral monitoring
are obscure third parties (recall too that there are over 100 companies monitoring a typical user’s activities), consumers might not ever become aware that
specific companies were persistent violators.
C. REFORMS
Although the CPBR is a significant step toward bringing a comprehensive
information-privacy regime to the private sector, the current proposal contains
three significant loopholes that call into question the ability of the proposal to
provide robust protection against the further implementation of behavioral
credit scoring. First, the self-regulatory nature of the framework leaves enforcement diffuse and easily circumvented. Second, the first party/third party distinction, or Facebook loophole, leaves consumer-facing sites free to collect and
analyze consumer data for any purpose. Finally, the context principle, both as
169. MULTI-SITE DATA, supra note 164, at 4.
170. INTERACTIVE ADVER. BUREAU, CODE OF CONDUCT 1 (2011).
171. ONLINE BEHAVIORAL ADVER., supra note 163, at 4. The principles also state that a company
notified of noncompliance “should take steps to bring its activities into compliance,” and that noncompliance will be reported to “appropriate government agencies.” Id. at 18. Again, however, it is unclear at
this time what kind of enforcement would result from reporting to government agencies.
proposed by the Administration and as adopted by industry, appears to exempt
enormous amounts of information by adopting the public/private dichotomy
into its definition of protected information.
A clear resolution to these issues, at least as regards credit, would be a
categorical ban on the practice of behavioral profiling for credit determinations,
for both first- and third-party sites. While a categorical ban would provide the
most robust protection, at the very least, to maintain contextual integrity and
limit secondary effects, start-ups like MovenBank and Lenddo should be segregated environments not linked to existing social networks or third-party data
collectors. Even the growth of a segregated behavioral credit industry could
prove risky, however, because over time such practices could become the norm
and, once adopted by larger institutions, could risk overrunning the preferences
of privacy holdouts.172
Absent a categorical ban, congressional authorization granting the FTC enforcement authority would be an important step forward as interpretation of the
principles would shift to the agency process through rule making and enforcement actions and away from industry self-regulation. Although a detailed
discussion of agency process is outside the scope of this Note, rule making and
enforcement at the FTC might also spur development of a more robust context
principle through case-by-case consideration of some of the more difficult
determinations regarding what is public and what is private. This would potentially allow the FTC to flexibly respond to currently unforeseen changes in
technology. Enforcement authority could also bring the financial services industry under the ambit of the CPBR, because that industry is currently exempt from
Section 5 enforcement for unfair and deceptive trade practices. Finally, FTC
enforcement authority would provide greater incentive to industry to adopt
meaningful and comprehensive codes of conduct as a means of obtaining safe
harbor from future enforcement.
CONCLUSION
As our commercial and social lives are increasingly mediated through digital
technologies, companies large and small have begun to apply detailed behavioral profiling to individual credit determinations. By collecting and mining the
enormous wealth of personal data generated by individuals on the Internet,
credit card companies, large banks, and a host of start-up companies in the data
collection and lending fields are at the cusp of a revolution in the way they
determine and price risk in credit markets. Although these methods may increase the accuracy of risk-prediction systems, I have argued that behavioral credit profiling should nevertheless be resisted because of its cumulative
discriminatory and contextual harms. Existing government regulations and
laudable recent efforts from the Obama Administration fail to fully address the
172. See supra section IV.A.
harms of behavioral credit, making it likely that the industry will continue to
grow. I have argued that a categorical ban on the collection and processing of
behavioral data for use in determining access to credit is needed to protect
consumers and the wider culture from the harmful effects of these practices.
Alternatively, or additionally, granting the FTC enforcement authority over the
CPBR would spur industry compliance and contribute to innovative and flexible
responses to rapid technological change.