Cathy O’Neil’s important new book Weapons of Math Destruction is out today, and if you’re at all interested in the social significance of how mathematics is now being used, you should go out and get a copy. She has been blogging for quite a while at Mathbabe, which you should be following; her blog is a good place to start if your attention span is too short for the book.
Cathy has had an interesting career path, including a period as my colleague here in the math department at Columbia. She left here to pursue excitement and fortune at a top hedge fund, D.E. Shaw, where she had a front-row seat at the 2008 financial industry collapse. A large factor in that collapse was the role played by mathematical models, and her book explains some of that story (for another take on this, there’s Models.Behaving.Badly from another Columbia colleague, Emanuel Derman). As far as I’ve ever been able to figure out, the role of mathematical modeling in the mortgage-backed securities debacle was as a straightforward accessory to fraud. Dubious and fraudulent lending was packaged using mathematics into something that could be marketed as a relatively safe investment, with one main role of the models being to make it hard for others to figure out what was going on. This worked quite well for those selling these things, with the models successfully doing their job of obscuring the fraud and keeping most everyone out of jail.
While this part of the story is now an old and well-worn one, what’s new and important about Weapons of Math Destruction is its examination of the much wider role that mathematical modeling now plays in our society. Cathy went on from the job at D.E. Shaw to work first in risk management and later as a data scientist at an internet media start-up. There she saw some of the same processes at work:
In fact, I saw all kinds of parallels between finance and Big Data. Both industries gobble up the same pool of talent, much of it from elite universities like MIT, Princeton and Stanford. These new hires are ravenous for success and have been focused on external metrics – like SAT scores and college admissions – their entire lives. Whether in finance or tech, the message they’ve received is that they will be rich, that they will run the world…
In both of these industries, the real world, with all its messiness, sits apart. The inclination is to replace people with data trails, turning them into more effective shoppers, voters, or workers to optimize some objective… More and more I worried about the separation between technical models and real people, and about the moral repercussions of that separation. In fact, I saw the same pattern emerging that I’d witnessed in finance: a false sense of security was leading to widespread use of imperfect models, self-serving definitions of success, and growing feedback loops. Those who objected were regarded as nostalgic Luddites.
I wondered what the analogue to the credit crisis might be in Big Data. Instead of a bust, I saw a growing dystopia, with inequality rising. The algorithms would make sure that those deemed losers would remain that way. A lucky minority would gain ever more control over the data economy, taking in outrageous fortunes and convincing themselves that they deserved it.
The book then goes on to examine various examples of how Big Data and complex algorithms are working out in practice. Some of these include:
- The effect of the US News and World Report algorithm for college ranking, as colleges try to game the algorithm, while at the same time well-off families are at work gaming the complexities of elite college admissions systems.
- The effects of targeted advertising, especially the way it allows predatory advertisers (some for-profit educational institutions, payday lenders, etc.) to very efficiently go after those most vulnerable to the scam.
- The effects of predictive policing, with equality before the law replaced by an algorithm that sends different degrees of law enforcement into different communities.
- The effects of automated algorithms sorting and rejecting job applications, with the indirect consequence of discrimination against whole classes of people.
- The effects of poorly thought-out algorithms for evaluating teachers, sometimes driving excellent teachers from their jobs.
- The effects of algorithms that score credit and determine access to mortgages and insurance, often with the effect of making sure that those deemed losers stay that way.
Finally, there’s a chapter on Facebook and the way political interests are taking advantage of the detailed information it provides to target their messages, to the detriment of democracy.
To me, Facebook is perhaps the most worrisome of all the Big Data concerns of the book. It now exercises an incredible amount of influence over what information people see, with this influence sometimes being sold to the highest bidder. Together with Amazon, Google and Apple, it has given monopolies control over our economy and society to an unparalleled degree, monopolies that monitor our every move. In the context of government surveillance, Edward Snowden remarked that we are now “tagged animals, the primary difference being that we paid for the tags and they’re in our pockets.” A very small number of huge, extremely wealthy corporations have even greater access to those tags than the government does, recording every movement, every communication with others, and even every train of thought as we interact with the web.
These organizations are just starting to explore how to optimize their use of our tags, and thus of us. Many of the students starting classes here today in the math department will, with our training, go on to careers working for these companies. As they go off to work on the algorithms that will govern the lives of all of us, I hope they’ll start by reading this book and thinking about the issues it raises.
read this customer review at Amazon (3 stars)
https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/product-reviews/0553418815/ref=cm_cr_dp_text?ie=UTF8&showViewpoints=0&sortBy=helpful#R1HO7IVVTLX9VL
I struggled with the star rating for this book. There are certainly aspects of the work that merit five stars. And it is VERY thought-provoking, as a good book should be. But there are flaws, significant ones, with the biggest flaw being a glaring over-simplification of many of the systems that O’Neil decries in the book. I don’t know if O’Neil has personally ever had to take a psychology test to get a job, worked under the Kronos scheduling system, lived in a neighborhood with increased police presence due to crime rates, been victimized by insurance rates adjusted to zip codes, or endured corporate wellness programs. But all of those things have been a part of my life for years, and even I have to admit the many positive aspects of some of these systems. A few examples:
–Kronos. Despised by the rank and file of companies that I’ve worked for, Kronos software automates many things that previously were done by people, mostly managers. I hated it, but I have to admit that overall it made things more fair. Why? Well, say you have a workplace policy that mandates chronically-late employees be written up for tardiness and eventually fired if they don’t shape up. What tended to happen at multiple companies I worked for was that managers would look the other way when their buddies were tardy, and write up people they didn’t like. Kronos changed that, because the system automatically generated write-ups for any employee who clocked in late too many times. Kronos has no buddies. Popular, habitually-late people suffered, but it was more “fair” in the true sense of the word. Some systems, like Kronos, have both aspects that level the playing field and aspects (like increased scheduling “efficiency”) that can victimize workers. O’Neil tends to harp on the negative only, and if you have not personally seen both sides of a system, you might not realize there was another side at all.
–Increased police presence in high-crime areas. This one really rubbed me the wrong way. O’Neil positions this as something that victimizes the poor. Well, I have been poor, or at least this country’s version of it, and I have lived in very high-crime areas where if you didn’t shut your window at night, chances were good you would hear a murder. And believe me when I say I was DEEPLY grateful for the increased police presence. But then, I wasn’t committing crimes. Now I live in a very wealthy neighborhood (though I am not wealthy) where I have not seen a single police car drive down my street in the past four months. O’Neil argues that many crimes, like drug use by affluent college students, go unpunished because the police are busy in the poorer neighborhoods. I agree, but police resources are limited, and for mercy’s sake they should be sent where people are being killed, not where a college student is passed out in his living room. My current neighbors may be committing as many crimes as O’Neil implies, but I’m not terrified to walk down the street, so I don’t mind the lack of police presence. I know officers have to go deal with the more life-threatening stuff, and I am grateful to them. It all depends on your perspective.
–Corporate Wellness programs. These programs have never done anything for me except shower me with gift cards and educate me on behavioral sleep therapy. I love them. But, again, perspective. I am not overweight, I love to work out, and I eat healthy. The programs were a source of income for me and my family when we needed it most. I just would have liked acknowledgement that wellness programs really do have benefits for some people, instead of a chapter painting them as some sort of 1984-style nightmare where we are all forced to be thin. It’s more complicated than that.
–And the best for last: The psychology tests. Those things are pretty bad. Despite winning multiple Employee and Student of the Year awards in my life, I can’t pass those tests. Not to save my life. I didn’t think much of it, until I heard about another star employee who couldn’t pass them either. Then I met a third star employee (and I am talking about an employee who won two JD Power Awards in two years) who couldn’t pass them. Why? Picture holding a hundred quarters in your hands and then throwing them at a wall. Some will go off to one side and some to another, but most will probably cluster in the middle. Those tests keep the quarters in the middle, weeding out people who aren’t typical. Sometimes that’s good (deadbeats), sometimes that’s bad (talented employees who think different). Here O’Neil misses an opportunity to convince owners of companies that the tests can cost them highly desirable employees. Offering real, concrete ideas of how the tests could be improved to benefit both workers and company owners would have been a harder book to write, but a much more useful one.
A lot of the ominous implications made in the book have to do with what MIGHT happen in the future, if certain systems become more common. O’Neil often uses blanket statements to imply that certain outcomes are inevitable, but that is far from true. Irritate enough people, and the systems change. Legal challenges are made and won. Some companies, eager to lure star workers, throw out some of the more punishing aspects of commonly-used systems (that happened at a company where I worked, where “life-style” scheduling that forbade clopening and gave you two days off in a row was used in conjunction with Kronos. Worked great, people loved it.). The biggest weapon against abuse is, as O’Neil repeatedly states, transparency. Having been in the industry that creates these algorithms, she is in a unique position to expose the finer details of how they work. But the book is short on the kind of details I personally crave and long on blanket statements and generalizations, the same kind of generalizations she denounces companies for making. Not all automated systems victimize the poor, not even the ones spotlighted in this book. I know because I lived them and I was poor.
I hovered on the edge of a four-star rating for this book, until a chance conversation with a Japanese woman a couple days ago. Her grandmother had lost most of her possessions and land after World War II because of land redistribution. My friend was not complaining; she thought the reforms overall a good thing, though her family had lost a lot from them. “Something may benefit 99 people out of 100,” she told me, “but there’s always that one person…”. Exactly. There’s always that one person. These systems that have come to permeate our culture need to be tweaked to minimize injustice. Unlawful algorithms need to be outlawed. Bad ideas need to be replaced with good ones. And Cathy O’Neil does discuss this, especially in the conclusion, but for me the focus of the book wasn’t on target. It was too slanted against systems I have seen both harm AND help. It over-simplified issues, at least for me. It’s a mess out there, and solutions that work for everyone are wickedly hard to come by.
Because there’s always that one person.
I am constantly surprised that people continue to choose to use Facebook daily in all its awfulness. If this is how most of our society chooses to squander this wonderful thing called the internet, maybe our society deserves the consequences, whatever they may be.
We tend to forget that people can delete their accounts. (I did.) Participation isn’t mandatory.
amz review (and others),
Please do not spam this blog with copied content from other places. That’s what links are for, and, in any case, anyone who wants to read Amazon reviews will see them if they follow the link to the Amazon book page included in the posting.
There are often two contradictory strands of these critiques of modern operations research (which is really what all this stuff is about). One is that it doesn’t work, the other that it works too well. So you end up with all this spaghetti being thrown at the wall, as in Peter’s synopsis.
Moreover, most of the drawbacks of pervasive personal data also have advantages. A machine that invades our privacy is also an alibi machine for the innocent, for example. Pervasive video surveillance makes it hard to sneak around, but it also makes it harder for police to abuse suspects. And so on.
Personally, having seen the really primitive ability of current Big Data marketing tools to predict my interests and send me appropriate ads and offers, I think the biggest issue is hype. These things are much weaker in application than in breathless accounts, and when they do work it seems mostly a good thing.
Thanks for pointing this out, sounds very interesting. Will totally put it on the reading list.
@srp
Your argument in favour of pervasive data collection seems to be the same as that used for state mass surveillance: ‘it’ll keep you safer, and if you’re not doing anything wrong you’ve got nothing to worry about.’
Bee & Peter,
If you’re interested, the theme of one of the ‘MLConfs’ a few months ago was ethics in machine learning. You can find the videos of the talks here: https://www.youtube.com/playlist?list=PLrbAIdPI69PiyfmA-TzKpWkmYTmuck5wS
And a summary of the conference itself: http://www.kdnuggets.com/2016/06/ethics-machine-learning-mlconf.html
I saw the headline “Math is racist” at CNN, and correctly guessed there’d be something more reasonable at this blog.
But I have to say this seems like a classic case of out-of-touch elites. If you have a perfectly fair and objective algorithm (or more broadly, any human-made system), some people in positions of influence, and with their own personal biases, will deem the results unacceptable and unfair, and will act to modify things to strongly push the results in a direction they find desirable. Often this tampering is done in the name of egalitarianism, to make things superficially appear more equal by actually making them less fair. This generally does more harm than good, including to the intended beneficiaries.
While the author may make many legitimate arguments, some other arguments have things exactly backwards. The problem is not that “algorithms” (more broadly, any human-made systems and decision-making processes) have an unintended or intended bias against certain parts of society, causing some purported social injustice. Rather, some pseudo-egalitarian elites have actively rigged and tampered with the systems so as to direct favor towards those whom the elites deem to be deserving recipients of compensatory social justice.
It’s often not the algorithms/systems themselves that are the problem; the real problem is the actions taken to counteract the supposed one.
Actually I think there are at least two somewhat more fundamental problems concerning the social context and consequences of mathematical models, which one should think about before discussing whether the outcomes are beneficial or not.
First, one should be aware that modelling in itself is a descriptive activity: models try to capture the important aspects of some part of reality, necessarily leaving out “small” factors that will not influence the outcomes (mostly average quantities) significantly. When relied on to initiate activities, however, they cease to be purely descriptive tools and acquire normative aspects.
But in the realm of normative questions there are – at least in most parts of the western world – different standards by which rules are judged. One important aspect is that, once people are concerned, it is not in general acceptable to just look at average outcomes and _ignore_ those individuals who happen to suffer a serious disadvantage for the greater good of the majority. This does not mean that such situations do not also occur without the involvement of machines and algorithms – but in the end they are (or rather should be) handled in the context of legal decisions by courts, which always have to take individual rights and general legal principles into account. This is very much in contrast to the “Utilitarian” position cited at the end of amz review’s comment. So relying on algorithms for social decisions tends to promote a certain ethical position, which I consider to be at odds with some of the more fundamental principles of western societies. In any case, if one would like to base our society on such a position, one should openly discuss this using arguments from the relevant realms (ethics/philosophy/…) – and not just accept such a position because it happens to harmonize well with the way models work.
Related to this is the point that many algorithms at their core are nothing but statistically vamped-up prejudices: correlating e.g. zip codes to credit default risks is not fundamentally different from forming prejudices about people in certain neighbourhoods based on “everyday observations”. But in contrast to plain old prejudices, they now claim to be serious (even scientific) answers to practical (and often pressing) questions.
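As a toy illustration of how thin the step from group statistic to individual verdict can be (all figures and zip codes below are invented for the sketch, not taken from any real scoring system):

```python
# Toy illustration: a credit "score" that is nothing more than a group
# average. All figures and zip codes are invented for this sketch.

# Historical default rates by zip code (the "everyday observations").
default_rate_by_zip = {"10027": 0.12, "10583": 0.03}

def credit_decision(zip_code, threshold=0.05):
    """Deny credit if the applicant's *neighbourhood* default rate is high.

    Nothing about the individual applicant enters the decision: the group
    statistic simply is the prejudice, now with decimal places attached.
    """
    rate = default_rate_by_zip.get(zip_code, 0.0)
    return "deny" if rate > threshold else "approve"

print(credit_decision("10027"))  # deny -- regardless of the applicant
print(credit_decision("10583"))  # approve
```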
This leads me to my second point: There seems to be a serious problem in the perception of what models can and can not do, both in the general population and among many of those involved in the modelling.
Talking to people outside academia and looking at statements in the media, one gets the impression that mathematical models have an almost magical status: mathematics is seen as very complicated, so the people building the models must be very clever, and consequently their models have to stand above “normal” reasoning. Of course, models are nothing more and nothing less than a precise version of ordinary reasoning in a framework (mathematics) which allows the application of a lot of machinery to make precise reasoning easier and quantitative.
As a result, the importance of the modelling proper, and the huge freedom one has when setting up a model, is massively underestimated (if perceived at all). And I suspect this last point is also true for many of the mathematicians doing the modelling.
There are systematic factors behind that – one of the most important being that at least those coming from pure mathematics usually have no experience at all in modelling. This is not just a question of maybe having learned and numerically implemented one or two well-established models, but of having experienced firsthand that one can come up with loads of models for a given phenomenon which are mathematically consistent and at first sight might seem plausible, but later just turn out to be wrong in important aspects.
In general I do think there are by now quite a few important questions about the consequences of increasing reliance on mathematical models and the responsibilities of those involved in building them. I therefore consider it high time that mathematicians abandon their “we are disconnected from the world outside” attitude (as expressed e.g. in Hardy’s “Apology”) and start talking about their responsibilities – just like physicists had to do 70 years ago.
Bernald,
The “Math is racist” headline is a really stupid take on the book. Countering that kind of stupidity with “the problem is out-of-touch social engineering elites” is just as stupid. Neither has anything to do with what’s actually in the book. All: the internet has quite enough of this kind of stupid, reductive argument; please don’t do it here.
I have followed her since day one on her blog. I was already over in healthcare writing the same type of blog posts, as I could see how financial models were just being tweaked to be used in healthcare. Do pay attention: our current Medicare/CMS acting director is all into this with stats and numbers, and came from United Healthcare, where modeling was his expertise; he’s a former McKinsey consultant and Goldman banker. WMDs are really being pushed at HHS and CMS. It’s scary indeed, and I’ve been writing for about 8 years about the incestuous relationship with United Healthcare that exists there.
I go a bit further and talk about “excess scoring” as a result of the WMDs. If most folks really knew, and would educate themselves on, what goes on at the prescription counter, it might be too much to take when they see the reality: you are scored with medication-adherence predictions, some 300 metrics based on behavior, not actual drug compliance. I chat with pharmacists all the time who keep me updated in that arena. Here’s a link below that has a video and goes into detail, but in summary, Cathy is correct on the WMDs, as they are everywhere, scoring people into oblivion. I’m glad she did this book.
http://ducknetweb.blogspot.com/2016/06/the-truth-about-pharmacy-benefit.html
@KMS,
Regarding “First, one should be aware that modelling in itself is a descriptive activity: models try to capture the important aspects of some part of reality, necessarily leaving out ‘small’ factors that will not influence the outcomes (mostly average quantities) significantly. When relied on to initiate activities, however, they cease to be purely descriptive tools and acquire normative aspects.”
That depends upon one’s perspective. There is an entirely defensible program of statistics which acknowledges the subjective elements of any inference and provides a framework for incorporating them. Moreover, it is possible, even if practically difficult, to bring loss estimates into the inference and adjust decisions accordingly. These are necessarily based upon judgment. So I would very much challenge whether modeling in any practical application can remain purely descriptive.
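A minimal sketch of what bringing losses into the inference can look like – the posterior probability and the loss table below are invented for illustration, and the judgment lives in the loss numbers, not in the data:

```python
# Minimal sketch of decision-making under an explicit loss function.
# The posterior probability and all losses are invented for illustration.

p_default = 0.10  # posterior probability that the applicant defaults

# Asymmetric losses, chosen by judgment rather than given by the data:
loss = {
    ("approve", "defaults"): 100.0,  # lender loses the principal
    ("approve", "repays"):   -20.0,  # lender earns interest (negative loss)
    ("deny",    "defaults"):   0.0,
    ("deny",    "repays"):     5.0,  # opportunity cost (and harm to applicant)
}

def expected_loss(action, p):
    """Average the loss over the two outcomes, weighted by probability."""
    return p * loss[(action, "defaults")] + (1 - p) * loss[(action, "repays")]

best = min(("approve", "deny"), key=lambda a: expected_loss(a, p_default))
print(best)  # the decision changes as soon as the loss table changes
```

Change any entry of the loss table and the same posterior yields a different decision: the normative content is made explicit rather than hidden.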
Among other assertions, to my thinking, WMD reemphasizes the need to bring forward our assumptions, all of them, and acknowledge them.
One or two here seem to be suggesting that Big Data is fine so long as the algorithms are good.
In the hands of a benevolent state committed to the equal distribution of its bounties, that may be true – although there are still contrary arguments to that idea.
However, does this not miss the point of the book? Big Data is largely in the hands of large corporate self-interest groups that have no motivation to establish the fair distribution of anything.
The author of this book was in the room, and only she truly knows how much she was made, wittingly or unwittingly, into a con artist. I tend to suspect hubris more than fraud on the part of the actual quants (not sure about their bosses), i.e. another cohort of geniuses brought low by the messy complexities of the real world. The entire field of economics suffers from the same delusions, as far as I’m concerned. They weren’t the first, and won’t be the last. In my field, the promises of the “omics” du jour are plagued by similar hype, and the data collected has the potential for even more harm, if misused, which it almost certainly is and will be. Perhaps it’s how I get myself to sleep at night, while acknowledging I’m just another corporate whore; but I like to think the individual players at least started off hoping to simply make a good living and provide a useful product. That, and having surveyed the abuses of academia, they thought the stench maybe wasn’t any worse on the outside.
To me the folly of the quants is just a symptom of the larger problem of trying to optimize systems before first optimizing incentives. The quants did the job they were asked to do. Probably a lot of them even believed their own dazzling bullshit. They were rewarded handsomely, and probably broke few if any laws, the legal disincentives having been gutted long before. While people were still making billions playing the securities shell game, they were the heroes of investors, from the richest fund managers to saps like me, happy to get a cheap mortgage and a growing 401K.
The system was well primed, they just helped lubricate it. I’m far too much of a cynic now to believe adding some more ethics to the graduate maths curriculum will accomplish much without some major structural corrections to the financial sector, among many others, corrections that only sweeping govt. action can compel. It’s good to be wary of certain algorithms, but clever people will always find a new trick for the right price, and one can only be so vigilant. We must admit that nearly all of us, sadly, have our price. If we could do a better job of acknowledging that at the societal level, I think the talents of the quants could be reliably channeled to far better uses. I’d like to think the vast majority of them were fundamentally as decent as most people, and got swept up in it all.
I think the new class of methods of concern have to do with machine learning, which can produce predictions without a model in the usual sense of the word. The mathematics is in the model of learning, but not in the particular model that resulted from the data that was used as the training set. I think this class of methods is the most liable to the spiral described in the review: ” if a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues”.
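A crude simulation of that spiral – the cutoff, starting scores, and increments below are all invented – shows how a fixed threshold can compound whichever side of it you start on:

```python
# Crude simulation of the feedback loop: being denied credit worsens the
# very score the model consults next time. All parameters are invented.

CUTOFF = 620  # lending model's approval threshold (hypothetical)

for start in (600, 640):
    score = start
    for year in range(5):
        approved = score >= CUTOFF
        score += 15 if approved else -10  # credit builds, or finances erode
    print(f"start={start}: score after 5 years = {score}")
# Starting below the cutoff, the applicant never recovers; starting above
# it, the same rule compounds the advantage.
```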
Hi, Anonyrat,
From the perspective of a lender, whose fiduciary responsibility is to maximize return on investment for their shareholders, such a model might be performing quite optimally. The lender isn’t in the business of “pull(ing) him out of poverty”; they’re in the business of collecting interest. The fundamental problem, then, isn’t a bad algorithm, though that might exacerbate things considerably if it contributes to unsustainable trends, i.e. bubbles; it’s what we as a society deem to be fair game in the student loan sector. It could be that giving people loans for a song in affluent zip codes when there isn’t the collateral to back them up, then packaging these things in such a way as to conceal risk from investors, is a really crap way of performing one’s fiduciary duty, but the plight of some kid on the wrong side of the tracks isn’t technically relevant until post-collapse.
Sure, big data and the algorithms to exploit it may hurt us, but they’re designed to give us the answers we want, after all. As far as I can tell, they didn’t entirely fail the financial sector. They do appear to have been phenomenally successful, over an appropriately finite duration. Microbial populations mindlessly exploit a fundamentally similar approach in a closed system; they just lack the foresight and ability to know when to exit the log phase voluntarily, so as to better reap the rewards of their success. From a brutally pure financial standpoint, investors didn’t miscalculate what to buy so much as when to sell.
I think we have to ask ourselves, while we’re mulling how we got here, why, when things don’t go so obviously wrong, the underlying status quo is still OK.
LMMI:
*The author of this book was in the room, and only she truly knows how much she was made, wittingly or unwittingly, into a con artist.*
What do you mean by this? Are you under the impression that O’Neil was involved in pricing mortgages?
LMMI,
As AJ points out, O’Neil wasn’t involved in pricing mortgage-backed securities, so she had nothing to do with the specific fraudulent practice I mentioned in the posting. She has written some fairly detailed postings about her time at D.E. Shaw; for example, take a look here
https://mathbabe.org/2011/06/15/working-with-larry-summers-part-1/
and at other postings in that series that are linked to from there.
Yes, yes, and quite sorry, Peter and AJ. Got mixed up on sectors. But one might ponder the same questions of ethics when considering the relevant quants.
I think in general the idea here is to be appropriately nuanced, but as the “math is racist” nonsense makes amply clear, it’s apparently too easy to make “big data” and “algorithms” into bogeymen, and caricature quants as some new, insulated, coldly metrics-driven rentier class, instead of people with a particular skill set getting paid to do a particular job the way they were asked to, much like the rest of us. The manifestation of the problem is fairly novel and sophisticated, but the underlying causes and motivations are rather more perennial and prosaic, in my opinion.
LMMI,
I’m not asking the lender to be a social activist; but a lender might trust a human where an algorithm might not. A lender might come up with microlending, and so on.
Then there are some of the concerns expressed here that are relevant to this post: http://sites.ieee.org/spotlight/ai-ethical-dilemma/ (IEEE, On the Use of AI – the Dependency Dilemma)
“What started out as Internet technologies that made it possible for individuals to share preferences efficiently has rapidly transformed into a growing array of algorithms that increasingly dictate those preferences.”
I don’t see how her extremely limited tenure at a (systematic) trading firm and subsequent rants make her any more than a slacktivist.
Peter, sorry, the content in your link to her blog is just non-specific and childish ranting, a small step down from her book. I actually approached her book really excited to get her perspective. From a rigorous and fact-based perspective, her book was very thin. I realize that she’s tapping a narrative that people feel very passionate about, but I disagree with your earlier use of the word “important”. It could have been important. Had she leveraged her job-hopping and intellect, as opposed to what is clearly an axe she has to grind, she could have started a nice conversation based on factual reporting to support her hypotheses.
Eiger: No, that’s not what I’m saying. Pervasive data can protect you AGAINST the state at times, as well as against false accusations. Not having the video can be as damaging to justice as having lots of video around.
In general, if a mathematical model is functioning in place of a human being (or being used by a human to supersede his own judgment) in making decisions that affect people, then, as noted by others above, the math aspect doesn’t really change any of the value-oriented policy or ethical issues. Police officers have traditionally been trained to look for what seems “out of place” in a situation and investigate that, a heuristic which has much merit but also leads to excessive stopping of motorists traveling outside “their” perceived communities. A drone programmed or trained to do the same thing would be no better and no worse in terms of fairness and public safety than the human. The same goes for loan officers, admissions officers, etc. The advantage of the mathematical model is that it is in principle easier to audit (unless it is a neural net) and to get it to operate consistently in the way its designer intended. But if the designer has the wrong incentives, and if in practice the auditing does not occur because reading the code of the model is difficult for those who might care, then you’re back to the same old problems. I don’t see how they would be worse than if the human were doing the “wrong” thing unassisted by a model, based on his own incentives and perceptions.
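For what it’s worth, the simplest kind of audit is not hard to sketch – the decision log and the 4/5ths rule-of-thumb threshold below are illustrative, not any regulator’s actual procedure:

```python
# Toy audit of a decision rule: compare selection rates across groups.
# The decision log and the 4/5ths threshold are illustrative only.

decisions = [  # (group, approved?) -- an invented audit log
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def selection_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = selection_rate("B") / selection_rate("A")
print(f"selection-rate ratio: {ratio:.2f}")  # commonly flagged below ~0.8
```

The hard part isn’t the arithmetic; it’s getting access to the decision log and the code in the first place.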
Hi, Anonyrat,
Microlending strikes me as fairly activist in the current climate. If I’ve learned anything about corporate behemoths, it’s that they reflexively seek to minimize time, overhead, and risk. This quaint notion of two people building a mutually-beneficial relationship through engagement and trust will just take too damn long. And this trust thing is pretty subjective. Too hard to measure, too hard to track and hold accountable. That’s why these algorithms are invented. It’s not that big lenders are evil so much as impatient and risk-averse. What’s the most reliable way to close a good deal fast and get the money? That they actually wind up increasing risk and losses in the process is a considerable irony, but data and algorithms only serve as an amplifier of a flawed ethos. We lost the benefits of more deliberative practices because we valued them less than rapid returns.
The problem with social-algorithm-based decision-making is that despite their shortcomings (assumptions that are incomplete, false, agenda-driven, …), these algorithms are held in awe and almost beyond reproach, as they are inaccessible to 99.9 percent of the population. Furthermore, such models are impersonal, veil profit motive or criminal intent, and are almost never challenged. A loan is denied, and all a loan officer has to mouth is, “it is policy, I would love to help, but my hands are tied.” And the kicker is that those who have the will and means (read: governments, corporations, the Beverly Hills zip code) to disassemble the algorithm du jour can handily subvert it for fun and profit. It is man against the machine – and the machines are winning!
Paul Wilmott has been warning of the abuse of math in finance for a long time, despite himself being a quantitative financial analyst who has written several books in the area.
http://rsta.royalsocietypublishing.org/content/358/1765/63
Big Data is not just about obscure algorithms and machine learning. Visualization is an important part of it, and newer methods of visualization, which harness certain insights from data, have helped many in decision making. Unfortunately, that is not an aspect that is usually attributed to Big Data.
Big Data and Data Science, the way it is practised today, is mostly statistics + computer science. Mathematicians and physicists need to develop better understanding of these two fields to contribute more in this space.
It’s even worse than people think. Having worked for many years on “big data” and machine learning at a very large software company, what became apparent was that careful and statistically reliable data analysis was eventually thought to be too stodgy since it often didn’t immediately lead to revenue gains. Instead, conclusions were hastily drawn and tried out at massive scale in the hope of randomly producing a winner. The resulting “lottery” winners moved up and the losers moved on.
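That lottery is easy to reproduce. In the sketch below, the scale and metric are invented, and the true effect of every variant is exactly zero:

```python
# Sketch of the "lottery": run many experiments on pure noise and the
# best one still looks like a winner. Scale and metric are invented.
import random

random.seed(0)
N_EXPERIMENTS = 100   # variants tried "at massive scale"
N_USERS = 1000        # users measured per experiment

def measured_lift():
    # The metric is pure noise: the true effect of every variant is 0.
    return sum(random.gauss(0, 1) for _ in range(N_USERS)) / N_USERS

lifts = [measured_lift() for _ in range(N_EXPERIMENTS)]
print(f"best apparent lift: {max(lifts):+.3f} (true lift of all: 0)")
```

Promote whoever ran the max and you have selected for luck, not skill.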
I’ve followed her column occasionally, and it’s good to see someone whose background is in modelling take this seriously. How does one audit complex mathematical models in high finance, where secrecy is important? The same goes for traders’ motivations, incentives and so on.
Sociologically, the algorithms are a replacement for religious precepts.
They allow the governing class to impose its values on the rest of society, and regulate their actions, behaviour, and social class. You don’t need religion to impose a belief system.