[SQUEAKING]
[RUSTLING]
[CLICKING]
GARY GENSLER: I'm going to talk a little bit about how I see financial technology in a stack. We did a little bit of this in our last two classes, but just really thinking about the finance technology stack and then turn back into AI and machine learning. In finance, we reviewed this a bit in the last class, but I just want to go back to it and talk a little bit more about it, a little bit more granular this time around.
And then take the bulk of the class around public policy frameworks and how AI fits into that-- artificial intelligence, machine learning, chatbots, and the like. So that's sort of the run of show. And Romain, you let me know when we have either 15 or 10 minutes to go because I don't have a clock here on this Zoom.
So the three readings really built upon last class's readings, but they had a little bit of a tone towards the regulatory side. And Oliver Wyman and the Mayer Brown-- Mayer Brown is a law firm, and Oliver Wyman is a consultancy that thinks about risk management. But each came with a little bit of a gloss on how to manage if you're at a board level.
And the Oliver Wyman really went through the lines of business, how machine learning and artificial intelligence is being used. And the Mayer Brown went through sort of the laws. And we'll come back to that. But that's why I sort of reached out and had that.
Now, the short 1- or 2-pager from Julie Stack from the Federal Reserve, I thought it was just interesting to say, all right, here's a senior-- and Julie's very well-respected in the community-- a senior career person at the US Federal Reserve writing about fintech. What are they thinking?
And just like I did earlier, and we shared sometimes, what is the official sector thinking, as we shared the chair of the FDIC's speech earlier, and I will throughout this semester, I think it's helpful if you're thinking about this to sometimes think, all right, what's the official sector saying in their speeches and so forth? But that was kind of why I put those readings out there.
I'm looking at Romain to see if there's anything, but probably not yet.
And then the study questions. This is where it gets a little bit more fun and I see if we can get a little engagement. And again, we spoke a great deal about this already. But does anybody just really quickly want to give an articulation of this question-- why are these new forms of AI-enabled data analytics, pattern recognition, speech recognition, and so forth-- how do they fit into the other trends, as you see it, as we've talked about in financial technology?
GUEST SPEAKER: Luke?
GARY GENSLER: This is your fun time to either call on people as they've volunteered or otherwise. But we've discussed this. I'm just trying to get it a little going here.
GUEST SPEAKER: We can start with Luke.
AUDIENCE: So the commonality among the industries, sector-agnostically, is the fact that all the companies that can deploy this AI in their operating systems do so to save money, to save costs, so that the bottom line is better.
GARY GENSLER: So one thing that Luke's raising is saving costs. Others want to sort of chime in. And I'm kind of also curious as others chime in how you see it fitting in with other emerging trends. We had the trends that we've already lived through, but we're building upon, like the internet and mobile and cloud. We have some other trends that we'll get to in future classes like open API.
Just kind of curious to see if somebody wants to take a crack at connecting this piece of the technology with some of these other trends.
GUEST SPEAKER: Laira?
AUDIENCE: Yeah, I think what we discussed extensively in the last lecture was Erica being one of the really good examples of how the new forms of AI are merging with what's already existing, making it not just cheaper for the firms to answer mundane questions that customers have, but also making it more user-friendly.
So I think in terms of Erica, it's just a great example to show how this question kind of goes through.
GARY GENSLER: So Laira's just raising-- and sometimes I just break these down simplistically. In the artificial intelligence world, there's the consumer-customer interface. Erica at Bank of America is an example, and chatbots, and the various ways we communicate with customers. Tying into customers builds on what technology? What is it that you use and might even be using right now when you're watching this course?
GUEST SPEAKER: Eric, you also had your hand up.
AUDIENCE: I was going to talk about something else. I was going to say that--
GARY GENSLER: Sure. [INAUDIBLE] and I'll tie them all together. Don't worry. I'll answer [INAUDIBLE] questions, too.
AUDIENCE: Sure. I was saying that AI is being used by fintechs for better underwriting purposes, like using alternative data to better assess people's credit.
GARY GENSLER: Absolutely. So it's the data analytics. It's the customer interface. That data analytics of predictive underwriting, whether it's in insurance, whether it's in lending, predictive underwriting. It's also on the customer side where we use natural language processing and we interface with the customers.
Romain why don't we take one or two more? I'm looking for somebody who wants to tie it to the other technological trends that we see.
GUEST SPEAKER: So let's go with Nikhil and then with Wei.
GARY GENSLER: All right, and then we'll move on.
AUDIENCE: I think the Oliver Wyman reading talks about how companies that have been using AI and machine learning have done better. I think asset management was a specific example they took. I think it also ties to another class I'm taking with Simon Johnson on AI, which talks about David Autor's report that says there's a superstar effect, where firms that have access to this data and are using AI tend to perform better in the market.
And I think that's a significant tie-in. And it's probably even more exaggerated in fintech specifically.
GARY GENSLER: So let's just pause for a second. It's data, data. What we've had is this remarkable advancement in data analytic tools, artificial intelligence. But we've also had a remarkable advancement of the ability to store and process data through the cloud and just through the emergence of much faster computers and much more connected communications.
So that data piece, the artificial intelligence and machine learning trend might not have been able to do as well if it weren't for everything that's going on, broadly speaking, whether it's in cloud or computing. And Romain the last person that was--
AUDIENCE: Yep. So I also want to mention that AI also helps a lot of the time with either collecting data or cleaning the data. Because a lot of the time in the old world there's a lot of data you potentially collect.
First of all, AI can help to better collect unstructured data. And second, it helps to clean a lot of the data you collected.
GARY GENSLER: So absolutely agreed. And often 80%, sometimes 90% or more, of a computer science group's work is in the cleaning up of data and standardizing data. And we'll come back to this, but a lot of fintech disruptors, a lot of startups, have actually created value more around data than anything else.
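The cleanup-and-standardization work described above can be sketched in a few lines. Everything here, the field names, formats, and records, is hypothetical, purely for illustration; real pipelines deal with far messier inputs:

```python
# Hypothetical raw records, with the kinds of inconsistencies that eat up
# most of a data team's time: stray whitespace, mixed casing, currency
# symbols, thousands separators, and missing values.
RAW_RECORDS = [
    {"name": " Alice Smith ", "income": "85,000", "state": "ma"},
    {"name": "BOB JONES",     "income": "$72000", "state": "MA"},
    {"name": "carol lee",     "income": None,     "state": "Ma"},
]

def clean_record(rec):
    """Normalize one raw record into a standard shape."""
    income = rec["income"]
    if income is not None:
        # Strip currency symbols and thousands separators, then parse.
        income = int(income.replace("$", "").replace(",", ""))
    return {
        "name": rec["name"].strip().title(),
        "income": income,
        "state": rec["state"].upper(),
    }

cleaned = [clean_record(r) for r in RAW_RECORDS]
# Records with missing income might be dropped or imputed; here we drop them.
usable = [r for r in cleaned if r["income"] is not None]
print(len(usable))  # 2 usable records out of 3
```

Only after this kind of standardization does the data become something a model, or an acquirer like Visa or Intuit, can extract value from.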
And I will say not just about data, but standardizing data. And later in this class, we're going to talk about Plaid and Credit Karma, both of which were earlier this year acquired, Plaid by Visa, Credit Karma by Intuit for $5 to $7 billion-- big, big acquisitions. And we're going to talk about what was the value proposition for Visa and Intuit? Why were they paying $5 or $7 billion?
A lot of it-- not all of it, but a lot of it relates to data, but also having standardized that data, particularly in the case of Plaid. How it's affecting the competitive landscape. We've talked a great deal. Hopefully this will continue to be a theme throughout the semester about big incumbents, big tech, and fintech startups.
I will contend in this and throughout this course that AI and machine learning is now moving into the technology stack. If we think of this stack as layers of technology that incumbents incorporate, and frankly will not survive if they don't incorporate, that AI and machine learning is being incorporated quickly into the financial incumbent technology stack.
We're not fully there yet. And that the competitive landscape is such that the fintech startups and disruptors have been able to find cracks in the old business models. And using machine learning, they've been able to break in. And we'll talk a bit about that.
The big tech firms, of course we've already talked about that they are really about networks. Networks that they then layer more activities upon, and those more activities bring them more data.
And then we're going to talk a fair amount about public policy. But anybody who's sort of dug into the Mayer Brown reading want to just give two or three thoughts on the broad-- what are the-- I'll later call them the big three? But it's almost written right in the question for you, but Romain, you want to call anybody?
GUEST SPEAKER: Michael?
AUDIENCE: Yeah, so the reading did touch upon bias a lot, and its potential, just in the patterns that a machine learning algorithm would trace.
GARY GENSLER: So one of the things about machine learning and deep learning is that it's remarkably successful at extracting correlations. Correlations from data sometimes that we didn't see before, that didn't come just from a linear relationship, a linear relationship that we might be able to identify just in classical statistics.
But in those remarkable abilities to extract correlations, you might see biases. If the data itself has a bias in it-- that people of a certain gender, certain race, certain ethnic background, or certain geography are more likely, in the data's mind, to have lower income, and in the data might be more likely to be a lower credit quality-- then you might be embedding certain biases inside the data.
And many nations around the globe, not just the US, have said to the credit card companies and the other financial firms that you shouldn't have biases around race, gender, ethnic background, geography, sometimes, and the like. So one is biases.
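The dynamic described here, bias entering through a correlated proxy even when the protected attribute itself is excluded, can be illustrated with entirely made-up numbers:

```python
# Toy historical data: (zip_group, protected_group, approved). The numbers
# are invented for illustration; the point is that zip group and protected
# group are perfectly correlated in this history.
history = [
    ("A", "group1", 1), ("A", "group1", 1), ("A", "group1", 1), ("A", "group1", 0),
    ("B", "group2", 1), ("B", "group2", 0), ("B", "group2", 0), ("B", "group2", 0),
]

def approval_rate(rows, key, value):
    """Approval rate among rows matching the given feature value."""
    idx = {"zip": 0, "group": 1}[key]
    sel = [r for r in rows if r[idx] == value]
    return sum(r[2] for r in sel) / len(sel)

# A "blind" model that scores on zip group alone still reproduces the
# disparity, because the proxy carries the same information.
print(approval_rate(history, "zip", "A"))         # 0.75
print(approval_rate(history, "zip", "B"))         # 0.25
print(approval_rate(history, "group", "group1"))  # 0.75 -- identical, via the proxy
```

This is why simply deleting the protected column from a training set does not, by itself, satisfy the anti-bias laws Gensler describes.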
When I consider the three big buckets here-- anybody want to just talk about the other two? Romain?
GUEST SPEAKER: Alicia.
AUDIENCE: Hi. I think we talked about this last class. I think AI derives conclusions or correlations without explaining the why. So humans cannot understand why one person has a better credit rating than another, and that's an issue with the law, basically.
GARY GENSLER: Yeah. And why as societies have we embedded in laws-- and we'll talk about this. But if you have a point of view, why as societies have we embedded in laws that you need to be able to explain the why when you deny somebody credit or deny somebody a financial product? We did this in the United States 50 years ago in something called the Fair Credit Reporting Act.
Data analytics was a big wave in the 1960s, believe it or not, when credit cards were invented in the 1940s and '50s. By the 1960s, data analytics were going, and the Fair Isaac Company, which became FICO, had started. And we embedded in law that you had to answer this question. Explain why you denied credit. But why do you think we embed that in country after country in our laws?
GUEST SPEAKER: Danielle?
AUDIENCE: So I think it's, going back to the bias question, to prevent bias in people who are extending credit.
GARY GENSLER: I think you're right. I don't think it's the only reason, but I think it's a dominant reason. We also in the US passed something called the Equal Credit Opportunity Act, which generally goes by the term ECOA. But those two laws and another law in the US, the Truth in Lending Act, for transparency, were kind of this bedrock out of the 1960s data analytic credit card boom. By the early '70s, we had those three.
Anti-bias-- fairness, you might say-- and explainability. These are two bedrocks in finance in Europe and the US, country after country. What's the third challenge that comes up with data analytics or AI that we often find ourselves facing, and that, if you're starting a fintech startup, you have to be aware of? Romain, any hands?
GUEST SPEAKER: Not yet. We have Luke again.
GARY GENSLER: We'll pass on Luke unless somebody else.
GUEST SPEAKER: We have Danielle again.
GARY GENSLER: All right, either one, whoever's got their mic off.
AUDIENCE: So privacy is the last one.
GARY GENSLER: Sure.
AUDIENCE: For example, companies have demonstrated the ability to predict when consumers have certain health conditions or pregnancy, for example. There is a really famous case where a company knew that a consumer was pregnant based on how their shopping patterns changed, and there are reasons we've precluded employers or credit extenders from asking about certain parts of people's lives. But we may be unexpectedly exposed to parts of those lives if we're capturing data and using it.
GARY GENSLER: So this trade-off of privacy versus financial services-- though it's not as old as the fairness and the explainability, which in the US and then later in other countries were embedded in many laws 30 to 50 years ago-- privacy has picked up a bit more steam. By the late 1990s in the US, there were modest financial privacy protections that were embedded into law in 1999.
I actually helped work on that with then-Congressman Ed Markey, now Senator Ed Markey of Massachusetts. But in Europe, they went quite a bit further in something called the GDPR, the General Data Protection Regulation, which we'll talk about a little later. So those three buckets-- those three buckets are the important ones.
So again, AI and machine learning fit into these other trends that we think about. And I'm going to walk through that in this class-- cloud and internet and mobile and data. Fintech startups, big tech, and incumbents, I believe, are all embedding it in their technology stack. And you're really challenged if you don't.
And then the big three challenges in public policy, explainability, bias, and privacy. There are other challenges as well, but those are the big three, in a sense.
So what do I mean by technology stack? Well, I think that three things are already embedded, the internet, mobile, and the cloud. And if this class were being taught at MIT in the 1980s, none of them would be there, and by the 1990s, we would have said, wow, that internet.
The word "fintech" didn't really come about in the 1990s. But if we had applied it to the 1990s, the internet was dramatically changing finance. Mobile in the aughts, and the cloud, and so forth. I would contend you cannot really survive in the finance space, giving customers what they need, whether it's in the wholesale markets of capital markets and payments or in the retail markets, if you haven't yet embedded these in your technology stack.
Now, I will note that many large financial companies are slow to use the cloud. The largest amongst them tend to want to still have their own data centers. I think you're going to see that shift dramatically in the 2020s. But I'm certainly telling you that if you start a startup, you cannot survive if you're trying to do your own data center; you're going to already embed these in what I'll call your financial stack.
The internet for connectivity, mobile in a sense for ubiquity, meaning that folks can be out there. Cloud, you're sort of renting somebody else's storage and often their software. But then the things that we're talking about in this time, in the 2020s that are being embedded into the classic standard company stack is AI, machine learning, and natural language processing, and what we'll talk about in the next class a lot about open API.
Now, we're in a transition mode. Not every company has really embedded it in their stack. And these are where the opportunities really existed in the last dozen years in fintech. Fintech startups that were savvy enough to really build this into their product offerings faster than the incumbents, or, better yet, in a more refined, targeted way. And we'll talk a fair amount about that.
Now, of course, there are other things in the stack. And this is not what this class is about. Even money itself, and accounting and ledgers and joint stock companies, were all in a sense part of the stack. We just take them completely for granted. By the time you're in a master's program at MIT-- master's of finance or MBA or other graduate program-- you're quite familiar, and you almost just take these for granted.
But I can assure you at earlier decades, they couldn't be taken for granted. And some of them, like securitization and derivatives, will dramatically shift your ability if you're doing a startup to compete. I see some things in the chat. Romain, are there questions?
GUEST SPEAKER: All good, Gary.
GARY GENSLER: All right. And then the question I sort of still have, and I teach this quite a bit at MIT, is blockchain technology. Will that move into the stack? I would contend it's not really yet there. You can be an incumbent. You can be a big finance firm, a big tech, or a startup and say, I'm not going to compete right there. I'm not quite sure.
Though, again, we look at Facebook. We look at Telegram, big tech companies, messaging companies, [INAUDIBLE] in Korea who are sort of pulling in some blockchain technology and looking at it. We see trade finance consortiums. And we'll talk more about this next week.
But I would say that you will not survive if you're not bringing machine learning into your technology stack. You probably won't survive that long if you don't really have a strategy around open API and data. Romain, I pause a little bit.
We talked last session about artificial intelligence and machine learning. We're not going to dive back in. I'm just going to open it if there's any questions about what we talked about. That, of course, machine learning is just a part of artificial intelligence. You narrow it down to deep learning. Fundamentally as a business school, I'm not asking each of you to be able to program with TensorFlow and run a TensorFlow project, even though many of you know how to.
I'm sort of just saying to think about, from a business side, it's about extracting from data, cleaning up that data, standardizing that data, and often labeling it. Labeling it because you can learn faster. That's called supervised learning rather than unsupervised. But labeling that data and then extracting correlations and decision algorithms that come out of it. Romain, any?
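The labeled-data workflow described above-- labeled examples in, a decision rule out-- can be sketched minimally. The features, labels, and the nearest-neighbor rule here are all illustrative assumptions, nothing like what a real credit model would ship:

```python
# Labeled training data: (income_thousands, debt_ratio) -> "approve"/"deny".
# The labels are what make this supervised learning: each example carries
# the answer the model is supposed to learn.
labeled = [
    ((90, 0.10), "approve"),
    ((80, 0.20), "approve"),
    ((30, 0.60), "deny"),
    ((25, 0.70), "deny"),
]

def predict(features):
    """1-nearest-neighbor: label a new point by its closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled, key=lambda ex: dist(ex[0], features))
    return nearest[1]

print(predict((85, 0.15)))  # approve
print(predict((28, 0.65)))  # deny
```

The "decision algorithm" that comes out is only as good as the cleaning, standardizing, and labeling that went in, which is Gensler's point.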
GUEST SPEAKER: Luke has raised his hand again.
GARY GENSLER: I'm going to pause.
AUDIENCE: Just a quick question.
GARY GENSLER: Oh, a question. Yeah.
AUDIENCE: Yeah, a question. So how can a country that is developing fintech not because it was underbanked, but rather overbanked and looking for alternative investment-- the likes of South Korea-- develop a bunch of coders, or, better yet, people who can draw conclusions and extract hypotheses and build up better ways to build an open API? How can a government really step in to encourage that and make an ecosystem?
Somebody's got to do something. And I'm not sure-- America has a bunch of great coders and great minds, and it's a melting pot. So [INAUDIBLE] bunch of geniuses here.
GARY GENSLER: Yeah, I'm not sure I follow the question, but I'm going to take it and then move on. I think what you're saying is in a country that has a very advanced banking system, how can a government encourage this? You do it through the education system. You do it through, just as we do in the US, promoting STEM education and programs like at MIT.
I think over time, there is a challenge of how you adjust laws and regulations. Finance is a highly regulated space in every country. We're dealing with trust. We're dealing with people's money. We're dealing with inherent conflicts of interests that you can't escape in the world of finance. And so trying to deal with that with regulation but how we adjust with these new tools that are in place. But that would be how I'd answer it.
So let's go back to what we talked about and just sort of do a little bit more granular. I contend at its core that these technologies, machine learning and natural language processing and AI, need to be brought into the finance stack and the technology stack. And so every type of company, whether you're in payments or you're in lending, whether you're insurance or not, you want to think about how you bring it in, whether you're a disruptor or not.
And so that's why I think about it down the line in each of these fields and not just about disruptors. And we talked about each of these in the past, but I'll pause-- again, it's a little repetitive-- just in case there are any questions about some of these slices. And then remember, we're going to be digging quite a bit into this in the next five weeks as well in each of these areas. Romain?
GUEST SPEAKER: I don't see anyone yet.
GARY GENSLER: All right. Well, what I've said is, all right, so AI is a tool. And this is really an interesting debate that people can have at MIT. And I've been in rooms full of five to 10 faculty sometimes, sometimes one on one, where we debate. Is AI a service, or is AI a tool? And I would say that it's an interesting debate.
Most of the time we land on that it's more a tool than a service. But every new technology, every new technology that comes along, whether it was the telephone, whether it was the railroads, whether it was airplanes, every new technology that comes along has some attributes of being a new industry, a new tool, the railroad industry, for instance.
And yet many businesses use the railroad to do their shipping. AI and machine learning is more like a tool than it is a service. But it doesn't mean it's always just a tool. I did some research over the last few days, just a list where we could go through any one of these. AI as a service. Here I've listed 10 or 12 companies in finance that are actually doing AI sort of as a service.
AlphaSense was started in 2011 before the whole rage of AI, but they were data analytics as a search engine for finance firms to search key terms and key words in registration statements and other statements that are filed with these various securities regulators around the globe. Sort of think of it as the Google for financial documents. Well, Google certainly has moved dramatically into AI space. AlphaSense did as well.
There's a number of these in the insurance sector that really are around taking photographs of automobiles at an accident scene and then, based upon those automobile photographs or accident data, using machine learning. And so Cape Analytics and Tractable are both firms that are in essence providing services to insurance companies.
They have not yet, as to the best of my knowledge, Cape Analytics or Tractable, decided to have direct consumer interface. They're not selling insurance. They're selling a software analytics tool to insurance companies. And similarly, like ComplyAdvantage in the money laundering space or Featurespace in anti-fraud.
They're saying, we can build something for fraud detection. We can build something for this world of anti-money-laundering compliance. We can build the software, and we'll put our product out there for the banking sector to basically rent us rather than building their own system. And you see others, document processing and the like.
And even Zest AI-- Zest AI, founded in 2009, before this conceptual framework and the big movement, but Zest AI in credit underwriting software, basically providing-- broadly speaking, I'm calling it AI in finance as a service, rather than building it right into the stack. Romain I'm going to pause for a bit.
GUEST SPEAKER: If you have any questions, please raise the little blue hand, as you probably know by now. I don't see anything, Gary.
GARY GENSLER: And I'd say this, that each of these go back to some of the sectors back here. So asset management, we have sentiment analysis. We have-- I'm not going to pronounce the company's name right, but Dataminr. Dataminr, which can actually do market sentiment analysis. And if you're a hedge fund, you might sort of rent into Dataminr and get that sentiment analysis.
You have several services that are doing fraud detection and regulatory, the anti-money-laundering slices. Credit and insurance-- I picked three that are doing insurance underwriting. But if you're Bank of America or J.P. Morgan or Cap One in the credit card business, you're going to be embedding this right into your business, by and large. Not always, not always, but by and large, you're embedding it right into your business.
GUEST SPEAKER: We have a question from Geetha.
GARY GENSLER: Please.
AUDIENCE: Hey, Gary. Geetha here. I work for Capital One. I'm in the credit lending space. One--
GARY GENSLER: You're going to correct anything I say about Cap One. Please.
AUDIENCE: No. One thing that I find really surprising with the regulations is that if we develop our own AI models, regulators-- like when auditing happens-- are very specific about explainability, interpretability. But if you were to use a vendor-- Shape Security, Akamai-- they don't care too much about explainability.
That I always found surprising. Why is it that within the realm of the bank, they're so specific about regulations, but when we use a vendor, the extent to which they care is just: you use Akamai, you use Shape, and that's it.
GARY GENSLER: Geetha, I'm so glad that you raised this. I think I earlier had said in our introductory class that one of the competitive advantages of disruptors is that they have certain asymmetries. Incumbents, like Cap One-- and if I might say, you working for Cap One, which is one of the big seven firms in credit cards, is truly one of the incumbents.
Incumbents tend to need to protect their business models. And part of what they're protecting is also the reputational and regulatory risks. But the disruptors have a little bit of a different perspective. They're not generally protecting any inherent or incumbent business model. And yes, they're also willing to take more risk with the regulators.
I'm not saying whether they should or shouldn't. I'm just saying this is kind of the facts on the ground that disruptors are a little bit more towards the end of basically begging for forgiveness rather than asking for permission, if you sort of remember how you might have been with your parents or if any of you have children. And incumbents are more into asking for permission of your at least internal GC, your General Counsel. Can we do this? Can we do that?
And so I think the vendors-- in that case, the vendors are a little bit more willing to take risks with explainability, the explainability that's inherent in US law and the Fair Credit Reporting Act and the like. And it doesn't mean that it's more legal for a disruptor to do it than for Cap One to do it. It's just their business model tends to be a little bit more accepting of that regulatory compliance risk.
Secondly, and I think this is probably a bit of a misreading of the risks, but sometimes the thought is if the vendor does something, it sort of insulates the big firm. Now, my understanding-- again, I'm not a lawyer-- but my understanding, it doesn't really insulate Cap One, or it doesn't really insulate Bank of America if their vendor does something that's blatantly a violation. I don't think it does. But sometimes there is a bit of that mindset as well. Does that help?
AUDIENCE: Yeah. Yeah. And the last thing is-- not taking too much time, just a comment. One other thing I find very intriguing with vendors is that they often get the data of incumbents, like maybe Bank of America, Capital One, and they charge us back for that data. That's the other thing I find [INAUDIBLE].
GARY GENSLER: So here, this is about how companies can capture data. We're going to talk a lot about this one, open API. Just at the intersection, this is one of the key things to take from this series of classes: machine learning is nowhere if it doesn't have data. And a lot of startups are facilitated by getting access to the data which the incumbents already have.
So around the globe, in Europe, in the UK, in Brazil, in Canada, US, Australia, there are significant efforts to promote what's called open API-- Application Program Interface. In essence, that is you or I permissioning a company to get my or your data from an incumbent financial firm.
And so we permission somebody to get data, in Geetha's example, from Cap One. Then the startup uses that data. And then Geetha's saying that Cap One then has to pay a fee to some startup, even though the data had initially come from Cap One.
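The permissioning flow described here can be sketched as a toy simulation. All class and method names below are hypothetical; real open-banking APIs use OAuth-style token exchanges over HTTPS rather than in-process objects:

```python
import secrets

class Bank:
    """Stands in for an incumbent holding customer data."""
    def __init__(self):
        self._accounts = {"alice": {"balance": 1250.00}}
        self._tokens = {}  # token -> customer who granted it

    def grant_access(self, customer):
        """Customer permissions a third party; bank issues a scoped token."""
        token = secrets.token_hex(8)
        self._tokens[token] = customer
        return token

    def get_account(self, token):
        """Third party presents the token to pull the customer's data."""
        customer = self._tokens.get(token)
        if customer is None:
            raise PermissionError("no valid consent for this request")
        return self._accounts[customer]

class Aggregator:
    """Stands in for a Plaid-like middleman between fintech apps and banks."""
    def fetch(self, bank, token):
        return bank.get_account(token)

bank = Bank()
token = bank.grant_access("alice")       # Alice permissions the aggregator
data = Aggregator().fetch(bank, token)   # aggregator pulls her data
print(data["balance"])
```

The key structural point is that the consumer's consent, not the bank's, is what unlocks the data, which is exactly the asymmetry Geetha is describing.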
And that's a perfect setup to two examples I just want to talk about-- two large mergers that were announced in 2020. The first one is Credit Karma. Now, I don't know if we could do this by a show of hands, but how many-- just raising the blue hands in the windows-- how many people, if you can go into the participants button, have actually used Credit Karma, such that you would consider yourself one of these members?
And Romain you'll tell me what it looks like.
GUEST SPEAKER: We have at least 10 students so far.
GARY GENSLER: No, but if you scroll down. So all right, so it's not as big a percent as I thought. Let me go back. So Credit Karma started in 2007. The entrepreneur who started it couldn't get a free credit report. So he said, why don't I start a credit report platform?
13 years later, they were able to sell for $7 billion to Intuit. Now, you might not be familiar with Intuit. Their main product at Intuit is tax software, TurboTax. They also have something called QuickBooks. And I believe it's possible they have a third product. They might even have Mint.
But Intuit decided they wanted to buy Credit Karma, which had never gone public. Credit Karma apparently had nearly a billion dollars in revenue last year, and yet Credit Karma is still a free app. How is it that something that doesn't charge anything can have a billion dollars in revenue? It's that they're commercializing data.
And remarkably, 106 million members-- 106 million members. 8 billion daily decisions, credit decisions or other analytic decisions, that they have. And so why is Intuit buying Credit Karma? Even at seven times revenue, that's a healthy price. They're buying it largely around data and data analytics. And Credit Karma has figured out how to basically commercialize that flow of data on over 100 million accounts.
And how do they do that? They do it by cross marketing. So they're marketing not just to us. But then they're also, with many financial firms, going back and saying, this account here, this is a worthy one. So they make arrangements. They enter into contractual arrangements with financial institutions and then market to us to take a mortgage, to take an auto loan, to take a personal loan.
Plaid. Plaid's a company that we'll talk a lot about when we talk about open API. This was software that started just seven years ago. Two developers, straight kind of hard-core computer scientists, who had gone to work for Bain. And for anybody who's thinking about working for a consulting firm, this is not a vote against Bain or BCG or others.
But they basically decided after a year at Bain to go out and do their own startup. And it was a startup to facilitate financial disruptors or fintech companies accessing data at banks. And there was not a standard. There was not a standard for this open API.
So they created it at a hackathon-- at a hackathon that they actually won back, I think, in 2012 or 2013. They were in their late 20s at the time, by the way, if you're just trying to figure it out. Seven years later, in their mid to late 30s, they sell their business for $5.3 billion. But it all starts at Bain, computer scientists creating open API software.
Well, what happened over those seven years, 11,000 financial companies signed up to use that standard protocol to do open API. And on the other side, 2,600 fintech developers. And if anyone here has taken Michael Cusumano's class on platforms, this is the classic sweet spot of creating a platform company, when you have this two-sided many-to-many market.
Many fintech developers want to access financial data at financial firms. Many financial firms don't want to deal with thousands of fintech developers. And so inside of this many-to-many market, Plaid creates a software, a standard software for that to happen. But what did they build on top of that? They built data aggregation.
They announced a $5.3 billion sale to Visa. A lot of people debate whether it was a good idea because the estimate by Forbes is there was only about $110 million of revenue. I mean, now we're talking 40 or 50 times revenue. But 200 million accounts are linked.
We'll chat about this more because those 11,000 financial firms could all stop using Plaid and go to one of Plaid's competitors now that Plaid's bought by Visa. But this gives you the sense of the power and the value, the economic value of data, machine learning, and the like. Romain questions?
GUEST SPEAKER: None so far. It seems like the class is quiet today.
GARY GENSLER: All right. So I'm going to talk a little bit about financial policy. How does this all fit in, in the next half hour? Broadly, first, is just a sense of-- I'm trying to get rid of this participant window here for a minute.
So broad public policy frameworks have been around for thousands of years, since the Code of Hammurabi, since Roman and Greek times, sometimes embedded even in religious law. That's the nature of money. But four slipstreams, and all four will be relevant for fintech as we talk through not just AI, but all sectors.
One is money and lending. Over centuries, the official sector has often had a point of view here, sometimes even limiting interest rates and the like. Two is financial stability. We think about a crisis. We're living through this corona crisis right now. Central banks around the globe are, with an eye towards promoting the economy, also thinking about how to ensure financial stability.
The reverse of financial stability was happening in the 2008 crisis, where banks were faltering and closing up. And that led to millions of people losing their jobs, millions of people losing their homes, and the like. So financial stability-- I grabbed a couple pictures here out of the Great Depression, an earlier period of crisis.
But what we'll talk a lot about is the third and fourth bucket. The third bucket of protecting consumers and investors. Consumer protection we can think of even just in terms of ensuring that if we buy a crib for our children that it's safe. If we buy a car that it actually is safe on the road. So consumer protection refers to things much broader than finance.
Investor protection is the concept that, yes, we can take risk in markets. We're all allowed to take risk in markets. But that the markets themselves and the issuers, the people raising money, should explain to us at least the material pieces of information upon which we would take those risks. And that the markets themselves have a certain transparency and we protect against fraud and manipulation and the like.
And then guarding against illicit activity. This is one that we've really layered over the financial sector in the last 40-odd years. In an earlier era, the 19th century, the earlier 20th century, we didn't have as much of this, even though, of course, we did guard against bank robbers. But I'm talking about illicit activity outside of the financial sector-- money laundering, terrorism, and even sanctions. So these four slipstreams, in a sense, are there.
So how does it fit back to AI and policy in finance? I've talked about what I have come to call the big three: biases, fairness, and inclusion; explainability; and privacy. And what we mean by that is if you take a whole data set, millions or tens of millions of pieces of data, and extract correlations, and you find patterns, some of those patterns might have biases.
And those biases can exist because we as a society are not perfect. We have biases even in what we've already done. And so now you're extracting and you might be embedding some of those biases.
Secondly, sometimes it will happen just out of how you build your protocols, how you build your actual questions and query on the data. But I assure you that most data sets have some biases in them. You just might not be aware of them. And even if you have a perfect data set, the protocols themselves might sort of build some biases on top of them.
And we're finding this in AI policy not just in finance. It's true in the criminal justice system. It's true in hiring, that using machine learning, you have to sort of say, wait, is there bias? And the laws here in the US that are most relevant in finance started with something called the Equal Credit Opportunity Act. And we'll talk a little bit more about that.
Explainability and transparency, as we talked about earlier, is sort of like a cousin or a sister to the bias issue. And in the US, it was 50 years ago that we passed these twin laws within four years of each other. The second law was the Fair Credit Reporting Act. And this was the concept about holding that data, but also being able to explain it. Romain, I see the chat button.
GUEST SPEAKER: We have a question from Jorge.
GARY GENSLER: Please.
AUDIENCE: Yes, professor, thank you so much. I just want to have a little bit more color on financial inclusion, and specifically on what type of data, what models are used? What's the forefront of data modeling for using AI and to help financial inclusion? Thank you.
GARY GENSLER: I'm not sure, Jorge, I follow the question. Let me see if I can answer it, but please keep your audio on so we can engage here. Biases are sort of the reverse of inclusion. So financial inclusion is a concept that everyone in society has fair, open, equal access in some way to the extension of credit, to insurance, to financial advice, to investment products and savings products that they wish, or payment products as well.
And the reverse of inclusion is sometimes that somebody is excluded. And excluding someone could be excluding them on something that is allowed. Like I might exclude somebody only earning $50,000 a year from an investment product which is for high-risk investors, depending upon how the country is arranged.
But in the US, we have a little bit of this concept that sophisticated investors can be investing in products of higher risk. Or at least they get less disclosure. But that's how inclusion and bias are kind of the-- they complement each other. You can get to greater inclusion if you have fewer biases, in a sense, fairness. But [INAUDIBLE].
AUDIENCE: No, I was just-- I totally get that. I was just trying to understand what type of models, what type of data, what is the forefront of AI currently? Because I totally get it's gathering data and finding patterns. But digging a little bit more on that, what type of--
GARY GENSLER: So let's say that one pattern that we know about already-- this is a classic pattern in credit extension. I don't know how many of you know what retreading a tire is. A retread means that you're putting rubber on your tire. Instead of replacing your tire on your automobile, you're actually paying to put new rubber on the tire-- tire retreading.
It's been known for decades that people who retread their tires are a little lower income, generally speaking. And actually there's research-- I don't mean academic research, but there is research in the credit business that retreading tires means that you're probably a little higher credit risk.
Now bring it forward to now. Bring it forward to the 2020 environment. And let's say that you can follow that those people who bought tire retreading, or even if you went to a website on your laptop about tire retreading, let's say that's built in to an algorithm that's going to give you lower extension of credit.
That might be allowed, or it might embed a different bias. It might be that tire retreading shops are perfectly acceptable in certain communities, either ethnic communities, or gender-based, or racial communities, that it's just perfectly-- it's not about creditworthiness. So it's how you extract certain patterns that are about credit extension but not about race, ethnicity, cultural backgrounds, and the like.
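To make the proxy problem concrete, here is a minimal sketch with entirely made-up numbers-- the groups, the correlation strength, and the credit rule are all hypothetical. The model never sees group membership, only the retread flag; but because the flag is assumed to correlate with one group, approval rates diverge anyway.

```python
import random

random.seed(0)

# Hypothetical population: group membership is NOT a model input,
# but in this synthetic data, retreading correlates with group A.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed correlation: group A retreads tires more often.
    retreads = random.random() < (0.6 if group == "A" else 0.2)
    population.append((group, retreads))

# A naive credit rule that never sees `group`, only the proxy feature.
def approve(retreads: bool) -> bool:
    return not retreads  # deny credit to retreaders

approval_rate = {}
for g in ("A", "B"):
    members = [r for (grp, r) in population if grp == g]
    approval_rate[g] = sum(approve(r) for r in members) / len(members)

print(approval_rate)  # group A is approved far less often than group B
```

Even though `group` appears nowhere in the decision rule, the outcome differs sharply by group-- which is exactly the disparate impact question the Equal Credit Opportunity Act raises.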
And if you hold just for a second, we're going to talk a little bit more about this because I'm going to talk about the Equal Credit Opportunity Act.
AUDIENCE: Thank you.
GARY GENSLER: Romain we good?
GUEST SPEAKER: All good, Gary.
GARY GENSLER: So beyond what I'm sort of calling the big three, I list four other things. But they're all really relevant. They're relevant to, broadly speaking, the official sector. But they're also relevant as you think about going into these businesses: the use of alternative data. And we'll come back to that.
Basically, we've had data analytics in consumer finance since the 1960s. We have in 30-plus countries used these FICO scores. But beyond what is built into the classic data set, what about new data?
We have issues about whether the algorithms themselves will be correlated or even collude. And it is absolutely the case that one machine learning algorithm and another machine learning algorithm can actually train against each other. We've already seen this in high-frequency trading.
Even if the humans aren't talking, the machines will start to actually have a sense of cooperation. And when is cooperation collusion? When is it that they're spoofing each other or doing something against each other in a high-frequency world?
And this is one of the areas I want to research more with colleagues. I am deeply concerned-- it's remarkable, we're in the middle of the corona crisis-- but in a future crisis we'll find algorithmic correlation.
And this is certainly the case in smaller developing countries, where a Baidu from China or a Google from the US might come in, and they might come in with their approach to artificial intelligence. Or a large financial firm-- it could be a European, Asian, or US financial firm comes into that smaller country, and they kind of dominate the thinking about how to do underwriting, and they're the big network effect.
And all of a sudden, the crisis of 2037 might be that everybody is extending credit kind of consistently in the same way. So it's basically less resilient. We're living through a moment of crisis right now where we're testing the resiliency of humankind through the corona crisis. But I'm talking one in the financial side.
And then the question is, how does machine learning fit into current regulatory frameworks? Around the globe, a lot's been written, but it's all at a very high level, and it's non-binding. But it's these top three, as I've mentioned.
So the alternatives-- this was a question earlier-- is the official sector can stay neutral and say, listen, this is just a tool. We're still going to regulate lending. We're going to regulate capital markets. We're going to regulate everything the way we did. And new activities will just come into those frameworks. Maybe we'll clarify a little bit around the fringes.
Secondly, you can adjust. Turn the dial. When the internet came along in the 1990s, at first it was like technology neutral. And then pretty much every regulator around the globe had to adjust. What did it mean if there was an online bulletin board that was trading stocks, where buyers and sellers can meet?
Was that what's called an exchange? Should it be regulated like the New York Stock Exchange or London Stock Exchange, or regulated maybe a little differently? Where our securities regulator and the Europeans ended up in the 1990s was to regulate these online platforms like exchanges, but not identically.
So then we had a regime of fully regulated exchanges and these online electronic trading platforms. And that was then later adopted in Asia as well, with some variations.
The other thing is that the official sector often tries to promote this-- promote the innovations, promote the technologies, or promote open banking, as I've said. But an interesting piece of this all is activities. Should we think about machine learning as a tool, like a hammer that everybody is going to be using, like electricity, like the telephone that everybody is using? Or should we think about it, as I said earlier, some companies are providing AI as a service?
Activities, a technology, a tool. And the official sector grapples with this sometimes. To date, mostly they've stayed technology neutral with a little bit of promoting early stage activity and promoting open banking. Romain, questions?
GUEST SPEAKER: We have 15 minutes left.
GARY GENSLER: OK. Alternative data. This is data that you can extract, whether it's banking and checking information, or as Alibaba does in China, taking a whole cash flow approach. Saying I can see everything about your business. It's called cash flow underwriting.
Here in the US, a payment company, Toast, was able to do cash flow underwriting around their restaurants-- until restaurants closed down-- because they had that payment data. All the way down to your app usage and browsing history.
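As a rough sketch of how cash flow underwriting can work-- the numbers, the volatility threshold, and the `cash_flow_offer` function are hypothetical illustrations, not Toast's actual model-- a payments provider that already sees a merchant's transactions can size a small-business loan from observed monthly net cash flow rather than from a credit bureau file:

```python
from statistics import mean, pstdev

# Assumed data: six months of net cash flow observed through the
# payments platform for one hypothetical restaurant, in dollars.
monthly_net_cash_flow = [4_200, 3_800, 5_100, 4_600, 3_900, 4_400]

def cash_flow_offer(flows, advance_months=3, max_volatility=0.25):
    """Size a loan to recent cash flow, but only if flows are stable.

    Returns an offer of `advance_months` times average monthly net
    cash flow, or 0 if the coefficient of variation is too high.
    """
    avg = mean(flows)
    volatility = pstdev(flows) / avg  # coefficient of variation
    if volatility > max_volatility:
        return 0  # too unstable to underwrite on cash flow alone
    return round(avg * advance_months)

print(cash_flow_offer(monthly_net_cash_flow))  # 13000
```

The design point is that the lender's information advantage is the data exhaust of the payment flow itself-- which is also why, as Alida raises below, the line between a "cash advance" and a regulated "loan" matters so much here.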
I believe this is a trend that we've been on that will be accelerated by the corona crisis. In this crisis, we're finding that firms, whether large ones like Google or even smaller ones, want to contribute to trying to thwart the virus by following us through our location devices. Our location devices, of course, are called cell phones and smartphones.
But such vast parts of the population have them that with location tracking, we can possibly thwart or even contain this virus by watching how we track ourselves. I think that we will shift a little bit further into data sharing, and we will look back at 2020, and maybe 2021, as a pivot point.
But what it does mean, even in finance, is that a lot of this data is going to be available somewhere, even more, maybe, than is currently available. So there are actually alternative data fintech companies. CB Insights, which is a leader in tracking fintech, puts together this chart. And we don't have time to go through these companies, but these are companies that are sort of marketing themselves in this alternative data space, almost like capturing the data, and then data and AI as a service.
I want to just talk about Apple Credit Card for a second because it's where you can also stub your toe. Apple Credit Card with a big rollout in conjunction with Goldman Sachs' Marcus and MasterCard-- so it's a really interesting combination of big tech, big finance together-- rolls out a credit card product. I think it was in November-- yeah, November of this past year.
And in rolling it out in a very proud rollout, an entrepreneur, going here by the Twitter account DHH, which you might know, goes on and finds that he is provided greater credit than his wife. And he and his wife are both billionaires, literally worth a lot of money. Maybe they were only centimillionaires, but worth a lot of money. Joint tax returns, joint assets, and he was being provided 10 to 20 times more credit.
So he took to the Twittersphere and sort of made this public. And he says his wife spoke to two Apple reps, both very nice. But basically saying, I don't know why. It's just the algorithm. It's just the algorithm.
What really hurt Apple even more than that was within days-- the next day-- Steve Wozniak, one of the co-founders of Apple, put this tweet out. "I'm a current Apple employee and a founder, and the same thing happened to us, 10 times." Meaning Steve got 10 times the credit of his wife under the same algorithm.
"Some say to blame Goldman Sachs," et cetera, et cetera. But Apple shares the responsibility. Not a good rollout. So for Apple Credit Card, they'll survive. Apple is a big company. They'll probably fix these biases, but not a particularly good rollout, as you can well imagine, in their models. Romain.
GUEST SPEAKER: Alida has her hand up.
GARY GENSLER: Please, Alida.
AUDIENCE: Yes, you mentioned cash flow lending earlier. And that really falls under the merchant cash advance business. There's been a lot of debate about that being-- it's not very regulated right now, but that becoming more regulated. Is an entry point like Toast and other companies that have just launched these new products-- [INAUDIBLE] speed up the regulations around that business?
GARY GENSLER: So, Romain or Alida, I missed a word. Which company did you say in there?
AUDIENCE: Toast.
GARY GENSLER: Oh, Toast, OK. So Toast-- for people to be familiar, Toast started in the restaurant payment space. And it was basically trying to provide hardware, tablets. They thought that the point of sale would be facilitated if servers had a tablet. And so they were sort of in the hardware, software space. They found themselves getting into the payment space very quickly, and then, of course, data.
And with that data, they could do a whole cash flow underwriting. And then they started Toast Capital, where they would make loans, small business loans, to these restaurants. And I think they have 25,000 or 30,000 restaurants that are in their client list. And they did a funding round earlier this year at a $4.9 billion valuation.
So this was working, of course, until the crisis. And so then the question is about regulation of cash flow underwriting. I'm not familiar enough with how Toast feels, even though I've met the founders and things like that. They're a Boston company.
But I think they're dealing with the same set of regulations that everyone is, which I'm going to turn to right now in the last eight minutes. But cash flow underwriting, Alida, are you worried about something specifically about cash flow underwriting? Because maybe I'll learn from your concern.
AUDIENCE: I [INAUDIBLE] if you look at-- I would consider that to be like a merchant cash advance product, and those are not actually considered loans in the regulations. Now, there's a lot of movement toward having those products being considered loans and then fall under different regulatory standards that [INAUDIBLE].
GARY GENSLER: So what Alida is raising-- I'm sorry. What Alida is raising is, again, in every political and regulatory process, there is some definition of what falls within a regulation and what falls outside it. You might think, where are the borders and boundaries of a regulatory environment? What's defined, really, as a security in the cryptocurrency space? What's defined as an exchange under exchange regulation?
And here Alida is saying, what's defined as a loan? And in Toast's case, doing cash flow underwriting, it might be considered a cash advance rather than a loan. So let me do a little research on that, and we'll come back to it maybe in a future class when we talk about payments.
But in terms of the consumer credit law environment, we talked about the Equal Credit Opportunity Act. The key thing is not only whether you have disparate treatment, but whether you have disparate impact. So back to my retreading analysis: if for some reason your machine learning algorithms say, aha, all these folks that are getting retreads should have lower credit, that might be OK unless you find that you're treating different protected classes differently.
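One common screening heuristic for disparate impact-- borrowed from US employment-selection guidance, and a rule of thumb rather than the legal test under the Equal Credit Opportunity Act-- is the "four-fifths rule": flag any protected group whose approval rate falls below 80% of the most-favored group's rate. A minimal sketch, with assumed approval rates and hypothetical group names:

```python
def disparate_impact_flags(approval_rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold`
    times the highest group's rate-- the 'four-fifths rule'
    heuristic from US employment-selection guidance."""
    best = max(approval_rates.values())
    return {
        group: rate / best
        for group, rate in approval_rates.items()
        if rate / best < threshold
    }

# Assumed approval rates by protected class, e.g. from a lender's
# own fair-lending review of an ML underwriting model.
rates = {"group_a": 0.72, "group_b": 0.75, "group_c": 0.48}
print(disparate_impact_flags(rates))  # only group_c is flagged
```

A flag like this does not settle the legal question-- that turns on business necessity and explainability-- but it is the kind of self-review a lender would run before an algorithm, like Apple Card's, embarrasses it in public.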
Are you treating people of different backgrounds differently, different genders differently, as Apple certainly was with Steve Wozniak and his wife? Fair Housing Act, Fair Credit Reporting Act-- the Fair Housing Act has a lot of these same protected class perspectives. Fair Credit Reporting-- and you read this in the Mayer Brown piece-- under the Fair Credit Reporting Act, you can find yourself covered as a fintech company or a data aggregator.
The Plaids and the other data aggregators could find that they were, in fact, coming under the Fair Credit Reporting Act. They were those vendors that Cap One might be using. And they, the vendor, might become Fair Credit Reporting Act companies. And there is usually a boundary there. Again, this is not a law class, but these are to highlight.
States also have Unfair and Deceptive Acts and Practices laws. When I was chairman of the Maryland Consumer Financial Protection Commission, we went to the state legislature in Maryland-- this was a year and a half ago-- and said that Maryland's Unfair and Deceptive Practices Act, UDAP, should be updated to include "abusive." So we sort of broadened it a little bit.
And then privacy laws. This should say General Data Protection Regulation-- I'll correct the GDPR slide-- and then in the US. And those are the buckets that really matter.
Sort of trying to close out: AI, finance, and geopolitics. We've got nearly 200 countries, and those 200 countries are interconnected. We're very interconnected globally. And we've got a lot of standards setters, but those standards setters do not have the authority of lawmakers.
So whether it's the Organization for Economic Cooperation and Development-- the OECD-- or the guidelines of things like the securities regulators and the anti-crime regulators or the banking regulators, these are not enforceable standards. So what we have is competing models for AI, finance, and policy.
And I note that because if you're thinking about starting a company and you operate globally, sometimes a global law will affect you. GDPR from Europe has already affected how we deal with privacy in the US. When California institutes the California Consumer Privacy Act, it influences the whole country. So sometimes it works that way. Romain?
GUEST SPEAKER: We have a question from Akshay.
GARY GENSLER: Please, Akshay.
AUDIENCE: Hi, professor. So on the gender-biased algorithm that you mentioned with the Apple Credit Card, the only thing that we can control here is the data that we're using. If we are not using any gender data, and the algorithm turns out to be creating biases without even using particular data which is considered racist or sexist, would that be counted as breaking the law?
GARY GENSLER: So here-- it's a great question, Akshay. And again, I caution that this is not a legal class. But embedded in the US law and often in other laws is this concept about disparate treatment and disparate impact. And what you're asking is, what if you didn't mean to do it? What if there was no intent, and it's just-- wow, that you've extracted this correlation and all of a sudden there's disparate impact?
You could have a problem in a court. And it's a very established, 50-year-old area-- there's a lot of case law around when a disparate impact would cause you those headaches and anxiety. And it relates a lot to explainability. If you can come back to explainability and you can truly lay out, this is why, and it has nothing to do with gender, nothing to do with race, sexual orientation, backgrounds, and so forth-- a protected class-- you're going to be better off in that court case.
But it would be far better to have no disparate impact. Then you're in a much safer area, broadly.
AUDIENCE: Got it. Thank you.
GARY GENSLER: Romain other questions?
GUEST SPEAKER: Perhaps one last question from Luke.
GARY GENSLER: Oh my God, Luke, you're always in there.
AUDIENCE: I just had a question.
GARY GENSLER: Before Luke goes, is there anybody else? Just, I want to make sure. OK, Luke, you can go.
AUDIENCE: It was not a question. It was a comment on Akshay's question. I'm sure he doesn't support this. But what he asked is, isn't it the same thing as if a person makes a racist or a misogynist comment or a hate crime comment? If they didn't know about it, are they liable for it? Should a corporation be held responsible in the same way a human would be?
GARY GENSLER: Well, I don't know if that's really where Akshay was going, but I see your point. It depends on the country, Akshay and Luke. It really does depend on the country. But here in the US, we'd have this conceptual framework of disparate treatment, disparate impact. And then explainability is from another law.
But it really should be based on fairness and inclusion. Everybody's got the same fair shot, regardless of where the data comes from at all. So I think that sort of-- we're almost out of time.