2020-08-25_BPC: An AI National Strategy for Congress
00;00;04;06 [Jason Grumet]: Good afternoon. I am Jason Grumet. It's a pleasure to welcome you to this afternoon's discussion with Representatives Will Hurd and Robin Kelly on a national AI strategy. As I think everyone is well aware, we are very focused at the moment in this country on our national politics. But you know, we at the Bipartisan Policy Center are also thinking about what awaits the roughly 500 winners of the November elections and recognize that we are going to have a country that remains in economic and health crisis while also facing a variety of complex
00;00;36;08 challenges. And the question that we of course are asking ourselves is: how do we help our government govern in a divided nation? One of the key challenges that we know we are facing is the question of how we reap the benefits and manage the challenges of the continued move toward artificial intelligence. For the past year, we've had the pleasure of working with Representatives Kelly and Hurd to develop a national strategy for artificial intelligence. We have held about half a dozen convenings where we've heard from over 80
00;01;03;23 different organizations and individuals from industry, academia, and civil society organizations. Through this process, we have created four white papers: on ethics, on research and development, on national security, and on workforce issues. These white papers, we hope, are forming the basis of a resolution that members of Congress will be talking about here in just a little bit. They'll also be available on our website, and I think we're going to be posting them in the chat feature via YouTube. Um, today, uh,
00;01;33;01 we're going to focus a lot both on these recommendations, but also on the process that Congressman Hurd and Congresswoman Kelly are moving forward with to actually introduce a national resolution.
00;01;42;11 [Jason Grumet]: And before we move to that discussion, it's really a great honor and privilege to, um, have the ability to kick this off with Eric Schmidt, who I think just about everybody in the world knows as the former CEO of Google. He is the chair of the National Security Commission on Artificial Intelligence, and he is going to lead off this conversation by sharing his view of what is a strategy for America to win in the effort to advance artificial intelligence. And so once Eric concludes his remarks, we will then have a conversation with the representatives. We will
00;02;13;18 then move to a second panel where we drill down a little bit on some of the more key technical questions. After each of these interactions, we will have an opportunity for audience Q and A; just look to the live chat function on YouTube or Facebook, or via Twitter with the hashtag BPCLive. So with that, uh, opening stretch, real pleasure, Eric, to turn it over to you to set the stage for us.
00;02;36;03 [Eric Schmidt]: Jason, thank you so much. You guys have done such a good job. I'm really happy to be part of this. Um, I wanted to start by saying that my comments here are as chairman of the National Security Commission on Artificial Intelligence. This is a commission that was created by the Congress, and I'm not speaking for any other groups that I'm part of; literally, this is this AI commission, which was created by Congress in order to make some recommendations, which interestingly are very similar to those of the Bipartisan Policy Center. So I'm really, really happy about that. Um, I
00;03;07;08 also wanted to thank, uh, Representatives Will Hurd and Robin Kelly, who I think are both on right now. Uh, they saw this earlier than the other folks in Congress, and frankly, there's a difference of understanding: some people get it, most don't. These guys get it.
00;03;23;18 [Eric Schmidt]: And, um, they worked pretty hard to help make this happen. Um, when they talk, I think you'll get a sense of why they're so good at what they do. Um, and as I understand their view, they understand the role AI plays for our economic, social, and, you know, security wellbeing. And so that vision led to the creation, with the BPC, of a strategy and a guide for the Congress, which is sort of why we're all here. Um, it's interesting that in my world, you know, I'm a scientist and technologist
00;03;53;27 from Silicon Valley, there are a couple of groups that seem to be primarily in DC. Um, one is called CSET, the Center for Security and Emerging Technology. The other one is the Center for a New American Security, CNAS. Both of them seem to have an unusually large percentage of people who can cross the technology and the policy areas. I've learned that this is a relatively small group, and these two are among the two or three of the top in the country. And they have been working very closely with the BPC. So I just want to make sure everybody
00;04;22;20 understands that the people who are talking about this are inclusive, but fundamentally the group of people having this conversation is too small. And I want more of you to join one or more of these organizations. So basically BPC is now releasing their reports, which I'm strongly recommending. What happened was that the sort of lens of national security, workforce, and R&D is the same path that the National Security Commission on AI decided to take. Again, I think that shows either common wisdom or common delusion; hopefully
00;04;53;27 common wisdom. And, uh, it turns out that two of our commissioners, Jason Matheny and Bob Work, are actually associated with CSET and CNAS. Big surprise there.
00;05;07;15 [Eric Schmidt]: So the long and short of it is that the AI commission, and we have a series of reports, with a big one coming out early next year, believes that holding a global leadership position in emerging technology is both an economic and a national security imperative. Both are important. People want to focus on one or the other, but you actually need both. You need to have security for obvious reasons, but you also need economic security and economic growth for all the reasons that you could imagine. The two go hand in hand, and I want to emphasize that in
00;05;37;19 these reports we covered the basic fact that America is an innovator, and you have to have an innovation strategy in these areas. If you don't have an innovation strategy, your competitors, more likely the Chinese and the Russians than the Europeans, will take over some very important part of technology. And that's a really bad thing. So we want to be really, really clear. And this leadership model that I'm talking about also gives our government access to the best in the world. Imagine if all the really smartest encryption people and security people worked in China, and none of
00;06;07;11 them were in the U.S. How would our military and our intelligence community do any of their functions? They'd always be beaten by the opponent. It's obvious that we need these people as national resources, and in the American system, they tend to be in the private sector. They tend to be in universities. They tend to be in research labs. They're not controlled by the government. So the synergy between the universities and the government, the military and the government, and so forth and so on is critical in the American model. I want to be very clear that we have a new competitive
00;06;37;21 threat in the form of China.
00;06;39;25 [Eric Schmidt]: China is both our biggest strategic competitor and our current largest strategic partner. Uh, we've never faced this before in terms of the size and scale of the opponent and the collaboration. Uh, at the worst points, the Soviet Union was a third, if you will, of the GDP of America; uh, China is on its way to surpassing us in many, many ways. Uh, and they're cleverly run, in a way that's different from the way we would ever want to be run, but we need to take them seriously. I've spent 15 years dealing with China, uh, mostly on the losing side of negotiating
00;07;11;02 with their government. I take them very, very seriously, and I encourage everyone else to do so as well. So think about it: they're going to end up with a bigger economy, more R&D investment, better quality research, wider applications of technology, and a stronger computing infrastructure. How is that okay? It's clearly not okay, right? We've got to act. And the good news is we have time. And if you follow the thinking of the BPC and the AI commission and some of the other affiliated groups, I think you'll see that there is a way for us to do this. But if we don't act
00;07;40;11 now, in 10 or 20 years we're going to say, oh my God, how could we have missed this? And the Chinese model is different. I'm going to use my own phrase here: it's a sort of vision of high-tech authoritarianism, which is incompatible with the way America works. I'm not saluting it. I'm not endorsing it in any way, but I'm telling you to take it seriously. Right? And it has benefits from the standpoint of strategic execution, which we need to have a good answer to. And so obviously if
00;08;10;15 China achieves market dominance in critical technologies, it's the same argument.
00;08;15;02 [Eric Schmidt]: So how would you like it if the majority of all the things that you use during our friendly pandemic here, sorry, that was not a good joke, were Chinese controlled rather than U.S. controlled? I mean, think about how difficult the pandemic is already, and imagine if we were dependent on other people, um, in this sort of horrific way. So, uh, we have to provide a model of high-tech democracy that works. And I am concerned, I'll say clearly, that we collectively have not yet offered that.
00;08;48;29 If you read the BPC reports, you'll see the list of actions that we have to take, which I'll highlight in a minute, but we seem to have forgotten in our political narratives that we need some things in order to keep winning. And that winning is crucial for our economy, the stock market, all the kinds of things that we care about. And furthermore, this sort of foundation of a high-tech, uh, economy is also the foundation of our freedoms, which I take very seriously: the
00;09;16;22 freedom to associate, the freedom to think, the freedom of speech, and so forth and so on. This technology can be shifted in the other direction, where it can become oppressive. And that's not acceptable to Americans. It's not acceptable to us; it's not acceptable to me. So, for all those reasons, we have to provide real alternatives that are competitive, global, and win in the marketplace. And that requires a refocusing on key areas of technology, which we've either forgotten about, or we found boring, or we haven't
00;09;43;22 funded enough. So think of it this way: technology leadership has a dual path. We have to protect our innovations, nobody wants their stuff stolen, that's obviously not okay; but we also have to out-innovate our competitors, right?
00;09;55;24 [Eric Schmidt]: It's always better to run faster than your competitors and leave them behind. All of the other strategies, blocking strategies and so forth, may feel good for a little while, but ultimately, because of network effects and other things in competition, the competitors end up staying around. It becomes a real problem. We have to move faster. So what happens is that people say it's an arms race. That's too simplistic. The best outcome, in my view, is that U.S. tech firms can out-compete their global competition to create network platforms on a global
00;10;26;15 basis. That's a huge win for America. And by the way, if you care about the stock market, the gains of the stock market have largely been because of the leading tech firms, each of them established as a global network platform, uh, which I can describe if you're interested, and those things are at risk with the devolution with China. So this is a tricky set of issues. So let me refer to the reports, and I'll see if I can finish up with a couple of comments. Um, there are three AI reports that the BPC has published, at least that I've seen: one on R&D, one on
00;10;56;09 national security, one on the workforce. Um, on the R&D, it's pretty obvious that we need to increase our R&D spending, at least in critical areas. If you're not familiar with where we are in R&D funding at the federal level, we're down to a pre-Sputnik number, roughly 0.7%, uh, of GDP. And that number has been as high as two, two and a half percent. It's important that that number increases
00;11;23;25 because that money doesn't come from anywhere else, it can't be made up by private philanthropy.
00;11;28;06 [Eric Schmidt]: It can't be made up by corporations. And yet it is the federal government in particular, through its support of the scientific community and its work with U.S. universities, that has invented the things that ultimately led to the smartwatch and GPS and so forth and so on, and obviously the internet. Um, we need to place some big bets on these areas, and we need to address this funding, and the BPC agrees with this. We also need to identify some research areas for national security, and we should prioritize those.
00;11;58;11 I'm very worried about some new forms of encryption and some new algorithms that are coming out. And again, we need to stay focused on making sure that we keep everything safe. The second paper that BPC put out spends a lot of time on the question of human operators and AI systems. And let me give you an example to make the point. Uh, we have a doctrine in America, which I fully support, that it's human in the loop. So here you are, the time is crunched, everything's happening incredibly
00;12;25;00 quickly, and the AI system says, press the button. Do you really think the human is going to have the kind of judgment and quality of thinking to say, let me debate the accuracy of the AI system, while this thing is coming right at me? This is a very complicated problem that Jason and his team have really addressed. Uh, and it's one that we're going to have to spend an awful lot of time on. There are a lot of other AI issues as well. Um, and in particular, the high-stakes situations that we find in defense and security are of concern, because everything
00;12;57;26 gets, uh, compressed. The military talks about something called the OODA loop: observe, orient, decide, act. And it's calibrated around human reaction. In an AI system, this stuff can occur faster than humans can react. How do we deal with that problem? Uh, there's obviously all sorts of questions about testing and evaluation that we need to focus on.
00;13;16;01 [Eric Schmidt]: And then the third focus that BPC has is around basically the AI talent. We need more people doing AI. We need to get the smart people in the rest of the world's countries to come to America to do AI here, because there are incredibly excellent people out there. We want all of them here, and we don't want them in the competitor countries. And we need to figure out a way to keep them safe and also keep our national security through that process. Um, the report highlights the critical role that immigrants and the children
00;13;46;11 of immigrants have played in founding and leading U.S. technology companies. I can say that from my own experience, given that one of the co-founders of Google was foreign born. I want America to win. I think every single person who is participating in this conversation does, and I know that our congressional leaders want us to win. I want a strategy to win. I am concerned, I'll use my own framing now, that we're still doing too many reactive things rather than saying, here's a global way to win. My summary, which is very consistent with the
00;14;16;21 BPC's recommendations, is that we need to invest in AI, invest in AI research, make sure the ethics are right, invest in our talent, create new industries and jobs, and have American companies be the ones that dominate globally. Uh, I just don't see any other path. The scenario where the world gets split, with the Chinese in charge of this area and us in charge of that one, is not an acceptable one. It's too dangerous for America. And I suspect that when you all read their reports, you will
00;14;44;22 agree with their conclusions. And I think the BPC would like the Congress and the executive branch to adopt these recommendations as quickly as possible. And I strongly urge everybody to do so. Uh, thank you so much, Jason, for, uh, introducing me here. What can I do to help? And again, I cannot say enough good things about what you guys have done.
00;15;05;30 [Jason Grumet]: Well, Eric, I appreciate both the perspective, but also the generosity of your time and spirit, and your service on the national security commission. I think it's an incredibly important and generous thing that you're doing. Um, we do have time for a couple of questions. Uh, not surprisingly, this is a somewhat more tech-savvy crowd than we often get at BPC, so I have a bunch to choose from; I'm probably only going to be able to take about two. Um, so let me start out, Eric. A couple of the questions are about priority and focus. And so, um, if I could summarize, I
00;15;35;27 think the question is: should the US be moving faster in some sectors than others to deploy and develop AI systems? Or should we be just kind of pursuing all these ideas on kind of parallel pathways?
00;15;47;05 [Eric Schmidt]: You know, there's an explosion going on in terms of research in all of these areas. I think many of them will be taken care of in the market. So if you're a business and you need better analytics for selling things, that's a problem that's both solved and where you'll pay employees a million dollars a person, amazing salaries, because the economic return to you is so high. I think that's going to happen. Uh, I think we should focus on the federal government and the state governments, and in particular the national security area, because there the
00;16;18;09 business incentives are not the same, but the importance is just as high, if not higher. That's where I think your emphasis has been. I think that's where ours is. We need to retool, re-imagine, whatever it is we need, so that people who understand this are part of every aspect of the software and security systems in America.
00;16;39;09 [Jason Grumet]: So I think, um, let me ask you one more question, which is really a perfect onramp to the next panel. And, uh, that is: if you could wave a magic wand and have Congress enact one specific AI-related policy, what would it be and why?
00;16;53;13 [Eric Schmidt]: Find a way to double the funding for research in America over a five-year period or less, uh, in a way that doesn't take from everyone else, so that it's a real improvement in available funds. There are things that the federal government is the only potential funder of, and I'm including in the federal government the DOD, the rest of the national security establishment, et cetera. We need more money, because money drives the signals around hiring, building organizations, making
00;17;26;13 experiments, and so forth.
00;17;29;07 [Jason Grumet]: Well, again, we appreciate your, um, your time and perspective. And, uh, we certainly, of course, will be taking you up on the offer to, uh, stay in touch, since these are important and parallel exercises. Um, now my pleasure is to turn to, uh, the people who have been kind of the inspiration for this exceptional amount of work that we've been able to do over the last, uh, few months, Congressman Will Hurd and Congresswoman Robin Kelly. I think they need no introduction; uh, just to remind you, Congressman Hurd represents, uh, Texas's 23rd district. He is the ranking member on the Subcommittee on
00;18;00;21 Intelligence Modernization and Readiness. Uh, Congresswoman Kelly represents the second district of Illinois. She serves on the House Energy and Commerce Committee. And, uh, it's really been a terrific pleasure to work with both of you and also with your exceptional
00;18;15;10 [Jason Grumet]: staff over the last year as we've tried to help you design this process. And I guess I just want to start out, before getting kind of into any weeds, maybe with you, Congressman Hurd: what do you want the American public to know about how AI is or will be affecting our lives?
00;18;32;13 [Rep. Will Hurd ]: Look, artificial intelligence is going to impact every single part of our industries, our lives, and our economy. As Eric eloquently said, uh, we are in a struggle with the Communist Party of China, and there's a role for all of us to play. And I think the federal government has a role. We can help accelerate innovation. And there are ways to increase those research dollars, right? It's by actually making the federal government more efficient. Uh,
00;19;02;00 one of the things I've worked on since I've been in Congress is IT procurement. It's not a sexy topic, uh, but when you make the government run better and cheaper, then you can use those savings for other things, and specifically for innovation. Um, we can be bringing these kinds of technology into the government so that the government is using the tools, um, that we are talking about. And Congress is uniquely positioned to manage the tradeoffs that we're going to have to deal with from a new
00;19;30;14 technology in a technological explosion. Um, and so, you know, a lot of times we worry about, um, the impact this is going to have on our current way of life. It's going to be disruptive, period, full stop. But can we take advantage of this technology before it takes advantage of us? And I want to make sure that English and the dollar continue to be dominant, not Mandarin. And, um, I would say in my short
00;20;00;20 time in Congress, um, we've seen members realize and appreciate the importance of these issues. You know, the fact that the National Security Commission on AI was created shows they recognize that, and then also what we're doing now. So it's great to be doing this with Robin. Shout out to BPC; Michele and John from BPC have been fantastic in this. And it's
00;20;24;24 a pleasure to have Eric, um, kick off, um, such an important occasion.
00;20;29;08 [Jason Grumet]: Representative Kelly, same question to you. In addition to, uh, you know, Will's 2:00 AM voicemails, what convinced you to join this exercise, and what do you want the general public to be thinking about?
00;20;42;19 [Rep. Robin Kelly ]: Well, you said the right thing: Will's 2:00 AM, uh, voicemails, and just us working together, um, when I was ranking member and he was the chair of the IT, uh, subcommittee. And I think, um, I don't know if it was our first hearing, but somewhere close, you know, we had, uh, a hearing on artificial intelligence. So when he called and asked, you know, would I be a part of this project, I said definitely yes, because AI is not the future; it is here with us now, affecting our world.
00;21;12;16 [Jason Grumet]: So, um, Eric shared with us the important work being done by the national security commission. The Trump administration put together an executive order on AI that I think was well received and informed a lot of our work. Talk a little bit about what distinguishes this effort. Why were you interested in, you know, bringing this kind of group together for a congressional focus?
00;21;36;02 [Rep. Will Hurd ]: So Congress has a unique role to play. What we want to do from these convenings and the white papers and their recommendations is put that together. And I think we'll start with a resolution that recognizes the bipartisan accomplishments in AI policy that have happened under the previous administration and the current administration. And then we also need to assert Congress's role in establishing national priorities and funding
00;22;04;03 specific research, like Eric was talking about.
00;22;07;01 [Rep. Will Hurd ]: And we also gotta start talking about sustaining cooperation with our allies. This is not just about the public and the private sector in the United States of America working together; we have to work with our allies in order to do this. China and authoritarian countries are always going to have more data than us. They don't care about civil liberties. And so, in order to beat them at their game, we're going to need more data, or we're going to need algorithms that work on less data. And that's one of the things that we talk about in our papers. And so we also want to try to identify a
00;22;38;27 comprehensive strategy to make sure we retain global leadership in artificial intelligence. And we tried to focus this effort on four pillars, right? Workforce development, national security, research and development, and then ethics; that's the paper that's coming out today. We want our democratic values to drive this new technology, to drive this new tool, not those of an authoritarian government like the Chinese government. And so this
00;23;08;15 is, one, a nod to all the great work. It is trying to put priorities down the road. And it's a framework from which future Congresses can work, regardless of who the next administration is going to be after this upcoming election. This is going to have to be a major issue, with major efforts to make sure we maintain American dominance in advanced technology. We've been doing that since the end of World War II. And if
00;23;36;00 anybody thinks that the federal government can't do it, I would say, look prior to World War II: about one in eight, uh, folks in the entire country was malnourished, and they couldn't even join the military.
00;23;51;27 [Rep. Will Hurd ]: We addressed that with, uh, with a national food program. Similarly, the basic education of someone going into the Army was fourth grade. And if you're not technically fit, you can't be combat effective, right? So these are things that we've been able to address. And likewise, I think the federal government can do it.
00;24;10;26 [Jason Grumet]: I will just note as an aside that our nutrition project demonstrates that today we have the opposite problem: about 20% of all folks are too obese to actually join the military. Um, Robin, you know, we talked a lot about the importance of public-private collaboration, and, um, you've been an occasional critic of big tech. I think it is fair to say that in the last, um, couple of years, kind of large, you know, dominant global firms have
00;24;40;21 come under a lot more heightened scrutiny, growing mistrust of their role and just the awesome power and influence that these companies have. I guess the question is: is that going to be an important dynamic? You know, is a meaningful kind of mistrust of some of these private sector leaders going to inform the congressional discussions going forward?
00;25;02;06 [Rep. Robin Kelly ]: Well, I think, um, I don't know if there's a mistrust. I think we just want the big tech companies, uh, to do things correctly, to do things ethically. Um, I would love to see the big tech companies diversify; I think that would help in some of the decisions that they make. If you look at the big tech companies, they're not very diverse at all. But I think that Congress, uh, you know, wants to partner with, uh, big tech companies and see how we can work together to, uh, progress in
00;25;33;01 artificial intelligence and in other areas. So, I mean, there are going to be some questions; that's our job, to question. I'm on Oversight and Reform, so that's my job: to question, you know, and watch and see what people do and companies do and agencies within the government, uh, how they carry out their jobs. But I think that, um, you know, we want to be the leaders in artificial intelligence. So we need, um, public and private
00;26;02;05 entities to help us move that along. So there might be questions, but in the best way, I think; not to put anyone down or keep anyone down or keep us from progressing.
00;26;16;05 [Rep. Will Hurd ]: And Jason, I'm going to add on to that; this is something that I've learned from Robin as we've worked together. Um, we have laws on the books, so let's make sure they apply: just like a teller at a bank can't discriminate in issuing a home loan, uh, the algorithm can't either. And whether it's the person using the algorithm doing it in an improper way, or the algorithm itself, it's still a violation of the law. And so let's make
00;26;47;28 sure that all those rules that we already have are being followed. Now, I also think that when you're developing something, you can have a different level of standards versus when you're providing it to the commercial sector or to the, to the public, um, for, for ultimate use. So you can have different regimes for looking at those kinds of information. And so let's enforce the laws we already have. And if you want to make sure that you get rid of bias, or don't have bias in the algorithm, let's make
00;27;18;07 sure you have a workforce that's representative of the country. And so I think that's why this workforce piece was so important. I think that's why we put out the workforce paper first: because this is going to impact every industry, we've got to make sure that people who are already in jobs are ready to transition and are able to be prepared for the future. But we also got to prepare our kids for jobs that
00;27;44;03 don't exist today. And that means we got to prepare everyone, and we got to go into communities that haven't benefited from, or were left out of, the previous, um, uh, industrial revolutions, so that they're able to take advantage of this one. And we can use algorithms to take bias out of ourselves, right? Um, but that requires a diverse workforce. And that means whether, you know, you're coming into the government and working in some of these jobs or going out to a high-paying job in the private sector,
00;28;14;27 we have to make sure there's this back and forth between the two sectors so that we ultimately get this right.
00;28;20;28 [Rep. Robin Kelly ]: I also think there needs to be member education. This was not something, you know, high on my list, and I didn't know I was going to be the ranking member on IT. And I always say Will has the 30,000-foot view and I have the 2,000-foot view of the everyday person. For him, this was a part of his life; he was really into it. And I didn't realize how much it was a part of my life. So there needs to be member education too: asking questions and searching for answers.
00;28;48;21 [Jason Grumet]: Let me pick up on that. You know, folks have commented that Congress doesn't understand this issue well enough to be partisan yet; that there are so many issues that are kind of freighted by just core kind of tribal reaction, but that this is such an undiscovered set of questions, and that frankly, you know, there are not a lot of engineers and scientists, uh, up there. So how do you understand this education process? And do you think that this is going to become a partisan issue, or are the dynamics here different than that?
00;29;20;13 [Rep. Will Hurd ]: You're on mute, Robin
00;29;26;09 [Rep. Robin Kelly ]: Let me just say this. I know how it looks to the outside world, and it looks kind of ugly, but we get along much, much better than people think. I think Will would agree with that. And I think that, again, when it comes to moving the United States forward and looking at the countries and competitors out there, we all want to be first; we all want to continue to be the innovators and the leaders. So I don't really see this being so partisan when
00;29;58;21 you look at how President Obama and President Trump have both invested in this, and Will and I have invested in it. And when we were on Oversight, Will and I worked together along with Mark Meadows and Gerry Connolly. So I think we're off to a good start
00;30;16;00 [Rep. Robin Kelly ]: as far as being nonpartisan.
00;30;19;18 [Jason Grumet]: Will, seriously, does the white space here pose a challenge, an opportunity, or both?
00;30;24;09 [Rep. Will Hurd ]: I think it's actually an opportunity, and yes, the premise that we need to educate more members of Congress, there's no doubt about that. But everybody recognizes, number one, the threat of the government of China. I think people understand that, and that is a nonpartisan issue. And then I think people understand the need for America to stay a global leader in advanced technology.
00;30;54;00 I think all of our colleagues recognize that since World War II, that's one thing that has given us an edge and allowed us to have the greatest economy on the planet. So people understand those two things. And I would say that on some of these issues, Congress is actually very well equipped. This question around ethics, whether something is right or wrong: these are things that Congress has been debating since Congress began. So, you know, where is the person in the
00;31;22;23 process, as Eric talked about early on? You don't have to understand machine learning, and how that powers artificial intelligence, in order to have a conversation or some input on those issues. I drive my car a lot of different places. I wouldn't necessarily be able to take the engine apart, but I know enough to make sure I take advantage of the tool. So I don't see this getting partisan, and I think cybersecurity is a good example.
00;31;56;05 Cybersecurity: the fact that everybody in this country knows what OPM is. And Robin, I think this was like our second hearing, with the big OPM hack, 24 million records, including mine.
00;32;07;29 [Rep. Will Hurd ]: Literally, before the head of OPM came to the hearing, I got the letter that my information was stolen by the Chinese. And so cybersecurity has stayed an important and bipartisan issue: being able to harden and defend our digital infrastructure. I think topics like AI will too. Last year we passed a bill focusing our national efforts on quantum computing, which supports artificial intelligence, because you need data, you
00;32;40;27 need high-powered compute, and you need the algorithms and advanced engineering in order to make these things work. And historically, Congress has shown that in our budgets since I've been in Congress. So I think this is going to be one of those few areas where you're going to see opportunity for people to get things done.
00;33;04;27 [Jason Grumet]: So, Will, I'm going to turn to the papers. You talked about a resolution; that was one of the animating ideas a year ago when we started this conversation, and most of us were like, what is that? What does it do? When are you going to do it, and why? So what is a congressional resolution, and what is the ambition behind it?
00;33;24;03 [Rep. Will Hurd ]: A resolution is basically a sense of Congress: Congress says, hey, this is the direction in which we're going. There is a word I can't believe I'm going to use, because I hate this word: jurisdiction. There are so many committees that a bill has to pass through, very different committees, and they want to fight. A resolution can be a broader stroke and say, this is the direction we're going. And this is going to recognize those previous bipartisan
00;33;53;10 accomplishments, not only in Congress but in the executive branch, under the previous and current administrations. It's going to assert our role in establishing national priorities on research and development. We're going to identify the need for a comprehensive strategy. And then we're going to propose these four pillars as guideposts going forward.
00;34;15;30 [Rep. Will Hurd ]: And that's on workforce, national security, research and development, and ethics. And then the individual recommendations, we can turn those into specific pieces of legislation. There's an example already out there: the Cyberspace Solarium Commission was a similar model. And so some of these pieces of the resolution may get turned into bills and passed in this Congress, or even in the lame-duck Congress, but this is a framework for future
00;34;45;22 congresses and future administrations to work off of. And so we're hoping to introduce it in September, and I'm going to say September, Robin, fingers crossed. This is something that I think we can get done, and we'll make sure we have a good group of co-sponsors to make this
00;35;16;22 happen. So if anybody's watching and you're a member, or you work for a member of Congress, reach out to Robin and me, and we'll get your boss on as an original co-sponsor.
00;35;28;14 [Jason Grumet]: We are a 501(c)(3) organization, but I thank you for that opportunity. So just before we turn to questions, I want to remind folks that you should be using the live chat function on Facebook or YouTube. Let's turn to a couple of aspects of these papers. Robin, you really led the discussion around the workforce issues, and I think there's a consensus that AI is and will continue to profoundly affect opportunities in this country at a moment when jobs are appropriately high on the agenda.
00;35;59;07 I think we all believe that this is not a V-shaped recovery. We are going to have tens of millions of people looking for new work, and there is a crushing realization that tens of millions of jobs may never come back. What can policymakers do to prepare the workforce and make sure, in fact, that these jobs are available to all Americans? You're on mute again.
00;36;22;13 [Rep. Robin Kelly ]: I think you guys muted me, but anyway. It's interesting, because yes, there will be many people losing their jobs, but right now in the Chicagoland area we have about 20,000 jobs open for which we cannot find people qualified to work. And I think we need to start from the beginning, with our kids; we need to really get them interested in this. If they can't see it, though, it's hard to achieve it. And then my
00;36;54;00 district has urban, suburban, and rural areas, and in my rural area, 40% of my folks can't even go online. They don't have the broadband capacity, which I hope changes with a bill that should be coming down the pike in Congress; hopefully we can get it passed into law. Because I think we have to start there: getting the young folks interested, and then training people, refocusing people's skills and
00;37;23;04 talents toward artificial intelligence. But workforce is a very big concern. And as you say, COVID is leading to more people being out of work. But again, we have jobs available right now that we don't have people qualified for. So we have to get people trained, but first and foremost we have to get people interested, because too many people are afraid of the very thought of artificial intelligence.
00;37;50;16 [Jason Grumet]: So, Will, I really want to push in a little bit on the national security questions, which I know you have focused on a great deal, and the tension between, on one hand, the need for a strategic focus that requires some degree of top-secret clearance, and on the other, the desire for open-source access to what's happening in that box. I think Eric Schmidt made the important point that we do not aspire to be the central committee.
00;38;18;20 How do you think about that tension between the need for security and the core democratic values of openness and transparency that are so critical to our citizens?
00;38;29;21 [Rep. Will Hurd ]: Well, I think the technology forces us to do that, right? When we talk about the ethics around this: do we know how a system made the decision that it made? Is it auditable? I think that's something that is going to drive a lot of these conversations, and we have to accept the fact that, like so many tools, these are going to be dual use. I think the difference we're going to see in the fourth industrial revolution is that we have a better understanding of the downside than we
00;39;02;28 had in the previous ones, so we can be prepared for that. The future of cybersecurity is going to be good AI versus bad AI. And when you are moving within cyberspace, as soon as something becomes available and is out there in the ether, everybody has access to it.
00;39;22;01 [Rep. Will Hurd ]: So the arms race in a digital environment is going to escalate in a fashion that we've never seen, and we have to be prepared. And I loved how Eric mentioned it: we've got to out-innovate, and we've got to run faster than our competitors. That means we're always going to have to be training; we're always going to have to be getting better. We can't just get someplace, be complacent, and drive on, because the technology is going to evolve so fast. The technological explosion we've seen in the last 40 years has been pretty
00;39;53;10 significant, but the next 40 years are going to make the last 40 look insignificant. And so we have to be ready for that, and that means working with our partners. We should be strengthening our alliances, not weakening them, because the only way we're going to deal with this is by working with our friends. And I always say: nice with nice guys, and tough with tough guys. If you're going to try to steal our technology, guess what we're going to do?
00;40;21;08 We're going to steal your engineers. And I want the best engineers coming here to the United States. And if that means your mama and your daddy need to come too, then let's figure that out. Because I want to make sure the best talent across the world is here in the United States of America, because once they get here, they're going to love our way of doing business a lot better than being in some sweatshop somewhere
00;40;47;08 in the middle of mainland China.
00;40;50;19 [Jason Grumet]: So I take that as a strong plug for bipartisan legal immigration reform in 2021. I want to now turn to the questions; we have time for three or four, and then we'll introduce our next panel. The first comes from Michael Cameron, who asks: how do you support the synergy between Silicon Valley and the federal government, recognizing that many AI leaders are fighting to ensure that their technology is not used by the U.S. government? Either one of you want to talk about how that
00;41;19;22 partnership needs to work itself out?
00;41;22;28 [Rep. Will Hurd ]: Look, I think there are plenty of examples, right? Amazon, AWS, is a good example of how they're working with the intelligence community to make sure that we're transitioning to the cloud and able to take advantage of this type of technology. I guess the cloud is not new; it's been around for a while. So I think there are plenty of examples, and I think those disagreements are taken out of context. But we need to make sure that these American companies recognize that they have benefited from this system of government, right? And
00;41;58;16 the only way we're going to make sure that we stay leaders is if the public and the private sector are able to work together. Sometimes there's a little hubris in Silicon Valley. Sometimes there's a little hubris in Langley and in Fort Meade and in Arlington. And so we have to realize that we've got to work together. And I don't believe that breaking up some of the great American companies is
00;42;25;01 the way we're going to be able to out-compete China. There's ultimately going to be a middle way where we can pursue this, and these tensions and these debates are healthy for our democracy. So I think we're going to have some version of this conversation for quite some time, and I think that's good for everybody.
00;42;50;04 [Jason Grumet]: So, a follow-up question, Robin, which may be asking you to respond to this question of public-private collaboration, noting that the pay scales in the federal government do not compare very favorably with many in the private sector, and that while public service provides great psychological income, it doesn't necessarily attract the best and brightest to government. So when dealing with this level of substantive expertise and complexity, how can we make sure the federal government is prepared
00;43;21;12 in these conversations?
00;43;23;17 [Rep. Robin Kelly ]: This is something that we definitely talked about a lot, because we can't compete with the private sector. But we talked about things like the idea Will had. I can't remember what he called it, but it was sort of like an AmeriCorps, where we borrow your folks, the private-sector folks, for a couple of years, and they do service with the federal government. That would be a fantastic public-private partnership, because we'll never be able to pay unless we make exceptions. But
00;43;54;24 then, you know, do we make exceptions for the doctors who work for us? We could go on and on. So I'm hoping that maybe that's a public-private partnership we can work out: you loan us some of your best and brightest for a couple of years, and people rotate in and out of government. But I don't think we'll ever be able to pay you what you're used to getting paid and what you deserve.
00;44;21;17 [Rep. Will Hurd ]: You know, on that, Robin, I have a friend who runs a bunch of restaurants here in San Antonio, and he says... can y'all hear? Jason, can you hear me?
00;44;32;02 [Jason Grumet]: You're cutting out a little bit.
00;44;34;07 [Rep. Will Hurd ]: I have a friend here who runs a number of restaurants, and he always says the restaurant industry is the first grade of the workforce: we teach a bunch of young kids how to be responsible, show up to work, and do all these things. Can the federal government be kind of the first grade of high-tech workers? Do we need to require a four-year degree for some of these things? Do you have to have a PhD in data analytics for some of these positions? If we right-
00;45;05;05 size some of these positions and define the right kind of skill set, private industry will believe that when you go work at DHS for three or four years and come out, you're going to be even better. And then, as Robin said, I was calling it the cyber national guard: you do some time in the government, then you go out into the private sector, and then you can come back, or you give six weeks a year to the federal government, because you know how it operates.
00;45;34;04 That means we're going to have to figure out how to sort out security clearances in a quick way. But there is a way to make sure that we have this cross-pollination between the public and the private sector, and it's the only way we're going to get this done. I'd add academia as well. We have a number of recommendations around this topic, on making it easier for folks to come do stints on behalf of the
00;46;02;12 federal government, kind of like fellowship programs. So this is something where we have some really strong recommendations in our white paper.
00;46;11;00 [Rep. Robin Kelly ]: Also, the other thing is the Congressional Black Caucus. We have fellowships and internships, and we've had fellows who are architects and from all different types of disciplines, people who are not going to work with the federal government, but they felt like it was still a great experience. They learned a lot, and they could give back in their own way. So I would love to see us be able to accomplish something like
00;46;40;10 that on a larger scale.
00;46;43;27 [Jason Grumet]: Two more questions, and then I'm going to turn it over to our private-sector experts, many of whom I heard barely made it through high school, so I think that speaks to your point about technological capacity. The next question is that there's a concern about integrating AI into the workforce: that it means a loss of human capital and a loss of jobs, and that the ability to make things more efficient could actually be shrinking the workforce at an incredibly awkward time for our country. So how do you respond to the robots taking over? You
00;47;15;22 know, what is that anxiety about, and how should we be addressing it?
00;47;20;06 [Rep. Will Hurd ]: Artificial intelligence is a tool for humans to use, plain and simple. There's always got to be a human in the process. One of the scenarios I always use is NGA, the National Geospatial-Intelligence Agency. The former director said that to be able to analyze all the moving video they have, you would need to hire two million analysts. They're not going to be able to hire two million analysts. That's where a tool like artificial intelligence is going to be able
00;47;51;23 to help. Or the category of driver: apparently there are more drivers in the world than in any other profession. We talk about autonomous vehicles, but those autonomous vehicles are unlikely to be able to fix themselves 100% of the way. Getting from point A to point B, something's going to have to get done. That role may evolve and change, but there is going to be the need for a human there. And that means that in those industries where there is going
00;48;22;18 to be an impact, we have to make sure that people are ready for that change in position. There's going to be disruption; it's going to be painful. But we can take advantage of technology before it takes advantage of us, and we can help make sure that these folks are ready for potentially higher-paying jobs and use it as a tool.
00;48;44;14 [Jason Grumet]: Last question, and you may both want to comment, but I'll start with you, Robin. I'm just summarizing, but it basically says: hey, this all sounds great, but how do we know that when it comes to mortgages and bail and criminal prosecution, AI is not in fact going to be reinforcing racism? What can we do that creates more opportunity?
00;49;09;19 [Rep. Robin Kelly ]: Well, we have to do our research around bias and fairness, and we have to use the same language so we're all clear on what we're talking about. And like I think we said before, we have to make sure that we hire a diverse staff; there have to be all kinds of people at the table, not just one type of person or another. I think that would help tremendously. There are going to be some bumps along the way, but we have to do our
00;49;39;11 research, we have to speak the same language, we have to be transparent, and we have to have diverse people at the table in putting this together and implementing it.
00;49;52;05 [Jason Grumet]: And Will, you started out with, we have an existing framework. Build on that a little bit.
00;49;57;13 [Rep. Will Hurd ]: Well, I'd like to add that this national strategy was developed, with BPC's help, by the first pair of African American legislators to run a committee together. So we're trying to be examples in that way. When humans were doing the bias, what did you need? You needed regulatory agencies, and you needed Congress to oversee those regulatory
00;50;29;27 agencies. That same principle applies here. And that's why some of the points we make in the white papers and in the recommendations are that you need HUD, Housing and Urban Development, to understand artificial intelligence as a tool, so that they know and can be aware if that tool is being used to discriminate against people.
00;50;50;02 [Rep. Will Hurd ]: And then the oversight committees that focus on HUD have to be able to understand this technology so they can ensure that HUD is doing its regulatory job. So that's why we have to improve not just our workforce, but everybody who works within the federal government, in the executive branch but also in the legislative branch, because that's what we're going to need to make sure these things get done. And that's also why one of the areas of research that the federal government should double down on
00;51;22;05 is bias, and how you prevent it. There is no magic wand. I know in the previous segment somebody asked that question: if I had a magic wand. It doesn't exist; I haven't found it. But this is the process, and it starts with having a national strategy. It starts with having great organizations like the Bipartisan Policy Center helping to bring folks from
00;51;49;18 all different walks of life, all different perspectives, to advocate for these positions. And it takes a lot of folks to implement these things. So we're off to a good start. I'm excited, and it's been a pleasure to do this with the woman, the myth, the legend, my good friend Robin Kelly, whom I have just loved working with during my time in Congress. And again, Jason, your team at BPC has been fantastic, and a shout-out
00;52;20;29 to everybody who participated in our convenings. We learned so much about this, and if y'all are on, y'all are awesome. Thanks for making this a reality. Now it's time to try to get this resolution done and start getting some bills passed.
00;52;35;19 [Rep. Robin Kelly ]: He loves me so much, he's leaving me. I just had to throw that in there.
00;52;40;02 [Rep. Will Hurd ]: I love you too much. I can't handle it; it makes my heart hurt.
00;52;45;25 [Rep. Robin Kelly ]: You can come back and train the members, you see.
00;52;49;25 [Jason Grumet]: I think you've certainly made clear to everybody why we have been so pleased to work with you, and I think we're all just fortunate to have this kind of bipartisan, substantive, upbeat, optimistic leadership on an issue that is so important and challenging. So we will obviously be looking forward to the September resolution, and hopefully we'll be able to come back together after that and have another discussion with you. So thank you both. It is now my pleasure to turn to one of the other anchors in this process to moderate
00;53;21;12 the second panel: Chandler Morse, who's the head of Workday's DC office and the senior director of U.S. policy. He's also a former chief of staff to Senator Jeff Flake. Chandler, I really appreciate everything you've done over the last several months, and thank you for joining us today. And I compliment your COVID mustache; that's a nice new touch. So thank you for bringing that forward, but I turn it over to you to lead the next panel.
00;53;49;01 [Chandler C. Morse]: There we go. Can you hear me? Excellent. Thanks, Jason. I appreciate the chance to moderate this panel of esteemed experts. I don't know about their high school credentials, as you mentioned earlier, but I do know that they've been very focused on a national AI strategy for Congress. Workday was pleased to participate in the BPC's AI initiative.
00;54;20;13 And let me just take a moment to congratulate Congresswoman Kelly and Congressman Hurd. As a former staffer, I'm going to give a shout-out to their current and former staff, Matt McMurry, Connor Pfeiffer, and Shelia, and obviously the BPC team, Michelle, John, and
00;54;38;10 [Chandler C. Morse]: Nina, and our experts here today. The AI initiative was a terrific collaborative process that resulted in a substantive series of white papers and recommendations that the experts we have today helped craft, and it really is going to move the needle on a congressional path forward for AI policy. Speaking of our experts, let me walk quickly through introductions. We have Helen Toner, director of strategy for Georgetown's Center for Security and Emerging Technology. We have Dr. Nicole Turner Lee, the newly minted,
00;55;09;13 congratulations, director of the Center for Technology Innovation at the Brookings Institution, and Martijn Rasser, a senior fellow in the Technology and National Security Program at the Center for a New American Security. And I appreciate Eric Schmidt letting me know that I can pronounce that as CNAS. Just a reminder to viewers to submit your questions using the live chat function on YouTube and Facebook, or on Twitter using #BPCLive, and we'll get to those in a bit. I'm going to try to cover the waterfront of
00;55;39;04 the issues that the AI strategy and the white papers touched on: national security, R&D, ethics, and maybe a little bit of workforce if we can get there. So I have questions for everyone, but please, everyone, chime in if you've got feedback. Let's get started with Helen. CSET is focused on AI and national security, and you helped draft the white paper addressing those issues. What were the major topics that you highlighted in that white paper, and was there synergy
00;56;06;00 between this process and what CSET has been currently undertaking?
00;56;10;17 [Helen Toner]: Yeah, thanks, Chandler, and thanks for the introduction. It's great to be a part of this event today, and it's been really fantastic working with the representatives and BPC and all the staff on this process the whole way through. So yes, there is definitely a huge synergy with the national security white paper, and in fact some of the other white papers as well. It's really hard to fit everything into just 14 pages of body text in the national security white paper, so I'll try to hit the highlights without taking up the whole time we have
00;56;41;19 allotted for this panel. I think three big themes that we tried to bring out in the national security paper were, first, trust and trustworthiness when it comes to the use of AI in national security settings. Trust means, as has come up a few times: if we're going to be using AI systems, not only automated systems but also AI-assisted and human-AI teaming systems, in national security contexts, do the operators of those systems understand how they work? Do
00;57;12;20 they understand when to trust them and when not to trust them, and how those systems are coming to their recommendations or their decisions? So that's in some ways a piece for the Pentagon, for DOD and the services, and their training of their operators, but it's also very much a question for the systems involved. Machine learning and deep learning, the most advanced AI techniques that we're seeing used today, are known for not being very
00;57;39;11 robust. They're not very explainable or interpretable in terms of why they come to the decisions that they come to.
00;57;45;09 [Helen Toner]: And they have a number of other issues that make them hard to use reliably in high-stakes settings. So there's one piece here, in terms of trust and trustworthiness, that is about DOD process and implementing the ethics principles they've written up for AI. And there's another piece that's about R&D, and developing AI systems that are worthy of that trust, that really work reliably, and that are easy to understand as well. The second big theme of the paper was cooperation and competition. Obviously we're in a long-term strategic
00;58;15;18 competition with China, and we have plenty to think about with Russia as well. So how do we best understand that competition? How can we understand where we are relative to those other countries? It's difficult to find and measure good metrics for where different countries are in terms of AI capabilities. We've done some work at CSET looking at various metrics: you can look at funding amounts, you can look at talent, you can look at research papers, and so on. So one piece is
00;58;44;27 understanding what that competition really looks like. And another piece is, on the international stage, looking at where we have opportunities for collaboration and cooperation, and in fact where we have imperatives for collaboration and cooperation. One piece there is certainly working with our allies, which is going to be a huge factor if the U.S. is going to succeed in competing with China. But another piece is also cooperating pragmatically and selectively with those potential competitors as well. Because if AI is going to be
00;59;13;29 changing how so many things work in the national security space, then we need to make sure that we're prepared for those changes.
00;59;21;11 [Helen Toner]: And then a third theme, and then I'll wrap up: a third theme in this national security paper was around investment controls and export controls, and the need for those to be really carefully targeted and really strategically enforced. It's one thing to just create bans or prevent transactions. It's another thing to really think about how we are trying to shape this space. What controls are available to the federal government? What controls can the U.S. use effectively? What controls do we need allied involvement for? And how do we implement those in a thoughtful and thorough way? So again,
00;59;53;23 it's hard to sum up all of that work briefly, but I'll leave it there for now.
00;59;59;19 [Chandler C. Morse]: No, that's a great job. And I know there's a lot in that paper. But turning to Martijn: CNAS helped draft the white paper on R&D. That paper concluded by noting that the U.S. had long been a leader in R&D, but that prominence was sort of slowly being eroded. What are the potential impacts of that erosion trend, and how did you all recommend that we remedy it?
01;00;21;23 [Martijn Rasser]: Thank you, Chandler. That's a really important question, and Eric Schmidt already talked about this a little bit in his remarks. So the strategic competition the United States is in, at its core, is all about technology, right? Technological leadership translates directly into economic and military power, and it also offers the means to shape international norms. In other words, American competitiveness is rooted in its technological prowess. That's why this long-term trend of America's technological leadership eroding is
01;00;54;21 concerning, but it's also why the effort led by Congressman Hurd and Congresswoman Kelly is so important. So ultimately we need a national strategy, not just for AI but for technology generally, to provide the framework for how we as a nation can best position ourselves to be competitive and to make sure our technology future is a beneficial one. Once we formulate that strategy, we can make smarter decisions on
01;01;20;22 investments in human capital, next-generation technologies, and, of course, safeguarding and enhancing our innovation base.
01;01;28;10 [Martijn Rasser]: And so we make a few very specific recommendations. One that I'd like to highlight is the need to spend more on R&D. That's just a given, not just for AI but for federal spending on R&D across the board. Specifically for AI, we call for $25 billion to be spent annually by the year 2025. Now, that seems like a big number, but if you put it in the broader perspective, it only comes up to being about 19% of
01;02;00;19 total R&D spending in FY19. So that's a very realistic and doable amount, and it's also an amount that's very important, because AI is going to be such a fundamental, enabling technology in the 21st century that we cannot afford to shortchange ourselves there. More broadly, though, as Mr. Schmidt referred to earlier, federal spending on R&D is vitally important, particularly because so much of
01;02;30;00 federal R&D spending is in basic research, and that's where the true breakthroughs come from. So just because we're advocating for a lot of spending on AI R&D does not mean we should neglect other science and technology areas. Thank you.
01;02;45;12 [Chandler C. Morse]: That's great. That doubling of R&D spending is a pretty easy punchline or tagline to remember. Nicol, you were an active participant in the BPC-hosted discussion about AI and ethics, and I was there, and that was an active discussion. As you and I have discussed in the past, these ethics issues are the issues affecting AI that usually have the broadest range of opinions associated with them. So out of that discussion, what
01;03;13;12 were the main points that emerged? And did any of them surprise you?
01;03;19;07 [Nicol Turner Lee]: Yeah, that was a really robust discussion that we had. And I also was one of the first people that moderated a session between Congresswoman Kelly and Congressman Hurd around this, which made it even more interesting, right? Because it was the first time that we actually delved into bias. I mean, let me put it like this. For people who don't know what I do at Brookings in this new role: I lead our Center for Technology Innovation, where we're particularly focused on, you know, regulatory and legislative issues like this. And in addition to that, I lead our project on artificial
01;03;49;20 intelligence in the area of bias. So out of the three pillars that we work on at Brookings (national security, governance, and bias), you know, I'm totally immersed in this space. And what I could appreciate about the BPC process, and the conversations that we had, is that this is a really complicated conversation, right? When it comes to defining ethics, as a sociologist and not a scientist, a lot of what comes out of this is: whose principles are we basing it on, right? Who is actually the guidepost for how we determine what is
01;04;19;06 fair, what is ethical, what is just? And at the end of the day, I think all of us had common agreement that it's important to develop AI frameworks that lead with ethical principles, as well as some type of fairness model that ensures that discrimination or harm to the consumers the AI is touching will be minimized. And I appreciate the work of BPC, Jason, John, Michele, and others, who really wanted to figure out what role Congress plays. Because I think when you start to look at
01;04;46;27 issues of discrimination generally, it becomes very complicated. And when you overlay that with technology, the question becomes: what part of the black box are we interested in?
01;04;56;10 [Nicol Turner Lee]: Are we interested in the inputs? Are we interested in the outputs? And I think this is where our conversation primarily found itself. What I think was most surprising, and I can say this with a lot of confidence, is that this report lands on this issue of multicultural sensitivities. I want to say that we had the best Congresspeople in the entire world, the best representatives, actually dealing with this issue, and they were people of color, as was mentioned in the remarks. And that, I think, brought us to a different place around looking at diversity and inclusivity and the recommendations that are
01;05;27;30 actually proposed in the report. But it also, I think, heightens the reason why we need to have these conversations. When we look at the state of discriminatory affairs that impacts people of color, people who are in rural communities, people who are stricken by a host of other issues, their age. I mean, just look at what the pandemic has done: take all of the people that have been discriminated against during the pandemic, and then try to apply that model to how we actually discern, you know, fairness and ethics when we start looking at AI systems. What came out of this process is that
01;05;57;27 we need to first start in acknowledging that bias exists. And, Chandler, that's not always the case, because we come out of a culture where it's all about building things quickly, moving fast, and then coming back and saying, I'm sorry. And what this process has basically revealed is that having guardrails or some understanding up front actually helps us navigate toward creating better-performing systems, and systems that are inclusive, fair, and diverse, from both the representation side of who's sitting at the table
01;06;26;24 to the type of performance that we're looking at, and then our ability to go back and evaluate when the systems act in ways that are unlawful, unfair, and unethical. So I appreciated a lot of that. And I also appreciated the fact that, when you're looking at AI systems (and again, I'll close here), I'm a sociologist, I am not a scientist.
01;06;45;11 [Nicol Turner Lee]: I deal in systems and structural discrimination and the pressures that contribute to a variety of systemic inequalities. But the ability to talk to a host of policymakers, in addition to companies that have a vested interest in getting it right, really begins to help us think through how we can do this with an all-hands-on-deck approach. And in terms of the surprise, right, that was interesting: that we could actually get that far on issues that have potential pain points when it comes to, you know, very sensitive topics that
01;07;16;10 can pretty much determine whether or not a person has a successful trajectory in navigating toward, you know, a heightened quality of life, or whether someone's experiences are minimized due to the fact that an AI system delivered a wrong prediction that results in the rejection of a loan, housing, credit, or other things that actually contribute to furthering inequality.
01;07;39;04 [Chandler C. Morse]: Yeah, I think that raising the multidisciplinary nature of how we're going to attack these issues is certainly a take-home that I've had as well in all of these discussions, so that's great to raise that. Helen, I want to come back to you. You talked about how global cooperation is really important in the national security context, and Eric Schmidt mentioned this too: whether we're partnering with our allies or sort of keeping an eye on our competitors, either
01;08;08;16 way, can you just highlight a little bit more why that global cooperation is so important in the national security context? And with the EU looking at a new AI regulatory regime, is it also important in the non-security context as well?
01;08;26;04 [Helen Toner]: Yeah, it absolutely is. So in terms of thinking about why global cooperation is so important, I think it makes sense to break it down into two pieces. One is the allies piece. If we're in a long-term competition with a country the size of China, it's just common sense that the United States is going to be in a much stronger position if we can make use of this really unique asset that is our alliance and partnership structure; really no other country around the globe, certainly not China, has a comparable structure. One talking point that I'm going to steal from a colleague
01;08;56;18 of mine, Melissa Flagg, to illustrate the point is to look at R&D spending over time. If you look at 1960, the United States accounted for almost 70% of the total R&D spending around the world. That relative proportion has shrunk over time, and in, I think, 2018, it was below 30%. So the U.S. used to be by far the largest R&D spender in the world and is now, you know, still very large, but
01;09;24;13 well below half. And obviously some of that change has been due to an increase in spending by China, but it's also been due to an increase in spending by a wide range of other countries as well. And what's really interesting about that figure is that if you look at the proportion of the United States plus a relatively small number of close allies, it bumps back up above the 50% number.
01;09;44;20 [Helen Toner]: So basically, the take-home lesson there is that the United States by itself is in a very different position versus the United States working closely together with allies. So that's the allies piece. I do think this piece about collaborating and cooperating with adversaries and competitors is really important as well. And here it's really in the interest of all sides to make sure that, as the strategic environment is changing, we have a common picture of how it's changing and what that means. And so, you know, obviously the go-to example here is nuclear weapons and the way that they changed the overall strategic
01;10;18;03 picture dramatically in the 1950s and sixties. And you have things like the Cuban missile crisis; I mean, the escalation dynamics you saw there, which were brand new for all the leaders involved, prompted new types of cooperation, even if very limited. And that was even though the situation between the United States and the Soviet Union at that time was much more hostile than the situation we have between the United States and China now. So, you
01;10;43;08 know, in the nuclear case, you saw things like the red phones being installed to enable better communication between leaders. You also saw things like permissive action links, which are technology that you put onto a nuclear weapon essentially to prevent accidental use. And the United States shared that technology with the Soviets, because it was in everyone's interest to have that technology on all sides. So I think it's really important, as AI shows the potential to shift the strategic picture in similar ways, that we're having those conversations and making sure that we have the ability to build a
01;11;14;11 shared picture on those really important topics.
01;11;18;13 [Chandler C. Morse]: That's great. So, Martijn, I sort of drove into your swim lane when I went into Helen's, and now I'm going to drive Helen's swim lane into yours. We've talked about R&D, and I want to pick on this a little bit more. And I did a deep dive into your background, so I have a feeling I know where this answer is going to lie. But when you think about R&D, is it just a U.S. competitiveness issue? Is it a national security issue? Is it both? Like, how do you sort of think about
01;11;47;13 both of those?
01;11;48;23 [Martijn Rasser]: Yeah, it's absolutely both. Ultimately, as I mentioned before, so much of U.S. competitiveness is tied up in our ability to, you know, develop technological prowess on a range of levels, and that's very much at the root of our national security posture now. It also determines how we are able to interoperate with our allies, how we engage with our allies, and how we stand up to
01;12;20;22 our adversaries. And so we made a few specific recommendations on these points. I think one, which is a very important one, is what the National Security Commission on AI put forward in terms of working better with our allies on these issues. So they recommend first starting off with the Five Eyes partners, then working with NATO to really evaluate, you know, what our collective strengths are in artificial intelligence, and
01;12;48;28 then come up with a game plan to ensure that our militaries are interoperable, because that ultimately goes to the core of the Alliance.
01;12;58;12 [Martijn Rasser]: That's what it's all about. If the individual NATO members can't effectively fight together, then the Alliance as a whole kind of falls apart. But more broadly, of course, there's a lot of innovation in the civilian sector, which is critical. Because if you think about how much shared interest and how many common goals the United States and its allies, as well as some of our competitor countries, have, it's in areas like AI safety, robustness, resilience, and transparency. So there's a lot that we can do together. But specifically
01;13;31;07 for America's allies, we proposed things like multinational innovation prize competitions. This would be similar to the XPRIZEs, which have been very successful in tackling some very difficult technological challenges. And I think ideas like this would help us to overcome some of the hurdles that we've been talking about today. Somebody earlier mentioned the need to be able to develop algorithms
01;14;00;08 that aren't as data-hungry or as energy-intensive. So there are some very important, fundamental breakthroughs that we can make that really can shift the game in artificial intelligence overall. Another area that we talked about was, you know, having the National Science Foundation work with its foreign counterparts in order to effect personnel exchanges. Because, again, we have to remember that there are a lot of AI centers of excellence around the world, and
01;14;28;11 fortunately, a lot of them are based in allied countries, and we want to be able to benefit from that expertise and exchange ideas, because so much of the scientific process is rooted in that. So ultimately, I'm very heartened that there is so much emphasis on
01;14;45;18 [Martijn Rasser]: multinational cooperation and collaboration on these issues. I think it's a fundamentally important tenet of what America's approach to artificial intelligence should be. And yeah, I really commend BPC and the two members of Congress for really focusing on that point.
01;15;03;19 [Chandler C. Morse]: Nicol, I'm going to go back to you. Let's talk a little bit more about AI trustworthiness. Trust, you know, is a critical component of realizing the potential benefits of AI, but there are these persistent concerns around harmful bias and discriminatory outcomes, and I think those concerns have grown with the growing attention to social and racial injustice. You and I have talked about these issues in the past, so I know you've got some ideas. It's a bit open-ended, but are there specific steps that you think both the government and the private
01;15;31;29 sector could take?
01;15;33;14 [Nicol Turner Lee]: Yeah. And I'm going to try and keep this like a pastor and keep it to four points on each side, 'cause, you know, I know we're short on time. I think in terms of bias and discrimination, clearly what we discussed in our working group was really trying to think through: what's the framework, right? What are we mitigating against? What are we identifying when it comes to the bias, and how do we actually create the right type of mitigation strategies, whether it's risk assessments or these interdisciplinary conversations? I would also suggest, and this is what I love about the conversations that we had, that it is very important to make sure that
01;16;04;09 algorithms and AI systems are lawful. There are laws on the books that have been litigated that define fair housing, fair credit, you name it. You know, there have been people like the Honorable John Lewis, may he rest in peace, who actually marched for those things to happen.
01;16;24;12 [Nicol Turner Lee]: And so we do need to make sure that these digital systems, which have sort of redefined accommodations and redefined the type of equity that has been fought for, are still lawful. And in the assessment of fairness, we cannot make the tradeoff that just because a judge can use a criminal justice algorithm to get through caseloads better, and may incarcerate or detain one less person than he would in a face-to-face interaction, that makes it acceptable. It's still bad, because there's over-criminalization that
01;16;55;06 happens among African American men in particular in this society. So I think that was really important, and I really commend BPC for coming forward with that, because that's near and dear to my heart. I think what we also have to understand on the government side is: what use cases do we particularly care about? During this pandemic, I've watched a lot of movies, a lot of movies, and I've gotten served a lot of recommendations of additional movies to watch. It doesn't mean that I want the government to come in and legislate which movies I should watch and which movies I shouldn't. But when it comes to my credit, when it
01;17;22;22 comes to my employment, when it comes to my healthcare, when it comes to my use of other services, there will be use cases that might actually need the guidance of federal regulation to ensure, again, that they're lawful and ethical, and that the tradeoffs are not dismissive of my protected traits or my ability to participate equally in this economy. I think that's really important when we start thinking about things like algorithmic disclosures. I'm sorry to tell a lot of people, but Credit
01;17;50;03 Karma is not going to get you a house, right?
01;17;52;07 [Nicol Turner Lee]: Equifax and TransUnion will, because they've been sort of vetted against adverse factors, but most people don't understand that in the algorithmic economy. So I think a lot of what we talked about in our working groups is how you make that message known in ways that are, you know, pretty clear and transparent around eligibility. I think on the government side, the investment in anti-bias experimentation and research: what we're seeing right now is absolutely fantastic when it comes to the standards around facial recognition technology and other technologies that have the potential for bias. We need more of that. And I
01;18;22;11 think Martijn kind of talked about this: even at the NSF, we should be allocating dollars, appropriating dollars, towards more anti-bias research that will allow us to multiply the inclusivity of training data sets. I'm sorry, there are facial recognition systems that cannot identify me because of the hue of my complexion or when I change my hair. And since I'm going to continue to change my hair and continue to be Black, we're going to need disclosures to tell me that those systems have limitations, or the investment in research that finds ways to include
01;18;53;12 me. I think that's particularly important. And then, where are the areas on the government side to allow permissions to co-evolve and co-innovate? It's really particularly important when we start thinking about these use cases and where we actually might need more representation, or better representation, to do a better job. And honestly, I think the fourth thing that government could do, which we're seeing in the paper, is figure out ways to create more streams of inclusivity within these tech companies. In the broadcast communication space, there is pretty much
01;19;23;17 an enforceable rule if you don't have broadcast diversity. We don't impose that on tech companies.
01;19;28;01 [Nicol Turner Lee]: We just want the tech companies to do the right thing. Most recently, I started talking about an equity dashboard. That might actually be a way to put out there, in a voluntary manner, just how well you're doing. As others have said, it's good business when tech companies are diverse, and it's good business when AI is diverse; Chandler, you know that based on the work that you guys do at Workday. I would just say quickly, on the industry side: best practices, best practices, best practices. When it comes to calling out bias, it's important that we know what the technical best practices are, what the hiring and workforce best
01;19;59;27 practices are, and where the areas are where, together, we can learn from those things. I've talked to a lot of engineers who talk about blind algorithms but then use a loaded proxy when it comes to actually making determinations. We need to figure out what that looks like. And I think the other thing that we need to do on the industry side is to acknowledge that bias exists and develop a toolkit, whether it's impact assessments or bias statements or whatever the case may be, that
01;20;27;11 companies can go in and exercise those tools, to be able to eliminate and reduce the type of discrimination and harm that we don't want to see. And so, you know, one last thing: in the absence of federal privacy legislation, all of this becomes really difficult. And so I think that there should be, on the industry side, just more support to come up with a collaborative and cooperative strategy around privacy. I love John at BPC and Michele and what they're doing.
01;20;54;09 [Nicol Turner Lee]: And John and I talked about this: it then leads us, I think, to a pragmatic stage of maybe coming up with what's next in terms of that Good Housekeeping seal that ensures that we're actively weeding out bias and creating more equitable and inclusive AI systems. So that's the work that I do. But, again, I love the fact that BPC was talking about it among a collaborative group of stakeholders, so that we can move this needle along and really get to a place where we can develop and innovate, you know, at a central place that is not
01;21;23;28 a source of discrimination.
01;21;27;18 [Chandler C. Morse]: That was great; there's a lot there. You bet.
01;21;29;19 [Nicol Turner Lee]: I know, I know, I know that you gave me, you know,
01;21;33;25 [Chandler C. Morse]: I do. I do. We do have an audience question, so we'll get to that in a second, but I want to piggyback on that and just offer this to Helen and Martijn, because I think it builds on, you know, best practices, best practices. The national AI strategy that BPC has laid out has called for a voluntary risk management framework to help foster AI fairness, and I think that could lead to that. It's consistent with the National Security Commission on AI's recommendation to create a framework for the ethical and responsible use of AI, and with language that was tucked into the House-passed
01;22;05;29 NDAA that calls for NIST to stand up a framework. If we are going to move forward with the framework-building process, what should be a part of it? I think some of that answer was in Nicol's answer, but what should be a part of that framework to make it successful?
01;22;19;29 [Helen Toner]: Yeah, I can jump in on this. I mean, I think Nicol laid out an enormous amount of the critical components here. One thing that I would just add is that these kinds of frameworks have to be tied into the technical realities of the technology: what we can do and what we can't do, what directions the research is moving in that we might be able to do in the future, and what directions are really more fundamentally impossible. So I think it's really important for these frameworks to be built not solely from the perspective of what is desirable and what we would have in an ideal world, but also what we can actually practically achieve, and also where we need to invest more in R&D
01;22;54;02 to be able to achieve better in the future.
01;22;58;28 [Chandler C. Morse]: Martijn, did you want to add to that?
01;23;01;15 [Martijn Rasser]: Sure. I'll just say, well, Nicol and Helen covered it extremely well. My two cents would be, you know, be creative in who you reach out to. When you talk about ethics, talk to historians, anthropologists, philosophers. There's a tendency to just focus on a small subset of disciplines that could provide valuable insight, and I think, you know, just a very broad view of who would have important insight into these matters would be
01;23;31;05 interesting to explore.
01;23;34;05 [Chandler C. Morse]: Nicol is our resident sociologist, our human sciences scholar. Is there anything you want to add to that answer?
01;23;43;01 [Nicol Turner Lee]: No, no. I actually think my colleagues tied it together, and I totally want to say amen, Helen, because I think without the technical realities being embedded in real-time conversations, it's really hard for us to come to a central place to have a conversation around this, because right now we're sort of talking over each other. And that's why, as a sociologist who is really focused on systems and structures, it's so neat to have conversations with folks who are scientists, because we can come together and, I think, create an
01;24;12;15 interdisciplinary approach again, in those use cases that really matter.
01;24;17;13 [Chandler C. Morse]: So we've all talked about spending more on R&D research, I think, no matter what. And the question we got from Michael Checkim was: where does this money come from? Because we happen to be in the middle of a COVID economic downturn. So I'll just pitch that into your lap and kind of walk away, if you have ideas.
01;24;42;20 [Nicol Turner Lee]: Well, I would say this: some of the work that I'm doing at Brookings has a lot to do with this AI label or Good Housekeeping seal, right? I think it's important that the money actually come, in many respects, from the stakeholders who are creating and developing these products and services. At the end of the day, we are no longer in a brick-and-mortar society. COVID-19 has basically demonstrated just how important it is to be digitally connected. And so, as a result of that, the stakes are now, you know, much higher for companies trying to win the trust of consumers. And we're going to come out
01;25;13;23 of this, really, with winners around the data currency that has been expanded: those that are good stewards of our information, that have high-performing algorithms as well as the ability to address bias or remedy it quickly, or have feedback loops that come back to consumers to check and verify who they are. They're going to be the winners, and they're going to make a lot of money.
01;25;35;29 [Nicol Turner Lee]: And that money needs to be, I think, invested back into the R&D of companies, to continue not to break things and say sorry later, but to be much more innovative and progressive going forward. I don't personally think that the federal government can take this on by itself. Where the federal government needs to spend money is in trying to think through how we enforce any of these violations that come with bias and discrimination. But I do think there'll be enough change out there for companies who want to continue to win the trust of consumers, to invest in themselves and to invest in these structures that make sense in this new
01;26;07;20 economy.
01;26;10;12 [Chandler C. Morse]: Helen or Martijn, anything to add? Final words?
01;26;13;04 [Martijn Rasser]: Oh, sure. Yeah, I would just say, well, it's an investment in our future, ultimately, right? If you look at what our economy 20, 30 years from now will be based on, it's the investments we make in R&D today. And yes, there will be some tradeoffs; that money has to come from somewhere. But look at historically how R&D investments translate into economic growth: our entire economy right now is built on technologies that we funded in the sixties and seventies, you know, with the internet, GPS, the transistor. That's what our entire economy is
01;26;47;25 right now. And so we can't keep coasting on those investments that we made decades ago. We have to look to the future with this.
01;26;57;06 [Chandler C. Morse]: And on that, I think I'm going to... Helen, unless you have a short, short addition? Great. Okay. Look, it's been wonderful. Thank you for a great discussion. It's been such a pleasure to work with all of you. And with that, Jason, I'll turn it right back over to you.
01;27;08;26 [Jason Grumet]: Look, all I have to say is you guys are fantastic, and that was, I think, really one of the best discussions we've had a chance to host. We're really delighted to have had the chance to do this work together. And I guess now it's time to see what the members of Congress do, and then figure out if there's, you know, an Act II. But I just want to thank everybody for tuning in, and I really appreciate everybody's thoughtful contributions. It's five o'clock; have a good night.