AI Policy Roundup
Event Video
On October 30, 2023, President Biden signed the most far-reaching presidential action on AI to date, Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The EO directs dozens of federal agencies to take over 100 discrete actions to implement it across eight distinct policy areas. The EO received significant attention and a broad range of responses from the regulated public and congressional policymakers. Moreover, the states have grown highly active in regulating AI. This panel will discuss the consequences of the EO on the federal executive branch, the federal legislative process, the states, and the tech industry, as well as independent federal agency AI regulatory action, with an eye toward the opportunities and challenges to come.
Featuring:
- Johnathan Smith, Vice President and Legal Director, MacArthur Justice Center
- Hon. Keith Sonderling, Commissioner, Equal Employment Opportunity Commission
- Adam Thierer, Senior Fellow, R Street Institute
- (Moderator) Prof. Aram A. Gavoor, Associate Dean for Academic Affairs and Professorial Lecturer in Law, The George Washington University Law School
*******
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.
Event Transcript
Chayila Kleist: Hello and welcome to this FedSoc Forum webinar call. Today, June 6th, 2024, we're delighted to host a panel of experts for an AI policy roundup. My name is Chayila Kleist and I'm an Associate Director of Practice Groups here at the Federalist Society. As always, please note that all expressions of opinion are those of the experts on today's program as the Federalist Society takes no position on particular legal or public policy issues. In the interest of time, we'll keep the introduction of our guests today brief, but if you'd like to know more about any of our speakers, you can access their impressive full bios at fedsoc.org. Today we are fortunate to have with us as our moderator, Professor Aram Gavoor, who currently serves as Associate Dean for Academic Affairs and as a professorial lecturer in law at the George Washington University Law School. Earlier in his career, Associate Dean Gavoor served as Senior Counsel for National Security in the Civil Division at the US Department of Justice, as third-ranked Counselor to the Administrator of the Office of Information and Regulatory Affairs in the White House Office of Management and Budget, and in private practice.
His scholarship and writings have earned placement in a wide variety of publications ranging from the Florida Law Review to the Indiana Law Journal to the Ohio State Law Journal, as well as international news media, including CNN, BBC World News, the Wall Street Journal, and NBC News. I'll leave it to him to introduce our panel. One last note and then I'll get off your screens. If you have any questions, please submit them via the Q&A feature at the bottom of your screen so they'll be accessible when we get to that portion of today's webinar. With that, thank you all for joining us today. Professor Gavoor, the floor is yours.
Aram A. Gavoor: Thank you, Chayila, and thank you to the Federalist Society for putting on this webinar. This is one for the Administrative Law and Regulatory Practice Group. So AI. Generally speaking, we're past the gold rush of last year, when it was a new technology, everyone was excited, and people had a semi-euphoric and terrified reaction to it. This year, if I could characterize it, the technology, and in particular the regulatory reaction to it at both the federal and state levels, is much more technical. There are a lot of different policies that have been laid out at all different levels of government, in the private sector, and industry to industry, and it's just a little bit tough to coalesce it all because it is so trans-substantive. So really what we're looking to do today is to focus on the big picture. What are the different options available for regulation?
What are the different actions that are being taken from different vectors and spaces? And really what is the debate, and what might be the best solutions moving forward for what I think really is a transformative technology? This is, in my view, a sort of social media-style transformation in terms of American society, our democracy, justice, and business. So the timing of this webinar is convenient in two respects. First, we're a little bit past six months after President Biden's Executive Order 14110, "Safe, Secure and Trustworthy Development and Use of Artificial Intelligence", which was a really large, bold, broad expression of policy that implicated over a hundred different discrete actions across at least 20 agencies and eight distinct policy areas. Some say that this took some of the pressure off of Congress to act, and there's a lot of activity that's taken place in the executive branch since that time.
The legislative branch is still in a very thought-developing mode. There are many working groups taking place, some hearings, but also a lot of other actions taking place behind closed doors. And then the states' laboratories of democracy are very active. So for example, Colorado was one of the first movers on comprehensive AI regulation. So our panelists here, as Chayila described, their bios are on the website, but just a very brief introduction for each of them. We have Johnathan Smith, who's the Vice President and Legal Director of the MacArthur Justice Center, an inaugural position. He has extensive leadership experience in both the legal advocacy sector and the federal and state governments. Most recently he was the Deputy Assistant Attorney General and Acting Principal Deputy Assistant Attorney General in the Civil Rights Division of the US Department of Justice. So he was really in the cockpit of a lot of the federal government policymaking for this.
Next is the Honorable Keith Sonderling, a Commissioner on the Equal Employment Opportunity Commission, a position that he was confirmed into by the Senate on a bipartisan basis in 2020. His term expires in July, so maybe we'll be asking what his next steps are. But until January 2021, he was the commission's vice chair, and before his service on the commission, he was the Acting and Deputy Administrator of the Wage and Hour Division of the US Department of Labor. Last but certainly not least, we have Adam Thierer, who is a Senior Fellow at the R Street Institute, which has cast quite a large shadow on the AI space. I view him as a real nuanced expert in the context of trustless transactions, certainly at the state level, but also at the federal level. He really is one of the big benchmarks for the technology and innovation team at R Street, and prior to that, he spent 12 years as a Senior Fellow at the Mercatus Center. So without further ado, I will ask Adam to speak first, then Commissioner Sonderling, and then Johnathan for their own thoughts, and then we'll get into a conversation and discussion.
Adam Thierer: Well, thank you, and thank you for that introduction, and I thank the Federalist Society for inviting me to be part of this discussion. So let's take a quick look at the lay of the land for AI policy circa summer 2024. Right now, as of nine o'clock this morning in the United States, we have 755 AI bills pending. 642 of those bills are state bills. This number has been growing steadily, quite rapidly in fact, over time, and importantly, that number does not include the many municipal bills that are pending or have already been enacted. In fact, in many ways, one of the most important AI bills that's already passed in the United States was not federal or state. It was a New York City regulation. And so this is the kind of overlapping jurisdictional interest in AI policy we see in the United States today. Importantly, however, just a few weeks ago on May 17th, Colorado became the first state to pass really comprehensive AI regulation, and Governor Jared Polis signed it, although with a very remarkable signing statement that read a little bit more like a veto statement, expressing a great deal of reservation about how potentially burdensome that law would be and even calling on Congress to potentially preempt it.
We can talk more about that later. But importantly, the Colorado model, which is very much focused on the potential for algorithmic risks or harms or discrimination, is very different from the model we see in other states like California, which has a major bill pending right now that would regulate large-scale, so-called frontier AI models. That's a very different model than Colorado's. And then beyond the models of Colorado versus California, you have many other targeted AI regulatory measures pending in other states involving things like elections, jobs, law enforcement uses of AI, and so on. So that's just a very brief overview of the states, but there's a lot more going on, again, almost 650 bills. At the federal level, you have to divide things into Congress versus the Biden administration. In Congress, we have 113 bills pending as of this morning covering a massive range of AI-related policy issues.
Again, some of them dabble in specific things like national security, elections, deep fakes, child safety, industrial policy concerns, and competition policy, but then there are also broad-based regulatory proposals that come in many flavors, some of which actually build on self-regulatory mechanisms that have been developed through organizations like NIST or the NTIA or other types of bodies, and that would basically convert soft law regimes into hard law regimes. We can talk more about that later as well. Importantly, in a major development just a few weeks ago, Majority Leader Chuck Schumer finally put out and published the report of the Senate AI Working Group that he got started a year ago, the so-called AI Insight Task Force. When that got started, people were concerned that it was going to be a fairly regulatory approach that the majority leader might propose, but actually, it was a fairly moderately worded report, a bipartisan one with Republicans on it like Senator Young.
And basically it ended up kicking a lot of the decision-making about AI policy right back to the committees. So initially it was thought that they were going to be preempting committees and the majority leader was taking over; it actually kicks a lot of the decision-making right back to them. Then, to wrap up, in the Biden administration we have a sort of "whole of government" approach to AI policy underway, including what I regard as a lot of efforts at indirect unilateral regulation without Congress even authorizing or acting. The Biden administration a couple of years ago issued its so-called Blueprint for an AI Bill of Rights, and followed that up with a number of different major statements from the administration and different agencies. And then there was, as you mentioned, the big 110-plus page executive order on AI with a litany of new directives to agencies, but, broadly speaking, a general encouragement of agencies to go out and investigate how they might oversee or even regulate AI using existing authority or maybe new authority. An important development that's happening at the high levels of the White House and in the Department of Commerce is a movement towards what they call algorithmic accountability in the name of AI audits or impact assessments.
They've had major reports on this. A leading official in the Biden administration has called for "a system of AI auditing by the government" and suggested that "an army of auditors is needed to ensure algorithmic accountability or responsible AI." It's not really clear what that means, but we're talking about potentially very expansive and open-ended regulation of artificial intelligence and machine learning, again without any clear statutory authority to do so, but basically through a broad reading of delegated authorities in existing statutes. So I'll just leave it at that with a brief overview, but obviously, I can fill in a lot of gaps later and we can get into a bigger discussion about any one of these matters.
Aram A. Gavoor: Thanks so much. Commissioner Sonderling?
Hon. Keith Sonderling: Well, first of all, thank you to the Federalist Society for having this and for putting together such a great panel. I'm really looking forward to our discussion when we get there, but let me just tell you about my perspective as an actual regulator, being in the executive branch, being in a unique position at a so-called independent agency, and having to deal with this. When I first started, and the EEOC first started looking at AI in the workforce, it was before the Blueprint for an AI Bill of Rights, before generative AI became the hottest issue in state capitals, in DC, and in capitals across the world. And it was because of the prolific use of AI in the workforce, specifically human resources tools, not only promising employers that using AI will make decisions faster and more efficiently, but, the big hook in my space, promising to remove bias from employment decisions, which is the reason my agency exists.
So when we first started looking at this, how do we frame this? All we could do, being in the executive branch, is look at our existing laws. Here at the EEOC, we were created by Title VII of the Civil Rights Act, which is about to celebrate its 60th birthday and which is, as you know, one of the strongest civil rights acts in the United States, if not the strongest. And we looked at what these tools are actually doing, and at something else we'll talk about: how do you regulate technology? How do you actually understand how the algorithms work? I thought that would initially be a tremendous distraction, because we don't have the resources and we don't have the tools to keep up with tech companies who are going to be able to design and use these algorithms. But actually looking at it, I said, we need to slow down here and we need to actually look at the results of these decisions, because, at the end of the day, there's only a finite number of employment decisions, and all of those employment decisions are regulated by the EEOC. Whether it's removing bias, whether it's preventing age discrimination, religious discrimination, national origin discrimination, you name it, there is going to be some kind of underlying employment decision that these tools are either going to completely make or assist with.
And I say that to sort of simplify it, because from our investigators' perspective, from our regulatory perspective, we regulate employment decisions. And until Congress gives us more authority to regulate technology or something else, that's what we're going to have to be limited to. So at the EEOC, our approach is that, as I say, our laws are old, but they're not outdated, because they're going to apply to any decision employers are going to make, whether they're using humans to make an employment decision or they're using AI to make an employment decision. So that was the basis of our initiative: we have all these laws, and how are we now going to translate how these laws apply to the new stakeholders that we have? Because before technology, the EEOC's world was really limited to employers, staffing agencies, and unions, and now we have to deal with tech vendors. Now we have to deal with people who want to invest in, create, deploy, and be subject to these products.
So a lot of our initiative has been about how we apply our longstanding existing laws to employment decisions that are now being made or assisted by technology. I just always like to set that floor, because we have very strong civil rights laws. At the end of the day, these tools are not making a new kind of employment decision; there's an existing basis on which employers are regulated. But again, that's my perspective. So in the executive branch, the Blueprint for an AI Bill of Rights that came out certainly referenced our world in making sure the systems are fair, don't have bias, and prevent discrimination. And the executive order, which Adam alluded to, did have some very specific provisions related to the labor world, and a lot of it dealt with, for instance, having the Council of Economic Advisers submit a report on the labor effects.
That's a whole different narrative in DC, one about job displacement, not necessarily about how these tools are going to be used improperly, but just about job displacement, and then directing the labor agencies to put out guidance, which the EEOC has already done. But then you saw, post the executive order, within the last six-month period, the Department of Labor's Office of Federal Contract Compliance Programs put out guidelines as well. And as we'll talk about during our discussion, you're starting to see a lot of repetition from agencies that have to deal with civil rights or consumer protections about what these AI protections actually mean. Whether it's establishing a governance program, using AI transparently and fairly, or protecting civil rights, you see a lot of repetition from the various agencies, which, without necessarily having new regulatory authority or the ability to make new laws, are repeating a lot of the same themes.
And I really think that the statement that the EEOC, the CFPB, the Department of Justice's Civil Rights Division, and the FTC put out last year also really shows that same mindset: that there are a lot of existing laws related to civil rights protections or consumer protections, and all of us as enforcement agencies have a duty to enforce existing legal authorities when they're applied to automated systems just as they do to any other practice. So whether it's housing for HUD, or financial services, or advertising for the FTC, we all have longstanding, very old laws that apply, and we're reminding everyone that we're going to keep doing that whether or not you're using these technological tools. So that's just a brief overview from my perspective of how I'm looking at this personally and what's happened since the executive order in the labor sphere.
Aram A. Gavoor: Thank you, Commissioner Sonderling. Johnathan?
Johnathan Smith: Great. Well, thank you again for the privilege of joining all of you today, and thank you to the Federalist Society for hosting this conversation. The last time I was in conversation with Aram and with Commissioner Sonderling, I was actually at the Department of Justice. I have since left, and while there are many great things about no longer being a federal employee, one of the things I miss is the work that I was doing in this space, because I do think it is incredibly important. The commissioner referenced the 60th anniversary of the Civil Rights Act of 1964. The actual signing date is just a few weeks away in early July. In many ways, as the commissioner has already said, the Civil Rights Act of 1964 was transformational in how this country has approached discrimination and holding discriminators accountable. It led to the creation of the EEOC and so many other critical functions, and it was obviously signed and passed at a time when this country was in the midst of the civil rights movement, with a lot of unrest and questions about what equality looks like.
And I give that history because I think when we think about the next 60 years of civil rights enforcement, AI seems to be at the heart of those conversations. I think that AI issues are the civil rights issues of the 21st century, simply because, again, as was already discussed, AI affects every aspect of our life, every day, from transportation to schools to employment to making reservations at a restaurant. Everything about our life is being transformed by AI. While that is incredibly powerful and will be incredibly efficient, one thing we often said when I worked at the Justice Department was that if we're not careful, AI will just bake in all the existing discrimination and biases that already exist in our society. And so there's tremendous potential in this technology, but there's also tremendous risk, and I think this is the time period, some might say the time period was a few years ago, to really think about how we ensure that this technology is doing its best and not its worst.
And I think that goes to Adam's point about the level of interest at really every level of government in thinking about how to effectively regulate these technologies. And so, yes, it is no surprise that within the executive branch, and although I left about a month ago, I'm pretty confident this is still the case, there was tremendous interest in understanding how to regulate this technology. I think there was tremendous concern for all the reasons that I just mentioned. By the same token, there is a tremendous lack of expertise in what this technology is and how it functions. In the Civil Rights Division where I worked, we were largely a division of lawyers, and we knew a lot about civil rights law, about the Civil Rights Act, about how to handle a deposition or take a case, but we knew very little institutionally about how AI works, how generative AI works, how to understand the factors that go into an algorithm or into an automated decision-making process.
And so that is its own challenge, right? Because I think it's reflective of some of the points that we'll be talking about soon. You are asking agencies and departments and offices to regulate a very, very complicated and moving target, AI is changing literally every day, multiple times a day, without having the institutional and historical expertise to do that. And what does that mean for regulation? The commissioner, I'll say this in closing, mentioned the joint statement that was signed by a number of agencies: the Justice Department, the EEOC, the CFPB, and the FTC. And actually, last year, a number of other agencies also joined onto that statement, including the Departments of Education, Health and Human Services, Housing and Urban Development, and Labor. I think there's a general recognition across all those agencies that we can, as the commissioner said, use our existing authorities to address these problems where they arise in AI. And that's incredibly important, and I'm proud to have played a role in the drafting of that statement. But I also recognize that there are really hard, unanswered questions about what civil rights enforcement looks like under AI. What does disparate treatment or disparate impact look like under these technologies? How do you prove liability? How do you create remedies that are effective at addressing the harms? These are new frontiers for these agencies, even if the laws are the same. And so there are, I think, tremendous opportunities for the federal government, but also tremendous challenges, which I think the executive order and some of the action both before and after the executive order made clear from the executive branch side of things.
Adam Thierer: Professor, you're muted.
Aram A. Gavoor: Apologies for that. So thank you, Johnathan, Commissioner Sonderling, and Adam for your remarks. I've been taking notes while you've been speaking, and also we all know each other. So, a couple of points with the time that we have to discuss. I'm going to lay out a number of them and you can just react however you wish, and we'll let the conversation go from there. So first, one of the big problems with regard to AI and regulation generally is that there's just a massive technical deficit. I think it has narrowed over the past year, but still, the people who are making the decisions, who are vested with the decisional authority at the federal government agencies, independent agencies, Congress, states, et cetera, have a woefully deficient knowledge of how the technology works and how it can be implemented and utilized. That's one point that's sort of a prerequisite to good decision-making.
Second, at least for businesses, one question that I want to ask open-endedly is, what's the right game plan? Proactivity? Reactivity? Developing a best practice that can be adopted? Taking safe steps and just letting others lead and having a little less exposure? And then with regard to the policymaking itself, there are different features that can be deployed. Temporality: are we going too slowly, too quickly, or at just the right pace? Is the best approach self-regulation or letting the government lead the way? What's the role of federalism? And then also, should we be thinking about substantive policy, sort of injecting politics, or at least applying politics and substantive goals to the general technology? And then the last piece is, what does all of this have to do with our national competitiveness? Because our strategic counterparts, allies abroad, and adversaries are engaged in this space as well. So with all of that, I'll ask Adam to speak and then Johnathan, and then I'll let Commissioner Sonderling close out, and then it's just a free-for-all after that.
Adam Thierer: Well, these are wonderful questions. There are five or six different things you've asked there. I'll try to answer a couple of them. First and foremost, on the knowledge deficit problem: that is very real. I think it counsels humility and patience and more of a wait-and-see approach to AI policy. There's a bit of a rush to judgment by some lawmakers right now for a technology that we still can't even agree on how to define. There are three bills that have passed at the federal level that have attempted to define artificial intelligence. They've all defined AI differently. That's a problem for regulating. And then there's just a general misunderstanding about a lot of algorithmic and computational technologies based upon the sort of dystopian narratives and worst-case thinking that pervade this debate. At all the hearings that I've testified at, including two this week, and all the other ones that I've watched, every one of them, I could have a bingo card that has words like "Terminator" on it or "2001," and these scare phrases come up.
The debate about AI is being shaped by science fiction, basically. And that's really unfortunate, because if that translates into a sort of preemptive, precautionary approach to technology policy, it would be a major reversal of the internet policy model that has made America a global leader. It would end up treating our AI innovators as essentially guilty until proven innocent and forcing them to get a permission slip before they do anything innovative, taking us down the path of the European Union, where you can't even name a leading digital technology innovator today; the only thing that the Europeans have left to export on the digital technology front is regulation. And they're trying to do that. And so let's tie that back to your point about national competitiveness. This AI debate is so important not just because of what it means for each of us individually or for our competitiveness globally or nationally, but because of what it means for geopolitical security and our standing relative to other nations.
Luckily, America is in the driver's seat right now in digital technology and AI and computation, but the Chinese are making a big play in this. They have announced their intention, their imperial ambitions, to be the global leader in artificial intelligence by 2030, and made massive investments in this field. And we know that if we fail to get this right, it's not just a concern about competitiveness or security relative to China; it's about our values. Just two days ago was the 35th anniversary of the Tiananmen Square Massacre, which basically led to the world's biggest surveillance and censorship regime being created by China. If they win digital technology markets and hearts and minds across the globe, they're going to be exporting those values, as opposed to American values of personal liberty, individual rights, civil rights, and so on. So this is so, so crucial. This is why I've advocated what I have in my own work. Again, people say they believe in wait-and-see policy, being humble, and being patient.
What does that mean in terms of concrete policy? What I argue it means, and this is what I testified about this week, is that we ought to consider what we've done before, which is a so-called "learning period moratorium," where we say: let's take a time out, let's give this a little while, not do anything crazy like a big new AI computation commission, a big new AI licensing regime, or massive new forms of liability. Let's instead zero in on the existing regulatory infrastructure we have to address real-world harms as they develop. We have, as the commissioner pointed out, existing civil rights actions and laws. We have unfair and deceptive practices authority at the federal and state levels. We have recall authority at many federal agencies to pull defective products off the market, whether they're made with AI or anything else. We have torts and all of the other court-based actions. So we have a huge regulatory infrastructure. And let's not forget our federal government is not small: 439 federal agencies, 439, with 2.2 million people working at them. You better believe a lot of them are very interested in taking a hard look at AI, and many of them already are. But let's let that process run its course and be careful and humble about too aggressively shooting ourselves in the foot as this race for global competitiveness and geopolitical security begins with China and the rest of the world.
Aram A. Gavoor: Thank you. Johnathan?
Johnathan Smith: So there's a lot I agree with in what Adam said, maybe with some caveats, but I will start with the expertise gap. I agree, and I touched on that before. That being said, I do think that there's real progress, at least on the federal executive side, to close that gap. A number of agencies have hired and are hiring technologists for the first time, bringing in-house expertise and knowledge to places where it hasn't existed before. My own now former agency not only hired technologists, but just a few months ago, in the aftermath of the AI executive order, the Attorney General and the Deputy Attorney General announced the creation of an Emerging Technology Board, which is designed to serve as a central repository across the department and its many different components, whether that's the FBI, the Bureau of Prisons, or litigating components like the Civil Rights Division, the Civil Division, or the National Security Division, to make sure that all the key players are in the room together.
I had the privilege of serving on that board until my departure from the department. And so I think those types of spaces exist now across the government in places where they didn't exist before. The Justice Department also hired for the first time a Chief AI Officer, who both coordinates that board I just mentioned and thinks proactively about how to make sure the department is closing that knowledge gap and is using, to Adam's point, those tools as effectively and as surgically as possible to take on harms, whether they be discrimination-based or not, when and where they occur. I also agree with Adam on the point about patience and humility. To give one example of where I think this played out in a good way: as part of the rulemaking process, and I know there was a question about Chevron and what's the future of that, but putting that aside, under the current existing rulemaking process for federal agencies, the Department of Health and Human Services recently released a final rule about Section 1557 of the Affordable Care Act, which is the provision that deals with discrimination in healthcare. The initial NPRM, the notice of proposed rulemaking, which is kind of how the process starts, included a whole section about how healthcare providers should go about regulating AI technologies in healthcare, whether that's in your hospital, doctor's offices, what have you.
And again, as many of you know, as part of the rulemaking process, you put out your NPRM, you get a ton of comments from the public, and then the agency reads those comments and makes changes, or provides explanations for why it's not making changes, based on those comments. And as it pertained to the AI section, the Department of Health and Human Services actually retracted, or stood down from, that provision and said, "There is a lot we don't know here." They had perhaps been moving too fast in the first version, and they came out with a final rule that was slimmer and more pointed. And so I do think agencies are being thoughtful and intentional. Again, that's not necessarily to say that the rule is perfect or what have you, but I think that back and forth reflects an approach that is grounded in humility and in listening to the public and to experts.
I guess the one quibble I would have with what Adam said previously is about this learning-period moratorium. I agree that we should be patient. I agree that there should not be quick changes. But I also think what the history of this country has shown, not just in the context of technology but across the board, is that when industry, business, and people know that there's no regulation, or that there's deregulation, there are often examples of people taking advantage of that, using it in ways that are not beneficial, whether for civil rights, for privacy, for fairness, or for competition. And so again, I think that it's important for governments to be patient, to be shrewd, to be surgical. But I also think, particularly for AI, where we see changes happening so quickly and really changing our lives so much, so quickly, there is a need for government action; you can't say, "We're bowing out officially." I worry that that sends the message of "Grab all you can during this period," and I don't know if that's the type of relationship that is best fostered, as opposed to a relationship of collaboration, transparency, and engagement to figure out how government and industry and all of us can work together to ensure that we are getting the best of these technologies and not the worst.
Hon. Keith Sonderling: So many great points here, and we're going to have a full conversation on this. I just want to touch briefly on the technology gap and the skills gap within the federal government. I completely agree, and this is something that, as you know, Congress is looking at and addressing on multiple levels, which obviously the executive order couldn't do; it can say appoint a Chief AI Officer or spend more of your own resources on this, but it's not Congress allocating new Senior Executive Service positions for these specific roles and more funding to stay competitive with tech companies. I just want to take a step back and again simplify this, because that's what we have to do within our current constraints, and ask: okay, for a bias investigation in employment right now, our federal investigators show up, and how do we determine if there's bias in an employment decision?
Pre-technology, we have to do depositions, we have to do subpoenas, and we essentially have to try to figure out the algorithm, the black box of the human brain, which is very difficult. And that's what we've been doing since the 1960s, because very rarely does somebody say they intentionally discriminated. So let's flip it. Now, with the use of this technology, even without necessarily understanding the coding of the algorithm, for employers who are using this, from our investigatory standpoint we now have a contemporaneous record of what the algorithm was asked to do, who it was asked to look at, what skills were being placed within the algorithm to find the best candidates, or whether there were unlawful characteristics, like "I don't want to hire a woman for this job" or "I don't want to hire somebody of a certain national origin." These AI systems at least leave a record that we've never had before.
Again, with depositions, we have to see if somebody maybe made a note on somebody's resume about the color of their skin or something else that is unlawful to base an employment decision on. So I do think, even without understanding the technology, the amount of additional records we have from these tools will actually help us prove, or, for an employer using the tools properly, disprove, employment discrimination. And we don't necessarily need the upskilling on the algorithm; we can look more at the basics of the inputs and what the employer asked the system to do. So again, even within the constraints of not being able to read the code, our investigators are going to be able to look at that data. And I think that's really important, and that's what we can and should be doing now. But really quick, Aram, I wanted to switch to what you were saying about the states, and I know Adam's the expert in this area: what's going on in Brussels, what's going on in New York City, what's going on in Sacramento and Albany?
It's similar to the effect the GDPR had. The EU's AI Act is absolutely going to have that same impact on organizations within the United States. New York City's Local Law 144, which has been widely documented, with some issues related to it, also had employers across the country looking at it. And more importantly, with Colorado and the proposals in California, at some point the large-scale or global international organizations that are really using these tools, largely developed in the United States, and especially in the HR context used on essentially every continent, are going to have to start looking at these proposals much more seriously and saying, "Well, at this point we're going to have to integrate a lot of what's being required," a lot of those common threads, whether in my space it's employee consent, pre-deployment audits, yearly audit testing, or disclosing the vendors. We could go through each of those and the pros and cons of voluntarily doing them, but you're just starting to see a lot of commonality. And these larger organizations are not going to have the time or resources to say, "Okay, well, if you're applying for a job in Florida, we're going to have a different AI system than if you're applying for a job in Colorado with the disclaimers."
But one caveat on that: absent the federal government legislating and preempting this area, which the Colorado governor essentially asked for, compliance with state and local law is not necessarily going to mean compliance with the much wider-spanning federal law. Look at New York City's Local Law 144, requiring a pre-deployment audit and yearly audit testing. In my opinion, and I've talked about this, that's a great thing. If you're going to do audits in employment, great, because you can see if there's bias or discrimination, you can make sure in advance that there is none, and you can change it and fix it if there is discrimination before it ever impacts someone's livelihood. That's the whole point of an audit.
But when a city like New York says, "Well, you only have to audit for hiring and promotions, and it only needs to be on race, sex, and ethnicity," that may give you a false sense of security that your systems are in compliance. The EEOC, because federal law still applies in New York City, is going to require everything beyond hiring and promotion: wages, training, benefits, terminations, age, religion. So you can't just think that because certain countries or cities say this is the most important thing to us, and you do that, you're going to be fine. And that's sort of the challenging thing. And one more point on this: with the transparency, the consent in some of these states, and the contestability that consumers, or in my space employees and applicants, are starting to have, are we now starting to give individuals more consumer rights and consumer protections when they're being subject to AI than they have with humans?
Because right now, if you don't get hired, very rarely does the employer say, "Well, here's exactly why we didn't hire you. Here's exactly what went into it." They say, "No, we just went with a more qualified candidate or somebody else; thank you for applying." And you don't really have access to that information unless you think it's discriminatory. Now the contestability part of these systems is putting a lot more burdens on employers, and actually much more disclosure than they've ever had before, prior to technology, which may prevent them from using technology that has benefits. So with state and foreign governments prodding in certain areas where these protections already exist, and layering on additional things, it may cause a lot of compliance issues across the board.
Aram A. Gavoor: Well, thank you for all of that. We have about six minutes that I just want to be a free and open conversation. My one reaction to all of this is, well, maybe we should all set up some sort of live regulatory tracking tool showing, depending on your sector and your location, what's happening to you, like a dashboard of what you have to do now. I feel like the only real winners in all of this are the lawyers.
Adam Thierer: Yeah, I think that's right. I think that this is a pretty important reversal of the policy framework that the United States has had for the last quarter century for digital technologies and the internet. I just think it's very important for us to realize that this is something that we're going to need to probably address at some point, either through some sort of federal preemption or some sort of compact among the states. I don't know which one of those things will work. Congress just can't get its act together and do anything on this front. So I'm not hopeful.
Johnathan Smith: I think that's right, and I do think a lot can be said about the inability of Congress to pass legislation, which is probably a whole separate panel. But yes, in this space, just given the speed at which both the technology is changing and states and localities are engaging, I think the absence of federal legislation is really consequential. And again, not to take anything away from what the commissioner said about the power of our existing statutes, but statutes need to be updated. That was true before AI; it is true with AI. The one thing I will say, and this was I think a question in the Q&A as well, is about the joint statement the commissioner mentioned that a number of agencies had issued, and what has come of that statement. I can really only speak, or best speak, from the Justice Department perspective, but let me give a few examples of the type of enforcement that we've done around these issues, some right before the statement, some after, but I think it all speaks to the same point.
One, we have what I think is a groundbreaking settlement with Facebook, Meta. Say the commissioner and I are neighbors looking for a new house. What we alleged was that I, as a Black person, would see very different housing advertisements than what the commissioner would see in terms of what was available on the market. And the thing that's really troubling about that is I would have no way, unless I actually went over to his house and looked at our computer screens side by side, to know that I was seeing a different kind of quality or inventory of properties. And so the Justice Department was able to enter into a settlement with Meta where they not only took down one of their algorithms, but they also created a variance reduction system, VRS is what they call it, that actually trains the algorithm to minimize these types of discriminatory outcomes based on race or sex. And so I do think that there are concrete actions, to the commissioner's point, that can be taken with existing authorities under the Fair Housing Act. Meta has said itself that it's planning to apply this VRS technology not just to housing ads, but also to employment ads and to other advertisements on its platform. And so I do think there's a lot that can be done, and is being done, even notwithstanding the technological challenges or the legislative challenges that exist.
Hon. Keith Sonderling: Yeah, from my perspective now, from an employment perspective, it's difficult, because under our laws, employees generally have 300 days to file a charge of discrimination for us to investigate. And the issue here with algorithmic discrimination, you hear about the fear of it, you hear how large Fortune 500 companies all over the world are using HR AI technology for all types of applicants, you're being subject to it in the workplace, or if you're an applicant, at all stages, is: why haven't we seen these big litigations yet? Why haven't we seen these massive government enforcement actions yet? And that's because generally the employee needs to know they were discriminated against and then file a charge of discrimination at the EEOC, and we don't have an AI box on our charge of discrimination form. The boxes are for age discrimination, sex discrimination, religious discrimination, national origin discrimination, you name it, whatever category it is.
And even if an employee knew they were discriminated against because of an employer's use of a discriminatory algorithm, again, it wouldn't come in as AI discrimination. It'd come in as, let's say, age, and then we would have to do an investigation and somehow pick that one out of the 80,000-plus new charges of discrimination we get every year to investigate whether it's algorithmic discrimination. So I don't think you're really going to see the uptick in these cases from an investigative standpoint at the EEOC, with the limited ways we can start an investigation, until states and foreign governments start requiring that consent and disclosure for using these tools. So if an applicant is being subject to an algorithm and the employer has to say, before they continue, "You're being subject to an algorithm, this is the name of the vendor, this is what the algorithm is going to assess you on," then if you don't get the job, you may have some reasonable belief to blame the algorithm, versus not being qualified or somebody else being selected, and then come forward saying, "I wasn't hired by this company. They told me I was subject to an algorithm. I believe for these reasons the algorithm was potentially discriminatory," and the EEOC would then start an investigation.
So that's sort of where you're starting to see state, local, and foreign governments play in this area. If consumers, and again, consumers in our space are applicants and employees, start having the knowledge that they're being subject to an algorithmic tool, it'll be much easier for them to come to federal law enforcement agencies like us and say that their rights have been violated and that the EEOC should investigate the employer for using that tool. So it's more of a front-end difficulty for us to even get these cases without people knowing they're being subject to these tools.
Aram A. Gavoor: Thank you all. So now we're in the Q&A phase. To the audience, please feel welcome to ask questions. I might not be able to get through all of them, but I am curating them to get a good mix. So the first, from a representative of the news media, is for Commissioner Sonderling. And I know you might've answered this in part, but I just want to formally ask it. You mentioned a statement from several federal agencies stating that existing consumer protection and civil rights laws would apply to automated systems. Have you seen any of the agencies follow through on that since the statement was published? If so, can you provide a few examples?
Hon. Keith Sonderling: Well, I can't talk about any open law enforcement actions that we have here. That process is confidential until a lawsuit is filed. But just broadly, as I said, I think the lack of awareness among applicants and current employees that they're being subject to these tools is the first barrier to being able to come to the EEOC and say your civil rights were violated by an algorithm. That knowledge gap is, I think, what's preventing a lot of these cases from coming in. And second, it's just the way our process works: again, 300 days to come in and tell us how you were discriminated against, not necessarily whether it was by technology, and then the investigation has to happen. So there's a natural lag, both from an awareness perspective and, even if you have that awareness, from the time it takes for us to investigate and then refer cases to litigation, which is when they would become public.
Aram A. Gavoor: Thank you. So another question that I think is really useful, this is at the local level. This might be for Johnathan and Adam to look at. What types of positive examples of local regulation have we observed? Of course, we discussed a lot of things that cause problems, but what are the good things that we are seeing in these laboratories of democracy?
Adam Thierer: I'm happy to say one quick thing. Sorry, I lost my video there for a moment and I've had to switch to my phone. So I think there are many excellent efforts underway at the local level to examine how law enforcement and government utilize algorithmic tools in decision-making. There's obviously added concern whenever the government is the one utilizing a certain tool or technology, and we do need to take special care with regard to how the government utilizes machine learning to administer government services, or especially for law enforcement. So there are multiple bills on that, and I'd like to see more of that. I think that's a good use of government time and energy right now. That's the low-hanging fruit, if you will. The other thing a lot of governments are doing is taking an inventory. They're actually trying to figure out, what are we doing? Let's set up a task force or commission and figure out how to address these issues, but also start by first figuring out what our existing regulatory capacity looks like and how it might already address the concerns that have been raised. I'd like to see that sort of thing formalized. I recommended it to Congress this week. The first order of business should be that sort of thorough regulatory inventory, if you will, to take a look at everything that could possibly already cover the concerns that are being raised.
Aram A. Gavoor: Johnathan?
Johnathan Smith: Yeah, I would just give maybe two quick examples. One of the components of the President's AI executive order requires the Department of Justice to, I believe within a year, so by October of this year, release a report about the use of AI by law enforcement. And again, I have left the department and I don't know what that report will say, but I would expect, or I would encourage, that report to include examples of localities and states that have already started to do some of that regulation. I think one thing that the federal government can do is use its bully pulpit to highlight examples of things that are going well and that are working.
So I think that's one thing. The second thing is, to the commissioner's point, he's totally right as a matter of federalism that you can't opt out of Title VII or some other federal statute by complying with a local statute. But it's also true that many local and state anti-discrimination statutes have gone further than federal statutes in terms of protected categories and in terms of how they approach discrimination. And so I do think that's a place where the federal government, or just the public, can look at novel ways to address discrimination: what are the ways in which discrimination is appearing in 2024 that it wasn't in 1964, where Title VII or other provisions just haven't been updated enough to adjust. We do see jurisdictions like New York City and many others taking actual steps to codify those concerns and come up with effective remedies for those problems.
Aram A. Gavoor: Thank you. So I guess the last question that we have time for today is a big one. The questioner says no one mentioned the role of voters, and suspects that the government does not believe or hope that voters are capable of understanding AI. I just want to tack on what I infer is also the spirit of the question: AI and democracy. There's a national election coming up. Thoughts? So I guess Johnathan first, then Adam, and then Commissioner Sonderling, if there's something to say from the EEOC vector, maybe we'll hear it, but -
Hon. Keith Sonderling: I'm not going to weigh in; it's out of my wheelhouse. I am curious what they have to say. But let me just tack on one thing, and that will be my concluding remark about what the states are doing. Again, on that requirement of a pre-deployment audit: we have a lot of guidance on that. Those audits are generally going to be done to the EEOC's employment standards, standards that have been around since the 1970s. Whether it's the EU AI Act requiring a bias audit, or New York City, or California and Colorado, we have sort of the global standards when it comes to testing disparate impact in employment. So the more people are encouraged to do pre-deployment audits and yearly audits the right way, using the right data, using your own data, not taking some of the ways out by using other people's data, the more that will have the impact of essentially preventing discrimination from these tools from occurring at scale, by catching it in a test situation. And I think that's a good thing, and it helps us do our job if employers are in a sense self-regulating by testing for bias before a tool is being used. So that will be my positive note on the states.
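[For context on the disparate-impact testing the commissioner references: the EEOC's 1978 Uniform Guidelines include the well-known "four-fifths rule," under which a selection rate for any group that falls below 80% of the highest group's rate is conventionally flagged for review. The following is a minimal, hypothetical sketch of that arithmetic only; the group labels and numbers are invented, and a real audit involves many more categories, statistical tests, and legal judgment.]

```python
# Hypothetical sketch of the "four-fifths rule" arithmetic from the EEOC's
# 1978 Uniform Guidelines. Group labels and counts are invented examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(outcomes: dict) -> dict:
    """Map each group to its selection rate divided by the highest rate.

    `outcomes` maps group label -> (selected, applicants). A ratio under
    0.8 is the traditional screening flag for possible disparate impact;
    it is a heuristic, not a legal conclusion.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Invented example data: (hired, applied) per group.
ratios = adverse_impact_ratios({"group_a": (48, 120), "group_b": (30, 110)})
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

[New York City's Local Law 144 defines the "impact ratio" its bias audits must report in essentially this way for selection decisions: a category's selection rate divided by the rate of the most-selected category.]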
Johnathan Smith: Very quickly about the AI and democracy issue. I know one thing that's incredibly important at all levels of the Justice Department is making sure that elections (unintelligible) are free and fair, that every eligible American who wants to vote has the right to do so, and AI, I think, poses real challenges for that. As (unintelligible) the ways in which AI has been used to create advertisements and make images that appear to be the likeness of a candidate but are not. And so the Department of Justice has a number of different task forces that are designed to focus on the integrity of elections. I will also note the Federal Communications Commission, the FCC, which is an independent commission just like the EEOC; I know they've done a lot of work around robocalls and deepfake technology in the voting context with some of the primary elections. So I know we're out of time, and this is a huge topic, but it goes to my point that AI is the civil rights issue of the 21st century. Thinking about how to make sure that voters have accurate and proper information so that they can cast their ballots fully informed, AI-generated content is a challenge to that, and I think it's only going to become more of a challenge as the technology becomes more advanced and it gets harder to detect a lot of these issues.
Adam Thierer: I'll just say in closing that there's been a lot of talk about AI and elections, and I understand some of those concerns, but the sky really isn't falling, and a lot of the predictions that many people have made have not really come to pass. That being said, one of the reasons they haven't come to pass is that agencies are on it. There are actually a lot of agencies, I mean, the FEC was already mentioned, the FCC, the FTC, and that's just at the federal level. At the state level, we have officials that are looking into this as well. Again, it goes back to my point: we already have a lot of existing law and regulation we can throw at algorithmic harms or concerns. Let's make that the first order of business and then fill gaps as needed from there.
Aram A. Gavoor: Well, I think we're out of time. Of course, we could have gone on for another three hours, but I'll hand it off to Chayila at this point.
Chayila Kleist: Thank you so much. Really appreciate you joining us today and sharing your expertise and insight. It was a fabulous discussion. Thank you also to our audience for joining and participating. We welcome listener feedback by email at [email protected], and as always, keep an eye on our website and your emails for announcements about other upcoming virtual events. With that, thank you all for joining us today. We are adjourned.