Orgs Using AI at Scale Need to Know: Scrutiny ‘Is What Winning Feels Like’

The fastest cars need the best brakes. It’s a point that Suraj Srinivasan, professor at Harvard Business School, made during a Newsweek webinar on Tuesday, when discussing ways to manage risk while working with AI.

“You don’t build ultra-fast cars at 150 miles-an-hour speeds without actually building brakes that can steer the car [or] stop the car from crashing,” Srinivasan said. “In the case of AI, adoption is growing faster than the speed at which we can figure out what brakes we need.” 

During the Feb. 24 webinar, which was titled “AI Governance: Balancing Innovation and Risk,” Srinivasan discussed the growing tension between the demand to scale quickly and the need to mitigate risks. Srinivasan’s guest for the conversation was Keith Enright, partner at Gibson Dunn and co-chair of the firm’s Tech and Innovation Industry Group and Artificial Intelligence Practice Group, and former chief privacy officer at Google.  

Enright said that conflict is likely familiar to anyone who’s been in a leadership position at an organization operating at scale. He noted the “relentless pressure for velocity” that comes from competing in a rapidly shifting environment to bring forward the best and most bleeding-edge products.

“When you’re trying to out-innovate your competitors, speed is an incredibly important feature of your organization,” Enright said, reflecting on his decades of experience in the tech sector. “At the same time, you were always aware of the fact that we were moving into an increasingly complicated and complex regulatory and policy environment. So you needed to make sure that you were on the right side of history.” 

Now, with the pace that AI is evolving, the stakes seem even higher, according to Enright.

“The velocity is scaling up at an unprecedented rate,” he said. “The perceived scale of change and impact is even greater than it ever was before, and there is this unique feature that, now, winning is existential.”

Companies should also be prepared for tightening restrictions as the AI regulatory environment matures, Enright added. He predicted that legacy data protection and privacy regulators will return to the tools they used five or ten years ago and “begin enforcing them more aggressively and more disruptively than they ever did before.” 

“I do think we are going to see regulators begin applying pressure and pain in some of those other areas, reminding organizations that they still have important compliance obligations to protect users and keep them safe in these other areas,” Enright said. “A lot of organizations are going to struggle with that because they over-rotated and they’re going to be caught flat-footed.” 

And what has long been a standard approach to addressing privacy concerns—asking users to click a box and consent to a dense privacy agreement about how their data is processed and used—likely won’t cut it for much longer.

“For many organizations, leaning too hard on notice and consent is actually transferring a tremendous burden to each individual user,” Enright said. “You’re saying, as long as we throw a bunch of text at you and we tell you, ‘You’ve clicked on this button, we now have the ability to do what we want with this information.’ The fact of the matter is, the world’s just more complicated now, it doesn’t work quite the same way.”

Under the European Union’s General Data Protection Regulation (GDPR), Enright said, we could see a stress test of a model in which organizations don’t need consent for every processing activity but instead must have “an authentic, high-integrity, deliberative process” for evaluating the risks of processing users’ personal data. 

“We’re going to need policymakers around the world to begin innovating around the way that they think about creating guardrails around this technology, so that we can, in fact, hold organizations accountable for not getting it right,” he said. “But we’re doing it in a way that we’re not shifting that burden to every individual user of technology, suggesting that they should be consenting to everything.” 

Srinivasan noted that organizations need to determine who will take ownership of AI compliance at the C-suite level. Many are creating chief AI officer roles that may be better positioned to tackle emerging challenges than chief privacy officers, chief information officers or general counsel alone.

He likened internal compliance to the checks and balances placed on the three branches of U.S. government, which are meant to “slow things down so we don’t make mistakes.”

“What you’re saying is that balance has shifted more in the favor of speed and we somehow have to figure out not to shortcut the risk management part, but align it,” Srinivasan said. “Creating frictions was a part of the solution and now we want to remove the frictions.”

The greatest challenge that organizations are facing, Enright said, is creating the right role definition to preserve accountability. Designing these strategies will look different for everyone, depending on an organization’s capabilities, limitations, industry and regulatory standards. He suggested that we will soon see a “plethora of new titles emerge” and organizations experiment with AI strategy.

“I still think there’s a very important role for privacy leadership and privacy compliance in organizations, but I think we’re going to see other umbrella leadership categories emerge and we’ll probably see the privacy role narrowing scope as more of those adjacent responsibilities get absorbed into some broader managerial role,” he said.  

Ultimately, Enright said, the “winning strategy” for balancing innovation, risk and regulations that are still coming into focus boils down to organizations’ willingness to operate transparently and in good faith. He said part of the reason for the success at his previous employer was that they engaged with regulators and policymakers around the world “sincerely and voluntarily.”

One consequence of success, he said, is regulatory scrutiny. The key is embracing that scrutiny.

“Whenever I heard a leader at Google lament what felt like unfair treatment, if it felt like regulators were holding us to a higher standard than other organizations, I very consistently reminded folks, ‘This is what winning feels like,’” Enright said. “If you are consistently committed to doing what you believe to be the right thing, vastly more often than not, that is going to position you in a spot where you are going to be in a credible defensive position with regulators.”
