
The U.S. State Department Knows AI Risks

Did you know that the U.S. State Department has a detailed report on AI risk? “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI” is a project commissioned in late 2022 to look into what might happen with advanced AI research in American labs.

Chances are you have not heard of this initiative. That’s a shame, because it bears directly on what experts keep saying we need: guardrails for this rapidly evolving field, and reassurance that, when it comes to the burgeoning capabilities of AI, we humans remain in control.

Hence the government-backed project, taken on by an obscure firm called Gladstone AI. Curious as to why the U.S. State Department hired this particular contractor, I found that Gladstone’s founders, Jeremie and Edouard Harris, had already been briefing government officials since 2021 on GPT and other models – and the risks they represent.

Apparently, staffers in the State Department’s Bureau of International Security and Nonproliferation recognized AI’s escalating threat potential and put out the bat-signal: Gladstone won the contract.

What’s In It?

Well, as with any report of this size, there’s quite a lot in it. Gladstone interviewed over 200 experts, and drilled down on national security issues, acknowledging that AI is likely to be a destabilizer on par with the advent of nuclear weapons. That’s partly because of AI’s potential to enable things like lethal cyberattacks and engineered bioweapons. One of the major findings is that a big threat is likely to come from small independent labs that push too far in pioneering AI systems.

For specific threats, Gladstone cites:

  • autonomous cyberattacks
  • AI-powered bioweapon design
  • disinformation campaigns

As related coverage shows, researchers suggest that “misaligned or superhumanly capable AI systems may exhibit power-seeking behaviors, becoming effectively uncontrollable.”

With that in mind, a key suggestion in the report is that the U.S. government should set a hard limit on the computing power used to train AI systems developed stateside.
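To make the idea of a hard ceiling a little more concrete, here’s a minimal sketch of how a threshold check might work, assuming the widely used rule of thumb that total training compute is roughly six times the parameter count times the number of training tokens. The cap value, model size, and function names below are my own illustration, not figures from the report.

```python
# Illustrative only (not from the report): estimate a training run's total
# compute with the common ~6 * parameters * tokens FLOP approximation and
# compare it to a hypothetical regulatory cap.

def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * num_parameters * num_tokens

HYPOTHETICAL_CAP_FLOPS = 1e26  # placeholder ceiling, not a number from the report

# Example: a 70-billion-parameter model trained on 2 trillion tokens
run_flops = training_flops(num_parameters=70e9, num_tokens=2e12)
print(f"Estimated training compute: {run_flops:.2e} FLOPs")
print("Exceeds the cap" if run_flops > HYPOTHETICAL_CAP_FLOPS else "Within the cap")
```

Whatever the exact numbers, the point of a rule like this is that compute is measurable before a model exists, which makes it an easier thing to regulate than a model’s capabilities after the fact.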

The Political Will

Here’s the question: in a continent-spanning society of hundreds of millions of people, one that has long run on free-market politics, will the electorate support this kind of regulation?

The answer probably comes down to awareness. On one hand, AI is extremely unpopular among parts of the American rank and file. On the other hand, we’ve been unable to rein in other technologies (like the internet), partly because of free-market ideology and our notion of America as a “land of freedom.”

To paraphrase the words of Spider-Man, Churchill and Eleanor Roosevelt, with great freedom comes great responsibility.

More Advice from the Report

What do we do about this? The report proposes a multi-layered strategy involving what researchers call “Lines of Effort” or LOEs.

There are five of them:

  • LOE1 – Establish Interim Safeguards
  • LOE2 – Strengthen Capability & Capacity
  • LOE3 – Boost AI Safety Research
  • LOE4 – Formalize Through Law
  • LOE5 – Internationalize Safeguards

The report also breaks down how one would pursue these goals. LOE1, the team suggests, could involve creating an AI Observatory for real-time monitoring, developing and mandating “Responsible AI Development & Adoption” (RADA) safeguards, forming an interagency Safety Task Force, and controlling the AI supply chain, possibly with export controls. Recommendations for LOE2 include working groups, training programs, early-warning systems and contingency plans.

As for research (LOE3), the report recommends a focus on safety and security, and for LOE4, a regulatory body to put these plans into effect. And then there’s the call to work with international stakeholders collectively, as if AI were an alien force with the power to destroy humanity – which, in all reality, it may be.

Institutional Boots on the Ground

If you’re worried about U.S. agencies catching up to this vision, at least there are some related efforts already in place. For example, there’s the Artificial Intelligence Safety Institute Consortium (AISIC), which, under the Department of Commerce’s NIST, brings together 200 organizations including tech companies, academia, and governments to develop AI safety standards, share research, work on frontier models, and talk risk mitigation.

Here’s Biden’s U.S. Secretary of Commerce, Gina Raimondo, describing the executive order behind AISIC and speaking to its purpose:

“Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack … together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
— Gina Raimondo

So in a nutshell, you could say that things are going on “behind the scenes” in the U.S. government to address AI. It’s not really behind the scenes, per se – the information is all public – but the news tends to get lost in the soup as we look elsewhere at the pressing priorities of American life. Most of us would agree, though, that this kind of reining in AI is not low-priority. What do you think?
