Council Post: AI Year In Review: How A Global Effort Can Ensure Ethical Growth

Eric Loeb, Executive Vice President of Global Government Affairs and Public Policy, Salesforce.

Historians will look back on 2023 as the year AI went mainstream: Sophisticated generative AI applications made the jump from a novelty for tech experts to an everyday productivity tool. Across industries and business functions, people are finding new and innovative ways to deploy AI tools to better serve customers, improve efficiency, drive positive change and solve challenges of all kinds.

With so much promise, it’s easy to see why generative AI tools are among the fastest-growing applications in history. But, as we know, there are also risks and there remains a trust gap. To ensure the technology is used ethically tomorrow, we must ensure responsible innovation and deployment today.

A Global Urgency To Establish Governance And Guardrails

To address this growing challenge of instilling trust in the technology, interest in establishing policy frameworks to govern AI and its uses has accelerated nearly as fast as the technology itself. Around the globe, governments, civil society and industry leaders are collaborating to build policy frameworks that will allow their economies to benefit from the promises of AI while protecting citizens from risks.

This movement gained further momentum when the UN Security Council convened in July to discuss the urgent need to ensure AI safety and effectiveness by enacting policy frameworks focused on ethical and responsible technology. The G7 nations demonstrated their intent to cooperate on AI governance frameworks, which led to the October 30 announcement of international guiding principles on AI and a voluntary Code of Conduct for AI developers.

Additionally, President Biden hosted EU and European Commission leaders in October to discuss a coordinated approach to governing AI systems. These efforts converged at the U.K. AI Safety Summit at Bletchley Park, which brought together AI experts from governments, industry, academia and civil society to exchange ideas on responsible and ethical AI practices. This fall, we saw the fruits of this collaboration in the G7 Principles, the White House Executive Order and the Bletchley Declaration.

A Tailored Risk-Based Approach

Although establishing guardrails is fundamental to trusted AI, a one-size-fits-all approach could be almost as detrimental to society as no rules at all. To fully benefit from AI, it’s critical to balance safety with innovation, and a blunt approach to regulation may hinder innovation, disrupt healthy competition and delay the adoption of the nascent technology that consumers and businesses around the world are just starting to use to boost productivity.

Both the U.S. and EU have embraced a risk-based approach that advances trustworthy and responsible AI. The EU's groundbreaking AI Act, which reached a major milestone with the political agreement of December 2023, sets a global standard for risk-based responsible AI development, pairing innovation with robust safeguards against misuse. The AI Act focuses regulatory obligations on the highest-impact applications while requiring proper mitigation measures for potential risks, making the EU an important trailblazer for ethical and trusted AI solutions.

Shaping The Future Of AI, Responsibly

As the executive vice president of global government affairs and public policy at Salesforce, I understand that trust is earned, not given, and it requires continuous investment in responsible practices and transparency. In the rapidly evolving landscape of AI, navigating the path to responsible innovation demands a multi-step approach.

Here are a few of the steps technology companies should consider taking to earn trust and develop an ethical AI framework:

1. Build trust: Even as regulatory conversations progress, the legal requirements should only serve as a baseline. Organizations should take responsible action before being required to by regulation and should exceed customer expectations when it comes to privacy, transparency, safety and trust.

2. Protect privacy: Since AI is based on data, ensuring the proper collection and protection of that data through comprehensive privacy legislation is critical to building trust and paving the way for other AI legislation.

3. Prioritize transparency: People should know when they’re interacting with AI systems and have access to information about how AI-driven decisions are made.

4. Take an active role in policy discussions: Public-private collaboration is the key to effective guardrails that protect both people and innovation. Find ways to make sure you have a seat at the table, and bring along a diverse group of global stakeholders to enhance these discussions.

By embracing these principles, organizations can navigate, shape and anticipate the regulatory landscape, positioning themselves as leaders in the development of responsible, safe and trusted AI.

Strong Momentum In 2024 And Beyond

These rapid developments reflect the seriousness with which governing bodies are treating transformative technologies. Making sure these systems can be trusted to operate ethically and safely is foundational.

Making policy frameworks globally interoperable is an exciting goal for 2024, as we aspire to make the development and distribution of AI tools globally inclusive and accessible. I am heartened by the commitment we have seen in 2023 from leaders worldwide to act swiftly and collaboratively, in a spirit of getting this right, even when that effort will require iteration and learning as we progress.

As we look ahead to 2024 and beyond, the technology industry must continue to partner with governments, academics and civil society from all geographies and backgrounds to build a strong foundation for responsible AI progress.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
