Co-Founder/Managing Partner at First Bridge, CEO at POLAR
Shedding Light
Artificial intelligence is both exciting and challenging as it advances at an unprecedented pace. Its potential to revolutionize our world is undeniable, but it also raises concerns about the unknown and the need to balance innovation with regulation. The media regularly reports on both the possibilities and the risks of AI, illustrating how polarized opinions have become. Bloomberg recently reported that Samsung banned the use of ChatGPT on its systems after employees accidentally leaked sensitive data through it, citing concerns about the disclosure of confidential corporate information; Samsung is now developing its own internal AI tools. In contrast, the founder of Genies purchased ChatGPT Plus accounts for all of his employees in an attempt to increase productivity and cut costs.
Despite these conflicting views, a compromise with regulators is likely, paving the way for a business and user-friendly AI ecosystem. In this article, I will delve into the challenges and opportunities presented by AI and explore how society can harness its potential while minimizing its risks.
Navigating The Bottlenecks Of AI Implementation
In its current state, artificial intelligence faces numerous challenges. One of the primary issues is the inherent bias in models trained on outdated data, which produces increasingly skewed results over time. Ethical problems also emerge, prompting paradoxical calls for a technology moratorium as a way to regain control, and these concerns can provoke reactive, even panicked, political responses. The European Parliament, for instance, grapples with the question of liability when AI-operated devices cause harm, such as self-driving car accidents, fueling debates over whether the owner, the manufacturer or the programmer is responsible.
Security and compliance are further concerns: unrestricted AI could enable uncontrollable data manipulation and violations of data privacy and intellectual property. AI is already integral to developments in healthcare, IoT and other sectors where data privacy regulation matters most. The European Commission is requiring 19 tech giants, including Amazon, Google, TikTok and YouTube, to explain their AI algorithms under the Digital Services Act in order to boost AI transparency and accountability.
Moreover, even though most users have modern devices, outdated infrastructure and implementation standards remain a bottleneck for effective AI integration. One example is Flanders Investment and Trade (FIT), a public organization assisted by the private firm Radix. FIT lacked the infrastructure needed for large-scale AI projects and required a roadmap for reorganizing it, including the creation of a data hub and an operational database layer for better data management and future AI development.
Addressing these challenges requires a comprehensive approach encompassing ongoing data evaluation, ethical guidelines, robust security protocols and infrastructure upgrades. By navigating these issues thoughtfully, we can unlock the full potential of AI while ensuring its responsible and beneficial use in various domains.
AI Market Structure—How We See It
As the AI market has developed, a clear separation into two layers has emerged. The first layer (L1) formed under the influence of tech giants with unparalleled access to data, including private data, and with sufficient resources to process public data, including intellectual property. This advantage let them leverage vast amounts of data, operating at the edge of legal conflict while leaving little to no evidence, and develop cutting-edge models like GPT-3, Llama and DALL-E that surpass major human performance benchmarks. Operating before regulatory frameworks were established, they got a head start on building complex generative, NLP and other models. Stricter regulations may come in the future, but they have already laid a strong foundation.
L1 has already addressed complex human functions, setting the stage for the emergence of L2. In this context, L1 represents the big corporations responsible for extending their existing models with more intricate functionality. Second-layer (L2) solutions, on the other hand, will use simpler and more cost-effective models tailored to specific needs, focusing on real-time data and specific tasks. While tighter regulations may come into effect, L2 can build on the advancements made by L1 without violating those regulations, paving the way for future developments in AI.
As a showcase of how L1 models can be integrated into L2 solutions, my team and I developed an AI-powered Social Listening Tool for marketers. Although it is a complex solution that processes textual and visual inputs from clients to generate synthetic datasets and create custom AI models for real-time market analysis, we discovered that the most accurate and efficient way to identify the brand archetypes of a target audience was to use GPT's API as the NLP component.
This allowed us to offer more personalized insights to our clients, helping them better understand their target audience and tailor their marketing strategies accordingly. Our tool is just one example of how the market is utilizing L1, the layer where tech giants have established themselves and where the rest of the industry is heading.
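To make the L1-inside-L2 pattern concrete, here is a minimal sketch of how an L2 tool might delegate archetype classification to an L1 model's API. This is an illustrative assumption, not the article's actual implementation: the model name, the archetype list and the prompt wording are all hypothetical, and the call uses the OpenAI Python client's chat completions interface.

```python
"""Hypothetical sketch: an L2 tool calling an L1 model for brand-archetype
classification. Model name, archetype list and prompt are illustrative
assumptions, not the implementation described in the article."""

# Illustrative subset of the classic brand archetypes.
ARCHETYPES = ["Hero", "Sage", "Explorer", "Creator", "Caregiver", "Everyman"]

def build_archetype_prompt(posts: list[str]) -> str:
    """Assemble a single classification prompt from sampled audience posts."""
    joined = "\n".join(f"- {p}" for p in posts)
    return (
        "Classify the dominant brand archetype of this audience.\n"
        f"Choose exactly one of: {', '.join(ARCHETYPES)}.\n"
        f"Posts:\n{joined}\n"
        "Answer with the archetype name only."
    )

def classify_archetype(posts: list[str], model: str = "gpt-4o-mini") -> str:
    """Send the prompt to the L1 model (requires the `openai` package and
    an OPENAI_API_KEY in the environment)."""
    from openai import OpenAI  # imported here so the prompt builder runs without it

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_archetype_prompt(posts)}],
        temperature=0,  # deterministic output keeps labels consistent across runs
    )
    return response.choices[0].message.content.strip()
```

The design point is that the L2 layer owns the domain framing (the archetype taxonomy and the prompt), while the heavy NLP lifting is delegated to the L1 model, so the L2 product stays cheap to build and easy to adapt as regulations evolve.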
Conclusion
The AI market is expected to settle into a two-layered structure. L1 will encompass large-scale AI functions built on vast datasets and will become more transparent and more data-restricted. AI regulation will limit L1 solutions' data access, slowing the pace of innovation as regulators gradually regain control, while existing large AI models will remain a foundation for the coming generation (L2) of AI solutions. L2 solutions, tailored to specific tasks, will align with regulations and deliver relevant, verified results. Compliance and data sourcing will grow in importance, as European regulations will most likely require AI systems to disclose their data sources by 2026. The continued growth in L1 use cases and increased investment in L2 solutions will drive market growth. I believe a moratorium will be ruled out, as it would directly suppress innovation rather than control AI tools, shifting the legislative focus toward reasonable regulations and compromises.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


