Large Language Models (LLMs) have emerged as a transformative technology, revolutionizing AI with their ability to generate human-like text with unprecedented fluency and apparent comprehension. Trained on vast datasets of human-generated text, LLMs have unlocked innovations across industries, from content creation and language translation to data analytics and code generation. Recent developments, like OpenAI’s GPT-4o, showcase multimodal capabilities, processing text, vision, and audio inputs in a single neural network.
Despite their potential for driving productivity and enabling new forms of human-machine collaboration, LLMs are still in their nascent stage. They face limitations such as factual inaccuracies, biases inherited from training data, lack of common-sense reasoning, and data privacy concerns. Techniques like retrieval-augmented generation (RAG) aim to ground LLM outputs in external knowledge and improve their accuracy.
To explore these issues, I spoke with Amir Feizpour, CEO and founder of AI Science, an expert-in-the-loop business workflow automation platform. We discussed the transformative impacts, applications, risks, and challenges of LLMs across different sectors, as well as the implications for startups in this space.
The Evolution of Language Models
Tracing the lineage of LLMs within the broader context of AI development, Feizpour emphasized that while language has always been a critical indicator of intelligence, progress in replicating human-like language capabilities in machines has been gradual. “Language models have been around for longer than people probably realize,” he noted. “But the large ones have been around for the past three to four years, and they just completely brought the game to a new level.”
This leap in performance, Feizpour explained, is largely due to two factors: the sheer volume of training data and the ambitious scope of what these models aim to achieve. Unlike previous narrow AI systems designed for specific tasks, LLMs are trained on vast swaths of internet text to predict the most likely next word in any given sequence. This simple yet powerful objective allows them to internalize complex patterns and generate coherent text across a wide range of domains.
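To make that objective concrete, here is a deliberately crude sketch in Python: a bigram model that counts which word follows which in a tiny corpus and predicts the most frequent continuation. Real LLMs learn the same next-token objective with deep neural networks over billions of tokens, but the underlying idea is the same.

```python
# Toy next-word predictor: count word-to-word transitions in a corpus,
# then predict the most likely continuation. A crude stand-in for the
# next-token objective that LLMs optimize at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat" and "fish" once)
```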
“If you look at the animal kingdom,” Feizpour elaborated, “as humans, we are one of the few mammals capable of using complex language, and LLMs strive to replicate this ability. There are some aspects of language, like storytelling, that are specifically tied to the cognitive revolution. So as modern humans, we’ve been very interested in replicating that capability in machines.”
However, Feizpour was quick to dispel common misconceptions about LLMs. When asked if these models can think or reason independently like humans, he responded, “My position is that they do not because they’re based on statistics, and statistics, by definition, is not causal. In order to be able to reason, you need causal reasoning.” He views LLMs more as “very good reasoning parrots,” capable of mimicking the appearance of reasoning without truly understanding or engaging in causal thinking. Their tendency to hallucinate–to generate plausible but factually incorrect information–reveals their lack of true comprehension.
The Myth of Objectivity and Creativity
This distinction between appearance and reality became a recurring theme as Feizpour challenged the notion that LLMs are unbiased and objective, stating, “They are as unbiased and as objective as their designers.” He pointed out that the much-touted “alignment with human preferences” is inevitably influenced by the specific humans chosen to provide those preferences.
“One of the things that gave OpenAI an edge very quickly was the alignment with human preferences,” Feizpour observed. “But which humans? Humans that OpenAI said are the right group of humans to provide preferences for the rest of the humans.” This selection process, often opaque to the public, can embed particular biases into the models that are then perceived as objective outputs.
Likewise, Feizpour urges caution about the idea that LLMs are truly creative and innovative. While he acknowledges their ability to generate grammatically correct and coherent text—better than 90% of people, he estimates—he questions their true creativity:
“Even through platforms like Midjourney and Stable Diffusion… most of the creativity comes in pages and pages of prompts that somebody writes,” he observed. “The content of that prompt is creative. The rest is just the model replicating what it is told to do.”
Continuous Learning and Knowledge Management
Where, then, does the locus of creativity lie in human-AI collaboration? Is it in the model’s outputs or in the carefully crafted prompts that guide those outputs? Feizpour suggests that human creativity is still very much at the heart of the process, even if it’s less visible in the final product.

He also challenges the notion that LLMs can continuously learn and expand their knowledge like humans. “Such continuous learning,” he explains, “would require either constant retraining from scratch—an enormously expensive proposition—or fine-tuning, which risks ‘catastrophic forgetting,’ where new information overwrites old.”
This limitation has led to the development of techniques like retrieval-augmented generation (RAG), which Feizpour defines as “separating the knowledge base and the linguistic interface.” In this approach, the LLM serves primarily as a linguistic interface, while a separate, updateable knowledge base provides the factual information.
Feizpour elaborated on the benefits of this separation: “You have to admit right away that the language model is just the linguistic interface. It does not have knowledge. It should not rely on its knowledge… The role of the language model becomes: ‘John Doe asked X question, which is what you will augment the generation for. John Doe will make a query against these documents, figure out which paragraphs have a description that answers this question, bring them back, put them in the context of the LLM, and ask it to answer that question only using this text.’”
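As a rough illustration of this pattern, here is a minimal Python sketch of the retrieve-then-generate flow; the keyword retriever, prompt template, and sample documents are illustrative stand-ins rather than any particular framework’s API.

```python
# Minimal RAG sketch: retrieve relevant passages from a separate knowledge
# base, then build a prompt instructing the model to answer only from them.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever; real systems typically use vector search."""
    q_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the context-restricted prompt for the linguistic interface."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
query = "When can I return an item?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # this prompt would then be sent to the LLM
```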
This separation allows for more dynamic knowledge management and opens possibilities for access control and privacy protection. “Now you can put things like privacy control, access control, removal of PII between these two,” Feizpour explained. “You can ask me a question that you’re not allowed to know the answer to, and then this intermediate layer can just reject that question and say, ‘You’re not allowed to ask this question.'”
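In the same spirit, a thin policy layer sitting between the user and the knowledge base might look like the sketch below; the roles, topics, and regex-based redaction are hypothetical examples, not a prescribed design.

```python
# Sketch of an intermediate policy layer: reject disallowed questions and
# scrub PII from retrieved passages before they reach the LLM's context.
import re

# Hypothetical role-to-topic policy; a real deployment would pull this
# from an identity and access-management system.
ALLOWED_TOPICS = {
    "analyst": {"sales", "support"},
    "admin": {"sales", "support", "hr"},
}

def redact_pii(text: str) -> str:
    """Naively mask email addresses; real PII removal is far more involved."""
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[REDACTED EMAIL]", text)

def gate(user_role: str, topic: str, passages: list[str]) -> list[str]:
    """Enforce access control, then redact whatever is allowed through."""
    if topic not in ALLOWED_TOPICS.get(user_role, set()):
        raise PermissionError("You're not allowed to ask this question.")
    return [redact_pii(p) for p in passages]

# An analyst asking about HR data is rejected before any generation happens.
try:
    gate("analyst", "hr", ["Contact jane.doe@example.com about payroll."])
except PermissionError as err:
    print(err)
```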
Implications for Startups and Innovation
Despite the dominance of large tech companies in LLM development, Feizpour sees ample opportunity for startups. He advises entrepreneurs to focus on solving specific customer problems rather than trying to compete on model size. “The game of very, very large models is just won by the big entities,” he acknowledged. “But that’s not all lost for smaller entities like startups.”
The key, according to Feizpour, lies in thoughtful system design and unique data. “In all likelihood, whatever you end up building for a specific customer problem is not going to be just one model,” he predicted. “It will be probably one larger model, a bunch of small models, all of them interacting with each other.” He explained that the IP will lie in how these models are architected to work together, interact with each other, control each other, scaffold the behavior of the large language model, and verify its output.
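A toy sketch of that kind of orchestration, with every model swapped for a placeholder function, might look like this; the verifiers and the escalation path are illustrative assumptions, not a reference architecture.

```python
# Sketch of a multi-model pipeline: one large generator scaffolded by small
# verifiers that check its output before it is returned. Each function is a
# hypothetical placeholder standing in for a real model or service call.

def large_model(prompt: str) -> str:
    """Stand-in for a call to a large generative model."""
    return f"Draft answer to: {prompt}"

def pii_verifier(text: str) -> bool:
    """Small check (a rule here, possibly a model in practice) for leaked emails."""
    return "@" not in text

def relevance_verifier(prompt: str, text: str) -> bool:
    """Small check confirming the draft actually addresses the prompt."""
    return prompt in text

def answer(prompt: str) -> str:
    draft = large_model(prompt)
    if not (pii_verifier(draft) and relevance_verifier(prompt, draft)):
        return "Escalated to a human reviewer."  # the scaffold rejects the draft
    return draft

print(answer("Summarize Q3 sales"))
```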
Moreover, he emphasized the importance of proprietary data: “The data that is available on the internet is available to everybody. So, what is your unique data moat?” This focus on specific problems and unique data can level the playing field for startups.
Feizpour also stressed the importance of rigorous product development practices. “Not skipping the best practices continues to be the baseline,” he insisted. “You cannot escape getting out there and talking to customers. You cannot escape the right product development steps. You cannot skip the right software engineering steps. You have to do all of the above, and that’s 95% of the game.”
Societal Impact and the Future of Work
Feizpour expressed both excitement and caution when discussing the broader societal implications of LLMs. He foresees significant changes in organizational structures and the nature of work itself. “If a lot of the work that middle managers are doing today, that cognitive handling of transferring business metrics into specific tasks, can be effectively executed by agents, would we need middle managers anymore?”
He even questioned the future boundaries of corporations: “If agents become very good at hiring gig workers to do something, onboarding them, having them do it, and then offboarding them, then what does it mean to have the boundaries of a corporation?” These scenarios go beyond mere technological capabilities and touch the fabric of our economic and social structures.
However, Feizpour pushed back against simplistic narratives of job loss. “I think the whole conversation around job loss is a little misguided and emotional,” he stated. “What is more likely to happen is job description loss.” In other words, roles will evolve rather than disappear entirely, requiring new skills and adaptability.
“So far, you only needed to write in Python. Now you need to write in Python but also be good at English,” Feizpour illustrated. “Or maybe you’re a businessperson, so far all you did was write things in English. Now you need to write it in a very specific way that the model understands.” The challenge, then, is not just technological but educational: How can we use these tools to upskill people rapidly for the changing landscape?
Competition, Not Regulation, Is the Cure for LLM Missteps
Still, Feizpour remains optimistic. He believes the path forward lies not in control, a term he finds problematic, but in “reducing the harmful sides and amplifying the useful sides” of these technologies. He advocates for a balanced, multi-stakeholder approach. “My bet is still going to be on founders who are enabled well by money, capital, connections, and influence to build guardrails, to build alignment modules, to build privacy by design, to build responsibility by design,” he said. “That’s the only way that I think we have a chance of controlling where this will go.”
Feizpour expresses doubt about the effectiveness of regulation in controlling new technologies like LLMs. He argues that regulations typically lag several years behind technological advancements and are implemented at a glacial pace. More critically, he points out that companies are adept at circumventing these regulations, often just doing enough to appear compliant while continuing their practices largely unchanged.
Instead of relying solely on regulation, Feizpour advocates for an alternative approach: empowering entrepreneurs to build competitive alternatives. He suggests that by enabling ambitious entrepreneurs to create new solutions and make them available on more suitable terms, we can foster competition. This competition, in turn, can empower a diverse ecosystem of innovators who are incentivized to build ethical, robust systems.
Feizpour believes that organizations like the Altitude Accelerator, which support and nurture startups, are doing exactly this—creating an environment where entrepreneurs can develop alternatives to the offerings of large tech companies. He sees this entrepreneurial ecosystem as a more dynamic and potentially more effective way to address the challenges posed by emerging technologies.
This approach, however, requires a shift in how we fund and support startups. It’s not just about chasing the next unicorn but about fostering a community of builders who are equally committed to technological advancement and societal well-being. “I’m very interested and always excited to have discussions around what this actually means,” Feizpour said. “What are practical, realistic ways of controlling where this is going to go?”
The Role of Open Source and Community
Feizpour repeatedly highlighted the importance of open-source initiatives in democratizing AI development. Despite the resource advantages of large corporations, he sees hope in the vibrant open-source community.
“There is activity in the open-source community,” he noted. “At least some of the companies feel that it is important to support those movements.” This distributed approach to development could serve as a counterbalance to centralized corporate control.
However, Feizpour also acknowledged the complex dynamics at play. “The fact is that they created a model that was the first product-market fit,” he said, referring to major players like OpenAI. “This, in turn, means that they have the capability of generating a lot of new data that is not necessarily even available on the internet.”
This data advantage creates a self-reinforcing cycle that can be hard for smaller players to break. Yet Feizpour remains optimistic about the potential for innovative startups and open-source projects to carve out their niches, especially in specialized domains where unique data and domain expertise matter more than sheer model size.
The future of AI is yet to be written. For now, Feizpour offers a measured optimism and a pragmatic vision.
The future of LLMs, like the models themselves, remains probabilistic rather than deterministic. We are witnessing their capabilities and limitations unfold in real time, and we are reminded that these powerful tools are still tools: human creations that reflect our own complexities, biases, and aspirations. Their evolving idiosyncrasies are already spurring new solutions in critical areas such as cybersecurity, ethical AI, and data privacy. And this, perhaps, is how we incentivize a diverse ecosystem of innovators. As Feizpour so aptly put it, “it’s about amplifying the useful sides of artificial intelligence while mitigating the risks.”