AI’s Growing Complexity: The Next Era of Human Curiosity
There are two threads to this conversation: one, I feel that software engineering is becoming a dead-end job (and I have arguments to support it), and two, I feel that “AI” will turn into a field like quantum physics, where only a few people will work on it: those with the knowledge, resources, and intelligence to deal with its growing complexity. To be honest, these are just my thoughts, and I don’t intend to glorify or disrespect anyone or anything. I’ll fully admit I’m biased towards the positives of AI development (perhaps too optimistically), but since this is speculation, we can skip the negatives for now, ha!
Software Engineering Hitting a Dead End
This is fairly simple to explain. Current large language models (LLMs) with 100B+ parameters face no real bottleneck in performing most of the tasks an average software engineer does today. With the evolution of attention mechanisms and transformer architectures, these models handle many predictive and context-sensitive coding tasks effectively, though they may still struggle with complex, domain-specific problems. Their capacity to learn and generalize across vast datasets lets them produce functional, optimized, and context-aware code without requiring the deep craftsmanship that humans once provided.
Training models on high-quality code still presents challenges, particularly in filtering noisy or buggy data from large-scale sources like GitHub. But code inherently has structure: logical dependencies, hierarchies, and syntax that are mathematically digestible for LLMs. Coding, in essence, is about mapping ideas into structured text, and machines excel at embedding and analyzing such structures.
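To make the “embedding structured text” point concrete, here is a minimal sketch (my own illustration, not part of the original argument) that turns a small Python function into a fixed-size vector with the Hugging Face `transformers` library. The model name `microsoft/codebert-base` is just an assumed example of a code-oriented encoder; any similar encoder would do.

```python
# Minimal sketch: embedding a code snippet as a vector, the same way
# we would embed natural-language text. Assumes `torch` and `transformers`
# are installed; the model choice is illustrative, not a recommendation.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "microsoft/codebert-base"  # assumed example of a code encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

snippet = """
def moving_average(xs, window):
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]
"""

# Tokenize the code exactly as if it were prose.
inputs = tokenizer(snippet, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token representations into a single vector.
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)  # e.g. torch.Size([768])
```

The takeaway is simply that, to a model, source code is another token sequence whose regularities can be embedded, compared, and predicted.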
If someone argues that coding is more than just writing scripts, I still think LLMs provide solid baseline solutions. From writing functions to debugging, unit testing, refactoring, and even performance optimization, they already generate usable outputs in many scenarios. The real bottleneck isn’t the model; it’s the outdated system architectures and workflows that haven’t fully adapted to a new paradigm where AI tools are seamlessly integrated.
I use Cursor for all my coding now, and honestly, I barely need to do anything other than solve math problems or logic-based tasks that aren’t inherently “coding”. It handles boilerplate code, fixes bugs, refines solutions, and even suggests better implementations. This shift proves that software engineering is transitioning from a creative discipline to one focused on supervising and guiding AI-generated work. At this rate, many routine coding tasks are likely to be automated, but human oversight will remain critical in areas like system design, security, and ethical considerations.
But what does this mean for software engineers? It means the role will evolve, but the need for large numbers of people writing code line by line is diminishing. The future of software engineering will focus more on architecture design, ethical implications, and system integration, and less on manual programming.
Math Taking Over AI
The second argument is more nuanced (and admittedly a bit naive). We’re on the cusp of a significant shift where the hype around LLMs will cool down, and research will swing back toward deep learning fundamentals, but with a twist. It won’t just be about scaling models or brute-force optimization anymore. We’re nearing the limits of what can be achieved simply by throwing more data and compute at the problem, especially given the scarcity of high-quality datasets.
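To see why brute-force scaling runs into diminishing returns, it helps to recall the rough shape of the empirical scaling laws reported by Kaplan et al. (2020). The sketch below states only the qualitative form; the exponents are small fitted constants (on the order of 0.05 to 0.1 in that paper), and I’m not asserting precise values here.

```latex
% Rough shape of the empirical scaling laws (after Kaplan et al., 2020).
% L is cross-entropy loss, N the parameter count, D the dataset size in
% tokens; N_c, D_c and the alphas are fitted constants.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
  \qquad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
\]
% Because the exponents are small, halving the loss requires multiplying
% N or D by roughly 2^{1/\alpha}, i.e. several orders of magnitude more
% parameters or data.
```

That arithmetic is the quiet reason the field will have to get smarter about the data it already has rather than just piling on more.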
Future progress will require more sophisticated methods to extract value from the data we already have. This is where things will get much more mathematically complex. Advancements in areas like model interpretability, robustness, and efficiency won’t just be engineering problems; they’ll demand knowledge of manifold geometry, differential equations, topology, and algebraic structures (along with other domains of maths and physics).
Explainable AI (XAI) won’t just involve visualizing attention heads or highlighting salient features. We’ll need new mathematical tools to model high-dimensional behaviors and emergent phenomena within neural networks. This is where mathematicians and physicists are likely to dominate, as they’re equipped to handle the abstract, highly computational nature of these challenges.
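For contrast with those future tools, here is roughly what “visualizing attention heads” looks like today with the Hugging Face `transformers` API. It’s a minimal sketch, and the model name is an assumed, arbitrary example.

```python
# Minimal sketch: pulling raw attention maps out of a small transformer.
# This is the baseline "look where the model looks" kind of XAI,
# not a full interpretability method. The model choice is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased"  # assumed example model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)

inputs = tokenizer("The model refused to explain itself.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, num_heads, seq_len, seq_len).
first_layer = outputs.attentions[0]
print(first_layer.shape)

# For head 0 of layer 0, show which token each query token attends to most.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, row in zip(tokens, first_layer[0, 0]):
    print(f"{token:>12} attends most to {tokens[row.argmax().item()]}")
```

Useful as it is, this only shows where a model looks, not why it behaves the way it does, and that gap is exactly what the heavier mathematics will have to fill.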
AI is still in a phase where many models work effectively without us fully understanding their internal mechanisms, though ongoing research in interpretability is making progress. However, as the field matures, it will demand formalism. We’ll need rigorous theoretical frameworks to explain why models behave the way they do, how emergent properties arise, and how to optimize architectures beyond trial and error. This formalization will make AI research much harder to break into, requiring expertise that spans machine learning, advanced mathematics, and computational physics.
This mirrors the trajectory of quantum physics. Early on, quantum mechanics was an explosive, rapidly developing field, but as it matured, it splintered into highly specialized subfields (quantum field theory, condensed matter physics, quantum information science), each requiring deep expertise. AI is heading in the same direction. The tools and interfaces we use today will continue to be democratized for end-users, but the cutting-edge research will become inaccessible to most.
It won’t be about scaling models endlessly or improving hardware; it will involve solving problems as complex as those found in theoretical physics, such as understanding emergent behavior, minimizing generalization errors, and ensuring robustness against adversarial attacks.
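To ground the “robustness against adversarial attacks” part, here is a minimal PyTorch sketch of the classic fast gradient sign method (FGSM). The tiny untrained model and random input are placeholders; the point is only to show the shape of the attack.

```python
# Minimal FGSM sketch: nudge the input in the direction that increases
# the loss. The stand-in classifier and random "image" are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # stand-in true label

# Gradient of the loss with respect to the input, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1  # attack budget: maximum per-pixel perturbation
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

With a real trained classifier, a perturbation this small is often invisible to a human yet enough to flip the prediction, which is why robustness ends up being a genuine optimization and geometry problem rather than an afterthought.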
The Illusion of Simplicity
Despite the growing complexity, AI will give the illusion that it’s simple to use. Businesses and individuals will interact with AI through user-friendly interfaces and APIs, much like they do now. But this simplicity is deceptive. Behind the scenes, the algorithms, training methodologies, and model architectures will be far beyond what most practitioners can comprehend.
If I stretch my imagination a little further, there’s a possibility that AI complexity could surpass human comprehension before it even reaches AGI (artificial general intelligence). This doesn’t imply that AI will become “smarter” than humans in a general sense, but rather that its architecture and behavior will exhibit emergent properties that are too intricate for us to fully understand or predict.
This could lead to chaos in the field. Just as quantum mechanics forced us to accept phenomena that defy classical intuition (wave-particle duality, the uncertainty principle), AI might force us to accept that certain system behaviors defy traditional interpretative methods, requiring novel mathematical and computational approaches to understand them. Unlike quantum mechanics, which deals with fundamental particles, AI’s complexity will emerge from multi-layered, dynamic interactions between billions (if not trillions) of parameters, creating a kind of computational “black box” that defies “human” maths!
The Dawn of True AGI
At some point, we might hit a stage where AI models become so complex that even our best efforts to analyze them fail…not due to a lack of human intelligence, but because their internal mechanisms evolve faster than our ability to study them. This could mark the dawn of true AGI, not as an “intelligent being” mimicking human reasoning, but as a system that processes and synthesizes knowledge in ways we can’t decode or predict.
At this point, we wouldn’t just face a technical challenge; we’d face a philosophical crisis. Human curiosity has always been our compass, driving us to unravel mysteries of the natural world. But what happens when the pursuit of understanding is no longer ours to lead? AI might present a paradox: the more we innovate, the less we understand. Our deceptively simple models could evolve into vast, multi-layered systems with emergent properties that transcend human intuition.
And then, BOOM! The baton of understanding could pass from us to these highly complex systems. We’d find ourselves in a world where not only can we no longer explain the inner workings of AI, but we’d also have limited control over the new discoveries it makes. AI might solve problems beyond our comprehension, offering solutions we can use but not fully grasp. The curiosity that fueled our technological advancements could reach a limit, where we no longer “build to learn,” but “build to be informed.”
This thought mirrors how quantum mechanics shattered classical logic, forcing scientists to accept strange, counterintuitive truths about the universe. AI could redefine our understanding of cognition, complexity, and the very nature of discovery. If this transition happens, we won’t just lose our grip on understanding our creations…we might lose our role as the primary explorers of the universe. But I still think that is too far in the future.
References
- Mechanistic Interpretability by Neel Nanda
- Circuits by Chris Olah
- Interpretable Machine Learning Book (SHAP values and LIME)
- Scaling Laws for Neural Language Models by OpenAI
- History of Quantum Mechanics
- The Future of Work After COVID-19 by McKinsey
- Will AI Agents Replace Software Engineering Jobs?
- AI models collapse when trained on recursively generated data