
Programming in 2026

2026.02.10

Seville, Spain

Confident programmer surfing the AI wave (courtesy of ChatGPT).


"Logbook, Interstellar Year 2026 of our Lord Jesus Christ…

The Borg have come to stay.

Resistance is futile. You will be assimilated."


At this point, it's no surprise to anyone that artificial intelligence is here to stay.

It was impressive to see, just a few months ago, how generative artificial intelligence was applied first to images and then to video, forcing graphic designers to rethink their workflows and lean on AI.

In the world of programming, 2026 is consolidating the trend: coding agents such as Claude, Codex, and Gemini are here to remain.

It's almost magical to watch them fix pull requests or generate code from just a few instructions, systematically and tirelessly. They can be used as a true exoskeleton to amplify the capabilities of engineers who know what they want to build.

Like any precision tool, these agents are powerful and sharp enough that you can "shoot yourself in the foot" if you don't know what you're doing. In the hands of qualified personnel, however, they are undeniably a productivity multiplier we can't ignore.

This trend will only improve year after year. There are legitimate objections about performance and energy consumption, but my optimistic view is that both will keep improving in the coming months until these issues become irrelevant, or at least reasonable and manageable.

Despite all this, the truth is that the programming profession is changing, and we will have new tools at our disposal. Mathematics changed with the advent of the calculator because computation became cheap, yet mathematicians didn't disappear; on the contrary, they are highly valued. AI won't destroy the work of programming, not by a long shot, but it will force us to work differently. At the utopian extreme (on the asymptote), we can fantasize that code will be generated entirely by artificial intelligence (still doubtful).

And in that case, if we grant this point, the relevant question to ask is: what will be left for software engineers?

I see at least four key aspects:

1. Requirements

Undoubtedly, gathering good requirements will be a primary task for properly guiding artificial intelligence.

Most projects that fail do so because the requirements and client expectations are not clearly defined.

2. Architecture

Architecture is key to achieving results that match the intended purpose. Ideas that are good in theory become garbage when they are poorly executed: when non-functional aspects such as performance, real-time behavior, usability, or granularity of change are not taken into account for the context in which the software will be used.

For example, I remember a video-conversion solution from a couple of years ago built on AWS Lambda, where each function was responsible for converting a single frame. Highly scalable? Yes, undoubtedly. The best architecture for the intended purpose? Not at all: a terrible decision in terms of both performance and cost.

3. Debugging and explainability

When things go wrong, systems must be debugged and every step must be understood end-to-end. If we entrust everything to AI, what human will be able to inspect the inner workings to validate that everything behaves as expected? Is that mythical bird—the full-stack developer—in danger? Or, on the contrary, will it become more sought after?

4. Accountability

Who will be responsible for the disaster, if there is one? As IBM noted as early as 1979, "A computer can never be held accountable; therefore a computer must never make a management decision." So, if machines cannot be blamed, we humans still can.


In this context, requirements-gathering tools become crucial. Tools like J.J. Dubray's Puffin explore better ways to capture requirements. Emerging standards such as OpenSpec, and the associated SDD (Spec-Driven Development, as opposed to Vibe Programming) movement, are debating exactly this.

In his prescient book The Broken Telephone, John Macías explores how to use DDD and eliminate intermediaries to avoid ending up as in the children's game of "broken telephone."

The MCP protocol is the de facto standard for exposing our APIs and physical-world resources to AI, which uses and combines them in ways we never imagined. Exposing resources to AI has its risks, most notably a large attack surface. In this regard, I recommend Daniel Garcia and Alfonso Muñoz's book, Secure MCP - A practical guide for developers, software architects and tech leads, which covers security considerations when building MCP servers.
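To make this concrete, here is a minimal sketch of exposing a single internal operation as an MCP tool, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) and zod; the server name, tool name, and logic are illustrative inventions, not taken from any real API.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Illustrative server exposing one read-only tool over stdio.
const server = new McpServer({ name: "orders-api", version: "1.0.0" });

server.tool(
  "get_order_status",                                      // hypothetical tool name
  { orderId: z.string().describe("Internal order id") },   // typed, validated input
  async ({ orderId }) => ({
    // Stubbed answer; a real implementation would call the internal API here.
    content: [{ type: "text", text: `Order ${orderId}: status unknown (stub)` }],
  })
);

await server.connect(new StdioServerTransport());
```

Deliberately exposing a narrow, read-only surface like this is one way to keep the attack surface mentioned above under control.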

Javier Vélez envisions a near future in which LLMs are used at runtime, unconstrained by deterministic and imperative formalisms. Their specifications or requirements will therefore be expressed essentially in natural language.

To reach this point, we will obviously need to improve requirements gathering, reducing language ambiguity and improving its semantic precision.

Our beloved unambiguous formal languages have always been there, waiting for us: DSLs, state machines, Petri nets. We have plenty of formalisms capable of providing the precise semantics that a precision surgeon requires.
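As a tiny illustration of what such a formalism buys us, here is a state machine expressed as a plain transition table in TypeScript (the states and events are invented for the example): the table is the specification, with no prose left to interpret.

```typescript
// Order lifecycle as an explicit transition table: unambiguous by construction.
type OrderState = "draft" | "submitted" | "paid" | "shipped";
type OrderEvent = "submit" | "pay" | "ship";

const transitions: Record<OrderState, Partial<Record<OrderEvent, OrderState>>> = {
  draft:     { submit: "submitted" },
  submitted: { pay: "paid" },
  paid:      { ship: "shipped" },
  shipped:   {},
};

// Any transition not listed in the table is rejected, never guessed.
function next(state: OrderState, event: OrderEvent): OrderState {
  const target = transitions[state][event];
  if (!target) throw new Error(`Illegal transition: '${event}' in state '${state}'`);
  return target;
}
```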

If we can leverage them, the possibilities are enormous.

For our part, we are exploring this path with Structura. We have used a small DSL, a class diagram, to explore the limits of artificial intelligence and how to combine the best of both worlds. On one hand, a formal language with clear semantics; on the other, an AI chat that lets the user express themselves in natural language, written or spoken.

From the chat, we can ask about aspects of the model, change it, extend it, delete it, or refactor it at a speed that exceeds current practice.
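As a rough illustration of what a class-diagram-like model can look like when held as data (the shapes and names below are ours, invented for the example, not Structura's actual DSL), consider:

```typescript
// Hypothetical in-memory representation of a small class-diagram model.
interface Attribute { name: string; type: "string" | "int" | "date"; required: boolean; }
interface Entity { name: string; attributes: Attribute[]; }

const model: Entity[] = [
  { name: "Customer", attributes: [
    { name: "id",    type: "int",    required: true },
    { name: "email", type: "string", required: true },
  ]},
  { name: "Order", attributes: [
    { name: "id",       type: "int",  required: true },
    { name: "placedAt", type: "date", required: true },
  ]},
];
```

A chat instruction like "add a shipping address to Customer" then becomes a small, reviewable edit to this structure rather than a diff over generated code.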

Furthermore, once the model has been refined by the engineer (with or without AI assistance), we apply code generation systematically. This deterministic approach prevents potential errors in the generated code. The generators are designed for enterprise use: they bake best practices and regulatory requirements into the output, ensuring compliance by design. This is especially important in regulated sectors (such as banking, insurance, or utilities), where non-compliance can result in significant fines from regulators.
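Continuing the hypothetical model sketched above (again, not Structura's actual generators), a deterministic generator is just a pure function from model to text: the same input always yields the same output, so best practices live in the generator and can be audited once.

```typescript
// Pure, deterministic generation: model in, source code out, no sampling involved.
function generateInterfaces(entities: Entity[]): string {
  const tsType = { string: "string", int: "number", date: "Date" } as const;
  return entities
    .map((e) => {
      const fields = e.attributes
        .map((a) => `  ${a.name}${a.required ? "" : "?"}: ${tsType[a.type]};`)
        .join("\n");
      return `export interface ${e.name} {\n${fields}\n}`;
    })
    .join("\n\n");
}

console.log(generateInterfaces(model)); // emits the Customer and Order interfaces
```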

We don't know how quickly all of this will evolve. What I am certain of, however, is that requirements gathering and software architecture, as Grady Booch reiterates, will remain vital. This is simply another step up the ladder of abstraction that is the whole history of software engineering.

And that's where we software engineers need to be, riding the crest of the wave to try and surf it successfully. Few professions embrace automation and continuous improvement with more fervor, even if it means the inevitable transformation of the profession.

Welcome, Borg!

Pedro J. Molina, PhD.
