The current narrative around AI agents displacing SaaS tools deserves a closer look when it comes to engineering software.
By Randall Scott Newton
Managing Director, Consilia Vektor
Much of the conversation about AI agents disrupting software-as-a-service assumes that if an agent can replicate a workflow, it can replace the platform behind it. That logic may hold for horizontal tools such as project tracking, CRM, and document collaboration, where the offering is mostly good UI/UX and a coordination layer. It breaks down almost immediately when applied to Product Lifecycle Management (PLM).
PLM systems are not workflow tools. What makes them resistant to AI displacement is not a single, easily reproduced process, and the specifics depend on whether we are looking at the integrated PLM vendors (those who sell both PLM and CAD tools) or the pure-play, CAD-agnostic vendors.
The first case is the integrated PLM vendors: Dassault Systèmes, Siemens, PTC, and (to a lesser degree) Autodesk. These companies sell both a PLM platform and the CAD kernel that generates the geometry the PLM manages. That tight coupling matters. Teamcenter stores artifacts that NX validated. 3DEXPERIENCE manages geometry that CATIA’s solver touched. The physics provenance is traceable end-to-end within a single vendor’s data model. When these systems accumulate decades of revision history, they are accumulating history in which the geometry, the simulation, and the change record are all bound together. An AI agent that must verify whether a revised component violates an aerospace safety tolerance is not just querying a document archive; it is navigating a connected record of physics-validated decisions. That is a meaningfully different and harder problem than anything a general-purpose agent can address without deep integration into the PLM kernel.
The second case is the pure-play PLM vendors, with Aras as the clearest example. Aras Innovator does not own a CAD kernel. It ingests validated geometry from wherever the customer’s engineers produce it: CATIA, NX, Creo, SolidWorks, whatever the program requires. The physics was done elsewhere. What Aras holds is the compliance history, the BOM lineage, the configuration management record, and the change order chain. That is a real and deep asset; the regulatory and audit moat holds strongly. Yet describing it as a repository of “engineering physics” would overstate what pure-play PLM actually controls.
Aras’s multi-CAD neutrality is simultaneously its limitation and a separate kind of moat. For large programs running heterogeneous CAD environments, a vendor-agnostic PLM backbone is often the only practical option. The lock-in there comes from the depth and complexity of the configuration management history, not from physics integration. AI agents face the same barrier, just framed differently: not “can you navigate our physics model” but “can you navigate 20 years of multi-source configuration history across a program with thousands of parts and dozens of change streams.” The answer is the same.
Compliance is the moat
In heavily regulated manufacturing sectors the audit trail is not a feature. It is, effectively, the product. Every change to a design, every revision to a bill of materials, every engineering sign-off carries legal and certification weight that a generic AI agent simply cannot carry on its own.
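To make the point concrete, here is a minimal, hypothetical sketch (not any vendor's implementation, and far simpler than a real PLM audit trail) of why a change record can carry legal weight: each entry is hash-chained to its predecessor, so a retroactive edit anywhere invalidates everything after it. The part numbers and approver names are invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

class ChangeLedger:
    """Append-only engineering change record.

    Each entry is chained to the previous one by a SHA-256 hash,
    so any retroactive edit invalidates every later entry: a toy
    analogue of the tamper-evident audit trails PLM systems
    maintain for certification.
    """

    def __init__(self):
        self.entries = []

    def record(self, part_id, change, approver):
        # Link this entry to the hash of the one before it.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "part_id": part_id,
            "change": change,
            "approver": approver,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Re-derive every hash; return False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical usage: two sign-offs, then a silent retroactive edit.
ledger = ChangeLedger()
ledger.record("BRKT-0042", "Rev B: wall thickness 2.0mm to 2.5mm", "j.ortiz")
ledger.record("BRKT-0042", "Rev C: material 6061-T6 to 7075-T6", "m.chen")
print(ledger.verify())  # True
ledger.entries[0]["change"] = "Rev B: no change"
print(ledger.verify())  # False: the tampering is detectable
```

An agent operating outside such a record can read the documents, but it cannot mint entries that carry the chain's evidentiary weight; that is the difference between accessing information and anchoring a decision in validated history.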
The argument for AI disrupting horizontal SaaS is persuasive partly because the value of those platforms has always been somewhat abstract — dashboards, visibility, coordination. Strip away the interface, and a capable agent can approximate most of what they do. PLM is different because the value is in the depth of the data model, the regulatory integration, and the years of institutional engineering knowledge baked into the system. An agent that operates outside that environment is not a replacement; it is working without a foundation.
Moving toward agentic infrastructure
Rather than being threatened by the AI agent wave, the major PLM vendors are positioning themselves as the infrastructure those agents will require to function safely in industrial environments.
PTC has been the most direct in articulating this. Their framing of an “Advise → Assist → Automate” progression is not just a marketing ladder; it reflects a real architectural strategy for moving Windchill from a passive data store toward an active, agent-ready environment. Their Document Vault AI agent, which moves from search into autonomous extraction of specifications from legacy PDF archives, is an early concrete example. Pulling structured engineering data from decades of unstructured legacy documentation is not something a general-purpose agent can accomplish without deep integration into the PLM data model.
Siemens is taking a complementary approach, deepening the integration between Teamcenter X and its design tools (NX, Solid Edge) to create an environment where AI capabilities are, by design, constrained and validated by the engineering context. The value proposition is safety and auditability, not raw capability.
Dassault Systèmes has made the most philosophically ambitious bet. With the introduction of AURA and its continued investment in the 3DEXPERIENCE Virtual Twin concept, the company is arguing that the digital twin is the appropriate cognitive environment for industrial AI. In essence, an agent is only as reliable as the model it inhabits. Whether that argument lands with industrial buyers in 2026 will be worth watching. The Virtual Twin narrative has been in development for years, but the emergence of agentic AI gives it a more concrete commercial hook than it has had before.
Market implications
Not all engineering software faces the same exposure. A simple framework for thinking about AI displacement risk in the sector:
| Category | AI Risk | Why |
|---|---|---|
| Horizontal SaaS (Asana, Smartsheet) | High | Value is in coordination and dashboards — replicable by capable agents |
| Cloud-native engineering SaaS (Autodesk Fusion) | Low–Moderate | Physics simulation and geometry modeling create stickiness; watch Fusion’s text-to-geometry adoption as the key indicator |
| Integrated PLM + CAD (Teamcenter/NX, Windchill/Creo, 3DEXPERIENCE/CATIA) | Low | Physics provenance traceable end-to-end within one vendor’s data model; AI agents depend on the digital thread and cannot substitute for it |
| Pure-play PLM (Aras Innovator) | Low | Doesn’t own physics provenance; multi-CAD configuration history and compliance chain are the moat; equally deep, differently grounded |
What to watch in the near term
For Autodesk, the meaningful signal is professional adoption of AI-assisted geometry generation in Fusion 360. If production engineers begin integrating text-to-geometry into actual design workflows, the stock will likely decouple from any broader SaaS selloff. Solo-entrepreneur and educational use is noise; production adoption is the signal.
For PTC, watch for announcements of customers moving to outcome-based pricing tiers. The shift from per-seat licensing toward consumption models tied to automation outcomes is the clearest indicator that the agentic strategy is finding traction beyond pilot programs.
For Dassault Systèmes, the question is whether AURA gains meaningful industrial adoption or remains a compelling concept in search of a deployment. The 3DEXPERIENCE platform has always had a longer sales cycle than its competitors; the agentic angle may accelerate enterprise conversations, but execution will determine whether it translates into revenue.
The underlying argument
The distinction worth holding onto is the difference between tools that manage information flow and tools that anchor decisions in validated history. Horizontal SaaS manages information: schedules, contacts, tasks, documents. AI agents are, at this point, quite good at managing information. PLM systems, whether they own a CAD kernel or not, anchor engineering decisions in a record that carries regulatory, physical, and institutional weight. That is a fundamentally harder problem to replicate.
The nature of the moat differs between the integrated vendors and the pure-play vendors. For CAD+PLM vendors, it is physics provenance. For Aras and its peers, it is configuration depth. Both resist AI displacement for real reasons. Conflating them may produce a tidier argument, but it obscures what is actually protecting each category.
The companies facing the most pressure from AI agents are those whose primary value is making information visible and accessible. The companies that will benefit are those which own the authoritative record required by agents to function responsibly in consequential environments.
In engineering, the PLM moat has never been the interface. It has always been the depth of the validated history behind it.
[AI/Human contribution: This article was written by both a human and an AI agent functioning as a researching reporter. There were several rounds of guidance prompting, and a comprehensive editorial rewrite. As a result, the exact word-for-word “who or what wrote this” is a mishmash.]