IBM TechXchange 2025 was all about navigating the AI revolution

This syndicated post originally appeared at Zeus Kerravala – SiliconANGLE.

At IBM Corp.’s TechXchange 2025 event last week in Orlando, Florida, artificial intelligence was the primary theme, as it is at every event today. But the messaging and announcements from this conference were about getting customers over the hump and moving AI from vision to adoption.

The pace of technological change is faster than I’ve ever seen it in my almost three decades as an analyst and two decades as an information technology pro before that. The rapid evolution of technology is being driven by AI and more specifically, generative AI, which lets us interact with systems and data in an entirely new way.

It’s important to note that this isn’t just an incremental change for businesses; it’s a fundamental overhaul of the tools we use, the data we rely on, the security we build and the systems we deploy. It’s akin to the massive change we saw with the internet, although AI will dwarf that transition. AI is driving unprecedented data growth, and AI agents are proliferating quickly.

The central question for every organization is: How do you keep up? To help customers with this, the TechXchange keynote addressed the following issues.

The infusion challenge: Building usability and trust

Despite the hype, only a small fraction of businesses are seeing value from generative AI. A recent MIT study found that only 5% of organizations are claiming success with their generative AI pilots. The remaining 95% are held back not by a lack of availability, but by challenges of usability and fidelity. The millions of models on platforms like Hugging Face may be available, but how many meet a bank’s or pharmaceutical company’s stringent requirements for security, data governance, explainability and adherence to regulatory standards?

Deployments require making AI fit the enterprise, not the other way around. This involves deep integration into existing infrastructure and a commitment to the high standards enterprises have maintained for decades. This enterprise-first approach is the core of the recently announced strategic partnership between IBM and Anthropic. As Anthropic CEO Dario Amodei noted on stage at TechXchange, this collaboration is focused on driving adoption faster by combining Anthropic’s models with IBM’s deep understanding of enterprise tech stacks, infrastructure, and the complexity of change management in regulated industries.

IBM does not have the sizzle of an AI startup, but its decades of enterprise experience, coupled with its massive consulting practice, provide the essential trust, scale and domain-specific knowledge required to move from theoretical AI potential to practical, secure business execution. No one does “big” better than IBM, and with AI, that’s a critical component of success for enterprises.

Given the large number of AI failures, it’s important to understand the steps to success. At the event, IBM laid out the following foundational pillars.

Ecosystem: Power in partnership

Enterprise-grade AI will not come from a single vendor. It demands a broad ecosystem that brings together model providers, cloud providers and hardware vendors. This collaboration ensures that the models and technologies delivered are not only powerful but also optimized, scalable and secure across diverse enterprise environments. At TechXchange, IBM announced a partnership with Anthropic that will integrate Claude LLMs into IBM products. Beyond Anthropic, IBM highlighted ecosystem work with Qualcomm, Salesforce, SAP, Dell, Box and others.

Developer tools: Introducing ‘Project Bob’

The second pillar focuses on maximizing developer productivity, moving beyond simple code creation to task completion. The goal is to boost efficiency by turning tasks that once took days or weeks into processes that take minutes or hours. One example IBM gave was upgrading an application from Java 7 to Java 17, which can now be done in mere minutes rather than the hours it once took.
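
To make that concrete, here is a minimal, hypothetical sketch of the mechanical rewrites such an upgrade automates; the class and method names are illustrative rather than taken from IBM’s demo, and a real migration would of course touch far more than one file:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;

    public class LogScanner {

        // Java 7 style, for contrast: an anonymous inner class where a
        // lambda now suffices.
        //
        //   Runnable progress = new Runnable() {
        //       public void run() { System.out.println("scanning"); }
        //   };

        // Java 17 style: lambdas, var, streams and NIO helpers.
        public static List<String> errorsIn(Path logFile) throws Exception {
            Runnable progress = () -> System.out.println("scanning"); // lambda replaces the anonymous class
            progress.run();

            try (var lines = Files.lines(logFile)) {           // try-with-resources plus var
                return lines.filter(l -> l.contains("ERROR"))  // streams replace hand-rolled loops
                            .collect(Collectors.toList());
            }
        }

        // Switch expressions (Java 14+) replace fall-through switch statements.
        public static int severity(String level) {
            return switch (level) {
                case "ERROR" -> 3;
                case "WARN"  -> 2;
                default      -> 1;
            };
        }
    }

Each of these rewrites is mechanical but tedious at scale, which is exactly why the task suits an AI assistant: the intent of the code never changes, only its expression.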

This is the vision behind Project Bob, IBM’s internal tool that has already seen strong results, delivering over 45% productivity gains for more than 6,000 internal developers. Bob is designed for the entire software lifecycle, supporting both the inner loop (coding, debugging, testing) and the outer loop (deployment, resilience, CI/CD, compliance). A core feature is the shift towards literate programming, allowing developers to express their intent in natural language, which the system then translates into code, automating the mundane and augmenting the complex.

The success of Project Bob hinges on its simple user experience, which should foster high engagement and utilization. IBM positioned Bob not as a simple code generator, but as a knowledgeable partner — a “distinguished engineer” for junior staff, or a capable assistant for seasoned experts.

This supports the thesis that AI won’t take people’s jobs; rather, people who use AI will take the jobs of people who don’t. Project Bob can be a programmer’s best friend if used correctly.

Infrastructure simplification: The knowledge graph with ‘Project Infragraph’

The final pillar tackles the growing complexity of the infrastructure required to deploy AI. Unlike sandbox development, managing production infrastructure involves live customer data, real-time user load, and critical security implications. Current infrastructure is highly fragmented across public clouds, private clouds and various management tools (Terraform, Ansible and others), making it hard for both human and AI operators to achieve full context.

IBM developed Project Infragraph for the HashiCorp Cloud Platform as a solution to this complexity. It is a real-time, graph-oriented database of all infrastructure assets and provides the following:

  • Unifying silos: Infragraph aggregates data from public and private clouds, software supply chain tools such as Artifactory and GitLab, security tooling like Wiz and Snyk, and IBM’s own systems like Instana and Turbonomic.
  • Operational intelligence: This unified knowledge graph makes infrastructure easy to understand by mapping the relationships between all components, which is critical for rapid remediation. For example, when a vulnerability such as an OpenSSL flaw lands, an operator can instantly query every impacted web instance and then kick off an automated, linked remediation workflow (for example, revoking a golden image in Packer and redeploying via Terraform), eliminating manual spreadsheets and email chains (see the sketch after this list).
  • Foundation for agents: Infragraph provides a common knowledge-graph data layer that AI agents can query, enabling automated operations through natural-language interfaces and systems like IBM Concert. This allows for intelligent, proactive action on infrastructure data.
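
To illustrate the idea behind such a graph (a minimal, hypothetical sketch, not Infragraph’s actual API or schema), the following models assets as nodes and “built from” links as edges, then answers the question from the remediation example above: which running instances were built from a vulnerable golden image?

    import java.util.*;

    public class InfraGraphSketch {

        // Each asset is a node; its type distinguishes images from instances.
        record Asset(String id, String type) {}

        private final Map<String, Asset> assets = new HashMap<>();
        // Edges: instance id -> ids of the images it was built from.
        private final Map<String, Set<String>> builtFrom = new HashMap<>();

        void addAsset(Asset a) { assets.put(a.id(), a); }

        void linkBuiltFrom(String instanceId, String imageId) {
            builtFrom.computeIfAbsent(instanceId, k -> new HashSet<>()).add(imageId);
        }

        // "Which running instances were built from this vulnerable image?"
        List<Asset> impactedBy(String vulnerableImageId) {
            return builtFrom.entrySet().stream()
                    .filter(e -> e.getValue().contains(vulnerableImageId))
                    .map(e -> assets.get(e.getKey()))
                    .filter(a -> a != null && "instance".equals(a.type()))
                    .toList();
        }

        public static void main(String[] args) {
            var g = new InfraGraphSketch();
            g.addAsset(new Asset("img-golden-ssl", "image"));
            g.addAsset(new Asset("web-1", "instance"));
            g.addAsset(new Asset("web-2", "instance"));
            g.linkBuiltFrom("web-1", "img-golden-ssl");
            g.linkBuiltFrom("web-2", "img-golden-ssl");

            // One query replaces the spreadsheet of impacted hosts.
            g.impactedBy("img-golden-ssl")
             .forEach(a -> System.out.println("remediate: " + a.id()));
        }
    }

In a real deployment, the graph would be populated continuously from cloud APIs and the supply chain and security tools above, and the query result would feed an automated Packer/Terraform remediation pipeline rather than a printout.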

IBM is banking that its ecosystem, combined with productivity-enhancing developer tools like Project Bob and the contextual operational intelligence of Infragraph, with enterprise security layered on top, provides a roadmap to successful, secure and scalable generative AI adoption.

Author: Zeus Kerravala

Zeus Kerravala is the founder and principal analyst with ZK Research. Kerravala provides a mix of tactical advice to help his clients in the current business climate and long-term strategic advice.