Most "AI-powered" project management tools in 2026 do the same thing: bolt a sidebar chat onto the product, have it summarise a few tickets, and rebrand as an AI company. The rest of the platform — the data model, the scheduling logic, the way people collaborate — is the same software they were selling in 2019.
We took a different bet with Vero. We rebuilt the platform from the data layer up around an AI engine we call AIBOL, with a knowledge graph at the core, real-time co-editing under 500ms, and AI hooks in every workflow — risk identification, document generation, traceability, semantic search, and quality validation. Today Vero runs across 5,000+ organisations.
This piece is about why "AI-native" is not a marketing word, and the architecture choices that flow from it.
The problem with bolted-on AI
A bolted-on AI assistant has three structural problems:
- It only sees what you copy into the chat. The model has no live context of your portfolio, no awareness of who edited what, no link between a status report and the underlying deliverable.
- The data model wasn't designed for retrieval. Most PMO tools store project data in relational tables optimised for the UI, not for an LLM to reason over. You end up retrofitting embeddings as an afterthought, and the answers are shallow.
- AI is a feature, not a substrate. It lives in one corner of the product. The risk register, the timesheet, the dashboard — none of them get smarter.
What AI-native actually means
For us, AI-native came down to four invariants:
1. The knowledge graph is the source of truth, not the database
Every entity in Vero — projects, tasks, requirements, deliverables, people, risks, issues, meetings, decisions — is a node in a graph. Edges carry semantic meaning: implements, blocks, derived-from, assigned-to, raised-against. We use GraphRAG (graph-based retrieval-augmented generation) on top of this, which is why semantic queries like "show me every risk that touches the regulator-facing deliverables on the Q3 portfolio" work — and work fast — without hand-written joins.
A flat vector store can't answer that question well. It returns similar text; it can't reason about paths.
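To make the difference concrete, here is a minimal sketch of that kind of path query over a typed graph. The in-memory adjacency-list model, entity labels, and edge names below are illustrative stand-ins, not Vero's actual schema or GraphRAG stack:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy typed graph: nodes carry an entity label, edges carry a relation."""
    def __init__(self):
        self.edges = defaultdict(list)   # src -> [(relation, dst)]
        self.labels = {}                 # node -> entity type

    def add_node(self, node, label):
        self.labels[node] = label

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbours(self, node, relation):
        return [dst for rel, dst in self.edges[node] if rel == relation]

# A toy portfolio: two deliverables, one regulator-facing, risks raised against each.
g = KnowledgeGraph()
g.add_node("D1", "deliverable"); g.add_node("D2", "deliverable")
g.add_node("R1", "risk");        g.add_node("R2", "risk")
g.add_edge("D1", "facing", "regulator")
g.add_edge("R1", "raised-against", "D1")
g.add_edge("R2", "raised-against", "D2")

def risks_touching_regulator_facing(graph):
    # Walk risk -> raised-against -> deliverable -> facing -> regulator.
    # This is a path constraint, which similarity search alone cannot express.
    hits = []
    for node, label in graph.labels.items():
        if label != "risk":
            continue
        for deliverable in graph.neighbours(node, "raised-against"):
            if "regulator" in graph.neighbours(deliverable, "facing"):
                hits.append(node)
    return hits

print(risks_touching_regulator_facing(g))  # ['R1']
```

The retrieval step in GraphRAG resolves the structural part of the question over edges like these, then hands the resulting subgraph to the model as context.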
2. AI is wired into every workflow, not gated behind a chat
In Vero, the model touches:
- Document generation — charters, status reports, meeting minutes — produced with full project context, not from a blank prompt.
- Risk management — predictive heat maps that identify risks ~30% earlier by reading drift signals across schedule, budget, and resource utilisation simultaneously.
- Traceability — automatic links between requirements, deliverables, and test cases. No manual matrix.
- Quality validation — AI reviewer that checks artefacts against organisational standards before they reach a human reviewer.
- Semantic search — natural-language queries across the entire portfolio.
None of this is in a chat sidebar. It's in the place the user already is.
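As one example of the shape these hooks take, automatic traceability reduces to scoring candidate requirement-to-deliverable links above a threshold. A production system would compare embeddings; in this dependency-free sketch, Jaccard token overlap stands in for similarity, and all IDs, texts, and the threshold are made up:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity: a crude stand-in for embedding similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def auto_link(requirements, deliverables, threshold=0.3):
    """Propose traceability links wherever similarity clears the threshold."""
    links = []
    for req_id, req_text in requirements.items():
        for del_id, del_text in deliverables.items():
            if jaccard(req_text, del_text) >= threshold:
                links.append((req_id, del_id))
    return links

requirements = {"REQ-7": "export audit log as csv"}
deliverables = {
    "DEL-2": "csv export of the audit log",
    "DEL-9": "mobile login screen",
}
print(auto_link(requirements, deliverables))  # [('REQ-7', 'DEL-2')]
```

The proposed links then land in the graph as `derived-from` edges, which is what keeps the matrix maintenance-free.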
3. Real-time collaboration under 500ms
This is where most platforms quietly give up. Sub-500ms co-editing across distributed teams means CRDTs (we use a state-based variant), a WebSocket layer with backpressure handling, and a server architecture that doesn't serialise edits through a single database write. We wrote about the engineering trade-offs separately in Real-time co-editing under 500ms: what it actually takes.
Why does this matter for an AI-native PMO? Because the AI sees edits as they happen. Live cursor positions, live document state, live risk signals. The model isn't summarising a stale snapshot from last night's batch job — it's reasoning over the same state the team is.
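The key property of a state-based CRDT is a merge function that is commutative, associative, and idempotent, so replicas converge no matter the order in which they exchange state. Vero's CRDT is far more elaborate than this; the sketch below is a last-writer-wins map with tie-breaking by replica ID omitted for brevity, just to show the merge property:

```python
def lww_set(state, key, value, timestamp):
    """Apply a local write if it is newer than the stored one."""
    current = state.get(key)
    if current is None or timestamp > current[1]:
        state[key] = (value, timestamp)

def lww_merge(a, b):
    """Merge two replica states; no central database write in the path."""
    merged = dict(a)
    for key, (value, ts) in b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

replica_a, replica_b = {}, {}
lww_set(replica_a, "title", "Q3 status report", 1)
lww_set(replica_b, "title", "Q3 status report (final)", 2)
lww_set(replica_b, "owner", "dana", 1)

# Both merge orders yield the same converged state.
assert lww_merge(replica_a, replica_b) == lww_merge(replica_b, replica_a)
print(lww_merge(replica_a, replica_b)["title"][0])  # Q3 status report (final)
```

Because merging is a pure function over replica state, edits propagate peer-to-peer over the WebSocket layer and the latency budget is dominated by the network, not by lock contention on a database row.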
4. Strict data governance — multi-LLM, never lock-in
Enterprises don't want their proposal pipeline going to one US-hosted model. Vero supports OpenAI, Anthropic Claude, Azure OpenAI on private deployments, and OpenRouter — configurable per workspace, with audit logs of every prompt and every response. For regulated GCC clients, the same engine can run against an Azure OpenAI deployment in a UAE region. (More on this in AI governance for regulated enterprises in the GCC.)
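The shape of that governance layer can be sketched as per-workspace routing plus an append-only audit trail. The provider names below mirror those mentioned above, but the config keys, workspace IDs, region strings, and log format are all assumptions for illustration, not Vero's real interfaces:

```python
import time

# Hypothetical per-workspace routing config.
WORKSPACE_CONFIG = {
    "acme-eu":  {"provider": "azure-openai", "region": "uaenorth"},
    "acme-lab": {"provider": "anthropic",    "region": "us"},
}

AUDIT_LOG = []  # append-only record of every prompt and response

def route_completion(workspace, prompt, call_provider):
    """Dispatch to the workspace's configured provider and audit the exchange."""
    cfg = WORKSPACE_CONFIG[workspace]
    response = call_provider(cfg["provider"], cfg["region"], prompt)
    AUDIT_LOG.append({
        "ts": time.time(),
        "workspace": workspace,
        "provider": cfg["provider"],
        "region": cfg["region"],
        "prompt": prompt,
        "response": response,
    })
    return response

# Stub provider call so the sketch runs without network access.
fake = lambda provider, region, prompt: f"[{provider}/{region}] ok"
print(route_completion("acme-eu", "Summarise risk register", fake))
# [azure-openai/uaenorth] ok
```

The point of the indirection is that swapping a model is a config change, not a code change, and the audit log is written at the routing layer so no workflow can bypass it.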
The numbers that came out the other side
Two years in, across the deployments we have telemetry on:
- −45% average issue resolution time (automation + earlier risk surfacing).
- −40% time from proposal to approved project (the AI-assisted Proposal Pipeline does the heavy compliance / business-case lift).
- −60% internal email volume in the average customer org (real-time co-editing replaces send-and-review loops).
- +35% on-time delivery rate on tracked milestones.
- 99.99% uptime, <500ms co-edit latency at the 95th percentile.
These aren't marketing rounding errors — they're the kind of numbers that only show up when AI is genuinely in the path of work, not bolted on the side.
The lesson, if you're building something similar
If you're considering an AI feature on top of an existing tool, ask yourself an uncomfortable question: would this AI feature be just as useful if it lived in a separate browser tab? If the answer is yes, your AI is a chatbot, not a substrate. The leverage comes from the parts that only work because the AI is inside the data model — predictive risk, autonomous traceability, real-time quality validation.
That's the bar we set for Vero. It's also the reason we describe it as a 360° platform — every workflow gets the engine, not a chosen few.