If asked: Is this data secure? Where does my query go?

Amber runs within the SIG’s own Azure tenant. Your queries are processed by Azure OpenAI, which is subject to Bluetooth SIG’s enterprise data agreements with Microsoft — the same agreements that govern everything else we run on Azure. Queries are not sent to a public AI endpoint, and we do not use query data to train any external model. This was a deliberate architectural decision — we moved off Anthropic Claude and onto Azure OpenAI specifically to address the organization’s compliance requirements around external AI services. Michael and Chad were involved in that decision from the IT security side.

If asked: How accurate is it really?

The honest answer is: more accurate than anything else available for Bluetooth-specific questions, but not perfect. The 82% win rate versus ChatGPT is real, but it’s a competitive benchmark — it tells you we’re better than the alternative, not that we’re infallible. What we’re building toward is a reliability score against SME-verified ground truth answers, which is a much higher bar. That evaluation is running now. Every answer includes a source citation so you can verify, and that transparency is intentional — we want members to be able to check the work, not just trust it.
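The difference between the competitive benchmark and the reliability evaluation can be sketched in a few lines. Everything below is an illustrative stand-in, not the actual harness: the function names, data shapes, and numbers are assumptions made for the example.

```python
# Illustrative sketch only. The real evaluation harness is internal,
# so all names and data here are hypothetical stand-ins.

def win_rate(judgments):
    """Competitive benchmark: fraction of questions where our answer
    was judged better than the alternative's answer."""
    wins = sum(1 for j in judgments if j == "ours")
    return wins / len(judgments)

def reliability_score(answers, ground_truth, is_correct):
    """Higher bar: fraction of answers that match SME-verified
    ground truth, independent of how any competitor performed."""
    correct = sum(
        1 for q, a in answers.items()
        if is_correct(a, ground_truth[q])
    )
    return correct / len(answers)

# A system can win most head-to-head comparisons while still falling
# short of ground truth on an absolute scale.
judgments = ["ours", "ours", "theirs", "ours", "ours"]
print(win_rate(judgments))  # 0.8
```

The point of the sketch is that a win rate only ranks two systems against each other, while a reliability score measures each answer against a fixed, expert-verified reference.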

If asked: What happened early on — I heard the win rate was much lower?

It was. Our initial evaluation in February showed a 25% win rate. Alex Y's analysis identified two main causes: about half the failures were missing documents, specs that hadn't yet been loaded into the index, and the other major cause was over-aggressive routing logic that refused legitimate SIG process questions before they ever reached the AI. Both of those are fixable problems, and we fixed them. Going from 25% to 82% in a controlled, reproducible way is actually the best kind of story: it means we understand the system well enough to improve it deliberately. That's what we want before member launch.
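A failure triage like the one described above amounts to bucketing each failed question by its diagnosed root cause. The sketch below is purely illustrative; the field names and category labels are assumptions, not the actual analysis code.

```python
# Hypothetical sketch of bucketing evaluation failures by root cause.
# The two categories mirror the problems described above; the record
# format and labels are illustrative assumptions.
from collections import Counter

def triage(failures):
    """Count failed questions by diagnosed root cause."""
    buckets = Counter()
    for f in failures:
        if not f["doc_in_index"]:
            buckets["missing_document"] += 1        # spec never indexed
        elif not f["routed_to_model"]:
            buckets["over_aggressive_routing"] += 1  # refused pre-model
        else:
            buckets["other"] += 1
    return buckets

failures = [
    {"doc_in_index": False, "routed_to_model": True},
    {"doc_in_index": True,  "routed_to_model": False},
    {"doc_in_index": False, "routed_to_model": True},
]
print(triage(failures))
# Counter({'missing_document': 2, 'over_aggressive_routing': 1})
```

Once failures are bucketed this way, each bucket has an obvious remediation: load the missing specs, loosen the routing rules, and re-run the same evaluation to confirm the gain.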

If asked: Why Azure OpenAI and not Anthropic / ChatGPT / other?

We started with Anthropic Claude during POC, and it worked well technically. The migration to Azure OpenAI was driven by organizational compliance requirements, specifically around external AI services and data handling. Azure OpenAI runs within our existing enterprise agreements with Microsoft. It wasn't a technical preference; it was the right governance decision. The system performs well on Azure OpenAI.

If asked: How does this relate to the Microsoft partnership?

Microsoft is our production development partner. The relationship is: we own the product — the requirements, the roadmap, the domain knowledge, the evaluation methodology. Microsoft’s development services team handles production infrastructure replatforming. We’ve described this as roughly a 30/70 split: 30% of what we’ve built in the POC — the routing logic, the domain knowledge, the query translation layer, the evaluation harness — transfers directly and represents the SIG’s intellectual contribution. The other 70% is infrastructure work that Microsoft will replatform onto managed Azure services. We remain the product owner throughout.

If asked: Does it replace [tool X]?

No. Amber doesn’t replace QW, PTS, the qualified products database, the spec portal, or Zendesk. It’s an intelligence layer on top. Think of it as the front door that knows which room to send you to — and can walk you there — rather than a replacement for any of the rooms.
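The "front door" framing can be sketched as a thin routing layer sitting above the existing tools. The tool names come from the answer above; the keyword rules and fallback text are toy assumptions for illustration, since a real router would use the model rather than string matching.

```python
# Minimal illustration of an intelligence layer that sends a query to
# the right existing system instead of replacing any of them.
# Keyword rules are toy assumptions; a real router would be model-based.

ROUTES = [
    ("qualification", "QW"),
    ("test", "PTS"),
    ("qualified product", "qualified products database"),
    ("spec", "spec portal"),
    ("support ticket", "Zendesk"),
]

def route(query):
    """Return the destination system for a query, with a fallback."""
    q = query.lower()
    for keyword, destination in ROUTES:
        if keyword in q:
            return destination
    return "answer directly with cited sources"

print(route("Where do I start a qualification listing?"))  # QW
```

The design point is that every branch ends at a system members already use; the layer only decides which door to open, or answers directly with a citation when no tool applies.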

If asked: Is Chapter 2 waiting on the AsciiDoc migration?

No, it’s not a technical prerequisite. Our current data is structured well enough to work with now. The coordination question is about rollout sequencing and messaging — specifically, how we position Chapter 2 in the context of the broader AsciiDoc transition that Project Blue is driving. We’re not going to let technical development be blocked on a migration timeline, but we will coordinate how we roll out to members so the two efforts reinforce each other.

If asked: What’s the timeline for Chapter 2?

Scoping begins in earnest after Bangkok, once member launch planning solidifies. We'll be engaging working groups and test authors directly to define scope based on real authoring workflows. I don't have a specific delivery date yet; that's deliberate, because Chapter 2's shape will be influenced by what we learn from Chapter 1 in the field.