AI-powered documentation support: A conversation with Warp’s Danny Neira
Warp’s Danny Neira shows how AI chatbots and unified knowledge sources power scalable support, self-serve documentation, and product-led growth.

Danny Neira is a Support Engineer at Warp, where he plays a key role in managing documentation alongside the engineering and marketing teams. With two to three years at the company, he brings a unique perspective on documentation as both a user resource and a support tool, and on the challenge of proving its impact on retention.
We sat down to discuss how Warp approaches documentation as a collaborative effort, what makes for effective technical documentation, and the ongoing challenge of measuring documentation’s impact on user retention.
Tell me about your background and how you work with documentation at Warp.
I’m the support engineer here at Warp, and I kind of pseudo-own the docs. They’re co-owned between engineering, our marketing team, and the support team. When engineering creates new features, they’ll write out some docs, and then our marketing team will usually go through and make sure we’re not being too sales-y, because we want to keep the docs technical rather than turning them into another piece of marketing. We already have our marketing page for that, so the docs don’t need to be a space for it.
As a support person, if we get any feedback on the features or the application itself, I’ll go in and make changes to the docs and enhance things. Another thing I do is make sure we put our changelogs in there. There are sometimes little tweaks, enhancements, and things that maybe don’t deserve a dedicated doc page, so those can all live in the changelog.
How does that collaborative ownership work in practice?
It works pretty well because everyone has their area of focus. Engineering knows the technical details and how features work. Marketing helps keep things accessible and not overly technical in a way that loses people. And support — that’s where I come in — we’re the ones hearing from users about what’s confusing or what’s missing.
That feedback loop is really valuable. When users hit a snag and reach out, whether through Discord or other channels, I can go back and update the docs to address those gaps. It’s an ongoing process of refinement based on real user needs.
“One of the biggest challenges is providing people with what they need without going too deep into the weeds, because then you just start to lose folks. We try to say, ‘If you just want to use the feature, these are the shortcuts, this is where you click,’ and we keep that separated from ‘this is how it works.’”
Gotcha. Talking more broadly, what do you think makes for good documentation?
I think one of the biggest challenges is providing people with what they need without going too deep into the weeds, because then you just start to lose folks and they might not actually read all that. One thing that we try to do is break our docs down into different sections with nice big headers. We try to say, “If you just want to use the feature, these are the shortcuts, this is where you click,” and we keep that separated from “this is how it works.”
If we do want to get into more details of how it works, we’ll do that in a different section. We usually include a visual representation as well. Just short, sweet — that’s what gives us the best result. We’ve actually gotten feedback from folks about the visual part, like comments on Loom videos or things like that.
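As a concrete illustration of that layout, here is a rough skeleton of a docs page split the way Danny describes: usage up front, internals separated below, with a visual in between. The feature name, shortcut, and headings are invented for the example, not taken from Warp’s actual docs.

```markdown
# Feature Name

One-line summary of what the feature does.

## Using the feature

- Shortcut: `Cmd-Shift-X` (an example binding, not Warp's)
- Or click **Menu → Feature Name**

![Short demo GIF or annotated screenshot]

## How it works

A deeper explanation for readers who want the internals,
kept separate so quick-start readers can stop above.
```

Readers who only want to use the feature never have to scroll past the first section, and the ones who want depth know exactly where to find it.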
That sounds like a great approach. You also mentioned using AI to help with support. How has that changed things?
We use an AI chatbot integration that pulls from multiple sources: our docs, Discord history, GitHub issues, and blog posts. Before we had it, users would ask questions on Discord and there was a lot more back-and-forth troubleshooting. That’s great if you have a lot of volunteers or moderators or support folks, but we don’t. So it’s been helpful in that sense.
That said, a little bit of human touch and empathy is good sometimes, so I would say that’s kind of a downside. The bot might not fully unblock someone, so either I or some of the moderators will scan through and make sure: “Hey, is this person good, or do they still need help?” And then we’ll intervene.
It has feedback buttons, so we can refine answers and improve responses over time. But all of that requires a person in the mix who knows what the truth is, rather than just trusting everything the bot spits out. I would say it’s 80 to 90% accurate, and I’ve been impressed by some situations where it even responds correctly to questions in different languages. So it’s been helpful, but it’s not an end-all-be-all solution.
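To make the shape of that setup concrete, here is a minimal, self-contained Python sketch of the same idea: several knowledge sources feeding one searchable pool, naive keyword overlap standing in for real semantic retrieval, and feedback buttons that nudge future rankings. Everything in it (the `SupportBot` class, the sample snippets) is hypothetical illustration, not Warp’s actual integration.

```python
from dataclasses import dataclass


@dataclass
class Snippet:
    """One piece of indexed content from any knowledge source."""
    source: str  # e.g. "docs", "discord", "github", "blog"
    text: str
    upvotes: int = 0    # running tallies from the feedback buttons
    downvotes: int = 0


class SupportBot:
    """Toy multi-source answer bot with a feedback loop."""

    def __init__(self) -> None:
        self.snippets: list[Snippet] = []

    def ingest(self, source: str, texts: list[str]) -> None:
        """Pull content from one knowledge source into the shared pool."""
        self.snippets.extend(Snippet(source, text) for text in texts)

    def answer(self, question: str) -> Snippet | None:
        """Return the best-matching snippet, or None if nothing is relevant.

        Keyword overlap is a stand-in for real semantic search; the
        up/down tallies bias ranking so downvoted answers sink over time.
        """
        q_words = set(question.lower().split())

        def score(s: Snippet) -> float:
            overlap = len(q_words & set(s.text.lower().split()))
            return overlap + 0.5 * (s.upvotes - s.downvotes)

        best = max(self.snippets, key=score, default=None)
        if best is None or score(best) <= 0:
            return None  # no confident answer: a human should step in
        return best

    def feedback(self, snippet: Snippet, helpful: bool) -> None:
        """Record a thumbs up/down so future ranking improves."""
        if helpful:
            snippet.upvotes += 1
        else:
            snippet.downvotes += 1


bot = SupportBot()
bot.ingest("docs", ["Use Cmd-P to open the command palette."])
bot.ingest("discord", ["Restarting the app fixed my command palette issue."])

hit = bot.answer("How do I open the command palette?")
if hit is not None:
    print(f"[{hit.source}] {hit.text}")
    bot.feedback(hit, helpful=True)  # the feedback button, in miniature
else:
    print("No confident answer; escalating to a human.")
```

A production integration would swap the keyword overlap for embeddings and an LLM, but the moving parts Danny describes are the same: many sources, one pool, and human-reviewed feedback keeping the answers honest.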
Absolutely. And you mentioned you’re thinking about documentation as a retention tool. How are you approaching that?
That’s something I just recently started to think about. We’ve been trying to improve our retention, and one question I have that I’m still figuring out is: how can we measure and quantify that? How can we show that better docs would help with retention?
We don’t really measure our success in the docs that much. I know there’s the thumbs-up, thumbs-down, happy-face widget in our GitBook where folks can rate a specific page, and I do occasionally look at that and say, “Well, this one page needs some work.” But in terms of directly measuring how the docs are helping us retain users, I really don’t know the answer to that.
It’s a good question: how do I take something that’s disconnected from the product and connect it? We get a lot of NPS survey responses from folks who have been using the product, and that’s one of the ways we measure why folks are leaving. We also have ways of measuring what type of folks are leaving after what period of time, because our onboarding surveys ask people to say, “I’m front-end, I’m back-end, I’m DevOps,” or whatever.
So we have some ways, but they’re all in the product. Turning that to something that is outside the product, I think that’s definitely going to be a challenge. We’ll see how we sort that out.
This interview was published on 24 September 2025 and was conducted as part of research for the 2025 State of Docs report.


