Most conversations about AI in video security have centered on forensic search: finding footage faster, identifying objects more accurately, and helping investigators work backward after an event has already happened. In this discussion, Brad Castillo and Matt Cirnigliaro shift the focus to something more immediate: how generative AI can help security teams understand live scenes better, reduce noise, and support faster response while events are still unfolding.
That distinction matters. As Matt explains, many so-called “GenAI” conversations in security are really about the language side of the model, such as making search more intuitive. But live operations demand more than easier searching. They require scene understanding in real time.
A practical way to think about it is this: if a human were assigned to watch one camera nonstop, what would you want that person to notice, ignore, escalate, or act on? That is the operational gap Matt points to. Cameras have long replaced some physical presence, and recordings have long replaced continuous human observation. Generative AI starts to narrow that gap by helping systems interpret what is happening in a scene, not just capture it.
Importantly, this discussion does not suggest that people disappear from the equation. In fact, the opposite point comes through clearly. Matt argues that the human role becomes more important when operators no longer spend most of their time filtering through noise. If AI can reduce nuisance alarms and elevate only the events that truly matter, the operator’s judgment becomes more valuable, not less.
This is also where the conversation gets especially forward-looking. Brad asks what happens when the response is not always human. Matt describes a future where detections can trigger autonomous actions, including robotic dispatch. In the transcript, he frames robotics as just one possible response layer: a robot could be sent to a location, add another camera angle, deliver audio instructions, assist in a medical situation, or even support non-security workflows such as dispatching a cleaning robot when a spill creates a slip hazard.
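The response layer Matt describes can be pictured as a simple mapping from detection types to ordered response actions. The sketch below is purely illustrative, not any vendor's API; every name in it (`DETECTION_RULES`, `respond`, the action strings) is a hypothetical placeholder used to show the routing idea.

```python
# Illustrative sketch only: a minimal rule table mapping detection types to
# response actions, one way to picture the "response layer" described above.
# All names here are hypothetical, not a real product integration.

DETECTION_RULES = {
    "intrusion": ["notify_operator", "dispatch_robot"],
    "medical": ["notify_operator", "dispatch_robot", "open_two_way_audio"],
    "spill": ["dispatch_cleaning_robot"],  # the non-security workflow from the discussion
}

def respond(detection_type: str) -> list[str]:
    """Return the ordered response actions for a detection.

    Unknown detection types fall back to operator review rather than
    triggering anything autonomously, keeping a human in the loop.
    """
    return DETECTION_RULES.get(detection_type, ["queue_for_operator_review"])
```

The fallback branch reflects the discussion's larger point: autonomy handles the well-understood cases, while ambiguous events still route to human judgment.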
The ROI conversation is one of the strongest parts of the discussion because it keeps the subject grounded. Matt does not present AI as valuable simply because it is advanced. He presents it as worthwhile when the technology costs less than the problem it solves. That means end users need to define the operational pain clearly. Is the issue nuisance alarms? Slow response? Staffing gaps? Safety exposure in remote environments? Compliance risk? Loss prevention? Once the problem is defined, the budget logic becomes clearer. That is often the right way to talk about next-generation analytics with serious end users: not as innovation for innovation’s sake, but as a measurable response to a known operational or safety challenge.
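Matt's ROI test ("the technology has to cost less than the problem it solves") reduces to simple break-even arithmetic once the operational pain is quantified. The figures and function names below are placeholders for illustration, not benchmarks from the discussion.

```python
# Hedged, illustrative arithmetic only: the break-even logic described above.
# All numbers are placeholder assumptions, not real benchmarks.

def annual_problem_cost(incidents_per_year: float, cost_per_incident: float,
                        wasted_operator_hours: float, hourly_rate: float) -> float:
    """Rough annual cost of the defined operational problem: direct incident
    losses plus operator time spent filtering noise."""
    return incidents_per_year * cost_per_incident + wasted_operator_hours * hourly_rate

def is_worthwhile(annual_tech_cost: float, problem_cost: float) -> bool:
    """The technology clears the bar when it costs less than the problem."""
    return annual_tech_cost < problem_cost

# Placeholder example: 12 incidents at $2,000 each, plus 500 hours of
# operator time at $30/hour spent on nuisance alarms.
problem = annual_problem_cost(12, 2_000, 500, 30)  # 24,000 + 15,000 = 39,000
```

With the problem valued at $39,000 a year in this example, a deployment costing $20,000 clears the bar and one costing $50,000 does not, which is why defining the problem precedes the budget conversation.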
The industries Matt highlights first also make sense. He points to remote or dangerous environments where sending people is difficult, slow, or risky, as well as large-scale retail operations where smaller operational issues compound across many locations.
The biggest takeaway from this conversation is that generative AI in video security should not be viewed only as a better way to search recorded footage. The more disruptive opportunity is on the live side: improving situational awareness, surfacing the right events sooner, reducing wasted human attention, and opening the door to guided or autonomous response.
Contact our team to discuss improving your security posture intelligently >>
FAQs
What is generative AI doing differently in video security?
Generative AI can move beyond helping users search old footage and start helping systems interpret live scenes more like a human observer would.
How is this different from forensic search?
Forensic search is primarily about finding recorded events faster. The live-use case is about understanding what is happening now and helping operators respond in the moment.
Will generative AI replace security operators?
Operators remain essential, and their role may become more important because AI can filter out more noise and surface the events that deserve real human judgment.
What does autonomous response mean in this context?
Autonomous response includes actions triggered by detections, such as notifications, dispatching robotics, adding live video from another device, or guiding a response based on the type of event.
How should end users think about ROI?
The technology has to cost less than the problem. That means users should start by defining the operational problem clearly and quantifying the cost of incidents, inefficiencies, and nuisance activity.