September 2025

// Video Solutions

Agentive AI Reshapes Security

As agentive AI becomes a reality, experts speak to how integrators can prepare for AI-induced shifts in the industry.

By Christopher Crumley, SDM Contributing Writer

According to the experts, agentive AI will redefine roles — and fast. Image courtesy of RAD


Agentive AI, capable of taking autonomous action, is no longer speculation about where the technology could go; it is already here. It is rapidly evolving and redefining roles in the security industry. Monitoring centers are feeling its effects, and security integrators are seeing it change how systems and solutions are deployed and configured. So how can integrators and monitoring providers prepare for these AI-induced shifts?

Redefining Roles

“For the integrator, an agentic AI might make setting up the system much quicker and easier,” explains Quang Trinh, business development manager, platform technologies, Axis Communications, Chelmsford, Mass. “The logic complexities of devices, components, and system settings that make up the business logic will be translated by the AI agent. This means that optimizing analytics or settings can be done by simply engaging with the AI agent in the same way that you would a person.”

Trinh provides a real-world example: “An integrator that is setting up a camera to count the number of people and vehicles entering a facility could simply say, ‘I need to count all traffic into the facility from people and vehicles during business hours.’ The AI agent will understand this and connect all the devices that will gather this data for the customer. It will know which cameras and other sensors are in the facility and will set up the complex business logic of correlating the data from all these sensors into insightful information for the system.”
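To make Trinh’s example concrete, here is a minimal, hypothetical sketch of that kind of translation layer: an agent maps a plain-language request onto counting rules for whichever devices can support them. The device names, capability flags, and keyword matching are illustrative assumptions, not any vendor’s API.

```python
"""Hypothetical sketch: translating a plain-language request into
device-level counting rules. All names and fields are illustrative."""
from dataclasses import dataclass, field

@dataclass
class Camera:
    name: str
    capabilities: set                       # analytics the device supports
    rules: list = field(default_factory=list)

def configure_counting(cameras, request: str):
    """Naive 'agent': infer object classes and a schedule from the
    request, then push a counting rule to every capable camera."""
    classes = [c for c in ("people", "vehicles") if c in request.lower()]
    schedule = "business_hours" if "business hours" in request.lower() else "24x7"
    configured = []
    for cam in cameras:
        if "object_counting" in cam.capabilities:
            cam.rules.append({"count": classes, "schedule": schedule})
            configured.append(cam.name)
    return configured

fleet = [
    Camera("entrance-east", {"object_counting"}),
    Camera("lobby-ptz", {"motion_only"}),   # lacks the needed analytic
]
print(configure_counting(
    fleet, "I need to count all traffic into the facility from people "
           "and vehicles during business hours."))   # ['entrance-east']
print(fleet[0].rules)   # [{'count': ['people', 'vehicles'], 'schedule': 'business_hours'}]
```

In a real agentic system, the keyword matching would be replaced by a language model and the rule push by vendor device APIs; the point is the shape of the translation, not the parsing.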

This agentive AI can make security solutions more proactive. “What we’re doing with auto patrol is pretty interesting,” shares Chris Brown, CEO, Immix, Tampa, Fla. “We’re proactively going out using AI to find situations to then respond to, whether that’s engaging an operator or autonomously playing a set of messages to change behavior or push somebody out of a site. That’s where things really are today. I do, however, think there is a next level — that’s what we’re talking about, and it really is here. We’re working to do some integration work with Robotic Assistance Devices (RAD). With that, our goal would be to literally be able to have an autonomous agent that receives an alarm event, with no human involved.”

Steve Reinharz, CEO, CTO, and founder, Robotic Assistance Devices (RAD), Ferndale, Mich., adds, “Agentive AI will force a complete redefinition of traditional roles, and fast. Integrators who rely on installing yesterday’s hardware with a human-in-the-loop business model are going to find themselves increasingly irrelevant. Monitoring centers that operate like call centers will face major disruptions as AI agents outperform human operators in speed, scale, consistency, and cost. This isn’t theoretical. RAD is already deploying systems like SARA (Speaking Autonomous Responsive Agent) that not only detect, but engage, escalate, and resolve incidents autonomously. The value chain is collapsing, and those not embracing AI-enabled operations will be left behind.”

While agentive AI may introduce new vulnerabilities, the risk of swearing off the technology is being left behind. Image courtesy of Chayada Jeeratheepatanont / iStock / Getty Images Plus / Via Getty Images

“Agentive AI will force a complete redefinition of traditional roles, and fast. Integrators who rely on installing yesterday’s hardware with a human-in-the-loop business model are going to find themselves increasingly irrelevant.”

— Steve Reinharz, RAD

Does Agentive AI Increase Risks?

As the power to act autonomously arrives, the concern of new vulnerabilities comes with it. “The biggest risk is unintended actions,” Trinh says. “The real world is dynamic, and as much as an AI is built by a company or a person, it is not possible to imagine every possible scenario. As such, the AI will not be exposed to these exponential statistical outcomes that are possible in real-life environments.”

Trinh adds, “A lack of guardrails and inadequate training data are two additional risks. If there are not enough guardrails set with the AI agent, there is the chance of a bad output. Similarly, if there is not enough training data provided for the AI agent to fully understand the logical structures of a system and the organization’s business processes and procedures, it will not be able to appropriately fulfill the desired outcome.”

Trinh warns that risks will always be present. “Mitigating them will come down to transparency from vendors and whether the end customers understand their responsibilities for minimizing risks and evolving the system over time as improvements are made and adjusted,” he says.

Still, some see agentive AI as less risky — not more. “There’s always going to be risks in providing security solutions,” Brown says. “If it’s dispatch, it’s sending a responding party into an event that’s not fully defined. Some would say that there’s a risk in missing the event. I would say that AI is doing a better job at capturing the event and defining it and kind of isolating it so it can be managed [better] than a human is doing. So, I don’t see a lot of risk. I think the risk is the same risk we have in our industry all the time. It’s the ability for operators in the space — the monitoring centers — to adopt or get comfortable with this type of technology. I think the risk of them not doing that is being left behind.”

Reinharz puts it bluntly: “The real risk isn’t AI going rogue. It’s humans failing to properly define what success looks like in the first place. Poorly scoped policies, weak oversight, or blindly trusting any AI system without validation leads to trouble. That said, we must accept that agentive AI is going to make decisions, and, yes, sometimes make mistakes, at scale. The industry needs to be mature enough to weigh the current human error rate against possible AI error. Spoiler: humans don’t exactly have a flawless track record. Risk doesn’t disappear with AI, but it becomes measurable, trainable, and, most importantly, improvable. That’s not a risk, that’s progress.”

Agentive AI’s Best Role

So what roles are best suited for agentive AI? “We are still in the early phases of what can be done with agentic AI,” Trinh says. “Many vendors are asking what repetitive logic and tasks in their systems can be offloaded to an AI agent. In many cases, SOPs can benefit from agentic AI, along with system settings, calibration, and optimization. The applications that will succeed are the ones that can balance business costs and execution. Commercial success in our markets and other markets will determine what is accepted and grows versus what will not.”

Brown sees tremendous potential in monitoring. “Working in a monitoring center is not an easy job. There’s a lot of information coming at you,” he says. “You have to process a lot of data all at once — you have to make life-and-death decisions, and you’re engaging with others to help solve a problem. This is an opportunity to use technology to streamline the workflow, to deliver better actionable data into the hands of the operator so that their decisions are more strategic. They have better information to make a decision that cuts right to the core of what the problem is and how to solve it. I think those are all huge pluses, and I think the more that monitoring center operators get comfortable with having that, the better.”

An example is being able to pull out the elements of a scene differently than a human would, Brown says. “In the human lens, you’re going to look at a scene and see the most relevant or most prominent threat,” he explains. “It’s the nuanced threats in the background that a human may miss because they’re drawn to the target. But are there other things happening in that scene that are also threats that need to be delivered, articulated, or acted on by them or by a responding party? The answer sometimes is yes, and I think AI is going to have the ability to bring those nuances to the forefront and deliver a comprehensive assessment of the scene for the operator.”

“Risks will always be present. Mitigating them will come down to transparency from vendors and whether the end customers understand their responsibilities for minimizing risks and evolving the system over time as improvements are made and adjusted.”

— Quang Trinh, Axis Communications

Preparing for Agentive AI

How can integrators prepare for a future that is putting more power in the hands of machines? “It starts with education and awareness,” says Quang Trinh of Axis Communications. “By using trusted resources, like NIST, it is possible to stay up to date on the capabilities of different AI systems and architectures.”

He adds, “Signed video is one step to ensure data integrity in video and images, since the source of the data can be verified for any alterations. Moving forward, it’s important for the AI community to take a page from the cybersecurity playbook and ‘trust but verify’ all AI systems and architectures.”

Reinharz adds, “Video monitoring, access control, and threat triage are ripe for disruption by agentive AI. These are domains that require 24/7 attention, fast pattern recognition, and decision-making under pressure — tasks where humans struggle with fatigue, bias, and inconsistency. Agentive AI excels here. It can flag anomalies, engage with intruders or visitors, and escalate based on policy, all in real time and without hesitation. We’ve built SARA to do exactly this, and the results are proving the point: smarter, faster, less expensive outcomes with no loss in reliability — in fact, usually an improvement.”

Tracy Larson, president, WeSuite, White Plains, N.Y., underscores the importance of human oversight: “AI is only as smart as AI is right now. … It will continue learning; it will continue getting better. But you have to look at things, you have to review them, and it is on our shoulders as humans to go ahead and ask, ‘Does that really fit what I’m trying to do?’”

Forgeries & Fakes

Another growing concern is the fear of the “deepfake,” or forged evidence. “The impact will be significant,” Trinh says. “Systems that currently handle the watermarking, encryption, and the security of the data will need to pivot to a horizontal strategy. Signed video, where every frame of the video can be traced back to the root source device, will ensure that any manipulated video data will be invalidated.”
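As a rough illustration of the mechanism Trinh describes, the sketch below signs a hash of each frame with a key held on the source device, so any altered frame fails verification. It uses the Python `cryptography` package and is a generic example of per-frame signing, an assumption about the general approach rather than Axis’s signed-video implementation.

```python
"""Generic per-frame signing sketch: illustrative, not a vendor product."""
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()   # private key stays on the camera
public_key = device_key.public_key()        # shared with anyone verifying footage

def sign_frame(frame: bytes) -> bytes:
    """Sign a digest of the frame on the source device."""
    return device_key.sign(hashlib.sha256(frame).digest())

def verify_frame(frame: bytes, signature: bytes) -> bool:
    """Verification fails for any frame altered after capture."""
    try:
        public_key.verify(signature, hashlib.sha256(frame).digest())
        return True
    except InvalidSignature:
        return False

frame = b"raw frame bytes"
sig = sign_frame(frame)
print(verify_frame(frame, sig))              # True: traces to the source device
print(verify_frame(b"tampered frame", sig))  # False: manipulation invalidated
```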

Brown sees video integrity as a frontline issue. “The ability in today’s world to manufacture evidence with all of the AI tools that are out there today, that anybody can get a hold of, is insane,” he says. “So, the ability to create an event that isn’t real and use it in evidence is certainly available today. I think, inside the industry, everyone’s working hard to do things to the video — watermark, whatever it might be — in order to keep that video consistent and protected so that, when it does become evidence, it isn’t able to be manipulated.”

Brown continues, “We certainly do that in our audit trail. We stitch the audit trail together so it’s very difficult or would be almost impossible to edit that information. It’s an interesting angle that people are going to need to start to pay better attention to. We need to make sure that evidence is truly coming from a reputable source or a reputable platform rather than just a camera recording on a jump drive.”
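One common way to “stitch” records together in the way Brown describes is a hash chain: each audit entry carries the hash of the previous one, so editing any record breaks every link after it. The sketch below illustrates that general technique; it is an assumption about the approach, not Immix’s implementation.

```python
"""Hash-chained audit trail sketch: a generic tamper-evidence technique."""
import hashlib, json

def append_entry(chain: list, event: str) -> None:
    """Each record embeds the previous record's hash before being hashed."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def chain_is_intact(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

trail = []
append_entry(trail, "alarm received")
append_entry(trail, "operator dispatched")
print(chain_is_intact(trail))    # True
trail[0]["event"] = "no alarm"   # tamper with history
print(chain_is_intact(trail))    # False: the stitching catches the edit
```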

Reinharz says the response must evolve. “Generative AI is both a weapon and a shield,” he says. “On one hand, it opens up dangerous new attack vectors: convincing deepfakes, spoofed audio, synthetic credentials. On the other hand, it arms defenders with tools just as advanced. The industry’s response must be layered: authenticate at the edge, verify across multiple signals, then apply AI-driven analysis to detect inconsistencies no human could spot. Security must evolve from trusting what’s seen and heard to trusting what’s verified by intelligence. We’ve already started that transition. Others must follow, or they’ll fall victim to it.”

Shifting Liability?

As we move towards more agentive AI, how does that impact liability?

“Yes, agentive AI could shift liability, but we need to add context,” says Quang Trinh of Axis Communications. “There is already state, federal, and other global legislation that will form the framework for liability. Much of the mainstream discussion around AI has placed a high value on these outputs, but AI really is tasked to output a result based on what it is given as input. As humans, we have constructed guardrails based on our own ethics; however, these vary from person to person.”

There are already rules on data privacy, which will be a good foundation for crafting other regulations on AI and its downstream use.

“It’s important for the industry to stay informed on what is coming down the pipeline for regulations, compliance, and standards,” Trinh says. “Use reputable organizations such as SIA, ASIS, NSCA, and others to keep up to speed on how AI will impact the security industry.”

Steve Reinharz of RAD adds, “Liability will shift, and it needs to. If a human guard makes a bad call, there’s a chain of responsibility. The same will apply to AI. Integrators, manufacturers, and operators will need clear contracts, documented policies, and systems that explain their reasoning (explainable AI). But let’s not pretend AI creates chaos here; it actually introduces traceability and auditability that human decision-making rarely provides. The industry should prepare now to assign responsibility not based on who pushed a button, but on who designed, configured, and validated the AI system.”
