January 2026

ARTIFICIAL INTELLIGENCE

Solving the ‘People Problem’ in Monitoring Centers With AI Augmentation

AI is becoming the monitoring industry’s most valuable new hire, actively offloading tedious tasks, accelerating training and giving operators the data edge they need to protect lives. However, leaders caution that success hinges on ethics, guardrails and culture, because when seconds matter, human judgment still defines outcomes.

By Brianna Wilson, SDM Managing Editor



Chris Newhook of American Alarm and Communications, pictured outside of the company’s monitoring center, says AI is becoming a powerful assistant that streamlines verification and reduces false-alarm workload, but human judgment will continue to drive critical decisions. Image courtesy of American Alarm and Communications

It goes without saying that artificial intelligence (AI) is on everyone’s mind. We’ve all heard the narrative: AI is not optional; it’s not a buzzword; it’s going to affect every single industry. The outlook on AI is overwhelmingly positive from a business perspective, and the same is true for monitoring centers.

Traditionally, the industry is slow to adopt new technologies, and often for good reasons, says Chris Brown, CEO, Immix, Charlotte, N.C. “Some of it is just fear and change,” he says. “The other thing is, we’re in the life safety business. It’s difficult to take big risks with how you operate your business every day. However, I am starting to see some companies take some big leaps. I think we’re closer to a rapid adoption of AI than we think we are.”

Why? The risk of escalation is higher today than it was 20 years ago. Because of this, operators are dealing with far more events, both critical and non-critical.

This is where AI comes in. Security industry leaders agree that AI is nowhere near ready to “take over” operations in the industry (and likely never will be). However, AI excels in handling repetitive tasks and non-critical events that the average operator wishes would just be automated so they can focus on the lives they’re committed to protecting.

In short, AI directly and effectively addresses “the people problem” that monitoring centers commonly struggle with: high turnover, unmanageable workloads, disjointed workflows and grueling training processes.

AI Doesn’t Replace. It Augments.

AI is actively helping monitoring centers tackle these challenges. “AI and related automation technologies help address these issues by removing large amounts of routine work that can distract operators from the events that truly require their attention,” says Jim McMullen, president, COPS Monitoring, Williamstown, N.J.

These are widely considered the “low-hanging fruit” of what AI is currently capable of, meaning it’s not just available, it’s also accessible. “The widespread exposure to AI and all the dialogue and fast talk that it’s generating is creating more of a cultural shift,” says Chris Newhook, vice president of monitoring operations, American Alarm and Communications, Arlington, Mass. “People [customers and leadership] are driving the change and are more receptive and willing to embrace forms of automation that, while by no measure would classify as AI, nevertheless allow us to communicate by digital means.”

One thing AI cannot do, however, is replace people. “For us, this part is non-negotiable: AI supports the decision, but humans own the decision,” says Priya Serai, chief information officer, Zeus Fire & Security, Paoli, Pa. “Monitoring is a critical business. These are real people, real properties and real risks, so we design every AI workflow with one simple rule: AI can inform and accelerate, but it cannot decide.”

AI Is Your Next New Hire

Today, monitoring centers are using AI as an assistant to their skilled operators. Traditionally, operators had to memorize site layouts, response steps, customer preferences and device quirks. Now, tools like ZeusGPT, Zeus Fire & Security’s generative AI layer, surface the right information automatically. “The skillset shifts from ‘remember everything’ to ‘use the information well.’ This actually levels the playing field for newer operators,” Serai says. “When the job is less overwhelming, the talent pool opens up.”

Because of the number of non-dispatchable, non-critical events — like a power outage — that come across an operator’s desk, it’s easy for burnout to occur. “It definitely takes a toll on people,” says Caroline Brown, CEO, Security Central, Statesville, N.C. The company built a conversational AI agent, Ella, to assist with these non-critical events. “Seeing that [Ella] is able to alleviate that need, and [operators] can focus on those critical emergencies, say a fire or a panic alarm, I think we’re starting to see a more positive interaction and a little bit lighter workforce because it is such a serious role when we’re triaging and helping these situations,” Brown says.

AI is also increasing operator efficiency by eliminating tasks that require little skill or knowledge to complete. “We use AI-assisted video analysis to filter out irrelevant motion and highlight events that may require action,” McMullen says. “We use conversational IVR technology to handle routine outbound verification calls that do not require an operator. We use SMS and chat tools that enable subscribers to escalate real alarms or resolve non-emergency alarms quickly. Our goal is to remove distractions and routine tasks so our people can stay focused on emergencies and moments that truly require human intelligence.”

Guardian Protection, Warrendale, Pa., is implementing AI agent assist technology to support employees across a diverse range of tasks. “This strategic use of technology reinforces the value of human connection in service delivery, ensuring that while operational efficiency is optimized, the personal touch remains central to our customer experience,” says Jason Bradley, chief operating officer, Guardian Protection.

There are multiple ways AI addresses turnover, a particularly prominent industry issue. These solutions range from eliminating the parts of the job that are “boring” or “mundane” to catching early signs that someone is not fit for an operator role.

“A big thing that AI should help do is keep monitoring center staff engaged and excited,” Brown of Immix says. “Allowing operators to spend their time helping and winning for their customer keeps them excited, engaged and keeps them mentally healthier. I think that we’re going to find that people are more drawn to work in a center because it’s not a mundane, dark room to spend your day in. You’re actually doing things that are exciting and interesting, and you’re helping people. It will change the dynamic nature of who we get to hire and how people stay in the role.”

On the training front, he adds, “You’ll be able to tell AI, ‘Create me a scenario where there’s 20 people in a convenience store and someone comes in and robs it with a weapon,’ and run that scenario in front of an operator where they have to respond to it and take action to make decisions. I think we’re going to see AI-simulated threats become part of the training routine, and I think that will be incredibly helpful for the industry.”

Security Central has begun this process by building a learning management system (LMS) to make training more efficient. “When people aren’t necessarily talking on the phone to someone, especially in distress, we’re using AI to come up with stressful situations — they’re frustrated, they were woken up in the middle of the night, they don’t remember their passcode — and helping them through those things in a polite, professional [manner], but also with a sense of urgency,” Caroline Brown says. “We’re seeing … that we’re able to stress test [trainees] a little more. If someone is going to turn over rather quickly, we hope it’s in the training scenario.”

In the interim between training and becoming a veteran operator, Erin Bullard, director of partner relations, Immix, believes AI will also help with prioritization, ensuring that operators who might not have the most training do not receive the highest-priority alarms or life safety threats they are not yet prepared to handle.

“Monitoring is a critical business. These are real people, real properties and real risks, so we design every AI workflow with one simple rule: AI can inform and accelerate, but it cannot decide.”
— Priya Serai, Zeus Fire & Security

AI in Action


Monitoring centers are creating customized AI tools to assist their skilled operators while also keeping their data safe. Image courtesy of Immix

American Alarm and Communications uses a text-to-speech platform for lower priority event notification, as well as an interactive SMS text application for verifying residential burglar alarms. “These are two services alone, right now, that are accounting for around 170 to 175 virtual agent hours just in the last month, October,” says Chris Newhook. “Ultimately, I think this translates to a greater focus on priority events and shorter response times as agents are not distracted by handling events for which most customers would prefer an electronic notification, whether by text or email.”

Immix is taking a “human in the loop” approach, in which AI does a lot of the heavy lifting, like detection and analysis. It then delivers that information to an agent, who steps into the loop and makes the final decision. From its unique position serving commercial video monitoring centers around the globe, Immix is also seeing how different companies, whether large or small, are using AI.

“We are partnering with people who are going all the way, the place everybody says we won’t be for years, but they are,” says Chris Brown of Immix. “AI detects something in a scene. It reports it to another AI that then becomes the operator, handles the event, dispatches the event to police, writes the incident report, closes it out, and a human never touches it.”

Erin Bullard of Immix adds, “AI at its core is looking at the metadata, and it’s all about how you use that. From my lens, I work with a lot of partners in manufacturing that create really cool solutions, and they’re very creative with how they go about it. We are a lot further along and a lot closer to much more automation of very advanced human functionality than we thought we would be.”

By utilizing advanced AI-driven video analytics, Guardian Protection is able to process events both in real time and retrospectively. “Deep learning models facilitate rapid detection of critical incidents, enabling our monitoring center to respond with greater speed and efficiency,” says Jason Bradley of Guardian Protection. “This approach has led to a substantial reduction in false alarms, improved labor productivity and accelerated response times, ultimately elevating protection levels and increasing customer satisfaction.”

Guardian Protection is actively pursuing new avenues for AI integration. “One initiative under consideration involves deploying AI agents to manage non-emergency call traffic, such as routine account management tasks,” Bradley says. “We are also evaluating AI-powered assistance and quality management platforms that can deliver live coaching and support to live agents, dynamically shaping future performance standards.”

Zeus Fire & Security uses AI in two main ways: to make video monitoring smarter and to turn camera data into real business intelligence. The company accomplishes this through ZeusAI and ZeusGPT. “We’re not fully deployed across all sites yet, but in the areas where we’ve layered camera analytics with cloud analytics, the operator experience is noticeably better: fewer low-value events, more context with each signal, and clearer, ‘What am I looking at?’ moments. It’s not perfect, but it’s progress — and in monitoring, progress matters,” says Priya Serai of Zeus Fire & Security.

COPS Monitoring is in the early stages of using AI-supported training and quality tools, enabling faster, easier review of calls and identification of coaching opportunities. The company is also developing AI-assisted search tools to help dealers quickly retrieve documentation and support information on MPower, COPS Monitoring’s proprietary dealer access portal.

Remaining Cautious & Maintaining Trust

We would be remiss to have a conversation about AI without discussing the ethical and legal considerations. Many monitoring centers are addressing these issues by forming departments dedicated to these technologies, carving out time in leadership meetings to discuss AI concerns, and continually emphasizing the need to remain human-centric.

Security Central’s quality assurance team, for example, is closely monitoring Ella’s accuracy and efficiency. The company’s leadership also has frequent conversations about the ethical uses of AI and actions they can take to ensure no information is being released to the wrong parties.

Similarly, American Alarm has an AI initiative team, and a number of the company’s senior and executive management personnel are involved. This is one of the first steps American Alarm has taken to apply controls around sharing sensitive data such as company financial information. “It’s easy to look at all of these things that are coming our way and say, ‘Let’s just get on it, let’s get out there, let’s make sure we’re competitive, we’re leading edge.’ You’ve got to be very careful with personal information, company financial information, strategic plans,” Newhook cautions. “You can’t just cut and paste and drop that into a ChatGPT and say, ‘Give me the best approach for this.’ Right from the get-go, you want to circle the wagons to get all your people on the same page with respect to it.”

Serai and the Zeus Fire & Security team are firm in the belief that AI cannot be the final decision-maker, ever. “We are very intentional about this,” she says. “The ethical risk is obvious: if AI makes the wrong call and we dispatched (or didn’t dispatch), who owns that decision? To avoid that grey zone, we have a simple rule: AI can assist, but humans make the call. Always. This protects both safety and liability.”

COPS Monitoring has the same approach to and emphasis on the supporting role of AI paired with human oversight and decision-making. “Subscribers trust us with their lives and property, so we must preserve human verification and human decision-making at every critical step,” McMullen says. “We also take data privacy and security very seriously, especially in systems that record, analyze or store communication. Any AI tool that handles subscriber or event data must meet the strict security requirements of our SOC 2 certification.”

Bradley agrees. “Ultimately, our objective is to leverage AI as a strategic asset — augmenting our workforce, enhancing customer outcomes and upholding the highest standards of reliability and trust. This measured approach ensures that every advancement in AI is thoughtfully integrated.”

Brown of Immix is concerned with the legal aspect of using AI. “The challenge we’re going to find, which will get played out in courts probably very quickly, [is] when AI makes the wrong decision, who’s responsible? Is anybody responsible, number one? But who’s getting sued? Is it the monitoring center who chose that AI and put it to work? Is it the manufacturer of that AI who wrote it? Those are questions that have yet to be answered because we haven’t seen a lot of lawsuits filed and won or lost,” he says. Thus, he advises other leaders to be cautious and consultative when implementing AI solutions, and to not forget to add AI clauses to contracts, when applicable.

“You’ve got to be very careful with personal information, company financial information, strategic plans. You can’t just cut and paste and drop that into a ChatGPT and say, ‘Give me the best approach for this.’ Right from the get-go, you want to circle the wagons to get all your people on the same page with respect to it.”
— Chris Newhook, vice president of monitoring operations, American Alarm and Communications

AI Can’t Have a ‘Gut Feeling’


Every day, operators make decisions that AI, which strictly follows pre-existing data, is not yet capable of making. Image courtesy of Guardian Protection

There are countless things AI can already do, and many experts say that the AI you use today is the worst version of AI you will ever use — meaning its current capabilities will vastly improve, and it will develop brand new capabilities in little to no time.

One thing AI is not currently capable of, and what many monitoring center leaders agree AI will not be able to achieve for a long time, if ever, is human empathy.

“For those high-level, panic duress events, some of our veteran operators just know by someone’s voice that, ‘They gave me the right information, but there’s something more. Something is a little off. I need to take this a step further and go down the call list or deviate from standard process.’ That’s something unique to that human component versus the AI component,” says Caroline Brown of Security Central.

Chris Newhook of American Alarm and Communications says, “I’ve seen agents dispatch on alarm events on a hunch or because they felt something was off, and the person they were speaking to was actually being held at knife point. And that agent just made the call. That’s something you’re not going to get with AI.”

Chris Brown of Immix heard one such story at an event hosted by The Monitoring Association. A person stuck in a house fire was having trouble staying conscious because of the smoke, so the operator instructed the person to put their phone on speaker and turn the volume all the way up. That way, when firefighters made it to the house, shortly after the person had passed out, the operator was able to scream into the phone and lead firefighters to their location. “The operator took the initiative to do something that a robot would never do,” he says. “They made a very unique, conscious decision to change their behavior and ended up saving somebody’s life.”

Erin Bullard of Immix adds, “There’s nothing wrong with AI being a coworker, but humans have the ability to relate to somebody in a way that AI is not going to be able to. AI can’t come up with some of these ideas and some of the things that a person can.”

Not only can humans deviate from standard protocol to save someone’s life, but they can also offer empathy and comfort to those in a high-stakes situation. “Human interaction in these moments not only helps to calm and support the customer and others who may be affected, but also enables our operators to gather nuanced information and address emotional needs technology alone cannot fulfill,” says Jason Bradley of Guardian Protection.

AI is a tool that runs on data and doesn’t deviate from what it’s told to do. Humans are skilled at sensing when something is off even if the data presented appears normal. “Humans can evaluate a situation ethically and take personal responsibility for outcomes in a way that technology cannot,” says Jim McMullen of COPS Monitoring. “These human qualities give trained dispatchers the insight required to modify a standard procedure when the situation demands a different response. This is why we insist that AI can assist, but humans decide. AI clears the routine and predictable work out of the way, but the decisions that truly matter always remain in human hands.”

Zeus Fire & Security is similarly committed to always having a human in the loop. “If sentiment analytics improves over time, great; it becomes one more signal for operators to interpret,” says Priya Serai of Zeus Fire & Security. “But we’ll always train operators to be the human voice in the loop. We train for thinking, not clicking. Operators learn why events matter, not just which button to press.”

What’s Next?

AI is improving and developing at a pace that is nearly impossible to keep up with. It is just as difficult to predict where AI is going and what we’ll be able to do with it in the future.

“I caution everyone to not miss the fact that wherever you think we’re going to go in the next two years, we’re already there. It’s available,” Brown of Immix says.

He believes AI is headed in a direction that will allow operators to become much more predictive. “We’re starting to see large language models, or what are commonly called LLMs, in the space today,” he says. These tools will help an operator quickly look at historical data related to a particular alert, such as a black truck being parked in a fire lane a number of times within the week.

In the future, Serai sees a monitoring center amplified by AI. Operators see pre-qualified events instead of endless feeds. An operator who’s only been on the floor for three weeks works like they’ve been with the center for three years thanks to tools like ZeusGPT, which provides the site history, relevant signals, unique customer instructions, and the SOP highlighted step by step for any event. Supervisors aren’t chasing or correcting problems; they’re coaching and leading their people. New hires aren’t overwhelmed by screens and codes because they’re focused on scenario thinking. When an event pops up, AI automatically clips the video, identifies what’s going on, highlights the zone involved, and surfaces the SOP. “This isn’t science fiction,” Serai says. “This is where we’re heading: step by step, upgrade by upgrade, one operator win at a time.”

McMullen sees AI growing in areas such as video activity analysis, non-emergency triage and background task automation. “Routine and low-priority events will increasingly be resolved without an operator, and many support functions will become faster and more efficient. Operators will also benefit from real-time assistance that helps organize information, surface important details and reduce the workload created by routine tasks,” he says. “What will not change is the human-centered structure of a monitoring center. Emergency events still require intuition, empathy, communication and moral responsibility. These are the strengths that define human intelligence, and AI cannot replace them.”

“I caution everyone to not miss the fact that wherever you think we’re going to go in the next two years, we’re already there. It’s available.”
— Chris Brown, Immix

Lessons, Pitfalls & Considerations


AI is not a “plug and play” tool. Though it is accessible and imperative, there are many factors to consider prior to a full-blown implementation of available technologies. Image courtesy of Immix

Leaders widely agree that if you’re looking at AI to replace your people, you’re probably not thinking about the technology holistically. AI can enhance operations and take the burden off of monitoring center employees, which is a great place to start having AI conversations.

“The best place to start is by listening to your people. Spend time understanding what makes their day harder than it needs to be,” says Jim McMullen of COPS Monitoring. This means focusing on low-priority, high-volume tasks such as routine notifications and non-emergency verifications. “These are predictable, safe areas where technology can help without affecting the core responsibility of emergency response,” McMullen adds. “Measure the impact on both operator experience and subscriber safety to prove the technology is working as intended.”

At Security Central, Caroline Brown shares that she and her team have learned that it’s a mistake to take an “all-or-none” approach to AI. Rather, it’s important to meet clients where they are. “Understand you have your high-tech customers, and you have some that are still apprehensive,” she says. “You have to show them all of the positive ways it can help grow their business.”

For customers’ sake, careful exploration and consideration of AI tools is a must. “You do have to think about the full user experience,” says Erin Bullard of Immix. “You have to think about the deliverable to the end user and what you’re packaging up as part of that event. Having an overload of information just because it’s AI is not helpful. It’s not going to be actionable.”

Bullard adds that AI exploration takes time and advises fellow leaders not to give up or be deterred if some tools aren’t up to standards or a good fit for the company. “These things change so fast that they do evolve; try it again in a year,” she says. “If they have made changes, if they have improvements, it is worth revisiting some of these companies later down the road. Make sure that you do feel that they’ve had significant enough changes to revisit it and go in with an understanding of what you are trying to accomplish.”

The cost of AI services has been a barrier for some clients. “Monitoring centers and their dealer networks need to be prepared to have those kinds of conversations with their customers,” says Jason Caldwell, director of marketing and Guard Force accounts, Immix. “Because while they may understand the technology quite well, there’s a good chance most of your customers do not yet, and they are not going to really understand the value [they’re] getting and how you’re monitoring.”

Importantly, AI is in perpetual motion, and the monitoring industry needs to be in perpetual motion alongside it. “You can’t be slow to try something,” says Chris Brown of Immix. “This is not something you can analyze to the nth degree, because by the time you get done, all of what you analyze is gone, and something else is there because it’s all evolved. It’s a bit of jumping in the deep end of the pool, but take a life preserver with you.”

Chris Newhook of American Alarm and Communications shares three primary takeaways from AI exploration: don’t place full confidence in AI; address the security aspect of AI from the onset; and don’t wait. “Start just brainstorming and spit-balling and coming up with ideas, and understand that there are very few AI experts in the room,” he says. “Ask ‘stupid’ questions. Whether these tools eventually take everything over, that doesn’t free you from the responsibility of learning how to use them right now, responsibly. … Everyone needs to be in a three-point stance right now and ready to be able to take on some of that change.”

Priya Serai of Zeus Fire & Security says, “Implementing AI is not a straight line. It’s more like building a plane while flying it . . . during turbulence . . . with half the parts still in FedEx transit.” She advises monitoring centers to take multiple steps when introducing AI: start small, involve operators early, be transparent about what AI will or will not do, celebrate human wins and don’t lose the purpose of implementing AI.

“Before you chase AI, fix your data plumbing,” Serai concludes. “AI is only as good as the data feeding it. We all love the idea of ‘plug it in and it works.’ If your data is inconsistent, siloed, or missing context (which is the case across most integrators and monitoring centers), AI will amplify the chaos, not reduce it.”
