High School’s AI Security System Mistakes Doritos Bag for a Possible Firearm

An AI security system at a Maryland high school mistakenly identified a student’s bag of Doritos as a gun, highlighting flaws in school safety technology.

Technology is supposed to make schools safer. But sometimes, it can cause confusion instead of calm. That is exactly what happened at a high school in Maryland, United States, when an artificial intelligence security system mistook a bag of Doritos for a gun.

What followed was a frightening moment for one student and a serious debate about how much schools should depend on AI for safety.

The Incident

The event took place at Kenwood High School in Baltimore County. A student named Taki Allen was waiting outside the school when police arrived and ordered him to the ground. He was handcuffed and searched after the school’s AI gun detection system flagged a possible firearm.

Taki said he was only holding a bag of Doritos when the alert was triggered. According to him, “I was just holding a Doritos bag, it was two hands and one finger out, and they said it looked like a gun.” He explained that several police cars showed up and officers pointed guns at him before realizing there was no weapon at all.

How the AI System Works

The school uses an AI-powered security system from a company called Omnilert. It watches the school’s camera feeds and sends alerts if it detects something that looks like a weapon. In this case, the system reportedly identified the Doritos bag as a possible gun and sent an alert to school officials.
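To make that pipeline concrete, here is a minimal Python sketch of how a camera-monitoring loop of this kind might work. Everything in it is hypothetical: the classify_frame stub, the ALERT_THRESHOLD value, and the Detection record are illustrative stand-ins, not Omnilert’s actual software.

```python
from dataclasses import dataclass
import random

@dataclass
class Detection:
    label: str         # what the model thinks it saw, e.g. "firearm"
    confidence: float  # model's score, 0.0 to 1.0
    camera_id: str     # which camera feed produced the frame

ALERT_THRESHOLD = 0.80  # illustrative cutoff; real deployments tune this

def classify_frame(frame: bytes, camera_id: str) -> Detection:
    """Stand-in for the vendor's vision model: a real system would run
    an object detector on each frame; here we just invent a score."""
    return Detection("firearm", random.random(), camera_id)

def monitor(frames: list[tuple[bytes, str]]) -> list[Detection]:
    """Flag high-confidence detections for HUMAN review.
    Note the design: the system alerts reviewers; it does not call police."""
    alerts = []
    for frame, camera_id in frames:
        detection = classify_frame(frame, camera_id)
        if detection.confidence >= ALERT_THRESHOLD:
            alerts.append(detection)  # routed to the safety team, not to 911
    return alerts

if __name__ == "__main__":
    fake_frames = [(b"...", f"cam-{i}") for i in range(5)]
    for alert in monitor(fake_frames):
        print(f"Review needed: {alert.label} on {alert.camera_id} "
              f"({alert.confidence:.0%} confidence)")
```

The key design point is the last step: a detection above the threshold generates an alert for human reviewers rather than triggering a police response on its own.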

According to the school’s principal, Katie Smith, the alert was reviewed by the district’s safety team and later cancelled after they found no real threat. However, the cancellation message did not reach the principal immediately. Believing the alert was still active, she called the school resource officer, who then contacted local police. This delay led to the police responding to what they thought was a dangerous situation.
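That failure mode is easy to model. The toy sketch below (again hypothetical, not the district’s real software) shows an alert desk that fans a message out to subscribers: if the cancellation reaches only some of them, anyone left out still acts on a stale “active” alert, which is essentially what happened here.

```python
from enum import Enum

class Status(Enum):
    ACTIVE = "active"
    CANCELLED = "cancelled"

class AlertDesk:
    """Holds the alert's true state and pushes messages to subscribers."""
    def __init__(self):
        self.status = Status.ACTIVE
        self.subscribers = []  # message inboxes: safety team, principal, ...

    def notify_all(self, message):
        for inbox in self.subscribers:
            inbox.append(message)

    def cancel(self, reached):
        """Cancel the alert, but deliver the update only to the
        subscribers in `reached` -- modeling a dropped message."""
        self.status = Status.CANCELLED
        for i, inbox in enumerate(self.subscribers):
            if i in reached:
                inbox.append("CANCELLED: reviewed, no threat found")

desk = AlertDesk()
safety_team, principal = [], []
desk.subscribers = [safety_team, principal]

desk.notify_all("ALERT: possible firearm, exterior camera")
desk.cancel(reached={0})  # the cancellation reaches the safety team only

# The principal's newest message is still the original alert, so she
# escalates to the school resource officer despite the true state.
print("Principal's latest message:", principal[-1])
print("Actual alert status:      ", desk.status.value)
```

In this toy version, the system’s true state says “cancelled” while one recipient’s last message still says “alert,” and the escalation proceeds on the stale information.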

The Company’s Response

Omnilert, the company behind the AI system, expressed regret over the incident. In a statement, it said, “We regret that this incident occurred and wish to convey our concern to the student and the wider community affected by the events that followed.” The company also said that “the process functioned as intended” because the system correctly alerted human reviewers who then checked the footage.

In other words, Omnilert believes the system worked as designed, since humans were still part of the decision process. The issue, according to them, was not a software failure but a communication gap between the school and law enforcement.

The Human Side of the Story

For Taki, the situation was terrifying. He said he feared for his life when police arrived with guns drawn. “The first thing I was wondering was, am I about to die? Because they had a gun pointed at me,” he said. Students who witnessed the scene felt the emotional toll as well.

Parents and local leaders have expressed anger and concern. Many are asking how a snack bag could be mistaken for a weapon and why a student had to be handcuffed before the misunderstanding was cleared up. Baltimore County Councilman Izzy Patoka said, “No child in our school system should be accosted by police for eating a bag of Doritos.”

Was It Human Error or AI Error?

Officials later said the incident was not caused by a failure in the AI system itself but by a communication breakdown. The AI detected a potential threat, humans reviewed it, and they cancelled the alert. However, the message did not reach everyone in time, leading to confusion and a heavy-handed response.

This raises a hard question: how much should we trust AI in situations that demand instant judgment? These systems exist to protect students, but mistakes like this show how dangerous a false alarm can be once armed officers respond to it. A simple misunderstanding nearly ended in tragedy.

Lessons for the Future

This case has started a larger conversation about AI in schools. Many districts in the United States use AI tools to detect weapons or monitor surveillance cameras. But experts warn that these systems are not perfect and need strict human oversight.

In January 2025, another AI detection system failed to spot a real gun during a school shooting in Tennessee. In this case, the opposite happened: the system flagged something harmless and caused panic. Together, the two failures show that AI is not yet reliable enough to handle life-and-death decisions without careful human checks.

Schools may need to rethink how they use these technologies. Instead of letting AI decide what is dangerous, it might be better to use it as a supportive tool that helps trained staff make quick and accurate decisions.

The Bigger Picture

Artificial intelligence is being used everywhere, from classrooms to hospitals to workplaces. It can be powerful and helpful, but it has limits. Machines lack context and emotion; they see only the patterns their training data taught them to see. In this case, a pattern the model read as a gun turned out to be a bag of chips.

As AI becomes more common, stories like this remind us that technology must work hand in hand with human judgment. Safety systems should protect students, not frighten them. Policymakers and developers must make sure that every alert is checked properly before police take action.

The Bottom Line

The Doritos bag incident at Kenwood High School is more than just a funny misunderstanding. It shows the serious risks of over-relying on AI without proper human oversight. Technology can be smart, but it still needs humans to think, feel, and decide what is real.

AI is here to stay, but so is the need for human judgment. The goal should always be safety, trust, and common sense. Schools should use AI to help protect students, not to create moments of fear.
