Objectivity in an AI-Driven World: Why Scientific Rigor Still Matters

My journey toward objective, disciplined thinking began long before my FBI career. It started in a high school physics classroom with a gifted teacher, Mr. Krygowski (now Dr. Krygowski, MD), who taught more than formulas; he taught clarity of thought. Decades later, those lessons are why I write "Objectivity in an AI-Driven World."

Inspired by his approach, I designed and built a ballistic pendulum as my independent science project. I spent hours calculating momentum transfer, calibrating measurements, and documenting results.
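The momentum-transfer calculation at the heart of a ballistic pendulum is a classic exercise: momentum is conserved through the inelastic impact, and mechanical energy is conserved through the swing. As a sketch (the masses and rise height below are illustrative numbers, not measurements from my project):

```python
import math

def projectile_speed(m_bullet, m_block, rise_height, g=9.81):
    """Initial projectile speed inferred from a ballistic pendulum.

    Momentum is conserved in the inelastic impact:
        m * v = (m + M) * V
    Mechanical energy is conserved in the swing:
        V = sqrt(2 * g * h)
    Combining:  v = ((m + M) / m) * sqrt(2 * g * h)
    """
    V = math.sqrt(2 * g * rise_height)          # pendulum speed just after impact
    return (m_bullet + m_block) / m_bullet * V  # projectile speed before impact

# Example: a 10 g projectile embeds in a 2.0 kg block that rises 5 cm
v = projectile_speed(0.010, 2.0, 0.05)  # roughly 199 m/s
```

The elegance of the device is that a hard-to-measure quantity (projectile speed) is recovered from an easy-to-measure one (how high the block swings).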

That early exposure to disciplined scientific methodology set me on the path to earning an undergraduate degree in physics, and it shaped the way I would later approach some of the most complex investigative challenges in the FBI.

Across all of these experiences, one theme stood out: objectivity and methodology are the foundation of reliable decision-making, whether in the laboratory, in the field, or today, in the rapidly evolving world of artificial intelligence.

Physics: A Foundation for Seeing the World Through Evidence

Physics taught me to distinguish between what I wanted to be true and what the data actually supported. That discipline carried into the FBI, where defining the problem clearly, developing testable theories, separating facts from assumptions, documenting every step, and letting evidence, not intuition, determine the next move became second nature. Investigations, much like scientific research, are structured searches for truth.

Both begin with an initial observation or anomaly, progress through theories and evidence collection, require careful analysis, and rely on peer review and transparent documentation to ensure that another person could retrace the same analytical path. The scientific method and the investigative method share the same DNA because both are anchored in objectivity, consistency, and disciplined reasoning.

Physics trains the mind to focus on measurable truth. You learn quickly that the universe does not bend to opinion or assumption. It behaves according to principles that can be tested, observed, and verified.

This mindset translated directly into my investigative work: these are not just scientific ideals. They are practical skills that ensure accuracy, fairness, and clarity in environments where assumptions can be costly.

The Scientific Method and the Investigative Method: Parallel Paths to Truth

The processes of scientific inquiry and investigative inquiry share a remarkably similar structure because both are fundamentally disciplined searches for truth. Scientific inquiry begins with observation, just as an investigation begins with a complaint, anomaly, or reported threat. From there, scientists form hypotheses in the same way investigators develop working theories or possible explanations.

Experimentation in the lab mirrors the interviews, forensic work, and digital evidence collection used in the field. Both processes then move into analysis, whether that means interpreting data or reconstructing timelines and assessing behaviors. Peer review in science finds its counterpart in case supervision and team review in investigative work. Ultimately, both methods rely on clear documentation that enables another scientist or investigator to retrace the same steps and arrive at an evidence-based conclusion.

Both demand structured inquiry, clear definitions, and transparent methodology. Both require professionals to question assumptions and test them, rather than letting assumptions quietly steer the outcome.

Objectivity Matters Most When Technology Enters the Conversation

This commitment to objectivity is more important now than ever as artificial intelligence becomes increasingly embedded in safety and security. In recent years, many discussions about AI-based gun detection have blended two very different technological concepts: simple object recognition and advanced neural network capability. It is essential to understand that AI neural networks are capable of far more than merely matching shapes or spotting predefined objects. Modern deep-learning architectures can detect complex patterns, infer contextual relationships, understand motion, classify behaviors, and make predictions based on vast, multidimensional datasets. Their strength lies in statistical learning rather than rigid, rule-based identification. In fields such as medical imaging, industrial quality control, logistics automation, and autonomous vehicles, neural networks have proven extraordinarily powerful because they learn from millions of examples and operate on high-resolution, high-quality data.

AI Object Recognition vs. Neural Network Inference: A Crucial Difference

As AI rapidly expands into the safety and security space, objectivity becomes even more essential, especially when evaluating school-security technologies marketed as “AI gun detection.”

Many of these systems blur two fundamentally different concepts: traditional AI object recognition, which relies on rule-based matching of predefined shapes or patterns, and neural network inference, which uses statistical learning from massive datasets to identify complex correlations and predict patterns it has been trained to recognize.

Although marketing language often treats these approaches as interchangeable, they are not the same, and misunderstanding the difference leads to unrealistic expectations about what such systems can reliably accomplish in real-world environments.
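The difference between these two approaches can be sketched in a deliberately simplified toy. Neither snippet reflects any real vendor's product; the point is only the contrast between a hard rule and a graded, learned score:

```python
# Illustrative toy only: neither function models any real detection product.

# Rule-based "object recognition": hard match against predefined templates.
TEMPLATES = {"L-shape", "rectangle"}

def rule_based_detect(shape_label):
    # Either the shape is on the predefined list or it is not; nothing in between.
    return shape_label in TEMPLATES

# Statistical "inference": a learned score over features, then a threshold.
# These weights stand in for parameters a network would learn from training data.
WEIGHTS = {"edge_density": 0.6, "aspect_ratio": 0.3, "metallic_glint": 0.9}

def learned_detect(features, threshold=0.5):
    score = sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
    return score >= threshold, score  # graded confidence, not a yes/no rule
```

The rule-based path fails closed on anything outside its fixed list, while the learned path degrades gracefully (or ungracefully) with the quality of its input features, which is precisely why data quality dominates real-world performance.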

Non-Visual Weapons Detection and AI

A similar distinction appears in the growing debate around next-generation weapons-detection systems: AI-enhanced walkthrough screening systems designed to identify a broader range of objects than traditional metal detectors.

These systems often rely on complex machine-learning inference models that evaluate patterns, densities, and signal signatures to identify potential weapons. While this approach represents an impressive leap in AI-driven pattern recognition, it also invites misunderstandings about what the technology can and cannot do.

Neural network-based weapons detection is fundamentally different from the established physics-driven mechanisms of current-generation metal detectors, which rely on well-understood electromagnetic principles to identify metallic objects with extraordinary consistency.

These systems attempt to analyze a much broader range of materials and shapes, but because they depend on learned statistical correlations rather than fixed physical responses, they face limitations when confronted with unfamiliar objects, atypical carry positions, or low-resolution sensor inputs.

Traditional metal detectors do not have this challenge; their performance is consistent because the signal they read is grounded in the immutable physics of conductivity and electromagnetic response.

The comparison reflects the same truth seen in AI gun-detection claims: when marketing conflates fundamentally different technologies, expectations become misaligned with reality.

Without clear communication about capabilities and constraints, advanced AI-driven systems are often asked to perform tasks that the underlying data, and the physics, simply cannot support with complete reliability.

The Physics of Why Some AI Claims Cannot Hold True

Yet despite these physical constraints, AI remains an extraordinary tool when applied appropriately. Neural networks can analyze behavior patterns, detect anomalies in movement or posture, recognize known individuals from high-quality images, identify escalating risk factors, and process vast quantities of sensor data far faster than humans.

The challenge is not the capability of AI itself, but the assumptions placed upon it. When we ask AI to perform tasks that the available data cannot support, such as distinguishing a small handgun rendered in a handful of pixels, we set the technology up for failure and unintentionally erode trust.
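The "handful of pixels" problem is simple geometry. Under a pinhole camera model, the pixel footprint of an object shrinks in proportion to its distance. The camera parameters below are hypothetical (a 1080p sensor with a 90-degree horizontal field of view), chosen only to illustrate the arithmetic:

```python
import math

def pixels_on_target(object_m, distance_m, image_width_px, hfov_deg):
    """Approximate pixel footprint of an object under a pinhole camera model."""
    # Focal length expressed in pixels: f_px = W / (2 * tan(HFOV / 2))
    f_px = image_width_px / (2 * math.tan(math.radians(hfov_deg) / 2))
    return f_px * object_m / distance_m

# Hypothetical 1080p camera, 90-degree horizontal field of view,
# viewing a ~15 cm handgun from 25 m away
px = pixels_on_target(0.15, 25.0, 1920, 90.0)  # about 5.8 pixels across
```

At roughly six pixels across, no classifier, however sophisticated its architecture, has enough information to reliably separate a handgun from a phone or a wallet. The constraint is the data, not the algorithm.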

In contrast, when AI is paired with appropriate sensors, transparent datasets, controlled environments, and rigorous validation, its performance often surpasses human capabilities by orders of magnitude. In other industries, this combination of thoughtful design and disciplined testing is why AI has advanced so rapidly.

Objectivity and Leadership: A Path Toward Responsible Innovation

This leads to a broader truth: objectivity is not restrictive. It is empowering. The leaders I admired most throughout my career demonstrated intellectual humility, transparent methods, critical thinking, and a willingness to test their own assumptions. They fostered cultures where evidence mattered more than ego and where challenging a hypothesis was not an act of defiance but an act of professionalism.

These environments produced clarity, innovation, and trust. In both science and investigations, objectivity prevents avoidable errors and creates room for meaningful progress. It does not slow innovation; it strengthens it.

This is not about criticizing technology. It is about creating a culture of evidence-based innovation, where tools are evaluated through the same lens we apply in science and investigations.

A Positive Call to Action: Let Evidence Lead, Let Method Guide

The lessons I learned, from an inspirational high school physics teacher through years of investigative work in the FBI, all converge on the same guiding principle: let evidence lead, let method guide, and let objectivity remain the compass.

These values are essential in a world increasingly influenced by AI. They help us evaluate technologies realistically, recognize where AI’s extraordinary neural network capabilities can shine, and understand where physical limitations require caution.

When we align our expectations with scientific reality, we design better systems, build safer communities, and promote responsible innovation.

The future belongs to those who combine the power of advanced AI with the clarity of disciplined, scientific thinking. When we align technology with objective methodology, we build safer communities, more trustworthy systems, and innovations that genuinely serve society.

By Glenn Norling   
Physics/Evidence-Based Practices/Former FBI Special Agent
Filed under 3687 Objectivity in an AI-Driven World, Posted: Chris Grollnek 11-25-25


Written by: Glenn Norling

Glenn Norling, owner of TBR Consulting LLC and retired FBI Special Agent, has extensive experience in emergency management and active shooter preparation. His firm, founded in 2020, specializes in crisis management and emergency planning. With 20 years at the FBI and 10 years in the U.S. Air Force managing multimillion-dollar projects, Glenn has trained over 15,000 people in active shooter awareness. He holds a BA in Physics and an MA in Organizational Management. Glenn is also a member of several professional security and law enforcement organizations.

