
Your Smart Devices Are Speaking to Hackers. Your Security System Isn’t Listening

2026/04/13 01:56

By Oluwapelumi Bankole, Researcher, Information Systems & Cybersecurity, University of Nevada, Las Vegas

Every morning, millions of Americans wake up in homes full of connected devices. The thermostat knows when you leave. The doorbell camera watches your street. The hospital down the road runs infusion pumps, patient monitors, and HVAC systems that communicate over the same category of network as your smart refrigerator. And almost none of these devices are adequately protected.


We have built an extraordinary infrastructure of connected machines, and we are defending it with tools designed for a different era.

This is not a problem of awareness. Cybersecurity is a top federal priority. The Cybersecurity and Infrastructure Security Agency (CISA) publishes advisories weekly. Billions of dollars flow into enterprise firewalls, endpoint protection, and security operations centers. And yet, the attack surface keeps growing. As of 2024, the U.S. power grid alone hosts over 2.3 million connected IoT devices, many running outdated firmware with no patching schedule and no monitoring in place.

The gap is not between what we know and what we fear. The gap is between the security systems we have built and the environments those systems actually need to operate in.

The Lab Looks Nothing Like the Real World

Intrusion detection systems, the software designed to flag malicious activity on a network, have improved dramatically over the past decade. Machine learning and deep learning models can now identify attack patterns with remarkable accuracy in research settings. Transformer architectures borrowed from natural language processing, long short-term memory networks trained on sequential traffic data, ensemble models combining multiple classifiers: the academic literature is full of systems achieving 98 or 99 percent accuracy.

Those numbers are often misleading.

The accuracy figure typically comes from a laboratory dataset, collected in controlled conditions, with relatively clean traffic distributions, and tested on the same type of data the model was trained on. Real IoT networks do not look like that. They are messy, heterogeneous, and constantly changing. Devices from a dozen manufacturers send data in different formats. Traffic patterns shift when someone installs a new appliance, changes a routine, or simply leaves for a week. And critically, actual attacks are rare events in a sea of normal traffic.

When a model is trained on a dataset where attacks make up 40 percent of the records, and then deployed on a network where attacks account for 0.1 percent of real traffic, the model’s behavior changes completely. It has never learned what genuine rarity looks like. The result is a system that misses the very threats it was built to catch, while generating enough false alarms to overwhelm the analysts who have to review them.
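The effect of that prevalence shift can be seen with simple base-rate arithmetic. The sketch below assumes a hypothetical detector with a fixed 95 percent detection rate and a 5 percent false positive rate (illustrative numbers, not from any specific system) and shows how the fraction of alerts that are genuine collapses when attack prevalence drops from a balanced lab dataset to a realistic network:

```python
# Sketch: how the same detector's alert quality collapses when attack
# prevalence drops from a lab dataset (40%) to a real network (0.1%).
# The detection rate (tpr) and false positive rate (fpr) are
# illustrative assumptions, not measurements of any real system.

def alert_precision(prevalence, tpr=0.95, fpr=0.05):
    """Fraction of raised alerts that are real attacks (precision)."""
    true_alerts = prevalence * tpr          # attacks correctly flagged
    false_alerts = (1 - prevalence) * fpr   # normal traffic mis-flagged
    return true_alerts / (true_alerts + false_alerts)

lab = alert_precision(0.40)     # balanced lab dataset
field = alert_precision(0.001)  # realistic deployment prevalence
print(f"lab precision:   {lab:.1%}")    # ~92.7% of alerts are real
print(f"field precision: {field:.1%}")  # ~1.9% of alerts are real
```

Nothing about the model changed between the two calls; only the base rate did. That is the gap an analyst inherits when a lab benchmark meets production traffic.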

The Class Imbalance Problem Is Not a Footnote

In the research community, the mismatch between training data and real-world conditions goes by a technical name: class imbalance. It is well understood, actively studied, and consistently underappreciated by the organizations deploying these systems.

Here is the core issue. A network intrusion detection system must classify each packet or traffic flow as either normal or malicious. In reality, the vast majority of traffic is normal. Attack traffic is the minority class, sometimes representing less than one percent of all observed events. Standard machine learning models, optimized to maximize overall accuracy, quickly learn that the best strategy is to simply classify almost everything as normal. That strategy produces excellent accuracy scores. It produces catastrophic real-world results.

A system that misses 80 percent of attacks because it has been trained to favor the majority class is not a security system. It is a compliance checkbox.
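The accuracy trap is easy to demonstrate. In the toy example below (synthetic labels, 0.1 percent attack prevalence), a "classifier" that labels everything normal scores near-perfect accuracy while catching nothing:

```python
# Sketch of the accuracy trap: on traffic where attacks are 0.1% of
# events, a degenerate classifier that labels everything "normal"
# scores 99.9% accuracy while catching zero attacks.
# Labels: 0 = normal traffic, 1 = attack. Data is synthetic.

labels = [1] * 10 + [0] * 9990       # 10 attacks in 10,000 events
predictions = [0] * len(labels)      # always predict "normal"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
caught = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = caught / sum(labels)        # share of real attacks detected

print(f"accuracy: {accuracy:.1%}")   # 99.9%
print(f"recall:   {recall:.1%}")     # 0.0%
```

A 99.9 percent accuracy headline and a zero percent detection rate can describe the same system, which is why accuracy alone tells a buyer almost nothing.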

Research into techniques like Adaptive SMOTE, which generates synthetic examples of minority-class attacks to help models learn what rare threats look like, has shown real promise. But these approaches need to be implemented thoughtfully, tested against datasets that actually reflect deployment conditions, and evaluated on the right metrics. Recall, the percentage of real attacks the system actually catches, matters far more than overall accuracy when the consequences of a missed detection are a ransomware infection at a hospital or a false data injection into a utility’s control system.
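The basic idea behind SMOTE-style oversampling can be sketched in a few lines. This is a deliberately minimal illustration of the interpolation step, not the Adaptive SMOTE variant from the literature, and the toy feature vectors are invented for the example:

```python
import random

# Minimal SMOTE-style oversampling sketch (illustrative only; the
# Adaptive SMOTE work in the literature is more sophisticated).
# New minority-class points are interpolated between randomly chosen
# pairs of existing attack samples, so the model sees more examples
# of the rare class without duplicating records verbatim.

def smote_oversample(minority, n_new, rng=random.Random(0)):
    """Generate n_new synthetic points, each a random interpolation
    between two distinct minority-class feature vectors."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # two distinct real samples
        t = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

attacks = [[0.9, 0.1], [0.8, 0.3], [0.7, 0.2]]  # toy attack features
print(smote_oversample(attacks, 5))
```

Because each synthetic point lies between two real attack samples, the technique densifies the minority class inside its own feature region rather than inventing traffic unlike anything observed.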

The Multi-Dimensional Problem Nobody Wants to Solve

There is a related problem that receives even less attention: how we decide whether an intrusion detection system is good enough to deploy.

Most evaluations pick one or two metrics and optimize for them. Accuracy is common. F1 score is popular in academic papers. But a real-world IoT deployment requires trading off between at least four competing dimensions simultaneously: detection accuracy, computational efficiency, false positive rate, and adaptability to new attack types.

A system that detects 99 percent of known attacks but consumes more processing power than the IoT device it is protecting is not a deployable system. A system that runs efficiently but generates ten false alarms for every real threat creates alert fatigue so severe that analysts stop investigating. A system optimized for today’s attack taxonomy that cannot adapt when adversaries change tactics is a system with a known expiration date.
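One way to operationalize that trade-off is an acceptance check that requires a floor on every dimension instead of ranking systems on one headline metric. The thresholds and metric names below are illustrative assumptions, not drawn from any standard:

```python
# Sketch of a multi-dimensional acceptance check: a detector is
# deployable only if it clears a floor on EVERY dimension, so a
# system cannot buy a high headline number by sacrificing another
# axis. All thresholds here are illustrative, not from any standard.

THRESHOLDS = {
    "recall": 0.90,        # share of real attacks caught
    "precision": 0.50,     # share of alerts that are real attacks
    "cpu_budget_ok": 1.0,  # 1.0 = fits the device's compute budget
    "drift_recall": 0.75,  # recall on attack types held out of training
}

def deployable(metrics):
    """True only if the system clears every per-dimension floor."""
    return all(metrics[k] >= floor for k, floor in THRESHOLDS.items())

# A "demo winner": stellar recall, but it floods analysts with false
# alarms and falls apart on attack types it was not trained on.
demo_winner = {"recall": 0.99, "precision": 0.08,
               "cpu_budget_ok": 1.0, "drift_recall": 0.40}
print(deployable(demo_winner))  # False
```

Under a check like this, the vendor demo that optimizes a single metric fails immediately, which is exactly the point: the floors encode the dimensions a purchaser would otherwise discover in production.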

The absence of a shared, multi-dimensional evaluation framework means that organizations purchasing or deploying intrusion detection systems cannot make meaningful comparisons. A vendor can claim industry-leading detection rates while quietly optimizing for a metric that looks good in a demo and fails in production.

What Needs to Change

The path forward requires closing the distance between what researchers build and what operators actually deploy.

First, the research community needs to evaluate intrusion detection systems against realistic traffic distributions, not just balanced benchmark datasets. Testing against CIC-IDS2017 or NSL-KDD with default configurations produces numbers that are essentially fictional when compared to what a real hospital network or smart grid looks like.

Second, organizations deploying these systems need to demand multi-dimensional performance evidence before purchasing. Detection rate alone is not enough. Ask for false negative rates on rare attack categories. Ask for performance data under constrained computational budgets. Ask how the system performs six months after deployment, when the traffic patterns have shifted.

Third, and most urgently, the federal agencies responsible for protecting critical infrastructure need to establish minimum evaluation standards for AI-based intrusion detection. CISA and NIST have produced excellent frameworks. Translating those frameworks into specific, testable performance criteria for IoT security systems is the next step.

The connected devices are not going away. The attackers probing them are not going anywhere either. The question is whether the systems we build to protect them are actually built for the world those systems will operate in, or the world we wished we lived in when we wrote the training data.

Oluwapelumi Bankole is a researcher in information systems and cybersecurity at the University of Nevada, Las Vegas, where his work focuses on AI-driven intrusion detection for IoT and cloud networks. He holds a dual master’s in Management Information Systems and Cybersecurity.
