Securing IoT/OT Environments: The Password Paradox

If you’re reading this online, on a phone or laptop, chances are you have one. A password. Passwords are our faithful guardians in the digital world, securing everything from social media accounts to banking information. And the same goes for the enterprise, where companies often rely on passwords to keep their files and digitally enabled activities secure. With the growing integration of IoT and OT systems into enterprise and manufacturing, a crucial question arises: Can a simple password keep everything safe?

The password problem

In the past, a strong password was all you needed for online security. But today, that’s a fantasy. Passwords may have been the gatekeepers of our computers for years, but they can no longer be the only guardians.

Think of critical infrastructure, like a major city’s electrical grid or subway system. A weak password can be the vulnerability that opens the inner sanctum of control, paving the way for disruptive chaos by malicious actors. Brute-force attacks can crack weak passwords in an instant, and phishing scams are becoming increasingly sophisticated. Attacks that exploit human nature, such as implicit trust or curiosity, can trigger a domino effect, compromising entire systems if just one person falls for a cleverly disguised email.

The truth is that today’s cyber threats are too sophisticated for a one-size-fits-all approach like passwords. We need stronger, more robust defenses to keep our interconnected world safe.

Dick Bussière, Technical Architect, Tenable

The concept of layered defense in operational technology

Let’s set passwords aside for a moment; we’ll come back to them later. Beyond passwords, IoT and OT systems are protected by a layered security approach. Operational technology networks have long been segmented into “tiers,” as described in the Purdue Model, which divides infrastructure according to the function each tier provides. Tier 0 represents the physical machinery and processes. Tier 1 is the “cyber-physical” layer, where the kinetic world intersects with the digital world and the “controllers” live. Tier 2 is where humans come into play, the “Human Machine Interface” (HMI) level, from which operators drive the various processes. Finally, Tier 3 provides services that the lower tiers rely on.

It is clear that we can take advantage of this segregation from a security perspective. In general, the majority of traffic should flow only between adjacent layers, for example between layers 1 and 2, or between layers 2 and 3. There are of course exceptions, but they should be quite limited. If we take advantage of this natural segregation, we can create security rules that determine what traffic goes where. This segmentation limits communication between zones, minimizing the potential impact of a breach. We will call the segregation provided by the Purdue Model “horizontal segregation,” because the segments run horizontally.
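
To make the idea concrete, here is a minimal sketch, in Python, of such a horizontal segregation rule: traffic is allowed only between adjacent Purdue tiers unless an exception is explicitly listed. The function and variable names are illustrative assumptions, not the syntax of any real firewall or policy engine.

```python
# Minimal sketch of horizontal segregation between Purdue tiers.
# Rule: traffic may only flow between adjacent tiers (e.g. 1<->2, 2<->3),
# plus a short, explicitly approved list of exceptions.

ADJACENT_EXCEPTIONS: set[tuple[int, int]] = set()  # e.g. an approved data-historian feed

def horizontal_flow_allowed(src_tier: int, dst_tier: int) -> bool:
    """Allow traffic only between adjacent Purdue tiers, or listed exceptions."""
    if abs(src_tier - dst_tier) == 1:
        return True
    return (src_tier, dst_tier) in ADJACENT_EXCEPTIONS

# An HMI (tier 2) talking to a controller (tier 1) is fine,
# but a tier 3 service reaching straight down to tier 0 devices is not.
assert horizontal_flow_allowed(2, 1)
assert not horizontal_flow_allowed(3, 0)
```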

But OT facilities are HUGE. Imagine a four-unit thermal power plant or an automobile factory. Even if we have strict controls for horizontal segregation, what happens if an attacker gets inside and tries to move sideways instead of up and down? In our four-unit power plant, if one unit is compromised, the attacker can move to the other three. Similarly, in the automobile factory, someone who breaks into the paint shop can move sideways into the body shop.

Vertical segregation must also be implemented. For our power plant example, there should be no way for traffic on Purdue Level 1 in Unit 2 to flow to Units 1, 3, and 4. And if an attacker gains control of the paint shop in an auto plant, that attacker should be prevented from moving to other locations.

In summary, each zone has its specific task, and the traffic between zones must be strictly controlled. That way, if a hacker manages to sneak into one zone, they cannot simply move to another.

Vertical and horizontal segregation zones are maintained by the “security guards” of the digital world: firewalls and access control lists (ACLs). They are like bouncers at a party. They check every message and piece of data that tries to enter a zone, to make sure it has a legitimate reason to be there. Only authorized information gets through, keeping the system running smoothly.
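
As a rough illustration of what those digital “bouncers” enforce, the sketch below extends the earlier tier check with vertical segregation: traffic must stay within its own unit or zone unless an explicit ACL entry allows otherwise. The data structures and names are assumptions for illustration, not the configuration language of any particular firewall product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_unit: str   # e.g. "unit2" or "paint_shop"
    src_tier: int   # Purdue tier of the source
    dst_unit: str
    dst_tier: int

# Explicitly approved cross-zone flows; empty by default, so everything else is denied.
ACL_EXCEPTIONS: set[Flow] = set()

def flow_allowed(flow: Flow) -> bool:
    """Combine vertical and horizontal segregation into one check."""
    same_unit = flow.src_unit == flow.dst_unit          # vertical segregation
    adjacent = abs(flow.src_tier - flow.dst_tier) <= 1  # horizontal segregation
    if same_unit and adjacent:
        return True
    return flow in ACL_EXCEPTIONS

# A compromised paint-shop HMI (tier 2) cannot reach the body-shop controllers (tier 1).
assert not flow_allowed(Flow("paint_shop", 2, "body_shop", 1))
```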

Layered security can be compared to a well-designed transportation network with checkpoints, controlled intersections and dedicated lanes in all directions, ensuring smooth and safe operation.

What about 2FA, keys and digital certificates?

Other authentication and authorization mechanisms are often used to strengthen security beyond what is possible with passwords alone. The general theme is “something you have and something you know,” and authentication is strongest when at least two factors are used (two-factor authentication, or 2FA).

Using digital certificates or one-time passcodes (app- or fob-based) in combination with passwords satisfies the 2FA requirement of “something you have and something you know,” and biometrics can add a further factor: “something you are.” Together, these provide a far more robust alternative to passwords alone. Additionally, in machine-to-machine communication, digital certificates enable mutual authentication and strong encryption of the traffic between machines. Finally, digital certificates are used to digitally “sign” messages in secure communications, preventing tampering with the message content in transit. We see this application every day as we browse the web over HTTPS.
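
As an example of the “something you have” factor, here is a minimal sketch of how an app- or fob-based one-time passcode is typically computed, following the standard TOTP scheme (RFC 6238, built on HMAC as in RFC 4226). The secret shown is a made-up placeholder; a real deployment would provision and store per-user secrets securely.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time passcode (RFC 6238 / RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # current 30-second time step
    msg = struct.pack(">Q", counter)                      # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 over the counter
    offset = digest[-1] & 0x0F                            # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration only; real secrets are provisioned per user or device.
print(totp("JBSWY3DPEHPK3PXP"))
```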

Passwords remain an important part of authentication, but used alone they become more dangerous every day. That is why many online service providers are switching to “passkeys.” These act as a strong additional factor: instead of an app-generated number or a fob, your biometric data (face or fingerprint) confirms that it is really YOU using the system. Think of a password as your front-door key: it gets you in, but anyone with a copy can unlock the door too. With credential theft now a common problem, passwords on their own are no longer safe.

So passwords need to be supplemented with stronger authentication methods that include something else. Either a certificate or well-implemented 2FA is absolutely essential. In fact, even if someone steals a password, with properly implemented 2FA it is effectively useless: the system still requires the second factor to authenticate, otherwise access is denied. Forging a modern cryptographic key, such as a 2048-bit RSA key, remains computationally infeasible, and the same goes for your face or fingerprint.
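
To illustrate that point, here is a minimal, self-contained sketch of a login check that grants access only when both the password and the one-time code are correct. The in-memory user record, hard-coded code, and salt handling are placeholders for illustration, not a production design; in practice the second factor would come from a TOTP app or fob, as in the sketch above.

```python
import hashlib
import hmac
import os

# Illustrative in-memory credential store; a real system would use a hardened backend.
_SALT = os.urandom(16)
USERS = {
    "operator": {
        "pw_hash": hashlib.pbkdf2_hmac("sha256", b"correct horse", _SALT, 100_000),
        "otp": "492039",  # placeholder: would be generated by a TOTP app or fob
    }
}

def login(username: str, password: str, otp: str) -> bool:
    """Grant access only if BOTH the password and the second factor are correct."""
    user = USERS.get(username)
    if user is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), _SALT, 100_000)
    password_ok = hmac.compare_digest(candidate, user["pw_hash"])
    otp_ok = hmac.compare_digest(otp, user["otp"])      # constant-time comparison
    return password_ok and otp_ok

# A stolen password alone is useless without the second factor.
assert not login("operator", "correct horse", "000000")
assert login("operator", "correct horse", "492039")
```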

The transition to a key-based future: challenges and considerations

While cryptographic keys offer undeniable advantages, challenges exist. First, legacy systems impose inertia: many remain in service until they are no longer economically viable to operate.

Managing keys requires specialized skills and a certificate authority (CA). Careful lifecycle management is essential to prevent certificates from expiring and causing system failures. The added complexity requires a risk-benefit analysis to determine appropriate implementation scenarios. Moving to a key-based system also incurs costs for new technology, staff training, and policy development. However, the long-term benefits outweigh the initial investment, resulting in a more secure and resilient environment.
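
Because expired certificates are a common cause of outages, teams often automate expiry monitoring. Below is a minimal sketch, using only the Python standard library, that reports how many days remain on a server’s TLS certificate; the host name is a placeholder and assumes the endpoint is reachable over TLS.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> float:
    """Return how many days remain before a server's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()                        # parsed certificate fields
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])  # expiry as epoch seconds
    expires = datetime.fromtimestamp(not_after, tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

# Placeholder host; point this at your own endpoints and alert well before zero.
print(f"{days_until_expiry('example.com'):.1f} days remaining")
```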

We’re stuck with passwords (for now)

Passwords remain an unfortunate reality for some legacy OT devices that don’t support cryptographic keys. Replacing them entirely may not be an option; we become comfortable with what we know, and sometimes the cost of changing things is simply too high. Furthermore, some would question the value proposition of advanced security at lower levels of a layered security model, especially when those levels are well protected by multiple higher layers. So, what can we do? For such scenarios, strong password hygiene is critical. This includes enforcing complexity requirements, regular rotation, and implementing multi-factor authentication (MFA) where possible. Password managers further enhance security by securely storing and managing passwords, reducing the risks associated with poor password practices.
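
For teams that do enforce complexity rules, the check itself is simple to automate. The thresholds below (minimum length and a mix of character classes) are illustrative assumptions; organizations should follow their own policy or current guidance rather than this sketch.

```python
import string

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Illustrative complexity check: minimum length plus a mix of character classes."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and sum(classes) >= 3

print(meets_policy("Tr0ub4dor&3-horse"))  # True
print(meets_policy("password"))           # False
```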

The future of IoT/OT security: a holistic approach

As IT and OT converge, exposure management, securing Active Directory, and Zero Trust Architecture (ZTA) become increasingly important. Convergence is great for efficiency, but it also creates new security challenges. The golden rule is to see the whole picture, not just isolated systems.

Exposure management involves continuous monitoring and assessment of the attack surface to identify and mitigate vulnerabilities. Securing Active Directory ensures that only authorized users have access to critical systems, while ZTA enforces strict access controls. The proliferation of connected IT and OT systems introduces security risks, but it also presents opportunities to modernize security strategies. By taking a comprehensive approach, organizations can protect their sensitive assets and strengthen operational resilience.

Building a fortress for the connected world

Passwords have served us well, but the changing landscape of IoT/OT security demands more. With ever more devices connecting to the internet, we need a serious security upgrade. Cryptographic keys, layered security, and ongoing education on password best practices are all critical to a robust defense. Building that defense requires constant vigilance and adaptation, but the payoff, resilience against cyberattacks, is immeasurable. The ever-expanding world of interconnected devices demands our unwavering commitment to protecting it.

