
The IT Security Instinct That Fails in OT Interviews

A security consultant joined an OT (Operational Technology) environment and, in his second week, ran a vulnerability scan against a production PLC (Programmable Logic Controller). The scan crashed the controller. The controller managed a chemical mixing process. The process had to be emergency-stopped, and the plant lost six hours of production. He was a good IT security professional applying IT instincts to a domain where those instincts are dangerous.

OT/ICS stands for Operational Technology and Industrial Control Systems: the hardware and software that run physical processes in factories, utilities, plants, and similar environments. If you walk into an OT/ICS security interview with a pure IT mindset, you will not pass. Not because your technical knowledge is wrong, but because the priorities are inverted in ways that are non-obvious until you have worked in or around operational environments. Interviewers in this space are specifically testing for whether you understand that.

The priority inversion

IT security thinks in terms of confidentiality, integrity, and availability (CIA), roughly in that order of priority. OT security inverts this ordering and adds a priority that CIA does not capture at all:

Safety first. Physical processes can injure or kill people. A compromised safety system is not a security incident; it is a potential catastrophe. Every security action must be evaluated for its effect on safety functions.

Availability second. Industrial processes often cannot be stopped and restarted easily. Downtime has real-world consequences: production losses in the millions per hour, chemical processes that cannot be safely interrupted mid-cycle, power systems that serve hospitals. Availability is not just a business concern.

Security third. This does not mean security is unimportant. It means security controls and investigation procedures have to be designed around safety and availability constraints, not the other way around.

Candidates who have not internalized this inversion stand out immediately. It is the first thing interviewers test for.

The IT instinct that gets people filtered out

Ask a candidate with a pure IT background how they would investigate unusual PLC behavior, and you often hear something like this:

"I would isolate the affected system from the network, run a vulnerability scan to identify any exploits, pull the logs, and start triaging the alerts."

Every step in that answer could cause a safety incident. Isolating a PLC from the network might disrupt control of a physical process. Running a vulnerability scan against industrial control systems can crash PLCs or cause unexpected behavior in safety systems. Taking a system offline for forensics might mean shutting down a power plant.

Hiring managers in OT security have been burned by professionals who applied IT instincts and caused operational disruptions. The interview is often explicitly designed to surface that gap.

What the safety-first mindset looks like in practice

Question: An operator reports that a PLC is behaving unexpectedly: a pump is cycling faster than normal and the HMI (Human-Machine Interface) shows values that do not match what the field instrumentation is reading. How do you investigate without disrupting production?

"Before I do anything, I need to understand what this pump controls and what the consequence of disrupting it looks like. Is this a safety-critical function? Is there a redundant system? I am talking to the process engineers and operators before I do anything technical. They can tell me what 'unusual behavior' actually looks like in context, whether this is a known process anomaly, and what the safety envelope looks like.

Then I gather information passively: network traffic captures from a span port if available, historian trend data showing when the behavior started and whether it correlates with any network events or maintenance windows. I am looking for a timeline. If the behavior started right after a vendor remote access session, that is meaningful.

I also want to distinguish between a security incident and an operational anomaly. The discrepancy between HMI readings and field instrumentation could be a sensor failure, a communications issue, or a misconfiguration. I am not assuming malicious cause until I have ruled out benign ones.

I am not touching the PLC itself, not modifying its configuration, and not isolating it from the network until I understand what that will do to the process. If this turns out to be a security incident, the response needs to be coordinated with operations, not executed unilaterally by the security team."

This answer demonstrates the core OT mindset: safety first, consult operations, gather information passively, treat active intervention as a last resort.
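The "gather information passively" step above can be made concrete. As a minimal sketch, suppose the plant historian can export a tag's trend as a time-indexed series: you can look for the moment the pump's cycle rate first broke from its baseline, then line that timestamp up against network events or vendor access sessions. The tag name, the synthetic data, and the simple change-point check below are all illustrative assumptions, not a real historian API.

```python
import pandas as pd

# Synthetic stand-in for a historian export: pump cycle rate sampled
# once per minute, with a step change at minute 60 playing the role of
# the anomaly onset. Real data would come from the historian, not be faked.
timestamps = pd.date_range("2024-01-01 00:00", periods=120, freq="min")
rate = [12.0] * 60 + [30.0] * 60  # cycles per minute (hypothetical tag)
trend = pd.Series(rate, index=timestamps, name="pump_cycle_rate")

def find_anomaly_onset(series, baseline_window=20, threshold=3.0, floor=0.5):
    """Return the first timestamp whose value deviates from a trailing
    rolling baseline by more than `threshold` standard deviations.
    A crude change-point check: real signals are noisy, so the window,
    threshold, and floor would need tuning against known-good history."""
    baseline = series.rolling(baseline_window).mean().shift(1)
    spread = series.rolling(baseline_window).std().shift(1)
    # `floor` guards against near-zero spread on flat stretches of data.
    limit = (threshold * spread).clip(lower=floor)
    flagged = series[(series - baseline).abs() > limit]
    return flagged.index[0] if not flagged.empty else None

onset = find_anomaly_onset(trend)
print(onset)  # first timestamp where the cycle rate breaks baseline
```

Nothing here touches the PLC: the analysis runs entirely against exported data, which is the point. The onset timestamp becomes the anchor for the timeline question in the answer above: did anything on the network side change at or just before that moment?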

Other gaps that surface in interviews

The air gap assumption. Many candidates assume OT environments are heavily isolated. Modern industrial environments frequently have IT/OT convergence points: remote vendor access, historian systems connected to corporate networks, cloud monitoring. Strong candidates do not assume isolation; they ask about it.

The patching reality. In IT, unpatched systems should be remediated quickly. In OT, patching requires vendor testing and often production downtime, so patch cycles are measured in years. Compensating controls are the operational reality. A candidate who talks about patching as standard mitigation without acknowledging this reveals a gap.

Unilateral security actions. Security in OT requires deep collaboration with process engineers and operators. If your incident response instincts default to isolation and containment, you need to reframe them for OT contexts where those actions can cause physical harm.

The good news is that strong network security fundamentals transfer well to OT, as long as you understand the constraints. If you are coming from IT security, the technical shift is smaller than the mindset shift.

OT/ICS interview questions on MyKareer are designed by practitioners who understand the safety-first mindset. Try them free.