Can You Tell What Normal Looks Like on This Network?
Interview Prep · 4 min read

You are looking at network flow data and something is off. A workstation in accounting is making SMB connections to six different servers in the engineering VLAN. It has never done this before. Is this lateral movement, or did someone in accounting just get a new project that requires accessing engineering file shares?

That question, and the fact that you cannot answer it without understanding what normal looks like, is the core of every network security interview. Interviewers who have actually defended networks know that detection starts with visibility, not signatures.

The three things network security interviews always test

Understanding normal traffic

The hardest part of network defense is not detecting threats. It is knowing what normal looks like well enough to recognize when something is wrong.

A weak answer to "How would you detect lateral movement?" usually starts with IDS rules and port monitoring: alert on SMB, watch port 445. But port 445 carries legitimate SMB traffic constantly in most enterprise environments. An IDS rule alone generates noise, misses context, and leaves analysts with ambiguous alerts.

A strong answer starts with baselining: What does normal traffic look like on this network? Which systems talk to which, over what protocols, at what volume? Lateral movement detection is almost entirely a baselining problem. If workstations never talk directly to each other and you suddenly see peer-to-peer SMB traffic, that is meaningful. But if peer-to-peer SMB is common in your environment, you need a different signal.
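The baselining idea above can be sketched in a few lines: record which (source, destination, port) pairs appear during a baseline window, then flag flows that were never seen before. The hostnames and record format here are illustrative, not from any particular flow-collection tool.

```python
# Hypothetical flow baseline: (source, destination, port) tuples observed
# during a known-good baseline window.
baseline_flows = {
    ("wk-acct-01", "fs-acct-01", 445),
    ("wk-acct-01", "dc-01", 88),
    ("wk-eng-03", "fs-eng-01", 445),
}

def new_flow_pairs(observed, baseline):
    """Return flows that never appeared during the baseline window."""
    return [flow for flow in observed if flow not in baseline]

today = [
    ("wk-acct-01", "dc-01", 88),       # seen before: normal
    ("wk-acct-01", "fs-eng-01", 445),  # accounting -> engineering SMB: new
    ("wk-acct-01", "fs-eng-02", 445),  # also new
]

for flow in new_flow_pairs(today, baseline_flows):
    print("never-before-seen flow:", flow)
```

Real flow data needs aging, volume thresholds, and dedup, but the interview point is the shape of the logic: the alert condition is "new relative to this network", not "matches a known-bad signature".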

This is what I tell every candidate who asks how to prepare: before you talk about any detection tool, be ready to explain how you would establish what normal looks like in an environment you have never seen before.

Segmentation as a detection tool

Most candidates talk about segmentation as a remediation step: something you do after identifying that a flat network is risky. Strong candidates talk about it as an architectural decision with detection implications.

Every segment boundary is a place where unexpected traffic becomes highly meaningful. If a workstation should never communicate with your finance VLAN and you see it try, that is a high-confidence signal regardless of the specific technique being used. Segmentation does not just limit blast radius. It creates chokepoints where your detections are most effective.
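A segment boundary check reduces to a small policy lookup. This is a minimal sketch; the segment names and the allowed-pairs table are hypothetical examples of what such a policy might contain.

```python
# Hypothetical segmentation policy: which cross-segment flows are permitted.
ALLOWED = {
    ("workstations", "servers"),
    ("servers", "finance"),
}

def boundary_violation(src_segment, dst_segment):
    """Any cross-segment flow not in the policy is a high-confidence signal,
    regardless of the specific technique being used."""
    if src_segment == dst_segment:
        return False  # intra-segment traffic is out of scope for this check
    return (src_segment, dst_segment) not in ALLOWED

print(boundary_violation("workstations", "servers"))  # False: permitted
print(boundary_violation("workstations", "finance"))  # True: alert
```

The detection value comes from the policy being explicit: once the boundary exists, the rule needs no tuning against traffic volume, because the expected rate of violations is zero.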

When an interviewer asks "You are brought in to improve detection in a mostly flat network," they are specifically testing whether you recognize that the flat network constraint changes your approach. You cannot rely on segmentation boundaries for detection, so you need to lean harder on identity signals and behavioral baselines.

Detection logic that fits the environment

Generic detection rules solve only generic problems. Signatures catch known techniques. What catches novel or adapted techniques is behavioral anomaly detection tuned to the specific environment.

Strong candidates describe detection rules built from the environment's actual traffic patterns: How many unique systems is a user authenticating to? Are there new authentication patterns for service accounts? Are there Kerberos ticket requests that do not match normal patterns?
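One of those checks, authentication fan-out, can be sketched directly: count the distinct systems each account authenticated to and compare against that account's own baseline. The event format, account names, and threshold multiplier below are all illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical per-account baseline: distinct hosts each account normally
# authenticates to in a day.
baseline_hosts_per_account = {"alice": 3, "svc-backup": 2}

# Hypothetical auth events: (account, destination host).
auth_events = [
    ("alice", "fs-01"), ("alice", "fs-02"), ("alice", "dc-01"),
    ("svc-backup", "fs-01"), ("svc-backup", "fs-02"),
    ("svc-backup", "wk-17"), ("svc-backup", "wk-23"), ("svc-backup", "wk-41"),
]

def flag_fanout(events, baseline, multiplier=2):
    """Flag accounts authenticating to far more systems than their baseline."""
    seen = defaultdict(set)
    for account, host in events:
        seen[account].add(host)
    return [acct for acct, hosts in seen.items()
            if len(hosts) > baseline.get(acct, 0) * multiplier]

# svc-backup hit 5 distinct hosts against a baseline of 2: a service
# account behaving like a user account.
print(flag_fanout(auth_events, baseline_hosts_per_account))
```

Note that the threshold is per account, not global: a backup service legitimately touching fifty hosts and a user touching three both have their own normal, which is the whole point of baselining.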

The operational reality matters here too. Noisy detections are worse than no detections in some ways, because they train teams to ignore alerts. A candidate who understands that effective detection requires tuning, and that tuning requires understanding normal behavior, is a candidate who will make the SOC better rather than just louder.

What a strong answer looks like in practice

"You are brought in to improve detection coverage in a company with a mostly flat network. Where do you start?"

"A flat network means limited segmentation for detection, so I lean on behavioral baselines and identity signals. First, I use network flow data to map what is actually communicating with what. That gives me a picture of real behavior versus what IT thinks the topology looks like. Those are often different.

From that baseline I identify high-value chokepoints: domain controllers, file servers, backup systems. These should have well-understood access patterns. Then I build detection logic specific to this environment. For lateral movement in a flat network, I am looking at authentication telemetry: users authenticating to unusual numbers of systems, service accounts behaving like user accounts, Kerberos requests that do not match normal patterns.

The goal is detection rules tuned to this network's normal, not generic signatures that fire on any SMB traffic."

What gets candidates filtered out

Tools before visibility. Deploying a SIEM before understanding what telemetry matters is backwards.

Lateral movement as a signature problem. Signatures catch known techniques. Behavioral detection catches the rest.

Ignoring the flat network constraint. If the question mentions it, address it. Missing that means you are answering a generic question.

No mention of tuning. Detection rules that are not tuned create alert fatigue. This signals a lack of operational experience.

Only perimeter focus. A firewall at the edge tells you about inbound and outbound traffic. Lateral movement is internal. Candidates who focus only on perimeter controls are missing where modern attacker activity actually happens.

If you want to practice talking through cloud-specific network controls, the same principle applies: understand the environment before you try to protect it.

Network security questions on MyKareer test how you think about traffic, not how you memorize port numbers. Start practicing.