
Framework: The Breach

A Maldivian trading company lost access to every system in their office within two hours of letting their IT person go. This is the story of what happened next — and what it means for every business that trusts a single person with everything.

SysOps Team
April 6, 2026
6 min read

The Call Came on a Friday Afternoon

It always does.

The number was unknown. A Maldivian mobile, no name attached. The voice on the other end was calm in the way people get calm when they've already exhausted panic and landed somewhere past it.

"Our IT guy was let go this morning. By noon, nothing worked. I mean nothing. Computers, internet, the server room, everything. We don't know what he did. We need help."

That was how we met them.


A Trusted Man with a Long Leash

They had operated out of a three-floor office in Malé for eleven years, importing construction materials and running a modest but steady distribution operation. Like most businesses their size, they had one IT person. Call him Riyaz (not his real name). He'd been there six years. He set up everything: the Windows Server, the network switches, the NAS, the firewall, the staff laptops, the CCTV system, the accounting software. He knew every password, every IP address, every cable run.

When ownership decided to let him go, they told him at 10 AM and asked him to leave by noon.

They didn't think to revoke his access first.

That was the mistake. And Riyaz made sure it cost them.


What Two Hours Can Do

By the time we arrived on site that evening, the damage was visible the moment you walked in. Reception couldn't check in deliveries. Finance couldn't open the accounting system. The warehouse team was doing everything on paper. Three managers were gathered around a phone, arguing with their ISP about an outage that wasn't the ISP's problem at all.

We started at the server room.

The domain administrator password had been changed. The firewall had been reconfigured, dropping all inbound and outbound traffic except a narrow set of rules that served no operational purpose. The NAS, which held six years of business documents, invoices, and supplier contracts, had been locked behind a new encryption key. The Wi-Fi SSID still existed, but authentication had been broken at the RADIUS level. Every staff laptop was effectively an expensive paperweight.

Riyaz hadn't destroyed anything. He was smarter than that. He'd locked the doors and taken the keys.

The question wasn't whether the data was recoverable. The question was how long it would take, and how much of the business would survive the wait.


Getting In the Front Door

The first priority was domain access. Without it, nothing else moved.

We booted the domain controller into Directory Services Restore Mode — a recovery path that Windows Server keeps available precisely for situations like this. DSRM authenticates against a local recovery password set when the controller is first promoted, so the now-changed domain credentials never came into play. Within the hour, we had restored domain admin access and began auditing every account Riyaz had touched.

What we found was methodical. He hadn't acted in anger. He'd planned. Several accounts had been disabled, including the owner's personal domain login. A scheduled task had been set to run at midnight, which — had it executed — would have wiped the shadow copies of the file server. We caught it with four hours to spare.
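A booby trap like that midnight task is easy to miss by eye and easy to catch by script: dump the scheduler's task list and flag any command line that destroys recovery data. Here is a minimal sketch in Python, assuming a CSV export of the kind `schtasks /query /fo CSV /v` produces on Windows — the column names are real, but the sample tasks and the pattern list are illustrative, not the client's actual data:

```python
import csv
import io

# Hypothetical excerpt of a `schtasks /query /fo CSV /v` export.
# The tasks shown are illustrative examples only.
SCHTASKS_CSV = """TaskName,Next Run Time,Task To Run
\\Microsoft\\Windows\\Defrag\\ScheduledDefrag,N/A,defrag.exe -c
\\Maintenance\\NightlyCleanup,00:00:00,vssadmin delete shadows /all /quiet
"""

# Commands that destroy shadow copies or backups and almost never
# belong in a legitimate scheduled task.
SUSPICIOUS = (
    "vssadmin delete shadows",
    "wmic shadowcopy delete",
    "wbadmin delete",
)

def flag_suspicious_tasks(csv_text: str) -> list[str]:
    """Return the names of tasks whose command matches a destructive pattern."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        command = row.get("Task To Run", "").lower()
        if any(pattern in command for pattern in SUSPICIOUS):
            flagged.append(row["TaskName"])
    return flagged

if __name__ == "__main__":
    for task in flag_suspicious_tasks(SCHTASKS_CSV):
        print(f"SUSPICIOUS: {task}")
```

The same pattern list works against an `autoruns` export or a raw `Get-ScheduledTask` dump; the point is that the check is mechanical once the task list is in hand.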

The firewall was next. The device still had its management interface reachable from inside the LAN, and the manufacturer documented a recovery mode accessible through the console port. Within ninety minutes, the ruleset was rebuilt from a configuration backup that the company's previous IT vendor had left on a shared drive — one Riyaz apparently hadn't thought to delete.

Then the NAS.

This was the hardest part. The encryption was real and applied correctly. There was no shortcut. But the client had, almost by accident, done one thing right: their NAS had been configured to sync nightly backups to an external USB drive that sat in the owner's desk drawer, not the server room. The most recent backup was thirty-one hours old. Not perfect. But recoverable.

We restored from backup, verified file integrity, and brought the share back online.
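Verifying a restore of six years of documents by eye isn't realistic; comparing checksums between the backup source and the restored share is. This is a minimal sketch of that verification step, not the exact tooling we ran — any relative path present in the backup but missing or altered on the restored share gets reported:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large scans don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(backup_root: Path, restored_root: Path) -> list[str]:
    """Return relative paths that are missing or differ after the restore."""
    problems = []
    for src in sorted(backup_root.rglob("*")):
        if not src.is_file():
            continue
        rel = src.relative_to(backup_root)
        dst = restored_root / rel
        if not dst.is_file():
            problems.append(f"MISSING: {rel}")
        elif sha256_of(src) != sha256_of(dst):
            problems.append(f"CHANGED: {rel}")
    return problems
```

An empty result means every file on the backup made it across intact; it says nothing about the thirty-one hours of changes the backup itself never captured, which is a separate conversation with the client.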


Device by Device, Floor by Floor

With core infrastructure restored, the work shifted to endpoints. Fourteen laptops, two desktops, and a point-of-sale terminal in the warehouse. Each one needed its domain trust relationship repaired and its cached credentials flushed. Some needed Group Policy reapplied. Two machines had their local administrator accounts modified in a way that suggested Riyaz had intended to maintain a back door even after the domain came back up.

Both were wiped and reimaged.

The CCTV system, running on a separate embedded Linux device, had its admin password changed too. We reset it through the physical button on the unit. Standard procedure. Five minutes.

By 1:30 AM, the Wi-Fi was back. By 2:00 AM, the accounting software had reconnected to its local SQL instance. By 2:45 AM, the warehouse team's terminal was printing delivery notes again.

The owner, who had stayed the entire night, made tea nobody drank and said very little. When the last machine came online, he shook hands with everyone in the room, twice.


The Morning After

We came back the following Monday with a report and a set of recommendations.

The first was a credential vault: every password for every system, documented, stored securely, and owned by the business — not by any single individual. The second was role-based access, where IT staff would be given exactly what their role required, nothing more, and that access would be tied to their employment status through a process that could be executed in under five minutes. The third was a formal offboarding checklist, signed, every single time, before anyone walks out the door.
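The offboarding checklist is the piece most businesses skip, so it's worth making the idea concrete. A sketch in Python, with step names that are illustrative rather than the client's actual systems — the only property that matters is that sign-off is impossible until every revocation step has been recorded:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OffboardingRun:
    """One execution of the offboarding checklist for one departing employee."""
    employee: str
    completed: list = field(default_factory=list)

    # Example revocation steps; a real checklist would mirror the
    # business's own inventory of systems.
    STEPS = (
        "Disable domain account",
        "Rotate domain admin and service passwords",
        "Revoke VPN and firewall management access",
        "Rotate Wi-Fi / RADIUS credentials",
        "Change NAS and CCTV admin passwords",
        "Collect keys, badges, and company devices",
    )

    def complete(self, step: str) -> None:
        """Record a step as done; unknown steps are rejected outright."""
        if step not in self.STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed.append(step)

    def sign_off(self) -> str:
        """Refuse to sign off while any step is outstanding."""
        missing = [s for s in self.STEPS if s not in self.completed]
        if missing:
            raise RuntimeError(f"Cannot sign off, incomplete: {missing}")
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        return f"Offboarding of {self.employee} signed off at {stamp}"
```

Whether the checklist lives in code, in a ticketing system, or on paper matters far less than the rule it enforces: access is revoked before the conversation, and nobody signs until the list is empty.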

They agreed to all of it before we finished the meeting.


What This Was Really About

The technology here was not sophisticated. Nothing Riyaz did required advanced skill. He used access he already had, changed credentials he already controlled, and trusted that the business had no one capable of responding quickly enough to limit the damage.

He was almost right.

This is the category of threat that rarely gets discussed at conferences or in vendor briefings because it doesn't have a glamorous name. It's not ransomware. It's not a nation-state actor. It's a person who spent six years becoming indispensable, and then weaponized that indispensability on the way out.

Every company with a single IT person they trust completely is carrying this risk right now. Most of them don't know it.

Some of them will find out the same way this client did: on a Friday afternoon, when the phones stop working and the server room goes dark.

The difference between a recoverable crisis and a terminal one isn't always the attacker's sophistication. It's whether someone who knows what they're doing picks up the phone.

That Friday, they got lucky. They called the right number.