COVID-19: who’s going full doomsday prep on this?

Mr Crudley

Glock in your sock
I don't wash after a public restroom slash - I know my penis was clean before I got it out - no idea about those washroom handles and taps, etc.
I see your point, but I'll still take the free handwash despite the popularity of the no hands, no splash method :D
 

Squidfayce

Eats Squid
Imagine, back in 2019, the board of a multinational performing a SWOT analysis and placing a pandemic or epidemic inside the top 10 threats to the business. They would have been deemed crackpots.
Have had pandemic scenarios in Disaster Recovery and Business Continuity planning since forever where I've worked (not everyone does, but many always have). Having a strategy for dealing with the unexpected is a big part of risk management at large multinationals, no matter how unlikely the scenario. Even if they didn't have "pandemic" specified, many would have had loss of staff, loss of access to buildings/technology, etc. that would have covered the broad scenario of a pandemic. These plans are typically tested annually via approaches that vary between businesses, and there's usually a dedicated resource (or a few) that takes care of this in a large organisation. Fun fact: nuclear war is also a scenario covered in many multinational DR/BC plans. So the dream of your debt being wiped if you blow up the city where the banks' data centres reside is a pipe dream. If you survive, you're still in debt :p
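A very rough sketch of what I mean by impact-based planning, purely as an illustration (the scenario names and responses below are made up, not from any real plan):

# Illustrative only: a BC plan keyed on impacts rather than named causes.
# All impact categories and responses here are invented for the example.

covered_impacts = {
    "loss_of_staff":      "work-from-home roster, cross-training, temp contractors",
    "loss_of_building":   "alternate site, remote access",
    "loss_of_technology": "secondary data centre, restore from backups",
}

# A new event (e.g. a pandemic) is assessed by the impacts it causes,
# not by whether the plan literally says "pandemic".
pandemic_impacts = ["loss_of_staff", "loss_of_building", "loss_of_technology"]

def coverage(event_impacts):
    """Map each impact of an event to the existing response, or flag a gap."""
    return {i: covered_impacts.get(i, "GAP - needs a new strategy") for i in event_impacts}

for impact, response in coverage(pandemic_impacts).items():
    print(f"{impact}: {response}")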
 

BurnieM

Likes Dirt
When I worked for a major Telecom company, they gave us the task of building a DR and Business Continuity plan for the scenario where both of our Sydney data centres (30 km apart) were completely non-functional.

We produced a one-liner:
"This is a nuclear disaster. The recovery team is dead. The board of directors is dead. No plan."
 

Squidfayce

Eats Squid
Yeah, I've run that as a joke in a draft document before.

In reality though, your only solution for an outage of your two data centres as a telco is more data centres. Though for a telco, I can't really see a reason to have them on different continents. In finance, they're all over the place; no one is not paying, ever :p But in all seriousness, it covers people's funds too. Imagine losing all your money because a bank held all its data in one city that got hit by a meteor. Would be good for some, devastating for others.
 

pink poodle

気が狂っている男
Have had pandemic scenarios in Disaster Recovery and Business Continuity planning since forever where I've worked (not everyone does, but many always have). Having a strategy for dealing with the unexpected is a big part of risk management at large multinationals, no matter how unlikely the scenario. Even if they didn't have "pandemic" specified, many would have had loss of staff, loss of access to buildings/technology, etc. that would have covered the broad scenario of a pandemic. These plans are typically tested annually via approaches that vary between businesses, and there's usually a dedicated resource (or a few) that takes care of this in a large organisation. Fun fact: nuclear war is also a scenario covered in many multinational DR/BC plans. So the dream of your debt being wiped if you blow up the city where the banks' data centres reside is a pipe dream. If you survive, you're still in debt :p
It didn't look like Qantas, Virgin, and a few of their competitors had much of a plan for governments around the world shutting down their industry long term.
 

BurnieM

Likes Dirt
Yeah, I've run that as a joke in a draft document before.

In reality though, your only solution for an outage of your two data centres as a telco is more data centres. Though for a telco, I can't really see a reason to have them on different continents. In finance, they're all over the place; no one is not paying, ever :p But in all seriousness, it covers people's funds too. Imagine losing all your money because a bank held all its data in one city that got hit by a meteor. Would be good for some, devastating for others.
This unnamed Telecom company had major data centres in Sydney, Melbourne, Brisbane and Perth.
The issue was that several critical systems were duplicated, but only across the two Sydney data centres.
For years we had wanted to move one set of these critical systems to Melbourne, but that needed a dual-path, high-speed Sydney-to-Melbourne data link; we did not own one, and it was going to cost a SLoM to lease.
 

Squidfayce

Eats Squid
This unnamed Telecom company had major data centres in Sydney, Melbourne, Brisbane and Perth.
The issue was that several critical systems were duplicated, but only across the two Sydney data centres.
For years we had wanted to move one set of these critical systems to Melbourne, but that needed a dual-path, high-speed Sydney-to-Melbourne data link; we did not own one, and it was going to cost a SLoM to lease.
Ha. Architecture fail. The test of your plan clearly identified the failure point though, lol. Test success!

Though to be fair, someone senior would have had to sign off on allowing that risk to persist. Companies will often accept these risks if the likelihood of them materialising is exceptionally low. If there's no history of it having occurred, why spend money fixing a hypothetical failure? If it HAD happened before, fuck, you're really rolling the dice. It would come down to how fast it can be resolved and what the difference in cost is between fixing the failure after it materialises and paying to prevent it.
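To put some very rough numbers on that trade-off (every figure below is invented, just to show the shape of the sum):

# Back-of-envelope risk-acceptance maths. Every number here is made up.

annual_probability = 0.005       # chance per year of losing both Sydney DCs at once
outage_cost        = 50_000_000  # estimated cost if it actually happens (AUD)
fix_cost_per_year  = 2_000_000   # annualised cost of the dual-path Sydney-Melbourne link

expected_annual_loss = annual_probability * outage_cost  # aka annualised loss expectancy

print(f"Expected annual loss if the risk is accepted: ${expected_annual_loss:,.0f}")
print(f"Annual cost of fixing it:                     ${fix_cost_per_year:,.0f}")

# On these made-up numbers the expected loss (~$250k/yr) is well under the cost of the
# fix, which is exactly the sort of sum that gets a senior sign-off to accept the risk.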
 

BurnieM

Likes Dirt
Ha. Architecture fail. The test of your plan clearly identified the failure point though, lol. Test success!

Though to be fair, someone senior would have had to sign off on allowing that risk to persist. Companies will often accept these risks if the likelihood of them materialising is exceptionally low. If there's no history of it having occurred, why spend money fixing a hypothetical failure? If it HAD happened before, fuck, you're really rolling the dice. It would come down to how fast it can be resolved and what the difference in cost is between fixing the failure after it materialises and paying to prevent it.
And as we and the cheque signers would all be dead, who cares?

We got a lot of verbal feedback on our one-line (non-)plan but nothing formal.
Apparently it was discussed at the ELT but never made it to the board.
 

Scotty T

Walks the walk
And as we and the cheque signers would all be dead, who cares?

We got a lot of verbal feedback on our one-line (non-)plan but nothing formal.
Apparently it was discussed at the ELT but never made it to the board.
I'm less worried about DR and more worried about hacks these days. Saw a good one yesterday:

[Image: scare-a-ciso.jpg]
 

Scotty T

Walks the walk
Hacks are a disaster. Need to recover from them and continue operating.
In a different way though. You can't cover that sort of disaster with a backup of the same vulnerable system in a different state; being prepared for hack recovery and remediation is getting more important than covering for physical failures or geographically contained disasters. If the primary objective is data theft rather than DoS, it's a completely different set of people to engage for recovery.

As services move to the cloud, the geographic disaster is almost completely mitigated, but hacking is just getting worse and more prevalent. I'm genuinely concerned, and I'm working on improving mitigation and recovery strategies for the systems I'm responsible for. It doesn't quite keep me up at night, because I don't look after much private data, but a breach is a breach and they are no fun.
 

Squidfayce

Eats Squid
In a different way though. You can't cover that sort of disaster with a backup of the same vulnerable system in a different state; being prepared for hack recovery and remediation is getting more important than covering for physical failures or geographically contained disasters. If the primary objective is data theft rather than DoS, it's a completely different set of people to engage for recovery.
Yes, but you said "I'm less worried about DR and more worried about hacks these days". Just pointing out that disaster recovery covers hacks; how it manages them is still relevant, even though the remediation is different. If hacks weren't part of DR, the damage they'd be likely to cause would be worse.

The customer data release stuff is a concern, especially the combination of ID numbers and the now widespread disclosure of Medicare numbers via the Medibank/AHM hack. Those two numbers are all you need to apply for credit remotely, often to port phone numbers, etc. That's a huge problem. Expect Medicare numbers to get reissued in the near future.

As services move to the cloud, the geographic disaster is almost completely mitigated
100% incorrect. It just shifts the responsibility to another party and geography, and potentially creates additional risks (depending on where the data is stored). Data stored overseas also creates additional compliance obligations for Australian companies that choose such services (e.g. possibly needing to comply with GDPR as well as the APPs). There are pros and cons to running your own infrastructure versus outsourcing. Running your own gives you complete visibility of your environments, risks and controls... IF you have the resources to manage it. When you outsource, you're effectively trusting someone else to have the appropriate resources to manage it. Even though there is some trust placed in big brands like AWS, the same challenges in managing this exist for them as they do for smaller outfits. Separately, if you outsource, you have to commit resources to validating that the vendor is doing what they said they would, but with more filters and roadblocks in place. It's kind of musical chairs with resources, really.

but hacking is just getting worse and more prevalent.
Yep. I personally think this will lead to some pretty intrusive measures for the general public. Whether that's intentional (i.e. part of some grand conspiracy) or coincidental makes for a very fun topic of discussion.
 