Letter from “353” Data Centers, sandwiched between a prologue and an epilogue from John.
It appears that the good people at 365 Data Centers have decided to admit what we all know, and have informed us that it is in fact a ransomware attack. I’ll attach the letter they sent to David at BizBudding:
Facilities: EWR, LGA, IAD
Date: 5/25/2022

Thank you for your patience over the past 10 days while we worked to regain access to the impacted cloud management systems and to restore your services following the security incident of May 14, 2022.
We are now able to confirm that the May 14th security incident was a ransomware attack. We are also able to confirm that neither 365 Data Centers nor our customers were the target of this attack. The intended target was a third party whose data is stored in a dedicated environment on our cloud platform. Unfortunately, for our valued customers and 365 Data Centers, the cyber-attacker broadened the ransomware attack.
While our investigation continues, an analysis and evaluation to date by our systems team and cybersecurity experts has revealed that, aside from the targeted third party, no data was taken from the 365 Data Centers cloud environment and there are no on-going threats in the environment.
We worked tirelessly in tandem with our experts and government authorities and positioned 365 Data Centers to initiate restoration. Unfortunately, the resolution of the third-party circumstances is not in our control and continues to prevent us from moving ahead in our recovery process.
While we continue to monitor the third party’s resolution of the cyber-attack, 365 Data Centers believes that at this point in time the prudent path forward is to rebuild the affected cloud platform. This will be conducted along with an all-out effort to retrieve all data within the existing cloud environment that can still be accessed. 365 Data Centers will work with each customer who prefers to go in this direction to restore your service on a rebuilt 365 platform tailored to your current needs at our expense. In the event the ransomware attack is resolved on all fronts, we can initiate restoration of the existing cloud environment in parallel.
If your preference is to work with us to restore your service on a new 365 platform, please inform Steve Oakie, 365 Data Centers’ Chief Revenue Officer.
We are saddened by the impact this incident has caused on our many years of collaborative hard work with you to build your cloud services. Our entire organization is sorry for the significant inconvenience that this has brought to you and your business.
We will continue to be transparent by providing factual and accurate data as soon as it is verified. We appreciate your ongoing support and patience as we navigate this complex situation.
Bob DeSantis, Chief Executive Officer
James Cornman, Chief Technology Officer
John again:
Well, that’s nice to know. I have no idea if the notification of a class action lawsuit convinced them to be more forthcoming with their customers, or if it is all just a big fucking coincidence this came out today.
David doesn’t know exactly what this means and will be seeking clarification from them tomorrow, and I have no idea if the previous repeated assurances from 365 that our data is safe still stand. It’s a mystery wrapped inside an enigma inside a riddle inside some unnamed third-party vendor’s fuckup and most definitely not 365 Data Centers, that’s fer god damned sure.
Hopefully we will know more, and should our data be safe and accessible, we will move forward. If not, I will have some decisions to make about how we proceed with a new website. For now though, y’all keep enjoying your unplanned vacation at the cottage, and I’ll keep trying to figure out if the tornado just ripped off some shingles, did major structural damage, or if our home simply no longer exists. Personally, I feel better about the news today because AT LEAST I FUCKING KNOW SOMETHING FOR SURE. I feel like I have been sitting in the waiting room at the doctor’s office for two weeks because he doesn’t have the balls to tell me I have cancer. Now we wait for the oncologist to chime in. I know I can fit in another metaphor and some mangled imagery here so I will just let you all know I’ll be here until the cows come home and we’ll be giving 110 percent and working harder than a dog to get things back to normal and to keep a stiff upper chin and don’t let the bedbugs bite.
This is TERRIBLE news. They have given up on retrieving the data. They are offering to rebuild your environment if you have a backup to provide them. That’s how I read it. They are going out of business for sure, so there will be neither the time nor the money to retrieve the data even if that were possible.
I’m a bit confused by this. Are their data security protocols so lax that the backups they should have been doing as a matter of course were also snagged in the ransomware attack directed at one entity? Like they were keeping them all tied up on the same servers? Cause that seems like not a best practice.
That letter, my god, is Bob DeSantis James Cornman some AI? Who talks to angry customers like that.
> This will be conducted along with an all-out effort to retrieve all data within the existing cloud environment that can still be accessed.
I don’t like the sound of that. 🤔
> We will continue to be transparent
“Continue”?
I want the name of the Threat Actor involved. (e.g. the ransomware group name, and APT name(s) if there was an Advanced Persistent Threat actor (or more than one) involved.)
Reasons.
I am assuming it was Russian, since they mostly are. (About 75 percent of ransomware money in 2021 went to Russian-affiliated crime groups. Some have unseemly linkages with the Russian government.)
EVT, I read that last name as “Conman” several times before I realized that it was “CoRnman”.
@Edmund Dantes:
Exactly! Gross incompetence.
That’s some catch, that Catch-22:
We are now able to confirm that the May 14th security incident was a ransomware attack. We are also able to confirm that neither 365 Data Centers nor our customers were the target of this attack. The intended target was a third party whose data is stored in a dedicated environment on our cloud platform. Unfortunately, for our valued customers and 365 Data Centers, the cyber-attacker broadened the ransomware attack.
@Enhanced Voting Techniques:
“That letter, my god, is Bob DeSantis James Cornman some AI? Who talks to angry customers like that.”
“Bob DeSantis James Cornman, the evil AI”
That made me crack up for some reason lol
@Edmund Dantes
Backups were stored *off-site* in two different states.
Off-site, yes, but apparently all 3 sites were all part of the “353” Data Centers. So yes, the backups got snagged in the ransomware attack.
Having said that, I read this as the key sentence: “This will be conducted along with an all-out effort to retrieve all data within the existing cloud environment that can still be accessed.”
My read, for what it’s worth, is that they can get to some parts of the existing cloud environment, but not other parts. So it’s a crapshoot… is a particular site in the part they can get to, or not?
I guess we’ll find out.
The other key takeaway, in my opinion, is that all the data (except for the mystery site) is still there, but until/unless the double-secret third party pays the ransom, they may never be able to get to it. Though I’m not sure how they could be certain that it’s fine if they can’t get to it.
The way I make sense of *that* is that they can confirm that the hackers didn’t get into any of the not-the-double-secret-third-party areas, but the hackers so messed up the routing that “353” Data Centers can’t retrieve it.
That’s my current thinking, reading between the lines of this letter and the mention of virtualized route reflectors very early on.
@Steeplejack – They’re fucked and they know it.
About what we figured, but blargh, not the greatest news. Still, thanks for the update, John. I’m on board for whatever is next for this community.
Also, if $$ is needed to… Whatever, feel free to rattle the cup. Pet treats, ice cream, bird feeders, coffee… I appreciate *your* transparency.
This sounds like it could potentially take a while. :(
It would be such a shame to lose nearly 20 years of history if the old site can’t be recovered.
@Bill Arnold:
I’d be interested to know that too. I was about to comment on one of these threads that I’m surprised there haven’t been (to my knowledge) more Russian cyberattacks in the last few months.
@Watergirl – I’m retired, but in my work life I knew a lot about internet routing, and this makes no sense at all.
Bob DeSantis has to be related to Ron DeSantis. Closely related. Their leadership styles are very similar.
Tip on readability: Not sure how this would work on a phone or tablet because I still haven’t mastered highlighting on mobile, but on a computer you can highlight the text, which turns the background gray, and then the text looks black against the gray.
@RubberDuck — “Our house was not the target of the arsonist, the target of the arsonist was the guy renting out the basement apartment. Unfortunately for everyone else living on the other floors, the arsonist decided to make sure he got his target by burning down the whole house. We could not possibly have foreseen that someone would cause so much collateral damage just to get at a single target, therefore, we didn’t think it cost-effective to install sprinkler systems that would have protected everyone.”
I mean, wow, a lawyer reviewed that, you can be sure, and the only standard being applied was to make sure that nothing he said could be interpreted as an admission of negligence or wrongdoing.
@Gin & Tonic:
What are the consequences 365 could face in your professional opinion?
@Watergirl – Also, having all your backups reliant on one vendor is, um, not best practice.
@Goku – Bankruptcy.
I didn’t understand that at all except for the part that we still don’t know if or when we’ll ever be able to return to the old site.
@Barbara – Bingo!
@Gin & Tonic
This was in one of their first updates: “Earlier today they determined that a network peering problem was caused by virtualized route reflectors that had crashed in all their northeast data centers. ”
I don’t know what a virtualized route reflector is but the impression early on was that the hackers had scrambled the communications between various parts of the data center, so they couldn’t get at some of the data that was there. 🤷‍♀️
Of course all of this is coming from a group whose primary goal was to obfuscate, so who knows.
@Watergirl – A virtualized route reflector means you’re too cheap to have an actual (hardware) router doing the job.
@Ohio Mom
That’s exactly it. At this point, we don’t know. But at least we know more than we knew yesterday.
I’m pretty sure that the person who wrote this must have spontaneously combusted during the writing:
“We will continue to be transparent by providing factual and accurate data as soon as it is verified. We appreciate your ongoing support and patience as we navigate this complex situation.”
@G&T:
Thanks. What would that mean for the old site?
@Goku (aka Amerikan Baka)
No one can answer your question. At this point we don’t know. That’s what we know.
@WaterGirl:
OK. Thanks for the update
Like I said upthread, it would be unfortunate to lose nearly 20 years of history
@Watergirl – It’s real routing, not virtual, but it’s late and BGP is complicated so I’ll bow out.
What are the chances of retrieving at least some of the site from the Wayback Machine if 3-whatever-and-dropping never gets its shit together?
@Edmund Dantes: Srsly. No *secured* backups? What kind of chimps run this company? The letter is an impressive example of bullshit, tho.
They talk about restoring “service” but I notice that they didn’t say “restore your data”.
Trying to parse it all is probably a fool’s errand at this point, so I’ll stop trying now.
@Gin & Tonic
That’s fine. Understanding how virtualized route reflectors work won’t help in any concrete way anyway.
@Captain C:
I think it’s possible. I remember M^4 mentioned recently that he was able to recover some comments of his on BJ through the Internet Archive
@Gin & Tonic
While virtualized reflectors are there to save money, large datacenters don’t actually require that many physical routers. Generally 1, maybe 2 per rack is more than sufficient. Then another layer for every 10-20 racks. Scrambling the DNS tables wouldn’t take much; if you’re paying attention, Amazon and Microsoft seem to make that stupid mistake about every 6 months without any outside help. They fix it in hours though.
The legalese about the data maybe not being accessible is hopefully just them covering themselves instead of making promises. But they could easily have lost some drives or other hardware in the last 2 weeks and not been able to tell with everything scrambled, which would also prevent the data replication that guards against losing the last copy. Usually you find out about all the server problems that were building up over time when you start booting everything back up, even though it may have seemed to be running fine before it went down.
This kind of mess is what I call a long Tuesday. The difference between the big players and smaller cloud providers is in how much it actually affects the customers. With a big provider, maybe you notice a blip as the network starts pointing somewhere else; maybe it even goes down for a few hours. Actually allowing more than a few servers with customer data to stay offline for weeks is insane. We’ve had drive heads crash and dig furrows through the platter and still got that data back, even if it takes a while. Anything else is unacceptable. Whole datacenters offline should be shooting up news flares everywhere if 365 were actually a company anyone gave a damn about. And our CEO would be on that call every waking hour making sure it got resolved as fast as possible for something this large.
@elliottg
You may be right. Or maybe not. My guess is that they shared this information because they were notified of the class action lawsuit.
If that’s the case, it doesn’t necessarily mean that they have given up on retrieving that data. It could just mean that they had to share actual information about what’s going on.
And the double-secret third party could still pay the ransom.
I dropped a link to a tarball (7GB) of a crawl of the top level posts that goes back to 2003 (through midday May 13, 2022) in a thread last night. Not everything, e.g. no images, but at least the text and comments, as html. If there are any gaps, I might have a few other crawls – would need to poke around. (Am a packrat.)
I’m hoping the site can be restored in full, though.
Here’s a copy-paste:
—–
Here’s a google drive link to a tarball (tgz) of the crawl of top level posts/comments (back to 2003) that I mentioned yesterday.
https://drive.google.com/file/d/10NwLTn1krg2ID0sd76xerP3JUFDji0XK/view?usp=sharing
I’ve downloaded it and tested (unpacked it) and verified that it’s the one I made.
Just in case it is needed. It could form an archive html site at least, though Carlo said that it is not nearly sufficient to rebuild the site.
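If anyone wants to sanity-check their download before counting on it, here’s a quick sketch in Python (the filename is just a placeholder for whatever the Drive download ends up being called):

```python
# Quick sanity check of the downloaded crawl tarball before relying on it.
# The filename below is a placeholder; use whatever the Drive download is named.
import tarfile

with tarfile.open("bj-crawl.tgz", "r:gz") as tar:
    members = tar.getmembers()
    html_files = [m for m in members if m.name.endswith(".html")]
    print(f"{len(members)} entries, {len(html_files)} html files")
    for m in members[:5]:          # peek at the first few paths
        print(f"{m.name}  ({m.size} bytes)")
```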
Well, the raw BS there will need composting before being spread. Quite a sufficient quantity at that.
More seriously, not the preferred sort of learning experience, but it will both last over time and recede in the rear view mirror.
No deaths, no major injuries showing up on the scale so far; keep that in mind.
@WaterGirl
Wait, there’s actually a class action lawsuit against 365? I thought that was just Cole’s snark
@Goku (aka Amerikan Baka)
The class action lawsuit is for real.
OK. Keep in mind that there is a big difference between restoring the live service, which appears to be what 365 is mostly interested in doing, because that keeps their customers inside the fence, and…
THE FUCKING BACKUPS, WHICH HAVE NOTHING TO DO WITH ANY OF THIS NOISE, AND WHICH THEY *STILL* COULD TURN OVER IN A FEW HOURS, AFTER WHICH BJ COULD BE BACK, GOOD AS NEW, WITHIN A FEW MORE HOURS.
Jesus, this is more misdirection. Please. Can we keep our eyes on the backups, and not on these idiots attempting to restore their incompetent service? Every time they issue any fucking statement, please, please read it and ask yourself “does this have *any* relevance to 3xx turning over the backed-up BJ data?” If it does not, then the only way forward is still the lawsuit.
Metaphors? At least we’re able to shit and not get off the pot here! What doesn’t kill us makes us stronger.
ETA: Edit button now? Dayumn!!!
Stiff upper chin is an exquisite mixed metaphor. Given that I’ve packed on some pounds recently, my upper chin would be the only stiff one. The rest are a bit flobby.
Man, there are some smart cookies in here!
@Bill Arnold: That probably won’t fit on my Commodore 64 webserver.
@Bill Arnold: Awesome!
“ I don’t know what a virtualized route reflector is …”
Definition: It is “the mirror” in “Through the Looking-Glass”.
What assholes… their toast is burnt and they know it but won’t say it. The entire company’s future (and ours!) is in the hands of some unidentified third party’s willingness or ability to pay ransom? Git da’ fuck outta here…
And that’s not even considering that the “third party” may directly or indirectly be THE FUCKING FEDS! ( which is likely why there is no news on this)
They certainly moved the goalposts from
“we’ll be right back after this brief interruption” to
“nobody stole your information, but some of you have lost your apes”
It just kind of feels like John is playing the role of Flounder after finding out that Otter and Boone have completely trashed his brother’s car…
Hey, face it: you fucked up, you trusted us!
Unluckily for us, I’m not sure that John had the website insured… 🙂
I’m not proud of it, but one intentional tactic I use when working with clients in a potential data loss situation, if there’s any reasonable hope of recovery, is to tell them that it’s under control until the moment at which I conclude there’s nothing further I can do. My rationale is that the client is already freaking out, and I can serve them best by offering them confidence and safety for the duration of my efforts.
And, in the unlikely (but occasional) event that I really can’t save the day, the impact of the bad news would hardly have been lessened than if I’d been totally transparent with them from the start about my lack of certainty.
IOW: I’m not really sure that 353 was being totally straight when they said the data is safe, and I never really was after the third day. They were hopeful, maybe even reasonably so. But the above communication seems closer to saying that they’re not as hopeful anymore, and it’s time to start bracing for impact.
The big difference between me and them is I don’t promise or provide uptime, backup, security, or hosting, and that’s on purpose. I don’t want that kind of responsibility or liability. I will make best efforts to enroll people with best providers and systems, but lord knows I don’t want to have lost data to account for. If you’re going to run a business whose whole point is to store data that isn’t your own, you’d sure as shit better be bulletproof against data loss.
Also, too: if they’re saying they’re at the mercy of the third party they host who was the target of the attack, doesn’t that suggest their earlier statement, about the cybersecurity firm just checking out all the sites for vulnerabilities before restoring backups, was a bald-faced lie?
I am no longer optimistic, if I ever was, that we’ll ever get anything back out of 353 Data Centers — not the old site back, not the archived data, not a penny of financial compensation for John Cole. 353 Data Centers simply isn’t acting like it can do these things.
Yep, we are on our own here. Hope for recovery but plan for a future that is not dependent on it.
This is what I was afraid of. The backups are irretrievably corrupted, and 3XX does not have the resources to unscramble and/or decontaminate them.
The only question is whether 3XX can find a clean backup anywhere. Maybe yes, maybe no.
The way ransomware attacks are traditionally executed is that after breaking into the databases, the attackers encrypt all that data, whether the live data or the stored backups.
It sounds like (from this newest letter of gibberish) that not only was the mystery target client’s data encrypted, all of 356’s internal data has been encrypted in place. In other words, they have all their data, but it is now in the form of:
aasdf ASd wert qwer adsf rtyeu asdf wert zxcv wrty asdf ruyj wew
which is useless to everyone without the encryption key and the encryption method, which together might be able to unscramble the data and all the embedded keys, from front-page stories to the comments written about those stories.
Unless the mystery client AND 356 do what the attackers want them to do, the attackers won’t provide the encryption keys and methods, and that data may as well be on the Titanic at the bottom of the North Atlantic. That may mean coughing up $500,000, or providing an internal key to the data belonging to the mystery client who was the original target of the attack. Which data may be first encrypted by 365 and then that gibberish encrypted again by the ransomware hackers.
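For anyone who wants to see what “encrypted in place” means concretely, here’s a toy sketch in Python using the third-party `cryptography` package (purely illustrative; real ransomware is far nastier about key handling):

```python
# Toy demo of encryption-in-place: without the key, the data really is
# the kind of gibberish shown above. (Illustrative only; pip install cryptography.)
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in a ransomware attack, only the attacker has this
post = b"Front-page story, comments, embedded keys and all"

ciphertext = Fernet(key).encrypt(post)
print(ciphertext[:40])            # opaque bytes -- useless on their own

# Only whoever holds `key` can reverse it:
assert Fernet(key).decrypt(ciphertext) == post
```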
I’ll bet the 365 boys have data centers full of garbage data now, and have fucked up badly…
If they communicated with those boys like they communicated with their customers, like arrogant rulers of the web, no wonder they don’t have the keys to their dead kingdom!!
Well, isn’t that a charming pile of hooey from 353.
Well that bit of “communication” was lawyered to hell and back.
But I agree with everyone else – there are no accessible backups and no chance of getting them. Were it I making decisions I would start afresh with a new provider. But that’s John’s decision, and I’d support – financially if John needs it – whatever he decides.
Hot! Damn! Glad I’m no longer in that goddamned game.
I choose not to read it as direly as others do.
Asking the IT people: assuming fresh new hardware is brought in (and tested), shouldn’t a back-up from before the attack (losing only a day or two of data) be able to be pushed onto airgapped units, and the offending site sandboxed, before that data is transferred onto connected servers?
@NotMax Yes, assuming they have uncompromised backups. But given that after two weeks they have not done that, and given the wording of that email, they have no uncompromised backups. It sounds like they stored their backups in the same data centers that were attacked.
Even though this incident doesn’t appear to be getting coverage in the wider world, I hope that others who may be using cloud services (like, especially, ActBlue) are paying attention and taking precautions!
There go two miscreants
Yea, it would be awful if they lost my email and phone number! /s
@NotMax – they aren’t buying any new hardware. Lead times are months for that sort of stuff, and they won’t get priority.
Whether they can factory reset existing equipment and ensure there are no traces of the ransomware is another story.
And route reflectors let you scale your routing infrastructure by reducing the number of sessions between routers from n*(n-1)/2 (a full mesh) to n. If you have 20 routers, that means 20 sessions instead of 190.
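For the curious, here’s that arithmetic in plain Python (nothing vendor-specific, just the session counts from the comment above):

```python
# iBGP session counts: full mesh vs. a single route reflector (per the figures above).
def full_mesh(n: int) -> int:
    """Every router peers with every other: n*(n-1)/2 sessions."""
    return n * (n - 1) // 2

def with_route_reflector(n: int) -> int:
    """Each router peers only with the reflector: n sessions."""
    return n

for n in (10, 20, 100):
    print(f"{n} routers: {full_mesh(n)} full-mesh sessions "
          f"vs {with_route_reflector(n)} with a route reflector")
# 20 routers -> 190 vs 20, matching the numbers above
```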
And FYWP on my iPhone. Holding the space bar won’t let me move the cursor beyond what I can already see on the screen, so no editing. I hope this works.
I am not sure I believe what they say because I really don’t trust them any more.
If this is the truth, my first thought was why wasn’t 356 paying the ransom even if they weren’t the main target, and then I realized maybe the ransom demand wasn’t just money, but something else such as information, and I realized again how bad ransomware attacks are.
I have also been surprised there haven’t been more Russian cyber attacks on everything since the Ukraine war heated up. I was kind of expecting more in the news.
I still believe BJ was the target because they wanted to shut down WaterGirl’s organizing. The third party attack was a feint to throw people off the scent, like when an assassin kills innocent people to hide the intended target.
So: the guy being held up on the blog’s behalf is named Dave, and 3xx is addressing him as if it were HAL 9000 suggesting a stress pill.
The next scene should be cherce. Hope Dave supplies video.
[Once more with feeling. I’m leaning towards Team Baud the way this is going. ]
The supposed 353 insider on the Reddit thread said that he thought it would take weeks to months to resolve the issues. That tells me that the problem is solvable in a technical sense. It seems to me that the issue is financial and whether the MotU that own 353 figure that it’s cheaper to declare bankruptcy and walk away or to actually do their jobs and spend the money to recover the information on their boxes. IOW, will 353 be in business in the weeks/months that it would take?
I’m up for helping to pay for B-J 22.0 – Better, Stronger, Faster. Waiting for 353 just delays the inevitable, it seems to me. They have only been spouting self-protecting legalese and clearly have no concern for the people with stuff on their boxes. There’s no objective reason to think that will change.
Follow the money.
Good luck to us all, and hang in there.
Cheers,
Scott.
I know I have very little understanding of these things, but considering how long these attacks/thefts have been taking place, I don’t understand why defenses aren’t automatically put in place to prevent the possibility of an attack. I would think effective defenses would be a great selling point for potential clients.
@debbie:
I don’t really understand any of this either but I am grateful that so many knowledgeable jackals are posting their explanations.
I posted this link and snippet that I found in a Google search a few days ago. (The link doesn’t work anymore. Duh.) The marketing team of 353 Data Centers pushed their company as a leader in dealing with ransomware attacks. Maybe they should have checked with the company before offering that service. 🙄
Understanding Ransomware Attacks: How Cybercriminals Infiltrate Your Mission-Critical Data?
Oct 5, 2021 — All this information may seem daunting, but 365 Data Centers is here to help and protect your data. With over a decade of experience in the . . .
Fuck the Russian hackers. Or whoever put us at the mercy of 353’s insufficient security measures.
Watergirl:
“The way I make sense of *that* is that they can confirm that the hackers didn’t get into any of the not-the-double-secret-third-party areas, but the hackers so messed up the routing that “353” Data Centers can’t retrieve it.”
Maybe, but industry experience and cynicism make me suspect that the translation from business-speak/legalese is more like this:
365 DC: “… evaluation to date by our systems team and cybersecurity experts has revealed that, aside from the targeted third party, no data was taken from the 365 Data Centers cloud environment …”
Translation: “We think they encrypted everything as quickly as they could access it, meaning they probably didn’t scan for credit card numbers and personal information. Yet.”
365 DC: “… 365 Data Centers believes that at this point in time the prudent path forward is to rebuild the affected cloud platform.”
Translation: “You’ll need to sue us to get your money and data back.”
365 DC: “This will be conducted along with an all-out effort to retrieve all data within the existing cloud environment that can still be accessed.”
Translation: “We have some backups from 2019. If any of your files are among them, we’ll give them back to you in exchange for a settlement where you indemnify us against any other damages. We don’t have anything more recent than that, because we stopped doing any due diligence or maintenance in 2020, figuring we could blame any problems on Covid.”
365 DC: “If your preference is to work with us to restore your service on a new 365 platform, please inform Steve Oakie, 365 Data Centers’ Chief Revenue Officer.”
Translation: “Sure, we’ll be happy to bilk you for more money if you’re dumb enough to keep doing business with us.”
365 DC: “We are saddened by the impact this incident has caused on our many years of collaborative hard work with you to build your cloud services. Our entire organization is sorry for the significant inconvenience that this has brought to you and your business.”
Translation: “You’re on your own. We’re too busy working out how to monetize our remaining assets and distribute the proceeds among management, in the form of golden parachutes, before we declare bankruptcy and dissolve the company. But, really, we do feel bad about it.”
Storage rather than routing. They may have hijacked routes as part of the attack, but now the SAN is borked. From that Reddit poster I inferred that the cloud storage was a custom solution done in-house, and those guys/gals left after the company was acquired by 325.
Meaning the threat actor had no clue what they were breaking to get at the “third party”, so any key is not likely to work, and there is a lack of the in-house architectural skills needed to stitch it back together.
@debbie
MONEY is the answer you are searching for.
@JR in WV: At the data scales in question here, it is literally not possible for an attacking process to encrypt all the data; there is simply too much of it. The more practical approach is to lock the databases containing metadata and mess with the layers of management software far, far above the RAID-array level that make data administration practicable, on the theory that even though the data is safe and sound in those RAID arrays, and recoverable in principle, as a practical matter recovering it without paying the ransom would bankrupt the company.
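A back-of-the-envelope illustration of the “too much data” point (both figures below are my own assumptions for illustration, not anything 365 has disclosed):

```python
# Rough feasibility check: bulk-encrypting a whole cloud platform takes ages.
# Both numbers below are assumptions for illustration, not 365's real figures.
PB = 10**15                      # one petabyte, in bytes

total_data = 5 * PB              # assumed customer data across the facilities
throughput = 2 * 10**9           # assumed sustained read+encrypt+write: 2 GB/s

days = total_data / throughput / 86_400
print(f"~{days:.0f} days of continuous I/O")   # ~29 days -- hard to do unnoticed
```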
@Scout 211: Thanks. So promises made, but not kept. Hope that lawsuit proceeds.
@Rubber Duck: I figured money had to be in there somewhere, just not from the host.
I feel certain that one reason for the class action lawsuit is to make it “cost” too much for 365 to walk away, so it’s “cheaper” for 365 to go through the time and effort to get our data back.
Surely this class action lawsuit represents David’s (our site host’s) clients, including us, and not everyone harmed by 365.
I’m sure it’s intended as a way to compel them to jump through the hoops that it will take to get our data back rather than close up shop and walk away.
Sounds like John was right the other day when he said it’s like your house burned down with all of your shit inside.
After re-reading that scurrilous, falsehood-laden letter, I want to quote just a tiny bit to illustrate the inherent contradictions these 365 guys published in their f’ing letter:
“We are also able to confirm that neither 365 Data Centers nor our customers were the target of this attack. The intended target was a third party whose data is stored in a dedicated environment on our cloud platform.”
If “third party” had data on 365 Data Centers’ cloud platform, they were in fact customers of 365, unless there was some other contractual relationship between “third party” and the 365 people. Like the guys’ coke dealer’s customer database or something like that.
Totally not professional to frame up one customer as an anonymous third party, also too!! Now I am wondering really hard what the ransom demand to decrypt “our” data was… and how did these ignorant arrogant wankers respond to the hackers demanding that ransom?
Because pissing them off can convert the “ransomware” attack into a plain old “we destroyed your data because you pissed us off so bad” attack.
When did the feds (and which feds, it’s a large group!) get involved?
You don’t make kidnappers holding your daughter angry because they can cut parts off her and mail them to you.
@debbie
“I know I have very little understanding of these things, but considering how long these attacks/thefts have been taking place, I don’t understand why defenses aren’t automatically put in place to prevent the possibility of an attack. I would think effective defenses would be a great selling point for potential clients.”
I’m an admin/forensics guy and I run a small corporate network that attracts more than its fair share of attention. I get hit several thousand times a day.
They have to get lucky once. I have to stay lucky a thousand-plus times a day.
The attackers will always have the advantage. The only way to be truly secure is to go over to the wall and disconnect the feed to the outside world. That is safe, that is secure, that is defended.
And pretty useless.
@J R in WV — If I had to guess, it would be something like their customer is a contractor to some other entity that was the target, and the contractor was storing that entity’s data as part of its services to the entity. I would also venture that the targeted entity is either a government agency or under contract with one. I would further speculate that is a big reason for the silence and apparent secrecy. Not every customer carries a threat of law enforcement.
And, just as a random fact of interest: entities are now being sued under the federal False Claims Act for falsely certifying that they meet certain types of cybersecurity standards when they clearly don’t. These kinds of companies are not used to being regulated, even minimally. They aren’t typically prepared for this kind of scrutiny of their practices.
and as Charles Pierce says, keep above the snakeline
@Barbara
Totally agree on your first paragraph. Point by point, I have had the same thoughts. Interesting second paragraph. No one who operates like it’s the wild west wants scrutiny or standards, but we sure as hell need them.
@The Moar You Know:
Wow, I had no idea how constant that was. Thanks.
@debbie: I’ll second that remark from @The Moar You Know, and add a datum. Back in the early 2000s, it was still possible to administer a networked Linux/Unix computer and keep a process watching the system logs for attacks. I had such a process, which would send me an email with the IP address of the attacking host. This would happen maybe once a week or so, at first. Annoyed, I would go to the trouble of looking up a sysadmin at the University or corporation housing that host, so that they could walk around and give the offender a talking-to.
Within a year or two, maybe 2003 (memory fails me) the attack rate (probe rate, really) was already so high that this was an absurd approach. Automatic log-scan-and-firewall-jail solutions were becoming mandatory, because logs were filling up with *mostly* entries showing probes — essentially people trying doorknobs to see if they could find an open door.
That was 20 years ago. Nowadays, an appreciable fraction of all traffic on the net (by connection count, if not by data volume, given the size of video streams) is malware. And the system logs of any host connected to the net and reachable outside a firewall show connection scans across ports from all over the world, all day long, every few seconds, or minutes at the longest. Much of it is botnet herders looking to grow their flocks, presumably. But some is target-of-opportunity—you never know what you might score on a maladministered, high-value system…
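For flavor, here’s a minimal sketch of that kind of log watcher in Python (the log path, pattern, and threshold are illustrative, and modern tools like fail2ban do the scan-and-jail part automatically and far more robustly):

```python
# Minimal log-scan sketch in the spirit of the early-2000s watcher described above.
# Path, regex, and threshold are illustrative; real setups use fail2ban or similar.
import re
from collections import Counter

LOG = "/var/log/auth.log"        # common sshd log location on Debian-family systems
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5                    # probes from one IP before it's worth flagging

hits = Counter()
with open(LOG, errors="replace") as f:
    for line in f:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1

for ip, n in hits.most_common():
    if n >= THRESHOLD:
        print(f"{ip}: {n} failed logins -- doorknob-rattler, candidate for a jail")
```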
“we can now verify that it was a ransomware attack”? like it takes 2 goddamn weeks to figure that one out?
And, since I’m having an Old Fart’s IT Cockups In Days Of Yore moment, here’s a story about what may have been the earliest, certainly the strangest, oddly inadvertent DDoS attack in internet history (WaterGirl has heard this one already, in the dark early days of the 3xx outage).
In 1994 I was a postdoc at NASA’s Goddard Space Flight Center, working in High-Energy Astrophysics. The Internet was mostly text back then, but this new-fangled http thing that Sir Tim had bodged together at CERN had gotten kind of popular, and Netscape was Cool and The Future.
NASA, never slow in the PR game, was trying to figure out how to use the web to get as much visibility as it could. Astronomy Picture of the Day was born around that time, and Hubble pictures were especially popular content. However, the idea of decentralized content servers had not yet been introduced, and all the traffic from the population of future space-boosters that NASA management was trying to grow was being driven to web servers at Goddard, through a single largish (for the time) but finite-capacity pipe intended for research communication and data exchange.
You probably see where this is going. Comet Shoemaker-Levy 9 made world news in 1994 when, after a previous orbital pass at Jupiter had broken it into numerous fragments, calculations showed that those fragments were all heading for a collision with Jupiter itself, some time in July. NASA set up several space-based assets, including Hubble and the Galileo Jovian mission, to observe the events, and basically promised a livestream, possibly the first of its kind, certainly of its—unintended—scale.
Over the course of a week that July, as each fragment in succession flung itself into Jupiter, releasing a total amount of energy estimated at roughly 600 times that of the Earth’s entire nuclear arsenal, tens of millions of people from all over the world attempted to get a glimpse of what was going on through that data pipe. Which to us scientists now performed much less usefully than the 28.8 kbps modems that we had at home (DSL broadband was a distant dream then). I felt that we’d been DDoSed by Dan Goldin, the publicity-hound NASA Administrator, who had the tech savvy of a goldfish with a learning disability. We just had to wait for Sir Isaac Newton to sort things out—eventually the last fragment dropped, the public got bored, and we got our network back.
A few pinholes:
1) You’re never getting anything back from 365. This may as well be our new home going forward. The existence of Bill Arnold’s tarball is a minor miracle: take it and be glad.
2) Think about it: 365 is telling everyone that they were not the target. Therefore it cannot be ruled out that BJ *was* the target — or, more specifically, Prof. Silverman. Select vendors/partners/platforms/strategies accordingly.
3) If you don’t have local backup *and* offsite backups, you don’t have backups.
Looking at this sentence:
“The intended target was a third party whose data is stored in a dedicated environment on our cloud platform.”
According to them the target was someone they were hosting and it was a ‘dedicated environment’.
Except either it wasn’t actually dedicated, OR they had sloppy security which allowed the ransomware to cross out of the “dedicated environment”, OR they had a person with sloppy security who accessed one infected environment and cross-infected another.
@lee — It’s usually the latter. Like hackers figured out a long time ago that if you want to hack a healthcare system you start with the faculty practice sites and emails. They are far more likely to leave their doors unguarded, e.g., shared and minimally secure passwords.