It’s been a two-week nightmare. In the 6 years that I’ve been blogging and running websites, I’ve never experienced anything quite like what I went through with my hosting company over the last 2 weeks. The catastrophe combined every possible thing that could go wrong into one long incident, which kept me from blogging during that time and probably upset and confused my friends, readers, online colleagues and community website partners.
I’m hoping it’s all behind me now, though I can never be sure. Just so I can remember this, and share it with those affected, here is a short account of what went on:
27/04-01:51am – My dedicated server starts getting “Kernel Panics” (a bit like the Windows “Blue Screen of Death”) and rebooting. After a few of those it dies.
27/04-03:23pm – After a long email exchange, someone realizes it’s faulty memory. It takes a few hours for the datacenter to replace the memory.
28/04-12:16am – The server crashes completely. They decide to transfer me to another server. BUT, there’s no backup of the hosting environment, just the files, so they restore the last server image they could find. An email is sent to me hoping that “nothing changed during those 14 weeks”.
28/04-02:30pm – They restore the files from the broken server to the restored server from 14 weeks ago.
29/04-04:47pm – After a day of emails trying to piece this together, I finally realize where that 14-week-old backup came from. It came from a hacked server we had abandoned, which contained several rootkits. I was now once again running on a compromised server.
Spent the next day slowly getting the sites from the broken server to run again on the hacked server.
30/04-07:71am – Hosting provider finally confirms that the restored server has major security holes.
01/05-05:15am – The hosting company allocates yet another server to move to from the restored, hacked one. Turns out the new server doesn’t meet the specs of the restored server, with seriously inferior RAM and CPU power.
Ticket moved to manager. After 3 days of setting things up…
04/05-04:11pm – I realize RAM is still 2GB instead of 4GB.
04/05-08:10pm – After some back and forth, they finally confirm that the new server will be set up with 4GB.
I take another day or so to get everything running on the new one. It runs okay for a day.
07/05-06:52pm – Another kernel panic on the new server.
08/05-12:17am – Server goes down. I don’t get any replies from the datacenter or support.
09/05-10:31am – After a day of no response, I write to Customer Care, briefly explaining that it’s inconceivable that no one will tell me what is going on, who’s taking care of things, and when the server is expected to be back online. Someone promptly responds that they’re sorry and that they’ll have their best tech work on the problem.
I sent follow-up queries 4 hours, 10 hours, and then 18 hours later. Nothing.
10/05-08:27am – Someone finally responds. The server was rebooted, upgraded, etc. etc. — should be alright from now on.
All in all… 124 email exchanges over a period of 2 weeks. About 9 days of sporadic downtime.
I had to take care of all this while trying to keep it from affecting my regular life, which does not revolve around this little side-hobby turned disaster. I honestly can’t begin to relay the frustration.
After all, you are the best!
I am impressed by your patience. I've decided to give itainan an observation period before posting.
Hanjie – yeah, understandable. I hope I was able to restore all your posts.
pengyou – thanks for understanding, friend.
Hope that your blog is O.K. now.
Nitzan – Thanks. Been up and running with no problem for 5 days straight. Seems alright now.
I can read your blog again~~~happy!!