Tweakers 18 – Hosting History 2008-2016
Eighteen years ago, Femme started this site on a small shared-hosting account at Pair Networks. Tweakers now runs on more than forty different servers and devices, spread across three server racks in two locations. That did not happen overnight, of course: along the way we learned a great deal, made mistakes, replaced servers and carried out moves.
Eight years ago we wrote a series of articles for Tweakers' tenth anniversary, so it is high time for a sequel. This sequel is shorter than the trilogy that preceded it, simply because far less went wrong; the most significant moments were server upgrades, during which everything went smoothly.
[Photos: file server Athos in service; Athos with MD3000i, front and back; 2015 storage servers]
Several file servers through the years
In Hosting History 1998 – 2001, Femme describes how Tweakers grew from a simple shared-hosting account to its first steps in the colocation world, with two servers at Fireworks Internet. That article is followed by Hosting History 2001 – 2004, in which we continue the story: from two servers at Fireworks Internet to a rack full of servers at True in Redbus. The final part of the trilogy is Hosting History 2005 – 2008, in which we pick up the story at Redbus and later move to True's suite at euNetworks. We also look into the turbulent history of our load balancers and file servers.
The next step: 2008 – 2010
At the end of the previous series we had just moved to euNetworks and planned to build our storage environment on an iSCSI shared disk with GFS on top. After several weeks of testing, however, GFS turned out not to be as stable as we had hoped. In fact, once a file grew beyond 4kB, GFS returned an empty file. Since the hardware was already in place, we decided to switch to OCFS2.
OCFS2 was not particularly stable either: it crashed frequently, or nodes started quarreling and pulled the entire cluster down with them. The low point was a STONITH series in which the servers took turns rebooting one another. Even manual maintenance caused problems; a simple reboot of a network switch led to trouble on the file server, turning theoretically brief downtime into something much longer.
Remarkable that events from hundreds of years ago
can cause bugs in our modern software
Quite apart from the problems we had with OCFS2, our iSCSI server also turned out not to be entirely bug-free. Whenever a disk died, which happened frequently, the whole appliance crashed and refused to operate normally until the disk was replaced. That had nasty effects on the servers using the iSCSI exports, such as a load average of more than 10,000.
MySQL also had strange problems, causing the forum to lag for a few seconds to minutes every day. After a long debugging session it turned out that the server crashed if the profile page of a user was requested and that user had entered a birth date between 0000-01-01 and 0000-02-28. For such dates MySQL tries to count back to the number of days since the beginning of the era, but due to various calendar adjustments over the past 2000 years this came out to a negative number, whereupon MySQL simply decided to restart. Remarkable that events from hundreds to thousands of years ago can lead to bugs in our modern software.
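To illustrate the kind of arithmetic involved, below is a minimal, hypothetical sketch of a TO_DAYS-style day count under a proleptic Gregorian calendar. This is not MySQL's actual implementation, which handles historical calendar changes differently; the point is only that for year-0 dates the count comes out negative, exactly the sort of value a server may not expect.

```python
from datetime import date

def days_since_era(year, month, day):
    """Days since 0001-01-01 (= day 1), proleptic Gregorian calendar.

    Simplified sketch: leap days within the given year are ignored,
    so it is only exact for dates on or before February 28.
    """
    y = year - 1
    # Days contributed by all prior full years, including their leap days.
    days = 365 * y + y // 4 - y // 100 + y // 400
    # Day-of-year using common-year month lengths.
    month_days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return days + sum(month_days[:month - 1]) + day

print(days_since_era(1999, 1, 2))  # matches date(1999, 1, 2).toordinal()
print(days_since_era(0, 2, 28))    # negative: year 0 precedes the era
```

Note that Python's floor division keeps the leap-year corrections consistent for negative years; a C-style truncating division would give yet another answer for year 0, which hints at how easily such edge cases diverge between implementations.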
Another new database server
In 2009 we once again replaced the primary database server, this time with Artemis 6, the sixth new database server in less than a decade. This was the first database server whose data no longer sat on a large number of spinning disks, but on a disk array consisting of six 50GB SSDs. The memory upgrade was impressive too: from 16GB in Artemis 5 to 72GB in Artemis 6.
This new database server was so powerful that we could undo the database split that had been necessary eight years earlier. Moreover, we could use the freed-up database server in a master-slave replication setup, so that even if a database server were to die completely, as in Big Crash 3, we would never suffer more than a few seconds of data loss.
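For context, a classic master-slave setup in the MySQL versions of that era looked roughly like the sketch below. The server name, credentials and binlog coordinates are placeholders, not our actual configuration.

```sql
-- my.cnf on the master (server-ids are placeholders):
--   [mysqld]
--   server-id = 1
--   log-bin   = mysql-bin
-- my.cnf on the slave:
--   [mysqld]
--   server-id = 2

-- On the slave, point replication at the master and start it.
CHANGE MASTER TO
    MASTER_HOST = 'artemis6.example.net',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

-- Check replication health and lag (Seconds_Behind_Master).
SHOW SLAVE STATUS\G
```

Because the slave continuously replays the master's binary log, it lags by at most the events still in flight, which is why a completely dead master costs seconds of data rather than hours.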