“I really wish I had a dedicated Linux computer to run computer vision algorithms on,” said my fiancée a couple of weeks ago. If you had been there, you would have been blinded by the metaphorical light bulb that lit up over my head. You see, just the week before, my friend and co-worker had ordered an old, decommissioned (complete with “non-classified” stickers!) Apple Xserve off eBay for a mere $40. Like my fiancée, he wanted a machine for a special purpose: testing compilations of open-source software on a big-endian architecture. I was quite envious that he was able to hack on such cool hardware at such a low price. But I wasn’t yet ready to bring out my wallet. I couldn’t justify indulging a new hobby without good reason; I was stuck waiting for just the right impetus. I didn’t wait long. My fiancée’s wish became my command!
Too long? Skip to the good stuff.
I immediately related the story of my co-worker to her: a server for just $40! Granted, we would want a little-endian, x86-64 architecture. Plus, for her algorithms and my virtual machine research we’d probably want a lot of cores and as much RAM as possible. Oh yeah, did I mention I also wanted a beefier machine at home so I could manipulate large virtual machine images? Virtual machines (VMs)! We’d need CPUs with VT-x or AMD-V so we could run VMs with hardware acceleration. VMs run at a snail’s pace without acceleration, and that would make the machine useless to me.
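As an aside, whether a CPU exposes VT-x or AMD-V can be checked on Linux by looking for the corresponding CPU flags. A minimal sketch:

```shell
# vmx = Intel VT-x, svm = AMD-V; the output is the number of CPU
# entries advertising the flag.  0 means no hardware virtualization
# support is exposed to the OS (or it is disabled in the BIOS).
grep -Ec '\b(vmx|svm)\b' /proc/cpuinfo || true
```

Note that a count of zero can also mean the feature exists but is switched off in firmware, so it is worth checking the BIOS before giving up on a machine.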
Focused on my quest, I started scanning eBay listings daily. My co-workers even began to notice and started asking me if I was looking for something specific. I responded that I was toying with the idea of trying to grab some cheap data-center-class hardware (for the astute, “cheap data-center-class hardware” should be an oxymoron). I was worried my project would end in failure, and I wasn’t quite ready to announce my larger plans to the whole world. After several days of failed bid attempts (I always seemed to get sniped in the last few seconds), I finally found what appeared to be the perfect fit.
There is a lot of conjecture on where the Dell 1U rackmount model CS24-SC came from. Some people say Facebook data centers. Others just say that it was mass-produced for “clouds.” Whatever these servers were used for, they were all retired by the thousands and show up all over eBay. The general consensus is that Dell never sold these to general customers; the CS24-SC was a special custom-designed server sold by the tens of thousands to certain large customers. Thus, the CS24-SC has no support from Dell. I haven’t been able to find anything outside of what random other CS24-SC owners have found in the years since the great decommission event.
The CS24-SC has a few variations, but they don’t deviate significantly. The one I had in my sights came with two quad-core Xeon E5410 CPUs @ 2.33 GHz. OK, fairly beefy compute from a few years back, giving us eight real cores in total. It had 8 GB of RAM installed, which felt a little wimpy. Articles from second-hand owners online conflicted on the maximum amount of RAM the CS24-SC supports: some said 48 GB, others said 24 GB was the max. Well, it didn’t matter, because the most RAM I could find at a reasonable price was 24 GB of data-center-class ECC RAM (PC2-5300P, for the interested). Cool, what are we still missing? Oh, most of these second-hand machines don’t come with hard drives or hard drive caddies.
A quick visit to Newegg and I identified a cheap Seagate 1 TB hard drive (ST1000DM003) to slot in. Meh, let’s do this right and add an extra hard drive for RAID1 to protect our work. I also threw in two CAT6 Ethernet cables so we could use both of the CS24-SC’s gigabit network ports, and a power cable. Well, that’s about it, right? The server on eBay came with its own case, 400-watt stock power supply, motherboard, and other needed components.
We had to wait a week and a half, but finally the CS24-SC arrived. I anxiously picked it up at our local FedEx location just up the street. My fiancée and I unboxed it and hooked all the components together. She tried putting in some of the RAM herself, so this counts as a date, right? We were both worried that it wouldn’t boot, and in a sense that fear was self-fulfilling. After plugging in a VGA monitor, we saw only a black, blank screen. Uh oh, maybe the hardware was bad? Or maybe the RAM?
I racked my brain for ways to check on this system. I plugged the Baseboard Management Controller (BMC) port into my router. Based on the router’s DHCP client table, I guessed which device on my network was the BMC. My hunch was confirmed when I port-scanned it and discovered port 81 open and running an Apache server. Visiting the server in my browser, I was presented with a login prompt. I was getting desperate and worried. I had thought that even if the VGA port was bad for some reason, we’d at least be able to get into the remote console. But how to get past this login screen?
I tried several username/password combinations, and luckily root/root worked. I found out later online that it is the default combination. Thank you to whoever left this at the default, or reset it! If you sell a server with such a management console, please reset the credentials to the defaults if you customized them at all. It turned out that the VGA port wasn’t bad; we just didn’t have the monitor plugged in before the BIOS flashed its screen, and the system went to a blank screen after failing to boot an OS.
Okay, phew, things seemed to be working. I downloaded Ubuntu 14.04 LTS Server and copied it onto a USB stick. Our CS24-SC had no trouble booting the Ubuntu installer off of USB. We installed Ubuntu, named our server “phoenix” after the ever-reincarnating mythical bird, and started customizing our CS24-SC. The two hard drives, the 24 GB of RAM, and the whole system were recognized perfectly by the BIOS and Ubuntu. The only lingering issue is that Ubuntu doesn’t seem to display properly through the VGA interface after it boots. GRUB displays fine, and so do the early-stage kernel messages. Perhaps this is just a driver issue I need to track down. Also, the fans on the PSU don’t spin, but the PSU doesn’t appear to be going bad yet.
Hardware virtualization works, and we are setting up our own work environments within Vagrant-managed VMs. I’m using this opportunity to experiment with some advanced Linux functionality I’ve never tried before. Our two hard drives are not in a traditional RAID1; I’m using the new btrfs file system to mirror our root partitions. There would be some work involved in making the second hard drive bootable, but we won’t lose our data. I set up the dual gigabit ports as a single bonded virtual device using the Linux kernel’s balance-alb mode to balance inbound and outbound TCP flows across both ports.
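For reference, a balance-alb bond on Ubuntu 14.04 can be declared in `/etc/network/interfaces` roughly like this (a sketch, not my exact config; it assumes the `ifenslave` package is installed and that the two ports show up as `eth0` and `eth1`, which may differ on your machine):

```
# /etc/network/interfaces (sketch)
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet dhcp
    bond-slaves none
    bond-mode balance-alb
    bond-miimon 100
```

balance-alb is attractive here because, unlike the 802.3ad mode, it needs no special support from the switch: it balances transmit load by driver statistics and receive load via ARP negotiation.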
What were the total costs? Here is a breakdown of all the hardware:
| Item | Qty | Unit Price | Subtotal |
|---|---|---|---|
| Dell CS24-SC Rackmount Server | 1 | $120.00 | $120.00 |
| 24 GB PC2-5300P RAM | 1 | $64.98 | $64.98 |
| Seagate 1 TB HDD | 2 | $54.99 | $109.98 |
| BYTECC Power Cable | 1 | $4.99 | $4.99 |
| Shipping [Newegg Order with Cables] | 1 | $2.99 | $2.99 |
| Taxes [RAM eBay Seller] | 1 | $3.90 | $3.90 |
There you have it, an old but data-center-worthy home cloud server for only $310.28. Just for fun I tried customizing a hypothetical order on Dell’s website for new hardware configured the same way our CS24-SC is: it came out to over $3,300, and with discounts it only drops to $2,400. Simply adding a second CPU on the Dell website costs over $500, more than our entire setup! The more modern hardware is faster, but our little CS24-SC is almost 90% cheaper. Thanks for reading my story of how I built a $300 home cloud server. And now maybe you can too, with a little elbow grease and eBay.
tl;dr Quick Server Specifications:
| Component | Specification |
|---|---|
| CPU | Two Quad-Core Intel Xeon E5410 @ 2.33 GHz |
| Memory | 24 GB ECC RAM |
| Disk | Two 1 TB HDDs (RAID1) |
| Network | Two Gigabit Ethernet Ports |
That is terrific, but it does require technical know-how and hard work to complete the project successfully. Congratulations!
Just like fixing up an old car or motorcycle, yes it requires expertise.
But, if you love it as a hobby, it certainly is rewarding 🙂
Insightful article. Are you planning on outlining the set-up process for the software side?
Yes actually I was, definitely needs to be a separate article.
Especially the Vagrant stuff and how I’m using it to manage a “microcloud” at my home.
I agree. There’s nothing “cloud” about this. What are you going to add to turn it into a cloud server? When I think of home cloud server I think of running programs and storing data on a box I control, but can access when I’m out in the world. Old hardware I’ve got; I’d like to know more about the software side of the equation.
I do a lot of cloud computing research. This post is just showing the hardware I got to support my work at home.
I manipulate VMs and often run with a customized, modified hypervisor and cloud computing stack (OpenStack in my case).
I didn’t talk deeply about my research in this post, but I did state that I’m currently managing VMs via Vagrant for both her and me.
I will also be running benchmarks with my experimental cloud software I use in my research.
Also, in all the talk about “cloud” we often forget there actually is a _hardware_ side to the story.
This article was focused mostly on that side of the story.
Uh… exactly what is “cloud” about this? Why are you using the term “cloud”? It’s a computer in your home, running Ubuntu. “Cloud Server”.
It runs a modified hypervisor and cloud software that I use in my research.
I am testing features out that no cloud in the world has today 🙂
Great article! How loud is your server?
It sounds like a hair dryer on a low setting. It’s not that bad unless it’s under a high load and spinning the fans at maximum. I’d call it bearable.
Very interesting, but why did you choose to buy a server instead of using Amazon EC2? Pros and cons?
Do you test it a lot? (Because I think you could use the university’s servers.)
Because I use a modified hypervisor in my cloud research.
I need to *be* the cloud 😉
I can’t just use VMs; I’m the one who manages them.
What about your electricity cost…
Not so bad, it’s a 400W server and we don’t use it 24/7. Also, it draws less power when not under load.
Have you connected something like a Kill A Watt to see what the power draw is? I think you’ll find that it’s pulling an absurd amount of power. As a general rule of thumb, each watt costs about $1 per year to run 24/7. That means if it’s pulling 300 watts, as I suspect, you’re paying an extra $200/year over more energy-efficient hardware. It may still be worth it, but I wanted to point out the hidden cost. Sometimes it’s worth replacing old hardware just for the power savings from a financial aspect.
Totally true. But for our use case, research workloads, we only turn it on sporadically (2-4 times per week).
Doesn’t sound like a very good idea.
The CPU itself is not very powerful, and now you have a 19″ server lying around with small, noisy fans.
Instead you could have bought consumer hardware (like a $20 normal PC case, etc.). It might have been a little more expensive, but quieter, much faster (and therefore better suited for computer vision algos), and easier to handle: spare or replacement parts are cheap, and you don’t need special hardware that fits in a 19″ server case. Put in a graphics card for your vision algo: no-brainer.
This server also looks like it uses a lot of power all the time.
Yes, I know. It was more of a fun project for me 🙂
Actually, graphics cards are not automatically more useful for computer vision.
So, what is your annual energy bill going to be for this data centre class hardware?
I’ll let you know in a while.
I do need to measure its power draw!
Try ‘nomodeset’ and/or remove ‘quiet’ from your GRUB command line. That should ‘fix’ your VGA boot issue.
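For readers hitting the same blank-screen issue, the suggested change amounts to editing `/etc/default/grub` and regenerating the config (a sketch; your existing default line may differ, so back up the file first):

```
# /etc/default/grub (sketch)
# before (typical Ubuntu default): GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset splash"
```

Then run `sudo update-grub` and reboot. `nomodeset` keeps the kernel from switching graphics modes early, and removing `quiet` makes boot messages visible so you can see where output stops.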
I’m not a huge fan of old hardware, basically because the TCO is quite high if you add power consumption to the equation. It might be worth checking the TCO for 2-3 years 24/7 uptime 😉
You are the most helpful person on both this blog and Hacker News.
Thanks for reading the article, understanding things, and actually being helpful.
You have redeemed my faith in the humanity of the Internet 🙂
I did exactly this about two years ago. Same machine, also Ubuntu. I used two 1 TB drives and a small SSD though, with 32 GB of memory. Mine wouldn’t boot properly at first until I did a BIOS update.
Anyway, THE NOISE!!?!??!
It’s like 4 hair dryers constantly running, and it heats the spare room to furnace temperatures. Nice in the winter though. Anyway, I had a lot of fun, compiling wicked fast, running Plex server wicked fast, set up owncloud.
After about 9 months I had to decommission it since my fiancée (wife now) hadn’t slept for 9 months due to the noise. It sat dormant for another 9 months, then eventually got dumped outside near the trash a couple of months ago when we moved. Since then I’ve been using AWS and a couple of Intel NUCs, and if I feel the need I’ll get an HP Z620 like the one under my desk at work: better spec than the CS, far lower power use, and silent.
Nevertheless, good luck, sleep well, and try not to have a heart attack when you get the electricity bill.
Haha, yes I understand the noise issue (we both do).
We aren’t running 24/7, so it really isn’t an issue.
And I waited for fiancée buy-in before actually purchasing 😉
Yeah, I had buy in, but neither of us quite realised how loud it was gonna be. Seriously though, have a look at the used HP Z600s going on eBay if you want a silent alternative at any point.
Gotcha, we are perfectly fine with the noise value on ours.
Thank you so much for your suggestion, now that I’ve done this I might be looking to expand my “mini-cloud” in the very near future 🙂
It saddens me that you simply threw this away; at least give it away to someone who wants it! The first step in recycling is re-use. And since I’m using one of these almost 4 years later (one I just acquired), I’m sure someone could have used it instead of it becoming landfill material.
Came here to respond to the GRUB thing: if Treffer’s approach doesn’t work, check whether GRUB is outputting to a serial console instead of VGA.
Thanks also 🙂 I’ll try and remember to let you guys know what works.
How is this a cloud? Short answer, it isn’t.
There are two senses in which I meant cloud server.
As stated in the article:
(1) This hardware was previously used in a “cloud” somewhere, so having it at home is having a “cloud server” in the literal sense.
(2) I do cloud computing research.
I didn’t talk deeply about my research, but I generally operate with a modified hypervisor (QEMU) and run tests with virtual machines.
So it will definitely be used to deploy experimental, research cloud software.
I operate often with OpenStack. I am undecided as to whether or not I will run OpenStack on it right now.
As I said in the article, I am running VMs managed by Vagrant right now (so, very cloud-like running multiple VMs for different tenants, both me and her).
You will want to rethink your decision to go btrfs on your new disks.
Btrfs, when it decides to puke, pukes so badly that unless you have alternate backups, all your data is simply gone. Even in its mythical “RAID1” setup, when it corrupts the disk, the “second copy” is also gone. (Been there, seen that happen; twice, actually.)
There’s a reason why it is still marked “experimental.” It is, and if you value your data, you’ll rethink using it.
What version of btrfs were you using?
I have heard these reports, and waited until now to use it for this very reason.
CoW copying of files is _so attractive_ in duplicating VM disks that I wanted to try it out 🙂
I see on Wikipedia that it has been marked stable since July 29, 2013, as of Linux kernel 3.10 (well over a year of supposedly stable development now).
I’m a little less trusting of the state of BTRFS. I use ZFS instead for these features. Performance of ZoL is decent these days and with 0.6.3 they’ve fixed the annoying deadlock-when-system-swapping bugs. Even when those bugs were present it always flushed pending writes.
Right, I felt both lagged in terms of the quality I wanted to see.
But I always thought, at least in the Kernel, btrfs would someday win.
We are backing it up with the knowledge that btrfs might screw up.
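For anyone curious about the CoW cloning mentioned in this thread, the effect is easy to try with coreutils. A minimal sketch (file names are made up; `--reflink=auto` makes a CoW clone on btrfs or XFS and silently falls back to a normal byte copy on other filesystems):

```shell
# Create a stand-in "VM image" and clone it.  On btrfs the clone
# shares extents with the original until either file is modified,
# so cloning a multi-gigabyte image is nearly instant.
printf 'pretend-vm-image-bytes' > base.img
cp --reflink=auto base.img clone.img
cmp -s base.img clone.img && echo "clone matches original"
```

With `--reflink=always` instead, `cp` fails loudly rather than falling back, which is useful for confirming the filesystem actually supports reflinks.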
Did you look into other more quiet and less energy consuming options?
I think my wife would kill me if I tried something like this in our small apartment.
So our main guiding function was (1) maximize memory and cores, (2) minimize cost. That basically ruled out the “small plug in the wall” super-quiet, super-low-energy devices. It even ruled out laptops, as their compute/memory density was not high enough for us.
I mean, she already has a high-end MacBook Pro, and still wanted a dedicated high-memory system for some CV processing.
So we turned to these servers.
Since she’s fairly technical like me (read, our interests are well-aligned I think), she has no problem having such a thing at home 🙂
Great to hear of another great use of the CS24. I was just given one of these for free, and while dated, it’s still very usable. The price/performance ratio is ridiculously great, as I’ve seen these for as cheap as $40 as of late.
Besides the drives and memory, did you ever add anything else to the hardware? It looks like there’s a space for a DVD/CD-ROM drive (my searching brought me here), but it doesn’t seem like anyone’s done it, so far as I’ve found.