Microsoft supercharges Bing servers with new programmable processors


Microsoft has made a radical decision to essentially overhaul the processors used inside the servers powering Bing. Under a project codenamed Catapult, Doug Burger at Microsoft Research has been working not to physically remove the Intel processors, but rather to complement them with field-programmable gate arrays (FPGAs) from Altera. The new components can be reprogrammed by the tech giant specifically for use with its own software and tools, and will handle processing for search.

Microsoft Research ran a pilot of Burger's concept, deploying a 1,600-server farm using the new chips. After testing and analysis, Microsoft has now given the green light to roll out the FPGAs to the company's live data centers, which is expected to kick off early next year. It's an interesting solution to a problem most major web-based companies face: processing power and capacity.

Altera's chips aren't new and have been around for some time, but they're increasingly used by bitcoin miners and even some Wall Street firms because of their many advantages over 'ordinary' processors. The customization now available to Microsoft has enabled it to build a super-search network called Catapult (derived from the project codename). Each of the 1,632 servers pairs a single Intel Xeon processor with a card carrying the Altera FPGA chip.

The new system essentially takes search queries from Bing and offloads the work to the FPGAs, which are custom-programmed for the processing required to calculate the order in which search results are displayed. While speed increases and added capacity are good things overall, this won't mean Bing will suddenly "seem faster" to consumers using the front-end web interface, but it will enable Microsoft to cut its running costs in half and reduce energy consumption.
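To make the offload idea concrete, here is a minimal, purely illustrative sketch (not Microsoft's actual code — the `fpga_score` function is a hypothetical stand-in for the ranking kernel Catapult runs on the Altera cards): the host CPU parses the query and sorts results, while the scoring step is handed to the accelerator.

```python
# Illustrative sketch of the Catapult-style split: the host CPU handles
# the query front end, while an accelerator scores candidate documents.

def fpga_score(query_terms, doc_terms):
    """Stand-in for the ranking kernel offloaded to the FPGA;
    here, just a simple term-overlap score."""
    return len(set(query_terms) & set(doc_terms))

def search(query, index):
    """Host-side path: parse the query, offload scoring, sort results."""
    terms = query.lower().split()
    scored = [(fpga_score(terms, doc), doc_id) for doc_id, doc in index.items()]
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored if score > 0]

# Toy "index" mapping document IDs to their terms.
index = {
    "a": ["fpga", "search", "ranking"],
    "b": ["cooking", "recipes"],
    "c": ["fpga", "hardware"],
}
print(search("fpga ranking", index))  # ['a', 'c'] — "a" outranks "c"
```

In the real system the scoring function is implemented in FPGA logic rather than software, which is where the speed and power savings come from.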

"If all we were doing was improving Bing, I probably wouldn't get clearance from my boss to spend this kind of money on a project like this," noted Peter Lee, the head of Microsoft Research. "The Catapult architecture is really much more general-purpose, and the kinds of workloads that Doug is envisioning that can be dramatically accelerated by this are much more wide-ranging.

Microsoft isn't the first company to look at increasing the capacity and efficiency of its web services. Both Facebook and Google have been exploring hardware improvements, including low-power ARM processors.

Project Catapult is just another intriguing concept being explored at Microsoft Research, which never fails to surprise us with its results.

Source: Wired



There are 38 comments.

Andrei Dorin says:

Seems like no one cares

MagusShadow says:

But they even made the joke in the article itself! It was just ASKING for this comment outright!

Loosen up.

MacDaMachine says:

I don't think this will ever get old

jeddo45 says:

Whoever made this stupid comment up needs to sue his brain for nonsupport.

Brought to you by the Nexus M8

terrokkinit says:

Aww, poor old-timer's joke. Here's your dentures...get some rest ;)

louisoneal says:

Seems processieor

I wonder if it was for Cortana?

Cortana can't be stuck in traffic!

E Lizzle says:

Seems like the takeaway here is that Intel processors suck, if you can write code for an FPGA that runs faster than code on a normal microprocessor.


therealprof says:

Well, that or they don't have any good programmers so that a single freak can make the rest of the company look terribly bad by optimizing a FPGA version so much that it runs even faster than the regular CPUs...

Search algorithms are so much better suited to general-purpose CPUs than FPGAs that one really has to screw up to have a multicore CPU be slower than a rather pedestrian FPGA.

And no, no serious coin miner would use an FPGA (although it would be much more suited to mining, rather than search processing!) these days as they'll likely never break even next to the flood of cheap GPUs and ASICs.

EddieLomax says:

Intel processors are fine, and that's coming from an FPGA engineer. The FPGA is so many times faster than a general-purpose processor at many jobs because we can redesign the hardware to fit the job.

You can go even faster if you optimise the silicon down to raw gates (standard cell ASIC), and even faster if you design the logic right down to customising every bit of metal, polysilicon and diffused region (full custom ASIC). Only small bits of very mass-produced chips, like an Intel processor or analog logic, get designed like this; you can recognise these areas in a picture of a chip design as they are generally very neat and ordered, while standard cell logic looks like a mess of wires connecting up horizontal rows.

So if there is enough time, and enough money to justify it, then a more customised solution is generally available, if you need a processor to be a "jack of all trades" solution then an Intel or ARM will do the job, it will generally be "a master of none" though :)

mikosoft says:

I am not sure if you understand what this is about.

Actually, Intel microprocessors don't suck. They are universal, multipurpose processors and as such are bound to be slower than specialised chips. FPGAs are not specialised as such, but they can be programmed for a specialised task set - effectively creating a specialised processor that, by definition, WILL run a given task faster than a universal processor. That is precisely the reason why FPGAs are used where speed matters more than universality - as is precisely the case here.

Reflexx says:

Almost sounds like how a graphics card is optimized for graphics.

E Lizzle says:

Right, but a plugin FPGA card and the associated development costs aren't cheap, when you can just throw additional commodity hardware at the problem. Using specialized hardware somewhat misses one of the major points of cloud computing - scaling horizontally on commoditized hardware.

Nataku4ca says:

But that comes at the cost of a lot more energy consumption and a larger set of hardware that "can" fail. By that I mean: if CPUs fail at a rate of 10% per batch, then out of every 10 units 1 will fail, and when you scale to 100, 10 will fail in time, making it more expensive along the way. Not that they fail regularly, but it's something to think about.

Specialized hardware to run specialized workloads that can be scaled out as part of a commodity hardware deployment can improve performance/Watt.

That's what this article is about.
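The failure-rate and performance-per-watt points above can be made concrete with a quick back-of-envelope calculation (all numbers are illustrative, not from the article):

```python
# Back-of-envelope sketch of the scale-out trade-off: more commodity
# units means more expected failures; specialized hardware can deliver
# the same throughput from fewer, more efficient units.

def expected_failures(units, failure_rate):
    """Expected number of failed units in a batch."""
    return units * failure_rate

def perf_per_watt(throughput, watts):
    """Throughput delivered per watt consumed."""
    return throughput / watts

# Scaling out commodity CPUs at a hypothetical 10% failure rate:
print(expected_failures(10, 0.10))   # 1.0 expected failure
print(expected_failures(100, 0.10))  # 10.0 expected failures

# Hypothetical FPGA card doing 2x the work of a CPU at half the power:
print(perf_per_watt(2.0, 0.5) / perf_per_watt(1.0, 1.0))  # 4.0x
```

The exact numbers aren't the point; the ratio is what argues for specialized hardware in a large deployment.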

E Lizzle says:

I hadn't considered performance/watt, thanks.  Makes sense.


offbeatbop says:

Intel processors suck? LOL.

crise says:

How about supercharging functionality across the globe? Put the energy and investment there.

Mestiphal says:

 this won't mean Bing will suddenly "seem faster" to consumers

Lies!!! -  Seems Faster

Sin Ogaris says:

The question is though, can it run Crysis?

iyae says:

Yes it can. On ultra. Playing it now.

samy_a says:

"complement", not "compliment" ;)

Schikitar says:

I have learnt something today, thank you!

borhan48 says:

How about more localization, like Google search? Stupid Microsoft will never beat Google in search & mapping services.

terrokkinit says:

Aww, would you like some cheese with your "whine"? ;) Quite frankly, Bing is localized and allows for better searches, faster response times, and a WHOLE LOT LESS privacy mayhem to deal with than Google ;) but hey, if you like a top-head corporation selling your info for a profit, you go right on ahead.

eomes says:

Bing indeed does lack localization - I barely get any results when looking for info in my language.

How about spending the money on WiFi lampposts instead? Walk around the streets and get free WiFi. It would be better and probably faster too.

Ten Four says:

I care about the results more than the speed. Doesn't matter how fast if what you get isn't what you are looking for. Google and Bing are both becoming ever-more commercial and less and less useful if you aren't searching for something to spend money on.

psiu_glen says:

The real beauty of this is where they mention they can cut running costs in half and reduce energy consumption. Alternatively, you could look at it as doubling the workload while keeping running costs where they are, with sporadic hardware investments versus ongoing monthly charges (rent, electricity, connectivity, security, water, donuts, etc.).

rattletop says:

Mmmmm. Burger..I like burgers..