
High Performance on Wall St: FPGA
September 20, 2006

Posted by newyorkscot in HPC.

Of all the sessions, the most interesting was the one on Field Programmable Gate Arrays (FPGAs) – it also actually followed the outline described in the glossy brochure!

Although FPGAs can deliver up to 1000x the performance of CPUs, real implementations tend to see gains more in the order of 40x to 200x, since the developer needs to strike a balance between designing for pure performance and flexibility of functionality. In the example given, the presenter had built a Monte Carlo simulation on a 15W FPGA chip that ran 230x faster than a 3 GHz CPU, but in another solution the calculations were only 40x faster because performance was traded for flexibility.
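For the curious, here is the flavor of kernel that suits this kind of acceleration: a minimal Monte Carlo sketch in C, the sort of fixed-loop, integer-arithmetic code that maps naturally onto a hardware pipeline. To be clear, this is my own illustration, not the presenter's implementation; the LCG constants and path count are arbitrary.

```c
/* Hypothetical sketch of an FPGA-friendly Monte Carlo kernel: one
 * fixed, data-independent loop with a simple 32-bit linear
 * congruential RNG. Not the presenter's actual code. */
#include <stdio.h>
#include <stdint.h>

#define N_PATHS 10000000u

int main(void)
{
    uint32_t state = 12345u;   /* arbitrary RNG seed */
    uint64_t hits = 0;

    for (uint32_t i = 0; i < N_PATHS; i++) {
        /* Two draws from the LCG; in hardware each draw is a single
         * multiply-add stage that pipelines trivially. */
        state = state * 1664525u + 1013904223u;
        uint32_t x = state >> 16;              /* 16-bit sample */
        state = state * 1664525u + 1013904223u;
        uint32_t y = state >> 16;

        /* Count points inside the quarter unit circle (estimates pi/4). */
        if ((uint64_t)x * x + (uint64_t)y * y <= 0xFFFFULL * 0xFFFFULL)
            hits++;
    }

    printf("pi ~ %f\n", 4.0 * (double)hits / (double)N_PATHS);
    return 0;
}
```

The point is the shape of the code: a tight loop with cheap per-iteration arithmetic and no data-dependent branching, which is exactly what an FPGA pipeline eats up. Anything requiring frequent model changes breaks this shape, hence the performance/flexibility trade-off above.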

It would seem that most of the investment banks are looking at proofs of concept for FPGAs, and possibly implementing a “golden node” inside a regular grid.

One of the key messages was the relative difficulty in implementing FPGA solutions:

  • Requires a higher ratio of engineering skills to modelling skills.
  • It is a human process rather than an automated one.
  • Development costs are higher (a 20-80 rule applies: 80% of the work delivers only 20% of the incremental performance gain).

That said, the capital costs and operating expenses are considerably lower. For example, compare a 100,000-node CPU grid with a 100-node FPGA grid delivering the same performance: the FPGA grid will be harder to implement, but cheaper to run and operate.
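To put some rough numbers on that, here is a back-of-envelope power comparison using the 15W FPGA figure from the talk. The ~250W per CPU node is my own guess for a 2006-era server, so take the ratio as illustrative only:

```c
/* Back-of-envelope power comparison of the two grids mentioned above.
 * The 15 W FPGA figure is from the talk; 250 W per CPU node is an
 * assumption, so the output is illustrative only. */
#include <stdio.h>

int main(void)
{
    const double cpu_nodes  = 100000.0, cpu_watts  = 250.0;  /* assumed */
    const double fpga_nodes = 100.0,    fpga_watts = 15.0;   /* from talk */

    double cpu_kw  = cpu_nodes  * cpu_watts  / 1000.0;  /* 25,000 kW */
    double fpga_kw = fpga_nodes * fpga_watts / 1000.0;  /* 1.5 kW */

    printf("CPU grid:  %.1f kW\n", cpu_kw);
    printf("FPGA grid: %.1f kW (~%.0fx less power)\n",
           fpga_kw, cpu_kw / fpga_kw);
    return 0;
}
```

Even allowing for generous error bars on the per-node wattage, the gap in power draw alone (megawatts versus a couple of kilowatts) explains the interest, before you even count datacenter space and cooling.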

Although anyone trying to get into this field needs new engineering skills, equipment, etc., it seems the only way adoption is really going to happen is by convincing business users of the potential upside and getting them to sponsor the program. Functionally, it was mentioned that the best candidate applications are either a) functionality that is relatively stable (high-throughput computation of well-known models) or b) high-value functionality that merits high performance (scenario-based risk analysis for complex credit derivatives).

Building solutions on FPGAs requires a new engineering approach versus CPU-based solutions, as you have to design for acceleration. Upfront design based on requirements is VERY important and remains a highly human process.

Other Related Stuff:

At Lab49, Damien Morton has done a bunch of work on GPUs. It will be interesting to see which way the banks go with non-CPU solutions.

Matt previously posted some info on FPGAs here.


Comments»

1. FPGAs & autoquoting « Coding the markets - September 23, 2006

[…] NewYorkScot summarises the High Performance on Wall St conference. Nice one! I guess we won’t be putting our pricing engine on FPGA any time soon. While minimal latency pricing is critical for fixed income autoquoting systems, flexibility is top priority too. Those crazy traders want to twiddle their pricing models intraday. And they want new pricing models all the time too… Posted in coding, trading | […]

