Operational Risk, Compliance and Grid
August 23, 2006

Posted by newyorkscot in HPC.

Over the past few months I have been asked a lot about what technologies can be applied to meet internal compliance & operational risk needs as well as regulatory requirements.

The common problem many banks face is that they try to tackle each requirement in isolation, and their approach is often one of wallpapering over the cracks with a healthy amount of duct tape. Some banks do have decent application, middleware and hardware monitoring tools to identify performance issues and outages in real-time, but this often boils down to “how fast can I switch/failover to my backup machine?”

My point here is that these regulatory and compliance issues should not be addressed retroactively. THERE IS A REAL BUSINESS OPPORTUNITY to provide a highly performant, competitive and controlled business/IT environment that FIRST makes the banks more competitive and successful in the marketplace, WHILE addressing the compliance, operational risk and regulatory requirements.

When you look at the set of requirements across the various regulations - Sarbanes-Oxley (SarBox), RegNMS, Basel II and MiFID in particular - there are a number of really tough issues to crack, but they should be approached by looking at the business criteria for success:

  • Market Making Availability (robustness) - regulators demand that market makers be “always on”, continuously connected to the market and providing low-latency responses to pricing and execution transactions. A good example is in equity derivatives, where the trading desk needs to provide liquidity into one or more exchanges (i.e. be connected and responsive to RFQs, etc.). More often than not, businesses use traditional “monitor and control” mechanisms to identify performance issues and/or outages in near real-time, with an associated “failover” process. Here, compute and data grids can be used to provide a robust and continuously available platform that runs independently of the state of any given physical asset in the infrastructure. So, if one node of the grid goes down, the remaining processes in the grid ensure that transactional and information integrity is retained. Additionally, if more throughput is required, many grid solutions allow for linear scalability as more CPUs and processes are brought online automatically (or manually). Why would a business not want that?! (A minimal failover sketch follows this list.)
  • Competitiveness (pricing and performance) - joined at the hip with availability, competitiveness matters to regulators, who expect market participants to provide appropriate levels of liquidity at prices the market can tolerate. But it is most certainly a primary business issue, and one where banks can differentiate themselves from their competition. Being competitive in the online marketplace requires that the front office applications in particular handle large volumes of tick data, order flow and trade executions. Providing these services to the market at low latency is particularly challenging, as there are often many functional steps to creating and publishing prices and executing orders, especially with derivatives. Take algorithmic trading of equity derivatives, where the business relies on being able to create, deploy (and remove!) trading strategies “on-the-fly”: the infrastructure needs to provide a robust, high-performance environment that can flex to changes in functionality without having to rely on off-hours windows to make them. One of the nice things about technologies such as JavaSpaces (or commercially, GigaSpaces) or Tangosol is that they can be configured to hold (and replicate) objects in cache, minimizing the messaging latency of serialization/deserialization between remote application processes. Additionally, the compute time of complex pricing routines/libraries can be reduced when they are appropriately run across a compute grid, Monte Carlo simulations being the classic example (a sketch follows this list). In both cases, there are a number of ways in which all the data and transactions can be persisted in real-time or via write-behind. The key to all of this, though, is a well thought out architecture, efficient application design patterns and frameworks, and a boatload of performance tuning.
  • Auditability & Repeatability (control) - to satisfy both internal compliance/audit controls and regulatory requirements, systems need to be able to provide all of the relevant data and information that went into any given transaction AND be able to repeat the calculation or process that created the price. For example, when pricing a client portfolio, not only does the application need a record of ALL of the data that went into the calculation (trade attributes, market data, sources, spreads, etc.), it ALSO needs to be able to reproduce any calculated results - which means that the analytics and other code- and configuration-dependent processes need to be captured and kept for future re-calculations. Retaining the actual binaries, libraries and configurations at the time of execution, as well as the data itself, is key and is very often overlooked when it comes to providing true repeatability (a snapshot sketch follows this list). Implementing such a capability means considering very specific design patterns and application configurations during design & development. Additionally, SarBox requires controls around roles and permissions to be defined and enforced. Although these types of controls seem to go hand-in-hand with auditability & repeatability, what do you actually do about things like the Excel spreadsheets that proliferate across trading, risk, and operations groups?
  • Price Transparency, Best Execution and Reporting (data) - generally speaking, some of the latest requirements (especially RegNMS & MiFID) demand that banks capture and warehouse a broader and deeper set of the data that went into prices and transactions. This in turn means that banks need to build broader and more sophisticated data warehouses that can model and persist cross-asset and market information, and that support multi-dimensional analytics and reporting (e.g. OLAP - a toy roll-up sketch follows this list). Generating this additional volume of information throughout the trading day will place massive demands on application development groups and the supporting infrastructure, let alone the poor persistence guys! Related to price transparency and best execution, capturing all of this data in a liquid market would most likely meet the needs of regulators, but what about illiquid markets where there might be a void of market information? Most likely, the repeatability of the process (as defined above) is all you can do. Large in-memory storage technologies and cache replication certainly help in dealing with vast quantities of data, but the data ultimately needs to be persisted in warehouses, with increasingly diverse reporting applications being built to access, slice and dice the information. These are tough (and expensive) issues to solve across an entire enterprise, so it is no wonder many financial institutions are pushing back. That said, there is business value in capturing broader data sets across all trading businesses: richer market datasets can improve the quality of market risk models that use historical market data and derived information such as trends, volatility, correlations, etc. Both trading research desks and enterprise risk management functions could use this. The question is: what is the cost-benefit?
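
To make the robustness point concrete, here is a minimal sketch in plain Java (not tied to any particular grid product) of the behaviour described in the first bullet: a pricing task is simply resubmitted when the “node” running it fails, so the desk keeps quoting. The thread pool standing in for grid nodes, the RFQ task and the retry count are all illustrative assumptions.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: resubmit a pricing task when a "node" fails,
// mimicking the transparent failover a compute/data grid provides.
public class ResilientSubmit {

    // Hypothetical unit of work: price one RFQ.
    static Callable<Double> priceRfq(final String rfqId) {
        return new Callable<Double>() {
            public Double call() throws Exception {
                // a real implementation would call the pricing library here
                return Math.random() * 100.0;
            }
        };
    }

    static double submitWithRetry(ExecutorService grid, String rfqId, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Future<Double> result = grid.submit(priceRfq(rfqId));
            try {
                return result.get();           // block until a "node" responds
            } catch (ExecutionException nodeFailure) {
                // the node (or task) died: reschedule on the remaining capacity
                System.err.println("Attempt " + attempt + " failed, resubmitting " + rfqId);
            }
        }
        throw new IllegalStateException("No node could price " + rfqId);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService grid = Executors.newFixedThreadPool(4); // stand-in for grid nodes
        System.out.println("RFQ-1 priced at " + submitWithRetry(grid, "RFQ-1", 3));
        grid.shutdown();
    }
}
```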
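
On the competitiveness/performance side, the Monte Carlo case mentioned above is the classic compute-grid win: independent batches of paths can be farmed out and the results combined. The sketch below uses a plain Java thread pool as a stand-in for grid nodes; the European call parameters, batch count and path counts are assumptions chosen purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: a European call priced by Monte Carlo, with the paths
// split into independent batches that a compute grid could farm out in parallel.
public class GridMonteCarlo {

    // Hypothetical market/contract parameters for the sketch.
    static final double S0 = 100.0, K = 105.0, R = 0.05, VOL = 0.2, T = 1.0;

    static Callable<Double> batch(final int paths, final long seed) {
        return new Callable<Double>() {
            public Double call() {
                Random rnd = new Random(seed);
                double sum = 0.0;
                for (int i = 0; i < paths; i++) {
                    double z = rnd.nextGaussian();
                    double st = S0 * Math.exp((R - 0.5 * VOL * VOL) * T
                            + VOL * Math.sqrt(T) * z);
                    sum += Math.max(st - K, 0.0);
                }
                return sum / paths;  // average (undiscounted) payoff for this batch
            }
        };
    }

    public static void main(String[] args) throws Exception {
        int batches = 8, pathsPerBatch = 250000;
        ExecutorService grid = Executors.newFixedThreadPool(batches); // stand-in for grid nodes
        List<Future<Double>> results = new ArrayList<Future<Double>>();
        for (int b = 0; b < batches; b++) {
            results.add(grid.submit(batch(pathsPerBatch, 42L + b)));
        }
        double mean = 0.0;
        for (Future<Double> f : results) {
            mean += f.get() / batches;         // combine the batch averages
        }
        System.out.println("MC price ~ " + Math.exp(-R * T) * mean);
        grid.shutdown();
    }
}
```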
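
For auditability & repeatability, the essential point is that each calculation records not only its inputs but also the exact code and configuration that produced the result, so it can be replayed later. The sketch below shows one possible shape for such a “calculation snapshot”; every field name and value here is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: the kind of "calculation snapshot" that would let a
// pricing run be reproduced later. Field names and values are hypothetical.
public class CalcSnapshot {
    final String tradeId;
    final String analyticsLibraryVersion;   // the exact pricing-library build used
    final String configChecksum;            // hash of the runtime configuration
    final Map<String, String> marketData;   // every input that fed the price
    final double result;
    final long timestampMillis;

    CalcSnapshot(String tradeId, String libVersion, String configChecksum,
                 Map<String, String> marketData, double result) {
        this.tradeId = tradeId;
        this.analyticsLibraryVersion = libVersion;
        this.configChecksum = configChecksum;
        this.marketData = new LinkedHashMap<String, String>(marketData); // defensive copy
        this.result = result;
        this.timestampMillis = System.currentTimeMillis();
    }

    public static void main(String[] args) {
        Map<String, String> inputs = new LinkedHashMap<String, String>();
        inputs.put("spot/EURUSD", "1.2831");
        inputs.put("vol/EURUSD/1Y", "0.092");
        inputs.put("source", "hypothetical-feed");

        CalcSnapshot snap = new CalcSnapshot("TRADE-0001", "pricinglib-3.4.1",
                "sha1:exampleconfighash", inputs, 1234.56);
        // In practice the snapshot (plus the binaries/libraries themselves) would be
        // persisted so the calculation can be re-run and reproduced exactly later on.
        System.out.println(snap.tradeId + " priced " + snap.result
                + " with " + snap.analyticsLibraryVersion);
    }
}
```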
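
Finally, on the data and reporting side, much of the warehousing and OLAP work ultimately boils down to rolling captured records up along several dimensions. Here is a toy roll-up of made-up execution records by instrument and venue; a real warehouse and reporting stack would of course do this over vastly more data and dimensions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only: a toy "slice and dice" of execution records along two
// dimensions (instrument x venue), the kind of roll-up an OLAP/reporting layer
// serves. All records here are made up.
public class ToyRollup {

    static class Execution {
        final String instrument, venue;
        final double notional;
        Execution(String instrument, String venue, double notional) {
            this.instrument = instrument;
            this.venue = venue;
            this.notional = notional;
        }
    }

    public static void main(String[] args) {
        List<Execution> fills = new ArrayList<Execution>();
        fills.add(new Execution("VOD.L", "VENUE-A", 1000000));
        fills.add(new Execution("VOD.L", "VENUE-B", 250000));
        fills.add(new Execution("BARC.L", "VENUE-A", 500000));

        // Group by (instrument, venue) and sum the notional traded.
        Map<String, Double> cube = new TreeMap<String, Double>();
        for (Execution e : fills) {
            String key = e.instrument + " / " + e.venue;
            Double total = cube.get(key);
            cube.put(key, (total == null ? 0.0 : total) + e.notional);
        }
        for (Map.Entry<String, Double> row : cube.entrySet()) {
            System.out.println(row.getKey() + " : " + row.getValue());
        }
    }
}
```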

Solving these issues is no trivial task, but the various technologies and techniques in the “Large Scale Computing” (aka grid) domain can help out vastly. However, all of the hype around the potential benefits of “grid computing” (which can be enormous) has to be moderated against the architecture, controls and configuration required to manage these additional assets - precisely the issues the regulators are asking to be solved. What IT groups should really be thinking about are the business-driven requirements around providing reliable, competitive and controlled services.
