Panopticon
Streaming data visualization for capital markets and other demanding environments
- Visual anomaly detection
- Built for real-time and time series data
- Used by 12 of the top 15 financial institutions
Products
Panopticon Designer
Quickly create visual applications on your desktop
Panopticon Server
Deploy throughout the enterprise via rich HTML5 client
Customers
End User
Our technology is widely adopted by financial institutions globally, including buy side, sell side and exchanges. Representative customers include:
- Nasdaq, SGX, HSBC, Citi, Fidelity, Citadel, Blackrock, Deutsche Bank, Credit Suisse, UBS, BAML
OEM
We have developed strong relationships with our OEM partners over the last decade. Each partner takes our visual analysis capabilities and embeds them into their business solutions, covering trading, risk & compliance. Representative customers include:
- Imagine Software, Factset (Portware), SkyRoad, Tibco, Thomson Reuters, OneTick, Nasdaq SMARTS, Dell Statistica
Technology Partners
We are only as good as the underlying data infrastructure. We consequently partner with leaders in CEP, tick databases, real time databases, real time cubes and big data. Partners include:
- kx, OneTick, Tibco, Datastax, Cloudera
Use Cases
Panopticon is used in a variety of industries and scenarios.
Technically Speaking
Panopticon Designer
- MS Windows OS (XP, Vista, 7, 8, 8.1, 10, 2008, 2012) with .NET 4; 2GB free disk space and 4GB available RAM
Panopticon Server for .NET
- Windows OS (Vista, 7, 8, 8.1, 10, 2008, 2012) with .NET 4.5 and MS IIS 7 to 7.5; 4GB free disk space and 8GB available RAM. In-memory caching is limited to available server RAM
Panopticon Server for Java
Panopticon Analyst for HTML
- MS IE 9+, Firefox 10+, Chrome 15+, Safari 5+, and 500 MB available RAM. Certified for MS Windows, iOS & Android
The complete list of supported data sources and capabilities is available in the Technical Fact Sheet
Access the complete documentation set
Real-Time
- Monitor activity in real time by subscribing to operational data sources. The latest state of the business is displayed continuously and updated automatically as new data is pushed to the analytical dashboards. Updates can be paused at any time, and integration with historical data allows anomalies to be identified in real time and investigated: drill into the intraday view to see how a problem developed, then into the historical view to identify past occurrences.
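The push-then-pause behaviour described above can be sketched in a few lines. This is an illustrative model only, not the Panopticon API: `DashboardFeed`, `on_tick` and the tick shape are all hypothetical.

```python
class DashboardFeed:
    """Minimal sketch of a push-based feed: the latest state is kept
    current as data arrives, and updates can be paused and later
    resumed without losing ticks."""

    def __init__(self):
        self.latest = {}      # latest value per instrument
        self.paused = False
        self._pending = []    # ticks buffered while paused

    def on_tick(self, instrument, price):
        # Called by the data source whenever new data is pushed.
        if self.paused:
            self._pending.append((instrument, price))
        else:
            self.latest[instrument] = price

    def resume(self):
        # Apply everything buffered while paused, then go live again.
        self.paused = False
        for instrument, price in self._pending:
            self.latest[instrument] = price
        self._pending.clear()

feed = DashboardFeed()
feed.on_tick("AAPL", 101.0)
feed.paused = True
feed.on_tick("AAPL", 102.5)   # buffered; the display still shows 101.0
feed.resume()                  # the display catches up to 102.5
```

Buffering rather than dropping ticks while paused is what lets the analyst freeze the screen without losing the intervening state.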
Time Series
- Zoom from years' worth of data down to millisecond accuracy
- Conflate, interpolate, aggregate and visualize
- Play back through the time series either by time or by transaction
- Integration with tick history / time-series databases
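Conflation, the first operation in the list above, can be illustrated with a small sketch: when zooming out from millisecond data, keep only the last tick per time bucket. The function name and tick shape are illustrative, not the product's API.

```python
def conflate(ticks, bucket_ms):
    """Conflate (timestamp_ms, value) ticks: keep only the last value
    seen in each time bucket, a common way to zoom out from
    millisecond-level data to coarser intervals."""
    buckets = {}
    for ts, value in ticks:
        buckets[ts // bucket_ms] = value   # later ticks overwrite earlier ones
    return [(b * bucket_ms, v) for b, v in sorted(buckets.items())]

ticks = [(1, 100.0), (450, 100.5), (999, 101.0), (1200, 100.8)]
conflate(ticks, 1000)   # -> [(0, 101.0), (1000, 100.8)]
```

Interpolation and aggregation follow the same bucket-then-reduce pattern, with a different reduction per bucket.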
Visual Analysis
- High-density displays designed to highlight abnormalities in the context of the overall position
- Covers numeric correlations, categorical correlations, peer comparisons and hierarchies, trend analysis and time series
Hierarchy
- Visually analyze hierarchical data, whether instrument, book or counterparty
- Dynamically reorder the hierarchy and reaggregate
- Support for non-additive measures such as Value at Risk (VaR), and limits in general
- Integration with in-memory cubes
UI Building
- Quickly design interactive trader and analyst displays without coding, and deploy at the touch of a button
Connectivity
- Connects to standard sources as well as real-time, intra-day updating and historical sources
- Data is subscribed to (push) where possible, or polled (pull) on a defined automatic refresh period from 1 second to 1 day. Sources include tick databases, CEP engines, message buses, web, cube and big data sources
Big Data
- Connects to the big data ecosystem, ranging from real-time streaming subscriptions from Kafka, through in-memory analysis with Spark, to bulk loading from Hive. Big data reservoirs are leveraged through the appropriate connectivity to provide responsive operational analytics.
- Hadoop Hive, Cloudera Impala, Apache Spark SQL, Apache Cassandra, Apache Kafka
Statistical Analysis
- Basic calculations and aggregations are supported natively within the product. More complex statistical capabilities are provided through R or Python, both of which are supported as data sources and as data transformation steps in the data pipeline. Typical uses include curve interpolation and clustering.
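Curve interpolation as a pipeline transform step can be sketched in pure Python. A real R or Python step would more likely use scipy or numpy, and the tenor and rate numbers below are illustrative.

```python
def interpolate_curve(tenors, rates, query_tenors):
    """Linearly interpolate a curve (e.g. a yield curve) at new tenors.
    Assumes tenors are sorted ascending and queries fall within the
    curve's range."""
    out = []
    for q in query_tenors:
        # Find the pair of knots bracketing the query tenor.
        for (t0, r0), (t1, r1) in zip(zip(tenors, rates),
                                      zip(tenors[1:], rates[1:])):
            if t0 <= q <= t1:
                w = (q - t0) / (t1 - t0)
                out.append(r0 + w * (r1 - r0))
                break
    return out

# Fill in a 3y rate between the 2y and 5y knots of a sparse curve.
interpolate_curve([1, 2, 5], [0.010, 0.015, 0.030], [3])
```

Run as a transformation step, this turns a sparse quoted curve into a dense one before it reaches the visualization layer.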