Uncovering and Improving Control of IT Assets

By Sameer Padhye

The demise of the CIO has been greatly exaggerated. At one point, CIO was said to stand for Career Is Over. Yet most people in the job can tell you that their focus and responsibilities have evolved significantly in the past several years in response to new business challenges and disruptive technologies. As technology has become embedded in most parts of the business, the position of the CIO has expanded. In fact, a 2018 Forbes Insights paper (The Challenges for Tomorrow’s CIO) reveals that…

“over four out of five CIOs believe their role has increased in importance over the last five years”.

CIOs are more essential than ever.

While they are driving IT strategy across the enterprise, CIOs are still responsible for the day-to-day operations and budgets of the IT department. Maintaining a balance between these competing priorities can be a challenge.

Keeping a Lid on IT Assets and Investments

In response to the digitization of business processes, organizations are investing millions of dollars in IT infrastructure and tools, often without a clear view of what they already own. This lack of visibility increases system vulnerabilities and security risks, and squanders IT budgets and personnel.

Keeping tight controls on IT assets and infrastructure is one way that innovative CIOs can safeguard and manage their complex, disparate system environments. To start, they need a full accounting of their IT assets, including hardware, network, software, and IoT devices. But today’s distributed and hybrid IT infrastructure has become too complex and opaque to manage manually.

Auto discovery of data from all the disparate sources and devices across the IT environment helps IT leaders build and maintain a reliable, up-to-date inventory of resources. This knowledge will enable better budgeting, troubleshooting, capacity planning, maintenance and effective management of all system assets.

Detecting and Taming Technology Sprawl

When IT does not have full visibility into their environment, it exposes the business to risks and additional expenses. The problem is made worse by:

  • LOB Funding of IT Purchase Decisions – In many businesses, funding for technology solutions has spread throughout the organization. In the Forbes Insights survey mentioned above, 54 percent of CIOs noted that their company’s business units are more involved in selecting their own technology, and 74 percent say it’s more important to align with business stakeholders on IT acquisitions.
  • Shadow IT Acquisitions – If IT isn’t providing a solution that employees want, chances are they will obtain it anyway, without IT being involved in the decision. These “shadow IT” tools can create ongoing security, compliance, and workflow vulnerabilities, along with driving up IT costs and workloads. Once those solutions become outdated or unsupported, they require expensive maintenance and don’t adapt to new business needs. And while IT may not own all the purchase decisions, it is still responsible for making sure nothing goes wrong.
  • Redundant Solutions Drive Up Expenses – Point and redundant solutions are expensive not just to purchase but to maintain and support. This diverts IT staff from value-creating activity; redundant technology can also waste money on software licenses that don’t deliver the right functionality to the business.
  • Overly Complex IT Environments Are Messy and Vulnerable to Problems – When businesses acquire new divisions, set up transactions with partners and vendors, or bring in new enterprise applications, some legacy solutions may be duplicated and no longer needed. These changes also multiply the system interfaces and platforms that must be supported. The more interfaces you have, the more fragile your system, and the harder that system is to maintain.

Key to Success: Gaining Insight into the Full IT System Architecture

The proliferation of IT systems and tools across the enterprise has created a more complex integration environment. But there are new tools available to help uncover system assets and simplify IT management. Auto discovery solutions can identify all the physical and virtual infrastructure components in a hybrid IT environment by automatically collecting data across the IT domain.

With auto discovery, you can develop a complete picture of the network, the IP devices on the network, the applications and services, along with their relationships and interdependencies. The result is an application map that illustrates the relationships among all IT entities, which will help with system diagnostics, migration planning, and root cause analysis down the line.
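To make the application-map idea concrete, here is a minimal Python sketch of how discovered entities and their dependency relationships might be assembled into such a map and then walked for impact analysis. The entity names and the discovery output format are illustrative assumptions, not FixStream’s actual API.

```python
# Illustrative sketch: assemble auto-discovered (entity, depends_on)
# pairs into a dependency map, then find everything impacted by a
# failed component. Assumes the map is acyclic.
from collections import defaultdict

def build_app_map(relationships):
    """Build an adjacency map from (entity, depends_on) pairs."""
    app_map = defaultdict(set)
    for entity, dependency in relationships:
        app_map[entity].add(dependency)
    return app_map

def impacted_by(app_map, failed_entity):
    """Walk the map upward to find every entity that directly or
    transitively depends on the failed component."""
    impacted = set()
    for entity, deps in app_map.items():
        if failed_entity in deps:
            impacted.add(entity)
            impacted |= impacted_by(app_map, entity)
    return impacted

# Hypothetical discovery output: apps, VMs, and the switch under them.
discovered = [
    ("order-app", "app-server-1"),
    ("app-server-1", "vm-101"),
    ("vm-101", "switch-a"),
    ("billing-app", "db-server"),
    ("db-server", "vm-102"),
    ("vm-102", "switch-a"),
]
app_map = build_app_map(discovered)
# A failing switch implicates both VMs and every app above them.
print(sorted(impacted_by(app_map, "switch-a")))
```

This is the kind of relationship graph that makes root cause analysis tractable: a fault at the bottom of the stack immediately reveals the business applications it touches.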

Updated insight into your organization’s application and IT infrastructure resources can lead to a better understanding of the interdependencies between system components. This use case illustrates how FixStream’s Auto Discovery solution helped a global semiconductor company increase visibility and develop dynamic topology maps of all its datacenters worldwide.

Sameer Padhye is founder and CEO of FixStream.

Nutanix and FixStream Team to Accelerate Delivery of Business Workloads

As originally published on

This is a guest post from Enzo Signore, Chief Marketing Officer at FixStream

Unprecedented Application-Centric Visibility into Physical and Virtual Infrastructure

At the Nutanix .NEXT 2018 Conference today, Nutanix and FixStream are excited to announce a new joint initiative to help customers accelerate the freedom of choice they gain with the adoption of Nutanix Enterprise Cloud.

FixStream’s Artificial Intelligence for IT Operations (AIOps) platform combines auto-discovery, correlation and visualization to easily find and predict critical business application issues across an enterprise’s entire hybrid IT stack in minutes. Customers using Nutanix-powered environments can now accelerate the delivery of business applications and speed their adoption of Nutanix Enterprise Cloud.

One of the key challenges IT organizations face is the complexity associated with discovering and managing the disparate entities of legacy and cloud infrastructure in the hybrid IT environment. Environments are quickly changing as digital service deployments adopt newer technologies in the domains of virtualization, hybrid cloud, containers, microservices, and more.

To solve this problem, the Nutanix Enterprise Cloud Assessment Offer from FixStream allows Nutanix consultants and partners to perform a detailed assessment of their customers’ and prospects’ existing legacy and cloud infrastructure, discovering and visualizing the entire hybrid environment to help accelerate migration onto the Nutanix Enterprise Cloud. Based on FixStream’s Nutanix AHV Ready platform, this offer includes a custom data center inventory report highlighting:

  • Network, compute and storage components deployed in data centers
  • Physical and virtual infrastructure components in a hybrid IT environment
  • Visualization of the physical, virtual and logical connections among the devices
  • Up-to-date, near-real time inventory view
  • Dynamic reporting for data analysis and compliance needs

Leveraging FixStream’s powerful agentless auto-discovery capabilities, the Nutanix Enterprise Cloud Offer is designed to deliver an infrastructure assessment that applies to different scenarios. Nutanix consultants and partners will use this offer for customers planning to modernize their datacenter by migrating from legacy hybrid environments to Nutanix Enterprise Cloud, as well as those who want to transform their business with the Nutanix Enterprise Cloud Transformation Service.

For those attending the .NEXT Conference this week, please join us on Wednesday, May 9th, from 3:00 to 3:30 pm at the Silver Sponsor Theatre Presentations in the Solutions Expo to hear FixStream CMO Enzo Signore and Vandana Rao, Director of Practice Development Services at Nutanix, discuss Artificial Intelligence to Predict Business Application Issues across Hybrid IT and what it means for Nutanix customers.

Disclaimer: The views expressed in this blog are those of the author and not those of Nutanix, Inc. or any of its other employees or affiliates. This blog may contain links to external websites that are not part of Nutanix. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

© 2018 Nutanix, Inc. All rights reserved. Nutanix, the Enterprise Cloud Platform, the Nutanix logo and the other Nutanix products and features mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).

Flying High With AI
How Machine Learning Powers AIOps, Correlation & Analytics

By Bishnu Nayak

Digital disrupters have accelerated the pace of business. They’ve prompted digital transformation across organizations and industries. That has led to more departments within more businesses adopting more connected applications.

That has created greater reliance on underlying enterprise networks on which these critical business applications run. The infrastructure is becoming more distributed, heterogeneous, intelligent, open, and virtualized to support the growth, agility, and scale of the business applications.

Application architecture is changing to adopt newer technologies such as containers, which allow application components to be deployed across multiple data center and cloud environments and to scale dynamically. (In October, DockerCon Europe reported that 24 billion containers have been downloaded.)

Business network elements frequently come from a wide variety of hardware and software suppliers. And these networks are only becoming more diverse given the movement by business networking professionals to avoid vendor lock-in, embrace open architectures, and use best-of-breed solutions.

The changes in this dynamic application environment happen abruptly, so it’s impossible to track them and correlate events using legacy techniques. Manually processing the massive amounts of data generated across the stack – to identify patterns, spot anomaly scenarios, and predict capacity requirements – is almost impossible. That, in turn, poses tremendous business risk and hinders business innovation.

So, what’s the solution?

A solution that combines the power of machine learning with the ability to auto-discover and correlate entities across critical layers of digital business – business, application, and infrastructure.

Artificial intelligence and machine learning are not a replacement for people in this scenario. Rather, they help humans perform day-to-day IT operational tasks such as troubleshooting, capacity management, migration, and planning.

“Most recent advances in AI have been achieved by applying machine learning to very large data sets,”

notes McKinsey & Co. “Machine-learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt in response to new data and experiences to improve efficacy over time.”

Machine learning can correlate and analyze data from multiple enterprise application and infrastructure domains, dealing with the volume, velocity, and variety of data generated. It can uncover patterns to show what has occurred. It can use current conditions and past learning to spot exceptions and predict the future. Machine learning can even offer suggestions on what to do in various scenarios.

AIOps platforms leverage machine learning to deliver AI capabilities for IT operations. Here are some interesting use cases.

  • Multivariate anomaly detection can identify anomaly scenarios across various dependent entities. Such anomalies may signal that a planned or unplanned business event has taken place. For example, a multivariate anomaly group may represent an unplanned event like a DDoS cyberattack or a planned business effort such as a Black Friday event.
  • A time-series sequential pattern detection algorithm can predict business outages triggered by events anywhere in the stack on which business functions are deployed.
  • It’s also possible to use AI and machine learning to predict when you’ll run out of capacity. For example, it could signal a storage volume running out of space or excessive network bandwidth use on a router. Such information helps IT experts do proactive capacity planning to better meet business needs.
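The first use case above can be illustrated with a small Python sketch: flag the time points where several dependent metrics deviate together, which distinguishes a correlated event (like a DDoS or a Black Friday surge) from noise in a single metric. The metric names, data, and z-score threshold are illustrative assumptions, not a description of FixStream’s actual algorithms.

```python
# Sketch of multivariate anomaly detection: a point is flagged only
# when EVERY related metric is simultaneously far from its own mean.
import statistics

def zscores(series):
    """Standardize a series against its own mean and deviation."""
    mean = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0  # avoid divide-by-zero
    return [(x - mean) / sd for x in series]

def joint_anomalies(metrics, threshold=2.0):
    """Indices where all metrics are anomalous at once, e.g. request
    rate, CPU, and bandwidth all spiking together (DDoS-like)."""
    scored = {name: zscores(s) for name, s in metrics.items()}
    n = len(next(iter(metrics.values())))
    return [i for i in range(n)
            if all(abs(scored[m][i]) > threshold for m in scored)]

# Hypothetical samples: index 4 spikes across every dependent metric.
metrics = {
    "requests_per_sec": [100, 104, 98, 101, 950, 103],
    "cpu_percent":      [30, 32, 29, 31, 96, 30],
    "net_mbps":         [40, 42, 39, 41, 880, 40],
}
print(joint_anomalies(metrics))  # → [4]
```

A single noisy metric would not trip this check; only a group of dependent entities moving together does, which is what makes the signal worth alerting on.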

Machine learning automates IT operations and can notify operations teams of potential business outages before they happen. It also can detect security issues, identify infrastructure performance bottlenecks, and recommend capacity augmentation and optimization.

IT teams can then set systems to trigger actions for remediation. Executing remediation scripts or integrating with other orchestration and automation tools to take actions minimizes human tasks.

Proactively detecting issues and fixing such issues enables business continuity and assures customer satisfaction. In the age of digital transformation, such capabilities and AIOps solutions are an absolute must.

With machine learning, IT staff can continually and comprehensively watch for traffic exceptions, so IT experts can be far more effective in preventing and quickly responding to cyberattacks. And businesses can stay up and running, and stay out of the headlines.

These are just a few reasons why AI and machine learning have become key components of digital transformation. And that’s only going to accelerate moving forward.

“During the next few years, the technologies associated with this [digital transformation] wave — including artificial intelligence, cloud computing, online interface design, the Internet of Things, Industry 4.0, cyberwarfare, robotics, and data analytics — will advance and amplify one another’s impact,”

note PwC analysts Leslie H. Moeller, Nicholas Hodson, and Martina Sangin.

Forrester Research says more than half of organizations already have implemented some form of an AI project. And it says another 20 percent are planning AI projects in the near future.

Your business and its IT staff should be thinking about how you can benefit from AI and machine learning too.

If you’re still on the fence, think of it like this. Machine learning is to network operations as air traffic control is to airline operations.

There are about 5,000 airplanes in the sky every hour in the U.S.
So you can’t use manual processes to track planes as they move around. It would be near impossible, and just plain dangerous.

So we use air traffic control to manage the chaos. The air traffic control system helps experts keep track of all the traffic (airplanes) among the different domains (various airports and airlines).

By bringing together the various data points and presenting a complete view of what’s happening, air traffic control helps avoid crashes and enables smoother traffic flow.

Machine learning likewise enables data correlation and analytics. That way, IT experts can keep the network and its applications running safely and on time. And that allows organizations to deliver better and safer customer experiences, make better use of their human and technological resources, and keep their applications and businesses moving forward.

That’s why artificial intelligence and machine learning are key technology enablers of the FixStream AIOps solution. They’re the AI in AIOps.

On behalf of FixStream and the entire crew, I’d like to thank you for joining us on this trip. We look forward to seeing you on board again in the near future. Have a nice day. (Sorry, I couldn’t resist!)

In my next blog, I’ll talk about automation.

Bishnu Nayak is the CTO for FixStream.

How AIOps Can Ensure SAP ERP Performance, Availability

By Bishnu Nayak

All major businesses these days use enterprise resource planning (ERP) platforms such as SAP or Oracle. You probably use one yourself.

As you probably already know, ERP software helps organizations more efficiently handle billing, customer management, human resources matters, ordering, provisioning, supply chain management, and more. ERP software also can allow for better decision-making and increased agility.

Clearly, ERP systems have a lot of functionality. And businesses rely on that functionality to keep their organizations up-and-running and the wheels of industry turning.

A Complex Situation

Because these platforms do so much, they are quite complex and consume lots of resources. ERP systems employ an array of software running on various infrastructure in many data centers.

So there are lots of moving pieces, and those piece parts can be widely distributed. That creates a lot of opportunity for problems that can adversely impact ERP performance and customer SLAs.

For example, a problem with a server or a network switch or router can cause a kink in a company’s invoicing or order booking process. This kind of thing can interfere with an organization providing its customers with their bills.

You don’t need me to tell you that is a big problem.

The Cost of Delay & Outages

This kind of a situation can confuse and frustrate customers. Worse yet, it can mean late or lost payments, creating cash flow issues for the company providing the product or service.

But that’s just one example of the kind of thing that can go wrong. There are plenty of real-life examples of how ERP- and IT-related outages hurt businesses.

A few years back HSBC had a problem with a software update. As a result, thousands of the bank’s customers couldn’t cash their paychecks. Worse yet, this occurred just before a holiday weekend.

More recently, the failure of a power system in a British Airways data center resulted in canceled flights affecting more than 75,000 passengers. The airline ended up paying $68 million in passenger reimbursement costs, and its parent company experienced a 2.8 percent stock price drop.

According to Gartner, the average cost of IT downtime is $5,600 per minute – well over $300,000 an hour.

So businesses clearly want to avoid these kinds of scenarios. But it can be challenging to identify and address the sources of such issues. That challenge is even more daunting when it involves a distributed, multi-application ERP.

That’s why FixStream introduced AIOps in SAP. This solution provides multilayer correlation for SAP ERP users like you.

Ensuring Availability & Performance

AIOps in SAP allows you to quickly understand problems so you can take action to correct them before they impact your customers and your business. That helps your business increase customer satisfaction, retain customers, protect your reputation, attract new customers, and grow customer spend.

The top layer of FixStream’s AIOps in SAP focuses on business process KPIs and SLAs. It knows how many orders should be processed in an hour. It also understands how long it should take for an order to be processed from beginning to end.

The application layer is the second tier of AIOps in SAP. It addresses every application in SAP (or Oracle, the other major ERP supplier). That could be a server, a database, or a business process defined within the application tier.

The third layer is the infrastructure layer. That’s made up of physical and virtual compute, network, and storage resources.

FixStream’s AIOps correlates data from those three layers and applies an algorithm on top of it. This data correlation and analysis enables AIOps in SAP to address a variety of use cases.

The Use Cases

For example, AIOps in SAP flags when an SLA violation occurs because processing of customer orders is stuck. FixStream’s AIOps then sets out to identify the source of the trouble via data correlation and analysis.

AIOps in SAP also can predict when things are headed for trouble. It does that by identifying repeating patterns and dependencies.

For example, FixStream’s AIOps can understand from past data that Black Friday is a high-volume order day. And it can see that when you have more than X number of orders, your order processing gets slower. This kind of information enables your business to address that potential problem – by allocating more resources to enable fast ordering during Black Friday – so your ordering process continues to move forward at the desired pace.

FixStream also addresses capacity management. So if, for example, the number of orders your ERP processes has been ramping up monthly over the past nine months, AIOps in SAP reveals that. FixStream also provides you with a holistic view of what you have and what your business needs in terms of compute power, network resources, and storage. That means you won’t be caught off-guard by changing resource requirements.
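The capacity-management idea above reduces to a simple projection: fit a trend to recent order volumes and estimate when demand crosses the capacity you have provisioned. The figures, the linear model, and the capacity limit below are hypothetical simplifications for illustration.

```python
# Minimal capacity-trend sketch: least-squares fit over monthly order
# volumes, then project forward until the capacity limit is crossed.
def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def months_until(ys, capacity):
    """Months from now until projected volume reaches capacity,
    or None if the trend is flat or shrinking."""
    slope, intercept = linear_fit(ys)
    if slope <= 0:
        return None
    month = len(ys)
    while slope * month + intercept < capacity:
        month += 1
    return month - len(ys) + 1

# Hypothetical order volumes for the last six months, ramping steadily.
orders = [10_000, 11_200, 12_100, 13_400, 14_300, 15_600]
print(months_until(orders, capacity=20_000))  # → 5
```

With a projection like this in hand, compute, network, and storage can be augmented before the ERP hits its ceiling rather than after.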

There’s one more really important thing that AIOps in SAP addresses: IT compliance.

FixStream automates IT compliance efforts, which today are typically done using manual processes. We enable that via FixStream’s data explorer capability.

Our data explorer provides automated, up-to-date reports of all domains. That reduces the time it takes for IT personnel to ensure compliance.

As a result, IT teams can more quickly and easily see whether the organization is running the latest software release, has the needed patches in place, and the like. That beats today’s manually intensive audits, which require more IT team resources and can take one or two months to complete. So IT teams can focus on value-added efforts and spend less time just keeping the lights on.
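An automated compliance check of this kind is, at its core, a query over the discovered inventory. Here is a hypothetical sketch; the field names, release numbers, and inventory format are illustrative, not FixStream’s data model.

```python
# Illustrative compliance query: find hosts in a discovered inventory
# that are below the required release or missing patches.
REQUIRED_RELEASE = (7, 2)  # hypothetical (major, minor) baseline

inventory = [
    {"host": "app-01", "release": (7, 2), "patched": True},
    {"host": "app-02", "release": (7, 1), "patched": True},
    {"host": "db-01",  "release": (7, 2), "patched": False},
]

non_compliant = [h["host"] for h in inventory
                 if h["release"] < REQUIRED_RELEASE or not h["patched"]]
print(non_compliant)  # → ['app-02', 'db-01']
```

Run against an up-to-date auto-discovered inventory, a report like this replaces weeks of manual auditing with a query that takes seconds.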

ERP Assurance & IT Resource Optimization

Forrester has reported that 69 percent of IT budgets go to maintenance and operations. The other 31 percent go to new projects – with just 14 percent of that going to investments for customer-facing and new business opportunities.

But there’s a widespread acknowledgement that digital transformation is changing the role of IT and how IT dollars should be allocated.

So IT teams need sophisticated – but easy-to-use – tools so they can ensure the performance and availability of key applications. That includes the applications supported by ERP systems, on which their employers and customers rely.

FixStream AIOps facilitates digital transformation by automating the discovery and mapping of critical processes like order-to-cash to the underlying application and infrastructure entities. By doing so, it enables IT operations to analyze mission-critical business processes, reduce mean time to repair, and predict occurrence of issues across any portion of their hybrid IT stack.

It provides customizable, single-pane-of-glass views of SAP ERP business processes and their operational health. It groups applications into logical business processes like O2C, P2P, SCM, and eCommerce. It provides auto-discovery of SAP ERP business processes and applications across hybrid IT environments. And it visualizes data to help people easily see what’s happening.

And, as noted earlier, it does automated correlation of SAP ERP business KPIs to application and system errors; allows for faster root cause identification of SAP ERP issues; and detects anomalies and patterns to allow businesses to address issues before they affect business operations and customers.

In my next blog I’ll tackle the subject of migration.

FixStream Accelerates the Delivery of Business Workloads in a Nutanix Enterprise Cloud Environment

As originally published on

This guest post was authored by Bishnu Nayak, FixStream Inc. CTO

In September 2016, I met and presented FixStream’s Algorithmic IT Operations (AIOps) platform to Nutanix’s Alliance and Alliance Engineering team. They noticed very quickly that FixStream addresses an important need for their customers – delivering an operational analytics platform that ensures service assurance of critical business workloads deployed on the Nutanix Enterprise Cloud infrastructure.

Built on cutting-edge Big Data technologies, FixStream’s correlation, analytics and visualization platform provides application-centric visibility in a hybrid IT environment by correlating across end-user transactions, applications and infrastructure layers. Its platform adds tremendous value in customers’ heterogeneous environment where Nutanix Enterprise Cloud is deployed, along with other multi-vendor infrastructure technologies.

We decided to work with Nutanix as our first HCI partner for several reasons. First, we were impressed with their architectural approach to a hyperconverged solution, built from the ground up with ecosystem innovation in mind. Second, Nutanix’s presence in IT is very complementary to FixStream’s strategy, in that both companies’ solutions enable customers to accelerate migration to a hybrid cloud environment.

Last (but not least), we have always been intrigued by the partner-centricity embedded in the company culture and vision. It’s pervasive in how Nutanix onboards, validates, and markets with its industry peers and partners.

Per Gartner, algorithmic IT operations platforms enable I&O leaders to meet the proactive, personal and dynamic demands of digital business by transforming the very nature of IT operations work via unprecedented, automated insight:

“AIOps platforms utilize Big Data, modern machine learning, and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation, and service desk) functions with proactive, personal, and dynamic insight.” (Gartner – “Innovation Insight for Algorithmic IT Operations Platforms”– Colin Fletcher, Refreshed: 26 April 2017 | Published: 24 March 2016)

Soon after our initial meeting, we entered into an official partnership agreement with Nutanix and with their support, developed a solution that was released in FixStream 6.0. The Nutanix alliance team has been extremely collaborative working with our R&D and engineering teams, ensuring the successful delivery of the desired capabilities.

FixStream delivers much-needed visibility and analytics for enterprises to successfully deploy critical business applications into Nutanix-powered (AHV or VMware hypervisor) environments and troubleshoot, plan and maintain going forward.

The FixStream platform:

  • Auto-discovers compute, storage, network entities from the Nutanix Enterprise Cloud environment by using Nutanix REST APIs and other FixStream native data collection techniques
  • Provides topology analytics across the entire customer network and visually represents how Nutanix Enterprise Cloud appliances connect with the end-to-end enterprise network, including virtual and physical networks, links, and interfaces
  • Auto-discovers application services such as app servers, databases, and web servers, running on Nutanix managed VMs, network flows, and available paths in/out into other entities in the network
  • Auto-discovers application dependencies and delivers application maps by connecting the underlying physical, virtual and logical entities in network, compute and storage with dynamic computation of network and storage path for application flows. It’s like Google Maps for critical enterprise business applications
  • Collects performance metrics, alerts, and faults from all entities in the map and algorithmically correlates the events using time-series analytics to identify patterns and anomalies. It then provides proactive and predictive remedial actions. For example: a business application’s performance slows down every Monday at peak hour. At the same time, the VM where the application service is running runs at high CPU, and the pattern repeats every week. FixStream identifies the trend and provides a remedial recommendation to proactively provision more CPU to the VM
  • Stores collected data contextually in a linearly scalable backend search database, allowing users to automate IT compliance and reporting functions
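The weekly-pattern example in the metrics bullet above can be sketched with a few lines of Python: check whether the high-CPU samples all land on the same phase of a fixed period. The samples, threshold, and period are hypothetical, and real time-series analytics would be far more robust than this.

```python
# Illustrative recurring-pattern check: do all samples above a
# threshold fall on the same phase of a given period (e.g. every
# Monday peak hour, with one sample per day and a 7-day period)?
def recurring_period(samples, threshold, period):
    """True if every above-threshold sample repeats at the period."""
    spikes = [i for i, v in enumerate(samples) if v > threshold]
    if len(spikes) < 2:
        return False  # a single spike is not a pattern
    return len({i % period for i in spikes}) == 1

# Hypothetical daily CPU% over three weeks; spikes on day 0 each week.
cpu = [95, 40, 38, 41, 39, 30, 28,
       93, 42, 37, 40, 38, 31, 29,
       96, 41, 39, 42, 40, 32, 27]
print(recurring_period(cpu, threshold=90, period=7))  # → True
```

Once such a recurrence is detected, the platform can recommend provisioning ahead of the next expected spike instead of reacting after it.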

FixStream for the Nutanix Enterprise Cloud environment enables significant ROI for enterprise customers. According to the Digital Enterprise Journal (DEJ, 2017), 80% of companies surveyed experienced an average MTTR of 4.2 hours, and the cost of a minute of service outage was $72,000 when revenue was lost. FixStream can reduce MTTR to minutes, allowing companies to potentially save millions of dollars.

As technology transforms, it requires millions of data points to be manually correlated for daily operational activities such as planning, migration, workload placement, troubleshooting, and change management. This makes the activities extremely expensive, risky for business, and prone to errors.

FixStream automates these tasks by correlating these data points across transactions, applications and infrastructure.

The FixStream solution for the Nutanix environment delivers the following key business values:

  • Accelerates migration to Nutanix – With the end-to-end visualization, correlation and dependency mapping capabilities of FixStream, enterprises can now migrate from older infrastructure technologies to Nutanix HCI technology significantly reducing unforeseen business risks. The FixStream topology map, application dependency map, and data explorer capabilities provide the analytics required to plan and execute the transformation activities while lowering cost via FTE reduction and assuring performance of business services.
  • Automates troubleshooting of business outages and reduces MTTR from hours to minutes – FixStream’s application map and time-series event correlation pinpoint the exact root cause of a problem in the Nutanix Enterprise Cloud infrastructure, as well as in the larger corporate network connected to Nutanix, in real time. The FixStream platform also proactively identifies performance bottlenecks and notifies the operations team to take corrective actions.
  • Resource Optimization and Application Workload Management – Through its performance Heatmap, FixStream provides analytics that let operations teams realign resource allocation for optimization and cost reduction. Additionally, the Heatmap provides insights on available resources for new workload placement based on specific requirements for memory, disk, and CPU.
  • Automates IT Compliance and Reporting – Cross-domain, cross-vendor data across network, storage, compute, and applications is discovered by FixStream and stored in the backend ElasticSearch database. This allows users to query data for hierarchical output as needed for compliance analysis and reporting. The automation of this quarterly IT compliance activity significantly reduces costs for enterprises.

We are very excited to jointly launch this solution in the market and help Nutanix customers realize the benefits of FixStream’s innovative AIOps platform capabilities. For more details on this solution, please visit our website.

Disclaimer: The views expressed in this blog are those of the author and not those of Nutanix, Inc. or any of its other employees or affiliates. This blog may contain links to external websites that are not part of Nutanix. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

© 2017 Nutanix, Inc. All rights reserved. Nutanix is a trademark of Nutanix, Inc., registered in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).

Open Data Ingestion and API Driven Architecture by FixStream


By Bishnu Nayak

In my last blog, I discussed the need for auto-discovery in hybrid IT environments and the business value it enables for enterprises. In this blog, I will focus on an open API data ingestion architecture strategy that further leverages the power of auto-discovery and correlation to unify the heterogeneous IT operations landscape.

Enterprise IT environments are becoming increasingly complex, with hybrid IT deployment models spanning virtualization, cloud, containers, and software-defined architectures. New technology adoption introduces new management and operations tools. The number of tools used to manage the hybrid environment is growing across operations domains such as infrastructure monitoring, ticketing, orchestration, automation, application management, and security.

In fact, more than 35 percent of IT professionals surveyed said there are too many tools and dashboards. They say this disjointed situation makes them slower to respond to critical issues and identify sources of trouble.

Nemertes Research’s John Burke says the more toolsets, the tougher it is to use them effectively.

That said, the fact is many of these tools are important to IT. And they’re not going away any time soon, at least not all of them.

So what’s the solution? The solution is to bring these tools together to a unified operations view via open API data ingestion and a single-pane-of-glass approach.

That’s exactly what FixStream has done with its Artificial Intelligence for IT Operations, or AIOps, platform.


The FixStream AIOps platform is built on an open API data ingestion architecture. The ingestion layer rests on a robust set of APIs used to communicate with domain-specific tools. APIs are at the heart of this architecture, which allows for data collection from the following categories of tools in a standard way:

  • Application Performance Management (APM) solutions like AppDynamics and New Relic
  • IT Operations Management (ITOM) software like Nagios and SolarWinds
  • IT Service Management (ITSM) systems like BMC Remedy and ServiceNow
  • Security Information and Event Management (SIEM) offerings from Splunk and others

The ability to do open API data ingestion from an array of data sources is critical. It allows for an accurate, real-time view of all the moving parts in hybrid IT environments. That includes all applications, all business transactions, and all infrastructure.

FixStream has enabled this via the introduction of connectors and Southbound APIs. We have put together prepackaged connectors for the most popular tools in use today. Additionally, our APIs enable FixStream's industry colleagues and customers to build their own connectors.
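To make the connector idea concrete, here is a minimal sketch of what a southbound data-ingestion connector might look like. The class names, the `Event` shape, and the Nagios client methods are all hypothetical; FixStream's actual connector SDK is not public.

```python
# Minimal sketch of a southbound data-ingestion connector interface.
# All names here are illustrative, not FixStream's real SDK.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Event:
    source: str        # tool that produced the event, e.g. "nagios"
    entity_id: str     # normalized identifier of the affected entity
    severity: str
    payload: dict = field(default_factory=dict)

class Connector(ABC):
    """One connector per domain tool (APM, ITOM, ITSM, SIEM)."""

    @abstractmethod
    def poll(self) -> list[Event]:
        """Fetch raw records from the tool's API and normalize them to Events."""

class NagiosConnector(Connector):
    def __init__(self, client):
        self.client = client  # injected API client, easy to stub in tests

    def poll(self) -> list[Event]:
        # Normalize the tool's native alert format into the common Event shape.
        return [
            Event(source="nagios",
                  entity_id=alert["host"],
                  severity=alert["state"],
                  payload=alert)
            for alert in self.client.get_alerts()
        ]

class StubClient:
    """Stand-in for a real Nagios API client, so the sketch runs offline."""
    def get_alerts(self):
        return [{"host": "db-01", "state": "CRITICAL"}]

events = NagiosConnector(StubClient()).poll()
```

The key design point is normalization: every connector, whatever the source tool, emits the same `Event` shape, so the platform can correlate across tools without caring where the data came from.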

 As you know, APIs are bridges that connect software together. Forrester Research has called the API the poster child of digital transformation. That’s because APIs enable different systems to work together for more efficient operations and better outcomes.

For our customers, FixStream's APIs and connectors add up to ease of use, elimination of silos, greater efficiency, and more intelligence. They enable users to get more value out of AIOps, and out of their existing tools too. The value of the disparate, domain-specific data that exists in domain-centric tools is significantly enriched when ingested into the FixStream AIOps platform, powered by its multi-domain, multi-vendor, and multi-layer correlation.

Getting an end-to-end view of your applications and network resources requires a lot of things to come together. And our APIs – and related connectors – help make that happen, and build value in the process.

Our open API approach to ecosystem enablement is in many ways similar to the disruption that happened in the smartphone app ecosystem. Smartphones deliver powerful open platforms, such as iOS and Android, to developers all over the world, enabling them to deliver powerful applications that run on the smartphone OS.

FixStream's open API data ingestion is pivotal to building a community and ecosystem around data ingestion from various sources. It enables our partners, customers, third-party vendors, and developers to build connectors that ingest data into FixStream and leverage the power of FixStream AIOps.

I am very excited about the opportunities this will open up, and the business value it will enable our partners and customers to drive.

How Best to Collect Unstructured Data from Hybrid IT


Auto-Discovery Provides a Simple, Scalable, Automated Approach

By Bishnu Nayak

We live in a world that is massively distributed, disparate and diverse.

There are many different people, speaking various languages, living in different cultures. If you can interact with these individuals in their own language, you can learn a lot. And that broader knowledge and insight can significantly benefit people and businesses across the world.

Enterprise networks are worlds of their own. And in some ways, they mirror this larger disparate world.

Enterprise hybrid IT data centers contain many entities across network, compute, and storage supplied by different technology vendors. Each vendor has its own culture, its own syntax and language for how its products are managed and how they interact with other entities in the IT environment.

But if you can interact with these distributed entities in a normalized way and understand how they relate to one another, you can derive a deeper understanding into the end-to-end environment. That understanding helps enterprises manage their IT environments optimally and profitably.

My point is that we live in a diverse world. And when we collect information about different entities across the world from different sources, we gain greater understanding. That can help improve human lives. I mention this because the FixStream AIOps platform helps businesses improve their IT operations by understanding their application and IT infrastructure resources.

FixStream AIOps technology can:

  • collect data from disparate IT entities and siloed systems
  • correlate and analyze the flood of data from different sources
  • and present that information in a way that makes it quick and easy for businesses to understand and gain value from it.

Sameer Padhye has blogged about the data correlation and visualization aspects of FixStream’s AIOps solution. (I should note here that AIOps stands for Artificial Intelligence platform for IT Operations.)

But before data is correlated and visualized, it needs to be collected from millions of disparate data sources. And FixStream uses its smart auto-discovery solution for optimal data collection.

So, I'm going to hit the rewind button with this blog and address the first step in the process.

Auto-discovery in this context describes the process of automatically fetching lots of data from many disparate sources. FixStream can do that because it knows how to communicate with the various infrastructure and application entities. Data is collected by FixStream data collectors from all kinds of entities – switches, routers, load balancers, firewalls, servers, storage devices, hypervisors, VMs, and application entities – whether physical, virtual, or logical.

FixStream is vendor agnostic. So, it doesn’t matter if the entity comes from Cisco, HP, IBM, Nutanix, Juniper, VMware, or some other supplier.

FixStream knows how to normalize and make sense of the massive amount of data it collects using a semantic model. And it can do that regardless of the physical location of those entities across the hybrid enterprise network.

That’s really useful, especially considering that complex IT environments typically lack a real-time inventory of assets. FixStream addresses that gap. Our hybrid cloud discovery capability provides a reliable and up-to-date inventory of enterprise compute, network, storage, and application environments.

Here's how it works. Data collectors scan IP addresses in network subnets or user-defined boundaries to learn about the infrastructure. FixStream then identifies the make and model of each entity and uses its vendor-specific command library to learn how the entity is configured, along with other topology-related data such as dynamic tables, interfaces, MAC addresses, routing information, and VLANs.
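That scan-fingerprint-collect loop can be sketched roughly as follows. The probe function, vendor table, and command lists are illustrative stand-ins, not FixStream's actual implementation.

```python
# Sketch of a subnet scan with vendor fingerprinting and per-vendor
# command dispatch. Vendor table and command lists are illustrative only.
import ipaddress

# Hypothetical vendor-specific command library.
VENDOR_LIBRARY = {
    "cisco":   ["show version", "show cdp neighbors", "show vlan brief"],
    "juniper": ["show version", "show lldp neighbors", "show vlans"],
}

def discover(subnet, probe, run_commands):
    """Scan every host in the subnet; for responders, pull configuration
    and topology data using the vendor-specific command set."""
    inventory = {}
    for ip in ipaddress.ip_network(subnet).hosts():
        vendor = probe(str(ip))          # e.g. an SNMP fingerprint lookup
        if vendor is None:
            continue                     # host absent or unreachable
        commands = VENDOR_LIBRARY.get(vendor, [])
        inventory[str(ip)] = {
            "vendor": vendor,
            "config": run_commands(str(ip), commands),
        }
    return inventory

# Stubbed probes so the sketch runs without a live network.
def fake_probe(ip):
    return "cisco" if ip.endswith(".1") else None

def fake_run(ip, commands):
    return {cmd: "<output>" for cmd in commands}

inventory = discover("10.0.0.0/30", fake_probe, fake_run)
```

Injecting `probe` and `run_commands` as functions keeps the scan logic independent of the transport (SNMP, SSH, API), which is roughly what vendor-agnostic discovery requires.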

That allows FixStream to provide a topology map that illustrates the relationships among all IT entities. FixStream also does automatic and dynamic discovery and mapping of applications and their infrastructure dependencies.

The contextual maps are then correlated with alerts, faults, log events, and tickets ingested from different sources via the FixStream Open API ingestion layer.

This discovery, correlation, and mapping delivers tremendous operational and business value to our customers. For example, if you're doing maintenance on a device, you can see what else is connected to it. If you are doing a migration, you can easily understand the dependent systems that could be impacted. This end-to-end correlated view allows enterprises to perform faster root cause analysis, lower business risk, and ease migration challenges.

Many of our competitors lack such capabilities. Some have them, but their approaches to auto-discovery are less than optimal. To be frank, they tend to be quite basic.

By comparison, FixStream auto-discovery allows for a very rich data experience. Our approach is extremely granular, deriving and analyzing hierarchies, links, and relationships across all IT entities.

For example, the platform derives the parent-child relationships between hypervisors and all the VMs hosted on them. That exposes the chain from VM to hypervisor to the TOR switch they connect to. The FixStream AIOps solution also derives all available network paths between compute, storage, and network entities and correlates them to application flows (Flow2Path analytics). That helps IT teams troubleshoot, perform maintenance, and make more effective use of their resources.
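The parent-child derivation described here can be illustrated with a simple adjacency map. The entity names are made up; a real platform would build this graph from discovered data.

```python
# Sketch: parent-child topology as an adjacency map, with a path walk
# from a VM up to its top-of-rack switch. Entity names are illustrative.
PARENT = {
    "vm-101": "hypervisor-2",        # VM hosted on a hypervisor
    "vm-102": "hypervisor-2",
    "hypervisor-2": "tor-switch-A",  # hypervisor uplinks to a TOR switch
}

def path_to_root(entity):
    """Follow parent links until we reach an entity with no parent."""
    path = [entity]
    while entity in PARENT:
        entity = PARENT[entity]
        path.append(entity)
    return path

def children_of(entity):
    """Reverse lookup: all entities whose parent is `entity`."""
    return sorted(child for child, parent in PARENT.items() if parent == entity)

path = path_to_root("vm-101")
vms = children_of("hypervisor-2")
```

Once relationships are stored this way, questions like "what is affected if this TOR switch goes down?" reduce to cheap graph traversals.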

You don't know what you don't know. So the FixStream auto-discovery solution helps you know the unknowns.

Some FixStream customers have uncovered assets they didn’t even realize they had. It was a nice surprise. But the fact that some businesses lose track of their IT assets isn’t particularly surprising. It’s easy to do.

The FixStream discovery process enriches the value of ITSM systems by feeding the discovered data into their CMDBs. The traditional CMDB discovery and update process lacks knowledge of the relationships between the compute, storage, network, and application layers; FixStream provides rich relationship data across all layers to the CMDB, further enhancing ITSM processes such as incident, change, configuration, and asset management.

FixStream cloud discovery also makes it easy to identify underutilized resources and repurpose them for business services that need additional capacity. As a result, businesses don't have infrastructure assets sitting idle, and they can spend their limited budgets on things they really need.

Knowing what’s in your network is also important from a compliance and security standpoint. If you don’t know what’s in your network, you can’t keep tabs on what’s happening with it.

It’s also important to note that FixStream’s auto-discovery approach is agentless, so it’s not intrusive.

With agentless auto-discovery there’s no need to deploy agents on the devices from which data is collected. That means businesses don’t have to install agents on thousands or tens of thousands of devices. And they don’t have to worry about implementing different agent versions to address various vendor solutions.

All of the above illustrates how FixStream makes data collection simple and scalable. Even in hybrid cloud, multi-cloud, multi-domain, and multi-vendor environments.

Not only that, but we make sure you’re collecting the right kind of data. Once we do that, FixStream uses AIOps to aggregate, correlate, analyze, and visualize your data. All that adds up to new value for your business.

FixStream continues to add value to the AIOps ecosystem through its data collector and open API architecture. Our open architecture means new collectors can be built in a one-to-three-month release cycle. And third parties can now build collectors based on the FixStream open architecture too.

(In my next blog, I’ll talk more about the importance of APIs and open data ingestion.)

But for now, let me leave you with this quote from author and speaker Deepak Chopra.

“Success comes when people act together. Failure tends to happen alone.”

The same could be said of data in the IT operations management, planning, and troubleshooting realm.

IT environments today are made up of many different and disparate entities. The cloud and virtualized technologies like containers, microservices, virtual machines, and network functions have added to the chaos.

Businesses, which are increasingly reliant on connected applications, need to get a handle on all this. They can do that by correlating, analyzing, visualizing, and acting on an array of data.

That’s the only way organizations can avoid failure, optimize networks, and ensure application – and business – success. And auto-discovery is the first step in that process.

Getting a Handle on AIOps And Learning What These Platforms and Solutions Can Do for You


By Enzo Signore & Bishnu Nayak

As the headline suggests, we wrote this blog to inform readers like you about AIOps. The first question many of you probably have is: What the heck is AIOps?

Excellent question.

The simple answer is that AIOps stands for Artificial Intelligence for IT Operations. It’s the next generation of IT operations analytics or ITOA. And its value is in helping organizations address IT challenges on a number of fronts.

These challenges include:

  • The increasing complexity and dynamic nature of IT architectures
  • Digital business transformation
  • Siloed IT operations
  • Exponential data growth

All of the above render traditional, domain-centric monitoring and IT operations management inadequate. Such systems can't correlate the onslaught of data that various IT domains create. What's more, they're unable to provide the insights IT operations teams need to proactively manage their environments. And that just won't cut it.

AIOps solutions, however, can address these challenges. They enable enterprises to unify and modernize IT operations. And they allow enterprises to make the most of their existing network investments.

Let’s confront the above-noted IT challenges one at a time. Then we’ll explain how AIOps can help your business conquer them.

The Increasing Complexity and Dynamic Nature of IT Architectures

To increase business agility, IT organizations are deploying dynamic, modern IT architectures enabled by virtualization technologies. That includes containers, elastic clouds, microservices, and virtual machines.

At least a quarter of businesses had adopted containers by late 2017. The application container market was worth $762 million in 2016. By 2022 it will balloon to $2.7 billion. The use of cloud platforms is on the rise, as more businesses migrate more applications. By July 2018, 80 percent of all IT budgets will be committed to cloud solutions.

The dynamism these architectures and technologies enables is important for businesses. It helps them adjust to the fluctuating demands of millions of digital customers around the globe.

However, that often comes at the cost of decreased visibility. That’s because application workloads and flows are now abstracted from their physical infrastructure. And that creates new challenges in pinpointing potential issues.

So without end-to-end correlated data, adoption of these key technologies can be risky and cumbersome, because IT staff will be unable to effectively map current workloads to the new environments, and they'll struggle to manage their performance and uptime. Plus, purchasing these new technologies can be extremely expensive, and AIOps can serve as insurance that organizations get maximum ROI from those investments.

“By 2022, the application container market will be worth $2.7 billion.”

Digital Business Transformation

Enterprises across the globe are leveraging digital technology to transform their businesses. Such efforts aim to provide better experiences to their prospects, customers, suppliers, and internal stakeholders.

To succeed as digital companies, businesses need to rethink their entire IT stack and operational strategy. And they need to ground these efforts with business-first considerations.

That should include how they think about application and network uptime.

Enterprises incur an average cost of $300,000 per outage. That’s if no revenue is at stake. If the outage impacts revenues, organizations lose an average of $72,000 per minute. That means companies lose a whopping $5.6 million per outage.

You can see why modern enterprises must make applications assurance and uptime their No. 1 objective. Those that don’t could face catastrophic damage to their revenues and reputation.

“Companies lose a whopping $5.6 million per outage.”

The Problem with Siloed IT

Research suggests 41 percent of enterprises use 10 or more tools for IT performance monitoring. Seventy percent use more than six. And you need even more tools to manage a hybrid cloud environment. That will include solutions to monitor workloads running in AWS, Azure, or multi-cloud environments.

Domain-centric tools provide a deep view into a specific domain. But they lack the ability to provide a correlated and end-to-end view across domains.

That’s a problem because cross-domain data collection, correlation, and visibility are key. They can enable you to track transaction problems like failed eCommerce orders to infrastructure issues like database timeout errors, for example.

But siloed management tools prevent most organizations from making these important connections. As a result, most enterprises suffer from very long Mean Time To Repair intervals and unhappy customers.

MTTR averages 4.2 hours and wastes precious resources. Businesses employ an average of 5.8 full-time equivalent employees to address each incident. That FTE figure is as high as 11 in 15 percent of cases.

This drain of resources and finger-pointing occurs as IT staff members struggle to manually correlate data. And often a whole lot of data is involved. Solving a critical business problem often entails using hundreds of data points – imagine how complex it becomes when IT is required to work with thousands or millions of them. That's a lot.

“Mean time to repair averages 4.2 hours and wastes precious resources.”

The Challenge of Exponential Data Growth

Indeed, millions of data points are now flowing to the IT operations team in real time. This data deluge will only accelerate as adoption of containers, microservices, and virtualization grows.

And it’s growing big time. In the last 12 months, enterprises collected 88 percent more data than the prior year. Containers alone generate 18 times more data than traditional IT environments.

There are automated ways to collect and process this massive amount of data from an individual domain, but domain-specific teams then need to manually correlate it. (And 79 percent of organizations report that adding more IT staff to address this problem is not an effective strategy.) Manual correlation is not only time-consuming but also prone to misinterpretation, and it requires skilled resources from different domains, leading to a very long diagnostic process for root cause identification.

“Containers alone generate 18 times more data than traditional IT environments.”

To address these challenges, organizations need a new class of technology to modernize the IT operations process. This technology needs to be able to correlate millions of data points across all IT domains. It should have the smarts to apply machine learning to detect patterns. And it should present that information so organizations can easily see what’s happening and gain insights.

This technology is what we mean when we talk about AIOps.

AIOps Defined

Gartner recognizes AIOps as a new strategic IT segment.

“Artificial intelligence for IT operations (AIOps) platforms are software systems that combine big data and AI or machine learning functionality to enhance and partially replace a broad range of IT operations processes and tasks, including availability and performance monitoring, event correlation and analysis, IT service management, and automation.” (Gartner, “Market Guide for AIOps Platforms,” Will Cappelli, Colin Fletcher, Pankaj Prasad. Published: 3 August 2017)

Figure 1: Gartner’s visualization of the AIOPS platform

AIOps Platform Enabling Continuous Insights Across IT Operations Management

The general process by which AIOps platforms and solutions operate includes three basic steps.


An AIOps platform first needs to observe the nature of data and its behavior. That involves collecting information through data discovery.

AIOps data discovery needs to support big data scale. That way it can address the volume of data from different IT domains and sources. Those sources may include legacy infrastructure or new container, hybrid cloud, or virtualized environment elements.

Whatever the data or source, speed is key to the observation part of the process. So the data must be collected in near real time to detect patterns. Performance- and health-related information is collected from hundreds of sources – using an agentless or agent model. Successful AIOps platforms leverage a combination of mechanisms to collect data from a multi-domain and multi-vendor environment. That environment may include an array of containers, hypervisors, network and storage solutions, public cloud, and other technologies and architectures.

A successful AIOps platform also combines the power of big data and machine learning with domain knowledge to identify data relationships and history to solve this complex problem.


An AIOps platform provides orchestration across key IT operations domains – most importantly IT Service Management.

ITSM activities such as change management and incident management have traditionally been manual. And they're typically heavily dependent upon the Configuration Management Database (CMDB). The problem with legacy CMDBs is that they are highly unreliable in environments that change frequently.

The AIOps platform provides analytics and input to make ITSM tasks more automated and reliable. For example, AIOps can update CMDBs using its knowledge of the environment, its state, and its changes. The platform's ability to observe hybrid environments end to end provides this power, ensuring CMDB data is relevant and reliable. That allows for automation and faster, more accurate incident management, and it minimizes risks that might otherwise arise from human error. And pattern recognition allows businesses to see and address problems before they affect end-user experiences.
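As a rough illustration of the CMDB update idea, here is a generic reconciliation sketch. It is not tied to any specific ITSM product's API, and the record shapes are assumptions.

```python
# Sketch: reconcile auto-discovered inventory against CMDB records.
# Record shapes are generic, not tied to any particular ITSM product.

def reconcile(cmdb, discovered):
    """Return the CI updates a discovery pass would push to the CMDB:
    new CIs, changed attributes, and CIs no longer seen on the network."""
    to_add    = {k: v for k, v in discovered.items() if k not in cmdb}
    to_update = {k: v for k, v in discovered.items()
                 if k in cmdb and cmdb[k] != v}
    stale     = [k for k in cmdb if k not in discovered]
    return to_add, to_update, stale

cmdb       = {"db-01": {"os": "rhel7"}, "web-09": {"os": "rhel6"}}
discovered = {"db-01": {"os": "rhel8"}, "lb-03": {"os": "nxos"}}

to_add, to_update, stale = reconcile(cmdb, discovered)
```

The point is that the CMDB stops being a manually maintained snapshot and becomes a projection of continuously discovered state.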


Automation, or closed-loop operation, is the nirvana of an AIOps platform.

Of course, automating critical IT operations using machine learning is new territory for most organizations. And IT leadership will need to get comfortable with it before they fully embrace automation. But new state-of-the-art automation – which uses advanced human inputs and machine learning – is maturing. And organizations can employ it today to do both simple and more complex jobs.

For example, they can employ it to clean log files to free up space. And they can use it to restart an application. Automation also can change application traffic policy on a router if AIOps sees the need.
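Those closed-loop examples can be sketched as a small condition-to-action rule table. The thresholds, metric names, and actions below are illustrative only.

```python
# Sketch: closed-loop remediation as a condition -> action rule table.
# Thresholds, metric names, and actions are illustrative.

ACTIONS_RUN = []   # records what the loop decided to do

def clean_logs(host):
    ACTIONS_RUN.append(("clean_logs", host))

def restart_app(host):
    ACTIONS_RUN.append(("restart_app", host))

RULES = [
    # (predicate over a metric sample, remediation action)
    (lambda m: m["disk_pct"] > 90, clean_logs),
    (lambda m: m["app_state"] == "hung", restart_app),
]

def evaluate(host, metrics):
    """Fire every remediation whose condition matches the latest sample."""
    for predicate, action in RULES:
        if predicate(metrics):
            action(host)

# A sample with a full disk but a healthy application: only the log
# cleanup should fire.
evaluate("web-07", {"disk_pct": 95, "app_state": "ok"})
```

In a production loop the predicates would come from machine-learned patterns rather than fixed thresholds, but the closed-loop structure (detect, decide, act) is the same.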

How and Where AIOps Delivers Value

Enterprises that have deployed AIOps solutions have experienced transformational benefits. They include revenue growth, better customer retention, improved customer experience, lower costs, and enhanced performance.

Their operational teams have been able to:

  • Increase end-to-end business application assurance and uptime
    • Manage an integrated set of business and operational metrics
    • Predict and prevent outages
    • Dramatically reduce Mean Time to Detect and Mean Time to Repair
    • Lower the number of IT FTEs dedicated to troubleshooting
    • Decrease operational noise and alerts
  • Optimize IT and reduce IT costs
    • Replace older, silo-focused IT monitoring tools
    • Auto-discover complex, heterogeneous topologies
    • Gain visibility into the hybrid IT environment
    • Accelerate migration to the hybrid cloud
    • Expedite the adoption of hyper-convergence and microservices architecture
    • Reduce risk in consolidating and migrating data centers
  • Free up resources to enable IT operations to become a proactive source of innovation
    • Automate and reduce the cost of audits and compliance
    • Simplify IT processes
    • Break down silos across their IT teams
    • Enable less experienced staff to become more productive, faster

What the AIOps Architecture Looks Like

An AIOps solution includes the following functional blocks:

We’ll address these building blocks from the bottom up because that’s how AIOps itself works.

Open Data Ingestion

An AIOps platform collects data of all types from various sources. That may include data on faults, logs, performance alerts, and tickets. The ability to ingest data from the most diverse data sources is critical. It allows for an accurate, real-time view of all the moving parts across hybrid IT environments. More about open data ingestion here.


Auto-Discovery

Given the very dynamic nature of modern IT environments, businesses need an auto-discovery process that automatically collects data across all infrastructure and application domains – including on-premises, virtualized, and cloud deployments – and identifies all infrastructure devices, the running applications, and the resulting business transactions. Read the auto-discovery blog.


Correlation

Then it's time for the AIOps platform to correlate this data in a contextual form. It needs to determine the relationships between infrastructure elements, between an application and its infrastructure, and between business transactions and applications.

To learn more about the importance of correlation, check out this blog.
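The three correlation layers just described (transaction to application to infrastructure) can be sketched as nested lookups. All names and mappings here are invented for illustration.

```python
# Sketch: three-layer correlation - transaction -> application -> infrastructure.
# All mappings are invented; a real platform derives them via discovery.
TXN_TO_APP   = {"order-checkout": "ecommerce"}
APP_TO_INFRA = {"ecommerce": ["web-01", "db-01"]}
INFRA_EVENTS = {"db-01": ["connection pool exhausted"]}

def correlate(txn):
    """Walk a failed transaction down to the infrastructure events beneath it."""
    app = TXN_TO_APP.get(txn)
    chain = {"transaction": txn, "application": app, "evidence": {}}
    for node in APP_TO_INFRA.get(app, []):
        if node in INFRA_EVENTS:
            chain["evidence"][node] = INFRA_EVENTS[node]
    return chain

result = correlate("order-checkout")
```

With these relationships in place, a failed checkout no longer has to be debugged tool by tool; the infrastructure evidence is attached to the business transaction directly.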


Visualization

Once the end-to-end correlation process is complete, the data needs to be presented in an easy-to-use format. And that's what visualization is all about.

Visualization is important because it allows IT operations to quickly pinpoint issues and take corrective actions.

Of course, visualization in IT operations has become a commodity. Every solution includes a dashboard of some type. Yet an estimated 71 percent of organizations say data is not actionable. That’s why AIOps is important. It provides a new generation of visualization that makes data actionable.

Because visualization is key, we’ve also put together a blog on this topic. You can find it here.

Machine learning

Finding the root cause of a problem is key. But it’s even more critical to determine recurring patterns and predict likely future events.

AIOps solutions use supervised and unsupervised machine learning to determine patterns of events in a time-series. They also detect anomalies from expected behaviors and thresholds and predict outages and performance issues. Learn more about machine learning here.
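As a toy stand-in for such models, here is a rolling z-score anomaly detector over a metric time series. It is a deliberate simplification; real AIOps platforms use far more sophisticated (and unpublished) techniques.

```python
# Sketch: rolling z-score anomaly detection on a metric time series.
# A simplified stand-in for the models an AIOps platform would use.
from statistics import mean, stdev

def anomalies(series, window=5, threshold=3.0):
    """Flag indices where a point deviates from its trailing window
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A latency series with one obvious spike at index 7 (95 ms).
latency_ms = [10, 11, 9, 10, 12, 10, 11, 95, 10, 11]
spikes = anomalies(latency_ms)
```

The same idea scales up: learn what "normal" looks like from history, then alert only on statistically significant deviations rather than fixed thresholds.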


Automation

Automation is a key component of AIOps, as it delivers the end ROI to the customer. It does so by automating manual IT operations tasks, significantly reducing OPEX, and expediting innovation. It also reduces MTTR and can improve customer satisfaction.

AIOps enables IT operations to modernize existing processes. It allows IT operations to make progress versus traditional ITOA strategies, abandon old, reactive processes, and become proactive by predicting issues and preventing outages.

By providing an end-to-end correlated view of the entire IT environment, AIOps allows enterprises to accelerate their digital transformation strategies, adopt new technologies faster, and increase business productivity.

To learn more about FixStream, check out our AIOps solution whitepaper.

Visualize This – Presenting Data to Allow Faster Troubleshooting & Trendspotting


By Sameer Padhye

In my last blog, I wrote about the value of data correlation. As I noted, correlation is important because it makes connections between different data. And correlating data about applications and underlying infrastructure makes predictive analysis and more efficient root cause analysis possible.

Now the question becomes: What’s the best way to present that data?

If you provide it in spreadsheet form, it can be difficult and time-consuming to understand what stories the data can tell. But presenting data visually makes it easy and intuitive to see what’s happening with complex cloud and data center environments – and the applications they support.

With visualization, you can see in a minute what would otherwise take hours or days to discern. That’s really important when your team is scrambling to fix an issue that’s preventing customers from making purchases on your website, for example. And it can be a real lifesaver when managers, C-level executives, and/or customers are pushing you to find a solution – and quickly – to their application problems.

In fact, a recent survey of 1,000 workers at U.K. and U.S. businesses indicates that 86 percent of companies benefit from faster decision making through data visualization. The same study showed that 80 percent of organizations report more accurate decision making with visualization. Another study suggests that companies that adopted data visualization saw a 77 percent improvement in decision making as a result.

Here’s an example of data visualization at work that I presented during a keynote speech. It looks at Napoleon’s invasion of Russia – specifically, data on soldiers he lost along the way.

First I provided the audience with battle loss details in Excel spreadsheet form. I then asked audience members for their conclusions on the likely results of Napoleon’s efforts. (I heard crickets.)

Then I displayed a map of Napoleon’s battle path, illustrating how many men he had lost. It then became clear to the audience what Napoleon and his army were up against.

My point during the presentation, and in this blog, is that visualization helps tell a story. Visualization means you don't have to do a lot of analysis and interpretation. Instead, you can simply see what's happening, decide what to do about it, and act.

Research illustrates the importance of visuals in understanding. That makes sense considering how we’re built. Seventy percent of all our sensory receptors are in our eyes. Fifty percent of our brain is dedicated to visual processing. And 90 percent of information transmitted by the brain is visual.

The Social Science Research Network reports that 65 percent of people are visual learners. 3M research indicates people process visuals 60,000 times faster than text. And visual aids can improve learning by up to 400 percent.

Visuals are so central to understanding that there’s a phrase to describe the phenomenon. It’s called the picture superiority effect.

A recent WIRED story notes that leading organizations, such as the World Economic Forum, are leveraging data visualization to better understand relationships between a wide variety of people and things.

“Businesses deal with data that is highly complex, with multidimensional relationships across many different, massive data sets,” says T-Sciences. “Human beings are visual creatures. As such, the time is right for organizations to implement new solutions for leveraging data visualization and unlock their true potential to meet mission and business goals.”

And Fast Company emphasizes that “visuals add a component to storytelling that text cannot: speed.”

So, how does our visualization work? It’s somewhat akin to Google Maps.

You know how Google Maps works, right? It lets you select your view of the world. You can zoom in and out, and maneuver around. And it provides you with all possible paths to your destination – as well as related data such as accidents, gas stations, and traffic jams along the way.

FixStream visualization is a lot like that. But rather than roads and gas stations, we show the topology of data centers. And we include their network, storage, and compute resources, and the applications they support.

Our platform provides a real-time view of the connections between public and/or private data centers and the resources within them. It shows application-related operational data (like events, tickets, etc.), and which resources applications are using. And – like the Google Maps real-time accident location feature – it highlights trouble spots with red dots. That way you can pinpoint problems very quickly.

Importantly, FixStream doesn’t just show data; our platform presents it in a contextual view. That way you only see what’s relevant to the application or business process you’re managing.

Google Maps visualizes only the accidents and bottlenecks along a particular journey – like Denver to Detroit. FixStream likewise presents data in an application-centric way so you can pinpoint what you want to see for that specific application. That way, you won’t be inundated with irrelevant data that slows analysis and delays decision-making.

For example, our platform can present only the data applicable to eCommerce so you can focus on fixing a problem that could lead to lost revenue. That way you won’t be distracted by alerts impacting another application – like HR – that are probably also important, but likely not as time-critical.

In addition to application type, our platform lets you see data within the context of a hybrid IT infrastructure, inclusive of compute, network element, network element type or supplier, storage, virtual machine, and much more. We present you with alerts, faults, logs, tickets, and other important information.

You can explore by clicking through the various drop-down menus and topology levels. Or you can use our search tool to get where you want to go.

This provides a very different experience than what most organizations are used to today.

Legacy tools present only isolated aspects of the larger application, cloud, and data center picture. FixStream’s visualization removes the blinders that have blocked organizations like yours from having full visibility. Our platform offers a complete and real-time view of what’s happening with your applications and environments.

Let’s look at a couple of examples of how this might come into play.

Consider a situation in which you notice a lot of abandoned website transactions. Or perhaps you’re just doing a regular check of your eCommerce applications. In any case, you might go into a drop-down menu called Business Groups to see what’s happening with your eCommerce-related network elements, connections, and applications.

FixStream’s visualization could highlight the fact that memory utilization has crossed a critical threshold. Because you are able to quickly see that, you don’t have to spend hours looking for the problem. Instead, you can take immediate action to address it, and you’ll be back in business. That’s important because every minute a problem persists adds up to lost revenue.
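To make the idea concrete, here’s a minimal sketch of threshold-based trouble spotting. This is illustrative only – the resource records, field names, and 90 percent critical level are assumptions, not FixStream’s actual data model or API.

```python
# Illustrative sketch: flag resources whose memory utilization has
# crossed an assumed critical threshold of 90 percent.
CRITICAL_THRESHOLD = 0.90

def find_trouble_spots(resources):
    """Return names of resources at or above the critical memory threshold."""
    return [
        r["name"]
        for r in resources
        if r["mem_used"] / r["mem_total"] >= CRITICAL_THRESHOLD
    ]

# Hypothetical inventory (memory in GB).
inventory = [
    {"name": "ecom-app-01", "mem_used": 29.2, "mem_total": 32.0},
    {"name": "ecom-db-01",  "mem_used": 61.5, "mem_total": 64.0},
    {"name": "hr-app-01",   "mem_used": 12.1, "mem_total": 32.0},
]

print(find_trouble_spots(inventory))  # → ['ecom-app-01', 'ecom-db-01']
```

A dashboard would render those names as red dots on the topology instead of printing them, but the underlying check is this simple comparison.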

Indeed, the Digital Enterprise Journal indicates that on average companies lose $72,000 per minute during service outages. And, DEJ says, organizations spend an average of more than 60 minutes per incident repairing performance issues.

The FixStream platform can help expedite your troubleshooting and repair. We can even help you get in front of it.

Let me explain.

Perhaps you have a problem that seems to recur every Monday morning. We enable you to review data over several weeks to see what happened each Monday morning. Our platform can present the sequence of events, since they are captured as a time series. And by visualizing data over a period of time, you can detect patterns. Without visualization, determining patterns from raw data is extremely difficult.

Our solution highlights every time there was a change and what happened following it. It lets you replay, pause, and analyze what happened so you can identify patterns. What’s more, our visualization capabilities illustrate the sequence of events in graph form. And our platform can predict when the next series of events of this type is likely to occur.
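The core idea of spotting a recurring weekly pattern can be sketched in a few lines. The event timestamps and the three-occurrence rule below are hypothetical, chosen only to illustrate bucketing a time series by weekday and hour – not how FixStream’s pattern detection is actually implemented.

```python
# Illustrative sketch: bucket event timestamps by (weekday, hour) to
# surface recurring patterns, e.g. a spike every Monday morning.
from collections import Counter
from datetime import datetime

# Hypothetical event timestamps.
events = [
    "2018-10-01T08:05",  # Monday ~8am
    "2018-10-08T08:12",  # Monday ~8am
    "2018-10-15T08:03",  # Monday ~8am
    "2018-10-03T14:30",  # one-off Wednesday afternoon
]

buckets = Counter()
for ts in events:
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M")
    buckets[(dt.strftime("%A"), dt.hour)] += 1

# A bucket seen three or more times suggests a recurring weekly pattern.
recurring = [slot for slot, count in buckets.items() if count >= 3]
print(recurring)  # → [('Monday', 8)]
```

Once the recurring slot is identified, the time-series replay the text describes lets you step through exactly what happened in that window each week.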

That can enable you to recognize and correct recurring problems. As a result, your applications will be available, your business will function more efficiently, and your technical team will be spared some of the work and stress related to troubleshooting. Less pain, more gain.

Air traffic controllers use radar to see what’s happening in the sky. Doctors use X-rays to see inside the human body. And with FixStream, you can get real-time visibility into your applications and your cloud and data center environments.

That means you can fix problems faster; use your resources more wisely; and better serve your customers, co-workers, supervisors, and stakeholders.

Get the picture?

(In my next blog, I’ll discuss machine learning.)

Sameer Padhye is founder and CEO of FixStream.

Making Connections – The Value of Data Correlation
By Sameer Padhye


The app economy is upon us, and businesses of all stripes are moving to address it. In this age of digital transformation, businesses rely on applications to serve customers and improve operations.

So, in many ways, things are really coming together with digital transformation. At the same time, however, things are really starting to come apart.

In saying that, I’m not casting aspersions on digital transformation. In fact, I’m a big believer in it.

Businesses need to introduce applications and adopt new technologies to become more agile, efficient, and responsive. And they’re doing that.

As part of those efforts, they’re employing cloud-based solutions, software-centric and microservices architectures, virtualization and containers. But these new architectures and technologies are creating challenges of their own.

In the past, each application lived on its own dedicated server. So ensuring the desired performance level was relatively simple.

In today’s highly distributed world, however, that’s simply no longer the case. Here’s why.

Some business applications today live in public clouds. And enterprises tend to have no, or very limited, visibility into those clouds. Other businesses take advantage of more distributed hybrid cloud models consisting of on-premises, public, and private clouds.

Applications run on virtual machines, rather than physical, fixed servers. So that adds another level of complexity.

As if that wasn’t enough, containers often exist alongside, or within, VMs. And the use of containers – and number of containers themselves – is quickly proliferating.

Gartner predicts that by 2020, more than 50 percent of global organizations will be running containerized applications in production. That’s up from less than 20 percent today.

The upside of containers is that they offer portability and greater scalability. However, containers move around a lot. And they appear and disappear in the blink of an eye. So that multiplies the number of moving pieces exponentially.

All that makes for a very dynamic – and complex – environment. And that’s good. And bad.

Because this environment is very different than what came before, the application performance tools created a decade or so ago no longer apply. And tools that consider only the application – and not the underlying infrastructure – fall short.

So organizations need new solutions that can address what’s happening with applications and networks today. These tools must collect and correlate information about the application itself and about the underlying infrastructure.

That should include data about application server performance, events, logs, transactions, and more. The compute, network, and storage resources involved in application delivery also need to be figured into the equation.

Only with this full complement – and correlation – of data can organizations understand what’s happening with their applications. That’s important to ensure applications perform as expected to yield the desired business results.
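One simple way to picture this kind of correlation is matching application errors to infrastructure events that occurred shortly beforehand. The record shapes, field names, and 60-second window below are assumptions for illustration – a sketch of the general technique, not FixStream’s actual correlation engine.

```python
# Illustrative sketch: correlate application errors with infrastructure
# events that occurred within a short window before the error.
from datetime import datetime, timedelta

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

# Hypothetical application-layer and infrastructure-layer records.
app_errors = [
    {"app": "checkout", "time": "2018-10-01 08:05:30"},
]
infra_events = [
    {"resource": "vm-12", "event": "memory alarm", "time": "2018-10-01 08:05:10"},
    {"resource": "sw-03", "event": "link flap",    "time": "2018-10-01 02:00:00"},
]

WINDOW = timedelta(seconds=60)  # assumed correlation window

# Pair each error with infra events that happened in the window before it.
correlated = [
    (err["app"], ev["resource"], ev["event"])
    for err in app_errors
    for ev in infra_events
    if timedelta(0) <= parse(err["time"]) - parse(ev["time"]) <= WINDOW
]
print(correlated)  # → [('checkout', 'vm-12', 'memory alarm')]
```

The overnight link flap falls outside the window and is ignored, while the memory alarm just before the checkout error surfaces as a likely culprit – the essence of correlating application data with the underlying infrastructure.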

Intelligent data correlation puts new insight at the fingertips of businesses like yours. And that allows you to do a lot of really amazing, time-saving, and income-impacting things.

For example, you can trim application troubleshooting efforts from weeks, months, or days down to minutes.

That’s really valuable when you consider that a business can lose millions in lost revenue from just a few minutes of app downtime. (That’s not to mention the potential loss of reputation, and losses from diverting IT resources to troubleshoot and fix such problems.)

The recent crash of Amadeus IT Group’s flight booking system shows the widespread impacts that can result from just one business application going down. As Bloomberg reported in September, several major airlines and their passengers were affected by the outage.

When applications go down or don’t perform as required, enterprise IT folks, their technology suppliers, and network service providers often spend a lot of time and energy arguing over the source of the problem. That’s before they even settle on its cause, and identify and implement a solution. Meanwhile, the business functions that rely on the app remain at a standstill.

The good news here is that FixStream has a solution.

Our platform correlates application and infrastructure resources data to identify root problems in real time. And it addresses the distributed nature of applications and related resources.

Here’s one example of how we help do that.

Containers have a short shelf life. So our platform collects data on both active containers and those that existed in the past. That way, when problems arise, businesses like yours have the evidence they need to figure out what happened.

But troubleshooting is just one way the FixStream platform can help your business survive and thrive.

The FixStream platform also employs artificial intelligence to correlate data and uncover patterns. Those patterns can allow your organization to discern what problems are likely to appear downstream from trouble spots.

Such predictive analytics enable companies like yours to address potential problems before they impact applications and business operations. Our data correlation capabilities also can reduce your compliance risk and audit costs.

Plus, we can help you optimize cloud resources and understand application dependencies. That way, you can implement more informed cloud migration strategies.

That’s important because it can help you realize the cost-saving benefits that cloud migration can deliver.

Ninety percent of companies expect savings from their move to the cloud, reports Gartner’s Ron Blair. Yet only 13 percent of them actually reduce their capital expenditures from moving to the cloud. And just 26 percent lower their operational expenditures via cloud migration, Blair said in a December presentation.

One key reason for this disparity is that many organizations carry their complexity into the cloud.

Simplifying that complexity is what FixStream is all about.

Our platform provides intelligence on application performance so businesses can better allocate resources. Our data correlation capabilities reveal what specific resources each application requires. That way businesses like yours can purchase only the cloud resources their cloud-based applications actually need.

So, to review, the FixStream platform:

• enables you to get more value out of your cloud migration,

• provides visibility into what’s happening with your apps and related resources,

• delivers insights on application performance, resource utilization, and what to expect next, and

• keeps your apps – and your business itself – up and running.

That adds up to a whole lot of value.

Applications are the lifeblood of every enterprise. Your financial health depends upon these applications.

And more – and more mission-critical – apps are moving to the cloud every day. So you need to know how your business apps are performing. And when they’re not performing as needed, you have to be able to move quickly to figure out why and implement a fix.

FixStream’s data correlation capabilities can go a long way toward helping you with that – and a whole lot more. For our customers that translates into dollars and cents, significant time savings, and greater business agility.

To learn more about what FixStream can do for you, click here.

Also, keep an eye out for my next blog. In that piece, I’ll discuss the importance of mapping and visualization.

Sameer Padhye is founder and CEO of FixStream.
