IT asset management (ITAM) solutions often frustrate users with their inflexibility and limited scope. In this day and age there is no good reason for such shortcomings, but thankfully, they may soon be a thing of the past.
By considering two IT asset management use cases built with RDFox, we step beyond the typical to unlock greater functionality. The first avenue we explore is more meaningful scenario planning, incorporating the IT system’s capacity and availability. The second is real-time warning and failure propagation through the dynamic tracking of components’ dependencies and functioning status. This is by no means all that RDFox has to offer the sector; rather, it serves to inspire further applications.
So what is an ITAM solution? Well, unsurprisingly, it’s how you manage your IT assets, and more often than not it comes in the form of a software package. There are a number of reasons why you’d want one, not least to keep track of every IT asset your company owns. More importantly though, some offer metrics and insights that can assist with planning, maintenance, and expansion. Packages generally aim to provide a structured overview and to simplify processes that become vastly complex at scale.
Before diving into the details, we must first describe what constitutes an IT asset. Simply put, this covers all IT elements, both software and hardware, regardless of whether the asset is internally owned, like a laptop, or externally sourced, like a secure payment system. A great example is a server, running on a machine, supported by a network and a power supply: four distinct assets. Companies can have thousands of items with complex, interwoven dependencies, so without a dedicated system, management becomes very difficult. Without management, maintenance is shambolic, and minor failures can grow into catastrophes.
To begin managing the assets, we map out the entire network in detail. Components are captured with their basic properties and are linked to one another where one depends on another (e.g. a computer requires a power supply to remain functional). How best to create this map is the crux of the problem, and not a simple one at that.
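For a flavour of what such a map looks like as a knowledge graph, here is a minimal sketch in Turtle. The vocabulary (:Machine, :dependsOn, :availability, and the IRIs) is our own illustration, not a fixed schema:

```turtle
@prefix : <https://example.com/itam#> .

# One machine and the assets it depends on.
:machineA1 a :Machine ;
    :availability 0.99 ;
    :dependsOn :powerSupply1 , :network1 .

:powerSupply1 a :PowerSupply ;
    :availability 0.99 .

:network1 a :Network ;
    :availability 0.99 .
```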
ITAM can be confusing and convoluted, so to help demonstrate the process clearly, we have created a fictional e-commerce store. Below is a drastically simplified diagram showing the flow of information as a customer makes a purchase from our online shop.
The mechanics of our shop are of little importance, but it is essential that we acknowledge the redundancy in the system. There are two ‘availability zones’ that contain the bulk of the network. Zones are established in different geographical locations so that they can be considered independent in the face of failure: if a city containing one zone suffered a power outage, the system could remain live, supported by the other half of the network.
Both zones contain their own database, yet only the leader is queried. The follower simply maintains an up-to-date copy that the servers can switch to should the leader go down. These measures aim to prevent downtime, although in reality 100% uptime can never be guaranteed.
It only takes a quick Google search to show that there’s no shortage of ITAM solutions. So why do many large companies still decide to create their own? While adequate solutions undoubtedly exist, in this bloated industry they are hard to find. The widespread problems lie in the history and foundations upon which modern ITAM solutions are typically constructed.
At their core, solutions often rely on the old and outdated standards of configuration management databases (CMDBs), particularly when it comes to their schematic structures. It is widely accepted that the flexibility of graph databases makes them well suited to handling deeply interconnected data such as this. Despite that, relational models remain a popular basis for solutions. Their familiarity is comfortable for those unwilling to upgrade, but the rigid and immutable nature of such a solution makes for a poor representation of a branching system prone to change. Anything beyond a simple description of components is taxing at best and impossible at worst.
Graph-based databases have become more commonplace, but they too are not without their flaws. In this case it’s not a criticism of the schema per se, but how it tends to be used. A knowledge graph by itself is an improvement over its relational counterpart to be sure, but issues arise when updates need to propagate through the system. Graphs provide a fantastic foundation for a solution but the more advanced features we desire are beyond their scope alone.
For these reasons, metrics that are essential for the stability and consistency of a system, such as availability and bandwidth, are omitted from solutions with shocking frequency. Transitive properties like these are difficult to implement without a lot of manual effort. That is, until you bring rules and reasoning into the mix.
Where inherited properties were previously an obstacle, rules make light work of them. This is best shown by example with our e-commerce store.
Our store, now with its assets graphically represented in our ITAM solution, comprises five IT services:
1. Web servers
2. Application servers
3. Databases
4. An online file store
5. Payment processing
As for the entities within them, three primary attributes were attached (a concrete example follows the list):
• Availability — the probability that an asset is functioning at any given time
• Capacity — a measure of bandwidth: the number of transactions that can be processed in a given timeframe
• Functioning status — ‘isUp’, expressed as true or false
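In graph terms, each attribute is simply a property on the asset’s node. A hypothetical web server might carry all three, again in illustrative Turtle:

```turtle
@prefix : <https://example.com/itam#> .

:webServer1 a :WebServer ;
    :availability 0.97 ;   # probability of functioning at any given time
    :capacity 1000 ;       # transactions per unit time (illustrative figure)
    :isUp true .           # current functioning status
```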
All of the IT services contain similar layers of dependency, so we need only look at one service in detail: the web servers.
At the lowest level, we have four machines, three of type A and one of type B, each with its own power supply and network. Every unique component in our example has an availability close to 99%, or ‘two-nines’. Availability is often expressed as a number of nines, a useful order-of-magnitude shorthand. Two-nines, for example, translates to a downtime of 3.7 days per year; three-nines, just 8.8 hours. To find this value for a machine’s output, we must consider its internal availability along with those of the components it relies on.
Since the machine, its power supply, and its network must all be functioning at once, their availabilities multiply: roughly 0.99 × 0.99 × 0.99 ≈ 0.97. Consequently, the overall availability is lower than that of any one component alone: just one-nine (90%).
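As a rule, this calculation might look something like the following sketch in RDFox’s Datalog dialect; the property names (:outputAvailability, :dependsOn, and so on) are our illustrative vocabulary, not a shipped one:

```datalog
@prefix : <https://example.com/itam#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# A machine's delivered availability is the product of its own
# availability and those of the power supply and network it depends on.
[?m, :outputAvailability, ?a] :-
    [?m, rdf:type, :Machine], [?m, :availability, ?am],
    [?m, :dependsOn, ?p], [?p, rdf:type, :PowerSupply], [?p, :availability, ?ap],
    [?m, :dependsOn, ?n], [?n, rdf:type, :Network], [?n, :availability, ?an],
    BIND(?am * ?ap * ?an AS ?a) .
```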
Due to the dependency relationship, if either the power supply or network fails, the corresponding machine follows. This pattern continues as false ‘isUp’ values cascade through the graph until a component can be supported by another working branch of the network.
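A minimal version of this cascade takes a single rule, which reasoning applies recursively up the graph. This is only a sketch: a fuller model would group redundant providers so that failure propagates only when an entire group is down, and would reconcile derived values with asserted ones.

```datalog
# An asset is down whenever a (non-redundant) dependency is down.
[?x, :isUp, false] :-
    [?x, :dependsOn, ?y],
    [?y, :isUp, false] .
```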
The servers run on the aforementioned machines and so directly inherit their availability (with the overly generous assumption that software is infallible). Since only one server needs to remain active, the availability of the overarching web server service is calculated differently.
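Because the service survives as long as at least one of its n servers is up, and the servers fail roughly independently, the service’s unavailability is the product of the individual unavailabilities:

```latex
A_{\text{service}} = 1 - \prod_{i=1}^{n} \left( 1 - A_i \right)
```

With four servers each at roughly 97%, the product term is about 0.03^4, on the order of 10^-6.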
In this arrangement, the service availability soars to six-nines, equating to an expected downtime of only 32 seconds per year. For context, banks spend vast sums chasing five-nines — a long way short of our store, although admittedly its fictitious nature helps.
This dependence relationship is reflected in the service’s ‘isUp’ attribute, this time an OR statement needing just one of the four providers to be up.
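In rule form, the OR comes for free: there is one derivation per working provider, and any one of them is enough. Again, the vocabulary is illustrative:

```datalog
# The service is up if at least one server providing for it is up.
[?service, :isUp, true] :-
    [?server, :providesFor, ?service],
    [?server, :isUp, true] .
```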
Adding one more layer of detail, each server has its own limit on what it can handle, and the sum of these (or rather, of those active at the time) gives the total capacity of the web server service in a simple calculation.
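A sketch of that calculation using RDFox’s aggregation syntax, under the same illustrative vocabulary:

```datalog
# Total service capacity: the sum of the capacities of the servers
# that are currently up.
[?service, :totalCapacity, ?total] :-
    AGGREGATE([?server, :providesFor, ?service],
              [?server, :isUp, true],
              [?server, :capacity, ?c]
              ON ?service
              BIND SUM(?c) AS ?total) .
```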
All of these calculations were conducted through the use of rules. Setting up the dependency structure throughout is a crucial step towards this goal and can be achieved easily with help from reasoning. Individual links are trivial; if one asset provides for another, we say the latter is dependent on the former. RDFox does the heavy lifting and ensures that this is consistent across the entire network by inference. Not only that, but the incremental nature of the semantic reasoning presented by RDFox means that changes to the network are updated dynamically. As the calculations themselves are also baked into rules, we can perform recursive maths over the graph, affording us a flexibility that cannot be matched by any rigid solution. Suddenly, adding a new component to the setup or dealing with real-time issues and updates becomes feasible without laborious intervention — instead values and dependencies are adjusted automatically in accordance with the change.
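The dependency structure itself reduces to two short rules of this shape, with hypothetical names once more:

```datalog
# If one asset provides for another, the latter depends on the former...
[?y, :dependsOn, ?x] :-
    [?x, :providesFor, ?y] .

# ...and dependencies chain transitively across the network.
[?x, :dependsOn, ?z] :-
    [?x, :dependsOn, ?y],
    [?y, :dependsOn, ?z] .
```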
Speed at scale is the final great hurdle for any ITAM solution, and provides yet another reason why RDFox is perfect for the job. Its optimised in-memory approach means that even with a vast and complex IT asset network, processing remains exceedingly quick, whether querying for information, or updating the database with rippling changes that spread throughout.
It’s all well and good knowing stats like the chance of failure, but how can you apply them? A clear and versatile use, and the inspiration for this demo, is scenario planning. A flexible system that incorporates these values allows you to test different situations again and again, making adjustments as you go with next to no effort. If a network goes down, how does it affect the chances of catastrophic failure? Are you still able to cater to the volume of requests? Questions like these become very easy to answer. Similarly, it assists with preparation for planned maintenance, a spike in traffic around Christmas, or any other event that will shift requirements away from your daily standards.
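A what-if then amounts to editing a handful of facts and re-querying, with the rules and incremental reasoning taking care of the ripple effects. For instance, in SPARQL, using our illustrative IRIs:

```sparql
PREFIX : <https://example.com/itam#>

# Simulate an outage of one zone's network (hypothetical asset)...
DELETE DATA { :networkB :isUp true . } ;
INSERT DATA { :networkB :isUp false . }

# ...then, as a separate query, check whether each service
# can still carry the load.
# SELECT ?service ?total
# WHERE { ?service :totalCapacity ?total . }
```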
Another application we have alluded to is warning and failure propagation — a feature that almost inherently falls out of the reasoned structure. Finding the root cause or having unexpected effects highlighted can be incredibly valuable. Beyond driving swift and specific action, this can act as a guide for targeted support, reinforcing subsystems that need it, when they need it, keeping the whole system online when at risk of buckling.
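With the failure-propagation rule in place, root causes can be read straight off the graph. A sketch of such a query, under the same assumed vocabulary: the roots are the assets that are down without any down dependency of their own.

```sparql
PREFIX : <https://example.com/itam#>

# Root causes: down assets whose own dependencies are all still up.
SELECT ?asset
WHERE {
  ?asset :isUp false .
  FILTER NOT EXISTS {
    ?asset :dependsOn ?dep .
    ?dep :isUp false .
  }
}
```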
IT asset management is frankly archaic in light of the technology available today, technology that is often itself among the very assets being managed. Solutions are uninspired, lacking flexibility and insight. It’s clear that companies have had enough when they would rather create their own at great cost than use what exists. Knowledge graphs and semantic reasoning provide an opportunity to add tremendous value to a sector hungry for more. What RDFox offers is not an ITAM package, but the means to build a feature-rich solution fit for the times. Powerful, versatile, and dynamic: there are few scenarios in which RDFox cannot provide enhancements, as we have shown with the seemingly modest inclusion of availability and capacity in a standard case. The potential to extract so much more is there; it’s just a case of using the right tools.
Request a free trial of RDFox to see where you can improve. For further inspiration, check out our other blogs.
The team behind Oxford Semantic Technologies started working on RDFox in 2011 at the Computer Science Department of the University of Oxford with the conviction that flexible and high-performance reasoning was a possibility for data-intensive applications without jeopardising the correctness of the results. RDFox is the first market-ready knowledge graph designed from the ground up with reasoning in mind. Oxford Semantic Technologies is a spin-out of the University of Oxford and is backed by leading investors including Samsung Venture Investment Corporation (SVIC), Oxford Sciences Enterprises (OSE) and Oxford University Innovation (OUI).