Interpretable Outlier Detection: Frequent Patterns Outlier Factor (FPOF) | by W Brett Kennedy | May, 2024

An outlier detection method that supports categorical data and provides explanations for the outliers it flags

Outlier detection is a common task in machine learning. Specifically, it's a form of unsupervised machine learning: analyzing data where there are no labels. It's the act of finding items in a dataset that are unusual relative to the others in the dataset.

There can be many reasons to wish to identify outliers in data. If the data being examined is accounting records and we're interested in finding errors or fraud, there are usually far too many transactions to examine each manually, and it's necessary to select a small, manageable number of transactions to investigate. A reasonable starting point is to find the most unusual records and examine those, on the idea that errors and fraud should both be unusual enough to stand out as outliers.

That is, not all outliers will be interesting, but errors and fraud will likely be outliers, so when looking for these, identifying the outliers can be a very practical technique.

Or the data may contain credit card transactions, sensor readings, weather measurements, biological data, or logs from websites. In all cases, it can be useful to identify the records suggesting errors or other problems, as well as the most interesting records.

Often as well, outlier detection is used as part of business or scientific discovery, to better understand the data and the processes being described in it. With scientific data, for example, we're often interested in finding the most unusual records, as these may be the most scientifically interesting.

The need for interpretability in outlier detection

With classification and regression problems, it's often preferable to use interpretable models. This can result in lower accuracy (with tabular data, the highest accuracy is usually found with boosted models, which are quite uninterpretable), but it is also safer: we know how the models will handle unseen data. With classification and regression, though, it's also common not to need to know why individual predictions are made as they are. So long as the models are reasonably accurate, it may be sufficient to simply let them make predictions.

With outlier detection, the need for interpretability is much higher. When an outlier detector predicts that a record is very unusual, if it's not clear why this may be the case, we may not know how to handle the item, or even whether to believe it's anomalous.

In fact, in many situations, performing outlier detection has limited value if there isn't a good understanding of why the flagged items were flagged. If we're checking a dataset of credit card transactions and an outlier detection routine identifies a series of purchases that appear to be highly unusual, and therefore suspicious, we can only investigate them effectively if we know what's unusual about them.
In some cases this may be obvious, or it may become clear after spending some time examining the records, but it's much more effective and efficient if the nature of the anomalies is clear from the moment they're found.

As with classification and regression, in cases where interpretability isn't possible, it's often possible to try to understand the predictions using what are called post-hoc (after-the-fact) explanations. These use XAI (Explainable AI) techniques such as feature importances, proxy models, ALE plots, and so on. These are also very useful and will be covered in future articles. But there's also a very strong benefit to having results that are clear in the first place.

In this article we look specifically at tabular data, though we'll look at other modalities in later articles. There are a number of algorithms for outlier detection on tabular data commonly used today, including Isolation Forests, Local Outlier Factor (LOF), KNNs, One-Class SVMs, and quite a number of others. These often work very well, but unfortunately most don't provide explanations for the outliers found.

Most outlier detection methods are straightforward to understand at an algorithm level, but it's nevertheless difficult to determine why some records were scored highly by a detector and others were not. If we process a dataset of financial transactions with, for example, an Isolation Forest, we can see which are the most unusual records, but may be at a loss as to why, especially if the table has many features, if the outliers contain unusual combinations of multiple features, or if the outliers are cases where no features are highly unusual but several features are moderately unusual.

Frequent Patterns Outlier Factor (FPOF)

We've now gone over, at least quickly, outlier detection and interpretability. The remainder of this article is an excerpt from my book Outlier Detection in Python (https://www.manning.com/books/outlier-detection-in-python), which covers FPOF specifically.

FPOF (FP-outlier: Frequent pattern based outlier detection) is one of a small handful of detectors that can provide some level of interpretability for outlier detection, and it deserves to be used more than it is.

It also has the appealing property of being designed to work with categorical, as opposed to numeric, data. Most real-world tabular data is mixed, containing both numeric and categorical columns. But most detectors assume all columns are numeric, requiring all categorical columns to be numerically encoded (using one-hot, ordinal, or another encoding).

Where detectors, such as FPOF, assume the data is categorical, we have the opposite issue: all numeric features must be binned into a categorical format. Either is workable, but where the data is primarily categorical, it's convenient to be able to use detectors such as FPOF.

And when working with outlier detection, there's a benefit to having at our disposal both some numeric detectors and some categorical detectors. As there are, unfortunately, relatively few categorical detectors, FPOF is also useful in this regard, even where interpretability isn't necessary.

The FPOF algorithm

FPOF works by identifying what are called Frequent Item Sets (FISs) in a table. These are either values in a single feature that are very common, or sets of values spanning several columns that frequently appear together. The small toy example below illustrates the idea.
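The following toy table and its values are invented purely for illustration (they are not part of the real-world example used later), but mining its FISs with mlxtend's apriori, one of the tools discussed below, might look like:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori

# A tiny one-hot encoded table: each column flags one (feature, value) pair.
toy_df = pd.DataFrame({
    'colour_red':  [True,  True,  True,  False, True],
    'colour_blue': [False, False, False, True,  False],
    'size_small':  [True,  True,  False, False, True],
    'size_large':  [False, False, True,  True,  False],
})

# Item sets appearing in at least 60% of rows are treated as frequent here.
print(apriori(toy_df, min_support=0.6, use_colnames=True))
# 'colour_red' is a single-value FIS (support 0.8), and
# {'colour_red', 'size_small'} is a multi-column FIS (support 0.6),
# since those two values usually appear together.
```

Real tables have far more columns and rows, but the principle is the same: FPOF measures how much of this frequent structure each row participates in.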
Almost all tables contain a significant collection of FISs. FISs based on single values will occur so long as some values in a column are significantly more common than others, which is almost always the case. And FISs based on multiple columns will occur so long as there are associations between the columns: certain values (or ranges of numeric values) tend to be associated with other values (or, again, ranges of numeric values) in other columns.

FPOF is based on the idea that, so long as a dataset has many frequent item sets (which almost all do), most rows will contain multiple frequent item sets, and inlier (normal) records will contain significantly more frequent item sets than outlier rows. We can take advantage of this to identify outliers as rows that contain far fewer, and far less frequent, FISs than most rows.

Example with real-world data

For a real-world example of using FPOF, we look at the SpeedDating set from OpenML (https://www.openml.org/search?type=data&sort=nr_of_likes&status=active&id=40536, licensed under CC BY 4.0 DEED).

Executing FPOF begins with mining the dataset for the FISs. A number of libraries are available in Python to support this. For this example, we use mlxtend (https://rasbt.github.io/mlxtend/), a general-purpose library for machine learning. It provides several algorithms to identify frequent item sets; we use one here called apriori.

We first collect the data from OpenML. Normally we would use all categorical and (binned) numeric features, but for simplicity here we will use only a small number of features.

As indicated, FPOF does require binning the numeric features. Usually we would simply use a small number (perhaps five to twenty) of equal-width bins for each numeric column. The pandas cut() method is convenient for this (a short sketch of that step is shown after the code below). This example is even a little simpler, as we work only with categorical columns.

```python
from mlxtend.frequent_patterns import apriori
import pandas as pd
from sklearn.datasets import fetch_openml
import warnings

warnings.filterwarnings(action='ignore', category=DeprecationWarning)

# Collect the data and keep a small set of categorical features
data = fetch_openml('SpeedDating', version=1, parser='auto')
data_df = pd.DataFrame(data.data, columns=data.feature_names)
data_df = data_df[['d_pref_o_attractive', 'd_pref_o_sincere',
                   'd_pref_o_intelligence', 'd_pref_o_funny',
                   'd_pref_o_ambitious', 'd_pref_o_shared_interests']]

# One-hot encode the features and convert the flags to booleans
data_df = pd.get_dummies(data_df)
for col_name in data_df.columns:
    data_df[col_name] = data_df[col_name].map({0: False, 1: True})

# Mine the frequent item sets
frequent_itemsets = apriori(data_df, min_support=0.3, use_colnames=True)

# Score each row by the total support of the FISs it contains
data_df['FPOF_Score'] = 0
for fis_idx in frequent_itemsets.index:
    fis = frequent_itemsets.loc[fis_idx, 'itemsets']
    support = frequent_itemsets.loc[fis_idx, 'support']
    col_list = list(fis)
    cond = True
    for col_name in col_list:
        cond = cond & (data_df[col_name])
    data_df.loc[data_df[cond].index, 'FPOF_Score'] += support

# Normalize to [0.0, 1.0] and reverse, so higher scores mean stronger outliers
min_score = data_df['FPOF_Score'].min()
max_score = data_df['FPOF_Score'].max()
data_df['FPOF_Score'] = [(max_score - x) / (max_score - min_score)
                         for x in data_df['FPOF_Score']]
```

The apriori algorithm requires all features to be one-hot encoded. For this, we use pandas' get_dummies() method.

We then call the apriori method to determine the frequent item sets.
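As an aside, had the dataset included numeric columns, we would bin them before the one-hot encoding step above. The following is a minimal sketch of that preprocessing, using a made-up numeric column rather than one from the SpeedDating data:

```python
import pandas as pd

# A made-up numeric column; in practice this would be a column of the dataset.
numeric_col = pd.Series([3.2, 7.5, 1.1, 9.8, 5.0, 6.3], name='example_feature')

# Divide the range into a small number of equal-width bins (here, 5). Each
# value is replaced by the interval it falls into, giving a categorical column.
binned = pd.cut(numeric_col, bins=5)

# The binned column can then be one-hot encoded along with the other features.
one_hot = pd.get_dummies(binned, prefix='example_feature')
print(one_hot.head())
```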
When calling apriori, we need to specify the minimum support, which is the minimum fraction of rows in which the FIS appears. We don't want this to be too high, or the records, even the strong inliers, will contain few FISs, making them hard to distinguish from outliers. And we don't want it too low, or the FISs may not be meaningful, and outliers may contain as many FISs as inliers. With a low minimum support, apriori may also generate a very large number of FISs, making execution slower and interpretability lower. In this example, we use 0.3.

It's also possible, and sometimes done, to set restrictions on the size of the FISs, requiring that they cover between some minimum and maximum number of columns, which may help narrow in on the type of outliers you're most interested in.

The frequent item sets are then returned in a pandas dataframe with columns for the support and the list of column values (in the form of the one-hot encoded columns, which indicate both the original column and value).

To interpret the results, we can first view the frequent_itemsets, shown next. To include the length of each FIS we add:

```python
frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))
```

There are 24 FISs found, the longest covering three features. The following table shows the first ten rows, sorted by support.

We then loop through each frequent item set and increment the score for each row that contains it by its support. This can optionally be adjusted to favor frequent item sets of greater length (with the idea that an FIS with a support of, say, 0.4 that covers 5 columns is, everything else being equal, more relevant than an FIS with a support of 0.4 that covers, say, 2 columns), but here we simply use the number and support of the FISs in each row.

This actually produces a score for normality, not outlierness, so when we normalize the scores to be between 0.0 and 1.0, we reverse the order. The rows with the highest scores are then the strongest outliers: the rows with the fewest and the least common frequent item sets.

Adding the score column to the original dataframe and sorting by the score, we see the most normal row:

We can see the values for this row match the FISs well. The value for d_pref_o_attractive is [21–100], which is an FIS (with support 0.36); the values for d_pref_o_ambitious and d_pref_o_shared_interests are [0–15] and [0–15], which is also an FIS (support 0.59). The other values also tend to match FISs.

The most unusual row is shown next. This matches none of the identified FISs.

As the frequent item sets themselves are quite intelligible, this method has the advantage of producing quite interpretable results, though this is less true where many frequent item sets are used.

The interpretability can be reduced, as outliers are identified not by the FISs they contain, but by the FISs they do not, which means explaining a row's score amounts to listing all the FISs it is missing. However, it isn't strictly necessary to list every missing FIS to explain each outlier; listing a small set of the most common missing FISs will be sufficient to explain outliers to a reasonable level for most purposes.
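As a rough sketch of how such an explanation could be generated (this helper is my own addition, not part of the FPOF algorithm itself; it builds on the data_df and frequent_itemsets variables created above):

```python
def most_common_missing_fis(row, frequent_itemsets, top_n=5):
    """Return the highest-support FISs that the given row does not contain."""
    missing = []
    # Check the FISs from most to least common
    for _, fis_row in frequent_itemsets.sort_values(
            'support', ascending=False).iterrows():
        # The row contains an FIS only if every one-hot column in the
        # item set is True for that row
        if not all(row[col] for col in fis_row['itemsets']):
            missing.append((set(fis_row['itemsets']), fis_row['support']))
        if len(missing) >= top_n:
            break
    return missing

# For example, explain the row with the highest (most anomalous) FPOF score
top_outlier = data_df['FPOF_Score'].idxmax()
for itemset, support in most_common_missing_fis(data_df.loc[top_outlier],
                                                frequent_itemsets):
    print(f"Missing FIS (support {support:.2f}): {sorted(itemset)}")
```

Presenting these missing item sets alongside the row's own values gives a reasonably concrete picture of why the row was scored as it was.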
Statistics about the FISs that are present in a row, and about the typical numbers and frequencies of FISs present in rows generally, provide good context for comparison.

One variation on this method uses the infrequent, as opposed to frequent, item sets, scoring each row by the number and rarity of the infrequent item sets it contains. This can produce useful results as well, but it is significantly more computationally expensive, as many more item sets need to be mined, and each row is tested against many FISs. The final scores can be more interpretable, though, as they are based on the item sets found, not missing, in each row.

Conclusions

Other than the code here, I'm not aware of an implementation of FPOF in Python, though there are some in R. The bulk of the work with FPOF is in mining the FISs, and there are a number of Python tools for this, including the mlxtend library used here. The remaining code for FPOF, as seen above, is fairly simple.

Given the importance of interpretability in outlier detection, FPOF can very often be worth trying.

In future articles, we'll go over some other interpretable methods for outlier detection as well.

All images are by author.

https://towardsdatascience.com/interpretable-outlier-detection-frequent-patterns-outlier-factor-fpof-0d9cbf51b17a
