Anomaly Detection for Fraud Prevention: Protecting Users from Unusual Spending

Anomaly detection algorithms identify spending patterns that deviate from the norm—whether it's potential fraud, unusual impulse behaviour, or emerging financial problems. Discover how Whistl uses Isolation Forests, Autoencoders, and statistical methods to keep users safe.

Why Anomaly Detection Matters in Financial Apps

Not all risky spending looks the same. Some users exhibit gradual increases in spending, while others have sudden, dramatic spikes. Anomaly detection identifies these unusual patterns regardless of whether they match known risk profiles.

Whistl uses anomaly detection for multiple purposes: flagging potentially fraudulent transactions, surfacing unusual impulse spending before it compounds, and spotting early signs of emerging financial problems.

Statistical Methods for Anomaly Detection

The simplest anomaly detection methods use statistical thresholds:

Z-Score Method

import numpy as np
from scipy import stats

def detect_anomalies_zscore(transactions, threshold=3.0):
    """
    Detect anomalies using Z-score method.
    Transactions with |Z-score| > threshold are anomalies.
    """
    amounts = [t.amount for t in transactions]
    mean = np.mean(amounts)
    std = np.std(amounts)
    
    # Guard against zero variance (all amounts identical)
    if std == 0:
        return []
    
    anomalies = []
    for tx in transactions:
        z_score = abs((tx.amount - mean) / std)
        if z_score > threshold:
            anomalies.append({
                'transaction': tx,
                'z_score': z_score,
                'severity': 'high' if z_score > 4 else 'medium'
            })
    
    return anomalies

# Example: Transaction 5 standard deviations above mean
# z_score = 5.0 → flagged as anomaly

Modified Z-Score (Robust to Outliers)

def modified_z_score(data):
    """
    Modified Z-score using median and MAD (Median Absolute Deviation).
    More robust to outliers than standard Z-score.
    """
    data = np.asarray(data, dtype=float)  # accept lists as well as arrays
    median = np.median(data)
    mad = np.median(np.abs(data - median))
    
    # Avoid division by zero
    if mad == 0:
        mad = np.mean(np.abs(data - median))
    
    modified_scores = 0.6745 * (data - median) / mad
    return modified_scores

# Threshold: |modified z-score| > 3.5 indicates anomaly
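
A minimal usage sketch of the method above, on illustrative synthetic amounts containing one obvious outlier:

```python
import numpy as np

# Daily spend amounts; the 480 is an obvious outlier (illustrative data)
amounts = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 480.0])

median = np.median(amounts)
mad = np.median(np.abs(amounts - median))
scores = 0.6745 * (amounts - median) / mad

# Flag |modified z-score| > 3.5
flagged = amounts[np.abs(scores) > 3.5]
print(flagged)  # [480.]
```

Note that the median and MAD barely move when the outlier itself is included in the fit, which is exactly why this score is more robust than the mean-based Z-score.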

Interquartile Range (IQR) Method

def detect_anomalies_iqr(data, multiplier=1.5):
    """
    Detect anomalies using IQR method (boxplot approach).
    """
    q1 = np.percentile(data, 25)
    q3 = np.percentile(data, 75)
    iqr = q3 - q1
    
    lower_bound = q1 - multiplier * iqr
    upper_bound = q3 + multiplier * iqr
    
    anomalies = []
    for i, value in enumerate(data):
        if value < lower_bound or value > upper_bound:
            anomalies.append({
                'index': i,
                'value': value,
                'type': 'low' if value < lower_bound else 'high'
            })
    
    return anomalies
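
Applied to a week of grocery spend (illustrative numbers), only the spike falls outside the Tukey fences:

```python
import numpy as np

data = [80, 95, 88, 102, 91, 85, 400]  # illustrative weekly amounts

q1, q3 = np.percentile(data, 25), np.percentile(data, 75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [x for x in data if x < lower or x > upper]
print(outliers)  # [400]
```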

Isolation Forest: Efficient High-Dimensional Anomaly Detection

Isolation Forest is a tree-based algorithm specifically designed for anomaly detection. It works on the principle that anomalies are "few and different"—easier to isolate than normal points.

How Isolation Forest Works

The algorithm builds random trees by:

  1. Randomly selecting a feature
  2. Randomly selecting a split value between min and max of that feature
  3. Repeating until all points are isolated or max depth reached

Anomalies require fewer splits to isolate, resulting in shorter average path lengths.
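This intuition is easy to verify on toy data with scikit-learn (the cluster and the distant point below are synthetic):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
# 200 "normal" 2-D points in a tight cluster, plus one distant point
X = np.vstack([rng.normal(loc=50, scale=5, size=(200, 2)),
               [[500.0, 500.0]]])

model = IsolationForest(n_estimators=100, random_state=42).fit(X)
scores = model.score_samples(X)  # lower = more anomalous

# The distant point is isolated in the fewest splits, so it gets the lowest score
print(int(np.argmin(scores)))  # 200
```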

Isolation Forest Implementation

from sklearn.ensemble import IsolationForest

class SpendingAnomalyDetector:
    def __init__(self, contamination=0.05):
        """
        Initialize Isolation Forest for spending anomaly detection.
        
        Args:
            contamination: Expected proportion of anomalies (0-0.5)
        """
        self.model = IsolationForest(
            n_estimators=100,
            contamination=contamination,
            max_samples='auto',
            random_state=42,
            n_jobs=-1
        )
    
    def fit(self, transactions):
        """
        Fit model on historical transaction data.
        """
        # Extract features
        X = self._extract_features(transactions)
        
        # Fit Isolation Forest
        self.model.fit(X)
        
        return self
    
    def predict(self, new_transactions):
        """
        Predict anomalies in new transactions.
        
        Returns:
            -1 for anomalies, 1 for normal
        """
        X = self._extract_features(new_transactions)
        predictions = self.model.predict(X)
        scores = self.model.score_samples(X)
        
        results = []
        for tx, pred, score in zip(new_transactions, predictions, scores):
            results.append({
                'transaction': tx,
                'is_anomaly': pred == -1,
                'anomaly_score': -score,  # Higher = more anomalous
                'severity': self._classify_severity(-score)
            })
        
        return results
    
    def _extract_features(self, transactions):
        """Extract features for anomaly detection."""
        features = []
        for tx in transactions:
            feature_vector = [
                tx.amount,
                tx.amount / tx.user_avg_amount,  # Relative to user's average
                tx.hour_of_day,
                tx.day_of_week,
                tx.days_since_last_tx,
                tx.merchant_risk_score,
                tx.category_risk_score,
                tx.distance_from_home,
                tx.velocity_24h,  # Spending in last 24 hours
            ]
            features.append(feature_vector)
        return np.array(features)
    
    def _classify_severity(self, score):
        """Classify anomaly severity based on score."""
        if score > 0.7:
            return 'critical'
        elif score > 0.5:
            return 'high'
        elif score > 0.3:
            return 'medium'
        else:
            return 'low'

Autoencoders: Deep Learning for Anomaly Detection

Autoencoders are neural networks that learn to compress and reconstruct data. Anomalies are harder to reconstruct, resulting in higher reconstruction error.

Autoencoder Architecture

import tensorflow as tf
from tensorflow import keras

class SpendingAutoencoder:
    def __init__(self, input_dim):
        self.input_dim = input_dim
        self.model = self._build_model()
    
    def _build_model(self):
        """
        Build autoencoder architecture.
        Encoder compresses input, decoder reconstructs it.
        """
        # Encoder
        encoder_input = keras.Input(shape=(self.input_dim,))
        x = keras.layers.Dense(64, activation='relu')(encoder_input)
        x = keras.layers.Dropout(0.3)(x)
        x = keras.layers.Dense(32, activation='relu')(x)
        x = keras.layers.Dropout(0.3)(x)
        encoded = keras.layers.Dense(16, activation='relu')(x)  # Bottleneck
        
        # Decoder
        x = keras.layers.Dense(32, activation='relu')(encoded)
        x = keras.layers.Dropout(0.3)(x)
        x = keras.layers.Dense(64, activation='relu')(x)
        decoded = keras.layers.Dense(self.input_dim, activation='linear')(x)
        
        autoencoder = keras.Model(encoder_input, decoded)
        autoencoder.compile(
            optimizer=keras.optimizers.Adam(learning_rate=0.001),
            loss='mse'
        )
        
        return autoencoder
    
    def fit(self, X_train, epochs=50, batch_size=32):
        """Train autoencoder on normal transactions only."""
        self.model.fit(
            X_train, X_train,  # Target is same as input (reconstruction)
            epochs=epochs,
            batch_size=batch_size,
            validation_split=0.1,
            callbacks=[
                keras.callbacks.EarlyStopping(
                    monitor='val_loss',
                    patience=10,
                    restore_best_weights=True
                )
            ]
        )
    
    def detect_anomalies(self, X, threshold=None):
        """
        Detect anomalies based on reconstruction error.
        """
        # Calculate reconstruction error
        reconstructions = self.model.predict(X)
        mse = np.mean(np.square(X - reconstructions), axis=1)
        
        # Determine threshold if not provided
        if threshold is None:
            threshold = np.percentile(mse, 95)  # Top 5% as anomalies
        
        # Classify anomalies
        anomalies = mse > threshold
        
        return {
            'is_anomaly': anomalies,
            'reconstruction_error': mse,
            'threshold': threshold
        }

Variational Autoencoders (VAE)

VAEs add probabilistic modelling to autoencoders, providing uncertainty estimates along with anomaly scores:

class VariationalAutoencoder:
    def __init__(self, input_dim, latent_dim=8):
        self.input_dim = input_dim
        self.latent_dim = latent_dim
        self.encoder, self.decoder, self.vae = self._build_vae()
    
    def _build_vae(self):
        """Build Variational Autoencoder."""
        # Encoder outputs mean and variance
        encoder_input = keras.Input(shape=(self.input_dim,))
        x = keras.layers.Dense(64, activation='relu')(encoder_input)
        x = keras.layers.Dense(32, activation='relu')(x)
        
        mu = keras.layers.Dense(self.latent_dim)(x)
        log_var = keras.layers.Dense(self.latent_dim)(x)
        
        # Reparameterization trick
        def sampling(args):
            mu, log_var = args
            epsilon = keras.backend.random_normal(shape=keras.backend.shape(mu))
            return mu + keras.backend.exp(log_var / 2) * epsilon
        
        latent = keras.layers.Lambda(sampling)([mu, log_var])
        
        # Decoder
        decoder_input = keras.Input(shape=(self.latent_dim,))
        x = keras.layers.Dense(32, activation='relu')(decoder_input)
        x = keras.layers.Dense(64, activation='relu')(x)
        decoder_output = keras.layers.Dense(self.input_dim)(x)
        
        # Build models
        encoder = keras.Model(encoder_input, [mu, log_var, latent])
        decoder = keras.Model(decoder_input, decoder_output)
        vae_output = decoder(encoder(encoder_input)[2])
        vae = keras.Model(encoder_input, vae_output)
        
        return encoder, decoder, vae
    
    def vae_loss(self, x, x_decoded_mean):
        """VAE loss = reconstruction loss + KL divergence."""
        recon_loss = keras.losses.mse(x, x_decoded_mean)
        
        # Get mu and log_var from encoder
        mu, log_var, _ = self.encoder(x)
        
        # KL divergence
        kl_loss = -0.5 * keras.backend.mean(
            1 + log_var - keras.backend.square(mu) - keras.backend.exp(log_var),
            axis=-1
        )
        
        return keras.backend.mean(recon_loss + kl_loss)
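
Written out, this is the standard VAE objective: for encoder outputs μ and log σ², the KL term against a standard normal prior has the closed form the code uses (the code averages rather than sums over the latent dimensions):

```latex
\mathcal{L} =
\underbrace{\mathbb{E}\big[\lVert x - \hat{x}\rVert^2\big]}_{\text{reconstruction (MSE)}}
\; \underbrace{-\,\tfrac{1}{2}\sum_{j=1}^{d}\big(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\big)}_{\mathrm{KL}\left(q(z \mid x)\,\Vert\,\mathcal{N}(0, I)\right)}
```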

One-Class SVM for Novelty Detection

One-Class SVM learns a decision boundary around normal data points. Anything outside this boundary is considered anomalous.

from sklearn.svm import OneClassSVM

# One-Class SVM for novelty detection
ocsvm = OneClassSVM(
    kernel='rbf',
    gamma='auto',
    nu=0.05,  # Expected proportion of anomalies
    shrinking=True
)

# Train on normal transactions only
ocsvm.fit(X_normal)

# Predict on new transactions
predictions = ocsvm.predict(X_new)  # 1 = normal, -1 = anomaly
scores = ocsvm.score_samples(X_new)  # Lower = more anomalous
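
A minimal end-to-end sketch on synthetic data (the two features and the test transactions are illustrative, not Whistl's actual feature set):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
# Normal behaviour: amounts around $50, purchases around midday (synthetic)
X_normal = rng.normal(loc=[50, 12], scale=[5, 2], size=(300, 2))

ocsvm = OneClassSVM(kernel='rbf', gamma='scale', nu=0.05).fit(X_normal)

X_new = np.array([[52.0, 13.0],    # typical purchase
                  [900.0, 3.0]])   # large amount at 3 a.m.
preds = ocsvm.predict(X_new)  # expect the distant transaction to be flagged as -1
print(preds)
```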

Combining Multiple Detection Methods

Whistl uses an ensemble of anomaly detection methods for robust detection:

class EnsembleAnomalyDetector:
    def __init__(self):
        self.isolation_forest = IsolationForest(contamination=0.05)
        self.autoencoder = SpendingAutoencoder(input_dim=10)
        self.ocsvm = OneClassSVM(nu=0.05)
        self.weights = [0.4, 0.4, 0.2]  # Ensemble weights
    
    def fit(self, X_normal):
        """Train all detectors on normal data."""
        self.isolation_forest.fit(X_normal)
        self.autoencoder.fit(X_normal)
        self.ocsvm.fit(X_normal)
    
    def predict(self, X):
        """Combine predictions from all detectors."""
        # Get scores from each detector
        if_scores = -self.isolation_forest.score_samples(X)
        ae_scores = self.autoencoder.detect_anomalies(X)['reconstruction_error']
        svm_scores = -self.ocsvm.score_samples(X)
        
        # Normalize scores to [0, 1]
        if_scores = (if_scores - if_scores.min()) / (if_scores.max() - if_scores.min())
        ae_scores = (ae_scores - ae_scores.min()) / (ae_scores.max() - ae_scores.min())
        svm_scores = (svm_scores - svm_scores.min()) / (svm_scores.max() - svm_scores.min())
        
        # Weighted ensemble
        ensemble_score = (
            self.weights[0] * if_scores +
            self.weights[1] * ae_scores +
            self.weights[2] * svm_scores
        )
        
        # Threshold for anomaly classification
        threshold = np.percentile(ensemble_score, 95)
        anomalies = ensemble_score > threshold
        
        return {
            'ensemble_score': ensemble_score,
            'is_anomaly': anomalies,
            'threshold': threshold,
            'individual_scores': {
                'isolation_forest': if_scores,
                'autoencoder': ae_scores,
                'ocsvm': svm_scores
            }
        }

Real-World Anomaly Patterns in Spending

Whistl has identified several common anomaly patterns:

| Pattern | Description | Detection Method | Action |
| --- | --- | --- | --- |
| Amount Spike | Transaction amount 5x+ user's average | Z-score, IQR | Immediate alert |
| Velocity Surge | Unusual number of transactions in a short time | Isolation Forest | Cooling-off prompt |
| Category Deviation | Spending in a never-used category | Autoencoder | Confirmation request |
| Location Anomaly | Transaction in an unusual geographic area | Distance-based | Fraud alert |
| Time Pattern Break | Spending at an unusual hour | Time-series anomaly | Gentle reminder |
| Merchant Risk | High-risk merchant for this user | Rule-based + ML | Strong intervention |
"Whistl caught something my bank didn't—a series of small transactions at a gambling site. The anomaly detection flagged the pattern even though each transaction was below typical alert thresholds. That early catch probably saved me from a serious problem."
— Mark D., Whistl user since 2025

Balancing Sensitivity and False Alarms

Anomaly detection faces a fundamental trade-off: set thresholds too low and users drown in false alarms until they start ignoring alerts entirely; set them too high and genuine fraud or harmful patterns slip through undetected.
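
The effect can be made concrete by sweeping the alert threshold over scored transactions (the scores below are synthetic, for illustration only):

```python
import numpy as np

rng = np.random.RandomState(7)
# Synthetic anomaly scores: 950 normal transactions, 50 true anomalies
scores = np.concatenate([rng.normal(0.2, 0.1, 950), rng.normal(0.7, 0.15, 50)])
labels = np.concatenate([np.zeros(950), np.ones(50)])

for threshold in (0.3, 0.5, 0.7):
    flagged = scores > threshold
    precision = labels[flagged].mean() if flagged.any() else 0.0
    recall = flagged[labels == 1].mean()
    print(f"threshold={threshold}: precision={precision:.2f}, recall={recall:.2f}")
```

Raising the threshold trades recall (missed anomalies) for precision (fewer false alarms); no single setting eliminates both error types.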

Whistl addresses this by combining multiple detectors into a weighted ensemble, classifying each anomaly by severity, and matching the intervention to the pattern, from a gentle reminder for a broken time pattern to an immediate alert for an amount spike.

Getting Started with Whistl

Protect yourself from unusual spending patterns with Whistl's advanced anomaly detection. Our multi-method approach catches both obvious anomalies and subtle pattern changes that might indicate emerging problems.

Advanced Anomaly Detection for Your Finances

Join thousands of Australians using Whistl's anomaly detection system to catch unusual spending before it becomes a problem.

Crisis Support Resources

If you're experiencing severe financial distress or gambling-related harm, professional support is available.
