Machine Learning Model Updates: OTA Deployment
Whistl's ML models improve continuously through over-the-air (OTA) updates. This technical guide explains model versioning, staged rollouts, A/B testing, rollback strategies, and how Whistl delivers model improvements without requiring app store updates.
Why OTA Model Updates?
Traditional app store updates have limitations for ML models:
- App review delay: approval typically takes 24-48 hours
- User adoption: often only around 60% of users update within the first week
- Iteration speed: weekly release cycles are too slow for ML iteration
- A/B testing: store releases can't target model variants at subsets of users
OTA updates enable daily model improvements with instant deployment.
Model Update Architecture
Whistl's OTA system delivers models securely:
Update Flow
┌─────────────┐    ┌──────────────┐    ┌─────────────┐    ┌─────────────┐
│     ML      │    │    Model     │    │     CDN     │    │   Client    │
│  Training   │───▶│   Registry   │───▶│    (S3 +    │───▶│     App     │
│  Pipeline   │    │ (Versioned)  │    │ CloudFront) │    │             │
└─────────────┘    └──────┬───────┘    └─────────────┘    └──────┬──────┘
                          │                                      │
                    ┌─────▼─────┐                         ┌──────▼───────┐
                    │   Model   │                         │    Model     │
                    │  Metadata │                         │   Manager    │
                    │ - Version │                         │  - Check     │
                    │ - Hash    │                         │  - Download  │
                    │ - Stats   │                         │  - Validate  │
                    └───────────┘                         │  - Activate  │
                                                          └──────────────┘
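The Model Manager's side of this flow is a check → download → validate → activate loop. A platform-neutral sketch of that loop follows; `ModelManager`, `fetch_manifest`, and `fetch_file` are illustrative names, not Whistl's actual API:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


class ModelManager:
    """Illustrative sketch of the check -> download -> validate -> activate loop."""

    def __init__(self, fetch_manifest, fetch_file, active_version=None):
        self.fetch_manifest = fetch_manifest  # () -> manifest dict
        self.fetch_file = fetch_file          # url -> bytes
        self.active_version = active_version

    def update(self, platform: str) -> str:
        manifest = self.fetch_manifest()
        if manifest["version"] == self.active_version:
            return self.active_version        # already current, nothing to do
        entry = next(f for f in manifest["files"] if f["platform"] == platform)
        blob = self.fetch_file(entry["url"])
        # Reject the download unless its hash matches the manifest
        if sha256_hex(blob) != entry["sha256"]:
            raise ValueError("hash mismatch")
        self.active_version = manifest["version"]
        return self.active_version
```

In a real client the download and activation steps would be asynchronous, as in the Swift and Kotlin implementations below; the sketch only shows the control flow.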
Model Versioning
Each model version is uniquely identified:
Version Format
{
  "model_id": "impulse_predictor",
  "version": "2026.03.05.1",
  "version_parts": {
    "year": 2026,
    "month": 3,
    "day": 5,
    "iteration": 1
  },
  "semantic_version": "2.4.1",
  "build_hash": "a1b2c3d4e5f6",
  "training_date": "2026-03-04T23:00:00Z",
  "training_dataset": "whistl_2026_q1_v3",
  "framework": "pytorch_2.1",
  "architecture": "feedforward_56_128_64_32_1"
}
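Date-based versions like this compare naturally as integer tuples: year, then month, day, and iteration. A minimal sketch (function names are illustrative):

```python
def parse_version(v: str):
    """Split a date-based version like '2026.03.05.1' into comparable integers."""
    year, month, day, iteration = (int(p) for p in v.split("."))
    return (year, month, day, iteration)


def is_newer(candidate: str, current: str) -> bool:
    # Tuple comparison orders by year, then month, then day, then iteration
    return parse_version(candidate) > parse_version(current)
```

Comparing parsed tuples rather than raw strings avoids subtle bugs if a zero-padded component is ever written without padding.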
Model Manifest
{
  "model_id": "impulse_predictor",
  "version": "2026.03.05.1",
  "files": [
    {
      "name": "model.mlpackage",
      "url": "https://cdn.whistl.app/models/impulse_predictor/2026.03.05.1/model.mlpackage",
      "size": 458752,
      "sha256": "abc123...",
      "platform": "ios"
    },
    {
      "name": "model.tflite",
      "url": "https://cdn.whistl.app/models/impulse_predictor/2026.03.05.1/model.tflite",
      "size": 450560,
      "sha256": "def456...",
      "platform": "android"
    }
  ],
  "metadata": {
    "accuracy": 0.842,
    "precision": 0.864,
    "recall": 0.839,
    "f1_score": 0.851,
    "training_samples": 2347891,
    "validation_samples": 234789
  },
  "compatibility": {
    "min_ios_version": "15.0",
    "min_android_api": 26,
    "min_app_version": "1.8.0"
  },
  "changelog": [
    "Improved payday proximity detection (+3% accuracy)",
    "Enhanced HRV feature weighting (+2% recall)",
    "Reduced false positives for shopping (-15% FPR)"
  ],
  "rollout": {
    "percentage": 10,
    "regions": ["AU", "NZ"],
    "user_segments": ["beta_testers"]
  }
}
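A client consuming this manifest picks the file entry for its platform and enforces the compatibility block before downloading anything. A minimal sketch, assuming only the manifest fields shown above (`select_file` and `semver` are illustrative names):

```python
def semver(v: str):
    """Turn a dotted version string like '1.8.0' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))


def select_file(manifest: dict, platform: str, app_version: str):
    """Pick the platform-specific file, if this client meets min_app_version."""
    compat = manifest["compatibility"]
    if semver(app_version) < semver(compat["min_app_version"]):
        return None  # client too old for this model; keep the current one
    return next((f for f in manifest["files"] if f["platform"] == platform), None)
```

Returning `None` rather than raising keeps the update check a silent no-op on older app versions, which is the behavior you want in a background task.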
Model Download
Models are downloaded securely in the background:
iOS Implementation
import Foundation
import CommonCrypto

class ModelDownloader {
    private let session: URLSession

    func downloadModel(manifest: ModelManifest) async throws -> URL {
        // Check if already downloaded
        if let existingPath = getExistingModelPath(manifest.version) {
            return existingPath
        }
        // Get the file for this platform; fail cleanly if the manifest omits it
        guard let file = manifest.files.first(where: { $0.platform == "ios" }) else {
            throw ModelError.missingPlatformFile
        }
        // Download to a temporary location
        let (tempURL, _) = try await session.download(from: file.url)
        // Verify hash before trusting the download
        let hash = try calculateSHA256(tempURL)
        guard hash == file.sha256 else {
            throw ModelError.hashMismatch
        }
        // Move to models directory
        let destination = getModelDirectory().appendingPathComponent(manifest.version)
        try FileManager.default.moveItem(at: tempURL, to: destination)
        return destination
    }

    private func calculateSHA256(_ url: URL) throws -> String {
        let data = try Data(contentsOf: url)
        var hash = [UInt8](repeating: 0, count: Int(CC_SHA256_DIGEST_LENGTH))
        data.withUnsafeBytes { bytes in
            _ = CC_SHA256(bytes.baseAddress, CC_LONG(data.count), &hash)
        }
        return hash.map { String(format: "%02x", $0) }.joined()
    }
}
Android Implementation
import android.content.Context
import java.io.File
import java.security.MessageDigest
import okhttp3.Request

class ModelDownloader(private val context: Context) {

    suspend fun downloadModel(manifest: ModelManifest): File {
        // Check if already downloaded
        getExistingModelPath(manifest.version)?.let { return it }
        // Get the file for this platform; firstOrNull avoids a crash on a bad manifest
        val file = manifest.files.firstOrNull { it.platform == "android" }
            ?: error("No Android model file in manifest")
        // Download with OkHttp (await() is a coroutine adapter over enqueue)
        val request = Request.Builder().url(file.url).build()
        val response = httpClient.newCall(request).await()
        // Save to temp file
        val tempFile = File(context.cacheDir, "model_temp")
        checkNotNull(response.body).byteStream().use { input ->
            tempFile.outputStream().use { output ->
                input.copyTo(output)
            }
        }
        // Verify hash before trusting the download
        val hash = calculateSHA256(tempFile)
        require(hash == file.sha256) { "Hash mismatch" }
        // Move to models directory
        val destination = File(getModelDirectory(), manifest.version)
        check(tempFile.renameTo(destination)) { "Failed to move model into place" }
        return destination
    }

    private fun calculateSHA256(file: File): String {
        val md = MessageDigest.getInstance("SHA-256")
        file.inputStream().use { input ->
            val buffer = ByteArray(8192)
            var read: Int
            while (input.read(buffer).also { read = it } != -1) {
                md.update(buffer, 0, read)
            }
        }
        return md.digest().joinToString("") { "%02x".format(it) }
    }
}
Model Validation
Downloaded models are validated before activation:
Validation Checks
- Hash verification: SHA-256 matches manifest
- Signature verification: Signed with Whistl private key
- Format validation: Valid Core ML / TFLite format
- Input/output check: Expected tensor shapes
- Smoke test: Run test inference
Validation Implementation
import CoreML

class ModelValidator {
    func validate(modelPath: URL, manifest: ModelManifest) throws {
        // Verify signature before loading anything
        try verifySignature(modelPath, manifest.signature)
        // Load model
        let model = try MLModel(contentsOf: modelPath)
        // Check input description
        let inputDescription = model.modelDescription.inputDescriptionsByName
        guard inputDescription["features"] != nil else {
            throw ValidationError.invalidInput
        }
        // Check output description
        let outputDescription = model.modelDescription.outputDescriptionsByName
        guard outputDescription["probability"] != nil else {
            throw ValidationError.invalidOutput
        }
        // Run smoke test with an all-zero feature vector
        let testInput = try MLMultiArray(shape: [56], dataType: .float32)
        for i in 0..<testInput.count {
            testInput[i] = 0
        }
        let output = try model.prediction(features: testInput)
        // Verify output is a valid probability
        guard output.probability >= 0 && output.probability <= 1 else {
            throw ValidationError.invalidOutput
        }
    }

    private func verifySignature(_ modelPath: URL, _ signature: String) throws {
        // Verify the model was signed with Whistl's release key
        let publicKey = getWhistlPublicKey()
        let modelData = try Data(contentsOf: modelPath)
        let verifier = CCVerifier()
        guard verifier.verify(modelData, signature: signature, publicKey: publicKey) else {
            throw ValidationError.invalidSignature
        }
    }
}
Staged Rollouts
Models are rolled out gradually to catch issues early:
Rollout Stages
| Stage | Percentage | Duration | Criteria to Advance |
|---|---|---|---|
| Internal | 0.1% | 24 hours | No crashes, metrics stable |
| Beta | 1% | 48 hours | Accuracy within 1% of target |
| Early Access | 10% | 72 hours | No regression in key metrics |
| General | 50% | 7 days | Positive user feedback |
| Full | 100% | — | All checks passed |
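The advancement logic implied by this table can be expressed as a small lookup. The stage names, percentages, and durations below mirror the table; `next_stage` is an illustrative name, not a documented API:

```python
# (name, rollout percentage, minimum hours before advancing; None = terminal stage)
STAGES = [
    ("internal", 0.1, 24),
    ("beta", 1, 48),
    ("early_access", 10, 72),
    ("general", 50, 24 * 7),
    ("full", 100, None),
]


def next_stage(current: str, hours_elapsed: float, checks_passed: bool):
    """Advance to the next rollout stage once duration and criteria are both met."""
    names = [s[0] for s in STAGES]
    i = names.index(current)
    _, _, duration = STAGES[i]
    if duration is None or hours_elapsed < duration or not checks_passed:
        return STAGES[i]  # hold at the current stage
    return STAGES[i + 1]
```

Holding the stage on a failed check (rather than advancing on a timer alone) is what gives the staged rollout its safety property: a regression caught at 1% never reaches 10%.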
Rollout Configuration
class RolloutManager {
    func shouldReceiveUpdate(user: User, manifest: ModelManifest) -> Bool {
        let rollout = manifest.rollout
        // Check region
        if !rollout.regions.contains(user.region) {
            return false
        }
        // Check user segment
        if !rollout.userSegments.isEmpty && !rollout.userSegments.contains(user.segment) {
            return false
        }
        // Deterministic percentage bucketing: the same user and version
        // always land in the same bucket
        let userHash = hash(user.id + manifest.version)
        let userBucket = Int(userHash % 100)
        return userBucket < rollout.percentage
    }

    private func hash(_ string: String) -> UInt {
        // djb2 hash: stable across launches, unlike Swift's Hashable
        var hash: UInt = 5381
        for char in string.utf8 {
            hash = ((hash << 5) &+ hash) &+ UInt(char)
        }
        return hash
    }
}
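The same djb2 bucketing can be written platform-neutrally, which makes its key properties easy to check: assignment is deterministic for a given user and version, and hashing the version into the key reshuffles buckets between releases. An illustrative sketch:

```python
def djb2(s: str) -> int:
    """Same djb2 hash as the Swift sketch above, masked to 64 bits."""
    h = 5381
    for b in s.encode("utf-8"):
        h = ((h << 5) + h + b) & 0xFFFFFFFFFFFFFFFF
    return h


def in_rollout(user_id: str, version: str, percentage: float) -> bool:
    # Hashing id + version reshuffles buckets per release, so the same
    # users are not always the first to receive every update
    return djb2(user_id + version) % 100 < percentage
```

Using a fixed hash rather than the language's built-in hashing matters here: built-in hashes are often randomized per process, which would re-bucket users on every app launch.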
A/B Testing
Multiple model versions can be tested simultaneously:
A/B Test Configuration
struct ABTest {
    let id: String
    let name: String
    let variants: [ModelVariant]
    let allocation: [String: Double]  // variant_id -> percentage
    let primaryMetric: String
    let guardrailMetrics: [String]
    let minSampleSize: Int
    let duration: TimeInterval
}

struct ModelVariant {
    let id: String
    let modelVersion: String
    let description: String
}

// Example A/B test
let abTest = ABTest(
    id: "impulse_predictor_v2",
    name: "Impulse Predictor v2 Evaluation",
    variants: [
        ModelVariant(id: "control", modelVersion: "2026.02.01.1", description: "Current model"),
        ModelVariant(id: "treatment", modelVersion: "2026.03.05.1", description: "New model with HRV improvements")
    ],
    allocation: ["control": 0.5, "treatment": 0.5],
    primaryMetric: "prediction_accuracy",
    guardrailMetrics: ["false_positive_rate", "inference_latency"],
    minSampleSize: 10000,
    duration: 7 * 24 * 3600  // 7 days
)
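Calling a winner on the primary metric usually comes down to a significance test. Below is a minimal two-proportion z-test sketch for a binary metric such as prediction accuracy; this is the standard statistical test, not Whistl's documented analysis pipeline:

```python
from math import sqrt


def z_test_two_proportions(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Z statistic comparing two proportions (e.g. accuracy in control vs treatment)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    # Standard error under the null hypothesis that both rates are equal
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

With 10,000 samples per arm (the `minSampleSize` above), |z| > 1.96 corresponds to significance at the 5% level for a two-sided test; guardrail metrics would be checked the same way but only for regressions.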
Rollback Strategies
Problematic models are rolled back automatically:
Automatic Rollback Triggers
- Crash rate spike: >2x normal crash rate
- Accuracy drop: >5% decrease in prediction accuracy
- Latency increase: >50% increase in inference time
- Battery impact: >20% increase in battery usage
- User reports: Spike in support tickets
Rollback Implementation
class ModelRollbackManager {
    private let metricsClient: MetricsClient
    private let modelManager: ModelManager
    private let alertService: AlertService
    private let baseline: ModelBaseline  // crash rate, accuracy, latency at rollout start
    private var currentModelVersion: String

    func checkRollbackConditions() async {
        let crashRate = await metricsClient.getCrashRate()
        let accuracy = await metricsClient.getModelAccuracy()
        let latency = await metricsClient.getInferenceLatency()

        var shouldRollback = false
        var reason = ""
        if crashRate > baseline.crashRate * 2 {
            shouldRollback = true
            reason = "Crash rate spike"
        }
        if accuracy < baseline.accuracy - 0.05 {
            shouldRollback = true
            reason = "Accuracy regression"
        }
        if latency > baseline.latency * 1.5 {
            shouldRollback = true
            reason = "Latency increase"
        }
        if shouldRollback {
            await performRollback(reason: reason)
        }
    }

    private func performRollback(reason: String) async {
        // Find the most recent version that passed full rollout
        let previousVersion = await getPreviousStableVersion()
        // Activate previous version
        try? await modelManager.activateModel(version: previousVersion)
        // Alert team
        await alertService.sendAlert(
            "Model rolled back: \(reason). Reverted to \(previousVersion)"
        )
    }
}
Model Performance Monitoring
Model performance is continuously monitored:
Tracked Metrics
| Metric | Target | Measurement |
|---|---|---|
| Prediction Accuracy | >84% | Weekly validation |
| Inference Latency | <10ms | Per-prediction |
| Model Size | <500KB | On download |
| Battery Impact | <5%/day | Daily average |
| Memory Usage | <50MB | Peak usage |
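Per-prediction metrics such as inference latency are typically tracked over a rolling window so a single outlier doesn't trip an alarm. An illustrative sketch against the <10ms latency target above (the class and method names are hypothetical):

```python
from collections import deque


class LatencyMonitor:
    """Rolling window over recent inference latencies in milliseconds."""

    def __init__(self, window: int = 1000, target_ms: float = 10.0):
        self.samples = deque(maxlen=window)  # old samples fall off automatically
        self.target_ms = target_ms

    def record(self, ms: float) -> None:
        self.samples.append(ms)

    def p95(self) -> float:
        # Nearest-rank p95 over the current window
        ordered = sorted(self.samples)
        return ordered[int(0.95 * (len(ordered) - 1))]

    def healthy(self) -> bool:
        return self.p95() < self.target_ms
```

Checking the 95th percentile rather than the mean is deliberate: a latency regression usually shows up in the tail long before it moves the average.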
Conclusion
OTA model updates enable Whistl to continuously improve ML models without app store delays. Through staged rollouts, A/B testing, and automatic rollback, new models are deployed safely while maintaining reliability.
Your impulse prediction gets smarter over time—automatically, securely, and without any action required.
Get Continuously Improving AI
Whistl's ML models improve automatically through OTA updates. Download free and experience AI that gets smarter every day.
Download Whistl Free
Related: Neural Networks Explained | On-Device ML | A/B Testing Infrastructure