
Running A/B Experiments

AppDNA experiments let you test different experiences with your users and measure which variant performs best. Experiments are created in the dashboard and assigned on-device using a deterministic hash — no server round-trip required.

How Experiments Work

  1. You create an experiment in the dashboard, defining variants (e.g., “control” and “variant_a”) and traffic allocation (e.g., 50/50).
  2. The experiment definition is included in the config bundle that the SDK downloads.
  3. When your code calls getVariant(), the SDK computes the assignment locally using a deterministic hash.
  4. The same user always gets the same variant — across sessions, across platforms, and even offline.

Assignment Algorithm

The SDK assigns users to variants using MurmurHash3, a fast, non-cryptographic hash function:
bucket = MurmurHash3(userId + experimentId + salt) % 100
The inputs to the hash are:
  • userId: The identified user ID, or the anonymous ID if the user has not been identified.
  • experimentId: Unique identifier for the experiment (set when creating the experiment).
  • salt: Random string generated when the experiment is created (prevents correlated assignments across experiments).
The resulting bucket is a number from 0 to 99. This bucket maps to a variant based on the traffic allocation configured in the dashboard:
Experiment: "paywall-test"
  control:   buckets 0-49   (50%)
  variant_a: buckets 50-99  (50%)

User "user-123" → MurmurHash3("user-123" + "paywall-test" + "x7k9") % 100 → 73 → variant_a
Because the assignment is deterministic, the same user always gets the same variant for the same experiment. No randomness, no server dependency, and it works fully offline once the experiment definition is cached.
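
To make the mapping concrete, here is a minimal Swift sketch of the bucketing step. It is an illustration only: the stand-in hash is FNV-1a rather than the MurmurHash3 the SDK actually uses, and assignVariant is a hypothetical helper, but the modulo-100 bucket and the allocation mapping mirror the algorithm described above.

// Stand-in hash for illustration only; the real SDK uses MurmurHash3.
func illustrativeHash(_ s: String) -> UInt32 {
    var h: UInt32 = 2166136261            // FNV-1a offset basis
    for byte in s.utf8 {
        h ^= UInt32(byte)
        h = h &* 16777619                 // FNV-1a prime (wrapping multiply)
    }
    return h
}

// Hypothetical helper: map a bucket (0-99) to a variant using the
// traffic allocation configured in the dashboard.
func assignVariant(userId: String, experimentId: String, salt: String,
                   allocation: [(variant: String, percent: UInt32)]) -> String? {
    let bucket = illustrativeHash(userId + experimentId + salt) % 100
    var upperBound: UInt32 = 0
    for (variant, percent) in allocation {
        upperBound += percent
        if bucket < upperBound { return variant }
    }
    return nil  // allocation covers less than 100%: user is not in the experiment
}

// 50/50 split, as in the "paywall-test" example above
let assigned = assignVariant(userId: "user-123",
                             experimentId: "paywall-test",
                             salt: "x7k9",
                             allocation: [(variant: "control", percent: 50),
                                          (variant: "variant_a", percent: 50)])
print(assigned ?? "not in experiment")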

SDK Usage

// Get the assigned variant
let variant = AppDNA.experiments.getVariant("paywall-test")

switch variant {
case "control"?:
    showStandardPaywall()
case "variant_a"?:
    showNewPaywall()
default:
    // nil (experiment not found or archived) or an unknown variant:
    // fall back to the control experience
    showStandardPaywall()
}

// Manually track exposure (optional -- automatically tracked on first getVariant per session)
AppDNA.experiments.trackExposure("paywall-test")

Exposure Tracking

Exposure tracking records that a user was shown a particular experiment variant. This is critical for accurate experiment analysis — you only want to measure outcomes for users who actually saw the variant.

Automatic Tracking

By default, the SDK automatically tracks an exposure event once per session the first time getVariant() is called for a given experiment. This means:
  • Calling getVariant("paywall-test") multiple times in the same session only records one exposure.
  • A new exposure is recorded when the user starts a new session.
  • You do not need to call trackExposure() manually unless you want to control exactly when the exposure is recorded.
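
The deduplication can be illustrated with the experiment key from the earlier examples; a minimal sketch of the behavior described above:

// First call in this session: assigns the variant and records one exposure.
let variant = AppDNA.experiments.getVariant("paywall-test")

// Subsequent calls in the same session return the same variant
// without recording another exposure.
let sameVariant = AppDNA.experiments.getVariant("paywall-test")
// variant == sameVariant, and only one exposure event was sent.

// A new exposure is recorded only after the user starts a new session.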

Manual Tracking

In some cases, you may want to defer exposure tracking until the user actually sees the variant UI (not just when the code reads the variant). For this, disable automatic exposure and track manually:
// Get variant without automatic exposure tracking
let variant = AppDNA.experiments.getVariant("paywall-test", trackExposure: false)

// Later, when the UI is actually displayed
func paywallDidAppear() {
    AppDNA.experiments.trackExposure("paywall-test")
}

Experiment Delegate

You can register a delegate to receive callbacks when experiment assignments change. This happens when the config refreshes from the server and experiment definitions have been updated (e.g., traffic allocation changed, experiment stopped).
class MyExperimentHandler: ExperimentDelegate {
    func experimentAssignmentChanged(experimentId: String, newVariant: String?) {
        print("Experiment \(experimentId) now assigned to: \(newVariant ?? "none")")
        // Update UI if needed
    }
}

// Keep a strong reference to the handler; delegate properties are typically weak.
let experimentHandler = MyExperimentHandler()
AppDNA.experiments.delegate = experimentHandler
Assignment changes are rare in practice. They occur only when the experiment definition itself changes on the server (e.g., traffic allocation is adjusted or the experiment is stopped). The delegate is not called on every session start.

Experiment Lifecycle

Experiments progress through a defined lifecycle:
  • Draft: Experiment is being configured. Not visible to SDKs.
  • Running: Experiment is active and included in the config bundle. Users are assigned to variants.
  • Completed: Experiment has been stopped with a declared winner. The winning variant is served to all users.
  • Archived: Experiment has been removed from the config bundle. getVariant() returns nil/null.
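
Because getVariant() returns nil/null for experiments that are not found or have been archived, it can help to route reads through a fallback. A minimal Swift sketch; the resolvedVariant helper is hypothetical, not part of the SDK:

// Hypothetical helper: resolve a variant with a safe fallback to control.
// Covers experiments that are not found or have been archived
// (getVariant() returns nil in both cases).
func resolvedVariant(for experimentId: String, fallback: String = "control") -> String {
    return AppDNA.experiments.getVariant(experimentId) ?? fallback
}

// Completed experiments need no code changes: getVariant() simply
// returns the winning variant for every user.
switch resolvedVariant(for: "paywall-test") {
case "variant_a":
    showNewPaywall()
default:
    showStandardPaywall()
}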

What happens when an experiment is stopped?

When you complete an experiment and declare a winner in the dashboard:
  1. The config bundle is regenerated with the experiment marked as completed.
  2. All users now receive the winning variant from getVariant(), regardless of their original assignment.
  3. Exposure events are no longer tracked for completed experiments.

Statistical Analysis

All statistical analysis is performed server-side. The dashboard displays:
  • Conversion rates per variant with confidence intervals
  • Statistical significance (p-value) based on a frequentist approach
  • Sample size and exposure counts per variant
  • Lift (percentage improvement of treatment over control)
You do not need to implement any analytics logic in your app. The SDK handles exposure tracking and the server handles the math.
AppDNA uses a sequential testing methodology that lets you check results at any time without inflating your false positive rate. You do not need to wait for a predetermined sample size before looking at results.

Best Practices

  1. Always handle the default case. If getVariant() returns nil/null (experiment not found or archived), fall back to the control experience.
  2. Use meaningful experiment keys. Choose descriptive keys like "onboarding-v2" or "paywall-annual-price" rather than generic names like "test-1".
  3. Track custom conversion events. In addition to exposure tracking, use AppDNA.track() to record conversion events (e.g., "purchase_completed") that you configure as goals in the dashboard (see the sketch after this list).
  4. Test in sandbox first. Run experiments in the sandbox environment before deploying to production. Sandbox experiments are completely isolated from production data.
  5. Do not change traffic allocation mid-experiment. Changing the allocation while an experiment is running can bias your results. If you need to adjust, consider stopping the experiment and starting a new one.
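
Conversion goals from best practice 3 are ordinary tracked events. A minimal sketch, assuming "purchase_completed" has been configured as a goal for the experiment in the dashboard; the exact AppDNA.track() signature may differ in your SDK version:

// Call this from your purchase flow once the transaction succeeds.
func purchaseDidComplete() {
    // The event name must match a goal configured in the dashboard; the server
    // attributes the conversion to the user's assigned variant using the
    // exposure events recorded earlier.
    AppDNA.track("purchase_completed")
}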