Running A/B Experiments
AppDNA experiments let you test different experiences with your users and measure which variant performs best. Experiments are created in the dashboard and assigned on-device using a deterministic hash, with no server round-trip required.
How Experiments Work
- You create an experiment in the dashboard, defining variants (e.g., “control” and “variant_a”) and traffic allocation (e.g., 50/50).
- The experiment definition is included in the config bundle that the SDK downloads.
- When your code calls getVariant(), the SDK computes the assignment locally using a deterministic hash.
- The same user always gets the same variant: across sessions, across platforms, and even offline.
Assignment Algorithm
The SDK assigns users to variants using MurmurHash3, a fast, non-cryptographic hash function:

| Input | Source |
|---|---|
| userId | The identified user ID, or the anonymous ID if the user has not been identified |
| experimentId | Unique identifier for the experiment (set when creating the experiment) |
| salt | Random string generated when the experiment is created (prevents correlated assignments across experiments) |
Because the assignment is deterministic, the same user always gets the same variant for the same experiment. No randomness, no server dependency, and it works fully offline once the experiment definition is cached.
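To make the bucketing idea concrete, here is a simplified Swift sketch. It is not the SDK's implementation: the real SDK uses MurmurHash3, while this sketch substitutes FNV-1a as a stand-in hash, and the ExperimentDefinition type, the 10,000-bucket resolution, and the example salt are invented for illustration.

```swift
// Simplified illustration of deterministic variant assignment.
// The real SDK uses MurmurHash3; FNV-1a is used here only as a
// stand-in so the bucketing logic stays short and readable.
struct ExperimentDefinition {
    let experimentId: String
    let salt: String
    // Each entry pairs a variant name with its share of traffic,
    // e.g. [("control", 0.5), ("variant_a", 0.5)].
    let allocation: [(variant: String, share: Double)]
}

func fnv1a(_ input: String) -> UInt64 {
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in input.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return hash
}

func assignVariant(userId: String, experiment: ExperimentDefinition) -> String? {
    // Same inputs always produce the same bucket, so the assignment is
    // stable across sessions, across platforms, and offline.
    let key = "\(userId):\(experiment.experimentId):\(experiment.salt)"
    let bucket = Double(fnv1a(key) % 10_000) / 10_000.0   // value in [0, 1)
    var cumulative = 0.0
    for slice in experiment.allocation {
        cumulative += slice.share
        if bucket < cumulative { return slice.variant }
    }
    return nil   // user falls outside the allocated traffic
}

// Example: a 50/50 split for "paywall-test" with a made-up salt.
let exp = ExperimentDefinition(
    experimentId: "paywall-test",
    salt: "k3x9",
    allocation: [(variant: "control", share: 0.5), (variant: "variant_a", share: 0.5)]
)
print(assignVariant(userId: "user-42", experiment: exp) ?? "unassigned")
```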
SDK Usage
The same experiment API is available on iOS, Android, Flutter, and React Native.
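As a rough sketch of an iOS call site (getVariant() and the variant names come from this page; the AppDNA class-method form, the module name, and the optional String return type are assumptions):

```swift
import AppDNA   // assumed module name

func showAnnualPaywall()  { /* render the variant_a experience */ }
func showDefaultPaywall() { /* render the control experience */ }

// getVariant() is documented above; the class-method call form and the
// optional String return type are assumptions made for this sketch.
let variant = AppDNA.getVariant("paywall-test")

switch variant {
case "variant_a"?:
    showAnnualPaywall()
default:
    // "control", an unexpected value, or nil (experiment not found or
    // archived): fall back to the control experience.
    showDefaultPaywall()
}
```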
Exposure Tracking
Exposure tracking records that a user was shown a particular experiment variant. This is critical for accurate experiment analysis: you only want to measure outcomes for users who actually saw the variant.
Automatic Tracking
By default, the SDK automatically tracks an exposure event once per session, the first time getVariant() is called for a given experiment. This means:
- Calling getVariant("paywall-test") multiple times in the same session only records one exposure.
- A new exposure is recorded when the user starts a new session.
- You do not need to call trackExposure() manually unless you want to control exactly when the exposure is recorded.
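For example, under the default behavior (the class-method call form is carried over from the sketch above, so still an assumption):

```swift
// First read in this session: the SDK records one exposure for "paywall-test".
let first = AppDNA.getVariant("paywall-test")

// Subsequent reads in the same session reuse the cached assignment and
// do not record additional exposures.
let second = AppDNA.getVariant("paywall-test")
```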
Manual Tracking
In some cases, you may want to defer exposure tracking until the user actually sees the variant UI (not just when the code reads the variant). To do this, disable automatic exposure tracking and track manually. The same pattern applies on iOS, Android, Flutter, and React Native.
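A sketch of the deferred pattern on iOS: trackExposure() is documented above, while the configuration option name and the call forms are assumptions.

```swift
// Hypothetical configuration call: the option for disabling automatic
// exposure tracking is not named on this page, so check the SDK reference.
AppDNA.configure(options: ["automaticExposureTracking": false])

// Reading the assignment no longer records an exposure on its own.
let variant = AppDNA.getVariant("paywall-test")
buildPaywall(for: variant)

// Record the exposure only once the variant UI is actually on screen,
// e.g. from the paywall screen's viewDidAppear.
func paywallDidAppear() {
    AppDNA.trackExposure("paywall-test")   // trackExposure() is documented above
}

func buildPaywall(for variant: String?) { /* construct the UI for the variant */ }
```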
Experiment Delegate
You can register a delegate to receive callbacks when experiment assignments change. This happens when the config refreshes from the server and experiment definitions have been updated (e.g., traffic allocation changed, experiment stopped). Delegate APIs are available on iOS, Android, Flutter, and React Native.
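A sketch of what a delegate might look like on iOS; the protocol, method, and registration names below are hypothetical, since only the existence of an assignment-change callback is documented here.

```swift
// Hypothetical delegate protocol; the real names may differ.
final class ExperimentObserver: AppDNAExperimentDelegate {
    func experimentAssignmentsDidChange(_ experimentIds: [String]) {
        // Re-read the affected variants and refresh any UI that depends on them.
        for id in experimentIds {
            let variant = AppDNA.getVariant(id)
            print("experiment \(id) now resolves to \(variant ?? "nil")")
        }
    }
}

let observer = ExperimentObserver()
AppDNA.setExperimentDelegate(observer)   // hypothetical registration method
```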
Assignment changes are rare in practice. They occur only when the experiment definition itself changes on the server (e.g., traffic allocation is adjusted or the experiment is stopped). The delegate is not called on every session start.
Experiment Lifecycle
Experiments progress through a defined lifecycle:

| Status | Description |
|---|---|
| Draft | Experiment is being configured. Not visible to SDKs. |
| Running | Experiment is active and included in the config bundle. Users are assigned to variants. |
| Completed | Experiment has been stopped with a declared winner. The winning variant is served to all users. |
| Archived | Experiment has been removed from the config bundle. getVariant() returns nil/null. |
What happens when an experiment is stopped?
When you complete an experiment and declare a winner in the dashboard:
- The config bundle is regenerated with the experiment marked as completed.
- All users now receive the winning variant from getVariant(), regardless of their original assignment.
- Exposure events are no longer tracked for completed experiments.
Statistical Analysis
All statistical analysis is performed server-side. The dashboard displays:
- Conversion rates per variant with confidence intervals
- Statistical significance (p-value) based on a frequentist approach
- Sample size and exposure counts per variant
- Lift (percentage improvement of treatment over control)
AppDNA uses a sequential testing methodology that lets you check results at any time without inflating your false positive rate. You do not need to wait for a predetermined sample size before looking at results.
Best Practices
- Always handle the default case. If getVariant() returns nil/null (experiment not found or archived), fall back to the control experience.
- Use meaningful experiment keys. Choose descriptive keys like "onboarding-v2" or "paywall-annual-price" rather than generic names like "test-1".
- Track custom conversion events. In addition to exposure tracking, use AppDNA.track() to record conversion events (e.g., "purchase_completed") that you configure as goals in the dashboard; see the sketch after this list.
- Test in sandbox first. Run experiments in the sandbox environment before deploying to production. Sandbox experiments are completely isolated from production data.
- Do not change traffic allocation mid-experiment. Changing the allocation while an experiment is running can bias your results. If you need to adjust, consider stopping the experiment and starting a new one.
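As referenced in the conversion-events bullet above, a minimal sketch of goal tracking (AppDNA.track() and the event name come from this page; the properties variant is an assumption):

```swift
// Record the conversion event configured as a goal in the dashboard.
AppDNA.track("purchase_completed")

// If the SDK accepts event properties (an assumption, not confirmed on
// this page), context could be attached along these lines:
// AppDNA.track("purchase_completed", properties: ["plan": "annual"])
```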