The AppDNA onboarding module lets you present server-driven onboarding flows that are configured in the AppDNA Console. Flows are delivered to the SDK via the remote config bundle, so you can update onboarding experiences without shipping app updates.

Present an Onboarding Flow

Present a specific onboarding flow by ID:
let presented = AppDNA.presentOnboarding(
    flowId: "main_flow",
    from: viewController,
    delegate: self
)

if !presented {
    print("Flow config not available — check Console or network")
}
The method returns false if the flow configuration is not available (e.g., config has not loaded yet or the flow ID is invalid).
If flowId is nil, the SDK presents the currently active flow as configured in remote config. This is useful when you want the Console to control which flow is shown.

Module Access

Access the onboarding module directly:
let onboarding = AppDNA.onboarding

Module Methods

| Method | Signature | Description |
| --- | --- | --- |
| present | present(flowId: String?, from: UIViewController, context: OnboardingContext?) -> Bool | Present an onboarding flow |
| setDelegate | setDelegate(_ delegate: AppDNAOnboardingDelegate?) | Set a delegate for flow callbacks |

OnboardingContext

Pass additional context when presenting a flow:
let context = OnboardingContext(
    source: "app_launch",
    campaign: "winter_2025",
    referrer: "social_ad",
    userProperties: ["locale": "en_US"],
    experimentOverrides: ["onboarding_variant": "b"]
)

AppDNA.onboarding.present(
    flowId: "main_flow",
    from: viewController,
    context: context
)
| Property | Type | Description |
| --- | --- | --- |
| source | String? | Where the flow was triggered from |
| campaign | String? | Attribution campaign identifier |
| referrer | String? | Referral source |
| userProperties | [String: Any]? | Additional user properties for personalization |
| experimentOverrides | [String: String]? | Override experiment variant assignments for testing |

AppDNAOnboardingDelegate

Implement the delegate protocol to respond to onboarding flow events:
protocol AppDNAOnboardingDelegate {
    func onOnboardingStarted(flowId: String)
    func onOnboardingStepChanged(flowId: String, stepId: String, stepIndex: Int, totalSteps: Int)
    func onOnboardingCompleted(flowId: String, responses: [String: Any])
    func onOnboardingDismissed(flowId: String, atStep: Int)

    // Async hooks (optional — default implementations provided)
    func onBeforeStepAdvance(flowId: String, fromStepId: String, stepIndex: Int, stepType: String, responses: [String: Any], stepData: [String: Any]?) async -> StepAdvanceResult
    func onBeforeStepRender(flowId: String, stepId: String, stepIndex: Int, stepType: String, responses: [String: Any]) async -> StepConfigOverride?
}

Example Implementation

class OnboardingHandler: AppDNAOnboardingDelegate {
    func onOnboardingStarted(flowId: String) {
        print("Onboarding started: \(flowId)")
    }

    func onOnboardingStepChanged(
        flowId: String,
        stepId: String,
        stepIndex: Int,
        totalSteps: Int
    ) {
        print("Step \(stepIndex + 1)/\(totalSteps): \(stepId)")
        // Update progress indicator
    }

    func onOnboardingCompleted(flowId: String, responses: [String: Any]) {
        print("Onboarding completed: \(flowId)")
        print("User responses: \(responses)")
        // Navigate to main app screen
        // Use responses to personalize the experience
    }

    func onOnboardingDismissed(flowId: String, atStep: Int) {
        print("Onboarding dismissed at step \(atStep)")
        // Handle early exit — maybe show again later
    }
}

Step Types

Onboarding flows support the following step types, configured in the Console:
| Step Type | Description |
| --- | --- |
| welcome | Welcome screen with title, subtitle, and hero image |
| question | Single-select or multi-select question for user input |
| value_prop | Value proposition screen highlighting a key feature or benefit |
| form | Structured form with multiple native input fields |
| interactive_chat | AI-powered conversational step |
| custom | Custom HTML or native content rendered via a template |
Step types and their content are defined entirely in the Console. The SDK renders them automatically based on the flow configuration. You do not need to build UI for individual step types.

Content Blocks

Each onboarding step is composed of content blocks that control the visual layout. Blocks are configured in the Console and rendered natively by the SDK. The following block types are available:
| Block Type | Description |
| --- | --- |
| title | Primary heading text |
| subtitle | Secondary descriptive text |
| image | Static image with sizing and corner radius |
| lottie | Lottie animation (JSON or dotLottie) |
| rive | Rive state-machine animation |
| video | Inline video (MP4, HLS) with autoplay and loop options |
| button | Tappable button with configurable action |
| option_list | List of selectable options (single or multi-select) |
| form | Form input group (see Form Steps below) |
| page_indicator | Dot or bar indicator showing current step progress |
| wheel_picker | Scrollable wheel-style picker for value selection |
| pulsing_avatar | Animated avatar with a pulsing ring effect |
| social_login | Social sign-in buttons (Apple, Google, etc.) |
| timeline | Vertical timeline with labeled milestones |
| animated_loading | Skeleton or spinner loading animation between steps |
| countdown_timer | Countdown timer with configurable duration and expiry action |
| rating | Star or emoji rating selector |
| rich_text | Markdown-style rich text with inline formatting |
| progress_bar | Horizontal progress bar with percentage or label |
| circular_gauge | Circular progress indicator with value label |
| date_wheel_picker | Native date wheel picker (day/month/year columns) |
| stack | Vertical container that groups child blocks |
| row | Horizontal container that arranges child blocks side by side |
| custom_view | Host-app-provided SwiftUI view (see Custom View Registration) |
| star_background | Animated starfield or particle background effect |
| pricing_card | Product pricing card with plan details and CTA |
Blocks are configured entirely in the Console. The SDK renders them automatically — no code is needed unless you register custom views.

Custom View Registration

Register your own SwiftUI views to be rendered inside onboarding steps wherever a custom_view block appears:
AppDNA.registerCustomView("my_view") {
    AnyView(MySwiftUIView())
}

// Register before presenting the onboarding flow
AppDNA.registerCustomView("terms_acceptance") {
    AnyView(TermsAcceptanceView(onAccept: { accepted in
        // Handle acceptance
    }))
}
The id must match the custom view identifier configured in the Console for the custom_view block.

Block Styling

Every content block supports a block_style design token that controls appearance properties such as padding, margin, background color, corner radius, border, shadow, and opacity. Block styles are configured in the Console and applied automatically by the SDK.

Visibility Conditions

Blocks can be shown or hidden based on user responses, bindings, or device attributes. Visibility conditions are configured per-block in the Console using rules like answer_equals, binding_not_empty, platform_is, and locale_matches. The SDK evaluates conditions client-side before rendering each block.
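As a rough illustration of the idea, a per-block condition using one of the rules above might be expressed like this in the exported flow config. The JSON keys here are hypothetical (the actual schema is defined by the Console and may differ); only the rule names come from the list above:

```json
{
  "type": "subtitle",
  "visibility": {
    "rule": "answer_equals",
    "step_id": "goal_step",
    "value": "lose_weight"
  }
}
```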

Entrance Animations

Each block supports an entrance animation that plays when the block first appears. Animations are configured per-block in the Console. Supported animation types include fade, slide_up, slide_down, slide_left, slide_right, scale, flip, and bounce. You can configure duration, delay, and easing curve.

Form Steps

The form step type provides native input fields for collecting structured user data. Each form can contain multiple fields with validation, conditional visibility, and custom configuration.

Supported Input Types

| Type | Description | Example Use Case |
| --- | --- | --- |
| text | Single-line text input | Name, username |
| textarea | Multi-line text input | Bio, notes |
| number | Numeric input with stepper | Age, quantity |
| email | Email input with validation | Email address |
| phone | Phone number input | Contact number |
| date | Date picker | Birthday, start date |
| time | Time picker | Preferred time |
| datetime | Combined date and time picker | Appointment scheduling |
| select | Dropdown or scrollable picker | Country, category |
| slider | Numeric slider with min/max | Budget, intensity level |
| toggle | On/off switch | Opt-in preferences |
| stepper | Increment/decrement counter | Number of items |
| segmented | Segmented control for few options | Gender, frequency |
| password | Secure text input with visibility toggle | Password, PIN |
| rating | Star rating input | Satisfaction, preference |
| range_slider | Dual-handle range slider | Price range, age range |
| image_picker | Photo picker from library or camera | Profile photo, document |
| color | Color picker or preset swatches | Theme preference, branding |
| url | URL input with validation | Website, portfolio link |
| chips | Tag-style multi-select chips | Interests, skills |
| signature | Freehand signature drawing pad | Agreement, consent |
| location | Autocomplete location search | City, address, country |

Field Validation

Form fields support built-in and custom validation:
  • Required fields — marked in the Console, the SDK prevents advancing until filled
  • Regex patterns — custom validation (e.g., ^[A-Z]{2}\\d{4}$ for a code format)
  • Min/max values — for number, slider, and stepper fields
  • Max length — for text and textarea fields

Conditional Fields

Fields can depend on other fields using depends_on rules. For example, a “Company name” field can appear only when the user selects “Employed” in a previous field. Supported operators: equals, not_equals, contains, not_empty, empty, gt, lt, is_set.
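A rough sketch of how the "Company name" example above could be wired up. The JSON keys and field IDs here are illustrative, not the Console's actual schema; only the operator names come from the list above:

```json
{
  "id": "company_name",
  "type": "text",
  "label": "Company name",
  "depends_on": {
    "field": "employment_status",
    "operator": "equals",
    "value": "employed"
  }
}
```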

Form Responses

Form field values are included in the responses dictionary passed to onOnboardingCompleted, keyed by step ID. Each step’s value is a dictionary of field ID to field value.
func onOnboardingCompleted(flowId: String, responses: [String: Any]) {
    if let formData = responses["profile_step"] as? [String: Any] {
        let name = formData["full_name"] as? String
        let age = formData["age"] as? Int
        let email = formData["email"] as? String
        // Use collected data to personalize the experience
    }
}

Location Fields

The location field type provides an autocomplete search input that returns structured location data including city, state, country, coordinates, and timezone. When the user starts typing (e.g., “New York”), the SDK debounces the input (300ms), calls the AppDNA geocoding proxy, and displays a dropdown of suggestions. The user selects a result and the SDK stores the complete structured data.

Location Data Structure

Each location selection contains:
| Field | Type | Example |
| --- | --- | --- |
| formatted_address | String | "New York, New York, United States" |
| city | String | "New York" |
| state | String | "New York" |
| state_code | String | "NY" |
| country | String | "United States" |
| country_code | String | "US" |
| latitude | Double | 40.7128 |
| longitude | Double | -74.0060 |
| timezone | String | "America/New_York" |
| timezone_offset | Int | -300 (minutes from UTC) |
| postal_code | String? | null |
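Note that timezone_offset is expressed in minutes, not hours. Converting it into a display label, for example (plain JavaScript, not an SDK API):

```javascript
// timezone_offset is minutes from UTC; -300 minutes → "UTC-5".
function utcLabel(offsetMinutes) {
  const hours = offsetMinutes / 60;
  return `UTC${hours >= 0 ? "+" : ""}${hours}`;
}

console.log(utcLabel(-300)); // "UTC-5"
console.log(utcLabel(60));   // "UTC+1"
```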

Accessing Location Data

Use AppDNA.getLocationData(fieldId:) to access the selected location from anywhere in your app:
if let location = AppDNA.getLocationData(fieldId: "user_location") {
    print("City: \(location.city)")           // "New York"
    print("Country: \(location.country_code)") // "US"
    print("Timezone: \(location.timezone)")    // "America/New_York"
    print("Coords: \(location.latitude), \(location.longitude)")
}

Template Engine

Location data is accessible in dynamic content templates:
Welcome from {{onboarding.location_step.user_location.city}}!
Your timezone: {{onboarding.location_step.user_location.timezone}}
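The SDK resolves these paths internally. As an illustration only (not the SDK's actual implementation), the dotted-path substitution is equivalent to something like:

```javascript
// Illustrative sketch: resolve {{dotted.path}} tokens against a nested
// object of collected onboarding responses. Unresolvable paths render
// as an empty string.
function resolveTemplate(template, scope) {
  return template.replace(/\{\{([\w.]+)\}\}/g, (_, path) =>
    String(path.split(".").reduce((obj, key) => (obj ?? {})[key], scope) ?? "")
  );
}

const scope = {
  onboarding: {
    location_step: {
      user_location: { city: "New York", timezone: "America/New_York" },
    },
  },
};

resolveTemplate(
  "Welcome from {{onboarding.location_step.user_location.city}}!",
  scope
); // → "Welcome from New York!"
```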

Configuration in Console

In the onboarding flow editor, add a location field to a form step and configure:
| Option | Description | Default |
| --- | --- | --- |
| Location Type | Filter results: city, address, region, country | city |
| Bias Country | ISO country code to prioritize results (e.g., US) | None |
| Language | Language for results (e.g., en, fr) | en |
| Min Characters | Characters required before search triggers | 2 |
Location autocomplete uses a server-side proxy — no third-party SDK is added to your app binary. The geocoding provider (Mapbox by default) can be configured in Settings > Geocoding.

Interactive Chat Steps

The interactive_chat step type renders a conversational UI that forwards each user message to your webhook and renders the reply. This is how you integrate your own LLM, agent, or rule-based backend into an onboarding flow.

How it works

  1. User types a message in the chat step.
  2. SDK POSTs the conversation payload to the webhook_url configured in the Console (with any custom headers you set).
  3. Your server responds with JSON in the schema below.
  4. SDK renders the AI reply, quick-reply buttons, media, etc.
The SDK handles turn limits, typing indicators, ratings, and quick-reply routing — your webhook only needs to return the next reply.

Request payload (SDK → your webhook)

{
  "event": "chat_message",
  "flow_id": "onboarding_v1",
  "step_id": "chat_intro",
  "app_id": "app_abc123",
  "user_id": "user_xyz",
  "conversation": {
    "turn": 2,
    "user_message": "I've been having vivid dreams lately",
    "max_turns": 5,
    "remaining_turns": 3,
    "messages": [
      { "role": "ai",   "content": "Hi, what brings you here?",       "id": "msg_a0", "timestamp": "2026-04-15T10:00:00Z" },
      { "role": "user", "content": "I've been having vivid dreams...", "id": "msg_u1", "timestamp": "2026-04-15T10:00:12Z" }
    ]
  },
  "context": { "threadId": "thread_UXJWTKpSvBRWpGEhHLXqnA6O" },
  "rating": null
}
Your headers are forwarded verbatim. For bearer-token APIs, set the header value to Bearer YOUR_TOKEN (the SDK does not add the Bearer prefix automatically).

The context object is opaque session state you control — whatever you returned in data on a previous turn gets echoed back here on every subsequent call. This is how you integrate threaded AI backends (OpenAI Assistants, hosted LLMs with threadId, etc.) without replaying the full history on every request. context is absent on turn 1 (your server hasn’t written anything yet) and accumulates across turns; later values overwrite earlier ones per-key (last-write-wins). See Threaded backends below for the full pattern.

Response schema (your webhook → SDK)

{
  "action": "reply",
  "messages": [
    { "content": "Tell me more about the most recent one." }
  ]
}
Only messages[].content is required — every other field is optional. The SDK reads messages[] to render chat bubbles, so if you omit it or return a different shape, the reply won’t render.
Returning a response like {"reply": "..."} without wrapping it in the messages array will silently fail — the SDK decodes unknown fields as null and shows nothing. Always wrap your reply in messages[{ "content": "..." }].
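One defensive pattern (a sketch, not part of the SDK) is a tiny server-side helper that can only produce the required shape, so a bare reply string can never leak out un-wrapped:

```javascript
// Hypothetical helper: always wraps reply text in the messages array
// the SDK expects. Extra top-level fields (quick_replies, data, etc.)
// can be merged in via the second argument.
function chatReply(text, extra = {}) {
  return {
    action: "reply",
    messages: [{ content: String(text) }],
    ...extra, // e.g. { quick_replies: [...], data: { threadId } }
  };
}

chatReply("Tell me more.");
// → { action: "reply", messages: [{ content: "Tell me more." }] }
```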

Full response schema

| Field | Type | Description |
| --- | --- | --- |
| action | "reply", "reply_and_complete", or "error" | Default "reply". "reply_and_complete" renders the reply then ends the chat. |
| messages | Array<{ content, media?, delay_ms? }> | Messages to render. Empty array = silent turn. |
| messages[].content | string | The bubble text. |
| messages[].media | { type, url, alt_text? } | Optional media — type is "image", "lottie", or "link". |
| messages[].delay_ms | number | Delay before the bubble appears (typing simulation). |
| quick_replies | Array<{ id, text }> | Buttons rendered under the latest AI reply. Tapping one sends it as the user’s next message. |
| force_complete | boolean | End the conversation after this reply. |
| completion_message | string | Final message shown when the chat completes. |
| data | object | Free-form JSON. Two uses: (1) stored in the onboarding response bundle as webhook_data for analytics/later steps; (2) echoed back to your webhook in context on every subsequent turn — use this for threadId, session_id, or any opaque server state you need to round-trip. |
Unknown fields in your response are ignored, so you can also include your own server-side state at the top level (it just won’t round-trip — use data for anything you want back).

Minimal working example

Node.js (Express):
app.post('/chat', async (req, res) => {
  const { conversation } = req.body;
  const userMessage = conversation.user_message;

  const replyText = await myLLM.complete(userMessage);

  res.json({
    action: "reply",
    messages: [{ content: replyText }],
  });
});

Threaded backends: round-tripping session state

If your AI service uses a thread/session handle (OpenAI Assistants API, a hosted LLM with conversation memory, or any third-party AI proxy that mints a session id), do not replay the full conversation history on every turn — latency and cost grow linearly with conversation length. Instead, return your session handle in data once; the SDK accumulates it and echoes it back in context on every subsequent turn. The wire contract in plain English:
  • You write to context by returning fields under data in any response.
  • You read from context off req.body.context on every request after the first.
  • Keys are merged across turns; later values overwrite earlier ones per-key.
  • The SDK never inspects or mutates context — it’s opaque to us. You choose the keys and the shape.
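The accumulation described above behaves like a shallow, per-key merge. A sketch of the semantics (illustrative only — the SDK performs this merge internally):

```javascript
// Last-write-wins, per-key: the context accumulation described above.
function mergeContext(existing, data) {
  return { ...existing, ...data };
}

let context = {};
context = mergeContext(context, { threadId: "t1", cohort: "a" }); // turn 1 data
context = mergeContext(context, { threadId: "t2" });              // turn 2 data
// context sent on turn 3: { threadId: "t2", cohort: "a" }
```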
Example: integrating a threaded AI service keyed by threadId. If your backend rotates the thread (expiry, failure, migration), just return the new id in data — the SDK overwrites the cached value and uses it on the next turn automatically.
app.post('/chat', async (req, res) => {
  const { conversation, context } = req.body;

  // Resume if we have a threadId; create a fresh one otherwise
  let threadId = context?.threadId;
  if (!threadId) {
    threadId = await aiService.createThread();
  }

  let replyText;
  try {
    replyText = await aiService.sendMessage(threadId, conversation.user_message);
  } catch (err) {
    // Thread expired or invalid — start fresh and retry once
    if (err.code === 'thread_expired') {
      threadId = await aiService.createThread();
      replyText = await aiService.sendMessage(threadId, conversation.user_message);
    } else {
      throw err;
    }
  }

  res.json({
    action: "reply",
    messages: [{ content: replyText }],
    data: { threadId },   // round-tripped to every subsequent call
  });
});
No Redis, no per-user cache, no history replay. The context round-trip IS your session storage.
The same pattern works for anything opaque you need per-conversation: API rate-limit bucket ids, A/B cohort labels, tool-call state, reasoning scratchpads — whatever your AI backend needs to resume, stash it in data and read it from context on the next turn.

Console configuration

In the onboarding flow editor, add a step of type Interactive Chat, then set:
| Field | Purpose |
| --- | --- |
| Webhook URL | Where the SDK POSTs each turn |
| Headers | Custom headers — for bearer auth, the full value must be Bearer <token> |
| Timeout (ms) | How long to wait before showing error_text |
| Retry count | Number of retries on transient network errors |
| Error text | Shown in-chat when the webhook fails or times out |
| Persona | Name, role, avatar — rendered in the header |
| Max turns | Caps the conversation length |
| Auto-messages | Pre-scripted AI messages keyed by turn number |
| Quick replies | Static buttons always shown |
| Turn actions | Trigger rating prompts / inject messages at specific turns |
| Style | Colors, fonts, bubble styling |

Auto-tracked events

| Event | When |
| --- | --- |
| chat_step_viewed | Chat step is shown |
| chat_message_sent | User sends a message |
| chat_message_received | Webhook reply is received (even if empty) |
| chat_webhook_error | Webhook throws, times out, returns a non-2xx status, or returns invalid JSON. Includes http_status and a truncated response_body when the error is a non-2xx response, so integration bugs surface immediately instead of silently showing an empty reply. |
| chat_rating_submitted | User submits a rating in a turn action |
| chat_step_completed | Chat reaches max turns, force_complete: true, or the user explicitly completes |

Async Step Hooks

The onboarding delegate supports two async hooks that let you intercept step transitions for server-side validation, dynamic content loading, or custom routing logic.

onBeforeStepAdvance

Called before the SDK advances to the next step. Return a StepAdvanceResult to control what happens next:
func onBeforeStepAdvance(
    flowId: String,
    fromStepId: String,
    stepIndex: Int,
    stepType: String,
    responses: [String: Any],
    stepData: [String: Any]?
) async -> StepAdvanceResult {
    // Example: validate a referral code with your backend
    if stepType == "form",
       let code = (responses[fromStepId] as? [String: Any])?["referral_code"] as? String {
        let isValid = await validateReferralCode(code)
        if !isValid {
            return .block(message: "Invalid referral code. Please try again.")
        }
        return .proceedWithData(["referral_validated": true])
    }
    return .proceed
}

StepAdvanceResult

| Case | Description |
| --- | --- |
| .proceed | Continue to the next step normally |
| .proceedWithData(_:) | Continue and merge additional data into the session |
| .block(message:) | Block advancement and show an error message to the user |
| .skipTo(stepId:) | Skip to a specific step by ID |
| .skipToWithData(stepId:data:) | Skip to a specific step and merge additional data |

onBeforeStepRender

Called before a step is rendered. Return a StepConfigOverride to dynamically modify the step’s content:
func onBeforeStepRender(
    flowId: String,
    stepId: String,
    stepIndex: Int,
    stepType: String,
    responses: [String: Any]
) async -> StepConfigOverride? {
    // Pre-fill form fields based on user data
    if stepId == "profile_step" {
        return StepConfigOverride(
            fieldDefaults: [
                "email": currentUser.email,
                "name": currentUser.displayName
            ],
            title: "Welcome back, \(currentUser.firstName)!"
        )
    }
    return nil
}
Both hooks are async — the SDK shows a loading indicator while waiting for your response. If either hook throws an error or times out, the SDK proceeds normally.

Row Direction and Distribution

The row content block supports configurable direction and distribution, set in the Console:
| Property | Options | Description |
| --- | --- | --- |
| direction | horizontal, vertical | Axis along which child blocks are arranged |
| distribution | equal, fill, start, center, end, space_between, space_around | How child blocks are distributed within the row |
// Example: Two buttons side by side, equally spaced
Row (direction: horizontal, distribution: equal)
  ├── Button "Skip"
  └── Button "Continue"

Button Gradients

Buttons support gradient backgrounds configured in the Console:
| Property | Type | Description |
| --- | --- | --- |
| gradient_colors | [String] | Array of hex color stops (e.g., ["#FF6B6B", "#4ECDC4"]) |
| gradient_direction | String | horizontal, vertical, diagonal_tl_br, diagonal_tr_bl |
Gradients override the solid background_color when set.
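Expressed as config, a gradient button might look like the following. The surrounding JSON shape is illustrative (gradients are set in the Console, not authored by hand); only the two gradient properties come from the table above:

```json
{
  "type": "button",
  "text": "Continue",
  "gradient_colors": ["#FF6B6B", "#4ECDC4"],
  "gradient_direction": "diagonal_tl_br"
}
```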

Select Display Styles

Question steps with selectable options support three display styles:
| Style | Description |
| --- | --- |
| dropdown | Native dropdown picker. Best for long option lists (5+ items). |
| stacked | Vertically stacked option buttons. Default style. |
| grid | Grid layout with 2 or 3 columns. Good for visual options with icons. |
The display style is configured per question step in the Console.

Progress Bar Custom Colors

The progress_bar content block supports custom color configuration:
| Property | Type | Description |
| --- | --- | --- |
| fill_color | String | Hex color for the filled portion |
| track_color | String | Hex color for the unfilled track |
| fill_gradient | [String] | Gradient color stops for the fill (overrides fill_color) |
| corner_radius | Number | Corner radius of the progress bar |
| height | Number | Height of the progress bar in points |
Colors are configured per block in the Console. When not set, the SDK uses the app’s primary theme color.

Per-Step Progress Visibility

The progress indicator can be hidden on specific steps while still counting them in the total progress. This is useful for splash screens, permission prompts, or transition steps where the progress bar would be distracting. Configure hide_progress per step in the Console under Step Design > Logic. When enabled, the progress bar is hidden on that step but the step still contributes to the overall progress calculation (e.g., step 3 of 5 still advances progress to 60%).
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| hide_progress | Bool | false | Hides the progress indicator on this step only |
This is a per-step override of the flow-level show_progress setting. If show_progress is false at the flow level, the progress bar is hidden on all steps regardless of hide_progress.

Conditional Branching

Onboarding flows support conditional branching — the next step can change based on the user’s answer. Branching is configured entirely in the Console. Example:
Step 1: "What's your fitness goal?"
  → "Lose weight"  → Step 2a: Weight loss program
  → "Build muscle" → Step 2b: Muscle building program
  → "Stay active"  → Step 2c: General fitness
To configure branching:
  1. In the Console, open your onboarding flow (Onboarding > Flows)
  2. On a question step, click Add branching rule
  3. Map each answer option to a target step
  4. The SDK handles routing automatically — no code needed
All answers (including branched paths) are returned in the responses dictionary in onOnboardingCompleted.

Auto-Tracked Events

The SDK automatically tracks the following onboarding-related events:
| Event | Triggered When |
| --- | --- |
| onboarding_flow_started | An onboarding flow begins |
| onboarding_step_viewed | A step is displayed to the user |
| onboarding_step_completed | A user completes a step (e.g., answers a question) |
| onboarding_step_skipped | A user skips a step |
| onboarding_flow_completed | The user completes the entire flow |
| onboarding_flow_dismissed | The user dismisses the flow before completing |
Each event includes the flowId, stepId, and stepIndex where applicable.

Configuration in Console

Onboarding flows are managed in the AppDNA Console:
  1. Navigate to Onboarding > Flows.
  2. Create a new flow or edit an existing one.
  3. Add steps (welcome, question, value_prop, custom) and configure their content.
  4. Set targeting rules to control which users see the flow.
  5. Publish the flow to make it available to the SDK via the config bundle.
Flows must be published in the Console before they appear in the SDK. Draft flows are not delivered to client devices.

Full Example

import AppDNASDK

class OnboardingCoordinator: AppDNAOnboardingDelegate {
    private let rootViewController: UIViewController

    init(rootViewController: UIViewController) {
        self.rootViewController = rootViewController
        AppDNA.onboarding.setDelegate(self)
    }

    func showOnboardingIfNeeded() {
        // Pass nil to show the active flow from remote config
        let presented = AppDNA.presentOnboarding(
            flowId: nil,
            from: rootViewController,
            delegate: self
        )

        if !presented {
            // No active flow or config not loaded yet
            navigateToMainApp()
        }
    }

    // MARK: - AppDNAOnboardingDelegate

    func onOnboardingStarted(flowId: String) {
        print("Starting flow: \(flowId)")
    }

    func onOnboardingStepChanged(
        flowId: String,
        stepId: String,
        stepIndex: Int,
        totalSteps: Int
    ) {
        // Track progress
    }

    func onOnboardingCompleted(flowId: String, responses: [String: Any]) {
        // Personalize based on responses
        if let goal = responses["fitness_goal"] as? String {
            AppDNA.identify(userId: currentUserId, traits: ["fitness_goal": goal])
        }
        navigateToMainApp()
    }

    func onOnboardingDismissed(flowId: String, atStep: Int) {
        navigateToMainApp()
    }
}