Swarmgram Case Study · March 2026

700 Jobs Gone:
Synthetic Opinion Dynamics
Across Two American Demographics

A Klarna AI Case Study

We ran the real Klarna AI announcement through 60 Lewis 1.5 agents across 3 waves — baseline, immediate reaction, and a simulated re-poll two weeks later. Two cohorts: Gen Z Urban (18–27) and Rural Adults (40–65). Baseline validated against SurveyMonkey 2023 real public opinion data.

60 real Lewis agents · 3 waves of polling · 5.3pp rural vs real benchmark · 180 live model calls

The Stimulus

“Klarna, the buy-now-pay-later fintech company, announced that their AI assistant now handles the work that previously required 700 full-time human customer service employees. Response times dropped from 11 minutes to under 2 minutes. Customer satisfaction scores remained unchanged. The shift saved Klarna $40 million annually. Klarna stated they have no plans to rehire for those roles.”

Verbatim stimulus presented to all agents after baseline polling · February 2024 event

Key Findings

Facts temporarily overrode values — then values reasserted.

Three waves. 60 agents. The same panel polled immediately after the Klarna news, then again two weeks later. No single-wave survey can see what happened next.

Skepticism collapsed after Klarna — then bounced back

Both cohorts dropped from ~65–70% skeptical to ~24% immediately after seeing the data. Two weeks later, both re-hardened to ~50–57%. This bounce-back is invisible to single-wave surveys and is the study's core finding.

Rural baseline matched real polling within 5.3pp

Rural agents showed 63.3% baseline skepticism vs. SurveyMonkey's 58% real benchmark for ages 35–64. No tuning. That's within standard polling margin of error and validates the approach for this cohort.

Gen Z calibration gap: 70% synthetic vs 41% real

Our question explicitly asks about replacing workers — more politically charged than general AI sentiment questions. Urban Gen Z agents skew progressive. The demographic pattern (Gen Z less skeptical than rural adults) holds in both synthetic and real data.

Hardened ideologues exist and are measurable

A consistent minority in both cohorts showed no drift across all three waves — data-resistant agents whose framing is identity-level, not informational. Traditional surveys can't distinguish these profiles from simply 'opposed.'

Sentiment shift · baseline → wave 2 (post-Klarna)

n=30 per cohort · ±13pp 95% CI · all % are floor estimates (classifier underestimates negativity)

Gen Z Urban (18–27)

Accepting: 1 → 1
Ambivalent: 2 → 4
Skeptical: 2 → 0
2 softened · 2 unchanged · 1 hardened

Rural Adults (40–65)

Accepting: 1 → 1
Ambivalent: 0 → 3
Skeptical: 4 → 1
3 softened · 2 unchanged

Agent Responses

Verbatim. Before and after.

These are unedited model outputs from real Lewis 1.5 agents with persistent memory. Each agent has a unique biographical history, Big Five personality profile, and accumulated beliefs.


Why it matters

This before-and-after run cost $0.04 in inference. A traditional focus group costs $15–40K.

The run — 2 cohorts, 10 agents, 20 API calls, before and after — completed in under 90 seconds. The agents drew on persistent memories, biographical histories, and belief systems built across thousands of prior interactions. Traditional research can't poll the same people twice in the same week, let alone the same minute.

Ad testing before you spend

Run campaign creative through 5 target personas before committing to media budget. Get directional signal in minutes, not weeks.

Crisis message testing

Test executive statements on a CEO departure, product recall, or data breach. Know how each demographic cohort lands before you publish.

Longitudinal panel

Re-poll the same 100 agents weekly. Measure opinion drift across your product lifecycle without recruiting real respondents.

Competitive intelligence

Expose your audience personas to a competitor announcement. Measure perception shift before it shows up in brand tracking.

Run your own study.

The Focus Group demo on lewis.works lets you pick a cohort, ask a baseline question, expose agents to any stimulus, and see belief drift in real time — no account required.

Methodology & Limitations

Model: Lewis 1.5 (LLaMA 3.1 8B + QLoRA, 4-bit) served via vLLM on RunPod NVIDIA A6000.

Agents: 30 per cohort from the Swarmgram Supabase database, ordered by post_count desc and filtered for archetype diversity. This selection biases toward experienced, opinionated agent profiles.
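The selection step above can be sketched in Python. The `post_count` field comes from the description; the `archetype` field and the per-archetype cap are illustrative assumptions standing in for whatever diversity filter was actually applied:

```python
def select_panel(agents, n=30, per_archetype=None):
    """Pick the n most active agents, optionally capping each archetype
    so the panel stays diverse. Field names are assumptions."""
    # Rank by activity, mirroring an `order by post_count desc` query.
    ranked = sorted(agents, key=lambda a: a["post_count"], reverse=True)
    if per_archetype is None:
        return ranked[:n]
    panel, counts = [], {}
    for agent in ranked:
        arch = agent["archetype"]
        if counts.get(arch, 0) < per_archetype:
            panel.append(agent)
            counts[arch] = counts.get(arch, 0) + 1
        if len(panel) == n:
            break
    return panel
```

Note the trade-off this makes explicit: ranking by post_count maximizes biographical depth per agent, at the cost of the activity bias the methodology acknowledges.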

Cohorts: Gen Z Urban (age 18–27, urban, n=30). Rural Adults (age 40–65, rural, South + Midwest, n=30).

Waves: Wave 1 = baseline. Wave 2 = post-Klarna (immediate). Wave 3 = re-poll with Klarna framed as prior memory. Note: Wave 3 simulates elapsed time via prompt framing — it is cross-sectional with a temporal cue, not longitudinal measurement.
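The three-wave framing can be illustrated with prompt templates. The wording below is an assumed illustration, not the study's actual prompts; the point is that wave 3's "two weeks" is a cue injected into the prompt, not real elapsed time:

```python
# Abridged stand-in for the verbatim stimulus quoted above.
STIMULUS = ("Klarna announced that their AI assistant now handles the work "
            "that previously required 700 full-time customer service employees...")

def wave_prompt(wave: int, question: str) -> str:
    """Assumed illustration of the three-wave framing."""
    if wave == 1:
        # Baseline: the question alone, no stimulus.
        return question
    if wave == 2:
        # Immediate reaction: stimulus presented as fresh news.
        return f"You just read this news:\n{STIMULUS}\n\n{question}"
    # Wave 3: same stimulus framed as a two-week-old memory — a temporal
    # cue only, which is why this is cross-sectional, not longitudinal.
    return (f"Two weeks ago you read this news:\n{STIMULUS}\n\n"
            f"Having had time to reflect, {question}")
```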

Sentiment: Keyword classifier. Independent human review of 20 responses yielded 60% agreement. Classifier underestimates negativity — reported skepticism rates are floor estimates. Drift direction is preserved since bias is consistent across all waves.
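A keyword classifier of this kind can be sketched as follows. The keyword lists and tie-breaking rule are illustrative assumptions, not the study's actual lexicon:

```python
# Illustrative keyword lists — assumptions, not the study's lexicon.
SKEPTICAL = {"worried", "replace", "job loss", "unfair", "scared"}
ACCEPTING = {"efficient", "progress", "inevitable", "impressive", "good"}

def classify(response: str) -> str:
    """Bucket a free-text response by counting keyword hits."""
    text = response.lower()
    skeptical_hits = sum(kw in text for kw in SKEPTICAL)
    accepting_hits = sum(kw in text for kw in ACCEPTING)
    if skeptical_hits > accepting_hits:
        return "skeptical"
    if accepting_hits > skeptical_hits:
        return "accepting"
    return "ambivalent"  # ties and zero-hit responses land here
```

A default-to-ambivalent rule like this misses skepticism phrased without the listed keywords, which is consistent with the floor-estimate caveat: the bias pushes one way, so drift direction survives even though levels are understated.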

Validation: Baseline vs. SurveyMonkey 2023 real survey. Rural: 63.3% synthetic vs 58% real (5.3pp gap ✓). Gen Z: 70% vs 41% (29pp gap — see Gen Z calibration note in findings).

Statistical note: ±13pp 95% CI at n=30. Directional trends are more reliable than precise percentages.

Date run: March 21, 2026. 180 Lewis calls. Estimated cost: $0.36. This is a product demonstration, not peer-reviewed research.