Change Attribution: What Shifted, Not What Caused
Analysis · 7 min read · Apr 26, 2026


Change attribution is the section of a weekly report that tells you what likely shifted, ranked by how much your data agrees on the direction. It will not claim causation; it names contributors.

Written by Mukul Singh, Founder, Sarenica
Solo founder building Sarenica. Writes about fatigue, focus, and what desk-work tracking actually measures.
Change attribution · Weekly report · Patterns · Confidence
Attribution names contributors, not causes.
Confidence escalates only when at least two metrics agree on the direction.
A directional finding is a hint, not a verdict; an experiment label can promote it to confirmed.

The problem with weekly comparisons

It is easy to compare two weeks and notice that something changed. Fatigue went up. Productive density went down. The hard part is saying anything useful about what shifted.

Change attribution is the layer that tries. It compares this week's session profile against last week's, finds the dimensions that moved most, and emits a structured finding for each one. It will not tell you a single cause. It will rank the most likely contributors and tell you how confident it is in each.

What attribution actually claims

A change-attribution finding has five required pieces: a change type, an observed shift, a list of affected metrics with directions, a confidence band, and a recommended verification.

Change types today include session-duration shift, session start-hour shift, and session-mix shift (deep work vs fragmented vs passive). Each one is computed from the same artifact data the rest of the report rests on, so attribution always lines up with what the chart and findings already showed.
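The five required pieces map naturally onto a small record type. A minimal sketch of that shape (field names and the `AttributionFinding` class are my own illustration, not Sarenica's internal schema):

```python
from dataclasses import dataclass

# Hypothetical record; real field names inside Sarenica may differ.
@dataclass
class AttributionFinding:
    change_type: str        # e.g. "session-duration shift"
    observed_shift: str     # human-readable description of what moved
    affected_metrics: list  # (metric_name, direction) pairs
    confidence: str         # "directional" | "probable" | "confirmed"
    verification: str       # recommended next step

finding = AttributionFinding(
    change_type="session-duration shift",
    observed_shift="30-45 minute blocks rose from 22% to 48% of sessions",
    affected_metrics=[("fatigue_average", "up"), ("productive_density", "down")],
    confidence="probable",
    verification="run a one-week experiment with shorter blocks",
)
```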

[Chart: session duration distribution, this week vs last. Share of sessions in each duration bucket for the two compared weeks.]
Sample data showing a clear shift toward longer blocks. When two metrics move in the negative direction in the new dominant bucket, attribution promotes the finding to "probable".

Directional, probable, confirmed

Every attribution finding carries one of three confidence bands. The honest top end for a normal weekly read is "probable". "Confirmed" requires an explicit experiment label tied to the period — meaning you told Sarenica you were running a deliberate test.

Directional means the magnitude crossed the detection threshold but the metric agreement is weak. Probable means the magnitude is meaningful and at least two metrics agree on the direction of the shift. Confirmed adds the experiment label as a third leg.

Directional: a hint. Worth noting, not worth restructuring around.
Probable: real signal. Worth a one-week experiment to verify.
Confirmed: experiment-backed. Worth standardizing.
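Those three rules reduce to a small decision function. A sketch under the stated rules; the function name, signature, and the idea of a single scalar threshold are my own simplification:

```python
def confidence_band(magnitude: float, threshold: float,
                    agreeing_metrics: int, has_experiment_label: bool):
    """Assign a confidence band from magnitude, metric agreement,
    and the presence of an explicit experiment label (sketch)."""
    if magnitude < threshold:
        return None  # below the detection threshold: no finding at all
    if agreeing_metrics >= 2 and has_experiment_label:
        return "confirmed"   # magnitude + agreement + experiment label
    if agreeing_metrics >= 2:
        return "probable"    # magnitude meaningful, two metrics agree
    return "directional"     # crossed threshold, but agreement is weak
```

Note that without the experiment label the function can never return "confirmed", no matter how large the magnitude, which mirrors the cap described below.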

The data that powers it

Attribution does not look at every signal. It rests on three artifact-level distributions: session_duration_profile, session_start_hour_profile, and session_type_distribution. Each is summarized as a share-of-sessions vector, and the L1 distance between this week's and last week's vectors becomes the magnitude.
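The magnitude computation is straightforward to sketch: normalize each week's bucket counts into a share-of-sessions vector, then take the L1 distance. Function names here are illustrative, not Sarenica's:

```python
def share_vector(counts: dict) -> dict:
    """Turn raw per-bucket session counts into shares that sum to 1."""
    total = sum(counts.values())
    return {bucket: n / total for bucket, n in counts.items()}

def l1_distance(this_week: dict, last_week: dict) -> float:
    """Sum of absolute per-bucket share differences: the shift magnitude."""
    buckets = set(this_week) | set(last_week)
    return sum(abs(this_week.get(b, 0.0) - last_week.get(b, 0.0))
               for b in buckets)

# Using the shares from the example finding below: 30-45 min blocks
# moving from 22% to 48% of sessions yields a magnitude of 0.52.
magnitude = l1_distance({"30-45": 0.48, "other": 0.52},
                        {"30-45": 0.22, "other": 0.78})
```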

Per-bucket metrics — fatigue average, productive density, degradation risk — are then compared between the new dominant bucket and the old one. If the new dominant bucket is worse on these metrics, the finding leans negative. If it is better, the finding leans positive.
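That comparison can be sketched as a vote over the three tracked metrics, flipping the sign for productive density since higher is better there. This is my reading of the rule above, not confirmed internals:

```python
# For each metric, True means an increase is bad in the new dominant bucket.
HIGHER_IS_WORSE = {
    "fatigue_average": True,
    "degradation_risk": True,
    "productive_density": False,
}

def finding_lean(new_bucket: dict, old_bucket: dict) -> str:
    """Compare the new dominant bucket's metrics against the old one's.
    Returns "negative" if at least two metrics moved the wrong way."""
    worse = 0
    for metric, bad_if_higher in HIGHER_IS_WORSE.items():
        delta = new_bucket[metric] - old_bucket[metric]
        if delta != 0 and (delta > 0) == bad_if_higher:
            worse += 1
    return "negative" if worse >= 2 else "positive"
```

With the numbers from the example finding below (fatigue up 0.18, productive density down 0.07), two metrics agree on the bad direction, so the lean is negative.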

Reading an attribution finding

A typical finding looks like this: "session-duration shift, probable: 30-45 minute blocks rose from 22% to 48% of sessions; fatigue average rose 0.18 in those buckets; productive density fell 0.07."

Read it as a signed claim, not a verdict. The shift happened. The metrics moved with it. The finding is graded "probable" because at least two of the three tracked metrics agreed on the direction. It is not telling you longer blocks made you tired. It is telling you the data is consistent with that, and worth a one-week experiment.

When attribution earns the "confirmed" call

The cleanest path to a confirmed finding is to label your experiment up front. Tell Sarenica you are running a "30 minute block week" and let it compare that week against the prior weeks where blocks were longer.

Without the label, attribution stays at probable at best, even if the data is overwhelming. That is intentional. Causation is a strong claim, and a weekly report should not make strong claims off observational data alone.
