Reading Your Weekly Progress Report
A weekly progress report is not a dashboard. It is a structured read of last week with a recommendation, a confidence band, and an attribution call when patterns shifted.
What a weekly report is for
Sarenica generates a weekly progress report every Monday morning, looking back at the previous calendar week of tracked sessions. It is not a dashboard; it is a written read.
The job is to compress roughly 30 to 50 sessions into one decision for the next week: where do I work, when do I avoid demanding blocks, and what should I try to change?
Everything in the report is structured to answer that. Every section either supports the decision or tells you why the report is not confident enough to make one yet.
Window and confidence band
The header carries two facts: the date window covered, and a confidence band. The window is always the previous full week in your local timezone. The confidence band is one of strong, moderate, directional, or insufficient.
A directional or insufficient band means the report saw too few reliable minutes to make a hard call on patterns. Treat those weeks as context, not as conclusions. Strong and moderate weeks earn a recommendation worth acting on.
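To make the band concrete, here is a minimal sketch of how a band could fall out of reliably tracked minutes. The minute cutoffs are illustrative assumptions, not Sarenica's actual thresholds.

```python
# Hypothetical sketch: map reliably tracked minutes to a confidence band.
# The cutoffs are assumptions for illustration, not documented values.
def confidence_band(reliable_minutes: int) -> str:
    if reliable_minutes < 120:
        return "insufficient"
    if reliable_minutes < 360:
        return "directional"
    if reliable_minutes < 720:
        return "moderate"
    return "strong"

print(confidence_band(400))  # -> "moderate"
```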
Best block, riskiest hour, top driver
Three tiles sit near the top of every report. They are the three numbers worth remembering for the week.
Best block is the session-length bucket where your fatigue stayed lowest and productive density stayed highest. For most people this lands at 15-30 minutes; some users hold strong through 30-45.
Riskiest hour is the hour-of-day where fatigue burden was highest. Midday and late afternoon are the most common; pre-lunch sometimes shows up if you start the day late.
Top driver is the single signal Sarenica believes most explains the week. It might be session-duration shift, posture burden, eye strain, or recovery context. It is the answer to "if I had to fix one thing".
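As a rough illustration of how the first two tiles fall out of the week's data, the sketch below groups sessions by length bucket and by hour. The session fields (duration_min, start_hour, fatigue, productive_min) are hypothetical, not Sarenica's actual schema.

```python
# Rough sketch of how the first two tiles could be computed from a week of
# sessions. Field names are hypothetical, not Sarenica's actual schema.
from collections import defaultdict
from statistics import mean

SESSIONS = [
    {"duration_min": 25, "start_hour": 9,  "fatigue": 0.21, "productive_min": 22},
    {"duration_min": 40, "start_hour": 14, "fatigue": 0.52, "productive_min": 28},
    {"duration_min": 20, "start_hour": 10, "fatigue": 0.18, "productive_min": 19},
]

def length_bucket(minutes):
    """Map a session duration to the report's coarse buckets."""
    for upper, label in [(15, "0-15"), (30, "15-30"), (45, "30-45")]:
        if minutes <= upper:
            return label
    return "45+"

def best_block(sessions):
    """Bucket with the lowest mean fatigue, ties broken by productive density."""
    buckets = defaultdict(list)
    for s in sessions:
        buckets[length_bucket(s["duration_min"])].append(s)
    return min(
        buckets.items(),
        key=lambda kv: (
            mean(s["fatigue"] for s in kv[1]),
            -mean(s["productive_min"] / s["duration_min"] for s in kv[1]),
        ),
    )[0]

def riskiest_hour(sessions):
    """Hour of day with the highest mean fatigue."""
    by_hour = defaultdict(list)
    for s in sessions:
        by_hour[s["start_hour"]].append(s["fatigue"])
    return max(by_hour.items(), key=lambda kv: mean(kv[1]))[0]

print(best_block(SESSIONS), riskiest_hour(SESSIONS))  # e.g. "15-30" and 14
```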
Hour-by-hour fatigue and the high-strain band
Below the tiles is a chart of average fatigue across the hours you tracked, with a shaded band marking the high-strain zone. The point of the band is not to alarm you; it is to give the eye an anchor.
Spikes that touch the band are the hours your weekly recommendation will tend to push you away from. A clean run of bars below the band is a quietly good week, even if no single hour is impressive.
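The same hypothetical session shape makes that read easy to sketch: average the fatigue per tracked hour, then flag the hours that reach the high-strain zone. The 0.6 threshold is an assumed placeholder, not a documented value.

```python
# Sketch of the band read: mean fatigue per tracked hour, flagging hours that
# reach the high-strain zone. The 0.6 threshold is an assumed placeholder.
from collections import defaultdict
from statistics import mean

HIGH_STRAIN = 0.6  # assumed band threshold, for illustration only

def hours_to_avoid(sessions):
    by_hour = defaultdict(list)
    for s in sessions:
        by_hour[s["start_hour"]].append(s["fatigue"])
    return sorted(h for h, vals in by_hour.items() if mean(vals) >= HIGH_STRAIN)

print(hours_to_avoid([
    {"start_hour": 9,  "fatigue": 0.2},
    {"start_hour": 15, "fatigue": 0.7},
    {"start_hour": 15, "fatigue": 0.6},
]))  # -> [15]
```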
What to notice — the findings list
Below the chart is a short bulleted list of findings. These are the report's own interpretation: usually two to four short sentences calling out the strongest patterns it saw.
They are written conservatively. You will see "associated with" or "tracked alongside" rather than "caused". That is intentional. The signals support correlation cleanly; causation needs an experiment.
Change attribution: what shifted, not what caused
If you have at least one prior weekly report saved, the report adds a change-attribution section. It compares this week against last week and names the most likely contributor to any metric shift.
A typical entry reads something like "session duration shift, directional: 30-45 minute blocks rose from 22% to 48% of sessions; fatigue rose 0.18 in those buckets". It will not claim causation. It will name the magnitude, the direction, and the confidence.
Confidence escalates only when at least two metrics agree on the direction of the shift. An explicit experiment label can promote a finding to "confirmed", but for normal weekly reads, "directional" or "probable" is the honest top end.
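Here is a minimal sketch of that escalation rule, with illustrative metric names; the labels themselves follow the report's own vocabulary.

```python
# Sketch of the escalation rule described above. The example metric names are
# illustrative; the labels mirror the report's vocabulary.
def attribution_confidence(metric_deltas, experiment_labelled=False):
    """metric_deltas maps a metric name to its signed change versus last week."""
    if experiment_labelled:
        return "confirmed"  # only an explicit experiment label earns this
    rising = sum(1 for d in metric_deltas.values() if d > 0)
    falling = sum(1 for d in metric_deltas.values() if d < 0)
    # At least two metrics agreeing on direction escalates past "directional".
    return "probable" if max(rising, falling) >= 2 else "directional"

print(attribution_confidence({
    "share_of_30_45_blocks": +0.26,   # 22% -> 48%
    "fatigue_in_those_buckets": +0.18,
}))  # -> "probable"
```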
One decision for next week
The footer of every report carries a recommended verification: usually one concrete change to try for the next week, framed as an experiment.
That is what the report is really for. Not a feeling, not a score, not a dashboard. One decision, applied for a week, then read back against next Monday's report. That is how the system gets sharper over time, and it is how you get sharper too.