
Hints: Remove at the lowest cost

Hint 1: In one operation you delete the smaller value among two adjacent elements. What can you say about any element whose value is strictly less than the current global maximum of the whole array?
Answer to Hint 1: It will eventually be compared to something at least as large as the global maximum (possibly after other deletions). So it cannot be the last remaining element. The survivor’s value must equal $\max_i a_i$.
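To see this concretely, here is a tiny simulation sketch (not the model solution): resolving the first adjacent pair over and over, and always deleting the smaller value (the left one on ties), the survivor's value is the global maximum.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch for Hint 1: repeatedly resolve the first adjacent pair, deleting
// the smaller value (the left element on ties). Whatever strategy is used,
// the larger value of each compared pair survives, so the last element
// standing carries the global maximum value.
long long survivor(vector<long long> a) {
    while (a.size() > 1)
        a.erase(a.begin() + (a[0] <= a[1] ? 0 : 1));
    return a[0];
}
```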
Hint 3: Fix a contiguous subarray and suppose the rest of the array is already gone. Among positions where $a$ equals the maximum on that subarray, does the order between equal values matter for who survives inside that subarray before you move outward?
Answer to Hint 3: When two adjacent elements have the same value, you choose which one disappears, but you always pay the cheaper removal cost of the two. So ties are not "free structure": they still affect cost, but several equal maxima in a row behave like one block, since nothing strictly smaller in the range can remove any of them.
Hint 5: Ignore costs for a moment. On a segment $[l, r)$ of indices, let $M$ be the maximum $a$ on that segment, and collect every index in the segment with $a = M$. How does this partition the segment into smaller subsegments that sit between those maxima (and to the left of the first / right of the last)?
Answer to Hint 5: The maxima split the segment into disjoint subsegments strictly between consecutive max indices (and the prefix/suffix before the first / after the last max). Recursing on each piece builds a tree: a node “owns” a maximal block of equal $a$ at the maximum of its range, and its children are the recursion on those gaps. The root corresponds to the global maximum value on $[0, n)$.
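A minimal sketch of this recursion, with a hypothetical `Node` layout (illustrative $O(n^2)$ code; the hinted solution replaces the linear scans with a range structure):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch for Hint 5: node v owns every position of segment [l, r) whose
// value equals the segment maximum; the gaps between (and around) those
// positions are recursed on and become v's children.
struct Node { vector<int> idx; vector<int> kids; };

int build(const vector<long long>& a, int l, int r, vector<Node>& t) {
    if (l >= r) return -1;
    long long M = *max_element(a.begin() + l, a.begin() + r);
    int v = (int)t.size();
    t.push_back({});
    int prev = l;
    for (int i = l; i < r; i++)
        if (a[i] == M) {
            t[v].idx.push_back(i);                  // glued into node v
            int c = build(a, prev, i, t);           // gap before this max
            if (c != -1) t[v].kids.push_back(c);
            prev = i + 1;
        }
    int c = build(a, prev, r, t);                   // suffix after last max
    if (c != -1) t[v].kids.push_back(c);
    return v;
}
```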
Hint 7: For each tree node $v$, let $\mathrm{cnt}(v)$ be how many indices are glued into $v$ (all positions in that range with value equal to the segment maximum). Let $\mathrm{mn}(v)$ be the minimum $c_i$ among indices assigned to $v$. Why might the minimum cost on a root-to-$v$ path matter more than $\mathrm{mn}(v)$ alone?

Answer to Hint 7: Along the recursion, a cheaper removal cost can appear on an ancestor's block, and the "effective" per-step charge is driven by the cheapest $c$ seen on the path from the root down to $v$. So define

$$ C(v) = \min_{u \text{ on path root} \to v} \mathrm{mn}(u)\,. $$

That is exactly the quantity you propagate from parent to child when you take pointwise mins down the tree.

Argue (at least informally) that the total removal cost for the whole process can be written as a sum over tree nodes of $C(v) \cdot \mathrm{cnt}(v)$, minus one extra copy of $C(\mathrm{root})$, because exactly one element remains at the end and is never paid for. How does that match a "charge every node, then refund one $C(\mathrm{root})$ at the root" bookkeeping?
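One way to check this bookkeeping on small cases, assuming a hypothetical node layout carrying $\mathrm{mn}$, $\mathrm{cnt}$, and child indices:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the charging scheme: each node pays C(v) * cnt(v), where C(v)
// is the cheapest mn on the root->v path; one copy of C(root) is refunded
// for the single survivor. TNode is a hypothetical layout.
struct TNode { long long mn, cnt; vector<int> kids; };

long long charged(const vector<TNode>& t, int v, long long pathMin) {
    long long C = min(pathMin, t[v].mn);        // propagate pointwise min
    long long s = C * t[v].cnt;
    for (int c : t[v].kids) s += charged(t, c, C);
    return s;
}

// total cost before any zeroing, with the survivor refund at the root
long long baseCost(const vector<TNode>& t, int root) {
    return charged(t, root, LLONG_MAX) - t[root].mn;
}
```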
Hint 10: Now add zeroings in order: after step $i$ (0-based), position $p_i$ has $c_{p_i} = 0$ forever. For each original index $j$, let $T(j)$ be the first step $i$ at which $j$ is zeroed. For a tree node $v$, let $R(v) = \min_{j \in v} T(j)$ (or $\infty$ if none yet). Why is the relevant "first time this path gets a free lunch" the minimum of $R(u)$ over $u$ on the path root $\to v$, not just $R(v)$?

Answer to Hint 10: As soon as some ancestor’s block has a zeroed index, the propagated minimum $C(\cdot)$ along that path can drop earlier than the first zero inside $v$ itself. So define

$$ T^*(v) = \min_{u \text{ on path root} \to v} R(u)\,, $$

and propagate this minimum downward exactly like $C(v)$.

Hint 12: Bucket each node’s contribution $C(v)\cdot \mathrm{cnt}(v)$ (with the root’s $-C(\mathrm{root})$ fix) into an array by the time index $T^*(v)$ (treat $\infty$ as “after everything,” i.e. index $n$). If $\mathrm{add}[k]$ is the total bucketed at time $k$, why does the answer after the first $i$ zeroings equal $\sum_{k \ge i} \mathrm{add}[k]$ — i.e. a suffix sum over $k$?
Answer to Hint 12: A contribution that is “priced” with the old min-cost is avoided only once the corresponding path’s min hits zero, which first happens at step $T^*(v)$. Everything scheduled at times $\ge i$ still counts toward the cost after $i$ zeroings; everything strictly before $i$ has already been absorbed into cheaper minima. Hence prefix answers come from suffix sums of the buckets.
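Putting Hints 7–12 together as one hedged sketch (`BNode` is a hypothetical layout carrying $\mathrm{mn}$, $\mathrm{cnt}$, the node's earliest zeroing time $R$, and children):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the bucketing: a DFS propagates the path minima C and T*,
// buckets C(v)*cnt(v) at time T*(v), applies the -C(root) fix in the
// root's bucket, then suffix-sums to produce the n+1 answers.
struct BNode { long long mn, cnt; int R; vector<int> kids; };

vector<long long> answers(const vector<BNode>& t, int root, int n) {
    vector<long long> add(n + 1, 0);           // T* = infinity -> bucket n
    vector<tuple<int, long long, int>> st;     // (node, path-min c, path-min time)
    st.push_back({root, LLONG_MAX, n});
    while (!st.empty()) {
        auto [v, pc, pt] = st.back();
        st.pop_back();
        long long C = min(pc, t[v].mn);
        int T = min(pt, t[v].R);
        add[T] += C * t[v].cnt;
        if (v == root) add[T] -= C;            // survivor refund
        for (int c : t[v].kids) st.push_back({c, C, T});
    }
    vector<long long> ans(n + 1);              // ans[i] = cost after i zeroings
    long long s = 0;
    for (int k = n; k >= 0; k--) { s += add[k]; ans[k] = s; }
    return ans;
}
```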
Hint 14 (implementation sketch): Build the tree with a range structure that repeatedly finds the maximum $a$ on $[l, r)$, pulls all positions equal to that maximum, recurses on the gaps, and assigns parents; compute $\mathrm{mn}$, $\mathrm{cnt}$, $R$ per node; propagate min for both $C$ and $T^*$ from root; accumulate $\mathrm{add}$, suffix-sum to print $n+1$ answers. What range query do you need so each index is processed only a few times across the whole recursion?
Answer to Hint 14: A segment tree (or similar) that supports range max and can deactivate positions once assigned to a node lets each recursive call find the current maximum and all ties in $O(\log n)$ amortized work per removal, which is enough for $O(n \log n)$ total when each index leaves the structure once.
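A compact range-max tree with deactivation, as a sketch of the structure this hint asks for: query the maximum on a range, extract every tied position, then deactivate each extracted index so it never participates again.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Iterative range-max segment tree; deactivate(i) drops index i to -inf so
// each index leaves the structure exactly once across the recursion.
struct SegTree {
    int n;
    vector<long long> mx;
    SegTree(const vector<long long>& a) : n((int)a.size()), mx(2 * a.size(), LLONG_MIN) {
        for (int i = 0; i < n; i++) mx[n + i] = a[i];
        for (int i = n - 1; i > 0; i--) mx[i] = max(mx[2 * i], mx[2 * i + 1]);
    }
    long long query(int l, int r) {            // max on the half-open [l, r)
        long long res = LLONG_MIN;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = max(res, mx[l++]);
            if (r & 1) res = max(res, mx[--r]);
        }
        return res;
    }
    void deactivate(int i) {                   // remove index i for good
        mx[i + n] = LLONG_MIN;
        for (i = (i + n) >> 1; i >= 1; i >>= 1)
            mx[i] = max(mx[2 * i], mx[2 * i + 1]);
    }
};
```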

Alternative solution (monotonic stack, sort by $c$, DSU)

The model program model_sol_2.cpp builds the same cost structure as the tree decomposition, but does so by scanning indices in increasing removal cost and painting an interval for each index. The hints below follow that file.

Alt. Hint 1: Run a monotone stack on $a$ left to right: pop while $a[\text{top}] < a[i]$, setting $r[\text{top}] = i$; after pops, if the stack is nonempty, compare $a[\text{top}]$ with $a[i]$. If $a[\text{top}] > a[i]$, set $l[i] = \text{top}$; if they are equal, set $l[i] = l[\text{top}]$ (inherit the same left anchor). Push $i$. What geometric picture do $l[i]$ and $r[i]$ describe for each position $i$?
Answer to Alt. Hint 1: $l[i]$ is the index of the previous strictly larger $a$ (or $-1$), after compressing equal values to the same left chain. $r[i]$ is the index of the next strictly larger $a$ (or $n$). On the open interval $(l[i], r[i])$, the value $a[i]$ is a record maximum when you sweep that interval — it is the Cartesian-tree neighborhood you need for “who can eliminate whom” locally.
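A compact version of this stack pass, following the hint's conventions ($l[i] = -1$ and $r[i] = n$ when no strictly larger value exists on that side):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Monotone-stack sketch: l[i] = previous strictly larger position (equal
// values inherit the same left anchor), r[i] = next strictly larger position.
pair<vector<int>, vector<int>> anchors(const vector<long long>& a) {
    int n = (int)a.size();
    vector<int> l(n, -1), r(n, n), st;
    for (int i = 0; i < n; i++) {
        while (!st.empty() && a[st.back()] < a[i]) {
            r[st.back()] = i;                   // i is the next larger value
            st.pop_back();
        }
        if (!st.empty())
            l[i] = (a[st.back()] > a[i]) ? st.back() : l[st.back()];
        st.push_back(i);
    }
    return {l, r};
}
```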

Alt. Hint 3: For a fixed index $x$, consider the closed index range

$$ [L, R] = [l[x] + 1,\, r[x] - 1]\,. $$

(If $l[x] = -1$ and $r[x] = n$, this is the whole array.) Why is it exactly the set of positions that “see” $x$ as the relevant maximum in the stack picture before you think about costs?

Answer to Alt. Hint 3: Indices in $[L, R]$ are precisely those lying between the previous and next strictly larger values than $a[x]$; inside that band, no larger $a$ appears, so $a[x]$ participates in the same maximal band as its duplicates and governs that contiguous block in the max-recursion / Cartesian view.
Alt. Hint 5: Sort indices $x$ by increasing $c_x$. Process them in that order. Maintain an array $\mathrm{cost}[i]$ (initially “unset”) and a DSU that points each $i$ to the next not-yet-set index (classic “union with $i+1$” skip list). When handling $x$, walk $i$ from $\mathrm{find}(L)$ while $i \le R$, set $\mathrm{cost}[i] \gets c_x$, and union $i$ with $i+1$. What invariant does this enforce about which $c_x$ wins at each position $i$?
Answer to Alt. Hint 5: The first time position $i$ is visited, it receives the minimum $c$ among all indices $x$ whose interval $[l[x]+1, r[x]-1]$ contains $i$. Later, larger $c$ never overwrite it. So $\mathrm{cost}[i]$ is the cheapest removal price that “covers” $i$ in this greedy painting — the same path-minimum structure as in the tree formulation, but built bottom-up by cost.
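A hedged sketch of this painting pass (`paint` is a name chosen here, not taken from model_sol_2.cpp), assuming $l$ and $r$ come from the monotone-stack hint:

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU "next unset index" painting: processing indices by increasing c,
// each x paints cost[i] = c_x on the still-unset part of [l[x]+1, r[x]-1],
// so every position ends up with the cheapest c that covers it.
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
};

vector<long long> paint(const vector<long long>& c,
                        const vector<int>& l, const vector<int>& r) {
    int n = (int)c.size();
    vector<int> ord(n);
    iota(ord.begin(), ord.end(), 0);
    sort(ord.begin(), ord.end(), [&](int x, int y) { return c[x] < c[y]; });
    vector<long long> cost(n, -1);
    DSU d(n + 1);                               // sentinel n so find never overruns
    for (int x : ord)
        for (int i = d.find(l[x] + 1); i <= r[x] - 1; i = d.find(i)) {
            cost[i] = c[x];                     // first visit wins: minimum c
            d.p[i] = i + 1;                     // skip i from now on
        }
    return cost;
}
```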
Alt. Hint 7: Let $S = \sum_i \mathrm{cost}[i]$ and let $M = \max_i \mathrm{cost}[i]$ (the code keeps a multiset of all $\mathrm{cost}[i]$ and also inserts $0$ so the maximum is well-defined if needed). Why is the minimum total paid removal cost before any zeroing equal to $S - M$?
Answer to Alt. Hint 7: Every index pays its assigned $\mathrm{cost}[i]$ when it is removed except the one survivor: along the optimal play implied by this assignment, the last remaining index is charged one fewer paid removal than the sum suggests. Equivalently, total $=$ sum of per-position charges minus the largest per-position charge (the multiset max, matching the root’s $-C(\mathrm{root})$ correction in the tree solution).
Alt. Hint 9: For the online zeroings, reset the DSU and re-use the same $l, r$. After the first phase you know each $\mathrm{cost}[i]$. Process $k = 0, 1, \ldots, n-1$: let $j = p_k$ be the index whose removal cost becomes $0$. Walk $i \in [l[j]+1, r[j]-1]$ with the same DSU skipping, but now clear any still-positive $\mathrm{cost}[i]$: subtract it from the running sum and erase it from the multiset, then set $\mathrm{cost}[i] = 0$ and union $i$ with $i+1$. Why is it valid to clear whole intervals repeatedly instead of only position $j$?
Answer to Alt. Hint 9: Zeroing $c_j$ makes $j$ “free” in every future comparison involving that block’s economics; the same greedy interval structure implies that every index in $j$’s $(l[j], r[j])$ window that still carried a positive inherited charge can drop to $0$ simultaneously without changing the min-cost assignment rule — the multiset and sum then reflect all remaining paid removals. After each step, print $S - \max(\text{multiset})$ as in the initial formula.
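A sketch of this sweep (`zeroings` is a name chosen here; it returns the answer before any step followed by one answer per processed step):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Zeroing sweep: a fresh "next not-yet-cleared" skip-list DSU ensures each
// position is cleared at most once; a multiset of remaining charges (plus a
// 0 sentinel) supplies the survivor refund, so each answer is S - max.
vector<long long> zeroings(vector<long long> cost, const vector<int>& l,
                           const vector<int>& r, const vector<int>& p) {
    int n = (int)cost.size();
    vector<int> nxt(n + 1);
    iota(nxt.begin(), nxt.end(), 0);
    function<int(int)> find = [&](int x) {
        return nxt[x] == x ? x : nxt[x] = find(nxt[x]);
    };
    multiset<long long> ms(cost.begin(), cost.end());
    ms.insert(0);                                // keeps the max well-defined
    long long S = accumulate(cost.begin(), cost.end(), 0LL);
    vector<long long> out = {S - *ms.rbegin()};  // answer before any zeroing
    for (int j : p) {                            // c_j becomes 0 at this step
        for (int i = find(l[j] + 1); i <= r[j] - 1; i = find(i)) {
            S -= cost[i];
            ms.erase(ms.find(cost[i]));          // drop one paid charge
            cost[i] = 0;
            nxt[i] = i + 1;                      // never revisit i
        }
        out.push_back(S - *ms.rbegin());
    }
    return out;
}
```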
Alt. Hint 11: Complexity check: each index is assigned once in the increasing-$c$ sweep and cleared at most once in the zeroing sweeps because of DSU jumps. What total time does this yield?
Answer to Alt. Hint 11: $O(n \log n)$ from sorting by $c$, plus $O(n \,\alpha(n))$ for DSU traversals, and $O(n \log n)$ for multiset updates if you use a balanced structure — acceptable for $\sum n \le 2 \cdot 10^5$.