Monotonic Stack

April 01, 2026

Table of Contents

Overview
What a Monotonic Stack Actually Is
Why Monotonic Stacks Matter
Increasing vs. Decreasing Monotonic Stacks
Core Template
Next Greater Element
Next Smaller Element
Previous Greater / Previous Smaller
Classic Use Case: Largest Rectangle in Histogram
Classic Use Case: Daily Temperatures
Amortized Time Complexity (Why It’s Still O(n))
Equality Nuances: < vs <= and > vs >=
Monotonic Stack vs. Heap vs. Deque
Common Pitfalls (and Fixes)
How to Recognize a Monotonic Stack Problem
Conclusion
Key Takeaways

Overview
^

A monotonic stack is a stack that maintains its elements in either monotonically increasing or monotonically decreasing order. It is one of the most useful linear-time tools for solving array problems involving:

  • next greater / next smaller
  • previous greater / previous smaller
  • nearest boundary
  • range contribution
  • histogram / rectangle problems

The core power of the monotonic stack is that it lets you discard elements that are no longer relevant, while preserving just enough structure to answer “nearest valid candidate” queries in O(n) time.

What a Monotonic Stack Actually Is
^

A regular stack only enforces LIFO order. A monotonic stack enforces LIFO order plus value ordering.

For example:

  • Monotonic increasing stack: values increase from bottom to top
  • Monotonic decreasing stack: values decrease from bottom to top

In practice, you usually store indices rather than raw values, so you can access both:

  • the value (arr[stack[-1]])
  • the position (for distance / span calculations)

Why Monotonic Stacks Matter
^

Without a monotonic stack, many “nearest greater/smaller” problems are O(n²):

  • for each element, scan left or right until you find what you need

With a monotonic stack:

  • each element is pushed once
  • each element is popped at most once
  • total work stays O(n)

That is the key insight.

Increasing vs. Decreasing Monotonic Stacks
^

Increasing stack
  • Keeps: smallest values near the bottom, larger near the top
  • Common use: find next smaller / previous smaller
  • Pop condition: pop while the current value is smaller (or smaller/equal)

Decreasing stack
  • Keeps: largest values near the bottom, smaller near the top
  • Common use: find next greater / previous greater
  • Pop condition: pop while the current value is greater (or greater/equal)

The precise pop condition depends on whether duplicates should be kept or collapsed. That nuance matters a lot.
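To see the ordering concretely, here is a small illustrative trace of a decreasing stack (the function name is hypothetical, and values are stored instead of indices purely for readability):

```python
def trace_decreasing(nums):
    """Return a snapshot of the stack after each push (values, for readability)."""
    stack, snapshots = [], []
    for x in nums:
        while stack and stack[-1] < x:  # pop anything smaller than the incoming value
            stack.pop()
        stack.append(x)
        snapshots.append(list(stack))
    return snapshots

for snap in trace_decreasing([5, 3, 4, 1, 6]):
    print(snap)
# [5]
# [5, 3]
# [5, 4]
# [5, 4, 1]
# [6]
```

Notice that the largest value always sits at the bottom. An increasing stack is the mirror image, with the comparison flipped.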

Core Template
^

The generic pattern:

def monotonic_stack_template(nums):
    stack = []  # usually stores indices
    result = [-1] * len(nums)

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            j = stack.pop()
            result[j] = i   # or x, or i - j, depending on problem
        stack.append(i)

    return result

Key moving parts:

  • what ordering the stack maintains

  • what condition causes pops

  • what you do when popping

  • what you store in result:

    • next index
    • next value
    • distance
    • contribution span
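As a quick usage sketch, running the template exactly as written (and reproduced here so the snippet is self-contained), the result holds the index of each element's next greater value:

```python
def monotonic_stack_template(nums):
    stack = []  # usually stores indices
    result = [-1] * len(nums)

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            j = stack.pop()
            result[j] = i   # here: the index of the resolving element
        stack.append(i)

    return result

print(monotonic_stack_template([2, 1, 2, 4, 3]))  # [3, 2, 3, -1, -1]
```

Swapping `result[j] = i` for `result[j] = x` or `result[j] = i - j` yields the value or distance variants shown in the sections below.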

Next Greater Element
^

Given an array, find the next element to the right that is greater.

def next_greater_element(nums):
    n = len(nums)
    ans = [-1] * n
    stack = []  # decreasing stack of indices

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            j = stack.pop()
            ans[j] = x
        stack.append(i)

    return ans

Example:

nums = [2, 1, 2, 4, 3]
# answer = [4, 2, 4, -1, -1]

Why it works:

  • elements waiting for a greater value stay on the stack
  • when a larger value appears, it resolves as many pending elements as possible

Next Smaller Element
^

Same idea, but flipped.

def next_smaller_element(nums):
    n = len(nums)
    ans = [-1] * n
    stack = []  # increasing stack of indices

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] > x:
            j = stack.pop()
            ans[j] = x
        stack.append(i)

    return ans
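Mirroring the earlier example, the same input produces the following (function reproduced so the snippet runs standalone):

```python
def next_smaller_element(nums):
    ans = [-1] * len(nums)
    stack = []  # increasing stack of indices

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] > x:
            ans[stack.pop()] = x
        stack.append(i)

    return ans

print(next_smaller_element([2, 1, 2, 4, 3]))  # [1, -1, -1, 3, -1]
```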

This is common in:

  • histogram problems
  • stock span variations
  • subarray minimum / maximum contributions

Previous Greater / Previous Smaller
^

For “previous” queries, the stack already contains candidates from the left.

Previous greater

def previous_greater(nums):
    ans = [-1] * len(nums)
    stack = []  # decreasing stack of indices

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] <= x:
            stack.pop()
        if stack:
            ans[i] = nums[stack[-1]]
        stack.append(i)

    return ans

Previous smaller

def previous_smaller(nums):
    ans = [-1] * len(nums)
    stack = []  # increasing stack of indices

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] >= x:
            stack.pop()
        if stack:
            ans[i] = nums[stack[-1]]
        stack.append(i)

    return ans

The key distinction:

  • for “next,” popped elements get their answer when a resolving element appears
  • for “previous,” the current stack top is already the answer after popping invalid candidates
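A quick check on the same input illustrates the distinction (previous_greater reproduced so the snippet is self-contained):

```python
def previous_greater(nums):
    ans = [-1] * len(nums)
    stack = []  # decreasing stack of indices

    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] <= x:
            stack.pop()
        if stack:
            ans[i] = nums[stack[-1]]  # stack top is the answer after invalid pops
        stack.append(i)

    return ans

print(previous_greater([2, 1, 2, 4, 3]))  # [-1, 2, -1, -1, 4]
```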

Classic Use Case: Largest Rectangle in Histogram
^

This is one of the canonical monotonic stack problems.

Idea:

  • when a bar is popped, we now know:

    • the first smaller bar on the right
    • the first smaller bar on the left (new stack top)
  • that determines the widest rectangle where the popped bar is the limiting height

def largest_rectangle_area(heights):
    stack = []
    max_area = 0
    heights.append(0)  # flush remaining bars

    for i, h in enumerate(heights):
        while stack and heights[stack[-1]] > h:
            height = heights[stack.pop()]
            left = stack[-1] if stack else -1
            width = i - left - 1
            max_area = max(max_area, height * width)
        stack.append(i)

    heights.pop()
    return max_area

This is a great example of a monotonic stack doing more than “next greater”—it’s really identifying dominance boundaries.
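As a sanity check on a classic input: in [2, 1, 5, 6, 2, 3], the bars of height 5 and 6 form a width-2 rectangle limited by height 5, giving area 10. This variant copies the list so the caller's input is not mutated:

```python
def largest_rectangle_area(heights):
    stack, max_area = [], 0
    heights = heights + [0]  # sentinel flushes remaining bars (copy avoids mutating input)

    for i, h in enumerate(heights):
        while stack and heights[stack[-1]] > h:
            height = heights[stack.pop()]
            left = stack[-1] if stack else -1   # first smaller bar on the left
            max_area = max(max_area, height * (i - left - 1))
        stack.append(i)

    return max_area

print(largest_rectangle_area([2, 1, 5, 6, 2, 3]))  # 10
```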

Classic Use Case: Daily Temperatures
^

Given temperatures, for each day find how many days until a warmer temperature.

def daily_temperatures(temps):
    n = len(temps)
    ans = [0] * n
    stack = []  # decreasing stack of indices

    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()
            ans[j] = i - j
        stack.append(i)

    return ans

Here the result is not the next value itself, but the distance to the next greater value.

That’s a common monotonic stack variation.
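On the well-known temperature example, the function above (reproduced here so the snippet is runnable) produces:

```python
def daily_temperatures(temps):
    ans = [0] * len(temps)
    stack = []  # decreasing stack of indices

    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()
            ans[j] = i - j   # distance, not the value itself
        stack.append(i)

    return ans

print(daily_temperatures([73, 74, 75, 71, 69, 72, 76, 73]))
# [1, 1, 4, 2, 1, 1, 0, 0]
```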

Amortized Time Complexity (Why It’s Still O(n))
^

At first glance, the inner while loop looks scary and suggests O(n²).

But each element:

  • gets pushed once
  • gets popped at most once

So across the whole algorithm:

  • total pushes = n
  • total pops = at most n

Therefore total time is O(n).

This is a classic amortized analysis result.
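One way to convince yourself is to instrument the loop and count operations directly (a hypothetical helper, purely for illustration). Even on an input that triggers a pop at every step, the totals stay bounded by 2n:

```python
def count_operations(nums):
    """Run the next-greater loop and count total pushes and pops."""
    pushes = pops = 0
    stack = []
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            stack.pop()
            pops += 1
        stack.append(i)
        pushes += 1
    return pushes, pops

print(count_operations([1, 2, 3, 4, 5]))  # (5, 4): n pushes, at most n pops
```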

Equality Nuances: < vs <= and > vs >=
^

This is one of the most important nuances.

When duplicates exist, you must decide:

  • should equal values stay on the stack?
  • or should one of them absorb the other?

Example distinction

Suppose you want next greater:

  • while nums[stack[-1]] < x means equal values remain
  • while nums[stack[-1]] <= x means equal values get popped too

That changes:

  • tie-breaking
  • ownership of spans
  • duplicate handling in contribution problems

Strict (< / >)
  • Effect: keeps equal values on the stack
  • Typical use: when duplicates should remain distinct

Non-strict (<= / >=)
  • Effect: collapses equal values
  • Typical use: when you need consistent tie-breaking or unique span ownership

For problems like:

  • sum of subarray minimums
  • sum of subarray maximums

…the strict vs non-strict inequality choice is absolutely critical.
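A minimal sketch of the difference, using a hypothetical next_greater with a strict flag. With duplicates, the choice changes which element "owns" the tie:

```python
def next_greater(nums, strict=True):
    # strict=True: pop only on nums[top] < x, so equal values stay on the stack.
    # strict=False: pop on nums[top] <= x, so equal values collapse.
    ans = [-1] * len(nums)
    stack = []
    for i, x in enumerate(nums):
        while stack and (nums[stack[-1]] < x if strict else nums[stack[-1]] <= x):
            ans[stack.pop()] = x
        stack.append(i)
    return ans

print(next_greater([2, 2, 3], strict=True))   # [3, 3, -1]: both 2s wait for 3
print(next_greater([2, 2, 3], strict=False))  # [2, 3, -1]: first 2 resolved by its duplicate
```

In the non-strict version, "next greater" has effectively become "next greater-or-equal", which is exactly the kind of semantic shift that decides correctness in contribution-counting problems.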

Monotonic Stack vs. Heap vs. Deque
^

These tools are often confused, but they solve different problems.

Monotonic stack
  • Best for: nearest boundary / next or previous greater-smaller
  • Strength: O(n) nearest-relationship problems
  • Weakness: only works for certain one-pass structural patterns

Heap
  • Best for: global min/max retrieval
  • Strength: flexible priority access
  • Weakness: does not preserve nearest-position relationships

Monotonic deque
  • Best for: sliding window min/max
  • Strength: O(n) over moving windows
  • Weakness: different problem shape; handles expirations explicitly

The big conceptual difference:

  • stack answers nearest unresolved candidate
  • heap answers global best candidate
  • deque answers best candidate within an active window
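For contrast, a minimal sketch of the monotonic deque pattern for sliding window maximum (function name and input are illustrative). It is the same "pop dominated candidates" idea, plus explicit expiration at the front:

```python
from collections import deque

def sliding_window_max(nums, k):
    dq, out = deque(), []  # dq holds indices; their values are decreasing
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:   # drop candidates dominated by x
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:                # expire the index that left the window
            dq.popleft()
        if i >= k - 1:
            out.append(nums[dq[0]])       # front of deque is the window max
    return out

print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]
```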

Common Pitfalls (and Fixes)
^

Storing values instead of indices
  • Why it happens: you lose distance and position information
  • Fix: store indices unless the problem truly only needs values

Using the wrong inequality
  • Why it happens: duplicates behave differently than expected
  • Fix: think carefully about strict vs non-strict comparisons

Forgetting final cleanup
  • Why it happens: remaining stack elements never get resolved
  • Fix: handle leftovers explicitly or append a sentinel

Choosing the wrong monotonic direction
  • Why it happens: “next greater” vs “next smaller” gets inverted
  • Fix: decide what should stay on the stack and what should be popped

Assuming uniqueness
  • Why it happens: multiple valid nearest candidates may exist with duplicates
  • Fix: define tie-breaking behavior before coding

How to Recognize a Monotonic Stack Problem
^

You should think “monotonic stack” when a problem asks:

  • next greater / smaller
  • previous greater / smaller
  • nearest boundary where a condition breaks
  • for each element, find the first element to the left/right satisfying something
  • histogram area / contribution span
  • linear-time solution over array with nearest comparisons

If the brute force solution is:

  • “for each element, scan left/right until …”

…then monotonic stack is often the intended optimization.

Conclusion
^

Monotonic stacks are one of the most elegant linear-time tools in algorithm design. They turn repeated left/right scanning into a single pass by maintaining only the candidates that still matter. The main subtleties are not in the stack itself, but in the direction, strictness of comparisons, and what to do when elements are popped. Once you internalize those nuances, a whole class of “nearest greater/smaller” problems becomes much more systematic.

Key Takeaways

  • A monotonic stack is a stack that maintains increasing or decreasing order.
  • It is ideal for next/previous greater/smaller and nearest boundary problems.
  • Most problems should store indices, not raw values.
  • The biggest nuance is usually strict vs non-strict inequality with duplicates.
  • Despite the nested while loop, the total runtime is typically O(n) by amortized analysis.