must build a priority queue of length from + size, all of which need to be passed back to the coordinating node. The coordinating node then needs to sort through number_of_shards * (from + size) documents in order to find the correct size documents.
▸ With big-enough from values, the sorting process can become very heavy indeed, using vast amounts of CPU, memory, and bandwidth.
▸ For this reason, we strongly advise against deep paging.
▸ As an alternative, we can use the Scan & Scroll API for deep pagination, as sketched below.
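A minimal sketch contrasting the two approaches with the Python elasticsearch client; the host URL, index name my_index, and the match_all query are placeholders, and exact parameter names can differ between client versions:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

# Placeholder connection and index name, for illustration only.
es = Elasticsearch("http://localhost:9200")

# Deep paging with from/size: every shard builds a from + size priority queue,
# and the coordinating node re-sorts number_of_shards * (from + size) hits.
deep_page = es.search(
    index="my_index",
    body={"query": {"match_all": {}}, "from": 10000, "size": 10},
)

# Scan & Scroll alternative: a scroll context acts as a forward-only cursor,
# so each batch fetches only the next size hits per shard instead of
# re-sorting everything seen so far.
for hit in scan(
    es,
    index="my_index",
    query={"query": {"match_all": {}}},
    size=1000,      # hits fetched per shard per scroll batch
    scroll="2m",    # how long each scroll context is kept alive
):
    print(hit["_id"])  # replace with real processing
```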