parallel_tuple_cost=0.1 SELECT * FROM very_large_table WHERE …; We don’t want to use parallelism if a huge number of tuples will be transferred back to the leader process after only a small amount of work in the worker processes.
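A minimal sketch of how this cost parameter can be adjusted per session and its effect observed, assuming a table named `very_large_table` exists (the table name and predicate are placeholders):

```sql
-- Make transferring tuples from workers to the leader look expensive,
-- discouraging parallel plans that return many rows with little per-row work.
SET parallel_tuple_cost = 1.0;
EXPLAIN SELECT * FROM very_large_table WHERE some_column > 42;
-- With a high parallel_tuple_cost the planner tends to choose a serial Seq Scan.

-- Make tuple transfer look cheap, encouraging parallelism.
SET parallel_tuple_cost = 0.001;
EXPLAIN SELECT * FROM very_large_table WHERE some_column > 42;
-- Now a Gather over Parallel Seq Scan becomes more attractive.
```

The default of 0.1 means each tuple sent from a worker to the leader is costed at one tenth of `seq_page_cost`, which is what biases the planner away from parallel plans that merely funnel large result sets back to the leader.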
• If the data does not fit, it will first be broken up into separate partitions written out to disk, to be processed one at a time
• Problem: when work_mem is small, the set of partitions grows very large, and the number of files and per-partition buffers grows very large; the buffers themselves may consume more memory than is saved! This is an open problem.
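This batching behavior can be observed directly, a sketch assuming a hypothetical large table `big_fact` joined to `big_dim` (table and column names are placeholders):

```sql
-- With a small work_mem, the hash join must split its input into many batches
-- (partitions spilled to disk and processed one at a time).
SET work_mem = '1MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_fact f JOIN big_dim d ON f.dim_id = d.id;
-- The Hash node reports something like: Buckets: 65536  Batches: 64

-- With a larger work_mem, the whole hash table fits and no spilling occurs.
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_fact f JOIN big_dim d ON f.dim_id = d.id;
-- The Hash node now reports: Batches: 1
```

The "Batches" figure in the EXPLAIN ANALYZE output is the number of on-disk partitions; each batch needs its own temporary files and in-memory buffers, which is exactly the overhead the bullet above warns about when work_mem is very small.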