pyspark.RDD.mapPartitions
- RDD.mapPartitions(f, preservesPartitioning=False)
Return a new RDD by applying a function to each partition of this RDD.
New in version 0.7.0.
- Parameters
- f : function
a function to run on each partition of the RDD
- preservesPartitioning : bool, optional, default False
indicates whether the input function preserves the partitioner; this should be False unless this is a pair RDD and the input function does not modify the keys
- Returns
- RDD
a new RDD by applying a function to each partition of this RDD
See also
RDD.map
RDD.mapPartitionsWithIndex
Examples
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
...
>>> rdd.mapPartitions(f).collect()
[3, 7]
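The key point of mapPartitions is that f is called once per partition and receives an iterator over that partition's elements, rather than once per element as with map. The following is a minimal pure-Python sketch (no Spark required) of that semantics; map_partitions and the list-of-lists partition layout are illustrative stand-ins, not part of the PySpark API.

```python
def map_partitions(partitions, f):
    # Emulates RDD.mapPartitions: call f once per partition,
    # passing an iterator over that partition's elements,
    # and chain the results that f yields.
    for part in partitions:
        yield from f(iter(part))

def f(iterator):
    # Receives one whole partition as an iterator; yields a single sum.
    yield sum(iterator)

# [1, 2, 3, 4] split into 2 partitions, mirroring the doctest above
partitions = [[1, 2], [3, 4]]
print(list(map_partitions(partitions, f)))  # [3, 7]
```

Because f sees a whole partition at a time, this pattern is commonly used to amortize per-partition setup (opening a database connection, loading a model) across all elements of the partition instead of paying that cost per element.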