public class ZippedPartitionsRDD2<A,B,V> extends ZippedPartitionsBaseRDD<V>

An RDD that zips together the corresponding partitions of two parent RDDs and applies the function `f` to each pair of partition iterators to produce an output partition.
| Constructor and Description |
|---|
| `ZippedPartitionsRDD2(SparkContext sc, scala.Function2<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, RDD<A> rdd1, RDD<B> rdd2, boolean preservesPartitioning, scala.reflect.ClassTag<A> evidence$2, scala.reflect.ClassTag<B> evidence$3, scala.reflect.ClassTag<V> evidence$4)` |
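This constructor is rarely invoked directly: `RDD.zipPartitions` builds a `ZippedPartitionsRDD2` internally and supplies the `ClassTag` evidence parameters implicitly. A minimal sketch of the usual entry point, assuming an existing `SparkContext` named `sc` (the variable names are illustrative):

```scala
import org.apache.spark.rdd.RDD

// Two parent RDDs with the same number of partitions (required for zipping).
val rdd1: RDD[Int]    = sc.parallelize(1 to 6, numSlices = 3)
val rdd2: RDD[String] = sc.parallelize(Seq("a", "b", "c", "d", "e", "f"), numSlices = 3)

// zipPartitions constructs a ZippedPartitionsRDD2 under the hood, passing this
// closure as `f`; preservesPartitioning defaults to false.
val zipped: RDD[String] =
  rdd1.zipPartitions(rdd2) { (it1, it2) =>
    it1.zip(it2).map { case (n, s) => s"$s$n" }
  }
```

Note that, unlike `RDD.zip`, the function receives whole partition iterators, so the two partitions need not contain the same number of elements unless the function itself requires it.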
| Modifier and Type | Method and Description |
|---|---|
| `void` | `clearDependencies()` Clears the dependencies of this RDD. |
| `scala.collection.Iterator<V>` | `compute(Partition s, TaskContext context)` :: DeveloperApi :: Implemented by subclasses to compute a given partition. |
| `scala.Function2<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<V>>` | `f()` |
| `RDD<A>` | `rdd1()` |
| `RDD<B>` | `rdd2()` |
| Methods inherited from class ZippedPartitionsBaseRDD |
|---|
| getPartitions, getPreferredLocations, partitioner, rdds |

| Methods inherited from class RDD |
|---|
| aggregate, cache, cartesian, checkpoint, checkpointData, coalesce, collect, collect, collectPartitions, computeOrReadCheckpoint, conf, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, creationSite, dependencies, distinct, distinct, doCheckpoint, elementClassTag, filter, filterWith, first, flatMap, flatMapWith, fold, foreach, foreachPartition, foreachWith, getCheckpointFile, getCreationSite, getNarrowAncestors, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, iterator, keyBy, map, mapPartitions, mapPartitionsWithContext, mapPartitionsWithIndex, mapPartitionsWithSplit, mapWith, markCheckpointed, max, min, name, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, reduce, repartition, retag, retag, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toArray, toDebugString, toJavaRDD, toLocalIterator, top, toString, union, unpersist, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipWithIndex, zipWithUniqueId |

| Methods inherited from interface Logging |
|---|
| initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning |

public ZippedPartitionsRDD2(SparkContext sc, scala.Function2<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f, RDD<A> rdd1, RDD<B> rdd2, boolean preservesPartitioning, scala.reflect.ClassTag<A> evidence$2, scala.reflect.ClassTag<B> evidence$3, scala.reflect.ClassTag<V> evidence$4)
public scala.Function2<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<V>> f()
public scala.collection.Iterator<V> compute(Partition s, TaskContext context)
Overrides:
compute in class RDD<V>

public void clearDependencies()
Clears the dependencies of this RDD. Subclasses of RDD may override this method to implement their own cleaning logic; see UnionRDD for an example.
Overrides:
clearDependencies in class ZippedPartitionsBaseRDD<V>
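Conceptually, `compute` fetches the iterators of the two parent partitions and hands them to the stored function `f`. The core semantics can be sketched in plain Scala without Spark (all names here are illustrative stand-ins, not Spark APIs):

```scala
// Stand-ins for the iterators of one partition of rdd1 and rdd2.
val part1: Iterator[Int]    = Iterator(1, 2, 3)
val part2: Iterator[String] = Iterator("a", "b", "c")

// A user function of the shape passed to ZippedPartitionsRDD2: it receives
// both partition iterators and returns the iterator of the output partition.
val f: (Iterator[Int], Iterator[String]) => Iterator[String] =
  (it1, it2) => it1.zip(it2).map { case (n, s) => s"$s$n" }

// compute(split, context) essentially evaluates f over the two parents'
// iterators for the given partition.
val result = f(part1, part2).toList  // List("a1", "b2", "c3")
```

Because `f` consumes iterators lazily, the zipped partition is produced without materializing either parent partition in memory.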