public class BroadcastHashJoin extends SparkPlan implements BinaryNode, HashJoin, scala.Product, scala.Serializable
| Constructor and Description |
|---|
| BroadcastHashJoin(scala.collection.Seq&lt;org.apache.spark.sql.catalyst.expressions.Expression&gt; leftKeys, scala.collection.Seq&lt;org.apache.spark.sql.catalyst.expressions.Expression&gt; rightKeys, org.apache.spark.sql.execution.joins.BuildSide buildSide, SparkPlan left, SparkPlan right) |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.spark.sql.execution.joins.BuildSide | buildSide() |
| RDD&lt;org.apache.spark.sql.catalyst.expressions.Row&gt; | execute() Runs this query returning the result as an RDD. |
| SparkPlan | left() |
| scala.collection.Seq&lt;org.apache.spark.sql.catalyst.expressions.Expression&gt; | leftKeys() |
| org.apache.spark.sql.catalyst.plans.physical.Partitioning | outputPartitioning() Specifies how data is partitioned across different nodes in the cluster. |
| scala.collection.immutable.List&lt;org.apache.spark.sql.catalyst.plans.physical.UnspecifiedDistribution$&gt; | requiredChildDistribution() |
| SparkPlan | right() |
| scala.collection.Seq&lt;org.apache.spark.sql.catalyst.expressions.Expression&gt; | rightKeys() |
Methods inherited from class org.apache.spark.sql.execution.SparkPlan: codegenEnabled, executeCollect, makeCopy

Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan: expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, output, outputSet, printSchema, references, schema, schemaString, simpleString, statePrefix, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode: apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, map, mapChildren, nodeName, numberedTreeString, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

Methods inherited from interface org.apache.spark.sql.execution.joins.HashJoin: buildKeys, buildPlan, buildSideKeyGenerator, hashJoin, output, streamedKeys, streamedPlan, streamSideKeyGenerator

Methods inherited from interface scala.Product: productArity, productElement, productIterator, productPrefix

Methods inherited from interface org.apache.spark.Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning

public BroadcastHashJoin(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys,
                         scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys,
                         org.apache.spark.sql.execution.joins.BuildSide buildSide,
                         SparkPlan left,
                         SparkPlan right)
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> leftKeys()
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Expression> rightKeys()
public org.apache.spark.sql.execution.joins.BuildSide buildSide()
public SparkPlan left()
public SparkPlan right()
public org.apache.spark.sql.catalyst.plans.physical.Partitioning outputPartitioning()
Overrides:
outputPartitioning in class SparkPlan

public scala.collection.immutable.List<org.apache.spark.sql.catalyst.plans.physical.UnspecifiedDistribution$> requiredChildDistribution()
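The strategy behind this operator can be illustrated with a minimal, self-contained sketch: the build side (the relation assumed small enough to broadcast to every node) is hashed once by its join keys, and each streamed-side row then probes that table. This is not Spark's actual implementation — rows, key extraction, and the merged output are simplified to plain Java collections, and the class and method names below are hypothetical:

```java
import java.util.*;

// Hypothetical sketch of the broadcast hash join strategy with
// inner-join semantics; rows are plain column-name -> value maps.
public class BroadcastHashJoinSketch {

    // Hash the (small) build-side rows by the value of their join key.
    // In Spark this table would be broadcast to every executor.
    public static Map<Object, List<Map<String, Object>>> buildHashTable(
            List<Map<String, Object>> buildRows, String buildKey) {
        Map<Object, List<Map<String, Object>>> table = new HashMap<>();
        for (Map<String, Object> row : buildRows) {
            table.computeIfAbsent(row.get(buildKey), k -> new ArrayList<>()).add(row);
        }
        return table;
    }

    // Stream the (large) side, probing the hash table; emit one merged
    // row per match, leaving non-matching streamed rows out (inner join).
    public static List<Map<String, Object>> hashJoin(
            List<Map<String, Object>> streamedRows, String streamedKey,
            Map<Object, List<Map<String, Object>>> hashTable) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> streamed : streamedRows) {
            for (Map<String, Object> build
                    : hashTable.getOrDefault(streamed.get(streamedKey), List.of())) {
                Map<String, Object> merged = new HashMap<>(build);
                merged.putAll(streamed);
                out.add(merged);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Small dimension table (build side) and larger fact table (streamed side).
        List<Map<String, Object>> dim = List.of(
                Map.of("id", 1, "name", "a"),
                Map.of("id", 2, "name", "b"));
        List<Map<String, Object>> fact = List.of(
                Map.of("fk", 1, "v", 10),
                Map.of("fk", 1, "v", 20),
                Map.of("fk", 3, "v", 30));
        Map<Object, List<Map<String, Object>>> table = buildHashTable(dim, "id");
        // Two fact rows match id = 1; fk = 3 has no match.
        System.out.println(hashJoin(fact, "fk", table).size()); // prints 2
    }
}
```

The sketch shows why the build/streamed split in the real operator matters: the hash table is built only once from the broadcast side, so the cost of the probe phase is a single pass over the streamed relation with no shuffle of either input.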