public class HiveTableScan extends SparkPlan implements scala.Product, scala.Serializable
| Constructor and Description |
|---|
| `HiveTableScan(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> attributes, org.apache.spark.sql.hive.MetastoreRelation relation, scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> partitionPruningPred, HiveContext context)` |
| Modifier and Type | Method and Description |
|---|---|
| `scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>` | `attributes()` |
| `HiveContext` | `context()` |
| `RDD<org.apache.spark.sql.catalyst.expressions.Row>` | `execute()` Runs this query, returning the result as an RDD. |
| `scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute>` | `output()` |
| `scala.Option<org.apache.spark.sql.catalyst.expressions.Expression>` | `partitionPruningPred()` |
| `org.apache.spark.sql.hive.MetastoreRelation` | `relation()` |
Methods inherited from class org.apache.spark.sql.execution.SparkPlan:
executeCollect, outputPartitioning, requiredChildDistribution

Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan:
expressions, generateSchemaString, generateSchemaString, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, schemaString, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode:
apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, id, makeCopy, map, mapChildren, nextId, nodeName, numberedTreeString, otherCopyArgs, sameInstance, simpleString, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

public HiveTableScan(scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> attributes,
org.apache.spark.sql.hive.MetastoreRelation relation,
scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> partitionPruningPred,
HiveContext context)
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> attributes()
public org.apache.spark.sql.hive.MetastoreRelation relation()
public scala.Option<org.apache.spark.sql.catalyst.expressions.Expression> partitionPruningPred()
public HiveContext context()
public RDD<org.apache.spark.sql.catalyst.expressions.Row> execute()
Runs this query, returning the result as an RDD.
Specified by: execute in class SparkPlan

public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output()
Specified by: output in class org.apache.spark.sql.catalyst.plans.QueryPlan<SparkPlan>
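`HiveTableScan` is an internal physical operator, so it is normally produced by the query planner rather than constructed directly. A minimal sketch of a query that plans to a `HiveTableScan`, assuming a Spark 1.0-era build with Hive support, a reachable Hive metastore, and a hypothetical table named `src`:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Assumed setup: a local Spark context with Hive support compiled in,
// and a metastore that already contains a table named `src`.
val sc = new SparkContext(new SparkConf().setAppName("HiveScanExample").setMaster("local"))
val hiveContext = new HiveContext(sc)

// The planner turns the table read into a HiveTableScan physical node;
// executedPlan prints it, and collect() drives execute() under the hood.
val query = hiveContext.hql("SELECT key, value FROM src")
println(query.queryExecution.executedPlan)
val rows = query.collect()
```

Passing a `Some(expression)` as `partitionPruningPred` (which the planner does when the query filters on a partition key) lets the scan skip whole partitions instead of reading and filtering every row.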