SparkSQL: Reading Hive Data and Running Locally in IDEA


Environment:

Hadoop version: 2.6.5
Spark version: 2.3.0
Hive version: 1.2.2
master host: 192.168.100.201
slave1 host: 192.168.100.201

The pom.xml dependencies are as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 <modelVersion>4.0.0</modelVersion>

 <groupId>com.spark</groupId>
 <artifactId>spark_practice</artifactId>
 <version>1.0-SNAPSHOT</version>

 <properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
  <spark.core.version>2.3.0</spark.core.version>
 </properties>

 <dependencies>
  <dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <version>4.11</version>
   <scope>test</scope>
  </dependency>
  <dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-core_2.11</artifactId>
   <version>${spark.core.version}</version>
  </dependency>
  <dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-sql_2.11</artifactId>
   <version>${spark.core.version}</version>
  </dependency>
  <dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <version>5.1.38</version>
  </dependency>
  <dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-hive_2.11</artifactId>
   <version>${spark.core.version}</version>
  </dependency>
 </dependencies>

</project>

Note: the hive-site.xml configuration file must be placed in the project's resources directory.

The hive-site.xml configuration is as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <!-- Hive metastore service URI -->
 <property>
  <name>hive.metastore.uris</name>
  <value>thrift://192.168.100.201:9083</value>
 </property>
 <property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
 </property>
 <property>
  <name>hive.zookeeper.quorum</name>
  <value>node01,node02,node03</value>
 </property>
 <property>
  <name>hbase.zookeeper.quorum</name>
  <value>node01,node02,node03</value>
 </property>
 <!-- Hive warehouse directory on HDFS -->
 <property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
 </property>
 <!-- HDFS access URL of the cluster -->
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.100.201:9000</value>
 </property>
 <property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
 </property>
 <property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
 </property>
 <property>
  <name>datanucleus.autoStartMechanism</name>
  <value>checked</value>
 </property>

</configuration>
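
Of these properties, hive.metastore.uris is the one Spark actually needs in order to reach the remote metastore. If you prefer not to ship hive-site.xml with the project, the same value can also be passed when building the SparkSession. The snippet below is only a minimal sketch of that alternative, assuming the metastore address from the file above; the object name is made up for illustration:

import org.apache.spark.sql.SparkSession

object HiveViaConfig {
 def main(args: Array[String]): Unit = {
  // Point Spark at the same metastore that hive-site.xml declares above.
  val spark = SparkSession.builder
   .master("local[*]")
   .appName("Hive metastore via config")
   .config("hive.metastore.uris", "thrift://192.168.100.201:9083")
   .enableHiveSupport()
   .getOrCreate()

  spark.sql("show databases").show()
  spark.stop()
 }
}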

Main class code:

import org.apache.spark.sql.SparkSession

object SparkSqlTest2 {
 def main(args: Array[String]): Unit = {

  // Build a local SparkSession with Hive support; the hive-site.xml on the
  // classpath (resources directory) tells it where the metastore lives.
  val spark: SparkSession = SparkSession
   .builder
   .master("local[*]")
   .appName("Java Spark Hive Example")
   .enableHiveSupport()
   .getOrCreate()

  spark.sql("show databases").show()
  spark.sql("show tables").show()
  spark.sql("select * from person").show()
  spark.stop()
 }
}

Prerequisite: the queries run against the default database, and the person table contains three rows.
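
The person table's schema and contents are not shown in the original article. If you need to reproduce the setup, a hypothetical schema with three sample rows can be created through Spark itself; everything in this sketch (table columns, names, values) is an assumption and should be adjusted to your own data:

import org.apache.spark.sql.SparkSession

object PreparePersonTable {
 def main(args: Array[String]): Unit = {
  val spark = SparkSession.builder
   .master("local[*]")
   .appName("Prepare person table")
   .enableHiveSupport()
   .getOrCreate()

  // Hypothetical schema; change the columns to match your own data.
  spark.sql("CREATE TABLE IF NOT EXISTS default.person (id INT, name STRING, age INT)")
  // Three sample rows so that the query in the main class returns data.
  spark.sql("INSERT INTO default.person VALUES (1, 'zhangsan', 20), (2, 'lisi', 25), (3, 'wangwu', 30)")

  spark.sql("SELECT * FROM default.person").show()
  spark.stop()
 }
}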


Before testing, make sure the Hadoop cluster is up, then start the Hive metastore service:

./bin/hive --service metastore 

Run the program; the result looks like this:

(Screenshot: console output of the three queries)

If you see the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\dell\AppData\Local\Temp\c530fb25-b267-4dd2-b24d-741727a6fbf3_resources;
 at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
 at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
 at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
 at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
 at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
 at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
 at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
 at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
 at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
 at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
 at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
 at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
 at com.tongfang.learn.spark.hive.HiveTest.main(HiveTest.java:15)

Solution:

1. Download the Hadoop Windows binaries (winutils) from: https://github.com/steveloughran/winutils

2. In the run configuration of the main class, set the environment variable HADOOP_HOME=D:\winutils\hadoop-2.6.4, where the value is the directory of the Hadoop Windows binaries.
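
If you would rather not touch the run configuration, setting the hadoop.home.dir system property before any Spark or Hadoop class is initialized usually has the same effect. This is only a sketch; the path is an example and must point at your own winutils directory:

object WindowsHadoopHome {
 def main(args: Array[String]): Unit = {
  // Must be set before the SparkSession (or any Hadoop class) is created.
  // Example path; replace with the directory that contains bin\winutils.exe.
  System.setProperty("hadoop.home.dir", "D:\\winutils\\hadoop-2.6.4")

  // ... then build the SparkSession exactly as in the main class above.
 }
}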


This concludes the walkthrough of reading Hive data with SparkSQL and running it locally in IDEA.