Apache Hive (TM) @VERSION@
==========================

The Apache Hive (TM) data warehouse software facilitates querying and
managing large datasets residing in distributed storage. Built on top
of Apache Hadoop (TM), it provides:
* Tools to enable easy data extract/transform/load (ETL)
* A mechanism to impose structure on a variety of data formats
* Access to files stored either directly in Apache HDFS (TM) or in other
  data storage systems such as Apache HBase (TM)
* Query execution via MapReduce

Hive defines a simple SQL-like query language, called QL, that enables
users familiar with SQL to query the data. At the same time, the
language allows programmers familiar with the MapReduce framework to
plug in their custom mappers and reducers to perform more sophisticated
analysis than the built-in capabilities of the language support. QL can
also be extended with custom scalar functions (UDFs), aggregations
(UDAFs), and table functions (UDTFs).
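
For example, a simple QL query might look like the following (the table
and column names here are illustrative only, not part of any shipped
schema):

  CREATE TABLE page_views (ts STRING, userid STRING, url STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

  -- Count hits per URL over the whole table
  SELECT url, COUNT(1) AS hits
  FROM page_views
  GROUP BY url;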

Please note that Hadoop is a batch processing system and Hadoop jobs
tend to have high latency and incur substantial overheads in job
submission and scheduling. Consequently, the average latency for Hive
queries is generally very high (minutes), even when the data sets
involved are very small (say a few hundred megabytes). Hive therefore
cannot be compared with systems such as Oracle, where analyses are run
over significantly smaller amounts of data but proceed iteratively,
with response times between iterations of less than a few minutes. Hive
aims to provide acceptable (but not optimal) latency for interactive
data browsing, queries over small data sets, and test queries.

Hive is not designed for online transaction processing and does not
support real-time queries or row-level inserts and updates. It is best
used for batch jobs over large sets of immutable data (such as web
logs). What Hive values most are scalability (scale out with more
machines added dynamically to the Hadoop cluster), extensibility (via
the MapReduce framework and UDF/UDAF/UDTF), fault tolerance, and loose
coupling with its input formats.

General Info
============

For the latest information about Hive, please visit our website at:

  http://hive.apache.org/

Getting Started
===============

- Installation Instructions and a quick tutorial:
  https://cwiki.apache.org/confluence/display/Hive/GettingStarted

- A longer tutorial that covers more features of HiveQL:
  https://cwiki.apache.org/confluence/display/Hive/Tutorial

- The HiveQL Language Manual:
  https://cwiki.apache.org/confluence/display/Hive/LanguageManual

Requirements
============

- Java 1.6
- Hadoop 0.20.x (x >= 1)

Upgrading from older versions of Hive
=====================================

- Hive @VERSION@ includes changes to the MetaStore schema. If you are
  upgrading from an earlier version of Hive, it is imperative that you
  upgrade the MetaStore schema by running the appropriate schema
  upgrade scripts located in the scripts/metastore/upgrade directory.

  We have provided upgrade scripts for Derby and MySQL databases. If
  you are using a different database for your MetaStore, you will need
  to provide your own upgrade script. (A sketch of applying such a
  script is shown after this list.)

- Hive @VERSION@ includes new configuration properties. If you are
  upgrading from an earlier version of Hive, it is imperative that you
  replace all of the old copies of the hive-default.xml configuration
  file with the new version located in the conf/ directory.
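
As an illustrative sketch only: with a MySQL-backed MetaStore, an
upgrade script could be applied from the mysql client roughly as
follows (the database name and script file name below are hypothetical;
use the script under scripts/metastore/upgrade that matches your source
and target versions):

  -- Run inside the mysql client connected to the MetaStore database.
  -- The script name below is illustrative only.
  USE metastore_db;
  SOURCE scripts/metastore/upgrade/mysql/upgrade-X.Y.Z-to-@VERSION@.mysql.sql;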

Useful mailing lists
====================

1. [email protected] - To discuss and ask usage questions. Send an
   empty email to [email protected] in order to
   subscribe to this mailing list.

2. [email protected] - For discussions about code, design, and
   features. Send an empty email to [email protected]
   in order to subscribe to this mailing list.

3. [email protected] - In order to monitor commits to the source
   repository. Send an empty email to
   [email protected] in order to subscribe to this
   mailing list.