Question 1:
tb1: url, ts
For each url, find the second in which it received the most clicks, and what that maximum count is.
ps: ts is a millisecond-precision integer timestamp.
Question 2:
tb2: uid, page
Find: the number of devices that have visited both page=A and page=B.
Question 1
Create the table and load the data:
tb1: url, ts
For each url, find the second in which it received the most clicks, and what that maximum count is.
ps: ts is a millisecond-precision integer timestamp.
create table kuaishou(
url string,
ts string)
row format delimited fields terminated by ",";
vi kuaishou
url1,1234567890123
url1,1234567890113
url1,1234567891103
url1,1234527893123
url2,1234527892123
url2,1234527892123
url2,1234527890123
url2,1234527890123
url2,1234527890113
url2,1234527891103
url2,1234527893123
url2,1234527892123
url2,1234527892123
url2,1234527890123
url3,1234527892123
url3,1234527890123
url3,1234527890123
url3,1234527890113
url2,1234527891103
url2,1234527893123
url2,1234527892123
url2,1234567892123
url2,1234567890123
load data local inpath '/home/hadoop/tmp/kuaishou' into table kuaishou;
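Optionally, a quick sanity check that the rows loaded (nothing here beyond the table defined above):
select url, ts from kuaishou limit 5;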
Tentative approach:
1. Truncate the millisecond timestamp to the second.
2. Group by url and sec and count.
3. Within each url, sort the counts from step 2 in descending order.
4. Take the row ranked 1.
select url,sec,cnt
from
(
select url,sec,cnt,row_number() over(partition by url order by cnt desc) rn -- step 3
from
(
from (select url,substr(ts,1,10) sec from kuaishou)t1 -- step 1
select url,sec,count(1) cnt
group by url,sec -- step 2
)t3
)t4
where rn=1 -- step 4
;
hive (default)>
>
> select url,sec,cnt
> from
> (
> select url,sec,cnt,row_number() over(partition by url order by cnt desc) rn -- step 3
> from
> (
> from (select url,substr(ts,1,10) sec from kuaishou)t1 -- step 1
> select url,sec,count(1) cnt
> group by url,sec -- step 2
> )t3
> )t4
> where rn=1 -- step 4
> ;
Automatically selecting local only mode for query
Query ID = hadoop_20200707072001_cad817b1-ab4e-4752-a47f-bbf591973f7f
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2020-07-07 07:20:06,252 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local1962055191_0030
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2020-07-07 07:20:11,369 Stage-2 map = 100%, reduce = 100%
Ended Job = job_local1720680700_0031
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 46950 HDFS Write: 19172 SUCCESS
Stage-Stage-2: HDFS Read: 48422 HDFS Write: 19866 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
url sec cnt
url1 1234567890 4
url2 1234567892 7
url3 1234527890 3
Time taken: 9.94 seconds, Fetched: 3 row(s)
hive (default)>
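A note on step 1: substr(ts,1,10) only works because every ts here is a 13-digit millisecond string. If ts were a real bigint column, or the strings could vary in length, numeric truncation would be the safer variant; a minimal sketch against the same kuaishou table:
select url,sec,cnt
from
(
select url,sec,cnt,row_number() over(partition by url order by cnt desc) rn
from
(
select url,floor(cast(ts as bigint)/1000) sec,count(1) cnt -- truncate ms -> s numerically
from kuaishou
group by url,floor(cast(ts as bigint)/1000)
)t3
)t4
where rn=1
;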
Question 2
tb2: uid, page
Find: the number of devices that have visited both page=A and page=B.
Create the table and load the data:
create table kuaishou2(
uid string,
page string)
row format delimited fields terminated by ",";
vi kuaishou2
u1,A
u1,A
u3,A
u1,B
u2,A
u1,B
u2,A
u4,B
u1,A
u3,A
u1,C
u2,B
u1,B
u2,A
u4,B
u1,A
u3,D
u1,B
u2,A
u1,W
u2,A
u4,B
load data local inpath '/home/hadoop/tmp/kuaishou2' into table kuaishou2;
A first rough attempt: a plain self-join, which still returns duplicate uids:
from kuaishou2 a,kuaishou2 b
select a.uid
where a.uid=b.uid and a.page='A' and b.page='B'
;
Approach: one way uses a self-join; the other takes a shortcut with size().
A couple of optimizations:
① Filter the data first, then do the self-join;
② The result needs deduplication; use group by instead of distinct.
Additions are welcome~~
Self-join HQL code below.
Optimized version:
1. Filter to the devices that visited A and the devices that visited B.
2. Self-join them (on t1.uid=t2.uid).
3. Dedup with count() plus group by (preferred over distinct()).
2 jobs; Time taken: 26.834 seconds, Fetched: 1 row(s)
select count(uid) uid_cnt -- finally count the devices
from (
select
t1.uid
from(
select uid from kuaishou2 where page='A'
)t1
join(
select uid from kuaishou2 where page='B'
)t2
on
t1.uid=t2.uid
group by
t1.uid
)t3;
1. where handles both the filter conditions and the self-join condition.
2. count(distinct a.uid) gives the final device count.
1 job; Time taken: 20.426 seconds, Fetched: 1 row(s)
from kuaishou2 a,kuaishou2 b
select count(distinct a.uid) uid_cnt
where
a.uid=b.uid and
a.page='A' and
b.page='B'
;
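To see why the group-by version compiles to 2 jobs while the count(distinct) version compiles to 1, either query can be prefixed with explain; a quick check, not part of the original notes (shown with an explicit join):
explain
select count(distinct a.uid) uid_cnt
from kuaishou2 a
join kuaishou2 b on a.uid=b.uid
where a.page='A' and b.page='B';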
Second method: see the comments in the code for the idea.
2 jobs; Time taken: 9.919 seconds, Fetched: 1 row(s)
This one takes the least time.
--1. Keep only the rows where page is A or B.
--2. Group by uid.
--3. Per uid, collect_set() dedups automatically, producing: uid, {set of pages visited}.
--4. Keep the groups whose set size is 2.
--5. count() gives the final device count.
select count(uid) uid_cnt --5. count() gives the final device count
from
(
select uid,collect_set(page) ct --3. collect_set() per uid dedups automatically,
--producing: uid, {set of pages visited}
from (
from kuaishou2 a
select a.uid,a.page
where a.page='A' or a.page='B' --1. keep only rows where page is A or B
)t1
group by uid --2. group by uid
having size(ct)=2 --4. keep groups whose set size is 2
)t3
;
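Since additions were invited: one more variant, not from the original notes, is conditional aggregation, which needs neither a self-join nor collect_set() and checks the A/B visits per uid in a single group by:
select count(1) uid_cnt
from (
select uid
from kuaishou2
group by uid
having sum(if(page='A',1,0)) > 0 -- visited A at least once
   and sum(if(page='B',1,0)) > 0 -- visited B at least once
)t;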