Implementing UCF Quickly in 500 Lines of SQL
Preface
UCF is short for User-based Collaborative Filtering. The overall idea is straightforward: compute groups of similar users (neighbours) from user behavior, then recommend to each user the items their neighbours like. Sounds simple enough, so without further ado, let's write some SQL.
SQL
```sql
select uid1, uid2, sim
from (
    select uid1
          ,uid2
          ,cnt12 / sqrt(cnt1 * cnt2) sim
          ,row_number() over(partition by uid1 order by cnt12 / sqrt(cnt1 * cnt2) desc) sim_rn
    from (
        -- user pairs and the number of items they have in common
        select a.uid uid1
              ,b.uid uid2
              ,count(a.iid) cnt12
        from tb_behavior a
        join tb_behavior b
          on a.iid = b.iid
        where a.uid <> b.uid
        group by a.uid, b.uid
    ) a12
    join (select uid, count(iid) cnt1 from tb_behavior group by uid) a1
      on a12.uid1 = a1.uid
    join (select uid, count(iid) cnt2 from tb_behavior group by uid) a2
      on a12.uid2 = a2.uid
) tb_neighbour
where sim > 0.1 and sim_rn <= 30
```
To implement this yourself, just replace the tb_behavior table above with your own business's user-behavior data; iid and uid are the item id and user id, respectively.
The measure is co-occurrence similarity: the number of items two users both like, divided by the square root of the product of their individual item counts. Finally, each user's list is truncated to the 30 most similar neighbours, which serve as the basis for recommendation.
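The logic of the SQL above can be sketched in plain Python. This is a toy illustration, not production code; the function name and the `behavior` list of `(uid, iid)` pairs are hypothetical stand-ins for the tb_behavior table:

```python
from collections import defaultdict
from itertools import combinations
from math import sqrt

def top_neighbours(behavior, k=30, min_sim=0.1):
    """behavior: list of (uid, iid) pairs; returns {uid: [(neighbour, sim), ...]}."""
    items_of = defaultdict(set)
    for uid, iid in behavior:
        items_of[uid].add(iid)

    # cnt12 = items in common (the self-join on iid in the SQL)
    sims = defaultdict(list)
    for u, v in combinations(list(items_of), 2):
        cnt12 = len(items_of[u] & items_of[v])
        if cnt12 == 0:
            continue
        sim = cnt12 / sqrt(len(items_of[u]) * len(items_of[v]))
        if sim > min_sim:                      # sim > 0.1
            sims[u].append((v, sim))
            sims[v].append((u, sim))

    # keep only the top-k most similar neighbours per user (sim_rn <= 30)
    return {u: sorted(ns, key=lambda x: -x[1])[:k] for u, ns in sims.items()}
```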
With the neighbour table built, the next step is to recommend the neighbours' favorite items to each user. The SQL is as follows:
```sql
select uid1, iid
from (
    select uid1
          ,iid
          ,max(sim) score
          ,row_number() over(partition by uid1 order by max(sim) desc) user_rn
    from tb_neighbour a12
    join (select uid, iid from tb_behavior) a2
      on a12.uid2 = a2.uid
    join (select uid, collect_set(iid) iids1 from tb_behavior group by uid) a1
      on a12.uid1 = a1.uid
    where not array_contains(iids1, a2.iid)
    group by uid1, iid
) tb_rec
where user_rn <= 500
```
A note on the cutoffs: both the top-30 neighbour limit above and the top-500 cap on each user's recommendation list are engineering optimizations that truncate the data to save storage; tune them to your own business needs.
As for what the subqueries mean: a1 holds the items each user has already consumed, and a2 holds the items liked by each of the user's neighbours. In other words, we take the neighbours' favorite items, filter out anything the user has already consumed, and rank the rest by co-occurrence similarity.
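The recommendation step can likewise be sketched in Python (a toy illustration under the same assumed data shapes; `neighbours` is the output of the similarity step and `behavior` the raw `(uid, iid)` pairs):

```python
from collections import defaultdict

def recommend(neighbours, behavior, max_rec=500):
    """neighbours: {uid: [(neighbour_uid, sim), ...]};
    behavior: list of (uid, iid); returns {uid: [iid, ...]} ranked by score."""
    items_of = defaultdict(set)
    for uid, iid in behavior:
        items_of[uid].add(iid)

    recs = {}
    for uid, ns in neighbours.items():
        # score each candidate by the max similarity among neighbours who liked it
        score = defaultdict(float)
        for n_uid, sim in ns:
            for iid in items_of[n_uid]:
                if iid in items_of[uid]:   # filter already-consumed items
                    continue
                score[iid] = max(score[iid], sim)
        # rank by score and truncate to the top max_rec (user_rn <= 500)
        recs[uid] = sorted(score, key=lambda i: -score[i])[:max_rec]
    return recs
```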
Thoughts
The idea is simple, but in actual development all sorts of problems come up. Let me pick the two main ones and discuss them:
- 1. Data skew caused by the join: tb_neighbour is huge, and hot items often account for 80% of impression and consumption records. How do we deal with that?
- 2. Incremental updates: in the framework above, tb_behavior is recomputed in full every time. Can we update the neighbour table and the recommendations incrementally, and cut the computation time?
Data skew caused by the join
Consider problem 1 first. Since the goal is to find similar neighbours, joining on items only serves to pair up users, so a natural idea is to sample each item's feed approximately; the loss in similarity precision is nearly negligible.
Here is an attempt at implementing that idea:
```sql
with tb_behavior_sample as (
    select uid, iid
    from (
        select uid
              ,iid
              ,row_number() over(partition by iid order by rand()) feed_rn
        from tb_behavior
    ) bh
    where feed_rn <= 50000
)
select uid1, uid2, sim
from (
    select uid1
          ,uid2
          ,cnt12 / sqrt(cnt1 * cnt2) sim
          ,row_number() over(partition by uid1 order by cnt12 / sqrt(cnt1 * cnt2) desc) sim_rn
    from (
        select a.uid uid1
              ,b.uid uid2
              ,count(a.iid) cnt12
        from tb_behavior_sample a
        join tb_behavior_sample b
          on a.iid = b.iid
        where a.uid <> b.uid
        group by a.uid, b.uid
    ) a12
    join (select uid, count(iid) cnt1 from tb_behavior group by uid) a1
      on a12.uid1 = a1.uid
    join (select uid, count(iid) cnt2 from tb_behavior group by uid) a2
      on a12.uid2 = a2.uid
) tb_neighbour
where sim > 0.1 and sim_rn <= 30
```
This uses Hive's `with ... as` (CTE) syntax, which I'll let the reader look up; space is limited, so I won't expand on it here. The feed_rn filter randomly samples up to 50,000 rows per item. In practice, first look at the item distribution and pick a threshold: for instance, use the occurrence count of the 10th most popular item, so that only the top-10 items get downsampled at all; then the similarity numerator cnt12 shrinks by at most 10 while the denominator is unchanged. For most cases the precision is good enough, and because the data skew is gone, computation time drops dramatically.
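The per-item capped sampling can be sketched in Python (a toy stand-in for the `row_number() over(partition by iid order by rand())` trick; the function name and the fixed seed are illustrative):

```python
import random
from collections import defaultdict

def sample_per_item(behavior, cap=50000, seed=42):
    """Keep at most `cap` (uid, iid) rows per item, mirroring
    row_number() over(partition by iid order by rand()) <= cap."""
    rng = random.Random(seed)
    by_item = defaultdict(list)
    for uid, iid in behavior:
        by_item[iid].append(uid)
    sampled = []
    for iid, uids in by_item.items():
        rng.shuffle(uids)            # order by rand()
        for uid in uids[:cap]:       # feed_rn <= cap
            sampled.append((uid, iid))
    return sampled
```

Only hot items exceeding the cap are affected; cold items pass through untouched.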
Incremental updates
Problem 2 is an engineering question. With a lambda architecture, the initial full-batch result already works well and can go straight to a gray release, with hourly or daily increments layered on top afterwards. A kappa architecture is comparatively more involved: the incremental pipeline must be designed from the start, and precision takes some time to accumulate. Which to choose depends on your data volume and what you're comfortable with; here I'll illustrate only the kappa approach.
Reviewing the SQL above, notice that we only need to persist the key intermediate quantities cnt12, cnt1, cnt2, and iids1, where iids1 is the array of items the user has already consumed. The numeric counts can be updated by simple accumulation; merging arrays is more awkward. One option is to register a UDF; here I take a different route: concatenate iids1 into a string, and split it back into an array at filtering time.
```sql
with tb_behavior_sample_incr as (
    select uid, iid
    from (
        select uid
              ,iid
              ,row_number() over(partition by iid order by rand()) feed_rn
        from tb_behavior_incr
    ) bh
    where feed_rn <= 50000
)
insert overwrite table tb_neighbour
select uid1, uid2, sim, cnt12, cnt1, cnt2
from (
    select uid1
          ,uid2
          ,sum(cnt12) cnt12
          ,sum(cnt1) cnt1
          ,sum(cnt2) cnt2
          ,sum(cnt12) / sqrt(sum(cnt1) * sum(cnt2)) sim
          ,row_number() over(partition by uid1 order by sum(cnt12) / sqrt(sum(cnt1) * sum(cnt2)) desc) sim_rn
    from (
        -- previously accumulated state
        select uid1, uid2, cnt12, cnt1, cnt2
        from tb_neighbour
        union all
        -- this increment's contribution
        select a12.uid1
              ,a12.uid2
              ,a12.cnt12
              ,a1.cnt1
              ,a2.cnt2
        from (
            select a.uid uid1
                  ,b.uid uid2
                  ,count(a.iid) cnt12
            from tb_behavior_sample_incr a
            join tb_behavior_sample_incr b
              on a.iid = b.iid
            where a.uid <> b.uid
            group by a.uid, b.uid
        ) a12
        join (select uid, count(iid) cnt1 from tb_behavior_incr group by uid) a1
          on a12.uid1 = a1.uid
        join (select uid, count(iid) cnt2 from tb_behavior_incr group by uid) a2
          on a12.uid2 = a2.uid
    ) a_all
    group by uid1, uid2
) tb_neighbour_new
where sim > 0.1 and sim_rn <= 30
```
Here tb_behavior_sample_incr and tb_behavior_incr are the incremental counterparts of tb_behavior_sample and tb_behavior; union all plus group by aggregates the rows for identical user pairs.
Under kappa, the very first run is itself just an increment, and each subsequent increment's results keep accumulating into tb_neighbour; it amounts to replaying lambda's initial full computation, increment by increment, until we catch up to the latest time partition.
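The accumulate-then-recompute step can be sketched in Python (a toy model of the union all + group by + sum logic; the dict-of-counts `state` is a hypothetical stand-in for tb_neighbour's stored columns):

```python
from math import sqrt

def merge_increment(state, incr):
    """state / incr: {(uid1, uid2): (cnt12, cnt1, cnt2)}.
    Returns the merged state -- counts simply accumulate."""
    merged = dict(state)
    for pair, (d12, d1, d2) in incr.items():
        c12, c1, c2 = merged.get(pair, (0, 0, 0))
        merged[pair] = (c12 + d12, c1 + d1, c2 + d2)
    return merged

def similarity(merged, min_sim=0.1):
    """Recompute sim = cnt12 / sqrt(cnt1 * cnt2) from the accumulated counts."""
    return {pair: c12 / sqrt(c1 * c2)
            for pair, (c12, c1, c2) in merged.items()
            if c12 / sqrt(c1 * c2) > min_sim}
```

Because similarity is a pure function of the accumulated counts, replaying increments in any batching reproduces the full-batch result (up to the sampling approximation).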
```sql
insert overwrite table tb_user_consume
select uid, substring_index(concat_ws(",", collect_list(iids1)), ",", 10000) iids1
from (
    select uid, concat_ws(",", collect_set(cast(iid as string))) iids1
    from tb_behavior_incr
    union all
    select uid, iids1
    from tb_user_consume
) a
group by uid
```
```sql
select uid1, iid
from (
    select uid1
          ,iid
          ,max(sim) score
          ,row_number() over(partition by uid1 order by max(sim) desc) user_rn
    from tb_neighbour a12
    join (select uid, cast(iid as string) iid from tb_behavior_incr) a2
      on a12.uid2 = a2.uid
    join (select uid, split(iids1, ",") iids1 from tb_user_consume) a1
      on a12.uid1 = a1.uid
    where not array_contains(iids1, a2.iid)
    group by uid1, iid
) tb_rec
where user_rn <= 500
```
tb_user_consume caches each user's 10,000 most recent consumption records, and the neighbours' latest favorite items are then recommended to the user.
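The capped string-merge trick (`concat_ws` plus `substring_index`) can be sketched in Python. This is illustrative only, and it deviates slightly from the SQL: it also de-duplicates ids across the old and new lists, and it assumes the increment's items come first so the cap keeps the newest records:

```python
def merge_consumed(old_iids, incr_iids, cap=10000):
    """Merge a user's cached consumed-item string with new items, keeping at
    most `cap` ids. Comma-joined strings stand in for the iids1 column."""
    seen, merged = set(), []
    # new items first, then the cached history, dropping duplicates
    for iid in (incr_iids.split(",") if incr_iids else []) + \
               (old_iids.split(",") if old_iids else []):
        if iid and iid not in seen:
            seen.add(iid)
            merged.append(iid)
    return ",".join(merged[:cap])     # substring_index(..., ",", cap)
```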
Closing thoughts
Phew, finally done! With the pipeline above, a UCF recommender is basically complete. Is there a better way? I'd say embeddings: for example, cluster users with item2vec and recommend by cluster, or use the friend graph and recommend the items a user's friends like. The former gives a finer-grained representation (and, notably, also supports negative-sampling strategies and incremental checkpoint updates); the latter carries higher trust between users and is easier to explain.