• Timeline personalization
• To maintain the retention rate
• Using ML to predict user demographics
Slide 7
To make the timeline flexible
Slide 8
Very flexible queries
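A "flexible query" against Solr typically combines a base query (`q`), filter queries (`fq`), and a sort. The sketch below is an assumption: the field names (`category`, `region`, `created_at`) and the core name `timeline` are hypothetical placeholders, not the actual schema from this talk.

```python
from urllib.parse import urlencode

def timeline_query(user_categories, user_region, rows=20):
    # Build a personalized-timeline query for Solr's standard /select
    # endpoint. Field and core names here are hypothetical.
    params = {
        "q": "*:*",
        # fq narrows the timeline per user attributes without
        # affecting relevance scoring
        "fq": [
            "category:(%s)" % " OR ".join(user_categories),
            "region:%s" % user_region,
        ],
        "sort": "created_at desc",  # newest items first
        "rows": rows,
        "wt": "json",
    }
    return "/solr/timeline/select?" + urlencode(params, doseq=True)

print(timeline_query(["fashion", "books"], "tokyo"))
```

Because the filters live in `fq` rather than `q`, Solr can cache each filter independently, which is part of what makes this style of query flexible.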
Slide 9
• However, the timeline requires speed.
Slide 10
• 6 sec (40,000,000 docs)
• 25 sec (160,000,000 docs)
• Need at least a 100x speedup (25 sec → ~250 msec)!!
Slide 11
Solr for TL / Distributed Index
[Architecture diagram: App servers → API → MySQL (BlackHole table); replication fans out to four shards, each running a BlackHole table + Q4M queue, where a trigger enqueues updates and a worker dequeues them into its Solr (master).]
soft commit in 200 msec
Update (realtime)
Filter fresh items only and insert them into the BlackHole table
Emulate Pub/Sub with replication, triggers, and Q4M
Update only the items assigned by consistent hashing
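The Pub/Sub emulation (BlackHole table + replication + trigger + Q4M) can be sketched roughly in MySQL DDL. This is an assumption-laden sketch, not the talk's actual schema: the table and column names (`fresh_items`, `item_queue`, `item_id`) are hypothetical, and note that replica-side triggers only fire under statement-based replication.

```sql
-- BLACKHOLE tables discard rows but still write the binlog,
-- so every replica "sees" each published item.
CREATE TABLE fresh_items (
  item_id    BIGINT NOT NULL,
  updated_at INT    NOT NULL
) ENGINE = BLACKHOLE;

-- On each replica, a Q4M queue table holds pending index updates.
CREATE TABLE item_queue (
  item_id BIGINT NOT NULL
) ENGINE = QUEUE;

-- The trigger turns each replicated insert into a queued message.
CREATE TRIGGER enqueue_item AFTER INSERT ON fresh_items
FOR EACH ROW INSERT INTO item_queue (item_id) VALUES (NEW.item_id);

-- A worker then consumes with Q4M's queue_wait()/queue_end():
--   SELECT queue_wait('item_queue');
--   SELECT item_id FROM item_queue;   -- the row this worker owns
--   SELECT queue_end();
```

The effect is a cheap fan-out: one insert on the master becomes one queued message on every shard, and each worker decides locally (via consistent hashing) whether the item is its responsibility.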
Slide 12
Slide 13
• API
• Plack, Gazelle, WWW::Form::UrlEncoded::XS, JSON::XS and DBIx::Sunny
• Worker
• Parallel::Prefork, Cache::Memory::Simple, LWP::UserAgent and Algorithm::ConsistentHash::Ketama
Slide 14
[Diagram: a single shard (BlackHole table + Q4M → trigger → dequeue → worker → Solr master), with the worker asking Consul for the list of healthy servers.]
# Fetch the list of healthy workers from Consul's health API
my $res = $ua->get('http://localhost/v1/health/service/'.$SRV.'?passing');
my $ref = JSON::XS::decode_json($res->content);
my @list = sort { $a cmp $b } map { $_->{Node}{Address} } @$ref;

# Build a consistent-hash ring from the server list
my $ketama = Algorithm::ConsistentHash::Ketama->new();
$ketama->add_bucket($_ . '_' . $timestamp, 1) for @list;

# First draw: is this node the primary server for the item?
my $s1 = $ketama->hash($item_id);
return 1 if $s1 eq $my_ip;

# Second draw: salt the key until a distinct secondary server wins
my $i  = 0;
my $s2 = $s1;
while ($s2 eq $s1) {
    $s2 = $ketama->hash($item_id . '_' . $i);
    $i++;
}
return 1 if $s2 eq $my_ip;
return;
Get the server list from Consul
Build the consistent-hash ring
Draw servers by consistent hashing
Update Solr
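The primary/secondary draw above can be re-implemented without the Ketama library. This Python sketch is a simplified stand-in, not the talk's code: it uses MD5-based hashing with one ring point per server (real ketama uses many points per bucket), and the server IPs are hypothetical.

```python
import hashlib

def _h(key: str) -> int:
    # Stand-in for the ketama hash: first 4 bytes of MD5 as an int
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

def owner(key: str, servers: list) -> str:
    # Map each server to a point on the ring; the key belongs to the
    # first point at or past its hash (wrapping around at the end).
    ring = sorted((_h(s), s) for s in servers)
    kh = _h(key)
    for point, server in ring:
        if kh <= point:
            return server
    return ring[0][1]  # wrap around

def should_update(item_id: str, my_ip: str, servers: list) -> bool:
    # First draw: the primary server for this item
    s1 = owner(item_id, servers)
    if s1 == my_ip:
        return True
    # Second draw: salt the key until a distinct secondary server wins
    i = 0
    s2 = s1
    while s2 == s1:
        s2 = owner(f"{item_id}_{i}", servers)
        i += 1
    return s2 == my_ip

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical IPs
# Every node runs the same deterministic draw, so exactly two of
# the three servers claim any given item (primary + secondary).
claims = [ip for ip in servers if should_update("item42", ip, servers)]
print(claims)
```

Because all workers compute the same two winners independently, each item is indexed on exactly two Solr shards with no coordination traffic beyond fetching the server list from Consul.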