ndb
spicyj
May 28, 2014
Transcript
ndb

“NDB is a better datastore API for the Google App Engine Python runtime.”
Part 1 of 2
Why ndb?

1. Less stupid by default
2. More flexible queries
3. Tasklets with autobatching
Less stupid by default

With db:

    class UserVideo(db.Model):
        user_id = db.StringProperty()
        video = db.ReferenceProperty(Video)

    user_video = UserVideo.get_for_video_and_user_data(
        video, user_data)
    return jsonify(user_video)  # slow
Less stupid by default

With ndb:

    class UserVideo(ndb.Model):
        user_id = ndb.StringProperty()
        video = ndb.KeyProperty(kind=Video)

    user_video = UserVideo.get_for_video_and_user_data(
        video, user_data)
    return jsonify(user_video)  # not slow!
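Why the difference? As I understand it, db.ReferenceProperty dereferences itself on access, so serializing user_video triggers a hidden datastore get for the referenced Video, while ndb.KeyProperty stores only a key and leaves the fetch up to you. A minimal sketch of that contrast (my own illustration, not from the deck):

    # db: touching the ReferenceProperty issues a hidden get() the first time.
    video_entity = user_video.video     # extra RPC, e.g. inside jsonify

    # ndb: the KeyProperty holds only an ndb.Key; nothing is fetched until
    # you explicitly ask for the entity.
    video_key = user_video.video        # cheap
    video_entity = video_key.get()      # explicit RPC, only when needed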
More flexible queries

ndb lets you build filters using ndb.AND and ndb.OR:

    questions = (Feedback.query()
        .filter(Feedback.type == 'question')
        .filter(Feedback.target == video_key)
        .filter(ndb.OR(
            Feedback.is_visible_to_public == True,
            Feedback.author_user_id == current_id))
        .fetch(1000))

Magic happens.
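The "magic", as I understand it, is that ndb expands the OR into several sub-queries and merges their results for you. The composite filter is also just a value you can build once and reuse; a small sketch under that assumption:

    visible_to_me = ndb.OR(
        Feedback.is_visible_to_public == True,
        Feedback.author_user_id == current_id)

    questions = Feedback.query(
        Feedback.type == 'question',
        Feedback.target == video_key,
        visible_to_me).fetch(1000)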
Performance

The datastore is slow. How can we speed things up?

- Batch operations together
- Do things in parallel
- Avoid the datastore
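A quick sketch of the first two tactics (my own example, not from the deck), assuming you already hold ndb.Key objects:

    # Batch operations together: one RPC instead of len(video_keys) gets.
    videos = ndb.get_multi(video_keys)

    # Do things in parallel: start both gets, then wait for the results.
    user_future = user_key.get_async()
    video_future = video_key.get_async()
    user = user_future.get_result()
    video = video_future.get_result()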
Tasklets and autobatching

    def get_user_exercise_cache(user_data):
        uec = UEC.get_for_user_data(user_data)
        if not uec:
            user_exercises = UE.get_all(user_data)
            uec = UEC.build(user_exercises)
        return uec

    def get_all_uecs(user_datas):
        return map(get_user_exercise_cache, user_datas)
Tasklets and autobatching

    @ndb.tasklet
    def get_user_exercise_cache_async(user_data):
        uec = yield UEC.get_for_user_data_async(user_data)
        if not uec:
            user_exercises = yield UE.get_all(user_data)
            uec = UEC.build(user_exercises)
        raise ndb.Return(uec)

    @ndb.synctasklet
    def get_all_uecs(user_datas):
        uecs = yield map(get_user_exercise_cache_async, user_datas)
        raise ndb.Return(uecs)
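If tasklets are new to you, here is a minimal standalone sketch (mine, not from the deck) of why yielding futures helps: while a tasklet is suspended at a yield, ndb's event loop can run other tasklets and coalesce their outstanding gets into one batch RPC.

    from google.appengine.ext import ndb

    @ndb.tasklet
    def fetch_pair_async(key_a, key_b):
        # Yielding a tuple of futures suspends this tasklet; ndb batches the
        # two gets (plus any gets from other suspended tasklets) where it can.
        a, b = yield key_a.get_async(), key_b.get_async()
        raise ndb.Return((a, b))

    @ndb.synctasklet
    def fetch_pair(key_a, key_b):
        # synctasklet wraps the generator so callers get a plain return value.
        result = yield fetch_pair_async(key_a, key_b)
        raise ndb.Return(result)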
Moral

ndb is awesome. Use it.
Part 2 of 2
The sad truth

ndb isn't perfect.
Mysterious errors

You heard from Marcia about this gem back in March:

    TypeError: '_BaseValue' object is not subscriptable
Q: What's worse than code that doesn't work at all?
A: Code that mostly works but breaks in subtle ways.
Secret slowness #1

Multi-queries, with IN and OR:

    answers = (Feedback.query()
        .filter(Feedback.type == 'answer')
        .filter(Feedback.in_reply_to.IN(question_keys))
        .fetch(1000))

Doesn't run in parallel!
Secret slowness #1

A not-horribly-slow multi-query:

    answers = (Feedback.query()
        .filter(Feedback.type == 'answer')
        .filter(Feedback.in_reply_to.IN(question_keys))
        .order(Feedback.__key__)
        .fetch(1000))
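If you would rather not depend on multi-query internals at all, one alternative (my own sketch, assuming question_keys is a list of ndb.Key objects) is to fan the IN out into explicit async queries so the RPCs overlap:

    futures = [Feedback.query()
                   .filter(Feedback.type == 'answer')
                   .filter(Feedback.in_reply_to == key)
                   .fetch_async(1000)
               for key in question_keys]
    answers = [a for f in futures for a in f.get_result()]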
Secret slowness #2

Query iterators:

    query = Feedback.query().filter(
        Feedback.topic_ids == 'algebra')
    questions = []
    for q in query.iter(batch_size=20):
        if q.is_visible_to(user_data):
            questions.append(q)
        if len(questions) >= 10:
            break
Secret slowness #2

Solution? Sometimes you have to do it by hand.
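What "by hand" might look like (a sketch under my own assumptions, not the deck's code): page through the query with fetch_page_async and start the next page's RPC before filtering the current one, so datastore time overlaps with Python time.

    def first_visible_questions(query, user_data, want=10, page_size=20):
        # Hypothetical helper built around the slide's query and its
        # is_visible_to() check.
        questions = []
        future = query.fetch_page_async(page_size)
        while future is not None and len(questions) < want:
            batch, cursor, more = future.get_result()
            # Kick off the next page before doing any Python-side filtering.
            future = (query.fetch_page_async(page_size, start_cursor=cursor)
                      if more else None)
            questions.extend(q for q in batch if q.is_visible_to(user_data))
        return questions[:want]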
Moral

ndb isn't perfect. Pay attention. Profile your code.
The End