Importing Wikipedia in Plone
Eric BREHAULT – Plone Conference 2013
Makina Corpus
October 02, 2013
Transcript
Importing Wikipedia in Plone Eric BREHAULT – Plone Conference 2013
ZODB is good at storing objects
• Plone contents are objects,
• we store them in the ZODB,
• everything is fine, end of the story.
But what if ...
... we want to store non-contentish records? Like polls, statistics, mailing-list subscribers, etc., or any business-specific structured data.
Store them as contents anyway
That is a powerful solution.
But there are 2 major problems...
Problem 1: You need to manage a secondary system
• you need to deploy it,
• you need to back it up,
• you need to secure it,
• etc.
Problem 2: I hate SQL
No explanation here. I think I just cannot digest it...
How to store many records in the ZODB?
• Is the ZODB strong enough?
• Is the ZCatalog strong enough?
My grandmother often told me: "If you want to become stronger, you have to eat your soup."
Where do we find a good soup for Plone?
In a super souper!!!
souper.plone and souper
• It provides both storage and indexing.
• Records can store any persistent picklable data.
• Created by BlueDynamics.
• Based on ZODB BTrees, node.ext.zodb, and repoze.catalog.
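Before a soup can be queried, it needs a catalog; souper looks it up as a named utility registered under the soup name. A minimal sketch following the pattern from souper's documentation (the index names match the example records on the next slides; verify details against the souper README):

from zope.interface import implementer
from zope.component import provideUtility
from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.field import CatalogFieldIndex
from repoze.catalog.indexes.text import CatalogTextIndex
from repoze.catalog.indexes.keyword import CatalogKeywordIndex
from souper.interfaces import ICatalogFactory
from souper.soup import NodeAttributeIndexer

@implementer(ICatalogFactory)
class MySoupCatalogFactory(object):
    # Builds the repoze.catalog indexes used to query the soup.
    def __call__(self, context=None):
        catalog = Catalog()
        catalog[u'user'] = CatalogFieldIndex(NodeAttributeIndexer('user'))
        catalog[u'text'] = CatalogTextIndex(NodeAttributeIndexer('text'))
        catalog[u'keywords'] = CatalogKeywordIndex(NodeAttributeIndexer('keywords'))
        return catalog

# registered under the soup name later passed to get_soup()
provideUtility(MySoupCatalogFactory(), name='mysoup')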
Add a record
>>> soup = get_soup('mysoup', context)
>>> record = Record()
>>> record.attrs['user'] = 'user1'
>>> record.attrs['text'] = u'foo bar baz'
>>> record.attrs['keywords'] = [u'1', u'2', u'ü']
>>> record_id = soup.add(record)
Record in record
>>> record.attrs['homeaddress'] = Record()
>>> record.attrs['homeaddress'].attrs['zip'] = '6020'
>>> record.attrs['homeaddress'].attrs['town'] = 'Innsbruck'
>>> record.attrs['homeaddress'].attrs['country'] = 'Austria'
Access record
>>> from souper.soup import get_soup
>>> soup = get_soup('mysoup', context)
>>> record = soup.get(record_id)
Query
>>> from repoze.catalog.query import Eq, Contains
>>> [r for r in soup.query(Eq('user', 'user1') & Contains('text', 'foo'))]
[<Record object 'None' at ...>]
or using CQE format
>>> [r for r in soup.query("user == 'user1' and 'foo' in text")]
[<Record object 'None' at ...>]
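Large result sets can also be consumed lazily, and records can be removed; a sketch assuming souper's documented lazy() iterator, del-based removal, and reindex() (check the souper README for the exact signatures):

>>> for lazy in soup.lazy(Eq('user', 'user1')):
...     record = lazy()    # a LazyRecord; calling it loads the actual record
>>> del soup[record]       # remove the record from storage and indexes
>>> soup.reindex()         # refresh index data after records changed in place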
souper
• a soup container can be moved to a specific ZODB mount point,
• it can be shared across multiple independent Plone instances,
• souper works on Plone and Pyramid.
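Such a mount point is declared in zope.conf; a minimal sketch of the standard Zope pattern, with illustrative path and names:

<zodb_db soups>
  <filestorage>
    path /path/to/var/filestorage/soups.fs
  </filestorage>
  mount-point /plone/soups
</zodb_db>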
Plomino & souper
• we use Plomino to easily build non-content-oriented apps,
• we use souper to store huge amounts of application data.
Plomino data storage
Originally, documents (= records) were ATFolders.
Capacity: about 30,000.
Plomino data storage
Since 1.14, documents are pure CMF objects.
Capacity: about 100,000.
Usually the Plomino ZCatalog contains a lot of indexes.
Plomino & souper
With souper, documents are just soup records.
Capacity: several million.
Typical use case
• Store 500,000 addresses,
• Be able to query them in full text and display the results on a map.
Demo
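A bulk import in this spirit can be sketched as follows; the soup name, the attribute names, and load_addresses() are all hypothetical, and committing in batches keeps the ZODB object cache from growing unbounded:

import transaction
from souper.soup import get_soup, Record

soup = get_soup('addresses', context)            # 'addresses' is a hypothetical soup name
for i, row in enumerate(load_addresses()):       # load_addresses() is a hypothetical data source
    record = Record()
    record.attrs['fulltext'] = row['text']
    record.attrs['latlng'] = (row['lat'], row['lng'])
    soup.add(record)
    if i % 10000 == 0:
        transaction.commit()                     # commit periodically to bound memory usage
transaction.commit()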
What is the limit? Can we import Wikipedia in souper?
Demo with 400,000 records
Demo with 5.5 million records
Conclusion
• Usage performance is good,
• Plone performance is not impacted.
Use it!
Thoughts
• What about a REST API on top of it?
• Massive imports are long and difficult; could they be improved?
Makina Corpus
For all questions related to this talk, please contact Éric Bréhault.
[email protected]
Tel: +33 534 566 958
www.makina-corpus.com