A detailed look at the interesting LSM file organisation seen in BigTable, Cassandra and most recently MongoDB
Data architectures need infrastructure that combines both streaming and persistent state
A popular essay looking at whether products like MongoDB are viable threats to the incumbent database vendors
A lighthearted look at Oracle & Google using a metaphorical format. The style won’t suit everyone, but it’s a bit of fun!
- RBS-2014: Scaling Data
- BigDataCon-2013: The Return of Big Iron? (video)
- JAX-2013: The Return of Big Iron?
- QCon-2012: Where does Big Data meet Big Database? (video)
- QCon-2012: Progressive Architectures at RBS (video)
- JavaOne-2011: Balancing Replication and Partitioning in a Distributed Java Database
- QCon-2011: Beyond the Data Grid: Coherence, Normalisation, Joins and Linear Scalability (video)
- OpenWorld-2011: Adopting Oracle Coherence as an Enterprise Standard
- UCL-2011: A Paradigm Shift: The Increasing Dominance of Memory-Oriented Solutions for High Performance Data Access
- CoSIG-2011: Oracle Coherence Implementation Patterns (Special Interest Group)
- ICST-2011: Test-Oriented Languages: a new era?
- ICST-2011: Enabling Testing, Design and Refactoring Practices in Remote Locations
- Birkbeck-2011: Data Storage for Extreme Use Cases
- RefTest-2010: Has Mocking Gone Wrong?
- RBS-2009: Data Grids with Oracle Coherence
- Brunel-2008: The Architect's Two Hats
- Brunel-2007: Architecture and Design in Industry
Other Stuff (all)
- A Guide to building a Central, Consolidated Data Store for a Company (2014)
- A World of Chinese Whispers (2014)
- Database Y (2013)
- The Big Data Conundrum (2012)
- Big Data: Evolution or Revolution? (2012)
- A Story about George (2012)
- The Rebirth of the In-Memory Database (2011)
- Is the Traditional Database a Thing of the Past? (2009)
- Software Writing and the Intellectual Superiority Complex (2005)
- Component Software. Where is it going? (2005)
- Do Metrics Have a Place in Software Engineering Today? (2004)
Team / Process / Interviewing (all)
- Building a Career in Technology (2015)
- The Iffy Tractor (Can they code OO?) (2011)
- The Business Analyst Test (2011)
- Distributing Skills Across a Continental Divide (2011)
- Learning Practices for Distributed Teams (ICST) (2011)
- Interviewing: The Importance of Examining Applied Knowledge (2010)
- Mapping Personal Practices (2010)
- Four HPC Architecture Questions – With Answers (2009)
Test Driven Development (all)
Data Tech (all)
- Best of VLDB 2014 (2015)
- Log Structured Merge Trees (2015)
- An initial look at Actian’s ‘SQL in Hadoop’ (2014)
- The Best of VLDB 2012 (Very Large Database Conference) (2012)
- Thinking in Graphs: Neo4J (2012)
- ODC – RBS’s Distributed Datastore (2012)
- Shared Nothing vs. Shared Disk Architectures: An Independent View (2009)
- Coherence Code Examples (GitHub)
- Beyond the Data Grid: Coherence, Normalisation, Joins and Linear Scalability (QCon)
- Coherence Part I: An Introduction
- Coherence Part II: Delving a Little Deeper
- Coherence Part III: The Coherence Toolbox
- Coherence Part IV: Merging Data And Processing
- Coherence: The Fallacy of Linear Scalability
- How Fault Tolerant Is Coherence Really?
- Merging Data And Processing: Why it doesn’t “just work”
- POF Primer
- Sizing Coherence Indexes
- When is POF a Good Idea?
Coherence Patterns (all)
- An Overview of some of the best Coherence Patterns (2011)
- Cluster Time and Consistent Snapshotting (2012)
- GUI Sorting and Pagination with Chained CQCs (2012)
- Joins: Advanced Patterns for Data Stores (2011)
- Joins: Simple joins using CQC or Key-Association (2009)
- Latest-Versioned/Marker Patterns and MVCC (2011)
- Reliable version of putAll() (2011)
- Singleton Service (2011)
- The Collections Cache (2011)
Best of VLDB 2014 (Mar 8th, 2015)
An interesting paper on write-ahead logs for persistent, in-memory media. Recent non-volatile memory (NVM) technologies, such as PCM, STT-MRAM and ReRAM, can act as both main memory and storage. This has led to research into NVM programming models, where persistent data structures remain in memory and are accessed directly through CPU loads and stores. REWIND outperforms state-of-the-art approaches for data structure recoverability, as well as general-purpose and NVM-aware DBMS-based recovery schemes, by up to two orders of magnitude.
As the number of cores increases, the complexity of coordinating competing accesses to data will likely diminish the gains from increased core counts. We conclude that, rather than pursuing incremental solutions, many-core chips may require a completely redesigned DBMS architecture that is built from the ground up and tightly coupled with the hardware.
Log Structured Merge Trees (Feb 14th, 2015)
It’s nearly a decade since Google released its ‘Bigtable’ paper. One of the many cool aspects of that paper was the file organisation it uses. The approach is more generally known as the Log Structured Merge Tree, after this 1996 paper (although it is not specifically referenced as such by Google).
LSM is now used in a number of products as the main file organisation strategy: HBase, Cassandra, LevelDB, SQLite, and even MongoDB 3.0, which comes with an optional LSM engine following its acquisition of WiredTiger.
What makes LSM trees interesting is their departure from the binary-tree-style file organisations that have dominated the space for decades. LSM seems almost counterintuitive when you first look at it, only making sense when you closely consider how files work in modern, memory-heavy systems.
In a nutshell LSM trees are designed to provide better write throughput than traditional B+ tree or ISAM approaches. They do this by removing the need to perform random update-in-place operations.
So why is this a good idea? At its core it’s the old problem of disks being slow for random operations, but fast when accessed sequentially. A gulf exists between these two types of access, regardless of whether the disk is magnetic or solid state or even, although to a lesser extent, main memory.
The figures in this ACM report make the point well. They show that, somewhat counterintuitively, sequential disk access can be faster than randomly accessing main memory. More relevantly, they also show sequential access to disk, be it magnetic or SSD, to be at least three orders of magnitude faster than random I/O. This means random operations are to be avoided. Sequential access is well worth designing for.
So with this in mind, let’s consider a little thought experiment: if we are interested in write throughput, what is the best method to use? A good starting point is to simply append data to a file. This approach, often termed logging, journaling or a heap file, is fully sequential, so it provides very fast write performance, equivalent to theoretical disk speeds (typically 200-300MB/s per drive).
Benefiting from both simplicity and performance, log- and journal-based approaches have rightfully become popular in many big data tools. Yet they have an obvious downside: reading arbitrary data from a log will be far more time-consuming than writing to it, involving a reverse-chronological scan until the required key is found. (more…)
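The append-only trade-off can be sketched in a few lines of Python. This is a toy illustration only; the class name and the tab-separated record format are my assumptions, not anything from the post:

```python
import os
import tempfile

# A toy heap/log file. Writes are pure sequential appends (fast); reads
# must scan the file in reverse-chronological order until the key is
# found (slow, and increasingly so as the log grows).

class LogFile:
    def __init__(self, path):
        self.path = path
        open(path, "a").close()  # create the file if it doesn't exist

    def put(self, key, value):
        # Write path: append only, never seek. This is what keeps writes
        # close to sequential disk throughput.
        with open(self.path, "a") as f:
            f.write(f"{key}\t{value}\n")

    def get(self, key):
        # Read path: scan backwards so the newest entry for `key` wins.
        with open(self.path) as f:
            lines = f.readlines()
        for line in reversed(lines):
            k, _, v = line.rstrip("\n").partition("\t")
            if k == key:
                return v
        return None

log = LogFile(os.path.join(tempfile.mkdtemp(), "toy.log"))
log.put("a", "1")
log.put("b", "2")
log.put("a", "3")        # supersedes the earlier value for "a"
print(log.get("a"))      # prints 3: found first when scanning from the end
```

Note that a `get` touches every record written after the one it wants; LSM trees exist precisely to keep the sequential write path while avoiding this full scan on reads.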
List of Database/BigData Benchmarks (Feb 13th, 2015)
I did some research at the end of last year looking at the relative performance of different types of databases: key value, Hadoop, NoSQL, relational.
I’ve started a collaborative list of the various benchmarks I came across. There are many! Check out the list below and contribute if you know of any more (link).
Building a Career in Technology (Jan 2nd, 2015)
I was asked to talk to some young technologists about their career path in technology. These are my notes, which wander somewhat between career and general advice.
- Don’t assume progress means a career into management – unless you really love management. If you do, great, do that. You’ll get paid well, but it will come with downsides too. Focus on what you enjoy.
- Don’t confuse management with autonomy or power, it alone will give you neither. If you work in a company, you will always have a boss. The value you provide to the company gives you autonomy. Power comes mostly from the respect others have for you. Leadership and management are not synonymous. Be valuable by doing things that you love.
- If you want to be a good programmer a foundation in Computer Science is really important. If you didn’t do undergrad CS (like me) you need to compensate. Ensure you know the basics well (data structures, algorithms, hardware architecture, networks, security etc). Learn this sooner rather than later as knowledge compounds.
- Practice communicating your ideas. Blog, convince friends, colleagues, use github, whatever. If you want to change things you need to communicate your ideas, finding ways to reach your different audiences. If you see something that seems wrong, try to change it by both communicating and by doing.
- Try to always have one side project (either in work or outside) bubbling along. Something that’s not directly part of your job. Go to a hack night, learn a new language, write a new website, whatever. Something that makes you learn in new avenues.
- Sometimes things don’t come off the way you expect. Normally there is something good in there anyway. This is ok.
- The T-shaped people idea from the Valve handbook is a good way to think about your personal development. What’s your heavy weaponry?
- It’s clichéd but so very true: don’t prematurely optimise. Know that all good engineers do it. All of them. Including you. Even if you don’t think you do, you probably do. Realise this about yourself and fight it.
- If you think any particular technology is the best thing since sliced bread, and it’s somewhere near a top of the Gartner hype-curve, you are probably not seeing the full picture yet. Be critical of your own opinions and look for bias in yourself.
- In my experience the most important characteristic of a good company is that its employees assume, by default, that the rest of the company are smart people. If the modus operandi of a company (or worse, a team) is ‘everyone else is an idiot’, look elsewhere.
- If you’re motivated to do something, try to capitalise on that motivation there and then and enjoy the productivity that comes with it. Motivation is your most precious commodity.
- Learn to control your reaction to negative situations. The term ‘well-adjusted’ means exactly that. Start with email. Never press send if you feel angry or slighted. In tricky situations stick purely to facts and remove all subjective or emotional content. Let the tricky situation diffuse organically. Doing this face to face takes more practice as you need to notice the onset of stress and then cage your reaction, but the rules are the same (stick to facts, avoid emotional language, let it go).
- If you offend someone, always apologise. Even if you are right about whatever it was, it is unlikely your intention was to offend them.
- Recognise the difference between being politically right and emotionally right. As humans we’re great at creating plausible rationalisations and justifications for our actions, both to ourselves and others. Rationalisations are often a sign that we’re covering an emotional mistake. Learn to look past them to your moral compass.
Quite a few companies are looking at some form of centralised operational store, data warehouse, or analytics platform. The company I work for set out to build a centralised scale-out operational store using NoSQL technologies five or six years ago and it’s been an interesting journey. The technology landscape has changed a lot in that time, particularly around analytics, although that was not our main focus (but it will be an area of growth). Having an operational store that is used by many teams is, in many ways, harder than an analytics platform, as there is a greater need for real-time consistency. The below is essentially a brain dump on the subject.
On Inter-System (Enterprise) Architecture
- Having a single schema for the data in your company has little value unless you use it to write down the data it describes as a permanent record. Standardising the wire format alone can create more problems than it solves. Avoid Enterprise Messaging Schemas for the problem of bulk state transfer between data stores (see here). Do use messages/messaging for notifying the need to act.
- Prefer direct access at source, using data virtualisation. Where that doesn’t work (high-cardinality joins), collocate data using replication technologies (relational or NoSQL) to materialise read-only clones of the source. Avoid enterprise messaging.
- Federated approaches, which leave data sets in place, will get you there faster if you can get all the different parts of the company to conform. That is itself a big ask, but a good technical strategy can help. Expect to spend a lot on operations and automation lining disparate datasets up with one another.
- When standardising the persisted representation don’t create a single schema upfront if you can help it. You’ll end up in EDW paralysis. Evolve to it over time.
- Start with disparate data models and converge them incrementally over time using schema-on-need (and yes you can do this relationally, it’s just a bit harder).
Useful talk on Linux Performance Tools (Aug 24th, 2014)