Posted: September 5, 2024, by Laura Wyglendacz

How Liberis is decomposing its monolithic data: approaches to scaling whilst unlocking data value

Learn how Liberis scaled by decomposing monolithic data, adopting a Data Mesh, and aligning teams to accelerate product development and unlock data value.


In early 2023 Liberis was at a crossroads familiar to many scaleups. We were reaping the benefits of radical changes made to scale product engineering, yet intelligence product development felt difficult and slow, and our use of business data for analytics lagged behind the rapid pace of product development.

Why? Micro-services, Mono-data

Application engineers and product managers worked together in teams arranged by business domain. In a microservices architecture, they used APIs and events to share data with each other.
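In practice, sharing data through events might look like the sketch below. The event name, topic and fields are hypothetical illustrations of the pattern, not Liberis' actual schema.

```python
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Hypothetical domain event -- names are illustrative, not Liberis' real schema.
@dataclass
class FundingApplicationSubmitted:
    application_id: str
    merchant_id: str
    requested_amount_gbp: float
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def publish(topic: str, event) -> str:
    """Serialise the event for a message bus; a real system would hand
    this payload to Kafka or similar rather than returning a string."""
    return json.dumps({"topic": topic, "payload": asdict(event)})

message = publish("funding.applications", FundingApplicationSubmitted(
    application_id="app-123",
    merchant_id="m-456",
    requested_amount_gbp=25000.0))
```

Because each event is self-describing, any other domain (or an analytics consumer) can subscribe without reaching into the producer's database.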

Meanwhile, an Enterprise Data team took care of a monolithic data estate, where operational and analytical concerns interleaved. Many microservices still read from and wrote to the monolithic data servers and warehouses.

Liberis has a rich and growing store of diverse customer data. 14 years of customer data power personalised risk assessments, providing a service for SMEs that remains out of reach for traditional lenders.

This valuable asset was centralised in a way that made it hard to use. Product development in domain teams was often bogged down by the mixing of operational and analytical concerns, a lack of data ownership, and the ensuing maintenance challenges. The centralised data estate was ready for an overhaul.

Step 1 – Jump on the domain-bounded bandwagon!

I asked domain teams to extend the domain-bounded approach to all of their operational data. This meant getting them acquainted with a data estate they knew little about.

The Enterprise Data team spent most of their time keeping the data estate up and running, so they knew the infrastructure well. They spent their time fixing performance issues, syncing and integrating data. They were always busy and in demand. How could we find the capacity to drive change?

First we gathered data. Tickets coming into the Enterprise Data team were categorised into three main areas.

  1. Support: maintenance, bugs, issues, incidents, data quality etc.
  2. New development: iterations, new capabilities or data
  3. Integrating Partner Data – an essential, early step in Liberis’ personalised customer journey

We switched from development to ‘lights on’ mode, providing Support and Partner integration only. New development was paused, except for requests that met a high bar of proven customer value.

As Support requests came in, we categorised them further. Some data described in support requests was identified as operational. We paired with domain teams to hand it over. Official ownership changes were communicated as domains encapsulated their operational data.

In some cases domain operational data was baked into the monolith in non-trivial ways. Domain teams made plans to rebuild, and began executing.

New microservices and domain databases continue to replace fragile stored procedures and data warehouses, shrinking the monolith and its maintenance burden week by week.

Step 2 – Conway’s Law: Fake it until you make it

With operational data ownership distributed, how could we start delighting analytical data users? From a data consumer's point of view, data infrastructure and tooling remained unreliable and slow despite low data volumes. The data itself was inconsistent, divergent and error-prone.

💡“Organizations which design systems… are constrained to produce designs which are copies of the communication structures of these organizations.”

Melvin E. Conway, How Do Committees Invent?

Would a new, centrally owned analytical platform solve our problems?

Considering Conway’s Law – no. Why not?

The Enterprise Data team shuttled data between producers and end consumers. They knew a lot about pipelines, but they were not in a position to be experts on either the data produced or the needs and pain points of data users. In this model, the only way to scale as data needs grow is to add more data engineers to the team.

A new technical platform, database technology or data engineering optimisation could in theory improve performance, following lengthy analysis and migration. But without expertise in the data as produced or consumed, platform choice and design would not be optimised for Liberis' data needs, now or next. The risk of changing tech without making organisational changes is always that you rebuild the same problematic systems in a new tech stack!

Liberis knew that decoupling and domain alignment supported both lean product innovation and scaling to a growing customer base. I knew from experience that applying the same principles to analytical data, along with product thinking, unlocks innovation and scale for data too.

Following Conway’s Law, I changed the organisational structure. Goodbye centralised Enterprise Data Team, hello Data Platform team.

Step 3 – Rinse and repeat (back to domain boundaries!)

Without a centralised Enterprise Data team moving everyone’s data around, data consumers now negotiate directly with domain data producers to make value-driven tradeoffs about data.

Any questions about specific data that reach the Data Platform team are directed back to domain team Engineering and Product Managers. Liberis favours open channels and clear, org-wide outcomes to keep conversations on track.

This allows the Data Platform team to focus on platform-as-product capability development.

By aligning Data Platform development with lean product development initiatives end-to-end, we build the capabilities that are proven to be needed. We build iteratively and collaboratively with both data publishers and consumers.

Case Study – Launching a new Product in less than 6 months

Outcome 1: Rapid delivery timelines for analytics data

For the first time:

  • Operational and analytical data can be developed entirely off-monolith.
  • Analytical data users are treated as first-class citizens, involved in design and product discussions with delivery teams much earlier.

The result?

  • Delivery timelines for analytical data are explicit and match, rather than lag, product releases. For example, co-ordinated end-to-end release testing now explicitly includes analytics data and customer-facing data products.
  • Measurement of product and business success is baked in and informs tech design. For example, cross-domain service, API, Kafka topic and event design have all been influenced by analytical data requirements.
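As a sketch of how analytical requirements can shape event design, a publishing check might insist that every cross-domain event carries the fields analytical consumers depend on. The field names below are illustrative assumptions, not Liberis' real contract.

```python
# Hypothetical set of fields analytical consumers need on every cross-domain
# event -- the names are illustrative assumptions, not an actual Liberis contract.
ANALYTICS_REQUIRED_FIELDS = {"event_id", "occurred_at", "schema_version", "merchant_id"}

def analytics_ready(event: dict) -> list[str]:
    """Return the analytics-required fields missing from an event payload,
    sorted alphabetically so the report is deterministic."""
    return sorted(ANALYTICS_REQUIRED_FIELDS - event.keys())

event = {
    "event_id": "e-1",
    "occurred_at": "2024-09-05T10:00:00Z",
    "schema_version": 2,
    "merchant_id": "m-456",
    "status": "approved",
}
missing = analytics_ready(event)  # [] -> safe to publish
```

Running such a check in CI or at publish time keeps analytical needs visible at design time, rather than discovered after release.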

💡Conway’s Law again – the technical system reflects how people organise themselves.

Including data users early, and treating them as valued customers, lowers the risk of technical debt or significant architectural rework caused by analytical data needs being considered too late in the development process.

Outcome 2: Incremental development of Data Platform

Lean product development is driving Data Platform capability.

  1. Data-producing domains can now publish one type of analytical data product. The deployed modules of infrastructure and code forming an Analytical Data Product are owned entirely by domain teams. The new product will use this pattern.
  2. By product launch, Liberis will go live with the first iteration of self-serve data modelling and reporting capabilities for data consumers and customer-facing data products.
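A domain-owned analytical data product can be sketched as a small descriptor that the owning team deploys alongside its code. The fields below are assumptions illustrating the pattern, not the Data Platform's actual interface.

```python
from dataclasses import dataclass, field

# Illustrative descriptor for a domain-owned analytical data product; the
# field names are assumptions sketching the pattern, not a real platform API.
@dataclass
class AnalyticalDataProduct:
    name: str
    owning_domain: str      # the domain team accountable for the data
    output_table: str       # where consumers read the published data
    refresh_schedule: str   # cron-style cadence the owners commit to
    schema: dict = field(default_factory=dict)

funding_performance = AnalyticalDataProduct(
    name="funding_performance",
    owning_domain="funding",
    output_table="analytics.funding.performance_daily",
    refresh_schedule="0 6 * * *",
    schema={
        "merchant_id": "string",
        "advance_amount_gbp": "decimal",
        "repaid_pct": "decimal",
    },
)
```

Because ownership, location and cadence are declared by the producing domain, consumers can discover and rely on the product without routing requests through a central team.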

By taking three pragmatic steps toward a Data Mesh, data AND data capabilities keep pace with product development 🚀 – speeding up access to data that tells us if we’re on the right track!

Step 4…

As we continue to extract data from the monolith and deliver it as self-serve products, we will unlock similar benefits in intelligent product development. Our Machine Learning and AI specialists will be able to focus on delivering unique intelligence products and spend less time on data wrangling.
