Business Problem

The client’s large data sets, spread across widely distributed legacy systems, gave rise to the following issues:

  • Lack of system integration to provide visibility into operations and enable real-time analytics.
  • Lack of a responsive customer master that could be accessed across regions.
  • Data access was cumbersome, as multiple module systems had to be queried to retrieve complete data.
  • Multiple touchpoints raised the probability of data corruption, creating concerns about data reliability.
  • The client needed to restructure ad-hoc IT systems that were error-prone, labor-intensive, and expensive to maintain.

The Solutions

Efficient Data Capture

Developed a pipeline combining change data capture, distributed event streaming, and NoSQL database components.
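
A rough sketch of how such a pipeline can be assembled is shown below. It assumes Debezium-style change events arriving on a Kafka topic and MongoDB as the NoSQL store; the topic name, connection strings, and field names are illustrative placeholders, not the client’s actual configuration.

```python
import json

from kafka import KafkaConsumer  # kafka-python
from pymongo import MongoClient

# Consume change-data-capture events from a Kafka topic
# (topic name and broker address are placeholders).
consumer = KafkaConsumer(
    "policy.cdc.events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# NoSQL sink: a MongoDB collection acting as the customer master.
store = MongoClient("mongodb://localhost:27017")["customer_master"]["policies"]

for message in consumer:
    event = message.value
    # Debezium-style envelopes carry the new row state under "after".
    record = event.get("after")
    if record:
        # Upsert by a hypothetical business key so replayed events stay idempotent.
        store.replace_one({"policy_id": record["policy_id"]}, record, upsert=True)
```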

Reliable Real-time Data

Created real-time data flows between the components, with data validation and monitoring in place to keep data reliability intact.
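
A minimal sketch of in-stream validation follows, assuming each record is a dictionary with the hypothetical fields shown. Invalid records are diverted to a dead-letter queue rather than written, and simple counters stand in for the monitoring layer.

```python
from collections import Counter

REQUIRED_FIELDS = {"policy_id", "customer_id", "premium"}  # hypothetical schema
metrics = Counter()  # stand-in for a real metrics/monitoring client

def validate(record: dict) -> bool:
    """Reject records with missing fields or impossible values."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    return record["premium"] >= 0

def process(record: dict, sink: list, dead_letter: list) -> None:
    if validate(record):
        metrics["valid"] += 1
        sink.append(record)         # forward to the downstream store
    else:
        metrics["invalid"] += 1     # surfaced on monitoring dashboards
        dead_letter.append(record)  # quarantined for inspection and replay
```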

Accelerated Data Queries

Used purpose-built data aggregation tools to make queries faster.
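
One common way to achieve this is to push aggregation down into the data store so that only summarized rows travel over the wire. The sketch below assumes the MongoDB collection from the capture step; the field names are illustrative.

```python
from pymongo import MongoClient

policies = MongoClient("mongodb://localhost:27017")["customer_master"]["policies"]

# Aggregate premiums per region inside the database instead of
# post-processing full result sets in application code.
pipeline = [
    {"$match": {"status": "active"}},
    {"$group": {"_id": "$region", "total_premium": {"$sum": "$premium"}}},
    {"$sort": {"total_premium": -1}},
]
for row in policies.aggregate(pipeline):
    print(row["_id"], row["total_premium"])
```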

Cost-effective Infrastructure

Built most of the stack on open-source tools to reduce infrastructure costs.

Dynamic Data Visualization

Combined Dash and Plotly to create real-time, interactive data visualizations with quick turnaround times.
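
The pattern can be sketched as a minimal Dash app: an interval component triggers a callback that re-renders a Plotly figure at a fixed cadence. The refresh rate and the plotted series are placeholders; a real deployment would query the customer master inside the callback.

```python
import plotly.graph_objects as go
from dash import Dash, Input, Output, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Policy pipeline status"),
    dcc.Graph(id="live-chart"),
    dcc.Interval(id="tick", interval=5_000),  # refresh every 5 seconds
])

@app.callback(Output("live-chart", "figure"), Input("tick", "n_intervals"))
def refresh(n_intervals):
    # Placeholder series: plot the tick counter instead of live data.
    xs = list(range((n_intervals or 0) + 1))
    return go.Figure(go.Scatter(x=xs, y=[i * i for i in xs], mode="lines"))

if __name__ == "__main__":
    app.run(debug=True)
```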

Unified Processes

Streamlined disparate processes into a single agile, consistent, accurate, and user-friendly workflow.

Value Delivered

The solution was rolled out across six Southeast Asian countries and helped migrate 10 million policies. The customer master (data lake) maintains all cumulative customer data, which is used by operational applications as well as for reporting and analytics.

Business teams gained valuable insight into the pipeline status of policies.

Reporting improved, and ad-hoc MIS reports could be created on demand.

Provided a single source of truth for data scientists, MIS teams, business teams, and point-of-sale systems through well-designed, efficient APIs.
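
As one hedged illustration of such an API (not the client’s actual interface), a small Flask service in front of the customer master could expose the same read path to every consumer; the route and payload shape are hypothetical.

```python
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
policies = MongoClient("mongodb://localhost:27017")["customer_master"]["policies"]

@app.route("/customers/<customer_id>/policies")
def customer_policies(customer_id):
    # Analytics, MIS, business, and point-of-sale consumers all read
    # through the same endpoint, so every team sees identical data.
    docs = policies.find({"customer_id": customer_id}, {"_id": 0})
    return jsonify(list(docs))
```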

Created real-time alerts on data anomalies to monitor data quality accurately, reducing the error rate in the database.
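
One simple anomaly rule of this kind, assuming per-minute record counts as the monitored signal, is a z-score check against recent history; the threshold and alert channel below are illustrative.

```python
import statistics

def volume_anomaly(history: list, latest: int, z_threshold: float = 3.0) -> bool:
    """Flag the latest per-minute record count if it deviates sharply
    from the recent history (a basic z-score rule)."""
    if len(history) < 10:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(latest - mean) / stdev > z_threshold

# Example: a sudden drop in inserts triggers an alert.
history = [100, 98, 103, 101, 99, 102, 97, 100, 104, 101]
if volume_anomaly(history, latest=12):
    print("ALERT: record volume anomaly detected")
```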

Removed inefficiencies caused by manual, laborious, one-off, and inaccurate departmental efforts.