Pango Case Study: Becoming a Data-Driven Company

Pango decided to become a data-driven company. The solution it sought had to meet both everyday and strategic needs.

1. The Customer

Pango provides payment solutions for parking and other driver- and vehicle-related services. The company continues to innovate, lead, and make parking, vehicle, and road services more accessible to drivers through advanced technology, while also providing value-added features and an excellent user experience.

Pango's Pay-by-Phone technology provides a smarter, easier way to park and pay. Drivers no longer have to look for change, deal with parking meters, or worry about a costly parking ticket. With Pango, you can park and pay by phone quickly and easily using your iPhone, Android, or any other cell phone, and customize your Pango account to suit your needs.

Pango is used in more than 350 cities and private parking lots, with millions of Pay-by-Phone transactions per month and more than 2,200,000 highly satisfied customers. The Pango app has become synonymous with driver and vehicle solutions, and it is one of the most widely used applications in Israel in terms of both downloads and frequency of use.

Their advanced parking services put an end to fines and save time and money by locating parking spaces, enabling quick entry into parking lots, and terminating parking sessions automatically. The app also offers a variety of convenient vehicle services, including flat-tire service, discounted car washes, and car insurance at attractive prices, as well as roadside assistance, a direct dial for police assistance, and payment for travel on toll roads.


2. Problem Statement/Definition

Pango had decided to become a data-driven company. With over 2,000,000 private B2C clients, over 10,000 corporate clients, and more than 250 parking-lot B2B partners, it had a distinct need to deliver improved information to both end clients and partners, and to deepen the usage of Pango's services.

The solution they were looking for would have to meet both everyday and strategic needs. The key business goals they sought to achieve by leveraging their data were:

Enhance their ability to make data-driven decisions
Increase wallet share by selling additional services
Initiate personal relationships with clients
Increase B2C customer and B2B partner satisfaction
Monetize data and expand sources of income
Reduce dependency on third parties for data accessibility


3.    Proposed Solution & Architecture

3.1   General

Any Data & Analytics project must focus on where the data ends up and how it is used. Therefore, Twingo's data experts sat with the relevant stakeholders from various positions across Pango to define the mandatory reports and dashboards. From there, they drew up a Source-to-Target (S2T) document, which clearly maps the various sources of information to the targets they will serve. This document also specifies the formats and any manipulations, such as aggregations and cross-referencing, that need to be applied throughout the process. Following this, a star-schema design of the Data Warehouse was presented.
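An S2T document of this kind can be thought of as a simple mapping table: each entry records where a field comes from, where it lands in the star schema, and what transformation sits in between. The sketch below illustrates the idea only; all source, target, and transformation names are invented, not Pango's actual schema.

```python
# Illustrative Source-to-Target (S2T) mapping. Every name here is a
# hypothetical example, not the real Pango schema.
S2T_MAPPING = [
    {
        "source": "parking_app.transactions.amount",
        "target": "fact_payments.amount_ils",
        "transform": "none",
    },
    {
        "source": "parking_app.transactions.started_at",
        "target": "fact_payments.date_key",
        "transform": "timestamp -> surrogate key in dim_date",
    },
    {
        "source": "parking_app.transactions.amount",
        "target": "agg_daily_revenue.total_ils",
        "transform": "SUM grouped by city and day",
    },
]

def targets_for(source_field):
    """Return every target column a given source field feeds."""
    return [m["target"] for m in S2T_MAPPING if m["source"] == source_field]
```

Keeping the mapping in a structured form like this makes it easy to answer impact questions ("which reports break if this source column changes?") before any pipeline code is written.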


3.2   Infrastructure

Twingo's architecture for the Data & Analytics solution included a major overhaul of the entire flow of data. Data is ingested via SQL queries, using AWS Lambda functions with Amazon SQS for interprocess communication, and placed in an Amazon S3 bucket. Another combination of Lambda functions and Amazon SQS triggers tasks and loads the data into the Amazon Redshift data warehouse (DWH). Data also flows from the S3 bucket and from the DWH into an Amazon S3-based data lake, set up with appropriate partitioning and sorting. AWS Step Functions and Lambda functions perform data transformations and updates, and additional Lambda functions extract the update states of the data.
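One leg of this flow, a Lambda handler that reacts to an SQS message and loads a newly landed S3 object into Redshift, might look like the minimal sketch below. The cluster, table, role, and queue-message fields are placeholders; the actual configuration is not public, and the real pipeline may load data differently.

```python
import json

# Placeholder names -- the real cluster, table, and IAM role are not public.
REDSHIFT_CLUSTER = "pango-dwh"                # hypothetical cluster identifier
TARGET_TABLE = "staging.transactions"         # hypothetical staging table
COPY_ROLE_ARN = "arn:aws:iam::123456789012:role/redshift-copy-role"

def build_copy_statement(bucket, key):
    """Build a Redshift COPY statement for an object that just landed in S3."""
    return (
        f"COPY {TARGET_TABLE} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{COPY_ROLE_ARN}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

def handler(event, context):
    """Lambda entry point: each SQS record names an S3 object to load."""
    import boto3  # AWS SDK, available in the Lambda runtime
    client = boto3.client("redshift-data")
    for record in event["Records"]:
        body = json.loads(record["body"])
        sql = build_copy_statement(body["bucket"], body["key"])
        # The Redshift Data API runs the statement asynchronously, so the
        # Lambda does not need to hold a database connection open.
        client.execute_statement(
            ClusterIdentifier=REDSHIFT_CLUSTER,
            Database="analytics",
            DbUser="loader",
            Sql=sql,
        )
```

Pairing Lambda with SQS in this way decouples ingestion from loading: a burst of incoming files simply queues up, and failed loads can be retried from the queue rather than re-extracted from the source.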

3.3   Creating BI

Part of the process required a paradigm shift to become customer-centric. For this purpose, the new information infrastructure had to replace the company’s existing reporting. The BI & Analytics system would:

Enhance querying capabilities and data analysis for advanced functional information sharing in a dynamic and flexible research environment.
Remove dependency on the technical and R&D departments by providing a drag-and-drop research environment for users.
Automate processes to save manual data-collection time and let users focus on data analysis rather than on collection.
Build a user-defined Business Alerts array that enables stakeholders and end users to set up KPIs and measure their success.
Provide "historical depth" that allows trends to be analyzed and customer stories to be created.
Allow the creation of reports required further down the road.


3.4    AWS Data & Analytics tools and services utilized:

Amazon S3 buckets for storage: chosen because the service easily handles the volumes of data ingested, scales well, and offers high levels of security and performance.
Amazon SQS for handling transfers and logging between processing steps.
Amazon RDS management tools and Amazon CloudWatch for database administration and monitoring.
Amazon Redshift for the DWH: this solution proved its worth in handling the heavy workloads, both in performance and in scale.
Amazon Athena for querying the data on top of Pango's data lake.
AWS Glue Data Catalog as the metadata store for Amazon Athena.
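Tying the last two services together: an Athena query runs against the S3 data lake, with the Glue catalog supplying the table metadata. The sketch below shows the shape of such a call; the database, table, column, and output-bucket names are placeholders, not Pango's real data lake.

```python
import time

def monthly_usage_sql(year, month):
    """Build a partition-pruned query over a hypothetical usage table.

    Filtering on the partition columns (year, month) lets Athena scan
    only the relevant S3 prefixes, which is why the lake's partitioning
    scheme matters for both cost and speed.
    """
    return (
        "SELECT city, COUNT(*) AS parkings "
        "FROM parking_events "
        f"WHERE year = {year} AND month = {month} "
        "GROUP BY city"
    )

def run_athena_query(sql, database="pango_lake",
                     output="s3://query-results-bucket/"):
    """Start an Athena query and poll until it reaches a terminal state."""
    import boto3  # AWS SDK
    athena = boto3.client("athena")
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},  # Glue catalog database
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state, qid
        time.sleep(1)
```

Because Athena is serverless and reads S3 directly, this kind of ad hoc query needs no cluster of its own, complementing Redshift, which handles the heavy, recurring aggregate workloads.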

4.    Benefits

Data Driven: The primary and most important outcome was a single, unified solution that provides the infrastructure for Pango to become a data-driven decision-making company.
Aggregated Queries: Redshift provided exactly this capability, the very reason the client needed the solution, replacing the slow or unfeasible approaches they had tried before.
Data Freshness: By querying Redshift online, data latency dropped from 24 hours via the replica server to 3 hours, with the ability to query aggregated production data in near real time.
Data Insights: The data is no longer owned by third parties; it now sits in the Data & Analytics framework established on AWS for Pango alone. Moving the data infrastructure from third parties to AWS has given Pango insights it did not have before.
Fast Insights: Whether for B2B partners or B2C customers, all the information, from most recently used parking lots to monthly trends of service usage and payments, is now available with one click.
