
Bengaluru, 8th August 2025: In a world where tech stacks grow increasingly complex, the smartest solutions often come from simplifying what already exists. This conversation dives into the architectural principles behind future-ready data platforms, explores high-impact AI use cases in retail investing, and showcases the power of cross-functional problem-solving in driving efficiency and innovation.

Join Mr. Sunny Shah, Head of Engineering at INDmoney, in an engaging discussion with Mr. Marquis Fernandes, who leads the India Business at Quantic India. They explore Mr. Shah’s engineering philosophy, the mindset behind impactful decision-making, and his passion for building purposeful solutions beyond the workplace.

With AI adoption growing rapidly in financial services, what are some underexplored but high-impact use cases that excite you for the next three years?

Democratizing Advanced Investment Strategies: AI could bring sophisticated, professional-level investing tools to everyday retail investors. For instance, hedging against stock downturns by purchasing put options is a common tactic among experts, but most retail investors avoid it due to a lack of knowledge and skills. AI-powered investment assistants could provide real-time, personalized guidance, analyzing market conditions, user portfolios, and risk tolerance to recommend and even automate such strategies. This democratization could level the playing field, reducing barriers for non-professionals and potentially increasing overall market participation.
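To make the hedge concrete, here is a minimal sketch of the protective-put payoff described above: holding a share plus a put option caps the downside near the strike price (less the premium paid). The numbers are illustrative only, not a recommendation.

```python
def protective_put_value(stock_price: float, strike: float, premium: float) -> float:
    """Value at expiry of one share hedged with one put option."""
    put_payoff = max(strike - stock_price, 0.0)  # put pays off if the stock falls below the strike
    return stock_price + put_payoff - premium

# Stock bought at 100, hedged with a 95-strike put costing 3:
# the position is worth at least 92 no matter how far the stock falls.
for price in (70, 95, 120):
    print(price, protective_put_value(price, strike=95, premium=3))
```

Whatever happens below the strike, the hedged position floors out at 92 (strike minus premium), which is exactly the insurance-like behavior professionals use puts for.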

AI-Assisted Goal-Based Investing: Goal-based investing, aligning portfolios with specific life objectives like retirement, education, or home purchases, is widely recognized as a superior strategy, yet it often requires costly financial planners, deterring most retail investors. AI could soon bridge this gap by serving as an affordable, intelligent advisor that maps out customized plans based on user inputs, market forecasts, and ongoing adjustments. For example, it might simulate scenarios, optimize asset allocation across timelines, and automate rebalancing while factoring in personal risk profiles and economic shifts, empowering everyday users to achieve their aspirations without elite expertise.
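A core building block of such an advisor is mapping a goal to a required contribution. Below is a minimal sketch of that step, assuming a fixed monthly contribution and a single assumed annual return (the standard future-value-of-an-annuity formula); a real AI advisor would layer risk profiling, scenario simulation, and rebalancing on top.

```python
def monthly_contribution(goal: float, years: int, annual_return: float) -> float:
    """Monthly amount needed to reach `goal` in `years`, assuming a
    constant annual return compounded monthly (ordinary annuity)."""
    r = annual_return / 12      # monthly rate
    n = years * 12              # number of monthly contributions
    return goal * r / ((1 + r) ** n - 1)

# e.g. a 1,000,000 education goal in 10 years at an assumed 10% p.a.
print(round(monthly_contribution(1_000_000, 10, 0.10), 2))
```

The same function immediately shows why starting early matters: doubling the horizon more than halves the required monthly amount, a trade-off an AI advisor can surface interactively.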

In your experience, what are the top 3 architectural principles to ensure that a data platform remains scalable and future-proof as organizational needs evolve?

  1. Simplicity: Have as few moving parts as possible. The shelf life of simple engineering is very high, as it’s easy to maintain and extend for future architects and engineers. Usually, a scalable data platform consists of the following components:

    1. Source database (MySQL, PostgreSQL, or MongoDB)
    2. Debezium or any other CDC tool
    3. Kafka
    4. Spark job to read from Kafka and write to OLAP database
    5. OLAP database (Redshift, Snowflake, or ClickHouse)
    6. Airflow to schedule Spark jobs
    7. Kubernetes or YARN to run Spark jobs

Usually, these seven components are enough to build a reliable and scalable data platform. To simplify this further, we can rely on the native ability of today’s OLAP databases to ingest data directly from Kafka. With this small change, we can remove three components from the data platform: the Spark job that moves data from Kafka to the OLAP database, the Airflow scheduler for that job, and the Kubernetes or YARN cluster that runs it.
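As an illustration of this simplification, here is what the ClickHouse-native ingestion path can look like: a Kafka table engine consumes the topic, and a materialized view continuously moves rows into a MergeTree table, replacing the Spark + Airflow + Kubernetes trio. The table, topic, and column names below are hypothetical; the Kafka engine and materialized-view pattern are standard ClickHouse features. The DDL is shown as Python string constants for illustration.

```python
# Hypothetical ClickHouse DDL for Kafka-native ingestion.

KAFKA_SOURCE_DDL = """
CREATE TABLE orders_queue (
    order_id UInt64,
    amount   Float64,
    ts       DateTime
) ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list  = 'orders',
         kafka_group_name  = 'clickhouse_orders',
         kafka_format      = 'JSONEachRow';
"""

TARGET_DDL = """
CREATE TABLE orders (
    order_id UInt64,
    amount   Float64,
    ts       DateTime
) ENGINE = MergeTree
ORDER BY (ts, order_id);
"""

# The materialized view plays the role of the Spark job: it continuously
# copies rows from the Kafka consumer table into the MergeTree target.
INGEST_MV_DDL = """
CREATE MATERIALIZED VIEW orders_ingest TO orders AS
SELECT order_id, amount, ts FROM orders_queue;
"""

for ddl in (KAFKA_SOURCE_DDL, TARGET_DDL, INGEST_MV_DDL):
    print(ddl)
```

Three statements executed once replace a continuously running Spark job, its schedule, and the cluster underneath it.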

Apart from this, I usually avoid ETL in favor of ELT. In plain English, rather than building Spark or Python jobs to create flat or derived tables, I prefer doing the entire transformation in the OLAP database (Redshift, Snowflake, or ClickHouse) itself, which further reduces the moving parts. If the ELT jobs are simple, I rely on auto-refreshing materialized views: incremental materialized views update themselves as data arrives, so we don’t have to schedule ELT jobs to run at fixed intervals.
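To illustrate this ELT style, here is one way such an auto-incremental transformation can be expressed in ClickHouse: a materialized view backed by SummingMergeTree keeps a daily rollup current as new rows land, with no scheduler involved. The names are illustrative (they assume a hypothetical `orders` table), and the DDL is shown as a Python string constant.

```python
# Hypothetical ClickHouse DDL: a self-updating daily aggregate.
# SummingMergeTree collapses rows with the same `day` key into a running
# sum at merge time, so the rollup stays fresh without any scheduled job.

DAILY_ROLLUP_DDL = """
CREATE MATERIALIZED VIEW daily_order_totals
ENGINE = SummingMergeTree
ORDER BY day
AS
SELECT toDate(ts) AS day, sum(amount) AS total_amount
FROM orders
GROUP BY day;
"""

print(DAILY_ROLLUP_DDL)
```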

One more example of simplicity in deployments: maintaining a single high-performance 24-core ClickHouse server is much easier than managing a ClickHouse cluster of three machines with 8 cores each.

  2. Observability: A data platform is only effective if it’s consistently accurate and reliable. We can only build reliable platforms if we obsessively measure all inaccuracies and sync delays. We also put accuracy metrics and targets in the OKRs of the developers building and maintaining the data platform. This obsessive focus on reliability gives platform users tremendous confidence in it.
  3. Continuous Improvement: It’s easy to stop iterating once we have a reliable, stable, and functional platform, but over time this approach results in obsolete platforms. For example, at INDmoney, we moved from Spark on Hadoop-YARN to Spark on Kubernetes in early 2021. This made our Spark jobs more scalable and simpler to operate, as DevOps teams are usually far more comfortable with Kubernetes than with Hadoop-YARN. We recently moved our critical dashboards from Redshift to ClickHouse, which gave us real-time dashboards rather than ones that update every hour.

You emphasize “programmatic problem solving.” Can you share a real-life example where combining data engineering, data science, and domain expertise led to a breakthrough solution?

In 2022 at INDmoney, we halved our AWS costs by integrating data engineering, data science, and backend expertise to address soaring network outbound (egress) and NAT Gateway charges.

  • Data Engineering: Enabled VPC flow logs to capture real-time traffic data, stored in S3 for querying via AWS Athena.
  • Data Science: Analyzed terabytes of logs with custom queries to pinpoint anomalies, identifying a backend app causing excessive transfers from inefficient API calls.
  • Domain Expertise: Code review revealed a bug creating redundant loops, inflating costs 100x. Fixes via caching and compression cut costs immediately while boosting performance.

This programmatic approach transformed a vague issue into a targeted fix, highlighting the power of cross-disciplinary collaboration.
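The Athena step above can be sketched as a "top talkers" query: rank source/destination pairs in the flow logs by total bytes to find the traffic driving egress and NAT Gateway charges. The table name is hypothetical and the columns assume the standard VPC Flow Logs fields (`srcaddr`, `dstaddr`, `bytes`, `action`); the query is shown as a Python string constant.

```python
# Hypothetical Athena query over VPC Flow Logs stored in S3:
# rank flows by total bytes transferred to surface the top talkers.

TOP_TALKERS_SQL = """
SELECT srcaddr,
       dstaddr,
       sum(bytes) AS total_bytes
FROM vpc_flow_logs
WHERE action = 'ACCEPT'
GROUP BY srcaddr, dstaddr
ORDER BY total_bytes DESC
LIMIT 20;
"""

print(TOP_TALKERS_SQL)
```

In an investigation like the one described, a query of this shape is what turns "costs are up" into "this one backend app is responsible," after which a code review can find the offending loop.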

Your work often involves finding hidden patterns in data. If you weren’t in tech, do you think you’d make a good detective, and what kind of cases would you solve?

Well, I consider myself a bit of a maker. In addition to building software, I handle most of the electrical, plumbing, and carpentry work at my home.

If budget and deadlines didn’t matter, what passion project would you build just for the joy of creating?

My favorite pet project would be a fully robotic kitchen that can produce high-quality staple Indian foods, such as roti-sabzi and idli-dosa, and democratize access to nutritious and hygienic meals for every Indian.

What’s the single most important skill for aspiring DevOps/Security/Engineering leaders?

First principles thinking.

To know more about us or to publish your article, reach us at
www.quanticindia.com
marquis@quanticindia.com
