Let me introduce Serter Solak, an inspiring professional in Data Science, Business Intelligence, and Big Data. He brings more than a decade of experience in progressively challenging roles in the banking industry. In April 2024, he was appointed Director of DWH, BI & Big Data Application Development at Yapı Kredi Teknoloji.
Serter combines deep technical expertise with experience managing diverse teams and projects. In recent years, he has spearheaded initiatives that revolutionised data management practices at companies such as TEB, KKB Kredi Kayıt Bürosu, Vodafone, and Yapı Kredi. We’ve invited Serter to talk about how data analysis is evolving and the new challenges and discoveries it brings.
- Serter, can you tell us a few words about your current position? What are the most exciting projects you are engaged in right now?
I am the Director of the data-related development teams at Yapı Kredi Teknoloji. We are developing an LLM-based reporting environment. For example, at Yapı Kredi Teknoloji we use MS Teams for messaging and online meetings, and we want to implement an add-on for answering questions and producing reports online. To achieve this, we are training our data model with an LLM.
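To make the idea concrete, here is a minimal, hypothetical sketch of such a question-to-answer flow, assuming an OpenAI-compatible API and a toy SQLite schema; the actual model, schema, and Teams wiring are not public:

```python
# Minimal sketch of an LLM-backed reporting Q&A flow: translate a natural-language
# question into SQL against a published schema, run it, and summarise the result.
# The model name, schema, and wiring are illustrative, not Yapı Kredi's setup.
import sqlite3
from openai import OpenAI

SCHEMA = "reports(report_id INTEGER, owner TEXT, run_date TEXT, duration_sec REAL)"
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str, conn: sqlite3.Connection) -> str:
    # Ask the model for a single read-only SQL query over the schema.
    sql = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Return one SQLite SELECT statement only, no formatting. "
                        f"Schema: {SCHEMA}"},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content.strip().strip("`")
    rows = conn.execute(sql).fetchall()  # a real add-on would validate the SQL first
    # Let the model phrase the raw rows as a short answer for the chat channel.
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Question: {question}\nRows: {rows}\nAnswer briefly."}],
    ).choices[0].message.content
```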
- You have extensive experience in technical roles, such as Senior Data Architect, and managerial positions, like Director of DWH, BI & Big Data Application Development. How would you describe your professional identity? Do you see yourself primarily as a “tech guy” who enjoys hands-on involvement in technical projects, or do you lean more towards being a manager who excels in strategic planning, team leadership, and driving organisational objectives?
My professional identity combines technical expertise with strategic management, with a stronger emphasis on the technical side. At least this is how I see myself.
When it comes to technical issues, these are the most crucial aspects for me:
First of all, it’s hands-on involvement and technical expertise. I deeply enjoy working on technical projects. With a background in data architecture and application development, I relish tackling complex technical challenges, designing innovative solutions, and staying updated with the latest technologies. I thrive in environments where I can apply my technical knowledge to solve real-world problems, optimise systems, and ensure the robustness and scalability of data platforms.
I am also committed to constant learning and improvement, always exploring new technologies, methodologies, and best practices in data warehousing, business intelligence, and big data.
In terms of strategic management, I try to foster a collaborative environment where team members can grow and excel. I believe in empowering my team and promoting a culture of innovation and accountability. My experience as a director has sharpened my skills in managing diverse teams and aligning their efforts with our goals.
Besides, I am adept at strategic planning, setting clear objectives, and developing comprehensive roadmaps to achieve business goals. I focus on aligning technology initiatives with business strategy, ensuring that technical solutions drive organisational success. My role often involves balancing short-term tactical decisions with long-term strategic vision, so that the company remains agile and responsive to changing market demands.
One of my key strengths is bridging the gap between technical teams and business stakeholders. I can translate complex technical concepts into actionable business insights and vice versa. What is more, by combining my technical expertise with strategic management skills, I can drive innovation while ensuring operational efficiency. This dual focus helps in creating smart solutions.
- In your experience, what tools do you find most effective for data analysis and presentation? Could you recommend any particular software that you believe is essential for professionals in data management and analytics?
From my experience, the effectiveness of tools for data analysis and presentation depends on the specific requirements of the project, the scale of the data, and the technical proficiency of the team.
For data analysis, I recommend beginning with SQL, Python, R, and Excel, as these tools are essential. For data presentation, it is worth trying out Tableau, Power BI, and QlikView, which are commonly used.
Data management often involves Apache Hadoop for distributed storage and processing of large datasets, as it is a foundational technology for big data ecosystems. Apache Spark is the usual choice for large-scale data processing and analytics thanks to its fast in-memory processing and its support for various programming languages and libraries.
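As a quick illustration of the kind of workload Spark handles well, here is a minimal PySpark sketch of a distributed aggregation; the file paths and column names are invented for the example:

```python
# Minimal PySpark sketch: a large-scale aggregation that Spark executes
# in memory across a cluster. Paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-volumes").getOrCreate()

txns = spark.read.parquet("hdfs:///data/transactions")  # hypothetical dataset
daily = (
    txns.groupBy("txn_date", "channel")
        .agg(F.count("*").alias("txn_count"),
             F.sum("amount").alias("total_amount"))
        .orderBy("txn_date")
)
daily.write.mode("overwrite").parquet("hdfs:///marts/daily_volumes")
```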
For data integration and ETL processes, we use Apache NiFi, Oracle Data Integrator, Informatica, DataStage, and Apache Airflow. Additionally, Jupyter Notebooks, Git, and Docker are valuable tools across many aspects of data science.
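For instance, a minimal Apache Airflow DAG chaining extract, transform, and load steps might look like the sketch below; the task bodies and schedule are placeholders, and the syntax assumes a recent Airflow 2.x release:

```python
# Minimal Apache Airflow sketch of an extract -> transform -> load chain.
# Task bodies and the schedule are stubs, not a production pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # pull source data (stubbed)
    print("extracting...")

def transform():  # apply business rules (stubbed)
    print("transforming...")

def load():       # publish to the warehouse (stubbed)
    print("loading...")

with DAG(
    dag_id="etl_daily_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run the steps strictly in sequence
```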
However, we should always bear in mind that the choice of tools depends on the specific needs of the project, the team’s expertise, and the existing tech stack within the company.
- Given the increasing presence of artificial intelligence and machine learning in data-related tasks, how frequently does your team employ AI technologies in data analysis and decision-making? Maybe you could share some specific cases where AI has made a significant impact on the outcomes of your initiatives.
Yes, that’s right, AI and machine learning technologies play a significant role in our data analysis and decision-making processes. My team frequently employs these technologies to enhance the accuracy, efficiency, and depth of our data insights. Let me share a couple of cases where AI has made a notable impact:
Predictive Analytics for Customer Churn
Problem: We needed to identify customers who were likely to churn so that we could implement proactive retention strategies.
Solution: We developed a predictive model using machine learning algorithms (logistic regression, random forests, and gradient boosting) to analyse customer behaviour data, including transaction history, service usage patterns, and customer support interactions; a minimal sketch of this approach follows the impact summary below.
Impact:
- Accuracy: The model achieved high predictive accuracy, identifying potential churners with over 85% precision.
- Proactive Retention: By targeting these high-risk customers with tailored retention campaigns, we reduced churn rates by 20% in the first six months.
- Cost Efficiency: Focused retention efforts led to significant cost savings compared to blanket marketing strategies.
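A minimal sketch of such a churn model, using scikit-learn’s gradient boosting on synthetic stand-in features (the bank’s real feature set and tuning are not public), could look like this:

```python
# Sketch of a churn classifier: gradient boosting over behavioural features.
# Features and labels are synthetic stand-ins generated for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.poisson(20, n),        # monthly transaction count
    rng.exponential(3, n),     # months since last product purchase
    rng.integers(0, 5, n),     # support tickets in the last quarter
])
# Synthetic label: churn risk grows with inactivity and complaints.
y = (0.2 * X[:, 1] + 0.3 * X[:, 2] - 0.05 * X[:, 0]
     + rng.normal(0, 1, n)) > 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("precision:", precision_score(y_te, model.predict(X_te)))
```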
Fraud Detection
Problem: I work in the financial sector, so identifying and preventing fraudulent transactions was a critical task for the company: we wanted to minimise losses and protect customers.
Solution: We deployed a machine learning-based fraud detection system using algorithms such as anomaly detection, clustering, and supervised learning techniques (e.g., decision trees, SVMs). The system analysed transaction patterns, user behaviour, and other relevant features in real time; a sketch of the anomaly-detection side follows the impact summary below.
Impact:
- Enhanced Detection: The model significantly improved fraud detection rates, identifying fraudulent transactions with over 90% accuracy.
- Real-Time Monitoring: Implementing real-time monitoring and alerting reduced the time to detect and respond to fraudulent activities.
- Customer Trust: Increased security measures helped build customer trust and loyalty.
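The unsupervised side of such a system can be sketched with an IsolationForest that flags transactions far from normal behaviour; the features, data, and contamination rate below are illustrative assumptions:

```python
# Sketch of unsupervised fraud flagging with an IsolationForest.
# The two features (amount, hour of day) and the data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[50, 12], scale=[20, 4], size=(2_000, 2))  # typical txns
fraud = rng.normal(loc=[900, 3], scale=[200, 1], size=(20, 2))     # odd amount/time
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.01, random_state=1).fit(X)
flags = detector.predict(X)  # -1 marks suspected anomalies
print("flagged transactions:", int((flags == -1).sum()))
```

In production, flags like these would typically feed a real-time alerting pipeline rather than a batch script.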
- Could you share a particularly challenging project you’ve led or been involved in during your career? How did you approach it?
The data warehouse modernisation project at Yapı Kredi Bank was the most challenging project of my career. Around 12,000 unique users were working on a 240 TB Sybase IQ database, more than 10,000 data processing jobs were running, roughly 15,000 reports were already live in the reporting inventory, and new task hours kept being added on top.
My involvement in the project was at the data engineering level. First of all, we handled the data architecture. Alongside that, we defined the data governance, which took about 10 months. Within those 10 months, we also proceeded with the installation of the data engineering environment in parallel: we set up Oracle Exadata and the Informatica data processing applications and defined the governance for the software engineers working in this process.
Then, we implemented each step of the software development cycle by integrating business units into our teams. It’s easy to tell this story in just a few lines, but achieving this took us nearly six years. In our new environment, we established 13,000 ETL jobs and developed 25 Business Objects Universes.
Along with a completely self-service reporting environment, we also covered data visualisation with Power BI technology. We are currently working on a huge data stack of 380 TB, where 14,000 users receive support and, unlike the old system, the analytical teams have their own working areas. Our daily batches finish between 5 and 6 in the morning, so the data is ready for our report users before the day starts.
- That is impressive! Now, let us switch to a slightly different topic. Data security and compliance are of extreme importance, especially in the banking sector. How do you ensure that data management practices adhere to regulatory requirements and industry standards?
That is a good question to ask. In data security, we are moving forward with on-the-fly masking. Authorisation is based on user clusters: when users querying the reporting and sandbox environments reach outside the areas they need to see, we hide or block those fields using encrypted display options.
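As a rough illustration of the idea (the cluster names, entitlements, and hashing choice here are assumptions; in a real bank this is typically enforced inside the database layer), on-the-fly masking by user cluster might look like:

```python
# Sketch of on-the-fly masking driven by user clusters: columns outside a
# cluster's entitlement are masked before the result row is returned.
# Entitlements and the hash-based masking are illustrative assumptions.
import hashlib

ENTITLEMENTS = {
    "risk_analyst": {"customer_id", "balance"},
    "marketing":    {"segment"},
}

def mask_row(row: dict, cluster: str) -> dict:
    allowed = ENTITLEMENTS.get(cluster, set())
    return {
        col: val if col in allowed
        else "***" + hashlib.sha256(str(val).encode()).hexdigest()[:8]
        for col, val in row.items()
    }

row = {"customer_id": 1042, "balance": 1800.0, "segment": "retail"}
print(mask_row(row, "marketing"))  # customer_id and balance come back masked
```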
- Finally, what excites you the most about the future of data management and analytics, what changes should we expect and get ready for?
What excites me most is that, in my opinion, the use of quality data in AI and ML will bring significant improvements to all processes.
