Based in the Group Customer Analytics & Decisioning team, you will be responsible for developing and optimizing Large Corporate Banking data pipelines and architecture. These will support analytical needs around topics such as Anti-Money Laundering, Customer Limits Management, and Contact Management.
Create and maintain the optimal data pipeline architecture to enable ingestion from a wide variety of structured and unstructured data sources.
Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery, cleansing, and transformation.
Support the development and deployment of applications that utilise the data pipelines to provide actionable insights into trade-based money laundering, customer analytics, and dashboarding.
Minimum of 12 years of working experience in data management, ETL, and analysis functions
Understanding of banking with exposure to Large Corporate Banking is preferred
Strong hands-on skills in SQL & PL/SQL
Solid background in traditional structured database environments such as Teradata or Oracle
Exposure to data management and analysis functions
Knowledge of data warehousing and FSLDM (Financial Services Logical Data Model) concepts
Exposure to the Hadoop ecosystem (e.g., Hive, Sqoop) and streaming data systems
Experience in end-to-end automation, covering procedure development, ETL, and automated job scheduling, using tools such as DataStage, Talend, Kafka, Airflow, and Pentaho
Ideally, some knowledge of analytical software tools such as Python, R, SAS, QlikView, Tableau, or Spark
Energetic personality with an innovative, self-starting spirit. Someone who likes to ask "why?"