- Join a market leader within the Financial Services Industry.
- Work alongside & learn from best-in-class talent.
- Opportunity within a company with a solid track record of performance.
Our Client is a leading multinational banking organisation, headquartered in Singapore, with a global network of more than 450 branches and offices across Asia Pacific, Europe and North America.
The company's core business is commercial and corporate banking services, personal financial services, private banking and asset management services, as well as corporate finance, venture capital, investment, and insurance services.
You will be responsible for:
- Developing scripts to process structured and unstructured data.
- Recommending, developing and implementing ways to improve data reliability, efficiency and quality.
- Supporting translation of data business needs into technical system requirements.
- Working with stakeholders to understand their needs with respect to data structure, availability, scalability and accessibility.
- Defining, developing and maintaining reports to support decision making.
- Processing and interpreting data to derive actionable insights.
- Working closely with business users to understand their data analysis needs/requirements.
- Contributing to and driving continuous process improvement initiatives to meet business needs.
- Establishing project plans, resources, budgets and time-frames, and assigning tasks.
- Gathering, analysing, defining and formalising business requirements and processes into project/system specifications.
- Identifying, tracking and communicating progress, milestones, deliverables, risks and issues.
- Managing vendor relationships and deliverables.
- Preparing project feasibility studies, cost-benefit analyses and proposals, and obtaining required approvals from IT management and project sponsors.
- Designing and setting up Hadoop clusters in line with current and future needs.
- Developing on and troubleshooting Hadoop technologies.
- Extracting, transforming, loading and integrating data from various sources.
- Monitoring the Hadoop cluster for performance and capacity planning.
- Providing expertise on the various components & features of the Hadoop ecosystem (such as Spark, MapReduce, YARN, Hive, Pig, Impala/Drill, etc.).
- Identifying trends, conducting follow-up analysis and preparing visualisations.
- Implementing real-time analytics use-cases on the Hadoop ecosystem.
- You possess a degree in Computer Science, Applied Mathematics, Engineering or related field.
- You have at least 8 years' experience, ideally in a Business Analyst (Data Science), Data Analyst or Data Engineer role.
- Experience with Big Data, data design, BI reporting & tools, data visualisation tools (Tableau, QlikView, Hue, etc.), the Hadoop ecosystem (Hadoop, Spark, MapReduce, YARN, Hive, Pig, Impala/Drill, etc.) and Extraction, Transformation & Load (ETL) would be a strong advantage.
- You have strong experience working with large, complex data sets and analysing high volumes of data.
- You have strong interpersonal and communication skills and are adept at working with multiple stakeholders to drive desired outcomes.
- You have good presentation and communication skills and the ability to present your findings clearly and accessibly in reports and presentations to senior colleagues.
- You are highly goal-driven and work well in fast-paced environments.