Dell Senior Data Scientist in Seoul, South Korea
Senior Data Scientist at Pivotal Software
PIVOTAL is setting the pace in the database/data warehousing space. Building the world's largest data warehouses, PIVOTAL is committed to pioneering massively parallel data-intensive analytic processing. We are looking for talented individuals to join our analytics team, which is developing a fundamentally new approach to how businesses generate meaning, and then value, from data. This team will develop new methodologies and tools to enable advanced modeling and statistical analysis against petabyte-scale data sets. We are growing and providing leading-edge solutions to major companies in the industry.

Our Data Scientists will work with prospects and customers to prove the analytical capabilities of PIVOTAL's technology in generating business insights from big data. They will seek to understand our customers' most urgent questions, and use statistical methods and models to provide answers, either as part of pre-sales proofs-of-concept or as a paid engagement. The Data Scientists will develop new practices and methodologies for working with the PIVOTAL data technologies, often working closely with leading academics and industry experts. They will also work with engineers to create new tools and features that support sophisticated analytics within the database.
PRINCIPAL DUTIES AND RESPONSIBILITIES
Lead and deliver short, time-bound engagements with customers to identify top opportunities and insights from their data assets
Apply a broad range of techniques and theories from statistics, machine learning, and business intelligence to deliver actionable business insights to prospects and customers based on large-scale data.
Drive creation of Pivotal software opportunities by guiding prospective customers to a top-priority use case based on their business pressures. Partner with customers while delivering Pivotal Data Science engagements to achieve customer success using big data analytics and Pivotal software.
Present the Pivotal value proposition related to analytics and develop proposals based on prospects' business pressures. Deliver half-day and full-day workshops as needed to identify Pivotal opportunities.
Work, under limited supervision, with internal and external teams to understand customers’ business problems and develop proposals to respond to those problems.
Following initial high-level guidance, perform end-to-end steps involved in model development. These include preliminary data exploration and data preparation steps, variable/algorithm selection, and model development/validation and scoring.
With guidance, develop and test algorithms' efficacy (i.e., by applying to test/sampled data and assessing accuracy/fit/predictive strength) for differing analytical use-cases.
Work with development teams to create applications based on developed statistical models
Perform occasional public presentations covering trends in big data analytics and successful uses of data science across all markets.
Deliver results and presentations in a timely manner.
Lead interaction with external customers to gather project requirements, provide status updates, and share analytical insights. Present project output to external customers, with limited assistance in presentation preparation.
Collaborate with PIVOTAL Sales teams to educate prospects and customers on PIVOTAL software offerings. Participate in pre-sales discussions by presenting on analytic service offering and technology stack.
Work with the academic and business community to contribute to research in the area of analytics on large databases.
Generate new product requirements for the PIVOTAL engineering group to enhance the analytics capabilities of the big data platform.
DESIRED SKILLS AND EXPERIENCE
Strong statistical foundation, with broad knowledge of deterministic and probabilistic statistical methods.
Broad experience across numerous statistical toolkits, including SAS, R, SPSS, MATLAB, and Mahout/MADlib.
Programming strength in a variety of languages, including Python, R, and SQL; for Python, experience with libraries such as scikit-learn and TensorFlow.
Focus on PaaS and API-first frameworks for bringing models into production.
Spoken and written fluency in English and Korean.
Deep understanding of cloud computing and strong machine learning skills.
Optional: programming strength in the following Hadoop tools: MapReduce, Pig, Hive, HBase.
Optional but desired: exposure to agile development methodologies.
Initial understanding of the nature of the data available in one vertical/horizontal market, familiarity with typical analytics projects in that market, and a higher-level awareness of business trends in the same.
Natural ability to communicate basic and complex quantitative concepts clearly.
Natural curiosity to research and identify possible quantitative solutions to common business problems.
Team-oriented and collaborative nature, while also able to work in a self-directed manner.
Innate customer orientation, with a proactive focus on collaborative problem-solving.
Solid knowledge foundation for technical concepts, including distributed computing, database architectures, business intelligence, and ETL processes.
Ability to travel for projects (2-6 weeks) as needed, but not expected to exceed 25% of work time
Education Required: Bachelor's (Non-Technical)
Experience Required: 1.5-2 years of relevant experience